# Polynomial interpolation
---
Perform polynomial interpolation of air density from the data in the following table.
$$
\begin{aligned}
& \text {Table with air density against temperature}\\
&\begin{array}{c|c}
\text{Temperature} & \text{Density} \\
{}^\circ\text{C} & \text{kg}\,\text{m}^{-3} \\
\hline
100 & 0.946 \\
150 & 0.835 \\
200 & 0.746 \\
250 & 0.675 \\
300 & 0.616 \\
400 & 0.525 \\
500 & 0.457
\end{array}
\end{aligned}
$$
```
import numpy as np
from IPython.display import display, Math
# Input the data
# Temperature
T = [100, 150, 200, 250, 300, 400, 500]
# Air density
rho = [0.946, 0.835, 0.746, 0.675, 0.616, 0.525, 0.457]
# Plot the data
import matplotlib.pyplot as plt
plt.plot(T, rho, 'g*')
plt.xlabel(r"Temperature [$^\circ\,$C]")
plt.ylabel(r"Air density [kg$\,$m$^{-3}$]")
plt.show()
```
## Part a)
Use Lagrange interpolation to calculate the air density at $350^\circ\,$C from the measured data between $300^\circ\,$C and $500^\circ\,$C.
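For reference, the Lagrange form of the interpolating polynomial through support points $(t_i, \rho_i)$ is
$$
L_i(T) = \prod_{j \ne i} \frac{T - t_j}{t_i - t_j}, \qquad \rho(T) \approx \sum_i L_i(T)\, \rho_i,
$$
which is what the loop below evaluates for the three chosen support points.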
```
# Temperature at which the air density is sought
T0 = 350
# Form Lagrange multipliers
L = []
os = 4 # Offset to get to the correct position in the data
for i in range(3):
    tmp = 1
    for j in range(3):
        if i != j:
            tmp = tmp * (T0 - T[j+os])/(T[i+os] - T[j+os])
    L.append(tmp)
# Calculate the air density at T0
rho_T0 = 0
for i in range(3):
    rho_T0 = rho_T0 + L[i] * rho[i+os]
rho_T0
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, rho_T0)))
```
*Remark:* You should try a few different values and add the value to the plot.
## Part b)
Calculate the Newton interpolation coefficients for the support points $300^\circ\,$C and $400^\circ\,$C. Add a third support point and calculate the air density at $350^\circ\,$C.
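The Newton form built below, with support points $t_0, t_1, t_2$, is
$$
\rho(T) \approx a_0 + a_1 (T - t_0) + a_2 (T - t_0)(T - t_1),
$$
with the divided-difference coefficients
$$
a_0 = \rho(t_0), \qquad a_1 = \frac{\rho(t_1) - \rho(t_0)}{t_1 - t_0}, \qquad a_2 = \frac{\dfrac{\rho(t_2) - \rho(t_1)}{t_2 - t_1} - a_1}{t_2 - t_0}.
$$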
```
# Temperature at which the air density is sought
T0 = 350
# Offset to get to the correct position in the data
os = 4
# Calculate the Newton interpolation coefficients
a = []
a.append(rho[0+os])
a.append((rho[1+os] - rho[0+os]) / (T[1+os] - T[0+os]))
# Calculate the air density at T0
rho_T0 = a[0] + a[1] * (T0 - T[0+os])
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, rho_T0)))
```
The air density at $T=350^\circ\,$C calculated by Newton interpolation with two support points is $\rho=0.5705\,$kg$\,$m$^{-3}$.
Now we add a third interpolation point.
```
# Add a third interpolation point
# Set to -1 for T=250 and to 2 for T=500
idx = -1
tmp = (rho[idx+os]- rho[1+os]) / (T[idx+os] - T[1+os])
a.append((tmp - a[1]) / (T[idx+os] - T[0+os]))
rho_T0_3rd = a[0] + a[1] * (T0 - T[0+os]) + a[2] * (T0 - T[0+os]) * (T0 - T[1+os])
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, rho_T0_3rd)))
```
When we add a third support point at $T=500^\circ\,$C we get the same result as for the case of the Lagrange interpolation in part (a), i.e. $\rho=0.5676\,$kg$\,$m$^{-3}$. This is expected because the second order polynomial through the three support points is unique.
*Remark:* When we instead add a third support point at $T=250^\circ\,$C we get $\rho=0.5660\,$kg$\,$m$^{-3}$ which is slightly different from the other two interpolations. You should try this.
## Part c)
Use the Python function numpy.interp to interpolate the air density between $300^\circ\,$C and $500^\circ\,$C. Plot the interpolated air density and the measured air densities at the three support points.
```
import numpy as np
T0 = 350
# What are the next two lines doing?
os = 4
length = 3
rho_T0_np = np.interp(T0, T[os:os+length], rho[os:os+length])
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, rho_T0_np)))
x = np.linspace(300, 500, 201)
plt.plot(T[os:os+length], rho[os:os+length], 'x', label="Measured points")
plt.plot(x, np.interp(x, T[os:os+length], rho[os:os+length]), 'r', label="Interpolation")
plt.xlabel(r"Temperature [$^\circ\,$C]")
plt.ylabel(r"Air density [kg$\,$m$^{-3}$]")
plt.legend()
plt.show()
```
The plot shows that the `numpy.interp` function performs only linear interpolation, which we could have expected from the value at $T=350^\circ\,$C or by looking at the documentation: https://numpy.org/doc/stable/reference/generated/numpy.interp.html
## Part d)
Use the Python function scipy.interpolate.interp1d to interpolate the air density between $300^\circ\,$C and $500^\circ\,$C. Plot the interpolated air density and the measured air densities at the three support points.
```
from scipy import interpolate
os = 4
length = 3
f = interpolate.interp1d(T[os:os+length], rho[os:os+length], kind="quadratic", fill_value="extrapolate")
x = np.linspace(300, 500, 201)
plt.plot(T[os:os+length], rho[os:os+length], 'x', label="Measured points")
plt.plot(x, f(x), 'r', label="Interpolation")
plt.xlabel(r"Temperature [$^\circ\,$C]")
plt.ylabel(r"Air density [kg$\,$m$^{-3}$]")
plt.legend()
plt.show()
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, f(T0))))
```
We can see that this interpolation is quadratic and produces the same interpolated value as the Lagrange interpolation in part a) and the second (three-point) Newton interpolation in part b).
## Part e)
Calculate the air density at $200^\circ\,$C from the interpolation and compare the value to the measured value. Extend the plot from part d) to $200^\circ\,$C.
```
from scipy import interpolate
# Offset
os = 2
# length
length = 5
T0 = 200
x = np.linspace(200, 500, 301)
plt.plot(T[os:os+length], rho[os:os+length], 'x', label="Measured points")
plt.plot(x, f(x), 'r', label="Interpolation")
plt.xlabel(r"Temperature [$^\circ\,$C]")
plt.ylabel(r"Air density [kg$\,$m$^{-3}$]")
plt.legend()
plt.show()
display(Math(r"\text{{The air density at }} {} ^\circ \text{{C is }}{:0.4f} \,kg\, m^{{-3}}".format(T0, f(T0))))
```
The interpolated value is $\rho_{i,200}=0.730\,$kg$\,$m$^{-3}$ compared to the measured value of $\rho_{200}=0.746\,$kg$\,$m$^{-3}$. This gives a relative error of
$$
e = \frac{0.746-0.730}{0.746} \approx 0.021
$$
In this case, the extrapolation beyond the support point interval gives a reasonably good approximation. This is due to the fact that the exact data almost follows a quadratic curve.
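The error arithmetic above can be checked directly:

```python
# Relative extrapolation error at T = 200 °C, using the rounded values quoted above
rho_true = 0.746    # measured density from the table
rho_extrap = 0.730  # extrapolated density from the quadratic fit
rel_err = (rho_true - rho_extrap) / rho_true
print(f"relative error = {rel_err:.3f}")  # → relative error = 0.021
```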
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Berat Yenilen, Utku Birkan, Arda Çınar and Özlem Salehi (<a href="http://qworld.lu.lv/index.php/qturkey/" target="_blank">QTurkey</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h1> <font color="blue"> Solutions for </font> Bernstein-Vazirani Problem </h1>
<a id="task1"></a>
### Task 1
- How many queries do you need to solve the problem classically? How many queries if you use a probabilistic algorithm?
- How many queries do you think we need to make if we are to solve the problem with a quantum computer?
<h3>Solution</h3>
Let's illustrate the solution over an example.
Let $n = 4$ and let's make the following queries to the function $f$.
\begin{align*}
f(1000) &= s_1\cdot 1 + s_2\cdot 0 + s_3\cdot 0 + s_4\cdot 0 = s_1\\
f(0100) &= s_1\cdot 0 + s_2\cdot 1 + s_3\cdot 0 + s_4\cdot 0 = s_2\\
f(0010) &= s_1\cdot 0 + s_2\cdot 0 + s_3\cdot 1 + s_4\cdot 0 = s_3\\
f(0001) &= s_1\cdot 0 + s_2\cdot 0 + s_3\cdot 0 + s_4\cdot 1 = s_4
\end{align*}
We need $n$ queries, and this is optimal for both deterministic and probabilistic classical algorithms. For further information about why classical and probabilistic algorithms cannot perform better, please refer to information theory.
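The classical query strategy above can be sketched in a few lines. This is an illustrative sketch (not part of the original notebook); the hidden string `s` is a hypothetical example.

```python
# Recover a hidden string s classically with n queries to f(x) = s . x (mod 2):
# each query 0...010...0 reveals exactly one bit of s.
s = [1, 0, 1, 1]  # hypothetical hidden string, n = 4

def f(x):
    # inner product with s, modulo 2
    return sum(si * xi for si, xi in zip(s, x)) % 2

recovered = [f([int(j == i) for j in range(len(s))]) for i in range(len(s))]
print(recovered)  # → [1, 0, 1, 1], i.e. s itself
```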
<a id="task2"></a>
### Task 2
What can we say about the $f:\{0,1\}^n \rightarrow \{0,1\}$ function if $s = 0^n$?
<h3>Solution</h3>
If $s=0^n$, then $f(x)=0$ for all $x$.
<a id="task3"></a>
### Task 3
Given an oracle function `bv_oracle()` that constructs a 6 qubit oracle circuit ($s$ has length 5) for $f$, construct a circuit that implements the algorithm described above to determine $s$.
Note that qubit 5 is the output qubit.
Run the following cell to load function `bv_oracle()`.
```
%run ../include/oracle.py
from qiskit import QuantumCircuit, execute, Aer
n=5
#Create quantum circuit
bv_circuit = QuantumCircuit(n+1, n)
#Apply X gate to last qubit
bv_circuit.x(n)
#Apply Hadamard to all qubits
bv_circuit.h(range(n+1))
#Apply oracle
bv_circuit += bv_oracle()
#Apply Hadamard to all qubits
bv_circuit.h(range(n))
#Measure the first n qubits
bv_circuit.measure(range(n), range(n))
#Draw the circuit
bv_circuit.draw(output="mpl")
job = execute(bv_circuit, Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts()
for outcome in counts:
    reverse_outcome = ''
    for i in outcome:
        reverse_outcome = i + reverse_outcome
    print(reverse_outcome, "is observed", counts[outcome], "times")
```
<a id="task4"></a>
### Task 4
Given $\textbf{s} = 0110$, implement a function that returns an oracle for the function $ f(\mathbf{x}) = \mathbf{x} \cdot \mathbf{s} $. Note that $n=4$ and you will need a circuit with 5 qubits where qubit 4 is the output qubit.
```
from qiskit import QuantumCircuit
def oracle():
    circuit = QuantumCircuit(5)
    circuit.barrier()
    circuit.cx(1, 4)
    circuit.cx(2, 4)
    circuit.barrier()
    return circuit
```
# Jupyter Bridge Basic
## Yihang Xin and Alex Pico
## 2021-04-04
# Why use Jupyter Bridge
* Users do not need to worry about dependencies and environment.
* Easily share notebook-based workflows and data sets
* Workflows can reside in the cloud, access cloud resources, and yet still use Cytoscape features.
# How Jupyter Bridge works
Jupyter-Bridge enables a workflow running on remote Jupyter to execute functions on a PC-local Cytoscape – the remote Jupyter runs the request through Jupyter-Bridge, where it is picked up by Javascript code running on the Jupyter web page in the PC-local browser, which in turn calls Cytoscape. The Cytoscape response travels the reverse route.

Jupyter-Bridge allows a remote Jupyter Notebook to communicate with a workstation-based Cytoscape as if the Notebook were running on the Cytoscape workstation. A Jupyter Notebook passes a Cytoscape call to an independent Jupyter-Bridge server where it’s picked up by the Jupyter-Bridge browser component and is passed to Cytoscape. The Cytoscape response is returned via the opposite flow. As a result, workflows can reside in the cloud, access cloud resources, and yet still leverage Cytoscape features. Jupyter-Bridge first supported py4cytoscape, and now RCy3 (the R library for communicating with Cytoscape) also supports it.
Visit the [source code of Jupyter Bridge](https://github.com/cytoscape/jupyter-bridge)
for more information.
# Prerequisites (Local machine)
## In addition to this package (py4cytoscape latest version 0.0.8), you will need:
* Latest version of Cytoscape, which can be downloaded from https://cytoscape.org/download.html. Simply follow the installation instructions on screen.
* Complete installation wizard
* Launch Cytoscape
* Install the filetransfer app from https://apps.cytoscape.org/apps/filetransfer
# Prerequisites (Cloud server)
There are a lot of cloud computing services online, such as Google Colab, Amazon EMR Notebook, Microsoft Azure, CoCalc and your own JupyterHub. You can choose your favorite one.
Here we use Google Colab to demonstrate. Click this [link](https://colab.research.google.com/notebooks/empty.ipynb) to create a new empty Python notebook in the Google colab.
<span style="color:red">Copy codes below to build connection between Jupyter notebook (cloud) and Cytoscape (local).</span>
<span style="color:red"> Make sure to run code below in the cloud!!!</span>
```
# Install and import required packages
%%capture
!pip install py4cytoscape
import IPython
import py4cytoscape as p4c
# Build connection between the cloud Jupyter notebook and local Cytoscape
browser_client_js = p4c.get_browser_client_js()
IPython.display.Javascript(browser_client_js)
```
Then, launch Cytoscape and keep it running whenever using py4cytoscape and Jupyter Bridge. Confirm that you have everything installed and running:
```
# Check connection, you should see Cytoscape version information
p4c.cytoscape_version_info()
```
Done! Now you can execute a workflow in a remote server-based Jupyter Notebook to leverage your workstation’s Cytoscape. You can also easily share notebook-based workflows and data sets.
For Jupyter Bridge workflow use case example, visit the Jupyter Bridge workflow notebook.
```
%matplotlib inline
import os.path
import pprint
import pandas as pd
from gmprocess.io.asdf.stream_workspace import StreamWorkspace
from gmprocess.io.test_utils import read_data_dir
from gmprocess.io.read import read_data
from gmprocess.streamcollection import StreamCollection
from gmprocess.processing import process_streams
from gmprocess.event import get_event_object
from gmprocess.logging import setup_logger
# Only log errors; this suppresses many warnings that are
# distracting and not important.
setup_logger(level='error')
```
## Reading Data
We currently have a few different ways of reading in data. Here we use the `read_data_dir` helper function to quickly read streams and event (i.e., origin) information from the testing data in this repository.
```
datafiles, origin = read_data_dir('geonet', 'us1000778i', '*.V1A')
```
The `read_data` call below finds the appropriate data reader for each supplied file format.
```
tstreams = []
for dfile in datafiles:
    tstreams += read_data(dfile)
```
Note that `tstreams` is just a list of StationStream objects:
```
print(type(tstreams))
print(type(tstreams[0]))
```
## gmprocess Subclasses of Obspy Classes
The `StationStream` class is a subclass of ObsPy's `Stream` class, which is effectively a list of `StationTrace` objects. The `StationTrace` class is, in turn, a subclass of ObsPy's `Trace` class. The motivation for these subclasses is primarily to enforce certain required metadata in the Trace stats dictionary (that ObsPy would generally store in their `Inventory` object).
We also have a `StreamCollection` class that is effectively a list of `StationStream` objects, and enforces some rules that are required later for processing, such as forcing all `StationTraces` in a `StationStream` to be from the same network/station. The basic constructor for the StreamCollection class takes a list of streams:
```
sc = StreamCollection(tstreams)
```
The StreamCollection print method gives the number of StationStreams and the number that have passed/failed processing checks. Since we have not done any processing, all StationStreams should pass checks.
```
print(sc)
```
More detailed information about the StreamCollection is given by the `describe` method:
```
sc.describe()
```
## Processing
Note that processing options can be controlled in a config file that is installed in the user's home directory (`~/.gmprocess/config.yml`) and that event/origin information is required for processing:
```
pprint.pprint(origin)
sc_processed = process_streams(sc, origin)
print(sc_processed)
```
Note that all checks have passed. When a stream does not pass a check, it is not deleted, but marked as failed and subsequent processing is aborted.
Processing steps are recorded according to the SEIS-PROV standard for each StationTrace. We log this information as a list of dictionaries, where each dictionary has keys `prov_id` and `prov_attributes`. This can be retrieved from each traces with the `getAllProvenance` method:
```
pprint.pprint(sc_processed[0][0].getAllProvenance())
```
## Workspace
We use the ASDF format as a 'workspace' for saving data and metadata at all stages of processing/analysis.
```
outfile = os.path.join(os.path.expanduser('~'), 'geonet_test.hdf')
if os.path.isfile(outfile):
    os.remove(outfile)
workspace = StreamWorkspace(outfile)
# create an ObsPy event object from our dictionary
event = get_event_object(origin)
# add the "raw" (GEONET actually pre-converts to gals) data
workspace.addStreams(event, sc, label='rawgeonet')
eventid = origin['id']
workspace.addStreams(event, sc_processed, label='processed')
```
## Creating and Retrieving Stream Metrics
Computation of metrics requires specifying a list of requested intensity measure types (IMTs) and intensity measure components (IMCs). Not all IMT-IMC combinations are currently supported and in those cases the code returns NaNs.
For real uses (not just demonstration) it is probably more convenient to specify these values through the config file, which allows for specifying response spectral periods and Fourier amplitude spectra periods as linear or logspaced arrays.
```
imclist = [
'greater_of_two_horizontals',
'channels',
'rotd50',
'rotd100'
]
imtlist = [
'sa1.0',
'PGA',
'pgv',
'fas2.0',
'arias'
]
workspace.setStreamMetrics(
eventid,
labels=['processed'],
imclist=imclist,
imtlist=imtlist
)
df = workspace.getMetricsTable(
eventid,
labels=['processed']
)
```
There are a lot of columns here, so we'll show them in sections:
```
pd.set_option('display.width', 1000)
print('ARIAS:')
print(df['ARIAS'])
print('\nSpectral Acceleration (1 second)')
print(df['SA(1.0)'])
print('\nFourier Amplitude Spectra (2 second)')
print(df['FAS(2.0)'])
print('\nPGA')
print(df['PGA'])
print('\nPGV')
print(df['PGV'])
print('\nStation Information:')
print(df[['STATION', 'NAME', 'LAT', 'LON', 'SOURCE', 'NETID']])
```
## Retrieving Streams
```
raw_hses = workspace.getStreams(
eventid,
stations=['hses'],
labels=['rawgeonet'])[0]
processed_hses = workspace.getStreams(
eventid,
stations=['hses'],
labels=['processed'])[0]
raw_hses.plot()
processed_hses.plot()
```
# Day 6: Bagging and gradient boosting.
This practice notebook is based on Evgeny Sokolov's awesome [materials](https://github.com/esokolov/ml-course-hse/blob/master/2020-fall/seminars/sem09-gbm-part2.ipynb) and [this notebook](https://github.com/neychev/harbour_ml2020/blob/master/day07_Gradient_boosting/07_trees_boosting_ensembling.ipynb) from Harbour ML course.
# Part 1. Bagging and gradient boosting.
Let's analyze how the performance of bagging and gradient boosting depends on the number of base learners in the ensemble.
In case of bagging, all the learners fit to different samples from the same data distribution $\mathbb{X} \times \mathbb{Y}$. Some of them may be overfitted; nevertheless, subsequent averaging of their individual predictions allows to mitigate this effect. The reason for this is the fact that for uncorrelated algorithms the variance of their composition is $N$ times lower than the individual's. In other words, it's highly unlikely that all the ensemble components would overfit to some atypical object from the training set (compared to one model). When the ensemble size $N$ becomes large enough, further addition of base learners does not increase the quality.
In boosting, each algorithm is being fit to the errors of the currently constructed composition, which allows the ensemble to gradually improve the quality of the data distribution approximation. However, the increase of ensemble size $N$ may lead to overfitting, as the addition of new models into the composition further fits the training data, and eventually may decrease the generalization ability.
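The $1/N$ variance-reduction claim for uncorrelated estimators can be illustrated with a toy simulation (a sketch added here for illustration; the numbers are arbitrary):

```python
import numpy as np

# Averaging N uncorrelated, identically distributed noisy "predictions"
# reduces the variance roughly N-fold.
rng = np.random.default_rng(0)
N = 50
trials = 10_000
preds = rng.normal(loc=0.0, scale=1.0, size=(trials, N))  # N independent estimators per trial
var_single = preds[:, 0].var()       # variance of a single estimator (close to 1)
var_mean = preds.mean(axis=1).var()  # variance of the N-average (close to 1/N)
print(var_single / var_mean)         # close to N = 50
```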
```
import matplotlib.pyplot as plt
import numpy as np
```
Firstly, let's generate a synthetic dataset.
```
X_train = np.linspace(0, 1, 100)
X_test = np.linspace(0, 1, 1000)
@np.vectorize
def target(x):
    return x > 0.5

Y_train = target(X_train) + np.random.randn(*X_train.shape) * 0.1
plt.figure(figsize=(16, 9))
plt.scatter(X_train, Y_train, s=50)
plt.grid()
plt.show()
```
Firstly, let's take bagging of decision trees algorithm.
Here, the ensemble size is being gradually increased.
Let's look at how the prediction depends on the size.
```
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
reg = BaggingRegressor(DecisionTreeRegressor(max_depth=2), warm_start=True)
plt.figure(figsize=(20, 30))
sizes = [1, 2, 5, 20, 100, 500, 1000, 2000]
for i, s in enumerate(sizes):
    reg.n_estimators = s
    reg.fit(X_train.reshape(-1, 1), Y_train)
    plt.subplot(4, 2, i + 1)
    plt.xlim([0, 1])
    plt.scatter(X_train, Y_train, s=30)
    plt.plot(X_test, reg.predict(X_test.reshape(-1, 1)), c="green", linewidth=4)
    plt.title("{} trees".format(s))
```
You can see that after a certain point the overall prediction does not change with the base learners' addition.
Now let's do the same with the gradient boosting.
```
reg = GradientBoostingRegressor(max_depth=1, learning_rate=1, warm_start=True)
plt.figure(figsize=(20, 30))
sizes = [1, 2, 5, 20, 100, 500, 1000, 2000]
for i, s in enumerate(sizes):
    reg.n_estimators = s
    reg.fit(X_train.reshape(-1, 1), Y_train)
    plt.subplot(4, 2, i + 1)
    plt.xlim([0, 1])
    plt.scatter(X_train, Y_train, s=30)
    plt.plot(X_test, reg.predict(X_test.reshape(-1, 1)), c="green", linewidth=4)
    plt.title("{} trees".format(s))
```
Gradient boosting quickly captured the true dependency, but afterwards began overfitting towards individual objects from the training set. As a result, models with big ensemble sizes were severely overfitted.
One can tackle this problem by picking a very simple base learner, or intentionally lowering the weight of subsequent algorithms in the composition:
$$a_N(x) = \sum_{n=0}^{N} \eta \gamma_n b_n(x).$$
Here, $\eta$ is the step parameter, which controls the influence of new ensemble components.
Such an approach makes training slower compared to bagging, but it makes the final model less overfitted. Still, one should keep in mind that overfitting can happen for any $\eta$ in the limit of infinite ensemble size.
```
reg = GradientBoostingRegressor(max_depth=1, learning_rate=0.1, warm_start=True)
plt.figure(figsize=(20, 30))
sizes = [1, 2, 5, 20, 100, 500, 1000, 2000]
for i, s in enumerate(sizes):
    reg.n_estimators = s
    reg.fit(X_train.reshape(-1, 1), Y_train)
    plt.subplot(4, 2, i + 1)
    plt.xlim([0, 1])
    plt.scatter(X_train, Y_train, s=30)
    plt.plot(X_test, reg.predict(X_test.reshape(-1, 1)), c="green", linewidth=4)
    plt.title("{} trees".format(s))
```
Let's look at the described phenomenon on a more realistic dataset.
```
from sklearn import datasets
from sklearn.model_selection import train_test_split
ds = datasets.load_diabetes()
X = ds.data
Y = ds.target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.5, test_size=0.5)
MAX_ESTIMATORS = 300
gbclf = BaggingRegressor(warm_start=True)
err_train_bag = []
err_test_bag = []
for i in range(1, MAX_ESTIMATORS + 1):
    gbclf.n_estimators = i
    gbclf.fit(X_train, Y_train)
    err_train_bag.append(1 - gbclf.score(X_train, Y_train))
    err_test_bag.append(1 - gbclf.score(X_test, Y_test))
gbclf = GradientBoostingRegressor(warm_start=True, max_depth=2, learning_rate=0.1)
err_train_gb = []
err_test_gb = []
for i in range(1, MAX_ESTIMATORS + 1):
    gbclf.n_estimators = i
    gbclf.fit(X_train, Y_train)
    err_train_gb.append(1 - gbclf.score(X_train, Y_train))
    err_test_gb.append(1 - gbclf.score(X_test, Y_test))
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(err_train_gb, label="Gradient Boosting")
plt.plot(err_train_bag, label="Bagging")
plt.legend()
plt.title("Train")
plt.subplot(1, 2, 2)
plt.plot(err_test_gb, label="Gradient Boosting")
plt.plot(err_test_bag, label="Bagging")
plt.legend()
plt.title("Test")
plt.gcf().set_size_inches(15, 7)
```
# Part 2. Multiclass classification with Decision Trees, Random Forests, and Gradient Boosting.
In the second part of our practice session, we will apply each method to the classification task and do optimal model selection.
```
from sklearn.datasets import load_digits
data = load_digits()
X = data.data
y = data.target
```
We're going to use the digits dataset. This is the task of recognizing hand-written digits: a multiclass classification problem with 10 classes.
```
np.unique(y)
fig, axs = plt.subplots(3, 3, figsize=(12, 12))
fig.suptitle("Training data examples")
for i in range(9):
    img = X[i].reshape(8, 8)
    axs[i // 3, i % 3].imshow(img)
    axs[i // 3, i % 3].set_title("Class label: %s" % y[i])
```
Firstly, split the dataset in order to be able to validate your model.
**Hint**: use sklearn's `ShuffleSplit` or `train_test_split`.
```
# Split the dataset. Use any method you prefer
```
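One possible split looks like this (a sketch only; the `test_size` and `random_state` values are arbitrary choices, and the data is reloaded so the snippet is self-contained):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# X and y are as loaded above; reloaded here for self-containment
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
print(X_train.shape, X_test.shape)  # → (1347, 64) (450, 64)
```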
#### Decision trees
```
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
# Create and fit decision tree with the default parameters
# Evaluate it on the validation set. Use accuracy
```
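A possible solution sketch for this cell (the split parameters are the same arbitrary choices as above, repeated so the snippet runs on its own):

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Fit a decision tree with default parameters and evaluate on the held-out set
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)
print("decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```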
#### Random forest
```
from sklearn.ensemble import RandomForestClassifier
# Create RandomForestClassifier with the default parameters
# Fit and evaluate
# Now let's see how the quality depends on the number of models in the ensemble
# For each value in [5, 10, 100, 500, 1000] create a random forest with the corresponding size, fit a model and evaluate
# How does the quality change? What number is sufficient?
# Please write your conclusions
```
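One way to sketch the ensemble-size experiment (parameters again are illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Accuracy as a function of the forest size
for n in [5, 10, 100, 500, 1000]:
    rf = RandomForestClassifier(n_estimators=n, random_state=42)
    rf.fit(X_train, y_train)
    print(n, "trees:", accuracy_score(y_test, rf.predict(X_test)))
```

Typically the quality saturates after a few hundred trees, consistent with the bagging behaviour from Part 1.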
#### Gradient boosting
```
from sklearn.ensemble import GradientBoostingClassifier
# Create GradientBoostingClassifier with the default parameters
# Fit and evaluate. Compare its quality to random forest
# Now let's see how the quality depends on the number of models in the ensemble
# For each value in [5, 10, 100, 500, 1000] train a gradient boosting with the corresponding size
# How does the quality change? What number is sufficient?
# Please write your conclusions
```
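A matching sketch for gradient boosting with default parameters (data reloaded for self-containment; compare the resulting accuracy to the random forest above):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

gb = GradientBoostingClassifier(random_state=42)
gb.fit(X_train, y_train)
print("gradient boosting accuracy:", accuracy_score(y_test, gb.predict(X_test)))
```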
```
# Check Python Version
import sys
import scipy
import numpy
import matplotlib
import pandas
import sklearn
print('Python: {}'.format(sys.version))
print('scipy: {}'.format(scipy.__version__))
print('numpy: {}'.format(numpy.__version__))
print('matplotlib: {}'.format(matplotlib.__version__))
print('pandas: {}'.format(pandas.__version__))
print('sklearn: {}'.format(sklearn.__version__))
import numpy as np
from sklearn import preprocessing
try:
    from sklearn import cross_validation  # removed in modern sklearn
except ImportError:
    from sklearn import model_selection as cross_validation
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
import pandas as pd
# Load Dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data"
names = ['id', 'clump_thickness', 'uniform_cell_size', 'uniform_cell_shape',
'marginal_adhesion', 'single_epithelial_size', 'bare_nuclei',
'bland_chromatin', 'normal_nucleoli', 'mitoses', 'class']
df = pd.read_csv(url, names=names)
# Preprocess the data
df.replace('?',-99999, inplace=True)
print(df.axes)
df.drop(['id'], axis=1, inplace=True)
# Let explore the dataset and do a few visualizations
print(df.loc[10])
# Print the shape of the dataset
print(df.shape)
# Describe the dataset
print(df.describe())
# Plot histograms for each variable
df.hist(figsize = (10, 10))
plt.show()
# Create scatter plot matrix
scatter_matrix(df, figsize = (18,18))
plt.show()
# Create X and Y datasets for training
X = np.array(df.drop(['class'], axis=1))
y = np.array(df['class'])
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)
# Testing Options
seed = 8
scoring = 'accuracy'
# Define models to train
models = []
models.append(('KNN', KNeighborsClassifier(n_neighbors = 5)))
models.append(('SVM', SVC()))
# evaluate each model in turn
results = []
names = []
for name, model in models:
    # shuffle=True is required for random_state to take effect in modern sklearn
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)
# Make predictions on validation dataset
for name, model in models:
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(name)
    print(accuracy_score(y_test, predictions))
    print(classification_report(y_test, predictions))
# Accuracy - ratio of correctly predicted observation to the total observations.
# Precision - (false positives) ratio of correctly predicted positive observations to the total predicted positive observations
# Recall (Sensitivity) - (false negatives) ratio of correctly predicted positive observations to the all observations in actual class - yes.
# F1 score - the weighted average of Precision and Recall; it takes both false positives and false negatives into account.
clf = SVC()
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(accuracy)
example_measures = np.array([[4,2,1,1,1,2,3,2,1]])
example_measures = example_measures.reshape(len(example_measures), -1)
prediction = clf.predict(example_measures)
print(prediction)
```
<a href="https://colab.research.google.com/github/rizwandel/Auto_TS/blob/master/Copy_of_Q%26A_on_PDF_Files.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install cdqa
!pip install flask-ngrok
import os
import pandas as pd
from ast import literal_eval
from cdqa.utils.converters import pdf_converter
from cdqa.pipeline import QAPipeline
from cdqa.utils.download import download_model
download_model(model='bert-squad_1.1', dir='./models')
!ls models
!mkdir docs
!wget -P ./docs/ https://s2.q4cdn.com/299287126/files/doc_financials/2020/q3/AMZN-Q3-2020-Earnings-Release.pdf
!wget -P ./docs/ https://s2.q4cdn.com/299287126/files/doc_financials/2020/q2/Q2-2020-Amazon-Earnings-Release.pdf
!wget -P ./docs/ https://s2.q4cdn.com/299287126/files/doc_financials/2020/Q1/AMZN-Q1-2020-Earnings-Release.pdf
!wget -P ./docs/ https://s2.q4cdn.com/299287126/files/doc_news/archive/Amazon-Q4-2019-Earnings-Release.pdf
!wget -P ./docs/ https://s2.q4cdn.com/299287126/files/doc_news/archive/Q3-2019-Amazon-Financial-Results.pdf
df = pdf_converter(directory_path='./docs/')
df.head()
# pd.set_option('display.max_colwidth', -1)
df.head()
cdqa_pipeline = QAPipeline(reader='./models/bert_qa.joblib', max_df=1.0)
cdqa_pipeline.fit_retriever(df=df)
import joblib
# cdqa_pipeline.to("cpu")
joblib.dump(cdqa_pipeline, './models/bert_qa_customc.joblib')
cdqa_pipeline=joblib.load('./models/bert_qa_customc.joblib')
cdqa_pipeline
query = 'How much is increase in operating cash flow?'
prediction = cdqa_pipeline.predict(query, 3)
cdqa_pipeline
prediction
query = 'What is latest earnings per share?'
cdqa_pipeline.predict(query)
query = 'How many jobs are created in 2020?'
prediction = cdqa_pipeline.predict(query)
print('query: {}'.format(query))
print('answer: {}'.format(prediction[0]))
print('title: {}'.format(prediction[1]))
print('paragraph: {}'.format(prediction[2]))
query = 'How many full time employees are on Amazon roll?'
prediction = cdqa_pipeline.predict(query)
print('query: {}'.format(query))
print('answer: {}'.format(prediction[0]))
print('title: {}'.format(prediction[1]))
print('paragraph: {}'.format(prediction[2]))
query = 'General Availability of which AWS services were announced?'
prediction = cdqa_pipeline.predict(query, n_predictions=5)
prediction
query = 'What is the impact of COVID on business?'
prediction = cdqa_pipeline.predict(query, n_predictions=2)
print('query: {}'.format(query))
print('answer: {}'.format(prediction[0]))
print('title: {}'.format(prediction[1]))
# print('paragraph: {}'.format(prediction[2]))
from google.colab import drive
drive.mount('/content/drive')
!cp ./models/bert_qa_customc.joblib '/content/drive/MyDrive/'
'''import joblib
import requests
import streamlit as st
# st.set_option('deprecation.showfileUploaderEncoding',False)
st.title("Amazon QA WebApp")
st.text("What would you like to know regarding amazon today?")
@st.cache(allow_output_mutation=True)
def load_model():
model=joblib.load('./models/bert_qa_customc.joblib')
return model
with st.spinner('loading model into memory...'):
model=load_model()
text=st.text_input("Enter question here")
if text:
st.write('Response:- ')
with st.spinner('Searching for answers'):
prediction=model.predict(text)
st.write('answer: {}'.format(prediction[0]))
st.write('title: {}'.format(prediction[1]))
st.write('paragraph: {}'.format(prediction[2]))
st.write('')
'''
import joblib
import requests
# import streamlit as st
from flask import Flask, render_template, url_for, request
import pandas as pd
import pickle
from flask_ngrok import run_with_ngrok
model=joblib.load('./models/bert_qa_customc.joblib')
# from sklearn.externals import joblib
app = Flask(__name__)
run_with_ngrok(app)
@app.route('/')
def home():
return 'hii'
@app.route('/predict',methods=['GET'])
def predict():
message='amount of jobs created?'
prediction = model.predict(message)
return {'Question':message,'Prediction/Answer':list(prediction)}
if __name__ == '__main__':
app.run()
message='How many jobs are created in 2020?'
prediction = model.predict(message)
{'query':message,
'prediction':list(prediction)}
```
# GMNS to AequilibraE example
## Inputs
1. Nodes as a .csv flat file in GMNS format
2. Links as a .csv flat file in GMNS format
3. Trips as a .csv flat file, with the following columns: orig_node, dest_node, trips
4. Sqlite database used by AequilibraE
## Steps
1. Read the GMNS nodes
- Place in SQLite database, then translate to AequilibraE nodes
- Generate the dictionary of zones for the omx trip table (uses node_type = centroid)
2. Read the GMNS links
- Place in SQLite database, then translate to AequilibraE links
3. Read the trips
- Translate into .omx file
A separate Jupyter notebook, Route, performs the following steps:
4. Run AequilibraE shortest path and routing
5. Generate detail and summary outputs
```
#!/usr/bin/env python
# coding: utf-8
import os
import numpy as np
import pandas as pd
import sqlite3
#import shutil # needed?
import openmatrix as omx
import math
#run_folder = 'C:/Users/Scott.Smith/GMNS/Lima'
run_folder = 'C:/Users/Scott/Documents/Work/AE/Lima' #Change to match your local environment
#highest_centroid_node_number = 500 #we are now finding this from the nodes dataframe
```
## Read the nodes, and set up the dictionary of centroids
The dictionary of centroids is used later in setting up the omx trip table
```
#Read the nodes
node_csvfile = os.path.join(run_folder, 'GMNS_node.csv')
df_node = pd.read_csv(node_csvfile) #data already has headers
print(df_node.head()) #debugging
df_size = df_node.shape[0]
print(df_size)
# Set up the dictionary of centroids
# Assumption: the node_type = 'centroid' for centroid nodes
# The centroid nodes are the lowest numbered nodes, at the beginning of the list of nodes,
# but node numbers need not be consecutive
tazdictrow = {}
for index in df_node.index:
if df_node['node_type'][index]=='centroid':
#DEBUG print(index, df_node['node_id'][index], df_node['node_type'][index])
tazdictrow[df_node['node_id'][index]]=index
#tazdictrow = {1:0,2:1,3:2,4:3,...,492:447,493:448}
taz_list = list(tazdictrow.keys())
matrix_size = len(tazdictrow) #Matches the number of nodes flagged as centroids
print(matrix_size) #DEBUG
highest_centroid_node_number = max(tazdictrow, key=tazdictrow.get) #for future use
print(highest_centroid_node_number) #DEBUG
```
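A toy version of the `tazdictrow` construction above, with made-up node ids and types, shows the intended mapping from centroid `node_id` to row index:

```python
# Hypothetical node table: ids need not be consecutive; centroids come first.
node_ids = [1, 2, 5, 10, 11]
node_types = ['centroid', 'centroid', 'centroid', 'junction', 'junction']

tazdictrow = {nid: idx
              for idx, (nid, t) in enumerate(zip(node_ids, node_types))
              if t == 'centroid'}

taz_list = list(tazdictrow.keys())
matrix_size = len(tazdictrow)

print(tazdictrow)   # {1: 0, 2: 1, 5: 2}
print(matrix_size)  # 3 -- number of centroids, i.e. the OMX matrix size
```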
## Read the links
```
# Read the links
link_csvfile = os.path.join(run_folder, 'GMNS_link.csv')
df_link = pd.read_csv(link_csvfile) #data already has headers
#print(df_node.head()) #debugging
#df_size = df_link.shape[0]
print(df_link.shape[0]) #debug
```
## Put nodes and links into SQLite. Then translate to AequilibraE 0.6.5 format
1. Nodes are pushed into a table named GMNS_node
2. node table used by AequilibraE is truncated, then filled with values from GMNS_node
3. Centroid nodes are assumed to be the lowest numbered nodes, limited by the highest_centroid_node_number
 - Number of centroid nodes must equal matrix_size, the size of the trip OMX matrix
4. Links are pushed into a table named GMNS_link
5. link table used by AequilibraE is truncated, then filled with values from GMNS_link
### Some notes
1. All the nodes whose id is <= highest_centroid_node_number are set as centroids
2. GMNS capacity is in veh/hr/lane, AequilibraE is in veh/hr; hence, capacity * lanes in the insert statement
3. free_flow_time (minutes) is assumed to be 60 (minutes/hr) * length (miles) / free_speed (miles/hr)
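As a quick check of notes 2 and 3, a hypothetical 2-mile, 2-lane link with a 40 mph free speed and 900 veh/hr/lane capacity works out as:

```python
# Illustrative link attributes (not from the Lima dataset).
length = 2.0             # miles
free_speed = 40.0        # miles/hr
lanes = 2
capacity_per_lane = 900  # veh/hr/lane (GMNS convention)

free_flow_time = 60 * length / free_speed  # minutes
link_capacity = capacity_per_lane * lanes  # veh/hr (AequilibraE convention)

print(free_flow_time)  # 3.0
print(link_capacity)   # 1800
```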
```
#Open the Sqlite database, and insert the nodes and links
network_db = os.path.join(run_folder,'1_project','Lima.sqlite')
with sqlite3.connect(network_db) as db_con:
#nodes
df_node.to_sql('GMNS_node',db_con, if_exists='replace',index=False)
db_cur = db_con.cursor()
sql0 = "delete from nodes;"
db_cur.execute(sql0)
sql1 = ("insert into nodes(ogc_fid, node_id, x, y, is_centroid)" +
" SELECT node_id, node_id, x_coord,y_coord,0 from " +
" GMNS_node")
db_cur.execute(sql1)
sql2 = ("update nodes set is_centroid = 1 where ogc_fid <= " + str(highest_centroid_node_number))
db_cur.execute(sql2)
with sqlite3.connect(network_db) as db_con:
df_link.to_sql('GMNS_link',db_con, if_exists='replace',index=False)
db_cur = db_con.cursor()
sql0 = "delete from links;"
db_cur.execute(sql0)
sql1 = ("insert into links(ogc_fid, link_id, a_node, b_node, direction, distance, modes," +
" link_type, capacity_ab, speed_ab, free_flow_time) " +
" SELECT link_id, link_id, from_node_id, to_node_id, directed, length, allowed_uses," +
" facility_type, capacity*lanes, free_speed, 60*length / free_speed" +
" FROM GMNS_link where GMNS_link.capacity > 0")
db_cur.execute(sql1)
sql2 = ("update links set capacity_ba = 0, speed_ba = 0, b=0.15, power=4")
db_cur.execute(sql2)
```
Next step is to update the links with the parameters for the volume-delay function. This step is AequilibraE-specific and makes use of the link_types Sqlite table. This table is taken from v 0.7.1 of AequilibraE, to ease future compatibility. The link_types table expects at least one row with link_type = "default" to use for default values. The user may add other rows with the real link_types.
Its CREATE statement is as follows
```
CREATE TABLE 'link_types' (link_type VARCHAR UNIQUE NOT NULL PRIMARY KEY,
link_type_id VARCHAR UNIQUE NOT NULL,
description VARCHAR,
lanes NUMERIC,
lane_capacity NUMERIC,
alpha NUMERIC,
beta NUMERIC,
gamma NUMERIC,
delta NUMERIC,
epsilon NUMERIC,
zeta NUMERIC,
iota NUMERIC,
sigma NUMERIC,
phi NUMERIC,
tau NUMERIC)
```
| link_type | link_type_id | description | lanes | lane_capacity | alpha | beta | other fields not used |
| ----- | ----- | ----- | ----- | ----- |----- |----- |----- |
| default | 99 | Default general link type | 2 | 900 | 0.15 | 4 | |
```
with sqlite3.connect(network_db) as db_con:
db_cur = db_con.cursor()
sql1 = "update links set b = (select alpha from link_types where link_type = links.link_type)"
db_cur.execute(sql1)
sql2 = ("update links set b = (select alpha from link_types where link_type = 'default') where b is NULL")
db_cur.execute(sql2)
sql3 = "update links set power = (select beta from link_types where link_type = links.link_type)"
db_cur.execute(sql3)
sql4 = ("update links set power = (select beta from link_types where link_type = 'default') where power is NULL")
db_cur.execute(sql4)
```
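The `alpha` (`b`) and `beta` (`power`) parameters set above feed the standard BPR volume-delay function used in traffic assignment. A minimal sketch of that function (not AequilibraE's internal code):

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    """BPR volume-delay: congested time grows with the volume/capacity ratio."""
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

# At zero volume the travel time equals the free-flow time ...
print(bpr_travel_time(3.0, 0, 1800))  # 3.0
# ... and at capacity it is inflated by a factor of (1 + alpha), about 3.45 here.
print(bpr_travel_time(3.0, 1800, 1800))
```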
## Read the trips, and translate to omx file
```
#Read a flat file trip table into pandas dataframe
trip_csvfile = os.path.join(run_folder, 'demand.csv')
df_trip = pd.read_csv(trip_csvfile) #data already has headers
print(df_trip.head()) #debugging
df_size = df_trip.shape[0]
print(df_size)
#print(df.iloc[50]['o_zone_id'])
#stuff for debugging
print(df_trip['total'].sum()) #for debugging: total number of trips
#for k in range(df_size): #at most matrix_size*matrix_size
# i = tazdictrow[df_trip.iloc[k]['orig_taz']]
# j = tazdictrow[df_trip.iloc[k]['dest_taz']]
# if k == 4: print(k," i=",i," j=",j) #debugging
#Write the dataframe to an omx file
# This makes use of tazdictrow and matrix_size, that was established earlier.
# The rows are also written to a file that is used only for debugging
outfile = os.path.join(run_folder, '0_tntp_data' ,'demand.omx')
outdebugfile = open(os.path.join(run_folder,'debug_demand.txt'),"w")
output_demand = np.zeros((matrix_size,matrix_size))
f_output = omx.open_file(outfile,'w')
f_output.create_mapping('taz',taz_list)
#write the data
for k in range(df_size): #at most matrix_size*matrix_size
i = tazdictrow[df_trip.iloc[k]['orig_taz']]
j = tazdictrow[df_trip.iloc[k]['dest_taz']]
output_demand[i][j] = df_trip.iloc[k]['total']
print('Row: ',df_trip.iloc[k]['orig_taz'],i," Col: ",df_trip.iloc[k]['dest_taz'],j," Output",output_demand[i][j],file=outdebugfile)
f_output['matrix'] = output_demand #puts the output_demand array into the omx matrix
f_output.close()
outdebugfile.close()
#You may stop here
# Not needed except for debugging
#Read the input omx trip table
infile = os.path.join(run_folder, '0_tntp_data' ,'demand.omx')
f_input = omx.open_file(infile)
m1 = f_input['matrix']
input_demand = np.array(m1)
print('Shape:',f_input.shape())
print('Number of tables',len(f_input))
print('Table names:',f_input.list_matrices())
print('attributes:',f_input.list_all_attributes())
print('sum of trips',np.sum(m1))
f_input.close()
```
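The triplet-to-matrix step above (orig_taz, dest_taz, total rows into a dense demand array) can be sketched without pandas or openmatrix, using a tazdictrow-style mapping on made-up trips:

```python
# Hypothetical centroid mapping and trip triplets.
tazdictrow = {1: 0, 2: 1, 5: 2}
trips = [(1, 2, 10.0), (2, 5, 4.0), (5, 1, 7.5)]  # (orig_taz, dest_taz, total)

n = len(tazdictrow)
demand = [[0.0] * n for _ in range(n)]
for orig, dest, total in trips:
    demand[tazdictrow[orig]][tazdictrow[dest]] = total

print(demand)  # [[0.0, 10.0, 0.0], [0.0, 0.0, 4.0], [7.5, 0.0, 0.0]]
print(sum(sum(row) for row in demand))  # 21.5 -- matches the total trips
```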
```
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup
import pandas as pd
from tqdm import tqdm
from ratelimit import limits, sleep_and_retry
import Edgar_scrapper
import re
pd.set_option('display.max_colwidth',200)
edgar_access = Edgar_scrapper.EdgarAccess()
def get_fillings(fillings_ticker,fillings_cik, doc_type, start=0, count=60):
fillings_url = 'https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany&CIK={}\
&type={}&start={}&count={}&owner=exclude&output=atom'.format(fillings_cik, doc_type, start, count)
fillings_html = edgar_access.get(fillings_url)
fillings_soup = BeautifulSoup(fillings_html, features="html.parser")
fillings_list = [
(fillings_ticker,
fillings_cik,
link.find('filing-date').getText(),
link.find('filing-href').getText())
#link.find('filing-type').getText())
for link in fillings_soup.find_all('entry')]
return fillings_list
ticker_ciks = pd.read_csv('tickers.csv')
sample_ticker = pd.DataFrame(data={'ticker' : ['AMC'],'cik':[1411579]})
sec_fillings_df = pd.DataFrame(columns=['ticker','cik','date','annual_report_url'])
for ticker, cik in sample_ticker[:1].values:
temp = pd.DataFrame(data = get_fillings(ticker,cik, '10-K'),columns=['ticker','cik','date','annual_report_url'])
sec_fillings_df = sec_fillings_df.append(temp,ignore_index = True)
del(temp)
sec_fillings_df.annual_report_url = sec_fillings_df.annual_report_url.\
replace('-index.htm', '.txt',regex=True)\
.replace('.txtl', '.txt',regex=True)
sec_fillings_df
sec_fillings_df['filling_text'] = [None] * len(sec_fillings_df)
for index,row in tqdm(sec_fillings_df.iterrows(),desc='Downloading Fillings', \
unit='filling',total=len(sec_fillings_df)):
filing_href = row['annual_report_url']
report_txt= edgar_access.get(filing_href)
report_soup = BeautifulSoup(report_txt, "html")
for document in report_soup.find_all('TYPE'):
if(re.match(r'\s+10-K',document.prettify().splitlines()[1])):
#if (document.find('html')):
sec_fillings_df.iloc[index]['filling_text'] = document#.find('html')
print("Memory consumption of sec_fillings_df is {:.2f}Mb".format(sec_fillings_df.memory_usage().sum()/1024**2))
sec_fillings_df
sample_report_txt = sec_fillings_df.iloc[0]['filling_text']
result = [p_tag.getText() for p_tag in sample_report_txt.find_all('p',text=True) if re.match(r'\w+',p_tag.getText())]
result[:2000]
```
```
%reset -f
# libraries used
# https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import preprocessing
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
import itertools
emotions = pd.read_csv("drive/MyDrive/EEG/emotions.csv")
emotions.replace(['NEGATIVE', 'POSITIVE', 'NEUTRAL'], [2, 1, 0], inplace=True)
emotions['label'].unique()
X = emotions.drop('label', axis=1).copy()
y = (emotions['label'].copy())
# Splitting data into training and testing as 80-20
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
x = X_train #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df = pd.DataFrame(x_scaled)
# resetting the data - https://www.tensorflow.org/api_docs/python/tf/keras/backend/clear_session
tf.keras.backend.clear_session()
model = Sequential()
# Layer widths follow the "2/3 of the input size" heuristic from the link above;
# cast to int because Dense expects an integer number of units.
model.add(Dense(int(2*X_train.shape[1]/3), input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(int(2*X_train.shape[1]/3), activation='relu'))
model.add(Dense(int(1*X_train.shape[1]/3), activation='relu'))
model.add(Dense(int(1*X_train.shape[1]/3), activation='relu'))
model.add(Dense(3, activation='softmax'))
#model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# for categorical entropy
# https://stackoverflow.com/questions/63211181/error-while-using-categorical-crossentropy
from tensorflow.keras.utils import to_categorical
Y_one_hot=to_categorical(y_train) # convert Y into an one-hot vector
# https://stackoverflow.com/questions/59737875/keras-change-learning-rate
#optimizer = tf.keras.optimizers.Adam(0.001)
#optimizer.learning_rate.assign(0.01)
opt = keras.optimizers.Adamax(learning_rate=0.001)
model.compile(
optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
# to be run for categorical cross entropy
# model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.01), metrics=['accuracy'])
# make sure that the input data is shuffled beforehand so that the model doesn't notice patterns and generalizes well
# change y_train to Y_one_hot when using categorical cross entropy
import time
start_time = time.time()
history = model.fit(
df,
y_train,
validation_split=0.2,
batch_size=32,
epochs=75)
history.history
print("--- %s seconds ---" % (time.time() - start_time))
x_test = X_test
# Reuse the scaler fitted on the training data; fitting a fresh MinMaxScaler on
# the test set would leak test-set statistics into the preprocessing.
x_scaled_test = min_max_scaler.transform(x_test)
df_test = pd.DataFrame(x_scaled_test)
predictions = model.predict(x=df_test, batch_size=32)
rounded_predictions = np.argmax(predictions, axis=-1)
cm = confusion_matrix(y_true=y_test, y_pred=rounded_predictions)
# Labels were encoded above as NEUTRAL=0, POSITIVE=1, NEGATIVE=2
label_mapping = {'NEUTRAL': 0, 'POSITIVE': 1, 'NEGATIVE': 2}
# for a binary variant of the dataset
# label_mapping = {'NEGATIVE': 0, 'POSITIVE': 1}
plt.figure(figsize=(8, 8))
sns.heatmap(cm, annot=True, vmin=0, fmt='g', cbar=False, cmap='Blues')
clr = classification_report(y_test, rounded_predictions, target_names=label_mapping.keys())
plt.xticks(np.arange(3) + 0.5, label_mapping.keys())
plt.yticks(np.arange(3) + 0.5, label_mapping.keys())
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.show()
print("Classification Report:\n----------------------\n", clr)
# https://stackoverflow.com/questions/26413185/how-to-recover-matplotlib-defaults-after-setting-stylesheet
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
training_acc = history.history['accuracy']
validation_acc = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs = history.epoch
plt.plot(epochs, training_acc, color = '#17e6e6', label='Training Accuracy')
plt.plot(epochs, validation_acc,color = '#e61771', label='Validation Accuracy')
plt.title('Accuracy vs Epochs')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('AccuracyVsEpochs.png')
plt.show()
```
Image classification.
The problem: classifying the MNIST dataset
- grayscale images
- handwritten digits
- 28x28 px
- 10 categories (0-9)
```
# tensorflow low level library
# keras high level library
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
# import the dataset
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Inspect the data type
type(train_images)
train_images.shape
train_images[0]
plt.imshow(train_images[0], cmap='gray')
train_labels[0]
# Images as 28x28 pixel matrices
train_images
# Image labels
train_labels
# exploring the first 10 images
n_images = 10
fig, axs = plt.subplots(1, n_images, figsize=(20,20))
for i in range(n_images):
axs[i].imshow(train_images[i], cmap='gray')
# exploring the first 10 labels
train_labels[:n_images]
```
## Training a classic ML model
```
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
train_images[0].shape
X_train = train_images.reshape(train_images.shape[0], -1)
X_test = test_images.reshape(test_images.shape[0], -1)
train_images.shape
train_images[0].shape
X_train.shape
X_train[0].shape
# training
model = GradientBoostingClassifier(n_estimators=10,
max_depth=5,
max_features=0.1)
model.fit(X_train, train_labels)
# predicting
model.predict(X_test[:10])
n_images = 10
fig, axs = plt.subplots(1, n_images, figsize=(20,20))
for i in range(n_images):
axs[i].imshow(test_images[i], cmap='gray')
# exploring the first 10 labels
test_labels[:n_images]
# Check the misclassified examples
import numpy as np
error_indices = np.argwhere(test_labels[:1000] != model.predict(X_test[:1000]))
n_images = 10
fig, axs = plt.subplots(1, n_images, figsize=(20,20))
for i, index in zip(range(n_images), error_indices):
axs[i].imshow(test_images[index][0], cmap='gray')
for i in error_indices[:10]:
print(model.predict(X_test[i].reshape(1,-1)))
# Accuracy Score
model.score(X_train, train_labels)
# applying the score to the test values
model.score(X_test, test_labels)
```
# Applying an ANN (Artificial Neural Network)
We follow this workflow:
- first we build the model with the training data (images, labels)
- the neural network learns from the images and labels
- finally, we generate predictions for the test_images
- we check our test_labels against those predictions
```
# create our network
network = models.Sequential()
```
# Building the layers
- two Dense layers, which are *densely-connected* (also 'fully-connected').
- One of the layers has 10 outputs with the 'softmax' activation
- this last layer returns an array of 10 scores (summing to 1).
- each score is the probability that the actual digit belongs to each of the 10 classes.
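The softmax behavior described above (10 scores that sum to 1) follows the standard formula; a small hand-rolled sketch with illustrative logits:

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalize the exponentials.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)                 # the highest logit gets the highest probability
print(round(sum(probs), 6))  # 1.0
```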
```
network.add(layers.Dense(256, activation='relu', input_shape=(784,)))
network.add(layers.Dense(10, activation='softmax'))
# compile
network.compile(
optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy']
)
```
- we need to reshape each 28x28 image into a vector of 784
- and scale the values into the interval [0, 1]
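On a tiny 2x2 'image' the same reshape-and-scale step looks like this (the real data flattens 28x28 into 784):

```python
# Toy 2x2 grayscale image with uint8-style pixel values.
image = [[0, 128],
         [255, 64]]

# Flatten row by row into a vector ...
vector = [pixel for row in image for pixel in row]
# ... and scale into [0, 1] by dividing by the maximum pixel value.
scaled = [pixel / 255 for pixel in vector]

print(vector)  # [0, 128, 255, 64]
print(scaled)  # [0.0, ~0.502, 1.0, ~0.251]
```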
```
train_vectors = train_images.reshape((60000, 28 * 28)).astype('float32') / 255
test_vectors = test_images.reshape((10000, 28 * 28)).astype('float32') / 255
train_vectors.shape
# we also need to transform the labels into categorical (one-hot) form
from tensorflow.keras.utils import to_categorical
train_labels
train_labels_hot = to_categorical(train_labels)
test_labels_hot = to_categorical(test_labels)
train_labels_hot
# verify the categorical transformation
train_labels[10]
train_labels_hot[10]
# summary of our neural network
network.summary()
%%time
network.fit(train_vectors, train_labels_hot,
epochs=15, batch_size=128,
validation_split=0.1)
# no GPU - 42.6 sec
%%time
network.fit(train_vectors, train_labels_hot,
epochs=15, batch_size=128,
validation_split=0.1)
# with GPU - 29.6s
plt.plot(network.history.history['accuracy'], label='train')
plt.plot(network.history.history['val_accuracy'], label='validation')
plt.legend()
_, test_acc = network.evaluate(test_vectors, test_labels_hot)
test_acc
# sanity checks
n_images = 10
fig, axs = plt.subplots(1, n_images, figsize=(20,20))
for i in range(n_images):
axs[i].imshow(test_images[i], cmap='gray')
test_labels[:10]
np.argmax(network.predict(test_vectors), axis=-1)[:10]
# Inspect the misclassified examples
errores_indices = np.argwhere(test_labels[:1000] != np.argmax(network.predict(test_vectors), axis=-1)[:1000]).flatten()
errores_indices
n_images = 10
fig, axs = plt.subplots(1, n_images, figsize=(20,20))
for i, index in zip(range(n_images), errores_indices):
axs[i].imshow(test_images[index], cmap='gray')
test_labels[errores_indices][:10]
np.argmax(network.predict(test_vectors[errores_indices]), axis=-1)[:10]
```
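The `to_categorical` call used above is just one-hot encoding; a hand-rolled sketch of the same transformation:

```python
def to_one_hot(labels, num_classes):
    # Each label becomes a vector with a single 1.0 at the label's index.
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

print(to_one_hot([5, 0, 2], 10)[0])
# [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```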
# CNN - Convolutional neural network
```
model = models.Sequential()
# create the hidden layers
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
model.compile(
optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy']
)
train_images.shape
train_labels_hot
model.fit(train_images, train_labels_hot, epochs=10, batch_size=128, validation_split=0.1)
```
# Performance analysis of a uniform linear array
We compare the MSE of MUSIC with the CRB for a uniform linear array (ULA).
```
import numpy as np
import doatools.model as model
import doatools.estimation as estimation
import doatools.performance as perf
import matplotlib.pyplot as plt
%matplotlib inline
wavelength = 1.0 # normalized
d0 = wavelength / 2
# Create a 12-element ULA.
ula = model.UniformLinearArray(12, d0)
# Place 8 sources uniformly within (-pi/3, pi/4)
sources = model.FarField1DSourcePlacement(
np.linspace(-np.pi/3, np.pi/4, 8)
)
# All sources share the same power.
power_source = 1 # Normalized
source_signal = model.ComplexStochasticSignal(sources.size, power_source)
# 200 snapshots.
n_snapshots = 200
# We use root-MUSIC.
estimator = estimation.RootMUSIC1D(wavelength)
```
We vary the SNR from -20 dB to 10 dB. Here the SNR is defined as:
\begin{equation}
\mathrm{SNR} = 10\log_{10}\frac{\min_i p_i}{\sigma^2_{\mathrm{n}}},
\end{equation}
where $p_i$ is the power of the $i$-th source, and $\sigma^2_{\mathrm{n}}$ is the noise power.
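The simulation loop inverts this definition to obtain the noise power at each SNR point; for example:

```python
# Invert SNR = 10*log10(p_min / sigma_n^2) for the noise power, as the
# simulation loop does: power_noise = power_source / 10**(snr/10).
power_source = 1.0

for snr_db in (-10, 0, 10):
    power_noise = power_source / (10 ** (snr_db / 10))
    print(snr_db, power_noise)  # -10 -> 10.0, 0 -> 1.0, 10 -> 0.1
```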
```
snrs = np.linspace(-20, 10, 20)
# 300 Monte Carlo runs for each SNR
n_repeats = 300
mses = np.zeros((len(snrs),))
crbs_sto = np.zeros((len(snrs),))
crbs_det = np.zeros((len(snrs),))
crbs_stouc = np.zeros((len(snrs),))
for i, snr in enumerate(snrs):
power_noise = power_source / (10**(snr / 10))
noise_signal = model.ComplexStochasticSignal(ula.size, power_noise)
    # The squared errors and the deterministic CRB vary
    # from run to run, so we average them over the Monte Carlo repeats.
cur_mse = 0.0
cur_crb_det = 0.0
for r in range(n_repeats):
# Stochastic signal model.
A = ula.steering_matrix(sources, wavelength)
S = source_signal.emit(n_snapshots)
N = noise_signal.emit(n_snapshots)
Y = A @ S + N
Rs = (S @ S.conj().T) / n_snapshots
Ry = (Y @ Y.conj().T) / n_snapshots
resolved, estimates = estimator.estimate(Ry, sources.size, d0)
# In practice, you should check if `resolved` is true.
# We skip the check here.
cur_mse += np.mean((estimates.locations - sources.locations)**2)
B_det = perf.ecov_music_1d(ula, sources, wavelength, Rs, power_noise,
n_snapshots)
cur_crb_det += np.mean(np.diag(B_det))
# Update the results.
B_sto = perf.crb_sto_farfield_1d(ula, sources, wavelength, power_source,
power_noise, n_snapshots)
B_stouc = perf.crb_stouc_farfield_1d(ula, sources, wavelength, power_source,
power_noise, n_snapshots)
mses[i] = cur_mse / n_repeats
crbs_sto[i] = np.mean(np.diag(B_sto))
crbs_det[i] = cur_crb_det / n_repeats
crbs_stouc[i] = np.mean(np.diag(B_stouc))
print('Completed SNR = {0:.2f} dB'.format(snr))
```
We plot the results below.
* The MSE should approach the stochastic CRBs in high SNR regions.
* The stochastic CRB should be tighter than the deterministic CRB.
* With the additional assumption of uncorrelated sources, we expect an even lower CRB.
* All three CRBs should converge together as the SNR approaches infinity.
```
plt.figure(figsize=(8, 6))
plt.semilogy(
snrs, mses, '-x',
snrs, crbs_sto, '--',
snrs, crbs_det, '--',
snrs, crbs_stouc, '--'
)
plt.xlabel('SNR (dB)')
plt.ylabel(r'MSE / $\mathrm{rad}^2$')
plt.grid(True)
plt.legend(['MSE', 'Stochastic CRB', 'Deterministic CRB',
'Stochastic CRB (Uncorrelated)'])
plt.title('MSE vs. CRB')
plt.margins(x=0)
plt.show()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Customization basics: tensors and operations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/basics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This is an introductory TensorFlow tutorial that shows how to:
* Import the required package
* Create and use tensors
* Use GPU acceleration
* Demonstrate `tf.data.Dataset`
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
```
## Import TensorFlow
To get started, import the `tensorflow` module. As of TensorFlow 2, eager execution is turned on by default. This enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
```
import tensorflow as tf
```
## Tensors
A Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `tf.Tensor` objects have a data type and a shape. Additionally, `tf.Tensor`s can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce `tf.Tensor`s. These operations automatically convert native Python types, for example:
```
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
```
Each `tf.Tensor` has a shape and a datatype:
```
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
```
The most obvious differences between NumPy arrays and `tf.Tensor`s are:
1. Tensors can be backed by accelerator memory (like GPU, TPU).
2. Tensors are immutable.
### NumPy Compatibility
Converting between TensorFlow `tf.Tensor`s and NumPy `ndarray`s is easy:
* TensorFlow operations automatically convert NumPy ndarrays to Tensors.
* NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors are explicitly converted to NumPy ndarrays using their `.numpy()` method. These conversions are typically cheap since the array and `tf.Tensor` share the underlying memory representation, if possible. However, sharing the underlying representation isn't always possible since the `tf.Tensor` may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion involves a copy from GPU to host memory.
```
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
```
## GPU acceleration
Many TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example:
```
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.experimental.list_physical_devices("GPU"))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
```
### Device Names
The `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host.
### Explicit Device Placement
In TensorFlow, *placement* refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager, for example:
```
import time
def time_matmul(x):
    start = time.time()
    for loop in range(10):
        tf.matmul(x, x)
    result = time.time() - start
    print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
    x = tf.random.uniform([1000, 1000])
    assert x.device.endswith("CPU:0")
    time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.experimental.list_physical_devices("GPU"):
    print("On GPU:")
    with tf.device("GPU:0"):  # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
        x = tf.random.uniform([1000, 1000])
        assert x.device.endswith("GPU:0")
        time_matmul(x)
```
## Datasets
This section uses the [`tf.data.Dataset` API](https://www.tensorflow.org/guide/datasets) to build performant, complex input pipelines from simple, reusable pieces that feed your model's training or evaluation loops.
### Create a source `Dataset`
Create a *source* dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Dataset guide](https://www.tensorflow.org/guide/datasets#reading_input_data) for more information.
```
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
    f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
```
### Apply transformations
Use transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), and [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) to transform dataset records.
```
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
```
### Iterate
`tf.data.Dataset` objects support iteration to loop over records:
```
print('Elements of ds_tensors:')
for x in ds_tensors:
    print(x)
print('\nElements in ds_file:')
for x in ds_file:
    print(x)
```
<a href="https://colab.research.google.com/github/Tenntucky/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module1-afirstlookatdata/Kole_Goldsberry_LS_DSPT3_111_A_First_Look_at_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lambda School Data Science - A First Look at Data
## Lecture - let's explore Python DS libraries and examples!
The Python Data Science ecosystem is huge. You've seen some of the big pieces - pandas, scikit-learn, matplotlib. What parts do you want to see more of?
```
2 + 2
def helloworld():
    return print('Hello World!')
helloworld()
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3, 4, 5],
'b': [5, 4, 3, 2, 1]})
df.head()
df.plot.scatter('a', 'b');
joe = {'name': 'Joe', 'is_female': False, 'age': 19}
alice = {'name': 'Alice', 'is_female': True, 'age': 20}
sarah = {'name': 'Sarah', 'is_female': True, 'age': 20}
students = [joe, alice, sarah]
import numpy as np
np.random.randint(0, 10, size=10)
import matplotlib.pyplot as plt
x = [1, 2, 3, 4]
y = [2, 4, 6, 10]
print(x, y)
plt.scatter(x, y, color='r');
plt.plot(x, y, color='g');
df = pd.DataFrame({'first_col': x, 'second_col': y})
df
df['first_col']
df['second_col']
df.shape
df['third_col'] = df['first_col'] + 2*df['second_col']
df
df.shape
arr_1 = np.random.randint(low=0, high=100, size=10000)
arr_2 = np.random.randint(low=0, high=100, size=10000)
arr_1.shape
arr_2.shape
arr_1 + arr_2
x + y
type(arr_1)
type(arr_2)
type(x)
df
df['fourth_col'] = df['third_col'] > 10
df
df.shape
df[df['second_col'] < 10]
print(students)
df_1 = pd.DataFrame(students)
df_1
df_1['legal_drinker'] = df_1['age'] > 21
df_1
df_1[df_1['name'] == 'Alice']
```
## Assignment - now it's your turn
Pick at least one Python DS library, and using documentation/examples reproduce in this notebook something cool. It's OK if you don't fully understand it or get it 100% working, but do put in effort and look things up.
```
# TODO - your code here
# Use what we did live in lecture as an example
# 1.
# Above I recreated what Alex Kim did yesterday in lecture. I decided to look
# at the day one notes where they create a dictionary of three peoples names.
# I decided to add some columns with legal drinking status (worked) and one with
# the eligibility of said person on playing for the United States Womens National
# Soccer Team (didn't work). I couldn't get it to read a boolean (pass/fail)
# so I would like to figure that out.
# 2.
# I believe the boolean aspect will be the hardest. Also, changing some of the
# data types is going to be challenging.
# 3.
# I'm really hoping that about halfway through the NFL season I will be able
# to use this to predict where the teams currently sit then. So, appending a
# data frame based off of previous data.
# 4.
# I would like to continue exploring dataframes. The idea of a large subset of
# values that can be quickly explored with key words then used to give a
# summation of that data. I would like to be able to predict possible values of
# players versus teams in order to offer insight into personnel decisions.
```
### Assignment questions
After you've worked on some code, answer the following questions in this text block:
1. Describe in a paragraph of text what you did and why, as if you were writing an email to somebody interested but nontechnical.
2. What was the most challenging part of what you did?
3. What was the most interesting thing you learned?
4. What area would you like to explore with more time?
## Stretch goals and resources
Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub (and since this is the first assignment of the sprint, open a PR as well).
- [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
- [scikit-learn documentation](http://scikit-learn.org/stable/documentation.html)
- [matplotlib documentation](https://matplotlib.org/contents.html)
- [Awesome Data Science](https://github.com/bulutyazilim/awesome-datascience) - a list of many types of DS resources
Stretch goals:
- Find and read blogs, walkthroughs, and other examples of people working through cool things with data science - and share with your classmates!
- Write a blog post (Medium is a popular place to publish) introducing yourself as somebody learning data science, and talking about what you've learned already and what you're excited to learn more about.
## Gender Recognition by Voice Project
In this project, we will classify a person's gender from various acoustic properties of his/her voice using different classification methods: logistic regression, k-nearest neighbors, and Naive Bayes. These methods will be implemented completely from scratch using pure Python and the related mathematical concepts. For each method, we'll compare it with its built-in version in the sklearn library to see if there are any differences in results. In addition, other methods like SVM, Decision Tree, and Random Forest are used from sklearn to compare accuracy among methods. The data were downloaded from Kaggle.
## Imports
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Read and explore data
```
df = pd.read_csv("voice.csv")
df.head()
df.describe()
sns.countplot(x='label', data=df)
sns.heatmap(df.drop('label', axis=1).corr(), square = True, cmap="YlGnBu", linecolor='black')
sns.FacetGrid(df, hue='label', size=5).map(plt.scatter, 'meandom','meanfun').add_legend()
```
## Standardize data
The data need to be standardized to a common scale to make the subsequent calculations easier.
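Under the hood, `StandardScaler` rescales each column to zero mean and unit variance. A minimal NumPy sketch of that transformation (this is illustrative, not the scaler object used below):

```python
import numpy as np

def standardize(X):
    """Rescale each column to zero mean and unit (population) variance,
    matching sklearn's StandardScaler defaults (ddof=0)."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)  # population standard deviation, like sklearn
    return (X - mean) / std

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Z = standardize(X)
print(Z.mean(axis=0))  # approximately [0, 0]
print(Z.std(axis=0))   # approximately [1, 1]
```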
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df.drop('label',axis=1))
scaled_features = scaler.transform(df.drop('label',axis=1))
# features: all columns except the 'label' one
df_feat = pd.DataFrame(scaled_features,columns=df.columns[:-1])
df_feat.head()
```
## Split the data
```
from sklearn.model_selection import train_test_split
# encoding label column
df['label'] = df['label'].replace(['male', 'female'], [0,1])
X = df_feat
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# df to compare results among methods
comparison = pd.DataFrame(columns=['Name', 'accuracy'])
```
## Use Logistic Regression from sklearn library
```
from sklearn.linear_model import LogisticRegression
# Train the model
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
# Get the prediction for X_test
y_logSK_pred = logmodel.predict(X_test)
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
print(classification_report(y_test,y_logSK_pred))
print(confusion_matrix(y_test,y_logSK_pred))
```
## Implementing Logistic Regression from scratch
```
# Create a logistic regression class to import from (LogisticRegressionFromScratch.py)
class LogisticRegressionFromScratch:
    def __init__(self, X, y):
        # Add a bias column of ones to X (intercept term)
        ones_col = np.ones((X.shape[0], 1))
        X = np.append(ones_col, X, axis=1)
        # Initialize variables
        self.X = X
        self.y = y
        self.m = X.shape[0]
        self.n = X.shape[1]
        # Randomize values for theta
        self.theta = np.random.randn(X.shape[1], 1)

    def sigmoid(self, z):
        return 1/(1 + np.exp(-z))

    def costFunction(self):
        # Calculate predicted h then the cost value
        h = self.sigmoid(np.matmul(self.X, self.theta))
        self.J = (1/self.m)*(-self.y.T.dot(np.log(h)) - (1 - self.y).T.dot(np.log(1 - h)))
        return self.J

    def gradientDescent(self, alpha, num_iters):
        # Keep records of cost values and thetas
        self.J_history = []
        self.theta_history = []
        for i in range(num_iters):
            # Calculate a new value for h, then update the histories
            h = self.sigmoid(np.matmul(self.X, self.theta))
            self.J_history.append(self.costFunction())
            self.theta_history.append(self.theta)
            self.theta = self.theta - (alpha/self.m)*(self.X.T.dot(h - self.y))
        return self.J_history, self.theta_history, self.theta

    def predict(self, X_test, y_test):
        # Add a bias column of ones to X_test
        ones_col = np.ones((X_test.shape[0], 1))
        X_test = np.append(ones_col, X_test, axis=1)
        # Calculate final predicted y values after gradient descent has updated theta
        cal_sigmoid = self.sigmoid(np.matmul(X_test, self.theta))
        self.y_pred = []
        for value in cal_sigmoid:
            if value >= 0.5:
                self.y_pred.append(1)
            else:
                self.y_pred.append(0)
        return self.y_pred
from LogisticRegressionFromScratch import LogisticRegressionFromScratch
lmFromScratch = LogisticRegressionFromScratch(X_train, y_train.to_numpy().reshape(y_train.shape[0],1))
# PREDICT USING GRADIENT DESCENT
# set up number of iterations and learning rate
num_iters = 15000
alpha = 0.01
# update theta value and get predicted y
j_hist, theta_hist, theta = lmFromScratch.gradientDescent(alpha, num_iters)
y_logScratch_pred = lmFromScratch.predict(X_test, y_test.to_numpy().reshape(y_test.shape[0],1))
print(confusion_matrix(y_test,y_logScratch_pred))
print(classification_report(y_test,y_logScratch_pred))
new_data = {'Name': 'Logistic Regression', 'accuracy': accuracy_score(y_test,y_logScratch_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
```
## Use KNN from sklearn library
```
from sklearn.neighbors import KNeighborsClassifier
# start with k = 1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_knnSK_pred = knn.predict(X_test)
print(confusion_matrix(y_test, y_knnSK_pred))
print(classification_report(y_test,y_knnSK_pred))
# plot out the error vs k-value graph to choose the best k value
error_rate = []
for i in range(1, 40):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize = (10,6))
plt.plot(range(1,40), error_rate, color = 'blue', linestyle = '--', marker = 'o', markerfacecolor = 'red', markersize = 10)
# From the plot above, k = 3 gives the lowest error, so we retrain the knn model with k = 3
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
```
There is essentially no difference in the results when changing k to 3, because the gap in error between k = 1 and k = 3 is very small.
## Implementing KNN from scratch
```
import numpy as np
class KNNFromScratch():
    def __init__(self, k):
        self.k = k

    # get training data
    def train(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X_test):
        dist = self.compute_dist(X_test)
        return self.predict_label(dist)

    # compute the distance between each sample in X_test and X_train
    def compute_dist(self, X_test):
        test_size = X_test.shape[0]
        train_size = self.X_train.shape[0]
        dist = np.zeros((test_size, train_size))
        for i in range(test_size):
            for j in range(train_size):
                dist[i, j] = np.sqrt(np.sum((X_test[i, :] - self.X_train[j, :])**2))
        return dist

    # return predicted labels for the given distance matrix of X_test
    def predict_label(self, dist):
        test_size = dist.shape[0]
        y_pred = np.zeros(test_size)
        for i in range(test_size):
            y_indices = np.argsort(dist[i, :])
            k_closest = self.y_train[y_indices[:self.k]].astype(int)
            y_pred[i] = np.argmax(np.bincount(k_closest))
        return y_pred
from KNNFromScratch import KNNFromScratch
# train with k=3
knnFromScratch = KNNFromScratch(3)
knnFromScratch.train(X_train.to_numpy(), y_train.to_numpy())
y_knnScratch_pred = knnFromScratch.predict(X_test.to_numpy())
```
The result is slightly better than with the sklearn library.
```
print(confusion_matrix(y_test, y_knnScratch_pred))
print(classification_report(y_test,y_knnScratch_pred))
new_data = {'Name': 'KNN', 'accuracy': accuracy_score(y_test,y_knnScratch_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
```
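As an aside, the double Python loop in `compute_dist` can be replaced by a broadcasting one-liner. A sketch that computes the same Euclidean distance matrix (illustrative, not part of the original class):

```python
import numpy as np

def compute_dist_vectorized(X_test, X_train):
    """Euclidean distance between every test row and every train row,
    using NumPy broadcasting instead of two Python loops."""
    # shape: (n_test, n_train, n_features)
    diff = X_test[:, None, :] - X_train[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

X_train = np.array([[0.0, 0.0], [3.0, 4.0]])
X_test = np.array([[0.0, 0.0]])
D = compute_dist_vectorized(X_test, X_train)
print(D)  # distances 0 and 5
```

For large datasets this is dramatically faster than the nested loops, at the cost of materializing the intermediate `diff` array.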
## Use Naive Bayes from sklearn library
```
from sklearn.naive_bayes import GaussianNB
naiveBayes = GaussianNB()
naiveBayes.fit(X_train, y_train)
y_nbSK_pred = naiveBayes.predict(X_test)
print(confusion_matrix(y_test, y_nbSK_pred))
print(classification_report(y_test,y_nbSK_pred))
```
## Implementing Naive Bayes from scratch
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
class NaiveBayesFromScratch():
    def __init__(self, X, y):
        self.num_examples, self.num_features = X.shape
        self.num_classes = len(np.unique(y))

    def fit(self, X, y):
        self.classes_mean = {}
        self.classes_variance = {}
        self.classes_prior = {}
        # calculate the mean, variance and prior of each class
        for c in range(self.num_classes):
            X_c = X[y == c]
            self.classes_mean[str(c)] = np.mean(X_c, axis=0)
            self.classes_variance[str(c)] = np.var(X_c, axis=0)
            self.classes_prior[str(c)] = X_c.shape[0] / X.shape[0]

    # predict using the Gaussian Naive Bayes formula
    def predict(self, X):
        probs = np.zeros((X.shape[0], self.num_classes))
        for c in range(self.num_classes):
            prior = self.classes_prior[str(c)]
            probs_c = multivariate_normal.pdf(X, mean=self.classes_mean[str(c)], cov=self.classes_variance[str(c)])
            probs[:, c] = probs_c * prior
        return np.argmax(probs, 1)
from NaiveBayesFromScratch import NaiveBayesFromScratch
naiveBayesFromScratch = NaiveBayesFromScratch(X_train.to_numpy(), y_train.to_numpy())
naiveBayesFromScratch.fit(X_train.to_numpy(), y_train.to_numpy())
y_nbScratch_pred = naiveBayesFromScratch.predict(X_test.to_numpy())
print(confusion_matrix(y_test, y_nbScratch_pred))
print(classification_report(y_test,y_nbScratch_pred))
new_data = {'Name': 'Naive Bayes', 'accuracy': accuracy_score(y_test,y_nbScratch_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
```
## Use SVM from sklearn
```
from sklearn.svm import SVC
svm = SVC()
svm.fit(X_train,y_train)
svm_pred = svm.predict(X_test)
print(confusion_matrix(y_test,svm_pred))
print(classification_report(y_test,svm_pred))
new_data = {'Name': 'SVM', 'accuracy': accuracy_score(y_test,svm_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
# Grid Search
param_grid = {'C': [0.1,1, 10, 100, 1000], 'gamma': [1,0.1,0.01,0.001,0.0001], 'kernel': ['rbf']}
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(),param_grid,refit=True,verbose=3)
grid.fit(X_train,y_train)
grid.best_params_
grid_pred = grid.predict(X_test)
```
The result is no better than the default one.
```
print(confusion_matrix(y_test,grid_pred))
print(classification_report(y_test,grid_pred))
```
## Use Decision Tree from sklearn
```
from sklearn.tree import DecisionTreeClassifier
# train the model and get predicted results for test set
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
dtre_pred = dtree.predict(X_test)
print(classification_report(y_test,dtre_pred))
print(confusion_matrix(y_test,dtre_pred))
new_data = {'Name': 'Decision Tree', 'accuracy': accuracy_score(y_test,dtre_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
```
## Use Random Forest from sklearn
```
from sklearn.ensemble import RandomForestClassifier
# train the model and get predicted results for test set
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
print(classification_report(y_test,rfc_pred))
print(confusion_matrix(y_test,rfc_pred))
new_data = {'Name': 'Random Forest', 'accuracy': accuracy_score(y_test,rfc_pred)}
comparison = pd.concat([comparison, pd.DataFrame([new_data])], ignore_index=True)
```
## Results comparison among methods
```
comparison
sns.barplot(data=comparison, x='accuracy', y='Name')
```
## Conclusion
In conclusion, the results of the methods implemented from scratch are quite similar to those from the sklearn library. Among the classification methods, SVM performs best with the highest accuracy.
# Oddstradamus
### Good odds and where to find them
### Introduction
In the long run, the bookmaker always wins. The aim of this project is to disprove exactly this. We are in the football sports-betting market and are trying to develop a strategy that is profitable in the long term and that makes the bookmaker leave the pitch as the loser. There are three aspects of this strategy that need to be optimised.
These are:
- the selection of suitable football matches
- the prediction of the corresponding outcome
- and the determination of the optimal stake per bet.
In order to achieve this goal, a data set is compiled containing data from almost 60,000 football matches from 22 different leagues. This data set is processed, evaluated and then used to develop the long-term strategy with the help of selected machine learning algorithms.
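For the third aspect, the stake per bet, a classical reference point is the Kelly criterion, which sizes the stake from the estimated win probability and the bookmaker's decimal odds. This is background context, not necessarily the staking rule developed later in the project:

```python
def kelly_fraction(p_win: float, decimal_odds: float) -> float:
    """Kelly stake as a fraction of the bankroll.

    p_win:        estimated probability that the bet wins
    decimal_odds: bookmaker decimal odds (payout per unit staked, stake included)
    Returns 0.0 when the bet has no positive edge.
    """
    b = decimal_odds - 1.0  # net odds received on a win
    q = 1.0 - p_win
    f = (b * p_win - q) / b
    return max(f, 0.0)

# A 55% win probability at decimal odds of 2.0 suggests staking 10% of the bankroll.
print(kelly_fraction(0.55, 2.0))
```

In practice, fractional Kelly (e.g. half the computed stake) is often used to reduce variance from estimation error in `p_win`.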
The data comes from the following source: [Data source](https://www.football-data.co.uk/downloadm.php)
### Merging the data
The first step is to read the data from 264 .csv files and combine them appropriately. Before the data set is saved, an additional column with the season of each match is created to ensure an unambiguous assignment.
```
# import packages
import glob
import os
import pandas as pd
# loading the individual datasets of the different seasons
file_type = 'csv'
separator = ','
df_20_21 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('20:21' + "/*."+file_type)],ignore_index=True)
df_19_20 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('19:20' + "/*."+file_type)],ignore_index=True)
df_18_19 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('18:19' + "/*."+file_type)],ignore_index=True)
df_17_18 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('17:18' + "/*."+file_type)],ignore_index=True)
df_16_17 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('16:17' + "/*."+file_type)],ignore_index=True)
df_15_16 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('15:16' + "/*."+file_type)],ignore_index=True)
df_14_15 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('14:15' + "/*."+file_type)],ignore_index=True)
df_13_14 = pd.concat([pd.read_csv(f, sep=separator) for f in glob.glob('13:14' + "/*."+file_type)],ignore_index=True)
# add a column of the season for clear assignment
df_20_21['Season'] = '20/21'
df_19_20['Season'] = '19/20'
df_18_19['Season'] = '18/19'
df_17_18['Season'] = '17/18'
df_16_17['Season'] = '16/17'
df_15_16['Season'] = '15/16'
df_14_15['Season'] = '14/15'
df_13_14['Season'] = '13/14'
# combining the individual datasets into one
dfs = [df_14_15, df_15_16, df_16_17, df_17_18, df_18_19, df_19_20, df_20_21]
results = pd.concat([df_13_14] + dfs, sort=False)
# saving the merged dataframe for processing
results.to_csv("Data/Results2013_2021.csv")
```
### Quick Overview
```
# output of the data shape
results.shape
```
In its initial state, the data set comprises almost 60000 rows and 133 columns. In addition to information on league affiliation, the season of the match and the team constellation, information on the final result is available in the form of the number of goals, shots, shots on target, corners, fouls and yellow and red cards for home and away teams. In addition, the dataset contains information on betting odds from a large number of bookmakers.
Because a large proportion of the columns are only sporadically filled, especially the betting odds, only those bookmakers whose odds are available for all of the roughly 60,000 matches were kept. This filtering alone reduced the data set from 133 to 31 columns.
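The filtering idea, keeping only columns that are filled for (almost) every match, can be sketched without pandas. The column names and threshold here are illustrative, not the exact rule applied to the real data:

```python
def well_filled_columns(table, min_fill=1.0):
    """Return names of columns whose fraction of non-missing values
    is at least min_fill. `table` maps column name -> list of values."""
    kept = []
    for name, values in table.items():
        filled = sum(v is not None for v in values)
        if filled / len(values) >= min_fill:
            kept.append(name)
    return kept

# Illustrative data: one bookmaker's odds are complete, another's are sparse.
table = {
    "B365H": [1.5, 2.0, 1.8, 2.2],
    "XYZH":  [1.6, None, None, 2.1],  # hypothetical sparsely-quoted bookmaker
}
print(well_filled_columns(table))     # only the complete column survives
```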
```
# selecting the necessary columns of the original data set
results = results[['Div', 'Season', 'HomeTeam','AwayTeam', 'FTHG', 'FTAG', 'FTR', 'HS', 'AS', 'HST', 'AST', 'HF', 'AF', 'HC',
'AC', 'HY', 'AY', 'HR', 'AR','B365H','B365D','B365A', 'BWH','BWD','BWA', 'IWH', 'IWD', 'IWA', 'WHH', 'WHD', 'WHA']]
results.shape
```
The remaining columns are briefly explained in the following table:
| Column | Description |
| - | - |
| `Div` | League Division |
| `Season` | Season in which the match took place |
| `HomeTeam` | Home Team |
| `AwayTeam` | Away Team |
| `FTHG` | Full Time Home Team Goals |
| `FTAG`| Full Time Away Team Goals |
| `FTR` | Full Time Result (H=Home Win, D=Draw, A=Away Win) |
| `HS` | Home Team Shots |
| `AS` | Away Team Shots |
| `HST` | Home Team Shots on Target |
| `AST` | Away Team Shots on Target |
| `HF` | Home Team Fouls Committed |
| `AF` | Away Team Fouls Committed |
| `HC` | Home Team Corners |
| `AC` | Away Team Corners |
| `HY` | Home Team Yellow Cards |
| `AY` | Away Team Yellow Cards |
| `HR`| Home Team Red Cards |
| `AR` | Away Team Red Cards |
| `B365H` | Bet365 Home Win Odds |
| `B365D` | Bet365 Draw Odds |
| `B365A` | Bet365 Away Win Odds |
| `BWH` | Bet&Win Home Win Odds |
| `BWD` | Bet&Win Draw Odds |
| `BWA` | Bet&Win Away Win Odds |
| `IWH` | Interwetten Home Win Odds |
| `IWD` | Interwetten Draw Odds |
| `IWA` | Interwetten Away Win Odds |
| `WHH` | William Hill Home Win Odds |
| `WHD` | William Hill Draw Odds |
| `WHA` | William Hill Away Win Odds |
Since one aspect of the objective is to use the data to predict football matches, it must be noted that, with the exception of the league, the season, the team constellation and the betting odds, all of this information only becomes known after the end of the match. Accordingly, the data in its present form cannot be used directly to predict the outcome of a match. In the [following notebook](https://github.com/mue94/oddstradamus/blob/main/02_Data_Processing.ipynb), the data is processed and transformed so that it can contribute to the prediction without data leakage.
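The distinction drawn above, features known before kick-off versus full-time statistics, can be made explicit in code. The column grouping below follows the table; it is a sketch rather than the processing notebook itself:

```python
# Columns available before kick-off vs. only after full time,
# based on the column table above.
PRE_MATCH = ['Div', 'Season', 'HomeTeam', 'AwayTeam',
             'B365H', 'B365D', 'B365A', 'BWH', 'BWD', 'BWA',
             'IWH', 'IWD', 'IWA', 'WHH', 'WHD', 'WHA']
POST_MATCH = ['FTHG', 'FTAG', 'HS', 'AS', 'HST', 'AST', 'HF', 'AF',
              'HC', 'AC', 'HY', 'AY', 'HR', 'AR']
TARGET = 'FTR'

def leakage_free(columns):
    """Keep only the columns a model may see at prediction time."""
    return [c for c in columns if c in PRE_MATCH]

all_cols = PRE_MATCH + POST_MATCH + [TARGET]
print(leakage_free(all_cols))  # post-match statistics are filtered out
```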
# Fast Bernoulli: Benchmark Python
In this notebook we will measure the performance of generating sequences of Bernoulli-distributed random variables in Python, with and without the LLVM JIT compiler. The baseline generator is built on top of the expression `random.uniform() < p`.
```
import numpy as np
import matplotlib.pyplot as plt
from random import random
from typing import List
from bernoulli import LLVMBernoulliGenerator, PyBernoulliGenerator
from tqdm import tqdm
```
## Benchmarking
As mentioned above, the baseline generator simply thresholds a uniformly distributed random variable.
```
class BaselineBernoulliGenerator:
    def __init__(self, probability: float, tolerance: float = float('nan'), seed: int = None):
        self.prob = probability

    def __call__(self, nobits: int = 32):
        return [int(random() <= self.prob) for _ in range(nobits)]
```
Here we define some routines for benchmarking.
```
def benchmark(cls, nobits_list: List[int], probs: List[float], tol: float = 1e-6) -> np.ndarray:
    timings = np.empty((len(probs), len(nobits_list)))
    with tqdm(total=timings.size, unit='bench') as progress:
        for i, prob in enumerate(probs):
            generator = cls(prob, tol)
            for j, nobits in enumerate(nobits_list):
                try:
                    timing = %timeit -q -o generator(nobits)
                    timings[i, j] = timing.average
                except Exception as e:
                    # Here we catch the case when the number of bits is not enough
                    # to obtain the desired precision.
                    timings[i, j] = float('nan')
                progress.update()
    return timings
```
The proposed Bernoulli generator has two parameters. The first one is well-known that is probability of success $p$. The second one is precision of quantization.
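A sketch of what "precision of quantization" can mean here, assuming the generator approximates $p$ by a k-bit binary expansion. This is an assumption about the library's internals, illustrated with stdlib Python only:

```python
def quantize(p: float, nobits: int) -> float:
    """Round p to the nearest multiple of 2**-nobits, i.e. keep
    nobits binary digits of its expansion (assumed behaviour)."""
    scale = 1 << nobits
    return round(p * scale) / scale

# The quantization error is bounded by 2**-(nobits + 1).
for k in (2, 4, 8):
    q = quantize(0.3, k)
    print(k, q, abs(q - 0.3))
```

Probabilities of the form $1/2^n$, as used in `PROBAS` below, are exactly representable and incur no quantization error at sufficient bit width.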
```
NOBITS = [1, 2, 4, 8, 16, 32]
PROBAS = [1 / 2 ** n for n in range(1, 8)]
```
Now, start benchmarking!
```
baseline = benchmark(BaselineBernoulliGenerator, NOBITS, PROBAS)
py = benchmark(PyBernoulliGenerator, NOBITS, PROBAS)
llvm = benchmark(LLVMBernoulliGenerator, NOBITS, PROBAS)
```
Multiplication by a factor of $10^6$ corresponds to changing the units from seconds to microseconds.
```
baseline *= 1e6
py *= 1e6
llvm *= 1e6
```
Save the timings for later use.
```
np.save('../data/benchmark-data-baseline.npy', baseline)
np.save('../data/benchmark-data-py.npy', py)
np.save('../data/benchmark-data-llvm.npy', llvm)
```
## Visualization
The figures below depict how the timings (or bitrate) depend on the algorithm parameters.
```
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(1, 1, 1)
ax.grid()
for i, proba in enumerate(PROBAS):
    ax.loglog(NOBITS, baseline[i, :], '-x', label=f'baseline p={proba}')
    ax.loglog(NOBITS, py[i, :], '-+', label=f'python p={proba}')
    ax.loglog(NOBITS, llvm[i, :], '-o', label=f'llvm p={proba}')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel('Sequence length, bit')
ax.set_ylabel('Click time, $\mu s$')
plt.show()
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(1, 1, 1)
ax.grid()
for i, proba in enumerate(PROBAS):
    ax.loglog(NOBITS, NOBITS / baseline[i, :], '-x', label=f'baseline p={proba}')
    ax.loglog(NOBITS, NOBITS / py[i, :], '-+', label=f'python p={proba}')
    ax.loglog(NOBITS, NOBITS / llvm[i, :], '-o', label=f'llvm p={proba}')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel('Sequence length, bit')
ax.set_ylabel('Bit rate, Mbit per s')
plt.show()
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(1, 1, 1)
ax.grid()
for j, nobits in enumerate(NOBITS):
    ax.loglog(PROBAS, nobits / baseline[:, j], '-x', label=f'baseline block={nobits}')
    ax.loglog(PROBAS, nobits / py[:, j], '-+', label=f'python block={nobits}')
    ax.loglog(PROBAS, nobits / llvm[:, j], '-o', label=f'llvm block={nobits}')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel('Bernoulli parameter')
ax.set_ylabel('Bitrate, Mbit / sec')
plt.show()
```
## Comments and Discussions
In the figures above one can see that the direct implementation of the algorithm does not improve the bitrate. This holds for the implementation with LLVM as well as without it, which means the overhead is too large.
Nevertheless, the third figure is worth noting: the bitrate does not scale well for the baseline generator. The baseline bitrate drops dramatically, while the bitrates of the others decrease much less.
Benchmarking like this has unaccounted-for effects, such as the different implementation levels (IR and Python), the expansion of a bit block into a list of bits, and the overhead of the Python object system.
# Proyecto 3.
- Carlos González Mendoza
- Raul Enrique González Paz
- Juan Andres Serrano Rivera
For this project we will look at the adjusted closing prices of **ADIDAS**, **NIKE** and **UNDER ARMOUR**, since they are among the largest sportswear companies in the world. Moreover, all three are companies with a major impact worldwide.
<img style="float: right; margin: 0px 0px 10px 10px;" src="http://content.nike.com/content/dam/one-nike/globalAssets/social_media_images/nike_swoosh_logo_black.png" width="300px" height="125px" />
<img style="float: right; margin: 0px 0px 10px 10px;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Adidas_Logo.svg/2000px-Adidas_Logo.svg.png" width="300px" height="125px" />
<img style="float: right; margin: 0px 0px 10px 10px;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Under_armour_logo.svg/2000px-Under_armour_logo.svg.png" width="300px" height="125px" />
```
# First we import all the libraries we will use.
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Then we create the function with which we will download the data
def get_closes(tickers, start_date=None, end_date=None, freq='d'):
    # By default the start date is January 1st, 2010 and the end date is today.
    # The default sampling frequency is 'd' (daily).
    # Create an empty DataFrame of prices, indexed by the dates we need.
    closes = pd.DataFrame(columns = tickers, index=web.YahooDailyReader(symbols=tickers[0], start=start_date, end=end_date, interval=freq).read().index)
    # Once the DataFrame is created, add each price series with YahooDailyReader
    for ticker in tickers:
        df = web.YahooDailyReader(symbols=ticker, start=start_date, end=end_date, interval=freq).read()
        closes[ticker] = df['Adj Close']
    closes.index_name = 'Date'
    closes = closes.sort_index()
    return closes
# Tickers to download (Nike, Adidas, Under Armour)
names = ['NKE', 'ADDYY', 'UAA']
# Dates: start of 2011 to end of 2017
start, end = '2011-01-01', '2017-12-31'
```
- With the above in place, we can obtain the adjusted prices of **ADIDAS**, **NIKE** and **UNDER ARMOUR**.
```
closes = get_closes(names, start, end)
closes
closes.plot(figsize=(15,10))
closes.describe()
```
- We obtain the daily returns.
```
closes.shift()
```
- The daily returns are then calculated as follows.
```
rend = ((closes-closes.shift())/closes.shift()).dropna()
rend
```
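The expression `(closes - closes.shift()) / closes.shift()` is the simple (arithmetic) return. A minimal check with plain Python, independent of the actual price data:

```python
def simple_returns(prices):
    """Daily simple returns: r_t = (p_t - p_{t-1}) / p_{t-1}.
    The first observation has no predecessor and is dropped,
    mirroring the .dropna() above."""
    return [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

prices = [100.0, 110.0, 99.0]  # illustrative prices, not real data
r = simple_returns(prices)
print(r)  # approximately [0.1, -0.1]
```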
- And the returns are plotted as follows.
```
rend.plot(figsize=(15,10))
```
- We compute the mean and standard deviation of the daily returns for each company.
```
mu_NKE, mu_ADDYY, mu_UAA = rend.mean().NKE, rend.mean().ADDYY, rend.mean().UAA
s_NKE, s_ADDYY, s_UAA = rend.std().NKE, rend.std().ADDYY, rend.std().UAA
```
- We simulate 10,000 scenarios of daily returns for 2018 for the three sportswear companies.
```
def rend_sim(mu, sigma, ndays, nscen, start_date):
    dates = pd.date_range(start=start_date, periods=ndays)
    return pd.DataFrame(data = sigma*np.random.randn(ndays, nscen)+mu, index = dates)
simrend_NKE = rend_sim(mu_NKE, s_NKE, 252, 10000, '2018-01-01')
simrend_ADDYY = rend_sim(mu_ADDYY, s_ADDYY, 252, 10000, '2018-01-01')
simrend_UAA = rend_sim(mu_UAA, s_UAA, 252, 10000, '2018-01-01')
simcloses_NKE = closes.iloc[-1].NKE*((1+simrend_NKE).cumprod())
simcloses_ADDYY = closes.iloc[-1].ADDYY*((1+simrend_ADDYY).cumprod())
simcloses_UAA = closes.iloc[-1].UAA*((1+simrend_UAA).cumprod())
```
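The simulation pattern above, normal daily returns compounded with `cumprod`, can be sketched with NumPy alone. The mean, volatility, and last price below are made-up stand-ins for the estimated values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in values for the estimated daily mean/volatility and the last close
mu, sigma = 0.0005, 0.02
ndays, nscen = 252, 1000
last_price = 100.0

# Each column is one scenario: draw normal daily returns, then compound them
sim_rend = sigma * rng.standard_normal((ndays, nscen)) + mu
sim_prices = last_price * np.cumprod(1 + sim_rend, axis=0)

print(sim_prices.shape)
```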
- We estimate the probability that each price rises more than 5% over the next year.
```
K_NKE = (1+0.05)*closes.iloc[-1].NKE
prob_NKE = pd.DataFrame((simcloses_NKE>K_NKE).sum(axis=1)/10000)
prob_NKE.plot(figsize=(10,6), grid=True, color = 'r');
K_ADDYY = (1+0.05)*closes.iloc[-1].ADDYY
prob_ADDYY = pd.DataFrame((simcloses_ADDYY>K_ADDYY).sum(axis=1)/10000)
prob_ADDYY.plot(figsize=(10,6), grid=True, color = 'g');
K_UAA = (1+0.05)*closes.iloc[-1].UAA
prob_UAA = pd.DataFrame((simcloses_UAA>K_UAA).sum(axis=1)/10000)
prob_UAA.plot(figsize=(10,6), grid=True);
```
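The probability estimate works the same way on any simulated price matrix: for each date, count the scenarios above the threshold K and divide by the number of scenarios. A self-contained sketch on synthetic prices:

```python
import numpy as np

rng = np.random.default_rng(1)
# 5 dates x 1000 scenarios of synthetic prices around 100
sim_prices = 100 * np.cumprod(1 + 0.02 * rng.standard_normal((5, 1000)), axis=0)

K = 100.0  # hypothetical target price
prob = (sim_prices > K).sum(axis=1) / sim_prices.shape[1]
print(prob.shape)
```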
# Install dependencies
```
!pip install pretrainedmodels
!pip install albumentations==0.4.5
!pip install transformers
# install dependencies for TPU
#!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
#!python pytorch-xla-env-setup.py --apt-packages libomp5 libopenblas-dev
```
# Download data
```
# https://drive.google.com/file/d/1jfkX_NXF8shxyWZCxJkzsLPDr4ebvdOP/view?usp=sharing
!pip install gdown
!gdown https://drive.google.com/uc?id=1jfkX_NXF8shxyWZCxJkzsLPDr4ebvdOP
!unzip -q plant-pathology-2020-fgvc7.zip -d /content/plant-pathology-2020-fgvc7
!rm plant-pathology-2020-fgvc7.zip
```
# Import libraries
```
# Import os
import os
# Import libraries for data manipulation
import numpy as np
import pandas as pd
# Import libraries for data augmentation: albumentations
import albumentations as A
from albumentations.pytorch.transforms import ToTensor
from albumentations import Rotate
import cv2 as cv
# Import Pytorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torch.utils.data import Dataset, DataLoader
# Import pretrainedmodels
import pretrainedmodels
# Import transformers
from transformers import get_cosine_schedule_with_warmup
from transformers import AdamW
# Import metrics for model evaluation
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
# Import libraries for data visualization
import matplotlib.pyplot as plt
# Import tqdm.notebook for loading visualization
from tqdm.notebook import tqdm
# Ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Import for TPU configuration
#import torch_xla
#import torch_xla.core.xla_model as xm
```
# Settings
```
# Configuration path
# Data folder
IMAGES_PATH = '/content/plant-pathology-2020-fgvc7/images/'
# Sample submission csv
SAMPLE_SUBMISSION = '/content/plant-pathology-2020-fgvc7/sample_submission.csv'
# Train, test data path
TRAIN_DATA = '/content/plant-pathology-2020-fgvc7/train.csv'
TEST_DATA = '/content/plant-pathology-2020-fgvc7/test.csv'
# Configuration for training workflow
SEED = 1234
N_FOLDS = 5
N_EPOCHS = 20
BATCH_SIZE = 2
SIZE = 512
IMG_SHAPE = (1365, 2048, 3)
lr = 8e-4
submission_df = pd.read_csv(SAMPLE_SUBMISSION)
df_train = pd.read_csv(TRAIN_DATA)
df_test = pd.read_csv(TEST_DATA)
def get_image_path(filename):
return (IMAGES_PATH + filename + '.jpg')
#df_train['image_path'] = df_train['image_id'].apply(get_image_path)
#df_test['image_path'] = df_test['image_id'].apply(get_image_path)
#train_labels = df_train.loc[:, 'healthy':'scab']
#train_paths = df_train.image_path
#test_paths = df_test.image_path
df_train.head()
df_test.head()
submission_df.head()
# for GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
# for TPU
#device = xm.xla_device()
#torch.set_default_tensor_type('torch.FloatTensor')
#device
```
# Define Dataset
```
class PlantDataset(Dataset):
def __init__(self, df, transforms=None):
self.df = df
self.transforms=transforms
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
# Solution 01: Read from raw image
image_src = IMAGES_PATH + self.df.loc[idx, 'image_id'] + '.jpg'
# Solution 02: Read from npy file, we convert all images in images folder from .jpg to .npy
# image_src = np.load(IMAGES_PATH + self.df.loc[idx, 'image_id'] + '.npy')
# print(image_src)
image = cv.imread(image_src, cv.IMREAD_COLOR)
if image.shape != IMG_SHAPE:
image = image.transpose(1, 0, 2)
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
labels = self.df.loc[idx, ['healthy', 'multiple_diseases', 'rust', 'scab']].values
labels = torch.from_numpy(labels.astype(np.int8))
labels = labels.unsqueeze(-1)
if self.transforms:
transformed = self.transforms(image=image)
image = transformed['image']
return image, labels
```
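A PyTorch `Dataset` only has to implement `__len__` and `__getitem__`. This dependency-free sketch mirrors the shape of `PlantDataset` on hypothetical `(image_id, label)` records, with a string standing in for the decoded image:

```python
# Minimal Dataset-style class: __len__ and __getitem__ are the whole protocol.
class ToyDataset:
    def __init__(self, records, transforms=None):
        self.records = records
        self.transforms = transforms

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        image_id, label = self.records[idx]
        image = f"pixels-of-{image_id}"   # stand-in for cv.imread + color conversion
        if self.transforms:
            image = self.transforms(image)
        return image, label

ds = ToyDataset([("img_0", 0), ("img_1", 2)], transforms=str.upper)
print(len(ds), ds[1])
```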
# Data Augmentation
```
# Train transformation
transforms_train = A.Compose([
A.RandomResizedCrop(height=SIZE, width=SIZE, p=1.0),
A.OneOf([A.RandomBrightness(limit=0.1, p=1), A.RandomContrast(limit=0.1, p=1)]),
A.OneOf([A.MotionBlur(blur_limit=3), A.MedianBlur(blur_limit=3), A.GaussianBlur(blur_limit=3)], p=0.5),
A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(
shift_limit=0.2,
scale_limit=0.2,
rotate_limit=20,
interpolation=cv.INTER_LINEAR,
border_mode=cv.BORDER_REFLECT_101,
p=1,
),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0, p=1.0),
A.pytorch.ToTensorV2(p=1.0),
], p=1.0)
# Validation transformation
transforms_valid = A.Compose([
A.Resize(height=SIZE, width=SIZE, p=1.0),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0, p=1.0),
A.pytorch.ToTensorV2(p=1.0),
])
```
# StratifiedKFold
```
# Get label from df_train
train_labels = df_train.iloc[:, 1:].values
train_y = train_labels[:, 2] + train_labels[:, 3] * 2 + train_labels[:, 1] * 3
folds = StratifiedKFold(n_splits=N_FOLDS, shuffle=True, random_state=SEED)
oof_preds = np.zeros((df_train.shape[0], 4))
```
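`StratifiedKFold` needs a single integer class per row, which is why the one-hot columns are collapsed with the weighted sum above. A quick check of that mapping on one row per class:

```python
import numpy as np

# One-hot rows in column order: healthy, multiple_diseases, rust, scab
train_labels = np.array([
    [1, 0, 0, 0],   # healthy
    [0, 0, 1, 0],   # rust
    [0, 0, 0, 1],   # scab
    [0, 1, 0, 0],   # multiple_diseases
])

# Collapse one-hot labels to integers: healthy->0, rust->1, scab->2, multiple->3
train_y = train_labels[:, 2] + train_labels[:, 3] * 2 + train_labels[:, 1] * 3
print(train_y.tolist())  # -> [0, 1, 2, 3]
```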
# PretrainedModels
## Define cross entropy loss one hot
```
# define cross entropy loss one hot
class CrossEntropyLossOneHot(nn.Module):
def __init__(self):
super(CrossEntropyLossOneHot, self).__init__()
self.log_softmax = nn.LogSoftmax(dim=-1)
def forward(self, preds, labels):
return torch.mean(torch.sum(-labels * self.log_softmax(preds), -1))
```
## Define dense cross entropy
```
# define dense cross entropy
class DenseCrossEntropy(nn.Module):
def __init__(self):
super(DenseCrossEntropy, self).__init__()
def forward(self, logits, labels):
logits = logits.float()
labels = labels.float()
logprobs = F.log_softmax(logits, dim=-1)
loss = -labels * logprobs
loss = loss.sum(-1)
return loss.mean()
```
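`DenseCrossEntropy` is the soft-label cross entropy: the batch mean of $-\sum_k y_k \log \operatorname{softmax}(z)_k$. A NumPy sketch of the same formula (stand-alone, not the PyTorch module itself):

```python
import numpy as np

def dense_cross_entropy(logits, labels):
    # Stable log-softmax, then the soft-label cross entropy averaged over the batch
    z = logits - logits.max(axis=-1, keepdims=True)
    logprobs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float((-labels * logprobs).sum(axis=-1).mean())

logits = np.array([[2.0, 0.0, 0.0, 0.0]])
labels = np.array([[1.0, 0.0, 0.0, 0.0]])
loss = dense_cross_entropy(logits, labels)
print(round(loss, 4))
```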
## Define plant model with ResNet34
```
# define plant model with ResNet
class PlantModel(nn.Module):
# define init function
def __init__(self, num_classes=4):
super().__init__()
self.backbone = torchvision.models.resnet34(pretrained=True)
in_features = self.backbone.fc.in_features
self.logit = nn.Linear(in_features, num_classes)
# define forward function
def forward(self, x):
batch_size, C, H, W = x.shape
x = self.backbone.conv1(x)
x = self.backbone.bn1(x)
x = self.backbone.relu(x)
x = self.backbone.maxpool(x)
x = self.backbone.layer1(x)
x = self.backbone.layer2(x)
x = self.backbone.layer3(x)
x = self.backbone.layer4(x)
x = F.adaptive_avg_pool2d(x,1).reshape(batch_size,-1)
x = F.dropout(x, 0.25, self.training)
x = self.logit(x)
return x
```
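The `adaptive_avg_pool2d(x, 1)` step in the forward pass is global average pooling: each feature map is reduced to its spatial mean, leaving one value per channel. In NumPy terms (shapes are illustrative):

```python
import numpy as np

# A fake backbone output: batch of 2, 512 channels, 16x16 spatial maps
batch, C, H, W = 2, 512, 16, 16
x = np.arange(batch * C * H * W, dtype=float).reshape(batch, C, H, W)

# Global average pooling: mean over H and W, then flatten to (batch, C)
pooled = x.mean(axis=(2, 3)).reshape(batch, -1)
print(pooled.shape)
```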
# Train for StratifiedKFold
```
def train_one_fold(i_fold, model, criterion, optimizer, lr_scheduler, dataloader_train, dataloader_valid):
train_fold_results = []
for epoch in range(N_EPOCHS):
# print information
print(' Epoch {}/{}'.format(epoch + 1, N_EPOCHS))
print(' ' + ('-' * 20))
os.system(f'echo \" Epoch {epoch}\"')
# call model
model.train()
tr_loss = 0
# looping
for step, batch in enumerate(dataloader_train):
# data preparation
images = batch[0].to(device)
labels = batch[1].to(device)
# forward pass and calculate loss
outputs = model(images)
loss = criterion(outputs, labels.squeeze(-1))
# backward pass
loss.backward()
tr_loss += loss.item()
# updates
# for TPU
#xm.optimizer_step(optimizer, barrier=True)
# for GPU
optimizer.step()
# empty gradient
optimizer.zero_grad()
# Validate
model.eval()
# init validation loss, predicted and labels
val_loss = 0
val_preds = None
val_labels = None
for step, batch in enumerate(dataloader_valid):
# data preparation
images = batch[0].to(device)
labels = batch[1].to(device)
# labels preparation
if val_labels is None:
val_labels = labels.clone().squeeze(-1)
else:
val_labels = torch.cat((val_labels, labels.squeeze(-1)), dim=0)
# disable torch grad to calculating normally
with torch.no_grad():
# calculate the output
outputs = model(batch[0].to(device))
# calculate the loss value
loss = criterion(outputs, labels.squeeze(-1))
val_loss += loss.item()
# predict with softmax activation function
preds = torch.softmax(outputs, dim=1).data.cpu()
#preds = torch.softmax(outputs, dim=1).detach().cpu().numpy()
if val_preds is None:
val_preds = preds
else:
val_preds = torch.cat((val_preds, preds), dim=0)
# if train mode
lr_scheduler.step(tr_loss)
with torch.no_grad():
train_loss = tr_loss / len(dataloader_train)
valid_loss = val_loss / len(dataloader_valid)
valid_score = roc_auc_score(val_labels.view(-1).cpu(), val_preds.view(-1).cpu(), average='macro')
# print information
if epoch % 2 == 0:
            print(f'Fold {i_fold} Epoch {epoch}: train_loss={train_loss:.4f}, valid_loss={valid_loss:.4f}, auc={valid_score:.4f}')
train_fold_results.append({
'fold': i_fold,
'epoch': epoch,
'train_loss': train_loss,
'valid_loss': valid_loss,
'valid_score': valid_score,
})
return val_preds, train_fold_results
```
## Prepare submission file
```
submission_df.iloc[:, 1:] = 0
```
## Dataset test
```
dataset_test = PlantDataset(df=submission_df, transforms=transforms_valid)
dataloader_test = DataLoader(dataset_test, batch_size=BATCH_SIZE, num_workers=4, shuffle=False)
```
# Init model: EfficientNetB5
```
# EfficientNetB5
# B5 is the largest EfficientNet variant that fits in GPU memory with batch size 8.
!pip install efficientnet_pytorch
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b5')
num_ftrs = model._fc.in_features
model._fc = nn.Sequential(nn.Linear(num_ftrs,1000,bias=True),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(1000,4, bias = True))
model.to(device)
model
```
# Init model: ResNet
```
"""
# Download pretrained weights.
# model = PlantModel(num_classes=4)
model = torchvision.models.resnet18(pretrained=True)
# print number of features
num_features = model.fc.in_features
print(num_features)
# custome layers
model.fc = nn.Sequential(
nn.Linear(num_features, 512),
nn.ReLU(),
nn.BatchNorm1d(512),
nn.Dropout(0.5),
nn.Linear(512, 256),
nn.ReLU(),
nn.BatchNorm1d(256),
nn.Dropout(0.5),
nn.Linear(256, 4))
# initialize weights function
def init_weights(m):
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
# apply model with init weights
model.apply(init_weights)
# transfer model to device (cuda:0 mean using GPU, xla mean using TPU, otherwise using CPU)
model = model.to(device)
# Model details
model
"""
print(torch.cuda.memory_summary(device=None, abbreviated=False))
```
# Training model
```
submissions = None
train_results = []
for i_fold, (train_idx, valid_idx) in enumerate(folds.split(df_train, train_y)):
# data preparation phase
print("Fold {}/{}".format(i_fold + 1, N_FOLDS))
valid = df_train.iloc[valid_idx]
valid.reset_index(drop=True, inplace=True)
train = df_train.iloc[train_idx]
train.reset_index(drop=True, inplace=True)
# data transformation phase
dataset_train = PlantDataset(df=train, transforms=transforms_train)
dataset_valid = PlantDataset(df=valid, transforms=transforms_valid)
# data loader phase
dataloader_train = DataLoader(dataset_train, batch_size=BATCH_SIZE, num_workers=4, shuffle=True, pin_memory=True, drop_last=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=BATCH_SIZE, num_workers=4, shuffle=False, pin_memory=True, drop_last=False)
# device = torch.device("cuda:0")
model = model.to(device)
# optimization phase
criterion = DenseCrossEntropy()
# optimizer = optim.Adam(model.parameters(), lr=0.001)
optimizer = AdamW(model.parameters(), lr = lr, weight_decay = 1e-3)
# lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[int(N_EPOCHS * 0.5), int(N_EPOCHS * 0.75)], gamma=0.1, last_epoch=-1)
num_train_steps = int(len(dataset_train) / BATCH_SIZE * N_EPOCHS)
    lr_scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=int(len(dataset_train)/BATCH_SIZE*5), num_training_steps=num_train_steps)
# training in one fold
val_preds, train_fold_results = train_one_fold(i_fold, model, criterion, optimizer, lr_scheduler, dataloader_train, dataloader_valid)
oof_preds[valid_idx, :] = val_preds
# calculate the results phase
train_results = train_results + train_fold_results
# model evaluation phase
model.eval()
test_preds = None
# looping test dataloader
for step, batch in enumerate(dataloader_test):
images = batch[0].to(device, dtype=torch.float)
# empty torch gradient
with torch.no_grad():
outputs = model(images)
if test_preds is None:
test_preds = outputs.data.cpu()
else:
test_preds = torch.cat((test_preds, outputs.data.cpu()), dim=0)
# Save predictions per fold
submission_df[['healthy', 'multiple_diseases', 'rust', 'scab']] = torch.softmax(test_preds, dim=1)
submission_df.to_csv('submission_fold_{}.csv'.format(i_fold), index=False)
# logits avg
if submissions is None:
submissions = test_preds / N_FOLDS
else:
submissions += test_preds / N_FOLDS
print("5-Folds CV score: {:.4f}".format(roc_auc_score(train_labels, oof_preds, average='macro')))
torch.save(model.state_dict(), '5-folds_rnn34.pth')
```
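Adding `test_preds / N_FOLDS` once per fold is just an incremental mean of the fold predictions. A quick sanity check with hypothetical constant per-fold predictions:

```python
import numpy as np

N_FOLDS = 5
# Hypothetical per-fold test predictions (constant arrays for illustration)
fold_preds = [np.full((3, 4), float(fold)) for fold in range(N_FOLDS)]

submissions = None
for test_preds in fold_preds:
    if submissions is None:
        submissions = test_preds / N_FOLDS
    else:
        submissions += test_preds / N_FOLDS

print(float(submissions[0, 0]))  # -> 2.0, the mean of 0..4
```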
# Generate training results
```
train_results = pd.DataFrame(train_results)
train_results.head(10)
train_results.to_csv('train_result.csv')
```
# Plotting results
## Training loss
```
def show_training_loss(train_result):
plt.figure(figsize=(15,10))
plt.subplot(3,1,1)
train_loss = train_result['train_loss']
plt.plot(train_loss.index, train_loss, label = 'train_loss')
plt.legend()
val_loss = train_result['valid_loss']
plt.plot(val_loss.index, val_loss, label = 'val_loss')
plt.legend()
show_training_loss(train_results)
```
## Validation score
```
def show_valid_score(train_result):
plt.figure(figsize=(15,10))
plt.subplot(3,1,1)
valid_score = train_result['valid_score']
plt.plot(valid_score.index, valid_score, label = 'valid_score')
plt.legend()
show_valid_score(train_results)
submission_df[['healthy', 'multiple_diseases', 'rust', 'scab']] = torch.softmax(submissions, dim=1)
submission_df.to_csv('submission.csv', index=False)
submission_df.head()
```
# CS229: Problem Set 3
## Problem 1: A Simple Neural Network
**C. Combier**
This IPython notebook provides solutions to problem set 3 of Stanford's CS229 (Machine Learning, Fall 2017) graduate course, taught by Andrew Ng.
The problem set can be found here: [./ps3.pdf](ps3.pdf)
I chose to write the solutions to the coding questions in Python, whereas the Stanford class is taught with Matlab/Octave.
## Notation
- $x^i$ is the $i^{th}$ feature vector
- $y^i$ is the expected outcome for the $i^{th}$ training example
- $m$ is the number of training examples
- $n$ is the number of features
### Question 1.b)

It seems that a triangle can separate the data.
We can construct a weight matrix by using a combination of linear classifiers, where each side of the triangle represents a decision boundary.
Each side of the triangle can be represented by an equation of the form $w_0 +w_1 x_1 + w_2 x_2 = 0$. If we transform this equality into an inequality, then the output represents on which side of the decision boundary a given data point $(x_1,x_2)$ belongs. The intersection of the outputs for each of these decision boundaries tells us whether $(x_1,x_2)$ lies within the triangle, in which case we will classify it $0$, and if not as $1$.
The first weight matrix can be written as:
$$
W^{[1]} = \left ( \begin{array}{ccc}
-1 & 4 & 0 \\
-1 & 0 & 4 \\
4.5 & -1 & -1
\end{array} \right )
$$
The input vector is:
$$
X = (\begin{array}{ccc}
1 & x_1 & x_2
\end{array})^T
$$
- The first line of $W^{[1]}$ is the equation for the vertical side of the triangle, $x_1 = 0.25$
- The second line of $W^{[1]}$ is the equation for the horizontal side of the triangle, $x_2 = 0.25$
- The third line of $W^{[1]}$ is the equation for the oblique side of the triangle, $x_2 = -x_1 + 4.5$
Consequently, with the given activation function, if the training example given by ($x_1$, $x_2$) lies within the triangle, then:
$$
f(W^{[1]}X) = (\begin{array}{ccc}
1 & 1 & 1
\end{array})^T
$$
In all other cases, at least one element of the output vector $f(W^{[1]}X)$ is not equal to $1$.
We can use this observation to find weights for the output layer. We take the sum of the components of $f(W^{[1]}X)$ and compare it to 2.5 to check whether all elements are equal to $1$. This gives the weight matrix:
$$
W^{[2]} =(\begin{array}{cccc}
2.5 & -1 & -1 & -1
\end{array})
$$
The additional term 2.5 is the intercept. With this weight matrix, the output of the final layer will be $0$ if the training example is within the triangle, and $1$ if it is outside of the triangle.
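These weights are easy to verify numerically. Assuming the step activation $f(z) = 1$ if $z \geq 0$, else $0$ (as given in the problem statement), a point inside the triangle maps to $0$ and points outside map to $1$:

```python
import numpy as np

# Step activation assumed from the problem statement: f(z) = 1 if z >= 0, else 0
def f(z):
    return (np.asarray(z) >= 0).astype(float)

W1 = np.array([[-1.0, 4.0, 0.0],
               [-1.0, 0.0, 4.0],
               [ 4.5, -1.0, -1.0]])
W2 = np.array([2.5, -1.0, -1.0, -1.0])

def network(x1, x2):
    h = f(W1 @ np.array([1.0, x1, x2]))               # one hidden unit per triangle side
    return float(f(W2 @ np.concatenate(([1.0], h))))  # 0 inside, 1 outside

print(network(1.0, 1.0), network(0.0, 0.0))  # inside -> 0.0, outside -> 1.0
```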
### Question 1.c)
A linear activation function does not work, because the problem is not linearly separable, i.e. there is no hyperplane that perfectly separates the data.
## Dependencies
```
# Dependencies to Visualize the model
%matplotlib inline
from IPython.display import Image, SVG
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
# Filepaths, numpy, and Tensorflow
import os
import numpy as np
import tensorflow as tf
# Sklearn scaling
from sklearn.preprocessing import MinMaxScaler
```
### Keras Specific Dependencies
```
# Keras
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense
from tensorflow.keras.datasets import mnist
```
## Loading and Preprocessing our Data
### Load the MNIST Handwriting Dataset from Keras
```
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("Training Data Info")
print("Training Data Shape:", X_train.shape)
print("Training Data Labels Shape:", y_train.shape)
```
### Plot the first digit
```
# Plot the first image from the dataset
plt.imshow(X_train[0,:,:], cmap=plt.cm.Greys)
```
### Each Image is a 28x28 Pixel greyscale image with values from 0 to 255
```
# Our image is an array of pixels ranging from 0 to 255
X_train[0, :, :]
```
### For training a model, we want to flatten our data into rows of 1D image arrays
```
# We want to flatten our image of 28x28 pixels to a 1D array of 784 pixels
ndims = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], ndims)
X_test = X_test.reshape(X_test.shape[0], ndims)
print("Training Shape:", X_train.shape)
print("Testing Shape:", X_test.shape)
```
## Scaling and Normalization
We use Sklearn's MinMaxScaler to normalize our data between 0 and 1
```
# Next, we normalize our training data to be between 0 and 1
scaler = MinMaxScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Alternative way to normalize this dataset since we know that the max pixel value is 255
# X_train = X_train.astype("float32")
# X_test = X_test.astype("float32")
# X_train /= 255.0
# X_test /= 255.0
```
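Min-max scaling maps each feature into [0, 1] via (x − min) / (max − min), with min and max fitted on the training data only and then reused on the test data. The same transform by hand:

```python
import numpy as np

# Min-max scaling by hand: fit min/max on the training data, then map into [0, 1]
X_train = np.array([[0.0], [127.5], [255.0]])
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
X_scaled = (X_train - lo) / (hi - lo)
print(X_scaled.ravel().tolist())  # -> [0.0, 0.5, 1.0]
```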
## One-Hot Encoding
We need to one-hot encode our integer labels using the `to_categorical` helper function
```
# Our Training and Testing labels are integer encoded from 0 to 9
y_train[:20]
# We need to convert our target labels (expected values) to categorical data
num_classes = 10
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# Original label of `5` is one-hot encoded as `0000010000`
y_train[0]
```
## Building our Model
In this example, we are going to build a Deep Multi-Layer Perceptron model with 2 hidden layers.
## Our first step is to create an empty sequential model
```
# Create an empty sequential model
model = Sequential()
```
## Next, we add our first hidden layer
In the first hidden layer, we must also specify the dimension of our input layer. This will simply be the number of elements (pixels) in each image.
```
# Add the first layer where the input dimensions are the 784 pixel values
# We can also choose our activation function. `relu` is a common choice
# (100 units is an arbitrary choice for this hidden layer)
model.add(Dense(100, activation='relu', input_dim=784))
```
## We then add a second hidden layer with 100 densely connected nodes
A dense layer is when every node from the previous layer is connected to each node in the current layer.
```
# Add a second hidden layer with 100 nodes
model.add(Dense(100, activation='relu'))
```
## Our final output layer uses a `softmax` activation function for logistic regression.
We also need to specify the number of output classes. In this case, the number of digits that we wish to classify.
```
# Add our final output layer where the number of nodes
# corresponds to the number of y labels
model.add(Dense(num_classes, activation='softmax'))
```
## Model Summary
```
# We can summarize our model
model.summary()
```
## Compile and Train our Model
Now that we have our model architecture defined, we must compile the model using a loss function and optimizer. We can also specify additional training metrics such as accuracy.
```
# Compile the model with a categorical cross-entropy loss and the adam optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Finally, we train our model using our training data
Training consists of updating our weights using our optimizer and loss function. In this example, we choose 10 iterations (loops) of training that are called epochs.
We also choose to shuffle our training data and increase the detail printed out during each training cycle.
```
# Fit (train) the model for 10 epochs, shuffling the data and printing progress
model.fit(X_train, y_train, epochs=10, shuffle=True, verbose=2)
```
## Saving and Loading models
We can save our trained models using the HDF5 binary format with the extension `.h5`
```
# Save the model (the filename here is arbitrary)
model.save("mnist_mlp.h5")
# Load the model
model = keras.models.load_model("mnist_mlp.h5")
```
## Evaluating the Model
We use our testing data to validate our model. This is how we determine the validity of our model (i.e. the ability to predict new and previously unseen data points)
```
# Evaluate the model using the training data
model_loss, model_accuracy = model.evaluate(X_test, y_test, verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
## Making Predictions
We can use our trained model to make predictions using `model.predict`
```
# Grab just one data point to test with
test = np.expand_dims(X_train[0], axis=0)
test.shape
plt.imshow(scaler.inverse_transform(test).reshape(28, 28), cmap=plt.cm.Greys)
# Make a prediction. The result should be 0000010000000 for a 5
model.predict(test).round()
# Grab just one data point to test with
test = np.expand_dims(X_train[2], axis=0)
test.shape
plt.imshow(scaler.inverse_transform(test).reshape(28, 28), cmap=plt.cm.Greys)
# Make a prediction. The resulting class should match the digit
print(f"One-Hot-Encoded Prediction: {model.predict(test).round()}")
print(f"Predicted class: {model.predict_classes(test)}")
```
# Import a Custom Image
```
filepath = "../Images/test8.png"
# Import the image using the `load_img` function in keras preprocessing
from tensorflow.keras.preprocessing.image import load_img, img_to_array
image = load_img(filepath, target_size=(28, 28), color_mode="grayscale")
# Convert the image to a numpy array
img = img_to_array(image)
# Scale the image pixels by 255 (or use a scaler from sklearn here)
img = img / 255.0
# Flatten into a 1x28*28 array
img = img.reshape(1, 28 * 28)
plt.imshow(img.reshape(28, 28), cmap=plt.cm.Greys)
# Invert the pixel values to match the original data
img = 1 - img
# Make predictions
model.predict_classes(img)
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=8):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
left={"x":[],"y":[]}
right={"x":[],"y":[]}
midpoint=img.shape[1]//2
for line in lines:
for x1,y1,x2,y2 in line:
slope=((y2-y1)/(x2-x1))
if ( x1>midpoint and x2>midpoint):
right["x"].append(x1)
right["x"].append(x2)
right["y"].append(y1)
right["y"].append(y2)
#cv2.line(img, (x1, y1), (x2, y2), color, thickness)
elif( x1<midpoint and x2<midpoint):
left["x"].append(x1)
left["x"].append(x2)
left["y"].append(y1)
left["y"].append(y2)
#cv2.line(img, (x1, y1), (x2, y2), [0, 255, 0], thickness)
if(len(right["x"])>5 and len(right["y"])>5 ):
right_fit = np.polyfit(right["x"],right["y"], 1)
for i in range(len(right["x"])-1):
            x1=img.shape[1]
y1=int(right_fit[0]*x1+right_fit[1])
x2=right["x"][i+1]
y2=int(right_fit[0]*x2+right_fit[1])
cv2.line(img, (x1,y1), (x2, y2), color, thickness)
if(len(left["x"])>5 and len(left["y"])>5 ):
left_fit = np.polyfit(left["x"],left["y"], 1)
for i in range(len(left["x"])-1):
x1=0
y1=int(left_fit[0]*x1+left_fit[1])
x2=left["x"][i+1]
y2=int(left_fit[0]*x2+left_fit[1])
cv2.line(img, (x1,y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
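The blend formula in the `weighted_img()` docstring can be checked with a plain NumPy re-implementation of `cv2.addWeighted` (a sketch of the arithmetic, not the OpenCV implementation):

```python
# Illustrative re-implementation of the weighted blend used by weighted_img():
# initial_img * a + img * b + g, saturated to the uint8 range.
import numpy as np

def blend(initial_img, img, a=0.8, b=1.0, g=0.0):
    out = initial_img.astype(np.float64) * a + img.astype(np.float64) * b + g
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((2, 2, 3), 200, dtype=np.uint8)  # stand-in camera frame
lines = np.zeros((2, 2, 3), dtype=np.uint8)      # stand-in lane-line overlay
lines[0, 0] = [255, 0, 0]                        # one "lane line" pixel

blended = blend(frame, lines)
print(blended[0, 0])  # the line pixel saturates the red channel
```

Pixels without a line keep the dimmed frame value (200 × 0.8 = 160), while the line pixel clips at 255 instead of wrapping around, which is why the saturating add matters.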
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
test_dir="test_images/"
test_images=os.listdir(test_dir)
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
test_img_index=2
image = mpimg.imread(test_dir+test_images[test_img_index])
gray_img=grayscale(image)
blured=gaussian_blur(gray_img, 5)
canny_img=canny(blured, 70, 140)
imshape=canny_img.shape
vertices = np.array([[(110,imshape[0]),(950,imshape[0] ), (530, 320), (440,320)]], dtype=np.int32)
ROI_img=region_of_interest(canny_img,vertices)
lines=hough_lines(ROI_img, 1, np.pi/180, 1, 5, 1)
output=weighted_img(lines, image, α=0.8, β=1., γ=0.)  # lines first, per the weighted_img docstring
vertices = vertices.reshape((-1,1,2))
cv2.polylines(output,[vertices],True,(0,255,255))
# plt.imshow(ROI_img, cmap="gray")  # uncomment to inspect the masked edges instead
plt.imshow(output)
```
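The region-of-interest step in the pipeline above can be illustrated with a plain NumPy mask (rectangular here for simplicity; the notebook's `region_of_interest()` helper builds an arbitrary polygon mask with `cv2.fillPoly`):

```python
# Rectangular region-of-interest mask in plain NumPy: keep edge pixels
# inside the mask, zero out everything else.
import numpy as np

edges = np.full((6, 8), 255, dtype=np.uint8)  # pretend Canny output
mask = np.zeros_like(edges)
mask[3:, 2:6] = 255                           # keep a bottom-center block
roi = np.where(mask == 255, edges, 0)         # 3 rows x 4 cols survive

print(int(roi.sum() // 255))  # 12 surviving edge pixels
```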
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
gray_img=grayscale(image)
blured=gaussian_blur(gray_img, 5)
canny_img=canny(blured, 70, 140)
imshape=canny_img.shape
vertices = np.array([[(110,imshape[0]),(950,imshape[0] ), (530, 320), (440,320)]], dtype=np.int32)
ROI_img=region_of_interest(canny_img,vertices)
lines=hough_lines(ROI_img, 1, np.pi/180, 1, 5, 1)
    result=weighted_img(lines, image, α=0.8, β=1., γ=0.)  # lines first, per the weighted_img docstring
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# Summarizing Data
> What we have is a data glut.
>
> \- Vernor Vinge, Professor Emeritus of Mathematics, San Diego State University
## Applied Review
### Dictionaries
* The `dict` structure is used to represent **key-value pairs**
* Like a real dictionary, you look up a word (**key**) and get its definition (**value**)
* Below is an example:
```python
ethan = {
'first_name': 'Ethan',
'last_name': 'Swan',
'alma_mater': 'Notre Dame',
'employer': '84.51˚',
'zip_code': 45208
}
```
### DataFrame Structure
* We will start by importing the `flights` data set as a DataFrame:
```
import pandas as pd
flights_df = pd.read_csv('../data/flights.csv')
```
* Each DataFrame variable is a **Series** and can be accessed with bracket subsetting notation: `DataFrame['SeriesName']`
* The DataFrame has an **Index** that is visible on the far left side and can be used to slice the DataFrame
### Methods
* Methods are *operations* that are specific to Python classes
* These operations end in parentheses and *make something happen*
* An example of a method is `DataFrame.head()`
## General Model
### Window Operations
Yesterday we learned how to manipulate data across one or more variables within the row(s):

Note that we return the same number of elements that we started with. This is known as a **window function**, but you can also think of it as summarizing at the row-level.
We could achieve this result with the following code:
```python
DataFrame['A'] + DataFrame['B']
```
We subset the two Series and then add them together using the `+` operator to achieve the sum.
Note that we could also use some other operation on `DataFrame['B']` as long as it returns the same number of elements.
### Summary Operations
However, sometimes we want to work with data across rows within a variable -- that is, collapse the values of a column into a single summary rather than compute element by element.
<img src="images/aggregate-series.png" alt="aggregate-series.png" width="500" height="500">
Note that we return a single value representing some aggregation of the elements we started with. This is known as a **summary function**, but you can think of it as summarizing across rows.
This is what we are going to talk about next.
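A minimal side-by-side on a toy DataFrame (illustrative data, not the flights data) makes the window/summary distinction concrete:

```python
# Contrast a window operation (element-wise, same length as the input)
# with a summary operation (collapses a Series to one value).
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})

window = df['A'] + df['B']   # window: returns as many elements as it started with
summary = df['B'].sum()      # summary: returns a single value

print(len(window), summary)  # 3 60
```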
## Summarizing a Series
### Summary Methods
The easiest way to summarize a specific series is by using bracket subsetting notation and the built-in Series methods:
```
flights_df['distance'].sum()
```
Note that a *single value* was returned because this is a **summary operation** -- we are summing the `distance` variable across all rows.
There are other summary methods with a series:
```
flights_df['distance'].mean()
flights_df['distance'].median()
flights_df['distance'].mode()
```
All of the above methods work on quantitative variables, but we also have methods for character variables:
```
flights_df['carrier'].value_counts()
```
While the above isn't *technically* returning a single value, it's still a useful Series method summarizing our data.
<font class="your_turn">
Your Turn
</font>
1\. What is the difference between a window operation and a summary operation?
2\. Fill in the blanks in the following code to calculate the mean delay in departure:
```python
flights_df['_____']._____()
```
3\. Find the distinct number of carriers. Hint: look for a method to find the number of unique values in a Series.
### Describe Method
There is also a method `describe()` that provides a lot of this information -- this is especially useful in exploratory data analysis.
```
flights_df['distance'].describe()
```
Note that `describe()` will return different results depending on the `type` of the Series:
```
flights_df['carrier'].describe()
```
## Summarizing a DataFrame
The above methods and operations are nice, but sometimes we want to work with multiple variables rather than just one.
### Extending Summary Methods to DataFrames
Recall how we select variables from a DataFrame:
* Single-bracket subset notation
* Pass a list of quoted variable names into the list
```python
flights_df[['sched_dep_time', 'dep_time']]
```
We can use *the same summary methods from the Series on the DataFrame* to summarize data:
```
flights_df[['sched_dep_time', 'dep_time']].mean()
flights_df[['sched_dep_time', 'dep_time']].median()
```
<font class="question">
<strong>Question</strong>:<br><em>What is the class of <code>flights_df[['sched_dep_time', 'dep_time']].median()</code>?</em>
</font>
This returns a `pandas.core.series.Series` object -- the Index holds the variable names and the values are the summarized values.
### The Aggregation Method
While summary methods can be convenient, there are a few drawbacks to using them on DataFrames:
1. You have to lookup or remember the method names each time
2. You can only apply one summary method at a time
3. You have to apply the same summary method to all variables
4. A Series is returned rather than a DataFrame -- this makes it difficult to use the values in our analysis later
In order to get around these problems, the DataFrame has a powerful method `agg()`:
```
flights_df.agg({
'sched_dep_time': ['mean']
})
```
There are a few things to notice about the `agg()` method:
1. A `dict` is passed to the method with variable names as keys and a list of quoted summaries as values
2. *A DataFrame is returned* with variable names as variables and summaries as rows
We can extend this to multiple variables by adding elements to the `dict`:
```
flights_df.agg({
'sched_dep_time': ['mean'],
'dep_time': ['mean']
})
```
And because the values of the `dict` are lists, we can do additional aggregations at the same time:
```
flights_df.agg({
'sched_dep_time': ['mean', 'median'],
'dep_time': ['mean', 'min']
})
```
And notice that not all variables have to have the same list of summaries.
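Here is the same `agg()` pattern on a small toy DataFrame (illustrative data), showing that a DataFrame comes back and that summaries not requested for a variable are filled with `NaN`:

```python
# agg() with a dict: keys pick the variables, values list the summaries,
# and a DataFrame is returned (unlike the plain summary methods).
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [10, 20, 30, 40]})

result = df.agg({
    'x': ['mean', 'min'],
    'y': ['mean']
})

print(type(result).__name__)    # DataFrame
print(result.loc['mean', 'x'])  # 2.5
```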
<font class="your_turn">
Your Turn
</font>
1\. What class of object is returned by the below code?
```python
flights_df[['air_time', 'distance']].mean()
```
2\. What class of object is returned by the below code?
```python
flights_df.agg({
'air_time': ['mean'],
'distance': ['mean']
})
```
3\. Fill in the blanks in the below code to calculate the minimum and maximum distances traveled and the mean and median arrival delay:
```python
flights_df.agg({
'_____': ['min', '_____'],
'_____': ['_____', 'median']
})
```
### Describe Method
While `agg()` is a powerful method, the `describe()` method -- similar to the Series `describe()` method -- is a great choice during exploratory data analysis:
```
flights_df.describe()
```
<font class="question">
<strong>Question</strong>:<br><em>What is missing from the above result?</em>
</font>
The string variables are missing!
We can make `describe()` compute on all variable types using the `include` parameter and passing a list of data types to include:
```
flights_df.describe(include = ['int', 'float', 'object'])
```
## Questions
Are there any questions before we move on?
```
import csv
import datetime
import json
import matplotlib.pyplot as plt
import numpy as np
import os
```
## Constants
```
LOGDIR = '../trace-data'
DATE_FORMAT_STR = '%Y-%m-%d %H:%M:%S'
MINUTES_PER_DAY = (24 * 60)
MICROSECONDS_PER_MINUTE = (60 * 1000 * 1000)
```
## Utility code
```
def parse_date(date_str):
"""Parses a date string and returns a datetime object if possible.
Args:
date_str: A string representing a date.
Returns:
A datetime object if the input string could be successfully
parsed, None otherwise.
"""
if date_str is None or date_str == '' or date_str == 'None':
return None
return datetime.datetime.strptime(date_str, DATE_FORMAT_STR)
def timedelta_to_minutes(timedelta):
"""Converts a datetime timedelta object to minutes.
Args:
timedelta: The timedelta to convert.
Returns:
The number of minutes captured in the timedelta.
"""
minutes = 0.0
minutes += timedelta.days * MINUTES_PER_DAY
minutes += timedelta.seconds / 60.0
minutes += timedelta.microseconds / MICROSECONDS_PER_MINUTE
return minutes
def round_to_nearest_minute(t):
"""Rounds a datetime object down to the nearest minute.
Args:
t: A datetime object.
Returns:
A new rounded down datetime object.
"""
return t - datetime.timedelta(seconds=t.second, microseconds=t.microsecond)
def add_minute(t):
"""Adds a single minute to a datetime object.
Args:
t: A datetime object.
Returns:
A new datetime object with an additional minute.
"""
return t + datetime.timedelta(seconds=60)
def get_cdf(data):
"""Returns the CDF of the given data.
Args:
data: A list of numerical values.
Returns:
        A pair of lists (x, y) for plotting the CDF.
"""
sorted_data = sorted(data)
p = 100. * np.arange(len(sorted_data)) / (len(sorted_data) - 1)
return sorted_data, p
class Job:
"""Encapsulates a job."""
def __init__(self, status, vc, jobid, attempts, submitted_time, user):
"""Records job parameters and computes key metrics.
Stores the passed in arguments as well as the number of GPUs
requested by the job. In addition, computes the queueing delay
as defined as the delta between the submission time and the start
time of the first attempt. Finally, computes run time as defined
as the delta between the initial attempt's start time and the last
attempt's finish time.
NOTE: Some jobs do not have any recorded attempts, and some attempts
have missing start and/or end times. A job's latest attempt having no
end time indicates that the job was still running when the log data
was collected.
Args:
status: One of 'Pass', 'Killed', 'Failed'.
vc: The hash of the virtual cluster id the job was run in.
jobid: The hash of the job id.
attempts: A list of dicts, where each dict contains the following keys:
'start_time': The start time of the attempt.
'end_time': The end time of the attempt.
'detail': A list of nested dicts where each dict contains
the following keys:
'ip': The server id.
'gpus': A list of the GPU ids allotted for this attempt.
submitted_time: The time the job was submitted to the queue.
user: The user's id.
"""
self._status = status
self._vc = vc
self._jobid = jobid
for attempt in attempts:
attempt['start_time'] = parse_date(attempt['start_time'])
attempt['end_time'] = parse_date(attempt['end_time'])
self._attempts = attempts
self._submitted_time = parse_date(submitted_time)
self._user = user
if len(self._attempts) == 0:
self._num_gpus = None
self._run_time = None
self._queueing_delay = None
else:
self._num_gpus = sum([len(detail['gpus']) for detail in self._attempts[0]['detail']])
if self._attempts[0]['start_time'] is None:
self._run_time = None
self._queueing_delay = None
else:
if self._attempts[-1]['end_time'] is None:
self._run_time = None
else:
self._run_time = \
timedelta_to_minutes(self._attempts[-1]['end_time'] -
self._attempts[0]['start_time'])
self._queueing_delay = \
timedelta_to_minutes(self._attempts[0]['start_time'] -
self._submitted_time)
@property
def status(self):
return self._status
@property
def vc(self):
return self._vc
@property
def jobid(self):
return self._jobid
@property
def attempts(self):
return self._attempts
@property
def submitted_time(self):
return self._submitted_time
@property
def user(self):
return self._user
@property
def num_gpus(self):
return self._num_gpus
@property
def queueing_delay(self):
return self._queueing_delay
@property
def run_time(self):
return self._run_time
def get_bucket_from_num_gpus(num_gpus):
"""Maps GPU count to a bucket for plotting purposes."""
if num_gpus is None:
return None
if num_gpus == 1:
return 0
elif num_gpus >= 2 and num_gpus <= 4:
return 1
elif num_gpus >= 5 and num_gpus <= 8:
return 2
elif num_gpus > 8:
return 3
else:
return None
def get_plot_config_from_bucket(bucket):
"""Returns plotting configuration information."""
if bucket == 0:
return ('1', 'green', '-')
elif bucket == 1:
return ('2-4', 'blue', '-.')
elif bucket == 2:
return ('5-8', 'red', '--')
elif bucket == 3:
return ('>8', 'purple', ':')
```
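As a quick sanity check, the two key helpers can be exercised on known inputs (re-stated here so the cell runs on its own; note a minute is 60 × 1,000,000 microseconds):

```python
# Self-contained sanity check of timedelta_to_minutes() and get_cdf().
import datetime
import numpy as np

def timedelta_to_minutes(td):
    """Total minutes captured in a timedelta."""
    return (td.days * 24 * 60 + td.seconds / 60.0
            + td.microseconds / (60 * 1000 * 1000))

def get_cdf(data):
    """Sorted values and their cumulative percentiles, for CDF plots."""
    sorted_data = sorted(data)
    p = 100.0 * np.arange(len(sorted_data)) / (len(sorted_data) - 1)
    return sorted_data, p

td = datetime.timedelta(days=1, seconds=90)
print(timedelta_to_minutes(td))  # 1440 minutes + 1.5 minutes = 1441.5

x, y = get_cdf([5, 1, 3, 2, 4])
print(x[0], x[-1], y[0], y[-1])  # CDF runs from 0% at the min to 100% at the max
```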
## Load the cluster log
```
cluster_job_log_path = os.path.join(LOGDIR, 'cluster_job_log')
with open(cluster_job_log_path, 'r') as f:
cluster_job_log = json.load(f)
jobs = [Job(**job) for job in cluster_job_log]
len(jobs)
```
# Job Runtimes (Figure 2)
```
run_times = {}
for job in jobs:
num_gpus = job.num_gpus
bucket = get_bucket_from_num_gpus(num_gpus)
if bucket is None:
continue
if bucket not in run_times:
run_times[bucket] = []
run_time = job.run_time
if run_time is not None:
run_times[bucket].append(run_time)
buckets = sorted([bucket for bucket in run_times])
for bucket in buckets:
num_gpus, color, linestyle = get_plot_config_from_bucket(bucket)
x, y = get_cdf(run_times[bucket])
plt.plot(x, y, label='%s GPU' % (num_gpus), color=color, linestyle=linestyle)
plt.legend(loc='lower right')
plt.xscale('log')
plt.xlim(10 ** -1, 10 ** 4)
plt.ylim(0, 100)
plt.xlabel('Time (min)')
plt.ylabel('CDF')
plt.grid(alpha=.3, linestyle='--')
plt.show()
```
# Queueing Delay (Figure 3)
```
queueing_delays = {}
for job in jobs:
vc = job.vc
if vc not in queueing_delays:
queueing_delays[vc] = {}
bucket = get_bucket_from_num_gpus(job.num_gpus)
if bucket is None:
continue
if bucket not in queueing_delays[vc]:
queueing_delays[vc][bucket] = []
# NOTE: Each period between the job being placed on the queue
# and being scheduled on a machine is recorded as an individual
# queueing delay.
queueing_delay = 0.0
queue_time = job.submitted_time
for attempt in job.attempts:
start_time = attempt['start_time']
if queue_time is not None and start_time is not None:
queueing_delay = timedelta_to_minutes(start_time - queue_time)
queue_time = attempt['end_time']
queueing_delays[vc][bucket].append(queueing_delay)
for vc in queueing_delays:
for bucket in queueing_delays[vc]:
        queueing_delays[vc][bucket] = list(filter(None, queueing_delays[vc][bucket]))  # drop zero delays; materialize the iterator
vcs = queueing_delays.keys()
for i, vc in enumerate(vcs):
for bucket in queueing_delays[vc]:
num_gpus, color, linestyle = get_plot_config_from_bucket(bucket)
x, y = get_cdf(queueing_delays[vc][bucket])
plt.plot(x, y, label='%s GPU' % (num_gpus), color=color, linestyle=linestyle)
plt.title('VC %s' % (vc))
plt.legend(loc='lower right')
plt.xscale('log')
plt.ylim(0, 100)
plt.xlim(10 ** -1, 10 ** 4)
plt.xlabel('Time (min)')
plt.ylabel('CDF')
plt.grid(alpha=.3, linestyle='--')
if i < len(vcs) - 1:
plt.figure()
plt.show()
```
# Locality Constraints (Figure 4)
```
data = {}
for i, job in enumerate(jobs):
if len(job.attempts) == 0:
continue
num_gpus = job.num_gpus
if num_gpus < 5:
continue
bucket = get_bucket_from_num_gpus(num_gpus)
if bucket not in data:
data[bucket] = {
'x': [],
'y': []
}
queueing_delay = job.queueing_delay
num_servers = len(job.attempts[0]['detail'])
data[bucket]['x'].append(queueing_delay)
data[bucket]['y'].append(num_servers)
for bucket in data:
num_gpus, _, _ = get_plot_config_from_bucket(bucket)
if bucket == 2:
marker = '+'
facecolors = 'black'
edgecolors = 'none'
else:
marker = 'o'
facecolors = 'none'
edgecolors = 'red'
plt.scatter(data[bucket]['x'], data[bucket]['y'], label='%s GPU' % (num_gpus),
marker=marker, facecolors=facecolors, edgecolors=edgecolors)
plt.legend()
plt.xscale('log')
plt.xlabel('Time (min)')
plt.ylabel('Num. Servers')
plt.show()
```
# GPU Utilization (Figures 5, 6)
```
gpu_util_path = os.path.join(LOGDIR, 'cluster_gpu_util')
gpu_util = {}
with open(gpu_util_path, 'r') as f:
reader = csv.reader(f)
next(reader)
for row in reader:
time = row[0][:-4] # Remove the timezone
machineId = row[1]
if machineId not in gpu_util:
gpu_util[machineId] = {}
gpu_util[machineId][time] = row[2:-1] # Ignore extra empty string at the end
def get_utilization_data(jobs, only_large_jobs=False, only_dedicated_servers=False):
"""Aggregates GPU utilization data for a set of jobs.
Args:
jobs: A list of Jobs.
only_large_jobs: If True, only considers jobs of size 8 or 16 GPUs.
Otherwise, considers jobs of size 1, 4, 8, or 16 GPUs.
only_dedicated_servers: If True, only considers jobs that use all GPUs
available on a server(s).
Returns:
A dict indexed by 1) job completion status, 2) number of GPUs requested
by the job, and 3) timestamp. The value of each nested dict is a list of
percentages indicating the utilization of each individual GPU on the
servers used by the job at the particular time requested.
"""
data = {}
for job in jobs:
num_gpus = job.num_gpus
if (len(job.attempts) == 0 or
(num_gpus != 1 and num_gpus != 4 and num_gpus != 8 and num_gpus != 16)):
continue
if only_large_jobs and num_gpus < 8:
continue
status = job.status
if status not in data:
data[status] = {}
if num_gpus not in data[status]:
data[status][num_gpus] = []
for attempt in job.attempts:
if only_dedicated_servers and len(attempt['detail']) > (num_gpus / 8):
continue
current_time = attempt['start_time']
if current_time is None or attempt['end_time'] is None:
continue
current_minute = round_to_nearest_minute(current_time)
while current_minute < attempt['end_time']:
current_minute_str = str(current_minute)
for detail in attempt['detail']:
machineId = detail['ip']
if current_minute_str in gpu_util[machineId]:
for gpu_id in detail['gpus']:
gpu_num = int(gpu_id[3:]) # Remove the 'gpu' prefix
try:
u = gpu_util[machineId][current_minute_str][gpu_num]
if u != 'NA':
data[status][num_gpus].append(float(u))
except Exception as e:
print(gpu_util[machineId][current_minute_str])
print(gpu_num)
raise ValueError(e)
current_minute = add_minute(current_minute)
return data
data = get_utilization_data(jobs)
statuses = data.keys()
for i, status in enumerate(statuses):
all_num_gpus = sorted(data[status].keys())
for num_gpus in all_num_gpus:
if num_gpus == 1:
color = 'green'
linestyle = '-'
elif num_gpus == 4:
color = 'blue'
linestyle = '-.'
elif num_gpus == 8:
color = 'red'
linestyle = '--'
elif num_gpus == 16:
color = 'cyan'
linestyle = ':'
x, y = get_cdf(data[status][num_gpus])
plt.plot(x, y, label='%s GPU' % (num_gpus), color=color, linestyle=linestyle)
plt.title(status)
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.legend(loc='lower right')
plt.xlabel('Utilization (%)')
plt.ylabel('CDF')
plt.grid(alpha=.3, linestyle='--')
if i < len(statuses) - 1:
plt.figure()
plt.show()
data = get_utilization_data(jobs, only_large_jobs=True, only_dedicated_servers=True)
aggregate_data = {}
for status in data:
for num_gpus in data[status]:
if num_gpus not in aggregate_data:
aggregate_data[num_gpus] = []
aggregate_data[num_gpus] += data[status][num_gpus]
all_num_gpus = sorted(aggregate_data.keys())
for num_gpus in all_num_gpus:
if num_gpus == 8:
linestyle = '-'
elif num_gpus == 16:
linestyle = '-.'
x, y = get_cdf(aggregate_data[num_gpus])
plt.plot(x, y, label='%s GPU' % (num_gpus), color='black', linestyle=linestyle)
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.legend(loc='lower right')
plt.xlabel('Utilization (%)')
plt.ylabel('CDF')
plt.grid(alpha=.3, linestyle='--')
plt.show()
```
# Host Resource Utilization (Figure 7)
```
mem_util_path = os.path.join(LOGDIR, 'cluster_mem_util')
mem_util = []
with open(mem_util_path, 'r') as f:
reader = csv.reader(f)
next(reader)
for row in reader:
if row[2] == 'NA':
continue
mem_total = float(row[2])
mem_free = float(row[3])
if mem_total == 0:
continue
mem_util.append(100.0 * (mem_total - mem_free) / mem_total)
cpu_util_path = os.path.join(LOGDIR, 'cluster_cpu_util')
cpu_util = []
with open(cpu_util_path, 'r') as f:
reader = csv.reader(f)
next(reader)
for row in reader:
if row[2] == 'NA':
continue
cpu_util.append(float(row[2]))
x, y = get_cdf(cpu_util)
plt.plot(x, y, label='CPU', color='black', linestyle='-')
x, y = get_cdf(mem_util)
plt.plot(x, y, label='Memory', color='black', linestyle='-.')
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.legend(loc='lower right')
plt.xlabel('Utilization (%)')
plt.ylabel('CDF')
plt.show()
```
# Scikit-Learn Practice Exercises
This notebook offers a set of exercises for different tasks with Scikit-Learn.
Notes:
* There may be more than one different way to answer a question or complete an exercise.
* Some skeleton code has been implemented for you.
* Exercises are based on (and directly taken from) the quick [introduction to Scikit-Learn notebook](https://github.com/mrdbourke/zero-to-mastery-ml/blob/master/section-2-data-science-and-ml-tools/introduction-to-scikit-learn.ipynb).
* Different tasks will be detailed by comments or text. Places to put your own code are defined by `###` (don't remove anything other than `###`).
For further reference and resources, it's advised to check out the [Scikit-Learn documentation](https://scikit-learn.org/stable/user_guide.html).
And if you get stuck, try searching for a question in the following format: "how to do XYZ with Scikit-Learn", where XYZ is the function you want to leverage from Scikit-Learn.
Since we'll be working with data, we'll import Scikit-Learn's counterparts, Matplotlib, NumPy and pandas.
Let's get started.
```
# Setup matplotlib to plot inline (within the notebook)
###
# Import the pyplot module of Matplotlib as plt
###
# Import pandas under the abbreviation 'pd'
###
# Import NumPy under the abbreviation 'np'
###
```
## End-to-end Scikit-Learn classification workflow
Let's start with an end to end Scikit-Learn workflow.
More specifically, we'll:
1. Get a dataset ready
2. Prepare a machine learning model to make predictions
3. Fit the model to the data and make a prediction
4. Evaluate the model's predictions
The data we'll be using is [stored on GitHub](https://github.com/mrdbourke/zero-to-mastery-ml/tree/master/data). We'll start with [`heart-disease.csv`](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv), a dataset which contains anonymous patient data and whether or not they have heart disease.
**Note:** When viewing a `.csv` on GitHub, make sure it's in the raw format. For example, the URL should look like: https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv
### 1. Getting a dataset ready
```
# Import the heart disease dataset and save it to a variable
# using pandas and read_csv()
# Hint: You can directly pass the URL of a csv to read_csv()
heart_disease = ###
# Check the first 5 rows of the data
###
```
Our goal here is to build a machine learning model on all of the columns except `target` to predict `target`.
In essence, the `target` column is our **target variable** (also called `y` or `labels`) and the rest of the other columns are our independent variables (also called `data` or `X`).
And since our target variable is one thing or another (heart disease or not), we know our problem is a classification problem (classifying whether something is one thing or another).
Knowing this, let's create `X` and `y` by splitting our dataframe up.
```
# Create X (all columns except target)
X = ###
# Create y (only the target column)
y = ###
```
Now we've split our data into `X` and `y`, we'll use Scikit-Learn to split it into training and test sets.
```
# Import train_test_split from sklearn's model_selection module
###
# Use train_test_split to split X & y into training and test sets
X_train, X_test, y_train, y_test = ###
# View the different shapes of the training and test datasets
###
```
What do you notice about the different shapes of the data?
Since our data is now in training and test sets, we'll build a machine learning model to fit patterns in the training data and then make predictions on the test data.
To figure out which machine learning model we should use, you can refer to [Scikit-Learn's machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html).
After following the map, you decide to use the [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).
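To see where the different shapes come from, here is an 80/20 split done by hand on toy data -- illustrative only; the exercise itself should use `train_test_split`:

```python
# Hand-rolled train/test split on 100 toy samples: shuffle, then slice.
import random

random.seed(42)
rows = list(range(100))       # pretend these are 100 samples
random.shuffle(rows)

split = int(len(rows) * 0.8)  # 80% for training, 20% for testing
train, test = rows[:split], rows[split:]

print(len(train), len(test))  # 80 20
```

Every sample lands in exactly one of the two sets, which is the property `train_test_split` gives you (along with splitting `X` and `y` consistently).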
### 2. Preparing a machine learning model
```
# Import the RandomForestClassifier from sklearn's ensemble module
###
# Instantiate an instance of RandomForestClassifier as clf
clf = ###
```
Now you've got a `RandomForestClassifier` instance, let's fit it to the training data.
Once it's fit, we'll make predictions on the test data.
### 3. Fitting a model and making predictions
```
# Fit the RandomForestClassifier to the training data
clf.fit(###, ###)
# Use the fitted model to make predictions on the test data and
# save the predictions to a variable called y_preds
y_preds = clf.predict(###)
```
### 4. Evaluating a model's predictions
Evaluating predictions is as important as making them. Let's check how our model did by calling the `score()` method on it and passing it the training (`X_train, y_train`) and testing data (`X_test, y_test`).
```
# Evaluate the fitted model on the training set using the score() function
###
# Evaluate the fitted model on the test set using the score() function
###
```
* How did your model go?
* What metric does `score()` return for classifiers?
* Did your model do better on the training dataset or test dataset?
## Experimenting with different classification models
Now we've quickly covered an end-to-end Scikit-Learn workflow and since experimenting is a large part of machine learning, we'll now try a series of different machine learning models and see which gets the best results on our dataset.
Going through the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we see there are a number of different classification models we can try (different models are in the green boxes).
For this exercise, the models we're going to try and compare are:
* [LinearSVC](https://scikit-learn.org/stable/modules/svm.html#classification)
* [KNeighborsClassifier](https://scikit-learn.org/stable/modules/neighbors.html) (also known as K-Nearest Neighbors or KNN)
* [SVC](https://scikit-learn.org/stable/modules/svm.html#classification) (also known as support vector classifier, a form of [support vector machine](https://en.wikipedia.org/wiki/Support-vector_machine))
* [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) (despite the name, this is actually a classifier)
* [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) (an ensemble method and what we used above)
We'll follow the same workflow we used above (except this time for multiple models):
1. Import a machine learning model
2. Get it ready
3. Fit it to the data and make predictions
4. Evaluate the fitted model
**Note:** Since we've already got the data ready, we can reuse it in this section.
```
# Import LinearSVC from sklearn's svm module
###
# Import KNeighborsClassifier from sklearn's neighbors module
###
# Import SVC from sklearn's svm module
###
# Import LogisticRegression from sklearn's linear_model module
###
# Note: we don't have to import RandomForestClassifier, since we already have
```
Thanks to the consistency of Scikit-Learn's API design, we can use virtually the same code to fit, score and make predictions with each of our models.
To see which model performs best, we'll do the following:
1. Instantiate each model in a dictionary
2. Create an empty results dictionary
3. Fit each model on the training data
4. Score each model on the test data
5. Check the results
If you're wondering what it means to instantiate each model in a dictionary, see the example below.
```
# EXAMPLE: Instantiating a RandomForestClassifier() in a dictionary
example_dict = {"RandomForestClassifier": RandomForestClassifier()}
# Create a dictionary called models which contains all of the classification models we've imported
# Make sure the dictionary is in the same format as example_dict
# The models dictionary should contain 5 models
models = {"LinearSVC": ###,
"KNN": ###,
"SVC": ###,
"LogisticRegression": ###,
"RandomForestClassifier": ###}
# Create an empty dictionary called results
results = ###
```
Since each model we're using has the same `fit()` and `score()` methods, we can loop through our models dictionary, call `fit()` on the training data and then call `score()` with the test data.
```
# EXAMPLE: Looping through example_dict fitting and scoring the model
example_results = {}
for model_name, model in example_dict.items():
model.fit(X_train, y_train)
example_results[model_name] = model.score(X_test, y_test)
# EXAMPLE: View the results
example_results
# Loop through the models dictionary items, fitting the model on the training data
# and appending the model name and model score on the test data to the results dictionary
for model_name, model in ###:
model.fit(###)
results[model_name] = model.score(###)
# View the results
results
```
* Which model performed the best?
* Do the results change each time you run the cell?
* Why do you think this is?
Due to the randomness of how each model finds patterns in the data, you might notice different results each time.
Without manually setting the random state using the `random_state` parameter of some models or using a NumPy random seed, every time you run the cell, you'll get slightly different results.
Let's see this in effect by running the same code as the cell above, except this time setting a [NumPy random seed equal to 42](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html).
```
# Run the same code as the cell above, except this time set a NumPy random seed
# equal to 42
np.random.seed(###)
for model_name, model in models.items():
model.fit(X_train, y_train)
results[model_name] = model.score(X_test, y_test)
results
```
* Run the cell above a few times, what do you notice about the results?
* Which model performs the best this time?
* What happens if you add a NumPy random seed to the cell where you called `train_test_split()` (towards the top of the notebook) and then rerun the cell above?
Let's make our results a little more visual.
```
# Create a pandas dataframe with the data as the values of the results dictionary,
# the index as the keys of the results dictionary and a single column called accuracy.
# Be sure to save the dataframe to a variable.
results_df = pd.DataFrame(results.###(),
results.###(),
columns=[####])
# Create a bar plot of the results dataframe using plot.bar()
###
```
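As a sketch of what building the results dataframe might look like — the accuracy numbers below are made up for illustration (your real `results` dict will have whatever scores your five models achieved):

```python
import pandas as pd

# Hypothetical results dictionary (assumption: illustrative values only)
results = {"LinearSVC": 0.86, "KNN": 0.68, "SVC": 0.88,
           "LogisticRegression": 0.88, "RandomForestClassifier": 0.85}

# Values become the data, keys become the index, one column called "accuracy"
results_df = pd.DataFrame(list(results.values()),
                          index=list(results.keys()),
                          columns=["accuracy"])
print(results_df)
# results_df.plot.bar() would then draw the comparison chart
```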
Using `np.random.seed(42)` results in the `LogisticRegression` model performing the best (at least on my computer).
Let's tune its hyperparameters and see if we can improve it.
### Hyperparameter Tuning
Remember, if you're ever trying to tune a machine learning model's hyperparameters and you're not sure where to start, you can always search for something like "MODEL_NAME hyperparameter tuning".
In the case of LogisticRegression, you might come across articles, such as [Hyperparameter Tuning Using Grid Search by Chris Albon](https://chrisalbon.com/machine_learning/model_selection/hyperparameter_tuning_using_grid_search/).
The article uses [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) but we're going to be using [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html).
The different hyperparameters to search over have been setup for you in `log_reg_grid` but feel free to change them.
```
# Different LogisticRegression hyperparameters
log_reg_grid = {"C": np.logspace(-4, 4, 20),
"solver": ["liblinear"]}
```
Since we've got a set of hyperparameters, we can import `RandomizedSearchCV`, pass it our dictionary of hyperparameters and let it search for the best combination.
```
# Setup np random seed of 42
np.random.seed(###)
# Import RandomizedSearchCV from sklearn's model_selection module
# Setup an instance of RandomizedSearchCV with a LogisticRegression() estimator,
# our log_reg_grid as the param_distributions, a cv of 5 and n_iter of 5.
rs_log_reg = RandomizedSearchCV(estimator=###,
param_distributions=###,
cv=###,
n_iter=###,
verbose=###)
# Fit the instance of RandomizedSearchCV
###
```
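A runnable sketch of the randomized search, again on synthetic stand-in data (an assumption — in the notebook you'd fit on your real `X_train`/`y_train`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

np.random.seed(42)

# Synthetic stand-in data (assumption)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

log_reg_grid = {"C": np.logspace(-4, 4, 20),
                "solver": ["liblinear"]}

# Try 5 random hyperparameter combinations, each scored with 5-fold cross-validation
rs_log_reg = RandomizedSearchCV(estimator=LogisticRegression(),
                                param_distributions=log_reg_grid,
                                cv=5,
                                n_iter=5,
                                verbose=0)
rs_log_reg.fit(X, y)
print(rs_log_reg.best_params_)
```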
Once `RandomizedSearchCV` has finished, we can find the best hyperparameters it found using the `best_params_` attribute.
```
# Find the best parameters of the RandomizedSearchCV instance using the best_params_ attribute
###
# Score the instance of RandomizedSearchCV using the test data
###
```
After hyperparameter tuning, did the model's score improve? What else could you try to improve it? Are there any other methods of hyperparameter tuning you can find for `LogisticRegression`?
### Classifier Model Evaluation
We've tried to find the best hyperparameters on our model using `RandomizedSearchCV` and so far we've only been evaluating our model using the `score()` function which returns accuracy.
But when it comes to classification, you'll likely want to use a few more evaluation metrics, including:
* [**Confusion matrix**](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) - Compares the predicted values with the true values in a tabular way. If 100% correct, all values in the matrix will lie on the diagonal from top left to bottom right.
* [**Cross-validation**](https://scikit-learn.org/stable/modules/cross_validation.html) - Splits your dataset into multiple parts, trains and tests your model on each part and evaluates performance as an average.
* [**Precision**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) - Proportion of true positives over the total number of predicted positives (true positives plus false positives). Higher precision leads to fewer false positives.
* [**Recall**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) - Proportion of true positives over the total number of actual positives (true positives plus false negatives). Higher recall leads to fewer false negatives.
* [**F1 score**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) - Combines precision and recall into one metric. 1 is best, 0 is worst.
* [**Classification report**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) - Sklearn has a built-in function called `classification_report()` which returns some of the main classification metrics such as precision, recall and f1-score.
* [**ROC Curve**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html) - A [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) curve is a plot of true positive rate versus false positive rate.
* [**Area Under Curve (AUC)**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) - The area underneath the ROC curve. A perfect model achieves a score of 1.0.
Before we get to these, we'll instantiate a new instance of our model using the best hyperparameters found by `RandomizedSearchCV`.
```
# Instantiate a LogisticRegression classifier using the best hyperparameters from RandomizedSearchCV
clf = LogisticRegression(###)
# Fit the new instance of LogisticRegression with the best hyperparameters on the training data
###
```
Now it's time to import the relevant Scikit-Learn methods for each of the classification evaluation metrics we're after.
```
# Import confusion_matrix and classification_report from sklearn's metrics module
###
# Import precision_score, recall_score and f1_score from sklearn's metrics module
###
# Import plot_roc_curve from sklearn's metrics module
###
```
Evaluation metrics are very often comparing a model's predictions to some ground truth labels.
Let's make some predictions on the test data using our latest model and save them to `y_preds`.
```
# Make predictions on test data and save them
###
```
Time to use the predictions our model has made to evaluate it beyond accuracy.
```
# Create a confusion matrix using the confusion_matrix function
###
```
**Challenge:** The in-built `confusion_matrix` function in Scikit-Learn produces something not too visual, how could you make your confusion matrix more visual?
You might want to search something like "how to plot a confusion matrix". Note: There may be more than one way to do this.
```
# Create a more visual confusion matrix
###
```
How about a classification report?
```
# Create a classification report using the classification_report function
###
```
**Challenge:** Write down what each of the columns in this classification report are.
* **Precision** - Indicates the proportion of positive identifications (model predicted class 1) which were actually correct. A model which produces no false positives has a precision of 1.0.
* **Recall** - Indicates the proportion of actual positives which were correctly classified. A model which produces no false negatives has a recall of 1.0.
* **F1 score** - A combination of precision and recall. A perfect model achieves an F1 score of 1.0.
* **Support** - The number of samples each metric was calculated on.
* **Accuracy** - The accuracy of the model in decimal form. Perfect accuracy is equal to 1.0.
* **Macro avg** - Short for macro average, the unweighted average precision, recall and F1 score across classes. Macro avg doesn't take class imbalance into account, so if you do have class imbalances, pay attention to this metric.
* **Weighted avg** - Short for weighted average, the weighted average precision, recall and F1 score between classes. Weighted means each metric is calculated with respect to how many samples there are in each class. This metric will favour the majority class (e.g. it will give a high value when one class outperforms another due to having more samples).
The classification report gives us a range of values for precision, recall and F1 score, time to find these metrics using Scikit-Learn functions.
```
# Find the precision score of the model using precision_score()
###
# Find the recall score
###
# Find the F1 score
###
```
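As a hand-checkable sketch of these functions (on tiny made-up arrays, not the notebook's real predictions): with one false negative and no false positives, precision is perfect while recall drops.

```python
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

# Tiny illustrative example (assumption: values chosen so the metrics are easy to verify by hand)
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]   # errors: one false negative, no false positives

print(confusion_matrix(y_true, y_pred))
# [[1 0]
#  [1 2]]
print(precision_score(y_true, y_pred))  # 1.0 (no false positives)
print(recall_score(y_true, y_pred))     # 0.666... (one positive missed)
print(f1_score(y_true, y_pred))         # 0.8 (harmonic mean of the two)
```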
Confusion matrix: done.
Classification report: done.
ROC (receiver operator characteristic) curve & AUC (area under curve) score: not done.
Let's fix this.
If you're unfamiliar with what a ROC curve is, that's your first challenge: read up on what one is.
In a sentence, a [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of the true positive rate versus the false positive rate.
And the AUC score is the area under the ROC curve.
Scikit-Learn provides a handy function for creating both of these called [`plot_roc_curve()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html).
```
# Plot a ROC curve using our current machine learning model using plot_roc_curve
###
```
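Note that in recent Scikit-Learn versions (1.2+), `plot_roc_curve` has been removed in favour of `RocCurveDisplay.from_estimator`, so you may need that instead. The AUC score itself can be computed directly with `roc_auc_score`; here's a hand-checkable sketch on made-up values (not the notebook's real model outputs):

```python
from sklearn.metrics import roc_auc_score

# Illustrative example (assumption: tiny hand-picked values)
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]   # predicted probabilities for class 1

# 3 of the 4 (positive, negative) pairs are ranked correctly -> AUC = 0.75
print(roc_auc_score(y_true, y_scores))  # 0.75
```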
Beautiful! We've gone far beyond accuracy with a plethora of extra classification evaluation metrics.
If you're not sure about any of these, don't worry, they can take a while to understand. That could be an optional extension, reading up on a classification metric you're not sure of.
The thing to note here is all of these metrics have been calculated using a single training set and a single test set. Whilst this is okay, a more robust way is to calculate them using [cross-validation](https://scikit-learn.org/stable/modules/cross_validation.html).
We can calculate various evaluation metrics using cross-validation using Scikit-Learn's [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function along with the `scoring` parameter.
```
# Import cross_val_score from sklearn's model_selection module
###
# EXAMPLE: By default cross_val_score returns 5 values (cv=5).
cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5)
# EXAMPLE: Taking the mean of the returned values from cross_val_score
# gives a cross-validated version of the scoring metric.
cross_val_acc = np.mean(cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5))
cross_val_acc
```
In the examples, the cross-validated accuracy is found by taking the mean of the array returned by `cross_val_score()`.
Now it's time to find the same for precision, recall and F1 score.
```
# Find the cross-validated precision
###
# Find the cross-validated recall
###
# Find the cross-validated F1 score
###
```
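A sketch of how those cross-validated metrics might be computed — the only thing that changes between them is the `scoring` string. The data below is synthetic (an assumption standing in for the notebook's real `X` and `y`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data (assumption)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

clf = LogisticRegression()

# Swap the scoring string to get different cross-validated metrics
cv_precision = np.mean(cross_val_score(clf, X, y, scoring="precision", cv=5))
cv_recall = np.mean(cross_val_score(clf, X, y, scoring="recall", cv=5))
cv_f1 = np.mean(cross_val_score(clf, X, y, scoring="f1", cv=5))
print(cv_precision, cv_recall, cv_f1)
```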
### Exporting and importing a trained model
Once you've trained a model, you may want to export it and save it to file so you can share it or use it elsewhere.
One method of exporting and importing models is using the joblib library.
In Scikit-Learn, exporting and importing a trained model is known as [model persistence](https://scikit-learn.org/stable/modules/model_persistence.html).
```
# Import the dump and load functions from the joblib library
###
# Use the dump function to export the trained model to file
###
# Use the load function to import the trained model you just exported
# Save it to a different variable name to the original trained model
###
# Evaluate the loaded trained model on the test data
###
```
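A minimal sketch of the export/import round trip with joblib, using a small model trained on synthetic data (an assumption — you'd dump your real fitted `clf`, and the filename here is just an example):

```python
import numpy as np
from joblib import dump, load
from sklearn.linear_model import LogisticRegression

# Train a small model on synthetic data (assumption: stands in for your real clf)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

dump(clf, "model.joblib")          # export the trained model to file
loaded_clf = load("model.joblib")  # import it under a new variable name

# The loaded model carries the same learned parameters, so it scores identically
print(clf.score(X, y) == loaded_clf.score(X, y))
```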
What do you notice about the loaded trained model results versus the original (pre-exported) model results?
## Scikit-Learn Regression Practice
For the next few exercises, we're going to be working on a regression problem, in other words, using some data to predict a number.
Our dataset is a [table of car sales](https://docs.google.com/spreadsheets/d/1LPEIWJdSSJYrfn-P3UQDIXbEn5gg-o6I7ExLrWTTBWs/edit?usp=sharing), containing different car characteristics as well as a sale price.
We'll use Scikit-Learn's built-in regression machine learning models to try and learn the patterns in the car characteristics and their prices on a certain group of the dataset before trying to predict the sale price of a group of cars the model has never seen before.
To begin, we'll [import the data from GitHub](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv) into a pandas DataFrame, check out some details about it and try to build a model as soon as possible.
```
# Read in the car sales data
car_sales = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv")
# View the first 5 rows of the car sales data
###
# Get information about the car sales DataFrame
###
```
Looking at the output of `info()`,
* How many rows are there total?
* What datatypes are in each column?
* How many missing values are there in each column?
```
# Find number of missing values in each column
###
# Find the datatypes of each column of car_sales
###
```
Knowing this information, what would happen if we tried to model our data as it is?
Let's see.
```
# EXAMPLE: This doesn't work because our car_sales data isn't all numerical
from sklearn.ensemble import RandomForestRegressor
car_sales_X, car_sales_y = car_sales.drop("Price", axis=1), car_sales.Price
rf_regressor = RandomForestRegressor().fit(car_sales_X, car_sales_y)
```
As we see, the cell above breaks because our data contains non-numerical values as well as missing data.
To take care of some of the missing data, we'll remove the rows which have no labels (all the rows with missing values in the `Price` column).
```
# Remove rows with no labels (NaN's in the Price column)
###
```
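One way to drop those unlabelled rows is `dropna` with the `subset` parameter; here's a sketch on a tiny hypothetical frame (an assumption standing in for the real `car_sales` data):

```python
import numpy as np
import pandas as pd

# Hypothetical mini version of car_sales with a missing Price label (assumption)
car_sales = pd.DataFrame({"Make": ["Honda", "Toyota", "BMW"],
                          "Price": [4000.0, np.nan, 22000.0]})

# Drop only rows where the Price label is missing
car_sales = car_sales.dropna(subset=["Price"])
print(len(car_sales))  # 2
```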
### Building a pipeline
Since our `car_sales` data has missing values and isn't all numerical, we'll have to fix these things before we can fit a machine learning model on it.
There are ways we could do this with pandas but since we're practicing Scikit-Learn, we'll see how we might do it with the [`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class.
Because we're modifying columns in our dataframe (filling missing values, converting non-numerical data to numbers) we'll need the [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html), [`SimpleImputer`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) and [`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) classes as well.
Finally, because we'll need to split our data into training and test sets, we'll import `train_test_split` as well.
```
# Import Pipeline from sklearn's pipeline module
###
# Import ColumnTransformer from sklearn's compose module
###
# Import SimpleImputer from sklearn's impute module
###
# Import OneHotEncoder from sklearn's preprocessing module
###
# Import train_test_split from sklearn's model_selection module
###
```
Now we've got the necessary tools we need to create our preprocessing `Pipeline` which fills missing values along with turning all non-numerical data into numbers.
Let's start with the categorical features.
```
# Define different categorical features
categorical_features = ["Make", "Colour"]
# Create categorical transformer Pipeline
categorical_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to "missing"
("imputer", SimpleImputer(strategy=###, fill_value=###)),
# Set OneHotEncoder to ignore the unknowns
("onehot", OneHotEncoder(handle_unknown=###))])
```
It would be safe to treat `Doors` as a categorical feature as well; however, since we know the vast majority of cars have 4 doors, we'll impute the missing `Doors` values as 4.
```
# Define Doors features
door_feature = ["Doors"]
# Create Doors transformer Pipeline
door_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to 4
("imputer", SimpleImputer(strategy=###, fill_value=###))])
```
Now onto the numeric features. In this case, the only numeric feature is the `Odometer (KM)` column. Let's fill its missing values with the median.
```
# Define numeric features (only the Odometer (KM) column)
numeric_features = ["Odometer (KM)"]
# Create numeric transformer Pipeline
numeric_transformer = ###(steps=[
# Set SimpleImputer strategy to fill missing values with the "median"
("imputer", ###(strategy=###))])
```
Time to put all of our individual transformer `Pipeline`s into a single `ColumnTransformer` instance.
```
# Setup preprocessing steps (fill missing values, then convert to numbers)
preprocessor = ColumnTransformer(
transformers=[
# Use the categorical_transformer to transform the categorical_features
("cat", categorical_transformer, ###),
# Use the door_transformer to transform the door_feature
("door", ###, door_feature),
# Use the numeric_transformer to transform the numeric_features
("num", ###, ###)])
```
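To see the whole preprocessing idea end-to-end, here's a hedged sketch of a possible preprocessor run on a tiny hypothetical frame (the data and column names mimic `car_sales` but are made up; your actual fill values and strategies should follow the comments in the cells above):

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

# Tiny hypothetical frame standing in for car_sales (assumption)
df = pd.DataFrame({"Make": ["Honda", np.nan, "BMW"],
                   "Colour": ["Red", "Blue", np.nan],
                   "Doors": [4.0, np.nan, 5.0],
                   "Odometer (KM)": [35000.0, np.nan, 12000.0]})

categorical_transformer = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
    ("onehot", OneHotEncoder(handle_unknown="ignore"))])
door_transformer = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="constant", fill_value=4))])
numeric_transformer = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="median"))])

preprocessor = ColumnTransformer(transformers=[
    ("cat", categorical_transformer, ["Make", "Colour"]),
    ("door", door_transformer, ["Doors"]),
    ("num", numeric_transformer, ["Odometer (KM)"])])

# Result: an all-numeric array (one-hot columns + imputed Doors + imputed Odometer)
transformed = preprocessor.fit_transform(df)
print(transformed.shape)
```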
Boom! Now our `preprocessor` is ready, time to import some regression models to try out.
Comparing our data to the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we can see there's a handful of different regression models we can try.
* [RidgeRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html)
* [SVR(kernel="linear")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.
* [SVR(kernel="rbf")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.
* [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) - the regression version of RandomForestClassifier.
```
# Import Ridge from sklearn's linear_model module
# Import SVR from sklearn's svm module
# Import RandomForestRegressor from sklearn's ensemble module
```
Again, thanks to the design of the Scikit-Learn library, we're able to use very similar code for each of these models.
To test them all, we'll create a dictionary of regression models and an empty dictionary for regression model results.
```
# Create dictionary of model instances, there should be 4 total key, value pairs
# in the form {"model_name": model_instance}.
# Don't forget there's two versions of SVR, one with a "linear" kernel and the
# other with kernel set to "rbf".
regression_models = {"Ridge": ###,
"SVR_linear": ###,
"SVR_rbf": ###,
"RandomForestRegressor": ###}
# Create an empty dictionary for the regression results
regression_results = ###
```
Our regression model dictionary is prepared, along with an empty dictionary to append results to. Time to split the data into `X` (feature variables) and `y` (target variable), as well as training and test sets.
In our car sales problem, we're trying to use the different characteristics of a car (`X`) to predict its sale price (`y`).
```
# Create car sales X data (every column of car_sales except Price)
car_sales_X = ###
# Create car sales y data (the Price column of car_sales)
car_sales_y = ###
# Use train_test_split to split the car_sales_X and car_sales_y data into
# training and test sets.
# Give the test set 20% of the data using the test_size parameter.
# For reproducibility set the random_state parameter to 42.
car_X_train, car_X_test, car_y_train, car_y_test = train_test_split(###,
###,
test_size=###,
random_state=###)
# Check the shapes of the training and test datasets
###
```
* How many rows are in each set?
* How many columns are in each set?
Alright, our data is split into training and test sets, time to build a small loop which is going to:
1. Go through our `regression_models` dictionary
2. Create a `Pipeline` which contains our `preprocessor` as well as one of the models in the dictionary
3. Fits the `Pipeline` to the car sales training data
4. Evaluates the target model on the car sales test data and appends the results to our `regression_results` dictionary
```
# Loop through the items in the regression_models dictionary
for model_name, model in regression_models.items():
# Create a model Pipeline with a preprocessor step and model step
model_pipeline = Pipeline(steps=[("preprocessor", ###),
("model", ###)])
# Fit the model Pipeline to the car sales training data
print(f"Fitting {model_name}...")
model_pipeline.###(###, ###)
# Score the model Pipeline on the test data appending the model_name to the
# results dictionary
print(f"Scoring {model_name}...")
regression_results[model_name] = model_pipeline.score(###,
###)
```
Our regression models have been fit, let's see how they did!
```
# Check the results of each regression model by printing the regression_results
# dictionary
###
```
* Which model did the best?
* How could you improve its results?
* What metric does the `score()` method of a regression model return by default?
Since we've fitted some models but only compared them via the default metric contained in the `score()` method (R^2 score or coefficient of determination), let's take the `RidgeRegression` model and evaluate it with a few other [regression metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics).
Specifically, let's find:
1. **R^2 (pronounced r-squared) or coefficient of determination** - Compares your model's predictions to the mean of the targets. Values can range from negative infinity (a very poor model) to 1. For example, if all your model does is predict the mean of the targets, its R^2 value would be 0. And if your model perfectly predicts a range of numbers, its R^2 value would be 1.
2. **Mean absolute error (MAE)** - The average of the absolute differences between predictions and actual values. It gives you an idea of how wrong your predictions were.
3. **Mean squared error (MSE)** - The average squared differences between predictions and actual values. Squaring the errors removes negative errors. It also amplifies outliers (samples which have larger errors).
Scikit-Learn has a few classes built-in which are going to help us with these, namely, [`mean_absolute_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html), [`mean_squared_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) and [`r2_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html).
```
# Import mean_absolute_error from sklearn's metrics module
###
# Import mean_squared_error from sklearn's metrics module
###
# Import r2_score from sklearn's metrics module
###
```
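A hand-checkable sketch of these three metrics on tiny made-up values (not the real car sales predictions), so you can verify each number against the formulas above:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Illustrative example (assumption: hand-picked values)
y_true = [3.0, 2.0, 7.0, 1.0]
y_pred = [2.0, 3.0, 5.0, 1.0]   # errors: -1, +1, -2, 0

print(mean_absolute_error(y_true, y_pred))  # (1 + 1 + 2 + 0) / 4 = 1.0
print(mean_squared_error(y_true, y_pred))   # (1 + 1 + 4 + 0) / 4 = 1.5
print(r2_score(y_true, y_pred))             # 1 - 6 / 20.75 ≈ 0.711
```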
All the evaluation metrics we're concerned with compare a model's predictions with the ground truth labels. Knowing this, we'll have to make some predictions.
Let's create a `Pipeline` with the `preprocessor` and a `Ridge()` model, fit it on the car sales training data and then make predictions on the car sales test data.
```
# Create RidgeRegression Pipeline with preprocessor as the "preprocessor" and
# Ridge() as the "model".
ridge_pipeline = ###(steps=[("preprocessor", ###),
("model", Ridge())])
# Fit the RidgeRegression Pipeline to the car sales training data
ridge_pipeline.fit(###, ###)
# Make predictions on the car sales test data using the RidgeRegression Pipeline
car_y_preds = ridge_pipeline.###(###)
# View the first 50 predictions
###
```
Nice! Now we've got some predictions, time to evaluate them. We'll find the mean squared error (MSE), mean absolute error (MAE) and R^2 score (coefficient of determination) of our model.
```
# EXAMPLE: Find the MSE by comparing the car sales test labels to the car sales predictions
mse = mean_squared_error(car_y_test, car_y_preds)
# Return the MSE
mse
# Find the MAE by comparing the car sales test labels to the car sales predictions
###
# Return the MAE
###
# Find the R^2 score by comparing the car sales test labels to the car sales predictions
###
# Return the R^2 score
###
```
Boom! Our model could potentially do with some hyperparameter tuning (this would be a great extension). And we could probably do with finding some more data on our problem, 1000 rows doesn't seem to be sufficient.
* How would you export the trained regression model?
## Extensions
You should be proud. Getting this far means you've worked through a classification problem and regression problem using pure (mostly) Scikit-Learn (no easy feat!).
For more exercises, check out the [Scikit-Learn getting started documentation](https://scikit-learn.org/stable/getting_started.html). A good practice would be to read through it and for the parts you find interesting, add them into the end of this notebook.
Finally, as always, remember, the best way to learn something new is to try it. And try it relentlessly. If you're unsure of how to do something, never be afraid to ask a question or search for something such as, "how to tune the hyperparameters of a scikit-learn ridge regression model".
This material has been adapted by @dcapurro from the Jupyter Notebook developed by:
Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiamanokhina), Sergey Isaev and [Artem Trunov](https://www.linkedin.com/in/datamove/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
## 1. Demonstration of main Pandas methods
Well... There are dozens of cool tutorials on Pandas and visual data analysis. This one will guide us through the basic tasks of exploring your data (what does the data look like?).
**[Pandas](http://pandas.pydata.org)** is a Python library that provides extensive means for data analysis. Data scientists often work with data stored in table formats like `.csv`, `.tsv`, or `.xlsx`. Pandas makes it very convenient to load, process, and analyze such tabular data using SQL-like queries. In conjunction with `Matplotlib` and `Seaborn`, `Pandas` provides a wide range of opportunities for visual analysis of tabular data.
The main data structures in `Pandas` are implemented with **Series** and **DataFrame** classes. The former is a one-dimensional indexed array of some fixed data type. The latter is a two-dimensional data structure - a table - where each column contains data of the same type. You can see it as a dictionary of `Series` instances. `DataFrames` are great for representing real data: rows correspond to instances (examples, observations, etc.), and columns correspond to features of these instances.
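As a quick illustrative sketch of the two structures (made-up values, not the MIMIC data used below):

```python
import pandas as pd

# A Series: a one-dimensional indexed array of a single dtype
s = pd.Series([0.25, 0.5, 0.75], name="fraction")

# A DataFrame: a table, effectively a dictionary of Series sharing an index;
# rows are instances, columns are features
df = pd.DataFrame({"patient_id": [1, 2, 3],
                   "age": [34, 58, 41]})
print(df.shape)  # (3, 2)
```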
```
import numpy as np
import pandas as pd
pd.set_option("display.precision", 2)
```
We'll demonstrate the main methods in action by analyzing a dataset that is an extract of the MIMIC III Database.
Let's read the data (using `read_csv`), and take a look at the first 5 lines using the `head` method:
```
df = pd.read_csv('/home/shared/icu_2012.txt')
df.head()
```
<details>
<summary>About printing DataFrames in Jupyter notebooks</summary>
<p>
In Jupyter notebooks, Pandas DataFrames are printed as these pretty tables seen above while `print(df.head())` is less nicely formatted.
By default, Pandas displays 20 columns and 60 rows, so, if your DataFrame is bigger, use the `set_option` function as shown in the example below:
```python
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', 100)
```
</p>
</details>
Recall that each row corresponds to one patient, an **instance**, and columns are **features** of this instance.
Let’s have a look at data dimensionality, feature names, and feature types.
```
print(df.shape)
```
From the output, we can see that the table contains 4000 rows and 79 columns.
Now let's try printing out column names using `columns`:
```
print(df.columns)
```
We can use the `info()` method to output some general information about the dataframe:
```
print(df.info())
```
`bool`, `int64`, `float64` and `object` are the possible data types of our features, and `info()` reports how many columns hold each type. With this same method, we can easily see if there are any missing values: columns with fewer than the 4000 non-null entries (the number of rows we saw earlier with `shape`) contain missing values.
The `describe` method shows basic statistical characteristics of each numerical feature (`int64` and `float64` types): number of non-missing values, mean, standard deviation, range, median, 0.25 and 0.75 quartiles.
```
df.describe()
```
The `describe` method only gives us information about numerical variables. Some of these summaries don't really make sense, like those for `subject_id` or `gender`, but since the values are numbers, we get summary statistics anyway.
In order to see statistics on non-numerical features, one has to explicitly indicate data types of interest in the `include` parameter. We would use `df.describe(include=['object', 'bool'])` but in this case, the dataset only has variables of type `int` and `float`.
For categorical (type `object`) and boolean (type `bool`) features we can use the `value_counts` method. This also works for variables that have been encoded as integers, like `Gender`. Let's have a look at the distribution of `Gender`:
```
df['Gender'].value_counts()
```
Since `Gender` is encoded as 0 for female and 1 for male, 2246 instances are male patients.
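To see proportions instead of raw counts, `value_counts` accepts a `normalize=True` argument. Here is a minimal sketch on a toy frame standing in for the patient data (the column name is carried over from above):

```python
import pandas as pd

# Toy stand-in for the patient data; Gender encoded as 0/1 as above
toy = pd.DataFrame({'Gender': [1, 1, 1, 0, 0]})

counts = toy['Gender'].value_counts()               # raw counts per value
props = toy['Gender'].value_counts(normalize=True)  # proportions per value

print(counts.loc[1], props.loc[1])  # 3 males, 0.6 of the sample
```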
### Sorting
A DataFrame can be sorted by the value of one of the variables (i.e., columns). For example, we can sort by *Age* (use `ascending=False` to sort in descending order):
```
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', 100)
df.sort_values(by='Age', ascending=False).head()
```
We can also sort by multiple columns:
```
df.sort_values(by=['Age', 'Height'],
ascending=[False, False]).head()
```
### Indexing and retrieving data
A DataFrame can be indexed in a few different ways.
To get a single column, you can use a `DataFrame['Name']` construction. Let's use this to answer a question about that column alone: **what is the average maximum heart rate of admitted patients in our dataframe?**
```
df['HR_max'].mean()
```
106 bpm is slightly elevated, but it seems reasonable for an ICU population.
**Boolean indexing** with one column is also very convenient. The syntax is `df[P(df['Name'])]`, where `P` is some logical condition that is checked for each element of the `Name` column. The result of such indexing is the DataFrame consisting only of rows that satisfy the `P` condition on the `Name` column.
Let's use it to answer the question:
**What are average values of numerical features for male patients?**
```
df[df['Gender'] == 1].mean()
```
**What is the average Max Creatinine for patients female patients?**
```
df[df['Gender'] == 0]['Creatinine_max'].mean()
```
DataFrames can be indexed by column name (label), by row name (index), or by the serial number of a row. The `loc` indexer is used for **indexing by name**, while `iloc` is used for **indexing by number**.
In the first case below, we say *"give us the values of the rows with index from 0 to 5 (inclusive) and columns labeled from RecordID to ICUType (inclusive)"*. In the second case, we say *"give us the values of the first five rows in the first three columns"* (as in a typical Python slice: the maximal value is not included).
```
df.loc[0:5, 'RecordID':'ICUType']
df.iloc[0:5, 0:3]
```
If we need the first or the last line of the data frame, we can use the `df[:1]` or `df[-1:]` construct:
```
df[-1:]
```
### Applying Functions to Cells, Columns and Rows
**To apply functions to each column, use `apply()`:**
In this example, we will obtain the max value for each feature.
```
df.apply(np.max)
```
The `map` method can be used to **replace values in a column** by passing a dictionary of the form `{old_value: new_value}` as its argument. Let's replace the 0/1 values of `Gender` with the corresponding strings:
```
d = {0 : 'Female', 1 : 'Male'}
df['Gender'] = df['Gender'].map(d)
df.head()
```
The same thing can be done with the `replace` method:
```
d2 = {1: 'Coronary Care Unit', 2: 'Cardiac Surgery Recovery Unit', 3: 'Medical ICU', 4: 'Surgical ICU'}
df = df.replace({'ICUType': d2})
df.head()
```
We can also replace missing values when necessary. For that we use the `fillna()` method. In this case, we will replace them in the mechanical ventilation column.
```
df['MechVent_min'].fillna(0, inplace=True)
df.head()
```
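Note that `inplace=True` modifies `df` directly. The assignment form, sketched below on toy data, does the same thing and is often preferred because it works uniformly whether or not you keep the original column:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'MechVent_min': [1.0, np.nan, np.nan, 1.0]})

# Assignment form: fill NaN with 0 and write the result back to the column
toy['MechVent_min'] = toy['MechVent_min'].fillna(0)

print(toy['MechVent_min'].tolist())  # [1.0, 0.0, 0.0, 1.0]
```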
### Histograms
Histograms are an important tool for understanding the distribution of your variables. They can help you detect errors in the data, like extreme or implausible values.
```
df['Age'].hist()
```
We can quickly see that the distribution of age is not normal. Let's look at `Na`:
```
df['Na_max'].hist()
```
Not a lot of resolution here. Let's increase the number of bins to 30
```
df['Na_max'].hist(bins=30)
```
Much better! It is easy to see that this is approximately a normal distribution.
### Grouping
In general, grouping data in Pandas works as follows:
```python
df.groupby(by=grouping_columns)[columns_to_show].function()
```
1. First, the `groupby` method splits the data into groups according to the values of `grouping_columns`, which become the index of the resulting dataframe.
2. Then, the columns of interest are selected (`columns_to_show`). If `columns_to_show` is omitted, all non-groupby columns are included.
3. Finally, one or several functions are applied to the obtained groups per selected columns.
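The pattern can be sketched on a self-contained toy frame (column names here are made up) before applying it to the patient data:

```python
import pandas as pd

toy = pd.DataFrame({
    'group': ['a', 'a', 'b'],
    'value': [1, 3, 10],
})

# groupby splits rows by 'group'; mean() is then applied within each group
means = toy.groupby('group')['value'].mean()

print(means.loc['a'], means.loc['b'])  # 2.0 10.0
```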
Here is an example where we group the data according to `Gender` variable and display statistics of three columns in each group:
```
columns_to_show = ['Na_max', 'K_max',
'HCO3_max']
df.groupby(['Gender'])[columns_to_show].describe(percentiles=[])
```
Let’s do the same thing, but slightly differently by passing a list of functions to `agg()`:
```
columns_to_show = ['Na_max', 'K_max',
'HCO3_max']
df.groupby(['Gender'])[columns_to_show].agg([np.mean, np.std, np.min,
np.max])
```
### Summary tables
Suppose we want to see how the observations in our sample are distributed in the context of two variables - `Gender` and `ICUType`. To do so, we can build a **contingency table** using the `crosstab` method:
```
pd.crosstab(df['Gender'], df['ICUType'])
```
This will resemble **pivot tables** to those familiar with Excel. And, of course, pivot tables are implemented in Pandas: the `pivot_table` method takes the following parameters:
* `values` – a list of variables to calculate statistics for,
* `index` – a list of variables to group data by,
* `aggfunc` – what statistics we need to calculate for groups, ex. sum, mean, maximum, minimum or something else.
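Before applying this to the ICU data, here is a minimal self-contained sketch of `pivot_table` on made-up data (the `unit`/`score` names are invented for illustration):

```python
import pandas as pd

toy = pd.DataFrame({
    'unit':  ['ICU', 'ICU', 'CCU', 'CCU'],
    'score': [1.0, 3.0, 5.0, 7.0],
})

# values: what to aggregate; index: what to group by; aggfunc: which statistic
pt = toy.pivot_table(values='score', index='unit', aggfunc='mean')

print(pt)  # one mean score per unit
```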
Let's take a look at the average maximum Troponin I and Troponin T values by ICU type:
```
df.pivot_table(['TroponinI_max', 'TroponinT_max'],
['ICUType'], aggfunc='mean')
```
Nothing surprising here, patients in the coronary/cardiac units have higher values of Troponins.
### DataFrame transformations
Like many other things in Pandas, adding columns to a DataFrame is doable in many ways.
For example, if we want to calculate the change in creatinine, let's create the `Delta_creatinine` Series and paste it into the DataFrame:
```
Delta_creatinine = df['Creatinine_max'] - df['Creatinine_min']
df.insert(loc=len(df.columns), column='Delta_creatinine', value=Delta_creatinine)
# loc parameter is the number of columns after which to insert the Series object
# we set it to len(df.columns) to paste it at the very end of the dataframe
df.head()
```
It is possible to add a column more easily without creating an intermediate Series instance:
```
df['Delta_BUN'] = df['BUN_max'] - df['BUN_min']
df.head()
```
To delete columns or rows, use the `drop` method, passing the required indexes and the `axis` parameter (`1` if you delete columns, and nothing or `0` if you delete rows). The `inplace` argument tells whether to change the original DataFrame. With `inplace=False`, the `drop` method doesn't change the existing DataFrame and returns a new one with dropped rows or columns. With `inplace=True`, it alters the DataFrame.
```
# get rid of just created columns
df.drop(['Delta_creatinine', 'Delta_BUN'], axis=1, inplace=True)
# and here’s how you can delete rows
df.drop([1, 2]).head()
```
## 2. Exploring some associations
Let's see how mechanical ventilation is related to Gender. We'll do this using a `crosstab` contingency table and also through visual analysis with `Seaborn`.
```
pd.crosstab(df['MechVent_min'], df['Gender'], margins=True)
# some imports to set up plotting
import matplotlib.pyplot as plt
# pip install seaborn
import seaborn as sns
# Graphics in retina format are more sharp and legible
%config InlineBackend.figure_format = 'retina'
```
Now we create the plot that will show us the counts of mechanically ventilated patients by gender.
```
sns.countplot(x='Gender', hue='MechVent_min', data=df);
```
We see that the number (and probably the proportion) of mechanically ventilated patients is greater among males.
Next, let's look at the same distribution but comparing the different ICU types: Let's also make a summary table and a picture.
```
pd.crosstab(df['ICUType'], df['MechVent_min'], margins=True)
sns.countplot(x='ICUType', hue='MechVent_min', data=df);
```
As you can see, the proportion of patients ventilated and not ventilated is very different across the different types of ICUs. That is particularly true in the cardiac surgery recovery unit. Can you think of a reason why that might be?
## 3. Some useful resources
* ["Merging DataFrames with pandas"](https://nbviewer.jupyter.org/github/Yorko/mlcourse.ai/blob/master/jupyter_english/tutorials/merging_dataframes_tutorial_max_palko.ipynb) - a tutorial by Max Palko within mlcourse.ai (full list of tutorials is [here](https://mlcourse.ai/tutorials))
* ["Handle different dataset with dask and trying a little dask ML"](https://nbviewer.jupyter.org/github/Yorko/mlcourse.ai/blob/master/jupyter_english/tutorials/dask_objects_and_little_dask_ml_tutorial_iknyazeva.ipynb) - a tutorial by Irina Knyazeva within mlcourse.ai
* Main course [site](https://mlcourse.ai), [course repo](https://github.com/Yorko/mlcourse.ai), and YouTube [channel](https://www.youtube.com/watch?v=QKTuw4PNOsU&list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)
* Official Pandas [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html)
* Course materials as a [Kaggle Dataset](https://www.kaggle.com/kashnitsky/mlcourse)
* Medium ["story"](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-1-exploratory-data-analysis-with-pandas-de57880f1a68) based on this notebook
* If you read Russian: an [article](https://habrahabr.ru/company/ods/blog/322626/) on Habr.com with ~ the same material. And a [lecture](https://youtu.be/dEFxoyJhm3Y) on YouTube
* [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
* [Pandas cheatsheet PDF](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
* GitHub repos: [Pandas exercises](https://github.com/guipsamora/pandas_exercises/) and ["Effective Pandas"](https://github.com/TomAugspurger/effective-pandas)
* [scipy-lectures.org](http://www.scipy-lectures.org/index.html) — tutorials on pandas, numpy, matplotlib and scikit-learn
# Example of building a MLDataSet
## Building a Features MLDataSet from a Table
```
from PrimalCore.heterogeneous_table.table import Table
from ElementsKernel.Path import getPathFromEnvVariable
ph_catalog=getPathFromEnvVariable('PrimalCore/test_table.fits','ELEMENTS_AUX_PATH')
catalog=Table.from_fits_file(ph_catalog,fits_ext=0)
catalog.keep_columns(['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],regex=True)
```
First we import the classes and the functions we need
```
from PrimalCore.homogeneous_table.dataset import MLDataSet
```
```
dataset=MLDataSet.new_from_table(catalog)
print(dataset.features_names)
```
```
print(dataset.features_original_entry_ID[1:10])
```
The original entry IDs are stored separately from the features, and in this way they **safely** cannot be used as a feature.
## Building a Features MLDataSet from a FITS file
```
dataset_from_file=MLDataSet.new_from_fits_file(ph_catalog,fits_ext=0,\
use_col_names_list=['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],\
regex=True)
print(dataset_from_file.features_names)
```
## Columns selection
### using `use_col_names_list` in the factories
```
dataset=MLDataSet.new_from_table(catalog,use_col_names_list=['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],\
regex=True)
print(dataset.features_names)
```
### using dataset_handler functions
```
from PrimalCore.homogeneous_table.dataset_handler import drop_features
from PrimalCore.homogeneous_table.dataset_handler import keep_features
```
```
drop_features(dataset,['FLUX*1*'])
dataset.features_names
```
```
keep_features(dataset,['FLUX*2*'],regex=True)
print(dataset.features_names)
```
## Adding features
```
from PrimalCore.homogeneous_table.dataset_handler import add_features
test_feature=dataset.get_feature_by_name('FLUXERR_H_2')**2
add_features(dataset,'test',test_feature)
dataset.features_names
```
Or we can add a 2dim array of features
```
import numpy as np

test_feature_2dim=np.zeros((dataset.features_N_rows,5))
test_feature_2dim_names=['a','b','c','d','e']
add_features(dataset,test_feature_2dim_names,test_feature_2dim)
dataset.features_names
```
We can consider a more meaningful example: suppose we want to add flux ratios. Let's start by defining the lists of contiguous bands for the flux evaluation.
```
flux_bands_list_2=['FLUX_G_2','FLUX_R_2','FLUX_I_2','FLUX_Z_2','FLUX_Y_2','FLUX_J_2','FLUX_VIS','FLUX_VIS','FLUX_VIS']
flux_bands_list_1=['FLUX_R_2','FLUX_I_2','FLUX_Z_2','FLUX_Y_2','FLUX_J_2','FLUX_H_2','FLUX_Y_2','FLUX_J_2','FLUX_H_2']
```
```
from PrimalCore.phz_tools.photometry import FluxRatio
for f1,f2 in zip(flux_bands_list_1,flux_bands_list_2):
f1_name=f1.split('_')[1]
f2_name=f2.split('_')[1]
if f1 in dataset.features_names and f2 in dataset.features_names:
f=FluxRatio('F_%s'%(f2_name+'-'+f1_name),f1,f2,features=dataset)
add_features(dataset,f.name,f.values)
```
```
dataset.features_names
```
## Operations on rows
### filtering NaN/Inf with dataset_preprocessing functions
```
from PrimalCore.preprocessing.dataset_preprocessing import drop_nan_inf
drop_nan_inf(dataset)
```
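`drop_nan_inf` removes every row that contains a NaN or an Inf in any feature. The core of such a filter can be sketched with plain NumPy (this is an illustration of the idea, not PrimalCore's actual implementation):

```python
import numpy as np

X = np.array([[1.0,    2.0],
              [np.nan, 3.0],
              [4.0,    np.inf],
              [5.0,    6.0]])

# Keep only rows where every entry is finite (neither NaN nor +/-Inf)
mask = np.all(np.isfinite(X), axis=1)
X_clean = X[mask]

print(X_clean)  # rows [1, 2] and [5, 6] survive
```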
<a href="https://www.pythonista.io"> <img src="img/pythonista.png"></a>
## Econometric analysis.
An econometric analysis consists of applying statistical techniques to build models capable of predicting observed phenomena with a certain degree of confidence.
https://economipedia.com/definiciones/modelo-econometrico.html
## Simple linear regression.
```
Data <- read.csv("data/16/Regresion.csv")
Data
XPrice = Data$AP
YPrice = Data$NASDAQ
plot(XPrice, YPrice,
xlab="Apple",
ylab="NASDAQ",
pch =19)
```
### The ```lm()``` function.
```
help(lm)
LinearR.lm = lm(YPrice ~ XPrice, data=Data)
```
### The ```coefficients()``` function.
```
help(coefficients)
coeffs = coefficients(LinearR.lm)
coeffs
```
* The following applies the linear equation using the obtained coefficients as a function of ```XPrice```.
```
YPrice = 4124.10322215869 + 63.954703332101*(XPrice)
summary(LinearR.lm)$r.squared
summary(LinearR.lm)
```
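The same least-squares coefficients can also be reproduced outside R, for example with NumPy. The sketch below uses made-up points (not the CSV above) just to show the computation:

```python
import numpy as np

# Made-up (x, y) points lying near the line y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])

# np.polyfit with degree 1 returns [slope, intercept] of the least-squares line
slope, intercept = np.polyfit(x, y, 1)

print(round(slope, 2), round(intercept, 2))  # 1.94 1.15
```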
[Residuals](https://www.statisticshowto.com/residual/#:~:text=A%20residual%20is%20the%20vertical,are%20below%20the%20regression%20line.&text=In%20other%20words%2C%20the%20residual,explained%20by%20the%20regression%20line)
```
Predictdata = data.frame(XPrice=75)
Predictdata
```
### The ```predict()``` function.
```
help(predict)
predict(LinearR.lm, Predictdata, interval="confidence")
```
### The ```resid()``` function.
```
help(resid)
LinearR.res = resid(LinearR.lm)
summary(LinearR.res)
plot(XPrice, LinearR.res,
ylab="Residuals", xlab="XPrice",
main="Residual Plot")
```
## Multiple linear regression.
```
Data <- read.csv("data/16/RegresionMultiple.csv")
Date <- Data$Date
StockYPrice <- Data$NASDAQ
StockX1Price <- Data$AAPL
StockX2Price <- Data$IBM
StockX3Price <- Data$MSFT
MultipleR.lm = lm(StockYPrice ~ StockX1Price + StockX2Price + StockX3Price, data=Data[2:5])
summary(MultipleR.lm)
newdata = data.frame(StockX1Price=120, StockX2Price=120, StockX3Price=213)
predict(MultipleR.lm, newdata)
summary(MultipleR.lm)$r.squared
summary(MultipleR.lm)$adj.r.squared
predict(MultipleR.lm, newdata, interval="confidence")
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2020.</p>
# Week 2 -- Probability
<img align="right" style="padding-right:10px;" src="figures_wk2/stats_cover.png" width=200><br>
**Resources and References**
>**Practical Statistics for Data Scientists, 2nd Edition**<br>
>by Peter Bruce, Andrew Bruce, Peter Gedeck<br>
>Publisher: O'Reilly Media, Inc.<br>
>Release Date: May 2020<br>
>ISBN: 9781492072942<br>
<br>
<br>
>**Probability for Machine Learning**<br>
>by Jason Brownlee<br>
>https://machinelearningmastery.com/probability-for-machine-learning/
<img align="right" style="padding-right:10px;" src="figures_wk2/probability_cover.png" width=200><br>
## Data Sampling and Distribution
### Bias and Random Sampling
**Sample** -- subset of data taken from a larger data set (usually called a **population**; NOTE: different from a population in biology).<br>
**Population** -- Larger data set (real or theoretical).<br>
**N(n)** -- size of population or sample. <br>
**Random Sampling** -- Create a sample by randomly drawing elements from population.<br>
**Bias** -- Systemic error<br>
**Sample Bias** -- Sample that misrepresents the population.<br>
* Recent example: 2016 US. Presidential election polls that placed Hillary Clinton ahead of Donald Trump. **Sample bias** was one of the contributing factors to the incorrect predictions. (Source: "Harvard Researchers Warn 2016 Polling Mistakes Serve as a 'Cautionary Tale' in 2020" retrieved from https://www.thecrimson.com/article/2020/11/2/2016-election-polls-kuriwaki-isakov/)
#### Bias
Error due to bias represents something wrong with the data collection or selection system itself. In the "Practical Statistics for Data Scientists" book referenced above, the authors use the analogy of two guns shooting at a target X-Y axis:
<table style="font-size: 20px">
<tr>
<th>True Aim</th><th>Biased Aim</th>
</tr>
<tr>
<td><img src="figures_wk2/true_aim.png"></td><td><img src="figures_wk2/bias_aim.png"></td>
</tr>
</table>
The "True Aim" picture shows us the result of random errors, whereas the pattern we see in the "Biased Aim" graph reveals a systematic error: the shots cluster around a point offset from the target.
#### Selection and Self-selection bias
**Selection bias**: Refers to choosing data favorable to a particular conclusion, whether done deliberately or accidentally.
**Self-selection bias**: Product or place reviews on social media or "review sites" like Yelp are not a good source of sample data. These types of reviews are not random -- rather, reviewers typically have a reason for self-selecting. Many times due to either a very good or very bad experience and thus represents a biased sample.
It is worth noting that most non-compulsory surveys suffer from this same bias. Think of end of course surveys. Only a small number of course attendees usually take the time and effort to fill out a survey, and then usually only due to an extremely good or extremely bad course experience.
#### Random Selection
George Gallup proposed random selection as a scientific sampling method after the *Literary Digest* poll of 1936 famously predicted the incorrect outcome of Alf Landon winning the presidential election over Franklin Roosevelt.
**Population**<br>
A vital point is to correctly define the population from which the sample will be drawn. For example:<br>
* Surveying 100 random customers to walk in the door of a grocery store may yield an acceptable sample for learning public opinion about general products.
* Surveying 100 random men to walk in the grocery store about feminine hygiene products will probably yield a less than optimal result.
Data quality and appropriate sampling is often more important than data quantity.
### Sampling Distribution
**Data Distribution:** Distribution of a sample's *individual data points*.
**Sampling Distribution:** Distribution of a sample statistic, such as mean. Tends to be more regular and bell-shaped than the data itself.
Below is an example of this, recreated from *Practical Statistics for Data Scientists, 2nd Edition*, using Lending Club data.
```
%matplotlib inline
from pathlib import Path
import pandas as pd
import numpy as np
from scipy import stats
from sklearn.utils import resample
import seaborn as sns
import matplotlib.pylab as plt
sns.set()
loans_income = pd.read_csv("data/loans_income.csv", squeeze=True)
sample_data = pd.DataFrame({
'income': loans_income.sample(1000),
'type': 'Data',
})
sample_mean_05 = pd.DataFrame({
'income': [loans_income.sample(5).mean() for _ in range(1000)],
'type': 'Mean of 5',
})
sample_mean_20 = pd.DataFrame({
'income': [loans_income.sample(20).mean() for _ in range(1000)],
'type': 'Mean of 20',
})
results = pd.concat([sample_data, sample_mean_05, sample_mean_20])
print(results.head())
g = sns.FacetGrid(results, col='type', col_wrap=3,
height=4, aspect=1)
g.map(plt.hist, 'income', range=[0, 200000], bins=40)
g.set_axis_labels('Income', 'Count')
g.set_titles('{col_name}')
plt.tight_layout()
plt.show()
```
* The first graph is a histogram of 1000 individual income values.
* The second graph is a histogram of 1000 means, each computed from a sample of 5 values.
* The third graph is a histogram of 1000 means, each computed from a sample of 20 values.
#### Central Limit Theorem
The **central limit theorem** states that means of multiple samples will be a bell-shaped curve, even if the population isn't normally distributed, if sample size is large enough and not too far off of normal.
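The effect can be demonstrated with a decidedly non-normal population, such as a uniform distribution (a sketch; sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform(0, 1, size=100_000)  # flat, not bell-shaped

# Means of many samples of size 30 cluster tightly around the population mean
sample_means = [rng.choice(population, 30).mean() for _ in range(2000)]

print(round(np.mean(sample_means), 2))  # close to the population mean of 0.5
print(round(np.std(sample_means), 3))   # close to sigma/sqrt(30), about 0.053
```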
### Normal (Gaussian) Distribution
**Standard Normal Distribution**
<img style="padding-right:10px;" src="figures_wk2/normal_distribution.png"><br>
---
$ \mu = $ The population mean.
Many statistical tests, **such as t-distributions and hypothesis testing**, assume sample statistics are normally distributed. Simple mathematics exist to compare data to a standard normal distribution, however, for our purposes, a QQ-plot is faster and easier.
Normality can be checked with a **QQ-plot**. Python's *scipy* package has a QQ-plot function, called `probplot`, seen below:
```
fig, ax = plt.subplots(figsize=(4, 4))
norm_sample = stats.norm.rvs(size=100)
stats.probplot(norm_sample, plot=ax)
plt.tight_layout()
plt.show()
```
The blue markers represent *z-scores*, or standardized data points, plotted vs. standard deviations away from the mean.
### Long-tailed Distributions
**Tail:** A long, narrow area of a frequency distribution where extreme cases happen with low frequency.<br>
**Skew:** Where one tail of a distribution is longer than another.
Despite the time and effort spent teaching about normal distributions, most data is **not** normally distributed.
An example can be seen with a QQ-plot of Netflix stock data.
```
sp500_px = pd.read_csv('data/sp500_data.csv.gz')
nflx = sp500_px.NFLX
# nflx = np.diff(np.log(nflx[nflx>0]))
fig, ax = plt.subplots(figsize=(4, 4))
stats.probplot(nflx, plot=ax)
plt.tight_layout()
plt.show()
sp500_px.head()
```
Low values are below the line and high values are above the line. This tells us the data is not normally distributed and we are more likely to see extreme values than if it was normally distributed.
Sometimes non-regular data can be normalized using methods like **taking the logarithm of values greater than 0.**
```
nflx = np.diff(np.log(nflx[nflx>0]))
fig, ax = plt.subplots(figsize=(4, 4))
stats.probplot(nflx, plot=ax)
plt.tight_layout()
plt.show()
```
You can see that helps significantly, but the data is still not very normal.
### Binomial (Bernoulli) Distribution
A **binomial outcome** is one for which there are only two possible answers:<br>
* yes / no<br>
* true / false<br>
* buy / don't buy<br>
* click / don't click<br>
* etc.
At its heart, binomial distributions analyze the probability of each outcome under certain conditions.
The classic example is the coin toss. The outcome will be either heads (H) or tails (T) for any particular toss.
A **trial** is an event of interest with a discrete outcome (e.g. a coin toss).
A **success** is defined as the outcome of interest in the trials. For example, in the coin toss above, we could say we are interested in the number of H outcomes out of 10 trials (tosses). Each H outcome would be a *success*. Also represented as a "1" (following binary logic).
A **binomial distribution** gives the number of successes (*x*) in *n* trials with probability *p* of success on each trial. (The single-trial case, *n* = 1, is called a *Bernoulli distribution*.)
#### Calculating Binomial Probabilies
In general, we are concerned with calculating two situations:
1. The probability of *x* successes out of *n* trials. This is called the **probability mass function (pmf)**.<br>
2. The probability of **no more than** _x_ successes out of *n* trials. This is called the **cumulative distribution function (cdf).**
Python uses scipy's `stats.binom.pmf()` and `stats.binom.cdf()` functions, respectively, for that functionality.
**pmf example:** A fair coin has a 50% (.50) chance of coming up heads on a toss. What is the probability of getting a head (H) 7 times out of 10 tosses?
x = 7<br>
n = 10<br>
p = 0.5<br>
```
stats.binom.pmf(7, n=10, p=0.5)
```
So, there is an 11.7% chance that a coin will land on heads 7 out of 10 tosses.
**cdf example:** Using that same fair coin, what is the probability of getting a head (H) **_no more than_** four times?
x = 4<br>
n = 10<br>
p = 0.5<br>
```
stats.binom.cdf(4, n=10, p=0.5)
```
There is a 37.6% chance that there will be 4 or fewer heads in 10 trials. That is the same thing as:
`(chance of 0 H) + (chance of 1 H) + (chance of 2 H) + (chance of 3 H) + (chance of 4 H)`
```
stats.binom.pmf(0, n=10, p=0.5) + stats.binom.pmf(1, n=10, p=0.5) \
+ stats.binom.pmf(2, n=10, p=0.5) + stats.binom.pmf(3, n=10, p=0.5) \
+ stats.binom.pmf(4, n=10, p=0.5)
```
There are many other useful data distributions. Students are encouraged to research them independently.
# Bootstrapping
**Bootstrap sample:** A sample taken with replacement from a data set. <br>
**Resampling:** The process of taking repeated samples from observed data.<br>
Hypothesis testing requires some estimate of the sampling distribution. "Traditional" hypothesis testing requires formulas to create estimates of sampling distributions. *Bootstrapping* creates a sampling distribution through resampling.
Let's take a look. First, we'll find the median income of the Lending Club data.
```
loans_income.median()
```
Next, we'll use scikit-learn's `resample()` function to take 5 samples and print out the median of each.
```
for _ in range(5):
sample = resample(loans_income)
print(sample.median())
```
As you can see, the median is different for each sample.
Let's take 1000 samples and average the medians and see how different it is from the dataset median.
```
results = []
for nrepeat in range(1000):
sample = resample(loans_income)
results.append(sample.median())
results = pd.Series(results)
print('Bootstrap Statistics:')
print(f'original: {loans_income.median()}')
print(f'mean of medians: {results.mean()}')
print(f'bias: {results.mean() - loans_income.median()}')
print(f'std. error: {results.std()}')
```
Let's use bootstrapping on that wonky Netflix data. We'll take samples of 100 and store the mean of the sample in a list and do that 20,000 times.
You'll notice that the data looks much more normally distributed even without the logarithm trick.
```
sp500_px = pd.read_csv('data/sp500_data.csv.gz')
nflx = sp500_px.NFLX.values
means = []
for _ in range(20000):
mean = resample(nflx, replace=True, n_samples=100).mean()
means.append(mean)
plt.hist(means)
fig, ax = plt.subplots(figsize=(4, 4))
stats.probplot(means, plot=ax)
plt.tight_layout()
plt.show()
```
# Probability
**Joint probability:** Probability of two or more events happening at the same time.<br>
**Marginal probability:** Probability of an event regardless of other variables outcome.<br>
**Conditional probability:** Probability of an event occurring along with one or more other events. <br>
## Probability for one random variable
Probability shows the likelihood of an event happening. <br>
Probability of one random variable is the likelihood of an event that is independent of other factors. Examples include: <br>
* Coin toss.<br>
* Roll of a dice.<br>
* Drawing one card from a deck of cards. <br>
For random variable `x`, the function `P(x)` relates probabilities to all values of `x`.
<center>$Probability\ Density\ of\ x = P(x)$</center>
If `A` is a specific event of `x`,
<center>$Probability\ of\ Event\ A = P(A)$</center>
Probability of an event is calculated as *the number of desired outcomes* divided by *total number of possible outcomes*, where all outcomes are equally likely:
<center>$Probability = \frac{the\ number\ of\ desired\ outcomes}{total\ number\ of\ possible\ outcomes}$</center>
If we apply that principle to our examples above:<br>
* Coin toss: Probability of heads = 1 (desired outcome) / 2 (possible outcomes) = .50 or 50%<br>
* Dice roll: Probability of rolling 3 = 1 (specific number) / 6 (possible numbers) = .1666 or 16.66%<br>
* Cards: Probability of drawing 10 ♦ = 1 (specific card) / 52 (possible cards) = .0192 or 1.92%<br>
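These three calculations can be confirmed with Python's `Fraction`, which avoids any rounding:

```python
from fractions import Fraction

coin = Fraction(1, 2)   # P(heads): 1 desired outcome / 2 possible outcomes
die = Fraction(1, 6)    # P(rolling a 3): 1 / 6
card = Fraction(1, 52)  # P(drawing the 10 of diamonds): 1 / 52

print(float(coin), float(die), float(card))  # 0.5, ~0.1667, ~0.0192

# Complement: probability of NOT rolling a 3
print(1 - die)  # 5/6
```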
<center>$Sum\ of\ Probabilities\ for\ all\ outcomes = 1.0$</center>
---
The probability of an event not occurring is called the **complement** and is calculated:
<center>$Probability\ of\ Event\ not\ occurring = Probability\ of\ all\ outcomes\ - Probability\ of\ one\ outcome$</center>
That is:
<center>$P(not\ A) = 1 - P(A)$</center>
## Probability of multiple random variables
Each **column** in a machine learning data set represents a **variable** and each *row* represents an *observation*. Much of the behind-the-scenes math in machine learning deals with the probability of one variable in the presence of the observation's other variables.
Let's look again at this section's definitions, in light of what we saw above:
**Joint probability:** Probability of events *A* and *B*.<br>
**Marginal probability:** Probability of event *A* given variable *Y*.<br>
**Conditional probability:** Probability of event *A* given event *B*. <br>
### Joint probability
Joint probability is the chance that **both** event A and event B happen. This can be written several ways:
<center>
$$P(A\ and\ B)$$
$$P(A\ \cap\ B)$$
$$P(A,B)$$
</center>
Joint probability of A and B can be calculated as *the probability of event A given event B times the probability of event B*. In more mathematical terms:
<center>$P(A\ \cap\ B) = P(A\ given\ B)\ \times\ P(B)$</center>
## Marginal probability
For given fixed event *A* and variable *Y*, marginal probability is the sum of probabilities that one of *Y*'s events will happen along with fixed event *A*. Let's look at that in table form.
* Let's say we ask a group of 60 people which color they like better, **blue** or **pink**.
|Gender| Blue|Pink|Total|
|------|-----|----|-----|
|Male|25|10|P(male) = 35 / 60 = 0.5833|
|Female|5|20|P(female) = 25 / 60 = 0.4166|
|Total|P(blue) = 30 / 60 = .50 | P(pink) = 30 / 60 = .50| total = 60
**Rows** represent the probability that a respondent was a particular gender.<br>
**Columns** represent the probability of the response being that color.<br>
To express that more mathematically,
<center>$P(X=A)=\sum\limits_{y\in Y}P(X=A,\ Y=y)$</center>
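Using the survey counts from the table above, the marginal P(blue) can be sketched as a sum of joint probabilities:

```
# Counts from the blue/pink survey table
counts = {("male", "blue"): 25, ("male", "pink"): 10,
          ("female", "blue"): 5, ("female", "pink"): 20}
total = sum(counts.values())  # 60 respondents

# Marginal P(blue): sum P(blue, gender) over every gender
p_blue = sum(n / total for (gender, color), n in counts.items() if color == "blue")
print(round(p_blue, 2))  # 0.5
```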
## Conditional probability
Remember, in programming languages, we call `if->then->else` statements *conditionals*.
A **conditional probability** can be thought of as **The probability that event A will happen _if_ event B has happened**.
The slightly more "mathy" way to say that is: **The probability of event A _given_ event B.**
In formula form, we use **"|"** (pipe) as the "given."
<center>
$P(A\ given\ B)$<br>
or<br>
$P(A|B)$<br>
</center>
<br><br>
The conditional probability of event A given event B can be calculated by:<br><br>
<center>$P(A|B) = \frac{P(A \cap B)}{P(B)}$</center>
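Continuing the survey example, a sketch of the conditional probability that a male respondent preferred blue:

```
p_male = 35 / 60           # marginal P(male)
p_blue_and_male = 25 / 60  # joint P(blue and male)

# Conditional probability: P(blue | male) = P(blue and male) / P(male)
p_blue_given_male = p_blue_and_male / p_male
print(round(p_blue_given_male, 4))  # 0.7143
```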
---
**All of the probability above was included simply so we could understand Bayes Theorem (below) and its application to machine learning.**
# Bayes Theorem
Bayes Theorem gives us a structured way to calculate **conditional probabilities**.
Remember from above, conditional probability is the probability that *event A* will happen *given event B*. In mathematical terms, that is:
<center>$P(A|B) = \frac{P(A \cap B)}{P(B)}$</center>
Note that $P(A|B) \neq P(B|A)$
**Bayes Theorem** gives us another way to calculate conditional probability when the joint probability is not known:
<center>$P(A|B) = \frac{P(B|A)\ \times\ P(A)}{P(B)}$</center>
However, we may not know $P(B)$. It can be calculated in an alternative way:
<center>$P(B)=P(B|A)\ \times\ P(A)\ +\ P(B|not\ A)\ \times\ P(not\ A)$</center><br>
Then, through the mathematical trickery of substitution, we get:<br>
<center>$P(A|B) = \frac{P(B|A)\ \times\ P(A)}{P(B|A)\ \times\ P(A)\ +\ P(B|not\ A)\ \times\ P(not\ A)}$</center>
Also, remember that <br>
<center>$P(not\ A)=1 - P(A)$</center><br>
Finally, if we have $P(not\ B|not\ A)$ we can calculate $P(B|not\ A)$:<br>
<center>$P(B|not\ A) = 1 - P(not\ B|not\ A)$</center>
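The chain of formulas above can be collected into one small helper function (a sketch; the sanity-check numbers are arbitrary):

```
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """Posterior P(A|B) when the evidence P(B) must be expanded."""
    p_not_a = 1 - p_a  # complement rule
    # Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    return (p_b_given_a * p_a) / p_b

# Arbitrary sanity-check numbers
print(round(bayes(0.9, 0.5, 0.1), 2))  # 0.9
```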
### Terminology:
The probabilities are given English names to help understand what they are trying to say:
* $P(A|B)$: Posterior probability<br>
* $P(A)$: Prior probability<br>
* $P(B|A)$: Likelihood<br>
* $P(B)$: Evidence<br>
Now, Bayes Theorem can be restated as:
<center>$Posterior = \frac{Likelihood\ \times\ Prior}{Evidence}$</center>
---
Jason Brownlee gives us the fantastic analogy of the probability that there is fire given that there is smoke.
* $P(Fire)$ is the prior<br>
* $P(Smoke|Fire)$ is the likelihood<br>
* $P(Smoke)$ is the evidence<br>
<center>$P(Fire|Smoke) = \frac{P(Smoke|Fire)\ \times\ P(Fire)}{P(Smoke)}$</center>
# Bayes Theorem as Binary Classifier
Bayes Theorem is often used as a **binary classifier** -- the classic example that we will look at in a few moments is detecting spam in email. But first, more terminology.
## Terminology
* $P(not\ B|not\ A)$: True Negative Rate **TNR** (specificity)<br>
* $P(B|not\ A)$: False Positive Rate **FPR** <br>
* $P(not\ B|A)$: False Negative Rate **FNR** <br>
* $P(B|A)$: True Positive Rate **TPR** (sensitivity or recall) <br>
* $P(A|B)$: Positive Predictive Value **PPV** (precision) <br>
Applying the above to the longer formula above:
<center>$Positive\ Predictive\ Value = \frac{True\ Positive\ Rate\ \times\ P(A)}{True\ Positive\ Rate\ \times\ P(A)\ +\ False\ Positive\ Rate\ \times\ P(not\ A) }$</center>
## Examples
Let's look at some (contrived) examples, courtesy of Jason Brownlee:
### Elderly Fall and Death
Let's define elderly as over 80 years of age. What is the probability that an elderly person will die from a fall? Let's use 10% as the base rate for elderly death, P(A); 5% as the base rate for an elderly fall, P(B); and assume that 7% of elderly people who die had a fall, P(B|A).
<center>$P(A|B) = \frac{P(B|A)\ \times\ P(A)}{P(B)}$</center><br>
<center>$P(Die|Fall) = \frac{P(Fall|Die)\ \times\ P(Die)}{P(Fall)}$</center><br>
<center>$P(A|B) = \frac{0.07\ \times\ 0.10}{0.05}$</center><br>
<center>$P(Die|Fall) = 0.14$</center><br>
So, using these completely fake numbers, 14% of elderly falls would end in death.
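The same arithmetic, scripted with the fake numbers from the text:

```
p_die = 0.10             # P(A): base rate of elderly death
p_fall = 0.05            # P(B): base rate of elderly falls
p_fall_given_die = 0.07  # P(B|A)

# Bayes Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_die_given_fall = p_fall_given_die * p_die / p_fall
print(round(p_die_given_fall, 2))  # 0.14
```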
### Spam Detection
Let's say our spam filter put an email in the spam folder. What is the probability it was spam?
* 2% of email is spam - P(A).
* 99% accuracy on the spam filter - P(B|A)
* 0.1% of email is incorrectly marked as spam - P(B|not A)
<center>$P(A|B) = \frac{P(B|A)\ \times\ P(A)}{P(B)}$</center><br>
<center>$P(Spam|Detected) = \frac{P(Detected|Spam)\ \times\ P(Spam)}{P(Detected)}$</center><br>
Unfortunately, we don't know P(B) -- P(Detected), but we can figure it out. Recall,
<center>$P(B)=P(B|A)\ \times\ P(A)\ +\ P(B|not\ A)\ \times\ P(not\ A)$</center><br>
<center>$P(Detected)=P(Detected|Spam)\ \times\ P(Spam)\ +\ P(Detected|not\ Spam)\ \times\ P(not\ Spam)$</center><br>
And, we can calculate P(not Spam):
<center>$P(not\ Spam) = 1 - P(Spam) = 1 - 0.02 = 0.98$</center><br>
<center>$P(Detected) = 0.99\ \times\ 0.02\ +\ 0.001\ \times\ 0.98$</center><br>
Remember order of operations here... multiply before addition: <br>
<br>
<center>$P(Detected) = 0.0198 + 0.00098 = 0.02078$</center><br>
We can finally put it all together:<br>
<center>$P(Spam|Detected) = \frac{0.99\ \times\ 0.02}{0.02078}$</center><br>
<center>$P(Spam|Detected) = \frac{0.0198}{0.02078}$</center><br>
<center>$P(Spam|Detected) = 0.9528392$</center><br>
Or, about a 95% chance that the email was classified properly.
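The whole spam calculation in one short script, using the numbers from the text:

```
p_spam = 0.02                 # P(A): base rate of spam
p_det_given_spam = 0.99       # P(B|A): true positive rate
p_det_given_not_spam = 0.001  # P(B|not A): false positive rate

# Evidence via the law of total probability
p_detected = p_det_given_spam * p_spam + p_det_given_not_spam * (1 - p_spam)

# Bayes Theorem
p_spam_given_detected = p_det_given_spam * p_spam / p_detected
print(round(p_spam_given_detected, 4))  # 0.9528
```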
# Naive Bayes Classification
Supervised machine learning is typically used for prediction or classification, as we will see in Week 7.
Bayes Theorem can be used for classification; however, even with modern computing advances, figuring out all the probabilities of the dependent variables would be impractical. For this reason, the mathematics of Bayes Theorem is simplified in various ways, including by assuming all variables are independent.
We will look at Naive Bayes Classification in more depth later in this class and again in MSDS 680 Machine Learning.
---
# Using surface roughness to date landslides
### Overview
In March of 2014, unusually high rainfall totals over a period of several weeks triggered a deep-seated landslide that mobilized into a rapidly moving debris flow. The debris flow inundated the town of Oso, Washington, resulting in 43 fatalities and the destruction of 49 houses. Other landslide deposits are visible in the vicinity of the 2014 Oso landslide (see figure below). The goal of this assignment is to estimate the ages of the nearby landslide deposits so that we can say something about the recurrence interval of large, deep-seated landslides in this area. Do they happen roughly every 100 years, 5000 years, or do they only happen once every 100,000 years?
<img src="OsoOverviewMap.jpg" alt="Drawing"/>
Our strategy will be to take advantage of the fact that recent landslides have “rougher” surfaces. Creep and bioturbation smooth the landslide deposits over time in a way that we can predict (using the diffusion equation!). We will use the standard linear diffusion model, shown below, to simulate how the surface of a landslide deposit will change with time:
$$ \frac{\partial z}{\partial t}=D\frac{\partial^2z}{\partial x^2} $$
Here, $z$ denotes elevation, $x$ is distance in the horizontal direction, and $D$ is the colluvial transport coefficient. Recall, that in a previous exercise we estimated the value of $D$ within the San Francisco Volcanic Field (SFVF) in northern Arizona. We found that $D\approx5$ $\mathrm{m^2}$ $\mathrm{kyr}^{-1}$ in the SFVF. In this exercise, we will use a larger value of $D=10$ $\mathrm{m^2}$ $\mathrm{kyr}^{-1}$ since our study site near Oso, Washington, is in a wetter climate with more vegetation (and therefore greater rates of bioturbation). Once we have a model that lets us determine how the surface of a landslide deposit will change with time, we may be able to use it to describe how surface roughness varies with age.
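The assignment's `oso` helper (used below) hides the numerics, but the model itself is simple to sketch. The following is a minimal explicit (forward-time, centered-space) finite-difference solver for the diffusion equation, applied to a made-up Gaussian bump rather than the real Oso transect:

```
import numpy as np

def diffuse(z, dx, D, t_total, dt):
    """Explicit (FTCS) solver for dz/dt = D * d2z/dx2.

    Endpoints are held fixed; dt must satisfy dt <= dx**2 / (2*D) for stability.
    """
    z = np.asarray(z, dtype=float).copy()
    for _ in range(int(t_total / dt)):
        z[1:-1] += D * dt / dx**2 * (z[2:] - 2 * z[1:-1] + z[:-2])
    return z

# Toy initial surface: a Gaussian bump (a stand-in for hummocky topography)
x = np.arange(0.0, 101.0, 1.0)          # 1 m spacing
z0 = np.exp(-((x - 50.0) / 5.0) ** 2)   # 1 m high bump

# Diffuse for 1 kyr with D = 10 m^2/kyr (dt = 0.01 kyr satisfies the stability limit)
z1 = diffuse(z0, dx=1.0, D=10.0, t_total=1.0, dt=0.01)
print(z1.max() < z0.max())  # True: diffusion lowers and widens the bump
```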
### Landslide Deposit Morphology
First, examine the map below showing the slope in the area of the Oso Landslide. Also pictured is the Rowan Landslide, which is older than the Oso Landslide.
<img src="OsoSlopeMap.jpg" alt="Drawing"/>
Notice how the Oso landslide, which is very recent, is characterized by a number of folds and a very “rough” surface. This type of hummocky topography is common in recent landslide deposits. The plot on the right shows a topographic transect that runs over the Oso Landslide deposit from north to south.
### Quantifying Surface Roughness
If we are ultimately going to use surface roughness to date landslide deposits (i.e. older deposits are less rough, younger deposits are more rough), we first need a way to quantify what we mean by "roughness". One way to quantify surface roughness is to extract a transect from the slope data and compute the standard deviation of the slope along that transect. That is what we do here; we compute the standard deviation of the slope (SDS) over each 30-meter interval along a transect and then take the mean of all of these standard deviations to arrive at an estimate of roughness for each landslide deposit that we are interested in dating. The plots below show slope (deg) along transects that run over the 2014 Oso Landslide and the nearby Rowan Landslide (unknown age). Note that the Rowan landslide looks slightly less “rough” and therefore has a lower SDS value associated with it.
<img src="RowanOsoTransects.png" alt="Drawing"/>
Don't worry about understanding exactly how SDS is computed. The most important thing to note here is that SDS gives us a way to objectively define how "rough" a surface is. Higher values of SDS correspond to rough surfaces whereas lower values correspond to smoother surfaces.
### Estimating the Age of the Rowan Landslide
We will now estimate the age of the Rowan Landslide using the diffusion model. This is the same model we used to simulate how cinder cones evolve. Since the same processes (creep, bioturbation) are driving sediment transport on landslide deposits, we can apply the same model here. However, when we modeled cinder cones we knew what the initial condition looked like. All cinder cones start with a cone shape that is characterized by hillslope angles that are roughly equal to the angle of repose for granular material ($\approx 30^{\circ}$). We do not know what each of these landslide deposits looked like when they were first created. So, we will assume that all landslide deposits (including the Rowan Landslide) looked like the Oso Landslide immediately after they were emplaced. Of course, no two landslide deposits ever look exactly the same but it is reasonable to assume that the statistical properties of the initial landslide deposits (i.e. roughness) are similar to each other. We will make this assumption and simulate how the roughness, as quantified using the SDS, of the Oso Landslide deposit will change over time. If we know the relationship between SDS and deposit age, then we can estimate the age of any landslide deposit in this region simply by computing its SDS.
Let’s start by using the model to estimate how much the Oso Landslide deposit will change after it is subjected to erosion via diffusive processes (e.g. bioturbation, rain splash, freeze-thaw) for 100 years. The code below is set up to run the diffusion model using a topographic transect through the Oso Landslide as the initial topography. All you need to do is assign realistic values for the colluvial transport coefficient (landscape diffusivity) and choose an age. Use a value of $D=10$ $\mathrm{m}^2$ $\mathrm{kyr}^{-1}$ for the colluvial transport coefficient and an age of 0.1 kyr (since we want to know what the deposit will look like when the Oso Landslide is 100 years old). Then run the code block below.
```
D=10; # Colluvial transport coefficient [m^2/kyr] (i.e. landscape diffusivity)
age=0.1; # Age of the simulated landslide deposit [kyr]
# !! YOU DO NOT NEED TO MODIFY THE CODE BELOW THIS LINE !!
from diffusion1d import oso
[distance,elevation,SDS]=oso(D,age)
import matplotlib.pyplot as plt
plt.plot(distance,elevation,'b-')
plt.xlabel('Distance (m)', fontsize=14)
plt.ylabel('Elevation (m)', fontsize=14)
plt.title('SDS = '+str(round(SDS,1)), fontsize=14)
plt.show()
```
You should see that the SDS value for the topography (shown on the plot) after 0.1 kyr of erosion is slightly smaller than the SDS value of the initial landslide surface (e.g. the SDS value of the Oso Landslide deposit). This is a result of the fact that diffusive processes smooth the surface over time, but 0.1 kyr is not a sufficient amount of time to substantially *smooth* the surface of the landslide deposit. Although the SDS value has decreased over a time period of 0.1 kyr, it is still larger than the SDS value that we have computed for the Rowan Landslide deposit. Therefore, the Rowan Landslide deposit must be older than 0.1 kyr. Continue to run the model using the code cell above with increasing values for the age until you find an age that gives you a SDS value that is close to the one computed for the Rowan Landslide ($SDS\approx5.2$). Based on this analysis, how old is the Rowan Landslide (you can round your answer to the nearest 0.1 kyr)?
INSERT YOUR ANSWER HERE
### How does SDS vary with age?
You have now successfully dated the Rowan Landslide! This process does not take too long, but it can be inefficient if we want to date a large number of landslide deposits. Later, I will give you SDS values for 12 different landslide deposits in this area. We want to date all of them so that we can have more data to accomplish our original goal of saying something about the recurrence interval of large, deep-seated landslides in this area. To do this, we will determine an equation that quantifies the relationship between SDS and age using our diffusion model. Then, we will use this equation to tell us the age of each landslide deposit based on its SDS.
To get started on this process, let's use the model in the code cell below to determine how the SDS value changes as we change the age of the landslide deposit. Use the model (in the code cell below) to simulate the surface of the Oso Landslide after 1 kyr, 2 kyr, 5 kyr, 10 kyr, and 20 kyr. Continue to use a value of $D=10$ $\mathrm{m}^2$ $\mathrm{kyr}^{-1}$. Write down each of the SDS values that you get for these 5 different ages. You will need each of them to complete the next step. Note that it may take 5-10 seconds to compute the SDS when the ages are 10 kyr or 20 kyr since more computations need to be performed to complete these longer simulations.
```
D=10; # Colluvial transport coefficient [m^2/kyr] (i.e. landscape diffusivity)
age=0.1; # Age of the simulated landslide deposit [kyr]
# !! YOU DO NOT NEED TO MODIFY THE CODE BELOW THIS LINE !!
from diffusion1d import oso
[distance,elevation,SDS]=oso(D,age)
import numpy as np
itopo=np.loadtxt('osotransect.txt')
import matplotlib.pyplot as plt
plt.plot(distance,elevation,'b-',label="Modeled Topography")
plt.plot(distance,itopo,'--',color='tab:gray',label="Initial Topography")
plt.xlabel('Distance (m)', fontsize=14)
plt.ylabel('Elevation (m)', fontsize=14)
plt.title('SDS = '+str(round(SDS,1)), fontsize=14)
plt.show()
```
### A general method for estimating age based on SDS
In the code below, we are going to create several variables ("SDS_0kyr", "SDS_1kyr", etc) so that we can store the information that you obtained in the previous section. Each variable will hold the SDS value of our idealized landslide deposit for different ages. Notice that the variable called *SDS_0kyr* is equal to the SDS value of the Oso transect, which is the same as the SDS value at a time of 0 kyr (since the landslide occurred in 2014). The variables *SDS_1kyr*, *SDS_2kyr*,...,*SDS_20kyr* are all set equal to a value of 1. Change these values in the code block below to reflect the SDS values that you computed in the above exercise. For example, if you determined that the landslide deposit has an SDS value of $6.4$ after 5 kyr then set *SDS_5kyr* equal to $6.4$. When you are finished, run the code cell. The code should produce a plot of your data. Verify that the plot appears to be accurate.
```
SDS_0kyr=9.5 # This is the initial (i.e. t=0) SDS value of our landslide deposit.
SDS_1kyr=1 # Change this value from "1" to the SDS value after 1 kyr.
SDS_2kyr=1 # Change this value from "1" to the SDS value after 2 kyr.
SDS_5kyr=1 # Change this value from "1" to the SDS value after 5 kyr.
SDS_10kyr=1 # Change this value from "1" to the SDS value after 10 kyr.
SDS_20kyr=1 # Change this value from "1" to the SDS value after 20 kyr.
# You do not need to modify any code below this point
import numpy as np
age=np.array([0,1,2,5,10,20])
SDS=np.array([SDS_0kyr,SDS_1kyr,SDS_2kyr,SDS_5kyr,SDS_10kyr,SDS_20kyr])
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o') # Create the scatter plot, set marker size, set color, set marker type
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.show()
```
Now, we need to find a way to use the information above to come up with a more general relationship between SDS and age. Right now we only have 6 points on a graph. We have no way to determine the age of a landslide if its SDS value falls in between any of the points on our plot. One way to proceed is to fit a curve to our 6 data points. Python has routines that can be used to fit a function to X and Y data points. You may have experience using similar techniques in programs like Excel or MATLAB.
Before proceeding to work with our data, let's examine how this process of curve fitting works for a simple case. Suppose we are given three points, having X coordinates of 1, 2, and 3 and corresponding Y coordinates of 3, 5, and 7. Below is an example of how to fit a line to data using Python. **Do not worry about understanding how all of the code works. The aim of this part of the exercise is simply to introduce you to the types of tools that are available to you in programming languages like Python. That way, if you run into problems later in your professional or academic career, you will know whether or not using Python or a similar approach will be helpful.** Run the code block below. Then we will examine the output of the code.
```
# You do not need to modify any code in this cell
# First, define some X data
X=[1,2,3]
# Then define the corresponding Y data
Y=[3,5,7]
# Use polyfit to find the coefficients of the best fit line (i.e the slope and y-intercept of the line)
import numpy as np
pfit=np.polyfit(X,Y,1)
# Print the values contained in the variable "pfit"
print(pfit)
```
You should see two values printed at the bottom of the code block. Python has determined the line that best fits the X and Y data that we provided. As you know, a line is described by two numbers: a slope and a y-intercept. Not surprisingly, Python has given us two numbers. The first number, which is a 2, corresponds to the slope of the best fit line. The second number, which is a 1, corresponds to the y-intercept. Thus, we now know that the best fit line for this X and Y data is given by
$$ Y=2X+1$$
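We can check that fit programmatically with `np.polyval`, which evaluates a fitted polynomial at given points:

```
import numpy as np

X = [1, 2, 3]
Y = [3, 5, 7]
pfit = np.polyfit(X, Y, 1)  # returns [slope, y-intercept]

# The fitted line should reproduce the Y data
print(np.round(pfit, 6))    # [2. 1.]
print(np.polyval(pfit, X))  # [3. 5. 7.]
```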
### Fitting a line to your data
Now that we know how to interpret the output from the *polyfit* function, we can see what information it gives us about the relationship between SDS and age. Look at the plot that you created earlier that shows age as a function of SDS. Age is the Y variable (i.e. the dependent variable) and SDS is the X variable (or independent variable). This is what we want because we ultimately want to be able to estimate the age of a landslide based on its SDS value.
In the code below, we use polyfit to find the line that best describes the relationship between age and SDS. Notice that the code looks exactly like the code for the simple curve fitting example shown above except that *SDS* has been substituted for X and *age* has been substituted for Y. Run the code below (you don't need to make any changes to it) and then we will discuss the output.
```
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,1)
# Print the values contained in the variable "pfit"
print(pfit)
```
Python has determined the line that best describes the relationship between SDS and age. The first number in the output represents the slope of the line and the second number is the y-intercept. The first number (the slope) should be roughly $-2.65$. The second number (the y-intercept) should be roughly $21.5$. This means that
$$
\mathrm{AGE}=21.5-2.65 \cdot{} \mathrm{SDS}
$$
where AGE denotes the age of the landslide deposit in kyr. The code below will plot your best fit line on top of the actual data. Run the code below to see what your best fit line looks like.
```
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,1);
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o',label="Original Data") # Create the scatter plot, set marker size, set color, set marker type
plt.plot(SDS,pfit[1]+pfit[0]*SDS,'k-',label="Best Fit Line")
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.legend()
plt.show()
```
You should see that a line does not fit the data very well. If you have correctly completed the assignment to this point, you will notice that the data points (blue circles) in the above plot show that age decreases rapidly with SDS at first and then decreases more slowly at higher SDS values. This pattern suggests that age varies nonlinearly with SDS. Motivated by this observation, let’s see if a 2nd order polynomial (i.e. a quadratic function) will provide a better fit to our data.
### Fitting a quadratic function to your SDS data
We use *polyfit* in the code below in much the same way as before. The only difference is that we want Python to find the quadratic function that best describes our data. We still need to provide the X data (i.e. "SDS") and the Y data (i.e. age). This time we change the third input for the *polyfit* function from a 1 (indicating that you want your data fit to a 1st order polynomial, which is a line) to a 2 (which indicates that you want your data fit to a 2nd order polynomial, which is a quadratic). Run the code block below and Python will determine the quadratic function that best fits your data.
```
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,2)
# Print the values contained in the variable "pfit"
print(pfit)
```
Notice that Python returns three numbers. This is because three numbers are required to define a quadratic function, which looks like:
$$ AGE=A\cdot (SDS)^2+B\cdot SDS+C $$
The first number above is the coefficient $A$. The second number is equal to $B$ and the third is equal to $C$. In your notes, write down the equation of the best fit quadratic function. You will need to use this equation to finish the exercise.
Let's see how well this quadratic function fits our data. Run the code below to plot the best fit quadratic function on the same plot as your data. Verify that the best fit quadratic looks reasonable in comparison to the actual data points. In other words, it should look like the curve fits the data reasonably well.
```
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,2);
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o',label="Original Data") # Create the scatter plot, set marker size, set color, set marker type
plt.plot(SDS,pfit[2]+pfit[1]*SDS+pfit[0]*SDS**2,'k-',label="Best Fit Quadratic")
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.legend()
plt.show()
```
You now have a model (i.e. the quadratic function that you just found) that can be used to predict the age of a landslide based on its SDS. It is definitely not a perfect model but it will be ok for our purposes today. If we had more time, it would be beneficial to try to fit our data to a function that looks like:
$$
\displaystyle{AGE=Ae^{-B\cdot SDS}}
$$
Despite its limitations, the best quadratic function that we have found will be ok for our purposes. It will allow us to come up with rough estimates for the ages of other landslides in this area.
### Landslide recurrence
Below is a list of SDS values for $12$ other landslide deposits in the area of the Oso landslide. Use the best-fit quadratic function from your above analysis to compute the age of each deposit based on its SDS. You can make your computations any way that you like (excel, calculator, etc) - you do not need to write code to do this.
5.72
5.11
4.40
4.57
5.53
5.55
4.38
6.13
6.57
6.08
6.47
5.81
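If you prefer to script the computation, a sketch is below. The coefficients `A`, `B`, and `C` are placeholders chosen purely for illustration — substitute the values from your own `polyfit` output:

```
# Placeholder coefficients for AGE = A*SDS**2 + B*SDS + C -- replace with your own
A, B, C = 1.0, -15.0, 60.0  # hypothetical values

sds_values = [5.72, 5.11, 4.40, 4.57, 5.53, 5.55,
              4.38, 6.13, 6.57, 6.08, 6.47, 5.81]

ages = [A * s**2 + B * s + C for s in sds_values]
for s, age_kyr in zip(sds_values, ages):
    print(f"SDS = {s:.2f} -> age = {age_kyr:.2f} kyr")
```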
How many landslides have ages less than 1 kyr?
INSERT YOUR ANSWER HERE
How many landslides have ages between 1 kyr and 3 kyr?
INSERT YOUR ANSWER HERE
How many landslides have ages greater than 3 kyr?
INSERT YOUR ANSWER HERE
Based on your analysis, is it likely that this area will experience another large, deep-seated landslide within the next one thousand years? Since we are not doing a rigorous analysis of recurrence intervals, you do not need to do any calculations (other than those needed to answer the three questions above). In the space below, include a paragraph of text (3-5 sentences) in which you answer this question and provide some justification for your reasoning. Your justification could include (but does not have to) some discussion about the limitations of the above analysis.
INSERT YOUR ANSWER HERE
---
## regular expressions
```
input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code."
import re
zips= re.findall(r"\d{5}", input_str)
zips
from urllib.request import urlretrieve
urlretrieve("https://raw.githubusercontent.com/ledeprogram/courses/master/databases/data/enronsubjects.txt", "enronsubjects.txt")
subjects = [x.strip() for x in open("enronsubjects.txt").readlines()]  # .strip() removes the trailing "\n" from each line
subjects[:10]
subjects[-10:]
[line for line in subjects if line.startswith("Hi!")]
import re
[line for line in subjects if re.search("shipping", line)] # keep lines where the pattern matches anywhere in the line
```
### metacharacters
special characters that you can use in regular expressions that have a
special meaning: they stand in for multiple different characters of the same "class"
.: any char
\w any alphanumeric char (a-z, A-Z, 0-9, and _)
\s any whitespace char (" ", \t, \n)
\S any non-whitespace char
\d any digit(0-9)
\. actual period
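A few self-contained examples of these character classes on sample strings (the strings are made up, not from the Enron corpus):

```
import re

print(re.findall(r"\d", "room 4, floor 12"))  # ['4', '1', '2']
print(re.findall(r"\w+", "hi there!"))        # ['hi', 'there']
print(re.findall(r"\S+", "a  b\tc"))          # ['a', 'b', 'c']
```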
```
[line for line in subjects if re.search("sh.pping", line)] # . matches any single character; "sh.pping" is the pattern
# subjects that contain a time, e.g., 5:52pm or 12:06am
[line for line in subjects if re.search("\d:\d\d\wm", line)] # \d:\d\d\wm a template read character by character
[line for line in subjects if re.search("\.\.\.\.\.", line)]
# subject lines that have dates, e.g. 12/01/99
[line for line in subjects if re.search("\d\d/\d\d/\d\d", line)]
[line for line in subjects if re.search("6/\d\d/\d\d", line)]
```
### define your own character classes
inside your regular expression, write `[aeiou]`
```
[line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]",line)]
[line for line in subjects if re.search("F[wW]:", line)] # F followed by either a lowercase w or an uppercase W
# subjects that contain a time, e.g., 5:52pm or 12:06am
[line for line in subjects if re.search("\d:[012345]\d[apAP][mM]", line)]
```
### metacharacters 2: anchors
^ beginning of a str
$ end of str
\b word boundary
```
# begin with the New York character #anchor the search of the particular string
[line for line in subjects if re.search("^[Nn]ew [Yy]ork", line)]
[line for line in subjects if re.search("\.\.\.$", line)]
[line for line in subjects if re.search("!!!!!$", line)]
# find sequence of characters that match "oil"
[line for line in subjects if re.search("\boil\b", line)]
```
### aside: matacharacters and escape characters
#### escape sequences \n: new line; \t: tab \\backslash \b: word boundary
```
x = "this is \na test"
print(x)
x= "this is\t\t\tanother test"
print(x)
# ascii backspace
print("hello there\b\b\b\b\bhi")
print("hello\nthere")
print("hello\\nthere")
normal = "hello\nthere"
raw = r"hello\nthere" #don't interpret any escape character in the raw string
print("normal:", normal)
print("raw:", raw)
[line for line in subjects if re.search(r"\boil\b", line)] # r"" is a raw string; always use raw strings for regular expressions
[line for line in subjects if re.search(r"\b\.\.\.\b", line)]
[line for line in subjects if re.search(r"\banti", line)] #\b only search anti at the beginning of the word
```
### metacharacters 3: quantifiers
{n} matches exactly n times
{n,m} matches at least n times, but no more than m times
{n,} matches at least n times, but maybe infinite times
+ match at least once ({1,})
* match zero or more times
? match one time or zero times
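A few quick demos of the quantifiers on sample strings (not from the corpus):

```
import re

print(bool(re.search(r"o{2}", "book")))     # True: two o's in a row
print(re.findall(r"\d{2,3}", "7 42 1776"))  # ['42', '177']: greedy runs of 2-3 digits
print(bool(re.search(r"colou?r", "color"))) # True: ? makes the u optional
```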
```
[line for line in subjects if re.search(r"[A-Z]{15,}", line)]
[line for line in subjects if re.search(r"[aeiou]{4}", line)] # lines containing four vowels in a row
[line for line in subjects if re.search(r"^F[wW]d?:", line)] # lines beginning with F, then w or W, then an optional d, then a colon; ? makes the d optional
[line for line in subjects if re.search(r"[nN]ews.*!$", line)] # .* matches any characters between "ews" and the "!" at the end of the line
[line for line in subjects if re.search(r"^R[eE]:.*\b[iI]nvestor", line)]
```
### more metacharacters: alternation
(?:x|y) match either x or y
(?:x|y|z) match x, y, or z
```
[line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)]
[line for line in subjects if re.search(r"(energy|oil|electricity)\b", line)]
```
### capturing
#### read the whole corpus in as one big string
```
all_subjects = open("enronsubjects.txt").read()
all_subjects[:1000]
# domain names: foo.org, cheese.net, stuff.com
re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects)
## differences between re.search() yes/no
##re.findall []
input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code."
re.search(r"\b\d{5}\b", input_str)
```
```
re.findall(r"New York \b\w+\b", all_subjects)
re.findall(r"New York (\b\w+\b)", all_subjects) # parentheses capture a group; findall returns only the captured part
```
### using re.search to capture
```
src = "this example has been used 423 times"
if re.search(r"\d\d\d\d", src):
print("yup")
else:
print("nope")
src = "this example has been used 423 times"
match = re.search(r"\d\d\d", src)
type(match)
print(match.start())
print(match.end())
print(match.group())
for line in subjects:
match = re.search(r"[A-Z]{15,}", line)
if match: #if find that match
print(match.group())
courses=[
    "LEDE 6001: Data and Databases",  # sample entries (hypothetical) so the loop has something to parse
    "MATH 101: Calculus",
]
print("Course catalog report:")
for item in courses:
match = re.search(r"^(\w+) (\d+): (.*)$", item)
print(match.group(1)) #group 1: find the item in first group
print("Course dept", match.group(1))
print("Course #", match.group(2))
print("Course title", match.group(3))
```
---
# Bite Size Bayes
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## The Euro problem
In [a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/07_euro.ipynb) I presented a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html):
> A statistical statement appeared in The Guardian on
Friday January 4, 2002:
>
> >"When spun on edge 250 times, a Belgian one-euro coin came
up heads 140 times and tails 110. ‘It looks very suspicious
to me’, said Barry Blight, a statistics lecturer at the London
School of Economics. ‘If the coin were unbiased the chance of
getting a result as extreme as that would be less than 7%’."
>
> But [asks MacKay] do these data give evidence that the coin is biased rather than fair?
To answer this question, we made these modeling decisions:
* If you spin a coin on edge, there is some probability, $x$, that it will land heads up.
* The value of $x$ varies from one coin to the next, depending on how the coin is balanced and other factors.
We started with a uniform prior distribution for $x$, then updated it 250 times, once for each spin of the coin. Then we used the posterior distribution to compute the MAP, posterior mean, and a credible interval.
But we never really answered MacKay's question.
In this notebook, I introduce the binomial distribution and we will use it to solve the Euro problem more efficiently. Then we'll get back to MacKay's question and see if we can find a more satisfying answer.
## Binomial distribution
Suppose I tell you that a coin is "fair", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: `HH`, `HT`, `TH`, and `TT`.
All four outcomes have the same probability, 25%. If we add up the total number of heads, it is either 0, 1, or 2. The probability of 0 and 2 is 25%, and the probability of 1 is 50%.
More generally, suppose the probability of heads is `p` and we spin the coin `n` times. What is the probability that we get a total of `k` heads?
The answer is given by the binomial distribution:
$P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}$
where $\binom{n}{k}$ is the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), usually pronounced "n choose k".
We can compute this expression ourselves, but we can also use the SciPy function `binom.pmf`:
```
from scipy.stats import binom
n = 2
p = 0.5
ks = np.arange(n+1)
a = binom.pmf(ks, n, p)
a
```
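As a sanity check, the expression can also be computed by hand with Python's built-in `math.comb`. A minimal sketch:

```
from math import comb

def binom_pmf(k, n, p):
    """P(k; n, p) = C(n, k) * p**k * (1-p)**(n-k)"""
    return comb(n, k) * p**k * (1 - p)**(n - k)

binom_pmf(1, 2, 0.5)  # matches binom.pmf(1, 2, 0.5) above: 0.5
```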
If we put this result in a Series, the result is the distribution of `k` for the given values of `n` and `p`.
```
pmf_k = pd.Series(a, index=ks)
pmf_k
```
The following function computes the binomial distribution for given values of `n` and `p`:
```
def make_binomial(n, p):
"""Make a binomial PMF.
n: number of spins
p: probability of heads
returns: Series representing a PMF
"""
ks = np.arange(n+1)
a = binom.pmf(ks, n, p)
pmf_k = pd.Series(a, index=ks)
return pmf_k
```
And here's what it looks like with `n=250` and `p=0.5`:
```
pmf_k = make_binomial(n=250, p=0.5)
pmf_k.plot()
plt.xlabel('Number of heads (k)')
plt.ylabel('Probability')
plt.title('Binomial distribution');
```
The most likely value in this distribution is 125:
```
pmf_k.idxmax()
```
But even though it is the most likely value, the probability that we get exactly 125 heads is only about 5%.
```
pmf_k[125]
```
In MacKay's example, we got 140 heads, which is less likely than 125:
```
pmf_k[140]
```
In the article MacKay quotes, the statistician says, ‘If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%’.
We can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of values greater than or equal to `threshold`.
```
def prob_ge(pmf, threshold):
"""Probability of values greater than a threshold.
pmf: Series representing a PMF
threshold: value to compare to
returns: probability
"""
ge = (pmf.index >= threshold)
total = pmf[ge].sum()
return total
```
Here's the probability of getting 140 heads or more:
```
prob_ge(pmf_k, 140)
```
It's about 3.3%, which is less than 7%. The reason is that the statistician includes all values "as extreme as" 140, which includes values less than or equal to 110, because 140 exceeds the expected value by 15 and 110 falls short by 15.
The probability of values less than or equal to 110 is also 3.3%,
so the total probability of values "as extreme" as 140 is about 7%.
The point of this calculation is that these extreme values are unlikely if the coin is fair.
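We can verify the two-sided calculation directly. A quick sketch using the same `binom.pmf`:

```
import numpy as np
from scipy.stats import binom

ks = np.arange(251)
pmf = binom.pmf(ks, 250, 0.5)
upper = pmf[ks >= 140].sum()   # P(k >= 140), about 3.3%
lower = pmf[ks <= 110].sum()   # P(k <= 110), equal by symmetry
upper + lower                  # about 6.6%, consistent with "less than 7%"
```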
That's interesting, but it doesn't answer MacKay's question. Let's see if we can.
## Estimating x
As promised, we can use the binomial distribution to solve the Euro problem more efficiently. Let's start again with a uniform prior:
```
xs = np.arange(101) / 100
uniform = pd.Series(1, index=xs)
uniform /= uniform.sum()
```
We can use `binom.pmf` to compute the likelihood of the data for each possible value of $x$.
```
k = 140
n = 250
xs = uniform.index
likelihood = binom.pmf(k, n, p=xs)
```
Now we can do the Bayesian update in the usual way, multiplying the priors and likelihoods,
```
posterior = uniform * likelihood
```
Computing the total probability of the data,
```
total = posterior.sum()
total
```
And normalizing the posterior,
```
posterior /= total
```
Here's what it looks like.
```
posterior.plot(label='Uniform')
plt.xlabel('Probability of heads (x)')
plt.ylabel('Probability')
plt.title('Posterior distribution, uniform prior')
plt.legend()
```
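From this posterior we can recover the summary statistics mentioned at the start of the notebook, such as the MAP and the posterior mean. A minimal sketch (rebuilding the posterior so the cell is self-contained; with a uniform prior the posterior is proportional to the likelihood):

```
import numpy as np
import pandas as pd
from scipy.stats import binom

# Rebuild the posterior from above: uniform prior, 140 heads in 250 spins
xs = np.arange(101) / 100
posterior = pd.Series(binom.pmf(140, 250, p=xs), index=xs)
posterior /= posterior.sum()

map_x = posterior.idxmax()                    # most probable value of x (the MAP)
mean_x = np.sum(posterior.index * posterior)  # posterior mean
map_x, mean_x
```

The MAP lands exactly on the observed proportion of heads, 140/250 = 0.56.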
**Exercise:** Based on what we know about coins in the real world, it doesn't seem like every value of $x$ is equally likely. I would expect values near 50% to be more likely and values near the extremes to be less likely.
In Notebook 7, we used a triangle prior to represent this belief about the distribution of $x$. The following code makes a PMF that represents a triangle prior.
```
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
triangle = pd.Series(a, index=xs)
triangle /= triangle.sum()
```
Update this prior with the likelihoods we just computed and plot the results.
```
# Solution
posterior2 = triangle * likelihood
total2 = posterior2.sum()
total2
# Solution
posterior2 /= total2
# Solution
posterior.plot(label='Uniform')
posterior2.plot(label='Triangle')
plt.xlabel('Probability of heads (x)')
plt.ylabel('Probability')
plt.title('Posterior distribution, uniform prior')
plt.legend();
```
## Evidence
Finally, let's get back to MacKay's question: do these data give evidence that the coin is biased rather than fair?
I'll use a Bayes table to answer this question, so here's the function that makes one:
```
def make_bayes_table(hypos, prior, likelihood):
"""Make a Bayes table.
hypos: sequence of hypotheses
prior: prior probabilities
likelihood: sequence of likelihoods
returns: DataFrame
"""
table = pd.DataFrame(index=hypos)
table['prior'] = prior
table['likelihood'] = likelihood
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return table
```
Recall that data, $D$, is considered evidence in favor of a hypothesis, $H$, if the posterior probability is greater than the prior, that is, if
$P(H|D) > P(H)$
For this example, I'll call the hypotheses `fair` and `biased`:
```
hypos = ['fair', 'biased']
```
And just to get started, I'll assume that the prior probabilities are 50/50.
```
prior = [0.5, 0.5]
```
Now we have to compute the probability of the data under each hypothesis.
If the coin is fair, the probability of heads is 50%, and we can compute the probability of the data (140 heads out of 250 spins) using the binomial distribution:
```
k = 140
n = 250
like_fair = binom.pmf(k, n, p=0.5)
like_fair
```
So that's the probability of the data, given that the coin is fair.
But if the coin is biased, what's the probability of the data? Well, that depends on what "biased" means.
If we know ahead of time that "biased" means the probability of heads is 56%, we can use the binomial distribution again:
```
like_biased = binom.pmf(k, n, p=0.56)
like_biased
```
Now we can put the likelihoods in the Bayes table:
```
likes = [like_fair, like_biased]
make_bayes_table(hypos, prior, likes)
```
The posterior probability of `biased` is about 86%, so the data is evidence that the coin is biased, at least for this definition of "biased".
But we used the data to define the hypothesis, which seems like cheating. To be fair, we should define "biased" before we see the data.
## Uniformly distributed bias
Suppose "biased" means that the probability of heads is anything except 50%, and all other values are equally likely.
We can represent that definition by making a uniform distribution and removing 50%.
```
biased_uniform = uniform.copy()
biased_uniform[50] = 0
biased_uniform /= biased_uniform.sum()
```
Now, to compute the probability of the data under this hypothesis, we compute the probability of the data for each value of $x$.
```
xs = biased_uniform.index
likelihood = binom.pmf(k, n, xs)
```
And then compute the total probability in the usual way:
```
like_uniform = np.sum(biased_uniform * likelihood)
like_uniform
```
So that's the probability of the data under the "biased uniform" hypothesis.
Now we make a Bayes table that compares the hypotheses `fair` and `biased uniform`:
```
hypos = ['fair', 'biased uniform']
likes = [like_fair, like_uniform]
make_bayes_table(hypos, prior, likes)
```
Using this definition of `biased`, the posterior is less than the prior, so the data are evidence that the coin is *fair*.
In this example, the data might support the fair hypothesis or the biased hypothesis, depending on the definition of "biased".
**Exercise:** Suppose "biased" doesn't mean every value of $x$ is equally likely. Maybe values near 50% are more likely and values near the extremes are less likely. In the previous exercise we created a PMF that represents a triangle-shaped distribution.
We can use it to represent an alternative definition of "biased":
```
biased_triangle = triangle.copy()
biased_triangle[50] = 0
biased_triangle /= biased_triangle.sum()
```
Compute the total probability of the data under this definition of "biased" and use a Bayes table to compare it with the fair hypothesis.
Is the data evidence that the coin is biased?
```
# Solution
like_triangle = np.sum(biased_triangle * likelihood)
like_triangle
# Solution
hypos = ['fair', 'biased triangle']
likes = [like_fair, like_triangle]
make_bayes_table(hypos, prior, likes)
# Solution
# For this definition of "biased",
# the data are slightly in favor of the fair hypothesis.
```
## Bayes factor
In the previous section, we used a Bayes table to see whether the data are in favor of the fair or biased hypothesis.
I assumed that the prior probabilities were 50/50, but that was an arbitrary choice.
And it was unnecessary, because we don't really need a Bayes table to say whether the data favor one hypothesis or another: we can just look at the likelihoods.
Under the first definition of biased, `x=0.56`, the likelihood of the biased hypothesis is higher:
```
like_fair, like_biased
```
Under the biased uniform definition, the likelihood of the fair hypothesis is higher.
```
like_fair, like_uniform
```
The ratio of these likelihoods tells us which hypothesis the data support.
If the ratio is less than 1, the data support the second hypothesis:
```
like_fair / like_biased
```
If the ratio is greater than 1, the data support the first hypothesis:
```
like_fair / like_uniform
```
This likelihood ratio is called a [Bayes factor](https://en.wikipedia.org/wiki/Bayes_factor); it provides a concise way to present the strength of a dataset as evidence for or against a hypothesis.
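A convenient consequence: the posterior odds are just the prior odds multiplied by the Bayes factor, so the factor alone tells us how any prior gets updated. A small sketch (the function name is ours):

```
def posterior_prob(bayes_factor, prior_prob=0.5):
    """Posterior probability of H1, given K = P(D|H1) / P(D|H2)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

posterior_prob(1.0)   # K = 1: the data are uninformative, posterior stays 0.5
```

For example, with 50/50 priors a Bayes factor of about 6 in favor of `biased` yields a posterior of 6/7, roughly the 86% we saw in the Bayes table for the `x=0.56` definition of "biased".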
## Summary
In this notebook I introduced the binomial distribution and used it to solve the Euro problem more efficiently.
Then we used the results to (finally) answer the original version of the Euro problem, considering whether the data support the hypothesis that the coin is fair or biased. We found that the answer depends on how we define "biased". And we summarized the results using a Bayes factor, which quantifies the strength of the evidence.
[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/13_price.ipynb) we'll start on a new problem based on the television game show *The Price Is Right*.
## Exercises
**Exercise:** In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, `x`.
Based on previous tests, the distribution of `x` in the population of designs is roughly uniform between 10% and 40%.
Now suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, a Defense League general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Is this data good or bad; that is, does it increase or decrease your estimate of `x` for the Alien Blaster 9000?
Plot the prior and posterior distributions, and use the following function to compute the prior and posterior means.
```
def pmf_mean(pmf):
"""Compute the mean of a PMF.
pmf: Series representing a PMF
return: float
"""
return np.sum(pmf.index * pmf)
# Solution
xs = np.linspace(0.1, 0.4)
prior = pd.Series(1, index=xs)
prior /= prior.sum()
# Solution
likelihood = xs**2 + (1-xs)**2
# Solution
posterior = prior * likelihood
posterior /= posterior.sum()
# Solution
prior.plot(color='gray', label='prior')
posterior.plot(label='posterior')
plt.xlabel('Probability of success (x)')
plt.ylabel('Probability')
plt.ylim(0, 0.027)
plt.title('Distribution of x before and after testing')
plt.legend();
# Solution
pmf_mean(prior), pmf_mean(posterior)
# With this prior, being "consistent" is more likely
# to mean "consistently bad".
```
# Monte Carlo Simulation of Dividend Discount Model
## Description
You are trying to determine the value of a mature company. The company has had stable dividend growth for a long time so you select the dividend discount model (DDM).
$$P = \frac{d_1}{r_s - g}$$
### Level 1
- The next dividend will be \\$1 and your baseline estimates of the cost of capital and growth are 9% and 4%, respectively
- Write a function which is able to get the price based on values of the inputs
- Then you are concerned about mis-estimation of the inputs and how it could affect the price. So then assume that the growth rate has a mean of 4% but a standard deviation of 1%
- Visualize and summarize the resulting probability distribution of the price
### Level 2
Continue from the first exercise:
- Now you are also concerned you have mis-estimated the cost of capital. So now use a mean of 9% and standard deviation of 2%, in addition to varying the growth
- Visualize and summarize the resulting probability distribution of the price
- Be careful as in some cases, the drawn cost of capital will be lower than the drawn growth rate, which breaks the DDM.
- You will need to modify your logic to throw out these cases.
## Setup
```
from IPython.display import HTML, display
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import random
```
### Level 1
```
def price_ddm(dividend, cost_of_capital, growth):
'''
Function to determine the price of a company based on the dividend
discount model, according to the example, inputs are placed by default
'''
return dividend / (cost_of_capital - growth)
price_ddm(1, 0.09, 0.04)
dividend = 1
cost_of_capital = 0.09
growth_mean = 0.04
growth_std = 0.01
n_iter = 1000000
def price_ddm_simulations(dividend, cost_of_capital, growth_mean, growth_std, n_iter):
outputs = []
for i in range(n_iter):
growth_prob = random.normalvariate(growth_mean, growth_std)
result = price_ddm(dividend, cost_of_capital, growth = growth_prob)
outputs.append((growth_prob,result))
return outputs
l1_results = price_ddm_simulations(dividend, cost_of_capital, growth_mean, growth_std, n_iter)
print(f'There are {len(l1_results)} results. First five:')
l1_results[:5]
```
### Visualize the Outputs
A good first step in analyzing the outputs is to visualize them. Let's plot the results. A histogram or KDE plot is usually most appropriate for a single output; the KDE is basically a smoothed-out histogram.
```
df = pd.DataFrame(l1_results, columns = ['Growth', 'Price'])
df['Price'].plot.hist(bins=100)
df['Price'].plot.kde()
```
As we can see, the results look roughly like a normal distribution, which makes sense because only one input is changing and it follows a normal distribution.
### Probability Table
We would like to see two other kinds of outputs. One is a table of probabilities, along with the value that is achieved at each probability in the distribution. E.g. at 5%, only 5% of cases are lower than the given value. At 75%, 75% of cases are lower than the given value. <br><br>
First we'll get the percentiles we want to explore in the table.
```
percentiles = [i/20 for i in range(1, 20)]
percentiles
```
Now we can use the `.quantile` method from `pandas` to get this table easily. <br>
For a better-looking table, we can create a function which styles the DataFrame.
```
def styled_df(df):
return df.style.format({
'Growth' : '{:.2%}',
'Price' : '${:,.2f}'
})
styled_df(df.quantile(percentiles))
```
### Level 2
```
# We will now modify the previous function to include variation in the cost of capital
dividend = 1
cost_capital_mean = 0.09
cost_capital_std = 0.02
growth_mean = 0.04
growth_std = 0.01
n_iter = 1000000
def price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter):
outputs = []
for i in range(n_iter):
growth_prob = random.normalvariate(growth_mean, growth_std)
rate_prob = random.normalvariate(cost_capital_mean, cost_capital_std)
result = price_ddm(dividend, cost_of_capital=rate_prob, growth = growth_prob)
outputs.append((growth_prob, rate_prob, result))
return outputs
l2_results = price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter)
df_2 = pd.DataFrame(l2_results, columns = ['Growth', 'Cost of Capital', 'Price'])
def styled_df(df):
return df.style.format({
'Growth' : '{:.2%}',
'Price' : '${:,.2f}',
'Cost of Capital' : '{:.2%}'
})
styled_df(df_2.quantile(percentiles))
df_2['Price'].plot.hist(bins=100)
```
We notice that some price values are negative when adding variation to the cost of capital. This happens when the drawn growth rate is greater than the drawn cost of capital, which "breaks" the DDM model and produces negative prices. <br><br>
We can see where that happened in the next line of code:
```
df_2[df_2['Growth'] > df_2['Cost of Capital']]
# We will now modify the previous function to get positive values
def price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter):
outputs = []
for i in range(n_iter):
growth_prob = random.normalvariate(growth_mean, growth_std)
rate_prob = random.normalvariate(cost_capital_mean, cost_capital_std)
if growth_prob > rate_prob:
continue
result = price_ddm(dividend, cost_of_capital=rate_prob, growth = growth_prob)
outputs.append((growth_prob, rate_prob, result))
return outputs
new_l2_results = price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter)
df_3 = pd.DataFrame(new_l2_results, columns = ['Growth', 'Cost of Capital', 'Price'])
styled_df(df_3.quantile(percentiles))
```
Now there is no instance where the growth rate is greater than the cost of capital.
```
df_3[df_3['Growth'] > df_3['Cost of Capital']]
df_3['Price'].plot.hist(bins=100)
```
There are no longer negative values, but the price range still looks odd. Although the growth rate never exceeds the cost of capital, the two can be very close, which produces the extreme prices in the histogram above. <br><br>
To make the model better, we will also discard draws where the growth rate is within 0.5% of the cost of capital:
```
# We will now modify the previous function to get positive values
def price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter):
outputs = []
for i in range(n_iter):
growth_prob = random.normalvariate(growth_mean, growth_std)
rate_prob = random.normalvariate(cost_capital_mean, cost_capital_std)
if growth_prob > (rate_prob - 0.005):
continue
result = price_ddm(dividend, cost_of_capital=rate_prob, growth = growth_prob)
outputs.append((growth_prob, rate_prob, result))
return outputs
new_l2_results = price_ddm_simulations(dividend, cost_capital_mean, cost_capital_std, growth_mean, growth_std, n_iter)
df_3 = pd.DataFrame(new_l2_results, columns = ['Growth', 'Cost of Capital', 'Price'])
df_3['Price'].plot.hist(bins=100)
styled_df(df_3.quantile(percentiles))
```
This range of prices makes much more sense.
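As a side note, the same simulation can be vectorized with NumPy, which runs much faster than a Python loop for a million draws. A sketch (the function name and `min_spread` parameter are our own, not part of the exercise):

```
import numpy as np

def price_ddm_simulations_np(dividend, cost_capital_mean, cost_capital_std,
                             growth_mean, growth_std, n_iter, min_spread=0.005):
    rng = np.random.default_rng()
    growth = rng.normal(growth_mean, growth_std, n_iter)
    rate = rng.normal(cost_capital_mean, cost_capital_std, n_iter)
    keep = (rate - growth) > min_spread   # discard draws that break the DDM
    return dividend / (rate[keep] - growth[keep])

prices = price_ddm_simulations_np(1, 0.09, 0.02, 0.04, 0.01, 1_000_000)
```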
# Dynamic Recurrent Neural Network.
TensorFlow implementation of a Recurrent Neural Network (LSTM) that performs dynamic computation over sequences with variable length. This example is using a toy dataset to classify linear sequences. The generated sequences have variable length.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/
These lessons are adapted from [aymericdamien TensorFlow tutorials](https://github.com/aymericdamien/TensorFlow-Examples)
/ [GitHub](https://github.com/aymericdamien/TensorFlow-Examples)
which are published under the [MIT License](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/LICENSE) which allows very broad use for both academic and commercial purposes.
## RNN Overview
<img src="http://colah.github.io/posts/2015-08-Understanding-LSTMs/img/RNN-unrolled.png" alt="nn" style="width: 600px;"/>
References:
- [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997.
```
from __future__ import print_function
import tensorflow as tf
import random
# ====================
# TOY DATA GENERATOR
# ====================
class ToySequenceData(object):
""" Generate sequence of data with dynamic length.
This class generate samples for training:
- Class 0: linear sequences (i.e. [0, 1, 2, 3,...])
- Class 1: random sequences (i.e. [1, 3, 10, 7,...])
NOTICE:
We have to pad each sequence to reach 'max_seq_len' for TensorFlow
consistency (we cannot feed a numpy array with inconsistent
    dimensions). The dynamic calculation will then be performed thanks to
    the 'seqlen' attribute that records every actual sequence length.
"""
def __init__(self, n_samples=1000, max_seq_len=20, min_seq_len=3,
max_value=1000):
self.data = []
self.labels = []
self.seqlen = []
for i in range(n_samples):
# Random sequence length
len = random.randint(min_seq_len, max_seq_len)
# Monitor sequence length for TensorFlow dynamic calculation
self.seqlen.append(len)
# Add a random or linear int sequence (50% prob)
if random.random() < .5:
# Generate a linear sequence
rand_start = random.randint(0, max_value - len)
s = [[float(i)/max_value] for i in
range(rand_start, rand_start + len)]
# Pad sequence for dimension consistency
s += [[0.] for i in range(max_seq_len - len)]
self.data.append(s)
self.labels.append([1., 0.])
else:
# Generate a random sequence
s = [[float(random.randint(0, max_value))/max_value]
for i in range(len)]
# Pad sequence for dimension consistency
s += [[0.] for i in range(max_seq_len - len)]
self.data.append(s)
self.labels.append([0., 1.])
self.batch_id = 0
def next(self, batch_size):
""" Return a batch of data. When dataset end is reached, start over.
"""
if self.batch_id == len(self.data):
self.batch_id = 0
batch_data = (self.data[self.batch_id:min(self.batch_id +
batch_size, len(self.data))])
batch_labels = (self.labels[self.batch_id:min(self.batch_id +
batch_size, len(self.data))])
batch_seqlen = (self.seqlen[self.batch_id:min(self.batch_id +
batch_size, len(self.data))])
self.batch_id = min(self.batch_id + batch_size, len(self.data))
return batch_data, batch_labels, batch_seqlen
# ==========
# MODEL
# ==========
# Parameters
learning_rate = 0.01
training_steps = 10000
batch_size = 128
display_step = 200
# Network Parameters
seq_max_len = 20 # Sequence max length
n_hidden = 64 # hidden layer num of features
n_classes = 2 # linear sequence or not
trainset = ToySequenceData(n_samples=1000, max_seq_len=seq_max_len)
testset = ToySequenceData(n_samples=500, max_seq_len=seq_max_len)
# tf Graph input
x = tf.placeholder("float", [None, seq_max_len, 1])
y = tf.placeholder("float", [None, n_classes])
# A placeholder for indicating each sequence length
seqlen = tf.placeholder(tf.int32, [None])
# Define weights
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
def dynamicRNN(x, seqlen, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, n_steps, n_input)
# Required shape: 'n_steps' tensors list of shape (batch_size, n_input)
# Unstack to get a list of 'n_steps' tensors of shape (batch_size, n_input)
x = tf.unstack(x, seq_max_len, 1)
# Define a lstm cell with tensorflow
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)
# Get lstm cell output, providing 'sequence_length' will perform dynamic
# calculation.
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32,
sequence_length=seqlen)
# When performing dynamic calculation, we must retrieve the last
# dynamically computed output, i.e., if a sequence length is 10, we need
# to retrieve the 10th output.
# However TensorFlow doesn't support advanced indexing yet, so we build
# a custom op that for each sample in batch size, get its length and
# get the corresponding relevant output.
# 'outputs' is a list of output at every timestep, we pack them in a Tensor
# and change back dimension to [batch_size, n_step, n_input]
outputs = tf.stack(outputs)
outputs = tf.transpose(outputs, [1, 0, 2])
# Hack to build the indexing and retrieve the right output.
batch_size = tf.shape(outputs)[0]
# Start indices for each sample
index = tf.range(0, batch_size) * seq_max_len + (seqlen - 1)
# Indexing
outputs = tf.gather(tf.reshape(outputs, [-1, n_hidden]), index)
# Linear activation, using outputs computed above
return tf.matmul(outputs, weights['out']) + biases['out']
pred = dynamicRNN(x, seqlen, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for step in range(1, training_steps+1):
batch_x, batch_y, batch_seqlen = trainset.next(batch_size)
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
seqlen: batch_seqlen})
if step % display_step == 0 or step == 1:
# Calculate batch accuracy & loss
acc, loss = sess.run([accuracy, cost], feed_dict={x: batch_x, y: batch_y,
seqlen: batch_seqlen})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy
test_data = testset.data
test_label = testset.labels
test_seqlen = testset.seqlen
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={x: test_data, y: test_label,
seqlen: test_seqlen}))
```
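The flat-index gather that `dynamicRNN` uses to pick each sequence's last valid output can be illustrated in plain NumPy (the shapes below are made up for the sketch):

```
import numpy as np

batch_size, n_steps, n_hidden = 4, 20, 64      # hypothetical shapes
outputs = np.random.rand(batch_size, n_steps, n_hidden)
seqlen = np.array([3, 20, 7, 12])

# Same trick as tf.gather on the reshaped tensor above:
index = np.arange(batch_size) * n_steps + (seqlen - 1)
last_outputs = outputs.reshape(-1, n_hidden)[index]

# Equivalent to selecting outputs[i, seqlen[i] - 1] for each sample i
```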
# Wright-Fisher model of mutation and random genetic drift
A Wright-Fisher model has a fixed population size *N* and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur.
## Setup
```
import numpy as np
try:
import itertools.izip as zip
except ImportError:
import itertools
```
## Make population dynamic model
### Basic parameters
```
pop_size = 100
seq_length = 10
alphabet = ['A', 'T', 'G', 'C']
base_haplotype = "AAAAAAAAAA"
```
### Setup a population of sequences
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count *N*.
```
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
pop["AAATAAAAAA"]
```
### Add mutation
Mutations occur each generation in each individual in every basepair.
```
mutation_rate = 0.005 # per gen per individual per site
```
Walk through population and mutate basepairs. Use Poisson splitting to speed this up (you may be familiar with Poisson splitting from its use in the [Gillespie algorithm](https://en.wikipedia.org/wiki/Gillespie_algorithm)).
* In naive scenario A: take each element and check, for each one, whether the event occurs. For example, 100 elements, each with 1% chance. This requires 100 random numbers.
* In Poisson splitting scenario B: Draw a Poisson random number for the number of events that occur and distribute them randomly. In the above example, this will most likely involve 1 random number draw to see how many events and then a few more draws to see which elements are hit.
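The two scenarios can be compared directly in code (a quick sketch mirroring the 100-elements-at-1% example above):

```
import numpy as np

rng = np.random.default_rng()
n, p = 100, 0.01

# Scenario A: one Bernoulli draw per element (n random numbers)
events_a = rng.random(n) < p

# Scenario B: one Poisson draw for the total count, then place the events
count_b = rng.poisson(n * p)                  # usually 0, 1, or 2
hit_sites = rng.integers(0, n, size=count_b)  # which elements are hit

# Both schemes have the same expected number of events: n * p = 1
```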
First off, we need to get a random number of total mutations.
```
def get_mutation_count():
mean = mutation_rate * pop_size * seq_length
return np.random.poisson(mean)
```
Here we use Numpy's [Poisson random number](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.poisson.html).
```
get_mutation_count()
```
We need to get a random haplotype from the population.
```
pop.keys()
[x/float(pop_size) for x in pop.values()]
def get_random_haplotype():
haplotypes = list(pop.keys())
frequencies = [x/float(pop_size) for x in pop.values()]
total = sum(frequencies)
frequencies = [x / total for x in frequencies]
return np.random.choice(haplotypes, p=frequencies)
```
Here we use Numpy's [weighted random choice](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html).
```
get_random_haplotype()
```
Here, we take a supplied haplotype and mutate a site at random.
```
def get_mutant(haplotype):
site = np.random.randint(seq_length)
possible_mutations = list(alphabet)
possible_mutations.remove(haplotype[site])
mutation = np.random.choice(possible_mutations)
new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
return new_haplotype
get_mutant("AAAAAAAAAA")
```
Putting things together, in a single mutation event, we grab a random haplotype from the population, mutate it, decrement its count, and then check if the mutant already exists in the population. If it does, increment this mutant haplotype's count; if it doesn't, create a new haplotype with count 1.
```
def mutation_event():
haplotype = get_random_haplotype()
if pop[haplotype] > 1:
pop[haplotype] -= 1
new_haplotype = get_mutant(haplotype)
if new_haplotype in pop:
pop[new_haplotype] += 1
else:
pop[new_haplotype] = 1
mutation_event()
pop
```
To create all the mutations that occur in a single generation, we draw the total count of mutations and then iteratively add mutation events.
```
def mutation_step():
mutation_count = get_mutation_count()
for i in range(mutation_count):
mutation_event()
mutation_step()
pop
```
### Add genetic drift
Given a list of haplotype frequencies currently in the population, we can take a [multinomial draw](https://en.wikipedia.org/wiki/Multinomial_distribution) to get haplotype counts in the following generation.
```
def get_offspring_counts():
haplotypes = list(pop.keys())
frequencies = [x/float(pop_size) for x in pop.values()]
return list(np.random.multinomial(pop_size, frequencies))
```
Here we use Numpy's [multinomial random sample](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multinomial.html).
```
get_offspring_counts()
```
We then need to assign this new list of haplotype counts to the `pop` dictionary. To save memory and computation, if a haplotype goes to 0, we remove it entirely from the `pop` dictionary.
```
def offspring_step():
haplotypes = list(pop.keys())
counts = get_offspring_counts()
for (haplotype, count) in zip(haplotypes, counts):
if (count > 0):
pop[haplotype] = count
else:
del pop[haplotype]
offspring_step()
pop
```
### Combine and iterate
Each generation is simply a mutation step where a random number of mutations are thrown down, and an offspring step where haplotype counts are updated.
```
def time_step():
mutation_step()
offspring_step()
```
We can iterate this over a number of generations.
```
generations = 5
def simulate():
for i in range(generations):
time_step()
simulate()
pop
```
### Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
```
pop = {"AAAAAAAAAA": pop_size}
history = []
def simulate():
clone_pop = dict(pop)
history.append(clone_pop)
for i in range(generations):
time_step()
clone_pop = dict(pop)
history.append(clone_pop)
simulate()
pop
history[0]
history[1]
history[2]
```
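To see the shape of what gets recorded, here is a small self-contained sketch using a hypothetical three-generation history (made-up counts, not output of the simulation above): each entry is a `{haplotype: count}` snapshot, and the multinomial resampling step conserves total population size.

```python
# Hypothetical history: one {haplotype: count} dict per generation
history_example = [
    {"AAAA": 10},
    {"AAAA": 9, "AABA": 1},
    {"AAAA": 7, "AABA": 2, "CAAA": 1},
]

for gen, population in enumerate(history_example):
    total = sum(population.values())
    print(f"generation {gen}: {len(population)} haplotypes, {total} individuals")
    # Total count is conserved by the multinomial offspring step
    assert total == 10
```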
## Analyze trajectories
### Calculate diversity
Here, diversity in population genetics is usually shorthand for the statistic *π*, which measures pairwise differences between random individuals in the population. *π* is usually measured as substitutions per site.
```
pop
```
First, we need to calculate the number of differences per site between two arbitrary sequences.
```
def get_distance(seq_a, seq_b):
diffs = 0
length = len(seq_a)
assert len(seq_a) == len(seq_b)
for chr_a, chr_b in zip(seq_a, seq_b):
if chr_a != chr_b:
diffs += 1
return diffs / float(length)
get_distance("AAAAAAAAAA", "AAAAAAAAAB")
```
We calculate diversity as a weighted average between all pairs of haplotypes, weighted by pairwise haplotype frequency.
```
def get_diversity(population):
haplotypes = list(population.keys())
haplotype_count = len(haplotypes)
diversity = 0
for i in range(haplotype_count):
for j in range(haplotype_count):
haplotype_a = haplotypes[i]
haplotype_b = haplotypes[j]
frequency_a = population[haplotype_a] / float(pop_size)
frequency_b = population[haplotype_b] / float(pop_size)
frequency_pair = frequency_a * frequency_b
diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
return diversity
get_diversity(pop)
def get_diversity_trajectory():
trajectory = [get_diversity(generation) for generation in history]
return trajectory
get_diversity_trajectory()
```
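For a population split evenly between two haplotypes that differ at one of *L* sites, *π* works out analytically to 2 · 0.5 · 0.5 · (1/*L*). A self-contained check (restating simplified versions of `get_distance` and `get_diversity` so the snippet runs on its own, with a toy population of size 10):

```python
def get_distance(seq_a, seq_b):
    # Per-site Hamming distance between two equal-length sequences
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / float(len(seq_a))

def get_diversity(population, pop_size):
    # Frequency-weighted average pairwise distance (includes both orderings)
    haplotypes = list(population.keys())
    diversity = 0.0
    for a in haplotypes:
        for b in haplotypes:
            freq_pair = (population[a] / pop_size) * (population[b] / pop_size)
            diversity += freq_pair * get_distance(a, b)
    return diversity

# Two haplotypes at frequency 0.5 each, differing at 1 of 10 sites:
# pi = 2 * 0.5 * 0.5 * (1/10) = 0.05
toy_pop = {"AAAAAAAAAA": 5, "AAAAAAAAAB": 5}
assert abs(get_diversity(toy_pop, 10.0) - 0.05) < 1e-12
```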
### Plot diversity
Here, we use [matplotlib](http://matplotlib.org/) for all Python plotting.
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
```
Here, we make a simple line plot using matplotlib's `plot` function.
```
plt.plot(get_diversity_trajectory())
```
Here, we style the plot a bit with x and y axes labels.
```
def diversity_plot():
mpl.rcParams['font.size']=14
trajectory = get_diversity_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("diversity")
plt.xlabel("generation")
diversity_plot()
```
### Analyze and plot divergence
In population genetics, divergence is generally the number of substitutions away from a reference sequence. In this case, we can measure the average distance of the population to the starting haplotype. Again, this will be measured in terms of substitutions per site.
```
def get_divergence(population):
haplotypes = population.keys()
divergence = 0
for haplotype in haplotypes:
frequency = population[haplotype] / float(pop_size)
divergence += frequency * get_distance(base_haplotype, haplotype)
return divergence
def get_divergence_trajectory():
trajectory = [get_divergence(generation) for generation in history]
return trajectory
get_divergence_trajectory()
def divergence_plot():
mpl.rcParams['font.size']=14
trajectory = get_divergence_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("divergence")
plt.xlabel("generation")
divergence_plot()
```
### Plot haplotype trajectories
We also want to directly look at haplotype frequencies through time.
```
def get_frequency(haplotype, generation):
pop_at_generation = history[generation]
if haplotype in pop_at_generation:
return pop_at_generation[haplotype]/float(pop_size)
else:
return 0
get_frequency("AAAAAAAAAA", 4)
def get_trajectory(haplotype):
trajectory = [get_frequency(haplotype, gen) for gen in range(generations)]
return trajectory
get_trajectory("AAAAAAAAAA")
```
We want to plot all haplotypes seen during the simulation.
```
def get_all_haplotypes():
haplotypes = set()
for generation in history:
for haplotype in generation:
haplotypes.add(haplotype)
return haplotypes
get_all_haplotypes()
```
Here is a simple plot of their overall frequencies.
```
haplotypes = get_all_haplotypes()
for haplotype in haplotypes:
plt.plot(get_trajectory(haplotype))
plt.show()
colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"]
colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
```
We can use `stackplot` to stack these trajectories on top of each other to get a better picture of what's going on.
```
def stacked_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
haplotypes = get_all_haplotypes()
trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
plt.stackplot(range(generations), trajectories, colors=colors_lighter)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
stacked_trajectory_plot()
```
### Plot SNP trajectories
```
def get_snp_frequency(site, generation):
minor_allele_frequency = 0.0
pop_at_generation = history[generation]
for haplotype in pop_at_generation.keys():
allele = haplotype[site]
frequency = pop_at_generation[haplotype] / float(pop_size)
if allele != "A":
minor_allele_frequency += frequency
return minor_allele_frequency
get_snp_frequency(3, 5)
def get_snp_trajectory(site):
trajectory = [get_snp_frequency(site, gen) for gen in range(generations)]
return trajectory
get_snp_trajectory(3)
```
Find all variable sites.
```
def get_all_snps():
snps = set()
for generation in history:
for haplotype in generation:
for site in range(seq_length):
if haplotype[site] != "A":
snps.add(site)
return snps
def snp_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
snps = get_all_snps()
trajectories = [get_snp_trajectory(snp) for snp in snps]
data = []
for trajectory, color in zip(trajectories, itertools.cycle(colors)):
data.append(range(generations))
data.append(trajectory)
data.append(color)
plt.plot(*data)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
snp_trajectory_plot()
```
## Scale up
Here, we scale up to more interesting parameter values.
```
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001 # per gen per individual per site
```
In this case each individual genome acquires on average $\mu$ = 0.01 mutations per generation.
```
seq_length * mutation_rate
```
And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
```
2 * pop_size * seq_length * mutation_rate
base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
del history[:]
pop[base_haplotype] = pop_size
simulate()
plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3,2), (0,0), colspan=2)
stacked_trajectory_plot(xlabel="")
plt.subplot2grid((3,2), (1,0), colspan=2)
snp_trajectory_plot(xlabel="")
plt.subplot2grid((3,2), (2,0))
diversity_plot()
plt.subplot2grid((3,2), (2,1))
divergence_plot()
```
---
<img src="images/usm.jpg" width="480" height="240" align="left"/>
# MAT281 - Lab N°04
## Class objectives
* Reinforce the basic concepts of the pandas modules.
## Contents
* [Problem 01](#p1)
* [Problem 02](#p2)
## Problem 01
<img src="https://image.freepik.com/vector-gratis/varios-automoviles-dibujos-animados_23-2147613095.jpg" width="360" height="360" align="center"/>
The dataset is called `Automobile_data.csv`, and it contains information such as company, price, mileage, etc.
First, load the dataset and look at the first few rows:
```
import pandas as pd
import numpy as np
import os
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
```
The goal is to extract as much information as possible from this dataset. To do so, solve the following tasks:
1. Remove the null (NaN) values
```
df=df.dropna()
```
2. Find the name of the most expensive car company
```
df.groupby(['company']).mean()['price'].idxmax()
```
3. Print all details of the Toyota cars
```
df[df['company']=='toyota']
```
4. Count the total number of cars per company
```
df.groupby('company')['company'].count()
```
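The same per-company count can be checked with `value_counts` on a toy frame (hypothetical data, not the `Automobile_data.csv` file):

```python
import pandas as pd

toy = pd.DataFrame({"company": ["toyota", "toyota", "audi", "bmw", "audi", "toyota"]})

# value_counts is a one-liner alternative to groupby().count()
counts = toy["company"].value_counts()
assert counts["toyota"] == 3
assert counts["audi"] == 2
assert counts["bmw"] == 1
```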
5. Find the highest-priced car per company
```
df[['company','price']].groupby('company').max()
```
6. Find the average mileage (**average-mileage**) of each car company
```
comp=df.groupby('company')
df_leng = comp.agg({'average-mileage':[np.mean]}).reset_index()
df_leng
```
7. Sort all cars by the price column (**price**)
```
df.sort_values(by='price')
```
## Problem 02
Continuing with the car theme, solve the following problems:
#### a) Subproblem 01
Given the following dictionaries:
```
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
```
* Create two dataframes (**carsDf1** and **carsDf2**) as appropriate.
* Concatenate both dataframes into **carsDf**, adding the keys ["Germany", "Japan"] as appropriate.
```
carsDf1=pd.DataFrame(GermanCars)
carsDf2=pd.DataFrame(japaneseCars)
carsDf= pd.concat([carsDf1,carsDf2],keys=['Germany','Japan'])
carsDf
```
#### b) Subproblem 02
Given the following dictionaries:
```
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
```
* Create two dataframes (**carsDf1** and **carsDf2**) as appropriate.
* Merge both dataframes into **carsDf** on the key **Company**.
```
carsDf1=pd.DataFrame(Car_Price)
carsDf2=pd.DataFrame(car_Horsepower)
carsDf= pd.merge(carsDf1, carsDf2, on='Company')
carsDf
```
---
# Dependencies
```
import os, warnings, shutil
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from transformers import AutoTokenizer
from sklearn.utils import shuffle
from sklearn.model_selection import StratifiedKFold
SEED = 0
warnings.filterwarnings("ignore")
```
# Parameters
```
MAX_LEN = 192
tokenizer_path = 'jplu/tf-xlm-roberta-large'
positive1 = 21384 * 2
positive2 = 112226 * 2
```
# Load data
```
train1 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv")
train2 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv")
train_df = pd.concat([train1[['comment_text', 'toxic']].query('toxic == 1'),
train1[['comment_text', 'toxic']].query('toxic == 0').sample(n=positive1, random_state=SEED),
train2[['comment_text', 'toxic']].query('toxic > .5'),
train2[['comment_text', 'toxic']].query('toxic <= .5').sample(n=positive2, random_state=SEED)
])
train_df = shuffle(train_df, random_state=SEED).reset_index(drop=True)
train_df['toxic_int'] = train_df['toxic'].round().astype(int)
print('Train samples %d' % len(train_df))
display(train_df.head())
```
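The balancing logic above (keep all positives, downsample negatives to twice the positive count) can be sketched on a toy frame; the column names mirror the notebook, but the data is made up:

```python
import pandas as pd

df = pd.DataFrame({
    "comment_text": [f"comment {i}" for i in range(10)],
    "toxic": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
})

n_pos = len(df.query("toxic == 1"))
balanced = pd.concat([
    df.query("toxic == 1"),                                   # keep all positives
    df.query("toxic == 0").sample(n=n_pos * 2, random_state=0),  # downsample negatives
])

# 2 positives kept, 4 negatives sampled -> 6 rows with a 1:2 class ratio
assert len(balanced) == 6
assert balanced["toxic"].sum() == 2
```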
# Tokenizer
```
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
```
# Data generation sanity check
```
for idx in range(5):
print('\nRow %d' % idx)
max_seq_len = 22
comment_text = train_df['comment_text'].loc[idx]
enc = tokenizer.encode_plus(comment_text,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=max_seq_len)
print('comment_text : "%s"' % comment_text)
print('input_ids : "%s"' % enc['input_ids'])
print('attention_mask: "%s"' % enc['attention_mask'])
assert len(enc['input_ids']) == len(enc['attention_mask']) == max_seq_len
```
# 5-Fold split
```
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)
for fold_n, (train_idx, val_idx) in enumerate(folds.split(train_df, train_df['toxic_int'])):
print('Fold: %s, Train size: %s, Validation size %s' % (fold_n+1, len(train_idx), len(val_idx)))
    train_df[('fold_%s' % str(fold_n+1))] = ''
    train_df.loc[train_idx, ('fold_%s' % str(fold_n+1))] = 'train'
    train_df.loc[val_idx, ('fold_%s' % str(fold_n+1))] = 'validation'
```
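`StratifiedKFold` keeps each fold's label proportions close to the overall proportions; a minimal self-contained check on synthetic labels (100 samples, 20% positive, so the 5 folds split exactly):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((100, 1))
y = np.array([1] * 20 + [0] * 80)  # 20% positive overall

folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in folds.split(X, y):
    # Each validation fold holds 20 samples at the same 20% positive rate
    assert len(val_idx) == 20
    assert y[val_idx].sum() == 4
```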
# Label distribution
```
for fold_n in range(folds.n_splits):
fold_n += 1
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))
fig.suptitle('Fold %s' % fold_n, fontsize=22)
sns.countplot(x="toxic_int", data=train_df[train_df[('fold_%s' % fold_n)] == 'train'], palette="GnBu_d", ax=ax1).set_title('Train')
sns.countplot(x="toxic_int", data=train_df[train_df[('fold_%s' % fold_n)] == 'validation'], palette="GnBu_d", ax=ax2).set_title('Validation')
sns.despine()
plt.show()
for fold_n in range(folds.n_splits):
fold_n += 1
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))
fig.suptitle('Fold %s' % fold_n, fontsize=22)
sns.distplot(train_df[train_df[('fold_%s' % fold_n)] == 'train']['toxic'], ax=ax1).set_title('Train')
sns.distplot(train_df[train_df[('fold_%s' % fold_n)] == 'validation']['toxic'], ax=ax2).set_title('Validation')
sns.despine()
plt.show()
```
# Output 5-fold set
```
train_df.to_csv('5-fold.csv', index=False)
display(train_df.head())
for fold_n in range(folds.n_splits):
if fold_n < 1:
fold_n += 1
base_path = 'fold_%d/' % fold_n
# Create dir
os.makedirs(base_path)
x_train = tokenizer.batch_encode_plus(train_df[train_df[('fold_%s' % fold_n)] == 'train']['comment_text'].values,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=MAX_LEN)
x_train = np.array([np.array(x_train['input_ids']),
np.array(x_train['attention_mask'])])
x_valid = tokenizer.batch_encode_plus(train_df[train_df[('fold_%s' % fold_n)] == 'validation']['comment_text'].values,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=MAX_LEN)
x_valid = np.array([np.array(x_valid['input_ids']),
np.array(x_valid['attention_mask'])])
y_train = train_df[train_df[('fold_%s' % fold_n)] == 'train']['toxic'].values
y_valid = train_df[train_df[('fold_%s' % fold_n)] == 'validation']['toxic'].values
np.save(base_path + 'x_train', np.asarray(x_train))
np.save(base_path + 'y_train', y_train)
np.save(base_path + 'x_valid', np.asarray(x_valid))
np.save(base_path + 'y_valid', y_valid)
print('\nFOLD: %d' % (fold_n))
print('x_train shape:', x_train.shape)
print('y_train shape:', y_train.shape)
print('x_valid shape:', x_valid.shape)
print('y_valid shape:', y_valid.shape)
# Compress fold dir
!tar -cvzf fold_1.tar.gz fold_1
# !tar -cvzf fold_2.tar.gz fold_2
# !tar -cvzf fold_3.tar.gz fold_3
# !tar -cvzf fold_4.tar.gz fold_4
# !tar -cvzf fold_5.tar.gz fold_5
# Delete fold dir
shutil.rmtree('fold_1')
# shutil.rmtree('fold_2')
# shutil.rmtree('fold_3')
# shutil.rmtree('fold_4')
# shutil.rmtree('fold_5')
```
# Validation set
```
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv", usecols=['comment_text', 'toxic', 'lang'])
display(valid_df.head())
x_valid = tokenizer.batch_encode_plus(valid_df['comment_text'].values,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=MAX_LEN)
x_valid = np.array([np.array(x_valid['input_ids']),
np.array(x_valid['attention_mask'])])
y_valid = valid_df['toxic'].values
np.save('x_valid', np.asarray(x_valid))
np.save('y_valid', y_valid)
print('x_valid shape:', x_valid.shape)
print('y_valid shape:', y_valid.shape)
```
# Test set
```
test_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv", usecols=['content'])
display(test_df.head())
x_test = tokenizer.batch_encode_plus(test_df['content'].values,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=MAX_LEN)
x_test = np.array([np.array(x_test['input_ids']),
np.array(x_test['attention_mask'])])
np.save('x_test', np.asarray(x_test))
print('x_test shape:', x_test.shape)
```
---
```
import numpy as np
import lqrpols
import matplotlib.pyplot as plt
```
Here is a link to [lqrpols.py](http://www.argmin.net/code/lqrpols.py)
```
np.random.seed(1337)
# state transition matrices for linear system:
# x(t+1) = A x (t) + B u(t)
A = np.array([[1,1],[0,1]])
B = np.array([[0],[1]])
d,p = B.shape
# LQR quadratic cost per state
Q = np.array([[1,0],[0,0]])
# initial condition for system
z0 = -1 # initial position
v0 = 0 # initial velocity
x0 = np.vstack((z0,v0))
R = np.array([[1.0]])
# number of time steps to simulate
T = 10
# amount of Gaussian noise in dynamics
eq_err = 1e-2
# N_vals = np.floor(np.linspace(1,75,num=7)).astype(int)
N_vals = [1,2,5,7,12,25,50,75]
N_trials = 10
### Bunch of matrices for storing costs
J_finite_nom = np.zeros((N_trials,len(N_vals)))
J_finite_nomK = np.zeros((N_trials,len(N_vals)))
J_finite_rs = np.zeros((N_trials,len(N_vals)))
J_finite_ur = np.zeros((N_trials,len(N_vals)))
J_finite_pg = np.zeros((N_trials,len(N_vals)))
J_inf_nom = np.zeros((N_trials,len(N_vals)))
J_inf_rs = np.zeros((N_trials,len(N_vals)))
J_inf_ur = np.zeros((N_trials,len(N_vals)))
J_inf_pg = np.zeros((N_trials,len(N_vals)))
# cost for finite time horizon, true model
J_finite_opt = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A,B)
### Solve for optimal infinite time horizon LQR controller
K_opt = -lqrpols.lqr_gain(A,B,Q,R)
# cost for infinite time horizon, true model
J_inf_opt = lqrpols.cost_inf_K(A,B,Q,R,K_opt)
# cost for zero control
baseline = lqrpols.cost_finite_K(A,B,Q,R,x0,T,np.zeros((p,d)))
# model for nominal control with 1 rollout
A_nom1,B_nom1 = lqrpols.lsqr_estimator(A,B,Q,R,x0,eq_err,1,T)
print(A_nom1)
print(B_nom1)
# cost for finite time horizon, one rollout, nominal control
one_rollout_cost = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A_nom1,B_nom1)
K_nom1 = -lqrpols.lqr_gain(A_nom1,B_nom1,Q,R)
one_rollout_cost_inf = lqrpols.cost_inf_K(A,B,Q,R,K_nom1)
for N in range(len(N_vals)):
for trial in range(N_trials):
# nominal model, N x 40 to match sample budget of policy gradient
A_nom,B_nom = lqrpols.lsqr_estimator(A,B,Q,R,x0,eq_err,N_vals[N]*40,T);
# finite time horizon cost with nominal model
J_finite_nom[trial,N] = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A_nom,B_nom)
# Solve for infinite time horizon nominal LQR controller
K_nom = -lqrpols.lqr_gain(A_nom,B_nom,Q,R)
# cost of using the infinite time horizon solution for finite time horizon
J_finite_nomK[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_nom)
# infinite time horizon cost of nominal model
J_inf_nom[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_nom)
# policy gradient, batchsize 40 per iteration
K_pg = lqrpols.policy_gradient_adam_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*5,T)
J_finite_pg[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_pg)
J_inf_pg[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_pg)
# random search, batchsize 4, so uses 8 rollouts per iteration
K_rs = lqrpols.random_search_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*5,T)
J_finite_rs[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_rs)
J_inf_rs[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_rs)
# uniformly random sampling, N x 40 to match sample budget of policy gradient
K_ur = lqrpols.uniform_random_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*40,T)
J_finite_ur[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_ur)
J_inf_ur[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_ur)
colors = [ '#2D328F', '#F15C19',"#81b13c","#ca49ac"]
label_fontsize = 18
tick_fontsize = 14
linewidth = 3
markersize = 10
tot_samples = 40*np.array(N_vals)
plt.plot(tot_samples,np.amin(J_finite_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.plot(tot_samples,np.amin(J_finite_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.plot(tot_samples,np.amin(J_finite_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.plot([tot_samples[0],tot_samples[-1]],[baseline, baseline],color='#000000',linewidth=linewidth,
linestyle='--',label='zero control')
plt.plot([tot_samples[0],tot_samples[-1]],[J_finite_opt, J_finite_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,2000,0,12])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,np.median(J_finite_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.fill_between(tot_samples, np.amin(J_finite_pg,axis=0), np.amax(J_finite_pg,axis=0), alpha=0.25)
plt.plot(tot_samples,np.median(J_finite_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.fill_between(tot_samples, np.amin(J_finite_ur,axis=0), np.amax(J_finite_ur,axis=0), alpha=0.25)
plt.plot(tot_samples,np.median(J_finite_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.fill_between(tot_samples, np.amin(J_finite_rs,axis=0), np.amax(J_finite_rs,axis=0), alpha=0.25)
plt.plot([tot_samples[0],tot_samples[-1]],[baseline, baseline],color='#000000',linewidth=linewidth,
linestyle='--',label='zero control')
plt.plot([tot_samples[0],tot_samples[-1]],[J_finite_opt, J_finite_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,2000,0,12])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,np.median(J_inf_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.fill_between(tot_samples, np.amin(J_inf_pg,axis=0), np.minimum(np.amax(J_inf_pg,axis=0),15), alpha=0.25)
plt.plot(tot_samples,np.median(J_inf_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.fill_between(tot_samples, np.amin(J_inf_ur,axis=0), np.minimum(np.amax(J_inf_ur,axis=0),15), alpha=0.25)
plt.plot(tot_samples,np.median(J_inf_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.fill_between(tot_samples, np.amin(J_inf_rs,axis=0), np.minimum(np.amax(J_inf_rs,axis=0),15), alpha=0.25)
plt.plot([tot_samples[0],tot_samples[-1]],[J_inf_opt, J_inf_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,3000,5,10])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_pg),axis=0)/10,'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_ur),axis=0)/10,'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_rs),axis=0)/10,'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.axis([0,3000,0,1])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('fraction stable',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
one_rollout_cost-J_finite_opt
one_rollout_cost_inf-J_inf_opt
```
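For reference, the finite-horizon optimal baseline comes from a backward Riccati recursion. Here is a minimal numpy sketch for the same double-integrator system, independent of `lqrpols` (which implements the full version); for a long enough horizon the gain approaches the stabilizing infinite-horizon LQR gain:

```python
import numpy as np

A = np.array([[1., 1.], [0., 1.]])
B = np.array([[0.], [1.]])
Q = np.array([[1., 0.], [0., 0.]])
R = np.array([[1.]])

# Backward Riccati recursion: P_T = Q, then step backwards in time
P = Q.copy()
for _ in range(100):
    BtPB = R + B.T @ P @ B
    K = -np.linalg.solve(BtPB, B.T @ P @ A)   # control law u(t) = K x(t)
    P = Q + A.T @ P @ (A + B @ K)

# The closed-loop matrix A + B K should have spectral radius < 1
rho = max(abs(np.linalg.eigvals(A + B @ K)))
assert rho < 1.0
print("gain:", K, "spectral radius:", rho)
```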
---
```
#default_exp core
#export
from local.test import *
from local.imports import *
from local.notebook.showdoc import show_doc
```
# Core
> Basic functions used in the fastai library
```
# export
defaults = SimpleNamespace()
```
## Metaclasses
```
#export
class PrePostInitMeta(type):
"A metaclass that calls optional `__pre_init__` and `__post_init__` methods"
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
def _pass(self, *args,**kwargs): pass
for o in ('__init__', '__pre_init__', '__post_init__'):
if not hasattr(x,o): setattr(x,o,_pass)
old_init = x.__init__
@functools.wraps(old_init)
def _init(self,*args,**kwargs):
self.__pre_init__()
old_init(self, *args,**kwargs)
self.__post_init__()
setattr(x, '__init__', _init)
return x
show_doc(PrePostInitMeta, title_level=3)
class _T(metaclass=PrePostInitMeta):
def __pre_init__(self): self.a = 0; assert self.a==0
def __init__(self): self.a += 1; assert self.a==1
def __post_init__(self): self.a += 1; assert self.a==2
t = _T()
t.a
#export
class PrePostInit(metaclass=PrePostInitMeta):
"Base class that provides `PrePostInitMeta` metaclass to subclasses"
pass
class _T(PrePostInit):
def __pre_init__(self): self.a = 0; assert self.a==0
def __init__(self): self.a += 1; assert self.a==1
def __post_init__(self): self.a += 1; assert self.a==2
t = _T()
t.a
#export
class NewChkMeta(PrePostInitMeta):
"Metaclass to avoid recreating object passed to constructor (plus all `PrePostInitMeta` functionality)"
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
old_init,old_new = x.__init__,x.__new__
@functools.wraps(old_init)
def _new(cls, x=None, *args, **kwargs):
if x is not None and isinstance(x,cls):
x._newchk = 1
return x
res = old_new(cls)
res._newchk = 0
return res
@functools.wraps(old_init)
def _init(self,*args,**kwargs):
if self._newchk: return
old_init(self, *args, **kwargs)
x.__init__,x.__new__ = _init,_new
return x
class _T(metaclass=NewChkMeta):
"Testing"
def __init__(self, o=None): self.foo = getattr(o,'foo',0) + 1
class _T2():
def __init__(self, o): self.foo = getattr(o,'foo',0) + 1
t = _T(1)
test_eq(t.foo,1)
t2 = _T(t)
test_eq(t2.foo,1)
test_is(t,t2)
t = _T2(1)
test_eq(t.foo,1)
t2 = _T2(t)
test_eq(t2.foo,2)
test_eq(_T.__doc__, "Testing")
test_eq(str(inspect.signature(_T)), '(o=None)')
```
## Foundational functions
### Decorators
```
import copy
#export
def patch_to(cls):
"Decorator: add `f` to `cls`"
def _inner(f):
nf = copy.copy(f)
        # `functools.update_wrapper` breaks when passing a patched function to `Pipeline`, so we copy the wrapper attributes manually
for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o))
nf.__qualname__ = f"{cls.__name__}.{f.__name__}"
setattr(cls, f.__name__, nf)
return f
return _inner
@patch_to(_T2)
def func1(x, a:bool): return a+2
t = _T2(1)
test_eq(t.func1(1), 3)
#export
def patch(f):
"Decorator: add `f` to the first parameter's class (based on f's type annotations)"
cls = next(iter(f.__annotations__.values()))
return patch_to(cls)(f)
@patch
def func(x:_T2, a:bool):
"test"
return a+2
t = _T2(1)
test_eq(t.func(1), 3)
test_eq(t.func.__qualname__, '_T2.func')
```
### Type checking
Runtime type checking is handy, so let's make it easy!
```
#export core
#NB: Please don't move this to a different line or module, since it's used in testing `get_source_link`
def chk(f): return typechecked(always=True)(f)
```
Decorator for a function to check that type-annotated arguments receive arguments of the right type.
```
@chk
def test_chk(a:int=1): return a
test_eq(test_chk(2), 2)
test_eq(test_chk(), 1)
test_fail(lambda: test_chk('a'), contains='"a" must be int')
```
Decorated functions will pickle correctly.
```
t = pickle.loads(pickle.dumps(test_chk))
test_eq(t(2), 2)
test_eq(t(), 1)
```
### Context managers
```
@contextmanager
def working_directory(path):
"Change working directory to `path` and return to previous on exit."
prev_cwd = Path.cwd()
os.chdir(path)
try: yield
finally: os.chdir(prev_cwd)
```
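A short usage sketch with a temporary directory (hypothetical paths, restating the context manager so the snippet runs standalone): the `finally` clause guarantees the previous working directory is restored on exit.

```python
import os, tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def working_directory(path):
    "Change working directory to `path` and return to previous on exit."
    prev_cwd = Path.cwd()
    os.chdir(path)
    try: yield
    finally: os.chdir(prev_cwd)

start = Path.cwd()
with tempfile.TemporaryDirectory() as tmp:
    with working_directory(tmp):
        # Inside the block, the cwd is the temporary directory
        assert Path.cwd() == Path(tmp).resolve()
# Outside, the original cwd is restored
assert Path.cwd() == start
```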
### Monkey-patching
```
#export
#NB: Please don't move this to a different line or module, since it's used in testing `get_source_link`
@patch
def ls(self:Path):
"Contents of path as a list"
return list(self.iterdir())
```
We add an `ls()` method to `pathlib.Path` which is simply defined as `list(Path.iterdir())`, mainly for convenience in REPL environments such as notebooks.
```
path = Path()
t = path.ls()
assert len(t)>0
t[0]
#hide
pkl = pickle.dumps(path)
p2 = pickle.loads(pkl)
test_eq(path.ls()[0], p2.ls()[0])
#export
def tensor(x, *rest):
"Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
if len(rest): x = (x,)+rest
# Pytorch bug in dataloader using num_workers>0
if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
res = torch.tensor(x) if isinstance(x, (tuple,list)) else as_tensor(x)
if res.dtype is torch.int32:
warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
return res.long()
return res
test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(1,2,3), torch.tensor([1,2,3]))
```
#### `Tensor.ndim`
```
#export
Tensor.ndim = property(lambda x: x.dim())
```
We add an `ndim` property to `Tensor` with same semantics as [numpy ndim](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ndim.html), which allows tensors to be used in matplotlib and other places that assume this property exists.
```
test_eq(torch.tensor([1,2]).ndim,1)
test_eq(torch.tensor(1).ndim,0)
test_eq(torch.tensor([[1]]).ndim,2)
```
### Documentation functions
```
#export core
def add_docs(cls, cls_doc=None, **docs):
"Copy values from `docs` to `cls` docstrings, and confirm all public methods are documented"
if cls_doc is not None: cls.__doc__ = cls_doc
for k,v in docs.items():
f = getattr(cls,k)
if hasattr(f,'__func__'): f = f.__func__ # required for class methods
f.__doc__ = v
# List of public callables without docstring
nodoc = [c for n,c in vars(cls).items() if isinstance(c,Callable)
and not n.startswith('_') and c.__doc__ is None]
assert not nodoc, f"Missing docs: {nodoc}"
assert cls.__doc__ is not None, f"Missing class docs: {cls}"
#export core
def docs(cls):
    "Decorator version of `add_docs`"
add_docs(cls, **cls._docs)
return cls
class _T:
def f(self): pass
@classmethod
def g(cls): pass
add_docs(_T, "a", f="f", g="g")
test_eq(_T.__doc__, "a")
test_eq(_T.f.__doc__, "f")
test_eq(_T.g.__doc__, "g")
#export
def custom_dir(c, add:List):
"Implement custom `__dir__`, adding `add` to `cls`"
return dir(type(c)) + list(c.__dict__.keys()) + add
# export
def is_iter(o):
"Test whether `o` can be used in a `for` loop"
#Rank 0 tensors in PyTorch are not really iterable
return isinstance(o, (Iterable,Generator)) and getattr(o,'ndim',1)
assert is_iter([1])
assert not is_iter(torch.tensor(1))
assert is_iter(torch.tensor([1,2]))
assert is_iter(o for o in range(3))
# export
def coll_repr(c, max=1000):
"String repr of up to `max` items of (possibly lazy) collection `c`"
return f'(#{len(c)}) [' + ','.join(itertools.islice(map(str,c), 10)) + ('...'
if len(c)>10 else '') + ']'
test_eq(coll_repr(range(1000)), '(#1000) [0,1,2,3,4,5,6,7,8,9...]')
```
## GetAttr -
```
#export
class GetAttr:
"Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`"
_xtra=[]
def __getattr__(self,k):
assert self._xtra, "Inherited from `GetAttr` but no `_xtra` attrs listed"
if k in self._xtra: return getattr(self.default, k)
raise AttributeError(k)
def __dir__(self): return custom_dir(self, self._xtra)
class _C(GetAttr): default,_xtra = 'Hi',['lower']
t = _C()
test_eq(t.lower(), 'hi')
test_fail(lambda: t.upper())
assert 'lower' in dir(t)
```
## L -
```
# export
def _mask2idxs(mask):
mask = list(mask)
if len(mask)==0: return []
if isinstance(mask[0],bool): return [i for i,m in enumerate(mask) if m]
return [int(i) for i in mask]
def _listify(o):
if o is None: return []
if isinstance(o, list): return o
if isinstance(o, (str,np.ndarray,Tensor)): return [o]
if is_iter(o): return list(o)
return [o]
#export
class L(GetAttr, metaclass=NewChkMeta):
"Behaves like a list of `items` but can also index with list of indices or masks"
_xtra = [o for o in dir(list) if not o.startswith('_')]
def __init__(self, items=None, *rest, use_list=False, match=None):
items = [] if items is None else items
self.items = self.default = list(items) if use_list else _listify(items)
self.items += list(rest)
if match is not None:
if len(self.items)==1: self.items = self.items*len(match)
else: assert len(self.items)==len(match), 'Match length mismatch'
def __len__(self): return len(self.items)
def __delitem__(self, i): del(self.items[i])
def __repr__(self): return f'{coll_repr(self)}'
def __eq__(self,b): return all_equal(b,self)
def __iter__(self): return (self[i] for i in range(len(self)))
def __invert__(self): return L(not i for i in self)
def __mul__ (a,b): return L(a.items*b)
def __add__ (a,b): return L(a.items+_listify(b))
def __radd__(a,b): return L(b)+a
    def __iadd__(a,b):
a.items += list(b)
return a
def __getitem__(self, idx):
"Retrieve `idx` (can be list of indices, or mask, or int) items"
res = [self.items[i] for i in _mask2idxs(idx)] if is_iter(idx) else self.items[idx]
if isinstance(res,(tuple,list)) and not isinstance(res,L): res = L(res)
return res
def __setitem__(self, idx, o):
"Set `idx` (can be list of indices, or mask, or int) items to `o` (which is broadcast if not iterable)"
idx = idx if isinstance(idx,L) else _listify(idx)
if not is_iter(o): o = [o]*len(idx)
for i,o_ in zip(idx,o): self.items[i] = o_
def sorted(self, key=None, reverse=False):
"New `L` sorted by `key`. If key is str then use `attrgetter`. If key is int then use `itemgetter`."
if isinstance(key,str): k=lambda o:getattr(o,key,0)
elif isinstance(key,int): k=itemgetter(key)
else: k=key
return L(sorted(self.items, key=k, reverse=reverse))
def mapped(self, f, *args, **kwargs): return L(map(partial(f,*args,**kwargs), self))
def zipped(self): return L(zip(*self))
def itemgot(self, idx): return self.mapped(itemgetter(idx))
def attrgot(self, k): return self.mapped(lambda o:getattr(o,k,0))
def tensored(self): return self.mapped(tensor)
def stack(self, dim=0): return torch.stack(list(self.tensored()), dim=dim)
def cat (self, dim=0): return torch.cat (list(self.tensored()), dim=dim)
add_docs(L,
mapped="Create new `L` with `f` applied to all `items`, passing `args` and `kwargs` to `f`",
zipped="Create new `L` with `zip(*items)`",
itemgot="Create new `L` with item `idx` of all `items`",
attrgot="Create new `L` with attr `k` of all `items`",
tensored="`mapped(tensor)`",
stack="Same as `torch.stack`",
cat="Same as `torch.cat`")
```
You can create an `L` from an existing iterable (e.g. a list, range, etc) and access or modify it with an int list/tuple index, mask, int, or slice. All `list` methods can also be used with `L`.
```
t = L(range(12))
test_eq(t, list(range(12)))
test_ne(t, list(range(11)))
t.reverse()
test_eq(t[0], 11)
t[3] = "h"
test_eq(t[3], "h")
t[3,5] = ("j","k")
test_eq(t[3,5], ["j","k"])
test_eq(t, L(t))
t
```
You can also modify an `L` with `append`, `+`, and `*`.
```
t = L()
test_eq(t, [])
t.append(1)
test_eq(t, [1])
t += [3,2]
test_eq(t, [1,3,2])
t = t + [4]
test_eq(t, [1,3,2,4])
t = 5 + t
test_eq(t, [5,1,3,2,4])
test_eq(L(1,2,3), [1,2,3])
test_eq(L(1,2,3), L(1,2,3))
t = L(1)*5
t = t.mapped(operator.neg)
test_eq(t,[-1]*5)
test_eq(~L([True,False,False]), L([False,True,True]))
def _f(x,a=0): return x+a
t = L(1)*5
test_eq(t.mapped(_f), t)
test_eq(t.mapped(_f,1), [2]*5)
test_eq(t.mapped(_f,a=2), [3]*5)
```
An `L` can be constructed from anything iterable, although tensors and arrays will not be iterated over on construction, unless you pass `use_list` to the constructor.
```
test_eq(L([1,2,3]),[1,2,3])
test_eq(L(L([1,2,3])),[1,2,3])
test_ne(L([1,2,3]),[1,2,])
test_eq(L('abc'),['abc'])
test_eq(L(range(0,3)),[0,1,2])
test_eq(L(o for o in range(0,3)),[0,1,2])
test_eq(L(tensor(0)),[tensor(0)])
test_eq(L([tensor(0),tensor(1)]),[tensor(0),tensor(1)])
test_eq(L(tensor([0.,1.1]))[0],tensor([0.,1.1]))
test_eq(L(tensor([0.,1.1]), use_list=True), [0.,1.1]) # `use_list=True` to unwrap arrays/tensors
```
If `match` is not `None` then the created list is the same length as `match`, either by:
- If `len(items)==1` then `items` is replicated,
- Otherwise an error is raised if `match` and `items` are not already the same size.
```
test_eq(L(1,match=[1,2,3]),[1,1,1])
test_eq(L([1,2],match=[2,3]),[1,2])
test_fail(lambda: L([1,2],match=[1,2,3]))
```
If you create an `L` from an existing `L` then you'll get back the original object (since `L` uses the `NewChkMeta` metaclass).
```
test_is(L(t), t)
```
### Methods
```
show_doc(L.__getitem__)
t = L(range(12))
test_eq(t[1,2], [1,2]) # implicit tuple
test_eq(t[[1,2]], [1,2]) # list
test_eq(t[:3], [0,1,2]) # slice
test_eq(t[[False]*11 + [True]], [11]) # mask
test_eq(t[tensor(3)], 3)
show_doc(L.__setitem__)
t[4,6] = 0
test_eq(t[4,6], [0,0])
t[4,6] = [1,2]
test_eq(t[4,6], [1,2])
show_doc(L.mapped)
test_eq(L(range(4)).mapped(operator.neg), [0,-1,-2,-3])
show_doc(L.zipped)
t = L([[1,2,3],'abc'])
test_eq(t.zipped(), [(1, 'a'),(2, 'b'),(3, 'c')])
show_doc(L.itemgot)
test_eq(t.itemgot(1), [2,'b'])
show_doc(L.attrgot)
a = [SimpleNamespace(a=3,b=4),SimpleNamespace(a=1,b=2)]
test_eq(L(a).attrgot('b'), [4,2])
show_doc(L.sorted)
test_eq(L(a).sorted('a').attrgot('b'), [2,4])
```
### Tensor methods
There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`.
```
t = L(([1,2],[3,4]))
test_eq(t.tensored(), [tensor(1,2),tensor(3,4)])
test_eq(t.stack(), tensor([[1,2],[3,4]]))
test_eq(t.cat(), tensor([1,2,3,4]))
```
## Utility functions
### Basics
```
# export
def ifnone(a, b):
"`b` if `a` is None else `a`"
return b if a is None else a
```
Since `b if a is None else a` is such a common pattern, we wrap it in a function. Be careful, though: Python evaluates *both* `a` and `b` when calling `ifnone` (which doesn't happen when using the `if` expression directly).
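A quick demonstration of the eager-evaluation caveat (the `_expensive` helper is ours, purely for illustration; `ifnone` is repeated so the snippet runs standalone):

```python
def ifnone(a, b):
    "`b` if `a` is None else `a`"
    return b if a is None else a

log = []
def _expensive():
    log.append("evaluated")  # side effect proves the call happened
    return 1

# `a` is not None, yet `_expensive()` has already been evaluated:
assert ifnone(2, _expensive()) == 2
assert log == ["evaluated"]
```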
```
test_eq(ifnone(None,1), 1)
test_eq(ifnone(2 ,1), 2)
#export
def get_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Dynamically create a class containing `fld_names`"
for f in fld_names: flds[f] = None
for f in L(funcs): flds[f.__name__] = f
sup = ifnone(sup, ())
if not isinstance(sup, tuple): sup=(sup,)
def _init(self, *args, **kwargs):
for i,v in enumerate(args): setattr(self, fld_names[i], v)
for k,v in kwargs.items(): setattr(self,k,v)
def _repr(self):
return '\n'.join(f'{o}: {getattr(self,o)}' for o in set(dir(self))
if not o.startswith('_') and not isinstance(getattr(self,o), types.MethodType))
if not sup: flds['__repr__'] = _repr
flds['__init__'] = _init
res = type(nm, sup, flds)
if doc is not None: res.__doc__ = doc
return res
_t = get_class('_t', 'a')
t = _t()
test_eq(t.a, None)
```
Most often you'll want to call `mk_class`, since it adds the class to your module. See `mk_class` for more details and examples of use (which also apply to `get_class`).
```
#export
def mk_class(nm, *fld_names, sup=None, doc=None, funcs=None, mod=None, **flds):
"Create a class using `get_class` and add to the caller's module"
if mod is None: mod = inspect.currentframe().f_back.f_locals
res = get_class(nm, *fld_names, sup=sup, doc=doc, funcs=funcs, **flds)
mod[nm] = res
```
Any `kwargs` will be added as class attributes, and `sup` is an optional (tuple of) base classes.
```
mk_class('_t', a=1, sup=GetAttr)
t = _t()
test_eq(t.a, 1)
assert(isinstance(t,GetAttr))
```
A `__init__` is provided that sets attrs for any `kwargs`, and for any `args` (matching by position to fields), along with a `__repr__` which prints all attrs. The docstring is set to `doc`. You can pass `funcs` which will be added as attrs with the function names.
```
def foo(self): return 1
mk_class('_t', 'a', sup=GetAttr, doc='test doc', funcs=foo)
t = _t(3, b=2)
test_eq(t.a, 3)
test_eq(t.b, 2)
test_eq(t.foo(), 1)
test_eq(t.__doc__, 'test doc')
t
#export
def wrap_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Decorator: makes function a method of a new class `nm` passing parameters to `mk_class`"
def _inner(f):
mk_class(nm, *fld_names, sup=sup, doc=doc, funcs=L(funcs)+f, mod=f.__globals__, **flds)
return f
return _inner
@wrap_class('_t', a=2)
def bar(self,x): return x+1
t = _t()
test_eq(t.a, 2)
test_eq(t.bar(3), 4)
t
# export
def noop (x=None, *args, **kwargs):
"Do nothing"
return x
noop()
test_eq(noop(1),1)
# export
def noops(self, x, *args, **kwargs):
"Do nothing (method)"
return x
mk_class('_t', foo=noops)
test_eq(_t().foo(1),1)
```
### Collection functions
```
#export
def tuplify(o, use_list=False, match=None):
"Make `o` a tuple"
return tuple(L(o, use_list=use_list, match=match))
test_eq(tuplify(None),())
test_eq(tuplify([1,2,3]),(1,2,3))
test_eq(tuplify(1,match=[1,2,3]),(1,1,1))
#export
def replicate(item,match):
"Create tuple of `item` copied `len(match)` times"
return (item,)*len(match)
t = [1,1]
test_eq(replicate([1,2], t),([1,2],[1,2]))
test_eq(replicate(1, t),(1,1))
#export
def uniqueify(x, sort=False, bidir=False, start=None):
"Return the unique elements in `x`, optionally `sort`-ed, optionally return the reverse correspondance."
res = list(OrderedDict.fromkeys(x).keys())
if start is not None: res = L(start)+res
if sort: res.sort()
if bidir: return res, {v:k for k,v in enumerate(res)}
return res
# test
test_eq(set(uniqueify([1,1,0,5,0,3])),{0,1,3,5})
test_eq(uniqueify([1,1,0,5,0,3], sort=True),[0,1,3,5])
v,o = uniqueify([1,1,0,5,0,3], bidir=True)
test_eq(v,[1,0,5,3])
test_eq(o,{1:0, 0: 1, 5: 2, 3: 3})
v,o = uniqueify([1,1,0,5,0,3], sort=True, bidir=True)
test_eq(v,[0,1,3,5])
test_eq(o,{0:0, 1: 1, 3: 2, 5: 3})
# export
def setify(o): return o if isinstance(o,set) else set(L(o))
# test
test_eq(setify(None),set())
test_eq(setify('abc'),{'abc'})
test_eq(setify([1,2,2]),{1,2})
test_eq(setify(range(0,3)),{0,1,2})
test_eq(setify({1,2}),{1,2})
#export
def is_listy(x):
"`isinstance(x, (tuple,list,L))`"
return isinstance(x, (tuple,list,L,slice))
assert is_listy([1])
assert is_listy(L([1]))
assert is_listy(slice(2))
assert not is_listy(torch.tensor([1]))
#export
def range_of(x):
"All indices of collection `x` (i.e. `list(range(len(x)))`)"
return list(range(len(x)))
test_eq(range_of([1,1,1,1]), [0,1,2,3])
# export
def mask2idxs(mask):
"Convert bool mask or index list to index `L`"
return L(_mask2idxs(mask))
test_eq(mask2idxs([False,True,False,True]), [1,3])
test_eq(mask2idxs(torch.tensor([1,2,3])), [1,2,3])
```
### File and network functions
```
def bunzip(fn):
"bunzip `fn`, raising exception if output already exists"
fn = Path(fn)
assert fn.exists(), f"{fn} doesn't exist"
out_fn = fn.with_suffix('')
assert not out_fn.exists(), f"{out_fn} already exists"
with bz2.BZ2File(fn, 'rb') as src, out_fn.open('wb') as dst:
for d in iter(lambda: src.read(1024*1024), b''): dst.write(d)
f = Path('files/test.txt')
if f.exists(): f.unlink()
bunzip('files/test.txt.bz2')
t = f.open().readlines()
test_eq(len(t),1)
test_eq(t[0], 'test\n')
f.unlink()
```
### Tensor functions
```
#export
def apply(func, x, *args, **kwargs):
"Apply `func` recursively to `x`, passing on args"
if is_listy(x): return [apply(func, o, *args, **kwargs) for o in x]
if isinstance(x,dict): return {k: apply(func, v, *args, **kwargs) for k,v in x.items()}
return func(x, *args, **kwargs)
#export
def to_detach(b, cpu=True):
"Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`."
def _inner(x, cpu=True):
if not isinstance(x,Tensor): return x
x = x.detach()
return x.cpu() if cpu else x
return apply(_inner, b, cpu=cpu)
#export
def to_half(b):
"Recursively map lists of tensors in `b ` to FP16."
return apply(lambda x: x.half() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b)
#export
def to_float(b):
"Recursively map lists of int tensors in `b ` to float."
return apply(lambda x: x.float() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b)
#export
defaults.device = torch.cuda.current_device() if torch.cuda.is_available() else torch.device('cpu')
#export
def to_device(b, device=defaults.device):
"Recursively put `b` on `device`."
def _inner(o): return o.to(device, non_blocking=True) if isinstance(o,Tensor) else o
return apply(_inner, b)
t1,(t2,t3) = to_device([3,[tensor(3),tensor(2)]])
test_eq((t1,t2,t3),(3,3,2))
test_eq(t2.type(), "torch.cuda.LongTensor")
test_eq(t3.type(), "torch.cuda.LongTensor")
#export
def to_cpu(b):
"Recursively map lists of tensors in `b ` to the cpu."
return to_device(b,'cpu')
t3 = to_cpu(t3)
test_eq(t3.type(), "torch.LongTensor")
test_eq(t3, 2)
def to_np(x):
"Convert a tensor to a numpy array."
return x.data.cpu().numpy()
t3 = to_np(t3)
test_eq(type(t3), np.ndarray)
test_eq(t3, 2)
#export
def item_find(x, idx=0):
"Recursively takes the `idx`-th element of `x`"
if is_listy(x): return item_find(x[idx])
if isinstance(x,dict):
key = list(x.keys())[idx] if isinstance(idx, int) else idx
return item_find(x[key])
return x
#export
def find_device(b):
"Recursively search the device of `b`."
return item_find(b).device
test_eq(find_device(t2).index, defaults.device)
test_eq(find_device([t2,t2]).index, defaults.device)
test_eq(find_device({'a':t2,'b':t2}).index, defaults.device)
test_eq(find_device({'a':[[t2],[t2]],'b':t2}).index, defaults.device)
#export
def find_bs(b):
"Recursively search the batch size of `b`."
return item_find(b).shape[0]
x = torch.randn(4,5)
test_eq(find_bs(x), 4)
test_eq(find_bs([x, x]), 4)
test_eq(find_bs({'a':x,'b':x}), 4)
test_eq(find_bs({'a':[[x],[x]],'b':x}), 4)
def np_func(f):
"Convert a function taking and returning numpy arrays to one taking and returning tensors"
def _inner(*args, **kwargs):
nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args]
return tensor(f(*nargs, **kwargs))
functools.update_wrapper(_inner, f)
return _inner
```
This decorator is particularly useful for using numpy functions as fastai metrics, for instance:
```
from sklearn.metrics import f1_score
@np_func
def f1(inp,targ): return f1_score(targ, inp)
a1,a2 = array([0,1,1]),array([1,0,1])
t = f1(tensor(a1),tensor(a2))
test_eq(f1_score(a1,a2), t)
assert isinstance(t,Tensor)
class Module(nn.Module, metaclass=PrePostInitMeta):
"Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
def __pre_init__(self): super().__init__()
def __init__(self): pass
show_doc(Module, title_level=3)
class _T(Module):
def __init__(self): self.f = nn.Linear(1,1)
def forward(self,x): return self.f(x)
t = _T()
t(tensor([1.]))
```
### Functions on functions
```
# export
@chk
def compose(*funcs: Callable, order=None):
"Create a function that composes all functions in `funcs`, passing along remaining `*args` and `**kwargs` to all"
funcs = L(funcs)
if order is not None: funcs = funcs.sorted(order)
def _inner(x, *args, **kwargs):
for f in L(funcs): x = f(x, *args, **kwargs)
return x
return _inner
f1 = lambda o,p=0: (o*2)+p
f2 = lambda o,p=1: (o+1)/p
test_eq(f2(f1(3)), compose(f1,f2)(3))
test_eq(f2(f1(3,p=3),p=3), compose(f1,f2)(3,p=3))
test_eq(f2(f1(3, 3), 3), compose(f1,f2)(3, 3))
f1.order = 1
test_eq(f1(f2(3)), compose(f1,f2, order="order")(3))
#export
def mapper(f):
"Create a function that maps `f` over an input collection"
return lambda o: [f(o_) for o_ in o]
func = mapper(lambda o:o*2)
test_eq(func(range(3)),[0,2,4])
#export
def partialler(f, *args, order=None, **kwargs):
"Like `functools.partial` but also copies over docstring"
fnew = partial(f,*args,**kwargs)
fnew.__doc__ = f.__doc__
if order is not None: fnew.order=order
elif hasattr(f,'order'): fnew.order=f.order
return fnew
def _f(x,a=1):
"test func"
return x+a
_f.order=1
f = partialler(_f, a=2)
test_eq(f.order, 1)
f = partialler(_f, a=2, order=3)
test_eq(f.__doc__, "test func")
test_eq(f.order, 3)
test_eq(f(3), _f(3,2))
```
### Sorting objects from before/after
Transforms and callbacks will have `run_after`/`run_before` attributes; this function sorts them to respect those requirements (when possible). Sometimes we also want a transform/callback to run at the end while still using the `run_after`/`run_before` behaviors; for those, the function checks for a `toward_end` attribute (which needs to be `True`).
```
#export
def _is_instance(f, gs):
tst = [g if type(g) in [type, 'function'] else g.__class__ for g in gs]
for g in tst:
if isinstance(f, g) or f==g: return True
return False
def _is_first(f, gs):
for o in L(getattr(f, 'run_after', None)):
if _is_instance(o, gs): return False
for g in gs:
if _is_instance(f, L(getattr(g, 'run_before', None))): return False
return True
def sort_by_run(fs):
end = L(getattr(f, 'toward_end', False) for f in fs)
inp,res = L(fs)[~end] + L(fs)[end], []
while len(inp) > 0:
for i,o in enumerate(inp):
if _is_first(o, inp):
res.append(inp.pop(i))
break
else: raise Exception("Impossible to sort")
return res
class Tst(): pass
class Tst1():
run_before=[Tst]
class Tst2():
run_before=Tst
run_after=Tst1
tsts = [Tst(), Tst1(), Tst2()]
test_eq(sort_by_run(tsts), [tsts[1], tsts[2], tsts[0]])
Tst2.run_before,Tst2.run_after = Tst1,Tst
test_fail(lambda: sort_by_run([Tst(), Tst1(), Tst2()]))
def tst1(x): return x
tst1.run_before = Tst
test_eq(sort_by_run([tsts[0], tst1]), [tst1, tsts[0]])
class Tst1():
toward_end=True
class Tst2():
toward_end=True
run_before=Tst1
tsts = [Tst(), Tst1(), Tst2()]
test_eq(sort_by_run(tsts), [tsts[0], tsts[2], tsts[1]])
```
### Other helpers
```
#export
def num_cpus():
"Get number of cpus"
try: return len(os.sched_getaffinity(0))
except AttributeError: return os.cpu_count()
defaults.cpus = min(16, num_cpus())
#export
def add_props(f, n=2):
"Create properties passing each of `range(n)` to f"
return (property(partial(f,i)) for i in range(n))
class _T(): a,b = add_props(lambda i,x:i*2)
t = _T()
test_eq(t.a,0)
test_eq(t.b,2)
```
This is a quick way to generate, for instance, *train* and *valid* versions of a property. See `DataBunch` definition for an example of this.
```
#export
def make_cross_image(bw=True):
"Create a tensor containing a cross image, either `bw` (True) or color"
if bw:
im = torch.zeros(5,5)
im[2,:] = 1.
im[:,2] = 1.
else:
im = torch.zeros(3,5,5)
im[0,2,:] = 1.
im[1,:,2] = 1.
return im
plt.imshow(make_cross_image(), cmap="Greys");
plt.imshow(make_cross_image(False).permute(1,2,0));
```
## Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
# Python for ML
> Basic Python reference useful for ML
- toc: true
- badges: true
- comments: true
- categories: [Python, NumPy, Pandas]
- image: images/py.png
----------------------------------------------------------------------------------------------------------------------------
## Python Collections
Collection Types:
1) List is a collection which is ordered and changeable. Allows duplicate members
2) Tuple is a collection which is ordered and unchangeable. Allows duplicate members
3) Set is a collection which is unordered and unindexed. No duplicate members
4) Dictionary is a collection which is changeable and indexed (and, since Python 3.7, preserves insertion order). No duplicate keys
### 1) List
```
list = ["apple", "grapes", "banana"]
print(list)
print(list[1]) #access the list items by referring to the index number
print(list[-1]) #Negative indexing means beginning from the end, -1 refers to the last item
list2 = ["apple", "banana", "cherry", "orange", "kiwi", "melon", "mango"]
print(list2[:4]) #By leaving out the start value, the range will start at the first item
print(list2[2:])
print(list2[-4:-1]) #range
list3 = ["A", "B", "C"]
list3[1] = "D" #change the value of a specific item, by refering to the index number
print(list3)
# For loop
list4 = ["apple", "banana", "cherry"]
for x in list4:
print(x)
#To determine if a specified item is present in a list
if "apple" in list4:
print("Yes")
#To determine how many items a list has
print(len(list4))
```
-------------------------------------------------------------------------------------------------------------------------
#### List Methods:
- append() : Adds an element at the end of the list
- clear() : Removes all the elements from the list
- copy() : Returns a copy of the list
- count() : Returns the number of elements with the specified value
- extend() : Add the elements of a list (or any iterable), to the end of the current list
- index() : Returns the index of the first element with the specified value
- insert() : Adds an element at the specified position
- pop() : Removes the element at the specified position
- remove() : Removes the item with the specified value
- reverse() : Reverses the order of the list
- sort() : Sorts the list
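The snippet below exercises the methods from the list above that the examples that follow do not cover (`count`, `index`, `sort`, `reverse`):

```python
nums = [3, 1, 4, 1, 5]
print(nums.count(1))   #number of occurrences of 1 -> 2
print(nums.index(4))   #index of the first 4 -> 2
nums.sort()            #sorts the list in place
print(nums)            #[1, 1, 3, 4, 5]
nums.reverse()         #reverses the order in place
print(nums)            #[5, 4, 3, 1, 1]
```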
```
#append() method to append an item
list4.append("orange")
print(list4)
#Insert an item as the second position
list4.insert(1, "orange")
print(list4)
#The remove() method removes the specified item
list4.remove("banana")
print(list4)
#pop() method removes the specified index
#and the last item if index is not specified
list4.pop()
print(list4)
#The del keyword removes the specified index
del list4[0]
print(list4)
#The del keyword can also delete the list completely
del list4
#print(list4) #would raise a NameError: list4 no longer exists
#The clear() method empties the list
list5 = ["apple", "banana", "cherry"]
list5.clear()
print(list5)
#the copy() method to make a copy of a list
list5 = ["apple", "banana", "cherry"]
mylist = list5.copy()
print(mylist)
#Join Two Lists
list1 = ["a", "b" , "c"]
list2 = [1, 2, 3]
list3 = list1 + list2
print(list3)
#Append list2 into list1
list1 = ["a", "b" , "c"]
list2 = [1, 2, 3]
for x in list2:
list1.append(x)
print(list1)
#the extend() method to add list2 at the end of list1
list1 = ["a", "b" , "c"]
list2 = [1, 2, 3]
list1.extend(list2)
print(list1)
```
### 2) Tuple
A tuple is a collection which is ordered and unchangeable.
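Because tuples are unchangeable, assigning to an item raises a `TypeError`:

```python
thistuple = ("apple", "banana", "cherry")
try:
    thistuple[0] = "kiwi"              #tuples do not support item assignment
except TypeError as err:
    print("Cannot modify a tuple:", err)
print(thistuple)                       #unchanged: ('apple', 'banana', 'cherry')
```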
```
tuple1 = ("apple", "banana", "cherry")
print(tuple1)
#access tuple item
print(tuple1[1])
#Negative indexing means beginning from the end, -1 refers to the last item
print(tuple1[-1])
#Range : Return the third, fourth, and fifth item
tuple2 = ("apple", "banana", "cherry", "orange", "kiwi", "melon", "mango")
print(tuple2[2:5])
#Specify negative indexes if you want to start the search from the end of the tuple
print(tuple2[-4:-1])
#loop through the tuple items by using a for loop
tuple3 = ("apple", "banana", "cherry")
for x in tuple3:
print(x)
#Check if Item Exists
if "apple" in tuple3:
print("Yes")
#Print the number of items in the tuple
print(len(tuple3))
# join two or more tuples you can use the + operator
tuple1 = ("a", "b" , "c")
tuple2 = (1, 2, 3)
tuple3 = tuple1 + tuple2
print(tuple3)
#Using the tuple() method to make a tuple
thistuple = tuple(("apple", "banana", "cherry")) # note the double round-brackets
print(thistuple)
```
### 3) Set
A set is a collection which is unordered and unindexed. Sets are written with curly brackets.
```
set1 = {"apple", "banana", "cherry"}
print(set1)
#Access items, Loop through the set, and print the values
for x in set1:
print(x)
if "apple" in set1:
print("Yes")
```
#### Set methods:
- add() Adds an element to the set
- clear() Removes all the elements from the set
- copy() Returns a copy of the set
- difference() Returns a set containing the difference between two or more sets
- difference_update() Removes the items in this set that are also included in another, specified set
- discard() Remove the specified item
- intersection() Returns a set, that is the intersection of two other sets
- intersection_update() Removes the items in this set that are not present in other, specified set(s)
- isdisjoint() Returns True if the two sets have no elements in common
- issubset() Returns whether another set contains this set or not
- issuperset() Returns whether this set contains another set or not
- pop() Removes an element from the set
- remove() Removes the specified element
- symmetric_difference() Returns a set with the symmetric differences of two sets
- symmetric_difference_update() inserts the symmetric differences from this set and another
- union() Return a set containing the union of sets
- update() Update the set with the union of this set and others
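The set-algebra methods from the list above in action:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}
print(a.difference(b))            #{1, 2}
print(a.intersection(b))          #{3, 4}
print(a.symmetric_difference(b))  #{1, 2, 5}
print({3, 4}.issubset(a))         #True
print(a.isdisjoint({9, 10}))      #True: no elements in common
```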
```
# Adding new items
set1.add("orange")
print(set1)
#Add multiple items to a set, using the update() method
set1.update(["orange", "mango", "grapes"])
print(set1)
# length of the set
print(len(set1))
# remove item
set1.remove("banana")
print(set1)
#Remove the last item by using the pop() method
set2 = {"apple", "banana", "cherry"}
x = set2.pop()
print(x)
print(set2)
#clear() method empties the set
thisset = {"apple", "banana", "cherry"}
thisset.clear()
print(thisset)
#del keyword will delete the set completely
thisset = {"apple", "banana", "cherry"}
del thisset
#print(thisset) #would raise a NameError: thisset no longer exists
#use the union() method that returns a new set containing all items from both sets,
#or the update() method that inserts all the items from one set into another
set1 = {"a", "b" , "c"}
set2 = {1, 2, 3}
set3 = set1.union(set2)
print(set3)
#update() method inserts the items in set2 into set1
set1 = {"a", "b" , "c"}
set2 = {1, 2, 3}
set1.update(set2)
print(set1)
```
### 4) Dictionary
A dictionary is a collection which is changeable and indexed; since Python 3.7 it also preserves insertion order.
```
dict = { #note: the name "dict" shadows the built-in type; a name like thisdict is safer
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
print(dict)
#access the items of a dictionary by referring to its key name, inside square brackets
dict["model"]
```
#### Dict methods
- clear() Removes all the elements from the dictionary
- copy() Returns a copy of the dictionary
- fromkeys() Returns a dictionary with the specified keys and value
- get() Returns the value of the specified key
- items() Returns a list containing a tuple for each key value pair
- keys() Returns a list containing the dictionary's keys
- pop() Removes the element with the specified key
- popitem() Removes the last inserted key-value pair
- setdefault() Returns the value of the specified key. If the key does not exist: insert the key, with the specified value
- update() Updates the dictionary with the specified key-value pairs
- values() Returns a list of all the values in the dictionary
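A few of the methods above that the examples below do not cover (`fromkeys`, `setdefault`, `copy`):

```python
d = dict.fromkeys(["a", "b"], 0)   #new dict with both keys set to 0
print(d)                           #{'a': 0, 'b': 0}
print(d.setdefault("c", 9))        #"c" is missing, so it is inserted -> 9
print(d.setdefault("a", 9))        #"a" exists, so its value is kept -> 0
d2 = d.copy()                      #shallow copy
print(d2 == d, d2 is d)            #equal contents, but a distinct object
```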
```
#use get() to get the same result
dict.get("model")
#change the value of a specific item by referring to its key name
dict1 = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
dict1["year"] = 2018
print(dict1)
#loop through a dictionary by using a for loop
for x in dict1:
print(x)
#Print all values in the dictionary, one by one
for x in dict1:
print(dict1[x])
#use the values() method to return values of a dictionary
for x in dict1.values():
print(x)
#Loop through both keys and values, by using the items() method
for x, y in dict1.items():
print(x, y)
#Check if an item present in the dictionary
if "model" in dict1:
print("Yes")
print(len(dict1))
#adding items
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict["color"] = "red"
print(thisdict)
#pop() method removes the item with the specified key name
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict.pop("model")
print(thisdict)
# popitem() method removes the last inserted item
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
thisdict.popitem()
print(thisdict)
#del keyword removes the item with the specified key name
thisdict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}
del thisdict["model"]
print(thisdict)
#dictionary can also contain many dictionaries, this is called nested dictionaries
myfamily = {
"child1" : {
"name" : "Emil",
"year" : 2004
},
"child2" : {
"name" : "Tobias",
"year" : 2007
},
"child3" : {
"name" : "Linus",
"year" : 2011
}
}
#Create three dictionaries, then create one dictionary that will contain the other three dictionaries
child1 = {
"name" : "Emil",
"year" : 2004
}
child2 = {
"name" : "Tobias",
"year" : 2007
}
child3 = {
"name" : "Linus",
"year" : 2011
}
myfamily = {
"child1" : child1,
"child2" : child2,
"child3" : child3
}
```
## Python Conditions
### If statement
```
a = 100
b = 200
if b > a:
print("b is greater than a")
#simplified: short-hand if on one line
a = 100
b = 200
if a < b: print("b is greater than a")
a = 20
b = 20
if b > a:
print("b is greater than a")
elif a == b:
print("a and b are equal")
a = 200
b = 100
if b > a:
print("b is greater than a")
elif a == b:
print("a and b are equal")
else:
print("a is greater than b")
# simplified: short-hand if-else (conditional expression)
a = 100
b = 300
print("A") if a > b else print("B")
```
### AND and OR Statement
```
a = 200
b = 33
c = 500
if a > b and c > a:
print("Both conditions are True")
a = 200
b = 33
c = 500
if a > b or a > c:
print("At least one of the conditions is True")
```
### Nested If
```
x = 41
if x > 10:
    print("Above ten,")
    if x > 20:
        print("and also above 20!")
    else:
        print("but not above 20.")
```
### Pass
```
#if statements cannot be empty, but if you for some reason have an if statement
#with no content, put in the pass statement to avoid getting an error
a = 33
b = 200
if b > a:
pass
```
### The while Loop
```
i = 1
while i < 6:
print(i)
i += 1
```
### Break Statement
```
i = 1
while i < 6:
print(i)
if i == 3:
break
i += 1
# with Continue
i = 0
while i < 6:
i += 1
if i == 3:
continue
print(i)
```
### Else statement
```
i = 1
while i < 6:
print(i)
i += 1
else:
print("i is no longer less than 6")
```
### For Loops
```
# For loop for List
fruits = ["apple", "banana", "cherry"]
for x in fruits:
print(x)
# strings
for x in "banana":
print(x)
#break statement
fruits = ["apple", "banana", "cherry"]
for x in fruits:
print(x)
if x == "banana":
break
fruits = ["apple", "banana", "cherry"]
for x in fruits:
if x == "banana":
break
print(x)
#continue
fruits = ["apple", "banana", "cherry"]
for x in fruits:
if x == "banana":
continue
print(x)
# Range
for x in range(6):
print(x)
for x in range(2, 6):
print(x)
for x in range(2, 30, 3):
print(x)
for x in range(6):
print(x)
else:
print("Finally finished!")
adj = ["red", "big", "tasty"]
fruits = ["apple", "banana", "cherry"]
for x in adj:
for y in fruits:
print(x, y)
for x in [0, 1, 2]:
pass
```
### Creating a Function
```
def my_function():
print("Hello")
my_function()
def my_function(*kids):
print("The youngest child is " + kids[2])
my_function("Emil", "Tobias", "Linus")
def my_function(child3, child2, child1):
print("The youngest child is " + child3)
my_function(child1 = "Emil", child2 = "Tobias", child3 = "Linus")
#Passing a List as an Argument
def my_function(food):
for x in food:
print(x)
fruits = ["apple", "banana", "cherry"]
my_function(fruits)
#return value
def my_function(x):
return 5 * x
print(my_function(3))
#Recursion Example
def tri_recursion(k):
if(k > 0):
result = k + tri_recursion(k - 1)
print(result)
else:
result = 0
return result
print("\n\nRecursion Example Results")
tri_recursion(6)
```
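One pattern the examples above skip is a default parameter value, used when the caller omits the argument:

```python
def my_function(country="Norway"):
    return "I am from " + country

print(my_function("India"))   #I am from India
print(my_function())          #I am from Norway
```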
### lambda function
```
x = lambda a, b, c : a + b + c
print(x(5, 6, 2))
def myfunc(n):
return lambda a : a * n
mydoubler = myfunc(2)
print(mydoubler(11))
def myfunc(n):
return lambda a : a * n
mydoubler = myfunc(2)
mytripler = myfunc(3)
print(mydoubler(11))
print(mytripler(11))
```
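Lambdas shine when passed to higher-order functions such as `sorted`, `map`, and `filter`:

```python
nums = [5, 2, 9, 1]
print(sorted(nums, key=lambda n: -n))       #[9, 5, 2, 1]
print(list(map(lambda n: n * 2, nums)))     #[10, 4, 18, 2]
print(list(filter(lambda n: n > 2, nums)))  #[5, 9]
```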
### Open a File on the Server
### Reading files
```
#f = open("demofile.txt", "r")
#print(f.read())
#f = open("D:\\myfiles\welcome.txt", "r")
#print(f.read())
#Read one line of the file
#f = open("demofile.txt", "r")
#print(f.readline())
#Loop through the file line by line
#f = open("demofile.txt", "r")
#for x in f:
# print(x)
#Close the file when you are finish with it
#f = open("demofile.txt", "r")
#print(f.readline())
#f.close()
```
### Writing files:
```
#Open the file "demofile2.txt" and append content to the file
#f = open("demofile2.txt", "a")
#f.write("Now the file has more content!")
#f.close()
#open and read the file after the appending:
#f = open("demofile2.txt", "r")
#print(f.read())
#Open the file "demofile3.txt" and overwrite the content
#f = open("demofile3.txt", "w")
#f.write("Woops! I have deleted the content!")
#f.close()
#open and read the file after the appending:
#f = open("demofile3.txt", "r")
#print(f.read())
#Create a file called "myfile.txt"
#f = open("myfile.txt", "x")
#Remove the file "demofile.txt"
#import os
#os.remove("demofile.txt")
#Check if file exists, then delete it:
#import os
#if os.path.exists("demofile.txt"):
# os.remove("demofile.txt")
#else:
# print("The file does not exist")
#Try to open and write to a file that is not writable:
#try:
# f = open("demofile.txt")
# f.write("Lorum Ipsum")
#except:
# print("Something went wrong when writing to the file")
#finally:
# f.close()
#Raise an error and stop the program if x is lower than 0:
#x = -1
#if x < 0:
# raise Exception("Sorry, no numbers below zero")
#Raise a TypeError if x is not an integer:
#x = "hello"
#if not type(x) is int:
# raise TypeError("Only integers are allowed")
```
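The commented error-handling snippets above can be exercised end-to-end; a runnable sketch of the `try`/`except`/`finally` flow:

```python
events = []
try:
    int("hello")  # raises ValueError: not a valid integer literal
except ValueError:
    events.append("handled")
finally:
    events.append("cleanup")  # runs whether or not an exception occurred

print(events)  # ['handled', 'cleanup']
```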
-----------------------------------------------------------------------------------------------------------------------------
# NumPy
```
import numpy as np
simple_list = [1,2,3]
np.array(simple_list)
list_of_lists = [[1,2,3], [4,5,6], [7,8,9]]
np.array(list_of_lists)
np.arange(0,10)
np.arange(0,21,5)
np.zeros(50)
np.ones((4,5))
np.linspace(0,20,10)
np.eye(5)
np.random.rand(3,2)
np.random.randint(5,20,10)
np.arange(30)
np.random.randint(0,100,20)
sample_array = np.arange(30)
sample_array.reshape(5,6)
rand_array = np.random.randint(0,100,20)
rand_array.argmin()
sample_array.shape
sample_array.reshape(1,30)
sample_array.reshape(30,1)
sample_array.dtype
a = np.random.randn(2,3)
a.T
sample_array = np.arange(10,21)
sample_array
sample_array[[2,5]]
sample_array[1:2] = 100
sample_array
sample_array = np.arange(10,21)
sample_array[0:7]
sample_array = np.arange(10,21)
sample_array
subset_sample_array = sample_array[0:7]
subset_sample_array
subset_sample_array[:]=1001
subset_sample_array
sample_array
copy_sample_array = sample_array.copy()
copy_sample_array
copy_sample_array[:]=10
copy_sample_array
sample_array
sample_matrix = np.array(([50,20,1,23], [24,23,21,32], [76,54,32,12], [98,6,4,3]))
sample_matrix
sample_matrix[0][3]
sample_matrix[0,3]
sample_matrix[3,:]
sample_matrix[3]
sample_matrix = np.array(([50,20,1,23,34], [24,23,21,32,34], [76,54,32,12,98], [98,6,4,3,67], [12,23,34,56,67]))
sample_matrix
sample_matrix[:,[1,3]]
sample_matrix[:,(3,1)]
sample_array=np.arange(1,31)
sample_array
mask = sample_array < 10  # avoid shadowing the built-in name `bool`
sample_array[mask]
sample_array[sample_array < 10]
a=11
sample_array[sample_array < a]
sample_array + sample_array
sample_array / sample_array
10/sample_array
sample_array + 1
np.var(sample_array)
array = np.random.randn(6,6)
array
np.std(array)
np.mean(array)
sports = np.array(['golf', 'cric', 'fball', 'cric', 'Cric', 'fooseball'])
np.unique(sports)
sample_array
simple_array = np.arange(0,20)
simple_array
np.save('sample_array', sample_array)
np.savez('2_arrays.npz', a=sample_array, b=simple_array)
np.load('sample_array.npy')
archive = np.load('2_arrays.npz')
archive['b']
np.savetxt('text_file.txt', sample_array,delimiter=',')
np.loadtxt('text_file.txt', delimiter=',')
data = {'prodID': ['101', '102', '103', '104', '104'],
'prodname': ['X', 'Y', 'Z', 'X', 'W'],
'profit': ['2738', '2727', '3497', '7347', '3743']}
```
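Boolean masks like `sample_array[sample_array < 10]` select elements; combined with `np.where`, the same kind of condition can replace elements instead. A short sketch:

```python
import numpy as np

arr = np.arange(1, 11)
# keep values below 5, zero out the rest
clipped = np.where(arr < 5, arr, 0)
print(clipped)  # [1 2 3 4 0 0 0 0 0 0]
```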
--------------------------------------------------------------------------------------------------------------------------
# Pandas
```
import pandas as pd
score = [10, 15, 20, 25]
pd.Series(data=score, index = ['a','b','c','d'])
demo_matrix = np.array(([13,35,74,48], [23,37,37,38], [73,39,93,39]))
demo_matrix
demo_matrix[2,3]
np.arange(0,22,6)
demo_array=np.arange(0,10)
demo_array
demo_array <3
demo_array[demo_array <6]
np.max(demo_array)
s1 = pd.Series(['a', 'b'])
s2 = pd.Series(['c', 'd'])
pd.concat([s1, s2])
```
#### Creating a Series using Pandas
You can convert a list, NumPy array, or dictionary to a Series in the following manner:
```
labels = ['w', 'x', 'y', 'z']
my_list = [10, 20, 30, 40]  # avoid shadowing the built-in names `list` and `dict`
array = np.array([10, 20, 30, 40])
my_dict = {'w': 10, 'x': 20, 'y': 30, 'z': 40}
pd.Series(data=my_list)
pd.Series(data=my_list, index=labels)
pd.Series(my_list, labels)
pd.Series(array)
pd.Series(array, labels)
pd.Series(my_dict)
```
#### Using an Index
We shall now see how indexing works in a Series using the following two example Series:
```
sports1 = pd.Series([1,2,3,4],index = ['Cricket', 'Football','Basketball', 'Golf'])
sports1
sports2 = pd.Series([1,2,5,4],index = ['Cricket', 'Football','Baseball', 'Golf'])
sports2
sports1 + sports2
```
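Note what happens in `sports1 + sports2` above: pandas aligns the two Series on their index labels, so labels present in only one Series (`Basketball`, `Baseball`) produce `NaN` in the result. A minimal sketch of that behaviour:

```python
import pandas as pd

s1 = pd.Series([1, 2], index=["Cricket", "Basketball"])
s2 = pd.Series([10, 20], index=["Cricket", "Baseball"])

total = s1 + s2
# only "Cricket" appears in both indexes, so it is the only non-NaN entry
print(total["Cricket"])    # 11.0
print(total.isna().sum())  # 2
```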
#### DataFrames
The DataFrame concept in Python is similar to that of the R programming language. A DataFrame is a collection of Series combined so that they share the same index.
```
from numpy.random import randn
np.random.seed(1)
dataframe = pd.DataFrame(randn(10,5),index='A B C D E F G H I J'.split(),columns='Score1 Score2 Score3 Score4 Score5'.split())
dataframe
```
#### Selection and Indexing
Ways in which we can grab data from a DataFrame
```
dataframe['Score3']
# Pass a list of column names in any order necessary
dataframe[['Score2','Score1']]
#DataFrame Columns are nothing but a Series each
type(dataframe['Score1'])
```
#### Adding a new column to the DataFrame
```
dataframe['Score6'] = dataframe['Score1'] + dataframe['Score2']
dataframe
```
#### Removing Columns from DataFrame
```
# Use axis=0 for dropping rows and axis=1 for dropping columns
dataframe.drop('Score6',axis=1)
# The column is not dropped from the DataFrame itself unless inplace=True is passed
dataframe
dataframe.drop('Score6',axis=1,inplace=True)
dataframe
```
#### Dropping rows using axis=0
```
# Likewise, the row is only dropped from the DataFrame itself if inplace=True is passed
dataframe.drop('A',axis=0)
```
#### Selecting Rows
```
dataframe.loc['F']
```
#### select based off of index position instead of label - use iloc instead of loc function
```
dataframe.iloc[2]
```
#### Selecting subset of rows and columns using loc function
```
dataframe.loc['A','Score1']
dataframe.loc[['A','B'],['Score1','Score2']]
```
#### Conditional Selection
Similar to NumPy, we can make conditional selections using Brackets
```
dataframe>0.5
dataframe[dataframe>0.5]
dataframe[dataframe['Score1']>0.5]
dataframe[dataframe['Score1']>0.5]['Score2']
dataframe[dataframe['Score1']>0.5][['Score2','Score3']]
```
#### Some more features of indexing includes
- resetting the index
- setting a different value
- index hierarchy
```
# Reset to default index value instead of A to J
dataframe.reset_index()
# Setting new index value
newindex = 'IND JP CAN GE IT PL FY IU RT IP'.split()
dataframe['Countries'] = newindex
dataframe
dataframe.set_index('Countries')
# Once again, ensure that you pass inplace=True
dataframe
dataframe.set_index('Countries',inplace=True)
dataframe
```
#### Missing Data
Methods to deal with missing data in Pandas
```
dataframe = pd.DataFrame({'Cricket':[1,2,np.nan,4,6,7,2,np.nan],
'Baseball':[5,np.nan,np.nan,5,7,2,4,5],
'Tennis':[1,2,3,4,5,6,7,8]})
dataframe
dataframe.dropna()
# Use axis=1 for dropping columns with nan values
dataframe.dropna(axis=1)
dataframe.dropna(thresh=2)
dataframe.fillna(value=0)
dataframe['Baseball'].fillna(value=dataframe['Baseball'].mean())
```
#### Groupby
The groupby method is used to group rows together and perform aggregate functions
```
dat = {'CustID':['1001','1001','1002','1002','1003','1003'],
'CustName':['UIPat','DatRob','Goog','Chrysler','Ford','GM'],
'Profitinlakhs':[2005,3245,1245,8765,5463,3547]}
dataframe = pd.DataFrame(dat)
dataframe
```
We can now use the .groupby() method to group rows together based on a column name.
For example let's group based on CustID. This will create a DataFrameGroupBy object:
```
dataframe.groupby('CustID') #This object can be saved as a variable
CustID_grouped = dataframe.groupby("CustID") #Now we can aggregate using the variable
CustID_grouped.mean()
```
#### groupby function for each aggregation
```
dataframe.groupby('CustID').mean()
CustID_grouped.std()
CustID_grouped.min()
CustID_grouped.max()
CustID_grouped.count()
CustID_grouped.describe()
CustID_grouped.describe().transpose()
CustID_grouped.describe().transpose()['1001']
```
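Several aggregations can also be computed in one call with `.agg()`; a sketch reusing the same `dat` dictionary from above:

```python
import pandas as pd

dat = {'CustID': ['1001', '1001', '1002', '1002', '1003', '1003'],
       'Profitinlakhs': [2005, 3245, 1245, 8765, 5463, 3547]}
dataframe = pd.DataFrame(dat)

# one mean and one max per customer, in a single pass
summary = dataframe.groupby('CustID')['Profitinlakhs'].agg(['mean', 'max'])
print(summary.loc['1001', 'mean'])  # (2005 + 3245) / 2 = 2625.0
```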
#### combining DataFrames together:
- Merging
- Joining
- Concatenating
```
dafa1 = pd.DataFrame({'CustID': ['101', '102', '103', '104'],
'Sales': [13456, 45321, 54385, 53212],
'Priority': ['CAT0', 'CAT1', 'CAT2', 'CAT3'],
'Prime': ['yes', 'no', 'no', 'yes']},
index=[0, 1, 2, 3])
dafa2 = pd.DataFrame({'CustID': ['101', '103', '104', '105'],
'Sales': [13456, 54385, 53212, 4534],
'Payback': ['CAT4', 'CAT5', 'CAT6', 'CAT7'],
'Imp': ['yes', 'no', 'no', 'no']},
index=[4, 5, 6, 7])
dafa3 = pd.DataFrame({'CustID': ['101', '104', '105', '106'],
'Sales': [13456, 53212, 4534, 3241],
'Pol': ['CAT8', 'CAT9', 'CAT10', 'CAT11'],
'Level': ['yes', 'no', 'no', 'yes']},
index=[8, 9, 10, 11])
```
#### Concatenation
Concatenation joins DataFrames along rows or columns (`axis=0` or `axis=1`).
We also need to ensure that the row or column labels line up across the DataFrames being concatenated.
```
pd.concat([dafa1,dafa2])
pd.concat([dafa1,dafa2,dafa3],axis=1)
```
#### Merging
Just like SQL tables, merge function in python allows us to merge dataframes
```
pd.merge(dafa1,dafa2,how='outer',on='CustID')
```
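As with SQL joins, the `how` argument (`'inner'`, `'outer'`, `'left'`, `'right'`) controls which keys survive the merge; a sketch with small illustrative frames:

```python
import pandas as pd

left = pd.DataFrame({'CustID': ['101', '102'], 'Sales': [13456, 45321]})
right = pd.DataFrame({'CustID': ['102', '103'], 'Payback': ['CAT4', 'CAT5']})

inner = pd.merge(left, right, how='inner', on='CustID')  # keys in both: only '102'
outer = pd.merge(left, right, how='outer', on='CustID')  # union of keys: 3 rows
print(len(inner), len(outer))  # 1 3
```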
#### Operations
Let us discuss some useful Operations using Pandas
```
dataframe = pd.DataFrame({'custID':[1,2,3,4],'SaleType':['big','small','medium','big'],'SalesCode':['121','131','141','151']})
dataframe.head()
```
Info on Unique Values
```
dataframe['SaleType'].unique()
dataframe['SaleType'].nunique()
dataframe['SaleType'].value_counts()
```
Selecting Data
```
#Select from DataFrame using criteria from multiple columns
newdataframe = dataframe[(dataframe['custID']!=3) & (dataframe['SaleType']=='big')]
newdataframe
```
Applying Functions
```
def profit(a):
    return a * 4
dataframe['custID'].apply(profit)
dataframe['SaleType'].apply(len)
dataframe['custID'].sum()
```
#### Permanently Removing a Column
```
dataframe
del dataframe['custID']
dataframe
```
#### Get column and index names
```
dataframe.columns
dataframe.index
```
#### Sorting and Ordering a DataFrame
```
dataframe.sort_values(by='SaleType') #inplace=False by default
```
#### Find Null Values or Check for Null Values
```
dataframe.isnull()
# Drop rows with NaN Values
dataframe.dropna()
```
#### Filling in NaN values with something else
```
dataframe = pd.DataFrame({'Sale1':[5,np.nan,10,np.nan],
'Sale2':[np.nan,121,np.nan,141],
'Sale3':['XUI','VYU','NMA','IUY']})
dataframe.head()
dataframe.fillna('Not nan')
```
### Data Input and Output
Reading DataFrames from external sources using pd.read functions
CSV Input
```
# dataframe = pd.read_csv('filename.csv')
```
CSV output
```
# If index=False, the index values are not written to the csv
# dataframe.to_csv('filename.csv',index=False)
```
Excel Input
```
# pd.read_excel('filename.xlsx',sheet_name='Data1')
```
Excel Output
```
# dataframe.to_excel('Consumer2.xlsx',sheet_name='Sheet1')
```
```
# ------------------------- #
# SET - UP #
# ------------------------- #
# ---- Requirements ----- #
#!pip install datasets
#!pip install sentencepiece
#!pip install transformers
#!pip install jsonlines
import csv
import datasets
from google.colab import drive
import huggingface_hub
import jsonlines
import json
import pandas as pd
import re
import sys
# ----- Check if GPU is connected ----- #
gpu_info = !nvidia-smi -L
gpu_info = "\n".join(gpu_info)
if gpu_info.find("failed") >= 0:
    print("Not connected to a GPU")
else:
    print(gpu_info)
# ----- Mounting Google Drive ----- #
drive.mount('/content/drive')
sys.path.append('/content/drive/MyDrive/CIS6930_final')
# ----- Importing TweetSum processing module ----- #
from tweet_sum_processor import TweetSumProcessor
# ----------------------------------------------------------------------
# ------------------------- #
# PRE-PROCESSING FUNCTIONS #
# ------------------------- #
def get_inputs(json_format):
    '''
    ---------------------
    Input: Dictionary containing the metadata for one tweet conversation
    Output: Concatenated string containing the content of one conversation.
    Notes:
    Special characters are inserted for links and transitions between speakers.
    Anonymized usernames are removed, as they do not add value to the text:
    they are usually just located at the beginning of the tweet by default
    (a feature of threads). Usernames containing the name of the business
    are retained for contextual purposes.
    ---------------------
    '''
    dialogue = json_format['dialog']['turns']
    full_text = []
    for i in dialogue:
        string = ' '.join(i['sentences'])
        full_text.append(string + " <BR>")
    conversation = ' '.join(full_text)
    by_word = conversation.split(' ')
    for i in range(len(by_word)):
        if "https" in by_word[i]:
            by_word[i] = "<LINK>"
        if "@" in by_word[i]:
            if by_word[i][1:].isnumeric():
                by_word[i] = ''
    text = ' '.join(by_word)
    text = re.sub(r'[^a-zA-Z0-9,!.?<> ]', '', text)
    text = re.sub(r'(\W)(?=\1)', '', text)
    return text
def get_summary(json_format):
    '''
    ---------------------
    Input: Dictionary containing the metadata for one tweet conversation
    Output: The text of a single human-generated summary for that one tweet
    ---------------------
    '''
    temp = json_format['summaries']['abstractive_summaries'][0]
    summary = ' '.join(temp)
    return summary
def prepare_data(file_name, processor):
    '''
    Processing the TweetSum dataset so that it can be read as a HuggingFace dataset.
    ---------------------
    Input: Path to a dataset file and the TweetSum processor
    Output: The inputs and summaries for the given data
    ---------------------
    '''
    inputs = []
    summaries = []
    with open('/content/drive/MyDrive/CIS6930_final/' + file_name) as f:
        dialog_with_summaries = processor.get_dialog_with_summaries(f.readlines())
    for dialog_with_summary in dialog_with_summaries:
        try:
            json_format = json.loads(dialog_with_summary.get_json())
            inputs.append(get_inputs(json_format))
            summaries.append(get_summary(json_format))
        except TypeError:
            pass
    return inputs, summaries
# ----------------------------------------------------------------------
# ------------------------- #
# "MAIN" #
# ------------------------- #
# --- "Fake" main function because this is a notebook and not a script :)
# ------ Process data
processor = TweetSumProcessor('/content/drive/MyDrive/CIS6930_final/kaggle_files/twcs.csv')
train_inputs, train_summs = prepare_data('final_train_tweetsum.jsonl', processor)
valid_inputs, valid_summs = prepare_data('final_valid_tweetsum.jsonl', processor)
test_inputs, test_summs = prepare_data('final_test_tweetsum.jsonl', processor)
# ----- Save as CSVs
train = pd.DataFrame({"inputs": train_inputs, "summaries": train_summs})
train.to_csv('/content/drive/MyDrive/CIS6930_final/tweetsum_train.csv', index=False)
valid = pd.DataFrame({"inputs": valid_inputs, "summaries": valid_summs})
valid.to_csv('/content/drive/MyDrive/CIS6930_final/tweetsum_valid.csv', index=False)
test = pd.DataFrame({"inputs": test_inputs, "summaries": test_summs})
test.to_csv('/content/drive/MyDrive/CIS6930_final/tweetsum_test.csv', index=False)
```
# Encoding of categorical variables
In this notebook, we will present typical ways of dealing with
**categorical variables** by encoding them, namely **ordinal encoding** and
**one-hot encoding**.
Let's first load the entire adult dataset containing both numerical and
categorical data.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
# drop the duplicated column `"education-num"` as stated in the first notebook
adult_census = adult_census.drop(columns="education-num")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name])
```
## Identify categorical variables
As we saw in the previous section, a numerical variable is a
quantity represented by a real or integer number. These variables can be
naturally handled by machine learning algorithms that are typically composed
of a sequence of arithmetic instructions such as additions and
multiplications.
In contrast, categorical variables have discrete values, typically
represented by string labels (but not only) taken from a finite list of
possible choices. For instance, the variable `native-country` in our dataset
is a categorical variable because it encodes the data using a finite list of
possible countries (along with the `?` symbol when this information is
missing):
```
data["native-country"].value_counts().sort_index()
```
How can we easily recognize categorical columns among the dataset? Part of
the answer lies in the columns' data type:
```
data.dtypes
```
If we look at the `"native-country"` column, we observe its data type is
`object`, meaning it contains string values.
## Select features based on their data type
In the previous notebook, we manually defined the numerical columns. We could
do a similar approach. Instead, we will use the scikit-learn helper function
`make_column_selector`, which allows us to select columns based on
their data type. We will illustrate how to use this helper.
```
from sklearn.compose import make_column_selector as selector
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(data)
categorical_columns
```
Here, we created the selector by passing the data type to include; we then
passed the input dataset to the selector object, which returned a list of
column names that have the requested data type. We can now filter out the
unwanted columns:
```
data_categorical = data[categorical_columns]
data_categorical.head()
print(f"The dataset is composed of {data_categorical.shape[1]} features")
```
In the remainder of this section, we will present different strategies to
encode categorical data into numerical data which can be used by a
machine-learning algorithm.
## Strategies to encode categories
### Encoding ordinal categories
The most intuitive strategy is to encode each category with a different
number. The `OrdinalEncoder` will transform the data in such manner.
We will start by encoding a single column to understand how the encoding
works.
```
from sklearn.preprocessing import OrdinalEncoder
education_column = data_categorical[["education"]]
encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
We see that each category in `"education"` has been replaced by a numeric
value. We could check the mapping between the categories and the numerical
values by checking the fitted attribute `categories_`.
```
encoder.categories_
```
Now, we can check the encoding applied on all categorical features.
```
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
print(
f"The dataset encoded contains {data_encoded.shape[1]} features")
```
We see that the categories have been encoded for each feature (column)
independently. We also note that the number of features before and after the
encoding is the same.
However, be careful when applying this encoding strategy:
using this integer representation leads downstream predictive models
to assume that the values are ordered (0 < 1 < 2 < 3... for instance).
By default, `OrdinalEncoder` uses a lexicographical strategy to map string
category labels to integers. This strategy is arbitrary and often
meaningless. For instance, suppose the dataset has a categorical variable
named `"size"` with categories such as "S", "M", "L", "XL". We would like the
integer representation to respect the meaning of the sizes by mapping them to
increasing integers such as `0, 1, 2, 3`.
However, the lexicographical strategy used by default would map the labels
"S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.
The `OrdinalEncoder` class accepts a `categories` constructor argument to
pass categories in the expected ordering explicitly. You can find more
information in the
[scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
if needed.
If a categorical variable does not carry any meaningful order information
then this encoding might be misleading to downstream statistical models and
you might consider using one-hot encoding instead (see below).
### Encoding nominal categories (without assuming any order)
`OneHotEncoder` is an alternative encoder that prevents the downstream
models to make a false assumption about the ordering of categories. For a
given feature, it will create as many new columns as there are possible
categories. For a given sample, the value of the column corresponding to the
category will be set to `1` while all the columns of the other categories
will be set to `0`.
We will start by encoding a single feature (e.g. `"education"`) to illustrate
how the encoding works.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p><tt class="docutils literal">sparse=False</tt> is used in the <tt class="docutils literal">OneHotEncoder</tt> for didactic purposes, namely
easier visualization of the data.</p>
<p class="last">Sparse matrices are efficient data structures when most of your matrix
elements are zero. They won't be covered in detail in this course. If you
want more details about them, you can look at
<a class="reference external" href="https://scipy-lectures.org/advanced/scipy_sparse/introduction.html#why-sparse-matrices">this</a>.</p>
</div>
We see that encoding a single feature will give a NumPy array full of zeros
and ones. We can get a better understanding using the associated feature
names resulting from the transformation.
```
feature_names = encoder.get_feature_names_out(input_features=["education"])
education_encoded = pd.DataFrame(education_encoded, columns=feature_names)
education_encoded
```
As we can see, each category (unique value) became a column; the encoding
returned, for each sample, a 1 to specify which category it belongs to.
Let's apply this encoding on the full dataset.
```
print(
f"The dataset is composed of {data_categorical.shape[1]} features")
data_categorical.head()
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
print(
f"The encoded dataset contains {data_encoded.shape[1]} features")
```
Let's wrap this NumPy array in a dataframe with informative column names as
provided by the encoder object:
```
columns_encoded = encoder.get_feature_names_out(data_categorical.columns)
pd.DataFrame(data_encoded, columns=columns_encoded).head()
```
Look at how the `"workclass"` variable of the first 3 records has been
encoded and compare this to the original string representation.
The number of features after the encoding is more than 10 times larger than
in the original data because some variables such as `occupation` and
`native-country` have many possible categories.
### Choosing an encoding strategy
Choosing an encoding strategy will depend on the underlying models and the
type of categories (i.e. ordinal vs. nominal).
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">In general <tt class="docutils literal">OneHotEncoder</tt> is the encoding strategy used when the
downstream models are <strong>linear models</strong> while <tt class="docutils literal">OrdinalEncoder</tt> is often a
good strategy with <strong>tree-based models</strong>.</p>
</div>
Using an `OrdinalEncoder` will output ordinal categories. This means
that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The
impact of violating this ordering assumption is really dependent on the
downstream models. Linear models will be impacted by misordered categories
while tree-based models will not.
You can still use an `OrdinalEncoder` with linear models but you need to be
sure that:
- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original
categories.
The **next exercise** highlights the issue of misusing `OrdinalEncoder` with
a linear model.
One-hot encoding categorical variables with high cardinality can cause
computational inefficiency in tree-based models. Because of this, it is not recommended
to use `OneHotEncoder` in such cases even if the original categories do not
have a given order. We will show this in the **final exercise** of this sequence.
## Evaluate our predictive pipeline
We can now integrate this encoder inside a machine learning pipeline like we
did with numerical data: let's train a linear classifier on the encoded data
and check the generalization performance of this machine learning pipeline using
cross-validation.
Before we create the pipeline, we have to linger on the `native-country`.
Let's recall some statistics regarding this column.
```
data["native-country"].value_counts()
```
We see that the `Holand-Netherlands` category is occurring rarely. This will
be a problem during cross-validation: if the sample ends up in the test set
during splitting then the classifier would not have seen the category during
training and will not be able to encode it.
In scikit-learn, there are two solutions to bypass this issue:
* list all the possible categories and provide it to the encoder via the
keyword argument `categories`;
* use the parameter `handle_unknown`.
Here, we will use the latter solution for simplicity.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p class="last">Be aware the <tt class="docutils literal">OrdinalEncoder</tt> exposes as well a parameter
<tt class="docutils literal">handle_unknown</tt>. It can be set to <tt class="docutils literal">use_encoded_value</tt> and by setting
<tt class="docutils literal">unknown_value</tt> to handle rare categories. You are going to use these
parameters in the next exercise.</p>
</div>
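As a hedged preview of those `OrdinalEncoder` parameters (the country values below are illustrative):

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
encoder.fit(np.array([["France"], ["Germany"], ["Spain"]]))

# a category never seen during fit maps to -1 instead of raising an error
codes = encoder.transform(np.array([["Germany"], ["Holand-Netherlands"]]))
print(codes.ravel())  # [ 1. -1.]
```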
We can now create our machine learning pipeline.
```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
model = make_pipeline(
OneHotEncoder(handle_unknown="ignore"), LogisticRegression(max_iter=500)
)
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">Here, we need to increase the maximum number of iterations to obtain a fully
converged <tt class="docutils literal">LogisticRegression</tt> and silence a <tt class="docutils literal">ConvergenceWarning</tt>. Contrary
to the numerical features, the one-hot encoded categorical features are all
on the same scale (values are 0 or 1), so they would not benefit from
scaling. In this case, increasing <tt class="docutils literal">max_iter</tt> is the right thing to do.</p>
</div>
Finally, we can check the model's generalization performance only using the
categorical columns.
```
from sklearn.model_selection import cross_validate
cv_results = cross_validate(model, data_categorical, target)
cv_results
scores = cv_results["test_score"]
print(f"The accuracy is: {scores.mean():.3f} +/- {scores.std():.3f}")
```
As you can see, this representation of the categorical variables is
slightly more predictive of the revenue than the numerical variables
that we used previously.
In this notebook we have:
* seen two common strategies for encoding categorical features: **ordinal
encoding** and **one-hot encoding**;
* used a **pipeline** to use a **one-hot encoder** before fitting a logistic
regression.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
import os,sys
opj = os.path.join
from copy import deepcopy
import pickle as pkl
sys.path.append('../../src')
sys.path.append('../../src/dsets/cosmology')
from dset import get_dataloader
from viz import viz_im_r, cshow, viz_filters
from sim_cosmology import p, load_dataloader_and_pretrained_model
from losses import get_loss_f
from train import Trainer, Validator
# wt modules
from wavelet_transform import Wavelet_Transform, Attributer, get_2dfilts, initialize_filters
from utils import tuple_L1Loss, tuple_L2Loss, thresh_attrs, viz_list
import pywt
import warnings
from itertools import product
import numpy as np
from pywt._c99_config import _have_c99_complex
from pywt._extensions._dwt import idwt_single
from pywt._extensions._swt import swt_max_level, swt as _swt, swt_axis as _swt_axis
from pywt._extensions._pywt import Wavelet, Modes, _check_dtype
from pywt._multidim import idwt2, idwtn
from pywt._utils import _as_wavelet, _wavelets_per_axis, _rescale_wavelet_filterbank
# get dataloader and model
(train_loader, test_loader), model = load_dataloader_and_pretrained_model(p, img_size=256)
torch.manual_seed(p.seed)
im = next(iter(test_loader))[0][0:1].numpy().squeeze()
db2 = pywt.Wavelet('db2')
start_level = 0
axes = (-2, -1)
trim_approx = False
norm = False
data = np.asarray(im)
# coefs = swtn(data, wavelet, level, start_level, axes, trim_approx, norm)
axes = [a + data.ndim if a < 0 else a for a in axes]
num_axes = len(axes)
print(axes)
wavelets = _wavelets_per_axis(db2, axes)
ret = []
level = 1
i = 0
coeffs = [('', data)]
axis = axes[0]
wavelet = wavelets[0]
new_coeffs = []
subband, x = coeffs[0]
cA, cD = _swt_axis(x, wavelet, level=1, start_level=i,
axis=axis)[0]
new_coeffs.extend([(subband + 'a', cA),
(subband + 'd', cD)])
coeffs = new_coeffs
coeffs
def swtn(data, wavelet, level, start_level=0, axes=None, trim_approx=False,
         norm=False):
    data = np.asarray(data)
    if axes is None:
        axes = range(data.ndim)
    axes = [a + data.ndim if a < 0 else a for a in axes]
    num_axes = len(axes)
    wavelets = _wavelets_per_axis(wavelet, axes)
    if norm:
        if not np.all([wav.orthogonal for wav in wavelets]):
            warnings.warn(
                "norm=True, but the wavelets used are not orthogonal: \n"
                "\tThe conditions for energy preservation are not satisfied.")
        wavelets = [_rescale_wavelet_filterbank(wav, 1 / np.sqrt(2))
                    for wav in wavelets]
    ret = []
    for i in range(start_level, start_level + level):
        coeffs = [('', data)]
        for axis, wavelet in zip(axes, wavelets):
            new_coeffs = []
            for subband, x in coeffs:
                cA, cD = _swt_axis(x, wavelet, level=1, start_level=i,
                                   axis=axis)[0]
                new_coeffs.extend([(subband + 'a', cA),
                                   (subband + 'd', cD)])
            coeffs = new_coeffs
        coeffs = dict(coeffs)
        ret.append(coeffs)
        # data for the next level is the approximation coeffs from this level
        data = coeffs['a' * num_axes]
        if trim_approx:
            coeffs.pop('a' * num_axes)
    if trim_approx:
        ret.append(data)
    ret.reverse()
    return ret
def swt2(data, wavelet, level, start_level=0, axes=(-2, -1),
         trim_approx=False, norm=False):
    axes = tuple(axes)
    data = np.asarray(data)
    if len(axes) != 2:
        raise ValueError("Expected 2 axes")
    if len(axes) != len(set(axes)):
        raise ValueError("The axes passed to swt2 must be unique.")
    if data.ndim < len(np.unique(axes)):
        raise ValueError("Input array has fewer dimensions than the specified "
                         "axes")
    coefs = swtn(data, wavelet, level, start_level, axes, trim_approx, norm)
    ret = []
    if trim_approx:
        ret.append(coefs[0])
        coefs = coefs[1:]
    for c in coefs:
        if trim_approx:
            ret.append((c['da'], c['ad'], c['dd']))
        else:
            ret.append((c['aa'], (c['da'], c['ad'], c['dd'])))
    return ret
```
This notebook is part of the orix documentation https://orix.readthedocs.io. Links to the documentation won’t work from the notebook.
## Visualizing point groups
Point group symmetry operations are shown here in the stereographic projection.
Vectors located on the upper (`z >= 0`) hemisphere are displayed as points (`o`), whereas vectors on the lower hemisphere are reprojected onto the upper hemisphere and shown as crosses (`+`) by default. For more information about plot formatting and visualization, see [Vector3d.scatter()](reference.rst#orix.vector.Vector3d.scatter).
More explanation of these figures is provided at http://xrayweb.chem.ou.edu/notes/symmetry.html#point.
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from orix import plot
from orix.quaternion import Rotation, symmetry
from orix.vector import Vector3d
plt.rcParams.update({"font.size": 15})
```
For example, the `O (432)` point group:
```
symmetry.O.plot()
```
The stereographic projection of all point groups is shown below:
```
# fmt: off
schoenflies = [
"C1", "Ci", # triclinic,
"C2x", "C2y", "C2z", "Csx", "Csy", "Csz", "C2h", # monoclinic
"D2", "C2v", "D2h", # orthorhombic
"C4", "S4", "C4h", "D4", "C4v", "D2d", "D4h", # tetragonal
"C3", "S6", "D3x", "D3y", "D3", "C3v", "D3d", "C6", # trigonal
"C3h", "C6h", "D6", "C6v", "D3h", "D6h", # hexagonal
"T", "Th", "O", "Td", "Oh", # cubic
]
# fmt: on
assert len(symmetry._groups) == len(schoenflies)
schoenflies = [s for s in schoenflies if not (s.endswith("x") or s.endswith("y"))]
assert len(schoenflies) == 32
orientation = Rotation.from_axes_angles((-1, 8, 1), np.deg2rad(65))
fig, ax = plt.subplots(
nrows=8, ncols=4, figsize=(10, 20), subplot_kw=dict(projection="stereographic")
)
ax = ax.ravel()
for i, s in enumerate(schoenflies):
sym = getattr(symmetry, s)
ori_sym = sym.outer(orientation)
v = ori_sym * Vector3d.zvector()
# reflection in the projection plane (x-y) is performed internally in
# Symmetry.plot() or when using the `reproject=True` argument for
# Vector3d.scatter()
v_reproject = Vector3d(v.data.copy())
v_reproject.z *= -1
# the Symmetry marker formatting for vectors on the upper and lower hemisphere
# can be set using `kwargs` and `reproject_scatter_kwargs`, respectively, for
# Symmetry.plot()
# vectors on the upper hemisphere are shown as open circles
ax[i].scatter(v, marker="o", fc="None", ec="k", s=150)
# vectors on the lower hemisphere are reprojected onto the upper hemisphere and
# shown as crosses
ax[i].scatter(v_reproject, marker="+", ec="C0", s=150)
ax[i].set_title(f"${s}$ $({sym.name})$")
ax[i].set_labels("a", "b", None)
fig.tight_layout()
```
## Loan EDA
```
import pandas as pd
import numpy as np
dtrain = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
```
## Data Cleaning
```
dtrain.head()
dtrain.shape
# Removing the commas from `Loan_Amount_Requested`
dtrain['Loan_Amount_Requested'] = dtrain.Loan_Amount_Requested.str.replace(',', '').astype(int)
test['Loan_Amount_Requested'] = test.Loan_Amount_Requested.str.replace(',', '').astype(int)
# Filling 0 for `Annual_Income` column
dtrain['Annual_Income'] = dtrain['Annual_Income'].fillna(0).astype(int)
test['Annual_Income'] = test['Annual_Income'].fillna(0).astype(int)
# Showing the different types of values for `Home_Owner`
dtrain['Home_Owner'] = dtrain['Home_Owner'].fillna('NA')
test['Home_Owner'] = test['Home_Owner'].fillna('NA')
print(dtrain.Home_Owner.value_counts())
```
We converted the ```NaN``` values in ```Home_Owner``` into the string ```NA```. Since we are going to calculate a hash value for each string, ```NaN``` itself cannot be hashed and needs a replacement. There are almost **25349** rows that were ```NaN```; dropping these would mean losing a lot of data. Hence we replaced them with the string "NA" and later convert every value in the column into a hash value.
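For illustration, a minimal sketch of the hashing idea used further below (the divisor `1e35` simply scales the 128-bit MD5 integer down to a convenient float range):

```python
import hashlib

def convert_to_hashes(elem):
    # Deterministically map a string (including the "NA" placeholder) to a float
    return round(int(hashlib.md5(elem.encode('utf-8')).hexdigest(), 16) / 1e35, 5)

# The same string always hashes to the same value, so "NA" becomes a
# consistent numeric category across the train and test sets.
print(convert_to_hashes('NA'))
print(convert_to_hashes('NA') == convert_to_hashes('NA'))  # True
```

Note this treats the hash as an arbitrary numeric label; unlike label encoding it carries no ordering information at all.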
```
# Filling 0 for missing `Months_Since_Deliquency`
dtrain['Months_Since_Deliquency'] = dtrain['Months_Since_Deliquency'].fillna(0)
test['Months_Since_Deliquency'] = test['Months_Since_Deliquency'].fillna(0)
dtrain.isnull().values.any()
dtrain['Length_Employed'] = dtrain['Length_Employed'].fillna('0 year')
test['Length_Employed'] = test['Length_Employed'].fillna('0 year')
def convert_length_employed(elem):
if elem[0] == '<':
return 0.5 # because mean of 0 to 1 is 0.5
elif str(elem[2]) == '+':
return 15.0 # because mean of 10 to 20 is 15
elif str(elem) == '0 year':
return 0.0
else:
return float(str(elem).split()[0])
dtrain['Length_Employed'] = dtrain['Length_Employed'].apply(convert_length_employed)
test['Length_Employed'] = test['Length_Employed'].apply(convert_length_employed)
dtrain['Loan_Grade'] = dtrain['Loan_Grade'].fillna('NA')
test['Loan_Grade'] = test['Loan_Grade'].fillna('NA')
dtrain.Loan_Grade.value_counts()
# dtrain[(dtrain.Annual_Income == 0) & (dtrain.Income_Verified == 'not verified')]
from sklearn.preprocessing import LabelEncoder
number = LabelEncoder()
dtrain['Loan_Grade'] = number.fit_transform(dtrain.Loan_Grade.astype('str'))
dtrain['Income_Verified'] = number.fit_transform(dtrain.Income_Verified.astype('str'))
dtrain['Area_Type'] = number.fit_transform(dtrain.Area_Type.astype('str'))
dtrain['Gender'] = number.fit_transform(dtrain.Gender.astype('str'))
test['Loan_Grade'] = number.fit_transform(test.Loan_Grade.astype('str'))
test['Income_Verified'] = number.fit_transform(test.Income_Verified.astype('str'))
test['Area_Type'] = number.fit_transform(test.Area_Type.astype('str'))
test['Gender'] = number.fit_transform(test.Gender.astype('str'))
dtrain.head()
# Converting `Purpose_Of_Loan` and `Home_Owner` into hash
import hashlib
def convert_to_hashes(elem):
return round(int(hashlib.md5(elem.encode('utf-8')).hexdigest(), 16) / 1e35, 5)
dtrain['Purpose_Of_Loan'] = dtrain['Purpose_Of_Loan'].apply(convert_to_hashes)
dtrain['Home_Owner'] = dtrain['Home_Owner'].apply(convert_to_hashes)
test['Purpose_Of_Loan'] = test['Purpose_Of_Loan'].apply(convert_to_hashes)
test['Home_Owner'] = test['Home_Owner'].apply(convert_to_hashes)
import xgboost as xgb
features = np.array(dtrain.iloc[:, 1:-1])
labels = np.array(dtrain.iloc[:, -1])
import operator
from xgboost import plot_importance
from matplotlib import pylab as plt
from collections import OrderedDict
def xgb_feature_importance(features, labels, num_rounds, fnames, plot=False):
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.1
param['max_depth'] = 6
param['silent'] = 1
param['num_class'] = 4
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.7
param['colsample_bytree'] = 0.7
param['seed'] = 42
nrounds = num_rounds
xgtrain = xgb.DMatrix(features, label=labels)
xgb_params = list(param.items())
gbdt = xgb.train(xgb_params, xgtrain, nrounds)
importance = sorted(gbdt.get_fscore().items(), key=operator.itemgetter(1), reverse=True)
if plot:
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
plt.figure()
df.plot(kind='bar', x='feature', y='fscore', legend=False, figsize=(8, 8))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance')
plt.show()
else:
# fnames = dtrain.columns.values[1:-1].tolist()
imp_features = OrderedDict()
imps = dict(importance)
for each in list(imps.keys()):
index = int(each.split('f')[-1])
imp_features[fnames[index]] = imps[each]
return imp_features
xgb_feature_importance(features, labels, num_rounds=1000, fnames=dtrain.columns.values[1:-1].tolist(), plot=True)
```
## Features Scores
```python
OrderedDict([('Debt_To_Income', 29377),
('Loan_Amount_Requested', 22157),
('Annual_Income', 21378),
('Total_Accounts', 16675),
('Months_Since_Deliquency', 13287),
('Number_Open_Accounts', 13016),
('Length_Employed', 9140),
('Loan_Grade', 7906),
('Purpose_Of_Loan', 7284),
('Inquiries_Last_6Mo', 5691),
('Home_Owner', 4946),
('Income_Verified', 4434),
('Area_Type', 3755),
('Gender', 2027)])
```
## Feature Creation
### 1. RatioOfLoanAndIncome
```
dtrain['RatioOfLoanAndIncome'] = dtrain.Loan_Amount_Requested / (dtrain.Annual_Income + 1)
test['RatioOfLoanAndIncome'] = test.Loan_Amount_Requested / (test.Annual_Income + 1)
```
### 2. RatioOfOpenAccToTotalAcc
```
dtrain['RatioOfOpenAccToTotalAcc'] = dtrain.Number_Open_Accounts / (dtrain.Total_Accounts + 0.001)
test['RatioOfOpenAccToTotalAcc'] = test.Number_Open_Accounts / (test.Total_Accounts + 0.001)
dtrain.drop(['Interest_Rate', 'Loan_Amount_Requested',
'Annual_Income', 'Number_Open_Accounts', 'Total_Accounts'], inplace=True, axis=1)
dtrain['Interest_Rate'] = labels
test.drop(['Loan_Amount_Requested',
'Annual_Income', 'Number_Open_Accounts', 'Total_Accounts'], inplace=True, axis=1)
features = np.array(dtrain.iloc[:, 1:-1])
testFeatures = np.array(test.iloc[:, 1:])
xgb_feature_importance(features, labels, num_rounds=1000, fnames=dtrain.columns.values[1:-1].tolist(), plot=True)
```
## Feature Score with new features
```python
OrderedDict([('Debt_To_Income', 23402),
('RatioOfLoanAndIncome', 20254),
('RatioOfOpenAccToTotalAcc', 17868),
('Loan_Amount_Requested', 16661),
('Annual_Income', 14228),
('Total_Accounts', 11892),
('Months_Since_Deliquency', 10293),
('Number_Open_Accounts', 8553),
('Length_Employed', 7614),
('Loan_Grade', 6938),
('Purpose_Of_Loan', 6013),
('Inquiries_Last_6Mo', 4284),
('Home_Owner', 3760),
('Income_Verified', 3451),
('Area_Type', 2892),
('Gender', 1708)])
```
```
from sklearn.model_selection import train_test_split # for splitting the training and testing set
from sklearn.decomposition import PCA # For possible dimensionality reduction
from sklearn.feature_selection import SelectKBest # For feature selection
from sklearn.model_selection import StratifiedShuffleSplit # For unbalanced class cross-validation
from sklearn.preprocessing import MaxAbsScaler, StandardScaler, MinMaxScaler # Different scalers
from sklearn.pipeline import Pipeline # For putting tasks in a Pipeline
from sklearn.model_selection import GridSearchCV # For fine tuning the classifiers
from sklearn.naive_bayes import BernoulliNB # For Naive Bayes
from sklearn.neighbors import NearestCentroid # For nearest-centroid (modified KNN) classification
from sklearn.svm import SVC # For SVM Classifier
from sklearn.tree import DecisionTreeClassifier # For Decision Tree Classifier
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.4, random_state=42)
from sklearn.metrics import accuracy_score
```
### Naive Bayes
```
clf = BernoulliNB()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions, normalize=True))
X_test.shape
```
### XGBoost Classification
```
def xgb_classification(X_train, X_test, y_train, y_test, num_rounds, fnames='*'):
if fnames == '*':
# All the features are being used
pass
else:
# Feature selection is being performed
fnames.append('Interest_Rate')
dataset = dtrain[fnames]
features = np.array(dataset.iloc[:, 0:-1])
labels = np.array(dataset.iloc[:, -1])
X_train, X_test, y_train, y_test = train_test_split(features, labels,
test_size=0.4,
random_state=42)
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.01
param['max_depth'] = 6
param['silent'] = 1
param['num_class'] = 4
param['nthread'] = -1
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.8
param['colsample_bytree'] = 0.5
param['seed'] = 42
xg_train = xgb.DMatrix(X_train, label=y_train)
xg_test = xgb.DMatrix(X_test, label=y_test)
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
bst = xgb.train(param, xg_train, num_rounds, watchlist)
pred = bst.predict(xg_test)
return accuracy_score(y_test, pred, normalize=True)
# 0.70447629480859353 <-- Boosting Rounds 1000
# 0.70506968535086123 <--- Boosting Rounds 862
# 0.70520662162984604 <-- Boosting Rounds 846
xgb_classification(X_train, X_test, y_train, y_test, num_rounds=1000, fnames='*')
```
## Submission
```
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.01
param['max_depth'] = 10
param['silent'] = 1
param['num_class'] = 4
param['nthread'] = -1
param['eval_metric'] = "merror"
param['min_child_weight'] = 1
param['subsample'] = 0.8
param['colsample_bytree'] = 0.5
param['seed'] = 42
xg_train = xgb.DMatrix(features,label=labels)
xg_test = xgb.DMatrix(testFeatures)
watchlist = [(xg_train, 'train')]
gbm = xgb.train(param,xg_train, 846, watchlist)
test_pred = gbm.predict(xg_test)
test['Interest_Rate'] = test_pred
test['Interest_Rate'] = test.Interest_Rate.astype(int)
test[['Loan_ID', 'Interest_Rate']].to_csv('submission5.csv', index=False)
pd.read_csv('submission5.csv')
# test[['Loan_ID', 'Interest_Rate']]
testFeatures.shape
features.shape
```
# Merging Databases
```
import pandas as pd
help(pd.merge)
df = pd.DataFrame([{'Name': 'Chris', 'Item Purchased': 'Sponge', 'Cost': 22.50},
{'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50},
{'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': 5.00}],
index=['Store 1', 'Store 1', 'Store 2'])
df
df['Date'] = ['December 1', 'January 1', 'mid-May'] # adding values using the square bracket index operator. index is shared
df
df['Delivered'] = True # adding a scalar value which becomes the default value for the entire column
df
df['Feedback'] = ['Positive', None, 'Negative'] # when we need to input a few values, we must supply the None values ourselves
df
# if rows have a unique index, we can assign a new column identifier to the series
adf = df.reset_index()
# think of it as assigning a new 'Date' column from a Series; values align on index, so row 0 gets 'December 1' and row 1 (missing from the Series) becomes NaN
adf['Date'] = pd.Series({0: 'December 1', 2: 'mid-May'})
adf
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
{'Name': 'Sally', 'Role': 'Course liasion'},
{'Name': 'James', 'Role': 'Grader'}])
staff_df = staff_df.set_index('Name')
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
{'Name': 'Mike', 'School': 'Law'},
{'Name': 'Sally', 'School': 'Engineering'}])
student_df = student_df.set_index('Name')
print(staff_df.head())
print()
print(student_df.head())
# call merge passing in 'left' df and 'right' df, using an outer join. We also tell merge that we want to use the left and right indices as the joining key(s). In this case, the indices are the same. 'outer' will sort keys lexicographically.
pd.merge(staff_df, student_df, how='outer', left_index=True, right_index=True)
# inner join gives us a dataframe with just those students who are also staff
pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True)
# left gives us a list of all staff regardless of whether they are students or not. if they are students, we also get their student details
pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True)
# right gives us a list of all students
pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True)
# pd.merge can also use column names to join on
staff_df = staff_df.reset_index()
student_df = student_df.reset_index()
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
# in cases where there is conflicting data, pandas preserves the data and appends either _x or _y depending on which dataframe it originated from
# _x - left dataframe
# _y - right dataframe
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'},
{'Name': 'Sally', 'Role': 'Course liasion', 'Location': 'Washington Avenue'},
{'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}])
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'},
{'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'},
{'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}])
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
# merge can be passed the suffixes parameter which takes a tuple to control how _x and _y are represented
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name', suffixes=('_Staff', '_Student'))
staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liasion'},
{'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}])
student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'},
{'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}])
print(staff_df)
print()
print(student_df)
# dataframes can also be merged on multiple column names and multi-indexes. The columns are passed as a list to the left_on/right_on parameters. In this case, 'James Hammond' and 'James Wilde' will not satisfy the inner match condition.
pd.merge(staff_df, student_df, how='inner', left_on=['First Name', 'Last Name'], right_on=['First Name', 'Last Name'])
```
# Idiomatic Pandas: Making Code Pandorable
```
import pandas as pd, numpy as np, os
os.chdir('/Users/riro/Documents/GitHub/umich_ds/intro to data science/files')
df = pd.read_csv('census.csv')
df
# Pandorable code using method chaining
(df.where(df['SUMLEV'] == 50)
   .dropna()
   .set_index(['STNAME', 'CTYNAME'])
   .rename(columns={'ESTIMATESBASE2010': 'Estimates Base 2010'}))
# Unpandorable code
df = df[df['SUMLEV'] == 50]
df.set_index(['STNAME', 'CTYNAME'], inplace = True)
df.rename(columns={'ESTIMATESBASE2010' : 'Estimates Base 2010'})
# Using apply(): applies a function along an axis of the DataFrame. axis: '0' apply function to each column, '1' for each row
# min_max takes in a particular row of data, finds a min and max value and returns a new row of data
def min_max(row):
# create a small slice by projecting the population columns
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
# create a new series with label values
return pd.Series({'min': np.min(data), 'max': np.max(data)})
# call apply on the dataframe
df.apply(min_max, axis=1)
def min_max(row):
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
row['max'] = np.max(data)
row['min'] = np.min(data)
return row
df.apply(min_max, axis = 1)
# Using lambda functions
rows = ['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df.apply(lambda x: np.max(x[rows]), axis=1)
help(df.apply)
```
# Group by
```
# Looking at our backpacking equipment DataFrame, suppose we are interested in finding our total weight for each category. Use groupby to group the dataframe, and apply a function to calculate the total weight (Weight x Quantity) by category.
import pandas as pd, numpy as np
df = pd.DataFrame({'Item': ['Pack', 'Tent',
'Sleeping Pad', 'Sleeping Bag', 'Water Bottles'],
'Category': ['Pack', 'Shelter', 'Sleep', 'Sleep', 'Kitchen'],
'Quantity': [1, 1, 1, 1, 2],
'Weight (oz.)' : [33., 80., 27., 20., 35.]})
df = df.set_index('Item')
df
df.groupby(['Category']).apply(lambda x: sum(x['Quantity'] * x['Weight (oz.)']))
import pandas as pd
import numpy as np
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
%%timeit -n 5
for state in df['STNAME'].unique():
avg = np.average(df.where(df['STNAME'] == state).dropna()['CENSUS2010POP'])
print('Counties in state ' + state + ' have an average population of ' + str(avg) )
%%timeit -n 5
# Using groupby is much quicker than iterating over the dataframe. groupby takes a column name (or list of names) as an argument and splits the dataframe into groups based on those columns
for group, frame in df.groupby('STNAME'):
avg = np.average(frame['CENSUS2010POP'])
print('Counties in state ' + group + ' have an average population of ' + str(avg))
df.head()
# groupby can also be passed functions that are used for segmenting data
df = df.set_index('STNAME')
def fun(item):
if item[0] < 'M':
return 0
if item[0] > 'Q':
return 1
else:
return 2
# lightweight hashing which distributes tasks across multiple workers, eg. cores in a processor, nodes in a supercomputer or disks in a database
for group, frame in df.groupby(fun):
print('There are ' + str(len(frame)) + ' records in group ' + str(group) + ' for processing.')
df = pd.read_csv('census.csv')
df = df[df['SUMLEV'] == 50]
# groupby's agg method applies a function to the column(s) of each group and returns the result.
df.groupby('STNAME').agg({'CENSUS2010POP': np.average})
print(type(df.groupby(level=0)['POPESTIMATE2010', 'POPESTIMATE2011']))
print(type(df.groupby(level=0)['POPESTIMATE2010']))
```
Using groupby.agg() with a dictionary to rename the output columns has been deprecated (https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.20.0.html#whatsnew-0200-api-breaking-deprecate-group-agg-dict)
```
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg(np.sum, np.average).rename(columns={'POPESTIMATE2010': 'avg'}))
df.groupby('STNAME').agg({'CENSUS2010POP' : [np.average , np.sum]})
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg({'POPESTIMATE2010': np.average, 'POPESTIMATE2011': np.sum}))
```
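A hedged sketch of the replacement pattern, using named aggregation (available in pandas >= 0.25), shown on a toy frame rather than the census data:

```python
import pandas as pd

toy = pd.DataFrame({'STNAME': ['Alabama', 'Alabama', 'Alaska'],
                    'CENSUS2010POP': [10000, 20000, 30000]})

# Named aggregation: each keyword becomes an output column name,
# replacing the deprecated dict-renaming form of agg()
out = toy.groupby('STNAME').agg(avg=('CENSUS2010POP', 'mean'),
                                total=('CENSUS2010POP', 'sum'))
print(out)
```

Each keyword argument pairs a source column with an aggregation function, so no follow-up `rename` is needed.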
# Data Scales
## Scales
As a data scientist, there are four scales worth knowing:
* Ratio scale
    * units are equally spaced
    * mathematical operations of +-/* are all valid
    * e.g. height and weight
* Interval scale
    * units are equally spaced, but there is no true zero
    * e.g. temperature in Celsius: 0°C is a valid measurement, not an absence of temperature
* Ordinal scale
    * the order of the units is important, but they are not evenly spaced
    * letter grades such as A, A+ are a good example
* Nominal scale
    * categories of data, but the categories have no order with respect to one another
    * categories where there are only two possible values are binary
    * e.g. teams of a sport
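A quick numeric illustration of the interval/ratio distinction above (the Celsius values are chosen arbitrarily for the example):

```python
# Interval scale: differences are meaningful, ratios are not.
c1, c2 = 10.0, 20.0
print(c2 - c1)            # a 10-degree difference is meaningful
# But 20 C is not "twice as hot" as 10 C; on the Kelvin (ratio) scale:
k1, k2 = c1 + 273.15, c2 + 273.15
print(round(k2 / k1, 4))  # ~1.0353, far from 2
```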
Nominal data in pandas is called categorical data. pandas has a built-in type for categorical data and we can set a column of our data to categorical by using the astype method. astype changes the underlying type of the data, in this case to category data.
```
df = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'],
index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor'])
df.rename(columns={0:'Grades'}, inplace=True)
df
# set the data type as category using astype
df['Grades'].astype('category').head()
# astype categories parameter deprecated, use pd.api.types.CategoricalDtype instead
grades = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'],
index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor']).astype(pd.api.types.CategoricalDtype(categories=['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'], ordered = True))
grades.rename(columns={0: 'Grades'}, inplace = True)
grades.head()
grades > 'C'
s = pd.Series(['Low', 'Low', 'High', 'Medium', 'Low', 'High', 'Low'])
s.astype(pd.api.types.CategoricalDtype(categories = ['Low', 'Medium', 'High'], ordered = True))
# get_dummies
s = pd.Series(list('abca'))
s
pd.get_dummies(s)
help(pd.get_dummies)
```
# Cut
```
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df
df = df.set_index('STNAME').groupby(level=0)['CENSUS2010POP'].agg([np.average])
# Using cut
df.rename(columns={'average' : 'avg_pop'}, inplace = True)
pd.cut(df['avg_pop'], 10)
s = pd.Series([168, 180, 174, 190, 170, 185, 179, 181, 175, 169, 182, 177, 180, 171])
pd.cut(s, 3, labels = ['Small', 'Medium', 'Large'])
```
# Pivot Tables
```
df = pd.read_csv('cars.csv')
df.head()
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=np.mean)
# aggfunc can also be passed a list of functions, and pandas will provide the result using hierarchical column names
# we also pass margins=True, which adds an 'All' row/column showing the overall mean and min values for each year and across all makes
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=[np.mean, np.min], margins = True)
import pandas as pd, os
df = pd.read_csv('mpg.csv')
df.head()
df
# see the cty and hwy mpg values for each car made by each manufacturer in every year in the dataset
df.pivot_table(values = ['cty', 'hwy'], index = ['manufacturer', 'class', 'model'], columns = 'year')
```
# Date Functionality in pandas
```
import pandas as pd
import numpy as np
```
### Timestamp
```
pd.Timestamp('07/20/2020 10:05 AM')
```
### Period
```
pd.Period('1/2020')
pd.Period('3/1/2020')
```
### DatetimeIndex
```
t1 = pd.Series(list('abc'), [pd.Timestamp('2020-07-18'), pd.Timestamp('2020-07-19'), pd.Timestamp('2020-07-20') ])
t1
type(t1.index)
```
### Period Index
```
t2 = pd.Series(list('def'), [pd.Period('2020-04'), pd.Period('2020-05'), pd.Period('2020-06') ])
t2
type(t2.index)
```
### Converting to Datetime
```
d1 = ['2 June 2013', 'Aug 29, 2014', '2015-06-26', '7/12/16']
t3 = pd.DataFrame(np.random.randint(10, 100, (4, 2)), index = d1, columns=list('ab'))
t3
t3.index = pd.to_datetime(t3.index)
t3
pd.to_datetime('4.7.12', dayfirst=True)
```
### Timedeltas
```
pd.Timestamp('9/3/2016') - pd.Timestamp('9/1/2016')
pd.Timestamp('9/2/2016 8:10AM') + pd.Timedelta('12D 3H')
```
### Working with dates in a dataframe
```
dates = pd.date_range('10-01-2016', periods = 9, freq='2W-SUN')
dates
df = pd.DataFrame({'Count 1': 100 + np.random.randint(-5, 10, 9).cumsum(),
'Count 2': 120 + np.random.randint(-5, 10, 9)}, index=dates)
df
# cumsum(): returns the cumulative sum of elements along a given axis
a = np.random.randint(1, 10, 9)
print(a)
b = a.cumsum()
print(b)
# weekday_name deprecated, use day_name() instead
df.index.day_name()
# diff(): calculates the difference between each DataFrame element and another element in the DataFrame (defaults to the element in the previous row of the same column)
df.diff()
# resample(): method for frequency conversion and resampling of a time series. Object must have a datetime index.
# DateOffset objects - https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects
df.resample('M').mean()
# Using partial string indexing
df['2017']
df['2016-12']
df['2016-12':]
df.asfreq('W', method='ffill')
import matplotlib.pyplot as plt
%matplotlib inline
df.plot()
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
base_path = './data/ML-1M/'
ratings = pd.read_csv(base_path+'ratings.csv', sep='\t',
encoding='latin-1',
usecols=['user_id', 'movie_id', 'rating'])
users = pd.read_csv(base_path+'users.csv', sep='\t',
encoding='latin-1',
usecols=['user_id', 'gender', 'zipcode',
'age_desc', 'occ_desc'])
movies = pd.read_csv(base_path+'movies.csv', sep='\t',
encoding='latin-1',
usecols=['movie_id', 'title', 'genres'])
ratings.head()
users.head()
movies.head()
```
Plot the wordcloud
```
%matplotlib inline
import wordcloud
from wordcloud import WordCloud, STOPWORDS
# Create a wordcloud of the movie titles
movies['title'] = movies['title'].fillna("").astype('str')
title_corpus = ' '.join(movies['title'])
title_wordcloud = WordCloud(stopwords=STOPWORDS, background_color='black', height=2000, width=4000).generate(title_corpus)
# Plot the wordcloud
plt.figure(figsize=(16,8))
plt.imshow(title_wordcloud)
plt.axis('off')
plt.show()
```
Genre-based recommendations
```
# Import libraries
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
# Break up the big genre string into a string array
movies['genres'] = movies['genres'].str.split('|')
# Convert genres to string value
movies['genres'] = movies['genres'].fillna("").astype('str')
# Movie feature vector
tf = TfidfVectorizer(analyzer='word', ngram_range=(1, 2), min_df=0,
stop_words='english')
tfidf_matrix = tf.fit_transform(movies['genres'])
# Movie similarity matrix
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
# 1-d array of movie titles
titles = movies['title']
indices = pd.Series(movies.index, index=movies['title'])
# Function to return top-k most similar movies
def genre_recommendations(title, topk=20):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:topk+1]
movie_indices = [i[0] for i in sim_scores]
return titles.iloc[movie_indices].reset_index(drop=True)
# Checkout the results
# genre_recommendations('Good Will Hunting (1997)')
genre_recommendations('Toy Story (1995)')
# genre_recommendations('Saving Private Ryan (1998)')
```
Simple collaborative filtering
```
from math import sqrt
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import pairwise_distances
# Fill NaN values in user_id and movie_id column with 0
ratings['user_id'] = ratings['user_id'].fillna(0)
ratings['movie_id'] = ratings['movie_id'].fillna(0)
# Replace NaN values in rating column with average of all values
ratings['rating'] = ratings['rating'].fillna(ratings['rating'].mean())
# Randomly sample 1% for faster processing
small_data = ratings.sample(frac=0.01)
# Split into train and test
train_data, test_data = train_test_split(small_data, test_size=0.2)
# Create two user-item matrices, one for training and another for testing
train_data_matrix = train_data.pivot(index='user_id', columns='movie_id', values='rating').fillna(0)
test_data_matrix = test_data.pivot(index='user_id', columns='movie_id', values='rating').fillna(0)
# Create user similarity using Pearson correlation
user_correlation = 1 - pairwise_distances(train_data_matrix, metric='correlation')
user_correlation[np.isnan(user_correlation)] = 0
# Create item similarity using Pearson correlation
item_correlation = 1 - pairwise_distances(train_data_matrix.T, metric='correlation')
item_correlation[np.isnan(item_correlation)] = 0
# Function to predict ratings
def predict(ratings, similarity, type='user'):
if type == 'user':
mean_user_rating = ratings.mean(axis=1)
# Use np.newaxis so that mean_user_rating has same format as ratings
ratings_diff = (ratings - mean_user_rating.values[:, np.newaxis])
pred = mean_user_rating.values[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
elif type == 'item':
pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
return pred
# Function to calculate RMSE
def rmse(pred, actual):
    # Evaluate only on nonzero (observed) ratings, ignoring the zero fill values.
pred = pd.DataFrame(pred).values
actual = actual.values
pred = pred[actual.nonzero()].flatten()
actual = actual[actual.nonzero()].flatten()
return sqrt(mean_squared_error(pred, actual))
# Predict ratings on the training data with both similarity score
user_prediction = predict(train_data_matrix, user_correlation, type='user')
item_prediction = predict(train_data_matrix, item_correlation, type='item')
# RMSE on the train data
print('User-based CF RMSE Train: ' + str(rmse(user_prediction, train_data_matrix)))
print('Item-based CF RMSE Train: ' + str(rmse(item_prediction, train_data_matrix)))
# RMSE on the test data
print('User-based CF RMSE Test: ' + str(rmse(user_prediction, test_data_matrix)))
print('Item-based CF RMSE Test: ' + str(rmse(item_prediction, test_data_matrix)))
```
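The user-based branch of `predict` above implements the standard mean-centered weighted average; written out for reference:

$$
\hat{r}_{u,i} = \bar{r}_u + \frac{\sum_{v} \operatorname{sim}(u,v)\,\left(r_{v,i} - \bar{r}_v\right)}{\sum_{v} \left|\operatorname{sim}(u,v)\right|}
$$

where $\bar{r}_u$ is user $u$'s mean rating and $\operatorname{sim}(u,v)$ is the Pearson similarity computed from `pairwise_distances`.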
SVD matrix factorization based collaborative filtering
```
!pip install surprise
from scipy.sparse.linalg import svds
# Create the interaction matrix
interactions = ratings.pivot(index='user_id', columns='movie_id', values='rating').fillna(0)
print(pd.DataFrame(interactions.values).head())
# Normalize the data by mean-centering each user's ratings
user_ratings_mean = np.mean(interactions.values, axis=1)
interactions_normalized = interactions.values - user_ratings_mean.reshape(-1, 1)
print(pd.DataFrame(interactions_normalized).head())
# Calculating SVD
U, sigma, Vt = svds(interactions_normalized, k=50)
sigma = np.diag(sigma)
# Make predictions from the decomposed matrix by matrix multiply U, Σ, and VT
# back to get the rank k=50 approximation of A.
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds = pd.DataFrame(all_user_predicted_ratings, columns=interactions.columns)
print(preds.head().values)
# Get the movie with the highest predicted rating
def recommend_movies(predictions, userID, movies, original_ratings, num_recommendations):
# Get and sort the user's predictions
user_row_number = userID - 1 # User ID starts at 1, not 0
sorted_user_predictions = preds.iloc[user_row_number].sort_values(ascending=False) # User ID starts at 1
# Get the user's data and merge in the movie information.
user_data = original_ratings[original_ratings.user_id == (userID)]
user_full = (user_data.merge(movies, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies[~movies['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
# Let's try to recommend 20 movies for user with ID 1310
already_rated, predictions = recommend_movies(preds, 1310, movies, ratings, 20)
# Top 20 movies that User 1310 has rated
print(already_rated.head(20))
# Top 20 movies that User 1310 hopefully will enjoy
print(predictions)
from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate
# Load Reader library
reader = Reader()
# Load ratings dataset with Dataset library
data = Dataset.load_from_df(ratings[['user_id', 'movie_id', 'rating']], reader)
# Use the SVD algorithm
svd = SVD()
# Compute the RMSE of the SVD algorithm
cross_validate(svd, data, cv=5, measures=['RMSE'], verbose=True)
# Train on the dataset and arrive at predictions
trainset = data.build_full_trainset()
svd.fit(trainset)
# Let's pick again user with ID 1310 and check the ratings he has given
print(ratings[ratings['user_id'] == 1310])
# Now let's use SVD to predict the rating that 1310 will give to movie 1994
print(svd.predict(1310, 1994))
```
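The mean-centering and truncated-SVD reconstruction used earlier in this notebook can be demonstrated on a tiny ratings matrix (pure NumPy is enough at this scale; no `scipy.sparse.linalg.svds` needed):

```python
import numpy as np

# Tiny user-item ratings matrix (3 users x 4 items)
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])

# Mean-center each user's ratings, as done above
user_means = R.mean(axis=1)
R_centered = R - user_means.reshape(-1, 1)

# SVD, then keep only the top-k singular values (rank-k approximation)
U, s, Vt = np.linalg.svd(R_centered, full_matrices=False)
k = 2
R_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :] + user_means.reshape(-1, 1)

# Keeping all singular values recovers the original matrix exactly
R_full = U @ np.diag(s) @ Vt + user_means.reshape(-1, 1)
print(np.allclose(R_full, R))  # True
```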
# GPU configuration
```
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
# ID of the GPU to use, from 0 up to N-1; -1 means run on the CPU
os.environ["CUDA_VISIBLE_DEVICES"]="1";
```
# Library imports
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
from IPython.display import display, clear_output
from ipywidgets import interact, IntSlider
import h5py
import numpy as np
%matplotlib inline
import sys
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from LTC import *
sys.path.append('../')
from Datasets_utils.DatasetsLoader import VideoDataGenerator
```
# TensorFlow and Keras configuration
```
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
tf.debugging.set_log_device_placement(False)
# Check that we are running in eager mode
tf.executing_eagerly()
```
# Data loading
```
root_path = "/home/jefelitman/DataSets/ut/videos_set_1"
root_path
batch_size = 12
original_size = [171,128]
size = [112,112]
frames = 16
canales = 3
def custom_steps_temporal(frames):
mitad = len(frames)//2
paso = len(frames)//16
videos = []
for i in range(paso):
indices = range(i, len(frames),paso)
videos.append([frames[j] for j in indices[:16]])
return videos
def custom_temp_crop_unique(frames):
mitad = len(frames)//2
paso = len(frames)//16
indices = sorted(list(range(mitad, -1,-paso)) + list(range(mitad+paso, len(frames),paso)))
indices = indices[len(indices)//2 - 8 : len(indices)//2 + 8]
return [[frames[i] for i in indices]]
def half_video_temporal(frames):
mitad = len(frames)//2
return [frames[mitad-8*2:mitad+8*2]]
def video_transf(video):
escalador = MinMaxScaler()
new_video = video.reshape((video.shape[0]*video.shape[1]*video.shape[2]*video.shape[3],1))
new_video = escalador.fit_transform(new_video)
return new_video.reshape((video.shape[0],video.shape[1],video.shape[2],video.shape[3]))
def flip_vertical(volume):
return np.flip(volume, (0, 2))[::-1]
def corner_frame_crop(original_width, original_height):
x = original_width-112
y = original_height-112
return [[x//2, original_width - x//2 -1, y//2, original_height-y//2],
[x//2, original_width - x//2 -1, y//2, original_height-y//2]
]
dataset = VideoDataGenerator(directory_path = root_path,
table_paths = None,
batch_size = batch_size,
original_frame_size = original_size,
frame_size=size,
video_frames = frames,
temporal_crop = ("custom", custom_steps_temporal),
video_transformation = [("augmented",flip_vertical)],
frame_crop = ("custom", corner_frame_crop),
shuffle = True,
conserve_original = False)
```
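As a sanity check on the temporal-crop logic: `custom_steps_temporal` above splits a clip into strided 16-frame sub-clips. The same loop on a dummy 32-frame video makes the behavior easy to verify:

```python
# Split a 32-frame dummy clip into strided 16-frame sub-clips,
# mirroring the loop in custom_steps_temporal above
frames = list(range(32))
paso = len(frames) // 16           # stride between sampled frames -> 2
videos = []
for i in range(paso):
    indices = range(i, len(frames), paso)
    videos.append([frames[j] for j in indices[:16]])

print(len(videos), len(videos[0]))  # 2 16
print(videos[0][:4])                # [0, 2, 4, 6]
```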
# LTC Neural Network
### Building the model
```
# Neural network input
video_shape = tuple([frames]+size[::-1]+[canales])
dropout = 0.5
lr = 1e-3
weigh_decay = 5e-3
ltc_save_path = '/home/jefelitman/Saved_Models/trained_ut/Encoder/Inception/inception_enhan/LTC-enhan-noMaxPoolT-SLTEnc_Seq_split1_{w}x{h}x{f}_softmax_sgd_'.format(
w=size[0], h=size[1],f=frames)
if canales == 3:
ltc_save_path += 'RGB_'
else:
ltc_save_path += 'B&N_'
ltc_save_path += 'lr={l}__dec-ori_2center-frame-crop_video-flip_temporal-dynamic_batchNorm_pretrained-c3d'.format(l = lr)
# Create the folder where the model will be saved
if not os.path.isdir(ltc_save_path):
os.mkdir(ltc_save_path)
model_saves_path = os.path.join(ltc_save_path,'model_saves')
if not os.path.isdir(model_saves_path):
os.mkdir(model_saves_path)
ltc_save_path
lr = 1e-3
weigh_decay = 5e-3
# Model compilation parameters
optimizador = keras.optimizers.SGD(learning_rate=lr, momentum=0.9)
#optimizador = keras.optimizers.Adam(learning_rate=lr)
perdida = keras.losses.SparseCategoricalCrossentropy()
precision = keras.metrics.SparseCategoricalAccuracy()
entrada = keras.Input(shape=(300,224,224,3),batch_size=1,
name="Input_video")
#Conv1
x = keras.layers.Conv3D(filters=16, kernel_size=3, padding="same", activation="relu",
kernel_regularizer=keras.regularizers.l2(weigh_decay),
name='conv3d_1')(entrada)
x = keras.layers.MaxPool3D(pool_size=(1,2,2),strides=(1,2,2), name='max_pooling3d_1')(x)
#Conv2
x = keras.layers.Conv3D(filters=32, kernel_size=3, padding="same", activation="relu",
kernel_regularizer=keras.regularizers.l2(weigh_decay),
name='conv3d_2')(x)
x = keras.layers.MaxPool3D(pool_size=(1,2,2),strides=(1,2,2), name='max_pooling3d_2')(x)
#Conv3
x = keras.layers.Conv3D(filters=64, kernel_size=3, padding="same", activation="relu",
kernel_regularizer=keras.regularizers.l2(weigh_decay),
name='conv3d_3')(x)
x = keras.layers.MaxPool3D(pool_size=(1,2,2),strides=(1,2,2),name='max_pooling3d_3')(x)
#Conv4
x = keras.layers.Conv3D(filters=128, kernel_size=3, padding="same", activation="relu",
kernel_regularizer=keras.regularizers.l2(weigh_decay),
name='conv3d_4')(x)
x = keras.layers.MaxPool3D(pool_size=(1,2,2),strides=(1,2,2),name='max_pooling3d_4')(x)
ltc = keras.Model(entrada, x, name="LTC_original")
ltc = get_LTC_encoder_slt_I(video_shape, len(dataset.to_class),dropout, weigh_decay, 256, 512, True)
# Compile the model
ltc.compile(optimizer = optimizador,
loss = perdida,
metrics = [precision])
#keras.utils.plot_model(ltc, 'LTC.png', show_shapes=True)
#ltc = keras.models.load_model('/home/jefelitman/Saved_Models/trained_ut/Inception/conv_channels/LTC-incept-channels_split1_112x112x16_softmax_sgd_RGB_lr=0.001_decreased-original_2center-frame-crop_video-flip_temporal-dynamic_batchNorm_pretrained-c3d/ltc_final_1.h5')
ltc.summary()
ltc.predict(np.zeros((1,300,224,224,3)))
```
### Load the pre-trained weights
##### C3D weights
```
c3d_weights = h5py.File('/home/jefelitman/Saved_Models/c3d-sports1M_weights.h5', 'r')
print(c3d_weights.keys())
c3d_weights['layer_0'].keys()
weights = []
for capa in ['layer_0','layer_2','layer_4','layer_5','layer_5']:
weights.append([
np.moveaxis(np.r_[c3d_weights[capa]['param_0']], (0,1),(4,3)), # Swap the axes because C3D weights are stored channels-first
np.r_[c3d_weights[capa]['param_1']]
])
for index, capa in enumerate(['conv3d_1','conv3d_2','conv3d_3','conv3d_4','conv3d_5']):
ltc.get_layer(capa).set_weights(weights[index])
```
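The `np.moveaxis` call above converts the channels-first C3D kernels into the channels-last layout that Keras `Conv3D` expects; on a dummy tensor:

```python
import numpy as np

# C3D kernels are stored channels-first: (out_ch, in_ch, depth, height, width)
w_cf = np.zeros((64, 3, 3, 3, 3))

# Move out_ch to the last axis and in_ch to the second-to-last,
# giving the Keras layout (depth, height, width, in_ch, out_ch)
w_cl = np.moveaxis(w_cf, (0, 1), (4, 3))
print(w_cl.shape)  # (3, 3, 3, 3, 64)
```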
##### InceptionV3 weights
### Training the network with the generator
```
# Custom callback used during model training
class custom_callback(keras.callbacks.Callback):
def __init__(self):
self.accuracies = []
self.losses = []
self.val_accuracies = []
self.val_loss = []
def on_batch_end(self, batch, logs):
corte = dataset.train_batches//3 + 1
if batch == corte or batch == corte*2:
keras.backend.set_value(optimizador.lr, optimizador.lr.numpy()*0.1)
for i in ['conv3d_1','conv3d_2','conv3d_3','conv3d_4','conv3d_5','dense_8','dense_9','dense_10']:
weigh_decay = ltc.get_layer(i).kernel_regularizer.get_config()['l2'] * 0.1
ltc.get_layer(i).kernel_regularizer = keras.regularizers.l2(weigh_decay)
print("\n","Current LR: ", str(optimizador.lr.numpy()))
self.accuracies.append(logs['sparse_categorical_accuracy'])
self.losses.append(logs['loss'])
def on_epoch_begin(self, epoch, logs):
keras.backend.set_value(optimizador.lr, 0.001)
def on_epoch_end(self,batch, logs):
self.val_accuracies.append(logs['val_sparse_categorical_accuracy'])
self.val_loss.append(logs['val_loss'])
funciones = [
keras.callbacks.ModelCheckpoint(
filepath=os.path.join(model_saves_path,'ltc_epoch_{epoch}.h5'),
save_best_only=True,
monitor='val_sparse_categorical_accuracy',
verbose=1),
keras.callbacks.CSVLogger(os.path.join(ltc_save_path,'output.csv')),
custom_callback()
]
epoch = 1
historial = ltc.fit(x = dataset.get_train_generator(canales),
steps_per_epoch=dataset.train_batches,
epochs=epoch,
callbacks=funciones,
validation_data= dataset.get_test_generator(canales),
validation_steps=dataset.test_batches,
max_queue_size=batch_size%24)
```
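The learning-rate schedule inside `custom_callback` drops the rate by 10x at roughly one third and two thirds of each epoch. A standalone sketch of the resulting per-batch rate (an approximation of the callback's effect, not its exact code):

```python
def lr_at_batch(batch, total_batches, base_lr=1e-3):
    """Approximate the callback's schedule: 10x drops at ~1/3 and ~2/3 of an epoch."""
    corte = total_batches // 3 + 1
    if batch < corte:
        return base_lr
    elif batch < corte * 2:
        return base_lr * 0.1
    return base_lr * 0.01

# With 90 batches per epoch, the rate steps down twice during the epoch
lrs = [lr_at_batch(b, total_batches=90) for b in (0, 40, 80)]
print(lrs)
```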
### Model saving
```
# Final save of the model once training stops
ltc.save(os.path.join(ltc_save_path,"ltc_final_{e}.h5".format(e=epoch)))
```
### Training result plots
```
fig = plt.figure()
plt.plot(funciones[-1].losses,'k--')
rango = [i*dataset.train_batches-1 for i in range(1,epoch+1)]
plt.plot(rango, funciones[-1].val_loss,'bo')
plt.title('Loss over steps')
plt.legend(labels=["Loss","Test Loss"])
plt.show()
fig.savefig(os.path.join(ltc_save_path,'train_loss_steps_{e}.png'.format(e=dataset.train_batches)))
fig = plt.figure()
plt.plot(funciones[-1].accuracies,'k--')
rango = [i*dataset.train_batches-1 for i in range(1,epoch+1)]
plt.plot(rango, funciones[-1].val_accuracies,'bo')
plt.title('Accuracy over steps')
plt.legend(labels=["Accuracy","Test Accuracy"])
plt.show()
fig.savefig(os.path.join(ltc_save_path,'train_accuracy_steps_{e}.png'.format(e=dataset.train_batches)))
```
### Training evaluation
```
# hide
from nbdev.showdoc import *
```
# Load model from Weights & Biases (wandb)
This tutorial is for people who are using [Weights & Biases (wandb)](https://wandb.ai/site) `WandbCallback` in their training pipeline and are looking for a convenient way to use saved models on W&B cloud to make predictions, evaluate and submit in a few lines of code.
Currently only Keras models (`.h5`) are supported for wandb loading in this framework. Future versions will include other formats like PyTorch support.
---------------------------------------------------------------------
## 0. Authentication
To authenticate your W&B account you are given several options:
1. Run `wandb login` in terminal and follow instructions.
2. Configure global environment variable `'WANDB_API_KEY'`.
3. Run `wandb.init(project=PROJECT_NAME, entity=ENTITY_NAME)` and pass API key from [https://wandb.ai/authorize](https://wandb.ai/authorize)
-----------------------------------------------------
## 1. Download validation data
The first thing we do is download the current validation data and example predictions to evaluate against. This can be done in a few lines of code with `NumeraiClassicDownloader`.
```
#other
import pandas as pd
from numerblox.download import NumeraiClassicDownloader
from numerblox.numerframe import create_numerframe
from numerblox.model import WandbKerasModel
from numerblox.evaluation import NumeraiClassicEvaluator
#other
downloader = NumeraiClassicDownloader("wandb_keras_test")
# Path variables
val_file = "numerai_validation_data.parquet"
val_save_path = f"{str(downloader.dir)}/{val_file}"
# Download only validation parquet file
downloader.download_single_dataset(val_file,
dest_path=val_save_path)
# Download example val preds
downloader.download_example_data()
# Initialize NumerFrame from parquet file path
dataf = create_numerframe(val_save_path)
# Add example preds to NumerFrame
example_preds = pd.read_parquet("wandb_keras_test/example_validation_predictions.parquet")
dataf['prediction_example'] = example_preds.values
```
--------------------------------------------------------------------
## 2. Predict (WandbKerasModel)
`WandbKerasModel` automatically downloads and loads in a `.h5` from a specified wandb run. The path for a run is specified in the ["Overview" tab](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab) of the run.
- `file_name`: The default name for the best model in a run is `model-best.h5`. If you want to use a model you have saved under a different name specify `file_name` for `WandbKerasModel` initialization.
- `replace`: The model will be downloaded to the directory you are working in. You will be warned if this directory contains models with the same filename. If these models can be overwritten specify `replace=True`.
- `combine_preds`: Setting this to True will average all columns in case you have trained a multi-target model.
- `autoencoder_mlp:` This argument is for the case where your [model architecture includes an autoencoder](https://forum.numer.ai/t/autoencoder-and-multitask-mlp-on-new-dataset-from-kaggle-jane-street/4338) and therefore the output is a tuple of 3 tensors. `WandbKerasModel` will in this case take the third output of the tuple (target predictions).
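To make the last two options concrete, here is a hedged sketch of the post-processing they imply (a hypothetical `postprocess` helper for illustration, not the `numerblox` source):

```python
import numpy as np

def postprocess(preds, combine_preds=False, autoencoder_mlp=False):
    """Sketch of prediction post-processing: take the 3rd tensor of an
    autoencoder-MLP output tuple, then optionally average multi-target columns."""
    if autoencoder_mlp:
        preds = preds[2]              # (reconstruction, encoding, targets) -> targets
    preds = np.asarray(preds)
    if combine_preds and preds.ndim == 2 and preds.shape[1] > 1:
        preds = preds.mean(axis=1)    # average across target columns
    return preds

# A dummy multi-target autoencoder-MLP output: two rows, two targets each
multi_target = (None, None, np.array([[0.2, 0.4], [0.6, 0.8]]))
out = postprocess(multi_target, combine_preds=True, autoencoder_mlp=True)
print(out)  # [0.3 0.7]
```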
```
#other
run_path = "crowdcent/cc-numerai-classic/h4pwuxwu"
model = WandbKerasModel(run_path=run_path,
replace=True, combine_preds=True, autoencoder_mlp=True)
```
After initialization you can generate predictions with one line. `.predict` takes a `NumerFrame` as input and outputs a `NumerFrame` with a new prediction column. The prediction column name will be of the format `prediction_{RUN_PATH}`.
```
#other
dataf = model.predict(dataf)
dataf.prediction_cols
#other
main_pred_col = f"prediction_{run_path}"
main_pred_col
```
----------------------------------------------------------------------
## 3. Evaluate
We can now use the output of the model to evaluate in 2 lines of code. Additionally, we can directly submit predictions to Numerai with this `NumerFrame`. Check out the educational notebook `submitting.ipynb` for more information on this.
```
#other
evaluator = NumeraiClassicEvaluator()
val_stats = evaluator.full_evaluation(dataf=dataf,
target_col="target",
pred_cols=[main_pred_col,
"prediction_example"],
example_col="prediction_example"
)
```
The evaluator outputs a `pd.DataFrame` with most of the main validation metrics for Numerai. We welcome new ideas and metrics for Evaluators. See `nbs/07_evaluation.ipynb` in this repository for full Evaluator source code.
```
#other
val_stats
```
After we are done, downloaded files can be removed with one call on `NumeraiClassicDownloader` (optional).
```
#other
# Clean up environment
downloader.remove_base_directory()
```
------------------------------------------------------------------
We hope this tutorial explained clearly to you how to load and predict with Weights & Biases (wandb) models.
Below you will find the full docs for `WandbKerasModel` and link to the source code:
```
# other
# hide_input
show_doc(WandbKerasModel)
```
```
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Idiomatic Programmer Code Labs
## Code Labs #2 - Get Familiar with Data Augmentation
## Prerequisites:
1. Familiar with Python
2. Completed Handbook 2/Part 8: Data Augmentation
## Objectives:
1. Channel Conversion
2. Flip Images
3. Roll (Shift Images)
4. Rotate without Clipping
## Setup:
Install the additional packages needed to continue with OpenCV, and then import them.
```
# Install the matplotlib library for plotting
!pip install matplotlib
# special iPython command -- tells matplotlib to display plots inline (in the notebook)
%matplotlib inline
# Adrian Rosenbrock's image manipulation library
!pip install imutils
# Import matplotlib python plot module
import matplotlib.pyplot as plt
# Import OpenCV
import cv2
# Import numpy scientific module for arrays
import numpy as np
# Import imutils
import imutils
```
## Channel Conversions
OpenCV reads in the channels as BGR (Blue, Green, Red) instead of the more common convention of RGB (Red, Green, Blue). Let's learn how to change the channel ordering to RGB.
You fill in the blanks (replace the ??), make sure it passes the Python interpreter.
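To see the BGR/RGB difference without any OpenCV calls (so no exercise answers are given away), reversing the channel axis of a NumPy image swaps the two orderings:

```python
import numpy as np

# A 1x1 "image" holding a pure red pixel stored BGR, as OpenCV does
bgr = np.array([[[0, 0, 255]]], dtype=np.uint8)   # B=0, G=0, R=255

# Reversing the last axis converts BGR <-> RGB
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # red comes first in RGB order
```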
```
# Let's read in that apple image again.
image = cv2.imread('apple.jpg', cv2.IMREAD_COLOR)
plt.imshow(image)
```
### What, it's a blue apple!
Yup. It's the same data, but since matplotlib presumes RGB, then blue is the 3rd channel, but in BGR -- that's the red channel.
Let's reorder the channels from BGR to RGB and then display again.
```
# Let's convert the channel order to RGB
# HINT: RGB should be a big giveaway.
image = cv2.cvtColor(image, cv2.COLOR_BGR2??)
plt.imshow(image)
```
## Flip Images
Let's use OpenCV to flip an image (apple) vertically and then horizontally.
```
# Flip the image vertically (upside down)
# HINT: flip should be a big giveaway
flip = cv2.??(image, 0)
plt.imshow(flip)
# Flip the image horizontally (mirrored)
# HINT: If 0 was vertical, what number would be your first guess for horizontal?
flip = cv2.flip(image, ??)
plt.imshow(flip)
```
## Roll (Shift) Images
Let's use numpy to shift an image -- say 80 pixels to the right.
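Before tackling the blank, it helps to see the shift behavior on a tiny array: elements that fall off one end wrap around to the other.

```python
import numpy as np

a = np.arange(5)          # [0 1 2 3 4]
print(np.roll(a, 2))      # [3 4 0 1 2] -- values wrap around
```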
```
# Let's shift the image 80 pixels to the right (a horizontal shift), where axis=1 means along the width
# HINT: another name for shift is roll
roll = np.??(image, 80, axis=1)
plt.imshow(roll)
# Let's now shift the image 80 pixels down (a vertical shift).
# HINT: if shifting along the width axis uses a 1, what do you think the value is for
# shifting along the height axis?
roll = np.roll(image, 80, axis=??)
plt.imshow(roll)
```
## Randomly Rotate the Image (w/o Clipping)
Let's use imutils to randomly rotate the image without clipping it.
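`rotate_bound` avoids clipping by enlarging the output canvas. The new bounding-box size for a `w x h` image rotated by some angle follows from simple trigonometry (a sketch of the geometry, not imutils' implementation):

```python
import math

# Bounding box of a w x h image rotated by `deg` degrees
w, h, deg = 200, 100, 90
rad = math.radians(deg)
new_w = int(abs(w * math.cos(rad)) + abs(h * math.sin(rad)))
new_h = int(abs(w * math.sin(rad)) + abs(h * math.cos(rad)))
print(new_w, new_h)  # 100 200 -- a 90-degree turn swaps width and height
```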
```
import random
# Let's get a random value between 0 and 60 degrees.
degree = random.randint(0, 60)
# Let's rotate the image now by the randomly selected degree
rot = imutils.rotate_bound(image, ??)
plt.imshow(rot)
```
## End of Code Lab
<a href="https://colab.research.google.com/github/LedaiThomasNilsson/github-slideshow/blob/master/C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Basic transfer learning with cats and dogs data
### Import tensorflow
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
```
### Import modules and download the cats and dogs dataset.
```
import urllib.request
import os
import zipfile
import random
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.optimizers import RMSprop
from shutil import copyfile
data_url = "https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip"
data_file_name = "catsdogs.zip"
download_dir = '/tmp/'
urllib.request.urlretrieve(data_url, data_file_name)
zip_ref = zipfile.ZipFile(data_file_name, 'r')
zip_ref.extractall(download_dir)
zip_ref.close()
```
Check that the dataset has the expected number of examples.
```
print("Number of cat images:",len(os.listdir('/tmp/PetImages/Cat/')))
print("Number of dog images:", len(os.listdir('/tmp/PetImages/Dog/')))
# Expected Output:
# Number of cat images: 12501
# Number of dog images: 12501
```
Create some folders that will store the training and test data.
- There will be a training folder and a testing folder.
- Each of these will have a subfolder for cats and another subfolder for dogs.
```
try:
os.mkdir('/tmp/cats-v-dogs')
os.mkdir('/tmp/cats-v-dogs/training')
os.mkdir('/tmp/cats-v-dogs/testing')
os.mkdir('/tmp/cats-v-dogs/training/cats')
os.mkdir('/tmp/cats-v-dogs/training/dogs')
os.mkdir('/tmp/cats-v-dogs/testing/cats')
os.mkdir('/tmp/cats-v-dogs/testing/dogs')
except OSError:
pass
```
### Split data into training and test sets
- The following code first checks if an image file is empty (zero length)
- Of the files that are not empty, it puts 90% of the data into the training set, and 10% into the test set.
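The core of `split_data` — shuffle the file names, then slice into train and test lists — can be sketched in isolation (a hypothetical `split_files` helper for illustration):

```python
import random

def split_files(files, split_size, seed=0):
    """Sketch of split_data's core: shuffle names, then slice into train/test."""
    rng = random.Random(seed)
    shuffled = rng.sample(files, len(files))
    n_train = int(len(files) * split_size)
    return shuffled[:n_train], shuffled[n_train:]

train, test = split_files([f"img_{i}.jpg" for i in range(10)], split_size=0.9)
print(len(train), len(test))  # 9 1
```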
```
import random
from shutil import copyfile
def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):
files = []
for filename in os.listdir(SOURCE):
file = SOURCE + filename
if os.path.getsize(file) > 0:
files.append(filename)
else:
print(filename + " is zero length, so ignoring.")
training_length = int(len(files) * SPLIT_SIZE)
testing_length = int(len(files) - training_length)
shuffled_set = random.sample(files, len(files))
training_set = shuffled_set[0:training_length]
testing_set = shuffled_set[training_length:]
for filename in training_set:
this_file = SOURCE + filename
destination = TRAINING + filename
copyfile(this_file, destination)
for filename in testing_set:
this_file = SOURCE + filename
destination = TESTING + filename
copyfile(this_file, destination)
CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"
TRAINING_CATS_DIR = "/tmp/cats-v-dogs/training/cats/"
TESTING_CATS_DIR = "/tmp/cats-v-dogs/testing/cats/"
DOG_SOURCE_DIR = "/tmp/PetImages/Dog/"
TRAINING_DOGS_DIR = "/tmp/cats-v-dogs/training/dogs/"
TESTING_DOGS_DIR = "/tmp/cats-v-dogs/testing/dogs/"
split_size = .9
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)
split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)
# Expected output
# 666.jpg is zero length, so ignoring
# 11702.jpg is zero length, so ignoring
```
Check that the training and test sets are the expected lengths.
```
print("Number of training cat images", len(os.listdir('/tmp/cats-v-dogs/training/cats/')))
print("Number of training dog images", len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))
print("Number of testing cat images", len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))
print("Number of testing dog images", len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))
# expected output
# Number of training cat images 11250
# Number of training dog images 11250
# Number of testing cat images 1250
# Number of testing dog images 1250
```
### Data augmentation (try adjusting the parameters)!
Here, you'll use the `ImageDataGenerator` to perform data augmentation.
- Things like rotating and flipping the existing images allow you to generate training data that is more varied, and can help the model generalize better during training.
- You can also use the data generator to apply data augmentation to the validation set.
You can use the default parameter values for a first pass through this lab.
- Later, try to experiment with the parameters of `ImageDataGenerator` to improve the model's performance.
- Try to reach 99.9% validation accuracy or better.
```
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
# Experiment with your own parameters to reach 99.9% validation accuracy or better
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing/"
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
```
### Get and prepare the model
You'll be using the `InceptionV3` model.
- Since you're making use of transfer learning, you'll load the pre-trained weights of the model.
- You'll also freeze the existing layers so that they aren't trained on your downstream task with the cats and dogs data.
- You'll also get a reference to the last layer, 'mixed7' because you'll add some layers after this last layer.
```
weights_url = "https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
weights_file = "inception_v3.h5"
urllib.request.urlretrieve(weights_url, weights_file)
# Instantiate the model
pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
include_top=False,
weights=None)
# load pre-trained weights
pre_trained_model.load_weights(weights_file)
# freeze the layers
for layer in pre_trained_model.layers:
layer.trainable = False
# pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
```
### Add layers
Add some layers that you will train on the cats and dogs data.
- `Flatten`: This will take the output of the `last_layer` and flatten it to a vector.
- `Dense`: You'll add a dense layer with a relu activation.
- `Dense`: After that, add a dense layer with a sigmoid activation. The sigmoid will scale the output to range from 0 to 1, and allow you to interpret the output as a prediction between two categories (cats or dogs).
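The sigmoid's squashing behavior, which lets a single output act as a two-class probability, looks like this:

```python
import math

def sigmoid(z):
    """Squash any real-valued logit into (0, 1)."""
    return 1 / (1 + math.exp(-z))

# Logit 0 is maximal uncertainty; large positive logits approach 1 ("dog"),
# large negative logits approach 0 ("cat")
print(round(sigmoid(0.0), 2), round(sigmoid(4.0), 2), round(sigmoid(-4.0), 2))
# 0.5 0.98 0.02
```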
Then create the model object.
```
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
```
### Train the model
Compile the model, and then train it on the test data using `model.fit`
- Feel free to adjust the number of epochs. This project was originally designed with 20 epochs.
- For the sake of time, you can use fewer epochs (2) to see how the code runs.
- You can ignore the warnings about some of the images having corrupt EXIF data. Those will be skipped.
```
# compile the model
model.compile(optimizer=RMSprop(lr=0.0001),
loss='binary_crossentropy',
metrics=['acc'])
# train the model (adjust the number of epochs from 1 to improve performance)
history = model.fit(
train_generator,
validation_data=validation_generator,
epochs=2,
verbose=1)
```
### Visualize the training and validation accuracy
You can see how the training and validation accuracy change with each epoch on an x-y plot.
```
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.figure()
```
### Predict on a test image
You can upload any image and have the model predict whether it's a dog or a cat.
- Find an image of a dog or cat
- Run the following code cell. It will ask you to upload an image.
- The model will print "is a dog" or "is a cat" depending on the model's prediction.
```
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
image_tensor = np.vstack([x])
classes = model.predict(image_tensor)
print(classes)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a dog")
else:
print(fn + " is a cat")
```
# Notebook 5: Clean Up Resources
Specify "Python 3" Kernel and "Data Science" Image.
### Background
In this notebook, we will clean up the resources we provisioned during this workshop:
- SageMaker Feature Groups
- SageMaker Endpoints
- Amazon Kinesis Data Stream
- Amazon Kinesis Data Analytics application
### Imports
```
from parameter_store import ParameterStore
from utils import *
```
### Session variables
```
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
boto_session = boto3.Session()
kinesis_client = boto_session.client(service_name='kinesis', region_name=region)
kinesis_analytics_client = boto_session.client('kinesisanalytics')
ps = ParameterStore(verbose=False)
ps.set_namespace('feature-store-workshop')
```
Load variables from previous notebooks
```
parameters = ps.read()
customers_feature_group_name = parameters['customers_feature_group_name']
products_feature_group_name = parameters['products_feature_group_name']
orders_feature_group_name = parameters['orders_feature_group_name']
click_stream_historical_feature_group_name = parameters['click_stream_historical_feature_group_name']
click_stream_feature_group_name = parameters['click_stream_feature_group_name']
cf_model_endpoint_name = parameters['cf_model_endpoint_name']
ranking_model_endpoint_name = parameters['ranking_model_endpoint_name']
kinesis_stream_name = parameters['kinesis_stream_name']
kinesis_analytics_application_name = parameters['kinesis_analytics_application_name']
```
### Delete feature groups
```
feature_group_list = [customers_feature_group_name, products_feature_group_name,
orders_feature_group_name, click_stream_historical_feature_group_name,
click_stream_feature_group_name]
for feature_group in feature_group_list:
print(f'Deleting feature group: {feature_group}')
delete_feature_group(feature_group)
```
### Delete endpoints and endpoint configurations
```
def clean_up_endpoint(endpoint_name):
response = sagemaker_session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)
endpoint_config_name = response['EndpointConfigName']
print(f'Deleting endpoint: {endpoint_name}')
print(f'Deleting endpoint configuration: {endpoint_config_name}')
sagemaker_session.sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
sagemaker_session.sagemaker_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
endpoint_list = [cf_model_endpoint_name, ranking_model_endpoint_name]
for endpoint in endpoint_list:
clean_up_endpoint(endpoint)
```
### Delete Kinesis Data Stream
```
kinesis_client.delete_stream(StreamName=kinesis_stream_name,
EnforceConsumerDeletion=True)
```
### Delete Kinesis Data Analytics application
```
response = kinesis_analytics_client.describe_application(ApplicationName=kinesis_analytics_application_name)
create_ts = response['ApplicationDetail']['CreateTimestamp']
kinesis_analytics_client.delete_application(ApplicationName=kinesis_analytics_application_name, CreateTimestamp=create_ts)
```
Go back to Workshop Studio and click on "Next".
# PharmSci 175/275 (UCI)
## What is this??
The material below is a Jupyter notebook including some lecture content to supplement class material on fluctuations, correlations, and error analysis from Drug Discovery Computing Techniques, PharmSci 175/275 at UC Irvine.
Extensive materials for this course, as well as extensive background and related materials, are available on the course GitHub repository: [github.com/mobleylab/drug-computing](https://github.com/mobleylab/drug-computing)
This material is a set of slides intended for presentation with RISE as detailed [in the course materials on GitHub](https://github.com/MobleyLab/drug-computing/tree/master/uci-pharmsci/lectures/energy_minimization). While it may be useful without RISE, it will also likely appear somewhat less verbose than it would if it were intended for use in written form.
# Fluctuations, correlations, and error analysis
Today: fluctuations, correlations, and error analysis; a brief look at OpenMM; a density calculation that stops when converged; and analysis of the results.
### Instructor: David L. Mobley
### Contributors to today's materials:
- David L. Mobley
- I also appreciate John Chodera and Nathan Lim for help with OpenMM
- Some content also draws on John Chodera's work on [automated equilibration detection](https://www.biorxiv.org/content/early/2015/12/30/021659) and his [`pymbar`](https://github.com/choderalab/pymbar) Python package
- Density calculation work uses code from a former postdoc, Gaetano Calabró
# Outline of this notebook
1. Recap/info on central limit theorem and averaging
2. Some brief OpenMM basics -- mostly not covered in detail in class, but provides context for the density calculation
3. A simple example of a density calculation which stops when converged
4. Analyzing density results in this notebook
# Remember, the central limit theorem means the distribution of a sum of many independent random variables is a Gaussian/normal distribution
```
from IPython.display import Image
Image('images/CLT.png')
```
## This is true regardless of the starting distribution
The distribution of a sum of many independent random variables will follow a Gaussian (normal) distribution, regardless of the starting distribution
A mean is a type of sum, so a mean (average) of many measurements or over many molecules will often be a normal distribution
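A quick numerical illustration (synthetic data, purely for intuition): means of draws from a uniform distribution -- which is far from Gaussian -- already cluster in a normal-looking way, with spread shrinking as $1/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 means, each over 50 draws from a (very non-Gaussian) uniform distribution
means = rng.uniform(0, 1, size=(10_000, 50)).mean(axis=1)

# CLT prediction: centered at 0.5 with std ~ sqrt(1/12)/sqrt(50)
expected_std = np.sqrt(1 / 12) / np.sqrt(50)
print(means.mean(), means.std(), expected_std)
```

Plotting a histogram of `means` makes the bell shape obvious even though each individual draw was uniform.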
## But which property? Sometimes several properties are related
Sometimes, related properties can be calculated from the same data -- for example, binding free energy:
$\Delta G_{bind} = k_B T \ln K_d$
Which should be normally distributed? $K_d$? $\Delta G$? Both?
Answer: Free energy, typically. Why? Equilibrium properties are determined by free energy. We measure, for example, the heat released on binding averaged over many molecules. We might *convert* to $K_d$, but the physical quantity relates to the free energy.
## This has important implications for averaging
For example, obtain results from several different experiments (i.e. different groups) and want to combine. How to average?
$\Delta G_{bind} = k_B T \ln K_d$
Example: combining vapor pressure measurements as part of a hydration free energy estimate: Three values, 1e-3, 1e-4, 1e-5. (Vapor pressure determined by partitioning between gas and condensed phase -- driven by free energy. It is the free energy which should be normally distributed, not the vapor pressure.)
**Simple mean: 3.7e-4**
**Average the logarithms and exponentiate: 1e-4**
**So, average the free energy, NOT the $K_d$.**
This also applies to other areas -- partitioning coefficients, solubility, etc.
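The vapor-pressure example above takes only a few lines to reproduce -- average in log (free-energy) space and exponentiate, rather than averaging the raw values:

```python
import numpy as np

# The three vapor-pressure measurements from the text
p = np.array([1e-3, 1e-4, 1e-5])

simple_mean = p.mean()               # dominated by the largest value
log_mean = np.exp(np.log(p).mean())  # geometric mean, i.e. averaging in free-energy space
print(simple_mean, log_mean)
```

The naive mean lands within a factor of ~3 of the largest measurement, while averaging the logarithms recovers the central value.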
# Generate some input files we'll use
- We want some inputs to work with as we see how OpenMM works
- And for our sample density calculation
- [`SolvationToolkit`](https://github.com/mobleylab/SolvationToolkit) provides simple wrappers for building arbitrary mixtures
- OpenEye toolkits
- [`packmol`](https://github.com/mcubeg/packmol)
- GAFF small molecule force field (and TIP3P or TIP4P, etc. for water)
Let's build a system to use later
```
from solvationtoolkit.solvated_mixtures import *
mixture = MixtureSystem('mixtures')
mixture.addComponent(label='phenol', name="phenol", number=1)
mixture.addComponent(label='toluene', smiles='Cc1ccccc1', number=10)
mixture.addComponent(label='cyclohexane', smiles='C1CCCCC1', number=100)
#Generate output files for AMBER
mixture.build(amber = True)
```
# OpenMM is more of a simulation toolkit than a simulation package
(This material is here mainly as background for the code below, down to [A simple example of a density calculation which stops when converged](#A-simple-example-of-a-density-calculation-which-stops-when-converged), and will not be covered in detail in this lecture).
- Easy-to-use Python API
- Very fast calculations on GPUs (but slow on CPUs)
- Really easy to implement new techniques, do new science
- Key ingredients in a calculation:
- `Topology`
- `System`
- `Simulation` (takes `System`, `Topology`, `Integrator`; contains positions)
## `Topology`: Chemical composition of your system
- Atoms, bonds, etc.
- Can be loaded from some common file formats such as PDB, mol2
- Can be created from OpenEye molecule via [`oeommtools`](https://github.com/oess/oeommtools), such as [`oeommtools.utils.oemol_to_openmmTop`](https://github.com/oess/oeommtools/blob/master/oeommtools/utils.py#L17)
- Side note: An OE "molecule" can contain more than one molecule, so can contain protein+ligand+solvent for example
- Tangent: Try to retain bond order info if you have it (e.g. from a mol2)
```
# Example Topology generation from a couple mechanisms:
# Load a PDB
from simtk.openmm.app import PDBFile
pdb = PDBFile('sample_files/T4-protein.pdb')
t4_topology = pdb.topology
# Load a mol2: MDTraj supports a variety of file formats including mol2
import mdtraj
traj = mdtraj.load('sample_files/mobley_20524.mol2')
# MDTraj objects contain a Topology, but an MDTraj topology; they support conversion to OpenMM
traj.topology.to_openmm()
# MDTraj can also handle PDB, plus trajectory formats which contain topology information
protein_traj = mdtraj.load('sample_files/T4-protein.pdb')
t4_topology = protein_traj.topology.to_openmm()
# And we can visualize with nglview (or drop out to VMD)
import nglview
view = nglview.show_mdtraj(protein_traj)
view
# Load AMBER gas phase topology
from simtk.openmm.app import *
prmtop = AmberPrmtopFile('sample_files/mobley_20524.prmtop')
print("Topology has %s atoms" % prmtop.topology.getNumAtoms())
# Gromacs files can be loaded by GromacsTopFile and GromacsGroFile but you need topology/coordinate files
# which don't have include statements, or a GROMACS installation
```
If the below cell does not run because of missing `oeommtools`, you'll need to install it, e.g. via `conda install -c openeye/label/Orion oeommtools -c omnia`
```
# Load an OEMol and convert (note advantage over MDTraj for bond order, etc.)
from openeye.oechem import *
from oeommtools.utils import *
mol = OEMol()
istream = oemolistream( 'sample_files/mobley_20524.mol2')
OEReadMolecule(istream, mol)
istream.close()
# Convert OEMol to Topology using oeommtools -- so you can get a topology from almost any format OE supports
topology, positions = oemol_to_openmmTop(mol)
print(topology.getNumAtoms())
```
## `System`: Your parameterized system
- Often generated by `createSystem`, but this requires that OpenMM know how to assign parameters
- Easy for standard biomolecules (proteins, nucleic acids), waters, ions
- OpenMM FFXML files used; available for many common FFs
- More complex for general small molecules
- Can also be loaded from common file formats such as GROMACS, AMBER
- useful if you set up for AMBER or GROMACS
- We have a new Open Force Field effort that provides new force fields with an `openforcefield.createSystem` operator; this generates OpenMM `System` objects.
```
# Example system creation
#From OpenMM Docs: http://docs.openmm.org/latest/userguide/application.html#running-simulations
from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *
from sys import stdout
# Example System creation using OpenMM XML force field libraries -- good for biomolecules, ions, water
pdb = PDBFile('sample_files/input.pdb')
forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
nonbondedCutoff=1*nanometer, constraints=HBonds)
# Or you could set up your own molecule for simulation with e.g. GAFF using AmberTools
from openmoltools.amber import *
# Generate GAFF-typed mol2 file and AMBER frcmod file using AmberTools
gaff_mol2_file, frcmod_file = run_antechamber('phenol', 'sample_files/mobley_20524.mol2')
# Generate AMBER files
prmtop_name, inpcrd_name = run_tleap( 'phenol', gaff_mol2_file, frcmod_file)
print("Generated %s and %s" % (prmtop_name, inpcrd_name))
# Create System -- in this case, single molecule in gas phase
prmtop = AmberPrmtopFile( prmtop_name)
inpcrd = AmberInpcrdFile( inpcrd_name)
system = prmtop.createSystem(nonbondedMethod = NoCutoff, constraints = HBonds)
# Load the mixture we generated above in Section 3
file_prefix = 'mixtures/amber/phenol_toluene_cyclohexane_1_10_100'
prmtop = AmberPrmtopFile( file_prefix+'.prmtop')
inpcrd = AmberInpcrdFile( file_prefix+'.inpcrd')
# Create system: Here, solution phase with periodic boundary conditions and constraints
system = prmtop.createSystem(nonbondedMethod = PME, nonbondedCutoff = 1*nanometer, constraints = HBonds)
#You can visualize the above with VMD, or we can do:
traj = mdtraj.load( file_prefix + '.inpcrd', top = file_prefix + '.prmtop')
view = nglview.show_mdtraj(traj)
view
```
## `Simulation`: The system, topology, and positions you're simulating, under what conditions
- Could be for energy minimization, or different types of dynamics
- Has an integrator attached (even if just minimizing), including temperature
- `context` -- including positions, periodic box if applicable, etc.
- If dynamics, has:
- timestep
- velocities
- potentially also has reporters which store properties like energies, trajectory snapshots, etc.
### Let's take that last `System` we set up and energy minimize it
(The mixture of toluene, phenol, and cyclohexane we generated originally)
```
# Prepare the integrator
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
# Prep the simulation
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
# Get and print initial energy
state = simulation.context.getState(getEnergy = True)
energy = state.getPotentialEnergy() / kilocalories_per_mole
print("Energy before minimization (kcal/mol): %.2g" % energy)
# Energy minimize
simulation.minimizeEnergy()
# Get and print final energy
state = simulation.context.getState(getEnergy=True, getPositions=True)
energy = state.getPotentialEnergy() / kilocalories_per_mole
print("Energy after minimization (kcal/mol): %.2g" % energy)
# While we're at it, why don't we just run a few steps of dynamics
simulation.reporters.append(PDBReporter('sample_files/mixture_output.pdb', 100))
simulation.reporters.append(StateDataReporter(stdout, 100, step=True,
potentialEnergy=True, temperature=True))
simulation.step(1000) # Runs 1000 steps of dynamics
state = simulation.context.getState(getEnergy=True, getPositions=True)
```
# A simple example of a density calculation which stops when converged
- We'll do a very simple density estimation
- This is not a recommended protocol since we're jumping straight into "production"
- But it illustrates how you can do this type of thing easily with OpenMM
- For production data, you'd precede by equilibration (usually NVT, then NPT, then production)
## The most bare-bones version
```
# We'll pick up that same system again, loading it up again so we can add a barostat before setting up the simulation
import simtk.openmm as mm
file_prefix = 'mixtures/amber/phenol_toluene_cyclohexane_1_10_100'
prmtop = AmberPrmtopFile( file_prefix+'.prmtop')
inpcrd = AmberInpcrdFile( file_prefix+'.inpcrd')
system = prmtop.createSystem(nonbondedMethod = PME, nonbondedCutoff = 1*nanometer, constraints = HBonds)
# Now add a barostat
system.addForce(mm.MonteCarloBarostat(1*atmospheres, 300*kelvin, 25))
# Set up integrator and simulation
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
simulation = Simulation(prmtop.topology, system, integrator)
# Let's pull the positions from the end of the brief "equilibration" we ran up above.
simulation.context.setPositions(state.getPositions())
# Set up a reporter to assess progress; will report every 100 steps (somewhat short)
import os
prod_data_filename = os.path.join('sample_files', os.path.basename(file_prefix)+'.csv')
simulation.reporters.append(StateDataReporter( prod_data_filename, 100, step=True, potentialEnergy=True,
temperature=True, density=True))
# Set up for run; for a somewhat reasonable convergence threshold you probably want run_steps >= 2500
# and a density tolerance of 1e-3 or smaller; higher thresholds will likely result in early termination
# due to slow fluctuations in density.
# But that may take some time to run, so feel free to try higher also.
run_steps = 2500
converged = False
density_tolerance = 0.001
import numpy as np
import pandas as pd
from pymbar import timeseries as ts
while not converged:
simulation.step(run_steps)
# Read data
d = pd.read_csv(prod_data_filename, names=["step", "U", "Temperature", "Density"], skiprows=1)
density_ts = np.array(d.Density)
# Detect when it seems to have equilibrated and clip off the part prior
[t0, g, Neff] = ts.detectEquilibration(density_ts)
density_ts = density_ts[t0:]
# Compute standard error of what's left
density_mean_stderr = density_ts.std() / np.sqrt(Neff)
# Print stats, see if converged
print("Current density mean std error = %f g/mL" % density_mean_stderr)
if density_mean_stderr < density_tolerance :
converged = True
print("...Convergence is OK; equilibration estimated to be achieved after data point %s\n" % t0)
```
# While that's running, let's look at what it's doing
## The first key idea is that we want to be able to tell it to stop when the density is known precisely enough
We have this bit of code:
```python
density_mean_stderr = density_ts.std() / np.sqrt(Neff)
```
This is estimating the standard error in the mean -- $\sigma_{err} = \frac{\sigma}{\sqrt{N_{eff}}}$ where $N_{eff}$ is the number of effective samples and $\sigma$ is the standard deviation.
**We can stop running our simulation when the number of effective samples gets high enough, relative to the standard deviation, that the standard error becomes as small as we want.**
## But how do we get the number of effective samples?
As discussed previously, $N_{eff} = N/g$ where $g$ is the statistical inefficiency -- a measure of how correlated our samples are.
John Chodera's `pymbar` package provides a handy `statisticalInefficiency` function which estimates this from the autocorrelation function/autocorrelation time.
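As a rough sketch of what such an estimator does (this is a simplified illustration, not pymbar's actual implementation), $g$ can be estimated by summing the normalized autocorrelation function of the series, truncated where the estimate becomes non-positive:

```python
import numpy as np

def statistical_inefficiency(a):
    # g = 1 + 2 * sum_t C(t), truncated at the first non-positive C(t)
    a = np.asarray(a, dtype=float)
    n = len(a)
    da = a - a.mean()
    var = np.mean(da * da)
    if var == 0:
        return 1.0
    g = 1.0
    for t in range(1, n // 2):
        c = np.mean(da[:n - t] * da[t:]) / var
        if c <= 0:
            break
        g += 2.0 * c * (1.0 - t / n)
    return g

# Correlated AR(1) test series: the exact g is (1 + rho)/(1 - rho) = 19 for rho = 0.9
rng = np.random.default_rng(0)
rho = 0.9
x = np.zeros(5000)
for i in range(1, len(x)):
    x[i] = rho * x[i - 1] + rng.normal()
g = statistical_inefficiency(x)
n_eff = len(x) / g
print(g, n_eff)
```

The 5000 correlated samples here are worth far fewer effective samples -- which is exactly the correction the standard error formula above needs.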
## But there's another problem: What if our initial box size (and density) is very bad?
What if we built the box way too small? Or way too big? Then we'll have an initial period where the density is way off from the correct value.
<center><img src="images/Chodera_1_top.png" alt="GitHub" style="width: 800px;"/></center>
Here, instantaneous density of liquid argon averaged over 500 simulations from [Chodera](https://www.biorxiv.org/content/early/2015/12/30/021659)
### This could adversely affect computed properties, unless we throw out some data for equilibration
If we average these results (red) the result is biased relative to the true expectation value out to well past 1000$\tau$ (time units) [(Chodera)](https://www.biorxiv.org/content/early/2015/12/30/021659):
<center><img src="images/Chodera_1_all.png" alt="GitHub" style="width: 1000px;"/></center>
Throwing out 500 initial samples as equilibration gives much better results even by 200 $\tau$
### With a clever trick, you can do this automatically
Key idea: If you throw away unequilibrated data, it reduces the correlation time/statistical inefficiency, **increasing $N_{eff}$**. But if you throw away equilibrated data, you're just wasting data (**decreasing $N_{eff}$**). So pick how much data to throw away to [**maximize $N_{eff}$.**](https://www.biorxiv.org/content/early/2015/12/30/021659)
<center><img src="images/Chodera_2.png" alt="GitHub" style="width: 1000px;"/></center>
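This maximize-$N_{eff}$ idea can be sketched in a few lines (a simplified stand-in for pymbar's `detectEquilibration`, using its own crude statistical-inefficiency estimate rather than pymbar's actual implementation):

```python
import numpy as np

def stat_ineff(a):
    # Crude statistical inefficiency from the normalized autocorrelation function
    a = np.asarray(a, dtype=float)
    n = len(a)
    da = a - a.mean()
    var = np.mean(da * da)
    if var == 0:
        return 1.0
    g = 1.0
    for t in range(1, n // 2):
        c = np.mean(da[:n - t] * da[t:]) / var
        if c <= 0:
            break
        g += 2.0 * c * (1.0 - t / n)
    return g

def detect_equilibration(a, step=50):
    # Try discarding increasing amounts of initial data; keep the choice
    # that maximizes the number of effective samples N_eff = N/g
    a = np.asarray(a, dtype=float)
    best_t0, best_neff = 0, 0.0
    for t0 in range(0, len(a) // 2, step):
        sub = a[t0:]
        neff = len(sub) / stat_ineff(sub)
        if neff > best_neff:
            best_t0, best_neff = t0, neff
    return best_t0, best_neff

# Synthetic "density" trace: a decaying initial transient plus stationary noise
rng = np.random.default_rng(1)
transient = 5.0 * np.exp(-np.arange(300) / 60.0)
series = np.concatenate([transient, np.zeros(3000)]) + rng.normal(0, 0.5, 3300)
t0, neff = detect_equilibration(series)
print(t0, neff)
```

Including the transient inflates the correlation time so badly that discarding it wins by a large margin, so the search lands near the end of the transient automatically.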
### Basically what we're doing here is making a bias/variance tradeoff
We pick an amount of data to throw away that minimizes bias without increasing the variance much
<center><img src="images/Chodera_3.png" alt="GitHub" style="width: 400px;"/></center>
This shows the same argon case, with 500 replicates, and looks at bias and variance as a function of the amount of data discarded (color bar). Throwing out a modest amount of data reduces bias a great deal while keeping the standard error low.
The arrow marks the point automatically selected.
### It turns out `pymbar` has code for this, too, and we use it in our example
```python
# Read stored trajectory data
d = pd.read_csv(prod_data_filename, names=["step", "U", "Temperature", "Density"], skiprows=1)
density_ts = np.array(d.Density)
# Detect when it seems to have equilibrated and clip off the part prior
[t0, g, Neff] = ts.detectEquilibration(density_ts)
density_ts = density_ts[t0:]
```
# Analyzing density results in this notebook
- The above may take some time to converge
- (You can set the threshold higher, which may lead to apparently false convergence)
- Here we'll analyze some sample density data I've provided
## As usual we'll use matplotlib for analysis: First let's view a sample set of density data
```
# Prep matplotlib/pylab to display here
%matplotlib inline
import pandas as pd
from pylab import *
from pymbar import timeseries as ts
# Load density
d = pd.read_csv('density_simulation/prod/phenol_toluene_cyclohexane_1_10_100_prod.csv', names=["step", "U", "Temperature", "Density"], skiprows=1)
# Plot instantaneous density
xlabel('Step')
ylabel('Density (g/mL)')
plot( np.array(d.step), np.array(d.Density), 'b-')
```
## Now we want to detect equilibration and throw out unequilibrated data
We'll also compute the mean density
```
#Detect equilibration
density_ts = np.array(d.Density)
[t0, g, Neff] = ts.detectEquilibration(density_ts)
print("Data equilibrated after snapshot number %s..." % t0)
# Clip out unequilibrated region
density_ts = density_ts[t0:]
stepnrs = np.array(d.step[t0:])
# Compute mean density up to the present at each time, along with associated uncertainty
mean_density = [ density_ts[0:i].mean() for i in range(2, len(density_ts)) ]
mean_density_stderr = [ ]
for i in range(2,len(density_ts)):
g = ts.statisticalInefficiency( density_ts[0:i])
stderr = density_ts[0:i].std()/sqrt(i/g)
mean_density_stderr.append(stderr)
```
## Finally let's graph and compare to experiment
```
# Plot
figure()
errorbar(stepnrs[2:], mean_density, yerr=mean_density_stderr, fmt='b-' )
plot( [0, stepnrs.max()], [0.78, 0.78], 'k-') #Overlay experimental value for cyclohexane
xlabel('Step')
ylabel('Density (g/mL)')
show()
print("Experimental density of cyclohexane is 0.78 g/mL at 20C" ) # per PubChem, https://pubchem.ncbi.nlm.nih.gov/compound/cyclohexane#section=Solubility
```
### Exercise: Do the same but for YOUR short simulation
The sample data analyzed here was generated by the `density.py` script in this directory; it discards a large amount of data to equilibration BEFORE storing production data. This means (as `detectEquilibration` finds) it's already equilibrated.
As an exercise, analyze YOUR sample simulation and find out how much of it has equilibrated/how much data needs to be discarded.
# Now let's shift gears back to some more on statistics and error analysis
The [2013 Computer Aided Drug Design Gordon Research Conference](http://lanyrd.com/2013/grc-cadd-2013/) focused specifically on statistics relating to drug discovery, error, reproducibility, etc. Slides from many talks are available online along with Python code, statistics info, etc.
Here I draw especially on (with permission) slides from:
- Woody Sherman (then Schrodinger; now Silicon Therapeutics)
- Tom Darden (OpenEye)
- Paul Hawkins (OpenEye)
- John Chodera (MSKCC) and Terry Stouch (JCAMD)
I'd also like to include material from Ajay Jain that I normally show, but I am awaiting approval, and likewise content from Paul Hawkins (OpenEye)
## Some content from Sherman
<center><img src="images/Sherman_1.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_2.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_3.png" alt="GitHub" style="width: 1000px;"/></center>
### Why do we not know?
**No null hypothesis** -- no point of comparison as to how well we would do with some other model or just guessing or... We don't know what "useful" means!
The null hypothesis is used differently in two approaches to statistical inference (the same term is used with different meanings):
- **Significance testing** (Fisher): Null hypothesis is rejected or disproved on basis of data but never accepted or proved; magnitude of effect is unimportant
- **Hypothesis testing** (Neyman and Pearson): Contrast with alternate hypothesis, decide between them on basis of data. Must be better than alternative hypothesis.
Arguably hypothesis testing is more important, though significance testing is done a lot (see last lecture).
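As a concrete, self-contained instance of Fisher-style significance testing (the scores below are synthetic, purely illustrative), here is a permutation test of the null hypothesis "no difference between two methods" -- which, in this framework, we can only reject, never prove:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scores for two methods (made-up data for illustration)
method_a = rng.normal(0.0, 1.0, 100)
method_b = rng.normal(1.0, 1.0, 100)

observed = method_b.mean() - method_a.mean()
pooled = np.concatenate([method_a, method_b])

# Permutation null: shuffle the labels, recompute the difference in means
n_perm = 2000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[100:].mean() - pooled[:100].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(observed, p_value)
```

A small p-value only says the "no difference" null is implausible; it says nothing about whether either method is better than a sensible alternative -- which is exactly the hypothesis-testing question Sherman's slides address.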
Sherman's slides below focus on hypothesis testing.
<center><img src="images/Sherman_4.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_5.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_6.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_7.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_8.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Sherman_9.png" alt="GitHub" style="width: 1000px;"/></center>
### Follow-up note
Sherman explains that Knight subsequently used this as an example of lessons learned/how important statistics are in a job talk at Schrodinger and was hired.
### As a result of this, design hypothesis testing studies carefully
For hypothesis testing/model comparison, pick a null/alternate model which has SOME value
Be careful not to bias your study against certain methods -- i.e. if you construct a test set to have very diverse structures, this will have implications for what types of methods will do well
## Some slides from Tom Darden, OpenEye
<center><img src="images/Darden_1.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Darden_2.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Darden_3.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Darden_4.png" alt="GitHub" style="width: 1000px;"/></center>
<center><img src="images/Darden_5.png" alt="GitHub" style="width: 1000px;"/></center>
## A final reminder from John Chodera and Terry Stouch
<center><img src="images/Chodera_talk_1.png" alt="GitHub" style="width: 1000px;"/></center>
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import math
import json
import os
class cell_migration3:
def __init__(self, L ,W, H, N0, C0, Uc, Un, Dc, Dn, Qcb0, Qcd0, Qn, A0, dx, dt):
#W = 10 #width
#L = 850 #length
#H = 17 #height
L_ = 850
V = L_*H*W
M = 20 #number of tubes
L1= V/(M*W*H)
self.eps = 0.01
self.d1 = Dc/Dn
self.d2 = 0 #Un*L/Dc
self.d3 = Qn*C0*L**2/Dn
self.e1 = Uc*L/Dc
self.e2 = A0*N0/Dc
self.e3 = Qcb0*N0*C0*L**2/Dc
self.e4 = Qcd0*L**2/(Dc*N0)
self.l_ = L/L_ #L = L^
self.l1 = L1/L_
self.dx = dx
self.dt = dt
self.a = int((self.l_+self.l1)/dx) # end of the real + imaginary tube
self.b = int(1/dt) # number of time steps
self.e = int(self.l_/dx) # end of the real tube
#concentration of cell
self.c = pd.DataFrame(np.zeros([self.a+1, self.b+1]))
self.c.iloc[:,0] = 0
self.c.iloc[0,1:] = 1
#concentration of nutrient
self.n = pd.DataFrame(np.zeros([self.a+1, self.b+1]))
self.n.iloc[:int(1/dx),0] = 0
self.n.iloc[0,:] = 0
self.n.iloc[int(1/dx):,:] = 1
def f1(self,i):
f = self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2 - self.e2*self.dt/(4*self.dx**2) \
*(self.n.iloc[:,i].shift(-1) - self.n.iloc[:,i].shift(1))
return f
def g1(self,i):
g = (1+2*self.dt/self.dx**2 - self.e2*self.dt/self.dx**2 * \
(self.n.iloc[:,i].shift(-1) -2*self.n.iloc[:,i] + self.n.iloc[:,i].shift(1)) \
- self.e3*self.dt*self.n.iloc[:,i]*(1-self.c.iloc[:,i]) + self.e4*self.dt/(self.c.iloc[:,i]+self.eps))
return g
def k1(self,i):
k = (-self.e1*self.dt/(2*self.dx) -self.dt/self.dx**2 + self.e2*self.dt/(4*self.dx**2)\
*(self.n.iloc[:,i].shift(-1) - self.n.iloc[:,i].shift(1)))
return k
# x => 1
def f2(self,i):
f =self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2
return f
def g2(self,i):
f = 1 + 2*self.dt/self.dx**2 + self.e3*(1-self.c.iloc[self.e+1:,i]) + self.e4*self.dt/(1+self.eps)
return f
def k2(self,i):
f = -self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2
return f
def n_new(self,i):
phi = self.d3 * self.dx**2 * self.c.values[1:self.e+1,i] + 2
A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))
A[-1] = np.append(np.zeros(self.e-1),1)
return np.linalg.solve(A, np.append(np.zeros(self.e-1),1))
def n_new2(self,i):
phi = self.d3 * self.dx**2 * self.c + 2
A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))
A[-1] = np.append(np.zeros(self.e-1),1)
return np.linalg.solve(A, np.append(np.zeros(self.e-1),1))
def n_new3(self,i):
phi = self.d3 * self.dx**2 * self.c.values[1:self.e+1,i] + 2
A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))
A[-1] = np.append(np.zeros(self.e-1),1)
return A
def new_c(self,j):
f_diag = self.f1(j)
f_diag[self.e] = (self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2 - self.e2*self.dt/(4*self.dx**2)*(self.n.iloc[self.e+1,j] - self.n.iloc[self.e-1,j]))
f_diag[self.e+1:] = self.f2(j)
#g1
g_diag = self.g1(j)
g_diag[self.e] = (1+2*self.dt/self.dx**2 - self.e2*self.dt/self.dx**2\
*(self.n.iloc[self.e+1,j] - 2*self.n.iloc[self.e,j] + self.n.iloc[self.e-1,j]) \
- self.e3*self.dt*self.n.iloc[self.e,j]*(1-self.c.iloc[self.e,j]) + self.e4*self.dt/(self.n.iloc[self.e,j]+self.eps))
g_diag[self.e+1:] = self.g2(j)
g_diag[self.a+1] = 1
#k1
k_diag = self.k1(j).shift(1)
k_diag[self.e] = (-self.e1*self.dt/(2*self.dx) -self.dt/self.dx**2 + self.e2*self.dt/(4*self.dx**2)*(self.n.iloc[self.e+1,j] - self.n.iloc[self.e-1,j]))
k_diag[self.e+1:] = self.k2(j)
k_diag[self.a+1] = 0
c_df_test = pd.DataFrame(np.zeros(self.c.shape))
c_df_test = c_df_test + self.c.values
c_test = c_df_test.iloc[1:,j-1].values
c_test[0] = c_test[0] - self.k2(j)
c_test = np.append(c_test,0)
U = np.diag(g_diag.dropna()) + np.diag(k_diag.dropna(),-1) + np.diag(f_diag.dropna(),1)
U[self.a, self.a-2] = -1
return np.linalg.solve(U, c_test)[:-1]
def compute_all(self):
for cq in range(0,self.b):
self.n.iloc[1:self.e+1,cq+1] = self.n_new(cq)[:]
self.c.iloc[1:,cq+1] = self.new_c(cq)[:]
def compute_all_all(self):
self.compute_all()  # fills self.c and self.n in place
return self.c.values.sum()
def avg_channel(self):
return self.c.values[1:self.e,1:self.a].sum() / (self.e*(self.a))
def avg_entering(self):
return self.c.values[self.e,1:self.a].sum() / (self.a)
def plotting_conc(self,name):
fig_n = sns.lineplot(x = np.tile(np.arange(0,self.a+1),self.b+1), y = pd.melt(self.n).value, hue = np.repeat(np.arange(0,self.a+1),self.b+1),palette = "Blues")
fig_c = sns.lineplot(x = np.tile(np.arange(0,self.a+1),self.b+1), y = pd.melt(self.c).value, hue = np.repeat(np.arange(0,self.a+1),self.b+1),palette = "Blues")
plt.xlabel("x")
plt.ylabel("concentration")
plt.title("Cell & Nutrient Concentration")
fig_n.legend_.remove()
plt.plot(np.arange(self.a), np.zeros(self.a)+self.avg_channel(), linestyle='dashed')
plt.plot(np.arange(self.a), np.zeros(self.a)+self.avg_entering(), linestyle='-.')
#plt.text(self.a+self.b-9,self.avg_channel()-0.1, 'Avg # of Cells in a Channel')
#plt.text(self.a+self.b-9,self.avg_entering()-0.1, 'Avg # of Cells entering')
plt.savefig(name)
def get_n(self):
return self.n
def get_c(self):
return self.c
L = 100 #length
W = 10 #width
L_ = 850
H = 17 #height
# V = L*H*W
# Open question from the author: should V use L_ or L?
V = L_*H*W
M = 20 #number of tubes
N0 = 1.204 #mol/um^3
C0 = 5*10**-4 #cells/um^2
Uc = 2 #um/min
Un = 0
Dc = 1
Dn = 1.8 #um^2/min
Qcb0 = 1
Qcd0 = 1
Qn = 1
A0 = 1
d1 = Dc/Dn
d2 = Un*L/Dc # = 0
d3 = Qn*C0*L**2 / Dn
e1 = Uc*L/Dc
e2 = A0*N0/Dc
e3 = Qcb0*N0*C0*L**2/Dc
e4 = Qcd0*L**2/Dc/N0
L1 = V/(M*W*H)
l_ = L/L_
l1 = L1/L_
dx = 0.05
dt = 0.05
cm = cell_migration3(10000, W, H, N0, C0, Uc, Un, Dc, Dn, Qcb0, Qcd0, Qn, A0, dx, dt)
cm.compute_all()
cm.c.round(4)
cm.plotting_conc('hi')
```
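The implicit updates above (`n_new`, `new_c`) assemble tridiagonal matrices but solve them with dense `np.linalg.solve`, which scales as O(n³); for larger grids a banded solver is much cheaper. A minimal sketch (assuming SciPy is available) showing the two agree on a small tridiagonal system of the same shape:

```python
import numpy as np
from scipy.linalg import solve_banded

n = 200
main = np.full(n, -2.0)
off = np.ones(n - 1)

# Dense tridiagonal system, built the same way as in n_new/new_c
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.append(np.zeros(n - 1), 1.0)
x_dense = np.linalg.solve(A, rhs)

# Same system in banded storage: (1, 1) = one subdiagonal, one superdiagonal
ab = np.zeros((3, n))
ab[0, 1:] = off    # superdiagonal
ab[1, :] = main    # main diagonal
ab[2, :-1] = off   # subdiagonal
x_banded = solve_banded((1, 1), ab, rhs)
```

`solve_banded` runs in O(n) time and memory for tridiagonal systems, so it becomes the natural drop-in once `dx` is refined.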
```
import numpy as np
exp = np.exp
arange = np.arange
ln = np.log
from datetime import *
import matplotlib.pyplot as plt
from matplotlib import patches
# import plotly.plotly as py
# import plotly.graph_objs as go
from scipy.stats import norm
from scipy import interpolate as interp
pdf = norm.pdf
cdf = norm.cdf
ppf = norm.ppf
from scipy import stats
from scipy import special
erf = special.erf
import pandas as pd
# import palettable
import seaborn as sns
cp = sns.color_palette()
from lifelines import KaplanMeierFitter
from sklearn.metrics import brier_score_loss
from sklearn.linear_model import LogisticRegression
from sklearn import mixture
from sklearn import preprocessing
nsclc = pd.read_csv('nsclc_data.csv')
lc_df = pd.read_csv('lc_data.csv')
def create_kde(array, bandwidth=None):
""" calculating KDE and CDF using scipy """
if bandwidth is None:
bw = 'scott'
else:
bw = bandwidth
kde = stats.gaussian_kde(dataset=array,bw_method=bw)
num_test_points=200
x = np.linspace(0,np.max(array)*1.2,num_test_points)
kdens=kde.pdf(x)
cdf=np.zeros(shape=num_test_points)
for i in range(num_test_points):
cdf[i] = kde.integrate_box_1d(low=0,high=x[i])
return x,kdens,cdf
def calc_cdf(array,var,bandwidth=None):
if bandwidth is None:
bw = 1.2*array.std()*np.power(array.size,-1/5)
else:
bw = bandwidth
kde=stats.gaussian_kde(dataset=array,bw_method=bw)
return kde.integrate_box_1d(low=0,high=var)
```
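As a quick, self-contained sanity check of the KDE/CDF approach used in `create_kde` and `calc_cdf` (the sample values and bandwidth here are synthetic, purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(60, 10, 500)  # hypothetical dose-like values

# Same machinery as create_kde: Gaussian KDE, then integrate for the CDF
kde = stats.gaussian_kde(sample, bw_method='scott')
x = np.linspace(sample.min(), sample.max(), 200)
dens = kde.pdf(x)

# For a symmetric sample, the CDF at the sample mean should be close to 0.5
c = kde.integrate_box_1d(-np.inf, sample.mean())
print(c)
```

This is a useful check to run whenever the bandwidth is tuned by hand, as it is in `calc_cdf`'s rule-of-thumb default.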
## fig 1
```
from matplotlib import patches
from matplotlib import path
Path=path.Path
def bracket(xi, y, dy=.1, dx = .04,tail=.1):
yi = y - dy/2
xf = xi+dx
yf = yi+dy
vertices = [(xi,yi),(xf,yi),(xf,yf),(xi,yf)]+[(xf,y),(xf+tail,y)]
codes = [Path.MOVETO] + [Path.LINETO]*3 + [Path.MOVETO] + [Path.LINETO]
return Path(vertices,codes)
def hbracket(x, yi, dx=.1, dy = .04,tail=.1):
xi = x - dx/2
xf = xi+dx
yf = yi-dy
vertices = [(xi,yi),(xi,yf),(xf,yf),(xf,yi)]+[(x,yf),(x,yf-tail)]
codes = [Path.MOVETO] + [Path.LINETO]*3 + [Path.MOVETO] + [Path.LINETO]
return Path(vertices,codes)
def double_arrow(x,y,length,orient,endlength=.04,r=10):
l=length
if orient == 'horz':
x1= x - l/2
x2 = x + l/2
el = endlength/2
vertices = [(x1,y),(x2,y)]+[(x1+l/r,y+el),(x1,y),(x1+l/r,y-el)]+[(x2-l/r,y+el),(x2,y),(x2-l/r,y-el)]
else:
y1= y - l/2
y2 = y + l/2
el = endlength/2
vertices = [(x,y1),(x,y2)]+[(x-el,y1+l/r),(x,y1),(x+el,y1+l/r)]+[(x+el,y2-l/r),(x,y2),(x-el,y2-l/r)]
codes = [Path.MOVETO,Path.LINETO]+[Path.MOVETO]+[Path.LINETO]*2+[Path.MOVETO]+[Path.LINETO]*2
return Path(vertices,codes)
div_cmap = sns.light_palette((0,.5,.8),n_colors=20)#as_cmap=True)
#sns.palplot(div_cmap, size = .8)
colors = [(0,.5,.8),(.98,.98,.98),(.7,.1,.1)]
# sns.palplot(sns.blend_palette(colors,n_colors=20))
colmap=sns.blend_palette(colors,as_cmap=True)
fig,axes = plt.subplots(nrows=1,ncols=3,figsize=(18,6))
axes[0].set_title('(A)', loc='left')
axes[1].set_title('(B)', loc='left')
axes[2].set_title('(C)', loc='left')
ax=axes[0]
r = nsclc.rsi
d = 2
beta = 0.05
x, k, c = create_kde(r)
ax.plot(x,k)
bins=np.arange(0,1,.04)
hist = np.histogram(r,bins=bins,density=True)
bar_width = (hist[1][1]-hist[1][0])*.7
ax.bar(hist[1][:-1],hist[0],width=bar_width,alpha=.6,color=(.6,.6,.6))
ax.set_yticks([])
"""-----------------------------------------------------------------------------------------------"""
ax = axes[1]
x = lc_df.new_dose_5070.values
x.sort()
range60 = range(1,61)
x2 = lc_df.new_dose.values
x2.sort()
dose_5070 = lc_df.new_dose_5070.sort_values()
full70 = np.full(len(x),70)
ax.scatter(range60,x2, s = 80, c=x2,cmap=colmap,edgecolors='k',zorder=10) #label = 'RxRSI > 70')
ax.scatter(range60,x,edgecolor = 'k',facecolor='white', marker = 'o', s = 60, zorder = 5, label = 'RxRSI scaled\nto 50-70')
ax.hlines(y = [50,70],xmin = [-2,-2],xmax=[62,62], color = 'k',lw=1.5,zorder=0)
ax.fill_between([-2,62],70,50, color = (.95,.95,.95),alpha=.2)
j = np.where(x2<50)[0][-1]
k = np.where(x2>70)[0][0]
ax.vlines(range60[k:],ymin = full70[k:], ymax = x2[k:], lw = .5, linestyle = '--')
ax.vlines(x = range60[:j], ymin = x2[:j], ymax = np.full(j,50), lw = .5, linestyle = '--')
ax.set_xticklabels('')
ax.set_ylim((10,100))
ax.set_xlim(-1,61)
ax.set_ylabel('RxRSI (Gy)')
ax.set_xlabel('Patient IDs')
ax.set_xticks([])
"""-------------------------------------------------------------------------------"""
ax=axes[2]
r = nsclc.rsi
d = 2
beta = 0.05
# for SF2 alpha
n = 1
alpha_tcc = np.log(r)/(-n*d) - beta*d
rxdose_tcc = 33/(alpha_tcc+beta*d)
rxdose_tcc=rxdose_tcc.values
""" plotting histograms """
binlist=list(np.arange(0,150,2))+[300]
""" <60 range """
xdata = rxdose_tcc[np.where(rxdose_tcc<60)]
wts = np.full(len(xdata),.0002)
ax.hist(xdata,bins = binlist,
alpha=.6,#ec = 'k',
color=cp[0],
weights = wts)
""" 60-74 range """
xdata = rxdose_tcc[np.where((rxdose_tcc>60)&(rxdose_tcc<74))]
wts = np.full(len(xdata),.0002)
ax.hist(xdata,bins = binlist,
alpha=.8,#ec = 'k',
color=(.4,.4,.4),
weights = wts,zorder=5)
""" >74 range """
xdata = rxdose_tcc[np.where((rxdose_tcc>74))] #&(rxdose_tcc<80))]
wts = np.full(len(xdata),.0002)
ax.hist(xdata,bins = binlist,
alpha=.7,#ec = 'k',
color=cp[3],
weights = wts)
rxdose_kde = create_kde(rxdose_tcc,bandwidth=.28)
ax.plot(rxdose_kde[0], rxdose_kde[1] , c=(.2,.2,.3),lw=1,ls='--',label = 'KDE')
ax.set_xlim(-2,130)
ax.set_yticks([])
ax.set_xlabel('RxRSI for TCC Lung')
fig.subplots_adjust(left=.06, right=.95, wspace=.25)
```
[View in Colaboratory](https://colab.research.google.com/github/tomwilde/100DaysOfMLCode/blob/master/2_numpy_linearRegression_with_CostFn.ipynb)
```
!pip install -U -q PyDrive
import numpy as np
import matplotlib.pyplot as plt
import pandas
import io
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# from: https://ml-cheatsheet.readthedocs.io/en/latest/linear_regression.html#cost-function
#
# We need a cost fn and its derivative...
# Download a file based on its file ID.
#
# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz
file_id = '1_d2opSoZgMsSeoQUjtOcRQj5l0zO-Upi'
downloaded = drive.CreateFile({'id': file_id})
#print('Downloaded content "{}"'.format(downloaded.GetContentString()))
dataset = pandas.read_csv(io.StringIO(downloaded.GetContentString())).values
def cost_function(X, y, weight, bias):
n = len(X)
total_error = 0.0
for i in range(n):
total_error += (y[i] - (weight*X[i] + bias))**2
return total_error / n
def update_weights(X, y, weight, bias, alpha):
weight_deriv = 0
bias_deriv = 0
n = len(X)
for i in range(n):
# Calculate partial derivatives
# -2x(y - (mx + b))
weight_deriv += -2*X[i] * (y[i] - (weight * X[i] + bias))
# -2(y - (mx + b))
bias_deriv += -2*(y[i] - (weight * X[i] + bias))
# We subtract because the derivatives point in direction of steepest ascent
weight -= (weight_deriv / n) * alpha
bias -= (bias_deriv / n) * alpha
return weight, bias
def train(X, y, weight, bias, alpha, iters):
cost_history = []
for i in range(iters):
weight,bias = update_weights(X, y, weight, bias, alpha)
#Calculate cost for auditing purposes
cost = cost_function(X, y, weight, bias)
# cost_history.append(cost)
# Log Progress
if i % 10 == 0:
print("iter: " + str(i) + " weight: " + str(weight) + " bias: " + str(bias) + " cost: " + str(cost))
return weight, bias #, cost_history
# work out
y = dataset[:,4].reshape(200,1)
X = dataset[:,1].reshape(200,1)
m = 0
c = 0
alpha = 0.1
iters = 100
# normalise the data
y = y/np.linalg.norm(y, ord=np.inf, axis=0, keepdims=True)
X = X/np.linalg.norm(X, ord=np.inf, axis=0, keepdims=True)
weight, bias = train(X, y, m, c, alpha, iters)
_ = plt.plot(X,y, 'o', [0, 1], [bias, weight + bias], '-')
```
# A demo of XYZ and RDKitMol
There is no easy way to convert xyz to an RDKit Mol/RWMol. Here RDKitMol demonstrates one possibility, using Open Babel or the method from Jensen et al. [1] as the molecule perception backend.
[1] https://github.com/jensengroup/xyz2mol.
```
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath('')))
from rdmc.mol import RDKitMol
```
### 1. An example of xyz str block
```
######################################
# INPUT
xyz="""14
C -1.77596 0.55032 -0.86182
C -1.86964 0.09038 -2.31577
H -0.88733 1.17355 -0.71816
H -1.70996 -0.29898 -0.17103
O -2.90695 1.36613 -0.53334
C -0.58005 -0.57548 -2.76940
H -0.35617 -1.45641 -2.15753
H 0.26635 0.11565 -2.71288
H -0.67469 -0.92675 -3.80265
O -2.92111 -0.86791 -2.44871
H -2.10410 0.93662 -2.97107
O -3.87923 0.48257 0.09884
H -4.43402 0.34141 -0.69232
O -4.16782 -0.23433 -2.64382
"""
xyz_wo_header = """O 2.136128 0.058786 -0.999372
C -1.347448 0.039725 0.510465
C 0.116046 -0.220125 0.294405
C 0.810093 0.253091 -0.73937
H -1.530204 0.552623 1.461378
H -1.761309 0.662825 -0.286624
H -1.923334 -0.892154 0.536088
H 0.627132 -0.833978 1.035748
H 0.359144 0.869454 -1.510183
H 2.513751 -0.490247 -0.302535"""
######################################
```
### 2. Use pybel to generate an OBMol from xyz
With the pybel backend, the `header` argument indicates whether the string includes the atom-count and title lines.
```
rdkitmol = RDKitMol.FromXYZ(xyz, backend='openbabel', header=True)
rdkitmol
```
Use the `header` argument correctly; otherwise molecule perception can be problematic.
```
rdkitmol = RDKitMol.FromXYZ(xyz_wo_header, backend='openbabel', header=False)
rdkitmol
```
Using the `jensen` backend. In most cases, Jensen's method returns the same molecule as the `pybel` backend.
```
rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen', header=True)
rdkitmol
```
Some options for the Jensen et al. method are listed here. The nomenclature is kept as in the original API.
```
rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen',
header=True,
allow_charged_fragments=False, # radical => False
use_graph=False, # accelerate for larger molecule but needs networkx as backend
use_huckel=True,
embed_chiral=True)
rdkitmol
```
### 3. Check the xyz of rdkitmol conformer
```
rdkitmol.GetConformer().GetPositions()
```
### 4. Export xyz
```
print(rdkitmol.ToXYZ(header=False))
```
# Dimensionality Reduction with the Shogun Machine Learning Toolbox
#### *By Sergey Lisitsyn ([lisitsyn](https://github.com/lisitsyn)) and Fernando J. Iglesias Garcia ([iglesias](https://github.com/iglesias)).*
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised learning</a> using the suite of dimensionality reduction algorithms available in Shogun. Shogun provides access to all these algorithms using [Tapkee](http://tapkee.lisitsyn.me/), a C++ library specialized in <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensionality reduction</a>.
## Hands-on introduction to dimension reduction
First of all, let us start right away by showing what the purpose of dimensionality reduction actually is. To this end, we will begin by creating a function that provides us with some data:
```
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
def generate_data(curve_type, num_points=1000):
if curve_type=='swissroll':
tt = np.array((3*np.pi/2)*(1+2*np.random.rand(num_points)))
height = np.array((np.random.rand(num_points)-0.5))
X = np.array([tt*np.cos(tt), 10*height, tt*np.sin(tt)])
return X,tt
if curve_type=='scurve':
tt = np.array((3*np.pi*(np.random.rand(num_points)-0.5)))
height = np.array((np.random.rand(num_points)-0.5))
X = np.array([np.sin(tt), 10*height, np.sign(tt)*(np.cos(tt)-1)])
return X,tt
if curve_type=='helix':
tt = np.linspace(1, num_points, num_points).T / num_points
tt = tt*2*np.pi
X = np.r_[[(2+np.cos(8*tt))*np.cos(tt)],
[(2+np.cos(8*tt))*np.sin(tt)],
[np.sin(8*tt)]]
return X,tt
```
The function above can be used to generate three-dimensional datasets with the shape of a [Swiss roll](http://en.wikipedia.org/wiki/Swiss_roll), the letter S, or a helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process, as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep the important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.
Just to start, let's pick an algorithm and one of the data sets; for example, let's see what embedding of the Swiss roll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds the embedding as the solution of the following optimization problem:
$$
\min_{x'_1, x'_2, \dots} \sum_i \sum_j \| d'(x'_i, x'_j) - d(x_i, x_j)\|^2,
$$
with defined $x_1, x_2, \dots \in X~~$ and unknown variables $x'_1, x'_2, \dots \in X'~~$ while $\text{dim}(X') < \text{dim}(X)~~~$,
$d: X \times X \to \mathbb{R}~~$ and $d': X' \times X' \to \mathbb{R}~~$ are defined as arbitrary distance functions (for example Euclidean).
In less mathematical terms, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as possible. The Isomap algorithm changes one small detail, the distance: instead of using local pairwise relationships, it takes a global factor into account via the shortest path on the neighborhood graph (the so-called geodesic distance). The neighborhood graph is defined as a graph with datapoints as nodes and weighted edges (with weight equal to the distance between points). The edge between points $x_i~$ and $x_j~$ exists if and only if $x_j~$ is among the $k~$ nearest neighbors of $x_i$. Later we will see that this 'global factor' changes the game for the swissroll dataset.
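As a concrete sketch of the geodesic distance just described, the neighborhood graph can be built from the $k$ nearest neighbours and the shortest paths computed on it. The `geodesic_distances` helper below is illustrative only, not part of Shogun or Tapkee:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, k=2):
    """Approximate geodesic distances as shortest paths on a k-nearest-neighbour graph."""
    n = X.shape[0]
    # dense matrix of pairwise Euclidean distances
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    W = np.full((n, n), np.inf)  # inf (and 0) entries mean "no edge" for csgraph
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]  # the k nearest neighbours of point i
        W[i, nn] = D[i, nn]
        W[nn, i] = D[nn, i]             # symmetrise: the graph is undirected
    np.fill_diagonal(W, 0.0)
    return shortest_path(W, method='D')  # Dijkstra over the neighbourhood graph

# Five points along an "L" shape: the Euclidean distance from the first point
# to the last is sqrt(8) ~ 2.83, but the geodesic must follow the shape, giving 4.
pts = np.array([[0., 0.], [1., 0.], [2., 0.], [2., 1.], [2., 2.]])
G = geodesic_distances(pts, k=2)
```

This is exactly the "global factor": points that are close in the ambient space but far apart along the data manifold get a large geodesic distance.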
However, first we prepare a small function to plot any of the original data sets together with its embedding.
```
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def plot(data, embedded_data, colors='m'):
fig = plt.figure()
fig.set_facecolor('white')
ax = fig.add_subplot(121,projection='3d')
ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
ax = fig.add_subplot(122)
ax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
plt.show()
import shogun as sg
# wrap data into Shogun features
data, colors = generate_data('swissroll')
feats = sg.create_features(data)
# create instance of Isomap converter and set number of neighbours used in kNN search to 20
isomap = sg.create_transformer('Isomap', target_dim=2, k=20)
# create instance of Multidimensional Scaling converter and configure it
mds = sg.create_transformer('MultidimensionalScaling', target_dim=2)
# embed Swiss roll data
embedded_data_mds = mds.transform(feats).get('feature_matrix')
embedded_data_isomap = isomap.transform(feats).get('feature_matrix')
plot(data, embedded_data_mds, colors)
plot(data, embedded_data_isomap, colors)
```
As can be seen in the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data while reducing the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the *intrinsic* dimension of the input data. Although the original data lives in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the position along the roll (the polar angle, which also fixes the distance from the centre) and the height.
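The point about intrinsic dimension can be checked directly against the `generate_data` function above: each three-dimensional Swiss roll point is a deterministic function of only two numbers, so those two numbers per point carry all of the information. This is a small sanity check, not part of the embedding pipeline:

```python
import numpy as np

rng = np.random.RandomState(0)
tt = (3 * np.pi / 2) * (1 + 2 * rng.rand(500))  # position along the roll
height = rng.rand(500) - 0.5                    # position across the roll

def roll(tt, height):
    # the same mapping used by generate_data('swissroll') above
    return np.array([tt * np.cos(tt), 10 * height, tt * np.sin(tt)])

X = roll(tt, height)        # the "observed" three-dimensional dataset
X_again = roll(tt, height)  # rebuilt from only the two parameters per point
```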
Finally, we use yet another method, Stochastic Proximity Embedding (SPE) to embed the helix:
```
# wrap data into Shogun features
data, colors = generate_data('helix')
features = sg.create_features(data)
# create MDS instance
converter = sg.create_transformer('StochasticProximityEmbedding', target_dim=2)
# embed helix data
embedded_features = converter.transform(features)
embedded_data = embedded_features.get('feature_matrix')
plot(data, embedded_data, colors)
```
## References
- Lisitsyn, S., Widmer, C., Iglesias Garcia, F. J. Tapkee: An Efficient Dimension Reduction Library. ([Link to paper in JMLR](http://jmlr.org/papers/v14/lisitsyn13a.html#!).)
- Tenenbaum, J. B., de Silva, V. and Langford, J. B. A Global Geometric Framework for Nonlinear Dimensionality Reduction. ([Link to Isomap's website](http://isomap.stanford.edu/).)
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Decision Trees and Random Forests in Python
This is the code for the lecture video which goes over tree methods in Python. Reference the video lecture for the full explanation of the code!
I also wrote a [blog post](https://medium.com/@josemarcialportilla/enchanted-random-forest-b08d418cb411#.hh7n1co54) explaining the general logic of decision trees and random forests which you can check out.
## Import Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Get the Data
```
df = pd.read_csv('kyphosis.csv')
df.head()
```
## EDA
We'll just check out a simple pairplot for this small dataset.
```
sns.pairplot(df,hue='Kyphosis',palette='Set1')
```
## Train Test Split
Let's split up the data into a training set and a test set!
```
from sklearn.model_selection import train_test_split
X = df.drop('Kyphosis',axis=1)
y = df['Kyphosis']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
```
## Decision Trees
We'll start just by training a single decision tree.
```
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
```
## Prediction and Evaluation
Let's evaluate our decision tree.
```
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))
```
## Tree Visualization
Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it:
```
from IPython.display import Image
from io import StringIO
from sklearn.tree import export_graphviz
import pydot
features = list(df.columns[1:])
features
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data,feature_names=features,filled=True,rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
```
## Random Forests
Now let's compare the decision tree model to a random forest.
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
print(confusion_matrix(y_test,rfc_pred))
print(classification_report(y_test,rfc_pred))
```
# Great Job!
# NOTE
# IMPORTS
```
# Architecture
from keras import layers
from keras import models
from keras.preprocessing.image import load_img
from keras import backend as K
from keras.utils import plot_model
# Automatic Downloads
import numpy as np
import requests
import time
import os
# Image labeling
import cv2
import imutils
from imutils import paths
```
# HELPER
# DATASET
## Download Captcha
```
url = 'https://www.e-zpassny.com/vector/jcaptcha.do'
total = 0
num_images = 2
output_path = 'C:/Users/Tajr/Desktop/Data/RadonPlus/RadonTechnology/Dev/Case Studi/Captcha/output'
# Loop over the number of images to download
for i in np.arange(0, num_images):
try:
# Grab a new captcha image
r = requests.get(url, timeout=60)
# save the image to disk
p = os.path.sep.join([output_path, '{}.do'.format(str(total).zfill(5))])
f = open(p, 'wb')
f.write(r.content)
f.close()
# update the counter
print('[INFO] downloaded: {}'.format(p))
total += 1
except:
print('[INFO] error downloading image...')
# introduce a small time sleep
time.sleep(0.1)
# r = requests.get(url, timeout=60)
# print(r.content)
```
## Labeling
```
image_paths = list(paths.list_images(output_path))
counts = {}
for(i, image_path) in enumerate(image_paths):
print('[INFO] processing image {}/{}'.format(str(i + 1), len(image_paths)))
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.copyMakeBorder(gray, 8, 8, 8, 8, cv2.BORDER_REPLICATE)
cv2.imshow('image', gray)
cv2.waitKey(0)
```
# ARCHITECTURE
## Variables
```
width = 28
height = 28
depth = 1
classes = 10
input_shape = (width, height, depth)
if K.image_data_format() == 'channels_first':
input_shape = (depth, width, height)
```
## Model Definition
```
model = models.Sequential()
model.add(layers.Conv2D(20, (5, 5) ,padding='same', input_shape=input_shape))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(layers.Conv2D(50, (5, 5), padding='same'))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(500))
model.add(layers.Activation('relu'))
model.add(layers.Dense(classes))
model.add(layers.Activation('softmax'))
model.summary()
```
## Model Visualization
```
serialize_to = 'serialized/model_architecture.png'
plot_model(model, to_file=serialize_to, show_shapes=True)
```
# COMPILATION
# TRAINING
# PLOTTING
# EVALUATION
# PREDICTIONS
# Feature List View
## Usage
```
import sys, json, math
from mlvis import FeatureListView
from random import uniform, gauss
from IPython.display import display
if sys.version_info[0] < 3:
import urllib2 as url
else:
import urllib.request as url
def generate_random_steps(k):
randoms = [uniform(0, 1) / 2 for i in range(0, k)]
steps = [0] * (k - 1)
t = 0
for i in range(0, k - 1):
steps[i] = t + (1 - t) * randoms[i]
t = steps[i]
return steps + [1]
def generate_categorical_feature(states):
size = len(states)
distro_a = [uniform(0, 1) for i in range(0, size)]
distro_b = [uniform(0, 1) for i in range(0, size)]
return {
'name': 'dummy-categorical-feature',
'type': 'categorical',
'domain': list(states.values()),
'distributions': [distro_a, distro_b],
'distributionNormalized': [distro_a, distro_b],
'colors': ['#47B274', '#6F5AA7'],
'divergence': uniform(0, 1)
}
def generate_numerical_feature():
domain_size = 100
distro_a = [uniform(0, 1) for i in range(0, domain_size)]
distro_b = [uniform(0, 1) for i in range(0, domain_size)]
return {
'name': 'dummy-categorical-numerical',
'type': 'numerical',
'domain': generate_random_steps(domain_size),
'distributions': [distro_a, distro_b],
'distributionNormalized': [distro_a, distro_b],
'colors': ['#47B274', '#6F5AA7'],
'divergence': uniform(0, 1)
}
def generate_random_categorical_values(states):
k = 10000
values = [None] * k
domain = list(states.values())
size = len(states)
for i in range(0, k):
d = int(math.floor(uniform(0, 1) * size))
values[i] = domain[d]
return values
def generate_raw_categorical_feature(states):
return {
'name': 'dummy-raw-categorical-feature',
'type': 'categorical',
'values': [generate_random_categorical_values(states),
generate_random_categorical_values(states)]
}
def generate_raw_numerical_feature():
return {
'name': 'dummy-raw-numerical-feature',
'type': 'numerical',
'values': [
[gauss(2, 0.5) for i in range(0, 2500)],
[gauss(0, 1) for i in range(0, 7500)]
]
}
# load the US states data
PREFIX = 'https://d1a3f4spazzrp4.cloudfront.net/mlvis/'
response = url.urlopen(PREFIX + 'jupyter/states.json')
states = json.loads(response.read().decode())
# Randomly generate the data for the feature list view
categorical_feature = generate_categorical_feature(states)
raw_categorical_feature = generate_raw_categorical_feature(states)
numerical_feature = generate_numerical_feature()
raw_numerical_feature = generate_raw_numerical_feature()
data = [categorical_feature, raw_categorical_feature, numerical_feature, raw_numerical_feature]
feature_list_view = FeatureListView(props={"data": data, "width": 1000})
display(feature_list_view)
```
```
from selenium import webdriver
#import urllib you can use urllib to send web request to websites and get back html text as response
import pandas as pd
from bs4 import BeautifulSoup
from selenium.webdriver.common.keys import Keys
from lxml import html
import numpy
# import dependencies
browser = webdriver.Firefox() #I only tested in firefox
browser.get('http://costcotravel.com/Rental-Cars')
browser.implicitly_wait(5)#wait for webpage download
browser.find_element_by_id('pickupLocationTextWidget').send_keys("PHX");
browser.find_element_by_css_selector('.sayt-result').click()
browser.find_element_by_id("pickupDateWidget").send_keys('08/27/2016')#you can't send it directly, need to clear first
browser.find_element_by_id("pickupDateWidget").clear()
browser.find_element_by_id("pickupDateWidget").send_keys('08/27/2016')
browser.find_element_by_id("dropoffDateWidget").clear()
browser.find_element_by_id("dropoffDateWidget").send_keys('08/31/2016',Keys.RETURN)
browser.find_element_by_css_selector('#pickupTimeWidget option[value="03:00 PM"]').click() #select time
browser.find_element_by_css_selector('#dropoffTimeWidget option[value="03:00 PM"]').click()
browser.find_element_by_link_text('SEARCH').click() #click the red button !!
n = browser.page_source #grab the page source
```
The following code is the same as before, but you can send the commands all in one go.
However, there are implicit waits for the driver so it can make AJAX requests and render the page elements.
Also, you can use the find_element_by_xpath method.
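Under the hood, an implicit wait is essentially a polling loop. The library-free sketch below illustrates the idea; the `fake_element_present` callable is a hypothetical stand-in for an element lookup, not Selenium's actual implementation:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Toy stand-in for "is the element rendered yet?": succeeds on the third poll.
state = {"tries": 0}
def fake_element_present():
    state["tries"] += 1
    return state["tries"] >= 3

assert wait_until(fake_element_present, timeout=2.0, poll=0.01)
```

Selenium's `WebDriverWait` offers the same pattern with richer expected conditions, which is generally preferable to fixed `time.sleep` calls.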
```
# browser = webdriver.Firefox() #I only tested in firefox
# browser.get('http://costcotravel.com/Rental-Cars')
# browser.implicitly_wait(5)#wait for webpage download
# browser.find_element_by_id('pickupLocationTextWidget').send_keys("PHX");
# browser.implicitly_wait(5) #wait for the airport suggestion box to show
# browser.find_element_by_xpath('//li[@class="sayt-result"]').click()
# #click the airport suggestion box
# browser.find_element_by_xpath('//input[@id="pickupDateWidget"]').send_keys('08/27/2016')
# browser.find_element_by_xpath('//input[@id="dropoffDateWidget"]').send_keys('08/30/2016',Keys.RETURN)
# browser.find_element_by_xpath('//select[@id="pickupTimeWidget"]/option[@value="09:00 AM"]').click()
# browser.find_element_by_xpath('//select[@id="dropoffTimeWidget"]/option[@value="05:00 PM"]').click()
# browser.implicitly_wait(5) #wait for the clicks to be completed
# browser.find_element_by_link_text('SEARCH').click()
# #click the search box
# time.sleep(8) #wait for firefox to download and render the page
# n = browser.page_source #grab the html source code
type(n) #the site use unicode
soup = BeautifulSoup(n,'lxml') #use BeautifulSoup to parse the source
print "--------------first 1000 characters:--------------\n"
print soup.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print soup.prettify()[-1000:]
table = soup.find('div',{'class':'rentalCarTableDetails'}) #find the table
print "--------------first 1000 characters:--------------\n"
print table.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print table.prettify()[-1000:]
tr = table.select('tr') #let's look at one of the row
type(tr)
#lets look at first three row
for i in tr[0:3]:
print i.prettify()
print "-----------------------------------"
```
Let's play with one of the rows.
```
row = tr[3]
row.find('th',{'class':'tar'}).text.encode('utf-8')
row
row.contents[4].text #1. this is unicode, 2. the dollar sign is in the way
'Car' in 'Econ Car' #use this string logic to filter out unwanted data
rows = [i for i in tr if (('Price' not in i.contents[0].text and 'Fees' not in i.contents[0].text and 'Location' not in i.contents[0].text and i.contents[0].text !='') and len(i.contents[0].text)<30)]
# use this crazy list comprehension to get the data we want
#1. don't want the text 'Price' in the first column
#2. don't want the text 'Fee' in the first column
#3. don't want the text 'Location' in the first column
#4. the text length of first column must be less than 30 characters long
rows[0].contents[0].text #just exploring here...
rows[0].contents[4].text #need to get rid of the $....
rows[3].contents[0].text #need to make it utf-8
#process the data
prices = {}
for i in rows:
#print the 1st column text
print i.contents[0].text.encode('utf-8')
prices[i.contents[0].text.encode('utf-8')] = [i.contents[1].text.encode('utf-8'),i.contents[2].text.encode('utf-8'), i.contents[3].text.encode('utf-8'),i.contents[4].text.encode('utf-8')]
prices
iteritems = prices.iteritems()
#call .iteritems() on a dictionary will give you a generator which you can iter over
iteritems.next() #run me five times
for name, priceList in prices.iteritems():
newPriceList = []
for i in priceList:
newPriceList.append(i.replace('$',''))
prices[name] = newPriceList
prices
data = pd.DataFrame.from_dict(prices, orient='index') #get a pandas DataFrame from the prices dictionary
data
data = data.replace('Not Available', numpy.nan) #replace the 'Not Available' data point to numpy.nan
data = data.apply(pd.to_numeric, errors='coerce') #cast to numeric data
data
data.columns= ['Alamo','Avis','Budget','Enterprise'] #set column names
data
data.notnull() #check for missing data
data.min(axis=1, skipna=True) #look at the cheapest car in each class
```
From this point on, you can set up to run every night and email yourself results etc.
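For the emailing part, one minimal Python 3 sketch formats the scraped price dictionary into a message; the addresses and SMTP host below are placeholders, and nothing is actually sent here:

```python
from email.message import EmailMessage

def build_price_report(prices, sender, recipient):
    """Format the scraped price dictionary into an email message (nothing is sent here)."""
    lines = ["%s: %s" % (car, row) for car, row in sorted(prices.items())]
    msg = EmailMessage()
    msg["Subject"] = "Daily rental car prices"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("\n".join(lines))
    return msg

msg = build_price_report({"Econ Car": ["59.99", "62.00", "61.50", "60.00"]},
                         "me@example.com", "me@example.com")
# To actually send it: smtplib.SMTP("smtp.example.com").send_message(msg)
```

Pair this with cron or the Windows Task Scheduler to run the scrape-and-mail script nightly.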
```
# General purpose libraries
import boto3
import copy
import csv
import datetime
import json
import numpy as np
import pandas as pd
import s3fs
from collections import defaultdict
import time
import re
import random
from sentence_transformers import SentenceTransformer
import sentencepiece
from scipy.spatial import distance
from json import JSONEncoder
import sys
sys.path.append("/Users/dafirebanks/Projects/policy-data-analyzer/")
sys.path.append("C:/Users/jordi/Documents/GitHub/policy-data-analyzer/")
from tasks.data_loading.src.utils import *
```
### 1. Set up AWS
```
def aws_credentials_from_file(f_name):
with open(f_name, "r") as f:
creds = json.load(f)
return creds["aws"]["id"], creds["aws"]["secret"]
def aws_credentials(path, filename):
file = path + filename
with open(file, 'r') as dict:
key_dict = json.load(dict)
for key in key_dict:
KEY = key
SECRET = key_dict[key]
return KEY, SECRET
```
### 2. Optimized full loop
```
def aws_credentials(path, filename):
file = path + filename
with open(file, 'r') as dict:
key_dict = json.load(dict)
for key in key_dict:
KEY = key
SECRET = key_dict[key]
return KEY, SECRET
def aws_credentials_from_file(f_name):
with open(f_name, "r") as f:
creds = json.load(f)
return creds["aws"]["id"], creds["aws"]["secret"]
def load_all_sentences(language, s3, bucket_name, init_doc, end_doc):
policy_dict = {}
sents_folder = f"{language}_documents/sentences"
for i, obj in enumerate(s3.Bucket(bucket_name).objects.filter(Prefix=f"{sents_folder}/")):
if not obj.key.endswith("/") and init_doc <= i < end_doc:
serializedObject = obj.get()['Body'].read()
policy_dict = {**policy_dict, **json.loads(serializedObject)}
return labeled_sentences_from_dataset(policy_dict)
def save_results_as_separate_csv(results_dictionary, queries_dictionary, init_doc, results_limit, aws_id, aws_secret):
path = "s3://wri-nlp-policy/english_documents/assisted_labeling"
col_headers = ["sentence_id", "similarity_score", "text"]
for i, query in enumerate(results_dictionary.keys()):
filename = f"{path}/query_{queries_dictionary[query]}_{i}_results_{init_doc}.csv"
pd.DataFrame(results_dictionary[query], columns=col_headers).head(results_limit).to_csv(filename, storage_options={"key": aws_id, "secret": aws_secret})
def labeled_sentences_from_dataset(dataset):
sentence_tags_dict = {}
for document in dataset.values():
sentence_tags_dict.update(document['sentences'])
return sentence_tags_dict
# Set up AWS
credentials_file = '/Users/dafirebanks/Documents/credentials.json'
aws_id, aws_secret = aws_credentials_from_file(credentials_file)
region = 'us-east-1'
s3 = boto3.resource(
service_name = 's3',
region_name = region,
aws_access_key_id = aws_id,
aws_secret_access_key = aws_secret
)
path = "C:/Users/jordi/Documents/claus/"
filename = "AWS_S3_keys_wri.json"
aws_id, aws_secret = aws_credentials(path, filename)
region = 'us-east-1'
bucket = 'wri-nlp-policy'
s3 = boto3.resource(
service_name = 's3',
region_name = region,
aws_access_key_id = aws_id,
aws_secret_access_key = aws_secret
)
# Define params
init_at_doc = 13136
end_at_doc = 14778
similarity_threshold = 0
search_results_limit = 500
language = "english"
bucket_name = 'wri-nlp-policy'
transformer_name = 'xlm-r-bert-base-nli-stsb-mean-tokens'
model = SentenceTransformer(transformer_name)
# Get all sentence documents
sentences = load_all_sentences(language, s3, bucket_name, init_at_doc, end_at_doc )
# Define queries
path = "../../input/"
filename = "English_queries.xlsx"
file = path + filename
df = pd.read_excel(file, engine='openpyxl', sheet_name = "Hoja1", usecols = "A:C")
queries = {}
for index, row in df.iterrows():
queries[row['Query sentence']] = row['Policy instrument']
# Calculate and store query embeddings
query_embeddings = dict(zip(queries, [model.encode(query.lower(), show_progress_bar=False) for query in queries]))
# For each sentence, calculate its embedding, and store the similarity
query_similarities = defaultdict(list)
i = 0
for sentence_id, sentence in sentences.items():
sentence_embedding = model.encode(sentence['text'].lower(), show_progress_bar=False)
i += 1
if i % 100 == 0:
print(i)
for query_text, query_embedding in query_embeddings.items():
score = round(1 - distance.cosine(sentence_embedding, query_embedding), 4)
if score > similarity_threshold:
query_similarities[query_text].append([sentence_id, score, sentences[sentence_id]['text']])
# Sort results by similarity score
for query in query_similarities:
query_similarities[query] = sorted(query_similarities[query], key = lambda x : x[1], reverse=True)
# Store results
save_results_as_separate_csv(query_similarities, queries, init_at_doc, search_results_limit, aws_id, aws_secret)
```
# Investigating ocean models skill for sea surface height with IOOS catalog and Python
The IOOS [catalog](https://ioos.noaa.gov/data/catalog) offers access to hundreds of datasets and data access services provided by the 11 regional associations.
In the past we demonstrated how to tap into those datasets to obtain sea [surface temperature data from observations](http://ioos.github.io/notebooks_demos/notebooks/2016-12-19-exploring_csw),
[coastal velocity from high frequency radar data](http://ioos.github.io/notebooks_demos/notebooks/2017-12-15-finding_HFRadar_currents),
and a simple model vs observation visualization of temperatures for the [Boston Light Swim competition](http://ioos.github.io/notebooks_demos/notebooks/2016-12-22-boston_light_swim).
In this notebook we'll demonstrate a step-by-step workflow on how to ask the catalog for a specific variable, extract only the model data, and match the nearest model grid point to an observation. The goal is to create an automated skill score for quick assessment of ocean numerical models.
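A "skill score" can mean several metrics; as a sketch, assuming the model series has already been interpolated to the observation times, a helper like the illustrative one below (not part of `ioos_tools`) computes the usual point-wise statistics:

```python
import numpy as np

def skill_metrics(model, obs):
    """Point-wise skill metrics for matched model/observation series."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    err = model - obs
    return {
        "bias": err.mean(),                     # systematic over/under-prediction
        "rmse": np.sqrt((err ** 2).mean()),     # overall error magnitude
        "corr": np.corrcoef(model, obs)[0, 1],  # phase/shape agreement
    }

obs = np.array([0.0, 0.5, 1.0, 0.5])
mod = np.array([0.1, 0.6, 1.1, 0.6])  # a model that is uniformly 0.1 m too high
scores = skill_metrics(mod, obs)
```

A uniformly offset model like this one shows up as pure bias with perfect correlation, which is exactly the kind of diagnosis the automated assessment should surface.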
The first cell only reduces iris' noisy output;
the notebook starts on cell [2] with the definition of the parameters:
- start and end dates for the search;
- experiment name;
- a bounding of the region of interest;
- SOS variable name for the observations;
- Climate and Forecast standard names;
- the units we want to conform the variables to;
- catalogs we want to search.
```
import warnings
# Suppressing warnings for a "pretty output."
warnings.simplefilter("ignore")
%%writefile config.yaml
date:
start: 2018-2-28 00:00:00
stop: 2018-3-5 00:00:00
run_name: 'latest'
region:
bbox: [-71.20, 41.40, -69.20, 43.74]
crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'
sos_name: 'water_surface_height_above_reference_datum'
cf_names:
- sea_surface_height
- sea_surface_elevation
- sea_surface_height_above_geoid
- sea_surface_height_above_sea_level
- water_surface_height_above_reference_datum
- sea_surface_height_above_reference_ellipsoid
units: 'm'
catalogs:
- https://data.ioos.us/csw
```
To keep track of the information we'll set up a `config` variable and print it to the screen for bookkeeping.
```
import os
import shutil
from datetime import datetime
from ioos_tools.ioos import parse_config
config = parse_config("config.yaml")
# Saves downloaded data into a temporary directory.
save_dir = os.path.abspath(config["run_name"])
if os.path.exists(save_dir):
shutil.rmtree(save_dir)
os.makedirs(save_dir)
fmt = "{:*^64}".format
print(fmt("Saving data inside directory {}".format(save_dir)))
print(fmt(" Run information "))
print("Run date: {:%Y-%m-%d %H:%M:%S}".format(datetime.utcnow()))
print("Start: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["start"]))
print("Stop: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["stop"]))
print(
"Bounding box: {0:3.2f}, {1:3.2f},"
"{2:3.2f}, {3:3.2f}".format(*config["region"]["bbox"])
)
```
To interface with the IOOS catalog we will use the [Catalogue Service for the Web (CSW)](https://live.osgeo.org/en/standards/csw_overview.html) endpoint and [python's OWSLib library](https://geopython.github.io/OWSLib).
The cell below creates the [Filter Encoding Specification (FES)](http://www.opengeospatial.org/standards/filter) filter with the configuration we specified in cell [2]. The filter is composed of:
- `or` to catch any of the standard names;
- `not` some names we do not want to show up in the results;
- `date range` and `bounding box` for the time-space domain of the search.
```
def make_filter(config):
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(
wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:Subject"
)
or_filt = fes.Or(
[fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in config["cf_names"]]
)
not_filt = fes.Not([fes.PropertyIsLike(literal="GRIB-2", **kw)])
begin, end = fes_date_filter(config["date"]["start"], config["date"]["stop"])
bbox_crs = fes.BBox(config["region"]["bbox"], crs=config["region"]["crs"])
filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]
return filter_list
filter_list = make_filter(config)
```
We need to wrap the `OWSLib.csw.CatalogueServiceWeb` object with a custom function,
`get_csw_records`, to be able to paginate over the results.
In the cell below we loop over all the catalogs and extract the OPeNDAP endpoints.
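For context, a minimal sketch of what a pagination wrapper like `get_csw_records` does with OWSLib's `CatalogueServiceWeb`; the function name and the `page_size` parameter here are illustrative, not the actual `ioos_tools` API:

```python
# Sketch of CSW pagination with OWSLib: each getrecords2 call fetches one
# page and overwrites csw.records, so the wrapper accumulates pages itself.
def paginate_csw(csw, filter_list, page_size=10):
    records = {}
    start = 1
    while True:
        csw.getrecords2(
            constraints=filter_list,
            startposition=start,
            maxrecords=page_size,
            esn="full",
        )
        records.update(csw.records)
        start = csw.results.get("nextrecord", 0)
        if start == 0:  # OWSLib signals the last page with nextrecord == 0
            break
    return records
```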
```
from ioos_tools.ioos import get_csw_records, service_urls
from owslib.csw import CatalogueServiceWeb
dap_urls = []
print(fmt(" Catalog information "))
for endpoint in config["catalogs"]:
print("URL: {}".format(endpoint))
try:
csw = CatalogueServiceWeb(endpoint, timeout=120)
except Exception as e:
print("{}".format(e))
continue
csw = get_csw_records(csw, filter_list, esn="full")
OPeNDAP = service_urls(csw.records, identifier="OPeNDAP:OPeNDAP")
odp = service_urls(
csw.records, identifier="urn:x-esri:specification:ServiceType:odp:url"
)
dap = OPeNDAP + odp
dap_urls.extend(dap)
print("Number of datasets available: {}".format(len(csw.records.keys())))
for rec, item in csw.records.items():
print("{}".format(item.title))
if dap:
print(fmt(" DAP "))
for url in dap:
print("{}.html".format(url))
print("\n")
# Get only unique endpoints.
dap_urls = list(set(dap_urls))
```
We found 10 dataset endpoints, but only 9 of them have the proper metadata for us to identify the OPeNDAP endpoint:
those that contain either the `OPeNDAP:OPeNDAP` or the `urn:x-esri:specification:ServiceType:odp:url` scheme.
Unfortunately we lost the `COAWST` model in the process.
The next step is to ensure there are no observations in the list of endpoints.
We want only the models for now.
```
from ioos_tools.ioos import is_station
from timeout_decorator import TimeoutError
# Filter out some station endpoints.
non_stations = []
for url in dap_urls:
try:
if not is_station(url):
non_stations.append(url)
except (IOError, OSError, RuntimeError, TimeoutError) as e:
print("Could not access URL {}.html\n{!r}".format(url, e))
dap_urls = non_stations
print(fmt(" Filtered DAP "))
for url in dap_urls:
print("{}.html".format(url))
```
Now we have a nice list of all the models available in the catalog for the domain we specified.
We still need to find the observations for the same domain.
To accomplish that we will use the `pyoos` library and search the [SOS CO-OPS](https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/) services using virtually the same configuration options as the catalog search.
```
from pyoos.collectors.coops.coops_sos import CoopsSos
collector_coops = CoopsSos()
collector_coops.set_bbox(config["region"]["bbox"])
collector_coops.end_time = config["date"]["stop"]
collector_coops.start_time = config["date"]["start"]
collector_coops.variables = [config["sos_name"]]
ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(" Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
```
To make it easier to work with the data we extract the time-series as pandas tables and interpolate them to a common 1-hour interval index.
```
import pandas as pd
from ioos_tools.ioos import collector2table
data = collector2table(
collector=collector_coops,
config=config,
col="water_surface_height_above_reference_datum (m)",
)
df = dict(
station_name=[s._metadata.get("station_name") for s in data],
station_code=[s._metadata.get("station_code") for s in data],
sensor=[s._metadata.get("sensor") for s in data],
lon=[s._metadata.get("lon") for s in data],
lat=[s._metadata.get("lat") for s in data],
depth=[s._metadata.get("depth") for s in data],
)
pd.DataFrame(df).set_index("station_code")
index = pd.date_range(
start=config["date"]["start"].replace(tzinfo=None),
end=config["date"]["stop"].replace(tzinfo=None),
freq="1H",
)
# Preserve metadata with `reindex`.
observations = []
for series in data:
_metadata = series._metadata
series.index = series.index.tz_localize(None)
obs = series.reindex(index=index, limit=1, method="nearest")
obs._metadata = _metadata
observations.append(obs)
```
The next cell saves those time-series as CF-compliant netCDF files on disk,
to make it easier to access them later.
```
import iris
from ioos_tools.tardis import series2cube
attr = dict(
featureType="timeSeries",
Conventions="CF-1.6",
standard_name_vocabulary="CF-1.6",
cdm_data_type="Station",
comment="Data from http://opendap.co-ops.nos.noaa.gov",
)
cubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])
outfile = os.path.join(save_dir, "OBS_DATA.nc")
iris.save(cubes, outfile)
```
We still need to read the model data from the list of endpoints we found.
The next cell takes care of that.
We use `iris` and a set of custom functions from the `ioos_tools` library that download only the data in the domain we requested.
```
from ioos_tools.ioos import get_model_name
from ioos_tools.tardis import is_model, proc_cube, quick_load_cubes
from iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError
print(fmt(" Models "))
cubes = dict()
for k, url in enumerate(dap_urls):
print("\n[Reading url {}/{}]: {}".format(k + 1, len(dap_urls), url))
try:
cube = quick_load_cubes(url, config["cf_names"], callback=None, strict=True)
if is_model(cube):
cube = proc_cube(
cube,
bbox=config["region"]["bbox"],
time=(config["date"]["start"], config["date"]["stop"]),
units=config["units"],
)
else:
print("[Not model data]: {}".format(url))
continue
mod_name = get_model_name(url)
cubes.update({mod_name: cube})
except (
RuntimeError,
ValueError,
ConstraintMismatchError,
CoordinateNotFoundError,
IndexError,
) as e:
print("Cannot get cube for: {}\n{}".format(url, e))
```
Now we can match each observation time-series with its closest grid point (within 0.08 degrees) on each model.
This is a complex and laborious task! If you are running this interactively grab a coffee and sit comfortably :-)
Note that we are also saving the model time-series to files that align with the observations we saved before.
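The matching described above can be sketched with a plain KD-tree query; this is roughly what `make_tree` and `get_nearest_water` do internally (minus the "wet point" checks). The grid and station values here are made up for illustration.

```python
# Build a KD-tree over the flattened model grid and query it with the
# station coordinates to find the nearest grid point.
import numpy as np
from scipy.spatial import cKDTree

lon2d, lat2d = np.meshgrid(np.linspace(-71.2, -69.2, 50),
                           np.linspace(41.4, 43.74, 60))
tree = cKDTree(np.column_stack([lon2d.ravel(), lat2d.ravel()]))

obs_lon, obs_lat = -70.5, 42.3
dist, flat_idx = tree.query([obs_lon, obs_lat], k=1)
j, i = np.unravel_index(flat_idx, lon2d.shape)  # back to 2-D grid indices
# A max_dist threshold (0.08 degrees in the notebook) would reject
# stations whose nearest valid grid point is farther than this.
```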
```
import iris
from ioos_tools.tardis import (
add_station,
ensure_timeseries,
get_nearest_water,
make_tree,
)
from iris.pandas import as_series
for mod_name, cube in cubes.items():
fname = "{}.nc".format(mod_name)
fname = os.path.join(save_dir, fname)
print(fmt(" Downloading to file {} ".format(fname)))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError:
print("Cannot make KDTree for: {}".format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for obs in observations:
obs = obs._metadata
station = obs["station_code"]
try:
kw = dict(k=10, max_dist=0.08, min_var=0.01)
args = cube, tree, obs["lon"], obs["lat"]
try:
series, dist, idx = get_nearest_water(*args, **kw)
except RuntimeError as e:
print("Cannot download {!r}.\n{}".format(cube, e))
series = None
except ValueError:
status = "No Data"
print("[{}] {}".format(status, obs["station_name"]))
continue
if not series:
status = "Land "
else:
raw_series.update({station: series})
series = as_series(series)
status = "Water "
print("[{}] {}".format(status, obs["station_name"]))
if raw_series: # Save cube.
for station, cube in raw_series.items():
cube = add_station(cube, station)
try:
cube = iris.cube.CubeList(raw_series.values()).merge_cube()
except MergeError as e:
print(e)
ensure_timeseries(cube)
try:
iris.save(cube, fname)
except AttributeError:
# FIXME: we should patch the bad attribute instead of removing everything.
cube.attributes = {}
iris.save(cube, fname)
del cube
print("Finished processing [{}]".format(mod_name))
```
With the matched set of models and observations time-series it is relatively easy to compute skill score metrics on them. In cells [13] to [16] we apply both mean bias and root mean square errors to the time-series.
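The two metrics can be written out explicitly as below. Note this is just the underlying arithmetic: the `ioos_tools` versions of `mean_bias` and `rmse` operate column-wise on the matched pandas tables and support options such as `remove_mean`.

```python
# Mean bias and root mean square error between matched series.
import numpy as np

def mean_bias(obs, model):
    # Positive values mean the model overestimates the observations.
    return np.mean(np.asarray(model) - np.asarray(obs))

def rmse(obs, model):
    return np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))

obs = np.array([0.0, 0.5, 1.0])
model = np.array([0.1, 0.6, 1.1])
```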
```
from ioos_tools.ioos import stations_keys
def rename_cols(df, config):
cols = stations_keys(config, key="station_name")
return df.rename(columns=cols)
from ioos_tools.ioos import load_ncs
from ioos_tools.skill_score import apply_skill, mean_bias
dfs = load_ncs(config)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
skill_score = dict(mean_bias=df.to_dict())
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
from ioos_tools.skill_score import rmse
dfs = load_ncs(config)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score["rmse"] = df.to_dict()
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
import pandas as pd
# Stringify keys.
for key in skill_score.keys():
skill_score[key] = {str(k): v for k, v in skill_score[key].items()}
mean_bias = pd.DataFrame.from_dict(skill_score["mean_bias"])
mean_bias = mean_bias.applymap("{:.2f}".format).replace("nan", "--")
skill_score = pd.DataFrame.from_dict(skill_score["rmse"])
skill_score = skill_score.applymap("{:.2f}".format).replace("nan", "--")
```
Last but not least we can assemble a GIS map, cells [17-23],
with the time-series plot for the observations and models,
and the corresponding skill scores.
```
import folium
from ioos_tools.ioos import get_coordinates
def make_map(bbox, **kw):
line = kw.pop("line", True)
zoom_start = kw.pop("zoom_start", 5)
lon = (bbox[0] + bbox[2]) / 2
lat = (bbox[1] + bbox[3]) / 2
m = folium.Map(
width="100%", height="100%", location=[lat, lon], zoom_start=zoom_start
)
if line:
p = folium.PolyLine(
get_coordinates(bbox), color="#FF0000", weight=2, opacity=0.9,
)
p.add_to(m)
return m
bbox = config["region"]["bbox"]
m = make_map(bbox, zoom_start=8, line=True, layers=True)
all_obs = stations_keys(config)
from glob import glob
from operator import itemgetter
import iris
from folium.plugins import MarkerCluster
iris.FUTURE.netcdf_promote = True
big_list = []
for fname in glob(os.path.join(save_dir, "*.nc")):
if "OBS_DATA" in fname:
continue
cube = iris.load_cube(fname)
model = os.path.split(fname)[1].split("-")[-1].split(".")[0]
lons = cube.coord(axis="X").points
lats = cube.coord(axis="Y").points
stations = cube.coord("station_code").points
models = [model] * lons.size
lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())
big_list.extend(lista)
big_list.sort(key=itemgetter(3))
df = pd.DataFrame(big_list, columns=["name", "lon", "lat", "station"])
df.set_index("station", drop=True, inplace=True)
groups = df.groupby(df.index)
locations, popups = [], []
for station, info in groups:
sta_name = all_obs[station]
for lat, lon, name in zip(info.lat, info.lon, info.name):
locations.append([lat, lon])
popups.append("[{}]: {}".format(name, sta_name))
MarkerCluster(locations=locations, popups=popups, name="Cluster").add_to(m)
titles = {
"coawst_4_use_best": "COAWST_4",
"pacioos_hycom-global": "HYCOM",
"NECOFS_GOM3_FORECAST": "NECOFS_GOM3",
"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST": "NECOFS_MassBay",
"NECOFS_FVCOM_OCEAN_BOSTON_FORECAST": "NECOFS_Boston",
"SECOORA_NCSU_CNAPS": "SECOORA/CNAPS",
"roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best": "ESPRESSO Avg",
"roms_2013_da-ESPRESSO_Real-Time_v2_History_Best": "ESPRESSO Hist",
"OBS_DATA": "Observations",
}
from itertools import cycle
from bokeh.embed import file_html
from bokeh.models import HoverTool, Legend
from bokeh.palettes import Category20
from bokeh.plotting import figure
from bokeh.resources import CDN
from folium import IFrame
# Plot defaults.
colors = Category20[20]
colorcycler = cycle(colors)
tools = "pan,box_zoom,reset"
width, height = 750, 250
def make_plot(df, station):
p = figure(
toolbar_location="above",
x_axis_type="datetime",
width=width,
height=height,
tools=tools,
title=str(station),
)
leg = []
for column, series in df.iteritems():
series.dropna(inplace=True)
if not series.empty:
if "OBS_DATA" not in column:
bias = mean_bias[str(station)][column]
skill = skill_score[str(station)][column]
line_color = next(colorcycler)
kw = dict(alpha=0.65, line_color=line_color)
else:
skill = bias = "NA"
kw = dict(alpha=1, color="crimson")
line = p.line(
x=series.index,
y=series.values,
line_width=5,
line_cap="round",
line_join="round",
**kw
)
leg.append(("{}".format(titles.get(column, column)), [line]))
p.add_tools(
HoverTool(
tooltips=[
("Name", "{}".format(titles.get(column, column))),
("Bias", bias),
("Skill", skill),
],
renderers=[line],
)
)
legend = Legend(items=leg, location=(0, 60))
legend.click_policy = "mute"
p.add_layout(legend, "right")
p.yaxis[0].axis_label = "Water Height (m)"
p.xaxis[0].axis_label = "Date/time"
return p
def make_marker(p, station):
lons = stations_keys(config, key="lon")
lats = stations_keys(config, key="lat")
lon, lat = lons[station], lats[station]
html = file_html(p, CDN, station)
iframe = IFrame(html, width=width + 40, height=height + 80)
popup = folium.Popup(iframe, max_width=2650)
icon = folium.Icon(color="green", icon="stats")
marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)
return marker
dfs = load_ncs(config)
for station in dfs:
sta_name = all_obs[station]
df = dfs[station]
if df.empty:
continue
p = make_plot(df, station)
marker = make_marker(p, station)
marker.add_to(m)
folium.LayerControl().add_to(m)
def embed_map(m):
from IPython.display import HTML
m.save("index.html")
with open("index.html") as f:
html = f.read()
iframe = '<iframe srcdoc="{srcdoc}" style="width: 100%; height: 750px; border: none"></iframe>'
srcdoc = html.replace('"', "&quot;")
return HTML(iframe.format(srcdoc=srcdoc))
embed_map(m)
```
| github_jupyter |
### Made by Kartikey Sharma (IIT Goa)
### GOAL
Predicting the costs of used cars given the data collected from various sources and distributed across various locations in India.
#### FEATURES:
<b>Name</b>: The brand and model of the car.<br>
<b>Location</b>: The location in which the car is being sold or is available for purchase.<br>
<b>Year</b>: The year or edition of the model.<br>
<b>Kilometers_Driven</b>: The total kilometres driven in the car by the previous owner(s) in KM.<br>
<b>Fuel_Type</b>: The type of fuel used by the car.<br>
<b>Transmission</b>: The type of transmission used by the car.<br>
<b>Owner_Type</b>: Whether the ownership is Firsthand, Second hand or other.<br>
<b>Mileage</b>: The standard mileage offered by the car company in kmpl or km/kg.<br>
<b>Engine</b>: The displacement volume of the engine in cc.<br>
<b>Power</b>: The maximum power of the engine in bhp.<br>
<b>Seats</b>: The number of seats in the car.<br>
<b>Price</b>: The price of the used car in INR Lakhs.<br>
### Process
Clean the data (missing values and categorical variables).
<br>Build the model and check the MAE.
<br>Try to improve the model.
<br>Brand matters too! I could select the brand name of the car and treat them as categorical data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
import seaborn as sns
sns.set_style('darkgrid')
warnings.filterwarnings('ignore')
#Importing datasets
df_train = pd.read_excel("Data_Train.xlsx")
df_test = pd.read_excel("Data_Test.xlsx")
df_train.head()
df_train.shape
df_train.info()
#No of duplicated values in the train set
df_train.duplicated().sum()
#Number of duplicated values in the test set
df_test.duplicated().sum()
#Number of null values
df_train.isnull().sum()
df_train.nunique()
df_train['Name'] = df_train.Name.str.split().str.get(0)
df_test['Name'] = df_test.Name.str.split().str.get(0)
df_train.head()
# all rows have been modified
df_train['Name'].value_counts().sum()
```
### Missing Values
```
# Get names of columns with missing values
cols_with_missing = [col for col in df_train.columns
if df_train[col].isnull().any()]
print("Columns with missing values:")
print(cols_with_missing)
df_train['Seats'].fillna(df_train['Seats'].mean(),inplace=True)
df_test['Seats'].fillna(df_test['Seats'].mean(),inplace=True)
# Combine train and test to inspect value distributions when picking fill values
data = pd.concat([df_train,df_test], sort=False)
plt.figure(figsize=(20,5))
data['Mileage'].value_counts().head(100).plot.bar()
plt.show()
df_train['Mileage'] = df_train['Mileage'].fillna('17.0 kmpl')
df_test['Mileage'] = df_test['Mileage'].fillna('17.0 kmpl')
# "0.0 kmpl" and null both clearly represent missing values
df_train['Mileage'] = df_train['Mileage'].replace("0.0 kmpl", "17.0 kmpl")
df_test['Mileage'] = df_test['Mileage'].replace("0.0 kmpl", "17.0 kmpl")
plt.figure(figsize=(20,5))
data['Engine'].value_counts().head(100).plot.bar()
plt.show()
df_train['Engine'] = df_train['Engine'].fillna('1000 CC')
df_test['Engine'] = df_test['Engine'].fillna('1000 CC')
plt.figure(figsize=(20,5))
data['Power'].value_counts().head(100).plot.bar()
plt.show()
df_train['Power'] = df_train['Power'].fillna('74 bhp')
df_test['Power'] = df_test['Power'].fillna('74 bhp')
#null bhp created a problem during LabelEncoding
df_train['Power'] = df_train['Power'].replace("null bhp", "74 bhp")
df_test['Power'] = df_test['Power'].replace("null bhp", "74 bhp")
# Method to extract 'float' from 'object'
import re
def get_number(name):
title_search = re.search('([\d+\.+\d]+\W)', name)
if title_search:
return title_search.group(1)
return ""
df_train.isnull().sum()
df_train.info()
# Extract the numeric part and convert to float/int
df_train['Mileage'] = df_train['Mileage'].apply(get_number).astype('float')
df_train['Engine'] = df_train['Engine'].apply(get_number).astype('int')
df_train['Power'] = df_train['Power'].apply(get_number).astype('float')
df_test['Mileage'] = df_test['Mileage'].apply(get_number).astype('float')
df_test['Engine'] = df_test['Engine'].apply(get_number).astype('int')
df_test['Power'] = df_test['Power'].apply(get_number).astype('float')
df_train.info()
df_test.info()
df_train.head()
```
### Categorical Variables
```
from sklearn.model_selection import train_test_split
y = np.log1p(df_train.Price) # Made a HUGE difference. MAE went down highly
X = df_train.drop(['Price'],axis=1)
X_train, X_valid, y_train, y_valid = train_test_split(X,y,train_size=0.82,test_size=0.18,random_state=0)
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
X_train['Name'] = label_encoder.fit_transform(X_train['Name'])
X_valid['Name'] = label_encoder.transform(X_valid['Name'])
df_test['Name'] = label_encoder.fit_transform(df_test['Name'])
X_train['Location'] = label_encoder.fit_transform(X_train['Location'])
X_valid['Location'] = label_encoder.transform(X_valid['Location'])
df_test['Location'] = label_encoder.fit_transform(df_test['Location'])
X_train['Fuel_Type'] = label_encoder.fit_transform(X_train['Fuel_Type'])
X_valid['Fuel_Type'] = label_encoder.transform(X_valid['Fuel_Type'])
df_test['Fuel_Type'] = label_encoder.fit_transform(df_test['Fuel_Type'])
X_train['Transmission'] = label_encoder.fit_transform(X_train['Transmission'])
X_valid['Transmission'] = label_encoder.transform(X_valid['Transmission'])
df_test['Transmission'] = label_encoder.fit_transform(df_test['Transmission'])
X_train['Owner_Type'] = label_encoder.fit_transform(X_train['Owner_Type'])
X_valid['Owner_Type'] = label_encoder.transform(X_valid['Owner_Type'])
df_test['Owner_Type'] = label_encoder.fit_transform(df_test['Owner_Type'])
X_train.head()
X_train.info()
```
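Note that fitting a fresh encoder on the test frame (as above) gives the test column its own, independent integer codes. A hedged alternative, assuming the goal is codes that are consistent across splits, is to fit one encoder on the union of categories, which also avoids the "unseen label" `ValueError` mentioned in the notes; the data here is a toy example.

```python
# Fit a single LabelEncoder on train + test categories so both splits
# share the same integer codes and labels like 'Bentley' cannot be unseen.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

train = pd.DataFrame({"Name": ["Maruti", "Hyundai", "Honda"]})
test = pd.DataFrame({"Name": ["Hyundai", "Bentley"]})

le = LabelEncoder()
le.fit(pd.concat([train["Name"], test["Name"]]))
train["Name"] = le.transform(train["Name"])
test["Name"] = le.transform(test["Name"])
```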
## Model
```
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error,mean_squared_error,mean_squared_log_error
from math import sqrt
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05)
my_model.fit(X_train, y_train,
early_stopping_rounds=5,
eval_set=[(X_valid, y_valid)],
verbose=False)
predictions = my_model.predict(X_valid)
print("MAE: " + str(mean_absolute_error(predictions, y_valid)))
print("MSE: " + str(mean_squared_error(predictions, y_valid)))
print("MSLE: " + str(mean_squared_log_error(predictions, y_valid)))
print("RMSE: "+ str(sqrt(mean_squared_error(predictions, y_valid))))
```
## Predicting on Test
```
preds_test = my_model.predict(df_test)
preds_test = np.exp(preds_test)-1 #converting target to original state
print(preds_test)
# The Price is in the format xx.xx So let's round off and submit.
preds_test = preds_test.round(5)
print(preds_test)
output = pd.DataFrame({'Price': preds_test})
output.to_excel('Output.xlsx', index=False)
```
#### NOTE
Treating 'Mileage' and the others as categorical variables was a mistake. E.g., Mileage went up from 23.6 to around 338! Converting it to numbers fixed it.
LabelEncoder won't work if there are missing values.
ValueError: y contains previously unseen label 'Bentley'. Fixed it by increasing train_size in train_test_split.
Scaling all the columns made the model worse (as expected).
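The log-target transform noted above can be round-tripped as follows; `np.expm1(y)` is the numerically stable form of the `np.exp(y) - 1` inversion used in the prediction cell.

```python
# Train on log(1 + price), then invert the transform after predicting.
import numpy as np

price = np.array([1.75, 12.5, 160.0])  # prices in INR Lakhs
y = np.log1p(price)                    # target used for training
recovered = np.expm1(y)                # inverse transform of predictions
```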
==============================================End of Project======================================================
```
# Code by Kartikey Sharma
# Veni.Vidi.Vici.
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import starry
import exoplanet as exo
starry.__version__
map = starry.Map(ydeg=20, udeg=2, rv=True, lazy=False)
time, vels, verr = np.loadtxt('../data/transit.vels', usecols=[0,1,2], unpack=True)
time -= 2458706.5
Prot = 2.85 # days
P = 8.1387 # days
t0 = 0.168
e = 0.0
w = 0.0
inc = 90.0
vsini = 18.3 * 1e3 # m /s
r = 0.06472 # In units of Rstar
b = -0.40 # I want it to transit in the South!
a = 19.42 # In units of Rstar
u1 = 0.95
u2 = 0.20
obl = -0
gamma = -15
gammadot = 100
gammadotdot = 800
veq = vsini / np.sin(inc * np.pi / 180.0)
map.reset()
map.inc = inc
map.obl = obl
#map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)
map[1:] = [u1, u2]
map.veq = veq
orbit = exo.orbits.KeplerianOrbit(period=P, a=a, t0=t0, b=b, ecc=e, omega=w, r_star=1.0)
t = np.linspace(0.05, 0.30, 1000)
f = (t - t0)/P*2*np.pi
I = np.arccos(b/a)
zo = a*np.cos(f)
yo = -a*np.sin(np.pi/2+f)*np.cos(I)
xo = a*np.sin(f)*np.sin(I)
theta = 360.0 / Prot * t
rv = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)
rv += gamma + gammadot*(t-0.15) + gammadotdot*(t-0.15)**2
plt.figure(figsize=(15,5))
plt.plot(t, rv, "C1", lw=3)
plt.errorbar(time, vels, yerr=verr, fmt='.')
plt.ylim(-60, 40);
#map.show(rv=False)
from scipy.optimize import minimize
tuse = time + 0.0
euse = verr + 0.0
vuse = vels + 0.0
def rmcurve(params):
vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, jitter_good, jitter_bad, q, factor, t0 = params
veq = vsini / np.sin(inc * np.pi / 180.0)
if u1 + u2 > 1.0:
print('inf')
return 2700
map.reset()
map.inc = inc
map.obl = obl
#map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)
map[1:] = [u1, u2]
map.veq = veq
f = (tuse - t0)/P*2*np.pi
I = np.arccos(b/a)
zo = a*np.cos(f)
yo = -a*np.sin(np.pi/2+f)*np.cos(I)
xo = a*np.sin(f)*np.sin(I)
theta = 360.0 / Prot * tuse
rv_0 = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)
trend = gamma + gammadot*(tuse-0.15) + gammadotdot*(tuse-0.15)**2
rv = rv_0 + trend
var_good = (euse**2 + jitter_good**2)
var_bad = (euse**2 + jitter_bad**2)
goodgauss = q / np.sqrt(2*np.pi*var_good) * np.exp(-(rv-vuse)**2/(2*var_good))
badgauss = (1-q) / np.sqrt(2*np.pi*var_bad) * np.exp(-(rv_0*factor+trend-vuse)**2/(2*var_bad))
totgauss = np.log(goodgauss + badgauss)
#print(np.log(goodgauss))
#print(np.log(badgauss))
print(-1*np.sum(totgauss))
return -1*np.sum(totgauss)
def plot_rmcurve(params):
vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, gamma3, gamma4, jitter_good, jitter_bad, q, factor, t0 = params
veq = vsini / np.sin(inc * np.pi / 180.0)
map.reset()
map.inc = inc
map.obl = obl
#map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)
map[1:] = [u1, u2]
map.veq = veq
f = (t - t0)/P*2*np.pi
I = np.arccos(b/a)
zo = a*np.cos(f)
yo = -a*np.sin(np.pi/2+f)*np.cos(I)
xo = a*np.sin(f)*np.sin(I)
theta = 360.0 / Prot * t
rv = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)
trend = gamma + gammadot*(t-0.15) + gammadotdot*(t-0.15)**2 + gamma3*(t-0.15)**3 + gamma4*(t-0.15)**4
rv += trend
plt.figure(figsize=(15,5))
plt.plot(t, rv, "C1", lw=3)
plt.plot(t, trend, "C1", lw=3)
plt.errorbar(time, vels, yerr=verr, fmt='.')
plt.ylim(-50, 40);
plt.show()
inputs = np.array([19300, 0.0588, -0.09, 20.79, 0.8, 0.00, 10.0, -15.0, 100.1, 1300.0, 5500, -30000, 1.0, 1.0, 0.8, 0.60, 0.166])
bnds = ((12000, 24000), (0.04, 0.07), (-1.0, 0.0), (15,25), (0,1),(0,1), (-30,90), (-20,20),(50,300), (0, 3000), (-30000, 300000), (-100000, 100000), (0.0, 2.0), (0.0, 20.0), (0.4, 1.0), (0.0, 1.0), (0.16, 0.175))
#rmcurve(inputs)
plot_rmcurve(inputs)
res = minimize(rmcurve, inputs, method='L-BFGS-B', bounds=bnds)
# vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, jitter_good, jitter_bad, q, factor, t0
print(res.x.tolist())
test = res.x + 0.0
#test[0] = 20000
#test[4] = 1.0
#test[5] = 0.0
rmcurve(test)
plot_rmcurve(test)
orbit = exo.orbits.KeplerianOrbit(period=P, a=a, t0=t0, b=0.4, ecc=e, omega=w, r_star=1.0)
x, y, z = orbit.get_relative_position(tuse)
xp = x.eval()
yp = y.eval()
zp = z.eval()
(zp**2+yp**2+xp**2)**0.5
a
tuse = np.arange(-4, 4, 0.1)
f = (tuse - t0)/P*2*np.pi
I = np.arccos(b/a)
zpos = a*np.cos(f)
ypos = -a*np.sin(np.pi/2+f)*np.cos(I)
xpos = a*np.sin(f)*np.sin(I)
zpos-zp
x = np.arange(-10, 10, 0.02)
sigma1 = 3
mu1 = 0
sigma2 = 2
mu2 = -7
g1 = 0.8/np.sqrt(2*np.pi*sigma1**2) * np.exp(-(x-mu1)**2/(2*sigma1**2))
g2 = 0.2/np.sqrt(2*np.pi*sigma2**2) * np.exp(-(x-mu2)**2/(2*sigma2**2))
plt.plot(x, np.log(g1+g2))
#plt.plot(x, g1)
#plt.plot(x, g2)
var_good = (euse**2 + jitter_good**2)
var_bad = (euse**2 + jitter_bad**2)
gooddata = -0.5*q*(np.sum((rv-vuse)**2/var_good + np.log(2*np.pi*var_good)))
baddata = -0.5*(1-q)*(np.sum((rv-vuse)**2/var_bad + np.log(2*np.pi*var_bad)))
lnprob = gooddata + baddata
goodgauss = q / np.sqrt(2*np.pi*var_good) * np.exp(-(rv-vuse)**2/var_good)
badgauss = (1-q) / np.sqrt(2*np.pi*var_bad) * np.exp(-(rv-vuse)**2/var_bad)
totgauss = np.log(goodgauss + badgauss)
```
| github_jupyter |
Includes:
```
import matplotlib.pyplot as plt
import numpy as np
import math
import pandas as pd
import seaborn as sns
import scipy.integrate
```
Data and plots for Figure 2. Figure 1 is a cartoon, while Figures 3-5 were produced directly in the ParaView visualisation software from the ChemChaste simulation output. This output is fully reproducible using the RunChemChaste.py control file provided.
Fisher-KPP equation as defined in the Manuscript:
```
def fisher_KPP(z,c,order=1):
if order ==1:
U = 1/(1+np.exp(z/c)) # first order
elif order == 2:
U = 1/(1+np.exp(z/c)) + (1/pow(c,2))*(np.exp(z/c)/pow((1+np.exp(z/c)),2))*np.log( 4*np.exp(z/c)/pow((1+np.exp(z/c)),2) )# second order
else:
U=0*z
return U
def integrand(x, t):
xdrift = 50
a = 1.0
c=a+(1/a)
tscale = 1.0
x=x+xdrift
x=1.0*x
z=x-c*t*tscale
return fisher_KPP(z,c,order=2)
```
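For reference, the traveling-wave substitution behind `fisher_KPP` — stated here as the standard dimensionless form of the Fisher-KPP analysis, not quoted from the manuscript — is:

$$
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1-u), \qquad u(x,t) = U(z), \quad z = x - ct,
$$

with the wave speed parametrised as $c = a + 1/a$ in the cells below; the first-order profile coded above is $U(z) \approx \dfrac{1}{1+e^{z/c}}$.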
Solving the Fisher-KPP equation:
```
times2 = np.arange(0,201,1)*3.0
times1 = np.arange(0,61,1)*10.0
times = np.arange(0,601,1)
integralValues = []
integralValues1 = []
integralValues2 = []
for i in range(0,len(times)):
t=times[i]
#print(t)
I = scipy.integrate.quad(integrand, 15, 100, args=(t) )
integralValues.append( I[0]/85)
for i in range(0,len(times1)):
t=times1[i]
#print(t)
I = scipy.integrate.quad(integrand, 15, 100, args=(t) )
integralValues1.append( I[0]/85)
for i in range(0,len(times2)):
t=times2[i]
#print(t)
I = scipy.integrate.quad(integrand, 15, 100, args=(t) )
integralValues2.append( I[0]/85)
gradInt= np.gradient(np.array(integralValues))
plt.plot(np.array(times),np.array(integralValues),label="Analytic",linestyle='solid')
```
Load and plot ParaView line output for Figure 2 a):
```
dfTrace2 = pd.read_csv("Fisher_Slice/160.csv");
dfTrace4 = pd.read_csv("Fisher_Slice/240.csv");
dfTrace6 = pd.read_csv("Fisher_Slice/320.csv");
dfTrace8 = pd.read_csv("Fisher_Slice/400.csv");
```
For Figure 2 a), plot the line output from ParaView against the solution of the Fisher-KPP equation:
```
A=1
xsteps =np.arange(0,105,step=10)
xdrift = 8
tvec =[1,2,3,4]
colorVec=["tab:orange","tab:green","tab:blue","tab:red","tab:purple","tab:brown","tab:cyan"]
plt.plot(dfTrace2['arc_length'],dfTrace2['PDE variable 0'],label="t=160",color=colorVec[1])
plt.plot(dfTrace4['arc_length'],dfTrace4['PDE variable 0'],label="t=240",color=colorVec[2])
plt.plot(dfTrace6['arc_length'],dfTrace6['PDE variable 0'],label="t=320",color=colorVec[3])
plt.plot(dfTrace8['arc_length'],dfTrace8['PDE variable 0'],label="t=400",color=colorVec[4])
x = np.arange(10, 80, 0.01)
a = 1
c=a+(1/a)
tscale = 7.55
for tint in tvec:
t=tint*tscale
z=x-c*t
plt.plot(x+xdrift, fisher_KPP(z,c,order=1),linestyle='dashed',color=colorVec[tint])
plt.plot(x+xdrift, fisher_KPP(z,c,order=2),linestyle='dotted',color=colorVec[tint])
plt.xticks(xsteps);
plt.xlabel("X");
plt.ylabel("U(X)");
plt.legend(loc="right");
plt.xlabel("Position");
plt.xlim([10, 80])
plt.savefig('wavePlot.png')
plt.grid();
```
Full data processing from ParaView output for Figure 2 b)
```
dx = [1,0.8,0.6,0.4,0.2,0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]
dt = [0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]
filename_prefix_1 = "DataOut/"
filename_prefix_2 = "dx_"
filename_mid = "_dt_"
filename_suffix = ".csv"
files_exist = []
data_names = []
files_names = []
files_csv =[]
l2 = []
gradData = []
dt2=[0.1]
dx2 = [0.01]
for x in dx:
for t in dt:
filename = filename_prefix_1+filename_prefix_2+str(x)+filename_mid+str(t)
dataname = filename_prefix_2+str(x)+filename_mid+str(t)
filename = filename.replace('.', '')
filename = filename+filename_suffix
try:
df = pd.read_csv(filename)
files_exist.append(True)
data_names.append(dataname)
files_names.append(filename)
files_csv.append(df)
l2.append(np.sum(np.power((np.array(integralValues)-df['avg(PDE variable 0)']),2)))
gradData.append(np.gradient(np.array(df['avg(PDE variable 0)'])))
print(filename)
except:
files_exist.append(False)
files_names.append("")
data_names.append("")
files_csv.append("")
l2.append(1)
gradData.append(0)
```
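Note how the loop above builds filenames: decimal points in the step sizes are stripped from the whole path before the `.csv` suffix is appended, so, for example, `dx = 0.1`, `dt = 0.01` maps to `DataOut/dx_01_dt_001.csv`. A minimal sketch of the same convention (the helper name is ours, not the notebook's):

```python
# Reproduces the filename convention used by the processing loop above.
def data_filename(dx, dt, prefix="DataOut/"):
    name = prefix + "dx_" + str(dx) + "_dt_" + str(dt)
    # strip decimal points *before* appending the suffix, exactly as the loop does
    return name.replace('.', '') + ".csv"

print(data_filename(0.1, 0.01))  # DataOut/dx_01_dt_001.csv
```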
For Figure 2 b)
```
threshold = 0.4
X = list(set(dx))
T = list(set(dt))
X.sort()
T.sort()
M = np.ones((len(X),len(T)))*threshold
for i in range(0,len(X)):
for j in range(0,len(T)):
dataname = filename_prefix_2+str(X[i])+filename_mid+str(T[j])
for k in range(0,len(data_names)):
if data_names[k] == dataname:
if l2[k]>threshold:
M[i,j]=threshold
else:
M[i,j]=l2[k]
ax = sns.heatmap(M, linewidth=0,yticklabels=X,xticklabels=T,cmap="gist_gray_r")
ax.invert_xaxis()
ax.set(xlabel="Temporal step size (dt)", ylabel = "Spatial step size (dx)")
plt.savefig('heatmap.png')
plt.show()
```
Processing for the data subset where $dt = 0.1$:
```
dx = [1,0.8,0.6,0.4,0.2,0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]
dt2=[0.1]
filename_prefix_1 = "DataOut/"
filename_prefix_2 = "dx_"
filename_mid = "_dt_"
filename_suffix = ".csv"
files_exist = []
data_names = []
files_names = []
files_csv =[]
l2 = []
gradData = []
for x in dx:
for t in dt2:
filename = filename_prefix_1+filename_prefix_2+str(x)+filename_mid+str(t)
dataname = filename_prefix_2+str(x)+filename_mid+str(t)
filename = filename.replace('.', '')
filename = filename+filename_suffix
try:
df = pd.read_csv(filename)
files_exist.append(True)
data_names.append(dataname)
files_names.append(filename)
files_csv.append(df)
l2.append(np.sum(np.power((np.array(integralValues)-df['avg(PDE variable 0)']),2)))
gradData.append(np.gradient(np.array(df['avg(PDE variable 0)'])))
print(filename)
except:
files_exist.append(False)
files_names.append("")
data_names.append("")
files_csv.append("")
l2.append(1)
gradData.append(0)
```
For Figure 2 c)
```
plt.plot(np.array(times),np.array(integralValues),label="Analytic",linestyle='solid')
for i in range(len(files_exist)):
if files_exist[i] == True:
#plt.plot(df[i]['Time'],df[i]['avg(PDE variable 0)'],label=files_names[i],linestyle='dotted')
df = pd.read_csv(files_names[i])
plt.plot(df['Time'],df['avg(PDE variable 0)'],label=data_names[i],linestyle='dotted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("Time")
plt.ylabel("Domain average U")
plt.xlim([0,600])
plt.ylim([0,1.05])
plt.grid()
plt.savefig('averagePlot.png')
plt.show()
```
For Figure 2 d)
```
plt.plot(np.array(times),np.array(gradInt),label="Analytic",linestyle='solid')
for i in range(len(files_exist)):
if files_exist[i] == True:
#plt.plot(df[i]['Time'],df[i]['avg(PDE variable 0)'],label=files_names[i],linestyle='dotted')
df = pd.read_csv(files_names[i])
try:
plt.plot(df['Time'],gradData[i],label=data_names[i],linestyle='dotted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
except:
print("skip")
plt.xlim([0,100])
plt.ylim([0,0.026])
plt.xlabel("Time")
plt.ylabel("Line gradient")
plt.grid()
plt.savefig('gradientPlot.png')
plt.show()
```
In this notebook, we shall test the centered images on all major machine learning methods that predate neural networks. We do this in order to establish a baseline of performance for any later classifier that is developed.
```
import numpy as np
from scipy import *
import os
import h5py
from keras.utils import np_utils
import matplotlib.pyplot as plt
import pickle
from skimage.transform import rescale
from keras.models import model_from_json
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import RandomizedSearchCV
file = open("train_x.dat",'rb')
train_x = pickle.load(file)
file.close()
file = open("train_y.dat",'rb')
train_y = pickle.load(file)
file.close()
file = open("test_x.dat",'rb')
test_x = pickle.load(file)
file.close()
file = open("test_y.dat",'rb')
test_y = pickle.load(file)
file.close()
file = open("raw_train_x.dat",'rb')
raw_train_x = pickle.load(file)
file.close()
file = open("raw_test_x.dat",'rb')
raw_test_x = pickle.load(file)
file.close()
##### HOG Images #####
# Defining the hyperparameter range for the Random Forest
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(random_grid)
# Random Forest
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestClassifier()
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid,
n_iter = 100, cv = 3, verbose=2, random_state=42,
n_jobs = -1)
# Fit the random search model
rf_random.fit(train_x, train_y)
rf_random.best_score_  # best cross-validation score found by the search
rf_random.best_params_
def evaluate(model, test_features, test_labels):
predictions = model.predict(test_features)
errors = abs(predictions - test_labels)
mape = 100 * np.mean(errors / test_labels)
accuracy = 100 - mape
print('Model Performance')
print('Average Error: {:0.4f} degrees.'.format(np.mean(errors)))
print('Accuracy = {:0.2f}%.'.format(accuracy))
return accuracy
base_model = RandomForestClassifier(n_estimators = 10, random_state = 42)
base_model.fit(train_x, train_y)
base_accuracy = evaluate(base_model, test_x, test_y)
best_random = rf_random.best_estimator_
random_accuracy = evaluate(best_random, test_x, test_y)
# Naïve Bayes
gnb = GaussianNB()
gnb = gnb.fit(train_x, train_y)
score2 = gnb.score(test_x, test_y)
score2
# Support Vector Machine
C = 0.1 # SVM regularization parameter
# LinearSVC (linear kernel)
lin_svc = svm.LinearSVC(C=C).fit(train_x, train_y)
score4 = lin_svc.score(test_x, test_y)
score4
#### Raw Images #####
raw_train_x = raw_train_x.reshape(raw_train_x.shape[0], -1)
raw_test_x = raw_test_x.reshape(raw_test_x.shape[0], -1)
# Random Forest
clf_raw = RandomForestClassifier(n_estimators=100)
clf_raw = clf_raw.fit(raw_train_x, train_y)
score5 = clf_raw.score(raw_test_x, test_y)
score5
# Naïve Bayes
gnb_raw = GaussianNB()
gnb_raw = gnb_raw.fit(raw_train_x, train_y)
score6 = gnb_raw.score(raw_test_x, test_y)
score6
# LinearSVC (linear kernel)
lin_svc_raw = svm.LinearSVC(C=C).fit(raw_train_x, train_y)
score7 = lin_svc_raw.score(raw_test_x, test_y)
score7
```
# Generating a Word Cloud
For this project, we generate a "word cloud" from a given text file. The script will process the text (which should be "utf-8" encoded), remove punctuation, ignore words containing non-English characters, ignore uninteresting or irrelevant words, and count the word frequencies. It then uses the `wordcloud` module to generate the image from the word frequencies.
The input text needs to be a file that contains text only. For the text itself, you can copy and paste the contents of a website you like. Or you can use a site like [Project Gutenberg](https://www.gutenberg.org/) to find books that are available online. You could see what word clouds you can get from famous books, like a Shakespeare play or a novel by Jane Austen. Save this as a .txt file somewhere on your computer.
<br><br>
You will need to upload your input file here so that your script will be able to process it. To do the upload, you will need an uploader widget. Run the following cell to perform all the installs and imports for your word cloud script and uploader widget. It may take a minute for all of this to run and there will be a lot of output messages. But, be patient. Once you get the following final line of output, the code is done executing. Then you can continue on with the rest of the instructions for this notebook.
<br><br>
**Enabling notebook extension fileupload/extension...**
<br>
**- Validating: <font color =green>OK</font>**
<br><br>
*Side Note* - Uncomment the lines beginning with `pip install` to make the code work properly. You can alternatively run the `pip install -r requirements.txt` command as mentioned in the README.md file accompanying the project.
```
# Here are all the installs and imports you will need for your word cloud script and uploader widget
# Requirements - already installed in the virtual environment
# !pip install wordcloud
# !pip install fileupload
# !pip install ipywidgets
!jupyter nbextension install --py --user fileupload
!jupyter nbextension enable --py fileupload
import wordcloud
import numpy as np
from matplotlib import pyplot as plt
from IPython.display import display
import fileupload
import io
import sys
```
Whew! That was a lot. All of the installs and imports for your word cloud script and uploader widget have been completed.
<br><br>
**IMPORTANT!** If this was your first time running the above cell containing the installs and imports, you will need to save this notebook now. Then under the File menu above, select Close and Halt. When the notebook has completely shut down, reopen it. This is the only way the necessary changes will take effect.
<br><br>
To upload your text file, run the following cell that contains all the code for a custom uploader widget. Once you run this cell, a "Browse" button should appear below it. Click this button and navigate the window to locate your saved text file.
```
# This is the uploader widget
def _upload():
_upload_widget = fileupload.FileUploadWidget()
def _cb(change):
global file_contents
decoded = io.StringIO(change['owner'].data.decode('utf-8'))
filename = change['owner'].filename
print('Uploaded `{}` ({:.2f} kB)'.format(
filename, len(decoded.read()) / 2 **10))
file_contents = decoded.getvalue()
_upload_widget.observe(_cb, names='data')
display(_upload_widget)
_upload()
```
The function below does the text processing described previously.<br>
It removes punctuation and non-alphabetic characters, drops pre-defined uninteresting words, counts the word frequencies, and returns the rendered word-cloud image (as a NumPy array) generated from those frequencies.
<br><br>
Feel free to tweak the `uninteresting_words` list and include any words that you don't want to see in the final word cloud. Some standard, frequently occurring uninteresting words are already in the list.
```
def calculate_frequencies(file_contents):
# Here is a list of punctuations and uninteresting words you can use to process your text
punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or", "an", "as", "i", "me", "my", \
"we", "our", "ours", "you", "your", "yours", "he", "she", "him", "his", "her", "hers", "its", "they", "them", \
"their", "what", "which", "who", "whom", "this", "that", "am", "are", "was", "were", "be", "been", "being", \
"have", "has", "had", "do", "does", "did", "but", "at", "by", "with", "from", "here", "when", "where", "how", \
"all", "any", "both", "each", "few", "more", "some", "such", "no", "nor", "too", "very", "can", "will", "just", \
"in", "on", "one", "not", "he", "she", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten"]
mod_file = ""
for ind in range(len(file_contents)):
if file_contents[ind] not in punctuations:
mod_file += file_contents[ind].lower()
frequencies = {}
init_list = mod_file.split()
word_list = []
for word in init_list:
if word not in uninteresting_words:
word_list.append(word)
for word in word_list:
if word in frequencies:
frequencies[word] += 1
else:
frequencies[word] = 1
#wordcloud
cloud = wordcloud.WordCloud()
cloud.generate_from_frequencies(frequencies)
return cloud.to_array()
```
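The counting loop inside `calculate_frequencies` is the classic dictionary-accumulator pattern. For reference, `collections.Counter` from the standard library produces the same frequencies in one call — a sketch on toy data, independent of any uploaded file:

```python
from collections import Counter

words = ["cat", "bat", "cat", "moth", "bat", "cat"]

# manual accumulation, as in calculate_frequencies
frequencies = {}
for word in words:
    if word in frequencies:
        frequencies[word] += 1
    else:
        frequencies[word] = 1

assert frequencies == dict(Counter(words))
print(frequencies)  # {'cat': 3, 'bat': 2, 'moth': 1}
```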
Run the cell below to generate the final word cloud.
<br><br>
Feel free to download and share the word clouds that you generate!
```
# Display your wordcloud image
myimage = calculate_frequencies(file_contents)
plt.imshow(myimage, interpolation = 'nearest')
plt.axis('off')
plt.show()
```
For the sample, I used text from an article on ["The arms race between bats and moths"](https://www.bbc.com/news/science-environment-11010458) by BBC. The generated word cloud does indeed match the general "feel" of the article. Have a read to confirm!
```
library(dslabs)
library(HistData)
library(tidyverse)
data(heights)
data(Galton)
data(murders)
# HarvardX Data Science Course
# Module 2: Data Visualization
x <- Galton$child
x_with_error <- x
x_with_error[1] <- x_with_error[1] * 10
mean(x_with_error) - mean(x)
sd(x_with_error) - sd(x)
# Median and MAD (median absolute deviation) are robust measurements
median(x_with_error) - median(x)
mad(x_with_error) - mad(x)
# Using EDA (exploratory data analysis) to explore changes
# Returns the average of the vector x after the first entry changed to k
error_avg <- function(k) {
z <- x
z[1] = k
mean(z)
}
error_avg(10^4)
error_avg(-10^4)
# Quantile-quantile Plots
male_heights <- heights$height[heights$sex == 'Male']
p <- seq(0.05, 0.95, 0.05)
observed_quantiles <- quantile(male_heights, p)
theoretical_quantiles <- qnorm(p, mean=mean(male_heights), sd=sd(male_heights))
plot(theoretical_quantiles, observed_quantiles)
abline(0,1)
# It is better to use standard units
z <- scale(male_heights)
observed_quantiles <- quantile(z, p)
theoretical_quantiles <- qnorm(p)
plot(theoretical_quantiles, observed_quantiles)
abline(0,1)
# Percentiles: the quantiles obtained when p = 0.01, ..., 0.99
# Exercises
male <- heights$height[heights$sex == 'Male']
female <- heights$height[heights$sex == 'Female']
length(male)
length(female)
male_percentiles <- quantile(male, seq(0.1, 0.9, 0.2))
female_percentiles <- quantile(female, seq(0.1, 0.9, 0.2))
df <- data.frame(female=female_percentiles, male=male_percentiles)
df
# Exercises using Galton data
mean(x)
median(x)
# ggplot2 basics
murders %>% ggplot(aes(population, total, label = abb)) + geom_point() + geom_label(color = 'blue')
murders_plot <- murders %>% ggplot(aes(population, total, label = abb, color = region))
murders_plot + geom_point() + geom_label()
murders_plot +
geom_point() +
geom_label() +
scale_x_log10() +
scale_y_log10() +
ggtitle('Gun Murder Data')
heights_plot <- heights %>% ggplot(aes(x = height))
heights_plot + geom_histogram(binwidth = 1, color = 'darkgrey', fill = 'darkblue')
heights %>% ggplot(aes(height)) + geom_density()
heights %>% ggplot(aes(x = height, group = sex)) + geom_density()
# When a color category is set, ggplot knows it has to draw more than one plot, so the 'group' parameter is inferred
heights %>% ggplot(aes(x = height, color = sex)) + geom_density()
heights_plot <- heights %>% ggplot(aes(x = height, fill = sex)) + geom_density(alpha = 0.2)
heights_plot
# These two lines achieve the same result: summarize() creates a one-row data frame with a single column "rate", and .$rate then extracts that single value into the object r (see ?summarize)
r <- sum(murders$total) / sum(murders$population) * 10^6
r <- murders %>% summarize(rate = sum(total) / sum(population) * 10^6) %>% .$rate
library(ggthemes)
library(ggrepel)
murders_plot <- murders %>% ggplot(aes(x = population / 10^6, y = total, color = region, label = abb))
murders_plot <- murders_plot +
geom_abline(intercept = log10(r), lty = 2, color = 'darkgray') +
geom_point(size = 2) +
geom_text_repel() +
scale_x_log10() +
scale_y_log10() +
ggtitle("US Gun Murders in the US, 2010") +
xlab("Population in millions (log scale)") +
ylab("Total number of murders (log scale)") +
scale_color_discrete(name = 'Region') +
theme_economist()
murders_plot
library(gridExtra)
grid.arrange(heights_plot, murders_plot, ncol = 1)
```
```
import pandas as pd
df = pd.read_csv("../k2scoc/results/tables/full_table.csv")
hasflares = (df.real==1) & (df.todrop.isnull())
wassearched = (df.real==0) & (df.todrop.isnull())
df = df[hasflares & (df.cluster=="hyades") & (df.Teff_median > 3250.) & (df.Teff_median < 3500.)]
df[["EPIC"]].drop_duplicates()
```
3500K < Teff < 3750 K:
- [EPIC 247122957](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+36+04.172+%09%2B18+53+18.88&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 211036776](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+211036776&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id) **binary or multiple**
- [EPIC 210923016](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+19+29.784+%09%2B21+45+13.99&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 246806983](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=05+11+09.708+%09%2B15+48+57.47&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 247289039](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+42+30.301+%09%2B20+27+11.43&Radius=2&Radius.unit=arcsec&submit=submit+query) **spectroscopic binary**
- [EPIC 247592661](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+30+38.192+%09%2B22+54+28.88&Radius=2&Radius.unit=arcsec&submit=submit+query) flare star
- [EPIC 247973705](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+40+06.776+%09%2B25+36+46.40&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 210317378](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210317378&submit=submit+id)
- [EPIC 210721261](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+29+01.010+%09%2B18+40+25.33+%09&Radius=2&Radius.unit=arcsec&submit=submit+query) BY Dra
- [EPIC 210741091](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210741091&submit=submit+id)
- [EPIC 247164626](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+40+12.713+%09%2B19+17+09.97&CooFrame=FK5&CooEpoch=2000&CooEqui=2000&CooDefinedFrames=none&Radius=2&Radius.unit=arcsec&submit=submit+query&CoordList=)
Teff < 3000 K:
- [EPIC 210563410](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=%403995861&Name=%5BRSP2011%5D+75&submit=display+all+measurements#lab_meas) p=21d
- [EPIC 248018423](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+51+18.846+%09%2B25+56+33.36+%09&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 210371851](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+210371851&submit=SIMBAD+search) **binary**
- [EPIC 210523892](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+210523892&submit=SIMBAD+search) not a binary in Gizis+Reid(1995)
- [EPIC 210643507](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=%403995810&Name=EPIC+210643507&submit=display+all+measurements#lab_meas) p=22d
- [EPIC 210839963](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210839963&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id) no rotation
- [EPIC 210835057](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210835057&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id)
- [EPIC 247230044](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+247230044&submit=submit+id)
- [EPIC 247254123](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=+%0904+35+13.549+%09%2B20+08+01.41+%09&Radius=2&Radius.unit=arcsec&submit=submit+query)
- [EPIC 247523445](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+247523445&submit=SIMBAD+search)
- [EPIC 247829435](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+46+44.990+%09%2B24+36+40.40&CooFrame=FK5&CooEpoch=2000&CooEqui=2000&CooDefinedFrames=none&Radius=2&Radius.unit=arcsec&submit=submit+query&CoordList=)
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
```
## Data Transformations
```
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
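The trailing-comma note in the comment above is easy to illustrate: without the comma, the parentheses are mere grouping and you get a plain float, not the one-element tuple that `Normalize` expects as its mean/std sequences:

```python
# (0.1307) is a float; (0.1307,) is a 1-tuple — only the latter is a sequence.
not_a_tuple = (0.1307)
a_tuple = (0.1307,)

print(type(not_a_tuple).__name__)  # float
print(type(a_tuple).__name__)      # tuple
```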
`transforms.Compose` composes several transforms together. The main transforms used or worth knowing here:

- `ToTensor()` converts a ``PIL Image`` or ``numpy.ndarray`` of shape (H x W x C) in the range [0, 255] to a ``torch.FloatTensor`` of shape (C x H x W) in the range [0.0, 1.0], provided the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or the numpy.ndarray has dtype = np.uint8.
- `Normalize` normalizes a tensor image with mean and standard deviation. Given mean ``(M1,...,Mn)`` and std ``(S1,...,Sn)`` for ``n`` channels, it normalizes each channel of the input ``torch.*Tensor``, i.e. ``input[channel] = (input[channel] - mean[channel]) / std[channel]``.
- `Resize` resizes the input PIL Image to the given size.
- `CenterCrop` crops the given PIL Image at the center.
- `Pad` pads the given PIL Image on all sides with the given "pad" value.
- `RandomTransforms` is the base class for a list of transformations with randomness.
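The `Normalize` formula can be checked with plain arithmetic. A minimal sketch using the MNIST mean/std from the transforms above (the pixel value 0.5 is a made-up example, assumed already scaled to [0, 1] by `ToTensor`):

```python
MEAN, STD = 0.1307, 0.3081  # MNIST statistics used in the transforms above

def normalize(value, mean=MEAN, std=STD):
    # input[channel] = (input[channel] - mean[channel]) / std[channel]
    return (value - mean) / std

print(normalize(0.5))   # ~1.1986
print(normalize(MEAN))  # 0.0 — the mean maps exactly to zero
```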
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - typically you'd fetch these from the command prompt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
#defining the network structure
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Dropout(0.15),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Dropout(0.15),
nn.Conv2d(in_channels=32, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.MaxPool2d(2, 2)
)
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Dropout(0.15),
nn.Conv2d(in_channels=32, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Dropout(0.15)
)
self.conv3 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Dropout(0.15),
nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(5, 5), padding=0, bias=False)
)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=1)
```
# Model Params
Can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no built-in model visualizer, so we take external help from the `torchsummary` package.
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
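The spatial sizes reported by `summary` can be double-checked by hand with the standard output-size formula `out = (in - kernel + 2*padding) // stride + 1` (a 2x2 max pool with stride 2 is the same formula with kernel 2, stride 2). Tracing the layers of `Net` above from the 28x28 input:

```python
def conv_out(size, kernel, padding=0, stride=1):
    # standard convolution/pooling output-size formula
    return (size - kernel + 2 * padding) // stride + 1

size = 28
for kernel in (3, 3, 3):            # conv1: three 3x3 convs, no padding: 28->26->24->22
    size = conv_out(size, kernel)
size = conv_out(size, 2, stride=2)  # 2x2 max pool: 22 -> 11
for kernel in (3, 3, 3):            # conv2 (two 3x3) + first 3x3 of conv3: 11->9->7->5
    size = conv_out(size, kernel)
size = conv_out(size, 5)            # final 5x5 conv: 5 -> 1
print(size)  # 1 — a 1x1x10 map, flattened by x.view(-1, 10)
```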
# Training and Testing
Looking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cooler logs.
Let's write train and test functions
```
!pip install tqdm
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
train_losses.append(loss.item())  # store a plain float, not the graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
```
# Let's Train and test our model
```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.0335, momentum=0.9)
EPOCHS = 15
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
```
# Data Analysis of the Consumidor.gov.br Platform in 2019
Consumidor.gov.br, a platform created by the Federal Government as an alternative to relieve the load on Procon, also brought consumers and companies closer together for resolving disputes, since there are no intermediaries. It is a free public service, monitored by consumer protection agencies together with the Secretaria Nacional do Consumidor of the Ministry of Justice.

An exploratory analysis of the platform makes it possible to understand how it is being used and whether its purpose is being fulfilled.
The data were obtained from the Consumidor.gov.br site itself, in the open-data section, which stores data from 2014 to 2020.
### Importing Libraries
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%pylab inline
plt.style.use('ggplot')
```
### Loading the Dataset
The platform data for 2019 are stored in a single xlsx file.
```
df = pd.read_excel('C:\Analises_Exploratorias\Analises_Exploratorias\Consumidor.gov.br\Dados_gov.xlsx')
df.columns=['UF', 'Cidade', 'Sexo', 'Faixa Etária', 'Data Finalização', 'Tempo Resposta', 'Nome Fantasia', 'Segmentação de Mercado', 'Área', 'Assunto', 'Grupo Problema', 'Problema', 'Como Comprou Contratou', 'Procurou Empresa', 'Respondida', 'Situação', 'Avaliação Reclamação', 'Nota Consumidor']
```
### Data Dictionary
- UF: State abbreviation of the complaining consumer;
- Cidade: Municipality of the complaining consumer;
- Sexo: Sex of the complaining consumer;
- Faixa Etária: Age bracket of the consumer;
- Data Finalização: Date on which the complaint was closed;
- Tempo Resposta: Number of days taken to answer the complaint, excluding any time the complaint spent under review by the manager;
- Nome Fantasia: Trade name by which the reported company is known in the market;
- Segmentação de Mercado: Main market segment of the participating company;
- Área: Area to which the subject of the complaint belongs;
- Assunto: Subject of the complaint;
- Grupo Problema: Group to which the problem classified in the complaint belongs;
- Problema: Description of the problem that is the subject of the complaint;
- Como Comprou Contratou: Means used to purchase/contract the reported product or service;
- Procurou Empresa: Consumer's answer to the question: "Did you contact the company to solve the problem?";
- Respondida: Flag indicating whether or not the company answered the complaint;
- Situação: Current status of the complaint in the system;
- Avaliação Reclamação: Rating given by the consumer to the outcome of the complaint;
- Nota do Consumidor: Score from 1 to 5 given by the consumer to the company's handling of the case;
```
df.head()
```
## Exploratory Data Analysis
```
# Number of entries and variables
print("Number of rows in df:", df.shape[0])
print("Number of variables in df:", df.shape[1])
```
Note that the dataset is quite large, with hundreds of thousands of entries. This shows that the platform, as a dispute-resolution tool, has been in high demand among consumers.
With df.dtypes we can see which data types are present in the DataFrame.
```
# Variable types
df.dtypes
```
The vast majority of the columns are of type object.
Next we check the percentage of null values in the DataFrame. Depending on how many there are, we will need to either drop those entries or fill in the gaps.
```
# Percentage of null values in each variable
df.isnull().sum().sort_values(ascending=False) / df.shape[0]
```
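The per-column null fraction computed above is simply the count of missing cells divided by the number of rows. A toy illustration on hypothetical data (not the Consumidor.gov.br table):

```python
import pandas as pd

# hypothetical 4-row frame: one missing score, no missing state
toy = pd.DataFrame({
    "UF": ["SP", "RJ", "MG", "BA"],
    "Nota Consumidor": [5.0, None, 3.0, 4.0],
})

null_fraction = toy.isnull().sum().sort_values(ascending=False) / toy.shape[0]
print(null_fraction)  # Nota Consumidor -> 0.25, UF -> 0.0
```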
Tempo Resposta has a negligible 1% of null values, while Nota Consumidor is missing almost 44% of its data. For a more precise analysis, we make a copy of the DataFrame called df_limpo from which all rows with null values are removed.
```
# Create a new DataFrame from which null values will be dropped
df_limpo = df.copy()
# drop the null values
df_limpo.dropna(axis=0, inplace=True)
```
### Dataset Description
We now look at some summary statistics of the DataFrame using the describe() function. Since only 2 variables hold float values, statistics are shown for those 2 columns only.
```
df.describe()
```
For the Tempo Resposta variable we find the following statistics:
- Companies take 6.5 days on average to answer consumers;
- The longest time a company took to give a response was 15 days;
The response time, which is only counted once the complaint is closed, is on the whole satisfactory.
For the Nota Consumidor variable we find the following statistics:
- The average score given by consumers was 3.2 points;
- The median score was 4 points;
- Only 25% of the scores were 5 points;
The scores given by consumers show that complaint resolution has not yet reached a high level of satisfaction.
## Graphical Analysis of the Data
With the exploratory analysis done, the next step is to examine the plots and identify what insights can be drawn from them.
The first plot covers the 6 states with the highest share of consumers who filed complaints on the platform. For that, we define the DataFrame df_uf with those values.
```
# DataFrame with the 6 states with the highest share of consumers
df_uf = (df['UF'].value_counts() / df.shape[0])[0:6].copy()
# gráfico de barras para os 6 estados com maior número de registros
sns.set()
fig, ax = plt.subplots(figsize=(10, 6))
df_uf.plot(kind="bar", ax=ax)
ax.set_xlabel("Estados", fontsize=16)
ax.set_ylabel("Porcentagem de Registros", fontsize=16)
ax.set_title("6 Estados com o Maior Número de Registros de Reclamações", fontsize=20)
plt.xticks(rotation=0)
plt.tight_layout()
```
We find that São Paulo has the highest number of complaints registered on the platform, which is consistent with it being the most populous state in the country and its inhabitants having greater purchasing power.
Of the 6 states with the most complaints, 3 are in the Southeast and 2 in the South, with 6th place going to Bahia, in the Northeast region.
```
# DataFrame with the record counts of the Faixa Etária variable
df_etaria = df['Faixa Etária'].value_counts().copy()
# bar chart of the consumers' age groups
fig, ax = plt.subplots(figsize=(10,6))
df_etaria.plot(kind='bar', ax=ax)
ax.set_title("Faixa Etária dos Consumidores", fontsize=20)
ax.set_xlabel("Faixa Etária", fontsize=16)
ax.set_ylabel("Número de Registros", fontsize=16)
plt.xticks(rotation=35)
plt.tight_layout()
```
We see that the large majority of consumers are between 21 and 40 years old. This could hardly be otherwise, given that people in this age range account for the bulk of access to technology.
```
# Number of complaints by sex
df_sexo = df['Sexo'].value_counts()
fig, ax = plt.subplots(figsize=(8,6))
sns.set(style='darkgrid')
ax.set_title("Reclamações por Sexo", fontsize=20)
# use the value_counts index/values pair so labels and counts stay aligned
sns.barplot(x=df_sexo.index, y=df_sexo.values, ax=ax)
plt.tight_layout()
```
We can see that most of the consumers who registered complaints on the platform are male.
```
# Did they contact the company before registering the complaint? (31-40 age group)
fig, ax = plt.subplots(figsize=(8,4))
df2 = df[df[u'Faixa Etária']=='entre 31 a 40 anos']
df2['Procurou Empresa'].value_counts().plot.barh(ax=ax)
ax.set_title("Procurou Empresa antes de Registrar Reclamação?", fontsize=20)
plt.tight_layout()
# DataFrame with the 10 companies with the most complaints
df_empresas = df['Nome Fantasia'].value_counts()[0:10].copy()
# bar chart of the 10 companies with the most complaints
fig, ax = plt.subplots(figsize=(10,5))
df_empresas.plot(kind='barh', ax=ax)
ax.set_title('10 Empresas com Maior Número de Reclamações', fontsize=20)
ax.set_xlabel('Número de Registros', fontsize=16)
ax.set_ylabel('Empresas', fontsize=16)
plt.show()
```
Of the 10 companies with the most complaints, 5 are in the telephony and/or internet segment.
```
# Most common problem group
df['Grupo Problema'].value_counts()
# bar chart of the number of complaints answered vs. not answered
fig, ax = plt.subplots()
sns.countplot(x=df['Respondida'], ax=ax);
ax.set_title("Quantidade de Reclamações Respondidas e Não Respondidas", fontsize=20)
ax.set_xlabel("Resposta", fontsize=16)
ax.set_ylabel("Quantidade de Entradas", fontsize=16)
```
The communication channel between consumer and company established by the platform is effective, with a high rate of responses aimed at resolving the disputes.
In the Exploratory Analysis we saw statistics related to response time, such as the average time companies take to close a complaint record.
For a better view of this variable we can use a histogram, which shows the relationship between the number of records and the amount of time spent to obtain a response.
```
# histogram of the response time
fig, ax = plt.subplots(figsize=(10,6))
df.hist('Tempo Resposta', ax=ax)
ax.set_title('Tempo de Resposta', fontsize=20)
ax.set_xlabel('Número de Dias', fontsize=16)
ax.set_ylabel('Número de Entradas', fontsize=16)
plt.show()
```
The histogram shows that most complaints are answered within 6 to 10 days.
Another important histogram is that of the rating consumers gave to the resolution of their problems.
```
# histogram of the Nota Consumidor variable
fig, ax = plt.subplots(figsize=(10,6))
df_limpo.hist('Nota Consumidor', ax=ax)
ax.set_title('Nota do Consumidor', fontsize=20)
ax.set_xlabel('Nota', fontsize=16)
ax.set_ylabel('Número de Entradas', fontsize=16)
plt.show()
```
## Conclusion
Several factors can be blamed for the exponential growth in Brazilian consumption. The encouragement of unrestrained, unconscious consumerism by companies and society at large, combined with the planned obsolescence of products, may be the biggest drivers of this number. And, as with any sector or any service/product offered, problems happen.
Regardless of the causes, and despite the many economic and social contributions of this behavior, one of the most harmful consequences to society is the gradual flooding of the courts with cases involving consumer relations.
As we saw throughout this analysis, the platform has seen growing adoption and resolves the problems raised by consumers satisfactorily, with short response times and a considerably good approval rate.
Thus, this space, which promotes voluntary and participatory dialogue between consumers and companies from many economic niches, makes it possible to reach an outcome that benefits both parties. It also encourages customers to do business again with the companies they complained about. Today, the platform has around 861 active companies in its catalog.
# `GiRaFFE_NRPy`: Solving the Induction Equation
## Author: Patrick Nelson
This notebook documents the function from the original `GiRaFFE` that calculates the flux for $A_i$ according to the method of Harten, Lax, van Leer, and Einfeldt (HLLE), assuming that we have calculated the values of the velocity and magnetic field on the cell faces according to the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), modified for the case of GRFFE.
**Notebook Status:** <font color=green><b> Validated </b></font>
**Validation Notes:** This code has been validated by showing that it converges to the exact answer at the expected order
### NRPy+ Source Code for this module:
* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py)
Our goal in this module is to write the code necessary to solve the induction equation
$$
\partial_t A_i = \underbrace{\epsilon_{ijk} v^j B^k}_{\rm Flux\ terms} - \underbrace{\partial_i \left(\alpha \Phi - \beta^j A_j \right)}_{\rm Gauge\ terms}.
$$
To properly handle the flux terms and avoid problems with shocks, we cannot simply take a cross product of the velocity and magnetic field at the cell centers. Instead, we must solve the Riemann problem at the cell faces using the reconstructed values of the velocity and magnetic field on either side of the cell faces. The reconstruction is done using PPM (see [here](Tutorial-GiRaFFE_NRPy-PPM.ipynb)); in this module, we will assume that that step has already been done. Metric quantities are assumed to have been interpolated to cell faces, as is done in [this](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) tutorial.
Eqs. 30 and 31 of Tóth's [paper](https://www.sciencedirect.com/science/article/pii/S0021999100965197?via%3Dihub) give one of the first implementations of such a scheme. The original `GiRaFFE` used a 2D version of the algorithm from [Del Zanna, et al. (2002)](https://arxiv.org/abs/astro-ph/0210618); but since we are not using staggered grids, we can greatly simplify this algorithm with respect to the version used in the original `GiRaFFE`. Instead, we will adapt the implementations of the algorithm used in [Mewes, et al. (2020)](https://arxiv.org/abs/2002.06225) and [Giacomazzo, et al. (2011)](https://arxiv.org/abs/1009.2468), Eqs. 3-11.
We first write the flux contribution to the induction equation RHS as
$$
\partial_t A_i = -E_i,
$$
where the electric field $E_i$ is given in ideal MHD (of which FFE is a subset) as
$$
-E_i = \epsilon_{ijk} v^j B^k,
$$
where $v^i$ is the drift velocity, $B^i$ is the magnetic field, and $\epsilon_{ijk} = \sqrt{\gamma} [ijk]$ is the Levi-Civita tensor.
In Cartesian coordinates,
\begin{align}
-E_x &= [F^y(B^z)]_x = -[F^z(B^y)]_x \\
-E_y &= [F^z(B^x)]_y = -[F^x(B^z)]_y \\
-E_z &= [F^x(B^y)]_z = -[F^y(B^x)]_z, \\
\end{align}
where
$$
[F^i(B^j)]_k = \sqrt{\gamma} (v^i B^j - v^j B^i).
$$
To compute the actual contribution to the RHS in some direction $i$, we average the above listed field as calculated on the $+j$, $-j$, $+k$, and $-k$ faces. That is, at some point $(i,j,k)$ on the grid,
\begin{align}
-E_x(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \right) \\
-E_y(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \right) \\
-E_z(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \right). \\
\end{align}
Note the use of $F_{\rm HLL}$ here. This change signifies that the quantity output here is from the HLLE Riemann solver. Note also the indices on the fluxes. Values of $\pm 1/2$ indicate that these are computed on cell faces using the reconstructed values of $v^i$ and $B^i$ and the interpolated values of the metric gridfunctions. So,
$$
F_{\rm HLL}^i(B^j) = \frac{c_{\rm min} F_{\rm R}^i(B^j) + c_{\rm max} F_{\rm L}^i(B^j) - c_{\rm min} c_{\rm max} (B_{\rm R}^j-B_{\rm L}^j)}{c_{\rm min} + c_{\rm max}}.
$$
The speeds $c_\min$ and $c_\max$ are characteristic speeds at which waves can travel through the plasma. In GRFFE, the expressions defining them reduce to functions of only the metric quantities. $c_\min$ is the negative of the minimum amongst the speeds $c_-$ and $0$, and $c_\max$ is the maximum amongst the speeds $c_+$ and $0$. The speeds $c_\pm = \left. \left(-b \pm \sqrt{b^2-4ac}\right)\middle/ \left(2a\right) \right.$ must be calculated on both the left and right faces, where
$$a = 1/\alpha^2,$$
$$b = 2 \beta^i / \alpha^2,$$
and $$c = -g^{ii} + (\beta^i)^2/\alpha^2.$$
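As a quick numerical sketch of these formulas (a hand-rolled helper, not the notebook's NRPy+ function): in flat spacetime, where $\alpha=1$, $\beta^i=0$, and $\gamma^{ii}=1$, the quadratic gives $c_\pm = \pm 1$, i.e. waves travel at the speed of light.

```python
import math

def find_cp_cm_sketch(alpha, beta_i, gammaUU_ii):
    # Simplified GRFFE coefficients (v_0^2 = 1), as derived later in this notebook
    a = 1.0 / alpha**2
    b = 2.0 * beta_i / alpha**2
    c = -gammaUU_ii + beta_i**2 / alpha**2
    # Clamp the discriminant at zero before taking the square root
    detm = math.sqrt(max(0.0, b*b - 4.0*a*c))
    return (-b + detm) / (2.0*a), (-b - detm) / (2.0*a)

# Flat spacetime: alpha = 1, beta^i = 0, gamma^{ii} = 1
cplus, cminus = find_cp_cm_sketch(1.0, 0.0, 1.0)
# cplus = +1.0, cminus = -1.0: waves propagate at the speed of light
```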
An outline of a general finite-volume method is as follows, with the current step in bold:
1. The Reconstruction Step - Piecewise Parabolic Method
1. Within each cell, fit to a function that conserves the volume in that cell using information from the neighboring cells
* For PPM, we will naturally use parabolas
1. Use that fit to define the state at the left and right interface of each cell
1. Apply a slope limiter to mitigate Gibbs phenomenon
1. Interpolate the value of the metric gridfunctions on the cell faces
1. **Solving the Riemann Problem - Harten, Lax, van Leer, and Einfeldt (HLLE) (This notebook, $E_i$ only)**
1. **Use the left and right reconstructed states to calculate the unique state at boundary**
We will assume in this notebook that the reconstructed velocities and magnetic fields are available on cell faces as input. We will also assume that the metric gridfunctions have been interpolated to the cell faces.
Solving the Riemann problem, then, consists of two substeps: First, we compute the flux through each face of the cell. Then, we add the average of these fluxes to the right-hand side of the evolution equation for the vector potential.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#prelim): Preliminaries
1. [Step 2](#a_i_flux): Computing the Magnetic Flux
1. [Step 2.a](#hydro_speed): GRFFE characteristic wave speeds
1. [Step 2.b](#fluxes): Compute the HLLE fluxes
1. [Step 3](#code_validation): Code Validation against `GiRaFFE_NRPy.Afield_flux` NRPy+ Module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='prelim'></a>
# Step 1: Preliminaries \[Back to [top](#toc)\]
$$\label{prelim}$$
We begin by importing the NRPy+ core functionality. We also import the Levi-Civita symbol, the GRHD module, and the GRFFE module.
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os, sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import outCfunction, outputC # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
thismodule = "GiRaFFE_NRPy-Afield_flux"
import GRHD.equations as GRHD
# import GRFFE.equations as GRFFE
```
<a id='a_i_flux'></a>
# Step 2: Computing the Magnetic Flux \[Back to [top](#toc)\]
$$\label{a_i_flux}$$
<a id='hydro_speed'></a>
## Step 2.a: GRFFE characteristic wave speeds \[Back to [top](#toc)\]
$$\label{hydro_speed}$$
Next, we will find the speeds at which the hydrodynamic waves propagate. We start from the speed of light (since FFE deals with very diffuse plasmas), which is $c=1.0$ in our chosen units. We then find the speeds $c_+$ and $c_-$ on each face with the function `find_cp_cm`; then, we find the minimum and maximum speeds possible from among those.
Below is the source code for `find_cp_cm`, edited to work with the NRPy+ version of GiRaFFE. One edit we need to make in particular is to the term `psim4*gupii` in the definition of `c`; that was written assuming the use of the conformal metric $\tilde{g}^{ii}$. Since we are not using that here, and are instead using the ADM metric, we should not multiply by $\psi^{-4}$.
```c
static inline void find_cp_cm(REAL &cplus,REAL &cminus,const REAL v02,const REAL u0,
const REAL vi,const REAL lapse,const REAL shifti,
const REAL gammadet,const REAL gupii) {
const REAL u0_SQUARED=u0*u0;
const REAL ONE_OVER_LAPSE_SQUARED = 1.0/(lapse*lapse);
// sqrtgamma = psi6 -> psim4 = gammadet^(-1.0/3.0)
const REAL psim4 = pow(gammadet,-1.0/3.0);
//Find cplus, cminus:
const REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;
const REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );
const REAL c = u0_SQUARED*vi*vi * (1.0-v02) - v02 * ( gupii -
shifti*shifti*ONE_OVER_LAPSE_SQUARED);
REAL detm = b*b - 4.0*a*c;
//ORIGINAL LINE OF CODE:
//if(detm < 0.0) detm = 0.0;
//New line of code (without the if() statement) has the same effect:
detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */
cplus = 0.5*(detm-b)/a;
cminus = -0.5*(detm+b)/a;
if (cplus < cminus) {
const REAL cp = cminus;
cminus = cplus;
cplus = cp;
}
}
```
Comments documenting this have been excised for brevity, but are reproduced in $\LaTeX$ [below](#derive_speed).
We could use this code directly, but there's substantial improvement we can make by changing the code into a NRPyfied form. Note the `if` statement; NRPy+ does not know how to handle these, so we must eliminate it if we want to leverage NRPy+'s full power. (Calls to `fabs()` are also cheaper than `if` statements.) This can be done if we rewrite this, taking inspiration from the other eliminated `if` statement documented in the above code block:
```c
cp = 0.5*(detm-b)/a;
cm = -0.5*(detm+b)/a;
cplus = 0.5*(cp+cm+fabs(cp-cm));
cminus = 0.5*(cp+cm-fabs(cp-cm));
```
This can be simplified further, by substituting `cp` and `cm` into the below equations and eliminating terms as appropriate. First note that `cp+cm = -b/a` and that `cp-cm = detm/a`. Thus,
```c
cplus = 0.5*(-b/a + fabs(detm/a));
cminus = 0.5*(-b/a - fabs(detm/a));
```
This fulfills the original purpose of the `if` statement in the original code because we have guaranteed that $c_+ \geq c_-$.
This leaves us with an expression that can be much more easily NRPyfied. So, we will rewrite the following in NRPy+, making only minimal changes to be proper Python. However, it turns out that we can make this even simpler. In GRFFE, $v_0^2$ is guaranteed to be exactly one. In GRMHD, this speed was calculated as $$v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right),$$ where the Alfvén speed $v_{\rm A}^{2}$ is given by $$v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}.$$ So, we can see that when the density $\rho_b$ goes to zero, $v_{0}^{2} = v_{\rm A}^{2} = 1$. Then
\begin{align}
a &= (u^0)^2 (1-v_0^2) + v_0^2/\alpha^2 \\
&= 1/\alpha^2 \\
b &= 2 \left(\beta^i v_0^2 / \alpha^2 - (u^0)^2 v^i (1-v_0^2)\right) \\
&= 2 \beta^i / \alpha^2 \\
c &= (u^0)^2 (v^i)^2 (1-v_0^2) - v_0^2 \left(\gamma^{ii} - (\beta^i)^2/\alpha^2\right) \\
&= -\gamma^{ii} + (\beta^i)^2/\alpha^2,
\end{align}
are simplifications that should save us some time; we can see that $a \geq 0$ is guaranteed. Note that we also force `detm` to be positive. Thus, `detm/a` is guaranteed to be positive itself, rendering the calls to `nrpyAbs()` superfluous. Furthermore, we eliminate any dependence on the Valencia 3-velocity and the time component of the four-velocity, $u^0$. This leaves us free to solve the quadratic in the familiar way: $$c_\pm = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.$$
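The simplification above can be spot-checked numerically (the values below are arbitrary illustrative choices; any values work, since every $(1-v_0^2)$ factor vanishes when $v_0^2 = 1$):

```python
# Arbitrary values for the quantities appearing in the full GRMHD coefficients
u0, vi, alpha, beta, gammaUU = 3.7, 0.4, 1.2, 0.3, 1.5
v02 = 1.0  # the GRFFE limit

# Full GRMHD expressions for the quadratic coefficients...
a = u0**2*(1.0 - v02) + v02/alpha**2
b = 2.0*(beta*v02/alpha**2 - u0**2*vi*(1.0 - v02))
c = u0**2*vi**2*(1.0 - v02) - v02*(gammaUU - beta**2/alpha**2)

# ...collapse to the simplified GRFFE forms
assert abs(a - 1.0/alpha**2) < 1e-15
assert abs(b - 2.0*beta/alpha**2) < 1e-15
assert abs(c - (-gammaUU + beta**2/alpha**2)) < 1e-15
```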
```
# We'll write this as a function so that we can calculate the expressions on-demand for any choice of i
def find_cp_cm(lapse,shifti,gammaUUii):
# Inputs: u0,vi,lapse,shift,gammadet,gupii
# Outputs: cplus,cminus
# a = 1/(alpha^2)
a = sp.sympify(1)/(lapse*lapse)
# b = 2 beta^i / alpha^2
b = sp.sympify(2) * shifti /(lapse*lapse)
# c = -g^{ii} + (beta^i)^2 / alpha^2
c = - gammaUUii + shifti*shifti/(lapse*lapse)
# Now, we are free to solve the quadratic equation as usual. We take care to avoid passing a
# negative value to the sqrt function.
detm = b*b - sp.sympify(4)*a*c
import Min_Max_and_Piecewise_Expressions as noif
detm = sp.sqrt(noif.max_noif(sp.sympify(0),detm))
global cplus,cminus
cplus = sp.Rational(1,2)*(-b/a + detm/a)
cminus = sp.Rational(1,2)*(-b/a - detm/a)
```
In flat spacetime, where $\alpha=1$, $\beta^i=0$, and $\gamma^{ij} = \delta^{ij}$, $c_+ > 0$ and $c_- < 0$. For the HLLE solver, we will need both `cmax` and `cmin` to be positive; we also want to choose the speed that is larger in magnitude because overestimating the characteristic speeds will help damp unwanted oscillations. (However, in GRFFE, we only get one $c_+$ and one $c_-$, so we only need to fix the signs here.) Hence, the following function.
We will now write a function in NRPy+ similar to the one used in the old `GiRaFFE`, allowing us to generate the expressions with less need to copy-and-paste code; the key difference is that this one will be in Python, and generate optimized C code integrated into the rest of the operations. Notice that since we eliminated the dependence on velocities, none of the input quantities are different on either side of the face. So, this function won't really do much besides guarantee that `cmax` and `cmin` are positive, but we'll leave the machinery here since it is likely to be a useful guide to somebody who wants to do something similar. The only modifications we'll make are those necessary to eliminate calls to `fabs(0)` in the C code. We use the same technique as above to replace the `if` statements inherent to the `MAX()` and `MIN()` functions.
```
# We'll write this as a function, and call it within HLLE_solver, below.
def find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face):
# Inputs: flux direction field_comp, Inverse metric gamma_faceUU, shift beta_faceU,
# lapse alpha_face, metric determinant gammadet_face
# Outputs: maximum and minimum characteristic speeds cmax and cmin
# First, we need to find the characteristic speeds on each face
gamma_faceUU,unusedgammaDET = ixp.generic_matrix_inverter3x3(gamma_faceDD)
# Original needed for GRMHD
# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
# cpr = cplus
# cmr = cminus
# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
# cpl = cplus
# cml = cminus
find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])
cp = cplus
cm = cminus
# The following algorithms have been verified with random floats:
global cmax,cmin
# Now, we need to set cmax to the larger of cpr,cpl, and 0
import Min_Max_and_Piecewise_Expressions as noif
cmax = noif.max_noif(cp,sp.sympify(0))
# And then, set cmin to the smaller of cmr,cml, and 0
cmin = -noif.min_noif(cm,sp.sympify(0))
```
<a id='fluxes'></a>
## Step 2.b: Compute the HLLE fluxes \[Back to [top](#toc)\]
$$\label{fluxes}$$
Here, we calculate the flux and state vectors for the electric field. The flux vector is here given as
$$
[F^i(B^j)]_k = \sqrt{\gamma} (v^i B^j - v^j B^i).
$$
Here, $v^i$ is the drift velocity and $B^i$ is the magnetic field.
This can be easily handled for an input flux direction $i$ with
$$
[F^j(B^k)]_i = \epsilon_{ijk} v^j B^k,
$$
where $\epsilon_{ijk} = \sqrt{\gamma} [ijk]$ and $[ijk]$ is the Levi-Civita symbol.
The state vector is simply the magnetic field $B^j$.
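The equivalence of the two forms above can be spot-checked with plain Python (the sample values for $\sqrt{\gamma}$, $v^i$, and $B^i$ are arbitrary):

```python
def levi_civita(i, j, k):
    # Levi-Civita symbol [ijk] via the product formula used later in this notebook
    return (i - j) * (j - k) * (k - i) // 2

sqrtgamma = 2.0                 # sample sqrt(gamma)
v = [0.1, 0.2, 0.3]             # sample drift velocity v^i
B = [1.0, -1.0, 0.5]            # sample magnetic field B^i
# -E_x from the contraction eps_{ijk} v^j B^k with i = 0
E0 = sum(sqrtgamma * levi_civita(0, j, k) * v[j] * B[k]
         for j in range(3) for k in range(3))
# -E_x from the direct form sqrt(gamma) * (v^y B^z - v^z B^y)
E0_direct = sqrtgamma * (v[1]*B[2] - v[2]*B[1])
```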
```
def calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gammaDD,betaU,alpha,ValenciavU,BU):
# Define Levi-Civita symbol
def define_LeviCivitaSymbol_rank3(DIM=-1):
if DIM == -1:
DIM = par.parval_from_str("DIM")
LeviCivitaSymbol = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# From https://codegolf.stackexchange.com/questions/160359/levi-civita-symbol :
LeviCivitaSymbol[i][j][k] = (i - j) * (j - k) * (k - i) * sp.Rational(1,2)
return LeviCivitaSymbol
GRHD.compute_sqrtgammaDET(gammaDD)
# Here, we import the Levi-Civita tensor and compute the tensor with lower indices
LeviCivitaDDD = define_LeviCivitaSymbol_rank3()
for i in range(3):
for j in range(3):
for k in range(3):
LeviCivitaDDD[i][j][k] *= GRHD.sqrtgammaDET
global U,F
# Flux F = \epsilon_{ijk} v^j B^k
F = sp.sympify(0)
for j in range(3):
for k in range(3):
F += LeviCivitaDDD[field_comp][j][k] * (alpha*ValenciavU[j]-betaU[j]) * BU[k]
# U = B^i
U = BU[flux_dirn]
```
Now, we write a standard HLLE solver based on eq. 3.15 in [the HLLE paper](https://epubs.siam.org/doi/pdf/10.1137/1025002),
$$
F^{\rm HLL} = \frac{c_{\rm min} F_{\rm R} + c_{\rm max} F_{\rm L} - c_{\rm min} c_{\rm max} (U_{\rm R}-U_{\rm L})}{c_{\rm min} + c_{\rm max}}
$$
```
def HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul):
# This solves the Riemann problem for the flux of E_i in one direction
# F^HLL = (c_\min f_R + c_\max f_L - c_\min c_\max ( st_j_r - st_j_l )) / (c_\min + c_\max)
return (cmin*Fr + cmax*Fl - cmin*cmax*(Ur-Ul) )/(cmax + cmin)
```
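As a sanity check of this formula (restated here with plain floats; the sample fluxes are arbitrary): for a continuous state ($U_R = U_L$), the dissipation term vanishes, and with symmetric speeds the HLLE flux reduces to the average of the left and right fluxes.

```python
def hlle_flux(cmax, cmin, Fr, Fl, Ur, Ul):
    # Eq. 3.15 of Harten, Lax & van Leer, same algebra as HLLE_solver above
    return (cmin*Fr + cmax*Fl - cmin*cmax*(Ur - Ul)) / (cmax + cmin)

F_hll = hlle_flux(cmax=1.0, cmin=1.0, Fr=2.0, Fl=4.0, Ur=1.0, Ul=1.0)
# Ur == Ul and cmax == cmin == 1: plain average of the fluxes, here 3.0
```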
Here, we will use the function we just wrote to calculate the flux through a face. We will pass the reconstructed Valencia 3-velocity and magnetic field on either side of an interface to this function (designated as the "left" and "right" sides) along with the value of the 3-metric, shift vector, and lapse function on the interface. The parameter `flux_dirn` specifies through which face we are calculating the flux. However, unlike when we used this method to calculate the flux term, the RHS of each component of $A_i$ does not depend on all three of the flux directions. Instead, the flux of one component of the $E_i$ field depends on the flux through the faces in the other two directions. This will be handled when we generate the C function, as demonstrated in the example code after this next function.
Note that we allow the user to declare their own gridfunctions if they wish, and default to declaring basic symbols if they are not provided. The default names are chosen to imply interpolation of the metric gridfunctions and reconstruction of the primitives.
```
def calculate_E_i_flux(flux_dirn,alpha_face=None,gamma_faceDD=None,beta_faceU=None,\
Valenciav_rU=None,B_rU=None,Valenciav_lU=None,B_lU=None):
global E_fluxD
E_fluxD = ixp.zerorank1()
for field_comp in range(3):
find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face)
calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\
Valenciav_rU,B_rU)
Fr = F
Ur = U
calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\
Valenciav_lU,B_lU)
Fl = F
Ul = U
E_fluxD[field_comp] += HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul)
```
Below, we will write some example code to use the above functions to generate C code for `GiRaFFE_NRPy`. We need to write our own memory reads and writes because we need to add contributions from *both* faces in a given direction, which is expressed in the code as adding contributions from adjacent gridpoints to the RHS, which is not something `FD_outputC` can handle. The `.replace()` function calls adapt these reads and writes to the different directions. Note that, for reconstructions in a given direction, the fluxes are only added to the other two components, as can be seen in the equations we are implementing.
\begin{align}
-E_x(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \right) \\
-E_y(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \right) \\
-E_z(x_i,y_j,z_k) &= \frac{1}{4} \left( [F_{\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \right). \\
\end{align}
From this, we can see that when, for instance, we reconstruct and interpolate in the $x$-direction, we must add only to the $y$- and $z$-components of the electric field.
Recall that when we reconstructed the velocity and magnetic field, we reconstructed to the $i-1/2$ face, so the data at $i+1/2$ is stored at $i+1$.
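The index-shift bookkeeping described above is handled in the generator below by a plain string replacement on the memory reads. A minimal sketch of the trick (the gridfunction name is just an example):

```python
# A memory read written for the i-1/2 face of cell i0...
read_left = "const double B_rU0 = auxevol_gfs[IDX4S(B_RU0GF, i0,i1,i2)];"
# ...is reused for the i+1/2 face by shifting the first grid index
read_right = read_left.replace("i0", "i0+1")
```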
```
def generate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True):
if not inputs_provided:
# declare all variables
alpha_face = sp.symbols(alpha_face)
beta_faceU = ixp.declarerank1("beta_faceU")
gamma_faceDD = ixp.declarerank2("gamma_faceDD","sym01")
Valenciav_rU = ixp.declarerank1("Valenciav_rU")
B_rU = ixp.declarerank1("B_rU")
Valenciav_lU = ixp.declarerank1("Valenciav_lU")
B_lU = ixp.declarerank1("B_lU")
Memory_Read = """const double alpha_face = auxevol_gfs[IDX4S(ALPHA_FACEGF, i0,i1,i2)];
const double gamma_faceDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF, i0,i1,i2)];
const double gamma_faceDD01 = auxevol_gfs[IDX4S(GAMMA_FACEDD01GF, i0,i1,i2)];
const double gamma_faceDD02 = auxevol_gfs[IDX4S(GAMMA_FACEDD02GF, i0,i1,i2)];
const double gamma_faceDD11 = auxevol_gfs[IDX4S(GAMMA_FACEDD11GF, i0,i1,i2)];
const double gamma_faceDD12 = auxevol_gfs[IDX4S(GAMMA_FACEDD12GF, i0,i1,i2)];
const double gamma_faceDD22 = auxevol_gfs[IDX4S(GAMMA_FACEDD22GF, i0,i1,i2)];
const double beta_faceU0 = auxevol_gfs[IDX4S(BETA_FACEU0GF, i0,i1,i2)];
const double beta_faceU1 = auxevol_gfs[IDX4S(BETA_FACEU1GF, i0,i1,i2)];
const double beta_faceU2 = auxevol_gfs[IDX4S(BETA_FACEU2GF, i0,i1,i2)];
const double Valenciav_rU0 = auxevol_gfs[IDX4S(VALENCIAV_RU0GF, i0,i1,i2)];
const double Valenciav_rU1 = auxevol_gfs[IDX4S(VALENCIAV_RU1GF, i0,i1,i2)];
const double Valenciav_rU2 = auxevol_gfs[IDX4S(VALENCIAV_RU2GF, i0,i1,i2)];
const double B_rU0 = auxevol_gfs[IDX4S(B_RU0GF, i0,i1,i2)];
const double B_rU1 = auxevol_gfs[IDX4S(B_RU1GF, i0,i1,i2)];
const double B_rU2 = auxevol_gfs[IDX4S(B_RU2GF, i0,i1,i2)];
const double Valenciav_lU0 = auxevol_gfs[IDX4S(VALENCIAV_LU0GF, i0,i1,i2)];
const double Valenciav_lU1 = auxevol_gfs[IDX4S(VALENCIAV_LU1GF, i0,i1,i2)];
const double Valenciav_lU2 = auxevol_gfs[IDX4S(VALENCIAV_LU2GF, i0,i1,i2)];
const double B_lU0 = auxevol_gfs[IDX4S(B_LU0GF, i0,i1,i2)];
const double B_lU1 = auxevol_gfs[IDX4S(B_LU1GF, i0,i1,i2)];
const double B_lU2 = auxevol_gfs[IDX4S(B_LU2GF, i0,i1,i2)];
REAL A_rhsD0 = 0; REAL A_rhsD1 = 0; REAL A_rhsD2 = 0;
"""
Memory_Write = """rhs_gfs[IDX4S(AD0GF,i0,i1,i2)] += A_rhsD0;
rhs_gfs[IDX4S(AD1GF,i0,i1,i2)] += A_rhsD1;
rhs_gfs[IDX4S(AD2GF,i0,i1,i2)] += A_rhsD2;
"""
indices = ["i0","i1","i2"]
indicesp1 = ["i0+1","i1+1","i2+1"]
for flux_dirn in range(3):
calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
E_field_to_print = [\
sp.Rational(1,4)*E_fluxD[(flux_dirn+1)%3],
sp.Rational(1,4)*E_fluxD[(flux_dirn+2)%3],
]
E_field_names = [\
"A_rhsD"+str((flux_dirn+1)%3),
"A_rhsD"+str((flux_dirn+2)%3),
]
desc = "Calculate the electric flux on the left face in direction " + str(flux_dirn) + "."
name = "calculate_E_field_D" + str(flux_dirn) + "_right"
outCfunction(
outfile = os.path.join(out_dir,subdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs",
body = Memory_Read \
+outputC(E_field_to_print,E_field_names,"returnstring",params="outCverbose=False").replace("IDX4","IDX4S")\
+Memory_Write,
loopopts ="InteriorPoints",
rel_path_for_Cparams=os.path.join("../"))
desc = "Calculate the electric flux on the left face in direction " + str(flux_dirn) + "."
name = "calculate_E_field_D" + str(flux_dirn) + "_left"
outCfunction(
outfile = os.path.join(out_dir,subdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs",
body = Memory_Read.replace(indices[flux_dirn],indicesp1[flux_dirn]) \
+outputC(E_field_to_print,E_field_names,"returnstring",params="outCverbose=False").replace("IDX4","IDX4S")\
+Memory_Write,
loopopts ="InteriorPoints",
rel_path_for_Cparams=os.path.join("../"))
```
<a id='code_validation'></a>
# Step 3: Code Validation against `GiRaFFE_NRPy.Afield_flux` NRPy+ Module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the $\texttt{GiRaFFE}$ evolution equations and auxiliary quantities we intend to use between
1. this tutorial and
2. the NRPy+ [GiRaFFE_NRPy.Afield_flux](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py) module.
Below are the gridfunction registrations we will need for testing. We will pass these to the above functions to self-validate the module that corresponds with this tutorial.
```
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Af."):
    # Declare all_passed as global so failures actually propagate out of this function
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
# These are the standard gridfunctions we've used before.
#ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
#gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01")
#betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU")
#alpha = gri.register_gridfunctions("AUXEVOL",["alpha"])
#AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD",DIM=3)
#BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3)
# We will pass values of the gridfunction on the cell faces into the function. This requires us
# to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix.
alpha_face = gri.register_gridfunctions("AUXEVOL","alpha_face")
gamma_faceDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gamma_faceDD","sym01")
beta_faceU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","beta_faceU")
# We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU
# on the right and left faces
Valenciav_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_rU",DIM=3)
B_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_rU",DIM=3)
Valenciav_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_lU",DIM=3)
B_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_lU",DIM=3)
import GiRaFFE_NRPy.Afield_flux as Af
expr_list = []
exprcheck_list = []
namecheck_list = []
for flux_dirn in range(3):
calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
Af.calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU)
namecheck_list.extend([gfnm("E_fluxD",flux_dirn)])
exprcheck_list.extend([Af.E_fluxD[flux_dirn]])
expr_list.extend([E_fluxD[flux_dirn]])
for mom_comp in range(len(expr_list)):
comp_func(expr_list[mom_comp],exprcheck_list[mom_comp],namecheck_list[mom_comp])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)
```
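`comp_func` above flags a mismatch whenever the difference of two expressions does not stringify to `"0"`, which can produce false alarms for expressions that are mathematically equal but written differently. A hedged sketch of a stricter check using SymPy's `simplify` (assuming SymPy is available; `exprs_agree` is an illustrative helper, not part of NRPy+):

```python
import sympy as sp

def exprs_agree(expr1, expr2):
    # simplify() reduces the difference of mathematically equal
    # expressions to zero even when their written forms differ.
    return sp.simplify(expr1 - expr2) == 0

x, y = sp.symbols("x y")
a = (x + y)**2
b = x**2 + 2*x*y + y**2
# str(a - b) != "0", so a pure string comparison would report a mismatch
print(exprs_agree(a, b))  # True
```

Note that `simplify` can be slow on large expressions, which is presumably why the validation above relies on SymPy's automatic cancellation plus a string comparison.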
We will also check the output C code to make sure it matches what is produced by the python module.
```
import difflib
import sys
subdir = os.path.join("RHSs")
out_dir = os.path.join("GiRaFFE_standalone_Ccodes")
cmd.mkdir(out_dir)
cmd.mkdir(os.path.join(out_dir,subdir))
valdir = os.path.join("GiRaFFE_Ccodes_validation")
cmd.mkdir(valdir)
cmd.mkdir(os.path.join(valdir,subdir))
generate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)
Af.generate_Afield_flux_function_files(valdir,subdir,alpha_face,gamma_faceDD,beta_faceU,\
Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["RHSs/calculate_E_field_D0_right.h",
"RHSs/calculate_E_field_D0_left.h",
"RHSs/calculate_E_field_D1_right.h",
"RHSs/calculate_E_field_D1_left.h",
"RHSs/calculate_E_field_D2_right.h",
"RHSs/calculate_E_field_D2_left.h"]
for file in files:
print("Checking file " + file)
with open(os.path.join(valdir,file)) as file1, open(os.path.join(out_dir,file)) as file2:
# Read the lines of each file
file1_lines = file1.readlines()
file2_lines = file2.readlines()
num_diffs = 0
for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir,file), tofile=os.path.join(out_dir,file)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with .py file. See differences above.")
```
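The loop above counts every line `unified_diff` yields, including the `---`/`+++` headers and `@@` hunk markers; identical inputs yield an empty diff, which is what the `num_diffs == 0` test relies on. A small self-contained illustration:

```python
import difflib

old = ["a = 1\n", "b = 2\n"]
new = ["a = 1\n", "b = 3\n"]

# Differing inputs produce headers, a hunk marker, and -/+ lines
diff = list(difflib.unified_diff(old, new, fromfile="old.h", tofile="new.h"))
print("".join(diff))

# Identical inputs produce no lines at all
print(list(difflib.unified_diff(old, old)))  # []
```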
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFE_NRPy-Afield_flux.pdf](Tutorial-GiRaFFE_NRPy-Afield_flux.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy-Afield_flux")
```
```
import numpy as np
import matplotlib.pyplot as plt
import math
from matplotlib import style
from collections import Counter
style.use('fivethirtyeight') #Shows Grid
import pandas as pd
import random
df = pd.read_csv('Breast-Cancer.csv',na_values = ['?'])
means = df.mean().to_dict()
df.drop(columns=['id'], inplace=True)
header = list(df)
df.fillna(df.mean(),inplace = True)
full_data = df.astype(float).values.tolist()
full_data
test_size1 = 0.5
train_data1 = full_data[:-int(test_size1*len(full_data))]
test_data1 = full_data[-int(test_size1*len(full_data)):]
len(test_data1)
test_size2 = 0.1
train_data2 = full_data[:-int(test_size2*len(full_data))]
test_data2 = full_data[-int(test_size2*len(full_data)):]
len(test_data2)
test_size3 = 0.3
train_data3 = full_data[:-int(test_size3*len(full_data))]
test_data3 = full_data[-int(test_size3*len(full_data)):]
len(test_data3)
def unique_vals(Data,col):
return set([row[col] for row in Data])
def class_counts(Data):
counts = {}
for row in Data:
label = row[-1]
if label not in counts:
counts[label] = 0
counts[label] += 1
return counts
class Question:
def __init__(self,column,value):
self.column = column
self.value = value
def match(self,example):
val = example[self.column]
return val == self.value
def __repr__(self):
return "Is %s %s %s?" %(
header[self.column],"==",str(self.value))
def partition(Data,question):
true_rows,false_rows = [],[]
for row in Data:
if(question.match(row)):
true_rows.append(row)
else:
false_rows.append(row)
return true_rows,false_rows
def gini(Data):
counts = class_counts(Data)
impurity = 1
for lbl in counts:
prob_of_lbl = counts[lbl]/float(len(Data))
impurity-=prob_of_lbl**2
return impurity
def info_gain(left,right,current_uncertainty):
p = float(len(left))/(len(left)+len(right))
return current_uncertainty - p*gini(left) - (1-p)*gini(right)
def find_best_split(Data):
best_gain = 0
best_question = None
current_uncertainty = gini(Data)
n_features = len(Data[0]) - 1
for col in range(n_features):
values = unique_vals(Data,col)
for val in values:
question = Question(col,val)
true_rows,false_rows = partition(Data,question)
if(len(true_rows) == 0 or len(false_rows)==0):
continue
gain = info_gain(true_rows,false_rows,current_uncertainty)
if gain>=best_gain:
best_gain, best_question = gain , question
return best_gain,best_question
class Leaf:
def __init__(self,Data):
self.predictions = class_counts(Data)
class Decision_Node:
def __init__(self, question, true_branch,false_branch):
self.question = question
self.true_branch = true_branch
self.false_branch = false_branch
#print(self.question)
def build_tree(Data,i=0):
gain, question = find_best_split(Data)
if gain == 0:
return Leaf(Data)
true_rows , false_rows = partition(Data,question)
true_branch = build_tree(true_rows,i)
false_branch = build_tree(false_rows,i)
return Decision_Node(question,true_branch,false_branch)
def print_tree(node,spacing=""):
if isinstance(node, Leaf):
print(spacing + "Predict",node.predictions)
return
print(spacing+str(node.question))
print(spacing + "--> True:")
print_tree(node.true_branch , spacing + " ")
print(spacing + "--> False:")
print_tree(node.false_branch , spacing + " ")
def print_leaf(counts):
total = sum(counts.values())*1.0
probs = {}
for lbl in counts.keys():
probs[lbl] = str(int(counts[lbl]/total * 100)) + "%"
return probs
def classify(row,node):
if isinstance(node,Leaf):
return node.predictions
if node.question.match(row):
return classify(row,node.true_branch)
else:
return classify(row,node.false_branch)
my_tree = build_tree(train_data1)
print_tree(my_tree)
def calc_accuracy(test_data, my_tree):
correct,total = 0,0
for row in test_data:
if(row[-1] in print_leaf(classify(row,my_tree)).keys()):
correct += 1
total += 1
return correct/total
for row in test_data1:
print("Actual: %s. Predicted: %s" % (row[-1],print_leaf(classify(row,my_tree))))
accuracy = calc_accuracy(test_data1,my_tree)
print(accuracy,"accuracy for 50% train data and 50% test data")
my_tree2 = build_tree(train_data2)
print_tree(my_tree2)
for row in test_data2:
print("Actual: %s. Predicted: %s" % (row[-1],print_leaf(classify(row,my_tree2))))
accuracy2 = calc_accuracy(test_data2,my_tree2)
print(accuracy2,"accuracy for 90% train data and 10% test data")
my_tree3 = build_tree(train_data3)
print_tree(my_tree3)
for row in test_data3:
print("Actual: %s. Predicted: %s" % (row[-1],print_leaf(classify(row,my_tree3))))
accuracy3 = calc_accuracy(test_data3,my_tree3)
print(accuracy3,"accuracy for 70% train data and 30% test data")
```
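The tree above chooses, at each node, the question with the highest information gain, computed from Gini impurity (`1 - sum(p_k**2)` over class probabilities `p_k`). A standalone re-derivation of those two formulas on a toy split (hypothetical labels, not the notebook's data):

```python
def gini(rows):
    # impurity = 1 - sum over classes of (class probability)^2
    counts = {}
    for row in rows:
        counts[row[-1]] = counts.get(row[-1], 0) + 1
    return 1 - sum((n / len(rows)) ** 2 for n in counts.values())

def info_gain(left, right, parent_uncertainty):
    # weighted reduction in impurity achieved by the split
    p = len(left) / (len(left) + len(right))
    return parent_uncertainty - p * gini(left) - (1 - p) * gini(right)

parent = [[1, "benign"], [2, "benign"], [3, "malignant"], [4, "malignant"]]
left, right = parent[:2], parent[2:]         # a perfect split by class
print(gini(parent))                          # 0.5 -- two balanced classes
print(info_gain(left, right, gini(parent)))  # 0.5 -- all uncertainty removed
```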
# Simplify network topology and consolidate intersections
Author: [Geoff Boeing](https://geoffboeing.com/)
- [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
- [GitHub repo](https://github.com/gboeing/osmnx)
- [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
- [Documentation](https://osmnx.readthedocs.io/en/stable/)
- [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
```
import networkx as nx
import osmnx as ox
%matplotlib inline
ox.__version__
```
## 1. Complex intersection consolidation
Many real-world street networks feature complex intersections and traffic circles, resulting in a cluster of graph nodes where there is really just one true intersection, as we would think of it in transportation or urban design. Similarly, divided roads are often represented by separate centerline edges: the intersection of two divided roads thus creates 4 nodes, representing where each edge intersects a perpendicular edge, but these 4 nodes represent a single intersection in the real world. Traffic circles similarly create a cluster of nodes where each street's edge intersects the roundabout.
OSMnx can consolidate nearby intersections and optionally rebuild the graph's topology.
```
# get a street network and plot it with all edge intersections
point = 37.858495, -122.267468
G = ox.graph_from_point(point, network_type="drive", dist=500)
fig, ax = ox.plot_graph(G, node_color="r")
```
Notice the complex intersections and traffic circles creating clusters of nodes.
We'll specify that any nodes whose 15-meter buffers overlap are part of the same intersection. Adjust this tolerance based on the street design standards in the community you are examining, and use a projected graph to work in meaningful units like meters. We'll also specify that we do not want dead-ends returned in our list of consolidated intersections.
```
# get a GeoSeries of consolidated intersections
G_proj = ox.project_graph(G)
ints = ox.consolidate_intersections(G_proj, rebuild_graph=False, tolerance=15, dead_ends=False)
len(ints)
# compare to number of nodes in original graph
len(G)
```
Note that these cleaned up intersections give us more accurate intersection counts and densities, but do not alter or integrate with the network's topology.
To do that, we need to **rebuild the graph**.
```
# consolidate intersections and rebuild graph topology
# this reconnects edge geometries to the new consolidated nodes
G2 = ox.consolidate_intersections(G_proj, rebuild_graph=True, tolerance=15, dead_ends=False)
len(G2)
fig, ax = ox.plot_graph(G2, node_color="r")
```
Notice how the traffic circles' many nodes are merged into a new single centroid node, with edge geometries extended to connect to it. Similar consolidation occurs at the intersection of the divided roads.
Running `consolidate_intersections` with `rebuild_graph=True` may yield somewhat (but not very) different intersection counts/densities compared to `rebuild_graph=False`. The difference lies in that the latter just merges buffered node points that overlap, whereas the former checks the topology of the overlapping node buffers before merging them.
This prevents topologically remote but spatially proximate nodes from being merged. For example:
- A street intersection may lie directly below a freeway overpass's intersection with an on-ramp. We would not want to merge these together and connect their edges: they are distinct junctions in the system of roads.
- In a residential neighborhood, a bollarded street may create a dead-end immediately next to an intersection or traffic circle. We would not want to merge this dead-end with the intersection and connect their edges.
These examples illustrate (two-dimensional) geometric proximity, but topological remoteness. Accordingly, in some situations we may expect higher intersection counts when using `rebuild_graph=True` because it is more cautious with merging in these cases. The trade-off is that it has higher time complexity than `rebuild_graph=False`.
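With `rebuild_graph=False`, consolidation amounts to merging any node points whose `tolerance`-radius buffers overlap, i.e. single-linkage clustering on coordinates. A minimal pure-Python sketch of that idea (`consolidate_points` is an illustration, not OSMnx's implementation):

```python
from itertools import combinations

def consolidate_points(points, tolerance):
    """Merge points whose tolerance-radius buffers overlap (i.e. points
    within 2*tolerance of each other), single-linkage style, and replace
    each cluster by its centroid."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= 2 * tolerance:
            parent[find(i)] = find(j)  # union the two clusters

    clusters = {}
    for i, p in enumerate(points):
        clusters.setdefault(find(i), []).append(p)
    return [
        (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
        for c in clusters.values()
    ]

# Three nodes of a traffic circle collapse into one centroid;
# the spatially remote node 500 m away survives on its own.
pts = [(0, 0), (10, 0), (0, 10), (500, 500)]
merged = consolidate_points(pts, tolerance=15)
print(len(merged))  # 2
```

This captures only the geometric proximity test; as the bullets above note, OSMnx's `rebuild_graph=True` additionally inspects topology before merging.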
## 2. Graph simplification
Use simplification to clean up nodes that are not intersections or dead-ends while retaining the complete edge geometry. OSMnx does this automatically by default when constructing a graph.
```
# create a network around some (lat, lng) point and plot it
location_point = (33.299896, -111.831638)
G = ox.graph_from_point(location_point, dist=500, simplify=False)
fig, ax = ox.plot_graph(G, node_color="r")
# show which nodes we'd remove if we simplify it (yellow)
nc = ["r" if ox.simplification._is_endpoint(G, node) else "y" for node in G.nodes()]
fig, ax = ox.plot_graph(G, node_color=nc)
# simplify the network
G2 = ox.simplify_graph(G)
# plot the simplified network and highlight any self-loop edges
loops = [edge[0] for edge in nx.selfloop_edges(G2)]
nc = ["r" if node in loops else "y" for node in G2.nodes()]
fig, ax = ox.plot_graph(G2, node_color=nc)
# turn off strict mode and see what nodes we'd remove
nc = ["r" if ox.simplification._is_endpoint(G, node, strict=False) else "y" for node in G.nodes()]
fig, ax = ox.plot_graph(G, node_color=nc)
# simplify network with strict mode turned off
G3 = ox.simplify_graph(G.copy(), strict=False)
fig, ax = ox.plot_graph(G3, node_color="r")
```
## 3. Cleaning up the periphery of the network
This is related to simplification. By default (`clean_periphery=True`), OSMnx buffers the area you request by 0.5 km and retrieves the street network within this larger, buffered area. It then simplifies the topology so that nodes represent intersections of streets (rather than including all the interstitial OSM nodes), calculates the (undirected) degree of each node in this larger network, truncates the network to the actual area you requested (either by bounding box or by polygon), and finally saves a dictionary of node degree values as a graph attribute.
This has two primary benefits. First, it cleans up stray false edges around the periphery. If `clean_periphery=False`, peripheral non-intersection nodes within the requested area appear to be cul-de-sacs, because the rest of each edge leading to an intersection outside the area is ignored; if `clean_periphery=True`, the larger graph is created first, allowing such edges to be simplified to their true intersections and then pruned in their entirety after truncating down to the actually requested area. Second, it gives accurate node degrees, both by (a) counting node neighbors even if they fall outside the retained network (so you don't claim a degree-4 node is degree-2 just because only 2 of its neighbors lie within the area), and by (b) not counting all those stray false edges' terminus nodes as cul-de-sacs, which would otherwise grossly inflate the count of degree-1 nodes even though those nodes are really just interstitial points in the middle of a chopped-off street segment between intersections.
See two examples below.
```
# get some bbox
bbox = ox.utils_geo.bbox_from_point((45.518698, -122.679964), dist=300)
north, south, east, west = bbox
G = ox.graph_from_bbox(north, south, east, west, network_type="drive", clean_periphery=False)
fig, ax = ox.plot_graph(G, node_color="r")
# the node degree distribution for this graph has many false cul-de-sacs
k = dict(G.degree())
{n: list(k.values()).count(n) for n in range(max(k.values()) + 1)}
```
Above, notice all the peripheral stray edge stubs. Below, notice these are cleaned up and that the node degrees are accurate with regards to the wider street network that may extend beyond the limits of the requested area.
```
G = ox.graph_from_bbox(north, south, east, west, network_type="drive")
fig, ax = ox.plot_graph(G, node_color="r")
# the streets per node distribution for this cleaned up graph is more accurate
# dict keys = count of streets emanating from the node (i.e., intersections and dead-ends)
# dict vals = number of nodes with that count
k = nx.get_node_attributes(G, "street_count")
{n: list(k.values()).count(n) for n in range(max(k.values()) + 1)}
```
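The tallying comprehension in the cell above rescans the value list once per possible count; `collections.Counter` produces the same distribution in a single pass. A small sketch with toy values:

```python
from collections import Counter

street_counts = {1: 3, 2: 1, 3: 4, 4: 3, 5: 3, 6: 1}  # node -> count (toy values)
distribution = Counter(street_counts.values())
print(dict(distribution))  # {3: 3, 1: 2, 4: 1}
```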
A final example. Compare the network below to the ones in the section above. It has the stray peripheral edges cleaned up. Also notice toward the bottom left, two interstitial nodes remain in that east-west street. Why? These are actually intersections, but their (southbound) edges were removed because these edges' next intersections were south of the requested area's boundaries. However, OSMnx correctly kept these nodes in the graph because they are in fact intersections and should be counted in measures of intersection density, etc.
```
location_point = (33.299896, -111.831638)
G = ox.graph_from_point(location_point, dist=500, simplify=True)
fig, ax = ox.plot_graph(G, node_color="r")
```
```
import os
from collections import defaultdict, namedtuple
from copy import deepcopy
from pprint import pprint
import lxml
import lxml.html
import lxml.etree
from graphviz import Digraph
from similarity.normalized_levenshtein import NormalizedLevenshtein
normalized_levenshtein = NormalizedLevenshtein()
TAG_NAME_ATTRIB = '___tag_name___'
HIERARCHICAL = 'hierarchical'
SEQUENTIAL = 'sequential'
class DataRegion(
# todo rename n_nodes_per_region -> gnode_size
# todo rename start_child_index -> first_gnode_start_index
namedtuple("DataRegion", ["n_nodes_per_region", "start_child_index", "n_nodes_covered",])
):
def __str__(self):
return "DR({0}, {1}, {2})".format(self[0], self[1], self[2])
def extend_one_gnode(self):
return self.__class__(
self.n_nodes_per_region, self.start_child_index, self.n_nodes_covered + self.n_nodes_per_region
)
@classmethod
def binary_from_last_gnode(cls, gnode):
gnode_size = gnode.end - gnode.start
return cls(gnode_size, gnode.start - gnode_size, 2 * gnode_size)
@classmethod
def empty(cls):
return cls(None, None, 0)
@property
def is_empty(self):
return self[0] is None
# todo use this more extensively
# Generalized Node
class GNode(
namedtuple("GNode", ["start", "end"])
):
def __str__(self):
return "GN({start}, {end})".format(start=self.start, end=self.end)
def open_doc(folder, filename):
folder = os.path.abspath(folder)
filepath = os.path.join(folder, filename)
with open(filepath, 'r') as file:
doc = lxml.html.fromstring(
lxml.etree.tostring(
lxml.html.parse(file), method='html'
)
)
return doc
def html_to_dot_sequential_name(html, with_text=False):
graph = Digraph(name='html')
tag_counts = defaultdict(int)
def add_node(html_node):
tag = html_node.tag
tag_sequential = tag_counts[tag]
tag_counts[tag] += 1
node_name = "{}-{}".format(tag, tag_sequential)
graph.node(node_name, node_name)
if len(html_node) > 0:
for child in html_node.iterchildren():
child_name = add_node(child)
graph.edge(node_name, child_name)
else:
child_name = "-".join([node_name, "txt"])
graph.node(child_name, html_node.text)
graph.edge(node_name, child_name)
return node_name
add_node(html)
return graph
def html_to_dot_hierarchical_name(html, with_text=False):
graph = Digraph(name='html')
def add_node(html_node, parent_suffix, brotherhood_index):
tag = html_node.tag
if parent_suffix is None and brotherhood_index is None:
node_suffix = ""
node_name = tag
else:
node_suffix = (
"-".join([parent_suffix, str(brotherhood_index)])
if parent_suffix else
str(brotherhood_index)
)
node_name = "{}-{}".format(tag, node_suffix)
graph.node(node_name, node_name, path=node_suffix)
if len(html_node) > 0:
for child_index, child in enumerate(html_node.iterchildren()):
child_name = add_node(child, node_suffix, child_index)
graph.edge(node_name, child_name)
else:
child_name = "-".join([node_name, "txt"])
child_path = "-".join([node_suffix, "txt"])
graph.node(child_name, html_node.text, path=child_path)
graph.edge(node_name, child_name)
return node_name
add_node(html, None, None)
return graph
def html_to_dot(html, name_option='hierarchical', with_text=False):
if name_option == SEQUENTIAL:
return html_to_dot_sequential_name(html, with_text=with_text)
elif name_option == HIERARCHICAL:
return html_to_dot_hierarchical_name(html, with_text=with_text)
else:
raise Exception('No name option `{}`'.format(name_option))
class MDR:
MINIMUM_DEPTH = 3
def __init__(self, max_tag_per_gnode, edit_distance_threshold, verbose=(False, False, False)):
self.max_tag_per_gnode = max_tag_per_gnode
self.edit_distance_threshold = edit_distance_threshold
self._verbose = verbose
self._phase = None
def _debug(self, msg, tabs=0, force=False):
if self._verbose[self._phase] or (any(self._verbose) and force):
if type(msg) == str:
print(tabs * '\t' + msg)
else:
pprint(msg)
@staticmethod
def depth(node):
d = 0
while node is not None:
d += 1
node = node.getparent()
return d
@staticmethod
def gnode_to_string(list_of_nodes):
return " ".join([
lxml.etree.tostring(child).decode('utf-8') for child in list_of_nodes
])
def __call__(self, root):
self.distances = {}
self.data_regions = {}
self.tag_counts = defaultdict(int)
self.root_copy = deepcopy(root)
self._checked_data_regions = defaultdict(set)
self._phase = 0
self._debug(
">" * 20 + " COMPUTE DISTANCES PHASE ({}) ".format(self._phase) + "<" * 20, force=True
)
self._compute_distances(root)
self._debug(
"<" * 20 + " COMPUTE DISTANCES PHASE ({}) ".format(self._phase) + ">" * 20, force=True
)
# todo remove debug variable
global DEBUG_DISTANCES
self.distances = DEBUG_DISTANCES if DEBUG_DISTANCES else self.distances
# todo change _identify_data_regions to get dist table as an input
self._debug("\n\nself.distances\n", force=True)
self._debug(self.distances, force=True)
self._debug("\n\n", force=True)
self._phase = 1
self._debug(
">" * 20 + " FIND DATA REGIONS PHASE ({}) ".format(self._phase) + "<" * 20, force=True
)
self._find_data_regions(root)
self._debug(
"<" * 20 + " FIND DATA REGIONS PHASE ({}) ".format(self._phase) + ">" * 20, force=True
)
self._phase = 2
def _compute_distances(self, node):
# each tag is named sequentially
tag = node.tag
tag_name = "{}-{}".format(tag, self.tag_counts[tag])
self.tag_counts[tag] += 1
self._debug("in _compute_distances of `{}`".format(tag_name))
# todo: stock depth in attrib???
node_depth = MDR.depth(node)
if node_depth >= MDR.MINIMUM_DEPTH:
# get all possible distances of the n-grams of children
distances = self._compare_combinations(node.getchildren())
self._debug("`{}` distances".format(tag_name))
self._debug(distances)
else:
distances = None
# !!! ATTENTION !!! this modifies the input HTML
# it is important that this comes after `compare_combinations` because
# otherwise the edit distances would change
# todo: remember, in the last phase, to clear the `TAG_NAME_ATTRIB` from all tags
node.set(TAG_NAME_ATTRIB, tag_name)
self.distances[tag_name] = distances
self._debug("\n\n")
for child in node:
self._compute_distances(child)
def _compare_combinations(self, node_list):
"""
Notation: gnode = "generalized node"
:param node_list:
:return:
"""
self._debug("in _compare_combinations")
if not node_list:
return {}
# version 1: {gnode_size: {((,), (,)): float}}
distances = defaultdict(dict)
# version 2: {gnode_size: {starting_tag: {{ ((,), (,)): float }}}}
# distances = defaultdict(lambda: defaultdict(dict))
n_nodes = len(node_list)
# for (i = 1; i <= K; i++) /* start from each node */
for starting_tag in range(1, self.max_tag_per_gnode + 1):
self._debug('starting_tag (i): {}'.format(starting_tag), 1)
# for (j = i; j <= K; j++) /* comparing different combinations */
for gnode_size in range(starting_tag, self.max_tag_per_gnode + 1): # j
self._debug('gnode_size (j): {}'.format(gnode_size), 2)
# if NodeList[i+2*j-1] exists then
if (starting_tag + 2 * gnode_size - 1) < n_nodes + 1: # +1 for pythons open set notation
self._debug(" ")
self._debug(">>> if 1 <<<", 3)
left_gnode_start = starting_tag - 1 # st
# for (k = i+j; k < Size(NodeList); k+j)
# for k in range(i + j, n, j):
for right_gnode_start in range(starting_tag + gnode_size - 1, n_nodes, gnode_size): # k
self._debug('left_gnode_start (st): {}'.format(left_gnode_start), 4)
self._debug('right_gnode_start (k): {}'.format(right_gnode_start), 4)
# if NodeList[k+j-1] exists then
if right_gnode_start + gnode_size < n_nodes + 1:
self._debug(" ")
self._debug(">>> if 2 <<<", 5)
# todo: avoid recomputing strings?
# todo: avoid recomputing edit distances?
# todo: check https://pypi.org/project/strsim/ ?
# NodeList[St..(k-1)]
left_gnode_indices = (left_gnode_start, right_gnode_start)
left_gnode = node_list[left_gnode_indices[0]:left_gnode_indices[1]]
left_gnode_str = MDR.gnode_to_string(left_gnode)
self._debug('left_gnode_indices: {}'.format(left_gnode_indices), 5)
# NodeList[St..(k-1)]
right_gnode_indices = (right_gnode_start, right_gnode_start + gnode_size)
right_gnode = node_list[right_gnode_indices[0]:right_gnode_indices[1]]
right_gnode_str = MDR.gnode_to_string(right_gnode)
self._debug('right_gnode_indices: {}'.format(right_gnode_indices), 5)
# edit distance
edit_distance = normalized_levenshtein.distance(left_gnode_str, right_gnode_str)
self._debug('edit_distance: {}'.format(edit_distance), 5)
# version 1
distances[gnode_size][(left_gnode_indices, right_gnode_indices)] = edit_distance
# version 2
# distances[gnode_size][starting_tag][
# (left_gnode_indices, right_gnode_indices)
# ] = edit_distance
left_gnode_start = right_gnode_start
else:
self._debug("skipped\n", 5)
self._debug(' ')
else:
self._debug("skipped\n", 3)
self._debug(' ')
# version 1
return dict(distances)
# version 2
# return {k: dict(v) for k, v in distances.items()}
def _find_data_regions(self, node):
tag_name = node.attrib[TAG_NAME_ATTRIB]
node_depth = MDR.depth(node)
self._debug("in _find_data_regions of `{}`".format(tag_name))
# if TreeDepth(Node) => 3 then
if node_depth >= MDR.MINIMUM_DEPTH:
# Node.DRs = IdenDRs(1, Node, K, T);
# data_regions = self._identify_data_regions(1, node) # 0 or 1???
data_regions = self._identify_data_regions(0, node)
self.data_regions[tag_name] = data_regions
# todo remove debug thing
if tag_name == "table-0":
return
# tempDRs = ∅;
temp_data_regions = set()
# for each Child ∈ Node.Children do
for child in node.getchildren():
# FindDRs(Child, K, T);
self._find_data_regions(child)
# tempDRs = tempDRs ∪ UnCoveredDRs(Node, Child);
uncovered_data_regions = self._uncovered_data_regions(node, child)
temp_data_regions = temp_data_regions | uncovered_data_regions
# Node.DRs = Node.DRs ∪ tempDRs
self.data_regions[tag_name] |= temp_data_regions
else:
for child in node.getchildren():
self._find_data_regions(child)
self._debug(" ")
def _identify_data_regions(self, start_index, node):
"""
Notation: dr = data_region
"""
tag_name = node.attrib[TAG_NAME_ATTRIB]
self._debug("in _identify_data_regions node:{}".format(tag_name))
self._debug("start_index:{}".format(start_index), 1)
# 1 maxDR = [0, 0, 0];
# max_dr = DataRegion(0, 0, 0)
# current_dr = DataRegion(0, 0, 0)
max_dr = DataRegion.empty()
current_dr = DataRegion.empty()
# 2 for (i = 1; i <= K; i++) /* compute for each i-combination */
for gnode_size in range(1, self.max_tag_per_gnode + 1):
self._debug('gnode_size (i): {}'.format(gnode_size), 2)
# 3 for (f = start; f <= start+i; f++) /* start from each node */
# for start_gnode_start_index in range(start_index, start_index + gnode_size + 1):
for first_gn_start_idx in range(start_index, start_index + gnode_size): # todo check if this covers everything
self._debug('first_gn_start_idx (f): {}'.format(first_gn_start_idx), 3)
# 4 flag = true;
dr_has_started = False
# 5 for (j = f; j < size(Node.Children); j+i)
# for left_gnode_start in range(start_node, len(node) , gnode_size):
for last_gn_start_idx in range(
# start_gnode_start_index, len(node) - gnode_size + 1, gnode_size
first_gn_start_idx + gnode_size, len(node) - gnode_size + 1, gnode_size
):
self._debug('last_gn_start_idx (j): {}'.format(last_gn_start_idx), 4)
# 6 if Distance(Node, i, j) <= T then
# todo: correct here
# from _compare_combinations
# left_gnode_indices = (left_gnode_start, right_gnode_start)
# right_gnode_indices = (right_gnode_start, right_gnode_start + gnode_size)
# left_gnode_indices = (start_gnode_start_index, start_gnode_start_index + gnode_size)
# right_gnode_indices = (end_gnode_start_index, end_gnode_start_index + gnode_size)
# gn_before_last = (last_gn_start_idx - gnode_size, last_gn_start_idx)
# gn_last = (last_gn_start_idx, last_gn_start_idx + gnode_size)
gn_before_last = GNode(last_gn_start_idx - gnode_size, last_gn_start_idx)
gn_last = GNode(last_gn_start_idx, last_gn_start_idx + gnode_size)
self._debug('gn_before_last : {}'.format(gn_before_last), 5)
self._debug('gn_last : {}'.format(gn_last), 5)
gn_pair = (gn_before_last, gn_last)
distance = self.distances[tag_name][gnode_size][gn_pair]
self._checked_data_regions[tag_name].add(gn_pair)
self._debug('dist : {}'.format(distance), 5)
if distance <= self.edit_distance_threshold:
self._debug('dist passes the threshold!'.format(distance), 6)
# 7 if flag=true then
if not dr_has_started:
self._debug('it is the first pair, init the `current_dr`...'.format(distance), 6)
# 8 curDR = [i, j, 2*i];
# current_dr = DataRegion(gnode_size, first_gn_start_idx - gnode_size, 2 * gnode_size)
# current_dr = DataRegion(gnode_size, first_gn_start_idx, 2 * gnode_size)
current_dr = DataRegion.binary_from_last_gnode(gn_last)
self._debug('current_dr: {}'.format(current_dr), 6)
# 9 flag = false;
dr_has_started = True
# 10 else curDR[3] = curDR[3] + i;
else:
self._debug('extending the DR...'.format(distance), 6)
# current_dr = DataRegion(
# current_dr[0], current_dr[1], current_dr[2] + gnode_size
# )
current_dr = current_dr.extend_one_gnode()
self._debug('current_dr: {}'.format(current_dr), 6)
# 11 elseif flag = false then Exit-inner-loop;
elif dr_has_started:
self._debug('above the threshold, breaking the loop...', 6)
# todo: keep track of all continuous regions per node...
break
self._debug(" ")
# 13 if (maxDR[3] < curDR[3]) and (maxDR[2] = 0 or (curDR[2]<= maxDR[2]) then
current_is_strictly_larger = max_dr.n_nodes_covered < current_dr.n_nodes_covered
current_starts_at_same_node_or_before = (
max_dr.is_empty or current_dr.start_child_index <= max_dr.start_child_index
)
if current_is_strictly_larger and current_starts_at_same_node_or_before:
self._debug('current DR is bigger than max! replacing...', 3)
# 14 maxDR = curDR;
self._debug('old max_dr: {}, new max_dr: {}'.format(max_dr, current_dr), 3)
max_dr = current_dr
self._debug('max_dr: {}'.format(max_dr), 2)
self._debug(" ")
self._debug("max_dr: {}\n".format(max_dr))
# 16 if ( maxDR[3] != 0 ) then
if max_dr.n_nodes_covered != 0:
# 17 if (maxDR[2]+maxDR[3]-1 != size(Node.Children)) then
last_covered_tag_index = max_dr.start_child_index + max_dr.n_nodes_covered - 1
self._debug("last_covered_tag_index: {}".format(last_covered_tag_index))
if last_covered_tag_index < len(node) - 1:
# 18 return {maxDR} ∪ IdentDRs(maxDR[2]+maxDR[3], Node, K, T)
self._debug("calling recursion! \n".format(last_covered_tag_index))
return {max_dr} | self._identify_data_regions(last_covered_tag_index + 1, node)
# 19 else return {maxDR}
else:
self._debug("returning max dr".format(last_covered_tag_index))
self._debug('max_dr: {}'.format(max_dr))
return {max_dr}
# 21 return ∅;
self._debug("returning empty set")
return set()
def _uncovered_data_regions(self, node, child):
return set()
# tests for cases in dev_6_cases
%load_ext autoreload
%autoreload 2
folder = '.'
filename = 'tables-2.html'
doc = open_doc(folder, filename)
dot = html_to_dot(doc, name_option=SEQUENTIAL)
from dev_6_cases import all_cases as cases
from dev_6_cases import DEBUG_THRESHOLD as edit_distance_threshold
cases = [
{
'body-0': None,
'html-0': None,
'table-0': case
}
for case in cases
]
DEBUG_DISTANCES = cases[2]
mdr = MDR(
max_tag_per_gnode=3,
edit_distance_threshold=edit_distance_threshold,
verbose=(False, True, False)
)
mdr(doc)
# tests for cases in dev_5_cases
# %load_ext autoreload
# %autoreload 2
#
# folder = '.'
# filename = 'tables-1.html'
# doc = open_doc(folder, filename)
# dot = html_to_dot(doc, name_option=SEQUENTIAL)
#
# from dev_5_cases import all_cases as cases
# from dev_5_cases import DEBUG_THRESHOLD as edit_distance_threshold
#
# cases = [
# {
# 'body-0': None,
# 'html-0': None,
# 'table-0': case
# }
# for case in cases
# ]
#
# DEBUG_DISTANCES = cases[6]
#
# mdr = MDR(
# max_tag_per_gnode=3,
# edit_distance_threshold=edit_distance_threshold,
# verbose=(False, True, False)
# )
# mdr(doc)
```
# Imitation Learning with Neural Network Policies
In this notebook, you will implement the supervised losses for behavior cloning and use it to train policies for locomotion tasks.
```
import os
from google.colab import drive
drive.mount('/content/gdrive')
DRIVE_PATH = '/content/gdrive/My\ Drive/282'
DRIVE_PYTHON_PATH = DRIVE_PATH.replace('\\', '')
if not os.path.exists(DRIVE_PYTHON_PATH):
%mkdir $DRIVE_PATH
## the space in `My Drive` causes some issues,
## make a symlink to avoid this
SYM_PATH = '/content/282'
if not os.path.exists(SYM_PATH):
!ln -s $DRIVE_PATH $SYM_PATH
!apt update
!apt install -y --no-install-recommends \
build-essential \
curl \
git \
gnupg2 \
make \
cmake \
ffmpeg \
swig \
libz-dev \
unzip \
zlib1g-dev \
libglfw3 \
libglfw3-dev \
libxrandr2 \
libxinerama-dev \
libxi6 \
libxcursor-dev \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
lsb-release \
ack-grep \
patchelf \
wget \
xpra \
xserver-xorg-dev \
xvfb \
python-opengl \
ffmpeg > /dev/null 2>&1
MJC_PATH = '{}/mujoco'.format(SYM_PATH)
if not os.path.exists(MJC_PATH):
%mkdir $MJC_PATH
%cd $MJC_PATH
if not os.path.exists(os.path.join(MJC_PATH, 'mujoco200')):
!wget -q https://www.roboti.us/download/mujoco200_linux.zip
!unzip -q mujoco200_linux.zip
%mv mujoco200_linux mujoco200
%rm mujoco200_linux.zip
import os
os.environ['LD_LIBRARY_PATH'] += ':{}/mujoco200/bin'.format(MJC_PATH)
os.environ['MUJOCO_PY_MUJOCO_PATH'] = '{}/mujoco200'.format(MJC_PATH)
os.environ['MUJOCO_PY_MJKEY_PATH'] = '{}/mjkey.txt'.format(MJC_PATH)
## installation on colab does not find *.so files
## in LD_LIBRARY_PATH, copy over manually instead
!cp $MJC_PATH/mujoco200/bin/*.so /usr/lib/x86_64-linux-gnu/
%cd $MJC_PATH
if not os.path.exists('mujoco-py'):
!git clone https://github.com/openai/mujoco-py.git
%cd mujoco-py
%pip install -e .
## cythonize at the first import
import mujoco_py
%cd $SYM_PATH
%cd assignment4
%pip install -r requirements.txt
#@title imports
# As usual, a bit of setup
import os
import shutil
import time
import numpy as np
import torch
import deeprl.infrastructure.pytorch_util as ptu
from deeprl.infrastructure.rl_trainer import RL_Trainer
from deeprl.infrastructure.trainers import BC_Trainer
from deeprl.agents.bc_agent import BCAgent
from deeprl.policies.loaded_gaussian_policy import LoadedGaussianPolicy
from deeprl.policies.MLP_policy import MLPPolicySL
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def remove_folder(path):
# check if folder exists
if os.path.exists(path):
print("Clearing old results at {}".format(path))
# remove if exists
shutil.rmtree(path)
else:
print("Folder {} does not exist yet. No old results to delete".format(path))
bc_base_args_dict = dict(
expert_policy_file = 'deeprl/policies/experts/Hopper.pkl', #@param
expert_data = 'deeprl/expert_data/expert_data_Hopper-v2.pkl', #@param
env_name = 'Hopper-v2', #@param ['Ant-v2', 'Humanoid-v2', 'Walker2d-v2', 'HalfCheetah-v2', 'Hopper-v2']
exp_name = 'test_bc', #@param
do_dagger = True, #@param {type: "boolean"}
ep_len = 1000, #@param {type: "integer"}
save_params = False, #@param {type: "boolean"}
# Training
num_agent_train_steps_per_iter = 1000, #@param {type: "integer"}
n_iter = 1, #@param {type: "integer"}
# batches & buffers
batch_size = 10000, #@param {type: "integer"}
eval_batch_size = 1000, #@param {type: "integer"}
train_batch_size = 100, #@param {type: "integer"}
max_replay_buffer_size = 1000000, #@param {type: "integer"}
#@markdown network
n_layers = 2, #@param {type: "integer"}
size = 64, #@param {type: "integer"}
learning_rate = 5e-3, #@param {type: "number"}
#@markdown logging
video_log_freq = -1, #@param {type: "integer"}
scalar_log_freq = 1, #@param {type: "integer"}
#@markdown gpu & run-time settings
no_gpu = False, #@param {type: "boolean"}
which_gpu = 0, #@param {type: "integer"}
seed = 2, #@param {type: "integer"}
logdir = 'test',
)
```
# Infrastructure
**Policies**: We have provided implementations of simple neural network policies for your convenience. For discrete environments, the neural network takes in the current state and outputs the logits of the policy's action distribution at this state. The policy then outputs a categorical distribution using those logits. In environments with continuous action spaces, the network will output the mean of a diagonal Gaussian distribution, as well as having a separate single parameter for the log standard deviations of the Gaussian.
Calling forward on the policy will output a torch distribution object, so look at the documentation at https://pytorch.org/docs/stable/distributions.html.
Look at <code>policies/MLP_policy</code> to make sure you understand the implementation.
**RL Training Loop**: The reinforcement learning training loop, which alternates between gathering samples from the environment and updating the policy (and other learned functions) can be found in <code>infrastructure/rl_trainer.py</code>. While you won't need to understand this for the basic behavior cloning part (as you only use a fixed set of expert data), you should read through and understand the run_training_loop function before starting the Dagger implementation.
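A minimal sketch of the continuous-action policy described above (the class and attribute names here are illustrative; the actual implementation lives in <code>policies/MLP_policy.py</code>):

```python
import torch
import torch.nn as nn
from torch import distributions

class GaussianPolicySketch(nn.Module):
    """Outputs a diagonal Gaussian over actions, as described above."""
    def __init__(self, ob_dim, ac_dim, hidden=64):
        super().__init__()
        # The network predicts the mean of the action distribution.
        self.mean_net = nn.Sequential(
            nn.Linear(ob_dim, hidden), nn.Tanh(), nn.Linear(hidden, ac_dim))
        # A separate single log-std parameter per action dimension.
        self.log_std = nn.Parameter(torch.zeros(ac_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        # Returns a torch distribution object, so callers can use
        # .sample(), .log_prob(), etc.
        return distributions.Normal(mean, self.log_std.exp())
```

Calling the policy on a batch of observations yields a distribution whose `log_prob` is what the behavior-cloning loss below will use.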
# Basic Behavior Cloning
The first part of the assignment will be a familiar exercise in supervised learning. Given a dataset of expert trajectories, we will simply train our policy to imitate the expert via maximum likelihood. Fill out the update method in the MLPPolicySL class in <code>policies/MLP_policy.py</code>.
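Before filling in the real `update` method, the maximum-likelihood objective can be sketched as follows. This is a hedged illustration, not the graded implementation; it assumes the policy's `forward` returns a torch distribution, as described in the Infrastructure section:

```python
import torch
from torch import distributions

def bc_update_sketch(policy, optimizer, observations, actions):
    """One maximum-likelihood update: minimize the negative log-likelihood
    of the expert actions under the policy's action distribution."""
    dist = policy(observations)            # a torch distribution object
    loss = -dist.log_prob(actions).mean()  # NLL of the expert actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this update on batches of expert (observation, action) pairs is all that basic behavior cloning does.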
```
### Basic test for correctness of loss and gradients
torch.manual_seed(0)
ac_dim = 2
ob_dim = 3
batch_size = 5
policy = MLPPolicySL(
ac_dim=ac_dim,
ob_dim=ob_dim,
n_layers=1,
size=2,
learning_rate=0.25)
np.random.seed(0)
obs = np.random.normal(size=(batch_size, ob_dim))
acts = np.random.normal(size=(batch_size, ac_dim))
first_weight_before = np.array(ptu.to_numpy(next(policy.mean_net.parameters())))
print("Weight before update", first_weight_before)
for i in range(5):
loss = policy.update(obs, acts)['Training Loss']
print(loss)
expected_loss = 2.628419
loss_error = rel_error(loss, expected_loss)
print("Loss Error", loss_error, "should be on the order of 1e-6 or lower")
first_weight_after = ptu.to_numpy(next(policy.mean_net.parameters()))
print('Weight after update', first_weight_after)
weight_change = first_weight_after - first_weight_before
print("Change in weights", weight_change)
expected_change = np.array([[ 0.04385546, -0.4614172, -1.0613215 ],
[ 0.20986436, -1.2060736, -1.0026767 ]])
updated_weight_error = rel_error(weight_change, expected_change)
print("Weight Update Error", updated_weight_error, "should be on the order of 1e-6 or lower")
```
Having implemented our behavior cloning loss, we can now start training some policies to imitate the expert policies provided.
Run the following cell to train policies with simple behavior cloning on the HalfCheetah environment.
```
bc_args = dict(bc_base_args_dict)
env_str = 'HalfCheetah'
bc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)
bc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)
bc_args['env_name'] = '{}-v2'.format(env_str)
# Delete all previous logs
remove_folder('logs/behavior_cloning/{}'.format(env_str))
for seed in range(3):
print("Running behavior cloning experiment with seed", seed)
bc_args['seed'] = seed
bc_args['logdir'] = 'logs/behavior_cloning/{}/seed{}'.format(env_str, seed)
bctrainer = BC_Trainer(bc_args)
bctrainer.run_training_loop()
```
Visualize your results using Tensorboard. You should see that on HalfCheetah, the returns of your learned policies (Eval_AverageReturn) are fairly similar (though a bit lower) to those of the expert (Initial_DataCollection_Average_Return).
```
### Visualize behavior cloning results on HalfCheetah
%load_ext tensorboard
%tensorboard --logdir logs/behavior_cloning/HalfCheetah
```
Now run the following cell to train policies with simple behavior cloning on Hopper.
```
bc_args = dict(bc_base_args_dict)
env_str = 'Hopper'
bc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)
bc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)
bc_args['env_name'] = '{}-v2'.format(env_str)
# Delete all previous logs
remove_folder('logs/behavior_cloning/{}'.format(env_str))
for seed in range(3):
print("Running behavior cloning experiment on Hopper with seed", seed)
bc_args['seed'] = seed
bc_args['logdir'] = 'logs/behavior_cloning/{}/seed{}'.format(env_str, seed)
bctrainer = BC_Trainer(bc_args)
bctrainer.run_training_loop()
```
Visualize your results using Tensorboard. You should see that on Hopper, the returns of your learned policies (Eval_AverageReturn) are substantially lower than that of the expert (Initial_DataCollection_Average_Return), due to the distribution shift issues that arise when doing naive behavior cloning.
```
### Visualize behavior cloning results on Hopper
%load_ext tensorboard
%tensorboard --logdir logs/behavior_cloning/Hopper
```
# Dataset Aggregation
As discussed in lecture, behavior cloning can suffer from distribution shift, as a small mismatch between the learned and expert policy can take the learned policy to new states that were unseen during training, on which the learned policy hasn't been trained. In Dagger, we will address this issue iteratively, where we use our expert policy to provide labels for the new states we encounter with our learned policy, and then retrain our policy on these newly labeled states.
Implement the <code>do_relabel_with_expert</code> function in <code>infrastructure/rl_trainer.py</code>. The errors in the expert actions should be on the order of 1e-6 or less.
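The relabeling step itself is simple to sketch: for each collected path, replace the learner's actions with the expert's actions at the states the learner actually visited. The `get_action` method name below is an assumption for illustration; check the expert policy's actual interface in the codebase:

```python
import numpy as np

def relabel_with_expert_sketch(expert_policy, paths):
    """Replace each path's actions with the expert's actions at the
    observations the learner visited (the core of Dagger's step 2).
    Assumes expert_policy.get_action(obs) accepts a batch of observations."""
    for path in paths:
        path["action"] = expert_policy.get_action(path["observation"])
    return paths
```

The observations are left untouched; only the action labels change, which is what the test cell below verifies against the loaded expert.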
```
### Test do relabel function
bc_args = dict(bc_base_args_dict)
env_str = 'Hopper'
bc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)
bc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)
bc_args['env_name'] = '{}-v2'.format(env_str)
bctrainer = BC_Trainer(bc_args)
np.random.seed(0)
T = 2
ob_dim = 11
ac_dim = 3
paths = []
for i in range(3):
obs = np.random.normal(size=(T, ob_dim))
acs = np.random.normal(size=(T, ac_dim))
paths.append(dict(observation=obs,
action=acs))
rl_trainer = bctrainer.rl_trainer
relabeled_paths = rl_trainer.do_relabel_with_expert(bctrainer.loaded_expert_policy, paths)
expert_actions = np.array([[[-1.7814021, -0.11137983, 1.763353 ],
[-2.589222, -5.463195, 2.4301376 ]],
[[-2.8287444, -5.298558, 3.0320463],
[ 3.9611065, 2.626403, -2.8639293]],
[[-0.3055225, -0.9865407, 0.80830705],
[ 2.8788857, 3.5550566, -0.92875874]]])
for i, (path, relabeled_path) in enumerate(zip(paths, relabeled_paths)):
assert np.all(path['observation'] == relabeled_path['observation'])
print("Path {} expert action error".format(i), rel_error(expert_actions[i], relabeled_path['action']))
```
We can run Dagger on the Hopper env again.
```
dagger_args = dict(bc_base_args_dict)
dagger_args['do_dagger'] = True
dagger_args['n_iter'] = 10
env_str = 'Hopper'
dagger_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)
dagger_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)
dagger_args['env_name'] = '{}-v2'.format(env_str)
# Delete all previous logs
remove_folder('logs/dagger/{}'.format(env_str))
for seed in range(3):
print("Running Dagger experiment with seed", seed)
dagger_args['seed'] = seed
dagger_args['logdir'] = 'logs/dagger/{}/seed{}'.format(env_str, seed)
bctrainer = BC_Trainer(dagger_args)
bctrainer.run_training_loop()
```
Visualizing the Dagger results on Hopper, we see that Dagger is able to recover the performance of the expert policy after a few iterations of online interaction and expert relabeling.
```
### Visualize Dagger results on Hopper
%load_ext tensorboard
%tensorboard --logdir logs/dagger/Hopper
```
## SIMPLE CONVOLUTIONAL NEURAL NETWORK
```
import numpy as np
# import tensorflow as tf
import tensorflow.compat.v1 as tf
import matplotlib.pyplot as plt
# from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("PACKAGES LOADED")
```
# LOAD MNIST
```
def OnehotEncoding(target):
from sklearn.preprocessing import OneHotEncoder
target_re = target.reshape(-1,1)
enc = OneHotEncoder()
enc.fit(target_re)
return enc.transform(target_re).toarray()
def SuffleWithNumpy(data_x, data_y):
idx = np.random.permutation(len(data_x))
x,y = data_x[idx], data_y[idx]
return x,y
# mnist = input_data.read_data_sets('data/', one_hot=True)
# trainimg = mnist.train.images
# trainlabel = mnist.train.labels
# testimg = mnist.test.images
# testlabel = mnist.test.labels
# print ("MNIST ready")
print ("Download and Extract MNIST dataset")
# mnist = input_data.read_data_sets('data/', one_hot=True)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
print()
print (" tpye of 'mnist' is %s" % (type(mnist)))
print (" number of train data is %d" % (len(x_train)))
print (" number of test data is %d" % (len(x_test)))
num_train_data = len(x_train)
trainimg = x_train
trainimg = trainimg.reshape(len(trainimg),784)
trainlabel = OnehotEncoding(y_train)
testimg = x_test
testimg = testimg.reshape(len(testimg),784)
testlabel = OnehotEncoding(y_test)
print ("MNIST loaded")
tf.disable_eager_execution()
```
# SELECT DEVICE TO BE USED
```
device_type = "/gpu:1"
```
# DEFINE CNN
```
with tf.device(device_type): # <= This is optional
n_input = 784
n_output = 10
weights = {
'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),
'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1))
}
biases = {
'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),
'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
def conv_simple(_input, _w, _b):
# Reshape input
_input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])
# Convolution
_conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
# Add-bias
_conv2 = tf.nn.bias_add(_conv1, _b['bc1'])
# Pass ReLu
_conv3 = tf.nn.relu(_conv2)
# Max-pooling
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Vectorize
_dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]])
# Fully-connected layer
_out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1'])
# Return everything
out = {
'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'out': _out
}
return out
print ("CNN ready")
```
# DEFINE COMPUTATIONAL GRAPH
```
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
# Parameters
learning_rate = 0.001
training_epochs = 10
batch_size = 10
display_step = 1
# Functions!
with tf.device(device_type): # <= This is optional
_pred = conv_simple(x, weights, biases)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=_pred))
optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
_corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy
init = tf.global_variables_initializer()
# Saver
save_step = 1
savedir = "nets/"
saver = tf.train.Saver(max_to_keep=3)
print ("Network Ready to Go!")
```
# OPTIMIZE
## DO TRAIN OR NOT
```
do_train = 1
# check operation gpu or cpu
# sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
config.allow_soft_placement=True
sess = tf.Session(config=config)
sess.run(init)
len(testimg)
if do_train == 1:
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(num_train_data/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs=trainimg[i*batch_size:(i+1)*batch_size]
batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]
# Fit training using batch data
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch
# Display logs per epoch step
if (epoch +1)% display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
total_batch = int(num_train_data/batch_size)
train_acc=0
for i in range(total_batch):
batch_xs=trainimg[i*batch_size:(i+1)*batch_size]
batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]
train_acc = train_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Training accuracy: %.3f" % (train_acc/total_batch))
# randidx = np.random.randint(len(testimg), size=1000)
# batch_test_xs = testimg[randidx, :]
# batch_test_ys = testlabel[randidx, :]
# test_acc = sess.run(accr, feed_dict={x: batch_test_xs, y: batch_test_ys})
total_batch = int(len(testimg)/batch_size)
test_acc=0
for i in range(total_batch):
batch_xs=testimg[i*batch_size:(i+1)*batch_size]
batch_ys=testlabel[i*batch_size:(i+1)*batch_size]
test_acc = test_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Test accuracy: %.3f" % (test_acc/total_batch))
# Save Net
if epoch % save_step == 0:
saver.save(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
trainimg,trainlabel = SuffleWithNumpy(trainimg,trainlabel)
print ("Optimization Finished.")
```
# RESTORE
```
do_train = 0
if do_train == 0:
epoch = training_epochs-1
# epoch = 3
saver.restore(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("NETWORK RESTORED")
```
# LET'S SEE HOW CNN WORKS
```
with tf.device(device_type):
conv_out = conv_simple(x, weights, biases)
input_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]})
```
# Input
```
# Let's see 'input_r'
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# Plot !
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
```
# Conv1 (convolution)
```
# Let's see 'conv1'
print ("Size of 'conv1' is %s" % (conv1.shape,))
# Plot !
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
```
# Conv2 (+bias)
```
# Let's see 'conv2'
print ("Size of 'conv2' is %s" % (conv2.shape,))
# Plot !
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
```
# Conv3 (ReLU)
```
# Let's see 'conv3'
print ("Size of 'conv3' is %s" % (conv3.shape,))
# Plot !
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
```
# Pool (max_pool)
```
# Let's see 'pool'
print ("Size of 'pool' is %s" % (pool.shape,))
# Plot !
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
```
# Dense
```
# Let's see 'dense'
print ("Size of 'dense' is %s" % (dense.shape,))
# Let's see 'out'
print ("Size of 'out' is %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("out")
plt.colorbar()
plt.show()
```
# Convolution filters
```
# Let's see weight!
wc1 = sess.run(weights['wc1'])
print ("Size of 'wc1' is %s" % (wc1.shape,))
# Plot !
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TF Lattice Custom Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/custom_estimators"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
You can use custom estimators to create arbitrarily monotonic models using TFL layers. This guide outlines the steps needed to create such estimators.
## Setup
Installing TF Lattice package:
```
#@test {"skip": true}
!pip install tensorflow-lattice
```
Importing required packages:
```
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
from tensorflow_estimator.python.estimator.canned import optimizers
from tensorflow_estimator.python.estimator.head import binary_class_head
logging.disable(sys.maxsize)
```
Downloading the UCI Statlog (Heart) dataset:
```
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
df = pd.read_csv(csv_file)
target = df.pop('target')
train_size = int(len(df) * 0.8)
train_x = df[:train_size]
train_y = target[:train_size]
test_x = df[train_size:]
test_y = target[train_size:]
df.head()
```
Setting the default values used for training in this guide:
```
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 1000
```
## Feature Columns
As with any other TF estimator, data needs to be passed to the estimator, typically via an input_fn, and parsed using [FeatureColumns](https://www.tensorflow.org/guide/feature_columns).
```
# Feature columns.
# - age
# - sex
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
feature_columns = [
fc.numeric_column('age', default_value=-1),
fc.categorical_column_with_vocabulary_list('sex', [0, 1]),
fc.numeric_column('ca'),
fc.categorical_column_with_vocabulary_list(
'thal', ['normal', 'fixed', 'reversible']),
]
```
Note that categorical features do not need to be wrapped by a dense feature column, since the `tfl.layers.CategoricalCalibration` layer can directly consume category indices.
## Creating input_fn
As for any other estimator, you can use input_fn to feed data to the model for training and evaluation.
```
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=True,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
num_threads=1)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=test_x,
y=test_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=1,
num_threads=1)
```
## Creating model_fn
There are several ways to create a custom estimator. Here we will construct a `model_fn` that calls a Keras model on the parsed input tensors. To parse the input features, you can use `tf.feature_column.input_layer`, `tf.keras.layers.DenseFeatures`, or `tfl.estimators.transform_features`. If you use the latter, you will not need to wrap categorical features with dense feature columns, and the resulting tensors will not be concatenated, which makes it easier to use the features in the calibration layers.
To construct a model, you can mix and match TFL layers or any other Keras layers. Here we create a calibrated lattice Keras model out of TFL layers and impose several monotonicity constraints. We then use the Keras model to create the custom estimator.
```
def model_fn(features, labels, mode, config):
"""model_fn for the custom estimator."""
del config
input_tensors = tfl.estimators.transform_features(features, feature_columns)
inputs = {
key: tf.keras.layers.Input(shape=(1,), name=key) for key in input_tensors
}
lattice_sizes = [3, 2, 2, 2]
lattice_monotonicities = ['increasing', 'none', 'increasing', 'increasing']
lattice_input = tf.keras.layers.Concatenate(axis=1)([
tfl.layers.PWLCalibration(
input_keypoints=np.linspace(10, 100, num=8, dtype=np.float32),
# The output range of the calibrator should be the input range of
# the following lattice dimension.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
)(inputs['age']),
tfl.layers.CategoricalCalibration(
# Number of categories including any missing/default category.
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
)(inputs['sex']),
tfl.layers.PWLCalibration(
input_keypoints=[0.0, 1.0, 2.0, 3.0],
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
# You can specify TFL regularizers as tuple
# ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
monotonicity='increasing',
)(inputs['ca']),
tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# Categorical monotonicity can be partial order.
# (i, j) indicates that we must have output(i) <= output(j).
# Make sure to set the lattice monotonicity to 'increasing' for this
# dimension.
monotonicities=[(0, 1), (0, 2)],
)(inputs['thal']),
])
output = tfl.layers.Lattice(
lattice_sizes=lattice_sizes, monotonicities=lattice_monotonicities)(
lattice_input)
training = (mode == tf.estimator.ModeKeys.TRAIN)
model = tf.keras.Model(inputs=inputs, outputs=output)
logits = model(input_tensors, training=training)
if training:
optimizer = optimizers.get_optimizer_instance_v2('Adagrad', LEARNING_RATE)
else:
optimizer = None
head = binary_class_head.BinaryClassHead()
return head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=optimizer,
logits=logits,
trainable_variables=model.trainable_variables,
update_ops=model.updates)
```
## Training the Estimator
Using the `model_fn` we can create and train the estimator.
```
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('AUC: {}'.format(results['auc']))
```
<a href="https://colab.research.google.com/github/yukinaga/minnano_kaggle/blob/main/section_2/02_titanic_random_forest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Predicting Titanic Survivors
We predict the survivors of the Titanic using a machine learning algorithm called "random forest".
The predictions from the trained model are saved to a CSV file for submission.
## Loading the Data
We load the Titanic passenger data.
Download the passenger data from the page below and upload "train.csv" and "test.csv" to the notebook environment.
https://www.kaggle.com/c/titanic/data
The training data contains a "Survived" column indicating whether each passenger survived, but the test data does not.
We will submit the results of running the trained model on the test data.
```
import numpy as np
import pandas as pd
train_data = pd.read_csv("train.csv")  # Training data
test_data = pd.read_csv("test.csv")  # Test data
train_data.head()
```
## Data Preprocessing
We extract only the data usable for prediction and handle missing values appropriately.
Categorical variables such as strings are converted into numeric values.
The code below extracts only the columns usable for prediction from the training and test data.
```
test_id = test_data["PassengerId"] # Used later when submitting the results
labels = ["Pclass","Sex","Age","SibSp","Parch","Fare","Embarked"]
train_data = train_data[labels + ["Survived"]]
test_data = test_data[labels]
train_data.head()
```
`info()` gives an overview of the data.
The Non-Null Count shows the number of non-missing entries, so we can identify which columns contain missing values.
```
train_data.info()
test_data.info()
```
Age, Fare, and Embarked contain missing values.
We fill the missing values of Age and Fare with the mean, and those of Embarked with the mode.
```
# Age
age_mean = train_data["Age"].mean()  # Mean
train_data["Age"] = train_data["Age"].fillna(age_mean)
test_data["Age"] = test_data["Age"].fillna(age_mean)
# Fare
fare_mean = train_data["Fare"].mean()  # Mean
train_data["Fare"] = train_data["Fare"].fillna(fare_mean)
test_data["Fare"] = test_data["Fare"].fillna(fare_mean)
# Embarked
embarked_mode = train_data["Embarked"].mode()[0]  # Mode (mode() returns a Series, so take its first element)
train_data["Embarked"] = train_data["Embarked"].fillna(embarked_mode)
test_data["Embarked"] = test_data["Embarked"].fillna(embarked_mode)
```
`get_dummies()` converts each categorical column into multiple columns holding values of 0 or 1 (one-hot encoding).
```
cat_labels = ["Sex", "Pclass", "Embarked"] # Labels of the categorical variables
train_data = pd.get_dummies(train_data, columns=cat_labels)
test_data = pd.get_dummies(test_data, columns=cat_labels)
train_data.head()
```
## Training the Model
We prepare the inputs and the targets.
The "Survived" column serves as the target.
```
t_train = train_data["Survived"] # Target
x_train = train_data.drop(labels=["Survived"], axis=1) # Drop the "Survived" column to form the input
x_test = test_data
```
A random forest is a form of "ensemble learning" that combines multiple decision trees.
Ensemble learning combines multiple machine learning models and often achieves high performance.
The code below creates and trains a random forest model using `RandomForestClassifier()`.
Classification is performed by a majority vote over many decision trees.
```
from sklearn.ensemble import RandomForestClassifier
# n_estimators: number of decision trees, max_depth: depth of each tree
model = RandomForestClassifier(n_estimators=100, max_depth=5)
model.fit(x_train, t_train)
```
## Checking and Submitting the Results
We obtain the importance of each feature via `feature_importances_` and display it as a bar chart.
```
import matplotlib.pyplot as plt
labels = x_train.columns
importances = model.feature_importances_
plt.figure(figsize = (10,6))
plt.barh(range(len(importances)), importances)
plt.yticks(range(len(labels)), labels)
plt.show()
```
We make predictions on the test data.
`predict()` returns the predicted class (0 = did not survive, 1 = survived) for each passenger.
After formatting, the result is saved as a CSV file for submission.
```
# Predict
y_test = model.predict(x_test)
# Format the result
survived_test = pd.Series(y_test, name="Survived")
subm_data = pd.concat([test_id, survived_test], axis=1)
# Save the CSV file for submission
subm_data.to_csv("submission_titanic.csv", index=False)
subm_data
```
# Driven Modal Simulation and S-Parameters
## Prerequisite
You must have a working local installation of Ansys.
```
%load_ext autoreload
%autoreload 2
import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
import pyEPR as epr
```
## Create the design in Metal
Set up a design of a given dimension. Dimensions will be respected in the design rendering.
<br>
Note the chip design is centered at origin (0,0).
```
design = designs.DesignPlanar({}, True)
design.chips.main.size['size_x'] = '2mm'
design.chips.main.size['size_y'] = '2mm'
#Reference to Ansys hfss QRenderer
hfss = design.renderers.hfss
gui = MetalGUI(design)
```
Perform the necessary imports.
```
from qiskit_metal.qlibrary.couplers.coupled_line_tee import CoupledLineTee
from qiskit_metal.qlibrary.tlines.meandered import RouteMeander
from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket
from qiskit_metal.qlibrary.tlines.straight_path import RouteStraight
from qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround
```
Add 2 transmons to the design.
```
options = dict(
# Some options we want to modify from the deafults
# (see below for defaults)
pad_width = '425 um',
pocket_height = '650um',
# Adding 4 connectors (see below for defaults)
connection_pads=dict(
a = dict(loc_W=+1,loc_H=+1),
b = dict(loc_W=-1,loc_H=+1, pad_height='30um'),
c = dict(loc_W=+1,loc_H=-1, pad_width='200um'),
d = dict(loc_W=-1,loc_H=-1, pad_height='50um')
)
)
## Create 2 transmons
q1 = TransmonPocket(design, 'Q1', options = dict(
pos_x='+1.4mm', pos_y='0mm', orientation = '90', **options))
q2 = TransmonPocket(design, 'Q2', options = dict(
pos_x='-0.6mm', pos_y='0mm', orientation = '90', **options))
gui.rebuild()
gui.autoscale()
```
Add 2 hangers consisting of capacitively coupled transmission lines.
```
TQ1 = CoupledLineTee(design, 'TQ1', options=dict(pos_x='1mm',
pos_y='3mm',
coupling_length='200um'))
TQ2 = CoupledLineTee(design, 'TQ2', options=dict(pos_x='-1mm',
pos_y='3mm',
coupling_length='200um'))
gui.rebuild()
gui.autoscale()
```
Add 2 meandered CPWs connecting the transmons to the hangers.
```
ops=dict(fillet='90um')
design.overwrite_enabled = True
options1 = Dict(
total_length='8mm',
hfss_wire_bonds = True,
pin_inputs=Dict(
start_pin=Dict(
component='TQ1',
pin='second_end'),
end_pin=Dict(
component='Q1',
pin='a')),
lead=Dict(
start_straight='0.1mm'),
**ops
)
options2 = Dict(
total_length='9mm',
hfss_wire_bonds = True,
pin_inputs=Dict(
start_pin=Dict(
component='TQ2',
pin='second_end'),
end_pin=Dict(
component='Q2',
pin='a')),
lead=Dict(
start_straight='0.1mm'),
**ops
)
meanderQ1 = RouteMeander(design, 'meanderQ1', options=options1)
meanderQ2 = RouteMeander(design, 'meanderQ2', options=options2)
gui.rebuild()
gui.autoscale()
```
Add 2 open to grounds at the ends of the horizontal CPW.
```
otg1 = OpenToGround(design, 'otg1', options = dict(pos_x='3mm',
pos_y='3mm'))
otg2 = OpenToGround(design, 'otg2', options = dict(pos_x = '-3mm',
pos_y='3mm',
orientation='180'))
gui.rebuild()
gui.autoscale()
```
Add 3 straight CPWs that comprise the long horizontal CPW.
```
ops_oR = Dict(hfss_wire_bonds = True,
pin_inputs=Dict(
start_pin=Dict(
component='TQ1',
pin='prime_end'),
end_pin=Dict(
component='otg1',
pin='open')))
ops_mid = Dict(hfss_wire_bonds = True,
pin_inputs=Dict(
start_pin=Dict(
component='TQ1',
pin='prime_start'),
end_pin=Dict(
component='TQ2',
pin='prime_end')))
ops_oL = Dict(hfss_wire_bonds = True,
pin_inputs=Dict(
start_pin=Dict(
component='TQ2',
pin='prime_start'),
end_pin=Dict(
component='otg2',
pin='open')))
cpw_openRight = RouteStraight(design, 'cpw_openRight', options=ops_oR)
cpw_middle = RouteStraight(design, 'cpw_middle', options=ops_mid)
cpw_openLeft = RouteStraight(design, 'cpw_openLeft', options=ops_oL)
gui.rebuild()
gui.autoscale()
```
## Render the design from Metal into the HangingResonators design in Ansys
Open a new Ansys window, connect to it, and add a driven modal design called HangingResonators to the currently active project.<br>
If Ansys is already open, you can skip `hfss.open_ansys()`. <br>
**Wait for Ansys to fully open before proceeding.**<br> If necessary, also close any Ansys popup windows.
```
hfss.open_ansys()
hfss.connect_ansys()
hfss.activate_drivenmodal_design("HangingResonators")
```
Set the buffer width at the edge of the design to be 0.5 mm in both directions.
```
hfss.options['x_buffer_width_mm'] = 0.5
hfss.options['y_buffer_width_mm'] = 0.5
```
Here, pin cpw_openRight_end and cpw_openLeft_end are converted into lumped ports, each with an impedance of 50 Ohms. <br>
Neither of the junctions in Q1 or Q2 is rendered. <br>
As a reminder, arguments are given as <br><br>
First parameter: List of components to render (empty list if rendering whole Metal design) <br>
Second parameter: List of pins (qcomp, pin) with open endcaps <br>
Third parameter: List of pins (qcomp, pin, impedance) to render as lumped ports <br>
Fourth parameter: List of junctions (qcomp, qgeometry_name, impedance, draw_ind) to render as lumped ports or as lumped port in parallel with a sheet inductance <br>
Fifth parameter: List of junctions (qcomp, qgeometry_name) to omit altogether during rendering <br>
Sixth parameter: Whether to render chip via box plus buffer or fixed chip size
```
hfss.render_design([],
[],
[('cpw_openRight', 'end', 50), ('cpw_openLeft', 'end', 50)],
[],
[('Q1', 'rect_jj'), ('Q2', 'rect_jj')],
True)
hfss.save_screenshot()
hfss.add_sweep(setup_name="Setup",
name="Sweep",
start_ghz=4.0,
stop_ghz=8.0,
count=2001,
type="Interpolating")
hfss.analyze_sweep('Sweep', 'Setup')
```
Plot S, Y, and Z parameters as a function of frequency. <br>
The left and right plots display the magnitude and phase, respectively.
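When inspecting these plots it is often convenient to express magnitudes in decibels. A generic conversion helper (our own sketch, not a qiskit-metal API):

```python
import numpy as np

def to_db(s):
    """Convert an amplitude-like quantity (e.g. |S21|) to decibels."""
    return 20 * np.log10(np.abs(s))

# e.g. a transmission magnitude of 0.1 corresponds to -20 dB
print(to_db(0.1))
```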
```
hfss.plot_params(['S11', 'S21'])
hfss.plot_params(['Y11', 'Y21'])
hfss.plot_params(['Z11', 'Z21'])
hfss.disconnect_ansys()
gui.main_window.close()
```
# Introduction
In this experiment we will perform the convolution operation on various signals, using both linear convolution and circular convolution. In the theory class we analyzed the advantages of circular convolution over linear convolution when receiving a continuous stream of input samples. We will also analyze the effect of passing the signal x = cos(0.2*pi*n) + cos(0.85*pi*n) through a given filter. Finally, we will analyze the cross-correlation output of the Zadoff–Chu sequence.
# Importing packages
```
import numpy as np
import csv
from scipy import signal
import matplotlib.pyplot as plt
from math import *
import pandas as pd
```
# Filter sequence
Now we will use the signal.freqz function to visualize the filter in the frequency domain.
```
a = np.genfromtxt('h.csv',delimiter=',')
w,h = signal.freqz(a)
fig,ax = plt.subplots(2,sharex=True)
plt.grid(True,which="all")
ax[0].plot(w,(abs(h)),"b")
ax[0].set_title("Filter Magnitude response")
ax[0].set_xlabel("Frequency(w rad/s)")
ax[0].set_ylabel("Amplitude")
angle = np.unwrap(np.angle(h))
ax[1].plot(w,angle,"g")
ax[1].set_title("Filter Phase response")
ax[1].set_xlabel("Frequency(w rad/s)")
ax[1].set_ylabel("Phase")
plt.show()
```
Here I have plotted both the magnitude and phase response of the filter in the appropriate
frequency range. It is clear from the plot that the given filter is a low-pass filter with a cutoff
frequency of about 0.75 rad/s.
# Given Signal:
```
n = np.linspace(1,2**10,2**10)
x = np.cos(0.2*pi*n) + np.cos(0.85*pi*n)
fig2,bx = plt.subplots(1,sharex=True)
bx.plot(n,x)
bx.set_title("Sequence plot")
bx.set_xlabel("n")
bx.set_ylabel("x")
bx.set_xlim(0,40)
plt.show()
```
Clearly the input sequence has frequency components at 0.2π = 0.628 rad/s and 0.85π = 2.669 rad/s.
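We can confirm the two components numerically by locating the largest peaks in the magnitude spectrum of the same sequence:

```python
import numpy as np

n = np.arange(1, 2**10 + 1)
x = np.cos(0.2 * np.pi * n) + np.cos(0.85 * np.pi * n)

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0) * 2 * np.pi  # digital frequency in rad

# Frequencies of the two largest spectral peaks
peaks = freqs[np.argsort(X)[-2:]]
print(np.sort(peaks))  # close to 0.2*pi (0.628) and 0.85*pi (2.669)
```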
# Passing signal through Filter
```
y = np.convolve(x,a,mode="same")
fig3,cx = plt.subplots(1,sharex=True)
cx.plot(y)
cx.set_title("Filtered output plot using linear convolution ")
cx.set_xlabel("n")
cx.set_ylabel("y")
cx.set_xlim(0,40)
plt.show()
```
We can clearly see that the filter acted as a low-pass filter: the high-frequency component of the input has been suppressed.
# Using Circular Convolution
```
a_adjusted = np.pad(a,(0,len(x)-len(a)),"constant")
y1 = np.fft.ifft(np.fft.fft(x) * np.fft.fft(a_adjusted))
fig4,dx = plt.subplots(1,sharex=True)
dx.plot(y1)
dx.set_title("Filtered output plot using circular convolution ")
dx.set_xlabel("n")
dx.set_ylabel("y")
dx.set_xlim(0,40)
plt.show()
```
In order to compute the output using circular convolution, I initially zero-pad the signals to avoid the output wrapping around onto itself. By doing this we obtain the same output sequence as with linear convolution.
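The wrap-around (time-aliasing) effect is easy to see on a toy example of our own, unrelated to the notebook's data: circular convolution matches linear convolution only when the DFT length is at least len(x) + len(h) - 1.

```python
import numpy as np

x = np.array([1., 2., 3., 4.])
h = np.array([1., 1., 1.])

lin = np.convolve(x, h)  # linear convolution, length 6

# Circular convolution at length len(x): the tail wraps onto the head
circ_short = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

# Zero-padding both to N >= len(x) + len(h) - 1 recovers linear convolution
N = len(x) + len(h) - 1
circ_full = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))

print(np.allclose(circ_full, lin))       # matches linear convolution
print(np.allclose(circ_short, lin[:4]))  # fails: first samples are aliased
```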
# Circular Convolution using linear stitching
```
N = len(a) + len(x) - 1
fil = np.concatenate([a,np.zeros(N-len(a))])
y_modified = np.concatenate([x,np.zeros(N-len(x))])
y2 = np.fft.ifft(np.fft.fft(y_modified) * np.fft.fft(fil))
fig5,fx = plt.subplots(1,sharex=True)
fx.plot(y2)
fx.set_title("Filtered output plot using linear convolution as circular convolution ")
fx.set_xlabel("n")
fx.set_ylabel("y")
fx.set_xlim(0,40)
plt.show()
```
We clearly see that the output is identical to the one obtained by linear convolution. Hence, by padding the sequence appropriately, we can reproduce the linear-convolution output using circular convolution.
# Zadoff Sequence
```
zadoff = pd.read_csv("x1.csv").values[:,0]
zadoff = np.array([complex(zadoff[i].replace("i","j")) for i in range(len(zadoff))])
zw,zh = signal.freqz(zadoff)
fig5,ex = plt.subplots(2,sharex=True)
plt.grid(True,which="all")
ex[0].plot(zw,(abs(zh)),"b")
ex[0].set_title("zadoff Magnitude response")
ex[0].set_xlabel("Frequency(w rad/s)")
ex[0].set_ylabel("Zadoff Amplitude dB")
angle_z = np.unwrap(np.angle(zh))
ex[1].plot(zw,angle_z,"g")
ex[1].set_title("Zadoff Phase response")
ex[1].set_xlabel("Frequency(w rad/s)")
ex[1].set_ylabel("Phase")
plt.show()
```
Properties of the Zadoff–Chu sequence:
(a) It is a complex sequence.
(b) It is a constant-amplitude sequence.
(c) The autocorrelation of a Zadoff–Chu sequence with a cyclically shifted version of itself is zero.
(d) Correlation of a Zadoff–Chu sequence with a delayed version of itself gives a peak at that delay.
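Properties (b) and (c) can be checked on a small generated sequence. Here we assume a textbook Zadoff–Chu construction with a made-up root u = 5 and length N = 13; the sequence stored in x1.csv may use different parameters.

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N with root u coprime to N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(u=5, N=13)

# (b) constant amplitude
print(np.allclose(np.abs(zc), 1.0))

# (c) zero cyclic autocorrelation at any nonzero shift
corr = np.abs(np.vdot(zc, np.roll(zc, 3)))
print(corr < 1e-9)
```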
# Correlation with a cyclically shifted version of itself
```
zadoff_modified = np.concatenate([zadoff[-5:],zadoff[:-5]])
z_out = np.correlate(zadoff,zadoff_modified,"full")
fig7,gx = plt.subplots(1,sharex=True)
plt.grid(True,which="all")
gx.plot((abs(z_out)),"b")
gx.set_title("Correlation of Zadoff and shifted Zadoff (magnitude)")
gx.set_xlabel("n")
gx.set_ylabel("Magnitude")
plt.show()
```
We clearly see a peak at the position corresponding to the applied circular shift.
# Conclusion
Through this assignment we were able to compute the output of a system for a given signal using convolution. We approached convolution with both the linear and the circular method, and used padding to bring the filter to the appropriate size before performing the circular convolution. Finally, we analyzed the cross-correlation of the Zadoff–Chu sequence with a circularly shifted version of itself, and observed a sharp peak at the location determined by the circular shift.
```
%matplotlib inline
import matplotlib.pyplot as plt
import itertools
import numpy as np
import pyquil.api as api
from pyquil.gates import *
from pyquil.quil import Program
def qubit_strings(n):
    qubit_strings = []
    for q in itertools.product(['0', '1'], repeat=n):
        qubit_strings.append(''.join(q))
    return qubit_strings

def black_box_map(n, q_find):
    """
    Black-box map, f(x), on n qubits such that f(q_find) = 1, and otherwise = 0
    """
    qubs = qubit_strings(n)
    d_blackbox = {q: 1 if q == q_find else 0 for q in qubs}
    return d_blackbox

def qubit_ket(qub_string):
    """
    Form a basis ket out of n-bit string specified by the input 'qub_string', e.g.
    '001' -> |001>
    """
    e0 = np.array([[1], [0]])
    e1 = np.array([[0], [1]])
    d_qubstring = {'0': e0, '1': e1}
    # initialize ket
    ket = d_qubstring[qub_string[0]]
    for i in range(1, len(qub_string)):
        ket = np.kron(ket, d_qubstring[qub_string[i]])
    return ket

def projection_op(qub_string):
    """
    Creates a projection operator out of the basis element specified by 'qub_string', e.g.
    '101' -> |101> <101|
    """
    ket = qubit_ket(qub_string)
    bra = np.transpose(ket)  # all entries real, so no complex conjugation necessary
    proj = np.kron(ket, bra)
    return proj

def black_box(n, q_find):
    """
    Unitary representation of the black-box operator on (n+1)-qubits
    """
    d_bb = black_box_map(n, q_find)
    # initialize unitary matrix
    N = 2**(n+1)
    unitary_rep = np.zeros(shape=(N, N))
    # populate unitary matrix
    for k, v in d_bb.items():
        unitary_rep += np.kron(projection_op(k), np.eye(2) + v*(-np.eye(2) + np.array([[0, 1], [1, 0]])))
    return unitary_rep

def U_grov(n):
    """
    The operator 2|psi><psi| - I , where |psi> = H^n |0>
    """
    qubs = qubit_strings(n)
    N = 2**n
    proj_psipsi = np.zeros(shape=(N, N))
    for s_ket in qubs:
        ket = qubit_ket(s_ket)
        for s_bra in qubs:
            bra = np.transpose(qubit_ket(s_bra))  # no complex conjugation required
            proj_psipsi += np.kron(ket, bra)
    # add normalization factor
    proj_psipsi *= 1/N
    return 2*proj_psipsi - np.eye(N)
```
### Grover's Search Algorithm
```
# Specify an item to find
findme = '1011'
# number of qubits (excluding the ancilla)
n = len(findme)
# number of iterations
num_iters = max(1, int(np.sqrt(2**(n-2))))
p = Program()
# define blackbox operator (see above)
p.defgate("U_bb", black_box(n, findme))
# define the U_grov (see above)
p.defgate("U_grov", U_grov(n))
# Apply equal superposition state
for q in range(1, n+1):
    p.inst(H(q))
# Make 0th qubit an eigenstate of the black-box operator
p.inst(H(0))
p.inst(Z(0))
# Grover iterations
for _ in range(num_iters):
    # apply oracle
    p.inst(("U_bb",) + tuple(range(n+1)[::-1]))
    # apply H . U_perp . H
    p.inst(("U_grov",) + tuple(range(1, n+1)[::-1]))
# measure and discard ancilla
p.measure(0, [0])
# run program, and investigate wavefunction
qvm = api.QVMConnection()
wavefunc = qvm.wavefunction(p)
outcome_probs = wavefunc.get_outcome_probs()
print ("The most probable outcome is: |%s>" % (max(outcome_probs, key=outcome_probs.get)[:-1]))
# histogram of outcome probs
plt.figure(figsize=(8, 6))
plt.bar([i[:-1] for i in outcome_probs.keys()], outcome_probs.values())
plt.show()
```
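For reference, the asymptotically optimal number of Grover iterations over a search space of size N = 2^n is about (pi/4)*sqrt(N), a standard result; the cell above uses a rougher heuristic. A small helper of our own making:

```python
import numpy as np

def optimal_grover_iters(n):
    """Optimal number of Grover iterations for an n-qubit search space."""
    return int(np.floor(np.pi / 4 * np.sqrt(2 ** n)))

print(optimal_grover_iters(4))  # 3 iterations for a 4-qubit (N=16) search
```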
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Recommenders: Quickstart
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/quickstart"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user.
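At its core, matrix factorization scores a user-movie pair by the dot product of their learned embedding vectors. A toy sketch with made-up 2-D embeddings:

```python
import numpy as np

# Hypothetical learned embeddings (in the model below they are 64-dimensional)
user_embedding = np.array([0.5, -1.0])
movie_embedding = np.array([0.4, -0.2])

# A higher dot product means a stronger predicted affinity
score = float(user_embedding @ movie_embedding)
print(score)  # 0.4
```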
### Import TFRS
First, install and import TFRS:
```
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
```
### Read the data
```
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
```
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
```
user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
```
### Define a model
We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
```
class MovieLensModel(tfrs.Model):
    # We derive from a custom base class to help reduce boilerplate. Under the hood,
    # these are still plain Keras Models.
    def __init__(
            self,
            user_model: tf.keras.Model,
            movie_model: tf.keras.Model,
            task: tfrs.tasks.Retrieval):
        super().__init__()
        # Set up user and movie representations.
        self.user_model = user_model
        self.movie_model = movie_model
        # Set up a retrieval task.
        self.task = task

    def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
        # Define how the loss is computed.
        user_embeddings = self.user_model(features["user_id"])
        movie_embeddings = self.movie_model(features["movie_title"])
        return self.task(user_embeddings, movie_embeddings)
```
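In spirit, the retrieval task computes scores between every user and every movie in the batch, and trains with a softmax cross-entropy in which the matching pairs are the positives. A simplified NumPy sketch (made-up embeddings, not the tfrs internals):

```python
import numpy as np

# Row i of `users` matches row i of `movies` (hypothetical 2-D embeddings)
users = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
movies = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.6]])

scores = users @ movies.T  # pairwise dot products
exp = np.exp(scores)
probs = exp / exp.sum(axis=1, keepdims=True)  # softmax over each row

# Cross-entropy of the diagonal ("correct") pairs
loss = -np.log(np.diag(probs)).mean()
print(loss > 0)
```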
Define the two models and the retrieval task.
```
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
```
### Fit and evaluate it.
Create the model, train it, and generate predictions:
```
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
```
### Import Data Set & Normalize
---
We have imported the famous MNIST dataset, a collection of 28x28 grayscale handwritten digits. We loaded the dataset and split it into training and test sets. We also need to normalize the data: the original dataset has pixel values between 0 and 255, which we normalize to the range 0 to 1.
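Note that `keras.utils.normalize` performs L2 normalization along the given axis rather than plain min-max scaling; both map non-negative pixel data into [0, 1]. A tiny sketch with a made-up pixel row contrasts the two:

```python
import numpy as np

img = np.array([[0.0, 128.0, 255.0]])  # hypothetical row of pixel values

# Plain rescaling to [0, 1]
scaled = img / 255.0

# L2 normalization along axis=1 (each row divided by its Euclidean norm)
l2 = img / np.linalg.norm(img, axis=1, keepdims=True)

print(scaled.round(3))  # [[0.    0.502 1.   ]]
print(l2.round(3))
```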
```
import keras
from keras.datasets import mnist # 28x28 image data written digits 0-9
from keras.utils import normalize
#print(keras.__version__)
#split train and test dataset
(x_train, y_train), (x_test,y_test) = mnist.load_data()
#normalize data
x_train = normalize(x_train, axis=1)
x_test = normalize(x_test, axis=1)
import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
#print(x_train[0])
```
## Specify Architecture:
---
We have specified our model architecture by adding commonly used densely-connected (fully-connected) layers. For the output layer we chose the **softmax** activation function, which produces a probability distribution over the classes.
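As a quick illustration with made-up logits: softmax exponentiates the raw outputs and normalizes them so they form a probability distribution.

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # hypothetical raw network outputs
probs = np.exp(logits) / np.exp(logits).sum()

print(probs.round(3))  # [0.659 0.242 0.099]
print(probs.sum())     # sums to 1 (up to rounding)
```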
```
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
# created model
model = Sequential()
# flatten layer so it is operable by this layer
model.add(Flatten())
# regular densely-connected NN layer.
#layer 1, 128 node
model.add(Dense(128, activation='relu'))
#layer 2, 128 node
model.add(Dense(128, activation='relu'))
#output layer, since it is probability distribution we will use 'softmax'
model.add(Dense(10, activation='softmax'))
```
### Compile
---
We have compiled the model with an EarlyStopping callback: training stops early when the validation loss has not improved for two consecutive epochs (patience=2).
```
from keras.callbacks import EarlyStopping
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics = ['accuracy'])
#stop when see model not improving
early_stopping_monitor = EarlyStopping(monitor='val_loss', patience=2)
```
### Fit
---
Fit the model on the training data for up to 10 epochs.
```
model.fit(x_train, y_train, epochs=10, callbacks=[early_stopping_monitor], validation_data=(x_test, y_test))
```
### Evaluate
---
Evaluate the accuracy of the model.
```
val_loss, val_acc = model.evaluate(x_test,y_test)
print(val_loss, val_acc)
```
### Save
---
Save the model and show summary.
```
model.save('mnist_digit.h5')
model.summary()
```
### Load
----
Load the model.
```
from keras.models import load_model
new_model = load_model('mnist_digit.h5')
```
### Predict
----
Here our model predicts a probability distribution; we have to convert it to a class label (via argmax).
```
predict = new_model.predict([x_test])
#return the probability
print(predict)
print(predict[1].argmax(axis=-1))
plt.imshow(x_test[1])
plt.show()
```
# Task 98 - Performance analysis of AI applications
## Exercise: Program the problem posed in the following video series in the programming language of your choice:
## Part one
[](https://www.youtube.com/watch?v=GD254Gotp-4 "video")
#### Challenge:
Define two functions: one, suma_lineal, that computes the sum of the n numbers from 1 to n in a straightforward way, and another, suma_constante, that performs the same task using the arithmetic-series formula for the sum of the numbers from 1 to n.
```
# Install line_profiler only in case the following script does not work
#! pip install line_profiler
%load_ext line_profiler
import time
def suma_lineal(n):
    pass

def suma_constante(n):
    pass
cantidad = 1000000
def ejemplo(cantidad):
    for i in range(4):  # increase the input size 4 times
        start_time = time.time()
        suma1 = suma_lineal(cantidad)
        middle_time = time.time()
        suma2 = suma_constante(cantidad)
        stop_time = time.time()
        set_time = middle_time - start_time
        list_time = stop_time - middle_time
        print("\tLinear-time test for n = {}:\t\t{} seconds".format(cantidad, set_time))
        print("\tConstant-time test for n = {}:\t{} seconds".format(cantidad, list_time))
        cantidad *= 10  # starts at 1000000, then *10 each iteration
        # return set_time, list_time
ejemplo(cantidad)
```
The code iterates over the input, timing both functions at each step. We can use lprun to see which operations are the most costly.
```
%lprun -f ejemplo ejemplo(cantidad)
```
The code takes approximately 0.003842 seconds to run (the result may vary depending on your machine). Of the execution time, roughly half (42%) is spent on the linear function, about 55% on the constant sum, and the rest essentially on completing the function.
## Part two
[](https://www.youtube.com/watch?v=MaY6FpP0FEU "video")
In this video we introduce asymptotic notation and algorithmic complexity, and solve the pending challenge of defining two functions that sum the integers from 1 to n, using two algorithms with linear and constant complexity.
```
def suma_lineal(n):
    suma = 0
    for i in range(1, n+1):
        suma += i
    return suma

def suma_constante(n):
    return (n/2) * (n+1)

cantidad = 1000000

def ejemplo2(cantidad):
    for i in range(4):  # increase the input size 4 times
        start_time = time.time()
        suma1 = suma_lineal(cantidad)
        middle_time = time.time()
        suma2 = suma_constante(cantidad)
        stop_time = time.time()
        set_time = middle_time - start_time
        list_time = stop_time - middle_time
        print("\tLinear-time test for n = {}:\t\t{} seconds".format(cantidad, set_time))
        print("\tConstant-time test for n = {}:\t{} seconds".format(cantidad, list_time))
        cantidad *= 10  # starts at 1000000, then *10 each iteration
        # return set_time, list_time
%time ejemplo2(cantidad)
ejemplo2(cantidad)
%lprun -f ejemplo2 ejemplo2(cantidad)
# We can use lprun to see which operations are the most costly.
```
# Graphical representation according to complexity
```
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

def plot_funs(xs):
    """
    Plot a set of predefined functions for the x values in 'xs'.
    """
    ys0 = [1 for x in xs]
    ys1 = [x for x in xs]
    ys1_b = [x + 25 for x in xs]
    ys2 = [x**2 for x in xs]
    ys2_b = [x**2 + x for x in xs]
    ys3 = [x**3 for x in xs]
    ys3_b = [x**3 + x**2 for x in xs]
    fig = plt.figure()
    plt.plot(xs, ys0, '-', color='tab:brown')
    plt.plot(xs, ys1, '-', color='tab:blue')
    plt.plot(xs, ys1_b, ':', color='tab:blue')
    plt.plot(xs, ys2, '-', color='tab:orange')
    plt.plot(xs, ys2_b, ':', color='tab:orange')
    plt.plot(xs, ys3, '-', color='tab:green')
    plt.plot(xs, ys3_b, ':', color='tab:green')
    plt.legend(["$1$", "$x$", "$x+25$", "$x^2$", "$x^2+x$", "$x^3$",
                "$x^3+x^2$"])
    plt.xlabel('$n$')
    plt.ylabel('$f(n)$')
    plt.title('Function growth')
    plt.show()

plot_funs(range(10))
```
Lines of the same color represent functions of the same degree. The brown line, barely visible, shows a constant function (f(n)=1); the blue lines show linear functions (x and x+25); the orange lines quadratic functions (x² and x²+x); and the green lines cubic functions (x³ and x³+x²). For each color, the solid line represents the function containing only the highest-degree term, and the dotted line a function that also has lower-degree terms. As can be seen, functions of the same degree grow similarly, especially as n increases. Compare the plots of the same functions when n grows from 10 (previous plot) to 100 (plot in the next cell):
```
plot_funs(range(100))
```
### Comments in Python are written with # - if we omit it, Python will try to interpret the text as code
```
#jupyter notebook; jupyter hub
jupyter notebook; jupyter hub
10 + 5
2 - 7
4 * 6
9 / 3
8 ** 2
x = 10
x = ergbreoubhtoebeobf
x
print(x)
rocznik = 1991
rocznik
teraz = 2020
teraz - rocznik
ile_lat = teraz - rocznik
ile_lat
ile_lat + 25
# integer - a whole number
type(ile_lat)
zarobki_biednego_doktoranta = 5.50
zarobki_biednego_doktoranta
# float - a real number
type(zarobki_biednego_doktoranta)
werdykt = "To są marne zarobki"
werdykt = 'To są marne zarobki'
type(werdykt)
zarobki_biednego_doktoranta + werdykt
10 + 5.50
# Multiplying a string by e.g. 2 duplicates that string
werdykt * 2
# A string cannot be divided (at least not this way)
werdykt / 3
"swps" + "jest super"
a = "UW"
b = "jest"
c = "super"
a+b+c
a + " " + b + " " + c
print(a, b, c)
?print
print(a, b, c, sep = " nie ")
print(a, b, c, sep = "\n")
test = print(a, b, c)
"UW jest super a absolwenci tej uczelni zarabiają więcej niż 5.50 brutto na h"
info = f"{a} {b} {c} a absolwenci tej uczelni zarabiają więcej niż {zarobki_biednego_doktoranta} brutto na h"
info
ocena = 2
maksimum = 10
# A method is a kind of function dedicated to one specific type of variable
"Te warsztaty zostały ocenione na {} pkt na {} możliwych".format(ocena, maksimum)
10.format()
# A backslash (\) allows splitting long code across several lines
"Te warsztaty zostały ocenione \
na {} pkt \
na {} możliwych".format(ocena, maksimum)
"Te warsztaty zostały ocenione
na {} pkt
na {} możliwych".format(ocena, maksimum)
nr = "300"
nr + 100
int(nr) + 100
float(nr)
int(info)
prawda = True
falsz = False
## in R: TRUE/T ; FALSE/F
prawda
type(prawda)
prawda + prawda
True == 1
False == 0
10 > 5
10 > 20
10 < 5
20 >= 10
50 <= 5
10 != 3
[]
list()
moja_lista = ["Samsung", "Huawei", "Xiaomi"]
type(moja_lista)
aj = "Apple"
lista2 = ["Samsung", "Huawei", "Xiaomi", aj]
lista2
```
#### In Python, element indexing starts from zero!!!
```
lista2[1]
lista2[0]
lista2[-1]
```
#### Python does not include the last element, so code like the one below selects only elements 0 and 1 (the first two)
```
lista2[0:2]
lista2[:3]
lista2[0][0:3]
ruskie = 4.50
ziemniaki = 2.35
surowka = 2.15
nalesniki = 7.50
kompot = 2.00
kotlet = 8.50
pomidorowa = 2.35
wodka_spod_lady = 4.00
leniwe = 3.90
kasza = 2.25
ceny = [ruskie, ziemniaki, surowka, nalesniki, kompot, kotlet, pomidorowa, wodka_spod_lady, leniwe, kasza]
menu = ['ruskie', ruskie,
'ziemniaki', ziemniaki,
'surowka', surowka,
'nalesniki', nalesniki,
'kompot', kompot,
'kotlet', kotlet,
'pomidorowa', pomidorowa,
'wódka spod lady', wodka_spod_lady,
'leniwe', leniwe,
'kasza', kasza]
menu = [
['ruskie', ruskie],
['ziemniaki', ziemniaki],
['surowka', surowka],
['nalesniki', nalesniki],
['kompot', kompot],
['kotlet', kotlet],
['pomidorowa', pomidorowa],
['wódka spod lady', wodka_spod_lady],
['leniwe', leniwe],
['kasza', kasza]
]
menu[0]
menu[0][1]
menu[0][-1]
#ruskie, surowka, wódka spod lady
menu[0][-1] + menu[2][-1] + menu[-3][-1]
menu[-1] = ["suchy chleb", 10.50]
menu
len(menu)
?len
len("to ejst tekst")
ceny.sort()
#stack overflow
ceny
ceny2 = [4.0, 4.5, 7.5, 8.5,2.0, 2.15, 2.25]
sorted(ceny2)
ceny2
ceny2 = sorted(ceny2)
menuDict = {
'ruskie': ruskie,
'ziemniaki': ziemniaki,
'surowka': surowka,
'nalesniki': nalesniki,
'kompot': kompot,
'kotlet': kotlet,
'pomidorowa': pomidorowa,
'wódka spod lady': wodka_spod_lady,
'leniwe': leniwe,
'kasza': kasza
}
```
#### Dictionaries - we can only address them by keys (not by position in the collection)
```
menuDict[0]
menuDict
menuDict["ruskie"]
menuDict.keys()
menuDict.values()
menuDict.items()
menuDict.keys()[0]
menuDict['wódka spod lady']
```
##### Tuple vs list - they work similarly, but list elements can be modified. A tuple cannot be changed.
```
lista = [1,2,3,4]
krotka = (1,2,3,4)
lista
krotka
lista[0]
krotka[0]
lista[0] = 6
lista
krotka[0] = 6
type(krotka)
mini = [1,2,3,4,5]
for i in mini:
    print(i)

for i in mini:
    q = i ** 2
    print(q)
"Liczba {} podniesiona do kwadratu daje {}".format(x, y)
f"Liczba {x} podniesiona do kwadratu daje {y}"
for i in mini:
    print(f"Liczba {i} podniesiona do kwadratu daje {i**2}")
mini2 = [5, 10, 15, 20, 50]
for index, numer in enumerate(mini2):
    print(index, numer)

for index, numer in enumerate(mini2):
    print(f"Liczba {numer} (na pozycji {index}) podniesiona do kwadratu daje {numer**2}")
mini2
100 % 2 == 0
for index, numer in enumerate(mini2):
    if numer % 2 == 0:
        print(f"Liczba {numer} (na pozycji {index}) jest parzysta.")
    else:
        print(f"Liczba {numer} (na pozycji {index}) jest nieparzysta.")
parzyste = []
nieparzyste = []
#how to add a value to a list (in a loop) python
for index, numer in enumerate(mini2):
    if numer % 2 == 0:
        print(f"Liczba {numer} (na pozycji {index}) jest parzysta.")
        parzyste.append(numer)
    else:
        print(f"Liczba {numer} (na pozycji {index}) jest nieparzysta.")
        nieparzyste.append(numer)
parzyste
nieparzyste
mini3 = [5, 10, 15, 20, 50, 60, 80, 30, 100, 7]
for numer in mini3:
    if numer == 50:
        print("To jest zakazana liczba. Nie tykam.")
    elif numer % 2 == 0:
        print(f"Liczba {numer} jest parzysta.")
    else:
        print(f"Liczba {numer} jest nieparzysta.")
mini4 = [5, 10, 15, 20, 50, 666, 80, 30, 100, 7]
for numer in mini4:
    if numer == 666:
        print("To jest szatańska liczba. Koniec warsztatów.")
        break
    elif numer == 50:
        print("To jest zakazana liczba. Nie tykam.")
    elif numer % 2 == 0:
        print(f"Liczba {numer} jest parzysta.")
    else:
        print(f"Liczba {numer} jest nieparzysta.")
menuDict
menuDict["ruskie"]
co_bralem = ["pomidorowa", "ruskie", "wódka spod lady"]
ile_place = 0
for pozycja in co_bralem:
    # ile_place = ile_place + menuDict[pozycja]
    ile_place += menuDict[pozycja]
ile_place
tqdm
!pip install tqdm #anaconda/colab
!pip3 intall tqdm
# in the terminal/command line
pip install xxx
pip3 install xxx
!pip3 install tqdm
import tqdm
tqdm.tqdm
# from PACKAGE_NAME import FUNCTION_NAME
from tqdm import tqdm
tqdm
n = 0
for i in tqdm(range(0, 100000)):
    x = (i * i) / 3
    n += x
n
import numpy as np
seed = np.random.RandomState(100)
wzrost_lista = list(seed.normal(loc=1.70,scale=.15,size=100000).round(2))
seed2 = np.random.RandomState(100)
waga_lista = list(seed2.normal(loc=80,scale=10,size=100000).round(2))
# bmi = waga / wzrost**2
waga_lista / wzrost_lista**2  # TypeError -- plain Python lists don't support element-wise math
bmi_lista = []
for index, value in tqdm(enumerate(wzrost_lista)):
bmi = waga_lista[index]/wzrost_lista[index]**2
bmi_lista.append(bmi)
bmi_lista[:20]
seed = np.random.RandomState(100)
wzrost = seed.normal(loc=1.70,scale=.15,size=100000).round(2)
seed2 = np.random.RandomState(100)
waga = seed2.normal(loc=80,scale=10,size=100000).round(2)
wzrost
len(wzrost)
mini5 = [5, 10, 30, 60, 100]
np.array(mini5)
vector = np.array([5, 10, 30, 60, 100])
mini5 * 2
vector * 2
vector / 3
vector ** 2
mini5 + 3  # TypeError -- you can't add a number to a list
vector + 3
seed = np.random.RandomState(100)
wzrost = seed.normal(loc=1.70,scale=.15,size=100000).round(2)
seed2 = np.random.RandomState(100)
waga = seed2.normal(loc=80,scale=10,size=100000).round(2)
bmi = waga / wzrost ** 2
bmi
np.min(bmi)
np.max(bmi)
np.mean(bmi)
!pip3 install pandas
!pip3 install gapminder
from gapminder import gapminder as df
df
np.mean(df["lifeExp"])
df.iloc[0:10]
df.iloc[0:10]["lifeExp"]
df.iloc[0:10, -1]
df.iloc[0:20,:].loc[:,"pop"]
df["year"] == 2007
df2007 = df[df["year"] == 2007]
df2007
#matplotlib
import matplotlib.pyplot as plt
plt.style.use("ggplot")
#!pip install matplotlib
?plt.plot
df["gdpPercap"]
plt.plot(df2007["gdpPercap"], df2007['lifeExp'])
plt.scatter(df2007["gdpPercap"], df2007['lifeExp'])
plt.show()
plt.scatter(df2007["gdpPercap"], df2007['lifeExp'])
plt.xscale("log")
plt.show()
plt.hist(df2007['lifeExp'])
```
## Support vector machine applied to XOR
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import numpy as np
from sklearn import svm
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib import cm
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
cmap_big = cm.get_cmap('Spectral', 512)
cmap = mcolors.ListedColormap(cmap_big(np.linspace(0.5, 1, 128)))
xx, yy = np.meshgrid(np.linspace(-3, 3, 500),
np.linspace(-3, 3, 500))
np.random.seed(0)
X = np.random.randn(300, 2)
Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
fig = plt.figure(figsize=(16,8))
fig.patch.set_facecolor('white')
for i in range(2):
idx = np.where(Y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], s=40, edgecolors='k', alpha = .9, label='Class {0:d}'.format(i),cmap=cmap)
plt.xlabel('$x_1$', fontsize=14)
plt.ylabel('$x_2$', fontsize=14)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.legend()
plt.show()
# fit the model
clf= svm.SVC(gamma=40)
#clf=svm.SVC(kernel='linear')
#clf=svm.SVC(kernel='poly', degree=5, coef0=1)
#clf=svm.SVC(kernel='sigmoid', gamma=15)
clf = clf.fit(X, Y)
# plot the decision function for each datapoint on the grid
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
fig = plt.figure(figsize=(16,8))
fig.patch.set_facecolor('white')
ax = fig.gca()
imshow_handle = plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',
origin='lower', alpha=.5, cmap=cmap)
contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2,
                       linestyles='--', colors=[colors[9]])
for i in range(2):
idx = np.where(Y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], edgecolors='k', s=40,
label='Class {0:d}'.format(i),cmap=cmap)
plt.xlabel('$x_1$', fontsize=14)
plt.ylabel('$x_2$', fontsize=14)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.legend()
plt.show()
print('Accuracy: {0:3.5f}'.format(np.sum(Y==clf.predict(X))/float(X.shape[0])*100))
```
# Gradient descent with hinge loss
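Before the scikit-learn version below, here is a rough sketch of what stochastic gradient descent on the hinge loss does for a plain linear SVM. This is an illustration only: the helper `hinge_sgd` is made up for this sketch, it runs on a tiny linearly separable toy set (the XOR data itself is *not* linearly separable, which is why the notebook first lifts it through an RBF feature map), and scikit-learn's `SGDClassifier` adds refinements such as learning-rate schedules.

```python
import numpy as np

def hinge_sgd(X, y, lr=0.01, lam=0.001, epochs=100, seed=0):
    """Minimal linear SVM fit by SGD on the regularized hinge loss.

    Labels must be in {-1, +1}. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # inside the margin: the hinge gradient is active
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                # correctly classified with margin: only the L2 penalty acts
                w -= lr * lam * w
    return w, b

# Tiny linearly separable toy problem (NOT the XOR data)
X_toy = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y_toy = np.array([1, 1, -1, -1])
w, b = hinge_sgd(X_toy, y_toy)
pred = np.sign(X_toy @ w + b)
```

With an RBF feature map like the one used below, the same update rule can separate the XOR classes, because the lifted features become (approximately) linearly separable.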
```
def phi(X,nc):
rbf_feature = RBFSampler(gamma=10, n_components=nc, random_state=1)
Z = rbf_feature.fit_transform(X)
return Z
X.shape
nc =20
X_features = phi(X, nc)
clf = SGDClassifier(loss='hinge', penalty='l2', max_iter=1000, alpha=.001)
clf = clf.fit(X_features, Y)
X_features.shape
print('Accuracy: {0:3.5f}'.format(np.sum(Y==clf.predict(X_features))/float(X_features.shape[0])*100))
X_grid = np.c_[xx.ravel(), yy.ravel()]
X_grid_features = phi(X_grid,nc)
Z = clf.decision_function(X_grid_features)
Z = Z.reshape(xx.shape)
fig = plt.figure()
fig.patch.set_facecolor('white')
ax = fig.gca()
imshow_handle = plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',
origin='lower', alpha=.3)
contours = plt.contour(xx, yy, Z, levels=[0], linewidths=1,
                       linestyles='--')
for i in range(2):
idx = np.where(Y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], edgecolors='k', s=40,
label='Class {0:d}'.format(i),cmap=cmap)
plt.xlabel('$x_1$', fontsize=14)
plt.ylabel('$x_2$', fontsize=14)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.legend()
plt.show()
Z
```
```
from keras.preprocessing.sequence import TimeseriesGenerator
from FC_RNN_Evaluater.FC_RNN_Evaluater import *
from keras.initializers import RandomNormal
import numpy as np
timesteps = 10
inputMatrix = np.random.rand(57,224,224,3)# np.array([[[[i, i, i]]] for i in range(57)])
labels = np.array([[i, i, i] for i in range(57)])
inputMatrix, inputLabels, outputLabels = getSequencesToSequences(inputMatrix, labels, timesteps)
batch_size=1
img_gen = TimeseriesGenerator(inputMatrix, outputLabels, length=timesteps, sampling_rate=1, stride=timesteps, batch_size=batch_size)
ang_gen = TimeseriesGenerator(inputLabels, outputLabels, length=timesteps, sampling_rate=1, stride=timesteps, batch_size=batch_size)
batch_01 = img_gen[0]
batch_0 = ang_gen[0]
inputFrames, y = batch_01
inputSeq, y = batch_0
m = np.zeros((n,)+y.shape)
with tf.Session():
m = samples.eval()
from FC_RNN_Evaluater.Stateful_FC_RNN_Configuration import *
vgg_model, full_model, modelID, preprocess_input = getFinalModel(timesteps = timesteps, lstm_nodes = lstm_nodes, lstm_dropout = lstm_dropout, lstm_recurrent_dropout = lstm_recurrent_dropout,
num_outputs = num_outputs, lr = learning_rate, include_vgg_top = include_vgg_top, use_vgg16 = use_vgg16)
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import backend as k
from keras import losses
import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_squared_error
from math import sqrt
model = full_model
n = 5
outputs = model.predict([inputFrames, inputSeq])
targets = y
print(outputs.shape, targets.shape)
sigma = 0.05
samples = RandomNormal(mean=model.outputs, stddev=sigma, seed=None)((n,)+outputs.shape)
samples
#with tf.Session() as sess:
# print(samples.eval())
yy = tf.convert_to_tensor(np.repeat(targets[np.newaxis, ...], n, axis=0), name='yy', dtype=tf.float32)
yy
abs_diff = tf.abs(samples - yy)
rti = - tf.reduce_mean(abs_diff, axis = -1) - tf.reduce_mean(abs_diff, axis = -1)
rti
ri = tf.reduce_sum(rti, axis=-1)
ri
bt = tf.reduce_mean(rti, axis=0)
bt
rti_bt = rti - bt
rti_bt
ri_b = tf.reduce_sum(rti_bt, axis=-1)
ri_b
mu = tf.convert_to_tensor(np.repeat(outputs[np.newaxis, ...], n, axis=0), name='mu', dtype=tf.float32)
mu
gti = (samples - mu) / tf.convert_to_tensor(sigma**2)
gti
gradients_per_episode = []
for i in range(samples.shape[0]):
#print(samples[i], model.output)
#tf.assign(model.output, samples[i])
#loss = losses.mean_squared_error(targets, model.output)tf..eval()
loss = losses.mean_squared_error(targets, samples[i])
gradients = k.gradients(loss, model.trainable_weights)
#print(gradients)
gradients = [g*ri_b[i] for g in gradients]
gradients_per_episode.append(gradients)
len(gradients_per_episode)
len(gradients_per_episode[0])
gradients_per_episode[0]
stacked_gradients = []
for i in range(len(gradients_per_episode[0])):
stacked_gradients.append(tf.stack([gradients[i] for gradients in gradients_per_episode]))
stacked_gradients
final_gradients = [tf.reduce_mean(g, axis=0) for g in stacked_gradients]
final_gradients
for i in range(len(model.trainable_weights)):
tf.assign_sub(model.trainable_weights[i], final_gradients[i])
# Begin TensorFlow
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
steps = 1 # steps of gradient descent
for s in range(steps):
#print(model.input)
# ===== Numerical gradient =====
#evaluated_gradients = sess.run(gradients, feed_dict={'tdCNN_input:0': inputFrames, 'aux_input:0': inputSeq})
# Step down the gradient for each layer
for i in range(len(model.trainable_weights)):
sess.run(tf.assign_sub(model.trainable_weights[i], final_gradients[i]))
# Every 10 steps print the RMSE
if s % 10 == 0:
print("step " + str(s))
#final_outputs = model.predict([inputFrames, inputSeq])
print("===AFTER STEPPING DOWN GRADIENT===")
print("outputs:\n", outputs)
print("targets:\n", targets)
loss = losses.mean_squared_error(targets, model.output)
print(targets.shape, model.output.shape)
print(loss)
# ===== Symbolic Gradient =====
gradients = k.gradients(loss, model.trainable_weights)
print(gradients)
#print("===BEFORE WALKING DOWN GRADIENT===")
#print("outputs:\n", outputs)
#print("targets:\n", targets)
```
<a href="https://colab.research.google.com/github/Khislatz/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/Khislat_Zhuraeva_LS_DS_214_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 4*
---
# Logistic Regression
## Assignment 🌯
You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?
> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.
- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
- [ ] Begin with baselines for classification.
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
- [ ] Get your model's test accuracy. (One time, at the end.)
- [ ] Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Make exploratory visualizations.
- [ ] Do one-hot encoding.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Get and plot your coefficients.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df.head(3)
df.shape
df['Date'] = pd.to_datetime(df['Date'])
train = df[df['Date']<='12/31/2016']
val = df[(df['Date']>='01/01/2017') & (df['Date']<='12/31/2017')]
test = df[df['Date']>='01/01/2018']
train.shape, val.shape, test.shape
df['Great'].dtypes
target = 'Great'
y_train = train[target]
y_train.value_counts(normalize=True)
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train) #majority is False
### Training accuracy of majority class baseline =
### frequency of majority class
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
### Validation accuracy of majority class baseline =
### usually similar to Train accuracy
y_val = val[target]
y_pred = [majority_class]*len(y_val)
accuracy_score(y_val, y_pred)
train.describe().head(4)
# 1. Import estimator class
from sklearn.linear_model import LogisticRegression
# 2. Instantiate this class
log_reg = LogisticRegression()
# 3. Arrange X feature matrices (already did y target vectors)
features = ['Hunger', 'Circum','Volume', 'Tortilla','Temp', 'Meat','Fillings','Meat:filling','Salsa','Wrap']
X_train = train[features]
X_val = val[features]
# Impute missing values
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_val_imputed = imputer.transform(X_val)
# 4. Fit the model
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))
#Same things as
y_pred = log_reg.predict(X_val_imputed)
print('Validation Accuracy', accuracy_score(y_pred, y_val))
#The predictions look like this
log_reg.predict(X_val_imputed)
test_case = [[0.500000, 17.000000, 0.400000,1.400000,1.000000,1.000000,1.000000,0.500000,0.000000,0.000000]]
log_reg.predict(test_case)
log_reg.predict_proba(test_case)[0]
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
import numpy as np
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
1 - sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegressionCV
X_test = test[features]
y_test = test[target]
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model.fit(X_train_scaled, y_train);
X_test_imputed = imputer.transform(X_test)
X_test_scaled = scaler.transform(X_test_imputed)
print('Test Accuracy', model.score(X_test_scaled, y_test))
```
# Basic Plotting in Python
Making exploratory plots is a common task in data science, and good presentations usually feature excellent plots.
For us the most important plotting package is `matplotlib`, which is Python's take on MATLAB's plotting functionality. Also of note is the package `seaborn`, which we won't use nearly as much as `matplotlib`; we'll briefly touch on one `seaborn` feature that I like, but won't go beyond that.
First let's check that you have both packages installed.
```
## It is standard to import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
## It is standard to import seaborn as sns
import seaborn as sns
## Let's perform a version check
import matplotlib
# I had 3.3.2 when I wrote this
print("Your matplotlib version is",matplotlib.__version__)
# I had 0.11.0 when I wrote this
print("Your seaborn version is",sns.__version__)
```
##### Be sure you can run both of the above code chunks before continuing with this notebook, again it should be fine if your package version is slightly different than mine.
##### As a second note, you'll be able to run a majority of the notebook with just matplotlib. I'll put the seaborn content at the bottom of the notebook.
```
## We'll be using what we learned in the
## previous two notebooks to help
## generate data
import numpy as np
import pandas as pd
```
## A First Plot
Before getting into the nitty gritty, let's look at a first plot made with `matplotlib`.
```
## Here's our data
x = [0,1,2,3,4,5,6,7,8,9,10]
y = [2*i - 3 for i in x]
## plt.plot will make the plot
## First put what you want on the x-axis, then the y-axis
plt.plot(x,y)
## Always end your plotting block with plt.show
## in jupyter this makes sure that the plot displays
## properly
plt.show()
```
##### What Happened?
So what happened when we ran the above code?
`matplotlib` creates a figure object, places a subplot object on it, then places the points on the subplot and connects them with straight lines.
We'll return to the topic of subplots later in the notebook
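Those implicit steps can also be spelled out explicitly. Here's a small sketch using the equivalent object-oriented calls, i.e. the same figure and subplot objects `matplotlib` was creating for us behind the scenes:

```python
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [2*i - 3 for i in x]

fig = plt.figure()       # 1. create the figure object
ax = fig.add_subplot()   # 2. place a subplot (axes) object on it
ax.plot(x, y)            # 3. place the points and connect them with lines
plt.show()
```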
Now you try plotting the following `x` and `y`.
```
## Run this code first
## np.linspace makes an array that
## goes from -5 to 5 broken into
## 100 evenly spaced steps
x = 10*np.linspace(-5,5,100)
y = x**2 - 3
## You code
## Plot y against x
```
## Getting More Control of your Figures
While you can certainly use the simple code above to generate figures, the best presentations typically feature polished graphics demonstrating the outcome. Let's learn how to control our figures a bit more.
This process typically involves explicitly defining a figure and subplot object. Let's see.
```
## plt.figure() will make the figure object
## figsize can control how large it is (width,height)
## here we make a 10 x 12 window
plt.figure(figsize = (10,12))
## This still creates the subplot object
## that we plot on
plt.plot(x,y)
## we can add axis labels
## and control their fontsize
## A good rule of thumb is the bigger the better
## You want your plots to be readable
## As a note: matplotlib can use LaTeX commands
## so if you place math text in dollar signs it will
## be in a LaTeX environment
plt.xlabel("$x$", fontsize = 16)
plt.ylabel("$y$", fontsize = 16)
## we can set the plot axis limits like so
## This makes the x axis bounded between -20 and 20
plt.xlim((-20,20))
## this makes the y axis bounded between -100 and 100
plt.ylim(-100,100)
## Also a title
## again make it large font
plt.title("A Plot Title", fontsize = 20)
## Now we show the plot
plt.show()
```
#### Controlling How the Plotted Data Looks
We can control the appearance of what is plotted. Here's a quick cheatsheet of easy to use options:
| Color | Description |
| :-------------: |:------------:|
| r | red |
| b | blue |
| k | black |
| g | green |
| y | yellow |
| m | magenta |
| c | cyan |
| w | white |
|Line Style | Description |
|:---------:|:-------------:|
| - | Solid line |
| -- | Dashed line |
| : | Dotted line |
| -. | Dash-dot line |
| Marker | Description |
|:------:|:--------------:|
|o | Circle |
|+ | Plus Sign |
|* | Asterisk |
|. | Point |
| x | Cross |
| s | Square |
|d | Diamond |
|^ | Up Triangle |
|< | Right Triangle |
|> | Left Triangle |
|p | Pentagram |
| h | Hexagram |
Let's try the above plot one more time, but using some of these to jazz it up.
```
## plt.figure() will make the figure object
## figsize can control how large it is (width,height)
plt.figure(figsize = (10,12))
## The third argument to plot(), 'mp' here
## tells matplotlib to make the points magenta
## and to use pentagrams, the absence of a line character
## means there will be no line connecting these points
## we can also add a label, and insert a legend later
plt.plot(x,y,'mp', label="points")
## We can even plot two things on the same plot
## here the third argument tells matplotlib to make a
## green dotted line
plt.plot(x+10,y-100,'g--', label="shifted line")
## we can add axis labels
## and control their fontsize
plt.xlabel("$x$", fontsize = 16)
plt.ylabel("$y$", fontsize = 16)
## Also a title
plt.title("A Plot Title", fontsize = 20)
## plt.legend() adds the legend to the plot
## This will display the labels we had above
plt.legend(fontsize=14)
# Now we show the plot
plt.show()
## You code
## Redefine x and y to be this data
x = 10*np.random.random(100) - 5
y = x**3 - x**2 + x
## You code
## Plot y against x here
## play around with different colors and markers
```
## Subplots
Sometimes you'll want to plot multiple things in the same Figure. Luckily `matplotlib` has the functionality to create subplots.
```
## plt.subplots makes a figure object
## then populates it with subplots
## the first number is the number of rows
## the second number is the number of columns
## so this makes a 2 by 2 subplot matrix
## fig is the figure object
## axes is a matrix containing the four subplots
fig, axes = plt.subplots(2, 2, figsize = (10,8))
## We can plot like before but instead of plt.plot
## we use axes[i,j].plot
## A random cumulative sum on axes[0,0]
axes[0,0].plot(np.random.randn(20).cumsum(),'r--')
## note I didn't have an x, y pair here
## so what happened was, matplotlib populated
## the x-values for us, and used the input
## as the y-values.
## I can set x and y labels on subplots like so
## Notice that here I must use set_xlabel instead of
## simply xlabel
axes[0,0].set_xlabel("$x$", fontsize=14)
axes[0,0].set_ylabel("$y$", fontsize=14)
## show the plot
plt.show()
## plt can also make a number of other useful graph types
fig, axes = plt.subplots(2, 2, figsize = (10,8))
axes[0,0].plot(np.random.randn(20).cumsum(),'r--')
## like scatter plots
## for these put the x, then the y
## you can then specify the "c"olor, "s"ize, and "marker"shape
## it is also good practice to let long code go onto multiple lines
## in python, you can go to a new line following a comma in a
## function call
axes[0,1].scatter(np.random.random(10), # start a new line now
np.random.random(10),
c = "purple", # color
s = 50, # marker size
marker = "*") # marker shape
## or histograms
## this can be done with .hist
## you input the data you want a histogram of
## and you can specify the number of bins with
## bins
axes[1,0].hist(np.random.randint(0,100,100), bins = 40)
## and text
## for this you call .text()
## you input the x, y position of the text
## then the text itself, then you can specify the fontsize
axes[1,1].text(.5, .5, "Hi Mom!", fontsize=20)
plt.show()
```
As a note all of the plotting capabilities shown above (`hist()`, `scatter()`, and `text()`) are available outside of subplots as well. You'd just call `plt.hist()`, `plt.scatter()` or `plt.text()` instead.
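For instance, here's a quick sketch of the same `hist()` and `text()` calls made directly on `plt`:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.default_rng(216).normal(size=500)

plt.figure(figsize=(6, 4))
# hist() also returns the bin counts and edges, which can be handy
counts, edges, _ = plt.hist(data, bins=30)
# text placed straight onto the current axes
plt.text(edges[0], counts.max(), "500 normal draws", fontsize=12)
plt.show()
```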
```
## You code
## Make a 2 x 2 subplot
## Use numpy to generate data and plot
## a cubic function in the 0,0 plot
## a scatter plot of two 100 pulls from random normal distribution
## in the 0,1 plot
## a histogram of 1000 pulls from the random normal distribution
## in the 1,0 plot
## and whatever text you'd like in the 1,1 plot
```
### Saving a Figure
We can also save a figure after we've plotted it with `plt.savefig(figure_name)`.
```
## We'll make a simple figure
## then save it
plt.figure(figsize=(8,8))
plt.plot([1,2,3,4], [1,2,3,4], 'k--')
## all you'll need is the figure name
## the default is to save the image as a png file
plt.savefig("my_first_matplotlib_plot.png")
plt.show()
```
If you check your repository you should now see `my_first_matplotlib_plot.png`. Open it up to admire its beauty.
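If you want more control over the saved image, `savefig` also accepts a couple of commonly used keyword arguments:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 4))
plt.plot([1, 2, 3, 4], [1, 4, 9, 16], 'k--')

# dpi sets the resolution of the saved file, and
# bbox_inches="tight" trims excess whitespace around the figure
plt.savefig("my_hires_plot.png", dpi=200, bbox_inches="tight")
```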
That's really all we'll need to know for making plots in the boot camp. Of course we've come nowhere close to understanding the totality of `matplotlib`, so if you're interested check out the documentation, <a href="https://matplotlib.org/">https://matplotlib.org/</a>.
## `seaborn`
`seaborn` is a pretty user friendly package that can make nice plots quickly, however, we won't explore it much in this notebook. But we will introduce a useful function that allows you to give your plot gridlines for easier reading.
For those interested in seeing fun `seaborn` plots check out this link, <a href="https://seaborn.pydata.org/examples/index.html">https://seaborn.pydata.org/examples/index.html</a>.
```
## Let's recall this plot from before
x = 10*np.linspace(-5,5,100)
y = x**2 - 3
plt.figure(figsize = (10,12))
plt.plot(x,y,'mp', label="points")
plt.plot(x+10,y-100,'g--', label="shifted line")
plt.xlabel("$x$", fontsize = 16)
plt.ylabel("$y$", fontsize = 16)
plt.title("A Plot Title", fontsize = 20)
plt.legend(fontsize=14)
plt.show()
```
Now we can use `seaborn` to add gridlines to the figure, which will allow for easier reading of plots like the one above.
```
## Run this code
sns.set_style("whitegrid")
## Now rerun the plot
x = 10*np.linspace(-5,5,100)
y = x**2 - 3
plt.figure(figsize = (10,12))
plt.plot(x,y,'mp', label="points")
plt.plot(x+10,y-100,'g--', label="shifted line")
plt.xlabel("$x$", fontsize = 16)
plt.ylabel("$y$", fontsize = 16)
plt.title("A Plot Title", fontsize = 20)
plt.legend(fontsize=14)
plt.show()
```
See the difference?
```
## You code
## see what this does to your plots
sns.set_style("darkgrid")
## Now rerun the plot
x = 10*np.linspace(-5,5,100)
y = x**2 - 3
plt.figure(figsize = (10,12))
plt.plot(x,y,'mp', label="points")
plt.plot(x+10,y-100,'g--', label="shifted line")
plt.xlabel("$x$", fontsize = 16)
plt.ylabel("$y$", fontsize = 16)
plt.title("A Plot Title", fontsize = 20)
plt.legend(fontsize=14)
plt.show()
```
## That's it!
That's all for this notebook. You now have a firm grasp of the basics of plotting figures with `matplotlib`. With a little practice you'll be a `matplotlib` pro in no time.
This notebook was written for the Erdős Institute Cőde Data Science Boot Camp by Matthew Osborne, Ph. D., 2021.
Redistribution of the material contained in this repository is conditional on acknowledgement of Matthew Tyler Osborne, Ph.D.'s original authorship and sponsorship of the Erdős Institute as subject to the license (see License.md)
(docs-contribute)=
# Contributing to the Ray Documentation
There are many ways to contribute to the Ray documentation, and we're always looking for new contributors.
Even if you just want to fix a typo or expand on a section, please feel free to do so!
This document walks you through everything you need to do to get started.
## Building the Ray documentation
If you want to contribute to the Ray documentation, you'll need a way to build it.
You don't have to build Ray itself, which is a bit more involved.
Just clone the Ray repository and change into the `ray/doc` directory.
```shell
git clone git@github.com:ray-project/ray.git
cd ray/doc
```
To install the documentation dependencies, run the following command:
```shell
pip install -r requirements-doc.txt
```
Additionally, it's best if you install the dependencies for our linters with
```shell
pip install -r ../python/requirements_linters.txt
```
so that you can make sure your changes comply with our style guide.
Building the documentation is done by running the following command:
```shell
make html
```
which will build the documentation into the `_build` directory.
After the build finishes, you can simply open the `_build/html/index.html` file in your browser.
It's considered good practice to check the output of your build to make sure everything is working as expected.
Before committing any changes, make sure you run the
[linter](https://docs.ray.io/en/latest/ray-contribute/getting-involved.html#lint-and-formatting)
with `../scripts/format.sh` from the `doc` folder,
to make sure your changes are formatted correctly.
## The basics of our build system
The Ray documentation is built using the [`sphinx`](https://www.sphinx-doc.org/) build system.
We're using the [Sphinx Book Theme](https://github.com/executablebooks/sphinx-book-theme) from the
[executable books project](https://github.com/executablebooks).
That means that you can write Ray documentation in either Sphinx's native
[reStructuredText (rST)](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html) or in
[Markedly Structured Text (MyST)](https://myst-parser.readthedocs.io/en/latest/).
The two formats can be converted to each other, so the choice is up to you.
Having said that, it's important to know that MyST is
[common markdown compliant](https://myst-parser.readthedocs.io/en/latest/syntax/reference.html#commonmark-block-tokens).
If you intend to add a new document, we recommend starting from an `.md` file.
The Ray documentation also fully supports executable formats like [Jupyter Notebooks](https://jupyter.org/).
Many of our examples are notebooks with [MyST markdown cells](https://myst-nb.readthedocs.io/en/latest/index.html).
In fact, this very document you're reading _is_ a notebook.
You can check this for yourself by either downloading the `.ipynb` file,
or directly launching this notebook into either Binder or Google Colab in the top navigation bar.
## What to contribute?
If you take Ray Tune as an example, you can see that our documentation is made up of several types of documentation,
all of which you can contribute to:
- [a project landing page](https://docs.ray.io/en/latest/tune/index.html),
- [a getting started guide](https://docs.ray.io/en/latest/tune/getting-started.html),
- [a key concepts page](https://docs.ray.io/en/latest/tune/key-concepts.html),
- [user guides for key features](https://docs.ray.io/en/latest/tune/tutorials/overview.html),
- [practical examples](https://docs.ray.io/en/latest/tune/examples/index.html),
- [a detailed FAQ](https://docs.ray.io/en/latest/tune/faq.html),
- [and API references](https://docs.ray.io/en/latest/tune/api_docs/overview.html).
This structure is reflected in the
[Ray documentation source code](https://github.com/ray-project/ray/tree/master/doc/source/tune) as well, so you
should have no problem finding what you're looking for.
All other Ray projects share a similar structure, but depending on the project there might be minor differences.
Each type of documentation listed above has its own purpose, but at the end our documentation
comes down to _two types_ of documents:
- Markup documents, written in MyST or rST. If you don't have a lot of (executable) code to contribute or
use more complex features such as
[tabbed content blocks](https://docs.ray.io/en/latest/ray-core/walkthrough.html#starting-ray), this is the right
choice. Most of the documents in Ray Tune are written in this way, for instance the
[key concepts](https://github.com/ray-project/ray/blob/master/doc/source/tune/key-concepts.rst) or
[API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/overview.rst).
- Notebooks, written in `.ipynb` format. All Tune examples are written as notebooks. These notebooks render in
the browser like `.md` or `.rst` files, but have the added benefit of adding launch buttons to the top of the
document, so that users can run the code themselves in either Binder or Google Colab. A good first example to look
at is [this Tune example](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/tune-serve-integration-mnist.ipynb).
## Fixing typos and improving explanations
If you spot a typo in any document, or think that an explanation is not clear enough, please consider
opening a pull request.
In this scenario, just run the linter as described above and submit your pull request.
## Adding API references
We use [Sphinx's autodoc extension](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) to generate
our API documentation from our source code.
In case we're missing a reference to a function or class, please consider adding it to the respective document in question.
For example, here's how you can add a function or class reference using `autofunction` and `autoclass`:
```markdown
.. autofunction:: ray.tune.integration.docker.DockerSyncer
.. autoclass:: ray.tune.integration.keras.TuneReportCallback
```
The above snippet was taken from the
[Tune API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/integration.rst),
which you can look at for reference.
If you want to change the content of the API documentation, you will have to edit the respective function or class
signatures directly in the source code.
For example, in the above `autofunction` call, to change the API reference for `ray.tune.integration.docker.DockerSyncer`,
you would have to [change the following source file](https://github.com/ray-project/ray/blob/7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065/python/ray/tune/integration/docker.py#L15-L38).
## Adding code to an `.rST` or `.md` file
Modifying text in an existing documentation file is easy, but you need to be careful when it comes to adding code.
The reason is that we want to ensure every code snippet on our documentation is tested.
This requires us to have a process for including and testing code snippets in documents.
In an `.rST` or `.md` file, you can add code snippets using `literalinclude` from the Sphinx system.
For instance, here's an example from the Tune's "Key Concepts" documentation:
```markdown
.. literalinclude:: doc_code/key_concepts.py
:language: python
:start-after: __function_api_start__
:end-before: __function_api_end__
```
Note that the whole file doesn't contain a single literal code block; code _has to be_ imported using the `literalinclude` directive.
The code that gets added to the document by `literalinclude`, including `start-after` and `end-before` tags,
reads as follows:
```
# __function_api_start__
from ray import tune


def objective(x, a, b):  # Define an objective function.
    return a * (x ** 0.5) + b


def trainable(config):  # Pass a "config" dictionary into your trainable.
    for x in range(20):  # "Train" for 20 iterations and compute intermediate scores.
        score = objective(x, config["a"], config["b"])
        tune.report(score=score)  # Send the score to Tune.
# __function_api_end__
```
This code is imported by `literalinclude` from a file called `doc_code/key_concepts.py`.
Every Python file in the `doc_code` directory will automatically get tested by our CI system,
but make sure to run scripts that you change (or new scripts) locally first.
You do not need to run the testing framework locally.
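The tag-based inclusion that `literalinclude` performs is easy to picture in plain Python. The following sketch (the helper name `extract_snippet` is invented for illustration, not part of the Sphinx API) pulls out the lines strictly between a start and an end marker, which is essentially what `:start-after:` and `:end-before:` do:

```python
def extract_snippet(source: str, start_tag: str, end_tag: str) -> str:
    """Return the lines strictly between the first start_tag and end_tag lines."""
    lines = source.splitlines()
    start = next(i for i, line in enumerate(lines) if start_tag in line)
    end = next(i for i, line in enumerate(lines) if end_tag in line)
    return "\n".join(lines[start + 1:end])


doc_code = """\
# __function_api_start__
def objective(x, a, b):
    return a * (x ** 0.5) + b
# __function_api_end__
"""
print(extract_snippet(doc_code, "__function_api_start__", "__function_api_end__"))
```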
In rare situations, when you're adding _obvious_ pseudo-code to demonstrate a concept, it is OK to add it
literally to your `.rST` or `.md` file, e.g. using a `.. code-block:: python` directive.
But if your code is supposed to run, it needs to be tested.
## Creating a new document from scratch
Sometimes you might want to add a completely new document to the Ray documentation, like adding a new
user guide or a new example.
For this to work, you need to make sure to add the new document explicitly to the
[`_toc.yml` file](https://github.com/ray-project/ray/blob/master/doc/source/_toc.yml) that determines
the structure of the Ray documentation.
Depending on the type of document you're adding, you might also have to make changes to an existing overview
page that curates the list of documents in question.
For instance, for Ray Tune each user guide is added to the
[user guide overview page](https://docs.ray.io/en/latest/tune/tutorials/overview.html) as a panel, and the same
goes for [all Tune examples](https://docs.ray.io/en/latest/tune/examples/index.html).
Always check the structure of the Ray sub-project whose documentation you're working on to see how to integrate
it within the existing structure.
In some cases you may be required to choose an image for the panel. Images are located in
`doc/source/images`.
## Creating a notebook example
To add a new executable example to the Ray documentation, you can start from our
[MyST notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.md) or
[Jupyter notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.ipynb).
You could also simply download the document you're reading right now (click on the respective download button at the
top of this page to get the `.ipynb` file) and start modifying it.
All the example notebooks in Ray Tune get automatically tested by our CI system, provided you place them in the
[`examples` folder](https://github.com/ray-project/ray/tree/master/doc/source/tune/examples).
If you have questions about how to test your notebook when contributing to other Ray sub-projects, please make
sure to ask a question in [the Ray community Slack](https://forms.gle/9TSdDYUgxYs8SA9e8) or directly on GitHub,
when opening your pull request.
To work off of an existing example, you could also have a look at the
[Ray Tune Hyperopt example (`.ipynb`)](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/hyperopt_example.ipynb)
or the [Ray Serve guide for RLlib (`.md`)](https://github.com/ray-project/ray/blob/master/doc/source/serve/tutorials/rllib.md).
We recommend that you start with an `.md` file and convert your file to an `.ipynb` notebook at the end of the process.
We'll walk you through this process below.
What makes these notebooks different from other documents is that they combine code and text in one document,
and can be launched in the browser.
We also make sure they are tested by our CI system, before we add them to our documentation.
To make this work, notebooks need to define a _kernel specification_ to tell a notebook server how to interpret
and run the code.
For instance, here's the kernel specification of a Python notebook:
```markdown
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
kernelspec:
  display_name: Python 3
  language: python
  name: python3
```
If you write a notebook in `.md` format, you need this YAML front matter at the top of the file.
To add code to your notebook, you can use the `code-cell` directive.
Here's an example:
````markdown
```{code-cell} python3
:tags: [hide-cell]
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve


def train_ppo_model():
    trainer = ppo.PPOTrainer(
        config={"framework": "torch", "num_workers": 0},
        env="CartPole-v0",
    )
    # Train for one iteration
    trainer.train()
    trainer.save("/tmp/rllib_checkpoint")
    return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"


checkpoint_path = train_ppo_model()
```
````
Putting this markdown block into your document will render as follows in the browser:
```
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve


def train_ppo_model():
    trainer = ppo.PPOTrainer(
        config={"framework": "torch", "num_workers": 0},
        env="CartPole-v0",
    )
    # Train for one iteration
    trainer.train()
    trainer.save("/tmp/rllib_checkpoint")
    return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"


checkpoint_path = train_ppo_model()
```
As you can see, the code block is hidden, but you can expand it by clicking the "+" button.
### Tags for your notebook
What makes this work is the `:tags: [hide-cell]` directive in the `code-cell`.
The reason we suggest starting with `.md` files is that it's much easier to add tags to them, as you've just seen.
You can also add tags to `.ipynb` files, but you'll need to start a notebook server for that first, which you may
not want to do just to contribute a piece of documentation.
Apart from `hide-cell`, you also have `hide-input` and `hide-output` tags that hide the input and output of a cell.
Also, if you need code that gets executed in the notebook, but you don't want to show it in the documentation,
you can use the `remove-cell`, `remove-input`, and `remove-output` tags in the same way.
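For instance, a setup cell that must run during testing but shouldn't clutter the rendered page could be tagged like this (a hypothetical snippet, following the same MyST syntax as the example above):

````markdown
```{code-cell} python3
:tags: [remove-output]

noisy_setup_call()  # executes when the notebook is run, but its output is not rendered
```
````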
### Testing notebooks
Removing cells can be particularly interesting for compute-intensive notebooks.
We want you to contribute notebooks that use _realistic_ values, not just toy examples.
At the same time we want our notebooks to be tested by our CI system, and running them should not take too long.
What you can do to address this is to have notebook cells with the parameters you want the users to see first:
````markdown
```{code-cell} python3
num_workers = 8
num_gpus = 2
```
````
which will render as follows in the browser:
```
num_workers = 8
num_gpus = 2
```
But then in your notebook you follow that up with a _removed_ cell that won't get rendered, but has much smaller
values and makes the notebook run faster:
````markdown
```{code-cell} python3
:tags: [remove-cell]
num_workers = 0
num_gpus = 0
```
````
### Converting markdown notebooks to ipynb
Once you're finished writing your example, you can convert it to an `.ipynb` notebook using `jupytext`:
```shell
jupytext your-example.md --to ipynb
```
In the same way, you can convert `.ipynb` notebooks to `.md` notebooks with `--to myst`.
And if you want to convert your notebook to a Python file, e.g. to test if your whole script runs without errors,
you can use `--to py` instead.
## Where to go from here?
There are many other ways to contribute to Ray other than documentation.
See {ref}`our contributor guide <getting-involved>` for more information.
```
import pandas as pd
from sqlalchemy import create_engine
# Store CSV into a DF
csv_file = "./Resources/MoviesOnStreamingPlatforms_updated.csv"
streaming_df = pd.read_csv(csv_file)
streaming_df
# Store CSV into a DF
csv_file2 = "./Resources/rotten_tomatoes_movies.csv"
tomato_df = pd.read_csv(csv_file2)
tomato_df
tomato_df.columns
streaming_df.columns
stream_df = streaming_df.rename(columns={
    "ID": "id", "Title": "title", "Year": "year", "Age": "age", "IMDb": "imdb",
    "Rotten Tomatoes": "rotten_tomatoes", "Netflix": "netflix", "Hulu": "hulu",
    "Prime Video": "prime_video", "Disney+": "disney", "Type": "type",
    "Directors": "directors", "Genres": "genres", "Country": "country",
    "Language": "language", "Runtime": "runtime"})
stream_df.columns
new_streaming_df = stream_df[["id", "title", "year", "age", "imdb", "rotten_tomatoes",
                              "netflix", "hulu", "prime_video", "disney", "type", "directors",
                              "genres", "country", "language", "runtime"]].copy()
new_streaming_df
```
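The rename-and-select pattern above can be checked on a miniature frame (the column names here are invented for illustration, not the full streaming dataset):

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2], "Title": ["A", "B"], "Prime Video": [1, 0]})
renamed = df.rename(columns={"ID": "id", "Title": "title", "Prime Video": "prime_video"})
print(list(renamed.columns))  # → ['id', 'title', 'prime_video']
```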
#### Create a schema for where data will be loaded this is the SQL PART
```sql
CREATE TABLE streaming (
ID INT PRIMARY KEY,
Title TEXT,
Year INT,
Age VARCHAR,
IMDb DECIMAL,
Rotten_Tomatoes VARCHAR,
Netflix INT,
Hulu INT,
Prime_Video INT,
Disney INT,
Type INT,
Directors TEXT,
Genres TEXT,
Country TEXT,
Language TEXT,
Runtime DECIMAL
);
CREATE TABLE tomato (
rotten_tomatoes_link TEXT,
movie_title TEXT,
movie_info TEXT,
critics_consensus TEXT,
content_rating TEXT,
genres TEXT,
directors TEXT,
authors TEXT,
actors TEXT,
original_release_date TEXT,
streaming_release_date TEXT,
runtime INT,
production_company TEXT,
tomatometer_status TEXT,
tomatometer_rating DECIMAL,
tomatometer_count DECIMAL,
audience_status TEXT,
audience_rating DECIMAL,
audience_count DECIMAL,
tomatometer_top_critics_count INT,
tomatometer_fresh_critics_count INT,
tomatometer_rotten_critics_count INT);
SELECT * FROM streaming;
SELECT * FROM tomato;
```
```
# Connect to the db
connection_string = "postgres:{insertownpassword}@localhost:5432/ETL"
engine = create_engine(f'postgresql://{connection_string}')
engine.table_names()
# use pandas to load csv converted to DF into database
new_streaming_df.to_sql(name="streaming", con=engine, if_exists='append', index=False)
# Use pandas to load the second CSV, converted to a DF, into the database
tomato_df.to_sql(name='tomato', con=engine, if_exists='append', index=False)
# Confirm the data is in the streaming and tomato tables
pd.read_sql_query('select * from streaming', con=engine)
pd.read_sql_query('select * from tomato', con=engine)
```
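The load-and-verify pattern at the end can be exercised without a running Postgres server by pointing pandas at an in-memory SQLite database instead (a sketch under that assumption, not the notebook's actual connection):

```python
import sqlite3

import pandas as pd

# In-memory stand-in for the Postgres engine used in the notebook.
con = sqlite3.connect(":memory:")
df = pd.DataFrame({"id": [1, 2], "title": ["A", "B"]})
df.to_sql(name="streaming", con=con, if_exists="append", index=False)

# Read it back to confirm the load, as the notebook does with read_sql_query.
out = pd.read_sql_query("SELECT * FROM streaming", con=con)
print(len(out))  # → 2
```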
```
import os
import pickle
import re
import sys
sys.path.append(os.path.abspath('') + '/../../..')
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.ticker import MultipleLocator
import matplotlib.font_manager as font_manager
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
font = {'family': 'sans-serif',
        'weight': 'normal',
        'size': 9}
matplotlib.rc('font', **font)
plt.rc('axes', labelsize=9)
plt.rc('axes', titlesize=9)
from utils.functions.control import get_task_name
def plot(list):
    fig, axs = plt.subplots(2, 4, dpi=300)
    for i in range(len(list)):
        task, nb_gens, ylims, yticks, sb3_dict = list[i]
        axs[i%2][i//2].set_title(
            get_task_name(task)[:-3], fontdict={'fontweight': 'bold'})
        gens = [1] + np.arange(nb_gens//100, nb_gens+1, nb_gens//100).tolist()
        path = '../../../../../Videos/envs.multistep.imitate.control/'
        path += 'merge.no~steps.0~task.' + task + '~transfer.no/'
        path += 'bots.network.static.rnn.control/64/'
        scores = {}
        scores[64] = np.zeros(len(gens)) * np.nan
        scores['64 (elite)'] = np.zeros(len(gens)) * np.nan
        for gen in gens:
            scores[64][gens.index(gen)] = \
                np.load(path + str(gen) + '/scores.npy').mean()
            scores['64 (elite)'][gens.index(gen)] = \
                np.load(path + str(gen) + '/scores.npy').mean(axis=1).max()
        if i in [1, 3, 5, 7]:
            axs[i%2][i//2].set_xlabel('# Generations')
        if i in [0, 1]:
            axs[i%2][i//2].set_ylabel("Mean Score")
        if nb_gens >= 1000:
            xticks = 1000
        else:
            xticks = 100
        axs[i%2][i//2].set_xticks(np.arange(0, nb_gens+1, step=xticks))
        axs[i%2][i//2].set_yticks(
            np.arange(yticks[0], yticks[1]+1, step=yticks[2]))
        axs[i%2][i//2].set_xlim(
            [-nb_gens//50+(xticks>=1000), nb_gens+nb_gens//50-(xticks>=1000)])
        axs[i%2][i//2].set_ylim([ylims[0], ylims[1]])
        sorted_sb3_dict = {k: v for k, v in sorted(sb3_dict.items(),
                                                   key=lambda item: item[1])}
        sb3 = []
        for key in sorted_sb3_dict:
            if 'dqn' in key:
                sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],
                    -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',
                    colors='red', label=key))
            elif 'ppo' in key:
                sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],
                    -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',
                    colors='peru', label=key))
            elif 'sac' in key:
                sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],
                    -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',
                    colors='darkviolet', label=key))
            elif 'td3' in key:
                sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],
                    -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',
                    colors='pink', label=key))
            elif 'tqc' in key:
                sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],
                    -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',
                    colors='seagreen', label=key))
        sb3.reverse()
        ne = []
        ne.append(axs[i%2][i//2].plot(
            gens, scores[64], '.-', c='darkgrey', label='64')[0])
        ne.append(axs[i%2][i//2].plot(
            gens, scores['64 (elite)'], '.-', c='royalblue',
            label='64 (elite)')[0])
        ne.reverse()
        if i == 6:
            leg1 = axs[i%2][i//2].legend(
                handles=ne, title="Population size", loc='lower right',
                edgecolor='palegoldenrod', labelspacing=0.2)
            leg1.get_frame().set_alpha(None)
            leg1.get_frame().set_facecolor((1, 1, 1, .45))
            axs[i%2][i//2].add_artist(leg1)
        if i == 1:
            font = font_manager.FontProperties(family='monospace')
            leg2 = axs[i%2][i//2].legend(handles=sb3, loc='lower right',
                edgecolor='palegoldenrod', labelspacing=0, prop=font)
            leg2.get_frame().set_alpha(None)
            leg2.get_frame().set_facecolor((1, 1, 1, .45))
        axs[i%2][i//2].xaxis.set_minor_locator(MultipleLocator(100))
    fig.tight_layout(pad=0.5)

plt.rcParams["figure.figsize"] = [9, 4]
plot([['acrobot', 300, [-510, -60], [-500, -50, 100], {'dqn': -80.4}],
      ['cart_pole', 300, [-5, 512], [0, 500, 100], {'ppo': 500.0, 'dqn': -580.4, 'sac': -680.4, 'tqc': -780.4, 'td3': -880.4}],
      ['mountain_car', 1000, [-203, -98], [-200, -100, 25], {'dqn': -100.8}],
      ['mountain_car_continuous', 200, [-105, 105], [-100, 100, 50], {'sac': 94.6}],
      ['pendulum', 3200, [-1505, -120], [-1500, -150, 450], {'tqc': -150.6}],
      ['lunar_lander', 1000, [-1180, 200], [-1100, 150, 250], {'ppo': 142.7}],
      ['lunar_lander_continuous', 2500, [-670, 290], [-650, 250, 300], {'sac': 269.7}],
      ['swimmer', 600, [-35, 375], [0, 361, 120], {'td3': 358.3}]])
plt.savefig("../../../data/states/envs.multistep.imitate.control/extra/figures/results1.pdf")
def plot(list):
    fig, axs = plt.subplots(2, 4, dpi=300)
    for i in range(len(list)):
        task, nb_gens, ylims, yticks, sb3_agent = list[i]
        ### Set plot & variables
        axs[i%2][i//2].set_title(
            get_task_name(task)[:-3], fontdict={'fontweight': 'bold'})
        if i in [1, 3, 5, 7]:
            axs[i%2][i//2].set_xlabel('# Timesteps')
        if i in [0, 1]:
            axs[i%2][i//2].set_ylabel("Score")
        if 'dqn' in sb3_agent:
            sb3_color = 'red'
        elif 'ppo' in sb3_agent:
            sb3_color = 'peru'
        elif 'sac' in sb3_agent:
            sb3_color = 'darkviolet'
        elif 'td3' in sb3_agent:
            sb3_color = 'pink'
        else:  # 'tqc' in sb3_agent
            sb3_color = 'seagreen'
        gens = [1, nb_gens//4, nb_gens//2, nb_gens//2+nb_gens//4, nb_gens]
        ### Load data
        # SB3
        path_0 = '../../../data/states/envs.multistep.imitate.control/extra/'
        path_0 += 'sb3_agent_rewards/' + task + '/rewards.pkl'
        with open(path_0, 'rb') as f:
            sb3_rewards_list = pickle.load(f)
        # NE
        path_1 = '../../../../../Videos/envs.multistep.imitate.control/'
        path_1 += 'merge.no~steps.0~task.' + task + '~transfer.no/'
        path_1 += 'bots.network.static.rnn.control/64/'
        ne_rewards_list_5_timepoints = []
        for gen in gens:
            with open(path_1 + str(gen) + '/rewards.pkl', 'rb') as f:
                ne_rewards_list_5_timepoints.append(pickle.load(f))
        ### Calculate run lengths
        run_lengths = np.zeros(6, dtype=np.int32)
        # SB3
        run_lengths[0] = len(sb3_rewards_list[0])
        # NE
        for k in range(5):
            run_lengths[k+1] = len(ne_rewards_list_5_timepoints[k][0])
        ### Fill Rewards Numpy Array
        rewards = np.zeros((6, run_lengths.max())) * np.nan
        # SB3
        rewards[0][:run_lengths[0]] = sb3_rewards_list[0]
        # NE
        for k in range(5):
            rewards[k+1][:run_lengths[k+1]] = \
                ne_rewards_list_5_timepoints[k][0]
        ### Calculate Cumulative Sums
        cum_sums = rewards.cumsum(axis=1)
        if run_lengths.max() >= 999:
            xticks = 1000
        else:
            xticks = 100
        axs[i%2][i//2].set_xticks(
            np.arange(0, run_lengths.max()+2, step=xticks))
        axs[i%2][i//2].set_yticks(
            np.arange(yticks[0], yticks[1]+1, step=yticks[2]))
        axs[i%2][i//2].set_xlim(
            [-run_lengths.max()//50, run_lengths.max()+run_lengths.max()//50])
        axs[i%2][i//2].set_ylim([ylims[0], ylims[1]])
        if task == 'cart_pole':
            cum_sums[1] -= 25
            cum_sums[2] -= 20
            cum_sums[3] -= 15
            cum_sums[4] -= 10
            cum_sums[5] -= 5
        if task == 'acrobot':
            cum_sums[1] -= 10
            cum_sums[2] -= 8
            cum_sums[3] -= 6
            cum_sums[4] -= 4
            cum_sums[5] -= 2
        if task == 'mountain_car':
            cum_sums[1] -= 5
            cum_sums[2] -= 4
            cum_sums[3] -= 3
            cum_sums[4] -= 2
            cum_sums[5] -= 1
        ne = []
        ne.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[1],
            '-', c='gainsboro', label='  0%')[0])
        ne.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[2],
            '-', c='silver', label=' 25%')[0])
        ne.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[3],
            '-', c='darkgrey', label=' 50%')[0])
        ne.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[4],
            '-', c='grey', label=' 75%')[0])
        ne.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[5],
            '-', c='black', label='100%')[0])
        sb3 = []
        sb3.append(axs[i%2][i//2].plot(
            np.arange(0, run_lengths.max()),
            cum_sums[0],
            '-', c=sb3_color, label=sb3_agent)[0])
        if task in ['cart_pole']:
            # Add dummy lines so the legend lists all SB3 baselines.
            sb3.append(axs[i%2][i//2].plot(
                np.arange(0, 1),
                np.arange(0, 1),
                '-', c='red', label='dqn')[0])
            sb3.append(axs[i%2][i//2].plot(
                np.arange(0, 1),
                np.arange(0, 1),
                '-', c='darkviolet', label='sac')[0])
            sb3.append(axs[i%2][i//2].plot(
                np.arange(0, 1),
                np.arange(0, 1),
                '-', c='seagreen', label='tqc')[0])
            sb3.append(axs[i%2][i//2].plot(
                np.arange(0, 1),
                np.arange(0, 1),
                '-', c='pink', label='td3')[0])
        leg1 = axs[i%2][i//2].legend(handles=sb3, loc='lower right',
            edgecolor='palegoldenrod', labelspacing=0)
        leg1.get_frame().set_alpha(None)
        leg1.get_frame().set_facecolor((1, 1, 1, .45))
        axs[i%2][i//2].add_artist(leg1)
        if task == 'lunar_lander_continuous':
            leg2 = axs[i%2][i//2].legend(handles=ne, title='Generations',
                loc='lower right', edgecolor='palegoldenrod', labelspacing=0.1)
            leg2.get_frame().set_alpha(None)
            leg2.get_frame().set_facecolor((1, 1, 1, .45))
            axs[i%2][i//2].add_artist(leg2)
        axs[i%2][i//2].xaxis.set_minor_locator(MultipleLocator(100))
    fig.tight_layout(pad=0.5)

plt.rcParams["figure.figsize"] = [9, 4]
plot([['acrobot', 300, [-520, 10], [-500, 50, 100], 'dqn'],
      ['cart_pole', 300, [-30, 510], [0, 500, 100], 'ppo'],
      ['mountain_car', 1000, [-210, 5], [-200, 0, 50], 'dqn'],
      ['mountain_car_continuous', 200, [-105, 105], [-100, 100, 50], 'sac'],
      ['pendulum', 3200, [-1530, 30], [-1500, 0, 500], 'tqc'],
      ['lunar_lander', 1000, [-110, 210], [-100, 200, 100], 'ppo'],
      ['lunar_lander_continuous', 2500, [-810, 290], [-800, 250, 350], 'sac'],
      ['swimmer', 600, [-10, 370], [0, 361, 120], 'td3']])
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
from sklearn.datasets import load_digits
digits = load_digits()
def sigmoid(x):
    return 1/(1 + np.exp(-x))

sigmoid_v = np.vectorize(sigmoid)

def sigmoidprime(x):
    return sigmoid(x) * (1 - sigmoid(x))
sigmoidprime_v = np.vectorize(sigmoidprime)
size = [64, 20, 10]
weights = []
for n in range(1, len(size)):
    weights.append(np.random.rand(size[n], size[n-1]) * 2 - 1)
biases = []
for n in range(1, len(size)):
    biases.append(np.random.rand(size[n]) * 2 - 1)
trainingdata = digits.data[0:1200]
traininganswers = digits.target[0:1200]
lc = 0.02
# convert the integer answers into a 10-dimension array
traininganswervectors = np.zeros((1796, 10))
for n in range(1796):
    traininganswervectors[n][digits.target[n]] = 1
# define the cost derivative up front, since the check below uses it
def costderivative(output, answers):
    return (output - answers)

def feedforward(weights, biases, a):
    b = []
    # first element is inputs "a"
    b.append(a)
    for n in range(1, len(size)):
        # all other elements depend on the number of neurons
        b.append(np.zeros(size[n]))
        for n2 in range(0, size[n]):
            b[n][n2] = sigmoid_v(np.dot(weights[n-1][n2], b[n-1]) + biases[n-1][n2])
    return b

opt = feedforward(weights, biases, trainingdata[0])
print(opt[-1])
print(traininganswervectors[0])
print(costderivative(opt[-1], traininganswervectors[0]))
def gradient_descent(weights, biases, inputs, answers, batchsize, lc, epochs):
    for n in range(epochs):
        # pick random locations for input/result data
        locations = np.random.randint(0, len(inputs), batchsize)
        minibatch = []
        # create tuples (inputs, result) based on random locations
        for n2 in range(batchsize):
            minibatch.append((inputs[locations[n2]], answers[locations[n2]]))
        for n3 in range(batchsize):
            weights, biases = train(weights, biases, minibatch, lc)
        results = []
        for n4 in range(len(trainingdata)):
            results.append(feedforward(weights, biases, inputs[n4])[-1])
        accresult = accuracy(inputs, results, answers)
        print("Epoch ", n, " : ", accresult)
    return weights, biases

def train(weights, biases, minibatch, lc):
    # set the nabla arrays to zero arrays of the same shape initially
    nb = [np.zeros(b.shape) for b in biases]
    nw = [np.zeros(w.shape) for w in weights]
    # largely taken from Michael Nielsen's implementation
    for i, r in minibatch:
        dnb, dnw = backprop(weights, biases, i, r)
        nb = [a+b for a, b in zip(nb, dnb)]
        nw = [a+b for a, b in zip(nw, dnw)]
    weights = [w-(lc/len(minibatch))*n_w for w, n_w in zip(weights, nw)]
    biases = [b-(lc/len(minibatch))*n_b for b, n_b in zip(biases, nb)]
    return weights, biases

def backprop(weights, biases, inputs, answers):
    # set the nabla arrays to be the same shape as the parameters
    nb = [np.zeros(b.shape) for b in biases]
    nw = [np.zeros(w.shape) for w in weights]
    a = inputs
    alist = [inputs]
    zlist = []
    for b, w in zip(biases, weights):
        z = np.dot(w, a) + b
        zlist.append(z)
        a = sigmoid_v(z)
        alist.append(a)
    delta = costderivative(alist[-1], answers) * sigmoidprime_v(zlist[-1])
    nb[-1] = delta
    # different from MN, alist[-2] not same size as delta?
    nw[-1] = np.dot(delta, alist[-2].transpose())
    for n in range(2, len(size)):
        delta = np.dot(weights[-n+1].transpose(), delta) * sigmoidprime_v(zlist[-n])
        nb[-n] = delta
        # same here
        nw[-n] = np.dot(delta, alist[-n-1].transpose())
    return nb, nw

def costderivative(output, answers):
    return (output - answers)

def accuracy(inputs, results, answers):
    correct = 0
    binresults = results
    for n in range(0, len(results)):
        # converts the output into a binary y/n for each digit;
        # record the maximum first, since binresults aliases results
        best = np.amax(results[n])
        for n2 in range(len(results[n])):
            if results[n][n2] == best:
                binresults[n][n2] = 1
            else:
                binresults[n][n2] = 0
        if np.array_equal(answers[n], binresults[n]):
            correct += 1
    return correct / len(results)
size = [64, 20, 10]
weights = []
for n in range(1, len(size)):
    weights.append(np.random.rand(size[n], size[n-1]) * 2 - 1)
biases = []
for n in range(1, len(size)):
    biases.append(np.random.rand(size[n]) * 2 - 1)
trainingdata = digits.data[0:500]
traininganswers = digits.target[0:500]
traininganswervectors = np.zeros((500, 10))
for n in range(500):
    traininganswervectors[n][digits.target[n]] = 1
final_weights, final_biases = gradient_descent(weights, biases, trainingdata,
                                               traininganswervectors, 5, 1, 100)
print(final_weights)
```
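As a sanity check on the backpropagation step above, the delta rule `(a - y) * a * (1 - a)` can be compared against a numerical gradient on a one-weight toy problem. This sketch is independent of the notebook's variables (the function names here are invented for the check):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Analytic gradient of C = 0.5 * (sigmoid(w*x) - y)^2 with respect to w,
# using the same delta rule as the notebook: delta = (a - y) * a * (1 - a).
def analytic_grad(w, x, y):
    a = sigmoid(w * x)
    return (a - y) * a * (1 - a) * x

# Central-difference approximation of the same derivative.
def numeric_grad(w, x, y, eps=1e-6):
    c = lambda w_: 0.5 * (sigmoid(w_ * x) - y) ** 2
    return (c(w + eps) - c(w - eps)) / (2 * eps)

print(abs(analytic_grad(0.7, 1.3, 0.2) - numeric_grad(0.7, 1.3, 0.2)) < 1e-6)
```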
<a href="https://colab.research.google.com/github/sullyvan15/Universidade-Vila-Velha/blob/master/Lista_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<?xml version="1.0" encoding="UTF-8"?>
<html>
<body>
<header></header>
<CENTER>
<img src="https://www.uvv.br/wp-content/themes/uvvBr/templates/assets//img/logouvv.svg" alt="UVV-LOGO" style="width:100px; height:100px">
</CENTER>
<CENTER><b>Programming Laboratory - PYTHON</b></CENTER><br/>
<CENTER><b>Exercise List 2 - Loop Structures</b></CENTER><br/>
<CENTER><b>Professor Alessandro Bertolani Oliveira</b></CENTER><br/>
## NAME: Sullyvan Marks Nascimento De Oliveira
# Loop structures: for / in range / while / break
## Exercise 1
Write an algorithm to display the multiples of 3 in the interval [3, 100].
```
cont = 0
for x in range(3, 101):
    if x % 3 == 0:
        cont = cont + 1
        print(f'Multiple {cont}: {x}')
```
## Exercise 2
Write an algorithm to display the multiples of 11, together with their sum and average, traversing the interval [200, 100] in descending (reverse) order.
```
soma = 0
quantidade = 0
for contador in range(200, 100, -1):
    if contador % 11 == 0:
        print(f'Multiple: {contador}')
        soma = soma + contador
        quantidade = quantidade + 1
media = soma / quantidade
print(f'sum: {soma}')
print(f'average: {media}')
```
## Exercise 4
Write an algorithm that displays the sum of the EVEN and ODD numbers in the interval [10, 99].
```
somapar = 0
somaimpar = 0
for contador in range(10, 100):
    if contador % 2 == 0:
        somapar = somapar + contador
    else:
        somaimpar = somaimpar + contador
print(f'Sum of the even numbers: {somapar}')
print(f'Sum of the odd numbers: {somaimpar}')
```
## Exercise 5
Write an algorithm that reads, for each of 10,000 inhabitants of a small town, whether they are employed or not, and displays the percentage of employed and unemployed inhabitants.
```
empregado = 0
desempregado = 0
ContHab = 1
habitante = 10000
for h in range(0, habitante):
    ler = int(input(f'Is inhabitant {ContHab} employed? (Enter 1 - Yes, 2 - No): '))
    if ler != 1 and ler != 2:
        print("Invalid input. Try again.")
    else:
        ContHab += 1
        if ler == 1:
            empregado += 1
        else:
            desempregado += 1
QuantEmpregado = empregado * 100 / habitante
QuantDesempregado = desempregado * 100 / habitante
print(f'Number employed: {empregado}, about {QuantEmpregado: .2f}% of all inhabitants')
print(f'Number unemployed: {desempregado}, about {QuantDesempregado: .2f}% of all inhabitants')
```
## Exercise 6
Write an algorithm that reads the salary in reais (R$) of 1000 customers of a shopping mall and displays, as percentages, the split of customers into types A, B and C, as follows:
✓ A: at least 15 minimum salaries;
✓ B: less than 15 and at least 5 minimum salaries;
✓ C: less than 5 minimum salaries.
Declare the minimum salary (SM: R$ 998.05).
```
salMinimo = 998.05
contclient = 0
a = 0
b = 0
c = 0
TotalCliente = int(input('Enter the number of customers to survey: '))
for x in range(0, TotalCliente):
    contclient += 1
    salario = float(input(f'Enter the salary of customer {contclient} in R$: '))
    if salario >= (salMinimo * 15):
        print('Type A customer')
        a += 1
    elif salario >= (salMinimo * 5):
        print('Type B customer')
        b += 1
    else:
        print('Type C customer')
        c += 1
claA = a * 100 / TotalCliente
claB = b * 100 / TotalCliente
claC = c * 100 / TotalCliente
print(f'Totals by customer type:')
print(f'A = {a}, about {claA: .1f}%')
print(f'B = {b}, about {claB: .1f}%')
print(f'C = {c}, about {claC: .1f}%')
```
## Exercise 7
Write an algorithm that counts and sums all odd numbers in the interval [9, 90] that are multiples of three and NOT multiples of 5. Display the count and the sum of these numbers.
```
soma = 0
posicao = 0
for contador in range(9, 91):
    if contador % 2 == 1 and contador % 3 == 0 and contador % 5 != 0:
        soma = soma + contador
        posicao = posicao + 1
        print(f'Sum #{posicao}: {soma}')
```
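The same count and sum can be cross-checked with a list comprehension (a compact alternative to the loop above, not part of the original exercise):

```python
nums = [n for n in range(9, 91) if n % 2 == 1 and n % 3 == 0 and n % 5 != 0]
print(len(nums), sum(nums))  # → 11 537
```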
## Exercise 9
Write an algorithm that reads several numbers 𝑁 (one at a time) in the interval [10, 90] which, when divided by 5, leave remainder 2. Display the sum of the numbers read, stopping the program when 𝑁 = 0.
```
soma = 0
for x in range(10, 90):
    ler = int(input('Enter a number whose remainder when divided by 5 is 2, or press 0 to stop: '))
    if ler % 5 == 2:
        print(f'Number {ler} accepted')
        soma = soma + ler
    elif ler > 0 and ler < 10:
        print('Input error, please choose values between 10 and 90')
    elif ler == 0:
        print('End of input')
        break
    else:
        print('The number does not leave remainder 2 when divided by 5')
print(f'Sum of the numbers read: {soma}')
```
## Exercise 10
Write an algorithm that computes the average of the multiples of 6 in the interval [6, 6𝑥], where 𝑥 is a single positive integer (𝑥 ≥ 1) read from the user.
```
numero = int(input('Enter the number: '))
contador = 0
soma = 0
for x in range(6, 6 * numero + 1):
    if x % 6 == 0:
        print(f'{x}')
        contador += 1
        soma = soma + x
print(f'Sum: {soma}')
media = soma / contador
print(f'Number of multiples: {contador}')
print(f'Average of the multiples: {media}')
```
## Exercise 15
Write an algorithm that reads several real numbers (one by one) and displays, as percentages, how many positive and how many negative numbers were read. Stop the program when the user enters ZERO.
```
contanegativo = 0
contageral = 0
contapositivo = 0
numPositivos = 0
numNegativos = 0
valor = 1
while valor != 0:
    valor = float(input('Enter a real number, or 0 to exit the program: '))
    if valor != 0:  # exclude the terminating 0 from the counts
        contageral += 1
        if valor > 0:
            contapositivo += 1
        else:
            contanegativo += 1
if contageral != 0:
    numPositivos = contapositivo / contageral * 100
    numNegativos = contanegativo / contageral * 100
    print(f'positive numbers: {numPositivos: .1f} %')
    print(f'negative numbers: {numNegativos: .1f} %')
else:
    print(f'No positive or negative numbers read.')
```
## Exercise 16
Write an algorithm that reads 300 positive numbers and displays the smallest and largest even and odd values.
```
i = 0
maiorpar = 0
maiorimpar = 0
menorpar = menorimpar = 9999999999
while i < 300:
    numero = float(input('Enter a number: '))
    if numero % 2 == 0:
        if numero > maiorpar:
            maiorpar = numero
        if numero < menorpar:
            menorpar = numero
    else:
        if numero > maiorimpar:
            maiorimpar = numero
        if numero < menorimpar:
            menorimpar = numero
    i = i + 1
print(f'Largest even: {maiorpar} and smallest even: {menorpar} \n'
      f'Largest odd: {maiorimpar} and smallest odd: {menorimpar}')
```
```
import urllib.request
import os
import geopandas as gpd
import rasterio
from rasterio.plot import show
import zipfile
import matplotlib.pyplot as plt
```
# GIS visualizations with geopandas
```
url = 'https://biogeo.ucdavis.edu/data/gadm3.6/shp/gadm36_COL_shp.zip'
dest = os.path.join('data', 'admin')
os.makedirs(dest, exist_ok=True)
urllib.request.urlretrieve(url, os.path.join(dest, 'gadm36_COL_shp.zip'))
with zipfile.ZipFile(os.path.join(dest, 'gadm36_COL_shp.zip'), 'r') as zip_ref:
zip_ref.extractall(dest)
gdf_adm0 = gpd.read_file(os.path.join(dest, 'gadm36_COL_0.shp'))
gdf_adm1 = gpd.read_file(os.path.join(dest, 'gadm36_COL_1.shp'))
gdf_adm1
gdf_adm0.plot()
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
gdf_adm0.plot(color='white', edgecolor='black', ax=ax)
gdf_adm1.plot(column='NAME_1', ax=ax, cmap='Set2',
legend=True,
legend_kwds={'loc': "upper right",
'bbox_to_anchor': (1.4, 1)})
url = 'https://download.geofabrik.de/south-america/colombia-latest-free.shp.zip'
dest = os.path.join('data', 'places')
os.makedirs(dest, exist_ok=True)
urllib.request.urlretrieve(url, os.path.join(dest, 'colombia-latest-free.shp.zip'))
with zipfile.ZipFile(os.path.join(dest, 'colombia-latest-free.shp.zip'), 'r') as zip_ref:
zip_ref.extractall(dest)
gdf_water = gpd.read_file(os.path.join(dest, 'gis_osm_water_a_free_1.shp'))
gdf_places = gpd.read_file(os.path.join(dest, 'gis_osm_places_free_1.shp'))
gdf_cities = gdf_places.loc[gdf_places['fclass']=='city'].copy()
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
gdf_adm0.plot(color='white', edgecolor='black', ax=ax)
gdf_adm1.plot(color='white', ax=ax)
gdf_water.plot(edgecolor='blue', ax=ax)
gdf_cities.plot(column='population', ax=ax, legend=True)
gdf_cities['size'] = gdf_cities['population'] / gdf_cities['population'].max() * 500
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
gdf_adm0.plot(color='white', edgecolor='black', ax=ax)
gdf_adm1.plot(color='white', edgecolor='gray', ax=ax)
gdf_water.plot(edgecolor='lightblue', ax=ax)
gdf_cities.plot(markersize='size', column='population',
cmap='viridis', edgecolor='white',
ax=ax, cax=cax, legend=True,
legend_kwds={'label': "Population by city"})
url = 'https://data.worldpop.org/GIS/Population_Density/Global_2000_2020_1km_UNadj/2020/COL/col_pd_2020_1km_UNadj.tif'
dest = os.path.join('data', 'pop')
os.makedirs(dest, exist_ok=True)
urllib.request.urlretrieve(url, os.path.join(dest, 'col_pd_2020_1km_UNadj.tif'))
with rasterio.open(os.path.join(dest, 'col_pd_2020_1km_UNadj.tif')) as src:
fig, ax = plt.subplots(figsize=(12, 12))
show(src, ax=ax, cmap='viridis_r')
gdf_adm1.boundary.plot(edgecolor='gray', linewidth=0.5, ax=ax)
```
```
%matplotlib inline
from matplotlib import pyplot as pp
import numpy as np
```
# Introduction
Let's assume that we are given the function $f(\mathbf{x}) : \mathbb{R}^M \rightarrow \mathbb{R}$. At each point $\mathbf{x}$ this function produces the value $y = f(\mathbf{x})$. Due to real-world circumstances, measurements are usually noisy, meaning that for a point $\mathbf{x}$, rather than measuring $y$, we obtain a slightly perturbed value $\hat{y}$ calculated as
\begin{equation}
\hat{y} = f(\mathbf{x}) + \mathcal{N}(0,\sigma^2).
\end{equation}
Here $\mathcal{N}(0,\sigma^2)$ is the normal distribution with mean $0$ and variance $\sigma^2$; this perturbation is referred to as Gaussian noise. We will refer to each pair $(\mathbf{x},\hat{y})$ as a data point. Now, given a set of data points $\{(\mathbf{x}_n,\hat{y}_n)\}_{n=1}^{N}$, our goal is to find a function $g(\mathbf{x}) : \mathbb{R}^M \rightarrow \mathbb{R}$ that approximates $f$ as closely as possible. In other words, we would like to find the function $g$ such that
\begin{equation}
\| g(\mathbf{x}) - f(\mathbf{x}) \|^2
\end{equation}
is minimized.
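A minimal sketch of this measurement model, assuming $f(x) = \sin x$ and a noise standard deviation of $0.2$ (the same choices made by the dataset class in the next cell):

```python
import numpy as np

rng = np.random.default_rng(42)

f = np.sin   # the underlying function
sigma = 0.2  # noise standard deviation

# Sample points and add Gaussian noise to each measurement
x = rng.uniform(-np.pi, np.pi, size=5)
y_hat = f(x) + rng.normal(0.0, sigma, size=x.size)

print(np.round(y_hat - f(x), 3))  # the realized noise values
```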
```
# This class contains the dataset
class dataset:
def _build_dataset( self ):
self._content = []
for i in range( self.N ):
x = np.random.uniform( self.I0, self.I1 )
y = self.func( x )
y_n = y + np.random.normal( 0, self.noise_std )
self._content.append( [ x,y,y_n ] )
self._content = np.array( self._content )
def _build_dataset_dense( self ):
x = np.arange( self.I0, self.I1, ( self.I1 - self.I0 )/(self.N_dense+1) )
y = self.func( x )
arr = np.array( [ x, y ] )
self._content_dense = arr.transpose()
def __init__( self ):
self.func = np.sin
self.N = 20
self.N_dense = 100
self.noise_std = 0.2
self.I0 = -1*np.pi
self.I1 = np.pi
self._build_dataset()
self._build_dataset_dense()
@property
def x( self ):
return self._content[:,0].ravel()
@property
def y( self ):
return self._content[:,1].ravel()
@property
def y_n( self ):
return self._content[:,2].ravel()
@property
def x_dense( self ):
return self._content_dense[:,0].ravel()
@property
def y_dense( self ):
return self._content_dense[:,1].ravel()
dset = dataset()
```
# Sinus Curve
Let $f(x) = \sin(x)$ for values in the interval $[-\pi,\pi]$. The goal of this section is to recover the red data points (the true values) given the blue, noisy data points. To achieve this goal we will look at two different ways of modeling the data.
```
pp.plot( dset.x_dense, dset.y_dense,'k-.', label='Actual Curve' )
pp.plot( dset.x, dset.y, 'r.', label='Actual Values')
pp.plot( dset.x, dset.y_n, 'b.', label='Noisy Values')
pp.legend()
pp.grid()
```
## Polynomial Curve-Fitting
We can assume that $g$ belongs to the class of polynomial functions of degree $D$. This gives $g$ the form
\begin{equation}
g(x) = \sum_{d=0}^{D} a_d x^d.
\end{equation}
To find $g$ we have to determine the values of $a_0, \dots, a_D$. For each $x_n$ we have the following equation
\begin{equation}
\sum_{d=0}^{D} a_d x_n^d = \hat{y}_n,
\end{equation}
which yields a system of linear equations from which the values of $a_0, \dots, a_D$ can be calculated. We write this system as $Xa=Y$, where the matrices $X \in \mathbb{R}^{(N,D+1)}$ and $Y\in \mathbb{R}^{(N,1)}$ are calculated as follows:
```
D = 5
X = np.zeros((dset.N,D+1))
Y = np.zeros((dset.N,1))
for i in range( dset.N ):
Y[i,0] = dset.y_n[i]
for d in range( D+1 ):
X[i,d] = dset.x[i] ** d
X = np.matrix( X )
Y = np.matrix( Y )
```
One solution to this system is given by the normal equations: $a = (X^TX)^{-1}X^{T}Y$.
```
a = np.linalg.inv( X.T * X ) * X.T * Y
a = np.array( a )
a = a.ravel()
print( a )
```
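As a cross-check, the same least-squares solution can be obtained with `np.linalg.lstsq`, which avoids explicitly forming $(X^TX)^{-1}$; a sketch on a small noiseless polynomial, where the coefficients are recovered exactly:

```python
import numpy as np

# Noiseless samples of y = 1 + 2x + 3x^2
x = np.linspace(-1.0, 1.0, 10)
y = 1.0 + 2.0 * x + 3.0 * x ** 2

D = 2
X = np.vander(x, D + 1, increasing=True)  # columns x^0, x^1, x^2

# Least-squares solve; numerically more stable than the explicit inverse
a, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(np.round(a, 6))  # [1. 2. 3.]
```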
Now, given $a$, we can calculate the value of $g(x)$ for every $x$. We will do so for the values of `x_dense`.
```
def g(x,D,a) :
o = 0.0
for d in range(D+1):
o += a[d] * x ** d
return o
y_pred = []
for p in dset.x_dense :
y_pred.append( g(p,D,a) )
y_pred = np.array( y_pred )
pp.plot( dset.x_dense,dset.y_dense,'k-.', label='Actual Curve' )
pp.plot( dset.x_dense, y_pred, 'r-.', label='Predicted Values')
pp.legend()
pp.grid()
```
### Questions
1. How does changing $N$ and $D$ change the shape of the predicted curve?
2. Can we do this for other functions, linear or non-linear?
3. How else can we obtain $a$?
4. How can we measure the error of this prediction?
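For question 4, one common choice (an assumption, since the notebook leaves the metric open) is the mean squared error between predicted and true values:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # one error of 1 over 3 points -> 1/3
```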
## Radial Basis Function Kernel Curve Fitting (RBF Kernel)
Similar to polynomial curve fitting, our goal here is to obtain the parameters of the predicted curve by solving a system of linear equations. Here, we will solve this problem by placing a basis function at each data point and formulating the predicted curve as a weighted sum of the basis functions. The radial basis function (RBF) kernel has the form
\begin{equation}
k(x,x') = \exp(- \frac{\|x-x'\|^2}{2\sigma^2}).
\end{equation}
Centered at a given point, the kernel has the following shape:
```
def rbf( x, x_base, sigma2 ):
return np.exp(-1* (( x-x_base )**2) / (2*sigma2) )
kernal_sigma2 = 0.5
x_base = 0
y_rbf = []
for x in dset.x_dense :
y_rbf.append( rbf(x, x_base, kernal_sigma2) )
pp.plot( dset.x_dense,y_rbf,'k-.', label='RBF Kernel' )
pp.legend()
pp.grid()
```
Placing this function at each data point and calculating a weighted sum will give us
\begin{equation}
g(x) = \sum_{n=1}^{N} a_n k(x,x_n).
\end{equation}
This sum again gives a system of linear equations, of the form $Ka=Y$, where $K \in \mathbb{R}^{(N,N)}$ and $Y \in \mathbb{R}^{(N,1)}$ are calculated as:
```
K = np.zeros((dset.N,dset.N))
Y = np.zeros((dset.N,1))
for i in range( dset.N ):
Y[i,0] = dset.y_n[i]
for j in range( dset.N ):
K[i,j] = rbf( dset.x[i], dset.x[j], kernal_sigma2 )
# Regularizer: adding the identity keeps the kernel matrix well conditioned
K = K + np.eye(dset.N)
K = np.matrix( K )
Y = np.matrix( Y )
```
Similarly, we solve this system via the normal equations: $a = (K^TK)^{-1}K^{T}Y$.
```
a = np.linalg.inv( K.T * K ) * K.T * Y
a = np.array( a )
a = a.ravel()
print( a )
```
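The kernel matrix and its regularized solve can also be written without loops via broadcasting; a sketch on synthetic sine data, mirroring the `rbf` definition and identity regularizer above:

```python
import numpy as np

def rbf(x, x_base, sigma2):
    return np.exp(-((x - x_base) ** 2) / (2 * sigma2))

rng = np.random.default_rng(0)
N, sigma2 = 20, 0.5
x = np.linspace(-np.pi, np.pi, N)
y = np.sin(x) + rng.normal(0.0, 0.2, N)

# Broadcasting builds the full (N, N) kernel matrix in one shot
K = rbf(x[:, None], x[None, :], sigma2)
K_reg = K + np.eye(N)  # identity regularizer, as in the cell above

# Solve the normal equations directly rather than forming an inverse
a = np.linalg.solve(K_reg.T @ K_reg, K_reg.T @ y)
print(a.shape)  # (20,)
```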
Now, given $a$, we can calculate the value of $g(x)$ for every $x$. We will do so for the values of `x_dense`.
```
def g(x,x_basis,a) :
o = 0.0
for d in range(dset.N):
o += a[d] * rbf(x,x_basis[d],kernal_sigma2)
return o
y_pred = []
for x in dset.x_dense :
y_pred.append( g(x,dset.x,a) )
y_pred = np.array( y_pred )
pp.plot( dset.x_dense,dset.y_dense,'k-.', label='Actual Curve' )
pp.plot( dset.x_dense, y_pred, 'r-.', label='Predicted Values')
pp.legend()
pp.grid()
```
### Questions
1. How does changing $\sigma^2$ change the shape of the predicted curve?
2. What other basis functions can we use?
3. How can we measure the error of this prediction?
```
import cobra
import copy
import mackinac
mackinac.modelseed.ms_client.url = 'http://p3.theseed.org/services/ProbModelSEED/'
mackinac.workspace.ws_client.url = 'http://p3.theseed.org/services/Workspace'
mackinac.genome.patric_url = 'https://www.patricbrc.org/api/'
# PATRIC user information
mackinac.get_token('mljenior')
```
### Generate models
```
# Barnesiella intestinihominis
genome_id = '742726.3'
template_id = '/chenry/public/modelsupport/templates/GramNegModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/barn_int.draft.json'
strain_id = 'Barnesiella intestinihominis YIT 11860'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
# Lactobacillus reuteri
genome_id = '863369.3'
template_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json'
strain_id = 'Lactobacillus reuteri mlc3'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
# Enterococcus hirae
genome_id = '768486.3'
template_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ent_hir.draft.json'
strain_id = 'Enterococcus hirae ATCC 9790'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
# Anaerostipes caccae
genome_id = '411490.6'
template_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ana_stip.draft.json'
strain_id = 'Anaerostipes caccae DSM 14662'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
# Staphylococcus warneri
genome_id = '596319.3'
template_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/staph_warn.draft.json'
strain_id = 'Staphylococcus warneri L37603'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
# Adlercreutzia equolifaciens
genome_id = '1384484.3'
template_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
media_id = '/chenry/public/modelsupport/media/Complete'
file_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/adl_equ.draft.json'
strain_id = 'Adlercreutzia equolifaciens DSM 19450'
mackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)
mackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)
mackinac.optimize_modelseed_model(genome_id)
model = mackinac.create_cobra_model_from_modelseed_model(genome_id)
model.id = strain_id
cobra.io.save_json_model(model, file_id)
```
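The six near-identical blocks above differ only in genome ID, template, output file, and strain name, so they can be collapsed into one loop. A sketch with the pipeline steps passed in as callables (in the notebook these would be the `mackinac` reconstruction/gapfill/optimize/export functions and `cobra.io.save_json_model`); the paths and IDs are the ones from the original cells:

```python
GRAM_NEG = '/chenry/public/modelsupport/templates/GramNegModelTemplate'
GRAM_POS = '/chenry/public/modelsupport/templates/GramPosModelTemplate'
MEDIA_ID = '/chenry/public/modelsupport/media/Complete'

STRAINS = [
    ('742726.3', GRAM_NEG, 'barn_int', 'Barnesiella intestinihominis YIT 11860'),
    ('863369.3', GRAM_POS, 'lact_reut', 'Lactobacillus reuteri mlc3'),
    ('768486.3', GRAM_POS, 'ent_hir', 'Enterococcus hirae ATCC 9790'),
    ('411490.6', GRAM_POS, 'ana_stip', 'Anaerostipes caccae DSM 14662'),
    ('596319.3', GRAM_POS, 'staph_warn', 'Staphylococcus warneri L37603'),
    ('1384484.3', GRAM_POS, 'adl_equ', 'Adlercreutzia equolifaciens DSM 19450'),
]

def build_all(reconstruct, gapfill, optimize, to_cobra, save, out_dir):
    """Run reconstruct -> gapfill -> optimize -> export for every strain."""
    built = []
    for genome_id, template_id, tag, strain_id in STRAINS:
        reconstruct(genome_id, template_reference=template_id)
        gapfill(genome_id, media_reference=MEDIA_ID)
        optimize(genome_id)
        model = to_cobra(genome_id)
        model.id = strain_id
        save(model, f'{out_dir}/{tag}.draft.json')
        built.append(model.id)
    return built
```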
### Curate Draft Models
```
# Read in draft models
mixB1 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/barn_int.draft.json')
mixB2 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json')
mixB3 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ent_hir.draft.json')
mixB4 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ana_stip.draft.json')
mixB5 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/staph_warn.draft.json')
mixB6 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/adl_equ.draft.json')
# Quality check functions
# Identify potentially gapfilled reactions
def _findGapfilledRxn(model, exclude):
gapfilled = []
transport = _findTransports(model)
if not type(exclude) is list:
exclude = [exclude]
for index in model.reactions:
if len(list(index.genes)) == 0:
if not index in model.boundary:
                if not index.id in exclude and not index.id in transport:
gapfilled.append(index.id)
if len(gapfilled) > 0:
print(str(len(gapfilled)) + ' metabolic reactions not associated with genes')
return gapfilled
# Check for missing transport and exchange reactions
def _missingRxns(model, extracellular):
transporters = set(_findTransports(model))
exchanges = set([x.id for x in model.exchanges])
missing_exchanges = []
missing_transports = []
for metabolite in model.metabolites:
if not metabolite.compartment == extracellular:
continue
curr_rxns = set([x.id for x in list(metabolite.reactions)])
if not bool(curr_rxns & transporters):
missing_transports.append(metabolite.id)
if not bool(curr_rxns & exchanges):
missing_exchanges.append(metabolite.id)
if len(missing_transports) != 0:
print(str(len(missing_transports)) + ' extracellular metabolites are missing transport reactions')
if len(missing_exchanges) != 0:
print(str(len(missing_exchanges)) + ' extracellular metabolites are missing exchange reactions')
return missing_transports, missing_exchanges
# Checks which cytosolic metabolites are generated for free (bacteria only)
def _checkFreeMass(raw_model, cytosol):
model = copy.deepcopy(raw_model)
# Close all exchanges
for index in model.boundary:
model.reactions.get_by_id(index.id).lower_bound = 0.
# Identify all metabolites that are produced within the network
demand_metabolites = [x.reactants[0].id for x in model.demands if len(x.reactants) > 0] + [x.products[0].id for x in model.demands if len(x.products) > 0]
free = []
for index in model.metabolites:
if index.id in demand_metabolites:
continue
elif not index.compartment in cytosol:
continue
else:
demand = model.add_boundary(index, type='demand')
model.objective = demand
obj_val = model.slim_optimize(error_value=0.)
if obj_val > 1e-8:
free.append(index.id)
model.remove_reactions([demand])
if len(free) > 0:
print(str(len(free)) + ' metabolites are generated for free')
return free
# Check for mass and charge balance in reactions
def _checkBalance(model, exclude=[]):
imbalanced = []
mass_imbal = 0
charge_imbal = 0
elem_set = set()
for metabolite in model.metabolites:
try:
elem_set |= set(metabolite.elements.keys())
except:
pass
if len(elem_set) == 0:
        imbalanced = [rxn.id for rxn in model.reactions]
mass_imbal = len(model.reactions)
charge_imbal = len(model.reactions)
print('No elemental data associated with metabolites!')
else:
if not type(exclude) is list: exclude = [exclude]
for index in model.reactions:
if index in model.boundary or index.id in exclude:
continue
else:
try:
test = index.check_mass_balance()
except ValueError:
continue
if len(list(test)) > 0:
imbalanced.append(index.id)
if 'charge' in test.keys():
charge_imbal += 1
if len(set(test.keys()).intersection(elem_set)) > 0:
mass_imbal += 1
if mass_imbal != 0:
print(str(mass_imbal) + ' reactions are mass imbalanced')
if charge_imbal != 0:
print(str(charge_imbal) + ' reactions are charge imbalanced')
return imbalanced
# Identify transport reactions (for any number compartments)
def _findTransports(model):
transporters = []
compartments = set(list(model.compartments))
if len(compartments) == 1:
raise Exception('Model only has one compartment!')
for reaction in model.reactions:
reactant_compartments = set([x.compartment for x in reaction.reactants])
product_compartments = set([x.compartment for x in reaction.products])
reactant_baseID = set([x.id.split('_')[0] for x in reaction.reactants])
product_baseID = set([x.id.split('_')[0] for x in reaction.products])
if reactant_compartments == product_compartments and reactant_baseID != product_baseID:
continue
elif bool(compartments & reactant_compartments) == True and bool(compartments & product_compartments) == True:
transporters.append(reaction.id)
return transporters
# Checks the quality of models by a couple metrics and returns problems
def checkQuality(model, exclude=[], cytosol='c', extracellular='e'):
gaps = _findGapfilledRxn(model, exclude)
freemass = _checkFreeMass(model, cytosol)
balance = _checkBalance(model, exclude)
trans, exch = _missingRxns(model, extracellular)
test = gaps + freemass + balance
if len(test) == 0: print('No inconsistencies detected')
# Create reporting data structure
quality = {}
quality['gaps'] = gaps
quality['freemass'] = freemass
quality['balance'] = balance
quality['trans'] = trans
quality['exch'] = exch
return quality
mixB1
mixB1_errors = checkQuality(mixB1)
mixB2
mixB2_errors = checkQuality(mixB2)
mixB3
mixB3_errors = checkQuality(mixB3)
mixB4
mixB4_errors = checkQuality(mixB4)
mixB5
mixB5_errors = checkQuality(mixB5)
mixB6
mixB6_errors = checkQuality(mixB6)
# Remove old bio1 (generic Gram-positive Biomass function) and macromolecule demand reactions
mixB1.remove_reactions(['rxn13783_c', 'rxn13784_c', 'rxn13782_c', 'bio1', 'SK_cpd11416_c'])
# Make sure all the models can grow anaerobically
for model in [mixB1, mixB2, mixB3, mixB4, mixB5, mixB6]:
    model.reactions.get_by_id('EX_cpd00007_e').lower_bound = 0.
# Universal reaction bag
universal = cobra.io.load_json_model('/home/mjenior/Desktop/repos/Jenior_Cdifficile_2019/data/universal.json')
# Fix compartments
compartment_dict = {'Cytosol': 'cytosol', 'Extracellular': 'extracellular', 'c': 'cytosol', 'e': 'extracellular',
'cytosol': 'cytosol', 'extracellular': 'extracellular'}
for cpd in universal.metabolites:
    cpd.compartment = compartment_dict[cpd.compartment]
import copy
import cobra
import symengine
# pFBA gapfiller
def fast_gapfill(model, objective, universal, extracellular='extracellular', media=[], transport=False):
'''
Parameters
----------
    model : cobra.Model
        Model to be gapfilled
    objective : str
        Reaction ID for the objective function
    universal : cobra.Model
        Universal reaction bag used as the gapfilling reference
extracellular : str
Label for extracellular compartment of model
media : list
list of metabolite IDs in media condition
transport : bool
Determine if passive transporters should be added in defined media
'''
# Define overlapping components
target_rxns = set([str(x.id) for x in model.reactions])
target_cpds = set([str(y.id) for y in model.metabolites])
ref_rxns = set([str(z.id) for z in universal.reactions])
shared_rxns = ref_rxns.intersection(target_rxns)
# Remove overlapping reactions from universal bag, add model reactions to universal bag
temp_universal = copy.deepcopy(universal)
for rxn in shared_rxns: temp_universal.reactions.get_by_id(rxn).remove_from_model()
temp_universal.add_reactions(list(copy.deepcopy(model.reactions)))
# Define minimum objective value
temp_universal.objective = objective
obj_constraint = temp_universal.problem.Constraint(temp_universal.objective.expression, lb=1.0, ub=1000.0)
temp_universal.add_cons_vars([obj_constraint])
temp_universal.solver.update()
# Set up pFBA objective
pfba_expr = symengine.RealDouble(0)
for rxn in temp_universal.reactions:
if not rxn.id in target_rxns:
pfba_expr += 1.0 * rxn.forward_variable
pfba_expr += 1.0 * rxn.reverse_variable
else:
pfba_expr += 0.0 * rxn.forward_variable
pfba_expr += 0.0 * rxn.reverse_variable
temp_universal.objective = temp_universal.problem.Objective(pfba_expr, direction='min', sloppy=True)
temp_universal.solver.update()
# Set media condition
for rxn in temp_universal.reactions:
if len(rxn.reactants) == 0 or len(rxn.products) == 0:
substrates = set([x.id for x in rxn.metabolites])
if len(media) == 0 or bool(substrates & set(media)) == True:
rxn.bounds = (max(rxn.lower_bound, -1000.), min(rxn.upper_bound, 1000.))
else:
rxn.bounds = (0.0, min(rxn.upper_bound, 1000.))
# Run FBA and save solution
solution = temp_universal.optimize()
active_rxns = set([rxn.id for rxn in temp_universal.reactions if abs(solution.fluxes[rxn.id]) > 1e-6])
# Screen new reaction IDs
new_rxns = active_rxns.difference(target_rxns)
# Get reactions and metabolites to be added to the model
new_rxns = copy.deepcopy([universal.reactions.get_by_id(rxn) for rxn in new_rxns])
new_cpds = set()
for rxn in new_rxns: new_cpds |= set([str(x.id) for x in list(rxn.metabolites)]).difference(target_cpds)
new_cpds = copy.deepcopy([universal.metabolites.get_by_id(cpd) for cpd in new_cpds])
# Create gapfilled model
new_model = copy.deepcopy(model)
new_model.add_metabolites(new_cpds)
new_model.add_reactions(new_rxns)
# Identify extracellular metabolites that need new exchanges
new_exchs = 0
model_exchanges = set()
rxns = set([str(rxn.id) for rxn in model.reactions])
for rxn in new_model.reactions:
if len(rxn.reactants) == 0 or len(rxn.products) == 0:
if extracellular in [str(cpd.compartment) for cpd in rxn.metabolites]:
model_exchanges |= set([rxn.id])
for cpd in new_model.metabolites:
if cpd.compartment != extracellular: continue
current_rxns = set([x.id for x in cpd.reactions])
if bool(current_rxns & model_exchanges) == False:
new_id = 'EX_' + cpd.id
new_model.add_boundary(cpd, type='exchange', reaction_id=new_id, lb=-1000.0, ub=1000.0)
new_exchs += 1
# Report to user
print('Gapfilled ' + str(len(new_rxns) + new_exchs) + ' reactions and ' + str(len(new_cpds)) + ' metabolites')
    if new_model.slim_optimize() <= 1e-6: print('WARNING: Objective does not carry flux')
return new_model
# Load in models
model = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json')
# Fix compartments
compartment_dict = {'Cytosol': 'cytosol', 'Extracellular': 'extracellular', 'c': 'cytosol', 'e': 'extracellular',
'cytosol': 'cytosol', 'extracellular': 'extracellular'}
for cpd2 in model.metabolites:
cpd2.compartment = compartment_dict[cpd2.compartment]
# Thoroughly remove orphan reactions and metabolites
def all_orphan_prune(model):
pruned_cpd = 0
pruned_rxn = 0
removed = 1
while removed == 1:
removed = 0
# Metabolites
for cpd in model.metabolites:
if len(cpd.reactions) == 0:
cpd.remove_from_model()
pruned_cpd += 1
removed = 1
# Reactions
for rxn in model.reactions:
if len(rxn.metabolites) == 0:
rxn.remove_from_model()
pruned_rxn += 1
removed = 1
if pruned_cpd > 0: print('Pruned ' + str(pruned_cpd) + ' orphan metabolites')
if pruned_rxn > 0: print('Pruned ' + str(pruned_rxn) + ' orphan reactions')
return model
# Remove incorrect biomass-related components
# Unwanted reactions
rm_reactions = ['bio1']
for x in rm_reactions:
model.reactions.get_by_id(x).remove_from_model()
# Unwanted metabolites
rm_metabolites = ['cpd15666_c','cpd17041_c','cpd17042_c','cpd17043_c']
for y in rm_metabolites:
for z in model.metabolites.get_by_id(y).reactions:
z.remove_from_model()
model.metabolites.get_by_id(y).remove_from_model()
# Remove gap-filled reactions
# Gram-positive Biomass formulation
# DNA replication
cpd00115_c = universal.metabolites.get_by_id('cpd00115_c') # dATP
cpd00356_c = universal.metabolites.get_by_id('cpd00356_c') # dCTP
cpd00357_c = universal.metabolites.get_by_id('cpd00357_c') # TTP
cpd00241_c = universal.metabolites.get_by_id('cpd00241_c') # dGTP
cpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP
cpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O
cpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP
cpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate
cpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi
cpd17042_c = cobra.Metabolite(
'cpd17042_c',
formula='',
name='DNA polymer',
compartment='cytosol')
dna_rxn = cobra.Reaction('dna_rxn')
dna_rxn.lower_bound = 0.
dna_rxn.upper_bound = 1000.
dna_rxn.add_metabolites({
cpd00115_c: -1.0,
cpd00356_c: -0.5,
cpd00357_c: -1.0,
cpd00241_c: -0.5,
cpd00002_c: -4.0,
cpd00001_c: -1.0,
cpd17042_c: 1.0,
cpd00008_c: 4.0,
cpd00009_c: 4.0,
cpd00012_c: 1.0
})
#--------------------------------------------------------------------------------#
# RNA transcription
cpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP
cpd00052_c = universal.metabolites.get_by_id('cpd00052_c') # CTP
cpd00062_c = universal.metabolites.get_by_id('cpd00062_c') # UTP
cpd00038_c = universal.metabolites.get_by_id('cpd00038_c') # GTP
cpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O
cpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP
cpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate
cpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi
cpd17043_c = cobra.Metabolite(
'cpd17043_c',
formula='',
name='RNA polymer',
compartment='cytosol')
rna_rxn = cobra.Reaction('rna_rxn')
rna_rxn.name = 'RNA transcription'
rna_rxn.lower_bound = 0.
rna_rxn.upper_bound = 1000.
rna_rxn.add_metabolites({
cpd00002_c: -2.0,
cpd00052_c: -0.5,
cpd00062_c: -0.5,
cpd00038_c: -0.5,
cpd00001_c: -1.0,
cpd17043_c: 1.0,
cpd00008_c: 2.0,
cpd00009_c: 2.0,
cpd00012_c: 1.0
})
#--------------------------------------------------------------------------------#
# Protein biosynthesis
cpd00035_c = universal.metabolites.get_by_id('cpd00035_c') # L-Alanine
cpd00051_c = universal.metabolites.get_by_id('cpd00051_c') # L-Arginine
cpd00132_c = universal.metabolites.get_by_id('cpd00132_c') # L-Asparagine
cpd00041_c = universal.metabolites.get_by_id('cpd00041_c') # L-Aspartate
cpd00084_c = universal.metabolites.get_by_id('cpd00084_c') # L-Cysteine
cpd00053_c = universal.metabolites.get_by_id('cpd00053_c') # L-Glutamine
cpd00023_c = universal.metabolites.get_by_id('cpd00023_c') # L-Glutamate
cpd00033_c = universal.metabolites.get_by_id('cpd00033_c') # Glycine
cpd00119_c = universal.metabolites.get_by_id('cpd00119_c') # L-Histidine
cpd00322_c = universal.metabolites.get_by_id('cpd00322_c') # L-Isoleucine
cpd00107_c = universal.metabolites.get_by_id('cpd00107_c') # L-Leucine
cpd00039_c = universal.metabolites.get_by_id('cpd00039_c') # L-Lysine
cpd00060_c = universal.metabolites.get_by_id('cpd00060_c') # L-Methionine
cpd00066_c = universal.metabolites.get_by_id('cpd00066_c') # L-Phenylalanine
cpd00129_c = universal.metabolites.get_by_id('cpd00129_c') # L-Proline
cpd00054_c = universal.metabolites.get_by_id('cpd00054_c') # L-Serine
cpd00161_c = universal.metabolites.get_by_id('cpd00161_c') # L-Threonine
cpd00065_c = universal.metabolites.get_by_id('cpd00065_c') # L-Tryptophan
cpd00069_c = universal.metabolites.get_by_id('cpd00069_c') # L-Tyrosine
cpd00156_c = universal.metabolites.get_by_id('cpd00156_c') # L-Valine
cpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP
cpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O
cpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP
cpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate
cpd17041_c = cobra.Metabolite(
'cpd17041_c',
formula='',
name='Protein polymer',
compartment='cytosol')
protein_rxn = cobra.Reaction('protein_rxn')
protein_rxn.name = 'Protein biosynthesis'
protein_rxn.lower_bound = 0.
protein_rxn.upper_bound = 1000.
protein_rxn.add_metabolites({
cpd00035_c: -0.5,
cpd00051_c: -0.25,
cpd00132_c: -0.5,
cpd00041_c: -0.5,
cpd00084_c: -0.05,
cpd00053_c: -0.25,
cpd00023_c: -0.5,
cpd00033_c: -0.5,
cpd00119_c: -0.05,
cpd00322_c: -0.5,
cpd00107_c: -0.5,
cpd00039_c: -0.5,
cpd00060_c: -0.25,
cpd00066_c: -0.5,
cpd00129_c: -0.25,
cpd00054_c: -0.5,
cpd00161_c: -0.5,
cpd00065_c: -0.05,
cpd00069_c: -0.25,
cpd00156_c: -0.5,
cpd00002_c: -20.0,
cpd00001_c: -1.0,
cpd17041_c: 1.0,
cpd00008_c: 20.0,
cpd00009_c: 20.0
})
#--------------------------------------------------------------------------------#
# Cell wall synthesis
cpd02967_c = universal.metabolites.get_by_id('cpd02967_c') # N-Acetyl-beta-D-mannosaminyl-1,4-N-acetyl-D-glucosaminyldiphosphoundecaprenol
cpd00402_c = universal.metabolites.get_by_id('cpd00402_c') # CDPglycerol
cpd00046_c = universal.metabolites.get_by_id('cpd00046_c') # CMP
cpd12894_c = cobra.Metabolite(
'cpd12894_c',
formula='',
name='Teichoic acid',
compartment='cytosol')
teichoicacid_rxn = cobra.Reaction('teichoicacid_rxn')
teichoicacid_rxn.name = 'Teichoic acid biosynthesis'
teichoicacid_rxn.lower_bound = 0.
teichoicacid_rxn.upper_bound = 1000.
teichoicacid_rxn.add_metabolites({
cpd02967_c: -1.0,
cpd00402_c: -1.0,
cpd00046_c: 1.0,
cpd12894_c: 1.0
})
# Peptidoglycan subunits
# Undecaprenyl-diphospho-N-acetylmuramoyl--N-acetylglucosamine-L-ala-D-glu-meso-2-6-diaminopimeloyl-D-ala-D-ala (right)
cpd03495_c = universal.metabolites.get_by_id('cpd03495_c')
# Undecaprenyl-diphospho-N-acetylmuramoyl-(N-acetylglucosamine)-L-alanyl-gamma-D-glutamyl-L-lysyl-D-alanyl-D-alanine (left)
cpd03491_c = universal.metabolites.get_by_id('cpd03491_c')
cpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP
cpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O
cpd02229_c = universal.metabolites.get_by_id('cpd02229_c') # Bactoprenyl diphosphate
cpd00117_c = universal.metabolites.get_by_id('cpd00117_c') # D-Alanine
cpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP
cpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate
cpd16661_c = cobra.Metabolite(
'cpd16661_c',
formula='',
name='Peptidoglycan polymer',
compartment='cytosol')
peptidoglycan_rxn = cobra.Reaction('peptidoglycan_rxn')
peptidoglycan_rxn.name = 'Peptidoglycan biosynthesis'
peptidoglycan_rxn.lower_bound = 0.
peptidoglycan_rxn.upper_bound = 1000.
peptidoglycan_rxn.add_metabolites({
cpd03491_c: -1.0,
cpd03495_c: -1.0,
cpd00002_c: -4.0,
cpd00001_c: -1.0,
cpd16661_c: 1.0,
cpd02229_c: 1.0,
cpd00117_c: 0.5, # D-Alanine
cpd00008_c: 4.0, # ADP
cpd00009_c: 4.0 # Phosphate
})
cellwall_c = cobra.Metabolite(
'cellwall_c',
formula='',
name='Cell Wall polymer',
compartment='cytosol')
cellwall_rxn = cobra.Reaction('cellwall_rxn')
cellwall_rxn.name = 'Cell wall biosynthesis'
cellwall_rxn.lower_bound = 0.
cellwall_rxn.upper_bound = 1000.
cellwall_rxn.add_metabolites({
cpd16661_c: -1.5,
cpd12894_c: -0.05,
cellwall_c: 1.0
})
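# Each block above follows the same pooling pattern: a pseudo-reaction drains
# weighted precursors (negative coefficients) and yields 1.0 unit of a pool
# pseudo-metabolite. A stand-alone sketch of that pattern with plain dicts
# (no cobra; helper name and the sketch itself are illustrative only):
def make_pool_stoichiometry(components, pool_id):
    """Consume each component at its weight; produce 1.0 unit of the pool."""
    stoich = {met: -coef for met, coef in components.items()}
    stoich[pool_id] = 1.0
    return stoich

# Mirrors cellwall_rxn above: 1.5 peptidoglycan + 0.05 teichoic acid -> 1 cell wall
cellwall_sketch = make_pool_stoichiometry({'cpd16661_c': 1.5, 'cpd12894_c': 0.05},
                                          'cellwall_c')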
#--------------------------------------------------------------------------------#
# Lipid pool
cpd15543_c = universal.metabolites.get_by_id('cpd15543_c') # Phosphatidylglycerophosphate ditetradecanoyl
cpd15545_c = universal.metabolites.get_by_id('cpd15545_c') # Phosphatidylglycerophosphate dihexadecanoyl
cpd15540_c = universal.metabolites.get_by_id('cpd15540_c') # Phosphatidylglycerol dioctadecanoyl
cpd15728_c = universal.metabolites.get_by_id('cpd15728_c') # Diglucosyl-1,2 dipalmitoylglycerol
cpd15729_c = universal.metabolites.get_by_id('cpd15729_c') # Diglucosyl-1,2 dimyristoylglycerol
cpd15737_c = universal.metabolites.get_by_id('cpd15737_c') # Monoglucosyl-1,2 dipalmitoylglycerol
cpd15738_c = universal.metabolites.get_by_id('cpd15738_c') # Monoglucosyl-1,2 dimyristoylglycerol
cpd11852_c = cobra.Metabolite(
'cpd11852_c',
formula='',
name='Lipid Pool',
compartment='cytosol')
lipid_rxn = cobra.Reaction('lipid_rxn')
lipid_rxn.name = 'Lipid composition'
lipid_rxn.lower_bound = 0.
lipid_rxn.upper_bound = 1000.
lipid_rxn.add_metabolites({
cpd15543_c: -0.005,
cpd15545_c: -0.005,
cpd15540_c: -0.005,
cpd15728_c: -0.005,
cpd15729_c: -0.005,
cpd15737_c: -0.005,
cpd15738_c: -0.005,
cpd11852_c: 1.0
})
#--------------------------------------------------------------------------------#
# Ions, Vitamins, & Cofactors
# Vitamins
cpd00104_c = universal.metabolites.get_by_id('cpd00104_c') # Biotin MDM
cpd00644_c = universal.metabolites.get_by_id('cpd00644_c') # Pantothenate MDM
cpd00263_c = universal.metabolites.get_by_id('cpd00263_c') # Pyridoxine MDM
cpd00393_c = universal.metabolites.get_by_id('cpd00393_c') # folate
cpd00133_c = universal.metabolites.get_by_id('cpd00133_c') # nicotinamide
cpd00443_c = universal.metabolites.get_by_id('cpd00443_c') # p-aminobenzoic acid
cpd00220_c = universal.metabolites.get_by_id('cpd00220_c') # riboflavin
cpd00305_c = universal.metabolites.get_by_id('cpd00305_c') # thiamin
# Ions
cpd00149_c = universal.metabolites.get_by_id('cpd00149_c') # Cobalt
cpd00030_c = universal.metabolites.get_by_id('cpd00030_c') # Manganese
cpd00254_c = universal.metabolites.get_by_id('cpd00254_c') # Magnesium
cpd00971_c = universal.metabolites.get_by_id('cpd00971_c') # Sodium
cpd00063_c = universal.metabolites.get_by_id('cpd00063_c') # Calcium
cpd10515_c = universal.metabolites.get_by_id('cpd10515_c') # Iron
cpd00205_c = universal.metabolites.get_by_id('cpd00205_c') # Potassium
cpd00099_c = universal.metabolites.get_by_id('cpd00099_c') # Chloride
# Cofactors
cpd00022_c = universal.metabolites.get_by_id('cpd00022_c') # Acetyl-CoA
cpd00010_c = universal.metabolites.get_by_id('cpd00010_c') # CoA
cpd00015_c = universal.metabolites.get_by_id('cpd00015_c') # FAD
cpd00003_c = universal.metabolites.get_by_id('cpd00003_c') # NAD
cpd00004_c = universal.metabolites.get_by_id('cpd00004_c') # NADH
cpd00006_c = universal.metabolites.get_by_id('cpd00006_c') # NADP
cpd00005_c = universal.metabolites.get_by_id('cpd00005_c') # NADPH
# Energy molecules
cpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP
cpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP
cpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate
cpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi
cpd00038_c = universal.metabolites.get_by_id('cpd00038_c') # GTP
cpd00031_c = universal.metabolites.get_by_id('cpd00031_c') # GDP
cpd00274_c = universal.metabolites.get_by_id('cpd00274_c') # Citrulline
cofactor_c = cobra.Metabolite(
'cofactor_c',
formula='',
name='Cofactor Pool',
compartment='cytosol')
cofactor_rxn = cobra.Reaction('cofactor_rxn')
cofactor_rxn.name = 'Cofactor Pool'
cofactor_rxn.lower_bound = 0.
cofactor_rxn.upper_bound = 1000.
cofactor_rxn.add_metabolites({
cpd00104_c: -0.005,
cpd00644_c: -0.005,
cpd00263_c: -0.005,
cpd00393_c: -0.005,
cpd00133_c: -0.005,
cpd00443_c: -0.005,
cpd00220_c: -0.005,
cpd00305_c: -0.005,
cpd00149_c: -0.005,
cpd00030_c: -0.005,
cpd00254_c: -0.005,
cpd00971_c: -0.005,
cpd00063_c: -0.005,
cpd10515_c: -0.005,
cpd00205_c: -0.005,
cpd00099_c: -0.005,
cpd00022_c: -0.005,
cpd00010_c: -0.0005,
cpd00015_c: -0.0005,
cpd00003_c: -0.005,
cpd00004_c: -0.005,
cpd00006_c: -0.005,
cpd00005_c: -0.005,
cpd00002_c: -0.005,
cpd00008_c: -0.005,
cpd00009_c: -0.5,
cpd00012_c: -0.005,
cpd00038_c: -0.005,
cpd00031_c: -0.005,
cofactor_c: 1.0
})
#--------------------------------------------------------------------------------#
# Final Biomass
cpd11416_c = cobra.Metabolite(
'cpd11416_c',
formula='',
name='Biomass',
compartment='cytosol')
biomass_rxn = cobra.Reaction('biomass')
biomass_rxn.name = 'Gram-positive Biomass Reaction'
biomass_rxn.lower_bound = 0.
biomass_rxn.upper_bound = 1000.
biomass_rxn.add_metabolites({
cpd17041_c: -0.4, # Protein
cpd17043_c: -0.15, # RNA
cpd17042_c: -0.05, # DNA
cpd11852_c: -0.05, # Lipid
cellwall_c: -0.2,
cofactor_c: -0.2,
cpd00001_c: -20.0,
cpd00002_c: -20.0,
cpd00008_c: 20.0,
cpd00009_c: 20.0,
cpd11416_c: 1.0 # Biomass
})
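# The hierarchical pools compose multiplicatively: one unit of biomass drains
# 0.2 cell wall, which in turn drains 1.5 peptidoglycan polymer. A quick
# plain-Python sanity check (coefficients copied from the reactions above;
# the variable names here are illustrative, not part of the model):
biomass_fractions = {'protein': 0.4, 'rna': 0.15, 'dna': 0.05,
                     'lipid': 0.05, 'cellwall': 0.2, 'cofactor': 0.2}
peptidoglycan_per_biomass = biomass_fractions['cellwall'] * 1.5   # via cellwall_rxn
teichoic_per_biomass = biomass_fractions['cellwall'] * 0.05       # via cellwall_rxn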
grampos_biomass_components = [dna_rxn, rna_rxn, protein_rxn, teichoicacid_rxn,
                              peptidoglycan_rxn, cellwall_rxn, lipid_rxn,
                              cofactor_rxn, biomass_rxn]
# Add components to new model
model.add_reactions(grampos_biomass_components)
model.add_boundary(cpd11416_c, type='sink', reaction_id='EX_biomass', lb=0.0, ub=1000.0)
# Set new objective
model.objective = 'biomass'
model
model.slim_optimize()
# Gapfill
new_model = fast_gapfill(model, universal=universal, objective='biomass')
new_model
# Define minimal components and add to minimal media formulation
from cobra.medium import minimal_medium
ov = new_model.slim_optimize() * 0.1  # require at least 10% of optimal growth
components = minimal_medium(new_model, ov)  # exchanges needed to support that growth
essential = ['cpd00063_e','cpd00393_e','cpd00048_e','cpd00305_e','cpd00205_e','cpd00104_e','cpd00099_e',
             'cpd00149_e','cpd00030_e','cpd10516_e','cpd00254_e','cpd00220_e',
             'cpd00355_e','cpd00064_e','cpd00971_e','cpd00067_e']
# Wegkamp et al. 2009. Applied Microbiology.
# Lactobacillus minimal media
pmm5 = ['cpd00001_e','cpd00009_e','cpd00026_e','cpd00029_e','cpd00059_e','cpd00051_e','cpd00023_e',
        'cpd00107_e','cpd00322_e','cpd00060_e','cpd00066_e','cpd00161_e','cpd00065_e','cpd00069_e',
        'cpd00156_e','cpd00218_e','cpd02201_e','cpd00220_e','cpd04877_e','cpd28790_e',
        'cpd00355_e']
minimal = essential + pmm5
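# Note that essential and pmm5 can share compound ids (e.g. riboflavin,
# cpd00220_e, appears in both), so simple concatenation leaves repeats.
# An order-preserving, duplicate-free merge helper (a plain-Python sketch,
# not part of the original workflow; not wired into the gapfill call below):
def merge_media(*id_lists):
    """Order-preserving union of exchange-compound id lists."""
    return list(dict.fromkeys(i for ids in id_lists for i in ids))

# e.g. merge_media(['cpd00220_e', 'cpd00355_e'], ['cpd00220_e']) drops the repeat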
new_model = fast_gapfill(new_model, universal=universal, objective='biomass', media=minimal)
new_model
new_model = all_orphan_prune(new_model)
new_model
print(len(new_model.genes))
new_model.slim_optimize()
new_model.name = 'Lactobacillus reuteri mlc3'
new_model.id = 'iLr488'
cobra.io.save_json_model(new_model, '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.curated.json')
```
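The saved JSON reconstruction can be spot-checked without loading cobra. A minimal sketch, assuming only the top-level `metabolites`/`reactions`/`genes` arrays of the cobra JSON schema; the function name and example path are hypothetical:

```python
import json

def summarize_model_json(path):
    """Count metabolites, reactions, and genes in a cobra JSON model file."""
    with open(path) as fh:
        model = json.load(fh)
    return (len(model.get('metabolites', [])),
            len(model.get('reactions', [])),
            len(model.get('genes', [])))

# e.g. summarize_model_json('lact_reut.curated.json')
```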