
Documentation: https://networkx.github.io/
Sources: https://github.com/networkx
## Creating a graph
Create an empty graph with no nodes and no edges.
```
import networkx as nx
G = nx.Graph()
```
By definition, a `Graph` is a collection of nodes (vertices) along with identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can be any hashable object e.g. a text string, an image, an XML object, another Graph, a customized node object, etc. (Note: Python's None object should not be used as a node as it determines whether optional function arguments have been assigned in many functions.)
## Nodes
The graph G can be grown in several ways. NetworkX includes many graph generator functions and facilities to read and write graphs in many formats. To get started though we'll look at simple manipulations. You can add one node at a time,
```
G.add_node(1)
```
add a list of nodes,
```
G.add_nodes_from([2, 3])
```
or add any `nbunch` of nodes. An nbunch is any iterable container of nodes that is not itself a node in the graph (e.g. a list, set, graph, file, etc.).
```
H = nx.path_graph(10)
G.add_nodes_from(H)
```
Note that G now contains the nodes of H as nodes of G. In contrast, you could use the graph H as a node in G.
```
G.add_node(H)
```
The graph G now contains H as a node. This flexibility is very powerful as it allows graphs of graphs, graphs of files, graphs of functions and much more. It is worth thinking about how to structure your application so that the nodes are useful entities. Of course you can always use a unique identifier in G and have a separate dictionary keyed by identifier to the node information if you prefer. (Note: You should not change the node object if the hash depends on its contents.)
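As a small, hedged sketch of the identifier pattern described above (the ids and payloads below are invented for illustration):

```python
import networkx as nx

# Hypothetical payloads keyed by simple integer ids; the graph itself
# stores only the small, hashable ids.
node_info = {0: {"name": "alice", "score": 3.2},
             1: {"name": "bob", "score": 1.7}}

K = nx.Graph()
K.add_nodes_from(node_info)   # adds the keys 0 and 1 as nodes
K.add_edge(0, 1)

# Look up the payload for a node when needed.
print(node_info[0]["name"])   # alice
```

The graph stays lightweight while the mutable data lives in the side dictionary, so node hashes never change.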
## Edges
G can also be grown by adding one edge at a time,
```
G.add_edge(1, 2)
e = (2, 3)
G.add_edge(*e)  # unpack the edge tuple
```
by adding a list of edges,
```
G.add_edges_from([(1, 2),(1, 3)])
```
or by adding any `ebunch` of edges. An ebunch is any iterable container of edge-tuples. An edge-tuple can be a 2-tuple of nodes or a 3-tuple with 2 nodes followed by an edge attribute dictionary, e.g. `(2, 3, {'weight': 3.1415})`. Edge attributes are discussed further below.
```
G.add_edges_from(H.edges())
```
One can demolish the graph in a similar fashion, using `Graph.remove_node`, `Graph.remove_nodes_from`, `Graph.remove_edge`, and `Graph.remove_edges_from`, e.g.
```
import matplotlib.pyplot as plt
# drawing reference: https://networkx.github.io/documentation/networkx-1.10/reference/drawing.html
nx.draw(G, with_labels=True)
nx.draw_shell(G, with_labels=True)
nx.draw_spring(G, with_labels=True)
nx.draw_spectral(G, with_labels=True)
nx.draw_circular(G, with_labels=True)
G.nodes
G.edges
list(G.nodes)
G.remove_node(H)
```
There are no complaints when adding existing nodes or edges. For example, after removing all nodes and edges,
```
G.clear()
```
we add new nodes/edges and NetworkX quietly ignores any that are already present.
```
G.add_edges_from([(1, 2), (1, 3)])
G.add_node(1)
G.add_edge(1, 2)
G.add_node("spam") # adds node "spam"
G.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm'
```
At this stage the graph G consists of 8 nodes and 2 edges, as can be seen by:
```
G.number_of_nodes()
G.number_of_edges()
```
We can examine them with
```
list(G.nodes()) # G.nodes() returns an iterator of nodes.
list(G.edges()) # G.edges() returns an iterator of edges.
list(G.neighbors(1)) # G.neighbors(n) returns an iterator of neighboring nodes of n
```
Removing nodes or edges has similar syntax to adding:
```
G.remove_nodes_from("spam")
list(G.nodes())
G.remove_edge(1, 3)
```
When creating a graph structure by instantiating one of the graph classes you can specify data in several formats.
```
H = nx.DiGraph(G) # create a DiGraph using the connections from G
list(H.edges())
edgelist = [(0, 1), (1, 2), (2, 3)]
H = nx.Graph(edgelist)
```
## Digraphs - directed graphs
```
import networkx as nx
# !pip install nxpd  # not using, since matplotlib works well enough
# from nxpd import draw
G = nx.DiGraph()
G.graph['dpi'] = 120
G.add_nodes_from(range(1, 9))
G.add_edges_from([(1, 2), (1, 3), (2, 4), (3, 6), (4, 5), (4, 6), (5, 7), (5, 8)])
nx.draw_circular(G, with_labels=True)  # can anyone answer why with_labels defaults to False? :)
G.nodes
G = nx.Graph()
G.add_edges_from(
    [('A', 'B'), ('A', 'C'), ('D', 'B'), ('E', 'C'), ('E', 'F'),
     ('B', 'H'), ('B', 'G'), ('B', 'F'), ('C', 'G')])
val_map = {'A': 1.0,
           # 'D': 0.5714285714285714,
           'H': 0.0}
# simple two-color coloring
values = [0.75 * (node > 'C') for node in G.nodes()]
color_map = []
for node in G:
    if node < 'D':
        color_map.append('blue')
    else:
        color_map.append('green')
print(values)
# nx.draw(G, cmap=plt.get_cmap('jet'), node_color=values)
nx.draw(G, node_color=color_map)
# we can also set node attributes:
# https://networkx.github.io/documentation/stable/reference/generated/networkx.classes.function.set_node_attributes.html
```
### Exercise
### Enter into NetworkX this graph and display it

## Bonus round for something more complicated

```
# need http://www.graphviz.org/
# TODO we skip this as setting PATH on each user machine is too painful
```
https://graphviz.gitlab.io/_pages/Download/Download_windows.html
```
%env
%set_env GRAPHVIZ_DOT=C:\Program Files (x86)\Graphviz2.38\bin
%env
!set PATH=%PATH%;C:\\Program Files (x86)\\Graphviz2.38\\bin
!echo %PATH%
G = nx.DiGraph()
G.graph['dpi'] = 120
G.add_nodes_from(range(1,9))
G.add_edges_from([(1,2),(1,3),(2,4),(3,6),(4,5),(4,6),(5,7),(5,8)])
#draw(G, show='ipynb')
nx.draw(G)
```
## What to use as nodes and edges
You might notice that nodes and edges are not specified as NetworkX objects. This leaves you free to use meaningful items as nodes and edges. The most common choices are numbers or strings, but a node can be any hashable object (except None), and an edge can be associated with any object x using `G.add_edge(n1, n2, object=x)`.
As an example, n1 and n2 could be protein objects from the RCSB Protein Data Bank, and x could refer to an XML record of publications detailing experimental observations of their interaction.
We have found this power quite useful, but its abuse can lead to unexpected surprises unless one is familiar with Python. If in doubt, consider using `convert_node_labels_to_integers` to obtain a more traditional graph with integer labels.
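A minimal sketch of both ideas; the "protein" labels and the record dict below are made up, standing in for real PDB objects and publication records:

```python
import networkx as nx

P = nx.Graph()
# Any Python object can ride along on an edge; this dict stands in for
# a real publication record (purely hypothetical values).
record = {"doi": "10.0000/example", "note": "hypothetical interaction"}
P.add_edge("protein_a", "protein_b", object=record)
print(P["protein_a"]["protein_b"]["object"]["note"])

# Relabel to integers; each old label is kept as a node attribute.
P2 = nx.convert_node_labels_to_integers(P, label_attribute="old_label")
print(sorted(P2.nodes))   # [0, 1]
```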
## Accessing edges
In addition to the methods `Graph.nodes`, `Graph.edges`, and `Graph.neighbors`, iterator versions (e.g. `Graph.edges_iter`) can save you from creating large lists when you are just going to iterate through them anyway.
Fast direct access to the graph data structure is also possible using subscript notation.
**Warning:** Do not change the returned dict; it is part of the graph data structure, and direct manipulation may leave the graph in an inconsistent state.
```
G[1] # Warning: do not change the resulting dict
G[1][2]
```
You can safely set the attributes of an edge using subscript notation if the edge already exists.
```
G.add_edge(1, 3)
G[1][3]['color']='blue'
```
Fast examination of all edges is achieved using the adjacency iterators (`Graph.adjacency`). Note that for undirected graphs this actually looks at each edge twice.
```
FG = nx.Graph()
FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])
for n, nbrs in FG.adjacency():
    for nbr, eattr in nbrs.items():
        data = eattr['weight']
        if data < 0.5:
            print('(%d, %d, %.3f)' % (n, nbr, data))
```
Convenient access to all edges is achieved with the edges method.
```
for (u, v, d) in FG.edges(data='weight'):
    if d < 0.5:
        print('(%d, %d, %.3f)' % (u, v, d))
```
## Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like, can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated attribute dictionary (the keys must be hashable). By default these are empty, but attributes can be added or changed using add_edge, add_node or direct manipulation of the attribute dictionaries named G.graph, G.node and G.edge for a graph G.
### Graph attributes
Assign graph attributes when creating a new graph
```
G = nx.Graph(day="Friday")
G.graph
```
Or you can modify attributes later
```
G.graph['day'] = 'Monday'
G.graph
```
### Node attributes
Add node attributes using `add_node()`, `add_nodes_from()`, or `G.node`.
```
G.add_node(1, time='5pm')
G.add_nodes_from([3], time='2pm')
G.node[1]
G.node[1]['room'] = 714
list(G.nodes(data=True))
```
Note that adding a node to `G.node` does not add it to the graph; use `G.add_node()` to add new nodes.
### Edge attributes
Add edge attributes using `add_edge()`, `add_edges_from()`, subscript notation, or `G.edge`.
```
G.add_edge(1, 2, weight=4.7)
G.add_edges_from([(3, 4), (4, 5)], color='red')
G.add_edges_from([(1, 2, {'color': 'blue'}), (2, 3, {'weight': 8})])
G[1][2]['weight'] = 4.7
G.edge[1][2]['weight'] = 4
list(G.edges(data=True))
```
The special attribute 'weight' should be numeric and holds values used by algorithms requiring weighted edges.
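For instance, weighted shortest-path routines read this attribute; a small sketch on an invented three-node graph:

```python
import networkx as nx

W = nx.Graph()
W.add_edge('a', 'b', weight=1.0)
W.add_edge('b', 'c', weight=1.0)
W.add_edge('a', 'c', weight=5.0)   # direct but heavy

# shortest_path consults the 'weight' attribute, so the two-hop
# route beats the heavier direct edge.
print(nx.shortest_path(W, 'a', 'c', weight='weight'))   # ['a', 'b', 'c']
```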
## Directed Graphs
The `DiGraph` class provides additional methods specific to directed edges, e.g. `DiGraph.out_edges`, `DiGraph.in_degree`, `DiGraph.predecessors`, `DiGraph.successors`, etc. To allow algorithms to work with both classes easily, the directed versions of `neighbors()` and `degree()` are equivalent to `successors()` and the sum of `in_degree()` and `out_degree()` respectively, even though that may feel inconsistent at times.
```
DG = nx.DiGraph()
DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
DG.out_degree(1, weight='weight')
DG.degree(1,weight='weight')
list(DG.successors(1)) # DG.successors(n) returns an iterator
list(DG.neighbors(1)) # DG.neighbors(n) returns an iterator
```
Some algorithms work only for directed graphs and others are not well defined for directed graphs. Indeed the tendency to lump directed and undirected graphs together is dangerous. If you want to treat a directed graph as undirected for some measurement you should probably convert it using `Graph.to_undirected` or with
```
H = nx.Graph(G) # convert G to undirected graph
```
## MultiGraphs
NetworkX provides classes for graphs which allow multiple edges between any pair of nodes. The `MultiGraph` and `MultiDiGraph` classes allow you to add the same edge twice, possibly with different edge data. This can be powerful for some applications, but many algorithms are not well defined on such graphs. Shortest path is one example. Where results are well defined, e.g. `MultiGraph.degree` we provide the function. Otherwise you should convert to a standard graph in a way that makes the measurement well defined.
```
MG = nx.MultiGraph()
MG.add_weighted_edges_from([(1, 2, .5), (1, 2, .75), (2, 3, .5)])
list(MG.degree(weight='weight')) # MG.degree() returns a (node, degree) iterator
GG = nx.Graph()
for n, nbrs in MG.adjacency():
    for nbr, edict in nbrs.items():
        minvalue = min(d['weight'] for d in edict.values())
        GG.add_edge(n, nbr, weight=minvalue)
nx.shortest_path(GG, 1, 3)
```
## Graph generators and graph operations
In addition to constructing graphs node-by-node or edge-by-edge, they can also be generated by
* Applying classic graph operations, such as:
```
subgraph(G, nbunch) - induce subgraph of G on nodes in nbunch
union(G1,G2) - graph union
disjoint_union(G1,G2) - graph union assuming all nodes are different
cartesian_product(G1,G2) - return Cartesian product graph
compose(G1,G2) - combine graphs identifying nodes common to both
complement(G) - graph complement
create_empty_copy(G) - return an empty copy of the same graph class
convert_to_undirected(G) - return an undirected representation of G
convert_to_directed(G) - return a directed representation of G
```
* Using a call to one of the classic small graphs, e.g.
```
petersen = nx.petersen_graph()
tutte = nx.tutte_graph()
maze = nx.sedgewick_maze_graph()
tet = nx.tetrahedral_graph()
```
* Using a (constructive) generator for a classic graph, e.g.
```
K_5 = nx.complete_graph(5)
K_3_5 = nx.complete_bipartite_graph(3, 5)
barbell = nx.barbell_graph(10, 10)
lollipop = nx.lollipop_graph(10, 20)
```
* Using a stochastic graph generator, e.g.
```
er = nx.erdos_renyi_graph(100, 0.15)
ws = nx.watts_strogatz_graph(30, 3, 0.1)
ba = nx.barabasi_albert_graph(100, 5)
red = nx.random_lobster(100, 0.9, 0.9)
```
* Reading a graph stored in a file using common graph formats, such as edge lists, adjacency lists, GML, GraphML, pickle, LEDA and others.
```
nx.write_gml(red, "path.to.file")
mygraph = nx.read_gml("path.to.file")
```
Details on graph formats: :doc:`/reference/readwrite`
Details on graph generator functions: :doc:`/reference/generators`
## Analyzing graphs
The structure of G can be analyzed using various graph-theoretic functions such as:
```
G=nx.Graph()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node("spam") # adds node "spam"
nx.connected_components(G)
list(nx.connected_components(G))
sorted(d for n, d in nx.degree(G))
nx.clustering(G)
```
Functions that return node properties return (node, value) tuple iterators.
```
nx.degree(G)
list(nx.degree(G))
```
For values of specific nodes, you can provide a single node or an nbunch of nodes as argument. If a single node is specified, then a single value is returned. If an nbunch is specified, then the function will return a (node, degree) iterator.
```
nx.degree(G, 1)
G.degree(1)
G.degree([1, 2])
list(G.degree([1, 2]))
```
Details on graph algorithms supported: :doc:`/reference/algorithms`
## Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with Matplotlib as well as an interface to use the open source Graphviz software package are included. These are part of the networkx.drawing package and will be imported if possible. See :doc:`/reference/drawing` for details.
Note that in older NetworkX releases the drawing package was not compatible with Python 3; current releases support Python 3.
First import Matplotlib's plot interface (pylab works too)
```
import matplotlib.pyplot as plt
```
You may find it useful to interactively test code using "ipython -pylab", which combines the power of ipython and matplotlib and provides a convenient interactive mode.
To test if the import of networkx.drawing was successful draw G using one of
```
nx.draw(G)
nx.draw_random(G)
nx.draw_circular(G)
nx.draw_spectral(G)
```
when drawing to an interactive display. Note that you may need to issue a Matplotlib
```
plt.show()
```
command if you are not using matplotlib in interactive mode: (See Matplotlib FAQ )
To save drawings to a file, use, for example
```
nx.draw(G)
plt.savefig("path.png")
```
writes to the file "path.png" in the local directory.
Details on drawing graphs: :doc:`/reference/drawing`
# Stellar Initial Mass Function (IMF)
We are going to use a Salpeter IMF to generate stellar IMF data and then use MCMC to guess the slope.
The Salpeter IMF is given by:
$$\frac{dN}{dM} \propto \left(\frac{M}{M_\odot}\right)^{-\alpha} \quad\text{or}\quad \frac{dN}{d\log M} \propto \left(\frac{M}{M_\odot}\right)^{1-\alpha}$$
```
import numpy as np
import matplotlib.pyplot as plt
import copy
import corner
import emcee
%matplotlib inline
def sampleFromSalpeter(N, alpha, M_min, M_max):
    # Draw random samples from a Salpeter IMF via rejection sampling.
    # N     ... number of samples
    # alpha ... power-law index
    # M_min ... lower bound of mass interval
    # M_max ... upper bound of mass interval
    # Convert limits from M to logM.
    log_M_min = np.log(M_min)
    log_M_max = np.log(M_max)
    # Since the Salpeter IMF has a negative slope, the maximum likelihood occurs at M_min.
    maxlik = M_min**(1.0 - alpha)
    # Prepare list for output masses.
    Masses = []
    while len(Masses) < N:
        # Draw a candidate from the logM interval.
        logM = np.random.uniform(log_M_min, log_M_max)
        M = np.exp(logM)
        # Compute likelihood of candidate from the Salpeter IMF.
        likelihood = M**(1.0 - alpha)
        # Accept randomly.
        u = np.random.uniform(0.0, maxlik)
        if u < likelihood:
            Masses.append(M)
    return Masses

# and now generate the data
N = 1000000  # draw 1 million stellar masses
alpha = 2.35
M_min = 1.0
M_max = 100.0
# Natural logs of the mass limits (the likelihood and gradient below use natural logs throughout).
log_M_min = np.log(M_min)
log_M_max = np.log(M_max)
Masses = sampleFromSalpeter(N, alpha, M_min, M_max)
LogM = np.log(np.array(Masses))
D = np.mean(LogM) * N
```
Here we have created a set of test stellar mass data, distributed according to the Salpeter IMF, and now we will perform a MCMC to guess the slope.
We are given then a set of N-stellar masses, with negligible errors in the measurements.
Assuming that the minimum and maximum masses are known, the likelihood of the problem is:
$\mathcal L(\{M_1,M_2,\ldots,M_N\};\alpha) = \prod_{n=1}^N p(M_n|\alpha) = \prod_{n=1}^N c\left(\frac{M_n}{M_\odot}\right)^{-\alpha}$
where the normalization constant c can be found by:
$\int_{M_{min}}^{M_{max}}c M^{-\alpha} dM = 1 \Rightarrow c\frac{M_{max}^{1-\alpha}-M_{min}^{1-\alpha}}{1-\alpha}=1$
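As a quick numerical sanity check (not part of the derivation itself), we can evaluate $c$ from the closed form above and verify the normalisation by trapezoidal integration on a grid:

```python
import numpy as np

# Parameters matching the surrounding example.
alpha, M_min, M_max = 2.35, 1.0, 100.0
c = (1.0 - alpha) / (M_max**(1.0 - alpha) - M_min**(1.0 - alpha))

# Trapezoidal check that c * M^(-alpha) integrates to one on [M_min, M_max].
M = np.linspace(M_min, M_max, 20001)
f = c * M**(-alpha)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(M))
print(round(integral, 4))   # ≈ 1.0
```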
### 1) EMCEE MCMC
```
def ln_likelihood(params, D, N, M_min, M_max):
    # Logarithmic likelihood function.
    # params ... array of fit parameters, here just alpha
    # D      ... sum over log(M_n)
    # N      ... number of data points
    # M_min  ... lower limit of mass interval
    # M_max  ... upper limit of mass interval
    alpha = params[0]  # extract alpha
    # Compute normalisation constant.
    c = (1.0 - alpha) / (M_max**(1.0 - alpha) - M_min**(1.0 - alpha))
    # Return log likelihood.
    return N * np.log(c) - alpha * D

def ln_prior(params):
    # Flat (improper) prior.
    return 0.0

def ln_posterior(params, D, N, M_min, M_max):
    lp = ln_prior(params)
    ll = ln_likelihood(params, D, N, M_min, M_max)
    return lp + ll

# Running the MCMC: one parameter (alpha), so ndim = 1.
nwalkers, ndim = 100, 1
# The array of initial guesses: small perturbations around alpha = 3.0.
initial = np.array([3.0])
p0 = [initial + 1e-3 * np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_posterior, args=(D, N, M_min, M_max))
pos, prob, state = sampler.run_mcmc(p0, 1000)
# Plot the trace
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
for j in range(nwalkers):
    ax.plot(sampler.chain[j, :, 0], alpha=0.1, color='k')
plt.ylim(2.2, 2.4)
plt.show()
print('The alpha is', pos[0, 0])
# # Reset the sampler and restart it at the current position,
# # which we saved above and called "pos":
# sampler.reset()
# pos, prob, state = sampler.run_mcmc(pos, 1000)
# corner.corner(sampler.flatchain)
# plt.show()
```
### 2) We will give an example using the Metropolis-Hastings MCMC.
```
# initial guess for alpha as a list.
guess = [3.0]
# Prepare storage of the MCMC chain as a list of lists.
A = [guess]
# define stepsize of MCMC.
stepsizes = [0.0005]  # list of stepsizes
accepted = 0.0

# Metropolis-Hastings with 10,000 iterations.
for n in range(10000):
    old_alpha = A[len(A) - 1]  # old parameter value as array
    old_loglik = ln_likelihood(old_alpha, D, N, M_min, M_max)
    # Suggest a new candidate from a Gaussian proposal distribution.
    new_alpha = np.zeros(len(old_alpha))
    for i in range(len(old_alpha)):
        # Use the stepsize provided for every dimension.
        new_alpha[i] = np.random.normal(old_alpha[i], stepsizes[i])
    new_loglik = ln_likelihood(new_alpha, D, N, M_min, M_max)
    # Accept the new candidate in Monte Carlo fashion.
    if new_loglik > old_loglik:
        A.append(new_alpha)
        accepted = accepted + 1.0  # monitor acceptance
    else:
        u = np.random.uniform(0.0, 1.0)
        if u < np.exp(new_loglik - old_loglik):
            A.append(new_alpha)
            accepted = accepted + 1.0  # monitor acceptance
        else:
            A.append(old_alpha)

print("Acceptance rate = " + str(accepted / 10000.0))

# Discard the first half of the MCMC chain and thin out the rest.
Clean = []
for n in range(5000, 10000):
    if n % 10 == 0:
        Clean.append(A[n][0])

# Print the Monte Carlo estimate of alpha.
print("Mean:  " + str(np.mean(Clean)))
print("Sigma: " + str(np.std(Clean)))

plt.figure(1)
plt.hist(Clean, 30, histtype='step', lw=2)
plt.xticks([2.346, 2.348, 2.35, 2.352, 2.354],
           [2.346, 2.348, 2.35, 2.352, 2.354])
plt.xlim(2.345, 2.355)
plt.xlabel(r'$\alpha$', fontsize=16)
plt.ylabel(r'$\cal L($Data$;\alpha)$', fontsize=16)
plt.show()
```
### 3) We will now perform the same procedure with Hamiltonian dynamics
```
def evaluateGradient(params, D, N, M_min, M_max, logMmin, logMmax):
    # Gradient of the log likelihood with respect to alpha.
    alpha = params[0]  # extract alpha
    grad = logMmin * M_min**(1.0 - alpha) - logMmax * M_max**(1.0 - alpha)
    grad = 1.0 + grad * (1.0 - alpha) / (M_max**(1.0 - alpha) - M_min**(1.0 - alpha))
    grad = -D - N * grad / (1.0 - alpha)
    return np.array(grad)

guess = np.array([3.0])  # array, so the leapfrog arithmetic below works elementwise
A = [guess]
# define stepsize of HMC.
stepsize = 0.00004
accepted = 0.0

# Hamiltonian Monte Carlo.
for n in range(50000):
    old_alpha = A[len(A) - 1]
    # Remember: energy = -loglik.
    old_energy = -ln_likelihood(old_alpha, D, N, M_min, M_max)
    old_grad = -evaluateGradient(old_alpha, D, N, M_min, M_max, log_M_min, log_M_max)
    new_alpha = copy.copy(old_alpha)  # deep copy of array
    new_grad = copy.copy(old_grad)    # deep copy of array
    # Suggest a new candidate using gradient + Hamiltonian dynamics:
    # draw a random momentum vector from a unit Gaussian.
    p = np.random.normal(0.0, 1.0)
    H = np.dot(p, p) / 2.0 + old_energy  # compute Hamiltonian
    # Do 5 leapfrog steps.
    for tau in range(5):
        # make a half step in p
        p = p - stepsize * new_grad / 2.0
        # make a full step in alpha
        new_alpha = new_alpha + stepsize * p
        # compute the gradient at the updated position
        new_grad = -evaluateGradient(new_alpha, D, N, M_min, M_max, log_M_min, log_M_max)
        # make a half step in p
        p = p - stepsize * new_grad / 2.0
    # Compute the new Hamiltonian. Remember: energy = -loglik.
    new_energy = -ln_likelihood(new_alpha, D, N, M_min, M_max)
    newH = np.dot(p, p) / 2.0 + new_energy
    dH = newH - H
    # Accept the new candidate in Monte Carlo fashion.
    if dH < 0.0:
        A.append(new_alpha)
        accepted = accepted + 1.0
    else:
        u = np.random.uniform(0.0, 1.0)
        if u < np.exp(-dH):
            A.append(new_alpha)
            accepted = accepted + 1.0
        else:
            A.append(old_alpha)

print("Acceptance rate = " + str(accepted / float(len(A))))

# Discard the first half of the MCMC chain and thin out the rest.
Clean = []
for n in range(len(A) // 2, len(A)):
    if n % 10 == 0:
        Clean.append(A[n][0])

# Print the Monte Carlo estimate of alpha.
print("Mean:  " + str(np.mean(Clean)))
print("Sigma: " + str(np.std(Clean)))

plt.figure(1)
plt.hist(Clean, 30, histtype='step', lw=2)
# plt.xlim(2.3, 2.358)
plt.xlabel(r'$\alpha$', fontsize=16)
plt.ylabel(r'$\cal L($Data$;\alpha)$', fontsize=16)
plt.show()
```
```
import scipy.io as sio
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy.sparse import csc_matrix
data = sio.loadmat('XwindowsDocData.mat')
xtrain = data['xtrain']; xtest = data['xtest']
ytrain = data['ytrain']; ytest = data['ytest']
vocab = data['vocab']
```
### Naive Bayes classifiers
- Naive Bayes classifies vectors of discrete-valued features, $\mathbf{x}\in\{1,...,K\}^D$, where $K$ is the number of values for each feature and $D$ is the number of features. A generative approach requires specifying the class-conditional distribution, $p(\mathbf{x}|y=c)$. The simplest assumption is that the features are conditionally independent given the class label, which lets us write the class-conditional density as a product of one-dimensional densities:
$$p(\mathbf{x}|y=c,\mathbf{\theta})= \prod^D_{j=1}p(x_j|y=c,\theta_{jc})$$
The resulting model is called a naive Bayes classifier (NBC); it has $O(CD)$ parameters for $C$ classes and $D$ features.
- In the case of **real-valued features**, the Gaussian distribution can be used. $p(\mathbf{x}|y=c,\theta)=\prod^D_{j=1}\mathcal{N}(x_j|\mu_{jc},\sigma^2_{jc})$, where $\mu_{jc}$ the mean of feature $j$ in objects of class $c$, and $\sigma^2_{jc}$ its variance.
- In the case of **binary features**, $x_j\in\{0,1\}$, the Bernoulli distribution $p(\mathbf{x}|y=c,\theta)=\prod^D_{j=1}Ber(x_j|\mu_{jc})$, where $\mu_{jc}$ the probability that feature $j$ occurs in class $c$. (**multivariate Bernoulli naive Bayes**)
- In the case of **categorical features**, $x_j\in\{1,...,K\}$, the multinoulli distribution is used, $p(\mathbf{x}|y=c,\theta)=\prod^D_{j=1}Cat(x_j|\mu_{jc})$, where $\mu_{jc}$ is a histogram over the $K$ possible values for $x_j$ in class $c$.
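For the real-valued case above, the log of the product is just a sum of per-feature Gaussian log densities; a toy sketch with invented parameters:

```python
import numpy as np

def log_gaussian_class_conditional(x, mu, sigma):
    # log p(x | y=c) = sum_j log N(x_j | mu_jc, sigma_jc^2)
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - 0.5 * ((x - mu) / sigma)**2)

# Made-up per-feature parameters for one class c (D = 2 features).
mu_c    = np.array([0.0, 1.0])
sigma_c = np.array([1.0, 0.5])
x = np.array([0.1, 0.9])
print(log_gaussian_class_conditional(x, mu_c, sigma_c))
```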
Train a naive Bayes classifier = computing the MLE or the MAP estimate for the parameters.
### MLE for NBC
The probability for a single data case is given by,
$$
p(\mathbf{x_i},y_i|\theta) = p(y_i|\pi)\prod_j p(x_{ij}|\theta_j) = \prod_c \pi_c^{\mathbb{I}(y_i=c)}\prod_j\prod_c p(x_{ij}|\theta_{jc})^{\mathbb{I}(y_i=c)}
$$
where $p(y_i|\pi)$ is the class prior and $p(x_{ij}|\theta_{jc})$ is the probability of feature $j$ in text document $i$ given class $c$.
\begin{align}
\log p(\mathcal{D}|\theta) &= \sum_i\log p(\mathbf{x}_i,y_i|\theta) = \sum_i\log\prod_c \pi_c^{\mathbb{I}(y_i=c)}\prod_j\prod_c p(x_{ij}|\theta_{jc})^{\mathbb{I}(y_i=c)} \\
&= \sum_i\sum^C_{c=1}\mathbb{I}(y_i=c)\log\pi_c+\sum_i\sum_{j=1}^D\sum^C_{c=1}\mathbb{I}(y_i=c)\log p(x_{ij}|\theta_{jc}) \\
\log p(\mathcal{D}|\theta) &= \sum^C_{c=1}N_c\log\pi_c + \sum_{j=1}^D\sum^C_{c=1}\sum_{i:y_i=c}\log p(x_{ij}|\theta_{jc})
\end{align}
where $N_c\triangleq\sum_i\mathbb{I}(y_i=c)$ is the number of examples in class c.
To enforce the constraints that $\sum_c\pi_c = 1$, a **Lagrange multiplier** is used. The **constrained objective function (Lagrangian)** is given by the **log likelihood** + the **constraint**:
$$\mathcal{l}(\theta,\lambda) = \sum^C_{c=1}N_c\log\pi_c + \sum_{j=1}^D\sum^C_{c=1}\sum_{i:y_i=c}\log p(x_{ij}|\theta_{jc})+\lambda(1-\sum_c\pi_c)$$
Taking derivatives with respect to $\lambda$ yields the original constraint $\sum_c\pi_c = 1$.
Taking derivatives with respect to $\pi_c$ yields
\begin{align}
\frac{\partial \mathcal{l}}{\partial \pi_c} &= \frac{N_c}{\pi_c}-\lambda =0 \Longrightarrow N_c = \lambda\pi_c \\
\Longrightarrow \sum_c N_c &= \lambda\sum_c\pi_c \Longrightarrow
\lambda = \sum_c N_c = N
\end{align}
Therefore, $$\hat{\pi}_c = \frac{N_c}{\lambda}=\frac{N_c}{N}$$
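A quick numerical sketch of this estimate, with made-up class labels:

```python
import numpy as np

y = np.array([1, 1, 2, 2, 2, 1, 2])       # invented class labels
classes, counts = np.unique(y, return_counts=True)
pi_hat = counts / len(y)                  # \hat{pi}_c = N_c / N
print(dict(zip(classes.tolist(), pi_hat.tolist())))
```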
### Classifying documents using bag of words
**Document classification** is the problem of classifying text documents into different categories. One simple approach is to represent each document as a binary vector, which records whether each word is present or not, so $x_{ij}=1$ iff (if and only if) word $j$ occurs in document $i$, otherwise $x_{ij}=0$.
Suppose all features are binary (sparse matrix), so $x_i|y = c \sim Ber(\theta_{jc})$ and $p(x_{ij}|\theta_{jc}) = Ber(x_{ij}|\theta_{jc}) = \theta_{jc}^{\mathbb{I}(x_{ij})}(1-\theta_{jc})^{\mathbb{I}(1-x_{ij})}$.
Class conditional density:
$$p(\mathbf{x}_i|y_i=c,\theta)=\prod^D_{j=1}Ber(x_{ij}|\theta_{jc})=\prod^D_{j=1}\theta_{jc}^{\mathbb{I}(x_{ij})}(1-\theta_{jc})^{\mathbb{I}(1-x_{ij})}$$
Maximising the log likelihood with respect to each Bernoulli parameter $\theta_{jc}$ is straightforward: each $\theta_{jc}$ lies in $[0,1]$ independently, so no Lagrange multiplier is needed here. Setting the derivative of the log likelihood to zero yields
$$\hat{\theta}_{jc}=\frac{N_{jc}}{N_c}$$
where $N_{jc}$ is the number of documents of class $c$ in which word $j$ occurs.
### Fitting a naive Bayes classifier to binary features
$$\hat{\pi}_c= \frac{N_c}{N}\text{ , }\hat{\theta}_{jc}=\frac{N_{jc}}{N_c}$$
```
def naiveBayesFit(xtrain, ytrain):
    pC = 1  # pseudocount for Laplace smoothing
    c = np.unique(ytrain)
    Ntrain, D = xtrain.shape
    theta = np.zeros((len(c), D))
    Nclass = []
    for i in c:
        ndx = np.where(ytrain == i)[0]
        Xtr = xtrain[ndx, :]
        Non = np.sum(Xtr == 1, axis=0)
        Noff = np.sum(Xtr == 0, axis=0)
        theta[i - 1, :] = (Non + pC) / (Non + Noff + 2 * pC)
        Nclass.append(len(ndx))
    classPrior = Nclass / np.sum(Nclass)
    return theta, classPrior

theta, classPrior = naiveBayesFit(xtrain, ytrain)
```
### Predicting with a naive bayes classifier for binary features
$$p(y=c|\mathbf{x},\mathcal{D})\propto p(y=c|\mathcal{D})\prod^D_{j=1}p(x_j|y=c,\mathcal{D})$$
The correct Bayesian procedure is to integrate out the unknown parameters,
$$p(y=c|\mathbf{x},\mathcal{D})\propto\bigg[\int Cat(y=c|\pi)p(\pi|\mathcal{D})d\pi\bigg]\prod^D_{j=1}\int Ber(x_j|y=c,\theta_{jc})p(\theta_{jc}|\mathcal{D})d\theta_{jc}$$
The posterior predictive density is given by,
\begin{align}
p(y=c|\mathbf{x},\mathcal{D})&\propto \bar{\pi}_c\prod^D_{j=1}\bar{\theta}_{jc}^{\mathbb{I}(x_j=1)}(1-\bar{\theta}_{jc})^{\mathbb{I}(x_j=0)} \\
\bar{\theta}_{jk} &= \frac{N_{jc}+\beta_1}{N_c+\beta_0+\beta_1} \\
\bar{\pi}_c &= \frac{N_c+\alpha_c}{N+\alpha_0}
\end{align}
To avoid numerical underflow, the log-sum-exp trick is also used.
\begin{align}
\log p(y=c|\mathbf{x}) &= b_c - \log\sum^C_{c'=1}\exp(b_{c'}) = \log \frac{p(\mathbf{x}|y=c)p(y=c)}{p(\mathbf{x})}\\
b_c &\triangleq \log p(\mathbf{x}|y=c)+\log p(y=c) \\
\log\sum_{c'} \exp(b_{c'}) &= \log\sum_{c'}p(y=c',\mathbf{x})=\log p(\mathbf{x})
\end{align}
In general,
$$\log\sum_c\exp(b_c) = \log\bigg[(\sum_c\exp(b_c-B))\exp B\bigg] = \log\bigg[(\sum_c\exp(b_c-B))\bigg] + B$$
\begin{align}
p_{ic} &= \exp(L_{ic}-\log\sum\exp L_{i,:}) \\
\hat{y}_i &= \arg\max_c p_{ic}
\end{align}
```
def naiveBayesPredict(theta, classPrior, xtest):
    Ntest = xtest.shape[0]
    C = theta.shape[0]
    logPrior = np.log(classPrior)
    logPost = np.zeros((Ntest, C))
    logT = np.log(theta)
    logTnot = np.log(1 - theta)
    xtestnot = csc_matrix((xtest.todense() == 0) * 1)
    xtesttmp = xtest.todense()
    xtestnottmp = xtestnot.todense()
    for i in range(1, C + 1):
        tmpT = np.tile(logT[i - 1, :], (Ntest, 1))
        tmpTnot = np.tile(logTnot[i - 1, :], (Ntest, 1))
        L1 = csc_matrix(np.multiply(tmpT, xtesttmp))
        L0 = csc_matrix(np.multiply(tmpTnot, xtestnottmp))
        # log posterior = log prior + log likelihood (up to a constant)
        logPost[:, i - 1] = np.asarray((L0 + L1).sum(axis=1)).squeeze() + logPrior[i - 1]
    yhat = np.argmax(logPost, axis=1)
    return yhat

def zeroOneLossFn(y, ypred):
    err = y != ypred
    return err

ypred_train = naiveBayesPredict(theta, classPrior, xtrain)
err_train = np.mean(zeroOneLossFn(ytrain.squeeze(), ypred_train + 1))
ypred_test = naiveBayesPredict(theta, classPrior, xtest)
err_test = np.mean(zeroOneLossFn(ytest.squeeze(), ypred_test + 1))
print('Misclassification rate on train: ' + str(err_train))
print('Misclassification rate on test:  ' + str(err_test))
plt.bar(range(theta[0, :].shape[0]), theta[0, :])
plt.bar(range(theta[1, :].shape[0]), theta[1, :])
```
### TO DO: Feature selection using mutual information
Since an NBC is fitting a joint distribution over potentially many features, it can suffer from overfitting. In addition, the run-time cost is $O(D)$, which may be too high for some applications.
One common approach to tackling both of these problems is to perform feature selection, to remove 'irrelevant' features that do not help much with the classification problem. The simplest approach to feature selection is to evaluate the relevance of each feature separately, and then take the top K, where K is chosen based on some tradeoff between accuracy and complexity. The approach is known as variable **ranking, filtering or screening**.
#### Mutual information
To measure relevance, we use the mutual information (MI) between feature $X_j$ and the class label $Y$:
$$I(X,Y) = \sum_{x_j}\sum_y p(x_j,y)\log\frac{p(x_j,y)}{p(x_j)p(y)}$$
If the features are binary, MI can be computed as follows,
$$I_j = \sum_c\bigg[\theta_{jc}\pi_c\log\frac{\theta_{jc}}{\theta_j}+(1-\theta_{jc})\pi_c\log\frac{1-\theta_{jc}}{1-\theta_j}\bigg]$$
where $\pi_c = p(y=c)$, $\theta_{jc}=p(x_j=1|y=c)$ and $\theta_j=p(x_j=1)=\sum_c\pi_c\theta_{jc}$.
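A hedged sketch of this formula for a single feature $j$, with invented values of $\pi_c$ and $\theta_{jc}$:

```python
import numpy as np

# Invented parameters: C = 2 classes, one binary feature j.
pi    = np.array([0.5, 0.5])    # pi_c = p(y=c)
theta = np.array([0.9, 0.1])    # theta_jc = p(x_j=1 | y=c)

theta_j = np.sum(pi * theta)    # p(x_j = 1)

# I_j per the formula above, summed over classes.
I_j = np.sum(theta * pi * np.log(theta / theta_j)
             + (1 - theta) * pi * np.log((1 - theta) / (1 - theta_j)))
print(I_j)   # positive: the feature is informative about the class
```

When $\theta_{jc}$ is identical across classes the feature carries no information and $I_j = 0$.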
You can read an overview of this Numerical Linear Algebra course in [this blog post](http://www.fast.ai/2017/07/17/num-lin-alg/). The course was originally taught in the [University of San Francisco MS in Analytics](https://www.usfca.edu/arts-sciences/graduate-programs/analytics) graduate program. Course lecture videos are [available on YouTube](https://www.youtube.com/playlist?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY) (note that the notebook numbers and video numbers do not line up, since some notebooks took longer than 1 video to cover).
You can ask questions about the course on [our fast.ai forums](http://forums.fast.ai/c/lin-alg).
# 0. Course Logistics
## Ask Questions
Let me know how things are going. This is particularly important since I'm new to MSAN, I don't know everything you've seen/haven't seen.
## Intro
**My background and linear algebra love**:
- **Swarthmore College**: linear algebra convinced me to be a math major! (minors in CS & linguistics) I thought linear algebra was beautiful, but theoretical
- **Duke University**: Math PhD. Took numerical linear algebra. Enjoyed the course, but not my focus
- **Research Triangle Institute**: first time using linear algebra in practice (healthcare economics, markov chains)
- **Quant**: first time working with lots of data, decided to become a data scientist
- **Uber**: data scientist
- **Hackbright**: taught software engineering. Overhauled ML and collaborative filtering lectures
- **fast.ai**: co-founded to make deep learning more accessible. Deep Learning involves a TON of linear algebra
## Teaching
**Teaching Approach**
I'll be using a *top-down* teaching method, which is different from how most math courses operate. Typically, in a *bottom-up* approach, you first learn all the separate components you will be using, and then you gradually build them up into more complex structures. The problems with this are that students often lose motivation, don't have a sense of the "big picture", and don't know what they'll need.
If you took the fast.ai deep learning course, that is what we used. You can hear more about my teaching philosophy [in this blog post](http://www.fast.ai/2016/10/08/teaching-philosophy/) or [in this talk](https://vimeo.com/214233053).
Harvard Professor David Perkins has a book, [Making Learning Whole](https://www.amazon.com/Making-Learning-Whole-Principles-Transform/dp/0470633719), in which he uses baseball as an analogy. We don't require kids to memorize all the rules of baseball and understand all the technical details before we let them play the game. Rather, they start playing with just a general sense of it, and then gradually learn more rules/details as time goes on.
All that to say, don't worry if you don't understand everything at first! You're not supposed to. We will start using some "black boxes" or matrix decompositions that haven't yet been explained, and then we'll dig into the lower level details later.
To start, focus on what things DO, not what they ARE.
People learn by:
1. **doing** (coding and building)
2. **explaining** what they've learned (by writing or helping others)
**Text Book**
The book [**Numerical Linear Algebra**](https://www.amazon.com/Numerical-Linear-Algebra-Lloyd-Trefethen/dp/0898713617) by Trefethen and Bau is recommended. The MSAN program has a few copies on hand.
A secondary book is [**Numerical Methods**](https://www.amazon.com/Numerical-Methods-Analysis-Implementation-Algorithms/dp/0691151229) by Greenbaum and Chartier.
## Basics
**Office hours**: 2:00-4:00 on Friday afternoons. Email me if you need to meet at other times.
My contact info: **rachel@fast.ai**
Class Slack: #numerical_lin_alg
Email me if you will need to miss class.
Jupyter Notebooks will be available on Github at: https://github.com/fastai/numerical-linear-algebra Please pull/download before class. **Some parts are removed for you to fill in as you follow along in class**. Be sure to let me know **THIS WEEK** if you are having any problems running the notebooks from your own computer. You may want to make a separate copy, because running Jupyter notebooks causes them to change, which can create github conflicts the next time you pull.
Check that you have MathJax running (which renders LaTeX, used for math equations) by running the following cell:
$$ e^{\theta i} = \cos(\theta) + i \sin(\theta)$$
check that you can import:
```
import numpy as np
import sklearn
```
**Grading Rubric**:
| Assignment | Percent |
|-------------------|:-------:|
| Attendance | 10% |
| Homework | 20% |
| Writing: proposal | 10% |
| Writing: draft | 15% |
| Writing: final | 15% |
| Final Exam | 30% |
**Honor Code**
No cheating or plagiarism is allowed; please see below for more details.
**On Laptops**
I ask you to be respectful of me and your classmates and to refrain from surfing the web or using social media (Facebook, Twitter, etc.) or messaging programs during class. Using instant messaging programs, email, etc. during class lectures or quizzes is absolutely forbidden.
## Syllabus
Topics Covered:
1\. Why are we here?
- Matrix and Tensor Products
- Matrix Decompositions
- Accuracy
- Memory use
- Speed
- Parallelization & Vectorization
2\. Topic Modeling with NMF and SVD
- Term Frequency-Inverse Document Frequency (TF-IDF)
- Singular Value Decomposition (SVD)
- Non-negative Matrix Factorization (NMF)
- Stochastic Gradient Descent (SGD)
- Intro to PyTorch
- Truncated SVD, Randomized SVD
3\. Background Removal with Robust PCA
- Robust PCA
- Randomized SVD
- LU factorization
4\. Compressed Sensing for CT scans with Robust Regression
- L1 regularization
5\. Predicting Health Outcomes with Linear Regression
- Linear regression
- Polynomial Features
- Speeding up with Numba
- Regularization and Noise
- Implementing linear regression 4 ways
6\. PageRank with Eigen Decompositions
- Power Method
- QR Algorithm
- Arnoldi Iteration
7\. QR Factorization
- Gram-Schmidt
- Householder
- Stability
## Writing Assignment
**Writing Assignment:** Writing about technical concepts is a hugely valuable skill. I want you to write a technical blog post related to numerical linear algebra. [A blog is like a resume, only better](http://www.fast.ai/2017/04/06/alternatives/). Technical writing is also important in creating documentation, sharing your work with co-workers, applying to speak at conferences, and practicing for interviews. (You don't actually have to publish it, although I hope you do, and please send me the link if you do.)
- [List of ideas here](Project_ideas.txt)
- Always cite sources, use quote marks around quotes. Do this even as you are first gathering sources and taking notes. If you plagiarize parts of someone else's work, you will fail.
- Can be done in a Jupyter Notebook (Jupyter Notebooks can be turned into blog posts) or a [Kaggle Kernel](https://www.kaggle.com/xenocide/content-based-anime-recommender)
For the proposal, write a brief paragraph about the problem/topic/experiment you plan to research/test and write about. You need to include **4 sources** that you plan to use: these can include Trefethen, other blog posts, papers, or books. Include a sentence about each source, stating what is in it.
Feel free to ask me if you are wondering if your topic idea is suitable!
### Excellent Technical Blogs
Examples of great technical blog posts:
- [Peter Norvig](http://nbviewer.jupyter.org/url/norvig.com/ipython/ProbabilityParadox.ipynb) (more [here](http://norvig.com/ipython/))
- [Stephen Merity](https://smerity.com/articles/2017/deepcoder_and_ai_hype.html)
- [Julia Evans](https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture) (more [here](https://jvns.ca/blog/2014/08/12/what-happens-if-you-write-a-tcp-stack-in-python/))
- [Julia Ferraioli](http://blog.juliaferraioli.com/2016/02/exploring-world-using-vision-twilio.html)
- [Edwin Chen](http://blog.echen.me/2014/10/07/moving-beyond-ctr-better-recommendations-through-human-evaluation/)
- [Slav Ivanov](https://blog.slavv.com/picking-an-optimizer-for-style-transfer-86e7b8cba84b)
- [Brad Kenstler](https://hackernoon.com/non-artistic-style-transfer-or-how-to-draw-kanye-using-captain-picards-face-c4a50256b814)
- find [more on twitter](https://twitter.com/math_rachel)
## Deadlines
| Assignment | Dates |
|-------------------|:--------:|
| Homeworks | TBA |
| Writing: proposal | 5/30 |
| Writing: draft | 6/15 |
| Writing: final | 6/27 |
| Final Exam | 6/29 |
## Linear Algebra
We will review some linear algebra in class. However, if you find there are concepts you feel rusty on, you may want to review on your own. Here are some resources:
- [3Blue1Brown Essence of Linear Algebra](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) videos about *geometric intuition* (fantastic! gorgeous!)
- Lectures 1-6 of Trefethen
- [Immersive linear algebra](http://immersivemath.com/ila/) free online textbook with interactive graphics
- [Chapter 2](http://www.deeplearningbook.org/contents/linear_algebra.html) of Ian Goodfellow's Deep Learning Book
## USF Policies
**Academic Integrity**
USF upholds standards of honesty and integrity for all members of the academic community. All students are expected to know and adhere to the University's Honor Code. You can find the full text of the [code online](http://www.usfca.edu/academic_integrity). The policy covers:
- Plagiarism: intentionally or unintentionally representing the words or ideas of another person as your own; failure to properly cite references; manufacturing references.
- Working with another person when independent work is required.
- Submission of the same paper in more than one course without the specific permission of each instructor.
- Submitting a paper written (entirely or even a small part) by another person or obtained from the internet.
- Plagiarism is plagiarism: it does not matter if the source being copied is on the Internet, from a book or textbook, or from quizzes or problem sets written up by other students.
- The penalties for violation of the policy may include a failing grade on the assignment, a failing grade in the course, and/or a referral to the Academic Integrity Committee.
**Students with Disabilities**
If you are a student with a disability or disabling condition, or if you think you may have a disability, please contact USF Student Disability Services (SDS) at 415 422-2613 within the first week of class, or immediately upon onset of disability, to speak with a disability specialist. If you are determined eligible for reasonable accommodations, please meet with your disability specialist so they can arrange to have your accommodation letter sent to me, and we will discuss your needs for this course. For more information, please visit [this website]( http://www.usfca.edu/sds) or call (415) 422-2613.
**Behavioral Expectations**
All students are expected to behave in accordance with the [Student Conduct Code and other University policies](https://myusf.usfca.edu/fogcutter). Open discussion and disagreement is encouraged when done respectfully and in the spirit of academic discourse. There are also a variety of behaviors that, while not against a specific University policy, may create disruption in this course. Students whose behavior is disruptive or who fail to comply with the instructor may be dismissed from the class for the remainder of the class period and may need to meet with the instructor or Dean prior to returning to the next class period. If necessary, referrals may also be made to the Student Conduct process for violations of the Student Conduct Code.
**Counseling and Psychological Services**
Our diverse staff offers brief individual, couple, and group counseling to student members of our community. CAPS services are confidential and free of charge. Call 415-422-6352 for an initial consultation appointment. Having a crisis at 3 AM? We are still here for you. Telephone consultation through CAPS After Hours is available between the hours of 5:00 PM to 8:30 AM; call the above number and press 2.
**Confidentiality, Mandatory Reporting, and Sexual Assault**
As an instructor, one of my responsibilities is to help create a safe learning environment on our campus. I also have a mandatory reporting responsibility related to my role as a faculty member. I am required to share information regarding sexual misconduct or information about a crime that may have occurred on USF's campus with the University. Here are other resources:
- To report any sexual misconduct, students may visit Anna Bartkowski (UC 5th floor) or see many other options by visiting [this website](https://myusf.usfca.edu/title-IX)
- Students may speak to someone confidentially, or report a sexual assault confidentially by contacting Counseling and Psychological Services at 415-422-6352
- To find out more about reporting a sexual assault at USF, visit [USF’s Callisto website](https://usfca.callistocampus.org/)
- For an off-campus resource, contact [San Francisco Women Against Rape](http://www.sfwar.org/about.html) 415-647-7273
<div style="width:1000px">
<div style="float:right; width:98px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/src/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>XArray Introduction</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250px"><img src="http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png" alt="XArray Logo" style="height: 250px;"></div>
### Questions
1. What is XArray?
2. How does XArray fit in with Numpy and Pandas?
### Objectives
1. Create a `DataArray`.
2. Open netCDF data using XArray
3. Subset the data.
## XArray
XArray expands on the capabilities of NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM).
### `DataArray`
The `DataArray` is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:
1. Coordinate names and values are stored with the data, making slicing and indexing much more powerful
2. It has a built-in container for attributes
```
# Convention for import to get shortened namespace
import numpy as np
import xarray as xr
# Create some sample "temperature" data
data = 283 + 5 * np.random.randn(5, 3, 4)
data
```
Here we create a basic `DataArray` by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
```
temp = xr.DataArray(data)
temp
```
We can also pass in our own dimension names:
```
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
```
This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the `DataArray`.
```
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times
# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
```
When we create the `DataArray` instance, we pass in the arrays we just created:
```
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
```
...and we can also set some attribute metadata:
```
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
```
Notice what happens if we perform a mathematical operation with the `DataArray`: the coordinate values persist, but the attributes are lost. This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.
```
# For example, convert Kelvin to Celsius
temp - 273.15
```
### Selection
We can use the `.sel` method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).
```
temp.sel(time='2018-01-02')
```
`.sel` has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:
```
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
```
<div class="alert alert-success">
<b>EXERCISE</b>:
.interp() works similarly to .sel(). Using .interp(), get an interpolated time series "forecast" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for interp <a href="http://xarray.pydata.org/en/stable/interpolation.html">here</a>).
</div>
```
# YOUR CODE GOES HERE
```
<div class="alert alert-info">
<b>SOLUTION</b>
</div>
```
# %load solutions/interp_solution.py
```
### Slicing with Selection
```
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
```
### `.loc`
All of these operations can also be done within square brackets on the `.loc` attribute of the `DataArray`. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.
```
# As done above
temp.loc['2018-01-02']
temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]
```
This, however, does not work, because the slices are given in the wrong dimension order:
```python
temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']
```
## Opening netCDF data
With its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).
```
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../../data/NARR_19930313_0000.nc')
ds
```
This returns a `Dataset` object, a container holding one or more `DataArray`s, which can optionally share coordinates. We can then pull out individual fields:
```
ds.isobaric1
```
or
```
ds['isobaric1']
```
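A `Dataset` can also be assembled by hand from `DataArray`s that share dimensions. A minimal sketch, where the variable names `temperature` and `relative_humidity` are made up for illustration:

```python
import numpy as np
import xarray as xr

# two fields defined on the same grid
temperature = xr.DataArray(283 + 5 * np.random.randn(5, 3, 4),
                           dims=['time', 'lat', 'lon'])
relative_humidity = xr.DataArray(np.random.rand(5, 3, 4),
                                 dims=['time', 'lat', 'lon'])

# the Dataset maps variable names to DataArrays; dimensions
# (and any coordinates attached to them) are shared
ds_manual = xr.Dataset({'temperature': temperature,
                        'relative_humidity': relative_humidity})
```

Selecting along a shared dimension of `ds_manual` then subsets both variables at once, which is the behavior the rest of this section relies on.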
`Dataset`s also support much of the same subsetting operations as `DataArray`, but will perform the operation on all data:
```
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000
ds_1000.Temperature_isobaric
```
### Aggregation operations
Not only can you use the named dimensions for manual slicing and indexing of data, you can also use them to control aggregation operations, like `std` (standard deviation):
```
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
```
<div class="alert alert-success">
<b>EXERCISE</b>:
Using the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:
<ul>
<li>x: -182km to 424km</li>
<li>y: -1450km to -990km</li>
</ul>
(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)
</div>
```
# YOUR CODE GOES HERE
```
<div class="alert alert-info">
<b>SOLUTION</b>
</div>
```
# %load solutions/mean_profile.py
```
## Resources
There is much more in the XArray library. To learn more, visit the [XArray Documentation](http://xarray.pydata.org/en/stable/index.html)
**Note**: Click on "*Kernel*" > "*Restart Kernel and Run All*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *after* finishing the exercises to ensure that your solution runs top to bottom *without* any errors. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/07_sequences/02_exercises.ipynb).
# Chapter 7: Sequential Data (Coding Exercises)
The exercises below assume that you have read the [second part <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/07_sequences/01_content.ipynb) of Chapter 7.
The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell gives you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas.
## Working with Lists
**Q1**: Write a function `nested_sum()` that takes a `list` object as its argument, which contains other `list` objects with numbers, and adds up the numbers! Use `nested_numbers` below to test your function!
Hint: You need at least one `for`-loop.
```
nested_numbers = [[1, 2, 3], [4], [5], [6, 7], [8], [9]]
def nested_sum(list_of_lists):
"""Add up numbers in nested lists.
Args:
list_of_lists (list): A list containing the lists with the numbers
Returns:
sum (int or float)
"""
...
...
...
return ...
nested_sum(nested_numbers)
```
**Q2**: Generalize `nested_sum()` into a function `mixed_sum()` that can process a "mixed" `list` object, which contains numbers and other `list` objects with numbers! Use `mixed_numbers` below for testing!
Hints: Use the built-in [isinstance() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#isinstance) function to check how an element is to be processed.
```
mixed_numbers = [[1, 2, 3], 4, 5, [6, 7], 8, [9]]
import collections.abc as abc
def mixed_sum(list_of_lists_or_numbers):
"""Add up numbers in nested lists.
Args:
list_of_lists_or_numbers (list): A list containing both numbers and
lists with numbers
Returns:
sum (int or float)
"""
...
...
...
...
...
...
return ...
mixed_sum(mixed_numbers)
```
**Q3.1**: Write a function `cum_sum()` that takes a `list` object with numbers as its argument and returns a *new* `list` object with the **cumulative sums** of these numbers! So, for `sum_up` below, `[1, 2, 3, 4, 5]` should return `[1, 3, 6, 10, 15]`.
Hint: The idea behind it is similar to the [cumulative distribution function <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Cumulative_distribution_function) from statistics.
```
sum_up = [1, 2, 3, 4, 5]
def cum_sum(numbers):
"""Create the cumulative sums for some numbers.
Args:
numbers (list): A list with numbers for that the cumulative sums
are calculated
Returns:
cum_sums (list): A list with all the cumulative sums
"""
...
...
...
...
...
return ...
cum_sum(sum_up)
```
**Q3.2**: We should always make sure that our functions also work in corner cases. What happens if your implementation of `cum_sum()` is called with an empty list `[]`? Make sure it handles that case *without* crashing! What would be a good return value in this corner case?
Hint: It is possible to write this without any extra input validation.
```
cum_sum([])
```
< your answer >
# Analytics with Pandas
## [Download the exercise zip](../_static/generated/pandas.zip)
[Browse the files online](https://github.com/DavidLeoni/softpython-it/tree/master/pandas)
## 1. Introduction
Python offers powerful tools for data analysis - one of the main ones is [Pandas](https://pandas.pydata.org/), which provides fast and flexible data structures, especially for real-time data analysis. Pandas reuses existing libraries we have already seen, such as Numpy:
![pandas diagram](img/pandas.jpg)
In this tutorial we will see:
* data analysis with Pandas
* plotting with MatPlotLib
* examples with the AstroPi dataset
* exercises with the meteotrentino dataset
* a map of the Italian regions with GeoPandas
### What to do
- unzip the archive into a folder; you should end up with something like this:
```
pandas
pandas.ipynb
pandas-sol.ipynb
jupman.py
```
<div class="alert alert-warning">
**WARNING**: To be displayed correctly, the notebook file MUST be inside the unzipped folder.
</div>
- open Jupyter Notebook from that folder. Two things should open, first a console and then a browser. The browser should show a list of files: navigate the list and open the notebook `pandas.ipynb`
- Go on reading the exercise file; every now and then you will find the label **EXERCISE**, which will ask you to write Python commands in the following cells.
Keyboard shortcuts:
* To execute the Python code inside a Jupyter cell, press `Control+Enter`
* To execute the Python code inside a Jupyter cell AND select the next cell, press `Shift+Enter`
* To execute the Python code inside a Jupyter cell AND create a new cell right after it, press `Alt+Enter`
* If the notebook ever seems stuck, try selecting `Kernel -> Restart`
## Check the installation
First let's see whether you already have pandas installed on your system; try running this cell with Ctrl-Enter:
```
import pandas as pd
```
If you did not see any error message, you can skip the installation; otherwise do as follows:
* If you have Anaconda - open the Anaconda Prompt and type:
`conda install pandas`
* Without Anaconda (`--user` installs into your own home directory):
`python3 -m pip install --user pandas`
## 2. Analyzing Astro Pi data
Let's try analyzing the data recorded by the Raspberry Pi on board the International Space Station, downloaded from here:
[https://projects.raspberrypi.org/en/projects/astro-pi-flight-data-analysis](https://projects.raspberrypi.org/en/projects/astro-pi-flight-data-analysis)
The site provides a detailed description of the data collected by the sensors during February 2016 (one record every 10 seconds).
![AstroPi data](img/astropi.jpg)
**Importing the file**
The ```read_csv``` method imports the data from a CSV file and stores it in a DataFrame structure.
In this exercise we will use the file [Columbus_Ed_astro_pi_datalog.csv](Columbus_Ed_astro_pi_datalog.csv)
```
import pandas as pd   # import pandas, and rename it 'pd' for convenience
import numpy as np    # import numpy, and rename it 'np' for convenience
# remember the encoding!
df = pd.read_csv('Columbus_Ed_astro_pi_datalog.csv', encoding='UTF-8')
df.info()
```
We can quickly see the rows and columns of the dataframe with the `shape` attribute:
**NOTE**: `shape` is not followed by round brackets!
```
df.shape
```
The `describe` method gives you a whole series of summary statistics at a glance:
* the row count
* the mean
* [the standard deviation](https://it.wikipedia.org/wiki/Scarto_quadratico_medio)
* [the quartiles](https://it.wikipedia.org/wiki/Quantile)
* minimum and maximum
```
df.describe()
```
**QUESTION**: Is any field missing from the table produced by describe? Why wasn't it included?
To restrict `describe` to a single column such as `humidity`, you can write:
```
df['humidity'].describe()
```
Even more conveniently, you can use the dot notation:
```
df.humidity.describe()
```
<div class="alert alert-warning">
**WATCH OUT for spaces!**
If the field name contains spaces (e.g. `'blender rotations'`), you could **not** use the dot notation and would be forced to use the square-bracket notation seen above (e.g.: `df['blender rotations'].describe()`)
</div>
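A tiny sketch of the difference, using a hypothetical column name containing a space:

```python
import pandas as pd

frame = pd.DataFrame({'blender rotations': [10, 20, 30]})

# frame.blender rotations would be a syntax error:
# dot notation cannot express names containing spaces,
# so square brackets are required
print(frame['blender rotations'].describe())
```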
The `head()` method returns the first rows:
```
df.head()
```
The `tail()` method returns the last rows:
```
df.tail()
```
The `columns` property returns the column headers:
```
df.columns
```
**Note**: As shown above, the type of the returned object is not a list, but a special container defined by pandas:
```
type(df.columns)
```
Nevertheless, we can access the elements of this container using indices inside square brackets:
```
df.columns[0]
df.columns[1]
```
The `corr` method computes the correlation between the columns of the DataFrame, with values from -1.0 to +1.0:
```
df.corr()
```
### 2.1 Exercise - weather info
✪ a) Create a new dataframe ```meteo``` by importing the data from the file meteo.csv, which contains the weather data of Trento for November 2017 (source: https://www.meteotrentino.it). **IMPORTANT**: assign the dataframe to a variable called `meteo` (so we avoid confusion with the AstroPi dataframe)
b) Display the information about this Dataframe.
```
# write here - create the dataframe
meteo = pd.read_csv('meteo.csv', encoding='UTF-8')
print("COLUMNS:")
print()
print(meteo.columns)
print()
print("INFO:")
print(meteo.info())
print()
print("HEAD():")
meteo.head()
```
## 3. MatPlotLib revisited
We already met MatplotLib in the part [about visualization](http://it.softpython.org/visualization/visualization-sol.html), and today we will use [Matplotlib](http://matplotlib.org) to draw charts.
### 3.1 An example
Let's revisit an example using the _Matlab-style_ approach. We will plot a straight line by passing two lists of coordinates, one for the x and one for the y:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
x = [1,2,3,4]
y = [2,4,6,8]
plt.plot(x, y)  # we can pass lists for the x and y directly
plt.title('Some numbers')
plt.show()
```
We can also create series with numpy. Let's try to make a parabola:
```
x = np.arange(0.,5.,0.1)
# '**' is the exponentiation operator in Python, NOT '^'
y = x**2
```
Let's use the `type` function to find out what kind of data x and y are:
```
type(x)
type(y)
```
So they are NumPy arrays.
If we want the units on the x axis to have the same size as those on the y axis, we can use the [gca](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.gca.html?highlight=matplotlib%20pyplot%20gca#matplotlib.pyplot.gca) function.
To set the x and y limits, we can use `xlim` and `ylim`:
```
plt.title('The parabola')
plt.plot(x,y);
plt.xlim([0, 5])    # set the limits of the x axis
plt.ylim([0,10])    # set the limits of the y axis
plt.title('The parabola')
plt.gca().set_aspect('equal')   # same unit size on both axes
plt.plot(x,y);
```
### 3.2 Matplotlib charts from pandas structures
You can obtain charts directly from pandas structures, again using the _matlab style_. Here is a simple example; for more complex cases we refer to the documentation of [DataFrame.plot](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html).
With a very large amount of data, it can be useful to get a qualitative idea of the data by plotting it:
```
df.humidity.plot(label="Humidity", legend=True)
# with secondary_y=True the numbers for the y axis
# of the second chart appear on the right
df.pressure.plot(secondary_y=True, label="Pressure", legend=True);
```
Let's try putting the pressure values on the horizontal axis, and seeing which humidity values on the vertical axis correspond to a given pressure:
```
plt.plot(df['pressure'], df['humidity'])
```
## 4. Operations on rows
When dealing with the rows of a dataset, we typically want to index, filter and sort them.
### 4.1 Indexing with integers
Here we show the simplest kind of indexing, by row number.
To get the i-th series we use the `iloc[i]` method (here we reuse the AstroPi dataset):
```
df.iloc[6]
```
It is possible to select a dataframe of contiguous positions using _slicing_, as we already did for [strings](https://it.softpython.org/strings/strings2-sol.html#Slice) and [lists](https://it.softpython.org/lists/lists2-sol.html#Slice).
Here, for example, we select the rows from 5 _included_ to 7 _excluded_:
```
df.iloc[5:7]
```
By filtering the rows we can 'zoom' into the dataset, selecting for example in the new dataframe `df2` the rows between the 12500th (included) and the 15000th (excluded):
```
df2=df.iloc[12500:15000]
plt.plot(df2['pressure'], df2['humidity'])
df2.humidity.plot(label="Humidity", legend=True)
df2.pressure.plot(secondary_y=True, label="Pressure", legend=True)
```
### 4.2 Filtering
It is possible to filter the data according to a condition, which can be expressed by specifying a column and a comparison operator, for example:
```
df.ROW_ID >= 6
```
We see that the result is a series of `True` or `False` values, depending on whether the value of ROW_ID is greater than or equal to 6. What is the type of this result?
```
type(df.ROW_ID >= 6)
```
Similarly, `(df.ROW_ID >= 6) & (df.ROW_ID <= 10)` is a series of `True` or `False` values, `True` where ROW_ID is both greater than or equal to 6 and less than or equal to 10.
```
type((df.ROW_ID >= 6) & (df.ROW_ID <= 10))
```
If we want the complete rows of the dataframe that satisfy the condition, we can write:
<div class="alert alert-warning">
**IMPORTANT**: we use `df` _outside_ the expression `df[ ]`, opening and closing with square brackets, to tell Python we want to filter the dataframe `df`, and we use `df` again _inside_ the square brackets to indicate _which columns_ and _which rows_ we want to filter on
</div>
```
df[ (df.ROW_ID >= 6) & (df.ROW_ID <= 10) ]
```
So if we look for the record where the pressure is maximal, we use the `values` property of the series on which we compute the maximum:
```
df[ (df.pressure == df.pressure.values.max()) ]
```
### 4.3 Sorting
To obtain a NEW dataframe sorted by one or more columns, we can use the `sort_values` method:
```
df.sort_values('pressure',ascending=False).head()
```
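`sort_values` can also sort by several columns at once, passing a list of names and a matching list of `ascending` flags; a small sketch on made-up data:

```python
import pandas as pd

# hypothetical readings, just to show multi-column sorting
toy = pd.DataFrame({'pressure': [1010, 1008, 1010],
                    'humidity': [45, 50, 40]})

# sort by pressure descending, then by humidity ascending to break ties
ordered = toy.sort_values(['pressure', 'humidity'], ascending=[False, True])
print(ordered.index.tolist())  # [2, 0, 1]
```

The original index labels are preserved, which makes it easy to see how the rows were rearranged.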
### 4.4 Exercise - weather statistics
✪ Analyze the data of the `meteo` dataframe to find:
* the average, minimum and maximum pressure values
* the average temperature
* the dates of the rainy days
```
# write here
print("Average pressure : %s" % meteo.Pressione.values.mean())
print("Minimum pressure : %s" % meteo.Pressione.values.min())
print("Maximum pressure : %s" % meteo.Pressione.values.max())
print("Average temperature : %s" % meteo.Temp.values.mean())
meteo[(meteo.Pioggia > 0)]
```
## 5. Object values and strings
In general, when we want to manipulate objects of a known type, say strings of type `str`, we can write `.str` after a series and then treat the result as if it were a single string, using any operator (e.g. slicing) or method allowed by that class, plus others provided by pandas.
For text in particular there are several ways to manipulate it; here we show a couple, for more details see [the pandas documentation](https://pandas.pydata.org/pandas-docs/stable/text.html)
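As a quick sketch of the `.str` accessor (the series below is made up for illustration):

```python
import pandas as pd

# a toy series of timestamps stored as strings
s = pd.Series(['2016-02-27 01:00', '2016-02-28 09:30'])

print(s.str.upper()[0])             # element-wise string method
print(s.str[:10].tolist())          # element-wise slicing, like on a single str
print(s.str.replace('-', '/')[1])   # element-wise replacement
```

Each `.str` operation returns a new series, leaving the original one untouched.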
### 5.1 Filtering by textual values
When we want to filter by textual values, we can use `.str.contains`; here, for example, we select all the readings from the last days of February (whose timestamp therefore contains `2016-02-2`):
```
df[ df['time_stamp'].str.contains('2016-02-2') ]
```
### 5.2 Extracting strings
To extract only the day from the `time_stamp` column, we can use `str` with the slicing operator in square brackets:
```
df['time_stamp'].str[8:10]
```
## 6. Column operations
Let's now see how to select, add and transform columns.
### 6.1 - Selecting columns
If we want a subset of columns, we can pass the names in a list like this:
**NOTE**: inside the outer brackets there is a plain list of strings, without `df`!
```
df[ ['temp_h', 'temp_p', 'time_stamp'] ]
```
As always, selecting columns does not change the original dataframe:
```
df.head()
```
### 6.2 - Adding columns
It is possible to obtain new columns by computing values from other columns in a very natural way. For example, here we derive the new column `mag_tot`, i.e. the squared magnitude of the magnetic field detected by the space station, from `mag_x`, `mag_y`, and `mag_z`, and then we plot it:
```
df['mag_tot'] = df['mag_x']**2 + df['mag_y']**2 + df['mag_z']**2
df.mag_tot.plot()
```
Let's find where the magnetic field was at its maximum:
```
df['time_stamp'][(df.mag_tot == df.mag_tot.values.max())]
```
By entering the value we found on the website [isstracker.com/historical](http://www.isstracker.com/historical), we can find the positions where the magnetic field is strongest.
#### Writing only to some rows
The `loc` property allows us to filter rows by a condition and select a column, which can also be a new one. In this case, for the rows where the CPU temperature is excessive, we write the value `True` in the cells of the column with header `'Too hot'`:
```
df.loc[(df.temp_cpu > 31.68),'Too hot'] = True
```
Let's look at the resulting table (scroll to the end to see the new column). Note how the values of the rows we did not filter are shown as `NaN`, which literally means [not a number](https://it.softpython.org/matrices-numpy/matrices-numpy1-sol.html#NaN-e-infinit%C3%A0):
```
df.head()
```
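If we prefer not to leave `NaN` in the unfiltered rows, we can replace it afterwards with `fillna`; a minimal sketch on made-up data, using the same threshold as above:

```python
import pandas as pd

toy = pd.DataFrame({'temp_cpu': [30.0, 32.5, 31.9]})
toy.loc[toy.temp_cpu > 31.68, 'Too hot'] = True

# rows that did not match the filter hold NaN; fillna replaces them
toy['Too hot'] = toy['Too hot'].fillna(False)
print(toy['Too hot'].tolist())  # [False, True, True]
```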
Pandas is a very flexible library, and provides several ways to achieve the same goals. For example, we can perform the same operation as above with the `np.where` command, as shown below. Here we add a column which tells us whether the pressure is above or below the average.
```
pressione_media = df.pressure.values.mean()
df['check_p'] = np.where(df.pressure <= pressione_media, 'sotto', 'sopra')
```
### 6.2.1 Exercise - weather temperature in Fahrenheit
In the `meteo` dataframe, create a column `Temp (Fahrenheit)` with the temperature measured in degrees Fahrenheit.
The conversion formula from degrees Celsius (C) is:
$Fahrenheit = \frac{9}{5}C + 32$
```
# write here
# SOLUTION
print()
print(" ************** SOLUTION OUTPUT **************")
meteo['Temp (Fahrenheit)'] = meteo['Temp']* 9/5 + 32
meteo.head()
```
### 6.2.2 Exercise - Pressure vs Temperature
According to [Gay-Lussac's law](https://en.wikipedia.org/wiki/Gay-Lussac%27s_law), in a closed environment pressure should be directly proportional to temperature:
$\frac{P}{T} = k$
Does it hold for the `meteo` dataset? Try to find out by computing the formula directly and comparing with the results of the `corr()` method.
```
# SOLUTION
# as expected, in an open environment there is not much linear correlation
#meteo.corr()
#meteo['Pressione'] / meteo['Temp']
```
### 6.3 Transforming columns
Suppose we want to convert all the values of the temperature column from floats to integers.
We know that to convert a float into an integer there is the Python built-in function `int`:
```
int(23.7)
```
We would like to apply this function to all the elements of the `humidity` column.
To do so, we can call the `transform` method and pass it the `int` function _as a parameter_.
**NOTE**: there are no round parentheses after `int`!
```
df['humidity'].transform(int)
```
To clarify what _passing a function_ means, let's look at two other _completely equivalent_ ways we could have used to pass the function:
**Defining a function**: We could have defined a function `mia_f` like this one (note that the function MUST RETURN something!):
```
def mia_f(x):
    return int(x)

df['humidity'].transform(mia_f)
```
**lambda function**: We could have used a lambda function, i.e. a function without a name which is defined on a single line:
```
df['humidity'].transform( lambda x: int(x) )
```
Regardless of the way we choose to pass the function, the `transform` method does not change the original dataframe:
```
df.info()
```
If we want to add a new column, say `humidity_int`, we must explicitly assign the result of `transform` to the new series:
```
df['humidity_int'] = df['humidity'].transform( lambda x: int(x) )
```
Note how pandas automatically infers the type `int64` for the newly created column:
```
df.info()
```
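As a side note, for plain type conversions pandas also provides the `astype` method, which here should give the same result as `transform(int)` (both truncate positive floats); a quick sketch on made-up data:

```python
import pandas as pd

s = pd.Series([23.7, 54.2, 51.9])

via_transform = s.transform(int)  # applies int element by element
via_astype = s.astype(int)        # casts the whole series at once

print(via_astype.tolist())  # [23, 54, 51]
```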
## 7. Grouping
**References**:
* [PythonDataScienceHandbook: Aggregation and Grouping](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html)
To group objects and compute statistics on each group we can use the `groupby` method. Suppose we want to count how many `humidity` readings were taken for each integer humidity value `humidity_int` (here we use the pandas `groupby` method, but for histograms you could also use [numpy](https://stackoverflow.com/a/13130357)).
After the `groupby` method we first indicate the column to group by (`humidity_int`), then the column on which to compute the statistic (`humidity`), and finally the statistic to compute, in this case `.count()` (other common ones are `sum()`, `min()`, `max()` and the mean `mean()`):
```
df.groupby(['humidity_int'])['humidity'].count()
```
Note that we obtained only 19 rows. To get a series that fills the whole table, assigning to each row the count of its own group, we can use `transform` like this:
```
df.groupby(['humidity_int'])['humidity'].transform('count')
```
As usual, `groupby` does not modify the dataframe; if we want the result saved in the dataframe we must assign it to a new column:
```
df['Conteggio umidità'] = df.groupby(['humidity_int'])['humidity'].transform('count')
df
```
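`groupby` can also compute several statistics at once with `agg`; a minimal sketch on made-up data:

```python
import pandas as pd

toy = pd.DataFrame({'humidity_int': [44, 44, 45],
                    'humidity': [44.2, 44.8, 45.1]})

# one row per group, one column per requested statistic
stats = toy.groupby('humidity_int')['humidity'].agg(['count', 'mean'])
print(stats.loc[44, 'count'])  # 2
print(stats.loc[44, 'mean'])   # 44.5
```

Unlike `transform`, `agg` returns one row per group, so the result has as many rows as there are distinct values of `humidity_int`.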
## 8. Weather exercises
### 8.1 Weather plot
✪ Plot the temperature trend of the _meteo_ dataframe:
```
# write here
# SOLUTION
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
meteo.Temp.plot()
```
### 8.2 Weather pressure and rain
✪ In the same plot as above, show the pressure and the amount of rain.
```
# write here
# SOLUTION
meteo.Temp.plot(label="Temperature", legend=True)
meteo.Pioggia.plot(label="Rain", legend=True)
meteo.Pressione.plot(secondary_y=True, label="Pressure", legend=True);
```
### 8.3 Average daily weather temperature
✪✪✪ Compute the average daily temperature for each day, and show it in a plot, so as to obtain a pair of new columns like these:
```
Giorno Temp_media_giorno
01/11/2017 7.983333
01/11/2017 7.983333
01/11/2017 7.983333
. .
. .
02/11/2017 7.384375
02/11/2017 7.384375
02/11/2017 7.384375
. .
. .
```
**HINT 1**: add the `'Giorno'` column by extracting only the day from the date. To do so, use the `.str` accessor applied to the whole column.
**HINT 2**: There are several ways to solve the problem:
- the most efficient and elegant is with the `groupby` operator, see [Pandas transform - more than meets the eye](https://towardsdatascience.com/pandas-transform-more-than-meets-the-eye-928542b40b56)
- As an alternative, you could use a `for` loop to iterate over the days. Typically using a `for` loop is not a good idea with pandas, because with large datasets updates can take very long. However, since this dataset is small enough, you can try using a `for` loop over the days and you should get the results in a reasonable time.
```
# write here
# SOLUTION
meteo = pd.read_csv('meteo.csv', encoding='UTF-8')
meteo['Giorno'] = meteo['Data'].str[0:10]
#print("WITH DAY")
#print(meteo.head())
for giorno in meteo['Giorno']:
    temp_media_giorno = meteo[(meteo.Giorno == giorno)].Temp.values.mean()
    meteo.loc[(meteo.Giorno == giorno), 'Temp_media_giorno'] = temp_media_giorno
print()
print(' ******* SOLUTION 1 OUTPUT - recomputes the mean for every row - slow!')
print()
print("WITH AVERAGE TEMPERATURE")
print(meteo.head())
meteo.Temp.plot(label="Temperature", legend=True)
meteo.Temp_media_giorno.plot(label="Average temperature", legend=True)

# SOLUTION
meteo = pd.read_csv('meteo.csv', encoding='UTF-8')
meteo['Giorno'] = meteo['Data'].str[0:10]
#print()
#print("WITH DAY")
#print(meteo.head())
diz_medie = {}
for giorno in meteo['Giorno']:
    if giorno not in diz_medie:
        diz_medie[giorno] = meteo[ meteo['Giorno'] == giorno ]['Temp'].mean()
for giorno in meteo['Giorno']:
    meteo.loc[(meteo.Giorno == giorno), 'Temp_media_giorno'] = diz_medie[giorno]
print()
print()
print('******** SOLUTION 2 OUTPUT')
print(' recomputes the mean only 30 times using the dictionary diz_medie,')
print(' faster but still not optimal')
print(meteo.head())
meteo.Temp.plot(label="Temperature", legend=True)
meteo.Temp_media_giorno.plot(label="Average temperature", legend=True)

# SOLUTION
print()
print('******** SOLUTION 3 OUTPUT - best solution, with groupby and transform ')
meteo = pd.read_csv('meteo.csv', encoding='UTF-8')
meteo['Giorno'] = meteo['Data'].str[0:10]
# .transform is needed to avoid getting a table with only 30 rows
meteo['Temp_media_giorno'] = meteo.groupby('Giorno')['Temp'].transform('mean')
print()
print("WITH AVERAGE TEMPERATURE")
print(meteo.head())
meteo.Temp.plot(label="Temperature", legend=True)
meteo.Temp_media_giorno.plot(label="Average temperature", legend=True)
```
## 9. Exercise - Air pollutants
Let's try to analyze the hourly data of the air quality monitoring stations of the Autonomous Province of Trento, validated by the environmental agency.
Source: [dati.trentino.it](https://dati.trentino.it/dataset/qualita-dell-aria-rilevazioni-delle-stazioni-monitoraggio)
### 9.1 - loading the file
✪ Load the file [aria.csv](aria.csv) into pandas
**IMPORTANT**: put the dataframe in the variable `aria`, so as not to confuse it with the previous dataframes
**IMPORTANT**: use `'latin-1'` as the encoding (otherwise, depending on your operating system, the load might fail with strange error messages)
**IMPORTANT**: if you get other strange error messages, also add the parameter `engine='python'`
```
# write here
import pandas as pd  # we import pandas and for convenience rename it to 'pd'
import numpy as np   # we import numpy and for convenience rename it to 'np'
# remember the encoding!
aria = pd.read_csv('aria.csv', encoding='latin-1')
aria.info()
```
### 9.2 - average pollutants
✪ Find the average of the `PM10` pollutant values at `Parco S. Chiara` (averaged over all days). You should obtain the value `11.385752688172044`
```
# write here
aria[(aria.Stazione == 'Parco S. Chiara') & (aria.Inquinante == 'PM10')].Valore.values.mean()
aria[(aria.Stazione == 'Parco S. Chiara') & (aria.Inquinante == 'PM10')]
aria[(aria.Stazione == 'Parco S. Chiara') & (aria.Inquinante == 'PM10') & (aria.Data == '2019-05-07')]
```
### 9.3 - PM10 plot
✪ Using `plt.plot` as seen in a [previous example](#Grafici-matplotlib-da-strutture-pandas) (i.e. passing it the relevant pandas series directly), show in a plot the trend of the `PM10` pollutant values during the day of May 7th, 2019
```
# write here
# SOLUTION
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
filtrato = aria[(aria.Stazione == 'Parco S. Chiara') & (aria.Inquinante == 'PM10') & (aria.Data == '2019-05-07')]
plt.plot(filtrato['Ora'], filtrato['Valore'] )
plt.title('SOLUTION PM10 May 7th, 2019')
plt.xlabel('Ora')
plt.show()
```
## 10. Joining tables
Suppose you want to add a column with the geographical position of the ISS. To do so, you would need to join our dataset with another one containing this information. Let's take for example the dataset [iss-coords.csv](iss-coords.csv)
```
iss_coords = pd.read_csv('iss-coords.csv', encoding='UTF-8')
iss_coords
```
We notice there is a `timestamp` column, which unfortunately has a slightly different name from the `time_stamp` column (note the underscore `_`) in the original astropi dataset:
```
df.info()
```
To merge the datasets based on the two columns, we can use the `merge` command like this:
```
# remember that merge produces a NEW dataframe:
geo_astropi = df.merge(iss_coords, left_on='time_stamp', right_on='timestamp')
# merge adds both the time_stamp and the timestamp column,
# so we remove the duplicated column 'timestamp'
geo_astropi = geo_astropi.drop('timestamp', axis=1)
geo_astropi
```
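The effect of the `how` parameter on the number of resulting rows can be sketched on two tiny made-up tables:

```python
import pandas as pd

left_df = pd.DataFrame({'time_stamp': ['t1', 't2', 't3'], 'temp': [1, 2, 3]})
right_df = pd.DataFrame({'timestamp': ['t2', 't3'], 'lat': [10.0, 20.0]})

# default how='inner': only timestamps present in BOTH tables survive
inner = left_df.merge(right_df, left_on='time_stamp', right_on='timestamp')
# how='left': all left rows are kept, lat is NaN where unmatched
kept_all = left_df.merge(right_df, left_on='time_stamp', right_on='timestamp',
                         how='left')

print(len(inner))    # 2
print(len(kept_all)) # 3
```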
### 10.1 Exercise - improving the merge
If you look carefully, the table above has the `lat` and `lon` columns, but very few rows. Why? Try to merge the tables in some useful way, so as to keep all the original rows and have all the cells of `lat` and `lon` filled.
- For other merge strategies, read about the `how` parameter in [Why And How To Use Merge With Pandas in Python](https://towardsdatascience.com/why-and-how-to-use-merge-with-pandas-in-python-548600f7e738)
- To fill missing values do not use interpolation techniques, simply put the position of the station on that given day or hour.
```
# write here
geo_astropi = df.merge(iss_coords, left_on='time_stamp', right_on='timestamp', how='left')
geo_astropi = pd.merge_ordered(df, iss_coords, fill_method='ffill', how='left', left_on='time_stamp', right_on='timestamp')
geo_astropi
```
## 11. GeoPandas
<div class="alert alert-warning">
**WARNING: This part of the tutorial is EXPERIMENTAL, comments are missing**
</div>
Pandas is also very convenient for handling geographical data, with the [GeoPandas](http://geopandas.org/) extension.
Let's install it right away:
Anaconda:
`conda install geopandas`
and then
`conda install -c conda-forge descartes`
Linux/Mac (`--user` installs into your own home):
- ``` python3 -m pip install --user geopandas descartes ```
### 11.1 A simple example with GeoPandas
We will make an example showing the Italian regions colored according to their resident population:

When dealing with maps, we typically want to show regions or countries colored according to a value associated with each area. So we always need at least two things:
1. the geometric shapes of the areas to draw
2. the values to associate with each area, to be mapped to color gradients
Typically these data come from at least two different datasets, a geographical one and a statistical one, and you will often face the problem that in the geographical dataset the areas are named with a name or code different from the one used in the statistics dataset.
We will split the example in two parts:
* in the first, we will use already cleaned tables that you can find in the same folder as this notebook. This will let us understand the basic mechanisms of GeoPandas and of _fuzzy matching_
* in the second part, we will propose a complete exercise which involves downloading the html file online and cleaning it
Let's look at our example, where the geographical areas are taken from the ISTAT website as geographical files in shapefile format. The file is already saved in this folder: [reg2011/reg2011_g.shp](reg2011/reg2011_g.shp); if you want to see where it was online, look at the territorial bases here: https://www.istat.it/it/archivio/104317
### 11.2 Reading shapefiles in GeoPandas
Let's read the shapefile with geopandas:
```
import geopandas as gpd
df_regioni = gpd.read_file(filename="reg2011/reg2011_g.shp")
df_regioni.head()
```
Besides the usual pandas table, we notice that among the columns there are codes `COD_REG` to identify the regions, their names `NOME_REG` and the geometry `geometry`. By calling `plot()` on the geopandas dataframe we can see the resulting map:
```
%matplotlib inline
df_regioni.plot()
```
### 11.3 Getting statistics to display
In our example, we extract statistics about the population of the Italian regions from an HTML page. We will then put the extracted data into a pandas (not GeoPandas) dataframe called `df_popolazione`. For convenience we saved that page in the file [popolazione.html](popolazione.html) (if you want to see the online version, go to this website: https://www.tuttitalia.it/regioni/popolazione)
<div class="alert alert-warning">
**WARNING**: For the moment you can ignore the code below, it only serves to load the data into the `df_popolazione` dataframe
</div>
```
import pandas as pd

# takes a row of an html table, and returns a dictionary with the extracted data
def estrai_dizionario(riga_html):
    colonne = riga_html.select('td')
    return dict(name=colonne[1].text,
                population=colonne[2].text.replace('.', '').replace(',', '.'),
                area=colonne[3].text.replace('.', '').replace(',', '.'))

# Extracts the population per region from popolazione.html,
# and returns a pandas (not GeoPandas) dataframe
def estrai_popolazione():
    from bs4 import BeautifulSoup
    with open('popolazione.html', encoding='utf-8') as f:
        testo = f.read()
    listona = []  # big list of dictionaries, each dictionary represents a row
    # we use the html5lib parser instead of lxml because the website is complex
    soup = BeautifulSoup(testo, 'html5lib')
    righe_html = soup.select('table.ut tr')[1:21]
    for riga_html in righe_html:
        listona.append(estrai_dizionario(riga_html))
    return pd.DataFrame(listona)
```
Let's look at the content of the file:
```
df_popolazione = estrai_popolazione()
df_popolazione
```
If we compare the names in this table with those of the first dataframe, we immediately notice that several names are not identical. For example, in the shapefile we find `TRENTINO-ALTO ADIGE/SUDTIROL` while in the statistics there is `Trentino-AA`. If we want to build a single table, we therefore need to perform data integration, trying to obtain a _matching_ between the rows of the two datasets. For twenty regions we could do it by hand, but clearly doing that for thousands of rows would be extremely expensive. To make this operation easier, it is convenient to perform a so-called _fuzzy join_, which looks for similar strings in the two datasets and, based on a string similarity measure, decides how to associate rows of the first table with rows of the second.
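The similarity measure used in the function below comes from the standard library's `difflib`; as a quick sketch, here is how a matcher scores two of the region names mentioned above:

```python
import difflib

a = 'trentino-alto adige/sudtirol'
b = 'trentino-aa'

# ratio() ranges from 0 (no overlap) to 1 (identical strings)
sm = difflib.SequenceMatcher(None, a, b)
score = sm.ratio()
print(round(score, 2))
```

The fuzzy join relies on the fact that matching pairs like these score higher than unrelated pairs.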
The following function implements such a fuzzy join:
```
def fuzzy_join(df_geo, df_right, name_left, name_right):
    """ Takes:
        - a geopandas dataframe df_geo which contains a column named name_left
        - another generic dataframe df_right which contains a column named name_right
        Returns:
        - a new dataframe which is the join of the two dataframes, based on the
          similarity between the columns name_left and name_right

        WARNING: sometimes the similarity algorithm can get confused and consider
        equal two names which should instead be kept distinct!
        As far as possible, always check the results manually.
    """
    from itertools import product
    import difflib
    import heapq

    df1 = df_geo.set_index(name_left)
    df1.index = df1.index.str.lower()
    df2 = df_right.set_index(name_right)
    df2.index = df2.index.str.lower()

    def get_matcher_smart(dfl, dfr):
        heap = []
        for l, r in product(dfl.index, dfr.index):
            # characters in ' .\n\t' are treated as junk by the matcher
            sm = difflib.SequenceMatcher(lambda x: x in ' .\n\t', l, r)
            heapq.heappush(heap, (1. - sm.quick_ratio(), l, r))
        ass_l, ass_r, ass_map = set(), set(), {}
        # greedily assign the most similar still-unassigned pairs first
        while len(ass_map) < len(dfl):
            score, l, r = heapq.heappop(heap)
            if not (l in ass_l or r in ass_r):
                ass_map[l] = r
                ass_l.add(l)
                ass_r.add(r)
        return dfl.index.map(lambda x: ass_map[x])

    df1.index = get_matcher_smart(df1, df2)
    return df1.join(df2)

tabellona = fuzzy_join(df_regioni, df_popolazione, 'NOME_REG', 'name')
tabellona
tabellona.plot(column='population', cmap='OrRd', edgecolor='k', legend=False)
```
### 11.4 Integration example
<div class="alert alert-warning">
**WARNING: THIS PART IS INCOMPLETE**
</div>
Let's look at the complete integration example. You will also need `requests`, `beautifulsoup4`, and `html5lib`. Install them like this:
Anaconda:
- `conda install requests beautifulsoup4 html5lib`
Linux/Mac (`--user` installs into your own home):
- ``` python3 -m pip install --user requests beautifulsoup4 html5lib ```
As an integration example, we will use an HTML page with the data of the Italian regions:
* https://www.tuttitalia.it/regioni/popolazione/
To understand how to extract the population from the HTML, look at the tutorial about [extraction](https://it.softpython.org/extraction/extraction-sol.html)
In the territorial bases menu here, instead, we have geographical files in shapefile format for the regions:
* territorial bases https://www.istat.it/it/archivio/104317
```
# Downloads the population HTML page, and saves it into the file 'popolazione.html'
def scarica_popolazione():
    import requests
    r = requests.get("https://www.tuttitalia.it/regioni/popolazione/")
    if r.status_code == 200:
        testo = r.text
        with open('popolazione.html', 'w', encoding='utf-8') as f:
            f.write(testo)
        print("Saved the file 'popolazione.html'")
    else:
        # if the status code is not 200, something probably went wrong
        # and we stop the script execution
        raise Exception('Error while downloading: %s' % r)

# scarica_popolazione()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Gradient Boosted Trees: Model understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/estimators/boosted_trees_model_understanding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/tree/master/site/en/r2/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
For an end-to-end walkthrough of training a Gradient Boosting model check out the [boosted trees tutorial](./boosted_trees). In this tutorial you will:
* Learn how to interpret a Boosted Tree model both *locally* and *globally*
* Gain intuition for how a Boosted Trees model fits a dataset
## How to interpret Boosted Trees models both locally and globally
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during the model development stage.
For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability you will retrieve and visualize gain-based feature importances, [permutation feature importances](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) and also show aggregated DFCs.
## Load the titanic dataset
You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
from IPython.display import clear_output
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
tf.random.set_seed(123)
```
For a description of the features, please review the prior tutorial.
## Create feature columns, input_fn, and the train the estimator
### Preprocess the data
Create the feature columns, using the original numeric columns as is and one-hot-encoding categorical variables.
```
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
```
### Build the input pipeline
Create the input functions using the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas.
```
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle through the dataset as many times as needed (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(NUM_EXAMPLES))
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
### Train the model
```
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)
# Train model.
est.train(train_input_fn, max_steps=100)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
```
## Model interpretation and plotting
```
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
```
## Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). The DFCs are generated with:
`pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))`
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
```
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
```
A nice property of DFCs is that the sum of the contributions + the bias is equal to the prediction for a given example.
```
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
```
Plot DFCs for an individual passenger. Let's make the plot nice by color coding based on the contributions' directionality and add the feature values to the figure.
```
# Boilerplate code for plotting :)
def _get_color(value):
"""To make positive DFCs plot green, negative DFCs plot red."""
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
"""Display feature's values on left of plot."""
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
def plot_example(example):
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=[colors],
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
return ax
# Plot results.
ID = 182
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
```
The larger magnitude contributions have a larger impact on the model's prediction. Negative contributions indicate the feature value for this given example reduced the model's prediction, while positive values contribute an increase in the prediction.
You can also compare the example's DFCs with the entire distribution using a violin plot.
```
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
# Initialize plot.
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
# Create example dataframe.
TOP_N = 8 # View top 8 features.
example = df_dfc.iloc[ID]
ix = example.abs().sort_values()[-TOP_N:].index
example = example[ix]
example_df = example.to_frame(name='dfc')
# Add contributions of entire distribution.
parts=ax.violinplot([df_dfc[w] for w in ix],
vert=False,
showextrema=False,
widths=0.7,
positions=np.arange(len(ix)))
face_color = sns_colors[0]
alpha = 0.15
for pc in parts['bodies']:
pc.set_facecolor(face_color)
pc.set_alpha(alpha)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][ix], ax)  # use the locally sorted index ix, not the global sorted_ix
# Add local contributions.
ax.scatter(example,
np.arange(example.shape[0]),
color=sns.color_palette()[2],
s=100,
marker="s",
label='contributions for example')
# Legend
# Proxy plot, to show violinplot dist on legend.
ax.plot([0,0], [1,1], label='eval set contributions\ndistributions',
color=face_color, alpha=alpha, linewidth=10)
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
frameon=True)
legend.get_frame().set_facecolor('white')
# Format plot.
ax.set_yticks(np.arange(example.shape[0]))
ax.set_yticklabels(example.index)
ax.grid(False, axis='y')
ax.set_xlabel('Contribution to predicted probability', size=14)
```
Plot this example.
```
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
plt.show()
```
Finally, third-party tools, such as [LIME](https://github.com/marcotcr/lime) and [shap](https://github.com/slundberg/shap), can also help understand individual predictions for a model.
## Global feature importances
Additionally, you might want to understand the model as a whole, rather than studying individual predictions. Below, you will compute and use:
1. Gain-based feature importances using `est.experimental_feature_importances`
2. Permutation importances
3. Aggregate DFCs using `est.experimental_predict_with_explanations`
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable when potential predictor variables vary in their scale of measurement or their number of categories, and when features are correlated ([source](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-307)). Check out [this article](http://explained.ai/rf-importance/index.html) for an in-depth overview and a great discussion of the different feature importance types.
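To make the "gain" idea concrete, here is a minimal, self-contained sketch of the impurity reduction for one candidate split on toy regression targets, using squared error as the impurity; the estimator's internal accounting differs in detail:

```python
import numpy as np

def split_gain(y, mask):
    """Squared-error impurity reduction for splitting targets y by a boolean mask."""
    sse = lambda v: ((v - v.mean()) ** 2).sum() if len(v) else 0.0
    return sse(y) - sse(y[mask]) - sse(y[~mask])

y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
# A split that separates the two target clusters yields a large gain...
good = np.array([True, True, True, False, False, False])
# ...while an uninformative split yields almost none.
bad = np.array([True, False, True, False, True, False])
assert split_gain(y, good) > split_gain(y, bad) >= 0
```

Gain-based importance sums these reductions over every split in the ensemble that uses a given feature.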
### 1. Gain-based feature importances
Gain-based feature importances are built into the TensorFlow Boosted Trees estimators using `est.experimental_feature_importances`.
```
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)
# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6)))
ax.grid(False, axis='y')
```
### 2. Average absolute DFCs
You can also average the absolute values of DFCs to understand impact at a global level.
```
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
```
You can also see how DFCs vary as a feature value varies.
```
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(x=feature.index.values, y=feature.values, lowess=True)  # keyword args required in newer seaborn
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
```
### 3. Permutation feature importance
```
def permutation_importances(est, X_eval, y_eval, metric, features):
"""Column by column, shuffle values and observe effect on eval set.
source: http://explained.ai/rf-importance/index.html
A similar approach can be done during training. See "Drop-column importance"
in the above article."""
baseline = metric(est, X_eval, y_eval)
imp = []
for col in features:
save = X_eval[col].copy()
X_eval[col] = np.random.permutation(X_eval[col])
m = metric(est, X_eval, y_eval)
X_eval[col] = save
imp.append(baseline - m)
return np.array(imp)
def accuracy_metric(est, X, y):
"""TensorFlow estimator accuracy."""
eval_input_fn = make_input_fn(X,
y=y,
shuffle=False,
n_epochs=1)
return est.evaluate(input_fn=eval_input_fn)['accuracy']
features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric,
features)
df_imp = pd.Series(importances, index=features)
sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
```
# Visualizing model fitting
Let's first simulate training data using the following formula:
$$z=x* e^{-x^2 - y^2}$$
where \(z\) is the dependent variable you are trying to predict, and \(x\) and \(y\) are the features.
```
from numpy.random import uniform, seed
from matplotlib.mlab import griddata  # removed in matplotlib >= 3.0; use scipy.interpolate.griddata there
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)  # no trailing commas: they would create tuples and break meshgrid
yi = np.linspace(-2.1, 2.1, 210)
xi,yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
CS = plt.contourf(x, y, z, 15,
vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')  # note: uses the global zi to fix the color scale
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
```
You can visualize the function. Redder colors correspond to larger function values.
```
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
def predict(est):
"""Predictions from a given estimator."""
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
```
First let's try to fit a linear model to the data.
```
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
```
It's not a very good fit. Next let's try to fit a GBDT model to it and try to understand how the model fits the function.
```
n_trees = 22 #@param {type: "slider", min: 1, max: 80, step: 1}
est = tf.estimator.BoostedTreesRegressor(fc, n_batches_per_layer=1, n_trees=n_trees)
est.train(train_input_fn, max_steps=500)
clear_output()
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
plt.show()
```
As you increase the number of trees, the model's predictions better approximate the underlying function.

# Conclusion
In this tutorial you learned how to interpret Boosted Trees models using directional feature contributions and feature importance techniques. These techniques provide insight into how features affect a model's predictions. Finally, you also gained intuition for how a Boosted Trees model fits a complex function by viewing the decision surface of several models.
# Parameter identification example
Here is a simple toy model that we use to demonstrate how the inference package works.
$\emptyset \xrightarrow[]{k_1} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$
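Before simulating, note that the deterministic version of this birth–death model, dX/dt = k1 − d1·X, has the closed-form solution X(t) = (k1/d1)(1 − e^(−d1·t)) for X(0) = 0, with steady state k1/d1. A quick numerical cross-check, using illustrative rate values rather than the ones in the SBML file:

```python
import numpy as np

# Illustrative rates (not read from toy_sbml_model.xml):
k1, d1 = 10.0, 0.5

t = np.linspace(0, 20, 100)
X_closed = (k1 / d1) * (1 - np.exp(-d1 * t))   # closed-form solution, X(0) = 0

# Cross-check with a simple forward-Euler integration of dX/dt = k1 - d1*X.
X, dt = np.zeros_like(t), t[1] - t[0]
for i in range(1, len(t)):
    X[i] = X[i - 1] + (k1 - d1 * X[i - 1]) * dt

assert abs(X_closed[-1] - k1 / d1) < 1e-2      # trajectory approaches k1/d1
assert np.allclose(X, X_closed, atol=0.5)      # Euler tracks the closed form
```

Having the analytic steady state in mind makes it easy to sanity-check the inferred parameters later.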
```
%matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
%matplotlib inline
import bioscrape as bs
from bioscrape.types import Model
from bioscrape.simulator import py_simulate_model
import numpy as np
import pylab as plt
import pandas as pd
M = Model(sbml_filename = 'toy_sbml_model.xml')
```
# Generate experimental data
1. Simulate bioscrape model
2. Add Gaussian noise of non-zero mean and non-zero variance to the simulation
3. Create appropriate Pandas dataframes
4. Write the data to a CSV file
```
timepoints = np.linspace(0,20,100)
result = py_simulate_model(timepoints, Model = M)['X']
num_trajectories = 10
exp_data = pd.DataFrame()
exp_data['timepoints'] = timepoints
for i in range(num_trajectories):
exp_data['X' + str(i)] = result + np.random.normal(5, 2, size = np.shape(result))
plt.plot(timepoints, exp_data['X' + str(i)], 'r', alpha = 0.3)
plt.plot(timepoints, result, 'k', linewidth = 3, label = 'Model')
plt.legend()
plt.xlabel('Time')
plt.ylabel('[X]')
plt.show()
```
## CSV looks like:
```
exp_data.to_csv('birth_death_data.csv')
exp_data
```
# Run the bioscrape MCMC algorithm to identify parameters from the experimental data
```
from bioscrape.inference import py_inference
# Import data from CSV
# Import a CSV file for each experiment run
exp_data = []
for i in range(num_trajectories):
df = pd.read_csv('birth_death_data.csv', usecols = ['timepoints', 'X'+str(i)])
df.columns = ['timepoints', 'X']
exp_data.append(df)
prior = {'k1' : ['uniform', 0, 100],'d1' : ['uniform',0,10]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 5, init_seed = 0.15, nsteps = 4000, sim_type = 'deterministic',
params_to_estimate = ['k1', 'd1'], prior = prior)
pid.plot_mcmc_results(sampler);
```
### Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis.
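As a starting point for such an analysis, here is a minimal sketch that reads the samples file and summarizes the posterior. It assumes the CSV has one column per estimated parameter, which may differ from the actual bioscrape output layout:

```python
import pandas as pd

def summarize_mcmc(path, burn_in=200):
    """Posterior summary from an MCMC samples CSV (assumed: one column per parameter)."""
    samples = pd.read_csv(path).iloc[burn_in:]      # discard burn-in draws
    return samples.agg(['mean', 'std', 'median'])

# Hypothetical usage:
# print(summarize_mcmc('mcmc_results.csv'))
```

Adjust `burn_in` to match the chain length and mixing behavior you observe in the trace plots.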
# OR
### You can also plot the results as follows
```
M_fit = Model(sbml_filename = 'toy_sbml_model.xml')  # same SBML file as the model above
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.1)
plt.plot(timepoints, result, "k", label="original model")
plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
plt.close()
plt.title('Log-likelihood progress')
plt.plot(pid.cost_progress)
plt.xlabel('Steps (all chains)')
plt.show()
```
## All methods above have other advanced options that you can use. Refer to the Parameter Identification Tools and Advanced Examples notebook for more details. There are many other tools available, such as support for multiple initial conditions, per-trajectory timepoints, and estimator options.
<a href="https://colab.research.google.com/github/Aman211409/HackFest-21/blob/ML/tflite_conversion.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from keras.models import load_model
from sklearn import metrics
from google.colab import files
uploaded = files.upload()
model = load_model('/content/final_model_covid_detection.hdf5')
model.summary()
import cv2
image1 = plt.imread('/content/Screenshot 2021-04-27 084902.jpg')
image1 = cv2.resize(image1, (224, 224))
image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
images = []
images.append(image1)
images = np.array(images) / 255.0
preds = model.predict(images)
image1 = plt.imread('/content/Screenshot 2021-04-27 101310.jpg')
image1 = cv2.resize(image1, (224, 224))
image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
images = []
images.append(image1)
images = np.array(images) / 255.0
preds = model.predict(images)
preds.resize(1, len(preds))
predIdxs = np.round(preds[0])
predIdxs
preds
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file('/content/final_model_covid_detection.hdf5') # path to the Keras HDF5 model file
tflite_model = converter.convert()
open('model_tflite.tflite', "wb").write(tflite_model)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
```
Image 1
```
image1 = plt.imread('/content/Screenshot 2021-04-27 101029.jpg')
image1 = cv2.resize(image1, (224, 224))
image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
images = []
images.append(image1)
input_shape = input_details[0]['shape']
input_data = np.array(images, dtype=np.float32) / 255.0
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
```
Image 2
```
image1 = plt.imread('/content/unnamed.jpg')
image1 = cv2.resize(image1, (224, 224))
image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
images = []
images.append(image1)
input_shape = input_details[0]['shape']
input_data = np.array(images, dtype=np.float32) / 255.0
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
```
# Code Reference
https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
```
import nibabel as nib
import numpy as np
import pandas as pd
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
import time
import pickle
from torch.utils.data import Dataset, DataLoader
import matplotlib.pyplot as plt
from PIL import Image
import cv2
device = torch.device('cpu')
class hcp_dataset(Dataset):
def __init__(self, df_path, train = False):
self.df = pd.read_csv(df_path)
self.train = train
def __len__(self):
return len(self.df)
def __getitem__(self,idx):
subject_name = self.df.iloc[idx]['Subject']
image_path ='../data/hcp2/'+str(subject_name)+'/T1w/T1w_acpc_dc_restore_brain.nii.gz'
image = nib.load(image_path)
image_array = image.get_fdata()
# Normalization
image_array = (image_array - image_array.mean()) / image_array.std()
#label = self.df.loc[idx][['N','E','O','A','C']].values.astype(int)
label = self.df.loc[idx][['C']].values[0].astype(int) # predict C
sample = {'x': image_array[None,:], 'y': label}
return sample
train_df_path = '../train_test.csv'
val_df_path = '../val_test.csv'
test_df_path = '../test.csv'
transformed_dataset = {'train': hcp_dataset(train_df_path, train = True),
'validate':hcp_dataset(val_df_path),
'test':hcp_dataset(test_df_path),}
bs = 1
dataloader = {x: DataLoader(transformed_dataset[x], batch_size=bs,
shuffle=True, num_workers=0) for x in ['train', 'validate','test']}
data_sizes ={x: len(transformed_dataset[x]) for x in ['train', 'validate','test']}
sample = next(iter(dataloader['train']))['x']
sample.size()
class CNN_3D(nn.Module):
def __init__(self):
super(CNN_3D, self).__init__()
self.conv1 = nn.Conv3d(1, 8, 3, stride=1, padding=1)
self.bn1 = nn.BatchNorm3d(8)
self.activation1 = nn.ReLU()
self.maxpool1 = nn.MaxPool3d(kernel_size=(3,3,3), stride=(3,3,3))
self.conv2 = nn.Conv3d(8, 32, 3, stride=1, padding=1)
self.bn2 = nn.BatchNorm3d(32)
self.activation2 = nn.ReLU()
self.maxpool2 = nn.MaxPool3d(kernel_size=(3,3,3), stride=(3,3,3))
self.conv3 = nn.Conv3d(32, 64, 3, stride=1, padding=1)
self.bn3 = nn.BatchNorm3d(64)
self.activation3 = nn.ReLU()
self.maxpool3 = nn.MaxPool3d(kernel_size=(3,3,3), stride=(3,3,3))
self.conv4 = nn.Conv3d(64, 64, 3, stride=1, padding=1)
self.bn4 = nn.BatchNorm3d(64)
self.activation4 = nn.ReLU()
self.maxpool4 = nn.MaxPool3d(kernel_size=(3,3,3), stride=(2,2,2))
self.conv5 = nn.Conv3d(64, 128, 3, stride=1, padding=1)
self.bn5 = nn.BatchNorm3d(128)
self.activation5 = nn.ReLU()
self.avgpool = nn.AdaptiveMaxPool3d((1,1,1))
self.fc1 = nn.Linear(128,3)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.activation1(x)
x = self.maxpool1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.activation2(x)
x = self.maxpool2(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.activation3(x)
x = self.maxpool3(x)
x = self.conv4(x)
x = self.bn4(x)
x = self.activation4(x)
x = self.maxpool4(x)
x = self.conv5(x)
x = self.bn5(x)
x = self.activation5(x)
self.featuremap1 = x.detach()
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
return x
model = CNN_3D()
model = model.to(device)
model.load_state_dict(torch.load('Model6_CAM.pt'))
from sklearn.preprocessing import label_binarize
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, auc
from itertools import cycle
import warnings
warnings.filterwarnings('ignore')
def evaluate_model(model, dataloader, phase = 'test'):
model.eval()
y_test_label = []
y_test = []
y_score = []
pre = []
for data in dataloader[phase]:
image = data['x'].to(device,dtype=torch.float)
label = data['y'].to('cpu',dtype=torch.long)
y_test_label = y_test_label+label.tolist()
label = label_binarize(label, classes=[0, 1, 2])
if y_test == []:
y_test=label
else:
y_test = np.concatenate((y_test,label),axis = 0)
output = model(image)
#output = F.softmax(output,dim=1)
output = output.to('cpu')
if y_score == []:
y_score=np.array(output.detach().numpy())
else:
y_score = np.concatenate((y_score,output.detach().numpy()),axis = 0)
for i in y_score:
pre.append(list(i).index(max(i)))
return y_test_label,y_test,pre,y_score
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
def ROC_curve(y_test,y_score):
fpr = dict()
tpr = dict()
roc_auc = dict()
n_classes = y_test.shape[1]
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
lw = 2
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
return None
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
#classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#print("Normalized confusion matrix")
else:
pass
#print('Confusion matrix, without normalization')
#print(cm)
#print(cm.shape)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(-0.5, cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return None
y_test_label,y_test,pre,y_score = evaluate_model(model, dataloader, phase = 'test')
ROC_curve(y_test,y_score)
classes = ['0','1','2']
plot_confusion_matrix(y_test_label, pre, classes)
```
```
#default_exp richext
```
# Extensions To rich
> Extensions to [rich](https://github.com/willmcgugan/rich) for ghtop.
```
#export
import time,random
from collections import defaultdict
from typing import List
from collections import deque, OrderedDict, namedtuple
from ghtop.all_rich import (Console, Color, FixedPanel, box, Segments, Live,
grid, ConsoleOptions, Progress, BarColumn, Spinner)
from ghapi.event import *
from fastcore.all import *
console = Console()
evts = load_sample_events()
exs = [first(evts, risinstance(o)) for o in described_evts]
```
## Animated Stats
This section outlines how we can display statistics and visualizations such as sparklines and status bars that are animated as events are received.
```
#export
class EProg:
"Progress bar with a heading `hdg`."
def __init__(self, hdg='Quota', width=10):
self.prog = Progress(BarColumn(bar_width=width), "[progress.percentage]{task.percentage:>3.0f}%")
self.task = self.prog.add_task("",total=100, visible=False)
store_attr()
def update(self, completed): self.prog.update(self.task, completed=completed)
def __rich_console__(self, console: Console, options: ConsoleOptions):
self.prog.update(self.task, visible=True)
yield grid([["Quota"], [self.prog.get_renderable()]], width=self.width+2, expand=False)
```
When you instantiate `EProg`, the starting progress is set to 0%:
```
p = EProg()
console.print(p)
```
You can update the progress bar with the `update` method:
```
p.update(10)
console.print(p)
```
### `ESpark` - A sparkline combined with an EventTimer
fastcore's `EventTimer` calculates frequency metrics aggregated by slices of time specified by the argument `span`. The `EventTimer` can produce a sparkline that shows the last n time slices, where n is specified by the parameter `store`:
```
#export
class ESpark(EventTimer):
"An `EventTimer` that displays a sparkline with a heading `nm`."
def __init__(self, nm:str, color:str, ghevts=None, store=5, span=.2, mn=0, mx=None, stacked=True, show_freq=False):
super().__init__(store=store, span=span)
self.ghevts=L(ghevts)
store_attr('nm,color,store,span,mn,mx,stacked,show_freq')
def _spark(self):
data = L(list(self.hist)+[self.freq] if self.show_freq else self.hist)
num = f'{self.freq:.1f}' if self.freq < 10 else f'{self.freq:.0f}'
return f"[{self.color}]{num} {sparkline(data, mn=self.mn, mx=self.mx)}[/]"
def upd_hist(self, store, span): super().__init__(store=store, span=span)
def _nm(self): return f"[{self.color}] {self.nm}[/]"
def __rich_console__(self, console: Console, options: ConsoleOptions):
yield grid([[self._nm()], [self._spark()]]) if self.stacked else f'{self._nm()} {self._spark()}'
def add_events(self, evts):
evts = L([evts]) if isinstance(evts, dict) else L(evts)
if self.ghevts: evts.map(lambda e: self.add(1) if type(e) in L(self.ghevts) else noop)
else: self.add(len(evts))
__repr__ = basic_repr('nm,color,ghevts,store,span,stacked,show_freq,ylim')
from time import sleep
def _randwait(): yield from (sleep(random.random()/200) for _ in range(100))
c = EventTimer(store=5, span=0.03)
for o in _randwait(): c.add(1)
```
By default `nm` will be stacked on top of the sparkline. We simulate adding events to `ESpark` and render the result:
```
e = ESpark(nm='💌Issue', color='blue', store=5)
def _r(): return random.randint(1,30)
def _sim(e, steps=8, sleep=.2):
for i in range(steps):
e.add(_r())
time.sleep(sleep)
_sim(e)
console.print(e)
```
If you would prefer `nm` and the sparkline to be on one line instead, you can set `stacked` to `False`:
```
e = ESpark(color='blue', nm='💌Issue', stacked=False)
_sim(e)
console.print(e)
```
You can optionally specify a list of `GhEvent` types that will allow you to update sparklines by streaming in events. `described_evts` has a complete list of options:
```
described_evts
```
If `ghevts` is specified, only events matching the listed `GhEvent` types will increment the event counter.
In the below example, the `IssueCommentEvent` and `IssuesEvent` are listed, therefore any other event types will not update the event counter:
```
_pr_evts = evts.filter(risinstance((PullRequestEvent, PullRequestReviewCommentEvent, PullRequestReviewEvent)))
_watch_evts = evts.filter(risinstance((WatchEvent)))
_s = ESpark('Issues', 'blue', [IssueCommentEvent, IssuesEvent], span=5)
_s.add_events(_pr_evts)
_s.add_events(_watch_evts)
test_eq(_s.events, 0)
```
However, events that match those types will update the event counter accordingly:
```
_issue_evts = evts.filter(risinstance((IssueCommentEvent, IssuesEvent)))
_s.add_events(_issue_evts)
test_eq(_s.events, len(_issue_evts))
```
If `ghevts` is not specified, all events are counted:
```
_s = ESpark('Issues', 'blue', span=5)
_s.add_events(evts)
test_eq(_s.events, len(evts))
```
You can also just add one event at a time instead of a list of events:
```
_s = ESpark('Issues', 'blue', span=5)
_s.add_events(evts[0])
test_eq(_s.events, 1)
```
## Update A Group of Sparklines with `SpkMap`
```
#export
class SpkMap:
"A Group of `ESpark` instances."
def __init__(self, spks:List[ESpark]): store_attr()
@property
def evcounts(self): return dict([(s.nm, s.events) for s in self.spks])
def update_params(self, store:int=None, span:float=None, stacked:bool=None, show_freq:bool=None):
for s in self.spks:
s.upd_hist(store=ifnone(store,s.store), span=ifnone(span,s.span))
s.stacked = ifnone(stacked,s.stacked)
s.show_freq = ifnone(show_freq,s.show_freq)
def add_events(self, evts:GhEvent):
"Update `SpkMap` sparkline histograms with events."
evts = L([evts]) if isinstance(evts, dict) else L(evts)
for s in self.spks: s.add_events(evts)
def __rich_console__(self, console: Console, options: ConsoleOptions): yield grid([self.spks])
__repr__ = basic_repr('spks')
```
You can define a `SpkMap` instance with a list of `ESpark`:
```
s1 = ESpark('Issues', 'green', [IssueCommentEvent, IssuesEvent], span=60)
s2 = ESpark('PR', 'red', [PullRequestEvent, PullRequestReviewCommentEvent, PullRequestReviewEvent], span=60)
s3 = ESpark('Follow', 'blue', [WatchEvent, StarEvent, IssueCommentEvent, IssuesEvent], span=60)
s4 = ESpark('Other', 'red', span=60)
sm = SpkMap([s1,s2,s3,s4])
```
We haven't added any events to `SpkMap` so the event count will be zero for all sparklines:
```
sm.evcounts
```
In the above example, Issue events update both the `Issues` and `Follow` sparklines, as well as the `Other` sparkline which doesn't have any `GhEvent` type filters so it counts all events:
```
sm.add_events(_issue_evts)
test_eq(sm.evcounts['Issues'], len(_issue_evts))
test_eq(sm.evcounts['Follow'], len(_issue_evts))
test_eq(sm.evcounts['Other'], len(_issue_evts))
sm.evcounts
```
You can also just add one event at a time:
```
sm.add_events(_pr_evts[0])
test_eq(sm.evcounts['PR'], 1)
test_eq(sm.evcounts['Other'], len(_issue_evts)+1)
```
It may be desirable to make certain attributes of the sparklines the same so the group can look consistent. For example, by default sparklines are set to `stacked=True`, which means the labels are on top:
```
console.print(sm)
```
We can set `stacked=False` for the entire group with the `update_params` method:
```
sm.update_params(stacked=False)
console.print(sm)
sm.update_params(stacked=True, span=.1, store=8)
def _sim(s):
with Live(s) as live:
for i in range(200):
s.add_events(evts[:random.randint(0,500)])
time.sleep(random.randint(0,10)/100)
_sim(sm)
console.print(sm.spks[0])
```
### Stats - Sparklines, Progress bars and Counts Combined
We may want to combine sparklines (with `ESpark`), spinners, and progress bars (with `EProg`) to display organized information concerning an event stream. `Stats` helps you create, group, display and update these elements together.
```
#export
class Stats(SpkMap):
"Renders a group of `ESpark` along with a spinner and progress bar that are dynamically sized."
def __init__(self, spks:List[ESpark], store=None, span=None, stacked=None, show_freq=None, max_width=console.width-5, spin:str='earth', spn_lbl="/min"):
super().__init__(spks)
self.update_params(store=store, span=span, stacked=stacked, show_freq=show_freq)
store_attr()
self.spn = Spinner(spin)
self.slen = len(spks) * max(15, store*2)
self.plen = max(store, 10) # max(max_width-self.slen-15, 15)
self.progbar = EProg(width=self.plen)
def get_spk(self): return grid([self.spks], width=min(console.width-15, self.slen), expand=False)
def get_spinner(self): return grid([[self.spn], [self.spn_lbl]])
def update_prog(self, pct_complete:int=None): self.progbar.update(pct_complete) if pct_complete else noop()
def __rich_console__(self, console: Console, options: ConsoleOptions):
yield grid([[self.get_spinner(), self.get_spk(), grid([[self.progbar]], width=self.plen+5) ]], width=self.max_width)
```
Instantiate `Stats` with a list of `ESpark` instances. The `store`, `span`, and `stacked` parameters allow you to set or override properties of the underlying sparklines for consistency.
```
s1 = ESpark('Issues', 'green', [IssueCommentEvent, IssuesEvent])
s2 = ESpark('PR', 'red', [PullRequestEvent, PullRequestReviewCommentEvent, PullRequestReviewEvent])
s3 = ESpark('Follow', 'blue', [WatchEvent, StarEvent])
s4 = ESpark('Other', 'red')
s = Stats([s1,s2,s3,s4], store=5, span=.1, stacked=True)
console.print(s)
```
You can add events to update counters and sparklines just like `SpkMap`:
```
s.add_events(evts)
console.print(s)
```
You can update the progress bar with the `update_prog` method:
```
s.update_prog(50)
console.print(s)
```
Here is what this looks like when animated using `Live`:
```
def _sim_spark(s):
with Live(s) as live:
for i in range(101):
s.update_prog(i)
s.add_events(evts[:random.randint(0,500)])
time.sleep(random.randint(0,10)/100)
s.update_params(span=1, show_freq=True)
_sim_spark(s)
```
## Event Panel
Display GitHub events in a `FixedPanel`, which is a frame of fixed height that displays streaming data.
```
#export
@patch
def __rich_console__(self:GhEvent, console, options):
res = Segments(options)
kw = {'color': colors[self.type]}
res.add(f'{self.emoji} ')
res.add(self.actor.login, pct=0.25, bold=True, **kw)
res.add(self.description, pct=0.5, **kw)
res.add(self.repo.name, pct=0.5 if self.text else 1, space = ': ' if self.text else '', italic=True, **kw)
if self.text:
clean_text = self.text.replace('\n', ' ')
res.add (f'"{clean_text}"', pct=1, space='', **kw)
res.add('\n')
return res
#export
colors = dict(
PushEvent=None, CreateEvent=Color.red, IssueCommentEvent=Color.green, WatchEvent=Color.yellow,
PullRequestEvent=Color.blue, PullRequestReviewEvent=Color.magenta, PullRequestReviewCommentEvent=Color.cyan,
DeleteEvent=Color.bright_red, ForkEvent=Color.bright_green, IssuesEvent=Color.bright_magenta,
ReleaseEvent=Color.bright_blue, MemberEvent=Color.bright_yellow, CommitCommentEvent=Color.bright_cyan,
GollumEvent=Color.white, PublicEvent=Color.turquoise4)
colors2 = dict(
PushEvent=None, CreateEvent=Color.dodger_blue1, IssueCommentEvent=Color.tan, WatchEvent=Color.steel_blue1,
PullRequestEvent=Color.deep_pink1, PullRequestReviewEvent=Color.slate_blue1, PullRequestReviewCommentEvent=Color.tan,
DeleteEvent=Color.light_pink1, ForkEvent=Color.orange1, IssuesEvent=Color.medium_violet_red,
ReleaseEvent=Color.green1, MemberEvent=Color.orchid1, CommitCommentEvent=Color.tan,
GollumEvent=Color.sea_green1, PublicEvent=Color.magenta2)
p = FixedPanel(15, box=box.HORIZONTALS, title='ghtop')
for e in evts[:163]: p.append(e)
p
```
#### Using `grid` with `FixedPanel`
We can use `grid` to arrange multiple `FixedPanel` instances in rows and columns. Below is an example of how two `FixedPanel` instances can be arranged in a row:
```
p = FixedPanel(15, box=box.HORIZONTALS, title='ghtop')
for e in exs: p.append(e)
grid([[p,p]])
```
Here is another example with four `FixedPanel` instances arranged in two rows and two columns:
```
types = IssueCommentEvent,IssuesEvent,PullRequestEvent,PullRequestReviewEvent
ps = {o:FixedPanel(15, box=box.HORIZONTALS, title=camel2words(remove_suffix(o.__name__,'Event'))) for o in types}
for k,v in ps.items(): v.extend(evts.filter(risinstance(k)))
isc,iss,prs,prrs = ps.values()
grid([[isc,iss],[prs,prrs]], width=110)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
# default_exp sources
from nbdev import *
%load_ext autoreload
%autoreload 2
from utilities.ipynb_docgen import *
from nbdev.showdoc import show_doc
```
# Sources and weights
> Define the PointSource class, code to load weight tables to combine with photon data
### Weight tables
We use a full-sky catalog analysis model, currently pointlike, see [source_weights](https://github.com/tburnett/pointlike/blob/master/python/uw/like2/.py), to evaluate the predicted flux from a source of interest with respect to the
background, the combined fluxes from all other sources.
We choose the following binning:
* energy: 4/decade from 100 MeV to 1 TeV (but only up to 10 GeV is really used)
* event types: Front/Back
* Angular position: HEALPix, nside from 64 to 512 depending on the PSF
In the tables, the energy index and event type are packed into a 1-byte band index, and the HEALPix index
is in NEST order.
The table output also includes the predicted flux for each band, and the spectral model used.
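For illustration, a band index packed this way could be decoded as follows. Note that the bit layout used here is an assumption made for the sketch, not the actual pointlike convention:

```python
# Hypothetical decoding of a packed band index.
# Assumed layout: low bit = event type (0=Front, 1=Back),
# remaining bits = energy bin (4 bins/decade starting at 100 MeV).
def unpack_band(band):
    event_type = band & 1
    energy_bin = band >> 1
    energy = 100 * 10 ** (energy_bin / 4)  # lower bin edge in MeV
    return event_type, energy

et, e = unpack_band(5)
print(et, round(e, 1))
```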
```
# collapse-hide
def weight_table_plots():
"""
### Plots of weight vs. radius
These plots, of a strong source with little background and a weak one, show the
value of the weight vs. the radius of the pixels in the table. There is a
plot for each energy/event type band. The top row has energies 100 MeV to 1 GeV,
the bottom row 1 GeV to 10 GeV, with Front in green and Back in orange.
{img}
Note the absence of Back for energies below {Emid} MeV. The pointlike model does not
compute these to avoid the larger PSF and Earth limb background. The maximum radius
per band is determined by the PSF.
"""
Emid = round(10**2.5)
img = image('weight_table_plots.png', width=500, caption=None)
return locals()
nbdoc(weight_table_plots)
```
The local code used to do the unpacking is in the class `WeightMan`.
This table is used with the data as a simple lookup: a weight is assigned to each photon according to the energy, event type, and HEALPix pixel it lands in.
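A minimal sketch of that lookup, using hypothetical pixel and weight values (the real implementation is the `WeightMan` class in this notebook):

```python
import numpy as np

# wts: band-id rows x weight-pixel columns, with a trailing NaN column for
# photons that land outside any weight pixel. Pixel ids and weights here
# are made up for illustration.
wt_pix = np.array([10, 42, 97])            # hypothetical HEALPix (NEST) pixels
wts = np.array([[0.2, 0.7, 0.5, np.nan]])  # weights for band 0 + NaN column

def photon_weight(band, pixel):
    """Look up the weight for a photon in the given band and pixel."""
    idx = np.searchsorted(wt_pix, pixel)
    if idx < len(wt_pix) and wt_pix[idx] == pixel:
        return wts[band, idx]
    return wts[band, -1]                   # NaN: photon outside the table

print(photon_weight(0, 42))   # 0.7
print(photon_weight(0, 11))   # nan
```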
```
#hide
from wtlike.config import *
import zipfile
from pathlib import Path
config=Config()
if config.valid:
datapath=Path(config.datapath)
wtfolder =datapath/'weight_files'
nf = len(list(wtfolder.glob('*_weights.pkl')))
with zipfile.ZipFile(datapath/'weight_files.zip') as wtzip:
nzip = len(wtzip.filelist)
print(f'Weight files: {nf} in {wtfolder.name}/ and {nzip} in weight_files.zip ')
```
### Accounting for variations from neighboring sources
Consider the case where sources $S_1$ and $S_2$ have overlapping pixels. For a given pixel the corresponding weights are
$w_1$ and $w_2$, and we investigate the effect on $S_1$ from a fractional variation $\alpha_2 \ne 0$ of $S_2$, such that
its flux for that pixel, $s_2$, becomes $(1+\alpha_2)\ s_2$. With the background $b$, the flux of all
sources besides $S_1$ and $S_2$, we have for the $S_1$ weight,
$$ w_1 = \frac{s_1}{s_1+s_2+b}\ \ ,$$ and similarly for $S_2$.
Replacing $s_2$ with $(1+\alpha_2)\ s_2$, we have for the modified weight $w_1'$ that we should use for $S_1$,
$$w'_1 = \frac{w_1}{1+\alpha_2\ w_2}\ \ . $$
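A quick numerical check of this correction, using hypothetical weights:

```python
def corrected_weight(w1, w2, alpha2):
    """Modified S1 weight w1' = w1 / (1 + alpha2 * w2)."""
    return w1 / (1 + alpha2 * w2)

# If S2 brightens by 50% (alpha2=0.5) in a pixel where w2=0.3,
# an S1 weight of 0.6 is reduced accordingly:
print(corrected_weight(0.6, 0.3, 0.5))  # 0.6/1.15, about 0.52
```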
```
# export
import os, sys, pickle, healpy, zipfile
from pathlib import Path
import numpy as np
import pandas as pd
from scipy.integrate import quad
from astropy.coordinates import SkyCoord, Angle
from wtlike.config import *
# export
def get_wtzip_index(config, update=False):
wtzipfile = config.datapath/'weight_files.zip'
if not wtzipfile.is_file():
print( f'Did not find the zip file {wtzipfile}', file=sys.stderr)
return None
with zipfile.ZipFile(wtzipfile) as wtzip:
if 'index.pkl' in wtzip.namelist() and not update:
return pickle.load(wtzip.open('index.pkl'))
if config.verbose>0:
print(f'Extracting info from {wtzipfile}')
name=[]; glat=[]; glon=[]
for filename in wtzip.namelist():
if filename=='index.pkl': continue
with wtzip.open(filename) as file:
wtd = pickle.load(file, encoding='latin1')
l,b = wtd['source_lb']
name.append(Path(filename).name.split('_weights.pkl')[0].replace('_',' ').replace('p','+') )
glon.append(l)
glat.append(b)
zip_index = dict(name=name,
coord=SkyCoord(glon, glat, unit='deg', frame='galactic').fk5
)
### write to temp file, insert back into the zip
### Should be a way to just stream
pickle.dump(zip_index, open('/tmp/wtfile_index.pkl', 'wb'))
with zipfile.ZipFile(wtzipfile, mode='a') as wtzip:
wtzip.write('/tmp/wtfile_index.pkl', 'index.pkl')
return zip_index
#export
class WeightMan(dict):
""" Weight Management
* Load weight tables
* Assign weights to photons
"""
def __init__(self, config, source):
"""
"""
self.source = source
nickname = source.nickname
datapath =Path(config.datapath)
filename = 'weight_files/'+nickname.replace(' ','_').replace('+','p')+'_weights.pkl'
if (datapath/filename).exists():
# print('found in directory')
with open(datapath/filename, 'rb') as inp:
wtd = pickle.load(inp, encoding='latin1')
elif (datapath/'weight_files.zip').exists():
# check the zip file
# print('load from zip')
with zipfile.ZipFile(datapath/'weight_files.zip') as wtzip:
wtd = pickle.load(wtzip.open(filename), encoding='latin1')
else:
raise Exception(f'No weight info found for {nickname}')
self.update(wtd)
self.__dict__.update(wtd)
self.filename=filename
self.config = config
# pos = self['source_lb']
# print(f'\tSource is {self["source_name"]} at ({pos[0]:.2f}, {pos[1]:.2f})')
# check format -- old format has pixels, weights at top level
srcfile = f'file "{self.filename}"' if self.source is None else f'file from source "{source.filename}"_weights.pkl'
if hasattr(self, 'nside'):
self.format=0
if config.verbose>0:
print(f'WeightMan: {srcfile} old format, nside={self.nside}')
test_elements = 'energy_bins pixels weights nside model_name radius order roi_name'.split()
assert np.all([x in wtd.keys() for x in test_elements]),f'Dict missing one of the keys {test_elements}'
if config.verbose>0:
print(f'Load weights from file {os.path.realpath(filename)}')
pos = self['source_lb']
print(f'\tFound: {self["source_name"]} at ({pos[0]:.2f}, {pos[1]:.2f})')
# extract pixel ids and nside used
self.wt_pix = self['pixels']
self.nside_wt = self['nside']
# merge the weights into a table, with default nans
# indexing is band id rows by weight pixel columns
# append one empty column for photons not in a weight pixel
# calculated weights are in a dict with band id keys
self.wts = np.full((32, len(self.wt_pix)+1), np.nan, dtype=np.float32)
weight_dict = self['weights']
for k in weight_dict.keys():
t = weight_dict[k]
if len(t.shape)==2:
t = t.T[0] #???
self.wts[k,:-1] = t
else:
self.format=1
wtdict = self.wt_dict
nsides = [v['nside'] for v in wtdict.values() ];
if config.verbose>1:
print(f'WeightMan: {srcfile} : {len(nsides)} bands'\
f' with nsides {nsides[0]} to {nsides[-1]}')
if self.source is not None:
self.source.fit_info = self.fitinfo
if config.verbose>2:
print(f'\tAdded fit info {self.fitinfo} to source')
def _new_format(self, photons):
wt_tables =self.wt_dict
data_nside=1024
photons.loc[:,'weight'] = np.nan
if self.config.verbose>1:
print(f'WeightMan: processing {len(photons):,} photons')
def load_data( band_id):
""" fetch pixels and weights for the band;
adjust pixels to the band nside
generate mask for pixels, weights
"""
band = photons[photons.band==band_id] #.query('band== @band_id')
wt_table = wt_tables[band_id]
nside = wt_table['nside']
new_weights = wt_table['wts'].astype(np.float16)
to_shift = int(2*np.log2(data_nside//nside))
data_pixels = np.right_shift(band.nest_index, to_shift)
wt_pixels=wt_table['pixels']
good = np.isin( data_pixels, wt_pixels)
if self.config.verbose>2:
print(f'\t {band_id:2}: {len(band):8,} -> {sum(good ):8,}')
return data_pixels, new_weights, good
def set_weights(band_id):
if band_id not in wt_tables.keys(): return
data_pixels, new_weights, good = load_data(band_id)
wt_pixels = wt_tables[band_id]['pixels']
indices = np.searchsorted( wt_pixels, data_pixels[good])
new_wts = new_weights[indices]
# get subset of photons in this band, with new weights
these_photons = photons[photons.band==band_id][good]
these_photons.loc[:,'weight']=new_wts
photons.loc[photons.band==band_id,'weight'] = (these_photons.weight).astype(np.float16)
# if self.config.verbose>1:
# print(f' -> {len(new_wts):8,}')
for band_id in range(16):
set_weights(band_id)
return photons
def add_weights(self, photons):
"""
get the photon pixel ids, convert to NEST (if not already) and right shift them
add column 'weight', remove 'nest_index'
remove rows with nan weight
"""
assert photons is not None
photons = self._new_format(photons)
assert photons is not None
# don't need these columns now (add flag to config to control??)
if not getattr(self.config, 'keep_pixels', False):
photons.drop(['nest_index'], axis=1, inplace=True)
if self.config.verbose>2:
print('Keeping pixels')
noweight = np.isnan(photons.weight.values)
if self.config.verbose>1:
print(f'\tremove {sum(noweight):,} events without weight')
ret = photons[np.logical_not(noweight)]
assert ret is not None
return ret
def weight_radius_plots(photons):
"""
"""
import matplotlib.pyplot as plt
fig, axx = plt.subplots(2,8, figsize=(16,5), sharex=True, sharey=True)
plt.subplots_adjust(hspace=0.02, wspace=0)
for id,ax in enumerate(axx.flatten()):
subset = photons.query('band==@id & weight>0')
ax.semilogy(subset.radius, subset.weight, '.', label=f'{id}');
ax.legend(loc='upper right', fontsize=10)
ax.grid(alpha=0.5)
ax.set(ylim=(8e-4, 1.2), xlim=(0,4.9))
plt.suptitle('Weights vs. radius per band')
show_doc(WeightMan)
# export
def findsource(*pars, gal=False):
"""
Return a SkyCoord, looking up a name, or interpreting args
Optional inputs: len(pars) is 1 for a source name or Jxxx, or 2 for coordinate pair
- name -- look up the name, return None if not found
- Jxxxx.x+yyyy -- interpret to get ra, dec, then convert
- ra,dec -- assume frame=fk5
- l,b, gal=True -- assume degrees, frame=galactic
"""
import astropy.units as u
if len(pars)==1:
name = tname= pars[0]
if name.startswith('J') and (len(name)==10 or len(name)==12) and ('+' in name or '-' in name):
# parse the name for (ra,dec)
if name[5]!='.':
tname = name[:5]+'.0'+name[5:]
ra=(tname[1:3]+'h'+tname[3:7]+'m')
dec = (tname[7:10]+'d'+tname[10:12]+'m')
try:
(ra,dec) = map(lambda a: float(Angle(a, unit=u.deg).to_string(decimal=True)),(ra,dec))
skycoord = SkyCoord(ra, dec, unit='deg', frame='fk5')
except ValueError:
print(f'Attempt to parse {name} failed: expect "J1234.5+6789" or "J1234.5678"', file=sys.stderr)
return None
# elif name.startswith('J'):
# print(f'The name "{name}" starts with a "J", but cannot be parsed for ra,dec', file=sys.stderr)
# return None
else:
try:
skycoord = SkyCoord.from_name(name)
except Exception as e:
# not found
return None
elif len(pars)==2:
name = f'({pars[0]},{pars[1]})'
#gal = kwargs.get('gal', False)
skycoord=SkyCoord(*pars, unit='deg', frame='galactic' if gal else 'fk5')
else:
raise TypeError('require name or ra,dec or l,b,gal=True')
return skycoord
# hide
names = \
"""
J1740+1000
J1740.1-4444
BL Lac
CTA1
PSR J0007+7303
4FGL J0007.0+7303
Sagittarius A*
Sgr A*
Mkn 421
Vela
J1512.8-0906
junk
""".split('\n')
print(f'Test findsource name lookup using astropy.coordinate.SkyCoord\nname{"l":>16}{"b":>10}')
for name in names:
if len(name)==0: continue
r = findsource(name)
txt = '(not found)'
if r is not None:
l,b = r.galactic.l.value, r.galactic.b.value
txt = f'{l:07.3f} {b:+07.3f}'
print(f'{name:18} {txt}')
#exporti
class WTSkyCoord(SkyCoord):
def __repr__(self):
ra,dec = self.fk5.ra.deg, self.fk5.dec.deg
return f'({ra:.3f},{dec:.3f})'
#export
class FermiCatalog():
def __init__(self,config=None, max_sep=0.1):
from astropy.io import fits
self.max_sep=max_sep
if config is None:
config = Config()
if config.catalog_file is None or not Path(config.catalog_file).expanduser().is_file():
print('There is no link to 4FGL catalog file: set "catalog_file" in your config.yaml'
' or specify it in the Config() call',
file=sys.stderr)
else:
self.catalog_file = Path(config.catalog_file).expanduser()
# make this optional
with fits.open(self.catalog_file) as hdus:
self.data = data = hdus[1].data
cname = lambda n : [s.strip() for s in data[n]]
cvar = lambda a: data[a].astype(float)
ivar = lambda a: data[a].astype(int)
name = list(map(lambda x: x.strip() , data['Source_Name']))
self.skycoord = WTSkyCoord(data['RAJ2000'], data['DEJ2000'], unit='deg', frame='fk5')
self.df = pd.DataFrame(dict(
skycoord = self.skycoord,
significance = cvar('Signif_Avg'),
variability = cvar('Variability_Index'),
assoc_prob = cvar('ASSOC_PROB_BAY'), # for Bayesian, or _LR for likelihood ratio
assoc1_name = cname('ASSOC1'),
class1 = cname('CLASS1'),
flags = ivar('FLAGS'),
# ....
))
self.df.index = name
self.df.index.name = 'name'
def __repr__(self):
return f'4FGL file {self.catalog_file.name} with {len(self)} entries'
def field(self, colname):
## Todo: check type
return self.data[colname]
def __call__(self, skycoord):
"""select an entry by skydir return entry"""
idx, sep2d, _= skycoord.match_to_catalog_sky(self.skycoord)
csep = sep2d.deg[0]
return self.df.iloc[idx] if csep < self.max_sep else None
def __getitem__(self, name): return self.df.loc[name]
def __len__(self): return len(self.df)
#hide
config = Config()
if config.catalog_file is not None:
cat = FermiCatalog()
print(f'Lookup by name:\n {cat["4FGL J0001.2+4741"]}')
print(f'Lookup by position:\n {cat(SkyCoord(9.017,15.999, unit="deg", frame="galactic"))}')
#hide
# cat.df.head()
# flags = cat.df['flags']
# sum(flags.values & 2**13>0)
# sgu_flag[cat.df.assoc_prob>0]
# export
class SourceLookup():
""" Use lists of the pointlike and catalog sources to check for correspondence of a name or position
"""
max_sep = 0.1
def __init__(self, config):
from astropy.io import fits
import pandas as pd
self.config=config
zip_index = get_wtzip_index(config)
if zip_index is None:
raise Exception('Expected zip file weight_files.zip')
self.pt_dirs=zip_index['coord']
self.pt_names = zip_index['name']
if config.catalog_file is None or not Path(config.catalog_file).expanduser().is_file():
print('There is no link to 4FGL catalog file: set "catalog_file" in your config.yaml'
' or specify it in the Config() call',
file=sys.stderr)
self.cat_names=[]
self.cat_dirs =[]
else:
catalog_file = Path(config.catalog_file).expanduser()
# make this optional
with fits.open(catalog_file) as hdus:
data = hdus[1].data
self.cat_names = list(map(lambda x: x.strip() , data['Source_Name']))
self.cat_dirs = SkyCoord(data['RAJ2000'], data['DEJ2000'], unit='deg', frame='fk5')
def check_folder(self, *pars):
if len(pars)>1: return None
name = pars[0]
filename = self.config.datapath/'weight_files'/(name.replace(' ','_').replace('+','p')+'_weights.pkl')
if not filename.is_file():
return None
with open(filename, 'rb') as inp:
wd = pickle.load(inp, encoding='latin1')
#print(wd.keys(), wd['source_name'], wd['source_lb'])
self.skycoord = SkyCoord( *wd['source_lb'], unit='deg', frame='galactic')
self.check_4fgl()
return name
def __call__(self, *pars, **kwargs):
"""
Search the catalog lists. Options are:
* name of a pointlike source
* name of a source found by astropy, or a coordinate, which is close to a source in the pointlike list
* a coordinate pair (ra,dec), or (l,b, gal=True)
Returns the pointlike name.
"""
self.psep=self.csep=99 # flag not found
self.cat_name = None
# first, is the name in the weight_files folder?
name = self.check_folder(*pars)
if name is not None: return name
# then check pointlike list
if self.pt_names is not None and len(pars)==1 and pars[0] in self.pt_names:
idx = list(self.pt_names).index(pars[0])
skycoord = self.pt_dirs[idx]
else:
# get coord either by known catalog name, or explicit coordinate pair
try:
skycoord = findsource(*pars, **kwargs)
except TypeError as err:
print(err)
return None
except ValueError as err:
print(err, file=sys.stderr)
return None
if skycoord is None:
error = f'*** Name "{pars}" not found by astropy, and is not in the pointlike list'
print(error, file=sys.stderr)
return None
self.psep=0
idx, sep2d, _= skycoord.match_to_catalog_sky(self.pt_dirs)
self.psep = sep = sep2d.deg[0]
pt_name = self.pt_names[idx]
if sep > self.max_sep:
error = f'*** Name "{pars}" is {sep:.2f} deg > {self.max_sep} from pointlike source {pt_name}'
print(error, file=sys.stderr)
return None
self.skycoord=skycoord
self.check_4fgl()
return pt_name
def check_4fgl(self):
# check for 4FGL correspondence, set self.cat_name, self.csep
if len(self.cat_dirs)==0:
self.cat_name=None
return
idx, sep2d, _= self.skycoord.match_to_catalog_sky(self.cat_dirs)
self.csep = sep2d.deg[0]
self.cat_name = self.cat_names[idx] if self.csep < self.max_sep else None
show_doc(SourceLookup)
show_doc(SourceLookup.__call__)
#hide
config=Config() #catalog_file=None)
if config.valid:
sl = SourceLookup(config);
tests =\
"""
# cause problems?
J1740+1000
03bF-0173
4FGL J1510.1-5750
4FGL J1749.5-2747
J1749.5-2747
J1818.5-2036
# should all be the same
Mkn 421
P88Y2756
4FGL J1104.4+3812
# the GC source
Sgr A*
Geminga
# should fail, name not exact match
4FGL J1512.8-0905
# OK, since lookup by position
J1512.8-0905
# Name ok, but not near a gamma-ray source
IC 1623
# Pointlike source not in 4FGL, and its BL Lac association
211F-1019
87GB 234805.5+360514
# pulsars in 4FGL, but use its name
PSR B1259-63
PSR J0205+6449
""".split('\n')
for line in tests:
t = line.strip()
if len(t)==0 or t[0]=='#':
print(t)
else:
print(f' {t:20s}--> {sl(t)} ({sl.cat_name})')
print(f' {"0,0":20s}--> {sl(0,0)}')
print(f' {"0,0,gal=True":20s}--> {sl(0,0, gal=True)} ({sl.cat_name})')
#export
class PointSource():
"""Manage the position and name of a point source
"""
def __init__(self, *pars, **kwargs):
"""
"""
config = self.config = kwargs.pop('config', Config())
lookup = SourceLookup(config)
gal = kwargs.get('gal', False)
self.nickname = pt_name = lookup(*pars, ** kwargs )
if pt_name is None:
raise Exception('Source not found')
self.skycoord = lookup.skycoord
#print(pars)
if len(pars)==1:
name = pars[0]
if name==pt_name and lookup.cat_name is not None:
name = lookup.cat_name
else:
gal = kwargs.get('gal', False)
name=f'{"fk5" if gal else "gal"} ({pars[0]},{pars[1]}) '
self.name = name
gal = self.skycoord.galactic
self.l, self.b = gal.l.deg, gal.b.deg
self.cat_name = lookup.cat_name
# override 4FGL name if identified pulsar
if self.nickname.startswith('PSR ') and self.cat_name is not None and self.cat_name.startswith('4FGL '):
self.name = self.nickname
# use cat name if used J-name to find it
elif self.name.startswith('J') and self.cat_name is not None and self.cat_name.startswith('4FGL'):
self.name = self.cat_name
try:
self.wtman = WeightMan(self.config, self)
# add wtman attribute references
self.__dict__.update(self.wtman.__dict__)
except Exception as e:
print(f'Unexpected WeightMan failure: {e}', file=sys.stderr)
raise
def __str__(self):
return f'Source "{self.name}" at: (l,b)=({self.l:.3f},{self.b:.3f}), nickname {self.nickname}'
def __repr__(self): return str(self)
@property
def ra(self):
sk = self.skycoord.transform_to('fk5')
return sk.ra.value
@property
def dec(self):
sk = self.skycoord.transform_to('fk5')
return sk.dec.value
@property
def filename(self):
"""Modified name for file system"""
return self.name.replace(' ', '_').replace('+','p') if getattr(self,'nickname',None) is None else self.nickname
@classmethod
def fk5(cls, name, position):
"""position: (ra,dec) tuple """
ra,dec = position
sk = SkyCoord(ra, dec, unit='deg', frame='fk5').transform_to('galactic')
return cls(name, (sk.l.value, sk.b.value))
@property
def spectral_model(self):
if not hasattr(self, 'fit_info'): return None
modelname = self.fit_info['modelname']
pars = self.fit_info['pars']
if modelname=='LogParabola':
return self.LogParabola(pars)
elif modelname=='PLSuperExpCutoff':
return self.PLSuperExpCutoff(pars)
else:
raise Exception(f'PointSource: Unrecognized spectral model name {modelname}')
def __call__(self, energy):
"""if wtman set, return photon flux at energy"""
return self.spectral_model(energy) if self.spectral_model else None
def sed_plot(self, ax=None, figsize=(5,4), **kwargs):
"""Make an SED for the source
- kwargs -- for the Axes object (xlim, ylim, etc.)
"""
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=figsize) if ax is None else (ax.figure, ax)
x =np.logspace(2,5,61)
y = self(x)
ax.loglog(x/1e3, y*x**2 * 1e6, '-')
ax.grid(alpha=0.5)
kw = dict(xlabel='Energy (GeV)',
ylabel=r'$\mathrm{Energy\ Flux\ (eV\ cm^{-2}\ s^{-1})}$',
title=f'{self.name}',
xlim=(x.min(),x.max()),
)
kw.update(kwargs)
ax.set(**kw)
class FluxModel():
emin, emax = 1e2, 1e5
def __init__(self, pars, e0=1000):
self.pars=pars
self.e0=e0
def photon_flux(self):
return quad(self, self.emin, self.emax)[0]
def energy_flux(self):
func = lambda e: self(e) * e**2
return quad(func, self.emin, self.emax)[0]
class LogParabola(FluxModel):
def __call__(self, e):
n0,alpha,beta,e_break=self.pars
x = np.log(e_break/e)
y = (alpha - beta*x)*x
return n0*np.exp(y)
class PLSuperExpCutoff(FluxModel):
def __call__(self,e):
#print('WARNING: check e0!')
n0,gamma,cutoff,b=self.pars
return n0*(self.e0/e)**gamma*np.exp(-(e/cutoff)**b)
show_doc(PointSource)
show_doc(PointSource.fk5)
#hide
# TODO: update tests if no weight file
# for s, expect in [( PointSource('Geminga'), 'Source "Geminga" at: (l,b)=(195.134,4.266)'),
# ( PointSource('gal_source', (0,0)), 'Source "gal_source" at: (l,b)=(0.000,0.000)', ),
# ( PointSource.fk5('fk5_source',(0,0)),'Source "fk5_source" at: (l,b)=(96.337,-60.189)',)
# ]:
# assert str(s)==expect, f'expected {expect}, got {str(s)}'
# PointSource('3C 273').filename, PointSource('3C 273', nickname='3Cxxx').filename
# source = PointSource('Geminga'); print(source)
# source = PointSource(179.8, 65.04, gal=True); print(source, source.nickname)
# PointSource('PSR B1259-63')
# hide
from nbdev.export import notebook2script
notebook2script()
!date
```
Using a simple machine-learning model (Lasso regression) to predict stock prices, using only basic data
```
%matplotlib inline
from matplotlib import pyplot as plt
import datetime
import pandas_datareader.data as web
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
# increase default figure size for matplotlib
from pylab import rcParams
rcParams['figure.figsize'] = 20, 10
# I will implement a walk-forward prediction, to predict values for the future
def create_columns(df, days=7):
columns = df.columns
for n in range(1,days):
for column in columns:
new_column = "d{}-{}".format(n, column)
df[new_column] = 0
return df
def construct_features(df, days=7):
columns = df.columns
for n in range(1,days):
for column in columns:
for row in range(df.shape[0]):
column_to_update = "d{}-{}".format(n, column)
if row+1 > n:
# .ix was removed from pandas; use positional indexing via iloc
df.iloc[row, df.columns.get_loc(column_to_update)] = df.iloc[row-n, df.columns.get_loc(column)]
else:
df.iloc[row, df.columns.get_loc(column_to_update)] = np.nan
# drop existing features
# df = df.drop(columns - ['Adj Close'], axis=1)
#drop NAs
df = df.dropna()
return df
# train test split, non-randomized
def split(array, test_size):
return array[:test_size], array[test_size:]
# scale
def scale(X_train, X_test):
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
return X_train, X_test
# classify
def train(X_train, y_train, reg=0.1):
clf = Lasso(alpha=reg)
clf.fit(X_train, y_train)
return clf
# score
def predict(clf, y_test):
return clf.predict(y_test)
# import data from yahoo finance
start_date = datetime.datetime(2012,1,1)
end_date = datetime.datetime(2016,7,31)
symbol = "HGTX3.SA"
df_base = web.DataReader(symbol, 'yahoo', start_date, end_date)
#start from scratch
df_base.columns
# transform dataframe
df = df_base.drop(['Open', 'High', 'Low', 'Close'], axis=1)
days=7
df = create_columns(df, days=days)
df = construct_features(df, days=days)
df.head()
# get X and y
df_short = df.iloc[:800]
X = df_short.drop(['Adj Close', 'Volume'], axis=1).values
y_price, y_volume = df_short['Adj Close'].values, df_short['Volume'].values
# separate test and train
test_size = int(X.shape[0]*.9)
X_train, X_test = split(X, test_size)
y_price_train, y_price_test = split(y_price, test_size)
y_volume_train, y_volume_test = split(y_volume, test_size)
# train two classifiers, one for price, one for volume
clf_price = train(X_train, y_price_train, reg=0.2)
clf_volume = train(X_train, y_volume_train, reg=0.2)
X_pred = X_train
y_price_pred = np.array([])
y_volume_pred = np.array([])
for _ in y_price_test:
# get the features
x = X_pred[-1]
# predict
price = clf_price.predict(x.reshape(1, -1))
volume = clf_volume.predict(x.reshape(1, -1))
# append to y values
y_price_pred = np.append(y_price_pred, price)
y_volume_pred = np.append(y_volume_pred, volume)
# Create a new row, add the prediction values, and append to X
x = X_pred[-1][:-2]
x = np.append(price, x)
x = np.append(volume, x)
X_pred = np.append(X_pred, x.reshape(1, -1), axis=0)
# plot
full_pred = pd.DataFrame(np.append(y_price_train, y_price_pred), index=df_short.index)
base = pd.DataFrame(np.append(y_price_train, y_price_test), index=df_short.index)
ax = full_pred.plot(color='red')
base.plot(color='blue', ax=ax)
r2_score(y_price_test, y_price_pred), r2_score(y_volume_test, y_volume_pred)
list(zip(df.columns.difference(['Adj Close', 'Volume']), clf_price.coef_))
list(zip(y_price_test, y_price_pred))
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/convolutions.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/convolutions.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/convolutions.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load and display an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
Map.setCenter(-121.9785, 37.8694, 11)
Map.addLayer(image, {'bands': ['B5', 'B4', 'B3'], 'max': 0.5}, 'input image')
# Define a boxcar or low-pass kernel.
# boxcar = ee.Kernel.square({
# 'radius': 7, 'units': 'pixels', 'normalize': True
# })
boxcar = ee.Kernel.square(7, 'pixels', True)
# Smooth the image by convolving with the boxcar kernel.
smooth = image.convolve(boxcar)
Map.addLayer(smooth, {'bands': ['B5', 'B4', 'B3'], 'max': 0.5}, 'smoothed')
# Define a Laplacian, or edge-detection kernel.
laplacian = ee.Kernel.laplacian8(1, False)
# Apply the edge-detection kernel.
edgy = image.convolve(laplacian)
Map.addLayer(edgy,
{'bands': ['B5', 'B4', 'B3'], 'max': 0.5},
'edges')
```
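To make the effect of the boxcar (low-pass) kernel concrete, here is a plain NumPy sketch of the same smoothing on a tiny array (illustrative only; Earth Engine performs the convolution server-side, and a radius of 1 is used here instead of 7):

```python
import numpy as np

def boxcar_smooth(img, radius):
    """Convolve a 2-D array with a normalized square (boxcar) kernel."""
    size = 2 * radius + 1
    kernel = np.ones((size, size)) / size**2
    padded = np.pad(img, radius, mode='edge')  # replicate edges at the border
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+size, j:j+size] * kernel)
    return out

img = np.array([[0., 0., 0.], [0., 9., 0.], [0., 0., 0.]])
print(boxcar_smooth(img, 1))  # the central spike is spread over its neighborhood
```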
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# NWB use-case pvc-7
--- Data courtesy of Aleena Garner, Allen Institute for Brain Science ---
Here we demonstrate how data from the NWB pvc-7 use-case can be stored in NIX files.
### Context:
- *In vivo* calcium imaging of layer 4 cells in mouse primary visual cortex.
- Two-photon images sampled @ 30 Hz
- Visual stimuli of sinusoidal moving gratings were presented.
- In this example, we use a subset of the original data file.
- We only use image frames 5000 to 6000.
- Image data was 10 times down-sampled.
```
from nixio import *
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
from utils.notebook import print_stats
from utils.plotting import Plotter
```
## Open a file and inspect its content
```
f = File.open("data/pvc-7.nix.h5", FileMode.ReadOnly)
print_stats(f.blocks)
block = f.blocks[0]
print_stats(block.data_arrays)
print_stats(block.tags)
```
## Explore Stimulus
```
# get recording tag
recording = block.tags[0]
# stimulus combinations array
stimulus = recording.features[0].data
# display the stimulus conditions
for label in stimulus.dimensions[0].labels:
print(label + ' :', end=' ')
print('\n')
# actual stimulus condition values
for cmb in stimulus.data[:]:
for x in cmb:
print("%.2f\t" % x, end=' ')
print('\n')
# get particular stimulus combination
index = 2
print "a stimulus combination %s" % str(stimulus.data[index])
# find out when stimulus was displayed
start = recording.position[index]
end = recording.extent[index]
print "was displayed from frame %d to frame %d" % (start, end)
```
## Explore video and imaging data
```
# get movie arrays from file (list() is needed because filter() is lazy in Python 3)
movies = list(filter(lambda x: x.type == 'movie', recording.references))
print_stats(movies)
# get mouse image at the beginning of the selected stimulus
mouse = movies[1]
image_index = int(np.where(np.array(mouse.dimensions[0].ticks) > start)[0][0])
plt.imshow(mouse.data[image_index])
# get eye image at the end of the selected stimulus
eye = movies[0]
image_index = int(np.where(np.array(eye.dimensions[0].ticks) > end)[0][0])
plt.imshow(eye.data[image_index])
# get 2-photon image at the beginning of the selected stimulus
imaging = list(filter(lambda x: x.type == 'imaging', recording.references))[0]
image_index = int(np.where(np.array(imaging.dimensions[0].ticks) > start)[0][0])
plt.imshow(imaging.data[image_index])
# plot mouse speed over the whole window (TODO: add stimulus events)
speeds = list(filter(lambda x: x.type == 'runspeed', recording.references))[0]
p = Plotter()
p.add(speeds)
p.plot()
f.close()
```
# Processing Dirty Data
## Background
This is fake data generated to demonstrate the capabilities of `pyjanitor`. It contains a bunch of common problems that we regularly encounter when working with data. Let's go fix it!
### Load Packages
Importing `pyjanitor` is all that's needed to give pandas DataFrames extra methods for working with your data.
```
import pandas as pd
import janitor
```
## Load Data
```
df = pd.read_excel('dirty_data.xlsx', engine='openpyxl')
df
```
## Cleaning Column Names
There are a bunch of problems with this data. First, the column names are not lowercase and they contain spaces, which makes them cumbersome to use programmatically. To solve this, we can use the `clean_names()` method.
```
df_clean = df.clean_names()
df_clean.head(2)
```
Notice how the column names have been cleaned up: they are now lowercase, with spaces replaced by underscores.
If you squint at the unclean dataset, you'll notice one row and one column that are entirely empty. We can fix this too! Building on the code block above, let's remove those empty rows and columns using the `remove_empty()` method:
```
df_clean = df.clean_names().remove_empty()
df_clean.head(9).tail(4)
```
Now this is starting to shape up well!
## Renaming Individual Columns
Next, let's rename some of the columns. `%_allocated` and `full_time?` contain non-alphanumeric characters, which makes them a bit harder to use. We can rename them using the `rename_column()` method:
```
df_clean = (
df
.clean_names()
.remove_empty()
.rename_column("%_allocated", "percent_allocated")
.rename_column("full_time_", "full_time")
)
df_clean.head(5)
```
Note how we now have really nice column names! You might be wondering why I'm not modifying the two certification columns -- that is the next thing we'll tackle.
## Coalescing Columns
If we look more closely at the two `certification` columns, we'll see that they look like this:
```
df_clean[['certification', 'certification_1']]
```
Rows 8 and 11 have NaN in the left certification column but have a value in the right certification column. Let's assume for a moment that the left certification column is intended to record the first certification a teacher obtained. In that case, the values in the right certification column on rows 8 and 11 should be moved to the left column. Let's do that with the `coalesce()` method:
```
df_clean = (
df
.clean_names()
.remove_empty()
.rename_column("%_allocated", "percent_allocated")
.rename_column("full_time_", "full_time")
.coalesce(
column_names=['certification', 'certification_1'],
new_column_name='certification'
)
)
df_clean
```
Awesome stuff! Instead of two columns of scattered data, we now have one densely populated column.
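Under the hood, this kind of coalescing is just a row-wise "first non-null" across the listed columns. As a sanity check, here is the same idea in plain pandas on a small hypothetical frame mirroring the two certification columns (the values below are made up):

```python
import pandas as pd

# Hypothetical data mirroring the two certification columns.
df = pd.DataFrame({
    "certification": ["Physical ed", None, "Instr. music", None],
    "certification_1": [None, "Theater", None, "Vocal music"],
})

# A row-wise "first non-null": keep the left value, fall back to the right one.
coalesced = df["certification"].combine_first(df["certification_1"])
print(coalesced.tolist())
# → ['Physical ed', 'Theater', 'Instr. music', 'Vocal music']
```

`coalesce()` wraps essentially this operation in a single chainable step.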
## Dealing with Excel Dates
Finally, notice how the `hire_date` column isn't date-formatted: the values are stored as Excel serial numbers rather than dates.
To clean up this data, we can use the `convert_excel_date()` method.
```
df_clean = (
df
.clean_names()
.remove_empty()
.rename_column('%_allocated', 'percent_allocated')
.rename_column('full_time_', 'full_time')
.coalesce(['certification', 'certification_1'], 'certification')
.convert_excel_date('hire_date')
)
df_clean
```
We have a cleaned dataframe!
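For the curious: Excel's default (1900) date system stores a date as the number of days since 1899-12-30, so the conversion can also be sketched in plain pandas. The serial numbers below are made up for illustration:

```python
import pandas as pd

# Excel's 1900 date system: serial number = days since 1899-12-30.
# For example, serial 43694 is 2019-08-17.
serials = pd.Series([42690, 43479, 43694])
dates = pd.to_datetime(serials, unit="D", origin="1899-12-30")
print([str(d) for d in dates.dt.date])
# → ['2016-11-16', '2019-01-14', '2019-08-17']
```

This is essentially the arithmetic that `convert_excel_date()` performs for you.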
### Imports
```
import importlib
from AIBind.import_modules import *
from AIBind import AIBind
importlib.reload(AIBind)
```
### GPU Settings
```
str(subprocess.check_output('nvidia-smi', shell = True)).split('\\n')
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
```
### VAENet
#### Read In Test Datasets
```
targets_test = []
targets_validation = []
edges_test = []
edges_validation = []
train_sets = []
for run_number in tqdm(range(5)):
    targets_test.append(pd.read_csv('/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_nodes_' + str(run_number) + '.csv'))
    edges_test.append(pd.read_csv('/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/test_unseen_edges_' + str(run_number) + '.csv'))
    targets_validation.append(pd.read_csv('/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/validation_unseen_nodes_' + str(run_number) + '.csv'))
    edges_validation.append(pd.read_csv('/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/validation_unseen_edges_' + str(run_number) + '.csv'))
    train_sets.append(pd.read_csv('/data/sars-busters-consolidated/GitData/VecNet_Unseen_Targets/train_' + str(run_number) + '.csv'))
```
#### AIBind Object
```
with open('/data/sars-busters-consolidated/chemicals/vae_chemicals.csv', 'rb') as file:
    drugs = pkl.load(file)
with open('/data/sars-busters/Mol2Vec/amino_01_w_embed.pkl', 'rb') as file:
    targets = pkl.load(file)
targets = targets.rename(columns = {'Label' : 'target_aa_code'})
targets_to_add = list(set(pd.concat(targets_test)['target_aa_code']).difference(targets['target_aa_code']))
targets_to_add = pd.DataFrame(targets_to_add)
targets_to_add.columns = ['target_aa_code']
print ("Number Of Targets To Add : ", targets_to_add.shape[0])
# NOTE: this cell uses `vaenet_object`, which is only created in the cell
# below; the object must already exist (e.g. from an earlier run) at this point.
targets_to_add = vaenet_object.get_protvec_embeddings(prediction_interactions = targets_to_add,
embedding_dimension = 100,
replace_dataframe = False,
return_normalisation_conststants = False,
delimiter = '\t')
targets = targets[['target_aa_code', 'normalized_embeddings']]
targets = pd.concat([targets, targets_to_add])
# Create object
vaenet_object = AIBind.AIBind(interactions_location = '/data/sars-busters-consolidated/GitData/interactions/Network_Derived_Negatives.csv',
interactions = None,
interaction_y_name = 'Y',
absolute_negatives_location = None,
absolute_negatives = None,
drugs_location = None,
drugs_dataframe = drugs,
drug_inchi_name = 'InChiKey',
drug_smile_name = 'SMILE',
targets_location = None,
targets_dataframe = targets,
target_seq_name = 'target_aa_code',
mol2vec_location = None,
mol2vec_model = None,
protvec_location = '/home/sars-busters/Mol2Vec/Results/protVec_100d_3grams.csv',
protvec_model = None,
nodes_test = targets_test,
nodes_validation = targets_validation,
edges_test = edges_test,
edges_validation = edges_validation,
model_out_dir = '/data/sars-busters-consolidated/vaenet/KF-FinalTargets/',
debug = False)
```
#### Update Drugs and Targets
```
vaenet_object.get_external_drug_embeddings(pred_drug_embeddings = None,
normalized = False,
replace_dataframe = True,
return_normalisation_conststants = False)
vaenet_object.get_external_target_embeddings(pred_target_embeddings = None,
normalized = False,
replace_dataframe = True,
return_normalisation_conststants = False)
```
#### Create Train Sets
```
vaenet_object.create_train_sets(unseen_nodes_flag = False,
data_leak_check = True)
```
#### Train
```
vaenet_object.train_vecnet(model_name = 'vaenet_5_fold',
epochs = 30,
version = 0,
learning_rate = 0.00001,
beta_1 = 0.9,
beta_2 = 0.999,
batch_size = 16,
chunk_test_frequency = 250)
```
#### Validation
```
vaenet_object.get_validation_results(model_name = None,
show_plots = False,
plot_title = 'Validation Results - 5 Fold Cross Validation',
num_cols = 2,
plot_height = 1500,
plot_width = 1500,
write_plot_to_html = False,
plot_dir = None,
plot_name = None)
```
#### Test Results
```
vaenet_object.get_test_results(model_name = None,
version_number = None,
optimal_validation_model = None,
drug_filter_list = [],
target_filter_list = [])
```
#### Prediction
```
sars_targets = pd.read_csv('/data/External Predictions/SARS Sequences/20201203_Targets_Sequences_SARS_Cov2.csv')
sars_d_list = []
sars_t_list = []
for d in list(drugs['InChiKey'])[:500]:
    sars_d_list = sars_d_list + ([d] * len(list(sars_targets['Sequence'])))
    sars_t_list = sars_t_list + list(sars_targets['Sequence'])
predict_df = pd.DataFrame(list(zip(sars_d_list, sars_t_list)))
predict_df.columns = ['InChiKey', 'target_aa_code']
predict_df = predict_df.drop_duplicates(keep = False)
predict_df
vaenet_object.protvec_location = '/home/sars-busters/Mol2Vec/Results/protVec_100d_3grams.csv'
sars_embeddings = vaenet_object.get_protvec_embeddings(prediction_interactions = predict_df,
embedding_dimension = 100,
replace_dataframe = False,
return_normalisation_conststants = False,
delimiter = '\t')
sars_embeddings
vaenet_object.get_fold_averaged_prediction_results(model_name = None,
version_number = None,
model_paths = [],
optimal_validation_model = None,
test_sets = [predict_df],
get_drug_embed = False,
pred_drug_embeddings = None,
drug_embed_normalized = False,
get_target_embed = True,
pred_target_embeddings = sars_embeddings,
target_embed_normalized = False,
drug_filter_list = [],
target_filter_list = [],
return_dataframes = False)
```
<p align="center">
<img width="100%" src="../../../multimedia/mindstorms_51515_logo.png">
</p>
# `my_favorite_color`
Python equivalent of the `My favorite color` program. Makes Charlie react differently depending on the color we show him.
# Required robot
* Charlie (with color sensor and color palette)
<img src="../multimedia/charlie_color.jpg" width="50%" align="center">
# Source code
You can find the code in the accompanying [`.py` file](https://github.com/arturomoncadatorres/lego-mindstorms/blob/main/base/charlie/programs/my_favorite_color.py). To get it running, simply copy and paste it in a new Mindstorms project.
# Imports
```
from mindstorms import MSHub, Motor, MotorPair, ColorSensor, DistanceSensor, App
from mindstorms.control import wait_for_seconds, wait_until, Timer
from mindstorms.operator import greater_than, greater_than_or_equal_to, less_than, less_than_or_equal_to, equal_to, not_equal_to
import math
```
# Initialization
```
hub = MSHub()
app = App()
print("-"*15 + " Execution started " + "-"*15 + "\n")
```
# Turn off center button
By setting its color to black
```
print("Turning center button off...")
hub.status_light.on('black')
print("DONE!")
```
# Set arm motors to starting position
In the original Scratch program, there's a `Charlie - Calibrate` block. I don't know exactly how the calibration is done, but in the end I think that it just sets the motor to position 0.
Notice that moving motors to a specific position needs to be done individually (and sequentially). In other words, we cannot run a `MotorPair` to a position, only one motor at a time.
```
print("Setting arm motors to position 0...")
motor_b = Motor('B') # Left arm
motor_f = Motor('F') # Right arm
motor_b.run_to_position(0)
motor_f.run_to_position(0)
print("DONE!")
```
# Configure motors
```
print("Configuring motors...")
motors_movement = MotorPair('A', 'E')
motors_movement.set_default_speed(80)
print("DONE!")
```
# Program color reactions
```
color_sensor = ColorSensor('C')
```
We will use a counter to control printing on the console.
First, we need to initialize it.
```
ii = 0
```
Execution
```
while True:  # This will make the execution go forever
    if ii == 0:
        # We will only print "Waiting for color" when the counter is 0.
        print("Waiting for color...")
    # Get the color value.
    color = color_sensor.get_color()
    # We need to make sure that Charlie reacts only when he perceives a color.
    # To do so, we check what color Charlie perceived. If he didn't perceive
    # a color, color_sensor.get_color() returns None.
    if color is not None:
        print("Reacting to color " + color + "...")
        # Turn on the center button of the corresponding color.
        hub.status_light.on(color)
        # Let's give it a short pause.
        wait_for_seconds(1)
        # Define reactions to each color.
        if color == 'green':
            hub.light_matrix.show_image('HAPPY')
            motors_movement.move(40, unit='cm', steering=100)
            motors_movement.move(40, unit='cm', steering=-100)
        elif color == 'yellow':
            hub.light_matrix.show_image('SURPRISED')
            motors_movement.move(-20, unit='cm', steering=0)
        elif color == 'red':
            hub.light_matrix.show_image('ANGRY')
            motors_movement.move(10, unit='cm', steering=0)
        else:
            # For all the other colors, do nothing.
            # If we want a program to do nothing, we can use pass.
            # Just for the sake of demonstration/completion.
            pass
        # Turn off center button and image.
        hub.status_light.on('black')
        hub.light_matrix.off()
        print("DONE!")
        # Reset the counter to 0, to make sure that "Waiting for color" is printed (again).
        ii = 0
    else:
        # If no color was perceived, increase the counter.
        # This way, we make sure that the "Waiting for color" text is printed on the console
        # just once (when it has a value of 0). Otherwise, it would be printed continuously every time the sensor got its reading.
        ii = ii + 1  # Alternatively, we could've written ii += 1, which is a bit shorter (and very common in Python).
```
Notice, though, how we will never reach the following line, since the execution of the program is in a `while True`.
```
print("-"*15 + " Execution ended " + "-"*15 + "\n")
```
# Assignment 3: Question Answering
Welcome to this week's assignment of Course 4. In this assignment you will explore question answering. You will implement the "Text-to-Text Transfer Transformer" (better known as T5). Since you implemented transformers from scratch last week, you will now be able to use them.
<img src = "qa.png">
## Outline
- [Overview](#0)
- [Part 0: Importing the Packages](#0)
- [Part 1: C4 Dataset](#1)
- [1.1 Pre-Training Objective](#1.1)
- [1.2 Process C4](#1.2)
- [1.2.1 Decode to natural language](#1.2.1)
- [1.3 Tokenizing and Masking](#1.3)
- [Exercise 01](#ex01)
- [1.4 Creating the Pairs](#1.4)
- [Part 2: Transformer](#2)
- [2.1 Transformer Encoder](#2.1)
- [2.1.1 The Feedforward Block](#2.1.1)
- [Exercise 02](#ex02)
- [2.1.2 The Encoder Block](#2.1.2)
- [Exercise 03](#ex03)
- [2.1.3 The Transformer Encoder](#2.1.3)
- [Exercise 04](#ex04)
<a name='0'></a>
### Overview
This assignment will be different from the two previous ones. Due to memory and time constraints of this environment, you will not be able to train a model and use it for inference. Instead, you will create the necessary building blocks for the transformer encoder model and will use a pretrained version of the same model in two ungraded labs after this assignment.
After completing these 3 labs (1 graded and 2 ungraded) you will:
* Implement the code necessary for Bidirectional Encoder Representations from Transformers (BERT).
* Understand how the C4 dataset is structured.
* Use a pretrained model for inference.
* Understand how the "Text-to-Text Transfer Transformer" (T5) model works.
<a name='0'></a>
# Part 0: Importing the Packages
```
import ast
import string
import textwrap
import itertools
import numpy as np
import trax
from trax import layers as tl
from trax.supervised import decoding
# Will come handy later.
wrapper = textwrap.TextWrapper(width=70)
# Set random seed
np.random.seed(42)
```
<a name='1'></a>
## Part 1: C4 Dataset
[C4](https://www.tensorflow.org/datasets/catalog/c4) is a huge dataset. For the purposes of this assignment you will use a few examples out of it, which are present in `data.txt`. C4 is based on the [Common Crawl](https://commoncrawl.org/) project. Feel free to read more on their website.
Run the cell below to see what the examples look like.
```
# load example jsons
example_jsons = list(map(ast.literal_eval, open('data.txt')))
# Printing the examples to see how the data looks like
for i in range(5):
    print(f'example number {i+1}: \n\n{example_jsons[i]} \n')
```
Notice the `b` before each string? This means that the data comes as `bytes` objects rather than `str` objects; a `bytes` object is simply a sequence of raw bytes that can be decoded into text. For the rest of the assignment the name `strings` will still be used to describe the data.
To check this run the following cell:
```
type(example_jsons[0].get('text'))
```
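To make the distinction concrete, here is a quick plain-Python illustration (nothing Trax-specific):

```python
# A `bytes` object is a sequence of raw bytes; decode() turns it into
# text (a `str`), and encode() goes the other way.
raw = b'Beginners BBQ Class'
text = raw.decode('utf-8')

print(type(raw), type(text))
print(text)
# → Beginners BBQ Class
```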
<a name='1.1'></a>
### 1.1 Pre-Training Objective
**Note:** The word "mask" will be used throughout this assignment in the context of hiding/removing word(s).
You will be implementing the BERT loss as shown in the following image.
<img src = "loss.png" width="600" height = "400">
Assume you have the following text: <span style = "color:blue"> **Thank you <span style = "color:red">for inviting </span> me to your party <span style = "color:red">last</span> week** </span>
Now as input you will mask the words in red in the text:
<span style = "color:blue"> **Input:**</span> Thank you **X** me to your party **Y** week.
<span style = "color:blue">**Output:**</span> The model should predict the words(s) for **X** and **Y**.
**Z** is used to represent the end.
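Before implementing this on token ids in Section 1.3, it can help to see the objective at the word level. The toy function below is only an illustration: the name, the word-level granularity, and the fixed seed are all made up for this sketch, while the graded version works on token ids and uses sentinel ids taken from the end of the vocabulary.

```python
import random
import string

def word_level_mask(sentence, noise=0.3, seed=0):
    """Toy word-level sketch of the masking objective: masked words move to
    the targets, and each masked span is marked by a fresh sentinel
    (<Z>, <Y>, ...) in both the inputs and the targets."""
    rng = random.Random(seed)
    sentinels = (f'<{c}>' for c in reversed(string.ascii_uppercase))
    inputs, targets = [], []
    prev_masked = False
    for word in sentence.split():
        if rng.random() < noise:
            if not prev_masked:  # start a new masked span
                sentinel = next(sentinels)
                inputs.append(sentinel)
                targets.append(sentinel)
            targets.append(word)
            prev_masked = True
        else:
            inputs.append(word)
            prev_masked = False
    return ' '.join(inputs), ' '.join(targets)

inp, targ = word_level_mask("Thank you for inviting me to your party last week")
print(inp)
print(targ)
```

Note how consecutive masked words would share a single sentinel, and how each sentinel appears in both the inputs and the targets; that is the behavior you will reproduce with token ids below.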
<a name='1.2'></a>
### 1.2 Process C4
C4 only has a plain-string `text` field, so you will tokenize it to produce `inputs` and `targets` for supervised learning. Given your inputs, the goal is to predict the targets during training.
You will now take the `text` and convert it to `inputs` and `targets`.
```
# Grab text field from dictionary
natural_language_texts = [example_json['text'] for example_json in example_jsons]
# Example text (here, the fifth one)
natural_language_texts[4]
```
<a name='1.2.1'></a>
#### 1.2.1 Decode to natural language
The following functions will help you `detokenize` and `tokenize` the text data.
The `sentencepiece` vocabulary was used to convert from text to ids. This vocabulary file is loaded and used in these helper functions.
`natural_language_texts` has the text from the examples we gave you.
Run the cells below to see what is going on.
```
# Special tokens
PAD, EOS, UNK = 0, 1, 2
def detokenize(np_array):
    return trax.data.detokenize(
        np_array,
        vocab_type='sentencepiece',
        vocab_file='sentencepiece.model',
        vocab_dir='.')

def tokenize(s):
    # The trax.data.tokenize function operates on streams,
    # that's why we have to create a 1-element stream with iter
    # and later retrieve the result with next.
    return next(trax.data.tokenize(
        iter([s]),
        vocab_type='sentencepiece',
        vocab_file='sentencepiece.model',
        vocab_dir='.'))
# printing the encoding of each word to see how subwords are tokenized
tokenized_text = [(tokenize(word).tolist(), word) for word in natural_language_texts[0].split()]
print(tokenized_text, '\n')
# We can see that detokenize successfully undoes the tokenization
print(f"tokenized: {tokenize('Beginners')}\ndetokenized: {detokenize(tokenize('Beginners'))}")
```
As you can see above, you were able to take a piece of string and tokenize it.
Now you will create `input` and `target` pairs that will allow you to train your model. T5 uses the ids at the end of the vocab file as sentinels. For example, it will replace:
- `vocab_size - 1` by `<Z>`
- `vocab_size - 2` by `<Y>`
- and so forth.
That is, each sentinel id is assigned its own character.
The `pretty_decode` function below, which you will use in a bit, helps in handling the type when decoding. Take a look and try to understand what the function is doing.
Notice that:
```python
string.ascii_letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
```
**NOTE:** Targets may have more than the 52 sentinels we replace, but this is just to give you an idea of things.
```
vocab_size = trax.data.vocab_size(
    vocab_type='sentencepiece',
    vocab_file='sentencepiece.model',
    vocab_dir='.')

def get_sentinels(vocab_size=vocab_size, display=False):
    sentinels = {}
    for i, char in enumerate(reversed(string.ascii_letters), 1):
        decoded_text = detokenize([vocab_size - i])
        # Sentinels, ex: <Z> - <a>
        sentinels[decoded_text] = f'<{char}>'
        if display:
            print(f'The sentinel is <{char}> and the decoded token is:', decoded_text, " --- (vocab_size-i)", vocab_size - i)
    return sentinels

sentinels = get_sentinels(vocab_size, display=True)

def pretty_decode(encoded_str_list, sentinels=sentinels):
    # If already a string, just do the replacements.
    if isinstance(encoded_str_list, (str, bytes)):
        for token, char in sentinels.items():
            encoded_str_list = encoded_str_list.replace(token, char)
        return encoded_str_list
    # We need to decode and then prettify it.
    return pretty_decode(detokenize(encoded_str_list))
pretty_decode("I want to dress up as an Intellectual this halloween.")
```
The functions above make your `inputs` and `targets` more readable. For example, you might see something like this once you implement the masking function below.
- <span style="color:red"> Input sentence: </span> Younes and Lukasz were working together in the lab yesterday after lunch.
- <span style="color:red">Input: </span> Younes and Lukasz **Z** together in the **Y** yesterday after lunch.
- <span style="color:red">Target: </span> **Z** were working **Y** lab.
<a name='1.3'></a>
### 1.3 Tokenizing and Masking
You will now implement the `tokenize_and_mask` function. This function will allow you to tokenize and mask input words with a noise probability. We usually mask 15% of the words.
<a name='ex01'></a>
### Exercise 01
```
# UNQ_C1
# GRADED FUNCTION: tokenize_and_mask
def tokenize_and_mask(text, vocab_size=vocab_size, noise=0.15,
                      randomizer=np.random.uniform, tokenize=tokenize):
    """Tokenizes and masks a given input.

    Args:
        text (str or bytes): Text input.
        vocab_size (int, optional): Size of the vocabulary. Defaults to vocab_size.
        noise (float, optional): Probability of masking a token. Defaults to 0.15.
        randomizer (function, optional): Function that generates random values. Defaults to np.random.uniform.
        tokenize (function, optional): Tokenizer function. Defaults to tokenize.

    Returns:
        tuple: Tuple of lists of integers associated to inputs and targets.
    """
    # current sentinel number (starts at 0)
    cur_sentinel_num = 0
    # inputs
    inps = []
    # targets
    targs = []

    ### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###

    # prev_no_mask is True if the previous token was NOT masked, False otherwise
    # set prev_no_mask to True
    prev_no_mask = True

    # loop through tokenized `text`
    for token in tokenize(text):
        # check if the `noise` is greater than a random value (weighted coin flip)
        if randomizer() < noise:
            # check to see if the previous token was not masked
            if prev_no_mask == True:  # add new masked token at end_id
                # number of masked tokens increases by 1
                cur_sentinel_num += 1
                # compute `end_id` by subtracting current sentinel value out of the total vocabulary size
                end_id = vocab_size - cur_sentinel_num
                # append `end_id` at the end of the targets
                targs.append(end_id)
                # append `end_id` at the end of the inputs
                inps.append(end_id)
            # append `token` at the end of the targets
            targs.append(token)
            # set prev_no_mask accordingly
            prev_no_mask = False
        else:  # don't have two masked tokens in a row
            # append `token` at the end of the inputs
            inps.append(token)
            # set prev_no_mask accordingly
            prev_no_mask = True

    ### END CODE HERE ###

    return inps, targs
# Some logic to mock a np.random value generator
# Needs to be in the same cell for it to always generate same output
def testing_rnd():
    def dummy_generator():
        vals = np.linspace(0, 1, 10)
        cyclic_vals = itertools.cycle(vals)
        for _ in range(100):
            yield next(cyclic_vals)

    dumr = itertools.cycle(dummy_generator())

    def dummy_randomizer():
        return next(dumr)

    return dummy_randomizer
input_str = natural_language_texts[0]
print(f"input string:\n\n{input_str}\n")
inps, targs = tokenize_and_mask(input_str, randomizer=testing_rnd())
print(f"tokenized inputs:\n\n{inps}\n")
print(f"targets:\n\n{targs}")
```
#### **Expected Output:**
```CPP
b'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'
tokenized inputs:
[31999, 15068, 4501, 3, 12297, 3399, 16, 5964, 7115, 31998, 531, 25, 241, 12, 129, 394, 44, 492, 31997, 58, 148, 56, 43, 8, 1004, 6, 474, 31996, 39, 4793, 230, 5, 2721, 6, 1600, 1630, 31995, 1150, 4501, 15068, 16127, 6, 9137, 2659, 5595, 31994, 782, 3624, 14627, 15, 12612, 277, 5, 216, 31993, 2119, 3, 9, 19529, 593, 853, 21, 921, 31992, 12, 129, 394, 28, 70, 17712, 1098, 5, 31991, 3884, 25, 762, 25, 174, 12, 214, 12, 31990, 3, 9, 3, 23405, 4547, 15068, 2259, 6, 31989, 6, 5459, 6, 13618, 7, 6, 3604, 1801, 31988, 6, 303, 24190, 11, 1472, 251, 5, 37, 31987, 36, 16, 8, 853, 19, 25264, 399, 568, 31986, 21, 21380, 7, 34, 19, 339, 5, 15746, 31985, 8, 583, 56, 36, 893, 3, 9, 3, 31984, 9486, 42, 3, 9, 1409, 29, 11, 25, 31983, 12246, 5977, 13, 284, 3604, 24, 19, 2657, 31982]
targets:
[31999, 12847, 277, 31998, 9, 55, 31997, 3326, 15068, 31996, 48, 30, 31995, 727, 1715, 31994, 45, 301, 31993, 56, 36, 31992, 113, 2746, 31991, 216, 56, 31990, 5978, 16, 31989, 379, 2097, 31988, 11, 27856, 31987, 583, 12, 31986, 6, 11, 31985, 26, 16, 31984, 17, 18, 31983, 56, 36, 31982, 5]
```
You will now use the inputs and the targets from the `tokenize_and_mask` function you implemented above. Take a look at the masked sentence using your `inps` and `targs` from the sentence above.
```
print('Inputs: \n\n', pretty_decode(inps))
print('\nTargets: \n\n', pretty_decode(targs))
```
<a name='1.4'></a>
### 1.4 Creating the Pairs
You will now create pairs using your dataset. You will iterate over your data and create (inp, targ) pairs using the functions that we have given you.
```
# Apply tokenize_and_mask
inputs_targets_pairs = [tokenize_and_mask(text) for text in natural_language_texts]
def display_input_target_pairs(inputs_targets_pairs):
    for i, inp_tgt_pair in enumerate(inputs_targets_pairs, 1):
        inps, tgts = inp_tgt_pair
        inps, tgts = pretty_decode(inps), pretty_decode(tgts)
        print(f'[{i}]\n\n'
              f'inputs:\n{wrapper.fill(text=inps)}\n\n'
              f'targets:\n{wrapper.fill(text=tgts)}\n\n\n\n')
display_input_target_pairs(inputs_targets_pairs)
```
<a name='2'></a>
# Part 2: Transformer
We now load a Transformer model checkpoint that has been pre-trained using the above C4 dataset and decode from it. This will save you a lot of time compared to training the model yourself. Later in this notebook, we will show you how to fine-tune your model.
<img src = "fulltransformer.png" width="300" height="600">
Start by loading in the model. We copy the checkpoint to a local directory for speed; otherwise initialization takes a very long time. Last week you implemented the decoder part of the transformer. Now you will implement the encoder part. Concretely, you will implement the following.
<img src = "encoder.png" width="300" height="600">
<a name='2.1'></a>
### 2.1 Transformer Encoder
You will now implement the transformer encoder. Concretely you will implement two functions. The first function is `FeedForwardBlock`.
<a name='2.1.1'></a>
#### 2.1.1 The Feedforward Block
The `FeedForwardBlock` function is an important one so you will start by implementing it. To do so, you need to return a list of the following:
- [`tl.LayerNorm()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.normalization.LayerNorm) = layer normalization.
- [`tl.Dense(d_ff)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) = fully connected layer.
- [`activation`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.activation_fns.Relu) = activation relu, tanh, sigmoid etc.
- `dropout_middle` = we gave you this function (don't worry about its implementation).
- [`tl.Dense(d_model)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) = fully connected layer with same dimension as the model.
- `dropout_final` = we gave you this function (don't worry about its implementation).
You can always take a look at [trax documentation](https://trax-ml.readthedocs.io/en/latest/) if needed.
**Instructions**: Implement the feedforward part of the transformer. You will be returning a list.
<a name='ex02'></a>
### Exercise 02
```
# UNQ_C2
# GRADED FUNCTION: FeedForwardBlock
def FeedForwardBlock(d_model, d_ff, dropout, dropout_shared_axes, mode, activation):
    """Returns a list of layers implementing a feed-forward block.

    Args:
        d_model: int: depth of embedding
        d_ff: int: depth of feed-forward layer
        dropout: float: dropout rate (how much to drop out)
        dropout_shared_axes: list of integers, axes to share dropout mask
        mode: str: 'train' or 'eval'
        activation: the non-linearity in feed-forward layer

    Returns:
        A list of layers which maps vectors to vectors.
    """
    dropout_middle = tl.Dropout(rate=dropout,
                                shared_axes=dropout_shared_axes,
                                mode=mode)

    dropout_final = tl.Dropout(rate=dropout,
                               shared_axes=dropout_shared_axes,
                               mode=mode)

    ### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###

    ff_block = [
        # trax Layer normalization
        tl.LayerNorm(),
        # trax Dense layer using `d_ff`
        tl.Dense(d_ff),
        # activation() layer - you need to call (use parentheses) this func!
        activation(),
        # dropout middle layer
        dropout_middle,
        # trax Dense layer using `d_model`
        tl.Dense(d_model),
        # dropout final layer
        dropout_final,
    ]

    ### END CODE HERE ###

    return ff_block
# Print the block layout
feed_forward_example = FeedForwardBlock(d_model=512, d_ff=2048, dropout=0.8, dropout_shared_axes=0, mode = 'train', activation = tl.Relu)
print(feed_forward_example)
```
#### **Expected Output:**
```CPP
[LayerNorm, Dense_2048, Relu, Dropout, Dense_512, Dropout]
```
<a name='2.1.2'></a>
#### 2.1.2 The Encoder Block
The encoder block will use the `FeedForwardBlock`.
You will have to build two residual connections. Inside the first residual connection you will have the `tl.layerNorm()`, `attention`, and `dropout_` layers. The second residual connection will have the `feed_forward`.
You will also need to implement `feed_forward`, `attention` and `dropout_` blocks.
So far you haven't seen the [`tl.Attention()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.Attention) and [`tl.Residual()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Residual) layers so you can check the docs by clicking on them.
<a name='ex03'></a>
### Exercise 03
```
# UNQ_C3
# GRADED FUNCTION: EncoderBlock
def EncoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation, FeedForwardBlock=FeedForwardBlock):
"""
Returns a list of layers that implements a Transformer encoder block.
The input to the layer is a pair, (activations, mask), where the mask was
created from the original source tokens to prevent attending to the padding
part of the input.
Args:
d_model (int): depth of embedding.
d_ff (int): depth of feed-forward layer.
n_heads (int): number of attention heads.
dropout (float): dropout rate (how much to drop out).
dropout_shared_axes (int): axes on which to share dropout mask.
mode (str): 'train' or 'eval'.
ff_activation (function): the non-linearity in feed-forward layer.
FeedForwardBlock (function): A function that returns the feed forward block.
Returns:
list: A list of layers that maps (activations, mask) to (activations, mask).
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
# Attention block
attention = tl.Attention(
# Use dimension of the model
d_feature=d_model,
# Set it equal to number of attention heads
n_heads=n_heads,
# Set it equal `dropout`
dropout=dropout,
# Set it equal `mode`
mode=mode
)
# Call the function `FeedForwardBlock` (implemented before) and pass in the parameters
feed_forward = FeedForwardBlock(
d_model,
d_ff,
dropout,
dropout_shared_axes,
mode,
ff_activation
)
# Dropout block
dropout_ = tl.Dropout(
# set it equal to `dropout`
rate=dropout,
# set it equal to the axes on which to share dropout mask
shared_axes=dropout_shared_axes,
# set it equal to `mode`
mode=mode
)
encoder_block = [
# add `Residual` layer
tl.Residual(
# add norm layer
tl.LayerNorm(),
# add attention
attention,
# add dropout
dropout_,
),
# add another `Residual` layer
tl.Residual(
# add feed forward
feed_forward,
),
]
### END CODE HERE ###
return encoder_block
# Print the block layout
encoder_example = EncoderBlock(d_model=512, d_ff=2048, n_heads=6, dropout=0.8, dropout_shared_axes=0, mode = 'train', ff_activation=tl.Relu)
print(encoder_example)
```
#### **Expected Output:**
```CPP
[Serial_in2_out2[
Branch_in2_out3[
None
Serial_in2_out2[
LayerNorm
Serial_in2_out2[
Dup_out2
Dup_out2
Serial_in4_out2[
Parallel_in3_out3[
Dense_512
Dense_512
Dense_512
]
PureAttention_in4_out2
Dense_512
]
]
Dropout
]
]
Add_in2
], Serial[
Branch_out2[
None
Serial[
LayerNorm
Dense_2048
Relu
Dropout
Dense_512
Dropout
]
]
Add_in2
]]
```
<a name='2.1.3'></a>
### 2.1.3 The Transformer Encoder
Now that you have implemented the `EncoderBlock`, it is time to build the full encoder. BERT, or Bidirectional Encoder Representations from Transformers, is one such encoder.
You will implement its core code in the function below by using the functions you have coded so far.
The model takes in many hyperparameters, such as the `vocab_size`, the number of classes, the dimension of your model, etc. You want to build a generic function that will take in many parameters, so you can use it later. At the end of the day, anyone can just load in an API and call transformer, but we think it is important to make sure you understand how it is built. Let's get started.
**Instructions:** For this encoder you will need a `positional_encoder` first (which is already provided) followed by `n_layers` encoder blocks, which are the same encoder blocks you previously built. Once you store the `n_layers` `EncoderBlock`s in a list, you are going to construct a `Serial` layer with the following sublayers:
- [`tl.Branch`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Branch): helps with the branching and has the following sublayers:
- `positional_encoder`.
- [`tl.PaddingMask()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.PaddingMask): layer that maps integer sequences to padding masks.
- Your list of `EncoderBlock`s
- [`tl.Select([0], n_in=2)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Select): copies, reorders, or deletes stack elements according to indices; here it keeps only the activations and drops the mask.
- [`tl.LayerNorm()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.normalization.LayerNorm).
- [`tl.Mean()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Mean): computes the mean along axis 1 (averaging over the sequence length).
- `tl.Dense()` with n_units set to n_classes.
- `tl.LogSoftmax()`
Please refer to the [trax documentation](https://trax-ml.readthedocs.io/en/latest/) for further information.
<a name='ex04'></a>
### Exercise 04
```
# UNQ_C4
# GRADED FUNCTION: TransformerEncoder
def TransformerEncoder(vocab_size=vocab_size,
n_classes=10,
d_model=512,
d_ff=2048,
n_layers=6,
n_heads=8,
dropout=0.1,
dropout_shared_axes=None,
max_len=2048,
mode='train',
ff_activation=tl.Relu,
EncoderBlock=EncoderBlock):
"""
Returns a Transformer encoder model.
The input to the model is a tensor of tokens.
Args:
vocab_size (int): vocab size. Defaults to vocab_size.
n_classes (int): how many classes on output. Defaults to 10.
d_model (int): depth of embedding. Defaults to 512.
d_ff (int): depth of feed-forward layer. Defaults to 2048.
n_layers (int): number of encoder/decoder layers. Defaults to 6.
n_heads (int): number of attention heads. Defaults to 8.
dropout (float): dropout rate (how much to drop out). Defaults to 0.1.
dropout_shared_axes (int): axes on which to share dropout mask. Defaults to None.
max_len (int): maximum symbol length for positional encoding. Defaults to 2048.
mode (str): 'train' or 'eval'. Defaults to 'train'.
ff_activation (function): the non-linearity in feed-forward layer. Defaults to tl.Relu.
EncoderBlock (function): Returns the encoder block. Defaults to EncoderBlock.
Returns:
trax.layers.combinators.Serial: A Transformer model as a layer that maps
from a tensor of tokens to activations over a set of output classes.
"""
positional_encoder = [
tl.Embedding(vocab_size, d_model),
tl.Dropout(rate=dropout, shared_axes=dropout_shared_axes, mode=mode),
tl.PositionalEncoding(max_len=max_len)
]
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
# Use the function `EncoderBlock` (implemented above) and pass in the parameters over `n_layers`
encoder_blocks = [EncoderBlock(d_model, d_ff, n_heads, dropout,
dropout_shared_axes, mode, ff_activation) for _ in range(n_layers)]
# Assemble and return the model.
return tl.Serial(
# Encode
tl.Branch(
# Use `positional_encoder`
positional_encoder,
# Use trax padding mask
tl.PaddingMask(),
),
# Use `encoder_blocks`
encoder_blocks,
# Use select layer
tl.Select([0],n_in = 2),
# Use trax layer normalization
tl.LayerNorm(),
# Map to output categories.
# Use trax mean. set axis to 1
tl.Mean(axis=1),
# Use trax Dense using `n_classes`
tl.Dense(n_classes),
# Use trax log softmax
tl.LogSoftmax(),
)
### END CODE HERE ###
# Run this cell to see the structure of your model
# Only 1 layer is used to keep the output readable
TransformerEncoder(n_layers=1)
```
#### **Expected Output:**
```CPP
Serial[
Branch_out2[
[Embedding_32000_512, Dropout, PositionalEncoding]
PaddingMask(0)
]
Serial_in2_out2[
Branch_in2_out3[
None
Serial_in2_out2[
LayerNorm
Serial_in2_out2[
Dup_out2
Dup_out2
Serial_in4_out2[
Parallel_in3_out3[
Dense_512
Dense_512
Dense_512
]
PureAttention_in4_out2
Dense_512
]
]
Dropout
]
]
Add_in2
]
Serial[
Branch_out2[
None
Serial[
LayerNorm
Dense_2048
Relu
Dropout
Dense_512
Dropout
]
]
Add_in2
]
Select[0]_in2
LayerNorm
Mean
Dense_10
LogSoftmax
]
```
**NOTE: Congratulations! You have completed all of the graded functions of this assignment.** Since the rest of the assignment takes a lot of time and memory to run, we are providing some extra ungraded labs for you to see this model in action.
**Keep it up!**
To see this model in action, continue to the next two ungraded labs. **We strongly recommend trying the Colab versions of them, as they will yield a much smoother experience.** The links to the Colabs can be found within the ungraded labs, or, if you already know how to open files within Colab, here are some shortcuts (if not, head to the ungraded labs, which contain some extra instructions):
[BERT Loss Model Colab](https://drive.google.com/file/d/1EHAbMnW6u-GqYWh5r3Z8uLbz4KNpKOAv/view?usp=sharing)
[T5 SQuAD Model Colab](https://drive.google.com/file/d/1c-8KJkTySRGqCx_JjwjvXuRBTNTqEE0N/view?usp=sharing)
| github_jupyter |
# For the assignment
Use this SQLite viewer: https://inloop.github.io/sqlite-viewer/
---
## For review
SOCKETS
```
#!/usr/bin/python3           # This is server.py file
import socket                # Import socket module
s = socket.socket()          # Create a socket object
host = socket.gethostname()  # Get local machine name
port = 12345                 # Reserve a port for your service.
s.bind((host, port))         # Bind to the port
s.listen(5)                  # Now wait for client connection.
while True:
    print('looping')
    c, addr = s.accept()     # Establish connection with client.
    print('Got connection from', addr)
    c.send(b'Thank you for connecting')  # send() takes bytes in Python 3
    c.close()                # Close the connection
    break
#!/usr/bin/python3           # This is client.py file
import socket                # Import socket module
s = socket.socket()          # Create a socket object
host = socket.gethostname()  # Get local machine name
port = 12345                 # Reserve a port for your service.
print('connecting')
s.connect((host, port))
print(s.recv(1024))
s.close()                    # Close the socket when done
```
`setup_db.py`
```
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE location
(id_location text, location text)''')
c.execute('''CREATE TABLE files
(id_location text, id_file text)''')
c.execute('''CREATE TABLE file_info
(id_file text, file_name text, file_size text, creation_time text, file_md5 text)''')
# Save (commit) the changes
conn.commit()
# Close the connection
# => any change that has not been committed will be lost
conn.close()
```
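A quick sanity check for scripts like the one above is to list the created tables via the `sqlite_master` catalog. This sketch uses an in-memory database so it is self-contained:

```python
import sqlite3

conn = sqlite3.connect(':memory:')   # in-memory DB, nothing written to disk
c = conn.cursor()
c.execute('''CREATE TABLE location (id_location text, location text)''')
c.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(c.fetchall())   # [('location',)]
conn.close()
```

Running the same `SELECT name FROM sqlite_master` query against `example.db` would confirm that all three tables were created.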
`add_files.py`
```
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
import os
import hashlib
import time
def get_file_md5(filePath):
h = hashlib.md5()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_file_sha256(filePath):
h = hashlib.sha256()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_dir_data(dir_path):
dir_path = os.path.realpath(dir_path)
#print next(os.walk(dir_path))[2]
#print os.path.basename(dir_path)
id_location = 0
id_file = 0
for dir_file in next(os.walk(dir_path))[2]:
file_name = dir_file
file_md5 = get_file_md5(dir_file)
file_sha256 = get_file_sha256(dir_file)
file_size = os.path.getsize(dir_file)
file_time = time.gmtime(os.path.getctime(dir_file))
file_formatted_time = time.strftime("%Y-%m-%d %I:%M:%S %p", file_time)
file_path = os.path.realpath(dir_file)
location_values = (id_location, file_path)
c.execute("INSERT INTO location VALUES (?, ?)", location_values)
files_values = (id_location, id_file)
c.execute("INSERT INTO files VALUES (?, ?)", files_values)
file_info_values = (id_file, file_name, file_size, file_formatted_time, file_md5)
c.execute("INSERT INTO file_info VALUES (?, ?, ?, ?, ?)", file_info_values)
id_location += 1
id_file += 1
get_dir_data('./')
# Save (commit) the changes
conn.commit()
conn.close()
```
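`get_file_md5` above reads the entire file into memory with `.read()`; for large files, updating the hash in chunks is safer. A sketch of that pattern (the function name here is ours, not part of the original script):

```python
import hashlib

def get_file_md5_chunked(file_path, chunk_size=8192):
    """Hash a file incrementally, without loading it fully into memory."""
    h = hashlib.md5()
    with open(file_path, 'rb') as fh:
        # iter() with a sentinel keeps reading chunks until read() returns b''
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```

The same structure works for `get_file_sha256` by swapping in `hashlib.sha256()`.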
`query_db.py`
```
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
print('\n\n*****location table****\n')
'''
for row in c.execute('SELECT * FROM location'):
# print(c.fetchall())
print(row)
'''
'''
c.execute('SELECT * FROM location')
print(c.fetchall())
'''
c.execute('SELECT * FROM location')
copy = c.fetchall()
for row in copy:
    print(row)
print('\n\n*****files table****\n')
c.execute('SELECT * FROM files')
copy = c.fetchall()
for row in copy:
    print(row)
print('\n\n*****file_info table****\n')
c.execute('SELECT * FROM file_info')
copy = c.fetchall()
for row in copy:
    print(row)
conn.close()
```
urllib - https://docs.python.org/2/library/urllib.html
```
import urllib.parse
import urllib.request
import re
params = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
f = urllib.request.urlopen("http://www.musi-cal.com/cgi-bin/query?%s" % params)
respData = f.read().decode('utf-8', errors='replace')
#print(respData)
count = 0
paragraphs = re.findall(r'<a .*>(.*)</a>', respData)
for eachP in paragraphs:
    print(eachP)
    if count == 5:
        break
    count += 1
```
| github_jupyter |
# Setting a Meaningful Index
The index of a DataFrame provides a label for each of the rows. If not explicitly provided, pandas uses the sequence of consecutive integers beginning at 0 as the index. In this chapter, we learn how to set one of the columns of the DataFrame as the new index so that it provides a more meaningful label for each row.
## Setting an index of a DataFrame
Instead of using the default index for your pandas DataFrame, you can use the `set_index` method to use one of the columns as the index. Let's read in a small dataset to show how this is done.
```
import pandas as pd
df = pd.read_csv('../data/sample_data.csv')
df
```
### The `set_index` method
Pass the `set_index` method the name of the column to use it as the index. This column is no longer part of the data of the returned DataFrame.
```
df.set_index('name')
```
### A new DataFrame copy is returned
The `set_index` method returns an entire new DataFrame copy by default and does not modify the original calling DataFrame. Let's verify this by outputting the original DataFrame.
```
df
```
### Assigning the result of `set_index` to a variable name
We must assign the result of the `set_index` method to a variable name if we are to use this new DataFrame with new index.
```
df2 = df.set_index('name')
df2
```
### Number of columns decreased
The new DataFrame, `df2`, has one less column than the original as the `name` column was set as the index. Let's verify this:
```
df.shape
df2.shape
```
## Accessing the index, columns, and data
The index, columns, and data are each separate objects that can be accessed from the DataFrame as attributes and NOT methods. Let's assign each of them to their own variable name beginning with the index and output it to the screen.
```
index = df2.index
index
columns = df2.columns
columns
data = df2.values
data
```
### Find the type of these objects
The output of these objects looks correct, but we don't know the exact type of each one. Let's find out the types of each object.
```
type(index)
type(columns)
type(data)
```
### Accessing the components does not change the DataFrame
Accessing these components does nothing to our DataFrame. It merely gives us a variable to reference each of these components. Let's verify that the DataFrame remains unchanged.
```
df2
```
### pandas `Index` type
Both the index and the columns are a special type of object named `Index`, which is similar to a list. You can think of an `Index` as a sequence of labels for either the rows or the columns. You will not deal with this object much directly, so we will not go into further details about it here.
### Two-dimensional numpy array
The values are returned as a single two-dimensional numpy array.
### Operating with DataFrame and not its components
You will rarely need to operate on these components directly and will instead work with the entire DataFrame. Still, it is important to understand that they are separate components that you can access directly if needed.
## Accessing the components of a Series
Similarly, we can access the two Series components - the index and the data. Let's first select a single column from our DataFrame so that we have a Series. When we select a column from the DataFrame as a Series, the index remains the same.
```
color = df2['color']
color
```
Let's access the index and the data from the `color` Series.
```
color.index
color.values
```
### The default index
If you don't specify an index when first reading in a DataFrame, then pandas creates one for you as integers beginning at 0. Let's read in the movie dataset and keep the default index.
```
movie = pd.read_csv('../data/movie.csv')
movie.head(3)
```
### Integers in the index
The integers you see above in the index are the labels for each of the rows. Let's examine the underlying index object.
```
idx = movie.index
idx
type(idx)
```
### The RangeIndex
pandas has various types of index objects. A `RangeIndex` is the simplest index and represents a sequence of consecutive integers beginning at 0. It is similar to a Python `range` object in that the values are not actually stored in memory; they are generated on demand.
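A short sketch of this behavior — a `RangeIndex` stores only its start, stop, and step:

```python
import pandas as pd

idx = pd.RangeIndex(start=0, stop=5)
print(list(idx))                      # [0, 1, 2, 3, 4]
print(idx.start, idx.stop, idx.step)  # 0 5 1
```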
### A numpy array underlies the index
The index has a `values` attribute just like the DataFrame. Use it to retrieve the underlying index values as a numpy array.
```
idx.values
```
It's not necessary to assign the index to a variable name to access its attributes and methods. You can access it beginning from the DataFrame.
```
movie.index.values
```
## Setting an index on read
The `read_csv` function provides dozens of parameters that allow us to read in a wide variety of text files. The `index_col` parameter may be used to select a particular column as the index. We can either use the column name or its integer location.
### Reread the movie dataset with the movie title as the index
There's a column in the movie dataset named `title`. Let's reread the data using it as the index.
```
movie = pd.read_csv('../data/movie.csv', index_col='title')
movie.head(3)
```
Notice that now the titles of each movie serve as the label for each row. Also notice that the word **title** appears directly above the index. This is a bit confusing. The word **title** is NOT a column name. Technically, it is the **name** of the index, but this isn't important at the moment.
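You can see this distinction on a tiny made-up frame (a sketch; the data here is hypothetical) — the index's label lives in its `name` attribute, and the column itself is no longer part of the data:

```python
import pandas as pd

df = pd.DataFrame({'title': ['Avatar', 'Titanic'], 'year': [2009, 1997]})
df2 = df.set_index('title')
print(df2.index.name)          # title -- the name of the index, not a column
print(df2.columns.tolist())    # ['year']
```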
### Access the new index and output its type
Let's access this new index, output its values, and verify that type is now `Index` instead of `RangeIndex`.
```
idx2 = movie.index
idx2
type(idx2)
```
### Select a value from the index
The index is a complex object on its own and has many attributes and methods. The minimum we should know about an index is how to select values from it. We can select single values from an index just like we do with a Python list, by placing the integer location of the item we want within the square brackets. Here, we select the 4th item (integer location 3) from the index.
```
idx2[3]
```
We can select this same index label without actually assigning the index to a variable first.
```
movie.index[3]
```
### Selection with slice notation
As with Python lists, you can select a range of values using slice notation. Provide the start, stop, and step components of slice notation separated by a colon within the brackets.
```
idx2[100:120:4]
```
### Selection with a list of integers
You can select multiple individual values with a list of integers. This type of selection does not exist for Python lists.
```
nums = [1000, 453, 713, 2999]
idx2[nums]
```
## Choosing a good index
Before even considering using one of the columns as an index, know that it's not a necessity. You can complete all of your analysis with just the default `RangeIndex`.
Setting a column to be the index can make certain analyses easier in some situations, so it is worth considering. If you do choose to set an index for your DataFrame, I suggest using columns that are both **unique** and **descriptive**. pandas does not enforce uniqueness for its index, allowing the same value to repeat multiple times. That said, a good index will have unique values to identify each row.
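A quick way to vet a candidate column is the index's `is_unique` attribute; the data below is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Niko', 'Dean', 'Niko'], 'score': [1, 2, 3]})
idx = df.set_index('name').index
print(idx.is_unique)       # False -- 'Niko' appears twice
print(idx.value_counts())  # shows how often each label repeats
```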
## Exercises
You may wish to change the display options before completing the exercises.
### Exercise 1
<span style="color:green; font-size:16px">Read in the movie dataset and set the index to be something other than movie title. Are there any other good columns to use as an index?</span>
### Exercise 2
<span style="color:green; font-size:16px">Use `set_index` to set the index and keep the column as part of the data. Read the docstrings to find the parameter that controls this functionality.</span>
### Exercise 3
<span style="color:green; font-size:16px">Read in the movie DataFrame and set the index as the title column. Assign the index to its own variable and output the last 10 movies titles.</span>
### Exercise 4
<span style="color:green; font-size:16px">Use an integer instead of the column name for `index_col` when reading in the data using `read_csv`. What does it do?</span>
| github_jupyter |
```
import os
import pandas as pd
import random
import shutil
import librosa
import numpy as np
import soundfile as sf
# Freesound dataset
# Choose the sound categories to use as negative data.
# Here we focus on sounds likely to occur in a home environment.
OtherSound_class = ['Cat', 'Bell', 'Applause', 'Bark', 'Computer_keyboard', 'Clock', 'Cellphone_buzz_and_vibrating_alert',
'Drip','Pour', 'Cough', 'Dog', 'Domestic_animals_and_pets','Piano', 'Air_conditioning',
'Domestic_sounds_and_home_sounds', 'Hair_dryer', 'Meow', 'Vacuum_cleaner','Coin_(dropping)',
'Whimper_(dog)', 'Yip', 'Purr', 'Electric_toothbrush', 'Printer', 'Steam', 'Snoring', 'Pig',
'Spray', 'Water', 'Microwave_oven', 'Cat_communication', 'Growling', 'Typing',
'Mechanisms', 'Camera', 'Boiling', 'Hands', 'Sink_(filling_or_washing)', 'Scissors',
'Hands', 'Knock', 'Writing', 'Walk_and_footsteps', 'Toilet_flush', 'Door', 'Spray', 'Clicking',
'Wood', 'Sliding_door', 'Engine', 'Rain', 'Mosquito', 'Toothbrush','Tearing', 'Sink_(filling_or_washing)',
'Sneeze', 'Tap', 'Mechanical_fan', 'Ding-dong', 'Gargling',
'Door', 'Telephone_bell_ringing', 'Guitar', 'Dishes_and_pots_and_pans', 'Breathing',
'Doorbell', 'Knock', 'Alarm', 'Finger_snapping', 'Zipper_(clothing)', 'Keys_jangling',
'Blender', 'Glass', 'Clapping', 'Wind_noise_(microphone)', 'Wind', 'Yawn', 'Biting', 'Clip-clop',
'Drum', 'Gurgling', 'Packing_tape_and_duct_tape', 'Snap', 'Sniff', 'Snare_drum','Tambourine','Tick',
'Tick-tock', 'Typewriter', 'Squawk', 'Liquid', 'Heart_sounds_and_heartbeat', 'Fart', 'Drum_kit',
'Digestive', 'Chopping_(food)', 'Bathtub_(filling_or_washing)', 'Bass_drum','Alarm_clock',
'Acoustic_guitar', 'Chop', 'Drum_roll', 'Toot', 'Throat_clearing', 'Squeak', 'Screech',
'Scratching_(performance_technique)', 'Insect', 'Hiss', 'Hammer', 'Fly_and_housefly',
'Cupboard_open_or_close', 'Buzz', 'Bird', 'Bus', 'Car', 'Crumpling_and_crinkling',
'Car_alarm', 'Cash_register', 'Bass_guitar', 'Gong','Light_engine_(high_frequency)',
'Tools', 'Wind_chime', 'Vehicle', 'Velcro_and_hook_and_loop_fastener', 'Stomach_rumble', 'Gears',
'Truck', 'Stream', 'Ringtone','Gargling', 'Chink_and_clink', 'Electric_guitar',
'Ukulele','Trickle_and_dribble','Strum','Stir','Squish','Skidding','Power_tool','Musical_instrument',
'Jackhammer','Keyboard_(musical)','Harmonica','Frying_(food)','Busy_signal',
'Bird_vocalization_and_bird_call_and_bird_song','Accelerating_and_revving_and_vroom', 'Babbling',
'Bicycle', 'Bicycle_bell', 'Bird_flight_and_flapping_wings', 'Bleat','Caterwaul','Caw', 'Cello',
'Chainsaw','Choir','Clarinet','Cluck','Clunk',"Dental_drill_and_dentist's_drill",'Electric_shaver_and_electric_razor',
'Helicopter','Jingle_bell','Moo','Organ','Trombone','Violin_and_fiddle','Water_tap_and_faucet',
'Violin_and_fiddle', 'Tire_squeal','Sawing','Rodents_and_rats_and_mice','Raindrop',
'Percussion','Owl','Sewing_machine','Synthesizer','Synthetic_singing','Train_whistle','Waves_and_surf',
'Waterfall','Timpani','Vehicle_horn_and_car_horn_and_honking','Steam_whistle','Skateboard',
'Saxophone','Rhodes_piano','Race_car_and_auto_racing','Pulleys','Plucked_string_instrument','Police_car_(siren)',
'Oboe','Motorcycle','Howl', 'Howl_(wind)','Frog','Fire_alarm', 'Drawer_open_or_close', 'Drill', 'Duck',
'Electric_piano', 'Emergency_vehicle', 'Coo', 'Cowbell', 'Crack', 'Crackle', 'Crash_cymbal', 'Cricket',
'Accordion', 'Air_horn_and_truck_horn','Alto_saxophone','Ambulance_(siren)','Bee_and_wasp_and_etc.',
'Boat_and_Water_vehicle','Croak','Crow','Cymbal','Filing_(rasp)','Fill_(with_liquid)','Flute', 'Glockenspiel',
'Goose', 'Grunt','Hi-hat','Motor_vehicle_(road)','Rustling_leaves','Sheep','Splash_and_splatter',
'Speech_synthesizer', 'Thump_and_thud', 'Thunder', 'Thunderstorm', 'Thunk', 'Traffic_noise_and_roadway_noise',
'Train','Tubular_bells','Telephone_dialing_and_DTMF','Tabla','Siren','Shuffling_cards',
'Rail_transport', 'Rain_on_surface','Marimba_and_xylophone','Hoot',
'Dial_tone', 'Didgeridoo', 'Double_bass', 'Engine_starting', 'Boom', 'Bowed_string_instrument', 'Brass_instrument',
'Burping_and_eructation', 'Burst_and_pop', 'Buzzer', 'Aircraft', 'Artillery_fire','Car_passing_by',
'Church_bell','Cutlery_and_silverware', 'Explosion','Fire', 'Fire_engine_and_fire_truck_(siren)',
'Firecracker', 'Fireworks', 'Fixed-wing_aircraft_and_airplane','Harp', 'Harpsichord','Honk', 'Horse',
'Idling','Oink', 'Orchestra','Pant', 'Pigeon_and_dove', 'Pizzicato','Natural_sounds', 'Music',
'Ship','Shatter', 'Rimshot', 'Run','Slosh', 'Soprano_saxophone', 'Sitar',
'Splinter', 'Steel_guitar_and_slide_guitar', 'Subway_and_metro_and_underground', 'Telephone',
'Theremin','Wail_and_moan', 'Trumpet','Roar', 'Sanding', 'Mallet_percussion', 'Mantra', 'Maraca',
'Neigh_and_whinny','Foghorn', 'French_horn','Chime','Chirp_and_tweet','Crushing','Slam',
'Wild_animals', 'Wildfire', 'Wind_instrument_and_woodwind_instrument', 'Wobble','Train_horn',
'Ratchet_and_pawl','Quack','Ocean','Gull_and_seagull','Lawn_mower','Chicken_and_rooster',
'Chewing_and_mastication','Cattle_and_bovinae','Bassoon', 'Whistle', 'Whistling',
'Machine_gun','Gunshot_and_gunfire','Cap_gun'
]
OtherSound_class = sorted(set(OtherSound_class))
len(OtherSound_class)
# print(OtherSound_class)
vocab = pd.read_csv(r'../Dataset_audio/Freesound_dataset/FSD50K.metadata/FSD50K.metadata/collection/vocabulary_collection_eval.csv')
collection = pd.read_csv(r'../Dataset_audio/Freesound_dataset/FSD50K.metadata/FSD50K.metadata/collection/collection_eval.csv')
OtherSound_audio_files_path = r'../Dataset_audio/Freesound_dataset/FSD50k.eval_audio/FSD50K.eval_audio'
vocab.head()
#vocab.info()
#collection.head()
#collection.info()
collection['labels'] = collection['labels'].apply(lambda x: x.split(','))
#collection[:30]
# Exclude files whose labels include sound types not in OtherSound_class.
# (For example, a file that mixes guitar sounds with human voices must not be trained as OtherSound.)
erase_list = []
erase_idx = []
idx=0
for labels in collection['labels']:
for label in labels:
if label not in OtherSound_class:
erase_list.append(label)
erase_idx.append(idx)
idx +=1
# Remove duplicates
set(erase_list)
len(erase_list)
c2 = collection.copy()
c2 = c2.drop(erase_idx)
c2
# Find the index of a given file name
#c2.index[c2['fname'] == 37199].tolist()[0]
def return_idx(dataframe, text):
fname_idx = dataframe.index[dataframe['fname'] == text].tolist()
return fname_idx[0]
filenames = c2['fname'].tolist()
labels = c2['labels'].tolist()
len(filenames)
len(labels)
type(filenames[1])
os.makedirs(r'../Dataset_audio/OtherSound', exist_ok=True)
selected_OtherSound_path = r'../Dataset_audio/OtherSound'
OtherSound_audio_files_path = r'../Dataset_audio/Freesound_dataset/FSD50k.eval_audio/FSD50K.eval_audio'
# Define a function that computes the duration of an audio file.
def get_duration(path, fname):
data, sr = librosa.load(os.path.join(path, fname))
duration = librosa.get_duration(y=data, sr=sr)
return duration
idx=0
labels = []
for file in os.listdir(OtherSound_audio_files_path):
filename = int(os.path.splitext(file)[0])
if filename in filenames:
        if get_duration(OtherSound_audio_files_path, file) < 30:  # Select only files shorter than 30 seconds.
fname_idx = return_idx(c2, int(filename))
label = c2['labels'][int(fname_idx)][0]
print(filename, label)
labels.append(label)
shutil.copy(
os.path.join(OtherSound_audio_files_path, file),
os.path.join(selected_OtherSound_path, label+'_'+str(filename)+'.wav')
)
else:
print('this file was not selected')
idx +=1
len(os.listdir(selected_OtherSound_path))
# save labels of selected files
c2.to_csv("Labels_of_selected_noise.csv")
```
| github_jupyter |
# Pytorch Example: Neural Network with Categorical Embeddings
In this example we will demonstrate conversion of a Pytorch model that takes numeric and categorical inputs separately, and estimate the treatment effect using some simulated historical data. The model input data is sourced from this kaggle competition: https://www.kaggle.com/kmalit/bank-customer-churn-prediction/data
## 1. Conversion
```
import sys
import os
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
from pathlib import Path
import onnxruntime as rt
from pathlib import Path
from mlisne import convert_to_onnx
os.environ['KMP_DUPLICATE_LIB_OK']='True'
```
Below we define the model architecture. The `estimate_qps` function currently only supports models which output 1D arrays of treatment probabilities, which is why we only return the second output column after the Softmax layer.
```
class CatModel(nn.Module):
def __init__(self, embedding_size, num_numerical_cols, output_size, layers, p=0.4):
super().__init__()
self.all_embeddings = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in embedding_size])
self.embedding_dropout = nn.Dropout(p)
self.batch_norm_num = nn.BatchNorm1d(num_numerical_cols)
all_layers = []
num_categorical_cols = sum((nf for ni, nf in embedding_size))
input_size = num_categorical_cols + num_numerical_cols
for i in layers:
all_layers.append(nn.Linear(input_size, i))
all_layers.append(nn.ReLU(inplace=True))
all_layers.append(nn.BatchNorm1d(i))
all_layers.append(nn.Dropout(p))
input_size = i
all_layers.append(nn.Linear(layers[-1], output_size))
self.layers = nn.Sequential(*all_layers)
self.m = nn.Softmax(dim=1)
def forward(self, x_categorical, x_numerical):
embeddings = []
for i,e in enumerate(self.all_embeddings):
embeddings.append(e(x_categorical[:,i]))
x = torch.cat(embeddings, 1)
x = self.embedding_dropout(x)
x_numerical = self.batch_norm_num(x_numerical)
x = torch.cat([x, x_numerical], 1)
x = self.layers(x)
x = self.m(x)
return x[:,1]
```
Let's load in the state dict of the pretrained model
```
model = CatModel([(3, 2), (2, 1), (2, 1), (2, 1)], 6, 2, [200,100,50], p=0.4)
model.load_state_dict(torch.load(f"models/churn_categorical.pt"))
model.eval()
```
Load in simulated data and preprocess inputs
```
churn_data = pd.read_csv("data/churn_data.csv")
churn_data.head()
categorical_cols = ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember']
numerical_cols = ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary']
for category in categorical_cols:
churn_data[category] = churn_data[category].astype('category')
cat = []
for c in categorical_cols:
cat.append(churn_data[c].cat.codes.values)
cat_data = np.stack(cat, 1)
num_data = np.array(churn_data[numerical_cols])
cat_tensor = torch.tensor(cat_data)
num_tensor = torch.tensor(num_data)
```
Create our dummy inputs and convert to ONNX
```
cat_dummy = cat_tensor[0,None]
num_dummy = num_tensor[0,None]
f = "models/churn_categorical.onnx"
```
**Important:** Your dummy data must match the expected input types of your model. Otherwise the conversion will fail. Pytorch embedding layers expect 'long' types and the model expects 'float32' types for the continuous data.
```
print(cat_dummy.dtype, num_dummy.dtype)
try:
convert_to_onnx(model, (cat_dummy, num_dummy), f, "pytorch", input_type=2, input_names=("d_inputs", "c_inputs"))
except Exception as e:
print(e)
try:
convert_to_onnx(model, (cat_dummy.long(), num_dummy), f, "pytorch", input_type=2, input_names=("d_inputs", "c_inputs"))
except Exception as e:
print(e)
```
To convert a model with separate discrete and continuous inputs, we must pass the arguments `input_type=2`, the dummy inputs as a tuple, and the input names.
```
convert_to_onnx(model, (cat_dummy.long(), num_dummy.float()), f, "pytorch", input_type=2, input_names=("d_inputs", "c_inputs"))
```
We can verify that the ONNX model's predictions match those of the original Pytorch model.
```
with torch.no_grad():
torch_preds = model(cat_tensor.long(), num_tensor.float()).numpy()
sess = rt.InferenceSession(f)
onnx_preds = sess.run(["output_0"], {"c_inputs": num_data.astype(np.float32),
"d_inputs": cat_data.astype(np.int64)})[0]
np.testing.assert_array_almost_equal(torch_preds, onnx_preds, decimal=5)
```
## 2. QPS Estimation
The QPS estimation procedure is nearly identical to the single-input case, albeit with a few extra arguments that need to be passed.
```
from mlisne import estimate_qps_onnx
# Generate the outcome based on the treatment assignment
churn_data['Y_cat'] = churn_data['Y0']
churn_data.loc[churn_data['D_cat'] == 1, 'Y_cat'] = churn_data.loc[churn_data['D_cat'] == 1, 'Y1']
churn_data.head()
```
For separate inputs, we need to pass `input_type=2` and `input_names` in the format (continuous name, discrete name). Since the input data are not the same types as expected by the model, we will need to pass those as well.
```
qps = estimate_qps_onnx(X_c = num_data, X_d = cat_data, S=100, delta=0.8, ML_onnx=f, input_type=2, input_names=('c_inputs', 'd_inputs'),
types = (np.float32, np.int64))
qps[:5]
```
## 3. Treatment Effect Estimation
Following QPS estimation, we have everything we need to obtain a causal estimate of treatment effect. Our primary estimation function is `estimate_treatment_effect`.
```
from mlisne import estimate_treatment_effect
fitted_model = estimate_treatment_effect(qps = qps, data = churn_data[['Y_cat', 'Z_cat', 'D_cat']])
```
Compare the estimated LATE against the true treatment effects:
```
# Treatment effects
ate = (churn_data.Y1 - churn_data.Y0).mean()
atet = (churn_data.loc[churn_data['D_cat'] == 1, 'Y1'] - churn_data.loc[churn_data['D_cat'] == 1, 'Y0']).mean()
late = (churn_data.loc[(churn_data['D_cat'] == churn_data['Z_cat']), 'Y1'] - churn_data.loc[(churn_data['D_cat'] == churn_data['Z_cat']), 'Y0']).mean()
print(f"ATE: {ate}")
print(f"ATET: {atet}")
print(f"LATE: {late}")
print(fitted_model.first_stage)
```
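Conceptually, `estimate_treatment_effect` runs an IV regression using the ML recommendation as an instrument for the treatment. In the simplest case of a single binary instrument with no controls, the IV estimate reduces to the Wald ratio cov(Y, Z) / cov(D, Z), sketched here on synthetic data (not the notebook's `churn_data`):

```python
def wald_late(y, d, z):
    """Wald/IV estimate of the LATE: cov(Y, Z) / cov(D, Z)."""
    n = len(y)
    my, md, mz = sum(y) / n, sum(d) / n, sum(z) / n
    cov_yz = sum((yi - my) * (zi - mz) for yi, zi in zip(y, z)) / n
    cov_dz = sum((di - md) * (zi - mz) for di, zi in zip(d, z)) / n
    return cov_yz / cov_dz

# Synthetic data with perfect compliance (D == Z) and true effect 2
z = [0, 1, 0, 1, 0, 1]
d = z[:]
y = [1, 3, 1, 3, 1, 3]        # Y = 1 + 2 * D
late = wald_late(y, d, z)     # → 2.0
```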
# Modeling and Simulation in Python
Chapter 23
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from the previous chapter
```
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
degree = UNITS.degree
t_end = 20 * s
dt = t_end / 100
params = Params(x = 0 * m,
                y = 1 * m,
                g = 9.8 * m/s**2,
                mass = 145e-3 * kg,
                diameter = 73e-3 * m,
                rho = 1.2 * kg/m**3,
                C_d = 0.3,
                angle = 45 * degree,
                velocity = 40 * m / s,
                t_end=t_end,
                dt=dt)
def make_system(params):
    """Make a system object.

    params: Params object with angle, velocity, x, y,
            diameter, duration, g, mass, rho, and C_d

    returns: System object
    """
    angle, velocity = params.angle, params.velocity

    # convert angle to radians
    theta = np.deg2rad(angle)

    # compute x and y components of velocity
    vx, vy = pol2cart(theta, velocity)

    # make the initial state
    R = Vector(params.x, params.y)
    V = Vector(vx, vy)
    init = State(R=R, V=V)

    # compute area from diameter
    diameter = params.diameter
    area = np.pi * (diameter/2)**2

    return System(params, init=init, area=area)
def drag_force(V, system):
    """Computes drag force in the opposite direction of `V`.

    V: velocity Vector
    system: System object with rho, C_d, area

    returns: Vector drag force
    """
    rho, C_d, area = system.rho, system.C_d, system.area
    mag = rho * V.mag**2 * C_d * area / 2
    direction = -V.hat()
    f_drag = direction * mag
    return f_drag
def slope_func(state, t, system):
    """Computes derivatives of the state variables.

    state: State (x, y, x velocity, y velocity)
    t: time
    system: System object with g, rho, C_d, area, mass

    returns: sequence (vx, vy, ax, ay)
    """
    R, V = state
    mass, g = system.mass, system.g

    a_drag = drag_force(V, system) / mass
    a_grav = Vector(0, -g)

    A = a_grav + a_drag

    return V, A
def event_func(state, t, system):
    """Stop when the y coordinate is 0.

    state: State object
    t: time
    system: System object

    returns: y coordinate
    """
    R, V = state
    return R.y
```
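As a sanity check on `drag_force`, the magnitude ρ v² C_d A / 2 can be evaluated by hand with the parameter values above, using plain floats in SI units (dropping the `pint` unit objects):

```python
import math

# Parameter values from `params` above, as plain floats (SI units)
rho = 1.2          # air density, kg/m**3
C_d = 0.3          # drag coefficient of a baseball
diameter = 73e-3   # m
v = 40.0           # launch speed, m/s

area = math.pi * (diameter / 2) ** 2   # cross-sectional area, m**2
mag = rho * v ** 2 * C_d * area / 2    # drag force magnitude, N (~1.2 N)
```

At 40 m/s the drag force (about 1.2 N) is comparable to the ball's weight (0.145 kg × 9.8 m/s² ≈ 1.4 N), which is why drag changes the trajectory so much in this model.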
### Optimal launch angle
To find the launch angle that maximizes distance from home plate, we need a function that takes launch angle and returns range.
```
def range_func(angle, params):
    """Computes range for a given launch angle.

    angle: launch angle in degrees
    params: Params object

    returns: distance in meters
    """
    params = Params(params, angle=angle)
    system = make_system(params)
    results, details = run_ode_solver(system, slope_func, events=event_func)
    x_dist = get_last_value(results.R).x
    print(angle, x_dist)
    return x_dist
```
Let's test `range_func`.
```
range_func(45, params)
```
And sweep through a range of angles.
```
angles = linspace(20, 80, 21)
sweep = SweepSeries()
for angle in angles:
    x_dist = range_func(angle, params)
    sweep[angle] = x_dist
```
Plotting the `SweepSeries` object suggests that the peak is between 40 and 45 degrees.
```
plot(sweep, color='C2')

decorate(xlabel='Launch angle (degree)',
         ylabel='Range (m)',
         title='Range as a function of launch angle',
         legend=False)
savefig('figs/chap23-fig01.pdf')
```
We can use `maximize` to search for the peak efficiently.
```
bounds = [0, 90] * degree
res = maximize(range_func, bounds, params)
```
`res` is a `ModSimSeries` object with detailed results:
```
res
```
`x` is the optimal angle and `fun` is the optimal range.
```
optimal_angle = res.x
max_x_dist = res.fun
```
### Under the hood
Read the source code for `maximize` and `minimize_scalar`, below.
Add a print statement to `range_func` that prints `angle`. Then run `maximize` again so you can see how many times it calls `range_func` and what the arguments are.
```
source_code(maximize)
source_code(minimize_scalar)
```
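`maximize` ultimately relies on a bounded scalar optimizer. The underlying idea can be illustrated with a golden-section search written from scratch; this is a simplified stand-in for illustration, not the modsim implementation:

```python
import math

def golden_maximize(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximum of a unimodal f on [lo, hi]."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):       # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                 # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Drag-free range scales with sin(2*theta), which peaks at 45 degrees
best = golden_maximize(lambda ang: math.sin(2 * math.radians(ang)), 0, 90)
```

Like `minimize_scalar`, this narrows a bracketing interval on every iteration, so the number of calls to the objective grows only logarithmically with the desired precision.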
### The Manny Ramirez problem
Finally, let's solve the Manny Ramirez problem:
*What is the minimum effort required to hit a home run in Fenway Park?*
Fenway Park is a baseball stadium in Boston, Massachusetts. One of its most famous features is the "Green Monster", which is a wall in left field that is unusually close to home plate, only 310 feet along the left field line. To compensate for the short distance, the wall is unusually high, at 37 feet.
Although the problem asks for a minimum, it is not an optimization problem. Rather, we want to solve for the initial velocity that just barely gets the ball to the top of the wall, given that it is launched at the optimal angle.
And we have to be careful about what we mean by "optimal". For this problem, we don't want the longest range, we want the maximum height at the point where it reaches the wall.
If you are ready to solve the problem on your own, go ahead. Otherwise I will walk you through the process with an outline and some starter code.
As a first step, write a function called `height_func` that takes a launch angle and a `Params` object as parameters, simulates the flight of a baseball, and returns the height of the baseball when it reaches a point 94.5 meters (310 feet) from home plate.
```
# Solution goes here
```
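For a rough sanity check before adding drag, the drag-free trajectory has the closed form y(x) = y0 + x tan θ − g x² / (2 (v cos θ)²). Here is a hypothetical helper built on that formula; it is not the drag-aware `height_func` the exercise asks for:

```python
import math

def height_no_drag(angle_deg, velocity, x, y0=1.0, g=9.8):
    """Height of a drag-free projectile when it reaches horizontal distance x."""
    theta = math.radians(angle_deg)
    return y0 + x * math.tan(theta) - g * x ** 2 / (2 * (velocity * math.cos(theta)) ** 2)

# Launched at 45 degrees and 40 m/s, evaluated at the Green Monster (94.5 m)
h = height_no_drag(45, 40, 94.5)  # ~41 m without drag
```

Without drag the ball would be roughly 41 m up when it reaches the wall; drag brings this down dramatically, which is exactly why the problem needs the full simulation.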
Always test the slope function with the initial conditions.
```
# Solution goes here
# Solution goes here
```
Test your function with a launch angle of 45 degrees:
```
# Solution goes here
```
Now use `maximize` to find the optimal angle. Is it higher or lower than the angle that maximizes range?
```
# Solution goes here
# Solution goes here
# Solution goes here
```
With initial velocity 40 m/s and an optimal launch angle, the ball clears the Green Monster with a little room to spare.
Which means we can get over the wall with a lower initial velocity.
### Finding the minimum velocity
Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 11 m, given that it's launched at the optimal angle. And that's a job for `root_bisect`.
Write an error function that takes a velocity and a `Params` object as parameters. It should use `maximize` to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11 meters.
```
# Solution goes here
```
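`root_bisect` is based on ordinary interval bisection: given a bracket where the error function changes sign, halve it repeatedly. A minimal pure-Python version of the idea (not modsim's implementation):

```python
def bisect(f, a, b, tol=1e-9):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa = f(a)
    while b - a > tol:
        mid = (a + b) / 2
        fm = f(mid)
        if fa * fm <= 0:      # sign change in [a, mid]
            b = mid
        else:                 # sign change in [mid, b]
            a, fa = mid, fm
    return (a + b) / 2

# Toy error function that crosses zero at v = sqrt(2)
root = bisect(lambda v: v ** 2 - 2, 0, 2)
```

Your error function plays the role of the lambda here: it should be negative for velocities too slow to clear the wall and positive for velocities that clear it (or vice versa), so a sign change brackets the answer.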
Test your error function before you call `root_bisect`.
```
# Solution goes here
```
Then use `root_bisect` to find the answer to the problem, the minimum velocity that gets the ball out of the park.
```
# Solution goes here
# Solution goes here
```
And just to check, run `error_func` with the value you found.
```
# Solution goes here
```
# Predicting Star Temperature with Elastic Net Linear Regression
Using the Open Exoplanet Catalogue database: https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue/
## Data License
Copyright (C) 2012 Hanno Rein
Permission is hereby granted, free of charge, to any person obtaining a copy of this database and associated scripts (the "Database"), to deal in the Database without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Database, and to permit persons to whom the Database is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Database. A reference to the Database shall be included in all scientific publications that make use of the Database.
THE DATABASE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATABASE OR THE USE OR OTHER DEALINGS IN THE DATABASE.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
stars = pd.read_csv('../../ch_10/data/stars.csv')
stars.head()
```
## EDA
```
stars.info()
stars.describe()
sns.heatmap(stars.corr(), vmin=-1, vmax=1, center=0, annot=True, fmt='.1f')
```
## Train test split
```
from sklearn.model_selection import train_test_split
data = stars[[
    'metallicity', 'temperature', 'magJ', 'radius',
    'magB', 'magH', 'magK', 'mass', 'planets'
]].dropna()
y = data.pop('temperature')
X = data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
```
## Grid search for best hyperparameters in elastic net pipeline
```
%%capture
# don't show warning messages or output for this cell
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
pipeline = Pipeline([
    ('scale', MinMaxScaler()),
    ('net', ElasticNet(random_state=0))
])
search_space = {
    'net__alpha': [0.1, 0.5, 1, 1.5, 2, 5],
    'net__l1_ratio': np.linspace(0, 1, num=10),
    'net__fit_intercept': [True, False]
}
elastic_net = GridSearchCV(pipeline, search_space, cv=5).fit(X_train, y_train)
```
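For reference, scikit-learn's `ElasticNet` minimizes `||y - Xw||² / (2n)` plus the penalty `alpha * l1_ratio * ||w||₁ + 0.5 * alpha * (1 - l1_ratio) * ||w||₂²`, so `alpha` sets the overall regularization strength and `l1_ratio` the L1/L2 mix that the grid search is tuning. The penalty term alone, in plain Python:

```python
def elastic_net_penalty(w, alpha, l1_ratio):
    """Regularization term of scikit-learn's ElasticNet objective."""
    l1 = sum(abs(wi) for wi in w)          # lasso part
    l2 = sum(wi * wi for wi in w)          # ridge part
    return alpha * l1_ratio * l1 + 0.5 * alpha * (1 - l1_ratio) * l2

penalty = elastic_net_penalty([1.0, -2.0], alpha=1.0, l1_ratio=0.5)  # → 2.75
```

Setting `l1_ratio=1` recovers the pure lasso penalty and `l1_ratio=0` the pure ridge penalty, which is why the search space sweeps it from 0 to 1.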
Check the best hyperparameters:
```
elastic_net.best_params_
```
## R<sup>2</sup>
```
elastic_net.score(X_test, y_test) # R-squared
```
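`score` returns the coefficient of determination, R² = 1 − SS_res / SS_tot, which is simple to compute by hand:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

r2 = r_squared([3.0, 5.0, 7.0], [2.5, 5.0, 7.5])  # → 0.9375
```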
## Model equation
```
[
    (coef, feature) for coef, feature in
    zip(elastic_net.best_estimator_.named_steps['net'].coef_, X_train.columns)
]
elastic_net.best_estimator_.named_steps['net'].intercept_
```
## Residuals
```
from ml_utils.regression import plot_residuals
plot_residuals(y_test, elastic_net.predict(X_test))
```
<hr>
<div>
<a href="../../ch_10/red_wine.ipynb">
<button>← Chapter 10</button>
</a>
<a href="./exercise_2.ipynb">
<button style="float: right;">Next Solution →</button>
</a>
</div>
<hr>
# Visualizing Catalytic Potentials of Glycolytic Regulatory Kinases
This notebook provides two examples on creating more sophisticated figures using MASSpy. Specifically, the two examples will reproduce the following from the publication by <cite data-cite="YAHP18">Yurkovich et al., 2018</cite>
- Example **A**: Reproduce Figure 2 of main text.
- Example **B**: Reproduce Figure 4 of main text.
```
from os import path
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
from mass import Simulation
from mass.test import create_test_model
from mass.util import strip_time
from mass.visualization import plot_phase_portrait, plot_time_profile
```
All models for this example have already been constructed based on <cite data-cite="YAHP18">Yurkovich et al., 2018</cite> and can be found [here](https://github.com/SBRG/MASSpy/tree/master/docs/example_gallery/models).
## Example A: Phase Portraits of Catalytic Potentials for Individual Enzymes
In this example, the three key regulatory kinases of glycolysis are integrated into the glycolytic network as enzyme modules.
Specifically, this notebook example focuses on visualizing the catalytic potential of each individual enzyme when only one enzyme module is integrated into the network, in order to reproduce [Figure 2](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g002) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite>.
### A1: All in one
The following cell reproduces [Figure 2](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g002) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite>
```
# Create list of model IDs
model_ids = ["Glycolysis_Hb_PFK",
             "Glycolysis_Hb_HEX1",
             "Glycolysis_Hb_PYK"]

# Define perturbations
perturbations = [
    {"kf_ATPM": "kf_ATPM * 0.85"},
    {"kf_ATPM": "kf_ATPM * 1.50"}]

# Define colors and legend labels
colors = ["red", "blue"]
labels = ["15% decrease", "50% increase"]

# Define axes limits
xy_limits = {
    "PFK": [(0.67, 0.98), (0.84, 1.01)],
    "HEX1": [(0.65, 1.00), (0.78, 0.89)],
    "PYK": [(0.60, 1.00), (0.15, 0.88)]}

# Define helper functions to make aggregate solutions
def make_energy_charge_solution(conc_sol):
    conc_sol.make_aggregate_solution(
        aggregate_id="energy_charge",
        equation="(atp_c + 0.5 * adp_c) / (atp_c + adp_c + amp_c)",
        variables=["atp_c", "adp_c", "amp_c"], update=True)

def make_active_fraction_solution(model, conc_sol, enzyme):
    enzyme = model.enzyme_modules.get_by_id(enzyme)
    active = enzyme.enzyme_module_forms_categorized.get_by_id(
        enzyme.id + "_Active")
    conc_sol.make_aggregate_solution(
        aggregate_id="active_fraction",
        equation="({0}) / ({1})".format(
            " + ".join([e.id for e in active.members]),
            str(strip_time(enzyme.enzyme_concentration_total_equation))),
        variables=enzyme.enzyme_module_forms, update=True)

# Create figure and flatten axes into a list
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
axes = axes.flatten()

# Create inset axes
ax_inset = axes[-1].inset_axes([0.1, 0.7, 0.4, 0.25])
ax_inset.tick_params(labelbottom=False, labelleft=False)

# Iterate through models and axes instances
for i, (model_id, ax) in enumerate(zip(model_ids, axes)):
    # Load model and simulation
    model = create_test_model(model_id + ".json")
    sim = Simulation(model, verbose=True)
    # Get ID of enzyme module in model
    enzyme_id = model.id.split("_")[-1]
    # Ensure models are in steady state and make aggregate solutions
    conc_sol, flux_sol = sim.find_steady_state(
        model, strategy="simulate", update_values=True, tfinal=1e4)
    make_energy_charge_solution(conc_sol)
    make_active_fraction_solution(model, conc_sol, enzyme_id)
    # Plot steady state lines
    ax.plot([conc_sol["energy_charge"]] * 2, [-0.1, 1.1],
            color="grey", linestyle="--")
    ax.plot([-0.1, 1.1], [conc_sol["active_fraction"]] * 2,
            color="grey", linestyle="--")
    # Initialize legend arguments
    legend = None
    time_points_legend = None
    # Iterate through ATP perturbations
    for j, perturbation in enumerate(perturbations):
        # Simulate model with perturbation and make aggregate solutions
        conc_sol, flux_sol = sim.simulate(
            model, time=(0, 1000), perturbations=perturbation)
        make_energy_charge_solution(conc_sol)
        make_active_fraction_solution(model, conc_sol, enzyme_id)
        # Set legends on the middle plot only
        if i == 1:
            legend = (labels[j] + " in ATP utilization", "lower outside")
            time_points_legend = "upper outside"
        # Get x and y axes limits
        xlimits, ylimits = xy_limits[enzyme_id]
        # Make phase portrait
        plot_phase_portrait(
            conc_sol, x="energy_charge", y="active_fraction",
            ax=ax, legend=legend,              # Set axes and legend
            xlim=xlimits, ylim=ylimits,        # Axes limits
            xlabel="Energy Charge (E.C.)",     # Axes labels
            ylabel=enzyme_id + " Active Fraction ($f_{A}$)",
            title=(model.id, {"size": "x-large"}),  # Title
            linestyle="--", color=colors[j],   # Line color and style
            annotate_time_points="endpoints",  # Annotate time points
            annotate_time_points_color="black",
            annotate_time_points_legend=time_points_legend)
        # Plot time profile of FDP on the inset axes
        if i == 2:
            # Plot FDP time profile
            plot_time_profile(
                conc_sol, observable="fdp_c", ax=ax_inset,
                plot_function="semilogx",
                xlim=(1e-6, 1000), ylim=(0.002, 0.016),
                xlabel="Time (hours)", ylabel="[FDP] (mM)",
                color=colors[j])
```
### A2: Steps to Reproduce
In this section, the steps for reproducing [Figure 2](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g002) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite> are outlined below in an expanded workflow.
#### A2.1: Setup
The first step is to load the models and ensure they exist in a steady state.
```
models_and_simulations = {}
for model_id in ["Glycolysis_Hb_PFK",
                 "Glycolysis_Hb_HEX1",
                 "Glycolysis_Hb_PYK"]:
    model = create_test_model(model_id + ".json")
    simulation = Simulation(model, verbose=True)
    simulation.find_steady_state(model, strategy="simulate",
                                 update_values=True, tfinal=1e4)
    models_and_simulations[model] = simulation
```
When the `EnzymeModule` objects were created, the active forms of the enzyme were categorized and placed in a group. This makes creating the solution for the active fraction of the enzyme significantly easier.
```
for model in models_and_simulations:
    # Get the ID of the enzyme module
    e_mod = model.id.split("_")[-1]
    # Get the enzyme module
    e_mod = model.enzyme_modules.get_by_id(e_mod)
    # Get the active group from the categorized enzyme module forms
    active = e_mod.enzyme_module_forms_categorized.get_by_id(
        "_".join((e_mod.id, "Active")))
    print("Enzyme: " + e_mod.id)
    print("Group ID: " + active.id)
    print("# of members: {0}\n".format(len(active.members)))
```
Because aggregate solutions for the adenylate energy charge and enzyme active fractions will need to be created multiple times, smaller "helper" functions are defined to facilitate this process. A list containing enzyme IDs is also defined in the order in which the models were stored.
```
def make_energy_charge_solution(conc_sol):
    conc_sol.make_aggregate_solution(
        aggregate_id="energy_charge",
        equation="(atp_c + 0.5 * adp_c) / (atp_c + adp_c + amp_c)",
        variables=["atp_c", "adp_c", "amp_c"], update=True)
    return

def make_active_fraction_solution(model, conc_sol, enzyme_id):
    # Get the relevant enzyme module
    e_mod = model.enzyme_modules.get_by_id(enzyme_id)
    # Get active group
    active = e_mod.enzyme_module_forms_categorized.get_by_id(
        enzyme_id + "_Active")
    # Create string representing sum of active enzyme forms
    active_eq = " + ".join([e.id for e in active.members])
    # Create string representing sum of all enzyme forms
    total_eq = str(strip_time(e_mod.enzyme_concentration_total_equation))
    # Make aggregate solution
    conc_sol.make_aggregate_solution(
        aggregate_id="active_fraction",
        equation="({0}) / ({1})".format(active_eq, total_eq),
        variables=e_mod.enzyme_module_forms, update=True)
    return

enzyme_ids = [model.id.split("_")[-1] for model in models_and_simulations]
enzyme_ids
```
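The adenylate energy charge used by `make_energy_charge_solution` is (ATP + 0.5 ADP) / (ATP + ADP + AMP), which ranges from 0 (all AMP) to 1 (all ATP). A standalone check of the formula with illustrative concentration values:

```python
def energy_charge(atp, adp, amp):
    """Adenylate energy charge: (ATP + 0.5 * ADP) / (ATP + ADP + AMP)."""
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Illustrative concentrations (arbitrary units)
ec = energy_charge(atp=1.0, adp=0.5, amp=0.25)  # → ~0.714
```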
The figure and axes instances are created using **matplotlib**. The figure size is set to 15 x 5 to create three square plots in a single row. Each plot will have a size of 5 x 5.
```
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
axes = axes.flatten()
```
Lists containing tuples for the x-axis and y-axis limits are also created. They are placed in a dictionary where the enzyme module IDs are the `dict` keys.
```
# Values are formatted as [(xmin, xmax), (ymin, ymax)]
xy_limits = {
    "PFK": [(0.67, 0.98), (0.84, 1.01)],
    "HEX1": [(0.65, 1.00), (0.78, 0.89)],
    "PYK": [(0.60, 1.00), (0.15, 0.88)]}
```
#### A2.2: Plot steady state lines
The steady state values are plotted as straight lines to visually indicate where the steady state lies.
```
for i, (model, sim) in enumerate(models_and_simulations.items()):
    # Get axes to plot on
    ax = axes[i]
    # Get ID of enzyme module in model
    enzyme_id = enzyme_ids[i]
    # Get MassSolution objects containing steady state values
    conc_sol_ss, flux_sol_ss = sim.find_steady_state(
        model, strategy="simulate", update_values=True, tfinal=1e4)
    # Make an aggregate solution for the energy charge
    make_energy_charge_solution(conc_sol_ss)
    # Plot a vertical line at the steady state value
    ax.plot([conc_sol_ss["energy_charge"]] * 2,
            [-0.1, 1.1],
            color="grey", linestyle="--")
    # Make an aggregate solution for the active fraction
    make_active_fraction_solution(model, conc_sol_ss, enzyme_id)
    # Plot a horizontal line at the steady state value
    ax.plot([-0.1, 1.1],
            [conc_sol_ss["active_fraction"]] * 2,
            color="grey", linestyle="--")
fig
```
#### A2.3: Plot catalytic potentials for 15% decrease in ATP utilization
The first set of results to plot on the figure are the results obtained from simulating the models with a 15% decrease in ATP utilization. This behavior is mimicked by perturbing the rate constant for ATP use to be 0.85 times its original value. The color "red" will be used to represent these results.
Legends are also created for perturbations and the annotated time points, placed below and above the middle plot, respectively.
```
for i, (model, sim) in enumerate(models_and_simulations.items()):
    # Get axes to plot on
    ax = axes[i]
    # Get ID of enzyme module in model
    enzyme_id = enzyme_ids[i]
    # Simulate model with a 15% decrease in ATP utilization
    conc_sol, flux_sol = sim.simulate(
        model, time=(0, 1000),
        perturbations={"kf_ATPM": "kf_ATPM * 0.85"});
    # Make an aggregate solution for the energy charge
    make_energy_charge_solution(conc_sol)
    # Make an aggregate solution for the active fraction
    make_active_fraction_solution(model, conc_sol, enzyme_id)
    # Place legend on middle plot only
    if i == 1:
        legend = ("15% decrease in ATP utilization", "lower outside")
        time_points_legend = "upper outside"
    else:
        legend = None
        time_points_legend = None
    # Get x and y axes limits
    xlimits, ylimits = xy_limits[enzyme_id]
    # Make phase portrait
    plot_phase_portrait(
        conc_sol, x="energy_charge", y="active_fraction",
        ax=ax, legend=legend,              # Axes instance and legend
        xlim=xlimits, ylim=ylimits,        # Axes limits
        xlabel="Energy Charge (E.C.)",     # Axes labels
        ylabel=enzyme_id + " Active Fraction ($f_{A}$)",
        title=(model.id, {"size": "x-large"}),  # Title
        linestyle="--", color="red",       # Line color and style
        annotate_time_points="endpoints",  # Annotate time points
        annotate_time_points_color="black",
        annotate_time_points_legend=time_points_legend,
    )
fig
```
#### A2.4: Plot catalytic potentials for 50% increase in ATP utilization
The second set of results to plot on the figure are the results obtained from simulating the models with a 50% increase in ATP utilization. This behavior is mimicked by perturbing the rate constant for ATP use to be 1.5 times its original value. The color "blue" will be used to represent these results.
```
for i, (model, sim) in enumerate(models_and_simulations.items()):
    # Get axes to plot on
    ax = axes[i]
    # Get ID of enzyme module in model
    enzyme_id = enzyme_ids[i]
    # Simulate model with a 50% increase in ATP utilization
    conc_sol, flux_sol = sim.simulate(
        model, time=(0, 1000),
        perturbations={"kf_ATPM": "kf_ATPM * 1.5"});
    # Make an aggregate solution for the energy charge
    make_energy_charge_solution(conc_sol)
    # Make an aggregate solution for the active fraction
    make_active_fraction_solution(model, conc_sol, enzyme_id)
    # Place legend on middle plot only
    if i == 1:
        legend = ("50% increase in ATP utilization", "lower outside")
        time_points_legend = "upper outside"
    else:
        legend = None
        time_points_legend = None
    # Get x and y axes limits
    xlimits, ylimits = xy_limits[enzyme_id]
    # Make phase portrait
    plot_phase_portrait(
        conc_sol, x="energy_charge", y="active_fraction",
        ax=ax, legend=legend,              # Axes instance and legend
        xlim=xlimits, ylim=ylimits,        # Axes limits
        xlabel="Energy Charge (E.C.)",     # Axes labels
        ylabel=enzyme_id + " Active Fraction ($f_{A}$)",
        title=(model.id, {"size": "x-large"}),  # Title
        linestyle="--", color="blue",      # Line color and style
        annotate_time_points="endpoints",  # Annotate time points
        annotate_time_points_color="black",
        annotate_time_points_legend=time_points_legend,
    )
fig
```
#### A2.5: Time profile of FDP concentration
The plot of FDP concentration in the upper left quadrant of the plot for the catalytic potential of PYK is created.
First, the inset axes instance is created and the axes tick labels are removed:
```
# Get the axes instance
right_ax = axes[-1]
ax_inset = right_ax.inset_axes([0.1, 0.7, 0.4, 0.25])
ax_inset.tick_params(labelbottom=False, labelleft=False)
fig
```
The model with the PYK enzyme module is simulated with a 15% decrease in ATP utilization, and the results are plotted.
```
for model, sim in models_and_simulations.items():
    # Only interested in the third model
    if model.id != "Glycolysis_Hb_PYK":
        continue
    # Simulate model with a 15% decrease in ATP utilization
    conc_sol, flux_sol = sim.simulate(
        model, time=(0, 1000),
        perturbations={"kf_ATPM": "kf_ATPM * 0.85"});
    # Plot FDP time profile
    plot_time_profile(
        conc_sol, observable="fdp_c", ax=ax_inset,
        plot_function="semilogx",
        xlim=(1e-6, 1000), ylim=(0.002, 0.016),
        xlabel=("Time (hours)", {"size": "large"}),
        ylabel=("[FDP] (mM)", {"size": "large"}),
        color="red")
fig
```
In a similar fashion, the results of simulating the model with a 50% increase in ATP utilization are plotted.
```
for model, sim in models_and_simulations.items():
    # Only interested in the third model
    if model.id != "Glycolysis_Hb_PYK":
        continue
    # Simulate model with a 50% increase in ATP utilization
    conc_sol, flux_sol = sim.simulate(
        model, time=(0, 1000),
        perturbations={"kf_ATPM": "kf_ATPM * 1.5"});
    # Plot FDP time profile
    plot_time_profile(
        conc_sol, observable="fdp_c", ax=ax_inset,
        plot_function="semilogx",
        xlim=(1e-6, 1000), ylim=(0.002, 0.016),
        xlabel=("Time (hours)", {"size": "large"}),
        ylabel=("[FDP] (mM)", {"size": "large"}),
        color="blue")
fig
```
[Figure 2](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g002) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite> has been reproduced.
## Example B: Phase Portraits of Catalytic Potentials and Interplay Among Enzymes
In this example, the three key regulatory kinases of glycolysis are integrated into the glycolytic network as enzyme modules.
Specifically, this notebook example focuses on visualizing the catalytic potentials of enzymes as well as the interplay among enzymes when all three enzyme modules are integrated into the network in order to reproduce [Figure 4](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g004) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite>.
### B1: All in one
The following cell reproduces [Figure 4](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g004) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite>.
```
# Load model and simulation
model = create_test_model("Glycolysis_FKRM.json")
simulation = Simulation(model, verbose=True)

# Define perturbations
perturbation_list = [
    {"kf_ATPM": "kf_ATPM * 0.85"},
    {"kf_ATPM": "kf_ATPM * 1.50"}]

# Define colors and legend labels
colors = ["red", "blue"]
labels = ["15% decrease", "50% increase"]

# Define enzyme pairs and axes limits
enzyme_pairs = [("PFK", "HEX1"), ("PFK", "PYK"), ("HEX1", "PYK")]
# Values are formatted as [(xmin, xmax), (ymin, ymax)]
xy_limits = {
    "HEX1": ((0.68, 0.92), (0.77, 0.89)),
    "PFK": ((0.68, 1.02), (0.70, 1.02)),
    "PYK": ((0.68, 0.92), (0.35, 1.01)),
    ("PFK", "HEX1"): ((0.75, 1.02), (0.78, 0.89)),
    ("PFK", "PYK"): ((0.75, 1.02), (0.38, 0.93)),
    ("HEX1", "PYK"): ((0.78, 0.89), (0.38, 0.93))}

# Define roman numerals and coordinates
roman_num_coords = {"I": (0.95, 0.95), "II": (0.05, 0.95),
                    "III": (0.05, 0.05), "IV": (0.95, 0.05)}

# Define helper functions to make aggregate solutions
def make_energy_charge_solution(conc_sol):
    conc_sol.make_aggregate_solution(
        aggregate_id="energy_charge",
        equation="(atp_c + 0.5 * adp_c) / (atp_c + adp_c + amp_c)",
        variables=["atp_c", "adp_c", "amp_c"], update=True)

def make_active_fraction_solution(conc_sol, enzyme):
    active = enzyme.enzyme_module_forms_categorized.get_by_id(
        enzyme.id + "_Active")
    conc_sol.make_aggregate_solution(
        aggregate_id=enzyme.id + "_active_fraction",
        equation="({0}) / ({1})".format(
            " + ".join([e.id for e in active.members]),
            str(strip_time(enzyme.enzyme_concentration_total_equation))),
        variables=enzyme.enzyme_module_forms, update=True)

# Create figure and axes
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(15, 10))
# Get the rightmost axes instance on the top row
right_ax = axes[0][-1]
# Create inset axes
ax_inset = right_ax.inset_axes([0.18, 0.7, 0.37, 0.25])
# Remove tick labels
ax_inset.tick_params(labelbottom=False, labelleft=False)

# Simulate to steady state
conc_sol, flux_sol = simulation.find_steady_state(
    model, strategy="simulate", update_values=True, tfinal=1e4)
# Make aggregate solution for energy charge
make_energy_charge_solution(conc_sol)

# Iterate through enzyme modules and axes for the top half of figure
for enzyme, ax in zip(model.enzyme_modules, axes[0]):
    # Make aggregate solutions for the active fraction
    make_active_fraction_solution(conc_sol, enzyme)
    # Plot steady state lines for energy charge and active fraction
    ax.plot([conc_sol["energy_charge"]] * 2,
            [-0.1, 1.1],
            color="grey", linestyle="--")
    ax.plot([-0.1, 1.1],
            [conc_sol[enzyme.id + "_active_fraction"]] * 2,
            color="grey", linestyle="--")
    # Add roman numerals to the top axes.
    for roman_num, coords in roman_num_coords.items():
        ax.text(*coords, roman_num, fontsize=18, transform=ax.transAxes,
                horizontalalignment='center', verticalalignment='center')

# Iterate through perturbations and colors
for i, (perturbation, color) in enumerate(zip(perturbation_list, colors)):
    # Simulate model with ATP utilization perturbation
    conc_sol, flux_sol = simulation.simulate(
        model, time=(0, 1000), perturbations=perturbation)
    # Make aggregate solution for energy charge
    make_energy_charge_solution(conc_sol)
    # Iterate through enzyme modules and axes for the top half of figure
    for j, (enzyme, ax) in enumerate(zip(model.enzyme_modules, axes[0])):
        # Make aggregate solutions for the active fraction
        make_active_fraction_solution(conc_sol, enzyme)
        # Place time point legend on middle plot only
        if j == 1:
            time_points_legend = "upper outside"
        else:
            time_points_legend = None
        # Make phase portrait of catalytic potential for an enzyme
        plot_phase_portrait(
            conc_sol, x="energy_charge",
            y=enzyme.id + "_active_fraction", ax=ax,
            xlim=xy_limits[enzyme.id][0],      # Axes limits
            ylim=xy_limits[enzyme.id][1],
            xlabel="Energy Charge (E.C.)",     # Axes labels
            ylabel=enzyme.id + " Active Fraction ($f_{A}$)",
            linestyle="--", color=color,       # Line color and style
            annotate_time_points="endpoints",  # Annotate time points
            annotate_time_points_color="black",
            annotate_time_points_legend=time_points_legend)
        # Plot time profile of FDP on the inset axes
        if j == 2:
            # Plot FDP time profile
            plot_time_profile(
                conc_sol, observable="fdp_c", ax=ax_inset,
                plot_function="semilogx",
                xlim=(1e-6, 1000), ylim=(0.004, 0.018),
                xlabel=("Time (hours)", {"size": "large"}),
                ylabel=("[FDP] (mM)", {"size": "large"}),
                color=color)
    # Iterate through enzyme pairs and axes for bottom half of figure
    for j, (enzyme_pair, ax) in enumerate(zip(enzyme_pairs, axes[1])):
        # Place legend on middle plot only (PFK vs. PYK)
        if j == 1:
            legend = (labels[i] + " in ATP utilization",
                      "lower outside")
        else:
            legend = None
        # Get the enzyme for the x-axis and for the y-axis
        enz_x, enz_y = enzyme_pair
        plot_phase_portrait(
            conc_sol,
            x=enz_x + "_active_fraction", y=enz_y + "_active_fraction",
            ax=ax, legend=legend,              # Axes instance and legend
            xlim=xy_limits[enzyme_pair][0],    # Axes limits
            ylim=xy_limits[enzyme_pair][1],
            xlabel=enz_x + " Active Fraction ($f_{A}$)",  # Axes labels
            ylabel=enz_y + " Active Fraction ($f_{A}$)",
            color=color, linestyle="--",       # Line color and style
            annotate_time_points="endpoints",  # Annotate time points
            annotate_time_points_color="black")
```
### B2: Steps to Reproduce
In this section, the steps for reproducing [Figure 4](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g004) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite> are outlined below in an expanded workflow.
#### B2.1: Setup
The first step is to load the model and ensure it exists in a steady state.
```
model = create_test_model("Glycolysis_FKRM.json")
simulation = Simulation(model, verbose=True)
simulation.find_steady_state(model, strategy="simulate",
                             update_values=True, tfinal=1e4)
```
When the `EnzymeModule` objects were created, the active forms of the enzyme were categorized and placed in a group. This makes creating the solution for the active fraction of the enzyme significantly easier.
```
for enzyme_module in model.enzyme_modules:
active = enzyme_module.enzyme_module_forms_categorized.get_by_id(
"_".join((enzyme_module.id, "Active")))
print("Enzyme: " + enzyme_module.id)
print("Group ID: " + active.id)
print("# of members: {0}\n".format(len(active.members)))
```
Because aggregate solutions for the adenylate energy charge and enzyme active fractions will need to be created multiple times, smaller "helper" functions are defined to facilitate this process.
```
def make_energy_charge_solution(conc_sol):
conc_sol.make_aggregate_solution(
aggregate_id="energy_charge",
equation="(atp_c + 0.5 * adp_c) / (atp_c + adp_c + amp_c)",
variables=["atp_c", "adp_c", "amp_c"], update=True)
return
def make_active_fraction_solution(conc_sol, enzyme_module):
# Get active group
active = enzyme_module.enzyme_module_forms_categorized.get_by_id(
enzyme_module.id + "_Active")
# Create string representing sum of active enzyme forms
active_eq = " + ".join([e.id for e in active.members])
# Create string representing sum of all enzyme forms
total_eq = str(strip_time(
enzyme_module.enzyme_concentration_total_equation))
# Make aggregate solution
conc_sol.make_aggregate_solution(
aggregate_id=enzyme_module.id + "_active_fraction",
equation="({0}) / ({1})".format(active_eq, total_eq),
variables=enzyme_module.enzyme_module_forms, update=True)
return
```
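As a sanity check on the energy charge equation used in the helper above, the formula can be evaluated with plain Python; the concentrations below are illustrative values only, not taken from the model.

```python
def energy_charge(atp, adp, amp):
    # Adenylate energy charge: (ATP + 0.5 * ADP) / (ATP + ADP + AMP)
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Illustrative concentrations in mM (assumed values, not from the model)
print(round(energy_charge(atp=1.6, adp=0.29, amp=0.087), 3))  # -> 0.883
```

The value ranges from 0 (all AMP) to 1 (all ATP), which is why the steady state lines below are plotted against axis limits of roughly that span.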
The figure and axes instances are created using **matplotlib**. The figure size is set to 15 x 10 to create three square plots in two rows for a total of 6 plots. Each plot will have a size of 5 x 5.
```
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(15, 10))
```
#### B2.2: Plot catalytic potential of enzymes
The following section demonstrates the necessary steps for generating the top half of the figure. The top half of the figure contains plots of the catalytic potentials for enzymes as a function of energy charge.
##### B2.2.1: Plot steady state lines
The steady state values are plotted as straight lines to visually indicate where the steady state lies.
```
# Simulate to steady state
conc_sol_ss, flux_sol_ss = simulation.find_steady_state(
model, strategy="simulate", update_values=True, tfinal=1e4)
# Make aggregate solutions for energy charge
make_energy_charge_solution(conc_sol_ss)
# Iterate through enzyme modules and axes for the top half of figure
for enzyme_module, ax in zip(model.enzyme_modules, axes[0]):
# Make aggregate solutions for the active fraction
make_active_fraction_solution(conc_sol_ss, enzyme_module)
# Plot steady state lines for energy charge
ax.plot([conc_sol_ss["energy_charge"]] * 2,
[-0.1, 1.1],
color="grey", linestyle="--")
# Plot steady state lines for active fraction
ax.plot([-0.1, 1.1],
[conc_sol_ss[enzyme_module.id + "_active_fraction"]] * 2,
color="grey", linestyle="--")
```
##### B2.2.2: Label quadrants with roman numerals
Roman numerals I-IV are added to the quadrants as follows:
* (I) more enzyme in active form and higher energy charge
* (II) more enzyme in active form and lower energy charge
* (III) more enzyme in inactive form and lower energy charge
* (IV) more enzyme in inactive form and higher energy charge
```
# Define roman numerals and coordinates
roman_num_coords = {
"I": (0.95, 0.95),
"II": (0.05, 0.95),
"III": (0.05, 0.05),
"IV": (0.95, 0.05)}
# Iterate through each axes
for ax in axes[0]:
# Add each number to the current axes.
for roman_num, coords in roman_num_coords.items():
ax.text(*coords, roman_num, fontsize=18,
horizontalalignment='center',
verticalalignment='center',
transform=ax.transAxes)
fig
```
##### B2.2.3: Plot catalytic potentials for ATP utilization perturbations
After plotting the steady state lines and labeling the quadrants, the model is simulated with perturbations to the ATP utilization rate. These perturbations include a 15% decrease in the rate constant for ATP use and a 50% increase in the rate constant for ATP use, represented by the colors "red" and "blue" respectively.
A legend is also created for the annotated time points, placed above the top middle plot (PFK catalytic potential).
```
# Values are formatted as [(xmin, xmax), (ymin, ymax)]
xy_limits = {
"HEX1": [(0.68, 0.92), (0.77, 0.89)],
"PFK": [(0.68, 1.02), (0.70, 1.02)],
"PYK": [(0.68, 0.92), (0.35, 1.01)]}
# Iterate through perturbations and colors
perturbation_list = [
{"kf_ATPM": "kf_ATPM * 0.85"},
{"kf_ATPM": "kf_ATPM * 1.50"}]
colors = ["red", "blue"]
for perturbation, color in zip(perturbation_list, colors):
# Simulate model with ATP utilization perturbation
conc_sol, flux_sol = simulation.simulate(
model, time=(0, 1000), perturbations=perturbation)
# Make aggregate solutions for energy charge and active fraction
make_energy_charge_solution(conc_sol)
# Iterate through enzyme modules and axes for the top half of figure
for enzyme_module, ax in zip(model.enzyme_modules, axes[0]):
# Make aggregate solutions for the active fraction
make_active_fraction_solution(conc_sol, enzyme_module)
# Place time point legend on middle plot only (PFK)
if enzyme_module.id == "PFK":
time_points_legend = "upper outside"
else:
time_points_legend = None
# Make phase portrait
plot_phase_portrait(
conc_sol, x="energy_charge",
y=enzyme_module.id + "_active_fraction", ax=ax,
xlim=xy_limits[enzyme_module.id][0], # Axes limits
ylim=xy_limits[enzyme_module.id][1],
xlabel="Energy Charge (E.C.)", # Axes labels
ylabel=enzyme_module.id + " Active Fraction ($f_{A}$)",
linestyle="--", color=color, # Line color and style
annotate_time_points="endpoints", # Annotate time points
annotate_time_points_color="black",
annotate_time_points_legend=time_points_legend)
fig
```
##### B2.2.4: Time profile for FDP concentration
The last step for creating the top half of the figure is to add the time profile of FDP concentration in the 2nd quadrant of the PYK catalytic potential plot.
```
# Get the rightmost axes instance on the top row
right_ax = axes[0][-1]
# Create inset axes
ax_inset = right_ax.inset_axes([0.18, 0.7, 0.37, 0.25])
# Remove tick labels
ax_inset.tick_params(labelbottom=False, labelleft=False)
# Iterate through perturbations and colors
perturbation_list = [
{"kf_ATPM": "kf_ATPM * 0.85"},
{"kf_ATPM": "kf_ATPM * 1.50"}]
colors = ["red", "blue"]
for perturbation, color in zip(perturbation_list, colors):
# Simulate model with ATP utilization perturbation
conc_sol, flux_sol = simulation.simulate(
model, time=(0, 1000), perturbations=perturbation)
# Plot FDP time profile
plot_time_profile(
conc_sol, observable="fdp_c", ax=ax_inset,
plot_function="semilogx",
xlim=(1e-6, 1000), ylim=(0.004, 0.018),
xlabel=("Time (hours)", {"size": "large"}),
ylabel=("[FDP] (mM)", {"size": "large"}),
color=color)
fig
```
#### B2.3: Plot pairwise relationships of enzyme active fractions
The following section demonstrates the necessary steps for generating the bottom half of the figure. The bottom half of the figure contains plots of the pairwise relationships between the active fractions of two kinases.
A legend is also created for the different simulations performed, placed below the bottom middle plot (PFK vs. PYK active fractions).
```
enzyme_pairs = [("PFK", "HEX1"), ("PFK", "PYK"), ("HEX1", "PYK")]
xy_limits = dict(zip(
enzyme_pairs, [
((0.75, 1.02), (0.78, 0.89)),
((0.75, 1.02), (0.38, 0.93)),
((0.78, 0.89), (0.38, 0.93)),
]
))
labels = ["15% decrease", "50% increase"]
# Iterate through perturbations and colors
for i, (perturbation, color) in enumerate(zip(perturbation_list, colors)):
# Simulate model with ATP utilization perturbation
conc_sol, flux_sol = simulation.simulate(
model, time=(0, 1000), perturbations=perturbation)
# Make aggregate solutions for the active fractions
for enzyme_module in model.enzyme_modules:
make_active_fraction_solution(conc_sol, enzyme_module)
# Iterate through enzyme pairs and axes for bottom half of figure
for enzyme_pair, ax in zip(enzyme_pairs, axes[1]):
# Place legend on middle plot only (PFK vs. PYK)
if enzyme_pair == ("PFK", "PYK"):
legend = (labels[i] + " in ATP utilization",
"lower outside")
else:
legend = None
# Get the enzyme for the x-axis and for the y-axis
enz_x, enz_y = enzyme_pair
plot_phase_portrait(
conc_sol,
x=enz_x + "_active_fraction", y=enz_y + "_active_fraction",
ax=ax, legend=legend, # Axes instance and legend
xlim=xy_limits[enzyme_pair][0], # Axes limits
ylim=xy_limits[enzyme_pair][1],
xlabel=enz_x + " Active Fraction ($f_{A}$)", # Axes labels
ylabel=enz_y + " Active Fraction ($f_{A}$)",
color=color, linestyle="--", # Line color and style
annotate_time_points="endpoints", # Annotate time points
annotate_time_points_color="black")
fig
```
[Figure 4](https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1006356.g004) of <cite data-cite="YAHP18">Yurkovich et al., 2018</cite> has been reproduced.
```
#export
from nbexp_personal import sendEmail
```
## utils
```
#export
def itemgetter(*args):
g = operator.itemgetter(*args)
def f(*args2):
return dict(zip(args, g(*args2)))
return f
#export
def write_json(filename, content):
    with open(filename, 'w', encoding='UTF-8') as f:
        json.dump(content, f, ensure_ascii=False, indent=4)
def read_json(filename):
    with open(filename, 'r', encoding='UTF-8') as f:
        return json.load(f)
```
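Unlike `operator.itemgetter`, which returns a bare tuple of values, the wrapper above returns a dict keyed by the requested fields. A minimal sketch restating the helper with sample data (note it assumes two or more keys, since `operator.itemgetter` with a single key returns a scalar that `zip` cannot consume):

```python
import operator

def itemgetter(*args):
    g = operator.itemgetter(*args)
    def f(*args2):
        return dict(zip(args, g(*args2)))
    return f

card = {'title': 'demo', 'desc': 'a video', 'pic': 'x.jpg'}
print(itemgetter('title', 'pic')(card))  # {'title': 'demo', 'pic': 'x.jpg'}
```

This is why `cvt_cards` below can pull a fixed tuple of keys out of each card while keeping the field names attached.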
## read and convert
```
#export
# curl command captured via the browser's "Copy as cURL (POSIX)"
def read_code(code_path):
with open(code_path, 'r', encoding='UTF-8') as f:
code = f.read().split('\n')[0]
return code
code = read_code('../bili_curl.txt')
# code
import nbexp_uncurl
c = nbexp_uncurl.parse(code, timeout=5)
# print(c)
# !{code.strip()}
#export
import nbexp_uncurl
import requests
from functools import partial
def fetch_code(code):
"""
    default timeout of five seconds
"""
c =nbexp_uncurl.parse(code, timeout=5)
r = eval(c)
j = r.json()
return j
#export
import operator
import json
import datetime
def get_time(timestamp):
d = datetime.datetime.fromtimestamp(timestamp)
d = d.isoformat()
return d
def cvt_cards(j):
cards = j['data']['cards']
# card = cards[0]
# uname = card['desc']['user_profile']['info']['uname']
# card = card['card']
# print( desc)
# return
unames = list(map(lambda card:card['desc']['user_profile']['info']['uname'], cards))
cards = list(map(operator.itemgetter('card'), cards))
cards = list(map(json.loads, cards))
kl = ('title', 'desc', 'pic', 'stat', 'ctime')
cards = list(map(itemgetter(*kl), cards))
def cvt(tp):
card, uname = tp
content_id = str(card['stat']['aid'])
content = itemgetter(*kl[:-2])(card)
pic = content['pic'] + '@64w_36h_1c.jpg'
content['pic'] = pic
d = get_time(card['ctime'])
url = 'https://www.bilibili.com/video/av' + content_id
return (content_id, {'content': content , "url":url , 'time':d, 'uname':uname } )
cards = dict((map(cvt, zip(cards, unames))))
return cards
def get_cards():
fetch = partial(fetch_code, code)
cards = cvt_cards(fetch())
return cards
get_cards()
```
## render
```
#export
def render_div(v):
content = v['content']
desc = content['desc']
if len(desc) > 50:
desc = desc[:20]+'...'
body = f"""
<div style="margin:10px">
<img src='{content['pic']}'>
<a href='{v['url']}'>{content['title']}</a>
<span>{desc} {v['time']}</span>
</div>
"""
return body
#export
def render_html(v_list):
divs = ''.join(map(render_div, v_list))
html = f"""\
<html>
<head></head>
<body>
{divs}
</body>
</html>
"""
return html
#export
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# Create the body of the message (a plain-text and an HTML version).
def render_msg(v_list, sub_name=""):
v_list = list(v_list)
html = render_html(v_list)
msg = MIMEMultipart('alternative')
    # '订阅' means "subscription"
    msg['Subject'] = sub_name + '订阅' + '+' + str(len(v_list))
msg['From'] = sub_name
msg['To'] = ''
# Record the MIME types of both parts - text/plain and text/html.
part2 = MIMEText(html, 'html')
msg.attach(part2)
return msg.as_string()
# msg = render_msg(cards.values())
```
## check oldcards
```
#export
def get_main(json_path, get_cards, sub_name=""):
"""
json_path where to read old cards and save merge content
"""
def main():
cards = get_cards()
wj = partial( write_json, json_path,)
rj = partial( read_json, json_path,)
if not exists(json_path):
            # send everything on first run
wj({})
old_cards = rj()
new_cards = filter(lambda tp:tp[0] not in old_cards, cards.items())
new_cards = map(operator.itemgetter(1), new_cards)
new_cards = list(new_cards)
if new_cards:
msg = render_msg(new_cards, sub_name)
sendEmail(msg)
old_cards.update(cards)
wj(old_cards)
return main
#export
def block_on_观视频工作室(tp):
    # Keep a card unless it is from the 观视频工作室 channel;
    # for that channel, keep only titles containing 睡前消息 ("Bedtime News")
    key, o = tp
    if o['uname'] != '观视频工作室': return True
    if '睡前消息' in o['content']['title']: return True
    return False
def filter_get_cards():
cards = get_cards()
cards = list(filter(block_on_观视频工作室, cards.items()))
cards = dict(cards)
return cards
filter_get_cards()
#export
from os.path import exists
json_path = './bili.json'
main = get_main(json_path, filter_get_cards, "bili")
main()
#export
if __name__ == '__main__': main()
!python notebook2script.py bilibili.ipynb
rj = partial(read_json, json_path)
old_cards = rj()
old_cards = list(map(operator.itemgetter(1), old_cards.items()))
new_cards = get_cards()
# msg = render_msg(new_cards, 'bili')
# sendEmail(msg, )
# cards
r = fetch_code(code)
```
# Bayesian Parametric Survival Analysis with PyMC3
```
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import numpy as np
import pymc3 as pm
import scipy as sp
import seaborn as sns
from statsmodels import datasets
from theano import shared, tensor as tt
plt.style.use('seaborn-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
[Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time between when a subject comes under observation and when that subject experiences an event of interest. One of the fundamental challenges of survival analysis (which also makes it mathematically interesting) is that, in general, not every subject will experience the event of interest before we conduct our analysis. In more concrete terms, if we are studying the time between cancer treatment and death (as we will in this post), we will often want to analyze our data before every subject has died. This phenomenon is called <a href="https://en.wikipedia.org/wiki/Censoring_(statistics)">censoring</a> and is fundamental to survival analysis.
I have previously [written](http://austinrochford.com/posts/2015-10-05-bayes-survival.html) about Bayesian survival analysis using the [semiparametric](https://en.wikipedia.org/wiki/Semiparametric_model) [Cox proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model#The_Cox_model). Implementing that semiparametric model in PyMC3 involved some fairly complex `numpy` code and nonobvious probability theory equivalences. This post illustrates a parametric approach to Bayesian survival analysis in PyMC3. Parametric models of survival are simpler to both implement and understand than semiparametric models; statistically, they are also more [powerful](https://en.wikipedia.org/wiki/Statistical_power) than non- or semiparametric methods _when they are correctly specified_. This post will not further cover the differences between parametric and nonparametric models or the various methods for choosing between them.
As in the previous post, we will analyze [mastectomy data](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [`HSAUR`](https://cran.r-project.org/web/packages/HSAUR/index.html) package. First, we load the data.
```
sns.set()
blue, green, red, purple, gold, teal = sns.color_palette(n_colors=6)
pct_formatter = StrMethodFormatter('{x:.1%}')
df = (datasets.get_rdataset('mastectomy', 'HSAUR', cache=True)
.data
.assign(metastized=lambda df: 1. * (df.metastized == "yes"),
event=lambda df: 1. * df.event))
df.head()
```
The column `time` represents the survival time for a breast cancer patient after a mastectomy, measured in months. The column `event` indicates whether or not the observation is censored. If `event` is one, the patient's death was observed during the study; if `event` is zero, the patient lived past the end of the study and their survival time is censored. The column `metastized` indicates whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastasis) prior to the mastectomy. In this post, we will use Bayesian parametric survival regression to quantify the difference in survival times for patients whose cancer had and had not metastized.
## Accelerated failure time models
[Accelerated failure time models](https://en.wikipedia.org/wiki/Accelerated_failure_time_model) are the most common type of parametric survival regression models. The fundamental quantity of survival analysis is the [survival function](https://en.wikipedia.org/wiki/Survival_function); if $T$ is the random variable representing the time to the event in question, the survival function is $S(t) = P(T > t)$. Accelerated failure time models incorporate covariates $\mathbf{x}$ into the survival function as
$$S(t\ |\ \beta, \mathbf{x}) = S_0\left(\exp\left(\beta^{\top} \mathbf{x}\right) \cdot t\right),$$
where $S_0(t)$ is a fixed baseline survival function. These models are called "accelerated failure time" because, when $\beta^{\top} \mathbf{x} > 0$, $\exp\left(\beta^{\top} \mathbf{x}\right) \cdot t > t$, so the effect of the covariates is to accelerate the _effective_ passage of time for the individual in question. The following plot illustrates this phenomenon using an exponential survival function.
```
S0 = sp.stats.expon.sf
fig, ax = plt.subplots(figsize=(8, 6))
t = np.linspace(0, 10, 100)
ax.plot(t, S0(5 * t),
label=r"$\beta^{\top} \mathbf{x} = \log\ 5$");
ax.plot(t, S0(2 * t),
label=r"$\beta^{\top} \mathbf{x} = \log\ 2$");
ax.plot(t, S0(t),
label=r"$\beta^{\top} \mathbf{x} = 0$ ($S_0$)");
ax.plot(t, S0(0.5 * t),
label=r"$\beta^{\top} \mathbf{x} = -\log\ 2$");
ax.plot(t, S0(0.2 * t),
label=r"$\beta^{\top} \mathbf{x} = -\log\ 5$");
ax.set_xlim(0, 10);
ax.set_xlabel(r"$t$");
ax.yaxis.set_major_formatter(pct_formatter);
ax.set_ylim(-0.025, 1);
ax.set_ylabel(r"Survival probability, $S(t\ |\ \beta, \mathbf{x})$");
ax.legend(loc=1);
ax.set_title("Accelerated failure times");
```
Accelerated failure time models are equivalent to log-linear models for $T$,
$$Y = \log T = \beta^{\top} \mathbf{x} + \varepsilon.$$
A choice of distribution for the error term $\varepsilon$ determines the baseline survival function, $S_0$, of the accelerated failure time model. The following table shows the correspondence between the distribution of $\varepsilon$ and $S_0$ for several common accelerated failure time models.
<center>
<table border="1">
<tr>
<th>Log-linear error distribution ($\varepsilon$)</th>
<th>Baseline survival function ($S_0$)</th>
</tr>
<tr>
<td>[Normal](https://en.wikipedia.org/wiki/Normal_distribution)</td>
<td>[Log-normal](https://en.wikipedia.org/wiki/Log-normal_distribution)</td>
</tr>
<tr>
<td>Extreme value ([Gumbel](https://en.wikipedia.org/wiki/Gumbel_distribution))</td>
<td>[Weibull](https://en.wikipedia.org/wiki/Weibull_distribution)</td>
</tr>
<tr>
<td>[Logistic](https://en.wikipedia.org/wiki/Logistic_distribution)</td>
<td>[Log-logistic](https://en.wikipedia.org/wiki/Log-logistic_distribution)</td>
</tr>
</table>
</center>
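The extreme value (Gumbel) → Weibull row of this table can be checked numerically. One subtlety: NumPy's `gumbel` sampler draws the maximum form of the distribution, while the extreme value distribution whose exponential is Weibull is the minimum form, so the samples are negated below; the scale `s = 0.5` is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5  # arbitrary Gumbel scale for illustration
# Gumbel-min sample = negated Gumbel-max sample (NumPy draws the max form)
eps = -rng.gumbel(loc=0.0, scale=s, size=200_000)
t = np.exp(eps)  # should follow a Weibull distribution, shape k = 1 / s, scale 1

for q in (0.5, 1.0, 2.0):
    empirical = (t > q).mean()
    weibull_sf = np.exp(-q ** (1 / s))  # Weibull survival function, scale 1
    print(f"P(T > {q}): empirical {empirical:.3f}, Weibull SF {weibull_sf:.3f}")
```

The empirical tail probabilities agree with the Weibull survival function to sampling error, confirming the table's correspondence for this row.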
Accelerated failure time models are conventionally named after their baseline survival function, $S_0$. The rest of this post will show how to implement Weibull and log-logistic survival regression models in PyMC3 using the mastectomy data.
### Weibull survival regression
In this example, the covariates are $\mathbf{x}_i = \left(1\ x^{\textrm{met}}_i\right)^{\top}$, where
$$
\begin{align*}
x^{\textrm{met}}_i
& = \begin{cases}
0 & \textrm{if the } i\textrm{-th patient's cancer had not metastized} \\
1 & \textrm{if the } i\textrm{-th patient's cancer had metastized}
\end{cases}.
\end{align*}
$$
We construct the matrix of covariates $\mathbf{X}$.
```
n_patient, _ = df.shape
X = np.empty((n_patient, 2))
X[:, 0] = 1.
X[:, 1] = df.metastized
```
We place independent, vague normal prior distributions on the regression coefficients,
$$\beta \sim N(0, 5^2 I_2).$$
```
VAGUE_PRIOR_SD = 5.
with pm.Model() as weibull_model:
β = pm.Normal('β', 0., VAGUE_PRIOR_SD, shape=2)
```
The covariates, $\mathbf{x}$, affect the value of $Y = \log T$ through $\eta = \beta^{\top} \mathbf{x}$.
```
X_ = shared(X)
with weibull_model:
η = β.dot(X_.T)
```
For Weibull regression, we use
$$
\begin{align*}
\varepsilon
& \sim \textrm{Gumbel}(0, s) \\
s
& \sim \textrm{HalfNormal(5)}.
\end{align*}
$$
```
with weibull_model:
s = pm.HalfNormal('s', 5.)
```
We are nearly ready to specify the likelihood of the observations given these priors. Before doing so, we transform the observed times to the log scale and standardize them.
```
y = np.log(df.time.values)
y_std = (y - y.mean()) / y.std()
```
The likelihood of the data is specified in two parts, one for uncensored samples, and one for censored samples. Since $Y = \eta + \varepsilon$, and $\varepsilon \sim \textrm{Gumbel}(0, s)$, $Y \sim \textrm{Gumbel}(\eta, s)$. For the uncensored survival times, the likelihood is implemented as
```
cens = df.event.values == 0.
cens_ = shared(cens)
with weibull_model:
y_obs = pm.Gumbel(
'y_obs', η[~cens_], s,
observed=y_std[~cens]
)
```
For censored observations, we only know that their true survival time exceeded the total time that they were under observation. This probability is given by the survival function of the Gumbel distribution,
$$P(Y \geq y) = 1 - \exp\left(-\exp\left(-\frac{y - \mu}{s}\right)\right).$$
This survival function is implemented below.
```
def gumbel_sf(y, μ, σ):
return 1. - tt.exp(-tt.exp(-(y - μ) / σ))
```
We now specify the likelihood for the censored observations.
```
with weibull_model:
y_cens = pm.Potential(
'y_cens', gumbel_sf(y_std[cens], η[cens_], s)
)
```
We now sample from the model.
```
SEED = 845199 # from random.org, for reproducibility
SAMPLE_KWARGS = {
'chains': 3,
'tune': 1000,
'random_seed': [
SEED,
SEED + 1,
SEED + 2
]
}
with weibull_model:
weibull_trace = pm.sample(**SAMPLE_KWARGS)
```
The energy plot and Bayesian fraction of missing information give no cause for concern about poor mixing in NUTS.
```
pm.energyplot(weibull_trace);
pm.bfmi(weibull_trace)
```
The Gelman-Rubin statistics also indicate convergence.
```
max(np.max(gr_stats) for gr_stats in pm.rhat(weibull_trace).values())
```
Below we plot posterior distributions of the parameters.
```
pm.plot_posterior(weibull_trace, lw=0, alpha=0.5);
```
These are somewhat interesting (especially the fact that the posterior of $\beta_1$ is fairly well-separated from zero), but the posterior predictive survival curves will be much more interpretable.
The advantage of using [`theano.shared`](http://deeplearning.net/software/theano_versions/dev/library/compile/shared.html) variables is that we can now change their values to perform posterior predictive sampling. For posterior prediction, we set $X$ to have two rows, one for a subject whose cancer had not metastized and one for a subject whose cancer had metastized. Since we want to predict actual survival times, none of the posterior predictive rows are censored.
```
X_pp = np.empty((2, 2))
X_pp[:, 0] = 1.
X_pp[:, 1] = [0, 1]
X_.set_value(X_pp)
cens_pp = np.repeat(False, 2)
cens_.set_value(cens_pp)
with weibull_model:
pp_weibull_trace = pm.sample_posterior_predictive(
weibull_trace, samples=1500, vars=[y_obs]
)
```
The posterior predictive survival times show that, on average, patients whose cancer had not metastized survived longer than those whose cancer had metastized.
```
t_plot = np.linspace(0, 230, 100)
weibull_pp_surv = (np.greater_equal
.outer(np.exp(y.mean() + y.std() * pp_weibull_trace['y_obs']),
t_plot))
weibull_pp_surv_mean = weibull_pp_surv.mean(axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(t_plot, weibull_pp_surv_mean[0],
c=blue, label="Not metastized");
ax.plot(t_plot, weibull_pp_surv_mean[1],
c=red, label="Metastized");
ax.set_xlim(0, 230);
ax.set_xlabel("Months since mastectomy");
ax.set_ylim(top=1);
ax.yaxis.set_major_formatter(pct_formatter);
ax.set_ylabel("Survival probability");
ax.legend(loc=1);
ax.set_title("Weibull survival regression model");
```
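The `np.greater_equal.outer` idiom used above builds a samples-by-time boolean matrix whose column means form the empirical survival curve. A minimal sketch with made-up survival times:

```python
import numpy as np

samples = np.array([3.0, 7.0, 12.0])  # hypothetical posterior predictive survival times
t_grid = np.array([0.0, 5.0, 10.0])
# alive[i, j] is True when sample i survives past time t_grid[j]
alive = np.greater_equal.outer(samples, t_grid)
surv_curve = alive.mean(axis=0)  # fraction of samples surviving past each t
print(surv_curve)
```

With the three samples above the curve steps from 1 down through 2/3 to 1/3; in the post's code the extra leading axis of the posterior predictive draws simply yields one such curve per covariate row.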
### Log-logistic survival regression
Other accelerated failure time models can be specified in a modular way by changing the prior distribution on $\varepsilon$. A log-logistic model corresponds to a [logistic](https://en.wikipedia.org/wiki/Logistic_distribution) prior on $\varepsilon$. Most of the model specification is the same as for the Weibull model above.
```
X_.set_value(X)
cens_.set_value(cens)
with pm.Model() as log_logistic_model:
β = pm.Normal('β', 0., VAGUE_PRIOR_SD, shape=2)
η = β.dot(X_.T)
s = pm.HalfNormal('s', 5.)
```
We use the prior $\varepsilon \sim \textrm{Logistic}(0, s)$. The survival function of the logistic distribution is
$$P(Y \geq y) = 1 - \frac{1}{1 + \exp\left(-\left(\frac{y - \mu}{s}\right)\right)},$$
so we get the likelihood
```
def logistic_sf(y, μ, s):
return 1. - pm.math.sigmoid((y - μ) / s)
with log_logistic_model:
y_obs = pm.Logistic(
'y_obs', η[~cens_], s,
observed=y_std[~cens]
)
y_cens = pm.Potential(
'y_cens', logistic_sf(y_std[cens], η[cens_], s)
)
```
We now sample from the log-logistic model.
```
with log_logistic_model:
log_logistic_trace = pm.sample(**SAMPLE_KWARGS)
```
All of the sampling diagnostics look good for this model.
```
pm.energyplot(log_logistic_trace);
pm.bfmi(log_logistic_trace)
max(np.max(gr_stats) for gr_stats in pm.rhat(log_logistic_trace).values())
```
Again, we calculate the posterior expected survival functions for this model.
```
X_.set_value(X_pp)
cens_.set_value(cens_pp)
with log_logistic_model:
pp_log_logistic_trace = pm.sample_posterior_predictive(
log_logistic_trace, samples=1500, vars=[y_obs]
)
log_logistic_pp_surv = (np.greater_equal
.outer(np.exp(y.mean() + y.std() * pp_log_logistic_trace['y_obs']),
t_plot))
log_logistic_pp_surv_mean = log_logistic_pp_surv.mean(axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(t_plot, weibull_pp_surv_mean[0],
c=blue, label="Weibull, not metastized");
ax.plot(t_plot, weibull_pp_surv_mean[1],
c=red, label="Weibull, metastized");
ax.plot(t_plot, log_logistic_pp_surv_mean[0],
'--', c=blue,
label="Log-logistic, not metastized");
ax.plot(t_plot, log_logistic_pp_surv_mean[1],
'--', c=red,
label="Log-logistic, metastized");
ax.set_xlim(0, 230);
ax.set_xlabel("Months since mastectomy");
ax.set_ylim(top=1);
ax.yaxis.set_major_formatter(pct_formatter);
ax.set_ylabel("Survival probability");
ax.legend(loc=1);
ax.set_title("Weibull and log-logistic\nsurvival regression models");
```
This post has been a short introduction to implementing parametric survival regression models in PyMC3 with a fairly simple data set. The modular nature of probabilistic programming with PyMC3 should make it straightforward to generalize these techniques to more complex and interesting data sets.
## Authors
- Originally authored as a blog post by [Austin Rochford](https://austinrochford.com/posts/2017-10-02-bayes-param-survival.html) on October 2, 2017.
- Updated by [George Ho](https://eigenfoo.xyz/) on July 18, 2018.
# Transposed Convolution
:label:`sec_transposed_conv`
The CNN layers we have seen so far,
such as convolutional layers (:numref:`sec_conv_layer`) and pooling layers (:numref:`sec_pooling`),
typically reduce (downsample) the spatial dimensions (height and width) of the input,
or keep them unchanged.
In semantic segmentation
that classifies at pixel-level,
it will be convenient if
the spatial dimensions of the
input and output are the same.
For example,
the channel dimension at one output pixel
can hold the classification results
for the input pixel at the same spatial position.
To achieve this, especially after
the spatial dimensions are reduced by CNN layers,
we can use another type
of CNN layers
that can increase (upsample) the spatial dimensions
of intermediate feature maps.
In this section,
we will introduce
*transposed convolution*, which is also called *fractionally-strided convolution* :cite:`Dumoulin.Visin.2016`,
for reversing downsampling operations
by the convolution.
```
import torch
from torch import nn
from d2l import torch as d2l
```
## Basic Operation
Ignoring channels for now,
let us begin with
the basic transposed convolution operation
with stride of 1 and no padding.
Suppose that
we are given an
$n_h \times n_w$ input tensor
and a $k_h \times k_w$ kernel.
Sliding the kernel window with stride of 1
for $n_w$ times in each row
and $n_h$ times in each column
yields
a total of $n_h n_w$ intermediate results.
Each intermediate result is
a $(n_h + k_h - 1) \times (n_w + k_w - 1)$
tensor that is initialized as zeros.
To compute each intermediate tensor,
each element in the input tensor
is multiplied by the kernel
so that the resulting $k_h \times k_w$ tensor
replaces a portion in
each intermediate tensor.
Note that
the position of the replaced portion in each
intermediate tensor corresponds to the position of the element
in the input tensor used for the computation.
In the end, all the intermediate results
are summed over to produce the output.
As an example,
:numref:`fig_trans_conv` illustrates
how transposed convolution with a $2\times 2$ kernel is computed for a $2\times 2$ input tensor.

:label:`fig_trans_conv`
We can (**implement this basic transposed convolution operation**) `trans_conv` for an input matrix `X` and a kernel matrix `K`.
```
def trans_conv(X, K):
h, w = K.shape
Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Y[i: i + h, j: j + w] += X[i, j] * K
return Y
```
In contrast to the regular convolution (in :numref:`sec_conv_layer`) that *reduces* input elements
via the kernel,
the transposed convolution
*broadcasts* input elements
via the kernel, thereby
producing an output
that is larger than the input.
We can construct the input tensor `X` and the kernel tensor `K` from :numref:`fig_trans_conv` to [**validate the output of the above implementation**] of the basic two-dimensional transposed convolution operation.
```
X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
trans_conv(X, K)
```
Alternatively,
when the input `X` and kernel `K` are both
four-dimensional tensors,
we can [**use high-level APIs to obtain the same results**].
```
X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)
tconv.weight.data = K
tconv(X)
```
## [**Padding, Strides, and Multiple Channels**]
Different from in the regular convolution
where padding is applied to input,
it is applied to output
in the transposed convolution.
For example,
when specifying the padding number
on either side of the height and width
as 1,
the first and last rows and columns
will be removed from the transposed convolution output.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, padding=1, bias=False)
tconv.weight.data = K
tconv(X)
```
In the transposed convolution,
strides are specified for intermediate results (thus output), not for input.
Using the same input and kernel tensors
from :numref:`fig_trans_conv`,
changing the stride from 1 to 2
increases both the height and width
of intermediate tensors, hence the output tensor
in :numref:`fig_trans_conv_stride2`.

:label:`fig_trans_conv_stride2`
The following code snippet can validate the transposed convolution output for stride of 2 in :numref:`fig_trans_conv_stride2`.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
tconv.weight.data = K
tconv(X)
```
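Putting the padding and stride behavior together: for input size $n$, kernel size $k$, stride $s$, and padding $p$ per spatial dimension, the transposed convolution output size (without output padding) follows $(n-1)s + k - 2p$, inverting the regular convolution's size formula. A quick check against the cases in this section, plus a kernel-5, padding-2, stride-3 configuration like the multi-channel example:

```python
def trans_conv_out_size(n, k, stride=1, padding=0):
    # Transposed convolution output size per spatial dimension
    # (no output_padding), inverting the regular convolution formula
    return (n - 1) * stride + k - 2 * padding

print(trans_conv_out_size(2, 2))             # basic case: 2 + 2 - 1 = 3
print(trans_conv_out_size(2, 2, padding=1))  # padding trims a row/column on each side
print(trans_conv_out_size(2, 2, stride=2))   # stride-2 case of the figure
```

Note that a regular convolution mapping 16 to 6 with kernel 5, padding 2, stride 3 is inverted by this formula: $(6-1)\cdot 3 + 5 - 2\cdot 2 = 16$.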
For multiple input and output channels,
the transposed convolution
works in the same way as the regular convolution.
Suppose that
the input has $c_i$ channels,
and that the transposed convolution
assigns a $k_h\times k_w$ kernel tensor
to each input channel.
When multiple output channels
are specified,
we will have a $c_i\times k_h\times k_w$ kernel for each output channel.
All in all, if we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except
for the number of output channels
being the number of channels in $\mathsf{X}$,
then $g(Y)$ will have the same shape as $\mathsf{X}$.
This can be illustrated in the following example.
```
X = torch.rand(size=(1, 10, 16, 16))
conv = nn.Conv2d(10, 20, kernel_size=5, padding=2, stride=3)
tconv = nn.ConvTranspose2d(20, 10, kernel_size=5, padding=2, stride=3)
tconv(conv(X)).shape == X.shape
```
## [**Connection to Matrix Transposition**]
:label:`subsec-connection-to-mat-transposition`
The transposed convolution is named after
the matrix transposition.
To explain,
let us first
see how to implement convolutions
using matrix multiplications.
In the example below, we define a $3\times 3$ input `X` and a $2\times 2$ convolution kernel `K`, and then use the `corr2d` function to compute the convolution output `Y`.
```
X = torch.arange(9.0).reshape(3, 3)
K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
Y = d2l.corr2d(X, K)
Y
```
Next, we rewrite the convolution kernel `K` as
a sparse weight matrix `W`
containing a lot of zeros.
The shape of the weight matrix is ($4$, $9$),
where the non-zero elements come from
the convolution kernel `K`.
```
def kernel2matrix(K):
    k, W = torch.zeros(5), torch.zeros((4, 9))
    k[:2], k[3:5] = K[0, :], K[1, :]
    W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
    return W
W = kernel2matrix(K)
W
```
Concatenate the input `X` row by row to get a vector of length 9. Then the matrix multiplication of `W` and the vectorized `X` gives a vector of length 4.
After reshaping it, we can obtain the same result `Y`
from the original convolution operation above:
we just implemented convolutions using matrix multiplications.
```
Y == torch.matmul(W, X.reshape(-1)).reshape(2, 2)
```
Likewise, we can implement transposed convolutions using
matrix multiplications.
In the following example,
we take the $2 \times 2$ output `Y` from the above
regular convolution
as the input to the transposed convolution.
To implement this operation by multiplying matrices,
we only need to transpose the weight matrix `W`
with the new shape $(9, 4)$.
```
Z = trans_conv(Y, K)
Z == torch.matmul(W.T, Y.reshape(-1)).reshape(3, 3)
```
Consider implementing the convolution
by multiplying matrices.
Given an input vector $\mathbf{x}$
and a weight matrix $\mathbf{W}$,
the forward propagation function of the convolution
can be implemented
by multiplying its input with the weight matrix
and outputting a vector
$\mathbf{y}=\mathbf{W}\mathbf{x}$.
Since backpropagation
follows the chain rule
and $\nabla_{\mathbf{x}}\mathbf{y}=\mathbf{W}^\top$,
the backpropagation function of the convolution
can be implemented
by multiplying its input with the
transposed weight matrix $\mathbf{W}^\top$.
Therefore,
the transposed convolutional layer
can just exchange the forward propagation function
and the backpropagation function of the convolutional layer:
its forward propagation
and backpropagation functions
multiply their input vector with
$\mathbf{W}^\top$ and $\mathbf{W}$, respectively.
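A quick autograd check of this exchange (a sketch we add here, with variable names of our choosing): for $\mathbf{y}=\mathbf{W}\mathbf{x}$, backpropagating an incoming gradient $\mathbf{v}$ through $\mathbf{y}$ deposits $\mathbf{W}^\top\mathbf{v}$ at $\mathbf{x}$, which is the very multiplication the transposed convolution performs in its forward pass.

```python
import torch

# For y = W x, autograd's backward pass multiplies the incoming gradient v
# by W^T, which is what a transposed convolution does in its forward pass.
W = torch.tensor([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 2.0]])
x = torch.arange(3.0, requires_grad=True)
y = W @ x
v = torch.tensor([1.0, -1.0])  # incoming gradient dL/dy
y.backward(v)
print(torch.allclose(x.grad, W.T @ v))  # True
```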
## Summary
* In contrast to the regular convolution that reduces input elements via the kernel, the transposed convolution broadcasts input elements via the kernel, thereby producing an output that is larger than the input.
* If we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except for the number of output channels being the number of channels in $\mathsf{X}$, then $g(\mathsf{Y})$ will have the same shape as $\mathsf{X}$.
* We can implement convolutions using matrix multiplications. The transposed convolutional layer can just exchange the forward propagation function and the backpropagation function of the convolutional layer.
## Exercises
1. In :numref:`subsec-connection-to-mat-transposition`, the convolution input `X` and the transposed convolution output `Z` have the same shape. Do they have the same value? Why?
1. Is it efficient to use matrix multiplications to implement convolutions? Why?
[Discussions](https://discuss.d2l.ai/t/1450)
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/select_by_location.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/select_by_location.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=FeatureCollection/select_by_location.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/select_by_location.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 3), {}, 'HUC08')
# select polygons intersecting the roi
roi2 = HUC10.filter(ee.Filter.contains(**{'leftValue': roi.geometry(), 'rightField': '.geo'}))
Map.addLayer(ee.Image().paint(roi2, 0, 2), {'palette': 'blue'}, 'HUC10')
# roi3 = HUC10.filter(ee.Filter.stringContains(**{'leftField': 'huc10', 'rightValue': '10160002'}))
# # print(roi3)
# Map.addLayer(roi3)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Calculate molecular descriptors by graph neural net
```
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from rdkit import Chem
import numpy as np
#load database
path="../database/small_db.csv"
df=pd.read_csv(path)
df
#set target y
target="boiling temperature"
#target="melting temperature"
#target="density"
#target="viscosity"
#prepare mol objects and y
smiles_list=df["SMILES"]
mol_list=[Chem.MolFromSmiles(s) for s in df["SMILES"]]
y_list=np.array(df[target],dtype=np.float32).reshape(-1,1)
#in this demo, train/test dataset is made
spl_ratio=0.8
train_mols,test_mols,train_y,test_y=train_test_split(mol_list,y_list,test_size=1-spl_ratio)
#if you want to just calculate descriptors, train all data
#train_mols,train_y=mol_list,y_list
#prepare graph objects from mol objects
from gnn import mol2dgl_single,collate,ATOM_FDIM, MID_DIM,GCN
train_graphs = mol2dgl_single(train_mols)
test_graphs = mol2dgl_single(test_mols)
#set dataloader
from torch.utils.data import DataLoader
BATCH_SIZE=16
dataset = list(zip(train_graphs, train_y))
data_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate)
#define regressor
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import dgl
class Regressor(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_size):
        super(Regressor, self).__init__()
        self.layers = nn.ModuleList([GCN(in_dim, hidden_dim, F.relu),
                                     GCN(hidden_dim, hidden_dim, F.relu)])
        self.regress = nn.Linear(hidden_dim, out_size)
        self.intermediate_mode = 0

    def forward(self, g):
        h = g.ndata['h']
        for conv in self.layers:
            h = conv(g, h)
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        if self.intermediate_mode == 2:
            return hg
        elif self.intermediate_mode == 1:
            return self.regress(hg)
        else:
            return self.regress(hg), hg
model = Regressor(ATOM_FDIM, MID_DIM, 1)
# if intermediate_mode == 1, return y. if 2, return hidden layer outputs (= neural descriptor)
model.intermediate_mode=1
loss_func = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
model.train()
epoch_losses = []
for epoch in range(200):
    epoch_loss = 0
    for i, (bg, label) in enumerate(data_loader):
        bg.set_e_initializer(dgl.init.zero_initializer)
        bg.set_n_initializer(dgl.init.zero_initializer)
        pred = model(bg)
        loss = loss_func(pred, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.detach().item()
    epoch_loss /= (i + 1)
    if (epoch+1) % 20 == 0:
        print('Epoch {}, loss {:.4f}'.format(epoch+1, epoch_loss))
    epoch_losses.append(epoch_loss)
plt.plot(epoch_losses, c='b')
def batch_predict(model, graphs):
    model.eval()
    bg = dgl.batch(graphs)
    bg.set_e_initializer(dgl.init.zero_initializer)
    bg.set_n_initializer(dgl.init.zero_initializer)
    return model(bg).detach().numpy()
#predict
tr_pred_y=batch_predict(model,train_graphs)
te_pred_y=batch_predict(model,test_graphs)
sns.scatterplot(train_y.reshape(-1),tr_pred_y.reshape(-1))
sns.scatterplot(test_y.reshape(-1),te_pred_y.reshape(-1))
#output molecular descriptors
model.intermediate_mode=2
desc_array=batch_predict(model,train_graphs)
print(desc_array.shape)
desc_array
```
```
import cuml
import cudf
import nvcategory
import xgboost as xgb
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score
#Read in the data. Notice how it decompresses as it reads the data into memory.
gdf = cudf.read_csv('/rapids/Data/black-friday.zip')
#Taking a look at the data. We use "to_pandas()" to get the pretty printing.
gdf.head().to_pandas()
#Exercise: Let's do some descriptive statistics
#Hint: try some of the functions you may know from Pandas, like DataFrame.Series.max(), or look up the documentation here:
#grabbing the first character of the years in city string to get rid of plus sign, and converting to int
gdf['city_years'] = gdf.Stay_In_Current_City_Years.str.get(0).stoi()
#Here we can see how we can control the values of our dummies with the replace method and turn strings into ints
gdf['City_Category'] = gdf.City_Category.str.replace('A', '1')
gdf['City_Category'] = gdf.City_Category.str.replace('B', '2')
gdf['City_Category'] = gdf.City_Category.str.replace('C', '3')
gdf['City_Category'] = gdf['City_Category'].str.stoi()
#EXERCISE: replace city in the same way as City Category
#Hint: the Gender column only has values 'M' and 'F'
#Solution
gdf['Gender'] = gdf.Gender.str.replace('F', '1')
gdf['Gender'] = gdf.Gender.str.replace('M', '0')
gdf['Gender'] = gdf.Gender.str.stoi()
#Let's take a look at how many products we have
prod_count = cudf.Series(nvcategory.from_strings(gdf.Product_ID.data).values()).unique().count() #hideous one-liner
print("Unique Products: {}".format(prod_count))
#Let's take a look at how many primary product categories we have
#We do it differently here because the variable is a number, not a string
prod1_count = gdf.Product_Category_1.unique().count()
print("Unique Product Categories: {}".format(prod1_count))
#Filling missing values
gdf['Product_Category_2'] = gdf['Product_Category_2'].fillna(0)
#EXERCISE: Make a variable that's 1 if the product is multi-category, 0 otherwise
#Hint: think about how to combine the Product Category 2 and Product Category 3
#Solution:
gdf['Product_Category_3'] = gdf['Product_Category_3'].fillna(0)
gdf['multi'] = ((gdf['Product_Category_2'] + gdf['Product_Category_3'])>0).astype('int')
#EXERCISE: Create a Gender/Marital Status Interaction Effect
#Hint: both Gender and Marital Status are 0/1
#Solution:
gdf['gen_mar_interaction'] = gdf['Gender']*gdf['Marital_Status']
#Because Occupation is a code, it should be converted into indicator variables
gdf = gdf.one_hot_encoding('Occupation', 'occ_dummy', gdf.Occupation.unique())
#Dummy variable from Int
gdf = gdf.one_hot_encoding('City_Category', 'city_cat', gdf.City_Category.unique())
#Dummy from string
cat = nvcategory.from_strings(gdf.Age.data)
gdf['Age'] = cudf.Series(cat.values())
gdf = gdf.one_hot_encoding('Age', 'age', gdf.Age.unique())
#EXERCISE: Create dummy variables from Product Category 1
#Solution:
gdf = gdf.one_hot_encoding('Product_Category_1', 'product', gdf.Product_Category_1.unique())
#We're going to drop the variables we've transformed
drop_list = ['User_ID', 'Age', 'Stay_In_Current_City_Years', 'City_Category','Product_ID', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3']
gdf = gdf.drop(drop_list)
#We're going to make a list of all the first indicator variables in a series now so it will be
#easier to exclude them when we're doing regressions later
dummy_list = ['occ_dummy_0', 'city_cat_1', 'age_0', 'product_1', 'Purchase']
#All variables currently have to have the same type for some methods in cuML
for col in gdf.columns.tolist():
    gdf[col] = gdf[col].astype('float64')
train_size = round(len(gdf)*0.2)
test_size = round(len(gdf)-train_size)
train = gdf.iloc[0:train_size]
#EXERCISE: Make the test set in a similar way
#Solution:
gdf_train = gdf.iloc[train_size:]
#Deleting the main gdf because we're going to be making other subsets and other stuff, so it will be nice to have the memory.
del(gdf)
y_train = gdf_train['Purchase']
X_reg = gdf_train.drop(dummy_list)
# # I'm going to perform a hyperparameter search for alpha in a ridge regression
# for alpha in np.arange(0.0, 1, 0.01):
# Ridge = cuml.Ridge(alpha=alpha, fit_intercept=True)
# _fit = Ridge.fit(X_reg, y_train)
# _y_hat = _fit.predict(X_reg)
# _roc = roc_auc_score(y_train, _y_hat)
# output['MSE_RIDGE_{}'.format(alpha)] = _roc
# print('MAX AUC: {}'.format(min(output, key=output.get)))
# Ridge = cuml.Ridge(alpha=.1, fit_intercept=True)
# _fit = Ridge.fit(X_reg, y_train)
# _y_hat = _fit.predict(X_reg)
# _roc = roc_auc_score(y_train, _y_hat)
# output['MSE_RIDGE_{}'.format(alpha)] = _roc
# y_xgb = gdf_train[['Purchase']]
# X_xgb = gdf_train.drop('Purchase')
# xgb_train_set = xgb.DMatrix(data=X_xgb, label=y_xgb)
# xgb_params = {
# 'nround':100,
# 'max_depth':4,
# 'max_leaves':2**4,
# 'tree_method':'gpu_hist',
# 'n_gpus':1,
# 'loss':'ls',
# 'objective':'reg:linear',
# 'max_features':'auto',
# 'criterion':'friedman_mse',
# 'grow_policy':'lossguide',
# 'verbose':True
# }
# xgb_model = xgb.train(xgb_params, dtrain=xgb_train_set)
# y_hat_xgb = xgb_model.predict(xgb_train_set)
# RMSE = np.sqrt(mean_squared_error(y_xgb['Purchase'].to_pandas(), y_hat_xgb)) #get out of sample RMSE too
# print(RMSE)
#EXERCISE: Change XGB around to predict if someone is married based on the data we have
#Hint: in the xgb parameters, change the objective function to 'reg:logistic'
#Solution
# y_xgb = gdf_train[['Marital_Status']]
# X_xgb = gdf_train.drop('Marital_Status')
# xgb_train_set = xgb.DMatrix(data=X_xgb, label=y_xgb)
# xgb_params = {
# 'nround':100,
# 'max_depth':4,
# 'max_leaves':2**4,
# 'tree_method':'gpu_hist',
# 'n_gpus':1,
# 'loss':'ls',
# 'objective':'reg:logistic',
# 'max_features':'auto',
# 'criterion':'friedman_mse',
# 'grow_policy':'lossguide',
# 'verbose':True
# }
# xgb_model = xgb.train(xgb_params, dtrain=xgb_train_set)
# y_hat_xgb = xgb_model.predict(xgb_train_set)
# AUC = roc_auc_score(y_xgb['Marital_Status'].to_pandas(), y_hat_xgb)
# print(AUC)
#EXTRA EXERCISE: Apply kNN to the customers
#EXTRA EXERCISE: Apply PCA to data
```
# COVID-19 and ACE inhibitors
**This example does not use actual COVID-19 data and does not offer medical advice.**
This notebook shows how to test whether ACE inhibitors explain higher mortality from COVID-19 among hypertensive patients.
Suppose we have a machine learning model which predicts COVID-19 mortality based on a patient's characteristics. We can use generalized SHAP values to ask what variables lead our model to predict higher mortality rates for hypertensive patients.
```
import gshap
from gshap.datasets import load_recidivism
from gshap.intergroup import IntergroupDifference, absolute_mean_distance
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import multiprocessing as mp
```
Without access to a COVID-19 dataset, I load a recidivism dataset and rename the variables.
```
recidivism = load_recidivism()
X, y = recidivism.data, recidivism.target
X = X.rename(columns={'black': 'hypertensive', 'age': 'ACE_inhibitor'})
y = y.rename('mortality')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000)
clf = SVC()
clf.fit(X_train, y_train)
print('Test score: %.4f' % clf.score(X_test, y_test))
```
Our model predicts higher mortality rates for hypertensive patients.
```
df = pd.concat((X_test, y_test), axis=1)
df['y_pred'] = clf.predict(X_test)
p_non_hyper, p_hyper = df.groupby('hypertensive')['y_pred'].mean()
print('Predicted mortality rate for non-hypertensive patients: {0:.0f}%'.format(100*p_non_hyper))
print('Predicted mortality rate for hypertensive patients: {0:.0f}%'.format(100*p_hyper))
print('Absolute difference: {0:.0f} percentage points'.format(100*(p_hyper - p_non_hyper)))
```
We now ask how many percentage points of the difference in mortality rates is explained by ACE inhibitors.
```
g = IntergroupDifference(group=X_test['hypertensive'], distance=absolute_mean_distance)
explainer = gshap.KernelExplainer(clf.predict, X_train, g)
gshap_value = explainer.gshap_value('ACE_inhibitor', X_test, nsamples=32)
print('Difference in mortality rates explained by ACE inhibitors: {0:.0f} percentage points'.format(100*gshap_value))
```
Additionally, we can use bootstrapping to run hypothesis tests and obtain confidence bounds.
```
bootstrap_samples = 200
def bootstrap_gshap_values(output):
    sample = X_test.sample(len(X_test), replace=True)
    g = IntergroupDifference(group=sample['hypertensive'], distance=absolute_mean_distance)
    explainer = gshap.KernelExplainer(clf.predict, X_train, g)
    output.put(explainer.gshap_value('ACE_inhibitor', sample, nsamples=10))

output = mp.Queue()
processes = [
    mp.Process(target=bootstrap_gshap_values, args=(output,))
    for i in range(bootstrap_samples)
]
[p.start() for p in processes]
[p.join() for p in processes]
gshap_values = np.array([output.get() for p in processes])
```
In this example, we ask how likely it is that ACE inhibitors explain more than 5 percentage points of the difference in mortality rates between hypertensive and non-hypertensive patients.
```
threshold = .05
p_val = (gshap_values>threshold).mean()
print('Probability that ACE inhibitors explain more than {0:.1f} percentage points of the difference in mortality rates: {1:.1f}%'.format(100*threshold, 100*p_val))
```
### Lyrics model.
This notebook contains code for training the lyrics model, which is built using a pretrained BERT model.
```
import pandas as pd
import numpy as np
import torch
from tqdm.notebook import tqdm
from transformers import BertTokenizer, BertForSequenceClassification, AdamW, get_linear_schedule_with_warmup
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
# load the data
with open("../data/train/lyrics/lyrics.txt", "r") as f:
    lyrics = f.read()
with open("../data/train/lyrics/labels.txt", "r") as f:
    labels = f.read()
lyrics_split = lyrics.split("\n")
labels_split = labels.split("\n")
lyrics_split.remove('')
labels_split.remove('')
print(lyrics_split[0], "\n",labels_split[0])
data = pd.DataFrame({"Lyrics": lyrics_split,
"Quadrant": labels_split})
data.head()
data["Quadrant"] = pd.to_numeric(data["Quadrant"])
# labels or quadrants already encoded
data["Quadrant"].value_counts()
```
### Train / Validation Split
```
X_train, X_val, y_train, y_val = train_test_split(data.index.values,
data.Quadrant.values,
test_size=0.15,
random_state=42,
stratify=data.Quadrant.values)
data["data_type"] = ["not_set"]*data.shape[0]
data.loc[X_train, "data_type"] = "train"
data.loc[X_val, "data_type"] = "val"
data.groupby(["Quadrant", "data_type"]).count()
```
### Tokenize and Encode the Data
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
do_lower_case=True)
encoded_data_train = tokenizer.batch_encode_plus(
data[data.data_type=="train"].Lyrics.values,
add_special_tokens=True,
return_attention_mask=True,
padding=True,
truncation=True,
max_length=256,
return_tensors="pt"
)
encoded_data_val = tokenizer.batch_encode_plus(
data[data.data_type=="val"].Lyrics.values,
add_special_tokens=True,
return_attention_mask=True,
padding=True,
truncation=True,
max_length=256,
return_tensors="pt"
)
input_id_train = encoded_data_train["input_ids"]
attention_masks_train = encoded_data_train["attention_mask"]
labels_train = torch.tensor(data[data.data_type == "train"].Quadrant.values)
input_id_val = encoded_data_val["input_ids"]
attention_masks_val = encoded_data_val["attention_mask"]
labels_val = torch.tensor(data[data.data_type == "val"].Quadrant.values)
# datasets
trainset = TensorDataset(input_id_train, attention_masks_train, labels_train)
validationset = TensorDataset(input_id_val, attention_masks_val, labels_val)
```
### BERT Pre-trained Model
```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=4,
output_attentions=False,
output_hidden_states=False)
# dataloaders
trainloader = DataLoader(trainset, sampler=RandomSampler(trainset), batch_size=3)
validationloader = DataLoader(validationset, sampler=SequentialSampler(validationset), batch_size=3)
# optimizer and scheduler
optimizer = AdamW(model.parameters(), lr=1e-5, eps=1e-8)
epochs = 5
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=0,
num_training_steps=len(trainloader)*epochs)
# performance metrics
def f1_score_func(preds, labels):
    pred_flat = np.argmax(preds, 1).flatten()
    labels_flat = labels.flatten()
    return f1_score(labels_flat, pred_flat, average="weighted")

def accuracy_per_class(preds, labels):
    quadrant_dict = {0: "Q1", 1: "Q2", 2: "Q3", 3: "Q4"}
    pred_flat = np.argmax(preds, 1).flatten()
    labels_flat = labels.flatten()
    for label in np.unique(labels_flat):
        y_preds = pred_flat[labels_flat == label]
        y_true = labels_flat[labels_flat == label]
        print(f"Quadrant: {quadrant_dict[label]}")
        print(f"Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n")
```
### Training
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import random
seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed(seed_val)
def evaluate(validation_loader):
    model.eval()
    total_loss = 0
    y_pred, y_true = [], []
    for batch in validation_loader:
        batch = tuple(b.to(device) for b in batch)
        inputs = {
            "input_ids": batch[0],
            "attention_mask": batch[1],
            "labels": batch[2]
        }
        with torch.no_grad():
            outputs = model(**inputs)
        loss = outputs[0]
        logits = outputs[1]
        total_loss += loss.item()
        logits = logits.detach().cpu().numpy()
        label_ids = inputs["labels"].cpu().numpy()
        y_pred.append(logits)
        y_true.append(label_ids)
    loss_avg = total_loss / len(validation_loader)
    predictions = np.concatenate(y_pred, axis=0)
    true_vals = np.concatenate(y_true, axis=0)
    return loss_avg, predictions, true_vals
for epoch in tqdm(range(1, epochs+1)):
    model.train()
    total_loss = 0
    progress_bar = tqdm(trainloader,
                        desc="Epoch {:1d}".format(epoch),
                        leave=False,
                        disable=False)
    for batch in progress_bar:
        batch = tuple(b.to(device) for b in batch)
        inputs = {
            "input_ids": batch[0],
            "attention_mask": batch[1],
            "labels": batch[2]
        }
        model.zero_grad()
        output = model(**inputs)
        loss = output[0]
        total_loss += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
        progress_bar.set_postfix({"training_loss": "{:.3f}".format(loss.item()/len(batch))})
    torch.save(model.state_dict(), f"finetuned_BERT_epoch_{epoch}.pt")
    tqdm.write(f"\nEpoch {epoch}")
    loss_avg = total_loss / len(trainloader)
    tqdm.write(f"Training Loss: {loss_avg}")
    val_loss, predictions, true_vals = evaluate(validationloader)
    val_f1_score = f1_score_func(predictions, true_vals)
    tqdm.write(f"Validation Loss: {val_loss}")
    tqdm.write(f"F1 Score (Weighted): {val_f1_score}")
```
### Loading and Evaluation
```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=4,
output_attentions=False,
output_hidden_states=False)
model.to(device)
model.load_state_dict(torch.load("finetuned_BERT_epoch_1.pt", map_location="cpu"))
model.eval()
_, predictions, true_vals = evaluate(validationloader)
accuracy_per_class(predictions, true_vals)
```
### Resources
https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613
```
# single input prediction
with open("../data/train/lyrics/lyrics.txt", "r") as f:
    lyrics = f.readline()
with open("../data/train/lyrics/labels.txt", "r") as f:
    labels = f.readline()
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
encoded_input = tokenizer(lyrics, return_tensors="pt")
output = model(**encoded_input)
output
_, pred = torch.max(output[0], dim=1)
pred = pred.item() + 1
print(f"Prediction: Q{pred} \nLabel: Q{int(labels)}")
```
```
cd ..
# %matplotlib notebook
# %matplotlib inline
#import mpld3
#mpld3.enable_notebook()
import StateModeling as stm
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from Corona.LoadData import loadData, preprocessData
from Corona.CoronaModel import CoronaModel, plotTotalCases
from bokeh.io import push_notebook, show, output_notebook
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
# import cufflinks as cf
output_notebook()
# AllMeasured = loadData(r"COVID-19 Linelist 2020_04_27.xlsx", useThuringia = True, pullData=False)
AllMeasured = loadData(useThuringia = False, pullData=False)
# AllMeasured = preprocessData(AllMeasured)
# AllMeasured = loadData(useThuringia = False, pullData=False)
ExampleRegions = ['SK Jena', 'LK Greiz'] # 'SK Gera',
# AllMeasured = preprocessData(AllMeasured, ReduceDistricts=ExampleRegions, SumDistricts=False, SumAges=True, SumGender=True)
AllMeasured = preprocessData(AllMeasured, ReduceDistricts=None, SumDistricts=True, SumAges=True, SumGender=True)
M = CoronaModel(AllMeasured, Tmax = 150)
M.DataDict={}
g = M.getGUI(showResults=M.showSimRes, doFit=M.doFit)
M.getDates(AllMeasured['Dates'],M.DataDict['Fit_deaths'].data['x']).shape
M.DataDict.keys()
M.DataDict['Fit_deaths'].data['y'].shape
#interact_manual(showSim,
# ymin=ymin,
# ymax=ymax)
d = widgets.FloatLogSlider(0.06,min=-10,max=2.0,continuous_update=False)
r = widgets.FloatLogSlider(0.01,min=-10,max=2.0,continuous_update=False)
uiS = widgets.HBox((d,r))
allSimWidgets = {'d':d}
print('Simulation Control:')
d.observe(assignParam, names='value')
# d.observe(showSimRes, names='value')
# outS = widgets.interactive_output(assignParam, allSimWidgets)
display(uiS, outS)
ymin = widgets.FloatLogSlider(0.001,min=-10,max=3,continuous_update=False)
ymax = widgets.FloatLogSlider(30.0,min=-10,max=3,continuous_update=False)
ui = widgets.HBox((ymin,ymax))
allWidgets = {'ymin': ymin, 'ymax': ymax}
out = widgets.interactive_output(showSimRes, allWidgets)
# out.layout.width = '700px';out.layout.height = '350px'
display(ui, out)
ymin = widgets.FloatLogSlider(0.0001,min=-10,max=6.0,continuous_update=False)
ymax = widgets.FloatLogSlider(0.01,min=-10,max=2.0,continuous_update=False)
ui = widgets.HBox((ymin,ymax))
allWidgets = {'ymin': ymin, 'ymax': ymax}
out2 = widgets.interactive_output(showSimStates, allWidgets)
#out2.layout.width = '700px';out2.layout.height = '350px'
#display(ui, out2)
import os
os.getcwd()
from ipywidgets import widgets, Layout
from IPython.display import display
item_layout = Layout(
    display='flex', flex_flow='row',
    justify_content='space-between'
)
box_layout = Layout(
    display='flex', flex_flow='column',
    border='solid 2px', align_items='stretch', width='50%')
tickLayout = Layout(display='flex', width='30%')
inFitWidget = widgets.Checkbox(value=True, indent=False, layout=tickLayout, description='Country')
drop = widgets.Dropdown(options=['a','b'], indent=False, value='a')
dropWidget = widgets.HBox((inFitWidget, drop), display='flex', layout = item_layout)
valueWidget = widgets.FloatLogSlider(value=1.0,base=10,min=-7,max=1)
boxWidget = widgets.HBox((widgets.Label('Hi'), valueWidget), layout=item_layout)
# valueWidget = widgets.HBox((inFitWidget,valueWidget))
widget = widgets.Box((dropWidget, boxWidget), layout=box_layout)
display(widgets.HBox((widget,widget,widget, widget)))
valueWidget.description
def showSimRes(ymin=0.0001, ymax=1.0):
    doFit()
    p = M.showResultsBokeh(title=AllMeasured['Region'], Scale=PopSum, ylabel='fraction',
                           xlim=xlim, dims=("District"), subPlot='cases',
                           legendPlacement='upper right', figsize=[10,5], Dates=AllMeasured['Dates'])
    p = M.showResultsBokeh(title=AllMeasured['Region'], Scale=PopSum, ylabel='fraction',
                           xlim=xlim, dims=("District"), subPlot='hospitalization',
                           legendPlacement='upper right', figsize=[10,5], Dates=AllMeasured['Dates'])
    p = M.showResultsBokeh(title=AllMeasured['Region'], Scale=PopSum, ylabel='fraction',
                           xlim=xlim, dims=("District"), subPlot='deaths',
                           legendPlacement='upper right', figsize=[10,5], Dates=AllMeasured['Dates'])
    return p
g['T0'].children[1].value
M.Var['T0']()
from ipywidgets import interact
import numpy as np
import pandas as pd
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource
x = np.linspace(0, 2*np.pi, 20)
y = np.sin(x)
Dates = M.getDates(AllMeasured['Dates'],y)
Dates = pd.to_datetime(Dates, dayfirst=True)
source = ColumnDataSource(data=dict(x=x, y=y))
p = figure(title="simple line example", plot_height=300, plot_width=600, y_range=(-5,5),
background_fill_color='#efefef', name='Blubb') # , x_axis_type='datetime'
# r = p.line(pd.to_datetime(Dates), y, color="#8888cc", line_width=1.5, alpha=0.8)
# r = p.vbar_stack([x], y=[y-0.5], color="#8888cc")
r = p.vbar('x', top='y',color="#cc8800", alpha=0.4, source=source, name='Hi there')
# r = p.vbar(x, top=y+1, width=0.15, color="#8888cc", alpha=0.6)
p.xaxis.axis_label = 'Hi'
q=show(p, notebook_handle=True)
# pd.date_range(start=Dates[0], periods=toPlot.shape[0]).map(lambda x: x.strftime('%d.%m.%Y'))
# pd.to_datetime(Dates)
Dates = pd.date_range(start='14.02.2020', periods=x.shape[0]).map(lambda x: x.strftime('%d.%m.%Y'))
type(p)
from bokeh.io.notebook import CommsHandle
isinstance(q,CommsHandle)
type(p)
```
# The Matrix Profile
## Laying the Foundation
At its core, the STUMPY library efficiently computes something called a <i><b>matrix profile</b>, a vector that stores the [z-normalized Euclidean distance](https://youtu.be/LnQneYvg84M?t=374) between any subsequence within a time series and its nearest neighbor</i>.
To fully understand what this means, let's take a step back and start with a simple illustrative example along with a few basic definitions:
## Time Series with Length n = 13
```
time_series = [0, 1, 3, 2, 9, 1, 14, 15, 1, 2, 2, 10, 7]
n = len(time_series)
```
To analyze this time series with length `n = 13`, we could visualize the data or calculate global summary statistics (e.g., mean, median, mode, min, max). If you had a much longer time series, then you might even feel compelled to build an ARIMA model, perform anomaly detection, or attempt a forecasting model, but these methods can be complicated and often produce false positives or uninterpretable results.

However, if we were to apply <i>Occam's Razor</i>, then what is the most <b>simple and intuitive</b> approach that we could take to analyze this time series?
To answer this question, let's start with our first definition:
## Subsequence /ˈsəbsəkwəns/ noun
### <i>a part or section of the full time series</i>
So, the following are all considered subsequences of our `time_series` since they can all be found in the time series above.



```
print(time_series[0:2])
print(time_series[4:7])
print(time_series[2:10])
```
We can see that each subsequence can have a different length, which we'll call `m`. So, for example, if we choose `m = 4`, then we can think about how we might compare any two subsequences of the same length.
```
m = 4
i = 0 # starting index for the first subsequence
j = 8 # starting index for the second subsequence
subseq_1 = time_series[i:i+m]
subseq_2 = time_series[j:j+m]
print(subseq_1, subseq_2)
```

One way to compare any two subsequences is to calculate what is called the Euclidean distance.
## Euclidean Distance /yo͞oˈklidēən/ /ˈdistəns/ noun
### <i>the straight-line distance between two points</i>

```
import math
D = 0
for k in range(m):
    D += (time_series[i+k] - time_series[j+k])**2
print(f"The square root of {D} = {math.sqrt(D)}")
```
## Distance Profile - Pairwise Euclidean Distances
Now, we can take this a step further where we keep one subsequence the same (reference subsequence), change the second subsequence in a sliding window manner, and compute the Euclidean distance for each window. The resulting vector of pairwise Euclidean distances is also known as a <i><b>distance profile</b></i>.

Of course, not all of these distances are useful. Specifically, the distance for the self match (or trivial match) isn't informative since the distance will always be zero when you are comparing a subsequence with itself. So, we'll ignore it and, instead, take note of the next smallest distance from the distance profile and choose that as our best match:

Next, we can shift our reference subsequence over one element at a time and repeat the same sliding window process to compute the distance profile for each new reference subsequence.

## Distance Matrix
If we take all of the distance profiles that were computed for each reference subsequence and stack them one on top of each other then we get something called a <i><b>distance matrix</b></i>

Now, we can simplify this distance matrix by only looking at the nearest neighbor for each subsequence and this takes us to our next concept:
## Matrix Profile /ˈmātriks/ /ˈprōˌfīl/ noun
### a vector that stores the [(z-normalized) Euclidean distance](https://youtu.be/LnQneYvg84M?t=374) between any subsequence within a time series and its nearest neighbor
Practically, what this means is that the matrix profile is only interested in storing the smallest non-trivial distances from each distance profile, which significantly reduces the spatial complexity to O(n):

We can now plot this matrix profile underneath our original time series. And, as it turns out, a reference subsequence with a small matrix profile value (i.e., its nearest neighbor is significantly "close by") may indicate a possible pattern while a reference subsequence with a large matrix profile value (i.e., its nearest neighbor is significantly "far away") may suggest the presence of an anomaly.

So, by simply computing and inspecting the matrix profile alone, one can easily pick out the top pattern (global minimum) and rarest anomaly (global maximum). And this is only a small glimpse into what is possible once you've computed the matrix profile!
## The Real Problem - The Brute Force Approach
Now, it might seem pretty straightforward at this point but what we need to do is consider how to compute the full distance matrix efficiently. Let's start with the brute force approach:
```
for i in range(n-m+1):
    for j in range(n-m+1):
        D = 0
        for k in range(m):
            D += (time_series[i+k] - time_series[j+k])**2
        D = math.sqrt(D)
```
At first glance, this may not look too bad but if we start considering both the computational complexity as well as the spatial complexity then we begin to understand the real problem. It turns out that, for longer time series (i.e., <i>n >> 10,000</i>) the computational complexity is <i>O(n<sup>2</sup>m)</i> (as evidenced by the three for loops in the code above) and the spatial complexity for storing the full distance matrix is <i>O(n<sup>2</sup>)</i>.
To put this into perspective, imagine if you had a single sensor that collected data 20 times/min over the course of 5 years. This would result in:
```
n = 20 * 60 * 24 * 365 * 5 # 20 times/min x 60 mins/hour x 24 hours/day x 365 days/year x 5 years
print(f"There would be n = {n} data points")
```
Assuming that each calculation in the inner loop takes 0.0000001 seconds then this would take:
```
time = 0.0000001 * (n * n - n)/2
print(f"It would take {time} seconds to compute")
```
<b>Which is equivalent to 1,598.7 days (or 4.4 years) and 11.1 PB of memory to compute!</b> So, it is clearly not feasible to compute the distance matrix using our naive brute force method. Instead, we need to figure out how to reduce this computational complexity by efficiently generating a matrix profile and this is where <i>STUMPY</i> comes into play.
## STUMPY
In the fall of 2016, researchers from the [University of California, Riverside](https://www.cs.ucr.edu/~eamonn) and the [University of New Mexico](https://www.cs.unm.edu/~mueen/) published a beautiful set of [back-to-back papers](https://www.cs.ucr.edu/~eamonn/MatrixProfile.html) that described an <u>exact method</u> called <i><b>STOMP</b></i> for computing the matrix profile for any time series with a computational complexity of O(n<sup>2</sup>)! They also further demonstrated this using GPUs and they called this faster approach <i><b>GPU-STOMP</b></i>.
With the academics, data scientists, and developers in mind, we have taken these concepts and have open sourced STUMPY, a powerful and scalable library that efficiently computes the matrix profile according to this published research. And, thanks to other open source software such as [Numba](http://numba.pydata.org/) and [Dask](https://dask.org/), our implementation is highly parallelized (for a single server with multiple CPUs or, alternatively, multiple GPUs) and highly distributed (with multiple CPUs across multiple servers). We've tested STUMPY on as many as 256 CPU cores (spread across 32 servers) or 16 NVIDIA GPU devices (on the same DGX-2 server) and have achieved similar [performance](https://github.com/TDAmeritrade/stumpy#performance) to the published GPU-STOMP work.
## Conclusion
According to the original authors, "these are the best ideas in time series data mining in the last two decades" and "given the matrix profile, [most time series data mining problems are trivial to solve in a few lines of code](https://www.cs.ucr.edu/~eamonn/100_Time_Series_Data_Mining_Questions__with_Answers.pdf)".
From our experience, this is definitely true and we are excited to share STUMPY with you! Please reach out and let us know how STUMPY has enabled your time series analysis work as we'd love to hear from you!
## Additional Notes
For the sake of completeness, we'll provide a few more comments for those of you who'd like to compare your own matrix profile implementation to STUMPY. However, due to the many details that are omitted in the original papers, we strongly encourage you to use [STUMPY](https://stumpy.readthedocs.io/en/latest/).
In our explanation above, we've only excluded the trivial match from consideration. However, this is insufficient since nearby subsequences (i.e., `i ± 1`) are likely highly similar and we need to expand this to a larger "exclusion zone" relative to the diagonal trivial match. Here, we can visualize what different exclusion zones look like:

However, in practice, it has been found that an exclusion zone of `i ± int(np.ceil(m / 4))` works well (where `m` is the subsequence window size) and the distances computed in this region are set to `np.inf` before the matrix profile value is extracted for the `ith` subsequence. Thus, the larger the window size is, the larger the exclusion zone will be. Additionally, note that, since NumPy indexing has an inclusive start index but an exclusive stop index, the proper way to ensure a symmetrical exclusion zone is:
```
excl_zone = int(np.ceil(m / 4))
zone_start = i - excl_zone
zone_end = i + excl_zone + 1 # Notice that we add one since this is exclusive
distance_profile[zone_start : zone_end] = np.inf
```
# Predict comparison
```
import vowpalwabbit
def my_predict(vw, ex):
    pp = 0.0
    for f, v in ex.iter_features():
        pp += vw.get_weight(f) * v
    return pp
def ensure_close(a, b, eps=1e-6):
    if abs(a - b) > eps:
        raise Exception(
            "test failed: expected "
            + str(a)
            + " and "
            + str(b)
            + " to be "
            + str(eps)
            + "-close, but they differ by "
            + str(abs(a - b))
        )
###############################################################################
vw = vowpalwabbit.Workspace("--quiet")
###############################################################################
vw.learn("1 |x a b")
###############################################################################
print("# do some stuff with a read example:")
ex = vw.example("1 |x a b |y c")
ex.learn()
ex.learn()
ex.learn()
ex.learn()
updated_pred = ex.get_updated_prediction()
print("current partial prediction =", updated_pred)
# compute our own prediction
print(
" my view of example =",
str([(f, v, vw.get_weight(f)) for f, v in ex.iter_features()]),
)
my_pred = my_predict(vw, ex)
print(" my partial prediction =", my_pred)
ensure_close(updated_pred, my_pred)
print("")
vw.finish_example(ex)
###############################################################################
print("# make our own example from scratch")
ex = vw.example()
ex.set_label_string("0")
ex.push_features("x", ["a", "b"])
ex.push_features("y", [("c", 1.0)])
ex.setup_example()
print(
" my view of example =",
str([(f, v, vw.get_weight(f)) for f, v in ex.iter_features()]),
)
my_pred2 = my_predict(vw, ex)
print(" my partial prediction =", my_pred2)
ensure_close(my_pred, my_pred2)
ex.learn()
ex.learn()
ex.learn()
ex.learn()
print(" final partial prediction =", ex.get_updated_prediction())
ensure_close(ex.get_updated_prediction(), my_predict(vw, ex))
print("")
vw.finish_example(ex)
###############################################################################
exList = []
for i in range(120):
    ex = vw.example()
    exList.append(ex)
# this is the safe way to delete the examples for VW to reuse:
for ex in exList:
    vw.finish_example(ex)
exList = []  # this should __del__ the examples, we hope :)
for i in range(120):
    ex = vw.example()
    exList.append(ex)
for ex in exList:
    vw.finish_example(ex)
###############################################################################
for i in range(2):
    ex = vw.example("1 foo| a b")
    ex.learn()
    print("tag =", ex.get_tag())
    print("partial pred =", ex.get_partial_prediction())
    print("loss =", ex.get_loss())
    print("label =", ex.get_label())
    vw.finish_example(ex)
# to be safe, finish explicitly (should happen by default anyway)
vw.finish()
###############################################################################
print("# test some save/load behavior")
vw = vowpalwabbit.Workspace("--quiet -f test.model")
ex = vw.example("1 |x a b |y c")
ex.learn()
ex.learn()
ex.learn()
ex.learn()
before_save = ex.get_updated_prediction()
print("before saving, prediction =", before_save)
vw.finish_example(ex)
vw.finish() # this should create the file
# now re-start vw by loading that model
vw = vowpalwabbit.Workspace("--quiet -i test.model")
ex = vw.example("1 |x a b |y c") # test example
ex.learn()
after_save = ex.get_partial_prediction()
print(" after saving, prediction =", after_save)
vw.finish_example(ex)
ensure_close(before_save, after_save)
vw.finish() # this should create the file
print("done!")
```
# Analysis of model results
To do:
* write labels to geotiffs to dir data/test/predict_process or so
* implement masks for selecting no_img pixels
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("/mnt/hd_internal/hh/projects_DS/road_detection/roaddetection/")
import numpy as np
from keras.models import load_model
from sklearn.externals import joblib
from sklearn.ensemble import RandomForestClassifier
from src.data import utils
from src.models.data import *
from src.models.network_models import *
from src.models.predict_model import *
from src.visualization.visualize import *
from src.data.utils import get_tile_prefix
from src.models.metrics_img import IoU_binary, precision, recall, f1_score
import matplotlib
import matplotlib.pyplot as plt
import skimage.io as io
from pathlib import Path
import os, shutil, platform
%matplotlib inline
data_dir = "../../data"
model_dir = "../../models/UNet"
report_dir = "../../reports"
#sys.path.append("/home/ubuntu/roaddetection/")
```
## User settings
```
# base directory with data (image tiles) to be analyzed
dir_eval = data_dir + "/test"
# subdirs
dir_x = 'sat'
dir_y = 'map'
# image size in pixels
target_size = (512, 512)
# resolution
pixels_per_meter = 1.0/3.125
# max. number of samples (files) to analyze (prediction takes a long time)
max_num_x = 100
# ------------- selection of samples to plot in detail -----------------------------
if True:
    # set *number* of samples (files) to analyze in detail and choose among
    # 'random' and 'head_tail'
    num_x_show = 20
    mode_sample_choice = "random"
else:
    # inverse: select specific samples (these MUST be within the set of files analyzed)
    file_list_selected = ["20170815_005030_0c0b_3B_0072.tif"]
    num_x_show = len(file_list_selected)
    mode_sample_choice = None
# ------------------- graphics options ---------------------------------------
# display of results of individual samples: either "full" or "compact"
samples_display_mode = "full"
# directory in which to save graphics files
dir_figures = report_dir + "/figures"
# output format of figures - set to None to skip (filenames will be given automatically
# according to underlying model and tile names)
format_figures = "png"
format_figures = None
# set resolution
plt.rcParams["figure.dpi"] = 400
# ----------------- selection of model to analyze -----------------------------
if True:
    # path to & filename of model to analyze
    trained_model_fn = model_dir + '/models_unet_borneo_and_harz_05_09_16_22.hdf5'
    #trained_model_fn = model_dir + '/models_segnet_06_12_24_00.hdf5'
    trained_model_fn = model_dir + '/unet_test.hdf5'
    # framework underlying model
    type_model = 'keras'
else:
    #trained_model_fn = model_dir + '/RandomForest_binary.pkl'
    trained_model_fn = model_dir + '/RandomForest_multiclass.pkl'
    # framework underlying model
    type_model = 'scikit'
# Keras models: list any custom loss or metric functions of the model here
custom_objects = {'IoU_binary': IoU_binary,
                  'precision': precision,
                  'recall': recall,
                  'f1_score': f1_score}
```
### Load model
```
if type_model == "keras":
    # The additional input arg "custom_objects" is needed if custom loss or metrics were used in the model
    model = load_model(trained_model_fn, custom_objects=custom_objects)
    # based on the output of the last layer, find out whether the model is binary or multiclass
    model_is_binary = model.get_layer(None, -1).output_shape[3] == 1
    num_classes = max(2, model.get_layer(None, -1).output_shape[3])
    # infer width, height and number of features (= bands in satellite images) from input layer
    input_layer = model.get_layer(None, 0).output_shape
    # size of images
    sz = input_layer[1:3]
    num_features = input_layer[3]
    assert sz == target_size, "nonmatching image tile sizes"
elif type_model == "scikit":
    model = joblib.load(trained_model_fn)
    model_is_binary = model.n_classes_ == 2
    num_classes = model.n_classes_
    num_features = model.n_features_
print("{0:d} features, {1:d} classes".format(num_features, num_classes))
```
### Some preparatory computations
```
# obtain list and number of available samples (files)
file_list_x, num_x = utils.get_list_samplefiles(os.path.join(dir_eval, dir_x))
# actual number of samples that will be analyzed, given samples available and user's choice
num_x_use = min(num_x, max_num_x)
# actual number of samples to be *plotted*, given number of samples to be analyzed
num_x_show = min(num_x_show, num_x_use)
# base of saved figure file names: model name
base_fig_name = trained_model_fn.split("/")[-1].split(".")[0]
```
### Loop over files, collecting data & predictions (takes a long time)
```
# **********************************************************************************************
# CLASS_DICT is central to everything that follows: it maps values in the label files to classes
# (no road, paved road, etc.) and also defines new classes (no_img) which are needed
# for evaluation metrics. If the values in this dict do not match the label values used during
# training, the code will not work or produce nonsense.
# **********************************************************************************************
CLASS_DICT = get_class_dict("all_legal")
# similarly, CLASS_PLOT_PROP defines colors for the different classes
CLASS_PLOT_PROP = get_class_plot_prop()
# number of pixels per image
img_size = np.prod(target_size)
# if it is a binary model, the score prediction matrix is 2D, otherwise it has as many layers
# (or slices, if you want) as there are classes
if model_is_binary:
    dim_yscore = 1
    class_dict = get_class_dict("binary")
    # the following lines are needed to extract the correct column out of the prediction score
    # from a Scikit-learn model
    tmp_dict = class_dict.copy()
    del tmp_dict["no_img"]
    yscore_ix = get_sorted_key_index("any_road", tmp_dict)
else:
    dim_yscore = num_classes
    class_dict = get_class_dict("multiclass")
# preallocate arrays collecting the label (y) values and y scores of
# all pixels of all tiles
arr_y = np.empty((img_size * num_x_use, 1), dtype=np.uint8)
arr_yscore = np.empty((img_size * num_x_use, dim_yscore), dtype=np.float32)
# array collecting the key metric for each sample (image tile) individually;
# useful for a sorted display of individual tiles
arr_metric = np.empty(num_x_use)
# loop over tiles (up to num_x_use)
for i, fn in enumerate(file_list_x[:num_x_use]):
    # read sat image tile
    x = io.imread(os.path.join(dir_eval, dir_x, fn))
    # read corresponding label tile
    y = io.imread(os.path.join(dir_eval, dir_y, fn))
    # refactor labels according to model
    y, mask = refactor_labels(x, y, class_dict=CLASS_DICT, model_is_binary=model_is_binary, meta=None)
    # scale x
    x = x / 255.0
    # copy reshaped labels into array
    y_reshaped = y.reshape((img_size, 1))
    arr_y[i*img_size:(i+1)*img_size, :] = y_reshaped
    # predict
    print("analyzing {0:s} ({1:0.0f} % non-image pixels)...".format(fn, 100*np.sum(mask)/img_size))
    if type_model == "keras":
        # in the case of a binary classification, yscore is a (target_size) array (no third dim);
        # in the case of multiclass classification, yscore is a (target_size) by (num_classes) array
        yscore = model.predict(x.reshape((1,) + target_size + (4,)))
        # reshape for storage and analysis
        yscore_reshaped = yscore.reshape((img_size, dim_yscore), order='C')
    elif type_model == "scikit":
        # yscore is always a (img_size) by (num_classes) array
        yscore = model.predict_proba(x.reshape((img_size, num_features), order='C'))
        if model_is_binary:
            # in contrast to keras' .predict, most of scikit-learn's predict_proba methods put out one column per class
            # even for binary classification, so pick only one column: in a binary classification, p(class 1) = 1 - p(class 2)
            yscore_reshaped = yscore[:, yscore_ix].reshape((img_size, dim_yscore))
        else:
            yscore_reshaped = yscore
    # copy reshaped prediction into array
    arr_yscore[i*img_size:(i+1)*img_size, :] = yscore_reshaped
    # compute and store metric used for sorting
    _, _, roc_auc_dict, _, _, pr_auc_dict, _, _, _ = multiclass_roc_pr(y_reshaped, yscore_reshaped, class_dict=class_dict)
    if len(pr_auc_dict) == 0:
        arr_metric[i] = None
    elif len(pr_auc_dict) == 1:
        # binary labels
        arr_metric[i] = pr_auc_dict[list(pr_auc_dict.keys())[0]]
    else:
        # pick score of union of roads
        arr_metric[i] = pr_auc_dict["any_road"]
```
### Compute and plot metrics on ensemble of data
```
(fpr_dict,
tpr_dict,
roc_auc_dict,
precision_dict,
recall_dict,
pr_auc_dict,
beven_ix_dict,
beven_thresh_dict,
reduced_class_dict) = multiclass_roc_pr(arr_y, arr_yscore, class_dict=class_dict)
# set up summary figure
fig_sum, axs = plt.subplots(2, 2, figsize=(10, 10))
plot_pr(recall_dict, precision_dict, pr_auc_dict, beven_ix_dict, beven_thresh_dict, axs[0, 0])
plot_roc(fpr_dict, tpr_dict, roc_auc_dict, axs[0, 1])
# save figure?
if format_figures is not None:
    plt.savefig(os.path.join(dir_figures, base_fig_name + '_summary' + '.' + format_figures), orientation='portrait')
plt.show()
# prepare index for showing samples
if mode_sample_choice is None:
    samples_ix = [ix for ix, fn in enumerate(file_list_x[:num_x_use]) if fn in file_list_selected]
    if not len(samples_ix):
        raise Exception("none of the tiles selected for individual plotting is among the tiles analyzed")
else:
    samples_ix = utils.gen_sample_index(num_x_use, num_x_show, mode_sample_choice=mode_sample_choice, metric=arr_metric)
```
### Show individual samples
```
for ix in samples_ix:
    fn = file_list_x[ix]
    # read sat image tile
    x = io.imread(os.path.join(dir_eval, dir_x, fn))
    # retrieve true labels
    y = arr_y[ix*img_size:(ix+1)*img_size].reshape(target_size + (1,), order='C')
    # retrieve y score (prediction)
    yscore = arr_yscore[ix*img_size:(ix+1)*img_size, :].reshape(target_size + (dim_yscore,), order='C')
    # generate predicted labels from yscore using threshold at breakeven point
    ypred = predict_labels(yscore, beven_thresh_dict, reduced_class_dict)
    # show summary plot
    fig_sample = show_sample_prediction(x, y, yscore, ypred, class_dict,
                                        scale=pixels_per_meter,
                                        title=fn,
                                        display_mode=samples_display_mode)
    # save figure?
    if format_figures is not None:
        plt.savefig(os.path.join(dir_figures, base_fig_name + '_' + fn + '.' + format_figures), orientation='portrait')
    plt.show()
    # save predicted labels (not yet fleshed out)
    if False and (not model_is_binary):
        # convert true labels to rgb
        y_rgb = grayscale_to_rgb(y, CLASS_PLOT_PROP, class_dict)
        # convert predicted labels to rgb
        ypred_rgb = grayscale_to_rgb(ypred, CLASS_PLOT_PROP, class_dict)
        # §§ save both true and predicted labels to rgb file
    else:
        # §§ save only predicted labels to file
        pass
# halt
sys.exit()
```
## Outdated stuff
which is not used currently but may come in handy later
```
# a quick test of multiclass_roc
multiclass_roc(np.r_[0, 40, 40, 0, 255, 255, 0, 255], np.empty(8))
# input arguments to Keras' ImageDataGenerator - be sure not to include any image augmentation here!
data_gen_args = dict(data_format="channels_last")
# batch size for summary stats without visualization (the more, the more efficient, but limited by memory)
batch_size = 3
# 'steps' input par into evaluate_generator
steps = num_x_use // batch_size
```
### Run evaluation: only numeric values
```
# set up test gen with a batch size as large as possible for efficiency reasons
test_gen = trainGenerator(batch_size, eval_dir, img_dir, label_dir,
data_gen_args, save_to_dir = None, image_color_mode="rgba", target_size=target_size)
res = model.evaluate_generator(test_gen, steps=steps, workers=1, use_multiprocessing=True, verbose=1)
model.metrics_names
res
```
### Run prediction for display of images and more sophisticated evaluation
```
pred = model.predict_generator(test_gen, steps=steps, workers=1, use_multiprocessing=True, verbose=1)
plt.imshow(pred[5].reshape(target_size), cmap='gray');
plt.colorbar()
```
### Set up ImageDataGenerator
```
# this generator is supposed to yield single images and matching labels, hence batch size = 1
#batch1_test_gen = trainGenerator(1, eval_dir, img_dir, label_dir,
# data_gen_args, save_to_dir = None, image_color_mode="rgba", target_size=target_size)
# preallocate linear arrays for collecting flattened prediction and label data
```
This Notebook creates the PNG surface figures for the sequential KL blueprint
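For orientation before the code: the underlying quantity throughout is the Kullback-Leibler divergence between discrete distributions, KL(p‖q) = Σᵢ pᵢ log₂(pᵢ/qᵢ). A minimal sketch of this textbook definition follows (the function name `kl` is ours; the notebook's `calc_kl` below instead uses the masked formulation from the Mars et al. blueprint paper to handle zero entries):

```python
import numpy as np

def kl(p, q):
    """Textbook KL divergence in bits, with the convention 0*log(0) = 0.

    Assumes q > 0 wherever p > 0; p and q are normalized to sum to 1.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
```

The divergence is zero for identical distributions and grows the more probability mass p places where q places little, which is why the minimum over vertices below serves as a cross-species dissimilarity measure.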
```
import numpy as np
import nibabel as nib
import scipy.io as sio
from scipy import stats
import pandas as pd
import h5py
import nilearn
import plotly
from nilearn import plotting
import seaborn as sn
from math import pi
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display, HTML
import mayavi
from mayavi import mlab
%matplotlib inline
### still getting our data; the blueprints come from Matlab, so we transpose the matrix
pig = h5py.File('./blueprints/L_pig27.mat', 'r')
for var in pig.items():
    pig_name = var[0]
    pig_data = var[1]
    print(pig_name)
pig_data = np.array(pig_data).transpose()
hum = h5py.File('./blueprints/L_hum27.mat', 'r')
for var in hum.items():
    hum_name = var[0]
    print(hum_name)
    hum_data = var[1]
hum_data = np.array(hum_data).transpose()
# ##### comment in to run on right hemisphere
# pig=h5py.File('./blueprints//R_pig27.mat','r')
# variables=pig.items()
# for var in variables:
# pig_name=var[0]
# pig_data=var[1]
# print(pig_name)
# print
# pig_data=np.array(pig_data).transpose()
# pig_data=pig_data
# hum=h5py.File('./blueprints//R_hum27.mat','r')
# variables=hum.items()
# for var in variables:
# hum_name=var[0]
# print(hum_name)
# hum_data=var[1]
# hum_data=np.array(hum_data).transpose()
#### remove extra tracts from human BP
with open('./blueprints/structureList.txt', 'r') as structs:
    structs = structs.read()
structs = structs.split('\n')
ALL_dict = {}
for i in range(len(structs) - 1):
    ALL_dict[structs[i]] = hum_data[:, i]
def remove_tracts(BP, quitar):
    BP_new = dict(BP)
    orig = BP_new.keys()
    for i in range(len(quitar)):
        val = quitar[i]
        if val in orig:
            del BP_new[val]
    return BP_new
## make pig and human BP's dictionaries
with open('./blueprints/structureList.txt', 'r') as structs:
    structs = structs.read()
structs = structs.split('\n')
pig_27 = {}
hum_27 = {}
for i in range(len(structs) - 1):
    pig_27[structs[i]] = pig_data[:, i]
    hum_27[structs[i]] = hum_data[:, i]
##### python implementation of Mars and Jbabdi 2018 connectivity blueprint paper
def entropy(A):
    ## function takes a blueprint as a dict or a (not normalized) numpy array
    if type(A) == dict:
        A = list(A.values())
        A = np.array(A).transpose()
    p = A.shape[0]
    def normalize(BP, p):
        if len(BP.shape) == 1:
            BP = BP.reshape(1, p)
        BP[np.isnan(BP)] = 0
        row_sums = BP.sum(axis=1)
        BP = BP / row_sums[:, np.newaxis]
        return BP
    A = normalize(A, p)
    Amask = A != 0
    A_invmask = Amask != 1
    ##### python implementation of Saad Jbabdi's matlab code for the entropy of a vector
    # S = -sum(A.*log2(A+~A), 2);
    S = -1 * (np.multiply(A, np.log2(A + A_invmask)).sum(axis=1))
    return S
##### define KL calculation
### Calculate the KL divergence as done in the Mars blueprint paper
def calc_kl(A, B):
    ## function takes two blueprints as dicts or (not normalized) numpy arrays
    if type(A) == dict:
        A = list(A.values())
        A = np.array(A).transpose()
    if type(B) == dict:
        B = list(B.values())
        B = np.array(B).transpose()
    p = A.shape[0]
    def normalize(BP, p):
        BP[np.isnan(BP)] = 0
        row_sums = BP.sum(axis=1)
        BP = BP / row_sums[:, np.newaxis]
        return BP
    A = normalize(A, p)
    B = normalize(B, p)
    Amask = A != 0
    A_invmask = Amask != 1
    Bmask = B != 0
    B_invmask = Bmask != 1
    ##### python implementation of Saad Jbabdi's matlab code for KL divergence
    KL = np.dot(np.multiply(A, np.log2(A + A_invmask)), Bmask.transpose()) \
         - np.dot(A, (Bmask * np.log2(B + B_invmask)).transpose()) \
         + np.dot(Amask, (B * np.log2(B + B_invmask)).transpose()) \
         - np.dot(Amask * np.log2(A + A_invmask), B.transpose())
    return KL
#### function defining the plotting of the K vectors over the surfaces
def oh_mayavi(surf, stat, cmap, vmi, vma, *args):
    ##### parse the gifti
    anat = nib.load(surf)
    coords = anat.darrays[0].data
    x = coords[:, 0]
    y = coords[:, 1]
    z = coords[:, 2]
    triangles = anat.darrays[1].data
    ##### if a subcortical mask is provided, use it
    if len(args) > 0:
        print('masking out subcortex')
        sub_cort = nilearn.surface.load_surf_data(args[0])
        stat[sub_cort] = float('NaN')
    ### start mayavi
    mlab.init_notebook('png', 1500, 1500)
    maya = mlab.triangular_mesh(x, y, z, triangles, scalars=stat, colormap=cmap, vmin=vmi, vmax=vma)
    mlab.view(azimuth=0, elevation=-90)
    f = mlab.gcf()
    cam = f.scene.camera
    cam.zoom(1.1)
    # mlab.colorbar()
    mlab.draw()
    img1 = mlab.screenshot(figure=maya, mode='rgba', antialiased=True)
    mlab.view(azimuth=0, elevation=90)
    mlab.figure(bgcolor=(1, 1, 1))
    ### clear figure
    mayavi.mlab.clf()
    f = mlab.gcf()
    cam = f.scene.camera
    cam.zoom(1.1)
    mlab.draw()
    img2 = mlab.screenshot(figure=maya, mode='rgba', antialiased=True)
    ### clear figure
    mayavi.mlab.clf()
    return img1, img2
##### pig = min on axis 1
#### hum = min on axis = 0
#### calculate KL and get min for only projection tracts
hum_proj=remove_tracts(hum_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fma', 'fmi', 'fx_l', 'fx_r', 'ifo_l', 'ifo_r', 'ilf_l', 'ilf_r', 'mcp', 'unc_l', 'unc_r'])
pig_proj=remove_tracts(pig_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fma', 'fmi', 'fx_l', 'fx_r', 'ifo_l', 'ifo_r', 'ilf_l', 'ilf_r', 'mcp', 'unc_l', 'unc_r'])
KL_proj=calc_kl(pig_proj,hum_proj)
PS_proj=entropy(pig_proj)
HS_proj=entropy(hum_proj)
p_proj=KL_proj.min(axis=1)
h_proj=KL_proj.min(axis=0)
plt.subplot(1,2,1)
sn.distplot(p_proj)
sn.distplot(h_proj)
plt.subplot(1,2,2)
sn.distplot(PS_proj)
sn.distplot(HS_proj)
hum_comm=remove_tracts(hum_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fx_l', 'fx_r', 'ifo_l', 'ifo_r', 'ilf_l', 'ilf_r', 'unc_l', 'unc_r'])
pig_comm=remove_tracts(pig_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fx_l', 'fx_r', 'ifo_l', 'ifo_r', 'ilf_l', 'ilf_r', 'unc_l', 'unc_r'])
KL_comm=calc_kl(pig_comm,hum_comm)
PS_comm=entropy(pig_comm)
HS_comm=entropy(hum_comm)
p_comm=KL_comm.min(axis=1)
h_comm=KL_comm.min(axis=0)
plt.subplot(1,2,1)
sn.distplot(p_comm)
sn.distplot(h_comm)
plt.subplot(1,2,2)
sn.distplot(PS_comm)
sn.distplot(HS_comm)
hum_assoc=remove_tracts(hum_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fx_l', 'fx_r'])
pig_assoc=remove_tracts(pig_27,[ 'cbd_l', 'cbd_r', 'cbp_l', 'cbp_r', 'cbt_l', 'cbt_r', 'fx_l', 'fx_r'])
KL_assoc=calc_kl(pig_assoc,hum_assoc)
PS_assoc=entropy(pig_assoc)
HS_assoc=entropy(hum_assoc)
p_assoc=KL_assoc.min(axis=1)
h_assoc=KL_assoc.min(axis=0)
plt.subplot(1,2,1)
sn.distplot(p_assoc)
sn.distplot(h_assoc)
plt.subplot(1,2,2)
sn.distplot(PS_assoc)
sn.distplot(HS_assoc)
#### calc KL including all
PS=entropy(pig_27)
HS=entropy(hum_27)
KL=calc_kl(pig_27,hum_27)
h_all=KL.min(axis=0)
p_all=KL.min(axis=1)
# hmax=KL.min(axis=0).max()
# pmax=KL.min(axis=1).max()
plt.subplot(1,2,1)
sn.distplot(p_all)
sn.distplot(h_all)
plt.subplot(1,2,2)
sn.distplot(PS)
sn.distplot(HS)
hum_assoc.keys()
hmax=np.nanmax(HS)
pmax=np.nanmax(PS)
# ### plot the lateral and axial views of the surfaces in mayavi
# #### plotting human
##### note: for now it is best to run each set of tracts one at a time, commenting them in and out.
h_proj1,h_proj2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/L.rhum.inflated.surf.gii',HS_proj,'terrain',0,hmax,'./surfaces/labels/L.hum.subcort.label')
h_comm1,h_comm2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/L.rhum.inflated.surf.gii',HS_comm,'terrain',0,hmax,'./surfaces/labels/L.hum.subcort.label')
h_assoc1,h_assoc2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/L.rhum.inflated.surf.gii',HS_assoc,'terrain',0,hmax,'./surfaces/labels/L.hum.subcort.label')
h_all1,h_all2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/L.rhum.inflated.surf.gii',HS,'terrain',0,hmax,'./surfaces/labels/L.hum.subcort.label')
# ###### plotting pig
p_proj1,p_proj2=oh_mayavi('./surfaces/pig_surfaces/lh.graymid10k.surf.gii',PS_proj,'terrain',0,hmax,'./surfaces/labels/L.pig.subcort.label')
p_comm1,p_comm2=oh_mayavi('./surfaces/pig_surfaces/lh.graymid10k.surf.gii',PS_comm,'terrain',0,hmax,'./surfaces/labels/L.pig.subcort.label')
p_assoc1,p_assoc2=oh_mayavi('./surfaces/pig_surfaces/lh.graymid10k.surf.gii',PS_assoc,'terrain',0,hmax,'./surfaces/labels/L.pig.subcort.label')
p_all1,p_all2=oh_mayavi('./surfaces/pig_surfaces/lh.graymid10k.surf.gii',PS,'terrain',0,hmax,'./surfaces/labels/L.pig.subcort.label')
# ## comment in to run on the right hemisphere
##### plot the lateral and axial views of the surfaces in mayavi
##### plotting human
# h_proj1,h_proj2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/R.rhum.inflated.surf.gii',h_proj,'terrain',0,hmax,'/surfaces/labels/R.hum.subcort.label')
# h_comm1,h_comm2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/R.rhum.inflated.surf.gii',h_comm,'terrain',0,hmax,'/surfaces/labels/R.hum.subcort.label')
# h_assoc1,h_assoc2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/R.rhum.inflated.surf.gii',h_assoc,'terrain',0,hmax,'/surfaces/labels/R.hum.subcort.label')
# # h_all1,h_all2=oh_mayavi('./surfaces/rmars-comparing-connectivity-blueprints-surfaces/R.rhum.inflated.surf.gii',h_all,'terrain',0,hmax,'/surfaces/labels/R.hum.subcort.label')
# ###### plotting pig
# p_proj1,p_proj2=oh_mayavi('./surfaces/pig_surfaces/rh.graymid10k.surf.gii',p_proj,'terrain',0,hmax,'./surfaces/labels/R.pig.subcort.label')
# p_comm1,p_comm2=oh_mayavi('./surfaces/pig_surfaces/rh.graymid10k.surf.gii',p_comm,'terrain',0,hmax,'./surfaces/labels/R.pig.subcort.label')
# p_assoc1,p_assoc2=oh_mayavi('./surfaces/pig_surfaces/rh.graymid10k.surf.gii',p_assoc,'terrain',0,hmax,'./surfaces/labels/R.pig.subcort.label')
# p_all1,p_all2=oh_mayavi('./surfaces/pig_surfaces/rh.graymid10k.surf.gii',p_all,'terrain',0,hmax,'./surfaces/labels/R.pig.subcort.label')
#### save KL images to pngs
def save_plots(a,b,name):
plt.subplot(1,2,1)
plt.imshow(b)
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(a)
plt.axis('off')
plt.subplots_adjust(hspace = 0.5)
plt.savefig(f'./L_entropy_pngs/{name}.png',bbox_inches='tight',dpi=800,facecolor='k')
plt.clf()
# #### comment in for right hemisphere
# def save_plots(a,b,name):
# plt.subplot(2,1,1)
# plt.imshow(a)
# plt.axis('off')
# plt.subplot(2,1,2)
# plt.imshow(b)
# plt.axis('off')
# plt.subplots_adjust(hspace = -0.2)
# plt.savefig(f'./L_KL-pngs/{name}.png',bbox_inches='tight',dpi=800)
# plt.clf()
# save_plots(p_all1,p_all2,'R_passoc')
##### run one by one for now
save_plots(p_proj2,p_proj1,'L_p_proj')
save_plots(p_comm2,p_comm1,'L_pcomm')
save_plots(p_assoc2,p_assoc1,'L_passoc')
save_plots(p_all2,p_all1,'L_pall')
save_plots(h_proj2,h_proj1,'L_h_proj')
save_plots(h_comm2,h_comm1,'L_hcomm')
save_plots(h_assoc2,h_assoc1,'L_hassoc')
save_plots(h_all2,h_all1,'L_hall')
###### comment in for right hemisphere (run one by one)
# save_plots(p_proj2,p_proj1,'R_p_proj')
# save_plots(p_comm2,p_comm1,'R_pcomm')
# save_plots(p_assoc2,p_assoc1,'R_passoc')
# save_plots(p_all2,p_all1,'R_pall')
# save_plots(h_proj2,h_proj1,'R_h_proj')
# save_plots(h_comm2,h_comm1,'R_hcomm')
# save_plots(h_assoc2,h_assoc1,'R_hassoc')
# save_plots(h_all2,h_all1,'R_hall')
hmax
fig, ax = plt.subplots(figsize=(1,20))
fig.subplots_adjust(bottom=0.5)
cmap = mpl.cm.terrain
norm = mpl.colors.Normalize(vmin=0, vmax=hmax)
cb1 = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
norm=norm,
orientation='vertical')
# cb1.outline.set_edgecolor('k')
ax.tick_params(axis='y', colors='white')
# ax.remove()
# cb1.set_label('Human KL Divergence')
plt.plot()
plt.savefig('././L_entropy_pngs/L_joint_colorbar.png',bbox_inches='tight',facecolor='k',edgecolor='w')
R_hmax=7.687934938285256
L_hmax=5.722649518067529
R_hmax=8.00090351247032
```
[source](../../api/alibi_detect.od.llr.rst)
# Likelihood Ratios for Outlier Detection
## Overview
The outlier detector described by [Ren et al. (2019)](https://arxiv.org/abs/1906.02845) in [Likelihood Ratios for Out-of-Distribution Detection](https://arxiv.org/abs/1906.02845) uses the likelihood ratio (LLR) between two generative models as the outlier score. One model is trained on the original data while the other is trained on a perturbed version of the dataset. This is based on the observation that the log likelihood for an instance under a generative model can be heavily affected by population level background statistics. The second generative model is therefore trained to capture the background statistics still present in the perturbed data while the semantic features have been erased by the perturbations.
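Concretely, following the notation of Ren et al., with $p_{\theta}$ the model fit on the original data and $p_{\theta_0}$ the background model fit on the perturbed data, the score (negated for thresholding) is the log-likelihood ratio:

$$\mathrm{LLR}(x) = \log p_{\theta}(x) - \log p_{\theta_0}(x)$$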
The perturbations are added using an independent and identical Bernoulli distribution with rate $\mu$ which substitutes a feature with one of the other possible feature values with equal probability. For images, this means for instance changing a pixel with a different pixel value randomly sampled within the $0$ to $255$ pixel range. The package also contains a [PixelCNN++](https://arxiv.org/abs/1701.05517) implementation adapted from the official TensorFlow Probability [version](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/PixelCNN), and available as a standalone model in `alibi_detect.models.pixelcnn`.
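As a rough sketch of this perturbation scheme (a simplified stand-in, not the detector's built-in `mutate_fn`; the function name and `seed` argument are illustrative), each feature is independently replaced with probability `rate` by a value sampled uniformly from the feature range:

```python
import numpy as np

def bernoulli_perturb(X, rate=0.2, feature_range=(0, 255), seed=0):
    """Replace each feature with probability `rate` by a uniform draw from the range.

    Simplification: the paper samples among the *other* feature values with equal
    probability; here a replacement may occasionally equal the original value.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < rate  # which features get mutated
    noise = rng.integers(feature_range[0], feature_range[1] + 1, size=X.shape)
    return np.where(mask, noise, X)
```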
## Usage
### Initialize
Parameters:
* `threshold`: outlier threshold value used for the negative likelihood ratio. Scores above the threshold are flagged as outliers.
* `model`: a generative model, either as a `tf.keras.Model`, TensorFlow Probability distribution or built-in PixelCNN++ model.
* `model_background`: optional separate model fit on the perturbed background data. If this is not specified, a copy of `model` will be used.
* `log_prob`: if the model does not have a `log_prob` function (as e.g. a TensorFlow Probability distribution does), a function needs to be passed that evaluates the log likelihood.
* `sequential`: flag whether the data is sequential or not. Used to create targets during training. Defaults to *False*.
* `data_type`: can specify data type added to metadata. E.g. *'tabular'* or *'image'*.
Initialized outlier detector example:
```python
from alibi_detect.od import LLR
from alibi_detect.models import PixelCNN
image_shape = (28, 28, 1)
model = PixelCNN(image_shape)
od = LLR(threshold=-100, model=model)
```
### Fit
We then need to train the two generative models in sequence. The following parameters can be specified:
* `X`: training batch as a numpy array of preferably normal data.
* `mutate_fn`: function used to create the perturbations. Defaults to an independent and identical Bernoulli distribution with rate $\mu$.
* `mutate_fn_kwargs`: kwargs for `mutate_fn`. For the default function, the mutation rate and feature range need to be specified, e.g. *dict(rate=.2, feature_range=(0,255))*.
* `loss_fn`: loss function used for the generative models.
* `loss_fn_kwargs`: kwargs for the loss function.
* `optimizer`: optimizer used for training. Defaults to [Adam](https://arxiv.org/abs/1412.6980) with learning rate 1e-3.
* `epochs`: number of training epochs.
* `batch_size`: batch size used during training.
* `log_metric`: additional metrics whose progress will be displayed if verbose equals True.
```python
od.fit(X_train, epochs=10, batch_size=32)
```
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
```python
od.infer_threshold(X, threshold_perc=95, batch_size=32)
```
### Detect
We detect outliers by simply calling `predict` on a batch of instances `X`. Detection can be customized via the following parameters:
* `outlier_type`: either *'instance'* or *'feature'*. If the outlier type equals *'instance'*, the outlier score at the instance level will be used to classify the instance as an outlier or not. If *'feature'* is selected, outlier detection happens at the feature level (e.g. by pixel in images).
* `batch_size`: batch size used for model prediction calls.
* `return_feature_score`: boolean whether to return the feature level outlier scores.
* `return_instance_score`: boolean whether to return the instance level outlier scores.
The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:
* `is_outlier`: boolean whether instances or features are above the threshold and therefore outliers. If `outlier_type` equals *'instance'*, then the array is of shape *(batch size,)*. If it equals *'feature'*, then the array is of shape *(batch size, instance shape)*.
* `feature_score`: contains feature level scores if `return_feature_score` equals True.
* `instance_score`: contains instance level scores if `return_instance_score` equals True.
```python
preds = od.predict(X, outlier_type='instance', batch_size=32)
```
## Examples
### Image
[Likelihood Ratio Outlier Detection with PixelCNN++](../../examples/od_llr_mnist.nblink)
### Sequential Data
[Likelihood Ratio Outlier Detection on Genomic Sequences](../../examples/od_llr_genome.nblink)
```
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/196-roBERTa_base/'
```
## Dependencies
```
import json, glob, warnings
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
```
# Load data
```
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
with open(MODEL_BASE_PATH + 'config.json') as json_file:
config = json.load(json_file)
config
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h08 = hidden_states[-5]
x = layers.Dropout(.1)(h08)
x_start = layers.Dense(1)(x)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dense(1)(x)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
```
# Make predictions
```
k_fold_best = k_fold.copy()
for n_fold in range(config['N_FOLDS']):
n_fold +=1
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
# Load model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
# Make predictions
model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold_best, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
```
# Model evaluation
```
#@title
display(evaluate_model_kfold(k_fold_best, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
#@title
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
```
## Post-processing evaluation
```
#@title
k_fold_best_post = k_fold_best.copy()
k_fold_best_post.loc[k_fold_best_post['sentiment'] == 'neutral', 'selected_text'] = k_fold_best_post["text"]
print('\nImpute neutral')
display(evaluate_model_kfold(k_fold_best_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_best_post = k_fold_best.copy()
k_fold_best_post.loc[k_fold_best_post['text_wordCnt'] <= 3, 'selected_text'] = k_fold_best_post["text"]
print('\nImpute <= 3')
display(evaluate_model_kfold(k_fold_best_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_best_post = k_fold_best.copy()
k_fold_best_post.loc[k_fold_best_post['sentiment'] == 'neutral', 'selected_text'] = k_fold_best_post["text"]
k_fold_best_post.loc[k_fold_best_post['text_wordCnt'] <= 3, 'selected_text'] = k_fold_best_post["text"]
print('\nImpute neutral and <= 3')
display(evaluate_model_kfold(k_fold_best_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_best_post = k_fold_best.copy()
k_fold_best_post['selected_text'] = k_fold_best_post['selected_text'].apply(lambda x: x.replace('!!!!', '!') if len(x.split())==1 else x)
k_fold_best_post['selected_text'] = k_fold_best_post['selected_text'].apply(lambda x: x.replace('..', '.') if len(x.split())==1 else x)
k_fold_best_post['selected_text'] = k_fold_best_post['selected_text'].apply(lambda x: x.replace('...', '.') if len(x.split())==1 else x)
print('\nImpute noise')
display(evaluate_model_kfold(k_fold_best_post, config['N_FOLDS']).head(1).style.applymap(color_map))
```
# Error analysis
## 10 worst predictions
```
#@title
k_fold_best['jaccard_mean'] = (k_fold_best['jaccard_fold_1'] + k_fold_best['jaccard_fold_2'] +
                               k_fold_best['jaccard_fold_3'] + k_fold_best['jaccard_fold_4'] +
                               k_fold_best['jaccard_fold_5']) / 5
display(k_fold_best[['text', 'selected_text', 'sentiment', 'jaccard', 'jaccard_mean',
'prediction_fold_1', 'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
```
# Sentiment
```
#@title
print('\n sentiment == neutral')
display(k_fold_best[k_fold_best['sentiment'] == 'neutral'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
print('\n sentiment == positive')
display(k_fold_best[k_fold_best['sentiment'] == 'positive'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
print('\n sentiment == negative')
display(k_fold_best[k_fold_best['sentiment'] == 'negative'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
```
# text_tokenCnt
```
#@title
display(k_fold_best[k_fold_best['text_tokenCnt'] <= 3][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
display(k_fold_best[k_fold_best['text_tokenCnt'] >= 50][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
```
# selected_text_tokenCnt
```
#@title
print('\n selected_text_tokenCnt <= 3')
display(k_fold_best[k_fold_best['selected_text_tokenCnt'] <= 3][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
print('\n selected_text_tokenCnt >= 50')
display(k_fold_best[k_fold_best['selected_text_tokenCnt'] >= 50][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
```
# Jaccard histogram
```
#@title
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
sns.distplot(k_fold_best['jaccard_mean'], ax=ax).set_title(f"Overall [{len(k_fold_best)}]")
sns.despine()
plt.show()
```
## By sentiment
```
#@title
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(20, 15), sharex=False)
sns.distplot(k_fold_best[k_fold_best['sentiment'] == 'neutral']['jaccard_mean'], ax=ax1).set_title(f"Neutral [{len(k_fold_best[k_fold_best['sentiment'] == 'neutral'])}]")
sns.distplot(k_fold_best[k_fold_best['sentiment'] == 'positive']['jaccard_mean'], ax=ax2).set_title(f"Positive [{len(k_fold_best[k_fold_best['sentiment'] == 'positive'])}]")
sns.distplot(k_fold_best[k_fold_best['sentiment'] == 'negative']['jaccard_mean'], ax=ax3).set_title(f"Negative [{len(k_fold_best[k_fold_best['sentiment'] == 'negative'])}]")
sns.despine()
plt.show()
```
## By text token count
```
#@title
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 10), sharex=False)
sns.distplot(k_fold_best[k_fold_best['text_tokenCnt'] <= 3]['jaccard_mean'], ax=ax1).set_title(f"text_tokenCnt <= 3 [{len(k_fold_best[k_fold_best['text_tokenCnt'] <= 3])}]")
sns.distplot(k_fold_best[k_fold_best['text_tokenCnt'] >= 50]['jaccard_mean'], ax=ax2).set_title(f"text_tokenCnt >= 50 [{len(k_fold_best[k_fold_best['text_tokenCnt'] >= 50])}]")
sns.despine()
plt.show()
```
## By selected_text token count
```
#@title
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 10), sharex=False)
sns.distplot(k_fold_best[k_fold_best['selected_text_tokenCnt'] <= 3]['jaccard_mean'], ax=ax1).set_title(f"selected_text_tokenCnt <= 3 [{len(k_fold_best[k_fold_best['selected_text_tokenCnt'] <= 3])}]")
sns.distplot(k_fold_best[k_fold_best['selected_text_tokenCnt'] >= 50]['jaccard_mean'], ax=ax2).set_title(f"selected_text_tokenCnt >= 50 [{len(k_fold_best[k_fold_best['selected_text_tokenCnt'] >= 50])}]")
sns.despine()
plt.show()
```
## Requesting Data from Application Programming Interfaces (API's)
This notebook demonstrates the fundamentals of interacting with a web-hosted API for the sake of data retrieval. Much of this functionality is made available through the **requests** library which should have already been installed on your machine as part of the **Anaconda** python distribution. Documentation for the **requests** library is here:
https://docs.python-requests.org/en/latest/user/quickstart/.
### 1.0. Prerequisites
If you find that the **requests** library isn't already installed on your machine then simply run the following command in a new **Terminal** window in your Jupyter environment... just as you have in previous labs.
- python -m pip install requests
#### 1.1. Import the libraries that you'll be working with in the notebook
```
import os
import json
import pprint
import requests
import requests.exceptions
import pandas as pd
```
### 2.0. Issue a Request to an API Endpoint
The following function issues a **request** to a REST API endpoint via the HTTP request/response mechanism. It demonstrates returning the *JSON payload* of the **response** object as one of two **response_types**; either as a **string** or as a **Pandas DataFrame**.
#### 2.1. Exception Handling:
In order to cope with the stateless nature of HTTP communications, the **get_api_response()** function implements extensive **exception handling**. When attempting to connect to an HTTP endpoint, the following response **status_codes** may be returned:
- **200:** Everything went okay, and the result has been returned (if any).
- **301:** The server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
- **400:** The server thinks you made a bad request. This can happen when you don’t send along the right data, among other things.
- **401:** The server thinks you’re not authenticated. Many APIs require login credentials, so this happens when you submit the wrong credentials.
- **403:** The resource you’re trying to access is forbidden: you don’t have the right permissions to see it.
- **404:** The resource you tried to access wasn’t found on the server.
- **503:** The server is not ready to handle the request.
```
def get_api_response(url, response_type):
try:
response = requests.get(url)
response.raise_for_status()
except requests.exceptions.HTTPError as errh:
return "An Http Error occurred: " + repr(errh)
except requests.exceptions.ConnectionError as errc:
return "An Error Connecting to the API occurred: " + repr(errc)
except requests.exceptions.Timeout as errt:
return "A Timeout Error occurred: " + repr(errt)
except requests.exceptions.RequestException as err:
return "An Unknown Error occurred: " + repr(err)
if response_type == 'json':
result = json.dumps(response.json(), sort_keys=True, indent=4)
elif response_type == 'dataframe':
result = pd.json_normalize(response.json())
else:
result = "An unhandled error has occurred!"
return result
```
#### 2.2. Unit test to ensure proper exception handling functionality
```
bad_url = "https://api.open-notify.org/this-api-doesnt-exist"
valid_url = "http://universities.hipolabs.com/search?name=middle"
response_type = ['json', 'dataframe']
json_string = get_api_response(bad_url, response_type[0])
print(json_string)
df = get_api_response(bad_url, response_type[1])
print(df)
```
#### 2.3. Unit test to ensure proper data retrieval functionality
Here we can see that when specifying **response_type[0]** we get back a **string in JSON format**, and when specifying **response_type[1]** we get back a **Pandas DataFrame**. On closer inspection we can observe that the JSON payload is in the form of a **list** of **dictionaries**, each of which includes nested **lists** for the **domains** and **web_pages** fields in addition to the other fields that are formatted in simple **"key" : "value"** format. This presents a problem we will have to handle in order to have a correctly formed **DataFrame** because, as we learned when designing **OLTP** databases, having multiple values in a single column violates the **First Normal Form**.
```
json_string = get_api_response(valid_url, response_type[0])
print(json_string)
df = get_api_response(valid_url, response_type[1])
print(df.shape)
print(df.columns)
df.info()
df
```
#### 2.4. Perform Desired Transformations
In any ETL process, there will be some form of data **transformation**. Here we will explore transforming JSON data.
As identified above, the first issue we must handle is the nested **lists** that may contain multiple **domains** and **web_pages**. To do so we will explore the advanced capabilities of the Pandas **json_normalize()** function, but first we will create a simplified function that retrieves a JSON object from an API.
```
def get_api_data(url):
try:
response = requests.get(url)
response.raise_for_status()
except requests.exceptions.HTTPError as errh:
return "An Http Error occurred: " + repr(errh)
except requests.exceptions.ConnectionError as errc:
return "An Error Connecting to the API occurred: " + repr(errc)
except requests.exceptions.Timeout as errt:
return "A Timeout Error occurred: " + repr(errt)
except requests.exceptions.RequestException as err:
return "An Unknown Error occurred: " + repr(err)
return response.json()
json_data = get_api_data(valid_url)
print(json_data)
```
Next, we can **flatten** (aka, Normalize) the fields containing the nested lists (**domains** and **web_pages**) using the **record_path** parameter of the **pandas.json_normalize** function.
```
pd.json_normalize(json_data, record_path=['domains'])
```
We can confirm that the *domains* field has been flattened since we now have 10 observations where before we had only 9. However, we also want to include other fields; which we accomplish with the **meta** parameter. Note that we've also omitted the **state-province** field since it doesn't appear to contain any useful data. What's more, since it's possible for some **keys** to be missing in a JSON document, we can suppress any errors using the **errors='ignore'** parameter.
```
df = pd.json_normalize(json_data,
record_path=['domains'],
meta=['country', 'name', 'alpha_two_code'],
errors='ignore')
df
```
Next, we can normalize the **web_pages** list to ensure a unique row for each of its unique values as we add it to the DataFrame.
```
df['web_pages'] = pd.json_normalize(json_data, record_path=['web_pages'])
df
```
Finally, we create a dictionary to **map** old column names to new ones using the **rename()** function of the **pandas.DataFrame**. We also demonstrate how columns can be reordered by simply passing a **list** of column names in the desired order.
```
column_name_map = {0 : "Domain",
"country" : "Country",
"name" : "Institution_Name",
"alpha_two_code" : "Country_Code",
"web_pages" : "Web_Address"
}
df.rename(columns=column_name_map, inplace=True)
df = df[['Institution_Name','Country','Country_Code','Domain','Web_Address']]
df
```
With the data having been **extracted** from an API, and any desired **transformations** having been accomplished, we can now **load** the data into any desired destination; e.g., SQL database, NoSQL database, or data lake (file system).
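As a minimal sketch of that load step (assuming SQLite as the destination; the table name `universities` and the single-row DataFrame are just illustrations), the finished DataFrame can be written out with `DataFrame.to_sql`, which accepts a plain `sqlite3` connection:

```python
import sqlite3
import pandas as pd

# A small stand-in for the transformed DataFrame from above.
df = pd.DataFrame({
    "Institution_Name": ["Middlesex University"],
    "Country": ["United Kingdom"],
    "Country_Code": ["GB"],
    "Domain": ["mdx.ac.uk"],
    "Web_Address": ["http://www.mdx.ac.uk/"],
})

conn = sqlite3.connect(":memory:")  # swap in a file path for a persistent database
df.to_sql("universities", conn, if_exists="replace", index=False)
row_count = conn.execute("SELECT COUNT(*) FROM universities").fetchone()[0]
```

The `if_exists="replace"` argument makes the cell re-runnable; use `"append"` instead when accumulating batches.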
### 3.0. API Endpoint Authentication & Parameters
```
def get_api_response(url, headers, params):
try:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
except requests.exceptions.HTTPError as errh:
return "An Http Error occurred: " + repr(errh)
except requests.exceptions.ConnectionError as errc:
return "An Error Connecting to the API occurred: " + repr(errc)
except requests.exceptions.Timeout as errt:
return "A Timeout Error occurred: " + repr(errt)
except requests.exceptions.InvalidHeader as erri:
return "A Header Error occurred: " + repr(erri)
except requests.exceptions.RequestException as err:
return "An Unknown Error occurred: " + repr(err)
return response.json()
GITHUB_TOKEN="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"
os.environ["GITHUB_TOKEN"] = GITHUB_TOKEN
token = os.getenv('GITHUB_TOKEN', '...')
print(token)
owner = "JTupitza-UVA"
repo = "DS-3002-01"
query_url = f"https://api.github.com/repos/{owner}/{repo}/issues"
params = {
"state": "open",
}
headers = {'Authorization': f'token {token}'}
json_data = get_api_response(query_url, headers, params)
pprint.pprint(json_data)
```
# Value iteration
This assignment is taken from awesome [__CS294__](http://rll.berkeley.edu/deeprlcourse/) as is. All credit goes to them.
## Introduction
This assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.
We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.
The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from `gym` and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions.
```
from frozen_lake import FrozenLakeEnv
env = FrozenLakeEnv()
print(env.__doc__)
```
Let's look at what a random episode looks like.
```
# Some basic imports and setup
import numpy as np, numpy.random as nr, gym
np.set_printoptions(precision=3)
def begin_grading(): print("\x1b[43m")
def end_grading(): print("\x1b[0m")
# Seed RNGs so you get the same printouts as me
env.seed(0); from gym.spaces import prng; prng.seed(10)
# Generate the episode
env.reset()
for t in range(100):
env.render()
a = env.action_space.sample()
ob, rew, done, _ = env.step(a)
if done:
break
assert done
env.render();
```
In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.
We extract the relevant information from the gym Env into the MDP class below.
The `env` object won't be used any further, we'll just use the `mdp` object.
```
class MDP(object):
def __init__(self, P, nS, nA, desc=None):
self.P = P # state transition and reward probabilities, explained below
self.nS = nS # number of states
self.nA = nA # number of actions
self.desc = desc # 2D array specifying what each grid cell means (used for plotting)
mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc)
print("mdp.P is a two-level dict where the first key is the state and the second key is the action.")
print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in")
print(np.arange(16).reshape(4,4))
print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n")
print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n")
print("As another example, state 5 corresponds to a hole in the ice, which transitions to itself with probability 1 and reward 0.")
print("P[5][0] =", mdp.P[5][0], '\n')
```
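As a quick sanity check on this indexing convention, here are two hypothetical helper functions (not part of the assignment) that convert between (row, col) grid coordinates and the flat state index:

```python
# Hypothetical helpers (not part of the assignment) illustrating the
# row-major indexing convention of the 4x4 FrozenLake grid.
NCOL = 4

def to_state(row, col):
    """Map (row, col) grid coordinates to a flat state index."""
    return row * NCOL + col

def to_coords(state):
    """Map a flat state index back to (row, col) coordinates."""
    return divmod(state, NCOL)

print(to_state(1, 1))  # 5, a hole in the default map
print(to_coords(15))   # (3, 3), the goal cell
```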
## Part 1: Value Iteration
### Problem 1: implement value iteration
In this problem, you'll implement value iteration, which has the following pseudocode:
---
Initialize $V^{(0)}(s)=0$, for all $s$
For $i=0, 1, 2, \dots$
- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$
---
We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where
$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$
Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$
To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the "# chg actions" printout below--it won't affect the values computed.
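To make the update above concrete, here is a minimal sketch of a single Bellman backup on a toy 2-state, 2-action MDP stored in the same `P[s][a] = [(prob, nextstate, reward), ...]` format. The transition numbers are made up for illustration and are not the FrozenLake MDP:

```python
import numpy as np

gamma = 0.95
# Toy MDP in the same format as mdp.P: P[s][a] = [(prob, nextstate, reward), ...]
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.5, 0, 0.0), (0.5, 1, 1.0)]},
    1: {0: [(1.0, 1, 0.0)], 1: [(1.0, 1, 0.0)]},
}
nS, nA = 2, 2
Vprev = np.zeros(nS)  # V^{(i)}

# One Bellman backup: Q[s, a] = sum_s' P(s,a,s') * (R(s,a,s') + gamma * Vprev[s'])
Q = np.zeros((nS, nA))
for s in range(nS):
    for a in range(nA):
        for prob, ns, rew in P[s][a]:
            Q[s, a] += prob * (rew + gamma * Vprev[ns])
V = Q.max(axis=1)      # V^{(i+1)}: here [0.5, 0.0]
pi = Q.argmax(axis=1)  # greedy policy: here [1, 0]
```

Note how `np.argmax` picks action 0 for state 1, where both actions tie at zero value, matching the tie-breaking rule described above.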
<div class="alert alert-warning">
Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place.
Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than me.
</div>
```
def value_iteration(mdp, gamma, nIt):
    """
    Inputs:
        mdp: MDP
        gamma: discount factor
        nIt: number of iterations, corresponding to n above
    Outputs:
        (value_functions, policies)
        len(value_functions) == nIt+1 and len(policies) == nIt
    """
    print("Iteration | max|V-Vprev| | # chg actions | V[0]")
    print("----------+--------------+---------------+---------")
    Vs = [np.zeros(mdp.nS)]  # list of value functions; contains the initial value function V^{(0)}, which is zero
    pis = []
    for it in range(nIt):
        oldpi = pis[-1] if len(pis) > 0 else None  # \pi^{(it)} = Greedy[V^{(it-1)}]. Just used for printout
        Vprev = Vs[-1]  # V^{(it)}
        # YOUR CODE HERE
        # Your code should define the following two variables
        # pi: greedy policy for Vprev,
        #     corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}]
        #     numpy array of ints
        # V: bellman backup on Vprev
        #     corresponding to the math above: V^{(it+1)} = T[V^{(it)}]
        #     numpy array of floats
        max_diff = np.abs(V - Vprev).max()
        nChgActions = "N/A" if oldpi is None else (pi != oldpi).sum()
        print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0]))
        Vs.append(V)
        pis.append(pi)
    return Vs, pis

GAMMA = 0.95  # we'll be using this same value in subsequent problems
begin_grading()
Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20)
end_grading()
```
Below, we've illustrated the progress of value iteration. Your optimal actions are shown by arrows.
At the bottom, the value of the different states are plotted.
```
import matplotlib.pyplot as plt
%matplotlib inline
for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]):
    plt.figure(figsize=(3,3))
    plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1))
    ax = plt.gca()
    ax.set_xticks(np.arange(4)-.5)
    ax.set_yticks(np.arange(4)-.5)
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    Y, X = np.mgrid[0:4, 0:4]
    a2uv = {0: (-1, 0), 1: (0, -1), 2: (1, 0), 3: (0, 1)}  # LEFT, DOWN, RIGHT, UP
    Pi = pi.reshape(4,4)
    for y in range(4):
        for x in range(4):
            a = Pi[y, x]
            u, v = a2uv[a]
            plt.arrow(x, y, u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1)
            plt.text(x, y, str(env.desc[y,x].item().decode()),
                     color='g', size=12, verticalalignment='center',
                     horizontalalignment='center', fontweight='bold')
    plt.grid(color='b', lw=2, ls='-')
plt.figure()
plt.plot(Vs_VI)
plt.title("Values of different states");
```
### Problem 2: construct an MDP where value iteration takes a long time to converge
When we ran value iteration on the frozen lake problem, the last iteration where an action changed was iteration 6--i.e., value iteration computed the optimal policy at iteration 6.
Are there any guarantees regarding how many iterations it'll take value iteration to compute the optimal policy?
There are no such guarantees without additional assumptions--we can construct the MDP in such a way that the greedy policy will change after arbitrarily many iterations.
Your task: define an MDP with at most 3 states and 2 actions, such that when you run value iteration, the optimal action changes at iteration >= 50. Use discount=0.95. (However, note that the discount doesn't matter here--you can construct an appropriate MDP with any discount.)
```
chg_iter = 50
# YOUR CODE HERE
# Your code will need to define an MDP (mymdp)
# like the frozen lake MDP defined above
begin_grading()
Vs, pis = value_iteration(mymdp, gamma=GAMMA, nIt=chg_iter+1)
end_grading()
```
## Problem 3: Policy Iteration
The next task is to implement exact policy iteration (PI), which has the following pseudocode:
---
Initialize $\pi_0$
For $n=0, 1, 2, \dots$
- Compute the state-value function $V^{\pi_{n}}$
- Using $V^{\pi_{n}}$, compute the state-action-value function $Q^{\pi_{n}}$
- Compute new policy $\pi_{n+1}(s) = \operatorname*{argmax}_a Q^{\pi_{n}}(s,a)$
---
Below, you'll implement the first and second steps of the loop.
### Problem 3a: state value function
You'll write a function called `compute_vpi` that computes the state-value function $V^{\pi}$ for an arbitrary policy $\pi$.
Recall that $V^{\pi}$ satisfies the following linear equation:
$$V^{\pi}(s) = \sum_{s'} P(s,\pi(s),s')[ R(s,\pi(s),s') + \gamma V^{\pi}(s')]$$
You'll have to solve a linear system in your code. (Find an exact solution, e.g., with `np.linalg.solve`.)
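For a fixed policy, this equation is linear in the $|S|$ unknowns $V^{\pi}(s)$: in matrix form, $(I - \gamma P^{\pi}) V^{\pi} = R^{\pi}$. Here is a minimal sketch of setting up and solving that system on a toy 2-state MDP in the same `P[s][a]` format (illustrative numbers, not the FrozenLake MDP):

```python
import numpy as np

gamma = 0.95
# Toy MDP, same P[s][a] = [(prob, nextstate, reward), ...] format as mdp.P
P = {
    0: {0: [(1.0, 1, 1.0)]},  # state 0 moves to state 1 with reward 1
    1: {0: [(1.0, 1, 0.0)]},  # state 1 is absorbing with reward 0
}
pi = [0, 0]  # policy: action chosen in each state
nS = 2

# Build P^pi (nS x nS) and the expected one-step reward R^pi (nS,)
Ppi = np.zeros((nS, nS))
Rpi = np.zeros(nS)
for s in range(nS):
    for prob, ns, rew in P[s][pi[s]]:
        Ppi[s, ns] += prob
        Rpi[s] += prob * rew

# Solve (I - gamma * P^pi) V = R^pi exactly
V = np.linalg.solve(np.eye(nS) - gamma * Ppi, Rpi)
# V[1] = 0 (absorbing, zero reward), so V[0] = 1 + gamma * V[1] = 1.0
```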
```
def compute_vpi(pi, mdp, gamma):
    # YOUR CODE HERE
    return V
```
Now let's compute the value of an arbitrarily-chosen policy.
```
begin_grading()
print(compute_vpi(np.ones(16), mdp, gamma=GAMMA))
end_grading()
```
As a sanity check, if we run `compute_vpi` on the solution from our previous value iteration run, we should get approximately (but not exactly) the same values produced by value iteration.
```
Vpi=compute_vpi(pis_VI[15], mdp, gamma=GAMMA)
V_vi = Vs_VI[15]
print("From compute_vpi", Vpi)
print("From value iteration", V_vi)
print("Difference", Vpi - V_vi)
```
### Problem 3b: state-action value function
Next, you'll write a function to compute the state-action value function $Q^{\pi}$, defined as follows
$$Q^{\pi}(s, a) = \sum_{s'} P(s,a,s')[ R(s,a,s') + \gamma V^{\pi}(s')]$$
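Given $V^{\pi}$, the expression above is a direct weighted sum over the transition tuples. A minimal sketch on a toy MDP in the same `P[s][a]` format (illustrative numbers):

```python
import numpy as np

gamma = 0.95
# Toy MDP, same P[s][a] = [(prob, nextstate, reward), ...] format as mdp.P
P = {
    0: {0: [(1.0, 1, 1.0)], 1: [(1.0, 0, 0.0)]},
    1: {0: [(1.0, 1, 0.0)], 1: [(1.0, 1, 0.0)]},
}
vpi = np.array([1.0, 0.0])  # some state-value function
nS, nA = 2, 2

# Q^pi[s, a] = sum_s' P(s,a,s') * (R(s,a,s') + gamma * vpi[s'])
Qpi = np.zeros((nS, nA))
for s in range(nS):
    for a in range(nA):
        for prob, ns, rew in P[s][a]:
            Qpi[s, a] += prob * (rew + gamma * vpi[ns])
# Qpi[0, 0] = 1 + 0.95 * 0 = 1.0; Qpi[0, 1] = 0 + 0.95 * 1 = 0.95
```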
```
def compute_qpi(vpi, mdp, gamma):
    # YOUR CODE HERE
    return Qpi

begin_grading()
Qpi = compute_qpi(np.arange(mdp.nS), mdp, gamma=0.95)
print("Qpi:\n", Qpi)
end_grading()
```
Now we're ready to run policy iteration!
```
def policy_iteration(mdp, gamma, nIt):
    Vs = []
    pis = []
    pi_prev = np.zeros(mdp.nS, dtype='int')
    pis.append(pi_prev)
    print("Iteration | # chg actions | V[0]")
    print("----------+---------------+---------")
    for it in range(nIt):
        vpi = compute_vpi(pi_prev, mdp, gamma)
        qpi = compute_qpi(vpi, mdp, gamma)
        pi = qpi.argmax(axis=1)
        print("%4i | %6i | %6.5f"%(it, (pi != pi_prev).sum(), vpi[0]))
        Vs.append(vpi)
        pis.append(pi)
        pi_prev = pi
    return Vs, pis

Vs_PI, pis_PI = policy_iteration(mdp, gamma=0.95, nIt=20)
plt.plot(Vs_PI);
```
Now we can compare the convergence of value iteration and policy iteration on several states.
For fun, you can try adding modified policy iteration.
```
for s in range(5):
    plt.figure()
    plt.plot(np.array(Vs_VI)[:, s])
    plt.plot(np.array(Vs_PI)[:, s])
    plt.ylabel("value of state %i" % s)
    plt.xlabel("iteration")
    plt.legend(["value iteration", "policy iteration"], loc='best')
```
# MOSDBDiscrete
In this module, we will take a brief look at the `MOSDBDiscrete` class, which manages a transistor characterization database and provides methods for designers to query transistor small-signal parameters.
## MOSDBDiscrete example
To use the transistor characterization database, evaluate the following cell, which defines two helper functions, `query()` and `plot_data()`.
```
%matplotlib inline
import os
import pprint
import numpy as np
import matplotlib.pyplot as plt
# noinspection PyUnresolvedReferences
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib import ticker
from verification.mos.query import MOSDBDiscrete
interp_method = 'spline'
spec_file = os.path.join(os.environ['BAG_WORK_DIR'], 'demo_data', 'mos_char_nch', 'specs.yaml')
env_default = 'tt'
intent = 'standard'
def query(vgs=None, vds=None, vbs=0.0, vstar=None, env_list=None):
    """Get interpolation function and plot/query."""
    spec_list = [spec_file]
    if env_list is None:
        env_list = [env_default]
    # initialize transistor database from simulation data
    nch_db = MOSDBDiscrete(spec_list, interp_method=interp_method)
    # set process corners
    nch_db.env_list = env_list
    # set layout parameters
    nch_db.set_dsn_params(intent=intent)
    # returns a dictionary of small-signal parameters
    return nch_db.query(vbs=vbs, vds=vds, vgs=vgs, vstar=vstar)
def plot_data(name='ibias', bounds=None, unit_val=None, unit_label=None):
    """Get interpolation function and plot/query."""
    env_list = [env_default]
    vbs = 0.0
    nvds = 41
    nvgs = 81
    spec_list = [spec_file]
    print('create transistor database')
    nch_db = MOSDBDiscrete(spec_list, interp_method=interp_method)
    nch_db.env_list = env_list
    nch_db.set_dsn_params(intent=intent)
    f = nch_db.get_function(name)
    vds_min, vds_max = f.get_input_range(1)
    vgs_min, vgs_max = f.get_input_range(2)
    if bounds is not None:
        if 'vgs' in bounds:
            v0, v1 = bounds['vgs']
            if v0 is not None:
                vgs_min = max(vgs_min, v0)
            if v1 is not None:
                vgs_max = min(vgs_max, v1)
        if 'vds' in bounds:
            v0, v1 = bounds['vds']
            if v0 is not None:
                vds_min = max(vds_min, v0)
            if v1 is not None:
                vds_max = min(vds_max, v1)
    # query values.
    vds_test = (vds_min + vds_max) / 2
    vgs_test = (vgs_min + vgs_max) / 2
    pprint.pprint(nch_db.query(vbs=vbs, vds=vds_test, vgs=vgs_test))
    vbs_vec = [vbs]
    vds_vec = np.linspace(vds_min, vds_max, nvds, endpoint=True)
    vgs_vec = np.linspace(vgs_min, vgs_max, nvgs, endpoint=True)
    vbs_mat, vds_mat, vgs_mat = np.meshgrid(vbs_vec, vds_vec, vgs_vec, indexing='ij', copy=False)
    arg = np.stack((vbs_mat, vds_mat, vgs_mat), axis=-1)
    ans = f(arg)
    vds_mat = vds_mat.reshape((nvds, nvgs))
    vgs_mat = vgs_mat.reshape((nvds, nvgs))
    ans = ans.reshape((nvds, nvgs, len(env_list)))
    formatter = ticker.ScalarFormatter(useMathText=True)
    formatter.set_scientific(True)
    formatter.set_powerlimits((-2, 3))
    if unit_label is not None:
        zlabel = '%s (%s)' % (name, unit_label)
    else:
        zlabel = name
    for idx, env in enumerate(env_list):
        fig = plt.figure(idx + 1)
        ax = fig.add_subplot(111, projection='3d')
        cur_val = ans[..., idx]
        if unit_val is not None:
            cur_val = cur_val / unit_val
        ax.plot_surface(vds_mat, vgs_mat, cur_val, rstride=1, cstride=1, linewidth=0, cmap=cm.cubehelix)
        ax.set_title('%s (corner=%s)' % (name, env))
        ax.set_xlabel('Vds (V)')
        ax.set_ylabel('Vgs (V)')
        ax.set_zlabel(zlabel)
        ax.w_zaxis.set_major_formatter(formatter)
    plt.show()
```
## Querying Small-Signal Parameters
To look up transistor small-signal parameters at a given bias point, use the `query()` method by evaluating the following cell. Feel free to play around with the numbers.
```
query(vgs=0.4, vds=0.5, vbs=0.0)
```
## Plotting Small-Signal Parameters
`MOSDBDiscrete` stores each small-signal parameter as a continuous function interpolated from simulation data. This makes it easy to manipulate those functions directly (for example, with an optimization solver). As a simple example, the `plot_data()` method plots the functions versus $V_{gs}$ and $V_{ds}$. Evaluate the following cell to see plots of various small-signal parameters.
```
%matplotlib inline
plot_data(name='ibias')
```
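As an aside, the grid-evaluation pattern that `plot_data()` uses internally (building the argument array with `np.meshgrid` and `np.stack`, then evaluating the function once on the whole grid) can be seen in isolation on a plain function of two variables. This sketch does not depend on `MOSDBDiscrete`:

```python
import numpy as np

def f(arg):
    # toy vectorized "interpolated function": the last axis holds (x, y)
    x, y = arg[..., 0], arg[..., 1]
    return x + 2 * y

x_vec = np.linspace(0.0, 1.0, 3)
y_vec = np.linspace(0.0, 1.0, 5)
x_mat, y_mat = np.meshgrid(x_vec, y_vec, indexing='ij')
arg = np.stack((x_mat, y_mat), axis=-1)  # shape (3, 5, 2)
ans = f(arg)                             # shape (3, 5): f evaluated on the whole grid at once
```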
# Introduction to Object-Oriented Programming
> NSI Terminale course - Topic 1.
- toc: true
- badges: true
- comments: false
- categories: [python, NSI, Terminale, Structure_donnees, POO, TP]
- image: images/nsi1.png
## Introduction
Objects and OOP are central to how Python works. You are not required to use OOP in your programs, but understanding the concept is essential to move beyond beginner level, not least because you will need to use the classes and objects provided by the standard library.
Indeed, while working with lists in Python, you have surely noticed that there are two syntaxes for calling functions:
```
tableau = [1, 3, 5, 8]
taille = len(tableau)
tableau.append(11)
```
- computing the length of the list is done by calling the `len()` function, with a syntax identical to the functions you are used to writing.
- adding an element to the list is a bit different: the `append` function seems to belong to the list itself. In that case we speak not of a function but of a **method** attached to the list **object**.
An object is a data structure that bundles variables (called **properties**) and functions (called **methods**). We will see the value of this approach, which is ubiquitous in Python, particularly when developing graphical interfaces, but first a few historical landmarks and some context.
### A brief history
Programming as a discipline is relatively recent. Surprisingly, object-oriented programming goes back as far as the 1960s: *Simula* is considered the first object-oriented programming language.
In the 1970s the principles of object-oriented programming developed and took shape, notably through the *Smalltalk* language.
From the 1980s onward came a wave of object-oriented languages: *Objective-C* (early 1980s, used on the Mac and iOS platforms) and *C++* ("C with classes", 1983) are the most famous.
The 1990s were the golden age of object-oriented programming across software development, thanks in particular to the emergence of operating systems built on a graphical interface (MacOS, Linux, Windows), which rely heavily on OOP principles.
### Procedural programming
Procedural programming is what you have used so far: it consists of dividing your program into reusable blocks called functions.
You try as much as possible to keep your code in modular blocks, deciding logically which block gets called. This makes it easier to visualize what your program does and to maintain your code: you can see what a given portion of code is doing, and improving a function that is reused in several places can improve performance throughout your program.
You have variables, which hold your data, and functions. You pass your variables to your functions, which act on them and perhaps modify them. The interaction between variables and functions is not always simple to manage: local and global variables, and the side effects of functions that modify global variables, are often the source of bugs that are hard to track down.
Here we reach the limits of procedural programming, once the number of functions and variables becomes large.
## Creating a class
Let's look at a first simple example based on the notion of a *stack*, seen in a previous sequence.
A stack behaves differently from a list. We used a list to simulate the behavior of a stack, but in doing so we may be tempted to use list features that are not available on a true stack, such as directly accessing an arbitrary element with pile[0].
To remedy this we will create a *Pile* (stack) object that behaves exactly as we want. An object is defined in a **class**, which will let us define the **properties** and **methods** we want to build into our *Pile* object.

Our *Pile* class defines the model of the object we want to create. This model will have
- 2 properties (variables built into the object)
    - longueur: the length of the stack
    - sommet: the value at the top of the stack
- 2 methods (functions acting on this object)
    - empile(v): pushes the value `v` onto the top of the stack
    - depile(): pops a value off the stack and returns it.
With these characteristics we have defined the prototype of our stack.
Let's see in practice how this works.
### Defining the class
```
# Define a class - empty for now
class Pile():
    pass
```
For now, we have created a **Pile** class.
You can think of the class as the blueprint for making the object. It is not a real object, just a way of describing how it is built and how it behaves.
A class on its own is useless. It is a bit like seeing the newest model of your dream smartphone on an online store: you see what it looks like, its specifications, its price, its features... but you do not own it!
So you give in and place an order. A few days later, you own the real **object**: you hold it in your hands and use it. You have created what is called an **instance** of the class.
```
# Create an instance of the Pile object
ma_pile = Pile()
ma_pile.longueur = 1
ma_pile.sommet = 2
autre_pile = Pile()
```
You have convinced a few classmates, who ordered the same smartphone model. They will also own their own instance. These smartphones will work the same way as yours, since they are built from the same plans, but they will not hold the same data: your photos and apps belong to your instance of the phone and will not appear on your friends' phones.
```
ma_pile.longueur
autre_pile.longueur
```
We created two instances of the *Pile* class: `ma_pile` and `autre_pile`.
`ma_pile` now has two properties: *longueur* and *sommet*. These properties do not exist on `autre_pile`, because they were initialized outside the class. That is not a good thing: we want all our stacks to be built from the same model, and therefore to initialize the properties inside the class.
Here is how to proceed:
```
class Pile():
    """Basic stack implementation"""
    def __init__(self):
        """Initialize the instance"""
        # Initialize the properties
        self.longueur = 0
        self.sommet = None
```
To initialize the properties, we created a special method inside the class: the ***__init__()*** method. This method runs automatically when each instance is created; its name is imposed by Python and cannot be changed. Mind the two underscores on each side.
The keyword **self** refers to an instance of the class: imagine replacing it with `ma_pile` or `autre_pile`. Since those instances do not yet exist when the class is being written, the word **self** was introduced. It is important not to forget **self**: otherwise *longueur* and *sommet* would be local variables of the **__init__()** function, which is not at all what we want here!
Let's now recreate some *Pile* instances and start entering data:
```
ma_pile = Pile()
autre_pile = Pile()
ma_pile.longueur = 1
ma_pile.sommet = 2
print(ma_pile.longueur)
print(autre_pile.longueur)
```
Everything works as expected: our two instances have the same properties, but each holds its own values.
It is time to define our methods. Let's start with **empile**.
Defining a method is similar to defining a function, with two differences:
- methods are defined **inside a class**
- the first parameter of a method is **always** ***self***
To push values onto the stack I need a structure that stores the stack's data. I will therefore create a *hidden* property, **__reste**, which will hold all the stack's values other than the top. The two leading underscores are a naming convention meaning the property or method is not meant to be called from outside the class definition, hence the term *hidden*.
```
class Pile():
    """Basic stack implementation"""
    def __init__(self):
        """Initialize the instance"""
        # Initialize the public properties
        self.longueur = 0
        self.sommet = None  # None means the stack is empty
        # Initialize the rest of the stack
        self.__reste = []

    def empile(self, valeur):
        """push the given value onto the stack"""
        if self.longueur > 0:
            # The current top moves into the rest of the stack
            self.__reste.append(self.sommet)
        # the new top is the value being pushed
        self.sommet = valeur
        # The stack length increases by 1
        self.longueur += 1

ma_pile = Pile()
ma_pile.empile(3)
ma_pile.empile(5)
print(ma_pile.sommet)
print(ma_pile.longueur)
```
When reading the definition of the **empile** method, pay attention to the following points:
- the first parameter is **self**; *valeur* comes second
- when the **empile** method is invoked, you do not pass **self**, only *valeur*
- every time a property is accessed, it is prefixed with **self.**
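As an aside, Python does not actually hide a double-underscore property: the leading `__` triggers *name mangling*, which merely renames the attribute. A small illustration, independent of the Pile class:

```python
class Box:
    def __init__(self):
        self.__secret = 42  # actually stored as _Box__secret via name mangling

b = Box()
# b.__secret would raise AttributeError here, outside the class body
print(b._Box__secret)  # 42: "hidden" is a convention, not enforced privacy
```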
## Your turn
Our stack implementation is not finished. You must now implement the **depile()** method. It takes no parameters (apart, of course, from **self**, which you will not forget!) and returns the value that was popped off the stack.
Be careful
- to update the longueur property
- not to raise an error if the stack is empty. In that case, return **None**.
```
class Pile():
    """Basic stack implementation"""
    def __init__(self):
        """Initialize the instance"""
        # Initialize the public properties
        self.longueur = 0
        self.sommet = None  # None means the stack is empty
        # Initialize the rest of the stack
        self.__reste = []

    def empile(self, valeur):
        """push the given value onto the stack"""
        if self.longueur > 0:
            # The current top moves into the rest of the stack
            self.__reste.append(self.sommet)
        # the new top is the value being pushed
        self.sommet = valeur
        # The stack length increases by 1
        self.longueur += 1

    # YOUR CODE HERE
    raise NotImplementedError()

# Test your class in this cell
ma_pile = Pile()
ma_pile.empile(3)

# Check that the Pile class works
ma_pile = Pile()
ma_pile.empile(3)
ma_pile.empile(5)
assert ma_pile.sommet == 5
assert ma_pile.longueur == 2
assert ma_pile.depile() == 5
assert ma_pile.longueur == 1
assert ma_pile.depile() == 3
assert ma_pile.longueur == 0
```
##### Copyright 2018 The TF-Agents Authors.
### Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/tf_agents/colabs/4_drivers_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/tf_agents/colabs/4_drivers_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
```
# Note: If you haven't installed tf-agents or gym yet, run:
!pip install tf-nightly
!pip install tf-agents-nightly
!pip install gym
```
### Imports
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.policies import random_py_policy
from tf_agents.policies import random_tf_policy
from tf_agents.metrics import py_metrics
from tf_agents.metrics import tf_metrics
from tf_agents.drivers import py_driver
from tf_agents.drivers import dynamic_episode_driver
tf.compat.v1.enable_v2_behavior()
```
# Introduction
A common pattern in reinforcement learning is to execute a policy in an environment for a specified number of steps or episodes. This happens, for example, during data collection, evaluation and generating a video of the agent.
While this is relatively straightforward to write in Python, it is much more complex to write and debug in TensorFlow because it involves `tf.while` loops, `tf.cond` and `tf.control_dependencies`. Therefore we abstract this notion of a run loop into a class called a `driver`, and provide well-tested implementations in both Python and TensorFlow.
Additionally, the data encountered by the driver at each step is saved in a named tuple called Trajectory and broadcast to a set of observers such as replay buffers and metrics. This data includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, etc.
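The observer mechanism itself is just "call every function in a list with each new item". Sketched in plain Python, with no TF-Agents types (a dict stands in for a `Trajectory`):

```python
# Plain-Python sketch of the observer pattern the drivers use (no TF-Agents types).
trajectories = []            # a "replay buffer": simply collects every item
total = {'reward': 0.0}      # mutable holder for a running statistic

def reward_metric(traj):
    # a "metric": accumulates a statistic from each trajectory
    total['reward'] += traj['reward']

observers = [trajectories.append, reward_metric]

for step in range(3):
    traj = {'step': step, 'reward': 1.0}  # stand-in for a Trajectory namedtuple
    for observer in observers:
        observer(traj)

# After the loop: len(trajectories) == 3 and total['reward'] == 3.0
```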
# Python Drivers
The `PyDriver` class takes a python environment, a python policy and a list of observers to update at each step. The main method is `run()`, which steps the environment using actions from the policy until at least one of the following termination criteria is met: The number of steps reaches `max_steps` or the number of episodes reaches `max_episodes`.
The implementation is roughly as follows:
```python
class PyDriver(object):

    def __init__(self, env, policy, observers, max_steps=1, max_episodes=1):
        self._env = env
        self._policy = policy
        self._observers = observers or []
        self._max_steps = max_steps or np.inf
        self._max_episodes = max_episodes or np.inf

    def run(self, time_step, policy_state=()):
        num_steps = 0
        num_episodes = 0
        while num_steps < self._max_steps and num_episodes < self._max_episodes:
            # Compute an action using the policy for the given time_step
            action_step = self._policy.action(time_step, policy_state)
            # Apply the action to the environment and get the next step
            next_time_step = self._env.step(action_step.action)
            # Package information into a trajectory
            traj = trajectory.Trajectory(
                time_step.step_type,
                time_step.observation,
                action_step.action,
                action_step.info,
                next_time_step.step_type,
                next_time_step.reward,
                next_time_step.discount)
            for observer in self._observers:
                observer(traj)
            # Update statistics to check termination
            num_episodes += np.sum(traj.is_last())
            num_steps += np.sum(~traj.is_boundary())
            time_step = next_time_step
            policy_state = action_step.state
        return time_step, policy_state
```
Now let us walk through an example: running a random policy on the CartPole environment, saving the results to a replay buffer, and computing some metrics.
```
env = suite_gym.load('CartPole-v0')
policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(),
action_spec=env.action_spec())
replay_buffer = []
metric = py_metrics.AverageReturnMetric()
observers = [replay_buffer.append, metric]
driver = py_driver.PyDriver(
env, policy, observers, max_steps=20, max_episodes=1)
initial_time_step = env.reset()
final_time_step, _ = driver.run(initial_time_step)
print('Replay Buffer:')
for traj in replay_buffer:
    print(traj)
print('Average Return: ', metric.result())
```
# TensorFlow Drivers
We also have drivers in TensorFlow which are functionally similar to Python drivers, but use TF environments, TF policies, TF observers, etc. We currently have two TensorFlow drivers: `DynamicStepDriver`, which terminates after a given number of (valid) environment steps, and `DynamicEpisodeDriver`, which terminates after a given number of episodes. Let us look at an example of the `DynamicEpisodeDriver` in action.
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
tf_policy = random_tf_policy.RandomTFPolicy(action_spec=tf_env.action_spec(),
time_step_spec=tf_env.time_step_spec())
num_episodes = tf_metrics.NumberOfEpisodes()
env_steps = tf_metrics.EnvironmentSteps()
observers = [num_episodes, env_steps]
driver = dynamic_episode_driver.DynamicEpisodeDriver(
tf_env, tf_policy, observers, num_episodes=2)
# Initial driver.run will reset the environment and initialize the policy.
final_time_step, policy_state = driver.run()
print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
# Continue running from previous state
final_time_step, _ = driver.run(final_time_step, policy_state)
print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
font = {'family': 'normal',
'weight': 'bold',
'size': 15}
matplotlib.rc('font', **font)
plt.rcParams["figure.figsize"] = [15, 7]
from os.path import expanduser
SRC_PATH = expanduser("~") + '/SageMaker/mastering-ml-on-aws/chapter5/'
from pyspark.context import SparkContext
sc = SparkContext('local', 'test')
from pyspark.sql import SQLContext
spark = SQLContext(sc)
df = spark.read.csv(SRC_PATH + 'data.csv', header=True, inferSchema=True)
df.toPandas().head()
df = df.selectExpr("*", "Quantity * UnitPrice as TotalBought")
df.limit(5).toPandas()
customer_df = df.select("CustomerID", "TotalBought").groupBy("CustomerID").sum("TotalBought").withColumnRenamed(
'sum(TotalBought)', 'SumTotalBought')
customer_df.show(5)
from pyspark.sql.functions import *
joined_df = df.join(customer_df, 'CustomerId')
joined_df.show(5)
from pyspark.ml import Pipeline
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import Normalizer
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import QuantileDiscretizer
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import VectorAssembler
stages = [StringIndexer(inputCol='StockCode', outputCol="stock_code_index", handleInvalid='keep'),
OneHotEncoder(inputCol='stock_code_index', outputCol='stock_code_encoded'),
StringIndexer(inputCol='Country', outputCol='country_index', handleInvalid='keep'),
OneHotEncoder(inputCol='country_index', outputCol='country_encoded'),
QuantileDiscretizer(numBuckets=3, inputCol='SumTotalBought', outputCol='total_bought_index'),
VectorAssembler(inputCols=['stock_code_encoded', 'country_encoded', 'total_bought_index'],
outputCol='features_raw'),
Normalizer(inputCol="features_raw", outputCol="features", p=1.0),
KMeans(featuresCol='features').setK(3).setSeed(42)]
pipeline = Pipeline(stages=stages)
model = pipeline.fit(joined_df)
df_with_clusters = model.transform(joined_df).cache()
df_with_clusters.limit(5).toPandas()
df_with_clusters.groupBy("prediction").count().toPandas().plot(kind='pie',x='prediction', y='count')
df_with_clusters.where(df_with_clusters.prediction==0).groupBy("Country").count().orderBy("count", ascending=False).show()
df_with_clusters.where(df_with_clusters.prediction==1).groupBy("Country").count().orderBy("count", ascending=False).show()
df_with_clusters.where(df_with_clusters.prediction==2).groupBy("Country").count().orderBy("count", ascending=False).show()
pandas_df = df_with_clusters.limit(5000).select('CustomerID','InvoiceNo','StockCode','Description','Quantity','InvoiceDate','UnitPrice','Country','TotalBought','SumTotalBought','prediction').toPandas()
import matplotlib
pandas_df.groupby('prediction').describe()['SumTotalBought']['mean'].plot(kind='bar', title = 'Mean total amount bought per cluster')
pandas_df.groupby('prediction').describe()
from pyspark.ml.evaluation import ClusteringEvaluator
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(df_with_clusters)
silhouette
import itertools
import re
from functools import reduce
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
def plot_word_cloud(description_column):
    list_of_word_sets = description_column.apply(str.split).tolist()
    text = list(itertools.chain(*list_of_word_sets))
    text = map(lambda x: re.sub(r'[^A-Z]', r'', x), text)
    text = reduce(lambda x, y: x + ' ' + y, text)
    wordcloud = WordCloud(
        width=3000,
        height=2000,
        background_color='white',
        stopwords=STOPWORDS,
        collocations=False).generate(str(text))
    fig = plt.figure(
        figsize=(10, 5),
        facecolor='k',
        edgecolor='k')
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis('off')
    plt.tight_layout(pad=0)
    plt.show()
plot_word_cloud(pandas_df[pandas_df.prediction==0].Description)
plot_word_cloud(pandas_df[pandas_df.prediction==1].Description)
plot_word_cloud(pandas_df[pandas_df.prediction==2].Description)
```
| github_jupyter |
# MMTL Basics Tutorial
The purpose of this tutorial is to introduce the basic classes and flow of the MMTL package within Snorkel MeTaL (not necessarily to motivate or explain multi-task learning at large; we assume prior experience with MTL). For a broader understanding of the general Snorkel pipeline and Snorkel MeTaL library, see the `Basics` tutorial. In this notebook, we'll look at a simple MTL model with only two tasks, each having distinct data and only one set of labels (the ground truth or "gold" labels).
The primary purpose of the MMTL package is to enable flexible prototyping and experimentation in what we call the _massive multi-task learning_ setting, where we have large numbers of tasks and labels of varying types, granularities, and label accuracies. A major requirement of this regime is the ability to easily add or remove new datasets, new label sets, new tasks, and new metrics. Thus, in the MMTL package, each of these concepts have been decoupled.
## Environment Setup
We first need to make sure that the `metal/` directory is on our Python path. If the following cell runs without an error, you're all set. If not, make sure that you've installed `snorkel-metal` with pip or that you've added the repo to your path if you're running from source; for example, running `source add_to_path.sh` from the repository root.
```
# Confirm we can import from metal
import sys
sys.path.append('../../metal')
import metal
# Import other dependencies
import torch
import torch.nn as nn
import torch.nn.functional as F
# Set random seed for notebook
SEED = 123
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
## Create Toy Dataset
We'll now create a toy dataset to work with.
Our data points are 2D points in the square with edge length 2 centered on the origin.
Our tasks will be classifying whether these points are:
1. Inside a unit circle centered on the origin
2. Inside a unit square centered on the origin
We'll visualize these decision boundaries in a few cells.
```
import torch
torch.manual_seed(SEED)
N = 500 # Data points per dataset
R = 1 # Unit distance
# Dataset 0
X0 = torch.rand(N, 2) * 2 - 1
Y0 = (X0[:,0]**2 + X0[:,1]**2 < R).long()
# Dataset 1
X1 = torch.rand(N, 2) * 2 - 1
Y1 = ((-0.5 < X1[:,0]) * (X1[:,0] < 0.5) * (-0.5 < X1[:,1]) * (X1[:,1] < 0.5)).long()
```
Note that, as is the case throughout the Snorkel MeTaL repo, the label 0 is reserved for abstaining/no label; all actual labels have values greater than 0. This provides flexibility for supervision sources to label only portions of a dataset, for example. Thus, we'll convert our labels from being (1 = positive, 0 = negative) to (0 = abstain, 1 = positive, 2 = negative).
```
from metal.utils import convert_labels
Y0 = convert_labels(Y0, "onezero", "categorical")
Y1 = convert_labels(Y1, "onezero", "categorical")
```
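Under the hood this conversion is just a relabeling: positives stay 1 and zeros become 2, freeing 0 for abstentions. A minimal pure-Python sketch of the mapping (the helper name here is hypothetical, not MeTaL's implementation, which operates on tensors):

```python
def onezero_to_categorical(labels):
    """Map 1 -> 1 (positive) and 0 -> 2 (negative); 0 is reserved for abstain."""
    return [1 if y == 1 else 2 for y in labels]

print(onezero_to_categorical([1, 0, 0, 1]))  # [1, 2, 2, 1]
```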
We use our utility function `split_data()` to divide this synthetic data into train/valid/test splits.
```
from metal.utils import split_data
X0_splits, Y0_splits = split_data(X0, Y0, splits=[0.8, 0.1, 0.1], seed=SEED)
X1_splits, Y1_splits = split_data(X1, Y1, splits=[0.8, 0.1, 0.1], seed=SEED)
```
We can view the ground truth labels of our tasks visually to confirm our intuition about what the decision boundaries look like.
```
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 2)
axs[0].scatter(X0_splits[0][:,0], X0_splits[0][:,1], c=Y0_splits[0])
axs[0].set_aspect('equal', 'box')
axs[0].set_title('Task0', fontsize=10)
axs[1].scatter(X1_splits[0][:,0], X1_splits[0][:,1], c=Y1_splits[0])
axs[1].set_aspect('equal', 'box')
axs[1].set_title('Task1', fontsize=10)
print()
```
## Define MMTL Components
Now we'll define the core components of an MMTL problem: `Tasks`, `Models`, and `Payloads`.
### Tasks & MetalModels
A `Task` is a path through a network. In MeTaL, this corresponds to a particular sequence of Pytorch modules that each instance will pass through, ending with a "task head" module that outputs a prediction for that instance on that task. `Task` objects are not necessarily tied to a particular set of instances (data points) or labels.
In addition to specifying a path through the network, each task specifies which loss function and metrics it supports. You can look at the documentation for the `Task` class to see how to use custom losses or metrics; for now, we'll use the basic built-in `ClassificationTask` that uses cross-entropy loss and calculates accuracy.
The `MetalModel` is constructed from a set of `Tasks`. It constructs a network by stitching together the modules provided in each `Task`. In a future version of MeTaL, arbitrary DAG-like graphs of modules will be supported. In the present version, each `Task` can specify an input module, middle module, and head module (any module that is not provided will become an `IdentityModule`, which simply passes the data through with no modification).
The most common structure for MTL networks is to have a common trunk (e.g., input and/or middle modules) and separate heads (i.e., head modules). We will follow that design in this tutorial, making a feedforward network with 2 shared layers and separate task heads for each task. Each module can be composed of multiple submodules, so to accomplish this design, we can either include two linear layers in our input module, or assign one to the input module and one to the middle module; we arbitrarily use the former here.
```
from metal.mmtl.task import ClassificationTask
input_module = nn.Sequential(
torch.nn.Linear(2, 8),
nn.ReLU(),
torch.nn.Linear(8, 4),
nn.ReLU()
)
# Note that both tasks are initialized with the same copy of the input_module
# This ensures that those parameters will be shared (rather than creating two separate input_module copies)
task0 = ClassificationTask(
name="CircleTask",
input_module=input_module,
head_module=torch.nn.Linear(4, 2)
)
task1 = ClassificationTask(
name="SquareTask",
input_module=input_module,
head_module=torch.nn.Linear(4, 2)
)
```
We now create the `MetalModel` from our list of tasks.
```
from metal.mmtl import MetalModel
tasks = [task0, task1]
model = MetalModel(tasks, verbose=False)
```
### Payloads (Instances & Label Sets)
Now we'll define our `Payloads`.
A `Payload` is a bundle of instances (data points) and one or more corresponding label sets.
Each `Payload` contains data from only one split of the data (i.e., train data and test data should never touch).
Because we have two datasets with disjoint instance sets and three splits per dataset, we will make a total of six `Payloads`.
The instances in a `Payload` can consist of multiple fields of varied types (e.g., an image component and a text component for a caption generation task), and each `Payload` can contain multiple label sets (for example, if the same set of instances has labels for more than one task). If the instances have only one field and one label set, then you can use the helper method `Payload.from_tensors()`. In this case, the data you pass in (in our case, X) will be stored under the field name "data" by default and the label set will be given the name "labels" by default. See the other MMTL tutorial(s) for examples of problems where the data requires multiple fields or the instances have labels from multiple label sets.
Each `Payload` stores a dict that maps each label set to the task that it corresponds to.
```
from pprint import pprint
from metal.mmtl.payload import Payload
payloads = []
splits = ["train", "valid", "test"]
for i, (X_splits, Y_splits) in enumerate([(X0_splits, Y0_splits), (X1_splits, Y1_splits)]):
for X, Y, split in zip(X_splits, Y_splits, splits):
payload_name = f"Payload{i}_{split}"
task_name = tasks[i].name
payload = Payload.from_tensors(payload_name, X, Y, task_name, split, batch_size=32)
payloads.append(payload)
pprint(payloads)
```
## Train Model
The MetalModel is built from a list of `Task` objects.
When the network is printed, it displays one input/middle/head module for each `Task` (even if multiple `Tasks` share the same module). We can also see that each module is wrapped in a `DataParallel()` layer (to enable parallelization across multiple GPUs when available) and in `MetalModuleWrapper`s (which wrap arbitrary Pytorch modules to ensure that they maintain the proper input/output formats that MeTaL expects). This output is often quite long, so we generally set `verbose=False` when constructing the model.
In a future version update, more flexibility will be provided for specifying arbitrary DAG-like networks of modules between tasks.
```
model = MetalModel(tasks, verbose=False)
```
To train the model, we create a `MultitaskTrainer`.
The default scheduling strategy in MeTaL is to pull batches from `Payloads` proportional to the number of batches they contain relative to the total number of batches; this is approximately equivalent to dividing all `Payloads` into batches at the beginning of each epoch, shuffling them, and then operating over the shuffled list sequentially.
```
from metal.mmtl.trainer import MultitaskTrainer
trainer = MultitaskTrainer()
```
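The proportional strategy can be sketched in plain Python (an illustrative simplification, not the trainer's actual scheduler): build one entry per batch for each payload, then shuffle the combined list.

```python
import random

def proportional_schedule(batch_counts, seed=0):
    """Return a shuffled list of payload indices with one entry per batch,
    so each payload is visited in proportion to its number of batches."""
    schedule = [i for i, n in enumerate(batch_counts) for _ in range(n)]
    random.Random(seed).shuffle(schedule)
    return schedule

sched = proportional_schedule([13, 7])   # payload 0: 13 batches, payload 1: 7
print(sched.count(0), sched.count(1))    # 13 7
```

Each payload is still exhausted exactly once per epoch; only the interleaving order is randomized.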
The `train_model()` method requires a `MetalModel` to train, and payloads with data and labels to run through the model.
Note once again that the data is separate from the tasks and model; the same model could be trained using payloads belonging to a different dataset, for example.
**Task-specific metrics are recorded in the form "task/payload/label_set/metric", corresponding to the task that made the predictions, the payload (data) the predictions were made on, the label set used to score the predictions, and the metric being calculated.**
For model-wide metrics (such as the total loss over all tasks or the learning rate), the default task name is `model`, the payload name is the name of the split, and the label_set is `all`.
```
scores = trainer.train_model(
model,
payloads,
n_epochs=20,
log_every=2,
lr=0.02,
progress_bar=False,
)
```
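The metric keys in the returned scores are plain slash-separated strings, so they are easy to decompose programmatically. A small illustrative helper (hypothetical, not part of MeTaL):

```python
def parse_metric_name(name):
    """Split a 'task/payload/label_set/metric' key into its four parts."""
    task, payload, label_set, metric = name.split("/")
    return {"task": task, "payload": payload, "label_set": label_set, "metric": metric}

parsed = parse_metric_name("CircleTask/Payload0_train/labels/accuracy")
print(parsed["task"], parsed["metric"])  # CircleTask accuracy
```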
To calculate predictions or probabilities for an individual payload, we can then use the provided `MetalModel` methods.
```
from metal.contrib.visualization.analysis import *
import numpy as np  # needed for np.array below
Y_probs = np.array(model.predict_probs(payloads[0]))
Y_preds = np.array(model.predict(payloads[0]))
Y_gold = payloads[0].data_loader.dataset.Y_dict["labels"].numpy()
plot_predictions_histogram(Y_preds, Y_gold)
plot_probabilities_histogram(Y_probs)
```
This concludes the MMTL Basics Tutorial.
To see a more advanced use case, see the MMTL BERT Tutorial.
| github_jupyter |
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# Wealth Distribution Dynamics
## Contents
- [Wealth Distribution Dynamics](#Wealth-Distribution-Dynamics)
- [Overview](#Overview)
- [Lorenz Curves and the Gini Coefficient](#Lorenz-Curves-and-the-Gini-Coefficient)
- [A Model of Wealth Dynamics](#A-Model-of-Wealth-Dynamics)
- [Implementation](#Implementation)
- [Applications](#Applications)
- [Exercises](#Exercises)
In addition to what’s in Anaconda, this lecture will need the following libraries:
```
!pip install --upgrade quantecon
```
## Overview
This notebook gives an introduction to wealth distribution dynamics, with a
focus on
- modeling and computing the wealth distribution via simulation,
- measures of inequality such as the Lorenz curve and Gini coefficient, and
- how inequality is affected by the properties of wage income and returns on assets.
One interesting property of the wealth distribution we discuss is Pareto
tails.
The wealth distribution in many countries exhibits a Pareto tail
- See [this lecture](heavy_tails.ipynb) for a definition.
- For a review of the empirical evidence, see, for example, [[BB18]](zreferences.ipynb#benhabib2018skewed).
This is consistent with high concentration of wealth amongst the richest households.
It also gives us a way to quantify such concentration, in terms of the tail index.
One question of interest is whether or not we can replicate Pareto tails from a relatively simple model.
### A Note on Assumptions
The evolution of wealth for any given household depends on their
savings behavior.
Modeling such behavior will form an important part of this lecture series.
However, in this particular lecture, we will be content with rather ad hoc (but plausible) savings rules.
We do this to more easily explore the implications of different specifications of income dynamics and investment returns.
At the same time, all of the techniques discussed here can be plugged into models that use optimization to obtain savings rules.
We will use the following imports.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import quantecon as qe
from numba import njit, float64, prange
try:
    from numba import jitclass                 # numba < 0.49
except ImportError:
    from numba.experimental import jitclass    # numba >= 0.49
```
## Lorenz Curves and the Gini Coefficient
Before we investigate wealth dynamics, we briefly review some measures of
inequality.
### Lorenz Curves
One popular graphical measure of inequality is the [Lorenz curve](https://en.wikipedia.org/wiki/Lorenz_curve).
The package [QuantEcon.py](https://github.com/QuantEcon/QuantEcon.py), already imported above, contains a function to compute Lorenz curves.
To illustrate, suppose that
```
n = 10_000 # size of sample
w = np.exp(np.random.randn(n)) # lognormal draws
```
is data representing the wealth of 10,000 households.
We can compute and plot the Lorenz curve as follows:
```
f_vals, l_vals = qe.lorenz_curve(w)
fig, ax = plt.subplots()
ax.plot(f_vals, l_vals, label='Lorenz curve, lognormal sample')
ax.plot(f_vals, f_vals, label='Lorenz curve, equality')
ax.legend()
plt.show()
```
This curve can be understood as follows: if point $ (x,y) $ lies on the curve, it means that, collectively, the bottom $ (100x)\% $ of the population holds $ (100y)\% $ of the wealth.
The “equality” line is the 45 degree line (which might not be exactly 45
degrees in the figure, depending on the aspect ratio).
A sample that produces this line exhibits perfect equality.
The other line in the figure is the Lorenz curve for the lognormal sample, which deviates significantly from perfect equality.
For example, the bottom 80% of the population holds around 40% of total wealth.
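The curve itself is simple to compute by hand: sort the sample, then pair the cumulative population share with the cumulative wealth share. A minimal NumPy sketch of the idea (conventions such as whether the origin point is included may differ from the library's implementation):

```python
import numpy as np

def lorenz_curve_manual(w):
    """Cumulative population share vs. cumulative wealth share of the sorted sample."""
    w = np.sort(w)
    s = np.cumsum(w)
    f_vals = np.arange(1, len(w) + 1) / len(w)  # cumulative population share
    l_vals = s / s[-1]                          # cumulative wealth share
    return f_vals, l_vals

f_vals, l_vals = lorenz_curve_manual(np.array([1.0, 1.0, 1.0, 1.0]))
print(l_vals)  # equal shares: 0.25, 0.5, 0.75, 1.0 -- the 45 degree line
```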
Here is another example, which shows how the Lorenz curve shifts as the
underlying distribution changes.
We generate 10,000 observations using the Pareto distribution with a range of
parameters, and then compute the Lorenz curve corresponding to each set of
observations.
```
a_vals = (1, 2, 5) # Pareto tail index
n = 10_000 # size of each sample
fig, ax = plt.subplots()
for a in a_vals:
u = np.random.uniform(size=n)
y = u**(-1/a) # distributed as Pareto with tail index a
f_vals, l_vals = qe.lorenz_curve(y)
ax.plot(f_vals, l_vals, label=f'$a = {a}$')
ax.plot(f_vals, f_vals, label='equality')
ax.legend()
plt.show()
```
You can see that, as the tail parameter of the Pareto distribution increases, inequality decreases.
This is to be expected, because a higher tail index implies less weight in the tail of the Pareto distribution.
### The Gini Coefficient
The definition and interpretation of the Gini coefficient can be found on the corresponding [Wikipedia page](https://en.wikipedia.org/wiki/Gini_coefficient).
A value of 0 indicates perfect equality (corresponding to the case where the
Lorenz curve matches the 45 degree line) and a value of 1 indicates complete
inequality (all wealth held by the richest household).
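For a finite sample, the Gini coefficient can be computed directly as the mean absolute difference across all pairs of observations, divided by twice the sample mean. A direct O(n²) sketch, for intuition rather than speed:

```python
import numpy as np

def gini_manual(y):
    """Mean absolute pairwise difference divided by twice the mean."""
    y = np.asarray(y, dtype=float)
    diffs = np.abs(y[:, None] - y[None, :])  # all pairwise absolute differences
    return diffs.mean() / (2 * y.mean())

print(gini_manual([1.0, 1.0, 1.0]))  # 0.0 -- perfect equality
```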
The [QuantEcon.py](https://github.com/QuantEcon/QuantEcon.py) library contains a function to calculate the Gini coefficient.
We can test it on the Weibull distribution with parameter $ a $, where the Gini coefficient is known to be
$$
G = 1 - 2^{-1/a}
$$
Let’s see if the Gini coefficient computed from a simulated sample matches
this at each fixed value of $ a $.
```
a_vals = range(1, 20)
ginis = []
ginis_theoretical = []
n = 100
fig, ax = plt.subplots()
for a in a_vals:
y = np.random.weibull(a, size=n)
ginis.append(qe.gini_coefficient(y))
ginis_theoretical.append(1 - 2**(-1/a))
ax.plot(a_vals, ginis, label='estimated gini coefficient')
ax.plot(a_vals, ginis_theoretical, label='theoretical gini coefficient')
ax.legend()
ax.set_xlabel("Weibull parameter $a$")
ax.set_ylabel("Gini coefficient")
plt.show()
```
The simulation shows that the fit is good.
## A Model of Wealth Dynamics
Having discussed inequality measures, let us now turn to wealth dynamics.
The model we will study is
<a id='equation-wealth-dynam-ah'></a>
$$
w_{t+1} = (1 + r_{t+1}) s(w_t) + y_{t+1} \tag{1}
$$
where
- $ w_t $ is wealth at time $ t $ for a given household,
- $ r_t $ is the rate of return of financial assets,
- $ y_t $ is current non-financial (e.g., labor) income and
- $ s(w_t) $ is current wealth net of consumption
Letting $ \{z_t\} $ be a correlated state process of the form
$$
z_{t+1} = a z_t + b + \sigma_z \epsilon_{t+1}
$$
we’ll assume that
$$
R_t := 1 + r_t = c_r \exp(z_t) + \exp(\mu_r + \sigma_r \xi_t)
$$
and
$$
y_t = c_y \exp(z_t) + \exp(\mu_y + \sigma_y \zeta_t)
$$
Here $ \{ (\epsilon_t, \xi_t, \zeta_t) \} $ is IID and standard
normal in $ \mathbb R^3 $.
The value of $ c_r $ should be close to zero, since rates of return
on assets do not exhibit large trends.
When we simulate a population of households, we will assume all shocks are idiosyncratic (i.e., specific to individual households and independent across them).
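The stationary moments of $ \{z_t\} $, which we use below to initialize households, follow from the AR(1) recursion: $ E[z] = b/(1-a) $ and $ \mathrm{Var}(z) = \sigma_z^2/(1-a^2) $. A quick pure-Python simulation check, using illustrative parameter values:

```python
import random

a, b, σ_z = 0.5, 0.0, 0.1          # illustrative AR(1) parameters
rng = random.Random(1234)

z, zs = 0.0, []
for t in range(200_000):
    z = a * z + b + σ_z * rng.gauss(0, 1)
    zs.append(z)

mean = sum(zs) / len(zs)
var = sum((x - mean) ** 2 for x in zs) / len(zs)
print(round(mean, 3), round(var, 4))  # mean near b/(1-a) = 0, variance near σ_z²/(1-a²) ≈ 0.0133
```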
Regarding the savings function $ s $, our default model will be
<a id='equation-sav-ah'></a>
$$
s(w) = s_0 w \cdot \mathbb 1\{w \geq \hat w\} \tag{2}
$$
where $ s_0 $ is a positive constant.
Thus, for $ w < \hat w $, the household saves nothing. For
$ w \geq \hat w $, the household saves a fraction $ s_0 $ of
their wealth.
We are using something akin to a fixed savings rate model, while
acknowledging that low wealth households tend to save very little.
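In code, this savings rule is just a threshold check; here is a standalone sketch (the jitclass below inlines the same logic):

```python
def s(w, w_hat=1.0, s_0=0.75):
    """Savings: zero below the threshold w_hat, a fraction s_0 of wealth above it."""
    return s_0 * w if w >= w_hat else 0.0

print(s(0.5), s(2.0))  # 0.0 1.5
```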
## Implementation
Here’s some type information to help Numba.
```
wealth_dynamics_data = [
('w_hat', float64), # savings parameter
('s_0', float64), # savings parameter
('c_y', float64), # labor income parameter
('μ_y', float64), # labor income parameter
('σ_y', float64), # labor income parameter
('c_r', float64), # rate of return parameter
('μ_r', float64), # rate of return parameter
('σ_r', float64), # rate of return parameter
('a', float64), # aggregate shock parameter
('b', float64), # aggregate shock parameter
('σ_z', float64), # aggregate shock parameter
('z_mean', float64), # mean of z process
('z_var', float64), # variance of z process
('y_mean', float64), # mean of y process
('R_mean', float64) # mean of R process
]
```
Here’s a class that stores instance data and implements methods that update
the aggregate state and household wealth.
```
@jitclass(wealth_dynamics_data)
class WealthDynamics:
def __init__(self,
w_hat=1.0,
s_0=0.75,
c_y=1.0,
μ_y=1.0,
σ_y=0.2,
c_r=0.05,
μ_r=0.1,
σ_r=0.5,
a=0.5,
b=0.0,
σ_z=0.1):
self.w_hat, self.s_0 = w_hat, s_0
self.c_y, self.μ_y, self.σ_y = c_y, μ_y, σ_y
self.c_r, self.μ_r, self.σ_r = c_r, μ_r, σ_r
self.a, self.b, self.σ_z = a, b, σ_z
# Record stationary moments
self.z_mean = b / (1 - a)
self.z_var = σ_z**2 / (1 - a**2)
exp_z_mean = np.exp(self.z_mean + self.z_var / 2)
self.R_mean = c_r * exp_z_mean + np.exp(μ_r + σ_r**2 / 2)
self.y_mean = c_y * exp_z_mean + np.exp(μ_y + σ_y**2 / 2)
# Test a stability condition that ensures wealth does not diverge
# to infinity.
α = self.R_mean * self.s_0
if α >= 1:
raise ValueError("Stability condition failed.")
def parameters(self):
"""
Collect and return parameters.
"""
parameters = (self.w_hat, self.s_0,
self.c_y, self.μ_y, self.σ_y,
self.c_r, self.μ_r, self.σ_r,
self.a, self.b, self.σ_z)
return parameters
def update_states(self, w, z):
"""
Update one period, given current wealth w and persistent
state z.
"""
# Simplify names
params = self.parameters()
w_hat, s_0, c_y, μ_y, σ_y, c_r, μ_r, σ_r, a, b, σ_z = params
zp = a * z + b + σ_z * np.random.randn()
# Update wealth
y = c_y * np.exp(zp) + np.exp(μ_y + σ_y * np.random.randn())
wp = y
if w >= w_hat:
R = c_r * np.exp(zp) + np.exp(μ_r + σ_r * np.random.randn())
wp += R * s_0 * w
return wp, zp
```
Here’s a function to simulate a time series of wealth for an individual household.
```
@njit
def wealth_time_series(wdy, w_0, n):
"""
Generate a single time series of length n for wealth given
initial value w_0.
The initial persistent state z_0 for each household is drawn from
the stationary distribution of the AR(1) process.
* wdy: an instance of WealthDynamics
* w_0: scalar
* n: int
"""
z = wdy.z_mean + np.sqrt(wdy.z_var) * np.random.randn()
w = np.empty(n)
w[0] = w_0
for t in range(n-1):
w[t+1], z = wdy.update_states(w[t], z)
return w
```
Now here’s a function to simulate a cross-section of households forward in time.
Note the use of parallelization to speed up computation.
```
@njit(parallel=True)
def update_cross_section(wdy, w_distribution, shift_length=500):
"""
Shifts a cross-section of households forward in time
* wdy: an instance of WealthDynamics
* w_distribution: array_like, represents current cross-section
Takes a current distribution of wealth values as w_distribution
and updates each w_t in w_distribution to w_{t+j}, where
j = shift_length.
Returns the new distribution.
"""
new_distribution = np.empty_like(w_distribution)
# Update each household
for i in prange(len(new_distribution)):
z = wdy.z_mean + np.sqrt(wdy.z_var) * np.random.randn()
w = w_distribution[i]
for t in range(shift_length-1):
w, z = wdy.update_states(w, z)
new_distribution[i] = w
return new_distribution
```
Parallelization is very effective in the function above because the time path
of each household can be calculated independently once the path for the
aggregate state is known.
## Applications
Let’s try simulating the model at different parameter values and investigate
the implications for the wealth distribution.
### Time Series
Let’s look at the wealth dynamics of an individual household.
```
wdy = WealthDynamics( w_hat=1.0,
s_0=0.75,
c_y=1.0,
μ_y=1.0,
σ_y=0.2,
c_r=0.05,
μ_r=0.1,
σ_r=0.5,
a=0.5,
b=0.0,
σ_z=0.1)
ts_length = 200
w = wealth_time_series(wdy, wdy.y_mean, ts_length)
fig, ax = plt.subplots()
ax.plot(w)
plt.show()
```
Notice the large spikes in wealth over time.
Such spikes are similar to what we observed in time series when [we studied Kesten processes](kesten_processes.ipynb).
### Inequality Measures
Let’s look at how inequality varies with returns on financial assets.
The next function generates a cross section and then computes the Lorenz
curve and Gini coefficient.
```
def generate_lorenz_and_gini(wdy, num_households=100_000, T=500):
"""
Generate the Lorenz curve data and gini coefficient corresponding to a
WealthDynamics model by simulating num_households forward to time T.
"""
ψ_0 = np.ones(num_households) * wdy.y_mean
z_0 = wdy.z_mean
ψ_star = update_cross_section(wdy, ψ_0, shift_length=T)
return qe.gini_coefficient(ψ_star), qe.lorenz_curve(ψ_star)
```
Now we investigate how the Lorenz curves associated with the wealth distribution change as return to savings varies.
The code below plots Lorenz curves for three different values of $ \mu_r $.
If you are running this yourself, note that it will take one or two minutes to execute.
This is unavoidable because we are executing a CPU intensive task.
In fact the code, which is JIT compiled and parallelized, runs extremely fast relative to the number of computations.
```
fig, ax = plt.subplots()
μ_r_vals = (0.0, 0.025, 0.05)
gini_vals = []
for μ_r in μ_r_vals:
wdy = WealthDynamics(w_hat=1.0,
s_0=0.75,
c_y=1.0,
μ_y=1.0,
σ_y=0.2,
c_r=0.05,
μ_r=μ_r,
σ_r=0.5,
a=0.5,
b=0.0,
σ_z=0.1)
gv, (f_vals, l_vals) = generate_lorenz_and_gini(wdy)
ax.plot(f_vals, l_vals, label=f'$\psi^*$ at $\mu_r = {μ_r:0.2}$')
gini_vals.append(gv)
ax.plot(f_vals, f_vals, label='equality')
ax.legend(loc="upper left")
plt.show()
```
The Lorenz curve shifts downwards as returns on financial income rise, indicating a rise in inequality.
We will look at this again via the Gini coefficient immediately below, but
first consider the following image of our system resources when the code above
is executing:
<a id='htop-again'></a>

Notice how effectively Numba has implemented multithreading for this routine:
all 8 CPUs on our workstation are running at maximum capacity (even though
four of them are virtual).
Since the code is both efficiently JIT compiled and fully parallelized, it’s
close to impossible to make this sequence of tasks run faster without changing
hardware.
Now let’s check the Gini coefficient.
```
fig, ax = plt.subplots()
ax.plot(μ_r_vals, gini_vals, label='gini coefficient')
ax.set_xlabel("$\mu_r$")
ax.legend()
plt.show()
```
Once again, we see that inequality increases as returns on financial income
rise.
Let’s finish this section by investigating what happens when we change the
volatility term $ \sigma_r $ in financial returns.
```
fig, ax = plt.subplots()
σ_r_vals = (0.35, 0.45, 0.52)
gini_vals = []
for σ_r in σ_r_vals:
wdy = WealthDynamics(w_hat=1.0,
s_0=0.75,
c_y=1.0,
μ_y=1.0,
σ_y=0.2,
c_r=0.05,
μ_r=0.1,
σ_r=σ_r,
a=0.5,
b=0.0,
σ_z=0.1)
gv, (f_vals, l_vals) = generate_lorenz_and_gini(wdy)
ax.plot(f_vals, l_vals, label=f'$\psi^*$ at $\sigma_r = {σ_r:0.2}$')
gini_vals.append(gv)
ax.plot(f_vals, f_vals, label='equality')
ax.legend(loc="upper left")
plt.show()
```
We see that greater volatility has the effect of increasing inequality in this model.
## Exercises
### Exercise 1
For a wealth or income distribution with Pareto tail, a higher tail index suggests lower inequality.
Indeed, it is possible to prove that the Gini coefficient of the Pareto
distribution with tail index $ a $ is $ 1/(2a - 1) $.
To the extent that you can, confirm this by simulation.
In particular, generate a plot of the Gini coefficient against the tail index
using both the theoretical value just given and the value computed from a sample via `qe.gini_coefficient`.
For the values of the tail index, use `a_vals = np.linspace(1, 10, 25)`.
Use sample of size 1,000 for each $ a $ and the sampling method for generating Pareto draws employed in the discussion of Lorenz curves for the Pareto distribution.
To the extent that you can, interpret the monotone relationship between the
Gini index and $ a $.
```
a_vals = np.linspace(1, 10, 25)
ginis = np.empty(len(a_vals))
samp_size = 1000
fig, ax = plt.subplots()
for i, a in enumerate(a_vals):
    y = np.random.uniform(size=samp_size)**(-1/a)  # Pareto draws via inverse transform
    ginis[i] = qe.gini_coefficient(y)
ax.plot(a_vals, ginis, label='simulated')
ax.plot(a_vals, 1/(2*a_vals - 1), label='theoretical')
ax.set_xlabel("Pareto tail index $a$")
ax.legend()
plt.show()
```
### Exercise 2
The wealth process [(1)](#equation-wealth-dynam-ah) is similar to a [Kesten process](kesten_processes.ipynb).
This is because, according to [(2)](#equation-sav-ah), savings is constant for all wealth levels above $ \hat w $.
When savings is constant, the wealth process has the same quasi-linear
structure as a Kesten process, with multiplicative and additive shocks.
The Kesten–Goldie theorem tells us that Kesten processes have Pareto tails under a range of parameterizations.
The theorem does not directly apply here, since savings is not always constant and since the multiplicative and additive terms in [(1)](#equation-wealth-dynam-ah) are not IID.
At the same time, given the similarities, perhaps Pareto tails will arise.
To test this, run a simulation that generates a cross-section of wealth and
generate a rank-size plot.
In viewing the plot, remember that Pareto tails generate a straight line. Is
this what you see?
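To see why a Pareto tail yields a straight line, note that if $ P(X > x) = x^{-a} $, the rank of an observation of size $ x $ in a sample of $ n $ is roughly $ n x^{-a} $, so $ \log(\text{size}) \approx \text{const} - (1/a)\log(\text{rank}) $. A quick pure-Python check of this slope (seed and parameters are illustrative):

```python
import math
import random

random.seed(42)
a, n = 2.0, 10_000
sizes = sorted(random.random() ** (-1 / a) for _ in range(n))  # Pareto(a) draws via inverse transform
log_size = [math.log(s) for s in sizes]
log_rank = [math.log(r) for r in range(n, 0, -1)]              # largest size gets rank 1

# OLS slope of log size against log rank; theory predicts roughly -1/a
mx = sum(log_rank) / n
my = sum(log_size) / n
num = sum((x - mx) * (y - my) for x, y in zip(log_rank, log_size))
den = sum((x - mx) ** 2 for x in log_rank)
slope = num / den
print(round(slope, 2))  # close to -1/a = -0.5
```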
For sample size and initial conditions, use
```
num_households = 250_000
T = 500                                      # shift forward T periods
ψ_0 = np.ones(num_households) * wdy.y_mean   # initial distribution
z_0 = wdy.z_mean

# Simulate the cross-section forward T periods
ψ_star = update_cross_section(wdy, ψ_0, shift_length=T)

# Rank-size plot: sort log sizes; the largest observation gets rank 1
log_size = np.sort(np.log(ψ_star))
log_rank = np.log(np.arange(num_households, 0, -1))

fig, ax = plt.subplots()
ax.scatter(x=log_rank, y=log_size, marker='o', alpha=0.5)
ax.set_xlabel("log rank")
ax.set_ylabel("log size")
plt.show()
```
| github_jupyter |
[](https://gishub.org/leafmap-pangeo)
Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
```
# !pip install leafmap
import leafmap.kepler as leafmap
```
If you are using a recently implemented leafmap feature that has not yet been released to PyPI or conda-forge, you can uncomment the following line to install the development version from GitHub.
```
# leafmap.update_package()
```
Create an interactive map. You can specify various parameters to initialize the map, such as `center`, `zoom`, `height`, and `widescreen`.
```
m = leafmap.Map(center=[40, -100], zoom=2, height=600, widescreen=False)
m
```
Save the map to an interactive html. To hide the side panel and disable map customization, set `read_only=True`.
```
m.to_html(outfile="../html/kepler.html", read_only=False)
```
Display the interactive map in a notebook cell.
```
# m.static_map(width=950, height=600, read_only=True)
```
Add a CSV to the map. If you have a map config file, you can directly apply config to the map.
```
m = leafmap.Map(center=[37.7621, -122.4143], zoom=12)
in_csv = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_data.csv'
config = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_config.json'
m.add_csv(in_csv, layer_name="hex_data", config=config)
m
```
Save the map configuration as a JSON file.
```
m.save_config("cache/config.json")
```
Save the map to an interactive html.
```
m.to_html(outfile="../html/kepler_hex.html")
```
Add a GeoJSON to the map.
```
m = leafmap.Map(center=[20, 0], zoom=1)
lines = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/cable-geo.geojson'
m.add_geojson(lines, layer_name="Cable lines")
m
m.to_html("../html/kepler_lines.html")
```
Add a GeoJSON with US state boundaries to the map.
```
m = leafmap.Map(center=[50, -110], zoom=2)
polygons = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us-states.json'
m.add_geojson(polygons, layer_name="US States")
m
m.to_html("../html/kepler_states.html")
```
Add a shapefile to the map.
```
m = leafmap.Map(center=[20, 0], zoom=1)
in_shp = "https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip"
m.add_shp(in_shp, "Countries")
m
m.to_html("../html/kepler_countries.html")
```
Add a GeoPandas GeoDataFrame to the map.
```
import geopandas as gpd
gdf = gpd.read_file("https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.geojson")
gdf
m = leafmap.Map(center=[20, 0], zoom=1)
m.add_gdf(gdf, "World cities")
m
m.to_html("../html/kepler_cities.html")
```
<a id='pd'></a>
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# Pandas
<a id='index-1'></a>
## Contents
- [Pandas](#Pandas)
- [Overview](#Overview)
- [Series](#Series)
- [DataFrames](#DataFrames)
- [On-Line Data Sources](#On-Line-Data-Sources)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
In addition to what’s in Anaconda, this lecture will need the following libraries:
```
!pip install --upgrade pandas-datareader
```
## Overview
[Pandas](http://pandas.pydata.org/) is a package of fast, efficient data analysis tools for Python.
Its popularity has surged in recent years, coincident with the rise
of fields such as data science and machine learning.
Here’s a popularity comparison over time against Stata, SAS, and [dplyr](https://dplyr.tidyverse.org/) courtesy of Stack Overflow Trends
<img src="https://s3-ap-southeast-2.amazonaws.com/python-programming.quantecon.org/_static/lecture_specific/pandas/pandas_vs_rest.png" style="width:55%;height:55%">
Just as [NumPy](http://www.numpy.org/) provides the basic array data type plus core array operations, pandas
1. defines fundamental structures for working with data and
1. endows them with methods that facilitate operations such as
- reading in data
- adjusting indices
- working with dates and time series
- sorting, grouping, re-ordering and general data munging <sup><a href=#mung id=mung-link>[1]</a></sup>
- dealing with missing values, etc., etc.
More sophisticated statistical functionality is left to other packages, such
as [statsmodels](http://www.statsmodels.org/) and [scikit-learn](http://scikit-learn.org/), which are built on top of pandas.
This lecture will provide a basic introduction to pandas.
Throughout the lecture, we will assume that the following imports have taken
place
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import requests
```
## Series
<a id='index-2'></a>
Two important data types defined by pandas are `Series` and `DataFrame`.
You can think of a `Series` as a “column” of data, such as a collection of observations on a single variable.
A `DataFrame` is an object for storing related columns of data.
Let’s start with Series
```
s = pd.Series(np.random.randn(4), name='daily returns')
s
```
Here you can imagine the indices `0, 1, 2, 3` as indexing four listed
companies, and the values being daily returns on their shares.
Pandas `Series` are built on top of NumPy arrays and support many similar
operations
```
s * 100
np.abs(s)
```
But `Series` provide more than NumPy arrays.
Not only do they have some additional (statistically oriented) methods
```
s.describe()
```
But their indices are more flexible
```
s.index = ['AMZN', 'AAPL', 'MSFT', 'GOOG']
s
```
Viewed in this way, `Series` are like fast, efficient Python dictionaries
(with the restriction that the items in the dictionary all have the same
type—in this case, floats).
In fact, you can use much of the same syntax as Python dictionaries
```
s['AMZN']
s['AMZN'] = 0
s
'AAPL' in s
```
## DataFrames
<a id='index-3'></a>
While a `Series` is a single column of data, a `DataFrame` is several columns, one for each variable.
In essence, a `DataFrame` in pandas is analogous to a (highly optimized) Excel spreadsheet.
Thus, it is a powerful tool for representing and analyzing data that are naturally organized into rows and columns, often with descriptive indexes for individual rows and individual columns.
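Before reading data from a file, here is a minimal sketch of building a `DataFrame` directly from a dictionary of columns (the column names and values here are purely illustrative):

```python
import pandas as pd

# Each dictionary key becomes a column; each list holds that column's values
df_small = pd.DataFrame({'city': ['Tokyo', 'Lagos', 'Lima'],
                         'population_m': [37.4, 14.8, 10.7]})
print(df_small)
print(df_small.shape)  # (3, 2): three rows, two columns
```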
Let’s look at an example that reads data from the CSV file `pandas/data/test_pwt.csv` that can be downloaded
<a href=_static/lecture_specific/pandas/data/test_pwt.csv download>here</a>.
Here’s the content of `test_pwt.csv`
```text
"country","country isocode","year","POP","XRAT","tcgdp","cc","cg"
"Argentina","ARG","2000","37335.653","0.9995","295072.21869","75.716805379","5.5788042896"
"Australia","AUS","2000","19053.186","1.72483","541804.6521","67.759025993","6.7200975332"
"India","IND","2000","1006300.297","44.9416","1728144.3748","64.575551328","14.072205773"
"Israel","ISR","2000","6114.57","4.07733","129253.89423","64.436450847","10.266688415"
"Malawi","MWI","2000","11801.505","59.543808333","5026.2217836","74.707624181","11.658954494"
"South Africa","ZAF","2000","45064.098","6.93983","227242.36949","72.718710427","5.7265463933"
"United States","USA","2000","282171.957","1","9898700","72.347054303","6.0324539789"
"Uruguay","URY","2000","3219.793","12.099591667","25255.961693","78.978740282","5.108067988"
```
Supposing you have this data saved as `test_pwt.csv` in the present working directory (type `%pwd` in Jupyter to see what this is), it can be read in as follows:
```
df = pd.read_csv('https://raw.githubusercontent.com/QuantEcon/lecture-python-programming/master/source/_static/lecture_specific/pandas/data/test_pwt.csv')
type(df)
df
```
We can select particular rows using standard Python array slicing notation
```
df[2:5]
```
To select columns, we can pass a list containing the names of the desired columns represented as strings
```
df[['country', 'tcgdp']]
```
To select both rows and columns using integers, the `iloc` attribute should be used with the format `.iloc[rows, columns]`
```
df.iloc[2:5, 0:4]
```
To select rows and columns using a mixture of integers and labels, the `loc` attribute can be used in a similar way
```
df.loc[df.index[2:5], ['country', 'tcgdp']]
```
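Beyond positional and label-based selection, rows can also be selected with a boolean condition. A short self-contained sketch (the small frame below echoes a few values from `test_pwt.csv`; recall `POP` is in thousands):

```python
import pandas as pd

df_demo = pd.DataFrame({'country': ['Argentina', 'India', 'Malawi'],
                        'POP': [37335.653, 1006300.297, 11801.505]})

# A boolean Series used as an index keeps only the rows where it is True
large = df_demo[df_demo['POP'] > 1e5]   # populations above 100 million
print(large)
```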
Let’s imagine that we’re only interested in population (`POP`) and total GDP (`tcgdp`).
One way to strip the data frame `df` down to only these variables is to overwrite the dataframe using the selection method described above
```
df = df[['country', 'POP', 'tcgdp']]
df
```
Here the index `0, 1,..., 7` is redundant because we can use the country names as an index.
To do this, we set the index to be the `country` variable in the dataframe
```
df = df.set_index('country')
df
```
Let’s give the columns slightly better names
```
df.columns = 'population', 'total GDP'
df
```
Population is in thousands; let's convert it to single units
```
df['population'] = df['population'] * 1e3
df
```
Next, we’re going to add a column showing real GDP per capita, multiplying by 1,000,000 as we go because total GDP is in millions
```
df['GDP percap'] = df['total GDP'] * 1e6 / df['population']
df
```
One of the nice things about pandas `DataFrame` and `Series` objects is that they have methods for plotting and visualization that work through Matplotlib.
For example, we can easily generate a bar plot of GDP per capita
```
ax = df['GDP percap'].plot(kind='bar')
ax.set_xlabel('country', fontsize=12)
ax.set_ylabel('GDP per capita', fontsize=12)
plt.show()
```
At the moment the data frame is ordered alphabetically on the countries—let’s change it to GDP per capita
```
df = df.sort_values(by='GDP percap', ascending=False)
df
```
Plotting as before now yields
```
ax = df['GDP percap'].plot(kind='bar')
ax.set_xlabel('country', fontsize=12)
ax.set_ylabel('GDP per capita', fontsize=12)
plt.show()
```
## On-Line Data Sources
<a id='index-4'></a>
Python makes it straightforward to query online databases programmatically.
An important database for economists is [FRED](https://research.stlouisfed.org/fred2/) — a vast collection of time series data maintained by the St. Louis Fed.
For example, suppose that we are interested in the [unemployment rate](https://research.stlouisfed.org/fred2/series/UNRATE).
Via FRED, the entire series for the US civilian unemployment rate can be downloaded directly by entering
this URL into your browser (note that this requires an internet connection)
```text
https://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv
```
(Equivalently, click here: [https://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv](https://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv))
This request returns a CSV file, which will be handled by your default application for this class of files.
Alternatively, we can access the CSV file from within a Python program.
This can be done with a variety of methods.
We start with a relatively low-level method and then return to pandas.
### Accessing Data with requests
<a id='index-6'></a>
One option is to use [requests](https://requests.readthedocs.io/en/master/), a standard Python library for requesting data over the Internet.
To begin, try the following code on your computer
```
r = requests.get('http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv')
```
If there’s no error message, then the call has succeeded.
If you do get an error, then there are two likely causes
1. You are not connected to the Internet — hopefully, this isn’t the case.
1. Your machine is accessing the Internet through a proxy server, and Python isn’t aware of this.
In the second case, you can either
- switch to another machine
- solve your proxy problem by reading [the documentation](https://requests.readthedocs.io/en/master/)
Assuming that all is working, you can now proceed to use the `source` object returned by the call `requests.get('http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv')`
```
url = 'http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv'
source = requests.get(url).content.decode().split("\n")
source[0]
source[1]
source[2]
```
We could now write some additional code to parse this text and store it as an array.
But this is unnecessary — pandas’ `read_csv` function can handle the task for us.
We use `parse_dates=True` so that pandas recognizes our dates column, allowing for simple date filtering
```
data = pd.read_csv(url, index_col=0, parse_dates=True)
```
The data has been read into a pandas DataFrame called `data` that we can now manipulate in the usual way
```
type(data)
data.head() # A useful method to get a quick look at a data frame
pd.set_option('display.precision', 1)
data.describe() # Your output might differ slightly
```
We can also plot the unemployment rate from 2006 to 2012 as follows
```
ax = data['2006':'2012'].plot(title='US Unemployment Rate', legend=False)
ax.set_xlabel('year', fontsize=12)
ax.set_ylabel('%', fontsize=12)
plt.show()
```
Note that pandas offers many other file type alternatives.
Pandas has [a wide variety](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html) of top-level methods that we can use to read CSV, Excel, JSON, and Parquet files, or to plug straight into a database server.
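As a quick illustration of these I/O methods, a `DataFrame` can be round-tripped through JSON entirely in memory (a sketch with made-up values; `to_parquet`/`read_parquet` and `to_excel`/`read_excel` follow the same pattern but require optional dependencies):

```python
import pandas as pd
from io import StringIO

df_io = pd.DataFrame({'year': [2000, 2001], 'rate': [4.0, 4.7]})
json_str = df_io.to_json()                  # serialize to a JSON string
df_back = pd.read_json(StringIO(json_str))  # read it straight back
print(df_back)
```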
### Using pandas_datareader to Access Data
<a id='index-8'></a>
The maker of pandas has also authored a library called pandas_datareader that gives programmatic access to many data sources straight from the Jupyter notebook.
While some sources require an access key, many of the most important (e.g., FRED, [OECD](https://data.oecd.org/), [EUROSTAT](https://ec.europa.eu/eurostat/data/database) and the World Bank) are free to use.
For now let’s work through one example of downloading and plotting data — this
time from the World Bank.
The World Bank [collects and organizes data](http://data.worldbank.org/indicator) on a huge range of indicators.
For example, [here’s](http://data.worldbank.org/indicator/GC.DOD.TOTL.GD.ZS/countries) some data on government debt as a ratio to GDP.
The next code example fetches the data for you and plots time series for the US and Australia
```
from pandas_datareader import wb
govt_debt = wb.download(indicator='GC.DOD.TOTL.GD.ZS', country=['US', 'AU'], start=2005, end=2016).stack().unstack(0)
ind = govt_debt.index.droplevel(-1)
govt_debt.index = ind
ax = govt_debt.plot(lw=2)
ax.set_xlabel('year', fontsize=12)
plt.title("Government Debt to GDP (%)")
plt.show()
```
The [documentation](https://pandas-datareader.readthedocs.io/en/latest/index.html) provides more details on how to access various data sources.
## Exercises
<a id='pd-ex1'></a>
### Exercise 1
With these imports:
```
import datetime as dt
from pandas_datareader import data
```
Write a program to calculate the percentage price change over 2019 for the following shares:
```
ticker_list = {'INTC': 'Intel',
'MSFT': 'Microsoft',
'IBM': 'IBM',
'BHP': 'BHP',
'TM': 'Toyota',
'AAPL': 'Apple',
'AMZN': 'Amazon',
'BA': 'Boeing',
'QCOM': 'Qualcomm',
'KO': 'Coca-Cola',
'GOOG': 'Google',
'SNE': 'Sony',
'PTR': 'PetroChina'}
```
Here’s the first part of the program
```
def read_data(ticker_list,
start=dt.datetime(2019, 1, 2),
end=dt.datetime(2019, 12, 31)):
"""
This function reads in closing price data from Yahoo
for each tick in the ticker_list.
"""
ticker = pd.DataFrame()
for tick in ticker_list:
prices = data.DataReader(tick, 'yahoo', start, end)
closing_prices = prices['Close']
ticker[tick] = closing_prices
return ticker
ticker = read_data(ticker_list)
```
Complete the program to plot the result as a bar graph like this one:
<img src="https://s3-ap-southeast-2.amazonaws.com/python-programming.quantecon.org/_static/lecture_specific/pandas/pandas_share_prices.png" style="width:50%;height:50%">
<a id='pd-ex2'></a>
### Exercise 2
Using the method `read_data` introduced in [Exercise 1](#pd-ex1), write a program to obtain year-on-year percentage change for the following indices:
```
indices_list = {'^GSPC': 'S&P 500',
'^IXIC': 'NASDAQ',
'^DJI': 'Dow Jones',
'^N225': 'Nikkei'}
```
Complete the program to show summary statistics and plot the result as a time series graph like this one:
<img src="https://s3-ap-southeast-2.amazonaws.com/python-programming.quantecon.org/_static/lecture_specific/pandas/pandas_indices_pctchange.png" style="width:53%;height:53%">
## Solutions
### Exercise 1
There are a few ways to approach this problem using Pandas to calculate
the percentage change.
First, you can extract the data and perform the calculation as follows:
```
p1 = ticker.iloc[0] #Get the first set of prices as a Series
p2 = ticker.iloc[-1] #Get the last set of prices as a Series
price_change = (p2 - p1) / p1 * 100
price_change
```
Alternatively you can use an inbuilt method `pct_change` and configure it to
perform the correct calculation using the `periods` argument.
```
change = ticker.pct_change(periods=len(ticker)-1, axis='rows')*100
price_change = change.iloc[-1]
price_change
```
Then to plot the chart
```
price_change.sort_values(inplace=True)
price_change = price_change.rename(index=ticker_list)
fig, ax = plt.subplots(figsize=(10,8))
ax.set_xlabel('stock', fontsize=12)
ax.set_ylabel('percentage change in price', fontsize=12)
price_change.plot(kind='bar', ax=ax)
plt.show()
```
### Exercise 2
Following the work you did in [Exercise 1](#pd-ex1), you can query the data using `read_data` by updating the start and end dates accordingly.
```
indices_data = read_data(
indices_list,
start=dt.datetime(1928, 1, 2),
end=dt.datetime(2020, 12, 31)
)
```
Then, extract the first and last set of prices per year as DataFrames and calculate the yearly returns as follows:
```
yearly_returns = pd.DataFrame()
for index, name in indices_list.items():
p1 = indices_data.groupby(indices_data.index.year)[index].first() # Get the first set of returns as a DataFrame
p2 = indices_data.groupby(indices_data.index.year)[index].last() # Get the last set of returns as a DataFrame
returns = (p2 - p1) / p1
yearly_returns[name] = returns
yearly_returns
```
Next, you can obtain summary statistics by using the method `describe`.
```
yearly_returns.describe()
```
Then, to plot the chart
```
fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for iter_, ax in enumerate(axes.flatten()): # Flatten 2-D array to 1-D array
index_name = yearly_returns.columns[iter_] # Get index name per iteration
ax.plot(yearly_returns[index_name]) # Plot pct change of yearly returns per index
ax.set_ylabel("percent change", fontsize = 12)
ax.set_title(index_name)
plt.tight_layout()
```
**Footnotes**
<p><a id=mung href=#mung-link><strong>[1]</strong></a> Wikipedia defines munging as cleaning data from one raw form into a structured, purged one.
# Assignment 1: Neural Machine Translation
Welcome to the first assignment of Course 4. Here, you will build an English-to-German neural machine translation (NMT) model using Long Short-Term Memory (LSTM) networks with attention. Machine translation is an important task in natural language processing and could be useful not only for translating one language to another but also for word sense disambiguation (e.g. determining whether the word "bank" refers to the financial bank, or the land alongside a river). Implementing this using just a Recurrent Neural Network (RNN) with LSTMs can work for short to medium length sentences but can result in vanishing gradients for very long sequences. To solve this, you will be adding an attention mechanism to allow the decoder to access all relevant parts of the input sentence regardless of its length. By completing this assignment, you will:
- learn how to preprocess your training and evaluation data
- implement an encoder-decoder system with attention
- understand how attention works
- build the NMT model from scratch using Trax
- generate translations using greedy and Minimum Bayes Risk (MBR) decoding
## Outline
- [Part 1: Data Preparation](#1)
- [1.1 Importing the Data](#1.1)
- [1.2 Tokenization and Formatting](#1.2)
- [1.3 tokenize & detokenize helper functions](#1.3)
- [1.4 Bucketing](#1.4)
- [1.5 Exploring the data](#1.5)
- [Part 2: Neural Machine Translation with Attention](#2)
- [2.1 Attention Overview](#2.1)
- [2.2 Helper functions](#2.2)
- [Exercise 01](#ex01)
- [Exercise 02](#ex02)
- [Exercise 03](#ex03)
- [2.3 Implementation Overview](#2.3)
- [Exercise 04](#ex04)
- [Part 3: Training](#3)
- [3.1 TrainTask](#3.1)
- [Exercise 05](#ex05)
- [3.2 EvalTask](#3.2)
- [3.3 Loop](#3.3)
- [Part 4: Testing](#4)
- [4.1 Decoding](#4.1)
- [Exercise 06](#ex06)
- [Exercise 07](#ex07)
- [4.2 Minimum Bayes-Risk Decoding](#4.2)
- [Exercise 08](#ex08)
- [Exercise 09](#ex09)
- [Exercise 10](#ex10)
<a name="1"></a>
# Part 1: Data Preparation
<a name="1.1"></a>
## 1.1 Importing the Data
We will first start by importing the packages we will use in this assignment. As in the previous course of this specialization, we will use the [Trax](https://github.com/google/trax) library created and maintained by the [Google Brain team](https://research.google/teams/brain/) to do most of the heavy lifting. It provides submodules to fetch and process the datasets, as well as build and train the model.
```
%%capture
!pip install trax==1.3.4
%%capture
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/assets/w1_unittest.py
import random
import numpy as np
import trax
from termcolor import colored
from trax import layers as tl
from trax.fastmath import numpy as fastnp
from trax.supervised import training
!pip list | grep "termcolor\|trax"
```
Next, we will import the dataset we will use to train the model. To meet the storage constraints in this lab environment, we will just use a small dataset from [Opus](http://opus.nlpl.eu/), a growing collection of translated texts from the web. Particularly, we will get an English to German translation subset specified as `opus/medical` which has medical related texts. If storage is not an issue, you can opt to get a larger corpus such as the English to German translation dataset from [ParaCrawl](https://paracrawl.eu/), a large multi-lingual translation dataset created by the European Union. Both of these datasets are available via [Tensorflow Datasets (TFDS)](https://www.tensorflow.org/datasets)
and you can browse through the other available datasets [here](https://www.tensorflow.org/datasets/catalog/overview). We have downloaded the data for you in the `data/` directory of your workspace. As you'll see below, you can easily access this dataset from TFDS with `trax.data.TFDS`. The result is a python generator function yielding tuples. Use the `keys` argument to select what appears at which position in the tuple. For example, `keys=('en', 'de')` below will return pairs as (English sentence, German sentence).
```
train_stream_fn = trax.data.TFDS(
"opus/medical", keys=("en", "de"), eval_holdout_size=0.01, train=True
)
eval_stream_fn = trax.data.TFDS(
"opus/medical",
data_dir="./data/",
keys=("en", "de"),
eval_holdout_size=0.01,
train=False,
)
```
Notice that TFDS returns a generator *function*, not a generator. This is because in Python, you cannot reset generators so you cannot go back to a previously yielded value. During deep learning training, you use Stochastic Gradient Descent and don't actually need to go back -- but it is sometimes good to be able to do that, and that's where the functions come in. It is actually very common to use generator functions in Python -- e.g., `zip` is a generator function. You can read more about [Python generators](https://book.pythontips.com/en/latest/generators.html) to understand why we use them. Let's print a sample pair from our train and eval data. Notice that the raw output is represented in bytes (denoted by the `b'` prefix) and these will be converted to strings internally in the next steps.
```
train_stream = train_stream_fn()
print(colored("train data(en, de) tuple:", "yellow"), next(train_stream))
eval_stream = eval_stream_fn()
print(colored("eval data (en, de) tuple:", "green"), next(eval_stream))
```
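The distinction between a generator and a generator function can be seen with a few lines of plain Python: a generator cannot be rewound, but calling the function again yields a fresh generator, which is the only way to "go back" to the start of the stream.

```python
def count_up(n):
    """A generator function: each call returns a brand-new generator."""
    for i in range(n):
        yield i

gen = count_up(3)
print(next(gen), next(gen))  # 0 1 -- the generator only moves forward
gen = count_up(3)            # calling the function again restarts the stream
print(next(gen))             # 0
```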
<a name="1.2"></a>
## 1.2 Tokenization and Formatting
Now that we have imported our corpus, we will be preprocessing the sentences into a format that our model can accept. This will be composed of several steps:
**Tokenizing the sentences using subword representations:** As you've learned in the earlier courses of this specialization, we want to represent each sentence as an array of integers instead of strings. For our application, we will use *subword* representations to tokenize our sentences. This is a common technique to avoid out-of-vocabulary words by allowing parts of words to be represented separately. For example, instead of having separate entries in your vocabulary for --"fear", "fearless", "fearsome", "some", and "less"--, you can simply store --"fear", "some", and "less"-- then allow your tokenizer to combine these subwords when needed. This allows it to be more flexible so you won't have to save uncommon words explicitly in your vocabulary (e.g. *stylebender*, *nonce*, etc). Tokenizing is done with the `trax.data.Tokenize()` command and we have provided you the combined subword vocabulary for English and German (i.e. `ende_32k.subword`) saved in the `data` directory. Feel free to open this file to see what the subwords look like.
```
# global variables that state the filename and directory of the vocabulary file
VOCAB_FILE = "ende_32k.subword"
VOCAB_DIR = "gs://trax-ml/vocabs/"
tokenized_train_stream = trax.data.Tokenize(vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR)(train_stream)
tokenized_eval_stream = trax.data.Tokenize(vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR)(eval_stream)
```
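To build intuition for how subwords combine (this toy greedy longest-match scheme is only an illustration, not the actual algorithm behind `ende_32k.subword`), consider:

```python
# Toy subword vocabulary; real vocabularies hold ~32k entries
VOCAB = {'fear', 'some', 'less', 's', 'o', 'm', 'e', 'l', 'f', 'a', 'r'}

def greedy_subword_tokenize(word, vocab):
    """Repeatedly take the longest vocabulary entry that prefixes the remainder."""
    tokens = []
    while word:
        for end in range(len(word), 0, -1):
            if word[:end] in vocab:
                tokens.append(word[:end])
                word = word[end:]
                break
        else:
            raise ValueError(f"no subword matches the start of {word!r}")
    return tokens

print(greedy_subword_tokenize('fearsome', VOCAB))  # ['fear', 'some']
print(greedy_subword_tokenize('fearless', VOCAB))  # ['fear', 'less']
```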
**Append an end-of-sentence token to each sentence:** We will assign a token (i.e. in this case `1`) to mark the end of a sentence. This will be useful in inference/prediction so we'll know that the model has completed the translation.
```
# Append EOS at the end of each sentence.
# Integer assigned as end-of-sentence (EOS)
EOS = 1
# generator helper function to append EOS to each sentence
def append_eos(stream):
for (inputs, targets) in stream:
inputs_with_eos = list(inputs) + [EOS]
targets_with_eos = list(targets) + [EOS]
yield np.array(inputs_with_eos), np.array(targets_with_eos)
tokenized_train_stream = append_eos(tokenized_train_stream)
tokenized_eval_stream = append_eos(tokenized_eval_stream)
```
**Filter long sentences:** We will place a limit on the number of tokens per sentence to ensure we won't run out of memory. This is done with the `trax.data.FilterByLength()` method and you can see its syntax below.
```
# Filter too long sentences to not run out of memory.
# length_keys=[0, 1] means we filter both English and German sentences, so
# both must be no longer than 256 tokens for training / 512 for eval.
filtered_train_stream = trax.data.FilterByLength(
max_length=256, length_keys=[0, 1]
)(tokenized_train_stream)
filtered_eval_stream = trax.data.FilterByLength(
max_length=512, length_keys=[0,1]
)(tokenized_eval_stream)
train_input, train_target = next(filtered_train_stream)
print(colored(f"Single tokenized example input:", "red"), train_input)
print(colored(f"Single tokenized example target:", "red"), train_target)
```
<a name="1.3"></a>
## 1.3 tokenize & detokenize helper functions
Given any data set, you have to be able to map words to their indices, and indices to their words. The inputs and outputs to your trax models are usually tensors of numbers where each number corresponds to a word. If you were to process your data manually, you would have to make use of the following:
- <span style='color:blue'> word2Ind: </span> a dictionary mapping the word to its index.
- <span style='color:blue'> ind2Word:</span> a dictionary mapping the index to its word.
- <span style='color:blue'> word2Count:</span> a dictionary mapping the word to the number of times it appears.
- <span style='color:blue'> num_words:</span> total number of words that have appeared.
Since you have already implemented these in previous assignments of the specialization, we will provide you with helper functions that will do this for you. Run the cell below to get the following functions:
- <span style='color:blue'> tokenize(): </span> converts a text sentence to its corresponding token list (i.e. list of indices). Also converts words to subwords (parts of words).
- <span style='color:blue'> detokenize(): </span> converts a token list to its corresponding sentence (i.e. string).
```
# Setup helper functions for tokenizing and detokenizing sentences
def tokenize(input_str, vocab_file=None, vocab_dir=None):
"""Encodes a string to an array of integers
Args:
input_str (str): human-readable string to encode
vocab_file (str): filename of the vocabulary text file
vocab_dir (str): path to the vocabulary file
Returns:
numpy.ndarray: tokenized version of the input string
"""
# Set the encoding of the "end of sentence" as 1
EOS = 1
# Use the trax.data.tokenize method. It takes streams and returns streams,
# we get around it by making a 1-element stream with `iter`.
inputs = next(
trax.data.tokenize(
iter([input_str]), vocab_file=vocab_file, vocab_dir=vocab_dir
)
)
# Mark the end of the sentence with EOS
inputs = list(inputs) + [EOS]
# Adding the batch dimension to the front of the shape
batch_inputs = np.reshape(np.array(inputs), [1, -1])
return batch_inputs
def detokenize(integers, vocab_file=None, vocab_dir=None):
"""Decodes an array of integers to a human readable string
Args:
integers (numpy.ndarray): array of integers to decode
vocab_file (str): filename of the vocabulary text file
vocab_dir (str): path to the vocabulary file
Returns:
str: the decoded sentence.
"""
# Remove the dimensions of size 1
integers = list(np.squeeze(integers))
# Set the encoding of the "end of sentence" as 1
EOS = 1
# Remove the EOS to decode only the original tokens
if EOS in integers:
integers = integers[: integers.index(EOS)]
return trax.data.detokenize(
integers, vocab_file=vocab_file, vocab_dir=vocab_dir
)
```
Let's see how we might use these functions:
```
# Detokenize an input-target pair of tokenized sentences
print(colored(f'Single detokenized example input:', 'red'), detokenize(train_input, vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR))
print(colored(f'Single detokenized example target:', 'red'), detokenize(train_target, vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR))
print()
# Tokenize and detokenize a word that is not explicitly saved in the vocabulary file.
# See how it combines the subwords -- 'hell' and 'o'-- to form the word 'hello'.
print(colored(f"tokenize('hello'): ", 'green'), tokenize('hello', vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR))
print(colored(f"detokenize([17332, 140, 1]): ", 'green'), detokenize([17332, 140, 1], vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR))
```
<a name="1.4"></a>
## 1.4 Bucketing
Bucketing the tokenized sentences is an important technique used to speed up training in NLP.
Here is a
[nice article describing it in detail](https://medium.com/@rashmi.margani/how-to-speed-up-the-training-of-the-sequence-model-using-bucketing-techniques-9e302b0fd976)
but the gist is very simple. Our inputs have variable lengths and you want to make these the same when batching groups of sentences together. One way to do that is to pad each sentence to the length of the longest sentence in the dataset. This might lead to some wasted computation though. For example, if there are multiple short sentences with just two tokens, do we want to pad these when the longest sentence is composed of 100 tokens? Instead of padding with 0s to the maximum length of a sentence each time, we can group our tokenized sentences by length into buckets, as in this image (from the article above):

We batch the sentences with similar length together (e.g. the blue sentences in the image above) and only add minimal padding to make them have equal length (usually up to the nearest power of two). This allows us to waste less computation when processing padded sequences.
In Trax, it is implemented in the [bucket_by_length](https://github.com/google/trax/blob/5fb8aa8c5cb86dabb2338938c745996d5d87d996/trax/supervised/inputs.py#L378) function.
```
# Bucketing to create streams of batches.
# Buckets are defined in terms of boundaries and batch sizes.
# Batch_sizes[i] determines the batch size for items with length < boundaries[i]
# So below, we'll take a batch of 256 sentences of length < 8, 128 if length is
# between 8 and 16, and so on -- and only 2 if length is over 512.
boundaries = [8, 16, 32, 64, 128, 256, 512]
batch_sizes = [256, 128, 64, 32, 16, 8, 4, 2]
# Create the generators.
train_batch_stream = trax.data.BucketByLength(
boundaries, batch_sizes,
length_keys=[0, 1] # As before: count inputs and targets to length.
)(filtered_train_stream)
eval_batch_stream = trax.data.BucketByLength(
boundaries, batch_sizes,
length_keys=[0, 1] # As before: count inputs and targets to length.
)(filtered_eval_stream)
# Add masking for the padding (0s).
train_batch_stream = trax.data.AddLossWeights(id_to_mask=0)(train_batch_stream)
eval_batch_stream = trax.data.AddLossWeights(id_to_mask=0)(eval_batch_stream)
```
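The padding step inside each bucket can be illustrated in plain NumPy (a sketch of the idea only, not of Trax internals): a batch is padded just up to the nearest power of two at or above its own longest sentence, rather than to the global maximum.

```python
import numpy as np

def pad_batch(sentences, pad_id=0):
    """Pad a batch of token lists to the nearest power of two >= the longest one."""
    max_len = max(len(s) for s in sentences)
    padded_len = 1
    while padded_len < max_len:
        padded_len *= 2
    batch = np.full((len(sentences), padded_len), pad_id, dtype=np.int32)
    for i, s in enumerate(sentences):
        batch[i, :len(s)] = s
    return batch

# Three similar-length sentences get padded to length 8, not to the dataset maximum
batch = pad_batch([[5, 6, 1], [7, 8, 9, 10, 1], [2, 3, 4, 5, 6, 1]])
print(batch.shape)  # (3, 8)
```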
<a name="1.5"></a>
## 1.5 Exploring the data
We will now be displaying some of our data. You will see that the functions defined above (i.e. `tokenize()` and `detokenize()`) do the same things you have been doing again and again throughout the specialization. We gave these so you can focus more on building the model from scratch. Let us first get the data generator and get one batch of the data.
```
input_batch, target_batch, mask_batch = next(train_batch_stream)
# let's see the data type of a batch
print("input_batch data type: ", type(input_batch))
print("target_batch data type: ", type(target_batch))
# let's see the shape of this particular batch (batch length, sentence length)
print("input_batch shape: ", input_batch.shape)
print("target_batch shape: ", target_batch.shape)
```
The `input_batch` and `target_batch` are NumPy arrays consisting of tokenized English sentences and German sentences respectively. These tokens will later be used to produce embedding vectors for each word in the sentence (so the embedding for a sentence will be a matrix). The number of sentences in each batch is usually a power of 2 for optimal computer memory usage.
We can now visually inspect some of the data. You can run the cell below several times to shuffle through the sentences. Note that while this is a standard, widely used dataset, it does have some known incorrect translations. With that, let's pick a random sentence and print its tokenized representation.
```
# pick a random index less than the batch size.
index = random.randrange(len(input_batch))
# use the index to grab an entry from the input and target batch
print(colored('THIS IS THE ENGLISH SENTENCE: \n', 'red'), detokenize(input_batch[index], vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR), '\n')
print(colored('THIS IS THE TOKENIZED VERSION OF THE ENGLISH SENTENCE: \n ', 'red'), input_batch[index], '\n')
print(colored('THIS IS THE GERMAN TRANSLATION: \n', 'red'), detokenize(target_batch[index], vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR), '\n')
print(colored('THIS IS THE TOKENIZED VERSION OF THE GERMAN TRANSLATION: \n', 'red'), target_batch[index], '\n')
```
<a name="2"></a>
# Part 2: Neural Machine Translation with Attention
Now that you have the data generators and have handled the preprocessing, it is time for you to build the model. You will be implementing a neural machine translation model from scratch with attention.
<a name="2.1"></a>
## 2.1 Attention Overview
The model we will be building uses an encoder-decoder architecture. This Recurrent Neural Network (RNN) takes in a tokenized version of a sentence in its encoder, then passes it on to the decoder for translation. As mentioned in the lectures, just using a regular sequence-to-sequence model with LSTMs will work effectively for short to medium sentences but will start to degrade for longer ones. You can picture it like the figure below where all of the context of the input sentence is compressed into one vector that is passed into the decoder block. You can see how this will be an issue for very long sentences (e.g. 100 tokens or more) because the context of the first parts of the input will have very little effect on the final vector passed to the decoder.
<img src='https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/plain_rnn.png' width="500px">
Adding an attention layer to this model avoids this problem by giving the decoder access to all parts of the input sentence. To illustrate, let's just use a 4-word input sentence as shown below. Remember that a hidden state is produced at each timestep of the encoder (represented by the orange rectangles). These are all passed to the attention layer and each is given a score based on the current activation (i.e. hidden state) of the decoder. For instance, consider the figure below where the first prediction "Wie" has already been made. To produce the next prediction, the attention layer will first receive all the encoder hidden states (i.e. orange rectangles) as well as the decoder hidden state from when the word "Wie" was produced (i.e. first green rectangle). Given this information, it will score each of the encoder hidden states to know which one the decoder should focus on to produce the next word. During training, the model might have learned that it should align to the second encoder hidden state, and it subsequently assigns a high probability to the word "geht". If we are using greedy decoding, we will output that word as the next symbol, then repeat the process to produce each following word until we reach an end-of-sentence prediction.
<img src='https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/attention_overview.png' width="600px">
There are different ways to implement attention and the one we'll use for this assignment is the Scaled Dot Product Attention which has the form:
$$Attention(Q, K, V) = softmax(\frac{QK^T}{\sqrt{d_k}})V$$
You will dive deeper into this equation next week but for now, you can think of it as computing scores using queries (Q) and keys (K), followed by a multiplication with the values (V) to get a context vector at a particular timestep of the decoder. This context vector is fed to the decoder RNN to get a set of probabilities for the next predicted word. The division by the square root of the key dimensionality ($\sqrt{d_k}$) improves model performance, and you'll also learn more about it next week. For our machine translation application, the encoder activations (i.e. encoder hidden states) will be the keys and values, while the decoder activations (i.e. decoder hidden states) will be the queries.
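As a sanity check on this formula, here is a minimal NumPy sketch of scaled dot-product attention for a single unbatched, single-head example. It is illustrative only; in the assignment the computation is handled by Trax's `tl.AttentionQKV` layer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> context: (n_q, d_v)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # similarity of each query to each key
    # softmax over the keys axis (shifted by the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                     # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # e.g. 4 decoder positions (queries)
K = rng.normal(size=(6, 8))   # e.g. 6 encoder positions (keys)
V = rng.normal(size=(6, 8))   # values paired with the keys
context = scaled_dot_product_attention(Q, K, V)
print(context.shape)  # (4, 8): one context vector per query
```

With all-zero queries and keys the scores are uniform, so each context vector is just the mean of the values; that is a quick way to sanity-check the softmax.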
You will see in the upcoming sections that this complex architecture and mechanism can be implemented with just a few lines of code. Let's get started!
<a name="2.2"></a>
## 2.2 Helper functions
We will first implement a few functions that we will use later on. These will be for the input encoder, pre-attention decoder, and preparation of the queries, keys, values, and mask.
### 2.2.1 Input encoder
The input encoder runs on the input tokens, creates their embeddings, and feeds them to an LSTM network. This outputs the activations that will be the keys and values for attention. It is a [Serial](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Serial) network which uses:
- [tl.Embedding](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding): Converts each token to its vector representation. In this case, it is the size of the vocabulary by the dimension of the model: `tl.Embedding(vocab_size, d_model)`. `vocab_size` is the number of entries in the given vocabulary. `d_model` is the number of elements in the word embedding.
- [tl.LSTM](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.LSTM): LSTM layer of size `d_model`. We want to be able to configure how many encoder layers we have so remember to create LSTM layers equal to the number of the `n_encoder_layers` parameter.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/input_encoder.png">
<a name="ex01"></a>
### Exercise 01
**Instructions:** Implement the `input_encoder_fn` function.
```
# UNQ_C1
# GRADED FUNCTION
def input_encoder_fn(input_vocab_size, d_model, n_encoder_layers):
""" Input encoder runs on the input sentence and creates
activations that will be the keys and values for attention.
Args:
input_vocab_size: int: vocab size of the input
d_model: int: depth of embedding (n_units in the LSTM cell)
n_encoder_layers: int: number of LSTM layers in the encoder
Returns:
tl.Serial: The input encoder
"""
# create a serial network
input_encoder = tl.Serial(
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# create an embedding layer to convert tokens to vectors
tl.Embedding(input_vocab_size, d_model),
# feed the embeddings to the LSTM layers. It is a stack of n_encoder_layers LSTM layers
[tl.LSTM(d_model) for _ in range(n_encoder_layers)]
### END CODE HERE ###
)
return input_encoder
```
*Note: To make this notebook more neat, we moved the unit tests to a separate file called `w1_unittest.py`. Feel free to open it from your workspace if needed. Just click `File` on the upper left corner of this page then `Open` to see your Jupyter workspace directory. From there, you can see `w1_unittest.py` and you can open it in another tab or download to see the unit tests. We have placed comments in that file to indicate which functions are testing which part of the assignment (e.g. `test_input_encoder_fn()` has the unit tests for UNQ_C1).*
```
# BEGIN UNIT TEST
import w1_unittest
w1_unittest.test_input_encoder_fn(input_encoder_fn)
# END UNIT TEST
```
### 2.2.2 Pre-attention decoder
The pre-attention decoder runs on the targets and creates activations that are used as queries in attention. This is a Serial network which is composed of the following:
- [tl.ShiftRight](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.ShiftRight): This pads a token to the beginning of your target tokens (e.g. `[8, 34, 12]` shifted right is `[0, 8, 34, 12]`). This will act like a start-of-sentence token that will be the first input to the decoder. During training, this shift also allows the target tokens to be passed as input to do teacher forcing.
- [tl.Embedding](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding): Like in the previous function, this converts each token to its vector representation. In this case, it is the size of the vocabulary by the dimension of the model: `tl.Embedding(vocab_size, d_model)`. `vocab_size` is the number of entries in the given vocabulary. `d_model` is the number of elements in the word embedding.
- [tl.LSTM](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.LSTM): LSTM layer of size `d_model`.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/pre_attention_decoder.png">
<a name="ex02"></a>
### Exercise 02
**Instructions:** Implement the `pre_attention_decoder_fn` function.
```
# UNQ_C2
# GRADED FUNCTION
def pre_attention_decoder_fn(mode, target_vocab_size, d_model):
""" Pre-attention decoder runs on the targets and creates
activations that are used as queries in attention.
Args:
mode: str: 'train' or 'eval'
target_vocab_size: int: vocab size of the target
d_model: int: depth of embedding (n_units in the LSTM cell)
Returns:
tl.Serial: The pre-attention decoder
"""
# create a serial network
pre_attention_decoder = tl.Serial(
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# shift right to insert start-of-sentence token and implement
# teacher forcing during training
tl.ShiftRight(mode=mode),
# run an embedding layer to convert tokens to vectors
tl.Embedding(target_vocab_size, d_model),
# feed to an LSTM layer
tl.LSTM(d_model)
### END CODE HERE ###
)
return pre_attention_decoder
# BEGIN UNIT TEST
w1_unittest.test_pre_attention_decoder_fn(pre_attention_decoder_fn)
# END UNIT TEST
```
### 2.2.3 Preparing the attention input
This function will prepare the inputs to the attention layer. We want to take in the encoder and pre-attention decoder activations and assign them to the queries, keys, and values. In addition, another output here will be the mask that distinguishes real tokens from padding tokens. This mask is used internally by Trax when computing the softmax so that padding tokens will not have an effect on the computed probabilities. From the data preparation steps in Section 1 of this assignment, you should know which tokens in the input correspond to padding.
We have filled in the last two lines composing the mask for you because they involve a concept that will be discussed further next week. It is related to *multiheaded attention*, which you can think of for now as computing the attention multiple times to improve the model's predictions. The output needs this additional axis, so we've included it already, but you don't need to analyze it just yet. What's important now is for you to know which should be the queries, keys, and values, as well as how to initialize the mask.
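To see what those reshape-and-broadcast lines do to the shapes, here is a toy NumPy version (the sizes are made up for the illustration; `fastnp` behaves like NumPy for these operations):

```python
import numpy as np

inputs = np.array([[3, 7, 4, 0, 0],    # batch of 2 padded input sentences;
                   [5, 2, 0, 0, 0]])   # 0 marks padding tokens
decoder_len = 3                        # padded length of the target side

mask = (inputs != 0)                   # (batch, encoder_len): True on real tokens
# add axes for the attention heads and the decoder length
mask = np.reshape(mask, (mask.shape[0], 1, 1, mask.shape[1]))
# broadcast so the mask covers every (head, decoder position) pair
mask = mask + np.zeros((1, 1, decoder_len, 1))
print(mask.shape)  # (2, 1, 3, 5): [batch, heads, decoder-len, encoder-len]
```

Every decoder position sees the same per-sentence pattern of real vs. padding encoder tokens; the broadcast simply repeats that pattern along the new axes.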
<a name="ex03"></a>
### Exercise 03
**Instructions:** Implement the `prepare_attention_input` function
```
# UNQ_C3
# GRADED FUNCTION
def prepare_attention_input(encoder_activations, decoder_activations, inputs):
"""Prepare queries, keys, values and mask for attention.
Args:
encoder_activations fastnp.array(batch_size, padded_input_length, d_model): output from the input encoder
decoder_activations fastnp.array(batch_size, padded_input_length, d_model): output from the pre-attention decoder
inputs fastnp.array(batch_size, padded_input_length): padded input tokens
Returns:
queries, keys, values and mask for attention.
"""
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# set the keys and values to the encoder activations
keys = encoder_activations
values = encoder_activations
# set the queries to the decoder activations
queries = decoder_activations
# generate the mask to distinguish real tokens from padding
# hint: inputs is nonzero for real tokens and 0 where they are padding
mask = (inputs != 0)
### END CODE HERE ###
# add axes to the mask for attention heads and decoder length.
mask = fastnp.reshape(mask, (mask.shape[0], 1, 1, mask.shape[1]))
# broadcast so mask shape is [batch size, attention heads, decoder-len, encoder-len].
# note: for this assignment, attention heads is set to 1.
mask = mask + fastnp.zeros((1, 1, decoder_activations.shape[1], 1))
return queries, keys, values, mask
# BEGIN UNIT TEST
w1_unittest.test_prepare_attention_input(prepare_attention_input)
# END UNIT TEST
```
<a name="2.3"></a>
## 2.3 Implementation Overview
We are now ready to implement our sequence-to-sequence model with attention. This will be a Serial network and is illustrated in the diagram below. It shows the layers you'll be using in Trax, and you'll see that each step can be implemented with a simple one-line command. We've placed several links to the documentation for each relevant layer in the discussion after the figure below.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/NMTModel.png" width="700px">
<a name="ex04"></a>
### Exercise 04
**Instructions:** Implement the `NMTAttn` function below to define your machine translation model which uses attention. We have left hyperlinks below pointing to the Trax documentation of the relevant layers. Remember to consult it to get tips on what parameters to pass.
**Step 0:** Prepare the input encoder and pre-attention decoder branches. You have already defined these earlier as helper functions, so it's just a matter of calling those functions and assigning the results to variables.
**Step 1:** Create a Serial network. This will stack the layers in the next steps one after the other. Like the earlier exercises, you can use [tl.Serial](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Serial).
**Step 2:** Make a copy of the input and target tokens. As you see in the diagram above, the input and target tokens will be fed into different layers of the model. You can use [tl.Select](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Select) layer to create copies of these tokens. Arrange them as `[input tokens, target tokens, input tokens, target tokens]`.
**Step 3:** Create a parallel branch to feed the input tokens to the `input_encoder` and the target tokens to the `pre_attention_decoder`. You can use [tl.Parallel](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Parallel) to create these sublayers in parallel. Remember to pass the variables you defined in Step 0 as parameters to this layer.
**Step 4:** Next, call the `prepare_attention_input` function to convert the encoder and pre-attention decoder activations to a format that the attention layer will accept. You can use [tl.Fn](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.base.Fn) to call this function. Note: Pass the `prepare_attention_input` function as the `f` parameter in `tl.Fn` without any arguments or parentheses.
**Step 5:** We will now feed the (queries, keys, values, and mask) to the [tl.AttentionQKV](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.AttentionQKV) layer. This computes the scaled dot-product attention and outputs the attention activations and the mask. Take note that although it is a one-liner, this layer is actually a deep network made up of several branches. We'll show the implementation taken from [here](https://github.com/google/trax/blob/master/trax/layers/attention.py#L61) to see the different layers used.
```python
def AttentionQKV(d_feature, n_heads=1, dropout=0.0, mode='train'):
"""Returns a layer that maps (q, k, v, mask) to (activations, mask).
See `Attention` above for further context/details.
Args:
d_feature: Depth/dimensionality of feature embedding.
n_heads: Number of attention heads.
dropout: Probabilistic rate for internal dropout applied to attention
activations (based on query-key pairs) before dotting them with values.
mode: Either 'train' or 'eval'.
"""
return cb.Serial(
cb.Parallel(
core.Dense(d_feature),
core.Dense(d_feature),
core.Dense(d_feature),
),
PureAttention( # pylint: disable=no-value-for-parameter
n_heads=n_heads, dropout=dropout, mode=mode),
core.Dense(d_feature),
)
```
Having deep layers poses the risk of vanishing gradients during training, and we want to mitigate that. To improve the ability of the network to learn, we can insert a [tl.Residual](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Residual) layer to add the output of AttentionQKV to the `queries` input. You can do this in Trax by simply nesting the `AttentionQKV` layer inside the `Residual` layer. The library will take care of branching and adding for you.
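The effect of the residual connection can be summarized in one line: the wrapped layer's output is added back to its input (in our model, that input is the queries). Here is a toy NumPy sketch of that idea, not Trax code:

```python
import numpy as np

def residual(f, x):
    # Residual(f) computes x + f(x): the identity path lets gradients
    # reach earlier layers even when f's own gradients are small
    return x + f(x)

x = np.array([1.0, 2.0, 3.0])
out = residual(lambda v: 0.1 * v, x)
print(out)  # [1.1 2.2 3.3]
```

Because the identity path always contributes a gradient of 1, stacking many such blocks is much easier to train than stacking the raw layers alone.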
**Step 6:** We will not need the mask for the model we're building so we can safely drop it. At this point in the network, the signal stack currently has `[attention activations, mask, target tokens]` and you can use [tl.Select](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Select) to output just `[attention activations, target tokens]`.
**Step 7:** We can now feed the attention weighted output to the LSTM decoder. We can stack multiple [tl.LSTM](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.LSTM) layers to improve the output so remember to append LSTMs equal to the number defined by `n_decoder_layers` parameter to the model.
**Step 8:** We want to determine the probabilities of each subword in the vocabulary and you can set this up easily with a [tl.Dense](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) layer by making its size equal to the size of our vocabulary.
**Step 9:** Normalize the output to log probabilities by passing the activations in Step 8 to a [tl.LogSoftmax](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.LogSoftmax) layer.
```
# UNQ_C4
# GRADED FUNCTION
def NMTAttn(input_vocab_size=33300,
target_vocab_size=33300,
d_model=1024,
n_encoder_layers=2,
n_decoder_layers=2,
n_attention_heads=4,
attention_dropout=0.0,
mode='train'):
"""Returns an LSTM sequence-to-sequence model with attention.
The input to the model is a pair (input tokens, target tokens), e.g.,
an English sentence (tokenized) and its translation into German (tokenized).
Args:
input_vocab_size: int: vocab size of the input
target_vocab_size: int: vocab size of the target
d_model: int: depth of embedding (n_units in the LSTM cell)
n_encoder_layers: int: number of LSTM layers in the encoder
n_decoder_layers: int: number of LSTM layers in the decoder after attention
n_attention_heads: int: number of attention heads
attention_dropout: float, dropout for the attention layer
mode: str: 'train', 'eval' or 'predict', predict mode is for fast inference
Returns:
A LSTM sequence-to-sequence model with attention.
"""
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# Step 0: call the helper function to create layers for the input encoder
input_encoder = input_encoder_fn(input_vocab_size, d_model, n_encoder_layers)
# Step 0: call the helper function to create layers for the pre-attention decoder
pre_attention_decoder = pre_attention_decoder_fn(mode, target_vocab_size, d_model)
# Step 1: create a serial network
model = tl.Serial(
# Step 2: copy input tokens and target tokens as they will be needed later.
tl.Select([0, 1, 0, 1]),
# Step 3: run input encoder on the input and pre-attention decoder the target.
tl.Parallel(input_encoder, pre_attention_decoder),
# Step 4: prepare queries, keys, values and mask for attention.
tl.Fn('PrepareAttentionInput', prepare_attention_input, n_out=4),
# Step 5: run the AttentionQKV layer
# nest it inside a Residual layer to add to the pre-attention decoder activations(i.e. queries)
tl.Residual(tl.AttentionQKV(d_model, n_heads=n_attention_heads, dropout=attention_dropout, mode=mode)),
# Step 6: drop the attention mask (i.e. index 1 on the stack)
tl.Select([0, 2]),
# Step 7: run the rest of the RNN decoder
[tl.LSTM(d_model) for _ in range(n_decoder_layers)],
# Step 8: prepare output by making it the right size
tl.Dense(target_vocab_size),
# Step 9: Log-softmax for output
tl.LogSoftmax()
)
### END CODE HERE
return model
# BEGIN UNIT TEST
w1_unittest.test_NMTAttn(NMTAttn)
# END UNIT TEST
# print your model
model = NMTAttn()
print(model)
```
<a name="3"></a>
# Part 3: Training
We will now be training our model in this section. Doing supervised training in Trax is pretty straightforward (short example [here](https://trax-ml.readthedocs.io/en/latest/notebooks/trax_intro.html#Supervised-training)). We will be instantiating three classes for this: `TrainTask`, `EvalTask`, and `Loop`. Let's take a closer look at each of these in the sections below.
<a name="3.1"></a>
## 3.1 TrainTask
The [TrainTask](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.TrainTask) class allows us to define the labeled data to use for training and the feedback mechanisms to compute the loss and update the weights.
<a name="ex05"></a>
### Exercise 05
**Instructions:** Instantiate a train task.
```
# UNQ_C5
# GRADED
train_task = training.TrainTask(
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# use the train batch stream as labeled data
labeled_data= train_batch_stream,
# use the cross entropy loss
loss_layer= tl.CrossEntropyLoss(),
# use the Adam optimizer with learning rate of 0.01
optimizer= trax.optimizers.Adam(learning_rate=0.01),
# use the `trax.lr.warmup_and_rsqrt_decay` as the learning rate schedule
# have 1000 warmup steps with a max value of 0.01
lr_schedule= trax.lr.warmup_and_rsqrt_decay(n_warmup_steps=1000, max_value=0.01),
# have a checkpoint every 10 steps
n_steps_per_checkpoint= 10,
### END CODE HERE ###
)
# BEGIN UNIT TEST
w1_unittest.test_train_task(train_task)
# END UNIT TEST
```
<a name="3.2"></a>
## 3.2 EvalTask
The [EvalTask](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.EvalTask) on the other hand allows us to see how the model is doing while training. For our application, we want it to report the cross entropy loss and accuracy.
```
eval_task = training.EvalTask(
## use the eval batch stream as labeled data
labeled_data=eval_batch_stream,
## use the cross entropy loss and accuracy as metrics
metrics=[tl.CrossEntropyLoss(), tl.Accuracy()],
)
```
<a name="3.3"></a>
## 3.3 Loop
The [Loop](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.Loop) class defines the model we will train as well as the train and eval tasks to execute. Its `run()` method allows us to execute the training for a specified number of steps.
```
# define the output directory
output_dir = 'output_dir/'
# remove old model if it exists. restarts training.
!rm -f output_dir/model.pkl.gz
# define the training loop
training_loop = training.Loop(NMTAttn(mode='train'),
train_task,
eval_tasks=[eval_task],
output_dir=output_dir)
# NOTE: Execute the training loop. This will take around 8 minutes to complete.
training_loop.run(10)
```
<a name="4"></a>
# Part 4: Testing
We will now use the model you just trained to translate English sentences to German. We will implement this with two functions: the first identifies the next symbol (i.e. output token); the second combines these symbols into the entire translated string.
We will start by first loading in a pre-trained copy of the model you just coded. Please run the cell below to do just that.
```
# instantiate the model we built in eval mode
model = NMTAttn(mode="eval")
# initialize weights from a pre-trained model
model.init_from_file("/content/output_dir/model.pkl.gz", weights_only=True)
model = tl.Accelerate(model)
```
<a name="4.1"></a>
## 4.1 Decoding
As discussed in the lectures, there are several ways to get the next token when translating a sentence. For instance, we can just take the most probable token at each step (i.e. greedy decoding) or sample from a distribution. We can generalize the implementation of these two approaches by using the `tl.logsoftmax_sample()` function. Let's briefly look at its implementation:
```python
def logsoftmax_sample(log_probs, temperature=1.0): # pylint: disable=invalid-name
"""Returns a sample from a log-softmax output, with temperature.
Args:
log_probs: Logarithms of probabilities (often coming from LogSoftmax)
temperature: For scaling before sampling (1.0 = default, 0.0 = pick argmax)
"""
# This is equivalent to sampling from a softmax with temperature.
u = np.random.uniform(low=1e-6, high=1.0 - 1e-6, size=log_probs.shape)
g = -np.log(-np.log(u))
return np.argmax(log_probs + g * temperature, axis=-1)
```
The key things to take away here are: 1. it gets random samples with the same shape as your input (i.e. `log_probs`), and 2. the amount of "noise" added to the input by these random samples is scaled by a `temperature` setting. You'll notice that setting it to `0` will just make the return statement equal to getting the argmax of `log_probs`. This will come in handy later.
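You can verify the temperature behaviour with a standalone NumPy copy of this sampler (a re-implementation for illustration; in the assignment you will call the Trax function itself):

```python
import numpy as np

def logsoftmax_sample(log_probs, temperature=1.0):
    # Gumbel-max trick: adding Gumbel noise to the log-probabilities and
    # taking the argmax samples from the corresponding distribution;
    # temperature scales how much noise is added
    u = np.random.uniform(low=1e-6, high=1.0 - 1e-6, size=log_probs.shape)
    g = -np.log(-np.log(u))
    return np.argmax(log_probs + g * temperature, axis=-1)

log_probs = np.log(np.array([0.1, 0.2, 0.7]))
print(logsoftmax_sample(log_probs, temperature=0.0))  # 2 (the argmax), every time
print(logsoftmax_sample(log_probs, temperature=1.0))  # 0, 1 or 2, at random
```

The noise term `g` follows a Gumbel distribution, which is what makes the argmax of `log_probs + g` an exact sample from the softmax distribution (the Gumbel-max trick).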
<a name="ex06"></a>
### Exercise 06
**Instructions:** Implement the `next_symbol()` function that takes in the `input_tokens` and the `cur_output_tokens` and returns the index of the next word. You can click below for hints on completing this exercise.
<details>
<summary>
<font size="3" color="darkgreen"><b>Click Here for Hints</b></font>
</summary>
<p>
<ul>
<li>To get the next power of two, you can compute <i>2^ceil(log_2(token_length + 1))</i>. We add 1 to avoid <i>log(0)</i>.</li>
<li>You can use <i>np.ceil()</i> to get the ceiling of a float.</li>
<li><i>np.log2()</i> will get the logarithm base 2 of a value</li>
<li><i>int()</i> will cast a value into an integer type</li>
<li>From the model diagram in part 2, you know that it takes two inputs. You can feed these with this syntax to get the model outputs: <i>model((input1, input2))</i>. It's up to you to determine which variables below to substitute for input1 and input2. Remember also from the diagram that the output has two elements: [log probabilities, target tokens]. You won't need the target tokens so we assigned it to _ below for you. </li>
<li> The log probabilities output will have the shape: (batch size, decoder length, vocab size). It will contain log probabilities for each token in the <i>cur_output_tokens</i> plus 1 for the start symbol introduced by the ShiftRight in the preattention decoder. For example, if cur_output_tokens is [1, 2, 5], the model will output an array of log probabilities each for tokens 0 (start symbol), 1, 2, and 5. To generate the next symbol, you just want to get the log probabilities associated with the last token (i.e. token 5 at index 3). You can slice the model output at [0, 3, :] to get this. It will be up to you to generalize this for any length of cur_output_tokens </li>
</ul>
</p>
</details>
```
# UNQ_C6
# GRADED FUNCTION
def next_symbol(NMTAttn, input_tokens, cur_output_tokens, temperature):
"""Returns the index of the next token.
Args:
NMTAttn (tl.Serial): An LSTM sequence-to-sequence model with attention.
input_tokens (np.ndarray 1 x n_tokens): tokenized representation of the input sentence
cur_output_tokens (list): tokenized representation of previously translated words
temperature (float): parameter for sampling ranging from 0.0 to 1.0.
0.0: same as argmax, always pick the most probable token
1.0: sampling from the distribution (can sometimes say random things)
Returns:
int: index of the next token in the translated sentence
float: log probability of the next symbol
"""
### START CODE HERE (REPLACE INSTANCES OF `None` WITH YOUR CODE) ###
# set the length of the current output tokens
token_length = len(cur_output_tokens)
# calculate next power of 2 for padding length
padded_length = 2**int(np.ceil(np.log2(token_length + 1)))
# pad cur_output_tokens up to the padded_length
padded = cur_output_tokens + [0] * (padded_length - token_length)
# model expects the output to have an axis for the batch size in front so
# convert `padded` list to a numpy array with shape (x, <padded_length>) where the
# x position is the batch axis. (hint: you can use np.expand_dims() with axis=0 to insert a new axis)
padded_with_batch = np.expand_dims(padded, axis=0)
# get the model prediction. remember to use the `NMTAttn` argument defined above.
# hint: the model accepts a tuple as input (e.g. `my_model((input1, input2))`)
output, _ = NMTAttn((input_tokens, padded_with_batch))
# get log probabilities from the last token output
log_probs = output[0, token_length, :]
# get the next symbol by getting a logsoftmax sample (*hint: cast to an int)
symbol = int(tl.logsoftmax_sample(log_probs, temperature))
### END CODE HERE ###
return symbol, float(log_probs[symbol])
# BEGIN UNIT TEST
w1_unittest.test_next_symbol(next_symbol, model)
# END UNIT TEST
```
Now you will implement the `sampling_decode()` function. This will call the `next_symbol()` function above several times until the next output is the end-of-sentence token (i.e. `EOS`). It takes in an input string and returns the translated version of that string.
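It may help to see the shape of that loop on a toy stand-in first. Everything below is a sketch: the `EOS` value and `fake_next_symbol` are made up so the example is runnable; the real function will use your `next_symbol()` and the tokenizer instead:

```python
EOS = 1  # assumed end-of-sentence token id for this sketch

def fake_next_symbol(input_tokens, cur_output_tokens):
    # toy stand-in: emits a fixed sequence then EOS, so the loop terminates
    scripted = [12, 34, 56, EOS]
    return scripted[len(cur_output_tokens)]

def sampling_decode(input_tokens, next_symbol_fn):
    cur_output_tokens = []
    cur_output = None
    # keep requesting symbols until the model emits EOS
    while cur_output != EOS:
        cur_output = next_symbol_fn(input_tokens, cur_output_tokens)
        cur_output_tokens.append(cur_output)
    return cur_output_tokens

print(sampling_decode([3, 7], fake_next_symbol))  # [12, 34, 56, 1]
```

The real implementation additionally tokenizes the input string, passes the temperature through to `next_symbol()`, and detokenizes the accumulated output tokens at the end.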
<a name="ex07"></a>
### Exercise 07
**Instructions**: Implement the `sampling_decode()` function.
# Now You Code 1: Data Analysis of Movie Goers
In this assignment you will perform a data analysis of people who go to the movies.
A movie theatre chain asked movie goers to fill out a quick survey in exchange for a 1/2 price ticket. The survey asked for basic demographics: age, gender, occupation and zip code. The survey results are contained in the data file `'NYC1-moviegoers.csv'`
In this assignment you will write a series of Python pandas code (in several cells) to answer some basic questions about the responses in the dataset.
```
# this turns off warning messages
import warnings
warnings.filterwarnings('ignore')
```
### Part 1: Load the dataset
Write code to import pandas and load the dataset (in CSV format) into the variable `moviegoers`, then print a random sample of 5 people from the data set.
### Part 2: Gender distribution
How many males and females filled out our survey?
Write a single line of Python Pandas code to count the genders in the data set. (There should be M = 670, F = 273)
**HINT:** Select the `gender` column then use a built-in series method to count the values in the series.
### Part 3: People without jobs
Who are the survey respondents without jobs?
Write Python Pandas code to create a variable `no_occupation` which filters the `moviegoers` data set to only those survey respondents with an occupation of `'none'`. (There should be 9 people)
### Part 4: Gender distribution of people without jobs.
What is the gender distribution of the 9 respondents without jobs?
Write Python Pandas code to display this.
**HINT:** Use the variable `no_occupation` from the previous step.
### Part 5: Young Artists
Write Python Pandas code to display the count of respondents with an occupation of artist who are 21 and under. (There should be 5)
**HINT:** You can either set each Pandas filter to a new `DataFrame` variable or try to chain the filters together. Also display them before you try and count them.
### Part 6: Distribution by age group
The movie theater which conducted this survey prices their tickets by age group:
- Youth (age 18 and under) $7.50
- Adult (age 19-55) $12.50
- Senior (age 56 and up) $8.50
Write python code to count the number of moviegoers in each of these age groups.
Your counts should be as follows:
```
Adult 837
Youth 54
Senior 52
```
**HINT:** You must perform feature engineering. Create a new column `'age_group'` and use the `'age'` column to assign one or more values to the age group. After you create the column and set the values get a count of values for the `'age_group'` column.
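One way to do the feature engineering the hint describes is `pd.cut`; this sketch uses a tiny invented frame, with bin edges assumed to match the price table above:

```python
import pandas as pd

# Hypothetical stand-in for the real survey data
moviegoers = pd.DataFrame({'age': [12, 25, 60, 17, 40, 70]})

# Bin ages into the three ticket-price groups: <=18, 19-55, 56+
moviegoers['age_group'] = pd.cut(moviegoers['age'],
                                 bins=[0, 18, 55, 200],
                                 labels=['Youth', 'Adult', 'Senior'])

age_group_counts = moviegoers['age_group'].value_counts()
print(age_group_counts)
```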
## Step 7: Questions
1. Pandas programs are different from typical Python programs. Explain the process by which you arrived at your final solution.
Answer:
2. What was the most difficult aspect of this assignment?
Answer:
## Step 8: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, citing specifics relevant to the activity to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
`--== Write Your Reflection Below Here ==--`
| github_jupyter |
# Using Astropy Quantities and Units for astrophysical calculations
## Authors
Ana Bonaca, Erik Tollerud, Jonathan Foster, Lia Corrales, Kris Stern, Stephanie T. Douglas
## Learning Goals
* Use `Quantity` objects to estimate a hypothetical galaxy's mass
* Take advantage of constants in the `astropy.constants` library
* Print formatted unit strings
* Plot `Quantity` objects with unit labels, using `astropy.visualization.quantity_support`
* Do math with `Quantity` objects
* Convert quantities with `astropy.units`
* Convert between wavelength and energy with `astropy.units.spectral` equivalencies
* Use the small angle approximation with `astropy.units.dimensionless_angles` equivalencies
* Write functions that take `Quantity` objects instead of numpy arrays
* Make synthetic radio observations
* Use `Quantity` objects such as data cubes to facilitate a full derivation of the total mass of a molecular cloud
## Keywords
units, radio astronomy, data cubes, matplotlib
## Companion Content
[Tools for Radio Astronomy](https://www.springer.com/gp/book/9783662053942) by Rohlfs & Wilson
## Summary
In this tutorial we present some examples showing how Astropy's `Quantity` object can make astrophysics calculations easier. The examples include calculating the mass of a galaxy from its velocity dispersion and determining masses of molecular clouds from CO intensity maps. We end with an example of good practices for using quantities in functions you might distribute to other people.
For an in-depth discussion of `Quantity` objects, see the [astropy documentation section](http://docs.astropy.org/en/stable/units/quantity.html).
## Preliminaries
We start by loading standard libraries and setting up plotting for IPython notebooks.
```
import numpy as np
import matplotlib.pyplot as plt

# You shouldn't use the `seed` function in real science code, but we use it here for example purposes.
# It makes the "random" number generator always give the same numbers wherever you run it.
np.random.seed(12345)

# Set up matplotlib for inline plotting
%matplotlib inline
```
It's conventional to load the Astropy `units` module as the variable `u`, demonstrated below. This will make working with `Quantity` objects much easier.
Astropy also has a `constants` module where typical physical constants are available. The constants are stored as objects of a subclass of `Quantity`, so they behave just like a `Quantity`. Here, we'll only need the gravitational constant `G`, Planck's constant `h`, and Boltzmann's constant, `k_B`.
```
import astropy.units as u
from astropy.constants import G, h, k_B
```
We will also show an example of plotting while taking advantage of the `astropy.visualization` package, which provides support for `Quantity` units.
```
from astropy.visualization import quantity_support
```
## 1. Galaxy mass
In this first example, we will use `Quantity` objects to estimate a hypothetical galaxy's mass, given its half-light radius and radial velocities of stars in the galaxy.
Let's assume that we measured the half-light radius of the galaxy to be 29 pc projected on the sky at the distance of the galaxy. This radius is often called the "effective radius", so we'll store it as a `Quantity` object with the name `Reff`. The easiest way to create a `Quantity` object is by multiplying the value with its unit. Units are accessed as `u.<unit>`, in this case `u.pc`.
```
Reff = 29 * u.pc
```
A completely equivalent (but more verbose) way of doing the same thing is to use the `Quantity` object's initializer, demonstrated below. In general, the simpler form (above) is preferred, as it is closer to how such a quantity would actually be written in text. The initializer form has more options, though, which you can learn about from the [astropy reference documentation on Quantity](http://docs.astropy.org/en/stable/api/astropy.units.quantity.Quantity.html).
```
Reff = u.Quantity(29, unit=u.pc)
```
We can access the value and unit of a `Quantity` using the `value` and `unit` attributes.
```
print("""Half light radius
value: {0}
unit: {1}""".format(Reff.value, Reff.unit))
```
The `value` and `unit` attributes can also be accessed within the print function.
```
print("""Half light radius
value: {0.value}
unit: {0.unit}""".format(Reff))
```
Furthermore, we can convert the radius in parsecs to any other unit of length using the ``to()`` method. Here, we convert it to meters.
```
print("{0:.3g}".format(Reff.to(u.m)))
```
Next, we'll create a synthetic dataset of radial velocity measurements, assuming a normal distribution with a mean velocity of 206 km/s and a velocity dispersion of 4.3 km/s.
```
vmean = 206
sigin = 4.3
v = np.random.normal(vmean, sigin, 500)*u.km/u.s
print("""First 10 radial velocity measurements:
{0}
{1}""".format(v[:10], v.to(u.m/u.s)[:10]))
```
One can occasionally run into issues when attempting to plot `Quantity` objects with `matplotlib`. It is always possible to fix this by passing the value array (e.g., `v.value`) to `matplotlib` functions. However, calling the `astropy.visualization.quantity_support()` function will change the settings of your `matplotlib` session to better handle astropy `Quantity` objects:
```
quantity_support()
```
Now we can plot a histogram of the velocity dataset. Note that, due to calling `quantity_support`, the x-axis is automatically labeled with the correct units.
```
plt.figure()
plt.hist(v, bins='auto', histtype="step")
plt.ylabel("N")
```
Now we can calculate the velocity dispersion of the galaxy. This demonstrates how you can perform basic operations like subtraction and division with `Quantity` objects, and also use them in standard numpy functions such as `mean()` and `size()`. They retain their units through these operations just as you would expect them to.
```
sigma = np.sqrt(np.sum((v - np.mean(v))**2) / np.size(v))
print("Velocity dispersion: {0:.2f}".format(sigma))
```
Note how we needed to use the `numpy` square root function, because the resulting velocity dispersion quantity is a `numpy` array. If we used the Python standard `math` library's `sqrt` function instead, we would get an error.
```
import math
sigma_scalar = math.sqrt(np.sum((v - np.mean(v))**2) / len(v))  # raises TypeError: a Quantity with units can't be converted to a plain float
```
In general, you should only use `numpy` functions with `Quantity` objects, *not* the `math` equivalents, unless you are sure you understand the consequences.
Now for the actual mass calculation. If a galaxy is pressure-supported (for example, an elliptical or dwarf spheroidal galaxy), its mass within the stellar extent can be estimated using a straightforward formula: $M_{1/2}=4\sigma^2 R_{eff}/G$. There are caveats to the use of this formula for science -- see Wolf et al. 2010 for details. For demonstrating `Quantity`, you can accept that this is often good enough. For the calculation, we can multiply the quantities together, and `astropy` will keep track of the units.
```
M = 4*sigma**2*Reff/G
M
```
The result is in a composite unit, so it's not really obvious it's a mass. However, it can be decomposed to cancel all of the length units ($km^2 pc/m^3$) using the `decompose()` method.
```
M.decompose()
```
We can also easily express the mass in whatever form you like -- solar masses are common in astronomy, or maybe you want the default SI and CGS units.
```
print("""Galaxy mass
in solar units: {0:.3g}
SI units: {1:.3g}
CGS units: {2:.3g}""".format(M.to(u.Msun), M.si, M.cgs))
```
Or, if you want the log of the mass, you can just use ``np.log10`` as long as the logarithm's argument is dimensionless.
```
np.log10(M.to_value(u.Msun))
```
However, you can't take the log of something with units, as that is not mathematically sensible.
```
np.log10(M)
```
## Exercises
Use `Quantity` and Kepler's law in the form given below to determine the (circular) orbital speed of the Earth around the sun in km/s. No need to look up constants or conversion factors to do this calculation -- it's all in `astropy.units` and `astropy.constants`.
$$v = \sqrt{\frac{G M_{\odot}}{r}}$$
There's a much easier way to figure out the velocity of the Earth using just two units or quantities. Do that and then compare to the Kepler's law answer (the easiest way is probably to compute the percentage difference, if any).
(Completely optional, but a good way to convince yourself of the value of Quantity:) Do the above calculations by hand -- you can use a calculator (or Python just for its arithmetic) but look up all the appropriate conversion factors and use paper-and-pencil approaches for keeping track of them all. Which one took longer?
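The exercise above can be sanity-checked in plain SI units — deliberately without astropy, so you can compare against your `Quantity`-based answer. The constants below are approximate hand look-ups (exactly the chore that `Quantity` and `astropy.constants` save you from):

```python
import math

# Approximate SI values, looked up by hand
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
r = 1.496e11       # 1 au, m
T = 3.156e7        # one year, s

v_kepler = math.sqrt(G * M_sun / r)  # Kepler's law for a circular orbit
v_circum = 2 * math.pi * r / T       # orbit circumference divided by period

percent_diff = 100 * abs(v_kepler - v_circum) / v_circum
print(round(v_kepler / 1000, 1), 'km/s')  # both approaches give roughly 29.8 km/s
print(round(percent_diff, 2), '% difference')
```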
## 2. Molecular cloud mass
In this second example, we will demonstrate how using `Quantity` objects can facilitate a full derivation of the total mass of a molecular cloud using radio observations of isotopes of Carbon Monoxide (CO).
#### Setting up the data cube
Let's assume that we've mapped the inner part of a molecular cloud in the J=1-0 rotational transition of ${\rm C}^{18}{\rm O}$ and are interested in measuring its total mass. The measurement produced a data cube with RA and Dec as spatial coordinates and velocity as the third axis. Each voxel in this data cube represents the brightness temperature of the emission at that position and velocity. Furthermore, we'll assume that we have an independent measurement of distance to the cloud $d=250$ pc and that the excitation temperature is known and constant throughout the cloud: $T_{ex}=25$ K.
```
d = 250 * u.pc
Tex = 25 * u.K
```
We'll generate a synthetic dataset, assuming the cloud follows a Gaussian distribution in each of RA, Dec and velocity. We start by creating a 100x100x300 numpy array, such that the first coordinate is right ascension, the second is declination, and the third is velocity. We use the `numpy.meshgrid` function to create data cubes for each of the three coordinates, and then use them in the formula for a Gaussian to generate an array with the synthetic data cube. In this cube, the cloud is positioned at the center of the cube, with $\sigma$ and the center in each dimension shown below. Note in particular that the $\sigma$ for RA and Dec have different units from the center, but `astropy` automatically does the relevant conversions before computing the exponential.
```
# Cloud's center
cen_ra = 52.25 * u.deg
cen_dec = 0.25 * u.deg
cen_v = 15 * u.km/u.s
# Cloud's size
sig_ra = 3 * u.arcmin
sig_dec = 4 * u.arcmin
sig_v = 3 * u.km/u.s
#1D coordinate quantities
ra = np.linspace(52, 52.5, 100) * u.deg
dec = np.linspace(0, 0.5, 100) * u.deg
v = np.linspace(0, 30, 300) *u.km/u.s
# this creates a data cube for each coordinate, sized according to the dimensions of the other coordinates
ra_cube, dec_cube, v_cube = np.meshgrid(ra, dec, v)
data_gauss = np.exp(-0.5*((ra_cube-cen_ra)/sig_ra)**2 +
-0.5*((dec_cube-cen_dec)/sig_dec)**2 +
-0.5*((v_cube-cen_v)/sig_v)**2 )
```
The units of the exponential are dimensionless, so we multiply the data cube by K to get brightness temperature units. Radio astronomers use a rather odd set of units [K km/s] for integrated intensity (that is, summing all the emission from a line over velocity). As an aside for experts, we're setting up our artificial cube on the main-beam temperature scale (T$_{\rm MB}$), which is the closest we can normally get to the actual brightness temperature of our source.
```
data = data_gauss * u.K
```
We will also need to know the width of each velocity bin and the size of each pixel, so let's calculate that now.
```
# Average pixel size
# This is only right if dec ~ 0, because of the cos(dec) factor.
dra = (ra.max() - ra.min()) / len(ra)
ddec = (dec.max() - dec.min()) / len(dec)
#Average velocity bin width
dv = (v.max() - v.min()) / len(v)
print("""dra = {0}
ddec = {1}
dv = {2}""".format(dra.to(u.arcsec), ddec.to(u.arcsec), dv))
```
We're interested in the integrated intensity over all of the velocity channels, so let's create a 2D quantity array by summing our data cube along the velocity axis (multiplying by the velocity width of a pixel).
```
intcloud = np.sum(data*dv, axis=2)
intcloud.unit
```
We can plot the 2D quantity using matplotlib's imshow function, by passing the quantity's value. Similarly, we can set the correct extent using the minimum and maximum values of the coordinate arrays. Finally, we can set the colorbar label to have proper units.
```
#Note that we display RA in the conventional way by going from max to min
plt.imshow(intcloud.value,
origin='lower',
extent=[ra.value.max(), ra.value.min(), dec.value.min(), dec.value.max()],
cmap='hot',
interpolation='nearest',
aspect='equal')
plt.colorbar().set_label("Intensity ({})".format(intcloud.unit))
plt.xlabel("RA (deg)")
plt.ylabel("Dec (deg)");
```
#### Measuring The Column Density of CO
In order to calculate the mass of the molecular cloud, we need to measure its column density. A number of assumptions are required for the following calculation; the most important are that the emission is optically thin (typically true for ${\rm C}^{18}{\rm O}$) and that conditions of local thermodynamic equilibrium hold along the line of sight. In the case where the temperature is large compared to the separation in energy levels for a molecule and the source fills the main beam of the telescope, the total column density for $^{13}{\rm CO}$ is
$N=C \frac{\int T_B(V) dV}{1-e^{-B}}$
where the constants $C$ and $B$ are given by:
$C=3.0\times10^{14} \left(\frac{\nu}{\nu_{13}}\right)^2 \frac{A_{13}}{A} {\rm K^{-1} cm^{-2} \, km^{-1} \, s}$
$B=\frac{h\nu}{k_B T}$
(Rohlfs & Wilson [Tools for Radio Astronomy](https://www.springer.com/gp/book/9783662053942)).
Here we have given an expression for $C$ scaled to the values for $^{13}{\rm CO}$ ($\nu_{13}$ and $A_{13}$). In order to use this relation for ${\rm C}^{18}{\rm O}$, we need to rescale the frequencies $\nu$ and Einstein coefficients $A$. $C$ is in funny mixed units, but that's okay. We'll define it as a `Quantity` object and not have to worry about it.
First, we look up the wavelength for these emission lines and store them as quantities.
```
lambda13 = 2.60076 * u.mm
lambda18 = 2.73079 * u.mm
```
Since the wavelength and frequency of light are related using the speed of light, we can convert between them. However, doing so just using the `to()` method fails, as units of length and frequency are not convertible:
```
nu13 = lambda13.to(u.Hz)
nu18 = lambda18.to(u.Hz)
```
Fortunately, `astropy` comes to the rescue by providing a feature called "unit equivalencies." Equivalencies provide a way to convert between two physically different units that are not normally equivalent, but in a certain context have a one-to-one mapping. For more on equivalencies, see the [equivalencies section of astropy's documentation](http://docs.astropy.org/en/stable/units/equivalencies.html).
In this case, calling the ``astropy.units.spectral()`` function provides the equivalencies necessary to handle conversions between wavelength and frequency. To use it, provide the equivalencies to the `equivalencies` keyword of the ``to()`` call:
```
nu13 = lambda13.to(u.Hz, equivalencies=u.spectral())
nu18 = lambda18.to(u.Hz, equivalencies=u.spectral())
```
Next, we look up Einstein coefficients (in units of s$^{-1}$), and calculate the ratios in constant $C$. Note how the ratios of frequency and Einstein coefficient units are dimensionless, so the unit of $C$ is unchanged.
```
nu13 = 115271096910.13396 * u.Hz
nu18 = 109782318669.689 * u.Hz
A13 = 7.4e-8 / u.s
A18 = 8.8e-8 / u.s
C = 3e14 * (nu18/nu13)**3 * (A13/A18) / (u.K * u.cm**2 * u.km *(1/u.s))
C
```
Now we move on to calculate the constant $B$. This is given by the ratio of $\frac{h\nu}{k_B T}$, where $h$ is Planck's constant, $k_B$ is the Boltzmann's constant, $\nu$ is the emission frequency, and $T$ is the excitation temperature. The constants were imported from `astropy.constants`, and the other two values are already calculated, so here we just take the ratio.
```
B = h * nu18 / (k_B * Tex)
```
The units of $B$ are Hz sec, which can be decomposed to a dimensionless unit if you actually care about its value. Usually this is not necessary, though. Quantities are at their best if you use them without worrying about intermediate units, and only convert at the very end when you want a final answer.
```
print('{0}\n{1}'.format(B, B.decompose()))
```
At this point we have all the ingredients to calculate the number density of $\rm CO$ molecules in this cloud. We already integrated (summed) over the velocity channels above to show the integrated intensity map, but we'll do it again here for clarity. This gives us the column density of CO for each spatial pixel in our map. We can then print out the peak column density.
```
NCO = C * np.sum(data*dv, axis=2) / (1 - np.exp(-B))
print("Peak CO column density: ")
np.max(NCO)
```
#### CO to Total Mass
We are using CO as a tracer for the much more numerous H$_2$, the quantity we are actually trying to infer. Since most of the mass is in H$_2$, we calculate its column density by multiplying the CO column density with the (known/assumed) H$_2$/CO ratio.
```
H2_CO_ratio = 5.9e6
NH2 = NCO * H2_CO_ratio
print("Peak H2 column density: ")
np.max(NH2)
```
That's a peak column density of roughly 50 magnitudes of visual extinction (assuming the conversion between N$_{\rm H_2}$ and A$_V$ from Bohlin et al. 1978), which seems reasonable for a molecular cloud.
We obtain the mass column density by multiplying the number column density by the mass of an individual H$_2$ molecule.
```
mH2 = 2 * 1.008 * u.Dalton #aka atomic mass unit/amu
rho = NH2 * mH2
```
A final step in going from the column density to mass is summing up over the area. If we do this in the straightforward way of length x width of a pixel, this area is then in units of ${\rm deg}^2$.
```
dap = dra * ddec
print(dap)
```
Now comes an important subtlety: in the small angle approximation, multiplying the pixel area with the square of distance yields the cross-sectional area of the cloud that the pixel covers, in *physical* units, rather than angular units. So it's tempting to just multiply the area and the square of the distance.
```
da = dap * d**2 # don't actually do it this way - use the version below instead!
print(da)
dap.to(u.steradian).value * d**2
```
But this is **wrong**, because `astropy.units` treats angles (and solid angles) as actual physical units, while the small-angle approximation assumes angles are dimensionless. So if you, e.g., try to convert to a different area unit, it will fail:
```
da.to(u.cm**2)
```
The solution is to use the `dimensionless_angles` equivalency, which allows angles to be treated as dimensionless. This makes it so that they will automatically convert to radians and become dimensionless when a conversion is needed.
```
da = (dap * d**2).to(u.pc**2, equivalencies=u.dimensionless_angles())
da
da.to(u.cm**2)
```
Finally, multiplying the column density with the pixel area and summing over all the pixels gives us the cloud mass.
```
M = np.sum(rho * da)
M.decompose().to(u.solMass)
```
## Exercises
The astro material was pretty heavy in that one, so let's focus on some associated statistics using `Quantity`'s array capabilities. Compute the median and mean of the `data` with the ``np.mean`` and ``np.median`` functions. Why are their values so different?
Similarly, compute the standard deviation and variance (if you don't know the relevant functions, look them up in the numpy docs or just type `np.<tab>` in a code cell). Do they have the units you expect?
## 3. Using Quantities with Functions
`Quantity` is also a useful tool if you plan to share some of your code, either with collaborators or the wider community. By writing functions that take `Quantity` objects instead of raw numbers or arrays, you can write code that is agnostic to the input unit. In this way, you may even be able to prevent [the destruction of Mars orbiters](http://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_failure). Below, we provide a simple example.
Suppose you are working on an instrument, and the person funding it asks for a function to give an analytic estimate of the response function. You determine from some tests it's basically a Lorentzian, but with a different scale along the two axes. Your first thought might be to do this:
```
def response_func(xinarcsec, yinarcsec):
    xscale = 0.9
    yscale = 0.85
    xfactor = 1 / (1 + xinarcsec/xscale)
    yfactor = 1 / (1 + yinarcsec/yscale)
    return xfactor * yfactor
```
You meant the inputs to be in arcsec, but alas, you send that to your collaborator and they don't look closely and think the inputs are instead supposed to be in arcmin. So they do:
```
response_func(1.0, 1.2)
```
And now they tell all their friends how terrible the instrument is, because it's supposed to have arcsecond resolution, but your function clearly shows it can only resolve an arcmin at best. But you can solve this by requiring they pass in `Quantity` objects. The new function could simply be:
```
def response_func(x, y):
    xscale = 0.9 * u.arcsec
    yscale = 0.85 * u.arcsec
    xfactor = 1 / (1 + x/xscale)
    yfactor = 1 / (1 + y/yscale)
    return xfactor * yfactor
```
And your collaborator now has to pay attention. If they just blindly put in a number they get an error:
```
response_func(1.0, 1.2)
```
Which is their cue to provide the units explicitly:
```
response_func(1.0*u.arcmin, 1.2*u.arcmin)
```
The funding agency is impressed at the resolution you achieved, and your instrument is saved! You now go on to win the Nobel Prize due to discoveries the instrument makes. And it was all because you used `Quantity` as the input of code you shared.
## Exercise
Write a function that computes the Keplerian velocity you worked out in section 1 (using `Quantity` input and outputs, of course), but allowing for an arbitrary mass and orbital radius. Try it with some reasonable numbers for satellites orbiting the Earth, a moon of Jupiter, or an extrasolar planet. Feel free to use wikipedia or similar for the masses and distances.
| github_jupyter |
# Functions
- Functions let you define reusable code and keep programs organized and simple
- In practice, a function usually implements one small piece of functionality
- A class implements one larger piece of functionality
- As a rule of thumb, a function should not be longer than one screen
All functions in Python actually have a return value (`None` by default).
If you don't write a `return` statement, Python simply won't display the `None`.
If you do write `return`, the function returns that value.
```
def HJN():
    print('Hello')
    return 1000

b = HJN()
print(b)
HJN

def panduan(number):
    if number % 2 == 0:
        print('O')
    else:
        print('J')

panduan(number=1)
panduan(2)
```
## Defining a function
    def function_name(list of parameters):
        do something
- `random`, `range` and `print`, which we used before, are all functions (or classes)
If a function parameter has a default value, then when you call the function
you may omit that argument and the default value will be used;
otherwise the value you pass in is used.
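A minimal sketch of that default-value behaviour (the function `greet` and its arguments are invented for illustration):

```python
def greet(name, greeting='Hello'):
    # 'greeting' falls back to 'Hello' when the caller omits it
    return greeting + ', ' + name

msg1 = greet('HJN')        # no second argument, so the default is used
msg2 = greet('HJN', 'Hi')  # the passed-in value overrides the default
print(msg1)
print(msg2)
```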
```
import random

def hahah():
    n = random.randint(0, 5)
    while 1:
        N = eval(input('>>'))
        if n == N:
            print('smart')
            break
        elif n < N:
            print('too small')
        elif n > N:
            print('too big')
```
## Calling a function
- functionName()
- the parentheses "()" perform the call
```
def H():
    print('hahaha')

def B():
    H()

B()

def A(f):
    f()

A(B)
```
## Functions with and without return values
- `return` hands back a value
- `return` can hand back multiple values
- Usually, when several functions cooperate to complete one feature, some of them will have return values
- You can of course also explicitly return None
## EP:
```
def main():
    print(min(min(5, 6), min(51, 6)))  # these calls use our min below, which shadows the built-in

def min(n1, n2):
    a = n1
    if n2 < a:
        a = n2
    return a  # without this return the function would hand back None

main()
```
## Parameter types and keyword arguments
- regular parameters
- multiple parameters
- default-value parameters
- variable-length parameters
## Regular parameters
## Multiple parameters
## Default-value parameters
## Keyword-only (forced-name) parameters
```
def U(str_):
    xiaoxie = 0  # lowercase count
    daxie = 0    # uppercase count
    shuzi = 0    # digit count
    for i in str_:
        ASCII = ord(i)
        if 97 <= ASCII <= 122:
            xiaoxie += 1
        elif 65 <= ASCII <= 90:   # the original left this condition as a blank to fill in
            daxie += 1
        elif 48 <= ASCII <= 57:   # the original left this condition as a blank to fill in
            shuzi += 1
    return xiaoxie, daxie, shuzi

U('HJi12')
```
## Variable-length parameters
- \*args
> - variable length: it collects however many positional arguments are passed, including none
- the collected arguments arrive as a tuple
- the name `args` is just a convention and can be changed
- \**kwargs
> - the collected arguments arrive as a dict
- the arguments must be passed as key=value pairs
- parameter order: named parameters, \*args, keyword-only parameters, \**kwargs
```
def TT(*args, **kwargs):
    print(kwargs)
    print(args)

TT(1, 2, 3, 4, 6, a=100, b=1000)
{'key': 'value'}
TT(1, 2, 4, 5, 7, 8, 9)

def B(name1, nam3):
    pass

B(name1=100, nam3=2)  # note: B(name1=100, 2) would be a syntax error (positional after keyword)

def sum_(*args, A='sum'):
    res = 0
    count = 0
    for i in args:
        res += i
        count += 1
    if A == "sum":
        return res
    elif A == "mean":
        mean = res / count
        return res, mean
    else:
        print(A, 'is not supported yet')

sum_(-1, 0, 1, 4, A='var')

'aHbK134'.__iter__
b = 'asdkjfh'
for i in b:
    print(i)

2, 5
2 + 22 + 222 + 2222 + 22222
```
## Variable scope
- local variables (local)
- global variables (global)
- the `globals()` function returns a dict of all global variables, including everything imported
- the `locals()` function returns a dict of all local variables at the current position
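A small sketch of `locals()` and `globals()` in action (the names `x`, `y` and `show_scopes` are invented for illustration):

```python
x = 10  # a global variable

def show_scopes():
    y = 5  # a local variable, visible only inside this function
    # membership tests against the two scope dictionaries
    return 'y' in locals(), 'x' in globals()

in_locals, in_globals = show_scopes()
print(in_locals, in_globals)
```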
```
a = 1000
b = 10

def Y():
    global a, b
    a += 100
    print(a)

Y()

def YY(a1):
    a1 += 100
    print(a1)

YY(a)
print(a)
```
## Note:
- `global`: you must declare a name as global before assigning to it inside a function
- Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
# Homework
- 1
```
def getPentagonalNumber(n):
    return n * (3 * n - 1) / 2

count = 0
for n in range(1, 101):
    if count < 9:
        print("%.0f, " % getPentagonalNumber(n), end="")
        count += 1
    else:
        print("%.0f" % getPentagonalNumber(n))
        count = 0
```
- 2
```
def sumDits(a):
    baiwei = a // 100      # hundreds digit
    shiwei = a // 10 % 10  # tens digit
    gewei = a % 10         # ones digit
    return baiwei + shiwei + gewei

print(sumDits(111))
```
- 3
```
def displaySortedNumbers(num1, num2, num3):
    x = [num1, num2, num3]
    for i in range(3):
        for j in range(2):
            if x[j] > x[j + 1]:
                t = x[j]
                x[j] = x[j + 1]
                x[j + 1] = t
    return x

num1, num2, num3 = eval(input("Enter three numbers: "))
x = displaySortedNumbers(num1, num2, num3)
print("The sorted numbers are", x[0], x[1], x[2])
```
- 4
```
sum = 0
def futureInvestmentValue(investmentAmount, monthlyInterestRate, years):
    global sum
    sum = investmentAmount * (1 + monthlyInterestRate * 0.01 / 12) ** years
    return sum

amount = int(input("The amount invested : "))
rate = int(input("Annual interest rate : "))
count = 0
print("Years    Future Value")
for i in range(1, 361):
    count += 1
    if count == 12:
        print("  %d       %.2f" % (i / 12, futureInvestmentValue(amount, rate, i)))
        count = 0
```
- 5
```
def printChars(ch1, ch2, numberPerLine):
    count = 0
    for i in range(10):
        print("%d " % i, end="")
        count += 1
        if count >= numberPerLine:
            print()
            count = 0
    for i in range(ord('a'), ord('z') + 1):
        print("%s " % chr(i), end="")
        count += 1
        if count >= numberPerLine:
            print()
            count = 0
    for i in range(ord('A'), ord('Z') + 1):
        print("%s " % chr(i), end="")
        count += 1
        if count >= numberPerLine:
            print()
            count = 0

printChars(1, 'Z', 10)
```
- 6
```
def numberOfDaysInAYear(year):
    days = 0
    if (year % 400 == 0) or (year % 4 == 0 and year % 100 != 0):
        days = 366
    else:
        days = 365
    return days

day1 = int(input(">>"))
day2 = int(input(">>"))
for i in range(day1, day2 + 1):
    days = numberOfDaysInAYear(i)
    print("Number of days in year %d: %d" % (i, days))
```
- 7
- 8
- 9
```
def time_():
    import time
    print(time.strftime("Current date and time is %b %d, %Y %H:%M:%S", time.localtime()))

time_()

import time
localtime = time.asctime(time.localtime(time.time()))
print("Local time is:", localtime)

2019 - 1970
```
- 10
```
import random
def dice(x, y):
    win = [7, 11]
    lose = [2, 3, 12]
    other = [4, 5, 6, 8, 9, 10]
    if x + y in lose:
        print("You lose")
    elif x + y in win:
        print("You win")
    elif x + y in other:
        print("point is %d" % (x + y))
        num1 = random.randint(1, 6)
        num2 = random.randint(1, 6)
        print("You rolled %d + %d = %d" % (num1, num2, num1 + num2))
        # keep rolling until we hit the point (win) or a 7 (lose);
        # the original `while num1+num2 != x+y or num1+num2 != 7` was always
        # true, so an explicit loop with breaks is clearer
        while True:
            if num1 + num2 == 7:
                print("You lose")
                break
            if num1 + num2 == x + y:
                print("You win")
                break
            num1 = random.randint(1, 6)
            num2 = random.randint(1, 6)
            print("You rolled %d + %d = %d" % (num1, num2, num1 + num2))

x = random.randint(1, 6)
y = random.randint(1, 6)
print("You rolled %d + %d = %d" % (x, y, x + y))
dice(x, y)
```
- 11
### Look up online how to send email with Python code
```
import smtplib
from email.mime.text import MIMEText

SMTPsever = "smtp.163.com"       # mail server
sender = "**********@163.com"    # sender address
password = "Whl3386087"          # password
receivers = ["********@qq.com"]

content = 'Happy Dragon Boat Festival!\nGreetings from the mailbox bomber'
title = 'Dragon Boat Festival greetings'       # mail subject
message = MIMEText(content, 'plain', 'utf-8')  # content, format, encoding
message['From'] = "{}".format(sender)
message['To'] = ",".join(receivers)
message['Subject'] = title

# mailsever = smtplib.SMTP(SMTPsever, 25)  # server and port
# mailsever.login(sender, password)        # log in
try:
    mailsever = smtplib.SMTP_SSL(SMTPsever, 465)  # send over SSL; the port is usually 465
    mailsever.login(sender, password)             # authenticate
    mailsever.sendmail(sender, receivers, message.as_string())  # send
    print("mail has been sent successfully.")
except smtplib.SMTPException as e:
    print(e)

mailsever.quit()
print("OK")
```
| github_jupyter |
# Reading and writing files, JSON
## Contents:
* File Input/Output
* Reading and writing JSON
## File Input/Output
A huge portion of our input data will come from files that we have stored on our computer (on the file system). A lot of analysis of these files is done in memory in Python, when working with them. We have to save them back to the file system to store the results. So, mastering the art of reading and writing is crucial in programming.
Until now, we have run stuff (almost instantly) in our Jupyter Notebooks, but imagine that we write code that takes a couple of hours to run on a large collection of files. Then we want to save the result, either for further analysis or to make these files available (i.e. sharing) in your research.
The following code opens a file in our filesystem, prints the first 10 lines and closes the file. Please note that this file must exist on your computer. If you only have downloaded this notebook, go back to the repository, download the file, and place it in the appropriate path (or change the path below). This path corresponds to the folder structure on your file system.
> **Please note:** The code below shows you how the `open()` function works. It's better to use a `with` block (see below), which does this opening and closing for you.
```
infile = open('data/adams-hhgttg.txt', 'r', encoding='utf-8')

for i, banana in enumerate(infile):
    if i == 10:
        break
    print(banana)

infile.close()
```
The key passage here is the one in which the `open()` function opens a file and returns a **file object** (hint: try printing the type of `infile`). It is commonly used with the following three parameters: the **name of the file** that we want to open, the **mode** and the **encoding**.
- **filename**: the name of the file to open, this corresponds to the full/relative path to the file from the notebook.
- the **mode** in which we want to open a file: the most commonly used values are `r` for **reading** (the default, which means that you don't have to specify it explicitly), `w` for **writing** (overwriting existing files), and `a` for **appending**. (Note that [the documentation](https://docs.python.org/3/library/functions.html#open) reports mode values that may be necessary in some exceptional cases)
- **encoding**: which mapping of string to code points (conversion to bytes) to use, more on this later.
>**IMPORTANT**: every opened file should be **closed** by calling its `close()` method before the end of the program, or the file could remain unavailable for subsequent operations or for other programs.
There are other ways to read a text file, among which the methods `read()` and `readlines()`, which would simplify the code above to:
```python
infile = open('data/adams-hhgttg.txt', 'r', encoding='utf-8')
text = infile.readlines()
print(text[:10])
infile.close()
```
However, these methods **read the whole file at once**, thus creating capacity/efficiency problems when working with big corpora.
In the solution we adopt here the input file is read line by line, so that at any given moment **only one line of text** is loaded into memory.
You can see all file object methods, including examples, on this W3schools page: https://www.w3schools.com/python/python_ref_file.asp
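As a small illustration of the line-by-line approach (a self-contained sketch: it creates a throwaway file via `tempfile` rather than using one of the notebook's data files):

```python
import os
import tempfile

# Create a small throwaway file to read back.
handle, tmp_path = tempfile.mkstemp(suffix='.txt')
with os.fdopen(handle, 'w', encoding='utf-8') as f:
    f.write("first\nsecond\nthird\n")

# Count the lines while holding only one line in memory at a time.
n_lines = 0
with open(tmp_path, encoding='utf-8') as infile:
    for line in infile:
        n_lines += 1

print(n_lines)  # 3
os.remove(tmp_path)
```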
### The with statement
A `with` statement is used to wrap the execution of a block of code.
Using this construction to open files has three major advantages:
- there is no need to explicitly close the file (the file is automatically closed as soon as the nested code exits)
- the file is closed automatically even when unhandled errors cause the program to crash
- the code is way clearer (it is trivial to identify where in the code a file is opened)
Thus, you can make things a bit easier for yourself. Forget about the explicit `close()` method. The code above can be rewritten as follows:
```
with open('data/adams-hhgttg.txt', encoding='utf-8') as infile:  # The file is opened
    lines = infile.readlines()

# As soon as we exit the indented scope, the file is closed again
# (and made available to other programs on your computer)
print(lines[:10])
```
The code in the indented `with` block is executed while the file is open. The file is automatically closed when the block is exited.
### Quiz
Hint: you can call `.read()` on the file object.
* Write one function that takes a file path as argument and prints statistics about the file, giving:
* The number of words (often called 'tokens')
* The number of unique words (often called 'types')
* The type:token ratio (i.e. unique words / words)
* The 10 most frequent words, including their frequencies
* Write a normalization or cleaning function that takes a string as argument, that pre-processes this text and returns a normalized version, by removing/substituting:
* Uppercase characters
* Punctuation
* Call the normalization function inside the first function
Test the function on the filepath in `file_path` below. Compare the results from running the function with and without normalization.
```
import string
from collections import Counter
# Your code here
def get_file_statistics(file_path, normalization=False):
    with open(file_path, 'r', encoding='utf-8') as infile:
        text = infile.read()

    if normalization:
        text = normalize(text)

    words = text.split()
    n_words = len(words)
    n_unique = len(set(words))

    print("Number of words:", n_words)
    print("Number of unique words:", n_unique)
    print("TTR:", n_unique / n_words)

    counter = Counter(words)
    most_common_words = counter.most_common(10)

    print("Frequencies:")
    for word, frequency in most_common_words:
        print("\t", word, "(" + str(frequency) + ")")
def normalize(text):
    normalized_text = text.lower()
    for char in string.punctuation:
        normalized_text = normalized_text.replace(char, '')
    return normalized_text
file_path = 'data/adams-hhgttg.txt'
# your_function_name(file_path)
get_file_statistics(file_path, normalization=False)
print()
get_file_statistics(file_path, normalization=True)
```
---
## Writing files
Writing an output file in Python has a structure close to the one we used in our reading examples above. The main differences are:
- the specification of the **mode** `w`
- the use of the function `write()` for each line of text
> **Warning!** Opening an _existing_ file in `w` mode will erase its contents!
```
# The folder you wish to write the file to ('stuff' below) has to exist on the file system
with open('stuff/output-test-1.txt', 'w', encoding='utf-8') as outfile:
    outfile.write("My name is:")
    outfile.write("John")
```
When writing line by line, it's up to you to take care of the **newlines** by appending `\n` to each line. Unlike the `print()` function, the `write()` function has no standard line-end character.
```
with open('stuff/output-test-2.txt', 'w', encoding='utf-8') as outfile:
    outfile.write("My name is:\n")
    outfile.write("Alexander")
    outfile.write("ééèèüAæøå")
```
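If you prefer not to manage the `\n` characters yourself, one alternative (an aside, not shown in the notebook) is to pass the file object to `print()`, which appends a newline for you. A self-contained sketch using a temporary file:

```python
import os
import tempfile

handle, tmp_path = tempfile.mkstemp(suffix='.txt')
with os.fdopen(handle, 'w', encoding='utf-8') as outfile:
    print("My name is:", file=outfile)  # print() adds the '\n' itself
    print("Alexander", file=outfile)

with open(tmp_path, encoding='utf-8') as infile:
    contents = infile.read()

print(repr(contents))  # 'My name is:\nAlexander\n'
os.remove(tmp_path)
```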
We can inspect the file we just created with the command line. The following is not Python, but a basic command line tool to print the contents of a file. At least on Mac and Linux, this works. Otherwise, just navigate to the file in your file explorer and open it.
> Prepending a `!` to a command executes a program on your computer. Use it with care and don't run such a cell in a notebook that you do not trust!
```
!cat stuff/output-test-2.txt
```
### Quiz
Instead of printing the statistics in the previous quiz, write them to a file. For instance, use the file path in `file_path` to write the file to. Copy your function from above, rename it and add the required code to it.
```
# Your code here
file_path = 'stuff/adams-hhgttg-statistics.txt'
# your_adapted_function_that_writes_statistics(file_path)
# Your code here
def get_file_statistics(file_path, target_file, normalization=False):
    with open(file_path, 'r', encoding='utf-8') as infile:
        text = infile.read()

    if normalization:
        text = normalize(text)

    words = text.split()
    n_words = len(words)
    n_unique = len(set(words))

    counter = Counter(words)
    most_common_words = counter.most_common(10)

    with open(target_file, 'w', encoding='utf-8') as outfile:
        outfile.write("Number of words:" + str(n_words))
        outfile.write('\n')
        outfile.write("Number of unique words:" + str(n_unique))
        outfile.write('\n')
        outfile.write("TTR:" + str(n_unique / n_words))
        outfile.write('\n')
        outfile.write("Frequencies:")
        for word, frequency in most_common_words:
            outfile.write("\t" + word + "(" + str(frequency) + ")")
            outfile.write('\n')

def normalize(text):
    normalized_text = text.lower()
    for char in string.punctuation:
        normalized_text = normalized_text.replace(char, '')
    return normalized_text
get_file_statistics('data/adams-hhgttg.txt', target_file=file_path)
```
Let's quickly check its contents:
```
!cat stuff/adams-hhgttg-statistics.txt
```
---
## Reading files from a folder
```
import os
# Write a function that reads through the folders and files in a directory.
# Read through the data directory and all its contents.
def read_through_folder(path):
    """
    Read from all files in a given folder.

    Args:
        path (str): Path to a folder

    Returns:
        dict: dictionary with filenames as keys and their contents as value
    """
    files = os.listdir(path)

    data = dict()
    for n, file in enumerate(files, 1):
        filepath = os.path.join(path, file)
        content = read_from_file(filepath)
        print(n, file)
        data[file] = content[:100]

    return data

def read_from_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as infile:
        text = infile.read()
    return text

path = 'data/gutenberg-extension'
data = read_through_folder(path)
data
# Write a function that reads through the folders and files in a directory.
# Read through the data directory and all its contents.
def read_through_folder(path):
    """
    Read from all files in a given folder.

    Args:
        path (str): Path to a folder

    Returns:
        dict: dictionary with filenames as keys and their contents as value
    """
    data = dict()
    for root, dirs, files in os.walk(path):
        # print(dirs)
        # print()

        # Read from the folders here
        for folder in dirs:
            folderpath = os.path.join(root, folder)
            files = os.listdir(folderpath)
            # print(files)
            # print()

            data[folder] = dict()

            # Then every file in that folder
            for n, file in enumerate(files, 1):
                filepath = os.path.join(folderpath, file)
                if os.path.isdir(filepath):  # Some files can be folders
                    continue

                # Read its contents
                content = read_from_file(filepath)
                # print(n, file)
                # print(type(data))
                # print(data)

                data[folder][file] = content[:10]
                break  # Only the first file of each folder, for demonstration

    return data

def read_from_file(filepath):
    """Give back the text from a file"""
    with open(filepath, 'r', encoding='utf-8') as infile:
        text = infile.read()
    return text

path = 'data'
data = read_through_folder(path)

for folder, value in data.items():
    for file, content in value.items():
        print(content)

print(data)
```
## Looping through folders and files
If you want to load in multiple files in a folder, without explicitly providing the file pointers/paths for each file, you can also point to a folder. We can use the built-in `os` module to loop through a folder and load multiple files in memory.
```
import os # You only have to do this once in your code.
# Always put this at the top of your file.
list(os.walk("data/gutenberg-extension"))
gutenberg_books = dict()  # Create an empty dictionary to store our data in

for root, dirs, files in os.walk("data/gutenberg-extension"):
    for file in files:
        if not file.endswith('.txt'):  # Why this?
            continue

        # You have to specify the full (relative) path, not only the file name.
        file_path = os.path.join(root, file)

        with open(file_path, encoding='utf-8') as infile:
            gutenberg_books[file] = infile.read()

gutenberg_books.keys()
```
The `os.walk()` method is convenient if you are dealing with a combination of files and folders, no matter how deep the hierarchy goes (folders in folders etc.). A simpler function is `os.listdir()`.
```
os.listdir('data/gutenberg-extension/')
gutenberg_books = dict()  # Create an empty dictionary to store our data in

folder_path = "data/gutenberg-extension"
for file in os.listdir(folder_path):
    if not file.endswith('.txt'):  # Why this?
        continue

    file_path = os.path.join(folder_path, file)

    with open(file_path, encoding='utf-8') as infile:
        gutenberg_books[file] = infile.read()

gutenberg_books.keys()
```
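A third option worth knowing about (not used in this notebook) is the built-in `pathlib` module, whose `Path.glob()` combines listing and pattern filtering. The folder and files below are created on the fly so the sketch is self-contained:

```python
import tempfile
from pathlib import Path

folder = Path(tempfile.mkdtemp())
(folder / 'book1.txt').write_text('Once upon a time', encoding='utf-8')
(folder / 'notes.md').write_text('not a book', encoding='utf-8')

# glob('*.txt') yields only the .txt files, as Path objects.
books = {p.name: p.read_text(encoding='utf-8') for p in folder.glob('*.txt')}
print(sorted(books))  # ['book1.txt']
```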
The dictionary object now contains a lot of information: all the contents of all files. There's a chance that your browser/notebook will crash when calling the dictionary here. Instead, let's call a part of one of the books, the first 300 characters:
```
print(gutenberg_books['doyle-sherlock.txt'][:300])
```
---
# Reading and writing data in JSON and CSV
We now know how we can read and write textual content to files on our file system. Two more structured and common data formats to store data in are JSON and CSV. If you are not familiar with these, take a look at:
* JSON (https://www.w3schools.com/whatis/whatis_json.asp)
* CSV (https://www.howtogeek.com/348960/what-is-a-csv-file-and-how-do-i-open-it/)
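Since CSV is not otherwise demonstrated below, here is a minimal sketch of the built-in `csv` module. The data is made up for illustration, and `io.StringIO` stands in for a real file:

```python
import csv
import io

raw = "title,year\nMostly Harmless,1992\nDirk Gently,1987\n"

reader = csv.DictReader(io.StringIO(raw))  # the first row becomes the keys
rows = list(reader)

print(rows[0]['title'])  # Mostly Harmless
print(rows[1]['year'])   # 1987
```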
## JSON
The syntax of JSON is very similar to the syntax of `int`, `str`, `list` and `dict` data types in Python.
The following data (excerpt) is taken from the data that feeds the Instagram page of the UvA (https://www.instagram.com/uva_amsterdam/). The API/service of Instagram returns web data in JSON that is used by your browser to show you a page with content. You can also find this when inspecting the source of the page.
A JSON file (named `example.json`) that looks like this:
```json
{
"biography": "Welcome to the UvA \u274c\u274c\u274c \nFind out more about our:\n\ud83c\udfdb campuses \ud83c\udf93 education \ud83d\udd0e research\nShare your \ud83d\udcf8 using: #uva_amsterdam\nQuestions? Contact us:",
"blocked_by_viewer": false,
"restricted_by_viewer": null,
"country_block": false,
"external_url": "https://linkin.bio/uva_amsterdam",
"external_url_linkshimmed": "https://l.instagram.com/?u=https%3A%2F%2Flinkin.bio%2Fuva_amsterdam\u0026e=ATOBo7L11uPBpsMfd6-pFnoBRaF3T-6ovlD9Blc2q1LGUjnmyuGutPfuK-ib70Bt_YmGu6cDNCX1Y1lC\u0026s=1",
"edge_followed_by": {
"count": 42241
},
"fbid": "17841401222133463",
"followed_by_viewer": false,
"edge_follow": {
"count": 362
},
"follows_viewer": false,
"full_name": "UvA: University of Amsterdam",
"id": "1501672737",
"is_business_account": true,
"is_joined_recently": false,
"business_category_name": "Professional Services",
"overall_category_name": null,
"category_enum": "UNIVERSITY",
"category_name": null,
"profile_pic_url": "https://scontent-amt2-1.cdninstagram.com/v/t51.2885-19/s150x150/117066908_1128864954173821_2797787766361156925_n.jpg?_nc_ht=scontent-amt2-1.cdninstagram.com\u0026_nc_ohc=PXsEzg-CKaUAX8dEtNL\u0026tp=1\u0026oh=86bb46d8006b77db2037955187e69de1\u0026oe=6056619F",
"username": "uva_amsterdam",
"connected_fb_page": null
}
```
Can be loaded into Python as a dictionary:
```python
{
'biography': 'Welcome to the UvA ❌❌❌ \nFind out more about our:\n🏛 campuses 🎓 education 🔎 research\nShare your 📸 using: #uva_amsterdam\nQuestions? Contact us:',
'blocked_by_viewer': False,
'restricted_by_viewer': None,
'country_block': False,
'external_url': 'https://linkin.bio/uva_amsterdam',
'external_url_linkshimmed': 'https://l.instagram.com/?u=https%3A%2F%2Flinkin.bio%2Fuva_amsterdam&e=ATOBo7L11uPBpsMfd6-pFnoBRaF3T-6ovlD9Blc2q1LGUjnmyuGutPfuK-ib70Bt_YmGu6cDNCX1Y1lC&s=1',
'edge_followed_by': {'count': 42241},
'fbid': '17841401222133463',
'followed_by_viewer': False,
'edge_follow': {'count': 362},
'follows_viewer': False,
'full_name': 'UvA: University of Amsterdam',
'id': '1501672737',
'is_business_account': True,
'is_joined_recently': False,
'business_category_name': 'Professional Services',
'overall_category_name': None,
'category_enum': 'UNIVERSITY',
'category_name': None,
'profile_pic_url': 'https://scontent-amt2-1.cdninstagram.com/v/t51.2885-19/s150x150/117066908_1128864954173821_2797787766361156925_n.jpg?_nc_ht=scontent-amt2-1.cdninstagram.com&_nc_ohc=PXsEzg-CKaUAX8dEtNL&tp=1&oh=86bb46d8006b77db2037955187e69de1&oe=6056619F',
'username': 'uva_amsterdam',
'connected_fb_page': None
}
```
The main differences between dictionaries in Python and the JSON file notation are:
* Python dictionaries exist in memory in Python, they are an abstract datatype. JSON is a data format and can be saved on your computer, or be transmitted as string (e.g. for a website request, sending data).
* Keys in JSON can only be of type string. This means that writing a Python dictionary with integers as keys will transform them into strings. Reading the file back will therefore give you a Python dictionary with strings as keys.
* Non-ASCII characters can be written as escape sequences (e.g. `\u274c` for ❌). This also applies to letters with diacritics (e.g. é, ê, ç, ñ). If all characters are escaped this way, you don't have to specify an encoding when opening JSON files.
* `True` and `False` are lowercased: `true` and `false`. `None` is `null`.
* JSON only allows double quotes for its "strings".
The built-in json module of Python needs to be imported first, to work with json files and notation.
```
import json
```
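A quick round trip through `json.dumps()` and `json.loads()` illustrates several of the differences listed above (integer keys becoming strings, `True`/`None` becoming `true`/`null`):

```python
import json  # already imported above; repeated here so this sketch is self-contained

d = {1: True, 'name': None}

s = json.dumps(d)
print(s)  # {"1": true, "name": null}

back = json.loads(s)
print(back)  # {'1': True, 'name': None} -- note that the key 1 is now the string '1'
```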
Let's read a json file from our disk using `json.load()`. The file comes from the public API of the municipality of Amsterdam to look up information on houses by searching on street name and house number. See: https://api.data.amsterdam.nl/atlas/search/adres/. Most often, information from such API's or 'REST-services' is given back in JSON.
```
with open('data/bg1.json') as jsonfile:
    data = json.load(jsonfile)
```
Then, we can inspect the loaded data as a Python dictionary:
```
print(type(data))
data
```
When we are only interested in the information on the building, we can take out that part to store it separately. This is the first dictionary element in the list that can be found under key `data['results']`. The rest of the information is feedback from the API, telling us that there is 1 hit.
```
data_selection = data['results'][0]

# Delete all keys starting with an _underscore
for k in list(data_selection):
    if k.startswith('_'):
        del data_selection[k]

data_selection
# print(type(data_selection))
```
Then, save it back to a json file using `json.dump()`:
```
with open('stuff/bg1-selection.json', 'w') as outfile:
    json.dump(data_selection, outfile, indent=4)
```
### Quiz
* Modify that function you previously built to generate statistics for a file once more so that it returns a python dictionary with these statistics.
* Write a function that uses the `os.walk()` or `os.listdir()` method to run the file statistics function over every file in a folder. Create a dictionary that takes the file name as key, and the returned statistics dictionary as value.
* Also add arguments for a `target_file_path`, and a `data` dictionary to that function. Use the `json.dump()` method to write the dictionary to the provided file path using a with statement.
* Inspect the file by opening it on your computer with a text editor of some sort. Find a way to make it 'pretty printed' (e.g. with _indents_).
```
# Your code here
source_folder = "data/gutenberg-extension"
target_file_path = "stuff/gutenberg-statistics.json"
def your_modified_statistics_function(file_path):
    # Your code
    return statistics_dict

def your_functions_here():
    return
# Your code here
def get_file_statistics(file_path, normalization=False):
    with open(file_path, 'r', encoding='utf-8') as infile:
        text = infile.read()

    if normalization:
        text = normalize(text)

    words = text.split()
    n_words = len(words)
    n_unique = len(set(words))

    counter = Counter(words)
    most_common_words = counter.most_common(10)

    statistics = dict()
    statistics['n_words'] = n_words
    statistics['n_unique'] = n_unique
    statistics['TTR'] = n_unique / n_words
    statistics['MFW'] = [word for word, frequency in most_common_words]

    return statistics

def normalize(text):
    normalized_text = text.lower()
    for char in string.punctuation:
        normalized_text = normalized_text.replace(char, '')
    return normalized_text

def get_statistics_for_folder(folder, target_file):
    """Compute statistics for every file in a folder and write them to a JSON file."""
    statistics_files = dict()
    for f in os.listdir(folder):
        filepath = os.path.join(folder, f)
        stats_dict = get_file_statistics(filepath)
        statistics_files[f] = stats_dict

    with open(target_file, 'w') as jsonfile:
        json.dump(statistics_files, jsonfile, indent=4)

# get_file_statistics('data/adams-hhgttg.txt')
get_statistics_for_folder('data/gutenberg-extension/', 'stuff/gutenberg_statistics.json')
```
---
# Exercises
### Exercise 1 (previously Exercise 6 in Notebook 2)
Read the file `data/adams-hhgttg.txt` and:
- Count the number of lines in the file
- Count the number of non-empty lines
- Read each line of the input file, remove its newline character and write it to file `stuff/adams-output.txt`
- Compute the average number of alphanumeric characters per line
- Identify all the unique words used in the text (no duplicates!) and write them in a text file called `stuff/lexicon.txt` (one word per line)
```
# your code here
with open("stuff/lexicon.txt", "w") as outfile:
    outfile.write("something")
```
### Exercise 2
TBD
# Optimizing mining operations
This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on Cloud with IBM ILOG CPLEX Optimizer.
When you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.
>This notebook is part of [Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html).
>It requires a valid subscription to **Decision Optimization on Cloud** or a **local installation of CPLEX Optimizers**.
Discover us [here](https://developer.ibm.com/docloud).
Table of contents:
- [Describe the business problem](#Describe-the-business-problem)
* [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help)
* [Use decision optimization](#Use-decision-optimization)
* [Step 1: Download the library](#Step-1:-Download-the-library)
* [Step 2: Set up the engines](#Step-2:-Set-up-the-prescriptive-engine)
- [Step 3: Model the data](#Step-3:-Model-the-data)
* [Step 4: Prepare the data](#Step-4:-Prepare-the-data)
- [Step 5: Set up the prescriptive model](#Step-5:-Set-up-the-prescriptive-model)
* [Define the decision variables](#Define-the-decision-variables)
* [Express the business constraints](#Express-the-business-constraints)
* [Express the objective](#Express-the-objective)
* [Solve with the Decision Optimization solve service](#Solve-with-the-Decision-Optimization-solve-service)
* [Step 6: Investigate the solution and run an example analysis](#Step-6:-Investigate-the-solution-and-then-run-an-example-analysis)
* [Summary](#Summary)
****
## Describe the business problem
This mining operations optimization problem is an implementation of Problem 7 from "Model Building in Mathematical Programming" by H.P. Williams.
The operational decisions that need to be made are which mines should be operated each year and
how much each mine should produce.
### Business constraints
* A mine that is closed cannot be worked.
* Once closed, a mine stays closed until the end of the horizon.
* Each year, a maximum number of mines can be worked.
* For each mine and year, the quantity extracted is limited by the mine's maximum extracted quantity.
* The average blend quality must be greater than or equal to the requirement of the year.
### Objective and KPIs
#### Total actualized revenue
Each year, the total revenue is equal to the total quantity extracted multiplied by the blend price. The time series of revenues is aggregated in one expected revenue by applying the discount rate; in other terms, a revenue of \$1000 next year is counted as \$900 actualized, \$810 if the revenue is expected in two years, etc.
#### Total expected royalties
A mine that stays open must pay royalties (see the column **royalties** in the DataFrame). Again, royalties from different years are actualized using the discount rate.
#### Business objective
The business objective is to maximize the net actualized profit, that is the difference between the total actualized revenue and total actualized royalties.
## How decision optimization can help
* Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
* Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
* Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
* Automate the complex decisions and trade-offs to better manage your limited resources.
* Take advantage of a future opportunity or mitigate a future risk.
* Proactively update recommendations based on changing events.
* Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
## Checking minimum requirements
This notebook uses some features of pandas that are available in version 0.17.1 or above.
```
import pip

REQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'
try:
    import pandas as pd
    assert pd.__version__ >= REQUIRED_MINIMUM_PANDAS_VERSION
except:
    raise Exception("Version %s or above of Pandas is required to run this notebook" % REQUIRED_MINIMUM_PANDAS_VERSION)
```
## Use decision optimization
### Step 1: Download the library
Run the following code to install the Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
```
import sys
try:
    import docplex.mp
except:
    if hasattr(sys, 'real_prefix'):
        # we are in a virtual env.
        !pip install docplex
    else:
        !pip install --user docplex
```
<i>Note that the more global package docplex contains another subpackage docplex.cp that is dedicated to Constraint Programming, another branch of optimization.</i>
### Step 2: Set up the prescriptive engine
* Subscribe to our private cloud offer or Decision Optimization on Cloud solve service [here](https://developer.ibm.com/docloud) if you do not want to use a local solver.
* Get the service URL and your personal API key, and enter your credentials here if applicable:
```
url = None
key = None
```
### Step 3: Model the data
#### Mining Data
The mine data is provided as a *pandas* DataFrame. For each mine, we are given the amount of royalty to pay when operating the mine, its ore quality, and the maximum quantity that we can extract from the mine.
```
# If needed, install the module pandas prior to executing this cell
import pandas as pd
from pandas import DataFrame, Series
df_mines = DataFrame({"royalties":   [5,   4,   4,   5],
                      "ore_quality": [1.0, 0.7, 1.5, 0.5],
                      "max_extract": [2,   2.5, 1.3, 3]})
nb_mines = len(df_mines)
df_mines.index.name='range_mines'
df_mines
```
#### Blend quality data
Each year, the average blend quality of all ore extracted from the mines
must be greater than a minimum quality. This data is provided as a *pandas* Series, the length of which is the plan horizon in years.
```
blend_qualities = Series([0.9, 0.8, 1.2, 0.6, 1.0])
nb_years = len(blend_qualities)
print("* Planning mining operations for: {} years".format(nb_years))
blend_qualities.describe()
```
#### Additional (global) data
We need extra global data to run our planning model:
* a blend price (supposedly flat),
* a maximum number of worked mines for any given year (typically 3), and
* a discount rate to compute the actualized revenue over the horizon.
```
# global data
blend_price = 10
max_worked_mines = 3 # work no more than 3 mines each year
discount_rate = 0.10 # 10% interest rate each year
```
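A quick sanity check of what the discount rate means (plain Python, matching the $1000 → $900 → $810 example from the problem description above):

```python
discount_rate = 0.10  # same value as the global above

# Revenue expected in year y is worth revenue * (1 - discount_rate) ** y today.
actualized = [round(1000 * (1 - discount_rate) ** y, 1) for y in range(3)]
print(actualized)  # [1000.0, 900.0, 810.0]
```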
### Step 4: Prepare the data
The data is clean and does not need any cleansing.
### Step 5: Set up the prescriptive model
```
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
```
#### Create DOcplex model
The model contains all the business constraints and defines the objective.
```
from docplex.mp.model import Model
mm = Model("mining_pandas")
```
What are the decisions we need to make?
* What mines do we work each year? (a yes/no decision)
* What mine do we keep open each year? (again a yes/no decision)
* What quantity is extracted from each mine, each year? (a positive number)
We need to define some decision variables and add constraints to our model related to these decisions.
#### Define the decision variables
```
# auxiliary data: ranges
range_mines = range(nb_mines)
range_years = range(nb_years)
# binary decisions: work the mine or not
work_vars = mm.binary_var_matrix(keys1=range_mines, keys2=range_years, name='work')
# open the mine or not
open_vars = mm.binary_var_matrix(range_mines, range_years, name='open')
# quantity to extract
ore_vars = mm.continuous_var_matrix(range_mines, range_years, name='ore')
mm.print_information()
```
#### Express the business constraints
##### Constraint 1: Only open mines can be worked.
In order to take advantage of the *pandas* operations to create the optimization model, decision variables are organized in a DataFrame which is automatically indexed by *'range_mines'* and *'range_years'* (that is, the same keys as the dictionary created by the *binary_var_matrix()* method).
```
# Organize all decision variables in a DataFrame indexed by 'range_mines' and 'range_years'
df_decision_vars = DataFrame({'work': work_vars, 'open': open_vars, 'ore': ore_vars})
# Set index names
df_decision_vars.index.names=['range_mines', 'range_years']
# Display rows of 'df_decision_vars' DataFrame for first mine
df_decision_vars[:nb_years]
```
Now, let's iterate over rows of the DataFrame *"df_decision_vars"* and enforce the desired constraints.
The *pandas* method *itertuples()* returns a named tuple for each row of a DataFrame. This method is efficient and convenient for iterating over all rows.
```
mm.add_constraints(t.work <= t.open for t in df_decision_vars.itertuples())
mm.print_information()
```
##### Constraint 2: Once closed, a mine stays closed.
These constraints are a little more complex: we state that the series of *open_vars[m,y]* for a given mine *_m_* is decreasing. In other terms, once some *open_vars[m,y]* is zero, all subsequent values for future years are also zero.
Let's use the *pandas* *groupby* operation to collect all *"open"* decision variables for each mine in separate *pandas* Series.<br>
Then, we iterate over the mines and invoke the *aggregate()* method, passing the *postOpenCloseConstraint()* function as the argument.<br>
The *pandas* *aggregate()* method invokes *postOpenCloseConstraint()* for each mine, passing the associated Series of *"open"* decision variables as argument.
The *postOpenCloseConstraint()* function posts a set of constraints on the sequence of *"open"* decision variables to enforce that a mine cannot re-open.
```
# Once closed, a mine stays closed
def postOpenCloseConstraint(open_vars):
    mm.add_constraints(open_next <= open_curr
                       for (open_next, open_curr) in zip(open_vars[1:], open_vars))
    # Optionally: return a string to display information regarding the aggregate operation in the Output cell
    return "posted {0} open/close constraints".format(len(open_vars) - 1)
# Constraints on sequences of decision variables are posted for each mine,
# using pandas' "groupby" operation.
df_decision_vars.open.groupby(level='range_mines').aggregate(postOpenCloseConstraint)
```
##### Constraint 3: The number of worked mines each year is limited.
This time, we use the *pandas* *groupby* operation to collect all *"work"* decision variables for each **year** in separate *pandas* Series. Each Series contains the *"work"* decision variables for all mines.
Then, the maximum number of worked mines constraint is enforced by making sure that the sum of all the terms of each Series is smaller or equal to the maximum number of worked mines.<br>
The *aggregate()* method is used to post this constraint for each *year*.
```
# Maximum number of worked mines each year
# Note that Model.sum() accepts a pandas Series of variables.
df_decision_vars.work.groupby(level='range_years').aggregate(
    lambda works: mm.add_constraint(mm.sum(works) <= max_worked_mines))
```
##### Constraint 4: The quantity extracted is limited.
This constraint expresses two things:
* Only a worked mine can give ore. (Note that there is no minimum on the quantity extracted, this model is very simplified).
* The quantity extracted is less than the mine's maximum extracted quantity.
To illustrate the *pandas* *join* operation, let's build a DataFrame that joins the *"df_decision_vars"* DataFrame and the *"df_mines.max_extract"* Series such that each row contains the information to enforce the quantity extracted limit constraint.<br>
The default behaviour of the *pandas* *join* operation is to look at the index of *left* DataFrame and to append columns of the *right* Series or DataFrame which have same index.<br>
Here is the result of this operation in our case:
```
# Display rows of 'df_decision_vars' joined with 'df_mines.max_extract' Series for first two mines
df_decision_vars.join(df_mines.max_extract)[:(nb_years * 2)]
```
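The index alignment performed by *join* can be seen on a tiny standalone example. The data below is hypothetical and only mimics the shape of *df_decision_vars* and *df_mines.max_extract*:

```python
import pandas as pd

# Hypothetical stand-ins for df_decision_vars and df_mines.max_extract
df = pd.DataFrame({"work": [1, 0, 1]},
                  index=pd.Index([0, 1, 2], name="range_mines"))
s = pd.Series([2.0, 2.5, 1.3], name="max_extract",
              index=pd.Index([0, 1, 2], name="range_mines"))

# join() aligns rows on the shared 'range_mines' index (not on position)
# and appends the Series as a new column
joined = df.join(s)
print(list(joined.columns))  # ['work', 'max_extract']
```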
Now, the constraint to limit quantity extracted is easily created by iterating over all rows of the joined DataFrames:
```
# quantity extracted is limited
mm.add_constraints(t.ore <= t.max_extract * t.work
for t in df_decision_vars.join(df_mines.max_extract).itertuples())
mm.print_information()
```
##### Blend constraints
We need to compute the total production of each year, stored in auxiliary variables.
Again, we use the *pandas* *groupby* operation, this time to collect all *"ore"* decision variables for each **year** in separate *pandas* Series.<br>
The *"blend"* variable for a given year is the sum of *"ore"* decision variables for the corresponding Series.
```
# blend variables
blend_vars = mm.continuous_var_list(nb_years, name='blend')
# define blend variables as sum of extracted quantities
mm.add_constraints(mm.sum(ores.values) == blend_vars[year]
for year, ores in df_decision_vars.ore.groupby(level='range_years'))
mm.print_information()
```
##### Minimum average blend quality constraint
The average quality of the blend is the weighted sum of extracted quantities, divided by the total extracted quantity. Because we cannot use division here, we transform the inequality:
```
# Quality requirement on blended ore
mm.add_constraints(mm.sum(ores.values * df_mines.ore_quality) >= blend_qualities[year] * blend_vars[year]
for year, ores in df_decision_vars.ore.groupby(level='range_years'))
mm.print_information()
```
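In symbols, the transformation above reads as follows (writing $q_m$ for the ore quality of mine $m$, $o_{m,y}$ for the quantity extracted, and $Q_y$ for the required blend quality — symbols chosen here for illustration): multiplying both sides by the total blend removes the division, which is not allowed in a linear model.

$$\frac{\sum_m q_m\, o_{m,y}}{\sum_m o_{m,y}} \ge Q_y
\quad\Longrightarrow\quad
\sum_m q_m\, o_{m,y} \ge Q_y \cdot \mathrm{blend}_y,
\qquad \mathrm{blend}_y = \sum_m o_{m,y}$$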
#### KPIs and objective
Since both revenues and royalties are actualized using the same rate, we compute an auxiliary discount rate array.
##### The discount rate array
```
actualization = 1.0 - discount_rate
assert actualization > 0
assert actualization <= 1
#
s_discounts = Series((actualization ** y for y in range_years), index=range_years, name='discounts')
s_discounts.index.name='range_years'
# e.g. [1, 0.9, 0.81, ... 0.9**y...]
print(s_discounts)
```
##### Total actualized revenue
Total expected revenue is the sum of actualized yearly revenues, computed as total extracted quantities multiplied by the blend price (assumed to be constant over the years in this simplified model).
```
expected_revenue = blend_price * mm.dot(blend_vars, s_discounts)
mm.add_kpi(expected_revenue, "Total Actualized Revenue");
```
##### Total actualized royalty cost
The total actualized royalty cost is computed for all open mines, also actualized using the discounts array.
This time, we use the *pandas* *join* operation twice to build a DataFrame that joins the *"df_decision_vars"* DataFrame with the *"df_mines.royalties"* and *"s_discounts"* Series such that each row contains the relevant information to calculate its contribution to the total actualized royalty cost.<br>
The join with the *"df_mines.royalties"* Series is performed by looking at the common *"range_mines"* index, while the join with the *"s_discounts"* Series is performed by looking at the common *"range_years"* index.
```
df_royalties_data = df_decision_vars.join(df_mines.royalties).join(s_discounts)
# add a new column to compute discounted royalties using pandas multiplication on columns
df_royalties_data['disc_royalties'] = df_royalties_data['royalties'] * df_royalties_data['discounts']
df_royalties_data[:nb_years]
```
The total royalty is now calculated by multiplying the columns *"open"*, *"royalties"* and *"discounts"*, and summing over all rows.<br>
Using *pandas* constructs, this can be written in a very compact way as follows:
```
total_royalties = mm.dot(df_royalties_data.open, df_royalties_data.disc_royalties)
mm.add_kpi(total_royalties, "Total Actualized Royalties");
```
#### Express the objective
The business objective is to maximize the expected net profit, which is the difference between revenue and royalties.
```
mm.maximize(expected_revenue - total_royalties)
```
#### Solve with the Decision Optimization solve service
Solve the model on the cloud.
```
mm.print_information()
# turn this flag on to see the solve log
print_cplex_log = False
# start the solve
s1 = mm.solve(url=url, key=key, log_output=print_cplex_log)
assert s1, "!!! Solve of the model fails"
mm.report()
```
### Step 6: Investigate the solution and then run an example analysis
To analyze the results, we again leverage pandas, by storing the solution value of the _ore_ variables in a new DataFrame.
Note that we extract each variable's solution value from the solve result. Of course, this requires that the model has been successfully solved.<br>
For convenience, we want to organize the _ore_ solution values in a pivot table with *years* as row index and *mines* as columns. The *pandas* *unstack* operation does this for us.
```
mine_labels = [("mine%d" % (m+1)) for m in range_mines]
ylabels = [("y%d" % (y+1)) for y in range_years]
# Add a column to DataFrame containing 'ore' decision variables value
# Note that we extract the solution values of ore variables in one operation with get_values().
df_decision_vars['ore_values'] = s1.get_values(df_decision_vars.ore)
# Create a pivot table by (years, mines), using pandas' "unstack" method to transform the 'range_mines' row index
# into columns
df_res = df_decision_vars.ore_values.unstack(level='range_mines')
# Set user-friendly labels for column and row indices
df_res.columns = mine_labels
df_res.index = ylabels
df_res
```
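The effect of *unstack* can be seen on a tiny standalone example (the data is hypothetical, shaped like *df_decision_vars.ore_values*):

```python
import pandas as pd

# A MultiIndex Series mimicking df_decision_vars.ore_values:
# 2 mines x 3 years, indexed (range_mines, range_years)
idx = pd.MultiIndex.from_product([[0, 1], [0, 1, 2]],
                                 names=["range_mines", "range_years"])
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=idx)

# unstack() moves the 'range_mines' level from the rows into the columns,
# producing one row per year and one column per mine
pivot = s.unstack(level="range_mines")
print(pivot.shape)  # (3, 2)
```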
#### Visualize results
In this section you'll need the *matplotlib* module to visualize the results of the solve.
```
# import matplotlib library for visualization
import matplotlib.pyplot as plt
# matplotlib graphics are printed -inside- the notebook
%matplotlib inline
df_res.plot(kind="bar", figsize=(10,4.5))
plt.xlabel("year")
plt.ylabel("ore")
plt.title('ore values per year');
```
## Adding operational constraints.
What if we wish to add operational constraints? For example, let us forbid work on certain pairs of (mines, years). Let's see how this impacts profit.
First, we add extra constraints to forbid work on those tuples.
```
# a list of (mine, year) tuples on which work is not possible.
forced_stops = [(1, 2), (0, 1), (1, 0), (3, 2), (2, 3), (3, 4)]
mm.add_constraints(work_vars[stop_m, stop_y] == 0
for stop_m, stop_y in forced_stops)
mm.print_information()
```
The previous solution does not satisfy these constraints; for example (0, 1) means mine 1 should not be worked on year 2, but it was in fact worked in the above solution.
To help CPLEX find a feasible solution, we will build a heuristic feasible solution and pass it to CPLEX.
## Using a heuristic start solution
In this section, we show how one can provide a start solution to CPLEX, based on heuristics.
First, we build a solution in which mines are worked whenever possible, that is, for all pairs *(m,y)* except those in *forced_stops*.
```
# build a new, empty solution
full_mining = mm.new_solution()
# mark every (mine, year) pair as worked, except the forced stops
for m in range_mines:
for y in range_years:
if (m,y) not in forced_stops:
full_mining.add_var_value(work_vars[m,y], 1)
#full_mining.display()
```
Then we pass this solution to the model as a MIP start solution and re-solve,
this time with CPLEX logging turned on.
```
mm.add_mip_start(full_mining)
s2 = mm.solve(url=url, key=key, log_output=True) # turns on CPLEX logging
assert s2, "solve failed"
mm.report()
```
You can see in the CPLEX log above that our MIP start solution provided a good start for CPLEX, defining an initial solution with objective 157.9355.
Now we can again visualize the results with *pandas* and *matplotlib*.
```
# Add a column to DataFrame containing 'ore' decision variables value and create a pivot table by (years, mines)
df_decision_vars['ore_values2'] = s2.get_values(df_decision_vars.ore)
df_res2 = df_decision_vars.ore_values2.unstack(level='range_mines')
df_res2.columns = mine_labels
df_res2.index = ylabels
df_res2.plot(kind="bar", figsize=(10,4.5))
plt.xlabel("year")
plt.ylabel("ore")
plt.title('ore values per year - what-if scenario');
```
As expected, mine1 is not worked in year 2: there is no blue bar at y2.
## Summary
You learned how to set up and use IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization on Cloud.
## References
* [CPLEX Modeling for Python documentation](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)
* [Decision Optimization on Cloud](https://developer.ibm.com/docloud/)
* Need help with DOcplex or to report a bug? Please go [here](https://developer.ibm.com/answers/smartspace/docloud).
* Contact us at dofeedback@wwpdl.vnet.ibm.com.
Copyright © 2017 IBM. IPLA licensed Sample Materials.
<a href="http://cocl.us/pytorch_link_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
</a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Differentiation in PyTorch</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will learn the basics of differentiation.</p>
<ul>
<li><a href="#Derivative">Derivatives</a></li>
<li><a href="#Partial_Derivative">Partial Derivatives</a></li>
</ul>
<p>Estimated Time Needed: <strong>25 min</strong></p>
<hr>
<h2>Preparation</h2>
The following are the libraries we are going to use for this lab.
```
# These are the libraries we will be using for this lab.
import torch
import matplotlib.pylab as plt
```
<!--Empty Space for separating topics-->
<h2 id="Derivative">Derivatives</h2>
Let us create the tensor <code>x</code> and set the parameter <code>requires_grad</code> to <code>True</code> because you are going to take the derivative of the tensor.
```
# Create a tensor x
x = torch.tensor(2.0, requires_grad = True)
print("The tensor x: ", x)
```
Then let us create a tensor according to the equation $ y=x^2 $.
```
# Create a tensor y according to y = x^2
y = x ** 2
print("The result of y = x^2: ", y)
```
Then let us take the derivative with respect to x at x = 2.
```
# Take the derivative. Try to print out the derivative at the value x = 2
y.backward()
print("The dervative at x = 2: ", x.grad)
```
The preceding lines perform the following operation:
$\frac{\mathrm{dy(x)}}{\mathrm{dx}}=2x$
$\frac{\mathrm{dy(x=2)}}{\mathrm{dx}}=2(2)=4$
```
print('data:',x.data)
print('grad_fn:',x.grad_fn)
print('grad:',x.grad)
print("is_leaf:",x.is_leaf)
print("requires_grad:",x.requires_grad)
print('data:',y.data)
print('grad_fn:',y.grad_fn)
print('grad:',y.grad)
print("is_leaf:",y.is_leaf)
print("requires_grad:",y.requires_grad)
```
Let us try to calculate the derivative for a more complicated function.
```
# Calculate the y = x^2 + 2x + 1, then find the derivative
x = torch.tensor(2.0, requires_grad = True)
y = x ** 2 + 2 * x + 1
print("The result of y = x^2 + 2x + 1: ", y)
y.backward()
print("The dervative at x = 2: ", x.grad)
```
The function is in the following form:
$y=x^{2}+2x+1$
The derivative is given by:
$\frac{\mathrm{dy(x)}}{\mathrm{dx}}=2x+2$
$\frac{\mathrm{dy(x=2)}}{\mathrm{dx}}=2(2)+2=6$
<!--Empty Space for separating topics-->
<h3>Practice</h3>
Determine the derivative of $ y = 2x^3+x $ at $x=1$
```
# Practice: Calculate the derivative of y = 2x^3 + x at x = 1
# Type your code here
```
Double-click <b>here</b> for the solution.
<!--
x = torch.tensor(1.0, requires_grad=True)
y = 2 * x ** 3 + x
y.backward()
print("The derivative result: ", x.grad)
-->
<!--Empty Space for separating topics-->
We can implement our own custom autograd functions by subclassing <code>torch.autograd.Function</code> and implementing the forward and backward passes, which operate on tensors.
```
class SQ(torch.autograd.Function):
@staticmethod
def forward(ctx,i):
"""
In the forward pass we receive a Tensor containing the input and return
a Tensor containing the output. ctx is a context object that can be used
to stash information for backward computation. You can cache arbitrary
objects for use in the backward pass using the ctx.save_for_backward method.
"""
result=i**2
ctx.save_for_backward(i)
return result
@staticmethod
def backward(ctx, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the input.
"""
i, = ctx.saved_tensors
# chain rule: scale the incoming gradient by the local derivative d(i**2)/di = 2*i
grad_output = grad_output * 2 * i
return grad_output
```
We can now apply it like a function:
```
x=torch.tensor(2.0,requires_grad=True )
sq=SQ.apply
y=sq(x)
y
print(y.grad_fn)
y.backward()
x.grad
```
<h2 id="Partial_Derivative">Partial Derivatives</h2>
We can also calculate <b>Partial Derivatives</b>. Consider the function: $f(u,v)=vu+u^{2}$
Let us create <code>u</code> tensor, <code>v</code> tensor and <code>f</code> tensor
```
# Calculate f(u, v) = v * u + u^2 at u = 1, v = 2
u = torch.tensor(1.0,requires_grad=True)
v = torch.tensor(2.0,requires_grad=True)
f = u * v + u ** 2
print("The result of v * u + u^2: ", f)
```
This is equivalent to the following:
$f(u=1,v=2)=(2)(1)+1^{2}=3$
<!--Empty Space for separating topics-->
Now let us take the derivative with respect to <code>u</code>:
```
# Calculate the derivative with respect to u
f.backward()
print("The partial derivative with respect to u: ", u.grad)
```
The expression is given by:
$\frac{\mathrm{\partial f(u,v)}}{\partial {u}}=v+2u$
$\frac{\mathrm{\partial f(u=1,v=2)}}{\partial {u}}=2+2(1)=4$
<!--Empty Space for separating topics-->
Now, take the derivative with respect to <code>v</code>:
```
# Calculate the derivative with respect to v
print("The partial derivative with respect to u: ", v.grad)
```
The equation is given by:
$\frac{\mathrm{\partial f(u,v)}}{\partial {v}}=u$
$\frac{\mathrm{\partial f(u=1,v=2)}}{\partial {v}}=1$
<!--Empty Space for separating topics-->
Calculate the derivative with respect to a function with multiple values as follows. You use the sum trick to produce a scalar valued function and then take the gradient:
```
# Calculate the derivative with multiple values
x = torch.linspace(-10, 10, 10, requires_grad = True)
Y = x ** 2
y = torch.sum(x ** 2)
```
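In symbols, the sum trick above replaces the vector-valued output by a scalar whose gradient recovers every componentwise derivative:

$$y=\sum_i x_i^{2},\qquad \frac{\partial y}{\partial x_i}=2x_i$$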
We can plot the function and its derivative
```
# Take the derivative with respect to multiple value. Plot out the function and its derivative
y.backward()
plt.plot(x.detach().numpy(), Y.detach().numpy(), label = 'function')
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label = 'derivative')
plt.xlabel('x')
plt.legend()
plt.show()
```
The orange line is the slope of the blue line at the intersection point, which is the derivative of the blue line.
The method <code> detach()</code> excludes further tracking of operations in the graph, and therefore the subgraph will not record operations. This allows us to then convert the tensor to a numpy array. To understand the sum operation <a href="https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html">Click Here</a>
<!--Empty Space for separating topics-->
The <b>relu</b> activation function is an essential function in neural networks. We can take the derivative as follows:
```
# Take the derivative of Relu with respect to multiple value. Plot out the function and its derivative
x = torch.linspace(-10, 10, 1000, requires_grad = True)
Y = torch.relu(x)
y = Y.sum()
y.backward()
plt.plot(x.detach().numpy(), Y.detach().numpy(), label = 'function')
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label = 'derivative')
plt.xlabel('x')
plt.legend()
plt.show()
```
<!--Empty Space for separating topics-->
```
y.grad_fn
```
<h3>Practice</h3>
Try to determine partial derivative $u$ of the following function where $u=2$ and $v=1$: $ f=uv+(uv)^2$
```
# Practice: Calculate the derivative of f = u * v + (u * v) ** 2 at u = 2, v = 1
# Type the code here
u = torch.tensor(2.0,requires_grad=True)
v = torch.tensor(1.0,requires_grad=True)
f = u * v + (u * v) ** 2
print("The result of u * v + (u * v) ** 2: ", f)
f.backward()
print("The partial derivative with respect to u: ", u.grad)
print("The partial derivative with respect to u: ", v.grad)
```
Double-click __here__ for the solution.
<!--
u = torch.tensor(2.0, requires_grad = True)
v = torch.tensor(1.0, requires_grad = True)
f = u * v + (u * v) ** 2
f.backward()
print("The result is ", u.grad)
-->
<!--Empty Space for separating topics-->
<a href="http://cocl.us/pytorch_link_bottom">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
# Multi-task recommenders
**Learning Objectives**
1. Training a model which focuses on ratings.
2. Training a model which focuses on retrieval.
3. Training a joint model that assigns positive weights to both ratings & retrieval models.
## Introduction
In the basic retrieval notebook we built a retrieval system using movie watches as positive interaction signals.
In many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns.
Integrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance.
In addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as [transfer learning](https://en.wikipedia.org/wiki/Transfer_learning). For example, [this paper](https://openreview.net/pdf?id=SJxPVcSonN) shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data.
In this jupyter notebook, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings).
Each learning objective will correspond to a __#TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/recommendation_systems/solutions/multitask.ipynb) for reference.
## Imports
Let's first get our imports out of the way.
```
# Installing the necessary libraries.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
```
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
```
# Importing the necessary modules
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
```
## Preparing the dataset
We're going to use the Movielens 100K dataset.
```
ratings = tfds.load('movielens/100k-ratings', split="train")
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
})
movies = movies.map(lambda x: x["movie_title"])
```
And repeat our preparations for building vocabularies and splitting the data into a train and a test set:
```
# Randomly shuffle data and split between train and test.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
```
## A multi-task model
There are two critical parts to multi-task recommenders:
1. They optimize for two or more objectives, and so have two or more losses.
2. They share variables between the tasks, allowing for transfer learning.
In this jupyter notebook, we will define our models as before, but instead of having a single task, we will have two tasks: one that predicts ratings, and one that predicts movie watches.
The user and movie models are as before:
```python
user_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add 1 to account for the unknown token.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
```
However, now we will have two tasks. The first is the rating task:
```python
tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
```
Its goal is to predict the ratings as accurately as possible.
The second is the retrieval task:
```python
tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128)
)
)
```
As before, this task's goal is to predict which movies the user will or will not watch.
### Putting it together
We put it all together in a model class.
The new component here is that - since we have two tasks and two losses - we need to decide on how important each loss is. We can do this by giving each of the losses a weight, and treating these weights as hyperparameters. If we assign a large loss weight to the rating task, our model is going to focus on predicting ratings (but still use some information from the retrieval task); if we assign a large loss weight to the retrieval task, it will focus on retrieval instead.
```
class MovielensModel(tfrs.models.Model):
def __init__(self, rating_weight: float, retrieval_weight: float) -> None:
# We take the loss weights in the constructor: this allows us to instantiate
# several model objects with different loss weights.
super().__init__()
embedding_dimension = 32
# User and movie models.
self.movie_model: tf.keras.layers.Layer = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
self.user_model: tf.keras.layers.Layer = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
# A small model to take in user and movie embeddings and predict ratings.
# We can make this as complicated as we want as long as we output a scalar
# as our prediction.
self.rating_model = tf.keras.Sequential([
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(1),
])
# The tasks.
self.rating_task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.movie_model)
)
)
# The loss weights.
self.rating_weight = rating_weight
self.retrieval_weight = retrieval_weight
def call(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model.
movie_embeddings = self.movie_model(features["movie_title"])
return (
user_embeddings,
movie_embeddings,
# We apply the multi-layered rating model to a concatenation of
# user and movie embeddings.
self.rating_model(
tf.concat([user_embeddings, movie_embeddings], axis=1)
),
)
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
ratings = features.pop("user_rating")
user_embeddings, movie_embeddings, rating_predictions = self(features)
# We compute the loss for each task.
rating_loss = self.rating_task(
labels=ratings,
predictions=rating_predictions,
)
retrieval_loss = self.retrieval_task(user_embeddings, movie_embeddings)
# And combine them using the loss weights.
return (self.rating_weight * rating_loss
+ self.retrieval_weight * retrieval_loss)
```
### Rating-specialized model
Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings.
```
# Here, configuring the model with losses and metrics.
# TODO 1: Your code goes here.
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
# Training the ratings model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
```
The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its top-100 accuracy is almost 4 times worse than that of a model trained solely to predict watches.
### Retrieval-specialized model
Let's now try a model that focuses on retrieval only.
```
# Here, configuring the model with losses and metrics.
# TODO 2: Your code goes here.
# Training the retrieval model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
```
We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings.
### Joint model
Let's now train a model that assigns positive weights to both tasks.
```
# Here, configuring the model with losses and metrics.
# TODO 3: Your code goes here.
# Training the joint model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
```
The result is a model that performs roughly as well on both tasks as each specialized model.
While the results here do not show a clear accuracy benefit from a joint model in this case, multi-task learning is in general an extremely useful tool. We can expect better results when we can transfer knowledge from a data-abundant task (such as clicks) to a closely related data-sparse task (such as purchases).
# Equilibrium analysis Chemical reaction
Number (code) of assignment: 2N4
Description of activity: H2 & H3
Report on behalf of:
name : Pieter van Halem
student number (4597591)
name : Dennis Dane
student number (4592239)
Data of student taking the role of contact person:
name : Pieter van Halem
email address : pietervanhalem@hotmail.com
```
import numpy as np
import matplotlib.pyplot as plt
```
# Function definitions:
In the following block the function that are used for the numerical analysis are defined. These are functions for calculation of the various time steps, plotting tables and plotting graphs.
```
def f(t,y,a,b,i):
if (t>i):
a = 0
du = a-(b+1)*y[0,0]+(y[0,0]**2)*y[0,1]
dv = b*y[0,0]-(y[0,0]**2)*y[0,1]
return np.matrix([du,dv])
def FE(t,y,h,a,b,i):
f1 = f(t,y,a,b,i)
pred = y + f1*h
corr = y + (h/2)*(f((t+h),pred,a,b,i) + f1)
return corr
def Integrate(y0, t0, tend, N,a,b,i):
h = (tend-t0)/N
t_arr = np.zeros(N+1)
t_arr[0] = t0
w_arr = np.zeros((2,N+1))
w_arr[:,0] = y0
t = t0
y = y0
for k in range(1,N+1):
y = FE(t,y,h,a,b,i)
w_arr[:,k] = y
t = t + h
t_arr[k] = t
return t_arr, w_arr
def PrintTable(t_arr, w_arr):
print ("%6s %6s: %17s %17s" % ("index", "t", "u(t)", "v(t)"))
for k in range(0,N+1):
print ("{:6d} {:6.2f}: {:17.7e} {:17.7e}".format(k,t_arr[k],
w_arr[0,k],w_arr[1,k]))
def PlotGraphs(t_arr, w_arr):
plt.figure("Initial value problem")
plt.plot(t_arr,w_arr[0,:],'r',t_arr,w_arr[1,:],'--')
plt.legend(("$u(t)$", "$v(t)$"),loc="best", shadow=True)
plt.xlabel("$t$")
plt.ylabel("$u$ and $v$")
plt.title("Graphs of $u(t)$ and $v(t)$")
plt.show()
def PlotGraphs2(t_arr, w_arr):
plt.figure("Initial value problem")
plt.plot(w_arr[0,:],w_arr[1,:],'g')
plt.legend(("$u,v$",""),loc="best", shadow=True)
plt.xlabel("$u(t)$")
plt.ylabel("$v(t)$")
plt.title("$Phase$ $plane$ $(u,v)$")
plt.axis("scaled")
plt.show()
```
# Assignment 2.9
Integrate the system with Modified Euler and time step h = 0.15. Make a table of u and v on the time interval 0 ≤ t ≤ 1.5. The table needs to give u and v in an 8-digit floating-point format.
```
y0 = np.matrix([0.0,0.0])
t0 = 0.0
tend = 1.5
N = 10
t_array, w_array = Integrate(y0, t0, tend, N,2,4.5,11)
print("The integrated system using Modified Euler with time step h = 0.15 is shown in the following table: \n")
PrintTable(t_array, w_array)
```
# Assignment 2.10
Integrate the system with Modified Euler and time step h = 0.05 for the interval [0,20]. Make plots of u and v as functions of t (put them in one figure). Also make a plot of u and v in the phase plane (u,v-plane). Do your plots correspond to your results of part 2?
```
y0 = np.matrix([0.0,0.0])
t0 = 0.0
tend = 20
N = 400
t_array, w_array = Integrate(y0, t0, tend, N,2,4.5, 25)
print("In this assignment the system has to be integrated using Modified Euler with a time step of h = 0.05 on \na interval of [0,20].")
print("The first graph is u(t) and v(t) against time (t).")
PlotGraphs(t_array, w_array)
print("The second graph shows the u-v plane")
PlotGraphs2(t_array, w_array)
```
Although the direction of traversal cannot be seen in the phase-plane graph, the first plot shows that u(t) and v(t) approach an equilibrium as time increases.
The system is therefore a stable spiral, which is consistent with the conclusion of assignment 1.3.
# Assignment 2.11
Using the formula derived in question 7, estimate the accuracy of u and v computed with h = 0.05 at t = 8. Hence, integrate once more with time step h = 0.1.
The error can be estimated with Richardson's method. We will use α = 1/3 found in assignment 7. Here the estimated error is: E ≈ α( w(h) - w(2h) ).
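A short sketch of where α = 1/3 comes from (assuming Modified Euler's second-order global error $y(t) - w(h) \approx Ch^2$):

$$y(t)-w(h)\approx Ch^{2},\qquad y(t)-w(2h)\approx C(2h)^{2}=4Ch^{2}$$

Subtracting gives $w(h)-w(2h)\approx 3Ch^{2}$, so the error in $w(h)$ is approximately $\tfrac{1}{3}\big(w(h)-w(2h)\big)$.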
```
y0 = np.matrix([0.0,0.0])
t0 = 0.0
tend = 20
N = 400
t_array, w_array = Integrate(y0, t0, tend, N, 2, 4.5,25)
y0 = np.matrix([0.0,0.0])
t0 = 0.0
tend = 20
N = 200
t_array2, w_array2 = Integrate(y0, t0, tend, N, 2, 4.5, 25)
print("The value for u and v at t = 8.00 with h = 0.05 is: {:.2f} {:.7e} {:.7e}".format(t_array[160], w_array[0,160],w_array[1,160]))
print("The value for u and v at t = 8.00 with h = 0.10 is: {:.2f} {:.7e} {:.7e}".format(t_array2[80], w_array2[0,80],w_array2[1,80]))
E1 = (w_array[0,160]-w_array2[0,80])*(1/3)
E2 = (w_array[1,160]-w_array2[1,80])*(1/3)
print("The estimated accuracy for u is: {:.7e}".format(E1))
print("The estimated accuracy for v is: {:.7e}".format(E2))
```
# Assignment 2.12
Apply Modified Euler with h = 0.05. For 0 ≤ t ≤ t1 it holds that a = 2. At t = t1 the supply of material A fails, and therefore a = 0 for t > t1. Take t1 = 4.0. Make a plot of u and v as functions of t on the interval [0, 10] in one figure, and a plot of u and v in the uv-plane. Evaluate your results by comparing them to your findings from part 8.
```
y0 = np.matrix([0.0,0.0])
t0 = 0.0
tend = 10.0
N = 200
t_array, w_array = Integrate(y0, t0, tend, N, 2, 4.5, 4)
PlotGraphs(t_array, w_array)
PlotGraphs2(t_array, w_array)
```
The first plot shows that u and v indeed converges to a certain value, as predicted in assignment 8. The phase plane shows that uv goes to a point on the u-axis. This was also predicted in assignment 8.
The first plot shows a "corner" in the u and v graph (a discontinuity in the first derivative). This does not contradict the theory, the system of differential equtions its first derivative does not have to be continuous. The line itself is continuous because of the initial conditions.
# Assignment 2.13
Take t1 = 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0. Make a table of the values of t-tilde and v-tilde. Evaluate your results.
```
for i in np.arange(3.0, 6.5, 0.50):
    t0 = 0
    tend = 10
    N = 200
    t_array2, w_array2 = Integrate(y0, t0, tend, N, 2.0, 4.5, i)
    indices = np.nonzero(w_array2[0, :] >= 0.01)
    index = np.max(indices[0])
    t_tilde = t_array2[index + 1]
    v_tilde = w_array2[1, N]
    if i == 3:
        print("%6s %17s: %17s " % ("t1", "t_tilde", "v_tilde"))
    print("{:6.2f} {:17.2f} {:17.7e}".format(i, t_tilde, v_tilde))
```
The table above shows the values of t-tilde and v-tilde. There is no simple relation between t1 and t-tilde, nor between t1 and v-tilde; we can, however, conclude that t-tilde increases as t1 increases, which cannot be said for v-tilde. The value of v-tilde appears to converge to a value between 3.0 and 3.5 (dependent on the initial conditions). The convergence of v-tilde is also consistent with the findings in assignment 1.8.
# Gaussian Process Distribution of Relaxation Times
## In this tutorial we will reproduce Figure 7 of the article https://doi.org/10.1016/j.electacta.2019.135316
GP-DRT is our newly developed approach that can be used to obtain both the mean and covariance of the DRT from the EIS data by assuming that the DRT is a Gaussian process (GP). GP-DRT can predict the DRT and the imaginary part of the impedance at frequencies that were not previously measured.
To obtain the DRT from the impedance we assume that $\gamma(\xi)$ is a GP, where $f$ is the frequency and $\xi=\log f$. Under the DRT model, and since GPs are closed under linear transformations, it follows that $Z^{\rm DRT}_{\rm im}\left(\xi\right)$ is also a GP.
More precisely we can write
$$\begin{pmatrix}
\gamma(\xi) \\
Z^{\rm DRT}_{\rm im}\left(\xi\right)
\end{pmatrix}\sim \mathcal{GP}\left(\mathbf 0, \begin{pmatrix}
k(\xi, \xi^\prime) & \mathcal L^{\rm im}_{\xi^\prime} \left(k(\xi, \xi^\prime)\right)\\
\mathcal L^{\rm im}_{\xi} k(\xi, \xi^\prime) & \mathcal L^{\rm im}_{\xi^\prime}\left(\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right)\right)
\end{pmatrix}\right)$$
where
$$\mathcal L^{\rm im}_\xi \left(\cdot\right) = -\displaystyle \int_{-\infty}^\infty \frac{2\pi \displaystyle e^{\xi-\hat \xi}}{1+\left(2\pi \displaystyle e^{\xi-\hat \xi}\right)^2} \left(\cdot\right) d \hat \xi$$
is a linear functional that transforms the DRT into the imaginary part of the impedance.
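As a side note (independent of the `GP_DRT` module used below), the functional can be checked numerically: discretize $\hat\xi$, sample $\gamma$ on the grid, and integrate by quadrature. The helper name `L_im_quadrature` and the test DRT below are illustrative assumptions, not code from the article. For a DRT that is a narrow unit-area peak at $\hat\xi = 0$, $\mathcal L^{\rm im}_\xi$ evaluated at $\xi = 0$ should approach $-2\pi/(1+4\pi^2) \approx -0.155$.

```python
import numpy as np

def L_im_quadrature(gamma_vals, xi, xi_hat):
    # rectangle-rule approximation of the functional L^im on a uniform xi_hat grid;
    # gamma_vals samples the DRT gamma at the grid points xi_hat
    x = xi - xi_hat
    kernel = -2.0 * np.pi * np.exp(x) / (1.0 + (2.0 * np.pi * np.exp(x)) ** 2)
    dxi = xi_hat[1] - xi_hat[0]
    return np.sum(kernel * gamma_vals) * dxi

# sanity check: a narrow unit-area Gaussian at xi_hat = 0 mimics a single
# relaxation time; L^im at xi = 0 should then be close to -2*pi/(1 + 4*pi**2)
xi_hat = np.linspace(-6.0, 6.0, 4001)
sigma = 0.05
gamma_vals = np.exp(-0.5 * (xi_hat / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
print(L_im_quadrature(gamma_vals, 0.0, xi_hat))  # approx -0.155
```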
Assuming we have $N$ observations, we can set $\left(\mathbf Z^{\rm exp}_{\rm im}\right)_n = Z^{\rm exp}_{\rm im}(\xi_n)$ with $\xi_n =\log f_n$ and $n =1, 2, \ldots N $. The corresponding multivariate Gaussian random variable can be written as
$$\begin{pmatrix}
\boldsymbol{\gamma} \\
\mathbf Z^{\rm exp}_{\rm im}
\end{pmatrix}\sim \mathcal{N}\left(\mathbf 0, \begin{pmatrix}
\mathbf K & \mathcal L_{\rm im} \mathbf K\\
\mathcal L_{\rm im}^\sharp \mathbf K & \mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I
\end{pmatrix}\right)$$
where
$$\begin{align}
(\mathbf K)_{nm} &= k(\xi_n, \xi_m)\\
(\mathcal L_{\rm im} \mathbf K)_{nm} &= \left. \mathcal L^{\rm im}_{\xi^\prime} \left(k(\xi, \xi^\prime)\right) \right |_{\xi_n, \xi_m}\\
(\mathcal L_{\rm im}^\sharp \mathbf K)_{nm} &= \left.\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right) \right|_{\xi_n, \xi_m}\\
(\mathcal L^2_{\rm im} \mathbf K)_{nm} &= \left.\mathcal L^{\rm im}_{\xi^\prime}\left(\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right)\right) \right|_{\xi_n, \xi_m}
\end{align}$$
and $\mathcal L_{\rm im} \mathbf K^\top = \mathcal L_{\rm im}^\sharp \mathbf K$.
To obtain the DRT from impedance, the distribution of $\mathbf{\gamma}$ conditioned on $\mathbf Z^{\rm exp}_{\rm im}$ can be written as
$$\boldsymbol{\gamma}|\mathbf Z^{\rm exp}_{\rm im}\sim \mathcal N\left( \boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}}, \boldsymbol\Sigma_{\gamma| Z^{\rm exp}_{\rm im}}\right)$$
with
$$\begin{align}
\boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}} &= \mathcal L_{\rm im} \mathbf K \left(\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I \right)^{-1} \mathbf Z^{\rm exp}_{\rm im} \\
\boldsymbol \Sigma_{\gamma| Z^{\rm exp}_{\rm im}} &= \mathbf K- \mathcal L_{\rm im} \mathbf K \left(\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I \right)^{-1}\mathcal L_{\rm im} \mathbf K^\top
\end{align}$$
The above formulas depend on 1) the kernel, $k(\xi, \xi^\prime)$; 2) the noise level, $\sigma_n$; and 3) the experimental data, $\mathbf Z^{\rm exp}_{\rm im}$ (at the log-frequencies $\boldsymbol \xi$).
```
from math import cos, pi, sin
import GP_DRT
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize
%matplotlib inline
```
## 1) Define parameters of the ZARC circuit which will be used for the synthetic experiment generation
The impedance of a ZARC can be written as
$$
Z^{\rm exact}(f) = R_\infty + \displaystyle \frac{1}{\displaystyle \frac{1}{R_{\rm ct}}+C \left(i 2\pi f\right)^\phi}
$$
where $\displaystyle C = \frac{\tau_0^\phi}{R_{\rm ct}}$.
The analytical DRT can be computed analytically as
$$
\gamma(\log \tau) = \displaystyle \frac{\displaystyle R_{\rm ct}}{\displaystyle 2\pi} \displaystyle \frac{\displaystyle \sin\left((1-\phi)\pi\right)}{\displaystyle \cosh(\phi \log(\tau/\tau_0))-\cos(\pi(1-\phi))}
$$
```
# define the frequency range
N_freqs = 81
freq_vec = np.logspace(-4.0, 4.0, num=N_freqs, endpoint=True)
xi_vec = np.log(freq_vec)
tau = 1 / freq_vec
# define the frequency range used for prediction
# note: we could have used other values
freq_vec_star = np.logspace(-4.0, 4.0, num=81, endpoint=True)
xi_vec_star = np.log(freq_vec_star)
# parameters for ZARC model, the impedance and analytical DRT are calculated as the above equations
R_inf = 10
R_ct = 50
phi = 0.8
tau_0 = 1.0
C = tau_0 ** phi / R_ct
Z_exact = R_inf + 1.0 / (1.0 / R_ct + C * (1j * 2.0 * pi * freq_vec) ** phi)
gamma_fct = (
(R_ct)
/ (2.0 * pi)
* sin((1.0 - phi) * pi)
/ (np.cosh(phi * np.log(tau / tau_0)) - cos((1.0 - phi) * pi))
)
# we will use a finer mesh for plotting the results
freq_vec_plot = np.logspace(-4.0, 4.0, num=10 * (N_freqs - 1), endpoint=True)
tau_plot = 1 / freq_vec_plot
gamma_fct_plot = (
(R_ct)
/ (2.0 * pi)
* sin((1.0 - phi) * pi)
/ (np.cosh(phi * np.log(tau_plot / tau_0)) - cos((1.0 - phi) * pi))
) # for plotting only
# we will add noise to the impedance computed analytically
np.random.seed(214975)
sigma_n_exp = 1.0
Z_exp = Z_exact + sigma_n_exp * (
np.random.normal(0, 1, N_freqs) + 1j * np.random.normal(0, 1, N_freqs)
)
```
## 2) Show the synthetic impedance in the Nyquist plot - this is similar to Figure 7 (a)
```
# Nyquist plot of the impedance
plt.plot(np.real(Z_exact), -np.imag(Z_exact), linewidth=4, color="black", label="exact")
plt.plot(
np.real(Z_exp), -np.imag(Z_exp), "o", markersize=10, color="red", label="synth exp"
)
plt.plot(
np.real(Z_exp[20:60:10]),
-np.imag(Z_exp[20:60:10]),
"s",
markersize=10,
color="black",
)
plt.rc("text", usetex=True)
plt.rc("font", family="serif", size=15)
plt.rc("xtick", labelsize=15)
plt.rc("ytick", labelsize=15)
plt.legend(frameon=False, fontsize=15)
plt.axis("scaled")
plt.xticks(range(10, 70, 10))
plt.yticks(range(0, 60, 10))
plt.gca().set_aspect("equal", adjustable="box")
plt.xlabel(r"$Z_{\rm re}/\Omega$", fontsize=20)
plt.ylabel(r"$-Z_{\rm im}/\Omega$", fontsize=20)
# label the frequency points
plt.annotate(
r"$10^{-2}$",
xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])),
xytext=(np.real(Z_exp[20]) - 2, 10 - np.imag(Z_exp[20])),
arrowprops=dict(arrowstyle="-", connectionstyle="arc"),
)
plt.annotate(
r"$10^{-1}$",
xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])),
xytext=(np.real(Z_exp[30]) - 2, 6 - np.imag(Z_exp[30])),
arrowprops=dict(arrowstyle="-", connectionstyle="arc"),
)
plt.annotate(
r"$1$",
xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])),
xytext=(np.real(Z_exp[40]), 10 - np.imag(Z_exp[40])),
arrowprops=dict(arrowstyle="-", connectionstyle="arc"),
)
plt.annotate(
r"$10$",
xy=(np.real(Z_exp[50]), -np.imag(Z_exp[50])),
xytext=(np.real(Z_exp[50]) - 1, 10 - np.imag(Z_exp[50])),
arrowprops=dict(arrowstyle="-", connectionstyle="arc"),
)
plt.show()
```
## 3) Obtain the optimal hyperparameters of the GP-DRT model by minimizing the negative marginal log likelihood (NMLL)
We constrain the kernel to be a squared exponential, _i.e._
$$
k(\xi, \xi^\prime) = \sigma_f^2 \exp\left(-\frac{1}{2 \ell^2}\left(\xi-\xi^\prime\right)^2 \right)
$$
and optimize its two parameters, $\sigma_f$ and $\ell$, as well as the noise level $\sigma_n$. Therefore, the vector of hyperparameters of the GP-DRT is assumed to be $\boldsymbol \theta = \begin{pmatrix} \sigma_n, \sigma_f, \ell \end{pmatrix}^\top$.
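As a standalone NumPy sketch of this kernel (the `GP_DRT.matrix_K` helper used later in this notebook presumably computes the equivalent matrix; the function names here are only illustrative):

```python
import numpy as np

def se_kernel(xi, xi_prime, sigma_f, ell):
    # squared-exponential kernel k(xi, xi') = sigma_f^2 * exp(-(xi - xi')^2 / (2 ell^2))
    return sigma_f**2 * np.exp(-0.5 * ((xi - xi_prime) / ell) ** 2)

def kernel_matrix(xi_vec, sigma_f, ell):
    # (K)_nm = k(xi_n, xi_m), built with NumPy broadcasting
    diff = xi_vec[:, None] - xi_vec[None, :]
    return sigma_f**2 * np.exp(-0.5 * (diff / ell) ** 2)

xi = np.log(np.logspace(-4.0, 4.0, num=81))  # same log-frequency grid as below
K = kernel_matrix(xi, sigma_f=5.0, ell=1.0)
print(K.shape)              # (81, 81)
print(np.allclose(K, K.T))  # True: K is symmetric
```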
Following the same derivation from the article we can write that
$$
\log p(\mathbf Z^{\rm exp}_{\rm im}|\boldsymbol \xi, \boldsymbol \theta)= - \frac{1}{2} {\mathbf Z^{\rm exp}_{\rm im}}^\top \left(\mathcal L^2_{\rm im} \mathbf K +\sigma_n^2\mathbf I \right)^{-1} \mathbf Z^{\rm exp}_{\rm im} -\frac{1}{2} \log \left| \mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I \right| - \frac{N}{2} \log 2\pi
$$
We will call $L(\boldsymbol \theta)$ the negative (and shifted) MLL (NMLL):
$$
L(\boldsymbol \theta) = - \log p(\mathbf Z^{\rm exp}_{\rm im}|\boldsymbol \xi, \boldsymbol \theta) - \frac{N}{2} \log 2\pi
$$
The experimental evidence is then maximized at
$$
\boldsymbol \theta = \arg \min_{\boldsymbol \theta^\prime}L(\boldsymbol \theta^\prime)
$$
The above minimization problem is solved using the `minimize` function provided by the `scipy.optimize` module
```
# initialize the parameter for global 3D optimization to maximize the marginal log-likelihood as shown in eq (31)
sigma_n = sigma_n_exp
sigma_f = 5.0
ell = 1.0
theta_0 = np.array([sigma_n, sigma_f, ell])
seq_theta = np.copy(theta_0)
def print_results(theta):
    global seq_theta
    seq_theta = np.vstack((seq_theta, theta))
    print("{0:.7f} {1:.7f} {2:.7f}".format(theta[0], theta[1], theta[2]))
GP_DRT.NMLL_fct(theta_0, Z_exp, xi_vec)
GP_DRT.grad_NMLL_fct(theta_0, Z_exp, xi_vec)
print("sigma_n, sigma_f, ell")
# minimize the NMLL L(\theta) w.r.t sigma_n, sigma_f, ell using the Newton-CG method as implemented in scipy
res = minimize(
GP_DRT.NMLL_fct,
theta_0,
args=(Z_exp, xi_vec),
method="Newton-CG",
jac=GP_DRT.grad_NMLL_fct,
callback=print_results,
options={"disp": True},
)
# collect the optimized parameters
sigma_n, sigma_f, ell = res.x
```
## 4) Core of the GP-DRT
### 4a) Compute matrices
Once we have identified the optimized parameters we can compute $\mathbf K$, $\mathcal L_{\rm im} \mathbf K$, and $\mathcal L^2_{\rm im} \mathbf K$, which are given in equation `(18)` in the article
```
K = GP_DRT.matrix_K(xi_vec, xi_vec, sigma_f, ell)
L_im_K = GP_DRT.matrix_L_im_K(xi_vec, xi_vec, sigma_f, ell)
L2_im_K = GP_DRT.matrix_L2_im_K(xi_vec, xi_vec, sigma_f, ell)
Sigma = (sigma_n ** 2) * np.eye(N_freqs)
```
### 4b) Factorize the matrices and solve the linear equations
We are computing
$$
\boldsymbol{\gamma}|\mathbf Z^{\rm exp}_{\rm im}\sim \mathcal N\left( \boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}}, \boldsymbol \Sigma_{\gamma| Z^{\rm exp}_{\rm im}}\right)
$$
using
$$
\begin{align}
\boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}} &= \mathcal L_{\rm im} \mathbf K\left(\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I\right)^{-1}\mathbf Z^{\rm exp}_{\rm im} \\
\boldsymbol \Sigma_{\gamma| Z^{\rm exp}_{\rm im}} &= \mathbf K-\mathcal L_{\rm im} \mathbf K\left(\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I\right)^{-1}\mathcal L_{\rm im} \mathbf K^\top
\end{align}
$$
The key step is the Cholesky factorization of $\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I$, _i.e._, of `K_im_full`
```
# the matrix $\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I$ whose inverse is needed
K_im_full = L2_im_K + Sigma
# Cholesky factorization, L is a lower-triangular matrix
L = np.linalg.cholesky(K_im_full)
# solve for alpha
alpha = np.linalg.solve(L, Z_exp.imag)
alpha = np.linalg.solve(L.T, alpha)
# estimate the gamma of eq (21a), the minus sign, which is not included in L_im_K, refers to eq (65)
gamma_fct_est = -np.dot(L_im_K.T, alpha)
# covariance matrix
inv_L = np.linalg.inv(L)
inv_K_im_full = np.dot(inv_L.T, inv_L)
# estimate the sigma of gamma for eq (21b)
cov_gamma_fct_est = K - np.dot(L_im_K.T, np.dot(inv_K_im_full, L_im_K))
sigma_gamma_fct_est = np.sqrt(np.diag(cov_gamma_fct_est))
```
### 4c) Plot the obtained DRT against the analytical DRT
```
# plot the DRT and its confidence region
plt.semilogx(freq_vec_plot, gamma_fct_plot, linewidth=4, color="black", label="exact")
plt.semilogx(freq_vec, gamma_fct_est, linewidth=4, color="red", label="GP-DRT")
plt.fill_between(
freq_vec,
gamma_fct_est - 3 * sigma_gamma_fct_est,
gamma_fct_est + 3 * sigma_gamma_fct_est,
color="0.4",
alpha=0.3,
)
plt.rc("text", usetex=True)
plt.rc("font", family="serif", size=15)
plt.rc("xtick", labelsize=15)
plt.rc("ytick", labelsize=15)
plt.axis([1e-4, 1e4, -5, 25])
plt.legend(frameon=False, fontsize=15)
plt.xlabel(r"$f/{\rm Hz}$", fontsize=20)
plt.ylabel(r"$\gamma/\Omega$", fontsize=20)
plt.show()
```
### 4d) Predict the $\gamma$ and the imaginary part of the GP-DRT impedance
This part is explained in Section `2.3.3` of the main article
```
# initialize the imaginary part of impedance vector
Z_im_vec_star = np.empty_like(xi_vec_star)
Sigma_Z_im_vec_star = np.empty_like(xi_vec_star)
gamma_vec_star = np.empty_like(xi_vec_star)
Sigma_gamma_vec_star = np.empty_like(xi_vec_star)
# calculate the imaginary part of impedance at each $\xi$ point for the plot
for index, val in enumerate(xi_vec_star):
    xi_star = np.array([val])
    # compute matrices shown in eq (18); k_star corresponds to a new point
    k_star = GP_DRT.matrix_K(xi_vec, xi_star, sigma_f, ell)
    L_im_k_star = GP_DRT.matrix_L_im_K(xi_vec, xi_star, sigma_f, ell)
    L2_im_k_star = GP_DRT.matrix_L2_im_K(xi_vec, xi_star, sigma_f, ell)
    k_star_star = GP_DRT.matrix_K(xi_star, xi_star, sigma_f, ell)
    L_im_k_star_star = GP_DRT.matrix_L_im_K(xi_star, xi_star, sigma_f, ell)
    L2_im_k_star_star = GP_DRT.matrix_L2_im_K(xi_star, xi_star, sigma_f, ell)
    # compute the mean and variance of Z_im_star using eq (26)
    Z_im_vec_star[index] = np.dot(L2_im_k_star.T, np.dot(inv_K_im_full, Z_exp.imag))
    Sigma_Z_im_vec_star[index] = L2_im_k_star_star - np.dot(
        L2_im_k_star.T, np.dot(inv_K_im_full, L2_im_k_star)
    )
    # compute the mean and variance of gamma_star
    gamma_vec_star[index] = -np.dot(L_im_k_star.T, np.dot(inv_K_im_full, Z_exp.imag))
    Sigma_gamma_vec_star[index] = k_star_star - np.dot(
        L_im_k_star.T, np.dot(inv_K_im_full, L_im_k_star)
    )
```
### 4e) Plot the imaginary part of the GP-DRT impedance together with the exact one and the synthetic experiment
```
plt.semilogx(
freq_vec_star, -np.imag(Z_exact), ":", linewidth=4, color="blue", label="exact"
)
plt.semilogx(
freq_vec, -Z_exp.imag, "o", markersize=10, color="black", label="synth exp"
)
plt.semilogx(freq_vec_star, -Z_im_vec_star, linewidth=4, color="red", label="GP-DRT")
plt.fill_between(
freq_vec_star,
-Z_im_vec_star - 3 * np.sqrt(abs(Sigma_Z_im_vec_star)),
-Z_im_vec_star + 3 * np.sqrt(abs(Sigma_Z_im_vec_star)),
alpha=0.3,
)
plt.rc("text", usetex=True)
plt.rc("font", family="serif", size=15)
plt.rc("xtick", labelsize=15)
plt.rc("ytick", labelsize=15)
plt.axis([1e-4, 1e4, -5, 25])
plt.legend(frameon=False, fontsize=15)
plt.xlabel(r"$f/{\rm Hz}$", fontsize=20)
plt.ylabel(r"$-Z_{\rm im}/\Omega$", fontsize=20)
plt.show()
```
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import concatenate
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], enable=True)
import matplotlib.pyplot as plt
app = pd.read_csv('../data/application_train.csv')
count_class_0, count_class_1 = app.TARGET.value_counts()
df_class_0 = app[app['TARGET'] == 0]
df_class_1 = app[app['TARGET'] == 1]
df_class_0_under = df_class_0.sample(count_class_1)
df = pd.concat([df_class_0_under, df_class_1], axis=0)
print('Random under-sampling:')
print(df.TARGET.value_counts())
df.TARGET.value_counts().plot(kind='bar', title='Count (target)');
df = df[df.columns[df.isnull().mean() < 0.3]]
df = df._get_numeric_data()
df
df = df[df.corr().abs()['TARGET'].sort_values(ascending=False)[:31].index]
df = df.fillna(df.mean())
X, y = df.values[:, 1:], df.values[:, 0]
scaler = MinMaxScaler()
x = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33)#, random_state=42)
model = keras.Sequential(
[
layers.Input(shape = (x.shape[1],)),
layers.Dense(200, activation="relu"),
layers.Dense(150, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(20, activation="relu"),
layers.Dense(1, activation='sigmoid'),
]
)
# model = Sequential()
# model.layers.add(Dense(50, input_dim=60, activation='relu'))
# model.layers.add(Dense(20, activation='relu'))
# model.layers.add(Dense(1, activation='sigmoid'))
# # compile the keras model
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs= 100, batch_size=100, validation_data=(X_test,y_test) )
loss_train = history.history['accuracy']
loss_val = history.history['val_accuracy']
epochs = range(0,100)
plt.plot(epochs, loss_train, 'g', label='Training acc')
plt.plot(epochs, loss_val, 'b', label='validation acc')
plt.title('Training and Validation acc')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# mlp for binary classification
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# determine the number of input features
n_features = X_train.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# fit the model
history = model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
# evaluate the model
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.3f' % acc)
# make a prediction
row = [1,0,0.99539,-0.05889,0.85243,0.02306,0.83398,-0.37708,1,0.03760,0.85243,-0.17755,0.59755,-0.44945,0.60536,-0.38223,0.84356,-0.38542,0.58212,-0.32192,0.56971,-0.29674,0.36946,-0.47357,0.56811,-0.51171,0.41078,-0.46168,0.21266,-0.34090,0.42267,-0.54487,0.18641,-0.45300]
yhat = model.predict([row])
print('Predicted: %.3f' % yhat)
```
```
#Load libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from pandas import read_csv
from pandas import set_option
from matplotlib import pyplot
from matplotlib import pyplot as plt
import seaborn
HOME_PATH = '' #home path of the project
FILENAME = 'C_Obesity_Data_Real.csv'
```
## 1. Load the dataset
```
dataset = pd.read_csv(HOME_PATH + FILENAME)
dataset['Age'] = np.round(dataset['Age']).astype('int64')
dataset['Height'] = np.round(dataset['Height'],2)
dataset['Weight'] = np.round(dataset['Weight'],2)
dataset
```
## 2. Analyze data
```
categorical_cols = (dataset.select_dtypes(include=['object'])).columns.tolist()
categorical_cols
#dimensions of the dataset
dataset.shape
#data types of each attribute
dataset.dtypes
dataset['FCVC'].values
#peek at the first rows of the data
dataset.head(20)
#summarize the distribution of each attribute
set_option('precision', 2)
dataset.describe()
```
## 3. Data visualization
```
for col in dataset.columns :
    # box plot for each column
    data = dataset[col]
    if col in categorical_cols :
        data = data.astype("category").cat.codes
    fig, ax = plt.subplots()
    ax.boxplot(data)
    ax.set_title(col)
for col in dataset.columns :
    # histogram for each column
    data = dataset[col]
    if col in categorical_cols :
        data = data.astype("category").cat.codes
    fig, ax = plt.subplots()
    ax.hist(data, density=False, histtype='bar')
    ax.set_title(col)
#Correlation matrix
set_option('precision', 2)
pyplot.figure(figsize=(20,10))
cors = abs(dataset.corr(method='pearson'))
seaborn.heatmap(cors, mask=np.triu(np.ones_like(cors, dtype=bool)), vmin=0, vmax=1, cmap='Blues', annot=True)
pyplot.show()
```
## 4. Edit data
```
for col in dataset.columns :
    if not dataset[col].isnull().values.any() :
        print(col, ':', 'NO NaN values')
    else :
        print(col, ':', 'NaN values found')
        print('Number of NaN values: ', dataset[col].isnull().sum())
#quick look at the breakdown of class values
for col in categorical_cols :
    dataset[col] = dataset[col].astype('category')
    print('###########################')
    print(dataset.groupby(col).size())
```
## 5. Data split (train and test)
```
from sklearn.model_selection import train_test_split
#Split data indices into train and test
idx_train, idx_test = train_test_split(dataset.index.tolist(), train_size=0.8, random_state=42, shuffle=True)
print('Train data length: ', len(idx_train))
print('Test data length: ', len(idx_test))
print('Total data length: ', len(idx_train) + len(idx_test))
#Select train data and save locally
diabetes_train_data = dataset.loc[idx_train]
diabetes_train_data.to_csv(HOME_PATH + 'TRAIN DATASETS/A_Diabetes_Data_Real_Train.csv', index=False)
#Select test data and save locally
diabetes_test_data = dataset.loc[idx_test]
diabetes_test_data.to_csv(HOME_PATH + 'TEST DATASETS/A_Diabetes_Data_Real_Test.csv', index=False)
print('Train data size: ', diabetes_train_data.shape)
print('Test data size: ', diabetes_test_data.shape)
```
| github_jupyter |
```
import os
os.getcwd()
%cd ..
from pathlib import Path
from mimic.utils.experiment import MimicExperiment
from mimic.utils.filehandling import set_paths
from mimic.utils import plot
from mimic.utils.text import tensor_to_text
import json
import torch
from PIL import ImageFont
try:
    font = ImageFont.truetype('FreeSerif.ttf', 38)
except:
    font = ImageFont.truetype('/Library/Fonts/Arial.ttf', 38)
# experiment_dir = '~/klugh/mimic/moe/non_factorized/Mimic_2020_11_02_09_33_45_520718'
experiment_dir = '/Users/Hendrik/Documents/master3/leomed_klugh/mimic/moe/Mimic_2020_12_06_22_47_59_308716'
flags_path = os.path.expanduser(os.path.join(experiment_dir,'flags.rar'))
FLAGS = torch.load(flags_path)
FLAGS = set_paths(FLAGS)
FLAGS.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
alphabet_path = os.path.join(str(Path(os.getcwd())), 'alphabet.json')
with open(alphabet_path) as alphabet_file:
    alphabet = str(''.join(json.load(alphabet_file)))
path =os.path.expanduser(os.path.join(experiment_dir,'checkpoints/0149/mm_vae'))
print(os.path.exists(path))
print(FLAGS.dir_data)
FLAGS.dir_data = os.path.expanduser('~/klugh')
FLAGS.dir_clf=os.path.expanduser('~/klugh/mimic/trained_classifiers')
mimic_experiment = MimicExperiment(flags=FLAGS, alphabet=alphabet)
%%capture
mimic_experiment.mm_vae.to(FLAGS.device);
print(FLAGS.img_size)
mimic_experiment.mm_vae.load_state_dict(state_dict=torch.load(path))
num_samples = 5
random_samples = mimic_experiment.mm_vae.generate(num_samples)
epoch = 299
import numpy as np
print(np.unique(random_samples['text'].detach().cpu()))
print(tensor_to_text(mimic_experiment, random_samples['text']))
print(random_samples['text'].shape)
mods = mimic_experiment.modalities
random_plots = dict();
for k, m_key_in in enumerate(mods.keys()):
    mod = mods[m_key_in]
    samples_mod = random_samples[m_key_in]
    rec = torch.zeros(mimic_experiment.plot_img_size,
                      dtype=torch.float32).repeat(num_samples, 1, 1, 1)
    for l in range(0, num_samples):
        rand_plot = mod.plot_data(mimic_experiment, samples_mod[l])
        rec[l, :, :, :] = rand_plot
    random_plots[m_key_in] = rec
for k, m_key in enumerate(mods.keys()):
    fn = os.path.join(mimic_experiment.flags.dir_random_samples, 'random_epoch_' +
                      str(epoch).zfill(4) + '_' + m_key + '.png')
    mod_plot = random_plots[m_key]
    p = plot.create_fig(fn, mod_plot, 10, save_figure=mimic_experiment.flags.save_figure)
    random_plots[m_key] = p
print(random_plots.keys())
from matplotlib import pyplot as plt
plt.imshow(random_plots['PA'])
plt.show()
plt.imshow(random_plots['Lateral'])
plt.show()
plt.imshow(random_plots['text'])
plt.show()
```
| github_jupyter |

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/StatisticsProject/statistics-project.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# Statistics Project
For this project, you will collect numerical data about a topic that interests you. Then you will perform a statistical analysis of your data and report on your findings. You will be expected to present your findings and your predictions using a Jupyter notebook, plus be prepared to explain your results when asked questions. There are a number of starter notebooks in the [AccessingData](AccessingData) folder.
For more background on statistics, check out this [Introductory Statistics Textbook](https://openstax.org/books/introductory-statistics/pages/1-introduction), this [online course](http://www.learnalberta.ca/content/t4tes/courses/senior/math20-2/index.html?l1=home&l2=4-m4&l3=4-m4-l01&page=m4/m20_2_m4_l01.html&title=Mathematics%2020-2:%20Module%204:%20Lesson%201), or these [class notes](https://sites.google.com/a/share.epsb.ca/ms-carlson-s-math-site/20-2-class-notes).
## Part 1: Creating an Action Plan
Create a research question on a statistical topic you would like to answer.
Think of some subjects that interest you. Then make a list of topics that are related to each
subject. Once you have chosen several topics, do some research to see which topic would best
support a project. Of these, choose the one that you think is the best.
Some questions to consider when selecting a topic:
* Does the topic interest you?
* Is the topic practical to research?
* Can you find enough numerical data to do a statistical analysis?
* Is there an important issue related to the topic?
* Will your audience appreciate your presentation?
You may choose a topic where you collect the data yourself through surveys, interviews, and direct observations (*primary data*) **OR** where you find data that has already been collected through other sources such as websites, newspapers, magazines, etc. (*secondary data*).
### Primary Data
* To make sure that you have an adequate amount of information with which you can perform a statistical analysis, **you must collect at least 100 data values**. If you are collecting your own data, and it involves surveying people, please note that if you collect any [personally identifiable information](https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda_brief/#_h2) such as age and name then you'll need to obtain people's consent.
#### Ideas for Primary Data Topics:
Feel free to choose a topic that is not in this list, but keep in mind that you need to be able to find enough data.
* People's: height, shoe size, etc.
* Family size, number of children in a family, the time separation between siblings, age
difference between mother and father
* Number in a household of: pets, TVs, books, mobile devices, vehicles, etc.
* Number of mobile devices that a person has owned
* Number of hours of television watched in a day or week
* Number of songs on a person’s playlist
* Number of hours on the phone in a day or week
* Number of text messages sent in a day or week
* Mass of apples or any other type of produce
* The number of items of your favourite product sold in a day (contact local retailers)
* The length (time or distance) of a person's commute to school or work
* The length of a poet’s poems (either by lines or words)
* The size of classes
* The number or percentage of each gender in classes
* How long people can keep their eyes open or stand on one foot
* Number of people on the different school clubs or sports teams
### Secondary Data
If you choose this type of data, please make sure your research question is well-defined. To make sure that you have an adequate amount of information with which you can perform a statistical analysis, you must find at least 100 data values.
You may want to do a comparison (e.g. compare climate in provinces, compare career stats of two or more hockey players, etc.) in order to get enough information to perform a statistical analysis.
#### Ideas for Secondary Data Topics:
You can use the starter notebooks in the [AccessingData folder](AccessingData). Feel free to choose a topic that is not in that list, but keep in mind that you need to be able to find enough data.
### Creating Your Research Question or Statement
A good question requires thought and planning. A well-written research question or statement clarifies exactly what your project is designed to do. It should have the following characteristics:
* The research topic is easily identifiable and the purpose of the research is clear.
* The question/statement is focused. The people who are listening to or reading the question/statement will know what you are going to be researching.
### Evaluating Your Research Question or Statement
* Does the question or statement clearly identify the main objective of the research?
* Are you confident that the question or statement will lead you to sufficient and appropriate data to reach a conclusion?
* Can you use the statistical methods you learned in class to analyze the data?
* Is the question or statement interesting? Does it make you want to learn more?
### Your Turn:
A. Write a research question for your topic.
B. Use the above checklist to evaluate your question. Adjust your question as needed.
C. Be prepared to discuss with your teacher your research question and your plan for collecting or finding the data.
## Part 2: Carrying Out Your Research
A. Decide if you will use primary data, secondary data, or both. Explain how you made your decision.
B. Make a plan you can follow to collect your data.
C. Collect the data. There is a sheet at the back of this booklet for you to record your data.
* If using primary data, describe your data collection method(s).
* If using secondary data, you must record detailed information about your source(s), so that you can cite them in your report.
* Consider the type of data you need and ensure that you have a reliable source (or sources) for that data, especially if you are using secondary data.
## Part 3: Analyzing Your Data
Statistical tools can help you analyze and interpret the data you collect. You need to think carefully about which statistical tools are most applicable to your topic. Some tools may work well; others may not apply. Keep in mind that you will be marked on the thoroughness of your statistical analysis, so don’t try to scrape by with the bare minimum!
### Tools
* Data Table
* Visualization(s) (e.g. Histogram, Frequency Polygon, Scatterplot)
* Measures of Central Tendency (Mean, Median, Mode)
* Which is the most appropriate for measuring the “average” of your data?
* Measures of Dispersion: Range and Standard Deviation
* Comment on the dispersion of your data.
* Outliers
* Are there outliers in your data? Do these skew the results? Would it be more appropriate to remove the outliers before calculating measures of central tendency or dispersion?
* Normal Distribution
* Does your data approximate a normal distribution? Explain why or why not.
* Z-Scores
* Find the z-score of a number of significant data points. If your data is normally distributed, find the percentage of data that is below or above a significant data point.
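These measures can all be computed directly in a notebook. Below is a small illustration using Python's standard `statistics` module; the sample values (hours of television watched per day by ten people) are invented for demonstration:

```python
import statistics

# Hypothetical sample: hours of television watched per day by ten people
hours = [1, 2, 2, 3, 3, 3, 4, 4, 5, 13]

mean = statistics.mean(hours)      # "average", pulled upward by the outlier 13
median = statistics.median(hours)  # middle value, resistant to the outlier
mode = statistics.mode(hours)      # most frequent value
stdev = statistics.pstdev(hours)   # population standard deviation

# z-score: how many standard deviations a data point sits from the mean
z = (13 - mean) / stdev            # z-score of the largest value

print(mean, median, mode, round(stdev, 2), round(z, 2))
```

Notice how the mean and median differ here: the single outlier (13 hours) pulls the mean up, which is exactly the kind of observation your analysis should comment on.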
### Your Turn:
A. Determine which statistical tools are appropriate for your data.
B. Meet with your teacher to evaluate your plan for analyzing your data. Be prepared to explain why you chose the statistical tools you did. Modify your plan as necessary.
C. Use statistics to analyze your data.
* Your analysis must include a table and a graphical display (histogram or frequency polygon) of the data you collected. Make sure these are neat and labeled on a scale large enough to display to the class.
* Include all appropriate measures of central tendency, measures of dispersion, and outliers.
* Comment on trends in your data. Interpret the statistical measures you used. What conclusions can you draw?
* If you have collected data on more than one group, person, time frame, or region, comment on the differences between them. What conclusions can you draw?
### Example Projects
[Income Per Person](example-project-income-per-person.ipynb)
[Soccer Data](example-project-soccer.ipynb)
## Part 4: The Final Product and Presentation
Your final presentation should be more than just a factual written report of the information you have found. To make the most of your hard work, select a format for your final presentation that will suit your strengths, as well as your topic. You will be presenting the results of your research topic via video (approximately 3 to 5 minutes), and must include visuals.
### Evaluating Your Own Presentation
Before giving your presentation, you can use these questions to decide if your presentation will be effective:
* Did I define my topic well? What is the best way to define my topic?
* Is my presentation focused? Will my audience (classmates and teacher) find it focused?
* Did I organize my information effectively? Is it obvious that I am following a plan in my presentation?
* Does my topic suit one presentation format better than others?
* From which presentation format do I think my audience will gain the greatest understanding?
* Am I satisfied with my presentation? What might make it more effective?
* What unanswered questions might my audience have?
### Your Turn:
A. Choose a format for your presentation, and create your presentation.
B. Meet with your teacher to discuss your presentation plans and progress.
C. Present your topic via video.
D. Submit your statistical analysis and written conclusions.
## Statistical Research Project Scoring Rubric
|Criteria|4 Excellent|3 Proficient|2 Adequate|1 Limited|Insufficient or Blank|
|---|---|---|---|---|---|
|Data Collection|**pertinent** and from **reliable** and **documented** sources|**relevant** and from **substantially reliable** and **documented** sources|**suitable** but from **questionable** or **partially documented** sources|unsuitable|no data collected|
|Process and Reflection (Checkpoints)|prepared to discuss topic, plans, and methods insightfully|prepared to discuss topic, plans, and methods|partially prepared; discusses topic, plans, and methods vaguely|unprepared; discusses topic, plans, and methods with difficulty or incorrectly|no discussion|
|Statistical Calculations|thorough statistical analysis with accurate calculations|adequate statistical analysis with essentially correct calculations|superficial statistical analysis with partially correct calculations|minimal statistical analysis with flawed calculations|no statistical calculations performed|
|Interpretation|astute and insightful|credible and understandable|rudimentary and minimal|incomplete or flawed|no interpretation of the data and statistical measures|
|Organization and Presentation|purposeful and compelling|logical and effective|reasonable and simplistic|disorganized and ineffective|no organization or presentation|
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Lesson Focus:
Build a neural network model with Keras
Examine the results of the optimizer
# Example Goal:
Use the CIFAR-10 image dataset to walk through a complete neural network
```
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import os
# Avoid "Blas GEMM launch failed" errors caused by dynamic GPU/CPU memory allocation
import tensorflow as tf
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
batch_size = 32
num_classes = 10
epochs = 20
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# Inspect the dataset's shape and basic information
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# Step 1: choose a model; Sequential is a linear stack of network layers
model = Sequential()
# Step 2: build the network layers
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10)) # the output has 10 classes, so the dimension is 10
model.add(Activation('softmax')) # the last layer uses softmax as its activation function
# After the model is built, count the total number of parameters
print("Total Parameters:%d" % model.count_params())
# Print the model summary
model.summary()
# Step 3: compile
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Normalize the data
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# Whether to apply data augmentation
if not data_augmentation:
    print('Not using data augmentation.')
    history = model.fit(x_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(x_test, y_test),
                        shuffle=True)
else:
    print('Using real-time data augmentation.')
    print('')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        zca_epsilon=1e-06,  # epsilon for ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        # randomly shift images horizontally (fraction of total width)
        width_shift_range=0.1,
        # randomly shift images vertically (fraction of total height)
        height_shift_range=0.1,
        shear_range=0.,  # set range for random shear
        zoom_range=0.,  # set range for random zoom
        channel_shift_range=0.,  # set range for random channel shifts
        # set mode for filling points outside the input boundaries
        fill_mode='nearest',
        cval=0.,  # value used for fill_mode = "constant"
        horizontal_flip=True,  # randomly flip images horizontally
        vertical_flip=False,  # randomly flip images vertically
        # set rescaling factor (applied before any other transformation)
        rescale=None,
        # set function that will be applied on each input
        preprocessing_function=None,
        # image data format, either "channels_first" or "channels_last"
        data_format=None,
        # fraction of images reserved for validation (strictly between 0 and 1)
        validation_split=0.0)
    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)
    # Train on batches produced by the generator; calling model.fit on the raw
    # arrays here would silently skip the augmentation configured above.
    history = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                                  epochs=epochs,
                                  validation_data=(x_test, y_test))
'''
Step 4: training
Some of the parameters of .fit:
batch_size: number of samples in each training batch
epochs: number of passes over the training data
shuffle: whether to shuffle the data before training
validation_split: fraction of the data to hold out for cross-validation
verbose: output mode - 0: silent, 1: progress bar, 2: one line per epoch
'''
# Step 5: save the model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Step 6: evaluate and output
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
import matplotlib.pyplot as plt
%matplotlib inline
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
```
# The data block API
```
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
from fastai.text import *
from fastai.vision import *
np.random.seed(42)
```
The data block API lets you customize the creation of a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:
1. Where are the inputs and how to create them?
1. How to split the data into training and validation sets?
1. How to label the inputs?
1. What transforms to apply?
1. How to add a test set?
1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)?
Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices, or depending on the folder they are in. Your labels can come from your csv file or your dataframe, from folders, or from a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally, you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.html#DataBunch) (batch size, collate function...)
The data block API is so called because you can mix and match each one of those blocks with the others, allowing for total flexibility to create your customized [`DataBunch`](/basic_data.html#DataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.html#DataBunch) subclasses are great for beginners, but you can't always make your data fit in the tracks they require.
<img src="imgs/mix_match.png" alt="Mix and match" width="200">
As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.
## Examples of use
Let's begin with our traditional MNIST example.
```
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
(path/'train').ls()
```
In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing:
```
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.html#train) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this:
```
data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders
.split_by_folder() #How to split in train/valid? -> use the folders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.add_test_folder() #Optionally add a test set (here default name is test)
.transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.show_batch(3, figsize=(6,6), hide_axis=False)
```
Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:
```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms)
```
With the data block API we can rewrite this like that:
```
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
#Where to find the data? -> in planet 'train' folder
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_df(sep=' ')
#How to label? -> use the csv file
.transform(planet_tfms, size=128)
#Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(9,7))
```
The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder.
```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```
We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)
```
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
```
And we define the following function that infers the mask filename from the image filename.
```
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```
Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.
```
data = (SegmentationItemList.from_folder(path_img)
.random_split_by_pct()
.label_from_func(get_y_fn, classes=codes)
.transform(get_transforms(), tfm_y=True, size=128)
.databunch())
data.show_batch(rows=2, figsize=(7,5))
```
Another example is object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of image names along with the list of labelled bboxes associated with each. We convert it to a dictionary that maps image names to their bboxes and then write the function that will give us the target for each image filename.
```
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]
```
The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.
```
data = (ObjectItemList.from_folder(coco)
#Where are the images? -> in coco
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.transform(get_transforms(), tfm_y=True)
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))
```
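The padding idea behind that collate step can be sketched in plain Python. This is a hypothetical illustration of the principle, not fastai's actual `bb_pad_collate` implementation (which works on tensors):

```python
# Hypothetical sketch: pad every sample's box list to the longest in the batch,
# so samples with different numbers of bounding boxes can be stacked together.
def pad_bboxes(batch, pad_value=0.0):
    """batch is a list of (image, boxes) pairs; boxes is a list of 4-coordinate lists."""
    max_boxes = max(len(boxes) for _, boxes in batch)
    padded = []
    for img, boxes in batch:
        padding = [[pad_value] * 4 for _ in range(max_boxes - len(boxes))]
        padded.append((img, list(boxes) + padding))
    return padded

batch = [("img0", [[0, 0, 10, 10]]),
         ("img1", [[1, 1, 5, 5], [2, 2, 8, 8]])]
out = pad_bboxes(batch)
print([len(boxes) for _, boxes in out])  # every sample now has the same count
```

A real collate function would additionally mark the padded rows so the loss ignores them.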
But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.
```
imdb = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
#Where are the inputs? Column 'text' of this csv
.random_split_by_pct()
#How to split it? Randomly with the default 20%
.label_for_lm()
#Label it for a language model
.databunch())
data_lm.show_batch()
```
For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`.
```
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
.split_from_df(col='is_valid')
.label_from_df(cols='label')
.databunch())
data_clas.show_batch()
```
Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.html#PreProcessor)s that are going to be applied to our data once the splitting and labelling is done.
```
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = '>=50k'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_idx(valid_idx=range(800,1000))
.label_from_df(cols=dep_var)
.databunch())
data.show_batch()
```
## Step 1: Provide inputs
The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.html#ItemList)).
```
show_doc(ItemList, title_level=3)
```
This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling.
It has multiple subclasses depending on the type of data you're handling. Here is a quick list:
- [`CategoryList`](/data_block.html#CategoryList) for labels in classification
- [`MultiCategoryList`](/data_block.html#MultiCategoryList) for labels in a multi classification problem
- [`FloatList`](/data_block.html#FloatList) for float labels in a regression problem
- [`ImageItemList`](/vision.data.html#ImageItemList) for data that are images
- [`SegmentationItemList`](/vision.data.html#SegmentationItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList)
- [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList) for segmentation masks
- [`ObjectItemList`](/vision.data.html#ObjectItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to `ObjectLabelList`
- `ObjectLabelList` for object detection
- [`PointsItemList`](/vision.data.html#PointsItemList) for points (of the type [`ImagePoints`](/vision.image.html#ImagePoints))
- [`ImageImageList`](/vision.data.html#ImageImageList) for image to image tasks
- [`TextList`](/text.data.html#TextList) for text data
- [`TextFilesList`](/text.data.html#TextFilesList) for text data stored in files
- [`TabularList`](/tabular.data.html#TabularList) for tabular data
- [`CollabList`](/collab.html#CollabList) for collaborative filtering
Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods
```
show_doc(ItemList.from_folder)
show_doc(ItemList.from_df)
show_doc(ItemList.from_csv)
```
### Optional step: filter your data
The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods.
```
show_doc(ItemList.filter_by_func)
show_doc(ItemList.filter_by_folder)
show_doc(ItemList.filter_by_rand)
show_doc(ItemList.to_text)
show_doc(ItemList.use_partial_data)
```
### Writing your own [`ItemList`](/data_block.html#ItemList)
First check if you can't easily customize one of the existing subclass by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.html#PreProcessor) with the `_processor` class variable
If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed.
```
show_doc(ItemList.analyze_pred)
show_doc(ItemList.get)
show_doc(ItemList.new)
```
You'll never need to subclass this normally; just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`.
```
show_doc(ItemList.reconstruct)
```
## Step 2: Split the data between the training and the validation set
This step is normally straightforward; you just have to pick one of the following functions depending on what you need.
```
show_doc(ItemList.no_split)
show_doc(ItemList.random_split_by_pct)
show_doc(ItemList.split_by_files)
show_doc(ItemList.split_by_fname_file)
show_doc(ItemList.split_by_folder)
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(ItemList.split_by_idx)
show_doc(ItemList.split_by_idxs)
show_doc(ItemList.split_by_list)
show_doc(ItemList.split_by_valid_func)
show_doc(ItemList.split_from_df)
jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.")
```
## Step 3: Label the inputs
To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.html#ItemList), and if there is none, it will go to [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) or [`FloatList`](/data_block.html#FloatList) depending on the type of the labels). This is implemented in the following function:
```
show_doc(ItemList.get_label_cls)
```
The first example in these docs created labels as follows:
```
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train
```
If you want to save the data necessary to recreate your [`LabelList`](/data_block.html#LabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:
```python
ll.train.to_csv('tmp.csv')
```
Or just grab a `pd.DataFrame` directly:
```
ll.to_df().head()
show_doc(ItemList.label_empty)
show_doc(ItemList.label_from_list)
show_doc(ItemList.label_from_df)
jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.")
show_doc(ItemList.label_const)
show_doc(ItemList.label_from_folder)
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(ItemList.label_from_func)
show_doc(ItemList.label_from_re)
show_doc(CategoryList, title_level=3)
```
[`ItemList`](/data_block.html#ItemList) suitable for storing labels in `items` belonging to `classes`. If `None` is passed, `classes` will be determined from the unique labels. `processor` will default to [`CategoryProcessor`](/data_block.html#CategoryProcessor).
```
show_doc(MultiCategoryList, title_level=3)
```
It will store lists of labels in `items` belonging to `classes`. If `None` is passed, `classes` will be determined from the unique labels. `sep` is used to split the content of `items` into a list of tags.
If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels).
```
show_doc(FloatList, title_level=3)
show_doc(EmptyLabelList, title_level=3)
```
## Invisible step: preprocessing
This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.html#ItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.html#PreProcessor) classes).
A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can process texts to tokenize and then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.html#PreProcessor) and applied on the validation set.
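This fit-on-train, apply-everywhere pattern can be illustrated with a minimal stand-alone sketch (a hypothetical stand-in, not fastai's actual `PreProcessor` class):

```python
class MedianFillProcessor:
    """Fill missing values (None) with the median computed on the training set."""
    def __init__(self):
        self.median = None

    def fit(self, train_items):
        # State is computed on the training set only...
        present = sorted(x for x in train_items if x is not None)
        mid = len(present) // 2
        if len(present) % 2:
            self.median = present[mid]
        else:
            self.median = (present[mid - 1] + present[mid]) / 2

    def process(self, items):
        # ...then applied unchanged to train, validation, and test items.
        return [self.median if x is None else x for x in items]

proc = MedianFillProcessor()
proc.fit([1.0, None, 3.0, 5.0])   # median of the present training values is 3.0
print(proc.process([None, 2.0]))  # the missing validation value becomes 3.0
```

The important point is that `process` never recomputes the statistic, so validation and test data are treated with exactly the state learned from the training set.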
This is the generic class for all processors.
```
show_doc(PreProcessor, title_level=3)
show_doc(PreProcessor.process_one)
```
Process one `item`. This method needs to be written in any subclass.
```
show_doc(PreProcessor.process)
```
Process a dataset. This defaults to applying `process_one` to every `item` of `ds`.
```
show_doc(CategoryProcessor, title_level=3)
show_doc(CategoryProcessor.generate_classes)
show_doc(MultiCategoryProcessor, title_level=3)
show_doc(MultiCategoryProcessor.generate_classes)
```
## Optional steps
### Add transforms
Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms.
```
show_doc(LabelLists.transform)
```
This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them, and if set to `True`, the transforms will be applied to both the input and the target.
### Add a test set
To add a test set, you can use one of the two following methods.
```
show_doc(LabelLists.add_test)
jekyll_note("Here `items` can be an `ItemList` or a collection.")
show_doc(LabelLists.add_test_folder)
```
**Important!** No labels will be collected, even if they are available. Instead, either the passed `label` argument or the first label from `train_ds` will be used for all entries of this dataset.
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.
If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:
```
data_test = (ImageItemList.from_folder(path)
.split_by_folder(train='train', valid='test')
.label_from_folder()
...)
```
Another approach: use a normal validation set during training, and then, once training is over, validate the labelled test set by swapping it in as the validation set:
```
tfms = []
path = Path('data').resolve()
data = (ImageItemList.from_folder(path)
.split_by_pct()
.label_from_folder()
.transform(tfms)
.databunch()
.normalize() )
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)
# now replace the validation dataset entry with the test dataset as a new validation dataset:
# everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder`
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageItemList.from_folder(path)
.split_by_folder(train='train', valid='test')
.label_from_folder()
.transform(tfms)
.databunch()
.normalize()
)
learn.data = data_test
learn.validate()
```
Of course, your data block can be totally different; this is just an example.
## Step 4: convert to a [`DataBunch`](/basic_data.html#DataBunch)
This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.html#DataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.html#DataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you.
```
show_doc(LabelLists.databunch)
```
## Inner classes
```
show_doc(LabelList, title_level=3)
```
Optionally apply `tfms` to `y` if `tfm_y` is `True`.
```
show_doc(LabelList.export)
show_doc(LabelList.transform_y)
show_doc(LabelList.load_empty)
show_doc(LabelList.process)
show_doc(LabelList.set_item)
show_doc(LabelList.to_df)
show_doc(LabelList.to_csv)
show_doc(LabelList.transform)
show_doc(ItemLists, title_level=3)
show_doc(ItemLists.label_from_lists)
show_doc(ItemLists.transform)
show_doc(ItemLists.transform_y)
show_doc(LabelLists, title_level=3)
show_doc(LabelLists.get_processors)
show_doc(LabelLists.load_empty)
show_doc(LabelLists.process)
```
## Helper functions
```
show_doc(get_files)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(CategoryList.new)
show_doc(LabelList.new)
show_doc(CategoryList.get)
show_doc(LabelList.predict)
show_doc(ItemList.new)
show_doc(ItemList.process_one)
show_doc(ItemList.process)
show_doc(MultiCategoryProcessor.process_one)
show_doc(FloatList.get)
show_doc(CategoryProcessor.process_one)
show_doc(CategoryProcessor.create_classes)
show_doc(CategoryProcessor.process)
show_doc(MultiCategoryList.get)
show_doc(FloatList.new)
show_doc(FloatList.reconstruct)
show_doc(MultiCategoryList.analyze_pred)
show_doc(MultiCategoryList.reconstruct)
show_doc(CategoryList.reconstruct)
show_doc(CategoryList.analyze_pred)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(EmptyLabelList.reconstruct)
show_doc(EmptyLabelList.get)
```
| github_jupyter |
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) | [Contents](Index.ipynb) | [Computation on Arrays: Broadcasting](02.05-Computation-on-arrays-broadcasting.ipynb) >
# Aggregations: Min, Max, and Everything In Between
Often when faced with a large amount of data, a first step is to compute summary statistics for the data in question.
Perhaps the most common summary statistics are the mean and standard deviation, which allow you to summarize the "typical" values in a dataset, but other aggregates are useful as well (the sum, product, median, minimum and maximum, quantiles, etc.).
NumPy has fast built-in aggregation functions for working on arrays; we'll discuss and demonstrate some of them here.
## Summing the Values in an Array
As a quick example, consider computing the sum of all values in an array.
Python itself can do this using the built-in ``sum`` function:
```
import numpy as np
L = np.random.random(100)
sum(L)
```
The syntax is quite similar to that of NumPy's ``sum`` function, and the result is the same in the simplest case:
```
np.sum(L)
```
However, because it executes the operation in compiled code, NumPy's version of the operation is computed much more quickly:
```
big_array = np.random.rand(1000000)
%timeit sum(big_array)
%timeit np.sum(big_array)
```
Be careful, though: the ``sum`` function and the ``np.sum`` function are not identical, which can sometimes lead to confusion!
In particular, their optional arguments have different meanings, and ``np.sum`` is aware of multiple array dimensions, as we will see in the following section.
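The mismatch is easy to demonstrate: the built-in `sum` treats its second positional argument as a *start* value added to the total, while `np.sum` treats it as the *axis* to collapse.

```python
import numpy as np

M = np.arange(6).reshape(2, 3)

# Built-in sum: the second positional argument is a *start* value.
assert sum([1, 2, 3], 10) == 16

# np.sum: the second positional argument is the *axis* to collapse.
col_sums = np.sum(M, 0)   # sums down each column
assert col_sums.tolist() == [3, 5, 7]

# Built-in sum over a 2-D array iterates over rows, adding them elementwise
# in pure Python — same values here, but much slower on large arrays.
row_total = sum(M)
assert row_total.tolist() == [3, 5, 7]
```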
## Minimum and Maximum
Similarly, Python has built-in ``min`` and ``max`` functions, used to find the minimum value and maximum value of any given array:
```
min(big_array), max(big_array)
```
NumPy's corresponding functions have similar syntax, and again operate much more quickly:
```
np.min(big_array), np.max(big_array)
%timeit min(big_array)
%timeit np.min(big_array)
```
For ``min``, ``max``, ``sum``, and several other NumPy aggregates, a shorter syntax is to use methods of the array object itself:
```
print(big_array.min(), big_array.max(), big_array.sum())
```
Whenever possible, make sure that you are using the NumPy version of these aggregates when operating on NumPy arrays!
### Multi-dimensional aggregates
One common type of aggregation operation is an aggregate along a row or column.
Say you have some data stored in a two-dimensional array:
```
M = np.random.random((3, 4))
print(M)
```
By default, each NumPy aggregation function will return the aggregate over the entire array:
```
M.sum()
```
Aggregation functions take an additional argument specifying the *axis* along which the aggregate is computed. For example, we can find the minimum value within each column by specifying ``axis=0``:
```
M.min(axis=0)
```
The function returns four values, corresponding to the four columns of numbers.
Similarly, we can find the maximum value within each row:
```
M.max(axis=1)
```
The way the axis is specified here can be confusing to users coming from other languages.
The ``axis`` keyword specifies the *dimension of the array that will be collapsed*, rather than the dimension that will be returned.
So specifying ``axis=0`` means that the first axis will be collapsed: for two-dimensional arrays, this means that values within each column will be aggregated.
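A quick shape check makes the convention concrete: the axis you pass is the one that disappears from the result.

```python
import numpy as np

M = np.arange(12).reshape(3, 4)

# axis=0 collapses the 3 rows: one aggregate per column.
assert M.min(axis=0).shape == (4,)

# axis=1 collapses the 4 columns: one aggregate per row.
assert M.max(axis=1).shape == (3,)
```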
### Other aggregation functions
NumPy provides many other aggregation functions, but we won't discuss them in detail here.
Additionally, most aggregates have a ``NaN``-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point ``NaN`` value (for a fuller discussion of missing data, see [Handling Missing Data](03.04-Missing-Values.ipynb)).
Some of these ``NaN``-safe functions were not added until NumPy 1.8, so they will not be available in older NumPy versions.
The following table provides a list of useful aggregation functions available in NumPy:
|Function Name | NaN-safe Version | Description |
|-------------------|---------------------|-----------------------------------------------|
| ``np.sum`` | ``np.nansum`` | Compute sum of elements |
| ``np.prod`` | ``np.nanprod`` | Compute product of elements |
| ``np.mean`` | ``np.nanmean`` | Compute mean of elements |
| ``np.std`` | ``np.nanstd`` | Compute standard deviation |
| ``np.var`` | ``np.nanvar`` | Compute variance |
| ``np.min`` | ``np.nanmin`` | Find minimum value |
| ``np.max`` | ``np.nanmax`` | Find maximum value |
| ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value |
| ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value |
| ``np.median`` | ``np.nanmedian`` | Compute median of elements |
| ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements |
| ``np.any`` | N/A | Evaluate whether any elements are true |
| ``np.all`` | N/A | Evaluate whether all elements are true |
We will see these aggregates often throughout the rest of the book.
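The difference is easy to see with a small array containing a missing value: a single ``NaN`` poisons the ordinary aggregates, while the ``NaN``-safe versions simply skip it.

```python
import numpy as np

x = np.array([1.0, 2.0, np.nan, 4.0])

# Ordinary aggregates propagate NaN...
assert np.isnan(np.sum(x))
assert np.isnan(np.min(x))

# ...while the NaN-safe counterparts ignore missing values.
assert np.nansum(x) == 7.0
assert np.nanmin(x) == 1.0
```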
## Example: What is the Average Height of US Presidents?
Aggregates available in NumPy can be extremely useful for summarizing a set of values.
As a simple example, let's consider the heights of all US presidents.
This data is available in the file *president_heights.csv*, which is a simple comma-separated list of labels and values:
```
!head -4 data/president_heights.csv
```
We'll use the Pandas package, which we'll explore more fully in [Chapter 3](03.00-Introduction-to-Pandas.ipynb), to read the file and extract this information (note that the heights are measured in centimeters).
```
import pandas as pd
data = pd.read_csv('data/president_heights.csv')
heights = np.array(data['height(cm)'])
print(heights)
```
Now that we have this data array, we can compute a variety of summary statistics:
```
print("Mean height: ", heights.mean())
print("Standard deviation:", heights.std())
print("Minimum height: ", heights.min())
print("Maximum height: ", heights.max())
```
Note that in each case, the aggregation operation reduced the entire array to a single summarizing value, which gives us information about the distribution of values.
We may also wish to compute quantiles:
```
print("25th percentile: ", np.percentile(heights, 25))
print("Median: ", np.median(heights))
print("75th percentile: ", np.percentile(heights, 75))
```
We see that the median height of US presidents is 182 cm, or just shy of six feet.
Of course, sometimes it's more useful to see a visual representation of this data, which we can accomplish using tools in Matplotlib (we'll discuss Matplotlib more fully in [Chapter 4](04.00-Introduction-To-Matplotlib.ipynb)). For example, this code generates the following chart:
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # set plot style
plt.hist(heights)
plt.title('Height Distribution of US Presidents')
plt.xlabel('height (cm)')
plt.ylabel('number');
```
These aggregates are some of the fundamental pieces of exploratory data analysis that we'll explore in more depth in later chapters of the book.
<!--NAVIGATION-->
< [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) | [Contents](Index.ipynb) | [Computation on Arrays: Broadcasting](02.05-Computation-on-arrays-broadcasting.ipynb) >
| github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
DATA_DIR='/content/drive/MyDrive/BIR_Workshop/model_mesh'
!pip install livelossplot --quiet
from google.colab import drive
import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
import numpy as np
from sklearn.model_selection import train_test_split
import torchtext
from torch.utils.data import Dataset,DataLoader
from torchtext.legacy.data import Field, TabularDataset, BucketIterator, Iterator
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.optim as optim
import torch.nn.functional as F
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix,f1_score
from livelossplot import PlotLosses
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
NUM_CLASSES = 5
BATCH_SIZE = 32
```
MeSH Dataset Class...
```
class MESHDataset(Dataset):
    def __init__(self, numpy_file, label_file):
        try:
            self.data = np.load(numpy_file)
            self.labels = np.load(label_file)
        except Exception as err:
            raise Exception(f'ERROR OPENING FILES: {numpy_file} | {label_file}. See error below.\n{err}')

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        # Get the element at `idx`: an 89x89 matrix, flattened to a vector, plus its label
        return self.data[idx].flatten(), self.labels[idx]
```
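One detail worth noting in `__getitem__`: each 89×89 matrix is flattened to a 7921-element vector, which is the input size the first linear layer of the model below expects. A quick sanity check of that shape contract, using synthetic stand-ins for the real `.npy` files:

```python
import numpy as np

MATRIX_SIZE = 89  # matches the notebook's 89x89 matrices

# Synthetic stand-ins for the real train.npy / label files.
data = np.random.rand(10, MATRIX_SIZE, MATRIX_SIZE)
labels = np.random.randint(0, 5, size=10)

# What MESHDataset.__getitem__ returns for one index:
x, y = data[3].flatten(), labels[3]
assert x.shape == (MATRIX_SIZE**2,)   # 7921 inputs -> linear1's in_features
assert 0 <= y < 5                     # NUM_CLASSES = 5
```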
Baseline Model...
```
class BaselineModel(nn.Module):
    def __init__(self, matrix_size=89):
        super(BaselineModel, self).__init__()
        self.linear1 = nn.Linear(matrix_size**2, (matrix_size**2)//2)
        self.linear2 = nn.Linear((matrix_size**2)//2, (matrix_size**2)//4)
        self.linear3 = nn.Linear((matrix_size**2)//4, NUM_CLASSES)

    def forward(self, x):
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        x = F.relu(x)
        x = self.linear3(x)
        return x
```
Creating the dataloaders...
```
train_dataset = MESHDataset(os.path.join(DATA_DIR,'train.npy'), os.path.join(DATA_DIR,'grouped_train_labels.npy'))
dev_dataset = MESHDataset(os.path.join(DATA_DIR,'dev.npy'), os.path.join(DATA_DIR,'grouped_dev_labels.npy'))
test_dataset = MESHDataset(os.path.join(DATA_DIR,'test.npy'),os.path.join(DATA_DIR,'test_labels.npy'))
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
dev_dataloader = DataLoader(dev_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False)
```
Creating the model...
```
model = BaselineModel().to(device)
model
```
Training Configuration...
```
from sklearn.metrics import accuracy_score
def compute_accuracy(pred, target):
    return float(100 * accuracy_score(target.detach().numpy(), pred.argmax(-1).detach().numpy()))
#Define criterion: Categorical cross entropy
criterion = nn.CrossEntropyLoss()
#Define optimizer. For now, Adam
optimizer = optim.Adam(model.parameters(), lr=0.0001)
PRINT_FREQ = 15
DO_VALIDATION_STEP=15
NUM_EPOCHS = 1500
PATIENCE=50
PATH=os.path.join(DATA_DIR,'best_model_3')
best_acc = 0
liveloss = PlotLosses()
patience=0
early_stop = False
for epoch in range(NUM_EPOCHS):
    for i, (features, label) in enumerate(train_dataloader):
        features, label = features.to(device), label.to(device)
        optimizer.zero_grad()
        pred = model(features.float())
        loss = criterion(pred, label)
        loss.backward()
        optimizer.step()
        if i % DO_VALIDATION_STEP == 0:
            # Run validation
            model.eval()
            val_losses = []
            val_acc_list = []
            with torch.no_grad():
                for val_features, val_label in dev_dataloader:
                    val_features, val_label = val_features.to(device), val_label.to(device)
                    val_pred = model(val_features.float())
                    val_loss = criterion(val_pred, val_label)
                    val_losses.append(val_loss.item())
                    val_acc_list.append(compute_accuracy(val_pred.cpu(), val_label.cpu()))
            val_loss_ = np.mean(val_losses)
            val_acc = np.mean(val_acc_list)
            if val_acc > best_acc:
                best_acc = val_acc
                patience = 0
                # Save the best model weights
                torch.save(model.state_dict(), PATH)
            else:
                patience += 1
                if patience >= PATIENCE:
                    # A bare `break` here would only exit the batch loop,
                    # so flag early stopping for the epoch loop as well.
                    early_stop = True
                    break
            logs = {'loss': loss.item(), 'val_loss': val_loss_, 'val_accuracy': val_acc}
            liveloss.update(logs)
            liveloss.send()
            model.train()
    if early_stop:
        break
```
Compute test metrics...
```
model.eval()
test_acc_list = []
for test_features, test_label in test_dataloader:
    test_features, test_label = test_features.to(device), test_label.to(device)
    test_pred = model(test_features.float())
    test_acc_list.append(compute_accuracy(test_pred.cpu(), test_label.cpu()))
test_acc = np.mean(test_acc_list)
print(f'Test Metrics \n _________________ \n Mean Accuracy: {test_acc} ')
```
| github_jupyter |
# Demo_Chris
> Pure markup, demonstrate clustering work for detecting convoys.
As we investigated the AIS ship tracking data, we became interested in automatically detecting emergent behavior from groups of ships. For instance: can we automatically detect container ships following a shipping lane? Can we find groups of ships moving together in a convoy, or a fishing fleet working together?
We began investigating traffic from our ETL-ed `AIS` data from January 1, 2015. The ships are all centered off the coast of Alaska, near the Aleutian Islands. First, we wanted an overview of the average ship position each hour, to get a sense of the data:

Near the end of the peninsula, seen in our map as the grey trapezoid on the right hand side, we can see a significant amount of ships all following the same path. Investigation of the ships following this path reveals that they are primarily large container and tanker vessels, indicating a shipping lane and a good target for testing our shipping lane identification algorithms.
In the southwestern corner of the map, we see several ships moving along approximately the same course. After direct examination, we identify that the two ships represented by the blue and yellow markers, the `Gulf Valour` and the `Pole`, travel at the same rate alongside each other throughout our sample day. This is the exact type of feature we want for identification of convoys, making this a good test candidate.
Finding groups in large datasets such as this is a great application of clustering analysis. Here, we leveraged `HDBSCAN`, developed by [Campello et al.](https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html) in 2013. `HDBSCAN` improves upon the widely used density-based clustering algorithm `DBSCAN` by turning it into a hierarchical clustering algorithm, which allows it to discover clusters of varying densities within a dataset.
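As a minimal sketch of density-based clustering on ship positions (using scikit-learn's `DBSCAN` here for portability; the `hdbscan` package exposes an analogous `HDBSCAN` estimator with a `min_cluster_size` parameter — the data, `eps`, and thresholds below are illustrative, not the notebook's actual settings):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Two tight synthetic "convoys" plus scattered noise, as (lat, lon) pairs.
convoy_a = rng.normal([54.0, -165.0], 0.01, size=(20, 2))
convoy_b = rng.normal([52.5, -170.0], 0.01, size=(20, 2))
noise    = rng.uniform([50.0, -175.0], [56.0, -160.0], size=(10, 2))
positions = np.vstack([convoy_a, convoy_b, noise])

labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(positions)

# Each convoy collapses into one dense cluster; sparse points get label -1.
assert len(set(labels[:20])) == 1 and labels[0] != -1
assert len(set(labels[20:40])) == 1 and labels[20] != -1
assert labels[0] != labels[20]
```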
As a first pass at clustering, we directly used the ship positions over the course of the day. For each hour, we found the average position (lat-lon) of each ship (only when it checked in during that hour). We ended up with a dataset that looked like:

We then used the latitude and longitude of the ship during each hour to cluster ships together. Because we subset each position by hour, each ship will appear in a cluster each hour that it is present. This means that ships traveling along the same route at different times can end up in the same cluster. When this occurs for a large number of ships across differing hour intervals, we have a strong indication of a shipping lane.
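The hourly-averaging step can be sketched with a pandas groupby (the column names and timestamps here are illustrative — the real AIS schema differs):

```python
import pandas as pd

# Toy AIS check-ins: one row per position report.
ais = pd.DataFrame({
    "ship_id":   ["A", "A", "A", "B", "B"],
    "timestamp": pd.to_datetime([
        "2015-01-01 00:10", "2015-01-01 00:40",
        "2015-01-01 01:05", "2015-01-01 00:20", "2015-01-01 00:50"]),
    "lat": [54.00, 54.02, 54.05, 52.50, 52.48],
    "lon": [-165.00, -165.01, -165.03, -170.00, -170.02],
})

# Average position per ship per hour; a ship only appears in
# the hours during which it actually checked in.
hourly = (ais
          .groupby(["ship_id", ais["timestamp"].dt.floor("h")])[["lat", "lon"]]
          .mean()
          .reset_index())

assert len(hourly) == 3   # ship A appears in two hours, ship B in one
assert abs(hourly.loc[0, "lat"] - 54.01) < 1e-9
```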
<!--  -->
<img src="img/position_based_clusters.png" alt="position_based_clusters" style="width: 900px;"/>
After clustering based only on position, we see that we do find geographically similar groupings of ships. However, the shipping lane near the edge of the peninsula consists of many different cluster groupings, and many of the ships outside of the shipping lane also appear in the same clusters. We haven't really distinguished the types of activity we want, but it's a good start.
For the next iteration, we computed the average ship bearing for each time interval, using this as an additional feature. Ships traveling in the same shipping lane, or in a convoy, should often have the same approximate bearing. This, combined with the original lat-lon based proximity features, greatly improved the clustering. Here, we show the new set of clusters produced when we include bearing as a feature. Notice that the shipping lane just off the coast now becomes very distinct and well clustered.
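Ships in the same lane or convoy share an approximate heading, and the bearing between consecutive position reports is cheap to compute. A hedged sketch using the standard initial-bearing (great-circle) formula — the notebook's actual computation may differ:

```python
import numpy as np

def approx_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) between two points."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    x = np.sin(dlon) * np.cos(phi2)
    y = np.cos(phi1) * np.sin(phi2) - np.sin(phi1) * np.cos(phi2) * np.cos(dlon)
    return np.degrees(np.arctan2(x, y)) % 360.0

# Due-east and due-north sanity checks.
assert abs(approx_bearing(0.0, 0.0, 0.0, 1.0) - 90.0) < 1e-6
assert abs(approx_bearing(0.0, 0.0, 1.0, 0.0)) < 1e-6
```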
<!--  -->
<img src="img/velocity_clusters_all.png" alt="velocity_clusters_all" style="width: 900px;"/>
For simplicity, we then look at only the top few clusters of interest. The shipping lane comes out as a strong blue line, thanks to the 18 ships that make up that cluster over the 24 hours of data present for this clustering analysis. Additional patterns also become more obvious now: the red line of 14 separate ships traveling northeast towards the tip of the peninsula, and the light green cluster showing the travel of the ship `Malalo`. The fairly consistent bearing and nearby position allow the clustering algorithm to identify the track taken by this ship even with a 10 hour gap in AIS data as it travels northwest. Additionally, we now see that the `Gulf Valour` and `Pole` have become a single cluster, getting us closer to our goal of convoy identification.
<!--  -->
<img src="img/velocity_clusters_subset.png" alt="velocity_clusters_subset" style="width: 900px;"/>
In the future, we want to move towards performing clustering each hour, and identifying groups of ships that consistently appear in the same cluster together across time. This is a solid indication of shared activity patterns such as convoys.
| github_jupyter |
# College Data Clustering
> Learn about K-means clustering model.
- toc: true
- badges: true
- comments: true
- categories: [clustering]
- image: images/collegeData.png
___
Under normal circumstances you use the K-means algorithm because you don't have labels. In this case we will use the labels to get an idea of how well the algorithm performed, but you won't usually have them for K-means, so the classification report and confusion matrix at the end of this project don't truly make sense in a real-world setting!
___
## The Data
We will use a data frame with 777 observations on the following 18 variables.
* Private A factor with levels No and Yes indicating private or public university
* Apps Number of applications received
* Accept Number of applications accepted
* Enroll Number of new students enrolled
* Top10perc Pct. new students from top 10% of H.S. class
* Top25perc Pct. new students from top 25% of H.S. class
* F.Undergrad Number of fulltime undergraduates
* P.Undergrad Number of parttime undergraduates
* Outstate Out-of-state tuition
* Room.Board Room and board costs
* Books Estimated book costs
* Personal Estimated personal spending
* PhD Pct. of faculty with Ph.D.’s
* Terminal Pct. of faculty with terminal degree
* S.F.Ratio Student/faculty ratio
* perc.alumni Pct. alumni who donate
* Expend Instructional expenditure per student
* Grad.Rate Graduation rate
## Import Libraries
** Import the libraries you usually use for data analysis.**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
## Get the Data
** Read in the College_Data file using read_csv. Figure out how to set the first column as the index.**
```
df=pd.read_csv("College.csv",index_col=0)
```
**Check the head of the data**
```
df.head()
```
** Check the info() and describe() methods on the data.**
```
df.info()
df.describe()
```
It's time to create some data visualizations!
** Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column. **
```
sns.set_style('whitegrid')
sns.lmplot(x='Room.Board',y='Grad.Rate',data=df, hue='Private',
palette='coolwarm',height=6,aspect=1,fit_reg=False)
```
**Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.**
```
sns.set_style('whitegrid')
sns.lmplot(x='Outstate',y='F.Undergrad',data=df, hue='Private',
palette='coolwarm',height=6,aspect=1,fit_reg=False)
```
** Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using [sns.FacetGrid](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html). If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist'). **
```
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',height=6,aspect=2)
g = g.map(plt.hist,'Outstate',bins=20,alpha=0.7)
```
**Create a similar histogram for the Grad.Rate column.**
```
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',height=6,aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
```
** Notice how there seems to be a private school with a graduation rate higher than 100%. What is the name of that school?**
```
df[df['Grad.Rate'] > 100]
```
** Set that school's graduation rate to 100 so it makes sense. You may get a warning (not an error) when doing this operation, so use dataframe operations or just re-do the histogram visualization to make sure it actually went through.**
```
df.loc['Cazenovia College','Grad.Rate'] = 100
# dfmi.__getitem__('one').__setitem__('second', value)
df[df['Grad.Rate'] > 100]
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',height=6,aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
```
## K Means Cluster Creation
Now it is time to create the Cluster labels!
** Import KMeans from SciKit Learn.**
```
from sklearn.cluster import KMeans
```
** Create an instance of a K Means model with 2 clusters.**
```
kmeans=KMeans(n_clusters=2)
```
**Fit the model to all the data except for the Private label.**
```
kmeans.fit(df.drop('Private',axis=1))
```
** What are the cluster center vectors?**
```
kmeans.cluster_centers_
```
## Evaluation
There is no perfect way to evaluate clustering if you don't have the labels, however since this is just an exercise, we do have the labels, so we take advantage of this to evaluate our clusters, keep in mind, you usually won't have this luxury in the real world.
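One caveat when comparing cluster ids to true labels: K-means numbers its clusters arbitrarily, so cluster 0 may correspond to either class. A sketch of handling this (on synthetic data rather than the College data — the accuracy threshold below is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Two well-separated synthetic groups with known labels.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Cluster ids are arbitrary: score the labelling and its flip, keep the better.
acc = max(accuracy_score(y_true, pred), accuracy_score(y_true, 1 - pred))
assert acc > 0.95
```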
** Create a new column for df called 'Cluster', which is a 1 for a Private school, and a 0 for a public school.**
```
def converter(cluster):
    if cluster == 'Yes':
        return 1
    else:
        return 0
df['Cluster'] = df['Private'].apply(converter)
df.head()
```
** Create a confusion matrix and classification report to see how well the Kmeans clustering worked without being given any labels.**
```
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(df['Cluster'],kmeans.labels_))
print(classification_report(df['Cluster'],kmeans.labels_))
```
| github_jupyter |
```
from pyrep import PyRep
import numpy as np
from matplotlib import pyplot as plt
from pyrep.objects.shape import Shape
from pyrep.const import PrimitiveShape
from pyrep.objects.vision_sensor import VisionSensor
from IPython import display
f = open("soma_cube.txt", "r")
text = f.read()
split_sols = text.split('solution')
solutions = [split_sols[j] for j in range(1,241)]
action_list = []
pic_list = []
for s in solutions:
    actions = s.split('\n')[1:8]
    action_list.append(actions)
    pic = s.split('\n')[9:12]
    pic_list.append(pic)
pr = PyRep()
pr.launch(headless=False)
pr.start()
from matplotlib import cm
cols = cm.get_cmap('tab20c', 7)
blocks = ['T','p','V','L','Z','b','a']
idx = 0
for i, a in enumerate(action_list[idx]):
    col_idx = blocks.index(a[-1])
    color = (cols.colors[col_idx][0:3]).tolist()
    pose = np.array(' '.join(a[0:-1].split(',')).split()).reshape(-1, 3).astype(int)*0.05 + [0.05, 0.05, 0.05]
    for p in pose:
        obj = Shape.create(type=PrimitiveShape.CUBOID,
                           color=color, size=[0.05, 0.05, 0.05],
                           position=p.tolist())
        obj.set_color(color)
        #pr.step()
pr.step()
cam0 = VisionSensor.create([64,64],position=[0.1,0.1,0.5],orientation=[np.pi,0,0])
cam1 = VisionSensor.create([64,64],position=[0.5,0.1,0.1],orientation=[0,-np.pi/2,0])
cam2 = VisionSensor.create([64,64],position=[0.1,0.5,0.1],orientation=[np.pi/2,0,0])
cam3 = VisionSensor.create([64,64],position=[0.1,-0.5,0.1],orientation=[-np.pi/2,0,0])
for j in range(100):
    pr.step()
pr.stop()
cubes = []
for idx in range(len(action_list)):
    print('\r%d' % idx, end='')
    pr.start()
    cam0 = VisionSensor.create([64,64], position=[0.1,0.1,0.5], orientation=[np.pi,0,0])
    cam1 = VisionSensor.create([64,64], position=[0.5,0.1,0.1], orientation=[0,-np.pi/2,-np.pi/2])
    cam2 = VisionSensor.create([64,64], position=[0.1,0.5,0.1], orientation=[np.pi/2,0,0])
    cam3 = VisionSensor.create([64,64], position=[0.1,-0.3,0.1], orientation=[-np.pi/2,0,-np.pi])
    # Build the cube for this solution
    for i, a in enumerate(action_list[idx]):
        col_idx = blocks.index(a[-1])
        color = (cols.colors[col_idx][0:3]).tolist()
        pose = np.array(' '.join(a[0:-1].split(',')).split()).reshape(-1, 3).astype(int)*0.05 + 0.05
        for p in pose:
            obj = Shape.create(type=PrimitiveShape.CUBOID,
                               color=color, size=[0.05, 0.05, 0.05],
                               position=p.tolist())
            obj.set_color(color)
    # Let the simulation settle, then capture all four camera views.
    for j in range(10):
        pr.step()
    im0 = cam0.capture_rgb()
    im1 = cam1.capture_rgb()
    im2 = cam2.capture_rgb()
    im3 = cam3.capture_rgb()
    pr.step()
    cubes.append((im0, im1, im2, im3))
    plt.cla()
    plt.subplot(2,2,1)
    plt.imshow(cubes[-1][0])
    plt.subplot(2,2,2)
    plt.imshow(cubes[-1][1])
    plt.subplot(2,2,3)
    plt.imshow(cubes[-1][2])
    plt.subplot(2,2,4)
    plt.imshow(cubes[-1][3])
    display.clear_output(wait=True)
    display.display(plt.gcf())
    pr.stop()
pr.stop()
#pr.shutdown()
for i, c in enumerate(cubes):
    plt.clf()
    plt.subplot(2,2,1)
    plt.imshow(c[0])
    plt.axis('off')
    plt.subplot(2,2,2)
    plt.imshow(c[1])
    plt.axis('off')
    plt.subplot(2,2,3)
    plt.imshow(c[2])
    plt.axis('off')
    plt.subplot(2,2,4)
    plt.imshow(c[3])
    plt.axis('off')
    plt.savefig('/tmp/im_%03d.jpg' % i)
    display.clear_output(wait=True)
    display.display(plt.gcf())
np.save('cube_ims.npy',cubes)
pr.shutdown()
```
| github_jupyter |
# Chapter 10. Predicting Continuous Target Variables with Regression Analysis
**You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.**
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch10/ch10.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in Jupyter Notebook Viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch10/ch10.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
`watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the next cell before running it.
```
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,matplotlib,sklearn,seaborn
```
The seaborn package, a graphics library built on top of matplotlib, can be installed with:
conda install seaborn
or
pip install seaborn
# Exploring the Housing dataset
## Loading the Housing dataset into a data frame
This description is based on [https://archive.ics.uci.edu/ml/datasets/Housing](https://archive.ics.uci.edu/ml/datasets/Housing):
Attributes:
<pre>
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq. ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxide concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centers
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2, where Bk is the proportion of African American residents by town
13. LSTAT: percentage of lower-status population
14. MEDV: median value of owner-occupied homes in $1,000s
</pre>
```
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/rasbt/'
'python-machine-learning-book-2nd-edition'
'/master/code/ch10/housing.data.txt',
header=None,
sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
```
## Visualizing the important characteristics of the dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], height=2.5)
plt.tight_layout()
plt.show()
import numpy as np
cm = np.corrcoef(df[cols].values.T)
#sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
plt.tight_layout()
plt.show()
```
# Implementing an ordinary least squares linear regression model
## Solving the regression parameters with gradient descent
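The class below minimizes the sum of squared errors (SSE) with full-batch gradient descent; the cost function and the weight updates it implements are:

```latex
J(\mathbf{w}) = \frac{1}{2} \sum_{i} \bigl( y^{(i)} - \hat{y}^{(i)} \bigr)^2,
\qquad
\hat{y}^{(i)} = \mathbf{w}^T \mathbf{x}^{(i)} + w_0
```

```latex
\Delta \mathbf{w} = \eta \sum_{i} \bigl( y^{(i)} - \hat{y}^{(i)} \bigr)\, \mathbf{x}^{(i)},
\qquad
\Delta w_0 = \eta \sum_{i} \bigl( y^{(i)} - \hat{y}^{(i)} \bigr)
```

These two updates correspond line-for-line to `self.w_[1:] += self.eta * X.T.dot(errors)` and `self.w_[0] += self.eta * errors.sum()` in the code.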
```
class LinearRegressionGD(object):
    def __init__(self, eta=0.001, n_iter=20):
        self.eta = eta
        self.n_iter = n_iter

    def fit(self, X, y):
        self.w_ = np.zeros(1 + X.shape[1])
        self.cost_ = []
        for i in range(self.n_iter):
            output = self.net_input(X)
            errors = (y - output)
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            cost = (errors**2).sum() / 2.0
            self.cost_.append(cost)
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        return self.net_input(X)
X = df[['RM']].values
y = df['MEDV'].values
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
lr = LinearRegressionGD()
lr.fit(X_std, y_std)
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.show()
def lin_regplot(X, y, model):
    plt.scatter(X, y, c='steelblue', edgecolor='white', s=70)
    plt.plot(X, model.predict(X), color='black', lw=2)
    return
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000s [MEDV] (standardized)')
plt.show()
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
num_rooms_std = sc_x.transform(np.array([[5.0]]))
price_std = lr.predict(num_rooms_std)
print("Price in $1,000s: %.3f" % sc_y.inverse_transform(price_std))
```
## Estimating the coefficients of a regression model via scikit-learn
```
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y)
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000s [MEDV]')
plt.show()
```
An alternative using the **normal equation**:
```
# add a column vector of ones
Xb = np.hstack((np.ones((X.shape[0], 1)), X))
w = np.zeros(X.shape[1])
z = np.linalg.inv(np.dot(Xb.T, Xb))
w = np.dot(z, np.dot(Xb.T, y))
print('Slope: %.3f' % w[1])
print('Intercept: %.3f' % w[0])
```
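The cell above is the closed-form **normal equation** solution, which the gradient-descent version approximates iteratively:

```latex
\mathbf{w} = \bigl( \mathbf{X}^T \mathbf{X} \bigr)^{-1} \mathbf{X}^T \mathbf{y}
```

Here `Xb` prepends a column of ones to `X`, so `w[0]` plays the role of the intercept and `w[1]` the slope, matching `z = np.linalg.inv(np.dot(Xb.T, Xb))` followed by `w = np.dot(z, np.dot(Xb.T, y))`.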
# Fitting a robust regression model using RANSAC
```
from sklearn.linear_model import RANSACRegressor
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
loss='absolute_loss',
residual_threshold=5.0,
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='steelblue', edgecolor='white',
marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='limegreen', edgecolor='white',
marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='black', lw=2)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000s [MEDV]')
plt.legend(loc='upper left')
plt.show()
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
```
# Evaluating the performance of linear regression models
```
from sklearn.model_selection import train_test_split
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
plt.scatter(y_train_pred, y_train_pred - y_train,
c='steelblue', marker='o', edgecolor='white',
label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='limegreen', marker='s', edgecolor='white',
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, color='black', lw=2)
plt.xlim([-10, 50])
plt.tight_layout()
plt.show()
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
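To make these two metrics concrete, MSE and R² can be computed by hand from their definitions; the tiny arrays below are illustrative:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: the average squared residual
    return np.mean((y_true - y_pred) ** 2)

def r2(y_true, y_pred):
    # R^2 = 1 - SSE/SST: the fraction of target variance the model explains
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - sse / sst

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(mse(y_true, y_pred))  # 0.375
print(r2(y_true, y_pred))   # about 0.9486
```

R² is just a rescaled MSE relative to a model that always predicts the mean, which is why a larger train/test gap in MSE shows up as a gap in R² as well.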
# Using regularized methods for regression
```
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=0.1)
lasso.fit(X_train, y_train)
y_train_pred = lasso.predict(X_train)
y_test_pred = lasso.predict(X_test)
print(lasso.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
Ridge regression:
```
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=1.0)
```
Lasso regression:
```
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=1.0)
```
Elastic Net regression:
```
from sklearn.linear_model import ElasticNet
elanet = ElasticNet(alpha=1.0, l1_ratio=0.5)
```
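The relationship between the three regularizers can be sketched from scikit-learn's documented Elastic Net penalty term: at `l1_ratio=1.0` it reduces to Lasso's L1 penalty, and at `l1_ratio=0.0` to a Ridge-style L2 penalty. The weight vector below is made up:

```python
import numpy as np

def elastic_net_penalty(w, alpha=1.0, l1_ratio=0.5):
    # alpha * l1_ratio * ||w||_1  +  0.5 * alpha * (1 - l1_ratio) * ||w||_2^2
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return alpha * l1_ratio * l1 + 0.5 * alpha * (1.0 - l1_ratio) * l2

w = np.array([1.0, -2.0])
print(elastic_net_penalty(w, l1_ratio=1.0))  # 3.0 -> pure L1 (Lasso)
print(elastic_net_penalty(w, l1_ratio=0.0))  # 2.5 -> pure L2 (Ridge-style)
```

Intermediate `l1_ratio` values blend the two, which is why Elastic Net can zero out coefficients like Lasso while keeping Ridge's stability with correlated features.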
# Turning a linear regression model into a curve - polynomial regression
```
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])\
[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2)
X_quad = quadratic.fit_transform(X)
# Fit a simple linear regression model for comparison
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# Fit a multiple regression model on the quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# Plot the results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
```
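What `PolynomialFeatures(degree=2)` does to a single feature can be reproduced by hand: each value x is expanded into the row [1, x, x²], and the "polynomial" model is then an ordinary linear regression on those columns:

```python
import numpy as np

def quadratic_features(X):
    # Degree-2 expansion of one feature: columns [1, x, x^2]
    x = X.ravel()
    return np.column_stack([np.ones_like(x), x, x ** 2])

X = np.array([[2.0], [3.0]])
print(quadratic_features(X))
# [[1. 2. 4.]
#  [1. 3. 9.]]
```

With several input features, `PolynomialFeatures` also adds interaction terms, so the expansion grows much faster than in this one-feature sketch.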
## Modeling nonlinear relationships in the Housing dataset
```
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# Create quadratic and cubic polynomial features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# Create a range of feature values for plotting the fitted models
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# Plot the results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000s [MEDV]')
plt.legend(loc='upper right')
plt.show()
```
Transforming the dataset:
```
X = df[['LSTAT']].values
y = df['MEDV'].values
# Transform the feature and the target
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# Create a range of feature values for plotting the fitted model
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
regr = regr.fit(X_log, y_sqrt)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# Plot the results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel(r'$\sqrt{Price \; in \; \$1000s \; [MEDV]}$')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
```
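Because this model is fitted on transformed variables (log feature, square-root target), its predictions live on the sqrt scale; to read them as prices they must be squared. A minimal sketch with made-up coefficients (not the fitted ones):

```python
import numpy as np

lstat = np.array([5.0, 10.0, 20.0])

# Hypothetical fitted line on the transformed scales: sqrt(price) = a*log(LSTAT) + b
a, b = -2.0, 10.0
sqrt_price_pred = a * np.log(lstat) + b

# Invert the target transform to get back to prices in $1000s
price_pred = sqrt_price_pred ** 2
print(price_pred)
```

One caveat of target transforms: squaring is convex, so the back-transformed prediction is not an unbiased estimate of the mean price, only of the mean sqrt-price.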
# Dealing with nonlinear relationships using random forests
## Decision tree regression
```
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort()
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000s [MEDV]')
plt.show()
```
## Random forest regression
```
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
                               criterion='squared_error',  # named 'mse' before scikit-learn 1.0
                               random_state=1,
                               n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='steelblue',
edgecolor='white',
marker='o',
s=35,
alpha=0.9,
label='training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='limegreen',
edgecolor='white',
marker='s',
s=35,
alpha=0.9,
label='test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='black')
plt.xlim([-10, 50])
plt.tight_layout()
plt.show()
```
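Beyond predictions, a fitted random forest exposes `feature_importances_`, which is useful for asking which inputs drive the target. A small sketch on synthetic data (shapes and values are illustrative, not the Housing dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data where only the first of three features matters
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 5.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

forest = RandomForestRegressor(n_estimators=50, random_state=1)
forest.fit(X, y)
print(forest.feature_importances_)  # the first feature dominates
```

Impurity-based importances are biased toward high-cardinality features, so for serious analysis it is worth cross-checking with permutation importance.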
# Asking the right questions
### Business Task: *How can we use trends in smart device usage to produce actionable insights that guide Bellabeat's marketing efforts?*
#### About BellaBeat
Bellabeat is a wearable smart device company co-founded by Urška Sršen and Sando Mur. Their aim is to create fashionable smart fitness products that integrate seamlessly into women's lifestyles. Currently their products collect data on sleep, stress, activity, and reproductive health. This analysis is tailored towards marketing Bellabeat's wearable smart device, the 'Leaf', and its accompanying app.
#### Analysis Objectives:
1. Finding Patterns
+ What features encourage users to frequently/consistently use their smart device?
+ Based on these features, which customer segments should Bellabeat's marketing team target?
2. Discover Connections
+ How do daily steps, sleep, activity, intensity, and calories burned correlate with one another?
3. Make Educated Predictions
+ How can Bellabeat use information about key features and correlated factors to inform their marketing strategy?
#### Accessing and Sourcing Reliable Data
I used [FitBit Fitness Tracker Data](https://www.kaggle.com/arashnic/fitbit) under the CC0: Public Domain license (dataset made available by [Möbius](https://www.kaggle.com/arashnic) ). This dataset includes 30 individuals' data over 31 days.
#### Installing and loading necessary packages and libraries
I installed necessary packages for manipulating, processing, analyzing, and visualizing FitBit usage data.
```
#core R packages for visualization and manipulation
library('tidyverse')
#summary statistics
library('skimr')
#examine and clean data
library('janitor')
#formatting date/month/year
library('lubridate')
```
#### Loading CSV files
I decided to explore these data sets and renamed them for consistency:
* daily_activity <- dailyActivity_merged.csv
* hourly_intensities <- hourlyIntensities_merged.csv
* daily_intensities <- dailyIntensities_merged.csv
* daily_sleep <- sleepDay_merged.csv
```
daily_activity <- read.csv("../input/fitbit/Fitabase Data 4.12.16-5.12.16/dailyActivity_merged.csv")
hourly_intensities <- read.csv("../input/fitbit/Fitabase Data 4.12.16-5.12.16/hourlyIntensities_merged.csv")
daily_intensities <- read.csv("../input/fitbit/Fitabase Data 4.12.16-5.12.16/dailyIntensities_merged.csv")
daily_sleep <- read.csv("../input/fitbit/Fitabase Data 4.12.16-5.12.16/sleepDay_merged.csv")
```
# Exploring Raw Data
Previewing daily_activity data and identifying column names in daily_activity.
```
head(daily_activity)
colnames(daily_activity)
```
Previewing daily_sleep data and identifying columns names in daily_sleep.
```
head(daily_sleep)
colnames(daily_sleep)
```
#### Data Frame Summary Statistics
Checking to see how many distinct participants are in each data frame.
```
n_distinct(daily_activity$Id)
n_distinct(daily_intensities$Id)
n_distinct(daily_sleep$Id)
```
Compiling key summary statistics for the daily_activity data frame.
```
daily_activity %>%
select(TotalSteps,
TotalDistance,
SedentaryMinutes,
LightlyActiveMinutes,
FairlyActiveMinutes,
VeryActiveMinutes,
Calories) %>%
summary()
```
Compiling key summary statistics for the daily_sleep data frame.
```
daily_sleep %>%
select(TotalSleepRecords,
TotalMinutesAsleep,
TotalTimeInBed) %>%
summary()
```
# Preparing Data Frames for Analysis
#### Defining Variables
Using Fitbit's definitions of very active, fairly active, and lightly active minutes, I created **active_mins**: based on [Fitbit's guide to Active Minutes](https://help.fitbit.com/articles/en_US/Help_article/1379.htm), a weighted sum of lightly, fairly, and very active minutes that holistically represents 'active' minutes in a day.
```
summary_activity <- daily_activity %>%
  group_by(Id) %>% # compute per-user summaries
  summarize(avg_sedentary_mins = mean(SedentaryMinutes), # compute mean activity levels
            avg_lightly_active_mins = mean(LightlyActiveMinutes),
            avg_fairly_active_mins = mean(FairlyActiveMinutes),
            avg_very_active_mins = mean(VeryActiveMinutes),
            active_mins = mean((LightlyActiveMinutes * 0.5) +
                               FairlyActiveMinutes + (VeryActiveMinutes * 1.75))
            )
summary_activity$activity_level = case_when( #categorizing users based on activity levels
summary_activity$active_mins >= 206.06 ~ 'Very Active',
summary_activity$active_mins >= 147.01 ~ 'Fairly Active',
summary_activity$active_mins >= 82.57 ~ 'Somewhat Active',
summary_activity$active_mins < 82.57 ~ 'Sedentary',
)
```
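For readers more comfortable in Python, the weighted sum and cutoffs above can be sketched as follows; the weights and thresholds are copied from the R code, while the example inputs are made up:

```python
def active_mins(light, fair, very):
    # Lightly active counts half, fairly active counts fully,
    # very active counts 1.75x (weights from the R code above)
    return light * 0.5 + fair + very * 1.75

def activity_level(mins):
    # Cutoffs from the case study
    if mins >= 206.06:
        return 'Very Active'
    if mins >= 147.01:
        return 'Fairly Active'
    if mins >= 82.57:
        return 'Somewhat Active'
    return 'Sedentary'

print(activity_level(active_mins(light=200, fair=30, very=45)))  # Very Active
```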
#### Defining Variables
Here I calculated the sum, average, and number of sleep data entries each participant recorded. I then calculated sleep efficiency (the ratio of time asleep to time spent in bed) using the sums.
**avg_time_in_bed** = the average total time spent in bed, asleep and awake; an indication of how easy or difficult it is for an individual to fall asleep
```
library(dplyr, warn.conflicts = FALSE)
# Suppress summarise info
options(dplyr.summarise.inform = FALSE)
summary_sleep <- daily_sleep %>%
  group_by(Id) %>%
  summarize(sum_sleep_mins = sum(TotalMinutesAsleep),
            avg_sleep_mins = mean(TotalMinutesAsleep),
            avg_time_in_bed = mean(TotalTimeInBed),
            number_sleep_entries = length(TotalMinutesAsleep),
            sleep_efficiency = (sum(TotalMinutesAsleep) / sum(TotalTimeInBed)) * 100
            )
```
Based on [medical research](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4302758/), I established a broad set of criteria to classify different recovery levels based on sleep efficiency.
**Sleep efficiency** impacts how well-rested an individual feels; high sleep efficiency leads to deeper, higher quality sleep while low sleep efficiency is associated with tiredness, restlessness, and sleep disorders.
Based on their sleep efficiency, I categorized individuals into different **recovery levels**. Research suggests that individuals with a sleep efficiency close to 100% are likely sleep deprived, so I categorized this group as **'Possible Sleep Deficit'**.
```
summary_sleep$recovery_level = case_when(
summary_sleep$sleep_efficiency >= 95 ~ 'Possible Sleep Deficit',
summary_sleep$sleep_efficiency >= 85 ~ 'Optimal',
summary_sleep$sleep_efficiency < 85 ~ 'Possible Sleep Disorder',
)
```
Based on [sleep guidelines](https://www.cdc.gov/sleep/about_sleep/how_much_sleep.html) by the CDC, I defined another set of criteria to classify how well rested an individual was based on their amount of sleep.
```
summary_sleep$rested_level = case_when(
summary_sleep$avg_sleep_mins >= 450 ~ 'Well Rested',
summary_sleep$avg_sleep_mins >= 390 ~ 'Moderately Rested',
summary_sleep$avg_sleep_mins >= 330 ~ 'Poorly Rested',
summary_sleep$avg_sleep_mins < 330 ~ 'Very Poorly Rested'
)
```
Merged sleep and activity dataframes to prepare for visualization.
```
sleep_activity_merged <- merge(summary_sleep, summary_activity, all = TRUE) # merge sleep and activity data frames to prepare for analysis
```
Getting rid of duplicates in sleep_activity_merged.
```
sleep_activity_merged <- sleep_activity_merged[!duplicated(sleep_activity_merged), ]
```
# Analyzing and Visualizing Data
```
ggplot(data = summary_sleep) +
geom_smooth(mapping = aes(x = sleep_efficiency, y = avg_sleep_mins)) +
labs(title="Sleep Efficiency vs. Average Sleep (mins)") +
xlab("Sleep Efficiency (%)") + ylab("Average Sleep (mins)")
```
We can see that generally, the more sleep individuals get the higher their sleep efficiency. There is, however, a dip for individuals with sleep efficiency higher than ~97%. Let's explore this dip in more detail.
```
ggplot(data = summary_sleep) +
geom_smooth(mapping = aes(x = avg_sleep_mins, y = avg_time_in_bed)) +
geom_point(mapping = aes(x = avg_sleep_mins, y = avg_time_in_bed)) +
labs(title = "Time Asleep vs. Time in Bed") +
xlab("Average Sleep (mins)") + ylab("Average Time in Bed (mins)")
```
Based on our visual we can see there is some correlation between time asleep and time spent in bed.
>Marketing Insight: Based on this correlation, Bellabeat could set reminders to encourage users to spend more time in bed. However, there are outliers for individuals spending long durations in bed (trying to fall asleep). This indicates some individuals struggle to sleep even with sufficient time in bed- lets explore this idea further.
*Next Steps:* To explore these outliers, I separated the Time Asleep vs Time in Bed based on recovery levels. This is because research suggests that individuals lacking sleep fall asleep quicker and experience longer durations of deep sleep.
```
ggplot(data = summary_sleep) +
geom_jitter(mapping = aes(x = avg_sleep_mins, y = avg_time_in_bed)) + #jitter used due to high density of points
facet_wrap(~recovery_level) + #find relationships by different recovery groups
labs(title = "Time Asleep vs. Time in Bed- Based on Recovery") +
xlab("Average Sleep Duration (mins)") + ylab("Time in Bed (mins)")
```
Interestingly we see that:
* There seems to be a linear relationship between Time Asleep and Time in Bed for individuals with *Optimal* or *Possible Sleep Deficits*
* There is more variation between Time Asleep and Time in Bed for individuals with abnormally low sleep efficiency (classified as *Possible Sleep Disorder*).
>Marketing Insights: Use these findings to track sleep efficiency and recovery in relation to sleep disorders. Just as how other fitness trackers found broader application to their fitness tracking metric admist the pandemic [(using respiratory rate to predict COVID-19 symptoms)](https://www.whoop.com/thelocker/case-studies-respiratory-rate-covid-19/), Bellabeat can act on this data to shift branding towards combatting health issues. Bellabeat should play an **active role** in their consumers' health rather than being a bystander.
*Next Steps:* I explored further into sleep behavior by contrasting rest levels (based on sleep duration) and recovery levels (based on sleep efficiency).
```
ggplot(data = summary_sleep) +
geom_bar(mapping = aes(x = rested_level, fill = recovery_level)) +
theme(axis.text.x = element_text(angle = 90)) +
labs(title = "Rest vs. Recovery Levels") +
xlab("Sleep Sufficiency") + ylab("Sleep Entries (#)")
```
I noticed that:
* Poorly and Very Poorly Rested individuals are most frequently classified as having *Possible Sleep Disorders* and *Low Recovery*
* Moderately and Well Rested individuals have optimal or above-optimal recovery levels; although they are well rested, some may be in a sleep deficit. There is also a low occurrence of *Possible Sleep Disorders*.
>Marketing Insights: Make Bellabeat app analytics and marketing more health oriented by educating users. For example, if a user experiences sustained periods of *Poor Rest* and *Low Recovery*, the app could notify them, explain possible issues, and recommend a visit to their doctor.
*Next Steps:* I separated Rest vs. Recovery levels based on activity level to explore whether activity levels impacted sleep quality and duration.
```
ggplot(data = sleep_activity_merged) +
geom_bar(mapping = aes(x = rested_level, fill = recovery_level)) +
labs(title="Rest vs. Recovery Levels Based on Activity Level") +
theme(axis.text.x = element_text(angle = 90)) +
facet_wrap(~activity_level) + xlab("Sleep Sufficiency") + ylab("Sleep Entries (#)")
```
This visualization surprised me because I predicted that sleep quality and sufficiency would increase with activity levels, instead:
* Among *Very* and *Fairly Active* individuals who were *Very Poorly Rested/Poorly Rested*, a relatively high proportion experienced low sleep efficiency (categorized as *Possible Sleep Disorder*)
* Across all activity levels, *Moderate* to *Well Rested* individuals mostly experienced high sleep efficiency
>Marketing Insights: Data suggests the more active you are, the more rest you need (which makes sense!). Active, poorly rested individuals are at higher risk of sleep disorders because their bodies need longer, higher quality sleep to recover; overtraining combined with poor rest can lead to insomnia. Using this information, Bellabeat can take a more scientific, data-driven approach to daily performance.
*Next Steps:* Finding out how time in bed is related to recovery levels to determine optimal sleep duration.
```
ggplot(data = summary_sleep) +
geom_histogram(mapping = aes(x = avg_time_in_bed, fill = recovery_level)) +
theme(axis.text.x = element_text(angle = 50)) +
labs(title = "Time in Bed vs. Recovery Levels") +
xlab("Time in Bed (Mins)") + ylab("Sleep Entries (#)")
```
Within the FitBit dataset:
* Users who spent **450 to 520 minutes (7.5 to 8.67 hours~)** in bed experienced optimal recovery most frequently
* A relatively high proportion of users who spent excessive (>10 hours) or insufficient (<2 hours) time in bed experienced very poor or abnormally high recovery levels, indicating insufficient sleep or possible sleep disorders.
# Insights and Recommendations
#### **Final conclusions based on my analysis**
The [FitBit Fitness Tracker Data](https://www.kaggle.com/arashnic/fitbit) includes a wide range of users (in terms of activity levels and lifestyles). Based on Bellabeat's website, they are targeting women who are active, young, and fashionable. My main takeaway is a question: how can Bellabeat make the Leaf more inclusive of, and specific to, a broader range of lifestyles? To appeal to a wider audience, Bellabeat should help inactive/unhealthy individuals work towards a healthy lifestyle. Below I outline a few recommendations to help Bellabeat refine their branding.
#### **How can I apply my findings to Bellabeat's marketing efforts?**
1. Play an active/proactive role in consumer health
>Use reminders to spend more time in bed based on sleep duration and quality. Highlight the broader application of health metrics relevant to pandemic times. Promote material on how the Leaf can help track sleep quality (and other metrics) to determine health. Provide an overview of incremental improvements to work towards (or maintain) a healthy lifestyle.
2. Apply a data-driven approach to optimizing health
>Place more focus on user data analytics and continual feedback. Continuous interactions can help increase engagement between users with different activity levels. Bellabeat could promote this approach through social media channels and blogs. Lastly, educate users about the importance of lesser known metrics (like sleep efficiency) through blogs, short reels (IG reels, Tiktok), and short informative notifications.
3. "Gameify" Bellabeat's smart app
>As a smart fitness device user, I was surprised by the inconsistency of these datasets. I rarely miss tracking my day because of the social element- comparing my daily statistics with my friends. Fitbit's initial success relied on adding social and competitive elements to their app UI. Bellabeat could also analyze user data to create comparative statistics customised to each user. I also really like [Whoop's use of communities](https://support.whoop.com/hc/en-us/articles/360043767753-Joining-a-Public-WHOOP-Team) to motivate users with similar interests.
#### **Additional data to expand on findings**
* Larger data set (more individuals across a longer period of time) to increase statistical power and bring more reliability/credibility to analysis
* Detailed demographic data for FitBit users to account for cultural, geographical, and socio-economic factors
* Existing data by Bellabeat on 'reproductive health' statistics that they track. This could reveal how activity, sleep, and reproductive health are related and how Bellabeat can guide their female users.
##### If you're here, congrats! You've stuck with me throughout this brain-racking but ultra-rewarding process. I'm still a beginner in my data analytics journey, so any comments/feedback would be much appreciated!
##### Some newbie issues I ran into:
* When transferring code from R into Kaggle I encountered some errors that weren't an issue in R, and in Kaggle they prevented me from properly executing sections of my code. Specifically, I had trouble merging my dataframes by "Id" and struggled with why I couldn't group_by then summarize (even though it had worked in R). I would really appreciate it if anyone who knows why this happened could share it with me!
```
%load_ext autoreload
%autoreload 2
import torch
from UnarySim.sw.metric.metric import NormStability, NSbuilder, Stability, ProgressiveError
from UnarySim.sw.stream.gen import RNG, SourceGen, BSGen
from UnarySim.sw.kernel.relu import UnaryReLU
import random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import ticker, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import time
import math
import numpy as np
import seaborn as sns
from tqdm import tqdm
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")
def test(
rng="Sobol",
total_cnt=100,
mode="bipolar",
bitwidth=8,
threshold=0.05,
sr=False
):
ns_val=[0.25, 0.5, 0.75]
stype = torch.float
rtype = torch.float
pbar = tqdm(total=3*total_cnt*(2**bitwidth))
if mode == "unipolar":
# all values in unipolar are non-negative
low_bound = 0
up_bound = 2**bitwidth
elif mode == "bipolar":
# values in bipolar are arbitrarily positive or negative
low_bound = -2**(bitwidth-1)
up_bound = 2**(bitwidth-1)
# enumerate all representable input values
input = []
for val in range(up_bound, low_bound-1, -1):
input.append(val)
input = torch.tensor(input, dtype=torch.float).div(up_bound).to(device)
output = torch.nn.ReLU()(input).to(device)
for ns in ns_val:
print("# # # # # # # # # # # # # # # # # #")
print("Target normstab:", ns)
print("# # # # # # # # # # # # # # # # # #")
result_ns_total = []
input_ns_total = []
output_ns_total = []
for rand_idx in range(1, total_cnt+1):
outputNS = NormStability(output, mode=mode, threshold=threshold).to(device)
inputNS = NormStability(input, mode=mode, threshold=threshold).to(device)
dut = UnaryReLU(depth=5, shiftreg=sr).to(device)
inputBSGen = NSbuilder(bitwidth=bitwidth,
mode=mode,
normstability=ns,
threshold=threshold,
value=input,
rng_dim=rand_idx).to(device)
start_time = time.time()
with torch.no_grad():
for i in range(2**bitwidth):
input_bs = inputBSGen()
inputNS.Monitor(input_bs)
output_bs = dut(input_bs)
outputNS.Monitor(output_bs)
pbar.update(1)
# get the result for different rng
input_ns = inputNS()
output_ns = outputNS()
result_ns = (output_ns/input_ns).clamp(0, 1).cpu().numpy()
result_ns_total.append(result_ns)
input_ns = input_ns.cpu().numpy()
input_ns_total.append(input_ns)
output_ns = output_ns.cpu().numpy()
output_ns_total.append(output_ns)
# print("--- %s seconds ---" % (time.time() - start_time))
# get the result for different rng
result_ns_total = np.array(result_ns_total)
input_ns_total = np.array(input_ns_total)
output_ns_total = np.array(output_ns_total)
#######################################################################
# check the error of all simulation
#######################################################################
input_ns_total_no_nan = input_ns_total[~np.isnan(result_ns_total)]
print("avg I NS:{:1.4}".format(np.mean(input_ns_total_no_nan)))
print("max I NS:{:1.4}".format(np.max(input_ns_total_no_nan)))
print("min I NS:{:1.4}".format(np.min(input_ns_total_no_nan)))
print()
output_ns_total_no_nan = output_ns_total[~np.isnan(result_ns_total)]
print("avg O NS:{:1.4}".format(np.mean(output_ns_total_no_nan)))
print("max O NS:{:1.4}".format(np.max(output_ns_total_no_nan)))
print("min O NS:{:1.4}".format(np.min(output_ns_total_no_nan)))
print()
result_ns_total_no_nan = result_ns_total[~np.isnan(result_ns_total)]
print("avg O/I NS:{:1.4}".format(np.mean(result_ns_total_no_nan)))
print("max O/I NS:{:1.4}".format(np.max(result_ns_total_no_nan)))
print("min O/I NS:{:1.4}".format(np.min(result_ns_total_no_nan)))
print()
#######################################################################
# check the error according to input value
#######################################################################
max_total = np.max(result_ns_total, axis=0)
min_total = np.min(result_ns_total, axis=0)
avg_total = np.mean(result_ns_total, axis=0)
axis_len = outputNS().size()[0]
input_x_axis = []
for axis_index in range(axis_len):
input_x_axis.append((axis_index/(axis_len-1)*(up_bound-low_bound)+low_bound)/up_bound)
fig, ax = plt.subplots()
ax.fill_between(input_x_axis, max_total, avg_total, facecolor="red", alpha=0.75)
ax.fill_between(input_x_axis, avg_total, min_total, facecolor="blue", alpha=0.75)
ax.plot(input_x_axis, avg_total, label='Avg error', color="black", linewidth=0.3)
plt.tight_layout()
plt.xlabel('Input value')
plt.ylabel('Output/Input NS')
plt.xticks(np.arange(0, 1.1, step=0.5))
# ax.xaxis.set_ticklabels([])
plt.xlim(0, 1)
plt.yticks(np.arange(0, 1.1, step=0.2))
# ax.yaxis.set_ticklabels([])
plt.ylim(0, 1.1)
plt.grid(b=True, which="both", axis="y", linestyle="--", color="grey", linewidth=0.3)
fig.set_size_inches(4, 4)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.show()
plt.close()
pbar.close()
test(rng="Sobol", total_cnt=100, mode="bipolar", bitwidth=8, threshold=0.1, sr=True)
```
# Review of Day 1
## Fundamentals
### Data Types
Everything in Python is an object, and every object has a type.
Let's review the most important ones.
**Integers** – Whole Numbers
```
i = 3
i
```
**Floats** – Decimal Numbers
```
f = 3.4
f
```
**Strings** – Bits of Text
```
s = 'python'
s
```
**Lists** – Ordered collections of other Python objects
```
l = ['a', 'b', 'c']
l
```
**Dictionaries** – A collection of key-value pairs, which let you easily look up the value for a given key
```
d = {'a': 1,
'b': 2,
'z': 26}
d
```
**DataFrames** - Tabular datasets. Part of the Pandas library.
```
import pandas as pd
df = pd.DataFrame([(1, 2), (3, 4)], columns=['x', 'y'])
df
```
### The `type` Function
You can use the `type` function to determine the type of an object.
```
x = [1, 2, 3]
type(x)
x = 'hello'
type(x)
```
## Packages, Modules, and Functions
### Packages
*Packages* (generally synonymous with *modules* or *libraries*) are extensions for Python featuring useful code.
Some are included in every Python install, while others (like Pandas, matplotlib, and more) need to be installed separately.
The DataFrame type, a staple of data science, comes in the Pandas package.
### Functions
*Functions* are executable Python code stored in a name, just like a regular variable.
You can call a function by putting parentheses after its name, and optionally including *arguments* to it (e.g. `myfunction(argument_1, argument_2)`).
Well-named functions can help to simplify your code and make it much more readable.
### Attributes and Methods
Python objects (that's everything in Python, remember?) come with *attributes*, or internal information accessible through dot syntax:
```python
myobject.attribute
```
Attributes can be handy when you want to learn more about an object.
```
df.shape
```
Some attributes actually hold functions, in which case we call them *methods*.
```
df.describe()
```
### DataFrames and Series
When you extract individual rows or columns of DataFrames, you get a 1-dimensional dataset called a *Series*.
Series look like lists but their data must be all of the same type, and they provide similar (though subtly different) functionality to DataFrames.
## Importing Data
Importing data is the process of taking data *on disk* and moving it into *memory*, where Python can do its work.
Reading CSVs will likely be one of the most common ways you import data.
To do so, use Pandas' `read_csv` function, passing the name of your file as an argument.
```python
import pandas as pd
data = pd.read_csv('myfile.csv')
```
Though they are less common in data science, JSON and pickle files may come up in your work as well.
These are slightly more complicated to import, but it's still very doable.
JSON:
```python
import json
with open('myfile.json', 'r') as f:
data = json.load(f)
```
Pickle:
```python
import pickle
with open('myfile.pickle', 'rb') as f:
data = pickle.load(f)
```
## Subsetting and Filtering
There are three primary ways of subsetting data:
- **Selecting** - Including certain *columns* of the data while excluding others
- **Slicing** - Including only certain *rows* based on their position (index) in the DataFrame
- **Filtering** - Including only certain *rows* with data that meets some criterion
### Selecting
Selection is done with brackets.
Pass a single column name (as a string) or a list of column names.
```python
# The column "mycolumn", as a Series
df['mycolumn']
# The columns "column_1" and "column_2" as a DataFrame
df[['column_1', 'column_2']]
```
If you pass a list, the returned value will be a DataFrame.
If you pass a single column name, it will be a Series.
### Slicing
Slicing is typically done with the `.loc` accessor and brackets.
Pass in a row index or a range of row indices.
```python
# The fifth (zero-indexing!) row, as a Series
df.loc[4]
# The second, third, and fourth rows, as a DataFrame
df.loc[1:3]
```
If you pass a range of indices, the returned value will be a DataFrame.
Otherwise it will be a Series.
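One subtlety worth remembering: unlike plain Python list slicing, `.loc` label slices include *both* endpoints:

```python
import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30, 40, 50]})

# .loc[1:3] keeps labels 1, 2 AND 3 -- three rows, not two
subset = df.loc[1:3]
print(len(subset))           # 3
print(subset['x'].tolist())  # [20, 30, 40]
```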
### Filtering
DataFrames can be filtered by passing a *condition* in brackets.
```python
# Keep rows where `condition` is true
df[condition]
```
Conditions are things like tests of equality, assertions that one value is greater than another, etc.
```python
# Keep rows where the value in "mycolumn" is equal to 5
df[df['mycolumn'] == 5]
```
```python
# Keep rows where mycolumn is less than 3 OR greater than 10
df[ (df['mycolumn'] < 3) | (df['mycolumn'] > 10) ]
```
### Selecting and Filtering Together
Using `.loc`, it's possible to do selecting and filtering all in one step.
```python
# Filter down to rows where column_a is equal to 5,
# and select column_b and column_c from those rows
df.loc[df['column_a'] == 5, ['column_b', 'column_c']]
```
## Manipulating Columns
### Numeric Calculations
It's possible to perform calculations using columns.
```python
df['mycolumn'] + 7
```
```python
df['mycolumn'] * 4 - 3
```
It's also possible to perform calculations based on values in multiple columns.
```python
df['column_a'] / df['column_b']
```
Generally you'll want to save the calculated values in a new column, which you can do with sensible assignment syntax.
```python
df['e'] = df['m'] * (df['c'] ** 2)
```
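For instance, the assignment above can be exercised with made-up `m` and `c` columns:

```python
import pandas as pd

# Made-up values for the e = m * c**2 example
df = pd.DataFrame({'m': [1.0, 2.0], 'c': [3.0, 3.0]})
df['e'] = df['m'] * (df['c'] ** 2)
print(df['e'].tolist())  # [9.0, 18.0]
```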
### String Manipulations
Lots of string functionality can be found within the `.str` accessor.
```python
# Convert the strings in mycolumn to all caps
df['mycolumn'].str.upper()
```
### Mapping Values
In some cases you may need to convert some values to other values.
This is a good case for the `.map` method of Series.
Pass in a dictionary whose keys are the elements to be converted and whose values are the desired new values.
```python
df  # inspect the original values
df['x'] = df['x'].map({1: 11, 3: 33})
df  # inspect the mapped values
```
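One caveat worth knowing: values that do not appear as keys in the mapping become NaN rather than being left alone. A minimal sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3]})
df['x'] = df['x'].map({1: 11, 3: 33})
# 2 has no key in the mapping, so it becomes NaN (and the column becomes float)
print(df['x'].tolist())
```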
# Practice
1. Load the weather data (`weather.csv`) from the data folder of our repository. Store it in a variable called `weather`.
2. Keep only the rows that have precipitation (i.e. `precip > 0`).
3. Create a new column, "air_hazard_rating", that is `wind_speed / 2 + visib`.
4. Keep only the "origin" and "time" columns.
# Questions
Are there any questions before we move on?
# Data Science 100 Knocks (Structured Data Processing) - SQL
## Introduction
- The database is PostgreSQL 13
- Run the cell below first
- Writing %%sql in a cell lets you issue SQL
- The describe command for inspecting table structure is not available from Jupyter, so use a SELECT with LIMIT (or similar) instead
- You may also use any SQL client you are comfortable with (connection info below)
    - IP address: localhost for Docker Desktop, 192.168.99.100 for Docker Toolbox
    - Port: 5432
    - Database name: dsdojo_db
    - User name: padawan
    - Password: padawan12345
- Large result sets can freeze Jupyter, so limiting the number of output rows is recommended (each question also states how many rows to display)
    - Controlling how much output you display to check results, so the work stays responsive, is itself part of the data-wrangling skill set
    - If a huge result is printed, the notebook file can become too large to open again
        - In that case your work is lost, but you can re-download the file from GitHub
        - You can also delete the oversized output with an editor such as vim
- Names, addresses, and so on are dummy data and do not correspond to real people or places
```
%load_ext sql
import os
pgconfig = {
    'host': 'db',
    'port': os.environ['PG_PORT'],
    'database': os.environ['PG_DATABASE'],
    'user': os.environ['PG_USER'],
    'password': os.environ['PG_PASSWORD'],
}
dsl = 'postgresql://{user}:{password}@{host}:{port}/{database}'.format(**pgconfig)
# Settings for writing SQL via the magic command
%sql $dsl
```
# Usage
- Write %%sql at the top of a cell and SQL from the second line onward to run it against PostgreSQL from Jupyter.
```
%%sql
select 'You can run SQL like this' as sample
```
# 100 Data Processing Knocks
---
> S-001: Display the first 10 rows of all columns of the receipt details table (receipt) and visually check what kind of data it holds.
```
%%sql
SELECT * FROM receipt LIMIT 10
```
---
> S-002: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 rows.
```
%%sql
SELECT sales_ymd, customer_id, product_cd, amount FROM receipt LIMIT 10
```
---
> S-003: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 rows. However, rename sales_ymd to sales_date as you extract it.
```
%%sql
SELECT sales_ymd as sales_date, customer_id, product_cd, amount
FROM receipt LIMIT 10
```
---
> S-004: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the rows that satisfy the following condition:
> - customer ID (customer_id) is "CS018205000001"
```
%%sql
SELECT
sales_ymd as sales_date, customer_id, product_cd, amount
FROM
receipt
WHERE
customer_id = 'CS018205000001'
```
---
> S-005: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the rows that satisfy the following conditions:
> - customer ID (customer_id) is "CS018205000001"
> - sales amount (amount) is 1,000 or more
```
%%sql
SELECT
sales_ymd as sales_date, customer_id, product_cd, amount
FROM
receipt
WHERE
customer_id = 'CS018205000001'
and
amount >= 1000
```
---
> S-006: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), sales quantity (quantity), and sales amount (amount), in that order, and extract the rows that satisfy the following conditions:
> - customer ID (customer_id) is "CS018205000001"
> - sales amount (amount) is 1,000 or more, or sales quantity (quantity) is 5 or more
```
%%sql
SELECT
sales_ymd as sales_date, customer_id, product_cd, quantity, amount
FROM
receipt
WHERE
customer_id = 'CS018205000001'
and
(
amount >= 1000
or
quantity >= 5
)
```
---
> S-007: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the rows that satisfy the following conditions:
> - customer ID (customer_id) is "CS018205000001"
> - sales amount (amount) is between 1,000 and 2,000 inclusive
```
%%sql
SELECT
sales_ymd as sales_date, customer_id, product_cd, amount
FROM
receipt
WHERE
customer_id = 'CS018205000001'
and
amount between 1000 and 2000
```
---
> S-008: From the receipt details table (receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the rows that satisfy the following conditions:
> - customer ID (customer_id) is "CS018205000001"
> - product code (product_cd) is not "P071401019"
```
%%sql
SELECT
sales_ymd as sales_date, customer_id, product_cd, amount
FROM
receipt
WHERE
customer_id = 'CS018205000001'
and
product_cd != 'P071401019'
```
---
> S-009: Rewrite OR as AND in the following query without changing the result:
`select * from store where not (prefecture_cd = '13' or floor_area > 900)`
```
%%sql
SELECT * FROM store WHERE prefecture_cd != '13' and floor_area <= 900
```
---
> S-010: From the store table (store), extract all columns for rows whose store code (store_cd) starts with "S14", and display just 10 rows.
```
%%sql
SELECT * FROM store WHERE store_cd like 'S14%' LIMIT 10
```
---
> S-011: From the customer table (customer), extract all columns for rows whose customer ID (customer_id) ends in 1, and display just 10 rows.
```
%%sql
SELECT * FROM customer WHERE customer_id like '%1' LIMIT 10
```
---
> S-012: From the store table (store), display all columns for stores in Yokohama City (addresses containing "横浜市").
```
%%sql
SELECT * FROM store WHERE address LIKE '%横浜市%'
```
---
> S-013: From the customer table (customer), extract all columns for rows whose status code (status_cd) begins with one of the letters A through F, and display just 10 rows.
```
%%sql
SELECT * FROM customer WHERE status_cd ~ '^[A-F]' LIMIT 10
```
---
> S-014: From the customer table (customer), extract all columns for rows whose status code (status_cd) ends with one of the digits 1 through 9, and display just 10 rows.
```
%%sql
SELECT * FROM customer WHERE status_cd ~ '[1-9]$' LIMIT 10
```
---
> S-015: From the customer table (customer), extract all columns for rows whose status code (status_cd) begins with one of the letters A through F and ends with one of the digits 1 through 9, and display just 10 rows.
```
%%sql
SELECT * FROM customer WHERE status_cd ~ '^[A-F].*[1-9]$' LIMIT 10
```
---
> S-016: From the store table (store), display all columns for rows whose phone number (tel_no) matches the pattern 3 digits-3 digits-4 digits.
```
%%sql
SELECT * FROM store WHERE tel_no ~ '^[0-9]{3}-[0-9]{3}-[0-9]{4}$'
```
---
> S-017: Sort the customer table (customer) by date of birth (birth_day), oldest first, and display all columns of the first 10 rows.
```
%%sql
SELECT * from customer ORDER BY birth_day LIMIT 10
```
---
> S-018: Sort the customer table (customer) by date of birth (birth_day), youngest first, and display all columns of the first 10 rows.
```
%%sql
SELECT * from customer ORDER BY birth_day DESC LIMIT 10
```
---
> S-019: Rank the rows of the receipt details table (receipt) by per-row sales amount (amount) in descending order and extract the first 10. Display the customer ID (customer_id), sales amount (amount), and the assigned rank. Rows with equal sales amounts should receive the same rank.
```
%%sql
SELECT customer_id, amount, RANK() OVER(ORDER BY amount DESC) AS ranking
FROM receipt
LIMIT 10
```
---
> S-020: Rank the rows of the receipt details table (receipt) by per-row sales amount (amount) in descending order and extract the first 10. Display the customer ID (customer_id), sales amount (amount), and the assigned rank. Rows with equal sales amounts should still receive distinct ranks.
```
%%sql
SELECT customer_id, amount, ROW_NUMBER() OVER(ORDER BY amount DESC) AS ranking
FROM receipt
LIMIT 10
```
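The behavior of the two ranking functions on ties can be reproduced outside the course database; the sketch below uses Python's sqlite3 with made-up data (window functions need SQLite 3.25 or later):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE receipt_demo (customer_id TEXT, amount INTEGER)')
con.executemany('INSERT INTO receipt_demo VALUES (?, ?)',
                [('A', 300), ('B', 200), ('C', 300), ('D', 100)])

# RANK() gives tied amounts the same rank (and skips the next rank);
# ROW_NUMBER() always numbers rows 1, 2, 3, ...
rows = con.execute('''
    SELECT customer_id, amount,
           RANK()       OVER (ORDER BY amount DESC) AS rnk,
           ROW_NUMBER() OVER (ORDER BY amount DESC) AS rn
    FROM receipt_demo
''').fetchall()
for row in rows:
    print(row)
```

With the data above, A and C tie at 300, so RANK() assigns both rank 1 and the next row gets rank 3, while ROW_NUMBER() still produces 1 through 4.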
---
> S-021: Count the rows of the receipt details table (receipt).
```
%%sql
SELECT count(1) FROM receipt
```
---
> S-022: Count the distinct customer IDs (customer_id) in the receipt details table (receipt).
```
%%sql
SELECT count(distinct customer_id) FROM receipt
```
---
> S-023: For the receipt details table (receipt), total the sales amount (amount) and sales quantity (quantity) per store code (store_cd).
```
%%sql
SELECT store_cd
, SUM(amount) as amount
, SUM(quantity) as quantity
FROM receipt
group by store_cd
```
---
> S-024: For the receipt details table (receipt), find the most recent sales date (sales_ymd) per customer ID (customer_id) and display 10 rows.
```
%%sql
SELECT customer_id, MAX(sales_ymd)
FROM receipt
GROUP BY customer_id
LIMIT 10
```
---
> S-025: For the receipt details table (receipt), find the oldest sales date (sales_ymd) per customer ID (customer_id) and display 10 rows.
```
%%sql
SELECT customer_id, MIN(sales_ymd)
FROM receipt
GROUP BY customer_id
LIMIT 10
```
---
> S-026: For the receipt details table (receipt), find the most recent and the oldest sales date (sales_ymd) per customer ID (customer_id), and display 10 rows where the two differ.
```
%%sql
SELECT customer_id, MAX(sales_ymd), MIN(sales_ymd)
FROM receipt
GROUP BY customer_id
HAVING MAX(sales_ymd) != MIN(sales_ymd)
LIMIT 10
```
---
> S-027: For the receipt details table (receipt), compute the average sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
```
%%sql
SELECT store_cd, AVG(amount) as avr_amount
FROM receipt
GROUP BY store_cd
ORDER BY avr_amount DESC
LIMIT 5
```
---
> S-028: For the receipt details table (receipt), compute the median sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
```
%%sql
SELECT
store_cd,
PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY amount) as amount_50per
FROM receipt
GROUP BY store_cd
ORDER BY amount_50per desc
LIMIT 5
```
---
> S-029: For the receipt details table (receipt), find the mode of the product code (product_cd) per store code (store_cd).
```
%%sql
-- Example 1: compute the mode with a window function
WITH product_mode AS (
SELECT store_cd,product_cd, COUNT(1) as mode_cnt,
RANK() OVER(PARTITION BY store_cd ORDER BY COUNT(1) DESC) AS rnk
FROM receipt
GROUP BY store_cd,product_cd
)
SELECT store_cd,product_cd, mode_cnt
FROM product_mode
WHERE rnk = 1
ORDER BY store_cd,product_cd;
```

```
%%sql
-- Example 2: a simpler version using mode() (fast, but when several values tie for most frequent only one is returned)
SELECT store_cd, mode() WITHIN GROUP(ORDER BY product_cd)
FROM receipt
GROUP BY store_cd
ORDER BY store_cd
```
---
> S-030: For the receipt details table (receipt), compute the sample variance of the sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
```
%%sql
SELECT store_cd, var_samp(amount) as vars_amount
FROM receipt
GROUP BY store_cd
ORDER BY vars_amount desc
LIMIT 5
```
---
> S-031: For the receipt details table (receipt), compute the sample standard deviation of the sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
```
%%sql
SELECT store_cd, stddev_samp(amount) as stds_amount
FROM receipt
GROUP BY store_cd
ORDER BY stds_amount desc
LIMIT 5
```
---
> S-032: For the sales amount (amount) in the receipt details table (receipt), compute the percentile values in 25% increments.
```
%%sql
SELECT
PERCENTILE_CONT(0.25) WITHIN GROUP(ORDER BY amount) as amount_25per,
PERCENTILE_CONT(0.50) WITHIN GROUP(ORDER BY amount) as amount_50per,
PERCENTILE_CONT(0.75) WITHIN GROUP(ORDER BY amount) as amount_75per,
PERCENTILE_CONT(1.0) WITHIN GROUP(ORDER BY amount) as amount_100per
FROM receipt
```
---
> S-033: For the receipt details table (receipt), compute the average sales amount (amount) per store code (store_cd) and extract those of 330 or more.
```
%%sql
SELECT store_cd, AVG(amount) as avg_amount
FROM receipt
GROUP BY store_cd
HAVING AVG(amount) >= 330
```
---
> S-034: For the receipt details table (receipt), total the sales amount (amount) per customer ID (customer_id) and compute the average over all customers. Exclude customer IDs starting with "Z", which denote non-members.
```
%%sql
WITH customer_amount AS (
SELECT customer_id, SUM(amount) AS sum_amount
FROM receipt
WHERE customer_id not like 'Z%'
GROUP BY customer_id
)
SELECT AVG(sum_amount) from customer_amount
```
---
> S-035: For the receipt details table (receipt), total the sales amount (amount) per customer ID (customer_id), compute the average over all customers, and extract the customers who spent at least that average. Exclude customer IDs starting with "Z", which denote non-members. Displaying 10 rows is sufficient.
```
%%sql
WITH customer_amount AS (
SELECT customer_id, SUM(amount) AS sum_amount
FROM receipt
WHERE customer_id not like 'Z%'
GROUP BY customer_id
)
SELECT customer_id, sum_amount
FROM customer_amount
WHERE sum_amount >= (SELECT AVG(sum_amount) from customer_amount)
limit 10
```
---
> S-036: Inner-join the receipt details table (receipt) and the store table (store), and display all columns of the receipt details table plus the store name (store_name), 10 rows.
```
%%sql
SELECT r.*, s.store_name
FROM receipt r
JOIN store s
ON r.store_cd = s.store_cd
LIMIT 10
```
---
> S-037: Inner-join the product table (product) and the category table (category), and display all columns of the product table plus the small-category name (category_small_name), 10 rows.
```
%%sql
SELECT p.*, c.category_small_name
FROM product p
JOIN category c
on p.category_small_cd = c.category_small_cd
LIMIT 10
```
---
> S-038: From the customer table (customer) and the receipt details table (receipt), compute the total sales amount per customer. Customers with no purchase history should show a sales amount of 0. Target only customers whose gender code (gender_cd) is female (1), and exclude non-members (customer IDs starting with 'Z'). Displaying 10 rows is sufficient.
```
%%sql
WITH customer_amount AS (
SELECT customer_id, SUM(amount) AS sum_amount
FROM receipt
GROUP BY customer_id
)
SELECT c.customer_id, COALESCE(a.sum_amount,0)
FROM customer c
LEFT JOIN customer_amount a
ON c.customer_id = a.customer_id
WHERE c.gender_cd = '1'
and c.customer_id not like 'Z%'
LIMIT 10
```
---
> S-039: From the receipt details table (receipt), extract the top 20 customers by number of distinct sales days and the top 20 customers by total sales amount, and full-outer-join the two. Exclude non-members (customer IDs starting with 'Z').
```
%%sql
WITH customer_days AS (
select customer_id, count(distinct sales_ymd) come_days
FROM receipt
WHERE customer_id NOT LIKE 'Z%'
GROUP BY customer_id
ORDER BY come_days DESC LIMIT 20
),
customer_amount AS (
SELECT customer_id, sum(amount) buy_amount
FROM receipt
WHERE customer_id NOT LIKE 'Z%'
GROUP BY customer_id
ORDER BY buy_amount DESC LIMIT 20
)
SELECT COALESCE(d.customer_id, a.customer_id), d.come_days, a.buy_amount
FROM customer_days d
FULL JOIN customer_amount a
ON d.customer_id = a.customer_id;
```
---
> S-040: We want to know how many rows result from combining every store with every product. Compute the row count of the cross join of the store table (store) and the product table (product).
```
%%sql
SELECT COUNT(1) FROM store CROSS JOIN product;
```
---
> S-041: Aggregate the sales amount (amount) of the receipt details table (receipt) by date (sales_ymd) and compute the change in sales amount from the previous day. Displaying 10 rows is sufficient.
```
%%sql
WITH sales_amount_by_date AS (
SELECT sales_ymd, SUM(amount) as amount FROM receipt
GROUP BY sales_ymd
ORDER BY sales_ymd
)
SELECT sales_ymd, LAG(sales_ymd, 1) OVER(ORDER BY sales_ymd) lag_ymd,
amount,
LAG(amount, 1) OVER(ORDER BY sales_ymd) as lag_amount,
amount - LAG(amount, 1) OVER(ORDER BY sales_ymd) as diff_amount
FROM sales_amount_by_date
LIMIT 10;
```
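The same LAG() pattern works in other engines too; here is a minimal sqlite3 sketch with made-up daily totals (the first date has no predecessor, so its lag columns come back NULL):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE daily_sales (sales_ymd INTEGER, amount INTEGER)')
con.executemany('INSERT INTO daily_sales VALUES (?, ?)',
                [(20170101, 100), (20170102, 150), (20170103, 120)])

# amount minus the previous day's amount, ordered by date
rows = con.execute('''
    SELECT sales_ymd, amount,
           LAG(amount) OVER (ORDER BY sales_ymd)          AS lag_amount,
           amount - LAG(amount) OVER (ORDER BY sales_ymd) AS diff_amount
    FROM daily_sales
    ORDER BY sales_ymd
''').fetchall()
print(rows)
```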
---
> S-042: Aggregate the sales amount (amount) of the receipt details table (receipt) by date (sales_ymd), and join each date's row with the rows 1, 2, and 3 dates before it. Displaying 10 rows is sufficient.
```
%%sql
-- Example 1: long (vertical) format
WITH sales_amount_by_date AS (
SELECT sales_ymd, SUM(amount) as amount FROM receipt
GROUP BY sales_ymd
ORDER BY sales_ymd
),
sales_amount_lag_date AS (
SELECT sales_ymd,
COALESCE(LAG(sales_ymd, 3) OVER (ORDER BY sales_ymd),
MIN(sales_ymd) OVER (PARTITION BY NULL)) as lag_date_3,
amount
FROM sales_amount_by_date
)
SELECT a.sales_ymd, b.sales_ymd as lag_ymd,
a.amount as amount, b.amount as lag_amount
FROM sales_amount_lag_date a
JOIN sales_amount_lag_date b
ON b.sales_ymd >= a.lag_date_3
and b.sales_ymd < a.sales_ymd
ORDER BY sales_ymd, lag_ymd
LIMIT 10;
```

```
%%sql
-- Example 2: wide (horizontal) format
WITH sales_amount_by_date AS (
SELECT sales_ymd, SUM(amount) as amount FROM receipt
GROUP BY sales_ymd
ORDER BY sales_ymd
), sales_amount_with_lag AS(
SELECT sales_ymd, amount,
LAG(sales_ymd, 1) OVER (ORDER BY sales_ymd) as lag_ymd_1,
LAG(amount, 1) OVER (ORDER BY sales_ymd) as lag_amount_1,
LAG(sales_ymd, 2) OVER (ORDER BY sales_ymd) as lag_ymd_2,
LAG(amount, 2) OVER (ORDER BY sales_ymd) as lag_amount_2,
LAG(sales_ymd, 3) OVER (ORDER BY sales_ymd) as lag_ymd_3,
LAG(amount, 3) OVER (ORDER BY sales_ymd) as lag_amount_3
FROM sales_amount_by_date
)
SELECT * FROM sales_amount_with_lag
WHERE lag_ymd_3 IS NOT NULL
ORDER BY sales_ymd
LIMIT 10;
```
---
> S-043: Join the receipt details table (receipt) and the customer table (customer), and create a sales summary table (sales_summary) that totals the sales amount (amount) by gender (gender) and age decade (computed from age). Gender codes 0, 1, and 9 denote male, female, and unknown respectively.
>
> The table should have 4 columns: age decade, female sales amount, male sales amount, and unknown-gender sales amount (a cross-tabulation with decades down the rows and gender across the columns). Use 10-year age bands.
```
%%sql
-- Not a natural fit for SQL, so written somewhat forcibly (note: the query becomes very long when there are many categories)
DROP TABLE IF EXISTS sales_summary;
CREATE TABLE sales_summary AS
WITH gender_era_amount AS (
SELECT c.gender_cd,
TRUNC(age/ 10) * 10 AS era,
SUM(r.amount) AS amount
FROM customer c
JOIN receipt r
ON c.customer_id = r.customer_id
GROUP BY c.gender_cd, era
)
select era,
MAX(CASE gender_cd WHEN '0' THEN amount ELSE 0 END) AS male ,
MAX(CASE gender_cd WHEN '1' THEN amount ELSE 0 END) AS female,
MAX(CASE gender_cd WHEN '9' THEN amount ELSE 0 END) AS unknown
FROM gender_era_amount
GROUP BY era
ORDER BY era
;
```

```
%%sql
SELECT * FROM sales_summary;
```
---
> S-044: The sales summary table (sales_summary) created in the previous question held gender in wide format. Convert gender to long format, producing 3 columns: age decade, gender code, and sales amount. Use gender codes '00' for male, '01' for female, and '99' for unknown.
```
%%sql
-- Not a natural fit for SQL, so written somewhat forcibly (note: the query becomes very long when there are many categories)
SELECT era, '00' as gender_cd , male AS amount FROM sales_summary
UNION ALL
SELECT era, '01' as gender_cd, female AS amount FROM sales_summary
UNION ALL
SELECT era, '99' as gender_cd, unknown AS amount FROM sales_summary
```
---
> S-045: The date of birth (birth_day) in the customer table (customer) is held as a date type. Convert it to a string in YYYYMMDD format and extract it together with the customer ID (customer_id). Extracting 10 rows is sufficient.
```
%%sql
SELECT customer_id, TO_CHAR(birth_day, 'YYYYMMDD') FROM customer LIMIT 10;
```
---
> S-046: The application date (application_date) in the customer table (customer) is held as a string in YYYYMMDD format. Convert it to a date type and extract it together with the customer ID (customer_id). Extracting 10 rows is sufficient.
```
%%sql
SELECT customer_id, TO_DATE(application_date, 'YYYYMMDD')
FROM customer LIMIT 10;
```
---
> S-047: The sales date (sales_ymd) in the receipt details table (receipt) is held as a number in YYYYMMDD format. Convert it to a date type and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
%%sql
SELECT
TO_DATE(CAST(sales_ymd AS VARCHAR), 'YYYYMMDD'),
receipt_no,
receipt_sub_no
FROM receipt
LIMIT 10;
```
---
> S-048: The sales epoch seconds (sales_epoch) in the receipt details table (receipt) are held as numeric UNIX seconds. Convert them to a date type and extract them together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
%%sql
SELECT
TO_TIMESTAMP(sales_epoch) as sales_date,
receipt_no, receipt_sub_no
FROM receipt
LIMIT 10;
```
---
> S-049: Convert the sales epoch seconds (sales_epoch) of the receipt details table (receipt) to a date type, extract just the year, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
%%sql
SELECT
TO_CHAR(EXTRACT(YEAR FROM TO_TIMESTAMP(sales_epoch)),'FM9999') as sales_year,
receipt_no,
receipt_sub_no
FROM receipt
LIMIT 10;
```
---
> S-050: Convert the sales epoch seconds (sales_epoch) of the receipt details table (receipt) to a date type, extract just the month, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extract the month as a zero-padded 2-digit value. Extracting 10 rows is sufficient.
```
%%sql
SELECT
TO_CHAR(EXTRACT(
MONTH FROM TO_TIMESTAMP(sales_epoch)
), 'FM00') as sales_month,
receipt_no, receipt_sub_no
FROM receipt LIMIT 10;
```
---
> S-051: Convert the sales epoch seconds (sales_epoch) of the receipt details table (receipt) to a date type, extract just the day, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extract the day as a zero-padded 2-digit value. Extracting 10 rows is sufficient.
```
%%sql
SELECT
receipt_no, receipt_sub_no,
TO_CHAR(EXTRACT(DAY FROM TO_TIMESTAMP(sales_epoch)), 'FM00') as sales_day
FROM receipt LIMIT 10;
```
---
> S-052: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id), binarize the total into 0 for 2,000 yen or less and 1 for more than 2,000 yen, and display it together with the customer ID and the total, 10 rows. Exclude customer IDs starting with "Z", which denote non-members.
```
%%sql
SELECT
customer_id,
SUM(amount) AS sum_amount,
CASE
WHEN SUM(amount) > 2000 THEN 1
WHEN SUM(amount) <= 2000 THEN 0
END as amount_flg
FROM receipt
WHERE customer_id not like 'Z%'
GROUP BY customer_id
LIMIT 10
```
---
> S-053: Binarize the postal code (postal_cd) of the customer table (customer) into 1 for Tokyo (first 3 digits 100 through 209) and 0 for everything else. Then join with the receipt details table (receipt) and count, per binary value, the customers with purchase history over the whole period.
```
%%sql
WITH cust AS (
SELECT
customer_id,
postal_cd,
CASE
WHEN 100 <= CAST(SUBSTR(postal_cd, 1, 3) AS INTEGER)
AND CAST(SUBSTR(postal_cd, 1, 3) AS INTEGER) <= 209 THEN 1
ELSE 0
END AS postal_flg
FROM customer
),
rect AS(
SELECT
customer_id,
SUM(amount)
FROM
receipt
GROUP BY
customer_id
)
SELECT
c.postal_flg, count(1)
FROM
rect r
JOIN
cust c
ON
r.customer_id = c.customer_id
GROUP BY
c.postal_flg
```
---
> S-054: The address (address) in the customer table (customer) is in one of Saitama, Chiba, Tokyo, or Kanagawa prefecture. Create a code value per prefecture and extract it together with the customer ID and address. Use 11 for Saitama, 12 for Chiba, 13 for Tokyo, and 14 for Kanagawa. Displaying 10 rows is sufficient.
```
%%sql
-- Not a natural fit for SQL, so written somewhat forcibly (note: the query becomes very long when there are many categories)
SELECT
customer_id,
-- also display the address for verification
address,
CASE SUBSTR(address,1, 3)
WHEN '埼玉県' THEN '11'
WHEN '千葉県' THEN '12'
WHEN '東京都' THEN '13'
WHEN '神奈川' THEN '14'
END AS prefecture_cd
FROM
customer
LIMIT 10
```
---
> S-055: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id) and find the quartiles of the totals. Then assign each customer's total a category value by the criteria below, and display it together with the customer ID and the total. Number the categories 1 through 4, top to bottom. Displaying 10 rows is sufficient.
>
> - At least the minimum and less than the first quartile
> - At least the first quartile and less than the second quartile
> - At least the second quartile and less than the third quartile
> - At least the third quartile
```
%%sql
WITH sales_amount AS(
SELECT
customer_id,
SUM(amount) as sum_amount
FROM
receipt
GROUP BY
customer_id
),
sales_pct AS (
SELECT
PERCENTILE_CONT(0.25) WITHIN GROUP(ORDER BY sum_amount) AS pct25,
PERCENTILE_CONT(0.50) WITHIN GROUP(ORDER BY sum_amount) AS pct50,
PERCENTILE_CONT(0.75) WITHIN GROUP(ORDER BY sum_amount) AS pct75
FROM
sales_amount
)
SELECT
a.customer_id,
a.sum_amount,
CASE
WHEN a.sum_amount < pct25 THEN 1
WHEN pct25 <= a.sum_amount and a.sum_amount < pct50 THEN 2
WHEN pct50 <= a.sum_amount and a.sum_amount < pct75 THEN 3
WHEN pct75 <= a.sum_amount THEN 4
END as pct_flg
FROM sales_amount a
CROSS JOIN sales_pct p
LIMIT 10
```
---
> S-056: Compute the age decade in 10-year bands from the age (age) in the customer table (customer), and extract it together with the customer ID (customer_id) and date of birth (birth_day). Treat everyone aged 60 or over as the 60s decade. The category names for decades may be anything. Displaying the first 10 rows is sufficient.
```
%%sql
SELECT
customer_id,
birth_day,
LEAST(CAST(TRUNC(age / 10) * 10 AS INTEGER), 60) AS era
FROM
customer
GROUP BY
customer_id,
birth_day
-- condition for verification (uncomment to check)
--HAVING LEAST(CAST(TRUNC(age / 10) * 10 AS INTEGER), 60) >= 60
LIMIT 10
```
---
> S-057: Combine the extraction result of the previous question with gender (gender) to create new category data representing gender × decade combinations. The category values may be anything. Displaying the first 10 rows is sufficient.
```
%%sql
SELECT
customer_id,
birth_day,
gender_cd || LEAST(CAST(TRUNC(age / 10) * 10 AS INTEGER), 60) AS era
FROM
customer
GROUP BY
customer_id,
birth_day
-- condition for verification (uncomment to check)
--HAVING LEAST(CAST(TRUNC(age / 10) * 10 AS INTEGER), 60) >= 60
LIMIT 10
```
---
> S-058: Turn the gender code (gender_cd) of the customer table (customer) into dummy variables and extract them together with the customer ID (customer_id). Displaying 10 rows is sufficient.
```
%%sql
-- Not a natural fit for SQL, so written somewhat forcibly (note: the query becomes very long when there are many categories)
SELECT
customer_id,
CASE WHEN gender_cd = '0' THEN '1' ELSE '0' END AS gender_male,
CASE WHEN gender_cd = '1' THEN '1' ELSE '0' END AS gender_female,
CASE WHEN gender_cd = '9' THEN '1' ELSE '0' END AS gender_unknown
FROM
customer
LIMIT 10
```
---
> S-059: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id), standardize the totals to mean 0 and standard deviation 1, and display them together with the customer ID and the total. Either the unbiased or the sample standard deviation may be used. Exclude customer IDs starting with "Z", which denote non-members. Displaying 10 rows is sufficient.
```
%%sql
WITH sales_amount AS(
SELECT
customer_id,
SUM(amount) as sum_amount
FROM
receipt
WHERE
customer_id NOT LIKE 'Z%'
GROUP BY
customer_id
),
stats_amount AS (
SELECT
AVG(sum_amount) as avg_amount,
stddev_samp(sum_amount) as std_amount
FROM
sales_amount
)
SELECT
customer_id,
sum_amount,
(sum_amount - avg_amount) / std_amount as normal_amount
FROM sales_amount
CROSS JOIN stats_amount
LIMIT 10
```
---
> S-060: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id), normalize the totals to minimum 0 and maximum 1, and display them together with the customer ID and the total. Exclude customer IDs starting with "Z", which denote non-members. Displaying 10 rows is sufficient.
```
%%sql
WITH sales_amount AS(
SELECT
customer_id,
SUM(amount) as sum_amount
FROM
receipt
WHERE
customer_id NOT LIKE 'Z%'
GROUP BY
customer_id
),
stats_amount AS (
SELECT
max(sum_amount) as max_amount,
min(sum_amount) as min_amount
FROM
sales_amount
)
SELECT
customer_id,
sum_amount,
(sum_amount - min_amount) * 1.0
/ (max_amount - min_amount) * 1.0 AS scale_amount
FROM sales_amount
CROSS JOIN stats_amount
LIMIT 10
```
---
> S-061: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id), take the common logarithm (base 10) of the totals, and display them together with the customer ID and the total. Exclude customer IDs starting with "Z", which denote non-members. Displaying 10 rows is sufficient.
```
%%sql
SELECT
customer_id,
SUM(amount),
LOG(SUM(amount) + 1) as log_amount
FROM
receipt
WHERE
customer_id NOT LIKE 'Z%'
GROUP BY
customer_id
LIMIT 10
```
---
> S-062: Total the sales amount (amount) of the receipt details table (receipt) per customer ID (customer_id), take the natural logarithm (base e) of the totals, and display them together with the customer ID and the total (exclude customer IDs starting with "Z", which denote non-members). Displaying 10 rows is sufficient.
```
%%sql
SELECT
customer_id,
SUM(amount),
LN(SUM(amount) + 1) as log_amount
FROM
receipt
WHERE
customer_id NOT LIKE 'Z%'
GROUP BY
customer_id
LIMIT 10
```
---
> S-063: From the unit price (unit_price) and cost (unit_cost) of the product table (product), compute each product's profit. Displaying 10 rows is sufficient.
```
%%sql
SELECT
product_cd,
unit_price,
unit_cost,
unit_price - unit_cost as unit_profit
FROM
product
LIMIT 10
```
---
> S-064: From the unit price (unit_price) and cost (unit_cost) of the product table (product), compute the overall average profit margin. Note that unit_price and unit_cost contain NULLs.
```
%%sql
SELECT
AVG((unit_price * 1.0 - unit_cost) / unit_price) as unit_profit_rate
FROM
product
LIMIT 10
```
---
> S-065: For each product in the product table (product), find a new unit price that yields a 30% profit margin, truncating fractions of a yen. Display 10 rows and confirm that the margin is roughly 30%. Note that unit_price and unit_cost contain NULLs.
```
%%sql
SELECT
product_cd,
unit_price,
unit_cost,
TRUNC(unit_cost / 0.7) as new_price,
((TRUNC(unit_cost / 0.7) - unit_cost)
/ TRUNC(unit_cost / 0.7)) as new_profit
FROM
product
LIMIT 10
```
---
> S-066: For each product in the product table (product), find a new unit price that yields a 30% profit margin, this time rounding fractions of a yen to the nearest yen. Display 10 rows and confirm that the margin is roughly 30%. Note that unit_price and unit_cost contain NULLs.
```
%%sql
SELECT ROUND(2.5)  -- check how ROUND handles .5 before using it below
```

```
%%sql
SELECT
product_cd,
unit_price,
unit_cost,
ROUND(unit_cost / 0.7) as new_price,
((ROUND(unit_cost / 0.7) - unit_cost)
/ ROUND(unit_cost / 0.7)) as new_profit
FROM
product
LIMIT 10
```
---
> S-067: For each product in the product table (product), find a new unit price that yields a 30% profit margin, this time rounding fractions of a yen up. Display 10 rows and confirm that the margin is roughly 30%. Note that unit_price and unit_cost contain NULLs.
```
%%sql
SELECT
product_cd,
unit_price,
unit_cost,
CEIL(unit_cost / 0.7) as new_price,
((CEIL(unit_cost / 0.7) - unit_cost) / CEIL(unit_cost / 0.7)) as new_profit
FROM
product
LIMIT 10
```
---
> S-068: For each product in the product table (product), compute the tax-included price at a 10% consumption tax rate, truncating fractions of a yen. Displaying 10 rows is sufficient. Note that unit_price contains NULLs.
```
%%sql
SELECT
product_cd,
unit_price,
TRUNC(unit_price * 1.1) as tax_price
FROM
product
LIMIT 10
```
---
> S-069: Join the receipt details table (receipt) and the product table (product), and per customer compute the total sales amount over all products and the total for major category (category_major_cd) "07" (bottled/canned goods), then take the ratio of the two. Target only customers with purchases in major category "07". Displaying 10 rows is sufficient.
```
%%sql
WITH amount_all AS(
SELECT
customer_id,
sum(amount) AS sum_all
FROM
receipt
GROUP BY
customer_id
),
amount_07 AS (
SELECT
r.customer_id,
sum(r.amount) AS sum_07
FROM
receipt r
JOIN
product p
ON
r.product_cd = p.product_cd
and p.category_major_cd = '07'
GROUP BY
customer_id
)
SELECT
amount_all.customer_id,
sum_all,
sum_07,
sum_07 * 1.0 / sum_all as sales_rate
FROM
amount_all
JOIN
amount_07
ON
amount_all.customer_id = amount_07.customer_id
LIMIT 10
```
---
> S-070: For the sales date (sales_ymd) of the receipt details table (receipt), compute the number of days elapsed since the membership application date (application_date) in the customer table (customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 rows is sufficient. (Note that sales_ymd is held as a number and application_date as a string.)
```
%%sql
WITH receit_distinct AS (
SELECT distinct
customer_id,
sales_ymd
FROM
receipt
)
SELECT
c.customer_id,
r.sales_ymd,
c.application_date,
EXTRACT(DAY FROM (TO_TIMESTAMP(CAST(r.sales_ymd AS VARCHAR), 'YYYYMMDD')
- TO_TIMESTAMP(c.application_date, 'YYYYMMDD'))) AS elapsed_days
FROM
receit_distinct r
JOIN
customer c
ON
r.customer_id = c.customer_id
LIMIT 10
```
---
> S-071: For the sales date (sales_ymd) of the receipt details table (receipt), compute the number of months elapsed since the membership application date (application_date) in the customer table (customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 rows is sufficient. (Note that sales_ymd is held as a number and application_date as a string.) Truncate partial months.
```
%%sql
WITH receit_distinct AS (
SELECT distinct
customer_id,
sales_ymd
FROM
receipt
),
time_age_tbl AS(
SELECT
c.customer_id,
r.sales_ymd,
c.application_date,
AGE(TO_TIMESTAMP(CAST(r.sales_ymd AS VARCHAR), 'YYYYMMDD'),
TO_TIMESTAMP(c.application_date, 'YYYYMMDD')) AS time_age
FROM
receit_distinct r
JOIN
customer c
ON
r.customer_id = c.customer_id
)
SELECT
customer_id,
sales_ymd, application_date,
extract(year from time_age) * 12
+ extract(month from time_age) AS elapsed_months
FROM
time_age_tbl
LIMIT 10
```
---
> S-072: For the sales date (sales_ymd) of the receipt details table (receipt), compute the number of years elapsed since the membership application date (application_date) in the customer table (customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 rows is sufficient. (Note that sales_ymd is held as a number and application_date as a string.) Truncate partial years.
```
%%sql
WITH receit_distinct AS (
SELECT distinct
customer_id,
sales_ymd
FROM
receipt
)
SELECT
c.customer_id,
r.sales_ymd,
c.application_date,
EXTRACT(YEAR FROM AGE(
TO_TIMESTAMP(CAST(r.sales_ymd AS VARCHAR), 'YYYYMMDD'),
TO_TIMESTAMP(c.application_date, 'YYYYMMDD'))) AS elapsed_years
FROM
receit_distinct r
JOIN
customer c
ON
r.customer_id = c.customer_id
LIMIT 10
```
---
> S-073: For the sales date (sales_ymd) of the receipt details table (receipt), compute the elapsed time in epoch seconds since the membership application date (application_date) in the customer table (customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 rows is sufficient. (Note that sales_ymd is held as a number and application_date as a string.) Since no time-of-day information is held, treat each date as 00:00:00.
```
%%sql
WITH receit_distinct AS (
SELECT distinct
customer_id,
sales_ymd
FROM
receipt
)
SELECT
c.customer_id,
r.sales_ymd,
c.application_date,
EXTRACT(
EPOCH FROM TO_TIMESTAMP(CAST(r.sales_ymd AS VARCHAR), 'YYYYMMDD'))
- EXTRACT(
EPOCH FROM TO_TIMESTAMP(c.application_date, 'YYYYMMDD')
) AS elapsed_epoch
FROM
receit_distinct r
JOIN
customer c
ON
r.customer_id = c.customer_id
LIMIT 10
```
---
> S-074: For the sales date (sales_ymd) of the receipt details table (receipt), compute the number of days elapsed since the Monday of that week, and display it together with the sales date and that Monday's date. Displaying 10 rows is sufficient. (Note that sales_ymd is held as a number.)
```
%%sql
SELECT
customer_id,
TO_DATE(CAST(sales_ymd AS VARCHAR), 'YYYYMMDD'),
EXTRACT(DOW FROM (
TO_DATE(CAST(sales_ymd AS VARCHAR), 'YYYYMMDD') - 1)) AS elapsed_days,
TO_DATE(CAST(sales_ymd AS VARCHAR), 'YYYYMMDD')
- CAST(EXTRACT(
DOW FROM (TO_DATE(CAST(sales_ymd AS VARCHAR), 'YYYYMMDD') - 1)
) AS INTEGER) AS monday
FROM
receipt
LIMIT 10
```
---
> S-075: Randomly sample 1% of the data in the customer table (customer) and extract the first 10 rows.
```
%%sql
-- Example 1 (the simple way)
SELECT * FROM customer WHERE RANDOM() <= 0.01
LIMIT 10
```

```
%%sql
-- Example 2 (the careful way)
WITH customer_tmp AS(
SELECT
*
,ROW_NUMBER() OVER() as row
,COUNT(*) OVER() as count
FROM customer
ORDER BY random()
)
SELECT
customer_id
,customer_name
,gender_cd
,gender
,birth_day
,age
,postal_cd
,address
,application_store_cd
,application_date
,status_cd
FROM customer_tmp
WHERE row < count * 0.01
LIMIT 10
```
---
> S-076: From the customer table (customer), randomly sample 10% of the data stratified by the proportions of gender (gender_cd), and count the rows per gender.
```
%%sql
-- When there are few categories it is simpler to sample each one and UNION the results, but the SQL below allows for many categories
-- Note: this uses ORDER BY RANDOM(), so take care on large data
WITH customer_random AS (
SELECT customer_id, g_cd, cnt
FROM (
SELECT
ARRAY_AGG(customer ORDER BY RANDOM()) AS customer_r,
gender_cd as g_cd, count(1) as cnt
FROM
customer
GROUP BY gender_cd
)sample, UNNEST(customer_r)
),
customer_rownum AS(
SELECT * , ROW_NUMBER() OVER(PARTITION BY g_cd) AS rn FROM customer_random
)
SELECT
g_cd,
count(1)
FROM
customer_rownum
WHERE rn <= cnt * 0.1
GROUP BY g_cd
```
---
> S-077: Total the sales amount (amount) of the receipt details table (receipt) per customer and extract the outliers among the totals. Exclude customer IDs starting with "Z", which denote non-members. Here, define an outlier as a value 3σ or more away from the mean. Displaying 10 rows is sufficient.
```
%%sql
WITH sales_amount AS(
SELECT customer_id, SUM(amount) AS sum_amount
FROM receipt
WHERE customer_id NOT LIKE 'Z%'
GROUP BY customer_id
)
SELECT customer_id, sum_amount
FROM sales_amount
CROSS JOIN (
SELECT AVG(sum_amount) AS avg_amount, STDDEV_SAMP(sum_amount) AS std_amount
FROM sales_amount
) stats_amount
WHERE ABS(sum_amount - avg_amount) / std_amount > 3
LIMIT 10
```
---
> S-078: Total the sales amount (amount) of the receipt details table (receipt) per customer and extract the outliers among the totals. Exclude customer IDs starting with "Z", which denote non-members. Here, using the IQR (the difference between the first and third quartiles), define an outlier as a value below "first quartile - 1.5 × IQR" or above "third quartile + 1.5 × IQR". Displaying 10 rows is sufficient.
```
%%sql
WITH sales_amount AS(
SELECT customer_id, SUM(amount) AS sum_amount
FROM receipt
WHERE customer_id NOT LIKE 'Z%'
GROUP BY customer_id
)
SELECT customer_id, sum_amount
FROM sales_amount
CROSS JOIN (
SELECT
PERCENTILE_CONT(0.25) WITHIN GROUP(ORDER BY sum_amount) as amount_25per,
PERCENTILE_CONT(0.75) WITHIN GROUP(ORDER BY sum_amount) as amount_75per
FROM sales_amount
) stats_amount
WHERE sum_amount < amount_25per - (amount_75per - amount_25per) * 1.5
OR amount_75per + (amount_75per - amount_25per) * 1.5 < sum_amount
LIMIT 10
```
---
> S-079: For each column of the product table (product), count the missing values.
```
%%sql
SELECT
SUM(
CASE WHEN product_cd IS NULL THEN 1 ELSE 0 END
) AS product_cd,
SUM(
CASE WHEN category_major_cd IS NULL THEN 1 ELSE 0 END
) AS category_major_cd,
SUM(
CASE WHEN category_medium_cd IS NULL THEN 1 ELSE 0 END
) AS category_medium_cd,
SUM(
CASE WHEN category_small_cd IS NULL THEN 1 ELSE 0 END
) AS category_small_cd,
SUM(
CASE WHEN unit_price IS NULL THEN 1 ELSE 0 END
) AS unit_price,
SUM(
CASE WHEN unit_cost IS NULL THEN 1 ELSE 0 END
) AS unit_cost
FROM product LIMIT 10
```
---
> S-080: Create a new product_1 by deleting every record of the product table (product) with a missing value in any column. Display the row counts before and after the deletion, and confirm that the count dropped by exactly the number found in the previous question.
```
%%sql
SELECT COUNT(1) FROM product;
```

```
%%sql
DROP TABLE IF EXISTS product_1;
CREATE TABLE product_1 AS (
SELECT * FROM product
WHERE unit_price IS NOT NULL AND unit_cost IS NOT NULL
);
SELECT COUNT(1) FROM product_1;
```
---
> S-081: Create a new product_2 in which missing unit price (unit_price) and cost (unit_cost) values are imputed with their respective means. Round the means to the nearest yen. After imputation, confirm that neither column has missing values.
```
%%sql
DROP TABLE IF EXISTS product_2;
CREATE TABLE product_2 AS (
SELECT
product_cd,
category_major_cd,
category_medium_cd,
category_small_cd,
COALESCE(unit_price, unit_avg) as unit_price,
COALESCE(unit_cost, cost_avg) as unit_cost
FROM
product
CROSS JOIN (
SELECT
ROUND(AVG(unit_price)) AS unit_avg,
ROUND(AVG(unit_cost)) AS cost_avg
FROM
product
) stats_product
)
```

```
%%sql
SELECT
SUM(CASE WHEN unit_price IS NULL THEN 1 ELSE 0 END) AS unit_price,
SUM(CASE WHEN unit_cost IS NULL THEN 1 ELSE 0 END) AS unit_cost
FROM product_2 LIMIT 10
```
---
> S-082: Create a new product_3 in which missing unit price (unit_price) and cost (unit_cost) values are imputed with their respective medians. Round the medians to the nearest yen. After imputation, confirm that neither column has missing values.
```
%%sql
DROP TABLE IF EXISTS product_3;
CREATE TABLE product_3 AS (
SELECT
product_cd,
category_major_cd,
category_medium_cd,
category_small_cd,
COALESCE(unit_price, unit_med) as unit_price,
COALESCE(unit_cost, cost_med) as unit_cost
FROM
product
CROSS JOIN (
SELECT
ROUND(
PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY unit_price)
) AS unit_med,
ROUND(
PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY unit_cost)
) AS cost_med
FROM
product
) stats_product
)
%%sql
SELECT
SUM(CASE WHEN unit_price IS NULL THEN 1 ELSE 0 END) AS unit_price,
SUM(CASE WHEN unit_cost IS NULL THEN 1 ELSE 0 END) AS unit_cost
FROM product_3 LIMIT 10
```
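The median-imputation pattern can be illustrated in a few lines of Python (toy data; note the rounding caveat in the comment):

```python
import statistics

# Median imputation with rounding, mirroring product_3 above (toy list, not the real table).
# Caveat: Python's round() uses banker's rounding, whereas PostgreSQL's ROUND()
# rounds half away from zero, so halfway values can differ by 1 yen.
prices = [100, None, 200, 300, None]
med = round(statistics.median([p for p in prices if p is not None]))
filled = [p if p is not None else med for p in prices]
```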
---
> S-083: Impute the missing values of unit price (unit_price) and cost (unit_cost) with the median computed per product small category (category_small_cd) and create a new product_4. Round the medians to the nearest yen (fractions below 1 yen rounded half up). After imputation, confirm that no missing values remain in either column.
```
%%sql
DROP TABLE IF EXISTS product_4;
CREATE TABLE product_4 AS (
WITH category_median AS(
SELECT
category_small_cd,
ROUND(
PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY unit_price)
) AS unit_med,
ROUND(
PERCENTILE_CONT(0.5) WITHIN GROUP(ORDER BY unit_cost)
) AS cost_med
FROM product
GROUP BY category_small_cd
)
SELECT
product_cd,
category_major_cd,
category_medium_cd,
category_small_cd,
COALESCE(unit_price, unit_med) as unit_price,
COALESCE(unit_cost, cost_med) as unit_cost
FROM
product
JOIN
category_median
USING(category_small_cd)
)
%%sql
SELECT
SUM(CASE WHEN unit_price IS NULL THEN 1 ELSE 0 END) AS unit_price,
SUM(CASE WHEN unit_cost IS NULL THEN 1 ELSE 0 END) AS unit_cost
FROM product_4 LIMIT 10
```
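The per-category median imputation can be sketched in Python as a group-then-fill pass (toy rows; `cat` stands in for category_small_cd):

```python
import statistics
from collections import defaultdict

# Group prices by category, compute a rounded median per group, then fill nulls.
rows = [
    {"cat": "A", "price": 100},
    {"cat": "A", "price": None},
    {"cat": "A", "price": 300},
    {"cat": "B", "price": 50},
    {"cat": "B", "price": None},
]
by_cat = defaultdict(list)
for r in rows:
    if r["price"] is not None:
        by_cat[r["cat"]].append(r["price"])
medians = {c: round(statistics.median(v)) for c, v in by_cat.items()}
for r in rows:
    if r["price"] is None:
        r["price"] = medians[r["cat"]]
```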
---
> S-084: For every customer in the customer table (customer), compute the ratio of the 2019 sales amount to the all-period sales amount. Treat customers with no sales record as 0. Then extract the customers whose computed ratio is greater than 0. Displaying 10 results is sufficient.
```
%%sql
WITH sales_amount_2019 AS (
SELECT
customer_id,
SUM(amount) AS sum_amount_2019
FROM
receipt
WHERE
20190101 <= sales_ymd AND sales_ymd <= 20191231
GROUP BY
customer_id
),
sales_amount_all AS (
SELECT
customer_id,
SUM(amount) AS sum_amount_all
FROM
receipt
GROUP BY
customer_id
)
SELECT
a.customer_id,
COALESCE(b.sum_amount_2019, 0) AS sales_amount_2019,
COALESCE(c.sum_amount_all, 0) AS sales_amount_all,
CASE COALESCE(c.sum_amount_all, 0)
WHEN 0 THEN 0
ELSE COALESCE(b.sum_amount_2019, 0) * 1.0 / c.sum_amount_all
END AS sales_rate
FROM
customer a
LEFT JOIN
sales_amount_2019 b
ON a.customer_id = b.customer_id
LEFT JOIN
sales_amount_all c
ON a.customer_id = c.customer_id
WHERE CASE COALESCE(c.sum_amount_all, 0)
WHEN 0 THEN 0
ELSE COALESCE(b.sum_amount_2019, 0) * 1.0 / c.sum_amount_all
END > 0
LIMIT 10
```
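The CASE expression guards against division by zero for customers with no sales; in Python the same guard looks like this (illustrative helper, not part of the exercise set):

```python
# Safe ratio with a zero-denominator guard, matching the CASE ... WHEN 0 pattern above.
def sales_rate(amount_2019, amount_all):
    amount_2019 = amount_2019 or 0  # COALESCE(..., 0)
    amount_all = amount_all or 0
    return 0 if amount_all == 0 else amount_2019 / amount_all
```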
---
> S-085: For every customer in the customer table (customer), link the geocoding table (geocode) via postal code (postal_cd) and create a new customer_1. If multiple geocode rows match, use the averages of longitude (longitude) and latitude (latitude).
```
%%sql
DROP TABLE IF EXISTS customer_1;
CREATE TABLE customer_1 AS (
WITH geocode_avg AS(
SELECT
postal_cd,
AVG(longitude) as m_longitude,
AVG(latitude) as m_latitude
FROM
geocode
GROUP BY
postal_cd
)
SELECT
*
FROM
customer c
JOIN
geocode_avg g
USING(postal_cd)
);
%%sql
SELECT * FROM customer_1 LIMIT 3
```
---
> S-086: Join the latitude/longitude-annotated customer table (customer_1) created in the previous question with the store table (store) on the application store code (application_store_cd). Using the application store's latitude (latitude)/longitude (longitude) and the customer's latitude/longitude, compute the distance (km) and display it together with the customer ID (customer_id), customer address (address), and store address (address). The simplified formula below is sufficient, although you may use a library implementing a more precise method. Displaying 10 results is sufficient.
$$
\text{latitude (radians)}: \phi \\
\text{longitude (radians)}: \lambda \\
\text{distance } L = 6371 \cdot \arccos(\sin \phi_1 \cdot \sin \phi_2
+ \cos \phi_1 \cdot \cos \phi_2 \cdot \cos(\lambda_1 - \lambda_2))
$$
```
%%sql
SELECT
c.customer_id,
c.address AS customer_address,
s.address AS store_address,
(
6371 * ACOS(
SIN(RADIANS(c.m_latitude))
* SIN(RADIANS(s.latitude))
+ COS(RADIANS(c.m_latitude))
* COS(RADIANS(s.latitude))
* COS(RADIANS(c.m_longitude) - RADIANS(s.longitude))
)
) AS distance
FROM
customer_1 c
JOIN
store s
ON
c.application_store_cd = s.store_cd
LIMIT 10
```
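As a cross-check, the simplified great-circle formula above is easy to sketch in plain Python (a hedged illustration, not the notebook's own code; the clamp guards against floating-point drift near identical points):

```python
import math

# Great-circle distance via the spherical law of cosines, as in the SQL above.
def distance_km(lat1, lon1, lat2, lon2):
    """Distance in km between two (lat, lon) points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon1) - math.radians(lon2)
    # Clamp the cosine to [-1, 1] so acos never sees a value like 1.0000000000000002
    c = min(1.0, max(-1.0, math.sin(p1) * math.sin(p2)
                     + math.cos(p1) * math.cos(p2) * math.cos(dl)))
    return 6371 * math.acos(c)
```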
---
> S-087: In the customer table (customer), the same customer is registered multiple times due to, for example, applications made at different stores. Treat customers with the same name (customer_name) and postal code (postal_cd) as the same customer, and create a deduplicated customer table (customer_u) with one record per customer. For duplicates, keep the record with the highest total sales amount; for customers with equal totals or no sales record, keep the one with the smaller customer ID (customer_id).
```
%%sql
DROP TABLE IF EXISTS customer_u;
CREATE TABLE customer_u AS (
WITH sales_amount AS(
SELECT
c.customer_id,
c.customer_name,
c.postal_cd,
SUM(r.amount) as sum_amount
FROM
customer c
LEFT JOIN
receipt r
ON c.customer_id = r.customer_id
GROUP by
c.customer_id, c.customer_name, c.postal_cd
),
sales_ranking AS(
SELECT
*,
ROW_NUMBER() OVER(
PARTITION BY customer_name, postal_cd
ORDER BY sum_amount desc, customer_ID ) as rank
FROM sales_amount
)
SELECT c.*
FROM
customer c
JOIN
sales_ranking r
ON
c.customer_id = r.customer_id
and r.rank = 1
)
%%sql
SELECT
cnt,
cnt_u,
cnt - cnt_u AS diff
FROM
(SELECT count(1) as cnt FROM customer) customer
CROSS JOIN (SELECT count(1) as cnt_u FROM customer_u) customer_u
```
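The keep-one-record-per-duplicate-group logic (highest total first, ties broken by the smallest ID) can be sketched without SQL (toy rows; assumes customer IDs of equal width so lexicographic order matches numeric order):

```python
# One record per (name, postal_cd): higher amount wins; on a tie, the smaller ID wins,
# matching ROW_NUMBER() ... ORDER BY sum_amount DESC, customer_id above.
rows = [
    {"customer_id": "C2", "name": "Tanaka", "postal_cd": "100", "amount": 500},
    {"customer_id": "C1", "name": "Tanaka", "postal_cd": "100", "amount": 500},
    {"customer_id": "C3", "name": "Sato", "postal_cd": "200", "amount": 0},
]
best = {}
for r in rows:
    key = (r["name"], r["postal_cd"])
    cur = best.get(key)
    if cur is None or (-r["amount"], r["customer_id"]) < (-cur["amount"], cur["customer_id"]):
        best[key] = r
unique = sorted(best.values(), key=lambda r: r["customer_id"])
```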
---
> S-088: Based on the data created in the previous question, create a table (customer_n) that adds an integrated dedup ID to the customer table. The integrated dedup ID shall be assigned as follows:
>
>- Non-duplicated customers: set their own customer ID (customer_id)
>- Duplicated customers: set the customer ID of the record kept in the previous question
```
%%sql
DROP TABLE IF EXISTS customer_n;
CREATE TABLE customer_n AS (
SELECT
c.*,
u.customer_id as integration_id
FROM
customer c
JOIN
customer_u u
ON c.customer_name = u.customer_name
and c.postal_cd = u.postal_cd
)
%%sql
SELECT count(1) FROM customer_n
WHERE customer_id != integration_id
```
---
> S-089: For customers with sales records, we want to split the data into training and test sets in order to build a predictive model. Split the data randomly at a ratio of 8:2.
```
%%sql
SELECT SETSEED(0.1);
CREATE TEMP TABLE IF NOT EXISTS sales_record_customer_id AS (
SELECT customer_id ,ROW_NUMBER()OVER(ORDER BY RANDOM()) AS row
FROM customer
LEFT JOIN receipt USING(customer_id)
GROUP BY customer_id
HAVING SUM(amount) IS NOT NULL
);
DROP TABLE IF EXISTS customer_train;
CREATE TABLE customer_train AS
SELECT customer.*
FROM sales_record_customer_id
LEFT JOIN customer USING(customer_id)
WHERE sales_record_customer_id.row < (SELECT
COUNT(0)
FROM sales_record_customer_id) *0.8
;
DROP TABLE IF EXISTS customer_test;
CREATE TABLE customer_test AS
SELECT customer.*
FROM sales_record_customer_id
LEFT JOIN customer USING(customer_id)
EXCEPT
SELECT * from customer_train
;
```
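The 8:2 random split can be sketched in Python with a seeded shuffle (hypothetical customer IDs stand in for the real ones):

```python
import random

# Seeded 8:2 train/test split, mirroring SETSEED + ROW_NUMBER() OVER(ORDER BY RANDOM()).
ids = [f"CS{i:03d}" for i in range(100)]
rng = random.Random(0)
shuffled = ids[:]
rng.shuffle(shuffled)
cut = int(len(shuffled) * 0.8)
train, test = shuffled[:cut], shuffled[cut:]
```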
---
> S-090: The receipt details table (receipt) holds data from January 1, 2017 through October 31, 2019. Aggregate the sales amount (amount) by month and create a set of three tables of model-building data, each with 12 months for training and 6 months for testing. The data layout is up to you.
```
%%sql
-- This is not a natural fit for SQL, so the query is written somewhat forcibly (impractical when the number of splits grows, since the SQL becomes long)
-- Also note that fine-grained, long-running time series (e.g. per-second data) produce huge volumes (for such cases a language that can train models in a loop is preferable)
-- Attach a flag distinguishing training data (0) from test data (1)
DROP TABLE IF EXISTS sales_amount ;
CREATE TABLE sales_amount AS (
SELECT
SUBSTR(CAST(sales_ymd AS VARCHAR), 1, 6) AS sales_ym,
SUM(amount) AS sum_amount,
row_number() OVER(PARTITION BY NULL ORDER BY
SUBSTR(CAST(sales_ymd AS VARCHAR), 1, 6)) AS rn
FROM
receipt
GROUP BY sales_ym
);
-- SQL has its limits here, but make the query as reusable as possible as the number of datasets grows
-- The LAG function in the WITH clause is written so it can be reused by changing the lag period
DROP TABLE IF EXISTS series_data_1 ;
CREATE TABLE series_data_1 AS (
WITH lag_amount AS (
SELECT sales_ym, sum_amount, LAG(rn, 0) OVER (ORDER BY rn) AS rn
FROM sales_amount
)
SELECT
sales_ym, sum_amount,
CASE WHEN rn <= 12 THEN 0 WHEN 12 < rn THEN 1 END as test_flg
FROM lag_amount
WHERE rn <= 18);
DROP TABLE IF EXISTS series_data_2 ;
CREATE TABLE series_data_2 AS (
WITH lag_amount AS (
SELECT
sales_ym,
sum_amount,
LAG(rn, 6) OVER (ORDER BY rn) AS rn
FROM sales_amount
)
SELECT
sales_ym,
sum_amount,
CASE WHEN rn <= 12 THEN 0 WHEN 12 < rn THEN 1 END as test_flg
FROM lag_amount WHERE rn <= 18);
DROP TABLE IF EXISTS series_data_3 ;
CREATE TABLE series_data_3 AS (
WITH lag_amount AS (
SELECT sales_ym, sum_amount, LAG(rn, 12) OVER (ORDER BY rn) AS rn
FROM sales_amount
)
SELECT
sales_ym,
sum_amount,
CASE WHEN rn <= 12 THEN 0 WHEN 12 < rn THEN 1 END as test_flg
FROM lag_amount WHERE rn <= 18);
%%sql
SELECT * FROM series_data_1
```
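The three train/test windows built above follow a simple sliding-window pattern; here is a Python sketch of the same idea (toy month sequence covering 2017-01 through 2019-10):

```python
# Sliding-window splits: 12 training months + 6 test months, offset by 6 each time,
# mirroring series_data_1..3 above.
months = [f"2017{m:02d}" for m in range(1, 13)] + \
         [f"2018{m:02d}" for m in range(1, 13)] + \
         [f"2019{m:02d}" for m in range(1, 11)]

def window(months, offset, train_len=12, test_len=6):
    start = offset
    return (months[start:start + train_len],
            months[start + train_len:start + train_len + test_len])

splits = [window(months, off) for off in (0, 6, 12)]
```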
---
> S-091: From the customer table (customer), undersample so that the number of customers with sales records and the number of customers without sales records is 1:1.
```
%%sql
SELECT SETSEED(0.1);
WITH pre_table_1 AS(
SELECT
c.*
,COALESCE(r.amount,0) AS r_amount
FROM
customer c
LEFT JOIN
receipt r
ON
c.customer_id=r.customer_id
)
,pre_table_2 AS(
SELECT
customer_id
,CASE WHEN SUM(r_amount)>0 THEN 1 ELSE 0 END AS is_buy_flag
,CASE WHEN SUM(r_amount)=0 THEN 1 ELSE 0 END AS is_not_buy_flag
FROM
pre_table_1
GROUP BY
customer_id
)
,pre_table_3 AS(
SELECT
*
,ROW_NUMBER() OVER(PARTITION BY is_buy_flag ORDER BY RANDOM())
FROM
pre_table_2
CROSS JOIN
(SELECT SUM(is_buy_flag) AS buying FROM pre_table_2) AS t1
CROSS JOIN
(SELECT SUM(is_not_buy_flag) AS not_buying FROM pre_table_2) AS t2
)
,pre_table_4 AS(
SELECT
*
FROM
pre_table_3
WHERE
row_number<=buying
AND
row_number<=not_buying
)
SELECT COUNT(*) FROM pre_table_4 GROUP BY is_buy_flag;
```
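Undersampling to a 1:1 class ratio boils down to sampling min(n_buyers, n_non_buyers) records from each class; a seeded Python sketch (toy flags, not the real customer table):

```python
import random

# Undersample the majority class so buyers and non-buyers end up 1:1.
flags = [1] * 80 + [0] * 20          # 80 buyers, 20 non-buyers (hypothetical)
ids = list(range(len(flags)))
buyers = [i for i, f in zip(ids, flags) if f == 1]
non_buyers = [i for i, f in zip(ids, flags) if f == 0]
n = min(len(buyers), len(non_buyers))
rng = random.Random(0)
sample = rng.sample(buyers, n) + rng.sample(non_buyers, n)
```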
---
> S-092: In the customer table (customer), gender information is held in a denormalized state. Normalize it to third normal form.
```
%%sql
DROP TABLE IF EXISTS customer_std;
CREATE TABLE customer_std AS (
SELECT
customer_id,
customer_name,
gender_cd,
birth_day,
age,
postal_cd,
application_store_cd,
application_date,
status_cd
FROM
customer
);
DROP TABLE IF EXISTS gender_std;
CREATE TABLE gender_std AS (
SELECT distinct
gender_cd, gender
FROM
customer
)
%%sql
SELECT * FROM gender_std
```
---
> S-093: The product table (product) holds only the code values for each category, not the category names. Denormalize it by combining it with the category table (category), and create a new product table that carries the category names.
```
%%sql
DROP TABLE IF EXISTS product_full;
CREATE TABLE product_full AS (
SELECT
p.product_cd,
p.category_major_cd,
c.category_major_name,
p.category_medium_cd,
c.category_medium_name,
p.category_small_cd,
c.category_small_name,
p.unit_price,
p.unit_cost
FROM
product p
JOIN
category c
USING(category_small_cd)
)
%%sql
SELECT * FROM product_full LIMIT 10
```
---
> S-094: Export the category-name-annotated product data created earlier to a file with the following specifications. Specifying '/tmp/data' as the output path makes it shared with Jupyter's '/work/data'. The privilege for the COPY command has already been granted.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: UTF-8
```
%%sql
COPY product_full TO '/tmp/data/S_product_full_UTF-8_header.csv'
WITH CSV HEADER encoding 'UTF-8'
```
---
> S-095: Export the category-name-annotated product data created earlier to a file with the following specifications. Specifying '/tmp/data' as the output path makes it shared with Jupyter's '/work/data'. The privilege for the COPY command has already been granted.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: SJIS
```
%%sql
COPY product_full TO '/tmp/data/S_product_full_SJIS_header.csv'
WITH CSV HEADER encoding 'SJIS'
```
---
> S-096: Export the category-name-annotated product data created earlier to a file with the following specifications. Specifying '/tmp/data' as the output path makes it shared with Jupyter's '/work/data'. The privilege for the COPY command has already been granted.
>
> - File format: CSV (comma-separated)
> - Without header
> - Character encoding: UTF-8
```
%%sql
COPY product_full TO '/tmp/data/S_product_full_UTF-8_noh.csv'
WITH CSV encoding 'UTF-8'
```
---
> S-097: Read the file created earlier in the following format and create a table. Also display the first 3 records and confirm that the data has been loaded correctly.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: UTF-8
```
%%sql
DROP TABLE IF EXISTS product_full;
CREATE TABLE product_full (
product_cd VARCHAR(10),
category_major_cd VARCHAR(2),
category_major_name VARCHAR(20),
category_medium_cd VARCHAR(4),
category_medium_name VARCHAR(20),
category_small_cd VARCHAR(6),
category_small_name VARCHAR(20),
unit_price INTEGER,
unit_cost INTEGER
);
%%sql
COPY product_full FROM '/tmp/data/S_product_full_UTF-8_header.csv'
WITH CSV HEADER encoding 'UTF-8'
%%sql
SELECT * FROM product_full LIMIT 3
```
---
> S-098: Read the file created earlier in the following format and create a table. Also display the first 3 records and confirm that the data has been loaded correctly.
>
> - File format: CSV (comma-separated)
> - Without header
> - Character encoding: UTF-8
```
%%sql
DROP TABLE IF EXISTS product_full;
CREATE TABLE product_full (
product_cd VARCHAR(10),
category_major_cd VARCHAR(2),
category_major_name VARCHAR(20),
category_medium_cd VARCHAR(4),
category_medium_name VARCHAR(20),
category_small_cd VARCHAR(6),
category_small_name VARCHAR(20),
unit_price INTEGER,
unit_cost INTEGER
);
%%sql
COPY product_full FROM '/tmp/data/S_product_full_UTF-8_noh.csv'
WITH CSV encoding 'UTF-8'
%%sql
SELECT * FROM product_full LIMIT 3
```
---
> S-099: Export the category-name-annotated product data created earlier to a file with the following specifications. Specifying '/tmp/data' as the output path makes it shared with Jupyter's '/work/data'. The privilege for the COPY command has already been granted.
>
> - File format: TSV (tab-separated)
> - With header
> - Character encoding: UTF-8
```
%%sql
COPY product_full TO '/tmp/data/S_product_full_UTF-8_header.tsv'
WITH CSV HEADER DELIMITER E'\t' encoding 'UTF-8'
```
---
> S-100: Read the file created earlier in the following format and create a table. Also display the first 10 records and confirm that the data has been loaded correctly.
>
> - File format: TSV (tab-separated)
> - With header
> - Character encoding: UTF-8
```
%%sql
DROP TABLE IF EXISTS product_full;
CREATE TABLE product_full (
product_cd VARCHAR(10),
category_major_cd VARCHAR(2),
category_major_name VARCHAR(20),
category_medium_cd VARCHAR(4),
category_medium_name VARCHAR(20),
category_small_cd VARCHAR(6),
category_small_name VARCHAR(20),
unit_price INTEGER,
unit_cost INTEGER
);
%%sql
COPY product_full FROM '/tmp/data/S_product_full_UTF-8_header.tsv'
WITH CSV HEADER DELIMITER E'\t' encoding 'UTF-8'
%%sql
SELECT * FROM product_full LIMIT 10
```
# That's all 100 exercises. Well done!
# The basics
Here we'll discuss how to instantiate spherical harmonic maps, manipulate them, plot them, and compute simple phase curves and occultation light curves.
```
%matplotlib inline
%run notebook_setup.py
import starry
import matplotlib.pyplot as plt
import numpy as np
starry.config.lazy = False
starry.config.quiet = True
```
## Introduction
Surface maps in ``starry`` are described by a vector of spherical harmonic coefficients. Just like polynomials on the real number line, spherical harmonics form a complete basis on the surface of the sphere. **Any** surface map can be expressed as a linear combination of spherical harmonics, provided one goes to sufficiently high degree in the expansion.
In ``starry``, the surface map is described by the vector **y**, which is indexed by increasing degree $l$ and order $m$:
$y = \{Y_{0,0}, \, Y_{1,-1}, \, Y_{1,0}, \, Y_{1,1}, \, Y_{2,-2}, \, Y_{2,-1}, \, Y_{2,0}, \, Y_{2,1}, \, Y_{2,2}, \, ...\}$.
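As a quick aid, the position of $Y_{l,m}$ in this flattened vector follows from summing the $2l+1$ harmonics of every lower degree; a small sketch of the index formula implied by the ordering above:

```python
# Flattened index of Y_{l,m} in the coefficient vector: n = l**2 + l + m
# (sum of 2k+1 over k < l gives l**2, then m - (-l) = l + m offsets within the degree).
def ylm_index(l, m):
    return l * l + l + m
```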
For reference, here's what the first several spherical harmonic degrees look like:
```
ydeg = 5
fig, ax = plt.subplots(ydeg + 1, 2 * ydeg + 1, figsize=(12, 6))
fig.subplots_adjust(hspace=0)
for axis in ax.flatten():
axis.set_xticks([])
axis.set_yticks([])
axis.spines["top"].set_visible(False)
axis.spines["right"].set_visible(False)
axis.spines["bottom"].set_visible(False)
axis.spines["left"].set_visible(False)
for l in range(ydeg + 1):
ax[l, 0].set_ylabel(
"l = %d" % l,
rotation="horizontal",
labelpad=20,
y=0.38,
fontsize=10,
alpha=0.5,
)
for j, m in enumerate(range(-ydeg, ydeg + 1)):
ax[-1, j].set_xlabel("m = %d" % m, labelpad=10, fontsize=10, alpha=0.5)
# Loop over the orders and degrees
map = starry.Map(ydeg=ydeg)
for i, l in enumerate(range(ydeg + 1)):
for j, m in enumerate(range(-l, l + 1)):
# Offset the index for centered plotting
j += ydeg - l
# Compute the spherical harmonic
# with no rotation
map.reset()
if l > 0:
map[l, m] = 1
# Plot the spherical harmonic
ax[i, j].imshow(
map.render(),
cmap="plasma",
interpolation="none",
origin="lower",
extent=(-1, 1, -1, 1),
)
ax[i, j].set_xlim(-1.1, 1.1)
ax[i, j].set_ylim(-1.1, 1.1)
```
Each row corresponds to a different degree $l$, starting at $l = 0$. Within each row, the harmonics extend from order $m = -l$ to order $m = l$.
As an example, suppose we have the following map vector:
```
y = [1.00, 0.22, 0.19, 0.11, 0.11, 0.07, -0.11, 0.00, -0.05,
0.12, 0.16, -0.05, 0.06, 0.12, 0.05, -0.10, 0.04, -0.02,
0.01, 0.10, 0.08, 0.15, 0.13, -0.11, -0.07, -0.14, 0.06,
-0.19, -0.02, 0.07, -0.02, 0.07, -0.01, -0.07, 0.04, 0.00]
```
This is how much each spherical harmonic is contributing to the sum:
```
ydeg = 5
fig, ax = plt.subplots(ydeg + 1, 2 * ydeg + 1, figsize=(12, 6))
fig.subplots_adjust(hspace=0)
for axis in ax.flatten():
axis.set_xticks([])
axis.set_yticks([])
axis.spines["top"].set_visible(False)
axis.spines["right"].set_visible(False)
axis.spines["bottom"].set_visible(False)
axis.spines["left"].set_visible(False)
for l in range(ydeg + 1):
ax[l, 0].set_ylabel(
"l = %d" % l,
rotation="horizontal",
labelpad=20,
y=0.38,
fontsize=10,
alpha=0.5,
)
for j, m in enumerate(range(-ydeg, ydeg + 1)):
ax[-1, j].set_xlabel("m = %d" % m, labelpad=10, fontsize=10, alpha=0.5)
# Loop over the orders and degrees
map = starry.Map(ydeg=ydeg)
map.load("earth")
y = np.abs(np.array(map.y))
y[1:] /= np.max(y[1:])
n = 0
for i, l in enumerate(range(ydeg + 1)):
for j, m in enumerate(range(-l, l + 1)):
# Offset the index for centered plotting
j += ydeg - l
# Compute the spherical harmonic
# with no rotation
map.reset()
if l > 0:
map[l, m] = 1
# Plot the spherical harmonic
ax[i, j].imshow(
map.render(),
cmap="plasma",
interpolation="none",
origin="lower",
extent=(-1, 1, -1, 1),
alpha=y[n],
)
ax[i, j].set_xlim(-1.1, 1.1)
ax[i, j].set_ylim(-1.1, 1.1)
n += 1
```
If we add up all of the terms, we get the following image:
```
map = starry.Map(ydeg=ydeg, quiet=True)
map.load("earth")
fig, ax = plt.subplots(1, figsize=(3, 3))
ax.imshow(map.render(), origin="lower", cmap="plasma")
ax.axis("off");
```
which is the $l = 5$ spherical harmonic expansion of a map of the Earth! South America is to the left and Africa is toward the top right. It might still be hard to see, so here's what we would get if we carried the expansion up to degree $l = 20$:
```
map = starry.Map(ydeg=20)
map.load("earth", sigma=0.08)
fig, ax = plt.subplots(1, figsize=(3, 3))
ax.imshow(map.render(), origin="lower", cmap="plasma")
ax.axis("off");
```
## Using `starry`
OK, now that we've introduced the spherical harmonics, let's look at how we can use `starry` to model some celestial bodies.
The first thing we should do is import `starry` and instantiate a `Map` object. This is the simplest way of creating a spherical harmonic map. The `Map` object takes a few arguments, the most important of which is `ydeg`, the highest degree of the spherical harmonics used to describe the map. Let's create a fifth-degree map:
```
import starry
starry.config.lazy = False
map = starry.Map(ydeg=5)
```
(We're disabling ``lazy`` evaluation in this notebook; see [here](LazyGreedy.ipynb) for more details.) The ``y`` attribute of the map stores the spherical harmonic coefficients. We can see that our map is initialized to a constant map:
```
map.y
```
The $Y_{0,0}$ coefficient is always fixed at unity, and by default all other coefficients are set to zero. Our map is therefore just the first spherical harmonic, which if you scroll up you'll see is that constant dark blue disk at the top of the first figure. We can also quickly visualize the map by calling the `show` method:
```
map.show()
```
Not that interesting! But before we give this map some features, let's briefly discuss how we would *evaluate* our map. This means computing the intensity at a latitude/longitude point on the surface. Let's investigate the intensity at the center (``lat = lon = 0``) of the map:
```
map.intensity(lat=0, lon=0)
```
Since our map is constant, this is the intensity everywhere on the surface. It may seem like a strange number, but perhaps it will make sense if we compute the total *flux* (intensity integrated over area) of the map. Since the map is constant, and since the body we're modeling has unit radius by default, the total flux visible to the observer is just...
```
import numpy as np
np.pi * 1.0 ** 2 * map.intensity(lat=0, lon=0)
```
So the total flux visible from the map is unity. **This is how maps in** `starry` **are normalized:** the average disk-integrated intensity is equal to the coefficient of the constant $Y_{0,0}$ harmonic, which is fixed at unity. We're going to discuss in detail how to compute fluxes below, but here's a sneak peek:
```
map.flux()
```
Given zero arguments, the `flux` method of the map returns the total visible flux from the map, which as we showed above, is just unity.
## Setting map coefficients
Okay, onto more interesting things. Setting spherical harmonic coefficients is extremely easy: we can assign values directly to the map instance itself. Say we wish to set the coefficient of the spherical harmonic $Y_{5, -3}$ to $-2$. We simply run
```
map[5, -3] = -2
```
We can check that the spherical harmonic vector (which is a flattened version of the image we showed above) has been updated accordingly:
```
map.y
```
And here's what our map now looks like:
```
map.show()
```
Just for fun, let's set two additional coefficients:
```
map[5, 0] = 2
map[5, 4] = 1
map.show()
```
Kind of looks like a smiley face!
**Pro tip:** *To turn your smiley face into a Teenage Mutant Ninja Turtle, simply edit the* $Y_{5,2}$ *coefficient:*
```
map[5, 2] = 1.5
map.show()
```
It's probably useful to play around with setting coefficients and plotting the resulting map to get a feel for how the spherical harmonics work.
Two quick notes on visualizing maps: first, you can animate them by passing a vector ``theta`` argument to ``show()``; this is just the rotational phase at which the map is viewed. By default, angles in ``starry`` are in degrees (this can be changed by setting ``map.angle_unit``).
```
theta = np.linspace(0, 360, 50)
map.show(theta=theta)
```
Second, we can easily get an equirectangular (latitude-longitude) global view of the map as follows:
```
map.show(projection="rect")
```
## Loading map images
In addition to directly specifying the spherical harmonic coefficients of a map, users can "load" images into ``starry`` via the ``load()`` method, which computes the spherical harmonic expansion of whatever image/array is provided to it. Users can pass paths to image files, numpy arrays on a rectangular latitude/longitude grid, or Healpix maps. ``starry`` comes with a few built-in maps to play around with:
```
import os
import glob
for file in glob.glob(os.path.join(os.path.dirname(starry.__file__), "img", "*.jpg")):
print(os.path.basename(file)[:-4])
```
Let's load the ``earth`` map and see what we get:
```
map = starry.Map(ydeg=20)
map.load("earth", sigma=0.08)
map.show()
map.show(projection="rect")
```
## Changing the orientation
We can change the orientation of the map by specifying its inclination `inc` and obliquity `obl`. Note that these are properties of the *observer*. Changing these values changes the vantage point from which we see the map; it does not change the internal spherical harmonic representation of the map. Rather, map coefficients are defined in a static, invariant frame and the map can be observed from different vantage points by changing these angles. (This is different from the convention in version `0.3.0` of the code; see the tutorial on **Map Orientation** for more information).
The obliquity is measured as the rotation angle of the object on the sky plane, measured counter-clockwise from north. The inclination is measured as the rotation of the object away from the line of sight. Let's set the inclination and obliquity of the Earth as an example:
```
map.obl = 23.5
map.inc = 60.0
map.show()
```
## Computing the intensity
We already hinted at how to compute the intensity at a point on the surface of the map: just use the ``intensity()`` method. This method takes the latitude and longitude of a point or a set of points on the surface and returns the specific intensity at each one.
As an example, let's plot the intensity of the Earth along the equator:
```
lon = np.linspace(-180, 180, 1000)
I = map.intensity(lat=0, lon=lon)
fig = plt.figure(figsize=(12, 5))
plt.plot(lon, I)
plt.xlabel("Longitude [degrees]")
plt.ylabel("Intensity");
```
We can easily identify the Pacific (dark), South America (bright), the Atlantic (dark), Africa (bright), the Indian Ocean (dark), and Australia (bright).
## Computing the flux: phase curves
The ``starry`` code is all about modeling light curves, so let's generate some. We'll talk about phase curves first, in which the observed flux is simply the integral over the entire disk when the object is viewed at a particular phase. Flux computations are done via the ``flux()`` method, and the phase is specified via the ``theta`` keyword:
```
theta = np.linspace(0, 360, 1000)
plt.figure(figsize=(12, 5))
plt.plot(theta, map.flux(theta=theta))
plt.xlabel("Rotational phase [degrees]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20);
```
Note that this phase curve corresponds to rotation about the axis of the map, which is inclined and rotated as we specified above. We are therefore computing the disk-integrated intensity at each frame of the following animation:
```
map.show(theta=np.linspace(0, 360, 50))
```
Changing the orientation of the map will change the phase curve we compute. Here's the phase curve of the Earth at different values of the inclination:
```
plt.figure(figsize=(12, 5))
for inc in [30, 45, 60, 75, 90]:
map.inc = inc
plt.plot(theta, map.flux(theta=theta), label="%2d deg" % inc)
plt.legend(fontsize=10)
plt.xlabel("Rotational phase [degrees]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20);
```
Trivially, changing the obliquity does not affect the phase curve:
```
plt.figure(figsize=(12, 5))
for obl in [30, 45, 60, 75, 90]:
map.obl = obl
plt.plot(theta, map.flux(theta=theta), label="%2d deg" % obl)
plt.legend(fontsize=10)
plt.xlabel("Rotational phase [degrees]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20);
```
## Computing the flux: transits and occultations
The spherical harmonic formalism in `starry` makes it easy to compute occultation light curves, since all the integrals are analytic! If we peek at the docstring for the `flux` method, we'll see that it takes four parameters in addition to the rotational phase `theta`:
```
print(map.flux.__doc__)
```
We can pass in the Cartesian position of an occultor (`xo`, `yo`, `zo`) and its radius, all in units of the occulted body's radius. Let's use this to construct a light curve of the moon occulting the Earth:
```
# Set the occultor trajectory
npts = 1000
time = np.linspace(0, 1, npts)
xo = np.linspace(-2.0, 2.0, npts)
yo = np.linspace(-0.3, 0.3, npts)
zo = 1.0
ro = 0.272
# Load the map of the Earth
map = starry.Map(ydeg=20)
map.load("earth", sigma=0.08)
# Compute and plot the light curve
plt.figure(figsize=(12, 5))
flux_moon = map.flux(xo=xo, yo=yo, ro=ro, zo=zo)
plt.plot(time, flux_moon)
plt.xlabel("Time [arbitrary]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20);
```
For reference, here is the trajectory of the occultor:
```
fig, ax = plt.subplots(1, figsize=(5, 5))
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.axis("off")
ax.imshow(map.render(), origin="lower", cmap="plasma", extent=(-1, 1, -1, 1))
for n in list(range(0, npts, npts // 10)) + [npts - 1]:
circ = plt.Circle(
(xo[n], yo[n]), radius=ro, color="k", fill=True, clip_on=False, alpha=0.5
)
ax.add_patch(circ)
```
The two dips are due to occultations of South America and Africa; the bump in the middle of the transit is the moon crossing over the dark waters of the Atlantic!
## Computing the flux: limb-darkening
There's a separate tutorial on limb darkening, so we'll just mention it briefly here. It's super easy to add limb darkening to maps in `starry`. The most common reason for doing this is for modeling transits of planets across stars. To enable limb darkening, set the `udeg` parameter to the degree of the limb darkening model when instantiating a map. For quadratic limb darkening, we would do the following:
```
map = starry.Map(udeg=2)
```
Setting the limb darkening coefficients is similar to setting the spherical harmonic coefficients, except only a single index is used. For instance:
```
map[1] = 0.5
map[2] = 0.25
```
This sets the linear limb darkening coefficient to be $u_1 = 0.5$ and the quadratic limb darkening coefficient to be $u_2 = 0.25$ (the zeroth order coefficient, `map[0]`, is determined by the normalization and cannot be set). Let's look at the map:
```
map.show()
```
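The two coefficients set above enter the standard quadratic limb-darkening law; here is a hedged sketch of the profile (up to starry's overall normalization, which this ignores):

```python
# Quadratic limb-darkening law: I(mu) = 1 - u1*(1 - mu) - u2*(1 - mu)**2,
# where mu is the cosine of the angle between the line of sight and the
# local surface normal (mu = 1 at disk center, mu = 0 at the limb).
def quad_ld(mu, u1=0.5, u2=0.25):
    return 1 - u1 * (1 - mu) - u2 * (1 - mu) ** 2
```

With $u_1 = 0.5$ and $u_2 = 0.25$, the limb is a quarter as bright as the disk center, which matches the darkened edge in the rendered map.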
The effect of limb darkening is clear! Let's plot a transit across this object:
```
# Set the occultor trajectory
npts = 1000
time = np.linspace(0, 1, npts)
xo = np.linspace(-2.0, 2.0, npts)
yo = np.linspace(-0.3, 0.3, npts)
zo = 1.0
ro = 0.272
# Compute and plot the light curve
plt.figure(figsize=(12, 5))
plt.plot(time, map.flux(xo=xo, yo=yo, ro=ro, zo=zo))
plt.xlabel("Time [arbitrary]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20);
```
That's it! Note that `starry` also allows the user to mix spherical harmonics and limb darkening, so you may set both the `ydeg` and `udeg` parameters simultaneously. Let's look at a limb-darkened version of the Earth map, just for fun:
```
map = starry.Map(ydeg=20, udeg=2)
map.load("earth", sigma=0.08)
map[1] = 0.5
map[2] = 0.25
map.show()
```
Notice how the limb is now darker! Let's compute the transit light curve of the moon as before and compare it to the non-limb-darkened version:
```
# Set the occultor trajectory
npts = 1000
time = np.linspace(0, 1, npts)
xo = np.linspace(-2.0, 2.0, npts)
yo = np.linspace(-0.3, 0.3, npts)
zo = 1.0
ro = 0.272
# Set the map inclination and obliquity
map.inc = 90
map.obl = 0
# Compute and plot the light curve
plt.figure(figsize=(12, 5))
plt.plot(time, flux_moon, label="Limb darkening off")
plt.plot(time, map.flux(xo=xo, yo=yo, ro=ro, zo=zo), label="Limb darkening on")
plt.xlabel("Time [arbitrary]", fontsize=20)
plt.ylabel("Flux [normalized]", fontsize=20)
plt.legend();
```
A few things are different:
1. The normalization changed! The limb-darkened map is slightly brighter when viewed from this orientation. In `starry`, limb darkening conserves the total luminosity, so there will be other orientations at which the Earth will look *dimmer*;
2. The relative depths of the two dips change, since South America and Africa receive different weightings;
3. The limb-darkened light curve is slightly *smoother*.
That's it for this introductory tutorial. There's a LOT more you can do with `starry`, including incorporating it into `exoplanet` to model full planetary systems, computing multi-wavelength light curves, modeling the Rossiter-McLaughlin effect, doing fast probabilistic inference, etc.
Make sure to check out the other examples in this directory.
# Syntactic analysis via [deplacy](https://koichiyasuoka.github.io/deplacy/)
## With [Stanza](https://stanfordnlp.github.io/stanza)
```
!pip install deplacy stanza
import stanza
stanza.download("et")
nlp=stanza.Pipeline("et")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [spacy-udpipe](https://github.com/TakeLab/spacy-udpipe)
```
!pip install deplacy spacy-udpipe
import spacy_udpipe
spacy_udpipe.download("et")
nlp=spacy_udpipe.load("et")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [EstMalt](https://github.com/EstSyntax/EstMalt/)
```
!test -f estudmodel4 || curl -LO https://raw.githubusercontent.com/EstSyntax/EstMalt/master/EstUDModel/estudmodel4
!pip install deplacy ufal.udpipe
import ufal.udpipe
model=ufal.udpipe.Model.load("estudmodel4")
nlp=ufal.udpipe.Pipeline(model,"tokenize","","","").process
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [COMBO-pytorch](https://gitlab.clarin-pl.eu/syntactic-tools/combo)
```
!pip install --index-url https://pypi.clarin-pl.eu/simple deplacy combo
import combo.predict
nlp=combo.predict.COMBO.from_pretrained("estonian-ud27")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [spaCy-COMBO](https://github.com/KoichiYasuoka/spaCy-COMBO)
```
!pip install deplacy spacy_combo
import spacy_combo
nlp=spacy_combo.load("et_edt")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [spaCy-jPTDP](https://github.com/KoichiYasuoka/spaCy-jPTDP)
```
!pip install deplacy spacy_jptdp
import spacy_jptdp
nlp=spacy_jptdp.load("et_edt")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## With [UDPipe 2](http://ufal.mff.cuni.cz/udpipe/2)
```
!pip install deplacy
def nlp(t):
import urllib.request,urllib.parse,json
with urllib.request.urlopen("https://lindat.mff.cuni.cz/services/udpipe/api/process?model=et&tokenizer&tagger&parser&data="+urllib.parse.quote(t)) as r:
return json.loads(r.read())["result"]
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## [Trankit](https://github.com/nlp-uoregon/trankit)-ga
```
!pip install deplacy trankit transformers
import trankit
nlp=trankit.Pipeline("estonian")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
## [Camphr-Udify](https://camphr.readthedocs.io/en/latest/notes/udify.html)-ga
```
!pip install deplacy camphr en-udify@https://github.com/PKSHATechnology-Research/camphr_models/releases/download/0.7.0/en_udify-0.7.tar.gz
import pkg_resources,imp
imp.reload(pkg_resources)
import spacy
nlp=spacy.load("en_udify")
doc=nlp("Suuga teeb suure linna, käega ei tee kärbse pesagi.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
| github_jupyter |
# BloodPressure example
This example assumes that PyShEx has been installed in jupyter environment
```
from pyshex import ShExEvaluator
from rdflib import Namespace
shex = """
BASE <http://example.org/ex/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX ex: <http://ex.example/#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX : <http://hl7.org/fhir/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
start = @<BloodPressureMeasurementShape>
<PatientShape> { # A Patient has:
:name xsd:string*; # one or more names
:birthdate xsd:date? ; # and an optional birthdate.
}
<BloodPressureMeasurementShape> {
rdfs:label xsd:string ;
:subject @<PatientShape> ;
:hasmeasurementDate @<BPDateShape> ;
:valueSBP @<SBPvalueShape> ;
:valueDBP @<DBPvalueShape> ;
:valueABP @<ABPvalueShape>? ;
(:hasMethod @<BPMeasurementInvasiveMethodShape> |
:hasMethod @<BPMeasurementNoninvasiveMethodShape> ) ;
:hasLocation @<BPMeasurementLocationShape>? ;
:hasType @<DEPShape>? ;
:isAffectedBy @<BodyPositionShape>?
}
<SBPvalueShape> {
:valueS xsd:integer;
}
<DBPvalueShape> {
:valueD xsd:integer;
}
<ABPvalueShape> {
:valueA xsd:integer;
}
<BPMeasurementMethodShape> {
:method [<invasive> <non-invasive>];
}
<BPMeasurementInvasiveMethodShape> {
:method [<invasive>];
}
<BPMeasurementNoninvasiveMethodShape> {
:method [<non-invasive>];
}
<BPDateShape> {
:date xsd:date;
}
<BPMeasurementLocationShape> {
:location [<arm> <leg> <ankle>];
}
<DEPShape> {
:type [<typeIV> <typeV>];
}
<BodyPositionShape> {
:position [<sittingposition> <recumbentbodyposition> <orthostaticbodyposition> <positionwithtilt> <trendelenburgposition>];
}
"""
rdf = """
BASE <http://example.org/ex/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX ex: <http://ex.example/#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX : <http://hl7.org/fhir/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
<Patient2>
:name "Bob" ;
:birthdate "1999-12-31"^^xsd:date ;
:has :BloodPressureMeasurementShape .
<BPDate1>
:date "2010-12-31"^^xsd:date.
<SBP1>
:valueS 140 .
<DBP1>
:valueD 90 .
<ABP1>
:valueA 97 .
<BPMMethod1>
:method <non-invasive> .
<BPMLocation1>
:location <arm> .
<BodyPosition1>
:position <sittingposition> .
<DEP1>
:type <typeIV>.
<BPM1>
a :BloodPressureMeasurementShape ;
rdfs:label "First BP measurement" ;
:subject <Patient2> ;
:hasmeasurementDate <BPDate1> ;
:valueSBP <SBP1> ;
:valueDBP <DBP1> ;
:valueABP <ABP1> ;
:method <BPMMethod1> ;
:location <BPMLocation1> ;
:type <DEP1> ;
:position <BodyPosition1> .
"""
results = ShExEvaluator().evaluate(rdf, shex, focus="http://example.org/ex/BPM1")
for r in results:
if r.result:
print("PASS")
else:
print(f"FAIL:\n {r.reason}")
```
| github_jupyter |
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
def runTest(self):
# adding this function to make the tests run
pass
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in you myanswers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
The model doesn't predict the data well, the validation error is a lot larger than the training errror. However, both seem to be going down with more epochs, but validation error doesn't go down as much as training error. This means that the training data set is probably not a good representation of the validation data set, or that we need more training data. Also, the model is not able to capture the nonlinearities well enough, so it probably needs to be more complex.
| github_jupyter |
# Вычислительный практикум
# Задание №2
### Итерационные методы (простой итерации, Зейделя, верхней релаксации) решения СЛАУ.
## Ковальчуков Александр
### 321 группа
### Вариант №6
```
import numpy as np
```
# Параметры задачи
```
A = np.array([[ 9.016024, 1.082197, -2.783575],
[ 1.082197, 6.846595, 0.647647],
[-2.783575, 0.647647, 5.432541]])
b = np.array([[-1.340873],[4.179164],[5.478007]])
n, m = A.shape
```
# Схема Гаусса
```
def solve_Gauss(A, b):
A = np.hstack((A, b))
x = np.zeros((3, 1))
# Прямой ход
for k in range(n):
if abs(A[k, k]) < epsilon:
raise ZeroDivisionError
A[k, :] /= A[k, k]
for i in range(k + 1, n):
A[i, :] -= A[k, :] * A[i, k]
# Обратный ход
for i in range(0, n)[::-1]:
x[i] = A[i, n] - np.dot(A[i, :-1], x)
return x
x_g = solve_Gauss(A.copy(), b.copy())
x_g
```
# Метод простой итерации
```
D = A * np.eye(n)
H_D = np.eye(n) - np.dot(np.linalg.inv(D), A)
g_D = np.dot(np.linalg.inv(D), b)
x0 = np.zeros((n, 1))
x1 = x0
for i in range(10):
x0 = x1
x1 = np.dot(H_D, x0) + g_D
H_norm = np.linalg.norm(H_D, ord=np.inf)
```
### Апостриорная оценка погрешности (по норме $||\; . ||_\infty$)
```
apost = H_norm / (1 - H_norm) * np.linalg.norm(x0 - x1, ord=np.inf)
apost
```
### Фактическая погрешность
```
fact_it = np.linalg.norm(x - x_g, ord=np.inf)
fact_it
```
# Метод Зейделя
```
H_R = np.tril(np.ones(n * (n-1) // 2)).T
H_L = np.ones((n, n)) - H_R
H_R = H_R * H_D
H_L = H_L * H_D
x0 = np.zeros((n, 1))
x1 = x0
CC = np.dot(np.linalg.inv(np.eye(n) - H_L), H_R)
DD = np.linalg.inv(np.eye(n) - H_L)
for i in range(10):
x0 = x1
x1 = np.dot(CC, x0) + np.dot(DD, g_D)
x1
```
### Фактическая погрешность
```
fact_ze = np.linalg.norm(x1 - x_g, ord=np.inf)
fact_ze
```
### Спектральный радиус
```
rho = max(abs(np.linalg.eig(CC)[0]))
rho
```
# Приближение по Люстернику
```
x2 = x0 + 1/(1 + rho) * (x1 - x0)
x2
fact_lu = np.linalg.norm(x2 - x_g, ord=np.inf)
fact_lu
```
# Метод верхней релаксации
```
q = 2/(1 + (1 - rho**2)**(1/2))
q
x0 = np.zeros((n,1))
x1 = x1
for t in range(10):
x0 = x1
for i in range(n):
x1[i] = x0[i] + q * (np.dot(H_D[i, :], x1) - x0[i]+ g_D[i])
x1
fact_re = np.linalg.norm(x1 - x_g, ord=np.inf)
fact_re
```
# Сравнение методов
### Решение методом Гаусса
```
x_g
```
### Погрешность метода простой итерации
```
fact_it
```
### Погрешность метода Зейделя
```
fact_ze
```
### Погрешность метода Зейделя с приближением по Люстернику
```
fact_lu
```
### Погрешность метода верхней релаксации
```
fact_re
```
| github_jupyter |
# TensorFlow Basics
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
tf.__version__
```
## Constants
```
h = tf.constant('Hello World')
h
h.graph is tf.get_default_graph()
x = tf.constant(100)
x
# Create Session object in which we can run operations.
# A session object encapsulates the environment in which
# operations are executed. Tensor objects are evaluated
# by operations.
session = tf.Session()
session.run(h)
session.run(x)
type(session.run(x))
type(session.run(h))
```
## Operations
```
a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as session:
print('Addition: {}'.format(session.run(a + b)))
print('Subtraction: {}'.format(session.run(a - b)))
print('Multiplication: {}'.format(session.run(a * b)))
print('Division: {}'.format(session.run(a / b)))
e = np.array([[5., 5.]])
f = np.array([[2.], [2.]])
e
f
# Convert numpy arrays to TensorFlow objects
ec = tf.constant(e)
fc = tf.constant(f)
matrix_mult_op = tf.matmul(ec, fc)
with tf.Session() as session:
print('Matrix Multiplication: {}'.format(session.run(matrix_mult_op)))
```
### Placeholders
Instead of using a constant, we can define a placeholder that allows us to provide the value at the time of execution just like function parameters.
```
c = tf.placeholder(tf.int32)
d = tf.placeholder(tf.int32)
add_op = tf.add(c, d)
sub_op = tf.subtract(c, d)
mult_op = tf.multiply(c, d)
div_op = tf.divide(c, d)
with tf.Session() as session:
input_dict = {c: 11, d: 10}
print('Addition: {}'.format(session.run(add_op, feed_dict=input_dict)))
print('Subtraction: {}'.format(session.run(sub_op, feed_dict=input_dict)))
print('Multiplication: {}'.format(session.run(mult_op, feed_dict=input_dict)))
print('Division: {}'.format(session.run(div_op, feed_dict={c:11, d:11})))
```
### Variables
A variable is a tensor that can change during program execution.
```
var2 = tf.get_variable('var2', [2])
var2
```
## Classification using the MNIST dataset
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data', one_hot=True)
type(mnist)
mnist.train.images
mnist.train.images.shape
```
The MNIST dataset contain 55,000 images. The dimensions of each image is 28-by-28. Each vector has 784 elements because 28*28=784.
```
# Convert the vector to a 28x28 matrix
sample_img = mnist.train.images[0].reshape(28, 28)
# Show the picture
plt.imshow(sample_img, cmap='Greys')
```
Before we begin, we specify three parameters:
- the learning rate $\alpha$: how quickly should the cost function be adjusted.
- training epoch: number of training cycles
- batch size: batches of training data
```
learning_rate = 0.001
training_epochs = 15
batch_size = 100
```
Network parameters
```
# Number of classes is 10 because we have 10 digits
n_classes = 10
# Number of training examples
n_samples = mnist.train.num_examples
# The flatten array of the 28x28 image matrix contains 784 elements
n_input = 784
# Number of neurons in the hidden layers. For image data, 256 neurons
# is common because we have 256 intensity values (8-bit).
# In this example, we only use 2 hidden layers. The more hidden
# layers, we use the longer it takes for the model to run but
# more layers has the possibility of being more accurate.
n_hidden_1 = 256
n_hidden_2 = 256
```
def multi
```
def multilayer_perceptron(x, weights, biases):
'''
x: Placeholder for the data input
weights: Dictionary of weights
biases: Dictionary of bias values
'''
# First hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Second hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer
layer_out = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
return layer_out
weights = {
'h1': tf.Variable(tf.random_normal(shape=[n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal(shape=[n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal(shape=[n_hidden_2, n_classes]))
}
tf.random_normal(shape=(n_input, n_hidden_1))
#tf.Session().run(weights['h1'])
```
| github_jupyter |
## 1-3. 複数量子ビットの記述
ここまでは1量子ビットの状態とその操作(演算)の記述について学んできた。この章の締めくくりとして、$n$個の量子ビットがある場合の状態の記述について学んでいこう。テンソル積がたくさん出てきてややこしいが、コードをいじりながら身につけていってほしい。
$n$個の**古典**ビットの状態は$n$個の$0,1$の数字によって表現され、そのパターンの総数は$2^n$個ある。
量子力学では、これらすべてのパターンの重ね合わせ状態が許されているので、$n$個の**量子**ビットの状態$|\psi \rangle$はどのビット列がどのような重みで重ね合わせになっているかという$2^n$個の複素確率振幅で記述される:
$$
\begin{eqnarray}
|\psi \rangle &= &
c_{00...0} |00...0\rangle +
c_{00...1} |00...1\rangle + \cdots +
c_{11...1} |11...1\rangle =
\left(
\begin{array}{c}
c_{00...0}
\\
c_{00...1}
\\
\vdots
\\
c_{11...1}
\end{array}
\right).
\end{eqnarray}
$$
ただし、
複素確率振幅は規格化
$\sum _{i_1,..., i_n} |c_{i_1...i_n}|^2=1$
されているものとする。
そして、この$n$量子ビットの量子状態を測定するとビット列$i_1 ... i_n$が確率
$$
\begin{eqnarray}
p_{i_1 ... i_n} &=&|c_{i_1 ... i_n}|^2
\label{eq02}
\end{eqnarray}
$$
でランダムに得られ、測定後の状態は$|i_1 \dotsc i_n\rangle$となる。
**このように**$n$**量子ビットの状態は、**$n$**に対して指数的に大きい**$2^n$**次元の複素ベクトルで記述する必要があり、ここに古典ビットと量子ビットの違いが顕著に現れる**。
そして、$n$量子ビット系に対する操作は$2^n \times 2^n$次元のユニタリ行列として表される。
言ってしまえば、量子コンピュータとは、量子ビット数に対して指数的なサイズの複素ベクトルを、物理法則に従ってユニタリ変換するコンピュータのことなのである。
※ここで、複数量子ビットの順番と表記の関係について注意しておく。状態をケットで記述する際に、「1番目」の量子ビット、「2番目」の量子ビット、……の状態に対応する0と1を左から順番に並べて表記した。例えば$|011\rangle$と書けば、1番目の量子ビットが0、2番目の量子ビットが1、3番目の量子ビットが1である状態を表す。一方、例えば011を2進数の表記と見た場合、上位ビットが左、下位ビットが右となることに注意しよう。すなわち、一番左の0は最上位ビットであって$2^2$の位に対応し、真ん中の1は$2^1$の位、一番右の1は最下位ビットであって$2^0=1$の位に対応する。つまり、「$i$番目」の量子ビットは、$n$桁の2進数表記の$n-i+1$桁目に対応している。このことは、SymPyなどのパッケージで複数量子ビットを扱う際に気を付ける必要がある(下記「SymPyを用いた演算子のテンソル積」も参照)。
(詳細は Nielsen-Chuang の `1.2.1 Multiple qbits` を参照)
### 例:2量子ビットの場合
2量子ビットの場合は、 00, 01, 10, 11 の4通りの状態の重ね合わせをとりうるので、その状態は一般的に
$$
c_{00} |00\rangle + c_{01} |01\rangle + c_{10}|10\rangle + c_{11} |11\rangle =
\left(
\begin{array}{c}
c_{00}
\\
c_{01}
\\
c_{10}
\\
c_{11}
\end{array}
\right)
$$
とかける。
一方、2量子ビットに対する演算は$4 \times 4$行列で書け、4行4列の行列成分はそれぞれ$|00\rangle,|01\rangle,|10\rangle, |01\rangle$に対応する。
このような2量子ビットに作用する演算としてもっとも重要なのが**制御NOT演算(CNOT演算)**であり、
行列表示では
$$
\begin{eqnarray}
\Lambda(X) =
\left(
\begin{array}{cccc}
1 & 0 & 0& 0
\\
0 & 1 & 0& 0
\\
0 & 0 & 0 & 1
\\
0 & 0 & 1& 0
\end{array}
\right)
\end{eqnarray}
$$
となる。
CNOT演算が2つの量子ビットにどのように作用するか見てみよう。まず、1つ目の量子ビットが$|0\rangle$の場合、$c_{10} = c_{11} = 0$なので、
$$
\Lambda(X)
\left(
\begin{array}{c}
c_{00}\\
c_{01}\\
0\\
0
\end{array}
\right) =
\left(
\begin{array}{c}
c_{00}\\
c_{01}\\
0\\
0
\end{array}
\right)
$$
となり、状態は変化しない。一方、1つ目の量子ビットが$|1\rangle$の場合、$c_{00} = c_{01} = 0$なので、
$$
\Lambda(X)
\left(
\begin{array}{c}
0\\
0\\
c_{10}\\
c_{11}
\end{array}
\right) =
\left(
\begin{array}{c}
0\\
0\\
c_{11}\\
c_{10}
\end{array}
\right)
$$
となり、$|10\rangle$と$|11\rangle$の確率振幅が入れ替わる。すなわち、2つ目の量子ビットが反転している。
つまり、CNOT演算は1つ目の量子ビットをそのままに保ちつつ、
- 1つ目の量子ビットが$|0\rangle$の場合は、2つ目の量子ビットにも何もしない(恒等演算$I$が作用)
- 1つ目の量子ビットが$|1\rangle$の場合は、2つ目の量子ビットを反転させる($X$が作用)
という効果を持つ。
そこで、1つ目の量子ビットを**制御量子ビット**、2つ目の量子ビットを**ターゲット量子ビット**と呼ぶ。
このCNOT演算の作用は、$\oplus$を mod 2の足し算、つまり古典計算における排他的論理和(XOR)とすると、
$$
\begin{eqnarray}
\Lambda(X) |ij \rangle = |i \;\; (i\oplus j)\rangle \:\:\: (i,j=0,1)
\end{eqnarray}
$$
とも書ける。よって、CNOT演算は古典計算でのXORを可逆にしたものとみなせる
(ユニタリー行列は定義$U^\dagger U = U U^\dagger = I$より可逆であることに注意)。
例えば、1つ目の量子ビットを$|0\rangle$と$|1\rangle$の
重ね合わせ状態にし、2つ目の量子ビットを$|0\rangle$として
$$
\begin{eqnarray}
\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle )\otimes |0\rangle =
\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
1
\\
0
\\
1
\\
0
\end{array}
\right)
\end{eqnarray}
$$
にCNOTを作用させると、
$$
\begin{eqnarray}
\frac{1}{\sqrt{2}}( |00\rangle + |11\rangle ) =
\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
1
\\
0
\\
0
\\
1
\end{array}
\right)
\end{eqnarray}
$$
が得られ、2つ目の量子ビットがそのままである状態$|00\rangle$と反転された状態$|11\rangle$の重ね合わせになる。(記号$\otimes$については次節参照)
さらに、CNOT ゲートを組み合わせることで重要な2量子ビットゲートである**SWAP ゲート**を作ることができる。
$$\Lambda(X)_{i,j}$$
を$i$番目の量子ビットを制御、$j$番目の量子ビットをターゲットとするCNOT ゲートとして、
$$
\begin{align}
\mathrm{SWAP} &= \Lambda(X)_{1,2} \Lambda(X)_{2,1} \Lambda(X)_{1,2}\\
&=
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}
\right)\\
&=
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right)
\end{align}
$$
のように書ける。これは1 番目の量子ビットと2 番目の量子ビットが交換するゲートであることが分かる。
このことは、上記のmod 2の足し算$\oplus$を使った表記で簡単に確かめることができる。3つのCNOTゲート$\Lambda(X)_{1,2} \Lambda(X)_{2,1} \Lambda(X)_{1,2}$の$|ij\rangle$への作用を1ステップずつ書くと、$i \oplus (i \oplus j) = (i \oplus i) \oplus j = 0 \oplus j = j$であることを使って、
$$
\begin{align}
|ij\rangle &\longrightarrow
|i \;\; (i\oplus j)\rangle\\
&\longrightarrow
|(i\oplus (i\oplus j)) \;\; (i\oplus j)\rangle =
|j \;\; (i\oplus j)\rangle\\
&\longrightarrow
|j \;\; (j\oplus (i\oplus j))\rangle =
|ji\rangle
\end{align}
$$
となり、2つの量子ビットが交換されていることが分かる。
(詳細は Nielsen-Chuang の `1.3.2 Multiple qbit gates` を参照)
### テンソル積の計算
手計算や解析計算で威力を発揮するのは、**テンソル積**($\otimes$)である。
これは、複数の量子ビットがある場合に、それをどのようにして、上で見た大きな一つのベクトルへと変換するのか?という計算のルールを与えてくれる。
量子力学の世界では、2つの量子系があってそれぞれの状態が$|\psi \rangle$と$|\phi \rangle$のとき、
$$
|\psi \rangle \otimes |\phi\rangle
$$
とテンソル積 $\otimes$ を用いて書く。このような複数の量子系からなる系のことを**複合系**と呼ぶ。例えば2量子ビット系は複合系である。
基本的にはテンソル積は、**多項式と同じような計算ルール**で計算してよい。
例えば、
$$
(\alpha |0\rangle + \beta |1\rangle )\otimes (\gamma |0\rangle + \delta |1\rangle )
= \alpha \gamma |0\rangle |0\rangle + \alpha \delta |0\rangle |1\rangle + \beta \gamma |1 \rangle | 0\rangle + \beta \delta |1\rangle |1\rangle
$$
のように計算する。列ベクトル表示すると、$|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle$に対応する4次元ベクトル、
$$
\left(
\begin{array}{c}
\alpha
\\
\beta
\end{array}
\right)
\otimes
\left(
\begin{array}{c}
\gamma
\\
\delta
\end{array}
\right) =
\left(
\begin{array}{c}
\alpha \gamma
\\
\alpha \delta
\\
\beta \gamma
\\
\beta \delta
\end{array}
\right)
$$
を得る計算になっている。
### SymPyを用いたテンソル積の計算
```
from IPython.display import Image, display_png
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit,QubitBra
from sympy.physics.quantum.gate import X,Y,Z,H,S,T,CNOT,SWAP, CPHASE
init_printing() # ベクトルや行列を綺麗に表示するため
# Google Colaboratory上でのみ実行してください
from IPython.display import HTML
def setup_mathjax():
display(HTML('''
<script>
if (!window.MathJax && window.google && window.google.colab) {
window.MathJax = {
'tex2jax': {
'inlineMath': [['$', '$'], ['\\(', '\\)']],
'displayMath': [['$$', '$$'], ['\\[', '\\]']],
'processEscapes': true,
'processEnvironments': true,
'skipTags': ['script', 'noscript', 'style', 'textarea', 'code'],
'displayAlign': 'center',
},
'HTML-CSS': {
'styles': {'.MathJax_Display': {'margin': 0}},
'linebreaks': {'automatic': true},
// Disable to prevent OTF font loading, which aren't part of our
// distribution.
'imageFont': null,
},
'messageStyle': 'none'
};
var script = document.createElement("script");
script.src = "https://colab.research.google.com/static/mathjax/MathJax.js?config=TeX-AMS_HTML-full,Safe";
document.head.appendChild(script);
}
</script>
'''))
get_ipython().events.register('pre_run_cell', setup_mathjax)
a,b,c,d = symbols('alpha,beta,gamma,delta')
psi = a*Qubit('0')+b*Qubit('1')
phi = c*Qubit('0')+d*Qubit('1')
TensorProduct(psi, phi) #テンソル積
represent(TensorProduct(psi, phi))
```
さらに$|\psi\rangle$とのテンソル積をとると8次元のベクトルになる:
```
represent(TensorProduct(psi,TensorProduct(psi, phi)))
```
### 演算子のテンソル積
演算子についても何番目の量子ビットに作用するのか、というのをテンソル積をもちいて表現することができる。たとえば、1つめの量子ビットには$A$という演算子、2つめの演算子には$B$を作用させるという場合には、
$$ A \otimes B$$
としてテンソル積演算子が与えられる。
$A$と$B$をそれぞれ、2×2の行列とすると、$A\otimes B$は4×4の行列として
$$
\left(
\begin{array}{cc}
a_{11} & a_{12}
\\
a_{21} & a_{22}
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
b_{11} & b_{12}
\\
b_{21} & b_{22}
\end{array}
\right) =
\left(
\begin{array}{cccc}
a_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12}
\\
a_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22}
\\
a_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12}
\\
a_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22}
\end{array}
\right)
$$
This is how the matrix tensor product is computed.
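The block structure of this rule can be verified numerically with NumPy's `kron` (NumPy is an assumption here, not part of the SymPy session above); block $(i, j)$ of the result equals $a_{ij} B$:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# 4x4 block matrix: block (i, j) is A[i, j] * B
AB = np.kron(A, B)
print(AB.shape)  # (4, 4)
```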
The action on the tensor product state
$$|\psi \rangle \otimes | \phi \rangle $$
is
$$ (A|\psi \rangle ) \otimes (B |\phi \rangle )$$
so $A$ and $B$ act on the subsystems $|\psi \rangle$ and $|\phi\rangle$ respectively.
For sums of operators, expand as with polynomials and apply each term:
$$
(A+C)\otimes (B+D) |\psi \rangle \otimes | \phi \rangle =
(A \otimes B +A \otimes D + C \otimes B + C \otimes D) |\psi \rangle \otimes | \phi \rangle\\ =
(A|\psi \rangle) \otimes (B| \phi \rangle)
+(A|\psi \rangle) \otimes (D| \phi \rangle)
+(C|\psi \rangle) \otimes (B| \phi \rangle)
+(C|\psi \rangle) \otimes (D| \phi \rangle)
$$
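This distributive rule is just bilinearity of the Kronecker product, which can be spot-checked numerically; the sketch below uses NumPy (an assumption, not part of the SymPy session) with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.normal(size=(2, 2)) for _ in range(4))

# (A + C) (x) (B + D) expands term by term, like a polynomial
lhs = np.kron(A + C, B + D)
rhs = np.kron(A, B) + np.kron(A, D) + np.kron(C, B) + np.kron(C, D)
print(np.allclose(lhs, rhs))  # True
```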
Tensor products and tensor product operators are written side by side, but in fact
$$
\left(
\begin{array}{c}
A
\\
\otimes
\\
B
\end{array}
\right)
\begin{array}{c}
|\psi \rangle
\\
\otimes
\\
|\phi\rangle
\end{array}
$$
stacking them vertically like this may make it easier to see how they act.
For example, the entangled state created using the CNOT operation is
$$
\left(
\begin{array}{c}
|0\rangle \langle 0|
\\
\otimes
\\
I
\end{array}
+
\begin{array}{c}
|1\rangle \langle 1|
\\
\otimes
\\
X
\end{array}
\right)
\left(
\begin{array}{c}
\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)
\\
\otimes
\\
|0\rangle
\end{array}
\right) =
\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
|0 \rangle
\\
\otimes
\\
|0\rangle
\end{array}
+
\begin{array}{c}
|1 \rangle
\\
\otimes
\\
|1\rangle
\end{array}
\right)
$$
as shown above.
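As a standalone numerical sketch (again assuming NumPy rather than the SymPy session above), the same calculation in matrix form reproduces the Bell state:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
# CNOT = |0><0| (x) I + |1><1| (x) X
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])
X = np.array([[0, 1], [1, 0]])
CNOT = np.kron(P0, I) + np.kron(P1, X)

ket00 = np.array([1, 0, 0, 0])
bell = CNOT @ np.kron(H, I) @ ket00
print(bell)  # (|00> + |11>)/sqrt(2), i.e. amplitudes [0.707, 0, 0, 0.707]
```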
### Tensor products of operators with SymPy
When using operators in SymPy, you always specify which qubit the operator acts on by its **digit position** in binary notation, counted from the right, not by its **ordinal position** counted from the left. To address the $i$-th qubit from the left in an $n$-qubit register, specify `n-i` in the SymPy code (zero-based indexing).
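A tiny helper makes this index convention explicit; `sympy_index` is our own illustrative function, not part of SymPy:

```python
def sympy_index(n, i):
    """Map the i-th qubit from the left (1-based) of an n-qubit register
    to SymPy's zero-based index, which counts binary digits from the right."""
    return n - i

print(sympy_index(2, 1))  # 1 -> H(1) acts on the leftmost of 2 qubits
print(sympy_index(3, 3))  # 0 -> the rightmost qubit of 3
```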
`H(0)`, represented in the 1-qubit space, is
```
represent(H(0),nqubits=1)
```
In the 2-qubit space, `H(1)` corresponds to $H \otimes I$, and its representation is
```
represent(H(1),nqubits=2)
```
The CNOT operation is
```
represent(CNOT(1,0),nqubits=2)
```
The tensor product of Pauli operators, $X\otimes Y \otimes Z$, is likewise
```
represent(X(2)*Y(1)*Z(0),nqubits=3)
```
In this way, the tensor product rules above can be verified directly.
### Measuring only part of a multi-qubit system
We have already discussed the outcome probabilities when all qubits are measured. It is also possible to measure only some of the qubits. In that case, the probability of a measurement outcome is the squared norm of the vector obtained by projecting onto the (subsystem) basis state corresponding to that outcome, and the post-measurement state is that projected vector, normalized.
Let us look at this concretely. Consider the following $n$-qubit state.
\begin{align}
|\psi\rangle &=
c_{00...0} |00...0\rangle +
c_{00...1} |00...1\rangle + \cdots +
c_{11...1} |11...1\rangle\\
&= \sum_{i_1 \dotsc i_n} c_{i_1 \dotsc i_n} |i_1 \dotsc i_n\rangle =
\sum_{i_1 \dotsc i_n} c_{i_1 \dotsc i_n} |i_1\rangle \otimes \cdots \otimes |i_n\rangle
\end{align}
Suppose we measure the first qubit. The projection operators onto the orthonormal basis $|0\rangle$, $|1\rangle$ of the first qubit's state space are $|0\rangle\langle0|$ and $|1\rangle\langle1|$, respectively. Consider the operator that projects the first qubit onto $|0\rangle$ and does nothing to the other qubits,
$$
|0\rangle\langle0| \otimes I \otimes \cdots \otimes I
$$
Using this operator, the probability of obtaining the measurement outcome 0 is
$$
\bigl\Vert \bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) |\psi\rangle \bigr\Vert^2 =
\langle \psi | \bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) | \psi \rangle
$$
Here,
$$
\bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) | \psi \rangle =
\sum_{i_2 \dotsc i_n} c_{0 i_2 \dotsc i_n} |0\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle
$$
so the desired probability is
$$
p_0 = \sum_{i_2 \dotsc i_n} |c_{0 i_2 \dotsc i_n}|^2
$$
and the post-measurement state is
$$
\frac{1}{\sqrt{p_0}}\sum_{i_2 \dotsc i_n} c_{0 i_2 \dotsc i_n} |0\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle
$$
Swapping 0 and 1 gives the probability of outcome 1 and the corresponding post-measurement state.
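The projection recipe can be sketched numerically for a random state; the code below assumes NumPy and hard-codes the first (leftmost) qubit, whose $|0\rangle$ outcome corresponds to the top half of the amplitude vector:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# Project the first (leftmost) qubit onto |0>: keep amplitudes whose
# top bit is 0, i.e. the first half of the state vector.
proj = psi.copy()
proj[2**(n - 1):] = 0
p0 = np.linalg.norm(proj)**2        # probability of outcome 0
post = proj / np.sqrt(p0)           # normalized post-measurement state
```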
Note that the expressions for $p_0$ and $p_1$ obtained here agree with the marginal distribution of $i_1$ computed from the joint distribution $p_{i_1, \dotsc, i_n}$ of the outcomes $i_1, \dotsc, i_n$. Indeed,
$$
\sum_{i_2, \dotsc, i_n} p_{i_1, \dotsc, i_n} = \sum_{i_2, \dotsc, i_n} |c_{i_1, \dotsc, i_n}|^2 = p_{i_1}
$$
as claimed.
The same calculation applies when more qubits are measured, say the first $k$. The probability of obtaining the outcomes $i_1, \dotsc, i_k$ is
$$
p_{i_1, \dotsc, i_k} = \sum_{i_{k+1}, \dotsc, i_n} |c_{i_1, \dotsc, i_n}|^2
$$
and the post-measurement state is
$$
\frac{1}{\sqrt{p_{i_1, \dotsc, i_k}}}\sum_{i_{k+1} \dotsc i_n} c_{i_1 \dotsc i_n} |i_1 \rangle \otimes \cdots \otimes |i_n\rangle
$$
(Note that the sum runs only over $i_{k+1}, \dotsc, i_n$.)
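For the first $k$ qubits, the same computation can be organized with a reshape; this NumPy sketch (an assumption, not part of the SymPy session) computes all outcome probabilities and post-measurement states at once:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# Rows index the first k qubits, columns the remaining n-k qubits.
amps = psi.reshape(2**k, 2**(n - k))
probs = np.sum(np.abs(amps)**2, axis=1)        # p_{i1...ik}
post_states = amps / np.sqrt(probs)[:, None]   # normalized post-measurement states

print(np.isclose(probs.sum(), 1.0))  # True
```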
Let us look at a more concrete example using SymPy. Consider the following state, built by combining an H operation and a CNOT operation.
$$
|\psi\rangle = \Lambda(X) (H \otimes H) |0\rangle \otimes |0\rangle = \frac{|00\rangle + |10\rangle + |01\rangle + |11\rangle}{2}
$$
```
psi = qapply(CNOT(1, 0)*H(1)*H(0)*Qubit('00'))
psi
```
The probability that measuring the first qubit of this state yields 0 is
$$
p_0 = \langle \psi | \bigl( |0\rangle\langle0| \otimes I \bigr) | \psi \rangle =
\left(\frac{\langle 00 | + \langle 10 | + \langle 01 | + \langle 11 |}{2}\right)
\left(\frac{| 00 \rangle + | 01 \rangle}{2}\right) =
\frac{1}{2}
$$
and the post-measurement state is
$$
\frac{1}{\sqrt{p_0}} \bigl( |0\rangle\langle0| \otimes I \bigr) | \psi \rangle =
\frac{| 00 \rangle + | 01 \rangle}{\sqrt{2}}
$$
Let us now compute this result with SymPy as well. SymPy provides several measurement functions; to compute the probabilities and post-measurement states when only some of the qubits are measured, use `measure_partial`. Passing it the state to be measured and the indices of the qubits to measure returns a list of pairs of post-measurement states and measurement probabilities. The state and probability for the case where the first qubit was 0 can be accessed as element `[0]`.
```
from sympy.physics.quantum.qubit import measure_all, measure_partial
measured_state_and_probability = measure_partial(psi, (1,))
measured_state_and_probability[0]
```
This matches the hand calculation above. The case where the measurement outcome was 1 can be computed in the same way.
```
measured_state_and_probability[1]
```
---
## Column: What is a universal gate set?
In classical computing, it is known that the NAND gate (the negation of the logical AND) alone suffices: combining NAND gates allows any logical operation to be carried out.
What, then, is the quantum counterpart, that is, the minimal set of quantum gates needed to carry out arbitrary quantum computation?
In fact, the three gates introduced in this section,
$$\{H, T, {\rm CNOT} \}$$
are known to play exactly that role; they form a so-called **universal gate set**.
By combining them appropriately, any quantum computation can be carried out; that is, **universal quantum computation** is possible.
### [A note for those who want to know more]
Below we explain, step by step, how the three gates $\{H, T, {\rm CNOT} \}$ constitute a universal gate set.
The outline is as follows: starting from a general $n$-qubit unitary operation, we break it down into ever smaller parts, eventually arriving at the three gates above.
#### ◆ Decomposing an $n$-qubit unitary operation
First, any $n$-qubit unitary operation can be decomposed into **single-qubit unitary operations** and **CNOT gates** by the following steps.
1. Any $n$-qubit unitary operation can be decomposed into a product of **two-level unitary operations**. A two-level unitary operation acts only on a 2-dimensional subspace spanned by two basis states; for 3 qubits, for example, the subspace of the $2^3=8$-dimensional space spanned by $\{|000\rangle, |111\rangle \}$
2. Any two-level unitary operation can be constructed from **controlled-$U$ gates** (a CNOT gate with its NOT part replaced by an arbitrary single-qubit unitary $U$) and **Toffoli gates** (a CNOT gate with two control qubits)
3. Both the controlled-$U$ gate and the Toffoli gate can be constructed from **single-qubit unitary operations** and **CNOT gates**
#### ◆ Constructing single-qubit unitary operations
Furthermore, any single-qubit unitary operation can be constructed from the pair $\{H, T\}$.
1. By Euler's rotation theorem, any single-qubit unitary operation can be realized (exactly) with the rotation gates $\{R_X(\theta), R_Z(\theta)\}$
2. In fact, any rotation on the Bloch sphere can be realized using only $\{H, T\}$ (Note 1). This follows from the fact that a rotation by an irrational multiple of $\pi$ about some axis can be realized from $\{H, T\}$ alone (the **Solovay-Kitaev algorithm**)
(Note 1) Some readers may wonder whether continuous rotations on the Bloch sphere can really be realized with the discrete operations $\{H, T\}$. Indeed, realizing a single-qubit unitary operation exactly with discrete gate operations would require infinitely many gates. In practice, however, an exact unitary is not needed: it suffices to approximate any unitary to the required computational accuracy $\epsilon$. The **Solovay-Kitaev theorem** guarantees that any single-qubit unitary operation can be **approximated to sufficiently good accuracy** using a polynomial number of $\{H, T\}$ gates.
<br>
By the above argument, the three gates $\{H, T, {\rm CNOT} \}$ suffice to realize any $n$-qubit unitary operation.
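As a rough numerical illustration of the approximation idea (a brute-force search, not the Solovay-Kitaev algorithm itself), even short products of $H$ and $T$ approximate a small $R_Z$ rotation reasonably well:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
target = np.diag([np.exp(-0.15j), np.exp(0.15j)])  # R_Z(0.3)

def dist(U, V):
    # distance between unitaries, up to a global phase (0 = identical)
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

# Try every length-8 word over the alphabet {H, T}
best = min(
    dist(np.linalg.multi_dot(seq), target)
    for seq in product([H, T], repeat=8)
)
print(best < 0.05)  # True
```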
For more on universal gate sets and universal quantum computation, see:
[1] Nielsen and Chuang, Section 4.5 "Universal quantum gates"
[2] Keisuke Fujii, 「量子コンピュータの基礎と物理との接点」 (lecture at the 62nd Condensed Matter Physics Summer School), DOI: 10.14989/229039 http://mercury.yukawa.kyoto-u.ac.jp/~bussei.kenkyu/archives/1274.html
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Reinforcement Learning in Azure Machine Learning - Pong problem
Reinforcement Learning in Azure Machine Learning is a managed service for running distributed reinforcement learning training and simulation using the open source Ray framework.
This example uses Ray RLlib to train a Pong playing agent on a multi-node cluster.
## Pong problem
[Pong](https://en.wikipedia.org/wiki/Pong) is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth.
<table style="width:50%">
<tr>
<th style="text-align: center;"><img src="./images/pong.gif" alt="Pong image" align="middle" margin-left="auto" margin-right="auto"/></th>
</tr>
<tr style="text-align: center;">
<th>Fig 1. Pong game animation (from <a href="https://towardsdatascience.com/intro-to-reinforcement-learning-pong-92a94aa0f84d">towardsdatascience.com</a>).</th>
</tr>
</table>
The goal here is to train an agent to win an episode of Pong against the opponent with a score of at least 18 points. An episode in Pong runs until one of the players reaches a score of 21. An *episode* is the term used across all [OpenAI gym](https://gym.openai.com/envs/Pong-v0/) environments for one run of a strictly defined task.
Training a Pong agent is a compute-intensive task and this example demonstrates the use of Reinforcement Learning in Azure Machine Learning service to train an agent faster in a distributed, parallel environment. You'll learn more about using the head and the worker compute targets to train an agent in this notebook below.
## Prerequisite
It is highly recommended that you work through [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) first, to understand the basics of Reinforcement Learning in Azure Machine Learning and the Ray RLlib framework used in this notebook.
## Set up Development Environment
The following subsections show typical steps to set up your development environment. Setup includes:
* Connecting to a workspace to enable communication between your local machine and remote resources
* Creating an experiment to track all your runs
* Setting up a virtual network
* Creating remote head and worker compute targets on a virtual network to use for training
### Azure Machine Learning SDK
Display the Azure Machine Learning SDK version.
```
%matplotlib inline
# Azure Machine Learning core imports
import azureml.core
# Check core SDK version number
print("Azure Machine Learning SDK Version: ", azureml.core.VERSION)
```
### Get Azure Machine Learning workspace
Get a reference to an existing Azure Machine Learning workspace.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep = ' | ')
```
### Create Azure Machine Learning experiment
Create an experiment to track the runs in your workspace.
```
from azureml.core.experiment import Experiment
# Experiment name
experiment_name = 'rllib-pong-multi-node'
exp = Experiment(workspace=ws, name=experiment_name)
```
### Create Virtual Network and Network Security Group
**If you are using separate compute targets for the Ray head and worker, as we do in this notebook**, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.
> Note that your user role must have permissions to create and manage virtual networks to run the cells below. Talk to your IT admin if you do not have these permissions.
#### Create Virtual Network
To create the virtual network you first must install the [Azure Networking Python API](https://docs.microsoft.com/python/api/overview/azure/network?view=azure-python).
`pip install --upgrade azure-mgmt-network`
Note: In this section we are using [DefaultAzureCredential](https://docs.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python)
class for authentication which, by default, examines several options in turn, and stops on the first option that provides
a token. You will need to log in using Azure CLI, if none of the other options are available (please find more details [here](https://docs.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python)).
```
# If you need to install the Azure Networking SDK, uncomment the following line.
#!pip install --upgrade azure-mgmt-network
from azure.mgmt.network import NetworkManagementClient
from azure.identity import DefaultAzureCredential
# Virtual network name
vnet_name ="rl_pong_vnet"
# Default subnet
subnet_name ="default"
# The Azure subscription you are using
subscription_id=ws.subscription_id
# The resource group for the reinforcement learning cluster
resource_group=ws.resource_group
# Azure region of the resource group
location=ws.location
network_client = NetworkManagementClient(credential=DefaultAzureCredential(), subscription_id=subscription_id)
async_vnet_creation = network_client.virtual_networks.begin_create_or_update(
resource_group,
vnet_name,
{
'location': location,
'address_space': {
'address_prefixes': ['10.0.0.0/16']
}
}
)
async_vnet_creation.wait()
print("Virtual network created successfully: ", async_vnet_creation.result())
```
#### Set up Network Security Group on Virtual Network
Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network).
A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).
You may need to modify the code below to match your scenario.
```
import azure.mgmt.network.models
security_group_name = vnet_name + '-' + "nsg"
security_rule_name = "AllowAML"
# Create a network security group
nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(
location=location,
security_rules=[
azure.mgmt.network.models.SecurityRule(
name=security_rule_name,
access=azure.mgmt.network.models.SecurityRuleAccess.allow,
description='Reinforcement Learning in Azure Machine Learning rule',
destination_address_prefix='*',
destination_port_range='29876-29877',
direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,
priority=400,
protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,
source_address_prefix='BatchNodeManagement',
source_port_range='*'
),
],
)
async_nsg_creation = network_client.network_security_groups.begin_create_or_update(
resource_group,
security_group_name,
nsg_params,
)
async_nsg_creation.wait()
print("Network security group created successfully:", async_nsg_creation.result())
network_security_group = network_client.network_security_groups.get(
resource_group,
security_group_name,
)
# Define a subnet to be created with network security group
subnet = azure.mgmt.network.models.Subnet(
id='default',
address_prefix='10.0.0.0/24',
network_security_group=network_security_group
)
# Create subnet on virtual network
async_subnet_creation = network_client.subnets.begin_create_or_update(
resource_group_name=resource_group,
virtual_network_name=vnet_name,
subnet_name=subnet_name,
subnet_parameters=subnet
)
async_subnet_creation.wait()
print("Subnet created successfully:", async_subnet_creation.result())
```
#### Review the virtual network security rules
Ensure that the virtual network is configured correctly with the required ports open. You may have configured rules with a broader range of ports that already allow ports 29876-29877. Please review your network security group rules.
```
from files.networkutils import *
from azure.identity import DefaultAzureCredential
check_vnet_security_rules(DefaultAzureCredential(), ws.subscription_id, ws.resource_group, vnet_name, True)
```
### Create compute targets
In this example, we show how to set up separate compute targets for the Ray head and Ray worker nodes.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
#### Create head compute target
First we define the head cluster with GPU for the Ray head node. One CPU of the head node will be used for the Ray head process and the rest of the CPUs will be used by the Ray worker processes.
```
from azureml.core.compute import AmlCompute, ComputeTarget
# Choose a name for the Ray head cluster
head_compute_name = 'head-gpu'
head_compute_min_nodes = 0
head_compute_max_nodes = 2
# This example uses GPU VM. For using CPU VM, set SKU to STANDARD_D2_V2
head_vm_size = 'STANDARD_NC6'
if head_compute_name in ws.compute_targets:
head_compute_target = ws.compute_targets[head_compute_name]
if head_compute_target and type(head_compute_target) is AmlCompute:
if head_compute_target.provisioning_state == 'Succeeded':
print('found head compute target. just use it', head_compute_name)
else:
raise Exception(
'found head compute target but it is in state', head_compute_target.provisioning_state)
else:
print('creating a new head compute target...')
provisioning_config = AmlCompute.provisioning_configuration(
vm_size=head_vm_size,
min_nodes=head_compute_min_nodes,
max_nodes=head_compute_max_nodes,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name='default')
# Create the cluster
head_compute_target = ComputeTarget.create(ws, head_compute_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min node count is provided it will use the scale settings for the cluster
head_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(head_compute_target.get_status().serialize())
```
#### Create worker compute target
Now we create a compute target with CPUs for the additional Ray worker nodes. CPUs in these worker nodes are used by Ray worker processes. Each Ray worker node, depending on the CPUs on the node, may have multiple Ray worker processes. There can be multiple worker tasks on each worker process (core).
```
# Choose a name for your Ray worker compute target
worker_compute_name = 'worker-cpu'
worker_compute_min_nodes = 0
worker_compute_max_nodes = 4
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
worker_vm_size = 'STANDARD_D2_V2'
# Create the compute target if it hasn't been created already
if worker_compute_name in ws.compute_targets:
worker_compute_target = ws.compute_targets[worker_compute_name]
if worker_compute_target and type(worker_compute_target) is AmlCompute:
if worker_compute_target.provisioning_state == 'Succeeded':
print('found worker compute target. just use it', worker_compute_name)
else:
raise Exception(
'found worker compute target but it is in state', head_compute_target.provisioning_state)
else:
print('creating a new worker compute target...')
provisioning_config = AmlCompute.provisioning_configuration(
vm_size=worker_vm_size,
min_nodes=worker_compute_min_nodes,
max_nodes=worker_compute_max_nodes,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name='default')
# Create the compute target
worker_compute_target = ComputeTarget.create(ws, worker_compute_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min node count is provided it will use the scale settings for the cluster
worker_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(worker_compute_target.get_status().serialize())
```
## Train Pong Agent
To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct reinforcement learning run configurations for the underlying reinforcement learning framework. Reinforcement Learning in Azure Machine Learning supports the open source [Ray framework](https://ray.io/) and its highly customizable [RLLib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLLib framework to train a Pong playing agent.
### Define worker configuration
Define a `WorkerConfiguration` using your worker compute target. We specify the number of nodes in the worker compute target to be used for training and additional PIP packages to install on those nodes as a part of setup.
In this case, we define the PIP packages as dependencies for both head and worker nodes. With this setup, the game simulations will run directly on the worker compute nodes.
```
from azureml.contrib.train.rl import WorkerConfiguration
# Specify the Ray worker configuration
worker_conf = WorkerConfiguration(
# Azure Machine Learning compute target to run Ray workers
compute_target=worker_compute_target,
# Number of worker nodes
node_count=4,
# GPU
use_gpu=False,
# Shared memory size
# Uncomment line below to set shm_size for workers (requires Azure Machine Learning SDK 1.33 or greater)
# shm_size=1024*1024*1024,
# PIP packages to use
)
```
### Create reinforcement learning estimator
The `ReinforcementLearningEstimator` is used to submit a job to Azure Machine Learning to start the Ray experiment run. We define the training script parameters here that will be passed to the estimator.
We set `episode_reward_mean` to 18 because we want to stop training as soon as the trained agent reaches an average win margin of at least 18 points over the opponent across all episodes in a training epoch.
The number of Ray worker processes is defined by the `num_workers` parameter. We set it to 13 because we have 13 CPUs available in our compute targets. Multiple Ray worker processes parallelize agent training and help achieve our goal faster.
```
Number of CPUs in head_compute_target = 6 CPUs in 1 node = 6
Number of CPUs in worker_compute_target = 2 CPUs in each of 4 nodes = 8
Number of CPUs available = (Number of CPUs in head_compute_target) + (Number of CPUs in worker_compute_target) - (1 CPU for head node) = 6 + 8 - 1 = 13
```
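The same arithmetic as a quick sketch (the node and CPU counts are the ones configured above):

```python
head_nodes, cpus_per_head_node = 1, 6        # one STANDARD_NC6 node with 6 vCPUs
worker_nodes, cpus_per_worker_node = 4, 2    # four STANDARD_D2_V2 nodes with 2 vCPUs each

num_workers = (head_nodes * cpus_per_head_node
               + worker_nodes * cpus_per_worker_node
               - 1)  # one CPU reserved for the Ray head process
print(num_workers)  # 13
```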
```
from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray
training_algorithm = "IMPALA"
rl_environment = "PongNoFrameskip-v4"
# Training script parameters
script_params = {
# Training algorithm, IMPALA in this case
"--run": training_algorithm,
# Environment, Pong in this case
"--env": rl_environment,
# Add additional single quotes at the both ends of string values as we have spaces in the
# string parameters, outermost quotes are not passed to scripts as they are not actually part of string
# Number of GPUs
# Number of ray workers
"--config": '\'{"num_gpus": 1, "num_workers": 13}\'',
# Target episode reward mean to stop the training
# Total training time in seconds
"--stop": '\'{"episode_reward_mean": 18, "time_total_s": 3600}\'',
}
# Reinforcement learning estimator
rl_estimator = ReinforcementLearningEstimator(
# Location of source files
source_directory='files',
# Python script file
entry_script="pong_rllib.py",
# Parameters to pass to the script file
# Defined above.
script_params=script_params,
# The Azure Machine Learning compute target set up for Ray head nodes
compute_target=head_compute_target,
# GPU usage
use_gpu=True,
# Reinforcement learning framework. Currently must be Ray.
rl_framework=Ray('0.8.3'),
# Ray worker configuration defined above.
worker_configuration=worker_conf,
# How long to wait for whole cluster to start
cluster_coordination_timeout_seconds=3600,
# Maximum time for the whole Ray job to run
# This will cut off the run after an hour
max_run_duration_seconds=3600,
# Allow the docker container Ray runs in to make full use
# of the shared memory available from the host OS.
shm_size=24*1024*1024*1024
)
```
### Training script
As recommended in the [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) documentation, we use the Ray [Tune](https://ray.readthedocs.io/en/latest/tune.html) API to run the training algorithm. All the RLlib built-in trainers are compatible with the Tune API. Here we use `tune.run()` to execute a built-in training algorithm. For convenience, below is the part of the entry script where we make this call.
```python
tune.run(
run_or_experiment=args.run,
config={
"env": args.env,
"num_gpus": args.config["num_gpus"],
"num_workers": args.config["num_workers"],
"callbacks": {"on_train_result": callbacks.on_train_result},
"sample_batch_size": 50,
"train_batch_size": 1000,
"num_sgd_iter": 2,
"num_data_loader_buffers": 2,
"model": {"dim": 42},
},
stop=args.stop,
local_dir='./logs')
```
### Submit the estimator to start a run
Now we use the rl_estimator configured above to submit a run.
```
run = exp.submit(config=rl_estimator)
```
### Monitor the run
Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You can use this widget to monitor the status of your runs. The widget shows a list of two child runs, one for the head compute target and one for the worker compute target. You can click the link under **Status** to see the details of a child run, including the metrics being logged.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
### Stop the run
To stop the run, call `run.cancel()`.
```
# Uncomment line below to cancel the run
# run.cancel()
```
### Wait for completion
Wait for the run to complete before proceeding. If you want to stop the run, you may skip this and move to next section below.
**Note: The run may take anywhere from 30 minutes to 45 minutes to complete.**
```
run.wait_for_completion()
```
### Performance of the agent during training
Let's get the reward metrics for the training run and observe how the agent's rewards improved over the training iterations as it learned to win the Pong game.
Collect the episode reward metrics from the worker run's metrics.
```
# Get the reward metrics from worker run
episode_reward_mean = run.get_metrics(name='episode_reward_mean')
```
Plot the reward metrics.
```
import matplotlib.pyplot as plt
plt.plot(episode_reward_mean['episode_reward_mean'])
plt.xlabel('training_iteration')
plt.ylabel('episode_reward_mean')
plt.show()
```
We observe that over the course of training across multiple episodes, the agent learns to win the Pong game against the opponent by our target margin of at least 18 points in each 21-point episode.
**Congratulations!! You have trained your Pong agent to win a game.**
## Cleaning up
For your convenience, below you can find code snippets to clean up any resources created as part of this tutorial that you don't wish to retain.
```
# To archive the created experiment:
# exp.archive()
# To delete the compute targets:
#head_compute_target.delete()
#worker_compute_target.delete()
```
## Next
In this example, you learned how to solve distributed reinforcement learning training problems using head and worker compute targets. This was an introductory tutorial on the Reinforcement Learning in Azure Machine Learning service offering. We would love to hear your feedback to build the features you need!
# Algorithm accuracy analysis
- In order to test whether Compas scores do an accurate job of deciding whether an offender is Low, Medium or High risk, we ran a Cox Proportional Hazards model. Northpointe, the company that created COMPAS and markets it to Law Enforcement, also ran a Cox model in [their validation study](https://journals.sagepub.com/doi/abs/10.1177/0093854808326545).
- We used the counting model and removed people when they were incarcerated. Due to errors in the underlying jail data, we need to filter out 32 rows that have an end date on or before the start date. Given that there are 13,334 total rows in the data, such a small number of errors will not affect the results.
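The filtering step corresponds to keeping rows with `end > start`; a minimal pandas sketch with hypothetical values illustrates it:

```python
import pandas as pd

# Toy frame standing in for the jail data; `start`/`end` mirror the
# counting-process columns used below (values here are hypothetical).
df = pd.DataFrame({"start": [0, 5, 3], "end": [10, 2, 3]})

clean = df[df["end"] > df["start"]]  # drop rows where end <= start
print(len(df) - len(clean))  # number of rows removed
```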
## Setup
```
################# To use R in Jupyter Notebook ###############
import rpy2.ipython
%load_ext rpy2.ipython
################# To ignore warnings ##################
import warnings
warnings.filterwarnings('ignore')
################## To have multiple outputs ###################
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython.display import display
```
## Loading packages
```
%%R
if (!require("pacman")) install.packages("pacman")
pacman::p_load(
tidyverse, # tidyverse packages
conflicted, # an alternative conflict resolution strategy
ggthemes, # for more themes
patchwork, # for arranging ggplots
scales, # for rescales
survival, # for survival analysis
ggfortify, # data viz tools for statistical analysis
grid, # for adding grids
gridExtra, # for grid graphics
broom, # for modeling
reticulate, # Python engine for R markdown
purrr # for multiple models
)
# To avoid conflicts
conflict_prefer("filter", "dplyr")
conflict_prefer("select", "dplyr")
```
## Loading data
We select fields for severity of charge, number of priors, demographics, age, sex, compas scores, and whether each person was accused of a crime within two years.
- N of observations (rows): 7,214
- N of variables (columns): 53
```
%%R
cox_data <- read_csv("/home/jae/compas-analysis/data/cox-parsed.csv")
```
## Wrangling data
```
%%R
# Wrangling data
df <- cox_data %>%
filter(score_text != "N/A") %>%
filter(end > start) %>%
mutate(c_charge_degree = factor(c_charge_degree),
age_cat = factor(age_cat),
race = factor(race, levels = c("Caucasian","African-American","Hispanic","Other","Asian","Native American")),
sex = factor(sex, levels = c("Male","Female")),
score_factor = factor(score_text, levels = c("Low", "Medium", "High")))
%%R
grp <- df[!duplicated(df$id),]
```
## Descriptive analysis
```
%%R
# Set theme
theme_set(theme_base())
grp %>%
group_by(score_factor) %>%
count() %>%
ggplot(aes(x = score_factor, y = n)) +
geom_col() +
labs(x = "Score",
y = "Count",
title = "Score distribution")
%%R
df %>%
ggplot(aes(ordered(score_factor))) +
geom_bar() +
facet_wrap(~race, nrow = 2) +
labs(x = "Decile Score",
y = "Count",
title = "Defendant's Decile Score")
```
## Modeling
```
%%R
f2 <- Surv(start, end, event, type="counting") ~ race + score_factor + race * score_factor
model <- coxph(f2, data = df)
model %>%
tidy(conf.int = TRUE) %>%
mutate(term = gsub("race|score_factor","", term)) %>%
filter(term != "<chr>") %>%
ggplot(aes(x = fct_reorder(term, estimate), y = estimate, ymax = conf.high, ymin = conf.low)) +
geom_pointrange() +
coord_flip() +
labs(y = "Estimate", x = "")
```
The interaction term shows a similar disparity as the logistic regression above.
High risk white defendants are 3.61 times more likely to recidivate than low risk white defendants, while high risk black defendants are 2.99 times more likely than low risk black defendants.
```
%%R
visualize_surv <- function(input){
f <- Surv(start, end, event, type="counting") ~ score_factor
fit <- survfit(f, data = input)
fit %>%
tidy(conf.int = TRUE) %>%
mutate(strata = gsub("score_factor=","", strata)) %>%
mutate(strata = factor(strata, levels = c("High","Medium","Low"))) %>%
ggplot(aes(x = time, y = estimate, ymax = conf.high, ymin = conf.low, group = strata, col = strata)) +
geom_pointrange(alpha = 0.1) +
guides(colour = guide_legend(override.aes = list(alpha = 1))) +
ylim(c(0, 1)) +
labs(x = "Time", y = "Estimated survival rate", col = "Strata")}
%%R
visualize_surv(df) + ggtitle("Overall")
```
Black defendants do recidivate at higher rates according to race specific Kaplan Meier plots.
```
%%R
(df %>% filter(race == "Caucasian") %>% visualize_surv() + ggtitle("Caucasian")) /
(df %>% filter(race == "African-American") %>% visualize_surv() + ggtitle("African-American"))
```
In terms of underlying recidivism rates, we can look at gender specific Kaplan Meier estimates. There is a striking difference between women and men.
```
%%R
(df %>% filter(sex == "Female") %>% visualize_surv() + ggtitle("Female")) /
(df %>% filter(sex == "Male") %>% visualize_surv() + ggtitle("Male"))
```
As these plots show, the COMPAS score treats a High risk woman the same as a Medium risk man.
## Risk of Recidivism Accuracy
The above analysis shows that the Compas algorithm does overpredict African-American defendants' future recidivism, but we haven't yet explored the direction of the bias. We can discover finer differences in overprediction and underprediction by comparing Compas scores across racial lines.
```
from truth_tables import PeekyReader, Person, table, is_race, count, vtable, hightable, vhightable
from csv import DictReader
people = []
with open("/home/jae/bias-in-ml/compas/data/cox-parsed.csv") as f:
    reader = PeekyReader(DictReader(f))
    try:
        while True:
            p = Person(reader)
            if p.valid:
                people.append(p)
    except StopIteration:
        pass
pop = list(filter(lambda i: ((i.recidivist == True and i.lifetime <= 730) or
i.lifetime > 730), list(filter(lambda x: x.score_valid, people))))
recid = list(filter(lambda i: i.recidivist == True and i.lifetime <= 730, pop))
rset = set(recid)
surv = [i for i in pop if i not in rset]
```
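The two-year filtering logic above can be illustrated with a few synthetic records (the `Rec` tuple and its values below are made up for illustration; the real analysis uses the parsed COMPAS `Person` objects):

```python
from collections import namedtuple

# Hypothetical stand-in for the parsed Person objects: whether the
# defendant recidivated, and how many days they were observed for.
Rec = namedtuple("Rec", ["recidivist", "lifetime"])

people = [Rec(True, 400), Rec(False, 900), Rec(True, 800), Rec(False, 100)]

# Keep people who either recidivated within two years (730 days),
# or were observed for more than two years without the event.
pop = [p for p in people if (p.recidivist and p.lifetime <= 730) or p.lifetime > 730]
recid = [p for p in pop if p.recidivist and p.lifetime <= 730]
surv = [p for p in pop if p not in recid]

print(len(pop), len(recid), len(surv))  # 3 1 2
```

The last record is dropped entirely: it was observed for only 100 days without recidivating, so a two-year outcome cannot be determined for it.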
Define a function for a bar plot.
```
import matplotlib.pyplot as plt
def bar_plot(x, y):
    t = table(list(x), list(y))
    plt.bar(range(len(t)), list(t.values()), align='center') # Create a bar graph
    plt.xticks(range(len(t)), list(t.keys())) # Create xlabel names
bar_plot(recid, surv)
plt.title("All defendants")
plt.show()
```
- The false positive rate is higher for African Americans, at 44.85%, and lower for whites, at 23.45%.
```
is_afam = is_race("African-American")
bar_plot(filter(is_afam, recid), filter(is_afam, surv))
plt.title("Black defendants")
plt.show()
is_white = is_race("Caucasian")
bar_plot(filter(is_white, recid), filter(is_white, surv))
plt.title("White defendants")
plt.show()
```
## Risk of Violent Recidivism
Compas also offers a score that aims to measure a person's risk of violent recidivism, which has a similar overall accuracy to the Recidivism score.
```
vpeople = []
with open("/home/jae/compas-analysis/data/cox-violent-parsed.csv") as f:
    reader = PeekyReader(DictReader(f))
    try:
        while True:
            p = Person(reader)
            if p.valid:
                vpeople.append(p)
    except StopIteration:
        pass
vpop = list(filter(lambda i: ((i.violent_recidivist == True and i.lifetime <= 730) or
i.lifetime > 730), list(filter(lambda x: x.vscore_valid, vpeople))))
vrecid = list(filter(lambda i: i.violent_recidivist == True and i.lifetime <= 730, vpop))
vrset = set(vrecid)
vsurv = [i for i in vpop if i not in vrset]
bar_plot(vrecid, vsurv)
plt.title("All defendants")
plt.show()
```
Even more so for Black defendants.
```
is_afam = is_race("African-American")
bar_plot(filter(is_afam, vrecid), filter(is_afam, vsurv))
plt.title("Black defendants")
plt.show()
is_white = is_race("Caucasian")
bar_plot(filter(is_white, vrecid), filter(is_white, vsurv))
plt.title("White defendants")
plt.show()
```
# cadCAD Template: Robot and the Marbles - Part 4


## Non-determinism
Non-deterministic systems exhibit different behaviors on different runs for the same input. The order of heads and tails in a series of 3 coin tosses, for example, is non-deterministic.
Our robots and marbles system is currently modelled as a deterministic system, meaning that every time we run the simulation: none of the robots act on timestep 1; robot 1 acts on timestep 2; robot 2 acts on timestep 3; and so on.
If, however, we were to define that at every timestep each robot acts with a probability P, then we would have a non-deterministic (probabilistic) system. Let's make the following changes to our system.
* Robot 1: instead of acting once every two timesteps, there's a 50% chance it will act in any given timestep
* Robot 2: instead of acting once every three timesteps, there's a 33.33% chance it will act in any given timestep
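The probability-gated behavior can be sketched in plain Python (this is an illustration of the idea only, not the actual cadCAD policy functions defined in `config.py`):

```python
import random

def maybe_act(p_act, rng=random):
    """Return True (the robot acts this timestep) with probability p_act."""
    return rng.random() < p_act

random.seed(42)  # fixed seed so the sketch is reproducible
# Robot 1 acts with probability 0.5, robot 2 with probability 1/3
acts_1 = [maybe_act(0.5) for _ in range(10_000)]
acts_2 = [maybe_act(1 / 3) for _ in range(10_000)]
print(sum(acts_1) / len(acts_1))  # close to 0.5
print(sum(acts_2) / len(acts_2))  # close to 0.3333
```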
```
# import libraries
import pandas as pd
import numpy as np
import matplotlib
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
import config
from cadCAD import configs
import matplotlib.pyplot as plt
%matplotlib inline
exec_mode = ExecutionMode()
# Run cadCAD
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
df = pd.DataFrame(raw_result)
df.set_index(['run', 'timestep', 'substep'])
df.plot('timestep', ['box_A', 'box_B'], grid=True,
colormap = 'RdYlGn',
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
```
### Since it is random, let's run it again:
```
# Run cadCAD
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
df = pd.DataFrame(raw_result)
df.plot('timestep', ['box_A', 'box_B'], grid=True,
colormap = 'RdYlGn',
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
```
### Run 50 Monte Carlo Runs
To take advantage of cadCAD's Monte Carlo simulation features, we modify the configuration file to define the number of times we want the same simulation to be run. This is set in the `N` key of the `sim_config` dictionary.
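A minimal sketch of the relevant part of the configuration (the key names follow cadCAD's convention; the actual contents of `config2.py` may differ):

```python
# cadCAD-style simulation parameters: the N key sets the number of
# Monte Carlo runs, while T is the timestep iterator for each run.
sim_config = {
    'T': range(10),
    'N': 50,
}
print(sim_config['N'])  # 50
```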
```
import config2
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
df2 = pd.DataFrame(raw_result)
from IPython.display import display
tmp_rows = pd.options.display.max_rows
pd.options.display.max_rows = 10
display(df2.set_index(['run', 'timestep', 'substep']))
pd.options.display.max_rows = tmp_rows
```
Plotting two of those runs allows us to see the different behaviors over time.
```
df2[df2['run']==1].plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(11)),
colormap = 'RdYlGn');
df2[df2['run']==9].plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(11)),
colormap = 'RdYlGn');
```
If we plot all those runs onto a single chart, we can see every possible trajectory for the number of marbles in each box.
```
ax = None
for i in range(50):
ax = df2[df2['run']==i+1].plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+max(df2['box_A'].max(),df2['box_B'].max()))),
legend = (ax == None),
colormap = 'RdYlGn',
ax = ax
)
```
For some analyses, it might make sense to look at the data in aggregate. Take the median for example:
```
dfmc_median = df2.groupby(['timestep', 'substep']).median().reset_index()
dfmc_median.plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(dfmc_median['timestep'].drop_duplicates()),
yticks=list(range(int(1+max(dfmc_median['box_A'].max(),dfmc_median['box_B'].max())))),
colormap = 'RdYlGn'
)
```
Or look at edge cases
```
max_final_A = df2[df2['timestep']==df2['timestep'].max()]['box_A'].max()
# max_final_A
slow_runs = df2[(df2['timestep']==df2['timestep'].max()) &
(df2['box_A']==max_final_A)]['run']
slow_runs = list(slow_runs)
slow_runs
ax = None
for i in slow_runs:
ax = df2[df2['run']==i].plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(df2['timestep'].drop_duplicates()),
yticks=list(range(1+max(df2['box_A'].max(),df2['box_B'].max()))),
legend = (ax == None),
colormap = 'RdYlGn',
ax = ax
)
```
```
#imports and helper functions
import GPy
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def plot_hist(bins, bin_ranges, errs, alpha):
    """Plots a histogram"""
    assert len(bins) == (len(bin_ranges) - 1)
    yvals = bins / (np.diff(bin_ranges)[:, None])
    evals = 1.96 * errs / (np.diff(bin_ranges)[:, None])  # the error is in the area, which is 'spread'
    plt.bar(bin_ranges[0:-1], yvals, np.diff(bin_ranges), lw=1, alpha=alpha, color='#ccccff')
    midpoints = .5 * (bin_ranges[0:-1] + bin_ranges[1:])
    plt.errorbar(midpoints, yvals, evals, elinewidth=1, capthick=1, color='black', alpha=alpha, lw=0)

def histogram_data(age, height, weight, age_step, height_step):
    inputs = []
    simple_inputs = []
    outputs = []
    for a in np.arange(1.5, 8.5, age_step):
        for h in np.arange(70, 125, height_step):
            weights = weight[(age >= a) & (age < a + age_step) & (height >= h) & (height < h + height_step)]
            if len(weights) > 0:
                inputs.append([a, a + age_step, h, h + height_step])
                simple_inputs.append([a + (age_step / 2.0), h + (height_step / 2.0)])
                outputs.append(np.mean(weights))
    inputs = np.array(inputs)
    simple_inputs = np.array(simple_inputs)
    original_outputs = np.array(outputs)
    return inputs, simple_inputs, original_outputs
```
# Integral Kernel
I've called the kernel the 'integral kernel' as we use it when we have observations of the integrals of a function and want to estimate the function itself.
Examples include:
- Knowing how far a robot has travelled after 2, 4, 6 and 8 seconds, but wanting an estimate of its speed after 5 seconds (use the `Integral` kernel)
- Wanting to know an estimate of the density of people aged 23, when we only have the total count for binned age ranges (use the `Integral_Limits` kernel)
- Wanting to estimate the weight distribution for a set of children, for a child of 5 years with a height of 90cm, given data binned into a 3d table (using the `Multidimensional_Integral_Limits` kernel).
It should be noted that all three examples can be solved with the `Multidimensional_Integral_Limits` kernel. It might be worth, in the future, removing the other two kernels, to simplify the codebase.
## Speed of robot
This is the most simple example. For this example we use the `Integral` kernel.
We don't know this, but the robot is accelerating at a constant $\mathtt{1\; ms}^{-2}$, from stationary at $t=0$. We do know that after two seconds it had travelled two metres, after four seconds it had travelled eight metres, and after six and eight seconds it had travelled 18 and 32 metres respectively. The question is: how fast was it going after five seconds?
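Before fitting the model, the setup can be checked by hand: under constant acceleration $a = 1\,\mathrm{ms}^{-2}$ from rest, distance is $d(t) = \frac{1}{2}at^2$ and speed is $v(t) = at$, so the true answer is $5\,\mathrm{m/s}$:

```python
a = 1.0  # constant acceleration in m/s^2, starting from rest

def distance(t):
    """Distance travelled after t seconds: d = (1/2) a t^2."""
    return 0.5 * a * t ** 2

def speed(t):
    """Speed after t seconds: v = a t."""
    return a * t

print([distance(t) for t in (2, 4, 6, 8)])  # [2.0, 8.0, 18.0, 32.0] -- matches the observations
print(speed(5))  # 5.0
```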
```
#the observations
times = np.array([2.0,4.0, 6.0, 8.0])[:,None]
distances = np.array([2.0,8.0,18.0,32.0])[:,None]
#model configuration
kernel = GPy.kern.Integral(input_dim=1,variances=10.0)
m = GPy.models.GPRegression(times,distances,kernel)
m.optimize()
#m.plot_f()
#prediction for after five seconds
(speed, var) = m.predict_noiseless(np.array([[5.0]]))
print("After 5 seconds the robot was travelling at %0.2f m/s" % speed);
```
## Ages of people living in Kelham island: Probability density of being 23
In this example we are given some binned histogram data: We know that between the ages of zero and ten there are 19 children, between 11 and 20 there are 52, etc.
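A crude baseline for the density is simply count divided by bin width, so the 19 children in the 0-10 bin give 1.9 people per year of age; the Gaussian-process model below smooths this into a continuous estimate with uncertainty. A quick check of the baseline:

```python
import numpy as np

bins = np.array([19, 52, 114, 45, 22, 20])
bin_ranges = np.array([0, 10, 20, 30, 40, 60, 100])

# Density = count / bin width (people per year of age)
widths = np.diff(bin_ranges)
density = bins / widths
print(density)
```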
```
import numpy as np
bins = np.array([19,52,114,45,22,20])[None,:]
bin_ranges = np.array([0,10,20,30,40,60,100])
#The domain of each bin is set by pairs of values in the first two dimensions (see output).
#These could overlap!
X=np.vstack([bin_ranges[1:],bin_ranges[0:-1]]).T
print(X)
Y = bins.T
kernel = GPy.kern.Integral_Limits(input_dim=2, variances=0.1, lengthscale=10.0)
m = GPy.models.GPRegression(X,Y,kernel)
m.Gaussian_noise=0.001
m.optimize()
Xtest = np.arange(0,100,1.0) #for plotting
Xpred = np.array([Xtest,np.zeros(len(Xtest))])
Ypred,YpredCov = m.predict_noiseless(Xpred.T)
SE = np.sqrt(YpredCov)
plot_hist(bins.T,bin_ranges,0,alpha=0.2)
plt.plot(Xtest,Ypred,'r-',label='Mean')
plt.plot(Xtest,Ypred+SE*1.96,'r:',label='95% CI')
plt.plot(Xtest,Ypred-SE*1.96,'r:')
plt.title('Population density of people of different ages')
plt.xlabel('Age')
plt.ylabel('Density (Frequency)')
plt.ylim([0,15])
```
## Predicted range of weights of a child
I'm quickly generating some example data to illustrate the integral kernel over multiple dimensions.
```
#generate 100 simulated children heights, weights and ages.
Nchildren = 100
ages = np.random.random(Nchildren)*3.0+3.0 #ages 3 to 6
heights = 70 + ages * 7.0
weights = 9.0 + (heights-70)*0.2
weights += np.random.normal(0,0.3,weights.shape)
heights += np.random.normal(0,3,weights.shape)
ages += np.random.normal(0,0.3,weights.shape)
#group into a histogram (this is often the sort of data we'll actually have)
spacing_age = 1.0
spacing_height = 5.0
area = spacing_age * spacing_height
inputs, simple_inputs, outputs = histogram_data(ages,heights,weights,spacing_age,spacing_height)
print(" Age Range Height Range Average Weight (rounded)")
np.hstack([inputs,np.round(outputs[:,None])])
```
We now have a set of binned observations: an average weight for each age-height bin.
```
outputmean = np.mean(outputs)
outputs -= outputmean
#we have twice as many dimensions as the actual data as each
#pair of dimensions is used to specify one dimension in the domain of each bin.
#the input matrix above is used, so dimension 0 and 1 specify the start
#and end of each bin in the 'age' direction. etc...
kernel = GPy.kern.Multidimensional_Integral_Limits(input_dim=4, variances=1.0, lengthscale=[8.0,4.0])
m = GPy.models.GPRegression(1.0*inputs,1.0*area*outputs[:,None],kernel)
m.optimize()
a = 4.0
h = 100.0
Xpred = []
test_heights = range(70,140)
for h in test_heights:
Xpred.append([a,0,h,0])
Xpred = np.array(Xpred)
weight, err = m.predict_noiseless(Xpred)
plt.plot(test_heights,weight+outputmean)
plt.plot(test_heights,weight+outputmean+np.sqrt(err),':')
plt.plot(test_heights,weight+outputmean-np.sqrt(err),':')
plt.xlabel('Height / cm')
plt.ylabel('Weight / kg')
plt.title('Results from Children Age, Height, Weight simulation')
```
The above curve is based on the histogram 'binned' data.
# Solving the Traveling Salesperson Problem with Azure Quantum QIO
Hello and welcome! In this notebook we will walk you through how you can solve the traveling salesperson problem (also known as the traveling salesman problem) with the Azure Quantum quantum-inspired optimization (QIO) service.
## Introduction
The traveling salesperson problem is a well-known optimization problem in which the aim is to find a (near) optimal route through a network of nodes. As you will see later on, it is not a straightforward problem to solve as the complexity (difficulty) grows exponentially with the number of nodes. Additionally, due to the rugged (non-convex) optimization space, it is difficult or even impossible to find an optimal route (global minimum/optimal solution) through a large network. Common solvers for these rugged problems are based on searches, which you will implement in this tutorial with Azure QIO!
Imagine you have to calculate an optimal route for the salesperson over $N$ nodes (addresses). Your manager wants you to minimize the traveling cost (time/distance/money/etc.) such that the salesperson is more profitable. There are a number of constraints which have to be satisfied by the salesperson:
1. The salesperson is required to visit each node.
2. The salesperson may visit each node only once!
3. The salesperson starts and finishes in the starting node (headquarters).
For more information regarding the traveling salesperson problem, check out these links:
- [Introduction to binary optimization - Azure Quantum docs](https://docs.microsoft.com/en-us/azure/quantum/optimization-binary-optimization)
- [Traveling salesperson problem - Wikipedia](https://en.wikipedia.org/wiki/Travelling_salesperson_problem)
> Please note this sample is intended to demonstrate how to formulate the cost function for a well-understood problem mathematically and then map it to a binary QUBO/PUBO format. The Traveling Salesperson Problem is not a good example of a problem that scales well in this format, as detailed in [this paper](https://arxiv.org/abs/1702.06248).
## Setup
First, you need to import some dependencies and connect to your Azure Quantum Workspace.
> If you do not have an Azure Quantum workspace, or an Azure subscription, please take a look at the [Get started with Azure Quantum](https://docs.microsoft.com/learn/modules/get-started-azure-quantum/) Learn module, or alternatively the [Azure Quantum docs site](https://docs.microsoft.com/azure/quantum/how-to-create-quantum-workspaces-with-the-azure-portal).
```
# Import dependencies
import numpy as np
import os
import time
import math
import requests
import json
import datetime
from azure.quantum.optimization import Problem, ProblemType, Term, HardwarePlatform, Solver
from azure.quantum.optimization import SimulatedAnnealing, ParallelTempering, HardwarePlatform, Tabu, QuantumMonteCarlo
from typing import List
# This allows you to connect to the Workspace you've previously deployed in Azure.
# Be sure to fill in the settings below which can be retrieved by running 'az quantum workspace show' in the terminal.
from azure.quantum import Workspace
# Copy the settings for your workspace below
workspace = Workspace (
subscription_id = "",
resource_group = "",
name = "",
location = ""
)
```
## Defining a cost function: Minimizing the travel cost
The generated route should minimize the travel cost for the salesperson. It is time to define this mathematically. You will first need to define the travel costs between the nodes and have a suitable mapping for the location of the salesperson.
### 1. Defining the travel cost matrix
Consider a single trip for the salesperson, from one node to another node. They start at node $i$ and travel to node $j$, which requires $c(i,j)$ in travel costs. Here, $i$ denotes the origin node and $j$ denotes the destination node. To keep matters simple for now, both $i$ and $j$ can be any node in the set of $N$ nodes.
$$ \text{The origin node } (\text{node } i) \text{ with } i \in \{0,N-1\}.$$
$$ \text{The destination node } (\text{node } j) \text{ with } j \in \{0,N-1\}.$$
$$ \text{Time traveling from } \text{node } i \text{ to } \text{node } j \text{ is } c_{i,j}.$$
As you may have already noticed, the travel cost between two nodes can be written out for every $i$ and $j$, which results in a travel cost matrix $C$ (linear algebra), shown below:
$$C = \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix}. $$
Here, we define the rows to be origin nodes and the columns to be destination nodes.
For example, traveling from node 0 to node 1 is simply described by:
$$ C(0,1) = c_{0,1}. $$
The unit for the travel cost is arbitrary: it can be time, distance, money, or a combination of these and/or other factors.
### 2. Defining the location vectors
Now that the travel cost between two nodes has been formulated, a representation of the origin and destination nodes of the salesperson must be defined to specify which element of the matrix gives the associated travel cost. Remember that this is still for a single trip!
This is the same as saying "we need a way to select a row-column pair in the cost matrix". This can be done by multiplying the cost matrix with a vector from the left, and a vector from the right! The left vector will specify the origin node and the right vector the destination node. For brevity they'll be named the origin vector (left) and the destination vector (right).
Consider the example where you tell the salesperson to travel from node 0 to node 1:
$$ \text{Travel cost node 0 to node 1 }= \begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} = c_{0,1}.$$
> Please note that the salesperson can only visit one node at a time (no magic allowed!) and that there is only one salesperson, thus the sum of the elements in each of the origin and destination vectors must equal 1!
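The one-hot selection above can be checked numerically: a one-hot row vector for the origin and a one-hot column vector for the destination pick out exactly $c_{i,j}$ (the 3-node cost matrix below is arbitrary, for illustration only):

```python
import numpy as np

# Arbitrary 3-node cost matrix for illustration
C = np.array([[1, 4, 7],
              [3, 3, 3],
              [2, 5, 2]])

origin = np.zeros(3)
origin[0] = 1       # salesperson starts at node 0
destination = np.zeros(3)
destination[1] = 1  # and travels to node 1

cost = origin @ C @ destination
print(cost)  # 4.0, i.e. C[0, 1]
```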
Fantastic, you now know how to express the travel cost for a single trip. However, it is necessary to express these ideas in a mathematical format that the solver understands. In the previous example the trip was hard-coded from node 0 to node 1; let's generalize this from any origin node to any destination node:
$$ x_k \in \{0,1\} \text{ for } k \in \{0,2N-1\}, $$
$$ \text{Travel cost for single trip }= \begin{bmatrix} x_0 & x_1 & \dots & x_{N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N} \\ x_{N+1}\\ \vdots \\ x_{2N-1} \end{bmatrix}. $$
The solver will determine which $x_k$ are given a value of 1 or 0 (this is the binary variable you are optimizing). If the value is 1, it means the salesperson is travelling between the corresponding origin and destination nodes. Correspondingly, if the value is 0 it means the salesperson is not originating from or traveling to that location.
> You may be wondering why the destination vector is indexed from $N$ to $2N-1$ (separate variables). The reason is that otherwise the solver would consider the left and right vectors equal. As a consequence, there would be an origin vector on both sides of the cost matrix, meaning the salesperson would remain at the origin node, which would count as visiting the same node more than once and therefore violate the constraints set out at the start.
To summarize: for the salesperson to be at one node at a time, the sum of the vector elements for both the origin and destination vectors has to be 1.
$$ \text{Sum of the origin vector elements: }\sum_{k = 0}^{N-1} x_k = 1, $$
$$ \text{Sum of the destination vector elements: }\sum_{k = N}^{2N-1} x_k = 1.$$
Later in this notebook, constraints will be designed to disable magic/superposition for the salesperson.
Now that the basic mathematical formulations are covered, the scope can be expanded. Let's take a look at how two trips can be modeled!
### 3. Defining the travel costs for a route
To derive the cost function for a route through a network, you'll need a way to describe the 'total travel cost'. As you might expect, the total cost of a route through the network is the sum of the travel costs between the nodes (sum of the trips). Say you have a route ($R$) from node 1 to node 3 to node 2. The total cost of the route would then be:
$$ \text{Cost of route: } R_{1-3-2} = c_{1,3} + c_{3,2}. $$
Note that for the second trip, the origin node is the same as the destination node of the previous trip. Knowing that the last destination equals the new origin is a useful property when reducing the number of variables the solver has to optimize for. Therefore these vectors can also be called the 'location' vectors.
Recall that the costs can be expressed with linear algebra. Then the total cost of two trips is:
$$ \text{Cost of route } = \begin{bmatrix} x_0 & x_1 & \dots & x_{N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N} \\ x_{N+1}\\ \vdots \\ x_{2N-1} \end{bmatrix} + \begin{bmatrix} x_{N} & x_{N+1} & \dots & x_{2N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{2N} \\ x_{2N+1}\\ \vdots \\ x_{3N-1} \end{bmatrix}.$$
Generalizing the small example to a route in which the salesperson visits all $N$ nodes and returns back to the starting location gives
$$\text{Travel cost of route } = \sum_{k=0}^{N-1} \left( \begin{bmatrix} x_{Nk} & x_{Nk+1} & \dots & x_{Nk+N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N(k+1)} \\ x_{N(k+1)+1}\\ \vdots \\ x_{N(k+1)+N-1} \end{bmatrix} \right),$$
which can equivalently be written as:
$$\text{Travel cost of route} = \sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right).$$
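As a sanity check (outside the solver), the triple-sum form can be compared against directly summing the legs of a fixed route; both give the same total (the small cost matrix and route below are arbitrary):

```python
import numpy as np

# Arbitrary 3-node cost matrix (diagonal unused on a valid route)
C = np.array([[0, 4, 7],
              [3, 0, 3],
              [2, 5, 0]])
N = 3
route = [0, 2, 1, 0]  # visit every node once and return to the start

# Encode the route as one-hot location vectors: x_{Nk + node}
x = np.zeros(N * (N + 1))
for k, node in enumerate(route):
    x[N * k + node] = 1

# Triple-sum form of the cost function
triple = sum(x[N * k + i] * x[N * (k + 1) + j] * C[i, j]
             for k in range(N) for i in range(N) for j in range(N))

# Direct sum over the legs of the route
direct = sum(C[route[k], route[k + 1]] for k in range(N))

print(triple, direct)  # both equal 15
```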
Fantastic! A cost function to optimize for the salesperson's route has been found! Because you want to minimize (denoted by the 'min') the total travel cost with respect to the variables $x_k$ (given below the 'min'), and make mathematicians happy, you'll want to slightly adjust the model:
$$\text{Travel cost of route} := \underset{x_0, x_1,\dots,x_{N^2+N-1}}{min}\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right).$$
Time to write this out in code.
### 4. Coding the cost function
For the solver to find a suitable route, you'll need to specify how it calculates the travel cost for that route. The solver requires you to define a cost term for each possible trip-origin-destination combination given by the variables $k,i,j$, respectively. As described by the cost function, this weighting term is simply the $c(i,j)$ element of the cost matrix. The solver will optimize for the $x$ variables of the location vectors.
```
### Define variables
# The number of nodes
NumNodes = 5
# Max cost between nodes
maxCost = 10
# Node names, to interpret the solution later on
NodeName = {0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K',
11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T',
20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z'}
# Cost to travel between nodes -- note this matrix is not symmetric (traveling A->B is not same as B->A!)
CostMatrix = np.array([[1, 4, 7, 4, 3], [3, 3, 3, 1, 2], [2, 5, 2, 3, 1], [7, 8, 1, 3, 5], [3, 2, 1, 9, 8]]) # If you want to rerun with the same matrix
#CostMatrix = np.random.randint(maxCost, size=(NumNodes,NumNodes)) # If you want to run with a new cost matrix
############################################################################################
##### Define the optimization problem for the QIO Solver
def OptProblem(CostMatrix) -> Problem:
    # 'terms' will contain the cost function terms for the trips
    terms = []
    ############################################################################################
    ##### Cost of traveling between nodes
    for k in range(0, len(CostMatrix)):             # For each trip
        for i in range(0, len(CostMatrix)):         # For each origin node
            for j in range(0, len(CostMatrix)):     # For each destination node
                # Assign a weight to every possible trip from node i to node j for each trip
                terms.append(
                    Term(
                        c = CostMatrix.item((i,j)),
                        # Plus one to denote dependence on next location
                        indices = [i + (len(CostMatrix) * k), j + (len(CostMatrix) * (k + 1))]
                    )
                )
                ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -------------------------------------------------------------------------------------------------
                #print(f'{i + (len(CostMatrix) * k)}, {j + (len(CostMatrix) * (k + 1))}') # Combinations of origin and destination nodes
                #print(f'For x_{i + (len(CostMatrix) * k)}, to x_{j + (len(CostMatrix) * (k + 1))} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format for the solver (as formulated in the cost function)
                #print(f'For node {i}, to node {j} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format that is easier for a human to read
    return Problem(name="Traveling Salesperson", problem_type=ProblemType.pubo, terms=terms)

OptimizationProblem = OptProblem(CostMatrix)
```
## Defining optimization constraints: Penalizing invalid routes
The modeled cost function will allow you to find the cheapest route for the salesperson, however it does not include any information on invalid routes! It is now time to integrate constraints/penalties for routes which the salesperson should not travel.
### Constraint 1: The salesperson may not be at more than one node at a time (no magic)
The salesperson can only be at one node at a time. In the defined cost function this constraint is not enforced, and thus the solver can return origin and destination vectors for which the sum is larger than one. Such vectors represent invalid solutions.
To avoid these solutions, the invalid solution has to be penalized. This is done by modifying the cost function. You can imagine the penalization of the cost function as reshaping/redesigning the rugged optimization landscape such that for invalid solutions the solver cannot find (local) minima. Invalid solutions should have such high cost that the solver is very unlikely to select them, but not so high as to block the solver from moving around that area of the cost function to find other, valid solutions.
To ensure that the salesperson is only ever at a single location, before or after a trip, we must require that only one element in the origin or destination vector is equal to 1, with the rest being 0. One way of designing the constraint would be to look at the sum of elements of each location vector (the origin and destination vectors):
$$
\begin{align}
\text{Location vector 0 (HQ)} &: \text{ } \hspace{0.5cm} x_0 + x_1 + \dots + x_{N-1} = 1, \\
\text{Location vector 1} &: \text{ } \hspace{0.5cm} x_{N} + x_{N+1} + \dots + x_{2N-1} = 1, \\
&\vdots \\
\text{Location vector N (HQ)} &: \text{ } \hspace{0.5cm} x_{N^2} + x_{N^2+1} + \dots + x_{N^2 + N-1} = 1.
\end{align}
$$
Enforcing the constraint over all trips would then yield ($N+1$ because the salesperson returns to starting node):
$$ \text{For all locations: } \hspace{0.5cm} x_0 + x_1 + \dots + x_{N^2 + N-1} = N+1. $$
The equation above is a valid way to model the constraint. However, there is a downside: it only constrains the total sum over all location vectors, so there is no penalty for being in two locations at once! To see this, take the following example:
$$
\text{ If the first } N + 1 \text{ values are all 1:}\\
x_0=1, x_{1}=1, \dots, x_{N}=1, \\
\text{ } \\
\text{ and all other } x \text{ values are 0:}\\
x_{N+1}=0, x_{N+2}=0, \dots, x_{N^2+N-1}=0,
$$
the constraint is satisfied but the salesperson is still at all nodes at once. Thus the derived equation is not specific enough to model the location constraint. Let's rethink.
Consider three nodes (a length-3 location vector). If the salesperson is at node 1, they cannot be at node 0 or node 2. If the salesperson is at node 0, they cannot be at node 1 or node 2. Instead of using a sum to express this, an equally valid way would be to use products. The product of elements in a location vector must always be zero, regardless of where the salesperson resides, because only one of the three $x$ values can take value 1 (denoting the salesperson being at that location). Therefore the constraint can also be expressed as:
$$ x_0 \cdot x_1 = 0,$$
$$ x_0 \cdot x_2 = 0,$$
$$ x_1 \cdot x_2 = 0.$$
In this format the constraint is much more specific and stricter for the solver. As a result, the solver will provide solutions that do not violate this constraint. Note that we do not want to count combinations more than once, as this would lead to imbalances (asymmetries) in the cost function. We therefore exclude the reverse combinations:
$$ x_{1}\cdot x_{0}, $$
$$ x_{2}\cdot x_{0}, $$
$$ x_{2}\cdot x_{1}. $$
Generalizing the location constraint for a salesperson who passes through all nodes and returns back to the starting node ($N+1$ nodes in total, iterating over $l$):
$$ \sum_{l=0}^{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} x_{(i+Nl)} \cdot x_{(j+Nl)} = 0 \text{ with } \{ i,j | i<j \} $$
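The pairwise-product penalty can be checked by hand on a single length-3 location vector (a standalone illustration, not the solver code below):

```python
from itertools import combinations

def location_penalty(vec):
    """Sum of pairwise products x_i * x_j (i < j) over one location vector."""
    return sum(vec[i] * vec[j] for i, j in combinations(range(len(vec)), 2))

print(location_penalty([0, 1, 0]))  # 0 -> valid: exactly one node occupied
print(location_penalty([1, 1, 0]))  # 1 -> invalid: at two nodes at once
```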
Great! Now we have an accurate description of this constraint, let's add it to the code.
```
### Define variables
# The number of nodes
NumNodes = 5
# Max cost between nodes
maxCost = 10
# Node names, to interpret the solution later on
NodeName = {0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K',
11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T',
20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z'}
# Cost to travel between nodes -- note this matrix is not symmetric (traveling A->B is not same as B->A!)
CostMatrix = np.array([[1, 4, 7, 4, 3], [3, 3, 3, 1, 2], [2, 5, 2, 3, 1], [7, 8, 1, 3, 5], [3, 2, 1, 9, 8]]) # If you want to rerun with the same matrix
#CostMatrix = np.random.randint(maxCost, size=(NumNodes,NumNodes)) # If you want to run with a new cost matrix
############################################################################################
##### Define the optimization problem for the Azure Quantum Solver
def OptProblem(CostMatrix) -> Problem:
    # 'terms' will contain the cost function terms for the trips
    terms = []

    ############################################################################################
    ##### Cost of traveling between nodes
    for k in range(0, len(CostMatrix)):              # For each trip
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                # Assign a weight to every possible trip from node i to node j for each trip
                terms.append(
                    Term(
                        c = CostMatrix.item((i, j)),  # Element of the cost matrix
                        indices = [i + (len(CostMatrix) * k), j + (len(CostMatrix) * (k + 1))]  # +1 to denote dependence on the next location
                    )
                )
                ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                #print(f'{i + (len(CostMatrix) * k)}, {j + (len(CostMatrix) * (k + 1))}')  # Combinations between the origin and destination nodes
                #print(f'For x_{i + (len(CostMatrix) * k)}, to x_{j + (len(CostMatrix) * (k + 1))} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format for the solver (as formulated in the cost function)
                #print(f'For node_{i}, to node_{j} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Location constraint - the salesperson can only be at 1 node at a time.
    for l in range(0, len(CostMatrix) + 1):          # Each location vector over the route (+1 because of returning to the starting node)
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                if i < j:                            # i < j so each pair is penalized only once (this also excludes i == j)
                    terms.append(
                        Term(
                            c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                            indices = [i + (len(CostMatrix) * l), j + (len(CostMatrix) * l)]
                        )
                    )
                    ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                    #print(f'{i + (len(CostMatrix) * l)}, {j + (len(CostMatrix) * l)}')
                    #print(f'Location constraint 1: x_{i + (len(CostMatrix) * l)} - x_{j + (len(CostMatrix) * l)} (location vector {l}) assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)

    return Problem(name="Traveling Salesperson", problem_type=ProblemType.pubo, terms=terms)

OptimizationProblem = OptProblem(CostMatrix)
```
### Constraint 2: The salesperson must be somewhere, they can't disappear
Due to the first constraint, the salesperson is penalized for being at multiple nodes at once. But due to the formulation of that constraint, it is possible for the solver to set all $x_k$ in a location vector to zero, meaning that the salesperson could be 'nowhere' after some trip.
> By looking at the code and the previous sections, you can conclude that the minimal cost is obtained by setting all $x_k$ to 0.
To keep the salesperson from disappearing, it is necessary to reward them for being somewhere. Rewards are assigned by adding negatively weighted terms to the cost function, which decrease the cost function value for valid solutions of the optimization problem (remember we are minimizing the cost, so reducing it makes a solution more likely to be chosen). To demonstrate this point, take the following optimization problem:
$$ \min_{x_0, x_1, x_2} \; x_0 + x_1 - x_2, $$
$$ \text{with } x_0, x_1, x_2 \in \{0,1\}. $$
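This toy problem is small enough to verify by brute force (a standalone sketch, not part of the tutorial code):

```python
from itertools import product

# Evaluate f(x) = x0 + x1 - x2 over all 8 binary assignments
values = {x: x[0] + x[1] - x[2] for x in product([0, 1], repeat=3)}
best = min(values, key=values.get)
print(best, values[best])  # → (0, 0, 1) -1
```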
The minimum value for this example is attained at $x_0 = 0$, $x_1 = 0$, $x_2 = 1$, with an optimal function value of $-1$. Here, the negatively weighted third term encourages $x_2$ to take the value 1 rather than 0, unlike $x_0$ and $x_1$. With this idea in mind, you can stop the salesperson from disappearing! If the salesperson has to visit $N+1$ nodes over a route, then $N+1$ of the $x_k$ variables must be assigned the value 1. Written as an equation:
$$ \sum_{k=0}^{N(N+1)-1} x_k = N+1,$$
with $N(N+1)$ equal to the number of variables used to represent the origin and destination nodes over all trips. You could split the equation up for each location vector separately, but since the constraint is linear in the $x_k$, the resulting cost function is the same. To assign a reward to the salesperson for being at a node, the $x_k$ terms are moved to the right-hand side of the equation:
$$ 0 = (N+1) -\left( \sum_{k=0}^{N(N+1)-1} x_k \right).$$
There is no guarantee which particular $x_k$ the solver will assign the value 1 in this equation. However, the previous constraint already enforces that the salesperson is at a maximum of one node before/after each trip. Therefore, with the cost function weights properly tuned, it can be assumed that the salesperson will not be at two or more nodes at once.
In other words, the previous constraint penalizes the salesperson for being at more than one node at once, while this constraint rewards them for being at as many nodes as possible before/after any trip (including multiple nodes at once). The weights of the two constraints effectively determine how well each is satisfied; a balance between them needs to be found such that both are adhered to.
Incorporating this constraint in the code is done by assigning a negative term to each $x_k$. The constant $N+1$ can be ignored, since it only shifts the optimization landscape and has no effect on the solutions of the minimization problem.
```
### Define variables
# The number of nodes
NumNodes = 5
# Max cost between nodes
maxCost = 10
# Node names, to interpret the solution later on
NodeName = {0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K',
11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T',
20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z'}
# Cost to travel between nodes -- note this matrix is not symmetric (traveling A->B is not same as B->A!)
CostMatrix = np.array([[1, 4, 7, 4, 3], [3, 3, 3, 1, 2], [2, 5, 2, 3, 1], [7, 8, 1, 3, 5], [3, 2, 1, 9, 8]]) # If you want to rerun with the same matrix
#CostMatrix = np.random.randint(maxCost, size=(NumNodes,NumNodes)) # If you want to run with a new cost matrix
############################################################################################
##### Define the optimization problem for the Quantum Inspired Solver
def OptProblem(CostMatrix) -> Problem:
    # 'terms' will contain the cost function terms for the trips
    terms = []

    ############################################################################################
    ##### Cost of traveling between nodes
    for k in range(0, len(CostMatrix)):              # For each trip
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                # Assign a weight to every possible trip from node i to node j for each trip
                terms.append(
                    Term(
                        c = CostMatrix.item((i, j)),  # Element of the cost matrix
                        indices = [i + (len(CostMatrix) * k), j + (len(CostMatrix) * (k + 1))]  # +1 to denote dependence on the next location
                    )
                )
                ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                #print(f'{i + (len(CostMatrix) * k)}, {j + (len(CostMatrix) * (k + 1))}')  # Combinations between the origin and destination nodes
                #print(f'For x_{i + (len(CostMatrix) * k)}, to x_{j + (len(CostMatrix) * (k + 1))} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format for the solver (as formulated in the cost function)
                #print(f'For node_{i}, to node_{j} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Location constraint - the salesperson can only be at 1 node at a time.
    for l in range(0, len(CostMatrix) + 1):          # Each location vector over the route (+1 because of returning to the starting node)
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                if i < j:                            # i < j so each pair is penalized only once (this also excludes i == j)
                    terms.append(
                        Term(
                            c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                            indices = [i + (len(CostMatrix) * l), j + (len(CostMatrix) * l)]
                        )
                    )
                    ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                    #print(f'{i + (len(CostMatrix) * l)}, {j + (len(CostMatrix) * l)}')
                    #print(f'Location constraint 1: x_{i + (len(CostMatrix) * l)} - x_{j + (len(CostMatrix) * l)} (location vector {l}) assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)

    ############################################################################################
    ##### Constraint: Location constraint - encourage the salesperson to be 'somewhere', otherwise all x_k might be 0 (for example)
    for v in range(0, len(CostMatrix) + len(CostMatrix) * len(CostMatrix)):  # v runs over every variable (a node before/after any trip)
        terms.append(
            Term(
                c = int(-1.65 * np.max(CostMatrix)),  # Negative weight (a reward) dependent on the maximum element of the cost matrix
                indices = [v]
            )
        )
        ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
        #print(v)
        #print(f'Location constraint 2: x_{v} assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)
        #print(f'Location constraint 2: node_{v % NumNodes} after {np.floor(v / NumNodes)} trips assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format that is easier to read for a human

    return Problem(name="Traveling Salesperson", problem_type=ProblemType.pubo, terms=terms)

OptimizationProblem = OptProblem(CostMatrix)
```
### Constraint 3: Same node constraint - can't travel to the same node more than once (except the starting node)
The salesperson may only visit each node once, meaning that routes containing revisits of a node (other than the starting node) must be penalized. As an example, consider node 3 for each trip (these $x$ describe the same node):
$$ \text{Node 3: } x_3, x_{3+N}, x_{3+2N}, \dots $$
If the salesperson has been at node 3 after any trip, they may not pass through that node again. This means that if $x_3$ is 1, for example, then $x_{3+N}$ and $x_{3+2N}$ have to be 0. As with the location constraint, taking the product of these variables is one way of designing the constraint. Since the product of variables representing the same node has to be zero, the following can be derived:
$$ x_3 \cdot x_{3+N} \cdot x_{3+2N} = 0. $$
Even though this equation seems to represent the constraint correctly, it is not stringent enough: if $x_3$ and $x_{3+N}$ are both 1 and $x_{3+2N}$ is 0, the constraint is still satisfied even though the salesperson has been at the same node twice. Therefore, similarly to the location constraint, more specificity is required. Luckily, the constraint can be split into smaller pairwise products:
$$ x_3 \cdot x_{3+N} =0$$
$$ x_3 \cdot x_{3+2N} = 0$$
$$ x_{3+N} \cdot x_{3+2N} = 0$$
Continuing this for all N trips and N nodes yields:
$$ \sum_{p=0}^{N^2+N-1} \;\; \sum_{\substack{f=p+N \\ \text{step } N}}^{N^2-1} x_p \cdot x_f = 0, $$
in which the first sum selects a reference variable $x_p$, and the second sum selects the variable $x_f$ that represents the same node after later trips (the index increases in steps of $N$).
This constraint penalizes routes in which nodes are visited more than once. It does not cover the last trip back to the starting node (headquarters): by then the salesperson has already visited every node, so any remaining travel would necessarily violate the constraint. Including those terms would only make the cost function larger without adding value to the optimization problem.
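To see which variable pairs the double sum generates, here is a minimal standalone sketch (illustrative, with $N = 3$, so variables $x_0, \dots, x_{11}$ span four location vectors) that lists every penalized pair:

```python
N = 3  # number of nodes (small illustrative example)

# (p, f) pairs: the same node revisited after a later trip (index step N),
# excluding the final return trip (f stays below N*N)
pairs = [(p, f)
         for p in range(N + N * N)
         for f in range(p + N, N * N, N)]
print(pairs)
# e.g. node 0 appears as x_0, x_3, x_6 -> pairs (0, 3), (0, 6), (3, 6)
```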
```
### Define variables
# The number of nodes
NumNodes = 5
# Max cost between nodes
maxCost = 10
# Node names, to interpret the solution later on
NodeName = {0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K',
11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T',
20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z'}
# Cost to travel between nodes -- note this matrix is not symmetric (traveling A->B is not same as B->A!)
CostMatrix = np.array([[1, 4, 7, 4, 3], [3, 3, 3, 1, 2], [2, 5, 2, 3, 1], [7, 8, 1, 3, 5], [3, 2, 1, 9, 8]]) # If you want to rerun with the same matrix
#CostMatrix = np.random.randint(maxCost, size=(NumNodes,NumNodes)) # If you want to run with a new cost matrix
############################################################################################
##### Define the optimization problem for the Quantum Inspired Solver
def OptProblem(CostMatrix) -> Problem:
    # 'terms' will contain the cost function terms for the trips
    terms = []

    ############################################################################################
    ##### Cost of traveling between nodes
    for k in range(0, len(CostMatrix)):              # For each trip
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                # Assign a weight to every possible trip from node i to node j for each trip
                terms.append(
                    Term(
                        c = CostMatrix.item((i, j)),  # Element of the cost matrix
                        indices = [i + (len(CostMatrix) * k), j + (len(CostMatrix) * (k + 1))]  # +1 to denote dependence on the next location
                    )
                )
                ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                #print(f'{i + (len(CostMatrix) * k)}, {j + (len(CostMatrix) * (k + 1))}')  # Combinations between the origin and destination nodes
                #print(f'For x_{i + (len(CostMatrix) * k)}, to x_{j + (len(CostMatrix) * (k + 1))} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format for the solver (as formulated in the cost function)
                #print(f'For node_{i}, to node_{j} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Location constraint - the salesperson can only be at 1 node at a time.
    for l in range(0, len(CostMatrix) + 1):          # Each location vector over the route (+1 because of returning to the starting node)
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                if i < j:                            # i < j so each pair is penalized only once (this also excludes i == j)
                    terms.append(
                        Term(
                            c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                            indices = [i + (len(CostMatrix) * l), j + (len(CostMatrix) * l)]
                        )
                    )
                    ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                    #print(f'{i + (len(CostMatrix) * l)}, {j + (len(CostMatrix) * l)}')
                    #print(f'Location constraint 1: x_{i + (len(CostMatrix) * l)} - x_{j + (len(CostMatrix) * l)} (location vector {l}) assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)

    ############################################################################################
    ##### Constraint: Location constraint - encourage the salesperson to be 'somewhere', otherwise all x_k might be 0 (for example)
    for v in range(0, len(CostMatrix) + len(CostMatrix) * len(CostMatrix)):  # v runs over every variable (a node before/after any trip)
        terms.append(
            Term(
                c = int(-1.65 * np.max(CostMatrix)),  # Negative weight (a reward) dependent on the maximum element of the cost matrix
                indices = [v]
            )
        )
        ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
        #print(v)
        #print(f'Location constraint 2: x_{v} assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)
        #print(f'Location constraint 2: node_{v % NumNodes} after {np.floor(v / NumNodes)} trips assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Penalty for traveling to the same node again (the final return trip is excluded -- this makes it easier to specify an end node)
    for p in range(0, len(CostMatrix) + len(CostMatrix) * len(CostMatrix)):                       # Selects a present node x: 'p' for present
        for f in range(p + len(CostMatrix), len(CostMatrix) * len(CostMatrix), len(CostMatrix)):  # Selects the same node after upcoming trips: 'f' for future
            terms.append(
                Term(
                    c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                    indices = [p, f]
                )
            )
            ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
            #print(f'x_{p}, x_{f}')  # Just variable numbers
            #print(f'Visit once constraint: x_{p} - x_{f} assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)
            #print(f'Visit once constraint: node_{p % NumNodes} again after {(f - p) / NumNodes} trips assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format that is easier to read for a human

    return Problem(name="Traveling Salesperson", problem_type=ProblemType.pubo, terms=terms)

OptimizationProblem = OptProblem(CostMatrix)
```
### Constraint 4 & 5: Beginning and ending at a specific node
The salesperson starts and finishes at the headquarters: the starting node from which they depart on their journey. Similarly to constraint 2, the salesperson can be rewarded for starting/finishing at a specific node.
For example, if you want the salesperson to start/end at a particular node, you can assign negative weights to the respective $x_k$ terms in the first and last location vectors. For node 0, that means negatively weighting $x_0$ and $x_{N^2}$. This constraint is the most flexible one: you can also use it to encourage the salesperson to visit a specific node, or nodes, at a predetermined trip number $k$. Alternatively, you may see it as a way to integrate prior knowledge about the set of nodes into the optimization problem. Say you want the salesperson to visit node 1 after the second trip ($k=2$, the third node they visit); then by negatively weighting $x_{2N+1}$, the cost function will (likely) attain a lower minimum when this variable is assigned the value 1.
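The variable index for 'node $n$ after trip $k$' is simply $n + Nk$. A small helper (hypothetical, not part of the tutorial code) makes these anchor indices easy to compute:

```python
N = 5  # number of nodes, matching the tutorial's example

def var_index(node: int, trip: int) -> int:
    """Index of the binary variable 'salesperson is at `node` after `trip` trips'."""
    return node + N * trip

print(var_index(0, 0))  # start at node 0 -> x_0
print(var_index(0, N))  # end at node 0 after N trips -> x_25 (= N^2)
print(var_index(1, 2))  # node 1 after the second trip -> x_11 (= 2N + 1)
```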
In the code below, the salesperson is hard-coded to start and finish at node 0.
```
### Define variables
# The number of nodes
NumNodes = 5
# Max cost between nodes
maxCost = 10
# Node names, to interpret the solution later on
NodeName = {0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K',
11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T',
20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z'}
# Cost to travel between nodes -- note this matrix is not symmetric (traveling A->B is not same as B->A!)
CostMatrix = np.array([[1, 4, 7, 4, 3], [3, 3, 3, 1, 2], [2, 5, 2, 3, 1], [7, 8, 1, 3, 5], [3, 2, 1, 9, 8]]) # If you want to rerun with the same matrix
#CostMatrix = np.random.randint(maxCost, size=(NumNodes,NumNodes)) # If you want to run with a new cost matrix
############################################################################################
##### Define the optimization problem for the Quantum Inspired Solver
def OptProblem(CostMatrix) -> Problem:
    # 'terms' will contain the cost function terms for the trips
    terms = []

    ############################################################################################
    ##### Cost of traveling between nodes
    for k in range(0, len(CostMatrix)):              # For each trip
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                # Assign a weight to every possible trip from node i to node j for each trip
                terms.append(
                    Term(
                        c = CostMatrix.item((i, j)),  # Element of the cost matrix
                        indices = [i + (len(CostMatrix) * k), j + (len(CostMatrix) * (k + 1))]  # +1 to denote dependence on the next location
                    )
                )
                ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                #print(f'{i + (len(CostMatrix) * k)}, {j + (len(CostMatrix) * (k + 1))}')  # Combinations between the origin and destination nodes
                #print(f'For x_{i + (len(CostMatrix) * k)}, to x_{j + (len(CostMatrix) * (k + 1))} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format for the solver (as formulated in the cost function)
                #print(f'For node_{i}, to node_{j} in trip number {k} costs: {CostMatrix.item((i,j))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Location constraint - the salesperson can only be at 1 node at a time.
    for l in range(0, len(CostMatrix) + 1):          # Each location vector over the route (+1 because of returning to the starting node)
        for i in range(0, len(CostMatrix)):          # For each origin node
            for j in range(0, len(CostMatrix)):      # For each destination node
                if i < j:                            # i < j so each pair is penalized only once (this also excludes i == j)
                    terms.append(
                        Term(
                            c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                            indices = [i + (len(CostMatrix) * l), j + (len(CostMatrix) * l)]
                        )
                    )
                    ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
                    #print(f'{i + (len(CostMatrix) * l)}, {j + (len(CostMatrix) * l)}')
                    #print(f'Location constraint 1: x_{i + (len(CostMatrix) * l)} - x_{j + (len(CostMatrix) * l)} (location vector {l}) assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)

    ############################################################################################
    ##### Constraint: Location constraint - encourage the salesperson to be 'somewhere', otherwise all x_k might be 0 (for example)
    for v in range(0, len(CostMatrix) + len(CostMatrix) * len(CostMatrix)):  # v runs over every variable (a node before/after any trip)
        terms.append(
            Term(
                c = int(-1.65 * np.max(CostMatrix)),  # Negative weight (a reward) dependent on the maximum element of the cost matrix
                indices = [v]
            )
        )
        ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
        #print(v)
        #print(f'Location constraint 2: x_{v} assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)
        #print(f'Location constraint 2: node_{v % NumNodes} after {np.floor(v / NumNodes)} trips assigned weight: {int(-1.65 * np.max(CostMatrix))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Constraint: Penalty for traveling to the same node again (the final return trip is excluded -- this makes it easier to specify an end node)
    for p in range(0, len(CostMatrix) + len(CostMatrix) * len(CostMatrix)):                       # Selects a present node x: 'p' for present
        for f in range(p + len(CostMatrix), len(CostMatrix) * len(CostMatrix), len(CostMatrix)):  # Selects the same node after upcoming trips: 'f' for future
            terms.append(
                Term(
                    c = int(2 * np.max(CostMatrix)),  # Weight penalty dependent on the maximum element of the cost matrix
                    indices = [p, f]
                )
            )
            ##----- Uncomment one of the below statements if you want to see how the weights are assigned! -----
            #print(f'x_{p}, x_{f}')  # Just variable numbers
            #print(f'Visit once constraint: x_{p} - x_{f} assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format for the solver (as formulated in the cost function)
            #print(f'Visit once constraint: node_{p % NumNodes} again after {(f - p) / NumNodes} trips assigned weight: {int(2 * np.max(CostMatrix))}')  # In a format that is easier to read for a human

    ############################################################################################
    ##### Begin at x_0 (node 0)
    terms.append(
        Term(
            c = int(-10 * np.max(CostMatrix)),  # Large reward (negative weight) for starting at node 0
            indices = [0]
        )
    )

    ############################################################################################
    ##### End at x_0 (node 0 in the last location vector, variable N^2)
    terms.append(
        Term(
            c = int(-10 * np.max(CostMatrix)),  # Large reward (negative weight) for ending at node 0
            indices = [len(CostMatrix) * len(CostMatrix)]
        )
    )

    return Problem(name="Traveling Salesperson", problem_type=ProblemType.pubo, terms=terms)

OptimizationProblem = OptProblem(CostMatrix)
```
### Submitting the optimization problem
The optimization problem, subject to the necessary constraints, has now been defined! It is time to hand the problem over to the Azure QIO solvers and analyze the routes that are returned.
Some essential specifications/explanations:
1. The problem type is a [PUBO (Polynomial Unconstrained Binary Optimization)](https://docs.microsoft.com/azure/quantum/optimization-binary-optimization) - the variables ($x_k$) that are optimized for can take a value of 0, or 1.
2. We submit the problem to an Azure QIO solver, which one is up to you. Each has benefits/drawbacks, and [selecting the best one requires some experimentation](https://docs.microsoft.com/azure/quantum/optimization-which-solver-should-you-use). To read more about the available solvers, please refer to the [Microsoft QIO solver overview page](https://docs.microsoft.com/azure/quantum/provider-microsoft-qio) on the Azure Quantum docs site.
3. These optimizers are heuristics, which means they aren't guaranteed to find the optimal solution (or even a valid one, depending on how well you have encoded your cost function). Due to this, it is important to validate the solution returned. It is also recommended to run the solver several times to see if it returns the same solution. Additionally, having longer optimization times (larger timeout) can return better solutions.
4. The solution returned by the solver is heavily influenced by the constraint weights. Feel free to play around with these (a process called tuning) and discover whether you can make the optimization more efficient (a speed vs. solution-quality trade-off). A suggestion would be to make the weights dependent on a norm of the cost matrix (Frobenius, 1-norm, 2-norm, etc.).
5. Here the hardware implementation is defaulted to CPU. For further information on available hardware, please refer to the [Microsoft QIO solver overview page](https://docs.microsoft.com/azure/quantum/provider-microsoft-qio) on the Azure Quantum docs site.
```
############################################################################################
##### Choose the solver and parameters --- uncomment if you wish to use a different one --- timeout = 120 seconds
from azure.quantum.optimization import SimulatedAnnealing, ParallelTempering, Tabu, QuantumMonteCarlo  # (skip if already imported earlier in the notebook)
solver = SimulatedAnnealing(workspace, timeout = 120)
#solver = ParallelTempering(workspace, timeout = 120)
#solver = Tabu(workspace, timeout = 120)
#solver = QuantumMonteCarlo(workspace, sweeps = 2, trotter_number = 10, restarts = 72, seed = 22, beta_start = 0.1, transverse_field_start = 10, transverse_field_stop = 0.1) # QMC is not available parameter-free yet
route = solver.optimize(OptimizationProblem) # Synchronously submit the optimization problem to the service -- wait until done.
print(route)
```
### Parse the results
Below, some utility functions are defined that are needed to read and analyze the result returned by the solver.
```
############################################################################################
##### Read the results returned by the solver - need to make the solution readable
import math  # (skip if already imported earlier in the notebook)

def ReadResults(Config: dict, NodeName, CostMatrix, NumNodes):
    #############################################################################################
    ##### Read the returned result (dictionary) from the solver and sort it
    PathChoice = [(int(k), v) for k, v in Config.items()]
    PathChoice.sort(key=lambda tup: tup[0])
    #############################################################################################
    ##### Initialize variables to understand the routing
    TimeStep = []             # Array of times/trips - each node is represented during/for each time/trip interval
    Node = []                 # Array of node names
    Location = []             # The locations the salesperson is at for each time/trip
    RouteMatrixElements = []  # Indices of the cost matrix representing where the salesperson has traveled (to determine the total cost)
    #############################################################################################
    ##### Go through the nodes during each timestep/trip to see where the salesperson has been
    for Index in PathChoice:
        TimeStep.append(math.floor(Index[0] / len(CostMatrix)))     # Time step/trip k = floor of the variable index divided by the number of nodes
        Node.append(NodeName[Index[0] % len(CostMatrix)])           # Node name for each time step
        Location.append(Index[1])                                   # Variable value (0 or 1) for each time step
        if Index[1] == 1:                                           # If the variable == 1, the salesperson travels to that node in that trip
            RouteMatrixElements.append(Index[0] % len(CostMatrix))  # Save the indices (this returns the row index)
    SimulationResult = np.array([TimeStep, Node, Location])         # All the route data (also where the salesperson did not go during a turn/trip/timestep)
    #############################################################################################
    ##### Create the route dictionary
    k = 0
    PathDict = {}
    PathDict['Route'] = {}
    Path = np.array([['Timestep', 'Node']])
    for i in range(0, NumNodes * (NumNodes + 1)):
        if SimulationResult[2][i] == '1':  # If the location entry == '1', that's where the salesperson goes/went
            Path = np.concatenate((Path, np.array([[SimulationResult[j][i] for j in range(0, 2)]])), axis=0)  # Add the rows where the salesperson DOES travel to the Path matrix
            PathDict['Route'].update({k: Path[k + 1][1]})  # Save the route to a dictionary
            k += 1  # Keeps track of the dictionary key, and also allows checking the constraints
    AnalyzeResult(Path, NumNodes)  # Check whether the Path array satisfies the other constraints as well
    #############################################################################################
    ##### Calculate the total cost of the route the salesperson made (can be in time (minutes) or in distance (km))
    TotalRouteCost = 0
    for trips in range(0, NumNodes):
        TotalRouteCost = TotalRouteCost + float(CostMatrix.item(RouteMatrixElements[trips], RouteMatrixElements[trips + 1]))  # Sum of the matrix elements where the salesperson has been (determined through the indices)
    PathDict['RouteCost'] = {'Cost': TotalRouteCost}
    ##### Return the simulation result in a human-understandable way
    return PathDict

############################################################################################
##### Check whether the solution satisfies the optimization constraints
def AnalyzeResult(Path, NumNodes):
    ############################################################################################
    ##### Check whether the number of travels equals the number of nodes + 1 (for returning home)
    if (len(Path) - 1) != NumNodes + 1:
        raise RuntimeError('This solution is not valid -- Number of nodes visited invalid!')
    else:
        print(f"Number of nodes passed = {NumNodes}. This is valid!")
    ############################################################################################
    ##### Check whether the nodes are all different (except the start/end node)
    PastNodes = []
    for k in range(1, len(Path) - 1):  # Start to second-to-last node must all be different -- skip the header (start at 1) and the last node (-1)
        if Path[k][1] in PastNodes:
            raise RuntimeError('This solution is not valid -- Traveled to a non-starting node more than once')
        PastNodes.append(Path[k][1])
    print(f"Number of different nodes passed = {NumNodes}. This is valid!")
    ############################################################################################
    ##### Check whether the end node is the same as the start node
    if Path[1][1] != Path[-1][1]:
        raise RuntimeError(f'This solution is not valid -- Start node {Path[1][1]} is not equal to end node {Path[-1][1]}')
    print('Start and end node are the same. This is valid!')
    print('Valid route!')
```
### Read the results and analyze the path
In reading the returned solution, the route is mapped into a human readable format.
In analyzing the path, the validity of the route is checked by going over whether constraints are satisfied.
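The total-cost step above just sums consecutive entries of the cost matrix along the visited node indices. A minimal sketch of that computation with made-up costs (the matrix and route here are illustrative only, not the notebook's data):

```python
import numpy as np

# Hypothetical 3-node cost matrix (symmetric, zero diagonal)
cost_matrix = np.array([[0, 5, 9],
                        [5, 0, 3],
                        [9, 3, 0]])
route = [0, 2, 1, 0]  # start at node 0, visit node 2, then node 1, return home

# Sum the matrix entries along consecutive legs of the route
total_cost = sum(cost_matrix[route[i], route[i + 1]] for i in range(len(route) - 1))
print(total_cost)  # 9 + 3 + 5 = 17
```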
```
##### Call the function to interpret/convert/analyze the optimization results into a more meaningful/understandable format
PathDict = ReadResults(route['configuration'], NodeName, CostMatrix, NumNodes)
PathDict
```
Well done! If the solver returned a valid solution, you have solved the traveling salesperson problem! If an error was raised then feel free to go back and adjust some settings and weights.
## Next steps
Now that you understand the problem scenario and how to define the cost function, there are a number of experiments you can perform to deepen your understanding and improve the solution defined above:
- Modify the problem definition (e.g. by changing the number of nodes)
- Rewrite the penalty functions to improve their efficiency
- Tune the parameters (weights)
- Try using a different solver, or a parameterized version (see [Which optimization solver should I use?](https://docs.microsoft.com/en-gb/azure/quantum/optimization-which-solver-should-you-use) for some tips)
# Deep Q-Network implementation
This notebook shamelessly demands that you implement a DQN - an approximate q-learning algorithm with experience replay and target networks - and see if it works any better this way.
```
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
```
__Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
```
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### Let's play some old videogames

This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.
### Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing the game image, including
* Resizing to a smaller shape, 64 x 64
* Converting to grayscale
* Cropping irrelevant image parts (top & bottom)
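Before filling in `_observation`, it may help to see the whole pipeline on a plain array. This is a framework-free sketch (crop rows, grayscale by channel mean, nearest-neighbour resize, rescale to [0, 1]), not the course's reference implementation; the crop margins are made up:

```python
import numpy as np

def preprocess_frame(img, out_size=(64, 64), crop=(30, 10)):
    """Crop top/bottom rows, grayscale, nearest-neighbour resize, scale to [0, 1]."""
    top, bottom = crop
    img = img[top:img.shape[0] - bottom]            # drop irrelevant top & bottom rows
    gray = img.astype(np.float32).mean(axis=-1)     # naive grayscale: mean over channels
    rows = np.linspace(0, gray.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_size[1]).astype(int)
    resized = gray[np.ix_(rows, cols)]              # nearest-neighbour resize
    return (resized / 255.0).astype(np.float32)

frame = (np.random.rand(210, 160, 3) * 255).astype(np.uint8)  # fake Atari frame
obs = preprocess_frame(frame)
print(obs.shape, obs.dtype)
```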
```
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize
class PreprocessAtari(ObservationWrapper):
def __init__(self, env):
"""A gym wrapper that crops, scales image into the desired shapes and optionally grayscales it."""
ObservationWrapper.__init__(self,env)
self.img_size = (1, 64, 64)
self.observation_space = Box(0.0, 1.0, self.img_size)
def _observation(self, img):
"""what happens to each observation"""
# Here's what you need to do:
# * crop image, remove irrelevant parts
# * resize image to self.img_size
# (use imresize imported above or any library you want,
# e.g. opencv, skimage, PIL, keras)
# * cast image to grayscale
# * convert image pixels to (0,1) range, float32 type
<Your code here>
return <...>
import gym
#spawn game instance for tests
env = gym.make("BreakoutDeterministic-v0") #create raw env
env = PreprocessAtari(env)
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
env.reset()
obs, _, _, _ = env.step(env.action_space.sample())
#test observation
assert obs.ndim == 3, "observation must be [batch, time, channels] even if there's just one channel"
assert obs.shape == observation_shape
assert obs.dtype == 'float32'
assert len(np.unique(obs))>2, "your image must not be binary"
assert 0 <= np.min(obs) and np.max(obs) <=1, "convert image pixels to (0,1) range"
print("Formal tests seem fine. Here's an example of what you'll get.")
plt.title("what your network gonna see")
plt.imshow(obs[0, :, :],interpolation='none',cmap='gray');
```
### Frame buffer
Our agent can only process one observation at a time, so we gotta make sure it contains enough information to find optimal actions. For instance, the agent has to react to moving objects, so it must be able to measure an object's velocity.
To do so, we introduce a buffer that stores the last 4 images. This time everything is pre-implemented for you.
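For intuition, the idea behind the provided `FrameBuffer` can be sketched as a deque of the most recent frames stacked along the channel axis (this is a simplified stand-in, not the `framebuffer` module the notebook imports):

```python
import numpy as np
from collections import deque

class SimpleFrameBuffer:
    """Keep the last `n_frames` observations stacked along the channel axis."""
    def __init__(self, n_frames=4, frame_shape=(1, 64, 64)):
        self.n_frames, self.frame_shape = n_frames, frame_shape
        self.reset()
    def reset(self):
        self.frames = deque([np.zeros(self.frame_shape, np.float32)] * self.n_frames,
                            maxlen=self.n_frames)
        return self.observation()
    def push(self, frame):
        self.frames.append(frame.astype(np.float32))  # oldest frame falls off the deque
        return self.observation()
    def observation(self):
        return np.concatenate(self.frames, axis=0)  # shape: [n_frames, h, w]

buf = SimpleFrameBuffer()
obs = buf.push(np.ones((1, 64, 64)))
print(obs.shape)  # (4, 64, 64)
```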
```
from framebuffer import FrameBuffer
def make_env():
env = gym.make("BreakoutDeterministic-v4")
env = PreprocessAtari(env)
env = FrameBuffer(env, n_frames=4, dim_order='pytorch')
return env
env = make_env()
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
for _ in range(50):
obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Game image")
plt.imshow(env.render("rgb_array"))
plt.show()
plt.title("Agent observation (4 frames top to bottom)")
plt.imshow(obs.reshape([-1, state_dim[2]]));
```
### Building a network
We now need to build a neural network that can map images to state q-values. This network will be called on every agent's step so it better not be resnet-152 unless you have an array of GPUs. Instead, you can use strided convolutions with a small number of features to save time and memory.
You can build any architecture you want, but for reference, here's something that will more or less work:

```
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class DQNAgent(nn.Module):
def __init__(self, state_shape, n_actions, epsilon=0):
"""A simple DQN agent"""
nn.Module.__init__(self)
self.epsilon = epsilon
self.n_actions = n_actions
img_c, img_w, img_h = state_shape
# Define your network body here. Please make sure agent is fully contained here
<YOUR CODE>
def forward(self, state_t):
"""
takes agent's observation (Variable), returns qvalues (Variable)
:param state_t: a batch of 4-frame buffers, shape = [batch_size, 4, h, w]
Hint: if you're running on GPU, use state_t.cuda() right here.
"""
# Use your network to compute qvalues for given state
qvalues = <YOUR CODE>
assert isinstance(qvalues, Variable) and qvalues.requires_grad, "qvalues must be a torch variable with grad"
assert len(qvalues.shape) == 2 and qvalues.shape[0] == state_t.shape[0] and qvalues.shape[1] == n_actions
return qvalues
def get_qvalues(self, states):
"""
like forward, but works on numpy arrays, not Variables
"""
states = Variable(torch.FloatTensor(np.asarray(states)))
qvalues = self.forward(states)
return qvalues.data.cpu().numpy()
def sample_actions(self, qvalues):
"""pick actions given qvalues. Uses epsilon-greedy exploration strategy. """
epsilon = self.epsilon
batch_size, n_actions = qvalues.shape
random_actions = np.random.choice(n_actions, size=batch_size)
best_actions = qvalues.argmax(axis=-1)
should_explore = np.random.choice([0, 1], batch_size, p = [1-epsilon, epsilon])
return np.where(should_explore, random_actions, best_actions)
agent = DQNAgent(state_dim, n_actions, epsilon=0.5)
```
Now let's try out our agent to see if it raises any errors.
```
def evaluate(env, agent, n_games=1, greedy=False, t_max=10000):
""" Plays n_games full games. If greedy, picks actions as argmax(qvalues). Returns mean reward. """
rewards = []
for _ in range(n_games):
s = env.reset()
reward = 0
for _ in range(t_max):
qvalues = agent.get_qvalues([s])
action = qvalues.argmax(axis=-1)[0] if greedy else agent.sample_actions(qvalues)[0]
s, r, done, _ = env.step(action)
reward += r
if done: break
rewards.append(reward)
return np.mean(rewards)
evaluate(env, agent, n_games=1)
```
### Experience replay
For this assignment, we provide you with an experience replay buffer. If you implemented an experience replay buffer in last week's assignment, you can copy-paste it here __to get 2 bonus points__.

#### The interface is fairly simple:
* `exp_replay.add(obs, act, rw, next_obs, done)` - saves (s,a,r,s',done) tuple into the buffer
* `exp_replay.sample(batch_size)` - returns observations, actions, rewards, next_observations and is_done for `batch_size` random samples.
* `len(exp_replay)` - returns number of elements stored in replay buffer.
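To make the interface concrete, here is a minimal ring-buffer sketch with the same three operations (a stand-in for illustration only, not the provided `replay_buffer` module):

```python
import random
from collections import deque

class MinimalReplayBuffer:
    """A tiny (s, a, r, s', done) ring buffer with uniform sampling."""
    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)  # old entries are evicted automatically
    def __len__(self):
        return len(self.storage)
    def add(self, obs, act, rew, next_obs, done):
        self.storage.append((obs, act, rew, next_obs, done))
    def sample(self, batch_size):
        batch = random.sample(self.storage, batch_size)
        return tuple(map(list, zip(*batch)))  # 5 lists: obs, act, rew, next_obs, done

buf = MinimalReplayBuffer(10)
for i in range(30):
    buf.add(i, i % 4, 1.0, i + 1, False)
obs, act, rew, nxt, done = buf.sample(5)
print(len(buf), len(obs))  # 10 5
```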
```
from replay_buffer import ReplayBuffer
exp_replay = ReplayBuffer(10)
for _ in range(30):
exp_replay.add(env.reset(), env.action_space.sample(), 1.0, env.reset(), done=False)
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(5)
assert len(exp_replay) == 10, "experience replay size should be 10 because that's what maximum capacity is"
def play_and_record(agent, env, exp_replay, n_steps=1):
"""
Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer.
Whenever game ends, add record with done=True and reset the game.
It is guaranteed that env has done=False when passed to this function.
PLEASE DO NOT RESET ENV UNLESS IT IS "DONE"
:returns: return sum of rewards over time
"""
# initial state
s = env.framebuffer
# Play the game for n_steps as per instructions above
<YOUR CODE>
return <mean rewards>
# testing your code. This may take a minute...
exp_replay = ReplayBuffer(20000)
play_and_record(agent, env, exp_replay, n_steps=10000)
# if you're using your own experience replay buffer, some of those tests may need correction.
# just make sure you know what your code does
assert len(exp_replay) == 10000, "play_and_record should have added exactly 10000 steps, "\
"but instead added %i" % len(exp_replay)
is_dones = list(zip(*exp_replay._storage))[-1]
assert 0 < np.mean(is_dones) < 0.1, "Please make sure you restart the game whenever it is 'done' and record the is_done correctly into the buffer."\
"Got %f is_done rate over %i steps. [If you think it's your tough luck, just re-run the test]"%(np.mean(is_dones), len(exp_replay))
for _ in range(100):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
assert obs_batch.shape == next_obs_batch.shape == (10,) + state_dim
assert act_batch.shape == (10,), "actions batch should have shape (10,) but is instead %s"%str(act_batch.shape)
assert reward_batch.shape == (10,), "rewards batch should have shape (10,) but is instead %s"%str(reward_batch.shape)
assert is_done_batch.shape == (10,), "is_done batch should have shape (10,) but is instead %s"%str(is_done_batch.shape)
assert all(int(i) in (0, 1) for i in is_done_batch), "is_done should be strictly True or False"
assert all(0 <= a < n_actions for a in act_batch), "actions should be within [0, n_actions)"
print("Well done!")
```
### Target networks
We also employ the so-called "target network" - a copy of neural network weights to be used for reference Q-values:
The network itself is an exact copy of the agent network, but its parameters are not trained. Instead, they are moved here from the agent's actual network every so often.
$$ Q_{reference}(s,a) = r + \gamma \cdot \max _{a'} Q_{target}(s',a') $$

```
target_network = DQNAgent(state_dim, n_actions)
# This is how you can load weights from agent into target network
target_network.load_state_dict(agent.state_dict())
```
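The notebook uses a hard copy (`load_state_dict`) every few thousand steps. An alternative used by some DQN variants is a soft ("Polyak") update that nudges the target toward the online weights a little on every step. A framework-agnostic sketch of the averaging rule (the `tau` value here is arbitrary):

```python
import numpy as np

def polyak_update(target_params, online_params, tau=0.5):
    """Soft target update: target <- (1 - tau) * target + tau * online."""
    return [(1 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# Toy 'networks' with a single weight vector each
target = [np.zeros(3)]
online = [np.ones(3)]
for _ in range(5):
    target = polyak_update(target, online, tau=0.5)
print(target[0])  # each update halves the remaining gap to the online weights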
### Learning with... Q-learning
Here we write a function similar to `agent.update` from tabular q-learning.
Compute Q-learning TD error:
$$ L = { 1 \over N} \sum_i [ Q_{\theta}(s,a) - Q_{reference}(s,a) ] ^2 $$
With Q-reference defined as
$$ Q_{reference}(s,a) = r(s,a) + \gamma \cdot max_{a'} Q_{target}(s', a') $$
Where
* $Q_{target}(s',a')$ denotes q-value of next state and next action predicted by __target_network__
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
__Note 1:__ there's an example input below. Feel free to experiment with it before you write the function.
__Note 2:__ compute_td_loss is a source of 99% of bugs in this homework. If reward doesn't improve, it often helps to go through it line by line [with a rubber duck](https://rubberduckdebugging.com/).
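Before writing the torch version, it can help to check the target formula on concrete numbers. A numpy sketch of the reference computation with made-up Q-values (note how `is_done` zeroes out the bootstrap term for terminal transitions):

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0])
is_done = np.array([0.0, 1.0])            # second transition ends its episode
next_qvalues = np.array([[0.5, 2.0],      # made-up Q_target(s', a') rows
                         [3.0, 1.0]])

next_state_values = next_qvalues.max(axis=-1)  # V*(s') = max_a' Q_target(s', a')
target_q = rewards + gamma * next_state_values * (1.0 - is_done)
print(target_q)  # [1 + 0.99 * 2.0, 0.0] = [2.98, 0.0]
```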
```
def compute_td_loss(states, actions, rewards, next_states, is_done, gamma = 0.99, check_shapes = False):
""" Compute td loss using torch operations only. Use the formula above. """
states = Variable(torch.FloatTensor(states)) # shape: [batch_size, c, h, w]
actions = Variable(torch.LongTensor(actions)) # shape: [batch_size]
rewards = Variable(torch.FloatTensor(rewards)) # shape: [batch_size]
next_states = Variable(torch.FloatTensor(next_states)) # shape: [batch_size, c, h, w]
is_done = Variable(torch.FloatTensor(is_done.astype('float32'))) # shape: [batch_size]
is_not_done = 1 - is_done
#get q-values for all actions in current states
predicted_qvalues = agent(states)
# compute q-values for all actions in next states
predicted_next_qvalues = target_network(next_states)
#select q-values for chosen actions
predicted_qvalues_for_actions = predicted_qvalues[range(len(actions)), actions]
# compute V*(next_states) using predicted next q-values
next_state_values = < YOUR CODE >
assert next_state_values.dim() == 1 and next_state_values.shape[0] == states.shape[0], "must predict one value per state"
# compute "target q-values" for loss - it's what's inside the square brackets in the formula above.
# at the last state use the simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
# you can multiply next state values by is_not_done to achieve this.
target_qvalues_for_actions = <YOUR CODE>
#mean squared error loss to minimize
loss = torch.mean((predicted_qvalues_for_actions - target_qvalues_for_actions.detach()) ** 2 )
if check_shapes:
assert predicted_next_qvalues.data.dim() == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.data.dim() == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.data.dim() == 1, "there's something wrong with target q-values, they must be a vector"
return loss
# sanity checks
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
loss = compute_td_loss(obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch, gamma=0.99,
check_shapes=True)
loss.backward()
assert isinstance(loss, Variable) and tuple(loss.data.size()) == (1,), "you must return scalar loss - mean over batch"
assert np.any(next(agent.parameters()).grad.data.numpy() != 0), "loss must be differentiable w.r.t. network weights"
```
### Main loop
It's time to put everything together and see if it learns anything.
```
from tqdm import trange
from IPython.display import clear_output
import matplotlib.pyplot as plt
from pandas import ewma
%matplotlib inline
mean_rw_history = []
td_loss_history = []
exp_replay = ReplayBuffer(10**5)
play_and_record(agent, env, exp_replay, n_steps=10000);
opt = < your favorite optimizer. Default to adam if you don't have one >
for i in trange(10**5):
# play
play_and_record(agent, env, exp_replay, 10)
# train
< sample data from experience replay>
loss = < compute TD loss >
< minimize loss by gradient descent >
td_loss_history.append(loss.data.cpu().numpy()[0])
# adjust agent parameters
if i % 500 == 0:
agent.epsilon = max(agent.epsilon * 0.99, 0.01)
mean_rw_history.append(evaluate(make_env(), agent, n_games=3))
#Load agent weights into target_network
<YOUR CODE>
if i % 100 == 0:
clear_output(True)
print("buffer size = %i, epsilon = %.5f" % (len(exp_replay), agent.epsilon))
plt.figure(figsize=[12, 4])
plt.subplot(1,2,1)
plt.title("mean reward per game")
plt.plot(mean_rw_history)
plt.grid()
assert not np.isnan(td_loss_history[-1])
plt.subplot(1,2,2)
plt.title("TD loss history (moving average)")
plt.plot(pd.ewma(np.array(td_loss_history), span=100, min_periods=100))
plt.grid()
plt.show()
assert np.mean(mean_rw_history[-10:]) > 10.
print("That's good enough for tutorial.")
```
__How to interpret plots:__
This ain't supervised learning, so don't expect anything to improve monotonically.
* __TD loss__ is the MSE between the agent's current Q-values and the target Q-values. It may slowly increase or decrease; that's OK. The "not OK" behaviors include going NaN or staying at exactly zero before the agent has perfect performance.
* __mean reward__ is the expected sum of r(s,a) the agent gets over a full game session. It will oscillate, but on average it should get higher over time (after a few thousand iterations...).
* In a basic q-learning implementation it takes 5-10k steps to "warm up" the agent before it starts to get better.
* __buffer size__ - this one is simple. It should go up and cap at max size.
* __epsilon__ - the agent's willingness to explore. If you see that the agent is already at 0.01 epsilon before its average reward is above 0 - __it means you need to increase epsilon__. Set it back to somewhere around 0.2-0.5 and decrease the pace at which it goes down.
* Also, please ignore the first 100-200 steps of each plot - they're just oscillations caused by the way the moving average works.
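For reference, the multiplicative decay applied every 500 iterations in the main loop has this closed form (a sketch; the constants mirror the loop's defaults and are not the only reasonable schedule):

```python
def epsilon_at(step, start=0.5, decay=0.99, floor=0.01, every=500):
    """Epsilon after `step` iterations of multiplicative decay with a floor."""
    return max(start * decay ** (step // every), floor)

print(epsilon_at(0), round(epsilon_at(50_000), 3), epsilon_at(10**6))  # 0.5 0.183 0.01
```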
At first your agent will lose quickly. Then it will learn to suck less and at least hit the ball a few times before it loses. Finally it will learn to actually score points.
__Training will take time.__ A lot of it actually. An optimistic estimate is to say it's gonna start winning (average reward > 10) after 20k steps.
But hey, look on the bright side of things:

### Video
```
agent.epsilon=0 # Don't forget to reset epsilon back to previous value if you want to go on training
#record sessions
import gym.wrappers
env_monitor = gym.wrappers.Monitor(make_env(),directory="videos",force=True)
sessions = [evaluate(env_monitor, agent, n_games=1) for _ in range(100)]
env_monitor.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
```
## Assignment part I (5 pts)
We'll start by implementing a target network to stabilize training.
To do that you should use the framework's functionality for copying network weights (in PyTorch, `load_state_dict`, as shown above).
We recommend thoroughly debugging your code on simple tests before applying it in atari dqn.
## Bonus I (2+ pts)
Implement and train double q-learning.
This task consists of
* Implementing __double q-learning__ or __dueling q-learning__ or both (see tips below)
* Training a network till convergence
* Full points will be awarded if your network gets average score of >=10 (see "evaluating results")
* Higher score = more points as usual
* If you're running out of time, it's okay to submit a solution that hasn't converged yet and update it when it converges. _The lateness penalty will not increase for the second submission_, so submitting the first one on time gets you no penalty.
#### Tips:
* Implementing __double q-learning__ shouldn't be a problem if you already have target networks in place.
* You will probably need `torch.argmax` to select best actions
* Here's an original [article](https://arxiv.org/abs/1509.06461)
* __Dueling__ architecture is also quite straightforward if you have standard DQN.
* You will need to change network architecture, namely the q-values layer
* It must now contain two heads: V(s) and A(s,a), both dense layers
* You should then combine them via an element-wise sum.
* Here's an [article](https://arxiv.org/pdf/1511.06581.pdf)
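To make the double q-learning tip concrete: the online network chooses the next action, while the target network evaluates it. A numpy sketch with made-up Q-values:

```python
import numpy as np

q_online_next = np.array([[1.0, 3.0],   # Q_online(s', .): picks the action
                          [2.0, 0.5]])
q_target_next = np.array([[0.8, 1.5],   # Q_target(s', .): evaluates that action
                          [2.5, 0.1]])

best_actions = q_online_next.argmax(axis=-1)                 # a' = argmax_a Q_online(s', a)
double_q_values = q_target_next[np.arange(2), best_actions]  # Q_target(s', a')
print(double_q_values)  # [1.5 2.5]
```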
## Bonus II (5+ pts): Prioritized experience replay
In this section, you're invited to implement prioritized experience replay
* You will probably need to provide a custom data structure
* Once pool.update is called, collect the pool.experience_replay.observations, actions, rewards and is_alive and store them in your data structure
* You can now sample such transitions in proportion to the error (see [article](https://arxiv.org/abs/1511.05952)) for training.
It's probably more convenient to explicitly declare inputs for "sample observations", "sample actions" and so on to plug them into q-learning.
Prioritized (and even normal) experience replay should greatly reduce the number of game sessions you need to play in order to achieve good performance.
While its effect on runtime is limited for Atari, more complicated envs (further in the course) will certainly benefit from it.
Prioritized experience replay only supports off-policy algorithms, so please enforce `n_steps=1` in your q-learning reference computation (default is 10).
# *Unsupervised learning: Latent Dirichlet allocation (LDA) topic modeling*
```
## Install a Python package for LDA
# http://pythonhosted.org/lda/getting_started.html
!pip3 install lda
## Importing basic packages
import os
import numpy as np
## Downloading 'Essays' by Ralph Waldo Emerson
os.chdir('/sharedfolder/')
!wget http://www.gutenberg.org/cache/epub/16643/pg16643.txt
## Loading the text
text_path = 'pg16643.txt'
text_data = open(text_path).read()
## Dividing the document into segments, with the aim of extracting individual essays
len(text_data.split('\n\n\n\n\n'))
## Viewing the beginning of each segment to determine which ones to keep
counter = 0
for item in text_data.split('\n\n\n\n\n'):
print('-----')
print(counter)
print(item[:80])
counter+=1
## Creating a list of essays
document_list = text_data.split('\n\n\n\n\n')[9:20]
print(len(document_list))
## Creating a vectorized representation of each essay in the list
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(document_list)
## Viewing a single essay's vector
sample_essay_vector = X.toarray()[3]
print(len(sample_essay_vector))
sample_essay_vector
## Creating a vocabulary list corresponding to the vectors we created above
vocabulary = vectorizer.get_feature_names()
vocabulary[8950:8980]
## Viewing the 10 most frequent words in a single essay
print(np.array(vocabulary)[np.argsort(sample_essay_vector)[::-1]][:10])
print(np.argsort(sample_essay_vector)[::-1][:10]) # corresponding frequency values
## Initializing an LDA model: 10 topics and 1000 iterations
import lda
model = lda.LDA(n_topics=10, n_iter=1000, random_state=1)
## Fitting the model using our list of vectors
model.fit(X)
## Viewing the top 50 words in each 'topic'
topic_word = model.topic_word_
n_top_words = 50
for i, topic_distribution in enumerate(topic_word):
topic_words = np.array(vocabulary)[np.argsort(topic_distribution)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
print()
```
### Repeating the process, removing stop words and punctuation first
```
from nltk.tokenize import word_tokenize
word_tokenize('We are symbols, and inhabit symbols.')
## Importing NLTK stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
stop_words
## Importing Python punctuation set
import string
string.punctuation
## Testing tokenization + stop word removal
sentence = 'We are symbols, and inhabit symbols.'.lower()
token_list = word_tokenize(sentence)
sentence_filtered = [item for item in token_list if (item not in stop_words)&(item not in string.punctuation)]
sentence_filtered
## Tokenizing and removing stop words from our list of essays
documents_filtered = []
for document in document_list:
token_list = word_tokenize(document.lower())
tokens_filtered = [item for item in token_list if (item not in stop_words)&(item not in string.punctuation)]
documents_filtered.append(' '.join(tokens_filtered))
## Viewing a segment of a preprocessed essay
documents_filtered[3][2000:2100]
## Vectorizing preprocessed essays
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents_filtered)
## Creating a vocabulary list corresponding to the vectors we created above
vocabulary = vectorizer.get_feature_names()
vocabulary[1140:1160]
## Initializing an LDA model: 10 topics and 1000 iterations
model = lda.LDA(n_topics=10, n_iter=1000, random_state=1)
## Fitting the model using our list of vectors
model.fit(X)
## Viewing the top 50 words in each 'topic'
topic_word = model.topic_word_
n_top_words = 50
for i, topic_distribution in enumerate(topic_word):
topic_words = np.array(vocabulary)[np.argsort(topic_distribution)][:-(n_top_words+1):-1]
print('Topic ' + str(i) + ':')
print(' '.join(topic_words))
print()
```
### ▷Assignment
Modify the code above: Apply a stemming step to each word before vectorizing the text.
See example stemming code in the following cell.
```
## Stemming example
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
print(stemmer.stem('nature'))
print(stemmer.stem('natural'))
print(stemmer.stem('naturalism'))
```
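One way to think about where the stemming step slots in, sketched with a trivial suffix-stripping stand-in so it runs on its own. In the actual assignment you would call `PorterStemmer().stem` from NLTK on each token before joining and vectorizing; `toy_stem` and its suffix list are invented for illustration:

```python
def toy_stem(word):
    """Stand-in for PorterStemmer().stem: strip a few common suffixes."""
    for suffix in ('alism', 'ism', 'al', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

# Stem each token before re-joining, mirroring the documents_filtered loop above
documents = ['nature and natural naturalism', 'symbols inhabit symbols']
documents_stemmed = [' '.join(toy_stem(w) for w in doc.split()) for doc in documents]
print(documents_stemmed)
```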
# *Supervised learning: Naive Bayes classification*
```
## Download sample text corpora from GitHub, then unzip.
os.chdir('/sharedfolder/')
## Uncomment the lines below if you need to re-download test corpora we used last week.
#!wget -N https://github.com/pcda17/pcda17.github.io/blob/master/week/8/Sample_corpora.zip?raw=true -O Sample_corpora.zip
#!unzip -o Sample_corpora.zip
os.chdir('/sharedfolder/Sample_corpora')
os.listdir('./')
## Loading Melville novels
os.chdir('/sharedfolder/Sample_corpora/Herman_Melville/')
melville_texts = []
for filename in os.listdir('./'):
text_data = open(filename).read().replace('\n', ' ')
melville_texts.append(text_data)
print(len(melville_texts))
## Loading Austen novels
os.chdir('/sharedfolder/Sample_corpora/Jane_Austen/')
austen_texts = []
for filename in os.listdir('./'):
text_data = open(filename).read().replace('\n', ' ')
austen_texts.append(text_data)
print(len(austen_texts))
## Removing the last novel from each list so we can use it to test our classifier
melville_train_texts = melville_texts[:-1]
austen_train_texts = austen_texts[:-1]
melville_test_text = melville_texts[-1]
austen_test_text = austen_texts[-1]
## Creating a master list of Melville sentences
from nltk.tokenize import sent_tokenize
melville_combined_texts = ' '.join(melville_train_texts)
melville_sentences = sent_tokenize(melville_combined_texts)
print(len(melville_sentences))
melville_sentences[9999]
## Extracting 2000 Melville sentences at random for use as a training set
import random
melville_train_sentences = random.sample(melville_sentences, 2000)
## Creating a list of Melville sentences for our test set
melville_test_sentences = sent_tokenize(melville_test_text)
print(len(melville_test_sentences))
melville_test_sentences[997]
## Creating a master list of Austen sentences
austen_combined_texts = ' '.join(austen_train_texts)
austen_sentences = sent_tokenize(austen_combined_texts)
print(len(austen_sentences))
austen_sentences[8979]
## Extracting 2000 Austen sentences at random for use as a training set
austen_train_sentences = random.sample(austen_sentences, 2000)
## Creating a list of Austen sentences for our test set
austen_test_sentences = sent_tokenize(austen_test_text)
print(len(austen_test_sentences))
austen_test_sentences[1000]
## Combining training data
combined_texts = melville_train_sentences + austen_train_sentences
## Creating list of associated class values:
## 0 for Melville, 1 for Austen
y = [0]*len(melville_train_sentences) + [1]*len(austen_train_sentences)
## Creating vectorized training set using our combined sentence list
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(combined_texts).toarray()
X.shape
## Training a multinomial naive Bayes classifier
## X is a combined list of Melville and Austen sentences (2000 sentences from each)
## y is a list of classes (0 or 1)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB().fit(X, y)
## Classifying 5 sentences in our Austen test set
# Recall that 0 means Melville & 1 means Austen
from pprint import pprint
input_sentences = austen_test_sentences[3000:3005]
input_vector = vectorizer.transform(input_sentences) ## Converting a list of string to the same
## vector format we used for our training set.
pprint(austen_test_sentences[3000:3005])
classifier.predict(input_vector)
## Classifying 5 sentences in our Melville test set
input_sentences = melville_test_sentences[3000:3005]
input_vector = vectorizer.transform(input_sentences)
pprint(melville_test_sentences[3000:3005])
classifier.predict(input_vector)
```
### ▷Assignment
Write a script that prints Austen-like sentences written
by Melville, and Melville-like sentences written by Austen.
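A possible shape for this script, sketched with a stand-in predictor so it runs on its own. Swap `predict_stub` for `classifier.predict(vectorizer.transform(...))` from the cells above; the heuristic inside the stub and the sample sentences are invented:

```python
def predict_stub(sentences):
    """Stand-in for classifier.predict(vectorizer.transform(sentences)):
    pretend sentences mentioning 'whale' look like Melville (class 0)."""
    return [0 if 'whale' in s.lower() else 1 for s in sentences]

austen_test = ['She spoke of the whale with surprising warmth.',
               'Elizabeth smiled and said nothing.']
# Austen sentences that the classifier labels 0 read as "Melville-like"
melville_like = [s for s, label in zip(austen_test, predict_stub(austen_test)) if label == 0]
print(melville_like)
```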
```
#export
from fastai2.basics import *
from fastai2.vision.core import *
from fastai2.vision.data import *
from fastai2.vision.augment import *
from fastai2.vision import models
#default_exp vision.learner
from nbdev.showdoc import *
```
# Learner for the vision applications
> All the functions necessary to build `Learner` suitable for transfer learning in computer vision
## Cut a pretrained model
```
# export
def _is_pool_type(l): return re.search(r'Pool[123]d$', l.__class__.__name__)
m = nn.Sequential(nn.AdaptiveAvgPool2d(5), nn.Linear(2,3), nn.Conv2d(2,3,1), nn.MaxPool3d(5))
test_eq([bool(_is_pool_type(m_)) for m_ in m.children()], [True,False,False,True])
# export
def has_pool_type(m):
"Return `True` if `m` is a pooling layer or has one in its children"
if _is_pool_type(m): return True
for l in m.children():
if has_pool_type(l): return True
return False
m = nn.Sequential(nn.AdaptiveAvgPool2d(5), nn.Linear(2,3), nn.Conv2d(2,3,1), nn.MaxPool3d(5))
assert has_pool_type(m)
test_eq([has_pool_type(m_) for m_ in m.children()], [True,False,False,True])
# export
def _get_first_layer(m):
"Access first layer of a model"
c,p,n = m,None,None # child, parent, name
for n in next(m.named_parameters())[0].split('.')[:-1]:
p,c=c,getattr(c,n)
return c,p,n
#export
def _load_pretrained_weights(new_layer, previous_layer):
"Load pretrained weights based on number of input channels"
n_in = getattr(new_layer, 'in_channels')
if n_in==1:
# we take the sum
new_layer.weight.data = previous_layer.weight.data.sum(dim=1, keepdim=True)
elif n_in==2:
# we take first 2 channels + 50%
new_layer.weight.data = previous_layer.weight.data[:,:2] * 1.5
else:
# keep 3 channels weights and set others to null
new_layer.weight.data[:,:3] = previous_layer.weight.data
new_layer.weight.data[:,3:].zero_()
#export
def _update_first_layer(model, n_in, pretrained):
"Change first layer based on number of input channels"
if n_in == 3: return
first_layer, parent, name = _get_first_layer(model)
assert isinstance(first_layer, nn.Conv2d), f'Change of input channels only supported with Conv2d, found {first_layer.__class__.__name__}'
assert getattr(first_layer, 'in_channels') == 3, f'Unexpected number of input channels, found {getattr(first_layer, "in_channels")} while expecting 3'
params = {attr:getattr(first_layer, attr) for attr in 'out_channels kernel_size stride padding dilation groups padding_mode'.split()}
params['bias'] = getattr(first_layer, 'bias') is not None
params['in_channels'] = n_in
new_layer = nn.Conv2d(**params)
if pretrained:
_load_pretrained_weights(new_layer, first_layer)
setattr(parent, name, new_layer)
#export
def create_body(arch, n_in=3, pretrained=True, cut=None):
"Cut off the body of a typically pretrained `arch` as determined by `cut`"
model = arch(pretrained=pretrained)
_update_first_layer(model, n_in, pretrained)
#cut = ifnone(cut, cnn_config(arch)['cut'])
if cut is None:
ll = list(enumerate(model.children()))
cut = next(i for i,o in reversed(ll) if has_pool_type(o))
if isinstance(cut, int): return nn.Sequential(*list(model.children())[:cut])
elif callable(cut): return cut(model)
    else: raise NameError("cut must be either an integer or a function")
```
`cut` can either be an integer, in which case we cut the model at the corresponding layer, or a function, in which case this function returns `cut(model)`. If `cut` is `None`, the model is cut just before the last child module that contains a pooling layer.
```
tst = lambda pretrained : nn.Sequential(nn.Conv2d(3,5,3), nn.BatchNorm2d(5), nn.AvgPool2d(1), nn.Linear(3,4))
m = create_body(tst)
test_eq(len(m), 2)
m = create_body(tst, cut=3)
test_eq(len(m), 3)
m = create_body(tst, cut=noop)
test_eq(len(m), 4)
for n in range(1,5):
m = create_body(tst, n_in=n)
test_eq(_get_first_layer(m)[0].in_channels, n)
```
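The channel-adaptation heuristic in `_load_pretrained_weights` above can be sketched with plain NumPy arrays (a toy weight tensor with hypothetical shapes, not the actual fastai tensors):

```python
import numpy as np

# Pretrained first-conv weights: (out_channels, in_channels=3, kH, kW)
w3 = np.random.randn(8, 3, 3, 3)

# n_in == 1: sum the three RGB kernels into one channel
w1 = w3.sum(axis=1, keepdims=True)   # shape (8, 1, 3, 3)

# n_in == 2: keep the first two channels, scaled by 1.5
w2 = w3[:, :2] * 1.5                 # shape (8, 2, 3, 3)

# n_in > 3: keep the RGB weights, zero the extra channels
w4 = np.zeros((8, 4, 3, 3))
w4[:, :3] = w3
```

Summing for one channel and scaling by 1.5 for two presumably aim to keep the magnitude of the first layer's activations roughly comparable to the pretrained 3-channel case.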
## Head and model
```
#export
def create_head(nf, n_out, lin_ftrs=None, ps=0.5, concat_pool=True, bn_final=False, lin_first=False, y_range=None):
"Model head that takes `nf` features, runs through `lin_ftrs`, and out `n_out` classes."
lin_ftrs = [nf, 512, n_out] if lin_ftrs is None else [nf] + lin_ftrs + [n_out]
ps = L(ps)
if len(ps) == 1: ps = [ps[0]/2] * (len(lin_ftrs)-2) + ps
actns = [nn.ReLU(inplace=True)] * (len(lin_ftrs)-2) + [None]
pool = AdaptiveConcatPool2d() if concat_pool else nn.AdaptiveAvgPool2d(1)
layers = [pool, Flatten()]
if lin_first: layers.append(nn.Dropout(ps.pop(0)))
for ni,no,p,actn in zip(lin_ftrs[:-1], lin_ftrs[1:], ps, actns):
layers += LinBnDrop(ni, no, bn=True, p=p, act=actn, lin_first=lin_first)
if lin_first: layers.append(nn.Linear(lin_ftrs[-2], n_out))
if bn_final: layers.append(nn.BatchNorm1d(lin_ftrs[-1], momentum=0.01))
if y_range is not None: layers.append(SigmoidRange(*y_range))
return nn.Sequential(*layers)
tst = create_head(5, 10)
tst
#hide
mods = list(tst.children())
test_eq(len(mods), 9)
assert isinstance(mods[2], nn.BatchNorm1d)
assert isinstance(mods[-1], nn.Linear)
tst = create_head(5, 10, lin_first=True)
mods = list(tst.children())
test_eq(len(mods), 8)
assert isinstance(mods[2], nn.Dropout)
#export
from fastai2.callback.hook import num_features_model
#export
def create_cnn_model(arch, n_out, cut, pretrained, n_in=3, lin_ftrs=None, ps=0.5, custom_head=None,
bn_final=False, concat_pool=True, y_range=None, init=nn.init.kaiming_normal_):
"Create custom convnet architecture using `base_arch`"
body = create_body(arch, n_in, pretrained, cut)
if custom_head is None:
nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)
head = create_head(nf, n_out, lin_ftrs, ps=ps, concat_pool=concat_pool, bn_final=bn_final, y_range=y_range)
else: head = custom_head
model = nn.Sequential(body, head)
if init is not None: apply_init(model[1], init)
return model
tst = create_cnn_model(models.resnet18, 10, None, True)
tst = create_cnn_model(models.resnet18, 10, None, True, n_in=1)
#export
@delegates(create_cnn_model)
def cnn_config(**kwargs):
"Convenience function to easily create a config for `create_cnn_model`"
return kwargs
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(),
get_y=RegexLabeller(pat = r'/([^/]+)_\d+.jpg$'))
dls = pets.dataloaders(untar_data(URLs.PETS)/"images", item_tfms=RandomResizedCrop(300, min_scale=0.5), bs=64,
batch_tfms=[*aug_transforms(size=224)])
#TODO: refactor, i.e. something like this?
# class ModelSplitter():
# def __init__(self, idx): self.idx = idx
# def split(self, m): return L(m[:self.idx], m[self.idx:]).map(params)
# def __call__(self,): return {'cut':self.idx, 'split':self.split}
#export
def default_split(m:nn.Module): return L(m[0], m[1:]).map(params)
#export
def _xresnet_split(m): return L(m[0][:3], m[0][3:], m[1:]).map(params)
def _resnet_split(m): return L(m[0][:6], m[0][6:], m[1:]).map(params)
def _squeezenet_split(m:nn.Module): return L(m[0][0][:5], m[0][0][5:], m[1:]).map(params)
def _densenet_split(m:nn.Module): return L(m[0][0][:7],m[0][0][7:], m[1:]).map(params)
def _vgg_split(m:nn.Module): return L(m[0][0][:22], m[0][0][22:], m[1:]).map(params)
def _alexnet_split(m:nn.Module): return L(m[0][0][:6], m[0][0][6:], m[1:]).map(params)
_default_meta = {'cut':None, 'split':default_split}
_xresnet_meta = {'cut':-4, 'split':_xresnet_split, 'stats':imagenet_stats}
_resnet_meta = {'cut':-2, 'split':_resnet_split, 'stats':imagenet_stats}
_squeezenet_meta = {'cut':-1, 'split': _squeezenet_split, 'stats':imagenet_stats}
_densenet_meta = {'cut':-1, 'split':_densenet_split, 'stats':imagenet_stats}
_vgg_meta = {'cut':-2, 'split':_vgg_split, 'stats':imagenet_stats}
_alexnet_meta = {'cut':-2, 'split':_alexnet_split, 'stats':imagenet_stats}
#export
model_meta = {
models.xresnet.xresnet18 :{**_xresnet_meta}, models.xresnet.xresnet34: {**_xresnet_meta},
models.xresnet.xresnet50 :{**_xresnet_meta}, models.xresnet.xresnet101:{**_xresnet_meta},
models.xresnet.xresnet152:{**_xresnet_meta},
models.resnet18 :{**_resnet_meta}, models.resnet34: {**_resnet_meta},
models.resnet50 :{**_resnet_meta}, models.resnet101:{**_resnet_meta},
models.resnet152:{**_resnet_meta},
models.squeezenet1_0:{**_squeezenet_meta},
models.squeezenet1_1:{**_squeezenet_meta},
models.densenet121:{**_densenet_meta}, models.densenet169:{**_densenet_meta},
models.densenet201:{**_densenet_meta}, models.densenet161:{**_densenet_meta},
models.vgg11_bn:{**_vgg_meta}, models.vgg13_bn:{**_vgg_meta}, models.vgg16_bn:{**_vgg_meta}, models.vgg19_bn:{**_vgg_meta},
models.alexnet:{**_alexnet_meta}}
```
## `Learner` convenience functions
```
#export
def _add_norm(dls, meta, pretrained):
if not pretrained: return
after_batch = dls.after_batch
if first(o for o in after_batch.fs if isinstance(o,Normalize)): return
stats = meta.get('stats')
if stats is None: return
after_batch.add(Normalize.from_stats(*stats))
#export
@delegates(Learner.__init__)
def cnn_learner(dls, arch, loss_func=None, pretrained=True, cut=None, splitter=None,
y_range=None, config=None, n_in=3, n_out=None, normalize=True, **kwargs):
"Build a convnet style learner"
if config is None: config = {}
meta = model_meta.get(arch, _default_meta)
if n_out is None: n_out = get_c(dls)
    assert n_out, "`n_out` is not defined, and could not be inferred from data; set `dls.c` or pass `n_out`"
if normalize: _add_norm(dls, meta, pretrained)
model = create_cnn_model(arch, n_out, ifnone(cut, meta['cut']), pretrained, n_in=n_in, y_range=y_range, **config)
learn = Learner(dls, model, loss_func=loss_func, splitter=ifnone(splitter, meta['split']), **kwargs)
if pretrained: learn.freeze()
return learn
```
The model is built from `arch` using the number of final activations inferred from `dls` by `get_c`. The model can be `pretrained`, and the architecture is cut and split using the default metadata for that model architecture (this can be customized by passing a `cut` or a `splitter`). To customize the model creation, use `cnn_config` and pass the result to the `config` argument.
```
learn = cnn_learner(dls, models.resnet34, loss_func=CrossEntropyLossFlat(), config=cnn_config(ps=0.25))
test_eq(to_cpu(dls.after_batch[1].mean[0].squeeze()), tensor(imagenet_stats[0]))
#export
@delegates(models.unet.DynamicUnet.__init__)
def unet_config(**kwargs):
"Convenience function to easily create a config for `DynamicUnet`"
return kwargs
#export
@delegates(Learner.__init__)
def unet_learner(dls, arch, loss_func=None, pretrained=True, cut=None, splitter=None, config=None, n_in=3, n_out=None,
normalize=True, **kwargs):
"Build a unet learner from `dls` and `arch`"
if config is None: config = unet_config()
meta = model_meta.get(arch, _default_meta)
body = create_body(arch, n_in, pretrained, ifnone(cut, meta['cut']))
size = dls.one_batch()[0].shape[-2:]
if n_out is None: n_out = get_c(dls)
    assert n_out, "`n_out` is not defined, and could not be inferred from data; set `dls.c` or pass `n_out`"
if normalize: _add_norm(dls, meta, pretrained)
model = models.unet.DynamicUnet(body, n_out, size, **config)
learn = Learner(dls, model, loss_func=loss_func, splitter=ifnone(splitter, meta['split']), **kwargs)
if pretrained: learn.freeze()
return learn
camvid = DataBlock(blocks=(ImageBlock, MaskBlock),
get_items=get_image_files,
splitter=RandomSplitter(),
get_y=lambda o: untar_data(URLs.CAMVID_TINY)/'labels'/f'{o.stem}_P{o.suffix}')
dls = camvid.dataloaders(untar_data(URLs.CAMVID_TINY)/"images", batch_tfms=aug_transforms())
dls.show_batch(max_n=9, vmin=1, vmax=30)
#TODO: Find a way to pass the classes properly
dls.vocab = np.loadtxt(untar_data(URLs.CAMVID_TINY)/'codes.txt', dtype=str)
learn = unet_learner(dls, models.resnet34, loss_func=CrossEntropyLossFlat(axis=1))
learn = unet_learner(dls, models.resnet34, pretrained=True, n_in=4)
```
## Show functions
```
#export
@typedispatch
def show_results(x:TensorImage, y, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorCategory, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
for i in range(2):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
ctxs = [r.show(ctx=c, color='green' if b==r else 'red', **kwargs)
for b,r,c,_ in zip(samples.itemgot(1),outs.itemgot(0),ctxs,range(max_n))]
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:(TensorMask, TensorPoint, TensorBBox), samples, outs, ctxs=None, max_n=6, rows=None, cols=1, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True,
title='Target/Prediction')
for i in range(2):
ctxs[::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[::2],range(2*max_n))]
for o in [samples,outs]:
ctxs[1::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(o.itemgot(0),ctxs[1::2],range(2*max_n))]
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorImage, samples, outs, ctxs=None, max_n=10, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(3*min(len(samples), max_n), cols=3, figsize=figsize, title='Input/Target/Prediction')
for i in range(2):
ctxs[i::3] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[i::3],range(max_n))]
ctxs[2::3] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs[2::3],range(max_n))]
return ctxs
#export
@typedispatch
def plot_top_losses(x: TensorImage, y:TensorCategory, samples, outs, raws, losses, rows=None, cols=None, figsize=None, **kwargs):
axs = get_grid(len(samples), rows=rows, cols=cols, add_vert=1, figsize=figsize, title='Prediction/Actual/Loss/Probability')
for ax,s,o,r,l in zip(axs, samples, outs, raws, losses):
s[0].show(ctx=ax, **kwargs)
ax.set_title(f'{o[0]}/{s[1]} / {l.item():.2f} / {r.max().item():.2f}')
#export
@typedispatch
def plot_top_losses(x: TensorImage, y:TensorMultiCategory, samples, outs, raws, losses, rows=None, cols=None, figsize=None, **kwargs):
axs = get_grid(len(samples), rows=rows, cols=cols, add_vert=1, figsize=figsize)
for i,(ax,s) in enumerate(zip(axs, samples)): s[0].show(ctx=ax, title=f'Image {i}', **kwargs)
rows = get_empty_df(len(samples))
outs = L(s[1:] + o + (TitledStr(r), TitledFloat(l.item())) for s,o,r,l in zip(samples, outs, raws, losses))
for i,l in enumerate(["target", "predicted", "probabilities", "loss"]):
rows = [b.show(ctx=r, label=l, **kwargs) for b,r in zip(outs.itemgot(i),rows)]
display_df(pd.DataFrame(rows))
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# Marginal Likelihood Implementation
The `gp.Marginal` class implements the more common case of GP regression: the observed data are the sum of a GP and Gaussian noise. `gp.Marginal` has a `marginal_likelihood` method, a `conditional` method, and a `predict` method. Given a mean and covariance function, the function $f(x)$ is modeled as,
$$
f(x) \sim \mathcal{GP}(m(x),\, k(x, x')) \,.
$$
The observations $y$ are the unknown function plus noise
$$
\begin{aligned}
\epsilon &\sim N(0, \Sigma) \\
y &= f(x) + \epsilon \\
\end{aligned}
$$
## The `.marginal_likelihood` method
The unknown latent function can be analytically integrated out of the product of the GP prior probability with a normal likelihood. This quantity is called the marginal likelihood.
$$
p(y \mid x) = \int p(y \mid f, x) \, p(f \mid x) \, df
$$
The log of the marginal likelihood, $p(y \mid x)$, is
$$
\log p(y \mid x) =
-\frac{1}{2} (\mathbf{y} - \mathbf{m}_x)^{T}
(\mathbf{K}_{xx} + \boldsymbol\Sigma)^{-1}
(\mathbf{y} - \mathbf{m}_x)
- \frac{1}{2}\log \left| \mathbf{K}_{xx} + \boldsymbol\Sigma \right|
- \frac{n}{2}\log (2 \pi)
$$
$\boldsymbol\Sigma$ is the covariance matrix of the Gaussian noise. Since the Gaussian noise doesn't need to be white to be conjugate, the `marginal_likelihood` method supports either using a white noise term when a scalar is provided, or a noise covariance function when a covariance function is provided.
The `gp.marginal_likelihood` method implements the quantity given above. Some sample code would be,
```python
import numpy as np
import pymc3 as pm
# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:,None]
with pm.Model() as marginal_gp_model:
# Specify the covariance function.
cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
# Specify the GP. The default mean function is `Zero`.
gp = pm.gp.Marginal(cov_func=cov_func)
# The scale of the white noise term can be provided,
sigma = pm.HalfCauchy("sigma", beta=5)
y_ = gp.marginal_likelihood("y", X=X, y=y, noise=sigma)
# OR a covariance function for the noise can be given
# noise_l = pm.Gamma("noise_l", alpha=2, beta=2)
# cov_func_noise = pm.gp.cov.Exponential(1, noise_l) + pm.gp.cov.WhiteNoise(sigma=0.1)
# y_ = gp.marginal_likelihood("y", X=X, y=y, noise=cov_func_noise)
```
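For intuition, the closed-form log marginal likelihood above can be evaluated directly with NumPy (a toy sketch with hypothetical data and hyperparameters; PyMC3 computes this for you):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = np.linspace(0, 1, n)

# Squared-exponential kernel K_xx plus white-noise covariance Sigma
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
Sigma = 0.5 ** 2 * np.eye(n)
y = rng.normal(size=n)
m = np.zeros(n)  # zero mean function

# log p(y | x) = -1/2 r^T (K + Sigma)^{-1} r - 1/2 log|K + Sigma| - n/2 log(2 pi)
A = K + Sigma
r = y - m
logp = (-0.5 * r @ np.linalg.solve(A, r)
        - 0.5 * np.linalg.slogdet(A)[1]
        - 0.5 * n * np.log(2 * np.pi))
print(logp)
```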
## The `.conditional` distribution
The `.conditional` has an optional flag for `pred_noise`, which defaults to `False`. When `pred_noise=False`, the `conditional` method produces the predictive distribution for the underlying function represented by the GP. When `pred_noise=True`, the `conditional` method produces the predictive distribution for the GP plus noise. Using the same `gp` object defined above,
```python
# vector of new X points we want to predict the function at
Xnew = np.linspace(0, 2, 100)[:, None]
with marginal_gp_model:
f_star = gp.conditional("f_star", Xnew=Xnew)
# or to predict the GP plus noise
y_star = gp.conditional("y_star", Xnew=Xnew, pred_noise=True)
```
If using an additive GP model, the conditional distribution for individual components can be constructed by setting the optional argument `given`. For more information on building additive GPs, see the main documentation page. For an example, see the Mauna Loa CO$_2$ notebook.
## Making predictions
The `.predict` method returns the conditional mean and variance of the `gp` given a `point` as NumPy arrays. The `point` can be the result of `find_MAP` or a sample from the trace. The `.predict` method can be used outside of a `Model` block. Like `.conditional`, `.predict` accepts `given` so it can produce predictions from components of additive GPs.
```python
# The mean and full covariance
mu, cov = gp.predict(Xnew, point=trace[-1])
# The mean and variance (diagonal of the covariance)
mu, var = gp.predict(Xnew, point=trace[-1], diag=True)
# With noise included
mu, var = gp.predict(Xnew, point=trace[-1], diag=True, pred_noise=True)
```
## Example: Regression with white, Gaussian noise
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import scipy as sp
%matplotlib inline
# set the seed
np.random.seed(1)
n = 100 # The number of data points
X = np.linspace(0, 10, n)[:, None] # The inputs to the GP, they must be arranged as a column vector
# Define the true covariance function and its parameters
ℓ_true = 1.0
η_true = 3.0
cov_func = η_true**2 * pm.gp.cov.Matern52(1, ℓ_true)
# A mean function that is zero everywhere
mean_func = pm.gp.mean.Zero()
# The latent function values are one sample from a multivariate normal
# Note that we have to call `eval()` because PyMC3 is built on top of Theano
f_true = np.random.multivariate_normal(
mean_func(X).eval(), cov_func(X).eval() + 1e-8 * np.eye(n), 1
).flatten()
# The observed data is the latent function plus a small amount of IID Gaussian noise
# The standard deviation of the noise is `sigma`
σ_true = 2.0
y = f_true + σ_true * np.random.randn(n)
## Plot the data and the unobserved latent function
fig = plt.figure(figsize=(12, 5))
ax = fig.gca()
ax.plot(X, f_true, "dodgerblue", lw=3, label="True f")
ax.plot(X, y, "ok", ms=3, alpha=0.5, label="Data")
ax.set_xlabel("X")
ax.set_ylabel("The true f(x)")
plt.legend();
with pm.Model() as model:
ℓ = pm.Gamma("ℓ", alpha=2, beta=1)
η = pm.HalfCauchy("η", beta=5)
cov = η**2 * pm.gp.cov.Matern52(1, ℓ)
gp = pm.gp.Marginal(cov_func=cov)
σ = pm.HalfCauchy("σ", beta=5)
y_ = gp.marginal_likelihood("y", X=X, y=y, noise=σ)
mp = pm.find_MAP()
# collect the results into a pandas dataframe to display
# "mp" stands for marginal posterior
pd.DataFrame(
{
"Parameter": ["ℓ", "η", "σ"],
"Value at MAP": [float(mp["ℓ"]), float(mp["η"]), float(mp["σ"])],
"True value": [ℓ_true, η_true, σ_true],
}
)
```
The MAP values are close to their true values.
### Using `.conditional`
```
# new values from x=0 to x=20
X_new = np.linspace(0, 20, 600)[:, None]
# add the GP conditional to the model, given the new X values
with model:
f_pred = gp.conditional("f_pred", X_new)
# To use the MAP values, you can just replace the trace with a length-1 list containing `mp`
with model:
pred_samples = pm.sample_posterior_predictive([mp], vars=[f_pred], samples=2000)
# plot the results
fig = plt.figure(figsize=(12, 5))
ax = fig.gca()
# plot the samples from the gp posterior with samples and shading
from pymc3.gp.util import plot_gp_dist
plot_gp_dist(ax, pred_samples["f_pred"], X_new)
# plot the data and the true latent function
plt.plot(X, f_true, "dodgerblue", lw=3, label="True f")
plt.plot(X, y, "ok", ms=3, alpha=0.5, label="Observed data")
# axis labels and title
plt.xlabel("X")
plt.ylim([-13, 13])
plt.title("Posterior distribution over $f(x)$ at the observed values")
plt.legend();
```
The prediction also matches the results from `gp.Latent` very closely. What about predicting new data points? Here we only predicted $f_*$, not $f_*$ + noise, which is what we actually observe.
The `conditional` method of `gp.Marginal` contains the flag `pred_noise` whose default value is `False`. To draw from the *posterior predictive* distribution, we simply set this flag to `True`.
```
with model:
y_pred = gp.conditional("y_pred", X_new, pred_noise=True)
y_samples = pm.sample_posterior_predictive([mp], vars=[y_pred], samples=2000)
fig = plt.figure(figsize=(12, 5))
ax = fig.gca()
# posterior predictive distribution
plot_gp_dist(ax, y_samples["y_pred"], X_new, plot_samples=False, palette="bone_r")
# overlay a scatter of one draw of random points from the
# posterior predictive distribution
plt.plot(X_new, y_samples["y_pred"][800, :].T, "co", ms=2, label="Predicted data")
# plot original data and true function
plt.plot(X, y, "ok", ms=3, alpha=1.0, label="observed data")
plt.plot(X, f_true, "dodgerblue", lw=3, label="true f")
plt.xlabel("x")
plt.ylim([-13, 13])
plt.title("posterior predictive distribution, y_*")
plt.legend();
```
Notice that the posterior predictive density is wider than the conditional distribution of the noiseless function, and reflects the predictive distribution of the noisy data, which is marked as black dots. The light colored dots don't follow the spread of the predictive density exactly because they are a single draw from the posterior of the GP plus noise.
### Using `.predict`
We can use the `.predict` method to return the mean and variance given a particular `point`. Since we used `find_MAP` in this example, `predict` returns the same mean and covariance that the distribution of `.conditional` has.
```
# predict
mu, var = gp.predict(X_new, point=mp, diag=True)
sd = np.sqrt(var)
# draw plot
fig = plt.figure(figsize=(12, 5))
ax = fig.gca()
# plot mean and 2σ intervals
plt.plot(X_new, mu, "r", lw=2, label="mean and 2σ region")
plt.plot(X_new, mu + 2 * sd, "r", lw=1)
plt.plot(X_new, mu - 2 * sd, "r", lw=1)
plt.fill_between(X_new.flatten(), mu - 2 * sd, mu + 2 * sd, color="r", alpha=0.5)
# plot original data and true function
plt.plot(X, y, "ok", ms=3, alpha=1.0, label="observed data")
plt.plot(X, f_true, "dodgerblue", lw=3, label="true f")
plt.xlabel("x")
plt.ylim([-13, 13])
plt.title("predictive mean and 2σ interval")
plt.legend();
%load_ext watermark
%watermark -n -u -v -iv -w
```
```
import numpy as np
import pandas as pd
import plotly.graph_objects as go
df = pd.read_csv("/content/us_job_industry_data_2019.csv")
# TO DO: MAP CITIES TO MASTER LIST
def wrangle(X):
"""
Wrangles and cleans dataframe
"""
# Creating 2 copies to handle numeric and non-numeric data
numeric = X.copy()
non_numeric = X.copy()
# Filtering dataframe to retain relevant numeric columns
numeric = numeric.filter(["tot_emp", "jobs_1000_orig", "loc_quotient", "h_mean",
"a_mean", "h_pct25", "h_median", "h_pct75", "h_pct90",
"a_pct25", "a_median", "a_pct75", "a_pct90"], axis=1)
# Renaming columns
numeric = numeric.rename(columns={"tot_emp":"Total Employed",
"jobs_1000_orig":"Jobs per 1000",
"loc_quotient":"Job Dilution",
"h_mean":"Hourly Wage Mean",
"a_mean":"Annual Wage Mean",
"h_pct25":"Hourly Wage (25th Percentile)",
"h_median":"Hourly Wage (Median)",
"h_pct75":"Hourly Wage (75th Percentile)",
"h_pct90":"Hourly Wage (90th Percentile)",
"a_pct25":"Annual Wage (25th Percentile)",
"a_median":"Annual Wage (Median)",
"a_pct75":"Annual Wage (75th Percentile)",
"a_pct90":"Annual Wage (90th Percentile)"})
# Replacing NaN values with 0s
numeric = numeric.replace(np.nan, 0)
numeric = numeric.replace("*", 0)
numeric = numeric.replace("**", 0)
numeric["Hourly Wage Mean"] = numeric["Hourly Wage Mean"].replace("#", 100)
numeric["Annual Wage Mean"] = numeric["Annual Wage Mean"].replace("#", 208000)
numeric["Hourly Wage (25th Percentile)"] = numeric["Hourly Wage (25th Percentile)"].replace("#", 100)
numeric["Hourly Wage (Median)"] = numeric["Hourly Wage (Median)"].replace("#", 100)
numeric["Hourly Wage (75th Percentile)"] = numeric["Hourly Wage (75th Percentile)"].replace("#", 100)
numeric["Hourly Wage (90th Percentile)"] = numeric["Hourly Wage (90th Percentile)"].replace("#", 100)
numeric["Annual Wage (25th Percentile)"] = numeric["Annual Wage (25th Percentile)"].replace("#", 208000)
numeric["Annual Wage (Median)"] = numeric["Annual Wage (Median)"].replace("#", 208000)
numeric["Annual Wage (75th Percentile)"] = numeric["Annual Wage (75th Percentile)"].replace("#", 208000)
numeric["Annual Wage (90th Percentile)"] = numeric["Annual Wage (90th Percentile)"].replace("#", 208000)
numeric = numeric.replace(",", "", regex=True)
# Converting data to numbers for easy visualization
numeric = numeric.astype(float)
# Creating job sector percentage column
numeric["Job Sector Percentage"] = numeric["Jobs per 1000"] / 10
# Handling non-numeric data
non_numeric[["area_title", "area_state"]] = non_numeric["area_title"].str.split(",", expand=True)
non_numeric = non_numeric.filter(["area_title", "area_state", "occ_title"])
non_numeric = non_numeric.rename(columns={"area_title":"City", "area_state":"State", "occ_title":"Job Sector"})
# Resetting indices to concatenate
numeric.reset_index(drop=True, inplace=True)
non_numeric.reset_index(drop=True, inplace=True)
return pd.concat([non_numeric, numeric], axis=1)
# Condensing df to only include statistics at city level
df_wrangle = df[ df["area_type"] == 4]
df_wrangle = wrangle(df_wrangle)
df_wrangle.head()
def pie(df, city, n_industries):
df = wrangle(df)
df_city_top10 = df[ df["City"] == city].sort_values(by="Job Sector Percentage", ascending=False)[1:n_industries + 1]
top_10_labels = df_city_top10["Job Sector"]
top_10_values = df_city_top10["Job Sector Percentage"]
df_top10_aggregate = pd.DataFrame({"Job Sector": top_10_labels,
"Job Sector Percentage": top_10_values})
df_city_other = pd.DataFrame({"Job Sector": ["Other"],
"Job Sector Percentage": [100 - sum(top_10_values)]})
df_combined = pd.concat([df_top10_aggregate, df_city_other])
fig = go.Figure(data=[go.Pie(labels=df_combined["Job Sector"], values=df_combined["Job Sector Percentage"], textinfo="label+percent", hole=.3)])
fig.update_layout(margin=dict(l=20, r=20, t=20, b=20))
return fig.show()
df_pie = df[ df["area_type"] == 4]
pie(df_pie, "Danville", 10)
```
# Alignment & Operations
This notebook is more about *understanding* pandas, "going with the flow", than any particular method or operation.
Alignment is a key part of many parts of pandas, including
- binary operations (`+, -, *, /, **, ==, |, &`) between pandas objects
- merges / joins / concats
- constructors (`pd.DataFrame`, `pd.Series`)
- reindexing
That said, it's not really something you'll be doing explicitly.
It happens in the background, as part of all those tasks.
As far as I know, it's unique to pandas, so it may not click immediately.
It's all about pandas using *labels* (`Series/DataFrame.index` and `DataFrame.columns`) to do the tricky work of making sure the operation goes through correctly.
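Before diving in, a minimal toy example (hypothetical values) shows what alignment does to two `Series` with partially overlapping labels:

```python
import pandas as pd

s1 = pd.Series([1, 2], index=["a", "b"])
s2 = pd.Series([10, 20], index=["b", "c"])

# Labels are unioned; positions that exist in only one operand become NaN
total = s1 + s2
print(total)
# a     NaN
# b    12.0
# c     NaN
```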
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython import display
%matplotlib inline
pd.options.display.max_rows = 10
sns.set(context='talk')
plt.style.use('default')
```
## Alignment without row labels (bad)
- separate datasets on GDP and CPI
- Goal: compute real GDP
- Problem: Different frequencies
I grabbed some data from [FRED](https://fred.stlouisfed.org/) on nominal US GDP (total output each quarter) and CPI (a measure of inflation).
Each CSV has a column of dates, and a column for the measurement.
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>DATE</th>
<th>CPIAUCSL</th>
</tr>
</thead>
<tbody>
<tr>
<td>1947-01-01</td>
<td>21.48</td>
</tr>
<tr>
<td>1947-02-01</td>
<td>21.62</td>
</tr>
<tr>
<td>1947-03-01</td>
<td>22.00</td>
</tr>
<tr>
<td>1947-04-01</td>
<td>22.00</td>
</tr>
<tr>
<td>1947-05-01</td>
<td>21.95</td>
</tr>
</tbody>
</table>
Typically, we would use `DATE` as the index (`index_col='DATE'` in `read_csv`).
But to appreciate the value of labels, we'll take them away for now.
This will result in the default `range(n)` index.
```
# The "wrong" way
# Read in CPI & GDP, parsing the dates
gdp_bad = pd.read_csv("data/gdp.csv", parse_dates=['DATE'])
cpi_bad = pd.read_csv("data/cpi.csv", parse_dates=['DATE'])
gdp_bad.head()
cpi_bad.head()
```
## Goal: Compute Real GDP
Our task is to calculate *real* GDP.
The data in the CSV is nominal GDP; it hasn't been adjusted for inflation.
To compute real GDP, you take nominal GDP (`gdp_bad`) and divide by a measure of inflation (`cpi_bad`).
- nominal GDP: Total output in dollars
- real GDP: Total output in constant dollars
- $\mathrm{real\ gdp} = \frac{\mathrm{nominal\ gdp}}{\mathrm{inflation}}$
Ideally, this would be as simple as `gdp_bad / cpi_bad`, but we have a slight issue: `gdp_bad` is measured quarterly, while `cpi_bad` is monthly.
The two need to be *aligned* before we can do the conversion from nominal to real GDP.
Normally, pandas would do this for us, but since we don't have meaningful row labels we have to do it manually.
We'll find the dates in common between the two series, manually filter to those, and then do the division.
You could do this a few ways; we'll go with a sql-style merge, roughly:
```SQL
select "DATE",
GDP / CPIAUCSL as real_gdp
from gdp_data
join cpi_data using ("DATE")
```
```
# merge on DATE, divide
m = pd.merge(gdp_bad, cpi_bad, on='DATE', how='inner')
m.head()
m['GDP'] / m['CPIAUCSL']
```
## Problems
1. The output has lost the `DATE` fields, we would need to manually bring those along after doing the division
2. We had to worry about doing the merge, which is incidental to the problem of calculating real gdp
## The Better Way
- Use row labels
- Specify `index_col='DATE'` in `read_csv`
- Just do the operation: `gdp / cpi`
When we have meaningful row labels shared across pandas objects, pandas will handle all the fiddly details of alignment for us.
Let's do things the proper way now, using `DATE` as our row labels.
We could use `gdp = gdp_bad.set_index("DATE")` to move a column into the index, but we'll just re-read the data from disk, passing `index_col` to `read_csv`.
```
# use .squeeze to convert a 1 column df to a Series
gdp = pd.read_csv('data/gdp.csv', index_col='DATE',
parse_dates=['DATE']).squeeze()
gdp.head()
cpi = pd.read_csv('data/cpi.csv', index_col='DATE',
parse_dates=['DATE']).squeeze()
cpi.head()
```
Now when you do the division, pandas will handle the alignment.
```
rgdp = gdp / cpi
rgdp
```
You'll notice that a bunch of the values are `NaN`, short for ["Not A Number"](https://en.wikipedia.org/wiki/NaN).
This is the missing value indicator pandas uses for numeric data.
The `NaN`s are there because alignment produces the *union* of the two Indexes.
## Explicit Alignment
Roughly speaking, alignment composes two operations:
1. union the labels
2. reindex the data to conform to the unioned labels, inserting `NaN`s where necessary
```
# step 1: union indexes
full_idx = gdp.index.union(cpi.index)
full_idx
# step 2: reindex
gdp.reindex(full_idx)
```
Once the data have been reindexed, the operation (like `/` in our case) proceeds.
```
gdp.reindex(full_idx) / cpi.reindex(full_idx)
```
Occasionally, you will do a manual `reindex`, but most of the time it's done in the background when you do an operation.
<div class="alert alert-success" data-title="Compute Real GDP">
<h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Compute Real GDP</h1>
</div>
<p>Compute real GDP in 2009 dollars</p>
You'll hear real GDP reported in '2009 dollars', or '2005 dollars'.
The deflator (CPI in our case) is an index, and doesn't really have units.
Some time span is chosen to be the base and set equal to 100.
Every other observation is relative to it.
The [data from FRED](https://fred.stlouisfed.org/series/CPIAUCSL) is indexed to 1982-1984.
For the exercise, compute real-gdp in 2009 dollars.
- Step 1: Convert CPI from base 1982-1984, to base 2009; Create a new series `cpi09` where the average value for 2009 is 100
+ Hint: Use [partial string indexing](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#datetimeindex-partial-string-indexing) to slice the values for just 2009
+ Divide the original `cpi` by that value and rescale to be an index (1 -> 100)
- Step 2: Divide `gdp` by the result from Step 1
```
# Your solution
cpi09 = cpi / ... * 100
...
%load solutions/alignment_real_gdp09.py
```
To the extent possible, you should use *meaningful labels*, rather than the default `range(n)` index.
This will put the burden of aligning things on pandas, rather than your memory.
Additionally, labels like the date are often "nuisance" columns that would have to be dropped and recombined when doing arithmetic calculations.
When they're in the `.index`, they come along with the calculation but don't get in the way.
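A tiny illustration of why labels beat positions: pandas lines the values up by label even when the rows are stored in different orders.

```python
import pandas as pd

# b's index is in a different order, but labels, not positions,
# drive the arithmetic
a = pd.Series([1, 2, 3], index=['x', 'y', 'z'])
b = pd.Series([30, 10, 20], index=['z', 'x', 'y'])
total = a + b
total.to_dict()
```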
## Alignment on *both* axes
This may surprise you at some point down the road.
Above, we used the `.squeeze()` method to turn the one-column `DataFrame` into a `Series`.
We did this because pandas will align on both the index *and* the columns.
Can you guess what would happen if we divided two DataFrames, with different column names?
```
gdp_ = pd.read_csv('data/gdp.csv', index_col='DATE',
parse_dates=['DATE'])
gdp_.head()
cpi_ = pd.read_csv('data/cpi.csv', index_col='DATE',
parse_dates=['DATE'])
cpi_.head()
gdp_ / cpi_
```
So pandas aligned by the columns, in addition to the index.
Recall that alignment does the set *union*, so the output DataFrame has both CPI and GDP, which probably isn't what we wanted here.
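A minimal reproduction of the effect with toy frames (column names borrowed from the example, values made up): neither frame has values for the other's column, so everything comes out missing.

```python
import pandas as pd

# two DataFrames with different column names: the column union is
# {CPI, GDP}, and every cell of the result is NaN
df1 = pd.DataFrame({'GDP': [1.0, 2.0]})
df2 = pd.DataFrame({'CPI': [3.0, 4.0]})
out = df1 / df2
out.isna().all().all()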
## Aside: Handling Missing Data
Pandas, recognizing that missing data is a fact of life, has a bunch of methods for detecting and handling missing data.
1. detecting missing data
2. dropping missing data
3. filling missing data
## Detecting Missing Data
1. `pd.isna(), df.isna()`
2. `pd.notna(), df.notna()`
```
# detect with `isna` and `notna`
rgdp.isna().head()
rgdp.notna().head()
```
These are often useful as masks for boolean indexing:
```
rgdp[rgdp.isna()].head()
```
Or for counting (True counts as 1, and False as 0 for numeric operations):
```
rgdp.isna().sum()
```
## Dropping Missing Data
You can drop missing values with `.dropna`
```
DataFrame.dropna
Return object with labels on given axis omitted where
alternately any or all of the data are missing
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, or tuple/list thereof
Pass tuple or list to drop on multiple axes
how : {'any', 'all'}
* any : if any NA values are present, drop that label
* all : if all values are NA, drop that label
```
```
rgdp.dropna()
```
Almost all pandas methods return a new Series or DataFrame and do not mutate data in place.
`rgdp` still has the missing values, even though we called `.dropna`.
```
rgdp.head()
```
To make the change stick, you can assign the output to a new variable (or re-assign it to `rgdp`) like `rgdp = rgdp.dropna()`.
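A quick demonstration of the non-mutating behavior on a throwaway Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
cleaned = s.dropna()   # returns a *new* Series

# the original still has 3 entries, including the NaN
len(s), len(cleaned)
```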
## Dropna for DataFrames
Since `DataFrame` is a 2-d container, there are additional complexities with dropping missing data.
Do you drop the row or column? Does just one value in the row or column have to be missing, or all of them?
```
# We'll see concat later
df = pd.concat([gdp, cpi], axis='columns')
df.head()
```
The defaults, shown next, are to drop *rows* (`axis='index'`) that
have any missing values (`how='any'`):
```
df.dropna(axis='index', how='any')
```
You can drop a row only if all of its values are missing:
```
df.dropna(axis='index', how='all')
```
<div class="alert alert-success" data-title="Dropping Columns">
<h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Dropping Columns</h1>
</div>
<p>Drop any `columns` in `df` that have at least one missing value</p>
```
%load solutions/dropna_columns.py
```
## Filling Missing Values
Use `.fillna` to fill missing values, either with a value (a scalar, an array, or a mapping of `label: value`) or with a method like `'ffill'`, which forward-fills the last observed value.
```
rgdp.fillna(method='ffill').plot()
sns.despine()
```
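For a toy Series, the two flavors of filling look like this (`.ffill()` is equivalent to the `fillna(method='ffill')` call above):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1.0, np.nan, np.nan, 4.0])
filled_zero = ser.fillna(0)  # scalar fill: every NaN becomes 0
filled_fwd = ser.ffill()     # forward fill: carry the last observation forward
```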
Missing data will come up throughout.
## Joining Pandas Objects
You'll often have multiple `Series` or `DataFrame`s that you want to join into a single `DataFrame`.
We saw an example of this earlier, but let's follow it up as a pair of exercises.
There are two main options:
1. `pd.merge`: SQL-style joins
2. `pd.concat`: array-style joins
When to use `merge` vs. `concat`?
My general rule is to use `concat` for one-to-one joins of two or more Series/DataFrames, where you're joining on the index.
I use `pd.merge` for database-style joins that are one-to-many or many-to-many, or whenever you're joining on a column.
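A small sketch of that rule of thumb (toy data):

```python
import pandas as pd

# concat: one-to-one join on the index
a = pd.Series([1, 2], index=['x', 'y'], name='a')
b = pd.Series([3, 4], index=['x', 'y'], name='b')
wide = pd.concat([a, b], axis='columns')

# merge: one-to-many join on a column; right's 'x' row repeats
left = pd.DataFrame({'key': ['x', 'x', 'y'], 'val': [1, 2, 3]})
right = pd.DataFrame({'key': ['x', 'y'], 'other': [10, 20]})
joined = pd.merge(left, right, on='key')
```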
<div class="alert alert-success" data-title="Merge Datasets">
<h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Merge Datasets</h1>
</div>
<p>
Use [`pd.merge`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html) to join the two DataFrames `gdp_bad` and `cpi_bad`, using an *outer* join (earlier we used an *inner* join).
</p>
- Hint: You may want to sort by date afterward (see [`DataFrame.sort_values`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html))
```
# Your solution
%load solutions/aligment_merge.py
```
<div class="alert alert-success" data-title="Concatenate Datasets">
<h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Concatenate Datasets</h1>
</div>
<p>
Use [`pd.concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) to stick together `gdp` and `cpi` into a DataFrame</p>
- Hint: what should the argument to `axis` be?
```
# Your solution
%load solutions/aligment_concat.py
```
## ufuncs And Reductions
These next couple of topics aren't really related to alignment, but I didn't have anywhere else to put them.
NumPy has the concept of [universal functions](https://docs.scipy.org/doc/numpy/reference/ufuncs.html) (ufuncs) that operate on any sized array.
```
np.log(df)
```
`ufuncs` work elementwise, which means they don't care about the dimensions, just the data types.
Even something like adding a scalar is a ufunc.
```
df + 100
```
## Reductions
`DataFrame` has many methods that *reduce* a DataFrame to a Series by aggregating over a dimension.
Likewise, `Series` has many methods that collapse down to a scalar.
Some examples are `.mean`, `.std`, `.max`, `.any`, `.all`.
Let's get a DataFrame with two columns on a similar scale.
The `pct_change` method returns `(current - previous) / previous` for each row (with `NaN` for the first row, since there isn't a previous one).
```
pct_change = df.dropna().pct_change()
pct_change.head()
pct_change.plot();
```
By default, the index (0) axis is reduced for `DataFrames`.
```
pct_change.mean()
```
To collapse the columns (leaving the same row labels), use the `axis` argument.
Specifying `axis='columns'` or `axis=1` will aggregate over the columns.
```
pct_change.max(axis='columns')
```
If you have trouble remembering, the `axis` argument specifies the axis you want to *remove*.
```
# Which column had the larger percent change?
pct_change.idxmax(axis="columns")
```
<div class="alert alert-success" data-title="Percent Positive">
<h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Percent Positive</h1>
</div>
<p>Exercise: What percent of the periods had a positive percent change for each column?</p>
```
%load solutions/alignment_positive.py
```
## Summary
- Auto-alignment in pandas is different from most other systems
- Let pandas handle the details of alignment; you worry about the important things
- Pandas methods are non-mutating
- `.dropna`, `.fillna`, `.isna` for handling missing data
# Tutorial XX: Template
This tutorial walks you through the process of *FILL IN*. The reason behind when and why this is important should be briefly described in the remainder of this paragraph. If possible, this should be further elucidated by a complementary figure, which can be placed in the folder *tutorials/img*. Figure 1 below serves as an example.
<img src="img/template_img.png">
<center>**Figure 1.** A template image</center>
The remainder of this tutorial is organized as follows:
* Section 1 does XXX.
* Section 2 does YYY.
* Section 3 does ZZZ.
## 1. Demonstrating the Functionality of Code
All sections in this tutorial should consist of blocks of text describing a specific method or set of methods in Flow, followed by a code snippet demonstrating the described methods in use.
For example, let us say we want to show how to add two numbers in python. We might begin by introducing the numbers and methods we will use to do the addition:
```
a = 1
b = 2

def add(a, b):
    return a + b
```
Then we could show the functionality of the methods as follows:
```
add(a, b)
```
Whenever possible, sections should also be broken up into smaller subsections to help the reader more quickly identify which portion of the tutorial discusses which topic. This may be helpful to readers who are just interested in a certain concept, e.g. who just want to find out how a specific parameter works. An example of this separation into subsections can be seen in Section 2.
## 2. Sequentially Building a Class
Classes, like lines of code presented in the prior section, should be described and written sequentially rather than all at once. An example of this can be for a class version of the add method, described below.
### 2.1 Introduce and Instantiate the Class
We begin by defining the class with its `__init__` method, and import any necessary modules. Make sure there are enough comments in the code to make it as self-explanatory as possible.
```
import numpy as np
class Adder(object):

    def __init__(self, numbers):
        """Instantiate the Adder class.

        Parameters
        ----------
        numbers : array_like
            the numbers that should be added together.
        """
        self.numbers = numbers
```
### 2.2 Include (Sequentially) New Methods
Next, we add a new method to the class. We do this by recreating the class and having it inherit its previous properties. As an example, let us consider adding a `run` method to the `Adder` class that adds the numbers provided to the class during instantiation.
```
class Adder(Adder):  # continuing from the previous definition of Adder

    def run(self):
        return np.sum(self.numbers)
```
### 2.3 Demonstrate Functionality of the Class
Finally, we can demonstrate the functionality of the class, through testing and validating the class, as seen in the code snippet below.
```
adder = Adder([1, 2, 3])
print("The sum of the values is:", adder.run())
```
## 3. Adding the Changes to the README and Website
Once you have completed your tutorial, you must include it in all relevant descriptors of Flow's tutorials. This includes adding it to both the README and the Flow website.
### 3.1 README
First, add the new tutorial to the README.md file located in the tutorials/ directory (see the figure below). This should be included in your Pull Request (PR) whenever you create a new tutorial.
<img src="img/tutorials_readme.png">
You just need to add your tutorial with the correct number and title under the last tutorial in the README.md:
`
**Tutorial XX:** Name of your tutorial.
`
### 3.2 Website
Next, you need to inform the Flow web designer to add your tutorial to the Flow website:
<img src="img/tutorials_website.png">
To do so, send the Title and the Github link to your tutorial to the Flow web designer.
```
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer
import sklearn
import boto3
from s3 import get_file
from sklearn.decomposition import LatentDirichletAllocation
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
from matplotlib import pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolor
```
# Import Data from Amazon S3 into DataFrame
```
# connect to Amazon S3
s3 = boto3.resource('s3')
lyrics = get_file(s3,'s3ssp', download_file='NLP_Data/new_master_lyrics_audio_features.csv',rename_file='nlp.csv')
# create a pandas dataframe and drop 'na'
df = pd.read_csv(lyrics,sep='|',encoding='utf-8')
df_demo = df.copy().dropna()
# print top words in a topic
def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        message = "Topic #%d: " % topic_idx
        message += " ".join([feature_names[i]
                             for i in topic.argsort()[:-n_top_words - 1:-1]])
        print(message)
        print()
# Styling
def color_green(val):
    color = 'green' if val > .1 else 'black'
    return 'color: {col}'.format(col=color)

def make_bold(val):
    weight = 700 if val > .1 else 400
    return 'font-weight: {weight}'.format(weight=weight)
# Remove more stopwords identified in pyLDAvis
df_demo['new_lyrics'] = df_demo['lyrics'].apply(
    lambda x: x.replace("wan", "")
               .replace("chorus", "")
               .replace("verse", "")
               .replace("gon", ""))
#create samples
df_test_one = df_demo.sample(1000)
df_test_two = df_demo.sample(3000)
df_test_three = df_demo.sample(5000)
#CountVectorizer hyperparameters
max_df = .5
min_df = .005
#LDA hyperparameters
n_topics = 30
n_words=10
#Fit CountVectorizer on lyrics data
vectorizer = CountVectorizer(analyzer='word',
                             min_df=min_df, max_df=max_df,
                             stop_words='english',
                             lowercase=True,
                             token_pattern='[a-zA-Z0-9]{3,}')  # tokens of 3+ chars
data_vectorized = vectorizer.fit_transform(df_test_two['new_lyrics'])
```
# Begin Topic Modeling
```
# lda model
lda_model = LatentDirichletAllocation(n_components=n_topics, learning_method="online",
max_iter=35, random_state=0, doc_topic_prior=.01)
lda_model.fit(data_vectorized)
```
# Visualize Topics Spatially
```
# visualize n_topics spatially
pyLDAvis.sklearn.prepare(lda_model, data_vectorized, vectorizer, mds='tsne')
print("\nTopics in LDA model:")
cv_feature_names = vectorizer.get_feature_names()
print_top_words(lda_model, cv_feature_names, 30)
```
# Calculate probabilities and find dominant topics for each document
```
# Create Document — Topic Matrix
lda_output = lda_model.transform(data_vectorized)
# column names
topicnames = ["Topic" + str(i) for i in range(lda_model.n_components)]
# index names
docnames = [df_test_two['track_uri'].iloc[i] + str(i) for i in range(len(df_test_two))]
# Make the pandas dataframe
df_document_topic = pd.DataFrame(np.round(lda_output, 2), columns=topicnames, index=docnames)
# Get dominant topic for each document
dominant_topic = np.argmax(df_document_topic.values, axis=1)
df_document_topic['dominant_topic'] = dominant_topic
# Apply Style
df_document_topics = df_document_topic.sample(50).style.applymap(color_green).applymap(make_bold)
df_document_topics
# print documents by topic in order of probability
df_document_topic[['Topic1', 'dominant_topic']].sort_values(by='Topic1', ascending=False)
```
# Sign & Speak ML Instructions
This notebook shows how to use Amazon SageMaker to run the training and inference scripts for the Sign & Speak project.
Use the `conda_pytorch_p36` kernel to run the cells in this notebook.
## Training
The following cell defines the training job to be run by Amazon SageMaker. It points to the `grid_train.py` training script, defines the number and types of instances used for training, sets the hyperparameter values, and defines regular expressions which Amazon SageMaker uses to track the training metrics.
Before running this cell, you must provide a descriptive name for the training job and specify the Amazon S3 URI where the output should be stored. The URI should look like `s3://bucket-name/output-folder/`.
*Note: If you are using a new AWS account, you may not have access to p2 instance types yet. The code should run fine on a CPU instance type, but it will require more time to complete. Submit a limit increase request to use p2 instances.*
```
import sagemaker
from sagemaker.pytorch import PyTorch
# Replace the following variables with a descriptive name for the
# training job and an S3 URI where to store the output
JOB_NAME = 'INSERT_A_NAME_HERE'
OUTPUT_PATH = 'INSERT_AN_S3_URI_HERE'
role = sagemaker.get_execution_role()
estimator = PyTorch(entry_point='grid_train.py',
role=role,
base_job_name=JOB_NAME,
output_path=OUTPUT_PATH,
framework_version='1.1.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
"epochs": 10,
"batch-size": 4,
"gamma": 0.1,
"lr": 0.001,
"momentum": 0.9,
"step-size": 7
},
metric_definitions=[
{'Name': 'train:loss', 'Regex': 'train Loss: (.*?) '},
{'Name': 'train:acc', 'Regex': 'train Loss: .*? Acc: (.*?)$'},
{'Name': 'val:loss', 'Regex': 'val Loss: (.*?) '},
{'Name': 'val:acc', 'Regex': 'val Loss: .*? Acc: (.*?)$'}
]
)
```
Once the training job has been defined, pass in the Amazon S3 URI for the training data to start the training job. The URI should look like `s3://bucket-name/training-data-folder/`, where `training-data-folder` contains one folder per label containing the training images for that label.
This cell will output the logs of the training job, but you can also view the logs and visualize the metrics in the Amazon SageMaker console.
```
estimator.fit({'training': 'INSERT_AN_S3_URI_HERE'})
```
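For reference, the expected layout of the training-data prefix looks roughly like this (bucket, folder, and label names here are hypothetical):

```
s3://bucket-name/training-data-folder/
    label-one/          # one folder per label
        image_001.jpg
        image_002.jpg
        ...
    label-two/
        image_001.jpg
        ...
```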
## Hyperparameter Tuning
This section shows how to run a hyperparameter tuning job using Amazon SageMaker. First, define the range of values for the hyperparameters which you want to tune.
```
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter
hyperparameter_ranges = {
'batch-size': IntegerParameter(3,30,scaling_type='Auto'),
'momentum': ContinuousParameter(0.1, 0.9, scaling_type='Auto'),
'step-size': IntegerParameter(3, 12, scaling_type='Auto'),
'gamma': ContinuousParameter(0.01, 0.9, scaling_type='Auto')
}
```
Next, define the training jobs which will be run during hyperparameter tuning. This is the same as in the above section on training.
```
import sagemaker
from sagemaker.pytorch import PyTorch
# Replace the following variables with a descriptive name for the
# training job and an S3 URI where to store the output
JOB_NAME = 'INSERT_A_NAME_HERE'
OUTPUT_PATH = 'INSERT_AN_S3_URI_HERE'
role = sagemaker.get_execution_role()
estimator = PyTorch(entry_point='grid_train.py',
role=role,
base_job_name=JOB_NAME,
output_path=OUTPUT_PATH,
framework_version='1.1.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
"epochs": 20,
"lr": 0.001
},
metric_definitions=[
{'Name': 'train:loss', 'Regex': 'train Loss: (.*?) '},
{'Name': 'train:acc', 'Regex': 'train Loss: .*? Acc: (.*?)$'},
{'Name': 'val:loss', 'Regex': 'val Loss: (.*?) '},
{'Name': 'val:acc', 'Regex': 'val Loss: .*? Acc: (.*?)$'}
]
)
```
Next, define the hyperparameter tuning job based on the defined hyperparameter ranges. Set the objective metric, the maximum number of training jobs, and the maximum number of parallel training jobs.
*Note: make sure your AWS account limits allow for the number of parallel training jobs for the instance type defined in the training job.*
```
from sagemaker.tuner import HyperparameterTuner
TUNING_JOB_NAME = 'INSERT_A_NAME_HERE'
tuner = HyperparameterTuner(
estimator=estimator,
objective_metric_name='val:acc',
hyperparameter_ranges=hyperparameter_ranges,
metric_definitions=[
{'Name': 'train:loss', 'Regex': 'train Loss: (.*?) '},
{'Name': 'train:acc', 'Regex': 'train Loss: .*? Acc: (.*?)$'},
{'Name': 'val:loss', 'Regex': 'val Loss: (.*?) '},
{'Name': 'val:acc', 'Regex': 'val Loss: .*? Acc: (.*?)$'}
],
strategy='Bayesian',
objective_type='Maximize',
max_jobs=30,
max_parallel_jobs=3,
base_tuning_job_name=TUNING_JOB_NAME
)
```
Once the tuning job has been defined, pass in the Amazon S3 URI for the training data to start the tuning job. The URI should look like `s3://bucket-name/training-data-folder/`, where `training-data-folder` contains one folder per label containing the training images for that label.
View the logs and visualize the metrics for the training jobs linked to this tuning job in the Amazon SageMaker console.
```
tuner.fit(inputs='INSERT_AN_S3_URI_HERE')
```
## Deploying
After running some training jobs and/or hyperparameter tuning jobs, decide on which training job you want to base your deployment. Find the Amazon S3 URI of the model package, which should look like `s3://bucket-name/training-job-name/output/model.tar.gz`. Insert the URI in the code below.
```
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer
class JSONPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(JSONPredictor, self).__init__(endpoint_name, sagemaker_session,
                                            json_serializer, json_deserializer)
from sagemaker.pytorch import PyTorchModel
import sagemaker
role = sagemaker.get_execution_role()
model = PyTorchModel(model_data='INSERT_S3_URI_OF_MODEL_PACKAGE',
role=role,
framework_version='1.1.0',
entry_point='grid_serve.py',
predictor_cls=JSONPredictor)
```
After defining the model and predictor type, we specify the number and type of instances for running the endpoint.
*Note: An endpoint takes several minutes to start up.*
```
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
Once your endpoint is complete, note down the name to link it up to the Sign & Speak user interface.
# Table of Contents

1. Response to Thomas Icard's question about counterfactual implementation in probability trees
   1. Probability tree for the counterfactual example
   2. Reference tree
   3. Indicative tree
   4. Subjunctive tree
   5. Composed tree
   6. Counterfactual tree
2. Can probability trees calculate the Natural Direct Effect?
3. Context-specific conditional independence with Probability trees
   1. What is the probability tree associated with $E[Y]$?
   2. What is the probability tree associated with $E[Z(x=1)]$?
   3. What is the probability tree associated with $E[Y(x=0)]$?
   4. What is the probability tree associated with $E[Y(x=0,Z(x=1)=1]$?
4. Zenna's question about the intuition of intervention on probability trees
5. Alex's question about intervening on $X=1$ or $X=2$
6. Probabilistic truth versus logical truth
   1. Alex Lew's first model
      1. Probability tree of model
      2. Intervention at $L=1$
   2. Alex's second model
      1. Probability tree of second model
      2. Intervention of $L=1$ on second model
7. Independent choice semantics vs program trace semantics
   1. James Koppel's example 1
   2. Trace semantics probability tree
   3. Independent choice semantics probability tree
   4. Trace Semantics Petri net
8. Example involving conditioning and counterfactual on a graph with dynamic dependencies and interesting conditioning dependencies
   1. Factual world Probability Tree mechanism for $Z=0$
   2. Conditioned on $Z=0$
   3. Intervene on $Z=1$
   4. Counterfactual
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from probability_trees import MinCut, Critical, PTree
from IPython.display import Latex
```
# Response to Thomas Icard's question about counterfactual implementation in probability trees.
During the discussion, Thomas questioned whether "the operations defined for probability trees can represent genuine counterfactuals." In particular, he pointed out that simply composing `see()` and `do()` does not result in counterfactual inference.
For example, if you condition on $Z=1$ and then ask what would happen to $Z$ if $Y\leftarrow 1$, then $P(Z=0|Z=1,Y\leftarrow 1) = 0$. However, the counterfactual method does not simply compose seeing and doing: it also splices together the reference tree and the composed tree. I will demonstrate the answer to this question using both figures and code.
## Probability tree for the counterfactual example
Note that
```
def icard_tree(bvar):
    if 'X' not in bvar:
        return [(0.5, 'X=0'),
                (0.5, 'X=1')]
    if bvar['X'] == '0':
        if 'Y' not in bvar:
            return [(0.25, 'Y=0'),
                    (0.75, 'Y=1')]
        if 'Z' not in bvar:
            if bvar['Y'] == '1':
                return [(0.75, 'Z=0'),
                        (0.25, 'Z=1')]
    else:
        if 'Z' not in bvar:
            return [(0.25, 'Z=0'),
                    (0.75, 'Z=1')]
        if 'Y' not in bvar:
            if bvar['Z'] == '0':
                return [(0.5, 'Y=0'),
                        (0.5, 'Y=1')]
            else:
                return [(0.75, 'Y=0'),
                        (0.25, 'Y=1')]
icard = PTree.fromFunc(icard_tree)

# The same tree, constructed explicitly (this replaces the version built above).
icard = PTree()
# Set the root node and its sub-nodes.
icard.root('O = 1', [
icard.child(0.5, 'X = 0', [
icard.child(0.25, 'Y = 0'),
icard.child(0.75, 'Y = 1',
[icard.child(0.75, 'Z = 0'),
icard.child(0.25, 'Z = 1')])
]),
icard.child(0.5, 'X = 1', [
icard.child(0.25, 'Z = 0',
[icard.child(0.5, 'Y = 0'),
icard.child(0.5, 'Y = 1')]),
icard.child(0.75, 'Z = 1',
[icard.child(0.75, 'Y = 0'),
icard.child(0.25, 'Y = 1')])
])
])
```
## Reference tree
We want to know the probability that $Z=0$ before conditioning or intervention:
```
Y1 = icard.prop('Y=1')
Z1 = icard.prop('Z=1')
Z0 = icard.prop('Z=0')
crit = icard.critical(Z0)
display(Latex(f'$$P(Z=0)={icard.prob(Z0)}$$'))
icard.show(show_prob=True,
cut=Z0,
crit=crit)
```
## Indicative tree
We want to know the probability that $Z=0$ given that $Z=1$.

```
see_Z1 = icard.see(Z1)
display(Latex(f'$$P(Z=0|Z=1)={see_Z1.prob(Z0)}$$'))
see_Z1.show(show_prob=True,
cut=Z0,
crit=crit)
```
## Subjunctive tree
We want to know the probability that $Z=0$ given that $Y\leftarrow 1$.
```
do_Y1 = icard.do(Y1)
display(Latex(f'$$P(Z=0|Y\leftarrow 1)={do_Y1.prob(Z0)}$$'))
do_Y1.show(show_prob=True,
cut=Z0,
crit=crit)
```
## Composed tree
We want to know the probability that $Z=0$ given that $Z= 1$ and $Y\leftarrow 1$.

```
see_Z1_do_Y1 = icard.see(Z1).do(Y1)
display(Latex(f'$$P(Z=0|Z=1,Y\leftarrow 1)={see_Z1_do_Y1.prob(Z0)}$$'))
see_Z1_do_Y1.show(show_prob=True,
cut=Z0,
crit=crit)
```
## Counterfactual tree
We want to know the probability that $Z_{Y\leftarrow 1}=0$ given that $Z=1$ and $Y\leftarrow 1$.

```
see_Z1= icard.see(Z1)
counterfactual = icard.cf(see_Z1, Y1)
display(Latex('$$P(Z_{Y\leftarrow 1}=0|Z=1,Y\leftarrow 1)='
f'{counterfactual.prob(Z0)}$$'))
counterfactual.show(show_prob=True,
cut=Z0,
crit=crit)
```
This figure shows how the counterfactual tree was obtained:
<img src="http://www.adaptiveagents.org/_media/wiki/cf.png" alt="Computing a counterfactual" width="700"/>
The example shows a counterfactual probability tree generated by imposing $Y
\leftarrow 1$, given the factual premise $Z = 1$. Starting from a **reference
probability tree**, we first derive two additional trees: an **indicative tree**,
capturing the observations made in the factual world; and a **subjunctive tree**,
represented as an intervention on the **reference tree**.
To form the counterfactual we proceed as follows:
- We slice both derived trees along the critical set
of the counterfactual premise.
- Then, we compose the counterfactual tree by
taking the transition probabilities **upstream of the slice**
from the factual premise, and those **downstream of the slice**
from the counterfactual premise.
The events downstream then span a new scope containing copies
of the original random variables (marked with "∗"), ready to
adopt new values.
In particular note that $Z^\ast = 0$ can happen in our alternate
reality, even though we know that $Z = 1$.
# Can probability trees calculate the Natural Direct Effect?
$$DE_{x,x'}(Y) = E[Y(x',Z(x))] - E[Y(x)]$$
Here, $Z$ represents all parents of $Y$ excluding $X$, and the expression $Y(x′, Z(x))$ represents the value that $Y$ would attain under the operation of setting $X$ to $x′$ and, simultaneously, setting $Z$ to whatever value it would have obtained under the setting $X = x$. We see that $DE_{x,x′}(Y)$, the natural direct effect of the transition from $x$ to $x′$, involves probabilities of nested counterfactuals and cannot be written in terms of the $do(x)$ operator. Therefore, the natural direct effect cannot in general be identified, even with the help of ideal, controlled experiments.
-- Pearl, Judea. Causality (p. 131). Cambridge University Press. Kindle Edition.
We test this with the following SCM:
\begin{align}
x &= \mathrm{bernoulli}(0.75) \\
z &= x \\
y &= x + z
\end{align}
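Because the SCM is tiny, we can cross-check its observational distribution by direct enumeration before building any tree (a plain-Python sketch, independent of the PTree library; `dist` and `expected_y` are illustrative names):

```python
# Exact observational distribution of Y by enumerating the one exogenous coin.
dist = {}
for x_prob, x in [(0.25, 0), (0.75, 1)]:
    z = x          # z = x
    y = x + z      # y = x + z
    dist[y] = dist.get(y, 0.0) + x_prob

expected_y = sum(y * p for y, p in dist.items())
print(dist)        # {0: 0.25, 2: 0.75}
print(expected_y)  # 1.5
```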
So we build the corresponding probability tree:
```
def natural_direct_effect(bvar):
if 'X' not in bvar:
return [(0.25, 'X=0'),
(0.75, 'X=1')]
if 'Z' not in bvar:
return z_if_x( bvar['X'])
if 'Y' not in bvar:
return y_if_x_plus_z( bvar['X'], bvar['Z'])
return None
def y_if_x_plus_z( x, z ):
if (x == '0') and (z == '0'):
return [(1, 'Y=0'),
(0, 'Y=1'),
(0, 'Y=2')]
elif (x == '1') and (z == '1'):
return [(0, 'Y=0'),
(0, 'Y=1'),
(1, 'Y=2')]
else:
return [(0, 'Y=0'),
(1, 'Y=1'),
(0, 'Y=2')]
def z_if_x( x ):
if x == '1':
return [(0, 'Z=0'),
(1, 'Z=1')]
else:
return [(1, 'Z=0'),
(0, 'Z=1')]
nde = PTree.fromFunc(natural_direct_effect)
Y1 = nde.prop('Y=1')
X1 = nde.prop('X=1')
Z1 = nde.prop('Z=1')
X0 = nde.prop('X=0')
Y0 = nde.prop('Y=0')
Y2 = nde.prop('Y=2')
Z0 = nde.prop('Z=0')
```
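For this SCM all mechanisms are deterministic given the interventions, so the natural direct effect can be computed by hand; a minimal plain-Python sketch (the helper names `Z` and `Y` are illustrative, not PTree API):

```python
# Structural equations of the SCM above: z = x, y = x + z.
def Z(x):
    """Value of Z under do(X = x)."""
    return x

def Y(x, z):
    """Value of Y under do(X = x, Z = z)."""
    return x + z

x, x_prime = 1, 0
# DE_{x,x'}(Y) = E[Y(x', Z(x))] - E[Y(x)]; everything is deterministic here,
# so the expectations collapse to point values.
de = Y(x_prime, Z(x)) - Y(x, Z(x))
print(de)  # (0 + 1) - (1 + 1) = -1
```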
# Context-specific conditional independence with Probability trees
## What is the probability tree associated with $E[Y]$?
```
display(Latex(f'$$E[Y]={nde.expect("Y")}$$'))
display(nde.show(show_prob=True, cut=Y0 | Y2, crit=nde.critical(Y0 | Y2)))
```
## What is the probability tree associated with $E[Z(x=1)]$?
```
do_X = nde.do(X1)
display(Latex(f'$$E[Z(x=1)]={do_X.expect("Z")}$$'))
display(do_X.show(show_prob=True, cut=do_X.prop('Z=1'), crit=do_X.critical(do_X.prop('Z=1'))))
```
## What is the probability tree associated with $E[Y(x=0)]$?
```
do_notX = nde.do(X0)
display(Latex(f"$$E[Y(x=0)]={do_notX.expect('Y')}$$"))
display(do_notX.show(show_prob=True, cut=Y1, crit=do_notX.critical(Y1)))
```
## What is the probability tree associated with $E[Y(x=0,Z(x=1))]$?
```
do_notX_and_Z = nde.do( X0 & Z1 )
display(Latex(f"$$E[Y(x=0,Z(x=1))]={do_notX_and_Z.expect('Y')}$$"))
display(do_notX_and_Z.show(show_prob=True, cut=Y1, crit=do_notX_and_Z.critical(Y1)))
cf_do_X0_given_X1 = nde.cf(nde.see(X1), X0)
display(Latex(f"$$E[Y(x',Z(x))]={cf_do_X0_given_X1.expect('Y')}$$"))
display(cf_do_X0_given_X1.show(show_prob=True, cut=Y1, crit=cf_do_X0_given_X1.critical(Y1)))
```
# Zenna's question about the intuition of intervention on probability trees
Zenna noted that the `see()` operator affects both probabilities upstream of the critical node and probabilities downstream of the critical node, whereas `do()` only affects probabilities downstream of the critical node, and asked why this makes sense.
This is the intuition I have: Conditioning is about information flow. It asks: if I tell you information about $A$, does that give you new information about $B$? Here $A$ could be upstream or downstream of $B$ and you could still gain information. Intervening is about causality. It asks: if I alter the barometer, will the weather change? If the answer is yes, then the barometer causes the weather. If no, then the barometer does not cause the weather.
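This asymmetry can be illustrated on a two-stage toy chain using plain dictionaries (the numbers and helper names here are made up for illustration; this is not the PTree API):

```python
# A toy chain: P(A=1) = 0.3; P(B=1|A=1) = 0.9, P(B=1|A=0) = 0.2.
paths = {(a, b): (0.3 if a else 0.7) * ((0.9 if a else 0.2) if b else (0.1 if a else 0.8))
         for a in (0, 1) for b in (0, 1)}

def see(joint, b):
    """Condition on B=b: keep matching paths and renormalize, so upstream P(A) shifts."""
    kept = {k: p for k, p in joint.items() if k[1] == b}
    z = sum(kept.values())
    return {k: p / z for k, p in kept.items()}

def do(joint, b):
    """Intervene B <- b: replace B's mechanism, leaving the upstream marginal P(A) intact."""
    return {(a, b): sum(p for (a2, _), p in joint.items() if a2 == a) for a in (0, 1)}

p_a1_see = sum(p for (a, _), p in see(paths, 1).items() if a == 1)
p_a1_do = sum(p for (a, _), p in do(paths, 1).items() if a == 1)
print(p_a1_see > 0.3, round(p_a1_do, 10))  # conditioning raised P(A=1); intervention left it at 0.3
```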
# Alex's question about intervening on $X=1$ or $X=2$
```
pt = PTree()
pt.root(
'O = 1',
[pt.child(0.3, 'X = 1'),
pt.child(0.5, 'X = 2'),
pt.child(0.2, 'X = 3')])
# Pick the disjunctive event 'X = 1 or X = 2' as our cut.
cut = pt.prop('X = 1') | pt.prop('X=2')
crit = pt.critical(cut)
# Intervene.
pt_do = pt.do(cut)
# Show results.
print('Before the intervention:')
display(pt.show(cut=cut, crit=crit))
print('After the intervention on "X <- 1 or X <- 2":')
display(pt_do.show(cut=cut, crit=crit))
```
Pearl also doesn't want to deal with non-atomic interventions, but allows for higher-order `replace()` interventions:
We will confine our attention to actions in the form of do(X = x). Conditional actions of the form “$do(X = x)$ if $Z = z$” can be formalized using the replacement of equations by functions of $Z$, rather than by constants. We will not consider disjunctive actions of the form "$do(X = x \lor Z = z),$" since these complicate the probabilistic treatment of counterfactuals.
-- Pearl, Judea. Causality (p. 204). Cambridge University Press. Kindle Edition.
But, by De Morgan's law, any disjunction of actions can be converted to the negation of a conjunction of negated actions:
```
equivalent_cut = ~(~pt.prop('X=1') & ~pt.prop('X=2'))
equivalent_crit = pt.critical(equivalent_cut)
print('After the intervention on "not ((not X <- 1) and (not X <- 2))":')
display(pt.do(equivalent_cut).show(cut=equivalent_cut, crit=equivalent_crit))
```
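The equivalence can be sanity-checked on a plain distribution with exact fractions, under one natural reading of intervening on an event (renormalize mass inside the event, zero it outside, which appears to match the tree example above); `do_event` is an illustrative sketch, not the PTree implementation:

```python
from fractions import Fraction as F

# The same three-outcome distribution as the tree above.
p = {1: F(3, 10), 2: F(1, 2), 3: F(1, 5)}

def do_event(dist, event):
    """Intervene on 'X in event': renormalize within the event, zero outside."""
    z = sum(dist[x] for x in event)
    return {x: (dist[x] / z if x in event else F(0)) for x in dist}

disjunction = {1, 2}
# De Morgan: 'X=1 or X=2' is the same event as 'not (X!=1 and X!=2)'.
demorgan = {x for x in p if not (x != 1 and x != 2)}
assert demorgan == disjunction
print(do_event(p, disjunction))  # {1: Fraction(3, 8), 2: Fraction(5, 8), 3: Fraction(0, 1)}
```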
# Probabilistic truth versus logical truth
In [section 2.2.3](https://nbviewer.jupyter.org/github/COVID-19-Causal-Reasoning/probability_trees/blob/main/Causal_Reasoning_in_Probability_Trees.ipynb#Special-case:-probabilistic-truth-versus-logical-truth) of the Probability tree tutorial, the distinction between logical truth and probabilistic truth is made:
Let's have a look at one special case. Our definitions make a distinction
between **logical** and **probabilistic truth**. This is best seen in the
example below.
In this example, we have a probability tree with three outcomes: $X = 1, 2$, and $3$.
- $X = 1$ occurs with probability one.
- Hence, probabilistically, the event $X=1$ is resolved at the level of the
root node.
- However, it isn't resolved at the logical level, since $X = 2$ or $X = 3$
can happen logically, although with probability zero.
Distinguishing between logical truth and probabilistic truth is important for
stating counterfactuals. This will become clearer later.
```
# First we add all the nodes.
pt = PTree()
pt.root('O = 1',
[pt.child(1, 'X = 1'),
pt.child(0, 'X = 2'),
pt.child(0, 'X = 3')])
# Show the cut for 'X = 1'
cut = pt.prop('X = 1')
print('While the root node "O=1" does resolve the event "X=1"\n' +
'probabilistically, it does not resolve the event logically.')
display(pt.show(cut=cut))
```
## Alex Lew's first model
Say $x_i$ is $1$ if there is an item present at position $i$ in the list, and $0$ otherwise. Situations like $x_1=0, x_2=1$ have probability mass $0$.
Let $L$ be the length of the list. Each choice of the $x_i$ with nonzero probability determines a setting of $L$. But it is possible to intervene on $L$ without intervening on the $x_i$.
-- Alex Lew
```
def L_logical_truth(bvar):
if 'x1' not in bvar:
return [(0.5, 'x1=0'),
(0.5, 'x1=1')]
if 'x2' not in bvar:
if bvar['x1'] == '1':
return [(0.5, 'x2=0'),
(0.5, 'x2=1')]
if 'L' not in bvar:
if bvar['x1'] == '0':
return [(1, 'L=0')]
if bvar['x2'] == '0':
return [(1,'L=1')]
if bvar['x2'] == '1':
return [(1, 'L=2')]
return None
```
### Probability tree of model
```
only_L = PTree.fromFunc(L_logical_truth)
what_if_L_was_1 = only_L.prop('L=1')
crit_L = only_L.critical(what_if_L_was_1)
show_L = only_L.show(show_prob=True, cut=what_if_L_was_1, crit=crit_L)
display(show_L)
```
### Intervention at $L=1$
I don’t see how to intervene on $L = 1$ without forcing us into the path where $x_1 = 1$ and $x_2 = 0$ here. (It seems like "$x_2 = 1$” and “$x_1 = 0$" are both in the false min-cut for the “$L = 1$” event.)
```
do_only_L = only_L.do(what_if_L_was_1)
show_do_only_L = do_only_L.show(show_prob=True,
cut=what_if_L_was_1,
crit=do_only_L.critical(what_if_L_was_1))
display(show_do_only_L)
```
## Alex's second model
But maybe the problem is that I need to draw “$L = 0$,” “$L = 1,$” and “$L = 2$” as siblings at every node where $L$ is being assigned? I don’t see anything in the probability tree definition that requires this, but if it’s the case, then I can see how their notion of “intervention” corresponds more closely to the usual one. Maybe the right way to think about probability trees as causal models is that the edges represent ‘possible mechanisms of intervention,’ and by failing to draw outgoing edges to the alternative “$L = *$” cases, I am encoding that intervention changing $L$ but not $x_i$ is impossible.
```
def L_probabilistic_truth(bvar):
if 'x1' not in bvar:
return [(0.5, 'x1=0'),
(0.5, 'x1=1')]
if 'x2' not in bvar:
if bvar['x1'] == '1':
return [(0.5, 'x2=0'),
(0.5, 'x2=1')]
if 'L' not in bvar:
if bvar['x1'] == '0':
return [(1, 'L=0'),
(0, 'L=1'),
(0, 'L=2')]
if bvar['x2'] == '0':
return [(0, 'L=0'),
(1,'L=1'),
(0, 'L=2' )]
if bvar['x2'] == '1':
return [(0, 'L=0'),
(0, 'L=1'),
(1, 'L=2')]
return None
```
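To see what the zero-probability siblings buy us, we can expand Alex's second model into its leaf paths with a small stand-in for `PTree.fromFunc` (the `enumerate_paths` helper is hypothetical, plain Python, not the PTree API):

```python
from collections import defaultdict

def enumerate_paths(func, bvar=None, prob=1.0):
    """Expand a bvar-style generating function into (probability, leaf-assignment) pairs."""
    bvar = dict(bvar or {})
    branches = func(bvar)
    if branches is None:
        yield prob, bvar
        return
    for p, label in branches:
        var, val = label.split('=')
        child = dict(bvar)
        child[var.strip()] = val.strip()
        yield from enumerate_paths(func, child, prob * p)

def model(bvar):  # compact copy of L_probabilistic_truth
    if 'x1' not in bvar:
        return [(0.5, 'x1=0'), (0.5, 'x1=1')]
    if 'x2' not in bvar and bvar['x1'] == '1':
        return [(0.5, 'x2=0'), (0.5, 'x2=1')]
    if 'L' not in bvar:
        if bvar['x1'] == '0':
            return [(1, 'L=0'), (0, 'L=1'), (0, 'L=2')]
        if bvar['x2'] == '0':
            return [(0, 'L=0'), (1, 'L=1'), (0, 'L=2')]
        return [(0, 'L=0'), (0, 'L=1'), (1, 'L=2')]
    return None

# Marginal of L; zero-probability leaves exist logically but add no mass.
marg = defaultdict(float)
for p, leaf in enumerate_paths(model):
    marg[leaf['L']] += p
print(dict(marg))  # {'0': 0.5, '1': 0.25, '2': 0.25}
```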
### Probability tree of second model
```
with_L = PTree.fromFunc(L_probabilistic_truth)
what_if_L_was_1 = with_L.prop('L=1')
crit_L = with_L.critical(what_if_L_was_1)
show_L = with_L.show(show_prob=True, cut=what_if_L_was_1, crit=crit_L)
display(show_L)
```
### Intervention of $L=1$ on second model
```
do_with_L = with_L.do(what_if_L_was_1)
show_do_with_L = do_with_L.show(show_prob=True,
cut=what_if_L_was_1,
crit=do_with_L.critical(what_if_L_was_1))
display(show_do_with_L)
```
# Independent choice semantics vs program trace semantics
David Poole distinguishes between Independent choice semantics, where every variable is defined for all values in all worlds, and Program trace semantics, where variables are only defined for the values they take for a particular trace.
This lecture by David Poole is short (11 minutes) but contains a concise summary of how probabilistic programming languages, structural causal models, Bayes nets, and logic programming are related:
1. ([0:59](https://youtu.be/L_D9Xne6ATc?t=59)) All probabilistic programming languages are basically generalizations of structural causal models: deterministic relationships with probability distributions over exogenous variables.
2. ([1:31](https://youtu.be/L_D9Xne6ATc?t=91)) All PPLs have the following features:
* use probabilistic inputs,
* condition on observations,
* query for distributions, and
* learn probabilities from data.
PPLs employ inference that is much faster than, but equivalent to, rejection sampling for computing posterior distributions.
Note: Adding the `do()` operator enables meaningful counterfactual queries.
3. ([2:29](https://youtu.be/L_D9Xne6ATc?t=149)) Any Bayesian network can be represented as a probabilistic program or equivalently as a structural causal model (with probabilistic inputs and bidirectional deterministic relationships)
4. ([4:16](https://youtu.be/L_D9Xne6ATc?t=256)) PPLs face several choices about what it means to assign values to variables encountered in the execution of a program:
* Rejection sampling semantics samples from the joint distribution and rejects any sample that does not meet the condition. (Simplest, but most inefficient approach.)
* Independent choice semantics treats each choice of value assignments to the random inputs as a possible world. (This is the logic programming approach.)
* Program trace semantics only creates a possible world for each choice of value assignments encountered in an execution path. (This is what PPLs typically do.)
* Abductive semantics only creates a possible world for each choice needed to infer observations and a value for a query. (This is the most parsimonious approach; may be what Omega_C already does.)
5. ([6:43](https://youtu.be/L_D9Xne6ATc?t=403)) When variables are defined in one trace but not another, this creates problems for inference and learning. (These problems are mostly solved nowadays with MCMC and variational inference.)
6. ([9:03](https://youtu.be/L_D9Xne6ATc?t=543)) How do we align ontologies of causal knowledge with ontologies of observations about the world? (This problem is partially resolved with rich knowledge representation languages like BEL that are grounded in observation ontologies.)
The video is missing details that are present in [these slides](https://www.cs.ubc.ca/~poole/talks/IndependentChoicesTalk2014.pdf), and the slides are missing details that are contained in [this paper](https://sciwheel.com/work/#/items/9526992/detail?collection=320250).
## James Koppel's example 1
```julia
A = bernoulli(0.5)
B = bernoulli(0.5)
if A == 1
    Y = 1
    Z = 1
else
    Y = 0
    Z = 0
end
if B == 1 && Y == 1
    W = 1
else
    W = 0
end
```
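As a plain-Python cross-check (independent of either tree semantics), $P(W=1)$ for James's program follows from enumerating the two coins:

```python
from itertools import product

# Y, Z, W are deterministic given the two Bernoulli(0.5) coins A and B.
p_w1 = 0.0
for a, b in product([0, 1], repeat=2):
    y = z = a                      # if A: Y = 1, Z = 1; else both 0
    w = 1 if (b and y) else 0      # if B && Y: W = 1; else W = 0
    p_w1 += 0.25 * w               # each (a, b) world has probability 0.25
print(p_w1)  # 0.25 -- only the world A=1, B=1 yields W=1
```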
## Trace semantics probability tree
```
# Note: trace_semantics_tree is defined in the next code cell; run that cell first.
trace_semantics = PTree.fromFunc(trace_semantics_tree)
trace_semantics.show(show_prob=True, cut=trace_semantics.prop('W=1'),
                     crit=trace_semantics.critical(trace_semantics.prop('W=1')))
```
## Independent choice semantics probability tree
```
def independent_choice_semantics_tree( bvar ):
if 'A' not in bvar:
return [(0.5, 'A=0'),
(0.5, 'A=1')]
if 'B' not in bvar:
return [(0.5, 'B=0'),
(0.5, 'B=1')]
if ('Y' not in bvar) or ('Z' not in bvar):
if bvar['A'] == '1':
return [(0, 'Y=0,Z=0'),
(0, 'Y=0,Z=1'),
(0, 'Y=1,Z=0'),
(1, 'Y=1,Z=1')]
else:
return [(1, 'Y=0,Z=0'),
(0, 'Y=0,Z=1'),
(0, 'Y=1,Z=0'),
(0, 'Y=1,Z=1')]
if 'W' not in bvar:
if (bvar['B'] == '1') and (bvar['Y'] == '1'):
return [(0, 'W=0'),
(1, 'W=1'),]
else:
return [(1, 'W=0'),
(0, 'W=1')]
return None
independent_choice_semantics = PTree.fromFunc(independent_choice_semantics_tree)
independent_choice_semantics.show(show_prob=True,cut=independent_choice_semantics.prop('W=1'),
crit=independent_choice_semantics.critical(independent_choice_semantics.prop('W=1')))
def trace_semantics_tree( bvar ):
if 'A' not in bvar:
# A = bernoulli(0.5)
return [(0.5, 'A=0'),
(0.5, 'A=1')]
if 'B' not in bvar:
# B = bernoulli(0.5)
return [(0.5, 'B=0'),
(0.5, 'B=1')]
if ('Y' not in bvar) or ('Z' not in bvar):
if bvar['A'] == '1':
# If (A) {
# Y = 1
# Z = 1
# }
return [(1, 'Y=1,Z=1')]
else:
# else {
# Y = 0
# Z = 0
# }
return [(1,'Y=0,Z=0')]
if 'W' not in bvar:
if (bvar['B'] == '1') and (bvar['Y'] == '1'):
# If (B && Y) {
# W = 1
# }
return [(1, 'W=1')]
else:
# else {
# W = 0
# }
return [(1, 'W=0')]
```
## Trace Semantics Petri net
0. two kinds of nodes: state and decision nodes
1. at most one token per node
2. every transition from a decision node has a weight
3. the weights of transitions off a single node sum to 1
4. if two transitions share a parent, then they have the exact same parent set.
5. the graph is acyclic.
One can imagine future work relaxing (5).
We split nodes into “state nodes” (representing a variable assignment) and “decision nodes” (representing a decision involving that variable about to be made). State nodes will be copied into one or more decision nodes. I’ll just be drawing these as direct edges between states.
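Before committing to the snakes API, the structural rules above can be prototyped on a plain adjacency structure (a sketch with illustrative node names, not the snakes data model):

```python
# Decision nodes carry weighted outgoing transitions (rules 2-3);
# edges out of state nodes are unweighted copies into decision nodes.
decision_out = {
    'A if Root=1': {'A=0': 0.5, 'A=1': 0.5},
    'B if Root=1': {'B=0': 0.5, 'B=1': 0.5},
    'Y=1 if A=1': {'Y=1': 1.0},
    'W=1 if B=1 and Y=1': {'W=1': 1.0},
}

# Rule 3: the weights of transitions off a single decision node sum to 1.
rule3_ok = all(abs(sum(ws.values()) - 1.0) < 1e-9 for ws in decision_out.values())
print(rule3_ok)  # True
```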
```
import snakes.plugins
nets = snakes.plugins.load('gv', 'snakes.nets', 'nets')
from nets import Place, PetriNet, Transition, MultiSet, Expression, Marking, OneOf, Substitution, Test, Variable, Value
def trace_semantics_petri_net():
n = PetriNet("N")
# State Nodes
n.add_place(Place("Root=1", [0,1,0,1]))
n.add_place(Place("A=0"))
n.add_place(Place('A=1'))
n.add_place(Place("B=0"))
n.add_place(Place('B=1' ))
n.add_place(Place("Y=0"))
n.add_place(Place('Y=1'))
n.add_place(Place("Z=0"))
n.add_place(Place('Z=1'))
n.add_place(Place("W=0"))
n.add_place(Place('W=1'))
# Decision Nodes
n.add_transition(Transition('A if Root=1'))
n.add_transition(Transition('B if Root=1'))
n.add_transition(Transition('Y=1 if A=1', Expression("A==1")))
n.add_transition(Transition('Z=1 if A=1', Expression("A==1")))
n.add_transition(Transition('Y=0 if A=0', Expression("A==0")))
n.add_transition(Transition('Z=0 if A=0', Expression("A==0")))
n.add_transition(Transition('W=1 if B=1 and Y=1', Expression("(B==1) and (Y==1)")))
n.add_transition(Transition('W=0 if B=0 or Y=0', Expression("(B==0) or (Y==0)")))
# Arcs from States to Decisions
n.add_input("Root=1","A if Root=1", Test(Variable("Root")))
n.add_input("Root=1", "B if Root=1", Test(Variable("Root")))
n.add_input("A=0","Y=0 if A=0", Variable('Y'))
n.add_input("A=0","Z=0 if A=0", Variable("Z"))
n.add_input("A=1","Y=1 if A=1", Variable("Y"))
n.add_input("A=1","Z=1 if A=1", Variable("Z"))
n.add_input("B=0", 'W=0 if B=0 or Y=0',Variable('W') )
n.add_input("Y=0", 'W=0 if B=0 or Y=0',Variable('W'))
n.add_input("B=1", 'W=1 if B=1 and Y=1', Variable('W'))
n.add_input("Y=1", 'W=1 if B=1 and Y=1', Variable('W'))
# Arcs from Decisions to States have a weight that sums to one.
n.add_output("A=0", "A if Root=1", Value(0.5))
n.add_output("A=1", "A if Root=1", Value(0.5))
n.add_output("B=0", "B if Root=1", Value(0.5))
n.add_output("B=1", "B if Root=1", Value(0.5))
n.add_output("Y=0", "Y=0 if A=0", Value(1))
n.add_output("Z=0", "Z=0 if A=0", Value(1))
n.add_output("Y=1", "Y=1 if A=1", Value(1))
n.add_output("Z=1", "Z=1 if A=1", Value(1))
n.add_output( "W=0",'W=0 if B=0 or Y=0', Value(1))
n.add_output( "W=1", 'W=1 if B=1 and Y=1', Value(1))
return n
n = trace_semantics_petri_net()
dir(n)
n.draw('trace_semantics_pnet.png')
```

```
n.get_marking()
n.transition('A if Root=1').modes()
n.transition('A if Root=1').fire(Substitution(Root=1))
n.get_marking()
n.transition('Y=1 if A=1').modes()
```
# Example involving conditioning and counterfactual on a graph with dynamic dependencies and interesting conditioning dependencies
```julia
X = bernoulli(0.8)
if X == 1
    Y = bernoulli(0.6)
    Z = xor(Y, bernoulli(0.2))
else
    Z = bernoulli(0.8)
    Y = xor(Z, bernoulli(0.4))
end
W = Y & Z & bernoulli(0.6)
```
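Before turning this program into a tree, $P(W=1)$ can be cross-checked by exact summation over the four coin flips (plain Python, independent of the PTree library; `bern` is an illustrative helper):

```python
# Sum over the four independent coins of the program above; exact, no sampling.
def bern(p):
    """Weighted outcomes of a Bernoulli(p) coin: [(weight, value), ...]."""
    return [(1 - p, 0), (p, 1)]

p_w1 = 0.0
for px, x in bern(0.8):
    if x:                                   # X = 1 branch: Y first, then Z = Y xor e
        branch = [(py * pe, (y, y ^ e)) for py, y in bern(0.6) for pe, e in bern(0.2)]
    else:                                   # X = 0 branch: Z first, then Y = Z xor e
        branch = [(pz * pe, (z ^ e, z)) for pz, z in bern(0.8) for pe, e in bern(0.4)]
    for pyz, (y, z) in branch:
        for pw, c in bern(0.6):             # W = Y && Z && bernoulli(0.6)
            p_w1 += px * pyz * pw * (1 if (y and z and c) else 0)
print(round(p_w1, 3))  # 0.288
```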
```
def dynamic_dependencies( bvar ):
if 'X' not in bvar:
# X = bernoulli(0.8)
return [(0.2, 'X=0'),
(0.8, 'X=1')]
if 'Y' not in bvar:
if bvar['X'] == '1':
# Y = bernoulli(0.6)
return [(0.4, 'Y=0'),
(0.6, 'Y=1')]
if 'Z' not in bvar:
# Z = bernoulli(0.8) when X==0
return [(0.2, 'Z=0'),
(0.8, 'Z=1')]
if bvar['Z'] == '0':
# Y = Z XOR bernoulli(0.4) when Z == 0
return [(0.6,'Y=0'),
(0.4,'Y=1')]
else:
# Y = Z XOR bernoulli(0.4) when Z == 1
return [(0.4, 'Y=0'),
(0.6, 'Y=1')]
if 'Z' not in bvar:
if bvar['X'] == '1':
if bvar['Y'] == '0':
# Z = Y XOR bernoulli(0.2) when Y==0
return [(0.8, 'Z=0'),
(0.2, 'Z=1')]
else:
# Z = Y XOR bernoulli(0.2) when Y==1
return [(0.2, 'Z=0'),
(0.8, 'Z=1')]
else:
# Z = bernoulli(0.8)
return [(0.2, 'Z=0'),
(0.8, 'Z=1')]
if 'W' not in bvar:
if (bvar['Y'] == '1') and (bvar['Z'] == '1'):
# W = Y && Z && bernoulli(0.6) when Y==1 and Z==1
return [(0.4, 'W=0'),
(0.6, 'W=1')]
else:
# W = Y && Z && bernoulli(0.6) == 0 when Y==0 or Z==0
return [(1, 'W=0'),
(0, 'W=1')]
return None
```
## Factual world Probability Tree mechanism for $Z=0$
```
factual = PTree.fromFunc(dynamic_dependencies)
Z_is_0 = factual.prop('Z=0')
show_factual_Z_is_0 = factual.show(show_prob=True,
cut=Z_is_0,
crit=factual.critical(Z_is_0) )
#show_example3_factual_Z_is_0
W = factual.prop('W=1')
show_factual_W = factual.show(show_prob=True,
cut=W,
crit=factual.critical(W) )
display(Latex(f'$$P(W)={factual.prob(W)}$$'))
show_factual_W
```
## Conditioned on $Z=0$
```
see_Z_is_0 = factual.see(Z_is_0)
show_see_z_is_0 = see_Z_is_0.show(show_prob=True, cut=Z_is_0, crit=see_Z_is_0.critical(Z_is_0))
#show_see_z_is_0
show_W_given_z_is_0 = see_Z_is_0.show(show_prob=True, cut=W, crit=see_Z_is_0.critical(W))
display(Latex('$$P(W|Z=0)='
f'{see_Z_is_0.prob(W)}$$'))
show_W_given_z_is_0
```
## Intervene on $Z=1$
```
Z_is_1 = see_Z_is_0.prop('Z=1')
do_Z_is_1 = see_Z_is_0.do(Z_is_1)
show_do_Z_is_1 = do_Z_is_1.show(show_prob=True,
cut=Z_is_1,
crit=do_Z_is_1.critical(Z_is_1))
#show_do_Z_is_1
show_W_given_do_Z_is_1 = do_Z_is_1.show(show_prob=True,
cut=W,
crit=do_Z_is_1.critical(W))
display(Latex('$$P(W|Z=0,Z\leftarrow 1)='
f'{do_Z_is_1.prob(W)}$$'))
show_W_given_do_Z_is_1
```
## Counterfactual
```
counterfactual = factual.cf(tree_prem=see_Z_is_0,
                            cut_subj=Z_is_1)
show_counterfactual = counterfactual.show(show_prob=True,
cut=Z_is_1,
crit=counterfactual.critical(Z_is_1))
#show_counterfactual
```
Note that the counterfactual value for $W_{Z\leftarrow 1}$ is the same as the factual value for $W$ when intervening on $Z\leftarrow 1$. This is not necessarily the case, as demonstrated in [section 2.7.1](https://nbviewer.jupyter.org/github/COVID-19-Causal-Reasoning/probability_trees/blob/main/Causal_Reasoning_in_Probability_Trees.ipynb?flush_cache=true#Computing-a-counterfactual) of the Causal Reasoning in Probability Trees tutorial.
```
given_Z_is_0_show_W_in_a_world_where_Z_is_1 = counterfactual.show(show_prob=True,
cut=W,
crit=counterfactual.critical(W))
display(Latex('$$P(W_{Z\leftarrow 1}|Z=0)='
f'{counterfactual.prob(W)}$$'))
given_Z_is_0_show_W_in_a_world_where_Z_is_1
```