# 06 Custom Kernels and Mean Functions
_[Estimated execution time: 1 min]_
In this tutorial we show how custom kernels can be trained with this toolkit, as well as how to train a mean function for our data set. We create an artificial data set, show how to define a trainable mean function, and show how to separate training of the mean function and the kernel.
```
import mogptk
import numpy as np
```
Let's create a function that is periodic plus a polynomial. In this case we choose a sinusoid with unit frequency and the second-degree polynomial $2.0x - 0.2x^2$.
Note that loading a `Data` object using `mogptk.LoadFunction` lets us keep the "true" signal to which random noise is added. This is useful for plotting and for calculating the "true" prediction error.
```
f = lambda x: np.sin(x[:,0]*2.0*np.pi) + 2*x[:,0] - 0.2*x[:,0]**2
data = mogptk.LoadFunction(f, start=0.0, end=10.0, n=100, var=0.5)
data.plot();
```
In order to make a trainable mean function where the parameters are automatically picked up by [`model.train`](https://games-uchile.github.io/mogptk/model.html#mogptk.model.Model.train), we need to create a new class and derive from [`mogptk.gpr.Mean`](https://games-uchile.github.io/mogptk/gpr/mean.html#mogptk.gpr.mean.Mean). In `__init__` we need to call the base class' initializer and then add parameters by assigning instantiations of [`mogptk.gpr.Parameter`](https://games-uchile.github.io/mogptk/gpr/parameter.html#mogptk.gpr.parameter.Parameter) to the class' properties.
The `__call__` function is called to evaluate the mean function; it gets passed an `X` of shape `(data points,input dims)` and should return the Y values of shape `(data points,)`. Make sure to "call" your parameters before using them: `coefs = self.coefficients()`!
```
class Mean(mogptk.gpr.Mean):
def __init__(self):
super(Mean, self).__init__()
self.coefficients = mogptk.gpr.Parameter([0.0, 0.0, 0.0])
def __call__(self, X):
coefs = self.coefficients()
return coefs[0] + coefs[1]*X[:,1] + coefs[2]*X[:,1]**2
```
We initialize our mean function, create a periodic kernel, and initialize our model. See all implemented [single output](https://games-uchile.github.io/mogptk/gpr/singleoutput.html) and [multi output](https://games-uchile.github.io/mogptk/gpr/multioutput.html) kernels. Note that we need to create an [`IndependentMultiOutputKernel`](https://games-uchile.github.io/mogptk/gpr/multioutput.html#mogptk.gpr.multioutput.IndependentMultiOutputKernel) to handle the multi-output nature of MOGPTK, even though in this case we have only one channel.
In this case, the random initialization of the parameters of the periodic kernel may give poor results. Setting reasonable values allows training to properly optimize the kernel.
```
mean = Mean()
kernel = mogptk.gpr.PeriodicKernel(input_dims=1)
mo_kernel = mogptk.gpr.IndependentMultiOutputKernel(kernel)
model = mogptk.Model(data, mo_kernel, mean=mean, name="Periodic")
# initialize kernel parameters to reasonable values
kernel.l.assign(1.0)
kernel.p.assign(1.0)
```
We will first only train the mean function, then the kernel, and then both the mean function and the kernel. Note that we can set `trainable` for kernels, means, and individual parameters to enable or disable training.
```
mean.trainable = True
kernel.trainable = False
model.train(method='Adam', lr=0.1, iters=100, plot=True, error='MAE');
mean.trainable = False
kernel.trainable = True
model.train(method='Adam', lr=0.1, iters=100, plot=True, error='MAE');
mean.trainable = True
kernel.trainable = True
model.train(method='Adam', lr=0.1, iters=100, plot=True, error='MAE');
model.predict()
data.plot();
mogptk.error(model, disp=True)
```
The trained parameters for the coefficients are close to the values used to create the data set (`[0.0, 2.0, -0.2]`). Also note that the `IMO.noise` parameter is (usually) close to `0.5`. There are two `noise` parameters: one for the entire model and one for each channel of [`IndependentMultiOutputKernel`](https://games-uchile.github.io/mogptk/gpr/multioutput.html#mogptk.gpr.multioutput.IndependentMultiOutputKernel) (IMO). The latter allows training different noises per channel, which makes the model-wide noise redundant; it is therefore fixed to zero.
```
model.print_parameters()
```
<a href="http://cocl.us/pytorch_link_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
</a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
<h1 align=center><font size=5>What's Convolution?</font></h1>
# Table of Contents
In this lab, you will study convolution and review how the different operations change the relationship between input and output.
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="#ref0">What is Convolution</a></li>
<li><a href="#ref1">Determining the Size of the Output</a></li>
<li><a href="#ref2">Stride</a></li>
<li><a href="#ref3">Zero Padding</a></li>
<li><a href="#ref4">Practice Questions</a></li>
</ul>
<br>
<p></p>
Estimated Time Needed: <strong>25 min</strong>
</div>
<hr>
Import the following libraries:
```
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage, misc
```
<a id="ref0"></a>
<h2 align=center>What is Convolution?</h2>
Convolution is a linear operation similar to a linear equation, dot product, or matrix multiplication. Convolution has several advantages for analyzing images. As discussed in the video, convolution preserves the relationship between elements, and it requires fewer parameters than other methods.
You can see the relationship between the different methods that you learned:
$$\text{linear equation:}\quad y=wx+b$$
$$\text{linear equation with multiple variables, where } \mathbf{x} \text{ is a vector:}\quad \mathbf{y}=\mathbf{w}\mathbf{x}+b$$
$$\text{matrix multiplication, where } \mathbf{X} \text{ is a matrix:}\quad \mathbf{y}=\mathbf{w}\mathbf{X}+\mathbf{b}$$
$$\text{convolution, where } \mathbf{X} \text{ and } \mathbf{Y} \text{ are tensors:}\quad \mathbf{Y}=\mathbf{w}*\mathbf{X}+\mathbf{b}$$
In convolution, the parameter <b>w</b> is called a kernel. You can perform convolution on images, where the image takes the role of the variable <b>X</b> and <b>w</b> is the learned parameter.
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1xw.png" width = 500, align = "center">
Create a two-dimensional convolution object using the constructor <code>Conv2d</code>. The parameters <code>in_channels</code> and <code>out_channels</code> will both be one for this section, and the parameter <code>kernel_size</code> will be three.
```
conv = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
conv
```
Because the parameters in <code>nn.Conv2d</code> are randomly initialized and learned through training, give them some values.
```
conv.state_dict()['weight'][0][0]=torch.tensor([[1.0,0,-1.0],[2.0,0,-2.0],[1.0,0.0,-1.0]])
conv.state_dict()['bias'][0]=0.0
conv.state_dict()
```
Create a dummy tensor to represent an image. The shape of the image is (1,1,5,5), where the dimensions are (number of inputs, number of channels, number of rows, number of columns).
Set the third column to 1:
```
image=torch.zeros(1,1,5,5)
image[0,0,:,2]=1
image
```
Call the object <code>conv</code> on the tensor <code>image</code> as an input to perform the convolution and assign the result to the tensor <code>z</code>.
```
z=conv(image)
z
```
The following animation illustrates the process: the kernel performs element-wise multiplication with the corresponding region of the image, and the values are summed. The kernel is then shifted and the process is repeated.
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1convltuon.gif" width = 500, align = "center">
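This multiply-and-sum procedure can be written out explicitly with NumPy (a sketch of the "valid" cross-correlation that <code>nn.Conv2d</code> computes; the function and variable names here are ours):

```python
import numpy as np

def conv2d_manual(image, kernel):
    # Slide the kernel over the image; at each position, multiply the
    # overlapped region element-wise with the kernel and sum the result.
    M = image.shape[0]
    K = kernel.shape[0]
    out = np.zeros((M - K + 1, M - K + 1))
    for i in range(M - K + 1):
        for j in range(M - K + 1):
            out[i, j] = np.sum(image[i:i + K, j:j + K] * kernel)
    return out

# The same setup as above: a 5x5 image with the third column set to 1,
# and the 3x3 kernel assigned to conv.
image = np.zeros((5, 5))
image[:, 2] = 1.0
kernel = np.array([[1.0, 0.0, -1.0], [2.0, 0.0, -2.0], [1.0, 0.0, -1.0]])
print(conv2d_manual(image, kernel))
```

The result matches the output of <code>conv</code> above, because PyTorch's convolution is in fact a cross-correlation (the kernel is not flipped).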
<a id="ref1"></a>
<h2 align=center>Determining the Size of the Output</h2>
The size of the output is an important parameter. In this lab, you will assume square images. For rectangular images, the same formula can be applied to each dimension independently.
Let M be the size of the input and K be the size of the kernel. The size of the output is given by the following formula:
$$M_{new}=M-K+1$$
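This formula can be checked with a small helper (plain Python; the function name is ours):

```python
def conv_output_size(M, K):
    # Output size along one dimension for stride 1 and no padding: M - K + 1.
    return M - K + 1

print(conv_output_size(5, 3))  # 3, as for the 5x5 image and 3x3 kernel above
print(conv_output_size(4, 2))  # 3
```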
Create a kernel of size 2:
```
K=2
conv1 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=K)
conv1.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv1.state_dict()['bias'][0]=0.0
conv1.state_dict()
conv1
```
Create an image of size 4:
```
M=4
image1=torch.ones(1,1,M,M)
```
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1kernal2.png" width = 500, align = "center">
The following equation provides the output:
$$M_{new}=M-K+1$$
$$M_{new}=4-2+1$$
$$M_{new}=3$$
The following animation illustrates the process: the first overlay of the kernel on the image produces one output. As the kernel is of size K, it can shift M-K times in the horizontal direction, giving M-K+1 outputs. The same logic applies to the vertical direction.
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1outsize.gif" width = 500, align = "center">
Perform the convolution and verify the size is correct:
```
z1=conv1(image1)
print("z1:",z1)
print("shape:",z1.shape[2:4])
```
<a id="ref2"></a>
<h2 align=center>Stride parameter</h2>
The parameter stride changes the number of elements the kernel shifts per iteration. As a result, the output size also changes and is given by the following formula:
$$M_{new}=\dfrac{M-K}{stride}+1$$
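A quick check of this formula in plain Python (the variable names are ours; the floor division matches PyTorch's behaviour when the division is not exact):

```python
M, K, stride = 4, 2, 2  # the image and kernel sizes used in this section
M_new = (M - K) // stride + 1
print(M_new)  # 2
```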
Create a convolution object with a stride of 2:
```
conv3 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=2)
conv3.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv3.state_dict()['bias'][0]=0.0
conv3.state_dict()
```
For an image with a size of 4, calculate the output size:
$$M_{new}=\dfrac{M-K}{stride}+1$$
$$M_{new}=\dfrac{4-2}{2}+1$$
$$M_{new}=2$$
The following animation illustrates the process: the first overlay of the kernel on the image produces one output. Because the kernel is of size K, there are M-K=2 elements left to traverse. With a stride of 2, the kernel moves 2 elements at a time, so you divide M-K by the stride:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1stride2.gif" width = 500, align = "center">
Perform the convolution and verify the size is correct:
```
z3=conv3(image1)
print("z3:",z3)
print("shape:",z3.shape[2:4])
```
<a id='ref3'></a>
<h2 align=center>Zero Padding </h2>
As you apply successive convolutions, the image will shrink. You can apply zero padding to keep the image at a reasonable size, which also holds information at the borders.
In addition, the formula might not give an integer value for the size of the output. Consider the following image:
```
image1
```
Try performing convolutions with the <code>kernel_size=2</code> and a <code>stride=3</code>. Use these values:
$$M_{new}=\dfrac{M-K}{stride}+1$$
$$M_{new}=\dfrac{4-2}{3}+1$$
$$M_{new}=1.666$$
Because the result is not an integer, PyTorch truncates (floors) it, so the actual output size is 1:
```
conv4 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=3)
conv4.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv4.state_dict()['bias'][0]=0.0
conv4.state_dict()
z4=conv4(image1)
print("z4:",z4)
print("z4:",z4.shape[2:4])
```
You can add rows and columns of zeros around the image. This is called padding. In the constructor <code>Conv2d</code>, you specify the number of rows or columns of zeros that you want to add with the parameter padding.
For a square image, you merely pad an extra column of zeros before the first column and after the last column, and repeat the process for the rows. As a result, for a square image, the width and height are the original size plus 2 × the number of padding elements specified. You can then determine the size of the output accordingly, as shown in the following equations, which give the size of an image after padding and then applying a convolution kernel of size K.
$$M'=M+2 \times padding$$
$$M_{new}=M'-K+1$$
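Both equations can be combined into one helper that also covers stride (a plain-Python sketch; the function name is ours, and the floor division mirrors how PyTorch truncates a non-integer result):

```python
def conv_output_size(M, K, stride=1, padding=0):
    # Pad the image on both sides, then apply the output-size formula
    # with floor division.
    M_padded = M + 2 * padding
    return (M_padded - K) // stride + 1

print(conv_output_size(4, 2, stride=3, padding=1))  # 2
print(conv_output_size(4, 2, stride=3))             # 1, the truncated 1.666 case above
```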
Consider the following example:
```
conv5 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=3,padding=1)
conv5.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv5.state_dict()['bias'][0]=0.0
conv5.state_dict()
z5=conv5(image1)
print("z5:",z5)
print("z5:",z5.shape[2:4])
```
The process is summarized in the following animation:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%206/6.1.1zeropad.gif" width = 500, align = "center">
<a id='ref4'></a>
<h2 align=center>Practice Questions</h2>
A kernel of zeros with a kernel size=3 is applied to the following image:
```
Image=torch.randn((1,1,4,4))
Image
```
Question: Without using the function, determine the value of each element of the output:
Double-click __here__ for the solution.
<!-- Your answer is below:
As each element of the kernel is zero, and for every output, the image is multiplied by the kernel, the result is always zero
-->
Question: Use the following convolution object to perform convolution on the tensor <code>Image</code>:
```
conv = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
conv.state_dict()['weight'][0][0]=torch.tensor([[0,0,0],[0,0,0],[0,0.0,0]])
conv.state_dict()['bias'][0]=0.0
```
Double-click __here__ for the solution.
<!-- Your answer is below:
conv(Image)
-->
Question: You have an image of size 4 and the parameters kernel_size=2, stride=2. What is the size of the output?
Double-click __here__ for the solution.
<!-- Your answer is below:
(M-K)/stride +1
(4-2)/2 +1
2
-->
<a href="http://cocl.us/pytorch_link_bottom">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>
### About the Authors:
[Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition.
Other contributors: [Michelle Carey]( https://www.linkedin.com/in/michelleccarey/), [Mavis Zhou]( https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a/)
<hr>
Copyright © 2018 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
# AR6 WG1 - SPM.8
This notebook reproduces parts of **Figure SPM.8** of the IPCC's *Working Group I contribution to the Sixth Assessment Report* ([AR6 WG1](https://www.ipcc.ch/assessment-report/ar6/)).
The data supporting the SPM figure is published under a Creative Commons CC-BY license at
the [Centre for Environmental Data Analyis (CEDA)](https://catalogue.ceda.ac.uk/uuid/ae4f1eb6fce24adcb92ddca1a7838a5c).
This notebook uses a version of that data which was processed for interoperability with the format used by IPCC WG3, the so-called IAMC format.
The notebook is available under an open-source [BSD-3 License](https://github.com/openscm/AR6-WG1-Data-Compilation/blob/main/LICENSE) in the [openscm/AR6-WG1-Data-Compilation](https://github.com/openscm/AR6-WG1-Data-Compilation) GitHub repository.
The notebook uses the Python package [pyam](https://pyam-iamc.readthedocs.io), which provides a suite of features and methods for the analysis, validation and visualization of reference data and scenario results
generated by integrated assessment models, macro-energy tools and other frameworks
in the domain of energy transition, climate change mitigation and sustainable development.
```
import matplotlib.pyplot as plt
import pandas as pd
import pyam
import utils
rc = pyam.run_control()
rc.update("plotting.yaml")
```
## Import and inspect the scenario data
The processed timeseries data for SPM.8 has two columns that are not standard IAMC/pyam columns, *reference_period_end_year* and *reference_period_start_year*, used for advanced features with **scmdata**.
To work with the data in pyam, we first import the data as a **pandas.DataFrame**.<br />
Then, we remove these columns and cast the data as a [pyam.IamDataFrame](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html).
```
data = pd.read_csv(utils.DATA_DIR / "processed" / "fig-spm8" / "fig-spm8-timeseries.csv")
data.drop(columns=["reference_period_end_year", "reference_period_start_year"], inplace=True)
df = pyam.IamDataFrame(data)
df
```
## Create a simple plot for surface air temperature change (Panel a)
We use [matplotlib](https://matplotlib.org) and
the [pyam plotting module](https://pyam-iamc.readthedocs.io/en/stable/gallery/index.html)
to create a figure composed of several elements (the mean and the ranges).
The pyam plotting feature `final_ranges` makes it possible to highlight the range of temperature outcomes per scenario
at the end of the century ([read the docs](https://pyam-iamc.readthedocs.io/en/stable/api/plotting.html#pyam.plotting.line)).
```
fig, ax = plt.subplots()
# Compile the variables to be used for this figure
variable = "Surface Air Temperature Change"
mean = f"{variable}|Mean"
ranges = [f"{variable}|{quant}" for quant in ["5%", "95%"]]
# Plot the data
df.filter(variable=mean).plot(ax=ax, color="scenario")
df.filter(variable=ranges).plot(ax=ax, color="scenario", alpha=0, fill_between=True, final_ranges=True)
# Clean and show the plot
ax.set_title("Global surface temperature change relative to 1850-1900")
plt.tight_layout()
fig
```
# Introduction to scientific computing with Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this [IPython notebook](http://ipython.org/notebook.html) lecture is available at [http://github.com/jrjohansson/scientific-python-lectures](http://github.com/jrjohansson/scientific-python-lectures).
The other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io).
## The role of computing in science
Science has traditionally been divided into experimental and theoretical disciplines, but during the last several decades computing has emerged as a very important part of science. Scientific computing is often closely related to theory, but it also has many characteristics in common with experimental work. It is therefore often viewed as a new third branch of science. In most fields of science, computational work is an important complement to both experiments and theory, and nowadays a vast majority of both experimental and theoretical papers involve some numerical calculations, simulations or computer modeling.
<center>
<img src="images/theory-experiment-computation.png" width="300">
</center>
In experimental and theoretical sciences there are well established codes of conducts for how results and methods are published and made available to other scientists. For example, in theoretical sciences, derivations, proofs and other results are published in full detail, or made available upon request. Likewise, in experimental sciences, the methods used and the results are published, and all experimental data should be available upon request. It is considered unscientific to withhold crucial details in a theoretical proof or experimental method, that would hinder other scientists from replicating and reproducing the results.
In computational sciences there are not yet any well established guidelines for how source code and generated data should be handled. For example, it is relatively rare that source code used in simulations for published papers are provided to readers, in contrast to the open nature of experimental and theoretical work. And it is not uncommon that source code for simulation software is withheld and considered a competitive advantage (or unnecessary to publish).
However, this issue has recently started to attract increasing attention, and a number of editorials in high-profile journals have called for increased openness in computational sciences. Some prestigious journals, including Science, have even started to demand that authors provide the source code for simulation software used in publications to readers upon request.
Discussions are also ongoing on how to facilitate distribution of scientific software, for example as supplementary materials to scientific papers.
### References
* [Reproducible Research in Computational Science](http://dx.doi.org/10.1126/science.1213847), Roger D. Peng, Science 334, 1226 (2011).
* [Shining Light into Black Boxes](http://dx.doi.org/10.1126/science.1218263), A. Morin et al., Science 336, 159-160 (2012).
* [The case for open computer programs](http://dx.doi.org/doi:10.1038/nature10836), D.C. Ince, Nature 482, 485 (2012).
## Requirements on scientific computing
**Replication** and **reproducibility** are two of the cornerstones in the scientific method. With respect to numerical work, complying with these concepts have the following practical implications:
* Replication: An author of a scientific paper that involves numerical calculations should be able to rerun the simulations and replicate the results upon request. Other scientists should also be able to perform the same calculations and obtain the same results, given the information about the methods used in a publication.
* Reproducibility: The results obtained from numerical simulations should be reproducible with an independent implementation of the method, or using a different method altogether.
In summary: A sound scientific result should be reproducible, and a sound scientific study should be replicable.
To achieve these goals, we need to:
* Keep and take note of *exactly* which source code and version was used to produce data and figures in published papers.
* Record which versions of external software were used, and keep access to the environment that was used.
* Make sure that old codes and notes are backed up and kept for future reference.
* Be ready to give additional information about the methods used, and perhaps also the simulation codes, to an interested reader who requests it (even years after the paper was published!).
* Ideally codes should be published online, to make it easier for other scientists interested in the codes to access it.
### Tools for managing source code
Ensuring replicability and reproducibility of scientific simulations is a *complicated problem*, but there are good tools to help with this:
* Revision Control System (RCS) software.
* Good choices include:
* git - http://git-scm.com
* mercurial - http://mercurial.selenic.com. Also known as `hg`.
* subversion - http://subversion.apache.org. Also known as `svn`.
* Online repositories for source code. Available as both private and public repositories.
* Some good alternatives are
* Github - http://www.github.com
* Bitbucket - http://www.bitbucket.com
* Privately hosted repositories on the university's or department's servers.
#### Note
Repositories are also excellent for version controlling manuscripts, figures, thesis files, data files, lab logs, etc. Basically for any digital content that must be preserved and is frequently updated. Again, both public and private repositories are readily available. They are also excellent collaboration tools!
## What is Python?
[Python](http://www.python.org/) is a modern, general-purpose, object-oriented, high-level programming language.
General characteristics of Python:
* **clean and simple language:** Easy-to-read and intuitive code, easy-to-learn minimalistic syntax, maintainability scales well with size of projects.
* **expressive language:** Fewer lines of code, fewer bugs, easier to maintain.
Technical details:
* **dynamically typed:** No need to define the type of variables, function arguments or return types.
* **automatic memory management:** No need to explicitly allocate and deallocate memory for variables and data arrays. No memory leak bugs.
* **interpreted:** No need to compile the code. The Python interpreter reads and executes the python code directly.
Advantages:
* The main advantage is ease of programming, minimizing the time required to develop, debug and maintain the code.
* Well-designed language that encourages many good programming practices:
* Modular and object-oriented programming, good system for packaging and re-use of code. This often results in more transparent, maintainable and bug-free code.
* Documentation tightly integrated with the code.
* A large standard library, and a large collection of add-on packages.
Disadvantages:
* Since Python is an interpreted and dynamically typed programming language, the execution of python code can be slow compared to compiled statically typed programming languages, such as C and Fortran.
* Somewhat decentralized, with different environment, packages and documentation spread out at different places. Can make it harder to get started.
## What makes python suitable for scientific computing?
<img src="images/optimizing-what.png" width="600">
* Python has a strong position in scientific computing:
* Large community of users, easy to find help and documentation.
* Extensive ecosystem of scientific libraries and environments
* numpy: http://numpy.scipy.org - Numerical Python
* scipy: http://www.scipy.org - Scientific Python
* matplotlib: http://www.matplotlib.org - graphics library
* Great performance due to close integration with time-tested and highly optimized codes written in C and Fortran:
* blas, atlas blas, lapack, arpack, Intel MKL, ...
* Good support for
* Parallel processing with processes and threads
* Interprocess communication (MPI)
* GPU computing (OpenCL and CUDA)
* Readily available and suitable for use on high-performance computing clusters.
* No license costs, no unnecessary use of research budget.
### The scientific python software stack
<!-- <img src="files/images/scientific-python-stack.svg" width="300"> -->
<img src="images/scientific-python-stack.png" width="300">
### Python environments
Python is not only a programming language, but often also refers to the standard implementation of the interpreter (technically referred to as [CPython](http://en.wikipedia.org/wiki/CPython)) that actually runs the python code on a computer.
There are also many different environments through which the python interpreter can be used. Each environment has different advantages and is suitable for different workflows. One strength of python is that it is versatile and can be used in complementary ways, but it can be confusing for beginners so we will start with a brief survey of python environments that are useful for scientific computing.
### Python interpreter
The standard way to use the Python programming language is to use the Python interpreter to run python code. The python interpreter is a program that reads and executes the python code in files passed to it as arguments. At the command prompt, the command ``python`` is used to invoke the Python interpreter.
For example, to run a file ``my-program.py`` that contains python code from the command prompt, use:
$ python my-program.py
We can also start the interpreter by simply typing ``python`` at the command line, and interactively type python code into the interpreter.
<!-- <img src="files/images/python-screenshot.jpg" width="600"> -->
<img src="images/python-screenshot.jpg" width="600">
This is often how we want to work when developing scientific applications, or when doing small calculations. But the standard python interpreter is not very convenient for this kind of work, due to a number of limitations.
### IPython
IPython is an interactive shell that addresses the limitations of the standard python interpreter, and it is a work-horse for scientific use of python. It provides an interactive prompt to the python interpreter with greatly improved user-friendliness.
<!-- <img src="files/images/ipython-screenshot.jpg" width="600"> -->
<img src="images/ipython-screenshot.jpg" width="600">
Some of the many useful features of IPython includes:
* Command history, which can be browsed with the up and down arrows on the keyboard.
* Tab auto-completion.
* In-line editing of code.
* Object introspection, and automatic extraction of documentation strings from python objects like classes and functions.
* Good interaction with operating system shell.
* Support for multiple parallel back-end processes, that can run on computing clusters or cloud services like Amazon EC2.
### IPython notebook
[IPython notebook](http://ipython.org/notebook.html) is an HTML-based notebook environment for Python, similar to Mathematica or Maple. It is based on the IPython shell, but provides a cell-based environment with great interactivity, where calculations can be organized and documented in a structured way.
<!-- <img src="files/images/ipython-notebook-screenshot.jpg" width="800"> -->
<img src="images/ipython-notebook-screenshot.jpg" width="800">
Although it uses a web browser as its graphical interface, IPython notebooks are usually run locally, on the same computer that runs the browser. To start a new IPython notebook session, run the following command:
$ ipython notebook
from a directory where you want the notebooks to be stored. This will open a new browser window (or a new tab in an existing window) with an index page where existing notebooks are shown and from which new notebooks can be created.
### Spyder
[Spyder](http://code.google.com/p/spyderlib/) is a MATLAB-like IDE for scientific computing with python. It has the many advantages of a traditional IDE environment, for example that everything from code editing, execution and debugging is carried out in a single environment, and work on different calculations can be organized as projects in the IDE environment.
<!-- <img src="files/images/spyder-screenshot.jpg" width="800"> -->
<img src="images/spyder-screenshot.jpg" width="800">
Some advantages of Spyder:
* Powerful code editor, with syntax highlighting, dynamic code introspection and integration with the python debugger.
* Variable explorer, IPython command prompt.
* Integrated documentation and help.
## Versions of Python
There are currently two versions of python: Python 2 and Python 3. Python 3 will eventually supersede Python 2, but it is not backward-compatible with Python 2. A lot of existing python code and packages have been written for Python 2, and it is still the most widespread version. For these lectures either version will be fine, but it is probably easier to stick with Python 2 for now, because it is more readily available via prebuilt packages and binary installers.
To see which version of Python you have, run
$ python --version
Python 2.7.3
$ python3.2 --version
Python 3.2.3
Several versions of Python can be installed in parallel, as shown above.
## Installation
### Conda
The best way to set up a scientific Python environment is to use the cross-platform package manager `conda` from Continuum Analytics. First download and install miniconda http://conda.pydata.org/miniconda.html or Anaconda (see below). Next, to install the required libraries for these notebooks, simply run:
$ conda install ipython ipython-notebook spyder numpy scipy sympy matplotlib cython
This should be sufficient to get a working environment on any platform supported by `conda`.
### Linux
On Ubuntu Linux, to install Python and all the requirements, run:
$ sudo apt-get install python ipython ipython-notebook
$ sudo apt-get install python-numpy python-scipy python-matplotlib python-sympy
$ sudo apt-get install spyder
### MacOS X
*Macports*
Python is included by default in Mac OS X, but for our purposes it will be useful to install a new python environment using [Macports](http://www.macports.org/), because it makes it much easier to install all the required additional packages. Using Macports, we can install what we need with:
```
$ sudo port install py27-ipython +pyside+notebook+parallel+scientific
$ sudo port install py27-scipy py27-matplotlib py27-sympy
$ sudo port install py27-spyder
```
To associate the commands `python` and `ipython` with the versions installed via Macports (instead of the ones shipped with Mac OS X), run the following commands:
```
$ sudo port select python python27
$ sudo port select ipython ipython27
```
*Fink*
Or, alternatively, you can use the [Fink](http://www.finkproject.org/) package manager. After installing Fink, use the following command to install python and the packages that we need:
```
$ sudo fink install python27 ipython-py27 numpy-py27 matplotlib-py27 scipy-py27 sympy-py27
$ sudo fink install spyder-mac-py27
```
### Windows
Windows lacks a good packaging system, so the easiest way to set up a Python environment is to install a pre-packaged distribution. Some good alternatives are:
* [Enthought Python Distribution](http://www.enthought.com/products/epd.php). EPD is a commercial product but is available free for academic use.
* [Anaconda](http://continuum.io/downloads.html). The Anaconda Python distribution comes with many scientific computing and data science packages and is free, including for commercial use and redistribution. It also has add-on products such as Accelerate, IOPro, and MKL Optimizations, which have free trials and are free for academic use.
* [Python(x,y)](http://code.google.com/p/pythonxy/). Fully open source.
#### Note
EPD and Anaconda are also available for Linux and Mac OS X.
## Further reading
* [Python](http://www.python.org). The official Python web site.
* [Python tutorials](http://docs.python.org/2/tutorial). The official Python tutorials.
* [Think Python](http://www.greenteapress.com/thinkpython). A free book on Python.
## Python and module versions
Since there are several different versions of Python and each Python package has its own release cycle and version number (for example scipy, numpy, matplotlib, etc., which we installed above and will discuss in detail in the following lectures), it is important for the reproducibility of an IPython notebook to record the versions of all these different software packages. If this is done properly it will be easy to reproduce the environment that was used to run a notebook, but if not it can be hard to know what was used to produce the results in a notebook.
To encourage the practice of recording Python and module versions in notebooks, I've created a simple IPython extension that produces a table with version numbers of selected software components. I believe that it is a good practice to include this kind of table in every notebook you create.
To install this IPython extension, use `pip install version_information`:
```
# you only need to do this once
!pip install --upgrade version_information
```
Now, to load the extension and produce the version table:
```
%load_ext version_information
%version_information numpy, scipy, matplotlib, sympy, version_information
```
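If you prefer not to install an extension, similar information can be recorded with the standard library alone. A minimal stdlib-only sketch (the package names in the loop are examples, and `importlib.metadata` requires a modern Python, 3.8+):

```python
import sys
import platform

print("Python", platform.python_version())
print("executable:", sys.executable)

# in modern Python (3.8+), importlib.metadata reports installed package versions
from importlib.metadata import version, PackageNotFoundError
for pkg in ["numpy", "scipy", "matplotlib"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```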
```
%matplotlib inline
from sklearn.svm import SVR
from sklearn.ensemble import BaggingRegressor
from sklearn.datasets import make_regression
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
X, y = make_regression(n_samples=100, n_features=4,
n_informative=2, n_targets=1,
random_state=0, shuffle=False)
X[:,0]
plt.scatter(X[:,0],y) # first Feature with Target
plt.scatter(X[:,1],y) # Second Feature With Target
plt.scatter(X[:,2],y) # Third Feature With Target
plt.scatter(X[:,3],y) # Fourth Feature With Target
regr = BaggingRegressor(base_estimator=SVR(),
n_estimators=10, random_state=0)
regr.fit(X,y)
regr.predict([[0, 0, 0, 0]])
```
This example illustrates and compares the bias-variance decomposition of the expected mean squared error of a single estimator against a bagging ensemble.
* In regression, the expected mean squared error of an estimator can be decomposed in terms of bias, variance and noise. Averaged over datasets of the regression problem, the bias term measures the average amount by which the predictions of the estimator differ from the predictions of the best possible estimator for the problem (i.e., the Bayes model). The variance term measures the variability of the predictions of the estimator when fit over different instances LS of the problem. Finally, the noise measures the irreducible part of the error, which is due to the variability in the data.
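This decomposition can be checked numerically. Writing `m` for the mean prediction across repeats, the squared error averaged over all prediction/observation pairs splits exactly into a bias² term (measured against the mean observed target), the observation noise, and the prediction variance. A minimal pure-Python sketch with hypothetical numbers:

```python
# Numerical check of the bias^2 + variance + noise decomposition on toy values.
y_obs = [1.0, 1.2, 0.8, 1.1]   # repeated noisy observations of the same point
preds = [0.9, 1.3, 1.0]        # predictions from models fit on different training sets

def mean(xs):
    return sum(xs) / len(xs)

total = mean([(y - p) ** 2 for y in y_obs for p in preds])  # avg over all pairs
m_pred = mean(preds)
bias_sq = (mean(y_obs) - m_pred) ** 2                        # bias^2
noise = mean([(y - mean(y_obs)) ** 2 for y in y_obs])        # variance of observations
variance = mean([(p - m_pred) ** 2 for p in preds])          # variance of predictions

# the identity holds exactly (up to float rounding)
print(abs(total - (bias_sq + noise + variance)) < 1e-12)
```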
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
# Settings
n_repeat = 50 # Number of iterations for computing expectations
n_train = 50 # Size of the training set
n_test = 1000 # Size of the test set
noise = 0.1 # Standard deviation of the noise
np.random.seed(0)
estimators = [("Tree", DecisionTreeRegressor()),
("Bagging(Tree)", BaggingRegressor(DecisionTreeRegressor()))]
n_estimators = len(estimators)
# Generate data
def f(x):
x = x.ravel()
return np.exp(-x ** 2) + 1.5 * np.exp(-(x - 2) ** 2)
def generate(n_samples, noise, n_repeat=1):
X = np.random.rand(n_samples) * 10 - 5
X = np.sort(X)
if n_repeat == 1:
y = f(X) + np.random.normal(0.0, noise, n_samples)
else:
y = np.zeros((n_samples, n_repeat))
for i in range(n_repeat):
y[:, i] = f(X) + np.random.normal(0.0, noise, n_samples)
X = X.reshape((n_samples, 1))
return X, y
X_train = []
y_train = []
for i in range(n_repeat):
X, y = generate(n_samples=n_train, noise=noise)
X_train.append(X)
y_train.append(y)
X_test, y_test = generate(n_samples=n_test, noise=noise, n_repeat=n_repeat)
plt.figure(figsize=(30, 15))
# Loop over estimators to compare
for n, (name, estimator) in enumerate(estimators):
# Compute predictions
y_predict = np.zeros((n_test, n_repeat))
for i in range(n_repeat):
estimator.fit(X_train[i], y_train[i])
y_predict[:, i] = estimator.predict(X_test)
# Bias^2 + Variance + Noise decomposition of the mean squared error
y_error = np.zeros(n_test)
for i in range(n_repeat):
for j in range(n_repeat):
y_error += (y_test[:, j] - y_predict[:, i]) ** 2
y_error /= (n_repeat * n_repeat)
y_noise = np.var(y_test, axis=1)
y_bias = (f(X_test) - np.mean(y_predict, axis=1)) ** 2
y_var = np.var(y_predict, axis=1)
print("{0}: {1:.4f} (error) = {2:.4f} (bias^2) "
" + {3:.4f} (var) + {4:.4f} (noise)".format(name,
np.mean(y_error),
np.mean(y_bias),
np.mean(y_var),
np.mean(y_noise)))
# Plot figures
plt.subplot(2, n_estimators, n + 1)
plt.plot(X_test, f(X_test), "b", label="$f(x)$")
plt.plot(X_train[0], y_train[0], ".b", label="LS ~ $y = f(x)+noise$")
for i in range(n_repeat):
if i == 0:
plt.plot(X_test, y_predict[:, i], "r", label=r"$\^y(x)$")
else:
plt.plot(X_test, y_predict[:, i], "r", alpha=0.05)
plt.plot(X_test, np.mean(y_predict, axis=1), "c",
label=r"$\mathbb{E}_{LS} \^y(x)$")
plt.xlim([-5, 5])
plt.title(name)
if n == n_estimators - 1:
plt.legend(loc=(1.1, .5))
plt.subplot(2, n_estimators, n_estimators + n + 1)
plt.plot(X_test, y_error, "r", label="$error(x)$")
plt.plot(X_test, y_bias, "b", label="$bias^2(x)$"),
plt.plot(X_test, y_var, "g", label="$variance(x)$"),
plt.plot(X_test, y_noise, "c", label="$noise(x)$")
plt.xlim([-5, 5])
plt.ylim([0, 0.1])
if n == n_estimators - 1:
plt.legend(loc=(1.1, .5))
plt.subplots_adjust(right=.75)
plt.show()
```
# Cross Validation
## Model Validation
To objectively measure the final performance of a predictive model, we must use new data that was not used for parameter fitting, i.e. training — in other words, test data. This is because nonlinear models with many parameters, such as kernel models and neural network models, can achieve arbitrarily good prediction performance on the training data. When overfitting occurs with such models, predictions on the training data remain good while prediction performance on the test data drops sharply.
## Cross Validation
As pointed out above, formally validating model performance requires separate test data, so in practice part of the available data is carved out and used as test data. However, since model performance varies with how the test data is chosen, it is common not to rely on a single test set but to select several different test sets in different ways and run multiple tests.
This testing method is called cross validation. Model performance measured through cross validation is usually expressed with the following two values:
* Mean error (mean performance): how small is the average error on test data not used in training?
* Error variance (variance): how much does the size of the error vary across test data not used in training?

To compute the error variance, the test data must consist of at least three sets.
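For example, given hypothetical error scores from five different test sets, the two summary values can be computed directly:

```python
# Sketch: summarizing cross-validation results by mean error and error variance.
# The scores below are hypothetical MSE values from 5 different test sets.
scores = [10.2, 11.5, 9.8, 12.1, 10.9]

n = len(scores)
mean_score = sum(scores) / n
# sample variance of the error across test sets
var_score = sum((s - mean_score) ** 2 for s in scores) / (n - 1)

print(mean_score)  # average error over the test sets
print(var_score)   # how much the error varies between test sets
```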
## Cross Validation Support in Scikit-Learn
For cross validation, Scikit-Learn provides several ways to split the full data set into training data and test data.
* Simple splitting of data into a train set and a test set
  * data splitter
    * the `train_test_split()` function
* Preparing multiple test sets
  * cross validation iterators
    * `KFold`
    * `StratifiedKFold`
    * `LabelKFold`
    * `LeaveOneOut`
    * `LeavePOut`
    * `LeaveOneLabelOut`
    * `LeavePLabelOut`
    * `ShuffleSplit`
    * `LabelShuffleSplit`
* Repeated evaluation using multiple test sets
  * cross validation calculator
    * `cross_val_score()`
### Simple Data Splitting
The `train_test_split()` function simply splits data into training data and test data.
* Arguments
  * arrays : the data to split
  * test_size : size of the test data
  * train_size : size of the training data
  * random_state : random seed
* Return value
  * a list of split arrays
```
X = np.arange(10).reshape((5, 2))
X
y = np.arange(5)
y
# note: the sklearn.cross_validation module was removed in scikit-learn 0.20;
# in modern versions these names live in sklearn.model_selection
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train
y_train
X_test
y_test
```
### K-fold CV
The K-fold CV (cross-validation) method splits the data set into K sub-sets. Each of the K sub-sets is left out once as the test set while the remaining K-1 sub-sets are used as the training set, so K models are fitted in total.
<img src="https://docs.google.com/drawings/d/1JdgUDzuE75LBxqT5sKOhlPgP6umEkvD3Sm-gKnu-jqA/pub?w=762&h=651" style="margin: 0 auto 0 auto;">
Scikit-Learn's `cross_validation` sub-package provides the `KFold` class for K-fold CV.
```
N = 5
X = np.arange(8 * N).reshape(-1, 2) * 10
y = np.hstack([np.ones(N), np.ones(N) * 2, np.ones(N) * 3, np.ones(N) * 4])
print("X:\n", X, sep="")
print("y:\n", y, sep="")
from sklearn.cross_validation import KFold
cv = KFold(len(X), n_folds=3, random_state=0)
for train_index, test_index in cv:
print("test y:", y[test_index])
print("." * 80 )
print("train y:", y[train_index])
print("=" * 80 )
```
### Stratified K-Fold
* Ensures that the target classes are not concentrated in any one sub-set, i.e. each fold preserves the class proportions
```
from sklearn.cross_validation import StratifiedKFold
cv = StratifiedKFold(y, n_folds=3, random_state=0)
for train_index, test_index in cv:
print("test X:\n", X[test_index])
print("." * 80 )
print("test y:", y[test_index])
print("=" * 80 )
```
### Leave-One-Out (LOO)
* Leaves only a single sample out as the test set in each iteration
```
from sklearn.cross_validation import LeaveOneOut
cv = LeaveOneOut(5)
for train_index, test_index in cv:
print("test X:", X[test_index])
print("." * 80 )
print("test y:", y[test_index])
print("=" * 80 )
```
### Label K-Fold
* Ensures that the same label does not appear in both the train and the test set
* Minimizes the influence of the label on the evaluation
```
from sklearn.cross_validation import LabelKFold
cv = LabelKFold(y, n_folds=3)
for train_index, test_index in cv:
print("test y:", y[test_index])
print("." * 80 )
print("train y:", y[train_index])
print("=" * 80 )
```
### ShuffleSplit
* Test sets may overlap across iterations (data can be reused)
```
from sklearn.cross_validation import ShuffleSplit
cv = ShuffleSplit(5)
for train_index, test_index in cv:
print("test X:", X[test_index])
print("=" * 20 )
```
## Performing Cross Evaluation
CV by itself only performs the role of splitting the data set. To actually obtain model performance (bias error and variance), the evaluation must be repeated using each of these data splits. The function that automates this process is `cross_val_score()`.
* `cross_val_score(estimator, X, y=None, scoring=None, cv=None)`
  * Splits the `X`, `y` data using the cross validation iterator `cv`, fits `estimator` on each split, and repeatedly computes the `scoring` metric
* Arguments
  * estimator : a model that provides a `fit` method
  * X : array
    * independent variable data
  * y : array
    * dependent variable data
  * scoring : string
    * name of the function used for performance validation
  * cv : Cross Validator
    * `None` uses the default 3-fold CV
    * an integer K uses K-fold CV
    * a Cross Validator class object
* Return value
  * scores
    * list of computed performance values
```
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
X, y, coef = make_regression(n_samples=1000, n_features=1, noise=20, coef=True, random_state=0)
model = LinearRegression()
cv = KFold(1000, 10)
scores = np.zeros(10)
for i, (train_index, test_index) in enumerate(cv):
X_train = X[train_index]
y_train = y[train_index]
X_test = X[test_index]
y_test = y[test_index]
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
scores[i] = mean_squared_error(y_test, y_pred)
scores
from sklearn.cross_validation import cross_val_score
cross_val_score(model, X, y, scoring="mean_squared_error", cv=cv)
```
### Performance Functions for Regression Analysis
* `r2_score(y_true, y_pred[, ...])`: R^2 (coefficient of determination) regression score function.
* `explained_variance_score(y_true, y_pred)`: Explained variance regression score function
* `mean_squared_error(y_true, y_pred[, ...])`: Mean squared error regression loss
* `mean_absolute_error(y_true, y_pred)`: Mean absolute error regression loss
* `median_absolute_error(y_true, y_pred)`: Median absolute error regression loss
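To make these definitions concrete, here is a small pure-Python sketch computing three of the metrics by hand on toy values; the formulas follow the standard definitions used by `mean_squared_error`, `mean_absolute_error`, and `r2_score`.

```python
# Hand-computed regression metrics on toy data (standard definitions).
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

n = len(y_true)
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n   # mean squared error
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n     # mean absolute error

mean_y = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))    # residual sum of squares
ss_tot = sum((t - mean_y) ** 2 for t in y_true)               # total sum of squares
r2 = 1 - ss_res / ss_tot                                      # coefficient of determination

print(mse, mae, r2)
```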
## 1. Preparing our dataset
<p><em>These recommendations are so on point! How does this playlist know me so well?</em></p>
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_449/img/iphone_music.jpg" alt="Project Image Record" width="600px"></p>
<p>Over the past few years, streaming services with huge catalogs have become the primary means through which most people listen to their favorite music. But at the same time, the sheer amount of music on offer can mean users might be a bit overwhelmed when trying to look for newer music that suits their tastes.</p>
<p>For this reason, streaming services have looked into means of categorizing music to allow for personalized recommendations. One method involves direct analysis of the raw audio information in a given song, scoring the raw data on a variety of metrics. Today, we'll be examining data compiled by a research group known as The Echo Nest. Our goal is to look through this dataset and classify songs as being either 'Hip-Hop' or 'Rock' - all without listening to a single one ourselves. In doing so, we will learn how to clean our data, do some exploratory data visualization, and use feature reduction towards the goal of feeding our data through some simple machine learning algorithms, such as decision trees and logistic regression.</p>
<p>To begin with, let's load the metadata about our tracks alongside the track metrics compiled by The Echo Nest. A song is about more than its title, artist, and number of listens. We have another dataset that has musical features of each track such as <code>danceability</code> and <code>acousticness</code> on a scale from -1 to 1. These exist in two different files, which are in different formats - CSV and JSON. While CSV is a popular file format for denoting tabular data, JSON is another common file format in which databases often return the results of a given query.</p>
<p>Let's start by creating two pandas <code>DataFrames</code> out of these files that we can merge so we have features and labels (often also referred to as <code>X</code> and <code>y</code>) for the classification later on.</p>
```
import pandas as pd
# Read in track metadata with genre labels
tracks = pd.read_csv('datasets/fma-rock-vs-hiphop.csv')
# Read in track metrics with the features
echonest_metrics = pd.read_json('datasets/echonest-metrics.json', precise_float = True)
# Merge the relevant columns of tracks and echonest_metrics
echo_tracks = pd.merge(echonest_metrics, tracks[['track_id', 'genre_top']], on = 'track_id')
# Inspect the resultant dataframe
echo_tracks.info()
```
## 2. Pairwise relationships between continuous variables
<p>We typically want to avoid using variables that have strong correlations with each other -- hence avoiding feature redundancy -- for a few reasons:</p>
<ul>
<li>To keep the model simple and improve interpretability (with many features, we run the risk of overfitting).</li>
<li>When our datasets are very large, using fewer features can drastically speed up our computation time.</li>
</ul>
<p>To get a sense of whether there are any strongly correlated features in our data, we will use built-in functions in the <code>pandas</code> package.</p>
```
# Create a correlation matrix
corr_metrics = echo_tracks.corr()
corr_metrics.style.background_gradient()
```
## 3. Normalizing the feature data
<p>As mentioned earlier, it can be particularly useful to simplify our models and use as few features as necessary to achieve the best result. Since we didn't find any particular strong correlations between our features, we can instead use a common approach to reduce the number of features called <strong>principal component analysis (PCA)</strong>. </p>
<p>It is possible that the variance between genres can be explained by just a few features in the dataset. PCA rotates the data along the axis of highest variance, thus allowing us to determine the relative contribution of each feature of our data towards the variance between classes. </p>
<p>However, since PCA uses the absolute variance of a feature to rotate the data, a feature with a broader range of values will overpower and bias the algorithm relative to the other features. To avoid this, we must first normalize our data. There are a few methods to do this, but a common way is through <em>standardization</em>, such that all features have a mean = 0 and standard deviation = 1 (the resultant is a z-score).</p>
```
# Define our features
features = echo_tracks.drop(['genre_top', 'track_id'], axis = 1)
# Define our labels
labels = echo_tracks['genre_top']
# Import the StandardScaler
from sklearn.preprocessing import StandardScaler
# Scale the features and set the values to a new variable
scaler = StandardScaler()
scaled_train_features = scaler.fit_transform(features)
```
## 4. Principal Component Analysis on our scaled data
<p>Now that we have preprocessed our data, we are ready to use PCA to determine by how much we can reduce the dimensionality of our data. We can use <strong>scree-plots</strong> and <strong>cumulative explained ratio plots</strong> to find the number of components to use in further analyses.</p>
<p>Scree-plots display the number of components against the variance explained by each component, sorted in descending order of variance. Scree-plots help us get a better sense of which components explain a sufficient amount of variance in our data. When using scree plots, an 'elbow' (a steep drop from one data point to the next) in the plot is typically used to decide on an appropriate cutoff.</p>
```
# This is just to make plots appear in the notebook
%matplotlib inline
# Import our plotting module, and PCA class
#... YOUR CODE ...
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Get our explained variance ratios from PCA using all features
pca = PCA()
pca.fit(scaled_train_features)
exp_variance = pca.explained_variance_ratio_
# plot the explained variance using a barplot
fig, ax = plt.subplots()
ax.bar(range(8), exp_variance)
ax.set_xlabel('Principal Component #')
```
## 5. Further visualization of PCA
<p>Unfortunately, there does not appear to be a clear elbow in this scree plot, which means it is not straightforward to find the number of intrinsic dimensions using this method. </p>
<p>But all is not lost! Instead, we can also look at the <strong>cumulative explained variance plot</strong> to determine how many features are required to explain, say, about 90% of the variance (cutoffs are somewhat arbitrary here, and usually decided upon by 'rules of thumb'). Once we determine the appropriate number of components, we can perform PCA with that many components, ideally reducing the dimensionality of our data.</p>
```
exp_variance
# Import numpy
import numpy as np
# Calculate the cumulative explained variance
cum_exp_variance = np.cumsum(exp_variance)
# Plot the cumulative explained variance and draw a dashed line at 0.90.
fig, ax = plt.subplots()
ax.plot(cum_exp_variance)
ax.axhline(y=0.9, linestyle='--')
n_components = 6
# Perform PCA with the chosen number of components and project data onto components
pca = PCA(n_components, random_state=10)
pca.fit(scaled_train_features)
pca_projection = pca.transform(scaled_train_features)
```
## 6. Train a decision tree to classify genre
<p>Now we can use the lower dimensional PCA projection of the data to classify songs into genres. To do that, we first need to split our dataset into 'train' and 'test' subsets, where the 'train' subset will be used to train our model while the 'test' dataset allows for model performance validation.</p>
<p>Here, we will be using a simple algorithm known as a decision tree. Decision trees are rule-based classifiers that take in features and follow a 'tree structure' of binary decisions to ultimately classify a data point into one of two or more categories. In addition to being easy to both use and interpret, decision trees allow us to visualize the 'logic flowchart' that the model generates from the training data.</p>
<p>Here is an example of a decision tree that demonstrates the process by which an input image (in this case, of a shape) might be classified based on the number of sides it has and whether it is rotated.</p>
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_449/img/simple_decision_tree.png" alt="Decision Tree Flow Chart Example" width="350px"></p>
```
# Import train_test_split function and Decision tree classifier
# ... YOUR CODE ...
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# Split our data
train_features, test_features, train_labels, test_labels = train_test_split(pca_projection,
labels,
random_state = 10)
# Train our decision tree
tree = DecisionTreeClassifier(random_state = 10)
tree.fit(train_features, train_labels)
# Predict the labels for the test data
pred_labels_tree = tree.predict(test_features)
```
## 7. Compare our decision tree to a logistic regression
<p>Although our tree's performance is decent, it's a bad idea to immediately assume that it's therefore the perfect tool for this job -- there's always the possibility of other models that will perform even better! It's always a worthwhile idea to at least test a few other algorithms and find the one that's best for our data.</p>
<p>Sometimes simplest is best, and so we will start by applying <strong>logistic regression</strong>. Logistic regression makes use of what's called the logistic function to calculate the odds that a given data point belongs to a given class. Once we have both models, we can compare them on a few performance metrics, such as false positive and false negative rate (or how many points are inaccurately classified). </p>
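The logistic (sigmoid) function mentioned above maps any real-valued score to a probability in (0, 1); a quick sketch:

```python
import math

def sigmoid(z):
    # logistic function: maps a linear score to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))          # 0.5: a score of zero means even odds
print(sigmoid(4.0) > 0.95)   # large positive scores approach 1
```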
```
# Import LogisticRegression
from sklearn.linear_model import LogisticRegression
# Train our logistic regression and predict labels for the test set
logreg = LogisticRegression(random_state = 10)
logreg.fit(train_features, train_labels)
pred_labels_logit = logreg.predict(test_features)
# Create the classification report for both models
from sklearn.metrics import classification_report
class_rep_tree = classification_report(test_labels, pred_labels_tree)
class_rep_log = classification_report(test_labels, pred_labels_logit)
print("Decision Tree: \n", class_rep_tree)
print("Logistic Regression: \n", class_rep_log)
```
## 8. Balance our data for greater performance
<p>Both our models do similarly well, boasting an average precision of 87% each. However, looking at our classification report, we can see that rock songs are fairly well classified, but hip-hop songs are disproportionately misclassified as rock songs. </p>
<p>Why might this be the case? Well, just by looking at the number of data points we have for each class, we see that we have far more data points for the rock classification than for hip-hop, potentially skewing our model's ability to distinguish between classes. This also tells us that most of our model's accuracy is driven by its ability to classify just rock songs, which is less than ideal.</p>
<p>To account for this, we can weight the value of a correct classification in each class inversely to the occurrence of data points for each class. Since a correct classification for "Rock" is not more important than a correct classification for "Hip-Hop" (and vice versa), we only need to account for differences in <em>sample size</em> of our data points when weighting our classes here, and not relative importance of each class. </p>
```
# Subset only the hip-hop tracks, and then only the rock tracks
hop_only = echo_tracks.loc[echo_tracks["genre_top"] == "Hip-Hop"]
# sample the rocks songs to be the same number as there are hip-hop songs
rock_only = echo_tracks.loc[echo_tracks["genre_top"] == "Rock"].sample(len(hop_only), random_state=10)
# concatenate the dataframes rock_only and hop_only
rock_hop_bal = pd.concat([rock_only, hop_only])
# The features, labels, and pca projection are created for the balanced dataframe
features = rock_hop_bal.drop(['genre_top', 'track_id'], axis=1)
labels = rock_hop_bal['genre_top']
pca_projection = pca.fit_transform(scaler.fit_transform(features))
# Redefine the train and test set with the pca_projection from the balanced data
train_features, test_features, train_labels, test_labels = train_test_split(pca_projection, labels, random_state=10)
```
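Besides undersampling as above, the class-weighting idea described earlier can also be applied directly: weight each class inversely to its frequency. A small pure-Python sketch of computing such weights on hypothetical labels (this mirrors the `n_samples / (n_classes * count)` heuristic behind scikit-learn's `class_weight='balanced'` option):

```python
from collections import Counter

labels = ['Rock'] * 900 + ['Hip-Hop'] * 100   # hypothetical imbalanced labels

counts = Counter(labels)
n_samples = len(labels)
n_classes = len(counts)

# balanced heuristic: weight = n_samples / (n_classes * class_count)
weights = {cls: n_samples / (n_classes * cnt) for cls, cnt in counts.items()}
print(weights)  # the rarer class receives the larger weight
```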
## 9. Does balancing our dataset improve model bias?
<p>We've now balanced our dataset, but in doing so, we've removed a lot of data points that might have been crucial to training our models. Let's test to see if balancing our data improves model bias towards the "Rock" classification while retaining overall classification performance. </p>
<p>Note that we have already reduced the size of our dataset and will go forward without applying any dimensionality reduction. In practice, we would consider dimensionality reduction more rigorously when dealing with vastly large datasets and when computation times become prohibitively large.</p>
```
# Train our decision tree on the balanced data
tree = DecisionTreeClassifier(random_state=10)
tree.fit(train_features, train_labels)
pred_labels_tree = tree.predict(test_features)
# Train our logistic regression on the balanced data
logreg = LogisticRegression(random_state=10)
logreg.fit(train_features, train_labels)
pred_labels_logit = logreg.predict(test_features)
# Compare the models
print("Decision Tree: \n", classification_report(test_labels, pred_labels_tree))
print("Logistic Regression: \n", classification_report(test_labels, pred_labels_logit))
```
## 10. Using cross-validation to evaluate our models
<p>Success! Balancing our data has removed bias towards the more prevalent class. To get a good sense of how well our models are actually performing, we can apply what's called <strong>cross-validation</strong> (CV). This step allows us to compare models in a more rigorous fashion.</p>
<p>Since the way our data is split into train and test sets can impact model performance, CV attempts to split the data multiple ways and test the model on each of the splits. Although there are many different CV methods, all with their own advantages and disadvantages, we will use what's known as <strong>K-fold</strong> CV here. K-fold first splits the data into K different, equally sized subsets. Then, it iteratively uses each subset as a test set while using the remainder of the data as train sets. Finally, we can then aggregate the results from each fold for a final model performance score.</p>
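As a minimal illustration of the splitting mechanics described above (ignoring shuffling and uneven remainders), K-fold index generation can be sketched in pure Python:

```python
# Sketch of how K-fold splits indices: each fold is used once as the test set
# while the remaining folds form the training set.
def kfold_indices(n_samples, n_splits):
    indices = list(range(n_samples))
    fold_size = n_samples // n_splits
    for k in range(n_splits):
        test = indices[k * fold_size:(k + 1) * fold_size]
        train = indices[:k * fold_size] + indices[(k + 1) * fold_size:]
        yield train, test

for train_idx, test_idx in kfold_indices(6, 3):
    print(test_idx)  # each sample appears in exactly one test fold
```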
```
from sklearn.model_selection import KFold, cross_val_score
# Set up our K-fold cross-validation
kf = KFold(n_splits=10, shuffle=True, random_state=10)  # shuffle=True is required for random_state to take effect
tree = DecisionTreeClassifier(random_state=10)
logreg = LogisticRegression(random_state=10)
# Train our models using KFold cv
tree_score = cross_val_score(tree, pca_projection, labels, cv=kf)
logit_score = cross_val_score(logreg, pca_projection, labels, cv=kf)
# Print the mean of each array of scores
print("Decision Tree:", np.mean(tree_score), "Logistic Regression:", np.mean(logit_score))
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.metrics as metrics
import random
import torch
from torch import nn
import torch.nn.functional as F
import more_itertools
def load_activity_map():
    # map PAMAP2 activity ids to human-readable activity names
    return {
        0: 'transient',
        1: 'lying',
        2: 'sitting',
        3: 'standing',
        4: 'walking',
        5: 'running',
        6: 'cycling',
        7: 'Nordic_walking',
        9: 'watching_TV',
        10: 'computer_work',
        11: 'car driving',
        12: 'ascending_stairs',
        13: 'descending_stairs',
        16: 'vacuum_cleaning',
        17: 'ironing',
        18: 'folding_laundry',
        19: 'house_cleaning',
        20: 'playing_soccer',
        24: 'rope_jumping',
    }
def generate_three_IMU(name):
x = name +'_x'
y = name +'_y'
z = name +'_z'
return [x,y,z]
def generate_four_IMU(name):
x = name +'_x'
y = name +'_y'
z = name +'_z'
w = name +'_w'
return [x,y,z,w]
def generate_cols_IMU(name):
# temp
temp = name+'_temperature'
output = [temp]
# acceleration 16
acceleration16 = name+'_3D_acceleration_16'
acceleration16 = generate_three_IMU(acceleration16)
output.extend(acceleration16)
# acceleration 6
acceleration6 = name+'_3D_acceleration_6'
acceleration6 = generate_three_IMU(acceleration6)
output.extend(acceleration6)
# gyroscope
gyroscope = name+'_3D_gyroscope'
gyroscope = generate_three_IMU(gyroscope)
output.extend(gyroscope)
# magnetometer
magnetometer = name+'_3D_magnetometer'
magnetometer = generate_three_IMU(magnetometer)
output.extend(magnetometer)
# orientation
orientation = name+'_4D_orientation'
orientation = generate_four_IMU(orientation)
output.extend(orientation)
return output
def load_IMU():
output = ['time_stamp','activity_id', 'heart_rate']
hand = 'hand'
hand = generate_cols_IMU(hand)
output.extend(hand)
chest = 'chest'
chest = generate_cols_IMU(chest)
output.extend(chest)
ankle = 'ankle'
ankle = generate_cols_IMU(ankle)
output.extend(ankle)
return output
def load_subjects(root):
    cols = load_IMU()
    frames = []
    for i in range(101, 110):
        path = root + str(i) + '.dat'
        subject = pd.read_table(path, header=None, sep=r'\s+')
        subject.columns = cols
        subject['id'] = i
        frames.append(subject)
    # DataFrame.append is deprecated in modern pandas; concatenate all subjects at once
    output = pd.concat(frames, ignore_index=True)
    return output
df = load_subjects('subject')
df = df[df['activity_id'] != 0]
def fix_data(data):
    data = data.drop(data[data['activity_id'] == 0].index)
    data = data.interpolate()
    # fill any remaining NaN values in a column with the mean value of that column
    for colName in data.columns:
        data[colName] = data[colName].fillna(data[colName].mean())
    return data
df = fix_data(df)
df
df.columns
sensor_list = ['time_stamp', 'heart_rate', 'hand_temperature',
'hand_3D_acceleration_16_x', 'hand_3D_acceleration_16_y',
'hand_3D_acceleration_16_z', 'hand_3D_acceleration_6_x',
'hand_3D_acceleration_6_y', 'hand_3D_acceleration_6_z',
'hand_3D_gyroscope_x', 'hand_3D_gyroscope_y', 'hand_3D_gyroscope_z',
'hand_3D_magnetometer_x', 'hand_3D_magnetometer_y',
'hand_3D_magnetometer_z', 'hand_4D_orientation_x',
'hand_4D_orientation_y', 'hand_4D_orientation_z',
'hand_4D_orientation_w', 'chest_temperature',
'chest_3D_acceleration_16_x', 'chest_3D_acceleration_16_y',
'chest_3D_acceleration_16_z', 'chest_3D_acceleration_6_x',
'chest_3D_acceleration_6_y', 'chest_3D_acceleration_6_z',
'chest_3D_gyroscope_x', 'chest_3D_gyroscope_y', 'chest_3D_gyroscope_z',
'chest_3D_magnetometer_x', 'chest_3D_magnetometer_y',
'chest_3D_magnetometer_z', 'chest_4D_orientation_x',
'chest_4D_orientation_y', 'chest_4D_orientation_z',
'chest_4D_orientation_w', 'ankle_temperature',
'ankle_3D_acceleration_16_x', 'ankle_3D_acceleration_16_y',
'ankle_3D_acceleration_16_z', 'ankle_3D_acceleration_6_x',
'ankle_3D_acceleration_6_y', 'ankle_3D_acceleration_6_z',
'ankle_3D_gyroscope_x', 'ankle_3D_gyroscope_y', 'ankle_3D_gyroscope_z',
'ankle_3D_magnetometer_x', 'ankle_3D_magnetometer_y',
'ankle_3D_magnetometer_z', 'ankle_4D_orientation_x',
'ankle_4D_orientation_y', 'ankle_4D_orientation_z',
'ankle_4D_orientation_w']
no_act = ['time_stamp', 'heart_rate', 'hand_temperature',
'hand_3D_acceleration_16_x', 'hand_3D_acceleration_16_y',
'hand_3D_acceleration_16_z', 'hand_3D_acceleration_6_x',
'hand_3D_acceleration_6_y', 'hand_3D_acceleration_6_z',
'hand_3D_gyroscope_x', 'hand_3D_gyroscope_y', 'hand_3D_gyroscope_z',
'hand_3D_magnetometer_x', 'hand_3D_magnetometer_y',
'hand_3D_magnetometer_z', 'hand_4D_orientation_x',
'hand_4D_orientation_y', 'hand_4D_orientation_z',
'hand_4D_orientation_w', 'chest_temperature',
'chest_3D_acceleration_16_x', 'chest_3D_acceleration_16_y',
'chest_3D_acceleration_16_z', 'chest_3D_acceleration_6_x',
'chest_3D_acceleration_6_y', 'chest_3D_acceleration_6_z',
'chest_3D_gyroscope_x', 'chest_3D_gyroscope_y', 'chest_3D_gyroscope_z',
'chest_3D_magnetometer_x', 'chest_3D_magnetometer_y',
'chest_3D_magnetometer_z', 'chest_4D_orientation_x',
'chest_4D_orientation_y', 'chest_4D_orientation_z',
'chest_4D_orientation_w', 'ankle_temperature',
'ankle_3D_acceleration_16_x', 'ankle_3D_acceleration_16_y',
'ankle_3D_acceleration_16_z', 'ankle_3D_acceleration_6_x',
'ankle_3D_acceleration_6_y', 'ankle_3D_acceleration_6_z',
'ankle_3D_gyroscope_x', 'ankle_3D_gyroscope_y', 'ankle_3D_gyroscope_z',
'ankle_3D_magnetometer_x', 'ankle_3D_magnetometer_y',
'ankle_3D_magnetometer_z', 'ankle_4D_orientation_x',
'ankle_4D_orientation_y', 'ankle_4D_orientation_z',
'ankle_4D_orientation_w']
```
### Sliding Window
```
activity_list = list(df['activity_id'].unique())
sub_id_list = list(df['id'].unique())
type(sub_id_list[0])
sub_id_list
index = df[(df['activity_id'] == 5) & (df['id'] == 104)].index
df1 = df.drop(index)
df1[(df1['activity_id'] == 5) & (df1['id'] == 104)]
index2 = df1[(df1['activity_id'] == 24) & (df1['id'] == 106)].index
df2 = df1.drop(index2)
df2
df1[no_act]
df1 = df1.drop(index2)
df1
from window_slider import Slider
def make_windows(df, bucket_size, overlap_count):
window_list = []
final = pd.DataFrame()
activity_list = list(df['activity_id'].unique()) #list of the four activities
sub_id_list = list(df['id'].unique()) #list of the subject ids
df_list = []
for i in sub_id_list:
df_subject = df[df['id'] == i] #isolate a single subject id
for j in activity_list:
df_subject_activity_round = df_subject[df_subject['activity_id'] == j] #isolate by activity
final_df = pd.DataFrame()
if df_subject_activity_round.empty:
pass
else:
df_flat = df_subject_activity_round[no_act].T.values #array of arrays, each row is every single reading in an array for a sensor in that isolation
slider = Slider(bucket_size,overlap_count)
#print(i, j)
slider.fit(df_flat)
while True:
window_data = slider.slide()
if slider.reached_end_of_list(): break
window_list.append(list(window_data))
final_df = pd.concat([final, pd.DataFrame(window_list)])
final_df.columns = [no_act]
final_df.insert(53, "id", [i]*len(final_df), True)
final_df.insert(54, "activity_id", [j]*len(final_df), True)
df_list.append(final_df)
window_list = []
final = pd.DataFrame(columns = df_list[0].columns)
for l in df_list:
final = pd.concat([final, l])
final
final.columns = final.columns.map(''.join)
return final
df_window = make_windows(df1, 500, 100)
df_window.columns = df_window.columns.map(''.join)
df_window
windowed = df_window
windowed['id']
```
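The bucket/overlap arithmetic applied above can be sketched in plain Python: each window holds `bucket_size` samples, and consecutive windows share `overlap_count` samples, so window starts advance by `bucket_size - overlap_count`. The helpers below are hypothetical illustrations of that idea, not part of the notebook or of `window_slider`, whose exact boundary handling may differ:

```
def window_starts(n_samples, bucket_size, overlap_count):
    """Start indices of full windows of length bucket_size,
    where consecutive windows share overlap_count samples."""
    step = bucket_size - overlap_count
    return list(range(0, n_samples - bucket_size + 1, step))

def make_plain_windows(series, bucket_size, overlap_count):
    """Slice a 1-D sequence into overlapping windows."""
    return [series[s:s + bucket_size]
            for s in window_starts(len(series), bucket_size, overlap_count)]

# Example: 10 samples, windows of 4 with overlap 2 -> starts at 0, 2, 4, 6
windows = make_plain_windows(list(range(10)), 4, 2)
```

With `bucket_size=500` and `overlap_count=100` as used above, window starts therefore advance by 400 samples.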
### Split Train Test
```
ID_list = pd.unique(windowed['id'])
random.shuffle(ID_list)
train = pd.DataFrame()
test = pd.DataFrame()
#change size of train/test split
train = windowed[windowed['id'].isin(ID_list[:6])]
test = windowed[windowed['id'].isin(ID_list[6:])]
print(train.shape, test.shape)
X_train = train[no_act]
X_train = X_train.apply(pd.Series.explode).reset_index()
X_test = test[no_act]
X_test = X_test.apply(pd.Series.explode).reset_index()
X_test = X_test.drop(['index'], axis = 1)
X_train = X_train.drop(['index'], axis = 1)
#X_val = X_val.drop(['index'], axis = 1)
print(X_train.shape, X_test.shape, X_train.shape[0] + X_test.shape[0])
y_train = train['activity_id'].values
y_test = test['activity_id'].values
print(y_train.shape, y_test.shape, y_train.shape[0] + y_test.shape[0])
y_train
```
### One-Hot Encoding Subject_ID
```
X_train['train'] = 1
X_test['train'] = 0
combined = pd.concat([X_train, X_test])
combined_dum = pd.get_dummies(combined['SID'])
combined = pd.concat([combined, combined_dum], axis =1)
X_train = combined[combined['train'] == 1]
X_test = combined[combined['train'] == 0]
X_train.drop(["train"], axis = 1, inplace = True)
X_test.drop(["train"], axis = 1, inplace = True)
X_train.drop(["SID"], axis = 1, inplace = True)
X_test.drop(["SID"], axis = 1, inplace = True)
print(X_train.shape, X_test.shape, X_train.shape[0] + X_test.shape[0])
X_train.head(1)
```
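Concatenating train and test before encoding, as done above, guards against the two splits containing different category values, which would give `pd.get_dummies` mismatched columns. A minimal sketch of the pattern with a toy `subject` column (column names are hypothetical, assuming pandas):

```
import pandas as pd

train = pd.DataFrame({'subject': [101, 102], 'train': 1})
test = pd.DataFrame({'subject': [103], 'train': 0})

# Encode on the combined frame so both splits get identical dummy columns
combined = pd.concat([train, test])
combined = pd.concat([combined, pd.get_dummies(combined['subject'])], axis=1)

train_enc = combined[combined['train'] == 1].drop(['train', 'subject'], axis=1)
test_enc = combined[combined['train'] == 0].drop(['train', 'subject'], axis=1)
```

Encoding each split separately would instead give `train_enc` and `test_enc` different column sets, breaking any model fit on one and applied to the other.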
### Reshaping windows as arrays
```
# Convert to transposed arrays
X_test = X_test.T.values
X_train = X_train.T.values
X_train[0]
X_test = X_test.astype('float64')
X_train = X_train.astype('float64')
# Reshape to -1, window_size, # features
X_train = X_train.reshape((-1, 500, X_train.shape[0]))
X_test = X_test.reshape((-1, 500, X_test.shape[0]))
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# y_train and y_test are already NumPy arrays (extracted with .values above)
```
# PyTorch ANN
```
len(windowed.columns)
NB_SENSOR_CHANNELS = 53
SLIDING_WINDOW_LENGTH = 500
```
#### If using TF convert y_train and y_test to One-Hot
```
class HARModel(nn.Module):
def __init__(self, n_hidden=10, n_layers=2, n_filters=64,
n_classes=53, filter_size=5, drop_prob=0.5):
super(HARModel, self).__init__()
self.drop_prob = drop_prob
self.n_layers = n_layers
self.n_hidden = n_hidden
self.n_filters = n_filters
self.n_classes = n_classes
self.filter_size = filter_size
#Layers 2-5
self.conv1 = nn.Conv1d(NB_SENSOR_CHANNELS, n_filters, filter_size)
self.conv2 = nn.Conv1d(n_filters, n_filters, filter_size)
self.conv3 = nn.Conv1d(n_filters, n_filters, filter_size)
self.conv4 = nn.Conv1d(n_filters, n_filters, filter_size)
#Layers 6-7 - each dense layer has LSTM cells
self.lstm1 = nn.LSTM(n_filters, n_hidden, n_layers)
self.lstm2 = nn.LSTM(n_hidden, n_hidden, n_layers)
self.lstm3 = nn.LSTM(n_hidden, n_hidden, n_layers)
self.lstm4 = nn.LSTM(n_hidden, n_hidden, n_layers)
self.lstm5 = nn.LSTM(n_hidden, n_hidden, n_layers)
#Layer 9 - prepare for softmax
self.fc = nn.Linear(n_hidden, n_classes)
#During training, this layer randomly replaces some of the input tensor with zeroes with the probability as denoted by drop_prob (Bernoulli distribution). Each channel is zeroed out independently on every feed-forward cell.
self.dropout = nn.Dropout(drop_prob)
def forward(self, x, hidden, batch_size):
#Layer 1 - flatten (see -1)
x = x.view(-1, NB_SENSOR_CHANNELS, SLIDING_WINDOW_LENGTH)
#print(x.size())
#Layers 2-5 - RELU
x = F.relu(self.conv1(x))
#print(x.size())
x = F.relu(self.conv2(x))
#print(x.size())
x = F.relu(self.conv3(x))
#print(x.size())
x = F.relu(self.conv4(x))
#Layers 5 and 6 - flatten
x = x.view(121, -1, self.n_filters)
#print(x.size())
#Layers 6-8 - hidden layers
x, hidden = self.lstm1(x, hidden)
#print(x.size())
x, hidden = self.lstm2(x, hidden)
#print(x.size())
x, hidden = self.lstm3(x, hidden)
#print(x.size())
x, hidden = self.lstm4(x, hidden)
#print(x.size())
x, hidden = self.lstm5(x, hidden)
#print(x.size())
#Layer 8 - flatten, apply dropout, then fully connected layer feeding the softmax
x = x.contiguous().view(-1, self.n_hidden)
#print(x.size())
x = self.dropout(x)
#print(x.size())
x = self.fc(x)
#print(x.size())
#View flattened output layer
out = x.view(batch_size, -1, self.n_classes)[:,-1,:]
#print(x.size())
return out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
# changed this from batch_size to 4*batch_size
hidden = (weight.new(self.n_layers, 4*batch_size, self.n_hidden).zero_().cuda(),
weight.new(self.n_layers, 4*batch_size, self.n_hidden).zero_().cuda())
else:
# changed this from batch_size to 4*batch_size
hidden = (weight.new(self.n_layers, 4*batch_size, self.n_hidden).zero_(),
weight.new(self.n_layers, 4*batch_size, self.n_hidden).zero_())
return hidden
net = HARModel()
def init_weights(m):
if type(m) == nn.LSTM:
for name, param in m.named_parameters():
#the learnable input-hidden weights of the kth layer
if 'weight_ih' in name:
#fills input with semi-orthogonal tensor
torch.nn.init.orthogonal_(param.data)
#the learnable hidden-hidden weights of the kth layer
elif 'weight_hh' in name:
torch.nn.init.orthogonal_(param.data)
#the learnable input-hidden and hidden-hidden bias of the kth layer
elif 'bias' in name:
param.data.fill_(0)
elif type(m) == nn.Conv1d or type(m) == nn.Linear:
torch.nn.init.orthogonal_(m.weight)
m.bias.data.fill_(0)
#Recursively apply weights
net.apply(init_weights)
def iterate_minibatches(inputs, targets, batchsize, shuffle=True):
assert len(inputs) == len(targets)
if shuffle:
indices = np.arange(len(inputs))
np.random.shuffle(indices)
for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU!')
else:
print('No GPU available, training on CPU; consider making n_epochs very small.')
def train(net, epochs=25, batch_size=64, lr=0.002):
opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
if(train_on_gpu):
net.cuda()
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
train_losses = []
net.train()
for batch in iterate_minibatches(X_train, y_train, batch_size):
x, y = batch
x = x.astype('float32')
y = y.astype('float32')
inputs, targets = torch.from_numpy(x).float(), torch.from_numpy(y).float()
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
opt.zero_grad()
# get the output from the model
output, h = net(inputs, h, batch_size)
#print(inputs.shape)
loss = criterion(output, targets.long())
train_losses.append(loss.item())
loss.backward()
opt.step()
val_h = net.init_hidden(batch_size)
val_losses = []
accuracy=0
f1score=0
net.eval()
with torch.no_grad():
for batch in iterate_minibatches(X_test, y_test, batch_size):
x, y = batch
x = x.astype('float32')
y = y.astype('float32')
inputs, targets = torch.from_numpy(x).float(), torch.from_numpy(y).float()
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
output, val_h= net(inputs, val_h, batch_size)
val_loss = criterion(output, targets.long())
val_losses.append(val_loss.item())
top_p, top_class = output.topk(1, dim=1)
equals = top_class == targets.view(*top_class.shape).long()
accuracy += torch.mean(equals.type(torch.FloatTensor))
f1score += metrics.f1_score(top_class.cpu(), targets.view(*top_class.shape).long().cpu(), average='weighted')
net.train() # reset to train mode after iterating through validation data
print("Epoch: {}/{}...".format(e+1, epochs),
"Train Loss: {:.4f}...".format(np.mean(train_losses)),
"Val Loss: {:.4f}...".format(np.mean(val_losses)),
"Val Acc: {:.4f}...".format(accuracy/(len(X_test)//batch_size)),
"F1-Score: {:.4f}...".format(f1score/(len(X_test)//batch_size)))
print(top_class)
train(net)
inputs
```
# Feature selection
In this example we will see how to **select features** through **model-based selection**.
As a toy example, we will use data from 'Titanic: Machine Learning from Disaster', one of the most popular Kaggle competitions. However, we will not use the original data set. We will use a modified data set, which results from a [Kaggle kernel](https://www.kaggle.com/pmarcelino/data-analysis-and-feature-extraction-with-python) that I wrote.
[Here](https://github.com/pmarcelino/blog/data/titanic_modified.csv) you can access the data set used in this exercise.
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Load data
Let's load the data set.
```
df = pd.read_csv('data/titanic_modified.csv', index_col=0)
df.head()
```
As I mentioned, this data set results from a [kernel](https://www.kaggle.com/pmarcelino/data-analysis-and-feature-extraction-with-python) that I already did for the Titanic competition. I'll summarize each of the features to give you some context:
* **Survived**. Target variable. It's 1 if the passenger survived and 0 otherwise.
* **Fare**. Passenger fare. It keeps the same properties as the original feature.
* **FamilySize**. It's the sum of the original features SibSp and Parch. SibSp refers to the # of siblings / spouses aboard the Titanic. Parch refers to the # of parents / children aboard the Titanic.
* **Imputed**. Identifies instances where some missing data imputation was made.
* **Pclass**. Ticket class. The feature was [encoded](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-categorical-features), so we just have the features corresponding to the second and third class. The one corresponding to the first class was deleted to avoid the [dummy trap](http://www.algosome.com/articles/dummy-variable-trap-regression.html).
* **Sex**. It was also encoded. It's 1 if the instance corresponds to a male, and 0 if it corresponds to a female.
* **Age**. Encoded to the following classes: Children, Adult, and Elder. It's an Adult if Age_Child = 0 and Age_Elder = 0.
* **Embarked**. Port of embarkation. Originally, we had three possible ports: C = Cherbourg, Q = Queenstown, S = Southampton.
* **Title**. Results from the name of the passenger. Guess what? It's also an encoded feature.
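The "dummy trap" mentioned for **Pclass** is avoided by dropping one encoded column per categorical feature, so the dropped category is implied when all remaining dummies are 0. `pd.get_dummies` supports this directly; the snippet below is a small illustrative sketch, not code from the original kernel:

```
import pandas as pd

pclass = pd.Series([1, 2, 3, 3], name='Pclass')

# drop_first=True removes the first-class column, leaving only the
# 2nd- and 3rd-class dummies; first class is the all-zeros case
dummies = pd.get_dummies(pclass, prefix='Pclass', drop_first=True)
```

This is exactly why the data set keeps only the second- and third-class columns.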
# Train and test data sets
In feature selection, our goal is to distinguish features that are useful for prediction from features that just add noise to the prediction model.
To test the model's performance on unseen data, we need a train and a test data set.
```
# Create train and test set
from sklearn.model_selection import train_test_split
X = df.drop('Survived', axis=1) # Keep all features except 'Survived'
y = df['Survived'] # Just keep 'Survived'
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=7)
```
# Feature selection
The solution we propose uses scikit-learn. You can read its documentation to know more about [feature selection](http://scikit-learn.org/stable/modules/feature_selection.html).
Basically, you just have to use **SelectFromModel**. When you use this meta-transformer, you specify which **model** you want to use (e.g. Random Forests) and the **threshold** value to use for feature selection. This threshold value defines which features should be kept: features whose value is above the threshold are kept, features whose value is below the threshold are discarded. You can read more about SelectFromModel [here](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectFromModel.html#sklearn.feature_selection.SelectFromModel).
In this example, I'll show you how to do feature selection using **Random Forests** as the base model. You can use other models. The logic is the same. I usually go for Random Forests because it's a flexible model, which works well with different types of features and in different types of problems.
Since the Titanic is a classification problem, we will use **RandomForestClassifier** as the base model.
```
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=7), threshold='median')
select.fit(X_train, y_train)
X_train_selected = select.transform(X_train)
print(X_train.shape)
print(X_train_selected.shape)
```
Some notes:
* We reduced the number of features from 14 to 7.
* We used 'median' as the threshold value, which is the reason why we have 7 features. You can use other thresholds, like the 'mean' or even the 'mean' times a scaling factor.
* We used n_estimators = 100 for illustrative purposes. You can use a different number.
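The median-threshold rule can be sketched without scikit-learn: keep the columns whose importance is at or above the median importance. This is a simplified sketch of what `SelectFromModel(..., threshold='median')` computes (the helper and the toy importances are hypothetical); scikit-learn also accepts scaled threshold strings such as `'1.25*mean'`:

```
import numpy as np

def select_by_median(X, importances):
    """Keep columns whose importance is at or above the median importance."""
    mask = importances >= np.median(importances)
    return X[:, mask], mask

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
importances = np.array([0.1, 0.4, 0.2, 0.3])
X_sel, mask = select_by_median(X, importances)
```

With 14 features and no importance ties, this keeps exactly the top 7, which is why the transformed Titanic matrix above has 7 columns.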
# Model's performance
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
# Assess model performance
lr = LogisticRegression()
lr.fit(X_train_selected, y_train)
strat_kfold = StratifiedKFold(10, shuffle=True, random_state=7)
score = cross_val_score(lr, X_train_selected, y_train, scoring='accuracy', cv=strat_kfold)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(score), np.std(score)))
# Get features
print(select.get_support(indices=True))
```
# Conclusion
If we compare the results with the example we used in [univariate feature selection](), we can see that the model's performance is lower with this model-based solution.
However, we must be aware that in this case we defined the 'median' as the threshold value. Accordingly, we are restricting the model to half of its original features.
If we compare the results achieved by this model, which has 7 features, with the models with 7 features resulting from the univariate feature selection, we will see that the model-based approach is as good or better than the univariate feature selection.
Feature selection can then be improved using a different base model or considering a different threshold value.
# Complete solution
```
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
# Select features
select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=7), threshold='median')
select.fit(X_train, y_train)
X_train_selected = select.transform(X_train)
print(X_train.shape)
print(X_train_selected.shape)
# Assess model
lr = LogisticRegression()
lr.fit(X_train_selected, y_train)
strat_kfold = StratifiedKFold(10, random_state=7)
score = cross_val_score(lr, X_train_selected, y_train, scoring='accuracy', cv=10)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(score), np.std(score)))
# Get features
print(select.get_support(indices=True))
```
RMinimum: Full - Test Case: $k(n) = \log(n)/\log(\log(n))$
```
import math
import random
import queue
```
Test case: $k(n) = \log(n)/\log(\log(n))$
```
# User input
n = 2**22
# Automatic generation: k = log(n)/loglog(n), X = [0, ..., n-1]
lgn = math.log(n) / math.log(2)
k = int(lgn / (math.log(lgn)/math.log(2)))
X = [i for i in range(n)]
# Show Testcase
print('')
print('Input tuple : (n, k)')
print('============')
print('(', n, ',', k, ')')
```
Algorithm: Full
```
def rminimum(X, k, cnt=None):
k = int(k)
n = len(X)
if cnt is None:
cnt = [0 for _ in range(len(X))]
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
X = X[0]
else:
X = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
X = X[1]
else:
X = X[2]
return cnt
W, L, cnt = RMinimum_step1(X, cnt)
minele, cnt = RMinimum_step2(L, k, cnt)
res3, cnt = RMinimum_step3(W, k, minele, cnt)
res4, cnt = RMinimum_step4(res3, k, n, cnt)
return cnt
# ==================================================
def RMinimum_step1(lst, cnt):
random.shuffle(lst)
W = [0 for _ in range(len(lst) // 2)]
L = [0 for _ in range(len(lst) // 2)]
for i in range(len(lst) // 2):
if lst[2 * i] > lst[2 * i + 1]:
W[i] = lst[2 * i + 1]
L[i] = lst[2 * i]
else:
W[i] = lst[2 * i]
L[i] = lst[2 * i + 1]
cnt[lst[2 * i + 1]] += 1
cnt[lst[2 * i]] += 1
return W, L, cnt
# ==================================================
def RMinimum_step2(L, k, cnt):
random.shuffle(L)
res = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
minele = [0 for _ in range(len(res))]
var = list(res)
for i in range(len(var)):
q = queue.Queue()
for item in var[i]:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
minele[i] = q.get()
return minele, cnt
# ==================================================
def RMinimum_step3(lst, k, minele, cnt):
random.shuffle(lst)
var = [lst[i * k:(i + 1) * k] for i in range((len(lst) + k - 1) // k)]
res = [0 for _ in range(len(var))]
for i in range(len(var)):
res[i] = [elem for elem in var[i] if elem < minele[i]]
cnt[minele[i]] += len(var[i])
for elem in var[i]:
cnt[elem] += 1
res = [item for sublist in res for item in sublist]
return res, cnt
# ==================================================
def RMinimum_step4(newW, k, n, cnt):
if len(newW) <= (math.log(n)/math.log(2))**2:
q = queue.Queue()
var = list(newW)
for item in var:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
res = q.get()
else:
res = rminimum(newW,k, cnt)
return res, cnt
# ==================================================
# Test case
cnt = rminimum(X, k)
```
Algorithm: $\log^2(n)$ $\rightarrow$ $\log(n)$
```
def rminimum(X, k, cnt=None):
k = int(k)
n = len(X)
if cnt is None:
cnt = [0 for _ in range(len(X))]
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
X = X[0]
else:
X = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
X = X[1]
else:
X = X[2]
return cnt
W, L, cnt = RMinimum_step1(X, cnt)
minele, cnt = RMinimum_step2(L, k, cnt)
res3, cnt = RMinimum_step3(W, k, minele, cnt)
res4, cnt = RMinimum_step4(res3, k, n, cnt)
return cnt
# ==================================================
def RMinimum_step1(lst, cnt):
random.shuffle(lst)
W = [0 for _ in range(len(lst) // 2)]
L = [0 for _ in range(len(lst) // 2)]
for i in range(len(lst) // 2):
if lst[2 * i] > lst[2 * i + 1]:
W[i] = lst[2 * i + 1]
L[i] = lst[2 * i]
else:
W[i] = lst[2 * i]
L[i] = lst[2 * i + 1]
cnt[lst[2 * i + 1]] += 1
cnt[lst[2 * i]] += 1
return W, L, cnt
# ==================================================
def RMinimum_step2(L, k, cnt):
random.shuffle(L)
res = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
minele = [0 for _ in range(len(res))]
var = list(res)
for i in range(len(var)):
q = queue.Queue()
for item in var[i]:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
minele[i] = q.get()
return minele, cnt
# ==================================================
def RMinimum_step3(lst, k, minele, cnt):
random.shuffle(lst)
var = [lst[i * k:(i + 1) * k] for i in range((len(lst) + k - 1) // k)]
res = [0 for _ in range(len(var))]
for i in range(len(var)):
res[i] = [elem for elem in var[i] if elem < minele[i]]
cnt[minele[i]] += len(var[i])
for elem in var[i]:
cnt[elem] += 1
res = [item for sublist in res for item in sublist]
return res, cnt
# ==================================================
def RMinimum_step4(newW, k, n, cnt):
if len(newW) <= (math.log(n)/math.log(2)):
q = queue.Queue()
var = list(newW)
for item in var:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
res = q.get()
else:
res = rminimum(newW,k, cnt)
return res, cnt
# ==================================================
# Test case
cnt2 = rminimum(X, k)
```
Result: Comparison: $|W| < \log(n)^2$ vs $|W| < \log(n)$
```
def test(X, k, cnt, cnt2):
# cnt: log^2, cnt2: log
n = len(X)
lgn = int(math.log(n) / math.log(2))
lgk = int(math.log(k) / math.log(2))
f_min = cnt[0]
f_rem = max(cnt[1:])
work = int(sum(cnt)/2)
f_min2 = cnt2[0]
f_rem2 = max(cnt2[1:])
work2 = int(sum(cnt2)/2)
print('')
print('Test case n / k:', n, '/', k)
print('====================================')
print('Attribute : log^2 | log')
print('------------------------------------')
print('f_min :', f_min, '|', f_min2)
print('E[f_min] :', k)
print('------------------------------------')
print('f_rem :', f_rem, '|', f_rem2)
print('E[f_rem] :', k)
print('------------------------------------')
print('log(n) :', lgn)
print('------------------------------------')
print('Work :', work, '|', work2)
print('O(n) :', n)
print('====================================')
return
# ==================================================
# Test case
test(X, k, cnt, cnt2)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Writing custom Keras callbacks
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/guide/keras/custom_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/es/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that this is an accurate and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en).
If you have suggestions on how to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository.
To volunteer to write or review community translations, please contact the [docs@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
A custom callback is a powerful tool for customizing the behavior of a Keras model during training, evaluation, or inference, including reading or changing the Keras model. Examples include `tf.keras.callbacks.TensorBoard`, which exports and visualizes training progress and results with TensorBoard, and `tf.keras.callbacks.ModelCheckpoint`, which automatically saves the model during training, among others. In this guide you will learn what a Keras callback is, when it is called, what it can do, and how you can build your own. At the end of the guide there are demos of simple callback applications to help you get started with your own custom callbacks.
## Setup
```
import tensorflow as tf
```
## Introduction to Keras callbacks
In Keras, `Callback` is a Python class meant to be subclassed to provide specific functionality, with a set of methods called at various stages of training (including the start and end of each batch/epoch), testing, and prediction. Callbacks are useful for getting visibility into the internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument `callbacks`) to any of the methods `tf.keras.Model.fit()`, `tf.keras.Model.evaluate()`, and `tf.keras.Model.predict()`. The callback methods will be called at the different stages of training/evaluation/inference.
To get started, let's import TensorFlow and define a simple sequential Keras model:
```
# Define the Keras model to which the callbacks will be added
def get_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation = 'linear', input_dim = 784))
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.1), loss='mean_squared_error', metrics=['mae'])
return model
```
Then, load the MNIST dataset for training and testing from the Keras datasets API:
```
# Load the MNIST example data and preprocess it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
```
Now, define a simple custom callback to track the start and end of every batch of data. During those calls, it prints the index of the current batch.
```
import datetime
class MyCustomCallback(tf.keras.callbacks.Callback):
def on_train_batch_begin(self, batch, logs=None):
print('Training: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_train_batch_end(self, batch, logs=None):
print('Training: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_begin(self, batch, logs=None):
print('Evaluating: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_end(self, batch, logs=None):
print('Evaluating: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
```
Passing a callback to model methods such as `tf.keras.Model.fit()` ensures the callback's methods are called at those stages:
```
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
epochs=1,
steps_per_epoch=5,
verbose=0,
callbacks=[MyCustomCallback()])
```
## Model methods that accept callbacks
Users can pass a list of callbacks to the following `tf.keras.Model` methods:
#### [`fit()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit), [`fit_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit_generator)
Trains the model for a fixed number of epochs (iterations over a dataset, or over data yielded batch-by-batch by a Python generator).
#### [`evaluate()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate), [`evaluate_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate_generator)
Evaluates the model for the given data or data generator. Returns the loss and metric values from the evaluation.
#### [`predict()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict), [`predict_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict_generator)
Generates output predictions for the given input data or data generator.
NOTE: All the linked documentation is in English.
```
_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=5,
callbacks=[MyCustomCallback()])
```
## An overview of callback methods
### Common methods for training/testing/prediction
For training, testing, and prediction, the following methods are provided to be overridden.
#### `on_(train|test|predict)_begin(self, logs=None)`
Called at the beginning of `fit`/`evaluate`/`predict`.
#### `on_(train|test|predict)_end(self, logs=None)`
Called at the end of `fit`/`evaluate`/`predict`.
#### `on_(train|test|predict)_batch_begin(self, batch, logs=None)`
Called right before processing a batch during training/testing/prediction. Within this method, `logs` is a dict with the keys `batch` and `size` available, representing the current batch number and the size of the batch.
#### `on_(train|test|predict)_batch_end(self, batch, logs=None)`
Called at the end of training/testing/predicting a batch. Within this method, `logs` is a dict containing the stateful metric results.
### Training-specific methods
In addition, the following methods are provided for training.
#### `on_epoch_begin(self, epoch, logs=None)`
Called at the beginning of an epoch during training.
#### `on_epoch_end(self, epoch, logs=None)`
Called at the end of an epoch during training.
### Usage of the `logs` dict
The `logs` dict contains the loss value and all relevant metrics at the end of a batch or epoch. The example below includes the loss and the mean absolute error (MAE).
```
class LossAndErrorPrintingCallback(tf.keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
    def on_test_batch_end(self, batch, logs=None):
        print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
    def on_epoch_end(self, epoch, logs=None):
        print('The average loss for epoch {} is {:7.2f} and MAE is {:7.2f}.'.format(epoch, logs['loss'], logs['mae']))
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=3,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()])
```
Similarly, one can provide callbacks in calls to `evaluate()`.
```
_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=20,
callbacks=[LossAndErrorPrintingCallback()])
```
## Examples of Keras callback applications
The following section will guide you through creating a simple callback application.
### Early stopping at minimum loss
This first example shows how to create a `Callback` that stops Keras training when the minimum of the loss has been reached, by mutating the boolean attribute `model.stop_training`. Optionally, the user can provide a `patience` argument to specify how many epochs training should wait before it stops.
`tf.keras.callbacks.EarlyStopping` provides a more complete and general implementation.
```
import numpy as np
class EarlyStoppingAtMinLoss(tf.keras.callbacks.Callback):
    """Stop training when the loss is at its minimum, i.e. when it stops decreasing.

    Arguments:
        patience: Number of epochs to wait after the minimum has been reached.
            After this many epochs without improvement, training stops.
    """
    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights stores the weights at which the minimum loss occurs.
        self.best_weights = None
    def on_train_begin(self, logs=None):
        # The number of epochs waited while the loss is no longer the minimum.
        self.wait = 0
        # The epoch at which training stops.
        self.stopped_epoch = 0
        # Initialize the best loss as infinity.
        self.best = np.inf
    def on_epoch_end(self, epoch, logs=None):
        current = logs.get('loss')
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if the current result is better (lower).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print('Restoring model weights from the end of the best epoch.')
                self.model.set_weights(self.best_weights)
    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print('Epoch %05d: early stopping' % (self.stopped_epoch + 1))
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()])
```
### Learning rate scheduling
One thing that is commonly done when training a model is to change the learning rate as more epochs pass. The Keras backend exposes the `get_value` and `set_value` APIs, which can be used to read and update such variables. In this example we show how a custom callback can be used to dynamically change the learning rate.
Note: this is just an example implementation; `callbacks.LearningRateScheduler` and `keras.optimizers.schedules` contain more general implementations.
```
class LearningRateScheduler(tf.keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to a schedule.

    Arguments:
        schedule: a function that takes the epoch index
            (integer, indexed from 0) and the current learning rate
            as inputs and returns a new learning rate as output (float).
    """
    def __init__(self, schedule):
        super(LearningRateScheduler, self).__init__()
        self.schedule = schedule
    def on_epoch_begin(self, epoch, logs=None):
        if not hasattr(self.model.optimizer, 'lr'):
            raise ValueError('Optimizer must have a "lr" attribute.')
        # Get the current learning rate from the model's optimizer.
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        # Call the schedule function to get the scheduled learning rate.
        scheduled_lr = self.schedule(epoch, lr)
        # Set the value on the optimizer before the epoch begins.
        tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
        print('\nEpoch %05d: Learning rate is %6.4f.' % (epoch, scheduled_lr))

LR_SCHEDULE = [
    # (epoch to start, learning rate) tuples
    (3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001)
]

def lr_schedule(epoch, lr):
    """Helper function to retrieve the scheduled learning rate based on the epoch."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for i in range(len(LR_SCHEDULE)):
        if epoch == LR_SCHEDULE[i][0]:
            return LR_SCHEDULE[i][1]
    return lr
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), LearningRateScheduler(lr_schedule)])
```
### Standard Keras callbacks
Be sure to check out the existing Keras callbacks by [visiting the API docs](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/callbacks). Applications include logging to CSV, saving the model, visualizing in TensorBoard, and much more.
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# How to train Boosted Trees models in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimators/boosted_trees"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimators/boosted_trees.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/tree/master/site/en/tutorials/estimators/boosted_trees.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
</table>
This tutorial is an end-to-end walkthrough of training a Gradient Boosting model using decision trees with the `tf.estimator` API. Boosted Trees models are among the most popular and effective machine learning approaches for both regression and classification. It is an ensemble technique that combines the predictions from several (think 10s, 100s or even 1000s) tree models.
Boosted Trees models are popular with many machine learning practitioners as they can achieve impressive performance with minimal hyperparameter tuning.
## Load the titanic dataset
You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```
!pip install tf-nightly # Requires tf 1.13
from __future__ import absolute_import, division, print_function
import numpy as np
import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```
The dataset consists of a training set and an evaluation set:
* `dftrain` and `y_train` are the *training set*: the data the model uses to learn.
* The model is tested against the *eval set*, `dfeval`, and `y_eval`.
For training you will use the following features:
<table>
<tr>
<th>Feature Name</th>
<th>Description</th>
</tr>
<tr>
<td>sex</td>
<td>Gender of passenger</td>
</tr>
<tr>
<td>age</td>
<td>Age of passenger</td>
</tr>
<tr>
<td>n_siblings_spouses</td>
<td># siblings and partners aboard</td>
</tr>
<tr>
<td>parch</td>
<td># of parents and children aboard</td>
</tr>
<tr>
<td>fare</td>
<td>Fare passenger paid.</td>
</tr>
<tr>
<td>class</td>
<td>Passenger's class on ship</td>
</tr>
<tr>
<td>deck</td>
<td>Which deck passenger was on</td>
</tr>
<tr>
<td>embark_town</td>
<td>Which town passenger embarked from</td>
</tr>
<tr>
<td>alone</td>
<td>If passenger was alone</td>
</tr>
</table>
## Explore the data
Let's first preview some of the data and create summary statistics on the training set.
```
dftrain.head()
dftrain.describe()
```
There are 627 and 264 examples in the training and evaluation sets, respectively.
```
dftrain.shape[0], dfeval.shape[0]
```
The majority of passengers are in their 20's and 30's.
```
dftrain.age.hist(bins=20);
```
There are approximately twice as many male passengers as female passengers aboard.
```
dftrain.sex.value_counts().plot(kind='barh');
```
The majority of passengers were in the "third" class.
```
(dftrain['class']
.value_counts()
.plot(kind='barh'));
```
Most passengers embarked from Southampton.
```
(dftrain['embark_town']
.value_counts()
.plot(kind='barh'));
```
Females have a much higher chance of surviving vs. males. This will clearly be a predictive feature for the model.
```
ax = (pd.concat([dftrain, y_train], axis=1)
.groupby('sex')
.survived
.mean()
.plot(kind='barh'))
ax.set_xlabel('% survive');
```
## Create feature columns and input functions
The Gradient Boosting estimator can utilize both numeric and categorical features. Feature columns work with all TensorFlow estimators and their purpose is to define the features used for modeling. Additionally they provide some feature engineering capabilities like one-hot-encoding, normalization, and bucketization. In this tutorial, the fields in `CATEGORICAL_COLUMNS` are transformed from categorical columns to one-hot-encoded columns ([indicator column](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column)):
```
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
```
You can view the transformation that a feature column produces. For example, here is the output when using the `indicator_column` on a single example:
```
example = dftrain.head(1)
class_fc = one_hot_cat_column('class', ('First', 'Second', 'Third'))
print('Feature value: "{}"'.format(example['class'].iloc[0]))
print('One-hot encoded: ', fc.input_layer(dict(example), [class_fc]).numpy())
```
Additionally, you can view all of the feature column transformations together:
```
fc.input_layer(dict(example), feature_columns).numpy()
```
Next you need to create the input functions. These will specify how data will be read into our model for both training and inference. You will use the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas. This is suitable for smaller, in-memory datasets. For larger datasets, the tf.data API supports a variety of file formats (including [csv](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset)) so that you can process datasets that do not fit in memory.
```
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
# In memory training doesn't use batching.
dataset = dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
## Train and evaluate the model
Below you will do the following steps:
1. Initialize the model, specifying the features and hyperparameters.
2. Feed the training data to the model using the `train_input_fn` and train the model using the `train` function.
3. You will assess model performance using the evaluation set: in this example, the `dfeval` DataFrame. You will verify that the predictions match the labels from the `y_eval` array.
Before training a Boosted Trees model, let's first train a linear classifier (logistic regression model). It is best practice to start with a simpler model to establish a benchmark.
```
linear_est = tf.estimator.LinearClassifier(feature_columns)
# Train model.
linear_est.train(train_input_fn, max_steps=100)
# Evaluation.
results = linear_est.evaluate(eval_input_fn)
print('Accuracy : ', results['accuracy'])
print('Dummy model: ', results['accuracy_baseline'])
```
Next let's train a Boosted Trees model. For boosted trees, regression (`BoostedTreesRegressor`) and classification (`BoostedTreesClassifier`) are supported, along with using any twice differentiable custom loss (`BoostedTreesEstimator`). Since the goal is to predict a class - survive or not survive, you will use the `BoostedTreesClassifier`.
```
# Since data fits into memory, use entire dataset per layer. It will be faster.
# Above one batch is defined as the entire dataset.
n_batches = 1
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches)
# The model will stop training once the specified number of trees is built, not
# based on the number of steps.
est.train(train_input_fn, max_steps=100)
# Eval.
results = est.evaluate(eval_input_fn)
print('Accuracy : ', results['accuracy'])
print('Dummy model: ', results['accuracy_baseline'])
```
For performance reasons, when your data fits in memory, it is recommended to use the `boosted_trees_classifier_train_in_memory` function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the `tf.estimator.BoostedTreesClassifier` API shown above.
When using this method, you should not batch your input data, as the method operates on the entire dataset.
```
def make_inmemory_train_input_fn(X, y):
def input_fn():
return dict(X), y
return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
est = tf.contrib.estimator.boosted_trees_classifier_train_in_memory(
train_input_fn,
feature_columns)
print(est.evaluate(eval_input_fn)['accuracy'])
```
Now you can use the trained model to make predictions on a passenger from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, the `eval_input_fn` was defined using the entire evaluation set.
```
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
```
Finally you can also look at the receiver operating characteristic (ROC) of the results, which will give us a better idea of the tradeoff between the true positive rate and false positive rate.
```
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
```
# Software Engineering for Data Scientists
## *Manipulating Data with Python*
## CSE 583
## Today's Objectives
#### 1. Opening & Navigating the Jupyter Notebook
#### 2. Simple Math in the Jupyter Notebook
#### 3. Loading data with ``pandas``
#### 4. Cleaning and Manipulating data with ``pandas``
#### 5. Visualizing data with ``pandas`` & ``matplotlib``
## 1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course: the [Jupyter Notebook](http://jupyter.org).
We will walk through the following steps together:
1. Download [miniconda](https://conda.io/miniconda.html) (be sure to get Version 3.6) and install it on your system (hopefully you have done this before coming to class)
2. Use the ``conda`` command-line tool to update your package listing and install the IPython notebook:
Update ``conda``'s listing of packages for your system:
```
$ conda update conda
```
Install IPython notebook and all its requirements
```
$ conda install jupyter notebook
```
3. Navigate to the directory containing the course material. For example:
```
$ cd ~/courses/CSE583/
```
You should see a number of files in the directory, including these:
```
$ ls
...
Breakout-Simple-Math.ipynb
CSE599_Lecture_2.ipynb
...
```
4. Type ``jupyter notebook`` in the terminal to start the notebook
```
$ jupyter notebook
```
If everything has worked correctly, it should automatically launch your default browser
5. Click on ``Lecture-Python-And-Data-Autumn-2017.ipynb`` to open the notebook containing the content for this lecture.
With that, you're set up to use the Jupyter notebook!
## 2. Simple Math in the Jupyter Notebook
Now that we have the Jupyter notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.
Please open [Breakout-Simple-Math.ipynb](Breakout-Simple-Math.ipynb), find a partner, and make your way through that notebook, typing and executing code along the way.
## 3. Loading data with ``pandas``
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
### Python's Data Science Ecosystem
In addition to Python's built-in modules like the ``math`` module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are:
#### [``numpy``](http://numpy.org/): Numerical Python
Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data.
If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.
#### [``scipy``](http://scipy.org/): Scientific Python
Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.
We will not look closely at Scipy today, but we will use its functionality later in the course.
#### [``pandas``](http://pandas.pydata.org/): Labeled Data Manipulation in Python
Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a *Data Frame*.
If you've used the [R](http://rstats.org) statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.
#### [``matplotlib``](http://matplotlib.org): Visualization in Python
Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).
### Installing Pandas & friends
Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like ``conda``. All it takes is to run
```
$ conda install numpy scipy pandas matplotlib
```
and (so long as your conda setup is working) the packages will be downloaded and installed on your system.
### Downloading the data
Shell commands can be run from the notebook by preceding them with an exclamation point:
```
!ls
```
Uncomment this to download the data:
```
# !curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
```
### Loading Data with Pandas
Because we'll use it so much, we often import under a shortened name using the ``import ... as ...`` pattern:
Now we can use the ``read_csv`` command to read the comma-separated-value data:
*Note: strings in Python can be defined either with double quotes or single quotes*
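As a self-contained sketch of the loading step (a tiny inline sample stands in for the downloaded `pronto.csv`, and the column names here are assumptions based on the Pronto open-data release):

```python
import io
import pandas as pd

# In the notebook you would call pd.read_csv('pronto.csv') on the downloaded file;
# here a small inline sample lets the sketch run on its own.
csv_text = """trip_id,starttime,tripduration,gender,birthyear
1,10/13/2014 10:31:00 AM,985.9,Male,1960
2,10/13/2014 10:32:00 AM,926.4,Female,1988
"""
df = pd.read_csv(io.StringIO(csv_text))
```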
### Viewing Pandas Dataframes
The ``head()`` and ``tail()`` methods show us the first and last rows of the data
The ``shape`` attribute shows us the number of elements:
The ``columns`` attribute gives us the column names
The ``index`` attribute gives us the index names
The ``dtypes`` attribute gives the data types of each column:
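A quick sketch of these viewing methods and attributes on a toy frame (the column names are illustrative):

```python
import pandas as pd

# Toy stand-in for the trip data.
df = pd.DataFrame({'tripduration': [985.9, 926.4, 283.8],
                   'gender': ['Male', 'Female', 'Male']})

df.head(2)   # first two rows
df.tail(1)   # last row
df.shape     # (3, 2): number of rows and columns
df.columns   # Index(['tripduration', 'gender'], dtype='object')
df.index     # RangeIndex(start=0, stop=3, step=1)
df.dtypes    # data type of each column
```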
## 4. Manipulating data with ``pandas``
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing:
Mathematical operations on columns happen *element-wise*:
Columns can be created (or overwritten) with the assignment operator.
Let's create a *tripminutes* column with the number of minutes for each trip
More complicated mathematical operations can be done with tools in the ``numpy`` package:
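Putting the last few points together in a runnable sketch (assuming, as the Pronto data release suggests, that ``tripduration`` is in seconds):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'tripduration': [985.9, 926.4, 283.8]})  # seconds (assumed)

# Element-wise arithmetic on a column; assignment creates a new column.
df['tripminutes'] = df['tripduration'] / 60

# numpy functions also operate element-wise on columns.
df['log_minutes'] = np.log(df['tripminutes'])
```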
### Working with Times
One trick to know when working with columns of times is that Pandas ``DateTimeIndex`` provides a nice interface for working with columns of times.
For a dataset of this size, using ``pd.to_datetime`` and specifying the date format can make things much faster (from the [strftime reference](http://strftime.org/), we see that the pronto data has format ``"%m/%d/%Y %I:%M:%S %p"``).
(Note: you can also use ``infer_datetime_format=True`` in most cases to automatically infer the correct format, though due to a bug it doesn't work when AM/PM are present)
With it, we can extract the hour of the day, the day of the week, the month, and a wide range of other views of the time:
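A minimal sketch of the parsing step, using toy timestamps in the format noted above (the ``.dt`` accessor on the parsed Series exposes the same views as a ``DatetimeIndex``):

```python
import pandas as pd

times = pd.Series(['10/13/2014 10:31:00 AM', '10/13/2014 11:35:00 PM'])
parsed = pd.to_datetime(times, format='%m/%d/%Y %I:%M:%S %p')

# Extract the hour of day, day of week, month, etc.
hours = parsed.dt.hour          # 10 and 23
weekdays = parsed.dt.dayofweek  # Monday=0 ... Sunday=6
months = parsed.dt.month
```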
### Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at *value counts* and the basics of *group-by* operations.
#### Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The ``pandas.value_counts`` returns statistics on the unique values within each column.
We can use it, for example, to break down rides by gender:
Or to break down rides by age:
By default, the values rather than the index are sorted. Use ``sort=False`` to turn this behavior off:
We can explore other things as well: day of week, hour of day, etc.
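A small sketch of ``value_counts`` on a toy gender column:

```python
import pandas as pd

# Toy stand-in for the gender column of the trip data.
gender = pd.Series(['Male', 'Female', 'Male', 'Male'])

counts = gender.value_counts()                     # sorted by count, descending
unsorted_counts = gender.value_counts(sort=False)  # leave the counts unsorted
```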
### Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do))
```
from IPython.display import Image
Image('split_apply_combine.png')
```
The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
```
<data object>.groupby(<grouping values>).<aggregate>()
```
for example, we can group by gender and find the average of all numerical columns:
It's also possible to index the grouped object like it is a dataframe:
You can even group by multiple values: for example we can look at the trip duration by time of day and by gender:
The ``unstack()`` operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns:
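The group-by pattern above, sketched on a toy frame (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'gender': ['Male', 'Female', 'Male', 'Female'],
                   'hour': [8, 8, 17, 17],
                   'tripminutes': [10.0, 12.0, 30.0, 8.0]})

# Group by one column and average the numeric ones.
by_gender = df.groupby('gender').mean()

# Index the grouped object like a dataframe, and group by multiple keys.
grouped = df.groupby(['hour', 'gender'])['tripminutes'].mean()

# unstack() splits the inner index level out into columns.
table = grouped.unstack()
```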
## 5. Visualizing data with ``pandas``
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the ``matplotlib`` library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this *magic command* which configures the notebook to work well with plots:
```
%matplotlib inline
```
Now we can simply call the ``plot()`` method of any series or dataframe to get a reasonable view of the data:
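For example, on a toy series (the explicit ``Agg`` backend just keeps this sketch headless-safe; in the notebook, ``%matplotlib inline`` handles display):

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend for a non-notebook run
import pandas as pd

counts = pd.Series([120, 98, 143], index=['Mon', 'Tue', 'Wed'])
ax = counts.plot()  # a line plot by default
ax.set_title('rides per day (toy data)')
```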
### Adjusting the Plot Style
Matplotlib has a number of plot styles you can use. For example, if you like R you might use the ggplot style:
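Switching styles is a one-liner:

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend for a non-notebook run
from matplotlib import pyplot as plt

plt.style.use('ggplot')  # R-like aesthetics; see plt.style.available for others
```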
### Other plot types
Pandas supports a range of other plotting types; you can find these by using the ``<TAB>`` autocomplete on the ``plot`` method:
For example, we can create a histogram of trip durations:
If you'd like to adjust the x and y limits of the plot, you can use the ``set_xlim()`` and ``set_ylim()`` method of the resulting object:
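A sketch of the histogram and axis limits on toy trip durations:

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend for a non-notebook run
import pandas as pd

minutes = pd.Series([3, 5, 5, 8, 12, 30, 45])
ax = minutes.plot(kind='hist', bins=10)
ax.set_xlim(0, 60)
ax.set_ylim(0, 5)
```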
## Breakout: Exploring the Data
Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a ``groupby``, and find the appropriate aggregation to count the number in each group).
Split this plot by gender. Do you see any seasonal ridership patterns by gender?
Split this plot by user type. Do you see any seasonal ridership patterns by usertype?
Repeat the above three steps, counting the number of rides by time of day rather than by month.
Are there any other interesting insights you can discover in the data using these tools?
### Looking Forward to Homework
In the homework this week, you will have a chance to apply some of these patterns to a brand new (but closely related) dataset.
```
#hide
from fastai.tabular.all import *
from fgcnn.data import *
from fgcnn.model import *
from fgcnn.train import *
```
# FGCNN
> Implementation of Feature Generation by Convolutional Neural Networks using `fast.ai`.
Abstract taken from [paper](https://arxiv.org/abs/1904.04447)
> Click-Through Rate prediction is an important task in recommender systems, which aims to estimate the probability of a user to click on a given item. Recently, many deep models have been proposed to learn low-order and high-order feature interactions from original features. However, since useful interactions are always sparse, it is difficult for DNN to learn them effectively under a large number of parameters. In real scenarios, artificial features are able to improve the performance of deep models (such as Wide & Deep Learning), but feature engineering is expensive and requires domain knowledge, making it impractical in different scenarios. Therefore, it is necessary to augment feature space automatically. In this paper, We propose a novel Feature Generation by Convolutional Neural Network (FGCNN) model with two components: Feature Generation and Deep Classifier. Feature Generation leverages the strength of CNN to generate local patterns and recombine them to generate new features. Deep Classifier adopts the structure of IPNN to learn interactions from the augmented feature space. Experimental results on three large-scale datasets show that FGCNN significantly outperforms nine state-of-the-art models. Moreover, when applying some state-of-the-art models as Deep Classifier, better performance is always achieved, showing the great compatibility of our FGCNN model. This work explores a novel direction for CTR predictions: it is quite useful to reduce the learning difficulties of DNN by automatically identifying important features.
- Extracting effective and efficient features for this task is very expensive and requires domain expertise.
- This work proposes to use FGCNN ( Feature Generation by CNN ) to solve this problem.
- CNN is used to extract local patterns which are then combined to generate new features.

- It is difficult for a DNN to learn feature interactions because useful interactions are very sparse. E.g., suppose we want to calculate the probability of a user downloading a game given features like `age`, `country`, `gender`, `city`, and it turns out that only `age` and `country` are useful in predicting the outcome variable while the rest is noise. It becomes very hard for the DNN to learn that the embeddings for `gender` and `city` must be `0`.
- A CNN alone cannot solve this problem because there is no translational invariance among the features: ( `age`, `gender`, `country`, `city` ) is equivalent to ( `age`, `gender`, `city`, `country` ). That is why the author proposes to use CNNs together with MLPs, as shown in the following figure

### Input dataset requirements
- For most of the ctr prediction tasks, data is collected in the form of `multi-field` categorical form such that each feature could be transformed into high dimensional sparse ( binary ) vector via one hot encoding as an example.
- For example, (Gender=Male, Height=175, Age=18, Name=Bob) can be represented as:

### Model structure
- Feature Embedding Layer : Every categorical field is passed through an embedding layer before passing to feature generation module.
- Feature Generation Layer: ( CNN + Recombination ) layer. CNN extracts useful neighbor patterns and recombination layer generates global feature interactions.
- Deep Classifier Layer: IPNN model shown below

**We could use any custom model ( FM, DNN, IPNN, DeepFM etc. ) in the deep classifier layer based on the task at hand.**
### Loss Function
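The usage example below trains with `BCELossFlat`, i.e. binary cross-entropy (log loss), the standard objective for CTR prediction; assuming the paper follows this convention, the loss over $N$ samples is

$$ \mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\,y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\Big] $$

where $y_i \in \{0, 1\}$ is the observed click label and $\hat{y}_i$ the predicted click probability.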

### Experiments

**We have tried to use the [Avazu]() dataset for our experiments.**
## Install
`pip install git+https://github.com/numb3r33/fgcnn.git`
## How to use
```
#slow
# prepare data loaders
dls = get_dl()

# prepare embedding matrix
emb_szs = get_emb_sz(dls.train_ds, k=40) # `k` - embedding_size ( 40 ) mentioned in the paper for this dataset.

# prepare model architecture
m = FGCNN(emb_szs=emb_szs,
          conv_kernels=[14, 16, 18, 20],
          kernels=[3, 3, 3, 3],
          dense_layers=[4096, 2048, 1024, 512],
          h=7,
          hp=2
          )

# create tabular learner
learn = TabularLearner(dls, m, loss_func=BCELossFlat(), opt_func=ranger)

# train and validate (`lr` is the learning rate, e.g. found via `learn.lr_find()`)
learn = train(dls, m, lr, loss_func=BCELossFlat(), n_epochs=1)
```
## Example
#### To be able to run the program, import all the needed packages first.
```
from tabulate import tabulate
import csv
import geopandas
import numpy as np
import pandas as pd
import os
import platform
import sqlite3
import gdal
from gdal import ogr
from shapely.geometry import Point
```
#### In this second step, the table ``plants`` in the ``Pflanzendaten`` database is created.
```
connection = sqlite3.connect("Pflanzendaten.db")
cursor = connection.cursor()
sql_command = """
CREATE TABLE IF NOT EXISTS plants (
species VARCHAR(255),
name VARCHAR(255),
nativ BOOLEAN,
endangered VARCHAR(255),
habitat VARCHAR(255),
waterdepthmin INTEGER(255),
waterdepthmax INTEGER(255),
rootdepth INTEGER(255),
groundwatertablechange VARCHAR(255),
floodheightmax INTEGER(255),
floodloss REAL(255),
floodduration INTEGER(255),
PRIMARY KEY (species, name, habitat)
);"""
cursor.execute(sql_command)
```
#### The inputquestion() function is used to insert data into the table
```
def inputquestion():
    """function that lets the user put data into the database
    the function provides 2 options for data input: if option 1 is chosen via console input "1", the user can provide the name
    of a csv file. If option 2 is chosen, the user can add a single row via sql command. If neither of those two options is
    chosen, the function will print a string in the python console
    Returns:
        string in console if none of the two options above is chosen
    """
    print('Enter 1 to input data from csv file\nEnter 2 to input data via sql command')
    src = int(input('Enter here:'))
    if src == 1:
        with open(input('enter csv-filename') + '.csv') as csvfile:
            csv_reader_object = csv.reader(csvfile, delimiter=',')
            with sqlite3.connect("Pflanzendaten.db") as connection:
                cursor = connection.cursor()
                sql_command = """
                INSERT INTO plants (species,name,nativ,endangered,habitat,waterdepthmin,waterdepthmax,rootdepth,groundwatertablechange,floodheightmax,floodloss,floodduration)
                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """
                cursor.executemany(sql_command, csv_reader_object)
    elif src == 2:
        connection = sqlite3.connect("Pflanzendaten.db")
        cursor = connection.cursor()
        sql_command = input("""Insert sql command""")
        cursor.execute(sql_command)
        connection.commit()
    else:
        print('only able to import data to table using csv file or sql command')
```
#### search_db_via_query() allows using queries to search the database for information
```
def search_db_via_query(query):
    """Function that checks database for matching entries with user input.
    The function takes the user input and adds it to the used sql command to search for matching entries in the provided database,
    if there are matching entries these will be printed in the python console
    Args:
        query (string): sql WHERE clause, provided by the user
    Returns:
        table entries matching with user input
    """
    connection = sqlite3.connect("Pflanzendaten.db")
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM plants WHERE " + query)
    content = cursor.fetchall()
    print(tabulate(content, headers=['species', 'name', 'nativ', 'endangered', 'habitat', 'waterdepthmin', 'waterdepthmax', 'rootdepth', 'groundwatertablechange', 'floodheightmax', 'floodloss', 'floodduration']))
    print('Status 1 equals nativ')
    connection.close()
```
#### To fill the table with data, run inputquestion() and select which option you want to choose. Note: the name of the provided csv file is "plantdata"; if you want to try inserting data by using an sql command, just type it directly without using quotation marks.
```
inputquestion()
```
#### After successfully inserting data into the database, you can search for plants by running search_db_via_query() and providing it with a habitat name (Alpenvorland, Niederrheinisches Tiefland or Oberrheinisches Tiefland). Note: unfortunately, due to Jupyter notebook the output overlaps and as a result looks a bit rough.
```
habitat = input('Enter name of habitat\n')
query = "habitat = '" + habitat + "'"
search_db_via_query(query)
```
#### Defining the class "Plant" enables the usage of the habitat_search() function
```
class Plant:
    """Container for the parameters of a single plant species."""
    def __init__(self, species, name, nativ, habitat, endangered, waterdepthmin, waterdepthmax, rootdepth, groundwatertablechange, floodheightmax, floodloss, flooddurationmax):
        """
        Args:
            species (str): scientific plant name
            name (str): common german plant name
            nativ (bool): equals 1 if the plant is nativ, 0 if its not
            habitat (str): habitat name of the plant
            endangered (str): information about the endangerment status of the plant
            waterdepthmin (int): minimal required water depth
            waterdepthmax (int): maximum depth to groundwater
            rootdepth (int): average root depth
            groundwatertablechange (varchar): maximum change in groundwater table that the plant can survive
            floodheightmax (int): maximum flood height the plant can survive
            floodloss (float): losses during maximum flood height and flooding days that occured in plant population
            flooddurationmax (int): maximum number of flooding days the plant can survive
        """
        self.species = species
        self.name_german = name
        self.status = nativ
        self.is_endangered = endangered
        self.habitat_in_germany = habitat
        self.minimum_waterdepth = waterdepthmin
        self.maximum_waterdepth = waterdepthmax
        self.average_root_depth = rootdepth
        self.change_of_groundwatertable = groundwatertablechange
        self.critical_flood_height = floodheightmax
        self.plant_mortality_during_critical_flooding = floodloss
        self.critical_flood_duration = flooddurationmax

    def print_habitat(self):
        """
        prints the plant parameters as string in console
        Returns:
            String in console
        """
        print('\nscientific name:\n{0}'
              '\ncommon german name:\n{1}'
              '\nstatus:\n{2}'
              '\nendangered?:\n{3}'
              '\nhabitat:\n{4}'
              '\nminimum waterdepth:\n{5}'
              '\nmaximum waterdepth:\n{6}'
              '\naverage root depth:\n{7}'
              '\nchange of groundwatertable:\n{8}'
              '\ncritical flood height:\n{9}'
              '\nmortality during critical flooding:\n{10}'
              '\ncritical flood duration:\n{11}'.format(self.species,
                                                        self.name_german,
                                                        self.status,
                                                        self.is_endangered,
                                                        self.habitat_in_germany,
                                                        self.minimum_waterdepth,
                                                        self.maximum_waterdepth,
                                                        self.average_root_depth,
                                                        self.change_of_groundwatertable,
                                                        self.critical_flood_height,
                                                        self.plant_mortality_during_critical_flooding,
                                                        self.critical_flood_duration))
```
#### habitat_search() is used to search the csv file for data
```
def habitat_search(column, entry):
    """Function searches in csv file for vegetation matching the user input.
    The function uses the console input to search for matching entries in the provided csv file,
    if there are matching entries the function print_habitat gets called to print the information in the python console.
    Args:
        column (str): column name in the .csv file
        entry (str): entry to search for in that column
    Returns:
        String in console
    """
    if platform.system() == 'Linux':
        df = pd.read_csv('plantdata.csv')
    else:
        df = pd.read_csv('plantdata.csv', encoding='unicode_escape')
    df1 = df.dropna().reset_index(drop=True)

    def search(column, entry, df):
        values = df.to_numpy()
        col = df[column]
        for i in range(len(col)):
            if col[i] == entry:
                plant = Plant(values[i, 0], values[i, 1], values[i, 2], values[i, 3], values[i, 4], values[i, 5], values[i, 6], values[i, 7], values[i, 8], values[i, 9], values[i, 10], values[i, 11])
                plant.print_habitat()

    search(column, entry, df1)
```
#### search_by_habitat() asks for the user input and calls habitat_search().
```
def search_by_habitat():
    """Function that enables the user to provide habitat input in console.
    The function asks the user to provide the habitat name he wants to search for,
    afterwards the input is given to the habitat_search() function and habitat_search() gets called.
    Returns:
        String in console to let the user know what the Status entries mean
    """
    habitat = input('Enter name of habitat\n')
    habitat_search('habitat', habitat)
    print('Status 1 equals nativ')
```
#### To search for vegetation in the csv file, run search_by_habitat() and provide it with a habitat name (Alpenvorland, Niederrheinisches Tiefland or Oberrheinisches Tiefland).
```
search_by_habitat()
```
#### point_in_bound() checks if the provided coordinates are inside of the used shapefile
```
def point_in_bound(filename, x, y, area):
    """Function that checks if the coordinates provided by the user are in bound of the shapefile polygon.
    If the provided coordinates are out of bounds, a string will be printed in the console to let the user know,
    if they are matching one of the shapefiles, search_db_via_query() gets called.
    Args:
        filename (str): name of the shapefile
        x (float): x - coordinate
        y (float): y - coordinate
        area (str): name of the study area
    Returns:
        string to console
    """
    file_shape = geopandas.read_file(filename)
    polygon = list(file_shape.geometry)[0]
    point = Point(x, y)
    if polygon.contains(point):
        query = "habitat = '" + area + "'"
        search_db_via_query(query)
        print('Enter 1 if you want elevation data for the coordinates\nEnter 2 if you dont want elevation data')
        src = int(input('Enter here:'))
        if src == 1:
            elevation(x, y)
        elif src == 2:
            print('done')
    else:
        print('\ncoordinates out of \n' + area + '\nplease check provided shapefile for suitable coordinates\n')
```
#### The elevation() function transforms the coordinates into rasterdata to provide information about the elevation
```
def elevation(x, y):
    """Function used to get information about elevation at the provided coordinates.
    Args:
        x (float): x - coordinate
        y (float): y - coordinate
    Returns:
        elevation data for coordinate input in console
    """
    file = os.path.join(os.path.abspath(".."), "Shape", "Shape.vrt")
    layer = gdal.Open(file)
    gt = layer.GetGeoTransform()
    rasterx = int((x - gt[0]) / gt[1])
    rastery = int((y - gt[3]) / gt[5])
    print('elevation =', layer.GetRasterBand(1).ReadAsArray(rasterx, rastery, 1, 1)[0][0], 'm above sea level')
```
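A worked example of the geotransform inversion used in `elevation()`. GDAL's geotransform tuple is `(origin_x, pixel_width, 0, origin_y, 0, pixel_height)`; the values below are illustrative, not from the tutorial's raster:

```
# Illustrative geotransform: origin at (1000, 5000), 10 m pixels,
# pixel_height negative because raster rows run north-to-south.
gt = (1000.0, 10.0, 0.0, 5000.0, 0.0, -10.0)

x, y = 1234.0, 4321.0
rasterx = int((x - gt[0]) / gt[1])   # column index: (1234 - 1000) / 10
rastery = int((y - gt[3]) / gt[5])   # row index: (4321 - 5000) / -10
print(rasterx, rastery)  # 23 67
```

These integer indices are exactly what `ReadAsArray(rasterx, rastery, 1, 1)` expects to read a single pixel.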
#### search_by_coordinates() enables the user to provide coordinates and transforms them into float values, afterwards point_in_bound() gets called for all 3 shapefiles
```
def search_by_coordinates():
    """Function that lets the user input coordinates.
    After asking the user to input x and y coordinates, point_in_bound(..) gets called for the 3 provided shapefiles.
    Afterwards the user gets asked if he wants to receive elevation data for the input coordinates.
    """
    print('CRS used is EPSG:3857\nfor reference check https://epsg.io/3857')
    x = float(input('Enter x coordinate\n'))
    y = float(input('Enter y coordinate\n'))
    point_in_bound(os.path.join(os.path.abspath(".."), "Shape", "prealpinebavaria.shp"), x, y, 'Alpenvorland')
    point_in_bound(os.path.join(os.path.abspath(".."), "Shape", "oberrheinmaintiefland.shp"), x, y, 'Oberrheinisches Tiefland')
    point_in_bound(os.path.join(os.path.abspath(".."), "Shape", "Tiefland.shp"), x, y, 'Niederrheinisches Tiefland')
```
#### To actually search for vegetation by coordinates, run search_by_coordinates() and enter x and y coordinates. Note: as the function will tell you in the beginning, you need to use coordinates from the 3857 CRS. The link provided will lead you to a website where you can copy them.
```
search_by_coordinates()
```
#### The function question() provides an easy way to use the tool, asking directly which search option you want to choose
```
def question():
    """Function to let the user decide if he wants to search by habitat in csv file, search by habitat in database or search by coordinates.
    The function prints a string in the console to ask the user if he wants to search by putting in coordinates or the name of the habitat,
    furthermore it is asking the user if he wants to search by the name of the habitat in the provided csv file or database.
    If option 1 is chosen, the user is asked for a habitat name before calling search_db_via_query()
    Console input:
        1 (int): calls search_db_via_query()
        2 (int): calls search_by_coordinates()
        3 (int): calls search_by_habitat()
    Returns:
        text string 'no data' if the input is anything else than 1, 2 or 3
    """
    print('Enter 1 to search database by habitat with detailed information\nEnter 2 to search database by coordinates\nEnter 3 to search by habitat in csv file for a quick overview without detail')
    print('habitat search options so far:\n Alpenvorland, Niederrheinisches Tiefland, Oberrheinisches Tiefland')
    src = int(input('Enter here:'))
    if src == 1:
        habitat = input('Enter name of habitat\n')
        query = "habitat = '" + habitat + "'"
        search_db_via_query(query)
    elif src == 2:
        search_by_coordinates()
    elif src == 3:
        search_by_habitat()
    else:
        print('no data')
```
#### Run question() if you want to choose one of the search functions.
```
question()
```
# Chemkin Input and Output
This notebook describes pmutt's functionality to read and write Chemkin files. We will use the NH3 formation mechanism as a case study.
## Topics Covered
- Read species *ab-initio* data, reactions, and catalyst sites from a spreadsheet
- Write the thermdat, gas.inp, surf.inp, T_flow.inp, EAg.inp, EAs.inp, tube_mole.inp files
## Input Spreadsheet
All the data will be imported from the [`./inputs/NH3_Input_Data.xlsx`](https://github.com/VlachosGroup/pmutt/blob/master/docs/source/examples_jupyter/chemkin_io/inputs/NH3_Input_Data.xlsx) file. There are four sheets:
1. `cat_sites` contains catalyst site properties for the adsorbed species
2. `refs` contains *ab-initio* and experimental data for a handful of gas species to calculate references
3. `species` contains *ab-initio* data for each specie
4. `reactions` contains elementary steps
First, we change the working directory to the location of the Jupyter notebook.
```
import os
from pathlib import Path
# Find the location of Jupyter notebook
# Note that normally Python scripts have a __file__ variable but Jupyter notebook doesn't.
# Using pathlib can overcome this limitation
try:
    notebook_path = os.path.dirname(__file__)
except NameError:
    notebook_path = Path().resolve()
os.chdir(notebook_path)
excel_path = './inputs/NH3_Input_Data.xlsx'
```
Below is a helper function to print tables easily.
```
import pandas as pd
from IPython.display import display
def disp_data(io, sheet_name):
    data = pd.read_excel(io=io, sheet_name=sheet_name, skiprows=[1])
    data = data.fillna(' ')
    display(data)
```
**Catalytic Sites**
```
disp_data(io=excel_path, sheet_name='cat_sites')
```
**References**
```
disp_data(io=excel_path, sheet_name='refs')
```
**Species**
```
disp_data(io=excel_path, sheet_name='species')
```
**Reactions**
```
disp_data(io=excel_path, sheet_name='reactions')
```
## Reading data
Before we can initialize our species, we need the catalytic sites and the references.
### Reading Catalytic Sites
```
import os
from pprint import pprint
from pathlib import Path
from pmutt.io.excel import read_excel
from pmutt.chemkin import CatSite
cat_site_data = read_excel(io=excel_path, sheet_name='cat_sites')[0]
cat_site = CatSite(**cat_site_data)
# Print the properties of the catalyst site
pprint(cat_site.to_dict())
```
### Reading reference species
```
from pmutt.empirical.references import Reference, References
references_data = read_excel(io=excel_path, sheet_name='refs')
# Convert data to Reference objects and put them in a list
refs_list = [Reference(**ref_data) for ref_data in references_data]
# Assign the Reference objects to a References object so offsets can be calculated
refs = References(references=refs_list)
# Print out the offsets calculated
print(refs.offset)
```
### Reading species
```
from pmutt.empirical.nasa import Nasa
# Range of data to fit the Nasa polynomials
T_low = 298. # K
T_high = 800. # K
species_data = read_excel(io=excel_path, sheet_name='species')
species = []
for specie_data in species_data:
    specie = Nasa.from_model(T_low=T_low, T_high=T_high, references=refs, **specie_data)
    # If the species is a surface species, assign the catalyst site specified above
    if specie.phase.lower() == 's':
        specie.cat_site = cat_site
        specie.n_sites = 1
    species.append(specie)
```
The warning above is typical when empirical objects are fit to `StatMech` objects with the `placeholder` preset.
### Reading reactions
```
from pmutt import pmutt_list_to_dict
from pmutt.reaction import ChemkinReaction, Reactions
# Convert list of Nasa polynomials to dictionary of Nasa polynomials
species_dict = pmutt_list_to_dict(species)
reactions_data = read_excel(io=excel_path, sheet_name='reactions')
reactions_list = []
for reaction_data in reactions_data:
    reaction = ChemkinReaction.from_string(species=species_dict, **reaction_data)
    reactions_list.append(reaction)
reactions = Reactions(reactions=reactions_list)
```
## Writing Chemkin files
Now that we have all the required objects, we can write the output files. All outputs can be found in the [./outputs folder](https://github.com/VlachosGroup/pmutt/blob/master/docs/source/examples_jupyter/chemkin_io/outputs).
### Writing thermdat
```
from pmutt.io.thermdat import write_thermdat
write_thermdat(filename='./outputs/thermdat', nasa_species=species)
```
The thermdat file can be returned as a string by omitting ``filename``.
```
thermdat_str = write_thermdat(nasa_species=species)
print(thermdat_str)
```
### Writing gas.inp and surf.inp
```
from pmutt.io import chemkin as ck_io
ck_io.write_gas(filename='./outputs/gas.inp',
                nasa_species=species,
                reactions=reactions,
                act_method_name='get_G_act',
                act_unit='kcal/mol')
ck_io.write_surf(filename='./outputs/surf.inp',
                 reactions=reactions,
                 act_method_name='get_G_act',
                 ads_act_method='get_H_act',
                 act_unit='kcal/mol')
```
<a id='act_method_name_explanation'></a>
Note that `act_method_name` is 'get_G_act'. We use this formalism here since we do not include entropic effects in the preexponential factor.
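For context, this choice follows from transition-state theory (general background, not a pmutt-specific result): the rate constant can be written as

$$ k = \frac{k_B T}{h}\, e^{-\Delta G^\ddagger / RT} = \frac{k_B T}{h}\, e^{\Delta S^\ddagger / R}\, e^{-\Delta H^\ddagger / RT} $$

so when the preexponential factor $k_B T / h$ does not already carry the entropic term $e^{\Delta S^\ddagger / R}$, putting the free energy of activation $\Delta G^\ddagger$ in the exponential keeps the entropy of activation accounted for.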
Similarly to ``write_thermdat``, the gas.inp and surf.inp files can be written as strings by omitting the filename. Note there are no gas-phase reactions.
```
gas_file = ck_io.write_gas(nasa_species=species,
                           reactions=reactions,
                           act_method_name='get_G_act',
                           act_unit='kcal/mol')
print(gas_file)
surf_file = ck_io.write_surf(reactions=reactions,
                             act_method_name='get_G_act',
                             ads_act_method='get_H_act',
                             act_unit='kcal/mol')
print(surf_file)
```
### Writing T_flow.inp
```
# Conditions used to write files
T = [300., 400., 500.] # Temperature in K
P = [1., 2., 3.] # Pressure in atm
Q = [10., 20., 30.] # Standard volumetric flow rate in cm3
abyv= [100., 50., 25.] # Catalyst surface area to reactor volume in 1/cm
ck_io.write_T_flow(filename='./outputs/T_flow.inp', T=T, P=P, Q=Q, abyv=abyv)
```
As shown before, we can return T_flow as a string by omitting the filename.
```
T_flow_str = ck_io.write_T_flow(T=T, P=P, Q=Q, abyv=abyv)
print(T_flow_str)
```
### Writing EAg.inp and EAs.inp
```
# Convert T_flow inputs into list of dictionaries that can be used by write_EA.
# In the future, this will be replaced by a function
conditions = []
for T_i, P_i, Q_i, abyv_i in zip(T, P, Q, abyv):
    condition = {
        'T': T_i,
        'P': P_i,
        'Q': Q_i,
        'abyv': abyv_i}
    conditions.append(condition)
ck_io.write_EA(filename='./outputs/EAs.inp',
               reactions=reactions,
               write_gas_phase=False,
               act_method_name='get_GoRT_act',
               ads_act_method='get_HoRT_act',
               conditions=conditions)
ck_io.write_EA(filename='./outputs/EAg.inp',
               reactions=reactions,
               write_gas_phase=True,
               act_method_name='get_GoRT_act',
               conditions=conditions)
```
Reminder that we use `act_method_name` as 'get_GoRT_act' for the [reason described above](#act_method_name_explanation).
```
EAg_file = ck_io.write_EA(reactions=reactions,
                          write_gas_phase=True,
                          act_method_name='get_GoRT_act',
                          conditions=conditions)
print(EAg_file)
EAs_file = ck_io.write_EA(reactions=reactions,
                          write_gas_phase=False,
                          act_method_name='get_GoRT_act',
                          ads_act_method='get_HoRT_act',
                          conditions=conditions)
print(EAs_file)
```
### Writing tube_mole.inp
```
import numpy as np
# Generating a list of conditions to input
mole_frac_conditions = []
for x_N2 in np.linspace(0., 0.25, 3):
    x_H2 = x_N2*3.
    x_NH3 = 1. - x_N2 - x_H2
    mole_fractions = {'N2': x_N2, 'H2': x_H2, 'NH3': x_NH3, 'RU(S)': 1.}
    mole_frac_conditions.append(mole_fractions)
# Write the tube_mole.inp file
ck_io.write_tube_mole(mole_frac_conditions=mole_frac_conditions,
                      nasa_species=species,
                      filename='./outputs/tube_mole.inp')
```
Below is the output of the function.
```
tube_mole_str = ck_io.write_tube_mole(mole_frac_conditions=mole_frac_conditions,
                                      nasa_species=species)
print(tube_mole_str)
```
```
# do the required imports
import pandas as pd
import numpy as np
import sys
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
# imports sklearn
from sklearn.metrics import accuracy_score
from sklearn import neighbors
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.utils.multiclass import unique_labels
from tqdm import tqdm, tqdm_notebook
filename = r'CElegans_384Well_A1-B24_GFP_TL_analyzed_TOC_CElegans_SingleObjects_Labels_cleaned.csv'
def check_separator(csvfile):
    # let pandas sniff the separator, then read it back from the parser
    reader = pd.read_csv(csvfile, sep=None, engine='python', iterator=True)
    sep = reader._engine.data.dialect.delimiter
    reader.close()
    return sep

def convert_dec_sep(df, startcol):
    # convert decimal commas to points for all columns from startcol onwards
    for id in range(startcol, len(df.columns)):
        try:
            df.iloc[:, id] = df.iloc[:, id].str.replace(',', '.').astype('float')
        except Exception:
            print('No correction of types possible for column: ', df.columns[id])
    return df
# check the used separator
sep = check_separator(filename)
# read the CSV table containing all the single object data
df_single = pd.read_csv(filename, sep=sep)
# get headers and number of measurement parameters
headers = df_single.head(0)
df_single[:3]
# rename table columns for better readability
cols = {"ID::ID!!I": "ID",
        "ImageIndexScene::Image Index Scene!!I": "S",
        "ImageSceneContainerName::Image Scene Container Name ": "well"}
# rename columns
df_single.rename(columns=cols, inplace=True)
df_single[:3]
# remove columns not needed for classification
df_class = df_single.drop(columns=['ID', 'S', 'well'], inplace=False)
# remove rows with units from dataframe
df_class.drop([0], inplace=True)
# convert decimal separators to point "."
df_class = convert_dec_sep(df_class, 0)
df_class[:3]
# define label array from labels
label = np.zeros((df_class.shape[0],), dtype=int)
index = -1
for t in df_class['Label GT']:
    index += 1
    if t == 'D':
        label[index] = 0
    if t == 'L':
        label[index] = 1
    if t == 'U':
        label[index] = 2
# define class names
class_names = ['dead', 'live', 'undefined']
# remove type column
df_train = df_class.drop(columns=['Label GT'], inplace = False)
# get feature names
feature_list = list(df_train.columns)
# show dataset
print(df_train.shape)
df_train[:3]
extra_clean = False
if extra_clean:
    # optional data cleaning
    df_train.drop(df_train[df_train.feretmax < 15.29].index, inplace=True)
    print(df_train.shape)
df_train[:3]
# get data as numpy array from dataframe
data = df_train.values
print(type(data), data.shape)
# split data for training
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.5, random_state=0)
# scale data
scale = True
if scale:
    # Feature Scaling
    sc = StandardScaler()
    data_train = sc.fit_transform(data_train)
    data_test = sc.transform(data_test)
# instantiate classifier
classifier = RandomForestClassifier(n_estimators=10,
                                    bootstrap=True,
                                    random_state=20,
                                    verbose=0)
# build a forest of trees from the training set (data, label)
classifier.fit(data_train, label_train)
# predict class for data
label_pred = classifier.predict(data_test)
# show results
confm_notnormed = confusion_matrix(label_test, label_pred)
confm_normed = np.round(confm_notnormed.astype('float') / confm_notnormed.sum(axis=1)[:, np.newaxis], 2)
class_report = classification_report(label_test, label_pred)
acc_score = accuracy_score(label_test, label_pred)
fi = classifier.feature_importances_
print(confm_notnormed)
print('------------------------------------')
print(confm_normed)
print('------------------------------------')
print(class_report)
print('------------------------------------')
print('Accuracy Score : ', acc_score)
feature_imp = {'Features': feature_list, 'Importance': fi}
fimp = pd.DataFrame(data=feature_imp)
# show feature importance
fimp
def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'

    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data
    classes = classes[unique_labels(y_true, y_pred)]
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    #ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    fontsize=16,
                    color="red" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.2)
    plt.colorbar(im, cax=cax)
    return ax
# Plot non-normalized confusion matrix
plot_confusion_matrix(label_test, label_pred, classes=np.asarray(class_names), normalize=False,
                      title='Confusion matrix')

# Plot normalized confusion matrix
plot_confusion_matrix(label_test, label_pred, classes=np.asarray(class_names), normalize=True,
                      title='Confusion matrix normalized')
rs_max = 20
estimator_max = 20
rs_estimator_matrix = np.zeros((rs_max, estimator_max-1), dtype=float)
for n in tqdm_notebook(range(1, estimator_max, 1)):
    for rstate in range(0, rs_max, 1):
        # instantiate classifier
        classifier = RandomForestClassifier(n_estimators=n,
                                            bootstrap=True,
                                            random_state=rstate,
                                            verbose=0)
        # build a forest of trees from the training set (data, label)
        classifier.fit(data_train, label_train)
        # predict class for data
        label_pred = classifier.predict(data_test)
        # store the accuracy for this (random_state, n_estimators) pair
        rs_estimator_matrix[rstate, n-1] = accuracy_score(label_test, label_pred)
fig, ax = plt.subplots(figsize=(8, 8))
im = ax.imshow(rs_estimator_matrix, interpolation='nearest', cmap=plt.cm.Blues)
ax.set_xlabel('n_estimators')
ax.set_ylabel('random_state')
ax.set_title('Accuracy')
fig.tight_layout()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.2)
plt.colorbar(im, cax=cax)
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex AI: Vertex AI Migration: AutoML Image Object Detection
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ5%20Vertex%20SDK%20AutoML%20Image%20Object%20Detection.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ5%20Vertex%20SDK%20AutoML%20Image%20Object%20Detection.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
### Dataset
The dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
    ! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
#### Location of Cloud Storage training data
Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
```
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
```
#### Quick peek at your data
This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
## Create a dataset
### [datasets.create-dataset-api](https://cloud.google.com/vertex-ai/docs/datasets/create-dataset-api)
### Create the Dataset
Next, create the `Dataset` resource using the `create` method for the `ImageDataset` class, which takes the following parameters:
- `display_name`: The human readable name for the `Dataset` resource.
- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.
- `import_schema_uri`: The data labeling schema for the data items.
This operation may take several minutes.
```
dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
```
*Example Output:*
INFO:google.cloud.aiplatform.datasets.dataset:Creating ImageDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create ImageDataset backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/1941426647739662336
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:To use this ImageDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.ImageDataset('projects/759209241365/locations/us-central1/datasets/2940964905882222592')
INFO:google.cloud.aiplatform.datasets.dataset:Importing ImageDataset data: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:Import ImageDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/8100099138168815616
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
projects/759209241365/locations/us-central1/datasets/2940964905882222592
## Train a model
### [training.automl-api](https://cloud.google.com/vertex-ai/docs/training/automl-api)
### Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
#### Create training pipeline
An AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the `TrainingJob` resource.
- `prediction_type`: The type task to train the model for.
- `classification`: An image classification model.
- `object_detection`: An image object detection model.
- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).
- `model_type`: The type of model for deployment.
- `CLOUD`: Deployment on Google Cloud
- `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud.
- `CLOUD_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on Google Cloud.
- `MOBILE_TF_VERSATILE_1`: Deployment on an edge device.
- `MOBILE_TF_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on an edge device.
- `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.
- `base_model`: (optional) Transfer learning from existing `Model` resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job.
```
dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag)
```
*Example output:*
<google.cloud.aiplatform.training_jobs.AutoMLImageTrainingJob object at 0x7f806a6116d0>
#### Run the training pipeline
Next, you run the DAG to start the training job by invoking the method `run`, with the following parameters:
- `dataset`: The `Dataset` resource to train the model.
- `model_display_name`: The human readable name for the trained model.
- `training_fraction_split`: The fraction of the dataset to use for training.
- `test_fraction_split`: The fraction of the dataset to use for test (holdout data).
- `validation_fraction_split`: The fraction of the dataset to use for validation.
- `budget_milli_node_hours`: (optional) Maximum training time, specified in milli node hours (1,000 milli node hours = 1 node hour).
- `disable_early_stopping`: If `True`, the entire budget is used; by default (`False`), training may stop before the budget is exhausted if the service determines it can no longer improve the model objective measurements.
When training completes, the `run` method returns the `Model` resource.
The execution of the training pipeline can take up to 20 minutes.
```
model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
)
```
*Example output:*
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/2109316300865011712?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1284590221056278528
## Evaluate the model
### [projects.locations.models.evaluations.list](https://cloud.devsite.corp.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.models.evaluations/list)
## Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
```
# Get model resource ID
models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
```
*Example output:*
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
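The metrics arrive as a protobuf `Struct`; once converted to a plain Python dict, the summary values are easy to pull out. Below is a minimal sketch that works on a reduced, hand-made stand-in for the metrics above; the `best_f1` helper is our own illustration, not part of the SDK.

```
# Reduced, hand-made stand-in for the metrics struct above,
# expressed as a plain Python dict.
metrics = {
    "auPrc": 0.9891107,
    "confidenceMetrics": [
        {"precision": 0.2, "recall": 1.0},
        {"precision": 0.9, "recall": 0.8},
    ],
}

def best_f1(entries):
    # Pick the confidence entry with the highest F1 score.
    def f1(e):
        p, r = e.get("precision", 0.0), e.get("recall", 0.0)
        return 2 * p * r / (p + r) if (p + r) else 0.0
    return max(entries, key=f1)

print("auPrc:", metrics["auPrc"])
print("best entry:", best_f1(metrics["confidenceMetrics"]))
```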
## Make batch predictions
### [predictions.batch-prediction](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions)
### Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
```
### Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
```
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
```
### Make the batch input file
Now make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL; this tutorial uses JSONL. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
- `content`: The Cloud Storage path to the image.
- `mime_type`: The content type. In our example, it is `image/jpeg`.
For example:
    {'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}
```
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
```
### Make the batch prediction request
Now that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:
- `job_display_name`: The human readable name for the batch prediction job.
- `gcs_source`: A list of one or more batch request input files.
- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.
- `sync`: If set to `True`, the call blocks while waiting for the asynchronous batch job to complete.
```
batch_predict_job = model.batch_predict(
job_display_name="salads_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
```
*Example output:*
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
### Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, you can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job completes.
```
batch_predict_job.wait()
```
*Example Output:*
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
### Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method `iter_outputs()` to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in JSON format:
- `content`: The prediction request.
- `prediction`: The prediction response.
- `ids`: The internal assigned unique identifiers for each prediction request.
- `displayNames`: The class names for each class label.
- `bboxes`: The bounding box of each detected object.
```
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
```
*Example Output:*
Prediction(predictions=[{'ids': ['225634079670796288', '2099131524656922624', '4404974533870616576', '6710817543084310528', '225634079670796288', '2099131524656922624', '6710817543084310528', '2099131524656922624', '225634079670796288', '6710817543084310528', '4404974533870616576', '6710817543084310528', '9016660552298004480', '2099131524656922624', '225634079670796288', '225634079670796288', '225634079670796288', '9016660552298004480', '4404974533870616576', '4404974533870616576', '2099131524656922624', '4404974533870616576', '4404974533870616576', '2099131524656922624', '9016660552298004480', '2099131524656922624', '4404974533870616576', '6710817543084310528', '2099131524656922624', '2099131524656922624', '2099131524656922624', '6710817543084310528', '6710817543084310528', '9016660552298004480', '4404974533870616576', '225634079670796288', '2099131524656922624', '2099131524656922624', '4404974533870616576', '225634079670796288'], 'confidences': [0.993109703, 0.985925615, 0.0453977883, 0.044005096, 0.0327100307, 0.0218445472, 0.019706985, 0.0119887572, 0.0119416527, 0.00427311379, 0.00346497213, 0.00306943036, 0.00277529657, 0.00233747251, 0.00226957048, 0.00169593049, 0.00143932423, 0.00143866683, 0.0012890877, 0.00107431656, 0.00104699621, 0.000975079951, 0.000959897472, 0.000695129682, 0.000612194242, 0.000498361886, 0.000362998835, 0.000331683928, 0.000291648932, 0.000270543911, 0.000256592786, 0.000191880303, 0.000177405163, 0.000124130107, 0.000117361531, 0.00010169336, 8.34302919e-05, 7.46671e-05, 7.43425626e-05, 7.28466039e-05], 'displayNames': ['Salad', 'Baked Goods', 'Cheese', 'Seafood', 'Salad', 'Baked Goods', 'Seafood', 'Baked Goods', 'Salad', 'Seafood', 'Cheese', 'Seafood', 'Tomato', 'Baked Goods', 'Salad', 'Salad', 'Salad', 'Tomato', 'Cheese', 'Cheese', 'Baked Goods', 'Cheese', 'Cheese', 'Baked Goods', 'Tomato', 'Baked Goods', 'Cheese', 'Seafood', 'Baked Goods', 'Baked Goods', 'Baked Goods', 'Seafood', 'Seafood', 'Tomato', 'Cheese', 'Salad', 'Baked 
Goods', 'Baked Goods', 'Cheese', 'Salad'], 'bboxes': [[0.37747705, 1.0, 0.31908837, 0.981482148], [0.0115042031, 0.553094864, 0.0954515785, 0.728246], [0.00692665577, 0.548264325, 0.204499051, 0.702045619], [0.38464877, 0.958207607, 0.193982765, 0.977637291], [0.17925939, 0.906846285, 0.173706621, 0.842920542], [0.184155583, 0.861931205, 0.132575363, 0.933382452], [0.00305101275, 0.611860037, 0.0430814438, 0.757687569], [0.339314103, 0.976260185, 0.193029925, 1.0], [0.226356596, 0.930693269, 0.581959367, 0.95653224], [0.179834038, 0.908775806, 0.0958697423, 0.939691424], [0.432378292, 0.933267236, 0.383782387, 0.998322189], [0.284259945, 0.931147337, 0.431660354, 0.955191612], [0.380976766, 0.981038809, 0.240252972, 0.948227406], [0.0822101831, 0.80000484, 0.0997514725, 0.613626182], [0.0185090601, 0.71911788, 0.238465369, 0.786021173], [0.226215214, 0.90732789, 0.745254755, 0.976904333], [0.0948396623, 0.77132678, 0.0453846678, 0.7058779], [0.0, 0.600441575, 0.0679637417, 0.726050138], [0.357916325, 0.954997301, 0.135904908, 0.931971073], [0.132627249, 0.893175125, 0.0913171396, 0.866321087], [0.0602070391, 0.563761592, 0.0754725188, 0.508511], [0.0, 0.604042649, 0.0415849313, 0.614845335], [0.229078263, 0.878014445, 0.402972162, 0.969400823], [0.414384753, 0.949710369, 0.0264594965, 0.378633976], [0.241480172, 0.879761755, 0.151504755, 0.885163963], [0.0922716856, 0.839965582, 0.0569753647, 0.344792545], [0.421173453, 0.909894347, 0.0263550486, 0.424664497], [0.469087541, 0.929275, 0.0197226219, 0.430724025], [0.0161168873, 0.577470899, 0.803146243, 0.980094135], [0.274543285, 1.0, 0.508762538, 0.98746717], [0.254469335, 0.936549246, 0.0800963044, 0.473506063], [0.115917385, 0.850970387, 0.0382353887, 0.610701382], [0.215452224, 0.915474534, 0.672911525, 0.974969745], [0.339072973, 0.927099466, 0.545033097, 0.961636961], [0.0, 0.521227062, 0.804131687, 0.996689677], [0.476845741, 0.914084792, 0.00162035227, 0.416255236], [0.00672900677, 0.79107517, 0.23690711, 
0.807291448], [0.0806772411, 0.963176131, 0.715660572, 0.979084849], [0.029843241, 0.80046916, 0.0362186842, 0.465307206], [0.375386357, 0.906366706, 0.00989544392, 0.302769661]]}], deployed_model_id='2225068486990757888', explanations=None)
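Each line of a batch results file pairs up `displayNames`, `confidences`, and `bboxes` by index, so a simple post-processing step can keep only confident detections. Below is a minimal sketch that runs on a reduced, hand-made sample line; the `filter_detections` helper and the 0.5 threshold are our own choices, not part of the SDK.

```
import json

# Reduced, hand-made sample of one line from a "prediction.results-*" file;
# field names follow the example output above.
sample_line = json.dumps({
    "instance": {"content": "gs://your-bucket/file1.jpg", "mimeType": "image/jpeg"},
    "prediction": {
        "ids": ["1", "2", "3"],
        "displayNames": ["Salad", "Baked Goods", "Cheese"],
        "confidences": [0.99, 0.85, 0.04],
        "bboxes": [[0.38, 1.0, 0.32, 0.98], [0.01, 0.55, 0.10, 0.73], [0.01, 0.55, 0.20, 0.70]],
    },
})

def filter_detections(line, threshold=0.5):
    # Keep only detections whose confidence meets the threshold.
    pred = json.loads(line)["prediction"]
    return [
        {"label": name, "confidence": conf, "bbox": bbox}
        for name, conf, bbox in zip(pred["displayNames"], pred["confidences"], pred["bboxes"])
        if conf >= threshold
    ]

for det in filter_detections(sample_line):
    print(det["label"], det["confidence"])
```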
## Make online predictions
### [predictions.deploy-model-api](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api)
## Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method.
```
endpoint = model.deploy()
```
*Example output:*
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
### [predictions.online-prediction-automl](https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl)
### Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = !gsutil cat $IMPORT_FILE | head -n1
cols = str(test_items[0]).split(",")
if len(cols) == 11:
test_item = str(cols[1])
test_label = str(cols[2])
else:
test_item = str(cols[0])
test_label = str(cols[1])
print(test_item, test_label)
```
### Make the prediction
Now that your `Model` resource is deployed to an `Endpoint` resource, you can make online predictions by sending prediction requests to the `Endpoint` resource.
#### Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using `tf.io.gfile.GFile()`. To pass the test data to the prediction service, you encode the bytes into base64 -- which keeps the content safe from modification while transmitting binary data over the network.
The format of each instance is:
    { 'content': base64_encoded_bytes }
Since the `predict()` method can take multiple items (instances), send your single test item as a list of one test item.
#### Response
The response from the `predict()` call is a Python dictionary with the following entries:
- `ids`: The internal assigned unique identifiers for each prediction request.
- `displayNames`: The class names for each class label.
- `confidences`: The predicted confidence, between 0 and 1, per class label.
- `bboxes`: The bounding box of each detected object.
- `deployed_model_id`: The Vertex AI identifier for the deployed Model resource which did the predictions.
```
import base64
import tensorflow as tf
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{"content": base64.b64encode(content).decode("utf-8")}]
prediction = endpoint.predict(instances=instances)
print(prediction)
```
*Example output:*
Prediction(predictions=[{'ids': ['2250776168359788544', '1097854663752941568', ...],
                         'displayNames': ['Salad', 'Baked Goods', ...],
                         'bboxes': [[0.423218071, 0.979368508, 0.339486301, 0.953063667], ...],
                         'confidences': [0.99927026, 0.998335898, ...]}],
           deployed_model_id='5770527293638180864', explanations=None)
## Undeploy the model
When you are done making predictions, undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model.
```
endpoint.undeploy_all()
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True

if delete_all:
    # Delete the dataset using the Vertex dataset object
    try:
        if "dataset" in globals():
            dataset.delete()
    except Exception as e:
        print(e)

    # Delete the model using the Vertex model object
    try:
        if "model" in globals():
            model.delete()
    except Exception as e:
        print(e)

    # Delete the endpoint using the Vertex endpoint object
    try:
        if "endpoint" in globals():
            endpoint.delete()
    except Exception as e:
        print(e)

    # Delete the AutoML or Pipeline training job
    try:
        if "dag" in globals():
            dag.delete()
    except Exception as e:
        print(e)

    # Delete the custom training job
    try:
        if "job" in globals():
            job.delete()
    except Exception as e:
        print(e)

    # Delete the batch prediction job using the Vertex batch prediction object
    try:
        if "batch_predict_job" in globals():
            batch_predict_job.delete()
    except Exception as e:
        print(e)

    # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
    try:
        if "hpt_job" in globals():
            hpt_job.delete()
    except Exception as e:
        print(e)

    if "BUCKET_NAME" in globals():
        ! gsutil rm -r $BUCKET_NAME
```
---
```
cd ..
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from dotenv import dotenv_values
from src.libs.utils import read_yaml, read_query
# READ DATA
config = dotenv_values()
db = create_engine("mysql://{user}:{pwd}@localhost/football_data".format(user=config['USER'], pwd=config['PWD']))
input_query='queries/mlp_1_input.sql'
query = read_query(input_query)
df = pd.read_sql(query, con=db)
# MODEL CONFIG
# config_file = 'configs/rfc1_config.yml'
# cfg = read_yaml(config_file)
features = ['h_avg_scored',
'h_avg_conceded',
'h_avg_elo',
'a_avg_scored',
'a_avg_conceded',
'a_avg_elo',
'AvgH',
'AvgD',
'AvgA']
target = 'FTR'
train_seasons = ['0910', '1011', '1112', '1213', '1314', '1415', '1516', '1617']
valid_seasons = ['1718', '1819']
test_seasons = ['1920', '2021']
# df = df.set_index('MATCH_ID')
df = df.loc[~(df[features + [target]].isnull().any(axis=1))]
train = df.loc[df.season_code.isin(train_seasons)]
valid = df.loc[df.season_code.isin(valid_seasons)]
test = df.loc[df.season_code.isin(test_seasons)]
```
## Modelling
```
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
batch_size = 512
train_samples = train.shape[0]
n_complete_batch = np.floor(train_samples / batch_size)
max_train_samples = int(n_complete_batch*batch_size)
ss = StandardScaler()
train_x = ss.fit_transform(X=train.iloc[:max_train_samples,:][features])
train_y = pd.get_dummies(train.iloc[:max_train_samples,:][target])
train_yD = train_y['D']
train_w = train.iloc[:max_train_samples,:]['AvgD']
tf.keras.backend.clear_session()
inputs_x = tf.keras.Input(shape=(9,))
inputs_w = tf.keras.Input(shape=(1,))
inputs_l = tf.keras.Input(shape=(1,))
x = tf.keras.layers.Dense(9, activation=tf.nn.relu)(inputs_x)
x = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x)
x = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x)
outputs = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)(x)
model = tf.keras.Model(inputs=[inputs_x, inputs_w, inputs_l], outputs=outputs)
def my_loss(y_true, y_pred):
    y_pred_round = tf.math.round(y_pred)
    return -((y_true * y_pred_round) * inputs_w - (2 * y_pred_round))

def my_metrics():
    # just to output something
    return tf.math.reduce_mean(inputs_w)

def dummy_loss(y_true, y_pred):
    return 0.
loss = my_loss(outputs, inputs_l)
metric = my_metrics()
model.add_loss(loss)
model.compile(optimizer='adam', loss=dummy_loss)
model.fit(x=[train_x, train_w, train_yD], batch_size=32)
pred = model.predict([train_x, train_w, train_yD])
pred.mean()
y_true = tf.random.uniform(shape=(10, ), minval=0, maxval=1, dtype=tf.int32)
y_pred = tf.random.uniform(shape=(10, ), minval=0, maxval=1, dtype=tf.int32)
y_weight = tf.random.uniform(shape=(10, ), minval=2, maxval=5, dtype=tf.float32)
print(y_true)
tf.__version__
```
---
```
# We can use PCA to calculate a projection of a dataset and select a number of dimensions
# or principal components of the projection to use as input to a model.
# In this example we will experiment with PCA and logistic regression.
# We will evaluate the performance of logistic regression on a synthetic dataset
# for various numbers of principal components.
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot
# Create a synthetic dataset
# In our example we create a dataset with 1000 examples and 20 input features.
# That is, the dataset dimension is 20.
# Out of those 20, 15 inputs are "meaningful".
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=7)
# Summarize the dataset
print(X.shape, y.shape)
# Create a list of models to experiment with
# We will create 20 models. Each model will take as input a different number of principal components.
# (i.e., only one component as input, two components, three,...,twenty)
def get_models():
    models = dict()
    for i in range(1, 21):
        # This way we essentially define a two-step process:
        # Perform PCA and use the output of PCA as input to logistic regression
        steps = [('pca', PCA(n_components=i)), ('m', LogisticRegression())]
        models[str(i)] = Pipeline(steps=steps)
    return models

# Evaluate a given model using cross-validation
def evaluate_model(model, X, y):
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
    return scores
# Get the models
models = get_models()
# Evaluate the models and store results
results = []
names = []
for name, model in models.items():
    scores = evaluate_model(model, X, y)
    results.append(scores)
    names.append(name)
    print('>%s %.3f (%.3f)' % (name, mean(scores), std(scores)))
# Plot model performance for comparison
pyplot.boxplot(results, labels=names, showmeans=True)
pyplot.xticks(rotation=45)
pyplot.show()
# We see a general trend of increased performance as the number of dimensions is increased.
# On this dataset, the results suggest a trade-off in the number of dimensions vs. the classification accuracy
# of the model.
# Interestingly, we don't see any improvement beyond 15 components.
# This matches our definition of the problem where only the first 15 components contain information
# about the class and the remaining five are redundant.
```
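A complementary check, not part of the original example, is to inspect the cumulative explained-variance ratio of a full PCA fit: because the five redundant features are linear combinations of the informative ones, essentially all of the variance should be captured by the first 15 components. The 0.95 threshold below is an arbitrary illustration, not a recommendation from the example above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

# Same synthetic dataset as above: 15 informative + 5 redundant features
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=7)

# Fit PCA with all components and accumulate the explained-variance ratio
pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)

# Smallest number of components reaching 95% of the total variance
n_components = int(np.searchsorted(cumvar, 0.95) + 1)
print(n_components)
```

This gives a cheap first guess for `n_components` before running the full cross-validated sweep.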
---
<img alt="QTPyLib" align="right" src="http://qtpylib.io/docs/latest/_images/qtpy-logo.png" style="margin:10px 10px 0;width:250px">
# Data Workflow
QTPyLib: Quantitative Trading Python Library<br>
https://github.com/ranaroussi/qtpylib
Copyright 2016 Ran Aroussi
---
<code>Licensed under the GNU Lesser General Public License, v3.0 (the "License"); You may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.gnu.org/licenses/lgpl-3.0.en.html
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
</code>
---
# Overview
This Jupyter Notebook aims to provide developers with a quick overview of the new `Workflow` module, introduced in version `1.5.47a` of QTPyLib.
The `Workflow` module includes some handy methods for working with external data sources when backtesting, and is planned to include methods for analyzing your backtest results in the near future.
\* Please refer to the [Workflow API Documentation](http://qtpylib.io/docs/latest/api.html#workflow-api) for a complete list of available methods and parameters.
---
Let's start by checking out QTPyLib's version:
```
import qtpylib
qtpylib.__version__
```
Next, we need to import the `Workflow` module:
```
from qtpylib import workflow as wf
```
## 1. Working with External Data
Sometimes, you want to backtest your strategies using market data you already have from sources other than the Blotter. Before you can use market data from external data sources, you'll need to convert it into a QTPyLib-compatible data format.
Once the data is converted, it can be read by your strategies as CSV files. You can also save the converted data in your Blotter's MySQL database so it can be used just like any other data captured by your Blotter.
---
### 1.1. Downloading Data
The `Workflow` module includes 4 methods for downloading market data from either Yahoo! Finance, Google or Interactive Brokers. The methods are:
- `get_data_yahoo()` - downloads daily bars from Yahoo! finance
- `get_data_yahoo_intraday()` - downloads 1-min intraday bars from Yahoo! finance (2 weeks max)
- `get_data_google_intraday()` - downloads 1-min intraday bars from Google (3 weeks max)
- `get_data_ib()` - downloads 1s or higher bars from Interactive Brokers (requires a running Blotter, connected to TWS/IB Gateway)
---
In this example, let's first download intraday market data for the S&P E-mini Futures from Yahoo! finance, so we'll have some data to work with...
```
# get data from Yahoo! finance
df = wf.get_data_yahoo_intraday("ESZ16.CME")
df.tail()
```
Of course, if you already have your data as CSV files, simply load them into a `pd.DataFrame` that we will convert into QTPyLib-compatible data format in the next step.
```
# read data from existing csv file
# df = pd.read_csv("~/Desktop/existing_data.csv")
```
---
### 1.2. Convert data into a QTPyLib-compatible data format
Once you have your existing data loaded as a `pd.DataFrame`, it's time to convert it into a QTPyLib-compatible data format by using the `prepare_data()` method and passing our data and the **IB-Compatible contract tuple or string** as the first argument.
```
# prepare data for usage by QTPyLib
ibtuple = ("ES", "FUT", "GLOBEX", "USD", 201612, 0.0, "")
qtpylib_df = wf.prepare_data(ibtuple, data=df, output_path="~/Desktop/")
qtpylib_df.tail()
```
The resulting CSV file will be saved in `~/Desktop/ESZ2016_FUT.csv`
---
### 1.3. Storing Converted Data in MySQL
While this step is optional, you may want to load your converted data into QTPyLib's MySQL for future use by your strategies. **This step isn't required as you can backtest directly off of your converted CSV files.**
With your `Blotter` running and connected to TWS/IB Gateway, run this command:
```
# store data
wf.store_data(qtpylib_df, kind="BAR")
```
---
### 1.4. Using CSV files when Backtesting
Once you have your CSV files in a QTPyLib-compatible format, you can backtest using this data using the `--data` flag when running your backtests.
Example:
```
$ python strategy.py --backtest --start 2016-12-01 --end 2016-12-31 --data ~/Desktop/ --output ~/portfolio.pkl
```
---
Please refer to [Back-Testing Using QTPyLib](http://qtpylib.io/docs/latest/algo.html#back-testing-using-qtpylib) for more information about back-testing using QTPyLib.
---
```
from IPython.core.display import HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
%load_ext autoreload
%autoreload 1
```
Author: Andrew Tarzia
Date Created: 12 Jul 2018
Distributed under the terms of the MIT License.
## Visualize properties of a random subset of reaction systems in a directory for comparison
- check:
- reaction components with Database website
- molecules with PUBCHEM/CHEBI
- molecule properties with PUBCHEM
- sequence with Database website
- sequence properties with UniPROT
```
import glob
from ercollect import molecule as mol
from ercollect.molecule import molecule
from ercollect import rxn_syst
from ercollect.rxn_syst import reaction
import numpy as np
import random
from rdkit.Chem import Draw
from rdkit.Chem import AllChem as Chem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem.Draw import rdMolDraw2D
from rdkit.Chem.Draw.MolDrawing import MolDrawing, DrawingOptions
from IPython.display import clear_output
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
```
### collect a random subset of X reactions
```
no_ = 300
rs_to_test = []
no_rxns_in_total = len(glob.glob(directory+"*sRS*.gpkl"))
idx = np.random.randint(no_rxns_in_total, size=no_) # + 10000
generator = rxn_syst.yield_rxn_syst(output_dir=directory)
smiles_list = []
n_list = []
for i, rs in enumerate(generator):
    if i not in idx:
        continue
    if '-2_' not in rs.pkl:
        continue
    print('index:', i)
    print('pkl', rs.pkl)
    print('EC', rs.EC)
    # try:
    #     print('reversible?', rs.reversible)
    # except AttributeError:
    #     print('reversible?', 'unknown')
    if rs.skip_rxn is True:
        print('should be skipped?')
        try:
            print(rs.skip_reason)
        except AttributeError:
            pass
    else:
        print('----------------------------------------------')
        # check component properties
        smiles_list = []
        n_list = []
        for m in rs.components:
            print('-----------')
            print(m.name, '--', m.role)
            print('iupac:', m.iupac_name)
            if m.SMILES is not None:
                n_list.append(m.DB + ' - ' + m.name)
                smiles_list.append(m.SMILES)
            # print('CHEBI ID:', m.chebiID)
            print('SMILES:', m.SMILES)
            print('PUBCHEM XlogP:', m.XlogP)
            print('PUBCHEM complexity:', m.complexity)
            print('RDKIT logP:', m.logP)
            print('RDKIT Synthetic accessibility:', m.Synth_score)
            print('size:', m.mid_diam, 'angstrom')
        print('----------------------------------------------')
        print('change in complexity:', rs.delta_comp)
        print('change in synthetic accessibility:', rs.delta_sa)
        # check sequence properties
        print('----------------------------------------------')
        try:
            if rs.sequence is not None:
                print(rs.sequence)
                try:
                    print('uniprotID:', rs.UniprotID)
                except AttributeError:
                    pass
                print('add other sequence IDs for other DBs')
                print('sequence length:', len(rs.sequence))
                print('pI:', rs.pI)
                print('GRAVY:', rs.GRAVY)
                print('A index:', rs.A_index)
                print('I index:', rs.I_index)
                print('TM index:', rs.TM_index)
        except AttributeError:
            pass
    input('done?')
    clear_output()
m_list = [Chem.MolFromSmiles(i) for i in smiles_list]
Draw.MolsToGridImage(m_list, legends=n_list)
```
## Visualize a specific reaction system
- includes search functions
```
from ercollect.rxn_syst import reaction, get_RS, yield_rxn_syst
from rdkit.Chem import AllChem as Chem
from rdkit.Chem import Draw, Descriptors
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
# get a list of RS with max_comp_size < XX and sequence != None
XX = 6.5
for rs in yield_rxn_syst(output_dir=directory):
    if rs.skip_rxn is True:
        continue
    if rs.max_comp_size is None:
        continue
    if rs.max_comp_size < XX:
        try:
            if rs.sequence is not None:
                print(rs.pkl, rs.TM_index, rs.A_index)
        except AttributeError:
            pass
pkl_name = 'sRS-2_10_1_1-KEGG-R09735.gpkl'
rs = get_RS(directory+pkl_name, output_dir=directory, verbose=True)
rs.__dict__
smiles_list = []
n_list = []
for m in rs.components:
    print(m.name, m.mid_diam, m.logP, m.role, m.Synth_score, m.complexity, m.pkl)
    print(m.name, m.DB)
    print('---', Descriptors.ExactMolWt(Chem.MolFromSmiles(m.SMILES)))
    n_list.append(m.name)
    smiles_list.append(m.SMILES)
    MOL = Chem.MolFromSmiles(m.SMILES)
    # Draw.MolToFile(MOL, fileName=m.name+'.svg')
m_list = [Chem.MolFromSmiles(i) for i in smiles_list]
Draw.MolsToGridImage(m_list, legends=n_list)
```
## Analyse all RS for a certain skip_reaction reason
```
import os
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
count = 0
count_total = 0
max_logP = 0
min_logS = 0
for rs in rxn_syst.yield_rxn_syst(output_dir=directory):
    count_total += 1
    try:
        if rs.reversible is True:
            count += 1
            max_logP = max([max_logP, rs.max_logP])
            min_logS = min([min_logS, rs.min_logS])
    except AttributeError:
        pass
    # if rs.skip_rxn is True:
    #     if rs.skip_reason == 'one component has no SMILES':
    #         count += 1
    #         print(rs.pkl)
    #         os.system('rm '+rs_dir+rs.pkl)
max_logP, min_logS
print(count, count_total, count/count_total * 100)
```
## Analyse the number of times each skip_reason is used
```
from ercollect import rxn_syst
from rdkit.Chem import Descriptors
import matplotlib.pyplot as plt
reasons = {'one component failed resolution': 0, 'SABIO E-ID is for mutant': 0,
'SABIO R-ID not found': 0, 'DNA present - SABIO has a bug': 0,
'CHEBI ID of a component not available': 0, 'a component is in skip_names': 0,
'one component is ?': 0, 'No result for KEGG URL search - likely outdated': 0,
'CHEBI ID not available for one component': 0,
'one component has invalid SMILES': 0, 'one component has no SMILES': 0,
'one component has wildcard SMILES': 0,
'one component has no molecule - rxn is incomplete or generic': 0,
'one component could not have diameter calculated': 0,
'KEGG ID could not be converted to MOL': 0,
'KEGG ID gave generic structure': 0,
'KEGG rxn includes polymeric species': 0}
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'new_reactions_atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
count_total = 0
count_skipped = 0
for rs in rxn_syst.yield_rxn_syst(output_dir=directory):
    # if 'KEGG' not in rs.pkl:
    #     continue
    count_total += 1
    if rs.skip_rxn is True:
        count_skipped += 1
        reasons[rs.skip_reason] += 1
        # if rs.skip_reason == 'CHEBI ID not available for one component':
        #     print(rs.__dict__)
        #     input()
print(count_skipped, count_total, count_skipped/count_total * 100)
reasons
MWs =[]
for rs in rxn_syst.yield_rxn_syst(output_dir=directory):
rs_MW = []
if rs.skip_rxn is True:
if rs.skip_reason == 'one component could not have diameter calculated':
if rs.components is not None:
for m in rs.components:
MW = Descriptors.MolWt(m.mol)
rs_MW.append(MW)
MWs.append(MW)
if max(rs_MW) < 500:
print(rs.pkl)
```
## Analyse a list of RS
- example given here of RS with max_comp_size < some threshold
- using threshold 4.2 angstrom 10/12/18
```
from ercollect.rxn_syst import reaction, get_RS, yield_rxn_syst
from rdkit.Chem import AllChem as Chem
from rdkit.Chem import Draw
RS_des = ['sRS-1_11_1_21-KEGG-R00009.gpkl', 'sRS-1_11_1_21-KEGG-R00602.gpkl',
'sRS-1_11_1_8-KEGG-R02810.gpkl', 'sRS-1_13_11_49-KEGG-R05721.gpkl',
'sRS-1_15_1_1-KEGG-R00275.gpkl', 'sRS-1_16_3_1-KEGG-R00078.gpkl',
'sRS-1_1_3_13-KEGG-R00608.gpkl', 'sRS-1_2_2_4-KEGG-R00276.gpkl',
'sRS-1_2_98_1-KEGG-R00614.gpkl', 'sRS-1_4_3_21-KEGG-R06154.gpkl',
'sRS-1_7_1_14-KEGG-R09809.gpkl', 'sRS-1_7_3_6-KEGG-R10230.gpkl',
'sRS-1_8_3_4-KEGG-R01851.gpkl', 'sRS-3_13_1_5-KEGG-R10534.gpkl',
'sRS-3_13_1_5-KEGG-R10535.gpkl', 'sRS-3_13_1_5-KEGG-R10538.gpkl',
'sRS-3_5_1_49-KEGG-R00524.gpkl', 'sRS-3_5_5_8-KEGG-R05780.gpkl',
'sRS-3_5_5_XX-KEGG-R00152.gpkl', 'sRS-4_2_1_112-KEGG-R05380.gpkl',
'sRS-4_2_1_66-KEGG-R01408.gpkl', 'sRS-4_99_1_2-KEGG-R09339.gpkl',
'sRS-XX_XX_XX_XX-KEGG-R00793.gpkl', 'sRS-XX_XX_XX_XX-KEGG-R09094.gpkl',
'sRS-XX_XX_XX_XX-KEGG-R09996.gpkl']
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
count = 0
for pkl in RS_des:
    print(pkl)
    rs = get_RS(directory + pkl, output_dir=directory, verbose=False)
    smiles_list = []
    n_list = []
    for m in rs.components:
        if m.mid_diam == 0:
            print(m.name, m.mid_diam, m.SMILES)
            print(m.pkl)
            print('---------------------------------------------')
            count += 1
print(count)
m.mol
```
## Modify some attribute of all RS
```
directory = '/home/atarzia/psp/screening_results/'
directory += 'kegg_111218/'
# directory += 'atlas_111218/'
# directory += 'new_reactions_kegg_atlas/'
# directory += 'new_reactions_sabio_wcharge/'
# directory += 'biomin_search_sabio_wcharge/'
rs.__dict__
for rs in rxn_syst.yield_rxn_syst(output_dir=rs_dir):
    rs.mol_collected = False
    rs.max_logP = None
    rs.max_XlogP = None
    rs.max_logS = None
    rs.min_logP = None
    rs.min_XlogP = None
    rs.min_logS = None
    rs.save_object(rs_dir + rs.pkl)
```
---
```
! pip install -U pip
! pip install -U torch==1.5.1
! pip install -U clearml>=0.15.1
! pip install -U pandas==1.0.4
! pip install -U numpy==1.18.4
! pip install -U pathlib2==2.3.5
! pip install -U scikit-learn==0.23.1
import pandas as pd
import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split
import torch
from datetime import datetime
from pathlib2 import Path
from clearml import Task
task = Task.init(project_name="Table Example", task_name="tabular preprocessing")
logger = task.get_logger()
configuration_dict = {"test_size": 0.1, "split_random_state": 0}
configuration_dict = task.connect(
configuration_dict
) # enabling configuration override by clearml
print(
configuration_dict
) # printing actual configuration (after override in remote mode)
# Download the shelter-animal-outcomes dataset (https://www.kaggle.com/c/shelter-animal-outcomes)
# This dataset aims to improve the understanding of trends in animal outcomes,
# which could help shelters focus their energy on specific animals who need extra help finding a new home.
path_to_ShelterAnimal = "./data"
train_set = pd.read_csv(Path(path_to_ShelterAnimal) / "train.csv")
logger.report_table(
title="Trainset - raw",
series="pandas DataFrame",
iteration=0,
table_plot=train_set.head(),
)
```
# **Pre-processing**
```
# Keep only the month from the DateTime data (drop day, hour and year)
timestamp = pd.to_datetime(train_set["DateTime"])
months = [d.month for d in timestamp]
train_set["Month"] = pd.DataFrame(months).astype("object")
age = train_set["AgeuponOutcome"]
months_age = []
for val in age:
    if pd.isnull(val):
        months_age.append(val)
    else:
        amount, time_type = val.split(" ")
        if "day" in time_type:
            mult = 1.0 / 30
        if "week" in time_type:
            mult = 1.0 / 4
        if "month" in time_type:
            mult = 1.0
        if "year" in time_type:
            mult = 12.0
        months_age.append(int(amount) * mult)
train_set["Age"] = pd.DataFrame(months_age).astype(np.float32)
sex_neutered = train_set["SexuponOutcome"]
sex = []
neutered = []
for val in sex_neutered:
    if pd.isnull(val):
        sex.append(val)
        neutered.append(val)
    elif "Unknown" in val:
        sex.append(np.nan)
        neutered.append(np.nan)
    else:
        n, s = val.split(" ")
        if n in ["Neutered", "Spayed"]:
            neutered.append("Yes")
        else:
            neutered.append("No")
        sex.append(s)
train_set["Sex"] = pd.DataFrame(sex)
train_set["Neutered"] = pd.DataFrame(neutered)
# Remove irrelevant columns
train_set.drop(
columns=[
"Name",
"OutcomeSubtype",
"AnimalID",
"DateTime",
"AgeuponOutcome",
"SexuponOutcome",
],
inplace=True,
)
logger.report_table(
title="Trainset - after preprocessing",
series="pandas DataFrame",
iteration=0,
table_plot=train_set.head(),
)
```
## *Fill NA Values*
```
object_columns = train_set.select_dtypes(include=["object"]).copy()
numerical_columns = train_set.select_dtypes(include=["number"]).copy()
for col in object_columns.columns:
    if object_columns[col].isnull().sum() > 0:
        most_common = Counter(object_columns[col]).most_common(1)[0][0]
        print('Column "{}": replacing null values with "{}"'.format(col, most_common))
        train_set[col].fillna(most_common, inplace=True)

for col in numerical_columns.columns:
    if numerical_columns[col].isnull().sum() > 0:
        median_val = numerical_columns[col].median()
        print('Column "{}": replacing null values with "{}"'.format(col, median_val))
        train_set[col].fillna(median_val, inplace=True)
logger.report_table(
title="Trainset - after filling missing values",
series="pandas DataFrame",
iteration=0,
table_plot=train_set.head(),
)
```
## *Labels Encoding*
```
out_encoding = train_set["OutcomeType"].astype("category").cat.categories
outcome_dict = {key: val for val, key in enumerate(out_encoding)}
task.upload_artifact("Outcome dictionary", outcome_dict)
for col in object_columns.columns:
    train_set[col] = train_set[col].astype("category").cat.codes
logger.report_table(
title="Trainset - after labels encoding",
series="pandas DataFrame",
iteration=0,
table_plot=train_set.head(),
)
```
## *Splitting dataset*
```
X = train_set.drop(columns=["OutcomeType"])
Y = train_set["OutcomeType"]
X_train, X_val, Y_train, Y_val = train_test_split(
X,
Y,
test_size=configuration_dict.get("test_size", 0.1),
random_state=configuration_dict.get("split_random_state", 0),
)
# making all variables categorical
object_columns_names = object_columns.drop(columns=["OutcomeType"]).columns
for col in object_columns_names:
    X[col] = X[col].astype("category")
columns_categories = {col: len(X[col].cat.categories) for col in object_columns_names}
task.upload_artifact("Categories per column", columns_categories)
train_df = X_train.join(Y_train)
train_df.to_csv(Path(path_to_ShelterAnimal) / "train_processed.csv", index=False)
val_df = X_val.join(Y_val)
val_df.to_csv(Path(path_to_ShelterAnimal) / "val_processed.csv", index=False)
paths = {
"train_data": str(Path(path_to_ShelterAnimal) / "train_processed.csv"),
"val_data": str(Path(path_to_ShelterAnimal) / "val_processed.csv"),
}
task.upload_artifact("Processed data", paths)
```
---
# Explore Cell Cutoffs
We are unsatisfied with our current cutoffs for calling a cell vs background. We think Cell Ranger's cutoff is too arbitrary, so I need to import the unfiltered data and figure out our own filtering criteria.
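One data-driven alternative to a fixed cutoff, sketched here only as a starting point rather than the criterion we ultimately settle on, is to take the knee of the barcode-rank curve: the point on the log-log rank-vs-UMI curve farthest from the straight line joining its endpoints. The synthetic Poisson parameters below are invented purely to exercise the function.

```python
import numpy as np

def knee_point_cutoff(umi_counts):
    """Estimate a UMI cutoff from the barcode-rank curve.

    Sort counts descending and, in log-log space, return the count at the
    point farthest from the straight line joining the curve's endpoints
    (a common knee-detection heuristic).
    """
    counts = np.sort(np.asarray(umi_counts))[::-1]
    counts = counts[counts > 0]
    x = np.log10(np.arange(1, len(counts) + 1))
    y = np.log10(counts)
    # Unit vector along the line from the first to the last point
    line = np.array([x[-1] - x[0], y[-1] - y[0]])
    line = line / np.linalg.norm(line)
    # Perpendicular distance of every point from that line
    rel = np.stack([x - x[0], y - y[0]], axis=1)
    proj = rel @ line
    dist = np.linalg.norm(rel - np.outer(proj, line), axis=1)
    return counts[int(np.argmax(dist))]

# Synthetic check: ~500 real cells over a large low-UMI background
rng = np.random.default_rng(0)
cells = rng.poisson(10_000, 500)
background = rng.poisson(10, 50_000)
cutoff = knee_point_cutoff(np.concatenate([cells, background]))
print(cutoff)  # well above the background counts
```

On real data this would be applied per replicate to the `umi` column produced by `get_umi` below, then compared against the fixed cutoffs explored in this notebook.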
```
import os
import sys
from pathlib import Path
from IPython.display import display, HTML, Markdown
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import tables
# Project level imports
sys.path.insert(0, '../lib')
from larval_gonad.notebook import Nb
from larval_gonad.plotting import make_figs
from larval_gonad.config import memory
from larval_gonad.cell_selection import cellranger_counts, store_umi, cellranger_umi, decompress_seq, _build_decode
# Setup notebook
nbconfig = Nb.setup_notebook()
# Get datastore
store = pd.HDFStore('../output/store.h5')
store_umi('../output/testis1/outs/molecule_info.h5', store, 'testis1/umi')
%%time
store['testis1/umi'].groupby('umi').size()
store['testis1/umi'].shape
def get_umi(name):
    umi = cellranger_umi(f'../output/{name}/outs/molecule_info.h5').groupby("cell_id").umi.size().sort_values(ascending=False).to_frame()
    umi['X'] = range(1, len(umi) + 1)
    return umi

def get_genes(filtered, name, cutoff):
    cell_ids = filtered.index.unique().tolist()
    fname = f'../output/{name}/outs/raw_gene_bc_matrices_h5.h5'
    raw = cellranger_counts(fname, barcodes=cell_ids)
    raw.to_csv(f'../output/{name}/umi_filtered_{cutoff}_gene_counts.tsv', sep='\t')

def gene_count(name, cutoff):
    raw = pd.read_csv(f'../output/{name}/umi_filtered_{cutoff}_gene_counts.tsv', sep='\t')
    return (raw.sum(axis=1) > 0).sum()

def _plot_cutoff(ax, cutoff):
    ax.axhline(cutoff, color=nbconfig.color_c1, ls='--', label=f'Cutoff: {cutoff:,}')
    ax.legend()

def _plot_all(umi, name, ax):
    rep = int(name[-1:])
    ax.plot(umi['X'], umi['umi'])
    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_xlabel('Cell Count (log10)')
    ax.set_ylabel('UMI Count (log10)')
    ax.set_title(f'Barcode Rank Plot (Rep {rep})')

def _plot_filtered(filtered, name, ax, cutoff):
    size = len(filtered)
    num_genes = gene_count(name, cutoff)
    rep = int(name[-1:])
    ax.plot(filtered['X'], filtered['umi'])
    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_xlabel('Cell Count (log10)')
    ax.set_ylabel('UMI Count (log10)')
    ax.text(
        1,
        10,
        f"""\
Number Cells: {size:,}
Number Genes > 0: {num_genes:,}
"""
    )
    ax.set_title(f'Filtered Barcode UMI Plot (Rep {rep}: {size:,} Cells)')

def check_cutoff(umi_cutoff):
    for rep in [1, 2, 3]:
        print(f'Parsing Rep {rep}')
        name = f'testis{rep}'
        umi = get_umi(name)
        filtered = umi.query(f'umi > {umi_cutoff}')
        raw = get_genes(filtered, name, umi_cutoff)

def plot_cutoff(umi_cutoff):
    fig, axes = plt.subplots(2, 3, sharex=True, sharey=True, figsize=(20, 10))
    for rep in [1, 2, 3]:
        ax1 = axes[0, rep - 1]
        ax2 = axes[1, rep - 1]
        name = f'testis{rep}'
        umi = get_umi(name)
        filtered = umi.query(f'umi > {umi_cutoff}')
        _plot_all(umi, name, ax1)
        _plot_filtered(filtered, name, ax2, umi_cutoff)
    for ax in axes.flatten():
        _plot_cutoff(ax, umi_cutoff)
    plt.suptitle(f'UMI Cutoff: {umi_cutoff:,}')
    plt.tight_layout(rect=[0, 0, 1, .95])
check_cutoff(1000)
check_cutoff(5000)
check_cutoff(10000)
plot_cutoff(1000)
%%time
plot_cutoff(5000)
store = pd.HDFStore('../output/store.h5')
umi = cellranger_umi('../output/testis1/outs/molecule_info.h5')
umi.head()
cell_ids = umi.cell_id.unique()
```
---
# Exercise 5
Take as input a file in `GTF` (Gene Transfer Format) format, which annotates a set of genes on a reference genome, together with the `FASTA` file of the reference genome, and produce:
- the transcript sequences or the coding sequences (CDS) of the annotated genes in `FASTA` format, depending on the user's choice
- the set of HUGO names of the genes for which a sequence (transcript or CDS) was produced in the previous step
The `FASTA` *header* of each produced sequence must contain:
- the HUGO name of the reference gene
- the identifier of the reference transcript
- the length of the produced sequence
- the type of sequence (transcript or CDS)
- the strand of the gene
Example *header* for a transcript:
>ARHGAP4; U52112.4-003; len=3235; type=transcript; strand=-
Example *header* for a CDS:
>AVPR2; U52112.2-003; len=642; type=cds; strand=+
***
Input parameters:
- file in `GTF` format
- reference genome file in `FASTA` format
- *feature* of the sequences to reconstruct: `exon` to reconstruct the transcripts, or `CDS` to reconstruct the coding sequences
***
Requirements:
- a function `format_fasta()` must be defined that takes a `FASTA` header and a sequence as arguments and returns the sequence in FASTA format, split into lines of 80 characters.
- a function `reverse_complement()` must be defined that takes a nucleotide sequence as its argument and returns its reverse complement.
- a function `compose_feature()` must be defined that takes as arguments a list of *features* as *(start, end)* tuples, the reference genome, and the strand of the reference gene, and concatenates the sequences of the *features*, applying the reverse complement if the strand is `-`.
**NOTE**: the attributes in the ninth field of a `GTF` file do not appear in a fixed order within the field. To extract a given attribute you must therefore use a regular expression rather than the `split()` method.
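As an illustration of such an extraction, here is a minimal sketch that pulls a single attribute out of the ninth field regardless of attribute order (the example field follows the usual GTF `key "value";` convention; the identifiers are taken from the header examples above):

```python
import re

def get_attribute(attributes_field, name):
    """Extract one attribute value (e.g. gene_name or transcript_id) from
    the ninth GTF field, regardless of the order of the attributes."""
    match = re.search(name + r'\s+"([^"]+)"', attributes_field)
    return match.group(1) if match else None

field = 'gene_id "U52112.4"; transcript_id "U52112.4-003"; gene_name "ARHGAP4";'
print(get_attribute(field, 'gene_name'))      # ARHGAP4
print(get_attribute(field, 'transcript_id'))  # U52112.4-003
```

A `split()`-based approach would break as soon as two files list the attributes in a different order, which is exactly what the regular expression avoids.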
***
## Solution
Import the `re` module to use regular expressions.
```
import re
```
### Defining the `format_fasta()` function
The function takes as arguments a string containing a `FASTA` *header* and a (nucleotide or protein) sequence, and returns the sequence in `FASTA` format split into lines of 80 characters.
```
def format_fasta(header, sequence):
    return header + '\n' + '\n'.join(re.findall(r'\w{1,80}', sequence))
```
**NOTE**: assume that the *header* passed to the function already starts with the `>` symbol, but does not end with `\n`.
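As a quick check of the 80-character wrapping (repeating the definition above so the sketch runs on its own, with a made-up header):

```python
import re

def format_fasta(header, sequence):
    return header + '\n' + '\n'.join(re.findall(r'\w{1,80}', sequence))

# A 200-nucleotide dummy sequence wraps into lines of 80, 80 and 40 characters
fasta = format_fasta('>TEST; len=200', 'a' * 200)
lines = fasta.split('\n')
print([len(l) for l in lines[1:]])  # [80, 80, 40]
```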
### Definition of the `reverse_complement()` function
The function takes a string containing a nucleotide sequence as argument and returns the *reverse and complement* version of the sequence.
```
def reverse_complement(sequence):
    sequence = sequence.lower()
    sequence = sequence[::-1]
    complement = {'a':'t', 't':'a', 'c':'g', 'g':'c'}
    return ''.join([complement[c] for c in sequence])

reverse_complement('aaattt')
```
**NOTE**: make the function independent of the case (upper or lower) of the input sequence.
### Definition of the `compose_feature()` function
The function takes as arguments a list of *features* as *(start, end)* tuples, the reference genomic sequence and the strand of the gene; it concatenates the *feature* sequences, applying the reverse complement if the strand of the gene is `-`.
```
def compose_feature(feature_list, reference_sequence, strand):
    reconstructed_sequence = ''.join(reference_sequence[f[0]-1:f[1]] for f in sorted(feature_list))
    if strand == '-':
        reconstructed_sequence = reverse_complement(reconstructed_sequence)
    return reconstructed_sequence
```
**NOTE**: always concatenate the *feature* sequences in order of increasing coordinates.
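A small self-contained check of the coordinate handling (repeating the two definitions above, with a toy 12-base reference): sorting guarantees the concatenation order even when the feature list arrives unsorted.

```python
def reverse_complement(sequence):
    complement = {'a': 't', 't': 'a', 'c': 'g', 'g': 'c'}
    return ''.join(complement[c] for c in sequence.lower()[::-1])

def compose_feature(feature_list, reference_sequence, strand):
    reconstructed = ''.join(reference_sequence[f[0] - 1:f[1]] for f in sorted(feature_list))
    if strand == '-':
        reconstructed = reverse_complement(reconstructed)
    return reconstructed

reference = 'aaacccgggttt'
features = [(7, 9), (1, 3)]            # deliberately out of order
print(compose_feature(features, reference, '+'))  # aaaggg
print(compose_feature(features, reference, '-'))  # cccttt
```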
### Input parameters
```
gtf_file_name = './input.gtf'
reference_file_name = './ENm006.fa'
feature_name = 'CDS'
```
### Reading the `FASTA` file of the reference genomic sequence
Read the reference genomic file into the list of lines `reference_file_rows`.
```
with open(reference_file_name, 'r') as input_file:
    reference_file_rows = input_file.readlines()
reference_file_rows
```
Build the reference sequence as a single string.
```
genomic_reference = ''.join(reference_file_rows[1:]).rstrip().replace('\n', '')
genomic_reference
```
Alternatively, read the reference genomic file into the single string `reference_file_string`.
```
with open(reference_file_name, 'r') as input_file:
    reference_file_string = input_file.read()
reference_file_string
```
Build the reference sequence as a single string, this time using a regular expression.
```
genomic_reference2 = ''.join(re.findall(r'\n(\w+)', reference_file_string))
genomic_reference2
genomic_reference2 == genomic_reference
```
### Reading the *records* of the `GTF` file
Read the *records* of the `GTF` file into the list of lines `gtf_file_rows`.
```
with open(gtf_file_name, 'r') as input_file:
    gtf_file_rows = input_file.readlines()
gtf_file_rows
```
### Filtering the `GTF` *records* needed to reconstruct the chosen sequence type
Remove from the list `gtf_file_rows` the `GTF` *records* that do not match the *feature* type composing the sequence to be reconstructed, i.e. those whose third field differs from the value of the variable `feature_name` (`exon` to reconstruct the full-length transcripts, `CDS` to reconstruct the coding sequences).
```
gtf_file_rows = [row for row in gtf_file_rows if row.rstrip().split('\t')[2] == feature_name]
gtf_file_rows
```
### Building the strand dictionary and the set of annotated genes
From the previous list, build:
- the strand dictionary:
    - *key*: HUGO name of the gene
    - *value*: strand of the gene (`+` or `-`)
- the set of genes annotated for the sequence type to be reconstructed
**NOTE**: the *strand* value (seventh field of the `GTF` *record*) is constant for a given gene.
Initialize the empty dictionary.
```
strand_dict = {}
```
Iterate over the list of records whose type equals `feature_name` and fill the dictionary.
```
for row in gtf_file_rows:
    strand = row.rstrip().split('\t')[6]
    hugo_name = re.search(r'gene_id\s"(\w+)";', row).group(1)
    strand_dict[hugo_name] = strand
strand_dict
```
Extract the set of annotated genes from the dictionary.
```
gene_set = set(strand_dict)
gene_set
```
### Reconstructing the sequences
Build:
- the dictionary of transcript IDs:
    - *key*: HUGO name of the gene
    - *value*: set of the `transcript_id`s involved in records of type `exon` (to reconstruct the transcripts) or of type `CDS` (to reconstruct the coding sequences)
- the dictionary of feature compositions:
    - *key*: transcript identifier
    - *value*: list of the *(start, end)* tuples of the features (records) that compose the sequence to reconstruct (transcript or coding sequence) for that transcript
Initialize the empty dictionaries.
```
id_dict = {}
composition_dict = {}
```
Iterate over the list `gtf_file_rows` and fill the two dictionaries.
```
for row in gtf_file_rows:
    hugo_name = re.search(r'gene_id\s"(\w+)";', row).group(1)
    transcript_id = re.search(r'transcript_id\s"([^"]+)";', row).group(1)
    feature_start = row.rstrip().split('\t')[3]
    feature_end = row.rstrip().split('\t')[4]
    id_dict_value = id_dict.get(hugo_name, set())
    id_dict_value.add(transcript_id)
    id_dict.update([(hugo_name, id_dict_value)])
    composition_dict_value = composition_dict.get(transcript_id, list())
    composition_dict_value.append((int(feature_start), int(feature_end)))
    composition_dict.update([(transcript_id, composition_dict_value)])
```
**NOTE**: a regular expression similar to the one used to extract the HUGO name, i.e. `'transcript_id\s+"(\w+)";'`, cannot be used to extract the transcript ID from the *record*, because the ID also contains the dot symbol `.`, which does not belong to the word-character class represented by `\w`. It is therefore better to use the regular expression `'transcript_id\s+"([^"]+)";'`.
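The difference between the two regular expressions can be seen directly on a sample attribute field (made up for this sketch):

```python
import re

record_attrs = 'gene_id "ARHGAP4"; transcript_id "U52112.4-003";'

# '\w+' stops at the '.' inside the ID, so only a prefix is captured
partial = re.search(r'transcript_id\s+"(\w+)', record_attrs).group(1)
# '[^"]+' captures everything up to the closing quote
full = re.search(r'transcript_id\s+"([^"]+)"', record_attrs).group(1)
print(partial)  # U52112
print(full)     # U52112.4-003
```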
```
id_dict
composition_dict
```
From the previous dictionaries, build the list of *(header, sequence)* tuples in which the first element is the `FASTA` *header* and the second element is the reconstructed sequence.
The *header* must look like:
>ARHGAP4; U52112.4-003; len=3235; type=transcript; strand=-
if you chose to reconstruct the full-length transcripts, and:
>ARHGAP4; U52112.4-005; len=642; type=cds; strand=-
if you chose to reconstruct the coding sequences (CDS).
```
sequence_fasta_list = []
sequence_type = {'exon' : 'transcript', 'CDS' : 'cds'}
for hugo_name in id_dict:
    for transcript_id in id_dict[hugo_name]:
        r_sequence = compose_feature(composition_dict[transcript_id], genomic_reference, strand_dict[hugo_name])
        header = '>' + hugo_name + '; ' + transcript_id + '; len=' + str(len(r_sequence)) + '; type=' + sequence_type[feature_name] + '; strand=' + strand_dict[hugo_name]
        sequence_fasta_list.append((header, r_sequence))
sequence_fasta_list
```
Turn the list of tuples into a list of sequences in `FASTA` format.
```
sequence_fasta_list = [format_fasta(t[0], t[1]) for t in sequence_fasta_list]
for seq in sequence_fasta_list:
    print(seq)
```
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
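A toy numerical sketch of this effect (illustrative values only, not part of the assignment): backpropagating through many layers multiplies the gradient by roughly one weight-scale factor per layer, so factors below 1 shrink it exponentially.

```python
# With a per-layer factor of 0.5, the gradient reaching layer 1 of a
# 50-layer network is 0.5 ** 50, about 8.9e-16 (numerically zero)
factor = 0.5
grad = 1.0
for _ in range(50):
    grad *= factor
print(grad)
```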
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
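A minimal NumPy sketch of why the identity is easy to learn (hypothetical function names, not assignment code): the block computes $F(x) + x$, so if the main path $F$ learns to output zeros, the block passes its input through unchanged.

```python
import numpy as np

def residual_block(x, F):
    # Output is F(x) + x: the shortcut adds the input back to the main path
    return F(x) + x

x = np.array([1.0, 2.0, 3.0])
zero_mapping = lambda a: np.zeros_like(a)  # a main path that has learned "do nothing"
print(residual_block(x, zero_mapping))     # identical to x
```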
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here are the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [See reference](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [See reference](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

# tensorflow is needed for the test below
import tensorflow as tf

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the height and width of the activation by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
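The shape arithmetic can be checked with the standard convolution output-size formula (the helper name `conv_output_size` is made up for this sketch):

```python
def conv_output_size(n, f, stride, pad=0):
    # Standard formula: floor((n + 2*pad - f) / stride) + 1
    return (n + 2 * pad - f) // stride + 1

# A 1x1 convolution with stride 2 halves a 64x64 activation to 32x32
print(conv_output_size(64, f=1, stride=2))         # 32
# The same formula applies to conv1: 7x7, stride 2, after (3,3) zero-padding
print(conv_output_size(64, f=7, stride=2, pad=3))  # 32
```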
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv Hint](https://keras.io/layers/convolutional/#conv2d)
- [BatchNorm Hint](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Addition Hint](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(F2, (f, f), strides = (1,1), padding='same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

# tensorflow is needed for the test below
import tensorflow as tf

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
    - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
    - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
    - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
    - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
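As a sanity check (a quick count, not assignment code), the "50" counts the weighted layers: conv1, three CONV layers per block, and the final dense layer; the shortcut convolutions are conventionally not counted.

```python
identity_blocks = {2: 2, 3: 3, 4: 5, 5: 2}  # identity blocks per stage
conv_blocks = 4                              # one convolutional block per stage 2-5
blocks = conv_blocks + sum(identity_blocks.values())  # 16 blocks in total
layers = 1 + 3 * blocks + 1                  # conv1 + 3 convs per block + fc
print(layers)  # 50
```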
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here are some other functions used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully conected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###

    # Stage 3 (≈4 lines)
    X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2)
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')

    # Stage 4 (≈6 lines)
    X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')

    # Stage 5 (≈3 lines)
    X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D((2, 2), name='avg_pool')(X)

    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs = X_input, outputs = X, name='ResNet50')

    return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
```
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
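`convert_to_one_hot` is provided by `resnets_utils`; a minimal NumPy equivalent (an assumption about its behavior, consistent with the `.T` calls above) could look like:

```python
import numpy as np

def one_hot(labels, num_classes):
    # Rows index classes, columns index examples (hence the transpose above)
    return np.eye(num_classes)[labels.reshape(-1)].T

labels = np.array([0, 2, 5])
print(one_hot(labels, 6).shape)  # (6, 3)
```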
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
**Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
**Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model for only two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code for only a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take about 1 minute to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = plt.imread(img_path)  # scipy.misc.imread was removed in SciPy 1.2+ (assumes matplotlib.pyplot imported as plt)
plt.imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
<font color='blue'>
**What you should remember:**
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the vanishing gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
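As a rough numpy sketch (illustrative only, not the Keras implementation used in this assignment), a residual block computes F(x) + x, so learning the identity only requires the main path F to output zeros:

```python
import numpy as np

def residual_block(x, weights):
    """Toy residual block: main path F(x) followed by a skip connection.

    If the main-path weights are all zero, F(x) is zero and the block
    reduces to the identity mapping -- the property that makes very
    deep ResNets trainable.
    """
    f_x = np.maximum(0.0, x @ weights)  # main path: linear layer + ReLU
    return f_x + x                      # skip connection adds the input back

x = np.array([1.0, -2.0, 3.0])
print(residual_block(x, np.zeros((3, 3))))  # identical to x: the identity function
```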
### References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also takes significant inspiration from, and follows the structure of, the GitHub repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
# Assignment 3 - Practical Deep Learning Workshop
#### In this task we will work with the dataset of the Home Depot product search relevance competition.
#### Some background:
In this competition, Home Depot asks participants to help improve their customers' shopping experience by developing a model that can accurately predict the relevance of search results.
Search relevancy is an implicit measure Home Depot uses to gauge how quickly they can get customers to the right products.
This data set contains a number of products and real customer search terms from Home Depot's website. The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
The relevance is a number between 1 (not relevant) to 3 (highly relevant). For example, a search for "AA battery" would be considered highly relevant to a pack of size AA batteries (relevance = 3), mildly relevant to a cordless drill battery (relevance = 2), and not relevant to a snow shovel (relevance = 1).
Each pair was evaluated by at least three human raters. The provided relevance scores are the average value of the ratings. There are three additional things to know about the ratings:
- The specific instructions given to the raters are provided in relevance_instructions.docx.
- Raters did not have access to the attributes.
- Raters had access to product images, while the competition does not include images.
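As a quick illustration of the averaging (the rater scores below are hypothetical):

```python
# One search/product pair scored by three raters on the 1-3 scale
ratings = [3, 2, 2]
relevance = sum(ratings) / len(ratings)
print(round(relevance, 2))  # 2.33 -- a non-integer label between 1 and 3
```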
#### Our task here is to predict the relevance for each pair listed in the test set. The test set contains both seen and unseen search terms.
```
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model, Sequential
from keras.layers import * # Dense, Embedding, LSTM
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from keras.regularizers import l2
import re
import time  # used below to measure training time
import pandas as pd
import numpy as np
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
%matplotlib inline
from google.colab import drive
drive.mount('/content/gdrive')
```
#### First of all, we'll take a look at the data in each dataset of the input:
train.csv is the training set, contains products, searches, and relevance scores.
```
train = pd.read_csv('/content/gdrive/My Drive/Colab Notebooks/input/train.csv',encoding='latin1')
train.head()
```
test.csv is the test set, contains products and searches. We will need to predict the relevance for these pairs.
```
test = pd.read_csv('/content/gdrive/My Drive/Colab Notebooks/input/test.csv',encoding='latin1')
test.head()
```
product_descriptions.csv contains a text description of each product. We may join this table to the training or test set via the product_uid.
```
product_descriptions = pd.read_csv('/content/gdrive/My Drive/Colab Notebooks/input/product_descriptions.csv',encoding='latin1')
product_descriptions.head()
```
attributes.csv provides extended information about a subset of the products (typically representing detailed technical specifications). Not every product will have attributes.
```
attributes = pd.read_csv('/content/gdrive/My Drive/Colab Notebooks/input/attributes.csv',encoding='latin1')
attributes.head()
```
Data fields:
- id - a unique Id field which represents a (search_term, product_uid) pair
- product_uid - an id for the products
- product_title - the product title
- product_description - the text description of the product (may contain HTML content)
- search_term - the search query
- relevance - the average of the relevance ratings for a given id
- name - an attribute name
- value - the attribute's value
## Preprocessing the data
We would like to have the products' corresponding product description, so we will merge the train and test datasets with the product_description table.
Note: in order to decrease the dimensionality of the text, we will lowercase it.
```
mergedTrain = pd.merge(train, product_descriptions, how='inner', on='product_uid')
mergedTrain.search_term = mergedTrain.search_term.apply(lambda x: x.lower())
mergedTrain.product_description = mergedTrain.product_description.apply(lambda x: x.lower())
mergedTrain.head()
mergedTest= pd.merge(test, product_descriptions, how='inner', on='product_uid')
mergedTest.search_term = mergedTest.search_term.apply(lambda x: x.lower())
mergedTest.product_description = mergedTest.product_description.apply(lambda x: x.lower())
mergedTest.head()
```
We convert the product_description and search_term attributes' values to lists of characters.
```
# Flatten every search term / product description into one list of characters
search_term_chars = [ch for term in mergedTrain.search_term for ch in term]
product_description_chars = [ch for desc in mergedTrain.product_description for ch in desc]
```
We then translate the characters to unique integer values. We create two dictionaries (one for search_term and another for product_description) containing the pairs of characters and their unique values.
```
search_term_char_set = sorted(set(search_term_chars))
product_description_char_set = sorted(set(product_description_chars))
# translate from character to number, it's enumerator
search_term_char_to_int = dict((c, i) for i, c in enumerate(search_term_char_set))
search_term_int_to_char = dict((i, c) for i, c in enumerate(search_term_char_set))
product_description_char_to_int = dict((c, i) for i, c in enumerate(product_description_char_set))
product_description_int_to_char = dict((i, c) for i, c in enumerate(product_description_char_set))
# summarize the loaded data
n_chars = len(search_term_chars)
n_vocab = len(search_term_char_set)
print("search_term Total Characters: ", n_chars)
print("search_term Total Vocab: ", n_vocab)
n_chars2 = len(product_description_chars)
n_vocab2 = len(product_description_char_set)
print("product_description Total Characters: ", n_chars2)
print("product_description Total Vocab: ", n_vocab2)
mergedTrain.search_term = mergedTrain.search_term.apply(lambda x: list(x))
mergedTrain.product_description = mergedTrain.product_description.apply(lambda x: list(x))
mergedTrain.head()
```
We would like to turn the search_term and the product_description into sequences of unique integers.
```
def createData(char_to_int, char_arr):
#seq_length = 100
dataX = []
for i in range(0,len(char_arr)):
dataX.append(char_to_int[char_arr[i]])
return np.asarray(dataX)
mergedTrain.search_term = mergedTrain.search_term.apply(lambda x: createData(search_term_char_to_int, x))
mergedTrain.product_description = mergedTrain.product_description.apply(lambda x: createData(product_description_char_to_int, x))
mergedTrain.head()
```
## The target value - relevance
Each pair was evaluated by at least three human raters, and the provided relevance scores are the average value of the ratings. Thus, we would like to see the number of unique values between 1 and 3. There are 13 unique relevance values in the data sample. We could address the problem as a classification problem, but since we care about the distance from the true relevance value, we treat this as a regression problem.
```
plt.hist(np.unique(mergedTrain.relevance.values),density=True, histtype='bar')
plt.show()
np.unique(mergedTrain.relevance.values).size
```
In order to predict the relevance values we need to preprocess them, rescaling the range from 1 - 3 to 0 - 1. We also want to inspect the maximum length of each column (search_term and product_description): since the two inputs are united at a specified part of the network to enable predictions based on both of them, their sequences must have the same length. We therefore look at the maximum sequence sizes in each column to find a value that keeps enough data from both.
```
from sklearn import preprocessing
target = mergedTrain['relevance'].values
min_max_scaler = preprocessing.MinMaxScaler()
Y = min_max_scaler.fit_transform(target.reshape(-1, 1))
Y[:5]
X1 = mergedTrain['search_term'].values
X2 = mergedTrain['product_description'].values
search_terms_lens = []
for element in mergedTrain['search_term'].values:
search_terms_lens.append(len(element))
product_description_lens = []
for element in mergedTrain['product_description'].values:
product_description_lens.append(len(element))
max_length1 = max(search_terms_lens)
max_length2 = max(product_description_lens)
```
After trying a few options, we choose the maximum length of the sequences to be 75 integers, in order to yield better results. Shorter sequences will be padded to meet this length.
```
max_length = 75
def padding(seq, length):
ans = []
for i in range(0,min(len(seq),length)):
ans.append(seq[i])
if len(seq) <= length:
for i in range(0,length-len(seq)):
ans.append(0)
return ans
X1 = np.asarray([padding(x,max_length) for x in X1])
X2 = np.asarray([padding(x,max_length) for x in X2])
X1 = X1.reshape(X1.shape[0],X1.shape[1],1)
X2 = X2.reshape(X2.shape[0],X2.shape[1],1)
X1 = X1.astype(np.float32)
X2 = X2.astype(np.float32)
print(X1.shape)
print(X2.shape)
```
This is the input that we insert into the model.
## Building the model
Here we create a siamese model. A siamese neural network is a class of neural network architectures that contain two or more identical subnetworks and a contrastive loss function joining them. Identical here means they have the same configuration with the same parameters and weights; parameter updates are mirrored across both subnetworks.
Our model also includes an LSTM and a Dense layers.
As for the distance function, we will use the negative Manhattan distance. The Manhattan distance between two items is the sum of the absolute differences of their corresponding components.
Our optimizer here is Adadelta, which is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done.
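The prediction layer defined in the next cell turns the Manhattan distance d between the two subnetwork outputs into a similarity exp(-d) in (0, 1]. A small numpy illustration (the vectors are made up):

```python
import numpy as np

def manhattan_similarity(a, b):
    """exp(-L1 distance): 1.0 for identical vectors, approaching 0 as they diverge."""
    return np.exp(-np.sum(np.abs(a - b)))

a = np.array([0.2, 0.5, 0.1])
print(manhattan_similarity(a, a))                          # 1.0 (identical vectors)
print(manhattan_similarity(a, np.array([0.9, 0.5, 0.1])))  # exp(-0.7), roughly 0.497
```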
```
st_input = Input(shape=(max_length,1), name='st_input',dtype='float32')
pd_input = Input(shape=(max_length,1), name='pd_input',dtype='float32')
def createModel():
model = Sequential()
model.add(LSTM(40))
model.add(Dense(64, activation='relu'))
return model
from keras.optimizers import Adadelta
st_model = createModel()
pd_model = createModel()
def createSiameseModel(model1,model2,customLoss):
out = Lambda(function=lambda x: K.exp(-K.sum(K.abs(x[0]-x[1]), axis=1, keepdims=True)),
output_shape=lambda x: (x[0][0], 1),
name='prediction')([model1(st_input), model2(pd_input)])
siamese_net = Model(input=[st_input,pd_input],output=[out])
siamese_net.compile(loss=customLoss,optimizer=Adadelta(lr=1.0, rho=0.95,clipnorm=1.20))
return siamese_net
siamese_net1 = createSiameseModel(st_model,pd_model,'mse')
siamese_net2 = createSiameseModel(st_model,pd_model,'mae')
st_model.summary()
siamese_net1.summary()
```
We have a good amount of trainable parameters.
```
X1_train,X1_val,X2_train,X2_val,Y_train, Y_val = train_test_split(X1,X2,Y,test_size = 0.2)
```
We split the data into train and validation/test sets. We choose the validation to be 20% of the entire data.
We save the best model weights, in order to use them later for feature extraction without needing to train the model again.
```
from keras.callbacks import *
path = 'gdrive/My Drive/Colab Notebooks'
def set_callbacks(description='run1',patience=15,tb_base_logdir='./logs/'):
cp = ModelCheckpoint(path + '/best_model_weights_char.h5'.format(description),save_best_only=True)
rlop = ReduceLROnPlateau(patience=5)
cb = [cp,rlop]
return cb
```
### Here we train the model:
We trained the model for 5 epochs here, which is a relatively small number. The reason is that in our previous attempts, when training the model for more epochs, we didn't see a significant improvement in the accuracy of the model.
```
start = time.time()
history = siamese_net1.fit([X1_train,X2_train],Y_train,batch_size=1024, epochs=5, verbose=1, validation_data=([X1_val,X2_val],Y_val), callbacks=set_callbacks())
end = time.time()
total_time = end - start
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
val_preds = siamese_net1.predict([X1_val,X2_val])
train_preds = siamese_net1.predict([X1_train,X2_train])
plt.hist(val_preds,density=True, histtype='bar')
plt.show()
plt.hist(Y_val,density=True, histtype='bar')
plt.show()
```
We can see that the model predicted values around the average mark.
```
resultsTable = pd.DataFrame(columns=['model','runtime','TrainRMSE','ValRMSE','TestRMSE','TrainMAE','ValMAE','TestMAE'])
def addToTable(modelName,runtime,train_rmse,val_rmse,test_rmse,train_mae,val_mae,test_mae):
    return resultsTable.append({'model': modelName,'runtime': runtime,'TrainRMSE': train_rmse,'ValRMSE': val_rmse,
                                'TestRMSE': test_rmse,'TrainMAE': train_mae,'ValMAE': val_mae,'TestMAE': test_mae},ignore_index=True)
```
Let's run the model on the test samples. In order to do that, we need to repeat the preprocessing and normalization process on the test data set as well.
```
search_term_chars2 = []
product_description_chars2 = []
search_term_chars2 = mergedTest.search_term.apply(lambda x: search_term_chars2 + list(x))
product_description_chars2 = mergedTest.product_description.apply(lambda x: product_description_chars2 + list(x))
search_term_chars2 = [item for sublist in search_term_chars2 for item in sublist]
product_description_chars2 = [item for sublist in product_description_chars2 for item in sublist]
search_term_char_set2 = sorted(set(search_term_chars2))
product_description_char_set2 = sorted(set(product_description_chars2))
# translate from character to number, it's enumerator
search_term_char_to_int2 = dict((c, i) for i, c in enumerate(search_term_char_set2))
search_term_int_to_char2 = dict((i, c) for i, c in enumerate(search_term_char_set2))
product_description_char_to_int2 = dict((c, i) for i, c in enumerate(product_description_char_set2))
product_description_int_to_char2 = dict((i, c) for i, c in enumerate(product_description_char_set2))
mergedTest.search_term = mergedTest.search_term.apply(lambda x: list(x))
mergedTest.product_description = mergedTest.product_description.apply(lambda x: list(x))
mergedTest.search_term = mergedTest.search_term.apply(lambda x: createData(search_term_char_to_int2, x))
mergedTest.product_description = mergedTest.product_description.apply(lambda x: createData(product_description_char_to_int2, x))
mergedTest.head()
X1_test = mergedTest.search_term.values
X2_test = mergedTest.product_description.values
X1_test = np.asarray([padding(x,max_length) for x in X1_test])
X2_test = np.asarray([padding(x,max_length) for x in X2_test])
X1_test = X1_test.reshape(X1_test.shape[0],X1_test.shape[1],1)
X2_test = X2_test.reshape(X2_test.shape[0],X2_test.shape[1],1)
test_preds = siamese_net1.predict([X1_test,X2_test])
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import mean_squared_error as mse
resultsTable = addToTable('CHAR_SiameseNetwork',total_time,mse(train_preds,Y_train),mse(val_preds,Y_val),'-',mae(train_preds,Y_train),mae(val_preds,Y_val),'-')
resultsTable
```
We calculated the RMSE and the MAE between the prediction and the true value in each one of the training, validation and test parts of the dataset.
Note: We could not find the true results of the test data samples, so we could not calculate the MAE and RMSE on these samples.
Regardless, we did submit our results on the test set to the Kaggle competition as a late submission in order to get some indication of our results. The RMSE we got was 0.56, while the best score on the leaderboard is 0.43192.
## ML Benchmark
Let's create a benchmark model to compare against our model's results. We perform a similar character-embedding process to the one in our model, but this time we will use the sklearn vectorizer. The benchmark model that we will use is the Random Forest Regressor.
```
mergedTrain2 = pd.merge(train, product_descriptions, how='inner', on='product_uid')
mergedTrain2.head()
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(encoding='latin-1', analyzer='char')
vectorizer.fit(mergedTrain2['search_term'])
mltrain_x, mlval_x, mltrain_y, mlval_y = train_test_split(mergedTrain2['search_term'].values,mergedTrain2['relevance'].values, test_size = 0.2)
train_x_count = vectorizer.transform(mltrain_x)
val_x_count = vectorizer.transform(mlval_x)
from sklearn import model_selection, preprocessing, linear_model, naive_bayes, metrics, svm,ensemble
ml = ensemble.RandomForestRegressor()
start_time = time.time()
ml.fit(train_x_count, mltrain_y)
end_time = time.time()
total_time = end_time - start_time
ml_train_preds = ml.predict(train_x_count)
ml_val_preds = ml.predict(val_x_count)
print(ml_val_preds.shape)
resultsTable = addToTable('CHAR_RandomForestBenchmark',total_time,mse(ml_train_preds,mltrain_y),mse(ml_val_preds,mlval_y),'-',mae(ml_train_preds,mltrain_y),mae(ml_val_preds,mlval_y),'-')
resultsTable
plt.hist(ml_val_preds,density=True, histtype='bar')
plt.show()
plt.hist(mlval_y,density=True, histtype='bar')
plt.show()
```
The benchmark model performed better than our siamese model, which shows that our model is not yet achieving good enough results.
Here are some possible ways to improve the model:
* The intermediate layers are still not tuned with the right hyperparameters, e.g. the most precise number of LSTM units or the number of outputs of the Dense layer. We tried higher values, but the model appeared to overfit with a large number of LSTM units.
* Padding may introduce some imbalance in the data: the search_term sequences are much shorter than the product_description ones, so we need to choose the right number of characters per sequence, or change the value that we pad with in the padding function (currently 0).
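As a sketch of the second point, a padding helper could expose the pad value and truncation side as parameters (the pad token and the 'pre' option are assumptions, not part of the pipeline above):

```python
def pad_sequence(seq, length, pad_value=0, truncate='post'):
    """Pad or truncate a sequence to a fixed length.

    pad_value lets us use a token that does not collide with a real
    character/word id; truncate='pre' keeps the *end* of long sequences,
    which may matter for long product descriptions.
    """
    if len(seq) >= length:
        return seq[:length] if truncate == 'post' else seq[-length:]
    return list(seq) + [pad_value] * (length - len(seq))

print(pad_sequence([5, 7, 9], 5, pad_value=-1))             # [5, 7, 9, -1, -1]
print(pad_sequence([1, 2, 3, 4, 5, 6], 4, truncate='pre'))  # [3, 4, 5, 6]
```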
## Feature Extraction
We want to check the feature extraction abilities of the model by taking the outputs of its last layers (the processed search_term and product_description inputs), concatenating them, and feeding the result to ML models, so we can see the RMSE and MAE obtained with the features from our network.
The machine learning models we use are the Random Forest model and the Linear Regression model from sklearn.
```
fe_st_input = Input(shape=(max_length,1), name='st_input',dtype='float32')
fe_pd_input = Input(shape=(max_length,1), name='pd_input',dtype='float32')
input_layer1 = siamese_net1.layers[0].input[0]
input_layer2 = siamese_net1.layers[1].input[0]
fe_st_model = createModel()
fe_pd_model = createModel()
output_layer1 = siamese_net1.layers[3].get_output_at(0)
output_layer2 = siamese_net1.layers[3].get_output_at(1)
output_fn = K.function([st_input, pd_input], [output_layer1, output_layer2])
def extractFeatures(model1,model2,customLoss):
out = concatenate([model1(fe_st_input), model2(fe_pd_input)])
siamese_net = Model(input=[fe_st_input,fe_pd_input],output=[out])
siamese_net.load_weights(path + '/best_model_weights_char.h5')
siamese_net.compile(loss=customLoss,optimizer=Adadelta(lr=1.0, rho=0.95,clipnorm=1.20))
return siamese_net
fe_model = extractFeatures(fe_st_model,fe_pd_model,'mse')
fe_train_features = fe_model.predict([X1_train,X2_train])
fe_val_features = fe_model.predict([X1_val,X2_val])
fe_test_features = fe_model.predict([X1_test,X2_test])
randomForest = ensemble.RandomForestRegressor()
start_time = time.time()
randomForest.fit(fe_train_features, Y_train)
end_time = time.time()
total_time = end_time - start_time
fe_train_preds = randomForest.predict(fe_train_features)
fe_val_preds = randomForest.predict(fe_val_features)
resultsTable = addToTable('FE_RandomForest_CHAR',total_time,mse(fe_train_preds,Y_train),mse(fe_val_preds,Y_val),'-',mae(fe_train_preds,Y_train),mae(fe_val_preds,Y_val),'-')
linear = linear_model.LinearRegression()
start_time = time.time()
linear.fit(fe_train_features, Y_train)
end_time = time.time()
total_time = end_time - start_time
fe_train_preds2= linear.predict(fe_train_features)
fe_val_preds2 = linear.predict(fe_val_features)
resultsTable = addToTable('FE_LinearRegression_CHAR',total_time,mse(fe_train_preds2,Y_train),mse(fe_val_preds2,Y_val),'-',mae(fe_train_preds2,Y_train),mae(fe_val_preds2,Y_val),'-')
resultsTable
```
We see that the feature extraction ML models had pretty much the same performance as the siamese network. This means that the inaccuracy of our model lies in the feature extraction phase; by making the improvements listed above we might achieve a better score.
# Word Level Embedding
We want to repeat the process, but this time do the embedding on a word level. Each word, instead of each character as in the last part, will get a unique value in a sequence of values that will be fed to a similar siamese network and evaluated in the same manner as the character embedding.
## Data Preprocessing
Similarly, we find the number of unique words available in the search_term and product_description samples, and create dictionaries for each of them to convert the texts into sequences of unique values for the model to train and predict on.
```
mergedTrain = pd.merge(train, product_descriptions, how='inner', on='product_uid')
mergedTrain.search_term = mergedTrain.search_term.apply(lambda x: x.lower())
mergedTrain.product_description = mergedTrain.product_description.apply(lambda x: x.lower())
mergedTrain.head()
mergedTest= pd.merge(test, product_descriptions, how='inner', on='product_uid')
mergedTest.search_term = mergedTest.search_term.apply(lambda x: x.lower())
mergedTest.product_description = mergedTest.product_description.apply(lambda x: x.lower())
mergedTest.head()
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
st_words = []
for term in mergedTrain.search_term.values:
for word in word_tokenize(term):
st_words.append(word)
st_word_set = sorted(set(st_words))
st_dict = dict((c, i) for i, c in enumerate(st_word_set))
pd_words = []
for term in mergedTrain.product_description.values:
for word in word_tokenize(term):
pd_words.append(word)
pd_word_set = sorted(set(pd_words))
pd_dict = dict((c, i) for i, c in enumerate(pd_word_set))
st_words2 = []
for term in mergedTest.search_term.values:
for word in word_tokenize(term):
st_words2.append(word)
st_word_set2 = sorted(set(st_words2))
st_dict2 = dict((c, i) for i, c in enumerate(st_word_set2))
pd_words2 = []
for term in mergedTest.product_description.values:
for word in word_tokenize(term):
pd_words2.append(word)
pd_word_set2 = sorted(set(pd_words2))
pd_dict2 = dict((c, i) for i, c in enumerate(pd_word_set2))
mergedTrain.search_term = mergedTrain.search_term.apply(lambda x: createData(st_dict, word_tokenize(x)))
mergedTrain.product_description = mergedTrain.product_description.apply(lambda x: createData(pd_dict, word_tokenize(x)))
mergedTrain.head()
mergedTest.search_term = mergedTest.search_term.apply(lambda x: createData(st_dict2, word_tokenize(x)))
mergedTest.product_description = mergedTest.product_description.apply(lambda x: createData(pd_dict2, word_tokenize(x)))
mergedTest.head()
```
## Data Normalization
We normalize the target relevance to the 0 - 1 range, as in the first part. We limit the word sequences to 50 words, in the same manner as the character sequences.
```
target = mergedTrain['relevance'].values
min_max_scaler = preprocessing.MinMaxScaler()
Y = min_max_scaler.fit_transform(target.reshape(-1, 1))
Y[:5]
X1 = mergedTrain['search_term'].values
X2 = mergedTrain['product_description'].values
search_terms_lens = []
for element in mergedTrain['search_term'].values:
search_terms_lens.append(len(element))
product_description_lens = []
for element in mergedTrain['product_description'].values:
product_description_lens.append(len(element))
max_length1 = max(search_terms_lens)
max_length2 = max(product_description_lens)
max_length = 50
def padding(seq, length):
ans = []
for i in range(0,min(len(seq),length)):
ans.append(seq[i])
if len(seq) <= length:
for i in range(0,length-len(seq)):
ans.append(0)
return ans
X1 = np.asarray([padding(x,max_length) for x in X1])
X2 = np.asarray([padding(x,max_length) for x in X2])
X1 = X1.reshape(X1.shape[0],X1.shape[1],1)
X2 = X2.reshape(X2.shape[0],X2.shape[1],1)
X1_test = mergedTest.search_term.values
X2_test = mergedTest.product_description.values
X1_test = np.asarray([padding(x,max_length) for x in X1_test])
X2_test = np.asarray([padding(x,max_length) for x in X2_test])
X1_test = X1_test.reshape(X1_test.shape[0],X1_test.shape[1],1)
X2_test = X2_test.reshape(X2_test.shape[0],X2_test.shape[1],1)
```
## Model Fitting + Predictions
The model is created in the same manner as in the first part. The only difference is that the inputs are now embedded word sequences of the data samples.
```
def set_callbacks2(description='run1',patience=15,tb_base_logdir='./logs/'):
cp = ModelCheckpoint(path + '/best_model_weights_word.h5'.format(description),save_best_only=True)
rlop = ReduceLROnPlateau(patience=5)
cb = [cp,rlop]
return cb
st_input = Input(shape=(max_length,1), name='st_input')
pd_input = Input(shape=(max_length,1), name='pd_input')
def createModel():
model = Sequential()
model.add(LSTM(60))
model.add(Dense(140, activation='relu'))
return model
st_model3 = createModel()
pd_model3 = createModel()
def createSiameseModel(model1,model2,customLoss):
out = Lambda(function=lambda x: K.exp(-K.sum(K.abs(x[0]-x[1]), axis=1, keepdims=True)),
output_shape=lambda x: (x[0][0], 1),
name='prediction')([model1(st_input), model2(pd_input)])
siamese_net = Model(input=[st_input,pd_input],output=[out])
siamese_net.compile(loss=customLoss,optimizer=Adadelta(lr=1.0, rho=0.95,clipnorm=1.20))
return siamese_net
siamese_net3 = createSiameseModel(st_model3,pd_model3,'mse')
siamese_net4 = createSiameseModel(st_model3,pd_model3,'mae')
siamese_net3.summary()
X1_train,X1_val,X2_train,X2_val,Y_train, Y_val = train_test_split(X1,X2,Y,test_size = 0.2)
start = time.time()
history3 = siamese_net3.fit([X1_train,X2_train],Y_train,batch_size=1024, epochs=5, verbose=1, validation_data=([X1_val,X2_val],Y_val), callbacks=set_callbacks2())
end = time.time()
total_time = end - start
val_preds = siamese_net3.predict([X1_val,X2_val])
train_preds = siamese_net3.predict([X1_train,X2_train])
test_preds = siamese_net3.predict([X1_test,X2_test])
plt.plot(history3.history['loss'], label='train')
plt.plot(history3.history['val_loss'], label='test')
plt.legend()
plt.show()
plt.hist(val_preds,density=True, histtype='bar')
plt.show()
plt.hist(Y_val,density=True, histtype='bar')
plt.show()
resultsTable = addToTable('WORD_SiameseNetwork',total_time,mse(train_preds,Y_train),mse(val_preds,Y_val),'-',mae(train_preds,Y_train),mae(val_preds,Y_val),'-')
resultsTable
```
We see that the word model outperformed the character model, but only slightly.
## Feature Extraction - Word Level
Again, we want to check the feature extraction capabilities of the word model by feeding the features that the model finds to classic ML models, to see their performance with the processed data that our model creates during the learning phase.
```
fe_st_input = Input(shape=(max_length,1), name='st_input',dtype='float32')
fe_pd_input = Input(shape=(max_length,1), name='pd_input',dtype='float32')
input_layer1 = siamese_net3.layers[0].input[0]  # word-level model, not the character-level siamese_net1
input_layer2 = siamese_net3.layers[1].input[0]
fe_st_model = createModel()
fe_pd_model = createModel()
output_layer1 = siamese_net3.layers[3].get_output_at(0)
output_layer2 = siamese_net3.layers[3].get_output_at(1)
output_fn = K.function([st_input, pd_input], [output_layer1, output_layer2])
def extractFeatures(model1,model2,customLoss):
out = concatenate([model1(fe_st_input), model2(fe_pd_input)])
siamese_net = Model(input=[fe_st_input,fe_pd_input],output=[out])
siamese_net.load_weights(path + '/best_model_weights_word.h5')
siamese_net.compile(loss=customLoss,optimizer=Adadelta(lr=1.0, rho=0.95,clipnorm=1.20))
return siamese_net
fe_model = extractFeatures(fe_st_model,fe_pd_model,'mse')
fe_train_features = fe_model.predict([X1_train,X2_train])
fe_val_features = fe_model.predict([X1_val,X2_val])
fe_test_features = fe_model.predict([X1_test,X2_test])
randomForest2 = ensemble.RandomForestRegressor()
start_time = time.time()
randomForest2.fit(fe_train_features, Y_train)
end_time = time.time()
total_time = end_time - start_time
fe_train_preds = randomForest2.predict(fe_train_features)
fe_val_preds = randomForest2.predict(fe_val_features)
resultsTable = addToTable('FE_RandomForest_WORD',total_time,mse(fe_train_preds,Y_train),mse(fe_val_preds,Y_val),'-',mae(fe_train_preds,Y_train),mae(fe_val_preds,Y_val),'-')
linear2 = linear_model.LinearRegression()
start_time = time.time()
linear2.fit(fe_train_features, Y_train)
end_time = time.time()
total_time = end_time - start_time
fe_train_preds2= linear2.predict(fe_train_features)
fe_val_preds2 = linear2.predict(fe_val_features)
resultsTable = addToTable('FE_LinearRegression_WORD',total_time,mse(fe_train_preds2,Y_train),mse(fe_val_preds2,Y_val),'-',mae(fe_train_preds2,Y_train),mae(fe_val_preds2,Y_val),'-')
resultsTable
```
## Test results submission
```
subans = test_preds.reshape(test_preds.shape[0])
subans = min_max_scaler.inverse_transform(subans.reshape(1, -1))
subans = subans.reshape(subans.shape[1])
subans
sub = pd.read_csv("/content/gdrive/My Drive/Colab Notebooks/input/sample_submission.csv")
sub = sub.reset_index(drop=True)  # reset_index returns a new frame, so assign it back
print(sub.relevance.values.shape)
sub['relevance'] = subans
sub.to_csv(path + '/sub.csv')
```
# Summary
To conclude our research, we saw that the word model was slightly better than the character model we created using the siamese network.
We submitted our results on the test set to the Kaggle competition as a late submission in order to get some indication regarding our results. The RMSE we got was 0.55, while the best score on the leaderboard is 0.43192.
| github_jupyter |
# Introduction to Jupyter, Pandas, and Matplotlib
These three software packages are all popular in the world of data science and machine learning.
### Jupyter
Jupyter is the program that's running this "notebook", which is an interactive platform for executing Python code. It's popular because it's an easy way to interactively edit and run Python code, embed charts, and visualize data. Any code that can be run in a Jupyter notebook could also be run via the terminal or any other "runtime environment."
Basic tips for using Jupyter:
* Hit shift+enter to execute the code in a cell.
* Variables are often saved to global scope.
* Be mindful of the order you execute cells in.
* You can easily overwrite or change the data you're working with if you aren't careful!
https://jupyter.org/documentation
### Pandas
Pandas is a library that excels at storing and manipulating data. It also has a wealth of tools built in for performing statistical analysis and making charts. We'll explore the basic features of `pandas` here, but it's a big and powerful library, so feel free to explore the documentation and the myriad tutorials available online.
Think of it as a super spreadsheet inside of Python.
https://pandas.pydata.org/docs/
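A minimal sketch of that "super spreadsheet" idea — the column names and values here are made up for illustration:

```python
import pandas as pd

# A DataFrame is a table: named columns, each with a single data type.
df = pd.DataFrame({
    "borough": ["Bronx", "Queens", "Bronx"],
    "price": [450000, 620000, 380000],
})

# Spreadsheet-style operations are one-liners:
mean_price = df["price"].mean()            # column statistics
bronx_only = df[df["borough"] == "Bronx"]  # row filtering

print(round(mean_price))   # 483333
print(len(bronx_only))     # 2
```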
### Matplotlib
Matplotlib is a powerful charting library for Python. Although it is sometimes confusing and clunky to use, it produces high-quality charts and is very detail-oriented, making it generally possible to produce whatever kind of chart you need. It is by no means the only popular charting/plotting library in Python; feel free to explore others such as Plotly, Seaborn, and ggplot.
Like Pandas, we'll see some basic features of `matplotlib` here, but the library is expansive.
https://matplotlib.org/3.2.1/contents.html
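And a minimal sketch of matplotlib's figure/axes workflow — the `Agg` backend is used here only so the example runs without a display; in a notebook you would just call `plt.show()`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for running outside a notebook
import matplotlib.pyplot as plt

# The usual pattern: make a figure and axes, draw, label, then show or save.
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9], marker="o", label="x squared")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("A minimal chart")
ax.legend()
fig.savefig("minimal_chart.png")
```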
```
# Conventionally, pd and plt are commonly chosen as names to import pandas and pyplot.
# This is not a requirement, but it is very common.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# One of the things we love about pandas is that it's easy to load CSV data
# into a "data frame" which is the
path_to_ny_sales = '../../datasets/nyc-property/nyc-rolling-sales.csv'
sales_df = pd.read_csv(path_to_ny_sales)
# And, it makes it easy to take a look at the first n items:
sales_df.head(5)
# We can also get a summary with bundles of useful information on the numerical fields
sales_df.describe()
# Hmm... we might not have expected "zip code" to have a mean value...
# Lets look at all the columns in our dataframe along with their data types:
# Sometimes we get some unexpected datatypes when loading data
for column_name, data_type in zip(sales_df.columns, sales_df.dtypes):
print(column_name, data_type)
# Lets make things simpler by dropping a few columns:
columns_to_drop = [
'Unnamed: 0', # I still honestly do not know what this column even is.
'TAX CLASS AT PRESENT',
'BLOCK',
'LOT',
'EASE-MENT',
'BUILDING CLASS AT PRESENT',
'TAX CLASS AT TIME OF SALE',
'BUILDING CLASS AT TIME OF SALE',
'BUILDING CLASS CATEGORY'
]
# Note, the drop operation is NOT in place, we have to store the result back into the
# variable if we want to replace the data. We could also make a NEW variable that has
# the data dropped, and maintain the data frame with all the original data.
sales_df = sales_df.drop(columns=columns_to_drop)
# Lets look at the columns now...
for column_name, data_type in zip(sales_df.columns, sales_df.dtypes):
print(column_name, data_type)
# SALE PRICE should be a numeric, but it's an object. Lets see why:
sales_df['SALE PRICE']
# All those dashes are missing data... lets remove records that don't have a known sale price
# note again, these operations are NOT in place.
# First, coerce the column to a numeric type, and give unconvertable values "NA"
sales_df['SALE PRICE'] = pd.to_numeric(sales_df['SALE PRICE'], errors='coerce')
# Second, select the rows where the SALE PRICE is NOT NA
sales_df = sales_df[sales_df['SALE PRICE'].notna()]
sales_df['SALE PRICE']
# Wow, there were nearly 15k missing sales prices!
# Now, lets convert the rest of the columns that ought to be numeric...
columns_to_convert = [
'LAND SQUARE FEET',
'GROSS SQUARE FEET'
]
for column_name in columns_to_convert:
sales_df[column_name] = pd.to_numeric(sales_df[column_name], errors='coerce')
sales_df = sales_df[sales_df[column_name].notna()]
# Lets look at the columns now...
for column_name, data_type in zip(sales_df.columns, sales_df.dtypes):
print(column_name, data_type)
# Now, lets get the summary again:
sales_df.describe()
# Now there are some columns acting as numerical columns that should be
# treated as categorical columns. Lets fix that too:
categorical_columns = [
'BOROUGH',
'ZIP CODE'
]
for c in categorical_columns:
sales_df[c] = sales_df[c].astype('category')
sales_df.describe()
# Lets use matplotlib and pandas to make a correlation matrix, we want to know which numeric
# columns correlate most with the sale price:
# Pandas gives us the correlation matrix
correlation_matrix = sales_df.corr()
# Matplotlib code to display it, along with the names of each field on the axes and a
# colorbar legend:
plt.matshow(correlation_matrix)
plt.xticks(range(len(correlation_matrix.columns)), correlation_matrix.columns, rotation='vertical')
plt.yticks(range(len(correlation_matrix.columns)), correlation_matrix.columns)
plt.colorbar()
plt.show()
# Notice that nothing really correlates super strongly with sale price...
# Gross sq feet is the strongest, in the 0.6 range.
# The truth is: real estate is complex. Location matters, and so do a bunch of other factors!
# Pandas can access matplotlib under the hood with some convenient functions.
# Lets make a box plot and histogram for each numerical column.
# Lets plot two interesting charts:
for column_name, data_type in zip(sales_df.columns, sales_df.dtypes):
if data_type not in ['float', 'int', 'float64', 'int64']: continue
print(column_name)
sales_df.boxplot(column=[column_name])
sales_df.hist(column=[column_name])
# Matplotlib's tight_layout function makes the charts a bit cleaner.
plt.tight_layout()
plt.show()
# Interesting... lots of outliers in this dataset!
# Lets remove anything with an absolute z-score higher than 1 for each column and plot that.
# Note, this code DOES NOT modify sales_df.
for column_name, data_type in zip(sales_df.columns, sales_df.dtypes):
if data_type not in ['float', 'int', 'float64', 'int64']: continue
# This will compute the z_score for each row in the whole column
# and the results will be parallel to our original dataframe.
column_z_score = (sales_df[column_name] - sales_df[column_name].mean()) / sales_df[column_name].std(ddof=0)
# now, grab the original data and filter down to rows with z-scores between -(1, 1)
filtered_column = sales_df[column_name][np.abs(column_z_score) < 1]
print('Filtered: ' + column_name)
filtered_column.plot.box()
plt.show()
filtered_column.plot.hist(bins=20)
plt.show()
# Much more interesting...
# That's the whirlwind tour of pandas, matplotlib, and jupyter notebook. Now, attempt the exercise.
```
| github_jupyter |
```
"""
Author: Moustafa Alzantot (malzantot@ucla.edu)
All rights reserved Networked and Embedded Systems Lab (NESL), UCLA.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
%load_ext autoreload
%autoreload 2
import data_utils
import model_utils
import model
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import numpy as np
data = data_utils.load_training_data('ecg')
```
## Training the model
```
# To get reasonable outputs, should use something bigger than 1000 !
num_epochs = 1000
model_utils.reset_session_and_model()
with tf.Session() as sess:
train_config = model.ModelConfig()
test_config = model.ModelConfig()
train_config.learning_rate = 0.003
train_config.num_layers = 1
train_config.batch_size = 128
test_config.num_layers = 1
test_config.batch_size = 1
test_config.num_steps = 1
loader = data_utils.DataLoader(data=data,batch_size=train_config.batch_size, num_steps=train_config.num_steps)
train_model = model.MDNModel(train_config, True)
test_model = model.MDNModel(test_config, False)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
for idx in range(num_epochs):
epoch_loss = train_model.train_for_epoch(sess, loader)
print(idx, ' ', epoch_loss)
if (idx+1) % 100 == 0:
saver.save(sess, './models/ecg_mdnmodel.ckpt', global_step=idx)
```
## Sampling from a trained model
```
ckpt_path = './models/ecg_mdnmodel.ckpt-999'
seq_len = 1200
model_utils.reset_session_and_model()
true_data = data[0,:seq_len]
with tf.Session() as sess:
test_config = model.ModelConfig()
test_config.num_layers = 1
test_config.batch_size = 1
test_config.num_steps = 1
test_model = model.MDNModel(test_config, True)
test_model.is_training = False
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
fake_data = test_model.predict(sess, seq_len)
fig, axes = plt.subplots(1,2, figsize=((14,8)))
axes[0].plot(true_data)
axes[0].set_title('True data')
axes[1].plot(fake_data)
axes[1].set_title('Fake data')
## Note:
# I didn't spend time to do hyperparameter selection for the model and improve the results.
# Changing the number of Gaussian mixtures, number of hidden units, number of epochs, or learning rate should improve the quality of results.
# Another useful trick would be to 'quantize' the signal into discrete set of levels and use one-hot-encoding for input processing and cross-entropy loss.
# Under such a setting, the model will be similar to the RNNLM architecture trained using maximum likelihood estimate (MLE).
# Sometimes the model above makes a mistake and produces a signal that does not look well-shaped. This is mainly because MLE-based models
# suffer from `exposure bias`, which means that at generation time the model may produce data different from what it saw during training, and since
# the model relies on its own prediction as the input for the next step, the error propagates to future time steps as well.
# Further direction of improvement would be pairing the generator model with a discriminator and train the generator by adversarial loss.
```
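The "quantize the signal" trick suggested in the closing comments can be sketched as follows; the choice of 8 levels is arbitrary, and the function name is made up for illustration:

```python
import numpy as np

def quantize_and_one_hot(signal, n_levels=8):
    """Map a continuous signal to discrete levels, then one-hot encode it.

    This turns the regression-style target into a classification-style one,
    so a cross-entropy loss (as in RNNLM-type models) can be used instead.
    """
    lo, hi = signal.min(), signal.max()
    # Interior bin edges; np.digitize then returns indices in [0, n_levels-1].
    edges = np.linspace(lo, hi, n_levels + 1)[1:-1]
    levels = np.digitize(signal, edges)
    one_hot = np.eye(n_levels)[levels]
    return levels, one_hot

signal = np.sin(np.linspace(0, 2 * np.pi, 100))
levels, one_hot = quantize_and_one_hot(signal, n_levels=8)
print(levels.min(), levels.max())   # 0 7
print(one_hot.shape)                # (100, 8)
```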
| github_jupyter |
##### Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Retraining an Image Classifier
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_image_retraining"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
## Introduction
Image classification models have millions of parameters. Training them from
scratch requires a lot of labeled training data and a lot of computing power. Transfer learning is a technique that shortcuts much of this by taking a piece of a model that has already been trained on a related task and reusing it in a new model.
This Colab demonstrates how to build a Keras model for classifying five species of flowers by using a pre-trained TF2 SavedModel from TensorFlow Hub for image feature extraction, trained on the much larger and more general ImageNet dataset. Optionally, the feature extractor can be trained ("fine-tuned") alongside the newly added classifier.
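The "reuse a trained piece" idea can be sketched without any TensorFlow machinery at all. In this pure-NumPy illustration the pre-trained extractor is faked with a fixed random projection, and only the small new head is trained — everything here is a stand-in, not the notebook's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pre-trained feature extractor (frozen: never updated below).
W_frozen = rng.normal(size=(12, 4))
def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # fixed ReLU features

# Toy "new task": 200 examples, binary label.
X = rng.normal(size=(200, 12))
y = (X[:, 0] > 0).astype(float)

# Transfer learning = train only the new head (w, b) on top of frozen features.
w, b, lr = np.zeros(4), 0.0, 0.1
for _ in range(200):
    F = features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # logistic-regression head
    w -= lr * F.T @ (p - y) / len(y)        # gradients touch only the head...
    b -= lr * np.mean(p - y)                # ...W_frozen stays exactly as loaded

print("head weights trained:", not np.allclose(w, 0.0))  # True
```

The Keras model built later in this notebook follows the same pattern: a frozen `hub.KerasLayer` playing the role of `W_frozen`, and a trainable `Dense` layer as the head.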
### Looking for a tool instead?
This is a TensorFlow coding tutorial. If you want a tool that just builds the TensorFlow or TF Lite model for you, take a look at the [make_image_classifier](https://github.com/tensorflow/hub/tree/master/tensorflow_hub/tools/make_image_classifier) command-line tool that gets [installed](https://www.tensorflow.org/hub/installation) by the PIP package `tensorflow-hub[make_image_classifier]`, or at [this](https://colab.sandbox.google.com/github/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/demo/image_classification.ipynb) TF Lite colab.
## Setup
```
import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
```
## Select the TF2 SavedModel module to use
For starters, use [https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4](https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4). The same URL can be used in code to identify the SavedModel and in your browser to show its documentation. (Note that models in TF1 Hub format won't work here.)
You can find more TF2 models that generate image feature vectors [here](https://tfhub.dev/s?module-type=image-feature-vector&tf-version=tf2).
```
module_selection = ("mobilenet_v2_100_224", 224) #@param ["(\"mobilenet_v2_100_224\", 224)", "(\"inception_v3\", 299)"] {type:"raw", allow-input: true}
handle_base, pixels = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))
BATCH_SIZE = 32 #@param {type:"integer"}
```
## Set up the Flowers dataset
Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, especially when fine-tuning.
```
data_dir = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
datagen_kwargs = dict(rescale=1./255, validation_split=.20)
dataflow_kwargs = dict(target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
interpolation="bilinear")
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
**datagen_kwargs)
valid_generator = valid_datagen.flow_from_directory(
data_dir, subset="validation", shuffle=False, **dataflow_kwargs)
do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=40,
horizontal_flip=True,
width_shift_range=0.2, height_shift_range=0.2,
shear_range=0.2, zoom_range=0.2,
**datagen_kwargs)
else:
train_datagen = valid_datagen
train_generator = train_datagen.flow_from_directory(
data_dir, subset="training", shuffle=True, **dataflow_kwargs)
```
## Defining the model
All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.
For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
```
do_fine_tuning = False #@param {type:"boolean"}
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
# Explicitly define the input shape so the model can be properly
# loaded by the TFLiteConverter
tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(train_generator.num_classes,
kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()
```
## Training the model
```
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.005, momentum=0.9),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
metrics=['accuracy'])
steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = valid_generator.samples // valid_generator.batch_size
hist = model.fit(
train_generator,
epochs=5, steps_per_epoch=steps_per_epoch,
validation_data=valid_generator,
validation_steps=validation_steps).history
plt.figure()
plt.ylabel("Loss (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(hist["loss"])
plt.plot(hist["val_loss"])
plt.figure()
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(hist["accuracy"])
plt.plot(hist["val_accuracy"])
```
Try out the model on an image from the validation data:
```
def get_class_string_from_index(index):
for class_string, class_index in valid_generator.class_indices.items():
if class_index == index:
return class_string
x, y = next(valid_generator)
image = x[0, :, :, :]
true_index = np.argmax(y[0])
plt.imshow(image)
plt.axis('off')
plt.show()
# Expand the validation image to (1, 224, 224, 3) before predicting the label
prediction_scores = model.predict(np.expand_dims(image, axis=0))
predicted_index = np.argmax(prediction_scores)
print("True label: " + get_class_string_from_index(true_index))
print("Predicted label: " + get_class_string_from_index(predicted_index))
```
Finally, the trained model can be saved for deployment to TF Serving or TF Lite (on mobile) as follows.
```
saved_model_path = "/tmp/saved_flowers_model"
tf.saved_model.save(model, saved_model_path)
```
## Optional: Deployment to TensorFlow Lite
[TensorFlow Lite](https://www.tensorflow.org/lite) lets you deploy TensorFlow models to mobile and IoT devices. The code below shows how to convert the trained model to TF Lite and apply post-training tools from the [TensorFlow Model Optimization Toolkit](https://www.tensorflow.org/model_optimization). Finally, it runs it in the TF Lite Interpreter to examine the resulting quality.
* Converting without optimization provides the same results as before (up to roundoff error).
* Converting with optimization without any data quantizes the model weights to 8 bits, but inference still uses floating-point computation for the neural network activations. This reduces model size almost by a factor of 4 and improves CPU latency on mobile devices.
* On top of that, computation of the neural network activations can be quantized to 8-bit integers as well if a small reference dataset is provided to calibrate the quantization range. On a mobile device, this accelerates inference further and makes it possible to run on accelerators like EdgeTPU.
```
#@title Optimization settings
# docs_infra: no_execute
# TODO(b/156102192)
optimize_lite_model = False #@param {type:"boolean"}
#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.
num_calibration_examples = 60 #@param {type:"slider", min:0, max:1000, step:1}
representative_dataset = None
if optimize_lite_model and num_calibration_examples:
# Use a bounded number of training examples without labels for calibration.
# TFLiteConverter expects a list of input tensors, each with batch size 1.
representative_dataset = lambda: itertools.islice(
([image[None, ...]] for batch, _ in train_generator for image in batch),
num_calibration_examples)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
if optimize_lite_model:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
if representative_dataset: # This is optional, see above.
converter.representative_dataset = representative_dataset
lite_model_content = converter.convert()
with open("/tmp/lite_flowers_model", "wb") as f:
f.write(lite_model_content)
print("Wrote %sTFLite model of %d bytes." %
("optimized " if optimize_lite_model else "", len(lite_model_content)))
# docs_infra: no_execute
interpreter = tf.lite.Interpreter(model_content=lite_model_content)
# This little helper wraps the TF Lite interpreter as a numpy-to-numpy function.
def lite_model(images):
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images)
interpreter.invoke()
return interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
#@markdown For rapid experimentation, start with a moderate number of examples.
# docs_infra: no_execute
num_eval_examples = 50 #@param {type:"slider", min:0, max:700}
eval_dataset = ((image, label) # TFLite expects batch size 1.
for batch in train_generator
for (image, label) in zip(*batch))
count = 0
count_lite_tf_agree = 0
count_lite_correct = 0
for image, label in eval_dataset:
probs_lite = lite_model(image[None, ...])[0]
probs_tf = model(image[None, ...]).numpy()[0]
y_lite = np.argmax(probs_lite)
y_tf = np.argmax(probs_tf)
y_true = np.argmax(label)
count +=1
if y_lite == y_tf: count_lite_tf_agree += 1
if y_lite == y_true: count_lite_correct += 1
if count >= num_eval_examples: break
print("TF Lite model agrees with original model on %d of %d examples (%g%%)." %
(count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count))
print("TF Lite model is accurate on %d of %d examples (%g%%)." %
(count_lite_correct, count, 100.0 * count_lite_correct / count))
```
| github_jupyter |
### Coupling GIPL and ECSimpleSnow models
Before you begin, install:
```conda install -c conda-forge pymt pymt_gipl pymt_ecsimplesnow seaborn```
```
import pymt.models
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import matplotlib.colors as mcolors
from matplotlib.colors import LinearSegmentedColormap
sns.set(style='whitegrid', font_scale= 1.2)
```
#### Load ECSimpleSnow module from PyMT
```
ec = pymt.models.ECSimpleSnow()
print(ec.name)
# List input and output variable names.
print(ec.output_var_names)
print(ec.input_var_names)
```
#### Load GIPL module from PyMT
```
gipl = pymt.models.GIPL()
print(gipl.name)
# List input and output variable names.
print(gipl.output_var_names)
print(gipl.input_var_names)
```
Call the setup method on both ECSimpleSnow and GIPL to get default configuration files and data.
```
ec_defaults = ec.setup('.')
print(ec_defaults)
gipl_defaults = gipl.setup('.')
print(gipl_defaults)
ec.initialize('snow_model.cfg')
gipl.initialize('gipl_config.cfg')
# Get soil depth: [unit: m]
depth = gipl.get_grid_z(2)
n_depth = int(len(depth))
# Get the length of forcing data:
ntime = int(gipl.end_time)
# Define a variable to store soil temperature through the time period
tsoil = np.zeros((n_depth, ntime)) * np.nan
print('Final soil temperatures will be ', tsoil.shape)
fig = plt.figure(figsize=[12,6])
ax2 = fig.add_subplot(2,3,1)
ax2.set_title('Air Temperature (Input)')
ax3 = fig.add_subplot(2,3,2)
ax3.set_title('Precipitation (Input)')
ax4 = fig.add_subplot(2,3,4)
ax4.set_title('Snow Depth (EC Output)')
ax5 = fig.add_subplot(2,3,5)
ax5.set_title('Snow Density (EC Output)')
ax1 = fig.add_subplot(2,3,(3,6))
ax1.set_ylim([15,0])
ax1.set_xlim([-20,20])
ax1.set_xlabel('Soil Temperature ($^oC$)')
ax1.set_ylabel('Depth (m)')
ax1.plot([0,0],[15,0],'k--')
for i in np.arange(365):
ec.update() # Update Snow Model Once
# Get output from snow model
tair = ec.get_value('land_surface_air__temperature')
prec = ec.get_value('precipitation_mass_flux')
snd = ec.get_value('snowpack__depth', units='m')
rsn = ec.get_value('snowpack__mass-per-volume_density', units = 'g cm-3')
# Pass value to GIPL model
gipl.set_value('land_surface_air__temperature', tair)
gipl.set_value('snowpack__depth', snd)
gipl.set_value('snow__thermal_conductivity', rsn * rsn * 2.846)
gipl.update() # Update GIPL model Once
tsoil[:,i] = gipl.get_value('soil__temperature') # Save results to a matrix
ax1.plot(tsoil[depth>=0,i], depth[depth>=0],color = [0.7,0.7,0.7], alpha = 0.1)
ax2.scatter(i, tair, c = 'k')
ax3.scatter(i, prec, c = 'k')
ax4.scatter(i, snd , c = 'k')
ax5.scatter(i, rsn , c = 'k')
ax1.plot(tsoil[depth>=0,:].max(axis=1), depth[depth>=0], 'r', linewidth = 2, label = 'Max')
ax1.plot(tsoil[depth>=0,:].min(axis=1), depth[depth>=0], 'b', linewidth = 2, label = 'Min')
ax1.plot(tsoil[depth>=0,:].mean(axis=1), depth[depth>=0], 'k', linewidth = 2, label = 'Mean')
ax1.legend()
ax1.set_title('Ground Temperatures (GIPL output)')
ax2.set_xticks([])
ax3.set_xticks([])
fig = plt.figure(figsize=[9,4])
divnorm = mcolors.TwoSlopeNorm(vmin=-25., vcenter=0., vmax=10)
plt.contourf(np.arange(ntime), depth, tsoil, np.linspace(-25,10,15),
norm = divnorm,
cmap="RdBu_r", extend = 'both')
plt.ylim([5,0])
cb = plt.colorbar()
plt.xlabel('Day')
plt.ylabel('Depth (m)')
cb.ax.set_ylabel('Soil Temperature ($^oC$)')
plt.contour(np.arange(ntime), depth, tsoil, [0]) # ZERO
```
| github_jupyter |
```
import warnings
warnings.filterwarnings(action='ignore',
category=DeprecationWarning,
module='stable_baselines')
warnings.filterwarnings(action='ignore',
category=UserWarning,
module='stable_baselines')
warnings.filterwarnings("ignore", category=FutureWarning, module='tensorflow')
warnings.filterwarnings("ignore", category=FutureWarning, module='tensorboard')
warnings.filterwarnings("ignore", category=UserWarning, module='gym')
import gym
import PortfolioAllocationGym
import numpy as np
from stable_baselines import A2C
from stable_baselines.common.policies import MlpLnLstmPolicy #, MlpPolicy, MlpLstmPolicy
from stable_baselines.common.evaluation import evaluate_policy
from stable_baselines.common.env_checker import check_env
from stable_baselines.bench import Monitor
from tensorflow import nn as nn
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
from datetime import datetime
reward_fn = 'benchmark'
sample_size=500
observations = ['daily_returns', 'ema_50', 'ema_200', 'bb_bbm', 'bb_bbh', 'bb_bbl','bb_bbhi', 'bb_bbli', 'stoch', 'stoch_signal', 'macd','macd_signal', 'obv']
env_kwargs = {'filename':'sp500.csv',
'date_from':'2008-01-01',
'date_to':'2017-12-31',
'investment':1000000,
'risk_free_rate': 0.5, # approx US Treasury Note return
'sample_size':sample_size,
'random_sample':True,
'observations' : observations,
'save_info' : True,
#'report_point' : 252,
'reward_function':reward_fn}
train_env = gym.make('PortfolioAllocation-v0', **env_kwargs)
train_env = Monitor(train_env, 'monitor')
check_env(train_env)
venv, obs = train_env.get_sb_env()
#{'gamma': 0.9999, 'n_steps': 1, 'lr_schedule': 'constant', 'lr': 0.001, 'ent_coef': 0.1, 'vf_coef': 0, 'max_grad_norm': 5, 'n_lstm': 128, 'activation_fn': 'tanh', 'net_arch': 'medium'}.
model_kwargs = {
'gamma': 0.9999,
'n_steps': 1,
'lr_schedule': 'linear',
'learning_rate': 0.001,
'ent_coef': 0.1,
'vf_coef': 0,
'max_grad_norm': 5,
'full_tensorboard_log': True,
'policy_kwargs' : dict (
n_lstm=128,
act_fun=nn.tanh,
net_arch=[64, 'lstm', dict(pi=[256, 256], vf=[256, 256])]
)
}
a2c_model = A2C(policy = MlpLnLstmPolicy, tensorboard_log="tensorboard",env = venv, **model_kwargs)
ts_factor = 40
total_timesteps = ts_factor* (len(venv.venv.envs[0].data.date.unique()))
trained_a2c_model= a2c_model.learn(total_timesteps=total_timesteps,
tb_log_name='A2C_'+str(sample_size)+'_'+reward_fn+'_'+datetime.now().strftime("%H-%M"))
trained_a2c_model.save('ac2_mlplnltsm_'+str(sample_size)+'_'+str(ts_factor)+'_'+reward_fn+'.zip')
eval_kwargs = {'filename':'sp500.csv',
'date_from':'2018-01-01',
'date_to':'2020-12-31',
'investment':1000000,
'risk_free_rate': 0.5, # approx US Treasury Note return
'sample_size':sample_size,
'random_sample':False,
'observations' : observations,
'save_info' : True,
#'report_point' : 252,
'reward_function':reward_fn}
eval_env = gym.make('PortfolioAllocation-v0', **eval_kwargs)
eval_venv, obs = eval_env.get_sb_env()
# Random Agent, before training
mean_reward, std_reward = evaluate_policy(a2c_model, eval_venv, n_eval_episodes=10)
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")
obs = eval_venv.reset()
mean_reward, std_reward = evaluate_policy(trained_a2c_model, eval_venv, n_eval_episodes=10)
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")
```
| github_jupyter |
# Combination of Anglescan Analysis
This notebook combines the analysis routines worked out in result_concatenation_4.
```
%matplotlib tk
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import xarray as xr
import scipy.stats as stat
import collections
import sys
import os
import glob
import re
sys.path.append('/home/jleland/Coding/Projects/flopter')
import flopter.core.colours as col
import flopter.core.ivdata as iv
import flopter.core.lputils as lp
import flopter.magnum.database as ut
import flopter.magnum.utils as mgut
import flopter.core.fitters as fts
import flopter.core.constants as c
# Create analysed dataset metadata
path_to_datasets = '/home/jleland/data/external/magnum/'
# path_to_datasets = '/home/jleland/data/externy/magnum/'
# path_to_analysed_datasets = 'analysed_2'
# path_to_analysed_datasets = 'analysed_3'
# path_to_analysed_datasets = 'phobos_test'
# path_to_analysed_datasets = 'test'
# path_to_analysed_datasets = 'analysed_3_downsampled'
# path_to_analysed_datasets = 'analysed_4'
path_to_analysed_datasets = 'analysed_4_downsampled'
# path_to_analysed_datasets = 'analysed_5_downsampled'
os.chdir(path_to_datasets)
magnum_probes = lp.MagnumProbes()
print(magnum_probes.probe_position.items())
```
## Load adc file metadata
```
os.chdir('/home/jleland/data/external/magnum/')
# os.chdir('/home/jleland/data/externy/magnum/')
meta_data_ds = xr.open_dataset('all_meta_data.nc')
# print(meta_data_ds)
```
---
## Whole Analysis Function
```
def run_whole_anglescan_analysis(indices, probe_settings, global_fit_settings):
angle_scan_ds = mgut.get_dataset_from_indices(indices, anglescan_fl=True, average_direction_fl=False,
path_to_analysed_datasets=path_to_analysed_datasets)
fit_df = mgut.fit_multi_magnum_ds(angle_scan_ds, probes_settings=probe_settings, threshold=None,
plot_fl=False, **global_fit_settings)
combined_ds = mgut.combine_fit_ds(fit_df, angle_scan_ds)
return combined_ds
```
---
### Indices for different Anglescans
```
# Hydrogen Shots
# super_title = 'H shot @ 0.8T'
indices_08T_H = [41,42,43,44,45,46,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
ps_08T_H = {
('S', 'L'): {'mode': 3, 'trimming_vals': (0.16, 0.16, 0.02), 'sat_region': -25},
('B', 'R'): {'mode': 3, 'trimming_vals': (0.16, 0.16, 0.02), 'sat_region': -25},
}
ps_08T_H_alternate = {
('S', 'L'): {'mode': 3, 'trimming_vals': (0.16, 0.16, 0.02), 'sat_region': -25, 'fitter':fts.SimpleIVFitter()},
}
# super_title = 'H shot @ 1.2T'
indices_12T_H = [320,321,322,323,324,325,327,328,329,330,331,336,337,338,339,340,341,342,343,344,345,346]
ps_12T_H = {
('S', 'L'): {'mode': 3, 'trimming_vals': (0.2, 0.0, 0.02), 'sat_region': -50},
('B'): {'mode': 3, 'trimming_vals': (0.2, 0.1, 0.02), 'sat_region': -50},
# ('R'): {'mode': 3, 'trimming_vals': (0.2, 0.3, 0.02), 'sat_region': -50},
}
# super_title = 'H shot @ 1.5T'
indices_15T_H = [490,491,492,493,494,495,496,497,498,499,500,503,504,505,506,507,509,510,511,512,513,514]
ps_15T_H = {
('S', 'L', 'B'): {'mode': 0, 'trimming_vals': (0.3, 0.0, 0.02), 'sat_region': -50},
# ('R'): {'mode': 3, 'trimming_vals': (0.3, 0.2, 0.02), 'sat_region': -50},
}
# Helium Shots
# super_title = 'He shot @ 0.8T'
indices_08T_He = [357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386]
ps_08T_He = {
('S'): {'mode': 0, 'trimming_vals': (0.3, 0.0, 0.02), 'sat_region': -60},
('B'): {'mode': 0, 'trimming_vals': (0.3, 0.3, 0.02), 'sat_region': -60},
# ('R'): {'mode': 3, 'trimming_vals': (0.3, 0.15, 0.02), 'sat_region': -60},
}
# super_title = 'He shot @ 1.2T'
indices_12T_He = [245,246,247,248,249,250,251,252,254,255,256,261,262,263,264,265,266,267,268,269,270,271]
ps_12T_He = {
('S', 'L'): {'mode': 0, 'trimming_vals': (0.16, 0.0, 0.02), 'sat_region': -75, 'print_fl':True},
# ('B'): {'mode': 2, 'sat_region': -65, 'minimise_temp_fl':False}
}
# Deuterium Shot
# super_title = 'D shot @ 0.8T'
indices_08T_D = [462,463,464,465,466,467,468,469,470,471,472,475,476,477,478,479,480,481,482,483,484,485]
ps_08T_D = {
('S', 'L'): {'sat_region': -65, 'minimise_temp_fl':False, 'temp_from_phi_fl': False},
('B'): {'mode': 0, 'trimming_vals': (0.1, 0.1, 0.02), 'sat_region': -30}
}
plasma_settings = [
(0.8, 'H'),
(1.2, 'H'),
(1.5, 'H'),
(0.8, 'He'),
(1.2, 'He'),
]
fit_inputs = {
(0.8, 'H'): (indices_08T_H, ps_08T_H, {}),
(1.2, 'H'): (indices_12T_H, ps_12T_H, {}),
(1.5, 'H'): (indices_15T_H, ps_15T_H, {}),
(0.8, 'He'): (indices_08T_He, ps_08T_He, {'mass': 4, 'Z':2}),
(1.2, 'He'): (indices_12T_He, ps_12T_He, {'mass': 4, 'Z':2}),
}
```
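The probe-settings dictionaries above use tuple keys that group several probes under one configuration. How those keys are resolved is internal to `mgut.fit_multi_magnum_ds`; the sketch below is only a hypothetical illustration of the membership-based lookup the dictionaries suggest (the `resolve_probe_settings` helper is illustrative, not part of flopter).

```python
# Hypothetical sketch: resolve grouped probe settings (tuple keys like
# ('S', 'L')) down to a single probe's settings dict. Keys written as
# ('B') are plain strings, so `in` membership covers both cases.
def resolve_probe_settings(probe, probe_settings):
    for probes, settings in probe_settings.items():
        if probe in probes:  # works for tuple keys and string keys
            return settings
    return {}  # fall back to defaults when a probe is not listed

ps = {
    ('S', 'L'): {'mode': 3, 'sat_region': -25},
    ('B',): {'mode': 0, 'sat_region': -30},
}
print(resolve_probe_settings('L', ps))  # settings shared by S and L
print(resolve_probe_settings('R', ps))  # not listed -> {}
```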
---
## Aggregation!
The cell below runs all of the analyses and compiles the resulting combined datasets into a dictionary keyed by plasma settings (field strength, gas)
```
fit_datasets = {}
for plasma_setting, fit_settings in fit_inputs.items():
    indices, probe_settings, other_settings = fit_settings
    fit_dataset = run_whole_anglescan_analysis(indices, probe_settings, other_settings)
    fit_datasets[plasma_setting] = fit_dataset
fit_datasets[(0.8, 'H')].sel(probe='S').shot_number
def interpolate_ts_position(dataset, uncertainty=1.5, offset=0):
magnum_probes = lp.MagnumProbes()
ts_temp = dataset.ts_temperature.mean(['probe', 'tilt'])
ts_d_temp = dataset.ts_temperature.std(['probe', 'tilt'])
ts_dens = dataset.ts_density.mean(['probe', 'tilt'])
ts_d_dens = dataset.ts_density.std(['probe', 'tilt'])
ts_probe_temps = {}
ts_probe_denss = {}
for probe, probe_pos in magnum_probes.probe_position.items():
probe_pos += offset
probe_pos_limits = [probe_pos, probe_pos - uncertainty, probe_pos + uncertainty]
ts_probe_temps[probe.upper()] = ts_temp.interp(ts_radial_pos=probe_pos_limits).values
ts_probe_denss[probe.upper()] = ts_dens.interp(ts_radial_pos=probe_pos_limits).values
return ts_probe_temps, ts_probe_denss
```
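`interpolate_ts_position` evaluates the averaged Thomson-scattering profiles at each probe's radial position and at bracketing positions one uncertainty either side. A minimal, self-contained sketch of that interpolation step, using a synthetic Gaussian profile in place of real TS data:

```python
# Sketch of the position-interpolation idea used by
# interpolate_ts_position(): evaluate a radial profile at a probe
# position and at +/- an uncertainty bound. The profile here is
# synthetic; 'ts_radial_pos' mirrors the coordinate name above.
import numpy as np
import xarray as xr

radial_pos = np.linspace(-30.0, 30.0, 61)
profile = xr.DataArray(np.exp(-(radial_pos / 10.0) ** 2),
                       coords={'ts_radial_pos': radial_pos},
                       dims='ts_radial_pos')

probe_pos = 4.3       # hypothetical probe position
uncertainty = 1.5     # positional uncertainty, as in the default above
limits = [probe_pos, probe_pos - uncertainty, probe_pos + uncertainty]

# Linear interpolation onto the nominal and bracketing positions
values = profile.interp(ts_radial_pos=limits).values
print(values)  # nominal value first, then lower/upper bounds
```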
## Plotting!
```
# The default colours from pyplot
# tableau_palette = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple',]
# markers = ['.', '+', 'x', '^', '*', 'o']
# probe_colours = {
# 'L': '#4477AA',
# 'S': '#EE6677',
# 'B': '#228833',
# 'R': '#CCBB44'
# }
probe_colours = {
'L': col.palettes['c'][3],
'S': col.palettes['c'][2],
'B': col.palettes['c'][1],
'R': col.palettes['b'][2],
}
probe_markers = {
'L': 's',
'S': '.',
'B': '^',
'R': 'p',
}
tableau_palette = ['tab:blue', 'tab:green', 'tab:red', 'black', 'tab:purple', 'tab:grey', 'tab:orange']
cb_palette = ['#4477AA', '#66CCEE', '#228833', '#CCBB44', '#EE6677', '#AA3377', '#BBBBBB']
markers = ['.', '+', 'x', '^', '*', 'o', 's']
global_options = {'fontsize':12, }
probes = ['L', 'S', 'B', 'R']
# probe_colours = {probes[i]: tableau_palette[i] for i in range(len(probes))}
import itertools
def plot_probe_anglescans(probe, fit_datasets, parameters=('temp', 'dens')):
fig, ax = plt.subplots(2, constrained_layout=True)
labels = []
handles = []
for i, (plasma, fit_ds) in enumerate(fit_datasets.items()):
probe_ds = fit_ds.sel(probe=probe)
ts_probe_temps, ts_probe_denss = interpolate_ts_position(fit_ds)
label = f'{plasma[0]} - {plasma[1]}'
ts_label = f'TS at {label}'
handle = ax[0].errorbar('tilt', 'temp', yerr='d_temp', data=probe_ds, fmt=markers[i], label=label,
color=tableau_palette[i])
ax[1].errorbar('tilt', 'dens', yerr='d_dens', fmt=markers[i], data=probe_ds, color=tableau_palette[i])
# Plot interpolated TS value over the top
ts_handle = ax[0].axhline(y=ts_probe_temps[probe][0], color=tableau_palette[i], linewidth=1,
linestyle='dashed', label=ts_label)
ax[1].axhline(y=ts_probe_denss[probe][0], color=tableau_palette[i], linewidth=1,
linestyle='dashed')
handles.append(handle)
labels.append(label)
labels.append(ts_label)
handles.append(ts_handle)
ax[0].set_ylabel('T / eV')
ax[0].set_yscale('log')
# ax[0].set_ylim(0, 15)
ax[1].set_yscale('log')
ax[1].set_ylabel(r'$n_e$ / m$^{-3}$')
ax[1].set_xlabel(r'probe tilt / $^{\circ}$')
    fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.8,0.5))
fig.set_constrained_layout_pads(hspace=0.0, wspace=3.0)
fig.suptitle(f'Anglescan Fits for All {probe} Probe Shots')
import importlib
importlib.reload(lp)
def plot_all_tiltscan_fitparams(plasma_settings, ax=None, phi_temp_fl=False, probes=('S', 'L', 'B'),
fontsize=18):
if ax is None:
fig, ax = plt.subplots(2, 3, sharex=True, constrained_layout=True, figsize=[15,10])
else:
fig = ax[0,0].figure
ds = fit_datasets[plasma_settings]
if phi_temp_fl:
ds['d_temp_phi'] = np.abs(ds['temp_phi'] - ds['temp_fit_phi'])
labels = []
handles = []
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
ts_probe_temps, ts_probe_denss = interpolate_ts_position(ds)
probe_dims = magnum_probes[probe_label]
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# dummy_tilt = np.linspace(89.99, 78.99, 1101)
calced_a = lp.calc_sheath_expansion_param(
ts_probe_temps[probe_label][0],
ts_probe_denss[probe_label][0] / 2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
)
calced_new_a = lp.calc_new_sheath_expansion_param(
ts_probe_temps[probe_label][0],
ts_probe_denss[probe_label][0],
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
probe_dims.d_perp,
0.0 if not probe_dims.is_angled() else probe_dims.theta_p,
c_1=0.9,
c_2=0.6,
)
handle = ax[0][0].errorbar('tilt', 'temp', yerr='d_temp', data=ds_probe, label=probe, fmt='.',
color=probe_colours[probe])
labels.append(probe)
handles.append(handle)
if phi_temp_fl:
label = r'$T_e$ from $\phi$ on {}'.format(probe)
handle = ax[0][0].errorbar('tilt', 'temp_phi', yerr='d_temp_phi', data=ds_probe, mfc='none',
label=label, linestyle='none', marker='^', color=probe_colours[probe])
labels.append(label)
handles.append(handle)
ax[0][1].errorbar('tilt', 'v_f', yerr='d_v_f', data=ds_probe, fmt='.', color=probe_colours[probe])
ax[0][2].plot('tilt', 'reduced_chi2', '.', data=ds_probe, color=probe_colours[probe])
ax[1][0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe, fmt='.', color=probe_colours[probe])
ax[1][1].errorbar('tilt', 'isat', yerr='d_isat', data=ds_probe, fmt='.', color=probe_colours[probe])
ax[1][2].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, fmt='.', color=probe_colours[probe])
calced_a_label = r'$a_{{{probe}, calc}}$'.format(probe=probe)
handle = ax[1][2].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-', color=probe_colours[probe],
linewidth=0.8, alpha=0.6)
ax[1][2].errorbar(dummy_tilt, calced_new_a, label=calced_a_label, fmt=':', color=probe_colours[probe],
linewidth=0.8, alpha=0.6)
labels.append(calced_a_label)
handles.append(handle)
        # Plot the TS values at the probe positions using interpolation
handle = ax[0][0].axhline(y=ts_probe_temps[probe_label][0], color=probe_colours[probe], linewidth=1,
linestyle='dashed', label=f'TS at {probe}')
ax[1][0].axhline(y=ts_probe_denss[probe_label][0], color=probe_colours[probe], linewidth=1,
linestyle='dashed')
labels.append(f'TS at {probe}')
handles.append(handle)
ax[0,0].set_ylabel('T / eV', fontsize=fontsize)
ax[0,1].set_ylabel(r'$V_f$ (V)', fontsize=fontsize)
ax[0,2].set_ylabel(r'$\chi^2_{\nu}$', fontsize=fontsize)
ax[0,2].set_yscale('log')
ax[0,2].axhline(y=1.0, **c.AX_LINE_DEFAULTS)
ax[1,0].set_ylabel(r'$n_e$ / m$^{-3}$', fontsize=fontsize)
ax[1,0].set_yscale('log')
ax[1,1].set_ylabel(r'$I_{sat}$ / A', fontsize=fontsize)
ax[1,2].set_ylabel(r'$a$', fontsize=fontsize)
ax[1,0].set_xlabel(r'Tilt / $^{\circ}$', fontsize=fontsize)
ax[1,1].set_xlabel(r'Tilt / $^{\circ}$', fontsize=fontsize)
ax[1,2].set_xlabel(r'Tilt / $^{\circ}$', fontsize=fontsize)
ax[1,2].set_yscale('log')
ax[1,2].set_xlim(-0.5, 10.5)
# ax[1,2].set_xlim(91.5, 79.5)
ax[1,2].set_ylim(2e-3, 1e0)
# fig.set_constrained_layout_pads(hspace=0.0, wspace=3.0)
fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.9,0.5))
    fig.suptitle('Anglescan Fit Parameters for {1} Plasma at {0}T'.format(*plasma_settings))
return ax
def plot_paper_tiltscan_fitparams_4(plasma_settings, ax=None, phi_temp_fl=False, probes=('S', 'L'),
fontsize=18):
if ax is None:
fig, ax = plt.subplots(2, 2, sharex=True,
# constrained_layout=True,
# figsize=[8,6])
figsize=[10,7])
else:
fig = ax[0,0].figure
ds = fit_datasets[plasma_settings]
if phi_temp_fl:
ds['d_temp_phi'] = np.abs(ds['temp_phi'] - ds['temp_fit_phi'])
labels = []
handles = []
fp_markers = ['.', 's']
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = mgut.interpolate_ts_position(ds, aggregate_dims='probe')
ts_perprobe_data = mgut.interpolate_ts_position(ds_probe, aggregate_dims=None)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# dummy_tilt = np.linspace(89.99, 78.99, 1101)
calced_a = lp.calc_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0] / 2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
)
alt_dummy_tilt = np.linspace(-1.0, 11.0, 1100)
        try:
            d_perp = probe_dims.d_perp[0]
        except TypeError:
            d_perp = probe_dims.d_perp
try:
theta_p = probe_dims.theta_p
except AttributeError:
theta_p = 0.0
alt_calced_a = lp.calc_2d_box_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0] / 2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(alt_dummy_tilt),
d_perp,
theta_p,
c_1=2.0,
c_2=1.0,
)
handle = ax[0][0].errorbar('tilt', 'temp', yerr='d_temp', data=ds_probe, label=probe,
color=probe_colours[probe], marker=probe_markers[probe_label], mfc='none',
linestyle='none')
labels.append(probe)
handles.append(handle)
if phi_temp_fl:
label = r'$T_e$ from $\phi$ on {}'.format(probe)
handle = ax[0][0].errorbar('tilt', 'temp_phi', yerr='d_temp_phi', data=ds_probe, mfc='none',
label=label, linestyle='none', marker='^', color=probe_colours[probe])
labels.append(label)
handles.append(handle)
# ds_probe['dens_scaled'] = ds_probe['dens'] / 0.6
ax[1][0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe,
marker=probe_markers[probe_label], mfc='none',
linestyle='none', color=probe_colours[probe])
ax[1][1].errorbar('tilt', 'isat', yerr='d_isat', data=ds_probe, marker=probe_markers[probe_label],
mfc='none', linestyle='none', color=probe_colours[probe])
ax[0][1].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, marker=probe_markers[probe_label], mfc='none',
linestyle='none', color=probe_colours[probe])
calced_a_label = r'$a_{{{probe}, BMS}}$'.format(probe=probe)
handle = ax[0][1].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-', color=probe_colours[probe],
linewidth=0.8, alpha=0.6)
labels.append(calced_a_label)
handles.append(handle)
calced_a_label = r'$a_{{{probe}, RR}}$'.format(probe=probe)
handle = ax[0,1].errorbar(alt_dummy_tilt, alt_calced_a, label=calced_a_label, fmt='--',
color=probe_colours[probe], linewidth=0.9, alpha=0.8)
labels.append(calced_a_label)
handles.append(handle)
        # Plot the TS values at the probe positions using interpolation
handle = ax[0][0].errorbar(ds_probe['tilt'].values, ts_probe_temps[probe_label][:,0],
yerr=ts_probe_d_temps[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed',
label=f'TS at {probe}')
ax[1][0].errorbar(ds_probe['tilt'].values, ts_probe_denss[probe_label][:,0],
yerr=ts_probe_d_denss[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
labels.append(f'TS at {probe}')
handles.append(handle)
ax[0,0].set_ylabel('$T_e$ [eV]', fontsize=fontsize)
ax[1,0].set_ylabel(r'$n_e$ [m$^{-3}$]', fontsize=fontsize)
ax[1,0].set_yscale('log')
ax[0,1].set_ylabel(r'$a$', fontsize=fontsize)
ax[1,1].set_ylabel(r'$I_{sat}$ [A]', fontsize=fontsize)
ax[1,0].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[1,1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[0,1].set_yscale('log')
ax[0,1].set_xlim(-0.5, 10.5)
# ax[1,2].set_xlim(91.5, 79.5)
ax[0,1].set_ylim(1e-3, 1e0)
# fig.set_constrained_layout_pads(hspace=0.1, wspace=0.1)
fig.tight_layout(pad=0.4, rect=[0, 0, 0.84, 1])
fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.85,0.5))
# fig.suptitle('Anglescan Fit Parameters for {} Plasma at {}'.format(*plasma_settings))
return ax
fit_datasets[anglescan_plset].sel(probe='S')['dens'].mean().values
def plot_paper_tiltscan_fitparams_3(plasma_settings, ax=None, probes=('S', 'L'), fontsize=18,
orientation='v', a_ylim=[1e-3, 1e0], ne_ylim=None, ts_offset=0.0):
if orientation[0] == 'v':
layout = [3]
figsize = [4.5,9.5]
elif orientation[0] == 'h':
layout = [1, 3]
figsize = [12.5,4.75]
else:
raise ValueError('Orientation must be one of [h(orizontal), v(ertical)]')
if ax is None:
fig, ax = plt.subplots(*layout, sharex=True,
# constrained_layout=True,
figsize=figsize)
else:
fig = ax[0].figure
ds = fit_datasets[plasma_settings]
ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
ds['ts_d_density'] = ds['ts_density'] * mean_percentage_dens_ts
labels = []
handles = []
fp_markers = ['.', 's']
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
# ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_perprobe_data = mgut.interpolate_ts_position(ds_probe, aggregate_dims=None, offset=ts_offset)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
print(ave_ts_probe_temps[probe_label][0], ave_ts_probe_denss[probe_label][0],)
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# dummy_tilt = np.linspace(89.99, 78.99, 1101)
calced_a = lp.calc_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0] /2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
# c_1 = 1.5,
# c_2 = 6,
)
probe_a = lp.calc_sheath_expansion_param(
ds_probe['temp'],
ds_probe['dens'],
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(ds_probe['tilt']),
c_1=0.9,
# c_1=60,
c_2=0.6,
)
alt_dummy_tilt = np.linspace(-1.0, 11.0, 1100)
        try:
            d_perp = probe_dims.d_perp[0]
        except TypeError:
            d_perp = probe_dims.d_perp
try:
theta_p = probe_dims.theta_p
except AttributeError:
theta_p = 0.0
print(ds_probe['temp'])
alt_calced_a = lp.calc_2d_box_sheath_expansion_param(
# ave_ts_probe_temps[probe_label][0],
# ave_ts_probe_denss[probe_label][0] / 2,
ds_probe['temp'].mean('tilt').values,
ds_probe['dens'].mean('tilt').values,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(alt_dummy_tilt),
d_perp,
theta_p,
c_1=2.0,
c_2=1.0,
)
ax[0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe, linestyle='none',
label=probe, color=probe_colours[probe], marker=probe_markers[probe_label], mfc='none')
# labels.append(probe)
# handles.append(handle)
ax[1].errorbar('tilt', 'temp', yerr='d_temp', data=ds_probe, label=probe, linestyle='none',
color=probe_colours[probe], marker=probe_markers[probe_label], mfc='none')
        # Plot the TS values at the probe positions using interpolation
ax[0].errorbar(ds_probe['tilt'].values, ts_probe_denss[probe_label][:,0],
label=f'TS at {probe}', yerr=ts_probe_d_denss[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
ax[1].errorbar(ds_probe['tilt'].values, ts_probe_temps[probe_label][:,0],
label=f'TS at {probe}', yerr=ts_probe_d_temps[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
# labels.append(f'TS at {probe}')
# handles.append(handle)
ax[2].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, linestyle='none', color=probe_colours[probe],
marker=probe_markers[probe_label], mfc='none', label=probe)
# ax[2].plot(ds_probe['tilt'], probe_a, linestyle='none', color=probe_colours[probe],
# marker=probe_markers[probe_label], mfc='none', label=probe)
calced_a_label = r'$a_{{{probe}, BMS}}$'.format(probe=probe)
ax[2].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-',
color=probe_colours[probe], linewidth=0.8, alpha=0.6)
# labels.append(calced_a_label)
# handles.append(handle)
calced_a_label = r'$a_{{{probe}, RR}}$'.format(probe=probe)
ax[2].errorbar(alt_dummy_tilt, alt_calced_a, label=calced_a_label, fmt='--',
color=probe_colours[probe], linewidth=0.9, alpha=0.8)
ax[0].set_ylabel(r'$n_e$ [m$^{-3}$]', fontsize=fontsize)
ax[0].set_yscale('log')
ax[1].set_ylabel('$T_e$ [eV]', fontsize=fontsize)
ax[2].set_ylabel(r'$a$', fontsize=fontsize)
ax[2].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
if orientation[0] == 'h':
for i in range(2):
ax[i].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
# ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[2].set_yscale('log')
ax[2].set_xlim(-0.5, 10.5)
ax[2].set_ylim(*a_ylim)
if ne_ylim:
ax[0].set_ylim(*ne_ylim)
# fig.set_constrained_layout_pads(hspace=0.1, wspace=0.1)
fig.tight_layout(pad=0.4) #, rect=[0, 0, 0.8, 1])
# fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.75,0.5))
for axis in ax:
axis.legend()
# fig.suptitle('Anglescan Fit Parameters for {} Plasma at {}'.format(*plasma_settings))
return ax
def plot_paper_tiltscan_fitparams_2(plasma_settings, ax=None, probes=('S', 'L'), fontsize=18):
if ax is None:
fig, ax = plt.subplots(2, sharex=True,
# constrained_layout=True,
figsize=[5,8])
else:
fig = ax[0].figure
ds = fit_datasets[plasma_settings]
labels = []
handles = []
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_perprobe_data = mgut.interpolate_ts_position(ds, aggregate_dims='probe')
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# dummy_tilt = np.linspace(89.99, 78.99, 1101)
calced_a = lp.calc_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0] / 2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
)
handle = ax[0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe, fmt='.',
color=probe_colours[probe])
labels.append(probe)
handles.append(handle)
        # Plot the TS values at the probe positions using interpolation
handle = ax[0].errorbar(ds_probe['tilt'].values, ts_probe_denss[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
labels.append(f'TS at {probe}')
handles.append(handle)
ax[1].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, fmt='.', color=probe_colours[probe])
calced_a_label = r'$a_{{{probe}, calc}}$'.format(probe=probe)
handle = ax[1].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-',
color=probe_colours[probe], linewidth=0.8, alpha=0.6)
labels.append(calced_a_label)
handles.append(handle)
ax[0].set_ylabel(r'$n_e$ [m$^{-3}$]', fontsize=fontsize)
ax[0].set_yscale('log')
ax[1].set_ylabel(r'$a$', fontsize=fontsize)
ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[1].set_yscale('log')
ax[1].set_xlim(-0.5, 10.5)
# ax[1,2].set_xlim(91.5, 79.5)
ax[1].set_ylim(2e-3, 1e0)
# fig.set_constrained_layout_pads(hspace=0.1, wspace=0.1)
fig.tight_layout(pad=0.4) #, rect=[0, 0, 0.8, 1])
fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.75,0.5))
# fig.suptitle('Anglescan Fit Parameters for {} Plasma at {}'.format(*plasma_settings))
return ax
def plot_paper_tiltscan_fitparams_extra(plasma_settings, ax=None, phi_temp_fl=False, probes=('S', 'L'),
fontsize=18, mu=1, Z=1):
if ax is None:
fig, ax = plt.subplots(2, 3, sharex=True,
# constrained_layout=True,
figsize=[8,6])
else:
fig = ax[0,0].figure
ds = fit_datasets[plasma_settings]
if phi_temp_fl:
ds['d_temp_phi'] = np.abs(ds['temp_phi'] - ds['temp_fit_phi'])
labels = []
handles = []
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = mgut.interpolate_ts_position(ds, aggregate_dims='probe')
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# dummy_tilt = np.linspace(89.99, 78.99, 1101)
calced_a = lp.calc_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0] / 2,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
)
handle = ax[0][0].errorbar('tilt', 'temp', yerr='d_temp', data=ds_probe, label=probe, fmt='.',
color=probe_colours[probe])
labels.append(probe)
handles.append(handle)
if phi_temp_fl:
label = r'$T_e$ from $\phi$ on {}'.format(probe)
handle = ax[0][0].errorbar('tilt', 'temp_phi', yerr='d_temp_phi', data=ds_probe, mfc='none',
label=label, linestyle='none', marker='^', color=probe_colours[probe])
labels.append(label)
handles.append(handle)
# ds_probe['dens_scaled'] = ds_probe['dens'] / 0.6
ax[1][0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe, fmt='.', color=probe_colours[probe])
ax[1][1].errorbar('tilt', 'isat', yerr='d_isat', data=ds_probe, fmt='.', color=probe_colours[probe])
ax[0][1].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, fmt='.', color=probe_colours[probe])
calced_a_label = r'$a_{{{probe}, calc}}$'.format(probe=probe)
handle = ax[0][1].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-', color=probe_colours[probe],
linewidth=0.8, alpha=0.6)
labels.append(calced_a_label)
handles.append(handle)
alt_calced_a = lp.calc_sheath_expansion_param(
ds_probe['temp'],
ds_probe['dens'],
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(ds_probe['tilt']),
c_1=0.9,
c_2=0.6,
)
ax[0][1].errorbar(ds_probe['tilt'], alt_calced_a, label=calced_a_label, marker='s', color=probe_colours[probe])
        # Plot the TS values at the probe positions using interpolation
handle = ax[0][0].errorbar(ds_probe['tilt'].values, ts_probe_temps[probe_label][:,0], yerr=0,
color=probe_colours[probe], linewidth=1, linestyle='dashed',
label=f'TS at {probe}')
ax[1][0].errorbar(ds_probe['tilt'].values, ts_probe_denss[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
labels.append(f'TS at {probe}')
handles.append(handle)
# larmor = (102 * np.sqrt(mu * ts_probe_temps[probe_label][:, 0]) / (Z * plasma_settings[0]))
larmor = lp.ion_larmor_radius(ts_probe_temps[probe_label][:, 0], plasma_settings[0], mu=mu, Z=Z)
ax[0,2].plot(ds_probe['tilt'].values, larmor)
ax[1,2].plot(ds_probe['tilt'].values, probe_dims.get_2d_collection_length(np.radians(ds_probe['tilt'].values)) / larmor)
ax[0,0].set_ylabel('$T_e$ [eV]', fontsize=fontsize)
ax[1,0].set_ylabel(r'$n_e$ [m$^{-3}$]', fontsize=fontsize)
ax[1,0].set_yscale('log')
ax[0,1].set_ylabel(r'$a$', fontsize=fontsize)
ax[1,1].set_ylabel(r'$I_{sat}$ [A]', fontsize=fontsize)
ax[1,0].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[1,1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[0,1].set_yscale('log')
ax[0,1].set_xlim(-0.5, 10.5)
# ax[1,2].set_xlim(91.5, 79.5)
ax[0,1].set_ylim(2e-3, 1e0)
# fig.set_constrained_layout_pads(hspace=0.1, wspace=0.1)
fig.tight_layout(pad=0.4, rect=[0, 0, 0.84, 1])
fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.85,0.5))
# fig.suptitle('Anglescan Fit Parameters for {} Plasma at {}'.format(*plasma_settings))
return ax
def plot_coarse_anglescan_ivs(plasma_settings, probe='S', ax=None, ):
if ax is None:
fig, ax = plt.subplots(constrained_layout=True)
else:
fig = ax.figure
# markers = ['.', 'x', '+', '^', '*', 'o']
# tableau_palette = ['red', 'blue', 'green', 'orange', 'purple', 'black']
fit_ds = fit_datasets[plasma_settings]
fitter = fts.FullIVFitter()
coarse_s_ds = fit_ds.sel(probe=probe).isel(tilt=slice(0,11,2))
for i, tilt in enumerate(coarse_s_ds.tilt.values):
ds = coarse_s_ds.sel(tilt=tilt)
# coarse_s_ds.set_coords('voltage')['current'].plot.line(hue='tilt', x='voltage', ax=ax)
ax.errorbar(ds['voltage'].values, -ds['current'].values, yerr=ds['d_current'],
marker=markers[i], mfc='none', color=cb_palette[i], linestyle='none',
label=r'${:.2g}^{{\circ}}$'.format(np.abs(tilt)))
# ax.plot(ds['voltage'].values, fitter.fit_function(ds['voltage'].values, ))
ax.set_ylabel('Probe Current [A]')
ax.set_xlabel('Probe Voltage [V]')
ax.axhline(y=0, **c.AX_LINE_DEFAULTS)
ax.legend(title=r'$\theta$', loc='lower left')
fig.suptitle('IV Characteristics for {}T {} Plasma'.format(*plasma_settings))
ax.set_title(probe)
return ax
DEFAULT_LABEL_POS = {
'x0': 0.48,
'x1': 0.95,
'y0': 0.93,
'y1': 0.46
}
def plot_coarse_anglescan_ivs_multiprobe(plasma_settings, probes=('L', 'S', 'B', 'R'), sharey=False,
label_pos=DEFAULT_LABEL_POS):
fig, ax = plt.subplots(2, 2, sharex=True, sharey=sharey, figsize=[9,8])
plot_coarse_anglescan_ivs(plasma_settings, probe=probes[0], ax=ax[0,0])
plot_coarse_anglescan_ivs(plasma_settings, probe=probes[1], ax=ax[0,1])
plot_coarse_anglescan_ivs(plasma_settings, probe=probes[2], ax=ax[1,0])
plot_coarse_anglescan_ivs(plasma_settings, probe=probes[3], ax=ax[1,1])
ax[1,1].set_ylabel('')
ax[0,1].set_ylabel('')
ax[0,0].set_xlabel('')
ax[0,1].set_xlabel('')
ax[0,0].set_title('')
ax[0,1].set_title('')
ax[1,0].set_title('')
ax[1,1].set_title('')
fig.suptitle('')
fig.tight_layout() #rect=[0, 0, 0.95, 1])
fig.text(label_pos['x0'], label_pos['y0'], probes[0], ha='center', fontsize=18)
fig.text(label_pos['x1'], label_pos['y0'], probes[1], ha='center', fontsize=18)
fig.text(label_pos['x0'], label_pos['y1'], probes[2], ha='center', fontsize=18)
fig.text(label_pos['x1'], label_pos['y1'], probes[3], ha='center', fontsize=18)
def plot_coarse_anglescan_ivs_twoprobe(plasma_settings, probes=['S', 'L'], sharey=False,
label_pos=DEFAULT_LABEL_POS):
fig, ax = plt.subplots(2, sharex=True, figsize=[5,8],
sharey=sharey)
for i, probe in enumerate(probes):
plot_coarse_anglescan_ivs(plasma_settings, probe=probe, ax=ax[i])
# ax[1].set_ylabel('')
ax[0].set_xlabel('')
ax[0].set_title('')
ax[1].set_title('')
fig.suptitle('')
fig.tight_layout() #rect=[0, 0, 0.95, 1])
fig.text(label_pos['x0'], label_pos['y0'], probes[0], ha='center', fontsize=18)
fig.text(label_pos['x0'], label_pos['y1'], probes[1], ha='center', fontsize=18)
def plot_coarse_anglescan_ivs_twoprobe_horiz(plasma_settings, probes=['S', 'L'], sharey=False,
label_pos=DEFAULT_LABEL_POS):
fig, ax = plt.subplots(1, 2, sharex=True, figsize=[9, 4.75],
sharey=sharey)
for i, probe in enumerate(probes):
plot_coarse_anglescan_ivs(plasma_settings, probe=probe, ax=ax[i])
ax[1].set_ylabel('')
# ax[0].set_xlabel('')
ax[0].set_title('')
ax[1].set_title('')
fig.suptitle('')
fig.tight_layout() #rect=[0, 0, 0.95, 1])
fig.text(label_pos['x0'], label_pos['y0'], probes[0], ha='center', fontsize=18)
fig.text(label_pos['x1'], label_pos['y0'], probes[1], ha='center', fontsize=18)
def plot_ts_profiles(plasma_settings, probes=['S', 'L', 'B', 'R'], shot_index={'tilt':10}, probe_shift=-1.75):
fig, ax = plt.subplots(2, sharex=True)
shifted_probes = magnum_probes.probe_position.copy()
for probe in magnum_probes.probe_position.keys():
shifted_probes[probe] += probe_shift
fit_ds = fit_datasets[plasma_settings]
fitter = fts.FullIVFitter()
ts_ds = fit_ds.sel(probe=probes[0], **shot_index)
ax[0].errorbar('ts_radial_pos', 'ts_temperature', yerr='ts_d_temperature', data=ts_ds)
ax[1].errorbar('ts_radial_pos', 'ts_density', yerr='ts_d_density', data=ts_ds)
for probe, pos in shifted_probes.items():
probe = probe.upper()
# pos -= 10
ax[0].axvline(x=pos, linestyle='dashed', linewidth=0.75, color=probe_colours[probe], label=probe)
ax[1].axvline(x=pos, linestyle='dashed', linewidth=0.75, color=probe_colours[probe], label=probe)
ax[0].legend()
ax[1].legend()
plot_ts_profiles(anglescan_plset, shot_index={'tilt':2}, probe_shift=-2.5)
```
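`plot_paper_tiltscan_fitparams_extra` above compares probe dimensions against the ion Larmor radius via `lp.ion_larmor_radius`; the commented-out expression next to that call suggests the standard formulary estimate. A sketch of that estimate, assuming the result is in micrometres with the temperature in eV and the field in tesla (the function below is illustrative, not flopter's implementation):

```python
import numpy as np

def ion_larmor_radius_um(temp_ev, b_field_t, mu=1, Z=1):
    """Ion Larmor radius in micrometres, formulary form:
    rho_i = 102 * sqrt(mu * T[eV]) / (Z * B[T])."""
    return 102.0 * np.sqrt(mu * temp_ev) / (Z * b_field_t)

# A 1 eV hydrogen ion in a 1 T field gyrates with a ~102 um radius
print(ion_larmor_radius_um(1.0, 1.0))
# Helium (mu=4, Z=2) at the field strengths scanned above
print(ion_larmor_radius_um(1.0, 0.8, mu=4, Z=2))
```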
## Running the Plotting Functions!
```
# anglescan_plset = (0.8, 'H')
# anglescan_plset = (1.2, 'H')
# anglescan_plset = (1.5, 'H')
# anglescan_plset = (0.8, 'He')
anglescan_plset = (1.2, 'He')
# plot_all_tiltscan_fitparams(anglescan_plset, probes=('S', 'L'), fontsize=14)
# plot_paper_tiltscan_fitparams_4(anglescan_plset, probes=('S', 'L'), fontsize=14)
plot_paper_tiltscan_fitparams_3(anglescan_plset, probes=('S',), fontsize=14, orientation='h', ts_offset=-1.75)
# plot_coarse_anglescan_ivs_multiprobe(anglescan_plset, label_pos={'x0': 0.46, 'x1': 0.95, 'y0': 0.93, 'y1': 0.46})
# plot_coarse_anglescan_ivs_twoprobe(anglescan_plset, probes=['S', 'L'])
# plot_coarse_anglescan_ivs_twoprobe_horiz(anglescan_plset, sharey=False, probes=['S', 'L'],
# label_pos={'x0': 0.48, 'x1': 0.96, 'y0': 0.90})
fig, ax = plt.subplots(1, 2, figsize=[9, 4.75])
single_anglescan_plset = (1.2, 'H')
plot_coarse_anglescan_ivs(single_anglescan_plset, probe='R', ax=ax[0])
single_anglescan_plset = (1.2, 'He')
plot_coarse_anglescan_ivs(single_anglescan_plset, probe='B', ax=ax[1])
ax[0].set_title('')
ax[1].set_title('')
fig.text(0.46, 0.9, 'R', ha='center', fontsize=18)
fig.text(0.95, 0.9, 'B', ha='center', fontsize=18)
fig.suptitle('')
fig.tight_layout()
fig, ax = plt.subplots(3, 2, sharex=True, figsize=[9,10])
ne_ylim = [1e16, 3e20]
plot_paper_tiltscan_fitparams_3(anglescan_plset, probes=('S', 'L'), fontsize=14, orientation='v', ax=ax[:,0])
plot_paper_tiltscan_fitparams_3(anglescan_plset, probes=('B',), fontsize=14, orientation='v', ax=ax[:,1],
a_ylim=[1e-3, 1e-1])#, ne_ylim=ne_ylim)
fig.tight_layout()
def plot_fit_iv(plasma_settings, ax=None, probe='S', iv_indices={'tilt':10}):
if ax is None:
fig, ax = plt.subplots(figsize=[5, 4.75])
else:
fig = ax.figure
fit_ds = fit_datasets[plasma_settings] #.sel(time=slice(None, None, 100))
fitter = fts.FullIVFitter()
for i, (index, value) in enumerate(iv_indices.items()):
ds = fit_ds.sel(probe=probe).sel({index: value})
ds['current'] = -ds['current']
print(ds)
iv_data = iv.IVData.from_dataset(ds)
shot_fit = iv_data.multi_fit(**{'mode': 3, 'trimming_vals': (0.16, 0.16, 0.02), 'sat_region': -25})
fit_dens = magnum_probes.probe_s.get_density(shot_fit.get_isat(), shot_fit.get_temp(),
alpha=np.radians(ds.tilt.values))
ax.errorbar(ds['voltage'].values, ds['current'].values, yerr=ds['d_current'],
marker=markers[i], mfc='none', color=probe_colours[probe], alpha=0.5,
linestyle='none', zorder=-3)
ax.plot(*shot_fit.get_fit_plottables(), '-', color=probe_colours[probe],
label=r'$T_e$={:.2g} eV, $n_e$={:.2g} m$^{{-3}}$, $\chi^2_{{\nu}}$ = {:.2g}'
.format(shot_fit.get_temp(), fit_dens, shot_fit.reduced_chi2))
ax.set_ylabel('Probe Current [A]')
ax.set_xlabel('Probe Voltage [V]')
ax.axhline(y=0, **c.AX_LINE_DEFAULTS)
ax.legend(loc='lower left')
# ax.legend(title=r'$\theta$', loc='lower left')
# fig.suptitle('IV Characteristics for {}T {} Plasma'.format(*plasma_settings))
fig.tight_layout() #rect=[0, 0, 0.95, 1])
plot_fit_iv(anglescan_plset, probe='L', iv_indices={'tilt':2})
fig, ax = plt.subplots(1, 2, figsize=[8,5], sharey=True)
plot_fit_iv(anglescan_plset, probe='L', iv_indices={'tilt':2}, ax=ax[0])
plot_fit_iv(anglescan_plset, probe='S', iv_indices={'tilt':2}, ax=ax[1])
ax[1].set_ylabel('')
ax[0].set_title('Probe L')
ax[1].set_title('Probe S')
fig.tight_layout()
import importlib
importlib.reload(mgut)
importlib.reload(lp)
# for probe in probes:
# plot_probe_anglescans(probe, fit_datasets)
# plot_probe_anglescans('S', fit_datasets)
```
## Getting the TS Dataset to Plot Errorbars Properly
```
full_ts_ds = xr.open_dataset('full_ts_dataset.nc')
shot_number_ds = meta_data_ds.dropna('shot_number', subset=['adc_index']).swap_dims({'shot_number':'adc_index'}).sel(adc_index=indices_08T_H)
shot_numbers = shot_number_ds.shot_number
shot_number_ds
meta_data_ds.sel(shot_number=36)
H_08T_ts = full_ts_ds.sel(ts_time=np.isin(full_ts_ds.shot_number, shot_numbers))
H_08T_ts
mean_percentage_dens_ts = (H_08T_ts['ts_d_density'] / H_08T_ts['ts_density']).mean('ts_time').values
mean_percentage_temp_ts = (H_08T_ts['ts_d_temperature'] / H_08T_ts['ts_temperature']).mean('ts_time').values
for shot_number in H_08T_ts.shot_number.values[0:1]:
shot_ds = H_08T_ts.where(H_08T_ts.shot_number == shot_number, drop=True)
md_shot_ds = meta_data_ds.sel(shot_number=shot_number)
fig, ax = plt.subplots(2, sharex=True)
fig.suptitle(f'{shot_number} - {md_shot_ds.shot_tilt.values}')
ax[0].errorbar('ts_radial_pos', 'ts_temperature', yerr='ts_d_temperature', data=md_shot_ds, label='averaged')
ax[1].errorbar('ts_radial_pos', 'ts_density', yerr='ts_d_density', data=md_shot_ds, label='averaged')
for ts_time in shot_ds.ts_time.values:
ds = shot_ds.sel(ts_time=ts_time)
ax[0].errorbar('ts_radial_pos', 'ts_temperature', yerr='ts_d_temperature', data=ds, label='raw')
ax[1].errorbar('ts_radial_pos', 'ts_density', yerr='ts_d_density', data=ds, label='raw')
for shot_number in H_08T_ts.shot_number.values[0:5]:
shot_ds = H_08T_ts.where(H_08T_ts.shot_number == shot_number, drop=True)
md_shot_ds = meta_data_ds.sel(shot_number=shot_number)
fig, ax = plt.subplots(2, sharex=True)
fig.suptitle(f'{shot_number} - {md_shot_ds.shot_tilt.values}')
ax[0].plot('ts_radial_pos', 'ts_d_temperature', data=md_shot_ds, label='averaged')
ax[1].plot('ts_radial_pos', 'ts_d_density', data=md_shot_ds, label='averaged')
ax[0].plot('ts_radial_pos', 'ts_d_temperature', data=shot_ds.std('ts_time'), label='averaged_alt')
ax[1].plot('ts_radial_pos', 'ts_d_density', data=shot_ds.std('ts_time'), label='averaged_alt')
for ts_time in shot_ds.ts_time.values:
ds = shot_ds.sel(ts_time=ts_time)
ax[0].plot('ts_radial_pos', 'ts_d_temperature', data=ds, label='raw')
ax[1].plot('ts_radial_pos', 'ts_d_density', data=ds, label='raw')
ax[0].plot(ds['ts_radial_pos'].values, ds['ts_temperature'].values * mean_percentage_temp_ts, label='estimated')
ax[1].plot(ds['ts_radial_pos'].values, ds['ts_density'].values * mean_percentage_dens_ts, label='estimated')
for axis in ax:
axis.legend()
# axis.set_yscale('log')
```
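The "estimated" error bars above use a simple heuristic: average the relative TS error over time, then scale each raw profile by that mean fraction. A standalone sketch of the idea (array shapes and random data are illustrative assumptions, not the actual TS dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the TS variables, dims (ts_time, ts_radial_pos)
temperature = rng.uniform(1.0, 5.0, size=(20, 30))
d_temperature = 0.1 * temperature * rng.uniform(0.5, 1.5, size=temperature.shape)

# Mean relative error over time: one fraction per radial position
mean_frac = (d_temperature / temperature).mean(axis=0)

# Estimated error bars for a single raw profile: value times mean fraction
estimated_err = temperature[0] * mean_frac
```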
## Answering Pete's Questions
Pete asked me a few questions in an email following the presentation. The questions were:
1. If I use the $T_e$ from the TS in the density calculation, does the larger probe underestimate (compared to TS) more than the small probe?
1. Look at TS profiles with different distances from probes.
1. Compare the $\lambda_D$ / $T_e$ / $n_i$ produced from the $I_{sat}$ and from $a$
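For reference when comparing the $\lambda_D$ estimates, the standard electron Debye length (presumably what `lp.debye_length` computes below) can be written as a minimal standalone function. The constants and signature here are assumptions for illustration, not the library's actual API:

```python
import numpy as np

EPS_0 = 8.8541878128e-12  # vacuum permittivity [F/m]
Q_E = 1.602176634e-19     # elementary charge [C]

def electron_debye_length(temp_ev, dens):
    """Electron Debye length [m] from T_e in eV and n_e in m^-3.

    With T_e in eV, k_B * T_e = Q_E * T_e, so
    lambda_D = sqrt(eps_0 * T_e / (Q_E * n_e)).
    """
    return np.sqrt(EPS_0 * temp_ev / (Q_E * dens))
```

For a ~5 eV, ~1e19 m$^{-3}$ plasma this gives a micron-scale $\lambda_D$, which sets the scale for the sheath-expansion comparisons below.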
```
def plot_temperatures_pete(plasma_settings, ax=None, probes=('S', 'L'), fontsize=18):
layout = [3]
figsize = [4.5,9.5]
if ax is None:
fig, ax = plt.subplots(*layout, sharex=True,
figsize=figsize)
else:
fig = ax[0].figure
ds = fit_datasets[plasma_settings]
ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
ds['ts_d_density'] = ds['ts_density'] * mean_percentage_dens_ts
labels = []
handles = []
fp_markers = ['.', 's']
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_perprobe_data = mgut.interpolate_ts_position(ds_probe, aggregate_dims=None)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
dummy_tilt = np.linspace(0.00001, 11.0, 1100)
calced_a = lp.calc_sheath_expansion_param(
ave_ts_probe_temps[probe_label][0],
ave_ts_probe_denss[probe_label][0],
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(dummy_tilt),
c_1=0.9,
c_2=0.6,
)
probe_a = lp.calc_sheath_expansion_param(
ds_probe['temp'],
ds_probe['dens'],
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(ds_probe['tilt']),
c_1=0.9,
c_2=0.6,
) - ((np.radians(ds_probe['tilt'].values) * -5.168444408771983e-05) + 1.8689974068124587e-06)
ts_calced_dens = probe_dims.get_density(ds_probe['isat'],
ts_probe_temps[probe_label][:,0],
np.radians(ds_probe['tilt']))
ax[0].errorbar('tilt', 'dens', yerr='d_dens', data=ds_probe, linestyle='none',
label=probe, color=probe_colours[probe], marker=probe_markers[probe_label], mfc='none')
ax[1].errorbar('tilt', 'temp', yerr='d_temp', data=ds_probe, label=probe, linestyle='none',
color=probe_colours[probe], marker=probe_markers[probe_label], mfc='none')
# Plot the ts values at the probe positions using interpolaton
ax[0].errorbar(ds_probe['tilt'].values, ts_probe_denss[probe_label][:,0],
label=f'TS at {probe}', yerr=ts_probe_d_denss[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
ax[1].errorbar(ds_probe['tilt'].values, ts_probe_temps[probe_label][:,0],
label=f'TS at {probe}', yerr=ts_probe_d_temps[probe_label][:,0],
color=probe_colours[probe], linewidth=1, linestyle='dashed')
ax[0].plot(ds_probe['tilt'].values, ts_calced_dens, label=f'{probe} with TS Temp',
color=probe_colours[probe], linestyle='none', marker='o')
ax[2].errorbar('tilt', 'a', yerr='d_a', data=ds_probe, linestyle='none', color=probe_colours[probe],
marker=probe_markers[probe_label], mfc='none', label=probe)
calced_a_label = r'$a_{{{probe}, calc}}$'.format(probe=probe)
ax[2].errorbar(dummy_tilt, calced_a, label=calced_a_label, fmt='-',
color=probe_colours[probe], linewidth=0.8, alpha=0.6)
ax[2].errorbar(ds_probe['tilt'].values, probe_a, label=calced_a_label, fmt='o',
color=probe_colours[probe])
ax[0].set_ylabel(r'$n_e$ [m$^{-3}$]', fontsize=fontsize)
ax[0].set_yscale('log')
ax[1].set_ylabel('$T_e$ [eV]', fontsize=fontsize)
ax[2].set_ylabel(r'$a$', fontsize=fontsize)
ax[2].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
# ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
ax[2].set_yscale('log')
ax[2].set_xlim(-0.5, 10.5)
ax[2].set_ylim(1e-3, 1e0)
fig.tight_layout(pad=0.4) #, rect=[0, 0, 0.8, 1])
# fig.legend(handles, labels, loc='center left', bbox_to_anchor=(0.75,0.5))
for axis in ax:
axis.legend()
return ax
def calc_debye_from_a(a, alpha, probe, c_1=0.4, c_2=0.5):
return (a * (probe.L + probe.g) * np.sqrt(np.sin(alpha))) / (c_1 + (c_2 / np.tan(alpha)))
def calc_debye_from_alt_a(a, alpha, probe, c_1=0.4, c_2=0.5):
return a * (((((probe.L + probe.g) * np.tan(alpha)) + (probe.L * np.tan(probe.theta_p)) - probe.d_perp)
* np.sqrt(np.sin(alpha)))
/ (((c_1 * (np.tan(alpha) + np.tan(probe.theta_p))) + c_2)))
def plot_debye_pete(plasma_settings, probes=('S', 'L'), fontsize=18):
figsize = [4.5, 7]
fig, ax = plt.subplots(2, sharex=True, figsize=figsize)
fig2, ax2 = plt.subplots(2, sharex=True, figsize=figsize)
ds = fit_datasets[plasma_settings]
ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
ds['ts_d_density'] = ds['ts_density'] * mean_percentage_dens_ts
labels = []
handles = []
fp_markers = ['.', 's']
for i, probe in enumerate(probes):
probe_label = probe[0]
ds_probe = ds.sel(probe=probe)
ds['ts_d_temperature'] = ds['ts_temperature'] * mean_percentage_temp_ts
probe_dims = magnum_probes[probe_label]
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(ds)
ts_perprobe_data = mgut.interpolate_ts_position(ds_probe, aggregate_dims=None)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
debye_from_a = calc_debye_from_a(ds_probe['a'], np.radians(ds_probe['tilt']), probe_dims)
# debye_from_a_corr = debye_from_a / ((0.5 * np.radians(ds_probe['tilt'].values)) + 0.16)
debye_from_a_corr = debye_from_a + ((np.radians(ds_probe['tilt'].values) * -5.168444408771983e-05) + 1.8689974068124587e-06)
# debye_from_alt_a = calc_debye_from_alt_a(ds_probe['a'], np.radians(ds_probe['tilt']), probe_dims)
debye_from_tn = lp.debye_length(ds_probe['temp'], ds_probe['dens'])
ax[i].plot(ds_probe['tilt'].values, debye_from_a, label=r'{} - a'.format(probe),
color=probe_colours[probe], linestyle='none', marker='s', mfc='none')
ax[i].plot(ds_probe['tilt'].values, debye_from_a_corr, label=r'{} - a'.format(probe),
color=probe_colours[probe], linestyle='none', marker='^', mfc='none')
ax[i].plot(ds_probe['tilt'].values, debye_from_tn, label=r'{} - $T_e$ and $n_e$'.format(probe),
color=probe_colours[probe], linestyle='none', marker='.', mfc='none')
ax2[0].plot(np.radians(ds_probe['tilt'].values), debye_from_tn - debye_from_a, label=r'{}'.format(probe),
color=probe_colours[probe], linestyle='none', marker='s', mfc='none')
ax2[1].plot(np.radians(ds_probe['tilt'].values), debye_from_a - debye_from_tn, label=r'{}'.format(probe),
color=probe_colours[probe], linestyle='none', marker='s', mfc='none')
sl_fitter = fts.StraightLineFitter()
fit_data = sl_fitter.fit(np.radians(ds_probe['tilt'].values), debye_from_tn - debye_from_a)
fit_data.plot(ax=ax2[0])
fit_data.print_fit_params()
# ax2[1].axhline(y=1, color=probe_colours[probe], linestyle='--', linewidth=0.75, alpha=0.8)
# dummy_tilt = np.linspace(0.00001, 11.0, 1100)
# ax2[1].plot(dummy_tilt, 1/((0.5 * dummy_tilt) + 0.16), label=r'{}'.format(probe),
# color=probe_colours[probe], linestyle='--', marker='', alpha=0.8)
# ax2[1].plot(dummy_tilt, 1/fit_data.fit_function(dummy_tilt), label=r'{}'.format(probe),
# color=probe_colours[probe], linestyle='-', marker='', alpha=0.8)
ax[0].set_ylabel(r'$\lambda_D$ [m]', fontsize=fontsize)
ax[1].set_ylabel(r'$\lambda_D$ [m]', fontsize=fontsize)
ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
# ax[1].set_xlabel(r'$\theta$ [$^{\circ}$]', fontsize=fontsize)
# ax[2].set_yscale('log')
ax[0].set_xlim(-0.5, 10.5)
# ax2[0].set_xlim(-0.5, 10.5)
# ax[2].set_ylim(1e-3, 1e0)
fig.tight_layout(pad=0.4) #, rect=[0, 0, 0.8, 1])
fig2.tight_layout(pad=0.4) #, rect=[0, 0, 0.8, 1])
for axis in ax:
axis.legend()
return ax
plot_temperatures_pete(anglescan_plset, probes=('S', 'L'), fontsize=14)
# plot_debye_pete(anglescan_plset, probes=('S', 'L'), fontsize=14)
```
## Attempt at a fixed $a$ plot
```
fit_ds = fit_datasets[0.8, 'H']
ds = fit_ds.sel(probe='L').sel(tilt=10.0)
ds
iv_data = iv.IVData.from_dataset(ds)
iv_data.plot()
fit_data = iv_data.multi_fit()
fit_data.plot()
print(f'fd: {fit_data.get_sheath_exp()}')
print(f'ds: {ds.a.values}')
ds.tilt.values
ave_ts_probe_temps, ave_ts_probe_denss = interpolate_ts_position(fit_ds)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = mgut.interpolate_ts_position(fit_ds, aggregate_dims='probe')
ts_perprobe_data = mgut.interpolate_ts_position(fit_ds, aggregate_dims=None)
ts_probe_temps, ts_probe_denss, ts_probe_d_temps, ts_probe_d_denss = ts_perprobe_data
alt_dummy_tilt = np.linspace(0.0, 11.0, 1100)
probe_dims = lp.MagnumProbes()['L']
d_perp = probe_dims.d_perp
theta_p = probe_dims.theta_p
print(d_perp, theta_p)
alt_calced_a = lp.calc_2d_box_sheath_expansion_param(
# ave_ts_probe_temps['L'][0],
# ave_ts_probe_denss['L'][0] / 2,
fit_data.get_temp(),
ds.dens.mean().values,
probe_dims.get_2d_probe_length(),
probe_dims.g,
np.radians(ds.tilt.values),
d_perp,
theta_p,
c_1=2.0,
c_2=1.0,
)
fit_data.get_sheath_exp() / alt_calced_a
ds
corr_ds = ds.drop('fit_success_fl').where(ds.current < 0, drop=True)
# corr_ds = corr_ds.where(corr_ds.voltage > -60 , drop=True)
corr_v = np.float_power(np.abs((corr_ds.voltage - iv_data.get_vf())/ave_ts_probe_temps['L'][0]), .75)
# corr_v = corr_v.where()
fig, ax = plt.subplots(1, 2)
ax[0].plot(corr_ds.voltage, alt_calced_a * corr_v)
ax[1].plot(corr_ds.voltage, corr_ds.current + (alt_calced_a * corr_v))
iv_fitter = fts.FullIVFitter()
fit_data_corr = iv_fitter.fit(corr_ds.voltage.values, corr_ds.current.values + (alt_calced_a * corr_v.values))
fit_data_corr.plot()
print(f'fd: {fit_data.get_temp()}')
print(f'fdc: {fit_data_corr.get_temp()}')
print(f'ds: {ds.temp.mean().values}')
importlib.reload(fts)
fiv_fitter = fts.ExperimentalIVFitter(probe=probe_dims, theta=10.0)
fiv_fitter.set_fixed_values({'c_1':2.0, 'c_2':1.0})
fiv_alt_fitter = fts.Experimental3IVFitter(probe=probe_dims, c_1=16, c_2=5)
len(fiv_fitter._param_labels)
fit5_data = fiv_alt_fitter.fit(corr_ds.voltage.values, corr_ds.current.values)
# initial_vals=[-0.548, 2.058, -9.870])
fit5_data.plot()
dummy_v = np.linspace(-80.0, 10.0, 1001)
initial_params = [-0.01, 10.0, -2.5, 2.5, 1.0]
fig, ax = plt.subplots()
ax.plot(dummy_v, fiv_fitter.fit_function(dummy_v, *initial_params))
n_e = probe_dims.get_density(np.abs(-1), 5, np.radians(10.0))
a = probe_dims.get_sheath_exp_param(5, n_e, np.radians(10.0), c_1=0.5, c_2=0.6, form='rotated')
print(n_e, a)
fiv_fitter.fit_function(dummy_v, -1.0, 5.0, -2.5, 0.5, 0.6)
```
# Conditional VAEs for Uncertainty Quantification in Fatigue Simulations
If running on Google Colab, remember to change your runtime to "GPU":
**(Runtime > Change runtime type > Hardware accelerator > select GPU)**
```
#@title Clone repository, and prepare directory for the rest of computations:
!git clone https://github.com/mylonasc/fatigue_cvae.git
!cp -r fatigue_cvae/data .
!cp fatigue_cvae/utils.py .
!pip install tqdm
# Imports
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
import numpy as np
from utils import BladeCSMeshPlotter, default_mesh_file,DELDatasetPreProc, PlotFatvals
import matplotlib.pyplot as pplot
from tqdm import tqdm
tf.__version__, tfp.__version__
# A conditional VAE (IWAE) model using the functional Keras API
class CVAEModel:
def __init__(self, n_conditioning, input_dims, n_latent,layer_width = 64, n_layers_enc = 2,n_layers_dec = 2,
activation = "relu", n_iwae = 40, enc_use_cond = True):
"""
A simple FFNN conditional VAE model [1] for tf 2.4
Implements the importance weighted autoencoder [2] by sampling the latent space multiple times.
A similar implementation (initially in tensorflow 1.13) was used for [3].
parameters:
n_conditioning : number of conditioning variables
input_dims : number of dimensions of the input
n_latent : the size of the latent variable.
layer_width : (default: 64) a width value for all layers
n_layers_enc : (default: 2) the number of encoder layers (with activations). The part of the
encoder that parametrizes the std of the latent is passed through a softplus.
n_layers_dec : (default: 2) number of decoder layers
activation : ['relu'] what activation to use
n_iwae : (default: 40) number of samples from the latent space passed through the decoder for each pass.
references:
[1] Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes."
arXiv preprint arXiv:1312.6114 (2013).
[2] Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. "Importance weighted autoencoders."
arXiv preprint arXiv:1509.00519 (2015).
[3] Mylonas, Charilaos, Imad Abdallah, and E. N. Chatzi. "Deep unsupervised learning
for condition monitoring and prediction of high dimensional data with application
on windfarm scada data." Model Validation and Uncertainty Quantification, Volume 3.
Springer, Cham, 2020. 189-196.
"""
all_params = [n_conditioning, input_dims, layer_width, n_layers_dec, n_layers_enc, n_latent,activation, n_iwae]
pnames = ["n_conditioning", "input_dims", "layer_width", "n_layers_dec", "n_layers_enc", "n_latent","activation","n_iwae"]
self.cvae_params = {k: v for v,k in zip(all_params,pnames)}
if enc_use_cond:
input_dims_xx = input_dims + n_conditioning
else:
input_dims_xx = input_dims
self.encoder = CVAEModel.make_encoder(input_dims = input_dims_xx,
layer_width = layer_width,
n_layers = n_layers_enc ,
n_latent= n_latent)
prior = tfd.MultivariateNormalDiag(loc = tf.zeros(n_latent), scale_diag=tf.ones(n_latent))
posterior = tfp.layers.DistributionLambda(make_distribution_fn = lambda t: tfd.MultivariateNormalDiag(loc = t[:,0:n_latent],
scale_diag= 1e-5 + tf.nn.softplus(t[:,n_latent:])),
convert_to_tensor_fn= lambda s : s.sample(n_iwae))
X = tf.keras.Input(shape = (input_dims,), name = "X_in")
W = tf.keras.Input(shape = (n_conditioning,) , name = "W")
if enc_use_cond:
X_in = tf.keras.layers.concatenate([X,W], axis = -1)
else:
X_in = X
vae_params = self.encoder(X_in)
posterior_out = posterior(tf.keras.layers.concatenate(vae_params))
kl_div = tfd.kl_divergence(tfd.MultivariateNormalDiag(loc = vae_params[0], scale_diag=vae_params[1] ), prior)
self.posterior_out = posterior_out
dec_output = input_dims
self.decoder = CVAEModel.make_decoder(output_dims = dec_output,
layer_width = layer_width,
conditioning_dims = n_conditioning,
n_layers = n_layers_dec,
n_latent= n_latent,
n_iwae = n_iwae,
activation = activation,
input_tensor = posterior_out)
decoder_out = self.decoder([posterior_out, W])
y_decoder_out = decoder_out
self.vae_model = tf.keras.Model(inputs = [X,W] , outputs = [y_decoder_out, posterior_out, kl_div])
self.posterior_out = posterior_out
self.prior = prior
self.posterior = posterior
@staticmethod
def make_encoder(input_dims = None,layer_width = None, n_layers = None,n_latent = None,activation = None):
x_in = tf.keras.layers.Input(shape = (input_dims,))
ln = tf.keras.layers.Dense(layer_width)(x_in)
for l in range(n_layers - 1):
ln = tf.keras.layers.Dense(layer_width, activation = activation)(ln)
ln = tf.keras.layers.Dropout(rate = 0.2)(ln)
l_out_mean = tf.keras.layers.Dense(n_latent)(ln)
l_out_std = tf.keras.layers.Dense(n_latent, activation = tf.nn.softplus)(ln)
return tf.keras.Model(x_in, [l_out_mean, l_out_std])
@staticmethod
def make_decoder(output_dims = None,
n_latent= None ,
layer_width = None,
conditioning_dims = None ,
n_layers = None ,
activation = None,
n_iwae = None,
input_tensor = None):
# This is to circumvent a keras bug (see https://github.com/tensorflow/probability/issues/1200)
dd = tf.identity(input_tensor, name = "z_in")
z_in = tf.keras.layers.Input(tensor = dd, name = "z_in")
w_in = tf.keras.layers.Input(shape = (conditioning_dims, ), name = "w_in")
w_in_tiled = tf.tile(w_in[tf.newaxis, ... ] , [n_iwae, 1,1] )
yyin = tf.keras.layers.concatenate([z_in, w_in_tiled])
ln = tf.keras.layers.Dense(layer_width)(yyin)
for l in range(n_layers - 1):
ln = tf.keras.layers.Dense(output_dims, activation = activation)(ln)
ln = tf.keras.layers.Dropout(rate = 0.2)(ln)
y_out = tf.keras.layers.Dense(output_dims)(ln)
return tf.keras.Model(inputs=[z_in, w_in], outputs = y_out)
```
# Load the fatigue dataset
The dataset consists of 1999 simulations of fatigue of the blade root. Please refer back to the paper for more details.
```
dataset = DELDatasetPreProc()
x_norm = dataset.get_normalized_data_DEL()
w_norm = dataset.get_normalized_data_X()
#@Title some code that is needed for plotting.
plot_blade = BladeCSMeshPlotter(default_mesh_file, dataset.hasval_inds)
mesh = plot_blade.mesh
nl_inds = np.array(mesh.nl_2d[:,0], dtype = int)  # np.int is removed in NumPy >= 1.24
el_loc = mesh.nl_2d[:,1:3]
elnums = mesh.el_2d[:,0]
node_1 = el_loc[mesh.el_2d[:,1]-1]
node_2 = el_loc[mesh.el_2d[:,2]-1]
node_3 = el_loc[mesh.el_2d[:,3]-1]
node_4 = el_loc[mesh.el_2d[:,4]-1]
node_pos_avg = (node_1 + node_2 + node_3 + node_4 )/4
r0 =( node_pos_avg[:,0]**2 + node_pos_avg[:,1] ** 2 ) **0.5
theta_el =np.arctan2( node_pos_avg[:,1] , node_pos_avg[:,0])
# Filter the nodes that have a value for DEL:
r0 = r0[plot_blade.mesh_finite_mask]
theta_el = theta_el[plot_blade.mesh_finite_mask]
pfv = PlotFatvals(theta_el, r0)
```
# Training a CVAE on fatigue data
```
# initialize the optimizer:
opt = tf.keras.optimizers.Adam(learning_rate= 0.001)
## Train-test split:
from sklearn.model_selection import train_test_split
xtrain,xtest, wtrain , wtest = train_test_split(x_norm, w_norm, test_size=0.1)
input_dims = x_norm.shape[-1]
cond_size = w_norm.shape[-1]
nlatent = 30
cvae = CVAEModel( cond_size,input_dims, nlatent, layer_width=300)
def eval_loss(X,W, beta = 0.1 ):
Xhat ,z_out, kl_loss = cvae.vae_model([X,W])
rec_loss = tf.reduce_mean(tf.pow(Xhat-X,2),-1)
return rec_loss + beta * kl_loss/nlatent , rec_loss, kl_loss, Xhat, z_out
train_losses = []
test_losses = []
#@title A function to anneal the beta parameter
def beta_cyclic_anneal(epoch, beta_0 = 0.01,init_beta_epochs = 10, burnin_beta = 20,
beta_max = 0.5, beta_min = 2.,
period = 10):
"""
A function for changing the "beta" parameter during training: a fixed initial value,
a linear burn-in ramp, and then a cyclic (sawtooth) annealing schedule. Feel free to experiment with it!
parameters:
beta_0 : starting value for beta
init_beta_epochs : number of epochs that beta remains unchanged at beta_0
burnin_beta : the epoch by which the beta value is (linearly) ramped up,
before the start of the annealing schedule
beta_max : the max value for beta
beta_min : the min value for beta
period : how many epochs to go from min to max beta
"""
if epoch <init_beta_epochs:
return beta_0
if epoch>=init_beta_epochs and epoch < burnin_beta:
# ramp-up
return beta_0 + ((beta_max - beta_0)/(burnin_beta-init_beta_epochs)) * (epoch-init_beta_epochs)
if epoch >= init_beta_epochs:
# sawtooth cycles - max/min beta:
db = (beta_max - beta_min)/period
return beta_max - (epoch - init_beta_epochs)%period * db
pplot.plot([beta_cyclic_anneal(ee) for ee in range(100)],'.-')
pplot.title("Beta annealing schedule.")
pplot.grid()
# Training loop:
epochs = 400
for e in tqdm(range(epochs)):
beta_curr = beta_cyclic_anneal(e)
with tf.GradientTape() as tape:
tot_loss,rec_loss, kl_loss,Xhat, z_out = eval_loss(xtrain, wtrain,beta = beta_curr)
grads = tape.gradient(tot_loss, cvae.vae_model.weights)
opt.apply_gradients(zip(grads, cvae.vae_model.weights))
train_losses.append([np.mean(tot_loss.numpy().flatten()), np.mean(rec_loss), np.mean(kl_loss)])
if e % 50 == 0:
test_tot_loss, test_rec_loss, test_kl_loss, test_xout, test_zout = eval_loss(xtest, wtest, beta = beta_curr)
test_losses.append([np.mean(test_tot_loss), np.mean(test_rec_loss), np.mean(test_kl_loss)])
pplot.subplot(2,1,1)
pplot.plot(test_xout[0][0::10].numpy(), xtest[0::10],'.C0')
pplot.plot(xtest[0::10],xtest[0::10])
pplot.show()
ii = 5
xout = cvae.vae_model([xtest, wtest])
pplot.plot(tf.reduce_mean(xout[0],axis=0)[ii])
pplot.plot(xtest[ii])
pplot.show()
pplot.pause(0.1)
```
# Plot CVAE results
The approximate posterior is replaced with the prior. If the KL part of the loss is small and the latent space can be captured with transformations of a spherical Gaussian, the prior should yield realistic samples from the distribution of the MC simulation.
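The generation step below can be pictured with a toy, NumPy-only stand-in for the trained decoder (the linear map and the dimensions here are illustrative assumptions): the conditioning vector $w$ stays fixed while the latent $z$ is drawn from the spherical Gaussian prior instead of the encoder posterior.

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_cond, n_out, n_samples = 30, 3, 5, 40

# Toy linear "decoder" weights, standing in for the trained network
W_dec = rng.normal(size=(n_latent + n_cond, n_out))

def decode(z, w):
    """Map latent samples z (n_samples, n_latent) and one conditioning
    vector w (n_cond,) to outputs of shape (n_samples, n_out)."""
    zw = np.concatenate([z, np.tile(w, (z.shape[0], 1))], axis=1)
    return zw @ W_dec

# Generation: z from the prior N(0, I), not from the encoder posterior
z_prior = rng.standard_normal((n_samples, n_latent))
x_generated = decode(z_prior, np.array([0.5, -1.0, 0.0]))
```

The real model does the same thing, only with `cvae.prior.sample(n_iwae)` feeding the trained Keras decoder together with the tiled conditioning inputs.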
```
# Evaluate the decoder using the spherical gaussian prior:
wcond = wtest
xvals = xtest
niwae = cvae.cvae_params['n_iwae']
z_prior = cvae.prior.sample(niwae)
xout_from_prior = cvae.decoder([np.tile(z_prior[:,np.newaxis,:], [1,wcond.shape[0],1]), wcond])
# Evaluate the VAE:
xout = cvae.vae_model([xvals,wcond])
def scale_to_plot(F_, mm2 = 0.0957):
ee = 0.2
F_ = dataset.unnormalize_DEL(F_)
F_ = (F_**ee )/mm2
return F_
F_post = xout[0].numpy()
F_prior = xout_from_prior.numpy()
Fsc_post = scale_to_plot(F_post)
Fsc_prior = scale_to_plot(F_prior)
Fact = scale_to_plot(xtest)
Fact_train = scale_to_plot(xtrain)
#@title: a function to plot the CVAE samples together with the neighbors from the dataset:
def plot_with_neighbs(wsp, ti, shear, xbounds = 3):
wunnorm = dataset.unnormalize_X(w_norm)
fig, ax = pplot.subplots(nrows=2,ncols=2, figsize = (20,10))
gs = ax[0,0].get_gridspec()
for ax_ in ax[:,0]:
ax_.remove()
axleft = fig.add_subplot(gs[:,0])
vv_unnorm = np.array([[ti,wsp,shear]])
vv = (vv_unnorm-dataset.data_input_mean) / dataset.data_input_std
w_unnorm = dataset.data_input
# Compute 10 samples from the CVAE using the prior and plot:
niwae = cvae.cvae_params['n_iwae']
z_prior = cvae.prior.sample(niwae)
wcond = np.array([])
xout_from_prior = cvae.decoder([np.tile(z_prior[:,np.newaxis,:], [1,vv.shape[0],1]), vv])
xout_from_prior
Fsc_prior_ = scale_to_plot(xout_from_prior)
for k in range(niwae):
pfv.make_plot(Fsc_prior_[k]*500000, 0, opacity=0.1, color="C0",ax = axleft, tight = False)
axleft.set_title("Fatigue estimates\n ti/wsp/se: %2.1f/%2.2f/%2.1f"%(wsp, ti, shear), fontsize = 20)
axleft.set_xlim([-xbounds, xbounds])
axleft.set_ylim([-xbounds, xbounds])
axleft.grid()
# pfv.make_plot(Fact*500000, ii, opacity=0.5, color="C1", ax = ax_curr, tight = False)
# Find nearest neighbors from training set:
nneighbors = 2
inds_closest = np.argsort(np.sum(np.square(vv_unnorm-w_unnorm)*[100.,10.,1],1))[0:nneighbors]
# for kk in range
for n in inds_closest:
pfv.make_plot(scale_to_plot(x_norm[[n]])*500000, 0 , opacity=0.9, color="C1",ax = axleft, tight = False)
# plot nearest neighbor:
ax[0,1].plot(w_unnorm[:,0], w_unnorm[:,1],'.', label = "all data")
ax[0,1].plot(w_unnorm[inds_closest,0], w_unnorm[inds_closest,1],'*', markersize = 15, label = "neighbors")
ax[0,1].plot(vv_unnorm[0,0], vv_unnorm[0,1],'*r', markersize = 15, label = "CVAE conditioning")
ax[0,1].set_xlabel("Ti[pct]"); ax[0,1].set_ylabel("Wsp[m/s]")
ax[0,1].legend(fontsize = 15)
ax[1,1].plot(w_unnorm[:,1], w_unnorm[:,2],'.')
ax[1,1].plot(w_unnorm[inds_closest,1], w_unnorm[inds_closest,2],'*',markersize = 15)
ax[1,1].plot(vv_unnorm[0,1], vv_unnorm[0,2],'*r', markersize = 15, label = "CVAE conditioning")
ax[1,1].set_xlabel("Wsp[m/s]"); ax[1,1].set_ylabel("shear exp")
ax[0,1].grid(True)
ax[1,1].grid(True)
return fig
from ipywidgets import FloatSlider, interact
xbounds = 3
figidx = 0
@interact(wsp = FloatSlider(min = 0,max = 20, value = 5.90),
ti = FloatSlider(min = 0, max = 0.4, value = 0.16, step = 0.4/20),
shear = FloatSlider(min =-1.64, max = 2.15, value = 0))
def plot_with_inputs(wsp, ti, shear):
plot_with_neighbs(wsp, ti, shear)
#@title Run the following for creating an animation
#plot_with_neighbs
ngridpoints = 20
ti0 = 0.05
tigrid = np.linspace(ti0,0.15,ngridpoints)
wsp0 = 5
wspgrid = np.linspace(wsp0,20,ngridpoints)
frame_idx = 0
for kk in range(ngridpoints):
ti = tigrid[kk]
ff = plot_with_neighbs(wsp0, ti, 0.)
ff.savefig("anim_%03i.jpg"%frame_idx)
frame_idx += 1
for kk in reversed(range(ngridpoints)):
ti = tigrid[kk]
ff = plot_with_neighbs(wsp0, ti, 0.)
ff.savefig("anim_%03i.jpg"%frame_idx)
frame_idx += 1
for kk in range(ngridpoints):
ff = plot_with_neighbs(wspgrid[kk], ti0, 0.)
ff.savefig("anim_%03i.jpg"%frame_idx)
frame_idx += 1
for kk in reversed(range(ngridpoints)):
ff = plot_with_neighbs(wspgrid[kk], ti0, 0.)
ff.savefig("anim_%03i.jpg"%frame_idx)
frame_idx += 1
```
# **Correlation Analysis**
```
import os
import warnings
warnings.filterwarnings("ignore", message="invalid value encountered in true_divide")
warnings.filterwarnings("ignore",message="Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range")
warnings.filterwarnings("ignore", message="invalid value encountered in reduce")
import xarray as xr
import numpy as np
import scipy.signal as signal
import math
import pickle as pkl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import cmocean.cm as cm
import esmtools.stats as esmstats
## set up rcParams for plotting
mpl.rcParams['figure.dpi']= 120
def open_metric(var, reg, metric, timescale='monthly', ens_type=''):
writedir = '/home/bbuchovecky/storage/so_predict_derived/'
if metric == 'anom' or metric == 'mean':
subdir = 'CTRL/'+var.upper()+'/'
filename = var.lower()+'_ts_'+reg+'_'+timescale+'_'+metric+'.nc'
if metric == 'ppp':
subdir = 'PPP/'+var.upper()+'/'
if ens_type != '':
ens_type += '_'
filename = var.lower()+'_ts_'+reg+'_'+timescale+'_'+ens_type+'ppp.nc'
return xr.open_dataset(writedir+subdir+filename)
```
# Python Correlation Functions
**Documentation:**
* [numpy.correlate](https://numpy.org/doc/stable/reference/generated/numpy.correlate.html)
* [scipy.signal.correlate](https://docs.scipy.org/doc/scipy/reference/reference/generated/scipy.signal.correlate.html#scipy.signal.correlate)
* [scipy.signal.correlation_lags](https://docs.scipy.org/doc/scipy/reference/reference/generated/scipy.signal.correlation_lags.html#scipy.signal.correlation_lags)
* [matplotlib.pyplot.xcorr](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xcorr.html)
* **[esmtools.stats.corr](https://esmtools.readthedocs.io/en/stable/api/esmtools.stats.corr.html)**
**Helpful Resources:**
* [Wikipedia - Cross-correlation](https://en.wikipedia.org/wiki/Cross-correlation#Cross-correlation_of_stochastic_processes)
* [Wikipedia - Autocorrelation](https://en.wikipedia.org/wiki/Autocorrelation)
* [Why are the results of R's ccf and SciPy's correlate different?](https://stats.stackexchange.com/questions/339782/why-are-the-results-of-rs-ccf-and-scipys-correlate-different/353408)
* [How do I get R's ccf in Python?](https://stackoverflow.com/questions/53959879/how-do-i-get-rs-ccf-in-python)
* [How to interpret the values returned by numpy.correlate and numpy.corrcoef?](https://stackoverflow.com/questions/13439718/how-to-interpret-the-values-returned-by-numpy-correlate-and-numpy-corrcoef)
* [**Oceanography Correlation Analysis**](https://currents.soest.hawaii.edu/ocn_data_analysis/_static/SEM_EDOF.html)
* [Penn State - Cross Correlation Functions and Lagged Regressions](https://online.stat.psu.edu/stat510/lesson/8/8.2)
* [Deterministic Skill Scores](https://metclim.ucd.ie/wp-content/uploads/2017/07/DeterministicSkillScore.pdf)
* [**Applied Time Series Analysis for Fisheries and Environmental Sciences**](https://atsa-es.github.io/atsa/lectures.html)
* [**Correlation within and among time series**](https://atsa-es.github.io/atsa-labs/sec-tslab-correlation-within-and-among-time-series.html) <-- best source
# Definitions
Sample cross-variance function:
> $g_k^{xy} = \frac{1}{n}\sum_{t=1}^{n-k} \left(y_t-\bar{y}\right) \left(x_{t+k}-\bar{x}\right)$
Sample cross-correlation function:
> $r_k^{xy} = \frac{g_k^{xy}}{\sqrt{\mathrm{Var}(x)\,\mathrm{Var}(y)}} = \frac{g_k^{xy}}{\sigma_x\sigma_y}$
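The two definitions translate directly into a manual NumPy implementation (a sketch using the biased $1/n$ normalisation and full-series standard deviations; `esmtools` may normalise slightly differently):

```python
import numpy as np

def sample_cross_corr(x, y, k):
    """Sample cross-correlation r_k^{xy} of y_t with x_{t+k}, for lag k >= 0,
    following the definitions above."""
    n = len(x)
    # Sample cross-variance g_k^{xy} with biased 1/n normalisation
    g_k = np.sum((y[:n - k] - y.mean()) * (x[k:] - x.mean())) / n
    # Normalise by the product of the (population) standard deviations
    return g_k / (x.std() * y.std())

x = np.sin(np.linspace(0, 4 * np.pi, 200))
r0 = sample_cross_corr(x, x, 0)  # autocorrelation at lag 0 is exactly 1
```

This gives a ground truth against which `esmtools.stats.corr` can be checked in the next section.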
# Testing `esmtools.stats.corr`
Notes:
* only works with DataArrays
* need to test with manual computation of cross-correlation
```
esmstats.corr?
## generate a sine signal with specified frequency, amplitude, and x-axis/horizontal shift
def gensignal(freq, amp, shift, length=1, num=100):
time=np.linspace(0,length,num)
signal = amp * np.sin(2 * np.pi * freq * time + shift)
xr_signal = xr.DataArray(
data = signal,
dims = ['time'],
coords = dict(time=time),
attrs = dict(
frequency=freq,
amplitude=amp,
shift=shift))
return xr_signal
sig1 = gensignal(1, 1, np.pi)
sig2 = gensignal(1, 1, 0)
t = 150
leads = np.arange(-t/2, t/2, 1, dtype='int')
corrvals = np.zeros(t)
for (i, l) in zip(range(t), leads):
corrvals[i] = esmstats.corr(sig1, sig2, dim='time', lead=l).values
fig,axes = plt.subplots(2, 1, figsize=(6,5), constrained_layout=True)
sig1.plot.line(ax=axes[0])
sig2.plot.line(ax=axes[0])
axes[1].plot(leads*0.01, corrvals, color='k')
axes[1].axvline(leads[np.argmin(corrvals)]*0.01, ls=':', color='gray')
axes[1].axvline(leads[np.argmax(corrvals)]*0.01, ls='--', color='gray')
axes[1].set_xlabel('lead')
axes[1].set_ylabel('corr coeff')
print(f'Min corr coeff: {np.min(corrvals):7.4} @ lead = {leads[np.argmin(corrvals)]*0.01:5.2}')
print(f'Max corr coeff: {np.max(corrvals):7.4} @ lead = {leads[np.argmax(corrvals)]*0.01:5.2}')
sig1 = open_metric('mld', 'so', 'anom').SouthernOcean
sig2 = open_metric('sie', 'so', 'anom').SouthernOcean
num_leads = 250
leads = np.arange(-num_leads, num_leads, 1, dtype='int')
corrvals = np.zeros(num_leads*2)
for (i, l) in zip(range(num_leads*2), leads):
corrvals[i] = esmstats.corr(sig1, sig2, dim='month', lead=l).values
fig,[ax1a,ax2] = plt.subplots(2, 1, figsize=(6,5), constrained_layout=True)
ax1b = ax1a.twinx()
## time indexing for anomaly data
end_month = None
ax1a.plot(sig1.sel(month=slice(1,end_month)), color='tab:blue', label='mld anom')
ax1b.plot(sig2.sel(month=slice(1,end_month)), color='tab:orange', label='sie anom')
# sig1.sel(month=slice(1,36)).plot.line(ax=ax1a, color='tab:blue')
# sig2.sel(month=slice(1,36)).plot.line(ax=ax1b, color='tab:orange')
## time indexing for mean data
# ax1a.plot(sig1.sel(time=slice('0001-01', '0004-01')), color='tab:blue')
# ax1b.plot(sig2.sel(time=slice('0001-01', '0004-01')), color='tab:orange')
ax1a.set_ylabel('mld anom')
ax1b.set_ylabel('sie anom')
ax1a.legend()
ax1b.legend()
ax2.plot(leads, corrvals, color='k')
ax2.set_ylim(-1, 1)
ax2.set_xlabel('lead (>0 - blue leads orange, <0 - blue lags orange)')
ax2.set_ylabel('corr coeff')
abs_max_corrcoeff = np.max(abs(corrvals))
max_corrcoeff = corrvals[np.argmax(abs(corrvals))]
ax2.axvline(leads[np.argmax(abs(corrvals))], color='gray', label=('max corr coeff (%.3f) @ lead %d' % (max_corrcoeff, leads[np.argmax(abs(corrvals))])))
ax2.legend()
print(f'Abs max corr coeff: {np.max(abs(corrvals)):6.3} @ lead = {leads[np.argmax(abs(corrvals))]:2}\n')
print(f'Min corr coeff: {np.min(corrvals):6.3} @ lead = {leads[np.argmin(corrvals)]:2}')
print(f'Max corr coeff: {np.max(corrvals):6.3} @ lead = {leads[np.argmax(corrvals)]:2}')
## returns cross correlation with lag relative to y1
## (i.e., y2 lags y1)
## lead represented by a negative lag
def xcorr(y1, y2, max_lag=None, start_month=1, direction=None, normed=True):
assert len(y1) == len(y2), 'y1 and y2 dimensions are not equal'
mode = 'full'
method = 'auto'
size = len(y1)
## account for shifted lag=0 month
if start_month != 1:
y1 = y1[(start_month-1):]
y2 = y2[(start_month-1):]
## compute the matrices of cross-correlation and lags
if not normed:
lags = signal.correlation_lags(len(y1), len(y2), mode=mode)
xcorr = signal.correlate(y1, y2, mode=mode, method=method)
if normed:
lags = signal.correlation_lags(len(y1), len(y2), mode=mode)
## normalize with the following:
xcov = signal.correlate(y1 - np.mean(y1), y2 - np.mean(y2), mode=mode, method=method)
xcorr = xcov / (y1.std() * y2.std() * size)
## ensure max_lag is valid
if max_lag is not None:
assert max_lag <= size, 'max_lag is greater than input dimension'
if max_lag is None:
max_lag = size
lo = size - max_lag
hi = size + (max_lag - 1)
if direction == 'lead':
hi = size
if direction == 'lag':
lo = size-1
return xcorr[lo:hi], lags[lo:hi]
oscillations = 2
npts = oscillations * 50
y1 = np.sin(np.linspace(0,np.pi*2*oscillations,npts))
y2 = np.cos(np.linspace(0,np.pi*2*oscillations,npts))
xcorr_arr, lags_arr = xcorr(y1,y2,start_month=1)
fig,ax = plt.subplots(figsize=(10,5))
ax.plot(lags_arr, xcorr_arr)
print(xcorr_arr.size)
fig,ax = plt.subplots(figsize=(10,5))
ax.plot(y1, label='y1')
ax.plot(y2, label='y2', color='tab:orange')
ax.legend()
fig,ax = plt.subplots(figsize=(10,5))
ax.plot(lags_arr, xcorr_arr, color='orange')
ax.xcorr(y1, y2, maxlags=len(y1)-1);
```
Slight difference between `plt.xcorr` and the normalized `signal.correlate`. I don't think it's a big deal, so I'm going to stick with the normalized `signal.correlate` because I have a better understanding of what exactly it is doing.
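One quick sanity check on this normalization (my own check, not part of the notebook): the normalized self-correlation of a signal should peak at exactly 1 at zero lag, since the demeaned dot product at zero lag equals `n * var`, and the divisor is `std * std * n`.

```python
import numpy as np
from scipy import signal

x = np.sin(np.linspace(0, 4 * np.pi, 200))
# same normalization as in the xcorr() helper: demeaned correlate over std*std*n
xc = signal.correlate(x - x.mean(), x - x.mean(), mode='full')
xc_normed = xc / (x.std() * x.std() * len(x))
print(xc_normed.max())  # ~1.0, attained at zero lag
```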
```
def xcorr_heatmap(
target_var, other_vars, reg, metric, max_lag=12, normed=True, start_month=1, yaxis_steps=2, cmap=cmo.balance, figsize=(6,6)):
file = open('/home/bbuchovecky/storage/so_predict_derived/plotting_dicts.pkl','rb')
plotting_dicts = pkl.load(file)
file.close()
reg_names = plotting_dicts['reg_names']
var_su_names = plotting_dicts['var_su_names']
var_lu_names = plotting_dicts['var_lu_names']
var_ll_names = plotting_dicts['var_ll_names']
abbrv_month_names = plotting_dicts['abbrv_month_names']
abbrv_month_names = abbrv_month_names[(start_month-1):] + abbrv_month_names[:(start_month-1)]
if reg.lower() == 'global':
reg = 'Global'
subreg = 'global'
if reg.lower() != 'global' and reg.lower() != 'all':
subreg = 'so'
target_metric = open_metric(target_var, subreg, metric, timescale='monthly')
xcorr_matrix = np.zeros((len(other_vars), 2*max_lag-1))
var_names = []
for (i,var) in zip(range(len(other_vars)), other_vars):
other_metric = open_metric(var, subreg, metric, timescale='monthly')
## the target metric lags the other metric = the other metric leads the target metric
xcorr_arr, lag_arr = xcorr(target_metric[reg].values, other_metric[reg].values, max_lag=max_lag, direction=None, normed=normed, start_month=start_month)
xcorr_matrix[i] = xcorr_arr
var_names.append(var_su_names[var])
xcorr_matrix = xcorr_matrix.T
## plot
fig,ax = plt.subplots(sharex=True, sharey=True, figsize=figsize)
im = ax.pcolormesh(xcorr_matrix, vmin=-1, vmax=1, cmap=cmap, shading='auto')
ax.set_xticks(np.arange(xcorr_matrix.shape[1]) + 0.5)
ax.set_xticklabels(var_names)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
fig.suptitle(reg_names[reg]+' - XCORR - '+metric.upper())
fig.tight_layout()
cb = fig.colorbar(im, label=f'Correlation to {target_var.upper()}')
return fig,ax
start_month=6
step = 2
file = open('/home/bbuchovecky/storage/so_predict_derived/plotting_dicts.pkl','rb')
plotting_dicts = pkl.load(file)
file.close()
abbrv_month_names = plotting_dicts['abbrv_month_names']
abbrv_month_names = abbrv_month_names[start_month-1:] + abbrv_month_names[:start_month-1]
print('center = '+abbrv_month_names[0])
max_lag = 8
mult_factor = int(max_lag/12)+1
abbrv_month_names = abbrv_month_names * mult_factor
abbrv_month_names = abbrv_month_names[-max_lag+1:] + abbrv_month_names[:max_lag]
print('length = '+str(len(abbrv_month_names)))
print(abbrv_month_names)
size = len(abbrv_month_names)
print(abbrv_month_names[size // 2])
print(abbrv_month_names[0:size // 2:step] + abbrv_month_names[size // 2:size:step])
variables = ['sst', 'sfc_chl', 'npp', 'sfc_biomass', 'sfc_irr', 'pco2surf', 'siv', 'sie', 'cn_inv', 'sss', 'sfc_fed', 'mld']
target_var='npp'
other_vars=variables
reg='SouthernOcean'
metric='anom'
xcorr_heatmap(target_var, other_vars, reg, metric, max_lag=7, start_month=1, yaxis_steps=2);
```
# Simple Test
`x_l[target - lead]`
```
def cross_corr(x1, x2):
product = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))
stds = x1.std() * x2.std()
if stds == 0:
return 0
else:
product /= stds
return product
def select_month(x, imonth):
return x[imonth:None:12]
## use slice notation to select one month across the 300 year control to correlate
def month_corr(x_t, x_l, target, lead):
file = open('/home/bbuchovecky/storage/so_predict_derived/plotting_dicts.pkl','rb')
plotting_dicts = pkl.load(file)
file.close()
print('target month = ', plotting_dicts['month_names'][target])
print('lead month = ', plotting_dicts['month_names'][lead])
print('length of \'x_t\' = ', len(x_t[target:None:12]))
print('length of \'x_l\' = ', len(x_l[lead:None:12]))
return cross_corr(select_month(x_t, target).values, select_month(x_l, lead).values)
month_corr(npp, sie, 0, 4)
## y - response
## x - predictor
def cross_corr2(y, x, lead, target=0):
print('length of y (response) = ', len(y[target:]))
print('length of x (predictor) = ', len(x[lead:]))
product = np.mean((y[target:] - y.mean()) * (x[target-lead:] - x.mean()))
stds = y.std() * x.std()
if stds == 0:
return 0
else:
product /= stds
return product
cross_corr2(npp, sie, lead=0, target=0).values
```
## `signal.correlate` and `np.correlate`
```
y1 = np.cos(np.linspace(0,np.pi*2,200))
y2 = np.sin(np.linspace(0,np.pi*2,200))
mode = 'full'
signal_corr = signal.correlate(y1, y2, mode=mode)
signal_lags = signal.correlation_lags(y1.size, y2.size, mode=mode)
fig, (ax_signals, ax_corr) = plt.subplots(2, 1, figsize=(10,7))
ax_signals.plot(y1, label='y1')
ax_signals.plot(y2, label='y2')
ax_signals.legend()
ax_corr.plot(signal_lags, signal_corr)
xlim = ax_corr.get_xlim()
ax_corr.set_xlabel('Lag of y1 relative to y2');
```
In the plot above, `y1` leads `y2` (alternatively, `y2` lags `y1`), which is shown by the peak in correlation at around lag = -50.
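As a sanity check on this sign convention (my own example): delaying a signal should put the correlation peak at a negative lag of the leading series relative to the delayed one.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(42)
x = rng.standard_normal(300)
delay = 25
y = np.concatenate([np.zeros(delay), x])[:300]  # y is x delayed by 25 samples

c = signal.correlate(x, y, mode='full')
lags = signal.correlation_lags(len(x), len(y), mode='full')
print(lags[np.argmax(c)])  # -25: x leads y, so the peak sits at a negative lag
```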
```
y1 = np.sin(np.linspace(0,np.pi*2,200))
y2 = np.cos(np.linspace(0,np.pi*2,200))
mode = 'full'
signal_corr = signal.correlate(y1, y2, mode=mode)
signal_lags = signal.correlation_lags(y1.size, y2.size, mode=mode)
fig, (ax_signals, ax_corr) = plt.subplots(2, 1, figsize=(10,7))
ax_signals.plot(y1, label='y1')
ax_signals.plot(y2, label='y2')
ax_signals.legend()
ax_corr.plot(signal_lags, signal_corr)
xlim = ax_corr.get_xlim()
ax_corr.set_xlabel('Lag of y1 relative to y2');
ax_corr.set_ylabel('Cross-correlation');
```
In the plot above, `y1` lags `y2` (alternatively, `y2` leads `y1`), which is shown by the peak in correlation at around lag = +50.
```
not np.array_equal(signal_lags, np.arange(-199,200))  # True if any lag value differs
## signal.correlate and np.correlate(mode='full') return the same arrays
## signal.correlate is faster for larger arrays
print('Different values from signal.correlate and np.correlate?')
print(not np.array_equal(signal.correlate(y1, y2, mode='full'), np.correlate(y1, y2, mode='full')))
x = np.array([0,1,2,3,4,5])
skip = 2
shifted = np.array([])
for i in range(skip):
shifted = np.append(shifted, 0)
shifted = np.append(shifted, x[skip:])
np.corrcoef(shifted, x)
```
## `plt.xcorr`
It looks like `xcorr(normed=True)*100` $\approx$ `xcorr(normed=False)`
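A plausible explanation (my reasoning, not stated in the notebook): `plt.xcorr(normed=True)` divides the raw correlation by $\sqrt{(y_1 \cdot y_1)(y_2 \cdot y_2)}$, and for these unit-amplitude sinusoids sampled at 200 points that factor is almost exactly 100, since the mean of $\sin^2$ over a full period is $1/2$.

```python
import numpy as np

y1 = np.sin(np.linspace(0, np.pi * 2, 200))
y2 = np.cos(np.linspace(0, np.pi * 2, 200))
# matplotlib's normed=True divides by sqrt((y1.y1) * (y2.y2))
norm = np.sqrt(np.dot(y1, y1) * np.dot(y2, y2))
print(norm)  # ~100: mean of sin^2 over a period is 1/2, times 200 samples
```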
```
y1 = np.sin(np.linspace(0,np.pi*2,200))
y2 = np.cos(np.linspace(0,np.pi*2,200))
## matplotlib version of xcorr() which uses numpy.correlate(mode='full')
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(10,9))
maxlags = len(y1)-1
xcorr = ax1.xcorr(y1, y2, maxlags=maxlags, normed=False)
xcorr_normed = ax2.xcorr(y1, y2, maxlags=maxlags, normed=True)
ax3.plot(np.arange(-199,200), xcorr[1] - (xcorr_normed[1]*100))
ax1.set_title('normed=False')
ax2.set_title('normed=True')
ax3.set_title('(not normed) - (normed*100)')
ax1.set_ylabel('Cross-correlation');
ax2.set_ylabel('Cross-correlation');
ax3.set_xlabel('Lag of y1 relative to y2');
fig.tight_layout()
for arr in [xcorr[1], xcorr_normed[1]]:
print('max = '+str(arr.max()))
print('argmax = '+str(np.argmax(arr)))
print('size = '+str(arr.size))
print()
print(ax1.get_xlim())
```
<h2>Quantum Tomography</h2>
[Watch Lecture](https://youtu.be/mIEiWCJ6R58)
We study a simplified version of quantum tomography here.
It is similar to learning the bias of a coin by collecting statistics from tossing it many times. But making measurements alone may not be enough to make a good guess.
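The coin analogy can be made concrete with a quick classical sketch (my own, not part of the task): estimate a coin's bias from many tosses.

```python
from random import random, seed

seed(7)
true_bias = 0.3
tosses = [random() < true_bias for _ in range(1000)]  # True = heads
estimate = sum(tosses) / len(tosses)
print(estimate)  # a sample-based estimate of the true bias
```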
Suppose that you are given 1000 copies of a qubit and your task is to learn the state of this qubit. We use a Python class called __unknown_qubit__ for our quantum experiments.
Please run the following cell before continuing.
```
# class unknown_qubit
# available_qubit = 1000 -> you get at most 1000 qubit copies
# get_qubits(number_of_qubits) -> you get the specified number of qubits for your experiment
# measure_qubits() -> your qubits are measured and the result is returned as a dictionary variable
# -> after measurement, these qubits are destroyed
# rotate_qubits(angle) -> your qubits are rotated with the specified angle in radian
# compare_my_guess(my_angle) -> your guess in radian is compared with the real angle
from random import randrange
from math import pi
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
class unknown_qubit:
def __init__(self):
self.__theta = randrange(18000)/18000*pi
self.__available_qubits = 1000
self.__active_qubits = 0
print(self.__available_qubits,"qubits are created")
def get_qubits(self,number_of_qubits=None):
if number_of_qubits is None or isinstance(number_of_qubits,int) is False or number_of_qubits < 1:
print()
print("ERROR: the method 'get_qubits' takes the number of qubit(s) as a positive integer, i.e., get_qubits(100)")
elif number_of_qubits <= self.__available_qubits:
self.__qc = QuantumCircuit(1,1)
self.__qc.ry(2 * self.__theta,0)
self.__active_qubits = number_of_qubits
self.__available_qubits = self.__available_qubits - self.__active_qubits
print()
print("You have",number_of_qubits,"active qubits that are set to (cos(theta),sin(theta))")
self.available_qubits()
else:
print()
print("WARNING: you requested",number_of_qubits,"qubits, but there is not enough available qubits!")
self.available_qubits()
def measure_qubits(self):
if self.__active_qubits > 0:
self.__qc.measure(0,0)
job = execute(self.__qc,Aer.get_backend('qasm_simulator'),shots=self.__active_qubits)
counts = job.result().get_counts(self.__qc)
print()
print("your",self.__active_qubits,"qubits are measured")
print("counts = ",counts)
self.__active_qubits = 0
return counts
else:
print()
print("WARNING: there is no active qubits -- you might first execute 'get_qubits()' method")
self.available_qubits()
def rotate_qubits(self,angle=None):
if angle is None or (isinstance(angle,float) is False and isinstance(angle,int) is False):
print()
print("ERROR: the method 'rotate_qubits' takes a real-valued angle in radian as its parameter, i.e., rotate_qubits(1.2121)")
elif self.__active_qubits > 0:
self.__qc.ry(2 * angle,0)
print()
print("your active qubits are rotated by angle",angle,"in radian")
else:
print()
print("WARNING: there is no active qubits -- you might first execute 'get_qubits()' method")
self.available_qubits()
def compare_my_guess(self,my_angle):
if my_angle is None or (isinstance(my_angle,float) is False and isinstance(my_angle,int) is False):
print("ERROR: the method 'compare_my_guess' takes a real-valued angle in radian as your guessed angle, i.e., compare_my_guess(1.2121)")
else:
self.__available_qubits = 0
diff = abs(my_angle-self.__theta)
print()
print(self.__theta,"is the original",)
print(my_angle,"is your guess")
print("the angle difference between the original theta and your guess is",diff/pi*180,"degree")
print("-->the number of available qubits is (set to) zero, and so you cannot make any further experiment")
def available_qubits(self):
print("--> the number of available unused qubit(s) is",self.__available_qubits)
```
The methods of the class __unknown_qubit__:

* `get_qubits(number_of_qubits)` -> you get the specified number of qubits for your experiment (at most 1000 copies in total)
* `measure_qubits()` -> your qubits are measured and the result is returned as a dictionary variable; after measurement, these qubits are destroyed
* `rotate_qubits(angle)` -> your qubits are rotated by the specified angle in radians
* `compare_my_guess(my_angle)` -> your guess in radians is compared with the real angle
<h3> Task 1 </h3>
You are given 1000 copies of an identical qubit, all in the same quantum state lying in the first or second quadrant of the unit circle.
This quantum state is represented by an angle $ \theta \in [0,\pi) $, and your task is to guess this angle.
You use the class __unknown_qubit__ and its methods for your experiments.
_Remark that the measurement outcomes of the quantum states with angles $ \pi \over 3 $ and $ 2 \pi \over 3 $ are identical even though they are different quantum states. Therefore, getting 1000 qubits and then measuring them does not guarantee the correct answer._
Test your solution at least ten times.
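The remark above can be checked numerically (my own check): for the state $(\cos\theta, \sin\theta)$, the probability of observing 0 is $\cos^2\theta$, which is identical for $\theta$ and $\pi - \theta$.

```python
from math import cos, pi

p0_a = cos(pi / 3) ** 2      # P(0) for theta = pi/3
p0_b = cos(2 * pi / 3) ** 2  # P(0) for theta = 2*pi/3
print(p0_a, p0_b)  # both ~0.25 -- indistinguishable by direct measurement
```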
```
from math import pi, cos, sin, acos, asin
# an angle theta is randomly picked and it is fixed througout the experiment
my_experiment = unknown_qubit()
#
# my_experiment.get_qubits(number_of_qubits)
# my_experiment.rotate_qubits(angle)
# my_experiment.measure_qubits()
# my_experiment.compare_my_guess(my_angle)
#
#
# your solution is here
#
```
__Single experiment__
A direct measurement gives us two candidates. We use 900 copies here.
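The arithmetic behind the two candidates (a check of my own): from $P(0) = \cos^2\theta$ we recover $\theta = \arccos\sqrt{P(0)}$ in $[0, \pi/2]$, and $\pi - \theta$ is the mirror candidate.

```python
from math import acos, cos, pi

theta_true = 2 * pi / 3      # an angle in the second quadrant
p0 = cos(theta_true) ** 2    # probability of observing 0
cand1 = acos(p0 ** 0.5)      # first candidate, in [0, pi/2]
cand2 = pi - cand1           # second (mirror) candidate
print(cand1, cand2)  # one of the two matches theta_true
```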
```
from math import pi, cos, sin, acos, asin
my_experiment = unknown_qubit()
# we use 900 copies to determine our two candidates
my_experiment.get_qubits(900)
counts = my_experiment.measure_qubits()
number_of_observed_zeros = 0
if '0' in counts:
number_of_observed_zeros = counts['0']
probability_of_observing_zeros = number_of_observed_zeros/900
cos_theta = probability_of_observing_zeros ** 0.5
theta = acos(cos_theta)
theta_first_candidate = theta
theta_second_candidate = pi-theta
print("the first candidate is",theta_first_candidate,"in radian and",theta_first_candidate*180/pi,"in degree")
print("the second candidate is",theta_second_candidate,"in radian and",theta_second_candidate*180/pi,"in degree")
my_experiment.get_qubits(100)
my_experiment.rotate_qubits(-1 * theta_first_candidate)
counts = my_experiment.measure_qubits()
number_of_observed_zeros = 0
if '0' in counts:
number_of_observed_zeros = counts['0']
if number_of_observed_zeros == 100:
my_guess = theta_first_candidate
else:
my_guess = theta_second_candidate
my_experiment.compare_my_guess(my_guess)
```
__Multiple Experiments__
```
for i in range(10):
print("Experiment",(i+1))
print("___________")
print()
my_experiment = unknown_qubit()
my_experiment.get_qubits(900)
counts = my_experiment.measure_qubits()
number_of_observed_zeros = 0
if '0' in counts:
number_of_observed_zeros = counts['0']
probability_of_observing_zeros = number_of_observed_zeros/900
cos_theta = probability_of_observing_zeros ** 0.5
theta = acos(cos_theta)
theta_first_candidate = theta
theta_second_candidate = pi-theta
my_experiment.get_qubits(100)
my_experiment.rotate_qubits(-1 * theta_first_candidate)
counts = my_experiment.measure_qubits()
number_of_observed_zeros = 0
if '0' in counts:
number_of_observed_zeros = counts['0']
if number_of_observed_zeros == 100:
my_guess = theta_first_candidate
else:
my_guess = theta_second_candidate
my_experiment.compare_my_guess(my_guess)
print()
print()
print()
```
<h3> Task 2 (extra) </h3>
You are given 1000 identical quantum systems with two qubits that are in states $ \myvector{\cos \theta_1 \\ \sin \theta_1} $ and $ \myvector{\cos \theta_2 \\ \sin \theta_2} $, where $ \theta_1,\theta_2 \in [0,\pi) $.
Your task is to guess the values of $ \theta_1 $ and $ \theta_2 $.
Create a quantum circuit with two qubits.
Randomly pick $\theta_1$ and $ \theta_2 $ and set the states of qubits respectively. (Do not use $ \theta_1 $ and $ \theta_2 $ except initializing the qubits.)
Do experiments (making measurements and/or applying basic quantum operators) with your circuit(s). You may create more than one circuit.
Assume that the total number of shots does not exceed 1000 throughout the whole experiment.
_Since you have two qubits, your measurement outcomes will be '00', '01', '10', and '11'._
```
#
# your solution
#
```
<h3> Task 3 (Discussion) </h3>
If the angle in Task 1 is picked in the range $ [0,2\pi) $, then can we determine its quadrant correctly?
<h3> Global phase </h3>
Suppose that we have a qubit and its state is either $ \ket{0} $ or $ -\ket{0} $.
Is there any sequence of one-qubit gates such that we can measure different results after applying them?
All one-qubit gates are $ 2 \times 2 $ matrices, and their application is represented by a single matrix: $ A_n \cdot \cdots \cdot A_2 \cdot A_1 = A $.
By linearity, if $ A \ket{0} = \ket{u} $, then $ A (- \ket{0}) = -\ket{u} $. Thus, after measurement, the probabilities of observing state $ \ket{0} $ and state $ \ket{1} $ are the same for $ \ket{u} $ and $ -\ket{u} $. Therefore, we cannot distinguish them.
Even though the states $ \ket{0} $ and $ -\ket{0} $ are different mathematically, they are considered identical from the physical point of view.
The minus sign in front of $ -\ket{0} $ is called a global phase.
In general, a global phase can be a complex number with magnitude 1.
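The indistinguishability argument can be checked numerically (a sketch of my own): measurement probabilities are squared magnitudes of amplitudes, so a global sign never changes them.

```python
import numpy as np

u = np.array([0.6, 0.8])         # an arbitrary (real) qubit state |u>
probs_u = np.abs(u) ** 2         # P(0), P(1) for |u>
probs_minus_u = np.abs(-u) ** 2  # P(0), P(1) for -|u>
print(probs_u, probs_minus_u)    # identical probability distributions
```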
## 1. Setup
```
import sys
sys.path.append('../..')
import matplotlib.pyplot as plt
import numpy as np
import os
import re
import shutil
import warnings
from keras import Model
from neural_networks.fcrn import FCRN_A
from utils.data.data_generator import DataGenerator
from utils.data.data_ops import move_val_split_to_train
from utils.data.data_ops import create_val_split_from_train
from utils.input_output.io import save_np_arrays
from pprint import pprint
%matplotlib inline
%load_ext autoreload
%autoreload 2
warnings.filterwarnings('ignore')
```
## 2. Experiment parameters
```
DATASET_NAME = 'vgg_cells'
DATASET_PATH = f'../../datasets/{DATASET_NAME}'
IMG_DIM = None
D_MAP_MULT_FACT = None
LOSS_FUNCTIONS = ['logcosh']
EXP_DIR_NAMES = []
if DATASET_NAME.lower() == 'vgg_cells':
# misc
IMG_DIM = (256, 256, 3)
D_MAP_MULT_FACT = 100.
# plots
figsize=(25, 5)
fraction = 0.045
pad = 0.038
# checkpoints
for loss_name in LOSS_FUNCTIONS:
for input_type in ['patch_4_128x128']:#'full_img',
for train_size in [64]:
for rseed_idx in [1]:#range(1, 6):
exp_dir_name = f'./vgg_cells/'\
f'n_{train_size}_sigma_5_'\
f'randseed_{train_size}{rseed_idx}_'\
f'loss_{loss_name}_'\
f'{input_type}'
EXP_DIR_NAMES.append(exp_dir_name)
elif DATASET_NAME.lower() == 'carpk':
# misc
IMG_DIM = (720, 1280, 3)
D_MAP_MULT_FACT = 2000.
# plots
figsize=(25, 7)
fraction = 0.027
pad = 0.015
# checkpoints
EXP_DIR_NAMES = ['./carpk/sigma_10_loss_logcosh_full_img_epochs_10_lr_1e-4']
#'./carpk/sigma_10_loss_mse_full_img',
#'./carpk/sigma_10_loss_mse_patch_32_128x128_epochs_3']
#['./carpk/sigma_10_loss_logcosh_patch_32_128x128_15_epochs']
#['./carpk/sigma_10_loss_logcosh_patch_32_128x128_5_epochs']
elif DATASET_NAME.lower() == 'shanghai_tech/part_b':
# misc
IMG_DIM = (768, 1024, 3)
D_MAP_MULT_FACT = 2000.
# plots
figsize=(25, 7)
fraction = 0.0355
pad = 0.015
# checkpoints
EXP_DIR_NAMES = ['./shanghai_tech/part_b/sigma_10_loss_logcosh_full_img_epochs_30',
'./shanghai_tech/part_b/sigma_10_loss_logcosh_patch_32_128x128_epochs_100',
'./shanghai_tech/part_b/sigma_10_loss_mse_full_img',
'./shanghai_tech/part_b/sigma_10_loss_mse_patch_32_128x128_epochs_100']
params = {
'dim': IMG_DIM,
'batch_size': 1,
'patches_per_image': 1,
'density_map_multiplication_factor': D_MAP_MULT_FACT,
'shuffle': False
}
print(len(EXP_DIR_NAMES))
pprint(EXP_DIR_NAMES)
```
## 3. Predict and save results
```
gt_counts = []
pred_counts = []
for exp_dir_name in [EXP_DIR_NAMES[0]]:
if 'vgg_cells' in DATASET_PATH:
# for vgg_cells dataset we divide the train set in different train/val split each time
train_path = f'{DATASET_PATH}/train'
val_path = f'{DATASET_PATH}/val'
train_size = int(re.search(r'/n_\d{2}_', exp_dir_name).group().split('_')[1])
val_size = 100 - train_size
rand_seed = int(re.search(r'randseed_\d{3}_', exp_dir_name).group().split('_')[1])
move_val_split_to_train(val_path, train_path)
create_val_split_from_train(train_path, val_path, val_size, rand_seed)
# data splits
train_generator = DataGenerator(DATASET_PATH, 'train', **params)
val_generator = DataGenerator(DATASET_PATH, 'val', **params)
test_generator = DataGenerator(DATASET_PATH, 'test', **params)
# clean old dir for qualitative results
results_path = f'{exp_dir_name}/results/qualitative'
shutil.rmtree(results_path, ignore_errors=True)
os.makedirs(results_path)
# predict and save
checkpoint_filename = f'{exp_dir_name}/checkpoints/best_model.hdf5'
#####checkpoint_filename = f'{exp_dir_name}/checkpoints/model.02-0.01.hdf5'
model = FCRN_A(pretrained_weights=checkpoint_filename)
for data_generator, split_name in zip([test_generator],
['test']):
# prepare dirs for qualitative results
results_path_npy = f'{results_path}/{split_name}/npy'
results_path_png = f'{results_path}/{split_name}/png'
results_path_png_diff = f'{results_path}/{split_name}/png_diff'
results_path_png_gt_asc = f'{results_path}/{split_name}/png_gt_asc'
os.makedirs(results_path_npy)
os.makedirs(results_path_png)
os.makedirs(results_path_png_diff)
os.makedirs(results_path_png_gt_asc)
for idx in range(len(data_generator)):
# input image & gt density map, batch_size=1
input_img, gt_density_map = data_generator[idx]
gt_density_map /= D_MAP_MULT_FACT
# predicted density map
pred_density_map = (model.predict(input_img) / D_MAP_MULT_FACT)[0]
# gt & pred counts
gt_count = int(np.round(gt_density_map.sum()))
pred_count = pred_density_map.sum()
pred_gt_diff = pred_count - gt_count
gt_counts.append(gt_count)
pred_counts.append(pred_count)
error_type = 'match'
if pred_count < gt_count:
error_type = 'underestimate'
elif pred_count > gt_count:
error_type = 'overestimate'
img_name_png = data_generator.img_names[data_generator.indexes[idx]]
img_name = img_name_png.split('.')[0]
img_name_npy = f'{img_name}_gt_{gt_count}_pred_{pred_count:.2f}'\
f'_diff_gt_pred_{pred_gt_diff:.2f}'
img_name_diff = f'{error_type}_{np.abs(pred_gt_diff):7.2f}'\
f'_gt_{gt_count}_pred_{pred_count:.2f}'\
f'_{img_name}'.replace(' ', '0')
img_name_gt_asc = f'gt_{gt_count:3}_pred_{pred_count:6.2f}_diff_gt_pred_{pred_gt_diff:.2f}'\
f'_{img_name}'.replace(' ', '0')
# .npy arrays
#####save_np_arrays([pred_density_map], [img_name_npy], results_path_npy)
# plots
vmin = min(gt_density_map.min(), pred_density_map.min())
vmax = max(gt_density_map.max(), pred_density_map.max())
fig = plt.figure(figsize=figsize)
plt.subplot(1, 4, 1)
plt.title('Input image')
plt.imshow(input_img[0])
plt.axis('off')
plt.subplot(1, 4, 2)
plt.title(f'GT density map ({gt_count})')
plt.imshow(gt_density_map.squeeze(), cmap='jet', vmin=vmin, vmax=vmax)
plt.colorbar(fraction=fraction, pad=pad)
plt.axis('off')
plt.subplot(1, 4, 3)
plt.title(f'Predicted density map ({pred_count:.2f})')
plt.imshow(pred_density_map.squeeze(), cmap='jet', vmin=vmin, vmax=vmax)
plt.colorbar(fraction=fraction, pad=pad)
plt.axis('off')
plt.subplot(1, 4, 4)
plt.title(f'Prediction - GT ({(pred_density_map - gt_density_map).sum():.2f})')
plt.imshow((pred_density_map - gt_density_map).squeeze(), cmap='gray')
plt.colorbar(fraction=fraction, pad=pad)
plt.axis('off')
plt.savefig(f'{results_path_png}/{img_name_npy}.png')
plt.savefig(f'{results_path_png_diff}/{img_name_diff}.png')
plt.savefig(f'{results_path_png_gt_asc}/{img_name_gt_asc}.png')
plt.clf()
plt.close()
```
## 4. Plot GT vs Prediction
```
print(gt_counts)
print(pred_counts)
x = 1 + np.arange(len(gt_counts))
indices = np.argsort(gt_counts)
plt.figure(figsize=(7, 5))
plt.title('VGG Cells')
plt.plot(x, np.asarray(pred_counts)[indices], 'b.', label='Prediction')
plt.plot(x, np.asarray(gt_counts)[indices], 'g.', label='Ground truth')
plt.ylabel('Counts')
plt.xlabel('Image indices (ascending order of ground truth counts)')
plt.legend()
plt.show()
```
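A common scalar summary of these per-image counts (my addition; the notebook itself only plots them) is the mean absolute counting error:

```python
import numpy as np

def count_mae(gt_counts, pred_counts):
    # mean absolute error between ground-truth and predicted counts
    gt = np.asarray(gt_counts, dtype=float)
    pred = np.asarray(pred_counts, dtype=float)
    return float(np.abs(pred - gt).mean())
```

For example, `count_mae(gt_counts, pred_counts)` could be called after the prediction loop above.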
<img align="right" style="max-width: 200px; height: auto" src="cfds_logo.png">
### Lab 02 - "Fundamentals of Python Programming"
Chartered Financial Data Scientist (CFDS), Autumn Term 2020
The lab environment of the **"Chartered Financial Data Scientist (CFDS)"** course is powered by Jupyter Notebooks (https://jupyter.org), which allows one to perform a great deal of data analysis and statistical validation. In this second lab, we want to have a look at the basic data types, containers, decision structures, loops and functions of the Python programming language.
<img align="left" style="max-width: 100px; height: auto" src="wizard.png">
<img align="left" style="max-width: 100px; height: auto" src="wizard.png">
The second lab (**"Halloween Edition"**) builds in part on the excellent Python tutorial of the Stanford University CS 231n lecture series, developed by Andrej Karpathy, Justin Johnson and Fei-Fei Li. The original tutorial is available under the following URL: http://cs231n.github.io/python-numpy-tutorial/.
As always, please don't hesitate to ask your questions during the lab, post them in our NextThought lab discussion forum (https://financial-data-science.nextthought.io), or send us an email (using our fds.ai email addresses).
### Lab Objectives:
After today's lab you should be able to:
> 1. Understand the basic **data types** of Python, e.g., integer, boolean, and string.
> 2. Understand the basic **data containers** of Python, e.g., lists and dictionaries.
> 3. Have a first idea of **numpy arrays** and **tensors**, concepts that we will use a lot in machine learning.
> 4. Know Python's **decision structures** to guide the workflow of a program.
> 5. Understand how to **loop** over data containers in order to access and manipulate individual values.
> 6. Implement small **functions** that allow for the execution of several Python statements.
### Let's start with a motivational video:
```
from IPython.display import YouTubeVideo
# Powered by TensorFlow: helping paleographers transcribe medieval text using machine learning
YouTubeVideo('v-FgOACRgfs', width=1000, height=600)
```
### 1. Python Versions
There are two major versions of Python, 2.x and 3.x, which are not fully compatible with each other. As of January 1, 2020, Python 2.x is no longer supported, so you should not use it. For this class, all code will use Python 3.x (where x refers to an arbitrary version number).
You may want to check your Python version at the command line by running `python --version`.
### 2. Fundamental Python Data Types
In general, a data type defines the format and the upper and lower bounds of data so that a program can use it appropriately. **Python data types** go a step further: in Python, we don't need to declare a variable by explicitly stating its data type. This feature is famously known as **dynamic typing**.
Python determines the type of a literal (an element of code that has a fixed value) directly from the syntax at **runtime**. For example, quotes mark the declaration of a string value, square brackets represent a list, and curly brackets a dictionary. Likewise, numbers without a decimal point are assigned the integer type, whereas those with a decimal point become floats.
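You can observe dynamic typing directly by rebinding a single name to literals of different types; `type()` reveals what Python inferred at runtime:

```python
# the same name can hold values of different types over time
v = 42
print(type(v))   # <class 'int'>

v = 3.14
print(type(v))   # <class 'float'>

v = "boo"
print(type(v))   # <class 'str'>
```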
There are four basic data types in the Python programming language:
> * **Integer's** - represent positive or negative whole numbers with no decimal point.
> * **Float's** - represent positive or negative real numbers and are written with a decimal point.
> * **String's** - represent sequences of Unicode characters.
> * **Boolean's** - represent constant objects that are either 'True' or 'False'.
<img align="center" style="max-width: 800px; height: auto" src="python_data_types.png">
(Source: https://www.tes.com/teaching-resource/python-version-3-data-types-11949410.)
#### 2.1 Numerical Data Type "Integer"
Numbers are one of the most prominent Python data types. Numbers in Python are often called just **'integers'** or **'ints'** and are positive or negative whole numbers with no decimal point. In Python 3, there is effectively no limit to how big an integer value can be.
Of course, it is constrained by the **amount of memory** your system has, as are all things. But beyond that, an integer can be as long as you need it to be:
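For example, a value far beyond the 64-bit range used by many other languages is handled exactly, with no overflow:

```python
# Python 3 integers have arbitrary precision
big = 2 ** 100
print(big)       # 1267650600228229401496703205376
print(big + 1)   # arithmetic stays exact, no overflow
```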
```
x = 3
x
```
Print the variable type:
```
type(x)
```
Note that inspecting `x` with `type()` did not change its value, since we did not assign a new value to the variable. We can change its value by combining arithmetic operations with assignment:
```
print(x, 'original value')
x = x + 1
print(x, 'after adding 1')
x = x * 2
print(x, 'after multiplying with 2')
x = x - 5
print(x, 'after subtracting 5')
x = x ** 2
print(x, 'after exponentiation with 2')
```
Here are some shortcuts for basic operations combined with value assignment:
```
x += 1
print(x) # prints "10" (x is 9 after the previous cell)
x *= 2
print(x) # prints "20"
```
#### 2.2 Numerical Data Type "Float"
The **float** type in Python represents real numbers, written with a decimal point dividing the integer and fractional parts. As a result, float values are specified with a decimal point and are often called just **'floats'**.
A float value carries roughly **15 significant decimal digits** of precision:
```
y = 3.0
print(y)
```
Print the variable type:
```
type(y)
```
Basic mathematical operations:
```
print(y + 1) # addition
print(y - 1) # subtraction
print(y * 2) # multiplication
print(y ** 2) # exponentiation
```
Optionally, the character e or E followed by a positive or negative integer may be appended to specify scientific notation:
```
z = 1e-7 # equals 0.0000001
print(z)
print(z + 1)
print(z * 2)
print(z ** 2)
```
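One practical consequence of this limited precision: binary floats cannot represent most decimal fractions exactly, so avoid comparing floats with `==` and use a tolerance instead (a minimal sketch):

```python
import math

# 0.1 and 0.2 are binary approximations, so their sum is not exactly 0.3
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False
print(math.isclose(0.1 + 0.2, 0.3))   # True: compare with a tolerance
```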
#### 2.3 Non-Numerical Data Type "String"
A sequence of one or more characters enclosed within either single quotes (`'`) or double quotes (`"`) is considered a **string** in Python. Any letter, number or symbol can be part of a string. All the characters between the opening delimiter and the matching closing delimiter are part of the string:
```
hello = 'hello' # string literals can use single quotes
world = "world, my name is peterli" # or double quotes; it does not matter.
print(hello)
```
Print the variable type:
```
type(hello)
```
Print the length of each string in terms of a number of characters:
```
print(len(hello))
print(len(world))
```
Like most programming languages, Python allows accessing individual characters of a string. It also supports negative indexes: the index `-1` refers to the last character of the string. Similarly, using `-2`, we can access the penultimate character of the string, and so on:
```
halloween = 'spooky'
print(halloween[0]) # get the first string character
print(halloween[-1]) # get the last string character
print(halloween[-2]) # get the penultimate character
```
Concatenate two strings, e.g., to form a sentence:
```
hw = hello + '_' + world + ' 12' # string concatenation
print(hw) # prints "hello_world, my name is peterli 12"
```
Concatenate two strings in C/C# notation (also allowed in Python):
```
hw2 = ' %s %s %d' % (hello, world, 12) # sprintf style string formatting
print(hw2) # prints " hello world, my name is peterli 12" (note the leading space)
```
Concatenate two strings in Python3 notation:
```
hw3 = '{} {} 12'.format(hello, world)
print(hw3) # prints "hello world, my name is peterli 12"
```
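Since Python 3.6, there is a third notation, the f-string, which is often the most readable; a small sketch redefining the variables from above for completeness:

```python
hello = 'hello'                         # as defined above
world = 'world, my name is peterli'     # as defined above

hw4 = f'{hello} {world} {12}'   # f-string: expressions are embedded directly
print(hw4)                      # prints "hello world, my name is peterli 12"
```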
String objects have a bunch of useful methods; for example:
```
s = "hello" # init string variable
print(s.capitalize()) # capitalize a string; prints "Hello"
print(s.upper()) # convert a string to uppercase; prints "HELLO"
print(s.rjust(7)) # right-justify a string, padding with spaces; prints " hello"
print(s.center(7)) # center a string, padding with spaces; prints " hello "
print(s.replace('l', '(ell)')) # replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print(' world '.strip()) # strip leading and trailing whitespace; prints "world"
print('abracadabra'.split('a')) # split string whenever the letter 'a' occurs
```
#### 2.4 Non-Numerical Data Type "boolean"
Python 3 provides a Boolean data type. Objects of the Boolean type can have one of two values, "True" or "False":
```
a = True
b = False
print(a)
print(b)
```
Print the variable type:
```
type(a)
```
Booleans are often used in Python to test conditions or constraints. For example, a string in Python can be tested for a truth value; the return type will then be a Boolean value (True or False). Let's have a look at a few examples:
```
s1 = 'Scary Halloween'
result_upper = s1.isupper() # test if string contains only upper case characters
print(result_upper)
```
Let's have a look at a couple more examples:
```
s2 = 'SCARY HALLOWEEN'
result_upper = s2.isupper() # test if string contains only upper case characters
print(result_upper)
n1 = 10
result_greater_than = n1 > 100 # test if 10 > 100
print(result_greater_than)
n2 = 99
result_in_between = n2 > 10 and n2 < 100 # test if 99 > 10 and 99 < 100
print(result_in_between)
```
We can even logically combine the tested conditions above:
```
print(a and b) # Logical AND; prints "False"
print(a or b) # Logical OR; prints "True"
print(a and result_upper) # Logical AND; prints "True"
print(not a) # Logical NOT; prints "False"
print(a != b) # Logical XOR; prints "True"
```
As you will see in upcoming labs, expressions in Python are often evaluated in a boolean context, meaning they are interpreted to represent truth or falsehood. A value that is true in a Boolean context is sometimes said to be "truthy", and one that is false in a Boolean context is said to be "falsy".
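The built-in `bool()` makes this explicit (a minimal sketch): empty strings and containers, `0` and `None` are "falsy", while most other values are "truthy".

```python
# falsy values evaluate to False in a boolean context
print(bool(''), bool(0), bool(None), bool([]))   # False False False False

# most other values are truthy, even a list containing a single 0
print(bool('boo'), bool(-1), bool([0]))          # True True True
```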
### Exercises:
We recommend you to try the following exercises as part of the lab:
**1. Write a set of (or single) Python command(s) that compare the first and last character of a string.**
> Write a set of Python commands that compare the first and last character of a string. In case both characters are the same print 'True' otherwise print 'False'. Test your statements using the words 'spooky' and 'sweets'.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**2. Write a set of (or single) Python command(s) that determine the properties of a string.**
> Write a set of Python commands that determine the number of characters of a string and if the characters are all upper case. If the number of characters is between 5 and 12 characters and all upper case print 'True' otherwise print 'False'. Test your statements using the words 'BROOMSTICK', 'ghostly', and 'mYstEriOUs'.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**3. Write a set of (or single) Python command(s) that prints a very scary sentence.**
> Write a set of Python commands that prints the scariest sentence that you could imagine. The sentence should include at least 3 of the following words 'tarantula', 'moonlight', 'supernatural', 'fog', 'owl', 'nightmare', or 'poltergeist'.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
### 3. Fundamental Python Data Containers
There are four collection data types in the Python programming language:
> * **List** - is a collection which is ordered and changeable. Allows duplicate members.
> * **Tuple** - is a collection which is ordered and unchangeable. Allows duplicate members.
> * **Set** - is a collection which is unordered and unindexed. No duplicate members.
> * **Dictionary** - is a collection which is changeable and indexed by keys (insertion-ordered since Python 3.7). No duplicate keys.
When choosing a collection type, it is useful to understand its properties. Choosing the right type for a particular data set can preserve meaning and can increase efficiency or security.
During this lab, we will have a closer look into **lists** and **dictionaries**.
<img align="center" style="max-width: 800px; height: auto" src="python_data_containers.png">
(Source: https://www.tes.com/teaching-resource/python-version-3-data-types-11949410.)
#### 3.1. Data Container "List"
A list is an ordered and changeable collection of elements of the basic Python data types.
<img align="center" style="max-width: 800px; height: auto" src="python_lists.png">
In Python, lists are written with square brackets (similar to arrays in other languages). Python lists allow duplicate elements, are resizable and can contain elements of different data types. Lists can be used like this:
```
spooky_list = [42, 5, 128, 5, 97, 208] # create a list
print(spooky_list) # print list
```
Print the variable type:
```
type(spooky_list)
```
Determine individual elements of a list:
```
print(spooky_list[2]) # print third element of list created
print(spooky_list[-1]) # print the last list element
```
**Note**: Python list indices start with 0. That means that the first list element has the index 0, the second element has the index 1, and so on...
Determine the total number of list elements:
```
print(len(spooky_list)) # print the number of elements contained in the list
```
Replace a list element by assigning a new value at a specific index:
```
spooky_list[2] = 'happy' # lists can contain elements of different types
print(spooky_list)
```
Append an element to the end of a list:
```
spooky_list.append('coding') # add a new element to the end of the list
print(spooky_list)
```
Remove the last element of a list:
```
this_last = spooky_list.pop() # remove and return the last element of the list
print(this_last) # prints the element removed
print(spooky_list) # prints the remaining elements
```
Create a list of numbers:
```
list_of_scary_numbers = list(range(5)) # range is a built-in function that creates a sequence of integers; list() turns it into a list
print(list_of_scary_numbers) # prints "[0, 1, 2, 3, 4]"
```
Slice list using distinct indexing techniques:
```
print(list_of_scary_numbers[2:4]) # get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print(list_of_scary_numbers[2:]) # get a slice from index 2 to the end; prints "[2, 3, 4]"
print(list_of_scary_numbers[:2]) # get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print(list_of_scary_numbers[:]) # get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print(list_of_scary_numbers[:-1]) # slice indices can be negative; prints "[0, 1, 2, 3]"
```
Replace range of list elements:
```
list_of_scary_numbers[2:4] = [8, 9] # assign a new sublist to a slice
print(list_of_scary_numbers) # prints "[0, 1, 8, 9, 4]"
```
#### 3.2. Data Container "Dictionary"
In Python, dictionaries are used to store (key, value) pairs.
<img align="center" style="max-width: 800px; height: auto" src="python_dicts.png">
A dictionary is a changeable collection of elements which is indexed by its keys. In Python, dictionaries are written with curly brackets. Dictionaries can be used like this:
```
this_screaming_dictionary = {'cat': 'animal', 'night': 'non-animal', 'vampire': 'non-animal', 'owl': 'animal'} # create a new dictionary
print(this_screaming_dictionary)
```
Retrieve and print the value corresponding to the key "cat":
```
print(this_screaming_dictionary['cat']) # get an entry from a dictionary; prints "animal"
```
Add a new dictionary element:
```
this_screaming_dictionary['pumpkin'] = 'ugly' # set an entry in a dictionary
print(this_screaming_dictionary)
```
Retrieve and print the value corresponding to the added entry with key "pumpkin":
```
print(this_screaming_dictionary['pumpkin']) # get an entry from a dictionary; prints "ugly"
```
Retrieve and print all dictionary keys:
```
keys = this_screaming_dictionary.keys() # obtain all dictionary keys
print(keys)
```
Retrieve and print all dictionary values:
```
values = this_screaming_dictionary.values() # obtain all dictionary values
print(values)
```
Try to retrieve a dictionary value that is not contained in the dictionary (this will result in an error):
```
#print(this_screaming_dictionary['ghost']) # KeyError: 'ghost' not a key of the dictionary
```
However, we can "catch" such errors using:
```
print(this_screaming_dictionary.get('ghost', 'N/A')) # get an element with a default; prints "N/A"
print(this_screaming_dictionary.get('pumpkin', 'N/A')) # get an element with a default; prints "ugly"
```
Remove an element from a dictionary:
```
del this_screaming_dictionary['pumpkin'] # remove an element from a dictionary
print(this_screaming_dictionary)
```
Try to retrieve the removed dictionary element:
```
print(this_screaming_dictionary.get('pumpkin', 'N/A')) # "pumpkin" is no longer a key; prints "N/A"
```
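A related way to avoid a KeyError is to test for key membership with the `in` keyword before accessing an entry (a small sketch with a fresh dictionary):

```python
# `in` tests for key membership without risking a KeyError
d = {'cat': 'animal', 'owl': 'animal'}
print('cat' in d)     # True
print('ghost' in d)   # False

if 'cat' in d:
    print(d['cat'])   # safe access; prints "animal"
```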
### Exercises:
We recommend you to try the following exercises as part of the lab:
**1. Write a set of (or single) Python command(s) that determine the number of characters of a list element.**
> Write a set of Python commands that determine the number of characters of the second element of a list. In case the element consists of more than 4 characters print 'True' otherwise print 'False'. Test your statements using the following lists `['angel', 'nightmare', 'poltergeist']` and `['darkness', 'king', 'fairy', 'owl']`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**2. Write a set of (or single) Python command(s) that compares the elements of a list.**
> Write a set of Python commands that compares the first and last elements of a list. In case both elements consist of the same characters print 'True' otherwise print 'False'. Test your statements using the following list `['BROOMSTICK', 'ghostly', 'mYstEriOUs', 'BROOMSTICK']` and `['darkness', 'king', 'fairy', 'owl']`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**3. Write a set of (or single) Python command(s) that removes elements of a list.**
> Write a set of Python commands to print a specified list after removing the 0th, 2nd, 3rd, and 5th elements. Test your statements using the following list `['BROOMSTICK', 'Happy', 'mYstEriOUs', 'BROOMSTICK', 'Halloween', 'Poltergeist']`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
### 4. Numpy Arrays and PyTorch Tensors
Lists are useful, but rather memory intensive. If you want to process large amounts of data - like we do in machine learning applications - you want to use your limited memory more efficiently.
[NumPy](https://numpy.org/) is here to help you. NumPy is the fundamental package for scientific computing with Python. It implements the concept of arrays, which are lists on steroids. Let's see what they can do.
Creating an array is as easy as creating a list:
```
import numpy as np # we have to import NumPy first
l = [0, 1, 2] # this is a list
a = np.array(l) # this is an array
a
```
You can create an array from a list, or you can use the NumPy equivalent of `range`:
```
a = np.arange(0, 3, 1)
a
```
The first big difference between lists and arrays is that all elements in an array **must** be of the same data type (list elements can have different data types, though). As a result, an array has a data type attribute:
```
a.dtype
```
In this case, the data type is a 64-bit integer, a fixed-size variant of the integer type we already met as a basic data type.
Arrays are able to perform element-wise operations:
```
print(a + 1) # element-wise addition
print(a - 4) # element-wise subtraction
print(a * 2) # element-wise multiplication
print(a ** 2) # element-wise exponentiation
#print(l+1) # a list cannot do this
```
Some operations may require your array to adopt a different datatype. Python will do that for you:
```
print(a/2)
print((a/2).dtype)
```
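You can also request a conversion explicitly via the `astype` method (a small sketch; the default integer dtype may differ between platforms):

```python
import numpy as np

a = np.arange(0, 3, 1)   # integer array, e.g. dtype int64
b = a.astype(float)      # explicit conversion to floating point
print(b)                 # [0. 1. 2.]
print(b.dtype)           # float64
```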
Indexing and slicing work exactly the same way as for lists:
```
print(a[0])
print(a[:2])
print(a[-1])
```
You can also use arithmetic operations involving two arrays:
```
print(a + np.array([1, 2, 3]))
#print(l + [1, 2, 3]) # in the case of lists, this will concatenate the two lists
```
You can consider a one-dimensional array as a **vector**. But arrays are not limited to one dimension. You can create a two-dimensional array, which you can then consider as a **matrix**:
```
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
m
```
Everything we learned about arrays so far holds up for two-dimensional arrays:
```
print(m[0]) # the first element of m is a one-dimensional array
print(m[:, 0]) # the first element of each row
print(m + 1) # element-wise addition
print(m * a) # multiply vector a with each row of matrix m
```
Please note the notation in line 2: `m[:, 0]`
This might look weird to you, but it's easy to explain what happens here: in a two-dimensional array, you have one index for each dimension. In `[:, 0]`, `:` refers to the first dimension (the rows) and `0` refers to the second dimension (the columns). Thus, it retrieves the element in the first column for each row. If you have additional (n) dimensions, the same logic applies, which gives you a powerful tool to easily access data in your arrays.
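The same per-dimension logic also gives you single elements and whole sub-matrices (a minimal sketch):

```python
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(m[1, 2])     # single element: row 1, column 2; prints 6
print(m[0:2, 1:])  # sub-matrix: rows 0-1, columns 1-2
print(m[:, -1])    # last column of every row; prints [3 6 9]
```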
Since `a` is a vector and `m` is a matrix, we can compute their matrix-vector product (each entry is the dot product, or *inner product*, of one row of `m` with `a`):
```
np.dot(m, a)
```
More details on what you can do with arrays will follow in a later lab. Just two more things to mention:
Why are arrays useful? Because they are fast. Let me show you just how fast.
Let's assume you have a long list and want to change every element in that list. This will take a while:
```
l = range(10000000)
l2 = []
for x in l:
l2.append(x+1)
l2
```
If you try the same in NumPy, you can do it in fewer lines of code and with much faster runtime:
```
a = np.arange(10000000)
a+1
```
NumPy is so much faster because it vectorizes computations: it applies the operation to every element of an array at once. This is possible because NumPy uses C implementations in the background. You, as a user, benefit from the speed of C and the simplicity of Python at the same time.
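You can measure the difference yourself with `time.perf_counter` (a rough sketch; the absolute numbers depend on your machine, but the NumPy version is typically much faster):

```python
import time
import numpy as np

n = 1_000_000

start = time.perf_counter()
l2 = [x + 1 for x in range(n)]   # pure-Python loop
loop_time = time.perf_counter() - start

start = time.perf_counter()
a2 = np.arange(n) + 1            # vectorized NumPy operation
numpy_time = time.perf_counter() - start

print(f'loop: {loop_time:.4f}s, numpy: {numpy_time:.4f}s')
```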
Finally, in the next lab courses we will be dealing with tensors. A **tensor**, mathematically, is just an n-dimensional array. In terms of programming, we will be using the PyTorch library (imported as `torch`), which implements tensors (NumPy arrays can be used as tensors, too, but we prefer PyTorch tensors for a reason that we will explain later). The good thing for you is that PyTorch tensors work much like NumPy arrays:
```
import torch
a = torch.tensor([1, 2, 3])
print(a + 1) # element-wise addition
print(a * 2) # element-wise multiplication
print(a[2]) # indexing
print(a[1:]) # slicing
b = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(torch.mul(b, a)) # element-wise multiplication (a is broadcast over each row of b)
```
### Exercises:
**1. Write a set of (or single) Python command(s) that adds a constant to every element of an array and a list.**
> Write a set of Python commands to increase the value of each element of array `a` by 3. Do the same for list `l`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**2. Write a single Python command that creates a 2d-array that looks like the following matrix:**
1 0 0
0 1 0
0 0 1
> Hint: you can use `np.array(l)` where `l` is a list to create the matrix.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**3. Write a single Python command that turns array `a` into [5, 5, 5, 5].**
> Apply a mathematical operation to bring `a` into the desired form.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
### 5. Fundamental Programming Structures
As part of this lab, we want to have a closer look at three basic programming structures of Python:
> * **For-Loops** - used to iterate over a sequence of program statements.
> * **Decision Structures** - used to anticipate conditions occurring while the execution of a program.
> * **Functions** - used to define a block of code which only runs when it is called.
#### 5.1 Python Loop Structures
To keep a program doing useful work, we need some kind of repetition: looping back over the same block of code again and again. Below we describe the different kinds of loops available in Python.
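Besides the for-loop described next, Python also offers the while-loop, which repeats a block as long as its condition evaluates to True (a minimal sketch):

```python
# a while-loop repeats until its condition becomes False
countdown = 3
while countdown > 0:
    print(countdown)        # prints 3, 2, 1
    countdown = countdown - 1
print('Happy Halloween!')
```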
#### 5.1.1. The "For"-Loop and Lists
The for-loop is used to iterate over the elements of a sequence. It is often used when you have a piece of code which you want to repeat a specific number of times. The nature of a for-loop in Python is very straightforward: **"for all elements in a list, do this".**
<img align="center" style="max-width: 600px; height: auto" src="python_for_loop.png">
Let's say that you have a list; you can then loop over the list elements using the `for` keyword like this:
```
# list initialization
halloween_elements = ['cat', 'night', 'pumpkin']
# loop initialization and run
for anyname in halloween_elements:
print(anyname)
```
**Note:** Python relies on the concept of indentation, using whitespace (or tabs), to define the scope in the code. Other programming languages such as Java or C# often use brackets or curly-brackets for this purpose.
Let's have a look at another example of a for-loop:
```
# init a list of numbers
numbers = [1, 10, 20, 30, 40, 50]
# init the result
result = 0
# loop initialization and run
for number in numbers:
result = result + number
# result += number
# print the result
print(result)
```
To loop over a list of numbers, we can use Python's `range` function. The `range(lower_bound, upper_bound, step_size)` function generates a sequence of numbers from `lower_bound` up to, but not including, `upper_bound`. The `lower_bound` and `step_size` parameters are optional: by default, the lower bound is zero and the incremental step is one.
```
# loop over range elements
for i in range(1, 10):
# print current value of i
print(i)
```
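The optional `step_size` parameter works as follows:

```python
# start at 0, stop before 10, increment by 2
for i in range(0, 10, 2):
    print(i)   # prints 0, 2, 4, 6, 8
```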
To break out from a loop, you can use the keyword `break`. Let's have a look at the following example:
```
# loop over range elements
for i in range(1, 10000000000):
# case: current value of i equals 3?
if i == 3:
# break: stop the loop
break
# print current value of i
print(i)
```
In contrast, the `continue` keyword is used to tell Python to skip the rest of the statements in the current loop block and to continue to the next iteration of the loop.
```
# loop over range elements
for i in range(1, 10):
# case: current value of i equals 3?
if i == 3:
# continue: jump to next loop iteration
continue
    # print current value of i
print(i)
```
If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
```
halloween_elements = ['cat', 'night', 'pumpkin']
for idx, element in enumerate(halloween_elements):
print('#%d: %s' % (idx + 1, element))
```
When programming, we frequently want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
```
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print(squares)
```
You can make this code simpler using a list comprehension:
```
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print(squares)
```
List comprehensions can also contain conditions:
```
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print(even_squares)
```
#### 5.1.2. The "For"-Loop and Dictionaries
Similarly, it is easy to iterate over the keys in a dictionary:
```
d = {'pumpkin': 0, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print('A %s has %d legs' % (animal, legs))
```
If you want access to keys and their corresponding values, use the items method:
```
d = {'pumpkin': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print('A %s has %d legs' % (animal, legs))
```
### Exercises:
We recommend you to try the following exercises as part of the lab:
**1. Write a Python loop that multiplies all elements of a list with 66.**
> Write a Python loop that multiplies all elements of a list with `66`. The input list is given by `range(0, 10)` and its output should result in a list as denoted by: `[0, 66, 132, 198, 264, 330, 396, 462, 528, 594]`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**2. Write a Python loop that prints the numbers 0 to 10 backwards.**
> Write a Python loop that prints the numbers 0 to 10 backwards. The output of the loop should print the following: `10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0`.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
#### 5.2 Python Decision Structures
Decision structures evaluate one or more expressions which produce **True** or **False** as an outcome. When solving a data science task, you often need to determine which action to take and which statements to execute, depending on whether an outcome is **True** or **False**.
<img align="center" style="max-width: 600px; height: auto" src="python_decision_structures.png">
Let's briefly recap Python's use of logical or mathematical conditions:
```
# init sample variables
a = 4
b = 7
print(a == b) # equals
print(a != b) # not equals
print(a < b) # less than
print(a <= b) # less than or equal to
print(a > b) # greater than
print(a >= b) # greater than or equal to
```
The mathematical conditions outlined above can be used in several ways, most commonly in if-statements. An if-statement is written by using the `if` keyword. Let's have a look at an example:
```
# init sample variables
a = 4
b = 7
# test condition
if b > a:
print("b is greater than a")
```
In the example above we used two variables, `a` and `b`, which are used as part of the if-statement to test whether `b` is greater than `a`. As a is 4, and b is 7, we know that 7 is greater than 4, and so we print that "b is greater than a".
We can easily enhance the if-statement above by adding additional conditions using the `elif` keyword. The `elif` keyword is Python's way of saying "if the previous conditions were not true, then try this condition":
```
a = 4
b = 4
# test condition 1
if b > a:
print("b is greater than a")
# test condition 2
elif a == b:
print("a and b are equal")
elif a != b:
print("test check and so on... ")
```
Finally, we can use the `else` keyword to catch any case which isn't found by the preceding conditions:
```
a = 8
b = 4
# test condition 1
if b > a:
print("b is greater than a")
# test condition 2
elif a == b:
print("a and b are equal")
# all other cases
else:
print("a is greater than b")
```
In the example above the value assigned to a variable `a` is greater than the value assigned to `b`, so the first `if` condition is not true. Also, the `elif` condition is not true, so we ultimately go to the `else` condition and print that "a is greater than b".
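For short decisions like this one, Python also offers a compact conditional expression (a minimal sketch):

```python
# conditional expression (ternary): an if/else in a single expression
a, b = 8, 4
larger = a if a > b else b
print(larger)   # prints 8
```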
### Exercises:
We recommend you to try the following exercises as part of the lab:
**1. Write a Python decision structure that prints all the numbers from 0 to 10 except 4 and 7.**
> Write a Python decision structure that prints a number if it is not equal to 4 or 7. If the number equals 4 or 7, it should print 'forbidden number'.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
**2. Write a Python decision structure that evaluates if a number is a multiple of 5 and 7.**
> Write a Python decision structure that evaluates whether a number is a multiple of 5 and 7. Hint: you may want to use Python's modulo operator `%` as part of your case evaluation.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
Dictionary comprehensions are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
```
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print(even_num_to_square)
```
#### 5.3 Python Functions
A function is a block of organized, reusable code that is used to perform a single, related action. Functions provide better modularity for your application and allow for a high degree of code reuse. As you have already seen, Python provides many built-in functions such as `print()`, but you can also create your own functions. These functions are called **user-defined functions**.
A function is a block of code which only runs when it is called. You can pass data, known as parameters, into a function. A function can return data as a result. Python functions are defined using the `def` keyword.
<img align="mid" style="width: 400px; height: auto" src="function.png">
(Source: https://swcarpentry.github.io/python-novice-inflammation/06-func/index.html.)
Let's define our first function that takes a string as input parameter and prints it:
```
# defines a printme function
def print_me(characters):
# this prints a passed string into this function
print(characters)
```
Now, we can call our newly defined function using the function name followed by the arguments that we aim to pass in parenthesis:
```
print_me(characters="I'm first call to user defined function!")
print_me(characters="Again second call to the same function")
```
Isn't that fantastic?
Now that we understand the syntax for creating customized functions, we can create even more complex ones. Let's implement a function that determines whether a given integer is positive, negative or zero using a decision structure and returns the result accordingly:
```
# defines a sign evaluation function
def sign(x):
    # case: positive value
    if x > 0:
        # return the string 'positive'
        return 'positive'
    # case: negative value
    elif x < 0:
        # return the string 'negative'
        return 'negative'
    # else: other value
    else:
        # return the string 'zero'
        return 'zero'
```
Now we call our function and print the result of sign evaluation for distinct values:
```
print(sign(x=-1))
print(sign(x=0))
print(sign(x=1))
```
We will often define functions to take optional keyword arguments. An optional argument is an argument that assumes a default value if a value is not provided in the function call for that argument. The following example provides an idea of default arguments: it prints the characters given in upper case unless specified differently, like this:
```
def hello(characters, loud=True):
    # case: default - loud print enabled
    if loud:
        print('HELLO, %s!' % characters.upper())
    # case: non-loud print enabled
    else:
        print('Hello, %s!' % characters)

hello(characters='Helloween', loud=True)
hello(characters='Helloween', loud=False)
```
### Exercises:
We recommend that you try the following exercises as part of the lab:
**1. Write a Python function to calculate the length of a string.**
>Write a Python function named **"string_length"** to calculate the length of an arbitrary string. The function should take an arbitrary string as input and count the number of its characters. Test your function accordingly using various string values and print the results, e.g., input: 'Halloween', expected result: 9.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
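One possible sample solution, which counts the characters with a loop (rather than the built-in `len`) to practice iteration:

```python
# sample solution sketch: count the characters of a string one by one
def string_length(characters):
    count = 0
    for _ in characters:
        count += 1
    return count

print(string_length('Halloween'))  # expected result: 9
```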
**2. Write a Python program to get the largest number from a list.**
>Write a Python function named **"max_num_in_list"** to get the largest number from a list. The function should take an arbitrary list of integer values as an input and should return the integer that corresponds to the highest value. Test your function accordingly using various lists of integer values and print the results, e.g., input: [1, 5, 8, 3], expected result: 8.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
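One possible sample solution, which tracks the largest value seen so far while iterating over the list:

```python
# sample solution sketch: keep the largest value seen so far
def max_num_in_list(numbers):
    largest = numbers[0]
    for number in numbers:
        if number > largest:
            largest = number
    return largest

print(max_num_in_list([1, 5, 8, 3]))  # expected result: 8
```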
**3. Write a Python program to count the number of characters (character frequency) in a string.**
>Write a Python function named **"char_frequency"** to count the number of distinct characters occurring in a string. The function should take an arbitrary string as an input and should return the count of occurrences of each individual character. Test your function accordingly using various string values and print the results, e.g., input: 'Happy Halllllloweeeeeen!', expected result: {'a': 2, ' ': 1, 'e': 6, 'H': 2, 'l': 6, 'o': 1, 'n': 1, 'p': 2, '!': 1, 'w': 1, 'y': 1}.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
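One possible sample solution, which accumulates per-character counts in a dictionary:

```python
# sample solution sketch: build a dictionary of per-character counts
def char_frequency(characters):
    frequencies = {}
    for character in characters:
        # dict.get with a default of 0 handles characters seen for the first time
        frequencies[character] = frequencies.get(character, 0) + 1
    return frequencies

print(char_frequency('Happy Halllllloweeeeeen!'))
```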
**Bonus: Write a Python function that takes a list of words and returns the one exhibiting the most characters.**
>Write a Python function named **find_longest_word** that takes a list of words and returns the longest word in the list. The function should take an arbitrary list of string values (words) as an input and should return the word that exhibits the most characters. Test your function accordingly using various lists of string values and print the results, e.g., input: ['Happy', 'Halloween', '2018'], expected result: 'Halloween'.
```
# ***************************************************
# INSERT YOUR SOLUTION CODE HERE
# ***************************************************
```
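One possible sample solution, which compares word lengths while iterating over the list:

```python
# sample solution sketch: return the word with the most characters
def find_longest_word(words):
    longest = words[0]
    for word in words:
        if len(word) > len(longest):
            longest = word
    return longest

print(find_longest_word(['Happy', 'Halloween', '2018']))  # expected result: 'Halloween'
```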
### [BONUS] 6. Basic Python Image Manipulations
The **Python Imaging Library** by Alex Clark and contributors (abbreviated as PIL; in newer versions known as **Pillow**) is a free library for the Python programming language that adds support for opening, manipulating, and saving many different image file formats (https://pillow.readthedocs.io/en/stable/). Using the PIL library in combination with the NumPy library allows for a great deal of image analysis and manipulation.
To make an arbitrary image interpretable for a computer, images are usually represented as 3D tensors, as shown below:
<img align="center" style="max-width: 1000px; height: auto" src="python_image_processing.png">
(Source: **"Learning Perceptual Organization with Minimal Human Supervision"**, Stella Yu, UC Berkeley, 2019.)
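To make the tensor representation concrete, here is a minimal sketch of a tiny RGB "image" as a NumPy array (the pixel values are made up purely for illustration):

```python
import numpy as np

# a tiny 2x2 RGB "image" as a 3D tensor: height x width x colour channels
image = np.array([
    [[255, 0, 0], [0, 255, 0]],      # red pixel,  green pixel
    [[0, 0, 255], [255, 255, 255]]   # blue pixel, white pixel
], dtype='uint8')

print(image.shape)  # (2, 2, 3): height, width, number of colour channels
```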
Please note that prior to using PIL you need to install the Pillow package. A brief installation instruction can be obtained from: https://pillow.readthedocs.io/en/5.3.x/installation.html. The package can be installed via the following command:
```
!pip3 install pillow
```
Upon successful installation we can now import the `PIL`, `NumPy` and `Matplotlib` libraries:
```
# use this statement to show plots inside your notebook
%matplotlib inline
# import the pil, numpy and the matplotlib libraries
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
```
The `PIL` library supports a wide variety of image file formats. To read an image from disk, use the `Image.open` function. You don't have to know the file format to open a file; the library automatically determines the format based on the contents of the file.
#### 6.1. Image Loading and Plotting
Let's open and plot the `halloween.jpg` image file:
```
# import the requests and bytesIO libraries
import requests
from io import BytesIO
# set URL of the image to be loaded
url = "https://raw.githubusercontent.com/financial-data-science/CFDS-Notebooks/master/lab_02/halloween.jpg"
# send load request to URL
response = requests.get(url)
# parse and convert image data to numpy array
image = np.asarray(Image.open(BytesIO(response.content)))
# plot image in Jupyter notebook
plt.imshow(image, vmin=0, vmax=255)
```
Let's also have a look at the exact shape of the image:
```
h, w, c = image.shape
print(h) # prints the image height
print(w) # prints the image width
print(c) # prints the number of image color channels = 3 for red, green, blue (rgb)
```
#### 6.2. Image Cropping
Use PIL to extract the upper pumpkin of the image:
```
upper_pumpkin = image[20:250, 20:300, :]
plt.imshow(upper_pumpkin, vmin=0, vmax=255)
upper_pumpkin.shape
```
#### 6.3. Image Channel Extraction
Let's extract the distinct (RGB) colour channels of the image:
```
# extraction of the red colour channel
red_channel = np.zeros(upper_pumpkin.shape, dtype='uint8')
red_channel[:,:,0] = upper_pumpkin[:,:,0]
plt.imshow(red_channel)
# extraction of the green colour channel
green_channel = np.zeros(upper_pumpkin.shape, dtype='uint8')
green_channel[:,:,1] = upper_pumpkin[:,:,1]
plt.imshow(green_channel)
# extraction of a blue colour channel
blue_channel = np.zeros(upper_pumpkin.shape, dtype='uint8')
blue_channel[:,:,2] = upper_pumpkin[:,:,2]
plt.imshow(blue_channel)
```
#### 6.4. Image Manipulation
Greyscale the image by setting the third image channel to mean of all colour values:
```
gray = upper_pumpkin.mean(axis=2)
plot = plt.imshow(gray, cmap=plt.cm.gray, vmin=0, vmax=255)
```
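Averaging the three channels weighs red, green and blue equally. A common alternative is the luminosity method, which weighs green more heavily because the human eye is most sensitive to it. Below is a sketch using a random stand-in array in place of `upper_pumpkin` so it runs on its own:

```python
import numpy as np

# random stand-in for the cropped image from the cells above
upper_pumpkin = np.random.randint(0, 256, size=(230, 280, 3)).astype('float64')

# luminosity method: weighted sum over the colour axis
weights = np.array([0.2989, 0.5870, 0.1140])
gray_weighted = upper_pumpkin @ weights

print(gray_weighted.shape)  # (230, 280) - the colour axis is collapsed
```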
To learn more about PIL and its capabilities visit: https://pillow.readthedocs.io.
### Lab Summary:
In this second lab, the basic data types and containers of the Python programming language are presented. The code and exercises presented in this lab may serve as a starting point for more complex and tailored analytics.
You may want to execute the content of your lab outside of the Jupyter notebook environment, e.g. on a compute node or a server. The cell below converts the lab notebook into a standalone and executable Python script. Please note that to convert the notebook you need to install Python's **nbconvert** library and its extensions:
```
# installing the nbconvert library
!pip install nbconvert
!pip install jupyter_contrib_nbextensions
```
Convert the Jupyter notebook into a plain Python script:
```
!jupyter nbconvert --to script cfds_lab_02.ipynb
```
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Image segmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/images/segmentation">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/images/segmentation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/images/segmentation.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/images/segmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial focuses on the task of image segmentation, using a modified [U-Net](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/).
## What is image segmentation?
So far you have seen image classification, where the task of the network is to assign a label or class to an input image. However, suppose you want to know where an object is located in the image, the shape of that object, which pixel belongs to which object, etc. In this case you will want to segment the image, i.e., each pixel of the image is given a label. Thus, the task of image segmentation is to train a neural network to output a pixel-wise mask of the image. This helps in understanding the image at a much lower level, i.e., the pixel level. Image segmentation has many applications in medical imaging, self-driving cars and satellite imaging to name a few.
The dataset that will be used for this tutorial is the [Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/), created by Parkhi *et al*. The dataset consists of images, their corresponding labels, and pixel-wise masks. The masks are basically labels for each pixel. Each pixel is given one of three categories:
* Class 1: Pixel belonging to the pet.
* Class 2: Pixel bordering the pet.
* Class 3: None of the above / surrounding pixel.
```
from __future__ import absolute_import, division, print_function, unicode_literals

!pip install git+https://github.com/tensorflow/examples.git

try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf

from tensorflow_examples.models.pix2pix import pix2pix

import tensorflow_datasets as tfds
tfds.disable_progress_bar()

from IPython.display import clear_output
import matplotlib.pyplot as plt
```
## Download the Oxford-IIIT Pets dataset
The dataset is already included in TensorFlow datasets, all that is needed to do is download it. The segmentation masks are included in version 3.0.0, which is why this particular version is used.
```
dataset, info = tfds.load('oxford_iiit_pet:3.0.0', with_info=True)
```
The following code performs a simple augmentation of flipping an image. In addition, the image is normalized to [0, 1]. Finally, as mentioned above, the pixels in the segmentation mask are labeled {1, 2, 3}. For the sake of convenience, let's subtract 1 from the segmentation mask, resulting in labels that are {0, 1, 2}.
```
def normalize(input_image, input_mask):
    input_image = tf.cast(input_image, tf.float32) / 128.0 - 1
    input_mask -= 1
    return input_image, input_mask

@tf.function
def load_image_train(datapoint):
    input_image = tf.image.resize(datapoint['image'], (128, 128))
    input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))

    if tf.random.uniform(()) > 0.5:
        input_image = tf.image.flip_left_right(input_image)
        input_mask = tf.image.flip_left_right(input_mask)

    input_image, input_mask = normalize(input_image, input_mask)

    return input_image, input_mask

def load_image_test(datapoint):
    input_image = tf.image.resize(datapoint['image'], (128, 128))
    input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))

    input_image, input_mask = normalize(input_image, input_mask)

    return input_image, input_mask
```
The dataset already contains the required splits of test and train and so let's continue to use the same split.
```
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 64
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
train = dataset['train'].map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test = dataset['test'].map(load_image_test)
train_dataset = train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset = train_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
test_dataset = test.batch(BATCH_SIZE)
```
Let's take a look at an image example and its corresponding mask from the dataset.
```
def display(display_list):
    plt.figure(figsize=(15, 15))
    title = ['Input Image', 'True Mask', 'Predicted Mask']
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i+1)
        plt.title(title[i])
        plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]))
        plt.axis('off')
    plt.show()

for image, mask in train.take(1):
    sample_image, sample_mask = image, mask
display([sample_image, sample_mask])
```
## Define the model
The model being used here is a modified U-Net. A U-Net consists of an encoder (downsampler) and decoder (upsampler). In-order to learn robust features, and reduce the number of trainable parameters, a pretrained model can be used as the encoder. Thus, the encoder for this task will be a pretrained MobileNetV2 model, whose intermediate outputs will be used, and the decoder will be the upsample block already implemented in TensorFlow Examples in the [Pix2pix tutorial](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py).
The reason to output three channels is because there are three possible labels for each pixel. Think of this as multi-classification where each pixel is being classified into three classes.
```
OUTPUT_CHANNELS = 3
```
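To see what "one channel per class" means in practice, here is a minimal NumPy sketch (with a random stand-in for the network output) of how a per-pixel class label is obtained from the three channels:

```python
import numpy as np

# toy stand-in for the network output: one score per class for every pixel
output_channels = 3
logits = np.random.rand(128, 128, output_channels)

# the predicted label of each pixel is the channel with the highest score
predicted_mask = np.argmax(logits, axis=-1)

print(predicted_mask.shape)  # (128, 128) - one label per pixel
```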
As mentioned, the encoder will be a pretrained MobileNetV2 model which is prepared and ready to use in [tf.keras.applications](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/applications). The encoder consists of specific outputs from intermediate layers in the model. Note that the encoder will not be trained during the training process.
```
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, 3], include_top=False)
# Use the activations of these layers
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
layers = [base_model.get_layer(name).output for name in layer_names]
# Create the feature extraction model
down_stack = tf.keras.Model(inputs=base_model.input, outputs=layers)
down_stack.trainable = False
```
The decoder/upsampler is simply a series of upsample blocks implemented in TensorFlow examples.
```
up_stack = [
    pix2pix.upsample(512, 3),  # 4x4 -> 8x8
    pix2pix.upsample(256, 3),  # 8x8 -> 16x16
    pix2pix.upsample(128, 3),  # 16x16 -> 32x32
    pix2pix.upsample(64, 3),   # 32x32 -> 64x64
]

def unet_model(output_channels):
    # This is the last layer of the model
    last = tf.keras.layers.Conv2DTranspose(
        output_channels, 3, strides=2,
        padding='same', activation='softmax')  # 64x64 -> 128x128

    inputs = tf.keras.layers.Input(shape=[128, 128, 3])
    x = inputs

    # Downsampling through the model
    skips = down_stack(x)
    x = skips[-1]
    skips = reversed(skips[:-1])

    # Upsampling and establishing the skip connections
    for up, skip in zip(up_stack, skips):
        x = up(x)
        concat = tf.keras.layers.Concatenate()
        x = concat([x, skip])

    x = last(x)

    return tf.keras.Model(inputs=inputs, outputs=x)
```
## Train the model
Now, all that is left to do is to compile and train the model. The loss being used here is `losses.sparse_categorical_crossentropy`. The reason to use this loss function is that the network is trying to assign each pixel a label, just like multi-class prediction. In the true segmentation mask, each pixel has a label in {0, 1, 2}. The network here is outputting three channels. Essentially, each channel is trying to learn to predict a class, and `losses.sparse_categorical_crossentropy` is the recommended loss for such a scenario. Using the output of the network, the label assigned to the pixel is the channel with the highest value. This is what the `create_mask` function does.
```
model = unet_model(OUTPUT_CHANNELS)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
Let's try out the model to see what it predicts before training.
```
def create_mask(pred_mask):
    pred_mask = tf.argmax(pred_mask, axis=-1)
    pred_mask = pred_mask[..., tf.newaxis]
    return pred_mask[0]

def show_predictions(dataset=None, num=1):
    if dataset:
        for image, mask in dataset.take(num):
            pred_mask = model.predict(image)
            display([image[0], mask[0], create_mask(pred_mask)])
    else:
        display([sample_image, sample_mask,
                 create_mask(model.predict(sample_image[tf.newaxis, ...]))])

show_predictions()
```
Let's observe how the model improves while it is training. To accomplish this task, a callback function is defined below.
```
class DisplayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        clear_output(wait=True)
        show_predictions()
        print('\nSample Prediction after epoch {}\n'.format(epoch+1))

EPOCHS = 20
VAL_SUBSPLITS = 5
VALIDATION_STEPS = info.splits['test'].num_examples // BATCH_SIZE // VAL_SUBSPLITS

model_history = model.fit(train_dataset, epochs=EPOCHS,
                          steps_per_epoch=STEPS_PER_EPOCH,
                          validation_steps=VALIDATION_STEPS,
                          validation_data=test_dataset,
                          callbacks=[DisplayCallback()])
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
epochs = range(EPOCHS)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'bo', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.ylim([0, 1])
plt.legend()
plt.show()
```
## Make predictions
Let's make some predictions. In the interest of saving time, the number of epochs was kept small, but you may set this higher to achieve more accurate results.
```
show_predictions(test_dataset, 3)
```
## Next steps
Now that you have an understanding of what image segmentation is and how it works, you can try this tutorial out with different intermediate layer outputs, or even a different pretrained model. You may also challenge yourself by trying out the [Carvana](https://www.kaggle.com/c/carvana-image-masking-challenge/overview) image masking challenge hosted on Kaggle.
You may also want to see the [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) for another model you can retrain on your own data.
# Fit Pixel
In this notebook we will fit a single pixel in a data cube for NGC628.
#### * If you have not yet downloaded the sample data cube, uncomment the cell below and run it to download the sample data cube.
This will save a 900MB file called `sample_data.hdf5` to the **`ExampleData`** directory in the **`LUCI`** folder. Do not be concerned if it takes a few minutes to download.
```
# !wget -O Data/NGC628_SN3.hdf5 https://ws.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/data/pub/CFHT/2307000z.hdf5?RUNID=xc9le6u8llecp7fp
# Imports
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
# Get location of LUCI
path = os.path.abspath(os.path.pardir)
sys.path.insert(0, path) # add LUCI to the available paths
from LuciBase import Luci
%config Completer.use_jedi=False # enable autocompletion when typing in Jupyter notebooks
```
Set the required parameters. We will be using our machine learning algorithm to get the initial guesses - this happens under the hood in `LuciFit`, so the user is not required to think about the initial guess.
```
# Initialize paths and set parameters
Luci_path = path
cube_dir = '/media/carterrhea/carterrhea/NGC628' # Full path to data cube
cube_name = 'NGC628_SN3.merged.cm1.1.0' # don't add .hdf5 extension
object_name = 'NGC628'
redshift = 0.000133 # Redshift of object
resolution = 1000 # The actual resolution is 400, but we don't have ML algorithms for that resolution, so use 1000
```
Initialize our LUCI object
```
cube = Luci(Luci_path, cube_dir + '/' + cube_name, cube_dir, object_name, redshift, resolution)
```
Create a deep frame
Let's extract a background region and take a look at it. The background region is defined in a ds9 region file in the `ExampleData` folder.
```
# We use 'mean = True' to take the mean of the emission in the region instead of the sum
bkg_axis, bkg_sky = cube.extract_spectrum_region(cube_dir+'/bkg.reg', mean=True)
```
We will now fit a single pixel and take a look at the fit. This fit command has all the same options as the other fit commands except for binning :)
```
# Fit!
axis, sky, fit_dict = cube.fit_pixel(
    ['Halpha', 'NII6548', 'NII6583', 'SII6716', 'SII6731'],  # lines
    'sincgauss',       # fit function
    [1, 1, 1, 1, 1],   # velocity relationship
    [1, 1, 1, 1, 1],   # sigma relationship
    1250, 1045,        # x & y coordinate
    bkg=bkg_sky
)
```
And now we can plot the results
```
plt.plot(axis, sky, label='spectrum')
plt.plot(axis, fit_dict['fit_vector'], label='fit', linestyle='--')
plt.legend()
plt.xlim(15000, 16000)
```
And that is it! Congratulations, you have just used `LUCI`!
```
fit_dict
```
# How to Use OPX Sweeping
This notebook is a work in progress and the information presented here might be outdated.
## Imports
```
from functools import wraps
import numpy as np
import qcodes as qc
from itertools import chain
from typing import Callable, Dict, Generator, List
from labcore.measurement import *
from configuration import QMConfig
from qm.qua import *
from qm.QuantumMachinesManager import QuantumMachinesManager
from plottr.data import datadict_storage as dds, datadict as dd
# global module variable for the config file
global_config = None
```
## Base Decorator
```
class BackgroundRecordingBase:
"""
Base class decorator used to record asynchronous data from instrument.
Use the decorator with create_background_sweep function to create Sweeps that collect asynchronous data from
external devices running experiments independently of the measurement PC,
i.e. the measurement is not controlled by a Sweep but by an external device (e.g. the OPX).
Each instrument should have its own custom setup_wrapper (see setup_wrapper docstring for more info),
and a custom collector.
Auxiliary functions for the start_wrapper and collector should also be located in this class.
:param *specs: A list of the DataSpecs to record the data produced.
"""
def __init__(self, *specs):
self.specs = specs
self.communicator = {}
def __call__(self, fun) -> Callable:
"""
When the decorator is called, the experiment function gets wrapped so that it returns a Sweep object composed
of 2 different Sweeps, the setup sweep and the collector Sweep.
"""
def sweep(**collector_kwargs) -> Sweep:
"""
Returns a Sweep comprised of 2 different Sweeps: start_sweep and collector_sweep.
start_sweep should perform any setup actions as well as starting the actual experiment. This sweep is only
executed once. collector_sweep is iterated multiple times to collect all the data generated from the
instrument.
:param collector_kwargs: Any arguments that the collector needs.
"""
start_sweep = once(self.start_wrapper(fun))
collector_sweep = Sweep(record_as(self.collector(**collector_kwargs), *self.specs))
return start_sweep + collector_sweep
return sweep
def start_wrapper(self, fun: Callable) -> Callable:
"""
Wraps the start function. start_wrapper should contain another function inside of it decorated with @wraps
with fun as its argument.
In this case the wrapped function is start.
start should accept the *args and **kwargs of fun. It should also place any returns from fun in the communicator.
start_wrapper needs to return the wrapped function (start).
:param fun: The measurement function. In the case of the OPX this would be the function that returns the QUA
code with any arguments that it might use.
"""
@wraps(fun)
def start(*args, **kwargs) -> None:
"""
Starts the experiment and saves anything that the collector needs from the startup of the measurement in the
collector dictionary.
:param args: Any args that fun needs.
:param kwargs: Any kwargs that fun needs.
"""
self.communicator['setup_return'] = fun(*args, **kwargs)
return None
return start
def collector(self, **kwargs) -> Generator[Dict, None, None]:
"""
Data collection generator. The generator should contain all the logic of waiting for the asynchronous data.
It should yield a dictionary with the names of the DataSpecs as keywords and numpy arrays with the values
collected from the instrument. The generator should exhaust itself once all the data produced by the
measurement has been generated.
:param kwargs: Any kwargs necessary for the specific implementation of the collector.
"""
data = {}
yield data
def create_background_sweep(decorated_measurement_function: Callable, **collector_kwargs) -> Sweep:
"""
Creates the Sweep object from a measurement function decorated with any implementation of BackgroundRecordingBase.
:param decorated_measurement_function: Measurement function decorated with
a BackgroundRecordingBase class decorator.
:param collector_kwargs: Any kwargs that the collector needs.
"""
sweep = decorated_measurement_function(**collector_kwargs)
return sweep
```
## Raw Value DataSpec
```
def raw_OPX_data(name: str, depends_on: List[str] = None, unit: str = '', type_var: str = 'array'):
    # use None instead of a mutable default argument ([]), which would be
    # shared across calls and silently accumulate entries
    if depends_on is None:
        depends_on = []
    indep_name = f'{name}_time'
    indep = independent(indep_name, unit, type_var)
    depends_on.append(indep_name)
    dep = dependent(name, depends_on, unit, type_var)
    ret = indep, dep, name
    return ret
```
## Specific OPX Implementation
```
class RecordOPX(BackgroundRecordingBase):
"""
Implementation of BackgroundRecordingBase for use with the OPX machine.
"""
def __init__(self, *specs):
self.communicator = {}
self.communicator['raw_variables'] = []
self.specs = tuple(self._flatten_and_sort_dataspecs(specs))
def _flatten_and_sort_dataspecs(self, specs):
ret = []
if isinstance(specs,tuple):
for spec in specs:
if isinstance(spec, record.DataSpec):
ret.append(spec)
elif isinstance(spec, str):
self.communicator['raw_variables'].append(spec)
elif len(spec) > 1:
rec_list = self._flatten_and_sort_dataspecs(spec)
# extend handles both single- and multi-element results, avoiding the
# unbound `item` reference of a separate else branch
ret.extend(rec_list)
return ret
def start_wrapper(self, fun: Callable) -> Callable:
"""
start_wrapper for the OPX machine. Wraps the startup function.
Returns the actual startup function to be executed when the sweep is iterated through.
:param fun: Function that returns the QUA program.
"""
@wraps(fun)
def startup(*args, **kwargs) -> None:
"""
Establishes connection with the OPX and starts the measurement. The config of the OPX is passed through
the module variable global_config. It saves the result handles and saves initial values to the communicator
dictionary.
"""
# Start the measurement in the OPX.
qmachine_mgr = QuantumMachinesManager()
qmachine = qmachine_mgr.open_qm(global_config)
job = qmachine.execute(fun(*args, **kwargs))
result_handles = job.result_handles
# Save the result handle and create initial parameters in the communicator used in the collector.
self.communicator['result_handles'] = result_handles
self.communicator['active'] = True
self.communicator['counter'] = 0
return startup
def _wait_for_data(self, batchsize: int) -> None:
"""
Waits for the opx to have measured more data points than the ones indicated in the batchsize. Also checks that
the OPX is still collecting data, when the OPX is no longer processing, turn communicator['active'] to False to
exhaust the collector.
:param batchsize: Size of batch. How many data-points is the minimum for the sweep to get in an iteration.
e.g. if 5, _control_progress will keep running until at least 5 new data-points
are available for collection.
"""
# When ready becomes True, the infinite loop stops.
ready = False
# Collect necessary values from communicator.
res_handle = self.communicator['result_handles']
counter = self.communicator['counter']
while not ready:
for name, handle in res_handle:
current_datapoint = handle.count_so_far()
# Check if the OPX is still processing.
if res_handle.is_processing():
# Check if enough data-points are available.
if current_datapoint - counter >= batchsize:
ready = True
else:
ready = False
else:
# Once the OPX is done processing turn ready True and turn active False to exhaust the generator.
ready = True
self.communicator['active'] = False
def collector(self, batchsize: int) -> Generator[Dict, None, None]:
"""
Implementation of collector for the OPX. Collects new data-points from the OPX and yields them in a dictionary
with the names of the recorded variables as keywords and numpy arrays with the values. Raises ValueError if a
stream name inside the QUA program has a different name than a recorded variable and if the amount of recorded
variables and streams are different.
:param batchsize: Size of batch. How many data-points is the minimum for the sweep to get in an iteration.
e.g. if 5, _control_progress will keep running until at least 5 new data-points
are available for collection.
"""
# Get the result_handles from the communicator.
result_handle = self.communicator['result_handles']
# Get the names of all variables from the specs.
data_specs_names = []
raw_values_count = len(self.communicator['raw_variables'])
data_specs_names = [x.name for x in self.specs]
variable_counter = 0
for name, handle in result_handle:
# Check that the stream names are present in the DataSpecs.
if name not in data_specs_names:
raise ValueError(f'{name} is not a recorded variable')
else:
variable_counter += 1
# Check that the number of recorded variables and streams are the same.
if variable_counter != len(data_specs_names) - raw_values_count:
raise ValueError(f'Number of recorded variables ({variable_counter}) \
does not match number of variables gathered from the OPX ({len(data_specs_names)})')
while self.communicator['active']:
# Restart values for each iteration.
data = {}
counter = self.communicator['counter'] # Previous iteration data-point number.
first = True
current = 0
# Make sure that the result_handle is active.
if result_handle is None:
yield None
# Waits until new data-points are ready to be gathered.
self._wait_for_data(batchsize)
for name, handle in result_handle:
# To ensure that we get the same number of data-points from every variable only get the current count
# for the first variable in the stream.
if first:
current = handle.count_so_far() # Current data-point number
first = False
# if the current data-point number is the same as the previous data-point number, no new data
# has been gathered.
if current == counter:
yield None
break
# Make sure that the OPX has actually measured the current value for all variables and fetch the
# new data lines.
handle.wait_for_values(current)
data_temp = np.array(handle.fetch(slice(counter, current)))
# If the trace is a raw measurement, we need to go through its shape to properly convert it
if name in self.communicator['raw_variables']:
holding_converting = []
for i in data_temp:
i_holder = []
for j in i:
converted = j.astype(float)
i_holder.append(converted)
holding_converting.append(i_holder)
if len(holding_converting) == 1:
converted_data_temp = [np.squeeze(holding_converting)]
else:
converted_data_temp = np.squeeze(holding_converting)
raw_count = []
for rep in range(len(data['repetition'])):
raw_count.append(np.arange(len(converted_data_temp[0])))
data[f'{name}_time'] = raw_count
else:
# data comes from the OPX as numpy.void. Converts array to contain floats instead.
converted_data_temp = data_temp.astype(float)
data[name] = converted_data_temp
self.communicator['counter'] = current
yield data
```
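Stripped of OPX specifics, the collection contract in the docstring (wait until at least `batchsize` new points exist, fetch only the new slice, otherwise yield `None`) can be sketched with a hypothetical handle object; `FakeHandle` and its methods are illustrative stand-ins, not the real QM API:

```python
class FakeHandle:
    """Hypothetical stand-in for an OPX result handle (not the QM API)."""
    def __init__(self, data):
        self.data = data

    def count_so_far(self):
        # Number of data-points recorded so far.
        return len(self.data)

    def fetch(self, s):
        # Fetch a slice of the recorded data.
        return self.data[s]


def batched_collector(handle, batchsize):
    # Yield batches of at least `batchsize` new items; yield None when
    # fewer than `batchsize` new data-points have appeared since the
    # last successful fetch.
    counter = 0
    while True:
        current = handle.count_so_far()
        if current - counter < batchsize:
            yield None
        else:
            yield handle.fetch(slice(counter, current))
            counter = current
```

The generator only advances its internal counter after a successful fetch, which is what guarantees no data-point is returned twice.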
## Proposal for Base Running and Saving
```
def _create_datadict_structure(sweep: Sweep) -> dd.DataDict:
"""
Returns a structured DataDict from the DataSpecs of a Sweep.
:param sweep: Sweep object from which the DataDict is created.
"""
data_specs = sweep.get_data_specs()
data_dict = dd.DataDict()
for spec in data_specs:
depends_on = spec.depends_on
unit = spec.unit
name = spec.name
# Checks which fields have information and which ones are None.
if depends_on is None:
if unit is None:
data_dict[name] = dict()
else:
data_dict[name] = dict(unit=unit)
else:
if unit is None:
data_dict[name] = dict(axes=depends_on)
else:
data_dict[name] = dict(axes=depends_on, unit=unit)
data_dict.validate()
return data_dict
def _check_none(line: Dict) -> bool:
"""
Checks if the values in a Dict are all None. Returns True if all values are None, False otherwise.
"""
for arg in line.keys():
if line[arg] is not None:
return False
return True
def run_and_save_sweep(sweep: Sweep, data_dir: str, name: str, prt: bool = False) -> None:
"""
Iterates through a sweep, saving the data coming through it into a file called <name> in the <data_dir> directory.
:param sweep: Sweep object to iterate through.
:param data_dir: Directory of file location
:param name: name of the file
:param prt: Bool; if True, the function will print every result coming from the sweep. Default: False.
"""
data_dict = _create_datadict_structure(sweep)
# Creates a file even when it fails.
with dds.DDH5Writer(data_dir, data_dict, name=name) as writer:
for line in sweep:
if not _check_none(line):
if prt:
print(line)
writer.add_data(**line)
print('The measurement has finished successfully and all of the data has been saved.')
```
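The field-by-field branching in `_create_datadict_structure` reduces to a small dictionary transform. A dependency-free sketch follows, with `SimpleSpec` as a hypothetical stand-in for a DataSpec; the real function returns a validated plottr `DataDict` rather than a plain dict:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SimpleSpec:
    """Hypothetical stand-in for a labcore DataSpec."""
    name: str
    unit: Optional[str] = None
    depends_on: Optional[Tuple[str, ...]] = None

def datadict_structure(specs):
    # Build one entry per spec, including only the fields that are set.
    out = {}
    for s in specs:
        entry = {}
        if s.depends_on is not None:
            entry["axes"] = list(s.depends_on)
        if s.unit is not None:
            entry["unit"] = s.unit
        out[s.name] = entry
    return out
```

Independent variables end up as entries with no `axes` field, which is exactly how plottr distinguishes axes from dependents.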
## QUA experiment with implemented decorator
```
@RecordOPX(
dependent('tracker', depends_on=['repetition'], type='array'),
independent('repetition', type='array'),
raw_OPX_data('data_raw', depends_on=['repetition'], type_var='array'),
dependent('V', depends_on=['repetition'], type='array'))
def my_qua_experiment(n_reps=1000):
with program() as qua_measurement:
raw_stream = declare_stream(adc_trace=True)
v_stream = declare_stream()
tracker_stream = declare_stream()
i_stream = declare_stream()
i = declare(int)
v = declare(fixed)
tracker = declare(int, value=0)
with for_(i, 0, i<n_reps, i+1):
save(i, i_stream)
measure('box', 'readout', raw_stream, ("box_sin",v))
save(v, v_stream)
assign(tracker, tracker+2)
save(tracker, tracker_stream)
play('box', "readout")
wait(1000000)
with stream_processing():
i_stream.save_all('repetition')
v_stream.save_all('V')
tracker_stream.save_all('tracker')
raw_stream.input1().save_all('data_raw')
return qua_measurement
DATADIR = './data/'
config = QMConfig()
global_config = config.config()
sweep = create_background_sweep(my_qua_experiment, batchsize=5)
sweep.set_action_opts(
my_qua_experiment=dict(n_reps=100)
)
sweep.get_data_specs()
run_and_save_sweep(sweep, DATADIR, f'OPX test data', prt=True)
```
## Notes
* The `run_and_save_sweep` function should work with any sweep, so it could be added to the general labcore package.
* If a measurement fails to execute, the writer still creates a file, but the file is corrupted and plottr just raises a warning.
# 1D Fast Wave `GiRaFFEfood` Initial Data for `GiRaFFE`
## This module provides another initial data option for `GiRaFFE`, drawn from [this paper](https://arxiv.org/abs/1310.3274) .
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). The initial data has been validated against the original `GiRaFFE`, as documented [here](Tutorial-Start_to_Finish_UnitTest-GiRaFFEfood_NRPy.ipynb).
### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Fast_Wave.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Fast_Wave.py)
## Introduction:
### Fast Wave:
This is a flat-spacetime test with initial data
\begin{align}
A_x &= 0,\ A_y = 0 \\
A_z &= y+\left \{ \begin{array}{lll} -x-0.0075 & \mbox{if} & x \leq -0.1 \\
0.75x^2 - 0.85x & \mbox{if} & -0.1 \leq x \leq 0.1 \\
-0.7x-0.0075 & \mbox{if} & x \geq 0.1 \end{array} \right. , \\
\end{align}
which generates the magnetic field
\begin{align}
B^x(0,x) &= 1.0 \\
B^y(0,x) &= \left \{ \begin{array}{lll} 1.0 & \mbox{if} & x \leq -0.1 \\
1.0-1.5(x+0.1) & \mbox{if} & -0.1 \leq x \leq 0.1 \\
0.7 & \mbox{if} & x \geq 0.1 \end{array} \right. \\
B^z(0,x) &= 0\ .
\end{align}
The electric field is then given by
$$E^x(0,x) = 0.0 \ \ , \ \ E^y(x) = 0.0 \ \ , \ \ E^z(x) = -B^y(0,x) .$$
and the velocity is given by $$\mathbf{v} = \frac{\mathbf{E} \times \mathbf{B}}{B^2}$$ in flat spacetime.
For the eventual purpose of testing convergence, any quantity $Q$ evolves as $Q(t,x) = Q(0,x - t)$.
See the [Tutorial-GiRaFFEfood_NRPy](Tutorial-GiRaFFEfood_NRPy.ipynb) tutorial notebook for more general detail on how this is used.
<a id='toc'></a>
# Table of Contents:
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters
1. [Step 2](#set_a_i): Set the vector $A_i$
1. [Step 3](#set_vi): Calculate $v^i$ from $B^i$ and $E_i$
1. [Step 4](#code_validation): Code Validation against `GiRaFFEfood_NRPy.GiRaFFEfood_NRPy` NRPy+ Module
1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Import core NRPy+ modules and set NRPy+ parameters \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Here, we will import the NRPy+ core modules, set the reference metric to Cartesian, and set commonly used NRPy+ parameters. We will also set up a parameter to determine what initial data is set up, although it won't do much yet.
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0.a: Import the NRPy+ core modules and set the reference metric to Cartesian
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_Common_Functions as gfcf # Some useful functions for GiRaFFE initial data.
import reference_metric as rfm # NRPy+: Reference metric support
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
# Step 1a: Set commonly used parameters.
thismodule = "GiRaFFEfood_NRPy_1D"
```
<a id='set_a_i'></a>
# Step 2: Set the vector $A_i$ \[Back to [top](#toc)\]
$$\label{set_a_i}$$
The vector potential is given as
\begin{align}
A_x &= 0,\ A_y = 0 \\
A_z &= y+\left \{ \begin{array}{lll} -x-0.0075 & \mbox{if} & x \leq -0.1 \\
0.75x^2 - 0.85x & \mbox{if} & -0.1 \leq x \leq 0.1 \\
-0.7x-0.0075 & \mbox{if} & x \geq 0.1 \end{array} \right. . \\
\end{align}
However, to take full advantage of NRPy+'s automated function generation capabilities, we want to write this without the `if` statements, replacing them with calls to `fabs()`. To do so, we will use the NRPy+ module `Min_Max_and_Piecewise_Expressions`.
We will rewrite $A_z$ to make use of the functions provided by `Min_Max_and_Piecewise_Expressions`. As shown below, we make sure that at each boundary, each $\leq$ is paired with a $>$. (This choice is arbitrary; we could just as easily choose $<$ and $\geq$.) This does not change the data since the function is continuous. However, it is necessary for the functions in `Min_Max_and_Piecewise_Expressions` to output the correct results:
\begin{align}
A_x &= 0,\ A_y = 0 \\
A_z &= y+\left \{ \begin{array}{lll} -x-0.0075 & \mbox{if} & x \leq -0.1 \\
0.75x^2 - 0.85x & \mbox{if} & -0.1 < x \leq 0.1 \\
-0.7x-0.0075 & \mbox{if} & x > 0.1 \end{array} \right. . \\
\end{align}
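The idea behind `Min_Max_and_Piecewise_Expressions` (building indicator functions out of `fabs()` so the piecewise $A_z$ needs no branching) can be sketched numerically. The tiny regularizing offset below is a choice made for this sketch; the NRPy+ module has its own machinery:

```python
import math

TINY = 1e-100  # guards the division at exactly x == b (sketch-only choice)

def coord_greater(x, b):
    # ~1 when x > b, ~0 when x <= b, using fabs() instead of an if statement
    d = x - b
    return (d + math.fabs(d)) / (2.0 * math.fabs(d) + TINY)

def coord_leq(x, b):
    return 1.0 - coord_greater(x, b)

def Az(x, y, bound=0.1):
    # Branch-free evaluation of the piecewise A_z defined above
    left = y - x - 0.0075
    center = y + 0.75 * x * x - 0.85 * x
    right = y - 0.7 * x - 0.0075
    return (coord_leq(x, -bound) * left
            + coord_greater(x, -bound) * coord_leq(x, bound) * center
            + coord_greater(x, bound) * right)
```

Because the indicator products are mutually exclusive, exactly one of the three pieces survives at any $x$, and continuity at the boundaries keeps the $\leq$ vs. $>$ pairing harmless.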
```
import Min_Max_and_Piecewise_Expressions as noif
bound = sp.Rational(1,10)
def Ax_FW(x,y,z, **params):
return sp.sympify(0)
def Ay_FW(x,y,z, **params):
return sp.sympify(0)
def Az_FW(x,y,z, **params):
# A_z = y+ (-x-0.0075) if x <= -0.1
# (0.75x^2 - 0.85x) if -0.1 < x <= 0.1
# (-0.7x-0.0075) if x > 0.1
Azleft = y - x - sp.Rational(75,10000)
Azcenter = y + sp.Rational(75,100)*x*x - sp.Rational(85,100)*x
Azright = y - sp.Rational(7,10)*x - sp.Rational(75,10000)
out = noif.coord_leq_bound(x,-bound)*Azleft\
+noif.coord_greater_bound(x,-bound)*noif.coord_leq_bound(x,bound)*Azcenter\
+noif.coord_greater_bound(x,bound)*Azright
return out
```
<a id='set_vi'></a>
# Step 3: Calculate $v^i$ from $B^i$ and $E_i$ \[Back to [top](#toc)\]
$$\label{set_vi}$$
Now, we will set the magnetic and electric fields that we will need to define the initial velocities.
We will first set the magnetic field, once again rewriting $B^y(x)$ to be compatible with `Min_Max_and_Piecewise_Expressions`:
\begin{align}
B^x(0,x) &= 1.0 \\
B^y(0,x) &= \left \{ \begin{array}{lll} 1.0 & \mbox{if} & x \leq -0.1 \\
1.0-1.5(x+0.1) & \mbox{if} & -0.1 < x \leq 0.1 \\
0.7 & \mbox{if} & x > 0.1 \end{array} \right. \\
B^z(0,x) &= 0\ .
\end{align}
Then, we will set the electric field:
$$E^x(0,x) = 0.0 \ \ , \ \ E^y(x) = 0.0 \ \ , \ \ E^z(x) = -B^y(0,x) .$$
```
def ValenciavU_func_FW(**params):
# B^x(0,x) = 1.0
# B^y(0,x) = 1.0 if x <= -0.1
# 1.0-1.5(x+0.1) if -0.1 < x <= 0.1
# 0.7 if x > 0.1
# B^z(0,x) = 0
x = rfm.xx_to_Cart[0]
y = rfm.xx_to_Cart[1]
Byleft = sp.sympify(1)
Bycenter = sp.sympify(1) - sp.Rational(15,10)*(x+sp.Rational(1,10))
Byright = sp.Rational(7,10)
BU = ixp.zerorank1()
BU[0] = sp.sympify(1)
BU[1] = noif.coord_leq_bound(x,-bound)*Byleft\
+noif.coord_greater_bound(x,-bound)*noif.coord_leq_bound(x,bound)*Bycenter\
+noif.coord_greater_bound(x,bound)*Byright
BU[2] = 0
# E^x(0,x) = 0.0 , E^y(x) = 0.0 , E^z(x) = -B^y(0,x)
EU = ixp.zerorank1()
EU[0] = sp.sympify(0)
EU[1] = sp.sympify(0)
EU[2] = -BU[1]
# In flat space, ED and EU are identical, so we can still use this function.
return gfcf.compute_ValenciavU_from_ED_and_BU(EU, BU)
```
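The flat-space drift velocity $\mathbf{v} = (\mathbf{E}\times\mathbf{B})/B^2$ can be spot-checked numerically. The field values below are read off the piecewise expressions at $x = 0.5$, where $B^y = 0.7$ and $E^z = -0.7$:

```python
import numpy as np

def drift_velocity(E, B):
    # v = (E x B) / B^2 in flat spacetime
    return np.cross(E, B) / np.dot(B, B)

E = np.array([0.0, 0.0, -0.7])  # E^z = -B^y at x = 0.5
B = np.array([1.0, 0.7, 0.0])
v = drift_velocity(E, B)
```

By construction the resulting velocity is orthogonal to both $\mathbf{E}$ and $\mathbf{B}$, a quick sanity check on any drift-velocity implementation.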
<a id='code_validation'></a>
# Step 4: Code Validation against `GiRaFFEfood_NRPy.GiRaFFEfood_NRPy` NRPy+ Module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the `GiRaFFE` Fast Wave initial data equations we intend to use between
1. this tutorial and
2. the NRPy+ [`GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_fast_wave.py`](../edit/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_fast_wave.py) module.
```
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy as gf
A_fwD = gfcf.Axyz_func_Cartesian(Ax_FW,Ay_FW,Az_FW,stagger_enable = True)
Valenciav_fwD = ValenciavU_func_FW()
gf.GiRaFFEfood_NRPy_generate_initial_data(ID_type = "FastWave", stagger_enable = True)
def consistency_check(quantity1,quantity2,string):
if quantity1-quantity2==0:
print(string+" is in agreement!")
else:
print(string+" does not agree!")
sys.exit(1)
print("Consistency check between GiRaFFEfood_NRPy tutorial and NRPy+ module:")
for i in range(3):
consistency_check(Valenciav_fwD[i],gf.ValenciavU[i],"ValenciavU"+str(i))
consistency_check(A_fwD[i],gf.AD[i],"AD"+str(i))
```
<a id='latex_pdf_output'></a>
# Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf](Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFEfood_NRPy-Fast_Wave",location_of_template_file=os.path.join(".."))
```
# Hyperparameter tuning with Cloud ML Engine
**Learning Objectives:**
* Improve the accuracy of a model by hyperparameter tuning
```
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
os.environ['TFVERSION'] = '1.8' # Tensorflow version
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
```
## Create command-line program
In order to submit to Cloud ML Engine, we need to create a distributed training program. Let's convert our housing example to fit that paradigm, using the Estimators API.
```
%%bash
rm -rf trainer
mkdir trainer
touch trainer/__init__.py
%%writefile trainer/house.py
import os
import math
import json
import shutil
import argparse
import numpy as np
import pandas as pd
import tensorflow as tf
def train(output_dir, batch_size, learning_rate):
tf.logging.set_verbosity(tf.logging.INFO)
# Read dataset and split into train and eval
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
df['num_rooms'] = df['total_rooms'] / df['households']
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
# Train and eval input functions
SCALE = 100000
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
y = traindf["median_house_value"] / SCALE, # note the scaling
num_epochs = None,
batch_size = batch_size, # note the batch size
shuffle = True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
batch_size = len(evaldf),
shuffle=False)
# Define feature columns
features = [tf.feature_column.numeric_column('num_rooms')]
def train_and_evaluate(output_dir):
# Compute appropriate number of steps
num_steps = (len(traindf) / batch_size) / learning_rate # if learning_rate=0.01, hundred epochs
# Create custom optimizer
myopt = tf.train.FtrlOptimizer(learning_rate = learning_rate) # note the learning rate
# Create rest of the estimator as usual
estimator = tf.estimator.LinearRegressor(model_dir = output_dir,
feature_columns = features,
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec = tf.estimator.TrainSpec(input_fn = train_input_fn,
max_steps = num_steps)
eval_spec = tf.estimator.EvalSpec(input_fn = eval_input_fn,
steps = None)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run the training
shutil.rmtree(output_dir, ignore_errors=True) # start fresh each time
train_and_evaluate(output_dir)
if __name__ == '__main__' and "get_ipython" not in dir():
parser = argparse.ArgumentParser()
parser.add_argument(
'--job-dir',
help = 'GCS location to write checkpoints and export models.',
required = True
)
# Add learning_rate and batch_size as command line args (these are the parameters tuned by Cloud ML Engine)
parser.add_argument('--batch_size', help = 'Number of examples per training batch.', type = int, default = 30)
parser.add_argument('--learning_rate', help = 'Learning rate for the FTRL optimizer.', type = float, default = 0.02)
args = parser.parse_args()
print("Writing checkpoints to {}".format(args.job_dir))
#pass the command line args to the train function
train(args.job_dir, args.batch_size, args.learning_rate)
%%bash
rm -rf house_trained
gcloud ml-engine local train \
--module-name=trainer.house \
--job-dir=house_trained \
--package-path=$(pwd)/trainer \
-- \
--batch_size=30 \
--learning_rate=0.02
```
# Create hyperparam.yaml
```
%%writefile hyperparam.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
maxTrials: 5
maxParallelTrials: 1
hyperparameterMetricTag: rmse
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 64
scaleType: UNIT_LINEAR_SCALE
- parameterName: learning_rate
type: DOUBLE
minValue: 0.01
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
%%bash
OUTDIR=gs://${BUCKET}/house_trained # CHANGE bucket name appropriately
gsutil rm -rf $OUTDIR
gcloud ml-engine jobs submit training house_$(date -u +%y%m%d_%H%M%S) \
--config=hyperparam.yaml \
--module-name=trainer.house \
--package-path=$(pwd)/trainer \
--job-dir=$OUTDIR \
--runtime-version=$TFVERSION
!gcloud ml-engine jobs describe house_180403_231031 # CHANGE jobId appropriately
```
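The two `scaleType` values above control how the tuner maps its internal unit interval onto each parameter's range. The sketch below only illustrates the documented scaling semantics; the service's actual search algorithm is internal:

```python
def unit_linear_scale(u, lo, hi):
    # UNIT_LINEAR_SCALE: u in [0, 1] maps linearly onto [lo, hi]
    return lo + u * (hi - lo)

def unit_log_scale(u, lo, hi):
    # UNIT_LOG_SCALE: u in [0, 1] maps onto [lo, hi] with equal weight
    # per multiplicative factor, so small values are explored as densely
    # as large ones (requires lo > 0)
    return lo * (hi / lo) ** u
```

With `UNIT_LOG_SCALE`, the midpoint of the unit interval lands at the geometric mean $\sqrt{0.01 \times 0.1} \approx 0.032$, so small learning rates get as much attention as large ones.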
## Challenge exercise
Add a few engineered features to the housing model, and use hyperparameter tuning to choose which set of features the model uses.
<p>
Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
**[Advanced SQL Home Page](https://www.kaggle.com/learn/advanced-sql)**
---
# Introduction
Now that you know how to query nested and repeated data, you're ready to draw interesting insights from the [GitHub Repos](https://www.kaggle.com/github/github-repos) dataset.
Before you get started, run the following cell to set everything up.
```
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex3 import *
print("Setup Complete")
```
# Exercises
### 1) Who had the most commits in 2016?
GitHub is the most popular place to collaborate on software projects. A GitHub **repository** (or repo) is a collection of files associated with a specific project, and a GitHub **commit** is a change that a user has made to a repository. We refer to the user as a **committer**.
The `sample_commits` table contains a small sample of GitHub commits, where each row corresponds to a different commit. The code cell below fetches the table and shows the first five rows of this table.
```
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "github_repos" dataset
dataset_ref = client.dataset("github_repos", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "sample_commits" table
table_ref = dataset_ref.table("sample_commits")
# API request - fetch the table
sample_commits_table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(sample_commits_table, max_results=5).to_dataframe()
```
Run the next code cell to print the table schema.
```
# Print information on all the columns in the table
sample_commits_table.schema
```
Write a query to find the individuals with the most commits in this table in 2016. Your query should return a table with two columns:
- `committer_name` - contains the name of each individual with a commit (from 2016) in the table
- `num_commits` - shows the number of commits the individual has in the table (from 2016)
Sort the table, so that people with more commits appear first.
**NOTE**: You can find the name of each committer and the date of the commit under the "committer" column, in the "name" and "date" child fields, respectively.
```
# Write a query to find the answer
max_commits_query = """
SELECT committer.name AS committer_name, COUNT(*) AS num_commits
FROM `bigquery-public-data.github_repos.sample_commits`
WHERE committer.date >= '2016-01-01' AND committer.date < '2017-01-01'
GROUP BY committer_name
ORDER BY num_commits DESC
"""
# Check your answer
q_1.check()
```
### 2) Look at languages!
Now you will work with the `languages` table. Run the code cell below to print the first few rows.
```
# Construct a reference to the "languages" table
table_ref = dataset_ref.table("languages")
# API request - fetch the table
languages_table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(languages_table, max_results=5).to_dataframe()
```
Each row of the `languages` table corresponds to a different repository.
- The "repo_name" column contains the name of the repository,
- the "name" field in the "language" column contains the programming languages that can be found in the repo, and
- the "bytes" field in the "language" column has the size of the files (in bytes, for the corresponding language).
Run the following code cell to print the table schema.
```
# Print information on all the columns in the table
languages_table.schema
```
Assume for the moment that you have access to a table called `sample_languages` that contains only a very small subset of the rows from the `languages` table: in fact, it contains only three rows! This table is depicted in the image below.

How many rows are in the table returned by the query below?

Fill in your answer in the next code cell.
```
# Fill in the blank
num_rows = 6
# Check your answer
q_2.check()
```
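`UNNEST` turns each element of a repeated field into its own output row, so row counts multiply. A hypothetical mini-table (not the one pictured above) makes the behavior concrete:

```python
# Hypothetical rows mimicking a table with a REPEATED RECORD column `language`
rows = [
    {"repo_name": "repo_a",
     "language": [{"name": "Python", "bytes": 100}, {"name": "C", "bytes": 50}]},
    {"repo_name": "repo_b",
     "language": [{"name": "Python", "bytes": 80}]},
]

# Equivalent of: SELECT repo_name, l.name FROM rows, UNNEST(language) AS l
flattened = [(r["repo_name"], l["name"]) for r in rows for l in r["language"]]
```

Each repo contributes one output row per language it contains, which is why counting rows after `UNNEST` answers the question above.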
### 3) What's the most popular programming language?
Write a query to leverage the information in the `languages` table to determine which programming languages appear in the most repositories. The table returned by your query should have two columns:
- `language_name` - the name of the programming language
- `num_repos` - the number of repositories in the `languages` table that use the programming language
Sort the table so that languages that appear in more repos are shown first.
```
# Write a query to find the answer
pop_lang_query = """
SELECT l.name as language_name, COUNT(*) as num_repos
FROM `bigquery-public-data.github_repos.languages`,
UNNEST(language) AS l
GROUP BY language_name
ORDER BY num_repos DESC
"""
# Check your answer
q_3.check()
```
### 4) Which languages are used in the repository with the most languages?
For this question, you'll restrict your attention to the repository with name `'polyrabbit/polyglot'`.
Write a query that returns a table with one row for each language in this repository. The table should have two columns:
- `name` - the name of the programming language
- `bytes` - the total number of bytes of that programming language
Sort the table by the `bytes` column so that programming languages that take up more space in the repo appear first.
```
# Your code here
all_langs_query = """
SELECT l.name, l.bytes
FROM `bigquery-public-data.github_repos.languages`,
UNNEST(language) as l
WHERE repo_name = 'polyrabbit/polyglot'
ORDER BY l.bytes DESC
"""
# Check your answer
q_4.check()
```
# Gallery of examples

Here you can browse a gallery of examples using EinsteinPy in the form of Jupyter notebooks.
## [Visualizing advance of perihelion of a test particle in Schwarzschild space-time](docs/source/examples/Visualizing%20advance%20of%20perihelion%20of%20a%20test%20particle%20in%20Schwarzschild%20space-time.ipynb)
[](docs/source/examples/Visualizing%20advance%20of%20perihelion%20of%20a%20test%20particle%20in%20Schwarzschild%20space-time.ipynb)
## [Analysing Earth using EinsteinPy!](docs/source/examples/Analysing%20Earth%20using%20EinsteinPy!.ipynb)
[](docs/source/examples/Analysing%20Earth%20using%20EinsteinPy!.ipynb)
## [Symbolically Understanding Christoffel Symbol and Riemann Curvature Tensor using EinsteinPy](docs/source/examples/Symbolically%20Understanding%20Christoffel%20Symbol%20and%20Riemann%20Curvature%20Tensor%20using%20EinsteinPy.ipynb)
[](docs/source/examples/Symbolically%20Understanding%20Christoffel%20Symbol%20and%20Riemann%20Curvature%20Tensor%20using%20EinsteinPy.ipynb)
## [Visualizing frame dragging in Kerr spacetime](docs/source/examples/Visualizing%20frame%20dragging%20in%20Kerr%20spacetime.ipynb)
[](docs/source/examples/Visualizing%20frame%20dragging%20in%20Kerr%20spacetime.ipynb)
## [Visualizing event horizon and ergosphere of Kerr black hole](docs/source/examples/Visualizing%20event%20horizon%20and%20ergosphere%20of%20Kerr%20black%20hole.ipynb)
[](docs/source/examples/Visualizing%20event%20horizon%20and%20ergosphere%20of%20Kerr%20black%20hole.ipynb)
## [Spatial Hypersurface Embedding for Schwarzschild Space-Time!](docs/source/examples/Plotting%20spacial%20hypersurface%20embedding%20for%20schwarzschild%20spacetime.ipynb)
[](docs/source/examples/Plotting%20spacial%20hypersurface%20embedding%20for%20schwarzschild%20spacetime.ipynb)
## [Playing with Contravariant and Covariant Indices in Tensors(Symbolic)](docs/source/examples/Playing%20with%20Contravariant%20and%20Covariant%20Indices%20in%20Tensors(Symbolic).ipynb)
[](docs/source/examples/Playing%20with%20Contravariant%20and%20Covariant%20Indices%20in%20Tensors(Symbolic).ipynb)
## [Ricci Tensor and Scalar Curvature calculations using Symbolic module](docs/source/examples/Ricci%20Tensor%20and%20Scalar%20Curvature%20symbolic%20calculation.ipynb)
[](docs/source/examples/Ricci%20Tensor%20and%20Scalar%20Curvature%20symbolic%20calculation.ipynb)
<center><em>Gregorio Ricci-Curbastro</em></center>
## [Weyl Tensor calculations using Symbolic module](docs/source/examples/Weyl%20Tensor%20symbolic%20calculation.ipynb)
[](docs/source/examples/Weyl%20Tensor%20symbolic%20calculation.ipynb)
<center><em>Hermann Weyl</em></center>
## [Einstein Tensor calculations using Symbolic module](docs/source/examples/Einstein%20Tensor%20symbolic%20calculation.ipynb)
[](docs/source/examples/Einstein%20Tensor%20symbolic%20calculation.ipynb)
## [Lambdify in Symbolic module](docs/source/examples/Lambdify%20symbolic%20calculation.ipynb)
[](docs/source/examples/Lambdify%20symbolic%20calculation.ipynb)
## [Predefined Metrics in Symbolic Module](docs/source/examples/Predefined%20Metrics%20in%20Symbolic%20Module.ipynb)
[](docs/source/examples/Predefined%20Metrics%20in%20Symbolic%20Module.ipynb)
## [Shadow cast by a thin emission disk around a black hole](docs/source/examples/Shadow%20cast%20by%20an%20thin%20emission%20disk%20around%20a%20black%20hole.ipynb)
[](docs/source/examples/Shadow%20cast%20by%20an%20thin%20emission%20disk%20around%20a%20black%20hole.ipynb)
```
using PyPlot
using PyCall
using Peaks
np=pyimport("numpy")
eV = 27.2114; #Hartree to eV conversion
rcParams = PyPlot.PyDict(PyPlot.matplotlib."rcParams")
rcParams["font.size"] = 15
rcParams["legend.fontsize"] = "small"
rcParams["axes.labelsize"] = "xx-large"
rcParams["axes.titlesize"] = "xx-large"
rcParams["xtick.labelsize"] = "xx-large"
rcParams["ytick.labelsize"] = "xx-large"
rcParams["font.sans-serif"] = "Arial"
rcParams["font.family"] = "sans-serif"
rcParams["figure.figsize"] = (6, 6);
dir_nohybrid = string(pwd(), "/nohybrid");
# We consider the following range of wavelengths to be "green":
println("Green range: ", round(1240/2.3, digits=3) ,"nm to ", round(1240/2.6, digits=3), "nm")
println("Red range: ", round(1240/1.8, digits=3) ,"nm to ", round(1240/2, digits=3), "nm")
#Loop over all simulations and store the band indices and oscillator strengths corresponding to the
#relevant green and red transitions.
AllValidPairsGreen = Vector{Tuple{String, Vector{Tuple{Integer, Integer, Integer, Float64, Float64}}}}()
# Vector of tuples of the form: (ID, (spin, lowerband, upperband, energydiff, oscillator strength))
AllValidPairsRed = Vector{Tuple{String, Vector{Tuple{Integer, Integer, Integer, Float64, Float64}}}}()
# Vector of tuples of the form: (ID, (spin, lowerband, upperband, energydiff, oscillator strength))
# Oscillator strengths can simply be summed in the end since we have no phase-interfering terms.
transitioneigenvals = Float64[]
for (i, j) in Tuple.(CartesianIndices(rand(4, 5)))
id="$(i)$(j)"
μ=0
(isfile("$dir_nohybrid/STH2$(id).eigStats") && isfile("$dir_nohybrid/STH2$(id).momenta") && isfile("$dir_nohybrid/STH2$(id).eigenvals")) || continue
for l in readlines("$dir_nohybrid/STH2$(id).eigStats")
if VERSION.minor >= 5
contains(l, "mu" ) || continue ;
elseif VERSION.minor < 5
occursin("mu", l) || continue ;
end
μ = parse(Float64, l[6:end]);
end
eigenvalsup = np.fromfile("$dir_nohybrid/STH2$(id).eigenvals")[1:48]*eV
eigenvalsdn = np.fromfile("$dir_nohybrid/STH2$(id).eigenvals")[49:96]*eV
momenta = np.reshape(abs.(np.fromfile("$dir_nohybrid/STH2$(id).momenta", dtype=np.complex128)), (2, 3, 48, 48));
(permutedims(momenta, (1, 2, 4, 3)) ≈ conj(momenta)) || error("Momenta Do Not Satisfy Hermiticity Condition")
ValidPairsGreen = Vector{Tuple{Integer, Integer, Integer, Float64, Float64}}()
ValidPairsRed = Vector{Tuple{Integer, Integer, Integer, Float64, Float64}}()
for spin in [1, 2]
transitioneigenvals = (spin == 1) ? eigenvalsup : eigenvalsdn
for (i, e) in enumerate(transitioneigenvals)
for (j, eprime) in enumerate(transitioneigenvals)
(i > 10 && j > 10 ) || continue # Only look at particular range of bands
(i < 30 && j < 30) || continue # Only look at particular range of bands
if (2.3 < abs(e-eprime) < 2.6) # Range for green transitions
#Note that we must convert back from eV to hartree, so the denominator is divided by 27.2
!((j, i) in [v[2:3] for v in ValidPairsGreen]) && push!(ValidPairsGreen, (spin, i, j, abs(e-eprime), eV*2/3*sum((abs.(momenta[spin, :, i, j])).^2)/abs(e-eprime) ))
elseif (1.8 < abs(e-eprime) < 2) # Range for red transitions
!((j, i) in [v[2:3] for v in ValidPairsRed]) && push!(ValidPairsRed, (spin, i, j, abs(e-eprime), eV*2/3*sum((abs.(momenta[spin, :, i, j])).^2)/abs(e-eprime)))
end
end
end
end
push!(AllValidPairsGreen, (id, ValidPairsGreen))
push!(AllValidPairsRed, (id, ValidPairsRed))
end
#figure(figsize=(20, 10))
#id, kpoint, band1, band2
Highest_Oscillators_Green = Vector{Tuple{String, Tuple{Integer, Integer, Integer, Float64, Float64}}}()
Highest_Oscillators_Red = Vector{Tuple{String, Tuple{Integer, Integer, Integer, Float64, Float64}}}()
for (ValidPairsGreen, ValidPairsRed) in zip(AllValidPairsGreen, AllValidPairsRed)
figure(figsize=(8, 10))
id = ValidPairsGreen[1]
id!="31" && continue
suptitle("ID: $id")
eigenvalsup = np.fromfile("$dir_nohybrid/STH2$(id).eigenvals")[1:48]*eV
eigenvalsdn = np.fromfile("$dir_nohybrid/STH2$(id).eigenvals")[49:96]*eV
momenta = np.reshape(abs.(np.fromfile("$dir_nohybrid/STH2$(id).momenta", dtype=np.complex128)), (2, 3, 48, 48));
maxgreen = argmax([v[5] for v in ValidPairsGreen[2]])
maxred = argmax([v[5] for v in ValidPairsRed[2]])
push!(Highest_Oscillators_Green, (id, ValidPairsGreen[2][maxgreen]))
push!(Highest_Oscillators_Red, (id, ValidPairsRed[2][maxred]))
subplot(1, 2, 1)
for i in 1:48
((i == ValidPairsGreen[2][maxgreen][2]) && (ValidPairsGreen[2][maxgreen][1] == 1)) ? plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="black", linewidth=5) : plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsGreen[2][maxgreen][3]) && (ValidPairsGreen[2][maxgreen][1] == 1)) ? plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="green", linewidth=5) : plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsGreen[2][maxgreen][2]) && (ValidPairsGreen[2][maxgreen][1] == 2)) ? plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="black", linewidth=5) : plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsGreen[2][maxgreen][3]) && (ValidPairsGreen[2][maxgreen][1] == 2)) ? plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="green", linewidth=5) : plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="gray", linewidth=.4);
end
ylim(-20, -10)
xticks([])
ylabel("Energy (eV)")
subplot(1, 2, 2)
for i in 1:48
((i == ValidPairsRed[2][maxred][2]) && (ValidPairsRed[2][maxred][1] == 1)) ? plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="black", linewidth=5) : plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsRed[2][maxred][3]) && (ValidPairsRed[2][maxred][1] == 1)) ? plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="red", linewidth=5) : plot(reshape(np.repeat(eigenvalsup[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsRed[2][maxred][2]) && (ValidPairsRed[2][maxred][1] == 2)) ? plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="black", linewidth=5) : plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="gray", linewidth=.4);
((i == ValidPairsRed[2][maxred][3]) && (ValidPairsRed[2][maxred][1] == 2)) ? plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="red", linewidth=5) : plot(reshape(np.repeat(eigenvalsdn[i], 10), (10, 1)), color="gray", linewidth=.4);
end
ylim(-20, -10)
xticks([])
savefig("energylevels.svg")
end
energies = collect(range(1, 30, length=3000))
lambdas = 1240 ./ collect(range(1, 30, length=3000))
broadening=0.15
for (ValidPairsGreen, ValidPairsRed) in zip(Highest_Oscillators_Green, Highest_Oscillators_Red)
figure(figsize=(5, 5))
id = ValidPairsGreen[1]
id != "31" && continue
suptitle("ID: $id")
FSTH2=zeros(100*30)
ediff_green = ValidPairsGreen[2][4]
strength_green = ValidPairsGreen[2][5]
ediff_red = ValidPairsRed[2][4]
strength_red = ValidPairsRed[2][5]
FSTH2 = strength_green*exp.(-0.5*(energies.-ediff_green).^2/(broadening)^2)
FSTH2 += strength_red*exp.(-0.5*(energies.-ediff_red).^2/(broadening)^2)
FSTH2 *= eV
plot(lambdas, FSTH2, linewidth=5)
ylabel("Oscillator strength (a.u.)")
xlabel("Wavelength (nm)")
xlim(400,800)
ylim(0, maximum(FSTH2[argmin(abs.(lambdas .-800)):argmin(abs.(lambdas .- 400))]))
println("Peaks for id: $id: ", lambdas[argmaxima(FSTH2)])
end
savefig("spectrum.svg")
```
# Scraping Monthly Tweets
This notebook uses the GetOldTweets3 package to scrape monthly tweets from Twitter's search results page. It also contains some post-processing options in the lower cells.
## Import Packages
```
import os
import gc
from IPython.display import Audio, display
import time
from collections import Counter
import pandas as pd
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
import GetOldTweets3 as got
```
## Run Monthly Scrape
### Construct each group to be scraped
```
#Artificial Intelligence graph terms
AI_graph = ['#AI', '#ML', '#NLP', 'Artificial Intelligence','Deep Learning',
'"Machine Learning"', 'Natural Language Processing','Neural Network']
#Distributed Ledger graph terms
DL_graph = ['Bitcoin', 'Blockchain','Ethereum','distributed ledger','smart contract']
```
### Define Functions
```
def allDone():
    '''Play a short audio clip when called; typically used to signal task completion.'''
    display(Audio(url='https://sound.peal.io/ps/audios/000/000/537/original/woo_vu_luvub_dub_dub.wav', autoplay=True))

def update_tweet_csv(path, DF, start, end, delta, Verbose=True):
    '''Save the results of the scrape to disk. Meant to be called within a loop,
    appending the data scraped on each iteration to the DF stored on disk.
    Typically the loop runs daily scrapes over the period of a month.'''
    #if the scrape was successful and the file doesn't exist, create a file and save the DF as a csv
    if len(DF)>0 and os.path.isfile(path) == False:
        DF.to_csv(path, index=False)
        #start and end parameters don't need editing since the scrape was successful
        start, end = start, end
        #print date scraped, time of scrape, and number of daily tweets scraped
        if Verbose==True:
            print(since," // ",datetime.now()," / ", round(len(DF))," tweets/day")
    #if the scrape is successful and the file already exists, append to it
    elif len(DF)>0 and os.path.isfile(path) == True:
        #open the csv of the month being scraped
        globe = pd.read_csv(path)
        #append the day scraped
        globe = globe.append(DF)
        #save the new DF to the csv
        globe.to_csv(path, index=False)
        start, end = start, end
        if Verbose==True:
            print(since," // ",datetime.now()," // ", round(len(DF))," tweets/day ",len(globe))
    #if twitter data was not reached due to interruptions/blocks, wait then retry that day
    elif len(DF)==0:
        if Verbose==True:
            print(since," // ",datetime.now()," // ", round(len(DF))," tweets/day **")
        #adjust the start and end dates to retry scraping this day
        start -= delta
        end -= delta
        time.sleep(60)
    return start, end
def tweets_to_df(tweet):
    '''Save the results of the twitter scrapes into lists, then create a DF from them.
    This is needed to extract info from the GetOldTweets3 generator object.'''
    #initialize lists
    text, date, hashtag, username, link, keyword, ID = [], [], [], [], [], [], []
    #add content to lists using the GOT3 "tweet" generator object
    for tweets in tweet:
        text.append(str(tweets.text))
        date.append(str(tweets.date))
        hashtag.append(str(tweets.hashtags))
        username.append(str(tweets.username))
        link.append(tweets.permalink)
        keyword.append(word)
        ID.append(tweets.id)
    #compile content into a DF
    DF = pd.DataFrame({'tweet':text, 'date/time':date, 'hashtags':hashtag, 'user':username, 'links':link,
                       'search':keyword, 'tweet_id':ID})
    return DF
```
#### Why Twitter has limitations and why you should download in daily intervals:
"The issue here is **Min_position** and **Has_more_items** flags. Twitter's legacy timeline caching system **Haplo** has its limitations. So when you start downloading millions of tweets, it runs out of memory and sometimes returns has_more_items as false. You can read about how twitter cache works in here
https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale.html "
source: https://github.com/Mottl/GetOldTweets3/issues/3
### Run Monthly Scrape
Info relating to the steps below:
- set the start and end date to be scraped
- scrapes are run in daily intervals because that is the smallest interval allowed by Twitter. (Results are scraped in descending chronological order, so if a scrape is run over a week and gets interrupted due to hash issues, days' worth of data can be lost; if a single day's scrape gets interrupted, only hours are lost. This saves the user the hassle of rechecking for missing days and rescraping.)
- if you don't want to see progress updates, set Verbose=False in the update_tweet_csv function

Background info:
- the typical speed of GOT3 is roughly 2.5 million tweets/day
- scraping a month's worth of data, using the lists above (AI_graph and DL_graph), takes a full day
- using a different proxy for each request (20 tweets) via services like Crawlera slows scraping down by about 5.5 times
- it is recommended to use a different IP address for each day of scraping, or Twitter repeatedly blocks the scrape (roughly every 10 cycles)
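As a minimal sketch of the daily-interval scheme described above (the helper name `daily_windows` is illustrative, not part of the notebook), each month is broken into `(since, until)` date-string pairs, one per day:

```python
from datetime import datetime, timedelta

def daily_windows(first_day, stop_point):
    """Yield (since, until) date strings for each day in [first_day, stop_point)."""
    delta = timedelta(days=1)
    day = first_day
    while day < stop_point:
        yield day.strftime("%Y-%m-%d"), (day + delta).strftime("%Y-%m-%d")
        day += delta

# three daily windows covering 2019-07-01 .. 2019-07-03
windows = list(daily_windows(datetime(2019, 7, 1), datetime(2019, 7, 4)))
# windows[0] == ("2019-07-01", "2019-07-02")
```

Each pair maps directly onto `setSince(since).setUntil(until)` in the scrape loop below.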
```
for word in AI_graph+DL_graph:
    delta = timedelta(days = 1) #set scrape range (e.g. number of days, weeks, months)
    start = datetime(2019,7,1) - delta #set first day of scrape
    x = start + 2*delta # x is the element used in the while loop indicating the current start date being scraped
    stop_point = datetime(2019,8,1) #set final day of scrape, this is not inclusive
    data_dir = os.getcwd() + '/twitter_data_2019/'
    file_name = 'globe_' + word + "_" + (start+delta).strftime('%Y-%m') + '.csv'
    print(file_name, '\nstart: ', datetime.now())
    while x < stop_point:
        try:
            start += delta
            end = start + delta
            since = (start).strftime("%Y-%m-%d")
            until = (end).strftime("%Y-%m-%d")
            x = end
            #Get tweets by query search
            tweetCriteria = got.manager.TweetCriteria().setQuerySearch(word).setSince(since).setUntil(until)
            tweet = got.manager.TweetManager.getTweets(tweetCriteria)
            #store the data as a DF
            DF = tweets_to_df(tweet)
            #save the daily scrape to csv on disk and update start & end accordingly
            path = data_dir + file_name
            start, end = update_tweet_csv(path,DF,start,end,delta,Verbose=True)
            #minimize memory retention
            del [DF, tweet, tweetCriteria]
            gc.collect()
        #in case of an error occurring mid scrape cycle, wait then repeat the cycle
        except:
            print('error occurred at ', since, datetime.now())
            #maintain the same date and don't save the data
            start -= delta
            end -= delta
            #wait a while before trying again
            time.sleep(120)
    #audio signal when each phrase/month finishes
    allDone()
```
## Check Continuity of Data
As mentioned above, due to hash issues and other limitations, Twitter sometimes limits the results returned in a search. To detect the missing data, use the cell below to discover the number of hours and dates missing.
Info to be filled in:
- the "filename" should be changed to the path of the desired csv
- change the range of dates to be searched for missing data by changing b (end date) and a (start date)
- set the min_hrs parameter, which is used to show the days with more than a certain number of hours missing (e.g. with min_hrs=2, only dates with more than 2 hrs missing will be printed)

Results:
- percent of hours missed (typically a DF will have <3% missing data if the scrape is done in daily intervals as recommended)
- a list showing how many days have how many hours missing (# of hrs, num of days), e.g. [(1, 4), (2, 2), ..., (23, 5), (24, 2)]
- the number of days with more than min_hrs missing
- the date of each day with more than min_hrs missing (this list can later be used to rescrape dates with a significant number of hrs/day missing)
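The gap check in the next cell boils down to a set difference over hourly timestamps followed by a per-day count. A minimal self-contained sketch of that idea, with made-up missing hours (the two-day range and the "missing" 03:00/04:00 hours are illustrative):

```python
from collections import Counter
from datetime import datetime, timedelta

a, b = datetime(2019, 7, 1), datetime(2019, 7, 3)
numhrs = 24 * (b - a).days                       # 48 hours in the range
all_hours = {a + timedelta(hours=h) for h in range(numhrs)}
# pretend hours 03:00 and 04:00 of each day were never scraped
scraped = {h for h in all_hours if h.hour not in (3, 4)}
# count missing hours per day, in chronological order
missed_per_day = sorted(Counter(h.date() for h in all_hours - scraped).items())
# missed_per_day == [(date(2019, 7, 1), 2), (date(2019, 7, 2), 2)]
```

Swapping `all_hours` for the generated date range and `scraped` for the hours actually present in the csv gives exactly the per-day miss counts the cell below reports.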
```
#### Check if data is continuous
filename = 'globe_Bitcoin_130101_170121_6.csv'
print(filename, datetime.now())
# get hours scraped; for days change 13 to 10
actual = set([datetime.strptime(date_str[:13],"%Y-%m-%d %H") for date_str in pd.read_csv(filename)['date/time']])
# generate all possible hours in the date range
b = datetime(2017,1,21)
a = datetime(2013,1,1)
numhrs = 24*((b.date()-a.date()).days)
dateList = []
for x in range(0, numhrs+2):
    dateList.append(a + timedelta(hours = x))
#list the incomplete/missing dates
min_hrs = 1 #the minimum number of missing hours needed to display a date
hours_missed = sorted(set(dateList) - actual) #all missing hours
counter = Counter([date.date() for date in hours_missed]) #count hours missed per day
sort = sorted(counter.items()) #sort in chronological order
dates_missed = [date[0] for date in sort if date[1]>min_hrs] #keep dates with more than min_hrs hours missing
#calculate the total number of hours missed as a percentage
summary = Counter([date[1] for date in sort])
summary = sorted(summary.items())
total_missed_hours = sum([x[0]*x[1] for x in summary])
print('Missing: ', total_missed_hours*100/numhrs, "%","\n",summary)
# create since and until to search twitter for those missing date ranges
since_missing = dates_missed
until_missing = [dm + timedelta(days=1) for dm in dates_missed]
print(" # Days: ", len(dates_missed),"\n",
      "Ranges sizes: ", since_missing)
```
## Rescrape missing Data
This cell is used to rescrape days with a significant number of missing hours. It is optional and can be avoided by simply scraping one day at a time as recommended above; it was only created to amend scrapes initially done in larger intervals (1-week scrapes). Since Twitter's smallest date interval is a day, setting the scrape to 1-day intervals achieves the highest accuracy from the start.
```
# import urllib3
# urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# import warnings
# warnings.filterwarnings('ignore', category=ResourceWarning)
#Amended search for missing dates
filename = 'globe_' + AI_graph[0] + '_130101_170121_5_missing.csv'
print("start: ", filename, datetime.now())
word = AI_graph[0]
for i in range(len(since_missing)):
if (until_missing[i] - since_missing[i]).days <= 0:
continue
since = (since_missing[i]).strftime("%Y-%m-%d")
until = (until_missing[i]).strftime("%Y-%m-%d")
text, date, hashtag, username, link, keyword, ID = [], [], [], [], [], [], []
try:
#Get tweets by query search
tweetCriteria = got.manager.TweetCriteria().setQuerySearch(word).setSince(since).setUntil(until)
tweet = got.manager.TweetManager.getTweets(tweetCriteria)
except:
print('ERROR: ', since)
time.sleep(15)
continue
#add content to lists
for tweets in tweet:
text.append(str(tweets.text))
date.append(str(tweets.date))
hashtag.append(str(tweets.hashtags))
username.append(str(tweets.username))
link.append(tweets.permalink)
keyword.append(word)
ID.append(tweets.id)
#compile content into a DF
DF = pd.DataFrame({'tweet':text, 'date/time':date, 'hashtags':hashtag, 'user':username, 'links':link,
'search':keyword,'tweet_id':ID})
if len(DF)>0 and os.path.isfile(filename) == False:
DF.to_csv(filename, index=False)
print(since,"-->",until," // ",datetime.now()," / ", len(DF)/(until_missing[i] - since_missing[i]).days,"rows/days")
del [DF, text, date, hashtag, username, link, keyword, ID, tweet, tweetCriteria ]
gc.collect()
continue
elif len(DF)>0 and os.path.isfile(filename) == True:
globe = pd.read_csv(filename)
globe = globe.append(DF)
globe.to_csv(filename, index=False)
print(since,"-->",until," // ",datetime.now()," // ", (len(DF))/(until_missing[i] - since_missing[i]).days,"rows/days")
del [globe, DF, text, date, hashtag, username, link, keyword, ID, tweet, tweetCriteria ]
gc.collect()
continue
else:
print(since," // ",datetime.now()," // "," 0 rows")
```
# Function ptrans
## Synopse
Perform periodic translation in 1-D, 2-D or 3-D space.
- **g = ptrans(f, t)**
- **OUTPUT**
- **g**: Image. Periodically translated image.
- **INPUT**
- **f**: Image ndarray. Image to be translated.
- **t**: Tuple. (tz,tr,tc)
## Description
Translate a 1-D, 2-D or 3-dimensional image periodically. This translation can be seen as a window view
displacement on an infinite tile wall where each tile is a copy of the original image. The
periodic translation is related to periodic convolution and the discrete Fourier transform.

Be careful when implementing this function using mod: some mod implementations in C do not
follow the mathematical definition when the number is negative.
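To illustrate the warning about `mod`: Python's `%` is already the floored (mathematical) modulo, so an index like `(col - t) % W` stays in `[0, W)` even for negative shifts, whereas C's `%` truncates toward zero and can return a negative value. A floored modulo can be written explicitly as:

```python
def true_mod(a, n):
    """Floored modulo: result is always in [0, n) for n > 0."""
    return a - n * (a // n)

print((-1) % 5)         # Python's built-in %: 4
print(true_mod(-1, 5))  # 4, matching the periodic-translation definition
# In C, -1 % 5 evaluates to -1, which would index out of bounds here.
```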
```
import numpy as np

def ptrans(f,t):
    g = np.empty_like(f)
    if f.ndim == 1:
        W = f.shape[0]
        col = np.arange(W)
        g = f[(col-t)%W]
    elif f.ndim == 2:
        H,W = f.shape
        rr,cc = t
        row,col = np.indices(f.shape)
        g = f[(row-rr)%H, (col-cc)%W]
    elif f.ndim == 3:
        Z,H,W = f.shape
        zz,rr,cc = t
        z,row,col = np.indices(f.shape)
        g = f[(z-zz)%Z, (row-rr)%H, (col-cc)%W]
    return g

# implementation using periodic convolution (requires ia.pconv, imported below)
def ptrans2(f, t):
    f, t = np.asarray(f), np.asarray(t).astype('int32')
    h = np.zeros(2*np.abs(t) + 1)
    t = t + np.abs(t)
    h[tuple(t)] = 1
    g = ia.pconv(f, h)
    return g

def ptrans2d(f,t):
    rr,cc = t
    H,W = f.shape
    r = rr%H
    c = cc%W
    g = np.empty_like(f)
    g[:r,:c] = f[H-r:H,W-c:W]
    g[:r,c:] = f[H-r:H,0:W-c]
    g[r:,:c] = f[0:H-r,W-c:W]
    g[r:,c:] = f[0:H-r,0:W-c]
    return g
```
## Examples
```
testing = (__name__ == '__main__')
if testing:
! jupyter nbconvert --to python ptrans.ipynb
import numpy as np
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
```
### Example 1
Numeric examples in 2D and 3D.
```
if testing:
# 2D example
f = np.arange(15).reshape(3,5)
print("Original 2D image:\n",f,"\n\n")
print("Image translated by (0,0):\n",ia.ptrans(f, (0,0)).astype(int),"\n\n")
print("Image translated by (0,1):\n",ia.ptrans(f, (0,1)).astype(int),"\n\n")
print("Image translated by (-1,2):\n",ia.ptrans(f, (-1,2)).astype(int),"\n\n")
if testing:
# 3D example
f1 = np.arange(60).reshape(3,4,5)
print("Original 3D image:\n",f1,"\n\n")
print("Image translated by (0,0,0):\n",ia.ptrans(f1, (0,0,0)).astype(int),"\n\n")
print("Image translated by (0,1,0):\n",ia.ptrans(f1, (0,1,0)).astype(int),"\n\n")
print("Image translated by (-1,3,2):\n",ia.ptrans(f1, (-1,3,2)).astype(int),"\n\n")
```
### Example 2
Image examples in 2D
```
if testing:
# 2D example
f = mpimg.imread('../data/cameraman.tif')
plt.imshow(f,cmap='gray'), plt.title('Original 2D image - Cameraman')
plt.imshow(ia.ptrans(f, np.array(f.shape)//3),cmap='gray'), plt.title('Cameraman periodically translated')
```
## Equation
For 2D case we have
$$ \begin{matrix}
t &=& (t_r, t_c),\\
g = f_t &=& f_{t_r,t_c},\\
g(rr,cc) &=& f((rr-t_r)\ mod\ H, (cc-t_c)\ mod\ W), \quad 0 \leq rr < H,\ 0 \leq cc < W,\\
\mbox{where} & & \\
a\ mod\ N &=& (a + k N)\ mod\ N, \quad k \in \mathbb{Z}.
\end{matrix} $$
The equation above can be extended to n-dimensional space.
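In fact, for any number of dimensions the equation above is exactly what `numpy.roll` computes, which gives a quick independent cross-check of the `ptrans` definition:

```python
import numpy as np

f = np.arange(15).reshape(3, 5)
t = (-1, 2)
# g[r, c] == f[(r - t_r) % H, (c - t_c) % W], per the equation above
g = np.roll(f, shift=t, axis=(0, 1))
row, col = np.indices(f.shape)
same = (g == f[(row - t[0]) % 3, (col - t[1]) % 5]).all()
```

This matches the numeric 2D example earlier, where `ptrans(f, (-1, 2))` shifts rows up by one and columns right by two with wrap-around.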
```
if testing:
print('testing ptrans')
f = np.array([[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15]],'uint8')
print(repr(ia.ptrans(f, [-1,2]).astype(np.uint8)) == repr(np.array(
[[ 9, 10, 6, 7, 8],
[14, 15, 11, 12, 13],
[ 4, 5, 1, 2, 3]],'uint8')))
```
## Contributions
- Roberto A Lotufo, Sept 2013, converted to index computation
- André Luis da Costa, 1st semester 2011
# Initialize Folders
```
from __future__ import print_function
from imutils import paths
import os
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import keras
import tensorflow as tf
from keras.preprocessing import image as image_utils
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import array_to_img
from keras.preprocessing.image import ImageDataGenerator
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard
from keras import backend as K
from keras.layers.advanced_activations import LeakyReLU, PReLU
#import skimage.io as io
#import skimage.transform as trans
#K.set_image_data_format("channels_last")
%matplotlib inline
%load_ext autoreload
%autoreload 2
root_dir = './data'
training_data_dir = os.path.join(root_dir, 'train/images')
training_data_mask_dir = os.path.join(root_dir, 'train/masks')
val_data_dir = os.path.join(root_dir, 'val/images')
val_data_pred_dir = os.path.join(root_dir, 'val/predict')
val_data_mask_dir = os.path.join(root_dir, 'val/masks')
test_data_dir = os.path.join(root_dir, 'test/images')
test_data_pred_dir = os.path.join(root_dir, 'test/predict')
test_data_mask_dir = os.path.join(root_dir, 'test/masks')
model_localtion = "attention_unet_lesion.hdf5"
result_file = "attention_unet_test_result.csv"
img_rows = 256
img_cols = 256
```
# Define Loss function
```
def jaccard_coef(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) - intersection + smooth)
def jaccard_coef_loss(y_true, y_pred):
j = -jaccard_coef(y_true, y_pred)
return j
# DO NOT USE
def normalizeData_rgb(img,mask):
for i in range(3):
mean = np.mean(img[:,:,i]) # mean for data centering
std = np.std(img[:,:,i]) # std for data normalization
img[:,:,i] -= mean
img[:,:,i] /= std
mask = mask /255
mask[mask > 0.5] = 1
mask[mask <= 0.5] = 0
return (img,mask)
def normalizeData(img,mask):
mean = np.mean(img) # mean for data centering
std = np.std(img) # std for data normalization
img -= mean
img /= std
mask = mask /255
mask[mask > 0.5] = 1
mask[mask <= 0.5] = 0
return (img,mask)
def trainGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "rgb",
mask_color_mode = "grayscale",image_save_prefix = "image",mask_save_prefix = "mask",
flag_multi_class = False,num_class = 2,save_to_dir = None,target_size = (256,256),seed = 1):
'''
can generate image and mask at the same time
use the same seed for image_datagen and mask_datagen to ensure the transformation for image and mask is the same
if you want to visualize the results of generator, set save_to_dir = "your path"
'''
image_datagen = ImageDataGenerator(**aug_dict)
mask_datagen = ImageDataGenerator(**aug_dict)
image_generator = image_datagen.flow_from_directory(
train_path,
classes = [image_folder],
class_mode = None,
color_mode = image_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = image_save_prefix,
seed = seed)
mask_generator = mask_datagen.flow_from_directory(
train_path,
classes = [mask_folder],
class_mode = None,
color_mode = mask_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = mask_save_prefix,
seed = seed)
train_generator = zip(image_generator, mask_generator)
for (img,mask) in train_generator:
#img,mask = normalizeData_rgb(img,mask)
img, mask = normalizeData(img, mask)
yield (img,mask)
def validationGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "rgb",
mask_color_mode = "grayscale",image_save_prefix = "image",mask_save_prefix = "mask",
flag_multi_class = False,num_class = 2,save_to_dir = None,target_size = (256,256),seed = 1):
'''
can generate image and mask at the same time
use the same seed for image_datagen and mask_datagen to ensure the transformation for image and mask is the same
if you want to visualize the results of generator, set save_to_dir = "your path"
'''
image_datagen = ImageDataGenerator(**aug_dict)
mask_datagen = ImageDataGenerator(**aug_dict)
image_generator = image_datagen.flow_from_directory(
train_path,
classes = [image_folder],
class_mode = None,
color_mode = image_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = image_save_prefix,
seed = seed)
mask_generator = mask_datagen.flow_from_directory(
train_path,
classes = [mask_folder],
class_mode = None,
color_mode = mask_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = mask_save_prefix,
seed = seed)
train_generator = zip(image_generator, mask_generator)
for (img,mask) in train_generator:
#img,mask = normalizeData_rgb(img,mask)
img, mask = normalizeData(img, mask)
yield (img,mask)
def BiggerLeakyUnetModel():
inputs = Input((img_rows, img_cols,3))
conv1 = Conv2D(32, (3, 3), padding="same")(inputs)
acti1 = LeakyReLU(alpha=0.001)(conv1)
conv1 = Conv2D(32, (3, 3), padding="same")(acti1)
acti1 = LeakyReLU(alpha=0.001)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(acti1)
conv2 = Conv2D(64, (3, 3), padding="same")(pool1)
acti2 = LeakyReLU(alpha=0.001)(conv2)
conv2 = Conv2D(64, (3, 3), padding="same")(acti2)
acti2 = LeakyReLU(alpha=0.001)(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(acti2)
conv3 = Conv2D(128, (3, 3), padding="same")(pool2)
acti3 = LeakyReLU(alpha=0.001)(conv3)
conv3 = Conv2D(128, (3, 3), padding="same")(acti3)
acti3 = LeakyReLU(alpha=0.001)(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(acti3)
conv4 = Conv2D(256, (3, 3), padding="same")(pool3)
acti4 = LeakyReLU(alpha=0.001)(conv4)
conv4 = Conv2D(256, (3, 3), padding="same")(acti4)
acti4 = LeakyReLU(alpha=0.001)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(acti4)
conv5 = Conv2D(512, (3, 3), padding="same")(pool4)
acti5 = LeakyReLU(alpha=0.001)(conv5)
conv5 = Conv2D(512, (3, 3), padding="same")(acti5)
acti5 = LeakyReLU(alpha=0.001)(conv5)
pool5 = MaxPooling2D(pool_size=(2, 2))(acti5)
conv6 = Conv2D(1024, (3, 3), padding="same")(pool5)
acti6 = LeakyReLU(alpha=0.001)(conv6)
conv6 = Conv2D(1024, (3, 3), padding="same")(acti6)
acti6 = LeakyReLU(alpha=0.001)(conv6)
pool6 = MaxPooling2D(pool_size=(2, 2))(acti6)
conv7 = Conv2D(2048, (3, 3), padding="same")(pool6)
acti7 = LeakyReLU(alpha=0.001)(conv7)
conv7 = Conv2D(2048, (3, 3), padding="same")(acti7)
acti7 = LeakyReLU(alpha=0.001)(conv7)
right_up6 = concatenate([UpSampling2D(size=(2, 2))(acti7), acti6], axis=3)
right_conv6 = Conv2D(512, (3, 3), padding="same")(right_up6)
right_acti6 = LeakyReLU(alpha=0.001)(right_conv6)
right_conv6 = Conv2D(512, (3, 3), padding="same")(right_acti6)
right_acti6 = LeakyReLU(alpha=0.001)(right_conv6)
right_up5 = concatenate([UpSampling2D(size=(2, 2))(right_acti6), acti5], axis=3)
right_conv5 = Conv2D(512, (3, 3), padding="same")(right_up5)
right_acti5 = LeakyReLU(alpha=0.001)(right_conv5)
right_conv5 = Conv2D(512, (3, 3), padding="same")(right_acti5)
right_acti5 = LeakyReLU(alpha=0.001)(right_conv5)
right_up4 = concatenate([UpSampling2D(size=(2, 2))(right_acti5), acti4], axis=3)
right_conv4 = Conv2D(256, (3, 3), padding="same")(right_up4)
right_acti4 = LeakyReLU(alpha=0.001)(right_conv4)
right_conv4 = Conv2D(256, (3, 3), padding="same")(right_acti4)
right_acti4 = LeakyReLU(alpha=0.001)(right_conv4)
right_up3 = concatenate([UpSampling2D(size=(2, 2))(right_acti4), acti3], axis=3)
right_conv3 = Conv2D(128, (3, 3), padding="same")(right_up3)
right_acti3 = LeakyReLU(alpha=0.001)(right_conv3)
right_conv3 = Conv2D(128, (3, 3), padding="same")(right_acti3)
right_acti3 = LeakyReLU(alpha=0.001)(right_conv3)
right_up2 = concatenate([UpSampling2D(size=(2, 2))(right_acti3), acti2], axis=3)
right_conv2 = Conv2D(64, (3, 3), padding="same")(right_up2)
right_acti2 = LeakyReLU(alpha=0.001)(right_conv2)
right_conv2 = Conv2D(64, (3, 3), padding="same")(right_acti2)
right_acti2 = LeakyReLU(alpha=0.001)(right_conv2)
right_up1 = concatenate([UpSampling2D(size=(2, 2))(right_acti2), acti1], axis=3)
right_conv1 = Conv2D(32, (3, 3), padding="same")(right_up1)
right_acti1 = LeakyReLU(alpha=0.001)(right_conv1)
right_conv1 = Conv2D(32, (3, 3), padding="same")(right_acti1)
right_acti1 = LeakyReLU(alpha=0.001)(right_conv1)
output = Conv2D(1, (1, 1), activation='sigmoid')(right_acti1)
    model = Model(inputs=inputs, outputs=output)
model.compile(optimizer=Adam(lr=5e-5), loss=jaccard_coef_loss, metrics=[jaccard_coef])
return model
```
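The `jaccard_coef` defined at the top of the cell above can be sanity-checked with plain NumPy, using the same smoothing term and the same flatten-everything reduction:

```python
import numpy as np

def jaccard_np(y_true, y_pred, smooth=1.0):
    """Smoothed Jaccard (IoU) coefficient, mirroring the Keras jaccard_coef."""
    y_true, y_pred = y_true.ravel(), y_pred.ravel()
    intersection = np.sum(y_true * y_pred)
    return (intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) - intersection + smooth)

a = np.array([1.0, 1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0, 0.0])
score = jaccard_np(a, b)  # intersection=1, union=2 -> (1 + 1) / (2 + 1) = 2/3
```

Note the smoothing term makes the score well-defined (and equal to 1) for two all-zero masks, and the loss used for training is simply the negated coefficient.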
# Define UNet Network
```
def conv_block(prevlayer, ch_out, prefix, strides=(1, 1)):
    conv = Conv2D(ch_out, (3, 3), padding="same", kernel_initializer="he_normal", strides=strides, name=prefix + "1_conv")(prevlayer)
    conv = BatchNormalization(name=prefix + "1_bn")(conv)
    conv = LeakyReLU(alpha=0.001, name=prefix + "1_activation")(conv)
    conv = Conv2D(ch_out, (3, 3), padding="same", kernel_initializer="he_normal", strides=strides, name=prefix + "2_conv")(conv)
    conv = BatchNormalization(name=prefix + "2_bn")(conv)
    conv = LeakyReLU(alpha=0.001, name=prefix + "2_activation")(conv)
    return conv
def double_block(prevlayer, filters, prefix, strides=(1, 1)):
layer1 = conv_block(prevlayer, filters, prefix+"1", strides)
layer2 = conv_block(layer1 , filters, prefix+"2", strides)
return layer2
def up_sampling_block(up_sampling_layer, left_skip_layer, filters, prefix, strides = (1,1)):
    up_layer = concatenate([UpSampling2D(size=(2, 2))(up_sampling_layer), left_skip_layer], axis=3)
    double_block_layer = double_block(up_layer, filters, prefix, strides)
    return double_block_layer
def attention_block(direct_layer, left_skip_layer, filters, prefix, kernel=(1,1), strides=(1,1)):
conv_skip = Conv2D(filters, kernel, kernel_initializer="he_normal", strides=strides, name=prefix + "skip_conv")(left_skip_layer)
# TO Try: Add batch norm
conv_direct = Conv2D(filters, kernel, kernel_initializer="he_normal", strides=strides, name=prefix + "direct_conv")(direct_layer)
# TO Try: Add batch norm
psi = add([conv_skip, conv_direct])
psi = LeakyReLU(alpha=0.001, name=prefix + "_relu")(psi)
psi = Conv2D(1, kernel, kernel_initializer="glorot_normal", activation="sigmoid", strides=strides, name=prefix + "_att")(psi)
return multiply([left_skip_layer, psi])
def AttentionUnetModel():
# 3 - RGB
# Based on BiggerLeakyUnetModel
inputs = Input((img_rows, img_cols,3))
conv1 = Conv2D(32, (3, 3), padding="same")(inputs)
acti1 = LeakyReLU(alpha=0.001)(conv1)
conv1 = Conv2D(32, (3, 3), padding="same")(acti1)
acti1 = LeakyReLU(alpha=0.001)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(acti1)
conv2 = Conv2D(64, (3, 3), padding="same")(pool1)
acti2 = LeakyReLU(alpha=0.001)(conv2)
conv2 = Conv2D(64, (3, 3), padding="same")(acti2)
acti2 = LeakyReLU(alpha=0.001)(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(acti2)
conv3 = Conv2D(128, (3, 3), padding="same")(pool2)
acti3 = LeakyReLU(alpha=0.001)(conv3)
conv3 = Conv2D(128, (3, 3), padding="same")(acti3)
acti3 = LeakyReLU(alpha=0.001)(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(acti3)
conv4 = Conv2D(256, (3, 3), padding="same")(pool3)
acti4 = LeakyReLU(alpha=0.001)(conv4)
conv4 = Conv2D(256, (3, 3), padding="same")(acti4)
acti4 = LeakyReLU(alpha=0.001)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(acti4)
conv5 = Conv2D(512, (3, 3), padding="same")(pool4)
acti5 = LeakyReLU(alpha=0.001)(conv5)
conv5 = Conv2D(512, (3, 3), padding="same")(acti5)
acti5 = LeakyReLU(alpha=0.001)(conv5)
pool5 = MaxPooling2D(pool_size=(2, 2))(acti5)
conv6 = Conv2D(1024, (3, 3), padding="same")(pool5)
acti6 = LeakyReLU(alpha=0.001)(conv6)
conv6 = Conv2D(1024, (3, 3), padding="same")(acti6)
acti6 = LeakyReLU(alpha=0.001)(conv6)
pool6 = MaxPooling2D(pool_size=(2, 2))(acti6)
conv7 = Conv2D(2048, (3, 3), padding="same")(pool6)
acti7 = LeakyReLU(alpha=0.001)(conv7)
conv7 = Conv2D(2048, (3, 3), padding="same")(acti7)
acti7 = LeakyReLU(alpha=0.001)(conv7)
right_acti7 = UpSampling2D(size=(2, 2))(acti7)
right_att6 = attention_block(right_acti7, acti6, 1024, "att6")
right_up6 = concatenate([right_att6, right_acti7], axis=3)
right_conv6 = Conv2D(1024, (3, 3), padding="same")(right_up6)
right_acti6 = LeakyReLU(alpha=0.001)(right_conv6)
right_conv6 = Conv2D(1024, (3, 3), padding="same")(right_acti6)
right_acti6 = LeakyReLU(alpha=0.001)(right_conv6)
right_acti6 = UpSampling2D(size=(2, 2))(right_acti6)
right_att5 = attention_block(right_acti6, acti5, 512, "att5")
right_up5 = concatenate([right_att5, right_acti6], axis=3)
right_conv5 = Conv2D(512, (3, 3), padding="same")(right_up5)
right_acti5 = LeakyReLU(alpha=0.001)(right_conv5)
right_conv5 = Conv2D(512, (3, 3), padding="same")(right_acti5)
right_acti5 = LeakyReLU(alpha=0.001)(right_conv5)
right_acti5 = UpSampling2D(size=(2, 2))(right_acti5)
right_att4 = attention_block(right_acti5, acti4, 256, "att4")
right_up4 = concatenate([right_att4, right_acti5], axis=3)
right_conv4 = Conv2D(256, (3, 3), padding="same")(right_up4)
right_acti4 = LeakyReLU(alpha=0.001)(right_conv4)
right_conv4 = Conv2D(256, (3, 3), padding="same")(right_acti4)
right_acti4 = LeakyReLU(alpha=0.001)(right_conv4)
right_acti4 = UpSampling2D(size=(2, 2))(right_acti4)
right_att3 = attention_block(right_acti4, acti3, 128, "att3")
right_up3 = concatenate([right_att3, right_acti4], axis=3)
right_conv3 = Conv2D(128, (3, 3), padding="same")(right_up3)
right_acti3 = LeakyReLU(alpha=0.001)(right_conv3)
right_conv3 = Conv2D(128, (3, 3), padding="same")(right_acti3)
right_acti3 = LeakyReLU(alpha=0.001)(right_conv3)
right_acti3 = UpSampling2D(size=(2, 2))(right_acti3)
right_att2 = attention_block(right_acti3, acti2, 64, "att2")
right_up2 = concatenate([right_att2, right_acti3], axis=3)
right_conv2 = Conv2D(64, (3, 3), padding="same")(right_up2)
right_acti2 = LeakyReLU(alpha=0.001)(right_conv2)
right_conv2 = Conv2D(64, (3, 3), padding="same")(right_acti2)
right_acti2 = LeakyReLU(alpha=0.001)(right_conv2)
right_acti2 = UpSampling2D(size=(2, 2))(right_acti2)
right_att1 = attention_block(right_acti2, acti1, 32, "att1")
right_up1 = concatenate([right_att1, right_acti2], axis=3)
right_conv1 = Conv2D(32, (3, 3), padding="same")(right_up1)
right_acti1 = LeakyReLU(alpha=0.001)(right_conv1)
right_conv1 = Conv2D(32, (3, 3), padding="same")(right_acti1)
right_acti1 = LeakyReLU(alpha=0.001)(right_conv1)
output = Conv2D(1, (1, 1), activation='sigmoid')(right_acti1)
model = Model(inputs=inputs, outputs=output)
model.compile(optimizer=Adam(lr=5e-5), loss=jaccard_coef_loss, metrics=[jaccard_coef])
return model
#Training data generation
data_gen_args = dict(
# samplewise_center = True,
# samplewise_std_normalization = True,
rotation_range=180,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05,
zoom_range=0.05,
horizontal_flip=True,
vertical_flip = True,
fill_mode='nearest')
#Validation data generation
data_val_gen_args = dict(
#samplewise_center = True,
#samplewise_std_normalization = True
)
#Create UNet Model
#model = FullUnetModel()
#model = UnetModel()
model = AttentionUnetModel()
model.summary()
#model = BiggerLeakyUnetModel()
#Setup generator
batch_size = 2
myGene = trainGenerator(batch_size,'data/train','images','masks',data_gen_args)
myValGene = validationGenerator(batch_size,'data/val','images','masks',data_val_gen_args)
#Setup Checkpoint to only capture best estimate
model_checkpoint = ModelCheckpoint(model_localtion, monitor='loss',verbose=1, save_best_only=True)
#Enable tensorboard
tensorBoard = TensorBoard(
log_dir='./logs',
histogram_freq=0,
batch_size=batch_size,
write_graph=True,
write_grads=False,
write_images=True,
embeddings_freq=0)
#Train
history = model.fit_generator(
myGene,
steps_per_epoch = 1000,
epochs=100,
callbacks=[model_checkpoint,tensorBoard],
validation_data=myValGene,
validation_steps=300)
#Continue training
#Use initial_epoch
'''
history2 = model.fit_generator(
myGene,
steps_per_epoch = 1000,
epochs=100,
callbacks=[model_checkpoint,tensorBoard],
initial_epoch = 125,
validation_data=myValGene,
validation_steps=200)
'''
# Plot training & validation Jaccard coefficient values
print(history.history['val_loss'])
plt.plot(history.history['jaccard_coef'])
plt.plot(history.history['val_jaccard_coef'])
plt.title('Jaccard Coefficient')
plt.ylabel('Jaccard Coefficient')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
file_names = next(os.walk(test_data_dir))[2]
model = AttentionUnetModel()
model.load_weights(model_localtion)
help(model)
for file in file_names:
print(file)
grey_img = load_img(os.path.join(test_data_dir,file), target_size=(img_rows, img_cols), grayscale=False)
img = img_to_array(grey_img)
mean = np.mean(img) # mean for data centering
std = np.std(img) # std for data normalization
img -= mean
img /= std
img = np.reshape(img,(1,)+img.shape)
results = model.predict(img)
result_img = array_to_img(results[0] * 255 )
plt.imshow(result_img)
result_img.save(os.path.join(test_data_pred_dir, file.split('.')[0] + '_predict.jpg'))
model.summary()
def predictTestSet(model_location):
model.load_weights(model_location)
file_names = next(os.walk(test_data_dir))[2]
scores = []
for file in file_names:
print (file)
grey_img = load_img(os.path.join(test_data_dir,file), target_size=(img_rows, img_cols), grayscale=False)
mask_img = load_img(os.path.join(test_data_mask_dir,file.split('.')[0]+"_segmentation.png"),
target_size=(img_rows, img_cols), grayscale=True)
img = img_to_array(grey_img)
img_mask = img_to_array(mask_img)
#Preprocess image mask
#img_mask = img_mask /255
#img_mask[img_mask > 0.5] = 1
#img_mask[img_mask <= 0.5] = 0
#Preprocess images
#mean = np.mean(img) # mean for data centering
#std = np.std(img) # std for data normalization
#img -= mean
#img /= std
#img, img_mask= normalizeData_rgb(img, img_mask)
img, img_mask = normalizeData(img, img_mask)
img = np.reshape(img,(1,)+img.shape)
pred = model.predict([img])
sess = tf.Session()
score = sess.run(jaccard_coef(img_mask, pred))
print("{} -- jaccard index: {}".format(file,score))
scores.append([file,score])
result_img = array_to_img(pred[0] * 255 )
result_img.save(os.path.join(test_data_pred_dir, file.split('.')[0] + '_predict.jpg'))
with open(result_file, 'w') as f:
f.write("filename, jaccard_index\n")
for i in range(len(scores)):
#print(scores[i])
f.write("{},{}\n".format(scores[i][0], scores[i][1]))
predictTestSet(model_localtion)
file_names = next(os.walk(test_data_dir))[2]
image = file_names[1]
fig = plt.figure()
a = fig.add_subplot(1, 4, 1)
imgplot = plt.imshow(load_img(os.path.join(test_data_dir,image)))
a.set_title('Image')
a.set_axis_off()
a = fig.add_subplot(1, 4, 2)
imgplot = plt.imshow(load_img(os.path.join(test_data_mask_dir,image.split('.')[0]+"_segmentation.png")))
a.set_title('Mask')
a.set_axis_off()
a = fig.add_subplot(1, 4, 3)
imgplot = plt.imshow(load_img(os.path.join(test_data_pred_dir,image.split('.')[0]+"_predict.jpg")))
a.set_title('Prediction')
a.set_axis_off()
model.output
with open(result_file,"r") as f:
data = f.readlines()
scores = []
for line in data[1:]:
result = line.split(',')
scores.append([result[0], result[1][:-1]])
scores = np.array(scores)
sorted_scores = scores[scores[:,1].argsort()]
print('Test Jaccard Index:{}'.format(scores[:,1].astype(float).mean()))
low_scores = scores[scores[:,1].astype(float)<0.65]
low_scores = low_scores[low_scores[:,1].argsort()]
print('Total number of low prediction (jaccard index < 0.65):{}'.format(len(low_scores)) )
print('Threshold Jaccard Index:{}'.format(np.sum(scores[scores[:,1].astype(float)>0.65][:,1].astype(float)) / len(scores)))
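# Threshold Jaccard sketch: predictions scoring below 0.65 contribute 0,
# so the metric averages only the above-threshold scores over all images
# (toy scores for illustration):
import numpy as np
_demo_scores = np.array([0.9, 0.5, 0.8])
print(np.sum(_demo_scores[_demo_scores > 0.65]) / len(_demo_scores))  # ~0.567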
```
| github_jupyter |
# Recurrent Neural Networks
## Univariate Time Series Regression
This notebook demonstrates how to forecast the S&P 500 index using a Recurrent Neural Network.
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
import numpy as np
import pandas as pd
import pandas_datareader.data as web
from scipy.stats import spearmanr
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow import keras
import matplotlib.pyplot as plt
import seaborn as sns
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if gpu_devices:
print('Using GPU')
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
else:
print('Using CPU')
sns.set_style('whitegrid')
np.random.seed(42)
results_path = Path('results', 'univariate_time_series')
if not results_path.exists():
results_path.mkdir(parents=True)
```
## Get Data
We obtain data for 2010-2019 from the Federal Reserve Bank's Data Service [FRED](https://fred.stlouisfed.org/) using the [pandas_datareader](https://pandas-datareader.readthedocs.io/) library introduced in [Chapter 2 on Market and Fundamental Data](../02_market_and_fundamental_data).
```
sp500 = web.DataReader('SP500', 'fred', start='2010', end='2020').dropna()
ax = sp500.plot(title='S&P 500',
legend=False,
figsize=(14, 4),
rot=0)
ax.set_xlabel('')
sns.despine()
```
## Preprocessing
```
scaler = MinMaxScaler()
sp500_scaled = pd.Series(scaler.fit_transform(sp500).squeeze(),
index=sp500.index)
sp500_scaled.describe()
```
## Generating recurrent sequences from our time series
Our time series is a sequence of numbers indexed by time:
$$x_{0},x_{1},x_{2},...,x_{T}$$
where $\{x_t\}$ is the numerical value in period $t$ and $T$ is the total length of the series.
To apply an RNN for regression or classification, we use a sliding window to construct a rolling set of input/output pairs for our model to learn from, as animated below.
<img src="../assets/timeseries_windowing.gif" width=600 height=600/>
We will generate sequences of 63 trading days, approximately three months, and use a single LSTM layer with 20 hidden units to predict the index value one timestep ahead.
The input to every LSTM layer must have three dimensions, namely:
- **Samples**: One sequence is one sample. A batch contains one or more samples.
- **Time Steps**: One time step is one point of observation in the sample.
- **Features**: One feature is one observation at a time step.
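As a minimal sketch of these three dimensions (toy numbers, not the actual S&P 500 data), a 2-D array of windows is reshaped into the 3-D tensor the LSTM expects:

```
import numpy as np

# 8 samples, each a window of 5 time steps with a single feature
windows = np.arange(40.0).reshape(8, 5)    # 2-D: (samples, time steps)
lstm_input = windows.reshape(-1, 5, 1)     # 3-D: (samples, time steps, features)
print(lstm_input.shape)  # (8, 5, 1)
```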
Our S&P 500 sample has 2,264 observations or time steps. We will create overlapping sequences using a window of 63 observations each.
For a simpler window of size T = 5, we obtain input-output pairs as shown in the following table:
$$\begin{array}{c|c}
\text{Input} & \text{Output}\\
\hline {\langle x_1,x_2,x_3,x_4,x_5\rangle} & { x_6} \\
\ {\langle x_{2},x_{3},x_{4},x_{5},x_{6} \rangle } & {x_{7} } \\
{\vdots} & {\vdots}\\
{ \langle x_{T-5},x_{T-4},x_{T-3},x_{T-2},x_{T-1} \rangle } & {x_{T}}
\end{array}$$
Generally speaking, for window size S, the relationship takes the form
$$x_t = f( x_{t-1}, x_{t-2}, ..., x_{t-S}) \quad\forall t=S, S+1, ..., T$$
Each of the $T-S$ lagged input sequences is a vector of length $S$ with a corresponding scalar output.
We can use the function `create_univariate_rnn_data()` to stack sequences selected using a rolling window:
```
def create_univariate_rnn_data(data, window_size):
n = len(data)
y = data[window_size:]
data = data.values.reshape(-1, 1) # make 2D
X = np.hstack(tuple([data[i: n-j, :] for i, j in enumerate(range(window_size, 0, -1))]))
return pd.DataFrame(X, index=y.index), y
```
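As a quick sanity check, we can run the windowing logic on a toy series (the function is repeated here so the snippet is self-contained):

```
import numpy as np
import pandas as pd

def create_univariate_rnn_data(data, window_size):
    n = len(data)
    y = data[window_size:]
    data = data.values.reshape(-1, 1)  # make 2D
    X = np.hstack(tuple([data[i: n-j, :] for i, j in enumerate(range(window_size, 0, -1))]))
    return pd.DataFrame(X, index=y.index), y

toy = pd.Series(np.arange(10.0))
X_toy, y_toy = create_univariate_rnn_data(toy, window_size=3)
print(X_toy.shape)                             # (7, 3)
print(X_toy.iloc[0].tolist(), y_toy.iloc[0])   # [0.0, 1.0, 2.0] 3.0
```

The first row pairs the inputs $\langle x_0, x_1, x_2 \rangle$ with the output $x_3$, exactly as in the table above.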
We apply this function to the rescaled stock index for a window_size=63 to obtain a two-dimensional dataset of shape number of samples x number of timesteps:
```
window_size = 63
X, y = create_univariate_rnn_data(sp500_scaled, window_size=window_size)
X.head()
y.head()
X.shape
```
## Train-test split
To respect the time series nature of the data, we set aside the data at the end of the sample as hold-out or test set. More specifically, we'll use the data for 2019.
```
ax = sp500_scaled.plot(lw=2, figsize=(14, 4), rot=0)
ax.set_xlabel('')
sns.despine()
X_train = X[:'2018'].values.reshape(-1, window_size, 1)
y_train = y[:'2018']
# keep the last year for testing
X_test = X['2019'].values.reshape(-1, window_size, 1)
y_test = y['2019']
n_obs, window_size, n_features = X_train.shape
y_train.shape
```
## Keras LSTM Layer
Keras has several built-in RNN layers with various configuration options described in detail in the [documentation](https://keras.io/layers/recurrent/).
```
LSTM(units,
activation='tanh',
recurrent_activation='hard_sigmoid',
use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros',
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
implementation=1,
return_sequences=False,
return_state=False,
go_backwards=False,
stateful=False,
unroll=False)
```
## Define the Model Architecture
Having created input/output pairs out of our time series and cut this into training/testing sets, we can now begin setting up our RNN. We use Keras to quickly build a two hidden layer RNN of the following specifications
- layer 1 uses an LSTM module with 20 hidden units (note here the input_shape = (window_size,1))
- layer 2 uses a fully connected module with one unit
- the 'mean_squared_error' loss should be used (remember: we are performing regression here)
This can be constructed using just a few lines - see e.g., the [general Keras documentation](https://keras.io/getting-started/sequential-model-guide/) and the [LSTM documentation in particular](https://keras.io/layers/recurrent/) for examples of how to quickly use Keras to build neural network models. Make sure you are initializing your optimizer given the [keras-recommended approach for RNNs](https://keras.io/optimizers/)
```
rnn = Sequential([
LSTM(units=20,
input_shape=(window_size, n_features), name='LSTM'),
Dense(1, name='Output')
])
```
The summary shows that the model has 1,781 parameters:
```
rnn.summary()
```
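This count can be checked by hand: an LSTM layer has $4 \times \big(units \times (units + features + 1)\big)$ weights across its four gates, plus the dense output layer. A back-of-the-envelope sketch (assuming 20 units and a single feature, as specified above):

```
units, n_features = 20, 1
lstm_params = 4 * (units * (units + n_features + 1))  # 4 gates: input, forget, cell, output
dense_params = units + 1                              # Dense(1): weights + bias
print(lstm_params + dense_params)  # 1781
```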
## Train the Model
We train the model using the RMSProp optimizer recommended for RNN with default settings and compile the model with mean squared error for this regression problem:
```
optimizer = keras.optimizers.RMSprop(lr=0.001,
rho=0.9,
epsilon=1e-08,
decay=0.0)
rnn.compile(loss='mean_squared_error',
optimizer=optimizer)
```
We define an `EarlyStopping` callback and train the model for up to 150 epochs.
```
rnn_path = (results_path / 'rnn.h5').as_posix()
checkpointer = ModelCheckpoint(filepath=rnn_path,
verbose=1,
monitor='val_loss',
save_best_only=True)
early_stopping = EarlyStopping(monitor='val_loss',
patience=20,
restore_best_weights=True)
lstm_training = rnn.fit(X_train,
y_train,
epochs=150,
batch_size=20,
shuffle=True,
validation_data=(X_test, y_test),
callbacks=[early_stopping, checkpointer],
verbose=1)
```
Training stops after 51 epochs; the `early_stopping` callback restores the weights for the best model (after 41 epochs).
## Evaluate model performance
```
fig, ax = plt.subplots(figsize=(12, 4))
loss_history = pd.DataFrame(lstm_training.history).pow(.5)
loss_history.index += 1
best_rmse = loss_history.val_loss.min()
best_epoch = loss_history.val_loss.idxmin()
title = f'5-Epoch Rolling RMSE (Best Validation RMSE: {best_rmse:.4%})'
loss_history.columns=['Training RMSE', 'Validation RMSE']
loss_history.rolling(5).mean().plot(logy=True, lw=2, title=title, ax=ax)
ax.axvline(best_epoch, ls='--', lw=1, c='k')
sns.despine()
fig.tight_layout()
fig.savefig(results_path / 'rnn_sp500_error', dpi=300);
train_rmse_scaled = np.sqrt(rnn.evaluate(X_train, y_train, verbose=0))
test_rmse_scaled = np.sqrt(rnn.evaluate(X_test, y_test, verbose=0))
print(f'Train RMSE: {train_rmse_scaled:.4f} | Test RMSE: {test_rmse_scaled:.4f}')
train_predict_scaled = rnn.predict(X_train)
test_predict_scaled = rnn.predict(X_test)
train_ic = spearmanr(y_train, train_predict_scaled)[0]
test_ic = spearmanr(y_test, test_predict_scaled)[0]
print(f'Train IC: {train_ic:.4f} | Test IC: {test_ic:.4f}')
```
### Rescale predictions
```
train_predict = pd.Series(scaler.inverse_transform(train_predict_scaled).squeeze(), index=y_train.index)
test_predict = (pd.Series(scaler.inverse_transform(test_predict_scaled)
.squeeze(),
index=y_test.index))
y_train_rescaled = scaler.inverse_transform(y_train.to_frame()).squeeze()
y_test_rescaled = scaler.inverse_transform(y_test.to_frame()).squeeze()
train_rmse = np.sqrt(mean_squared_error(train_predict, y_train_rescaled))
test_rmse = np.sqrt(mean_squared_error(test_predict, y_test_rescaled))
f'Train RMSE: {train_rmse:.2f} | Test RMSE: {test_rmse:.2f}'
sp500['Train Predictions'] = train_predict
sp500['Test Predictions'] = test_predict
sp500 = sp500.join(train_predict.to_frame('predictions').assign(data='Train')
.append(test_predict.to_frame('predictions').assign(data='Test')))
```
### Plot Results
```
fig=plt.figure(figsize=(14,7))
ax1 = plt.subplot(221)
sp500.loc['2015':, 'SP500'].plot(lw=4, ax=ax1, c='k')
sp500.loc['2015':, ['Test Predictions', 'Train Predictions']].plot(lw=1, ax=ax1, ls='--')
ax1.set_title('In- and Out-of-sample Predictions')
with sns.axes_style("white"):
ax3 = plt.subplot(223)
sns.scatterplot(x='SP500', y='predictions', data=sp500, hue='data', ax=ax3)
ax3.text(x=.02, y=.95, s=f'Test IC ={test_ic:.2%}', transform=ax3.transAxes)
ax3.text(x=.02, y=.87, s=f'Train IC={train_ic:.2%}', transform=ax3.transAxes)
ax3.set_title('Correlation')
ax3.legend(loc='lower right')
ax2 = plt.subplot(222)
ax4 = plt.subplot(224, sharex = ax2, sharey=ax2)
sns.distplot(train_predict.squeeze()- y_train_rescaled, ax=ax2)
ax2.set_title('Train Error')
ax2.text(x=.03, y=.92, s=f'Train RMSE ={train_rmse:.4f}', transform=ax2.transAxes)
sns.distplot(test_predict.squeeze()-y_test_rescaled, ax=ax4)
ax4.set_title('Test Error')
ax4.text(x=.03, y=.92, s=f'Test RMSE ={test_rmse:.4f}', transform=ax4.transAxes)
sns.despine()
fig.tight_layout()
fig.savefig(results_path / 'rnn_sp500_regression', dpi=300);
```
```
# Create training data for DeepCT
import pyspark
sc = pyspark.SparkContext()
sc
def map_to_queries(i):
import json
i = json.loads(i)
return (i['targetDocumentId'], [j['anchorText'] for j in i['anchorTextSample']])
def to_json(i):
import json
anchorText = []
for j in i[1]:
anchorText += j
return json.dumps({
'targetDocumentId': i[0],
'anchorText': anchorText
})
sc.textFile('ecir22-filtered-commoncrawl-2019-47-sample/*/*')\
.map(map_to_queries)\
.groupByKey()\
.map(to_json)\
.repartition(10)\
.saveAsTextFile('ecir22-filtered-commoncrawl-2019-47-deepct-sample.jsonl')
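# The Spark aggregation above can be sketched in plain Python on a tiny
# hypothetical sample (toy records for illustration, no Spark required):
import json as _json
_toy_lines = [
    _json.dumps({'targetDocumentId': 'D1', 'anchorTextSample': [{'anchorText': 'cats'}]}),
    _json.dumps({'targetDocumentId': 'D1', 'anchorTextSample': [{'anchorText': 'felines'}]}),
]
_grouped = {}
for _line in _toy_lines:
    _rec = _json.loads(_line)
    _grouped.setdefault(_rec['targetDocumentId'], []).extend(
        _j['anchorText'] for _j in _rec['anchorTextSample'])
print(_grouped)  # {'D1': ['cats', 'felines']}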
import spacy
spacy.__version__
#!python -m spacy download en_core_web_sm
# From https://github.com/grill-lab/trec-cast-tools/blob/master/src/main/python/passage_chunker.py
import re
import spacy
nlp = spacy.load("en_core_web_sm", exclude=["parser", "tagger", "ner", "attribute_ruler", "lemmatizer", "tok2vec"]) #->senter
nlp.enable_pipe("senter")
print(nlp.pipe_names)
class SpacyPassageChunker:
def __init__(self):
self.document = None
self.sentences = None
def sentence_tokenization(self, document):
self.document = parse_doc(document)
self.sentences = list(self.document.sents)
def create_passages(self, passage_size = 250):
passages = []
content_length = len(self.sentences)
sentences_word_count = [len([token for token in sentence]) for sentence in self.sentences]
current_idx = 0
current_passage_word_count = 0
current_passage = ''
sub_id = 0
for i in range(content_length):
if current_passage_word_count >= (passage_size * 0.67):
passages.append({
"body": current_passage,
"id": sub_id
})
current_passage = ''
current_passage_word_count = 0
current_idx = i
sub_id += 1
current_passage += self.sentences[i].text + ' '
current_passage_word_count += sentences_word_count[i]
current_passage = ' '.join(sentence.text for sentence in self.sentences[current_idx:])
passages.append({
"body": current_passage,
"id": sub_id
})
return passages
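# Rough sketch of the flush rule above without spaCy: sentences accumulate
# until the running word count reaches 0.67 * passage_size, then a passage
# is emitted (toy sentences, passage_size = 5 for illustration):
_toy_sents = ['a b c d', 'e f g', 'h i']
_toy_passages, _cur, _cnt = [], '', 0
for _s in _toy_sents:
    if _cnt >= 0.67 * 5:
        _toy_passages.append(_cur.strip())
        _cur, _cnt = '', 0
    _cur += _s + ' '
    _cnt += len(_s.split())
_toy_passages.append(_cur.strip())
print(_toy_passages)  # ['a b c d', 'e f g h i']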
def parse_doc(document):
return nlp(sanitize_document(document))
def sanitize_document(doc):
sanitized = re.compile('<.*?>')
return re.sub(sanitized, '', doc)
def tokens(doc):
return set([i for i in parse_doc(doc) if not i.is_punct])
print(tokens('Speed Reading 4 Kid: Kill Toenail Fungus Now!'))
import ir_datasets
ms_marco_docs = ir_datasets.load('msmarco-document').docs_store()
def passages(doc):
ret = [doc.title]
tmp = SpacyPassageChunker()
tmp.sentence_tokenization(doc.body)
ret += [i['body'] for i in tmp.create_passages(80)]
return [(i[0] +1, i[1]) for i in enumerate(ret)]
#passages('D1112106')
ids = []
with open(DIR + 'cc-2019-47-anchors.jsonl', 'r') as f:
for l in tqdm(f):
ids += [json.loads(l)['targetDocumentId']]
ids = set(ids)
import multiprocessing
pool = multiprocessing.Pool(processes=20)
def ms_marco_docs_iter():
import ir_datasets
iter = ir_datasets.load('msmarco-document').docs_iter()
for doc in iter:
if doc.doc_id in ids:
yield doc
# Pool.map cannot pickle lambdas, so use a module-level helper
def doc_id_and_passages(doc):
    return (doc.doc_id, passages(doc))
tmp = pool.map(doc_id_and_passages, ms_marco_docs_iter())
ms_marco_doc_to_passages = {i[0]: i[1] for i in tqdm(tmp)}
ms_marco_doc_to_passages['D1112106']
import json
from tqdm import tqdm
DIR = '/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/deep-ct/'
with open(DIR + 'cc-2019-47-anchors.jsonl', 'r') as f, open(DIR + 'deep-ct-training-data.jsonl', 'w') as out:
for l in tqdm(f):
l = json.loads(l)
anchors = l['anchorText']
for passage in passages(l['targetDocumentId']):
out.write(json.dumps({'doc': {'title': passage}, 'queries': anchors}) + '\n')
```
# Degree Preserving Edge Swaps
```
from graspologic.datasets import load_drosophila_right
from graspologic.models import EdgeSwapper
from graspologic.plot import heatmap
from graspologic.utils import binarize, symmetrize
import networkx as nx
from scipy.sparse import csr_matrix
```
`EdgeSwapper` is a class that performs degree-preserving edge swaps on networks. The distributions of graphs with a fixed degree sequence are known as configuration models, and these have extensive applications for analyzing network datasets. The current implementation works on simple graphs (unweighted, no self-loops) that are of type `np.ndarray` or `csr_matrix`.
To begin, we'll look at an example network, the _Drosophila melanogaster_ larva right mushroom body connectome from [Eichler et al. 2017](https://www.ncbi.nlm.nih.gov/pubmed/28796202).
Note: here we make the network undirected and unweighted for compatibility with the current
implementation.
```
#load the data
adj, labels = load_drosophila_right(return_labels=True)
adj = symmetrize(adj)
adj = binarize(adj)
_ = heatmap(adj,
inner_hier_labels=labels,
title='Drosophila right MB',
font_scale=1.5,
sort_nodes=True,
cbar=False)
```
Now, we'll use `EdgeSwapper` to perform 10,000 random degree-preserving edge swaps - this
will dramatically change the structure of the network but keep the degree of each node
the same.
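A single degree-preserving swap picks two edges $(a,b)$ and $(c,d)$ and rewires them to $(a,d)$ and $(c,b)$, which leaves every node's degree unchanged. A minimal pure-Python sketch of one such swap on a toy graph (for illustration only; this is not graspologic's implementation):

```
# adjacency sets for a toy simple graph with edges (0,1), (0,2), (2,3)
adj = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
deg_before = {v: len(nbrs) for v, nbrs in adj.items()}

# swap edges (0, 1) and (2, 3) -> (0, 3) and (2, 1)
# (valid because neither (0, 3) nor (2, 1) already exists)
for a, b in [(0, 1), (2, 3)]:
    adj[a].discard(b); adj[b].discard(a)
for a, b in [(0, 3), (2, 1)]:
    adj[a].add(b); adj[b].add(a)

print({v: len(nbrs) for v, nbrs in adj.items()} == deg_before)  # True
```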
```
swapper = EdgeSwapper(adj, seed=8888)
swapped_adj, _ = swapper.swap_edges(n_swaps=10000)
_ = heatmap(swapped_adj,
title='Drosophila right MB swapped',
font_scale=1.5,
sort_nodes=True,
inner_hier_labels=labels,
cbar=False)
```
We can see how the structure of the network above has changed: for example, there are
now many edges among "I" (input) neurons when there were none before.
We can verify that the degree of each node in the network has been preserved:
```
g = nx.from_numpy_array(adj)
swapped_g = nx.from_numpy_array(swapped_adj)
print(list(g.degree()) == list(swapped_g.degree()))
```
`EdgeSwapper` also works with `csr_matrix` adjacency representations.
```
swapper = EdgeSwapper(csr_matrix(adj), seed=8888)
swapped_adj, _ = swapper.swap_edges(n_swaps=1000)
g = nx.from_numpy_array(adj)
swapped_g = nx.from_scipy_sparse_array(swapped_adj)  # swapped_adj is a csr_matrix here
print(list(g.degree()) == list(swapped_g.degree()))
```
Often, degree-preserving edge swaps are used to sample a series of networks which resemble
the original network in degree, but are otherwise random. This distribution of networks
(sometimes called a configuration model) can be used to compare properties of the original
network to this null distribution in order to evaluate whether some property is more or
less prevalent in a given network than would be expected by chance. However, it is important
to know that in practice, it can be difficult to tell _how many_ edge swaps to perform
to find a new network which is independent from the one you started with.
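A null-model comparison along these lines can be sketched with networkx's own `double_edge_swap` (used here instead of `EdgeSwapper` purely for illustration; the statistic, graph, and swap counts are arbitrary choices):

```
import networkx as nx
import numpy as np

g0 = nx.erdos_renyi_graph(30, 0.2, seed=0)
observed = nx.transitivity(g0)

# build a small null distribution of the same statistic
null = []
for seed in range(20):
    g = g0.copy()
    nx.double_edge_swap(g, nswap=200, max_tries=10000, seed=seed)
    null.append(nx.transitivity(g))

print(f'observed={observed:.3f}, null mean={np.mean(null):.3f}')
```

Comparing `observed` to the spread of `null` indicates whether the original network's transitivity is unusual relative to degree-matched random graphs.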
```
from google.colab import drive
drive.mount('/gdrive')
import pandas as pd
import numpy as np
import os.path
f_name = "/gdrive/My Drive/ML:March2020/Participants/Balaji Ravindaran/face.csv"
# storing the data into a csv file
def write(name, data):
if os.path.isfile(f_name):
df = pd.read_csv(f_name, index_col = 0)
latest = pd.DataFrame(data, columns = map(str, range(7225)))
latest["name"] = name
df = pd.concat((df, latest), ignore_index = True, sort = False)
else:
# Providing range only because the data
# here is already flattened for when
# it was stored in f_list
df = pd.DataFrame(data, columns = map(str, range(7225)))
df["name"] = name
df.to_csv(f_name)
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode
def take_photo(filename='photo.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const capture = document.createElement('button');
capture.textContent = 'Capture';
div.appendChild(capture);
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
// Resize the output to fit the video element.
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
// Wait for Capture to be clicked.
await new Promise((resolve) => capture.onclick = resolve);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
data = eval_js('takePhoto({})'.format(quality))
binary = b64decode(data.split(',')[1])
with open(filename, 'wb') as f:
f.write(binary)
return filename
name = input("Enter your name: ")
name=name.lower()
from google.colab.patches import cv2_imshow
import cv2
classifier = cv2.CascadeClassifier("/gdrive/My Drive/ML:March2020/data/haarcascade_frontalface_default.xml")
# this classifier detects faces using the provided
# haarcascade_frontalface_default.xml model file
f_list = []
i=0
while i<10: #change for no.of times to take picture
i=i+1
filename = take_photo()
frame = cv2.imread(filename, cv2.IMREAD_UNCHANGED)
# converting the image into gray
# scale as it is easy for detection
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# detect multiscale, detects the face and its coordinates
faces = classifier.detectMultiScale(gray, 1.5, 5)
# this is used to detect the face which
# is closest to the web-cam on the first position
faces = sorted(faces, key = lambda x: x[2]*x[3], reverse = True)
# only the first detected face is used
faces = faces[:1]
# len(faces) is the number of
# faces showing in a frame
if len(faces) == 1:
# this is removing from tuple format
face = faces[0]
# storing the coordinates of the
# face in different variables
x, y, w, h = face
# this will show the face
# that is being detected
im_face = frame[y:y + h, x:x + w]
cv2_imshow(im_face)
if len(faces) == 1:
gray_face = cv2.cvtColor(im_face, cv2.COLOR_BGR2GRAY)
gray_face = cv2.resize(gray_face, (85, 85))
cv2_imshow(gray_face)
print(len(f_list), type(gray_face), gray_face.shape)
# append the flattened face pixels to f_list
f_list.append(gray_face.reshape(-1))
f_list
write(name, np.array(f_list))
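# Sanity check: each stored face is an 85x85 crop flattened to a
# 7225-dimensional row, matching the 7225 CSV columns used in write() above
import numpy as np
_demo_face = np.zeros((85, 85))
print(_demo_face.reshape(-1).shape)  # (7225,)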
```
# Non-stationary
```
from copy import deepcopy
import itertools
import numpy as np
import torch
from torch.optim import Adam
import pybulletgym
import gym
import time
import spinup.algos.pytorch.td3.core as core
from spinup.utils.logx import EpochLogger
class ReplayBuffer:
"""
A simple FIFO experience replay buffer for TD3 agents.
"""
def __init__(self, obs_dim, act_dim, size):
self.obs_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.obs2_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.act_buf = np.zeros(core.combined_shape(size, act_dim), dtype=np.float32)
self.rew_buf = np.zeros(size, dtype=np.float32)
self.done_buf = np.zeros(size, dtype=np.float32)
self.ptr, self.size, self.max_size = 0, 0, size
def store(self, obs, act, rew, next_obs, done):
self.obs_buf[self.ptr] = obs
self.obs2_buf[self.ptr] = next_obs
self.act_buf[self.ptr] = act
self.rew_buf[self.ptr] = rew
self.done_buf[self.ptr] = done
self.ptr = (self.ptr+1) % self.max_size
self.size = min(self.size+1, self.max_size)
def sample_batch(self, batch_size=32):
idxs = np.random.randint(0, self.size, size=batch_size)
batch = dict(obs=self.obs_buf[idxs],
obs2=self.obs2_buf[idxs],
act=self.act_buf[idxs],
rew=self.rew_buf[idxs],
done=self.done_buf[idxs])
return {k: torch.as_tensor(v, dtype=torch.float32) for k,v in batch.items()}
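# FIFO overwrite sketch (self-contained, numpy/torch not needed): once ptr
# wraps past max_size, store() overwrites the oldest transitions in place,
# while size saturates at max_size -- mirroring the pointer logic above.
_ptr, _size, _max = 0, 0, 3
for _ in range(5):  # 5 stores into a buffer of capacity 3
    _ptr = (_ptr + 1) % _max
    _size = min(_size + 1, _max)
print(_ptr, _size)  # 2 3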
class POMDPWrapper(gym.ObservationWrapper):
def __init__(self, env_name):
super().__init__(gym.make(env_name))
# Remove velocity info
# OpenAIGym
# 1. MuJoCo
if env_name == "HalfCheetah-v3" or env_name == "HalfCheetah-v2":
self.remain_obs_idx = np.arange(0, 8)
elif env_name == "Ant-v3" or env_name == "Ant-v2":
self.remain_obs_idx = list(np.arange(0, 13)) + list(np.arange(27, 111))
elif env_name == 'Walker2d-v3' or env_name == "Walker2d-v2":
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'Hopper-v3' or env_name == "Hopper-v2":
self.remain_obs_idx = np.arange(0, 5)
elif env_name == "InvertedPendulum-v2":
self.remain_obs_idx = np.arange(0, 2)
elif env_name == "InvertedDoublePendulum-v2":
self.remain_obs_idx = list(np.arange(0, 5)) + list(np.arange(8, 11))
elif env_name == "Swimmer-v3" or env_name == "Swimmer-v2":
self.remain_obs_idx = np.arange(0, 3)
elif env_name == "Thrower-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Striker-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Pusher-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Reacher-v2":
self.remain_obs_idx = list(np.arange(0, 6)) + list(np.arange(8, 11))
elif env_name == 'Humanoid-v3' or env_name == "Humanoid-v2":
self.remain_obs_idx = list(np.arange(0, 22)) + list(np.arange(45, 185)) + list(np.arange(269, 376))
elif env_name == 'HumanoidStandup-v2':
self.remain_obs_idx = list(np.arange(0, 22)) + list(np.arange(45, 185)) + list(np.arange(269, 376))
# PyBulletGym
# 1. MuJoCo
elif env_name == 'HalfCheetahMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'AntMuJoCoEnv-v0':
self.remain_obs_idx = list(np.arange(0, 13)) + list(np.arange(27, 111))
elif env_name == 'Walker2DMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'HopperMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 7)
elif env_name == 'InvertedPendulumMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 3)
elif env_name == 'InvertedDoublePendulumMuJoCoEnv-v0':
self.remain_obs_idx = list(np.arange(0, 5)) + list(np.arange(8, 11))
# 2. Roboschool
elif env_name == 'HalfCheetahPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,26)) - set(np.arange(3,6)))
elif env_name == 'AntPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,28)) - set(np.arange(3,6)))
elif env_name == 'Walker2DPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,22)) - set(np.arange(3,6)))
elif env_name == 'HopperPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,15)) - set(np.arange(3,6)))
elif env_name == 'InvertedPendulumPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,5)) - set([1,4]))
elif env_name == 'InvertedDoublePendulumPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,9)) - set([1,5,8]))
elif env_name == 'ReacherPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,9)) - set([6,8]))
else:
raise ValueError('POMDP for {} is not defined!'.format(env_name))
# Redefine observation_space
obs_low = np.array([-np.inf for i in range(len(self.remain_obs_idx))], dtype="float32")
obs_high = np.array([np.inf for i in range(len(self.remain_obs_idx))], dtype="float32")
self.observation_space = gym.spaces.Box(obs_low, obs_high)
def observation(self, obs):
return obs.flatten()[self.remain_obs_idx]
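# Masking sketch: the wrapper keeps only the retained indices of the full
# observation vector, e.g. dropping velocity entries (hypothetical 17-D
# observation with the first 8 indices retained, for illustration):
import numpy as np
_full_obs = np.arange(17.0)
_remain_idx = np.arange(0, 8)
print(_full_obs[_remain_idx].shape)  # (8,)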
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
class MLPCritic(nn.Module):
def __init__(self, obs_dim, act_dim, hidden_sizes=(128, 128)):
super(MLPCritic, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.layers = nn.ModuleList()
layer_size = [obs_dim+act_dim]+list(hidden_sizes) + [1]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Identity()]
def forward(self, obs, act):
cat_input = torch.cat([obs, act], dim=-1)
x = cat_input
for layer in self.layers:
x = layer(x)
return torch.squeeze(x, -1) # Critical to ensure q has right shape.
class MLPActor(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActor, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.act_limit = act_limit
self.layers = nn.ModuleList()
layer_size = [obs_dim]+list(hidden_sizes) + [act_dim]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Tanh()]
def forward(self, obs):
x = obs
for layer in self.layers:
x = layer(x)
return self.act_limit * x
class MLPActorCritic(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActorCritic, self).__init__()
self.q1 = MLPCritic(obs_dim, act_dim)
self.q2 = MLPCritic(obs_dim, act_dim)
self.pi = MLPActor(obs_dim, act_dim, act_limit=1)
def act(self, obs):
with torch.no_grad():
return self.pi(obs).numpy()
cuda = torch.device('cuda')
def td3(env_name, actor_critic=core.MLPActorCritic, ac_kwargs=dict(), seed=0,
steps_per_epoch=4000, epochs=100, replay_size=int(1e6), gamma=0.99,
polyak=0.995, pi_lr=1e-3, q_lr=1e-3, batch_size=100, start_steps=10000,
update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2,
noise_clip=0.5, policy_delay=2, num_test_episodes=5, max_ep_len=1000,
nonstationary_env=True,
gravity_change_pattern = 'gravity_averagely_equal',
partially_observable = False,
logger_kwargs=dict(), save_freq=1):
"""
Twin Delayed Deep Deterministic Policy Gradient (TD3)
Args:
env_name : Name (id) of the Gym environment to create via ``gym.make``.
The environment must satisfy the OpenAI Gym API.
actor_critic: The constructor method for a PyTorch Module with an ``act``
method, a ``pi`` module, a ``q1`` module, and a ``q2`` module.
The ``act`` method and ``pi`` module should accept batches of
observations as inputs, and ``q1`` and ``q2`` should accept a batch
of observations and a batch of actions as inputs. When called,
these should return:
=========== ================ ======================================
Call Output Shape Description
=========== ================ ======================================
``act`` (batch, act_dim) | Numpy array of actions for each
| observation.
``pi`` (batch, act_dim) | Tensor containing actions from policy
| given observations.
``q1`` (batch,) | Tensor containing one current estimate
| of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
``q2`` (batch,) | Tensor containing the other current
| estimate of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
=========== ================ ======================================
ac_kwargs (dict): Any kwargs appropriate for the ActorCritic object
you provided to TD3.
seed (int): Seed for random number generators.
steps_per_epoch (int): Number of steps of interaction (state-action pairs)
for the agent and the environment in each epoch.
epochs (int): Number of epochs to run and train agent.
replay_size (int): Maximum length of replay buffer.
gamma (float): Discount factor. (Always between 0 and 1.)
polyak (float): Interpolation factor in polyak averaging for target
networks. Target networks are updated towards main networks
according to:
.. math:: \\theta_{\\text{targ}} \\leftarrow
\\rho \\theta_{\\text{targ}} + (1-\\rho) \\theta
where :math:`\\rho` is polyak. (Always between 0 and 1, usually
close to 1.)
pi_lr (float): Learning rate for policy.
q_lr (float): Learning rate for Q-networks.
batch_size (int): Minibatch size for SGD.
start_steps (int): Number of steps for uniform-random action selection,
before running real policy. Helps exploration.
update_after (int): Number of env interactions to collect before
starting to do gradient descent updates. Ensures replay buffer
is full enough for useful updates.
update_every (int): Number of env interactions that should elapse
between gradient descent updates. Note: Regardless of how long
you wait between updates, the ratio of env steps to gradient steps
is locked to 1.
act_noise (float): Stddev for Gaussian exploration noise added to
policy at training time. (At test time, no noise is added.)
target_noise (float): Stddev for smoothing noise added to target
policy.
noise_clip (float): Limit for absolute value of target policy
smoothing noise.
policy_delay (int): Policy will only be updated once every
policy_delay times for each update of the Q-networks.
num_test_episodes (int): Number of episodes to test the deterministic
policy at the end of each epoch.
max_ep_len (int): Maximum length of trajectory / episode / rollout.
nonstationary_env (bool): If True, modulate gravity over the course of
training according to ``gravity_change_pattern``.
gravity_change_pattern (str): One of 'gravity_averagely_equal',
'gravity_averagely_easier' or 'gravity_averagely_harder'.
partially_observable (bool): If True, wrap the environment in POMDPWrapper
so the agent sees only a subset of the observation vector.
logger_kwargs (dict): Keyword args for EpochLogger.
save_freq (int): How often (in terms of gap between epochs) to save
the current policy and value function.
"""
logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
torch.manual_seed(seed)
np.random.seed(seed)
# Wrapper environment if using POMDP
if partially_observable == True:
env, test_env = POMDPWrapper(env_name), POMDPWrapper(env_name)
else:
env, test_env = gym.make(env_name), gym.make(env_name)
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
# Action limit for clamping: critically, assumes all dimensions share the same bound!
act_limit = env.action_space.high[0]
# Create actor-critic module and target networks
mlp_c1 = MLPCritic(obs_dim, act_dim)
mlp_c2 = MLPCritic(obs_dim, act_dim)
mlp_a = MLPActor(obs_dim, act_dim, act_limit)
mlp_c1_targ = deepcopy(mlp_c1)
mlp_c2_targ = deepcopy(mlp_c2)
mlp_a_targ = deepcopy(mlp_a)
mlp_c1.cuda()
mlp_c2.cuda()
mlp_a.cuda()
mlp_c1_targ.cuda()
mlp_c2_targ.cuda()
mlp_a_targ.cuda()
# Freeze target networks with respect to optimizers (only update via polyak averaging)
for p in mlp_c1_targ.parameters():
p.requires_grad = False
for p in mlp_c2_targ.parameters():
p.requires_grad = False
for p in mlp_a_targ.parameters():
p.requires_grad = False
# List of parameters for both Q-networks (save this for convenience)
q_params = itertools.chain(mlp_c1.parameters(), mlp_c2.parameters())
# Experience buffer
replay_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=replay_size)
# # Count variables (protip: try to get a feel for how different size networks behave!)
# var_counts = tuple(core.count_vars(module) for module in [ac.pi, ac.q1, ac.q2])
# logger.log('\nNumber of parameters: \t pi: %d, \t q1: %d, \t q2: %d\n'%var_counts)
# Set up function for computing TD3 Q-losses
def compute_loss_q(data):
o, a, r, o2, d = data['obs'].to(device=cuda), data['act'].to(device=cuda), data['rew'].to(device=cuda), data['obs2'].to(device=cuda), data['done'].to(device=cuda)
q1 = mlp_c1(o, a)
q2 = mlp_c2(o, a)
# Bellman backup for Q functions
with torch.no_grad():
pi_targ = mlp_a_targ(o2)
a2 = pi_targ
# Target Q-values
q1_pi_targ = mlp_c1_targ(o2, a2)
q2_pi_targ = mlp_c2_targ(o2, a2)
q_pi_targ = torch.min(q1_pi_targ, q2_pi_targ)
backup = r + gamma * (1 - d) * q_pi_targ
# MSE loss against Bellman backup
loss_q1 = ((q1 - backup)**2).mean()
loss_q2 = ((q2 - backup)**2).mean()
loss_q = loss_q1 + loss_q2
# Useful info for logging
loss_info = dict(Q1Vals=q1.detach().cpu().numpy(),
Q2Vals=q2.detach().cpu().numpy())
return loss_q, loss_info
# Set up function for computing TD3 pi loss
def compute_loss_pi(data):
o = data['obs'].to(device=cuda)
q1_pi = mlp_c1(o, mlp_a(o))
return -q1_pi.mean()
# Set up optimizers for policy and q-function
pi_optimizer = Adam(mlp_a.parameters(), lr=pi_lr)
q_optimizer = Adam(q_params, lr=q_lr)
# # Set up model saving
# logger.setup_pytorch_saver(ac)
def update(data, timer):
# First run one gradient descent step for Q1 and Q2
q_optimizer.zero_grad()
loss_q, loss_info = compute_loss_q(data)
loss_q.backward()
q_optimizer.step()
# Record things
logger.store(LossQ=loss_q.item(), **loss_info)
# Freeze Q-networks so you don't waste computational effort
# computing gradients for them during the policy learning step.
for p in q_params:
p.requires_grad = False
# Next run one gradient descent step for pi.
pi_optimizer.zero_grad()
loss_pi = compute_loss_pi(data)
loss_pi.backward()
pi_optimizer.step()
# Unfreeze Q-networks so you can optimize it at next DDPG step.
for p in q_params:
p.requires_grad = True
# Record things
logger.store(LossPi=loss_pi.item())
# Finally, update target networks by polyak averaging.
with torch.no_grad():
for p, p_targ in zip(mlp_a.parameters(), mlp_a_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c1.parameters(), mlp_c1_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c2.parameters(), mlp_c2_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
def get_action(o, noise_scale):
o = torch.tensor(o).view(1, -1).float().to(device=cuda)
with torch.no_grad():
a = mlp_a(o)
a = a.cpu().numpy().flatten()
a += noise_scale * np.random.randn(act_dim)
return np.clip(a, -act_limit, act_limit)
def test_agent():
for j in range(num_test_episodes):
o, d, ep_ret, ep_len = test_env.reset(), False, 0, 0
while not(d or (ep_len == max_ep_len)):
# Take deterministic actions at test time (noise_scale=0)
o, r, d, _ = test_env.step(get_action(o, 0))
ep_ret += r
ep_len += 1
logger.store(TestEpRet=ep_ret, TestEpLen=ep_len)
# Prepare for interaction with environment
total_steps = steps_per_epoch * epochs
start_time = time.time()
o, ep_ret, ep_len = env.reset(), 0, 0
# Main loop: collect experience in env and update/log each epoch
for t in range(total_steps):
# Until start_steps have elapsed, randomly sample actions
# from a uniform distribution for better exploration. Afterwards,
# use the learned policy (with some noise, via act_noise).
if t > start_steps:
a = get_action(o, act_noise)
else:
a = env.action_space.sample()
if nonstationary_env == True:
gravity_cycle = 1000
gravity_base = -9.81
if gravity_change_pattern == 'gravity_averagely_equal':
gravity = gravity_base * 1 / 2 * (np.cos(2 * np.pi / gravity_cycle * t) + 1) + gravity_base / 2
elif gravity_change_pattern == 'gravity_averagely_easier':
gravity = gravity_base * 1 / 2 * (np.cos(2 * np.pi / gravity_cycle * t) + 1)
elif gravity_change_pattern == 'gravity_averagely_harder':
gravity = gravity_base * 1 / 2 * (-np.cos(2 * np.pi / gravity_cycle * t) + 1) + gravity_base
else:
pass
if 'PyBulletEnv' in env_name:
env.env._p.setGravity(0, 0, gravity)
elif 'Roboschool' in env_name:
pass
else:
env.model.opt.gravity[2] = gravity
# Step the env
o2, r, d, _ = env.step(a)
ep_ret += r
ep_len += 1
# Ignore the "done" signal if it comes from hitting the time
# horizon (that is, when it's an artificial terminal signal
# that isn't based on the agent's state)
d = False if ep_len==max_ep_len else d
# Store experience to replay buffer
replay_buffer.store(o, a, r, o2, d)
# Super critical, easy to overlook step: make sure to update
# most recent observation!
o = o2
# End of trajectory handling
if d or (ep_len == max_ep_len):
logger.store(EpRet=ep_ret, EpLen=ep_len)
o, ep_ret, ep_len = env.reset(), 0, 0
# Update handling
if t >= update_after and t % update_every == 0:
for j in range(update_every):
batch = replay_buffer.sample_batch(batch_size)
update(data=batch, timer=j)
# End of epoch handling
if (t+1) % steps_per_epoch == 0:
epoch = (t+1) // steps_per_epoch
# # Save model
# if (epoch % save_freq == 0) or (epoch == epochs):
# logger.save_state({'env': env}, None)
# Test the performance of the deterministic version of the agent.
test_agent()
# Log info about epoch
logger.log_tabular('Epoch', epoch)
logger.log_tabular('EpRet', with_min_and_max=True)
logger.log_tabular('TestEpRet', with_min_and_max=True)
logger.log_tabular('EpLen', average_only=True)
logger.log_tabular('TestEpLen', average_only=True)
logger.log_tabular('TotalEnvInteracts', t)
logger.log_tabular('Q1Vals', with_min_and_max=True)
logger.log_tabular('Q2Vals', with_min_and_max=True)
logger.log_tabular('LossPi', average_only=True)
logger.log_tabular('LossQ', average_only=True)
logger.log_tabular('Time', time.time()-start_time)
logger.dump_tabular()
args = {'env': 'Ant-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_Ant_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_HalfCheetah_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HalfCheetah_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HopperPyBulletEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HopperPyBulletEnv_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HopperPyBulletEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_HopperPyBulletEnv_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'AntMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_Ant_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'AntMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_Ant_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetahMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HalfCheetah_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetahMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'exp_name': 'td3_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
logger_kwargs=logger_kwargs)
```
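The target-network step in `update` implements the polyak-averaging rule from the docstring, θ_targ ← ρ θ_targ + (1 − ρ) θ, in place via `mul_`/`add_`. A minimal NumPy sketch of the same rule on toy parameter vectors (names hypothetical, not the actual networks):

```python
import numpy as np

def polyak_update(theta_targ, theta, rho=0.995):
    """In-place polyak average: theta_targ <- rho * theta_targ + (1 - rho) * theta."""
    theta_targ *= rho
    theta_targ += (1.0 - rho) * theta
    return theta_targ

theta = np.array([1.0, 1.0])        # "main" network parameters
theta_targ = np.array([0.0, 0.0])   # "target" network parameters
polyak_update(theta_targ, theta, rho=0.9)
# theta_targ is now [0.1, 0.1]: the target drifts slowly toward the main network
```

With ρ close to 1 (e.g. the 0.995 default) the targets change slowly, which is what stabilizes the Bellman backup.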
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
from mplsoccer.pitch import Pitch
import shapely.geometry as geom
```
Load the data
```
WYSCOUT = os.path.join('..', '..', 'data', 'wyscout')
df_wyscout_event = pd.read_parquet(os.path.join(WYSCOUT, 'event.parquet'))
df_wyscout_player = pd.read_parquet(os.path.join(WYSCOUT, 'player.parquet'))
df_wyscout_match = pd.read_parquet(os.path.join(WYSCOUT, 'match.parquet'))
df_wyscout_formation = pd.read_parquet(os.path.join(WYSCOUT, 'formation.parquet'))
df_wyscout_sub = pd.read_parquet(os.path.join(WYSCOUT, 'substitution.parquet'))
```
Get the time in minutes when the goalkeepers come on the pitch
```
df_wyscout_gk = df_wyscout_player.loc[df_wyscout_player.role_name == 'Goalkeeper', ['player_id', 'role_name']].copy()
df_wyscout_formation = df_wyscout_formation.merge(df_wyscout_gk, how='inner')
df_wyscout_sub_in = df_wyscout_sub[['match_id', 'player_id_in', 'minute']].copy()
df_wyscout_sub_in.rename({'player_id_in': 'player_id', 'minute': 'minute_in'}, axis=1, inplace=True)
df_wyscout_formation = df_wyscout_formation.merge(df_wyscout_sub_in, how='left')
df_wyscout_formation.loc[df_wyscout_formation.bench == False, 'minute_in'] = 0
df_wyscout_formation = df_wyscout_formation[df_wyscout_formation.minute_in.notnull()].copy()
df_wyscout_formation.sort_values('minute_in', inplace=True)
```
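The merges above give every goalkeeper a `minute_in`: starters (`bench == False`) get 0, substituted keepers get the substitution minute, and unused bench keepers end up null and are dropped. A toy pandas illustration of the same logic (column names mirror the real frames, the data is made up):

```python
import pandas as pd

formation = pd.DataFrame({'match_id': [1, 1, 1],
                          'player_id': [10, 11, 12],
                          'bench': [False, True, True]})
subs = pd.DataFrame({'match_id': [1], 'player_id': [11], 'minute_in': [60]})

formation = formation.merge(subs, how='left')             # keepers without a sub get NaN
formation.loc[formation.bench == False, 'minute_in'] = 0  # starters play from minute 0
formation = formation[formation.minute_in.notnull()]      # drop unused bench keepers
# player 10 (starter) -> 0, player 11 (sub) -> 60, player 12 dropped
```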
Add on team name
```
df_team = pd.concat([(df_wyscout_match[['away_team_id', 'away_team_name']]
.rename({'away_team_id': 'team_id', 'away_team_name': 'team_name'}, axis=1)),
(df_wyscout_match[['home_team_id', 'home_team_name']]
.rename({'home_team_id': 'team_id', 'home_team_name': 'team_name'}, axis=1))])
df_team.drop_duplicates('team_id', inplace=True)
df_wyscout_event = df_wyscout_event.merge(df_team, on='team_id', how='left')
```
Add on smart pass marker
```
df_wyscout_event['smart_pass'] = (df_wyscout_event.subEventName == 'Smart pass')
```
Replace player_id = 0 with null
```
df_wyscout_event.player_id.replace({0: np.nan}, inplace=True)
```
Pitches for coordinate conversion
```
pitch_wyscout = Pitch(pitch_type='wyscout')
pitch_statsperform = Pitch(pitch_type='statsperform', figsize=(16, 9))
```
Rename Free Kick to Set Piece
```
df_wyscout_event.eventName.replace('Free Kick', 'Set Piece', inplace=True)
```
Add on a column for a pass attempt
```
mask_pass = ((df_wyscout_event.eventName == 'Pass') |
df_wyscout_event.subEventName.isin(['Throw in', 'Free Kick', 'Goal kick', 'Corner', 'Free kick cross']))
df_wyscout_event['pass_attempt'] = mask_pass
```
Add shot attempt boolean column
```
mask_corner_goal = (df_wyscout_event.subEventName=='Corner') & (df_wyscout_event.goal==True)
df_wyscout_event['shot'] = ((df_wyscout_event.eventName == 'Shot') |
(df_wyscout_event.subEventName=='Free kick shot') |
(df_wyscout_event.subEventName == 'Penalty') |
(mask_corner_goal))
```
Add on switch (StatsBomb definition = ball transitioned at least 50% of the pitch vertically). Note that I have already removed dodgy end locations near the corner flags so this works.
```
mask_switch = (abs(df_wyscout_event.end_y - df_wyscout_event.y) >= 50) & (df_wyscout_event.pass_attempt)
df_wyscout_event['pass_switch'] = mask_switch
```
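Wyscout coordinates run 0-100 in both directions, so a vertical travel of at least 50 units is at least half the pitch - the StatsBomb switch threshold. A quick check of the mask on made-up coordinates:

```python
import numpy as np

y = np.array([10.0, 80.0, 40.0])        # pass start y
end_y = np.array([70.0, 20.0, 45.0])    # pass end y
pass_attempt = np.array([True, True, True])

mask_switch = (np.abs(end_y - y) >= 50) & pass_attempt
# vertical travels of 60, 60 and 5 units -> [True, True, False]
```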
Add on cross (StatsBomb definition: See Appendix 6 of the Docs for Events)
```
# cross right side start
cross_right_start = np.array([[pitch_wyscout.right,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right,
pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.pitch_length*0.3,
pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.pitch_length*0.3,
pitch_wyscout.bottom - pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.bottom - pitch_wyscout.penalty_area_from_side]])
cross_right_start = geom.Polygon(cross_right_start)
# cross right side end
cross_right_end = np.array([[pitch_wyscout.right,
pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side]])
cross_right_end = geom.Polygon(cross_right_end)
# cross left side start
cross_left_start = np.array([[pitch_wyscout.right,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right,
pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.pitch_length*0.3,
pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.pitch_length*0.3,
pitch_wyscout.top + pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.top + pitch_wyscout.penalty_area_from_side]])
cross_left_start = geom.Polygon(cross_left_start)
# cross left side end
cross_left_end = np.array([[pitch_wyscout.right,
pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side]])
cross_left_end = geom.Polygon(cross_left_end)
# find intersection of passes and cross polygons
df_pass = df_wyscout_event[df_wyscout_event.pass_attempt].copy()
# starting locations
pass_start = geom.MultiPoint(df_pass[['x', 'y']].values)
cross_start_left_intersects = [point.intersects(cross_left_start) for point in pass_start]
cross_start_right_intersects = [point.intersects(cross_right_start) for point in pass_start]
# end locations
pass_end = geom.MultiPoint(df_pass[['end_x', 'end_y']].values)
cross_end_left_intersects = [point.intersects(cross_left_end) for point in pass_end]
cross_end_right_intersects = [point.intersects(cross_right_end) for point in pass_end]
# add cross marker to event data
mask_cross = ((np.array(cross_start_left_intersects) & np.array(cross_end_left_intersects)) |
(np.array(cross_start_right_intersects) & np.array(cross_end_right_intersects)))
cross_ids = df_pass[mask_cross].id
df_wyscout_event['cross'] = df_wyscout_event.id.isin(cross_ids)
```
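The cross logic hinges on point-in-polygon tests: shapely's `point.intersects(polygon)` decides whether a pass start/end lies inside the hand-built zones. For intuition, the same membership test can be written with standard ray casting (a pure-Python sketch, not shapely's implementation):

```python
def point_in_polygon(px, py, poly):
    """Ray casting: toggle on each polygon edge a horizontal ray from (px, py) crosses."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                            # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)  # crossing x-coordinate
            if px < x_cross:
                inside = not inside
    return inside

# hypothetical wide-right zone on a 0-100 pitch
zone = [(60.0, 0.0), (100.0, 0.0), (100.0, 20.0), (60.0, 20.0)]
point_in_polygon(80.0, 10.0, zone)   # inside the zone
point_in_polygon(30.0, 10.0, zone)   # outside the zone
```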
Add on cut-back (StatsBomb definition: see Appendix 5 of the Docs for Events)
```
# right side start
cut_right_start = np.array([[pitch_wyscout.right, pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right, pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.six_yard_length, pitch_wyscout.bottom],
[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side]])
cut_right_start = geom.Polygon(cut_right_start)
# right side end
cut_right_end = np.array([[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.bottom - pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.bottom - pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side]])
cut_right_end = geom.Polygon(cut_right_end)
# left side start
cut_left_start = np.array([[pitch_wyscout.right, pitch_wyscout.top + pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right, pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.six_yard_length, pitch_wyscout.top],
[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.top + pitch_wyscout.six_yard_from_side]])
cut_left_start = geom.Polygon(cut_left_start)
# left side end
cut_left_end = np.array([[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.top + pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.top + pitch_wyscout.penalty_area_from_side],
[pitch_wyscout.right - pitch_wyscout.penalty_area_length,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side],
[pitch_wyscout.right - pitch_wyscout.six_yard_length,
pitch_wyscout.bottom - pitch_wyscout.six_yard_from_side]])
cut_left_end = geom.Polygon(cut_left_end)
# find intersection of passes and cut back polygons
cut_start_left_intersects = [point.intersects(cut_left_start) for point in pass_start]
cut_start_right_intersects = [point.intersects(cut_right_start) for point in pass_start]
# end locations
cut_end_left_intersects = [point.intersects(cut_left_end) for point in pass_end]
cut_end_right_intersects = [point.intersects(cut_right_end) for point in pass_end]
# add cut back marker to event data
mask_cut = ((np.array(cut_start_left_intersects) & np.array(cut_end_left_intersects)) |
(np.array(cut_start_right_intersects) & np.array(cut_end_right_intersects)))
# not high and comes from a normal pass (not corner etc.)
cut_ids = df_pass[((mask_cut) & (df_pass.high == False) & (df_pass.eventName == 'Pass'))].id
df_wyscout_event['cut_back'] = df_wyscout_event.id.isin(cut_ids)
```
Add goal scored (excluding shootouts)
```
df_wyscout_event['goal_scored_excl_shootout'] = (((df_wyscout_event.goal) &
(df_wyscout_event['matchPeriod']!='P') &
(df_wyscout_event['shot'])) |
(df_wyscout_event.own_goal))
```
Check for missing goals - there is one Kevin de Bruyne goal missing in the Wyscout event data
```
goals_per_game = pd.DataFrame(df_wyscout_event[df_wyscout_event['goal_scored_excl_shootout']]
.groupby('match_id')
.match_id.count())
goals_per_game.columns = ['Goals']
goals_per_game.reset_index(inplace=True)
goals_per_game = goals_per_game.merge(df_wyscout_match[['home_score', 'label', 'away_score','match_id', 'kick_off']],
how='right', on='match_id')
goals_per_game = goals_per_game.fillna(0)
# only one game is missing a goal: a Kevin de Bruyne goal that is not in the event data
goals_per_game[goals_per_game.Goals != (goals_per_game.home_score + goals_per_game.away_score)]
```
#### Create a pass_height_name feature.
Assumptions:
- headed passes are high (roughly 60% are in the StatsBomb data)
- smart passes with the through ball tag are ground/low. The Wyscout docs say the through ball tag is added to a smart pass if the pass is on the ground, or if it's over the heads of the opposing players but covers only a short distance - 5-10 meters.
- smart passes without through balls are high passes
- hand passes are ground/low (hopefully launch catches high passes)
- throw-in / goal kick are high
- free-kick, crosses and corners are low unless high=True
```
# assumption made here that head passes are high passes (in StatsBomb roughly 60% are)
df_wyscout_event.loc[df_wyscout_event.subEventName.isin(['High pass', 'Launch', 'Head pass']),
'pass_height_name'] = 'High Pass'
df_wyscout_event.loc[(df_wyscout_event.high == True) & (df_wyscout_event.eventName == 'Pass'),
'pass_height_name'] = 'High Pass'
# if smart pass and not a through ball assumed high
df_wyscout_event.loc[(df_wyscout_event.subEventName == 'Smart pass') & (df_wyscout_event.through == False),
'pass_height_name'] = 'High Pass'
df_wyscout_event.loc[df_wyscout_event.subEventName == 'Simple pass',
'pass_height_name'] = 'Ground/ Low Pass'
df_wyscout_event.loc[(df_wyscout_event.subEventName.isin(['Corner', 'Free kick cross', 'Free Kick', 'Cross'])) &
(df_wyscout_event.high == False),
'pass_height_name'] = 'Ground/ Low Pass'
df_wyscout_event.loc[df_wyscout_event.subEventName.isin(['Throw in', 'Goal kick']),
'pass_height_name'] = 'High Pass'
df_wyscout_event.loc[(df_wyscout_event.subEventName.isin(['Corner', 'Free kick cross', 'Cross'])) &
(df_wyscout_event.high),
'pass_height_name'] = 'High Pass'
# assumption made here that smart through balls are ground/ low
df_wyscout_event.loc[(df_wyscout_event.subEventName == 'Smart pass') & (df_wyscout_event.through),
'pass_height_name'] = 'Ground/ Low Pass'
df_wyscout_event.loc[(df_wyscout_event.subEventName == 'Hand pass'),
'pass_height_name'] = 'Ground/ Low Pass'
```
Separate out names in the player dataset
```
df_wyscout_player['fullName'] = (df_wyscout_player.firstName + ' ' + df_wyscout_player.lastName).str.strip()
player_name_series = df_wyscout_player.fullName.str.split(' ')
df_wyscout_player['firstName'] = player_name_series.apply(lambda x: x[0] if isinstance(x, list) else None)
df_wyscout_player['middleName'] = player_name_series.apply(lambda x: ' '.join(x[1:-1]) if isinstance(x, list) else None)
df_wyscout_player['lastName'] = player_name_series.apply(lambda x: x[-1] if isinstance(x, list) else None)
df_wyscout_player['middleName'] = df_wyscout_player['middleName'].str.strip()
df_wyscout_player['Name'] = ((df_wyscout_player['firstName'] + ' ' + df_wyscout_player['middleName']).str.strip()
+ ' ' + df_wyscout_player['lastName'])
```
Add on the player name/ foot
```
df_wyscout_player.foot.replace({'null': None, '': None}, inplace=True)
df_wyscout_event = df_wyscout_event.merge(df_wyscout_player[['player_id',
'firstName', 'middleName', 'lastName', 'Name', 'foot']],
how='left', on='player_id')
```
Create a pass_technique name.
Assumptions:
- outswinging: taken with the opposite foot to the side of the pitch the corner is on
- inswinging: taken with the same foot as the side of the pitch the corner is on
- missing/ both feet = inswinging
- each player takes the kick with the foot recorded in the player table.
This misses straight corner kicks and may not be 100% correct.
```
mask_corner = df_wyscout_event.subEventName == 'Corner'
mask_right = df_wyscout_event.foot == 'right'
mask_left = df_wyscout_event.foot == 'left'
mask_right_side = df_wyscout_event.y >= pitch_wyscout.center_width
mask_left_side = df_wyscout_event.y < pitch_wyscout.center_width
mask_both = df_wyscout_event.foot == 'both'
mask_missing = df_wyscout_event.foot.isnull()
mask_inswing = mask_corner & ((mask_left & mask_left_side) | (mask_right & mask_right_side) | mask_both | mask_missing)
mask_outswing = mask_corner & ((mask_left & mask_right_side) | (mask_right & mask_left_side))
df_wyscout_event.loc[mask_inswing, 'pass_technique_name'] = 'Inswinging'
df_wyscout_event.loc[mask_outswing, 'pass_technique_name'] = 'Outswinging'
df_wyscout_event.loc[df_wyscout_event.through, 'pass_technique_name'] = 'Through Ball'
```
Fix a corner at the wrong end
```
df_wyscout_event.loc[(df_wyscout_event.y == 100) & (df_wyscout_event.x == 0) &
(df_wyscout_event.subEventName == 'Corner'), 'x'] = 100
```
Correct a few shots that appear to be on the wrong side of the pitch. I haven't checked, but these seem to be too far away to be real shots, especially as some are goals
```
mask_correct_shot = (df_wyscout_event.x < 34) & (df_wyscout_event.shot)
df_wyscout_event.loc[mask_correct_shot, 'x'] = 100 - df_wyscout_event.loc[mask_correct_shot, 'x']
```
Fast break: win the ball in your own third and shoot in the final quarter of the pitch within 25 seconds
```
mask_defence_win = ((df_wyscout_event.subEventName.isin(['Ground defending duel', 'Air duel', 'Save attempt'])) |
(df_wyscout_event.interception)) & (df_wyscout_event.x < 33.4)
df_wyscout_event.loc[mask_defence_win, 'defence_win'] = df_wyscout_event.loc[mask_defence_win, 'team_id']
df_wyscout_event.loc[mask_defence_win, 'defence_sec'] = df_wyscout_event.loc[mask_defence_win, 'eventSec']
group_match = df_wyscout_event.groupby(['match_id', 'matchPeriod'])
df_wyscout_event[['defence_win', 'defence_sec']] = group_match[['defence_win', 'defence_sec']].ffill()
mask_fast = (((df_wyscout_event.eventSec - df_wyscout_event.defence_sec) <= 25) &
(df_wyscout_event.x > 75) &
(df_wyscout_event.shot) & (df_wyscout_event.team_id == df_wyscout_event.defence_win))
df_wyscout_event['fast_break'] = mask_fast
```
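The `groupby(...).ffill()` pattern above is what carries the last defensive-win team and time forward within each match half. A minimal sketch of how it behaves, using toy data with hypothetical values (column names chosen to mirror the real frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'match_id': [1, 1, 1, 2, 2],
    'defence_win': [np.nan, 10.0, np.nan, np.nan, 20.0],
})
# forward-fill within each match only; values never leak into the next match
df['defence_win'] = df.groupby('match_id')['defence_win'].ffill()
```

Note the first row of each group stays NaN: there is nothing earlier in the group to fill from.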
Flag events within 10 seconds of a corner or free kick, or within 20 seconds of a throw-in
```
for set_piece in ['Corner', 'Throw in', ['Free kick cross', 'Free kick shot']]:
if isinstance(set_piece, list):
mask = df_wyscout_event.subEventName.isin(set_piece)
name = 'free_kick'
else:
mask = df_wyscout_event.subEventName.isin([set_piece])
name = set_piece.replace(' ', '_').lower()
df_wyscout_event.loc[mask, f'{name}_sec'] = df_wyscout_event.loc[mask, 'eventSec']
df_wyscout_event.loc[mask, f'{name}_team'] = df_wyscout_event.loc[mask, 'team_id']
df_wyscout_event[f'{name}_sec'] = group_match[f'{name}_sec'].ffill()
df_wyscout_event[f'{name}_team'] = group_match[f'{name}_team'].ffill()
df_wyscout_event[f'{name}_sec'] = df_wyscout_event.eventSec - df_wyscout_event[f'{name}_sec']
df_wyscout_event.loc[df_wyscout_event.throw_in_sec > 20, 'throw_in_sec'] = np.nan
df_wyscout_event.loc[df_wyscout_event.free_kick_sec > 10, 'free_kick_sec'] = np.nan
df_wyscout_event.loc[df_wyscout_event.corner_sec > 10, 'corner_sec'] = np.nan
df_wyscout_event['play_type'] = df_wyscout_event[['throw_in_sec', 'free_kick_sec', 'corner_sec']].idxmin(axis=1).str[:-4]
# if throw-in and defensive set to null
mask_defensive = ((df_wyscout_event.play_type == 'throw_in') &
(df_wyscout_event['throw_in_team'] != df_wyscout_event.team_id))
df_wyscout_event.loc[mask_defensive, 'play_type'] = np.nan
```
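The `idxmin(axis=1)` call above picks, per row, the column with the smallest elapsed time (the most recent set piece), and `str[:-4]` strips the `_sec` suffix to leave the play type. A toy illustration with hypothetical values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'throw_in_sec': [np.nan, 5.0],
                   'free_kick_sec': [8.0, np.nan],
                   'corner_sec': [np.nan, 9.0]})
# the column name holding the row-wise minimum, with the '_sec' suffix removed
play_type = df[['throw_in_sec', 'free_kick_sec', 'corner_sec']].idxmin(axis=1).str[:-4]
```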
Add on previous info
```
# first filter out some events so the previous event is the correct assist type
mask_exclude = ((df_wyscout_event.eventName.isin(['Goalkeeper leaving line', 'Interruption'])) |
(df_wyscout_event.subEventName == 'Acceleration'))
df_wyscout_event = df_wyscout_event[~mask_exclude].copy()
match_group = df_wyscout_event.groupby(['match_id', 'matchPeriod'])
for i in range(1, 4):
df_wyscout_event[f'prev_id_{i}'] = match_group.id.shift(i)
df_wyscout_event[f'prevEventName_{i}'] = match_group.eventName.shift(i)
df_wyscout_event[f'prevSubEventName_{i}'] = match_group.subEventName.shift(i)
df_wyscout_event[f'prev_player_id_{i}'] = match_group.player_id.shift(i)
df_wyscout_event[f'prev_team_id_{i}'] = match_group.team_id.shift(i)
df_wyscout_event[f'prev_pass_attempt_{i}'] = match_group.pass_attempt.shift(i)
df_wyscout_event[f'prev_shot_{i}'] = match_group.shot.shift(i)
```
### Filter Non-penalty/ non-corner shots
```
mask_shot1 = (((df_wyscout_event.eventName=='Shot') | (df_wyscout_event.subEventName=='Free kick shot')) &
              (df_wyscout_event['matchPeriod']!='P'))
# exclude goals scored directly from corners
mask_corner_goal = (df_wyscout_event.subEventName=='Corner') & (df_wyscout_event.goal==True)
mask_shot = mask_shot1 & (~mask_corner_goal)
df_wyscout_shots = df_wyscout_event[mask_shot].copy()
print('Number of shots:', len(df_wyscout_shots))
print('Number of goals:', df_wyscout_shots.goal.sum())
```
Add body part name
```
df_wyscout_shots.loc[df_wyscout_shots.left_foot, 'body_part_name'] = 'Left Foot'
df_wyscout_shots.loc[df_wyscout_shots.right_foot, 'body_part_name'] = 'Right Foot'
df_wyscout_shots.loc[df_wyscout_shots.other_body_part, 'body_part_name'] = 'Other'
df_wyscout_shots.drop(['left_foot', 'right_foot', 'other_body_part'], axis=1, inplace=True)
```
Strongest foot column
```
mask_strong_foot = (((df_wyscout_shots.foot.isin(['right', 'both'])) & (df_wyscout_shots.body_part_name == 'Right Foot')) |
(((df_wyscout_shots.foot.isin(['left', 'both'])) & (df_wyscout_shots.body_part_name == 'Left Foot'))))
df_wyscout_shots['strong_foot'] = mask_strong_foot
```
Add shot type name
```
# note there are three goals that come direct from corner kicks - they are tagged 'Free Kick' originally though
# this was renamed earlier to Set Piece
df_wyscout_shots['shot_type_name'] = df_wyscout_shots.eventName.replace({'Shot': 'Open Play',
'Set Piece': 'Direct Set Piece'})
```
Assist type, and get event id if pass
```
# direct from set pieces
df_wyscout_shots.loc[df_wyscout_shots.shot_type_name == 'Direct Set Piece', 'assist_type'] = 'direct'
# rebound/ clearance from the previous event
df_wyscout_shots.loc[(df_wyscout_shots.assist_type.isnull()) &
(df_wyscout_shots.prevEventName_1.isin(['Save attempt', 'Shot', 'Set Piece'])), 'assist_type'] = 'rebound'
df_wyscout_shots.loc[(df_wyscout_shots.assist_type.isnull()) &
(df_wyscout_shots.prevSubEventName_1 == 'Clearance'), 'assist_type'] = 'clearance'
# pass from the previous event
mask_pass1 = ((df_wyscout_shots.prev_pass_attempt_1 == True) &
(df_wyscout_shots.team_id == df_wyscout_shots.prev_team_id_1) &
(df_wyscout_shots.assist_type.isnull()))
df_wyscout_shots.loc[mask_pass1, 'assist_type'] = 'pass'
df_wyscout_shots.loc[mask_pass1, 'pass_id'] = df_wyscout_shots.loc[mask_pass1, 'prev_id_1']
# pass from the third previous event if there are two duels in-between
mask_duel = ((df_wyscout_shots.prevEventName_1 == 'Duel') &
(df_wyscout_shots.prevEventName_2 == 'Duel') &
(df_wyscout_shots.assist_type.isnull()))
mask_pass2 = (mask_duel & (df_wyscout_shots.prev_pass_attempt_3) &
(df_wyscout_shots.team_id == df_wyscout_shots.prev_team_id_3))
df_wyscout_shots.loc[mask_pass2, 'assist_type'] = 'pass'
df_wyscout_shots.loc[mask_pass2, 'pass_id'] = df_wyscout_shots.loc[mask_pass2, 'prev_id_3']
# rebound/clearance if the third previous event involved a shot or save and there are two duels in-between
df_wyscout_shots.loc[(df_wyscout_shots.assist_type.isnull()) & mask_duel &
((df_wyscout_shots.prev_shot_3) | (df_wyscout_shots.prevEventName_3 == 'Save attempt')),
'assist_type'] = 'rebound'
df_wyscout_shots.loc[(df_wyscout_shots.assist_type.isnull()) & mask_duel &
(df_wyscout_shots.prevSubEventName_3 =='Clearance'),
'assist_type'] = 'clearance'
# if still null and the second event involves a shot or save set to a rebound
df_wyscout_shots.loc[(df_wyscout_shots.assist_type.isnull()) &
((df_wyscout_shots.prevEventName_2 == 'Save attempt') | (df_wyscout_shots.prev_shot_2)),
'assist_type'] = 'rebound'
# if still null and the second event involves a clearance set to a clearance
df_wyscout_shots.loc[df_wyscout_shots.assist_type.isnull() & (df_wyscout_shots.prevSubEventName_2 == 'Clearance'),
'assist_type'] = 'clearance'
# if still null and the third event involves a pass set to a pass
mask_pass3 = ((df_wyscout_shots.assist_type.isnull()) & (df_wyscout_shots.prev_pass_attempt_3) &
(df_wyscout_shots.prev_team_id_3 == df_wyscout_shots.team_id))
df_wyscout_shots.loc[mask_pass3, 'assist_type'] = 'pass'
df_wyscout_shots.loc[mask_pass3, 'pass_id'] = df_wyscout_shots.loc[mask_pass3, 'prev_id_3']
# if still null set to recovery
df_wyscout_shots.loc[df_wyscout_shots.assist_type.isnull(), 'assist_type'] = 'recovery'
```
Keep subset of columns
```
df_wyscout_shots = df_wyscout_shots[['match_id', 'matchPeriod', 'eventSec', 'id', 'goal', 'team_id', 'team_name', 'player_id',
'firstName', 'middleName', 'lastName', 'Name',
'shot_type_name', 'play_type',
'x', 'y', 'counter_attack', 'fast_break',
'strong_foot', 'body_part_name', 'assist_type', 'pass_id']].copy()
```
Add on pass information
```
df_pass = df_wyscout_event.loc[df_wyscout_event.pass_attempt, ['id', 'end_y', 'end_x', 'smart_pass',
'pass_switch', 'cross', 'cut_back', 'pass_height_name',
'pass_technique_name']].copy()
df_pass.rename({'id': 'pass_id', 'end_x': 'pass_end_x', 'end_y': 'pass_end_y',
'cross': 'pass_cross', 'cut_back': 'pass_cut_back'}, axis=1, inplace=True)
df_wyscout_shots = df_wyscout_shots.merge(df_pass, how='left', on='pass_id')
```
Calculate distance of carry/ dribble in yards (not calculated as too often zero)
```
#yards = (((df_wyscout_shots.pass_end_x - df_wyscout_shots.x) / pitch_wyscout.right * 115) ** 2 +
# ((df_wyscout_shots.pass_end_y - df_wyscout_shots.y) / pitch_wyscout.bottom * 74) ** 2) ** (0.5)
#df_wyscout_shots['carry_length'] = yards
```
Convert coordinates to standard pitch size (105m * 68m)
```
x_cols = ['x', 'pass_end_x']
y_cols = ['y', 'pass_end_y']
df_wyscout_shots[x_cols] = (df_wyscout_shots[x_cols]) / float(pitch_wyscout.right) * pitch_statsperform.right
df_wyscout_shots[y_cols] = ((float(pitch_wyscout.bottom) - df_wyscout_shots[y_cols]) /
float(pitch_wyscout.bottom) * pitch_statsperform.top)
```
Dropping the end locations of the assist pass as they are too often the same as the shot location (compared with StatsBomb, where they differ more often)
```
df_wyscout_shots.drop(['pass_end_y', 'pass_end_x'], axis=1, inplace=True)
```
Angles/ distance to goals
```
left_post, right_post = pitch_statsperform.goal_right
goal_width = abs(right_post - left_post)[1]
dx = abs(pitch_statsperform.right - df_wyscout_shots.x)
dy = abs(pitch_statsperform.center_width - df_wyscout_shots.y)
df_wyscout_shots['visible_angle'] = np.arctan2(goal_width * dx , (dx**2 + dy**2 - (goal_width / 2.) ** 2))
df_wyscout_shots['middle_angle'] = np.arctan2(dy, dx)
df_wyscout_shots['distance_to_goal'] = round((dy**2 + dx**2)**0.5, 1)
```
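The `visible_angle` expression is the angle the goal mouth subtends at the shot location: `arctan2(g*dx, dx**2 + dy**2 - (g/2)**2)` is exactly the arctan2 of the cross and dot products of the two vectors from the shot to the posts. A quick numerical sanity check, assuming a goal width of 7.32 m and a hypothetical shot location:

```python
import numpy as np

g = 7.32             # assumed goal width in metres
dx, dy = 11.0, 2.0   # hypothetical shot location relative to the goal centre

angle = np.arctan2(g * dx, dx**2 + dy**2 - (g / 2.0)**2)

# the same angle computed from the vectors to the two posts
v1 = (dx, dy - g / 2.0)
v2 = (dx, dy + g / 2.0)
cross = v1[0] * v2[1] - v1[1] * v2[0]
dot = v1[0] * v2[0] + v1[1] * v2[1]
angle_posts = np.arctan2(cross, dot)
```

For a shot straight in front of goal (`dy = 0`) this reduces to the familiar `2 * arctan((g/2) / dx)`.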
Interaction between angle and distance
```
df_wyscout_shots['distance_visible_angle'] = df_wyscout_shots.distance_to_goal * df_wyscout_shots.visible_angle
```
Log distance
```
df_wyscout_shots['log_distance_to_goal'] = np.log(df_wyscout_shots.distance_to_goal)
```
Amend shot type to take into account the play_type (set piece column) made earlier
```
mask_amend = (df_wyscout_shots.shot_type_name != 'Direct Set Piece') & (df_wyscout_shots.play_type.notnull())
df_wyscout_shots.loc[mask_amend, 'shot_type_name'] = df_wyscout_shots.loc[mask_amend, 'play_type']
df_wyscout_shots['shot_type_name'] = df_wyscout_shots.shot_type_name.str.lower().str.replace(' ', '_')
df_wyscout_shots.drop('play_type', axis=1, inplace=True)
```
Add the competition gender (this is men's data)
```
df_wyscout_shots['competition_gender'] = 'male'
```
Add on the goalkeeper
```
df_wyscout_shots['minute'] = np.ceil(df_wyscout_shots.eventSec / 60)
df_wyscout_shots.loc[(df_wyscout_shots.matchPeriod == '1H') & (df_wyscout_shots.minute > 45), 'minute'] = 45
df_wyscout_shots.sort_values('minute', inplace=True)
df_wyscout_shots = pd.merge_asof(df_wyscout_shots, df_wyscout_formation[['player_id', 'minute_in']],
left_on='minute', right_on='minute_in',
allow_exact_matches=True, direction='backward', suffixes=['', '_goalkeeper'])
```
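`pd.merge_asof` matches each shot's `minute` to the most recent goalkeeper `minute_in` at or before it (`direction='backward'`). A toy illustration of that matching with hypothetical data (both frames must already be sorted on their keys, which is why the shots are sorted above):

```python
import pandas as pd

shots = pd.DataFrame({'minute': [3.0, 40.0, 80.0]})
keepers = pd.DataFrame({'player_id': [101, 202], 'minute_in': [0.0, 60.0]})

# each shot gets the keeper whose minute_in is the latest one <= the shot minute
merged = pd.merge_asof(shots, keepers, left_on='minute', right_on='minute_in',
                       direction='backward')
```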
Match Period consistent with StatsBomb
```
df_wyscout_shots.rename({'matchPeriod': 'period'}, axis=1, inplace=True)
df_wyscout_shots['period'] = df_wyscout_shots.period.map({'1H': 1, '2H': 2, 'E1': 3, 'E2': 4})
```
Save dataset
```
df_wyscout_shots.drop(['pass_id', 'minute', 'minute_in'], axis=1, inplace=True)
df_wyscout_shots.reset_index(drop=True, inplace=True)
df_wyscout_shots.to_parquet(os.path.join(WYSCOUT, 'shots.parquet'))
```
Info on dataset
```
df_wyscout_shots.info()
```
```
import rdkit
import h5py
import numpy as np
import sys
sys.path.append('../../..')
from generative_playground.molecules.rdkit_utils.rdkit_utils import NormalizedScorer
from rdkit.Chem.rdmolfiles import MolFromSmiles
from rdkit.Chem.Descriptors import NumAromaticRings
from rdkit.Chem.Draw import MolsToGridImage
!dir "../train/pretrained/paper/"
from collections import OrderedDict
runs = OrderedDict(reversed([('Unconstrained',{'file':'pg_smiles_no_anchor.h5','range': 200}),
('Weak Anchor, SA penalty',{'file':'pg_smiles_anchor_wweak_sa20.h5','range':1600}),
('Weak Anchor, SA and aromatic cycle penalty',{'file':'pg_smiles_anchor_wweak_sa20_cycle.h5','range':3850}),
('Strong Anchor, SA penalty',{'file':'pg_smiles_anchor_weak_sa20.h5','range':5800})]))
runs
def get_smiles_bunches(runs):
bunches = []
for key, value in runs.items():
smiles_file = h5py.File('../train/pretrained/paper/' + runs[key]['file'],'r')
smiles = np.array(smiles_file['smiles'])[runs[key]['range']*40:(runs[key]['range']+100)*40]
bunches.append((key,smiles))
return bunches
# MolsToGridImage([MolFromSmiles(kajino)], molsPerRow=3, subImgSize=(200, 200), useSVG=False)
smiles_bunches = get_smiles_bunches(runs)
kusner1 ='CCCc1ccc(I)cc1C1CCC-c1'
kusner2 ='CC(C)CCCCCc1ccc(Cl)nc1'
kusner3 ='CCCc1ccc(Cl)cc1CCCCOC'
jin = 'c1c(Cl)ccc2c1cc(C(=O)C)c(C(=O)Nc3cc(NC(=O)Nc4cc(c5ccccc5(C))ccc4)ccc3)c2'
kajino='c1ccccc1Nc1cc(C2CC2)c(c2cc(Cl)ccc2(c2ccc(F)cc2))cc1'
smiles_bunches = [('Kusner et al.', [kusner1, kusner2, kusner3]),
('Jin et al.',[jin]),
('Kajino',[kajino])] \
+ smiles_bunches
smiles_bunches
import copy
def get_all_metrics(smiles):
mols = [MolFromSmiles(s) for s in smiles]
scores, norm_scores = NormalizedScorer().get_scores_from_mols(mols)
arom_rings = np.array([NumAromaticRings(m) for m in mols])
metrics = np.concatenate([scores.sum(axis=1)[:,None],
norm_scores.sum(axis=1)[:,None],
scores[:,1][:,None],
norm_scores[:,1][:,None],
arom_rings[:,None]],
axis=1)
return (smiles,metrics)
def get_best(metrics_ext, name, num_best=1, thresh = 0):
smiles, metrics = metrics_ext
labels =['1st', '2nd', '3rd','4th', '5th']
#print(num_best)
metric_rows = []
neg_sa = metrics[:,3] < thresh # we're only interested in positive SA
pos_sa_scores = copy.deepcopy(metrics[:,1])
pos_sa_scores[neg_sa] = -100
for i in range(num_best):
#print(i)
best_ind = np.argmax(pos_sa_scores)
if num_best > 1:
this_name = name + ' ' + labels[i]
else:
this_name = name
metric_rows.append((this_name, metrics[best_ind, :], smiles[best_ind]))
pos_sa_scores[best_ind] = -100
return metric_rows
def generate_metrics_list(smiles_bunches):
metrics_list = []
for name, s in smiles_bunches:
file_name = name.replace(' ','').replace('.','').replace(',','') + '.pickle'
print(file_name)
        import pickle
        try:
            # deliberately skip loading the cached metrics and regenerate them
            raise ValueError("don't want to load now")
            with open(file_name, 'rb') as f:
                metrics_ext = pickle.load(f)
        except Exception:
            print('generating metrics...')
            metrics_ext = get_all_metrics(s)
            with open(file_name, 'wb') as f:
                pickle.dump(metrics_ext, f)
metrics_list += get_best(metrics_ext, name, num_best = 3 if 'cycle' in name or 'Strong' in name else 1)
print(name, metrics_list[-1])
return metrics_list
metrics_list = generate_metrics_list(smiles_bunches)
# import pickle
# with open('metrics_list.pickle','rb') as f:
# metrics_list = pickle.load(f)
best_smiles = [m[2] for m in metrics_list]
legends =[m[0] for m in metrics_list]
best_mols = [MolFromSmiles(s) for s in best_smiles]
# now let's output this nicely for LaTeX
for m_ in metrics_list:
m = list(m_[1])
m[-1] = int(m[-1])
print( m_[0] +' & {:.2f} & {:.2f} & {:.2f} & {:d} \\\\'.format(*m[1:]))
# groups = [[0,1],[2,3,4],[5,6,7],[8,9]]
# for g in groups[:1]:
# MolsToGridImage([best_mols[gg] for gg in g], molsPerRow=3, subImgSize=(200, 200),legends= [legends[gg] for gg in g], useSVG=False)
MolsToGridImage(best_mols[:2], molsPerRow=3, subImgSize=(200, 200), useSVG=False)
# groups = [[0,1],[2,3,4],[5,6,7],[8,9]]
# for g in groups[:1]:
# MolsToGridImage([best_mols[gg] for gg in g], molsPerRow=3, subImgSize=(200, 200),legends= [legends[gg] for gg in g], useSVG=False)
def GridImageToFile(mols, fn):
plot_str = MolsToGridImage(mols, molsPerRow=min(3,len(mols)), subImgSize=(200, 200), useSVG=True)
plot_str = plot_str.replace('svg:', '')
plot_str = plot_str.replace('stroke-width:2px','stroke-width:1px')
#print(plot_str)
plot_str = plot_str.replace('font-size:3px','font-size:6px')
plot_str = plot_str.replace('font-size:4px','font-size:6px')
from IPython.display import SVG, display
display(SVG(data=plot_str))
with open(fn, 'w') as myfile:
myfile.write(plot_str)
for i in range(3):
GridImageToFile([best_mols[2+i]],'strong_' + str(i) + '.svg')
# use InkScape to convert to pdf
# groups = [[0,1],[2,3,4],[5,6,7],[8,9]]
# for g in groups[:1]:
# MolsToGridImage([best_mols[gg] for gg in g], molsPerRow=3, subImgSize=(200, 200),legends= [legends[gg] for gg in g], useSVG=False)
#MolsToGridImage(best_mols[5:8], molsPerRow=3, subImgSize=(200, 200), useSVG=False)
GridImageToFile(best_mols[5:7],'weak.svg')
# groups = [[0,1],[2,3,4],[5,6,7],[8,9]]
# for g in groups[:1]:
# MolsToGridImage([best_mols[gg] for gg in g], molsPerRow=3, subImgSize=(200, 200),legends= [legends[gg] for gg in g], useSVG=False)
MolsToGridImage(best_mols[8:], molsPerRow=2, subImgSize=(200, 200), useSVG=False)
GridImageToFile(best_mols[8:],'large.svg')
```
# Linear-model based prediction
This script fits linear models using LASSO and ridge regression and summarizes their prediction performance. It is written in the "outcome-oriented" style, demonstrating dynamic input, grouped output and step dependencies.
```
[global]
parameter: beta = [3, 1.5, 0, 0, 2, 0, 0, 0]
id = [x+1 for x in range(5)]
ridge_result = [f'data_{x}.ridge.mse.csv' for x in id]
lasso_result = [f'data_{x}.lasso.mse.csv' for x in id]
# Simulate sparse data-sets
[simulation]
depends: R_library("MASS>=7.3")
parameter: N = (40, 200) # training and testing samples
parameter: rstd = 3
input: for_each = 'id'
output: [(f"data_{x}.train.csv", f"data_{x}.test.csv") for x in id], group_by = 2
R: expand = "${ }"
set.seed(${_id})
N = sum(c(${paths(N):,}))
p = length(c(${paths(beta):,}))
X = MASS::mvrnorm(n = N, rep(0, p), 0.5^abs(outer(1:p, 1:p, FUN = "-")))
Y = X %*% c(${paths(beta):,}) + rnorm(N, mean = 0, sd = ${rstd})
Xtrain = X[1:${N[0]},]; Xtest = X[(${N[0]}+1):(${N[0]}+${N[1]}),]
Ytrain = Y[1:${N[0]}]; Ytest = Y[(${N[0]}+1):(${N[0]}+${N[1]})]
write.table(cbind(Ytrain, Xtrain), ${_output[0]:r}, row.names = F, col.names = F, sep = ',')
write.table(cbind(Ytest, Xtest), ${_output[1]:r}, row.names = F, col.names = F, sep = ',')
# Ridge regression model implemented in R
# Build predictor via cross-validation and make prediction
[ridge]
parameter: nfolds = 5
depends: sos_step('simulation'), R_library("glmnet>=2.0")
input: dynamic(paths([(f"data_{x}.train.csv", f"data_{x}.test.csv") for x in id])), group_by = 2
output: [(f"data_{x}.ridge.predicted.csv", f"data_{x}.ridge.coef.csv") for x in id], group_by = 2
R: expand = "${ }"
train = read.csv(${_input[0]:r}, header = F)
test = read.csv(${_input[1]:r}, header = F)
model = glmnet::cv.glmnet(as.matrix(train[,-1]), train[,1], family = "gaussian", alpha = 0, nfolds = ${nfolds}, intercept = F)
betahat = as.vector(coef(model, s = "lambda.min")[-1])
Ypred = predict(model, as.matrix(test[,-1]), s = "lambda.min")
write.table(Ypred, ${_output[0]:r}, row.names = F, col.names = F, sep = ',')
write.table(betahat, ${_output[1]:r}, row.names = F, col.names = F, sep = ',')
# LASSO model implemented in Python
# Build predictor via cross-validation and make prediction
[lasso]
parameter: nfolds = 5
depends: sos_step('simulation'), Py_Module("sklearn>=0.18.1"), Py_Module("numpy>=1.6.1"), Py_Module("scipy>=0.9")
input: dynamic(paths([(f"data_{x}.train.csv", f"data_{x}.test.csv") for x in id])), group_by = 2
output: [(f"data_{x}.lasso.predicted.csv", f"data_{x}.lasso.coef.csv") for x in id], group_by = 2
python: expand = "${ }"
import numpy as np
from sklearn.linear_model import LassoCV
train = np.genfromtxt(${_input[0]:r}, delimiter = ",")
test = np.genfromtxt(${_input[1]:r}, delimiter = ",")
model = LassoCV(cv = ${nfolds}, fit_intercept = False).fit(train[:,1:], train[:,0])
Ypred = model.predict(test[:,1:])
np.savetxt(${_output[0]:r}, Ypred)
np.savetxt(${_output[1]:r}, model.coef_)
# Evaluate predictors by calculating mean squared error
# of prediction vs truth (first line of output)
# and of betahat vs truth (2nd line of output)
[evaluation_core]
parameter: input_files = list
parameter: output_files = list
input: dynamic(input_files), group_by = 3
output: output_files, group_by = 1
R: expand = "${ }", stderr = False
b = c(${paths(beta):,})
Ytruth = as.matrix(read.csv(${_input[0]:r}, header = F)[,-1]) %*% b
Ypred = scan(${_input[1]:r})
prediction_mse = mean((Ytruth - Ypred)^2)
betahat = scan(${_input[2]:r})
estimation_mse = mean((betahat - b) ^ 2)
cat(paste(prediction_mse, estimation_mse), file = ${_output:r})
[evaluation_lasso]
depends: sos_step('simulation'), sos_step('lasso')
input_files = paths([(f"data_{x}.test.csv", f"data_{x}.lasso.predicted.csv", f"data_{x}.lasso.coef.csv") for x in id])
output: lasso_result
sos_run("evaluation_core", input_files = input_files, output_files = lasso_result)
[evaluation_ridge]
depends: sos_step('simulation'), sos_step('ridge')
input_files = paths([(f"data_{x}.test.csv", f"data_{x}.ridge.predicted.csv", f"data_{x}.ridge.coef.csv") for x in id])
output: ridge_result
sos_run("evaluation_core", input_files = input_files, output_files = ridge_result)
[default]
depends: sos_step('evaluation_lasso'), sos_step('evaluation_ridge')
output: lasso_result, ridge_result
%sosrun
```
# Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN & Pix2Pix in PyTorch, Jun-Yan Zhu](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)
* [A list of generative models](https://github.com/wiseodd/generative-models)
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts if the data it's received is real or fake.
> * The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real training data.
* The discriminator is a classifier that is trained to figure out which data is real and which is fake.
What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
<img src='assets/gan_pipeline.png' width=70% />
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images. This is often called a **latent vector** and that vector space is called **latent space**. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator.
If you're interested in generating only new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
```
%matplotlib inline
import numpy as np
import torch
import matplotlib.pyplot as plt
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# get the training datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
# prepare data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
```
### Visualize the data
```
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (3,3))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
The discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute:
> All hidden layers will have a [Leaky ReLu](https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU) activation function applied to their outputs.
<img src='assets/gan_network.png' width=70% />
#### Leaky ReLu
We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
<img src='assets/leaky_relu.png' width=40% />
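As a sketch of the function itself (plain NumPy, just to show the shape; the network code below uses `F.leaky_relu` with `negative_slope=0.2`):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    # identity for positive inputs, a small linear slope for negative ones
    return np.where(x > 0, x, negative_slope * x)
```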
#### Sigmoid Output
We'll also take the approach of using a more numerically stable loss function on the outputs. Recall that we want the discriminator to output a value 0-1 indicating whether an image is _real or fake_.
> We will ultimately use [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), which combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
So, our final output layer should not have any activation function applied to it.
```
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Discriminator, self).__init__()
# define hidden linear layers
self.fc1 = nn.Linear(input_size, hidden_dim*4)
self.fc2 = nn.Linear(hidden_dim*4, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim)
# final fully-connected layer
self.fc4 = nn.Linear(hidden_dim, output_size)
# dropout layer
self.dropout = nn.Dropout(0.3)
def forward(self, x):
# flatten image
x = x.view(-1, 28*28)
# all hidden layers
x = F.leaky_relu(self.fc1(x), 0.2) # (input, negative_slope=0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
# final layer
out = self.fc4(x)
return out
```
## Generator
The generator network will be almost exactly the same as the discriminator network, except that we're applying a [tanh activation function](https://pytorch.org/docs/stable/nn.html#tanh) to our output layer.
#### tanh Output
The generator has been found to perform best with a $tanh$ output, which scales the output to be between -1 and 1, instead of 0 and 1.
<img src='assets/tanh_fn.png' width=40% />
Recall that we also want these outputs to be comparable to the *real* input pixel values, which are read in as normalized values between 0 and 1.
> So, we'll also have to **scale our real input images to have pixel values between -1 and 1** when we train the discriminator.
I'll do this in the training loop, later on.
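The rescaling itself is a one-liner; a sketch of what it will look like when we get to the training loop:

```python
import numpy as np

images = np.array([0.0, 0.5, 1.0])   # pixel values as loaded, in [0, 1]
scaled = images * 2 - 1              # now in [-1, 1], matching the tanh output range
```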
```
class Generator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Generator, self).__init__()
# define hidden linear layers
self.fc1 = nn.Linear(input_size, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim*4)
# final fully-connected layer
self.fc4 = nn.Linear(hidden_dim*4, output_size)
# dropout layer
self.dropout = nn.Dropout(0.3)
def forward(self, x):
# all hidden layers
x = F.leaky_relu(self.fc1(x), 0.2) # (input, negative_slope=0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
# final layer with tanh applied
out = torch.tanh(self.fc4(x))
return out
```
## Model hyperparameters
```
# Discriminator hyperparams
# Size of input image to discriminator (28*28)
input_size = 784
# Size of discriminator output (real or fake)
d_output_size = 1
# Size of last hidden layer in the discriminator
d_hidden_size = 32
# Generator hyperparams
# Size of latent vector to give to generator
z_size = 100
# Size of generator output (generated image)
g_output_size = 784
# Size of first hidden layer in the generator
g_hidden_size = 32
```
## Build complete network
Now we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
# instantiate discriminator and generator
D = Discriminator(input_size, d_hidden_size, d_output_size)
G = Generator(z_size, g_hidden_size, g_output_size)
# check that they are as you expect
print(D)
print()
print(G)
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
<img src='assets/gan_pipeline.png' width=70% />
The losses will be binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
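As a quick sanity check (not part of the original notebook), `BCEWithLogitsLoss` really is a sigmoid followed by binary cross entropy:

```python
import torch
import torch.nn as nn

logits = torch.tensor([0.5, -1.2, 2.0])
targets = torch.tensor([1.0, 0.0, 1.0])

# BCEWithLogitsLoss applies the sigmoid internally...
with_logits = nn.BCEWithLogitsLoss()(logits, targets)
# ...so it matches applying sigmoid ourselves and using plain BCELoss
manual = nn.BCELoss()(torch.sigmoid(logits), targets)
assert torch.allclose(with_logits, manual, atol=1e-6)
```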
For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are **reduced a bit from 1.0 to 0.9**. For this, we'll use the parameter `smooth`; if True, then we should smooth our labels. In PyTorch, this looks like `labels = torch.ones(size) * 0.9`
The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`.
### Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
```
# Calculate losses
def real_loss(D_out, smooth=False):
    batch_size = D_out.size(0)
    # label smoothing
    if smooth:
        # smooth, real labels = 0.9
        labels = torch.ones(batch_size)*0.9
    else:
        labels = torch.ones(batch_size)  # real labels = 1
    # numerically stable loss
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss

def fake_loss(D_out):
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss
```
## Optimizers
We want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.
```
import torch.optim as optim
# Optimizers
lr = 0.002
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr)
g_optimizer = optim.Adam(G.parameters(), lr)
```
---
## Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.
### Discriminator training
1. Compute the discriminator loss on real, training images
2. Generate fake images
3. Compute the discriminator loss on fake, generated images
4. Add up real and fake loss
5. Perform backpropagation + an optimization step to update the discriminator's weights
### Generator training
1. Generate fake images
2. Compute the discriminator loss on fake images, using **flipped** labels!
3. Perform backpropagation + an optimization step to update the generator's weights
#### Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
```
import pickle as pkl
# training hyperparams
num_epochs = 100
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 400
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
D.train()
G.train()
for epoch in range(num_epochs):

    for batch_i, (real_images, _) in enumerate(train_loader):

        batch_size = real_images.size(0)

        ## Important rescaling step ##
        real_images = real_images*2 - 1  # rescale input images from [0,1) to [-1, 1)

        # ============================================
        #         TRAIN THE DISCRIMINATOR
        # ============================================
        d_optimizer.zero_grad()

        # 1. Train with real images
        # Compute the discriminator losses on real images
        # smooth the real labels
        D_real = D(real_images)
        d_real_loss = real_loss(D_real, smooth=True)

        # 2. Train with fake images
        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        D_fake = D(fake_images)
        d_fake_loss = fake_loss(D_fake)

        # add up loss and perform backprop
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        d_optimizer.step()

        # =========================================
        #          TRAIN THE GENERATOR
        # =========================================
        g_optimizer.zero_grad()

        # 1. Train with fake images and flipped labels
        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        # using flipped labels!
        D_fake = D(fake_images)
        g_loss = real_loss(D_fake)  # use real loss to flip labels

        # perform backprop
        g_loss.backward()
        g_optimizer.step()

        # Print some loss stats
        if batch_i % print_every == 0:
            # print discriminator and generator loss
            print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                    epoch+1, num_epochs, d_loss.item(), g_loss.item()))

    ## AFTER EACH EPOCH ##
    # append discriminator loss and generator loss
    losses.append((d_loss.item(), g_loss.item()))

    # generate and save sample, fake images
    G.eval()  # eval mode for generating samples
    samples_z = G(fixed_z)
    samples.append(samples_z)
    G.train()  # back to train mode

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f)
```
## Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. First we'll look at the images we saved during training.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach()
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')

# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```
# -1 indicates final epoch's samples (the last in the list)
view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs.
```
rows = 10 # split epochs into 10, so 100/10 = every 10 epochs
cols = 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
    for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
        img = img.detach()
        ax.imshow(img.reshape((28,28)), cmap='Greys_r')
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
## Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. **We just need to pass in a new latent vector $z$ and we'll get new samples**!
```
# randomly generated, new latent vectors
sample_size=16
rand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
rand_z = torch.from_numpy(rand_z).float()
G.eval() # eval mode
# generated samples
rand_images = G(rand_z)
# 0 indicates the first set of samples in the passed in list
# and we only have one batch of samples, here
view_samples(0, [rand_images])
```
```
# General
import numpy
import os
# Processing
import pandas as pd
# Drawing
import cartopy
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io import shapereader
from matplotlib.cm import get_cmap
import matplotlib.cm as cm
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
# Data from: https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/
raw_data = pd.read_excel(os.path.join(os.getcwd(), 'datasets', 'coronavirus-enforcement-information-to-9-june-2021.xlsx'), sheet_name=1)
raw_data.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13', 'Unnamed: 14', 'Unnamed: 15', 'Unnamed: 16', 'Unnamed: 17'], axis=1, inplace=True)
# Taking account of NaNs
# Explanation:
# The xlsx-to-pandas-dataframe conversion seems to have taken "NA" (division "N", Area Command
# "Inverness") and interpreted that "NA" as NaN, which is very annoying. So the line below
# overwrites the SD letter of area commands that are Inverness and turns them back into "NA"
raw_data.loc[raw_data["Area Commands"] == "Inverness", "SD Letter"] = raw_data["SD Letter"].fillna("NA")
if raw_data.isnull().sum().sum() != 0:
    raise ValueError("We have NaNs in our dataframe")
division_grouped = raw_data.groupby('Division Letter', as_index=False
).agg(
{"Asked / Informed": "sum",
"Warned / Instructed": "sum",
"Removed from Place or Premises": "sum",
"FPN": "sum",
"Arrested": "sum",
})
# Data from: https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019
raw_pop_data = pd.read_csv(os.path.join(os.getcwd(), 'datasets', 'Population', 'mid-year-pop-est-19-data_Table 2.csv'))
# Keep only the specific columns
raw_pop_data = raw_pop_data[['Unnamed: 1','Unnamed: 2']]
# Rename them inplace
raw_pop_data.rename(columns={'Unnamed: 1': 'Council areas', 'Unnamed: 2': 'Population'}, inplace=True)
# Drop upper rows that are bad
raw_pop_data = raw_pop_data.drop(raw_pop_data.index[[0,1,2,3,4]]).reset_index(drop=True)
# Drop from certain row, minus 1 for the row above position
raw_pop_data = raw_pop_data[:(raw_pop_data[raw_pop_data['Council areas'] == 'NHS Board areas'].index[0] - 1)]
# Strip out all the commas in Objects of the Population column
raw_pop_data["Population"].replace(',','', regex=True, inplace=True)
# Convert string to int
raw_pop_data["Population"] = raw_pop_data["Population"].astype(str).astype(int)
# We group the council areas into our police divisions
# Note: 'Council areas' is kept as a regular column here — set_index without inplace=True
# returns a new frame (a no-op if discarded), and the apply below reads the column by position
# Create our division dictionary
div_dict = {'A': ["Moray", "Aberdeenshire", "Aberdeen City"],
'C': ["Stirling", "Clackmannanshire", "Falkirk"],
'D': ["Angus", "Dundee City", "Perth and Kinross"],
'E': ["City of Edinburgh"],
'G': ["East Renfrewshire", "Glasgow City", "East Dunbartonshire"],
'J': ["Scottish Borders", "East Lothian", "Midlothian", "West Lothian"],
'K': ["Inverclyde", "Renfrewshire"],
'L': ["Argyll and Bute", "West Dunbartonshire"],
'N': ["Na h-Eileanan Siar", "Orkney Islands", "Highland", "Shetland Islands"],
'P': ["Fife"],
'Q': ["South Lanarkshire", "North Lanarkshire"],
'U': ["South Ayrshire", "East Ayrshire", "North Ayrshire"],
'V': ["Dumfries and Galloway"]
}
div_pop = {}

def divisionPopulation(row):
    incomingRow = row.tolist()
    for div, councils in div_dict.items():
        for council in councils:
            if council == incomingRow[0]:
                if div in div_pop:
                    div_pop[div] += incomingRow[1]
                else:
                    div_pop[div] = incomingRow[1]
raw_pop_data.apply(lambda row: divisionPopulation(row), axis=1)
div_pop_data = pd.DataFrame(div_pop.items(), columns=['Division Letter', 'Population'])
division_data = pd.merge(division_grouped, div_pop_data, on='Division Letter')
division_data['Asked / Informed per 100k'] = division_data.apply(lambda row: row['Asked / Informed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
division_data['Warned / Instructed per 100k'] = division_data.apply(lambda row: row['Warned / Instructed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
division_data['Removed from Place or Premises per 100k'] = division_data.apply(lambda row: row['Removed from Place or Premises']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
division_data['FPN per 100k'] = division_data.apply(lambda row: row['FPN']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
division_data['Arrested per 100k'] = division_data.apply(lambda row: row['Arrested']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
# code for a scale bar and north arrow
from math import floor
from matplotlib import patheffects
import matplotlib
if os.name == 'nt':
    matplotlib.rc('font', family='Arial')
else:  # might need tweaking, must support black triangle for N arrow
    matplotlib.rc('font', family='DejaVu Sans')
def utm_from_lon(lon):
    """
    utm_from_lon - UTM zone for a longitude

    Not right for some polar regions (Norway, Svalbard, Antarctica)

    :param float lon: longitude
    :return: UTM zone number
    :rtype: int
    """
    return floor((lon + 180) / 6) + 1
def scale_bar(ax, proj, length, location=(0.5, 0.05), linewidth=3,
              units='km', m_per_unit=1000):
    """
    http://stackoverflow.com/a/35705477/1072212
    ax is the axes to draw the scalebar on.
    proj is the projection the axes are in
    location is center of the scalebar in axis coordinates ie. 0.5 is the middle of the plot
    length is the length of the scalebar in km.
    linewidth is the thickness of the scalebar.
    units is the name of the unit
    m_per_unit is the number of meters in a unit
    """
    # find lat/lon center to find best UTM zone
    x0, x1, y0, y1 = ax.get_extent(proj.as_geodetic())
    # Projection in metres
    utm = ccrs.UTM(utm_from_lon((x0+x1)/2))
    # Get the extent of the plotted area in coordinates in metres
    x0, x1, y0, y1 = ax.get_extent(utm)
    # Turn the specified scalebar location into coordinates in metres
    sbcx, sbcy = x0 + (x1 - x0) * location[0], y0 + (y1 - y0) * location[1]
    # Generate the x coordinate for the ends of the scalebar
    bar_xs = [sbcx - length * m_per_unit/2, sbcx + length * m_per_unit/2]
    # buffer for scalebar
    buffer = [patheffects.withStroke(linewidth=5, foreground="w")]
    # Plot the scalebar with buffer
    ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
            linewidth=linewidth, path_effects=buffer)
    # buffer for text
    buffer = [patheffects.withStroke(linewidth=3, foreground="w")]
    # Plot the scalebar label
    t0 = ax.text(sbcx, sbcy, str(length) + ' ' + units, transform=utm,
                 horizontalalignment='center', verticalalignment='bottom',
                 path_effects=buffer, zorder=2)
    left = x0 + (x1 - x0) * 0.05
    # Plot the N arrow
    t1 = ax.text(left, sbcy, u'\u25B2\nN', transform=utm,
                 horizontalalignment='center', verticalalignment='bottom',
                 path_effects=buffer, zorder=2)
    # Plot the scalebar without buffer, in case covered by text buffer
    ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
            linewidth=linewidth, zorder=3)
# Creating new figure and axes instances
def enforcementOutcome(outcome_type):
    fig = plt.figure(figsize=(6,8), dpi=100)
    projectionPARAM = ccrs.TransverseMercator(central_longitude=-2.0, central_latitude=49.0, false_easting=400000.0, false_northing=-100000.0, scale_factor=0.9996012717, approx=False)
    ax = fig.add_subplot(1, 1, 1, projection=projectionPARAM)
    ax.set_extent([-8, 0, 54.5, 61])  # Ideal coordinate map range for plotting Scotland

    police_dict = (division_data[['Division Letter', outcome_type]].set_index('Division Letter').T.to_dict('records'))[0]

    # Downloaded from: https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231
    area_file = os.path.join(os.getcwd(), 'datasets', 'ScottishPoliceDivisions', 'SG_ScottishPoliceDivisions_2019.shp')
    police_divisions = shapereader.Reader(area_file)

    norm = colors.Normalize(vmin=0., vmax=max(police_dict.values()))
    cmap = get_cmap('PuBu')

    for record in police_divisions.records():
        code = record.attributes['AdminCode']
        police_entry = police_dict.get(code, -1)
        if police_entry == -1:
            police_color = "Silver"
        else:
            police_color = cmap(police_entry/max(police_dict.values()))
        ax.add_geometries(
            [record.geometry],
            # facecolor=numpy.random.rand(3,),
            facecolor=police_color,
            linewidth=0,
            crs=projectionPARAM,
        )

    # following https://matplotlib.org/2.0.2/mpl_toolkits/axes_grid/users/overview.html#colorbar-whose-height-or-width-in-sync-with-the-master-axes
    # we need to set axes_class=plt.Axes, else it attempts to create
    # a GeoAxes as colorbar
    divider = make_axes_locatable(ax)
    ax_cb = divider.new_horizontal(size="5%", pad=0.1, axes_class=plt.Axes)
    fig.add_axes(ax_cb)
    sm = plt.cm.ScalarMappable(norm=norm, cmap=cmap)
    cb = plt.colorbar(sm, cax=ax_cb)
    cb.set_label(outcome_type + " of Population")

    scale_bar(ax, projectionPARAM, 100, location=(0.85, 0.05))  # 100 km scale bar
    plt.plot()

###
# Copyright Scottish Government, contains Ordnance Survey data © Crown copyright and database right (2021)
###
enforcementOutcome('Arrested per 100k')
```
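A side note on the `"NA"`-to-`NaN` pitfall flagged in the comments above: pandas readers accept a `keep_default_na` flag that stops `"NA"` being parsed as missing in the first place. A hedged sketch with an in-memory CSV (the same flag exists on `read_excel`); note it also stops genuinely blank cells becoming `NaN`:

```python
import pandas as pd
from io import StringIO

csv_text = "SD Letter,Area Commands\nNA,Inverness\nAB,Aberdeen\n"

default = pd.read_csv(StringIO(csv_text))                         # "NA" becomes NaN
literal = pd.read_csv(StringIO(csv_text), keep_default_na=False)  # "NA" stays text

assert default["SD Letter"].isna().sum() == 1
assert literal.loc[0, "SD Letter"] == "NA"
```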
# Simple RNN
In this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!
<img src='assets/time_prediction.png' width=40% />
> * First, we'll create our data
* Then, define an RNN in PyTorch
* Finally, we'll train our network and see how it performs
### Import resources and create data
```
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
fig = plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 6  # 20
# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data.shape
data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension
data.shape
x = data[:-1] # all but the last piece of data
y = data[1:] # all but the first
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x') # x
plt.plot(time_steps[1:], y, 'b.', label='target, y') # y
plt.legend(loc='best')
plt.show()
```
---
## Define the RNN
Next, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:
* **input_size** - the size of the input
* **hidden_dim** - the number of features in the RNN output and in the hidden state
* **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN
* **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)
Take a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.html#rnn) to read more about recurrent layers.
```
class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()

        self.hidden_dim = hidden_dim

        # define an RNN with specified parameters
        # batch_first means that the first dim of the input and output will be the batch_size
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        # last, fully-connected layer
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x, hidden):
        # x (batch_size, seq_length, input_size)
        # hidden (n_layers, batch_size, hidden_dim)
        # r_out (batch_size, time_step, hidden_size)
        batch_size = x.size(0)

        # get RNN outputs
        r_out, hidden = self.rnn(x, hidden)
        # shape output to be (batch_size*seq_length, hidden_dim)
        r_out = r_out.view(-1, self.hidden_dim)

        # get final output
        output = self.fc(r_out)

        return output, hidden
```
### Check the input and output dimensions
As a check that your model is working as expected, test out how it responds to input data.
```
# test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
data.shape
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
test_rnn
```
---
## Training the RNN
Next, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
```
# decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
```
### Loss and Optimization
This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?
>* The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error.
* It's typical to use an Adam optimizer for recurrent models.
```
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
```
### Defining the training function
This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often.
#### Hidden State
Pay close attention to the hidden state, here:
* Before looping over a batch of training data, the hidden state is initialized
* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps
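The detach step in the loop below can be illustrated in isolation (a small sketch; `.detach()` is the modern spelling of the `.data` trick used in the training function):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(1, 8, 1, batch_first=True)
x = torch.randn(1, 5, 1)

_, hidden = rnn(x, None)
hidden = hidden.detach()  # keep the values, drop the autograd history
assert not hidden.requires_grad

# the detached state can seed the next step without
# backpropagating through everything that came before
_, hidden2 = rnn(x, hidden)
assert hidden2.shape == hidden.shape
```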
```
seq_length = 10
# train the RNN
def train(rnn, n_steps, print_every):

    # initialize the hidden state
    hidden = None

    for batch_i, step in enumerate(range(n_steps)):
        # defining the training data
        time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
        data = np.sin(time_steps)
        data.resize((seq_length + 1, 1))  # input_size=1
        # print('data is: ', data, 'step is:', step)

        x = data[:-1]
        y = data[1:]
        # print(x, y)

        # convert data into Tensors
        x_tensor = torch.Tensor(x).unsqueeze(0)  # unsqueeze gives a 1, batch_size dimension
        y_tensor = torch.Tensor(y)

        # outputs from the rnn
        prediction, hidden = rnn(x_tensor, hidden)
        # print('hidden is:', hidden.shape, hidden)
        # print(prediction)

        ## Representing Memory ##
        # make a new variable for hidden and detach the hidden state from its history
        # this way, we don't backpropagate through the entire history
        hidden = hidden.data

        # calculate the loss
        loss = criterion(prediction, y_tensor)
        # zero gradients
        optimizer.zero_grad()
        # perform backprop and update weights
        loss.backward()
        optimizer.step()

        # display loss and predictions
        if batch_i % print_every == 0:
            print('Loss: ', loss.item())
            plt.plot(time_steps[1:], x, 'r.')  # input
            plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.')  # predictions
            plt.show()

    return rnn

# train the rnn and monitor results
n_steps = 750
print_every = 150

trained_rnn = train(rnn, n_steps, print_every)
```
### Time-Series Prediction
Time-series prediction can be applied to many tasks. Think about weather forecasting or predicting the ebb and flow of stock market prices. You can even try to generate predictions much further in the future than just one time step!
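Following that idea, here is a hedged sketch of multi-step (autoregressive) prediction: each prediction is fed back in as the next input. A fresh, untrained `nn.RNN` stands in for the trained model above, so the values themselves are meaningless — only the feedback loop matters:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(1, 32, 1, batch_first=True)  # untrained stand-in model
fc = nn.Linear(32, 1)

def predict_next(x, hidden):
    r_out, hidden = rnn(x, hidden)
    return fc(r_out[:, -1]), hidden  # prediction from the last time step

# warm up the hidden state on a seed sequence...
seed = torch.sin(torch.linspace(0, 3.14, 10)).view(1, 10, 1)
out, hidden = predict_next(seed, None)

# ...then roll forward, feeding each prediction back in
preds = []
x = out.view(1, 1, 1)
for _ in range(5):
    out, hidden = predict_next(x, hidden)
    preds.append(out.item())
    x = out.view(1, 1, 1)
assert len(preds) == 5
```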
```
%load_ext autoreload
%autoreload 2
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import sys
sys.path.append("../src")
import numpy as np
from PIL import Image
from skimage import io
import torch
from torchvision import transforms
import torch.nn.functional as F
from torch.utils.data import DataLoader
from dataset.birds_dataset import BirdsDataset
from models.discriminator import Discriminator
from models.text_encoder import TextEncoder
device = torch.device('cuda') if torch.cuda.is_available() else torch.device("cpu")
device
```
### Fine Tuning
### Implementing the tips from the How To Train a GAN video:
https://www.youtube.com/watch?v=myGAju4L7O8&t=482s
```
dataset_path = "../test/Example_Dataset/"
base_image_size = 64
def compose_image_transforms(base_image_size):
    """Returns composed image transforms for using on PIL images"""
    resize_factor_for_cropping = 76 / 64  # TODO Understand why they hardcoded this value
    new_size = tuple(2*[int(base_image_size * resize_factor_for_cropping)])
    image_transforms = transforms.Compose([transforms.Resize(new_size),
                                           transforms.RandomCrop(base_image_size),
                                           transforms.RandomHorizontalFlip()
                                           ])
    return image_transforms
```
#### 1 - Normalized Inputs
TODO: Add to unittests
```
train_dataset = BirdsDataset(dataset_path, split='train', image_transform=compose_image_transforms(base_image_size),
                             base_image_size=base_image_size, number_images=3,
                             text_transform=None, max_caption_length=18)
train_dataset.draw_random_sample().images[0].min()
train_dataset.draw_random_sample().images[0].max()
```
#### 5 - Avoiding sparse gradients by switching from Upsample to ConvTranspose
```
ct = torch.nn.ConvTranspose2d(3, 3, 3, stride=2, padding=1, output_padding=1)
us = torch.nn.Upsample(scale_factor=2)
img = train_dataset[0].images[0].unsqueeze(0)
ct(img).shape
us(img).shape
```
#### 6 - Label Smoothing
```
batch_size = 4
real_labels = (torch.ones(batch_size) * 1)
fake_labels = (torch.ones(batch_size) * 0)
def prepare_label_smoothing(real_labels, fake_labels, smooth=0.1):
    batch_size = real_labels.shape[0]
    smoothed_real = real_labels - smooth * torch.rand(batch_size)
    smoothed_fake = fake_labels + smooth * torch.rand(batch_size)
    return smoothed_real, smoothed_fake
prepare_label_smoothing(real_labels, fake_labels)
```
#### TODO Wrong labels
```
torch.any(torch.isnan(torch.tensor(np.inf)))
smooth = 0.1
real_labels = torch.ones(100)
fake_labels = torch.zeros(100)
(real_labels - smooth * torch.rand_like(real_labels)).min()
(fake_labels + smooth * torch.rand_like(real_labels)).max()
(torch.randn(10) * 0.02).max()
(torch.randn(10) * 0.02).min()
m1 = torch.randn(2, 3, 64, 64)
m2 = torch.randn(2, 3, 64, 64)
mean1 = torch.mean(m1, dim=0)
mean2 = torch.mean(m2, dim=0)
torch.norm(mean1 - mean2, p=2)
```
### Do Not Delete: Example of masking difference:
I think they implemented it wrongly, since the data from each sample in the batch will contaminate the info in all other samples in the batch.
i.e. the seqlen of a single sample affects all other samples.
This shouldn't happen.
Thus changed.
```
B = 2
N = 16
SEQ_LEN = 4
S = torch.randn(B, N, SEQ_LEN)
S.shape
S
mask = torch.Tensor([[0, 0, 0, 1],
[0, 0, 1, 1]])
mask = mask.type(torch.bool)  # bool mask; uint8 masks are deprecated in recent PyTorch
assert mask.shape == torch.Size([B, SEQ_LEN])
mask
```
***The important part!!!!***
```
new_mask = mask.unsqueeze(1)
new_S = S.data.masked_fill_(new_mask.data, -float('inf'))
S
new_S
```
Copy the above code
```
new_S.shape
new_S
final_S = torch.exp(new_S) / torch.sum(torch.exp(new_S), dim=2, keepdim=True)
final_S
values = torch.randn(B, 128, SEQ_LEN)
final_S.transpose(1,2)
res = torch.bmm(values, final_S.transpose(1,2))
res.shape
```
##### What in their code was supposed to happen?
```
S = torch.randn(B, N, SEQ_LEN)
s_target = S.view(B * N, SEQ_LEN)
mask_target = mask.repeat(N, 1)
mask_target.shape
s_target.shape
s_target = s_target.data.masked_fill_(mask_target.data, -float('inf'))
s_target
final = s_target.view(B, N, SEQ_LEN)
final
```
# Simulating Language 9, When will optimal signalling evolve? (lecture)
*This is a first draft of lecture notes for the Simulating Language course. It probably contains lots of typos!*
In the last lecture, we talked in general about how evolution by natural selection might be the explanation for how species end up with successful communication systems - i.e., ones that lead to a communicative accuracy score of 1. In the labs we extended our simple matrix-based model of sending and receiving first to create a population of agents, and second to evolve that population by selecting agents who were more successful at communication to be parents of new agents (with similar signalling systems as their own, implementing biological inheritance). Building the model made it clear that there are some details about exactly *how* you implement biological evolution that matter for whether communication evolves. Specifically, it's important to consider exactly how you assess fitness - who the agents talk to, and who benefits from successful communication.
## Optimal communication
In this lecture we're going to mainly look at a short, but very elegant, paper by Mike Oliphant [(Oliphant 1996)](https://www.sciencedirect.com/science/article/pii/0303264795015434)
that used a similar model to the one we're developing here to look at the conditions under which optimal communication evolves. But what makes a communication system optimal exactly? We've already talked about the relative importance of avoiding homonymy versus avoiding synonymy, but Oliphant uses a slightly different terminology. He talks about what he calls "Saussurean" signalling as the ideal communication system:
*"What is important is that each signal 'means' the same thing to the individual sending it and the individual receiving it. It must be possible to map some concept onto a symbol and then map back from the symbol to get the original concept." [(Oliphant 1996)](https://www.sciencedirect.com/science/article/pii/0303264795015434)*
He calls such systems "Saussurean" (inspired by the classic diagram from [Saussure (1915)](http://home.wlu.edu/~levys/courses/anth252f2006/saussure.pdf), in which there is a bidirectional relationship between signified and signifier).

Here's an example of a production and reception matrix that is Saussurean in this sense:
Production (rows are meanings, columns are signals):

|. |$s_1$|$s_2$|$s_3$|
|-----|-----|-----|-----|
|$m_1$|1 |0 |0 |
|$m_2$|0 |1 |0 |
|$m_3$|0 |0 |1 |

Reception (rows are signals, columns are meanings):

|. |$m_1$|$m_2$|$m_3$|
|-----|-----|-----|-----|
|$s_1$|1 |0 |0 |
|$s_2$|0 |1 |0 |
|$s_3$|0 |0 |1 |

So, for Oliphant, optimal communication is Saussurean.
### Optimal does not mean inevitable!
The most important lesson we learn from Oliphant's paper, and from our own explorations in the lab, is that **natural selection does not necessarily create optimal solutions!** In other words, Saussurean signalling is not the inevitable result of evolution, despite the fact that a population that is signalling using a bidirectional mapping between meanings and signals *will* be communicating optimally. Surprisingly, this is even true if we use a fitness scoring system that assigns highest fitness to individuals that communicate well! Oliphant aims to show that optimal signalling can only emerge given very specific conditions - results we also found in our own simulations in the lab.
We'll look at each of Oliphant's four simulations in turn, and then review our own findings from the lab.
## Simulation 1
Oliphant's first simulation is a simplified variant of the model we've been developing in the lab, using two signals and two meanings and a completely deterministic mapping between the two of them. (In other words, the sender will always produce a single signal for a particular meaning, and the receiver will always understand a single meaning for a particular signal.) This is different from our own model because the matrix representation allows for any number of meanings and signals, and also allows for indeterminacy even with the winner-take-all algorithm because multiple cells on a row in a matrix can share the same maximum score.
Other than this, the way the simulation works is similar to ours. Crucially, there is heritable variation in fitness, because: offspring tend to have similar signalling systems to their parents, variability can be introduced by mutations, and the probability of being a parent depends on the agents' signalling systems. This means that we should expect to inevitably see evolution by natural selection.
In his first simulation, Oliphant scores each agent on its success at being a sender *and* its success at being a receiver, and then uses these scores to determine each agent's fitness. In these circumstances he finds that optimal signalling does indeed evolve in the population. Here's one of the graphs from his paper:

Note that this isn't a graph of fitness, but rather of the frequency of agents with one of two possible Saussurean communication systems in his model, namely the one that maps meaning 1 to signal 2 and meaning 2 to signal 1 (bidirectionally).
So, from this result he concludes that one way to get optimal (Saussurean) signalling to evolve is by having selection based on **mutual benefit** to both sender and receiver.
## Simulation 2
This all seems fine, but we need to ask the question: is mutual benefit realistic from an ecological point of view? We started this section of the course with the example of the vervet monkey alarm calling system. Who benefits if an alarm call is successful? It seems clear that the receiver benefits (they run up into a tree and avoid a leopard!). But the benefits to the sender seem much more opaque. That monkey already knew there was a leopard, and knew to go up the tree - assuming they weren't up the tree already. If anything, they seem likely to lose out by producing a loud alarm and attracting the attention of the leopard.
To model this kind of situation, Oliphant runs the simulation again but with only receivers benefitting from successful signalling. In other words, the fitness of the agents is only calculated based on whether they correctly understood any signals they received.
The results are dramatically different. The population does not converge on optimal signalling. When he examined exactly what the agents were doing, he found that the *reception behaviour* looks optimal. But it was unstable, constantly flipping between the two possible optimal mappings from meanings to signals (i.e. where each signal maps to a different meaning). On the other hand, the *transmission behaviour* seems to wander about at random. It is these random fluctuations in the transmission behaviour of the population that drive the unstable switching between different reception systems. So, if one particular signal is sent more often by the population as a whole for one of the meanings, then the population rapidly switches to have the corresponding reception system in place. However, because the transmission behaviour of the population is drifting about at random, then agents can only do slightly better than chance as a whole in reception.

So, the conclusion from this second simulation is that optimal signalling cannot evolve if only receivers benefit.
This is, on the face of it, a surprising result. Even if you only score agents on their success at receiving signals, they would be better off (i.e. fitter) if they did communicate using a Saussurean system. This is then a clear case where natural selection does not lead to an increase in fitness.
We actually got a hint this might happen even before we built the evolutionary simulation when we were experimenting with different populations of agents with different signalling systems. Consider a population of optimally communicating agents, where each agent sends signals with no homonymy and each agent receives those signals perfectly. Now imagine introducing a single agent whose reception behaviour is the same as the rest of the population but whose production behaviour is the opposite. If the fitness of these agents is dependent on reception only, then this introduced "mutant" agent will score just as highly as if they had perfect optimal production behaviour. Really, the only agents that are affected by that agent's poor production are the ones in the rest of the population. In this scenario, mutants who benefit from others' helpful productions but who themselves don't produce helpfully will invade the population.
Of course, this ultimately means that the fitness of everyone - including the unhelpful producers - suffers. In the end, it is impossible for natural selection to maintain optimal production behaviours if agents are chosen on their reception success even if that reception success would be higher overall if everyone produced optimally. This is a demonstration of the limits of natural selection as an optimising force. It won't necessarily build the best of all possible worlds. It matters who benefits on a local level when the selection is taking place.
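We can see the logic of this invasion in a minimal sketch (NumPy; the matrices and scoring function are our own invention, not Oliphant's code). If fitness counts reception only, an agent's score depends on the *population's* production, not its own, so the mutant with reversed production scores exactly as well as everyone else:

```python
import numpy as np

def reception_score(agent_R, pop_P):
    """Expected reception success of one agent hearing signals drawn
    from the population's average production matrix pop_P."""
    return np.trace(pop_P @ agent_R) / pop_P.shape[0]

P_opt = np.eye(2)                        # native (helpful) production
R_opt = np.eye(2)                        # reception shared by everyone
P_mut = np.roll(np.eye(2), 1, axis=0)    # mutant: reversed production

n = 10                                   # population size, mutant included
pop_P = ((n - 1) * P_opt + P_mut) / n    # average production each agent hears

# Every agent, mutant included, has the same reception matrix R_opt,
# so the mutant pays no fitness cost for its unhelpful production.
print(reception_score(R_opt, pop_P))     # 0.9, identical for all agents
```

The cost of the mutant's behaviour (the drop from 1.0 to 0.9) is spread across the whole population, so selection acting on reception alone cannot see it.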
## Simulation 3
There's something a bit odd about the result of Oliphant's simulation 2. After all, we know that alarm call systems do indeed exist in nature, and surely this is a case of individuals producing signals optimally even though it is only the receivers who are benefiting. This is part of a general problem for evolutionary biology in explaining the origins of *altruism* in nature. Why do individuals ever help each other given that natural selection works to favour genes that aid their own transmission from one generation to the next?
One answer is that individuals might act altruistically if there's some likelihood that the altruistic act will lead to them benefitting from a return favour at a later date. You can think of this as "I'll scratch your back if you scratch mine" or "I'll signal optimally to you if you signal optimally to me". Oliphant's simulation 3 looks at the evolution of this strategy in a very cunning and elegant way.
He adds a second signalling system to the agents - i.e. a second set of genes that encodes an alternative signalling behaviour. Each time an agent communicates with another individual it chooses one of its two signalling systems to use based on whether the previous round of communication *with that individual* was successful. In other words, one of the signalling systems ends up being used with helpful speakers and one with unhelpful speakers. (Oliphant also adds a gene that determines whether to assume individuals are helpful or unhelpful when they are met for the first time.)
Running this simulation with fitness being calculated only on reception success results in a strikingly different result from Simulation 2. Now, communicatively optimal signalling systems do indeed evolve even though agents don't get assigned fitness for producing optimally. How is this possible?
What happens is that the second signalling system evolves to be deliberately unhelpful. It acts as a kind of "punishment" for agents who send suboptimal signals. This gives signallers a direct benefit for sending optimally: they avoid punishment (and receive optimal signals themselves) in the next round of communication. This puts positive selection pressure on genes for optimal production even though scoring for fitness is only on reception.
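The bookkeeping behind this partner-conditional strategy can be sketched as follows (the class, names, and encoding here are our own illustration; Oliphant's genetic representation differs):

```python
# Sketch of the partner-conditional strategy in Simulation 3.
class ReciprocalAgent:
    def __init__(self, helpful_system, punishing_system, assume_helpful=True):
        self.systems = {True: helpful_system, False: punishing_system}
        self.memory = {}               # partner id -> was last round successful?
        self.default = assume_helpful  # innate guess for first meetings

    def system_for(self, partner_id):
        # Use the helpful system only with partners whose last round
        # with us succeeded (or by default, if we are optimists).
        return self.systems[self.memory.get(partner_id, self.default)]

    def record(self, partner_id, success):
        self.memory[partner_id] = success

agent = ReciprocalAgent("helpful", "punishing")
print(agent.system_for("bob"))         # 'helpful' (optimistic default)
agent.record("bob", False)             # bob's last signal was unhelpful
print(agent.system_for("bob"))         # 'punishing'
```

The key design point is that the choice of system is conditioned *per partner*, which is what makes punishment, and therefore helpful production, pay off.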
## Simulation 4
Oliphant's final simulation offers a final way in which optimal signalling can evolve, without mutualism or reciprocity. This simulation demonstrates that it is not only the way in which you calculate fitness that matters, but also how you choose individuals to act as communicative partners in a signalling game.
So far, we've been assuming that pairs of agents are picked at random from the population to take part in communication, but of course in the real world populations are *organised*. In other words, there are some pairs of individuals that will interact more often than others. One way in which this organisation can happen is *spatially*. To model this, Oliphant, in his fourth simulation, organises agents in a ring such that they are likely to communicate with their nearer neighbours in this ring. Crucially, reproduction is also organised spatially such that related individuals are also nearby each other in the ring.
The result of this simulation, which is otherwise identical to simulation 2 (i.e. fitness is determined only on reception success and there is no mechanism for reciprocity), is that optimal signalling does evolve:

This is, on the face of it, somewhat surprising. Why would organising the population spatially make such a stark difference?
Oliphant argues that the key is that fitness is being calculated based on success at communicating with your kin rather than random unrelated individuals. An important idea in evolutionary biology is that of *kin selection*, which states that selection can benefit the reproductive success of related individuals even at the expense of an organism's own survival. The biologist J.B.S. Haldane captured the concept pithily by saying he'd be willing to die for "two brothers or eight cousins". The point is that a gene that determined self-sacrificing behaviour could nevertheless increase in frequency in a population as long as it favoured the survival of enough other individuals who are likely to carry that same gene.
Hamilton [(1964a](https://www.sciencedirect.com/science/article/pii/0022519364900384); [1964b)](https://www.sciencedirect.com/science/article/pii/0022519364900396)
set out how this kind of selection can work by showing that genes for a behaviour can increase in frequency as long as the cost to the organism producing that behaviour is lower than the benefit to other organisms scaled by their relatedness to the producer of the behaviour. Hamilton suggested that one way this could happen is in spatially organised populations where interactions tend to be among relatives.
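Hamilton's condition is usually written as $rb > c$: a gene for a costly behaviour spreads when the benefit $b$ to others, weighted by their relatedness $r$ to the actor, exceeds the cost $c$ to the actor. As a toy check (our own illustration, not taken from the papers), Haldane's quip sits exactly at the break-even point:

```python
def hamilton_favours(cost, benefit, relatedness):
    """Hamilton's rule: a costly behaviour is favoured when r*b > c."""
    return relatedness * benefit > cost

# Sacrificing one life (c = 1) for two brothers (b = 2 lives saved,
# r = 0.5 for full siblings) is exactly break-even, so not favoured...
print(hamilton_favours(1.0, 2.0, 0.5))   # False: 0.5 * 2 = 1, not > 1
# ...but saving three brothers tips the balance.
print(hamilton_favours(1.0, 3.0, 0.5))   # True: 0.5 * 3 = 1.5 > 1
```

The same break-even holds for eight cousins ($r = 1/8$), which is exactly why Haldane's joke works.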
## Summary
The important conclusion from Oliphant's simulations is that optimal (i.e. Saussurean) signalling does not automatically evolve by natural selection just by virtue of it being a good thing for a species to have! Nevertheless we do see it all over the place in nature.
These simulations suggest that where we do see it, one of three possible scenarios could be involved:
1. **Mutualism.** It could be that the ecological context of the communication is one where both parties benefit from communication taking place. (Consider a flower communicating to a bee where nectar is to be found. This is a case of mutual benefit for both parties.)
2. **Reciprocity.** It could be that individuals are keeping track of whether other individuals are helpful or not and responding in kind. In this case, "altruistic" behaviour is based on the presumption that there will be some payback in the future. (It may be the case that much complex communication among non-kin is ultimately the result of some kind of reciprocity.)
3. **Spatial organisation resulting in kin selection.** It could be that the relatedness of individuals who benefit from communication is high enough for selection to result in the increase of genes for optimal production behaviour even if fitness does not increase for the sender. (This is likely to be the case where populations are spatially organised such that related individuals are more likely to interact than unrelated ones.)
# Adversarial Variational Bayes
This notebook contains the code for the STAN example to demonstrate how Adversarial Variational Bayes (AVB) can be used to approximate complex posterior distributions.
```
%load_ext autoreload
%autoreload 2
import os
import pystan
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
import ite
from tqdm import tqdm_notebook
import re
import glob
import tensorflow as tf
from tensorflow.contrib import slim
ds = tf.contrib.distributions
st = tf.contrib.bayesflow.stochastic_tensor
# Parameters
batch_size = 512
data = {'J': 8,
'y': [2.8, 0.8, -0.3, 0.7, -0.1, 0.1, 1.8, 1.2],
'sigma': [0.8, 0.5, 0.8, 0.6, 0.5, .6, 0.5, 0.4],
'psigma_eta': 1., 'psigma_mu': 1., 'psigma_tau': 1.}
# Parameter for AVB: whether to use adaptive contrast
is_adapt_contrast = True
# Utility functions
def kde(mu, tau, bbox=[-5, 5, -5, 5], xlabel="", ylabel=""):
values = np.vstack([mu, tau])
kernel = sp.stats.gaussian_kde(values)
fig, ax = plt.subplots()
ax.axis(bbox)
ax.set_aspect(4/5*abs(bbox[1]-bbox[0])/abs(bbox[3]-bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
xx, yy = np.mgrid[bbox[0]:bbox[1]:100j, bbox[2]:bbox[3]:100j]
positions = np.vstack([xx.ravel(), yy.ravel()])
f = np.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap='Blues')
plt.show()
def scatter(mu, tau, bbox=[-5, 5, -5, 5], xlabel="", ylabel=""):
fig, ax = plt.subplots()
ax.axis(bbox)
ax.set_aspect(4/5*abs(bbox[1]-bbox[0])/abs(bbox[3]-bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.scatter(mu, tau, edgecolor='none', alpha=0.5)
plt.show()
def hist(x, xlabel="", ylabel=""):
fig, ax = plt.subplots()
ax.hist(x, bins=50)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.show()
def heat_map(f, bbox=[-5, 5, -5, 5], xlabel="", ylabel=""):
N, M = f.shape
fig, ax = plt.subplots()
ax.axis(bbox)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
xx, yy = np.mgrid[bbox[0]:bbox[1]:(N*1j), bbox[2]:bbox[3]:(M*1j)]
cfset = ax.contourf(xx, yy, f, cmap='Reds')
cset = ax.contour(xx, yy, f, colors='k')
plt.show()
def heat_map_func(func, data, bbox=[-5, 5, -5, 5], **kwargs):
xx, yy = np.mgrid[bbox[0]:bbox[1]:100j, bbox[2]:bbox[3]:100j]
positions = np.vstack([xx.ravel(), yy.ravel()])
f = np.reshape(func(positions, data=data).T, xx.shape)
heat_map(f, bbox, **kwargs)
```
# Models
Select one of the following models by running the associated code.
## Eight schools
```
model_code = """
data {
int<lower=0> J; // number of schools
real y[J]; // estimated treatment effects
real<lower=0> sigma[J]; // s.e. of effect estimates
real<lower=0> psigma_eta; // prior for eta
real<lower=0> psigma_mu; // prior for mu
real<lower=0> psigma_tau; // prior for tau
}
parameters {
real mu;
real tau;
real eta[J];
}
transformed parameters {
real theta[J];
for (j in 1:J)
theta[j] = mu + tau * eta[j];
}
model {
target += normal_lpdf(y | theta, sigma);
target += normal_lpdf(eta | 0, psigma_eta);
target += normal_lpdf(mu | 0, psigma_mu);
target += normal_lpdf(tau | 0, psigma_tau);
}
"""
sm = pystan.StanModel(model_code=model_code)
param_dim = 2 + data['J']
Ez0 = np.zeros(param_dim, dtype=np.float32)
stdz0 = np.concatenate([
np.array([data['psigma_mu'], data['psigma_tau']], dtype=np.float32),
data["psigma_eta"] * np.ones(data['J'], dtype=np.float32)
])
def get_logprob(z, data):
mu = z[:, 0:1]
tau = z[:, 1:2]
eta = z[:, 2:]
theta = mu + tau*eta
y = tf.constant(data['y'], dtype=tf.float32, shape=(1, data['J']))
sigma = tf.constant(data['sigma'], dtype=tf.float32, shape=(1, data['J']))
err = (y - theta)/sigma
logprob = tf.reduce_sum(
-0.5 * tf.square(err) - tf.log(sigma) - 0.5*np.log(2*np.pi), [1]
)
logprob -= 0.5*tf.reduce_sum(tf.square(mu/data['psigma_mu']), [1])
logprob -= 0.5*tf.reduce_sum(tf.square(tau/data['psigma_tau']), [1])
logprob -= 0.5*tf.reduce_sum(tf.square(eta/data['psigma_eta']), [1])
return logprob
```
## Gauss example
```
model_code = """
data {
int<lower=0> J; // number of schools
real y[J]; // estimated treatment effects
}
parameters {
real mu;
real logsigma;
}
model {
target += normal_lpdf(mu | 0, 100);
target += normal_lpdf(logsigma | 0, 100);
target += normal_lpdf(y | mu, exp(logsigma));
}
"""
sm = pystan.StanModel(model_code=model_code)
param_dim = 2
Ez0 = np.array([0., 0.], dtype=np.float32)
stdz0 = np.array([100., 100.], dtype=np.float32)
def get_logprob(z, data):
mu = z[:, 0:1]
logsigma = z[:, 1:2]
y = np.asarray(data['y'], dtype=np.float32).reshape(1, -1)
err = (y - mu)*tf.exp(-logsigma)
logprob = tf.reduce_sum(
-0.5 * tf.square(err) - logsigma - 0.5*np.log(2*np.pi), [1]
)
logprob -= 0.5*tf.reduce_sum(tf.square(z/stdz0.reshape(1, -1)), [1])
return logprob
def get_logprob_np(z, data):
mu = z[:, 0:1]
logsigma = z[:, 1:2]
y = np.asarray(data['y'], dtype=np.float32).reshape(1, -1)
err = (y - mu)*np.exp(-logsigma)
logprob = np.sum(
-0.5 * err*err - logsigma - 0.5*np.log(2*np.pi), axis=1
)
logprob -= 0.5*np.sum(np.square(z/stdz0.reshape(1, -1)), axis=1)
return logprob
```
## Rosenbrock
```
param_dim = 2
Ez0 = np.array([0., 0.], dtype=np.float32)
stdz0 = np.array([1000., 1000.], dtype=np.float32)
def get_logprob(z, data):
x1 = z[:, 0:1]
x2 = z[:, 1:2]
logprob = tf.reduce_sum(
-tf.square(x1 - 1) - tf.square(tf.square(x2) - x1), [1]
)
return logprob
```
## Torus
```
param_dim = 2
Ez0 = np.array([0., 0.], dtype=np.float32)
stdz0 = np.array([1000., 1000.], dtype=np.float32)
def get_logprob(z, data):
x1 = z[:, 0:1]
x2 = z[:, 1:2]
logprob = tf.reduce_sum(
-tf.square((tf.square(x1) + tf.square(x2)) - 2.), [1]
)
return logprob
def torus_np(z):
x1 = z[:, 0:1]
x2 = z[:, 1:2]
logprob = np.sum(
-np.square(x1*x1 + x2*x2 - 2.), axis=1
)
return logprob
```
# AVB definition
The main code for AVB.
```
def lrelu(x, leak=0.2, name="lrelu"):
return tf.maximum(x, leak*x)
def standard_normal(shape, **kwargs):
"""Create a standard Normal StochasticTensor."""
return st.StochasticTensor(
ds.MultivariateNormalDiag(mu=tf.zeros(shape), diag_stdev=tf.ones(shape), **kwargs))
def posterior(reuse=False):
with tf.variable_scope("posterior", reuse=reuse) as scope:
eps = standard_normal([batch_size, param_dim]).value()
with slim.arg_scope([slim.fully_connected], activation_fn=tf.nn.elu):
net = slim.fully_connected(eps, 128, scope='fc_0')
net = slim.fully_connected(net, 128, scope='fc_1')
# net = slim.fully_connected(net, 128, scope='fc_2')
z = slim.fully_connected(net, param_dim, activation_fn=None, scope='z')
return z
def adversary(z, reuse=False):
with tf.variable_scope("adversary", reuse=reuse) as scope:
with slim.arg_scope([slim.fully_connected], activation_fn=lrelu):
net = slim.fully_connected(z, 256, scope='fc_0')
for i in range(5):
dnet = slim.fully_connected(net, 256, scope='fc_%d_r0' % (i+1))
net += slim.fully_connected(dnet, 256, activation_fn=None, scope='fc_%d_r1' % (i+1),
weights_initializer=tf.constant_initializer(0.))
net = lrelu(net)
T = slim.fully_connected(net, 1, activation_fn=None, scope='T',
weights_initializer=tf.constant_initializer(0.))
T = tf.squeeze(T, [1])
return T
tf.reset_default_graph()
z0 = tf.random_normal([batch_size, param_dim], name="z0")
z_ = posterior()
beta = tf.constant(1.)
if is_adapt_contrast:
Ez_, Varz_ = tf.nn.moments(z_, [0], keep_dims=True)
stdz_ = tf.sqrt(Varz_) + 1e-6
Ez_ = tf.stop_gradient(Ez_)
stdz_ = tf.stop_gradient(stdz_)
else:
Ez_ = tf.constant(Ez0.reshape(1, -1), dtype=tf.float32)
stdz_ = tf.constant(stdz0.reshape(1, -1), dtype=tf.float32)
zr = Ez_ + stdz_ * z0
znorm_ = (z_ - Ez_) / stdz_
logr_ = -tf.reduce_sum(0.5 * tf.square(znorm_) + tf.log(stdz_) + 0.5 * np.log(2*np.pi), [1])
logr_zr = -tf.reduce_sum(0.5 * tf.square(z0) + tf.log(stdz_) + 0.5 * np.log(2*np.pi), [1])
if is_adapt_contrast:
Ti = adversary(z0) - logr_zr
Td = adversary(znorm_, reuse=True) - logr_
else:
Ti = adversary(zr) - logr_zr
Td = adversary(z_, reuse=True) - logr_
logprob = get_logprob(z_, data)
mean_logprob = tf.reduce_mean(logprob)
mean_Td = tf.reduce_mean(Td)
loss_primal = tf.reduce_mean(beta*(logr_ + Td) - logprob)
d_loss_d = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=Td, labels=tf.ones_like(Td))
)
d_loss_i = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=Ti, labels=tf.zeros_like(Ti))
)
loss_dual = d_loss_i + d_loss_d
pvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "posterior")
dvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "adversary")
popt = tf.train.AdamOptimizer(1e-4, beta1=0.5)
dopt = tf.train.AdamOptimizer(2e-4, beta1=0.5)
grads_primal = popt.compute_gradients(loss_primal, var_list=pvars)
grads_dual = dopt.compute_gradients(loss_dual, var_list=dvars)
train_primal = popt.apply_gradients(grads_primal)
train_dual = dopt.apply_gradients(grads_dual)
train_step = [train_primal, train_dual]
from tqdm import tqdm_notebook, tnrange
def run_training(sess, data, niter=10000, betas=None, npretrain=0):
if betas is None:
betas = []
for i in tnrange(npretrain, desc="Pretrain"):
sess.run(train_dual)
pbar = tnrange(niter+1, desc="Train")
for i in pbar:
if i >= np.size(betas):
beta_i = 1.
else:
beta_i = betas[i]
_, lp, ld, td = sess.run(
[train_primal, loss_primal, loss_dual, mean_Td],
feed_dict={beta: beta_i}
)
sess.run(train_dual)
sess.run(train_dual)
pbar.set_description("lp=%.3f, ld=%.3f, td=%.3f" % (lp, ld, td))
def get_samples(sess, nbatches=100):
zs = np.zeros([nbatches, batch_size, param_dim])
for i in range(nbatches):
zs[i] = sess.run(z_)
zs = zs.reshape(-1, param_dim)
return zs
def stan_vb(stan_vb_alg="fullrank", niter=10000):
stan_vb_out = "./vb.out"
sm.vb(data=data, sample_file=stan_vb_out, algorithm=stan_vb_alg, iter=niter, output_samples=10000, seed=1)
stan_vb_samples = np.genfromtxt(stan_vb_out, dtype=float, delimiter=',')
stan_vb_samples = stan_vb_samples[1:, 1:param_dim+1]
return stan_vb_samples
# Close existing session if available
try:
sess.close()
except NameError:
pass
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
```
# Model fitting
```
# Some parameters for the model fitting
iter_max = 10000
# Run HMC
stan_fit = sm.sampling(data=data, iter=500000, thin=50)
# AVB
run_training(sess, data, niter=iter_max, npretrain=0)
# Multiple VB
stan_vb_samples_mf = stan_vb("meanfield", iter_max)
stan_vb_samples = stan_vb("fullrank", iter_max)
```
# Visualization
```
# Parameters for visualization
idx0, idx1 = 0, 1
labels = [r'$\mu$', r'$\tau$', r'$\eta_1$']
xlabel = labels[idx0]
ylabel = labels[idx1]
plt.rc('font', size=16)
```
## STAN HMC
```
def expand_vec(a):
if a.ndim == 1:
return a.reshape(-1, 1)
else:
return a
stan_res = stan_fit.extract()
q = np.concatenate([expand_vec(l) for l in stan_res.values()], axis=1)
q1 = q[:, idx0]
q2 = q[:, idx1]
q1_mean = q1.mean()
q1_std = q1.std()
q2_mean = q2.mean()
q2_std = q2.std()
bbox = [q1_mean - 4*q1_std, q1_mean + 4*q1_std,
q2_mean - 4*q2_std, q2_mean + 4*q2_std]
scatter(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
kde(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
hist(q1)
hist(q2)
```
## AVB
```
q = get_samples(sess, nbatches=20)
q1 = q[:, idx0]
q2 = q[:, idx1]
q1_mean = q1.mean()
q1_std = q1.std()
q2_mean = q2.mean()
q2_std = q2.std()
bbox = [q1_mean - 4*q1_std, q1_mean + 4*q1_std,
q2_mean - 4*q2_std, q2_mean + 4*q2_std]
scatter(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
kde(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
hist(q1)
hist(q2)
```
## STAN VB
```
q = stan_vb_samples
q1 = stan_vb_samples[:, idx0]
q2 = stan_vb_samples[:, idx1]
scatter(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
kde(q1, q2, bbox, xlabel=xlabel, ylabel=ylabel)
hist(q1)
hist(q2)
```
# Example: CanvasXpress violin Chart No. 7
This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/violin-7.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="violin7",
data={
"y": {
"smps": [
"Var1",
"Var2",
"Var3",
"Var4",
"Var5",
"Var6",
"Var7",
"Var8",
"Var9",
"Var10",
"Var11",
"Var12",
"Var13",
"Var14",
"Var15",
"Var16",
"Var17",
"Var18",
"Var19",
"Var20",
"Var21",
"Var22",
"Var23",
"Var24",
"Var25",
"Var26",
"Var27",
"Var28",
"Var29",
"Var30",
"Var31",
"Var32",
"Var33",
"Var34",
"Var35",
"Var36",
"Var37",
"Var38",
"Var39",
"Var40",
"Var41",
"Var42",
"Var43",
"Var44",
"Var45",
"Var46",
"Var47",
"Var48",
"Var49",
"Var50",
"Var51",
"Var52",
"Var53",
"Var54",
"Var55",
"Var56",
"Var57",
"Var58",
"Var59",
"Var60"
],
"data": [
[
4.2,
11.5,
7.3,
5.8,
6.4,
10,
11.2,
11.2,
5.2,
7,
16.5,
16.5,
15.2,
17.3,
22.5,
17.3,
13.6,
14.5,
18.8,
15.5,
23.6,
18.5,
33.9,
25.5,
26.4,
32.5,
26.7,
21.5,
23.3,
29.5,
15.2,
21.5,
17.6,
9.7,
14.5,
10,
8.2,
9.4,
16.5,
9.7,
19.7,
23.3,
23.6,
26.4,
20,
25.2,
25.8,
21.2,
14.5,
27.3,
25.5,
26.4,
22.4,
24.5,
24.8,
30.9,
26.4,
27.3,
29.4,
23
]
],
"vars": [
"len"
]
},
"x": {
"supp": [
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ"
],
"order": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"dose": [
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
]
}
},
config={
"axisAlgorithm": "rPretty",
"axisTickScaleFontFactor": 1.8,
"axisTitleFontStyle": "bold",
"axisTitleScaleFontFactor": 1.8,
"background": "white",
"backgroundType": "window",
"backgroundWindow": "#E5E5E5",
"boxplotMean": True,
"boxplotMeanColor": "rgb(255,215,0)",
"boxplotMeanColorBorder": "red",
"boxplotNotched": True,
"boxplotWishkersType": "single",
"graphOrientation": "vertical",
"graphType": "Boxplot",
"groupingFactors": [
"dose"
],
"guides": "solid",
"guidesColor": "white",
"showBoxplotIfViolin": True,
"showLegend": False,
"showViolinBoxplot": True,
"smpLabelRotate": 90,
"smpLabelScaleFontFactor": 1.8,
"smpTitle": "dose",
"smpTitleFontStyle": "bold",
"smpTitleScaleFontFactor": 1.8,
"theme": "CanvasXpress",
"title": "The Effect of Vitamin C on Tooth Growth in Guinea Pigs",
"xAxis2Show": False,
"xAxisMinorTicks": False,
"xAxisTickColor": "white",
"xAxisTitle": "len"
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="violin_7.html")
```
# Notebook to go step by step in the selection/reduction/calibration of DL0 data to DL1
<font size="4">
**Content:**
- Data loading
- Calibration:
- Pedestal subtraction
- Peak integration
- Conversion of digital counts to photoelectrons.
- High gain/low gain combination
- Cleaning
- Hillas parameters
- Disp reconstruction (from Hillas pars)
- TEST: High gain/Low gain
- Using Pyhessio to access more MC information:
- Simulated phe, number of simulated events, simulated energy range, etc.
- Calculation of the spectral weight for one event.
- TEST: Comparison of Hillas intensity with simulated number of phe.
- Spectral weighting for a set of events.
### Some imports...
```
from ctapipe.utils import get_dataset_path
from ctapipe.io import event_source
from ctapipe.io.eventseeker import EventSeeker
import astropy.units as u
from copy import deepcopy
from lstchain.calib import lst_calibration
from ctapipe.image import hillas_parameters
import pyhessio
import lstchain.reco.utils as utils
from lstchain.reco import r0_to_dl1
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Data loading
Get the origin file with dl0 data which is a simtelarray file
```
#input_filename=get_dataset_path('gamma_test_large.simtel.gz')
input_filename="/home/queenmab/DATA/LST1/Gamma/gamma_20deg_0deg_run8___cta-prod3-lapalma-2147m-LaPalma-FlashCam.simtel.gz"
```
Get the data events into a ctapipe event container. We are only interested in LST1 events
```
pyhessio.close_file()
tel_id = 1
allowed_tels = {tel_id}
source = event_source(input_filename)
source.allowed_tels = allowed_tels
## Load the first event
#event = next(iter(source))
## OR select an event manually
seeker = EventSeeker(source)
event = seeker[4]
# OR Find an event that saturates the high gain waveform
'''
counter = 0
howmany = 4
for event in source:
if np.any(event.r0.tel[1].waveform > 4094):
bright_event = deepcopy(event)
tel_id = tid
counter = counter + 1
if counter > howmany:
break
event = bright_event
'''
## OR find a bright LST event:
# intensity = 0
# for event in source:
# for tid in event.r0.tels_with_data:
# if event.r0.tel[tid].image.sum() > intensity and tid in np.arange(8):
# intensity = event.r0.tel[tid].image.sum()
# bright_event = deepcopy(event)
# tel_id = tid
# event = bright_event
```
Take a look at the event container. Select any event using the event seeker
```
event.r0.tel[1]
EvID = event.r0.event_id
print(EvID)
```
Get the waveform data
```
data = event.r0.tel[tel_id].waveform
data.shape
```
The waveform is a 3-D array: for each of the 2 gains it holds 1855 pixels with 30 samples each.
### We can plot the waveforms and have an idea of their shapes.
Lame loop to find a pixel with signal:
```
maxvalue = 0
for pixel_id, waveform in enumerate(data[0]):
    maxsample = waveform.max()
    if maxsample > maxvalue:
        maxvalue = maxsample
        pixelwithsignal = pixel_id
plt.rcParams['figure.figsize'] = (8,5)
plt.rcParams['font.size'] = 14
nsamples = data.shape[2]
sample = np.linspace(0,30,nsamples)
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color = "blue")
plt.plot(sample,data[0][0],label="Pixel without signal", color = "orange")
plt.legend()
```
## Calibration
**Get the pedestal, which is the average (for pedestal events) of the *sum* of all samples, from sim_telarray**
```
ped = event.mc.tel[tel_id].pedestal
ped.shape
```
Each pixel has its pedestal for the two gains.
**Correct the pedestal (np.atleast_3d function converts 2D to 3D matrix)**
```
pedcorrectedsamples = data - np.atleast_3d(ped) / nsamples
pedcorrectedsamples.shape
```
**We can now compare the corrected waveforms with the previous ones**
```
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color="blue")
plt.plot(sample,data[0][0],label="Pixel without signal",color="orange")
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal corrected",color="blue",linestyle="--")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal corrected",color="orange",linestyle="--")
plt.legend()
```
## Integration
**We must now find the peak in the waveform and do the integration to extract the charge in the pixel**
```
from ctapipe.image.extractor import LocalPeakWindowSum
integrator = LocalPeakWindowSum()
integration, peakpos = integrator(pedcorrectedsamples)
# Rebuild the integration-window mask (no longer returned by the extractor)
start = pedcorrectedsamples.argmax(axis=-1)[..., None] - integrator.window_shift
idx = np.arange(pedcorrectedsamples.shape[-1])
window = (idx >= start) & (idx < start + integrator.window_width)
integration.shape, peakpos.shape, window.shape
```
Integration gives the value of the charge
```
integration[0][0],integration[0][pixelwithsignal]
```
Peakpos gives the position of the peak (in which sample it falls)
```
peakpos[0][0],peakpos[0][pixelwithsignal]
```
window is a boolean mask marking the samples used for the integration
```
window[0][0],window[0][pixelwithsignal]
sample[window[0][0]]
```
**We can plot these positions on top of the waveform and decide if the integration and peak identification has been correct**
```
import matplotlib.patches as patches
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal, corrected",color="blue")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal, corrected",color="orange")
plt.plot(sample[window[0][0]],pedcorrectedsamples[0][0][window[0][0]],
color="red",label="windows",linewidth=3,linestyle="--")
plt.plot(sample[window[0][pixelwithsignal]],pedcorrectedsamples[0][pixelwithsignal][window[0][pixelwithsignal]],
color="red",linewidth=3,linestyle="--")
plt.axvline(peakpos[0][0],linestyle="--",color="orange")
plt.axvline(peakpos[0][pixelwithsignal],linestyle="--",color="blue")
plt.legend()
```
**Finally we must convert the charge from digital counts to photoelectrons, multiplying by the conversion factor**
```
signals = integration.astype(float)
dc2pe = event.mc.tel[tel_id].dc_to_pe # numgains * numpixels
signals *= dc2pe
```
**Choose the correct calibration factor for each pixel depending on its intensity. Very bright pixels saturate, and the local peak integrator underestimates their intensity.**
```
data[0]
combined = signals[0].copy() # By default we use the high gain
for pixel in range(0,combined.size):
if np.any(data[0][pixel] > 4094):
print(signals[1][pixel],signals[0][pixel])
combined[pixel] = signals[1][pixel]
```
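The gain-selection loop above can also be written without a Python loop: build a boolean mask of saturated pixels and take the low-gain value wherever the high-gain waveform clips. A vectorized sketch with toy arrays (the saturation threshold matches the one used above):

```python
import numpy as np

SATURATION = 4094

# toy calibrated signals: row 0 = high gain, row 1 = low gain, 4 pixels
signals = np.array([[10.0, 4500.0, 20.0, 30.0],
                    [12.0,  400.0, 22.0, 33.0]])
# toy raw high-gain waveforms: (pixels, samples)
raw_high = np.array([[100, 200],
                     [4095, 300],   # this pixel saturates
                     [50, 60],
                     [70, 80]])

saturated = np.any(raw_high > SATURATION, axis=1)
combined = np.where(saturated, signals[1], signals[0])
print(combined)  # [ 10. 400.  20.  30.]
```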
**And fill the DL1 containers**
```
event.dl1.tel[tel_id].image = combined
event.dl1.tel[tel_id].peakpos = peakpos
event.dl1.tel[tel_id]
```
**Say hello to our shower!**
```
from ctapipe.visualization import CameraDisplay
camera = event.inst.subarray.tel[tel_id].camera
plt.rcParams['figure.figsize'] = (20, 6)
plt.rcParams['font.size'] = 14
plt.subplot(1,3,1)
disp = CameraDisplay(camera,title="Low gain")
disp.add_colorbar()
disp.image = signals[1]
plt.subplot(1,3,2)
disp = CameraDisplay(camera,title = "High gain")
disp.add_colorbar()
disp.image = signals[0]
plt.subplot(1,3,3)
disp = CameraDisplay(camera,title = "Combined")
disp.add_colorbar()
disp.image = combined
```
## Image cleaning
```
from ctapipe.image import hillas_parameters, tailcuts_clean
cleaning_method = tailcuts_clean
cleaning_parameters = {'boundary_thresh': 3,
'picture_thresh': 6,
'keep_isolated_pixels': False,
'min_number_picture_neighbors': 1
}
signal = combined
signal_pixels = cleaning_method(camera,signal,**cleaning_parameters)
```
We use the combined image.
```
image = signal
image[~signal_pixels] = 0
```
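Conceptually, `tailcuts_clean` keeps pixels above a high `picture_thresh`, plus their neighbours that pass a lower `boundary_thresh`. A toy illustration of the two-threshold idea on a 1D strip of pixels (the real routine uses the camera's neighbour geometry and the extra options set above):

```python
import numpy as np

def toy_tailcuts(image, picture_thresh=6.0, boundary_thresh=3.0):
    """Two-threshold cleaning on a 1D strip of pixels (neighbours = adjacent indices)."""
    picture = image > picture_thresh
    # a boundary pixel passes the low threshold AND touches a picture pixel
    has_picture_neighbour = np.zeros_like(picture)
    has_picture_neighbour[:-1] |= picture[1:]
    has_picture_neighbour[1:] |= picture[:-1]
    boundary = (image > boundary_thresh) & has_picture_neighbour
    return picture | boundary

image = np.array([1.0, 4.0, 9.0, 5.0, 2.0, 4.0])
mask = toy_tailcuts(image)
print(mask)  # last pixel passes the low threshold but is isolated, so it is dropped
```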
**Let's take a look at the clean and shiny image**
```
plt.rcParams['figure.figsize'] = (6, 6)
plt.rcParams['font.size'] = 14
disp = CameraDisplay(camera,title = "Clean image, high gain")
disp.image = image
disp.add_colorbar()
```
## Hillas parameters
First compute them:
```
hillas = hillas_parameters(camera, image)
hillas.intensity
```
**And plot them over the image**
```
disp = CameraDisplay(camera,title = "Clean image")
disp.add_colorbar()
disp.image = image
disp.overlay_moments(hillas, color='cyan', linewidth=3)
```
**Also we can calculate the timing parameters**
```
from ctapipe.image import timing_parameters as time
timepars = time.timing_parameters(camera, image, peakpos[0], hillas)
timepars
timepars.slope,timepars.intercept
```
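`timing_parameters` essentially fits a straight line to the pulse arrival times as a function of position along the ellipse's major axis; the returned `slope` and `intercept` describe that time gradient. A minimal sketch of the underlying fit with hypothetical toy data (not the ctapipe routine):

```python
import numpy as np

# toy pixel coordinates projected on the shower's major axis, and their peak times
longitudinal = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # metres
peak_time = 2.0 * longitudinal + 10.0                   # samples

slope, intercept = np.polyfit(longitudinal, peak_time, 1)
print(slope, intercept)  # ~2.0 ~10.0
```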
## Reconstruction of disp
```
from lstchain.reco.utils import get_event_pos_in_camera, disp, disp_to_pos
tel = event.inst.subarray.tel[tel_id]
src_pos = get_event_pos_in_camera(event, tel)
d = disp(src_pos, hillas)
s = np.sign(src_pos[0] - hillas.x)
dx = src_pos[0] - hillas.x
dy = src_pos[1] - hillas.y
plt.figure(figsize=(12,12))
display = CameraDisplay(camera,title = "Disp reconstruction")
display.add_colorbar()
display.image = image
display.overlay_moments(hillas, color='cyan', linewidth=3, alpha=0.4)
plt.scatter(src_pos[0], src_pos[1], color='red', label='actual source position')
uu = s * d.value * np.cos(hillas.psi)
vv = s * d.value * np.sin(hillas.psi)
plt.quiver(hillas.x, hillas.y, uu, vv, units='xy', scale=1,
label= "reconstructed disp",
)
plt.quiver(hillas.x, hillas.y, dx.value, dy.value,
units='xy', scale=1,
color='red',
alpha=0.5,
label= "actual disp",
)
plt.legend();
```
**In a real use case, the _disp_ value (length of the vector) is reconstructed by training a random forest.
The _reconstructed disp_ above assumes a perfect length reconstruction.
The direction of the `disp` vector is given by the ellipse direction (`hillas.psi`)**
## Let's compare the difference between high- and low-gain images for all events in the sim_telarray file:
```
pyhessio.close_file()
intensity_high = np.array([])
intensity_low = np.array([])
nevents = 0
for event in source:
if nevents%100==0:
print(nevents)
if nevents >= 500:
break
#if np.any(event.r0.tel[1].waveform > 4094):
# continue
geom = event.inst.subarray.tel[tel_id].camera
lst_calibration(event,tel_id)
for Nphe_high, Nphe_low in zip(event.dl1.tel[tel_id].image[0],event.dl1.tel[tel_id].image[1]):
if Nphe_high > 0 and Nphe_low > 0:
intensity_high = np.append(Nphe_high,intensity_high)
intensity_low = np.append(Nphe_low,intensity_low)
nevents=nevents+1
from scipy.stats import norm
plt.figure(figsize=(15,15))
#diff = (np.log10(intensity_low)-np.log10(intensity_high))*np.log(10)
pixels_df = pd.DataFrame(data ={'high_gain':intensity_high,
'low_gain':intensity_low,
'diff':np.log(intensity_low/intensity_high)})
pixels_df['Bin1'] = (pixels_df['low_gain'] >= 10) & (pixels_df['low_gain'] < 30)
pixels_df['Bin2'] = (pixels_df['low_gain'] >= 30) & (pixels_df['low_gain'] < 70)
pixels_df['Bin3'] = (pixels_df['low_gain'] >= 70) & (pixels_df['low_gain'] < 150)
pixels_df['Bin4'] = (pixels_df['low_gain'] >= 150)
plt.subplot(421)
h = plt.hist(pixels_df[pixels_df['Bin1']]['diff'],bins=50,label='10 to 30 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(422)
h2 = plt.hist(pixels_df[pixels_df['Bin1']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin1']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin1']]['diff'])
print(mu,sigma)
plt.subplot(423)
h = plt.hist(pixels_df[pixels_df['Bin2']]['diff'],bins=50,label='30 to 70 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(424)
h2 = plt.hist(pixels_df[pixels_df['Bin2']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin2']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin2']]['diff'])
print(mu,sigma)
plt.subplot(425)
h = plt.hist(pixels_df[pixels_df['Bin3']]['diff'],bins=50,label='70 to 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(426)
h2 = plt.hist(pixels_df[pixels_df['Bin3']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin3']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin3']]['diff'])
print(mu,sigma)
plt.subplot(427)
h = plt.hist(pixels_df[pixels_df['Bin4']]['diff'],bins=50,label='> 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(428)
h2 = plt.hist(pixels_df[pixels_df['Bin4']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin4']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin4']]['diff'])
print(mu,sigma)
```
## Use Pyhessio to access extra MC data
```
pyhessio.close_file()
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if event_id==EvID:
print('run id {}:, event number: {}'.format(ev.get_run_number() , event_id))
print(' Triggered telescopes for this event: {}'.format(tels_with_data))
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
break
print('Number of Phe: ',nphe)
print('Hillas intensity',hillas.intensity)
```
## Get the number of simulated events in the file (very slow)
```
#numevents = pyhessio.count_mc_generated_events(input_filename)
numevents = 1000000
print(numevents)
```
## Calculate the spectral weighting for the event
```
emin,emax,index,cone,core_max
particle = utils.guess_type(input_filename)
K = numevents*(1+index)/(emax**(1+index)-emin**(1+index))
A = np.pi*core_max**2
Omega = 2*np.pi*(1-np.cos(cone))
if cone==0:
Omega=1
MeVtoGeV = 1e-3
if particle=="gamma":
K_w = 5.7e-16*MeVtoGeV
index_w = -2.48
E0 = 0.3e6*MeVtoGeV
if particle=="proton":
K_w = 9.6e-2
index_w = -2.7
E0 = 1
Simu_E0 = K*E0**index
N_ = Simu_E0*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
R = K_w*A*Omega*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
energy = event.mc.energy.value
w = ((energy)**(index_w-index))*R/N_
print('Spectral weight: ',w)
```
## We can compare the Hillas intensity with the MC photoelectron size of the events to check the effects of cleaning
**Set the number of events that we want to analyze and the name of the output h5 file (use None to process all events in the file)**
```
r0_to_dl1.max_events = None
output_filename = 'dl1_' + os.path.basename(input_filename).split('.')[0] + '.h5'
```
**Run lstchain to get dl1 events**
```
r0_to_dl1.r0_to_dl1(input_filename,output_filename)
```
**Use Pyhessio to obtain more MC info, like the number of MC photoelectrons in the camera**
```
mc_phe = np.array([])
id = np.array([])
counter=0
#Get MC info with pyhessio
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if 1 in tels_with_data:
counter=counter+1
if counter==r0_to_dl1.max_events:
break
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
mc_phe = np.append(mc_phe,nphe)
id = np.append(id,event_id)
```
**Use pandas to assign the info obtained with pyhessio to the corresponding dl1 previous events**
```
mc_df = pd.DataFrame()
mc_df['mc_phe'] = mc_phe
mc_df['event_id'] = id.astype(int)
df_dl1 = pd.read_hdf(output_filename)
df_dl1 = df_dl1.set_index('event_id')
mc_df = mc_df.set_index('event_id').reindex(df_dl1.index)
df_dl1['mc_phe'] = np.log10(mc_df['mc_phe'])
```
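The `set_index(...).reindex(...)` pattern above aligns the pyhessio rows to the dl1 table by `event_id`, so a column can be assigned without an explicit merge. A toy illustration of the pattern:

```python
import pandas as pd

df_dl1 = pd.DataFrame({'event_id': [3, 1, 2],
                       'intensity': [30.0, 10.0, 20.0]}).set_index('event_id')
mc_df = pd.DataFrame({'event_id': [1, 2, 3],
                      'mc_phe': [100.0, 200.0, 300.0]}).set_index('event_id')

# reindex reorders mc_df's rows to match df_dl1's event order before assignment
df_dl1['mc_phe'] = mc_df.reindex(df_dl1.index)['mc_phe']
print(df_dl1['mc_phe'].tolist())  # [300.0, 100.0, 200.0]
```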
**Plot the hillas intensity vs mc photoelectron size**
```
plt.figure(figsize=(15,5))
plt.subplot(121)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['intensity'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ Hillas intensity')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
plt.subplot(122)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['mc_energy'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ MC Energy')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
```
## Apply the spectral weighting for this set of events
```
df_dl1['w'] = ((10**df_dl1['mc_energy'])**(index_w-index))*R/N_
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],density=1,label="-2.48 index")
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,density=1,label="-2 index")
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
plt.legend()
plt.subplot(122)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],label="weighted to Crab")
plt.legend()
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
#plt.xscale('log')
```
# Legacy Zeropoints vs. Obsdb
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import fitsio
from glob import glob
import os
import pandas as pd
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
from scipy.stats import sigmaclip
from photutils import (CircularAperture, CircularAnnulus,
aperture_photometry, daofind)
from astrometry.util.fits import fits_table, merge_tables
from obiwan.common import fits2pandas
from legacyzpts.qa.compare_idlzpts import ZptResiduals, StarResiduals
from legacyzpts.fetch import fetch_targz
```
# legacyzpts/test/test_compare_legacy_idl.py
```
from legacyzpts.test.test_compare_legacy_idl \
import test_zpts_decam,test_stars_decam
zpt= test_zpts_decam(plot=False)
star= test_stars_decam(plot=False)
zpt.legacy.data.columns
star.legacy.data.columns
zpt.legacy.data.ccdphoff[:10]
zpt.idl.data.ccdphoff[:10]
star.legacy.data.magoff[:10]
star.idl.data.magoff[:10]
```
### There is a 10 mmag offset between idl and legacy for the median ps1 apmag difference
```
_=plt.hist(zpt.legacy.data.ccdphoff,label='legacy',
histtype='step')
_=plt.hist(zpt.idl.data.ccdphoff,label='idl',
histtype='step')
plt.legend()
plt.xlabel('median 2.5 sigma clipped (ps1 - apmag)')
print(np.median(zpt.legacy.data.ccdphoff))
```
#### Is this in the -zpt.fits file?
```
raw_dir= "/home/kaylan/myrepo/legacyzpts/py/legacyzpts/test"
raw= fits_table(os.path.join(raw_dir,
'testoutput_zpts/temptable_decam_legacy.fits'))
_=plt.hist(raw.phoff,label='-zpt.fits file',
histtype='step')
plt.legend()
plt.xlabel('median 2.5 sigma clipped (ps1 - apmag)')
np.median(raw.phoff)
```
### How can the median phoff offset exist when we don't see it in the stars table? It must be due to a units or convention difference between idl and legacy, because both have good ps1 - apmag distributions for all stars
```
legacy_magoff, _, _ = sigmaclip(star.legacy.data.magoff)
idl_magoff, _, _ = sigmaclip(star.idl.data.magoff)
_=plt.hist(legacy_magoff,label='legacy',
histtype='step')
_=plt.hist(idl_magoff,label='idl',
histtype='step')
plt.legend()
plt.xlabel('ps1 - apmag for each star in sample')
print(np.median(legacy_magoff))
raw_star= fits_table(os.path.join(raw_dir,
'testoutput_stars/temptable_decam_legacy.fits'))
raw_star.columns
_=plt.hist(raw_star.ps1_mag - raw_star.apmag,label='-star.fits file',
histtype='step')
plt.legend()
plt.xlabel('ps1 - apmag')
#np.median(raw_star.phoff)
diff,_,_= sigmaclip(star.legacy.data.magoff - star.idl.data.magoff)
_=plt.hist(diff,histtype='step')
print(np.median(diff))
```
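The sigma clipping used throughout (`scipy.stats.sigmaclip`) iteratively discards values more than N standard deviations from the mean until the sample stabilises. The idea in plain numpy (a simplified sketch, not scipy's exact implementation):

```python
import numpy as np

def sigma_clip(values, nsigma=2.5, max_iter=10):
    """Iteratively drop values more than nsigma standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        mean, std = values.mean(), values.std()
        keep = np.abs(values - mean) <= nsigma * std
        if keep.all():
            break
        values = values[keep]
    return values

# twenty well-behaved offsets plus one gross outlier
data = np.array([0.1] * 10 + [-0.1] * 10 + [50.0])
clipped = sigma_clip(data)
print(len(clipped), clipped.max())  # 20 0.1
```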
# CCD_RA and DEC
```
diff_ra,_,_= sigmaclip(star.legacy.data.ccd_ra - star.idl.data.ccd_ra)
df= pd.DataFrame({'diff_ra':diff_ra})
_=plt.hist(df['diff_ra'],
histtype='step')
plt.xlabel('ccd_ra (legacy - idl)')
print(df['diff_ra'].describe())
diff_dec,_,_= sigmaclip(star.legacy.data.ccd_dec - star.idl.data.ccd_dec)
df['diff_dec']= diff_dec
_=plt.hist(df['diff_dec'],
histtype='step')
plt.xlabel('ccd_dec (legacy - idl)')
print(df['diff_dec'].describe())
```
```
# Import libraries for data cleaning, analysis, and visualization
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import datetime
%matplotlib inline
# Create our headers for the dataframe
headers = ['States', '2015', '2016', '2017', '2016(b)', '2017(b)']
# Read in our pce file.
df = pd.read_csv(r"C:\Users\Aarons\Downloads\pce1018.csv", sep=',', names=headers)
pce = df[3:61] # Filter dataset to display only states.
pce.head()
# Let's look at the end of our rows of States.
pce.tail()
pce = pce.reset_index()
pce.head()
# Take a look at the size of our dataset.
# More than 50 states, will further examine.
pce.shape
# Take a look at our columns.
pce.columns
# Drop all of the rows that are not individual states.
pce = pce.drop([7])
pce = pce.drop([9])
pce = pce.drop([14])
pce = pce.drop([20])
pce = pce.drop([28])
pce = pce.drop([41])
pce = pce.drop([46])
pce = pce.drop([52])
# Now our dataset is down to 50 states.
pce.shape
# Check for any null values.
pce.isnull().sum()
pce.describe()
# Find out what the summary statistics for the dataset.
pce['States'].describe()
pce = pce.apply(pd.to_numeric, axis=1, errors='ignore')
pce.head()
pce.plot(kind='scatter', x='2016(b)', y='2017(b)',figsize=(12,4));
plt.savefig('PCE')
pce['2016(b)'].plot(kind='hist', bins=22, color='blue');
pce['2017(b)'].plot(kind='hist',bins=22, color='orange');
plt.savefig('PCE_hist')
!pip install plotly
!pip install chart_studio
import chart_studio
import plotly.express as px
import plotly.graph_objects as go
fig1 = px.bar(pce, x="States", y="2016(b)", color="2017(b)", barmode="group")
fig1.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig1.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig1.show()
pce.sort_values(by=['2016(b)'], ascending=True, inplace=True)
pce.head()
pce.sort_values(by=['2017(b)'], ascending=True, inplace=True)
fig2 = px.bar(pce, x="States", y="2016(b)", color="2017(b)", barmode="group")
fig2.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig2.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig2.show()
fig3 = px.bar(pce, x="States", y="2017(b)", color="2016(b)", barmode="group")
fig3.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig3.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig3.show()
fig4 = px.bar(pce, x="States", y="2017(b)", color="2016(b)", barmode="group")
fig4.update_traces(texttemplate='%{text:.2s}', textposition='outside')
fig4.update_layout(uniformtext_minsize=8, uniformtext_mode='hide')
fig4.show()
pce['2016(b)'].mean()
pce['2017(b)'].mean()
print(pce['2016(b)'].std(ddof=1))
print(pce['2017(b)'].std(ddof=1))
scatter = px.scatter(pce, x='States', y='2016(b)'
,size='2017(b)')
scatter.update_layout(template='plotly_white')
scatter.update_layout(title='Total Personal Consumption Expenditures by State')
scatter.show()
import plotly.io as pio
pio.write_html(scatter, file='PCE_hist.html', auto_open=True)
```
<h2>Classifying Warranty Claims with Symptom Class Names Using Machine Learning</h2><br><br>
by Daniel J. Kim
At Honda Market Quality (MQ), we are responsible for identifying vehicle quality and safety problems. The primary source of market or field information is warranty claims data. This data represents the voice of our customers. The data contains several attributes such as part number, part cost, days to failure, miles to failure, customer's complaint, etc. Over the years, Honda has accumulated several million warranty claims. In order to efficiently identify market problems, methods have to be employed to "classify" or group similar claims together so that analysts can efficiently find trends, track problems, and ensure problems are fixed or counter-measured.
Today, warranty claims data is classified using several hard-coded algorithms that require extensive maintenance. The jobs that our IT department runs to complete the classification take several hours overnight. Given the recent advancements in, and accessibility of, [machine learning](https://en.wikipedia.org/wiki/Machine_learning) (ML) methodologies, I believe MQ and our Honda IT professionals should investigate how ML can be used to improve the warranty claims classification process and extend its usage to other applicable areas of the business. Furthermore, I strongly believe MQ needs to develop in-house capability and knowledge in machine learning. Unfortunately, MQ's associates currently have little or no ML knowledge, myself included. But we can change that, and hopefully discover the benefits of applying machine learning to enhance MQ's business.
The following example is an attempt at a proof-of-concept of how machine learning can be used to classify warranty claims without hard-coded algorithms and is not meant to be representative of a "production" application. The programming language used to employ the machine learning algorithm is Python using the [scikit-learn](http://scikit-learn.org/stable/) machine learning library. This document is a [Jupyter](http://jupyter.org/) web notebook which allows me to document my process so that perhaps others can duplicate or understand my process as well.
### Library Imports
```
import pandas as pd
import numpy as np
```
### Data Ingestion
Due to confidentiality, the raw data will not be made available. The sample data came from an Excel extract that I saved locally as a CSV file.
```
df = pd.read_csv('/home/pybokeh/Downloads/fit_data.csv')
```
### Data Preparation: Data Cleansing and Transformation
Source data had dollar sign and comma in the part cost amounts. So we need to remove them and ensure the part cost is a numeric (float) value.
```
df['PART_COST_USD'] = df['PART_COST_USD'].str.replace('$','').str.replace(',','')
df['PART_COST_USD'] = df['PART_COST_USD'].astype(float)
```
Confirm data type of the source data:
```
df.dtypes
```
Below are the first 5 rows of the training data set we will use. The feature columns come first, and the target or label data is the last column ("SYMP_CLASS_NM").
Basically, we want to label or classify future claims based on part #, part cost, DTF, MTF, and symptom text cluster family to their appropriate symptom class name.
**Let's view our sample data:**
```
df.head()
```
When using machine learning algorithms, most require that the input data do not contain text/string data. We can use scikit-learn's LabelEncoder() class to convert text/string data to integers.
```
from sklearn import preprocessing
part5_encoder = preprocessing.LabelEncoder()
text_cluster_encoder = preprocessing.LabelEncoder()
laborop_encoder = preprocessing.LabelEncoder()
mtc_model_encoder = preprocessing.LabelEncoder()
mtc_type_encoder = preprocessing.LabelEncoder()
symp_class_encoder = preprocessing.LabelEncoder()
part5_encoder.fit(df.FAIL_SHORT_PARTNO)
text_cluster_encoder.fit(df.TEXT_CLUSTER_FAMILY)
laborop_encoder.fit(df.PRI_LAB_OPRTN_CD)
mtc_model_encoder.fit(df.MTC_MODEL)
mtc_type_encoder.fit(df.MTC_TYPE)
symp_class_encoder.fit(df.SYMP_CLASS_NM)
```
### Let's create new columns containing the integer version of the columns that contain text/string data
```
df['PART5'] = part5_encoder.transform(df.FAIL_SHORT_PARTNO)
df['TEXT_CLUSTER'] = text_cluster_encoder.transform(df.TEXT_CLUSTER_FAMILY)
df['LABOROP'] = laborop_encoder.transform(df.PRI_LAB_OPRTN_CD)
df['MTCMODEL'] = mtc_model_encoder.transform(df.MTC_MODEL)
df['MTCTYPE'] = mtc_type_encoder.transform(df.MTC_TYPE)
df['SYMP_CLASS'] = symp_class_encoder.transform(df.SYMP_CLASS_NM)
df.head()
```
### We need to save the encoders for use later on un-classified data
** Data Structure Persistence using Python's pickle library: **
```
import pickle
# Encoders to disk
pickle.dump(part5_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/part5_encoder.sk','wb'))
pickle.dump(text_cluster_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/text_cluster_encoder.sk','wb'))
pickle.dump(laborop_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/laborop_encoder.sk','wb'))
pickle.dump(mtc_model_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/mtc_model_encoder.sk','wb'))
pickle.dump(mtc_type_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/mtc_type_encoder.sk','wb'))
pickle.dump(symp_class_encoder, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/symp_class_encoder.sk','wb'))
```
**NOTE**-For the sake of simplicity, I resorted to saving the mappings using Python's pickle object serialization library. In a production environment, it would be more suitable to use a relational database to store the mappings in a table instead.
### Now we are ready to create our features input data
Our features data will consist of: part5, part cost, DTF, MTF, labor op, symptom text cluster, MTC model, and MTC type (all represented with numeric values thanks to the encoders made earlier):
```
features = df[['PART5',
'PART_COST_USD',
'DAYS_TO_FAIL_MINZERO',
'MILES_TO_FAIL',
'LABOROP',
'TEXT_CLUSTER',
'MTCMODEL',
'MTCTYPE'
]].values.tolist()
```
Now our features data does not contain text/string data. Let's look at the first 10 rows of data:
```
features[:10]
```
Number of rows in our features data set:
```
len(features)
```
### Now create our target/label data
```
labels = df.SYMP_CLASS.tolist()
```
Let's look at the first 10 label data:
```
labels[:10]
```
Number of rows in our label data:
```
len(labels)
```
## Partitioning the Data Sets
```
# Note: in recent scikit-learn versions this lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(features, labels, test_size=0.2, random_state=0)
```
Our features training data should contain 80% of our original complete data:
```
len(features_train)
```
## Features Scaling
```
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
features_train_std = stdsc.fit_transform(features_train)
features_test_std = stdsc.transform(features_test)
features_train_std[:20]
```
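Standardisation rescales each feature to zero mean and unit variance using statistics of the *training* split only; those same fitted statistics are then applied to the test split, which is why the code above calls `fit_transform` on the training data but only `transform` on the test data. The effect can be sketched without scikit-learn, using toy numbers:

```python
import numpy as np

train = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
test = np.array([[2.0, 250.0]])

mean, std = train.mean(axis=0), train.std(axis=0)
train_std = (train - mean) / std
test_std = (test - mean) / std   # note: test uses the TRAINING mean/std

print(train_std.mean(axis=0))  # ~[0, 0]
print(test_std)
```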
## Fit the model with training data
We'll use the Random Forest classification algorithm
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_jobs=1)
rfc.fit(features_train_std, labels_train)
```
**Predicting 10 records:**
```
test_data = features_test_std[:10]
test_data
for name in symp_class_encoder.inverse_transform(labels_test[:10]):
print(name)
for item in test_data:
print(symp_class_encoder.inverse_transform(rfc.predict([item]))[0])
```
Comparing the above output to the source data, all but 2 were classified correctly (80%). But this was on a sample of only 10 observations. We can use sklearn's accuracy score for a larger data size.
```
from sklearn.metrics import accuracy_score
accuracy_score(labels_test[:1000], rfc.predict(features_test_std[:1000]))
```
# Re-Using the Machine Learning Model to Classify Future Claims
### Persist the model so that we can re-use it without having to retrain
```
import pickle
pickle.dump(rfc, open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/randomforest.sk','wb'))
```
### Re-use the Model and Load Helper Data Structures
```
# Load the model
rfc2 = pickle.load(open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/randomforest.sk','rb'))
# Load the label encoders that we saved earlier
part5_encoder = pickle.load(open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/part5_encoder.sk', 'rb'))
text_cluster_encoder = pickle.load(open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/text_cluster_encoder.sk', 'rb'))
symp_class_encoder = pickle.load(open('/home/pybokeh/Dropbox/python/jupyter_notebooks/machine_learning/symp_class_encoder.sk', 'rb'))
```
Again, in a production environment, it is probably best to load the mappings from a relational database instead of using Python's pickle.
### Test a single observation using the model
```
# criteria = [part5, part cost, dtf, mtf, text cluster]
# NOTE: the model above was trained on 8 features; a real prediction would
# need all 8 values here, in the same order used during training
criteria = [part5_encoder.transform(['04823'])[0], 0, 0, 207, text_cluster_encoder.transform(['COSMETIC ISSUE'])[0]]
symp_class_encoder.inverse_transform(rfc2.predict([criteria]))[0]
```
**That's it!**
Instead of symptom class names, we can classify warranty claims with other different types of classification labels so this classification example can be extended for any other classification we can come up with.
I have not tested this model extensively with larger test data, but so far I have been impressed with the results.
# Conclusion
This small-scale example shows that a machine learning classification algorithm was able to classify warranty claims without hard-coded algorithms. It was "trained" solely from the training data consisting of just some of the attributes of the warranty claims data.
## Convolutional Neural Network with Fashion-MNIST dataset
Fashion-MNIST [dataset](https://github.com/zalandoresearch/fashion-mnist) from Zalando
`Fashion-MNIST` is a dataset of [Zalando](https://jobs.zalando.com/tech/)'s article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend `Fashion-MNIST` to serve as a direct **drop-in replacement** for the original [MNIST dataset](http://yann.lecun.com/exdb/mnist/) for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
## Import Classes and Functions
```
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras import backend as K
from keras.utils import to_categorical
from utils.callback import TimingCallback
```
## Initialize Random Number Generator
```
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
num_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
```
## Load The Dataset
The data, shuffled and split between train and test sets
```
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
```
### Plot the first few examples
```
plt.figure(figsize=(12,3))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_train[i].reshape((img_rows, img_cols)), cmap='gray', interpolation='nearest')
plt.axis('off')
```
### Reshape the data
```
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
```
### Normalize the data
```
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
```
### Convert class vectors to binary class matrices
```
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
```
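`to_categorical` one-hot encodes the integer labels: label `k` becomes a length-10 vector with a 1 in position `k` and 0 elsewhere, which is the form the categorical cross-entropy loss expects. The same transform in plain numpy (a sketch of what Keras does for us):

```python
import numpy as np

def one_hot(labels, num_classes):
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = np.array([2, 0, 9])
encoded = one_hot(y, 10)
print(encoded.shape)  # (3, 10)
```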
## Define The Neural Network Model
```
def create_model():
model = Sequential()
## Your model here
# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
```
### Create the Model
```
model = create_model()
```
## Define training parameters
```
batch_size = 128
epochs = 1
```
## Train the model
```
tcb = TimingCallback()
model.fit(X_train, y_train, batch_size=batch_size,
epochs=epochs, verbose=1, validation_data=(X_test, y_test), callbacks=[tcb])
```
### Evaluate the model
```
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
### Show needed time
```
print('Epoch mean time: ', tcb.epoch_mean_time)
print('Batch mean time: ', tcb.batch_mean_time)
print('Train mean time: ', tcb.train_mean_time)
```
# Implementation of the SARSA algorithm
This algorithm takes what we know about value function estimation and applies
it to learning the action-value function.

From Sutton and Barto, 2018. Ch. 6.
```
# first, import necessary modules
import sys
import gym
import random
import itertools
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# add your own path to the RL repo here
sys.path.append('/Users/wingillis/dev/reinforcement-learning')
from collections import defaultdict
from lib.envs.gridworld import GridworldEnv
sns.set_style('white')
# initialize the environment
shape = (5, 5) # size of the gridworld
env = GridworldEnv(shape, n_goals=2)
env.seed(23)
random.seed(23)
def make_epsilon_greedy_policy(Q, epsilon, n_actions):
    '''
    Creates an epsilon-greedy policy based on a given Q-function and epsilon.
    Args:
        Q: A dictionary that maps from state -> action-values.
            Each value is a numpy array of length n_actions (see below)
        epsilon: The probability of selecting a random action. Float between 0 and 1.
        n_actions: Number of actions in the environment.
    Returns:
        A function that takes the observation as an argument and returns
        the probabilities for each action in the form of a numpy array of length n_actions.
    '''
    def policy_fn(observation):
        # spread epsilon uniformly over all actions, then give the
        # remaining probability mass to the greedy action
        A = np.ones(n_actions, dtype=float) * epsilon / n_actions
        A[np.argmax(Q[observation])] += 1.0 - epsilon
        return A
    return policy_fn
def sarsa(env, num_episodes, gamma=1.0, alpha=0.5, epsilon=0.1):
    '''
    SARSA algorithm: On-policy TD control. Finds the optimal epsilon-greedy policy.
    Args:
        env: OpenAI environment.
        num_episodes: Number of episodes to run for.
        gamma: discount factor.
        alpha: TD learning rate.
        epsilon: Probability a random action will be sampled. Float between 0 and 1.
    Returns:
        A tuple (Q, stats).
        Q is the optimal action-value function, a dictionary mapping state -> action values.
        stats is a dictionary with two numpy arrays, for episode rewards and episode lengths.
    '''
    # The final action-value function.
    # A nested dictionary that maps state -> (action -> action-value).
    Q = defaultdict(lambda: np.zeros(env.nA))
    # Keeps track of useful statistics
    stats = dict(rewards=np.zeros(num_episodes), lengths=np.zeros(num_episodes))
    # The policy we're following
    policy = make_epsilon_greedy_policy(Q, epsilon, env.nA)
    for i_episode in range(num_episodes):
        # Print out which episode we're on, useful for debugging.
        if (i_episode + 1) % 50 == 0:
            print(f'\rEpisode {i_episode + 1}/{num_episodes}.', end='')
            sys.stdout.flush()
        # Reset the environment and pick the first action
        state = env.reset()
        action = np.random.choice(env.nA, p=policy(state))
        # take multiple steps in the env until you reach the goal
        for t in itertools.count():
            # take a step from the action picked above
            next_state, reward, done, _ = env.step(action)
            # pick the next action based on your current state
            next_action = np.random.choice(env.nA, p=policy(next_state))
            # add cumulative reward to the statistics variable
            stats['rewards'][i_episode] += reward
            # perform the TD update to the action-value function
            td_target = reward + gamma * Q[next_state][next_action]
            Q[state][action] += alpha * (td_target - Q[state][action])
            if done:
                stats['lengths'][i_episode] = t
                break
            # update your current action and current state
            state, action = next_state, next_action
    return Q, stats
```
## Next steps
How might you improve the SARSA algorithm?
What happens to your agent's total reward or completion time if you set epsilon close to 1? Or close to 0?
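To build intuition for the epsilon question, here is a small self-contained sketch (the function name is illustrative, not part of this notebook) showing how the action probabilities behave at the two extremes:

```python
import numpy as np

def epsilon_greedy_probs(q_values, epsilon):
    """Action probabilities for an epsilon-greedy policy over q_values."""
    n = len(q_values)
    probs = np.ones(n) * epsilon / n             # exploration mass, spread uniformly
    probs[np.argmax(q_values)] += 1.0 - epsilon  # the rest goes to the greedy action
    return probs

q = np.array([0.1, 0.5, 0.2, 0.2])
print(epsilon_greedy_probs(q, 0.0))  # always the greedy action: [0. 1. 0. 0.]
print(epsilon_greedy_probs(q, 1.0))  # uniform random: [0.25 0.25 0.25 0.25]
```

With epsilon near 1 the agent wanders, so episode lengths grow and total reward drops; with epsilon near 0 it exploits its early estimates and can get stuck on a suboptimal path.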
## Reference results
```
Q, stats = sarsa(env, 1000, gamma=0.1, alpha=0.15)
plt.plot(stats['rewards'])
plt.plot(stats['lengths'])
plt.ylabel('steps')
plt.xlabel('episodes')
```
## Watch your policy reach the goal
```
import time
from IPython.display import clear_output
policy = make_epsilon_greedy_policy(Q, 0, env.nA)
n_steps = 40
position = env.reset()
for t in range(n_steps):
    action = np.argmax(policy(position))
    position, _, done, _ = env.step(action)
    clear_output()
    env.render('human')
    time.sleep(1)
    if done:
        break
```
#Introduction to Python
##Main features
Python is a high-level language with some very interesting features for agile, interactive system development:
* It was created by [Guido van Rossum](https://es.wikipedia.org/wiki/Guido_van_Rossum).
* It is an **interpreted** language, so in general the generated code should be cross-platform.
* It has **dynamic typing**: the type of a variable is set when a value is assigned to it. This means everything may look fine at compile time (there is no static type checking), but type errors can still occur at run time.
* It supports *structured* programming (through functions) and **[prototype](https://en.wikipedia.org/wiki/Prototype-based_programming)-based** programming (similar to class-based object-oriented programming).
* It is an **imperative** language with the usual control statements: conditionals, loops, etc.
* It has **functional** features: lambda functions, higher-order functions, and functions such as map, reduce, sum, count, zip, foldr, foldl, etc.
* It is a **reflective** language/platform: the system's code can be inspected and changed at run time.
* From the user's point of view:
>* It is easy to get started.
>* It is intuitive.
>* Indentation is used to delimit blocks.
>* ...
**When working in Python it is important to "think" in Python in order to exploit all the features of the language.**
* Python has a small set of reserved words: for, while, and, return, def, class, etc.
##Data types
* Simple: int, float, long, complex, bool, enum, dates, etc.
* Structured: string, etc.
```
s1 = "Hola"
s2 = 'Hola'
integer_number = 83
float_number = 3.14159
long_number = 2345678909876543234567890
complex_number = 2 +3j
boolean = True
print (type(s1))
print (type(s2))
print (type(integer_number))
print (type(float_number))
print (type(long_number))
print (type(complex_number))
print (type(boolean))
#round(1.23, 1)
round(1.25361,3)
#Enums: https://docs.python.org/3/library/enum.html
from enum import Enum, unique, auto
@unique
class Day(Enum):
    LUNES = 1
    MARTES = 2
@unique
class Month(Enum):
    ENERO = auto()
    FEBRERO = auto()
print(Day.LUNES)
print (repr(Day.LUNES))
#Dates and times: https://docs.python.org/3/library/datetime.html
import datetime
x = datetime.datetime.now()
y = datetime.datetime(2018, 3, 21)
print(x)
print (y)
print(x.strftime("%B"))
#Strings: a string is an array of characters
#See: https://docs.python.org/3/library/string.html
name = "Pedro"
print (name)
print (name[0])
print ("Longitud: "+str(len(name)))
print (name.lower())
print (name.upper())
print (name.replace("P","T"))
print (" hola ".strip())
print ("Esto son varias palabras".split(" "))
#Tokenizing strings with regular expressions
#Try them in an online editor: https://regex101.com/
line = 'asdf fjdk; afed, fjek,asdf, foo'
import re
re.split(r'[;,\s]\s*', line)
text = 'hola este un mensaje, hola'
print(text.startswith('hola'))
print(text.endswith('hola'))
print(text.find('es'))
text = text.replace('hola', 'adiós')
text
text = 'UPPER PYTHON, lower python, Mixed Python'
re.findall('python', text, flags=re.IGNORECASE)
s = ' hola mundo \n'
s.strip() #Existen versiones lstrip y rstrip
import unicodedata
a = 'pýtĥöñ is a great language\n'
b = unicodedata.normalize('NFD', a)
b.encode('ascii', 'ignore').decode('ascii')
print('{} {}'.format("Uno","Dos"))
data = ['ACME', 50, 91.1]
','.join(str(d) for d in data)
```
## Operators
* Integers: + - * / % **
* Floats: + - * / % **
* Booleans: == != > >= < <= and or not
```
print (2+3)
print (3**2)
print (2 > 3)
print (2 < 3)
print (2 == 3)
print (3 == 3)
print (2 != 3)
print ( (2 != 3) and (3==3))
print (not (1))
```
##Control flow statements
```
#Conditional statements
#if-else, if-elif-else (acts like a switch)
if 2>3:
    print ("True")
else:
    print ("False")
a = int(input())
if a>3:
    print (str(a)+" > 3")
elif a<3:
    print (str(a)+" < 3")
else:
    print ("Else")
#Loops
#Types: for, while
for a in [1,3,4,5]:
    print (a)
a = 0
while a<5:
    print (a)
    a += 1
```
##Data structures
A data structure organizes a collection of data for later use.
Choosing a good data structure is essential to be able to run different operations efficiently on the data.
Each data structure has its own purpose, element-access pattern, and set of operations:
* **Arrays**: fixed-size collection of elements of the same type, with direct access to elements. Can have several dimensions.
* **List**: variable-size collection of elements with sequential access (direct access is usually also available).
* **Set**: unordered collection of elements with sequential access.
* **Dictionary** (Map): collection of elements hashed by a key, with direct access. Can also be seen as an associative array.
* **Tuple**: immutable collection of elements.
* Others: queues, stacks, etc.
```
#Array
a = [1,2,3,4]
print (a)
print (len(a))
print (a[0])
for item in a:
    print (item)
a[1] = a[1]*2
print (a[1])
#List
a = [1,2,3,4]
print (type(a))
a.append(55)
print(a)
print(a)
print(a.index(55))
print(a.pop(len(a)-1))
a.reverse()
print(a)
a.sort()
print(a)
a.clear()
print(a)
print ([1,2,3]+[4,5,6])
#Flatten list python
from itertools import chain
print (list(chain.from_iterable([[2], [3], [4], [5], [6], [7]])))
#Set: https://docs.python.org/3.7/library/stdtypes.html#set-types-set-frozenset
s1 = set()
s1.add(1)
s1.add(4)
s1.add(5)
print(s1)
print (str(len(s1)))
s1.add(55)
s1.discard(55)
s2 = {4,5,6}
print (s2)
print (s1.difference(s2))
print (s1-s2)
print (s1.union(s2))
print (s1 | s2)
print (s1.intersection(s2))
print (s1 & s2)
print ({4,5}.issubset(s2))
print ({4,5} <= s2)
print ({4,5,6,7}.issuperset(s2))
print ({4,5,6,7} >= s2)
print (4 in s1)
print (55 in s1)
for item in s1:
    print (item)
print (max(s1))
print (min(s1))
s1.clear()
del s2
#Dictionaries: a {key:value} table
table = {"a":1,"b":2,"c":3}
print(type(table))
print(table.values())
print(table.keys())
print (table["a"])
#print (table["dd"]) #Careful: raises an exception if the key does not exist
print ("a" in table)
print ("dd" in table)
table["d"] = 4
for key in table.keys():
    print (key+" "+str(table[key]))
for key,value in table.items():
    print (key+" "+str(value))
table.pop("d")
table.clear()
del table
#Tuples
tupla = (1,2,"hola")
print(type(tupla))
print(tupla[1])
for x in tupla:
    print(x)
print (1 in tupla)
#Elements cannot be added
del tupla
#Unpacking lists
a,b,c=data
print(a, b, c, sep=':')
```
##Functions
```
#A function is a subprogram with a name and a list of parameters
def suma(a,b):
    return a+b
#Parameters can have default values
def hola(Name="Mundo"):
    return "Hola "+Name
print (suma(1,2))
print (hola())
print (hola("Jose"))
#Functions with a variable number of parameters
def avg(first, *rest):
    return (first + sum(rest)) / (1 + len(rest))
avg(1, 2, 3, 4)
#Adding type hints to the parameters
def add(x:int, y:int) -> int:
    return x + y
help(add)
#Returning several values from a function using a tuple.
def myfun():
    return 1, 2, 3
a, b, c = myfun()
b
#Functions with default parameter values.
def spam(a, b=42):
    print(a, b)
spam(1)
#Lambda functions: an inline, anonymous kind of function.
def mymap(n):
    return n*2
add = lambda x, y: x + y
print(add('uno', 'dos'))
names = ['David Beazley', 'Brian Jones', 'Raymond Hettinger', 'Ned Batchelder']
print(sorted(names, key=lambda name: name.split()[-1].lower()))
#List comprehensions
x = [1,2,3,4]
xpares = [n for n in x if n % 2 == 0]
print (xpares)
xdoble = [n*2 for n in x]
print(xdoble)
xsquare = list(map(lambda n: n**2, x))
print (xsquare)
xfilter = list(filter(lambda n: n < 3, x))
print (xfilter)
import functools
import operator
product = functools.reduce((lambda x, y: x * y), x)
print (product)
#https://www.burgaud.com/foldl-foldr-python
foldl = lambda func, acc, xs: functools.reduce(func, xs, acc)
foldl(operator.sub, 0, [1,2,3])
```
##Classes
```
#__init__: Constructor
#__repr__: Represent the object (machine-readable)
#__str__: Represent the object (human-readable)
#__cmp__: Compare (Python 2 only; in Python 3 use __eq__, __lt__, etc.)
class Perro:
    age = 2
    def __init__(self, age):
        self.age = age
    def ladrar(self, n):
        for i in range(0,n):
            print ("Guau")
    def __str__(self):
        return "Perro de "+str(self.age)+" años."
perro = Perro(5)
print (perro)
perro.ladrar(2)
```
##Input/output handling
```
#See: https://docs.python.org/3/tutorial/inputoutput.html
#Mount the drive first so we can interact with the files.
from google.colab import drive
drive.mount('/content/drive')
#Read the whole content into a string
with open('/content/drive/My Drive/Colab Notebooks/data/cars.csv', 'rt') as f:
    data = f.read()
#print (data)
#Read the content line by line
with open('/content/drive/My Drive/Colab Notebooks/data/cars.csv', 'rt') as f:
    for line in f:
        print(line)
with open('/content/drive/My Drive/Colab Notebooks/data/test.txt', 'wt') as f:
    print('Hello World!', file=f)
import os
print(os.path.exists('/content/drive/My Drive/Colab Notebooks/data/'))
print(os.path.isfile('/content/drive/My Drive/Colab Notebooks/data/cars.csv'))
print(os.path.isdir('/content/drive/My Drive/Colab Notebooks/data/'))
import os
names = os.listdir('/content/drive/My Drive/Colab Notebooks/data/')
print (names)
import os.path
# List all files
names = [name for name in os.listdir('/content/drive/My Drive/Colab Notebooks/data') if os.path.isfile(os.path.join('/content/drive/My Drive/Colab Notebooks/data', name))]
# List directories
dirnames = [name for name in os.listdir('/content/drive/My Drive/Colab Notebooks') if os.path.isdir(os.path.join('/content/drive/My Drive/Colab Notebooks', name))]
print(names)
print(dirnames)
pyfiles = [name for name in os.listdir('/content/drive/My Drive/Colab Notebooks/DEEP-LEARNING-IN-ACTION') if name.endswith('.ipynb')]
print(pyfiles)
import csv
with open('/content/drive/My Drive/Colab Notebooks/data/movies.csv') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    for row in f_csv:
        print (row)
import csv
with open('/content/drive/My Drive/Colab Notebooks/data/movies.csv') as f:
    f_csv = csv.DictReader(f)
    for row in f_csv:
        print(row)
headers = ['Symbol','Price','Date','Time','Change','Volume']
rows = [('AA', 39.48, '6/11/2007', '9:36am', -0.18, 181800),
('AIG', 71.38, '6/11/2007', '9:36am', -0.15, 195500),
('AXP', 62.58, '6/11/2007', '9:36am', -0.46, 935000),
]
with open('/content/drive/My Drive/Colab Notebooks/data/stocks.csv','w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)
import json
data = {
'name' : 'ACME',
'shares' : 100,
'price' : 542.23
}
json_str = json.dumps(data)
data = json.loads(json_str)
# Writing JSON
with open('/content/drive/My Drive/Colab Notebooks/data/test.json', 'w') as f:
    json.dump(data, f)
# Reading data
with open('/content/drive/My Drive/Colab Notebooks/data/test.json', 'r') as f:
    data = json.load(f)
print(data)
```
##Exceptions, utilities, etc.
```
#Exceptions
#Note: FileNotFoundError is a subclass of OSError, so it must be caught first
try:
    f = open('missing')
except FileNotFoundError:
    print('File not found.')
except OSError as e:
    print('Error: ', e)
except Exception as e:
    print('Exception: ', e)
def parse_int(s):
    try:
        n = int(s)
    except Exception as e:
        print("Parsing error")
        print('Reason:', e)
parse_int(input())
#Unit tests
import unittest
#assertEqual(1, 2)
#List the contents of a module
import unittest
dir(unittest)
#List the methods and attributes of an object.
a = "A string"
dir(a)
```
#Tasks
```
#1-Review the following code:
mylist = [1,3,5]
mylist2 = [2,4,6]
#Merge lists
from heapq import merge
l = merge(mylist, mylist2)
print (list(l))
mylist.extend(mylist2)
print (mylist)
#Result should be [1,2,3,4,5,6]
from functools import reduce
suma = reduce((lambda x, y: x + y), mylist)
print (suma)
#2-Try Python code in an editor and look at the methods of different object types.
#3-Metaprogramming and reflection: review what Python offers
```
#References
* https://www.codecademy.com/learn/learn-python
* https://www.amazon.es/Python-Cookbook-David-Beazley/dp/1449340377
* https://www.oreilly.com/library/view/fluent-python/9781491946237/
* https://github.com/PacktPublishing/Modern-Python-Cookbook
* https://github.com/dabeaz/python-cookbook
Code is a sequence of instructions.
Instructions pop values from the operand stack, then push results onto the operand stack. For now, an instruction can push at most one result.
Some instructions also have static immediate arguments, usually indices or type annotations.
Some instructions are structured and contain nested blocks of instructions. These are used for if/else and loops.
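As a mental model (an illustrative sketch in Python, not real WebAssembly tooling), an operand-stack machine can be pictured like this:

```python
def run(instructions):
    """Evaluate a tiny subset of stack-machine instructions.

    Each instruction is a tuple: ('const', n) pushes n; ('add',) and
    ('mul',) pop two operands and push one result, like i32.add / i32.mul.
    """
    stack = []
    for ins in instructions:
        if ins[0] == 'const':
            stack.append(ins[1])          # static immediate argument
        elif ins[0] == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif ins[0] == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4, written in stack (postfix) order
print(run([('const', 2), ('const', 3), ('add',), ('const', 4), ('mul',)]))  # [20]
```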
# Numeric instructions
Basic operations over numeric values of some type.
- constants: e.g. i32.const 2: push a constant value onto the stack
- unary operations: clz (count leading zeros), ctz (count trailing zeros), popcnt (number of bits set to 1), abs, neg, etc.
- binary operations: add, sub, mul, ...
- test operations: consume one operand and push one boolean i32 result: eqz (== 0)
- comparison operations: consume 2 operands, compare them and push one boolean i32 result: eq, ne, lt, gt, etc.
- conversion operations: extend or truncate between i32 and i64, convert between f32 and f64, and convert between int and float.
# Parametric instructions
Work on any value type:
- drop: consume and ignore one operand
- select: select one of its first 2 operands based on whether the third is 0 or not.
# Variable instructions
Access local and global variables:
- get a value onto the stack: local.get, global.get
- save a value from the stack: local.set, global.set
- save without popping the stack: local.tee
```c
int fact(int x)
{
    return x <= 1 ? 1 : x * fact(x-1);
}
int add(int a, int b)
{
    return a + b;
}
int my_pow(int x, int n)
{
    if (n == 0)
        return 1;
    else if (n == 1)
        return x;
    else if (n == 2)
        return x*x;
    else if (n % 2 == 0)
        return my_pow(x*x, n/2);
    else
        return my_pow(x*x, n/2) * x;
}
int my_abs(int x)
{
    return x <= 0 ? -x : x;
}
int is_null(int x)
{
    return x == 0;
}
```
```
math.wasm: file format wasm 0x1
Code Disassembly:
0000a3 func[0] <__post_instantiate>:
0000a4: 01 | nop
0000a5: 0b | end
0000a7 func[1] <fact>:
0000a8: 20 00 | local.get 0
0000aa: 41 02 | i32.const 2
0000ac: 4e | i32.ge_s
0000ad: 04 7f | if i32
0000af: 20 00 | local.get 0
0000b1: 41 7f | i32.const 4294967295
0000b3: 6a | i32.add
0000b4: 10 01 | call 1 <fact>
0000b6: 20 00 | local.get 0
0000b8: 6c | i32.mul
0000b9: 05 | else
0000ba: 41 01 | i32.const 1
0000bc: 0b | end
0000bd: 0b | end
0000bf func[2] <add>:
0000c0: 20 00 | local.get 0
0000c2: 20 01 | local.get 1
0000c4: 6a | i32.add
0000c5: 0b | end
0000c7 func[3] <my_pow>:
0000c8: 01 7f | local[0] type=i32
0000ca: 02 40 | block
0000cc: 02 40 | block
0000ce: 20 01 | local.get 1
0000d0: 41 02 | i32.const 2
0000d2: 4d | i32.le_u
0000d3: 04 40 | if
0000d5: 41 01 | i32.const 1
0000d7: 21 02 | local.set 2
0000d9: 02 40 | block
0000db: 20 01 | local.get 1
0000dd: 41 01 | i32.const 1
0000df: 6b | i32.sub
0000e0: 0e 02 02 00 03 | br_table 2 0 3
0000e5: 0b | end
0000e6: 20 00 | local.get 0
0000e8: 20 00 | local.get 0
0000ea: 6c | i32.mul
0000eb: 0f | return
0000ec: 0b | end
0000ed: 20 00 | local.get 0
0000ef: 20 00 | local.get 0
0000f1: 6c | i32.mul
0000f2: 20 01 | local.get 1
0000f4: 41 02 | i32.const 2
0000f6: 6d | i32.div_s
0000f7: 10 03 | call 3 <my_pow>
0000f9: 20 00 | local.get 0
0000fb: 41 01 | i32.const 1
0000fd: 20 01 | local.get 1
0000ff: 41 01 | i32.const 1
000101: 71 | i32.and
000102: 1b | select
000103: 6c | i32.mul
000104: 0f | return
000105: 0b | end
000106: 20 00 | local.get 0
000108: 21 02 | local.set 2
00010a: 0b | end
00010b: 20 02 | local.get 2
00010d: 0b | end
00010f func[4] <my_abs>:
000110: 01 7f | local[0] type=i32
000112: 20 00 | local.get 0
000114: 20 00 | local.get 0
000116: 41 1f | i32.const 31
000118: 75 | i32.shr_s
000119: 22 01 | local.tee 1
00011b: 6a | i32.add
00011c: 20 01 | local.get 1
00011e: 73 | i32.xor
00011f: 0b | end
000121 func[5] <is_null>:
000122: 20 00 | local.get 0
000124: 45 | i32.eqz
000125: 0b | end
```
# Memory Instructions
Read/write data in linear memory:
- load: load from memory at a specific offset and alignment
- store: store to memory at a specific offset and alignment
A trap occurs when trying to read/write outside of the memory range.
Memory addresses are stored as i32.
- memory.size: returns the current memory size
- memory.grow: expands the memory by a given amount
Both operations work in units of the page size (64 KiB).
```c
int my_data[18];
int get_it()
{
    return my_data[6];
}
void set_it(int x)
{
    my_data[6] = x;
}
void add_it(int x)
{
    my_data[6] += x;
}
```
```
mem.wasm: file format wasm 0x1
Code Disassembly:
0000dc func[1]:
0000dd: 10 02 | call 2 <__wasm_apply_relocs>
0000df: 0b | end
0000e1 func[2] <__wasm_apply_relocs>:
0000e2: 01 | nop
0000e3: 0b | end
0000e5 func[3] <get_it>:
0000e6: 23 01 | global.get 1
0000e8: 28 02 18 | i32.load 2 24
0000eb: 0b | end
0000ed func[4] <set_it>:
0000ee: 23 01 | global.get 1
0000f0: 20 00 | local.get 0
0000f2: 36 02 18 | i32.store 2 24
0000f5: 0b | end
0000f7 func[5] <add_it>:
0000f8: 01 7f | local[0] type=i32
0000fa: 23 01 | global.get 1
0000fc: 22 01 | local.tee 1
0000fe: 20 01 | local.get 1
000100: 28 02 18 | i32.load 2 24
000103: 20 00 | local.get 0
000105: 6a | i32.add
000106: 36 02 18 | i32.store 2 24
000109: 0b | end
00010b func[6] <__post_instantiate>:
00010c: 10 07 | call 7
00010e: 10 01 | call 1
000110: 0b | end
000112 func[7]:
000113: 10 00 | call 0 <env.g$my_data>
000115: 24 01 | global.set 1
000117: 0b | end
```
# Control instructions
Control instructions affect the control flow:
- nop
- unreachable: trap
- block: defines a structured instruction
- if/else: structured instruction
- loop: structured instruction
- br: branch to a label
- br_if: conditionally branch to a label
- br_table: branch given a table of labels and an operand index into that table
- return: shortcut to jump out of the outermost block
- call: invoke another function, consuming arguments from the operand stack and returning the result on top of the stack.
- call_indirect: call a function in a table, through an operand index.
```c
int foo(int x)
{
    int y = x * 2;
    if (x > 10)
    {
        int z = x * 3;
        if (x > z)
            x += 5;
        else if (x < 3)
            x *= 100;
    }
    else {
        x *= y + 67;
    }
    return x + y;
}
```
```
00008d func[2] <foo>:
00008e: 01 7f | local[0] type=i32
000090: 20 00 | local.get 0
000092: 41 01 | i32.const 1
000094: 74 | i32.shl
000095: 21 01 | local.set 1
000097: 20 00 | local.get 0
000099: 41 0b | i32.const 11
00009b: 4e | i32.ge_s
00009c: 04 40 | if
00009e: 20 00 | local.get 0
0000a0: 41 03 | i32.const 3
0000a2: 6c | i32.mul
0000a3: 20 00 | local.get 0
0000a5: 48 | i32.lt_s
0000a6: 04 40 | if
0000a8: 20 00 | local.get 0
0000aa: 41 05 | i32.const 5
0000ac: 6a | i32.add
0000ad: 20 01 | local.get 1
0000af: 6a | i32.add
0000b0: 0f | return
0000b1: 0b | end
0000b2: 20 00 | local.get 0
0000b4: 41 e4 00 | i32.const 100
0000b7: 6c | i32.mul
0000b8: 20 00 | local.get 0
0000ba: 20 00 | local.get 0
0000bc: 41 03 | i32.const 3
0000be: 48 | i32.lt_s
0000bf: 1b | select
0000c0: 20 01 | local.get 1
0000c2: 6a | i32.add
0000c3: 0f | return
0000c4: 0b | end
0000c5: 20 01 | local.get 1
0000c7: 41 c3 00 | i32.const 67
0000ca: 6a | i32.add
0000cb: 20 00 | local.get 0
0000cd: 6c | i32.mul
0000ce: 20 01 | local.get 1
0000d0: 6a | i32.add
0000d1: 0b | end
```
```c
int bar(int x, int y)
{
    if (x < 0 || y < 0)
        return x + y;
    else if (x > y)
        return x;
    else if (x < y)
        return y;
    else
        return 100;
}
```
```
0000d3 func[3] <bar>:
0000d4: 20 00 | local.get 0
0000d6: 20 01 | local.get 1
0000d8: 72 | i32.or
0000d9: 41 7f | i32.const 4294967295
0000db: 4c | i32.le_s
0000dc: 04 40 | if
0000de: 20 00 | local.get 0
0000e0: 20 01 | local.get 1
0000e2: 6a | i32.add
0000e3: 0f | return
0000e4: 0b | end
0000e5: 20 01 | local.get 1
0000e7: 41 e4 00 | i32.const 100
0000ea: 20 00 | local.get 0
0000ec: 20 01 | local.get 1
0000ee: 48 | i32.lt_s
0000ef: 1b | select
0000f0: 20 00 | local.get 0
0000f2: 20 00 | local.get 0
0000f4: 20 01 | local.get 1
0000f6: 4c | i32.le_s
0000f7: 1b | select
0000f8: 0b | end
```
```c
void foo_n(int n)
{
    for (int i = 0; i < n; ++i)
        foo();
}
```
```
0000b6 func[3] <foo_n>:
0000b7: 01 7f | local[0] type=i32
0000b9: 41 00 | i32.const 0
0000bb: 21 01 | local.set 1
0000bd: 20 00 | local.get 0
0000bf: 41 00 | i32.const 0
0000c1: 4a | i32.gt_s
0000c2: 04 40 | if
0000c4: 03 40 | loop
0000c6: 10 00 | call 0 <env.foo>
0000c8: 20 01 | local.get 1
0000ca: 41 01 | i32.const 1
0000cc: 6a | i32.add
0000cd: 22 01 | local.tee 1
0000cf: 20 00 | local.get 0
0000d1: 47 | i32.ne
0000d2: 0d 00 | br_if 0
0000d4: 0b | end
0000d5: 0b | end
0000d6: 0b | end
```
```c
int fact(int x)
{
    int res = 1;
    while (x > 1)
    {
        res *= x;
        x -= 1;
    }
    return res;
}
```
```
0000d8 func[4] <fact>:
0000d9: 02 7f | local[0..1] type=i32
0000db: 41 01 | i32.const 1
0000dd: 21 01 | local.set 1
0000df: 20 00 | local.get 0
0000e1: 41 02 | i32.const 2
0000e3: 4e | i32.ge_s
0000e4: 04 40 | if
0000e6: 03 40 | loop
0000e8: 20 00 | local.get 0
0000ea: 20 01 | local.get 1
0000ec: 6c | i32.mul
0000ed: 21 01 | local.set 1
0000ef: 20 00 | local.get 0
0000f1: 41 02 | i32.const 2
0000f3: 4a | i32.gt_s
0000f4: 21 02 | local.set 2
0000f6: 20 00 | local.get 0
0000f8: 41 7f | i32.const 4294967295
0000fa: 6a | i32.add
0000fb: 21 00 | local.set 0
0000fd: 20 02 | local.get 2
0000ff: 0d 00 | br_if 0
000101: 0b | end
000102: 0b | end
000103: 20 01 | local.get 1
000105: 0b | end
```
```c
int el_break(int x)
{
    int res = 0;
    for (int i = x / 3; i < x; ++i)
    {
        if (i % 4 == 0)
            break;
        ++res;
    }
    return res;
}
```
```
000108 func[5] <el_break>:
000109: 02 7f | local[0..1] type=i32
00010b: 41 00 | i32.const 0
00010d: 21 01 | local.set 1
00010f: 02 40 | block
000111: 20 00 | local.get 0
000113: 41 03 | i32.const 3
000115: 6d | i32.div_s
000116: 22 02 | local.tee 2
000118: 20 00 | local.get 0
00011a: 4e | i32.ge_s
00011b: 0d 00 | br_if 0
00011d: 20 02 | local.get 2
00011f: 41 03 | i32.const 3
000121: 71 | i32.and
000122: 45 | i32.eqz
000123: 0d 00 | br_if 0
000125: 20 02 | local.get 2
000127: 41 7f | i32.const 4294967295
000129: 73 | i32.xor
00012a: 22 01 | local.tee 1
00012c: 20 00 | local.get 0
00012e: 6a | i32.add
00012f: 22 00 | local.tee 0
000131: 20 01 | local.get 1
000133: 41 03 | i32.const 3
000135: 71 | i32.and
000136: 22 01 | local.tee 1
000138: 20 00 | local.get 0
00013a: 20 01 | local.get 1
00013c: 49 | i32.lt_u
00013d: 1b | select
00013e: 41 01 | i32.const 1
000140: 6a | i32.add
000141: 21 01 | local.set 1
000143: 0b | end
000144: 20 01 | local.get 1
000146: 0b | end
```
```c
int el_continue(int x)
{
    int res = 0;
    for (int i = x / 3; i < x; ++i)
    {
        if (i % 4 == 0)
            continue;
        ++res;
    }
    return res;
}
```
```
000148 func[6] <el_continue>:
000149: 02 7f | local[0..1] type=i32
00014b: 41 00 | i32.const 0
00014d: 21 01 | local.set 1
00014f: 20 00 | local.get 0
000151: 41 03 | i32.const 3
000153: 6d | i32.div_s
000154: 22 02 | local.tee 2
000156: 20 00 | local.get 0
000158: 48 | i32.lt_s
000159: 04 40 | if
00015b: 03 40 | loop
00015d: 20 01 | local.get 1
00015f: 20 02 | local.get 2
000161: 41 03 | i32.const 3
000163: 71 | i32.and
000164: 41 00 | i32.const 0
000166: 47 | i32.ne
000167: 6a | i32.add
000168: 21 01 | local.set 1
00016a: 20 02 | local.get 2
00016c: 41 01 | i32.const 1
00016e: 6a | i32.add
00016f: 22 02 | local.tee 2
000171: 20 00 | local.get 0
000173: 47 | i32.ne
000174: 0d 00 | br_if 0
000176: 0b | end
000177: 0b | end
000178: 20 01 | local.get 1
00017a: 0b | end
```
Refs:
- [WebAssembly Specs 1.0](https://webassembly.github.io/spec/core/_download/WebAssembly.pdf)
```
from __future__ import print_function
import rdkit.Chem as Chem
import rdkit.Chem.AllChem as AllChem
from rdkit.Chem.Draw import IPythonConsole, ReactionToImage, MolToImage
from IPython.display import SVG, display, clear_output
import pandas as pd
import numpy as np
import json
import sys
sys.path.append('../../')
from retrosim.data.get_data import get_data_df, split_data_df
from retrosim.utils.generate_retro_templates import process_an_example
from retrosim.utils.draw import ReactionStringToImage, TransformStringToImage
import time
data = get_data_df('../data/data_processed.csv')
```
## Load general templates, which only include reaction core
```
with open('../data/templates_general.json', 'rb') as fid:
    templates = json.load(fid)
reactions = []
good_templates = []
counts = []
for template in templates:
    try:
        rxn = AllChem.ReactionFromSmarts(str(template))
        rxn.Initialize()
        [rct.UpdatePropertyCache() for rct in rxn.GetReactants()]
        [Chem.AssignStereochemistry(rct) for rct in rxn.GetReactants()]
        if rxn.Validate()[1] == 0:
            reactions.append(rxn)
            good_templates.append(template)
            counts.append(templates[template])
    except ValueError as e:
        pass
print('Loaded {} templates'.format(len(reactions)))
import matplotlib.pyplot as plt
%matplotlib inline
```
## What are the most common patterns of the sites of retro disconnections?
```
prod_templates = {}
for template in templates:
    prod = template.split('>>')[0]
    if prod == '([c;H0:1]-[n;H0:2](:[c;H0:3]):[cH:4]:[c;H0:5]:[c;H0:6])':
        print(template)
    if prod in prod_templates:
        prod_templates[prod] += templates[template]
    else:
        prod_templates[prod] = templates[template]
sorted_templates = sorted(prod_templates.iteritems(), key = lambda x: x[1], reverse=True)
print('\n{} ........................ {}'.format(sorted_templates[:10], sorted_templates[-10:]))
print('num unique: {}'.format(len(prod_templates)))
with open('../data/templates_general_prods.json', 'wb') as fid:
    json.dump(prod_templates, fid, indent=4)
df = pd.DataFrame({'pop': [np.log10(x) for x in templates.values()]})
plot = df['pop'].hist(bins=25, weights=np.power(10.0, df['pop']))
plot.set_title("Template popularity ({} total examples, {} templates)".format(sum(templates.values()), len(templates)))
plot.set_xlabel("Number of precedents in template (log10)")
plot.set_ylabel("Number of precedents")
```
## How many unique precursors are there for a given disconnection?
```
plot = df['pop'].hist(bins=25)
plot.set_title("Template popularity ({} total examples)".format(sum(templates.values())))
plot.set_xlabel("Number of precedents (log10)")
plot.set_ylabel("Number of templates")
prod_templates_unique = {}
ex_templates = {}
for template in templates:
    prod = template.split('>>')[0]
    if prod == '([c;H0:1]-[c;H0:2])':
        ex_templates[template] = templates[template]
    if prod in prod_templates_unique:
        prod_templates_unique[prod] += 1
    else:
        prod_templates_unique[prod] = 1
sorted_templates_unique = sorted(prod_templates_unique.iteritems(), key = lambda x: x[1], reverse=True)
print('\n{} ........................ {}'.format(sorted_templates_unique[:10], sorted_templates_unique[-10:]))
print('num unique: {}'.format(len(prod_templates)))
with open('../data/templates_general_prods_unique_precursor_options.json', 'wb') as fid:
    json.dump(prod_templates_unique, fid, indent=4)
with open('../data/templates_general_prods_unique_precursor_options_ex.json', 'wb') as fid:
    json.dump(ex_templates, fid, indent=4)
print('Example for c-c disconnection:')
for (key, val) in sorted(ex_templates.iteritems(), key = lambda x: x[1], reverse=True)[:10]:
    print('{} {}'.format(val, key))
df = pd.DataFrame({'num': [np.log10(x) for x in prod_templates_unique.values()]})
plot = df['num'].hist(bins=25)
plot.set_title("Unique precursors ({} total templates)".format(len(prod_templates_unique)))
plot.set_xlabel("Number of unique precursors for a given disconnection (log10)")
plot.set_ylabel("Number of unique disconnections")
df = pd.DataFrame({'num': [np.log10(x) for x in prod_templates_unique.values()]})
plot = df['num'].hist(bins=25, weights=np.power(10.0, df['num']))
plot.set_title("Unique precursors ({} total templates)".format(len(prod_templates_unique)))
plot.set_xlabel("Number of unique precursors for a given disconnection (log10)")
plot.set_ylabel("Number of unique templates")
```
## What is the largest fragment that determines a retro disconnection?
```
num_atoms = {}
for template in templates:
    num = template.split('>>')[0].count('[')
    if num in num_atoms:
        num_atoms[num] += 1
    else:
        num_atoms[num] = 1
sorted_num_atoms = sorted(num_atoms.iteritems(), key = lambda x: x[0])
print('List of (# atoms, # templates)')
print(sorted_num_atoms)
num_atoms = {}
for template in templates:
    num = template.split('>>')[0].count('[')
    if num in num_atoms:
        num_atoms[num] += templates[template]
    else:
        num_atoms[num] = templates[template]
sorted_num_atoms = sorted(num_atoms.iteritems(), key = lambda x: x[0])
print('List of (# atoms, # training examples)')
print(sorted_num_atoms)
plot = plt.scatter(num_atoms.keys(), num_atoms.values())
plt.title("Size of reaction center ({} total examples)".format(sum(num_atoms.values())))
plt.xlabel("Number of involved product atoms")
plt.ylabel("Number of training examples")
plt.yscale('log')
```
# FloPy shapefile export demo
The goal of this notebook is to demonstrate ways to export model information to shapefiles.
This example will cover:
* basic exporting of information for a model, individual package, or dataset
* custom exporting of combined data from different packages
* general exporting and importing of geographic data from other sources
```
import sys
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
# run installed version of flopy or add local path
try:
    import flopy
except:
    fpth = os.path.abspath(os.path.join('..', '..'))
    sys.path.append(fpth)
    import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# set the output directory
outdir = os.path.join('temp', 'shapefile_export')
if not os.path.isdir(outdir):
    os.makedirs(outdir)
# load an existing model
model_ws = "../data/freyberg"
m = flopy.modflow.Modflow.load("freyberg.nam", model_ws=model_ws, verbose=False,
                               check=False, exe_name="mfnwt")
m.get_package_list()
```
### set the model coordinate information
Set the coordinate information that locates the grid in a projected coordinate system (e.g., UTM).
```
grid = m.modelgrid
grid.set_coord_info(xoff=273170, yoff=5088657, epsg=26916)
grid.extent
```
## Declarative export using attached `.export()` methods
#### Export the whole model to a single shapefile
```
fname = '{}/model.shp'.format(outdir)
m.export(fname)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
pc = flopy.plot.plot_shapefile(fname, ax=ax, edgecolor='k', facecolor='none')
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(fname);
```
### Export a package to a shapefile
```
fname = '{}/wel.shp'.format(outdir)
m.wel.export(fname)
```
### Export a FloPy list or array object
```
m.lpf.hk
fname = '{}/hk.shp'.format(outdir)
m.lpf.hk.export('{}/hk.shp'.format(outdir))
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
a = m.lpf.hk.array.ravel()
pc = flopy.plot.plot_shapefile(fname, ax=ax, a=a)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(fname);
m.riv.stress_period_data
m.riv.stress_period_data.export('{}/riv_spd.shp'.format(outdir))
```
### MfList.export() exports the whole grid by default, regardless of the locations of the boundary cells
`sparse=True` only exports the boundary cells in the MfList
```
m.riv.stress_period_data.export('{}/riv_spd.shp'.format(outdir), sparse=True)
m.wel.stress_period_data.export('{}/wel_spd.shp'.format(outdir), sparse=True)
```
## Ad-hoc exporting using `recarray2shp`
* The main idea is to create a recarray with all of the attribute information, and a list of geometry features (one feature per row in the recarray).
* Each geometry feature is an instance of the `Point`, `LineString`, or `Polygon` classes in `flopy.utils.geometry`. The shapefile format requires all of the features to be of the same type.
* We will use pandas DataFrames for these examples because they are easy to work with, and then convert them to recarrays prior to exporting.
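The recarray half of that recipe is plain NumPy/pandas; here is a minimal sketch of the `DataFrame`-to-recarray step (the column names are made-up), independent of FloPy:

```python
import pandas as pd

# Hypothetical attribute table: one row per feature to export
df = pd.DataFrame({'k': [0, 0], 'i': [10, 11], 'j': [3, 4], 'flux': [-100.0, -50.0]})

# Convert to a numpy record array, which recarray2shp expects
rec = df.to_records(index=False)
print(rec.dtype.names)  # → ('k', 'i', 'j', 'flux')
```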
```
from flopy.export.shapefile_utils import recarray2shp
```
### combining data from different packages
write a shapefile of RIV and WEL package cells
```
wellspd = pd.DataFrame(m.wel.stress_period_data[0])
rivspd = pd.DataFrame(m.riv.stress_period_data[0])
spd = pd.concat([wellspd, rivspd])  # DataFrame.append was removed in pandas 2.0
spd.head()
```
##### Create a list of Polygon features from the cell vertices stored in the modelgrid object
```
from flopy.utils.geometry import Polygon
vertices = []
for row, col in zip(spd.i, spd.j):
    vertices.append(grid.get_cell_vertices(row, col))
polygons = [Polygon(vrt) for vrt in vertices]
polygons
```
##### write the shapefile
```
fname = '{}/bcs.shp'.format(outdir)
recarray2shp(spd.to_records(), geoms=polygons,
             shpname=fname,
             epsg=grid.epsg)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
pc = flopy.plot.plot_shapefile(fname, ax=ax)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(fname);
```
### exporting other data
Suppose we have some well data with actual locations that we want to export to a shapefile
```
welldata = pd.DataFrame({'wellID': np.arange(0, 10),
                         'q': np.random.randn(10)*100 - 1000,
                         'x_utm': np.random.rand(10)*5000 + grid.xoffset,
                         'y_utm': grid.yoffset + np.random.rand(10)*10000})
welldata.head()
```
##### convert the x, y coordinates to point features and then export
```
from flopy.utils.geometry import Point
geoms = [Point(x, y) for x, y in zip(welldata.x_utm, welldata.y_utm)]
fname = '{}/wel_data.shp'.format(outdir)
recarray2shp(welldata.to_records(), geoms=geoms,
             shpname=fname,
             epsg=grid.epsg)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
pc = flopy.plot.plot_shapefile(fname, ax=ax, radius=100)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(fname);
```
### Adding attribute data to an existing shapefile
Suppose we have a GIS coverage representing the river in the riv package
```
from flopy.utils.geometry import LineString
### make up a linestring shapefile of the river reaches
i, j = m.riv.stress_period_data[0].i, m.riv.stress_period_data[0].j
x0 = grid.xyzcellcenters[0][i[0], j[0]]
x1 = grid.xyzcellcenters[0][i[-1], j[-1]]
y0 = grid.xyzcellcenters[1][i[0], j[0]]
y1 = grid.xyzcellcenters[1][i[-1], j[-1]]
x = np.linspace(x0, x1, m.nrow+1)
y = np.linspace(y0, y1, m.nrow+1)
l0 = zip(list(zip(x[:-1], y[:-1])), list(zip(x[1:], y[1:])))
lines = [LineString(l) for l in l0]
rivdata = pd.DataFrame(m.riv.stress_period_data[0])
rivdata['reach'] = np.arange(len(lines))
lines_shapefile = '{}/riv_reaches.shp'.format(outdir)
recarray2shp(rivdata.to_records(index=False), geoms=lines,
             shpname=lines_shapefile,
             epsg=grid.epsg)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
pc = flopy.plot.plot_shapefile(lines_shapefile, ax=ax, radius=25)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(lines_shapefile);
```
#### read in the GIS coverage using `shp2recarray`
`shp2recarray` reads a shapefile into a numpy record array, which can easily be converted to a DataFrame
```
from flopy.export.shapefile_utils import shp2recarray
linesdata = shp2recarray(lines_shapefile)
linesdata = pd.DataFrame(linesdata)
linesdata.head()
```
##### Suppose we have some flow information that we read in from the cell budget file
```
# make up some fluxes between the river and aquifer at each reach
q = np.random.randn(len(linesdata))+1
q
```
##### Add reachs fluxes and cumulative flow to lines DataFrame
```
linesdata['qreach'] = q
linesdata['qstream'] = np.cumsum(q)
recarray2shp(linesdata.drop('geometry', axis=1).to_records(),
             geoms=linesdata.geometry.values,
             shpname=lines_shapefile,
             epsg=grid.epsg)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = grid.extent
pc = flopy.plot.plot_shapefile(lines_shapefile, ax=ax, radius=25)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(lines_shapefile);
```
## Overriding the model's modelgrid with a user supplied modelgrid
In some cases it may be necessary to override the model's modelgrid instance with a separate modelgrid, for example when the model discretization is in feet but the user wants it projected in meters. This can be accomplished by supplying a modelgrid as a `kwarg` to any of the `export()` methods in flopy. Below is an example:
```
mg0 = m.modelgrid
# build a new modelgrid instance with discretization in meters
modelgrid = flopy.discretization.StructuredGrid(delc=mg0.delc * 0.3048, delr=mg0.delr * 0.3048,
                                                top=mg0.top, botm=mg0.botm, idomain=mg0.idomain,
                                                xoff=mg0.xoffset * 0.3048, yoff=mg0.yoffset * 0.3048)
# exporting an entire model
m.export('{}/freyberg.shp'.format(outdir), modelgrid=modelgrid)
```
And for a specific parameter the method is the same
```
fname = '{}/hk.shp'.format(outdir)
m.lpf.hk.export(fname, modelgrid=modelgrid)
ax = plt.subplot(1, 1, 1, aspect='equal')
extents = modelgrid.extent
a = m.lpf.hk.array.ravel()
pc = flopy.plot.plot_shapefile(fname, ax=ax, a=a)
ax.set_xlim(extents[0], extents[1])
ax.set_ylim(extents[2], extents[3])
ax.set_title(fname);
```
# Regression
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
    !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
```
In the previous chapter we saw several examples of logistic regression, which is based on the assumption that the likelihood of an outcome, expressed in the form of log odds, is a linear function of some quantity (continuous or discrete).
In this chapter we'll work on examples of simple linear regression, which models the relationship between two quantities. Specifically, we'll look at changes over time in snowfall and the marathon world record.
The models we'll use have three parameters, so you might want to review the tools we used for the three-parameter model in <<_MarkandRecapture>>.
## More Snow?
I am under the impression that we don't get as much snow around here as we used to. By "around here" I mean Norfolk County, Massachusetts, where I was born, grew up, and currently live. And by "used to" I mean compared to when I was young, like in 1978 when we got [27 inches of snow](https://en.wikipedia.org/wiki/Northeastern_United_States_blizzard_of_1978) and I didn't have to go to school for a couple of weeks.
Fortunately, we can test my conjecture with data. Norfolk County happens to be the location of the [Blue Hill Meteorological Observatory](https://en.wikipedia.org/wiki/Blue_Hill_Meteorological_Observatory), which keeps the oldest continuous weather record in North America.
Data from this and many other weather stations is available from the [National Oceanic and Atmospheric Administration](https://www.ncdc.noaa.gov/cdo-web/search) (NOAA). I collected data from the Blue Hill Observatory from May 11, 1967 to May 11, 2020.
The following cell downloads the data as a CSV file.
```
import os
datafile = '2239075.csv'
if not os.path.exists(datafile):
    !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2239075.csv
```
We can use Pandas to read the data into `DataFrame`:
```
import pandas as pd
df = pd.read_csv('2239075.csv', parse_dates=[2])
```
Here's what the last few rows look like.
```
df.tail(3)
```
The columns we'll use are:
* `DATE`, which is the date of each observation,
* `SNOW`, which is the total snowfall in inches.
I'll add a column that contains just the year part of the dates.
```
df['YEAR'] = df['DATE'].dt.year
```
And use `groupby` to add up the total snowfall in each year.
```
snow = df.groupby('YEAR')['SNOW'].sum()
```
The first and last years are not complete, so I'll drop them.
```
snow = snow.iloc[1:-1]
len(snow)
```
The following figure shows total snowfall during each of the complete years in my lifetime.
```
from utils import decorate
snow.plot(style='o', alpha=0.5)
decorate(xlabel='Year',
         ylabel='Total annual snowfall (inches)',
         title='Total annual snowfall in Norfolk County, MA')
```
Looking at this plot, it's hard to say whether snowfall is increasing, decreasing, or unchanged. In the last decade, we've had several years with more snow than 1978, including 2015, which was the snowiest winter in the Boston area in modern history, with a total of 141 inches.
This kind of question -- looking at noisy data and wondering whether it is going up or down -- is precisely the question we can answer with Bayesian regression.
```
snow.loc[[1978, 1996, 2015]]
```
## Regression Model
The foundation of regression (Bayesian or not) is the assumption that a time series like this is the sum of two parts:
1. A linear function of time, and
2. A series of random values drawn from a distribution that is not changing over time.
Mathematically, the regression model is
$$y = a x + b + \epsilon$$
where $y$ is the series of measurements (snowfall in this example), $x$ is the series of times (years) and $\epsilon$ is the series of random values.
$a$ and $b$ are the slope and intercept of the line through the data. They are unknown parameters, so we will use the data to estimate them.
We don't know the distribution of $\epsilon$, so we'll make the additional assumption that it is a normal distribution with mean 0 and unknown standard deviation, $\sigma$.
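To make the model concrete, here is a short sketch (with made-up parameter values) that generates synthetic data from exactly this model and checks that least squares roughly recovers the parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 0.5, 64.0, 25.0        # assumed "true" parameters
x = np.arange(-26, 26)               # centered years, as in the example
eps = rng.normal(0, sigma, len(x))   # the random part of the model
y = a * x + b + eps                  # y = a x + b + epsilon

# least squares recovers slope and intercept, up to noise
a_hat, b_hat = np.polyfit(x, y, 1)
```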
To see whether this assumption is reasonable, I'll plot the distribution of total snowfall and a normal model with the same mean and standard deviation.
Here's a `Pmf` object that represents the distribution of snowfall.
```
from empiricaldist import Pmf
pmf_snowfall = Pmf.from_seq(snow)
```
And here are the mean and standard deviation of the data.
```
mean, std = pmf_snowfall.mean(), pmf_snowfall.std()
mean, std
```
I'll use the `norm` object from SciPy to compute the CDF of a normal distribution with the same mean and standard deviation.
```
from scipy.stats import norm
dist = norm(mean, std)
qs = pmf_snowfall.qs
ps = dist.cdf(qs)
```
Here's what the distribution of the data looks like compared to the normal model.
```
import matplotlib.pyplot as plt
plt.plot(qs, ps, color='C5', label='model')
pmf_snowfall.make_cdf().plot(label='data')
decorate(xlabel='Total snowfall (inches)',
         ylabel='CDF',
         title='Normal model of variation in snowfall')
```
We've had more winters below the mean than expected, but overall this looks like a reasonable model.
## Least Squares Regression
Our regression model has three parameters: slope, intercept, and standard deviation of $\epsilon$.
Before we can estimate them, we have to choose priors.
To help with that, I'll use StatsModel to fit a line to the data by [least squares regression](https://en.wikipedia.org/wiki/Least_squares).
First, I'll use `reset_index` to convert `snow`, which is a `Series`, to a `DataFrame`.
```
data = snow.reset_index()
data.head(3)
```
The result is a `DataFrame` with two columns, `YEAR` and `SNOW`, in a format we can use with StatsModels.
As we did in the previous chapter, I'll center the data by subtracting off the mean.
```
offset = data['YEAR'].mean().round()
data['x'] = data['YEAR'] - offset
offset
```
And I'll add a column to `data` so the dependent variable has a standard name.
```
data['y'] = data['SNOW']
```
Now, we can use StatsModels to compute the least squares fit to the data and estimate `slope` and `intercept`.
```
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.ols(formula, data=data).fit()
results.params
```
The intercept, about 64 inches, is the expected snowfall when `x=0`, which is the beginning of 1994.
The estimated slope indicates that total snowfall is increasing at a rate of about 0.5 inches per year.
`results` also provides `resid`, which is an array of residuals, that is, the differences between the data and the fitted line.
The standard deviation of the residuals is an estimate of `sigma`.
```
results.resid.std()
```
We'll use these estimates to choose prior distributions for the parameters.
## Priors
I'll use uniform distributions for all three parameters.
```
import numpy as np
from utils import make_uniform
qs = np.linspace(-0.5, 1.5, 51)
prior_slope = make_uniform(qs, 'Slope')
qs = np.linspace(54, 75, 41)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(20, 35, 31)
prior_sigma = make_uniform(qs, 'Sigma')
```
I made the prior distributions different lengths for two reasons. First, if we make a mistake and use the wrong distribution, it will be easier to catch the error if they are all different lengths.
Second, it provides more precision for the most important parameter, `slope`, and spends less computational effort on the least important, `sigma`.
In <<_ThreeParameterModel>> we made a joint distribution with three parameters. I'll wrap that process in a function:
```
from utils import make_joint
def make_joint3(pmf1, pmf2, pmf3):
    """Make a joint distribution with three parameters."""
    joint2 = make_joint(pmf2, pmf1).stack()
    joint3 = make_joint(pmf3, joint2).stack()
    return Pmf(joint3)
```
And use it to make a `Pmf` that represents the joint distribution of the three parameters.
```
prior = make_joint3(prior_slope, prior_inter, prior_sigma)
prior.head(3)
```
The index of `Pmf` has three columns, containing values of `slope`, `inter`, and `sigma`, in that order.
With three parameters, the size of the joint distribution starts to get big. Specifically, it is the product of the lengths of the prior distributions. In this example, the prior distributions have 51, 41, and 31 values, so the length of the joint prior is 64,821.
```
len(prior_slope), len(prior_inter), len(prior_sigma)
len(prior_slope) * len(prior_inter) * len(prior_sigma)
len(prior)
```
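`make_joint` and `make_joint3` come from the book's `utils` module; an equivalent joint uniform prior can be built with plain pandas using `MultiIndex.from_product` (same grid sizes as above):

```python
import numpy as np
import pandas as pd

slopes = np.linspace(-0.5, 1.5, 51)
inters = np.linspace(54, 75, 41)
sigmas = np.linspace(20, 35, 31)

# Cartesian product of the three parameter grids, one row per combination
index = pd.MultiIndex.from_product([slopes, inters, sigmas],
                                   names=['Slope', 'Intercept', 'Sigma'])
prior = pd.Series(1.0, index=index)
prior /= prior.sum()   # normalize so the uniform prior sums to 1

print(len(prior))      # → 64821
```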
## Likelihood
Now we'll compute the likelihood of the data.
To demonstrate the process, let's assume temporarily that the parameters are known.
```
inter = 64
slope = 0.51
sigma = 25
```
I'll extract the `xs` and `ys` from `data` as `Series` objects:
```
xs = data['x']
ys = data['y']
```
And compute the "residuals", which are the differences between the actual values, `ys`, and the values we expect based on `slope` and `inter`.
```
expected = slope * xs + inter
resid = ys - expected
```
According to the model, the residuals should follow a normal distribution with mean 0 and standard deviation `sigma`. So we can compute the likelihood of each residual value using `norm` from SciPy.
```
densities = norm(0, sigma).pdf(resid)
```
The result is an array of probability densities, one for each element of the dataset; their product is the likelihood of the data.
```
likelihood = densities.prod()
likelihood
```
As we saw in the previous chapter, the likelihood of any particular dataset tends to be small.
If it's too small, we might exceed the limits of floating-point arithmetic.
When that happens, we can avoid the problem by computing likelihoods under a log transform.
But in this example that's not necessary.
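For reference, here is what the log-transform version looks like: summing `logpdf` values and exponentiating at the end gives the same result as multiplying densities, while avoiding underflow for longer datasets (the residuals here are made up):

```python
import numpy as np
from scipy.stats import norm

resid = np.array([1.0, -20.0, 5.0, 30.0])    # example residuals
sigma = 25

loglik = norm(0, sigma).logpdf(resid).sum()  # sum of log densities
lik = np.exp(loglik)                         # same as densities.prod()
```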
## The Update
Now we're ready to do the update. First, we need to compute the likelihood of the data for each possible set of parameters.
```
likelihood = prior.copy()
for slope, inter, sigma in prior.index:
    expected = slope * xs + inter
    resid = ys - expected
    densities = norm.pdf(resid, 0, sigma)
    likelihood[slope, inter, sigma] = densities.prod()
```
This computation takes longer than many of the previous examples.
We are approaching the limit of what we can do with grid approximations.
Nevertheless, we can do the update in the usual way:
```
posterior = prior * likelihood
posterior.normalize()
```
The result is a `Pmf` with a three-level index containing values of `slope`, `inter`, and `sigma`.
To get the marginal distributions from the joint posterior, we can use `Pmf.marginal`, which we saw in <<_ThreeParameterModel>>.
```
posterior_slope = posterior.marginal(0)
posterior_inter = posterior.marginal(1)
posterior_sigma = posterior.marginal(2)
```
Here's the posterior distribution for `sigma`:
```
posterior_sigma.plot()
decorate(xlabel=r'$\sigma$, standard deviation of $\epsilon$',
         ylabel='PDF',
         title=r'Posterior marginal distribution of $\sigma$')
```
The most likely values for `sigma` are near 26 inches, which is consistent with our estimate based on the standard deviation of the data.
However, to say whether snowfall is increasing or decreasing, we don't really care about `sigma`. It is a "nuisance parameter", so-called because we have to estimate it as part of the model, but we don't need it to answer the questions we are interested in.
Nevertheless, it is good to check the marginal distributions to make sure
* The location is consistent with our expectations, and
* The posterior probabilities are near 0 at the extremes of the range, which indicates that the prior distribution covers all parameters with non-negligible probability.
In this example, the posterior distribution of `sigma` looks fine.
Here's the posterior distribution of `inter`:
```
posterior_inter.plot(color='C1')
decorate(xlabel='intercept (inches)',
         ylabel='PDF',
         title='Posterior marginal distribution of intercept')
from utils import summarize
summarize(posterior_inter)
```
The posterior mean is about 64 inches, which is the expected amount of snow during the year at the midpoint of the range, 1994.
And finally, here's the posterior distribution of `slope`:
```
posterior_slope.plot(color='C4')
decorate(xlabel='Slope (inches per year)',
         ylabel='PDF',
         title='Posterior marginal distribution of slope')
summarize(posterior_slope)
```
The posterior mean is about 0.51 inches, which is consistent with the estimate we got from least squared regression.
The 90% credible interval is from 0.1 to 0.9, which indicates that our uncertainty about this estimate is pretty high. In fact, there is still a small posterior probability (about 2%) that the slope is negative.
```
posterior_slope.make_cdf()(0)
```
However, it is more likely that my conjecture was wrong: we are actually getting more snow around here than we used to, increasing at a rate of about a half-inch per year, which is substantial. On average, we now get about 25 inches more snow per year than we did when I was young.
This example shows that with slow-moving trends and noisy data, your instincts can be misleading.
Now, you might suspect that I overestimate the amount of snow when I was young because I enjoyed it, and underestimate it now because I don't. But you would be mistaken.
During the Blizzard of 1978, we did not have a snowblower and my brother and I had to shovel. My sister got a pass for no good reason. Our driveway was about 60 feet long and three cars wide near the garage. And we had to shovel Mr. Crocker's driveway, too, for which we were not allowed to accept payment. Furthermore, as I recall it was during this excavation that I accidentally hit my brother with a shovel on the head, and it bled a lot because, you know, scalp wounds.
Anyway, the point is that I don't think I overestimate the amount of snow when I was young because I have fond memories of it.
## Optimization
The way we computed the likelihood in the previous section was pretty slow. The problem is that we looped through every possible set of parameters in the prior distribution, and there were more than 60,000 of them.
If we can do more work per iteration, and run the loop fewer times, we expect it to go faster.
In order to do that, I'll unstack the prior distribution:
```
joint3 = prior.unstack()
joint3.head(3)
```
The result is a `DataFrame` with `slope` and `intercept` down the rows and `sigmas` across the columns.
The following is a version of the update that takes the joint prior distribution in this form and returns the posterior distribution in the same form.
```
from utils import normalize
def update_optimized(prior, data):
    """Posterior distribution of regression parameters
    `slope`, `inter`, and `sigma`.

    prior: DataFrame representing the joint prior
    data: DataFrame with columns `x` and `y`

    returns: DataFrame representing the joint posterior
    """
    xs = data['x']
    ys = data['y']
    sigmas = prior.columns
    likelihood = prior.copy()
    for slope, inter in prior.index:
        expected = slope * xs + inter
        resid = ys - expected
        resid_mesh, sigma_mesh = np.meshgrid(resid, sigmas)
        densities = norm.pdf(resid_mesh, 0, sigma_mesh)
        likelihood.loc[slope, inter] = densities.prod(axis=1)
    posterior = prior * likelihood
    normalize(posterior)
    return posterior
```
This version loops through all possible pairs of `slope` and `inter`, so the loop runs about 2000 times.
```
len(prior_slope) * len(prior_inter)
```
Each time through the loop, it uses a grid mesh to compute the likelihood of the data for all values of `sigma`. The result is an array with one column for each data point and one row for each value of `sigma`. Taking the product across the columns (`axis=1`) yields the probability of the data for each value of sigma, which we assign as a row in `likelihood`.
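The mesh computation can be checked on a toy example: for each value of `sigma`, the row product matches the likelihood computed one `sigma` at a time (the residuals and `sigma` values below are made up):

```python
import numpy as np
from scipy.stats import norm

resid = np.array([1.0, -3.0, 2.0])      # residuals for 3 data points
sigmas = np.array([20.0, 25.0, 30.0])   # candidate sigma values

# one row per sigma, one column per data point
resid_mesh, sigma_mesh = np.meshgrid(resid, sigmas)
densities = norm.pdf(resid_mesh, 0, sigma_mesh)
liks = densities.prod(axis=1)           # one likelihood per sigma

# same result as computing each sigma separately
looped = np.array([norm.pdf(resid, 0, s).prod() for s in sigmas])
```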
```
%time posterior_opt = update_optimized(joint3, data)
```
We get the same result either way.
```
np.allclose(posterior, posterior_opt.stack())
```
But this version is about 25 times faster than the previous version.
This optimization works because many functions in NumPy and SciPy are written in C, so they run fast compared to Python. If you can do more work each time you call these functions, and less time running the loop in Python, your code will often run substantially faster.
In this version of the posterior distribution, `slope` and `inter` run down the rows and `sigma` runs across the columns. So we can use `marginal` to get the posterior joint distribution of `slope` and `intercept`.
```
from utils import marginal
posterior2 = marginal(posterior_opt, 1)
posterior2.head(3)
```
The result is a `Pmf` with two columns in the index.
To plot it, we have to unstack it.
```
joint_posterior = posterior2.unstack().transpose()
joint_posterior.head(3)
```
Here's what it looks like.
```
from utils import plot_contour
plot_contour(joint_posterior)
decorate(title='Posterior joint distribution of slope and intercept')
```
The ovals in the contour plot are aligned with the axes, which indicates that there is no correlation between `slope` and `inter` in the posterior distribution, which is what we expect since we centered the values.
In this example, the motivating question is about the slope of the line, so we answered it by looking at the posterior distribution of slope.
In the next example, the motivating question is about prediction, so we'll use the joint posterior distribution to generate predictive distributions.
## Marathon World Record
For many running events, if you plot the world record pace over time, the result is a remarkably straight line. People, [including me](http://allendowney.blogspot.com/2011/04/two-hour-marathon-in-2045.html), have speculated about possible reasons for this phenomenon.
People have also speculated about when, if ever, the world record time for the marathon will be less than two hours.
(Note: In 2019 Eliud Kipchoge ran the marathon distance in under two hours, which is an astonishing achievement that I fully appreciate, but for several reasons it did not count as a world record).
So, as a second example of Bayesian regression, we'll consider the world record progression for the marathon (for male runners), estimate the parameters of a linear model, and use the model to predict when a runner will break the two-hour barrier.
The following cell downloads a web page from Wikipedia that includes a table of marathon world records, and uses Pandas to put the data in a `DataFrame`.
```
url = 'https://en.wikipedia.org/wiki/Marathon_world_record_progression#Men'
tables = pd.read_html(url)
len(tables)
```
If that doesn't work, I have made a copy of this page available. The following cell downloads and parses it.
```
#import os
#datafile = 'Marathon_world_record_progression.html'
#if not os.path.exists(datafile):
# !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/Marathon_world_record_progression.html
#tables = pd.read_html(datafile)
#len(tables)
```
The first table is the one we want.
```
table = tables[0]
table.head(3)
```
We can use Pandas to parse the dates.
A few of them include notes that cause parsing problems, but the argument `errors='coerce'` tells Pandas to fill invalid dates with `NaT`, which is a version of `NaN` that represents "not a time".
```
table['date'] = pd.to_datetime(table['Date'], errors='coerce')
table['date'].head()
```
We can also use Pandas to parse the record times.
```
table['time'] = pd.to_timedelta(table['Time'])
```
And convert the times to paces in miles per hour.
```
table['y'] = 26.2 / table['time'].dt.total_seconds() * 3600
table['y'].head()
```
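As a sanity check on the units, the same conversion applied to a two-hour marathon time gives a pace of 13.1 mph:

```python
import pandas as pd

t = pd.to_timedelta('2:00:00')
mph = 26.2 / t.total_seconds() * 3600   # miles per hour
print(mph)                              # approximately 13.1
```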
The following function plots the results.
```
def plot_speeds(df):
    """Plot marathon world record speed as a function of time.

    df: DataFrame with columns `date` and `y` (speed in mph)
    """
    plt.axhline(13.1, color='C5', linestyle='dashed')
    plt.plot(df['date'], df['y'], 'o',
             label='World record speed',
             color='C1', alpha=0.5)
    decorate(xlabel='Date',
             ylabel='Speed (mph)')
```
Here's what the results look like.
The dashed line shows the speed required for a two-hour marathon, 13.1 miles per hour.
```
plot_speeds(table)
```
It's not a perfectly straight line. In the early years of the marathon, the record speed increased quickly; since about 1970, it has been increasing more slowly.
For our analysis, let's focus on the recent progression, starting in 1970.
```
recent = table['date'] > pd.to_datetime('1970')
data = table.loc[recent].copy()
data.head()
```
In the notebook for this chapter, you can see how I loaded and cleaned the data. The result is a `DataFrame` that contains the following columns (and additional information we won't use):
* `date`, which is a Pandas `Timestamp` representing the date when the world record was broken, and
* `y`, which records the record-breaking pace in mph.
Here's what the results look like, starting in 1970:
```
plot_speeds(data)
```
The data points fall approximately on a line, although it's possible that the slope is increasing.
To prepare the data for regression, I'll subtract away the approximate midpoint of the time interval, 1995.
```
offset = pd.to_datetime('1995')
timedelta = table['date'] - offset
```
When we subtract two `Timestamp` objects, the result is a "time delta", which we can convert to seconds and then to years.
```
data['x'] = timedelta.dt.total_seconds() / 3600 / 24 / 365.24
data['x'].describe()
```
As in the previous example, I'll use least squares regression to compute point estimates for the parameters, which will help with choosing priors.
```
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.ols(formula, data=data).fit()
results.params
```
The estimated intercept is about 12.5 mph, which is the interpolated world record pace for 1995. The estimated slope is about 0.015 mph per year, which is the rate at which the world record pace is increasing, according to the model.
Again, we can use the standard deviation of the residuals as a point estimate for `sigma`.
```
results.resid.std()
```
These parameters give us a good idea where we should put the prior distributions.
## The Priors
Here are the prior distributions I chose for `slope`, `intercept`, and `sigma`.
```
qs = np.linspace(0.012, 0.018, 51)
prior_slope = make_uniform(qs, 'Slope')
qs = np.linspace(12.4, 12.5, 41)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(0.01, 0.21, 31)
prior_sigma = make_uniform(qs, 'Sigma')
```
And here's the joint prior distribution.
```
prior = make_joint3(prior_slope, prior_inter, prior_sigma)
prior.head()
```
Now we can compute likelihoods as in the previous example:
```
xs = data['x']
ys = data['y']
likelihood = prior.copy()
for slope, inter, sigma in prior.index:
    expected = slope * xs + inter
    resid = ys - expected
    densities = norm.pdf(resid, 0, sigma)
    likelihood[slope, inter, sigma] = densities.prod()
```
Now we can do the update in the usual way.
```
posterior = prior * likelihood
posterior.normalize()
```
And unpack the marginals:
```
posterior_slope = posterior.marginal(0)
posterior_inter = posterior.marginal(1)
posterior_sigma = posterior.marginal(2)
posterior_sigma.plot();
```
Here's the posterior distribution of `inter`:
```
posterior_inter.plot(color='C1')
decorate(xlabel='intercept',
         ylabel='PDF',
         title='Posterior marginal distribution of intercept')
summarize(posterior_inter)
```
The posterior mean is about 12.5 mph, which is the world record marathon pace the model predicts for the midpoint of the date range, 1995.
And here's the posterior distribution of `slope`.
```
posterior_slope.plot(color='C4')
decorate(xlabel='Slope',
         ylabel='PDF',
         title='Posterior marginal distribution of slope')
summarize(posterior_slope)
```
The posterior mean is about 0.015 mph per year, or 0.15 mph per decade.
That's interesting, but it doesn't answer the question we're interested in: when will there be a two-hour marathon. To answer that, we have to make predictions.
## Prediction
To generate predictions, I'll draw a sample from the posterior distribution of parameters, then use the regression equation to combine the parameters with the data.
`Pmf` provides `choice`, which we can use to draw a random sample with replacement, using the posterior probabilities as weights.
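Under the hood, this is weighted sampling with replacement; here is a sketch of the same idea with plain NumPy and a made-up two-point posterior:

```python
import numpy as np

rng = np.random.default_rng(17)
params = [(0.014, 12.45, 0.05),   # (slope, inter, sigma) tuples
          (0.016, 12.46, 0.07)]
probs = [0.25, 0.75]              # made-up posterior probabilities

# sample indices with replacement, weighted by the posterior
idx = rng.choice(len(params), size=101, p=probs)
sample = [params[i] for i in idx]
```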
```
np.random.seed(17)
sample = posterior.choice(101)
```
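If you are not using the `Pmf` class, the same weighted sampling with replacement can be done directly with NumPy; a rough equivalent with invented posterior support and weights:

```python
import numpy as np

# Hypothetical posterior support (parameter tuples) and probabilities
params = [(0.010, 12.40, 0.10),
          (0.015, 12.45, 0.12),
          (0.020, 12.50, 0.15)]
probs = np.array([0.2, 0.5, 0.3])      # posterior weights, must sum to 1

rng = np.random.default_rng(17)
idx = rng.choice(len(params), size=101, replace=True, p=probs)
sample = [params[i] for i in idx]
print(len(sample))  # 101
```

Indices are drawn with probability proportional to the posterior weights, so frequent tuples in `sample` are the high-probability ones.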
The result is an array of tuples. Looping through the sample, we can use the regression equation to generate predictions for a range of `xs`.
```
xs = np.arange(-25, 50, 2)
pred = np.empty((len(sample), len(xs)))
for i, (slope, inter, sigma) in enumerate(sample):
    epsilon = norm(0, sigma).rvs(len(xs))
    pred[i] = inter + slope * xs + epsilon
```
Each prediction is an array with the same length as `xs`, which I store as a row in `pred`. So the result has one row for each sample and one column for each value of `x`.
We can use `percentile` to compute the 5th, 50th, and 95th percentiles in each column.
```
low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
```
To show the results, I'll plot the median of the predictions as a line and the 90% credible interval as a shaded area.
```
times = pd.to_timedelta(xs*365.24, unit='days') + offset
plt.fill_between(times, low, high,
color='C2', alpha=0.1)
plt.plot(times, median, color='C2')
plot_speeds(data)
```
The dashed line shows the two-hour marathon pace, which is 13.1 miles per hour.
Visually we can estimate that the prediction line hits the target pace between 2030 and 2040.
To make this more precise, we can use interpolation to see when the predictions cross the finish line. SciPy provides `interp1d`, which does linear interpolation by default.
```
from scipy.interpolate import interp1d
future = np.array([interp1d(high, xs)(13.1),
interp1d(median, xs)(13.1),
interp1d(low, xs)(13.1)])
dts = pd.to_timedelta(future*365.24, unit='day') + offset
pd.DataFrame(dict(datetime=dts),
index=['early', 'median', 'late'])
```
The median prediction is 2036, with 90% credible interval from 2032 to 2043. So there is about a 5% chance we'll see a two-hour marathon before 2032.
## Summary
This chapter introduces Bayesian regression, which is based on the same model as least squares regression; the difference is that it produces a posterior distribution for the parameters rather than point estimates.
In the first example, we looked at changes in snowfall in Norfolk County, Massachusetts, and concluded that we get more snowfall now than when I was young, contrary to my expectation.
In the second example, we looked at the progression of world record pace for the men's marathon, computed the joint posterior distribution of the regression parameters, and used it to generate predictions for the next 20 years.
These examples have three parameters, so it takes a little longer to compute the likelihood of the data.
With more than three parameters, it becomes impractical to use grid algorithms.
In the next few chapters, we'll explore other algorithms that reduce the amount of computation we need to do a Bayesian update, which makes it possible to use models with more parameters.
But first, you might want to work on these exercises.
## Exercises
**Exercise:** I am under the impression that it is warmer around here than it used to be. In this exercise, you can put my conjecture to the test.
We'll use the same dataset we used to model snowfall; it also includes daily low and high temperatures in Norfolk County, Massachusetts during my lifetime.
Here's the data.
```
df = pd.read_csv('2239075.csv', parse_dates=[2])
df.head(3)
```
Again, I'll create a column that contains the year part of the dates.
```
df['YEAR'] = df['DATE'].dt.year
```
This dataset includes `TMIN` and `TMAX`, which are the daily low and high temperatures in degrees F.
I'll create a new column with the daily midpoint of the low and high temperatures.
```
df['TMID'] = (df['TMIN'] + df['TMAX']) / 2
```
Now we can group by year and compute the mean of these daily temperatures.
```
tmid = df.groupby('YEAR')['TMID'].mean()
len(tmid)
```
Again, I'll drop the first and last years, which are incomplete.
```
complete = tmid.iloc[1:-1]
len(complete)
```
Here's what the time series looks like.
```
complete.plot(style='o', alpha=0.5)
decorate(xlabel='Year',
ylabel='Annual average of daily temperature (deg F)')
```
As we did with the snow data, I'll convert the `Series` to a `DataFrame` to prepare it for regression.
```
data = complete.reset_index()
data.head()
offset = data['YEAR'].mean().round()
offset
data['x'] = data['YEAR'] - offset
data['x'].mean()
data['y'] = data['TMID']
data['y'].std()
```
Now we can use StatsModels to estimate the parameters.
```
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.ols(formula, data=data).fit()
results.params
```
And compute the standard deviation of the parameters.
```
results.resid.std()
```
According to the least squares regression model, annual average temperature is increasing by about 0.044 degrees F per year.
To quantify the uncertainty of these parameters and generate predictions for the future, we can use Bayesian regression.
1. Use StatsModels to generate point estimates for the regression parameters.
2. Choose priors for `slope`, `intercept`, and `sigma` based on these estimates, and use `make_joint3` to make a joint prior distribution.
3. Compute the likelihood of the data and compute the posterior distribution of the parameters.
4. Extract the posterior distribution of `slope`. How confident are we that temperature is increasing?
5. Draw a sample of parameters from the posterior distribution and use it to generate predictions up to 2067.
6. Plot the median of the predictions and a 90% credible interval along with the observed data.
Does the model fit the data well? How much do we expect annual average temperatures to increase over my (expected) lifetime?
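For step 2, if you don't have `make_joint3` handy, one way to sketch the equivalent three-dimensional uniform joint prior with the standard library and NumPy (the parameter ranges below are placeholders, to be centered on your StatsModels estimates):

```python
import itertools
import numpy as np

# Placeholder parameter ranges; tune them around the point estimates
slopes = np.linspace(0.0, 0.1, 11)
inters = np.linspace(50.0, 52.0, 21)
sigmas = np.linspace(0.5, 2.0, 16)

# Uniform joint prior: one entry per (slope, inter, sigma) combination
index = list(itertools.product(slopes, inters, sigmas))
prior = np.full(len(index), 1.0 / len(index))
print(len(index))  # 3696
```

Iterating over `index` then plays the same role as iterating over `prior.index` in the marathon example.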
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
# X2K API Tutorial Notebook
February 25<sup>th</sup>, 2019
This Jupyter Notebook contains an interactive tutorial for **running the Expression2Kinases (X2K) API** using Python 3.
### Table of Contents
The notebook contains the following sections:
1. **<a href="#1">Using the X2K API</a>** - shows how to programmatically analyze your gene list in Python.
2. **<a href="#2">X2K API Documentation</a>** - overview of the input parameters and output of the API.
3. **<a href="#3">Interpreting the results</a>** - gives an overview of the structure and meaning of the analysis results.
* **<a href="#chea">Transcription Factor Enrichment Analysis</a>** (ChEA)
* **<a href="#g2n">Protein-Protein Interaction Expansion</a>** (G2N)
* **<a href="#kea">Kinase Enrichment Analysis</a>** (KEA)
* **<a href="#x2k">Expression2Kinases</a>** (X2K)
## 1. <span id="1">Using the X2K API</span>
The X2K API allows for programmatic analysis of an input gene list.
The `run_X2K()` function displayed below can be used to analyze a gene list and load the results in a Python dictionary by performing a **POST request**.
The function requires only one input, `input_genes`, **a list of gene symbols** to be analyzed. Additional optional parameters can be specified with the `options` parameter.
```
# Import modules
import requests
import json
##### Function to run X2K
### Input: a Python list of gene symbols
### Output: a dictionary containing the results of X2K, ChEA, G2N, KEA.
def run_X2K(input_genes, options={}):
    # Set default options
    all_options = {'included_organisms': 'both',
                   'TF-target gene background database used for enrichment': 'ChEA & ENCODE Consensus',
                   'sort transcription factors by': 'p-value',
                   'min_network_size': 10,
                   'number of top TFs': 10,
                   'path_length': 2,
                   'min_number_of_articles_supporting_interaction': 0,
                   'max_number_of_interactions_per_protein': 200,
                   'max_number_of_interactions_per_article': 100,
                   'enable_BioGRID': True,
                   'enable_IntAct': True,
                   'enable_MINT': True,
                   'enable_ppid': True,
                   'enable_Stelzl': True,
                   'kinase interactions to include': 'kea 2018',
                   'sort kinases by': 'p-value'}
    # Override defaults with user-supplied options
    all_options.update(options)
    all_options['text-genes'] = '\n'.join(input_genes)
    # Perform request & get response
    # (multipart form fields must be strings; booleans are sent as 'true'/'false')
    res = requests.post(
        'https://maayanlab.cloud/X2K/api',
        files=[(k, (None, str(v).lower() if isinstance(v, bool) else str(v)))
               for k, v in all_options.items()],
    )
    # Read response
    data = res.json()
    # Convert each result to a dictionary
    x2k_results = {key: json.loads(value) if key != 'input' else value for key, value in data.items()}
    # Clean results
    x2k_results['ChEA'] = x2k_results['ChEA']['tfs']
    x2k_results['G2N'] = x2k_results['G2N']['network']
    x2k_results['KEA'] = x2k_results['KEA']['kinases']
    x2k_results['X2K'] = x2k_results['X2K']['network']
    # Return results
    return x2k_results
# Get input genes
input_genes = ['Nsun3', 'Polrmt', 'Nlrx1', 'Sfxn5', 'Zc3h12c', 'Slc25a39', 'Arsg', 'Defb29', 'Ndufb6', 'Zfand1',
'Tmem77', '5730403B10Rik', 'Tlcd1', 'Psmc6', 'Slc30a6', 'LOC100047292', 'Lrrc40', 'Orc5l', 'Mpp7',
'Unc119b', 'Prkaca', 'Tcn2', 'Psmc3ip', 'Pcmtd2', 'Acaa1a', 'Lrrc1', '2810432D09Rik', 'Sephs2', 'Sac3d1',
'Tmlhe', 'LOC623451', 'Tsr2', 'Plekha7', 'Gys2', 'Arhgef12', 'Hibch', 'Lyrm2', 'Zbtb44', 'Entpd5',
'Rab11fip2', 'Lipt1', 'Intu', 'Anxa13', 'Klf12', 'Sat2', 'Gal3st2', 'Vamp8', 'Fkbpl', 'Aqp11', 'Trap1',
'Pmpcb', 'Tm7sf3', 'Rbm39', 'Bri3', 'Kdr', 'Zfp748', 'Nap1l1', 'Dhrs1', 'Lrrc56', 'Wdr20a', 'Stxbp2',
'Klf1', 'Ufc1', 'Ccdc16', '9230114K14Rik', 'Rwdd3', '2610528K11Rik', 'Aco1', 'Cables1', 'LOC100047214',
'Yars2', 'Lypla1', 'Kalrn', 'Gyk', 'Zfp787', 'Zfp655', 'Rabepk', 'Zfp650', '4732466D17Rik', 'Exosc4',
'Wdr42a', 'Gphn', '2610528J11Rik', '1110003E01Rik', 'Mdh1', '1200014M14Rik', 'AW209491', 'Mut',
'1700123L14Rik', '2610036D13Rik', 'Cox15', 'Tmem30a', 'Nsmce4a', 'Tm2d2', 'Rhbdd3', 'Atxn2', 'Nfs1',
'3110001I20Rik', 'BC038156', 'LOC100047782', '2410012H22Rik', 'Rilp', 'A230062G08Rik', 'Pttg1ip', 'Rab1',
'Afap1l1', 'Lyrm5', '2310026E23Rik', 'C330002I19Rik', 'Zfyve20', 'Poli', 'Tomm70a', 'Slc7a6os', 'Mat2b',
'4932438A13Rik', 'Lrrc8a', 'Smo', 'Nupl2', 'Trpc2', 'Arsk', 'D630023B12Rik', 'Mtfr1', '5730414N17Rik',
'Scp2', 'Zrsr1', 'Nol7', 'C330018D20Rik', 'Ift122', 'LOC100046168', 'D730039F16Rik', 'Scyl1',
'1700023B02Rik', '1700034H14Rik', 'Fbxo8', 'Paip1', 'Tmem186', 'Atpaf1', 'LOC100046254', 'LOC100047604',
'Coq10a', 'Fn3k', 'Sipa1l1', 'Slc25a16', 'Slc25a40', 'Rps6ka5', 'Trim37', 'Lrrc61', 'Abhd3', 'Gbe1',
'Parp16', 'Hsd3b2', 'Esm1', 'Dnajc18', 'Dolpp1', 'Lass2', 'Wdr34', 'Rfesd', 'Cacnb4', '2310042D19Rik',
'Srr', 'Bpnt1', '6530415H11Rik', 'Clcc1', 'Tfb1m', '4632404H12Rik', 'D4Bwg0951e', 'Med14', 'Adhfe1',
'Thtpa', 'Cat', 'Ell3', 'Akr7a5', 'Mtmr14', 'Timm44', 'Sf1', 'Ipp', 'Iah1', 'Trim23', 'Wdr89', 'Gstz1',
'Cradd', '2510006D16Rik', 'Fbxl6', 'LOC100044400', 'Zfp106', 'Cd55', '0610013E23Rik', 'Afmid', 'Tmem86a',
'Aldh6a1', 'Dalrd3', 'Smyd4', 'Nme7', 'Fars2', 'Tasp1', 'Cldn10', 'A930005H10Rik', 'Slc9a6', 'Adk',
'Rbks', '2210016F16Rik', 'Vwce', '4732435N03Rik', 'Zfp11', 'Vldlr', '9630013D21Rik', '4933407N01Rik',
'Fahd1', 'Mipol1', '1810019D21Rik', '1810049H13Rik', 'Tfam', 'Paics', '1110032A03Rik', 'LOC100044139',
'Dnajc19', 'BC016495', 'A930041I02Rik', 'Rqcd1', 'Usp34', 'Zcchc3', 'H2afj', 'Phf7', '4921508D12Rik',
'Kmo', 'Prpf18', 'Mcat', 'Txndc4', '4921530L18Rik', 'Vps13b', 'Scrn3', 'Tor1a', 'AI316807', 'Acbd4',
'Fah', 'Apool', 'Col4a4', 'Lrrc19', 'Gnmt', 'Nr3c1', 'Sip1', 'Ascc1', 'Fech', 'Abhd14a', 'Arhgap18',
'2700046G09Rik', 'Yme1l1', 'Gk5', 'Glo1', 'Sbk1', 'Cisd1', '2210011C24Rik', 'Nxt2', 'Notum', 'Ankrd42',
'Ube2e1', 'Ndufv1', 'Slc33a1', 'Cep68', 'Rps6kb1', 'Hyi', 'Aldh1a3', 'Mynn', '3110048L19Rik', 'Rdh14',
'Proz', 'Gorasp1', 'LOC674449', 'Zfp775', '5430437P03Rik', 'Npy', 'Adh5', 'Sybl1', '4930432O21Rik',
'Nat9', 'LOC100048387', 'Mettl8', 'Eny2', '2410018G20Rik', 'Pgm2', 'Fgfr4', 'Mobkl2b', 'Atad3a',
'4932432K03Rik', 'Dhtkd1', 'Ubox5', 'A530050D06Rik', 'Zdhhc5', 'Mgat1', 'Nudt6', 'Tpmt', 'Wbscr18',
'LOC100041586', 'Cdk5rap1', '4833426J09Rik', 'Myo6', 'Cpt1a', 'Gadd45gip1', 'Tmbim4', '2010309E21Rik',
'Asb9', '2610019F03Rik', '7530414M10Rik', 'Atp6v1b2', '2310068J16Rik', 'Ddt', 'Klhdc4', 'Hpn', 'Lifr',
'Ovol1', 'Nudt12', 'Cdan1', 'Fbxo9', 'Fbxl3', 'Hoxa7', 'Aldh8a1', '3110057O12Rik', 'Abhd11', 'Psmb1',
'ENSMUSG00000074286', 'Chpt1', 'Oxsm', '2310009A05Rik', '1700001L05Rik', 'Zfp148', '39509', 'Mrpl9',
'Tmem80', '9030420J04Rik', 'Naglu', 'Plscr2', 'Agbl3', 'Pex1', 'Cno', 'Neo1', 'Asf1a', 'Tnfsf5ip1',
'Pkig', 'AI931714', 'D130020L05Rik', 'Cntd1', 'Clec2h', 'Zkscan1', '1810044D09Rik', 'Mettl7a', 'Siae',
'Fbxo3', 'Fzd5', 'Tmem166', 'Tmed4', 'Gpr155', 'Rnf167', 'Sptlc1', 'Riok2', 'Tgds', 'Pms1', 'Pitpnc1',
'Pcsk7', '4933403G14Rik', 'Ei24', 'Crebl2', 'Tln1', 'Mrpl35', '2700038C09Rik', 'Ubie', 'Osgepl1',
'2410166I05Rik', 'Wdr24', 'Ap4s1', 'Lrrc44', 'B3bp', 'Itfg1', 'Dmxl1', 'C1d']
# Run the X2K analysis
x2k_results = run_X2K(input_genes)
x2k_results.keys()
```
## 2. <span id="2">X2K API Documentation</span>
### 2.1 API Inputs
A **full list of the input parameters** for the `run_X2K()` function is available below.
The optional parameters can be provided to the function in the `options` dictionary.
<table id="parameter-table">
<tr>
<th style='text-align=left;'>Parameter</th>
<th style='text-align=left;'>Step</th>
<th style='text-align=left;'>Description</th>
<th style='text-align=left;'>Notes</th>
</tr>
<tr>
<td style='text-align=left;'>**input_genes** (required)</td>
<td style='text-align=left;'>X2K</td>
<td style='text-align=left;'>Contains the input gene set for the X2K analysis.</td>
<td style='text-align=left;'>A list of strings representing the input gene symbols.</td>
</tr>
<tr>
<td style='text-align=left;'>*included_organisms* (optional)</td>
<td style='text-align=left;'>ChEA</td>
<td style='text-align=left;'>The organism from which TF-target interaction data should be integrated.</td>
<td style='text-align=left;'>One of `('human_only', 'mouse_only', 'both')`. Default `'both'`.</td>
</tr>
<tr>
<td style='text-align=left;'>*TF-target gene background database used for enrichment* (optional)</td>
<td style='text-align=left;'>ChEA</td>
<td style='text-align=left;'>The database from which TF-target interaction data should be integrated.</td>
<td style='text-align=left;'>One of `('ChEA 2015', 'ENCODE 2015', 'ChEA & ENCODE Consensus', 'Transfac and Jaspar', 'ChEA 2016', 'ARCHS4 TFs Coexp', 'CREEDS', 'Enrichr Submissions TF-Gene Coocurrence')`. Default `'ChEA & ENCODE Consensus'`.</td>
</tr>
<tr>
<td style='text-align=left;'>*sort transcription factors by* (optional)
<td style='text-align=left;'>ChEA</td>
<td style='text-align=left;'>The method used to sort the top Transcription Factors identified by ChEA.</td>
<td style='text-align=left;'>One of `('p-value', 'rank', 'combined score')`. Default `'p-value'`.</td>
</tr>
<tr>
<td style='text-align=left;'>*path_length* (optional)</td>
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The maximum Protein-Protein Interaction path length for the network expansion.</td>
<td style='text-align=left;'>Integer, default `2`.</td>
</tr>
<tr>
<td style='text-align=left;'>*min_network_size* (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The minimum size of the Protein-Protein interaction network generated using Genes2Networks.</td>
<td style='text-align=left;'>Integer, default `10`.</td>
</tr>
<tr>
<td style='text-align=left;'>*min_number_of_articles_supporting_interaction* (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The minimum number of published articles supporting a Protein-Protein Interaction for the expanded subnetwork.</td>
<td style='text-align=left;'>Integer, default `0`.</td>
</tr>
<tr>
<td style='text-align=left;'>*max_number_of_interactions_per_protein* (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The maximum number of physical interactions allowed for the proteins in the expanded subnetwork.</td>
<td style='text-align=left;'>Integer, default `200`.</td>
</tr>
<tr>
<td style='text-align=left;'>*max_number_of_interactions_per_article* (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The maximum number of physical interactions reported in each published article.</td>
<td style='text-align=left;'>Integer, default `100`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_Biocarta (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_BioGRID (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'true'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_BioPlex (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_DIP (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_huMAP (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_InnateDB (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_IntAct (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'true'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_KEGG (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_MINT (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'true'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_ppid (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'true'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_SNAVI (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_iREF (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_Stelzl (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'true'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_vidal (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_BIND (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_figeys (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>enable_HPRD (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The Protein-Protein Interaction databases to integrate for generation of the expanded subnetwork.</td>
<td style='text-align=left;'>Either `'true'` or `'false'`. Default `'false'`.</td>
</tr>
<tr>
<td style='text-align=left;'>*number_of_results* (optional)
<td style='text-align=left;'>G2N</td>
<td style='text-align=left;'>The maximum network size of the expanded network generated using Genes2Networks.</td>
<td style='text-align=left;'>Integer, default `50`.</td>
</tr>
<tr>
<td style='text-align=left;'>*kinase interactions to include* (optional)
<td style='text-align=left;'>KEA</td>
<td style='text-align=left;'>The kinase interaction databases to include.</td>
<td style='text-align=left;'>One of `('kea 2018', 'ARCHS4', 'iPTMnet', 'NetworkIN', 'Phospho.ELM', 'Phosphopoint', 'PhosphoPlus', 'MINT')`. Default `'kea 2018'`.</td>
</tr>
<tr>
<td style='text-align=left;'>*sort_kinases_by* (optional)
<td style='text-align=left;'>KEA</td>
<td style='text-align=left;'>The method used to sort the top kinases identified by KEA.</td>
<td style='text-align=left;'>One of `('p-value', 'rank', 'combined score')`. Default `'p-value'`.</td>
</tr>
</table>
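To change a default, pass only the keys you want to override in the `options` dictionary; `run_X2K()` merges them over the defaults before building the POST request. A minimal sketch of that merge (field names and values are illustrative, no network call is made):

```python
# Sketch of how `options` overrides merge with the defaults inside run_X2K()
defaults = {'number of top TFs': 10,
            'path_length': 2,
            'sort kinases by': 'p-value'}
options = {'number of top TFs': 5}        # user-supplied override

merged = dict(defaults)
merged.update(options)                    # options win over defaults
merged['text-genes'] = '\n'.join(['TP53', 'MYC', 'EGFR'])

# Multipart form fields are sent as strings
fields = [(k, (None, str(v))) for k, v in merged.items()]
print(merged['number of top TFs'])  # 5
```

Any key left out of `options` keeps its default, so a call like `run_X2K(input_genes, options={'number of top TFs': 5})` changes only that setting.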
### 2.2 API Output
The `run_X2K()` function returns its results as a `dict` containing **four keys**, whose contents are described below.
<table id="parameter-table">
<tr>
<th style='text-align: left;'>Key</th>
<th style='text-align: left;'>Notes</th>
<th style='text-align: left;'>Contents</th>
</tr>
<tr>
<td style='text-align: left;'>**ChEA**</td>
<td style='text-align: left;'>Contains the results of the **Transcription Factor Enrichment Analysis**, generated using ChEA.</td>
<td style='text-align: left;'>A `list` of `dict`s containing information on the top TFs predicted to regulate the input genes. </td>
</tr>
<tr>
<td style='text-align: left;'>**G2N**</td>
<td style='text-align: left;'>Contains the results of the **Protein-Protein Interaction Expansion**, generated using Genes2Networks (G2N).</td>
<td style='text-align: left;'>A `dict` containing two keys:
<ul>
<li><b>nodes</b>: A `list` containing information on the nodes of the expanded subnetwork.</li>
<li><b>interactions</b>: A `list` containing information on the edges of the expanded subnetwork.</li>
</ul>
</td>
</tr>
<tr>
<td style='text-align: left;'>**KEA**</td>
<td style='text-align: left;'>Contains the results of the **Kinase Enrichment Analysis**, generated using KEA.</td>
<td style='text-align: left;'>A `list` of `dict`s containing information on the top kinases predicted to regulate the subnetwork identified by G2N. </td>
</tr>
<tr>
<td style='text-align: left;'>**X2K**</td>
<td style='text-align: left;'>Contains the **Expression2Kinases network**, generated by integrating the results of ChEA, G2N and KEA.</td>
<td style='text-align: left;'>A `dict` containing two keys:
<ul>
<li><b>nodes</b>: A `list` containing information on the nodes of the final X2K network.</li>
<li><b>interactions</b>: A `list` containing information on the edges of the final X2K network.</li>
</ul>
</td>
</tr>
</table>
## 3. <span id="3">Interpreting the Results</span>
### 3.1 <span id="chea">ChEA results</span>
The results for the ChEA analysis can be accessed in <code>x2k_results['ChEA']</code>. Here, the results are converted to a pandas DataFrame for easier interpretation.
```
# Import pandas
import pandas as pd
# Read results
chea_dataframe = pd.DataFrame(x2k_results['ChEA'])
chea_dataframe.head()
```
**Table 1 | Results of the ChEA analysis.** Each row represents a transcription factor predicted to regulate the input gene list.
### 3.2 <span id="g2n">G2N Results</span>
The results for the G2N analysis can be accessed in <code>x2k_results['G2N']</code>.
The results are stored in a dictionary containing two keys:
* `nodes`
* `interactions`
```
# G2N nodes dataframe
g2n_nodes_dataframe = pd.DataFrame(x2k_results['G2N']['nodes']).drop('pvalue', axis=1)
g2n_nodes_dataframe.head()
```
**Table 2 | Nodes of the Genes2Networks expanded subnetwork.** Each row represents a node in the expanded subnetwork. The type column indicates whether the node is a Transcription Factor identified by ChEA, or an intermediate protein.
```
# G2N edges dataframe
g2n_edges_dataframe = pd.DataFrame(x2k_results['G2N']['interactions'])
g2n_edges_dataframe.head()
```
**Table 3 | Edges of the Genes2Networks expanded subnetwork.** Each row represents an edge in the expanded subnetwork generated by G2N on the top transcription factors identified by ChEA.
### 3.3 <span id="kea">KEA Results</span>
The results for the KEA analysis can be accessed in <code>x2k_results['KEA']</code>.
```
# KEA Results
kea_dataframe = pd.DataFrame(x2k_results['KEA'])
kea_dataframe.head()
```
**Table 4 | Results of the KEA analysis.** Each row represents a protein kinase predicted to regulate the expanded subnetwork generated by G2N.
### 3.4 <span id="x2k">X2K Results</span>
The results for the X2K analysis can be accessed in <code>x2k_results['X2K']</code>.
The results are stored in a dictionary containing two keys:
* `nodes`
* `interactions`
```
# X2K nodes dataframe
x2k_nodes_dataframe = pd.DataFrame(x2k_results['X2K']['nodes']).drop('pvalue', axis=1)
x2k_nodes_dataframe.head()
```
**Table 5 | Nodes of the final Expression2Kinases network.** Each row represents a node in the final X2K network. The type column indicates whether the node is a Transcription Factor identified by ChEA, an intermediate protein identified by G2N, or a protein kinase identified by KEA.
```
# X2K edges dataframe
x2k_edges_dataframe = pd.DataFrame(x2k_results['X2K']['interactions'])
x2k_edges_dataframe.head()
```
**Table 6 | Edges of the final Expression2Kinases network.** Each row represents an edge in the final network identified by integrating the results of ChEA, G2N, and KEA.
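Because the networks are returned as plain `nodes` and `interactions` lists, they are easy to reshape for downstream analysis, for example into an adjacency structure. A sketch with an invented mini-network (the `source`/`target` field names here are assumptions for illustration, not guaranteed by the API):

```python
from collections import defaultdict

# Invented mini-network in the same spirit as the X2K output tables
nodes = [{'name': 'TF1', 'type': 'tf'},
         {'name': 'P1', 'type': 'other'},
         {'name': 'K1', 'type': 'kinase'}]
interactions = [{'source': 'TF1', 'target': 'P1'},
                {'source': 'P1', 'target': 'K1'}]

adjacency = defaultdict(set)
for edge in interactions:
    adjacency[edge['source']].add(edge['target'])
    adjacency[edge['target']].add(edge['source'])   # undirected view

print(sorted(adjacency['P1']))  # ['K1', 'TF1']
```

The same loop, pointed at `x2k_results['X2K']['interactions']` (with whatever key names the response actually uses), yields a neighbor map for each node of the final network.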
```
import pandas as pd
import numpy as np
data_click = pd.read_csv('../train_preliminary/click_log.csv')  # click log
# data_user = pd.read_csv('../train_preliminary/user.csv')  # user table
data_ad = pd.read_csv('../train_preliminary/ad.csv')  # ad table
data_click = data_click.merge(data_ad,on = 'creative_id',how = 'left')
del data_ad
# Feature extraction on advertiser_id
industry_click = data_click[['user_id', 'advertiser_id', 'click_times']].sort_values(by = 'user_id')
# industry_click = industry_click[data_click['industry']!='\\N']
industry_click = industry_click.groupby(['user_id','advertiser_id']).agg({'click_times':sum})
industry_click = industry_click.reset_index()
def func_log(x):
    return np.log(x + 1)
industry_click['advertiser_id' + '_log'] = industry_click['click_times'].transform(func_log)
# Extract the top three most-clicked advertiser_ids per user, with their click counts
head_x = industry_click.sort_values(['click_times'],ascending=False).groupby(['user_id']).head(3)
head_x = head_x.sort_values('user_id')
del industry_click
del data_click
def fun1(x):
    x = list(x.values.reshape([-1]))[:6]
    x = x[:6] + [0] * (6 - len(x))
    return pd.DataFrame([x])
tops = head_x.groupby('user_id')[['advertiser_id', 'click_times']].apply(fun1)
columns = []
for i in range(6):
    columns.append('advertiser_id' + str(i))
tops.columns = columns
tops = tops.reset_index()
tops = tops.drop(['level_1'],axis = 1)
tops.to_csv('advertiser_id_feat.csv',index=False)
del tops
# Feature extraction on the industry field
industry_click = data_click[['user_id', 'industry', 'click_times']].sort_values(by = 'user_id')
industry_click = industry_click[data_click['industry']!='\\N']
industry_click['click_times'] = industry_click.groupby(['user_id','industry'])['click_times'].transform(np.sum)
def func_x2(x):
    max_click = np.max(x['click_times'])
    x = x[x['click_times'] == max_click]
    return x.iloc[-1, :]
feat = industry_click.groupby(['user_id']).apply(func_x2)
feat.columns = ['user_id','industry','max_click_times']
feat['max_click_times_log'] = feat['max_click_times'].transform(np.log)
feat.to_csv('industry_feat.csv',index=False)
print(max(feat['industry']))
category_click = data_click[['user_id', 'product_category', 'click_times']].sort_values(by = ['user_id'])
# category_click = category_click.iloc[:1000,:]
category_click['click_times'] = category_click.groupby(['user_id','product_category'])['click_times'].transform(np.sum)
def func1(x):
    x = x.drop_duplicates('product_category', keep='last')
    x = x.set_index('product_category')
    data = {}
    for i in range(1, 19):
        if i in list(x.index):
            data[i] = [x.loc[i]['click_times']]
        else:
            data[i] = 0
    return pd.DataFrame(data)
feat = category_click.groupby(['user_id']).apply(func1)
feat.reset_index(level=0, inplace=True)
feat.index = range(len(feat))
# feat.to_csv('category_feat.csv',index=False)
columns = ['user_id']
for i in range(1, 19):
    columns.append('category' + str(i))
feat.columns = columns
def func_log(x):
    return np.log(x + 1)
for name in columns[1:]:
    feat[name + '_log'] = feat[name].transform(func_log)
feat.to_csv('category_feat_addlog.csv',index=False)
click_feat = pd.read_csv('clicks_feat.csv')
category_feat = pd.read_csv('category_feat_addlog.csv')
industry_feat = pd.read_csv('industry_feat.csv')
# Merge the features together
features = click_feat.merge(category_feat,on='user_id',how='left').merge(industry_feat,on='user_id',how='left')
# Add the labels
user = pd.read_csv('../../data/train_preliminary/user.csv')
data = features.merge(user,on='user_id',how='left')
# del data['user_id']
# Add the train_cat_max_id_feat feature (index of the most-clicked category per user)
def func_cat(x):
    x = x[['category1', 'category2', 'category3', 'category4', 'category5', 'category6', 'category7', 'category8', 'category9', 'category10', 'category11', 'category12', 'category13', 'category14', 'category15', 'category16', 'category17', 'category18']]
    d = {}
    d['cat_max_id'] = [np.argmax(x.values) + 1]
    return pd.DataFrame(d)
cat_max_id_feat = data.groupby('user_id').apply(func_cat)
cat_max_id_feat.reset_index(level=0, inplace=True)
cat_max_id_feat.index = range(len(cat_max_id_feat))
cat_max_id_feat.to_csv('train_cat_max_id_feat.csv',index=False)
data = data.merge(cat_max_id_feat,on='user_id',how='left')
data.to_csv('train_feat_user.csv',index=False)
```
# CloneSig package
In this notebook, we show a few of CloneSig's main functions, and the associated parameters that can be tuned in order to adjust the results if needed. Runtime should be in the range of minutes on a personal computer (see runtimes for each cell); of course runtime increases when the number of mutations increases.
## Data simulation
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import os
from seaborn.distributions import _freedman_diaconis_bins
from clonesig.data_loader import SimLoader
from clonesig.run_clonesig import get_MU, run_clonesig
sns.set_context('poster')
```
The class ```SimLoader``` is the one to use to simulate data. Two inputs are required:
- `N`, the number of wanted mutations
- `J`, the number of clones
Other parameters are **optional** and provide further control over the simulated tumor sample.
- `inputMU` is the input signature matrix, containing $L$ signatures, and of size $L\times K$, with $K$ the number of possible mutation types (typically 96)
- `xi_param`, `pi_param`, `phi_param`, `rho_param`, `purity_param` allow the user to set values for parameters of the model $\xi, \pi, \phi, \rho$ and the purity. Some of them allow some indications for the simulation (see [class docstring](https://github.com/judithabk6/clonesig/blob/master/clonesig/data_loader.py#L108) for more details)
- `change_sig_activity` is a boolean indicating whether the signature activity should vary between the clones
- `dip_prop` indicates the proportion of the simulated genome (mutations) that should be diploid
Then two methods must be called to assign the parameters and simulate the mutations, and finally a method to get a table describing the mutations (which can be used as input to fit CloneSig's model).
```
N = 2000
J = 3
sim_object = SimLoader(N, J, rho_param=100, cn=False)
sim_object._get_unobserved_nodes()
sim_object._get_observed_nodes()
# to get the mutation table
sim_mutation_table = sim_object._get_data_df()
sim_mutation_table.head()
```
We can represent the simulated data (here the true CCF for each mutation - unknown in the case of real data)
```
purity = sim_object.purity
sim_mutation_table = sim_mutation_table.assign(vaf=sim_mutation_table.var_counts /
(sim_mutation_table.ref_counts + sim_mutation_table.var_counts))
sim_mutation_table = sim_mutation_table.assign(
total_cn=lambda x: x['minor_cn'] + x['major_cn'])
sim_mutation_table = sim_mutation_table.assign(
vaf_cn=sim_mutation_table.vaf * sim_mutation_table['total_cn'] / sim_mutation_table['mut_cn'])
sim_mutation_table = sim_mutation_table.assign(
vaf_purity=sim_mutation_table.apply(
lambda x: x['vaf']/purity *
((1 - purity) * 2 + purity * x['total_cn']) /
x['mut_cn'], axis=1))
# save data to provide an example csv
sim_mutation_table.to_csv('example_data.csv', index=False)
with open('purity.txt', 'w') as f:
f.write('{}'.format(purity))
sns.distplot(sim_mutation_table.vaf_purity, kde=False, label='all')
for i in range(J):
sns.distplot(sim_mutation_table[sim_mutation_table.clone==i].vaf_purity, kde=False, label='clone {}'.format(i))
plt.legend(bbox_to_anchor=(1.05, 0.5), loc=2, borderaxespad=0.)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(30, 2))
sns.heatmap(sim_object.pi, vmin=0, vmax=np.max(sim_object.pi), ax=ax, cmap='GnBu', annot=False, fmt='.2f', annot_kws={"size": 12})
ax.set_ylim((-0.5, 3.5))
ax.set_ylabel('clone index')
ax.set_xlabel('signature index')
```
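For reference, the `vaf_purity` column computed above is the usual cancer cell fraction (CCF) estimate: writing $p$ for the purity, $C_{tot}$ for the total tumor copy number, $C_{mut}$ for the mutation copy number, and assuming a normal copy number of 2,

$$\mathrm{CCF} = \frac{\mathrm{VAF}}{p} \cdot \frac{2\,(1-p) + p\,C_{tot}}{C_{mut}}$$

which matches the `apply` expression above term by term.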
## Running CloneSig on that simulated sample
### run CloneSig
The model fitting step of CloneSig is handled by the class [Estimator](https://github.com/judithabk6/clonesig/blob/master/clonesig/estimator.py#L50). However, the complete pipeline includes other important steps, such as selecting the number of clones and applying the statistical test to determine the significance of a signature change between the different clones. All those steps are wrapped in a single function, [run_clonesig](https://github.com/judithabk6/clonesig/blob/master/clonesig/run_clonesig.py#L322). The principal inputs taken by the function are
- `T` : iterable of length N with the trinucleotide context of each mutation, numbered from 0 to 95
- `B` : iterable of length N with the variant allele read count for each mutation
- `D` : iterable of length N with the total read count for each mutation
- `C_normal` : iterable of length N copy number of non-tumor cells in the sample at each mutation locus
- `C_tumor_tot` : iterable of length N the total copy number of tumor cells in the sample at each mutation locus
- `C_tumor_minor` : iterable of length N the minor copy number of tumor cells in the sample at each mutation locus. If this info is not available, set it to zero so that clonesig considers all possible genotypes
- `purity` : float in [0, 1] an estimate of the tumor purity of the sample
- `inputMU` : array-like (L, 96) known L signatures to be fit by clonesig.
Other options are available and are detailed in the docstring of the function. `run_clonesig` returns several objects:
new_est, lr, p, new_inputMU, cst_est, future_sigs
- the fitted Estimator object
- the log-likelihood ratio of the statistical test (the test statistic)
- the associated p-value
- the MU matrix used for the fit (useful when `run_clonesig` is used in prefit mode, where a first run preselects a subset of signatures that are subsequently fit to the data)
- the alternative Estimator object, with a signature mixture held constant across all clones
- a boolean vector describing which signatures were retained for the fit when the "prefit" option is used.
```
default_MU = get_MU()
print('size of the default MU matrix:', default_MU.shape)
est, lr, pval, new_inputMU, cst_est, future_sigs = run_clonesig(
np.array(sim_mutation_table.trinucleotide),
np.array(sim_mutation_table.var_counts),
np.array(sim_mutation_table.var_counts + sim_mutation_table.ref_counts),
np.array(sim_mutation_table.normal_cn),
np.array(sim_mutation_table.total_cn),
np.array(sim_mutation_table.total_cn - sim_mutation_table.major_cn),
sim_object.purity, default_MU)
```
### Compare CloneSig results with simulations
```
est_table = pd.DataFrame({'trinucleotide': est.T, 'var_counts': est.B,
'minor_cn': est.C_tumor_minor,
'major_cn': est.C_tumor_major,
'total_cn': est.C_tumor_tot, 'depth': est.D,
'clone': est.qun.argmax(axis=1),
'signature': np.arange(default_MU.shape[0])[est.rnus[np.arange(est.N), est.qun.argmax(axis=1), :].argmax(axis=1)],
'mult': est.vmnu[np.arange(est.N), est.qun.argmax(axis=1), :].argmax(axis=1) +1})
est_table = est_table.assign(vaf=est_table.var_counts / est_table.depth)
est_table = est_table.assign(vaf_cn=est_table.vaf * est_table['total_cn'] / est_table['mult'])
est_table = est_table.assign(vaf_purity=est_table.apply(lambda x: x['vaf']/est.p * ((1 - est.p) * 2 + est.p * x['total_cn']) / x['mult'], axis=1))
nb_bins = min(_freedman_diaconis_bins(est_table.vaf_purity) * 2, 50)
final_bins = np.linspace(min(est_table.vaf_purity), max(est_table.vaf_purity), nb_bins)
est_table = est_table.assign(trinucleotide=pd.Categorical(est_table.trinucleotide, ordered=True, categories=range(96)))
fig, ax = plt.subplots(ncols=2, figsize=(15,3))
sns.distplot(sim_mutation_table.vaf_purity, kde=False, label='all', ax=ax[0])
for i in range(J):
sns.distplot(sim_mutation_table[sim_mutation_table.clone==i].vaf_purity, kde=False, label='clone {}'.format(i), ax=ax[0])
ax[0].legend(bbox_to_anchor=(1.05, 0.8), loc=2, borderaxespad=0.)
ax[0].set_title('simulation')
sns.distplot(est_table.vaf_purity, kde=False, label='all', ax=ax[1])
for i in range(est.J):
sns.distplot(est_table[est_table.clone==i].vaf_purity, kde=False, label='clone {}'.format(i), ax=ax[1])
plt.legend(bbox_to_anchor=(1.05, 0.8), loc=2, borderaxespad=0.)
ax[1].set_title('reconstruction')
plt.subplots_adjust(wspace = 0.8)
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(30, 6))
sns.heatmap(sim_object.pi, vmin=0, vmax=1, ax=ax[0], cmap='GnBu')
sns.heatmap(est.pi, vmin=0, vmax=1, ax=ax[1], cmap='GnBu')
ax[0].set_ylim((-0.5, 3.5))
ax[0].set_ylabel('clone index')
ax[0].set_xlabel('signature index')
ax[0].set_title('simulation')
ax[1].set_ylim((-0.5, 3.5))
ax[1].set_ylabel('clone index')
ax[1].set_xlabel('signature index')
ax[1].set_title('reconstruction')
plt.subplots_adjust(hspace = 1.5)
```
### Alternative parameters to adjust
#### the input signatures
You can use a subset of signatures as input, either by manually selecting the ones you want from the default matrix provided by the function `get_MU`, or by using the cancer-type-to-signature matching provided in Alexandrov's package SigProfiler (see [this script](https://github.com/judithabk6/Clonesig_analysis/blob/master/signature_code/sigprofiler_alexandrov18_data.py) for the precise procedure). In that case, pass `get_MU` the parameter `cancer_type` with an integer indexing the desired cancer type. Here is the list of the 65 signatures considered in the COSMIC v3 version
>Signature Subs-01, Signature Subs-02, Signature Subs-03, Signature Subs-04, Signature Subs-05, Signature Subs-06, Signature Subs-07a, Signature Subs-07b, Signature Subs-07c, Signature Subs-07d, Signature Subs-08, Signature Subs-09, Signature Subs-10a, Signature Subs-10b, Signature Subs-11, Signature Subs-12, Signature Subs-13, Signature Subs-14, Signature Subs-15, Signature Subs-16, Signature Subs-17a, Signature Subs-17b, Signature Subs-18, Signature Subs-19, Signature Subs-20, Signature Subs-21, Signature Subs-22, Signature Subs-23, Signature Subs-24, Signature Subs-25, Signature Subs-26, Signature Subs-27, Signature Subs-28, Signature Subs-29, Signature Subs-30, Signature Subs-31, Signature Subs-32, Signature Subs-33, Signature Subs-34, Signature Subs-35, Signature Subs-36, Signature Subs-37, Signature Subs-38, Signature Subs-39, Signature Subs-40, Signature Subs-41, Signature Subs-42, Signature Subs-43, Signature Subs-44, Signature Subs-45, Signature Subs-46, Signature Subs-47, Signature Subs-48, Signature Subs-49, Signature Subs-50, Signature Subs-51, Signature Subs-52, Signature Subs-53, Signature Subs-54, Signature Subs-55, Signature Subs-56, Signature Subs-57, Signature Subs-58, Signature Subs-59, Signature Subs-60
Available types are
|Cancer type | index |
|--- | --- |
| BILIARY-ADENOCA | 0|
| BLADDER-TCC | 1|
| BONE-BENIGN | 2|
| BONE-EPITH | 3|
| BONE-OSTEOSARC | 4|
| BREAST-ADENOCA | 5|
| BREAST-LOBULARCA | 6|
| CERVIX-ADENOCA | 7|
| CERVIX-SCC | 8|
| CNS-GBM | 9|
| CNS-MEDULLO | 10|
| CNS-OLIGO | 11|
| CNS-PILOASTRO | 12|
| COLORECT-ADENOCA | 13|
| ESO-ADENOCA | 14|
| HEAD-SCC | 15|
| KIDNEY-CHRCC | 16|
| KIDNEY-RCC | 17|
| LIVER-HCC | 18|
| LUNG-ADENOCA | 19|
| LUNG-SCC | 20|
| LYMPH-BNHL | 21|
| LYMPH-CLL | 22|
| MYELOID-AML | 23|
| MYELOID-MPN | 24|
| OVARY-ADENOCA | 25|
| PANC-ADENOCA | 26|
| PANC-ENDOCRINE | 27|
| PROST-ADENOCA | 28|
| SKIN-MELANOMA | 29|
| SOFTTISSUE-LEIOMYO | 30|
| SOFTTISSUE-LIPOSARC| 31|
| STOMACH-ADENOCA | 32|
| THY-ADENOCA | 33|
| UTERUS-ADENOCA | 34|
You can also use the v2 signatures, by specifying the parameter `cosmic_version=2`, and in that case, the matching of cancer types and signatures is given by
|Cancer type | index |
|--- | --- |
| Adrenocortical carcinoma | 0 |
| ALL | 1 |
| AML | 2 |
| Bladder | 3 |
| Breast | 4 |
| Cervix | 5 |
| Chondrosarcoma | 6 |
| CLL | 7 |
| Colorectum | 8 |
| Glioblastoma | 9 |
| Glioma Low Grade | 10 |
| Head and Neck | 11 |
| Kidney Chromophobe | 12 |
| Kidney Clear Cell | 13 |
| Kidney Papillary | 14 |
| Liver | 15 |
| Lung Adeno | 16 |
| Lung Small Cell | 17 |
| Lung Squamous | 18 |
| Lymphoma B-cell | 19 |
| Lymphoma Hodgkin | 20 |
| Medulloblastoma | 21 |
| Melanoma | 22 |
| Myeloma | 23 |
| Nasopharyngeal Carcinoma | 24 |
| Neuroblastoma | 25 |
| Oesophagus | 26 |
| Oral gingivo-buccal squamous| 27 |
| Osteosarcoma | 28 |
| Ovary | 29 |
| Pancreas | 30 |
| Paraganglioma | 31 |
| Pilocytic Astrocytoma | 32 |
| Prostate | 33 |
| Stomach | 34 |
| Thyroid | 35 |
| Urothelial Carcinoma | 36 |
| Uterine Carcinoma | 37 |
| Uterine Carcinosarcoma | 38 |
| Uveal Melanoma | 39 |
#### selection of the number of clones
The default behavior corresponds to heuristics fitted on simulations. However, one can change the penalty of the adapted BIC criterion through the parameter `model_selection_kws`.
[Index](Index.ipynb) - [Back](Output Widget.ipynb) - [Next](Widget Styling.ipynb)
# Widget Events
## Special events
The `Button` is not used to represent a data type. Instead, the button widget is used to handle mouse clicks. The `on_click` method of the `Button` can be used to register a function to be called when the button is clicked. The doc string of `on_click` can be seen below.
```
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
```
### Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the `on_click` method, a button that prints a message when it has been clicked is shown below. To capture `print`s (or any other kind of output) and ensure it is displayed, be sure to send it to an `Output` widget (or put the information you want to display into an `HTML` widget).
```
from IPython.display import display
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("Button clicked.")
button.on_click(on_button_clicked)
```
## Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the `observe` method of the widget can be used to register a callback. The doc string for `observe` can be seen below.
```
print(widgets.Widget.observe.__doc__)
```
### Signatures
As mentioned in the doc string, the registered callback must have the signature `handler(change)`, where `change` is a dictionary holding the information about the change.
Using this method, an example of how to output an `IntSlider`'s value as it is changed can be seen below.
```
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
with output2:
print(change['new'])
int_range.observe(on_value_change, names='value')
```
## Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
### Linking traitlets attributes in the kernel
The first method is to use the `link` and `dlink` functions from the `traitlets` module (these two functions are re-exported by the `ipywidgets` module for convenience). This only works if we are interacting with a live kernel.
```
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
slider1, slider2 = widgets.IntSlider(description='Slider 1'),\
widgets.IntSlider(description='Slider 2')
l = widgets.link((slider1, 'value'), (slider2, 'value'))
display(caption, slider1, slider2)
caption = widgets.Label(value='Changes in source values are reflected in target1')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
dl = widgets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
```
The functions `traitlets.link` and `traitlets.dlink` return a `Link` or `DLink` object, respectively. The link can be broken by calling the `unlink` method.
```
l.unlink()
dl.unlink()
```
### Registering callbacks to trait changes in the kernel
Since attributes of widgets on the Python side are traitlets, you can register handlers to the change events whenever the model gets updates from the front-end.
The handler passed to observe will be called with one change argument. The change object holds at least a `type` key and a `name` key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.
Other keys may be passed depending on the value of `type`. In the case where type is `change`, we also have the following keys:
- `owner` : the HasTraits instance
- `old` : the old value of the modified trait attribute
- `new` : the new value of the modified trait attribute
- `name` : the name of the modified trait attribute.
```
caption = widgets.Label(value='The slider value is in its initial position.')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
```
### Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
Javascript links persist when embedding widgets in html web pages without a kernel.
```
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
```
The function `widgets.jslink` returns a `Link` widget. The link can be broken by calling the `unlink` method.
```
# l.unlink()
# dl.unlink()
```
### The difference between linking in the kernel and linking in the client
Linking in the kernel means linking via python. If two sliders are linked in the kernel, when one slider is changed the browser sends a message to the kernel (python in this case) updating the changed slider, the link widget in the kernel then propagates the change to the other slider object in the kernel, and then the other slider's kernel object sends a message to the browser to update the other slider's views in the browser. If the kernel is not running (as in a static web page), then the controls will not be linked.
Linking using jslink (i.e., on the browser side) means constructing the link in Javascript. When one slider is changed, Javascript running in the browser changes the value of the other slider in the browser, without needing to communicate with the kernel at all. If the sliders are attached to kernel objects, each slider will update their kernel-side objects independently.
To see the difference between the two, go to the [static version of this page in the ipywidgets documentation](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html) and try out the sliders near the bottom. The ones linked in the kernel with `link` and `dlink` are no longer linked, but the ones linked in the browser with `jslink` and `jsdlink` are still linked.
## Continuous updates
Some widgets offer a choice with their `continuous_update` attribute between continually updating values or only updating values when a user submits the value (for example, by pressing Enter or navigating away from the control). In the next example, we see the "Delayed" controls only transmit their value after the user finishes dragging the slider or submitting the textbox. The "Continuous" controls continually transmit their values as they are changed. Try typing a two-digit number into each of the text boxes, or dragging each of the sliders, to see the difference.
```
a = widgets.IntSlider(description="Delayed", continuous_update=False)
b = widgets.IntText(description="Delayed", continuous_update=False)
c = widgets.IntSlider(description="Continuous", continuous_update=True)
d = widgets.IntText(description="Continuous", continuous_update=True)
widgets.link((a, 'value'), (b, 'value'))
widgets.link((a, 'value'), (c, 'value'))
widgets.link((a, 'value'), (d, 'value'))
widgets.VBox([a,b,c,d])
```
Sliders, `Text`, and `Textarea` controls default to `continuous_update=True`. `IntText` and other text boxes for entering integer or float numbers default to `continuous_update=False` (since often you'll want to type an entire number before submitting the value by pressing enter or navigating out of the box).
## Debouncing
When trait changes trigger a callback that performs a heavy computation, you may want to not do the computation as often as the value is updated. For instance, if the trait is driven by a slider which has its `continuous_update` set to `True`, the user will trigger a bunch of computations in rapid succession.
Debouncing solves this problem by delaying callback execution until the value has not changed for a certain time, after which the callback is called with the latest value. The effect is that the callback is only called when the trait pauses changing for a certain amount of time.
Debouncing can be implemented using an asynchronous loop or threads. We show an asynchronous solution below, which is more suited for ipywidgets. If you would like to instead use threads to do the debouncing, replace the `Timer` class with `from threading import Timer`.
```
import asyncio
class Timer:
def __init__(self, timeout, callback):
self._timeout = timeout
self._callback = callback
async def _job(self):
await asyncio.sleep(self._timeout)
self._callback()
def start(self):
self._task = asyncio.ensure_future(self._job())
def cancel(self):
self._task.cancel()
def debounce(wait):
""" Decorator that will postpone a function's
execution until after `wait` seconds
have elapsed since the last time it was invoked. """
def decorator(fn):
timer = None
def debounced(*args, **kwargs):
nonlocal timer
def call_it():
fn(*args, **kwargs)
if timer is not None:
timer.cancel()
timer = Timer(wait, call_it)
timer.start()
return debounced
return decorator
```
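The decorator's timing behavior can also be exercised without any widgets. The sketch below re-declares `Timer` and a condensed `debounce` equivalent to the code above, fires a rapid burst of calls, and checks that only the last one runs (the 0.2 s wait and 0.02 s gaps are arbitrary test values, not values from this tutorial):

```python
import asyncio

# Re-declared from the tutorial code above so this sketch is self-contained.
class Timer:
    def __init__(self, timeout, callback):
        self._timeout = timeout
        self._callback = callback

    async def _job(self):
        await asyncio.sleep(self._timeout)
        self._callback()

    def start(self):
        self._task = asyncio.ensure_future(self._job())

    def cancel(self):
        self._task.cancel()

def debounce(wait):
    def decorator(fn):
        timer = None
        def debounced(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()  # a new call cancels the pending one
            timer = Timer(wait, lambda: fn(*args, **kwargs))
            timer.start()
        return debounced
    return decorator

calls = []

@debounce(0.2)
def record(value):
    calls.append(value)

async def main():
    for v in range(5):           # a burst of calls, faster than the wait
        record(v)
        await asyncio.sleep(0.02)
    await asyncio.sleep(0.5)     # let the one surviving timer fire

asyncio.run(main())
print(calls)  # → [4]
```

Only the last value of the burst survives; every earlier call was cancelled before its timer expired.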
Here is how we use the `debounce` function as a decorator. Try changing the value of the slider. The text box will only update after the slider has paused for about 0.2 seconds.
```
slider = widgets.IntSlider()
text = widgets.IntText()
@debounce(0.2)
def value_changed(change):
text.value = change.new
slider.observe(value_changed, 'value')
widgets.VBox([slider, text])
```
## Throttling
Throttling is another technique that can be used to limit callbacks. Whereas debouncing ignores calls to a function if a certain amount of time has not passed since the last (attempted) call, throttling simply limits the rate of calls. This ensures that the function is called regularly.
We show an asynchronous solution below. Likewise, you can replace the `Timer` class with `from threading import Timer` if you want to use threads instead of asynchronous programming.
```
import asyncio
from time import time
def throttle(wait):
""" Decorator that prevents a function from being called
more than once every wait period. """
def decorator(fn):
time_of_last_call = 0
scheduled, timer = False, None
new_args, new_kwargs = None, None
def throttled(*args, **kwargs):
nonlocal new_args, new_kwargs, time_of_last_call, scheduled, timer
def call_it():
nonlocal new_args, new_kwargs, time_of_last_call, scheduled, timer
time_of_last_call = time()
fn(*new_args, **new_kwargs)
scheduled = False
time_since_last_call = time() - time_of_last_call
new_args, new_kwargs = args, kwargs
if not scheduled:
scheduled = True
new_wait = max(0, wait - time_since_last_call)
timer = Timer(new_wait, call_it)
timer.start()
return throttled
return decorator
```
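As with debouncing, the throttle's behavior can be checked without widgets. This self-contained sketch re-declares `Timer` and a condensed `throttle` equivalent to the one above, fires a rapid burst of calls, and shows that calls are rate-limited while the trailing timer still delivers the last value (wait and gap values are arbitrary test values):

```python
import asyncio
from time import time

# Re-declared from the tutorial code above so this sketch is self-contained.
class Timer:
    def __init__(self, timeout, callback):
        self._timeout = timeout
        self._callback = callback

    async def _job(self):
        await asyncio.sleep(self._timeout)
        self._callback()

    def start(self):
        self._task = asyncio.ensure_future(self._job())

def throttle(wait):
    def decorator(fn):
        time_of_last_call = 0
        scheduled = False
        new_args, new_kwargs = None, None
        def throttled(*args, **kwargs):
            nonlocal new_args, new_kwargs, time_of_last_call, scheduled
            def call_it():
                nonlocal time_of_last_call, scheduled
                time_of_last_call = time()
                fn(*new_args, **new_kwargs)
                scheduled = False
            new_args, new_kwargs = args, kwargs   # always keep the latest args
            if not scheduled:
                scheduled = True
                Timer(max(0, wait - (time() - time_of_last_call)), call_it).start()
        return throttled
    return decorator

calls = []

@throttle(0.2)
def record(value):
    calls.append(value)

async def main():
    for v in range(5):           # a burst of calls, faster than the wait
        record(v)
        await asyncio.sleep(0.02)
    await asyncio.sleep(0.5)     # let the trailing timer fire

asyncio.run(main())
# the trailing timer delivers the last value of the burst
```

Unlike the debounced version, an early call gets through almost immediately; the rest of the burst is collapsed into one trailing call carrying the final value.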
To see how different it behaves compared to the debouncer, here is the same slider example with its throttled value displayed in the text box. Notice how much more interactive it is, while still limiting the callback rate.
```
slider = widgets.IntSlider()
text = widgets.IntText()
@throttle(0.2)
def value_changed(change):
text.value = change.new
slider.observe(value_changed, 'value')
widgets.VBox([slider, text])
```
[Index](Index.ipynb) - [Back](Output Widget.ipynb) - [Next](Widget Styling.ipynb)
```
import pandas as pd, numpy as np, _pickle as pickle, re, json
from pathlib import Path
from sqlalchemy import create_engine
from time import sleep, time
from tqdm import tqdm
from collections import Counter, defaultdict
from itertools import chain
from bz2 import BZ2File
# with open('../psql_engine.txt') as f:
# psql = create_engine(f.read())
```
## tokenize
Turns readability HTMLs into tokenized text and saves it to the database.<br>
Uses a simple pretrained logistic-regression classifier to remove unrelated items: datetime paragraphs, "read also" paragraphs, etc.<br>
Tokenizes Russian text with `nltk` and Ukrainian text with `tokenize_uk`.<br>
We select texts no longer than 8000 characters; longer texts are analytics or long-form articles, which are written differently and are not considered in this research.
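The tokenized output written below uses a fixed layout: paragraphs separated by blank lines, one sentence per line, and tokens separated by single spaces. A toy sketch of that joining step (`nltk` and `tokenize_uk` do the actual sentence and word tokenization in this notebook; the nested lists here are just stand-ins):

```python
def join_layout(paragraphs):
    # paragraphs: list of paragraphs, each a list of sentences,
    # each sentence a list of string tokens
    return '\n\n'.join(
        '\n'.join(' '.join(sentence) for sentence in sentences)
        for sentences in paragraphs
    )

doc = [
    [['A', 'first', 'sentence', '.'], ['A', 'second', 'one', '.']],
    [['New', 'paragraph', '.']],
]
print(join_layout(doc))
```
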
```
from bs4 import BeautifulSoup
from nltk.tokenize import *
from tokenize_uk.tokenize_uk import tokenize_sents, tokenize_words
```
PostgreSQL query to filter incorrectly loaded articles.<br>No need to use it - the sample contains only correctly loaded ones.<br>Around 5% of the texts are incorrectly loaded.
```
htmls = pd.read_json('../htmls_sample.jl.bz2', lines=True, chunksize=1000)
# bad = [...]  # commented-out list of a few dozen Russian/Ukrainian
# site-boilerplate phrases ("all rights reserved" notices, "the editors'
# opinion may not match the authors'" disclaimers, registration prompts,
# weather-widget text, ad-blocker warnings) matched against ra_summary
# below to filter incorrectly loaded articles
# bad_search_q = '\nor '.join(f'''ra_summary ~~* '%%{cr.replace("'", "''").replace('%', '%%')}%%' '''
# for cr in bad)
# q = f'''
# SELECT html_id, ra_title, ra_summary, real_url, link, lang FROM htmls
# WHERE tokenized isnull
# and not (LENGTH(REGEXP_replace(ra_summary, '[^ะ-ะฏะฐ-ัะัะัะั]', '')) < 20 OR
# ra_summary = '<html><body/></html>'
# or ra_summary ~* '[โกะะะะัะะยต]'
# or ra_summary isnull
# or {bad_search_q}
# )
# and LENGTH(REGEXP_replace(ra_summary, '[^ะ-ะฏะฐ-ัะัะัะั]', '')) < 7000;
# '''
# htmls = pd.read_sql(q, psql, chunksize=20000)
```
Load the paragraph classifier (a scikit-learn logistic regression) and the `DictVectorizer` used for feature transformation.<br>
Features:
* html tag name,
* its classes,
* and id,
* Length of inner text,
* bag of words of element contents
```
with open('dvect_tech_tags.pkl', 'rb') as f:
dv = pickle.load(f)
with open('classify_tech_tags.pkl', 'rb') as f:
cls = pickle.load(f)
def get_feats(tag):
'''
returns feature dict for paragraph classifier.
features: html tag name, its classes and id, length of inner text, and bag of words of element contents.
'''
name = tag.name
text = re.sub('[^A-zА-яЁёІіЇїҐґ0-9 ]', ' ', tag.get_text(' ')).split()
text = [re.sub('[0-9]', '5', str(i)) for i in text]
class_full = tag.get('class')
if class_full is not None:
classes = list(chain(*[re.split('-+|_+', cl) for cl in class_full]))
else:
classes = []
id_full = tag.get('id')
id_list = re.split('-+|_+', id_full.lower()) if id_full else []
words = [f'word_{w.lower()}' for w in text]
cls_feats = [f'class_{w}' for w in classes]
id_feats = [f'id_{w}' for w in id_list]
feats = {**Counter(cls_feats), **Counter(id_feats), **Counter(words)}
feats['len'] = len(words)
feats[f'tagname_{name}'] = 1
return feats
def tokenize_html(row):
soup = BeautifulSoup(re.sub('<br/?>', '</p><p>', row.ra_summary), 'lxml')
[t.extract() for t in soup.select('div#more-items-infinite, div.fb-post, p.go_out, div.twitter-tweet')]
[t.extract() for t in soup.find_all('h1', text='Новини по темі')]
[t.extract() for t in soup.find_all('p', text=re.compile(
'в даний момент ви читаєте новину|в данный момент вы читаете новость .* на [(eizvestia)|(enovosti)]\.com',
flags=re.I
))]
url = row.real_url if row.real_url else row.link
url = url if url else ''
if 'ria.ru' in url:
ria_caption = soup.select('div[itemprop="articleBody"] strong')
if len(ria_caption) != 0: ria_caption[0].extract()
hs = soup.select('h1, h2, h3, h4, h5')
paragraphs = [t for t in soup.find_all()
if t.name in ['h1','h2','h3','h4','h5','h6','p', 'div', 'li']
and not t.p
and not t.div]
intersection = list(filter(lambda t: t in paragraphs[:5], hs))
if len(intersection) != 0:
del paragraphs[paragraphs.index(intersection[0])]
if len(paragraphs) == 0: return
X = dv.transform(list(map(get_feats, paragraphs)))
preds = cls.predict_proba(X)[:, 0]
    paragraphs = [p for pred, p in zip(preds.tolist(), paragraphs) if pred > 0.18]
    paragraphs = [re.sub(r'\s+', ' ', p.get_text(' ').replace('\xa0', ' ')).strip() for p in paragraphs]
    paragraphs = list(filter(lambda p: len(re.sub('[^A-zА-яЄєІіЇїҐґ]', '', p)) > 2, paragraphs))
if row.lang == 'uk':
tokenized = '\n\n'.join('\n'.join([' '.join(tokenize_words(s))
for s in tokenize_sents(p)])
for p in paragraphs)
    elif row.lang == 'ru':
        tokenized = '\n\n'.join('\n'.join([' '.join(word_tokenize(s))
                                           for s in sent_tokenize(p)])
                                for p in paragraphs)
    else:
        return None  # unsupported language
    return re.sub("``|''", '"', tokenized)
with BZ2File('tokenized_htmls.jl.bz2', 'w') as f:
for df in tqdm(htmls):
df['tokenized'] = df.apply(tokenize_html, axis=1)
df = df.loc[pd.notnull(df.tokenized)
].reindex(['html_id', 'tokenized', 'lang'], axis=1
).copy()
f.write(
(df.to_json(orient='records', lines=True, force_ascii=False) + '\n'
).encode('utf-8')
)
# # in case of postgres
# df.to_json('tokenized_htmls.jl.bz2', compression='bz2', )
# vals = ', '.join([f'''({html_id}, '{tok.replace("'", "''").replace('%', '%%')}')'''
# for html_id, tok in df.loc[pd.notnull(df.tokenized)].reindex(['html_id', 'tokenized'], axis=1).values])
# psql.execute(f'''
# update htmls as t set
# tokenized = c.tok
# from (values
# {vals}
# ) as c(html_id, tok)
# where c.html_id = t.html_id;
# ''')
```
## str2id
Transforms space-delimited tokenized text into an array of word ids according to a dictionary. Separate dictionaries are used for the Ukrainian and Russian languages.
```
tokenized = pd.read_json('tokenized_htmls.jl.bz2', lines=True, chunksize=1000)
# tokenized = pd.read_sql('''
# select html_id, tokenized, lang from htmls
# where tokenized notnull
# and word_ids isnull;
# ''', psql, chunksize=10000)
BOS = 'xbos' #beginning of string tag
FLD = 'xbod' #beginning of doc tag
with open('itos_ru.pkl', 'rb') as f:
itos_ru = pickle.load(f)
#reverse - return id for every word, id "0" for out of dictionary tokens
stoi_ru = defaultdict(lambda: 0, {v: k for k, v in enumerate(itos_ru)})
with open('itos_uk.pkl', 'rb') as f:
itos_uk = pickle.load(f)
stoi_uk = defaultdict(lambda: 0, {v: k for k, v in enumerate(itos_uk)})
```
`itos` - the dictionary for a language, composed from all loaded news texts: up to 60000 tokens that occur more than 15 times across all news.
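As a hedged sketch of how such an `itos` list could be derived (the real dictionaries were built elsewhere from the full corpus, with the 60000 / more-than-15-occurrences thresholds stated above; the toy corpus and the `min_freq = 1` cutoff here are illustrative only):

```python
from collections import Counter, defaultdict

# Toy corpus in place of the real tokenized news texts
texts = ['новини дня', 'новини спорту', 'новини дня сьогодні']
counts = Counter(tok for t in texts for tok in t.split())

max_vocab, min_freq = 60000, 1   # the real corpus used 60000 and >15
itos = ['_unk_'] + [tok for tok, c in counts.most_common(max_vocab) if c >= min_freq]

# Reverse mapping: id for every word, id 0 for out-of-dictionary tokens
stoi = defaultdict(lambda: 0, {tok: i for i, tok in enumerate(itos)})

print(stoi['новини'], stoi['невідоме_слово'])
```

The `defaultdict(lambda: 0, ...)` construction mirrors the one used for `stoi_ru` and `stoi_uk` above: any token missing from the dictionary silently maps to id 0.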
```
def split_tok(text):
    return ' \n '.join([p.replace('\n', f' {BOS} ')
for p in text.strip().split('\n\n')]
).split(' ')
def tok2id(row):
stoi = stoi_ru if row.lang == 'ru' else stoi_uk
    return [stoi[token] for token in split_tok(row.tokenized)]
with BZ2File('word_ids_htmls.jl.bz2', 'w') as f:
for df in tqdm(tokenized):
df.tokenized = f'\n{BOS} {FLD} 1 ' + df.tokenized
# "1" is a mistake, but since it occurs in every article, and at the beginning,
# I believe it doesn't influence the result - LSTM will forget it by the end of text and will not find it meaningful
df['word_ids'] = df.apply(tok2id, axis=1)
df = df.loc[pd.notnull(df.word_ids)
].reindex(['html_id', 'word_ids'], axis=1
).copy()
f.write(
(df.to_json(orient='records', lines=True, force_ascii=False) + '\n'
).encode('utf-8')
)
# # for postgres
# vals = ', '.join([f'''({html_id}, ARRAY{str2id})'''
# for html_id, str2id in df.reindex(['html_id', 'word_ids'], axis=1).values])
# psql.execute(f'''
# update htmls as t set
# word_ids = c.word_ids
# from (values
# {vals}
# ) as c(html_id, word_ids)
# where c.html_id = t.html_id;
# ''')
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcallysto-sample-notebooks&branch=master&subPath=notebooks/Math/FlippingCoins.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
<img src="images/iStock-coinflip.jpg" width="500px" />
# Flipping Too Many Coins
## Introduction
A classic statistics experiment is simply counting how many "heads" and "tails" you observe when flipping a coin repeatedly. With a perfectly unbiased coin in a statistically perfect world, one might expect to count an equal number of heads and tails by flipping a coin hundreds of times. However, the world we live in is far from statistically perfect. The real world is plagued with **statistical fluctuations**, meaning that measured data are not always equal to what you would expect. For example, if you were to flip a coin 100 times, it is not inconceivable that you would measure something like 45 heads and 55 tails. The reason for this is that most of our statistical expectations describe the long-run limit. In other words, if we were to flip a coin infinitely many times, we would expect exactly half of those flips to be heads, and exactly half to be tails. Unfortunately (or fortunately, depending on whether you have hobbies), you cannot flip a coin infinitely many times; you can only flip it a finite number of times. In mathematical terms, the results of any coin-flipping experiment will have a _discrete_ number of heads and a _discrete_ number of tails, counted from a set number of trials.
This sort of statistical problem where you run a number of trials is easily simulated using Python. However, modeling trials of real world situations subject to statistical fluctuations requires something included in Python known as the **random number generator**. So, before we move on to how we can simulate statistical fluctuations, we introduce random numbers and the random number generator.
# Random Numbers
First, let's take a look at some of those "random" numbers.
```
from IPython.display import HTML
import random
%matplotlib inline
import matplotlib as mpl
import numpy as np
import matplotlib.pyplot as plt
import IPython
from IPython import display
from ipywidgets import widgets
from ipywidgets import interactive
import warnings
warnings.filterwarnings("ignore")
hide_me = ''
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show) {
$('div.input').each(function(id) {
el = $(this).find('.cm-variable:first');
if (id == 0 || el.text() == 'hide_me') {
$(this).hide();
}
});
$('div.output_prompt').css('opacity', 0);
} else {
$('div.input').each(function(id) {
$(this).show();
});
$('div.output_prompt').css('opacity', 1);
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input style="opacity:0" type="submit" value="Click here to toggle on/off the raw code."></form>''')
# This is the python module which contains the random
# number generator
import random
# This prints 10 random numbers,
# the "_" is simply an unnamed variable for the loop
for _ in range(10):
print(random.uniform(0,1))
```
Run the above cell a few times, are the numbers the same? They probably aren't. Is the fact that those numbers aren't the same enough to say they are random? Certainly they appear to be from looking at them. Unfortunately, this is math, meaning we need to understand and define the properties that random number should have in order to determine if the random numbers from Python satisfy these requirements.
## Properties of Random Numbers
Suppose you have a sequence of $N$ random numbers $\{R\}$ with contents $\{r_1, r_2, ... , r_N\}$ where each element $r_i$ is a random number. What sort of properties should this sequence of numbers have? If this is truly a sequence of random numbers, it _must_ satisfy the following properties, which we will explain in greater detail:
1. Drawing any $r_i$ is equally probable and independent.
2. The sequence of random numbers is uniformly distributed.
"Drawing a value" in this scope means we're picking a number from our sequence of random numbers, but not removing it from the sequence (the sequence remains unchanged, we're simply "observing" the random number).
Let's look at these two properties in a little more detail.
### All Values Are Equally Probable and Independent
What this means is that if you were to select (but not remove) a number from your sequence of random numbers $\{r_1, r_2, ... , r_N\}$ at random, the probability of drawing any of those numbers is
\begin{equation}
p(r_i) = \frac{1}{N}
\end{equation}
where $p(r_i)$ is the probability of selecting a number $r_i$. This probability is identical for all numbers within your set. More explicitly:
\begin{equation}
p(r_1) = p(r_2) = ... = p(r_N) = \frac{1}{N}
\end{equation}
The independence property means that if you draw a number from the set, it does not affect the probability of drawing other numbers, or even itself at a later time. This is because the sequence remains unchanged after you draw (observe) a number. This property leads directly to the second important property of random numbers, discussed below.
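We can check the first property empirically: drawing (with replacement) from a small fixed sequence should give each element a relative frequency close to $1/N$. A minimal sketch, using a fixed seed so the run is reproducible:

```python
import random
from collections import Counter

random.seed(0)                       # fixed seed for reproducibility
seq = [0.1, 0.3, 0.5, 0.7, 0.9]      # a sequence of N = 5 numbers
draws = Counter(random.choice(seq) for _ in range(100_000))

for r in seq:
    print(r, draws[r] / 100_000)     # each frequency is close to 1/5 = 0.2
```

Every element comes out within a fraction of a percent of $1/N$, and because the sequence is never modified, earlier draws have no effect on later ones.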
### The Numbers are Uniformly Distributed
This simply means that there is no "trend" within the set of random numbers. If you were to plot the histogram of all your random numbers, they would produce a flat rectangle, or the uniform distribution:
\begin{equation}
P(r) = \left\{
\begin{matrix}
p & a \leq r \leq b \\
0 & \text{otherwise}
\end{matrix}
\right.
\end{equation}
where $P(r)$ is the probability _distribution_ and $p$ is the probability of drawing any number between $a$ and $b$. It is important to note that this probability $p$ is the **same** probability for all numbers between $a$ and $b$. This function simply defines a horizontal line -- meaning that all values from this distribution are equally probable.
Alright, so we have some definitions of random numbers, let's test Python's random number generator to see if these "random" numbers satisfy our two requirements. Below is a python widget that you can use to visualize random numbers generated with Python. This widget will draw $N$ random numbers (that you get to define) and bin those values into a histogram. If you're unsure what a histogram is, think of it as a bar graph where the bars are defined by counting how many number land within a specific range or "bin".
Certainly, the question remains of how exactly we're going to translate the definitions above into something that we can use in order to create such a histogram. Let's break down how we'll accomplish this into steps with the following flowchart.
> *(Flowchart image not shown.)*
>
> A schematic diagram for the process required in order to test if the numbers created by the random number generator are uniformly distributed.
This flowchart explains the process required in order to create the histogram in the widget below. Essentially, all we need to do is create those $N$ random numbers, place them in the appropriate bins, and then count the number of entries in each bin. These counts define the size of each bar in the histogram. We also note that we convert these raw counts into percentages by dividing the counts in each bin by $N$, the total number of random numbers drawn.
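Stripped of the animation, the core of that flowchart is just a few lines. Here is a minimal sketch using 10 equal-width bins, as in the widget below (the seed is fixed only to make the run reproducible):

```python
import random

random.seed(1)
N = 10_000
counts = [0] * 10                       # ten equal-width bins covering [0, 1]
for _ in range(N):
    r = random.uniform(0, 1)
    counts[min(int(r * 10), 9)] += 1    # bin index; clamp r == 1.0 into the last bin

percentages = [100 * c / N for c in counts]
print(percentages)                      # each entry is close to 10 %
```

With 10 bins and a uniform generator, each bin should capture about 10 % of the draws, up to statistical fluctuations of a few tenths of a percent at this $N$.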
```
hide_me
def Random_Histogram(N):
fig = plt.gcf()
fig.show()
fig.canvas.draw()
plt.style.use('ggplot')
run = 0
plot_step = 1
random_numbers = []
for i in range(N):
run += 1
r = random.uniform(0,1)
random_numbers.append(r)
if run >= plot_step or i == N-1:
# Speed the plot up as we go so we can
# see the variations at the beginning and cruise
# through the "boring" stuff.
plot_step = i/2
plt.gca().cla()
plt.ylabel('Probability')
plt.ylim([0,0.2])
plt.xlim([-0.05,1.05])
# This is simply to scale our boxes from counts to probability
weights = np.ones_like(random_numbers)/float(len(random_numbers))
plt.hist(random_numbers,
bins=10,
weights=weights,
label="Drawn Numbers",
edgecolor='black', linewidth=1.2)
plt.title(" ".join([str(i+1), "Numbers From Set"]))
# Generally it's frowned upon to use 'magic numbers'
# such as the ones below. However, this is what defines
# our expected uniform distribution. The 0.1 limit for y comes
# from the fact that we have 10 bins in our histogram
# (10 bars in the bar graph). If each bin is equally probable,
# then they should each have a probability of 1/(number of bins),
# which is 1/10 for us.
plt.plot([0,0 ,1,1],[0,.1,.1,0], linewidth = 3, label = "Expected")
plt.legend(loc='center left', bbox_to_anchor=(1, .8))
# This helps us get ready for the creation of the new frame
display.clear_output(wait=True)
display.display(plt.gcf())
# reset counter for if we want to plot or not
run = 0
display.clear_output(wait=True)
N = widgets.IntSlider(value = 100, min=10, max = 75000, description="Size of set")
start = widgets.Button(description = "Draw Numbers")
def PlotUniform(b):
Random_Histogram(N.value)
IPython.display.display(N, start)
start.on_click(PlotUniform)
# this is to hide the code block if you'd like
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.cell.code_cell.rendered.selected div.input').hide();
} else {
$('div.cell.code_cell.rendered.selected div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
To see the algorithm which creates the histogram, click <a href="javascript:code_toggle()">here</a>.''')
# This cell creates the widget for the histogram
IPython.display.display(N, start)
```
To use the widget above select the size of the random number set, and click draw numbers. This will animate a histogram which fills with random numbers drawn from the set. The blue line shown in the plot is the "expected" output, a uniform distribution, that the random numbers should follow.
### Questions
1. Does the histogram of random numbers line up with what you expect after drawing
- 100 numbers?
- 1000 numbers?
- 75000 numbers?
- Run those three test cases again, are they identical to the first time you created the histogram?
2. If the numbers you draw don't line up with the expected distribution using a small number of random numbers, does that mean the random number generator isn't behaving correctly? What if it doesn't line up when you include lots of points?
3. Define statistical variation in your own words and whether or not you see that in the animations above.
4. Do the numbers returned from the random number generator satisfy our requirements for random numbers? Why or why not?
### Relevance to Coin Flipping
The connection between generating random numbers in Python and flipping a coin isn't necessarily obvious. Why do we need to generate random numbers to write a code that simulates flipping coins for us? Well, the answer is we need some metric in order to decide which "coins" are heads, and which are tails. Unfortunately to model a real world situation, we need statistical variation -- we can't just use our expected probabilities to decide the outcome of the coin toss. If we did that we would only be modeling the expected result after an infinite number of trials.
So, the natural next question is how can we use these random numbers generated by Python in order to simulate something like a coin toss? To explain, it is important to keep the uniform distribution in mind. If you were to flip a fair coin, unbiased for heads or tails, there is a 50 % probability of observing heads, and a 50 % probability of observing tails after any given coin toss. In terms of the uniform distribution and the random number generator, we could make the following plot to describe a coin toss
```
hide_me
plt.style.use('ggplot')
plt.text(0.15, 5,"Heads",size=20)
plt.text(0.65, 5,"Tails", size=20)
plt.ylim([0,20])
plt.ylabel("Probability (%)", size=20)
plt.xlabel("Random Number Value",size=20)
plt.plot([0,0 ,1,1],[0,10,10,0], linewidth = 3, label = "Uniform Distribution")
plt.plot([0.5,0.5], [0,20], linewidth = 3, c='b')
plt.tick_params(axis='both', which='major', labelsize=18)
plt.show()
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.cell.code_cell.rendered.selected div.input').hide();
} else {
$('div.cell.code_cell.rendered.selected div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
To see the code which created the graph, click <a href="javascript:code_toggle()">here</a>.''')
```
Let's take a moment to understand this plot. On the $y$ axis we have probability, and on the $x$ axis we have "Random Number Value". What this plot means is that for a given random number between $[0,1]$ (the $x$ value), we simply need to look at which side of the blue line our random number is on. If it's on the left, our simulated coin toss returns heads, and if it's on the right our simulated coin toss returns tails. As all random numbers between zero and one are equally probable, this graph shows that we have a 50 % probability of either heads or tails, modeled well with the random number generator.
This idea can also be used to model a _biased_ coin, or a coin that favors either heads or tails. For example, suppose we have a coin with a 60 % probability to land with heads facing up. Using the uniform distribution we would have a plot that looks similar to this:
```
hide_me
plt.style.use('ggplot')
plt.text(0.15, 5,"Heads",size=20)
plt.text(0.70, 5,"Tails", size=20)
plt.ylim([0,20])
plt.ylabel("Probability (%)", size=20)
plt.xlabel("Random Number Value",size=20)
plt.plot([0,0 ,1,1],[0,10,10,0], linewidth = 3, label = "Uniform Distribution")
plt.plot([0.6,0.6], [0,20], linewidth = 3, c='b')
plt.tick_params(axis='both', which='major', labelsize=18)
plt.show()
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.cell.code_cell.rendered.selected div.input').hide();
} else {
$('div.cell.code_cell.rendered.selected div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
To see the algorithm which created the graph, click <a href="javascript:code_toggle()">here</a>.''')
```
This plot is functionally identical to the one shown before it, except that the probability of our simulated coin returning heads is 60 %, and the probability of our simulated coin returning tails is 40 %.
Any biased or unbiased coin can be simulated this way and that's exactly what the Python code below does. We have a known probability of returning heads in the range $[0,1]$. We draw a random number $r$, and if that random number is less than the probability of heads we return heads, if it's greater we return tails. This process as a flowchart would appear as follows
> *(Flowchart image not shown.)*
>
> A potential workflow that would simulate flipping an unbiased coin $N$ times using the random number generator.
Rather than a flowchart, the portion of the above workflow in the green box is written as a simple Python function below to demonstrate how straightforward this is to translate into code.
```
def HeadsOrTails(prob_heads):
# draw a random number between 0 and 1
r = random.uniform(0,1)
# check if that random number is greater than
# the input you provided as prob_heads
if r > prob_heads:
return 'tails'
else:
return 'heads'
HeadsOrTails(0.7)
```
Feel free to run the above code snippet as many times as you like for different coin tosses. Does it behave how you would expect?
This gets more interesting, however, as we begin flipping the coin many times and counting the results to see if the coin behaves as we'd expect. Below is Python code that does exactly that: it counts the number of heads and tails returned in a loop and animates the results of all the coin tosses in an interactive widget. While the code below looks more complicated, it is doing exactly the same thing as the snippet above, except that now it counts and plots.
```
hide_me
# Create buttons and widgets
start_button = widgets.Button(description = "Toss those coins")
flips = widgets.IntSlider(value = 100, min=10, max = 100000, description="Trials")
prob = widgets.FloatSlider(value = 0.5, min = 0.0, max = 1.0, description = "Head Probability")
step = widgets.FloatSlider(value = 5, min = 5, max = 100000, description = "Plot step")
# define a coin tossing function
def CoinToss(flips, prob_heads):
# initialize plot
fig = plt.gcf()
fig.show()
fig.canvas.draw()
plt.style.use('ggplot')
head_count = 0
tail_count = 0
run = 0
# Don't want to animate every flip because it would take too long
# this way we only have to animate 100 frames, which is nice.
plot_step = 0.05 * flips
for i in range(flips):
        # These five lines are identical to the
# snippet shown earlier, except now we're counting instead.
r = random.uniform(0,1)
if r > prob_heads:
tail_count += 1
else:
head_count += 1
run += 1
# Don't want to plot every frame because it would be
# way too slow. "buffer" it by only plotting so many
# and the last one
if run >= plot_step or i == flips-1:
# This is just for plotting, no actual coin flipping
# is done in this section.
plt.gca().cla()
plt.ylim([0, flips*max(prob_heads, 1-prob_heads) + flips/4])
plt.ylabel("Count")
plt.xticks(np.array([1,2]),["Heads", "Tails"])
plt.bar([1,2],[head_count,
tail_count],align='center',
edgecolor='black',
linewidth=2)
plt.title(' '.join(["Trial Number: ", str(i+1)]))
            # Put counts and percentages above the bars as the plot is animated
plt.text(1-.2, head_count + .01*flips,
str(head_count)+ " (" + str(round(head_count/(i+1)*100,2)) + " %)",
fontweight='bold')
plt.text(2-.2, tail_count + .01 * flips,
str(tail_count)+ " (" + str(round(tail_count/(i+1)*100,2)) + " %)",
fontweight='bold')
# This helps us get ready for the creation of the new frame
display.clear_output(wait=True)
display.display(plt.gcf())
# reset counter for if we want to plot or not
run = 0
display.clear_output(wait=True)
def PlotCase(b):
CoinToss(flips.value, prob.value)
IPython.display.display(flips,prob, start_button)
start_button.on_click(PlotCase)
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.cell.code_cell.rendered.selected div.input').hide();
} else {
$('div.cell.code_cell.rendered.selected div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
To see the algorithm which simulates flipping coins, click <a href="javascript:code_toggle()">here</a>.''')
IPython.display.display(flips, prob, start_button)
```
With the above widget, click the "Toss those coins" button in order to view an animation of many simulated coin tosses. Adjusting the "Trials" slider will change how many times the coins are flipped, and the "Head Probability" slider changes the probability of a coin flip resulting in heads.
### Questions
1. Simulate flipping an unbiased coin with:
- 10 trials
- approximately 100 trials
- approximately 10000 trials
- 100000 trials.
Do you count the number of heads/tails you would expect? Why or why not? If there is a difference between the simulated value and your expected value, is that something to be concerned about? Why or why not?
2. Repeat the above with a biased coin.
### Conclusion
In this notebook we covered the idea of statistical fluctuations and how we can use the random number generator to model those statistical fluctuations. In doing so, we created a simulation of coin tosses for both fair and biased coins and observed the difference in counting statistics between the two. We learned that while the probability of a fair coin is 50/50, the observed counting statistics may vary slightly from this. This sort of model can be applied to nearly any statistical process, and can act as a primer for the idea of statistical simulations and Monte Carlo methods.
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Introduction to Python programming
These notes are based on **J.R. Johansson's** excellent Python lectures which can be found
[here](https://github.com/jrjohansson/scientific-python-lectures).
The other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io).
Kaggle.com has some great courses that I recommend. The courses are short, but quite good.
- Link to all of the courses: https://www.kaggle.com/learn
- Link to the 5 hour Python course: https://www.kaggle.com/learn/python
- Link to the 3 hour Intro to Machine Learning course: https://www.kaggle.com/learn/intro-to-machine-learning
Note, for the purposes of this workshop, you don't necessarily need to know everything I discuss here. This is where we really get down to the details!
Since we are only together for a few days I don't want to turn this into a Python programming class (though I will if the surveys ask me to :-)
**So, this is not intended to be a full Python tutorial! However, I want to lead you down the path of Python enough so that it is easier for you to learn more later.**
**And, help you work on the final project for the workshop.**
## Modules
Most of the functionality in Python is provided by *modules*. The Python Standard Library is a large collection of modules that provides *cross-platform* implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
### References
* The Python Language Reference: http://docs.python.org/2/reference/index.html
* The Python Standard Library: http://docs.python.org/2/library/
To use a module in a Python program it first has to be imported. A module can be imported using the `import` statement. For example, to import the module `math`, which contains many standard mathematical functions, we can do:
```
import math
```
This includes the whole module and makes it available for use later in the program. For example, we can do:
```
import math
x = math.cos(2 * math.pi)
print(x)
```
Alternatively, we can chose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix "`math.`" every time we use something from the `math` module:
```
from math import *
x = cos(2 * pi)
print(x)
```
This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespace, by using the `import math` pattern. This eliminates potentially confusing namespace collisions.
As a third alternative, we can chose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character `*`:
```
from math import cos, pi
x = cos(2 * pi)
print(x)
```
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
Print out
$$
\cos(3\pi) + \sin(5\pi)
$$
```
```
### Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the `dir` function:
```
import math
print(dir(math))
```
And using the function `help` we can get a description of each function (almost .. not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
```
help(math.log)
log(10)
log(10, 2)
```
We can also use the `help` function directly on modules: Try
help(math)
Some very useful modules form the Python standard library are `os`, `sys`, `math`, `shutil`, `re`, `subprocess`, `multiprocessing`, `threading`.
Complete lists of the standard modules for Python 2 and Python 3 are available at http://docs.python.org/2/library/ and http://docs.python.org/3/library/, respectively.
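As a quick taste of a few of these modules (the exact printed values will of course differ from machine to machine):

```python
import os
import sys
import math

print(os.getcwd())             # the current working directory
print(sys.version_info.major)  # the major Python version
print(math.sqrt(16))           # 4.0
```

Each module groups related functionality behind one importable name, which is exactly the namespacing pattern recommended above.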
## Variables and types
### Symbol names
Variable names in Python can contain alphanumerical characters `a-z`, `A-Z`, `0-9` and some special characters such as `_`. Normal variable names must start with a letter.
By convention, variable names start with a lower-case letter, and Class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Note: Be aware of the keyword `lambda`, which could easily be a natural variable name in a scientific program. But being a keyword, it cannot be used as a variable name.
### Assignment
The assignment operator in Python is `=`. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
```
# variable assignments
x = 1.0
my_variable = 12.2
```
Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
```
type(x)
```
If we assign a new value to a variable, its type can change.
```
x = 1
type(x)
```
If we try to use a variable that has not yet been defined we get an `NameError`:
```
print(y)
```
### Fundamental types
```
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
```
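The type names above double as conversion functions, which is handy when moving between these fundamental types:

```python
print(int(1.9))          # truncates toward zero: 1
print(float(2))          # 2.0
print(bool(0), bool(3))  # False True
print(complex(1, 2))     # (1+2j)
print(int("42") + 1)     # strings of digits convert too: 43
```

This also previews the dynamic typing discussed above: the value, not the variable, carries the type.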
## Functions
A function in Python is defined using the keyword `def`, followed by a function name, a signature within parentheses `()`, and a colon `:`. The following code, with one additional level of indentation, is the function body.
```
def func0():
print("test")
func0()
```
Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
```
def func1(s):
"""
Print a string 's' and tell how many characters it has
"""
print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
```
Functions that return a value use the `return` keyword:
```
def square(x):
"""
Return the square of x.
"""
return x ** 2
square(4)
```
We can return multiple values from a function using tuples (see above):
```
def powers(x):
"""
Return a few powers of x.
"""
return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
```
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
Make a function that takes two arguments $x$ and $y$ and returns
$$
\cos(x\,\pi) + \sin(y\,\pi)
$$
Suppose your function was called $f$ then call
f(2,3)
What do you observe?
```
```
Now, if you have done programming in many languages you may be wondering about something. How about types for arguments?
I.e., why don't we write
```
def f(int x):
return x+2
```
The idea is that Python uses "Duck typing".
I.e., if it looks like a duck, it walks like a duck, and it sounds like a duck, then it is a duck.
This means you have to be a little careful
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
Make a function that takes two arguments $x$ and $y$ and returns
$$
x+y
$$
Suppose your function was called $f$ then call
f(2,3)
f("2","3")
What do you observe?
```
```
## Operators and comparisons
Most operators and comparisons in Python work as one would expect:
* Arithmetic operators `+`, `-`, `*`, `/`, `//` (integer division), `**` (power)
```
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# Integer division of float numbers
3.0 // 2.0
# Note! The power operators in python isn't ^, but **
2 ** 2
```
Note: The `/` operator always performs a floating point division in Python 3.x.
This is not true in Python 2.x, where the result of `/` is always an integer if the operands are integers.
To be more specific, `1/2 = 0.5` (`float`) in Python 3.x, and `1/2 = 0` (`int`) in Python 2.x (but `1.0/2 = 0.5` in Python 2.x).
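In Python 3 this is easy to verify directly:

```python
print(1 / 2)     # 0.5 - true division always returns a float
print(1 // 2)    # 0   - integer (floor) division
print(1.0 // 2)  # 0.0 - floor division keeps the float type
```

If you need integer results in Python 3, reach for `//` explicitly rather than relying on the operand types.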
* The boolean operators are spelled out as the words `and`, `not`, `or`.
```
True and False
not False
True or False
```
* Comparison operators `>`, `<`, `>=` (greater or equal), `<=` (less or equal), `==` equality, `is` identical.
```
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
```
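The difference between `==` (equal contents) and `is` (same object) is worth spelling out:

```python
l1 = [1, 2]
l2 = l1      # l2 names the very same list object
l3 = [1, 2]  # l3 is a new object with equal contents

print(l1 == l3)  # True  -- contents are equal
print(l1 is l3)  # False -- two distinct objects
print(l1 is l2)  # True  -- one and the same object
```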
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
What is the truth value of
$$
(2<3) \;\text{and}\; \big((5>6) \;\text{or}\; (7 \le 7)\big)
$$
```
```
## Compound types: Strings, List and dictionaries
### Strings
Strings are the variable type that is used for storing text messages.
```
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)
```
We can index a character in a string using `[]`:
```
s[0]
```
**Heads up MATLAB users:** Indexing starts at 0!
## A Brief Digression
As some of you may have observed, some of this might be familiar. Python borrows...steals... ummm, maybe leverages ideas from many languages, with the numeric part of Python looking a lot like Matlab.
In fact, there is even a cheat sheet:
http://mathesaurus.sourceforge.net/matlab-numpy.html
This relationship becomes even more apparent when we do plotting.
## Back to the show!
We can extract a part of a string using the syntax `[start:stop]`, which extracts characters between index `start` and `stop` -1 (the character at index `stop` is not included):
```
s[0:5]
```
If we omit either (or both) of `start` or `stop` from `[start:stop]`, the default is the beginning and the end of the string, respectively:
```
s[:5]
s[6:]
```
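Slices also accept a step, and negative indices count from the end:

```python
s = "Hello world"
print(s[::2])   # every second character: 'Hlowrd'
print(s[::-1])  # a step of -1 reverses the string: 'dlrow olleH'
print(s[-5:])   # negative indices count from the end: 'world'
```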
Python has a very rich set of functions for text processing. See for example http://docs.python.org/3/library/string.html for more information.
#### String formatting examples
```
print("str1", "str2", "str3") # The print statement concatenates strings with a space
print("str1", 1.0, False, -1j) # The print statements converts all arguments to strings
print("str1" + "str2" + "str3") # strings added with + are concatenated without space
print("value = %f" % 1.0) # we can use C-style string formatting
# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)
```
## Data Structures
### List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is `[...]`:
```
l = [1,2,3,4]
print(type(l))
print(l)
```
We can use the same slicing techniques to manipulate lists as we could use on strings:
```
print(l)
print(l[1:3])
print(l[::2])
```
Elements in a list do not all have to be of the same type:
```
l = [1, 'a', 1.0, 1-1j]
print(l)
```
Python lists can be inhomogeneous and arbitrarily nested:
```
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
```
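Chaining index operations digs into a nested list one level at a time:

```python
nested_list = [1, [2, [3, [4, [5]]]]]
print(nested_list[1])        # [2, [3, [4, [5]]]]
print(nested_list[1][1])     # [3, [4, [5]]]
print(nested_list[1][1][0])  # 3
```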
### Advanced topic. You can do really strange things!
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
What the heck does the following code do?
```
a=1234
myList = [print,a]
myList[0](myList[1])
```
### Back to the show
Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the `range` function:
```
start = 10
stop = 30
step = 2
range(start, stop, step)
# in Python 3, range generates an iterator, which can be converted to a list using 'list(...)'.
# In Python 2, range already returns a list, so 'list(...)' has no effect.
list(range(start, stop, step))
list(range(-10, 10))
```
See `help(list)` for more details, or read the online documentation
## Control Flow
### Conditional statements: if, elif, else
The Python syntax for conditional execution of code uses the keywords `if`, `elif` (else if), `else`:
```
statement1 = False
statement2 = False
if statement1:
    print("statement1 is True")
elif statement2:
    print("statement2 is True")
else:
    print("statement1 and statement2 are False")
```
For the first time, here we encounter a peculiar aspect of the Python programming language: program blocks are defined by their indentation level.
Compare to the equivalent C code:
    if (statement1)
    {
        printf("statement1 is True\n");
    }
    else if (statement2)
    {
        printf("statement2 is True\n");
    }
    else
    {
        printf("statement1 and statement2 are False\n");
    }
In C, blocks are defined by the enclosing curly brackets `{` and `}`, and the level of indentation (white space before the code statements) does not matter at all.
But in Python, the extent of a code block is defined by its indentation level (usually a tab or four spaces). This means we have to be careful to indent our code correctly, or else we will get syntax errors.
#### Examples:
```
statement1 = statement2 = True
if statement1:
    if statement2:
        print("both statement1 and statement2 are True")

# Bad indentation!
if statement1:
    if statement2:
    print("both statement1 and statement2 are True") # this line is not properly indented

statement1 = False

if statement1:
    print("printed if statement1 is True")
    print("still inside the if block")

if statement1:
    print("printed if statement1 is True")
print("now outside the if block")

for i in range(3):
    print(i)
print('randy is dumb')

x = 3
if x == 5:
    print('randy is dumb')
# elif x in [3,4]:
#     print('maybe not so dumb')
else:
    print('we have no info about randy')
[3 , 4]
def f(x,y):
    return x+y

f(2,3)
f('2','3')
f(3,4)
3+7
'3'+'7'

class g:
    """Randy's dumb class"""
    def __init__(self, x, y, z):
        """Randy's dumb function"""
        self.x = x
        self.a = y + z
    def call(self, y):
        # self.x = self.x+1
        return self.x + y
    def bar_(self):
        self.x = 0

newF = g(4, 1, 2)    # __init__ expects x, y and z
newF2 = g(17, 1, 2)
g?
newF.call(5)
newF2.call(5)
print('randy is "dumb"')
x="""
randy is dumb
Oh, so dumb
I mean, really dumb
"""
print(x)
x=5
y=7
print(x)
y
```
## Loops
In Python, loops can be programmed in a number of different ways. The most common is the `for` loop, which is used together with iterable objects, such as lists. The basic syntax is:
### **`for` loops**:
```
for x in [1,2,3]:
    print(x)
```
The `for` loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the `for` loop. For example:
```
for x in range(4): # by default range starts at 0
    print(x)
```
Note: `range(4)` does not include 4 !
```
for x in range(-3,3):
    print(x)

for word in ["scientific", "computing", "with", "python"]:
    print(word)
```
<font face="verdana" style="font-size:30px" color="red">Your turn</font>
Compute 10! using a for loop.
**NOTE, this is by far the hardest assignment so far!**
```
```
## Further reading
* http://www.python.org - The official web page of the Python programming language.
* http://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended.
* http://www.greenteapress.com/thinkpython/ - A free book on Python programming.
* [Python Essential Reference](http://www.amazon.com/Python-Essential-Reference-4th-Edition/dp/0672329786) - A good reference book on Python programming.
```
```
# Data Structures
Expanding on our knowledge of the data types introduced last session, in this class we make our code more Pythonic by introducing advanced methods and tools. We cover list and dict comprehensions, the iterator toolkit for efficient looping, and how to create generators. You'll learn how to import other libraries, write anonymous functions with lambdas, and go on a rudimentary exploration of the functional programming paradigm with the map, filter, and reduce functions.
At the end of this class you'll be able to choose the right data structures to represent your problem and use efficient methods to stream your data through your script.
### Agenda
2.0 While Loops
2.1 List Comprehensions
2.2 Set Comprehensions
2.3 Dictionaries
2.4 Dictionary Comprehension
2.5 Collections
2.6 Python Execution Environment
## While
We covered 'for' loops in the previous session, but Python also allows you to construct while loops.
The syntax for the while statement is:
```python
while condition:
    statement1
    statement2
```
The body of the loop must be indented properly under the while statement. It is executed repeatedly as long as the condition is true; as with the if-else statement, any non-zero value counts as true. Let us write a simple program to print the numbers 0 to 10.
```
n = 0
while n < 11:
    print(n)
    n += 1
```
#### break
Let us write a program to evaluate the power series $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!}$, where $0 < x < 1$.
```
x = float(input("Enter the value of x: "))
n = term = num = 1
summation = 1.0
while n <= 100:
    term *= x / n
    summation += term
    n += 1
    if term < 0.0001:
        break
print("No of Times= {} and Sum= {}".format(n, summation))
```
What break does is stop the innermost loop. In this example we are using break under the if statement. This means if the value of term is less than 0.0001 then get out of the loop.
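As a sanity check, the same loop can be wrapped in a function (dropping the `input()` call) and compared against `math.exp`; the helper name `exp_series` is ours, not from the lesson:

```python
import math

def exp_series(x, tol=0.0001, max_terms=100):
    # approximate e**x with its power series, stopping once a term drops below tol
    term = summation = 1.0
    n = 1
    while n <= max_terms:
        term *= x / n
        summation += term
        n += 1
        if term < tol:
            break
    return summation

print(exp_series(0.5), math.exp(0.5))  # the two values agree to about 4 decimal places
```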
#### continue
Just like break, we have another statement, continue, which skips the rest of the current iteration and jumps back to the start of the loop. In other words, it lets you skip part of the loop body. In the example below we ask the user to input an integer: if the input is negative we ask again, and if it is positive we print its square. To get out of the infinite loop, the user must input 0.
```
while True:
    n = int(input("Please enter an Integer: "))
    if n < 0:
        continue # this will take the execution back to the starting of the loop
    elif n == 0:
        break
    print("Square is ", n ** 2)
print("Goodbye")
```
#### **Problem** : Game of Sticks
```
sticks = 21
print("There are 21 sticks, you can take 1-4 number of sticks at a time.")
print("Whoever takes the last stick will lose")
while True:
    print("Sticks left: " , sticks)
    sticks_taken = int(input("Take sticks(1-4):"))
    if sticks == 1:
        print("You took the last stick, you lose")
        break
    if sticks_taken >= 5 or sticks_taken <= 0:
        print("Wrong choice")
        continue
    print("Computer took: " , (5 - sticks_taken) , "\n")
    sticks -= 5
```
Can you figure out how to win?
What if you were a programmer?
## List Comprehensions
List Comprehensions provide a concise way of creating lists. Many times a complex task can be modelled in a single line.
Here are some simple examples for transforming a list.
```
a = range(10)
a
[x for x in a]
[x+1 for x in a]
```
It is also possible to filter a list using if inside a list comprehension.
```
a = range(10)
[x for x in a if x % 2 == 0]
[x*x for x in a if x%2 == 0]
```
It is possible to iterate over multiple lists using the built-in function zip.
```
a = [1, 2, 3, 4]
b = [2, 3, 5, 7]
list(zip(a, b))
[x+y for x, y in zip(a, b)]
```
We can use multiple `for` clauses in a single list comprehension.
```
[(x, y) for x in range(5) for y in range(5) if (x+y)%2 == 0]
[(x, y) for x in range(5) for y in range(5) if (x+y)%2 == 0 and x != y]
[(x, y) for x in range(5) for y in range(x) if (x+y)%2 == 0]
```
The following example finds all Pythagorean triplets using numbers below 25. (x, y, z) is called a Pythagorean triplet if `x*x + y*y == z*z`.
```
n = 25
[(x, y, z) for x in range(1, n) for y in range(x, n) for z in range(y, n) if x*x + y*y == z*z]
```
**Problem** : Provide an implementation for zip function using list comprehensions.
```
list(zip([1, 2, 3], ["a", "b", "c"]))
```
**Problem** : Python provides a built-in function map that applies a function to each element of a list. Provide an implementation for map using list comprehensions.
```
def square(x): return x * x
list(map(square, [1,2,3,4,5]))
```
## Set Comprehension
Set comprehensions allow sets to be constructed using the same principles as list comprehensions, the only difference is that resulting sequence is a set.
Say we have a list of names. The list can contain names which differ only in the case used to represent them, duplicates, and names consisting of only one character. We are only interested in names longer than one character and wish to represent all names in the same format: the first letter should be capitalised, all other characters should be lower case.
```
# Given the list
names = [ 'Bob', 'JOHN', 'alice', 'bob', 'ALICE', 'J', 'Bob' ]
# We require the set:
{ 'Bob', 'John', 'Alice' }
```
Note the syntax for denoting a set. Members are enclosed in curly braces.
The following set comprehension accomplishes this:
```
{ name[0].upper() + name[1:].lower() for name in names if len(name) > 1 }
```
## Dictionaries
One of the most loved data structures in Computer Science!!
What makes them so cool?
Dictionaries are like lists, but they can be indexed with non-integer keys as well. Unlike lists, dictionaries historically were not ordered (since Python 3.7, they do preserve insertion order).
```
a = {'x': 1, 'y': 2, 'z': 3}
a['x']
a['z']
b = {}
b['x'] = 2
b[2] = 'foo'
b[(1, 2)] = 3
b
```
The del keyword can be used to delete an item from a dictionary.
```
a = {'x': 1, 'y': 2, 'z': 3}
del a['x']
```
The keys method returns all keys in a dictionary, the values method returns all values in a dictionary and items method returns all key-value pairs in a dictionary.
```
a.keys()
a.values()
a.items()
```
The for statement can be used to iterate over a dictionary.
```
for key in a: print(key)
for key, value in a.items(): print(key, value)
```
Presence of a key in a dictionary can be tested using the `in` operator (Python 2 also had a `has_key` method, which was removed in Python 3).
```
'x' in a
'p' in a
'g' in a.keys()
```
Other useful methods on dictionaries are get and setdefault.
```
d = {'x': 1, 'y': 2, 'z': 3}
d.get('x', 5)
d.get('p', 5)
d.setdefault('x', 0)
d.setdefault('p', 0)
```
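The difference between the two: `get` only reads, while `setdefault` also writes the fallback into the dictionary. A quick sketch:

```python
d = {'x': 1}

print(d.get('p', 5))  # 5 -- fallback returned, dictionary untouched
print(d)              # {'x': 1}

print(d.setdefault('p', 5))  # 5 -- fallback returned AND stored
print(d)                     # {'x': 1, 'p': 5}
```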
Dictionaries can be used in string formatting to specify named parameters.
```
'hello %(name)s' % {'name': 'python'}
'Tutorial %(index)d: %(name)s' % {'index': 2, 'name': 'Data Structures'}
```
### Example: Word Frequency
Suppose we want to find number of occurrences of each word in a file. Dictionary can be used to store the number of occurrences for each word.
Let's first write a function to count the frequency of words, given a list of words.
```
def word_frequency(words):
    """Returns frequency of each word given a list of words.

    >>> word_frequency(['a', 'b', 'a'])
    {'a': 2, 'b': 1}
    """
    frequency = {}
    for w in words:
        try:
            frequency[w] += 1
        except KeyError:
            frequency[w] = 1
    return frequency
```
**Problem**: How could we have used the `.get()` method to avoid the try/except clause?
```
# solution('ZnJlcXVlbmN5W3ddID0gZnJlcXVlbmN5LmdldCh3LCAwKSArIDE=')
```
Getting words from a file is very trivial.
```
def read_words(filename):
    return open(filename).read().split()
```
We can combine these two functions to find frequency of all words in a file.
```
def main(filename):
    frequency = word_frequency(read_words(filename))
    for word, count in frequency.items():
        print(word.lower(), count)
# In a Python Script, you'd add this to the bottom to make it run from the command line
# while also taking in a single argument referring to the filename.
#
# if __name__ == "__main__":
# import sys
# main(sys.argv[1])
main('files/straight_outta_compton.txt')
```
**Problem**: Improve the above program to print the words in descending order of the number of occurrences.
```
# Tip: List comprehension to the rescue!
```
## Dictionary Comprehension
Say we have a dictionary the keys of which are characters and the values of which map to the number of times that character appears in some text. The dictionary currently distinguishes between upper and lower case characters.
We require a dictionary in which the occurrences of upper and lower case characters are combined:
```
mcase = {'a':10, 'b': 34, 'A': 7, 'Z':3}
mcase_frequency = { k.lower() : mcase.get(k.lower(), 0) + mcase.get(k.upper(), 0) for k in mcase.keys() }
# mcase_frequency == {'a': 17, 'z': 3, 'b': 34}
```
Instead of a list comprehension, we could have simply used a dict comprehension in the last problem.
```
a_dict = {'a': 1, 'b': 2, 'c': 3}
{value:key for key, value in a_dict.items()}
```
Of course, this only works if the values of the dictionary are immutable, like strings or tuples. If you try this with a dictionary that contains lists, it will fail most spectacularly.
```
a_dict = {'a': [1, 2, 3], 'b': 4, 'c': 5}
# {value:key for key, value in a_dict.items()}
```
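Uncommenting the line above raises a `TypeError`, because lists are unhashable and therefore cannot serve as dictionary keys:

```python
a_dict = {'a': [1, 2, 3], 'b': 4, 'c': 5}
try:
    {value: key for key, value in a_dict.items()}
except TypeError as e:
    print("inversion failed:", e)  # unhashable type: 'list'
```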
## Collections
This section covers a module called `collections`, which implements some nice data structures that will help you solve various real-life problems.
```
import collections
```
### Counter
Counter is a dict subclass which helps to count hashable objects. Inside it elements are stored as dictionary keys and counts are stored as values which can be zero or negative.
```
from collections import Counter
import re
path = 'files/straight_outta_compton.txt'
# let's be a bit more precise about what we mean by 'words'
words = re.findall(r'\w+', open(path).read().lower())
Counter(words).most_common(10)
```
Print out the methods available to you
```
# dir(Counter)
[n for n in dir(Counter) if '__' not in n]
```
Counter objects have a method called elements which returns an iterator over elements, repeating each as many times as its count. Elements are returned in arbitrary order.
```
c = Counter(a=4, b=2, c=0, d=-2)
list(c.elements())
```
most_common is a method which returns most common elements and their counts from the most common to the least.
```
Counter('abracadabra').most_common(3)
```
### Default Dictionary
A defaultdict is a dictionary with a default value for keys, so that keys for which no value has been explicitly defined can be accessed without errors. Let's see it in action. Using defaultdict is faster than doing the same with the dict.setdefault method.
#### Word Frequency Revisited
We previously used a try/except clause to work around the missing starting value in the dict.
```
def word_frequency(words):
    frequency = {}
    for w in words:
        try:
            frequency[w] += 1
        except KeyError:
            frequency[w] = 1
    return frequency
```
A defaultdict is just like a regular Python dict, except that it supports an additional argument at initialization: a function. If someone attempts to access a key to which no value has been assigned, that function will be called (without arguments) and its return value is used as the default value for the key.
Going back to our example, we want the default value to be 0, so we pass the built-in function `int` to the defaultdict constructor. When called without arguments, `int()` simply returns 0.
```
from collections import defaultdict
def word_frequency(words):
    frequency = defaultdict(int)
    for w in words:
        frequency[w] += 1
    return frequency
```
### Ordered dictionaries
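The collections module also provides OrderedDict, a dict subclass that remembers the order in which keys were inserted. Since Python 3.7, plain dicts preserve insertion order too, but OrderedDict still offers extras such as `move_to_end` and order-sensitive equality:

```python
from collections import OrderedDict

od = OrderedDict()
od['first'] = 1
od['second'] = 2
od['third'] = 3
print(list(od.keys()))  # ['first', 'second', 'third'] -- insertion order kept

od.move_to_end('first')  # a method plain dicts don't have
print(list(od.keys()))   # ['second', 'third', 'first']
```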
### Named Tuple
Named tuples give meaning to each position in a tuple, allowing for more readable, self-documenting code. You can use them anywhere you would use a tuple. In the example we create a namedtuple to hold information about points.
```
from collections import namedtuple
Point = namedtuple('Point', ['x', 'y'])
p = Point(10, y=20)
p
Point(x=10, y=20)
p.x + p.y
p[0] + p[1] # Accessing the values in normal way
x, y = p # Unpacking the tuple
x
y
```
## Python Execution Environment
Python stores the variables we use in a dictionary. The globals() function returns all the global variables in the current environment.
```
globals()
x = 1
globals()
x = 1
globals()['x'] = 3
x
```
Just like globals, Python also provides a function locals which gives all the local variables inside a function.
```
def f(a, b): print(locals())
```
One more example:
```
def f(name):
    return "Hello %(name)s!" % locals()
f("Guido")
```
## Python String Formatting
### Number Formatting
The following table shows various ways to format numbers using Python's str.format(), with examples of both float and integer formatting.
To run the examples, use print("FORMAT".format(NUMBER)). So to get the output of the first example, you would run: print("{:.2f}".format(3.1415926)).
| Number | Format | Output | Description |
|------------|---------|-----------|-----------------------------------------------|
| 3.1415926 | {:.2f} | 3.14 | 2 decimal places |
| 3.1415926 | {:+.2f} | +3.14 | 2 decimal places with sign |
| -1 | {:+.2f} | -1.00 | 2 decimal places with sign |
| 2.71828 | {:.0f} | 3 | No decimal places |
| 5 | {:0>2d} | 05 | Pad number with zeros (left padding, width 2) |
| 5 | {:x<4d} | 5xxx | Pad number with x's (right padding, width 4) |
| 10 | {:x<4d} | 10xx | Pad number with x's (right padding, width 4) |
| 1000000 | {:,} | 1,000,000 | Number format with comma separator |
| 0.25 | {:.2%} | 25.00% | Format percentage |
| 1000000000 | {:.2e} | 1.00e+09 | Exponent notation |
| 13 | {:10d} | 13 | Right aligned (default, width 10) |
| 13 | {:<10d} | 13 | Left aligned (width 10) |
| 13 | {:^10d} | 13 | Center aligned (width 10) |
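Each row of the table can be checked directly; for example:

```python
print("{:.2f}".format(3.1415926))   # 3.14
print("{:+.2f}".format(3.1415926))  # +3.14
print("{:0>2d}".format(5))          # 05
print("{:,}".format(1000000))       # 1,000,000
print("{:.2%}".format(0.25))        # 25.00%
print("{:.2e}".format(1000000000))  # 1.00e+09
```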
### string.format() basics
Here are a couple of examples of basic string substitution; the {} is the placeholder for the substituted variables. If no format is specified, the variable will be inserted formatted as a string.
```
s1 = "so much depends upon {}".format("a red wheel barrow")
s2 = "glazed with {} water beside the {} chickens".format("rain", "white")
```
You can also use the numeric position of the variables and change their order in the string. This gives some flexibility when doing the formatting: if you made a mistake in the order, you can easily correct it without shuffling all the variables around.
```
s1 = " {0} is better than {1} ".format("emacs", "vim")
s2 = " {1} is better than {0} ".format("emacs", "vim")
```
The format() function offers a fair amount of additional features and capabilities, here are a few useful tips and tricks using .format()
#### Named Arguments
You can use the new string format as a templating engine and use named arguments, instead of requiring a strict order.
```
" I {verb} the {object} off the {place} ".format(verb="took", object="cheese", place="table")
```
#### Reuse Same Variable Multiple Times
The .format() method allows you to put them in any order as we saw above in the basics, but also allows for reuse.
```
"Oh {0}, {0}! wherefore art thou {0}?".format("Romeo")
```
#### Convert Values to different Bases
You can use the following letters to convert a number to their bases, decimal, hex, octal, binary
```
print("{0:d} - {0:x} - {0:o} - {0:b} ".format(21))
```
#### Use Format as a Function
You can use .format as a function which allows for some separation of text and formatting from code. For example at the beginning of your program you could include all your formats and then use later. This also could be a nice way to handle internationalization which not only requires different text but often requires different formats for numbers.
```
## defining formats
email_f = "Your email address was {email}".format
## use elsewhere
print(email_f(email="bob@example.com"))
```
Oh, and if you need to use braces when using str.format(), just double up
```
" The {} set is often represented as {{0}}".format("empty")
```
For a more detailed treatment of string formatting, please visit [pyformat.info](https://pyformat.info/#number_padding)
## Challenges
```
import sys, os
from base64 import b64encode, b64decode
from TestFramework.TestFramework import Test

def solution(enc):
    print(str(b64decode(enc), 'utf-8'))
```
### Enough is enough!
Alice and Bob were on a holiday. Both of them took many pictures of the places they've been, and now they want to show Charlie their entire collection. However, Charlie doesn't like these sessions, since the motif usually repeats. He isn't fond of seeing the Eiffel tower 40 times. He tells them that he will only sit through the session if they show the same motif at most N times. Luckily, Alice and Bob are able to encode each motif as a number. Can you help them to remove numbers such that their list contains each number only up to N times, without changing the order?
Task
Given a list lst and a number `N`, create a new list that contains each number of lst at most `N` times without reordering. For example if `N = 2`, and the input is `[1,2,3,1,2,1,2,3]`, you take `[1,2,3,1,2]`, drop the next `[1,2]` since this would lead to `1` and `2` being in the result `3` times, and then take `3`, which leads to `[1,2,3,1,2,3]`.
Example
delete_nth ([1,1,1,1],2) # return [1,1]
delete_nth ([20,37,20,21],1) # return [20,37,21]
#### Solution
```
##Your code
def delete_nth(order,max_e):
pass
test = Test()
test.assert_equals([20,37,21], delete_nth([20,37,20,21], 1))
test.assert_equals([1, 1, 3, 3, 7, 2, 2, 2], delete_nth([1,1,3,3,7,2,2,2,2], 3))
```
#### Sample Solution
```
# solution('IyNTb2x1dGlvbgpkZWYgZGVsZXRlX250aChvcmRlcixtYXhfZSk6CiAgICByZXN1bHQgPSBbXQogICAgZm9yIG51bSBpbiBvcmRlcjoKICAgICAgICBpZiByZXN1bHQuY291bnQobnVtKSA+PSBtYXhfZToKICAgICAgICAgICAgcGFzcwogICAgICAgIGVsc2U6CiAgICAgICAgICAgIHJlc3VsdC5hcHBlbmQobnVtKQogICAgcmV0dXJuKHJlc3VsdCk=')
```
### Which are in?
Given two lists of strings a1 and a2, return a sorted list of the strings of a1 which are substrings of strings of a2. Return the list in lexicographical order and without duplicates.
Example: a1 = ["arp", "live", "strong"]
a2 = ["lively", "alive", "harp", "sharp", "armstrong"]
returns ["arp", "live", "strong"]
a1 = ["tarp", "mice", "bull"]
a2 = ["lively", "alive", "harp", "sharp", "armstrong"]
returns []
#### Solution
```
##Your code
def in_list(list_1, list_2):
pass
test = Test()
a1 = ["live", "arp", "strong"]
a2 = ["lively", "alive", "harp", "sharp", "armstrong"]
r1 = ['arp', 'live', 'strong']
test.assert_equals(r1, in_list(a1, a2))
a3 = ["tarp", "mice", "bull"]
a4 = ["lively", "alive", "harp", "sharp", "armstrong"]
r2 = []
test.assert_equals(r2, in_list(a3, a4))
a5 = ["tarp", "mst", "har"]
a6 = ["lively", "alive", "harp", "sharp", "armstrong"]
r3 = ['har', 'mst']
test.assert_equals(r3, in_list(a5, a6))
```
#### Sample Solution
```
# solution('IyNTb2x1dGlvbgpkZWYgaW5fbGlzdChsaXN0XzEsIGxpc3RfMik6CiAgICBsaXN0XzEuc29ydCgpCiAgICByZXN1bHQgPSBbXQogICAgZm9yIHdvcmQgaW4gbGlzdF8xOgogICAgICAgIGhhc1dvcmQgPSBGYWxzZQogICAgICAgIGZvciB3b3JkMiBpbiBsaXN0XzI6CiAgICAgICAgICAgIGlmIHdvcmQyLmZpbmQod29yZCkgIT0gLTE6CiAgICAgICAgICAgICAgICBoYXNXb3JkID0gVHJ1ZQogICAgICAgIGlmIGhhc1dvcmQ6CiAgICAgICAgICAgIHJlc3VsdC5hcHBlbmQod29yZCkKICAgIHJldHVybihyZXN1bHQp')
```
### Midpoint Sum
For a given list of integers, return the index of the element where the sums of the integers to the left and right of the current element are equal.
Ex:
```
ints = [4, 1, 7, 9, 3, 9]
# Since 4 + 1 + 7 = 12 and 3 + 9 = 12, the returned index would be 3
ints = [1, 0, -1]
# Returns -1
# There are no indices where the left and right sums are equal
```
Here are the 2 important rules:
The element at the index to be returned is not included in either of the sum calculations!
Neither the first nor the last index can be considered as a "midpoint" (so None for [X] and [X, X])
#### Solution
```
##Your code
def midpoint_sum(ints):
pass
test = Test()
test.describe("Normal Cases")
test.expect(midpoint_sum([1, 0, -1]) == -1, "[1, 0, -1] should return -1")
test.expect(midpoint_sum([4,1,7,9,3,9]) == 3, "[4,1,7,9,3,9] should return 3")
test.expect(midpoint_sum([1,0,1]) == 1, "[1,0,1] should equal 1")
test.expect(midpoint_sum([9,0,1,2,3,4]) == 2, "[9,0,1,2,3,4] should equal 2")
test.expect(midpoint_sum([0,0,4,0]) == 2, "[0,0,4,0] should equal 2")
test.expect(midpoint_sum([-10,3,7,8,-6,-13,21]) == 4, "[-10,3,7,8,-6,-13,21] should equal 4")
test.expect(midpoint_sum([1,1,1,1,-5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]) == 52, "Large valid sequence: [1,1,1,1,-5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] should equal 52")
```
#### Sample Solution
```
# solution('IyNTb2x1dGlvbgpkZWYgbWlkcG9pbnRfc3VtKGludHMpOgogICAgbGVmdCA9IFtdCiAgICByaWdodCA9IFtdCiAgICBsZWZ0X2luZGV4ID0gMAogICAgcmlnaHRfaW5kZXggPSBsZW4oaW50cykKICAgIG1heF9zdW0gPSBzdW0oaW50cykvMgogICAgZm9yIGluZGV4IGluIHJhbmdlKDAsIChsZW4oaW50cykgLSAxKSk6CiAgICAgICAgaWYgbGVuKGludHNbMDppbmRleF0pID4gMDoKICAgICAgICAgICAgbGVmdC5hcHBlbmQoc3VtKGludHNbMDppbmRleF0pKQogICAgZm9yIGluZGV4IGluIHJhbmdlKDEsIChsZW4oaW50cykpKToKICAgICAgICBpZiBsZW4oaW50c1staW5kZXg6XSkgPiAwOgogICAgICAgICAgICByaWdodC5hcHBlbmQoc3VtKGludHNbLWluZGV4Ol0pKQogICAgZm9yIGxlZnRfbnVtIGluIGxlZnQ6CiAgICAgICAgcmlnaHRfaW5kZXggPSAwCiAgICAgICAgZm9yIHJpZ2h0X251bSBpbiByaWdodDoKICAgICAgICAgICAgaWYgbGVmdF9udW0gPT0gcmlnaHRfbnVtOgogICAgICAgICAgICAgICAgaWYgbGVmdF9pbmRleCArIHJpZ2h0X2luZGV4ICsgMyA9PSBsZW4oaW50cyk6CiAgICAgICAgICAgICAgICAgICAgcmV0dXJuKGxlZnRfaW5kZXggKyAxKQogICAgICAgICAgICByaWdodF9pbmRleCA9IHJpZ2h0X2luZGV4ICsgMQogICAgICAgIGxlZnRfaW5kZXggPSBsZWZ0X2luZGV4ICsgMQogICAgcmV0dXJuKC0xKQ==')
```
## Homework
This week's homework is to create a proof of concept backend for a predictive keyboard. Sounds fancy right? But you can already do this! The idea is that just like the Android keyboard, once you have completed a word, it will recommend three words which it thinks you'll want to type next.
We are going to seed our model by reading in 100,000 emails from the [Enron Email Dataset](https://www.cs.cmu.edu/~./enron/). This represents our baseline for how we expect people to use language. We all want to talk like the smartest guys in the room, right?
Your solution will consist of two functions. The first function reads in the email data to build up a probabilistic collocation model, i.e. how likely one word is to follow another word. Once you've prepared this model, the second function takes in a string and, based on that string (word, sub-word or phrase), uses the language model you've created to return a tuple of the three most likely auto-complete options.
Please [download](http://python-intro.s3.amazonaws.com/data/email.txt) the cleaned up dataset
```
def auto_complete_candidates(txt, no_candidates=3):
    """Auto complete candidates based on text input

    Consults a language model to predict the most likely candidates
    for a keyboard's auto-complete functionality.
    By default returns the best three candidates.

    Args:
        txt (str): word or phrase to base autocomplete on
        no_candidates (int): number of candidates to return

    Returns:
        tuple: Best candidate autocompletions
    """
    return txt
```
#### Reach Goal
The reach goal is to also provide predictive "_text_"-completion. That is, if you pass an incomplete word to `auto_complete_candidates`, it will return candidates to complete the word instead of words which are likely to follow the input. This means that you'll need to be flexible in what you accept as input and figure out whether what you're being passed is a complete word or not.
#### Programmer Goal
The programmer goal is to also take the words into account which the user typed thus far. So instead of just a single word as input, consider a phrase. Also come up with a way to score the number of words you should be taking into account.
# Querying a gbXML file
This notebook demonstrates how to query a gbXML file using the `gbxml` python package.
## Importing the `Gbxml` class
```
from gbxml import Gbxml
```
## Loading the gbXML file
This creates an instance of the Gbxml class and reads in a sample gbXML file called 'detached_house.gbxml'
```
g = Gbxml('detached_house.gbxml')
g
```
## General Methods
### get_ids
```
print(Gbxml.get_ids.__doc__.strip())
result=g.get_ids()
print(result)
result=g.get_ids(tag='Space')
print(result)
```
### get_xmlstring
```
print(Gbxml.get_xmlstring.__doc__.strip())
result=g.get_xmlstring()
result[:1000]
result=g.get_xmlstring(id='DINING_ROOM')
print(result)
```
### get_attributes
```
print(Gbxml.get_attributes.__doc__.strip())
g.get_attributes(id='DINING_ROOM')
```
### get_child_tags
```
print(Gbxml.get_child_tags.__doc__.strip())
result=g.get_child_tags(id='DINING_ROOM')
print(result)
```
### get_child_tag_text
```
print(Gbxml.get_child_tag_text.__doc__.strip())
g.get_child_tag_text(id='DINING_ROOM',child_tag='Area')
```
### get_child_tag_attributes
```
print(Gbxml.get_child_tag_attributes.__doc__.strip())
g.get_child_tag_attributes(id='DINING_ROOM',child_tag='PeopleHeatGain')
```
### get_children_list
```
print(Gbxml.get_children_list.__doc__.strip())
g.get_children_list(id='DINING_ROOM')
```
## Campus query methods
### get_campus_location_tags
```
print(Gbxml.get_campus_location_tags.__doc__.strip())
g.get_campus_location_tags(id='campus-1')
```
### get_campus_location_tag_text
```
print(Gbxml.get_campus_location_tag_text.__doc__.strip())
g.get_campus_location_tag_text(id='campus-1',child_tag='CADModelAzimuth')
```
## Building query methods
### get_building_space_ids
```
print(Gbxml.get_building_space_ids.__doc__.strip())
result=g.get_building_space_ids(id='detached_house')
print(result)
```
### get_building_surface_ids
```
print(Gbxml.get_building_surface_ids.__doc__.strip())
result=g.get_building_surface_ids(id='detached_house')
print(result)
```
## Space query methods
### get_space_surface_ids
```
print(Gbxml.get_space_surface_ids.__doc__.strip())
result=g.get_space_surface_ids(id='DINING_ROOM')
print(result)
```
## Construction query methods
### get_construction_layer_ids
```
print(Gbxml.get_construction_layer_ids.__doc__.strip())
g.get_construction_layer_ids(id='WALL')
```
### get_construction_material_ids
```
print(Gbxml.get_construction_material_ids.__doc__.strip())
g.get_construction_material_ids(id='WALL')
```
## Layer query methods
### get_layer_material_ids
```
print(Gbxml.get_layer_material_ids.__doc__.strip())
g.get_layer_material_ids(id='layer-WALL')
```
## Surface query methods
### get_surface_inner_space_id
```
print(Gbxml.get_surface_inner_space_id.__doc__.strip())
g.get_surface_inner_space_id(id='surface-6')
```
### get_surface_outer_space_id
```
print(Gbxml.get_surface_outer_space_id.__doc__.strip())
result=g.get_surface_outer_space_id(id='surface-6')
print(result)
```
### get_surface_azimuth
```
print(Gbxml.get_surface_azimuth.__doc__.strip())
g.get_surface_azimuth(id='surface-6')
```
### get_surface_tilt
```
print(Gbxml.get_surface_tilt.__doc__.strip())
g.get_surface_tilt(id='surface-6')
```
### get_surface_coordinates
```
print(Gbxml.get_surface_coordinates.__doc__.strip())
g.get_surface_coordinates(id='surface-6')
```
### get_surface_area
```
print(Gbxml.get_surface_area.__doc__.strip())
g.get_surface_area(id='surface-6')
```
### get_surface_opening_ids
```
print(Gbxml.get_surface_opening_ids.__doc__.strip())
g.get_surface_opening_ids(id='surface-6')
```
## Opening query methods
### get_opening_surface_id
```
print(Gbxml.get_opening_surface_id.__doc__.strip())
g.get_opening_surface_id(id='surface-6-opening-1')
```
### get_opening_coordinates
```
print(Gbxml.get_opening_coordinates.__doc__.strip())
g.get_opening_coordinates(id='surface-6-opening-1')
```
### get_opening_area
```
print(Gbxml.get_opening_area.__doc__.strip())
g.get_opening_area(id='surface-6-opening-1')
```
## Zone query methods
### get_zone_space_ids
```
print(Gbxml.get_zone_space_ids.__doc__.strip())
g.get_zone_space_ids(id='Zone-DINING_ROOM')
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
import numpy as np
sess = tf.Session(config=config)
import qa_consistency
import qa_consistency.dataset_utils
import qa_consistency.implication
import json
import pickle
import os
```
# Example: generating implications
```
gen = qa_consistency.implication.ImplicationsVQA()
gen.implications('How many birds?', '3')
```
This path must contain all of the VQA json files
```
vqa_path = '/home/marcotcr/datasets/vqa'
vqa_v1 = qa_consistency.dataset_utils.load_vqa(vqa_path, 'validation')
```
# Generating implications for all VQA v1 and v2 (question, answer) pairs. You can skip this and load my precomputed implications below.
```
vqa_v2 = qa_consistency.dataset_utils.load_vqav2(vqa_path, 'validation')
all_qs, all_as = qa_consistency.dataset_utils.question_answers_product(vqa_v1.questions + vqa_v2.questions, vqa_v1.all_answers + vqa_v2.all_answers)
parsed_qas = gen.parse_dataset(all_qs, all_as, verbose=True)
implications = [gen.implications_from_parsed(x) for x in parsed_qas]
# vqa_v1.idxs
output_folder = '/home/marcotcr/tmp/'
all_imps = {}
for qa, imp in zip(parsed_qas, implications):
all_imps[qa.as_tuple()] = imp
pickle.dump(all_imps, open(os.path.join(output_folder, 'vqa_imps.pkl'), 'wb'))
```
# Start from here if you want to use precomputed implications (link to pkl file in the repository's README)
```
output_folder = '/home/marcotcr/tmp/'
all_imps = pickle.load(open(os.path.join(output_folder, 'vqa_imps.pkl'), 'rb'))
consistency_folder = os.path.join(output_folder, 'vqa_v1_consistency')
```
Load original predictions from your model in the official vqa format
```
preds_path = os.path.join(output_folder, 'orig_preds.json')
# make sure this folder exists
qa_consistency.dataset_utils.generate_implication_vqa(vqa_v1, preds_path, all_imps, consistency_folder)
```
Now you would have to run your model on the generated files. Let's create a fake output in the right format just for simulation:
```
question_ids = [x['question_id'] for x in json.load(open(os.path.join(consistency_folder, 'questions.json'), 'r'))['questions']]
fake_preds_path = os.path.join(output_folder, 'consistency_preds.json')
json.dump([{'question_id': q, 'answer': a} for q, a in zip(question_ids, np.random.choice(['yes', 'no'], len(question_ids)))],
open(fake_preds_path, 'w'))
stats = qa_consistency.dataset_utils.evaluate_consistency_vqa(consistency_folder, fake_preds_path)
print('Consistency by implication type:')
print()
for x, v in stats.items():
if x == 'all':
continue
print('%s : %.1f' % (x, 100* v))
print()
print('Avg : %.1f' % (100 * stats['all']))
```
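`evaluate_consistency_vqa` reports, per implication type, the fraction of implied questions the model answered as expected. A simplified sketch of that bookkeeping (the record format here is made up for illustration, not the library's internal one):

```python
from collections import defaultdict

def consistency_by_type(records):
    """records: iterable of (implication_type, predicted, expected) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for imp_type, pred, expected in records:
        total[imp_type] += 1
        correct[imp_type] += int(pred == expected)
    # Per-type consistency, plus the overall average under 'all'
    stats = {t: correct[t] / total[t] for t in total}
    stats["all"] = sum(correct.values()) / sum(total.values())
    return stats

recs = [("logeq", "yes", "yes"), ("logeq", "no", "yes"), ("mutex", "no", "no")]
print(consistency_by_type(recs))
```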
| github_jupyter |
### **INITIALIZATION:**
- I use these three lines of code at the top of each of my notebooks: the first two prevent stale-module problems when reloading the project, and the third renders visualizations inline in the notebook.
```
#@ INITIALIZATION:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
**DOWNLOADING THE DEPENDENCIES:**
- I install and import all the libraries and dependencies required for the project in one particular cell.
```
#@ DOWNLOADING THE LIBRARIES AND DEPENDENCIES:
# !pip install -U d2l
# !apt-get install p7zip-full
import os, collections, math
import shutil
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torchvision
from torch import nn
from d2l import torch as d2l
PROJECT_ROOT_DIR = "."
ID = "RECOG"
IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "Images", ID)
if not os.path.isdir(IMAGE_PATH):
os.makedirs(IMAGE_PATH)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGE_PATH, fig_id + "." + fig_extension)
print("Saving Figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
### **OBTAINING AND ORGANIZING THE DATASET:**
- I have used Google Colab for this project, so the process of downloading and reading the data might differ on other platforms. I will use [**CIFAR-10 Object Recognition in Images**](https://www.kaggle.com/c/cifar-10) for this project. The dataset is divided into a training set and a test set. The training set contains 50,000 images, spread across ten categories: planes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
```
#@ ORGANIZING THE DATASET: UNCOMMENT BELOW:
# os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/MyDrive/Kaggle"
# %cd /content/drive/MyDrive/Kaggle
# !kaggle competitions download -c cifar-10
#@ OBTAINING THE DATASET:
d2l.DATA_HUB["CIFAR10"] = (d2l.DATA_URL + "kaggle_cifar10_tiny.zip",
'2068874e4b9a9f0fb07ebe0ad2b29754449ccacd') # Initializing the Dataset.
demo = True # Initialization.
if demo: data_dir = d2l.download_extract("CIFAR10") # Initialization.
else: data_dir = "../Data/CIFAR10/" # Initialization.
```
**ORGANIZING THE DATASET:**
- I will organize the datasets to facilitate model training and testing.
```
#@ ORGANIZING THE DATASET:
def read_csv_labels(fname): # Returning names to Labels.
with open(fname, "r") as f:
lines = f.readlines()[1:] # Reading Lines.
tokens = [l.rstrip().split(",") for l in lines]
return dict(((name, label) for name, label in tokens))
labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation.
print(f"Training Examples: {len(labels)}") # Number of Training Examples.
print(f"Classes: {len(set(labels.values()))}") # Number of Classes.
#@ ORGANIZING THE DATASET:
def copyfile(filename, target_dir): # Copying File into Target Directory.
os.makedirs(target_dir, exist_ok=True)
shutil.copy(filename, target_dir)
#@ ORGANIZING THE DATASET:
def reorg_train_valid(data_dir, labels, valid_ratio):
n = collections.Counter(labels.values()).most_common()[-1][1] # Number of examples per class.
n_valid_per_label = max(1, math.floor(n * valid_ratio))
label_count = {}
for train_file in os.listdir(os.path.join(data_dir, "train")):
label = labels[train_file.split(".")[0]]
fname = os.path.join(data_dir, "train", train_file)
copyfile(fname, os.path.join(data_dir, "train_valid_test", "train_valid", label)) # Copy to Train Valid.
if label not in label_count or label_count[label] < n_valid_per_label:
copyfile(fname, os.path.join(data_dir, "train_valid_test", "valid", label)) # Copy to Valid.
label_count[label] = label_count.get(label, 0) + 1
else:
copyfile(fname, os.path.join(data_dir, "train_valid_test", "train", label)) # Copy to Train.
return n_valid_per_label
```
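`reorg_train_valid` sizes the validation split from the rarest class, so every class contributes the same number of validation images. The arithmetic in isolation:

```python
import collections, math

# Toy label map: filename stem -> class
labels = {"a": "cat", "b": "cat", "c": "dog", "d": "cat"}
# most_common()[-1] is the *smallest* class; [1] is its count
n = collections.Counter(labels.values()).most_common()[-1][1]
valid_ratio = 0.1
# At least one validation example per class, even for tiny classes
n_valid_per_label = max(1, math.floor(n * valid_ratio))
print(n, n_valid_per_label)  # 1 1
```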
- The `reorg_test` function organizes the test set to facilitate reading during prediction.
```
#@ ORGANIZING THE DATASET:
def reorg_test(data_dir): # Initialization.
for test_file in os.listdir(os.path.join(data_dir, "test")):
copyfile(os.path.join(data_dir, "test", test_file),
os.path.join(data_dir, "train_valid_test", "test", "unknown")) # Implementation of Function.
#@ OBTAINING AND ORGANIZING THE DATASET:
def reorg_cifar10_data(data_dir, valid_ratio): # Obtaining and Organizing the Dataset.
labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation of Function.
reorg_train_valid(data_dir, labels, valid_ratio) # Implementation of Function.
reorg_test(data_dir) # Implementation of Function.
#@ INITIALIZING THE PARAMETERS:
batch_size = 4 if demo else 128 # Initializing Batchsize.
valid_ratio = 0.1 # Initialization.
reorg_cifar10_data(data_dir, valid_ratio) # Obtaining and Organizing the Dataset.
```
### **IMAGE AUGMENTATION:**
- I will use image augmentation to cope with overfitting. The images are flipped at random and normalized.
```
#@ IMPLEMENTATION OF IMAGE AUGMENTATION: TRAINING DATASET:
transform_train = torchvision.transforms.Compose([ # Initialization.
torchvision.transforms.Resize(40), # Resizing both Height and Width.
torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
ratio=(1.0, 1.0)), # Cropping and Resizing.
torchvision.transforms.RandomHorizontalFlip(), # Randomly Flipping Image.
torchvision.transforms.ToTensor(), # Converting into Tensors.
torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels.
#@ IMPLEMENTATION OF IMAGE AUGMENTATION: TEST DATASET:
transform_test = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(), # Converting into Tensors.
torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels.
```
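The `Normalize` step applies `(x - mean) / std` channel-wise to tensors that `ToTensor` has already scaled to [0, 1]. In isolation (plain Python, with the channel statistics from the transform above):

```python
# Per-channel CIFAR-10 statistics used in the transforms above
mean = [0.4914, 0.4822, 0.4465]
std = [0.2023, 0.1994, 0.2010]

# A single mid-grey pixel, one value per RGB channel
pixel = [0.5, 0.5, 0.5]
normalized = [(p, (p - m) / s) for p, (m, s) in zip(pixel, zip(mean, std))]
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]
print(normalized)
```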
### **READING THE DATASET:**
- I will create `ImageFolder` dataset instances to read the organized dataset of raw image files, where each example includes the image and its label.
```
#@ READING THE DATASET:
train_ds, train_valid_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, "train_valid_test", folder),
transform = transform_train) for folder in ["train", "train_valid"]] # Initializing Training Dataset.
#@ READING THE DATASET:
valid_ds, test_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, "train_valid_test", folder),
transform = transform_test) for folder in ["valid", "test"]] # Initializing Test Dataset.
#@ IMPLEMENTATION OF DATALOADER:
train_iter, train_valid_iter = [torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, drop_last=True) for dataset in (train_ds,
train_valid_ds)] # Implementation of DataLoader.
valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=True, drop_last=True) # Implementation of DataLoader.
test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=True, drop_last=False) # Implementation of DataLoader.
```
### **DEFINING THE MODEL:**
- I will define a ResNet-18 model and perform Xavier random initialization on it before training begins.
```
#@ DEFINING THE MODEL:
def get_net(): # Function for Initializing the Model.
num_classes = 10 # Number of Classes.
net = d2l.resnet18(num_classes, 3) # Initializing the RESNET Model.
return net
#@ DEFINING THE LOSS FUNCTION:
loss = nn.CrossEntropyLoss(reduction="none") # Initializing Cross Entropy Loss Function.
```
**DEFINING TRAINING FUNCTION:**
- I will define the model-training function `train` here and record the training time of each epoch, which helps when comparing the costs of different models.
```
#@ DEFINING TRAINING FUNCTIONS:
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices,
lr_period, lr_decay): # Defining Training Function.
trainer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,
weight_decay=wd) # Initializing the SGD Optimizer.
scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay) # Initializing Learning Rate Scheduler.
num_batches, timer = len(train_iter), d2l.Timer() # Initializing the Parameters.
animator = d2l.Animator(xlabel="epoch", xlim=[1, num_epochs],
legend=["train loss", "train acc", "valid acc"]) # Initializing the Animation.
net = nn.DataParallel(net, device_ids=devices).to(devices[0]) # Implementation of Parallelism on Model.
for epoch in range(num_epochs):
net.train() # Initializing the Training Mode.
metric = d2l.Accumulator(3) # Initializing the Accumulator.
for i, (features, labels) in enumerate(train_iter):
timer.start() # Starting the Timer.
l, acc = d2l.train_batch_ch13(net, features, labels, loss, trainer,
devices) # Initializing the Training.
metric.add(l, acc, labels.shape[0]) # Accumulating the Metrics.
timer.stop() # Stopping the Timer.
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (
metric[0] / metric[2], metric[1] / metric[2], None)) # Implementation of Animation.
if valid_iter is not None:
valid_acc = d2l.evaluate_accuracy_gpu(net, valid_iter) # Evaluating Validation Accuracy.
animator.add(epoch + 1, (None, None, valid_acc)) # Implementation of Animation.
scheduler.step() # Optimization of the Model.
if valid_iter is not None:
print(f"Loss {metric[0] / metric[2]:.3f}, " # Inspecting Loss.
f"Train acc {metric[1] / metric[2]:.3f}, " # Inspecting Training Accuracy.
f"Valid acc {valid_acc:.3f}") # Inspecting Validation Accuracy.
else:
print(f"Loss {metric[0] / metric[2]:.3f}, " # Inspecting Loss.
f"Train acc {metric[1] / metric[2]:.3f}") # Inspecting Training Accuracy.
print(f"{metric[2]*num_epochs / timer.sum():.1f} examples/sec "
f"on {str(devices)}") # Inspecting Time Taken.
```
### **TRAINING AND VALIDATING THE MODEL:**
- I will train and validate the model here.
```
#@ TRAINING AND VALIDATING THE MODEL:
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 5, 0.1, 5e-4 # Initializing the Parameters.
lr_period, lr_decay, net = 50, 0.1, get_net() # Initializing the Neural Network Model.
train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay) # Training the Model.
```
**CLASSIFYING THE TESTING SET:**
```
#@ CLASSIFYING THE TESTING SET:
net, preds = get_net(), [] # Initializing the Parameters.
train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
lr_decay) # Training the Model.
for X, _ in test_iter:
y_hat = net(X.to(devices[0]))
preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
df = pd.DataFrame({"id": sorted_ids, "label": preds})
df["label"] = df["label"].apply(lambda x: train_valid_ds.classes[x])
df.to_csv("result.csv", index=False)
```
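The `sort(key=lambda x: str(x))` call above matters: `ImageFolder` reads the test images in lexicographic filename order, so the integer ids are sorted as strings to make the predictions line up with the filenames. A quick illustration of that ordering:

```python
ids = list(range(1, 13))
ids.sort(key=str)  # lexicographic order, matching sorted filenames "1.png", "10.png", ...
print(ids)  # [1, 10, 11, 12, 2, 3, 4, 5, 6, 7, 8, 9]
```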
| github_jupyter |
# How to create your own `Backend` using `pytket`
In this tutorial, we will focus on:<br>
- the components of the abstract `Backend` class;<br>
- adaptations for statevector simulation versus measurement sampling.
To run this example, you will only need the core `pytket` package.<br>
<br>
The `pytket` framework currently has direct integration with the largest variety of devices and simulators of any quantum software platform, but this is certainly not a complete collection. New quantum backends are frequently being rolled out as more device manufacturers bring their machines online, and advances in simulation research give rise to many purpose-built simulators for fast execution of specific circuit fragments.<br>
<br>
If you have something that can take circuits (i.e. a sequence of gates) and run/simulate them, adding integration with `pytket` connects it to a great number of users and enables existing software solutions to immediately take advantage of your new backend. This reach is further extended beyond just software written with `pytket` by exploiting its integration with the rest of the quantum software ecosystem, such as via the `TketBackend` wrapper to use the new backend within Qiskit projects.<br>
<br>
This notebook will take a toy simulator and demonstrate how to write each component of the `Backend` class to make it work with the rest of `pytket`. We'll start by defining the internal representation of a circuit that our simulator will use. Rather than common gates, this example will use exponentiated Pauli tensors ($e^{-i \theta P}$ for $P \in \{I, X, Y, Z\}^n$) as its basic operation, which are universal for unitary circuits. To keep it simple, we will ignore measurements for now and just consider unitaries.
```
from pytket import Qubit
from pytket.pauli import QubitPauliString
from typing import List
class MyCircuit:
"""A minimal representation of a unitary circuit"""
def __init__(self, qubits: List[Qubit]):
"""Creates a circuit over some set of qubits
:param qubits: The list of qubits in the circuit
:type qubits: List[Qubit]
"""
self.qubits = sorted(qubits, reverse=True)
self.gates = list()
def add_gate(self, qps: QubitPauliString, angle: float):
"""Adds a gate to the end of the circuit e^{-0.5i * qps * angle}
:param qps: Pauli string to rotate around
:type qps: QubitPauliString
:param angle: Angle of rotation in radians
:type angle: float
"""
self.gates.append((qps, angle))
```
To simulate these, it is enough to generate the matrix of these exponentials and apply them in sequence to our initial state. Calculating these matrix exponentials is easy since we can exploit the following property: if an operator $A$ satisfies $A^2 = I$, then $e^{i\theta A} = \mathrm{cos}(\theta)I + i \mathrm{sin}(\theta) A$. This works for any tensor of Pauli matrices. Furthermore, since each Pauli matrix is some combination of a diagonal matrix and a permutation matrix, they benefit greatly from a sparse matrix representation, which we can obtain from the `QubitPauliString`.
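A quick numerical check of this identity for a single Pauli $X$, computing the exponential independently by diagonalisation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.7

# Independent computation: X = V diag(evals) V^dag, so
# e^{i*theta*X} = V diag(e^{i*theta*evals}) V^dag
evals, V = np.linalg.eigh(X)
lhs = V @ np.diag(np.exp(1j * theta * evals)) @ V.conj().T

# The identity: since X^2 = I, e^{i*theta*X} = cos(theta) I + i sin(theta) X
rhs = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * X

assert np.allclose(lhs, rhs)
```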
```
import numpy as np
class MySimulator:
"""A minimal statevector simulator for unitary circuits"""
def __init__(self, qubits: List[Qubit]):
"""Initialise a statevector, setting all qubits to the |0โญ state.
We treat qubits[0] as the most-significant qubit
:param qubits: The list of qubits in the circuit
:type qubits: List[Qubit]
"""
self._qubits = qubits
self._qstate = np.zeros((2 ** len(qubits),), dtype=complex)
self._qstate[0] = 1.0
def apply_Pauli_rot(self, qps: QubitPauliString, angle: float):
"""Applies e^{-0.5i * qps * angle} to the state
:param qps: Pauli to rotate around
:type qps: QubitPauliString
:param angle: Angle of rotation in radians
:type angle: float
"""
pauli_tensor = qps.to_sparse_matrix(self._qubits)
exponent = -0.5 * angle
self._qstate = np.cos(exponent) * self._qstate + 1j * np.sin(
exponent
) * pauli_tensor.dot(self._qstate)
def run_mycircuit(circ: MyCircuit) -> np.ndarray:
"""Gives the state after applying the circuit to the all-|0โญ state
:param circ: The circuit to simulate
:type circ: MyCircuit
:return: The final statevector
:rtype: np.ndarray
"""
sim = MySimulator(circ.qubits)
for qps, angle in circ.gates:
sim.apply_Pauli_rot(qps, angle)
return sim._qstate
```
And that's all we need for a basic simulator! We can check that this works by trying to generate a Bell state (up to global phase).
```
from pytket.pauli import Pauli
q = [Qubit(0), Qubit(1)]
circ = MyCircuit(q)
# Hadamard on Qubit(0)
circ.add_gate(QubitPauliString(Qubit(0), Pauli.Z), np.pi / 2)
circ.add_gate(QubitPauliString(Qubit(0), Pauli.X), np.pi / 2)
circ.add_gate(QubitPauliString(Qubit(0), Pauli.Z), np.pi / 2)
# CX with control Qubit(0) and target Qubit(1)
circ.add_gate(QubitPauliString(Qubit(0), Pauli.Z), -np.pi / 2)
circ.add_gate(QubitPauliString(Qubit(1), Pauli.X), -np.pi / 2)
circ.add_gate(QubitPauliString({Qubit(0): Pauli.Z, Qubit(1): Pauli.X}), np.pi / 2)
print(run_mycircuit(circ))
```
A useful first step to integrating this is to define a conversion from the `pytket.Circuit` class to the `MyCircuit` class. In most cases, this will just amount to converting one gate at a time by a simple syntax map. We need not specify how to convert every possible `OpType`, since we can rely on the compilation passes in `pytket` to map the circuit into the required gate set as long as it is universal. For this example, the definitions of `OpType.Rx`, `OpType.Ry`, `OpType.Rz`, and `OpType.ZZMax` all match the form of a single Pauli exponential.
```
from pytket import Circuit, OpType
def tk_to_mycircuit(tkc: Circuit) -> MyCircuit:
"""Convert a pytket Circuit to a MyCircuit object.
Supports Rz, Rx, Ry, and ZZMax gates.
:param tkc: The Circuit to convert
:type tkc: Circuit
:return: An equivalent MyCircuit object
:rtype: MyCircuit
"""
circ = MyCircuit(tkc.qubits)
for command in tkc:
optype = command.op.type
if optype == OpType.Rx:
circ.add_gate(
QubitPauliString(command.args[0], Pauli.X), np.pi * command.op.params[0]
)
elif optype == OpType.Ry:
circ.add_gate(
QubitPauliString(command.args[0], Pauli.Y), np.pi * command.op.params[0]
)
elif optype == OpType.Rz:
circ.add_gate(
QubitPauliString(command.args[0], Pauli.Z), np.pi * command.op.params[0]
)
elif optype == OpType.ZZMax:
circ.add_gate(
QubitPauliString(command.args, [Pauli.Z, Pauli.Z]), np.pi * 0.5
)
else:
raise ValueError("Cannot convert optype to MyCircuit: ", optype)
return circ
```
Now we turn to the `Backend` class. This provides a uniform API to submit `Circuit` objects for evaluation, typically returning either a statevector or a set of measurement shots. It also captures all of the information needed for compilation and asynchronous job management.<br>
<br>
We will make a subclass of `Backend` for our statevector simulator. The `_supports_state` flag lets the methods of the abstract `Backend` class know that this implementation supports statevector simulation. We also set `_persistent_handles` to `False` since this `Backend` will not be able to retrieve results from a previous Python session.<br>
<br>
Since we do not need to connect to a remote process for the simulator, the constructor doesn't need to set anything up. The base `Backend` constructor will initialise the `_cache` field for storing job data.
```
from pytket.backends import Backend
class MyBackend(Backend):
"""A pytket Backend wrapping around the MySimulator statevector simulator"""
_supports_state = True
_persistent_handles = False
def __init__(self):
"""Create a new instance of the MyBackend class"""
super().__init__()
```
Most `Backend`s will only support a small fragment of the `Circuit` language, either through implementation limitations or since a specific presentation is universal. It is helpful to keep this information in the `Backend` object itself so that users can clearly see how a `Circuit` needs to look before it can be successfully run. The `Predicate` classes in `pytket` can capture many common restrictions. The idea behind the `required_predicates` list is that any `Circuit` satisfying every `Predicate` in the list can be run on the `Backend` successfully as it is.<br>
<br>
However, a typical high-level user will not be writing `Circuit`s that satisfy all of the `required_predicates`, preferring instead to use the model that is most natural for the algorithm they are implementing. Providing a `default_compilation_pass` gives users an easy starting point for compiling an arbitrary `Circuit` into a form that can be executed (when not blocked by paradigm constraints like `NoMidMeasurePredicate` or `NoClassicalControlPredicate` that cannot easily be solved by compilation).<br>
<br>
You can provide several options using the `optimisation_level` argument. We tend to use `0` for very basic compilation with no optimisation applied, `1` for the inclusion of fast optimisations (e.g. `SynthesiseIBM` is a pre-defined sequence of optimisation passes that scales well with circuit size), and `2` for heavier optimisation (e.g. `FullPeepholeOptimise` incorporates `SynthesiseTket` alongside some extra passes that may take longer for large circuits).<br>
<br>
When designing these compilation pass sequences for a given `Backend`, it can be a good idea to start with the passes that solve individual constraints from `required_predicates` (like `FullMappingPass` for `ConnectivityPredicate` or `RebaseX` for `GateSetPredicate`), and find an ordering such that no later pass invalidates the work of an earlier one.<br>
<br>
For `MyBackend`, we will need to enforce that our circuits are expressed entirely in terms of `OpType.Rx`, `OpType.Ry`, `OpType.Rz`, and `OpType.ZZMax` gates which we can solve using `RebaseCustom`. Note that we omit `OpType.Measure` since we can only run pure quantum circuits.<br>
<br>
The standard docstrings for these and other abstract methods can be seen in the abstract `Backend` [API reference](https://cqcl.github.io/tket/pytket/api/backends.html#pytket.backends.Backend).
```
from pytket.predicates import Predicate, GateSetPredicate, NoClassicalBitsPredicate
from pytket.passes import (
BasePass,
SequencePass,
DecomposeBoxes,
SynthesiseTket,
FullPeepholeOptimise,
RebaseCustom,
SquashCustom,
)
@property
def required_predicates(self) -> List[Predicate]:
"""
The minimum set of predicates that a circuit must satisfy before it can
be successfully run on this backend.
:return: Required predicates.
:rtype: List[Predicate]
"""
preds = [
NoClassicalBitsPredicate(),
GateSetPredicate(
{
OpType.Rx,
OpType.Ry,
OpType.Rz,
OpType.ZZMax,
}
),
]
return preds
def default_compilation_pass(self, optimisation_level: int = 1) -> BasePass:
"""
A suggested compilation pass that will guarantee the resulting circuit
will be suitable to run on this backend with as few preconditions as
possible.
:param optimisation_level: The level of optimisation to perform during
compilation. Level 0 just solves the device constraints without
optimising. Level 1 additionally performs some light optimisations.
Level 2 adds more intensive optimisations that can increase compilation
time for large circuits. Defaults to 1.
:type optimisation_level: int, optional
:return: Compilation pass guaranteeing required predicates.
:rtype: BasePass
"""
assert optimisation_level in range(3)
cx_circ = Circuit(2)
cx_circ.Sdg(0)
cx_circ.V(1)
cx_circ.Sdg(1)
cx_circ.Vdg(1)
cx_circ.add_gate(OpType.ZZMax, [0, 1])
cx_circ.Vdg(1)
cx_circ.Sdg(1)
cx_circ.add_phase(0.5)
def sq(a, b, c):
circ = Circuit(1)
if c != 0:
circ.Rz(c, 0)
if b != 0:
circ.Rx(b, 0)
if a != 0:
circ.Rz(a, 0)
return circ
rebase = RebaseCustom(
{OpType.ZZMax}, cx_circ, {OpType.Rx, OpType.Ry, OpType.Rz}, sq
)
squash = SquashCustom({OpType.Rz, OpType.Rx, OpType.Ry}, sq)
seq = [DecomposeBoxes()] # Decompose boxes into basic gates
if optimisation_level == 1:
seq.append(SynthesiseTket()) # Optional fast optimisation
elif optimisation_level == 2:
seq.append(FullPeepholeOptimise()) # Optional heavy optimisation
seq.append(rebase) # Map to target gate set
if optimisation_level != 0:
seq.append(squash) # Optionally simplify 1qb gate chains within this gate set
return SequencePass(seq)
```
The `backend_info` property is used for storing various properties of a backend. By default it provides all device information useful for compilation. Typically we would make it return a class attribute `self._backend_info` that we initialise on construction, but we will define it at point of request here. We use `FullyConnected(4)`, which produces an `Architecture` object with couplings between all 4 qubits.
```
from pytket.backends.backendinfo import BackendInfo
from pytket.routing import FullyConnected
@property
def backend_info(self) -> BackendInfo:
return BackendInfo(
"MyBackend",
"MySimulator",
"1.0",
FullyConnected(4),
{
OpType.Rx,
OpType.Ry,
OpType.Rz,
OpType.ZZMax,
OpType.Measure,
},
supports_midcircuit_measurement=False,
misc={"characterisation": None},
)
```
The `characterisation` property functions as an additional information store for a `Backend`. This is intended to hold hardware-specific characterisation information such as gate fidelities. Typically these are held in the `backend_info` `misc` attribute, a bucket dictionary that takes strings as keys and can store any objects as values.
```
from typing import Dict, Any, Optional, cast
@property
def characterisation(self) -> Optional[Dict[str, Any]]:
char = self._backend_info.get_misc("characterisation")
return cast(Dict[str, Any], char) if char else None
```
Asynchronous job management is all managed through the `ResultHandle` associated with a particular `Circuit` that has been submitted. We can use it to inspect the status of the job to see if it has completed, or to look up the results if they are available.<br>
<br>
For devices, `circuit_status` should query the job to see if it is in a queue, currently being executed, completed successfully, etc. The `CircuitStatus` class is mostly driven by the `StatusEnum` values, but can also contain messages to give more detailed feedback if available. For our simulator, we are not running things asynchronously, so a `Circuit` has either not been run or it will have been completed.<br>
<br>
Since a device API will probably define its own data type for job handles, the `ResultHandle` definition is flexible enough to cover many possible data types so you can likely use the underlying job handle as the `ResultHandle`. The `_result_id_type` property specifies what data type a `ResultHandle` for this `Backend` should look like. Since our simulator has no underlying job handle, we can just use a UUID string.
```
from pytket.backends import ResultHandle, CircuitStatus, StatusEnum, CircuitNotRunError
from pytket.backends.resulthandle import _ResultIdTuple
@property
def _result_id_type(self) -> _ResultIdTuple:
"""Identifier type signature for ResultHandle for this backend.
:return: Type signature (tuple of hashable types)
:rtype: _ResultIdTuple
"""
return (str,)
def circuit_status(self, handle: ResultHandle) -> CircuitStatus:
"""
Return a CircuitStatus reporting the status of the circuit execution
corresponding to the ResultHandle
"""
if handle in self._cache:
return CircuitStatus(StatusEnum.COMPLETED)
raise CircuitNotRunError(handle)
```
And finally, we have the method that actually submits a job for execution. `process_circuits` should take a collection of (compiled) `Circuit` objects, process them, and return a `ResultHandle` for each `Circuit`. If execution is synchronous, this can simply wait until it is finished, store the result in `_cache`, and return. For backends that support asynchronous jobs, you will need to set up an event to format and store the result on completion.

It is recommended to use the `valid_check` parameter to control a call to `Backend._check_all_circuits()`, which will raise an exception if any of the circuits do not satisfy everything in `required_predicates`.

The `_cache` field stores all of the information about jobs that have been run. When a job has finished execution, the results are expected to be stored in `_cache[handle]["result"]`, though it can also be used to store other data about the job, such as information about the `Circuit` required to properly format the results. Methods like `Backend.get_result()` and `Backend.empty_cache()` expect to interact with the results of a given job in this way.

The final output of the execution is stored in a `BackendResult` object. This captures enough information about the results to reinterpret them in numerous ways, such as requesting the statevector in a specific qubit ordering or converting a complete shot table to a summary of the counts. If we create a `BackendResult` with quantum data (e.g. a statevector or unitary), we must provide the `Qubit` ids in order from most significant to least significant with regard to indexing the state. Similarly, when creating one with classical readouts (e.g. a shot table or counts summary), we give the `Bit` ids in the order they appear in a readout (left to right).
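As a rough illustration of the classical-readout convention (plain Python, not pytket's internal implementation), a shot table whose columns follow the left-to-right `Bit` order can be summarised into counts keyed by readout tuples, much as `get_counts()` summarises a complete shot table:

```python
from collections import Counter

import numpy as np

# Hypothetical shot table: each row is one shot, columns are bit readouts
# in left-to-right order, as in a BackendResult built from c_bits.
shot_table = np.array([
    [0, 0],
    [1, 1],
    [0, 0],
    [1, 1],
    [0, 0],
])

# Summarise the complete shot table into a counts dictionary.
counts = Counter(tuple(int(b) for b in row) for row in shot_table)
print(counts)  # Counter({(0, 0): 3, (1, 1): 2})
```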
For a statevector simulation, we should also take into account the global phase stored in the `Circuit` object and any implicit qubit permutations, since these become observable when inspecting the quantum state. We can handle the qubit permutation by changing the order in which we pass the `Qubit` ids into the `BackendResult` object.
```
from pytket.backends.backendresult import BackendResult
from pytket.utils.results import KwargTypes
from typing import Iterable
from uuid import uuid4
def process_circuits(
    self,
    circuits: Iterable[Circuit],
    n_shots: Optional[int] = None,
    valid_check: bool = True,
    **kwargs: KwargTypes,
) -> List[ResultHandle]:
    """
    Submit circuits to the backend for running. The results will be stored
    in the backend's result cache to be retrieved by the corresponding
    get_<data> method.

    Use keyword arguments to specify parameters to be used in submitting circuits.
    See the specific Backend derived class for available parameters, from the
    following list:

    * `seed`: RNG seed for simulators

    :param circuits: Circuits to process on the backend.
    :type circuits: Iterable[Circuit]
    :param n_shots: Number of shots to run per circuit. None is to be used
        for state/unitary simulators. Defaults to None.
    :type n_shots: Optional[int], optional
    :param valid_check: Explicitly check that all circuits satisfy all required
        predicates to run on the backend. Defaults to True.
    :type valid_check: bool, optional
    :return: Handles to results for each input circuit, as an iterable in
        the same order as the circuits.
    :rtype: List[ResultHandle]
    """
    circuit_list = list(circuits)
    if valid_check:
        self._check_all_circuits(circuit_list)
    handle_list = []
    for circuit in circuit_list:
        handle = ResultHandle(str(uuid4()))
        mycirc = tk_to_mycircuit(circuit)
        state = run_mycircuit(mycirc)
        state *= np.exp(1j * np.pi * circuit.phase)
        implicit_perm = circuit.implicit_qubit_permutation()
        res_qubits = [implicit_perm[qb] for qb in sorted(circuit.qubits, reverse=True)]
        res = BackendResult(q_bits=res_qubits, state=state)
        self._cache[handle] = {"result": res}
        handle_list.append(handle)
    return handle_list
```
Let's redefine our `MyBackend` class to use these methods to finish it off.
```
class MyBackend(Backend):
    """A pytket Backend wrapping around the MySimulator statevector simulator"""

    _supports_state = True
    _persistent_handles = False

    def __init__(self):
        """Create a new instance of the MyBackend class"""
        super().__init__()

    # Attach the previously defined properties and methods at class level.
    required_predicates = required_predicates
    default_compilation_pass = default_compilation_pass
    _result_id_type = _result_id_type
    circuit_status = circuit_status
    process_circuits = process_circuits
```
Our new `Backend` subclass is now complete, so let's test it out. If you are planning on maintaining a backend class, it is recommended to set up some unit tests. The following tests will cover basic operation and integration with `pytket` utilities.
```
from pytket.circuit import BasisOrder, Unitary1qBox
from pytket.passes import CliffordSimp
from pytket.utils import get_operator_expectation_value
from pytket.utils.operators import QubitPauliOperator
import pytest

def test_bell() -> None:
    c = Circuit(2)
    c.H(0)
    c.CX(0, 1)
    b = MyBackend()
    b.compile_circuit(c)
    h = b.process_circuit(c)
    assert np.allclose(
        b.get_result(h).get_state(), np.asarray([1, 0, 0, 1]) * 1 / np.sqrt(2)
    )

def test_basisorder() -> None:
    c = Circuit(2)
    c.X(1)
    b = MyBackend()
    b.compile_circuit(c)
    h = b.process_circuit(c)
    r = b.get_result(h)
    assert np.allclose(r.get_state(), np.asarray([0, 1, 0, 0]))
    assert np.allclose(r.get_state(basis=BasisOrder.dlo), np.asarray([0, 0, 1, 0]))

def test_implicit_perm() -> None:
    c = Circuit(2)
    c.CX(0, 1)
    c.CX(1, 0)
    c.Ry(0.1, 1)
    c1 = c.copy()
    CliffordSimp().apply(c1)
    b = MyBackend()
    b.compile_circuit(c)
    b.compile_circuit(c1)
    assert c.implicit_qubit_permutation() != c1.implicit_qubit_permutation()
    h, h1 = b.process_circuits([c, c1])
    r, r1 = b.get_results([h, h1])
    for bo in [BasisOrder.ilo, BasisOrder.dlo]:
        s = r.get_state(basis=bo)
        s1 = r1.get_state(basis=bo)
        assert np.allclose(s, s1)

def test_compilation_pass() -> None:
    b = MyBackend()
    for opt_level in range(3):
        c = Circuit(2)
        c.CX(0, 1)
        u = np.asarray([[0, 1], [-1j, 0]])
        c.add_unitary1qbox(Unitary1qBox(u), 1)
        c.CX(0, 1)
        c.add_gate(OpType.CRz, 0.35, [1, 0])
        assert not b.valid_circuit(c)
        b.compile_circuit(c, optimisation_level=opt_level)
        assert b.valid_circuit(c)

def test_invalid_measures() -> None:
    c = Circuit(2)
    c.H(0).CX(0, 1).measure_all()
    b = MyBackend()
    b.compile_circuit(c)
    assert not b.valid_circuit(c)

def test_expectation_value() -> None:
    c = Circuit(2)
    c.H(0)
    c.CX(0, 1)
    op = QubitPauliOperator(
        {
            QubitPauliString({Qubit(0): Pauli.Z, Qubit(1): Pauli.Z}): 1.0,
            QubitPauliString({Qubit(0): Pauli.X, Qubit(1): Pauli.X}): 0.3,
            QubitPauliString({Qubit(0): Pauli.Z, Qubit(1): Pauli.Y}): 0.8j,
            QubitPauliString({Qubit(0): Pauli.Y}): -0.4j,
        }
    )
    b = MyBackend()
    b.compile_circuit(c)
    assert get_operator_expectation_value(c, op, b) == pytest.approx(1.3)
```
Explicit calls are needed for this notebook; when run from the command line, pytest will automatically discover these `test_` functions:
```
test_bell()
test_basisorder()
test_implicit_perm()
test_compilation_pass()
test_invalid_measures()
test_expectation_value()
```
To show how this compares to a sampling simulator, let's extend our simulator to handle end-of-circuit measurements.
```
from typing import Set

def sample_mycircuit(
    circ: MyCircuit, qubits: Set[Qubit], n_shots: int, seed: Optional[int] = None
) -> np.ndarray:
    """Run the circuit on the all-|0⟩ state and measure a set of qubits

    :param circ: The circuit to simulate
    :type circ: MyCircuit
    :param qubits: The set of qubits to measure
    :type qubits: Set[Qubit]
    :param n_shots: The number of samples to take
    :type n_shots: int
    :param seed: Seed for the random number generator, defaults to no seed
    :type seed: Optional[int], optional
    :return: Table of shots; each row is a shot, columns are qubit readouts in
        ascending Qubit order
    :rtype: np.ndarray
    """
    state = run_mycircuit(circ)
    cumulative_probs = (state * state.conjugate()).cumsum()
    if seed is not None:
        np.random.seed(seed)
    shots = np.zeros((n_shots, len(circ.qubits)))
    for s in range(n_shots):
        # Pick a random point in the distribution
        point = np.random.uniform(0.0, 1.0)
        # Find the corresponding readout
        index = np.searchsorted(cumulative_probs, point)
        # Convert the index to a binary array:
        # `bin` maps e.g. index 6 to '0b110', so we drop the first two symbols
        # and add leading 0s to make it a fixed length
        bitstring = bin(index)[2:].zfill(len(circ.qubits))
        shots[s] = np.asarray([int(b) for b in bitstring])
    filtered = np.zeros((n_shots, len(qubits)))
    target = 0
    for col, q in enumerate(circ.qubits):
        if q in qubits:
            filtered[:, target] = shots[:, col]
            target += 1
    return filtered
```
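The inverse-transform sampling inside `sample_mycircuit` can be checked in isolation. The snippet below uses a hand-built two-qubit Bell-like state rather than a `MyCircuit`: it draws readouts from the cumulative probabilities with `np.searchsorted` and converts each sampled index to a bit array with `bin`/`zfill`, exactly as the function above does:

```python
import numpy as np

n_qubits = 2
# Bell-like distribution: all probability weight on |00> and |11>.
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
cumulative_probs = (state * state.conjugate()).real.cumsum()

rng = np.random.default_rng(seed=0)
shots = np.zeros((100, n_qubits), dtype=int)
for s in range(100):
    point = rng.uniform(0.0, 1.0)
    index = np.searchsorted(cumulative_probs, point)
    # bin(3) == '0b11'; strip the '0b' prefix and left-pad with zeros.
    bitstring = bin(index)[2:].zfill(n_qubits)
    shots[s] = [int(b) for b in bitstring]

# Only the readouts 00 and 11 should ever occur for this state.
assert set(map(tuple, shots.tolist())) <= {(0, 0), (1, 1)}
```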
Since `MyCircuit` doesn't have a representation for measurement gates, our converter must return both the `MyCircuit` object and some way of capturing the measurements. Since we will also want to know how they map into our `Bit` ids, the simplest option is just a dictionary from `Qubit` to `Bit`.
```
from pytket import Bit
from typing import Tuple

def tk_to_mymeasures(tkc: Circuit) -> Tuple[MyCircuit, Dict[Qubit, Bit]]:
    """Convert a pytket Circuit to a MyCircuit object and a measurement map.
    Supports Rz, Rx, Ry, and ZZMax gates, as well as end-of-circuit measurements.

    :param tkc: The Circuit to convert
    :type tkc: Circuit
    :return: An equivalent MyCircuit object and a map from measured Qubit to the
        Bit containing the result
    :rtype: Tuple[MyCircuit, Dict[Qubit, Bit]]
    """
    circ = MyCircuit(tkc.qubits)
    measure_map = dict()
    # Track measured Qubits/used Bits to identify mid-circuit measurement
    measured_units = set()
    for command in tkc:
        for u in command.args:
            if u in measured_units:
                raise ValueError("Circuit contains a mid-circuit measurement")
        optype = command.op.type
        if optype == OpType.Rx:
            circ.add_gate(
                QubitPauliString(command.args[0], Pauli.X), np.pi * command.op.params[0]
            )
        elif optype == OpType.Ry:
            circ.add_gate(
                QubitPauliString(command.args[0], Pauli.Y), np.pi * command.op.params[0]
            )
        elif optype == OpType.Rz:
            circ.add_gate(
                QubitPauliString(command.args[0], Pauli.Z), np.pi * command.op.params[0]
            )
        elif optype == OpType.ZZMax:
            circ.add_gate(
                QubitPauliString(command.args, [Pauli.Z, Pauli.Z]), np.pi * 0.5
            )
        elif optype == OpType.Measure:
            measure_map[command.args[0]] = command.args[1]
            measured_units.add(command.args[0])
            measured_units.add(command.args[1])
        else:
            raise ValueError("Cannot convert optype to MyCircuit: ", optype)
    return circ, measure_map
```
To build a `Backend` subclass for this sampling simulator, we only need to change how we write `required_predicates` and `process_circuits`.
```
from pytket.predicates import NoMidMeasurePredicate, NoClassicalControlPredicate
from pytket.utils.outcomearray import OutcomeArray

class MySampler(Backend):
    """A pytket Backend wrapping around the MySimulator simulator with readout sampling"""

    _supports_shots = True
    _supports_counts = True
    _persistent_handles = False

    def __init__(self):
        """Create a new instance of the MySampler class"""
        super().__init__()

    # Reuse the previously defined property and methods at class level.
    default_compilation_pass = default_compilation_pass
    _result_id_type = _result_id_type
    circuit_status = circuit_status

    @property
    def required_predicates(self) -> List[Predicate]:
        """
        The minimum set of predicates that a circuit must satisfy before it can
        be successfully run on this backend.

        :return: Required predicates.
        :rtype: List[Predicate]
        """
        preds = [
            NoClassicalControlPredicate(),
            NoMidMeasurePredicate(),
            GateSetPredicate(
                {
                    OpType.Rx,
                    OpType.Ry,
                    OpType.Rz,
                    OpType.ZZMax,
                    OpType.Measure,
                }
            ),
        ]
        return preds

    def process_circuits(
        self,
        circuits: Iterable[Circuit],
        n_shots: Optional[int] = None,
        valid_check: bool = True,
        **kwargs: KwargTypes,
    ) -> List[ResultHandle]:
        """
        Submit circuits to the backend for running. The results will be stored
        in the backend's result cache to be retrieved by the corresponding
        get_<data> method.

        Use keyword arguments to specify parameters to be used in submitting circuits.
        See the specific Backend derived class for available parameters, from the
        following list:

        * `seed`: RNG seed for simulators

        :param circuits: Circuits to process on the backend.
        :type circuits: Iterable[Circuit]
        :param n_shots: Number of shots to run per circuit. None is to be used
            for state/unitary simulators. Defaults to None.
        :type n_shots: Optional[int], optional
        :param valid_check: Explicitly check that all circuits satisfy all required
            predicates to run on the backend. Defaults to True.
        :type valid_check: bool, optional
        :return: Handles to results for each input circuit, as an iterable in
            the same order as the circuits.
        :rtype: List[ResultHandle]
        """
        circuit_list = list(circuits)
        if valid_check:
            self._check_all_circuits(circuit_list)
        handle_list = []
        for circuit in circuit_list:
            handle = ResultHandle(str(uuid4()))
            mycirc, measure_map = tk_to_mymeasures(circuit)
            qubit_list, bit_list = zip(*measure_map.items())
            qubit_shots = sample_mycircuit(
                mycirc, set(qubit_list), n_shots, kwargs.get("seed")
            )
            # Pad the shot table with 0 columns for unused bits
            all_shots = np.zeros((n_shots, len(circuit.bits)), dtype=int)
            all_shots[:, : len(qubit_list)] = qubit_shots
            res_bits = [measure_map[q] for q in sorted(qubit_list, reverse=True)]
            for b in circuit.bits:
                if b not in bit_list:
                    res_bits.append(b)
            res = BackendResult(
                c_bits=res_bits, shots=OutcomeArray.from_readouts(all_shots)
            )
            self._cache[handle] = {"result": res}
            handle_list.append(handle)
        return handle_list
```
Likewise, we run some basic tests to make sure it works.
```
def test_sampler_bell() -> None:
    c = Circuit(2, 2)
    c.H(0)
    c.CX(0, 1)
    c.measure_all()
    b = MySampler()
    b.compile_circuit(c)
    res = b.run_circuit(c, n_shots=10, seed=3)
    assert res.get_shots().shape == (10, 2)
    assert res.get_counts() == {(0, 0): 5, (1, 1): 5}

def test_sampler_basisorder() -> None:
    c = Circuit(2, 2)
    c.X(1)
    c.measure_all()
    b = MySampler()
    b.compile_circuit(c)
    res = b.run_circuit(c, n_shots=10, seed=0)
    assert res.get_counts() == {(0, 1): 10}
    assert res.get_counts(basis=BasisOrder.dlo) == {(1, 0): 10}

def test_sampler_compilation_pass() -> None:
    b = MySampler()
    for opt_level in range(3):
        c = Circuit(2)
        c.CX(0, 1)
        u = np.asarray([[0, 1], [-1j, 0]])
        c.add_unitary1qbox(Unitary1qBox(u), 1)
        c.CX(0, 1)
        c.add_gate(OpType.CRz, 0.35, [1, 0])
        c.measure_all()
        assert not b.valid_circuit(c)
        b.compile_circuit(c, optimisation_level=opt_level)
        assert b.valid_circuit(c)

def test_sampler_invalid_conditions() -> None:
    c = Circuit(2, 2)
    c.H(0)
    c.CX(0, 1, condition_bits=[0, 1], condition_value=3)
    c.measure_all()
    b = MySampler()
    b.compile_circuit(c)
    assert not b.valid_circuit(c)

def test_sampler_expectation_value() -> None:
    c = Circuit(2)
    c.H(0)
    c.CX(0, 1)
    op = QubitPauliOperator(
        {
            QubitPauliString({Qubit(0): Pauli.Z, Qubit(1): Pauli.Z}): 1.0,
            QubitPauliString({Qubit(0): Pauli.X, Qubit(1): Pauli.X}): 0.3,
            QubitPauliString({Qubit(0): Pauli.Z, Qubit(1): Pauli.Y}): 0.8j,
            QubitPauliString({Qubit(0): Pauli.Y}): -0.4j,
        }
    )
    b = MySampler()
    b.compile_circuit(c)
    expectation = get_operator_expectation_value(c, op, b, n_shots=2000, seed=0)
    assert (np.real(expectation), np.imag(expectation)) == pytest.approx(
        (1.3, 0.0), abs=0.1
    )

test_sampler_bell()
test_sampler_basisorder()
test_sampler_compilation_pass()
test_sampler_invalid_conditions()
test_sampler_expectation_value()
```
Exercises:

- Add some extra gate definitions to the simulator and expand the accepted gate set of the backends. Start with some that are easily represented as exponentiated Pauli tensors like `OpType.YYPhase`. For a challenge, try adding `OpType.CCX` efficiently (it is possible to encode it using seven Pauli rotations).
- Restrict the simulator to a limited qubit connectivity. Express this in the backends by modifying the `Architecture` property of the `BackendInfo` attribute object and adding to the `required_predicates`. Adjust the `default_compilation_pass` to solve for the connectivity.
- The file `creating_backends_exercise.py` extends the simulators above to allow for mid-circuit measurement and conditional gates using a binary decision tree. Implement an appropriate converter and `Backend` class for this simulator.
# Problem Set 5, Spring 2020, Villas-Boas
Due <u>Friday, May 1 at 11:59pm</u>
Submit materials (Jupyter notebook with all code cells run and output visible) as one pdf on [Gradescope](https://www.gradescope.com/courses/85265).
## Preamble
#### Use the below code cell to load all your packages (we will use **haven** and **tidyverse**) and the dataset.
Note: the packages **xtable** and **stargazer** can help you with putting together the custom summary statistics and regression tables - see [Coding Bootcamp Part 5]() for more information on using these packages.
# Exercise 1.
#### In this exercise you will use US patent data from the last century to investigate whether inventors living close to other creative individuals produce more knowledge spillovers than inventors living in greater isolation.
#### For this problem set you will use the Stata file `pset5.dta`. Note that several of the problems require you to produce custom summary statistics and regression tables. For more information on how to produce these types of tables, see the [Coding Bootcamp Part 5 notebook posted on Datahub]().
#### The dependent variable of interest is the occurrence of patent interferences (which occur when two inventors file very similar patents). Let the variable `inter_pair` = 0 or 1, where 1 means there was a similar patent filed in a certain geographic area in the US. You will use a linear probability model to compare the probability of interference between patent pairs above and below different co-location thresholds of inventors within 10, 50, and 100 miles. You will also estimate a logit specification, interpret marginal effects, and perform hypothesis testing.
|Variable Name | Description |
| :----------- | :----------------------- |
| $inter\_pair$ | =0 if no patent interference, =1 if interference |
| $cites\_shared$ | number of shared citations in patent|
| $num\_cl\_subcl\_shared$ | number of subclasses shared by the pair of inventors |
| $match1m$ | =1 if co-located with places of residence within 1 mile |
| $match50m$ | =1 if co-located with places of residence within 50 miles |
| $match100m$ | =1 if co-located with places of residence within 100 miles |
# 1.
#### What are the number of observations in the data? How many observations do we have with a pair patent interference and how many without a pair patent interference?
```
# Add your code here.
```
Add any written answer here.
# 2.
#### Please rename the variable `match1m` as `Treatment1`. Rename the variable that indicates the fact that both individuals live within 50 miles to `Treatment50`. Summarize the averages and standard deviations of the percent occurrence of patent interferences and also of the number of shared citations by Treatment status, for `Treatment1` and `Treatment50`. Do so by creating a table, Table 1, with four columns, where the first two columns are for `Treatment1` = 0 and `Treatment1` = 1, and the next two columns for `Treatment50` = 0 and `Treatment50` = 1. In rows 1 and 2 of summary stats, please provide the average and below it the standard deviation of `inter_pair`, then in rows 3 and 4 the average and std dev of the number of shared citations, and lastly in rows 5 and 6 the mean and std dev of the number of class and subclass shared.
```
#Add your code here
```
If needed, add any html code for your table here.
# 3.
#### Estimate four linear probability model regressions and present the estimates in a four-column table, Table 2. Make sure you use robust standard errors always in all regressions. Let the dependent variable of all columns be the indicator of having a pair patent interference. For the regression in column 1, specify a constant (i.e. intercept) and `Treatment1` as regressors. In column 2, add to the constant and `Treatment1` the number of shared citations and then the number of subclasses shared. In column 3 present the estimates and standard errors from the regression of the `inter_pair` indicator on a constant and `Treatment50`, and in column 4 the estimates and standard errors from the regression on a constant, `Treatment50`, the number of shared citations, and then the number of subclasses shared. Produce the table also by denoting with a star * the coefficients that are significant at the 10% level, two stars ** those significant at the 5% level, and three stars *** those significant at the 1 percent level. (Like in Lecture 19, I am asking you to run separate regressions and present the estimates in a table with 4 columns. See Coding Bootcamp Part 5 for help producing these tables.)
```
#Add your code here
```
If needed, add any html code for your table here.
## a.
#### Which coefficient measures the estimated change in patent interference in areas with inventors living within one mile from each other (in column 1)? Is it statistically significant at the 5 percent level?
Type your answer here.
## b.
#### What does the estimated constant mean? Is it significantly different from zero?
Type your answer here.
## c.
#### Looking at the whole table, which coefficient measures the estimated change in patent interferences when the pair of individuals lives within 50 miles from each other and no other controls are considered? What is its value?
Type your answer here.
# 4.
#### What are the conditions needed so that we can interpret the coefficient of the Treatment variables as the causal impacts of living close by on patent interferences? What would be a simple set of tests you could run to support this? Do not run these tests; explain only what data you would use and collect and what tests you would run.
Type your answer here.
# 5.
#### In columns (2) and (4) we added covariates to the regression in columns (1) and (3). Does adding the covariates affect the estimated coefficient of theTreatment1? How about on Treatment50? How do you interpret the point estimate of Treatment50 now in one sentence (also using Size, Sign and Significance)?
Type your answer here.
# 6.
#### Looking at the change from column (3) to Column (4) in the Treatment50 estimate, can you explain what that implies in terms of the joint correlation of the Treatment50 and the pair working in the same fields (measured jointly by the effects of shared citations and number of subclasses shared)?
Type your answer here.
# 7.
#### What are potential problems with estimating the linear probability model?
Type your answer here.
# 8.
#### Run the linear probability model in column (4) of Table 2 without robust standard errors, what happened to the estimates and to the significance of the estimates? Which problem in 7 does this highlight?
```
#add your code here
```
Type your answer here.
# 9.
#### In a sentence or two, describe how the Logit model addresses the problems with the linear probability model you mentioned in Q7. Estimate the same right-hand side specification as in column 4 of Table 2 above but now using a Logit model. After you estimate the model, type the marginal effects command in R as discussed in Lecture. What do you conclude in terms of the effect of Treatment50 on the patent interference occurrence?
```
#add your code here
```
Type your answer here.
# 10.
#### Estimate a Logit model that, together with the estimation output from 9, allows you to test whether the additional covariates added in column 4 of Table 2 relative to column 3 matter for patent interference or not. What do you conclude? Do the five steps of Hypothesis testing by hand; do not use R's built-in test command.
```
#add your code here.
```
Type your answer here.
# 11.
#### Create the necessary variables and then estimate a Logit model that allows you to test whether the impact of the Treatment is different depending on the number of shared citations. What do you conclude? Do the five steps of Hypothesis testing by hand; do not use R's built-in test command.
```
#Add your code here
```
Type your answer here.
# Exercise 2.
### Does regulating gasoline emissions, by requiring more reformulation in order to improve air quality, cause increases in gasoline prices in the USA? Cities (j) were regulated with stricter gasoline content standards if their measured pollution in year T, back in the 1990s, was above X parts per million (ppm).
### Cities with pollution measurements in year T that exceeded the pollution threshold X were regulated with stricter environmental laws to reformulate gasoline to have less emissions. Cities below the threshold were not bound by this regulation. You have data over time t and for a random sample of cities j in the USA.
# a)
#### How would you estimate the causal effect of regulation on gasoline prices? Write down the exact regression and define each variable. Also say which coefficient is interpreted as the causal effect of the regulation.
Type your answer here.
# b)
#### What assumption is key for you to interpret the coefficient as a causal effect of regulation aimed at cleaning air quality on the prices consumers pay for gasoline?
Type your answer here.
# Exercise 3
\begin{align*}
\text{Predicted Outcome}_{jt} &= 50 - 30\, After_{jt} + 25\, Treated_{jt} - 20\, After_{jt} \times Treated_{jt}
\end{align*}
Using the above estimated equation, please fill out the table.
**You can type your answers directly into the table below. Note that the `|` define the columns.**
|Average Outcome | Control <br> (Treated=0) | Treated <br> (Treated=1) | Difference T-C in each row |
| :---------------- | :--------------------: |:--------------------:|---------------------------: |
| Before (After=0) | | | |
| After (After=1) | | | |
|Difference After-Before in each Column| | | D in D = |
## 3 EKF SLAM
In the previous section we saw the extended Kalman filter applied to localization; the EKF can be applied to the SLAM problem in the same way. In the localization problem, the observation the robot receives is its own x-y position in 2D space. If instead the robot receives information about the surrounding environment, for example the range and bearing from the robot to a landmark at some time step, then from the current estimate of the robot's own pose we can infer that landmark's position in 2D space, and append the landmark's position, as a state to be corrected, to the overall state vector.

This gives rise to two subproblems: data association and state augmentation.

We will work through the following Python example to understand how EKF SLAM works:

- Black stars: true landmark positions (the `RFID` array in the code)
- Green ×: estimated landmark positions
- Black line: robot trajectory from dead reckoning
- Blue line: ground-truth trajectory
- Red line: robot trajectory estimated by EKF SLAM

Recall the extended Kalman filter equations from the previous section:
=== Predict ===
$x_{Pred} = F x_t + B u_t$
$P_{Pred} = J_F P_t J_F^T + Q$
=== Update ===
$z_{Pred} = H x_{Pred}$
$y = z - z_{Pred}$
$S = J_H P_{Pred} J_H^T + R$
$K = P_{Pred} J_H^T S^{-1}$
$x_{t+1} = x_{Pred} + K y$
$P_{t+1} = (I - K J_H) P_{Pred}$
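Before moving to the full SLAM state, these equations can be exercised on a toy one-dimensional linear system, where every matrix is 1×1 and each Jacobian equals the corresponding model matrix. This is a generic EKF step for illustration only, not the SLAM-specific code discussed below:

```python
import numpy as np

# 1-D constant-position system: F = H = 1, no control input.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])

x = np.array([[0.0]]); P = np.array([[1.0]])  # prior
z = np.array([[1.0]])                         # a single measurement

# === Predict ===
x_pred = F @ x
P_pred = F @ P @ F.T + Q  # F is its own Jacobian for a linear model

# === Update ===
y = z - H @ x_pred
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x = x_pred + K @ y
P = (np.eye(1) - K @ H) @ P_pred

# The posterior mean lies between prior (0) and measurement (1),
# and the posterior variance shrinks below the predicted variance.
print(x[0, 0], P[0, 0])
```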
EKF SLAM uses a single EKF to solve the SLAM problem. The EKF SLAM state vector contains the robot pose $(x, y, \theta)$ together with the coordinates of every observed landmark $[(m_{1x}, m_{1y}), (m_{2x}, m_{2y}), ... , (m_{nx}, m_{ny})]$, and the covariances between the landmarks and the robot pose are updated as well.

The corresponding covariance matrix:

which can be simplified to:

Note that because the state vector contains the landmark coordinates, the robot observes more and more landmarks as it moves, so the state vector and the state covariance matrix keep growing over time.
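A minimal sketch of this state-augmentation step (illustrative only; the `INIT_LM_VAR` value and the helper name are assumptions, not the example's actual code): when a landmark is observed for the first time, the state vector is extended with its x-y estimate and the covariance matrix is padded with a large initial uncertainty block:

```python
import numpy as np

INIT_LM_VAR = 1000.0  # assumed large initial landmark uncertainty

def augment_state(x_est, p_est, lm_xy):
    """Append a newly observed landmark (x, y) to the state and covariance."""
    n = len(x_est)
    x_aug = np.vstack((x_est, lm_xy.reshape(2, 1)))
    p_aug = np.zeros((n + 2, n + 2))
    p_aug[:n, :n] = p_est                    # keep the existing covariance
    p_aug[n:, n:] = INIT_LM_VAR * np.eye(2)  # uncertain new landmark block
    return x_aug, p_aug

x = np.zeros((3, 1))  # robot pose (x, y, theta)
P = np.eye(3)
x, P = augment_state(x, P, np.array([2.0, -1.0]))
print(x.shape, P.shape)  # (5, 1) (5, 5)
```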
### Motion model
**State prediction.** Within the state vector, only the robot pose $(x, y, \theta)$ changes as the robot moves, so in the state-prediction step the motion model modifies only these three entries of the state vector and the corresponding block of the covariance matrix. The motion model used in this example is given below; the control input vector consists of the robot's linear and angular velocity $(v, w)$.
$\begin{equation*}
F=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{equation*}$
$\begin{equation*}
B=
\begin{bmatrix}
\Delta t \cos(\theta) & 0\\
\Delta t \sin(\theta) & 0\\
0 & \Delta t
\end{bmatrix}
\end{equation*}$
$\begin{equation*}
U=
\begin{bmatrix}
v_t\\
w_t\\
\end{bmatrix}
\end{equation*}$
$\begin{equation*}
X = FX + BU
\end{equation*}$
$\begin{equation*}
\begin{bmatrix}
x_{t+1} \\
y_{t+1} \\
\theta_{t+1}
\end{bmatrix}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x_{t} \\
y_{t} \\
\theta_{t}
\end{bmatrix}+
\begin{bmatrix}
\Delta t \cos(\theta) & 0\\
\Delta t \sin(\theta) & 0\\
0 & \Delta t
\end{bmatrix}
\begin{bmatrix}
v_{t} + \sigma_v\\
w_{t} + \sigma_w\\
\end{bmatrix}
\end{equation*}$
As can be seen, the motion model is the same as in the previous section (only the state vector differs): $U$ is the control input vector containing the linear velocity $v_t$ and angular velocity $w_t$, and the $+\sigma_v$ and $+\sigma_w$ terms indicate that the control input is noisy.
**Covariance prediction.** Predicting the covariance of the robot pose essentially amounts to adding the motion-model error: the pose uncertainty grows by the motion-model covariance $Q$.

$
P = G^T P G + Q
$
Note: the $G$ here is exactly the Jacobian $J_F$ from the formulas in the previous section; it is written as $G$ to stay consistent with the Python code.
In this step, only the block of the covariance matrix $P$ related to the robot pose is modified; the landmark-related blocks are left unchanged.
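The structure of this step can be sketched as follows; the pose Jacobian values and the noise matrix below are made up for illustration, and the point is only that the landmark block of $P$ stays untouched:

```python
import numpy as np

STATE_SIZE, LM_SIZE, n_lm = 3, 2, 2
P = np.eye(STATE_SIZE + LM_SIZE * n_lm)  # 7x7: pose block + 2 landmarks

# Assumed pose-only Jacobian and motion noise (illustrative values).
G_pose = np.array([[1.0, 0.0, -0.05],
                   [0.0, 1.0, 0.10],
                   [0.0, 0.0, 1.0]])
Q = 0.01 * np.eye(STATE_SIZE)

# Update only the pose-pose block, as in the covariance-prediction step.
P_new = P.copy()
P_new[:STATE_SIZE, :STATE_SIZE] = G_pose.T @ P[:STATE_SIZE, :STATE_SIZE] @ G_pose + Q

# The landmark block is untouched by the motion update.
assert np.allclose(P_new[STATE_SIZE:, STATE_SIZE:], P[STATE_SIZE:, STATE_SIZE:])
```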
```
def motion_model(x, u):
    """Predict the robot pose from the control input."""
    F = np.array([[1.0, 0, 0],
                  [0, 1.0, 0],
                  [0, 0, 1.0]])

    B = np.array([[DT * math.cos(x[2, 0]), 0],
                  [DT * math.sin(x[2, 0]), 0],
                  [0.0, DT]])

    x = (F @ x) + (B @ u)
    return x  # returns a 3x1 vector


# Compute the Jacobian matrix; same as in the previous section, except that
# only the three pose elements of the state vector are involved (jF is 3x3)
def jacob_motion(x, u):
    Fx = np.hstack((np.eye(STATE_SIZE), np.zeros(
        (STATE_SIZE, LM_SIZE * calc_n_lm(x)))))

    jF = np.array([[0.0, 0.0, -DT * u[0, 0] * math.sin(x[2, 0])],
                   [0.0, 0.0, DT * u[0, 0] * math.cos(x[2, 0])],
                   [0.0, 0.0, 0.0]], dtype=np.float64)

    G = np.eye(STATE_SIZE) + Fx.T @ jF @ Fx

    # Returns the Jacobian G (3x3) and Fx (effectively np.eye(3) here,
    # since this function is called on the pose part of the state)
    return G, Fx,
```
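As a quick self-contained check of the motion model (restated here, with an assumed time step `DT`, so the snippet runs on its own): driving straight at 1 m/s for one step from the origin should advance $x$ by $v \Delta t$ and leave $y$ and $\theta$ unchanged:

```python
import math

import numpy as np

DT = 0.1  # assumed time step

def motion_model(x, u):
    # Same model as above: x' = F x + B u
    F = np.eye(3)
    B = np.array([[DT * math.cos(x[2, 0]), 0.0],
                  [DT * math.sin(x[2, 0]), 0.0],
                  [0.0, DT]])
    return (F @ x) + (B @ u)

x = np.zeros((3, 1))          # pose (x, y, theta) at the origin
u = np.array([[1.0], [0.0]])  # v = 1 m/s, w = 0 rad/s
x_next = motion_model(x, u)
print(x_next.ravel())  # x advances by DT; y and theta stay zero
```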
### Observation model
In this example the observation model is more complicated than in the previous section, because the example simulates a laser/radar sensor that detects landmarks and returns the range and bearing from the robot to each landmark. The two elements of an observation vector $z$ are the relative range and angle between the robot and a landmark, so each landmark's coordinates in 2D space can be derived from the robot's coordinates in 2D space:

The left-hand side of the equation above gives the 2D coordinates of the j-th landmark.

Correspondingly, we obtain the transformation between the robot pose state and the observation of each landmark, i.e. the observation model:

So the observation Jacobian for a single landmark can be written as:

(Note: the "low" subscript on the $H_t$ matrix indicates that this is still the Jacobian for a single landmark, while the superscript $i$ indicates that it corresponds to the $i$-th observation.)
For each landmark, the observation Jacobian is obtained by taking partial derivatives with respect to the following five elements:
$(\overline\mu_{t,x}, \overline\mu_{t,y}, \overline\mu_{t,\theta}, \overline\mu_{j,x},\overline\mu_{j,y})$
namely the robot pose and that landmark's position in 2D x-y space.
The derivation is as follows:


The Jacobian of a single landmark is lifted into the Jacobian of the full observation model by multiplying it with a matrix $F_{x,j}$, which places the entries of the single-landmark Jacobian in their corresponding positions; the subscript $j$ indicates which landmark it is.

The $2j-2=2(j-1)$ entries are the positions of the $j-1$ landmarks before the $j$-th landmark, and the $2N-2j=2(N-j)$ entries the positions of the $N-j$ landmarks after it (here $N$ is the total number of landmarks); all of these entries are zero, because this landmark has no effect on the other landmarks.
```
def calc_innovation(lm, xEst, PEst, z, LMid):
    """
    Calculates the innovation based on expected position and landmark position

    :param lm: landmark position
    :param xEst: estimated position/state
    :param PEst: estimated covariance
    :param z: read measurements
    :param LMid: landmark id
    :returns: returns the innovation y, and the jacobian H, and S, used to calculate the Kalman Gain
    """
    delta = lm - xEst[0:2]
    q = (delta.T @ delta)[0, 0]
    z_angle = math.atan2(delta[1, 0], delta[0, 0]) - xEst[2, 0]
    zp = np.array([[math.sqrt(q), pi_2_pi(z_angle)]])  # corresponds to the observation model above
    y = (z - zp).T
    y[1] = pi_2_pi(y[1])

    # Pass in q (the squared distance, a scalar), delta (2x1), the state vector,
    # and the landmark's index in the state vector + 1
    H = jacob_h(q, delta, xEst, LMid + 1)
    S = H @ PEst @ H.T + R

    return y, S, H


def jacob_h(q, delta, x, i):
    """
    Calculates the jacobian of the measurement function

    :param q: the range from the system pose to the landmark
    :param delta: the difference between a landmark position and the estimated system position
    :param x: the state, including the estimated system position
    :param i: landmark id + 1
    :returns: the jacobian H
    """
    sq = math.sqrt(q)
    G = np.array([[-sq * delta[0, 0], - sq * delta[1, 0], 0, sq * delta[0, 0], sq * delta[1, 0]],
                  [delta[1, 0], - delta[0, 0], - q, - delta[1, 0], delta[0, 0]]])
    G = G / q  # Jacobian for a single landmark

    nLM = calc_n_lm(x)  # total number of landmarks
    F1 = np.hstack((np.eye(3), np.zeros((3, 2 * nLM))))
    F2 = np.hstack((np.zeros((2, 3)), np.zeros((2, 2 * (i - 1))),
                    np.eye(2), np.zeros((2, 2 * nLM - 2 * i))))
    F = np.vstack((F1, F2))

    H = G @ F

    return H
```
The following lines in the `jacob_h` function:
```
F1 = np.hstack((np.eye(3), np.zeros((3, 2 * nLM))))
F2 = np.hstack((np.zeros((2, 3)), np.zeros((2, 2 * (i - 1))),
np.eye(2), np.zeros((2, 2 * nLM - 2 * i))))
F = np.vstack((F1, F2))
```
construct the $F_{x,j}$ matrix from the expression derived above. In the code, nLM is the total number of landmarks and i is the landmark id. Let's run a few examples to look at the details of the $F_{x,j}$ matrix:
```
import numpy as np

nLM = 3  # assume there are three landmarks in total
for i in (1, 2, 3):
    F1 = np.hstack((np.eye(3), np.zeros((3, 2 * nLM))))
    F2 = np.hstack((np.zeros((2, 3)), np.zeros((2, 2 * (i - 1))),
                    np.eye(2), np.zeros((2, 2 * nLM - 2 * i))))
    F = np.vstack((F1, F2))
    print(F1)
    print(F2)
    print(F)
```
In the example, at every time step $t$ the robot receives several measurements — just as in reality, where at regular intervals the robot receives a batch of sensor readings. So how do we know which landmark a measurement $z_t^i=(r_t^i,\phi_t^i)^T$ corresponds to? In other words, how is the index $j$ of "the $j$-th landmark" above obtained? The scheme used in this Python example is outlined below.

For each measurement $z_t^i=(r_t^i,\phi_t^i)^T$: using the robot pose at this time step (from the existing state vector), compute the predicted measurement of every landmark already in the state, from landmark 1 to landmark n — that is, the measurements of landmark 1 through landmark n as seen from the robot's current position and heading. Then compute the **residuals** between these predicted measurements and the measurement $z_t^i=(r_t^i,\phi_t^i)^T$. If the **Mahalanobis distances** of all these residuals are greater than some threshold, the measurement is judged to come from a new, never-observed landmark; if not all of the **distances** exceed the threshold, we simply take the landmark with the shortest **distance** as the one corresponding to $z_t^i=(r_t^i,\phi_t^i)^T$. In short: find the landmark that differs least from the measurement; if they all differ a lot, conclude that a new landmark has been seen.
The **Mahalanobis distance** threshold is set to 2.0 in the code:
```
M_DIST_TH = 2.0  # Threshold of Mahalanobis distance for data association.
## Note: the Mahalanobis distance, introduced by the Indian statistician P. C. Mahalanobis,
## expresses the covariance-weighted distance of data. It is an effective way to measure
## the similarity of two unknown samples. Unlike the Euclidean distance, it takes the
## correlations between features into account and is scale-invariant, i.e. independent
## of the measurement scale (from Wikipedia).
```
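The quantity being thresholded is the squared Mahalanobis distance $y^T S^{-1} y$ of a residual $y$ under the innovation covariance $S$, exactly the expression used in the association code. A tiny self-contained illustration (the residual and covariance values here are made up):

```python
import numpy as np

# hypothetical (range, bearing) residual and a diagonal innovation covariance
y = np.array([[0.5], [0.1]])
S = np.array([[0.25, 0.00],
              [0.00, 0.04]])

# squared Mahalanobis distance of the residual
d2 = (y.T @ np.linalg.inv(S) @ y).item()
print(d2)  # 0.5**2/0.25 + 0.1**2/0.04 = 1.25, below M_DIST_TH = 2.0, so this residual would be associated
```

Note that a residual in a direction where $S$ is small (where the filter is confident) is penalized much more than the same residual in a direction of large uncertainty.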
The code in the example that looks up the matching landmark id is:
```
def search_correspond_landmark_id(xAug, PAug, zi):
    """
    Landmark association with Mahalanobis distance
    """
    # number of landmarks currently in the state:
    nLM = calc_n_lm(xAug)
    min_dist = []
    for i in range(nLM):
        # get the coordinates lm of the i-th known landmark:
        lm = get_landmark_position_from_state(xAug, i)
        # pass in lm, the predicted state vector xAug and covariance PAug,
        # and the new measurement zi to be compared
        y, S, H = calc_innovation(lm, xAug, PAug, zi, i)
        # compute the Mahalanobis distance from the residual y: y.T @ np.linalg.inv(S) @ y
        min_dist.append(y.T @ np.linalg.inv(S) @ y)
    min_dist.append(M_DIST_TH)  # new landmark
    min_id = min_dist.index(min(min_dist))
    return min_id
```
This approach — associating a measurement with the landmark that minimizes the Mahalanobis distance of the residual — is only a naive and simple treatment; "data association" in real applications is far more complex.
One more step deserves attention: if $z_t^i=(r_t^i,\phi_t^i)^T$ is the measurement of a previously unobserved landmark, the state vector and the state covariance matrix must be augmented. The corresponding Python code is:
```
# Extend state and covariance matrix
xAug = np.vstack((xEst, calc_landmark_position(xEst, z[iz, :])))
PAug = np.vstack((np.hstack((PEst, np.zeros((len(xEst), LM_SIZE)))),
                  np.hstack((np.zeros((LM_SIZE, len(xEst))), initP))))
xEst = xAug
PEst = PAug
```
Here initP is a 2×2 matrix with diagonal elements equal to 1; it provides an initial value for the state covariance of the newly added landmark.
At this point we have covered the robot's measurement model for landmarks and how to compute the Jacobian of the measurement model. We can now go on to compute the Kalman gain $K$ and carry out the **update** step of the EKF. The complete **update** step is summarized as follows:


The framework of the whole EKF_SLAM algorithm in the example is given below; see the code file for the implementation of the individual functions:
```
def ekf_slam(xEst, PEst, u, z):
    # Predict
    # the prediction step only involves the motion model
    S = STATE_SIZE
    # get the motion-model Jacobian G and the selection matrix Fx
    G, Fx = jacob_motion(xEst[0:S], u)
    # predict the robot pose under noisy control input:
    xEst[0:S] = motion_model(xEst[0:S], u)
    # update the robot-pose block of P
    PEst[0:S, 0:S] = G.T @ PEst[0:S, 0:S] @ G + Fx.T @ Q @ Fx
    # initial variance prepared for a new landmark
    initP = np.eye(2)

    # Update
    # for each of the noisy measurements z obtained at this time step:
    for iz in range(len(z[:, 0])):  # for each observation
        min_id = search_correspond_landmark_id(xEst, PEst, z[iz, 0:2])
        nLM = calc_n_lm(xEst)
        # in search_correspond_landmark_id, M_DIST_TH was appended last;
        # since the known landmark indices range from 0 to nLM - 1,
        # the index of the appended M_DIST_TH value is nLM.
        # So min_id == nLM means the Mahalanobis distances between the new
        # measurement z[iz, 0:2] and all known landmarks exceed M_DIST_TH,
        # i.e. this is a landmark never seen before:
        if min_id == nLM:
            print("New LM")
            # Extend state and covariance matrix
            xAug = np.vstack((xEst, calc_landmark_position(xEst, z[iz, :])))
            PAug = np.vstack((np.hstack((PEst, np.zeros((len(xEst), LM_SIZE)))),
                              np.hstack((np.zeros((LM_SIZE, len(xEst))), initP))))
            xEst = xAug
            PEst = PAug
        # if min_id != nLM, the measurement belongs to an already-observed landmark
        # (the one with the smallest Mahalanobis distance, i.e. the one min_id points to):
        lm = get_landmark_position_from_state(xEst, min_id)
        y, S, H = calc_innovation(lm, xEst, PEst, z[iz, 0:2], min_id)
        K = (PEst @ H.T) @ np.linalg.inv(S)
        xEst = xEst + (K @ y)
        PEst = (np.eye(len(xEst)) - (K @ H)) @ PEst

    xEst[2] = pi_2_pi(xEst[2])
    return xEst, PEst
```
Finally, a word on how the simulated data are generated in this example: in the observation function, the robot's pose determines which landmarks the robot can currently see, and artificial noise is then added to the measurements of those landmarks.
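That simulation step can be sketched as follows — a minimal illustration, not the example's actual observation function; the sensor range `MAX_RANGE`, the landmark layout, and the noise magnitudes are all assumptions:

```python
import math
import numpy as np

MAX_RANGE = 20.0                                   # assumed maximum sensor range
SIM_NOISE = np.diag([0.2, np.deg2rad(1.0)]) ** 2   # assumed (range, bearing) noise variances

def pi_2_pi(angle):
    """Normalize an angle to [-pi, pi)."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def simulate_observations(x_true, landmarks):
    """Return noisy (range, bearing, landmark_index) rows for landmarks in view."""
    z = []
    for i, (lx, ly) in enumerate(landmarks):
        dx, dy = lx - x_true[0], ly - x_true[1]
        d = math.hypot(dx, dy)
        if d <= MAX_RANGE:  # the true pose decides which landmarks are visible
            angle = pi_2_pi(math.atan2(dy, dx) - x_true[2])
            z.append([d + np.random.randn() * SIM_NOISE[0, 0] ** 0.5,
                      pi_2_pi(angle + np.random.randn() * SIM_NOISE[1, 1] ** 0.5),
                      i])
    return np.array(z)
```

With the robot at the origin and landmarks at (5, 0) and (100, 100), only the first landmark falls within the assumed range, so a single noisy (range, bearing, index) row is returned.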
Look for shapes that have the same color as their neighbors, and change them to a new random color until there are no collisions.
```
import random
import pandas as pd
import geopandas as gp

df = gp.read_file('smd.geojson')
df.rename(columns={
    'Color_Category': 'map_color_id',
    'SMD_ID': 'smd_id'
}, inplace=True)
df.sort_values(by='smd_id', inplace=True)
df['neighbors'] = None
df['num_neighbors'] = None
df['neighbor_colors'] = None
df['has_collision'] = False
possible_colors = list(range(1, 13))
df.loc[0].geometry[0].boundary.xy
for idx, row in df.iterrows():
    # get 'not disjoint' districts
    neighbors = df[~df.geometry.disjoint(row.geometry)].smd_id.tolist()
    # remove own name from the list
    neighbors = [name for name in neighbors if row.smd_id != name]
    # add names of neighbors as NEIGHBORS value
    df.loc[idx, 'neighbors'] = ", ".join(neighbors)
    df.loc[idx, 'num_neighbors'] = len(neighbors)
# df.groupby('num_neighbors').size()

def assess_collisions(df):
    """Mark True for districts with collisions"""
    for idx, row in df.iterrows():
        neighbors = row['neighbors'].split(', ')
        neighbor_colors = [df.loc[df.smd_id == n, 'map_color_id'].values[0] for n in neighbors]
        df.loc[idx, 'neighbor_colors'] = ", ".join([str(n) for n in neighbor_colors])
        if row['map_color_id'] in neighbor_colors:
            df.loc[idx, 'has_collision'] = True
    num_collisions = df['has_collision'].sum()
    print(f'Current collisions: {num_collisions}')
    return df, num_collisions

def change_one_district_color(df):
    """Change color for one district to an available color"""
    smd_to_change = df[df['has_collision']].head(1)['smd_id'].values[0]
    row = df[df['smd_id'] == smd_to_change]
    old_color = row['map_color_id'].values[0]
    neighbor_colors_str = row['neighbor_colors'].values[0].split(', ')
    neighbor_colors = [int(n) for n in neighbor_colors_str]
    available_colors = [c for c in possible_colors if c not in neighbor_colors]
    new_color = random.choice(available_colors)
    df.loc[row.index, 'map_color_id'] = new_color
    df.loc[row.index, 'has_collision'] = False
    print(f'District {smd_to_change} changed from {old_color} to color {new_color}')
    return df

df, num_collisions = assess_collisions(df)
num_iterations = 100
i = 0
while num_collisions != 0 and i < num_iterations:
    i += 1
    print()
    df = change_one_district_color(df)
    df, num_collisions = assess_collisions(df)

df[['smd_id', 'map_color_id', 'geometry', 'neighbors']].to_file('smd.geojson', driver='GeoJSON')
df[['smd_id', 'map_color_id', 'neighbors']].to_csv('smd.csv', index=False)
# df[df['smd_id'] == '1A10']
df[(df['smd_id'] == '2C02') | (df['smd_id'] == '6E05')]
```
# Example: CanvasXpress density Chart No. 4
This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/density-4.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="density4",
data={
"y": {
"smps": [
"weight"
],
"data": [
[
49
],
[
56
],
[
60
],
[
43
],
[
57
],
[
58
],
[
52
],
[
52
],
[
52
],
[
51
],
[
53
],
[
50
],
[
51
],
[
55
],
[
60
],
[
54
],
[
52
],
[
50
],
[
51
],
[
67
],
[
56
],
[
53
],
[
53
],
[
57
],
[
52
],
[
48
],
[
58
],
[
50
],
[
55
],
[
50
],
[
61
],
[
53
],
[
51
],
[
52
],
[
47
],
[
49
],
[
44
],
[
48
],
[
54
],
[
53
],
[
62
],
[
50
],
[
51
],
[
54
],
[
50
],
[
50
],
[
49
],
[
49
],
[
52
],
[
53
],
[
46
],
[
52
],
[
49
],
[
50
],
[
54
],
[
58
],
[
63
],
[
51
],
[
63
],
[
49
],
[
58
],
[
68
],
[
55
],
[
52
],
[
55
],
[
64
],
[
49
],
[
62
],
[
62
],
[
57
],
[
55
],
[
53
],
[
53
],
[
58
],
[
65
],
[
54
],
[
48
],
[
51
],
[
56
],
[
53
],
[
54
],
[
54
],
[
48
],
[
54
],
[
59
],
[
58
],
[
58
],
[
53
],
[
54
],
[
49
],
[
55
],
[
56
],
[
64
],
[
60
],
[
53
],
[
57
],
[
49
],
[
59
],
[
60
],
[
66
],
[
57
],
[
53
],
[
55
],
[
52
],
[
51
],
[
56
],
[
51
],
[
56
],
[
57
],
[
55
],
[
54
],
[
52
],
[
49
],
[
59
],
[
55
],
[
59
],
[
49
],
[
56
],
[
58
],
[
55
],
[
54
],
[
51
],
[
65
],
[
59
],
[
64
],
[
55
],
[
52
],
[
47
],
[
52
],
[
56
],
[
60
],
[
56
],
[
49
],
[
58
],
[
47
],
[
53
],
[
53
],
[
45
],
[
60
],
[
52
],
[
53
],
[
62
],
[
58
],
[
54
],
[
58
],
[
57
],
[
63
],
[
56
],
[
58
],
[
57
],
[
53
],
[
55
],
[
63
],
[
51
],
[
56
],
[
62
],
[
54
],
[
50
],
[
51
],
[
53
],
[
51
],
[
54
],
[
53
],
[
54
],
[
57
],
[
58
],
[
63
],
[
55
],
[
53
],
[
62
],
[
64
],
[
55
],
[
53
],
[
46
],
[
62
],
[
51
],
[
49
],
[
70
],
[
56
],
[
55
],
[
41
],
[
55
],
[
60
],
[
57
],
[
60
],
[
65
],
[
61
],
[
52
],
[
59
],
[
54
],
[
52
],
[
41
],
[
51
],
[
57
],
[
66
],
[
58
],
[
58
],
[
50
],
[
56
],
[
45
],
[
67
],
[
68
],
[
66
],
[
69
],
[
67
],
[
69
],
[
74
],
[
71
],
[
65
],
[
59
],
[
67
],
[
63
],
[
72
],
[
57
],
[
63
],
[
67
],
[
64
],
[
62
],
[
63
],
[
68
],
[
69
],
[
68
],
[
76
],
[
71
],
[
66
],
[
62
],
[
80
],
[
68
],
[
62
],
[
66
],
[
63
],
[
64
],
[
60
],
[
66
],
[
67
],
[
60
],
[
49
],
[
64
],
[
65
],
[
68
],
[
65
],
[
67
],
[
60
],
[
69
],
[
69
],
[
66
],
[
72
],
[
67
],
[
66
],
[
66
],
[
67
],
[
70
],
[
67
],
[
68
],
[
59
],
[
63
],
[
72
],
[
59
],
[
66
],
[
67
],
[
70
],
[
63
],
[
66
],
[
56
],
[
67
],
[
62
],
[
64
],
[
59
],
[
67
],
[
68
],
[
63
],
[
74
],
[
68
],
[
70
],
[
75
],
[
62
],
[
69
],
[
70
],
[
65
],
[
67
],
[
60
],
[
67
],
[
61
],
[
69
],
[
61
],
[
67
],
[
61
],
[
64
],
[
57
],
[
66
],
[
70
],
[
66
],
[
56
],
[
62
],
[
73
],
[
74
],
[
59
],
[
63
],
[
67
],
[
67
],
[
62
],
[
60
],
[
64
],
[
70
],
[
65
],
[
62
],
[
62
],
[
73
],
[
63
],
[
69
],
[
72
],
[
67
],
[
63
],
[
65
],
[
63
],
[
71
],
[
64
],
[
73
],
[
62
],
[
62
],
[
66
],
[
65
],
[
62
],
[
57
],
[
65
],
[
61
],
[
70
],
[
60
],
[
71
],
[
62
],
[
66
],
[
69
],
[
62
],
[
68
],
[
65
],
[
59
],
[
64
],
[
73
],
[
64
],
[
61
],
[
65
],
[
67
],
[
70
],
[
71
],
[
66
],
[
71
],
[
61
],
[
53
],
[
63
],
[
62
],
[
53
],
[
68
],
[
61
],
[
64
],
[
57
],
[
68
],
[
74
],
[
61
],
[
64
],
[
75
],
[
70
],
[
75
],
[
65
],
[
64
],
[
62
],
[
72
],
[
59
],
[
67
],
[
65
],
[
76
],
[
62
],
[
57
],
[
66
],
[
65
],
[
61
],
[
66
],
[
64
],
[
62
],
[
68
],
[
63
],
[
56
],
[
52
],
[
62
],
[
72
],
[
69
],
[
71
],
[
70
],
[
67
],
[
57
],
[
66
],
[
73
],
[
48
],
[
61
],
[
71
],
[
68
],
[
69
],
[
67
],
[
68
],
[
65
],
[
60
]
],
"vars": [
"var1",
"var2",
"var3",
"var4",
"var5",
"var6",
"var7",
"var8",
"var9",
"var10",
"var11",
"var12",
"var13",
"var14",
"var15",
"var16",
"var17",
"var18",
"var19",
"var20",
"var21",
"var22",
"var23",
"var24",
"var25",
"var26",
"var27",
"var28",
"var29",
"var30",
"var31",
"var32",
"var33",
"var34",
"var35",
"var36",
"var37",
"var38",
"var39",
"var40",
"var41",
"var42",
"var43",
"var44",
"var45",
"var46",
"var47",
"var48",
"var49",
"var50",
"var51",
"var52",
"var53",
"var54",
"var55",
"var56",
"var57",
"var58",
"var59",
"var60",
"var61",
"var62",
"var63",
"var64",
"var65",
"var66",
"var67",
"var68",
"var69",
"var70",
"var71",
"var72",
"var73",
"var74",
"var75",
"var76",
"var77",
"var78",
"var79",
"var80",
"var81",
"var82",
"var83",
"var84",
"var85",
"var86",
"var87",
"var88",
"var89",
"var90",
"var91",
"var92",
"var93",
"var94",
"var95",
"var96",
"var97",
"var98",
"var99",
"var100",
"var101",
"var102",
"var103",
"var104",
"var105",
"var106",
"var107",
"var108",
"var109",
"var110",
"var111",
"var112",
"var113",
"var114",
"var115",
"var116",
"var117",
"var118",
"var119",
"var120",
"var121",
"var122",
"var123",
"var124",
"var125",
"var126",
"var127",
"var128",
"var129",
"var130",
"var131",
"var132",
"var133",
"var134",
"var135",
"var136",
"var137",
"var138",
"var139",
"var140",
"var141",
"var142",
"var143",
"var144",
"var145",
"var146",
"var147",
"var148",
"var149",
"var150",
"var151",
"var152",
"var153",
"var154",
"var155",
"var156",
"var157",
"var158",
"var159",
"var160",
"var161",
"var162",
"var163",
"var164",
"var165",
"var166",
"var167",
"var168",
"var169",
"var170",
"var171",
"var172",
"var173",
"var174",
"var175",
"var176",
"var177",
"var178",
"var179",
"var180",
"var181",
"var182",
"var183",
"var184",
"var185",
"var186",
"var187",
"var188",
"var189",
"var190",
"var191",
"var192",
"var193",
"var194",
"var195",
"var196",
"var197",
"var198",
"var199",
"var200",
"var201",
"var202",
"var203",
"var204",
"var205",
"var206",
"var207",
"var208",
"var209",
"var210",
"var211",
"var212",
"var213",
"var214",
"var215",
"var216",
"var217",
"var218",
"var219",
"var220",
"var221",
"var222",
"var223",
"var224",
"var225",
"var226",
"var227",
"var228",
"var229",
"var230",
"var231",
"var232",
"var233",
"var234",
"var235",
"var236",
"var237",
"var238",
"var239",
"var240",
"var241",
"var242",
"var243",
"var244",
"var245",
"var246",
"var247",
"var248",
"var249",
"var250",
"var251",
"var252",
"var253",
"var254",
"var255",
"var256",
"var257",
"var258",
"var259",
"var260",
"var261",
"var262",
"var263",
"var264",
"var265",
"var266",
"var267",
"var268",
"var269",
"var270",
"var271",
"var272",
"var273",
"var274",
"var275",
"var276",
"var277",
"var278",
"var279",
"var280",
"var281",
"var282",
"var283",
"var284",
"var285",
"var286",
"var287",
"var288",
"var289",
"var290",
"var291",
"var292",
"var293",
"var294",
"var295",
"var296",
"var297",
"var298",
"var299",
"var300",
"var301",
"var302",
"var303",
"var304",
"var305",
"var306",
"var307",
"var308",
"var309",
"var310",
"var311",
"var312",
"var313",
"var314",
"var315",
"var316",
"var317",
"var318",
"var319",
"var320",
"var321",
"var322",
"var323",
"var324",
"var325",
"var326",
"var327",
"var328",
"var329",
"var330",
"var331",
"var332",
"var333",
"var334",
"var335",
"var336",
"var337",
"var338",
"var339",
"var340",
"var341",
"var342",
"var343",
"var344",
"var345",
"var346",
"var347",
"var348",
"var349",
"var350",
"var351",
"var352",
"var353",
"var354",
"var355",
"var356",
"var357",
"var358",
"var359",
"var360",
"var361",
"var362",
"var363",
"var364",
"var365",
"var366",
"var367",
"var368",
"var369",
"var370",
"var371",
"var372",
"var373",
"var374",
"var375",
"var376",
"var377",
"var378",
"var379",
"var380",
"var381",
"var382",
"var383",
"var384",
"var385",
"var386",
"var387",
"var388",
"var389",
"var390",
"var391",
"var392",
"var393",
"var394",
"var395",
"var396",
"var397",
"var398",
"var399",
"var400"
]
},
"z": {
"sex": [
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"F",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M",
"M"
]
}
},
config={
"graphType": "Scatter2D",
"hideHistogram": True,
"showHistogramDensity": True,
"theme": "CanvasXpress"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"createHistogram",
[
"sex",
None,
None
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="density_4.html")
```
```
# Delete this cell to re-enable tracebacks
import sys
ipython = get_ipython()

def hide_traceback(exc_tuple=None, filename=None, tb_offset=None,
                   exception_only=False, running_compiled_code=False):
    etype, value, tb = sys.exc_info()
    value.__cause__ = None  # suppress chained exceptions
    return ipython._showtraceback(etype, value, ipython.InteractiveTB.get_exception_only(etype, value))

ipython.showtraceback = hide_traceback

# JSON output syntax highlighting
from __future__ import print_function
from pygments import highlight
from pygments.lexers import JsonLexer, TextLexer
from pygments.formatters import HtmlFormatter
from IPython.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell

InteractiveShell.ast_node_interactivity = "all"

def json_print(inpt):
    string = str(inpt)
    formatter = HtmlFormatter()
    if string[0] == '{':
        lexer = JsonLexer()
    else:
        lexer = TextLexer()
    return HTML('<style type="text/css">{}</style>{}'.format(
        formatter.get_style_defs('.highlight'),
        highlight(string, lexer, formatter)))

globals()['print'] = json_print
```
# STIX2 Patterns
The Python ``stix2`` library supports STIX 2 patterning insofar as patterns may be used for the pattern property of Indicators, identical to the STIX 2 specification. ``stix2`` does not evaluate patterns against STIX 2 content; for that functionality see [cti-pattern-matcher](https://github.com/oasis-open/cti-pattern-matcher).
Patterns in the ``stix2`` library are built compositely from the bottom up, creating subcomponent expressions first before those at higher levels.
## API Tips
### ObservationExpression
Within the STIX 2 Patterning specification, Observation Expressions denote a complete expression to be evaluated against a discrete observation. In other words, an Observation Expression must be created to apply to a single Observation instance. This is further made clear by the visual brackets (```[]```) that encapsulate an Observation Expression. Thus, whatever sub-expressions are within the Observation Expression are meant to be matched against the same Observable instance.
This requirement manifests itself within the ``stix2`` library via ```ObservationExpression```. When creating STIX 2 observation expressions, whenever the current expression is complete, wrap it with ```ObservationExpression()```. This allows the complete pattern expression - no matter its complexity - to be rendered as a proper specification-adhering string. __*Note: When pattern expressions are added to Indicator objects, the expression objects are implicitly converted to string representations*__. While the extra step may seem tedious in the construction of simple pattern expressions, this explicit marking of observation expressions becomes vital when converting the pattern expressions to strings.
In all the examples, you can observe how in the process of building pattern expressions, when an Observation Expression is completed, it is wrapped with ```ObservationExpression()```.
### ParentheticalExpression
Do not be confused by the ```ParentheticalExpression``` object. It is not a distinct expression type but is also used to properly craft pattern expressions by denoting order priority and grouping of expression components. Use it in a similar manner as ```ObservationExpression```, wrapping completed subcomponent expressions with ```ParentheticalExpression()``` if explicit ordering is required. For usage examples with ```ParentheticalExpression```'s, see [here](#Compound-Observation-Expressions).
### BooleanExpressions vs CompoundObservationExpressions
Be careful to note the difference between these two very similar pattern components.
__BooleanExpressions__
- [AndBooleanExpression](../api/stix2.patterns.rst#stix2.patterns.AndBooleanExpression)
- [OrBooleanExpression](../api/stix2.patterns.rst#stix2.patterns.OrBooleanExpression)
__Usage__: When the boolean sub-expressions refer to the *same* root object
__Example__:
```[domain-name:value = "www.5z8.info" AND domain-name:resolves_to_refs[*].value = "'198.51.100.1/32'"]```
__Rendering__: when the pattern is rendered, brackets or parentheses will encapsulate the boolean expression
__CompoundObservationExpressions__
- [AndObservationExpression](../api/stix2.patterns.rst#stix2.patterns.AndObservationExpression)
- [OrObservationExpression](../api/stix2.patterns.rst#stix2.patterns.OrObservationExpression)
__Usage__: When the boolean sub-expressions refer to *different* root objects
__Example__:
```[file:name="foo.dll"] AND [process:name = "procfoo"]```
__Rendering__: when pattern is rendered, brackets will encapsulate each boolean sub-expression
## Examples
### Comparison Expressions
```
from stix2 import DomainName, File, IPv4Address
from stix2 import (ObjectPath, EqualityComparisonExpression, ObservationExpression,
                   GreaterThanComparisonExpression, IsSubsetComparisonExpression,
                   FloatConstant, StringConstant)
```
#### Equality Comparison expressions
```
lhs = ObjectPath("domain-name", ["value"])
ece_1 = ObservationExpression(EqualityComparisonExpression(lhs, "site.of.interest.zaz"))
print("\t{}\n".format(ece_1))
lhs = ObjectPath("file", ["parent_directory_ref","path"])
ece_2 = ObservationExpression(EqualityComparisonExpression(lhs, "C:\\Windows\\System32"))
print("\t{}\n".format(ece_2))
```
#### Greater-than Comparison expressions
```
lhs = ObjectPath("file", ["extensions", "windows-pebinary-ext", "sections[*]", "entropy"])
gte = ObservationExpression(GreaterThanComparisonExpression(lhs, FloatConstant("7.0")))
print("\t{}\n".format(gte))
```
#### IsSubset Comparison expressions
```
lhs = ObjectPath("network-traffic", ["dst_ref", "value"])
iss = ObservationExpression(IsSubsetComparisonExpression(lhs, StringConstant("2001:0db8:dead:beef:0000:0000:0000:0000/64")))
print("\t{}\n".format(iss))
```
### Compound Observation Expressions
```
from stix2 import (IntegerConstant, HashConstant, ObjectPath,
                   EqualityComparisonExpression, AndBooleanExpression,
                   OrBooleanExpression, ParentheticalExpression,
                   AndObservationExpression, OrObservationExpression,
                   FollowedByObservationExpression, ObservationExpression)
```
#### AND boolean
```
ece3 = EqualityComparisonExpression(ObjectPath("email-message", ["sender_ref", "value"]), "stark@example.com")
ece4 = EqualityComparisonExpression(ObjectPath("email-message", ["subject"]), "Conference Info")
abe = ObservationExpression(AndBooleanExpression([ece3, ece4]))
print("(AND)\n{}\n".format(abe))
```
#### OR boolean
```
ece5 = EqualityComparisonExpression(ObjectPath("url", ["value"]), "http://example.com/foo")
ece6 = EqualityComparisonExpression(ObjectPath("url", ["value"]), "http://example.com/bar")
obe = ObservationExpression(OrBooleanExpression([ece5, ece6]))
print("(OR)\n{}\n".format(obe))
```
#### ( OR ) AND boolean
```
ece7 = EqualityComparisonExpression(ObjectPath("file", ["name"]), "pdf.exe")
ece8 = EqualityComparisonExpression(ObjectPath("file", ["size"]), IntegerConstant("371712"))
ece9 = EqualityComparisonExpression(ObjectPath("file", ["created"]), "2014-01-13T07:03:17Z")
obe1 = OrBooleanExpression([ece7, ece8])
pobe = ParentheticalExpression(obe1)
abe1 = ObservationExpression(AndBooleanExpression([pobe, ece9]))
print("(OR,AND)\n{}\n".format(abe1))
```
#### ( AND ) OR ( OR ) observation
```
ece20 = ObservationExpression(EqualityComparisonExpression(ObjectPath("file", ["name"]), "foo.dll"))
ece21 = ObservationExpression(EqualityComparisonExpression(ObjectPath("win-registry-key", ["key"]), "HKEY_LOCAL_MACHINE\\foo\\bar"))
ece22 = EqualityComparisonExpression(ObjectPath("process", ["name"]), "fooproc")
ece23 = EqualityComparisonExpression(ObjectPath("process", ["name"]), "procfoo")
# NOTE: we need to use AND/OR observation expression instead of just boolean
# expressions as the operands are not on the same object-type
aoe = ParentheticalExpression(AndObservationExpression([ece20, ece21]))
obe2 = ObservationExpression(OrBooleanExpression([ece22, ece23]))
ooe = OrObservationExpression([aoe, obe2])
print("(AND,OR,OR)\n{}\n".format(ooe))
```
#### FOLLOWED-BY
```
ece10 = ObservationExpression(EqualityComparisonExpression(ObjectPath("file", ["hashes", "MD5"]), HashConstant("79054025255fb1a26e4bc422aef54eb4", "MD5")))
ece11 = ObservationExpression(EqualityComparisonExpression(ObjectPath("win-registry-key", ["key"]), "HKEY_LOCAL_MACHINE\\foo\\bar"))
fbe = FollowedByObservationExpression([ece10, ece11])
print("(FollowedBy)\n{}\n".format(fbe))
```
### Qualified Observation Expressions
```
from stix2 import (TimestampConstant, HashConstant, ObjectPath, EqualityComparisonExpression,
                   AndBooleanExpression, WithinQualifier, RepeatQualifier, StartStopQualifier,
                   QualifiedObservationExpression, FollowedByObservationExpression,
                   ParentheticalExpression, ObservationExpression)
```
#### WITHIN
```
ece10 = ObservationExpression(EqualityComparisonExpression(ObjectPath("file", ["hashes", "MD5"]), HashConstant("79054025255fb1a26e4bc422aef54eb4", "MD5")))
ece11 = ObservationExpression(EqualityComparisonExpression(ObjectPath("win-registry-key", ["key"]), "HKEY_LOCAL_MACHINE\\foo\\bar"))
fbe = FollowedByObservationExpression([ece10, ece11])
par = ParentheticalExpression(fbe)
qoe = QualifiedObservationExpression(par, WithinQualifier(300))
print("(WITHIN)\n{}\n".format(qoe))
```
#### REPEATS, WITHIN
```
ece12 = EqualityComparisonExpression(ObjectPath("network-traffic", ["dst_ref", "type"]), "domain-name")
ece13 = EqualityComparisonExpression(ObjectPath("network-traffic", ["dst_ref", "value"]), "example.com")
abe2 = ObservationExpression(AndBooleanExpression([ece12, ece13]))
qoe1 = QualifiedObservationExpression(QualifiedObservationExpression(abe2, RepeatQualifier(5)), WithinQualifier(180))
print("(REPEAT, WITHIN)\n{}\n".format(qoe1))
```
#### START, STOP
```
ece14 = ObservationExpression(EqualityComparisonExpression(ObjectPath("file", ["name"]), "foo.dll"))
ssq = StartStopQualifier(TimestampConstant('2016-06-01T00:00:00Z'), TimestampConstant('2016-07-01T00:00:00Z'))
qoe2 = QualifiedObservationExpression(ece14, ssq)
print("(START-STOP)\n{}\n".format(qoe2))
```
## Attaching patterns to STIX2 Domain objects
### Example
```
from stix2 import Indicator, EqualityComparisonExpression, ObservationExpression
ece14 = ObservationExpression(EqualityComparisonExpression(ObjectPath("file", ["name"]), "$$t00rzch$$.elf"))
ind = Indicator(name="Cryptotorch", pattern_type="stix", pattern=ece14)
print(ind.serialize(pretty=True))
```
# Use Pytorch to recognize hand-written digits with `ibm-watson-machine-learning`
This notebook demonstrates the use of the PyTorch ML library with the Watson Machine Learning service. It contains steps and code to work with the [ibm-watson-machine-learning](https://pypi.python.org/pypi/ibm-watson-machine-learning) library available in the PyPI repository. It also introduces commands for getting the model and training data, persisting the model, deploying the model, and scoring it.
Some familiarity with Python is helpful. This notebook uses Python 3.8.
## Learning goals
The learning goals of this notebook are:
- Download an externally trained Pytorch model with dataset.
- Persist an external model in Watson Machine Learning repository.
- Deploy model for online scoring using client library.
- Score sample records using client library.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Download externally created Pytorch model and data](#download)
3. [Persist externally created Pytorch ONNX model](#persistence)
4. [Deploy and score](#scoring)
5. [Clean up](#cleanup)
6. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Contact your Cloud Pak for Data administrator and ask for your account credentials
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username` and `api_key`.
```
username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.0'
}
```
Alternatively you can use `username` and `password` to authenticate WML services.
```
wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.0'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd4.0/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** that you will be using.
```
client.set.default_space(space_id)
```
<a id="download"></a>
## 2. Download externally created Pytorch model and data
In this section, you will download an externally created PyTorch model and the data that was used to train it.
```
import os
import wget
data_dir = 'MNIST_DATA'
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
model_path = os.path.join(data_dir, 'mnist_pytorch.tar.gz')
if not os.path.isfile(model_path):
wget.download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd4.0/models/pytorch/mnist_pytorch.tar.gz', out=data_dir)
filename = os.path.join(data_dir, 'mnist.npz')
if not os.path.isfile(filename):
wget.download('https://s3.amazonaws.com/img-datasets/mnist.npz', out=data_dir)
import numpy as np
dataset = np.load(filename)
x_test = dataset['x_test']
```
<a id="persistence"></a>
## 3. Persist externally created Pytorch ONNX model
In this section, you will learn how to store your model in the Watson Machine Learning repository by using the IBM Watson Machine Learning SDK.
### 3.1: Publish model
#### Publish model in Watson Machine Learning repository.
Define the model name, author name and email.
```
software_spec_uid = client.software_specifications.get_id_by_name("default_py3.8")
metadata = {
client.repository.ModelMetaNames.NAME: 'External pytorch model',
client.repository.ModelMetaNames.TYPE: 'pytorch-onnx_1.7',
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
published_model = client.repository.store_model(
model=model_path,
meta_props=metadata)
```
### 3.2: Get model details
```
import json
published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)
print(json.dumps(model_details, indent=2))
```
### 3.3 Get all models
```
models_details = client.repository.list_models()
```
<a id="scoring"></a>
## 4. Deploy and score
In this section you will learn how to create an online deployment and score a new data record by using the IBM Watson Machine Learning SDK.
### 4.1: Create model deployment
#### Create online deployment for published model
```
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of external pytorch model",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
```
**Note**: Here we use the deployment URL saved in the `published_model` object. In the next section, we show how to retrieve the deployment URL from the Watson Machine Learning instance.
```
deployment_uid = client.deployments.get_uid(created_deployment)
```
Now you can print an online scoring endpoint.
```
scoring_endpoint = client.deployments.get_scoring_href(created_deployment)
print(scoring_endpoint)
```
You can also list existing deployments.
```
client.deployments.list()
```
### 4.2: Get deployment details
```
client.deployments.get_details(deployment_uid)
```
### 4.3: Score
You can use the method below to run a test scoring request against the deployed model.
Let's first visualize the two samples from the dataset that we'll use for scoring.
```
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([x_test[0], x_test[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
```
Prepare scoring payload with records to score.
```
score_0 = [x_test[0].tolist()]
score_1 = [x_test[1].tolist()]
scoring_payload = {"input_data": [{"values": [score_0, score_1]}]}
```
Use ``client.deployments.score()`` method to run scoring.
```
predictions = client.deployments.score(deployment_uid, scoring_payload)
```
Let's print the result of predictions.
```
print(json.dumps(predictions, indent=2))
```
As you can see, the prediction probabilities point to the proper classes for the test samples displayed above.
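To turn such a response into class labels programmatically, a small helper like the following can be used. This is a sketch: the helper name and the exact response shape (`{"predictions": [{"values": [...]}]}`) are assumptions based on typical WML scoring output, not part of the SDK.

```python
import numpy as np

# Hypothetical helper (not part of ibm-watson-machine-learning): pull the
# most likely class per sample out of a scoring response shaped like
# {"predictions": [{"values": [[<probabilities...>], ...]}]}.
def predicted_classes(response):
    values = response["predictions"][0]["values"]
    return [int(np.argmax(v)) for v in values]

# Toy response with two samples over three classes.
sample_response = {"predictions": [{"values": [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]]}]}
print(predicted_classes(sample_response))  # → [1, 0]
```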
<a id="cleanup"></a>
## 5. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd4.0/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 6. Summary and next steps
You successfully completed this notebook! You learned how to use the PyTorch machine learning library as well as Watson Machine Learning for model creation and deployment.
Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html) for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Daniel Ryszka**, Software Engineer
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
# Masakhane - Machine Translation for African Languages (Using JoeyNMT)
## Note before beginning:
### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus.
### - The tl;dr: Go to the **"TODO"** comments which will tell you what to update to get up and running
### - If you actually want to have a clue what you're doing, read the text and peek at the links
### - With 100 epochs, it should take around 7 hours to run in Google Colab
### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com
### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
## Retrieve your data & make a parallel corpus
If you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and converting them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.
Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe.
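If you would rather not depend on `tmx2dataframe`, a minimal stand-in using only the standard library might look like the sketch below. The inline TMX snippet, the function name, and the translations are purely illustrative.

```python
import xml.etree.ElementTree as ET

# A tiny inline TMX document for illustration; real files come from opus_read.
TMX = """<tmx version="1.4"><body>
<tu><tuv xml:lang="en"><seg>Hello</seg></tuv><tuv xml:lang="nr"><seg>Lotjhani</seg></tuv></tu>
<tu><tuv xml:lang="en"><seg>Thank you</seg></tuv><tuv xml:lang="nr"><seg>Ngiyathokoza</seg></tuv></tu>
</body></tmx>"""

def tmx_to_pairs(tmx_string, src_lang, trg_lang):
    # Collect (source, target) sentence pairs from each translation unit.
    root = ET.fromstring(tmx_string)
    pairs = []
    for tu in root.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            # The xml: prefix is the built-in XML namespace in ElementTree.
            lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang")
            segs[lang] = tuv.find("seg").text
        pairs.append((segs[src_lang], segs[trg_lang]))
    return pairs

print(tmx_to_pairs(TMX, "en", "nr"))
```

The resulting list of pairs drops straight into `pd.DataFrame(..., columns=['source_sentence', 'target_sentence'])` as done below.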
```
from google.colab import drive
drive.mount('/content/drive')
# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:
# These will also become the suffixes of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "nr"
lc = False # If True, lowercase the data.
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
!mkdir -p "/content/drive/My Drive/masakhane/$src-$tgt-$tag"
os.environ["gdrive_path"] = "/content/drive/My Drive/masakhane/%s-%s-%s" % (source_language, target_language, tag)
!echo $gdrive_path
# Install opus-tools
! pip install opustools-pkg
# Downloading our corpus
! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q
# extract the corpus file
! gunzip JW300_latest_xml_$src-$tgt.xml.gz
# Download the global test set.
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en
# And the specific test set for this language pair.
os.environ["trg"] = target_language
os.environ["src"] = source_language
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en
! mv test.en-$trg.en test.en
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg
! mv test.en-$trg.$trg test.$trg
# Read the test data to filter from train and dev splits.
# Store english portion in set for quick filtering checks.
en_test_sents = set()
filter_test_sents = "test.en-any.en"
j = 0
with open(filter_test_sents) as f:
for line in f:
en_test_sents.add(line.strip())
j += 1
print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))
import pandas as pd
# TMX file to dataframe
source_file = 'jw300.' + source_language
target_file = 'jw300.' + target_language
source = []
target = []
skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.
with open(source_file) as f:
for i, line in enumerate(f):
# Skip sentences that are contained in the test set.
if line.strip() not in en_test_sents:
source.append(line.strip())
else:
skip_lines.append(i)
with open(target_file) as f:
for j, line in enumerate(f):
# Only add to corpus if corresponding source was not skipped.
if j not in skip_lines:
target.append(line.strip())
print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))
df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])
# if you get "TypeError: data argument can't be an iterator", it is because of your zip version; run the line below instead
#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])
df.head(3)
```
## Pre-processing and export
It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned.
In addition we will split our data into dev/test/train and export to the filesystem.
```
# drop duplicate translations
df_pp = df.drop_duplicates()
# drop conflicting translations
# (this is optional and something that you might want to comment out
# depending on the size of your corpus)
df_pp.drop_duplicates(subset='source_sentence', inplace=True)
df_pp.drop_duplicates(subset='target_sentence', inplace=True)
# Shuffle the data to remove bias in dev set selection.
df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)
# Install fuzzy wuzzy to remove "almost duplicate" sentences in the
# test and training sets.
! pip install fuzzywuzzy
! pip install python-Levenshtein
import time
from fuzzywuzzy import process
import numpy as np
from os import cpu_count
from functools import partial
from multiprocessing import Pool
# reset the index of the training set after previous filtering
df_pp.reset_index(drop=False, inplace=True)
# Remove samples from the training data set if they "almost overlap" with the
# samples in the test set.
# Filtering function. Adjust pad to narrow down the candidate matches to
# within a certain length of characters of the given sample.
def fuzzfilter(sample, candidates, pad):
candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad]
if len(candidates) > 0:
return process.extractOne(sample, candidates)[1]
else:
return np.nan
start_time = time.time()
### iterating over pandas dataframe rows is not recommended; let's use multiprocessing to apply the function
with Pool(cpu_count()-1) as pool:
scores = pool.map(partial(fuzzfilter, candidates=list(en_test_sents), pad=5), df_pp['source_sentence'])
hours, rem = divmod(time.time() - start_time, 3600)
minutes, seconds = divmod(rem, 60)
print("done in {}h:{}min:{}seconds".format(hours, minutes, seconds))
# Filter out "almost overlapping samples"
df_pp = df_pp.assign(scores=scores)
df_pp = df_pp[df_pp['scores'] < 95]
# This section does the split between train/dev for the parallel corpora then saves them as separate files
# We use 1000 dev test and the given test set.
import csv
# Do the split between dev/train and create parallel corpora
num_dev_patterns = 1000
# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.
if lc: # Julia: making lowercasing optional
df_pp["source_sentence"] = df_pp["source_sentence"].str.lower()
df_pp["target_sentence"] = df_pp["target_sentence"].str.lower()
# Julia: test sets are already generated
dev = df_pp.tail(num_dev_patterns) # Herman: Error in original
stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file:
for index, row in stripped.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file:
for index, row in dev.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
#stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere
#stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.
#dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False)
#dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False)
# Double-check the format below. There should be no extra quotation marks or weird characters.
! head train.*
! head dev.*
```
---
## Installation of JoeyNMT
JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io)
```
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
```
# Preprocessing the Data into Subword BPE Tokens
- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909).
- It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
- Below we have the scripts for doing BPE tokenization of our data. The scripts as given use 40,000 merge operations; note that [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) recommends far fewer (around 4,000) for low-resourced languages, so this is worth tuning. Simply running the below will be suitable to get started.
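To build intuition for what BPE learning does before running the scripts, here is a toy sketch of the merge-learning loop in the spirit of Sennrich et al. It is an illustration only, not the `subword-nmt` implementation used here; the sample vocabulary is the classic one from the paper.

```python
import re
import collections

# Count how often each adjacent symbol pair occurs, weighted by word frequency.
def get_stats(vocab):
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

# Replace every occurrence of the chosen pair with its concatenation.
def merge_vocab(pair, vocab):
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
merges = []
for _ in range(3):
    stats = get_stats(vocab)
    best = max(stats, key=stats.get)
    merges.append(best)
    vocab = merge_vocab(best, vocab)
print(merges)  # → [('e', 's'), ('es', 't'), ('est', '</w>')]
```

Each learned merge becomes one BPE code; `subword-nmt` simply runs this loop tens of thousands of times and applies the merges at tokenization time.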
```
# One of the huge boosts in NMT performance was to use a different method of tokenizing.
# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance
# Do subword NMT
from os import path
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
# Learn BPEs on the training data.
os.environ["data_path"] = path.join("joeynmt", "data", source_language + target_language) # Herman!
! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 40000 -o bpe.codes.40000 --write-vocabulary vocab.$src vocab.$tgt
# Apply BPE splits to the development and test data.
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$src < train.$src > train.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$src < test.$src > test.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.40000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt
# Create directory, move everything we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! cp bpe.codes.40000 $data_path
! ls $data_path
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.40000 "$gdrive_path"
! ls "$gdrive_path"
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path joeynmt/data/$src$tgt/vocab.txt
# Some output
! echo "BPE isiNdebele Sentences"
! tail -n 5 test.bpe.$tgt
! echo "Combined BPE Vocab"
! tail -n 10 joeynmt/data/$src$tgt/vocab.txt # Herman
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.40000 "$gdrive_path"
! ls "$gdrive_path"
#THIS WAS NEVER SAVED TO DRIVE.
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py "/content/drive/My Drive/masakhane/en-nr-baseline/train.bpe.$src" "/content/drive/My Drive/masakhane/en-nr-baseline/train.bpe.$tgt" --output_path "/content/drive/My Drive/masakhane/en-nr-baseline/vocab.txt"
```
# Creating the JoeyNMT Config
JoeyNMT requires a yaml config. We provide a template below. We've also set a number of defaults with it, that you may play with!
- We used Transformer architecture
- We set our dropout to reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))
Things worth playing with:
- The batch size (also recommended to change for low-resourced languages)
- The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes)
- The decoder options (beam_size, alpha)
- Evaluation metrics (BLEU versus chrF)
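To get a feel for the `alpha` decoding option: with length normalization, a hypothesis' total log-probability is divided by a length penalty, so `alpha: 0` disables normalization and larger values favor longer outputs. Below is a sketch using the GNMT-style penalty; JoeyNMT's exact formula may differ.

```python
def length_penalty(length, alpha):
    # GNMT-style length penalty: ((5 + |Y|) / 6) ** alpha
    return ((5 + length) / 6) ** alpha

log_prob = -6.0  # total log-probability of a 10-token hypothesis
for alpha in (0.0, 0.6, 1.0):
    print(alpha, round(log_prob / length_penalty(10, alpha), 3))
```

With `alpha=0` the score stays at -6.0; as `alpha` grows, long hypotheses are penalized less, which is why `alpha` is worth sweeping alongside `beam_size`.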
```
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"
data:
src: "{source_language}"
trg: "{target_language}"
train: "data/{name}/train.bpe"
dev: "data/{name}/dev.bpe"
test: "data/{name}/test.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "data/{name}/vocab.txt"
trg_vocab: "data/{name}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
#load_model: "{gdrive_path}/models/{name}_transformer/1.ckpt" # if uncommented, load a pre-trained model from this checkpoint
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "plateau" # TODO: try switching from plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0003
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 4096
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 30 # TODO: Decrease for when playing around and checking of working. Around 30 is sufficient to check if its working at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
model_dir: "models/{name}_transformer"
overwrite: False # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_{name}.yaml".format(name=name),'w') as f:
f.write(config)
```
# Train the Model
This single line of joeynmt runs the training using the config we made above
```
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml
# Copy the created models from the notebook storage to google drive for persistant storage
!cp -r joeynmt/models/${src}${tgt}_transformer/* "$gdrive_path/models/${src}${tgt}_transformer/"
!cp /content/joeynmt/models/ennr_transformer/best.ckpt "/content/drive/My Drive/masakhane/en-nr-baseline/models/ennr_transformer/"
# Output our validation accuracy
! cat "$gdrive_path/models/${src}${tgt}_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer/config.yaml"
```
```
#smo is a very complex algorithm
#please make sure you have a solid understanding of svm before continuing
#for the basics of svm, please refer to the following link
# https://github.com/je-suis-tm/machine-learning/blob/master/binary%20support%20vector%20machine.ipynb
#a reference for python codes of smo
# https://jonchar.net/notebooks/SVM/
#a reference for original paper of smo
# https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
#a reference for math derivation of smo
# http://jupiter.math.nctu.edu.tw/~yuhjye/assets/file/teaching/2017_machine_learning/SMO%20algorithm.pdf
#as most details can be found in the pseudo code of john platt's paper
#i will not give too many comments
import random as rd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
import os
os.chdir('d:/')
#this part is purely math derivation
def get_alpha2(alpha2_old,y2,e2,e1,eta):
alpha2_new=alpha2_old+y2*(e2-e1)/eta
return alpha2_new
#in john platts paper
#the assumption is y=wx-b
#but my code is based on y=wx+b
#thats why we need to change the sign of eta
def get_eta(k11,k22,k12):
eta=k11+k22-2*k12
return -eta
def get_alpha1(alpha1_old,y1,y2,alpha2_old,alpha2_new):
alpha1_new=alpha1_old+y1*y2*(alpha2_old-alpha2_new)
return alpha1_new
def get_b(e1,e2,y1,alpha1_new,alpha1, \
k11,k12,k22,y2,alpha2_new_clipped,alpha2,b):
b1=e1+y1*(alpha1_new-alpha1)*k11+ \
y2*(alpha2_new_clipped-alpha2)*k12+b
b2=e2+y1*(alpha1_new-alpha1)*k12+ \
y2*(alpha2_new_clipped-alpha2)*k22+b
return b1,b2
def clip_alpha2(alpha2_new,l,h):
if alpha2_new>h:
alpha2_new_clipped=h
elif alpha2_new<l:
alpha2_new_clipped=l
else:
alpha2_new_clipped=alpha2_new
return alpha2_new_clipped
#still 3 types of kernels, linear,polynomial,gaussian
def kernel(x1,x2,ktype='linear', \
poly_constant=0,poly_power=1,gamma=0.5):
if ktype=='linear':
x_product=(np.mat(x1)*np.mat(x2).T).tolist()[0][0]
elif ktype=='polynomial':
temp=np.mat(x1)*np.mat(x2).T
x_product=((temp+poly_constant)**poly_power).tolist()[0][0]
else:
temp=np.mat(x1)-np.mat(x2)
x_product=np.exp(-1*gamma*(np.linalg.norm(temp))**2)
return x_product
#the main routine in the original paper
def train(x_train,y_train,c=10,**kwargs):
numchanged=0
examineall=True
alphas=list(np.zeros(len(x_train)))
b=0
#this is crucial
#without initial value
#everything will stay at 0
errors=list(np.multiply(y_train,-1))
while numchanged>0 or examineall:
numchanged=0
if examineall:
for i in range(len(x_train)):
output,alphas,b=examine_example(i, \
x_train, \
y_train, \
b, \
alphas,errors, \
constant=c, \
**kwargs)
numchanged+=output
else:
for i in range(len(x_train)):
if alphas[i]!=0 and alphas[i]!=c:
output,alphas,b=examine_example(i, \
x_train, \
y_train, \
b, \
alphas,errors, \
constant=c, \
**kwargs)
numchanged+=output
if examineall==True:
examineall=False
else:
if numchanged==0:
examineall=True
return alphas,b
#selection criteria on alpha1 and alpha2
def examine_example(i,x_train,y_train,b, \
alphas,errors,constant, \
tol=0.005, \
**kwargs):
y2=y_train[i]
alpha2=alphas[i]
e2=errors[i]
support_vector=[]
output=0
r2=e2*y2
for idn in range(len(alphas)):
if alphas[idn]!=0 and alphas[idn]!=constant:
support_vector.append(idn)
if (r2<-tol and alpha2<constant) or \
(r2>tol and alpha2>0):
if len(support_vector) > 1:
if e2>0:
crap,j=second_heuristic(i,alphas,y_train, \
x_train,b,errors)
if e2<0:
j,crap=second_heuristic(i,alphas,y_train, \
x_train,b,errors)
result,b,alphas,errors=takestep(j,i, \
alphas,y_train, \
x_train,b, \
constant,errors, \
**kwargs)
if result:
output=1
return output,alphas,b
temp=rd.sample(support_vector,len(support_vector))
for l in temp:
result,b,alphas,errors=takestep(l,i, \
alphas,y_train, \
x_train,b, \
constant,errors, \
**kwargs)
if result:
output=1
return output,alphas,b
temp=rd.sample(list(np.arange(len(alphas))),len(alphas))
for m in temp:
result,b,alphas,errors=takestep(m,i, \
alphas,y_train, \
x_train,b, \
constant,errors, \
**kwargs)
if result:
output=1
return output,alphas,b
return output,alphas,b
def second_heuristic(i,alphas,y_train, \
x_train,b,errors):
e_list=[errors[k] for k in range(len(x_train)) if k!=i]
maxval=e_list.index(max(e_list))
minval=e_list.index(min(e_list))
return maxval,minval
#the core part of updating alpha1,alpha2
def takestep(idn1,idn2,alphas,y_train, \
x_train,b,constant,errors, \
eps=0.001,**kwargs):
result=False
if idn1==idn2:
return result,b,alphas,errors
alpha1=alphas[idn1]
x1=x_train[idn1]
y1=y_train[idn1]
e1=errors[idn1]
alpha2=alphas[idn2]
x2=x_train[idn2]
y2=y_train[idn2]
e2=errors[idn2]
s=y1*y2
if y1!=y2:
l=max(0,alpha2-alpha1)
h=min(constant,constant-alpha1+alpha2)
else:
l=max(0,alpha1+alpha2-constant)
h=min(constant,alpha2+alpha1)
if l==h:
return result,b,alphas,errors
k11=kernel(x1,x1,**kwargs)
k12=kernel(x1,x2,**kwargs)
k22=kernel(x2,x2,**kwargs)
eta=get_eta(k11,k22,k12)
if eta<0:
alpha2_new=get_alpha2(alpha2,y2,e2,e1,eta)
alpha2_new_clipped=clip_alpha2(alpha2_new,l,h)
else:
f1=y1*(e1+b)-alpha1*kernel(x1,x1,**kwargs)- \
s*alpha2*kernel(x1,x2,**kwargs)
f2=y2*(e2+b)-s*alpha1*kernel(x1,x2,**kwargs)- \
alpha2*kernel(x2,x2,**kwargs)
l1=alpha1+s*(alpha2-l)
h1=alpha1+s*(alpha2-h)
lobj=l1*f1+l*f2+0.5*l1*l1*kernel(x1,x1,**kwargs)+ \
0.5*l*l*kernel(x2,x2,**kwargs)+ \
s*l*l1*kernel(x1,x2,**kwargs)
hobj=h1*f1+h*f2+0.5*h1*h1*kernel(x1,x1,**kwargs)+ \
0.5*h*h*kernel(x2,x2,**kwargs)+ \
s*h*h1*kernel(x1,x2,**kwargs)
if lobj<hobj-eps:
alpha2_new_clipped=l
elif lobj>hobj+eps:
alpha2_new_clipped=h
else:
alpha2_new_clipped=alpha2
if np.abs(alpha2_new_clipped-alpha2)<eps*(alpha2_new_clipped+ \
alpha2+eps):
return result,b,alphas,errors
alpha1_new=get_alpha1(alpha1,y1,y2,alpha2,alpha2_new_clipped)
b1,b2=get_b(e1,e2,y1,alpha1_new,alpha1, \
k11,k12,k22,y2,alpha2_new_clipped, \
alpha2,b)
if alpha1_new>0 and alpha1_new<constant:
b_new=b1
elif alpha2_new_clipped>0 and alpha2_new_clipped<constant:
b_new=b2
else:
b_new=(b1+b2)/2
alphas[idn1]=alpha1_new
alphas[idn2]=alpha2_new_clipped
for i in range(len(alphas)):
if i!=idn1 and i!=idn2:
errors[i]+=y1*(alpha1_new-alpha1)* \
kernel(x1,x_train[i],**kwargs)+ \
y2*(alpha2_new_clipped-alpha2)* \
kernel(x2,x_train[i],**kwargs)+ \
(b-b_new)
b=b_new
result=True
return result,b,alphas,errors
#using smo to forecast
def smo(x_train,x_test,y_train,y_test,constant=10,**kwargs):
alphas,b=train(x_train,y_train,c=constant,**kwargs)
forecast=[]
for j in x_test:
temp=0
for i in range(len(x_train)):
temp+=alphas[i]*y_train[i]*kernel(x_train[i],j,**kwargs)
#decision function follows platt's convention u(x)=sum(alpha_i*y_i*k(x_i,x))-b
forecast.append(temp-b)
forecast=list(np.sign(forecast))
return forecast
#calculate train and test accuracy
def self_smo(x_train,x_test,y_train,y_test,constant=10,**kwargs):
forecast=smo(x_train,x_train,y_train,y_train,constant=constant,**kwargs)
percentage=len(pd.Series(y_train)[pd.Series(forecast)==pd.Series(y_train)])/len(y_train)
print('\ntrain accuracy: %.2f'%(percentage*100)+'%')
forecast=smo(x_train,x_test,y_train,y_test,constant=constant,**kwargs)
percentage=len(pd.Series(y_test)[pd.Series(forecast)==pd.Series(y_test)])/len(y_test)
print('\ntest accuracy: %.2f'%(percentage*100)+'%')
#official package of machine learning
def sklearn_svm(x_train,x_test,y_train,y_test,constant=10,**kwargs):
m=SVC(kernel='linear', C=constant).fit(x_train,y_train)
percentage=m.score(x_train,y_train)
print('\ntrain accuracy: %.2f'%(percentage*100)+'%')
percentage=m.score(x_test,y_test)
print('\ntest accuracy: %.2f'%(percentage*100)+'%')
df=pd.read_csv('iris.csv')
#as usual, we use binary classification to simplify things
#multiclass svm can be found in the following link
# https://github.com/je-suis-tm/machine-learning/blob/master/multiclass%20support%20vector%20machine.ipynb
df['y']=np.select([df['type']=='Iris-setosa', \
df['type']=='Iris-versicolor', \
df['type']=='Iris-virginica'],[-1.0,0.0,1.0])
df=df[df['y']!=0.0]
#two dimensional data for the sake of easy visualization
x1=PCA(n_components=1).fit_transform(pd.concat([df[i] for i in df.columns if 'sepal' in i],axis=1))
x2=PCA(n_components=1).fit_transform(pd.concat([df[i] for i in df.columns if 'petal' in i],axis=1))
y=df['y'].tolist()
#crucial!
#always reset index after loc or shuffle
df.reset_index(drop=True,inplace=True)
#wrap x into matrix form
x=[[x1[i].item(),x2[i].item()] for i in range(len(df))]
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)
sklearn_svm(x_train,x_test,y_train,y_test,constant=10)
self_smo(x_train,x_test,y_train,y_test,constant=10)
%timeit m=SVC(kernel='linear', C=100).fit(x_train,y_train)
%timeit alphas,b=train(x_train,y_train,c=100)
```
<h4>Apparently, a self-implemented SMO is not as fast as sklearn. After all, sklearn runs on Cython</h4>
<br>
```
#helper: fit an SVC with the given kernel and plot its decision regions
def plot_decision_boundary(kernel):
    m=SVC(kernel=kernel, C=10).fit(x_train,y_train)
    x_mat=np.mat(x)
    xx=np.arange(x_mat[:,0].min()-1,
                 x_mat[:,0].max()+1,
                 (x_mat[:,0].max()-x_mat[:,0].min())/200)
    yy=np.arange(x_mat[:,1].min()-1,
                 x_mat[:,1].max()+1,
                 (x_mat[:,1].max()-x_mat[:,1].min())/200)
    X,Y=np.meshgrid(xx,yy)
    forecast=m.predict(np.c_[X.ravel(),Y.ravel()])
    fig=plt.figure(figsize=(10,6))
    ax=fig.add_subplot(111)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    plt.pcolormesh(X,Y,forecast.reshape(X.shape),cmap='pink')
    for i in range(len(x)):
        plt.scatter(x[i][0],x[i][1],c='r' if y[i]==1 else 'y')
    plt.show()
#visualization of linear kernel
plot_decision_boundary('linear')
#visualization of gaussian kernel
plot_decision_boundary('rbf')
#visualization of polynomial kernel
plot_decision_boundary('poly')
```
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## <font color="darkblue"> Updates to Assignment</font>
This is version 3a of the notebook.
#### If you were working on a previous version
* If you were already working on version "3", you'll find your original work in the file directory.
* To reach the file directory, click on the "Coursera" icon in the top left of this notebook.
* Please still use the most recent notebook to submit your assignment.
#### List of Updates
* softmax section has a comment to clarify the use of "m" later in the course
* softmax function specifies (m,n) matrix dimensions to match the notation in the preceding diagram (instead of n,m)
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300px;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1], image.shape[2]))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x,axis=1,keepdims=True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
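The shape mismatch described above can be checked directly. A minimal sketch, using the same example matrix as in equation (3):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [1., 6., 4.]])

# x_norm holds one row-norm per row, so its shape is (2, 1)
x_norm = np.linalg.norm(x, axis=1, keepdims=True)
print(x.shape)       # (2, 3)
print(x_norm.shape)  # (2, 1)

# Broadcasting stretches the (2, 1) column across the 3 columns of x,
# so each row of x gets divided by its own norm.
normalized = x / x_norm
print(np.linalg.norm(normalized, axis=1))  # every row now has unit length
```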
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
#### Note
Note that later in the course, you'll see "m" used to represent the "number of training examples", and each training example is in its own column of the matrix.
Also, each feature will be in its own row (each row has data for the same feature).
Softmax should be performed for all features of each training example, so softmax would be performed on the columns (once we switch to that representation later in this course).
However, in this coding practice, we're just focusing on getting familiar with Python, so we're using the common math notation $m \times n$
where $m$ is the number of rows and $n$ is the number of columns.
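As a preview of that column-wise convention, only the `axis` argument changes when softmax is taken over columns instead of rows. This sketch is just an illustration of the idea and is not part of the graded exercise:

```python
import numpy as np

def softmax_columns(x):
    """Softmax over each column of x: every column sums to 1."""
    x_exp = np.exp(x)
    # axis=0 sums down each column instead of across each row
    x_sum = np.sum(x_exp, axis=0, keepdims=True)  # shape (1, n)
    return x_exp / x_sum

x = np.array([[9., 2., 5.],
              [7., 5., 0.]])
s = softmax_columns(x)
print(s.sum(axis=0))  # each column sums to 1.0
```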
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (m,n).
Argument:
x -- A numpy matrix of shape (m,n)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (m,n)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp,axis=1,keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
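The difference between the two products is easy to see on a small example:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[10, 20],
              [30, 40]])

# Element-wise: multiplies matching entries (same as np.multiply(a, b))
print(a * b)
# [[ 10  40]
#  [ 90 160]]

# Matrix product: rows of a dotted with columns of b
print(np.dot(a, b))
# [[ 70 100]
#  [150 220]]
```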
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(yhat-y),axis=0)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(yhat-y, yhat-y)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
## *DISCLAIMER*
<p style="font-size:16px; color:#117d30;">
By accessing this code, you acknowledge the code is made available for presentation and demonstration purposes only and that the code: (1) is not subject to SOC 1 and SOC 2 compliance audits; (2) is not designed or intended to be a substitute for the professional advice, diagnosis, treatment, or judgment of a certified financial services professional; (3) is not designed, intended or made available as a medical device; and (4) is not designed or intended to be a substitute for professional medical advice, diagnosis, treatment or judgement. Do not use this code to replace, substitute, or provide professional financial advice or judgment, or to replace, substitute or provide medical advice, diagnosis, treatment or judgement. You are solely responsible for ensuring the regulatory, legal, and/or contractual compliance of any use of the code, including obtaining any authorizations or consents, and any solution you choose to build that incorporates this code in whole or in part.
</p>
<p style="font-size:25px; color:black;"><u><i><b>Predicting the number of customers likely to visit different departments in a store</b></i></u></p>
<p style="font-size:16px; color:#117d30;">
Time series forecasting is the use of a model to predict future values based on previously observed values.
In this case, the AutoML feature of Azure Synapse uses more than 25 time-series forecasting machine learning algorithms to predict how many customers are likely to visit different departments in a store.
</p>
<p style="font-size:15px; color:#117d30;">
Note:
</p>
<p style="font-size:15px; color:#117d30;">
This notebook is written in Scala, and there is interoperability between Scala and Python code.
</p>
<p style="font-size:15px; color:#117d30;">
<u> Abstract: </u>
</p>
<p style="font-size:16px; color:#117d30;">
1) Ingest data from Azure Synapse Data Storage account using PySpark.
</p>
<p style="font-size:16px; color:#117d30;">
2) Exploratory Data Analysis
</p>
<p style="font-size:15px; color:#117d30;">
3) Training more than 25 time series forecasting machine learning algorithms.
</p>
<p style="font-size:15px; color:#117d30;">
4) Predict the number of customers likely to visit different departments in a store by choosing the best-performing machine learning algorithm.
</p>
## Introduction
### In this notebook we showcase how to:
<p style="font-size:16px; color:#117d30;">1. Create an experiment using an existing workspace</p>
<p style="font-size:16px; color:#117d30;">2. Configure AutoML using 'AutoMLConfig'</p>
<p style="font-size:16px; color:#117d30;">3. Train the model</p>
<p style="font-size:16px; color:#117d30;">4. Explore the engineered features and results</p>
<p style="font-size:16px; color:#117d30;">5. Configure and remotely run AutoML for a time-series model with lag and rolling-window features</p>
<p style="font-size:16px; color:#117d30;">6. Run and explore the forecast</p>
<p style="font-size:16px; color:#117d30;">7. Register the model</p>
### Importing required libraries such as azureml, pandas, pyspark, matplotlib, and other supporting libraries.
```
%%pyspark
from azureml.train.automl import AutoMLConfig
# from azureml.widgets import RunDetails
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl.run import AutoMLRun
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import math
from pyspark.sql.window import Window
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
from azureml.core.model import Model
from azureml.core.webservice import Webservice
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import azureml.train.automl.runtime
import logging
import os, tempfile
import pandas as pd
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark import SparkContext
os.environ['AZURE_SERVICE']="Microsoft.ProjectArcadia"
```
## *Connecting to Azure Synapse Data Warehouse*
<p style="font-size:16px; color:#117d30;">
Connection to Azure Synapse Data Warehouse is initiated and the required data is ingested for processing.
The warehouse is connected with a single line of code: open the Actions menu on a table, choose "New notebook", and then "Load to DataFrame". </p>
<p style="font-size:16px; color:#117d30;"> After providing the necessary details, the required data is loaded in the form of a Spark dataframe.
One magical line of code converts a dataframe from Scala to Python!
</p>
```
%%pyspark
department_visits = spark.read.load('abfss://machine-learning@#DATA_LAKE_NAME#.dfs.core.windows.net/department-visits.csv'
,format='csv'
,header=True)
department_visits.show(10)
```
# Exploratory Data Analysis
<p style="font-size:16px; color:#117d30;">
The goal of performing exploratory data analysis is to understand the underlying patterns and correlations among features in the data.
```
%%pyspark
department_visit_data = department_visits.select("*").toPandas()
department_visit_data['Date'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%m/%d/%y')
department_visit_data['Month'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%m')
department_visit_data['DayOfMonth'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%d')
department_visit_data['Year'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%y')
department_visit_data['DayOfWeek'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%a')
department_visit_data[['Accessories_count','Entertainment_count','Gaming','Kids','Mens','Phone_and_GPS','Womens']] = \
department_visit_data[['Accessories_count','Entertainment_count','Gaming','Kids','Mens','Phone_and_GPS','Womens']].apply(pd.to_numeric)
#display(department_visit_data)
```
## Deriving insights from customer visits data
<p style="font-size:16px; color:#117d30;">
1. Heat map: the intensity of the color indicates the number of customers visiting a department on that particular day, giving a quick picture of how traffic is distributed across days and departments. From the graph we can infer that more customers visit the Entertainment department on Wednesdays, Thursdays and Fridays, while the Phone_and_GPS department sees less foot traffic on Mondays and Fridays.
```
df_dow = department_visit_data.groupby('DayOfWeek').sum().sort_values(by = 'DayOfWeek',
ascending=True)
df_dow.head(100)
sns.set()
plt.rcParams['font.size'] = 12
bg_color = (0.88,0.85,0.95)
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
fig, ax = plt.subplots(1)
cmap = sns.diverging_palette(10, 150, n=2, as_cmap=True)
#cmap = sns.color_palette("hls", 150)
p = sns.heatmap(df_dow,
cmap=cmap,
annot=True,
fmt="d",
annot_kws={'size':8},
ax=ax)
plt.xlabel('Category')
plt.ylabel('Day Of Week')
ax.set_ylim((0,7))
plt.text(5,7.4, "Heat Map", fontsize = 25, color='Black', fontstyle='italic')
plt.show()
```
# Data Manipulation
<p style="font-size:16px; color:#117d30;">
1. Converting date to a specific format and making date fields relevant for prediction.
<p style="font-size:16px; color:#117d30;">
2. Converting the data type of the columns to numeric before being passed as input to the model.
```
%%pyspark
department_visit_data = department_visits.select("*").toPandas()
department_visit_data['Date'] = pd.to_datetime(department_visit_data['Date']).dt.strftime('%Y-%m-%d')
department_visit_data[['Accessories_count','Entertainment_count','Gaming','Kids','Mens','Phone_and_GPS','Womens']] = \
department_visit_data[['Accessories_count','Entertainment_count','Gaming','Kids','Mens','Phone_and_GPS','Womens']].apply(pd.to_numeric)
grouped_data = department_visit_data.groupby('Date', as_index=False).sum()
total_rows = len(grouped_data)
print(total_rows)
%%pyspark
# .copy() gives each department its own frame and avoids SettingWithCopyWarning below
accessories_data = grouped_data[['Date','Accessories_count']].copy()
entertainment_data = grouped_data[['Date','Entertainment_count']].copy()
gaming_data = grouped_data[['Date','Gaming']].copy()
kids_data = grouped_data[['Date','Kids']].copy()
mens_data = grouped_data[['Date','Mens']].copy()
phone_and_GPS_data = grouped_data[['Date','Phone_and_GPS']].copy()
womens_data = grouped_data[['Date','Womens']].copy()
accessories_data['Department']='Accessories'
entertainment_data['Department']='Entertainment'
gaming_data['Department']='Gaming'
kids_data['Department']='Kids'
mens_data['Department']='Mens'
phone_and_GPS_data['Department']='Phone_and_GPS'
womens_data['Department']='Womens'
accessories_data= accessories_data.rename(columns={'Accessories_count': 'Visits'})
entertainment_data= entertainment_data.rename(columns={ 'Entertainment_count':'Visits'})
gaming_data = gaming_data.rename(columns={'Gaming': 'Visits'})
kids_data = kids_data.rename(columns={'Kids': 'Visits'})
mens_data = mens_data.rename(columns={'Mens': 'Visits'})
phone_and_GPS_data = phone_and_GPS_data.rename(columns={'Phone_and_GPS': 'Visits'})
womens_data = womens_data.rename(columns={'Womens': 'Visits'})
# DataFrame.append is deprecated; pd.concat combines the frames in one call
df_customervisits = pd.concat([accessories_data, entertainment_data, gaming_data, kids_data, mens_data, phone_and_GPS_data, womens_data], ignore_index=True)
df_customervisits
%%pyspark
total_rows = len(df_customervisits)
print(total_rows)
```
## Train
<p style="font-size:16px; color:#117d30;">
1. Instantiate an AutoMLConfig object.
</p>
<p style="font-size:16px; color:#117d30;">
2. The configuration below defines the settings and data used to run the experiment.
</p>
## Set AutoML Configuration Parameters
<p style="font-size:16px; color:#117d30;">
The forecast horizon is the number of periods into the future that the model should predict.
</p>
<p style="font-size:16px; color:#117d30;">
It is generally recommended that forecast horizons be kept under 100 time periods.
</p>
<p style="font-size:16px; color:#117d30;">
Furthermore, AutoML's memory use and computation time increase in proportion to the length of the horizon, so consider carefully how this value is set.
</p>
<p style="font-size:16px; color:#117d30;">
If a long-horizon forecast really is necessary, consider aggregating the series to a coarser time scale.
</p>
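Aggregating to a coarser time scale is a one-liner with pandas `resample`; a small sketch with made-up daily counts (the column names only mirror this notebook, they are not its actual data):

```python
import pandas as pd

# Hypothetical daily visit counts for a single department
daily = pd.DataFrame({
    'Date': pd.date_range('2021-01-01', periods=28, freq='D'),
    'Visits': range(28),
})

# Aggregate to weekly totals: a handful of weekly periods now
# covers the same span as ~a month of daily periods
weekly = daily.resample('W', on='Date')['Visits'].sum().reset_index()
```

After aggregation, a much shorter `max_horizon` covers the same calendar span, which keeps AutoML's memory and compute cost down.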
```
%%pyspark
automl_settings = {
'time_column_name':'Date',
'grain_column_names': ['Department'],
'max_horizon': 25
}
%%pyspark
automl_config = AutoMLConfig(
#forecasting for time-series tasks
task='forecasting',
# Metric used to evaluate the performance of the models
primary_metric='normalized_root_mean_squared_error',
# Maximum amount of time in minutes that the experiment can take before it terminates
experiment_timeout_minutes=15,
enable_early_stopping=True,
training_data=df_customervisits,#train_data,
label_column_name='Visits',
#Rolling Origin Validation is used to split time-series in a temporally consistent way.
n_cross_validations=4,
# Disable model ensembling for this run
enable_ensembling=False,
verbosity=logging.INFO,
**automl_settings)
%%pyspark
subscription_id='#SUBSCRIPTION_ID#'
resource_group='#RESOURCE_GROUP_NAME#'
workspace_name='#AML_WORKSPACE_NAME#'
ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
ws.write_config()
ws = Workspace.from_config()
experiment = Experiment(ws, "Department_Visit_Count_V3")
```
## Run The Experiment
<p style="font-size:16px; color:#117d30;">
Automated ML runs more than 25 machine learning algorithms and grades them according to performance.
</p>
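Conceptually, the grading AutoML performs amounts to sorting the candidate runs by the chosen primary metric; a sketch with entirely made-up run names and scores:

```python
# Hypothetical (model, normalized RMSE) pairs reported by an AutoML sweep
runs = [
    ('ElasticNet', 0.112),
    ('VotingEnsemble', 0.071),
    ('LightGBM', 0.084),
    ('ArimaPipeline', 0.139),
]

# Lower normalized RMSE is better, so the best run minimizes the metric
best_model, best_score = min(runs, key=lambda r: r[1])
```

AutoML then exposes the winner through `get_output()`, as the cell below does.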
```
%%pyspark
model_name = "Department-Visits-Forecast-Model"
description = "Forecasting Machine Learning model to predict department visits"
tags = {'WWI': 'DepartmentVisitsPredictions'}
matching_models = Model.list(ws, model_name, latest=True)
if len(matching_models) > 0:
run = AutoMLRun(experiment, matching_models[0].run_id)
automl_run = AutoMLRun(experiment, run.parent.id)
best_run, fitted_model = automl_run.get_output()
else:
local_run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = local_run.get_output()
model = local_run.register_model(model_name=model_name, description = description, tags = tags)
fitted_model
%%pyspark
#future_date = pd.date_range(start='2015-12-1', end='2015-12-5')
#future_data = pd.DataFrame({'Date':future_date, 'Department':'Accessories', 'Visits':0})
future_date = pd.date_range(start='2021-01-01', end='2021-01-20').values
departments = ['Womens', 'Mens', 'Entertainment', 'Gaming', 'Kids', 'Accessories', 'Phone_and_GPS']
# DataFrame.append is deprecated; build one frame per department and concatenate
future_data = pd.concat([pd.DataFrame({'Date': future_date, 'Department': d}) for d in departments], ignore_index=True)
future_data
```
## Making future prediction using model that performs best
```
y_predictions, X_trans = fitted_model.forecast(future_data)
X_trans.reset_index(inplace=True)
X_trans
%%pyspark
table = pd.pivot_table(X_trans, values=['_automl_target_col'], index=['Date'], columns=['Department'])
#table['Date'] = table.index
table.columns = table.columns.droplevel()
table.reset_index(inplace=True)
table = table.astype({'Accessories': 'int32','Entertainment': 'int32','Gaming': 'int32','Kids': 'int32','Mens': 'int32','Phone_and_GPS': 'int32','Womens': 'int32'}, copy=False)
table.head(100)
%%pyspark
predict_df = spark.createDataFrame(table, ['Date', 'Accessories', 'Entertainment', 'Gaming', 'Kids', 'Mens', 'Phone_and_GPS', 'Womens'])
%%pyspark
predict_df \
.repartition(1) \
.write.format('csv') \
.option("header", "true") \
.mode("overwrite") \
.save('abfss://machine-learning@#DATA_LAKE_NAME#.dfs.core.windows.net/department-visit-predictions.csv')
```
# Naive Bayes with differential privacy
We start by importing the required libraries and modules and collecting the data that we need from the [Adult dataset](https://archive.ics.uci.edu/ml/datasets/adult).
```
import diffprivlib.models as dp
import numpy as np
from sklearn.naive_bayes import GaussianNB
X_train = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=(0, 4, 10, 11, 12), delimiter=", ")
y_train = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=14, dtype=str, delimiter=", ")
```
Let's also collect the test data from Adult to test our models once they're trained.
```
X_test = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
usecols=(0, 4, 10, 11, 12), delimiter=", ", skiprows=1)
y_test = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
usecols=14, dtype=str, delimiter=", ", skiprows=1)
# Must trim trailing period "." from label
y_test = np.array([a[:-1] for a in y_test])
```
## Naive Bayes with no privacy
To begin, let's first train a regular (non-private) naive Bayes classifier, and test its accuracy.
```
nonprivate_clf = GaussianNB()
nonprivate_clf.fit(X_train, y_train)
print("Non-private test accuracy: %.2f%%" %
(nonprivate_clf.score(X_test, y_test) * 100))
```
## Differentially private naive Bayes classification
Using the `models.GaussianNB` module of diffprivlib, we can train a naive Bayes classifier while satisfying differential privacy.
If we don't specify any parameters, the model defaults to `epsilon = 1` and selects the model's feature bounds from the data. This throws a warning when `.fit()` is first called, as it leaks additional privacy. To ensure no additional privacy loss, we should specify the bounds as an argument and choose them independently of the data (i.e., using domain knowledge).
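One way to honour data-independent bounds is to clip the features to a range chosen from domain knowledge before fitting; a rough numpy sketch (the bound values here are illustrative assumptions, not the Adult dataset's true ranges):

```python
import numpy as np

# Domain-knowledge bounds for two hypothetical features: age and hours/week
lower = np.array([17, 1])
upper = np.array([100, 100])

X = np.array([[150.0, 40.0],   # an out-of-range age
              [25.0, 120.0]])  # an out-of-range hours value

# Clipping guarantees every value lies inside the declared bounds,
# so the bounds themselves reveal nothing about the data
X_clipped = np.clip(X, lower, upper)
```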
```
dp_clf = dp.GaussianNB()
```
If you re-evaluate this cell, the test accuracy will change. This is due to the randomness introduced by differential privacy. Nevertheless, the accuracy should be in the range of 87–93%.
```
dp_clf.fit(X_train, y_train)
print("Differentially private test accuracy (epsilon=%.2f): %.2f%%" %
(dp_clf.epsilon, dp_clf.score(X_test, y_test) * 100))
```
By setting `epsilon=float("inf")` we get an identical model to the non-private naive Bayes classifier.
```
dp_clf = dp.GaussianNB(epsilon=float("inf"), bounds=(-1e5, 1e5))
dp_clf.fit(X_train, y_train)
print("Agreement between non-private and differentially private (epsilon=inf) classifiers: %.2f%%" %
(dp_clf.score(X_test, nonprivate_clf.predict(X_test)) * 100))
```
## Changing `epsilon`
On this occasion, we're going to specify the `bounds` parameter as a tuple of two lists, giving the lower and upper bounds within which we expect each feature to lie.
```
bounds = ([17, 1, 0, 0, 1], [100, 16, 100000, 4500, 100])
```
We will also specify a value for `epsilon`. High `epsilon` (i.e. greater than 1) gives better and more consistent accuracy, but less privacy. Small `epsilon` (i.e. less than 1) gives better privacy but worse and less consistent accuracy.
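The trade-off follows from the mechanism's noise scale: for the Laplace mechanism the scale is sensitivity/epsilon, so shrinking epsilon tenfold inflates the noise tenfold. A small illustrative sketch (not diffprivlib's internal code):

```python
def laplace_scale(sensitivity, epsilon):
    # Laplace mechanism adds noise drawn from Laplace(0, sensitivity / epsilon)
    return sensitivity / epsilon

# Smaller epsilon means a larger noise scale: better privacy, noisier model
scale_eps_1 = laplace_scale(1.0, 1.0)    # epsilon = 1
scale_eps_01 = laplace_scale(1.0, 0.1)   # epsilon = 0.1
```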
```
dp_clf2 = dp.GaussianNB(epsilon=0.1, bounds=bounds)
dp_clf2.fit(X_train, y_train)
print("Differentially private test accuracy (epsilon=%.2f): %.2f%%" %
(dp_clf2.epsilon, dp_clf2.score(X_test, y_test) * 100))
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import torch
from torch import nn as nn
from math import factorial
import random
import torch.nn.functional as F
import numpy as np
import seaborn as sn
import pandas as pd
import os
from os.path import join
import glob
from math import factorial
ttype = torch.cuda.DoubleTensor if torch.cuda.is_available() else torch.DoubleTensor
print(ttype)
# DeepSITH (from the sith package) is used here
from sith import DeepSITH
from tqdm.notebook import tqdm
import pickle
sn.set_context("poster")
sig_lets = ["A","B","C","D","E","F","G","H",]
signals = ttype([[0,1,1,1,0,1,1,1,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,0,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0]]
).view(7, 1, 1, -1)
#signals = ms
key2id = {k:i for i, k in enumerate(sig_lets)}
print(key2id)
print(signals.shape)
signals.shape
perm = torch.arange(signals.shape[0]).type(torch.LongTensor)
perm
signals[perm].shape
def train_model(model,
signals,
optimizer,
loss_func,
train_dur=2.0,
test_durs=[1.5, 2.0, 2.5],
epochs=1500,
loss_buffer_size=50,
testing_every=30):
loss_track = {"loss":[],
"epoch":[],
"acc":[],
"perf":[]}
losses = []
progress_bar = tqdm(range(int(epochs)), ncols=800)
for e in progress_bar:
perm = torch.arange(signals.shape[0]).type(torch.LongTensor)
# Zero the gradient between each batch
model.zero_grad()
# Present an entire batch to the model
out = model(signals[perm])
out = out[:, -1]
print(out.shape)
print(perm.shape)
# Measure loss via CrossEntropyLoss
loss = loss_func(out,
perm)
# Adjust Weights
loss.backward()
optimizer.step()
losses.append(loss.detach().cpu().numpy())
if len(losses) > loss_buffer_size:
losses = losses[1:]
# Record loss, epoch number, batch number in epoch,
# last accuracy measure, etc
loss_track['loss'].append(np.mean(losses))
loss_track['epoch'].append(e)
# calculate model accuracy:
if ((e)%testing_every == 0) & (e != 0):
model.eval()
perf = test_model(model, signals)
model.train()
loss_track['perf'].append(perf)
if e > testing_every:
# Update progress_bar
s = "{}: Loss: {:.6f}, Acc:{:.4f}"
format_list = [e, loss_track['loss'][-1]] + [perf]
s = s.format(*format_list)
progress_bar.set_description(s)
if loss_track['perf'][-1] == 1.0:
break
return loss_track
def test_model(model, signals):
# Test the Model
out = model(signals)
perf = (torch.argmax(out[:, -1, :], dim=-1) == torch.arange(signals.shape[0])).sum().item() / signals.shape[0]
return perf
```
# Setup Classifier type model
```
class DeepSITH_Classifier(nn.Module):
def __init__(self, out_features, layer_params, dropout=.5):
super(DeepSITH_Classifier, self).__init__()
last_hidden = layer_params[-1]['hidden_size']
self.hs = DeepSITH(layer_params=layer_params, dropout=dropout)
self.to_out = nn.Linear(last_hidden, out_features)
def forward(self, inp):
x = self.hs(inp)
x = self.to_out(x)
return x
```
# TEST layers for correct taustars/parameters/cvalues
These dictionaries will not be used later.
```
sith_params2 = {"in_features":1,
"tau_min":1, "tau_max":20.0, 'buff_max':40,
"k":50,
"ntau":5, 'g':0,
"ttype":ttype,
"hidden_size":10, "act_func":nn.ReLU()}
sith_params3 = {"in_features":sith_params2['hidden_size'],
"tau_min":1, "tau_max":200.0, 'buff_max':240,
"k":50,
"ntau":5, 'g':0,
"ttype":ttype,
"hidden_size":20, "act_func":nn.ReLU()}
layer_params = [sith_params2, sith_params3]
model = DeepSITH_Classifier(out_features=signals.shape[0],
layer_params=layer_params, dropout=.0).double()
print(model)
for i, l in enumerate(model.hs.layers):
print("Layer {}".format(i), l.sith.tau_star)
tot_weights = 0
for p in model.parameters():
tot_weights += p.numel()
print("Total Weights:", tot_weights)
```
# Visualize the taustar buffers
They must all empty completely, or there will be edge effects.
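The concern is the usual convolution edge effect: if a filter's buffer is not fully flushed, the response near the sequence boundary mixes in values that were never presented. A small numpy sketch of the general idea (unrelated to SITH's actual filters):

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
kernel = np.array([0.5, 0.3, 0.2])

# 'full' keeps the transient at the edges; 'valid' keeps only positions
# where the kernel saw real data, avoiding edge effects entirely
full = np.convolve(signal, kernel, mode='full')
valid = np.convolve(signal, kernel, mode='valid')
```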
```
plt.plot(model.hs.layers[0].sith.filters[:, 0, 0, :].detach().cpu().T);
```
# Training and testing
```
# You likely don't need this to be this long, but just in case.
epochs = 10
# Just for visualizing average loss through time.
loss_buffer_size = 100
loss_func = torch.nn.CrossEntropyLoss()
```
# A note about the size of the samples
The size of the input signals is (7, 1, 1, 34), because the model takes in data of size (batch, MAGIC, features, sequence). The second "magic" dimension will always be 1 so that the output of the SITH layer can be calculated quickly.
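Given a plain batch of 1-D sequences, getting to that `(batch, 1, features, sequence)` layout is just an axis insertion; a numpy sketch of the shape manipulation (the notebook itself builds the tensor directly with `.view`):

```python
import numpy as np

batch, features, seq_len = 7, 1, 34
sequences = np.zeros((batch, features, seq_len))

# Insert the singleton "magic" dimension at axis 1
model_input = sequences[:, np.newaxis, :, :]
```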
```
sith_params2 = {"in_features":1,
"tau_min":.1, "tau_max":20.0, 'buff_max':40,
"k":50,
"ntau":10, 'g':0,
"ttype":ttype,
"hidden_size":10, "act_func":nn.ReLU()}
sith_params3 = {"in_features":sith_params2['hidden_size'],
"tau_min":.1, "tau_max":200.0, 'buff_max':240,
"k":50,
"ntau":10, 'g':0,
"ttype":ttype,
"hidden_size":20, "act_func":nn.ReLU()}
layer_params = [sith_params2, sith_params3]
model = DeepSITH_Classifier(out_features=signals.shape[0],
layer_params=layer_params,
dropout=0.).double()
optimizer = torch.optim.Adam(model.parameters())
signals.shape
out = model(signals)
print(out.shape)
plt.plot(out[:,:,-1].detach().cpu().T);
perf = train_model(model, signals, optimizer, loss_func,
epochs=epochs,
loss_buffer_size=loss_buffer_size)
signals[:,0,0,33]
perf
with open('filename.dill', 'wb') as handle:
pickle.dump(perf, handle, protocol=pickle.HIGHEST_PROTOCOL)
fig = plt.figure(figsize=(8,10))
ax = fig.add_subplot(2,1,1)
ax.plot(perf['loss'])
ax.set_ylabel("Loss")
#ax.set_xlabel("Presentation Number")
ax = fig.add_subplot(2,1,2)
dat = pd.DataFrame(perf['perf'])
ax.plot(np.arange(dat.shape[0])*30, dat)
ax.set_ylabel("Classification Acc")
ax.set_xlabel("Presentation Number")
plt.savefig("DeepSith_training_H8")
```
```
import numpy as np
import pandas as pd
from clean import manifest_clinical_merge
from collections import Counter
pd.set_option('display.max_columns', 100)
```
# Data Prep
```
manifest_df = pd.read_csv('../Manifest_Data/GCD_TARGET_Data_Manifest_AML_NBL_WT_RT.csv')
wt_disc_df = pd.read_excel('../Clinical_Data/TARGET_WT_ClinicalData_Discovery_20160714_public.xlsx')
aml_disc_df = pd.read_excel('../Clinical_Data/TARGET_AML_ClinicalData_20160714.xlsx')
nbl_disc_df = pd.read_excel('../Clinical_Data/TARGET_NBL_ClinicalData_20151124.xlsx')
WT_df = manifest_clinical_merge(manifest_df, wt_disc_df, 'TARGET-WT')
AML_df = manifest_clinical_merge(manifest_df, aml_disc_df, 'TARGET-AML')
NBL_df = manifest_clinical_merge(manifest_df, nbl_disc_df, 'TARGET-NBL')
from clean import assay_transpose
assay_df = pd.read_csv('../Expn_Data/TARGET_NBL_AML_RT_WT_TMMCPM_log2_Norm_Counts.csv.zip')
assay_t_df = assay_transpose(assay_df)
from clean import assay_clinical_merge
AML_genes = assay_clinical_merge(assay_t_df, AML_df)
WT_genes = assay_clinical_merge(assay_t_df, WT_df)
NBL_genes = assay_clinical_merge(assay_t_df, NBL_df)
from model_comp import data_prep_columns
# import xgboost
df, __ = data_prep_columns(AML_genes, 'neither')
print(df.info())
print('\n')
AML_genes.info()
pd.DataFrame.to_csv(df, './data/aml_df.csv')
```
- just read aml_df.csv in the future to save time
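The "read the CSV next time" idea can be wrapped in a small cache-or-build helper; a sketch, where the path and build function are placeholders rather than this notebook's actual pipeline:

```python
import os
import tempfile
import pandas as pd

def load_or_build(path, build):
    """Return the cached DataFrame at `path`; otherwise build and cache it."""
    if os.path.exists(path):
        return pd.read_csv(path, index_col=0)
    df = build()
    df.to_csv(path)
    return df

tmpdir = tempfile.mkdtemp()
cache_path = os.path.join(tmpdir, 'aml_df.csv')  # stand-in for './data/aml_df.csv'

first = load_or_build(cache_path, lambda: pd.DataFrame({'a': [1, 2, 3]}))
second = load_or_build(cache_path, lambda: None)  # build is skipped on a cache hit
```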
## train/test split selection
- testing - 44 samples from composite_code
- training - 93 samples from composite_code
- N = 137
```
testing = pd.read_csv('../composite_code/rnotebook/data/TARGET_AML_Testing_Samples.csv')
training = pd.read_csv('../composite_code/rnotebook/data/TARGET_AML_Training_Samples.csv')
df.head()
training_genes = list(training['x'])
testing_genes = list(testing['x'])
train_df = df[df['TARGET USI'].isin(training_genes)]
test_df = df[df['TARGET USI'].isin(testing_genes)]
print(train_df.info())
print('\n')
print(test_df.info())
```
### DF preparation
The data has been split; now prepare it for modeling on 21,404 genes.
```
model_columns = df.iloc[:, 84:-4].columns
model_columns
X_train = train_df[model_columns].astype(float)
y_train = train_df.iloc[:, -1] #will scramble these
X_test = test_df[model_columns].astype(float)
y_test = test_df.iloc[:, -1]
X = df[model_columns]
y = df.iloc[:, -1]
len(y_train)
from sklearn.metrics import log_loss, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
import random
from sklearn.model_selection import permutation_test_score
import datetime
xgb = XGBClassifier(learning_rate = 0.01, max_depth = 3, n_estimators = 700, random_state=8, n_jobs=-1)
rf = RandomForestClassifier(n_estimators=1000, max_depth=20, random_state=8, n_jobs=-1)
# rf.fit(X_train, y_train)
# xgb.fit(X_train, y_train)
# p_rf = rf.predict_proba(X_test)
# p_xgb = xgb.predict_proba(X_test)
# p_rf_acc = rf.predict(X_test)
# p_xgb_acc = xgb.predict(X_test)
# rf_ll = log_loss(y_test, p_rf)
# xgb_ll = log_loss(y_test, p_xgb)
# rf_acc = accuracy_score(y_test, p_rf_acc)
# xgb_acc = accuracy_score(y_test, p_xgb_acc)
# print(f'RF Performance: Accuracy: {rf_acc} Log Loss: {rf_ll}')
# print('\n')
# print(f'XGB Performance: Accuracy: {xgb_acc} Log Loss: {xgb_ll}')
```
These are my benchmarks:
- RF Performance: Accuracy: 0.8863636363636364 Log Loss: 0.37249402770254797
- XGB Performance: Accuracy: 0.9090909090909091 Log Loss: 0.21215451328845863
### Permutation Tests
```
# print(datetime.datetime.utcnow())
# print('\n')
# true_score, perm_scores, p_value = permutation_test_score(rf, X, y,
# scoring= 'neg_log_loss', cv=5,
# n_permutations=50, n_jobs=-1, verbose=1)
# print('\n')
# print(datetime.datetime.utcnow())
# 5000 / 50 * 4 / 60
# # 5-7 hours to run
```
# START FROM HERE
### Lasso Regression
then train on those features --> then do permutation test
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, KFold
final_val = df.sample(frac=0.1)
final_X = final_val[model_columns]
final_y = final_val.iloc[:, -1]
data = df.drop(index= final_val.index)
X = data[model_columns]
y = data.iloc[:, -1]
len(X) == len(y)
# log_model = LogisticRegression(penalty='l1', solver='saga')
# log_model.fit(X_train, y_train)
# y_pred = log_model.predict_proba(X_test)
# log_loss(y_test, y_pred)
# log_model.coef_
log_model = LogisticRegression(penalty='l1', solver='saga', max_iter=10000)
kf = KFold(n_splits=5, shuffle=True)
ll_performance = []
model_weights = []
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
log_model.fit(X_train, y_train)
y_pred = log_model.predict_proba(X_test)
log_ll = log_loss(y_test, y_pred)
ll_performance.append(log_ll)
model_weights.append(log_model.coef_)
```
We ran a 5-fold cross-validation to find optimal weights for the logistic regression.
```
ll_performance
# len(model_weights)
# model_weights[0]
```
Here I take the average of the coefficients across the k-folds, and also collect the coefficients from each fold to find genes that are consistently deemed important.
```
average_weight = np.mean(model_weights, axis=0)[0]
def important_gene_mask(columns, coefs):
mask = coefs != 0
important_genes = columns[mask[0]]
print(len(important_genes))
return important_genes
lasso_k1 = set(important_gene_mask(model_columns, model_weights[0]))
lasso_k2 = set(important_gene_mask(model_columns, model_weights[1]))
lasso_k3 = set(important_gene_mask(model_columns, model_weights[2]))
lasso_k4 = set(important_gene_mask(model_columns, model_weights[3]))
lasso_k5 = set(important_gene_mask(model_columns, model_weights[4]))
total_gene_union = set.union(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(total_gene_union)
```
1,138 possible genes that the lasso regression considers important, depending on the k-fold split it saw.
```
total_gene_intersection = set.intersection(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(total_gene_intersection)
```
124 genes that all five lasso models agree on
### Now let's use the genes in this union to model with RF or XGB
```
lasso_columns = list(total_gene_union)
X_2 = data[lasso_columns]
y_2 = data.iloc[:, -1]
X_2 = X_2.apply(pd.to_numeric)
xgb = XGBClassifier(learning_rate = 0.01, max_depth = 3, n_estimators = 700, random_state=8, n_jobs=-1)
rf = RandomForestClassifier(n_estimators=1000, max_depth=20, random_state=8, n_jobs=-1)
kf = KFold(n_splits=5, shuffle=True)
rf_ll_performance = []
rf_acc_performance = []
xgb_ll_performance = []
xgb_acc_performance = []
for train_index, test_index in kf.split(X_2):
X_train, X_test = X_2.iloc[train_index], X_2.iloc[test_index]
y_train, y_test = y_2.iloc[train_index], y_2.iloc[test_index]
rf.fit(X_train, y_train)
xgb.fit(X_train, y_train)
p_rf = rf.predict_proba(X_test)
p_xgb = xgb.predict_proba(X_test)
p_rf_acc = rf.predict(X_test)
p_xgb_acc = xgb.predict(X_test)
rf_ll = log_loss(y_test, p_rf)
xgb_ll = log_loss(y_test, p_xgb)
rf_acc = accuracy_score(y_test, p_rf_acc)
xgb_acc = accuracy_score(y_test, p_xgb_acc)
rf_ll_performance.append(rf_ll)
rf_acc_performance.append(rf_acc)
xgb_ll_performance.append(xgb_ll)
xgb_acc_performance.append(xgb_acc)
from IPython.display import display
display(rf_ll_performance)
display(xgb_ll_performance)
display(rf_acc_performance)
display(xgb_acc_performance)
```
Really good accuracy and model performance here...
- Accuracy of 100% !!!
```
rf_kfold_5_df = pd.DataFrame({'rf prediction_0': p_rf[:, 0], 'rf prediction_1': p_rf[:, 1],
'actual': y_test})
xgb_kfold_5_df = pd.DataFrame({'xgb prediction_0': p_xgb[:, 0], 'xgb prediction_1': p_xgb[:, 1],
'actual': y_test})
rf_kfold_5_df.head()
xgb_kfold_5_df.head()
display(np.mean(rf_ll_performance))
display(np.mean(rf_acc_performance))
display(np.mean(xgb_ll_performance))
display(np.mean(xgb_acc_performance))
```
### Next steps
- tune model with kfolds
- fit on all the "training" data
- validate with the final "validation" set
- write a note on the 114 features deemed important by each round of lasso
- find "feature importance" of XGB, RF forest
- permutation test with the 976 features
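Pairing feature names with their importances (the "feature importance" step above) is a one-liner once the model is fitted; a sketch with made-up gene names and numbers standing in for `rf.feature_importances_`:

```python
import numpy as np

feature_names = ['GENE_A', 'GENE_B', 'GENE_C', 'GENE_D']  # hypothetical
importances = np.array([0.05, 0.40, 0.25, 0.30])          # stand-in values

# Rank features by importance, highest first
order = np.argsort(importances)[::-1]
ranked = [(feature_names[i], importances[i]) for i in order]
```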
```
depths = [5, 10, 15, 20, 25, 30]
for d in depths:
depth_ll_performance = []
depth_acc_performance = []
rf = RandomForestClassifier(n_estimators=1000, max_depth=d, max_features='auto', random_state=8, n_jobs=-1)
for train_index, test_index in kf.split(X_2):
X_train, X_test = X_2.iloc[train_index], X_2.iloc[test_index]
y_train, y_test = y_2.iloc[train_index], y_2.iloc[test_index]
rf.fit(X_train, y_train)
p_rf = rf.predict_proba(X_test)
p_rf_acc = rf.predict(X_test)
rf_ll = log_loss(y_test, p_rf)
rf_acc = accuracy_score(y_test, p_rf_acc)
depth_ll_performance.append(rf_ll)
depth_acc_performance.append(rf_acc)
depth_ll_performance
```
Let's go with a depth of 30 and use 2,000 estimators later...
```
def log_loss_calc(model, X_train, y_train, X_test, y_test):
    """
    Fit the model on the training data and return the log loss
    of its predicted probabilities on the test set.
    """
    model.fit(X_train, y_train)
    pred = model.predict_proba(X_test)
    ll = log_loss(y_test, pred)
    return ll
class XGBoostTuner():
def __init__(self, param=None):
self.param = param
self.xgb_performance = {}
self.best_parameters = []
def xgb_kfolds(self, X, y, trees_list, depth, splits=5):
"""
Parameters
-----------
X: data to train on
y: labeled target data
trees_list: list of numbers ex [2, 4, 6, 8, 10] that will decide number of trees
depth: depth of tree
Returns
----------------
"""
kf = KFold(n_splits=splits, shuffle=True)
models = []
for i in trees_list:
model = XGBClassifier(learning_rate=0.01, n_estimators=i * 100,
                      max_depth=depth, n_jobs=-1, random_state=8)
models.append(model)
xgb1, xgb2, xgb3, xgb4, xgb5 = models
scores = []
for i in range(5):
scores.append([])
score1, score2, score3, score4, score5 = scores
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
score1.append(log_loss_calc(xgb1, X_train, y_train, X_test, y_test))
score2.append(log_loss_calc(xgb2, X_train, y_train, X_test, y_test))
score3.append(log_loss_calc(xgb3, X_train, y_train, X_test, y_test))
score4.append(log_loss_calc(xgb4, X_train, y_train, X_test, y_test))
score5.append(log_loss_calc(xgb5, X_train, y_train, X_test, y_test))
# self.xgb_performance = {}
for i in range(len(trees_list)):
self.xgb_performance[f'{trees_list[i] * 100} trees'] = {'scores over k splits': scores[i], 'mean score': np.mean(scores[i])}
return self.xgb_performance
def best_params(self):
mean_score = []
for key in self.xgb_performance.keys():
mean_score.append((self.xgb_performance[key]['mean score'], key))
bp = min(mean_score, key = lambda x: x[0])
self.best_parameters.append(bp)
return bp
xt1 = XGBoostTuner()
xgb_scores1 = xt1.xgb_kfolds(X_2, y_2, [5, 7, 10, 15, 20], depth=3)
print(xt1.best_params())
# xgb_scores1
xt2 = XGBoostTuner()
xgb_scores2 = xt2.xgb_kfolds(X_2, y_2, [5, 7, 10, 15, 20], depth=5)
print(xt2.best_params())
xt3 = XGBoostTuner()
xgb_scores3 = xt3.xgb_kfolds(X_2, y_2, [5, 7, 10, 15, 20], depth=7)
print(xt3.best_params())
xt4 = XGBoostTuner()
xgb_scores4 = xt4.xgb_kfolds(X_2, y_2, [5, 7, 10, 15, 20], depth=10)
print(xt4.best_params())
xt5 = XGBoostTuner()
xgb_scores5 = xt5.xgb_kfolds(X_2, y_2, [5, 7, 10, 15, 20], depth=20)
print(xt5.best_params())
pd.DataFrame.to_csv(X_2, './data/lasso_features_data.csv')
```
can use that dataframe if i need to retrain or reset notebook
### Fitting tuned on the data then predicting on holdout
```
final_X = final_val[lasso_columns].astype(float)
final_y = final_val.iloc[:, -1]
final_val.info()
xgb = XGBClassifier(learning_rate = 0.01, max_depth = 7, n_estimators = 1000, random_state=8, n_jobs=-1)
rf = RandomForestClassifier(n_estimators=2000, max_depth=30, random_state=8, n_jobs=-1)
rf.fit(X_2, y_2)
xgb.fit(X_2, y_2)
p_rf = rf.predict_proba(final_X)
p_xgb = xgb.predict_proba(final_X)
p_rf_acc = rf.predict(final_X)
p_xgb_acc = xgb.predict(final_X)
rf_ll = log_loss(final_y, p_rf)
xgb_ll = log_loss(final_y, p_xgb)
rf_acc = accuracy_score(final_y, p_rf_acc)
xgb_acc = accuracy_score(final_y, p_xgb_acc)
print(f'RF Performance: Accuracy: {rf_acc} Log Loss: {rf_ll}')
print('\n')
print(f'XGB Performance: Accuracy: {xgb_acc} Log Loss: {xgb_ll}')
rf.feature_importances_
```