%-------------------------------------------------------
% DOCUMENT CONFIGURATIONS
%-------------------------------------------------------
%-------------------------------------------------------
% START OF FIGURES
%-------------------------------------------------------
\subsection{TOR}
\begin{figure}[htpb]
\centering
\caption{\small \sl TOR Routing Schematic
\label{fig:tor-scheme}}
\begin{adjustbox}{center}
\includegraphics[scale=0.081]{annexes/schemes/tor-scheme.jpg}
\end{adjustbox}
\end{figure}
\clearpage
\subsection{JS Cryptography Library Graphs}
The graphs in the following figures were made by \textbf{Dominic Tarr} \cite{Tarr2014PerformanceLibraries.}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows total time taken, lower is better
\label{fig:hash-sha1}}
\includegraphics[scale=0.6]{graphs/hash-sha1.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:hash-ops-sha1}}
\includegraphics[scale=0.6]{graphs/hash-ops-sha1.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows total time taken, lower is better
\label{fig:hash-sha256}}
\includegraphics[scale=0.6]{graphs/hash-sha256.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:hash-ops-sha256}}
\includegraphics[scale=0.6]{graphs/hash-ops-sha256.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows total time taken, lower is better
\label{fig:pbkdf2-sha1}}
\includegraphics[scale=0.6]{graphs/pbkdf2-sha1.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:pbkdf2-ops-sha1}}
\includegraphics[scale=0.6]{graphs/pbkdf2-ops-sha1.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows total time taken, lower is better
\label{fig:pbkdf2-sha256}}
\includegraphics[scale=0.6]{graphs/pbkdf2-sha256.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:pbkdf2-ops-sha256}}
\includegraphics[scale=0.6]{graphs/pbkdf2-ops-sha256.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:small-hash-sha1}}
\includegraphics[scale=0.6]{graphs/small-hash-sha1.png}
\end{figure}
\begin{figure}[htpb]
\centering
\caption{\small \sl y-axis shows size/time, higher is better
\label{fig:small-hash-sha256}}
\includegraphics[scale=0.6]{graphs/small-hash-sha256.png}
\end{figure}
%-------------------------------------------------------
% END OF FIGURES
%-------------------------------------------------------
\chapter{Yaschas Massif 100AF}
Take the crazy chocobo from the back right to make him point in the correct direction. Make sure to save a Gysahl Green and dismount before he eats it. \pickup{Graviton Core}{right side of the crevice} Crux out.
\newline
\startcomponent ma-cb-en-document
\product ma-cb-en
\chapter{How to create a document}
\index{input file}
Let's assume you want to create a simple document. It has
some structure and contains a title page, a few chapters,
sections and sub sections. Of course there is a table of
contents and an index.
\CONTEXT\ can create such a document automatically if you
offer the right input by means of a file. So first you have
to create an input file. An input file consists of a name
and an extension. You can choose any name you want but the
extension has to be \type{tex}. If you create a file with
the name \type{myfile.tex} you will find no difficulties in
running \CONTEXT.
An \pagereference[inputfile] input file could look like
this:
\startbuffer
\starttext
\startstandardmakeup
\midaligned{How to make a document.}
\midaligned{by}
\midaligned{The Author}
\stopstandardmakeup
\completecontent
\chapter{Introduction}
... your text\index{indexentry} ...
\chapter{One Chapter}
\section[firstsection]{The first section}
... your text ...
\section{The second section}
\subsection{the first sub section}
... your text\index{another indexentry} ...
\subsection{the second sub section}
... your text ...
\section{The third section}
... your text ...
\chapter{Another Chapter}
... your text ...
\chapter[lastchapter]{The Last Chapter}
... your text ...
\completeindex
\stoptext
\stopbuffer
{\switchtobodyfont[9pt]\typebuffer}
\CONTEXT\ expects a plain \ASCII\ input file. Of course you
can use any text|-|editor or word|-|processor you want, but you
should not forget that \CONTEXT\ can only read \ASCII\
input. Most text|-|editors or word|-|processors can export your
file as plain \ASCII.
The input file should contain the text you want to be
processed by \CONTEXT\ and the \CONTEXT\ commands. A
\CONTEXT\ command begins with a backslash~\type{\}. With
the command \type{\starttext} you indicate the beginning of
your text. The area before \type{\starttext} is called the
set up area and is used for defining new commands and setting up
the layout of your document.
A command is usually followed by a left and right bracket
pair \type{[]} and|/|or a left and right brace \type{{}}. In
\type{\chapter[lastchapter]{The Last Chapter}} the command
\type{\chapter} for example tells \CONTEXT\ to perform a
few actions concerning design, typography and structure.
These actions may be:
\startitemize[n,packed]
\item start a new page
\item increase chapter number by one
\item place chapter number in front of chapter title
\item reserve some vertical space
\item use a big font
\item put chapter title (and page number) in table of contents
\stopitemize
These actions will be performed on the argument that is
given between the left and right braces: {\em The Last
Chapter}.
The \type{[lastchapter]} between brackets has not been
mentioned yet. This is a label with a logical name that can
be used for referring to that specific chapter. This can be
done with some other \CONTEXT\ commands:
\type{\in{chapter}[lastchapter]} typesets the chapter
number, while \type{\about[lastchapter]} returns the title.
So now the list of actions can be extended with:
\startitemize[continue]
\item let label \type{lastchapter} be chapter number
and title (and store these for later use)
\stopitemize
Other actions concerning running heads, number resetting and
interactivity are disregarded at this moment.
If you have \CONTEXT\ process this example file, you would
obtain a very simple document with a few numbered chapters
and section headers.
While processing the file \CONTEXT\ takes care of many
things. One of these things is for example page numbering.
But in order to make a table of contents \CONTEXT\ needs
page numbers that are not yet known to \CONTEXT\ at the
first run. So you have to process this file twice (a two
pass job). \CONTEXT\ will produce a few auxiliary files to
store this kind of information. In some instances you have
to process an input file thrice (a three pass job). One can
use \TEXEXEC\ to run \CONTEXT\ from the command line. This
\RUBY\ script also takes care of the multiple passes and
processing of auxiliary files. \TEXEXEC\ is part of the
standard \CONTEXT\ distribution.
\stopcomponent
\subsection{Stated preference}
\documentclass[11pt,a4paper,oneside]{report}
\usepackage{pdfpages}
\usepackage{epstopdf}
\usepackage{url}
\usepackage[toc,page]{appendix}
\usepackage{hyperref}
\usepackage{caption}
\usepackage{subcaption}
\renewcommand{\thesection}{\arabic{section}}
\setcounter{secnumdepth}{3}
\title{\textbf{Master thesis: Fake news detection using machine learning}
\\ Review 2 Draft}
\author{Simon Lorent}
\date{Academic year 2018 - 2019}
\begin{document}
\maketitle
\section{Models}
\subsection{Model comparison} \label{section:model_comp}
Since the last review I have introduced a few new models in addition to the Naïve Bayes classifier: a decision tree classifier, a linear SVM classifier, and a ridge classifier. I have also tried to use a lasso classifier and a non-linear SVM, but the complexity of these models grows too fast with respect to the sample size, so they cannot terminate in a reasonable amount of time. Libraries exist that run SVMs on GPUs, but I have not managed to make them work yet. In order to avoid the problem of over-representation of two single domains, I chose to discard them from the following analysis and to use them as the test set.
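As an illustration of the kind of pipeline described above, the following sketch combines TF-IDF features with a linear SVM in scikit-learn. It is only a minimal example with toy data and placeholder parameter values, not the exact code behind the reported results.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the article texts and labels.
train_texts = ["totally fake claim", "carefully sourced report", "another hoax story"]
train_labels = ["fake", "reliable", "fake"]
test_texts = ["a new unseen article"]

# max_features limits the TF-IDF vocabulary size; this is the parameter
# that is varied in the comparisons below (placeholder value here).
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=10000)),
    ("clf", LinearSVC()),
])

pipeline.fit(train_texts, train_labels)
print(pipeline.predict(test_texts))
\end{verbatim}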
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{output/numSentences.png}
\caption{Summary statistics of the dataset, with and without downsampling.}
\label{fig:numSentences}
\end{figure}
The linear SVC shows a good improvement over the basic Naïve Bayes. I have also tried limiting the maximum number of features of the TF-IDF model, which also yields some improvement. This can be seen in \textbf{Figure \ref{fig:all_recall_fake}}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_fake_recall.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on fake.}
\label{fig:all_recall_fake}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_reliable_recall.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on reliable.}
\label{fig:all_recall_reliable}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_fake_precision.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on fake.}
\label{fig:all_precision_fake}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_reliable_precision.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on reliable.}
\label{fig:all_precision_reliable}
\end{figure}
We can see that Naïve Bayes has the worst recall for fake news, but it is the model that performs best on reliable news.
\subsection{Linear SVC}
Linear SVC is a special case of SVM that fits a linear model and is a lot faster than a traditional SVM. \textbf{Figure \ref{fig:LinearSVC_recall}} shows the recall with respect to the parameter C, which is defined as the penalty parameter of the error term. It can be seen that this parameter does not have much influence on precision or recall. It should be noted that these values come from the 3-fold cross-validation and not from the validation set score. See \textbf{Section \ref{section:model_comp}} for the performance on the validation set.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/LinearSVC_recall.png}
\caption{Recall with respect to the parameter C}
\label{fig:LinearSVC_recall}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/LinearSVC_precision.png}
\caption{Precision with respect to the parameter C}
\label{fig:LinearSVC_precision}
\end{figure}
\section{SMOTE}
As opposed to preliminary results, the SMOTE method does increase the recall for fake news detection, but on the other hand it lowers the recall for reliable news detection, and it also lowers precision. This can be seen in \textbf{Figures \ref{fig:all_recall_fake_SMOTE}} to \textbf{\ref{fig:all_precision_reliable_SMOTE}}.
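The following sketch shows how SMOTE oversampling can be applied before training, assuming the imbalanced-learn package; the dataset and parameter values are illustrative only and do not reproduce the setup used for the figures.
\begin{verbatim}
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Toy imbalanced dataset standing in for the TF-IDF feature matrix.
X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class samples by interpolating between
# existing minority samples and their nearest neighbours.
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_resampled))
\end{verbatim}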
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_fake_recall_SMOTE.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on fake.}
\label{fig:all_recall_fake_SMOTE}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_reliable_recall_SMOTE.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on reliable.}
\label{fig:all_recall_reliable_SMOTE}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_fake_precision_SMOTE.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on fake.}
\label{fig:all_precision_fake_SMOTE}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/all_reliable_precision_SMOTE.png}
\caption{Model comparison with respect to the number of TF-IDF features (log scale) on reliable.}
\label{fig:all_precision_reliable_SMOTE}
\end{figure}
\section{LDA}
I have run some experiments with LDA, but they do not reveal any interesting facts so far. I plan to try numerous numbers of LDA topics, but I need some way to mathematically compare the two distributions in order to find the optimal number of topics. For instance, \textbf{Figure \ref{fig:lda25}} shows the distribution of fake and reliable news for 25 topics and \textbf{Figure \ref{fig:lda10}} for 10 topics.
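As a sketch of the kind of experiment described above (illustrative only; the actual corpus, preprocessing, and topic counts differ), per-document topic distributions can be obtained with scikit-learn's LDA implementation and then averaged per class:
\begin{verbatim}
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the news articles.
docs = ["economy markets stocks", "election vote candidate", "stocks vote hoax"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(counts)  # one topic distribution per document

# Averaging doc_topics separately over the fake and the reliable articles
# gives the two per-class distributions that need to be compared.
print(doc_topics.shape)
\end{verbatim}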
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/lda_10.png}
\caption{News count per topics for 10 topics}
\label{fig:lda10}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{output/lda_25.png}
\caption{News count per topics for 25 topics}
\label{fig:lda25}
\end{figure}
\bibliographystyle{plain}
\bibliography{../references/references.bib}
\end{document}
\section{OSS contributions}
\begin{itemize}
<% for contribution in contributions %>
\item \href{<<contribution.url>>}{<<contribution.name>>}
<% endfor %>
\end{itemize}
\section{LIME}
LIME \cite{ribeiro2016should} (Local Interpretable Model-Agnostic Explanations) is a black box interpretability method. Like all black box methods, LIME randomly changes features in the input data to detect which feature is relevant for a specific classification.
In the case of image classification, LIME does not modify single pixels, because this would generate too many different versions of the input.
Instead, LIME generates superpixels.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/frog1.png}
\caption{Original image}
\end{subfigure}\hspace{1.5cm}%
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/frog2.png}
\caption{Original image overlaid with superpixel boundaries}
\end{subfigure}
\caption{Superpixels generated for an input image. Superpixels are the features LIME analyzes to detect if they are relevant for the classification \cite{limeoreilly}.}
\label{lime_superpixel}
\end{figure}
Superpixels are continuous regions on an image with a similar color. In the Python reference implementation, LIME uses the Quick Shift \cite{vedaldi2008quick} clustering algorithm to generate these superpixels. Figure \ref{lime_superpixel} shows superpixels overlaid on an example image.
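For illustration, a comparable superpixel segmentation can be produced with the Quick Shift implementation in scikit-image; this is only a sketch with placeholder parameter values and a random image, not necessarily the defaults used by LIME.
\begin{verbatim}
import numpy as np
from skimage.segmentation import quickshift

# Random RGB image standing in for the classifier input.
image = np.random.rand(224, 224, 3)

# Quick Shift clusters pixels by color and position into superpixels.
segments = quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)
print("number of superpixels:", len(np.unique(segments)))
\end{verbatim}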
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{chapters/02_methods/images/lime1.jpg}
\caption{Left: Original image with detected class and probability. Right: Perturbed images generated by randomly turning off superpixels and their probability for the specific class.}
\label{lime_perturbed}
\end{figure}
In the next step, LIME generates input images by turning off multiple randomly selected superpixels. Turning off in this case means setting the color inside the superpixel to gray. The right image of Figure \ref{lime_perturbed} shows some examples of deactivated superpixels. The generated input images are then passed through the neural network and the changed probabilities of the relevant class(es) are recorded.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{chapters/02_methods/images/lime2.png}
\caption{LIME linear regression model: The light red and blue background of the image represents the non-linear function of the underlying model.
The bright red cross represents the image that is explained. The smaller red crosses and blue circles represent
perturbed instances of the image. The size of these symbols shows the distance to the unchanged image. The x-axis represents the features, which in the case of images are the superpixels. The y-axis represents the output label of the neural network. The dashed line shows the fitted linear model.}
\label{lime_linear_regression}
\end{figure}
As a last step, LIME trains a linear regression model (Figure \ref{lime_linear_regression}) on this data: superpixels represented as a binary vector on the x-axis and the corresponding neural network output for the specific class on the y-axis. The samples are weighted by their distance to the original sample, because the linear model is only accurate in the local environment of the unchanged image and not further away, since the underlying model is not linear. The coefficients of the linear model represent how important each superpixel is for the correct classification by the model.
For visualization, the coefficients are sorted by their value. Based on the superpixel count parameter given to LIME, the most important superpixels are drawn, while all other superpixels are grayed out.
Figure \ref{lime_dog} shows a visualization for three classes on a complex input image. Alternatively, the superpixel cluster is drawn over the input image with transparency.
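Putting the steps together, the Python reference implementation can be used roughly as follows. This is only a sketch: \texttt{classifier\_fn} is a placeholder for the neural network's batch prediction function, the image is random, and the parameter values are illustrative; the exact API may vary between versions of the lime package.
\begin{verbatim}
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # Placeholder for the neural network: returns class probabilities
    # for a batch of images (two dummy classes here).
    return np.tile([0.7, 0.3], (len(images), 1))

image = np.random.rand(224, 224, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, num_samples=1000)

# Keep only the most important superpixels for the top class, gray out the rest.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
overlay = mark_boundaries(img, mask)
\end{verbatim}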
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/lime_dog_1.png}
\caption{Original image}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/lime_dog_2.png}
\caption{Explain class "Electric guitar" (p=0.32)}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/lime_dog_3.png}
\caption{Explain class "Acoustic guitar" (p=0.24)}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{chapters/02_methods/images/lime_dog_3.png}
\caption{Explain class "Labrador" (p=0.21)}
\end{subfigure}
\caption{Explaining the top 3 classes on an example image with LIME. The used neural network architecture is Inception trained on ImageNet \cite{ribeiro2016should}.}
\label{lime_dog}
\end{figure}
\subsection{Euler's theorem}
\chapter*{Results and conclusions}
\addcontentsline{toc}{chapter}{Results and conclusions}
\markboth{Conclusions}{}
\label{chap:conclusions}
\section{Project status and schedule}
After reviewing the Gantt diagram of the previous progress report, we can see that, according to the current status (testing the funding transaction of a bidirectional payment channel), we are two weeks behind the estimated schedule proposed in the previous document.
The problems that caused this delay were:
\begin{itemize}
\item \textbf{More research time than expected:} We thought that, with the knowledge acquired up to the last progress report, we would spend less time in each iteration on research tasks and that more time would be needed for testing. What happened is that the more advanced or non-standard the scripts being developed, the less information can be found about them. It is true that we also required more testing time and less development time, as we developed a solid and usable framework, but the research time could not be reduced because of the lack of sources. To avoid long searches, I decided to use the Bitcoin Core implementation code as the main source of information, since it is the code that will be executed to check our transactions. Despite being hard to understand because of the advanced C++ syntax used, the effort of understanding the code pays off, as the knowledge obtained is valid without any doubt.
\item \textbf{Personal timing issues:} Some weeks we could not complete the iterations because I started a scholarship that takes more time than expected, and therefore each iteration was delayed more than necessary.
\end{itemize}
To reschedule the project, we skipped implementing the two-transaction (refund and fund) method to open a channel, as the one-transaction method works and is more secure, and I will use the Bitcoin Core implementation as my main source of information to ensure the validity and accuracy of the knowledge obtained. With these changes, I expect to be on schedule after testing the bidirectional payment channel funding transaction.
\section{About the development}
Since the last progress report, I have learned how P2SH transactions are created and spent, what a BIP exactly is, how it is created, developed, tested and requested to be introduced as a feature in the network, and how BIP-65 works, which implies that I understand the exact meaning and operation of \code{nLocktime}, and also how \code{nSequence} works, as it was also a requirement for BIP-65.
After reading the whitepaper, I think no extra knowledge should be necessary to develop the payment channel beyond creating and testing the transactions with their scripts, although that is an optimistic statement. Despite that, and with the changes made to the research tasks (looking first at the Bitcoin Core implementation, as I am now more familiar with it), I think I will be able to develop a bidirectional payment channel on schedule, although I may have to skip optional features such as channel automation, which would let the channel be used by any user with minimal knowledge about Bitcoin.
\hyphenation{al-though Al-though}
\section{Introduction}
In today's applications, scientists want to share their services with many colleagues, not only offering the services as bare metal programs but exposing the functionality as Software as a Service (SaaS). This has the advantage that the services can be readily reused by other applications and hosted in the cloud, allowing access to state-of-the-art services or volumes of resources that otherwise would not be accessible to individual domain experts. Through the increased availability, resource constraints can be reduced, and scientists can offer their analytics workflows as services to the community. This may include long-lasting services envisioned by cloud computing as part of its SaaS paradigm or, for smaller analytics functions, microservices. Furthermore, a subset of analytics functions can be offered as part of a serverless computing model, elevating the penetration from a pure bare metal solution to a multi-pronged cloud-based service offering.
While working with many professionals, researchers, and students, we found that the barriers to entry to accomplish this goal remain very high, and would elude many domain experts as they have neither the expertise nor the time to learn the expertise necessary to conduct the infrastructure-related tasks integrating DevOps and analytics tasks. Although recent developments, especially on the serverless computing side, have made progress, we ought to leverage the existing expertise of the domain scientists while automating the creation of various services from SaaS, microservices, and serverless computing.
Having worked with this community, we found that the educational steps involved for a beginner take about two to three months to get up to a level where the development of cloud-based services is possible. We set the goal to explore if it is possible to drastically reduce the time needed to create such services.
For this reason, we developed a sophisticated but easy-to-use framework that takes a regular Python function and converts it automatically into a secure REST service and an OpenAPI specification \cite{openapi} that can be reused in the ecosystem of cloud services. We used this framework to create many AI-based REST services to showcase the approach's validity. We used examples from scikit-learn \cite{scikit-learn} and benchmarked the execution of the resulting REST services on various clouds and an IoT device.
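To illustrate the intended workflow, the framework starts from an ordinary type-annotated Python function such as the sketch below. The function itself is a made-up placeholder; the concrete generator commands are described in the cloudmesh manual \cite{cloudmesh-manual,cloudmesh-openapi}.
\begin{verbatim}
# Hypothetical analytics function. The type hints are what a generator
# can use to derive the REST parameters and the response schema.
def scale_and_sum(values: str, factor: float) -> float:
    """Scale a comma-separated list of numbers and return their sum."""
    numbers = [float(v) for v in values.split(",")]
    return factor * sum(numbers)
\end{verbatim}
From such a function, the framework derives the OpenAPI specification and the REST server wrapper, so the domain expert does not have to write service code by hand.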
The paper is structured as follows. In \Section{sec:background} we will start with a very brief background section to allow domain experts to catch up with the terminology and concepts used in our architecture. The background analysis leads us to our requirements presented in \Section{sec:requirements} and our architectural design shown in \Section{sec:architecture}. Our benchmarks are collected in \Section{sec:benchmark}. We present our conclusion in \Section{sec:conclusion}.
In the appendix, a small number of useful notes are provided to ease replication of what we have achieved by others. In the final publication, the appendix can be removed with a link to our manual for the pilot framework presented here \cite{cloudmesh-manual, cloudmesh-openapi} where we will include the content of the appendix.
\section{Background}
\label{sec:background}
In this background section, we provide a short summary of activities related to this research so that domain experts can get a brief introduction to the concepts that we use to implement our architecture. It is beyond the scope of this paper to give more detailed introductions to topics such as IaaS, SaaS, microservices, serverless computing, OpenAPI, and REST services. The sections will, however, be useful to the reader as a starting point for further research.
\subsection{The Big Data Reference Architecture}
NIST has developed a Big Data Reference Architecture as part of the NIST Big Data Interoperability Framework (NBDIF)\cite{nist-v6} and identified several use cases that motivate
it \cite{nist-v3}. The reference architecture is depicted in \Figure{fig:bdra}. It includes the following components: Data Provider, Big Data Application Provider, Big Data Framework Provider, Data Consumer and
System Orchestrator as well as two overarching fabrics: security and
privacy and system management. There are three types of linkages,
namely \emph{Big Data Information flow}, \emph{Service Use and
Software Tools}, and \emph{algorithms transfer}. The architecture
presents a level of abstraction to define Big Data
applications. Components that implement sophisticated functionality
work in concert to address the challenge of instantiating architectures beyond the conceptual stage. As such, the components
interact with each other through the linkages defined
within the NBDIF. The next logical step is to explore how it can
benefit and be used for analytics services.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\columnwidth]{images/NIST_RA_latest-crop.pdf}
\caption{NIST Big Data Reference Architecture \cite{nist-v8}}
\label{fig:bdra}
\end{figure}
NIST has developed through open working group participation the
following documents related to the
NBDIF~\cite{nist-v1,nist-v2,nist-v3,nist-v4,nist-v5,nist-v6,nist-v7,nist-v8,nist-v9}. Within
these activities, Volume 8 is of special importance as it lays out a
set of Big Data architectural needs
\cite{cloudmesh-openapi,las20book-cloudeng}. This effort forms the basis of
our activities reported here while expanding it to cloud providers and
services focusing on {\em Analytics Services}, which are not covered
by the current volumes.
In a previous effort, we have developed a reference implementation that follows the architecture laid out in NBDIF and is easy to use by scientists.
However, it focused mostly on multi-cloud provider access via REST services and command-line tools. The reference implementation is done as part of the cloudmesh project, which was one of the first hybrid multi-cloud provider interfaces, even including cloud technologies that are no longer active such as Eucalyptus \cite{www-eucalyptus}, OpenCirrus \cite{opencirrus}, FutureGrid \cite{futuregrid}, and Comet Cloud \cite{las-comet}. Today, it supports clouds such as AWS \cite{www-aws}, Azure \cite{www-azure}, Google Cloud Platform \cite{www-google}, Oracle \cite{www-oracle-cloud}, and OpenStack \cite{www-openStack}. It will offer further value as it also explores the integration of MapReduce frameworks such as Hadoop \cite{www-hadoop} and Spark \cite{www-spark}, as well as container-based frameworks such as Docker \cite{www-docker}, and Kubernetes \cite{www-kubernetes}.
However, the work presented here focuses on analytics services that can be automatically created and hosted on any of the clouds supported by cloudmesh. This is a non-trivial effort due to the large number of technologies involved and lies outside the expertise of domain scientists. The use of cloudmesh makes it possible for the domain scientist to easily access these services and leverage our more than ten years of experience in this field.
The previous work provides us with a blueprint on how to proceed. We list the following main findings of our earlier work that we leverage as part of this work.
\begin{description}
\item[Software Defined Analytics Services and Applications.] ~\\
Just as
in the NBDIF, the utilization of \emph{DevOps} to deliver
Software-Defined (SD) Big Data applications is of
utmost importance for the design of reusable services and components \cite{cloudmesh-manual,bigdata-stack-1,bigdata-stack-2}.
\item[Multi-cloud Provider Interfaces.] Volume 8 was shaped through community
  input in such a form that it allows multi-cloud
interfaces. Such interfaces have been in practical use in our software and showcase the validity of the NIST-BDRA approach. It is clear that we need to introduce such multi-cloud and multi-service
interfaces for analytics-related tasks whenever possible as motivated in our introduction.
\item[Use Case Collection.] NIST has provided as part of the NIST BDRA
document Vol. 6~\cite{nist-v6} several use cases that can be analyzed and from which common big data services can be detected. These use
cases were sufficient to drive the NIST BDRA document \cite{nist-v6}
and allowed the community to investigate initial implementations. These use cases also motivate the work conducted in this effort.
\item[Independent API Specification Leveraging OpenAPI.] ~\\ Although the use of OpenAPI \cite{openapi,openapi-tools} is not required as part of the NIST specification, it can be used to formulate services in a
language-independent fashion. Hence it allows {\em creating, evolving and promoting a vendor-neutral description format}. This is important for our analytics services approach, as it promotes a vendor-neutral and independent effort.
\item[API's and Tools Targeting A Multi-Layered Architecture.] In our
previous effort, we learned that we need to provide support for
tools, services, and APIs on multiple levels in a multi-layered
architecture. While some users expect a generalized specification, other users may require access on the command line, through deployed services, or even from a Jupyter notebook. We observe that in many cases
the entry barrier to defining an API specification is too high. This is the case for domain experts in the analytics community, who often lack the necessary expertise for general service integration and deployment.
\end{description}
Hence, previous work provides us with a blueprint on how to proceed, which we summarize as
follows:
{\em Develop an easy to use framework that allows the scientists (a) to develop shareable analytics components (b) allow for the deployment of them, and (c) allow for the easy reuse of the services by community members leveraging the deployments.}
\subsection{REST}\label{rest}
One of the most common architectural styles for cloud-related services is based on {\bf RE}presentational {\bf S}tate {\bf T}ransfer (REST). REST often uses
the HTTP protocol for the CRUD functions, which create, read, update, and
delete resources. It is important to note that REST is not a standard,
but it is a software architectural style for building network services.
When referred to as a part of the HTTP protocol, REST has the methods of
GET, PUT, POST, and DELETE. These methods are used to implement the CRUD
functions on collections and items for which REST introduces abstractions for managing these collections and single resources \cite{las-book-cloud} as explained in \Figure{fig:rest}.
\begin{figure}[htb]
\begin{adjustbox}{minipage=0.9\columnwidth,
margin=10pt 5pt,%
bgcolor={black!2},%
frame=0.01pt}%
{\footnotesize
\begin{description}
\item[Collection of resources.] Assume the URI, \verb|http://.../resources/|, identifies a
collection of resources. The following CRUD functions would be
implemented:
\begin{description}
\item
[GET:] List the URIs and details about the collection's
items.
\item[PUT:] Replace the collection with a different collection.
\item[POST:] Make a new entry in the collection. The operation
returns the new entry's URI, which is assigned automatically.
\item
[DELETE:] Delete the collection.
\end{description}
\bigskip
\item[Single Resource.] Assume the URI, \verb|http://.../resources/item1|, identifies a
single resource in a collection. The following CRUD functions would be
implemented:
\begin{description}
\item
[GET:] Fetch a representation of the item in the collection,
extracted in the appropriate media type.
\item
[PUT:] Replace the item in the collection. If the item does
not exist, then create the item.
\item
[POST:] Typically, not used. Treat the item as a collection
and make a new entry in it.
\item
[DELETE:] Delete the item in the collection.
\end{description}
\end{description}
}
\end{adjustbox}
\caption{REST definitions for a collection and single resources.}
\label{fig:rest}
\end{figure}
Because REST has a defined structure, there are tools that support
programming REST-style architectures. They fall, for example, into the following categories
\cite{las-book-cloud}:
\begin{itemize}
\item \textbf{REST Specification Frameworks} which define
REST service specifications for generating REST services in a
language and framework independent manner such as Swagger 2.0
\cite{openapi-2}, OpenAPI 3.0 \cite{openapi-3} and RAML
\cite{raml-1}.
\item \textbf{REST programming language support} which include tools and services
for targeting specific programming languages such as Flask Restful
\cite{www-flask-restful}, and Django Rest \cite{www-django-rest} for Python.
\item \textbf{REST documentation-based tools} which are tools to document
REST specifications. One such tool is Swagger \cite{www-swagger}.
\item \textbf{REST design support tools} which support the
design process in developing REST services while defining reusable client and server that can be integrated and enhanced such as Swagger \cite{www-swagger} and other tools
available at OpenAPI Tools \cite{www-openapi-tools} to generate code
from OpenAPI specifications \cite{www-swagger-codegen}
\end{itemize}
Within our work reported here, we will heavily base our architecture on REST. From this small discussion, it is evident that although the concept of REST is easy to understand, a significant amount of expertise is needed to apply it, expertise which domain scientists may not be interested in acquiring; they are, however, keen on reusing such services without needing to know the details.
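To make the mapping in \Figure{fig:rest} concrete, the following sketch shows how a collection and its items could be exposed with Flask, one of the many possible frameworks; the resource name and the in-memory store are placeholders.
\begin{verbatim}
from flask import Flask, jsonify, request

app = Flask(__name__)
resources = {}  # in-memory stand-in for a real data store

@app.route("/resources/", methods=["GET", "POST"])
def collection():
    if request.method == "GET":
        return jsonify(list(resources))        # list the collection
    item_id = str(len(resources) + 1)          # POST: create a new entry
    resources[item_id] = request.get_json()
    return jsonify({"uri": "/resources/" + item_id}), 201

@app.route("/resources/<item_id>", methods=["GET", "PUT", "DELETE"])
def item(item_id):
    if request.method == "GET":
        return jsonify(resources.get(item_id))   # fetch a representation
    if request.method == "PUT":
        resources[item_id] = request.get_json()  # replace or create the item
        return jsonify(resources[item_id])
    resources.pop(item_id, None)                 # DELETE the item
    return "", 204

if __name__ == "__main__":
    app.run()
\end{verbatim}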
\subsection{OpenAPI}
One of the important aspects of generating REST services is a language-independent formulation of REST services. For this reason, the ``OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with minimal implementation logic \cite{openapi}.''
Hence the specification allows us not only to display the documentation but also to generate the clients and server stubs from it automatically. OpenAPI can be formulated as a YAML Ain't Markup Language (YAML) \cite{www-yaml} file.
An OpenAPI definition can then be used by documentation generation tools to display the API, by code generation tools to generate servers and clients in various programming languages, by testing tools, and in many other use cases. One of the issues with using OpenAPI during the design of a project is that it takes considerable effort to understand the specification. Based on our experience of integrating it into university courses, it is a formidable effort to learn and use it. The lessons from this educational effort, which includes researchers, professionals, graduate, and undergraduate students, motivated this work.
\subsection{Hybrid Multi-Cloud Computing with Cloudmesh}\label{cloudmesh}
Cloud computing providers offer their customers on-demand self-service
computing resources that are rapidly elastic and accessible via broad
network access \cite{nist-cloud-standard}.
They accomplish this through the economies of scale achieved by resource
pooling (serving multiple customers on the same hardware) and using
measured services for fine-grained customer billing \cite{nist-cloud-standard}.
Cloud providers offer these resources in multiple service models
including infrastructure as a service, platform as a service, software
as a service, and, recently, function as a service
\cite{nist-cloud-standard}.
These providers are rapidly offering new platforms and services ranging
from bare-metal machines to AI development platforms like Google's
TensorFlow Enterprise platform \cite{www-tensorflow-enterprise}, and AI services
such as Amazon's text-to-speech service \cite{amazon-polly}.
Customers can take advantage of cloud computing to reduce overhead
expenses, increase their speed and scale of service deployment, and
reduce development requirements by using cloud providers' platforms or
services. For example, customers' developing AI systems can utilize
clouds to handle big data inputs for which private infrastructure would
be too costly or slow to implement. However, having multiple competing
cloud providers leads to situations where service availability,
performance, and cost may vary. Customers must navigate these
heterogeneous solutions to meet their business needs while avoiding
vendor lock-in and managing organizational risk. This may require
comparing or using multiple cloud providers to meet various objectives.
Today's infrastructure deployments can benefit from a {\em hybrid multi-cloud}
strategy in which a mix of cloud-enabled services such as computing, storage, and others
are integrated from on-premises infrastructure, private cloud services, and a public cloud.
As pointed out earlier, Cloudmesh \cite{cloudmesh-manual} is a framework and toolkit that enables users to
easily access hybrid multi-cloud environments. Cloudmesh is an evolution of
previous tools that have been used by many users. Cloudmesh makes
interacting with clouds easy by creating a service mashup to access
common cloud services across numerous cloud platforms. Cloudmesh
contains a sophisticated command shell, a database to store JSON
objects representing virtual machines, storage, and a registry of REST
services \cite{cloudmesh-openapi}. Cloudmesh has a sophisticated
plugin concept that is easy to use and leverages python namespaces
while integrating plugins from different source code
directories \cite{cloudmesh-github}. Installation of Cloudmesh is
available for macOS, Linux, Windows, and Rasbian
\cite{cloudmesh-manual}.
Cloudmesh works with a variety of cloud providers, including Amazon Web
Services, Microsoft Azure, Google Cloud Platform, and Oracle's OpenStack
based providers such as the academic research serving Chameleon Cloud \cite{chameleon-cloud}.
Recently we have also explored containers and microservices. The work presented here summarizes some of this effort. With the help of the {\em cloudmesh-openapi} plugin, we can generate REST services, including microservices and
containers, to organize their functions and code. In addition, cloudmesh can be distributed as a container and used in a containerized environment. Through this ability, cloudmesh services generated with {\em cloudmesh-openapi} can also be deployed on Kubernetes.
\input{content-requirements}
\input{content-architecture}
\section{Benchmark}
\label{sec:benchmark}
In this section we describe our benchmark results.
\subsection{Infrastructure}
For a comparison of our services, we want to compare service deployments on virtual machines that are hosted on various cloud providers. We have chosen to select similar virtual machines for conducting the benchmark. This includes AWS \cite{www-aws}, Azure\cite{www-azure}, and Google \cite{www-google}.
In addition, we are performing some bare metal experiments on two Raspberry PI clusters, one with Raspberry PI4's and the other with Raspberry PI 3b+'s. The latter has a management node, a PI 4, and worker nodes that are PI 3b+. The inclusion of the Raspberry platform was important to us as it demonstrates the capability of IoT and Edge computing devices that may become more prevalent in the future for delegating tasks to the edge. We further provide a docker container for a comparison of containerized services.
\subsection{Application}
We developed benchmark tests that are pytest replications of Scikit-learn artificial intelligence algorithms. These pytests are then run on different cloud services to gather statistics on how they perform.
The team obtained cloud service
accounts from AWS, Azure, Google, and OpenStack. To deploy the pytests,
the team used Cloudmesh and its OpenAPI-based REST services to
benchmark the performance on different cloud services.
Benchmarks include components like data transfer time, model train time, model prediction time, and more. Besides this report, scripts and other code are provided for others to replicate our tests.
We provide two example benchmarks for the Eigenfaces SVM example. The
first deploys and measures the AI service on a single cloud provider at
a time (see \ref{single-cloud-provider-service-benchmarking}), and the second deploys a multi-cloud (see \ref{sec-multi-benchmark}) AI service measuring the service across the clouds in parallel.
\subsection{Algorithms and Datasets}
\label{sec:algorithms-and-datasets}
This project uses a simple example algorithm and dataset. We have chosen to use an example included in Scikit-learn as they are
widely known and can be used by others to replicate our benchmarks
easily. Nevertheless, it will be possible to easily integrate other data
sources, as well as algorithms, due to the generative nature of our code base for creating REST services. Within Scikit-learn we have chosen the {\bf\em Eigenfaces SVM Facial Recognition} example as it represents a very common data science usage pattern. This example conducts facial recognition
that first utilizes principal component analysis (PCA) to
generate eigenfaces from the training image data, and then trains and
tests an SVM model \cite{www-skikit-learn-faces}. This example uses the real world {\em Labeled Faces in the Wild} dataset
consisting of labeled images of famous individuals gathered from the
internet \cite{faces-data}.
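For reference, the core of this example can be sketched in a few lines of scikit-learn; this simplified version uses illustrative hyperparameter values, while the benchmarked service additionally wraps the download, train, and predict steps as REST endpoints.
\begin{verbatim}
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Download the Labeled Faces in the Wild data (the "download data" step).
faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, random_state=0)

# PCA computes the eigenfaces; the SVM is trained on the projected data.
pca = PCA(n_components=150, whiten=True).fit(X_train)
model = SVC(kernel="rbf", class_weight="balanced", C=1000, gamma=0.005)
model.fit(pca.transform(X_train), y_train)

# The "predict" step: classify the held-out images.
print(model.score(pca.transform(X_test), y_test))
\end{verbatim}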
% \textbf{Pipelined ANOVA SVM}: An example code that shows a pipeline
% successively running a univariate feature selection with anova and
% then a SVM of the selected features \cite{www-skikit-learn-pipeline}
\subsection{VM Selection}\label{vm-selection}
When benchmarking cloud performance, it is important to identify and
control VM deployment parameters. This allows one to analyze comparable service offerings, or identify
opportunities for performance improvement by varying deployment features
such as machine size, location, network, or storage hardware. These
benchmark examples aimed to create similar machines across all three clouds, and
measure their service performance. See \Table{tab:iaas} for a summary of the parameters
controlled in these benchmark examples.
One key component is the virtual machine size, which determines the
number of vCPUs, the amount of memory, attached storage types, and
resource sharing policies. Resource sharing policies include shared
core machine varieties (which providers offer at less expensive rates) that allow the virtual machine to burst over its base clock rate in
exchange for credits or the machine's inherent bursting factor
\cite{amazon-instances,google-instances}. For this example, we chose
three similar machine sizes with comparable vCPUs, underlying processors, memory, and price, none of which were a shared core variety. We installed the same Ubuntu 20.04 operating system on all
three clouds.
Another factor that can affect performance, particularly in network
latency, is the zone and region selected. We deploy all benchmark
machines to zones on the east coast of the United States. This helps
control variations caused by network routing latency and provides more
insight into the inherent network performance of the individual cloud
services.
%\rowcolors{2}{gray!25}{white}
Because cloud providers can observe varying loads during the day, the benchmark execution time is another parameter to control. In our single cloud provider benchmark for the Eigenfaces SVM example, each cloud was tested at least twice, with the runs executed sequentially between approximately 19:45 EST and 03:30 EST, starting with Google and ending with Azure. In the Eigenfaces SVM example, only 60 runs were
conducted on Azure due to a failed VM deployment caused by factors outside of the benchmark script's control. Compared to our single cloud provider benchmark, our multi-cloud benchmark benefits from all clouds being tested at the same time.
\begin{table}
\caption{Controlled VM parameters for cloud benchmarks.}
\label{tab:iaas}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}[]{@{}llll@{}}
\toprule
& AWS & Azure & Google \tabularnewline
\midrule
%\endhead
Size (flavor) & m4.large & Standard\_D2s\_v3 & n1-standard-2 \tabularnewline
vCPU & 2 & 2 & 2 \tabularnewline
Memory (GB) & 8 & 8 & 7.5 \tabularnewline
Image & ami-0dba2cb6798deb6d8
& \begin{minipage}[t]{0.40\columnwidth}\raggedright
Canonical:0001-com-ubuntu-server-focal:20\_04-lts:20.04.202006100\strut
\end{minipage} &
ubuntu-2004-lts
\tabularnewline
OS & Ubuntu 20.04 LTS & Ubuntu 20.04 LTS & Ubuntu 20.04 LTS \tabularnewline
Region & us-east-1 & eastus & us-east1 \tabularnewline
Zone & N/A & N/A & us-east1-b \tabularnewline
Price (\$/hr) & 0.1 & 0.096 & 0.0949995 \tabularnewline
%Runs/Test & 90 & 60 & 90\tabularnewline
\bottomrule
\end{tabular}
}
\bigskip
\caption{Raspberry Pi, Docker, and MacBook Pro Specifications}
\label{tab:pi}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}[]{lllll}
\toprule
& & & Docker & MacBook \tabularnewline
& Pi 3B+ & Pi 4 & (On MBP) & Pro i5 3.1GHz\tabularnewline
\midrule
%\endhead
Cores & 4 & 4 & 2&2\tabularnewline
Memory (GB) & 1 & 8 & 2&8\tabularnewline
OS & Raspberry OS 10 & Raspberry OS 10 & Ubuntu 20.04 LTS & macOS \tabularnewline
Version & Kernel 5.4.51 & Kernel 5.4.51 & NA& Big Sur \tabularnewline
Purchase Cost (\$) & 51.99 & 109.99 & NA& NA\tabularnewline
Energy Cost (\$/year) & 5.36 & 6.73 & NA & NA\tabularnewline
Price (\$/hr) & 0.0065 & 0.0133 & NA & NA\tabularnewline
%Runs/Test & 30 & 30 & 1 &\tabularnewline
\bottomrule
\end{tabular}
}
\smallskip
{\footnotesize
The price is the purchase cost plus one year of energy cost, amortized over a year and given for each hour of the year.}
\end{table}
\subsection{Single Cloud Provider AI Service Benchmark}
\label{single-cloud-provider-service-benchmarking}
The benchmark script for the Eigenfaces SVM example uses Cloudmesh to
create virtual machines and set up a Cloudmesh OpenAPI environment
sequentially across the three measured clouds, Amazon, Azure,
and Google. After the script sets up the environment, it runs a series
of pytests that generate and launch the Eigenfaces-SVM OpenAPI service,
and then conducts runtime measurements of various service functions. Also, we run the same pytests on two Raspberry Pi models, a MacBook Pro running a Docker container, and a bare metal MacBook Pro to demonstrate Cloudmesh OpenAPI's flexibility for multi-platform use.
The benchmark runs the pytests in two configurations. After the benchmark
script sets up a virtual machine environment, it runs the first pytest
locally on the OpenAPI server and measures five runtimes:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \textbf{download data:} Download and extraction of remote image data from
ndownloader.figshare.com/files/5976015
\item \textbf{train:}
The model training time when run as an OpenAPI service
\item \textbf{scikitlearn train:}
The model training time when run as the Scikit-learn example without
OpenAPI involvement
\item \textbf{upload local:}
The time to upload an image from the server to itself
\item \textbf{predict local:}
The time to predict and return the target label of the uploaded image
\end{enumerate}
The benchmark runs the second pytest iteration as a remote client and interacts with the deployed OpenAPI service over the internet. It tests two runtimes:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item \textbf{upload remote:}
The time to upload an image to the remote OpenAPI server
\item \textbf{predict remote:}
The time to run the predict function on the remote OpenAPI server, and
return the target label of the uploaded image
\end{enumerate}
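To illustrate how these client-side measurements can be taken, the following is a minimal sketch that times one upload and one predict round trip against a deployed service; the endpoint paths, port, and file name are illustrative assumptions rather than the exact routes generated by Cloudmesh OpenAPI.
\begin{verbatim}
# Sketch of a client-side timing test (endpoints and file name are assumptions).
import time
import requests

SERVER = "http://<service-ip>:8080/cloudmesh"  # assumption: base URL of the service

def test_upload_and_predict_remote():
    with open("test_image.jpg", "rb") as f:
        start = time.perf_counter()
        r = requests.post(f"{SERVER}/upload", files={"file": f})
        upload_remote = time.perf_counter() - start
    assert r.status_code == 200

    start = time.perf_counter()
    r = requests.get(f"{SERVER}/predict", params={"filename": "test_image.jpg"})
    predict_remote = time.perf_counter() - start
    assert r.status_code == 200

    print(f"upload remote: {upload_remote:.3f}s, predict remote: {predict_remote:.3f}s")
\end{verbatim}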
In \Figure{fig:download} we compare the download and extraction time of the Labeled Faces in the Wild dataset. This dataset is approximately 233 MB
compressed, which allows us to measure a non-trivial data transfer.
Lower transfer times imply that the cloud has higher throughput from the data
server, lower latency to the data server, or a better performing internal network. The standard deviation is displayed to compare the variation in the download times. Because the difference between commercial and residential internet speeds dominates the function runtime, we do not compare the clouds to the Pi models, the MacBook, or the Docker container.
\begin{comment}
In \Figure{fig:download} we show the same plot without the Pi and docker results to allow a closer comparison of the three comparable clouds.
\end{comment}
\begin{comment}
\TwoFIGURES
{sample_graph_1_pi1.pdf}
{Runtime for downloading the data used in the Eigenfaces SVM benchmark.}
{fig:download_pi}
{sample_graph_1.pdf}
{Closeup of the Runtime for downloading the data used in the Eigenfaces SVM benchmark while excluding the runtime for the Raspberry PI.}
{fig:download}
\end{comment}
\OneFIGURE
{sample_graph_1.pdf}
{Runtime for downloading the data used in the Eigenfaces SVM benchmark.}
{fig:download}
In \Figure{fig:train_pi} we measure the training time of the Eigenfaces-SVM model both as an OpenAPI service and as the basic Scikit-learn example. This
allows us to measure the runtime overhead added by OpenAPI compared to the
source example. Here, the two functions are identical except that the
OpenAPI train function makes an additional function call to store the
model to disk. This is necessary to share the model across
the train and predict functions. In the figure there are two bars per cloud provider. The blue bars are the training time of the model when hosted as a Cloudmesh OpenAPI function. The
orange bars are the training time of the Scikit-learn example code without Cloudmesh OpenAPI involvement. The bars plot the mean runtimes and the error bar reflects the standard deviation of the runtimes. In \Figure{fig:train} we show the same plot without the Pi models, MacBook, and Docker results to allow a closer comparison of the three comparable clouds.
\TwoFIGURES
{sample_graph_2_pi1.pdf}
{Runtime for training on the data used in the Eigenfaces SVM benchmark.}
{fig:train_pi}
{sample_graph_2.pdf}
{Closeup of the Runtime for training on the data used in the Eigenfaces SVM benchmark without the data for the Pi.}
{fig:train}
In \Figure{fig:upload_pi} we measure the time to upload an image to the server both
from itself and from a remote client. This allows us to compare the
function runtime as experienced by the server, and as experienced by a
remote client. The difference helps determine the network latency
between the benchmark client and the cloud service. In the figure, there are two bars per
cloud provider. The blue bars are the runtime of the upload function as
experienced by the server, and the orange as experienced by the remote
client. The bars plot the mean runtimes and the error bar reflects the
standard deviation of the runtimes. For the Pi models, MacBook, and Docker container, we only measure the local function runtime.
%\TwoFIGURES
\OneFIGURE
{sample_graph_3_pi1.pdf}
{Runtime for uploading the data used in the Eigenfaces SVM benchmark.}
{fig:upload_pi}
%{sample_graph_3.pdf}
%{Closeup of the Runtime for uploading the data used in the Eigenfaces SVM benchmark without the data for the Pi.}
%{fig:upload}
In \Figure{fig:predict_pi} we measure the time to call the predict function on the uploaded image. Again, we run this once from the local server itself, and a second time from a remote client to determine client and server runtimes. In the figure, there are two bars per cloud provider. The blue bars are the runtime of the predict function as experienced by the server, and the orange as experienced by the remote client. The bars plot mean runtimes and the error bar reflects the standard deviation of the runtimes. For the Pi models, MacBook, and Docker container, we only measure the local function runtime.
%\TwoFIGURES
\OneFIGURE
{sample_graph_4_pi1.pdf}
{Runtime for the prediction used in the Eigenfaces SVM benchmark.}
{fig:predict_pi}
%{sample_graph_4.pdf}
%{Closeup of the Runtime for the prediction used in the Eigenfaces SVM benchmark without the data for the Pi.}
%{fig:predict}
\Table{tab:2} presents a full listing of test results. For the upload and predict tests, the ``type'' column denotes whether the test was run locally (server runtime) or remotely (client runtime).
In \Table{tab:cost} we present a cost analysis of the service functions. The analysis uses the prices from \Table{tab:iaas} and \Table{tab:pi}. The price for the cloud virtual machines is based on provider-advertised costs, while the price for the Pi models is based on the hardware cost plus one year of energy cost, amortized over a year and expressed per hour. This does not include other costs such as cooling, networking, or real estate. For the Pi energy cost we assume a full and constant load. We utilize power consumption benchmarks from \cite{pi-power} and Indiana residential kWh cost from \cite{indiana-energy} to calculate the expected energy cost per year. We calculate the cost to run each function and compare the clouds and the Raspberry Pi 4 to the Raspberry Pi 3B+. Specifically, we compare the percent runtime decrease and the percent cost increase from the Pi 3B+ to the clouds and the Raspberry Pi 4.
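The arithmetic behind this analysis can be summarized in a few lines. The sketch below reproduces the predict-function calculation from the values in \Table{tab:iaas} and \Table{tab:pi}; the helper functions are ours and purely illustrative.
\begin{verbatim}
# Sketch of the cost calculation (illustrative helper names).
HOURS_PER_YEAR = 24 * 365

def amortized_price_per_hour(purchase_cost, energy_cost_per_year):
    """Purchase cost plus one year of energy, spread over every hour of the year."""
    return (purchase_cost + energy_cost_per_year) / HOURS_PER_YEAR

def function_cost(mean_runtime_s, price_per_hour):
    """Cost of a single function invocation at the given hourly price."""
    return mean_runtime_s / 3600 * price_per_hour

pi3_price = amortized_price_per_hour(51.99, 5.36)          # ~0.0065 $/hr
aws_cost  = function_cost(0.03, 0.1)                       # predict on AWS,    ~8.3e-07 $
pi3_cost  = function_cost(0.12, pi3_price)                 # predict on Pi 3B+, ~2.2e-07 $

runtime_decrease = (0.12 - 0.03) / 0.12 * 100              # 75.00 %
cost_increase    = (aws_cost - pi3_cost) / pi3_cost * 100  # ~282 %
\end{verbatim}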
\begin{comment}
\begin{table}[h]
%\caption{example table with vertical labels}
example for vertical labels
\centering
\begin{tabular}{clllrrrr}
\toprule
type & \multicolumn{1}{c}{test} & \multicolumn{1}{c}{cloud} & \multicolumn{1}{c}{mean} & \multicolumn{1}{c}{min} & \multicolumn{1}{c}{max} & \multicolumn{1}{c}{std}\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering local}}} &
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering down\-load}}}
& aws &2&3&4&5\\
& & azure & 2&3&4&5\\
& & google &3&3&4&5\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering local}}} &
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering predict}}}
& aws &2&3&4&5\\
& & azure & 2&3&4&5\\
& & google &3&3&4&5\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering local}}} &
\multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering remote}}}
& aws &2&3&4&5\\
& & azure & 2&3&4&5\\
& & google &3&3&4&5\\
\hline
\end{tabular}
\end{table}
\end{comment}
\begin{table}[htb]
\caption{Test results for the Eigenfaces SVM single cloud provider benchmark.}
\label{tab:2}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{lllrrrr}
\toprule
test & type & cloud & mean & min & max & std \\
\midrule
download data & local & aws & 20.58 & 17.23 & 31.80 & 2.77 \\
download data & local & azure & 20.81 & 13.56 & 42.70 & 6.94 \\
download data & local & docker & 820.98 & 820.98 & 820.98 & 0.00 \\
download data & local & google & 18.00 & 17.06 & 19.38 & 0.48 \\
download data & local & pi 3b+ & 130.17 & 123.84 & 149.40 & 5.39 \\
download data & local & pi 4 & 47.67 & 43.43 & 75.60 & 5.72 \\
\midrule
predict & local & aws & 0.03 & 0.02 & 0.05 & 0.00 \\
predict & local & azure & 0.02 & 0.01 & 0.03 & 0.00 \\
predict & local & docker & 0.03 & 0.03 & 0.03 & 0.00 \\
predict & local & google & 0.03 & 0.01 & 0.06 & 0.00 \\
predict & local & mac book & 0.12 & 0.12 & 0.12 & 0.00 \\
predict & local & pi 3b+ & 0.12 & 0.10 & 0.14 & 0.01 \\
predict & local & pi 4 & 0.08 & 0.08 & 0.08 & 0.00 \\
\midrule
predict & remote & aws & 0.40 & 0.26 & 0.80 & 0.18 \\
predict & remote & azure & 0.36 & 0.24 & 0.60 & 0.13 \\
predict & remote & google & 0.36 & 0.27 & 0.82 & 0.16 \\
\midrule
scikitlearn train & local & aws & 35.89 & 35.11 & 46.45 & 1.77 \\
scikitlearn train & local & azure & 40.13 & 34.95 & 43.96 & 3.29 \\
scikitlearn train & local & docker & 53.76 & 53.76 & 53.76 & 0.00 \\
scikitlearn train & local & google & 42.13 & 41.77 & 42.49 & 0.13 \\
scikitlearn train & local & mac book & 32.53 & 32.53 & 32.53 & 0.00 \\
scikitlearn train & local & pi 3b+ & 222.63 & 209.18 & 231.90 & 7.87 \\
scikitlearn train & local & pi 4 & 88.32 & 87.78 & 89.14 & 0.33 \\
\midrule
train & local & aws & 35.72 & 34.91 & 46.50 & 1.73 \\
train & local & azure & 40.28 & 35.30 & 47.50 & 3.32 \\
train & local & docker & 54.72 & 54.72 & 54.72 & 0.00 \\
train & local & google & 42.04 & 41.52 & 45.93 & 0.71 \\
train & local & mac book & 33.82 & 33.82 & 33.82 & 0.00 \\
train & local & pi 3b+ & 222.61 & 208.56 & 233.48 & 8.40 \\
train & local & pi 4 & 88.59 & 87.83 & 89.35 & 0.32 \\
\midrule
upload & local & aws & 0.01 & 0.01 & 0.01 & 0.00 \\
upload & local & azure & 0.01 & 0.00 & 0.01 & 0.00 \\
upload & local & docker & 0.02 & 0.02 & 0.02 & 0.00 \\
upload & local & google & 0.01 & 0.01 & 0.01 & 0.00 \\
upload & local & mac book & 0.02 & 0.02 & 0.02 & 0.00 \\
upload & local & pi 3b+ & 0.09 & 0.04 & 0.48 & 0.08 \\
upload & local & pi 4 & 0.02 & 0.02 & 0.02 & 0.00 \\
\midrule
upload & remote & aws & 0.43 & 0.16 & 1.13 & 0.21 \\
upload & remote & azure & 0.32 & 0.15 & 0.50 & 0.15 \\
upload & remote & google & 0.31 & 0.18 & 0.73 & 0.18 \\
\bottomrule
\end{tabular}
}
\bigskip
\caption{Test results for the Eigenfaces SVM benchmark deployed
as a multi-cloud service.}
\label{tab:3}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{lllrrrr}
\toprule
test & type & cloud & mean & min & max & std \\
\midrule
download data & remote & aws & 20.51 & 17.57 & 34.42 & 3.82 \\
download data & remote & azure & 18.60 & 13.49 & 32.65 & 4.53 \\
download data & remote & google & 17.90 & 17.13 & 21.86 & 0.85 \\
\midrule
predict & remote & aws & 4.15 & 3.59 & 5.42 & 0.57 \\
predict & remote & azure & 3.93 & 3.40 & 6.65 & 0.74 \\
predict & remote & google & 4.13 & 3.74 & 6.37 & 0.60 \\
\midrule
train & remote & aws & 35.61 & 35.24 & 39.53 & 0.73 \\
train & remote & azure & 35.89 & 35.08 & 40.00 & 0.95 \\
train & remote & google & 41.98 & 41.58 & 45.71 & 0.71 \\
\midrule
upload & remote & aws & 10.08 & 4.89 & 16.52 & 4.38 \\
upload & remote & azure & 8.46 & 4.72 & 13.92 & 4.05 \\
upload & remote & google & 8.87 & 5.39 & 15.44 & 4.52 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[htb]
\caption{Cost Analysis of function runtimes with \% cost increase and \% runtime decrease relative to the Raspberry Pi 3B+. }
\label{tab:cost}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{lllrrrr}
\toprule
test & type & cloud & mean & cost & \% runtime decrease & \% cost increase \\
\midrule
download data & local & aws & 20.58 & 5.72e-04 & NA & NA \\
download data & local & azure & 20.81 & 5.55e-04 & NA & NA \\
download data & local & google & 18.00 & 4.75e-04 & NA & NA \\
\midrule
predict & local & aws & 0.03 & 8.33e-07 & 75.00 & 281.87 \\
predict & local & azure & 0.02 & 5.33e-07 & 83.33 & 144.39 \\
predict & local & google & 0.03 & 7.92e-07 & 75.00 & 262.77 \\
predict & local & mac book & 0.12 & NA & 0.00 & NA \\
predict & local & docker & 0.03 & NA & 75.00 & NA \\
predict & local & pi 4 & 0.08 & 2.96e-07 & 33.33 & 35.68 \\
predict & local & pi 3b+ & 0.12 & 2.18e-07 & 0.00 & 0.00 \\
\midrule
predict & remote & aws & 0.40 & 1.11e-05 & NA & NA \\
predict & remote & azure & 0.36 & 9.60e-06 & NA & NA \\
predict & remote & google & 0.36 & 9.50e-06 & NA & NA \\
\midrule
scikitlearn train & local & aws & 35.89 & 9.97e-04 & 83.88 & 146.24 \\
scikitlearn train & local & azure & 40.13 & 1.07e-03 & 81.97 & 164.32 \\
scikitlearn train & local & google & 42.13 & 1.11e-03 & 81.08 & 174.60 \\
scikitlearn train & local & mac book & 32.53 & NA & 85.39 & NA \\
scikitlearn train & local & docker & 53.76 & NA & 75.85 & NA \\
scikitlearn train & local & pi 4 & 88.32 & 3.27e-04 & 60.33 & -19.26 \\
scikitlearn train & local & pi 3b+ & 222.63 & 4.05e-04 & 0.00 & 0.00 \\
\midrule
train & local & aws & 35.72 & 9.92e-04 & 83.95 & 145.10 \\
train & local & azure & 40.28 & 1.07e-03 & 81.91 & 165.33 \\
train & local & google & 42.04 & 1.11e-03 & 81.11 & 174.04 \\
train & local & mac book & 33.82 & NA & 84.81 & NA \\
train & local & docker & 54.72 & NA & 75.42 & NA \\
train & local & pi 4 & 88.59 & 3.28e-04 & 60.20 & -19.01 \\
train & local & pi 3b+ & 222.61 & 4.05e-04 & 0.00 & 0.00 \\
\midrule
upload & local & aws & 0.01 & 2.78e-07 & 88.89 & 69.72 \\
upload & local & azure & 0.01 & 2.67e-07 & 88.89 & 62.93 \\
upload & local & google & 0.01 & 2.64e-07 & 88.89 & 61.23 \\
upload & local & mac book & 0.02 & NA & 77.78 & NA \\
upload & local & docker & 0.02 & NA & 77.78 & NA \\
upload & local & pi 4 & 0.02 & 7.40e-08 & 77.78 & -54.77 \\
upload & local & pi 3b+ & 0.09 & 1.64e-07 & 0.00 & 0.00 \\
\midrule
upload & remote & aws & 0.43 & 1.19e-05 & NA & NA \\
upload & remote & azure & 0.32 & 8.53e-06 & NA & NA \\
upload & remote & google & 0.31 & 8.18e-06 & NA & NA \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Multi-Cloud AI Service Benchmark}
\label{sec-multi-benchmark}
In this benchmark, our script first acquires VMs, installs Cloudmesh
OpenAPI, and launches the Eigenfaces SVM AI service on three separate
cloud providers. Because Cloudmesh has limited parallel computing
support, the script deploys the VMs in a serial manner. After the
services are running, we then run our tests in a parallel manner as
depicted in \Figure{fig:2}. Testing in parallel provides faster benchmark
results and better equalizes benchmark testing conditions. The
benchmark issues requests to each cloud in parallel, so all clouds experience similar network conditions. For example, in a serial testing
model, when downloading data from a remote server, the remote server may experience varying loads, which ultimately results in different throughputs for the various tests. Our parallel tests better equalize these conditions by having each cloud download the data under the same network conditions.
In the benchmark, we compute the means from 30 runs of a workflow that includes one download data invocation, one train invocation,
30 upload invocations, and 30 predict invocations. We run the workflows
in parallel on the separate clouds using multiprocessing on an eight-core
machine.
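A minimal sketch of how such a parallel test driver can be organized is shown below; the host placeholders and the body of the workflow are illustrative assumptions rather than the exact benchmark code.
\begin{verbatim}
# Sketch of running one benchmark workflow per cloud in parallel (illustrative).
import time
from multiprocessing import Pool

# Assumption: host addresses would be filled in from the deployed services.
CLOUD_HOSTS = {"aws": "<aws-ip>", "azure": "<azure-ip>", "google": "<google-ip>"}

def run_workflow(item):
    cloud, host = item
    # One workflow: 1 download, 1 train, 30 uploads, 30 predicts against `host`.
    # Only the wall-clock time of the whole workflow is recorded in this sketch.
    start = time.perf_counter()
    # ... issue the HTTP requests to `host` and time each call ...
    return cloud, time.perf_counter() - start

if __name__ == "__main__":
    # One worker per cloud so all workflows run under the same network conditions.
    with Pool(processes=len(CLOUD_HOSTS)) as pool:
        results = dict(pool.map(run_workflow, CLOUD_HOSTS.items()))
    print(results)
\end{verbatim}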
In \Figure{fig:7} we depict the combined runtime of our benchmark tests. This allows us to compare the complete execution time of an AI service workflow.
\OneFIGURE
{ai_service_workflow_runtime.png}
{Mean runtime of the Eigenfaces SVM workflow deployed
as a multi-cloud service.}
{fig:7}
In \Table{tab:3} we provide complete test results for the multi-cloud benchmark.
\begin{comment}
\subsubsection{Pipelined Anova SVM Example}
\label{pipelined-anova-svm-example}
\end{comment}
\begin{comment}
\subsection{Caleb Example}\label{caleb-example}
\end{comment}
\section{Limitations}\label{limitations}
Azure has updated its libraries and discontinued the version 4.0 Azure
libraries. We updated Cloudmesh to use the new library, but not all
features, such as virtual machine deletion, are yet implemented or verified.
\section{Conclusion}
\label{sec:conclusion}
This paper has introduced a framework and tool called GAS Generator that allows data scientists with little REST or OpenAPI experience to quickly generate REST services from Python functions. The overall time for deploying the resulting service was reduced from several months, for inexperienced data scientists, to under a week. The service can be provisioned on public clouds and shared with other users. Authentication is built into our framework while leveraging common REST service practices.
In a small benchmark executed on the various cloud providers as well as on local hardware, including Raspberry Pis, we have seen that the cloud providers perform similarly when using similar resources and images. To compare the services with IoT devices such as the Raspberry Pi 3B+ and Pi 4, we chose an example small enough to run on them, which can serve as a reference for other IoT devices in the future. We found that, especially in the case of the Pi 4, the performance was quite good for our example. We also provided a cost-performance analysis that compares the IoT devices with the clouds, amortizing the device cost over a year's worth of activity. We find that the Pi is surprisingly cost-effective.
However, the most significant gain from this project is the reduction in manpower and in the entry barrier required to create and deploy AI services. Because the approach generalizes over plain Python functions, developers and data scientists can naturally integrate more complex tasks, as well as tasks that leverage cloud-specific AI services uniquely offered by particular providers. GAS Generator is an open-source project, and we appreciate contributions to it. Please contact the first author at \textit{laszewski\@gmail.com}.
\section*{Acknowledgment}
We would like to thank
Brian Kegerreis,
Jonathan Beckford,
Jagadeesh Kandimalla,
Prateek Shaw,
Ishan Mishra,
Fugang Wang, and Andrew Goldfarb for developing the Python REST service generator on which this work is based. We also thank the more than 70 contributors to the Cloudmesh toolkit for developing the various cloud provider interfaces. We thank the NBDIF working group for their contributions to the discussions that led to the development of this effort.
| {
"alphanum_fraction": 0.719942017,
"avg_line_length": 64.8187919463,
"ext": "tex",
"hexsha": "9313587e778bb0b537aee4ff1706bcc748e6956e",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2020-10-30T22:48:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-02T17:09:14.000Z",
"max_forks_repo_head_hexsha": "c866c4cbdad3adfdf3fe31591906ced8d71352ef",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "cloudmesh/cloudmesh.openapi",
"max_forks_repo_path": "deprecated/paper/content.tex",
"max_issues_count": 27,
"max_issues_repo_head_hexsha": "c866c4cbdad3adfdf3fe31591906ced8d71352ef",
"max_issues_repo_issues_event_max_datetime": "2020-09-02T20:24:59.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-29T13:38:11.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "cloudmesh/cloudmesh.openapi",
"max_issues_repo_path": "deprecated/paper/content.tex",
"max_line_length": 995,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "c866c4cbdad3adfdf3fe31591906ced8d71352ef",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "cloudmesh/cloudmesh.openapi",
"max_stars_repo_path": "deprecated/paper/content.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-17T17:08:44.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-29T14:53:19.000Z",
"num_tokens": 12241,
"size": 48290
} |
\documentclass[twoside]{article}
\setlength{\oddsidemargin}{0.25 in}
\setlength{\evensidemargin}{-0.25 in}
\setlength{\topmargin}{-0.6 in}
\setlength{\textwidth}{6.5 in}
\setlength{\textheight}{8.5 in}
\setlength{\headsep}{0.75 in}
\setlength{\parindent}{0 in}
\setlength{\parskip}{0.1 in}
%
% ADD PACKAGES here:
%
\usepackage{amsmath,amsfonts,amssymb,graphicx,mathtools,flexisym, hyperref, graphicx}
\usepackage[ruled,vlined]{algorithm2e}
\graphicspath{ {./Images/} }
\usepackage{tcolorbox}
%
% The following commands set up the lecnum (lecture number)
% counter and make various numbering schemes work relative
% to the lecture number.
%
\newcounter{lecnum}
\renewcommand{\thepage}{\thelecnum-\arabic{page}}
\renewcommand{\thesection}{\thelecnum.\arabic{section}}
\renewcommand{\theequation}{\thelecnum.\arabic{equation}}
\renewcommand{\thefigure}{\thelecnum.\arabic{figure}}
\renewcommand{\thetable}{\thelecnum.\arabic{table}}
%
% The following macro is used to generate the header.
%
\newcommand{\lecture}[4]{
\pagestyle{myheadings}
\thispagestyle{plain}
\newpage
\setcounter{lecnum}{#1}
\setcounter{page}{1}
\noindent
\begin{center}
\framebox{
\vbox{\vspace{2mm}
\hbox to 6.28in { {\bf STAT3923: Advanced Statistical Inference
\hfill } }
\vspace{4mm}
\hbox to 6.28in { {\Large \hfill #1. #2 \hfill} }
\vspace{4mm}
}
}
\end{center}
}
\tcbuselibrary{theorems}
\newtcbtheorem
[]% init options
{theorem_exam}% name
{Theorem}% title
{%
colback=orange!5,
colframe=orange!35!black,
fonttitle=\bfseries,
}% options
{def}% prefix
\newtcbtheorem
[]% init options
{definition_exam}% name
{Definition}% title
{%
colback=blue!5,
colframe=blue!35!black,
fonttitle=\bfseries,
}% options
{def}% prefix
\newtcbtheorem
[]% init options
{proposition_exam}% name
{Proposition}% title
{%
colback=red!5,
colframe=red!35!black,
fonttitle=\bfseries,
}% options
{def}% prefix
%
% Convention for citations is authors' initials followed by the year.
% For example, to cite a paper by Leighton and Maggs you would type
% \cite{LM89}, and to cite a paper by Strassen you would type \cite{S69}.
% (To avoid bibliography problems, for now we redefine the \cite command.)
% Also commands that create a suitable format for the reference list.
\renewcommand{\cite}[1]{[#1]}
\def\beginrefs{\begin{list}%
{[\arabic{equation}]}{\usecounter{equation}
\setlength{\leftmargin}{2.0truecm}\setlength{\labelsep}{0.4truecm}%
\setlength{\labelwidth}{1.6truecm}}}
\def\endrefs{\end{list}}
\def\bibentry#1{\item[\hbox{[#1]}]}
%Use this command for a figure; it puts a figure in wherever you want it.
%usage: \fig{NUMBER}{SPACE-IN-INCHES}{CAPTION}
\newcommand{\fig}[3]{
\vspace{#2}
\begin{center}
Figure \thelecnum.#1:~#3
\end{center}
}
% Use these for theorems, lemmas, proofs, etc.
\newtheorem{theorem}{Theorem}[lecnum]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{example}[theorem]{Example}
\newenvironment{proof}{{\bf Proof:}}{\hfill\rule{2mm}{2mm}}
\newcommand{\Lim}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \lim_{#1}\;$}}}
\newcommand{\Inf}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \inf_{#1}\;$}}}
\newcommand{\Sup}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \sup_{#1}\;$}}}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
%
% To generate a clickable table of content.
%
\hypersetup{
colorlinks,
citecolor=black,
filecolor=black,
linkcolor=blue,
urlcolor=black
}
\newcommand\E{\mathbb{E}}
\usepackage{tocloft}
\addtolength{\cftsecnumwidth}{10pt}
\setlength{\cftsubsecnumwidth}{3.5em}
\title{STAT3923: Advanced Statistical Inference}
\author{Charles Christopher Hyland}
\date{Semester 2 2019}
\begin{document}
\pagenumbering{gobble}
\maketitle
\begin{abstract}
Thank you for stopping by to read this. These are notes collated from lectures and tutorials as I took this course.
\end{abstract}
\newpage
\tableofcontents
\newpage
\pagenumbering{arabic}
%\lecture{**CHAPTER-NUMBER**}{**TITLE**}
\lecture{1}{Probability Theory}
\section{Probability Theory}
\subsection{Probability Theory Introduction}
\begin{definition}(Random Variable). Let $\Omega$ be a sample space. A random variable $X: \Omega \rightarrow \mathbb{R}$ is a real-valued function defined over elements of $\Omega$.
\end{definition}
\begin{definition}(Cumulative Distribution Function). For any random variable, its distribution is characterised by the cumulative distribution function
$$
F(x) = P(X \leq x)
$$
for $- \infty < x < \infty$.
\end{definition}
\begin{lemma}The following are properties of the CDF F(x)
\begin{enumerate}
\item $F(a) \leq F(b)$ for $a < b$;
\item $\lim_{x \rightarrow \infty}F(x) = 1$ and $\lim_{x \rightarrow -\infty}F(x) = 0$;
\item F(x) is right continuous.
\end{enumerate}
\end{lemma}
\begin{definition}(Probability Mass Function). Let X be a discrete random variable taking on values $x_1 < x_2 < \dots$. The PMF for the random variable X is defined as
$$
f(x_i) = P(X=x_i)
$$
where $\sum_{x_{i}}f(x_i) = 1.$
\end{definition}
\begin{theorem}Let X be a random variable. Then, $f(x_i) = F(x_{i}) - F(x_{i-1})$ and $F(x) = \sum_{x_{i} \leq x}f(x_i).$
\end{theorem}
\begin{definition}(Probability Density Function). Let X be a continuous random variable. The PDF is defined as
$$
f(x) = \frac{dF(x)}{dx}
$$
where $\int_{-\infty}^{\infty}f(x)dx=1$. Furthermore, we have that
$$
F(x) = \int_{-\infty}^{x}f(t)dt.
$$
\end{definition}
\begin{definition}($L^1$-space). We denote the set of all first integrable random variables as
$$
L^1 = \{X: \Omega \rightarrow \mathbb{R}: ||X||_1 < \infty \}.
$$
\end{definition}
\begin{definition}(Expectation). Let $X \in L^1$. Then we define the expectation of a random variable as
$$
E(X) = \begin{cases}
\sum_{X}xf(x) \quad \text{(Discrete)}\\
\int_{-\infty}^{\infty}xf(x)dx \quad \text{(Continuous)}\\
\end{cases}
$$
\end{definition}
\begin{lemma}Let X be a random variable and $g: \mathbb{R} \rightarrow \mathbb{R}.$ Then $Y = g(X)$ is also a random variable, with its own PMF/PDF $f_Y$.
\end{lemma}
\begin{definition}(r-th moment) Let $X \in L^r$. Then we define the r-th moment as
$$
E(X^r) = \begin{cases}
\sum_{X}x^rf(x) \quad \text{(Discrete)}\\
\int_{-\infty}^{\infty}x^rf(x)dx \quad \text{(Continuous)}\\
\end{cases}
$$
\end{definition}
\begin{definition}(Variance). Let $X \in L^2$. Then we define the variance as
$$
Var(X) = E((X - E(X))^2) = E(X^2) - E(X)^2.
$$
\end{definition}
\begin{definition}(General Expectation) Let X be a random variable and g a function such that $g(X) \in L^1$. Then we define the expectation of g(X) as
$$
E(g(X)) = \begin{cases}
\sum_{X}g(x)f(x) \quad \text{(Discrete)}\\
\int_{-\infty}^{\infty}g(x)f(x)dx \quad \text{(Continuous)}\\
\end{cases}
$$
\end{definition}
\begin{proposition}Let X and Y be random variables and a,b be constants. We have the following properties
\begin{enumerate}
\item $E(aX + b) = aE(X) + b$;
\item $E(X + Y) = E(X) + E(Y)$;
\item $Var(aX + b) = a^2Var(X)$;
\item $E(XY) = E(X)E(Y)$ (if X and Y are independent);
\item $Var(X+Y) = Var(X) + Var(Y)$ (if X and Y are independent).
\end{enumerate}
\end{proposition}
\lecture{2}{Random variables and distributions}
\section{Probability Theory}
\subsection{Discrete Random Variables}
A discrete random variable is a random variable whose range is finite or countably infinite.
\begin{definition}(Bernoulli Distribution). A random variable X has a Bernoulli distribution and it is referred to as a Bernoulli random variable if and only if its probability distribution is given by
$$
f(x; \theta) = \theta^x(1 - \theta)^{1-x} \quad x \in \{0, 1\}.
$$
\end{definition}
\begin{definition}(Binomial). A random variable X has a Binomial distribution and it is referred to as a Binomial random variable if and only if its probability distribution is given by
$$
f(x; n,\theta) = {n \choose x}\theta^x(1-\theta)^{n-x} \quad x = 0,1,...,n.
$$
\end{definition}
\begin{theorem}Let X be a Binomial random variable. Then
$$
f(x;n,\theta) = f(n-x;n,1-\theta).
$$
\end{theorem}
\begin{theorem}The mean and variance of the Binomial distribution are
$$
E(X) = n\theta
$$
$$
Var(X) = n\theta(1-\theta).
$$
\end{theorem}
\begin{definition}(Negative Binomial Distribution). A random variable X has a negative binomial distribution and it is referred to as a negative binomial random variable if and only if
$$
f(x;k,\theta) = {x-1\choose k-1}\theta^k(1-\theta)^{x-k} \quad x=k,k+1,k+2,...
$$
\end{definition}
\begin{theorem}The mean and the variance of the negative binomial distribution are
$$
\mu = \frac{k}{\theta}
$$
$$
\sigma^2 = \frac{k}{\theta}(\frac{1}{\theta} - 1).
$$
\end{theorem}
\begin{definition}(Geometric Distribution). A random variable X has a Geometric distribution and it is referred to as a Geometric random variable if and only if its probability distribution is given by
$$
f(x; \theta) = \theta(1-\theta)^{x-1} \quad x=1,2,3,...
$$
\end{definition}
\begin{theorem}The mean and variance of the Geometric random variable are
$$
\mu = \frac{1}{\theta}
$$
$$
\sigma^2 = \frac{1-\theta}{\theta^2}.
$$
\end{theorem}
\begin{definition}(Hypergeometric Distribution). A random variable X has a Hypergeometric distribution and it is referred to as a hypergeometric random variable if and only if its probability distribution is given by
$$
f(x;n,N,M) = \frac{{M \choose x}{N - M \choose n - x}}{{N \choose n}} \quad x = 0,1,2,...,n
$$
with $x \leq M$ and $n - x \leq N - M$. Here, M is the number of successes in the population of size N and $N - M$ the number of failures.
\end{definition}
\begin{definition}(Poisson). A random variable X has a Poisson distribution and it is referred to as a Poisson random variable if and only if its probability distribution is given by
$$
f(x; \lambda) = \frac{\lambda^xe^{-\lambda}}{x!} \quad x=0,1,2,...
$$
for $\lambda > 0$.
\end{definition}
\begin{remark}The Poisson random variable expresses the probability of a given number of events occurring in a fixed interval of time if these events occur with a known constant rate $(\lambda)$ and are independent of the time of the last event.
\end{remark}
\begin{theorem}The mean and variance of the Poisson random variable are
$$
\mu = \lambda
$$
$$
\sigma^2 = \lambda.
$$
\end{theorem}
\begin{theorem}(Poisson Limit Theorem). Let $p_n$ be a sequence of real numbers in [0,1] such that the sequence $np_n \rightarrow \lambda < \infty$. Then
$$
\lim_{n \rightarrow \infty;p_n \rightarrow 0}{n \choose k}p_{n}^{k}(1-p_n)^{n-k} = e^{-\lambda}\frac{\lambda^k}{k!}.
$$
\end{theorem}
\subsection{Continuous Random Variables}
\begin{definition}(Uniform Distribution). A random variable X has a uniform distribution and it is referred to as a continuous uniform random variable if and only if its probability density is given by
$$
f(x;\alpha, \beta) = \begin{cases}
\frac{1}{\beta - \alpha} \quad \alpha < x < \beta \\
0 \quad \text{otherwise}.
\end{cases}
$$
\end{definition}
\begin{theorem}The mean and variance of the uniform distribution are given by
$$
\mu = \frac{\alpha + \beta}{2}
$$
$$
\sigma^2 = \frac{1}{12}(\beta - \alpha)^2.
$$
\end{theorem}
\begin{definition}(Gamma Function). The gamma function is defined for any complex number with a positive real part. It is defined as
$$
\Gamma(\alpha) = \int_{0}^{\infty}x^{\alpha - 1}e^{-x}dx
$$
for $\alpha > 0$.
\end{definition}
\begin{theorem}The gamma function satisfies the recursion formula
$$
\Gamma(\alpha + 1) = \alpha\Gamma(\alpha).
$$
\end{theorem}
\begin{definition}(Gamma Distribution). A random variable X has a Gamma distribution and it is referred to as a Gamma random variable if and only if its density is given by
$$
f(x; \alpha, \beta) = \begin{cases}
\frac{1}{\beta^{\alpha}\Gamma(\alpha)}x^{\alpha - 1}e^{-x/\beta} \quad x > 0\\
0 \quad \text{otherwise}
\end{cases}
$$
where $\alpha > 0$ and $\beta > 0.$
\end{definition}
\begin{theorem}The mean and variance of the gamma distribution are given by
$$
\mu = \alpha \beta
$$
$$
\sigma^2 = \alpha \beta^2
$$
\end{theorem}
The exponential and chi-square distribution are special cases of the gamma distribution.
\begin{definition}(Exponential Distribution). A random variable X has an exponential distribution and it is referred to as an exponential random variable if and only if its probability density is given by
$$
f(x; \theta) = \begin{cases}
\frac{1}{\theta}e^{-x/\theta} \quad x > 0\\
0 \quad \text{elsewhere}
\end{cases}
$$
for $\theta > 0.$
\end{definition}
\begin{remark}The exponential distribution is the Gamma distribution for $\alpha = 1.$
\end{remark}
\begin{theorem}
The mean and variance of the exponential distribution are given by
$$
\mu = \theta
$$
$$
\sigma^2 = \theta^2.
$$
\end{theorem}
\begin{definition}(Chi-Square Distribution). A random variable X has a chi-square distribution and it is referred to as a chi-square random variable if and only if its probability density is given by
$$
f(x, \nu) = \begin{cases}
\frac{1}{2^{\nu/2}\Gamma(\nu/2)}x^{\frac{\nu - 2}{2}}e^{-\frac{x}{2}} \quad x > 0\\
0 \quad \text{elsewhere}.
\end{cases}
$$
\end{definition}
\begin{remark}The chi-square distribution is the Gamma distribution for $\alpha = \nu/2$ and $\beta = 2.$
\end{remark}
\begin{theorem}The mean and variance of the chi-square distribution are given by
$$
\mu = \nu
$$
$$
\sigma^2 = 2\nu.
$$
\end{theorem}
\begin{theorem}If $Z_i \sim N(0,1)$ are i.i.d and $X = \sum_{i=1}^{\nu}Z_{i}^{2}$, then $X \sim \chi_{\nu}^{2}.$
\end{theorem}
\begin{definition}(Beta Distribution). A random variable X has a Beta distribution and it is referred to as a Beta random variable if and only if its probability density is given by
$$
f_X(x; \alpha, \beta) = \begin{cases}
\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha - 1}(1 - x)^{\beta - 1} \quad x \in (0,1) \\
0 \quad \text{elsewhere}
\end{cases}
$$
where $\alpha > 0$ and $\beta > 0$.
\end{definition}
\begin{theorem}The mean and variance of the beta distribution are given by
$$
\mu = \frac{\alpha}{\alpha + \beta}
$$
$$
\sigma^2 = \frac{\alpha \beta}{(\alpha + \beta)^2(\alpha + \beta + 1)}.
$$
\end{theorem}
\begin{definition}(Normal Distribution). A random variable X has a normal distribution and it is referred to as a normal random variable if and only if its probability density is given by
$$
f(x; \mu, \sigma) = \frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{1}{2}(\frac{x - \mu}{\sigma})^2} \quad -\infty < x < \infty
$$
where $\sigma > 0.$
\end{definition}
\begin{theorem}(Linear Transformation of the Normal). Let $Z \sim N(0,1)$. Define $X = \mu + \sigma Z$. Then, $X \sim N(\mu, \sigma^2)$.
\end{theorem}
\begin{definition}(Standard Normal Distribution). The normal distribution with $\mu = 0$ and $\sigma = 1$ is referred to as the standard normal distribution.
\end{definition}
\begin{theorem}(Binomial Approximation To Normal). If X is a random variable having a binomial distribution with the parameters n and $\theta$, then the MGF of
$$
Z = \frac{X - n\theta}{\sqrt{n\theta(1 - \theta)}}
$$
approaches that of the standard normal distribution as $n \rightarrow \infty.$
\end{theorem}
\begin{lemma} Let X be a continuous nonnegative random variable. Then we have that
$$
E(X) = \int_{0}^{\infty}P(X > x)dx.
$$
\end{lemma}
\lecture{3}{Moment Generating Functions}
\section{Moment Generating Functions}
\subsection{Moment Generating Functions Introduction}
We are interested in MGFs for three reasons. An MGF is a real function that uniquely determines its associated probability distribution, its derivatives at zero are equal to the moments of the random variable, and it is useful for finding the distribution of sums of independent random variables.
\begin{definition_exam}{Moment Generating Function}{} Let X be a random variable, then $Y = e^{tX} \geq 0$, so $E(Y)$ is well defined. The MGF of the random variable X is defined as
$$
M(t) = E(e^{tX})
$$
for all t for which the right hand side is finite.
\end{definition_exam}
\begin{remark}We know that for t = 0, the MGF exists as $M(0) = 1 < \infty$. If X is a discrete RV, then the MGF is $M(t) = \sum_ie^{tx_i}P_X(x_i).$
\end{remark}
\begin{theorem_exam}{Computing moments with MGFs}{}If there exists a $\delta > 0$ such that $M(t) < \infty$ for all $t \in (-\delta,\delta)$, then for all $n \in \mathbb{N}$ the moments exist and $$M^{(n)}(0) = \mathbb{E}(X^n).$$ The MGF is infinitely differentiable at 0.
\end{theorem_exam}
\begin{proof}Assuming there is an interval in which we can interchange differentiation and expectation, we have
$$
\frac{d}{dt}E[e^{tX}] = E\bigg[\frac{d}{dt}e^{tX} \bigg] = E\bigg[Xe^{tX} \bigg].
$$
Then, if we let t = 0 in the above, we get
$$
E\bigg[X \bigg] = M'(0).
$$
\end{proof}
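For instance, the recipe can be checked on a Poisson random variable.
\begin{example}
Let $X \sim \text{Poisson}(\lambda)$. Then
$$
M(t) = E(e^{tX}) = \sum_{x=0}^{\infty} e^{tx}\frac{\lambda^x e^{-\lambda}}{x!} = e^{-\lambda}\sum_{x=0}^{\infty}\frac{(\lambda e^t)^x}{x!} = e^{\lambda(e^t - 1)},
$$
which is finite for all $t \in \mathbb{R}$. Differentiating, $M'(t) = \lambda e^t e^{\lambda(e^t - 1)}$, so $M'(0) = \lambda = E(X)$, recovering the mean of the Poisson distribution.
\end{example}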
\begin{theorem_exam}{Equality of distributions}{} Let F and G be CDFs and suppose that there exists $\delta > 0$ such that for all $t \in (-\delta, \delta)$, the MGFs $M_F(t) = M_G(t) < \infty$. Then $F = G$. It follows that all the moments of F and G exist and are equal.
\end{theorem_exam}
\begin{remark}The converse to the above theorem is false. All the moments of F and G can exist and be equal, yet $F \neq G.$
\end{remark}
\begin{proposition_exam}{Linear transformation of MGFs}{}Let X be a random variable possessing an MGF $M_X(t)$. Define the linear transformation $$Y = a + bX$$ where $a, b \in \mathbb{R}$ are two constants and $b \neq 0.$ Then the random variable Y possesses an MGF $M_Y(t)$ and $$M_Y(t) = \exp(at)M_X(bt).$$
\end{proposition_exam}
\begin{remark}Not every random variable possesses a moment generating function. However, every random variable possesses a characteristic function.
\end{remark}
\begin{definition}(Characteristic Function). Let X be a random variable. Let $i = \sqrt{-1}$ be the imaginary unit. The function $\phi: \mathbb{R} \rightarrow \mathbb{C}$ is defined by
$$
\phi_X(t) = \mathbb{E}[\exp(itX)]
$$
is called the characteristic function of X.
\end{definition}
\begin{proposition_exam}{Independence of MGFs}{} Let $X_1,...,X_n$ be n mutually independent random variables. Let Z be their sum: $$Z = \sum_{i=1}^nX_i.$$ Then, the MGF of Z is the product of the MGFs of $X_1,...,X_n$: $$M_Z(t) = \prod_{i=1}^nM_{X_{i}}(t).$$
\end{proposition_exam}
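For example, the proposition gives an immediate derivation of the binomial MGF from Bernoulli MGFs.
\begin{example}
Let $X_1,...,X_n$ be i.i.d Bernoulli($\theta$) random variables and $Z = \sum_{i=1}^nX_i$, so that $Z \sim \text{Bin}(n,\theta)$. Each $X_i$ has MGF $M_{X_i}(t) = 1 - \theta + \theta e^t$, so by the proposition
$$
M_Z(t) = \prod_{i=1}^nM_{X_i}(t) = (1 - \theta + \theta e^t)^n.
$$
Differentiating at $t = 0$ recovers $E(Z) = n\theta$.
\end{example}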
\begin{theorem}(Continuity Theorem). Let $F_n$ be CDFs with MGFs $M_n$, and let F be a CDF with MGF M, and suppose that there exists $\delta > 0$ such that $M_n(t) \rightarrow_n M(t)$ for all $t \in (-\delta, \delta)$. Then $F_n(x) \rightarrow F(x)$ for all x where F is continuous at x.
\end{theorem}
\lecture{4}{Convergence Concepts}
\section{Moment Generating Functions}
\subsection{Convergence Concepts}
We are interested in defining what it means for random variables to converge.
\begin{definition_exam}{Convergence in probability}{} Let $\{X_n\}$ and X be jointly distributed random variables. We say that $X_n$ converges to X in probability, written $X_n \xrightarrow{p} X$, if for all $\epsilon > 0$ we have $$\lim_{n \rightarrow \infty}P(|X_n - X| > \epsilon) = 0.$$
\end{definition_exam}
\begin{definition_exam}{Convergence Almost Surely}{} Let $\{X_n\}$ and X be jointly distributed random variables. We say that $X_n$ converges to X strongly or almost surely, written $X_n \xrightarrow{a.s.} X$, if $$P(\lim_{n \rightarrow \infty}|X_n - X| = 0) = 1.$$
\end{definition_exam}
\begin{remark}Recall that random variables are real-valued functions defined on the sample space $\Omega$. Let $s \in \Omega$ be sample points. A sequence of functions $X_n(s)$ converges to X(s) for all $s \in \Omega$ except for $s \in \mathcal{N}$ where $\mathcal{N} \subset \Omega$ and $P(\mathcal{N}) = 0.$ That is, we have pointwise convergence of a sequence of functions except convergence need not occur on a set with probability 0.
\end{remark}
\begin{theorem}Convergence almost surely $\xrightarrow{a.s.}$ implies convergence in probability $\xrightarrow{p}.$
\end{theorem}
\begin{definition_exam}{Convergence in distribution}{} Let $\{X_n\}$ and X be jointly distributed random variables. We say that $X_n \xrightarrow{d} X$ in distribution if
$$\lim_{n \rightarrow \infty} F_{X_{n}}(x) = F_X(x)$$
for all x where $F_X$ is continuous at x. We say that $X_n \xrightarrow{d} X.$
\end{definition_exam}
\begin{remark}Note that convergence in distribution is phrased in terms of the CDFs. Hence, it is the CDFs that converge, not the random variables, when we speak of convergence in distribution.
\end{remark}
\begin{theorem}Convergence in probability $\xrightarrow{p}$ implies convergence in distribution $\xrightarrow{d}.$
\end{theorem}
The CLT states that the \textbf{sample mean} has a distribution which is approximately normal with mean $\mu$ and variance $\sigma^2/n.$ That is, probability statements about the sample mean can be approximated using a normal distribution, \textbf{not the random variable itself.}
\begin{theorem_exam}{Central Limit Theorem}{} Suppose $X_i$ are i.i.d random variables with $\sigma^2 = Var(X_i) < \infty$ and $\mu = E(X_i)$. Then with $S_n = \sum_{i=1}^{n}X_i$ for all $x \in \mathbb{R}$
$$
P\Big(\frac{S_n - n\mu}{\sqrt{n}\sigma} \leq x\Big) \rightarrow \Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{\frac{-t^2}{2}}dt.
$$ \\ Let $Y_n = \frac{S_n - n\mu}{\sqrt{n}\sigma}$, then the CLT states that
$$
F_{Y_{n}}(x) \xrightarrow{d} F_Z(x)
$$
for all $x \in \mathbb{R}$ where $Z \sim N(0,1).$ In other words, $Y_n \xrightarrow{d} Z$ in distribution.
\end{theorem_exam}
\begin{theorem}(Markov's Inequality). Let X be a non-negative random variable and suppose $\mathbb{E}[X]$ exists. For any $t > 0$
$$
P(X > t) \leq \frac{\mathbb{E}(X)}{t}.
$$
\end{theorem}
\begin{theorem_exam}{Chebyshev's Inequality}{} Assume the random variable X has finite mean $\mu$ and finite variance $\sigma^2$. Then, for any real number $k > 0$,
$$
P(|X - \mu| > k\sigma) \leq \frac{1}{k^2}.
$$
\end{theorem_exam}
\begin{remark}This is a useful theorem for proving convergence in probability to a constant.
\end{remark}
\begin{theorem_exam}{Weak Law of Large Numbers}{} Let $X_1,...,X_n$ be a sequence of i.i.d random variables. Let $E(X_i) = \mu$ and $Var(X_i) = \sigma^2.$ Define the random variable $S_n = \frac{X_1 + ... + X_n}{n}$, then $$S_n \xrightarrow{p} \mu$$. That is, for any $\epsilon > 0$
$$
\lim_{n \rightarrow \infty}P(|S_n - \mu| \geq \epsilon) = 0.
$$
That is, $S_n \xrightarrow{p} \mu.$
\end{theorem_exam}
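In fact, under the finite-variance assumption above, the weak law follows directly from Chebyshev's inequality: since $E(S_n) = \mu$ and $Var(S_n) = \frac{\sigma^2}{n}$, for any $\epsilon > 0$
$$
P(|S_n - \mu| \geq \epsilon) \leq \frac{Var(S_n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2} \rightarrow 0 \quad \text{as } n \rightarrow \infty.
$$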
That is, the mean of a large sample is close to the mean of the distribution. Hence, the distribution of the sample mean becomes more concentrated around $\mu$ as n gets large.
\begin{theorem}(Strong Law of Large Numbers). Let $X_1,...,X_n$ be a sequence of i.i.d random variables with $E(X_i) = \mu$ and $Var(X_i) = \sigma^2.$ Define the random variable $S_n = \frac{X_1 + ... + X_n}{n}$. Then
$$
P(\lim_{n \rightarrow \infty}S_n = \mu) = 1,
$$
that is, $S_n \xrightarrow{a.s.} \mu.$
\end{theorem}
\lecture{5}{Further Limit Laws}
\section{Moment Generating Functions}
\subsection{Further Limit Laws}
\begin{theorem}Let $\{X_n\}$, $\{Y_n\}$ be sequences of random variables. If $X_n \xrightarrow{p} c$ and $Y_n \xrightarrow{p} d$, then
$$
X_n + Y_n \xrightarrow{p} c + d
$$
where c and d are constants.
\end{theorem}
\begin{theorem}Let $\{X_n\}$ be a sequence of random variables. Suppose for a function g(.), we have that $\lim_{x \rightarrow c}g(x) = \ell$ exists and is finite for a constant $\ell$. If $X_n \xrightarrow{p} c$, then
$$
g(X_n) \xrightarrow{p} \ell.
$$
\end{theorem}
\begin{corollary} If g(.) is \textbf{continuous} at c, then
$$
g(X_n) \xrightarrow{p} g(c).
$$
If h(.) is \textbf{differentiable} at c, then
$$
\frac{h(X_n) - h(c)}{X_n - c} \xrightarrow{p} h'(c).
$$
\end{corollary}
\lecture{6}{Asymptotics}
\section{Transformation of random variables}
\subsection{Asymptotics}
\begin{lemma}Let X have a CDF F(.) and let x be a continuity point of the CDF F(.). Suppose $X_n \xrightarrow{d} X$. Then
$$
P(X_n = x) \rightarrow 0.
$$
\end{lemma}
\begin{proposition}The sequence of random variables $X_1,X_2,...,$ converges in probability to a constant c if and only if the sequence converges in distribution to c. \\That is, the statement
$$
P(|X_n - c| > \epsilon) \rightarrow 0 \quad \text{for every }\epsilon > 0
$$
is equivalent to
$$
P(X_n \leq x) \rightarrow \begin{cases}
0 \quad \text{if } x < c\\
1 \quad \text{if } x > c.
\end{cases}
$$
\end{proposition}
\begin{theorem_exam}{Continuous Mapping Theorem}{} Let $X_n$ and X be a random variable. Let g(.) be a continuous function.
\begin{enumerate}
\item If $X_n \xrightarrow{p} X$, then $g(X_n) \xrightarrow{p} g(X)$.
\item If $X_n \xrightarrow{d} X$, then $g(X_n) \xrightarrow{d} g(X).$
\end{enumerate}
\end{theorem_exam}
\begin{remark}g(.) can actually be continuous \textbf{almost surely}, i.e. $P(x \in D) = 0$ where D is the set of discontinuity points of g.
\end{remark}
\begin{theorem_exam}{Slutsky's Theorem}{} Suppose $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{p} c$ where c is a constant. Then
\begin{enumerate}
\item $X_n + Y_n \xrightarrow{d} X + c$;
\item $X_nY_n \xrightarrow{d} cX$;
\item If $c \neq 0$, then $\frac{X_n}{Y_n} \xrightarrow{d} \frac{X}{c}.$
\end{enumerate}
\end{theorem_exam}
\begin{remark}This theorem is equivalent for when $Y_n \xrightarrow{d} c$.
\end{remark}
We recall some concepts from calculus to derive the delta method, a tool that helps us approximate the mean and variance of estimators.
\begin{definition}(Taylor Polynomial). If a function g(x) has derivatives of order r, that is, $g^{(r)}(x) = \frac{d^r}{dx^r}g(x)$ exists, then for any constant a, the Taylor polynomial of order r about a is
$$
T_r(x) = \sum_{i=0}^{r}\frac{g^{(i)}(a)}{i!}(x - a)^{i}.
$$
\end{definition}
\begin{theorem}If $g^{(r)}(a) = \frac{d^r}{dx^r}g(x)|_{x = a}$ exists, then
$$
\lim_{x \rightarrow a}\frac{g(x) - T_r(x)}{(x - a)^r} = 0.
$$
That is, the remainder from the approximation, $g(x) - T_r(x)$, always tends to 0 faster than the highest-order explicit term.
\end{theorem}
For our purposes, we are interested in the first-order Taylor series expansion of an estimator T(.) with the differentiable function g(T) about the parameter point $\theta$:
$$
g(t) \approx g(\theta) + \sum_{i=1}^{k}g_i'(\theta)(t_i - \theta_i).
$$
We can now look at a theorem to help us determine the limiting variance of an estimator.
\begin{theorem_exam}{Delta Method}{} Let $X_n$ be a sequence of random variables such that $\sqrt{n}(X_n - \theta) \xrightarrow{d} \mathcal{N}(0, \sigma^2)$ and g(.) is a function for which $g'(\theta)$ exists and is nonzero. Then
$$
\sqrt{n}(g(X_n) - g(\theta)) \xrightarrow{d} \mathcal{N}(0, \sigma^2[g'(\theta)]^2).
$$
\end{theorem_exam}
\begin{remark}Note that the Delta method requires that $X_n$ has a limiting normal distribution in order for us to apply the Delta method to find the limiting distribution of $g(X_n).$
\end{remark}
\begin{lemma}Suppose $\sqrt{n}(X_n - c) \xrightarrow{d} F(.)$ for a proper CDF F(.). Then $X_n \xrightarrow{p} c.$
\end{lemma}
\begin{definition}(Variance stabilising transformation). Suppose that the limiting variance is a function of an unknown parameter. A function $g(.)$ is a variance stabilising transformation if the limiting variance is no longer a function of the unknown parameter.
\end{definition}
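As an illustration of both the delta method and a variance stabilising transformation, consider the Poisson case.
\begin{example}
Let $X_1,...,X_n$ be i.i.d Poisson($\lambda$), so that by the CLT $\sqrt{n}(\bar{X}_n - \lambda) \xrightarrow{d} \mathcal{N}(0, \lambda)$; the limiting variance depends on the unknown $\lambda$. Taking $g(x) = \sqrt{x}$, so that $g'(\lambda) = \frac{1}{2\sqrt{\lambda}}$, the delta method gives
$$
\sqrt{n}(\sqrt{\bar{X}_n} - \sqrt{\lambda}) \xrightarrow{d} \mathcal{N}\Big(0, \lambda \cdot \frac{1}{4\lambda}\Big) = \mathcal{N}\Big(0, \frac{1}{4}\Big),
$$
so $g(x) = \sqrt{x}$ is a variance stabilising transformation for the Poisson mean.
\end{example}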
\lecture{7}{Joint, Marginal, and Conditional Distributions}
\section{Multivariate Distributions}
\subsection{Joint, Marginal, and Conditional Distributions}
\begin{definition}(Joint Probability Density Function). A bivariate function with values $f(x,y)$ defined over the xy-plane is called a joint probability density function of the continuous random variables X and Y if and only if
$$
P((X,Y) \in A) = \int\int_Af(x,y)\,dx\,dy
$$
for any region A in the xy-plane.
\end{definition}
\begin{theorem}A bivariate function can serve as the joint probability density function of a pair of random variables X and Y if and only if f(x,y) satisfies
\begin{enumerate}
\item $f(x,y) \geq 0$ for each pair of values (x,y) within its domain;
\item $\int\int f(x,y)\,dx\,dy = 1$, where the integral extends over the whole domain.
\end{enumerate}
\end{theorem}
\begin{definition}(Joint CDF). The joint CDF of X and Y is given by
$$
F_{X,Y}(x,y) = P(X \leq x, Y \leq y) = \int_{-\infty}^{x}\int_{-\infty}^{y}f(s,t)dtds
$$
for $-\infty < x < \infty$ and $-\infty < y < \infty$.
\end{definition}
\begin{theorem_exam}{Tonelli's Theorem}{} The iterated/repeated integral of a non-negative function is the same as the double integral
$$
\int_{-\infty}^x\int_{-\infty}^yf(s,t)dtds = \int\int_{B_{xy}}f(s,t)dsdt = \int_{-\infty}^{y}\int_{-\infty}^{x}f(s,t)dsdt.
$$
Fubini's extension states that for \textbf{any} integrable f, if one of the above three integrals is finite when f is replaced by $|f|$, then the equalities still hold.
\end{theorem_exam}
\begin{theorem}Assume the CDF function $F \in C^2$, then
$$
f(x,y) = \frac{\partial^2}{\partial x \partial y}F(x,y).
$$
\end{theorem}
\begin{definition}(Marginal Density) If X and Y are continuous random variables and f(x,y) is the value of their joint probability density at (x,y), the function given by
$$
g(x) = \int_{-\infty}^{\infty}f(x,y)dy \quad -\infty < x < \infty
$$
is called the marginal density of X. Correspondingly, the function given by
$$
h(y) = \int_{-\infty}^{\infty}f(x,y)dx \quad -\infty < y < \infty
$$
is called the marginal density of Y.
\end{definition}
\begin{remark}For a multivariate joint probability distribution, we can also speak of the $\textbf{joint marginal distribution}.$
\end{remark}
\begin{remark}We can derive the marginal distribution from the joint distribution but \textbf{not the converse}.
\end{remark}
\begin{definition}(Conditional Distribution). If f(x,y) is the value of the joint probability distribution of the random variables X and Y at (x,y) and h(y) is the value of the marginal distribution of Y at y, the function given by
$$
f(x|y) = \frac{f(x,y)}{h(y)} \quad h(y) \neq 0
$$
for $-\infty < x < \infty$ is called the conditional density of X given Y = y. Correspondingly, if g(x) is the value of the marginal density of X at x, the function given by
$$
w(y|x) = \frac{f(x,y)}{g(x)} \quad g(x) \neq 0
$$
for $- \infty < y < \infty$ is called the conditional density of Y given X = x.
\end{definition}
\begin{definition}(Independence of random variables). If $f(x_1,x_2,...,x_n)$ is the value of the joint probability distribution of the random variables $X_1,X_2,...,X_n$ at $(x_1,x_2,....,x_n)$ and $f_i(x_i)$ is the value of the marginal distribution of $X_i$ at $x_i$ for i = 1,2,...,n, then the n random variables are independent if and only if
$$
f(x_1,...,x_n) = f_1(x_1)f_2(x_2)...f_n(x_n)
$$
for all $(x_1,x_2,...,x_n)$ within their range.
\end{definition}
\begin{definition}(Conditional Expectation). We define the conditional expectation
$$
E(X|Y=y) = \begin{cases}
\sum xf_{X|Y}(x|y) \quad \text{X is discrete}\\
\int xf_{X|Y}(x|y)dx \quad \text{X is continuous}.
\end{cases}
$$
\end{definition}
\begin{definition}(Conditional Variance). We define the conditional variance as
$$
V(Y|X=x) = \int (y - E(Y|X=x))^2f(y|x)dy.
$$
\end{definition}
\begin{theorem}(Law of total expectation). We define the law of total expectation as
$$
E(Y) = E[E(Y|X)].
$$
\end{theorem}
\begin{theorem}(Law of total variance). We define the law of total variance as
$$
V(Y) = E[V(Y|X)] + V(E[Y|X]).
$$
\end{theorem}
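A short example shows how the two laws are used together.
\begin{example}
Let $N \sim \text{Poisson}(\lambda)$ and, given $N = n$, let $Y \mid N = n \sim \text{Bin}(n, p)$. Then
$$
E(Y) = E[E(Y \mid N)] = E(Np) = \lambda p
$$
and
$$
V(Y) = E[V(Y \mid N)] + V(E[Y \mid N]) = E[Np(1-p)] + V(Np) = \lambda p(1-p) + p^2\lambda = \lambda p.
$$
\end{example}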
\subsection{Multivariate Distribution}
\begin{definition_exam}{Bivariate Normal Distribution}{} A pair of random variables X and Y have a bivariate normal distribution and they are referred to as jointly normally distributed random variables if and only if their joint probability density is given by
$$
f(x,y) = \frac{1}{2\pi \sigma_1\sigma_2\sqrt{1 - \rho^2}}\exp\left(-\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x - \mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x - \mu_1}{\sigma_1}\right)\left(\frac{y - \mu_2}{\sigma_2}\right) + \left(\frac{y - \mu_2}{\sigma_2}\right)^2\right]\right)
$$
for $x \in (-\infty, \infty)$ and $y \in (-\infty, \infty)$, where $\sigma_1 > 0$, $\sigma_2 > 0$, and $-1 < \rho < 1.$
\end{definition_exam}
\begin{theorem}If X and Y have a bivariate normal distribution, the conditional density of Y given X = x is a normal distribution with the mean
$$
\mu_{Y|x} = \mu_2 + \rho \frac{\sigma_2}{\sigma_1}(x - \mu_1)
$$
and the variance
$$
\sigma_{Y|x}^2 = \sigma_{2}^{2}(1 - \rho^2)
$$
and the conditional density of X given Y = y is a normal distribution with the mean
$$
\mu_{X|y} = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2)
$$
and the variance
$$
\sigma_{X|y}^{2} = \sigma_{1}^{2}(1 - \rho^2).
$$
\end{theorem}
\begin{theorem}If two random variables have a bivariate normal distribution, they are independent if and only if $\rho = 0.$
\end{theorem}
\begin{theorem}If (X,Y) is a bivariate normal random variable, then $aX + bY$ is a normal random variable for constants a and b.
\end{theorem}
\lecture{8}{Sampling Distributions}
\section{Multivariate Distributions}
\subsection{Sampling Distributions}
\begin{definition}(Random Sample). If $X_1,...,X_n$ are i.i.d random variables, we say that they constitute a random sample from the infinite population given by their common distribution. We can write their joint distribution as
$$
f(x_1,...,x_n) = \prod_{i=1}^{n}f(x_i).
$$
\end{definition}
\begin{definition}(Statistic). A statistic T(.) is a random variable that is a function of a set of random variables $X_1,...,X_n$ that constitute a random sample.
\end{definition}
\begin{proposition}A statistic is a random variable and hence has a sampling distribution.
\end{proposition}
\begin{definition}(Sample Mean and Sample Variance). If $X_1,...,X_n$ are a random sample, then the sample mean is given by
$$
\bar{X} = \frac{1}{n}\sum_{i=1}^{n}X_i
$$
and the sample variance is given by
$$
S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2.
$$
\end{definition}
The values of sampling statistics can be expected to vary from sample to sample, hence we find the distribution of such statistics.
\begin{definition}(Sampling Distribution). The distribution of the sampling statistics is known as the \textbf{sampling distribution.}
\end{definition}
\begin{theorem}Let $X_1,...,X_n$ be a random sample with mean $\mu$ and variance $\sigma^2.$ Then,
$$
E(\bar{X}) = \mu
$$
$$
Var(\bar{X}) = \frac{\sigma^2}{n}.
$$
\end{theorem}
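The variance statement follows in one line from the independence of the observations:
$$
Var(\bar{X}) = Var\bigg(\frac{1}{n}\sum_{i=1}^{n}X_i\bigg) = \frac{1}{n^2}\sum_{i=1}^{n}Var(X_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.
$$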
\begin{definition}(Standard Error of the mean). Let $\bar{X}$ be the sample mean. Then, let $\sigma_{\bar{X}}^{2} = Var(\bar{X})$. We define $\sigma_{\bar{X}}$ as the standard error of the mean.
\end{definition}
\begin{proposition}Let $X_1,...,X_n$ be a random sample from $N(\mu, \sigma^2)$. Then the sample mean $\bar{X}$ and sample variance $S^2$ are independent.
\end{proposition}
\begin{proposition_exam}{Sampling distribution of variance}{}Let $X_1,...,X_n$ be a random sample from $N(\mu, \sigma^2)$. The transformed sample variance has the distribution
$$
\frac{(n-1)}{\sigma^2}S^2 \sim \chi_{n-1}^2.
$$
Furthermore,
$$
E(\frac{(n-1)}{\sigma^2}S^2) = n - 1
$$
$$
E(S^2) = \sigma^2
$$
that is, $S^2$ is an unbiased estimator of $\sigma^2.$
\end{proposition_exam}
\begin{definition}(T-distribution). Let $Z \sim N(0,1)$ and $Y \sim \chi_{v}^{2}$. Let Z and Y be independent. Then, we say that a random variable T
$$
T = \frac{Z}{\sqrt{\frac{Y}{v}}}
$$
has a t distribution $T \sim t_v$.
\end{definition}
\begin{definition}(F distribution). Let $U \sim \chi_{v_{1}}^2$ and $V \sim \chi_{v_{2}}^2$ where U and V are independent. Then we can construct the F distribution by defining
$$
F = \frac{\frac{U}{v_1}}{\frac{V}{v_2}}
$$
giving us $F \sim F_{v_{1}, v_{2}}$.
\end{definition}
\lecture{9}{Order Statistics}
\section{Multivariate Distributions}
\subsection{Order Statistics}
\begin{definition_exam}{Order Statistics}{} Let $X_1,...,X_n$ be an i.i.d sample from a population with the same CDF F. Define $Y_j$ to be the jth smallest value of $X_1,...,X_n$. The ordered values $Y_1 < Y_2 ... < Y_n$ are called the order statistics.
\end{definition_exam}
\begin{remark}The order statistics are a permutation of the original dataset. Furthermore, the distribution of the order statistics depends on the sample size n.
\end{remark}
\begin{proposition}The CDF of the k-th order statistic is given by
$$
F_{Y_{k}}(x) = \sum_{j=k}^n{n \choose j}[F(x)]^j[1 - F(x)]^{n-j}.
$$
The PDF of the k-th order statistic is given by
$$
f_{Y_{k}}(x) = k{n \choose k}[F(x)]^{k-1}[1 - F(x)]^{n-k}f(x).
$$
\end{proposition}
\begin{theorem}Let $Y_n$ be the maximum statistic. The CDF of the maximum $Y_n$ is given by
$$
F_{Y_{n}}(x) = [F(x)]^n.
$$
The PDF is given by
$$
f_{Y_{n}}(x) = n[F(x)]^{n-1}f(x).
$$
\end{theorem}
\begin{theorem}Let $Y_1$ be the minimum statistic. The CDF of the minimum $Y_1$ is given by
$$
F_{Y_{1}}(x) = 1 - [1 - F(x)]^n.
$$
The PDF is given by
$$
f_{Y_{1}}(x) = n[1 - F(x)]^{n-1}f(x).
$$
\end{theorem}
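As a worked illustration (the uniform model is chosen here only because its CDF is simple), take a random sample from U(0,1).
\begin{example}Let $X_1,...,X_n$ be i.i.d U(0,1), so $F(x) = x$ and $f(x) = 1$ on $[0,1]$. The formulas above give
$$
F_{Y_{n}}(x) = x^n, \quad f_{Y_{n}}(x) = nx^{n-1}, \quad E(Y_n) = \frac{n}{n+1}
$$
for the maximum, and
$$
F_{Y_{1}}(x) = 1 - (1-x)^n, \quad f_{Y_{1}}(x) = n(1-x)^{n-1}, \quad E(Y_1) = \frac{1}{n+1}
$$
for the minimum.
\end{example}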
\begin{definition}(Joint PDF of 2 Order Statistics). Let $\{Y_i\}$ be the order statistics. Then, for any $i < j$, the joint PDF of 2 order statistics is
$$
f_{Y_{i},Y_{j}}(y_i,y_j) = \frac{n!}{(i-1)!(j-i-1)!(n-j)!}F(y_i)^{i-1}[F(y_j) - F(y_i)]^{j-i-1}(1 - F(y_j))^{n-j}f(y_i)f(y_j).
$$
\end{definition}
\begin{definition}(Joint PDF of all Order Statistics). Let $\{Y_i\}$ be the order statistics. Then, the joint PDF of all the order statistics is given by
$$
f_{Y_{(1)},Y_{(2)},...,Y_{(n)}}(y_1,y_2,...,y_n) = \begin{cases}
n!f(y_1)...f(y_n) \quad -\infty < y_1 < .... < y_n < \infty \\
0 \quad \text{otherwise.}
\end{cases}
$$
\end{definition}
\lecture{10}{Transformation of random variables}
\section{Transformation of random variables}
\subsection{Transformation of random variables}
Suppose we are given a set of random variables $X_1,...,X_n$ and we are interested in the probability distribution or density of $Y = g(X_1,...,X_n)$. The 3 techniques we can use are
\begin{enumerate}
\item Distribution Function Technique;
\item Transformation Technique;
\item Moment-Generating Function Technique.
\end{enumerate}
\begin{theorem}(Distribution Function Technique). We obtain the CDF of Y $F_Y(y) = P(Y \leq y)$ and then differentiate with respect to y to find the probability density $f_Y(y) = \frac{dF(y)}{dy}$.
\end{theorem}
\begin{remark}Typically this is used for scalar-valued functions and for continuous distributions.
\end{remark}
\begin{definition}(Convolution). Let X and Y be independent random variables. Define the random variable $T = X + Y.$ Then the convolution is defined as
\begin{enumerate}
\item $P(T=t) = \sum_XP(X=x, Y=t-x) = \sum_XP_X(x)P_Y(t-x)$ (Discrete)
\item $f_T(t) = \int_{-\infty}^{\infty}f_X(x)f_Y(t-x)dx$ (Continuous).
\end{enumerate}
\end{definition}
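For a concrete continuous convolution, consider summing two independent exponentials (the rate $\lambda$ below is arbitrary and chosen only for illustration).
\begin{example}Let X and Y be independent $\text{Exponential}(\lambda)$ random variables and $T = X + Y$. Since both densities vanish on the negative half-line, the convolution integral runs from 0 to t:
$$
f_T(t) = \int_{0}^{t}\lambda e^{-\lambda x}\lambda e^{-\lambda(t-x)}dx = \lambda^2 e^{-\lambda t}\int_{0}^{t}dx = \lambda^2 t e^{-\lambda t}, \quad t \geq 0,
$$
which is the $\text{Gamma}(2, \lambda)$ density.
\end{example}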
\lecture{11}{Transformation of random variables}
\section{Transformation of random variables}
\subsection{Univariate Transformation of random variables}
\begin{theorem}Suppose $f'(x) > 0$ in $(a,b).$ Then f is strictly increasing in (a,b); let g be its inverse function. Then g is differentiable and
$$
g'(f(x)) = \frac{1}{f'(x)}.
$$
\end{theorem}
\begin{theorem_exam}{Transformation techniques}{}Let X be a continuous random variable having PDF $f_X(.)$. Suppose that g(x) is differentiable and strictly monotonic for all x such that $f_X(x) \neq 0.$ Then the random variable $Y = g(X)$ has the PDF
$$
f_Y(y) = \begin{cases}
f_X(g^{-1}(y))|\frac{d}{dy}g^{-1}(y)| \quad \text{if } y = g(x) \text{ for some } x\\
0 \quad \text{otherwise}
\end{cases}
$$
\end{theorem_exam}
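A standard illustration of the transformation technique (the uniform starting point is chosen just for simplicity):
\begin{example}Let $X \sim U(0,1)$ and $Y = g(X) = -log(X)$. The map g is strictly decreasing on (0,1) with inverse $g^{-1}(y) = e^{-y}$, so
$$
f_Y(y) = f_X(g^{-1}(y))\bigg|\frac{d}{dy}g^{-1}(y)\bigg| = 1 \cdot e^{-y}, \quad y > 0,
$$
that is, $Y \sim \text{Exponential}(1)$.
\end{example}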
\begin{theorem}Suppose $(X_1,X_2)$ has the joint PDF $f(x_1,x_2)$ and define $Y = U(X_1,X_2).$ W.L.O.G, if we fix $X_2$, then $U(.,X_2)$ satisfies the conditions required by the one-variable transformation technique. Then we can write the joint PDF of $(Y,X_2)$ as
$$
g(y,x_2) = f(x_1,x_2)|\frac{\partial x_1}{\partial y}|.
$$
Then we can marginalise out $X_2$ to arrive at
$$
f_Y(y) = \int_{-\infty}^{\infty}g(y,x_2)dx_2.
$$
\end{theorem}
\lecture{12}{Multivariate Transformation of random variables}
\section{Transformation of random variables}
\subsection{Multivariate Transformation of random variables}
\begin{definition}(Jacobian). Let $(X_1,X_2)$ have the joint PDF $f(x_1,x_2)$. Let $g(y_1,y_2) = f(u_1(x_1,x_2), u_2(x_1,x_2))$. The Jacobian is then
$$
J =
\begin{bmatrix}\frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2}\\
\frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2}
\end{bmatrix}.
$$
\end{definition}
\begin{theorem_exam}{Multivariate Transformation}{} Define $T: D \subset \mathbb{R}^2 \rightarrow R \subset \mathbb{R}^2$ where D and R are open sets. Suppose that T is bijective and differentiable on D with a non vanishing Jacobian $J_T \neq 0$. Suppose $(X,Y)$ is jointly continuous with density $f_{XY}$ which vanishes outside of D and let $(U,V) = T(X,Y)$. Then $(U,V)$ is jointly continuous and for all $(u,v) \in R$, we have the density
$$
f_{UV}(u,v) = f_{XY}(T^{-1}(u,v))|J_{T^{-1}}(u,v)|.
$$
\end{theorem_exam}
\begin{proof} Take $\psi = f_{XY}$, then for any open $B \subset R$, we have that
$$
P((U,V) \in B) = P((X,Y) \in A = T^{-1}(B))
$$
$$
= \int\int_Af_{XY}(x,y)dxdy
$$
$$
= \int\int_{B = T(A)}f_{XY}(T^{-1}(u,v))|J_{T^{-1}}(u,v)|dudv.
$$
Since this holds for all $B \subset R$, (U,V) has a density which is given by
$$
f_{UV}(u,v) = f_{XY}(T^{-1}(u,v))|J_{T^{-1}}(u,v)|.
$$
\end{proof}
\begin{remark}The Jacobian takes into account that the density increases (decreases) after the linear transformation and hence scales the density down (up) to compensate.
\end{remark}
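To see the Jacobian at work in two dimensions, here is a small worked case (the standard normal inputs are chosen purely for illustration).
\begin{example}Let $X_1, X_2$ be i.i.d N(0,1) and set $U = X_1 + X_2$, $V = X_1 - X_2$. The inverse transform is $x_1 = \frac{u+v}{2}$, $x_2 = \frac{u-v}{2}$, with Jacobian
$$
J_{T^{-1}} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2}\end{bmatrix}, \quad |det J_{T^{-1}}| = \frac{1}{2}.
$$
Since $x_1^2 + x_2^2 = \frac{u^2 + v^2}{2}$,
$$
f_{UV}(u,v) = \frac{1}{2\pi}e^{-\frac{u^2+v^2}{4}}\cdot\frac{1}{2} = \bigg(\frac{1}{\sqrt{4\pi}}e^{-\frac{u^2}{4}}\bigg)\bigg(\frac{1}{\sqrt{4\pi}}e^{-\frac{v^2}{4}}\bigg),
$$
so U and V are independent N(0,2) random variables.
\end{example}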
\begin{claim}Suppose $X_1,X_2,...,X_n \sim F_X$ and $\textbf{Y} = T(\textbf{X})$ where T is bijective from $D \subset \mathbb{R}^n \rightarrow R \subset \mathbb{R}^n$. Denote $\textbf{Y} \sim G_{\textbf{Y}}$, for any $\textbf{Z} \sim G_{\textbf{Y}}$,
$$
T^{-1}\textbf{Z} \sim F_{\textbf{X}}.
$$
\end{claim}
\lecture{13}{Multivariate Jacobian Technique}
\section{Multivariate Jacobian}
\subsection{Multivariate Jacobian}
\begin{theorem}Let X and Y be 2 vectors of random variables, related by an invertible transformation
$$
Y_1 = y_1(X), \quad Y_2 = y_2(X), \quad ... \quad Y_n = y_n(X).
$$
Then the joint PDF of X is related to the joint PDF of Y via
$$
f_Y(y) = f_X(x)|det J|
$$
where the inverse transforms are
$$
X_1 = x_1(Y), \quad X_2 = x_2(Y), \quad ... \quad X_n = x_n(Y)
$$
and the Jacobian matrix is
$$
J = \begin{bmatrix}
\frac{\partial x_1}{\partial y_1} & ... & \frac{\partial x_1}{\partial y_n}\\
... & ... & ... \\
\frac{\partial x_n}{\partial y_1} & ... & \frac{\partial x_n}{\partial y_n}
\end{bmatrix}
$$
\end{theorem}
\lecture{14}{Sufficient Statistics}
\section{Exponential Families}
\subsection{Information Theory (Background)}
Information theory will make the notion of sufficiency a lot more intuitive. We work with discrete random variables for ease.
\begin{definition}(Entropy). The entropy H(X) of a discrete random variable X is defined by
$$
H(X) = -\sum_{x \in \mathcal{X}}p_X(x)log_2p_X(x).
$$
The unit of measurement for entropy is the \textbf{bit}.
\end{definition}
\begin{remark}Entropy measures the uncertainty of a random variable. That is, what is the amount of information required on average to describe the random variable. Note that entropy only depends on the probabilities of the PMF rather than the actual values that the random variable takes on.
\end{remark}
\begin{lemma}We have that
$$
H(X) \geq 0.
$$
\end{lemma}
Recall that if g(X) is a function of a random variable, then
$$
E(g(X)) = \sum_{x \in \mathcal{X}}g(x)p_X(x).
$$
Hence, we can let $g(X) = \frac{1}{p(X)}$ and then
$$
H(X) = E_plog(\frac{1}{p(X)}).
$$
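For intuition, here is the simplest possible case (a single Bernoulli trial, chosen only as an illustration).
\begin{example}If $X \sim \text{Bernoulli}(p)$, then
$$
H(X) = -p\,log_2\,p - (1-p)log_2(1-p),
$$
which equals 1 bit when $p = \frac{1}{2}$ and 0 bits when $p \in \{0,1\}$ (with the convention $0\,log_2\,0 = 0$): a fair coin is maximally uncertain, whereas a deterministic one carries no information.
\end{example}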
\begin{definition}(Joint Entropy). The joint entropy H(X,Y) of a pair of discrete random variables (X,Y) with a joint distribution p(x,y) is defined as
$$
H(X,Y) = -\sum_{x \in \mathcal{X}}\sum_{y \in \mathcal{Y}}p(x,y)logp(x,y)
$$
\end{definition}
\begin{definition}(Conditional Entropy). If $(X,Y) \sim P(x,y)$, the conditional entropy $H(Y|X)$ is defined as
$$
H(Y|X) = \sum_{x \in \mathcal{X}}p(x)H(Y|X=x) = -Elogp(Y|X).
$$
\end{definition}
\begin{theorem}(Chain rule).
$$
H(X,Y) = H(X) + H(Y|X).
$$
\end{theorem}
\begin{corollary}
$$
H(X,Y|Z) = H(X|Z) + H(Y|X,Z).
$$
\end{corollary}
\begin{definition}(Relative Entropy/Kullback-Leibler Distance). The relative entropy between two probability mass functions p(x) and q(x) is defined as
$$
D(p||q) = \sum_{x \in \mathcal{X}}p(x)log\frac{p(x)}{q(x)} = E_plog\frac{p(X)}{q(X)}.
$$
\end{definition}
\begin{remark}Relative entropy measures the distance between two distributions.
\end{remark}
\begin{definition}(Mutual Information). Consider two random variables X and Y with a joint PMF $p(x,y)$ and marginal PMFs p(x) and p(y). The mutual information $I(X;Y)$ is the relative entropy between the joint distribution and the product distribution p(x)p(y);
$$
I(X;Y) = \sum_{x \in \mathcal{X}}\sum_{y \in \mathcal{Y}}p(x,y)log\frac{p(x,y)}{p(x)p(y)}.
$$
\end{definition}
\begin{remark}Mutual information is a measure of the amount of information that one random variables contains about another random variable. It is the reduction in the uncertainty of one random variable due to knowledge of another.
\end{remark}
\begin{lemma}We can express mutual information as
$$
I(X;Y) = H(X) - H(X|Y).
$$
\end{lemma}
Mutual information is therefore the reduction in the uncertainty of X due to knowledge of Y.
\begin{definition}The conditional mutual information of random variables X and Y given Z is defined by
$$
I(X;Y|Z) = H(X|Z) - H(X|Y,Z).
$$
\end{definition}
\begin{theorem}(Information Inequality). Let p(x) and q(x), $x \in \mathcal{X}$, be two PMFs. Then the relative entropy
$$
D(p||q) \geq 0
$$
with equality if and only if $p(x) = q(x)$ for all x.
\end{theorem}
\begin{corollary}For any two random variables X and Y
$$
I(X;Y) \geq 0
$$
with equality if and only if X and Y are independent.
\end{corollary}
\begin{theorem}Conditioning reduces entropy
$$
H(X|Y) \leq H(X)
$$
with equality if and only if X and Y are independent.
\end{theorem}
\begin{definition}(Markov Chain). Random variables X,Y,Z are said to form a Markov chain in that order $(X \rightarrow Y \rightarrow Z)$ if the conditional distribution of Z depends only on Y and is conditionally independent of X. Specifically, the joint PMF can be written as
$$
p(x,y,z) = p(x)p(y|x)p(z|y).
$$
\end{definition}
\begin{theorem}(Data-processing inequality). If $X \rightarrow Y \rightarrow Z$, then
$$
I(X;Z) \leq I(X;Y).
$$
\end{theorem}
\begin{definition}Suppose we have a family of PMFs $\{f_{\theta}(x)\}$ indexed by $\theta$ and let X be any sample from a distribution in this family. Let T(X) be any statistic. Then $\theta \rightarrow X \rightarrow T(X)$ is a Markov Chain and by the data-processing inequality
$$
I(\theta;T(X)) \leq I(\theta; X)
$$
\end{definition}
\subsection{Sufficient Statistics}
In this section, we work with the intuitive idea that discarding irrelevant data can never hurt performance; in fact, irrelevant data may actually impair performance. The advantage of this is that it makes inference a lot easier to do.\\
\begin{definition}(Statistic). A statistic $T: X \rightarrow
\mathbb{R}^n$ is a function of the data.
\end{definition}
Suppose we had a random sample $\tilde{X} = (X_1,...,X_n)$ whose distribution depends on $\theta$. We want to estimate the parameter $\theta$ using $\tilde{X}$ using a function $T(\tilde{X})$ without losing information about $\theta$.
\begin{definition_exam}{Likelihood Function}{} We have a random sample of size n from a distribution with PDF $f(x;\theta)$. Then the likelihood function is
$$
\ell(x;\theta) = \prod_{i=1}^nf(x_i;\theta)
$$
where $\tilde{x} = (x_1,...,x_n)$ are observed values.
\end{definition_exam}
Hence, a function T(X) is a \textbf{sufficient statistic} if
$$
I(\theta;T(X)) = I(\theta;X)
$$
as no information is lost. Hence sufficient statistics preserve mutual information.
\begin{definition_exam}{Sufficient Statistic}{} A statistic $\tilde{T} = \tilde{T}(\tilde{X})$ is sufficient for a family of distributions if and only if the conditional distribution of $\tilde{X}$ given $\tilde{T}(\tilde{X})$ is independent of the parameters.\newline In general, if $X_1,...,X_n$ are random samples from the discrete distribution with PMF $f(x;\theta)$, the conditional probability of $\tilde{X} = \tilde{x}$ given $\tilde{T} = \tilde{t}$ is
$$
f(\tilde{X}; \theta | \tilde{T} = \tilde{t}) = \frac{\prod_{i=1}^nf(x_i; \theta)}{f_T(\tilde{t}; \theta)}
$$
where we require the ratio of the likelihood and marginal distribution of T to be independent of the parameter $\theta$.
\end{definition_exam}
If we can express the likelihood with just the parameter $\theta$ and statistic T(x), then T(x) is a sufficient statistic. In other words, a statistic T is sufficient if for all t, the conditional distribution $X|T(x) = t$ does not depend on the parameter $\theta$.\\
However, it may be difficult to compute the conditional distribution which leads us to the next theorem.
\begin{theorem_exam}{Neyman Factorisation Theorem}{} Let $f(x; \theta)$ be the PDF of a random sample $\tilde{X} = X_1,...,X_n$. Let $\tilde{T} = \tilde{T}(\tilde{X})$ be a statistic. Then, $\tilde{T}(\tilde{X})$ is a sufficient statistic for $\theta$ if and only if the \textbf{likelihood} $\ell(\tilde{x}; \theta)$ can be written in the form
$$
\ell(\tilde{x}; \theta) = g(\tilde{T}(\tilde{x}); \theta)h(\tilde{x})
$$
where $h(\tilde{x})$ is independent of $\theta$.
\end{theorem_exam}
\begin{remark}Note that the first factor $g(\tilde{T}(\tilde{x}); \theta)$ may depend on $\theta$ and possibly on $\tilde{x}$, but only through $\tilde{T}(\tilde{x})$.
\end{remark}
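A standard factorisation example (the Poisson family is used here only for illustration):
\begin{example}Let $X_1,...,X_n$ be i.i.d $\text{Poisson}(\theta)$. The likelihood is
$$
\ell(\tilde{x};\theta) = \prod_{i=1}^{n}\frac{e^{-\theta}\theta^{x_i}}{x_i!} = \underbrace{e^{-n\theta}\theta^{\sum_{i=1}^{n}x_i}}_{g(\tilde{T}(\tilde{x}); \theta)}\underbrace{\prod_{i=1}^{n}\frac{1}{x_i!}}_{h(\tilde{x})},
$$
so by the factorisation theorem $\tilde{T}(\tilde{X}) = \sum_{i=1}^{n}X_i$ is a sufficient statistic for $\theta$.
\end{example}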
So why do we care about sufficient statistics? If we had a sufficient statistic, when computing the likelihood function, instead of evaluating n PDFs, we can evaluate a single PDF with the sufficient statistic placed inside.
\begin{theorem_exam}{Sufficiency Principle}{} If T(\textbf{X}) is a sufficient statistic for $\theta$, then any inference about $\theta$ should depend on the sample \textbf{X} only through the value T(\textbf{X}). That is, if \textbf{x} and \textbf{y} are two sample points such that $T(\textbf{x}) = T(\textbf{y})$, then the inference about $\theta$ should be the same whether $\textbf{X} = \textbf{x}$ or $\textbf{X} = \textbf{y}$ is observed.
\end{theorem_exam}
We now present two notable examples of sufficient statistics.
\begin{lemma}(Max of uniform). Let $X_1,...,X_n$ be i.i.d according to the uniform distribution $U(0,\theta).$ Then, $T(x) = \max(X_1,...,X_n)$ is a sufficient statistic.
\end{lemma}
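A short sketch of why this holds, via the factorisation theorem:
$$
\ell(\tilde{x};\theta) = \prod_{i=1}^{n}\frac{1}{\theta}I(0 \leq x_i \leq \theta) = \underbrace{\theta^{-n}I\big(\max_i x_i \leq \theta\big)}_{g(T(\tilde{x});\theta)}\underbrace{I\big(\min_i x_i \geq 0\big)}_{h(\tilde{x})},
$$
so the likelihood depends on the data only through $T(\tilde{x}) = \max_i x_i$.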
\begin{lemma}(Order Statistics). Let $X_1,...,X_n$ be i.i.d with any model. Then, the order statistics $T = X_{(1)} \leq X_{(2)} \leq ... \leq X_{(n)}$ are sufficient statistics.
\end{lemma}
Finally, note however that reduction via sufficiency can also increase the computational complexity of inference, in some instances even turning a computationally tractable inference problem into
an intractable one.
\lecture{15}{Exponential Families}
\section{Exponential Families}
\subsection{Exponential Families}
Exponential families are of particular interest to us because many common distributions are exponential families. Examples include the normal, binomial, and Poisson. Furthermore, exponential families are closely linked to the notion of sufficiency.
\begin{definition_exam}{Exponential family}{} A one parameter exponential family is a set of probability distributions that can be written in the form
$$
f(x; \theta) = e^{\eta(\theta)T(x) - \phi(\theta)}h(x)I_A(x)
$$
for $x \in \mathbb{R}^n$ and $\theta \in \Theta \subseteq \mathbb{R}$. We have that $\eta(.), T(.), \phi(.), h(.)$ are real-valued functions. $I_A(.)$ indicates the support of the distribution and does not depend on $\theta$.
\end{definition_exam}
\begin{remark}$\eta, T, \phi, h$ are not unique. $h(x) \geq 0$ is known as a base (carrier) measure. $\phi(\theta)$ is a normalising constant.
\end{remark}
\begin{lemma}The parameterisation of exponential families is \textbf{not} unique.
\end{lemma}
\begin{proposition_exam}{Sufficiency of Exponential family}{}The exponential family always has a sufficient statistic.
\end{proposition_exam}
\begin{lemma}The uniform distribution is not part of the exponential family as its support depends on the parameters $(a,b).$
\end{lemma}
\begin{proposition}(Exponential Distribution). The exponential distribution has the density
$$
f(x;\theta) = \theta e^{-\theta x}I_{x \geq 0} = e^{-\theta x + log(\theta)}I_{x \geq 0}
$$
which yields a 1-dimensional exponential family with
\begin{enumerate}
\item $\eta(\theta) = -\theta$
\item $T(x) = x$
\item $\phi(\theta) = -log(\theta)$
\item $h(x) = I_{x \geq 0}.$
\end{enumerate}
\end{proposition}
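Another familiar member of the family, written out here as an extra illustration:
\begin{example}The Bernoulli($\theta$) PMF can be written as
$$
f(x;\theta) = \theta^x(1-\theta)^{1-x} = e^{x\,log\frac{\theta}{1-\theta} + log(1-\theta)}I_{\{0,1\}}(x),
$$
which is a one-parameter exponential family with $\eta(\theta) = log\frac{\theta}{1-\theta}$, $T(x) = x$, $\phi(\theta) = -log(1-\theta)$ and $h(x) = 1$.
\end{example}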
\lecture{16}{Canonical Parameter Exponential Families}
\section{Canonical Parameter Exponential Families}
\subsection{Canonical Parameter Exponential Families}
\begin{definition_exam}{Canonical Form of exponential family}{} The canonical form of one-parameter exponential family is
$$
f(x; \eta) = e^{\eta T(x) - \psi_0(\eta)}h(x)I_{A}(x) \quad x \in \mathbb{R}^n
$$
where
$$
\eta \in \mathcal{F} = \{\eta: e^{\psi_0(\eta)} = \int_{A}e^{\eta T(x)}h(x)dx < \infty\}.
$$
Here, $\eta$ is called the \textbf{natural parameter}; T(x) is a \textbf{natural sufficient statistic} for $\eta$.\newline $\mathcal{F}$ is the \textbf{natural parameter space} which describes the set of values of $\eta$ for which the PDF can be defined.
\end{definition_exam}
This parameterises the density in terms of the \textbf{natural parameter} $\eta$ rather than $\theta.$
\begin{definition}(Regular). The canonical exponential family is called \textbf{regular} if $\mathcal{F}$ is an open set in $\mathbb{R}$.
\end{definition}
\begin{theorem_exam}{Moments of sufficient statistics}{}For any $\eta$ in the interior of $\mathcal{F}$, we have that
\begin{enumerate}
\item $E(T(X)) = \psi_{0}^{'}(\eta)$;
\item $Var(T(X)) = \psi_{0}^{''}(\eta).$
\end{enumerate}
\end{theorem_exam}
\begin{theorem}(Moments of sufficient statistics). Suppose $T(\tilde{X})$ is a natural sufficient statistic for $\eta$ based on a random sample $\tilde{X} = (X_1,X_2,...,X_n)$ from $f(x; \eta)$, then
\begin{enumerate}
\item $E(T(\tilde{X})) = n\psi_{0}^{'}(\eta);$
\item $Var(T(\tilde{X})) = n\psi_{0}^{''}(\eta).$
\end{enumerate}
\end{theorem}
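A quick check of these moment formulas (using the Poisson family in canonical form, chosen purely for illustration):
\begin{example}For the Poisson family, $f(x;\eta) = e^{\eta x - e^{\eta}}\frac{1}{x!}$ with natural parameter $\eta = log\,\theta$, natural sufficient statistic $T(x) = x$ and $\psi_0(\eta) = e^{\eta}$. Then
$$
E(T(X)) = \psi_{0}^{'}(\eta) = e^{\eta} = \theta, \quad Var(T(X)) = \psi_{0}^{''}(\eta) = e^{\eta} = \theta,
$$
matching the known Poisson mean and variance; for a random sample of size n with $T(\tilde{X}) = \sum_{i=1}^{n}X_i$ we correspondingly get $E(T(\tilde{X})) = n\theta$ and $Var(T(\tilde{X})) = n\theta$.
\end{example}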
\begin{proposition}(Independent exponentials). If $X_1,...,X_n$ are i.i.d with pdf $e^{\eta T(x) - \psi_0(\eta)}h(x)I_{A}(x)$, then their joint pdf is
$$
f(x_1,...,x_n;\eta) = e^{\eta \sum_{j=1}^{n}T(x_j) - n\psi_0(\eta)}\prod_{j=1}^nh(x_j)I_{A}(x_j)
$$
\end{proposition}
\begin{proposition}The exponential family is infinitely differentiable with respect to $\eta$ and the derivatives can be obtained by differentiating under the integral sign.
\end{proposition}
\subsection{Two Parameter Exponential Families}
\begin{definition_exam}{Two Parameter exponential families}{} Let $\tilde{\theta} = (\theta_1, \theta_2)$. A family of distributions is said to be 2-parameter exponential family if there exists real-valued functions $\eta_1(.), \eta_2(.), T_1(.), T_2(.), \psi(.), h(.)$ such that the PDF
$$
f(x; \theta) = e^{\sum_{i=1}^2\eta_i(\tilde{\theta})T_i(x) - \psi(\tilde{\theta})}h(x)I_A(x)
$$
for $x \in \mathbb{R}^n.$
\end{definition_exam}
The beta distribution is an example of a two parameter exponential family.
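As an illustration of the definition (the normal family with both parameters unknown is another standard case):
\begin{example}For $X \sim N(\mu, \sigma^2)$ with $\tilde{\theta} = (\mu, \sigma^2)$,
$$
f(x;\tilde{\theta}) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} = e^{\frac{\mu}{\sigma^2}x - \frac{1}{2\sigma^2}x^2 - \big(\frac{\mu^2}{2\sigma^2} + \frac{1}{2}log(2\pi\sigma^2)\big)},
$$
so $\eta_1(\tilde{\theta}) = \frac{\mu}{\sigma^2}$, $\eta_2(\tilde{\theta}) = -\frac{1}{2\sigma^2}$, $T_1(x) = x$, $T_2(x) = x^2$, $\psi(\tilde{\theta}) = \frac{\mu^2}{2\sigma^2} + \frac{1}{2}log(2\pi\sigma^2)$ and $h(x) = 1$.
\end{example}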
\subsection{Uniform and Exponential Spacing}
\begin{definition}(Uniform Spacing). Let $U_1,...,U_n$ be i.i.d uniform [0,1] with order statistics $U_{(1)} \leq U_{(2)} \leq ... \leq U_{(n)}$. The statistics $S_i$ defined by
$$
S_i = U_{(i)} - U_{(i-1)} \quad (1 \leq i \leq n+1)
$$
where $U_{(0)} = 0, U_{(n+1)} = 1$ are called the uniform spacings for this sample.
\end{definition}
\begin{theorem}The uniform spacing $(S_1,...,S_n)$ are uniformly distributed over the simplex
$$
A_n = \{(x_1,...,x_n): x_i \geq 0, \sum_{i=1}^{n}x_i \leq 1\}.
$$
\end{theorem}
\begin{definition}(Exponential Spacing). Let $E_1,...,E_n$ be i.i.d exponential random variables with order statistics $E_{(1)} \leq E_{(2)} \leq ... \leq E_{(n)}$. The statistics defined by
$$
(n-i+1)(E_{(i)} - E_{(i-1)}) \quad 1 \leq i \leq n
$$
are known as normalized exponential spacings.
\end{definition}
\begin{theorem} Let $(n-i+1)(E_{(i)} - E_{(i-1)}) \quad 1 \leq i \leq n$ be normalized exponential spacings. These are i.i.d. exponential random variables. Furthermore,
$$
\frac{E_1}{n},\ \frac{E_1}{n}+\frac{E_2}{n-1},\ ...,\ \frac{E_1}{n} + ... + \frac{E_n}{1}
$$
are distributed as $E_{(1)},...E_{(n)}$.
\end{theorem}
\lecture{17}{Maximum Likelihood Estimators}
\section{Minimum Variance Unbiased Estimation}
\subsection{The Likelihood Principle}
We are interested in finding the parameters $\theta$, as knowledge of $\theta$ will allow us to generate data through the pdf. We look at techniques for estimating the parameter $\theta$ and techniques for evaluating our estimators.
\begin{definition}(Likelihood Function). Let $f(\textbf{x}|\theta)$ denote the joint pdf of the sample $\textbf{X} = (X_1,...,X_n).$ Then, given that $\textbf{X} = \textbf{x}$ is observed, the function of $\theta$ defined by
$$
\ell(\theta|\textbf{x}) = f(\textbf{x}|\theta)
$$
is called the likelihood function.
\end{definition}
\begin{remark}Recall that if $\textbf{X}$ is a discrete random vector, the likelihood function will be $\ell(\theta|\textbf{x}) = P_{\theta}(\textbf{X} = \textbf{x})$, and hence for two different parameter points $\theta_1, \theta_2$ we can compare how probable the observed sample $\textbf{x}$ is under each parameter:
$$
P_{\theta_{1}}(\textbf{X} = \textbf{x}) = \ell(\theta_1|\textbf{x}) > \ell(\theta_2|\textbf{x}) = P_{\theta_{2}}(\textbf{X} = \textbf{x}).
$$
\end{remark}
\begin{theorem}Let $X_i$ be i.i.d random variables with common pdf $f_{\theta}(.).$ Then,
$$
log\bigg(\prod_{i=1}^{n}X_i \bigg) = \sum_{i=1}^{n}log\bigg(X_i \bigg).
$$
\end{theorem}
Here, the pdf fixes $\theta$ and varies $\textbf{x}$ whereas the likelihood function fixes $\textbf{x}$ and varies $\theta.$
\begin{theorem}(Likelihood Principle). If $\textbf{x}$ and $\textbf{y}$ are two sample points such that $\ell(\theta|\textbf{x})$ is proportional to $\ell(\theta|\textbf{y})$, that is, there exists a constant $C(\textbf{x}, \textbf{y})$ such that
$$
\ell(\theta|\textbf{x}) = C(\textbf{x}, \textbf{y})\ell(\theta|\textbf{y})
$$
for all $\theta$, then the conclusions drawn from $\textbf{x}$ and $\textbf{y}$ should be identical.
\end{theorem}
\subsection{Maximum Likelihood Estimators}
\begin{definition}(Likelihood Function).
If $X_1,...,X_n$ are i.i.d. sample from a population with pdf $f(x|\theta_1,...,\theta_k)$, the likelihood function is defined by
$$
\ell(\theta|\textbf{x}) = \ell(\theta_1,...,\theta_k|x_1,...,x_n) = \prod_{i=1}^{n}f(x_i|\theta_1,...,\theta_k).
$$
\end{definition}
\begin{definition_exam}{Maximum Likelihood Estimator}{} For each sample point $\textbf{x}$, let $\hat{\theta}(\textbf{x})$ be a parameter value at which $\ell(\theta|\textbf{x})$ attains its maximum as a function of $\theta$, with $\textbf{x}$ held fixed. A maximum likelihood estimator (MLE) of the parameter $\theta$ based on a sample $\textbf{X}$ is $\hat{\theta}(\textbf{X}).$
\end{definition_exam}
\begin{remark}The range of the MLE coincides with the range of the parameter.
\end{remark}
The MLE is the parameter point of the MLE for which the observed sample is most likely.
\begin{remark}Two inherent drawbacks of MLE are that finding the global maximum can be difficult and that the estimate may be sensitive to small changes in the data. This second scenario occurs in the case of flat likelihoods.
\end{remark}
\begin{proposition_exam}{Necessary condition for MLE}{}The first derivative of a function being 0 is a \textbf{necessary} condition for a maximum
$$
\frac{\partial}{\partial \theta_i}L(\theta | \textbf{x}) = 0, \quad i=1,...,k.
$$
\end{proposition_exam}
\begin{remark}The zeros of the first derivative only locate extrema in the interior of the domain of the function. Hence, we need to check the boundaries separately for extrema.
\end{remark}
\begin{remark}When maximising a likelihood with restrictions on the parameter, we need to check for different cases of the optimal values.
\end{remark}
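A worked maximisation (the exponential model is chosen here only to keep the algebra short):
\begin{example}Let $X_1,...,X_n$ be i.i.d with density $f(x|\theta) = \theta e^{-\theta x}$, $x \geq 0$. Then
$$
log\,\ell(\theta|\tilde{x}) = n\,log\,\theta - \theta\sum_{i=1}^{n}x_i, \quad \frac{\partial}{\partial \theta}log\,\ell(\theta|\tilde{x}) = \frac{n}{\theta} - \sum_{i=1}^{n}x_i = 0 \implies \hat{\theta} = \frac{1}{\bar{x}}.
$$
The second derivative $-\frac{n}{\theta^2}$ is negative, and the likelihood vanishes as $\theta \rightarrow 0$ or $\theta \rightarrow \infty$, so this interior stationary point is the global maximum.
\end{example}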
\begin{theorem}(Invariance Property of MLE). Suppose that $\hat{\theta}$ is the MLE of a parameter $\theta$. Let $\tau(\theta)$ be a one to one mapping. Then, $\tau(\hat{\theta})$ is the MLE of $\tau(\theta).$
\end{theorem}
\begin{theorem}(MLE of multivariate likelihood). Suppose our likelihood function is $H(\theta_1, \theta_2)$. To check that $H(\theta_1,\theta_2)$ has a local maximum at $(\hat{\theta}_1,\hat{\theta}_2)$, we check the 3 conditions that
\begin{enumerate}
\item First order partial derivatives are 0
$$
\frac{\partial}{\partial \theta_1}H(\theta_1, \theta_2) = \frac{\partial}{\partial \theta_2}H(\theta_1, \theta_2) = 0
$$
\item At least one second-order partial derivative is negative;
\item The determinant of the matrix J of second-order partial derivatives (the Hessian) is positive
$$
|J| > 0.
$$
\end{enumerate}
\end{theorem}
\subsection{Mean Squared Error}
We look at \textbf{finite-sample} measures of the quality of an estimator.
\begin{definition_exam}{Mean Squared Error}{} The mean squared error (MSE) of an estimator W of a parameter $\theta$ is the function of $\theta$ defined by
$$
\mathbb{E}_{\theta}[(W - \theta)^2].
$$
\end{definition_exam}
\begin{definition_exam}{Bias}{} The bias of a point estimator W of a parameter $\theta$ is the difference between the expected value of W and $\theta$, that is,
$$
\text{Bias}_{\theta}W = \mathbb{E}_{\theta}[W] - \theta.
$$
An estimator whose bias is equal to 0 is called \textbf{unbiased} and satisfies $\mathbb{E}_{\theta}[W] = \theta$ for all $\theta$.
\end{definition_exam}
\begin{theorem_exam}{MSE Decomposition}{}The MSE can be decomposed as the sum of the variance of the estimator plus the square of the bias:
$$
E_{\theta}[(W - \theta)^2] = Var_{\theta}(W) + (E_{\theta}[W - \theta])^2 = Var_{\theta}(W) + (\text{Bias}_{\theta}W)^2.
$$
\end{theorem_exam}
\begin{corollary}The MSE of an \textbf{unbiased} estimator is equal to its variance.
\end{corollary}
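A small illustration of the trade-off captured by the decomposition (the two estimators below are chosen for contrast):
\begin{example}Let $X_1,...,X_n$ be i.i.d with mean $\mu$ and variance $\sigma^2$. The unbiased estimator $W = \bar{X}$ has $MSE = Var(\bar{X}) = \frac{\sigma^2}{n}$. The constant estimator $W' = c$ has zero variance but $MSE = (c - \mu)^2$: it is excellent if c happens to be close to $\mu$ and arbitrarily bad otherwise.
\end{example}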
\lecture{18}{Differentiation and Integration}
\section{Minimum Variance Unbiased Estimation}
\subsection{Differentiation and Integration}
We now look into when can we interchange differentiation, integration, and summation.
\begin{theorem}(Leibnitz's Rule). If $f(x,\theta), a(\theta), b(\theta)$ are differentiable with respect to $\theta$, then
$$
\frac{d}{d\theta}\int_{a(\theta)}^{b(\theta)}f(x,\theta)dx = f(b(\theta),\theta)\frac{d}{d\theta}b(\theta) - f(a(\theta),\theta)\frac{d}{d\theta}a(\theta) + \int_{a(\theta)}^{b(\theta)}\frac{\partial}{\partial \theta}f(x,\theta)dx.
$$
\end{theorem}
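A one-line sanity check of the rule (with an integrand chosen to make both sides trivial to compute): take $f(x,\theta) = x$, $a(\theta) = 0$, $b(\theta) = \theta$. Then $\int_{0}^{\theta}x\,dx = \frac{\theta^2}{2}$, whose derivative is $\theta$, while the right-hand side gives $f(\theta,\theta)\cdot 1 - f(0,\theta)\cdot 0 + \int_{0}^{\theta}0\,dx = \theta$, as expected.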
\begin{corollary}If $a(\theta), b(\theta)$ are constants, then
$$
\frac{d}{d\theta}\int_a^bf(x,\theta)dx = \int_a^b\frac{\partial}{\partial \theta}f(x,\theta)dx.
$$
\end{corollary}
\begin{remark}Note that on the LHS we differentiate the integral, which is a function of $\theta$ alone, whereas on the RHS the partial derivative acts on $f(x,\theta)$, a function of the two variables x and $\theta$, before integrating.
\end{remark}
\begin{theorem}(Dominated Convergence Theorem). Suppose the function $h(x,y)$ is continuous at $y_0$ for each x, and there exists a function g(x) satisfying
\begin{enumerate}
\item $|h(x,y)| \leq g(x)$ for all x and y,
\item $\int_{-\infty}^{\infty}g(x)dx < \infty$.
\end{enumerate}
Then,
$$
\lim_{y \rightarrow y_0}\int_{-\infty}^{\infty}h(x,y)dx = \int_{-\infty}^{\infty}\lim_{y \rightarrow y_0}h(x,y)dx.
$$
\end{theorem}
\begin{theorem}(Interchange differentiation and integration). Suppose $f(x,\theta)$ is differentiable in $\theta$ at $\theta = \theta_0$ for every x. That is
$$
\lim_{\delta \rightarrow 0}\frac{f(x,\theta_0 + \delta) - f(x,\theta_0)}{\delta} = \frac{\partial}{\partial \theta}f(x,\theta)\bigg|_{\theta = \theta_0}
$$
exists, and suppose there exist a function $g(x,\theta_0)$ and a constant $\delta_0 > 0$ such that
\begin{enumerate}
\item $|\frac{f(x,\theta_0 + \delta) - f(x,\theta_0)}{\delta}| \leq g(x,\theta_0)$ for all x and $|\delta| \leq \delta_0$;
\item $\int_{-\infty}^{\infty}g(x,\theta_0)dx < \infty$.
\end{enumerate}
Then,
$$
\frac{d}{d\theta}\int_{-\infty}^{\infty}f(x,\theta)dx = \int_{-\infty}^{\infty}\frac{\partial}{\partial \theta}f(x,\theta)dx.
$$
\end{theorem}
\begin{theorem}We can interchange integration and differentiation for the exponential family.
\end{theorem}
\begin{lemma}A derivative can always be taken inside a \textbf{finite} sum.
\end{lemma}
\begin{theorem}(Interchange differentiation and summation). Suppose that the series $\sum_{x=0}^{\infty}h(\theta,x)$ converges for all $\theta \in (a,b)$ and
\begin{enumerate}
\item $\frac{\partial}{\partial \theta}h(\theta,x)$ is continuous in $\theta$ for each x,
\item $\sum_{x=0}^{\infty}\frac{\partial}{\partial \theta}h(\theta,x)$ converges uniformly on every closed bounded subinterval of $(a,b)$.
\end{enumerate}
Then,
$$
\frac{d}{d\theta}\sum_{x=0}^{\infty}h(\theta,x) = \sum_{x=0}^{\infty}\frac{\partial}{\partial \theta}h(\theta,x).
$$
\end{theorem}
\begin{theorem}(Interchange summation and integration). Suppose the series $\sum_{x=0}^{\infty}h(\theta,x)$ converges uniformly on [a,b] and that, for each x, $h(\theta,x)$ is a continuous function of $\theta$. Then
$$
\int_a^b\sum_{x=0}^{\infty}h(\theta,x)d\theta = \sum_{x=0}^{\infty}\int_a^bh(\theta,x)d\theta.
$$
\end{theorem}
\lecture{19}{Cramer Rao Lower Bound}
\section{Minimum Variance Unbiased Estimation}
\subsection{Cramer Rao Lower Bound}
When looking at the best performance of estimators, we require 3 restrictive conditions
\begin{enumerate}
\item We only look at unbiased estimators $E_{\theta}[\hat{\theta}] = \theta$.
\item We measure performance by the variance of the estimator.
\item We restrict attention to a class of \textbf{regular} problems.
\end{enumerate}
\begin{definition_exam}{Regularity Conditions}{} For the next section, we state that the regularity conditions are
\begin{enumerate}
\item We can interchange the order of differentiation and integration/summation;
\item The PMF $f_{\theta}(\tilde{x}) \neq 0$ for all $\tilde{x}$ in the support.
\end{enumerate}
\end{definition_exam}
\begin{remark}Recall interchanging the order of differentiation and integration requires the dominated convergence theorem.
\end{remark}
\begin{definition_exam}{Uniformly Minimum Variance Unbiased Estimator (UMVUE)}{} An estimator $\hat{\theta}$ is called the uniformly minimum variance unbiased estimator of $\theta$ if $E_{\theta}[\hat{\theta}] = \theta$ and for any other unbiased estimator W, we have that
$$
Var_{\theta}\hat{\theta} \leq Var_{\theta}W
$$
for all $\theta.$
\end{definition_exam}
\begin{remark}The UMVUE concept is not restricted to estimating $\theta$ itself. Suppose the estimator $W^*$ satisfies $E_{\theta}[W^*] = \tau(\theta)$. Then, we say that $W^*$ is UMVUE for $\tau(\theta)$ if $Var(W^*) \leq Var(W)$ for all estimators W such that $E_{\theta}[W] = \tau(\theta).$
\end{remark}
\begin{definition_exam}{Score Function}{} Let $\tilde{X}$ be a random vector with PMF $f_{\theta}(\tilde{X})$. Assuming the regularity conditions hold, the score function is defined by
$$
s(\theta) = \frac{\partial}{\partial \theta} log f_{\theta}(\tilde{X}).
$$
\end{definition_exam}
\begin{remark} The score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values.
\end{remark}
We recall the following definition in order to help define the Cramer-Rao lower bound.
\begin{definition}(Covariance/Correlation). The covariance of two random variables Y and Z is
$$
Cov(Y,Z) = E[YZ] - E[Y]E[Z].
$$
\end{definition}
\begin{theorem}(Correlation Inequality). The correlation inequality is
$$
Corr(Y,Z)^2 = \frac{Cov(Y,Z)^2}{Var(Y)Var(Z)} \leq 1.
$$
\end{theorem}
\begin{definition_exam}{Cramer-Rao Lower Bound}{} Let $\hat{\theta}(.)$ be an $\textbf{unbiased estimator}$ with finite variance. Furthermore, assume that regularity conditions holds. Then, the lower bound of the variance of any unbiased estimator is given by
$$
Var_{\theta}[\hat{\theta}(\tilde{X})] \geq \frac{1}{Var_{\theta}[\frac{\partial}{\partial \theta}log f_{\theta}(\tilde{X})]}
$$
\end{definition_exam}
\begin{proof} Let $\hat{\theta}(.)$ be an unbiased estimator of $\theta.$ Hence, we have that
$$
E_{\theta}[\hat{\theta}(X)] = \sum_{x \in X}...\sum \hat{\theta}(x)f_{\theta}(x) = \theta \quad \forall \theta \in \Theta.
$$
We then take derivative with respect to $\theta$ on both sides, interchange the derivative and summation and multiply and divide by $f_{\theta}(x)$. We then get
$$
\sum_{x \in X}...\sum \bigg[\frac{\partial}{\partial \theta}log(f_{\theta}(x)) \bigg]\hat{\theta}(x)f_{\theta}(x) = 1.
$$
We note that the left-hand side is precisely an expectation with respect to the pdf $f_{\theta}(.)$, namely
$$
E_{\theta}\bigg[\hat{\theta}(X)\frac{\partial}{\partial \theta}log(f_{\theta}(X)) \bigg] = 1.
$$
Now, recall that $Cov(X,Y) = E[XY]$ if either $E[X] = 0$ or $E[Y] = 0$. As $E_{\theta}[\frac{\partial}{\partial \theta}log(f_{\theta}(X))] = 0$, we have
$$
E_{\theta}\bigg[\hat{\theta}(X)\frac{\partial}{\partial \theta}log(f_{\theta}(X)) \bigg] = Cov \bigg(\hat{\theta}(X), \frac{\partial}{\partial \theta}log(f_{\theta}(X))\bigg) = 1.
$$
Now, we apply the correlation inequality to get
$$
Cov \bigg(\hat{\theta}(X), \frac{\partial}{\partial \theta}log(f_{\theta}(X))\bigg)^2 \leq Var \bigg(\hat{\theta}(X)\bigg) Var\bigg(\frac{\partial}{\partial \theta}log(f_{\theta}(X))\bigg).
$$
As the covariance of the unbiased estimator with the score function equals 1, we therefore have
$$
1 \leq Var \bigg(\hat{\theta}(X)\bigg) Var\bigg(\frac{\partial}{\partial \theta}log(f_{\theta}(X))\bigg)
$$
and hence the CRLB follows.
\end{proof}
\begin{remark}An issue with the CRLB is that there may be no estimators that attain the CRLB.
\end{remark}
\begin{definition_exam}{Fisher Information}{} The term in the denominator of the Cramer-Rao lower bound $$Var_{\theta}[\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{X})]$$ is known as the \textbf{Fisher information} of the sample. The more information the sample contains, the smaller the achievable variance of an unbiased estimator.
\end{definition_exam}
\begin{remark}If we have an unbiased estimator $\hat{\theta}(.)$ and we show that its variance is equivalent to the Cramer-Rao lower bound, then we know that $\hat{\theta}(.)$ is an optimal estimator.
\end{remark}
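A concrete case where the bound is attained (the Poisson model is used purely as an illustration):
\begin{example}Let $X_1,...,X_n$ be i.i.d $\text{Poisson}(\theta)$. The score function is
$$
\frac{\partial}{\partial \theta}log f_{\theta}(\tilde{X}) = \frac{\sum_{i=1}^{n}X_i}{\theta} - n = \frac{n}{\theta}(\bar{X} - \theta),
$$
whose variance is $\frac{n}{\theta}$, so the CRLB is $\frac{\theta}{n}$. The unbiased estimator $\hat{\theta} = \bar{X}$ has $Var(\bar{X}) = \frac{\theta}{n}$, which equals the CRLB, so $\bar{X}$ is an optimal unbiased estimator of $\theta$ in the Poisson model.
\end{example}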
\begin{remark}Alternatively, we can formulate the Fisher information as
$$
I_n(\theta) \coloneqq E_{\theta}[-\frac{d^2}{d\theta^2}\ell(\theta; X_1,...,X_n)].
$$
Here, $\ell$ denotes the log-likelihood, and the second derivative measures its curvature. Taking the expectation of this tells us how curved the log-likelihood function is on average. The more curved the log-likelihood is, the more \textbf{information} it contains and hence the more precise the MLE will be.
\end{remark}
\begin{theorem}Let $X_1,...,X_n$ be iid and define the Fisher information as $I_n(\theta) \coloneqq E_{\theta}[-\frac{d^2}{d\theta^2}\ell(\theta; X_1,...,X_n)].$ Then,
$$
I_n(\theta) = nI(\theta)
$$
where $I(\theta)$ is the Fisher information for a \textbf{single observation}. That is, $I(\theta) = I_{1}(\theta).$
\end{theorem}
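A short computation of the Fisher information in a simple model (Bernoulli, chosen only for illustration):
\begin{example}For one observation $X \sim \text{Bernoulli}(\theta)$, $log f_{\theta}(x) = x\,log\,\theta + (1-x)log(1-\theta)$, so
$$
I(\theta) = E_{\theta}\bigg[\frac{X}{\theta^2} + \frac{1-X}{(1-\theta)^2}\bigg] = \frac{1}{\theta} + \frac{1}{1-\theta} = \frac{1}{\theta(1-\theta)},
$$
and hence $I_n(\theta) = \frac{n}{\theta(1-\theta)}$, giving the CRLB $\frac{\theta(1-\theta)}{n}$.
\end{example}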
We are interested in the case when the Cramer-Rao lower bound holds with equality; in particular, when the correlation inequality is an equality. This occurs when the two random variables are linearly related.
\begin{lemma}Let Y and X be random variables. If Y = a + bX, then
$$
Cov(X,Y)^2 = Var(X)Var(Y).
$$
\end{lemma}
We can now specify a way to find an estimator that attains the CRLB. That is, if an unbiased estimator is linearly related to the score function, then the estimator attains the Cramer-Rao lower bound!
\begin{theorem_exam}{Attainment of CRLB}{} Let $X_1,...,X_n$ be iid from $f_{\theta}(x|\theta)$ and $f_{\theta}(x|\theta)$ satisfies the conditions of the Cramer-Rao theorem. If the unbiased estimator $\hat{\theta}(\tilde{x})$ and $\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{x})$ are linearly related, then the estimator $\hat{\theta}(\tilde{x})$ attains the CRLB. \\ Furthermore, as the expectation of the score function of the unbiased estimator must be zero, we have that
$$
\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{x}) = C_{\theta}[\hat{\theta}(\tilde{x}) - \theta].
$$
\end{theorem_exam}
\begin{remark}If the score function has the relationship above, then the estimator $\hat{\theta}$ is a MVUE. Furthermore, the constant $C_{\theta}$ equals the Fisher information, so the variance of $\hat{\theta}$ is $1/C_{\theta}$, the CRLB.
\end{remark}
\begin{theorem}If the theorem of attainment holds, we must have an exponential family and $\hat{\theta}(\tilde{x})$ must be a multiple of the sufficient statistic for $\theta.$
\end{theorem}
\begin{theorem_exam}{Rao-Blackwell}{} Let W be an unbiased estimator of $\tau(\theta)$ and let T be a sufficient statistic for $\tau(\theta)$. Define $\phi(T) = E(W|T).$ Then
$$
\begin{cases}
E_{\theta}\phi(T) = \tau(\theta)\\
Var_{\theta}\phi(T) \leq Var_{\theta}W \quad \forall \theta
\end{cases}
$$
that is, $\phi(T)$ is a uniformly better unbiased estimator of $\tau(\theta).$
\end{theorem_exam}
\begin{remark}Conditioning any unbiased estimator on a sufficient statistic will result in a uniform improvement.
\end{remark}
\lecture{20}{Asymptotically Minimum Variance Unbiased Estimators}
\section{Minimum Variance Unbiased Estimation}
\subsection{Asymptotically Minimum Variance Unbiased Estimators}
At the end of the last section, we identified that if the score function has the following relationship
$$
\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{x}) = C_{\theta}[\hat{\theta}(\tilde{x}) - \theta].
$$
then $\hat{\theta}$ is a minimum variance unbiased estimator. However, this does not always hold. In this section, we look at cases where this almost holds when looking at asymptotics.
\begin{example}Let $X_1,...,X_n$ be iid geometric(p). We have that
$$
\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{X}) = \frac{-n}{1 - \theta}\bigg(\overline{X} - \frac{1}{\theta} \bigg).
$$
Here, we don't quite have the correct form as we have $\frac{1}{\theta}$ rather than $\theta.$ Hence, we can't identify an optimal estimator.
\end{example}
We use the notions of asymptotic normality and that the asymptotic variance of the estimator is the CRLB to now identify these estimators.
\begin{definition}(Asymptotic Normality). Suppose a sequence of random variables $\{X_n\}$ is such that
$$
\sqrt{n}(X_n - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2)
$$
or equivalently
$$
Z_n = \frac{X_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} \mathcal{N}(0, 1).
$$
i.e. $P(Z_n \leq z) \rightarrow \Phi (z).$
Then we say that $X_n$ is asymptotically normal $\mathcal{N}(\mu, \frac{\sigma^2}{n})$ and we write this as $X_n \sim A\mathcal{N}(\mu, \frac{\sigma^2}{n}).$
\end{definition}
\begin{remark}The sample size n may appear when writing $A\mathcal{N}(\mu, \frac{\sigma^2}{n})$, but it must not appear in the limit when writing convergence in distribution $\xrightarrow{d}.$
\end{remark}
\begin{remark}We refer to $\sigma^2/n$ as the asymptotic variance of $X_n.$
\end{remark}
\begin{theorem_exam}{Delta Method}{} Suppose $X_n \sim A\mathcal{N}(\mu, \frac{\sigma^2}{n})$ and g(.) is differentiable at $\mu$ with $g'(\mu) \neq 0$. Then
$$
g(X_n) \sim A\mathcal{N}(g(\mu), \frac{g'(\mu)^2\sigma^2}{n}).
$$
\end{theorem_exam}
\begin{proof}(Sketch). First, it can be shown that if $X_n$ is asymptotically normal (AN), then $X_n$ converges to the asymptotic mean, that is, $X_n \xrightarrow{p} \mu.$ Now, we look at
$$
\sqrt{n}[g(X_n) - g(\mu)] = [\frac{g(X_n) - g(\mu)}{X_n - \mu}]\sqrt{n}(X_n - \mu).
$$
Now, we have that
$$
\begin{cases}
[\frac{g(X_n) - g(\mu)}{X_n - \mu}] \xrightarrow{p} g'(\mu) \quad \text{(definition of derivative)}\\
\\
\sqrt{n}(X_n - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2) \quad \text{(by the assumed asymptotic normality)}.
\end{cases}
$$
Hence, by Slutsky's theorem, we have that
$$
[\frac{g(X_n) - g(\mu)}{X_n - \mu}]\sqrt{n}(X_n - \mu) \xrightarrow{d} \mathcal{N}(0, g'(\mu)^2\sigma^2).
$$
\end{proof}
\begin{remark}If X is normal, then $g(X)$ is not necessarily normal. This theorem is a local theorem requiring linearity. This is because we are using the fact that the derivative is a linear transformation about the point $\mu.$
\end{remark}
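A minimal worked application of the delta method (the function $g(x) = x^2$ is chosen only as an illustration):
\begin{example}Suppose $X_n \sim A\mathcal{N}(\mu, \frac{\sigma^2}{n})$ with $\mu \neq 0$ and let $g(x) = x^2$, so $g'(\mu) = 2\mu \neq 0$. The delta method gives
$$
X_n^2 \sim A\mathcal{N}\bigg(\mu^2, \frac{4\mu^2\sigma^2}{n}\bigg).
$$
\end{example}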
We may not be able to find an unbiased estimator that meets the Cramer-Rao lower bound. However, we may have that it is asymptotically normal, where the mean is the true parameter $\theta_0$ and the variance is the Cramer-Rao lower bound. This is the next best thing.
\begin{definition_exam}{Asymptotically Minimum Variance Unbiased}{} We say that an estimator $\hat{\theta}(\tilde{x})$ is \textbf{asymptotically minimum variance unbiased} if it is $A\mathcal{N}(\theta, \frac{v}{n})$ where $\frac{v}{n}$ is the Cramer-Rao lower bound.
\end{definition_exam}
\begin{remark}Our estimator is an AMVU estimator if the asymptotic variance is the CRLB.
\end{remark}
Unbiased estimators which are AMVU are sometimes said to be asymptotically efficient.\\
We now describe the procedure on how to show that an estimator $\hat{\theta}$ is AMVU. Suppose you had $X_1,...,X_n$ iid with common pdf $f_{\theta}().$ After doing some work, you attempt to arrive at
$$
C_{\theta}\bigg[\hat{\theta}(X) - \theta \bigg].
$$
If you can arrive at this form, then $\hat{\theta}$ is in fact a MVU estimator. However, if not, we can then try to show that it is an AMVU estimator. First, define $\eta = \eta(\theta)$ such that
$$
C_{\eta}\bigg[ \hat{\eta}(X) - \eta \bigg]
$$
is of the correct form. This shows that $\hat{\eta}(X)$ is an MVU estimator of $\eta.$ Now, we also have that $\hat{\eta} \sim A\mathcal{N}(\eta, \frac{1}{C_{\eta}})$. We then define $\hat{\theta} = g(\hat{\eta})$, where g expresses $\theta$ in terms of $\eta$. Compute $g'(\eta)$; then, using the delta method, we get
$$
\hat{\theta} \sim A\mathcal{N}\bigg(g(\eta) = \theta, \; \frac{g'(\eta)^2}{C_{\eta}}\bigg).
$$
Hence, this shows that $\hat{\theta}$ is an AMVU estimator, as this asymptotic variance is exactly the Cramer-Rao lower bound for $\theta$.
\subsection{MLE is AMVU}
All the techniques we have looked at apply well to exponential families when looking at their maximum likelihood estimators where they were the solutions to the score equations
$$
\frac{\partial}{\partial \theta}logf_{\theta}(\tilde{X}) = 0.
$$
Furthermore, these MLEs were also method of moments estimators based on the sufficient statistic $t(\tilde{X})$, i.e. solutions to the moment equation
$$
E_{\theta}[t(\tilde{X})] = t(\tilde{x}).
$$
\begin{lemma}For exponential families, maximum likelihood estimation is equivalent to method of moments estimation.
\end{lemma}
We now explore further properties of maximum likelihood estimates beyond exponential families.
We now have the following set up. Suppose $X_1,...,X_n$ are iid continuous random variables with common pdf $f_{\theta}(x)$ for a family of pdfs $\mathcal{F} = \{f_{\theta}: \theta \in \Theta\}$ for some $\Theta \subset \mathbb{R}.$ Suppose $\mathcal{F}$ is suitably regular and we can differentiate twice under the integral sign
$$
\int_{-\infty}^{\infty}f_{\theta}(x)dx = 1
$$
to get
$$
\int_{-\infty}^{\infty} \frac{\partial^2}{\partial \theta^2} f_{\theta}(x)dx = 0.
$$
Suppose we can also multiply and divide by $f_{\theta}(x).$ We then get the following theorem.
\begin{theorem}Assuming the set up above holds, we have the following information regarding the score function
$$
\begin{cases}
E_{\theta}[\frac{\partial}{\partial \theta}log f_{\theta}(x)] = 0\\
\\
Var_{\theta}[\frac{\partial}{\partial \theta}log f_{\theta}(x)] = -E_{\theta}[\frac{\partial^2}{\partial \theta^2}log f_{\theta}(x)]
\end{cases}
$$
which describes the mean and variance of the score function.
\end{theorem}
Now, recall that the maximum likelihood estimate (MLE) $\hat{\theta} = \hat{\theta}(X)$ is the solution to the score equation
$$
\ell'(\theta; \tilde{X}) = \sum_{i=1}^n\frac{\partial}{\partial \theta}log f_{\theta}(x_i) = 0.
$$
Then, assuming $\hat{\theta}$ is close to the true parameter value $\theta_0$, we note that
$$
-\frac{\ell'(\theta_0;\tilde{x})}{\hat{\theta} - \theta_0} = \frac{\ell'(\hat{\theta};\tilde{x}) - \ell'(\theta_0;\tilde{x})}{\hat{\theta} - \theta_0} \approx \ell^{''}(\theta_0;\tilde{x}).
$$
\begin{theorem}We can approximate the MLE $\hat{\theta}$ by the following relationship
$$
\hat{\theta} \approx \theta_0 - \frac{\ell'(\theta_0; \tilde{x})}{\ell^{''}(\theta_0; \tilde{x})}.
$$
\end{theorem}
Resultantly, we have that
$$
\sqrt{n}(\hat{\theta} - \theta_0) \approx -\sqrt{n}\frac{\ell'(\theta_0;\tilde{x})}{\ell^{''}(\theta_0;\tilde{x})}
$$
where $\frac{1}{\sqrt{n}}\ell'(\theta_0;\tilde{X}) \xrightarrow {d} \mathcal{N}(0, I_{\theta})$ and $\frac{1}{n}\ell^{''}(\theta_0;\tilde{X}) \xrightarrow{p} -I_{\theta}$ where $I_{\theta} = Var_{\theta}(\frac{\partial}{\partial \theta}log f_{\theta}(X_1)).$
\begin{theorem_exam}{MLE attains CRLB asymptotically}{}The MLE $\hat{\theta}$ is asymptotically normal with mean equal to the true value $\theta_0$ and variance equal to the Cramer-Rao Lower bound
$$
\hat{\theta} \sim A\mathcal{N}(\theta_0, \frac{1}{nI_{\theta_0}}).
$$
In a more rigorous manner,
$$
I_n(\theta_0)^{1/2}(\hat{\theta} - \theta_0) \xrightarrow{d} \mathcal{N}(0, 1)
$$
as $n \rightarrow \infty.$
\end{theorem_exam}
To interpret this, we consider the \textbf{sampling distribution} of the MLE. That is, suppose we had sampled several datasets, where the jth dataset gives the jth MLE $\hat{\theta}_{j}.$ The distribution of the MLE is then the distribution of these realised $\hat{\theta}_j$ values; the histogram of these values is the \textbf{sampling distribution}. This sampling distribution is what is approximately normal.
\begin{remark}This holds because the variance of the score function equals the negative of the expected value of the second derivative of the log-likelihood
$$
Var_{\theta}[\frac{\partial}{\partial \theta}log f_{\theta}(x)] = -E_{\theta}[\frac{\partial^2}{\partial \theta^2}log f_{\theta}(x)]
$$
\end{remark}
Hence, maximum likelihood estimates are asymptotically optimal under regularity conditions, which explains their widespread use.
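As a sketch of how this is used in practice (the Bernoulli model is chosen purely for illustration):
\begin{example}For $X_1,...,X_n$ i.i.d $\text{Bernoulli}(\theta_0)$ the MLE is $\hat{\theta} = \bar{X}$, and $I_{\theta_0} = \frac{1}{\theta_0(1-\theta_0)}$, so the theorem gives
$$
\hat{\theta} \sim A\mathcal{N}\bigg(\theta_0, \frac{\theta_0(1-\theta_0)}{n}\bigg),
$$
which agrees with applying the central limit theorem directly to $\bar{X}$, since $Var(X_i) = \theta_0(1-\theta_0)$.
\end{example}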
\lecture{21}{Further properties of score function}
\section{Minimum Variance Unbiased Estimation}
\subsection{Further properties of score function}
\begin{definition_exam}{Efficient}{} Unbiased estimators which attain the CRLB are said to be \textbf{efficient}.
\end{definition_exam}
\begin{definition}(Asymptotically Efficient). Asymptotically normal estimators which are AMVU are said to be \textbf{asymptotically efficient}.
\end{definition}
\begin{definition_exam}{Asymptotic Relative Efficiency}{} Let $\hat{\theta}$ be our candidate estimator. Then, the ratio
$$
\frac{CRLB}{\text{Asymp }Var_{\theta}(\hat{\theta}(X))} \leq 1
$$
is said to be the asymptotic relative efficiency (ARE) of $\hat{\theta}(\tilde{x}).$
\end{definition_exam}
\begin{remark}To interpret ARE, if ARE = 85$\%$ for our candidate estimator $\hat{\theta}$, then the optimal AMVU estimator needs only 85$\%$ of the sample size to get the same precision as $\hat{\theta}(\tilde{X}).$ The CRLB is the smallest value of variance and hence the higher the ARE is for $\hat{\theta}$, the closer it is in performance compared to the AMVU estimator.
\end{remark}
\begin{definition}(Fisher's Information per observation). Let $X_1,...,X_n$ be iid with common pdf $f_{\theta}(.)$ that satisfies the regularity conditions. Then, the CRLB is $\frac{1}{nI_{\theta}}$ where
$$
I_{\theta} = Var_{\theta}[\frac{\partial}{\partial \theta}log f_{\theta}(X_1)]
$$
is called the Fisher's information per observation. That is, it is the variance of the score function for \textbf{one observation}.
\end{definition}
\begin{remark}The bigger Fisher's information per observation, the more information there is in the data and the smaller the variance of the estimator can be.
\end{remark}
\begin{theorem}Suppose X = $(X_1,...,X_n)$ are iid RVs with common density $f_{\theta}(.)$ given by
$$
f_{\theta}(x) = g(x - \theta)
$$
for a known PDF g(.) with a continuous derivative. Thus, $\theta$ is a location parameter.\\
Then, the score function is
$$
\sum_{i=1}^{n}\psi(X_i - \theta) = \sum_{i=1}^{n}\frac{-g'(X_i - \theta)}{g(X_i - \theta)}.
$$
Furthermore, for any unbiased estimator $\hat{\theta}(X)$ of $\theta$, we have
$$
Var\bigg[\hat{\theta}(X) \bigg] \geq \frac{1}{nI}
$$
where $I = \int \frac{[g'(x)]^2}{g(x)}dx.$
\end{theorem}
\subsection{Completeness}
We learn about some other things for fun.
\begin{definition}(Completeness). A statistic $T = T(X)$ is complete if for every measurable function g
$$
E_{\theta}g(T) = 0
$$
for all $\theta$, then
$$
P_{\theta}(g(T) = 0) = 1
$$
for all $\theta.$
\end{definition}
That is, if the expectation of $g(T)$ is zero for every $\theta$, then $g(T)$ must equal zero with probability one, for every $\theta$. \\
We now state why completeness is useful.
\begin{theorem}(Lehmann-Scheffé Theorem). If a statistic T is unbiased, complete, and sufficient for some parameter $\theta$, then T is MVUE.
\end{theorem}
\begin{definition}(Pivot). Let $X=(X_1,...,X_n)$ from a distribution that depends on parameter $\theta$. Let $g(X,\theta)$ be a random variable whose distribution is \textbf{the same} for all $\theta$. Then g is called a pivotal quantity.
\end{definition}
Pivotal quantities do not depend on $\theta$ in their distribution. However, the actual quantities themselves may depend on $\theta.$
\\
Recall that a statistic is a function of the data alone and so does not involve the parameter $\theta$.
\begin{definition}(Ancillary statistic). An ancillary statistic is a statistic that is pivotal. That is, a statistic whose distribution does not depend on the parameters $\theta.$
\end{definition}
\begin{theorem}(Basu's Theorem). A statistic that is both boundedly complete and sufficient is independent of any ancillary statistic.
\end{theorem}
This theorem is useful to prove independence of two statistics by showing that one statistic is complete sufficient and the other is ancillary. For example, we can use Basu's theorem to show that the sample mean and sample variance are independent.
\begin{definition}(Minimal sufficient). A statistic T(X) is minimal sufficient if and only if T(X) is sufficient and, for any other sufficient statistic S(X), there exists a function f such that T(X) = f(S(X)).
\end{definition}
\lecture{22}{Hypothesis Testing}
\section{Hypothesis Testing}
\subsection{Hypothesis Testing}
In statistical inference, we observe the realisation of random observations. However, due to the randomness, there is a range of possibilities from which the data could have arisen. Some of these possibilities are too ``different'' from our realised data and hence we can disregard them. Therefore, we can only ever disprove something with data.
In hypothesis testing, we are interested in seeing for which realised values of $\textbf{X}$ we should reject the null hypothesis. We reduce the data to a single statistic, known as a test statistic, and use it to measure the strength of evidence against the hypothesis $H_0$; the p-value is one commonly used measure. We are interested in identifying optimal level-$\alpha$ tests, that is, tests that maximise the power. The smaller the p-value, the stronger the evidence against the hypothesis we have.
\begin{definition}(Hypothesis). The hypothesis $H_0 \subset \mathcal{M}$ where $\mathcal{M}$ is a larger statistical model. We call the complement of $H_0$ the alternative hypothesis $H_1$ whilst $H_0$ is called the null hypothesis $\mathcal{M} = H_0 \cup H_1$ and $H_0 \cap H_1 = \emptyset.$
\end{definition}
\begin{definition}(Simple and Composite Hypothesis). A hypothesis containing only 1 distribution is called simple. Otherwise, it is called composite.
\end{definition}
\begin{remark}We look at 3 cases in this course: simple vs simple, simple vs composite, and composite vs composite. For the first it is easy to find optimal tests, whereas the second and third only have optimal tests in certain circumstances.
\end{remark}
\begin{definition_exam}{Power}{} The power of a test is
$$
P(\text{reject }H_0|H_1 \text{ is true}).
$$
In other words, the power function of a hypothesis test with rejection region R is the function of $\theta$ defined by $\beta(\theta) = P_{\theta}(\textbf{X} \in R).$
\end{definition_exam}
\begin{remark}Let the null hypothesis be $H_0: \theta \in \Theta_0$. The ideal power function is 0 for all $\theta \in \Theta_0$ and 1 for all $\theta \in \Theta_{0}^{c}.$
\end{remark}
\begin{definition}(Type I/II Error). A Type I error is when we incorrectly reject a true null hypothesis whereas a Type II error is when we fail to reject a false null hypothesis.
\end{definition}
\begin{definition_exam}{Size}{} For $0 \leq \alpha \leq 1$, a test with power function $\beta(\theta)$ is a size $\alpha$ test if $\sup_{\theta \in \Theta_0}\beta(\theta) = \alpha.$
\end{definition_exam}
\begin{definition_exam}{Level$-\alpha$ test}{} For $0 \leq \alpha \leq 1$, a test with power function $\beta(\theta)$ is a level $\alpha$ test if $size = \sup_{\theta \in \Theta_0}\beta(\theta) \leq \alpha.$
\end{definition_exam}
\begin{remark}For a simple null hypothesis, the size and level of a test coincide.
\end{remark}
\lecture{23}{Simple vs Simple Hypothesis}
\section{Hypothesis Testing}
\subsection{Simple vs Simple Hypothesis}
Here, we only have two specific distributions to consider: under $H_0$, $x \sim f_0(.)$ and, under $H_1$, $x \sim f_1(.)$. Suppose that when $H_0$ is true, the random variable
$$
y = \frac{f_1(x)}{f_0(x)}
$$
the likelihood ratio has a continuous distribution. We then fix the level (probability of making a type 1 error) to $0 < \alpha < 1.$ Suppose there is a unique value $y_{\alpha}$ such that
$$
P_{f_{0}}(Y \geq y_{\alpha}) = \alpha
$$
that is, the probability, when the true distribution is $f_0$, that $Y \geq y_{\alpha}$ is $\alpha$. $y_{\alpha}$ is the upper $\alpha$ quantile of the random variable Y when the null hypothesis is true. We define the critical region to be
$$
C = \{x: \frac{f_1(x)}{f_0(x)} \geq y_{\alpha}\}
$$
then $P_{f_{0}}(C) = P_{f_{0}}(X \in C) = \alpha.$ C is the set of observed values of X for which we reject the null hypothesis $H_0.$
\begin{definition}(Critical Region). The set
$$
C = \{x: \frac{f_1(x)}{f_0(x)} \geq y_{\alpha}\}
$$
is defined as the critical region.
\end{definition}
\begin{theorem_exam}{Continuous Neyman-Pearson Lemma}{} Let $H_0: X \sim f_0(.)$ and $H_1: X \sim f_1(.)$ where $H_0, H_1$ are both simple. Then the test that rejects $H_0$ when the \textbf{likelihood ratio} satisfies
$$
\frac{f_1(X)}{f_0(X)} \geq y_{\alpha}
$$
is the \textbf{most powerful} level-$\alpha$ test. That is, the power of the test based on the likelihood ratio is at least as high as the power of any other test of level $\alpha$.
\end{theorem_exam}
Let the critical region $C = \{x: \frac{f_1(x)}{f_0(x)} \geq y_{\alpha}\}$ be based on the likelihood ratio and let D be any other critical region of level $\alpha$. What the continuous Neyman-Pearson lemma says is that the power for C is at least the power for D, that is $P_{f_{1}}(C) = P_{f_{1}}(\tilde{X} \in C) \geq P_{f_{1}}(\tilde{X} \in D) = P_{f_{1}}(D).$\\
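A standard worked case of the lemma (the two normal hypotheses below are chosen purely for illustration):
\begin{example}Let $X_1,...,X_n$ be i.i.d and test $H_0: X_i \sim N(0,1)$ against $H_1: X_i \sim N(1,1)$. The likelihood ratio is
$$
\frac{f_1(\tilde{x})}{f_0(\tilde{x})} = \prod_{i=1}^{n}e^{-\frac{(x_i-1)^2}{2} + \frac{x_i^2}{2}} = e^{\sum_{i=1}^{n}x_i - \frac{n}{2}},
$$
which is increasing in $\bar{x}$, so rejecting for large likelihood ratios is the same as rejecting for large $\bar{x}$. Under $H_0$, $\bar{X} \sim N(0, \frac{1}{n})$, so the most powerful level-$\alpha$ test rejects when $\bar{x} \geq \frac{z_{\alpha}}{\sqrt{n}}$, where $z_{\alpha}$ is the upper $\alpha$ quantile of the standard normal.
\end{example}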
If the likelihood ratio $Y = \frac{f_1(X)}{f_0(X)}$ has a discrete distribution, then there may be no exact value $y_{\alpha}$ such that $P_0(Y \geq y_{\alpha}) = \alpha$ for any given $\alpha.$ We introduce the concept of a randomised test to help us.
\begin{definition}(Test Function). Let $\delta(.)$ be a function of the data taking values in [0,1]. Let $U \sim U[0,1]$ be independent of X. Then, we reject $H_0$ if $U \leq \delta(X).$
\end{definition}
\begin{remark}The above construction of the test function means that we reject with probability $\delta(X).$
\end{remark}
\begin{theorem_exam}{Discrete Neyman-Pearson Lemma}{} Let X be a \textbf{discrete random variable}. Hence, the likelihood ratio of the alternative over the null is a discrete random variable. The most powerful test at level $\alpha$ of $H_0: P_0(.)$ vs $H_1: P_1(.)$ is given by the test function
$$
\delta(x) =
\begin{cases}
1 \quad \frac{f_1(x)}{f_0(x)} > y \\
\gamma \quad \frac{f_1(x)}{f_0(x)} = y \\
0 \quad \frac{f_1(x)}{f_0(x)} < y \\
\end{cases}
$$
where y and $\gamma$ are chosen so that $\mathbb{E}_{0}[\delta(X)] = \alpha$, with $\mathbb{E}_{0}$ denoting expectation under the null distribution $P_0(.).$
\end{theorem_exam}\begin{remark}We will need to determine the critical value y and randomisation probability $\gamma$ such that $\mathbb{E}_{0}[\delta(X)] = \alpha.$
\end{remark}
Hence, if $Y = \frac{f_{1}(.)}{f_{0}(.)}$ has a discrete distribution, then the value y satisfies $P_{0}(Y \geq y) \geq \alpha \geq P_{0}(Y > y)$ and $\gamma$ is chosen so that
$$
P_0(Y = y)\,\gamma + P_0(Y > y) = \alpha
$$
therefore
$$
\gamma = \frac{\alpha - P(Y > y)}{P(Y = y)}.
$$
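For example, suppose $X \sim \text{Bin}(10, p)$ with $H_0: p = \tfrac{1}{2}$ against $H_1: p = \tfrac{3}{4}$ and $\alpha = 0.05$ (an illustrative choice of numbers). The likelihood ratio is increasing in $x$, so the test can be expressed in terms of $X$ itself. Under $H_0$, $P_0(X \geq 9) = \tfrac{11}{1024} \approx 0.011$ and $P_0(X \geq 8) = \tfrac{56}{1024} \approx 0.055$, so no exact cut-off achieves level $0.05$. Taking the boundary at $x = 8$,
$$
\gamma = \frac{\alpha - P_0(X \geq 9)}{P_0(X = 8)} = \frac{0.05 - 11/1024}{45/1024} \approx 0.893,
$$
so we reject if $X \geq 9$, reject with probability $\approx 0.893$ if $X = 8$, and do not reject otherwise; then $\mathbb{E}_{0}[\delta(X)] = \alpha$ exactly.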
\lecture{24}{Simple vs Composite Hypothesis: UMP Tests}
\section{Hypothesis Testing}
\subsection{Simple vs Composite Hypothesis: One-sided UMP Tests}
We now have the following set up. We have a family, depending only on 1 parameter $\{f_{\theta}(.): \theta \in \Theta\}$ for some $\Theta \subset \mathbb{R}$ and we model the data x as observed values of $X \sim f_{\theta}(.)$ where $\theta \in \Theta$ is unknown. We want to test $H_0: \theta = \theta_0$ against $H_1: \theta \in \Theta$\textbackslash $\theta_0.$ We want to find a uniformly most powerful test.\\
\begin{definition_exam}{Uniformly Most Powerful Test}{} Let $\mathcal{C}$ be a class of tests for testing $H_0: \theta = \theta_0$ versus $H_1: \theta \in \Theta$\textbackslash$\theta_0.$ A test $
\delta_0(.)$ in class $\mathcal{C}$ is a uniformly most powerful (UMP) test at level $\alpha$ if
\begin{enumerate}
\item $E_{\theta_{0}}[\delta_0(X)] \leq \alpha$ (so the test is of level $\alpha$)
\item $E_{\theta}[\delta_0(X)] \geq E_{\theta}[\delta_1(X)]$ for all $\theta \in \Theta$\textbackslash $\theta_0$ (i.e. where $H_1$ is true) and for any other test $\delta_1(.)$ which also has level $E_{\theta_{0}}[\delta_1(X)] \leq \alpha$ (so the test has the biggest power).
\end{enumerate}
\end{definition_exam}
\begin{remark}With regards to notation, $E_{\theta_{0}}$ denotes the expectation under the null hypothesis being true whilst $E_{\theta}$ denotes the expectation under the alternative hypothesis.
\end{remark}
Hence, we want to find the test that has the most power out of all tests that have level $\alpha.$\\
We can also frame this as an optimisation problem. Our goal is to find a UMP level-$\alpha$ test $\delta$ which
$$
\text{Maximise power function } E_{\theta}[\delta(X)]
$$
$$
\text{subject to } E_{\theta_{0}}[\delta(X)] \leq \alpha.
$$
\newline
For composite alternative hypotheses, the Neyman-Pearson lemma and the likelihood ratio test generalise if the data model $f_{\theta}(.)$ satisfies the monotone likelihood ratio property. First, we define what is meant by the monotone likelihood ratio property.
\begin{definition_exam}{Monotone Likelihood Ratio}{} A family of pdfs/pmfs $\{f_{\theta}(.): \theta \in \Theta\}$ for $\Theta \subseteq \mathbb{R}$ is said to have monotone likelihood ratio (MLR) in a test statistic T if for any arbitrary parameter values $\theta_0 < \theta_1$ in $\Theta$, we have that
\begin{enumerate}
\item $f_{\theta_{0}}(.)$ and $f_{\theta_{1}}(.)$ are distinct distributions;
\item The ratio $\frac{f_{\theta_{1}}(X)}{f_{\theta_{0}}(X)}$ is a non-decreasing function of the test statistic T(X).
\end{enumerate}
\end{definition_exam}
\begin{corollary}If the functions are first-differentiable, we can also say that the family has the monotone likelihood ratio if
$$
\frac{\partial}{\partial X}\bigg[\frac{f_{\theta_{1}}(X)}{f_{\theta_{0}}(X)} \bigg] \geq 0.
$$
\end{corollary}
\begin{remark}Note that the ratio is a function of X and not the parameter $\theta.$
\end{remark}
We can now state that when the density $\{f_{\theta}(.): \theta \in \Theta\}$ has the MLR property, then a uniformly most powerful test exists.
\begin{theorem_exam}{One-Sided Composite Tests with MLR}{} Suppose a family $\{f_{\theta}(.): \theta \in \Theta\}$ for $\Theta \subseteq \mathbb{R}$ has monotone likelihood ratio in the statistic $T(X).$ Then, for any $\theta_0 \in \Theta$, a uniformly most powerful level-$\alpha$ test exists for testing $H_0: \theta = \theta_0$ against $H_1: \theta > \theta_0$ given by the test function
$$
\delta(X) =
\begin{cases}
1 \quad T(\tilde{X}) > c\\
\gamma \quad T(\tilde{X}) = c\\
0 \quad T(\tilde{X}) < c
\end{cases}
$$
where $c, \gamma$ are chosen to satisfy $E_{\theta_{0}}[\delta(\tilde{X})] = \alpha.$
\end{theorem_exam}
\begin{remark}The UMP level-$\alpha$ test for $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$ is obtained by swapping inequalities given in the theorem above.
\end{remark}
\begin{remark}Note that we need MLR to be an INCREASING function of T(X) to invoke the above theorem. So be careful.
\end{remark}
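For example, suppose $X_1,...,X_n$ are iid $N(\theta, 1)$ (an illustrative regular one-parameter family) and we test $H_0: \theta = \theta_0$ against $H_1: \theta > \theta_0$. The family has MLR in $T(\tilde{X}) = \overline{X}$, and since $\overline{X}$ is continuous the randomisation $\gamma$ plays no role. The UMP level-$\alpha$ test therefore rejects when
$$
\overline{X} > \theta_0 + \frac{z_{\alpha}}{\sqrt{n}},
$$
where $z_{\alpha}$ is the upper-$\alpha$ quantile of $N(0,1)$, since $\overline{X} \sim N(\theta_0, \tfrac{1}{n})$ under $H_0$ gives $E_{\theta_{0}}[\delta(\tilde{X})] = \alpha.$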
\begin{theorem}If the family $\{f_{\theta}(.): \theta \in \Theta\}$ has a monotone likelihood ratio, the power function of the UMP 1-sided test is strictly monotone: it increases strictly in $\theta$ until it reaches 1 and is constant thereafter.
\end{theorem}
We now can revisit something we have seen from first year.
\begin{definition}(P-value). Given our test $\delta$ defined for densities with the MLR property, we reject the null hypothesis $H_0$ if $T(x) > c$. Suppose that the observed test statistic is $T(x) = t$. We call the probability, computed under $H_0$,
$$
P_{\theta_{0}}(T \geq t)
$$
$$
the \textbf{p-value}.
\end{definition}
\begin{remark}The smaller the p-value, the stronger the evidence against the null hypothesis.
\end{remark}
The reason for introducing p-values is that simply stating that we reject $H_0$ is not very informative. The p-value is the smallest level at which we would reject $H_0$: if we reject at level $\alpha$, we also reject at any level $\alpha' > \alpha.$\\
\begin{remark}A high p-value is \textbf{not} in favor of $H_0.$ It suggests that either $H_0$ is true or $H_0$ is false but our test has low power. Furthermore, the p-value is \textbf{not} the probability that $H_0$ is true conditioned on the data.
\end{remark}
We now state an important theorem to help us with testing.
\begin{theorem_exam}{Monotone likelihood ratio for exponential families}{}All 1-parameter exponential families have monotone likelihood ratio in their \textbf{sufficient statistic} T(X).
\end{theorem_exam}
Hence, we have an important corollary to the fact that MLR exists for exponential families.
\begin{proposition_exam}{Existence of 1-sided UMP tests for exponential families}{}All 1-parameter exponential families have a 1-sided UMP test.
\end{proposition_exam}
\subsection{Simple vs Composite Hypothesis: Two-sided UMPU Tests}
We now look at two-sided tests. Suppose we are now testing a two-sided composite hypothesis $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ for $\theta_0$ in the interior of $\Theta.$\\
Recall that we stated that the power function for UMP 1-sided tests are strictly monotone. The significance of this is that for a two-sided test, performing the one-sided test will be extremely biased towards one side. That is, the power function will be terrible for one side of the test. In particular, the power of the test will be below the level of the test
$$
E_{\theta}\big[\delta(X) \big] < E_{\theta_{0}}\big[\delta(X) \big]
$$
for some $\theta$ in the alternative.
\begin{proposition}(Power function is monotone for UMP 1-sided). The power function for a UMP 1-sided test is strictly monotone.
\end{proposition}
Hence, this is terrible for our two-sided test as the 1-sided UMP test will do well for one side of $\theta$ but have terrible power for the other side. In fact, it will have power even less than the level $\alpha$, which we do not want. This is known as a \textbf{biased test}. Hence, we need to restrict our analysis to tests that do not have this property.
\begin{definition_exam}{Unbiased test}{} A test $\delta(.)$ is unbiased if its power function
$$
E_{\theta}[\delta(X)] \geq E_{\theta_{0}}[\delta(X)] = \alpha
$$
for all $\theta$ under $H_1$.
That is, the power for rejecting a false hypothesis is higher than the level of the test.
\end{definition_exam}
\begin{definition_exam}{UMPU Test}{} A uniformly most powerful unbiased (UMPU) test is a test that is uniformly most powerful among all unbiased level-$\alpha$ tests.
\end{definition_exam}
\begin{theorem_exam}{Existence of 2-sided UMPU tests for exponential families}{}
For a 1-parameter exponential family, a UMPU test always exists as the family has a monotone likelihood ratio.
\end{theorem_exam}
We can now state the theorem that guarantees us a UMPU level-$\alpha$ test for 1-parameter exponential families.
\begin{theorem_exam}{Karlin-Rubin Theorem}{} For a 1-parameter exponential family with sufficient statistic T(X), a UMPU level-$\alpha$ test of $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ exists and is given by
$$
\delta(X) =
\begin{cases}
1 \quad T(x) > C_2 \\
\gamma_2 \quad T(x) = C_2 \\
0 \quad C_1 < T(x) < C_2 \\
\gamma_1 \quad T(x) = C_1 \\
1 \quad T(x) < C_1 \\
\end{cases}
$$
where $\gamma_1, \gamma_2, C_1, C_2$ are chosen such that
$$
\begin{cases}
E_{\theta_{0}}[\delta(X)] = \alpha \\
\\
E_{\theta_{0}}[T(x)\delta(x)] = \alpha E_{\theta_{0}}[T(x)]
\end{cases}
$$
i.e. the level is $\alpha$ and the test function $\delta(.)$ is uncorrelated with the sufficient statistic $T(x).$
\end{theorem_exam}
\begin{remark}Notice that we now have UMPU rather than UMP tests. This is because we now \textbf{restrict} our attention to tests that are unbiased. Within this restricted class, we can then find a uniformly most powerful test, that is, a test with a uniformly higher power function compared to the other tests in the class.
\end{remark}
\begin{theorem}Suppose we have a 1-parameter family indexed by $\theta$ in some interval and suppose $\theta_0$ is an interior point of that interval. If a UMP test of level $\leq \alpha$ exists for $H_0: \theta = \theta_0$ against the two-sided alternative $H_1: \theta \neq \theta_0$, then the test is automatically of exact level $\alpha$ and unbiased.
\end{theorem}
\lecture{25}{Simple vs Composite: General Methods}
\section{Hypothesis Testing}
\subsection{Simple vs Composite: General Methods}
We are now interested in generalising to simple vs composite testing for \textbf{when we don't have a monotone likelihood ratio}. Suppose we have an i.i.d sample x with a common pdf $f_{\theta}(.)$ for a 1-parameter family $\{f_{\theta}(.): \theta \in \Theta\}$ for some $\Theta \subseteq \mathbb{R}.$ Recall the Neyman Pearson likelihood ratio (NPLR) test statistic.
\begin{definition}(Neyman Pearson likelihood ratio test statistic). Let $f_{\theta_{1}}$ and $f_{\theta_{0}}$ be the pdf under $\theta_1$ and $\theta_0$ respectively. Then, the Neyman Pearson likelihood ratio test statistic is
$$
\frac{\prod_{i=1}^nf_{\theta_{1}}(x_i) }{\prod_{i=1}^nf_{\theta_{0}}(x_i)}.
$$
\end{definition}
Now if we have a composite alternative $H_1: \theta \in \Theta$\textbackslash $\theta_0$, we can try to estimate a $\theta_1$-value and plug it in to get an approximation to the NPLR statistic.
\begin{definition_exam}{Generalised Likelihood Ratio Test}{} The Generalised Likelihood Ratio Test (GLRT) for testing $H_0: \theta = \theta_0$ vs $H_1: \theta \in \Theta $\textbackslash $\theta_0$ uses the statistic
$$
\frac{\prod_{i=1}^nf_{\hat{\theta}}(x_i)}{\prod_{i=1}^nf_{\theta_{0}}(x_i)}
$$
where $\hat{\theta} = \arg\max_{\theta \in \Theta}\prod_{i=1}^{n}f_{\theta}(x_i)$ is the MLE for $\theta$ over $\Theta.$
\end{definition_exam}
We can also use the results we derived about the asymptotic properties of the MLE. Recall that we linearly approximated the score function and took a 2-term Taylor series about the true value $\theta_0.$ This gives us the next theorem.
\begin{theorem_exam}{Limiting distribution of GLRT}{}Let $\hat{\theta}$ be the MLE. Then, the limiting distribution of $\ell(\hat{\theta};\tilde{X}) - \ell(\theta_0 ; \tilde{X})$ is $\frac{1}{2}\chi_{1}^{2}$ when $H_0$ is true, where $\ell$ denotes the log-likelihood.
\end{theorem_exam}
An alternative formulation of the likelihood ratio statistic is
$$
\lambda = 2\log\bigg(\frac{\sup_{\theta \in \Theta}L(\theta)}{L(\theta_0)} \bigg)
$$
where $L(\theta)$ is the likelihood function, so that $\lambda = 2\{\ell(\hat{\theta}) - \ell(\theta_0)\} \xrightarrow{d} \chi_{1}^{2}$ under $H_0$.
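As a short illustration of how this statistic is used in practice, the following sketch (Python; the simulated data and the null rate $\lambda_0 = 1$ are illustrative assumptions) computes the GLRT statistic for a Poisson rate and converts it into a p-value using the $\chi_{1}^{2}$ limit above.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(0)
x = rng.poisson(lam=1.3, size=50)     # illustrative data
lam0 = 1.0                            # H0: lambda = lam0

def loglik(lam, x):
    # Poisson log-likelihood at rate lam
    return np.sum(poisson.logpmf(x, lam))

lam_hat = x.mean()                    # unrestricted MLE of the rate
stat = 2 * (loglik(lam_hat, x) - loglik(lam0, x))
p_value = chi2.sf(stat, df=1)         # limiting chi^2_1 distribution under H0
print(stat, p_value)
\end{verbatim}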
\lecture{26}{Composite vs Composite: 1-Parameter Families}
\section{Hypothesis Testing}
\subsection{Composite vs Composite: 1-Parameter Families}
We are now interested in the set up $x \sim f_{\theta}(x)$ for a 1-parameter family $\mathcal{F} = \{f_{\theta}(.): \theta \in \Theta\}$ for $\Theta \subseteq \mathbb{R}.$ We wish to test $H_0: \theta \in \Theta_0$ against $H_1: \theta \in \Theta$\textbackslash$\Theta_0$ for some $\Theta_0 \subseteq \Theta.$ For certain kinds of composite $H_0$'s, optimal tests exist.
\begin{proposition_exam}{Composite null with MLR property}{} If the family $\mathcal{F}$ has a monotone likelihood ratio in a statistic T(x), the \textbf{UMP} test of $H_0: \theta \leq \theta_0$ against $H_1: \theta > \theta_0$ is of the same form as for a simple null hypothesis $H_0: \theta = \theta_0$ against $H_1: \theta > \theta_0$. That is, the test function is given by
$$
\delta(X) =
\begin{cases}
1 \quad T(X) > C\\
\gamma \quad T(X) = C\\
0 \quad T(X) < C
\end{cases}
$$
where $C, \gamma$ are chosen to satisfy $E_{\theta_{0}}[\delta(X)] = \alpha.$
\end{proposition_exam}
Hence, our composite null hypothesis test now becomes a simple null hypothesis test against a composite alternative. Recall that a UMP test exists for simple vs 1-sided composite hypotheses if our family of interest has a monotone likelihood ratio in the statistic T(X).\\
We see again that the exponential family has nice properties.
\begin{proposition_exam}{Composite null for exponential family}{} If the family $\mathcal{F}$ is a 1-parameter exponential family, a \textbf{UMP} test for $H_0: \{ \theta \leq \theta_1 \} \cup \{ \theta > \theta_2\}$ against $H_1: \theta_1 < \theta < \theta_2$ exists and is of the form
$$
\delta(X) =
\begin{cases}
1 \quad C_1 < T(x) < C_2 \\
\gamma_i \quad T(x) = C_i, i=1,2 \\
0 \quad T(x) < C_1 \text{ or } T(x) > C_2\\
\end{cases}
$$
where $C_i, \gamma_i$ are chosen to satisfy
$$
E_{\theta_{1}}[\delta(x)] = E_{\theta_{2}}[\delta(x)] = \alpha
$$
i.e. on the endpoints of the interval, the level is exactly $\alpha.$
\end{proposition_exam}
\newpage
We can state a stronger proposition for 1-parameter exponential families for when our null hypothesis is inside an interval with our composite alternative being outside of that interval.
\begin{proposition}(Interval composite null for exponential family). If $\mathcal{F}$ is a 1-parameter exponential family with a sufficient statistic T(x), then the \textbf{UMPU} test of $H_0: \theta_1 \leq \theta \leq \theta_2$ against $H_1: \{ \theta < \theta_1 \} \cup \{\theta > \theta_2\}$ is of the form
$$
\delta(X) =
\begin{cases}
1 \quad \{T(x) < C_1 \} \cup \{T(x) > C_2 \}\\
\gamma_i \quad T(x) = C_i, i=1,2 \\
0 \quad C_1 < T(x) < C_2\\
\end{cases}
$$
where the $C_i, \gamma_i$ are chosen so that
$$
E_{\theta_{1}}[\delta(x)] = E_{\theta_{2}}[\delta(x)] = \alpha
$$
i.e. on the endpoints of the interval, the level is exactly $\alpha.$
\end{proposition}
\begin{remark}The limiting version of this test as $\theta_1 \rightarrow \theta_0 \leftarrow \theta_2$ is the \textbf{UMPU} test for $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0.$
\end{remark}
\subsection{Composite vs Composite: Multi-Parameter Families}
We are now interested in testing composite nulls against composite alternatives for multi-parameter families. Previously, what we have established worked for one parameter exponential families. We now wish to generalise this and give an approximation to the NPLR statistic. That is $H_0: \theta \in \Theta_0 \subset \Theta$ against $H_1: \theta \in \Theta$\textbackslash $\Theta_0$.
\begin{definition_exam}{Generalised Likelihood Ratio Test for composite null}{} The Generalised Likelihood Ratio Test (GLRT) for testing $H_0: \theta \in \Theta_0 \subset \Theta$ against $H_1: \theta \in \Theta$\textbackslash $\Theta_0$ uses the statistic
$$
\ell(\hat{\theta}; x) - \ell(\hat{\theta}_0; x)
$$
where $\hat{\theta}$ is the unrestricted maximum likelihood estimator whilst $\hat{\theta}_{0} = \arg\max_{\theta \in \Theta_0}\ell(\theta;x)$ is the null-restricted m.l.e.
\end{definition_exam}
We now describe the limiting distribution of the GLRT statistic.
\begin{theorem_exam}{Wilks' Theorem}{} Suppose $\theta$ is a vector of parameters and we test $H_0: \theta \in \Theta_0$ against $H_1: \theta \in \Theta$\textbackslash $\Theta_0.$ Under the conditions of
\begin{enumerate}
\item Smoothness/differentiability
\item Support being independent of $\theta$
\item $\Theta_0$ consists of only interior points of $\Theta$
\item Identifiability: $\theta_1 \neq \theta_2$ means that $f_{\theta_{1}}(.) \neq f_{\theta_{2}}(.)$.
\end{enumerate}
Suppose that $H_0$ imposes k constraints on the parameter vectors. Then
$$
2\{\ell(\hat{\theta}; x) - \ell(\hat{\theta}_0; x)\} \xrightarrow{d} \chi_{k}^{2}
$$
where $\hat{\theta}_0$ is the m.l.e under $\Theta_0$, $\ell$ is the log likelihood and k is the difference in dimension between $\Theta$ and $\Theta_0.$
\end{theorem_exam}
\begin{remark}This allows us to convert observations into p-values.
\end{remark}
\lecture{27}{GLRT Examples}
\section{Hypothesis Testing}
\subsection{GLRT Examples}
\begin{theorem}The 1-way ANOVA F-test is an example of the GLRT.
\end{theorem}
\begin{theorem}The GLRT with a one-sided alternative is equivalent to a 1-sided t-test.
\end{theorem}
\subsection{Simulation based p-values}
What should we do if we cannot rely on large-sample theory to run our inference tests? We can use simulation techniques.
\begin{definition_exam}{Monte-Carlo p-value}{} Suppose we are testing a simple null hypothesis $H_0: \theta = \theta_0.$ We simulate data x from the null hypothesis distribution $f_{\theta_{0}}(.)$ and construct the statistic T(x) from each simulated sample. Repeating this a large number of times generates the sampling distribution of T(x) under $H_0$; the Monte-Carlo p-value is then the proportion of simulated statistics at least as extreme as the observed one.
\end{definition_exam}
\begin{algorithm}
\DontPrintSemicolon
\KwIn{Data Generating Process such that $H_0$ is true}
\KwOut{Size level}
$x^{(i)} \gets$ 10000 draws from DGP with true $H_0$\;
$t^{(i)} \gets$ test statistic\;
$size \gets$ count fraction of rejections of $H_0$ over $\{t^{(i)}\}$\;
\Return{size}\;
\caption{{\sc Size Study}}
\label{algo:size-study}
\end{algorithm}
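A minimal simulation sketch of this size study (Python; the $N(0,1)$ data-generating process, the one-sample t-test, and the constants are illustrative assumptions) is:
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
alpha, n_sims, n = 0.05, 10000, 30
rejections = 0
for _ in range(n_sims):
    x = rng.normal(loc=0.0, scale=1.0, size=n)   # draw from a DGP with H0 true
    p = ttest_1samp(x, popmean=0.0).pvalue       # p-value of the test statistic
    rejections += p < alpha
print(rejections / n_sims)                       # estimated size, close to alpha
\end{verbatim}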
\begin{definition_exam}{Parametric Bootstrap}{} Suppose we are testing a composite null hypothesis $H_0: \theta \in \Theta_0.$ We estimate the null parameter $\theta_0$ by $\hat{\theta}_0$ under $H_0$. We then sample repeatedly from the distribution $f_{\hat{\theta}_0}(.)$ and construct the statistic from each simulated sample. Hence, we can generate a sampling distribution for our statistic T(x).
\end{definition_exam}
\newpage
\begin{algorithm}
\DontPrintSemicolon
\KwIn{Data Generating Process $F_{\theta}$ with unknown $\theta$}
\KwOut{P-value}
$\hat{\theta} \gets$ from data x which estimates $\theta$\;
$\gamma(x) \gets$ test statistic with $\hat{\theta}$ and x\;
\For{$s\gets0$ \KwTo $S$}{
$x^{(s)} \gets$ sample from DGP $F_{\hat{\theta}}$ with $\hat{\theta}$ as parameter\;
$\gamma(x^{(s)}) \gets$ test statistic on $x^{(s)}$\;
}
p-value $\gets$ $\frac{\text{number of draws with }\gamma(x^{(s)})\leq\gamma(x)}{S}$\;
\uIf{p-value $< \alpha$}{
\Return{Reject $H_0$}\;
}
\Else{
\Return{Don't reject $H_0$}\;
}
\caption{{\sc Parametric Bootstrap for one sided test}}
\label{algo:parametric-bootstrap}
\end{algorithm}
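The following sketch mirrors the algorithm above in Python. The model ($N(\mu, 1)$ data with composite null $H_0: \mu \leq 0$), the simulated data, and the convention that large values of the statistic count as extreme (so the p-value counts bootstrap draws at least as large as the observed statistic) are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=0.2, scale=1.0, size=40)   # observed data (illustrative)
S, alpha = 5000, 0.05

theta0_hat = min(x.mean(), 0.0)   # estimate of mu restricted to H0: mu <= 0
t_obs = x.mean()                  # test statistic on the observed data

t_boot = np.empty(S)
for s in range(S):
    xs = rng.normal(loc=theta0_hat, scale=1.0, size=x.size)  # draw from fitted null
    t_boot[s] = xs.mean()                                    # statistic on the draw

p_value = np.mean(t_boot >= t_obs)   # fraction at least as extreme as observed
print("reject H0" if p_value < alpha else "do not reject H0")
\end{verbatim}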
\lecture{28}{Simple Prediction Problems}
\section{Statistical Decision Theory}
\subsection{Simple Prediction Problems}
We are interested in the set up where Y is a random variable with a \textbf{known distribution}. We define D to be an arbitrary set called the \textbf{decision space}.
\begin{definition}(Loss). For each possible value y of Y and decision $d \in D$, we suffer a loss $L(d|y).$
\end{definition}
\begin{definition}(Risk). The expectation of the loss is the risk
$$
R(d) = E[L(d|y)]
$$
where we aim to minimise the risk.
\end{definition}
\begin{definition}(Admissible). Let $\tilde{d}(.)$ be a procedure. We say that $\tilde{d}(.)$ is admissible if there is \textbf{no} other procedure $d(.)$ with
$$
R(\theta, d) \leq R(\theta, \tilde{d}) \quad \forall \theta \in \Theta
$$
and
$$
R(\theta, d) < R(\theta, \tilde{d}) \quad \text{for some } \theta \in \Theta
$$
that is, no other procedure dominates $\tilde{d}(.)$.
\end{definition}
\begin{theorem}In the simple prediction problem, if we define the loss to be the \textbf{squared error loss} $L(d|y) = c(d - y)^2$ for $c > 0$, the optimal decision d to minimise this is the mean $d = E(y).$
\end{theorem}
\begin{proof}(Sketch). Look at $R(d) = E[L(d|Y)]$, expand, and use the first-order condition to find the optimal value for d.
\end{proof}
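In more detail, for any $d$,
$$
R(d) = E[c(d - Y)^2] = c\big[(d - E[Y])^2 + Var(Y)\big],
$$
since the cross term $2(d - E[Y])\,E[E[Y] - Y]$ vanishes; the squared term is minimised (at zero) by taking $d = E[Y]$, and the constant $c > 0$ does not affect the minimiser.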
\begin{theorem}In the simple prediction problem, if we define the loss to be the \textbf{absolute error loss} $L(d|y) = c|d - y|$ for $c > 0$, the optimal decision d to minimise this is the median $d = F^{-1}(\frac{1}{2}) = median(Y).$
\end{theorem}
\begin{theorem}In the simple prediction problem, if we define the loss to be the \textbf{0-1 error loss}
$$
L(d|y) = 1\{|d - y| > c\} =
\begin{cases}
1 \quad \text{if } |d-y| > c \\
0 \quad \text{if } |d-y| \leq c
\end{cases}
$$
so the loss is the non-coverage of y by the interval $d \pm c.$ Furthermore, we also assume that $f(.)$ is unimodal. The optimal decision d is chosen so that the interval $d \pm c$ is a level set of f(.). If the pdf f(.) is symmetric about m, the optimal d is m.
\end{theorem}
\begin{theorem}Consider the simple prediction problem where Y has a strictly increasing, continuous CDF F(.), $\mu = E(Y)$ exists and is finite, and the decision space is $D = \mathbb{R}$. We assume the loss is the asymmetric piecewise-linear loss function given by
$$
L(d|y) =
\begin{cases}
p(y - d) \quad d < y \\
\\
(1 - p)(d - y) \quad d > y
\end{cases}
$$
for some $p \in (0,1).$ Then, the decision d that minimises the risk is
$$
d = F^{-1}(p)
$$
that is, the p-th quantile of $F(.)$.
\end{theorem}
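As a quick numerical check of this result (a sketch with an illustrative Exponential distribution for Y and $p = 0.8$), the empirical risk of the asymmetric loss is minimised close to the p-th sample quantile:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(scale=2.0, size=100000)   # draws of Y (illustrative)
p = 0.8

def risk(d):
    # empirical expected asymmetric piecewise-linear ("pinball") loss
    return np.mean(np.where(y > d, p * (y - d), (1 - p) * (d - y)))

grid = np.linspace(0.0, 10.0, 2001)
d_opt = grid[np.argmin([risk(d) for d in grid])]
print(d_opt, np.quantile(y, p))   # the two values should nearly agree
\end{verbatim}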
\lecture{29}{Discrete Selection Problem}
\section{Statistical Decision Theory}
\subsection{Discrete Selection Problem}
Suppose $\mathbb{R}$ is partitioned into sets $S_1,...,S_k$. The decision space is $D = \{1,2,...,k\}$. Our goal is to guess which set y belongs to. Hence, the loss function is
$$
L(d|y) = \sum_{j=1}^{k}L_{d_{j}}1\{y \in S_j\}
$$
where we can construct the \textbf{loss matrix} $\mathbb{L} = \{L_{d_{j}}; d,j=1,...,k\}$ where $L_{dd} = 0$ and $L_{d_{j}} \geq 0.$
We can define a column vector $\tilde{p}$ where $p_j = P(Y \in S_j)$ for $j=1,...,k.$ Then, the risk of a decision d is
$$
R(d) = \sum_{j=1}^{k}L_{d_{j}}P(Y \in S_j) = (\mathbb{L}\tilde{p})_{d}.
$$
In other words, the risk of decision d is the d-th entry of $\mathbb{L}\tilde{p}$: the losses in row d of the loss matrix weighted by the probabilities of Y landing in the corresponding sets.
\subsection{Special case of Discrete Selection}
Suppose that the loss matrix defined earlier $\mathbb{L}$ only depends on
\begin{enumerate}
\item The observed value y of Y
\item Whether the decision d is right or wrong.
\end{enumerate}
Recall that for our loss matrix $\mathbb{L}$, the columns are the possible values of Y and the rows are the different decisions we label Y to be.
So, $\mathbb{L}_{dd} = 0$ and $\mathbb{L}_{d_{j}} = L_j$ where $d \neq j.$ That is, the diagonal of our loss matrix is 0 and the off-diagonal entries are the same within each column.
Then, the risk of decision d reduces to
$$
R(d) = E[L(d|y)] = \sum_{j=1}^{k}L_{d_{j}}P(Y \in S_j) = \sum_{j \neq d}L_{j}P(Y \in S_j)
$$
$$
= \sum_{j=1}^{k}L_{j}P(Y \in S_j) - L_dP(Y \in S_d).
$$
Hence, minimising $R(d)$ is equivalent to maximising the product $L_dP(Y \in S_d).$ That is, we label Y as belonging to $S_d$ either because Y is highly likely to land there or because we would pay a big price if it did and we had not picked it: $L_{d}$ is the loss we pay if Y lands in $S_d$ but we did not pick $S_d.$
\lecture{30}{Statistical Decision Theory}
\section{Statistical Decision Theory}
\subsection{Statistical Decision Theory}
\begin{definition}(Statistical Decision Framework). We specify the setup for the framework. \begin{enumerate}
\item A family $\mathcal{F} = \{f_{\theta}(.): \theta \in \Theta\}$ of distributions for a random vector $\tilde{x}$ taking values in a space X.
\item A decision space $\mathcal{D} = \{d\}$.
\item A non-negative valued loss function $L(d|\theta)$: when a decision $d \in D$ is made and the true distribution is $f_{\theta}(.)$, a loss of $L(d|\theta)$ is suffered.
\end{enumerate}
The problem is then to choose a D-valued decision function d(.) defined on the sample space $d: X \rightarrow D.$
\end{definition}
\begin{definition_exam}{Risk Function}{} Each decision function $d: X \rightarrow D$ has an associated risk function
$$
R(\theta|d(.)) = E_{\theta}[L(d(x)|\theta)]
$$
which measures the long-run average loss suffered using the decision function $d(.)$ and when $x \sim f_{\theta}(.).$
\end{definition_exam}
\begin{remark}Comparing decision functions reduces down to comparing their respective risk functions.
\end{remark}
The issue is that we cannot compare risk functions pointwise between decision functions as we can simply define biased decision functions which work extremely well for certain values of $\theta.$ Hence, pointwise comparison of risk functions does not work. We instead use two alternative measures of risk: the Bayes risk and the maximum risk.
\begin{definition_exam}{Bayes Risk}{} Let w(.) be a non-negative weight function. We define the Bayes risk as
$$
B_{w}(d) = \int_{\Theta}w(\theta)R(\theta|d)d\theta = \int_{\Theta}w(\theta)E_{\theta}[L(d(x)|\theta)]d\theta \leq +\infty.
$$
\end{definition_exam}
\begin{definition_exam}{Bayes Decision Rule}{} If the decision $\tilde{d}$ is such that
$$
B_{w}(\tilde{d}) \leq B_w(d)
$$
for any other decision function $d(.)$, then $\tilde{d}$ is said to be a \textbf{Bayes decision rule} with respect to the weight function w(.).
\end{definition_exam}
\begin{definition_exam}{Maximum over a subset risk}{} For a given subset $\Theta_0 \subseteq \Theta$, a decision rule $\hat{d}$ is said to be minimax (over $\Theta_0$) if
$$
\max_{\theta \in \Theta_0}R(\theta|\hat{d}) \leq \max_{\theta \in \Theta_0}R(\theta|d)
$$
for any other decision function $d(.)$. Hence, we minimise the maximum risk.
\end{definition_exam}
\subsection{Finding Bayes Decision Rules}
It is quite easy to find Bayes decision rules. We can find Bayes decision rules by reducing our problem into a simple prediction problem. Recall that the Bayes risk of a decision rule $d(.)$ is
$$
B_w(d) = \int_{\Theta}w(\theta)R(\theta|d)d\theta
$$
$$
= \int_{\Theta}w(\theta)\bigg[E_{\theta}[L(d(\tilde{x})|\theta)] \bigg]d\theta
$$
$$
= \int_{\Theta}w(\theta) \Bigg[ \int ... \int_X L[d(\tilde{x})|\theta]f_{\theta}(\tilde{x})d\tilde{x} \Bigg] d\theta
$$
$$
= \int ...\int_{X} \Bigg[ \int_{\Theta} L[d(\tilde{x})|\theta]w(\theta)f_{\theta}(\tilde{x})d\theta\Bigg]d\tilde{x}
$$
where the last equality arises from Tonelli's theorem.
\begin{theorem}(Tonelli's theorem). Assume that f is a non-negative measurable function. Then
$$
\int_X\bigg(\int_Yf(x,y)dy \bigg)dx = \int_Y\bigg(\int_Xf(x,y)dx \bigg)dy = \int_{X \times Y}f(x,y)d(x,y).
$$
\end{theorem}
Assuming $w(\theta)f_{\theta}(\tilde{x})$ is integrable, we define
$$
m(\tilde{x}) = \int w(\theta)f_{\theta}(\tilde{x})d\theta.
$$
Hence, we have the Bayes risk as
$$
B_{w}(d)= \int ... \int_{X} m(\tilde{x}) \Bigg[ \int_{\Theta} L[d(\tilde{x})|\theta]\frac{w(\theta)f_{\theta}(\tilde{x})}{m(\tilde{x})}d\theta\Bigg]d\tilde{x}
$$
\begin{definition}Using the definition of conditional probability, we define
$$
p(\theta|\tilde{x}) = \frac{w(\theta)f_{\theta}(\tilde{x})}{m(\tilde{x})}
$$
as the conditional (posterior) probability density of $\theta$ given $\tilde{x}.$
\end{definition}
\begin{definition}(Bayes Risk). We define the Bayes risk as
$$
B_w(d) = \int ... \int_{X}m(\tilde{x}) \Bigg[ \int_{\Theta}L\Big[d(\tilde{x})|\theta \Big]p(\theta|\tilde{x})d\theta \Bigg]d\tilde{x}.
$$
\end{definition}
Our aim is to choose the decision $d(\tilde{x})$ that minimises the inner integral $\int_{\Theta}L\Big[d(\tilde{x})|\theta \Big]p(\theta|\tilde{x})d\theta$ as that will minimise the Bayes risk. Note that this is \textbf{exactly} the same form as the simple prediction problem based on a single draw from $p(\theta|\tilde{x})$ with loss $L[d|\theta].$
\begin{theorem}If $\tilde{d}(\tilde{x})$ is a decision rule such that
$$
\int_{\Theta}L\Big[\tilde{d}(\tilde{x})|\theta \Big]p(\theta|\tilde{x})d\theta \leq \int_{\Theta}L\Big[d(\tilde{x})|\theta \Big]p(\theta|\tilde{x})d\theta
$$
for any other decision rule d(.), then we have that
$$
B_w(\tilde{d}) \leq B_w(d).
$$
\end{theorem}
\begin{definition}(Posterior Density). The density $p(\theta|\tilde{x})$ is known as the \textbf{posterior density.}
\end{definition}
\begin{definition}(Prior Density). If the weight function w(.) is a density, then it is known as a \textbf{prior density}.
\end{definition}
Hence, finding a Bayes rule is solving a simple prediction problem from a single draw of the posterior density.
\lecture{31}{Bayesian Interpretation}
\section{Statistical Decision Theory}
\subsection{Bayesian Interpretation}
We now consider setting our weight function w(.) to be a distribution. That is, $\int_{\Theta}w(\theta)d\theta = 1.$
\begin{definition}(Prior Distribution). Let w(.) be a probability density function. Then, we say that w(.) is a prior distribution.
\end{definition}
\begin{definition}(Bayes Theorem). Let $\theta$ have a continuous prior density $w(.)$ and let $f(x|\theta)$ be the conditional density of the data given $\theta$. Then Bayes theorem states that
$$
f(\theta|x) = \frac{f(x|\theta)w(\theta)}{\int f(x|\theta)w(\theta)d\theta}
$$
where the denominator does not depend on $\theta.$\\ If we had n i.i.d samples x, then we have that
$$
f(\theta|\tilde{x}) = \frac{\prod_{i=1}^{n}f(x_i|\theta)w(\theta)}{\int \prod_{i=1}^{n}f(x_i|\theta)w(\theta)d\theta} = \frac{L_n(\theta)w(\theta)}{c} \propto L_n(\theta)w(\theta)
$$
where c is a normalising constant and $L_n(\theta)$ is the likelihood with sample size n.
\end{definition}
\begin{theorem}The posterior is proportional to the prior times the likelihood
$$
f(\theta|\tilde{x}) \propto L_n(\theta)w(\theta).
$$
\end{theorem}
\begin{theorem_exam}{Bayes Decision Rules}{} Let the decision space $D = \mathbb{R}$ and suppose $\tilde{x} = (x_1,...,x_n)$ are iid random variables with density $f_{\theta}(.)$ for $\theta \in \mathbb{R} = \Theta$ with a loss function $L(d|\theta).$ Unless stated otherwise, let $D = \mathbb{R}.$
\begin{enumerate}
\item If $L(d|\theta) = (d-\theta)^2$, the Bayes decision rule is the \textbf{mean of the posterior distribution}.
\item If $L(d|\theta) = |d-\theta|$, the Bayes decision rule is the \textbf{median of the posterior distribution}.
\item If $L(d|\theta) = 1\{|d-\theta| > c\}$, the Bayes decision rule is the \textbf{midpoint of the level set of width 2c}.
\item Let $D = \{0,1\}$. Then, define the loss function to be
$$
L(d|\theta) =
\begin{cases}
L_0 \quad d=1,\theta \leq 0\\
L_1 \quad d=0,\theta > 0\\
0 \quad \text{otherwise.}
\end{cases}
$$
Let $p_1$ be the probability placed on $(0,\infty) = \Theta_1$ by the posterior distribution and $p_0 = 1 - p_1.$ Then, the Bayes decision rule is to choose 0 if $p_0L_0 > p_1L_1$ and 1 if $p_1L_1 > L_0p_0.$
\end{enumerate}
\end{theorem_exam}
The following is useful for helping us determine the risk with absolute error loss.
\begin{lemma}Suppose $Z \sim \mathcal{N}(0,1)$. Then, for any constant c,
$$
E_{\theta}\{|c + Z|\} = c\bigg[1 - 2\Phi(-c) \bigg] + \frac{2e^{-\frac{1}{2}c^2}}{\sqrt{2\pi}}.
$$
\end{lemma}
\begin{lemma}Suppose $Z \sim \mathcal{N}(0,1)$. Suppose $c_n \rightarrow 0$ as $n \rightarrow \infty$. Then
$$
\lim_{n \rightarrow \infty}E_{\theta}\{|c_n + Z|\} = \sqrt{\frac{2}{\pi}}.
$$
\end{lemma}
\begin{lemma}Let $f(\theta|\tilde{x})$ be the posterior of $\theta.$ We can compute the point estimate by computing the mean of the posterior
$$
\frac{1}{c}\int \theta L_n(\theta)w(\theta)d\theta.
$$
\end{lemma}
\begin{definition}(Posterior interval). Suppose we want to find a, b such that
$$
\int_{-\infty}^{a}f(\theta|\tilde{x})d\theta = \int_{b}^{\infty}f(\theta|\tilde{x})d\theta = \frac{\alpha}{2}.
$$
Let $C = (a,b)$. Then
$$
P(\theta \in C|\tilde{x}) = \int_{a}^{b}f(\theta|\tilde{x})d\theta = 1 - \alpha.
$$
C is called the $1 - \alpha$ posterior interval.
\end{definition}
\begin{definition}(Flat prior). Let the weight function be constant, $w(\theta) = \gamma$ for some constant $\gamma$. Then $w(.)$ is known as a flat or uninformative prior.
\end{definition}
\begin{definition}(Improper prior). Let $w(.)$ be a weight function such that $\int w(\theta)d\theta = \infty.$ w(.) is not a probability density function and hence is referred to as an improper prior.
\end{definition}
\begin{lemma}Flat priors are not transformation invariant. That is, a flat prior on a parameter $\theta$ does not imply a flat prior on the transformed version of the parameter $\tau(\theta).$
\end{lemma}
\begin{definition}(Jeffreys prior). A prior that is transformation invariant is known as the Jeffreys prior and is defined by
$$
w(\theta) \propto I(\theta)^{\frac{1}{2}}
$$
where $I(\theta)$ is the Fisher information function.
\end{definition}
\lecture{32}{Bayesian vs Frequentist}
\section{Statistical Decision Theory}
\subsection{Bayesian vs Frequentist}
We now describe the differences between Bayesian and frequentist approach to statistical modelling.\\
The frequentist approach to statistical modelling supposes that the data was generated from a fixed distribution from a known family. That is, the family $\{f_{\theta}(.): \theta \in \Theta\}$ is given and the distribution of the data is $f_{\theta}(.)$ for some unknown but fixed $\theta.$ Inference then consists of hypothesis tests, point and interval estimates.\\
The Bayesian approach is to specify a known prior distribution $w(.)$ on $\Theta$ and assume the data was obtained by first drawing a value $\theta$ from $\Theta$ according to $w(.)$ and then conditional on $\theta$, the data has distribution $f_{\theta}(.).$ Inference is done on the posterior distribution $p(\theta|\tilde{x})$ of $\theta$ given $\tilde{x}$.\\
\textbf{We assume the frequentist point of view in this course.} We assume there is a fixed non-random but unknown true parameter value.\\
Bayesian procedures have very desirable frequentist properties. We do not require that our weight functions are integrable (proper priors). Even if w(.) is not integrable, the resulting posterior may still be integrable.\\
We now look at a family of weight functions with nice properties. We can select the weight function w(.) in such a way that the corresponding posterior belongs to the same family of distributions as the prior.
\begin{definition_exam}{Conjugate Family}{} Let $\mathcal{F}$ denote the class of PDFs $f(x|\theta)$. A class $\Pi$ of prior distributions is a conjugate family for $\mathcal{F}$ if the posterior distribution is in the class $\Pi$ for all $f \in \mathcal{F}$, all priors in $\Pi$, and all $x \in X.$
\end{definition_exam}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Parameter $\theta$ & Conjugate Prior $w(\theta)$ \\
\hline
Normal Mean & Normal \\
Binomial Success probability & Beta \\
Poisson mean & Gamma \\
Gamma Rate & Gamma \\
Gamma Scale & Inverse Gamma \\
Normal variance & Inverse Gamma \\
$U(0,\theta)$ & Pareto \\
Pareto Shape & Gamma\\
\hline
\end{tabular}
\end{center}
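For instance, the Binomial/Beta row of the table can be checked directly: with a Beta$(a,b)$ prior on the success probability and $k$ successes in $n$ trials, the posterior is Beta$(a+k,\, b+n-k)$. A small sketch (Python; the values $a=b=2$, $n=20$, $k=14$ are illustrative):
\begin{verbatim}
from scipy.stats import beta

a, b = 2.0, 2.0                    # Beta(a, b) prior on theta
n, k = 20, 14                      # data: k successes in n Bernoulli trials

post = beta(a + k, b + n - k)      # conjugacy: posterior is again a Beta
print(post.mean())                 # posterior mean (Bayes rule under squared error)
print(post.ppf([0.025, 0.975]))    # central 95% posterior interval
\end{verbatim}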
We can use Bayes procedures as theoretical tools for finding minimax procedures. The main takeaway is that Bayes estimators with a constant risk function are minimax.
\begin{theorem_exam}{Minimax}{}Suppose that for k = 1,2,..., $d_k(.)$ is the Bayes procedure with respect to a proper prior $w_k(.)$ on $\Theta$ and the loss function $L(.|\theta).$ If the procedure $\tilde{d}(.)$ is such that
$$
\max_{\theta \in \Theta}\mathbb{E}_{\theta}[L(\tilde{d}(\tilde{x})|\theta)] \leq
\lim_{k \rightarrow \infty}B_{w_{k}}(d_k(.))
$$
then $\tilde{d}(.)$ is minimax over $\Theta$ for $L(.|\theta).$ Furthermore, $w_k$ is called a least favorable prior.
\end{theorem_exam}
\subsection{Minimax Procedures}
Recall that if a decision function $\tilde{d}(.)$ is such that, for $\Theta_0 \subseteq \Theta$,
$$
\sup_{\theta \in \Theta_0}E_{\theta}[L(\tilde{d}(\tilde{x})|\theta)] \leq \sup_{\theta \in \Theta_0}E_{\theta}[L(d(\tilde{x})|\theta)]
$$
for any other decision function $d(.)$, then $\tilde{d}(.)$ is said to be minimax over $\Theta_0$ for the loss function $L(.|\theta).$ These procedures are harder to find than Bayes procedures, but they have the advantage of not requiring the choice of a weight function.\\
\begin{proposition_exam}{Average less than max}{} For any estimator d(.) and proper prior w(.),
$$
B_w(d(.)) \leq \sup_{\theta \in \Theta}E_{\theta}[L(d(\tilde{x})|\theta)].
$$
That is, the Bayes risk is always at most the maximum risk.
\end{proposition_exam}
From the above result, if we can show that the \textbf{maximum risk is no greater than the Bayes risk, we can conclude that the Bayes risk equals the maximum risk for that estimator and hence it is a minimax estimator}. We have 2 theorems that use Bayes procedures as theoretical tools for finding minimax procedures.
\begin{theorem_exam}{Minimax Estimator}{}Suppose that for $k=1,2,...$, $d_k(.)$ is the Bayes procedures with respect to a \textbf{proper prior} $w_k(.)$ on $\Theta$ and the loss function $L(.|\theta).$ If the procedure $\tilde{d}(.)$ is such that
$$
\max_{\theta \in \Theta}E_{\theta}[L(\tilde{d}(\tilde{x})|\theta)] \leq \lim_{k \rightarrow \infty}\int E_{\theta}[L(d_k(\tilde{x})|\theta)]w_k(\theta)d\theta
$$
$$
= \lim_{k \rightarrow \infty}B_{w_{k}}(d_k(.))
$$
then $\tilde{d}(.)$ is a minimax estimator over $\Theta$ for $L(.|\theta).$
\end{theorem_exam}
\begin{theorem_exam}{Hodges and Lehmann}{} Suppose d(.) is a Bayes procedure with respect to $L(.|\theta)$ and a \textbf{proper prior} $w(.)$ on $\Theta.$ Let $\Theta_0$ denote the support of $w(.).$ Suppose the following conditions hold
\begin{enumerate}
\item $\mathbb{E}_{\theta}[L(d(\tilde{x})|\theta)] = c$ for $\theta \in \Theta_0$
\item $\mathbb{E}_{\theta}[L(d(\tilde{x})|\theta)] \leq c$ for $\theta \in \Theta$
\end{enumerate}
then $d(.)$ is minimax over $\Theta$ for $L(.|\theta).$
\end{theorem_exam}
\begin{remark}The reason we have $w_k$ now is that our prior $w(.)$ may be an improper prior or a flat prior; hence $w_k(.)$ is a sequence of proper priors that converges to this prior. This allows us to make use of the 2 theorems above.
\end{remark}
\begin{remark}A Bayes estimator (with respect to a proper prior) with constant risk is a minimax estimator.
\end{remark}
\begin{proposition_exam}{Showing a Bayes procedure is minimax}{}Given a Bayes procedure with a proper prior, if we can show that it has a constant risk, then it is minimax.
\end{proposition_exam}
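A standard illustration of this proposition: for $X \sim \text{Bin}(n, p)$ under squared error loss, the estimator $d(X) = (X + \sqrt{n}/2)/(n + \sqrt{n})$ is the Bayes rule (posterior mean) under a Beta$(\sqrt{n}/2, \sqrt{n}/2)$ prior and has constant risk $1/(4(\sqrt{n}+1)^2)$, hence it is minimax. The sketch below (Python; $n = 25$ is an illustrative choice) checks the constant risk numerically:
\begin{verbatim}
import numpy as np
from scipy.stats import binom

n = 25
x = np.arange(n + 1)
d = (x + np.sqrt(n) / 2) / (n + np.sqrt(n))     # candidate minimax estimator
for p in np.linspace(0.05, 0.95, 7):
    risk = np.sum(binom.pmf(x, n, p) * (d - p) ** 2)
    print(round(p, 2), risk)                    # constant: 1 / (4 * (sqrt(25) + 1)^2)
\end{verbatim}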
\lecture{33}{Decision Theory Recap}
\section{Statistical Decision Theory}
\subsection{Decision Theory Recap}
In regression analysis, we have the covariates $X_1,...,X_p$ and the best least squares prediction is given by the conditional mean $d(x_1,...,x_p) = E[Y|X_1 = x_1,...,X_p=x_p].$
The Bayesian approach is to include a prior distribution $w(\theta)$ and make inferences from the posterior distribution
$$
f(\theta|\tilde{x}) = \frac{w(\theta)f(\tilde{x}|\theta)}{m(\tilde{x})} \propto w(\theta)f(\tilde{x}|\theta)
$$
where $f(\tilde{x}|\theta)$ is the likelihood function and $m(\tilde{x}) = \int_{\Theta}w(\theta)f(\tilde{x}|\theta)d\theta$ is the marginal likelihood.
$w(\theta)$ represents the beliefs about $\theta$ before observing $\tilde{X}$. $f(\theta|\tilde{x})$ represents beliefs about $\theta$ after observing $\tilde{X}.$
\begin{definition_exam}{Posterior Expected Loss}{} We define the posterior expected loss as
$$
\int_{\Theta}L(d(\tilde{x})|\theta)f(\theta|\tilde{x})d\theta.
$$
\end{definition_exam}
\begin{remark}Here, we take a single draw from the posterior and predict what it is.
\end{remark}
\begin{lemma}The Bayes decision rule minimises the posterior expected loss.
\end{lemma}
The Bayes decision rule that minimises the posterior expected loss has the same form as the solution of a simple prediction problem for a single draw from the posterior distribution $\theta|X.$
\begin{definition_exam}{Limiting Risk}{} We define the limiting risk as the "long term risk" of an estimator
$$
\lim_{n \rightarrow \infty}nR(\theta|d).
$$
\end{definition_exam}
\begin{theorem_exam}{MLE under regularity}{}For models that satisfy regularity conditions, MLE and Bayes estimators with reasonable priors $w(.)$ have the same large sample performance.
\end{theorem_exam}
\lecture{34}{Non-regular Bayes estimation}
\section{Statistical Decision Theory}
\subsection{Non-regular Bayes estimation}
As stated in the last section, if a model is regular, then the MLE and Bayes estimators with reasonable priors will have similar limiting rescaled risk. We will now look at what happens when a model is \textbf{not regular}.
\begin{definition}(Pareto Distribution). The CDF of the Pareto $(\gamma, m)$ distribution with shape parameter $\gamma > 0$ and scale $m > 0$ is
$$
F(y;\gamma, m) =
\begin{cases}
0 \quad y < m\\
1 - (\frac{m}{y})^{\gamma} \quad y \geq m.
\end{cases}
$$
The corresponding PDF is
$$
f(y;\gamma, m) =
\begin{cases}
0 \quad y < m\\
\frac{\gamma m^{\gamma}}{y^{\gamma + 1}} \quad y \geq m.
\end{cases}
$$
\end{definition}
\begin{theorem}The Pareto distribution is heavy-tailed, and the k-th moment exists only if $\gamma$ is sufficiently large
$$
\mathbb{E}[Y^k] =
\begin{cases}
m^k\frac{\gamma}{\gamma - k} \quad k < \gamma \\
\infty \quad k \geq \gamma.
\end{cases}
$$
\end{theorem}
\begin{theorem}The mean of the Pareto distribution is
$$
\mathbb{E}[Y] = \frac{m\gamma}{\gamma - 1} \quad \gamma > 1.
$$
\end{theorem}
We can now look at our irregular model. Suppose $X_1,...,X_n$ are iid $U(0,\theta)$ random variables where $\theta \in (0,\infty)$ is unknown. For the squared error loss, we wish to determine the limiting risk $\lim_{n \rightarrow \infty}n^2R(\theta|d)$ for the MLE $d(\tilde{X}) = \hat{\theta}_{MLE}$ and for the Bayes estimator with respect to the flat prior $w(\theta) = 1$, which we denote by $\hat{\theta}_{flat}.$\\
\begin{lemma}The MLE is the sample maximum
$$
\hat{\theta}_{MLE} = X_{(n)}.
$$
\end{lemma}
\begin{lemma}With a flat prior and the uniform likelihood, the posterior distribution of $\theta$ is the Pareto$(n-1, X_{(n)})$ distribution with density
$$
f(\theta|\tilde{X}) = \frac{(n-1)X_{(n)}^{n-1}}{\theta^n}1\{\theta \geq X_{(n)}\}.
$$
\end{lemma}
\begin{lemma}Under squared error loss, the Bayes estimator $d_{flat}$ is the posterior mean, which in this case is
$$
\hat{\theta}_{flat} = \frac{(n - 1)X_{(n)}}{n - 2} \quad n > 2.
$$
\end{lemma}
\begin{lemma}The risk is the MSE under squared error loss.
\end{lemma}
\begin{lemma}The limiting rescaled risk of the MLE $\hat{\theta}_{MLE}$ is
$$
\lim_{n \rightarrow \infty}n^2R(\hat{\theta}_{MLE}|\theta) = \theta^2 + \theta^2 = 2\theta^2
$$
where the two terms are the limits of the rescaled squared bias and the rescaled variance respectively.
\end{lemma}
\begin{lemma}The limiting rescaled risk of the Bayes estimator with a flat prior is
$$
\lim_{n \rightarrow \infty}n^2R(\hat{\theta}_{flat}|\theta) = 0 + \theta^2 = \theta^2.
$$
\end{lemma}
\begin{remark}Hence, we see that $\hat{\theta}_{flat}$ dominates $\hat{\theta}_{MLE}$ for large n.
\end{remark}
$U(0,\theta)$ is not a regular model because the support of the data depends on $\theta$ (the parameter is the endpoint of the support). For such models, the MLE does not do well.
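A short simulation sketch of this comparison (Python; $\theta = 2$, $n = 50$ and the number of replications are illustrative choices) estimates the rescaled risks $n^2 \times$ MSE of the two estimators, which should be roughly $2\theta^2$ and $\theta^2$ respectively:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
theta, n, n_sims = 2.0, 50, 20000

se_mle, se_flat = 0.0, 0.0
for _ in range(n_sims):
    x = rng.uniform(0.0, theta, size=n)
    m = x.max()                        # MLE: the sample maximum
    flat = (n - 1) * m / (n - 2)       # posterior mean under the flat prior
    se_mle += (m - theta) ** 2
    se_flat += (flat - theta) ** 2

print(n**2 * se_mle / n_sims, n**2 * se_flat / n_sims)
\end{verbatim}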
\lecture{35}{Asymptotically Minimax Estimator}
\section{Asymptotically Minimax Procedures}
\subsection{Asymptotically Minimax Estimator}
Minimax estimators that are global (over the entire parameter space) are quite rare.
\begin{theorem}For a large sample size, the MLE will have a risk smaller than the risk of the minimax estimator, except on a small subset of $\Theta.$
\end{theorem}
It is often impossible to find exact minimax estimators. Hence, we relax our requirements and instead focus on minimising either the \textbf{maximum subset risk} or the \textbf{limiting maximum risk}.
\begin{definition}(Max subset risk). The maximum subset risk for a decision d over an interval $[a,b] \subset \Theta$ is
$$
\max_{a \leq \theta \leq b}R(\theta|d).
$$
\end{definition}
\begin{definition_exam}{Limiting (rescaled) maximum risk}{} The limiting (rescaled) maximum risk for a sequence of decisions $\{d_n\}_{n \geq 1}$ over an interval $[a,b] \subset \Theta$ is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|d_n).
$$
\end{definition_exam}
\begin{definition_exam}{Asymptotically Minimax Estimators}{} An estimator $d(\tilde{x})$ which minimises the limiting (rescaled) maximum risk $\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R_n(\theta|d)$ over all choices $d(.)$ is called an \textbf{asymptotically minimax estimator}.
\end{definition_exam}
For a sequence of decisions $d_n$, to show that it is asymptotically minimax, we have two steps.\\
1) First, we determine a lower bound to
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R_n(\theta|d)
$$
for \textbf{any} procedure $d(.).$ Note that this is remarkable as we do not require the estimator to be unbiased or regular. The drawback though is that this gives us only a limiting result.\\
2) We then show that the procedure attains this lower bound.\\
\lecture{36}{Hodges' estimator and Superefficiency}
\section{Asymptotically Minimax Procedures}
\subsection{Hodges' estimator and Superefficiency}
In this section, we show that asymptotically minimum variance unbiased (AMVU) estimators, which are asymptotically normal with asymptotic mean $\theta$ and asymptotic variance $\frac{1}{n}$, are not actually asymptotically efficient. That is, it is possible to construct estimators that do better than an AMVU estimator in terms of the pointwise limiting (rescaled) risk. In particular, these estimators only outperform AMVU estimators at isolated points.
\begin{definition_exam}{Superefficient}{} A superefficient estimator is an estimator that attains an asymptotic variance smaller than that of regular efficient (AMVU) estimators at one or more points.
\end{definition_exam}
Suppose we have $X_1,...,X_n$ iid $N(\theta,1)$ for some unknown $\theta.$
\begin{definition_exam}{Hodges’ Estimator}{} Suppose $\hat{\theta}_n$ is a consistent estimator for a parameter $\theta$ that converges to an asymptotic distribution $L_{\theta}$, where $L_{\theta}$ is a normal distribution with mean zero and variance depending on $\theta$ at the $\sqrt{n}$ rate
$$
\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d}L_{\theta}.
$$
Then, the Hodges’ estimator $\hat{\theta}_n^{H}$ is defined as
$$
\hat{\theta}_n^{H} =
\begin{cases}
\hat{\theta}_n \quad \text{if } |\hat{\theta}_n| \geq n^{-1/4}\\
\\
0 \quad \text{if } |\hat{\theta}_n| < n^{-1/4}.\\
\end{cases}
$$
\end{definition_exam}
\begin{theorem}The Hodges’ estimator $\hat{\theta}_n^{H}$ is equal to $\hat{\theta}_n$ whenever $|\hat{\theta}_n| \geq n^{-1/4}$, and is equal to 0 otherwise. The Hodges' estimator is superefficient as it surpasses the asymptotic behaviour of the efficient estimator $\hat{\theta}_n$ at the single point $\theta = 0.$
\end{theorem}
\begin{lemma}The Hodges’ estimator $\hat{\theta}_n^{H}$ is consistent for $\theta$ and its asymptotic distribution is
$$
\begin{cases}
\sqrt{n}(\hat{\theta}_n^{H} - \theta) \xrightarrow{d} L_{\theta} \quad \theta \neq 0\\\\
n^{\alpha}(\hat{\theta}_n^{H} - \theta) \xrightarrow{d} 0 \quad \theta = 0, \forall \alpha \in \mathbb{R}.
\end{cases}
$$
That is, the estimator has the same asymptotic distribution as $\hat{\theta}_n$ for all $\theta \neq 0$ whereas for $\theta = 0$, the rate of convergence becomes arbitrarily fast.
\end{lemma}
\begin{theorem}Hodges’ estimator improves upon a regular estimator at a single point. In general, any superefficient estimator may surpass a regular estimator at most on a set of Lebesgue measure zero.
\end{theorem}
We now analyse the performance of Hodges’ estimator against the AMVU estimator $\overline{X}$ using the criteria of limiting rescaled risk.
\begin{theorem} Let $d_1(\tilde{X}) = \overline{X}.$ The risk function is $R(\theta|d_1) = Var_{\theta}(\overline{X}) = \frac{1}{n}.$ Then, the rescaled risk is constant, not depending on n or $\theta.$ The limiting rescaled risk of $d_1$ is
$$
\lim_{n \rightarrow \infty}nR(\theta|d_1) = 1.
$$
\end{theorem}
\begin{theorem}Let $d_2(\tilde{X})$ be Hodges’ estimator $\hat{\theta}_n^{H}.$ The limiting rescaled risk is
$$
\lim_{n \rightarrow \infty}nR(\theta|d_2) =
\begin{cases}
1 \quad \theta \neq 0\\
\\
0 \quad \theta = 0.
\end{cases}
$$
\end{theorem}
Here, Hodges' estimator seems to perform uniformly better than our AMVU estimator: it does just as well at every point $\theta \neq 0$ and does better at the point $\theta = 0$. However, if we now look at the limiting maximum rescaled risk over an interval, we get a different story.
\begin{theorem}Let $d_1(\tilde{X}) = \overline{X}.$ The limiting maximum rescaled risk is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_1) = 1
$$
as $\max_{a \leq \theta \leq b}nR(\theta|d_1)$ does not depend on n.
\end{theorem}
\begin{theorem}Let $d_2(\tilde{X})$ be Hodges’ estimator $\hat{\theta}_n^{H}.$ The limiting maximum rescaled risk is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_2) =
\begin{cases}
\infty \quad a \leq 0 \leq b \\
\\
1 \quad \text{otherwise.}
\end{cases}
$$
\end{theorem}
Hence, for points very near to but not exactly zero, Hodges’ estimator performs poorly.
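A simulation sketch of this phenomenon (Python; the sample size $n = 400$ and the probe points are illustrative) estimates the rescaled risk $nR(\theta|d_2)$ of Hodges' estimator built from $\overline{X}$: it is near 0 at $\theta = 0$, near 1 far from 0, but blows up at points of order $n^{-1/4}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n, n_sims = 400, 10000

def rescaled_risk(theta):
    x_bar = rng.normal(loc=theta, scale=1.0, size=(n_sims, n)).mean(axis=1)
    hodges = np.where(np.abs(x_bar) >= n ** (-0.25), x_bar, 0.0)  # threshold n^{-1/4}
    return n * np.mean((hodges - theta) ** 2)

for theta in [0.0, n ** (-0.25), 0.5]:
    print(theta, rescaled_risk(theta))
\end{verbatim}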
\lecture{37}{Interchanging limit and maximum}
\section{Asymptotically Minimax Procedures}
\subsection{Interchanging limit and maximum}
The takeaway from this section is that limiting rescaled risks may not always be the best criterion to use to evaluate estimators. The \textbf{limiting maximum rescaled risk} may be better in some circumstances. Furthermore, it is not always possible to swap the order of operation of taking limits and taking maxima.\\
We recall some facts from analysis.
\begin{definition}(Supremum norm). Let $f: D \rightarrow \mathbb{K}^N$ be a function. We define its supremum norm by
$$
||f||_{\infty,D} = \sup_{x \in D}||f(x)||.
$$
\end{definition}
\begin{corollary}f is a bounded function if and only if $||f||_{\infty,D} < \infty.$
\end{corollary}
\begin{lemma}Let $f: D \rightarrow \mathbb{K}^N$ be a continuous function. If D is a compact subset of $\mathbb{K}^d$, then f is bounded. That is, $||f||_{\infty,D} < \infty.$
\end{lemma}
\begin{definition}(Uniform convergence). We say that $f_n \rightarrow f$ uniformly on D if for every $\epsilon > 0$, there exists a $n_{\epsilon} \in \mathbb{N}$ such that
$$
||f_n(x) - f(x)|| < \epsilon
$$
for all $n > n_{\epsilon}$ and all $x \in D.$ We say that $f_n(x) \rightarrow f(x)$ uniformly with respect to $x \in D.$
\end{definition}
\begin{proposition_exam}{Uniform Convergence of Functions}{}Let $f_n: D \rightarrow \mathbb{K}^N$ be a sequence of functions. Then, we have that $f_n \rightarrow f$ uniformly on the domain D if and only if
$$
||f_n - f||_{\infty,D} \rightarrow 0
$$
as $n \rightarrow \infty.$
\end{proposition_exam}
\subsection{Poisson Interval Estimation}
We describe the setup for the Poisson interval estimation problem. Suppose $\tilde{x} = (x_1,...,x_n)$ consists of iid Poisson$(\theta)$ r.v's with
$$
P_{\theta}(X_1=x) = \frac{e^{-\theta}\theta^x}{x!}
$$
for $x = 0,1,...$ and some unknown parameter $\theta \in \Theta = (0,\infty).$\\
Consider the decision problem of producing an interval estimate of $\theta$ of width $\frac{2C}{\sqrt{n}}$ where the decision space $D = \Theta = (0,\infty)$ and the loss function is $1\{|d-\theta| > \frac{C}{\sqrt{n}}\}$ for some known $C > 0.$\\
We define the decision $d(\tilde{x}) = \overline{X}.$
\begin{proposition}The risk for the Poisson interval estimate is
$$
R(\theta|\overline{X}) = P_{\theta}(\theta < \overline{X} - \frac{C}{\sqrt{n}}) + P_{\theta}(\theta > \overline{X} + \frac{C}{\sqrt{n}}).
$$
\end{proposition}
Let $T = n\overline{X}$ and recall that T is asymptotically normal, that is
$$
Z_n = \frac{T - n\theta}{\sqrt{n\theta}} \xrightarrow{d} \mathcal{N}(0,1).
$$
\begin{lemma}For any $z_n \rightarrow z$, we have that
$$
P_{\theta}(Z_n \leq z_n) \rightarrow \Phi(z)
$$
where $\Phi(.)$ is the $\mathcal{N}(0,1)$ CDF.
\end{lemma}
\begin{theorem}For a fixed $\theta$, we have that the limiting risk for the Poisson interval estimation is
$$
\lim_{n \rightarrow \infty}R(\theta|\overline{X}) = 2[1 - \Phi(\frac{C}{\sqrt{\theta}})].
$$
\end{theorem}
We are interested in computing $\lim_{n\rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X})$. We might hope to compute this by interchanging the operations and analysing $\max_{a \leq \theta \leq b}\lim_{n\rightarrow \infty}R(\theta|\overline{X}).$
\begin{lemma}We have that
$$
\max_{a \leq \theta \leq b}\lim_{n\rightarrow \infty}R(\theta|\overline{X}) = 2[1 - \Phi(\frac{C}{\sqrt{b}})]
$$
where b is the right endpoint of the interval over which we are maximising $\theta$.
\end{lemma}
\begin{lemma}We can upper bound the limiting maximum risk by the following
$$
\lim_{n\rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X}) \leq 2[1 - \Phi(\frac{C}{\sqrt{b}})]
$$
where $b$ is the right endpoint of the interval over which $\theta$ is maximised.
\end{lemma}
\begin{corollary}If we can show that $2[1 - \Phi(\frac{C}{\sqrt{b}})]$ is also a lower bound, then we have that
$$
\lim_{n\rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X}) = 2[1 - \Phi(\frac{C}{\sqrt{b}})] = \max_{a \leq \theta \leq b}\lim_{n\rightarrow \infty}R(\theta|\overline{X}).
$$
That is, we can interchange the operation of taking limits and taking maximum.
\end{corollary}
\lecture{38}{Asymptotic Minimax Lower Bound}
\section{Asymptotically Minimax Procedures}
\subsection{Asymptotic Minimax Lower Bound}
We now use the pointwise limiting rescaled risk of certain Bayes procedures to provide a lower bound to the limiting maximum rescaled risk of \textbf{any estimator}. That is, the bound applies to any estimator whereas previously, the Cramer-Rao lower bound only applied to unbiased estimators in regular models.
\begin{theorem_exam}{Asymptotic Minimax Lower Bound theorem}{} Suppose that for a sequence $\{L_n(.|\theta)\}$ of loss functions and any $\theta_0 < \theta_1$, the corresponding sequence of Bayes procedures $\{d_n(.)\}$ based on the uniform prior $U[\theta_0, \theta_1]$ over the interval $[\theta_0, \theta_1]$ is such that for each $\theta_0 < \theta < \theta_1$, we have that the limiting risk
$$
\lim_{n \rightarrow \infty}R_n(\theta|d_n) = \lim_{n \rightarrow \infty}E_{\theta}\bigg[ L_n(d_n(x)|\theta)\bigg] = S(\theta)
$$
for some continuous function $S(.).$ \\Then for \textbf{any other sequence of procedures} $\{\tilde{d}(.)\}$ and any $a < b$,
$$
\max_{a \leq \theta \leq b}S(\theta) = \max_{a \leq \theta \leq b}\lim_{n \rightarrow \infty}R_n(\theta|d_n) \leq \lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}E_{\theta}\bigg[L_n(\tilde{d}_n(x)|\theta)\bigg].
$$
\end{theorem_exam}
\begin{remark}$L_n(.)$ and hence $R_n$ absorb the rescaling n term. Furthermore, by analysing the pointwise limiting risk of certain Bayes procedures, this gives us a lower bound to the limiting maximum risk of any procedure, which is quite remarkable.
\end{remark}
\begin{remark}For any fixed a and b, we need $[\theta_0, \theta_1] \subseteq [a,b].$ However, since the asymptotically minimax property is to hold for any $a < b$ in the parameter space, we also require the Bayes procedure based on the $U[\theta_0,\theta_1]$ prior to have the desired property for any $\theta_0 < \theta_1$ in the parameter space.
\end{remark}
\begin{remark}There is nothing special about a uniform prior being used. Any other prior with bounded support would also work. However, it is easier to work with a uniform prior.
\end{remark}
The important takeaway from the above theorem is that if we have a Bayes procedure based on a uniform prior, then its maximum limiting risk is a lower bound for the limiting maximum risk of any other procedure.
\lecture{39}{Proof of Asymptotic Minimax Lower Bound theorem}
\section{Asymptotically Minimax Procedures}
\subsection{Proof of Asymptotic Minimax Lower Bound theorem}
We are interested in proving the asymptotic minimax lower bound theorem. Let us recall it first.
\begin{theorem}Suppose that for a statistical decision problem based on a sequence $\{L_n(d|\theta)\}$ of loss functions, for any $\theta_0 < \theta_1$, the sequence $\{\tilde{d}(.)\}$ of Bayes procedures based on the uniform $U[\theta_0, \theta_1]$ weight function $w(\theta) = \frac{1\{\theta_0 \leq \theta \leq \theta_1\}}{\theta_1 - \theta_0}$ satisfies, for all $\theta_0 < \theta < \theta_1$,
$$
\lim_{n \rightarrow \infty}E_{\theta}\bigg[L_n(\tilde{d}_n(X)|\theta) \bigg] = S(\theta)
$$
for a continuous function S(.). Then, for any other sequence of procedures $\{d_n(.)\}$ and any $a < b$, we have that
$$
\max_{a \leq \theta \leq b}S(\theta) \leq \lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}E_{\theta}\bigg[ L_n(d_n(X)|\theta) \bigg].
$$
\end{theorem}
\begin{lemma}For a monotone function $m(.)$, the limit
$$
\lim_{x \rightarrow \infty}m(x)
$$
always exists (possibly equal to $\pm\infty$).
\end{lemma}
\begin{lemma}For an arbitrary function $f(.)$, the new functions
$$
\overline{f}(x) = \sup_{y \geq x}f(y)
$$
and
$$
\underline{f}(x) = \inf_{y \geq x}f(y)
$$
are monotone.
\end{lemma}
\begin{corollary}As a result, we have that
$$
\lim_{x \rightarrow \infty}\overline{f}(x) = \lim_{x \rightarrow \infty}\sup_{y \geq x}f(y) = \limsup_{x \rightarrow \infty} f(x)
$$
and
$$
\lim_{x \rightarrow \infty}\underline{f}(x) = \lim_{x \rightarrow \infty}\inf_{y \geq x}f(y) = \liminf_{x \rightarrow \infty} f(x)
$$
always exist.
\end{corollary}
\begin{lemma}We have that $\liminf_{x \rightarrow \infty} f(x) = \limsup_{x \rightarrow \infty} f(x)$ if and only if $\lim_{x \rightarrow \infty}f(x)$ exists.
\end{lemma}
\begin{theorem}(Fatou's Lemma). For a sequence of non-negative functions $\{f_n(.)\}$, we have that
$$
\liminf_{n \rightarrow \infty} \int f_n(x)dx \geq \int \liminf_{n \rightarrow \infty} f_n(x)dx.
$$
\end{theorem}
\begin{lemma}For any sequence of procedures $\{d_n(.)\}$ and any $a < \theta_0 < \theta_1 < b$, we have that
$$
\liminf_{n \rightarrow \infty} \max_{a \leq \theta \leq b}E_{\theta}\bigg[L_n(d_n(X)|\theta) \bigg] \geq \frac{1}{\theta_1 - \theta_0}\int_{\theta_{0}}^{\theta_{1}}S(\theta)d\theta
$$
where the weight function is the uniform density over the interval $[\theta_0, \theta_1].$
\end{lemma}
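A sketch of why this holds, using the Bayes property of $\tilde{d}_n$ and Fatou's lemma: since $[\theta_0,\theta_1] \subseteq [a,b]$, the maximum risk dominates the average risk under the uniform weight, and the Bayes procedure minimises that average, so
$$
\max_{a \leq \theta \leq b}E_{\theta}\bigg[L_n(d_n(X)|\theta) \bigg] \geq \frac{1}{\theta_1 - \theta_0}\int_{\theta_0}^{\theta_1}E_{\theta}\bigg[L_n(d_n(X)|\theta) \bigg]d\theta \geq \frac{1}{\theta_1 - \theta_0}\int_{\theta_0}^{\theta_1}E_{\theta}\bigg[L_n(\tilde{d}_n(X)|\theta) \bigg]d\theta.
$$
Taking $\liminf$ on both sides and applying Fatou's lemma to the non-negative integrands then gives
$$
\liminf_{n \rightarrow \infty}\max_{a \leq \theta \leq b}E_{\theta}\bigg[L_n(d_n(X)|\theta) \bigg] \geq \frac{1}{\theta_1 - \theta_0}\int_{\theta_0}^{\theta_1}\liminf_{n \rightarrow \infty}E_{\theta}\bigg[L_n(\tilde{d}_n(X)|\theta) \bigg]d\theta = \frac{1}{\theta_1 - \theta_0}\int_{\theta_0}^{\theta_1}S(\theta)d\theta.
$$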
\begin{lemma}Under the conditions, we have that for any $a < b$
$$
\liminf_{n \rightarrow \infty} \max_{a \leq \theta \leq b}E_{\theta}\bigg[L_n(d_n(X)|\theta) \bigg] \geq \max_{a \leq \theta \leq b}S(\theta).
$$
\end{lemma}
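A sketch of how this follows from the previous lemma: by continuity, $S$ attains its maximum on $[a,b]$ at some $\theta^*$, and for any $\epsilon > 0$ we can choose $a < \theta_0 < \theta_1 < b$ close enough to $\theta^*$ that $S(\theta) \geq \max_{a \leq \theta \leq b}S(\theta) - \epsilon$ for all $\theta \in [\theta_0,\theta_1]$. Then
$$
\frac{1}{\theta_1 - \theta_0}\int_{\theta_0}^{\theta_1}S(\theta)d\theta \geq \max_{a \leq \theta \leq b}S(\theta) - \epsilon,
$$
and letting $\epsilon \rightarrow 0$ gives the stated bound.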
\lecture{40}{Interval Estimation of a normal mean parameter}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Introduction to interval estimation}
Suppose our loss function is the interval non-coverage loss. The interval is the level set of the posterior of the desired width $2C_n$, that is, the set of $\theta$ values where the posterior density $p(\theta|X)$ is above a certain level $\ell$
$$
\{\theta: p(\theta|X) \geq \ell\}.
$$
Choosing different levels $\ell$ gives intervals of different widths. The trick is then to choose the level that gives an interval of width $2C_n.$ We have two scenarios.
\begin{enumerate}
\item If the posterior density is \textbf{unimodal}, first increasing then decreasing, then the interval is $d \pm C_n$, where we solve for $d$ in the equation
$$
p(d - C_n|X) = p(d + C_n|X)
$$
\item If the posterior density is \textbf{strictly decreasing} over some range $[a, b)$, then so long as $a + 2C_n < b$, the level set is then simply
$$
[a, a + 2C_n].
$$
\end{enumerate}
Hence, we are looking for the interval of width $2C_n$ with the highest probability under the posterior distribution.
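For instance, if the posterior happens to be a symmetric unimodal density such as $\mathcal{N}(m, \sigma^2)$ (here $m$ and $\sigma^2$ are just placeholder posterior parameters), the equation $p(d - C_n|X) = p(d + C_n|X)$ reads
$$
(d - C_n - m)^2 = (d + C_n - m)^2 \implies d = m,
$$
so the highest-posterior interval of width $2C_n$ is simply $m \pm C_n$, centred at the posterior mode.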
\subsection{Examples with non-coverage loss: Poisson Interval Estimation Revisited}
Recall that we have shown that for the procedure $\overline{X}$,
$$
\lim_{n\rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X}) \leq \max_{a \leq \theta \leq b}\lim_{n\rightarrow \infty}R(\theta|\overline{X}) = 2[1 - \Phi(\frac{C}{\sqrt{b}})]
$$
and that we now want to show that the maximum of the limiting risk is also a lower bound. We can use the asymptotic minimax lower bound theorem to help us show this.\\
First, recall that
$$
\lim_{n \rightarrow \infty}R(\theta|\tilde{d}_n) = 2 [1 - \Phi(\frac{C}{\sqrt{\theta}})] = S(\theta).
$$
We use the uniform prior $w(\theta) = \frac{1\{\theta_0 \leq \theta \leq \theta_1\}}{\theta_1 - \theta_0}.$ We have that the product of the prior and likelihood gives us
$$
w(\theta)f_{\theta}(\tilde{x}) = Const. \frac{\theta^{(T+1)-1}e^{-n\theta}1\{\theta_0 \leq \theta \leq \theta_1\}}{\int_{\theta_0}^{\theta_1}\theta^{(T+1)-1}e^{-n\theta}d\theta}
$$
where $T = \sum_{i=1}^{n}X_i.$ The posterior density is a \textbf{truncated gamma density}. That is, it is the Gamma distribution with shape T+1 and rate n where we restrict it to the interval $[\theta_0,\theta_1].$ As the loss function is the interval non-coverage loss, the Bayes procedure is the level set of the posterior of width $\frac{2C}{\sqrt{n}}.$ Hence, $\tilde{d}_n(\tilde{x}) = \overline{X}.$\\
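Heuristically, this is consistent with the shape of the posterior: ignoring the truncation, the Gamma$(T+1, n)$ density has mean $\frac{T+1}{n} = \overline{X} + \frac{1}{n}$ and variance $\frac{T+1}{n^2} \approx \frac{\overline{X}}{n}$, so it concentrates around $\overline{X}$ and becomes approximately symmetric there for large $n$; the width-$\frac{2C}{\sqrt{n}}$ level set is then centred, up to lower-order terms, at $\overline{X}$.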
We can now use the asymptotic minimax lower bound theorem.
\begin{theorem}For any other procedure $\{d_n\}$ and any $a < b$
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|d_n(.)) \geq \max_{a \leq \theta \leq b}S(\theta)
$$
$$
= \max_{a \leq \theta \leq b}2[1 - \Phi(\frac{C}{\sqrt{\theta}})]
$$
$$
= 2[1 - \Phi(\frac{C}{\sqrt{b}})].
$$
\end{theorem}
Hence, applying the above with $d_n = \overline{X}$, we get that
$$
2[1 - \Phi(\frac{C}{\sqrt{b}})] \leq \lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X}).
$$
As a result, we finally get the result that
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\overline{X}) = 2[1 - \Phi(\frac{C}{\sqrt{b}})] = \max_{a \leq \theta \leq b}\lim_{n \rightarrow \infty}R(\theta|\overline{X})
$$
that is, we can interchange the operation of taking limits and maximum. Therefore, the procedure $\overline{X} \pm \frac{C}{\sqrt{n}}$ is asymptotically minimax over any interval for Poisson interval estimation.
\lecture{41}{Interval Estimation of a normal mean parameter}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with non-coverage loss: Interval Estimation of a normal mean parameter}
The limiting risk of Bayes procedures $\tilde{d}(X)$ using a uniform prior can be derived by first deriving the limiting risk of $d_{flat}(X)$, the Bayes procedure using the flat prior $w(\theta) = 1$ and then showing that with probability tending to 1, $\tilde{d}(X) = d_{flat}(X).$
Suppose $\tilde{X}$ consists of iid $N(\theta,1)$ r.v's for some unknown $\theta \in \Theta = \mathbb{R}.$ Consider the decision problem with $D = \Theta = \mathbb{R}$ and non-coverage loss function $L(d|\theta) = 1\{|d - \theta| > \frac{C}{\sqrt{n}}\}$ for some given $C > 0.$\\
We first derive the limiting risk of the Bayes estimator $\tilde{d}(\tilde{X})$ based on a uniform prior $U[\theta_0,\theta_1].$
\begin{lemma}The posterior density of the Bayes estimator $\tilde{d}$ using a uniform prior $U[\theta_0, \theta_1]$ is
$$
w(\theta)f_{\theta}(\tilde{X}) = Const.\frac{1\{\theta_0 \leq \theta \leq \theta_1\}e^{-\frac{n}{2}(\theta - \overline{X})^2}}{\int_{\theta_{0}}^{\theta_{1}}e^{-\frac{n}{2}(\theta - \overline{X})^2}d\theta}.
$$
In particular, the posterior density is a truncated normal density where we restrict the normal $N(\overline{X}, \frac{1}{n})$ density to the interval $[\theta_0, \theta_1].$
\end{lemma}
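The form of this posterior comes from the usual completion-of-squares step for the normal likelihood:
$$
\sum_{i=1}^{n}(X_i - \theta)^2 = \sum_{i=1}^{n}(X_i - \overline{X})^2 + n(\overline{X} - \theta)^2,
$$
so that $f_{\theta}(\tilde{X}) \propto e^{-\frac{n}{2}(\theta - \overline{X})^2}$ as a function of $\theta$, and multiplying by the $U[\theta_0,\theta_1]$ prior simply truncates this normal shape to $[\theta_0,\theta_1]$.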
\begin{lemma}The Bayes estimator $\tilde{d}(\tilde{X})$ is the midpoint of the level set of the truncated normal density of width $\frac{2C}{\sqrt{n}}.$ That is, $\tilde{d}(\tilde{X}) = \overline{X}$ for when
$$
\theta_0 + \frac{C}{\sqrt{n}} < \overline{X} < \theta_1 - \frac{C}{\sqrt{n}}.
$$
\end{lemma}
\begin{definition}(Event of non-coverage). Let us denote $B_n$ to be the event of non-coverage of the interval
$$
B_n = \{\theta < \overline{X} - \frac{C}{\sqrt{n}}\} \cup \{\theta > \overline{X} + \frac{C}{\sqrt{n}}\}.
$$
\end{definition}
\begin{theorem}We have that the limiting risk of the Bayes estimator $\tilde{d}$ using a uniform prior $U[\theta_0, \theta_1]$ is
$$
\lim_{n \rightarrow \infty}R(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}P_{\theta}(B_n) = 2\bigg[1 - \Phi(C) \bigg] = S(\theta).
$$
\end{theorem}
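To see where the value $2[1 - \Phi(C)]$ comes from, note that $B_n$ is defined in terms of $\overline{X}$, and since $\sqrt{n}(\overline{X} - \theta) \sim \mathcal{N}(0,1)$ under $P_{\theta}$,
$$
P_{\theta}(B_n) = P_{\theta}\bigg(|\overline{X} - \theta| > \frac{C}{\sqrt{n}}\bigg) = P\big(|Z| > C\big) = 2\big[1 - \Phi(C)\big], \qquad Z \sim \mathcal{N}(0,1),
$$
exactly, for every $n$; the risk of $\tilde{d}$ agrees with this in the limit because $\tilde{d} = \overline{X}$ with probability tending to one.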
\begin{corollary}For any procedure $\{d_n(.)\}$, and any $a < b$, we have that
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|d_n) \geq \max_{a \leq \theta \leq b}S(\theta) = 2\bigg[1 - \Phi(C) \bigg].
$$
\end{corollary}
Now, to show that $\overline{X} = \hat{\theta}_n$ is asymptotically minimax, we need to show that
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\hat{\theta}_n) \leq \max_{a \leq \theta \leq b}S(\theta) = 2\bigg[1 - \Phi(C) \bigg].
$$
However, we have just shown that $\overline{X}$ has a limiting maximum rescaled risk that is exactly equal to the lower bound. Hence, $\overline{X}$ is asymptotically minimax.\\
We now want to show that a Bayes estimator with a conjugate normal prior $w(\theta) = \frac{1}{\sigma_0\sqrt{2\pi}}e^{-\frac{1}{2\sigma_0^2}(\theta - \mu_0)^2}$ ($\mathcal{N}(\mu_0, \sigma_{0}^{2})$ density) is asymptotically minimax.
\begin{lemma}The posterior distribution of the Bayes estimator with conjugate normal prior $\mathcal{N}(\mu_0, \sigma_{0}^{2})$ is also a normal distribution
$$
\mathcal{N}\bigg( \big(\frac{1}{1+n\sigma_{0}^{2}}\big)\mu_0 + \big(\frac{n\sigma_{0}^{2}}{1 + n\sigma_{0}^{2}}\big)\overline{X}, \frac{\sigma_{0}^{2}}{1 + n\sigma_{0}^{2}} \bigg).
$$
\end{lemma}
\begin{lemma}Under the interval non-coverage loss, the Bayes interval is centred at the centre of symmetry of the posterior distribution, hence it is
$$
\hat{\theta}_{conj} \pm \frac{C}{\sqrt{n}}
$$
where $\hat{\theta}_{conj} = \big(\frac{1}{1+n\sigma_{0}^{2}}\big)\mu_0 + \big(\frac{n\sigma_{0}^{2}}{1 + n\sigma_{0}^{2}}\big)\overline{X}.$
\end{lemma}
\begin{lemma}The exact risk for the Bayes estimator $\hat{\theta}_{conj}$ is
$$
R(\theta|\hat{\theta}_{conj}) = P_{\theta}\bigg(\theta < \hat{\theta}_{conj} - \frac{C}{\sqrt{n}} \bigg) + P_{\theta}\bigg(\theta > \hat{\theta}_{conj} + \frac{C}{\sqrt{n}} \bigg)
$$
\end{lemma}
We analyse the probabilities of our interval being too high or too low in separate cases.
\begin{lemma}We have that the probability of our interval overestimating to be
$$
P_{\theta}\bigg(\theta < \hat{\theta}_{conj} - \frac{C}{\sqrt{n}} \bigg) = 1 - \Phi\bigg(C(1 + \frac{1}{n\sigma_0^2}) + \frac{\theta - \mu_0}{\sigma_0^2\sqrt{n}} \bigg).
$$
\end{lemma}
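A short derivation of this expression, using only that $\sqrt{n}(\overline{X}-\theta) \sim \mathcal{N}(0,1)$ under $P_{\theta}$: the event $\{\theta < \hat{\theta}_{conj} - \frac{C}{\sqrt{n}}\}$ can be rewritten as
$$
\frac{n\sigma_0^2(\overline{X} - \theta) + (\mu_0 - \theta)}{1 + n\sigma_0^2} > \frac{C}{\sqrt{n}} \iff \sqrt{n}(\overline{X} - \theta) > C\bigg(1 + \frac{1}{n\sigma_0^2}\bigg) + \frac{\theta - \mu_0}{\sigma_0^2\sqrt{n}},
$$
and the probability of the right-hand event is $1 - \Phi$ of that threshold. The underestimation probability in the next lemma follows by the symmetric argument.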
\begin{lemma}We have that the probability of our interval underestimating to be
$$
P_{\theta}\bigg(\theta > \hat{\theta}_{conj} + \frac{C}{\sqrt{n}} \bigg) = \Phi\bigg(-C(1 + \frac{1}{n\sigma_0^2}) + \frac{\theta - \mu_0}{\sigma_0^2\sqrt{n}} \bigg).
$$
\end{lemma}
\begin{lemma}Hence, for any $a < b$, we have that
$$
\max_{a \leq \theta \leq b}R(\theta|\hat{\theta}_{conj}) \leq 1 - \Phi\bigg(C(1 + \frac{1}{n\sigma_0^2}) + \frac{a - \mu_0}{\sigma_0^2\sqrt{n}} \bigg) + \Phi\bigg(-C(1 + \frac{1}{n\sigma_0^2}) + \frac{b - \mu_0}{\sigma_0^2\sqrt{n}} \bigg).
$$
\end{lemma}
\begin{corollary}We have that the limiting maximum risk for $\hat{\theta}_{conj}$ is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|\hat{\theta}_{conj}) \leq 2\bigg[1 - \Phi(C) \bigg].
$$
\end{corollary}
Hence, $\hat{\theta}_{conj}$ is asymptotically minimax.
\lecture{42}{Interval Estimation of a uniform scale parameter}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with non-coverage loss: Interval Estimation of a uniform scale parameter}
Suppose $\tilde{X} = (X_1,...,X_n)$ consists of iid $U[0,\theta]$ random variables for some unknown $\theta \in \Theta = (0,\infty).$ The maximum likelihood estimator is $X_{(n)}$, the sample maximum.
\begin{remark}
In models that do not satisfy regularity conditions, the bias now matters when analysing the limiting MSE.
\end{remark}
\begin{lemma}The posterior distribution of the Bayes procedure $d_{flat}$ using the flat prior $w(\theta) = 1$ is the Pareto distribution with shape n-1 and scale $X_{(n)}$
$$
p(\theta|\tilde{X}) = \frac{(n - 1)X_{(n)}^{n - 1}}{\theta^n}1\{\theta \geq X_{(n)}\}.
$$
\end{lemma}
\begin{lemma}Under non-coverage loss, the interval estimate with $d_{flat}$ is the level set of width $\frac{2C}{n}$, which in this case is
$$
\bigg[X_{(n)},X_{(n)} + \frac{2C}{n} \bigg].
$$
\end{lemma}
\begin{lemma}The risk function for $d_{flat}$ is
$$
R(\theta|d_{flat}) = P_{\theta}(\theta < X_{(n)}) + P_{\theta}(\theta > X_{(n)} + \frac{2C}{n})
$$
$$
= \bigg(1 - \frac{2C}{\theta n} \bigg)^n.
$$
\end{lemma}
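The equality in the second display is a one-line computation using the iid uniform CDF: $P_{\theta}(\theta < X_{(n)}) = 0$, while
$$
P_{\theta}\bigg(\theta > X_{(n)} + \frac{2C}{n}\bigg) = P_{\theta}\bigg(X_{(n)} < \theta - \frac{2C}{n}\bigg) = \prod_{i=1}^{n}P_{\theta}\bigg(X_i < \theta - \frac{2C}{n}\bigg) = \bigg(\frac{\theta - 2C/n}{\theta}\bigg)^{n} = \bigg(1 - \frac{2C}{\theta n}\bigg)^{n},
$$
provided $n$ is large enough that $\theta - \frac{2C}{n} > 0$.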
\begin{corollary}Hence, the maximum risk of $d_{flat}$ over $[a,b]$ is
$$
\max_{a \leq \theta \leq b}R(\theta|d_{flat}) = \bigg(1 - \frac{2C}{bn} \bigg)^n.
$$
\end{corollary}
\begin{lemma}The limiting risk of $d_{flat}$ is
$$
\lim_{n \rightarrow \infty}R(\theta|d_{flat}) = \lim_{n \rightarrow \infty} \bigg(1 - \frac{2C}{\theta n} \bigg)^n = e^{-\frac{2C}{\theta}}.
$$
\end{lemma}
\begin{lemma}The limiting maximum risk of $d_{flat}$ is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|d_{flat}) = e^{-\frac{2C}{b}}.
$$
\end{lemma}
We now show that $d_{flat}$ is asymptotically minimax. We derive the limiting risk of $\tilde{d}$ using the $U[\theta_0,\theta_1]$ prior.
\begin{lemma}The posterior distribution of $\tilde{d}$ using the $U[\theta_0,\theta_1]$ prior is
$$
const.\frac{1}{\theta^n}\frac{1\{\max(X_{(n)}, \theta_0) \leq \theta \leq \theta_1\}}{\int_{\max(X_{(n)},\theta_0)}^{\theta_1}\frac{1}{\theta^n}d\theta}
$$
which is the truncated version of the Pareto distribution we get using the flat prior.
\end{lemma}
\begin{lemma}As long as $\theta_0 \leq X_{(n)}$ and $X_{(n)} + \frac{2C}{n} \leq \theta_1$, the level set is
$$
\bigg[X_{(n)}, X_{(n)} + \frac{2C}{n} \bigg].
$$
\end{lemma}
\begin{lemma}We can show that
$$
P_{\theta}\bigg[\theta_0 \leq X_{(n)} \leq \theta_1 - \frac{2C}{n} \bigg] \rightarrow 1
$$
for all $\theta_0 < \theta < \theta_1.$
\end{lemma}
\begin{lemma}Let us denote $\tilde{d}_n(.)$ to be the Bayes procedure which uses the uniform prior $U[\theta_0, \theta_1]$. Then, we have that
$$
P_{\theta}\bigg[d_{flat}(\tilde{X}) = \tilde{d}_n(\tilde{X}) \bigg] \rightarrow 1
$$
for all $\theta_0 < \theta < \theta_1.$
\end{lemma}
\begin{lemma}The limiting risk of the Bayes procedure with uniform prior, via the flat prior, is
$$
\lim_{n \rightarrow \infty}R(\theta|\tilde{d}_n) = \lim_{n \rightarrow \infty}R(\theta|d_{flat}) = e^{-\frac{2C}{\theta}} = S(\theta).
$$
\end{lemma}
Hence, for any sequence of procedures $\{d_n(.)\}$,
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}R(\theta|d_n) \geq \max_{a \leq \theta \leq b}S(\theta) = e^{-\frac{2C}{b}}.
$$
Therefore, $d_{flat}$ is asymptotically minimax.
\lecture{43}{Estimating binomial proportion with known sample size}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with squared error loss: Estimating binomial proportion with known sample size}
In the case of squared error loss, it turns out that in many cases, the Bayes procedure (i.e. the posterior mean) using a uniform prior has the same limiting rescaled risk as the Bayes procedure using a flat prior.\\
Suppose $\tilde{X} = (X_1,...,X_n)$ consists of iid binomial $(1,\theta)$ random variables for $\theta \in \Theta = (0,1).$ Consider the decision problem with decision space $D = \Theta = (0,1)$ and the loss $L(d|\theta) = (d - \theta)^2.$ We want to show that the Bayes procedure with the conjugate prior $w(\theta) = \frac{\theta^{\alpha_{0}-1}(1 - \theta)^{\beta_{0}-1}}{beta(\alpha_{0},\beta_{0})}$ (beta$(\alpha_0, \beta_0)$ density) is asymptotically minimax. We assume that for any $0 < \theta_0 < \theta_1 < 1,$ the Bayes procedure using the $U[\theta_0, \theta_1]$ prior, $\tilde{d}(\tilde{X})$, is such that
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|d_{flat})
$$
where $d_{flat}$ is the Bayes procedure using the flat prior $w(\theta)= 1.$\\
First, we find the lower bound to the limiting maximum risk. Looking directly at the limiting rescaled risk of $\tilde{d}$ may be difficult; hence, we use the assumption given to us and look at the limiting rescaled risk of $d_{flat}$, which uses a flat prior.\\
\begin{lemma}The posterior distribution for $d_{flat}$ using a flat prior $w(\theta) = 1$ is
$$
p(\theta|\tilde{X}) = \frac{\theta^{(T+1)-1}(1 - \theta)^{(n - T + 1) - 1}}{beta(T+1, n-T+1)}
$$
which is the $beta(T+1,n-T+1)$ density.
\end{lemma}
\begin{corollary}Under a squared-error loss, the Bayes procedure for $d_{flat}$ is the posterior mean of the $beta(T+1,n-T+1)$ density, which is
$$
d_{flat}(\tilde{X}) = \frac{T+1}{n+2}
$$
which may be interpreted as the sample proportion we would obtain if we added 1 success and 1 failure to the data.
\end{corollary}
\begin{lemma}The risk under a squared error loss is the mean squared error, which can be decomposed into the variance and bias squared. Hence, the risk of $d_{flat}$ is
$$
R(\theta|d_{flat}) = Var_{\theta}\bigg[d_{flat}\bigg] + Bias_{\theta}\bigg[d_{flat}\bigg]^2 = \frac{n\theta(1 - \theta)}{(n+2)^2} + \bigg(\frac{1 - 2\theta}{n + 2}\bigg)^2
$$
$$
= \frac{n\theta(1 - \theta) + (1 - 2\theta)^2}{(n+2)^2}.
$$
\end{lemma}
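The two terms can be obtained directly from the moments of $T = \sum_{i=1}^{n}X_i \sim \text{binomial}(n,\theta)$:
$$
E_{\theta}[d_{flat}] = \frac{n\theta + 1}{n + 2}, \qquad Bias_{\theta}\big[d_{flat}\big] = \frac{n\theta + 1 - \theta(n+2)}{n+2} = \frac{1 - 2\theta}{n+2}, \qquad Var_{\theta}\big[d_{flat}\big] = \frac{n\theta(1-\theta)}{(n+2)^2}.
$$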
\begin{lemma}We have that the limiting rescaled risk for $d_{flat}$ is
$$
\lim_{n \rightarrow \infty}nR(\theta|d_{flat}) = \lim_{n \rightarrow \infty} n\bigg[\frac{n\theta(1 - \theta) + (1 - 2\theta)^2}{(n+2)^2} \bigg]
$$
$$
= \theta(1-\theta) = S(\theta).
$$
\end{lemma}
Hence, using our assumption, we have that
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|d_{flat}) = \theta(1 - \theta).
$$
\begin{corollary}For any other procedure $\{d_n(.)\}$, we have that
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_n) \geq \max_{a \leq \theta \leq b}S(\theta) = \max_{a \leq \theta \leq b}\theta(1 - \theta).
$$
\end{corollary}
We now look for an upper bound.
\begin{lemma}Using the $beta(\alpha_0, \beta_0)$ density as a weight function, we have that the posterior density is
$$
f_{\theta}(\tilde{X})w(\theta) = Const. \frac{\theta^{T+\alpha_0 - 1}(1 - \theta)^{n - T + \beta_0 - 1}}{beta(T + \alpha_0, n - T + \beta_0)}
$$
where the posterior density is the $beta(T + \alpha_0, n - T + \beta_0)$ density.
\end{lemma}
\begin{lemma}The Bayes procedure under the squared error loss is the posterior mean, which is,
$$
d_{conj}(\tilde{X}) = \frac{T + \alpha_0}{n + \alpha_0 + \beta_0}.
$$
\end{lemma}
\begin{lemma}The risk under the squared error loss is the MSE, hence, the risk of $d_{conj}$ is
$$
R(\theta|d_{conj}) = Var_{\theta}\bigg[d_{conj}(X) \bigg] + Bias_{\theta}\bigg[d_{conj}\bigg]^2
$$
$$
= \frac{n\theta(1 - \theta) + [(1 - \theta)\alpha_0 - \theta \beta_0]^2}{(n + \alpha_0 + \beta_0)^2}.
$$
\end{lemma}
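As above, the terms follow from the moments of $T \sim \text{binomial}(n,\theta)$:
$$
E_{\theta}[d_{conj}] = \frac{n\theta + \alpha_0}{n + \alpha_0 + \beta_0}, \qquad Bias_{\theta}\big[d_{conj}\big] = \frac{(1-\theta)\alpha_0 - \theta\beta_0}{n + \alpha_0 + \beta_0}, \qquad Var_{\theta}\big[d_{conj}\big] = \frac{n\theta(1-\theta)}{(n + \alpha_0 + \beta_0)^2}.
$$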
\begin{lemma}The limiting maximum rescaled risk of $d_{conj}$ can be bounded by
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_{conj}) \leq \max_{a \leq \theta \leq b}\theta(1 - \theta) = \max_{a \leq \theta \leq b}S(\theta).
$$
\end{lemma}
\begin{corollary}The bias is asymptotically negligible compared to the variance in the limit.
\end{corollary}
\begin{theorem_exam}{Exponential family variance vs risk}{}Exponential families have the property that the variance dominates the risk in the limit.
\end{theorem_exam}
Hence, we have that $d_{conj}(\tilde{X})$ is asymptotically minimax.
\subsection{Showing convergence of interval coverage}
\begin{theorem}
Suppose that $X_1,...,X_n$ are iid $N(\theta,1)$ random variables and $\overline{X} = \frac{1}{n}\sum_{i=1}^{n}X_i$. If $\theta_0 < \theta < \theta_1$ and $0 < C <\infty$, then
$$
P_{\theta}\bigg\{\theta_0 + \frac{C}{\sqrt{n}} < \overline{X} < \theta_1 - \frac{C}{\sqrt{n}} \bigg\} \rightarrow 1
$$
as $n \rightarrow \infty.$
\end{theorem}
\begin{remark}Hence, the probability that the estimator $\overline{X}$ lies in the interval tends to 1 as the sample size gets large.
\end{remark}
\lecture{44}{Estimating normal variance with known mean}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with squared error loss: Estimating normal variance with known mean}
Suppose $X = (X_1,...,X_n)$ consists of iid $N(0,\theta)$ random variables for some unknown $\theta \in \Theta = (0,\infty).$ We let the decision space $D = \Theta = (0,\infty)$ and loss $L(d|\theta) = (d - \theta)^2.$ We want to show that \textbf{both} the maximum-likelihood estimator and the Bayes procedure with the inverse Gamma conjugate prior $w(\theta) = \frac{\lambda_{0}^{\alpha_{0}}e^{-\lambda_0/\theta}}{\theta^{\alpha_{0}+1}\Gamma(\alpha_{0})}$ for known $\alpha_0, \lambda_0 > 0$ are asymptotically minimax.\\
We are allowed to assume that for any $0 < \theta_0 < \theta_1 < \infty$, the Bayes procedure $\tilde{d}(.)$ using the $U[\theta_0, \theta_1]$ prior, has, for all $\theta_0 < \theta < \theta_1$,
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|d_{flat})
$$
where $d_{flat}$ is the Bayes procedure using the flat prior $w(\theta) = 1.$\\
We show two steps. First, we find the lower bound to any limiting maximum rescaled risk for \textbf{any} estimator. Then, we find the upper bounds to the limiting maximum risks for our candidate estimators. If these bounds coincide, then we have shown that our candidate estimators are asymptotically minimax.\\
Instead of trying to derive the limiting maximum rescaled risk with our Bayes procedure with inverse Gamma conjugate prior $\tilde{d}$, we can use the assumption given to us that it is equivalent to the limiting maximum rescaled risk of $d_{flat}$ which uses a flat prior.
\begin{lemma}The posterior density of $d_{flat}$ is
$$
w(\theta)f_{\theta}(\tilde{X}) = \frac{1}{(2\pi \theta)^{n/2}}e^{-\frac{1}{2\theta}T}
$$
where $T = \sum_{i=1}^{n}X_i^2$ is the sufficient statistic.
\end{lemma}
\begin{corollary}The posterior density of $d_{flat}$ is the Inverse Gamma($\frac{n}{2}-1, \frac{T}{2}$) as we had that the conjugate prior of $\tilde{d}$ was an Inverse gamma.
\end{corollary}
\begin{lemma}The Bayes procedure under squared error loss is the posterior mean, hence
$$
d_{flat}(\tilde{X}) = \frac{T/2}{(n/2 - 1) - 1} = \frac{T}{n - 4}.
$$
\end{lemma}
\begin{lemma}Under the squared error loss, the risk of $d_{flat}$ is the MSE of $d_{flat}.$ Therefore,
$$
R(\theta|d_{flat}) = Var(d_{flat}) + Bias(d_{flat})^2 = \frac{2n\theta^2}{(n-4)^2} + \frac{16\theta^2}{(n-4)^2}.
$$
\end{lemma}
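Both terms come from the fact that $T/\theta \sim \chi^2_n$, so $E_{\theta}[T] = n\theta$ and $Var_{\theta}(T) = 2n\theta^2$:
$$
Var_{\theta}(d_{flat}) = \frac{2n\theta^2}{(n-4)^2}, \qquad Bias_{\theta}(d_{flat}) = \frac{n\theta}{n-4} - \theta = \frac{4\theta}{n-4}.
$$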
\begin{lemma}The limiting rescaled risk of $d_{flat}$ is
$$
\lim_{n \rightarrow \infty}nR(\theta|d_{flat}) = \lim_{n \rightarrow \infty}\{nVar_{\theta}(d_{flat})\} + \lim_{n \rightarrow \infty}\{nBias_{\theta}(d_{flat})^2\}
$$
which is equal to
$$
= 2\theta^2 \bigg[ \lim_{n \rightarrow \infty}\bigg(\frac{n}{n-4}\bigg)^2 + \lim_{n \rightarrow \infty}\frac{8n}{(n-4)^2} \bigg]
$$
$$
= 2\theta^2.
$$
\end{lemma}
\begin{corollary}Under the squared error loss, the contribution of the bias to the limiting rescaled MSE/risk is negligible compared to the variance.
\end{corollary}
Hence, using our assumption, we have that
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|d_{flat}) = 2\theta^2 = S(\theta).
$$
So, for any other sequence of estimators $\{d_n(.)\}$, we have, for any $0 < a < b < \infty$
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_n) \geq \max_{a \leq \theta \leq b}S(\theta) = 2b^2.
$$
\begin{lemma}The MLE is
$$
\hat{\theta}_{ML} = \frac{T}{n}
$$
\end{lemma}
\begin{lemma}The limiting maximum rescaled risk of the MLE is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|\hat{\theta}_{ML}) = 2b^2.
$$
\end{lemma}
This coincides with the lower bound and hence $\hat{\theta}_{ML}$ is asymptotically minimax.
\begin{lemma}For the Bayes procedure using the inverse gamma conjugate prior, the Bayes procedure is the posterior mean of the posterior inverse gamma distribution
$$
d_{conj}(\tilde{X}) = \frac{T + 2\lambda_0}{n + 2\alpha_0 - 2}.
$$
\end{lemma}
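This comes from conjugacy: multiplying the inverse gamma prior by the likelihood gives
$$
w(\theta)f_{\theta}(\tilde{X}) \propto \theta^{-(\alpha_0 + 1)}e^{-\lambda_0/\theta}\cdot\theta^{-n/2}e^{-T/(2\theta)} = \theta^{-(\alpha_0 + \frac{n}{2} + 1)}e^{-(\lambda_0 + T/2)/\theta},
$$
i.e.\ the posterior is Inverse Gamma$(\alpha_0 + \frac{n}{2}, \lambda_0 + \frac{T}{2})$, whose mean is $\frac{\lambda_0 + T/2}{\alpha_0 + n/2 - 1} = \frac{T + 2\lambda_0}{n + 2\alpha_0 - 2}$.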
\begin{lemma}The rescaled risk of $d_{conj}$ is the rescaled MSE
$$
nR(\theta|d_{conj}) = 2\theta^2\bigg(\frac{n}{n+2\alpha_0 - 2} \bigg)^2 + \frac{n\bigg(2\lambda_0 -2\theta\alpha_0 + 2\theta \bigg)^2}{(n + 2\alpha_0 - 2)^2}.
$$
Hence, the limiting maximum rescaled risk is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_{conj}) = \max_{a \leq \theta \leq b}2\theta^2 = 2b^2.
$$
\end{lemma}
Hence, $d_{conj}$ is also asymptotically minimax.
\lecture{45}{Estimating normal mean with known variance}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with absolute error loss: Estimating normal mean with known variance}
Suppose $\tilde{X} = (X_1,...,X_n)$ consists of iid $N(\theta,1)$ random variables for some unknown $\theta \in \Theta = \mathbb{R}.$ Consider the decision problem with the decision space $D = \Theta$ and loss function $L(d|\theta) = |d - \theta|.$\\
\begin{lemma}The sample mean $\overline{X} = \frac{1}{n}\sum_{i=1}^{n}X_i$ is \textbf{both} the MLE and the Bayes procedure using the flat prior $w(\theta) = 1$, where in the latter case, the posterior is the $\mathcal{N}(\overline{X}, \frac{1}{n})$ density.
\end{lemma}
\begin{corollary}Under the absolute error loss, the Bayes procedure is the posterior median which is $\overline{X}.$
\end{corollary}
\begin{lemma}The limiting maximum rescaled risk for $\overline{X}$ is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}\sqrt{n}R(\theta|\overline{X}) = \sqrt{\frac{2}{\pi}}.
$$
\end{lemma}
\begin{lemma}Let $\tilde{d}(\tilde{x})$ denote the Bayes procedure based on a $U[\theta_0, \theta_1]$ prior. This estimator has the form
$$
\tilde{d}(\tilde{x}) = \overline{X} + \frac{1}{\sqrt{n}}\Phi^{-1}\bigg(\frac{1}{2}\bigg[\Phi(\sqrt{n}(\theta_1 - \overline{X})) + \Phi(\sqrt{n}(\theta_0 - \overline{X})) \bigg] \bigg).
$$
\end{lemma}
\begin{lemma}The rescaled risk for the Bayes procedure $\tilde{d}$ is
$$
\sqrt{n}R(\theta|\tilde{d}) \rightarrow \sqrt{\frac{2}{\pi}}.
$$
\end{lemma}
\begin{corollary}Hence, for any other estimator $d_n(\tilde{X})$, we have that
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}\sqrt{n}R(\theta|d_n) \geq \sqrt{\frac{2}{\pi}}.
$$
\end{corollary}
\lecture{46}{Estimating uniform scale}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Examples with absolute error loss: Estimating uniform scale}
Suppose $\tilde{X} = (X_1,...,X_n)$ consists of iid $U[0,\theta]$ random variables for some unknown $\theta \in \Theta = (0,\infty).$ Consider the decision space $D = \Theta$ and loss $L(d|\theta) = |d - \theta|.$
\begin{lemma}The risk of the MLE $\hat{\theta}_{ML} = X_{(n)}$ is
$$
R(\theta|\hat{\theta}_{ML}) = \frac{\theta}{n + 1}.
$$
\end{lemma}
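Since $X_{(n)} \leq \theta$ always, the absolute error risk is simply $\theta - E_{\theta}[X_{(n)}]$, and
$$
E_{\theta}[X_{(n)}] = \int_{0}^{\theta}x\,\frac{nx^{n-1}}{\theta^{n}}dx = \frac{n\theta}{n+1}, \qquad\text{so}\qquad R(\theta|\hat{\theta}_{ML}) = \theta - \frac{n\theta}{n+1} = \frac{\theta}{n+1}.
$$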
\begin{corollary}The limiting maximum rescaled risk of the MLE is therefore
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|\hat{\theta}_{ML}) = b.
$$
\end{corollary}
We now consider the \textbf{median-unbiased} version of the MLE.
\begin{lemma}The MLE $X_{(n)}$ has the CDF
$$
F_n(x; \theta) = P_{\theta}(X_{(n)} \leq x) =
\begin{cases}
0 \quad x < 0\\
\bigg(\frac{x}{\theta} \bigg)^n \quad 0 \leq x \leq \theta \\
1 \quad x > \theta.
\end{cases}
$$
\end{lemma}
\begin{corollary}The median of this distribution is the solution m to
$$
\frac{1}{2} = \bigg(\frac{m}{\theta} \bigg)^n
$$
$$
m = \bigg(\frac{1}{2}\bigg)^{\frac{1}{n}}\theta.
$$
\end{corollary}
\begin{definition}(Median-unbiased MLE). The median-unbiased MLE is
$$
d_{med}(\tilde{X}) = 2^{\frac{1}{n}}X_{(n)}.
$$
\end{definition}
\begin{lemma}The risk of the median-unbiased MLE is
$$
R(\theta|d_{med}) = \frac{\theta n}{n+1}\bigg(2^{\frac{1}{n}} - 1 \bigg).
$$
\end{lemma}
\begin{corollary}The limiting maximum rescaled risk of the median-unbiased MLE is
$$
\lim_{n \rightarrow \infty}\max_{a \leq \theta \leq b}nR(\theta|d_{med}) = b \log_{e}2.
$$
\end{corollary}
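The limit uses the expansion $2^{\frac{1}{n}} = e^{(\log 2)/n} = 1 + \frac{\log 2}{n} + O(n^{-2})$, so that
$$
nR(\theta|d_{med}) = \frac{\theta n^2}{n+1}\bigg(2^{\frac{1}{n}} - 1 \bigg) \rightarrow \theta\log 2,
$$
and the maximum over $[a,b]$ is attained at $\theta = b$.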
\begin{lemma}The Bayes procedure using a flat weight function is the median of the posterior distribution of the Pareto(n-1,$X_{(n)}$) density. Hence, the Bayes procedure is
$$
d_{flat}(\tilde{X}) = 2^{\frac{1}{n - 1}}X_{(n)}.
$$
\end{lemma}
\begin{remark}The Bayes procedure under the flat prior is very similar to the median-unbiased MLE!
\end{remark}
\begin{theorem}The Bayes procedure $\tilde{d}(\tilde{X})$ using the $U[\theta_0,\theta_1]$ prior has the same limiting risk as the Bayes procedure with the flat prior and the median-unbiased MLE.
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|d_{flat}) = \lim_{n \rightarrow \infty}nR(\theta|d_{med}) = \theta \log 2
$$
for all $\theta_0 < \theta < \theta_1.$
\end{theorem}
\begin{corollary}Hence, $d_{med}(\tilde{X})$ and $d_{flat}(\tilde{X})$ are asymptotically minimax but $\hat{\theta}_{ML}(\tilde{X})$ is not.
\end{corollary}
\lecture{47}{L2 convergence of estimators}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{L2 convergence of estimators}
We have used numerous times that when computing the limiting risk, under squared error loss, of the Bayes procedure with a uniform prior, we can instead compute the limiting risk of the Bayes procedure with a flat prior and this gives us the same result. The advantage to this is that computing the limiting risk of the Bayes procedure with a flat prior is easier. We now seek to prove why we can do this. Intuitively, we can do this because the two procedures are "close" enough and hence have similar limiting risk. \\
First, we seek to define what it means for two estimators to be close.
\begin{definition}(L2 metric). We define the L2 or mean-squared metric as
$$
d(X,Y) = E[(X-Y)^2]
$$
where we assume that $X \in L^2$ and $Y \in L^2.$
\end{definition}
\begin{definition}(Mean-square convergent). Let $\{X_n\}_{n \geq 1}$ be a sequence in $L^2$ defined on a sample space $\Omega.$ We say that $\{X_n\}_{n \geq 1}$ is mean-square convergent (or convergent in mean square) if and only if there exists a random variable $X \in L^2$ such that
$$
\lim_{n \rightarrow \infty}E[(X_n - X)^2] = 0.
$$
X is called the mean-square limit of the sequence, and we denote this by
$$
X_n \xrightarrow{m.s}X
$$
\end{definition}
\begin{theorem}Mean-square convergence implies convergence in probability.
\end{theorem}
\begin{theorem}Suppose $X_n \xrightarrow{P} 0$ and $|X_n| \leq M < \infty$ for all $n \geq n_0$ for some positive integer $n_0$ and constant M. Then,
$$
E(X_{n}^{2}) \rightarrow 0.
$$
\end{theorem}
We can now define what it means for two Bayes procedures to be close by using the L2 metric.
\begin{theorem_exam}{L2 Convergence of Estimators}{}Suppose that for 2 estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ and some rate (sequence) $\{r_n\}$, we have
\begin{enumerate}
\item $r_nE_{\theta}\bigg[(\hat{\theta}_1 - \theta)^2 \bigg] \rightarrow S(\theta) < \infty$
\item $r_nE_{\theta}\bigg[(\hat{\theta}_1 - \hat{\theta}_2)^2 \bigg] \rightarrow 0$.
\end{enumerate}
Then, we have that
$$
r_nE_{\theta}\bigg[(\hat{\theta}_2 - \theta)^2 \bigg] \rightarrow S(\theta) < \infty
$$
\end{theorem_exam}
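A sketch of the argument: writing $\hat{\theta}_2 - \theta = (\hat{\theta}_1 - \theta) + (\hat{\theta}_2 - \hat{\theta}_1)$ and expanding,
$$
r_nE_{\theta}\big[(\hat{\theta}_2 - \theta)^2\big] = r_nE_{\theta}\big[(\hat{\theta}_1 - \theta)^2\big] + r_nE_{\theta}\big[(\hat{\theta}_2 - \hat{\theta}_1)^2\big] + 2r_nE_{\theta}\big[(\hat{\theta}_1 - \theta)(\hat{\theta}_2 - \hat{\theta}_1)\big],
$$
where by Cauchy--Schwarz the cross term is bounded in absolute value by $2\sqrt{r_nE_{\theta}[(\hat{\theta}_1 - \theta)^2]}\sqrt{r_nE_{\theta}[(\hat{\theta}_2 - \hat{\theta}_1)^2]} \rightarrow 2\sqrt{S(\theta)}\cdot 0 = 0$, so only the first term survives in the limit.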
\begin{remark}That is, if we have one estimator that has the same finite limiting risk, which is $S(\theta)$, and we have that the two estimators are close in mean-squared, then we can say that the other estimator has the same limiting risk as our original estimator.
\end{remark}
\begin{lemma}The MLE is not asymptotically minimax for non-regular models, as the bias has the same order as the variance.
\end{lemma}
The significance of this is that for non-regular models, the Bayes estimators automatically adjust the order of their bias in the presence of irregularity and hence perform better than the MLE for such cases.
\lecture{48}{Convergence of Bayes procedure and sample mean}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Convergence of Bayes procedure and sample mean for normal}
Suppose $X = (X_1,...,X_n)$ consists of iid $N(\theta,1)$ random variables for some unknown $\theta \in \Theta = \mathbb{R}.$ We consider the decision space $D = \Theta$ and loss $L(d|\theta) = (d - \theta)^2.$ We are interested in showing that the limiting rescaled risk of the sample mean $\overline{X}$ is identical to the limiting rescaled risk of the Bayes procedure with a uniform $U[\theta_0,\theta_1]$ prior.
\begin{lemma}(Mills Ratio). Let $\Phi$ be the CDF of a standard normal random variable. Then, we have
$$
\frac{1}{\sqrt{2\pi}}[1 - \Phi(\sqrt{2})] < 6.
$$
\end{lemma}
\begin{theorem}If $\tilde{d}$ is the Bayes procedure with uniform $[\theta_0,\theta_1]$ prior, then
$$
\lim_{n \rightarrow \infty}nR(\theta|\tilde{d}) = \lim_{n \rightarrow \infty}nR(\theta|\overline{X}).
$$
\end{theorem}
\lecture{49}{Overview of Asymptotically Minimax Procedures}
\section{Examples of Asymptotically Minimax Procedures}
\subsection{Overview of Asymptotically Minimax Procedures}
We summarise what we have done with regard to asymptotically minimax procedures. First, under squared-error loss, asymptotically minimax procedures generalise the notion of AMVU estimators to the case of non-regular models.\\
\begin{lemma}Under regularity conditions, maximum likelihood estimators are different from Bayes estimators; they tend to differ in their bias under squared error loss. However, as the bias is asymptotically negligible, they are both asymptotically minimax.
\end{lemma}
\begin{lemma}We state 4 reasons why we would use the limiting maximum risk over intervals as our criterion for statistical optimality.
\begin{enumerate}
\item It is difficult to establish non-asymptotic results, and they are only available under certain conditions.
\item The limiting maximum risk gets around the issues of superefficiency.
\item The asymptotic minimax lower bound applies to \textbf{any procedure}.
\item The theory behind the asymptotic minimax lower bound highlights interesting properties of Bayes estimators. That is, just analysing the risk of Bayes procedures determines the best possible performance of any procedure in terms of its limiting maximum risk.
\end{enumerate}
\end{lemma}
\lecture{50}{Notes on Bayesian Statistics}
\section{Bayesian Statistics}
\subsection{Notes}
First, we motivate the use of priors. Before that, we motivate the very use of parameters.
\begin{definition}(Infinite Exchangeability). We say that $(x_1,...,x_n)$ is an infinitely exchangeable sequence of random variables if, for any n, the joint probability $p(x_1,...,x_n)$ is invariant to permutation of the indices. That is, for any permutation $\pi$,
$$
p(x_1,...,x_n) = p(x_{\pi_{1}},...,x_{\pi_{n}}).
$$
\end{definition}
Note that independent and identically distributed sequences form a subset of the infinitely exchangeable sequences. The following theorem indicates why infinite exchangeability is important.
\begin{theorem}(De Finetti Theorem). A sequence of random variables $(x_1,x_2,...)$ is infinitely exchangeable if and only if for all n,
$$
p(x_1,x_2,...,x_n) = \int \prod_{i=1}^{n}p(x_i|\theta)p(\theta)d\theta
$$
for some distribution of $\theta$ given by $p(\theta)d\theta.$
\end{theorem}
The forward direction of the above theorem is what is so powerful. It says that if we have exchangeable data, then:
\begin{enumerate}
\item There must exist a parameter $\theta$;
\item There must exist a likelihood $p(x|\theta);$
\item There must exist a distribution P on $\theta;$
\item The above quantities \textbf{must} exist so as to render the data $(x_1,...,x_n)$ conditionally independent.
\end{enumerate}
Hence, this is why we should use parameters and why we should place priors on parameters.
\begin{proposition}(3 principles of Bayesian approach). We state the 3 principles behind Bayesian modelling.
\begin{enumerate}
\item Conditionality principle. If an experiment concerning inference about $\theta$ is chosen from a collection of possible experiments independently, then any experiment not chosen is irrelevant to the inference.
\item Likelihood principle. The relevant information in any inference about $\theta$ after x is observed is contained \textbf{entirely} in the likelihood function. That is, the likelihood function $p(x|\theta)$, viewed for fixed x, is a function of $\theta.$
\item Sufficiency principle. If two different observations x,y are such that $T(x) = T(y)$ for sufficient statistic T, then inference based on x and y should be the same.
\end{enumerate}
\end{proposition}
The issue with taking expectations over all datasets X is that it does not adhere to the conditionality principle; this is why Bayesians dislike the notion of fixing $\theta$ and taking expectations over X.\newline
In Bayesian approach to decision theory, we construct a loss function $L(\theta,\delta(X))$ for a decision $\delta(X).$ From this, we can define the posterior risk, which conditions on x and integrates over $\Theta$ with a prior $\pi$.
\begin{definition}(Posterior risk). The posterior risk is defined as
$$
p(\pi, \delta(x)) = \int L(\theta,\delta(x))p(\theta|x)d\theta.
$$
The Bayes action $\delta^*(x)$ for any fixed x is the decision $\delta(x)$ that minimises the posterior risk.
\end{definition}
\begin{lemma}Under squared error loss, the posterior mean is the Bayes action.
\end{lemma}
However, note that in the frequentist setting, we can define the frequentist risk
$$
R(\theta,\delta) = E_{\theta}L(\theta, \delta(X))
$$
where we take expectation over X with parameter $\theta$ fixed. However, we can combine the two ideas.
\begin{definition_exam}{Bayes rule}{} A Bayes rule is a function $\delta_{\pi}$ that minimises
$$
r(\pi, \delta) = \int R(\theta, \delta)\pi(\theta)d\theta
$$
where $R(\theta,\delta)$ is the frequentist risk. This averages the frequentist risk over a prior distribution of $\theta.$\newline
The \textbf{Bayes risk} is $r(\pi) = r(\pi,\delta_{\pi})$, i.e., the integrated risk with the Bayes rule plugged in.
\end{definition_exam}
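The two notions of risk are linked by Fubini's theorem (for non-negative losses): writing $m(x) = \int p(x|\theta)\pi(\theta)d\theta$ for the marginal and $\mathcal{X}$ for the sample space,
$$
r(\pi,\delta) = \int_{\Theta}\int_{\mathcal{X}} L(\theta,\delta(x))p(x|\theta)\pi(\theta)\,dx\,d\theta = \int_{\mathcal{X}}\bigg[\int_{\Theta} L(\theta,\delta(x))p(\theta|x)d\theta\bigg]m(x)\,dx,
$$
so choosing the Bayes action for each fixed $x$ (minimising the posterior risk pointwise) yields a Bayes rule.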
\begin{definition}(Conjugate prior). A family of priors such that, upon being multiplied by the likelihood, yields a posterior in the same family.
\end{definition}
Note that we distinguish between objective priors (priors chosen based on the likelihood) and subjective priors (based on domain knowledge).\\
\subsection{Exponential families and conjugate priors}
Nearly every distribution we have seen is an exponential family. Notable exceptions to this are the Cauchy distribution and the t-distribution.\\
Recall that a conjugate prior is one for which the posterior belongs to the same family of distributions as the prior.
\begin{enumerate}
\item The Beta is a conjugate prior for the Bernoulli.
\item The Gamma is a conjugate prior for the exponential.
\end{enumerate}
\begin{remark}
Note that there isn't ONE conjugate prior for a distribution. There is \textbf{a} conjugate prior for a distribution.
\end{remark}
\begin{theorem}Any exponential family has a conjugate prior.
\end{theorem}
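A sketch of the construction, for a family written in the canonical form $p(x|\theta) = h(x)\exp\{\eta(\theta)^{\top}T(x) - A(\theta)\}$ (the notation $h, \eta, T, A$ is the standard exponential-family parametrisation, not notation introduced earlier in these notes): take priors of the form
$$
\pi(\theta) \propto \exp\{\eta(\theta)^{\top}\tau - n_0 A(\theta)\}
$$
indexed by hyperparameters $(\tau, n_0)$ for which this is integrable. Multiplying by the likelihood of observations $x_1,...,x_n$ updates $\tau \mapsto \tau + \sum_{i=1}^{n}T(x_i)$ and $n_0 \mapsto n_0 + n$, so the posterior stays in the same family.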
\end{document}
}
}
| {
"alphanum_fraction": 0.7015968595,
"avg_line_length": 46.9281078382,
"ext": "tex",
"hexsha": "26df0556eac6df7f521056cd168a4b89251a37dd",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-08-27T06:38:26.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-02-27T13:22:22.000Z",
"max_forks_repo_head_hexsha": "c19b260657f4c3e4a0c2a3a1248cc0baf23d3e55",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "chrishyland/uni-notes",
"max_forks_repo_path": "STAT3923/exam-notes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c19b260657f4c3e4a0c2a3a1248cc0baf23d3e55",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "chrishyland/uni-notes",
"max_issues_repo_path": "STAT3923/exam-notes.tex",
"max_line_length": 629,
"max_stars_count": 9,
"max_stars_repo_head_hexsha": "c19b260657f4c3e4a0c2a3a1248cc0baf23d3e55",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "chrishyland/uni-notes",
"max_stars_repo_path": "STAT3923/exam-notes.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-07T05:17:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-18T09:23:53.000Z",
"num_tokens": 59785,
"size": 187994
} |
\subsubsection{\stid{3.13} CLOVER Sub-project PEEKS} \label{subsubsect:peeks}
\paragraph{Overview}
The PEEKS subproject is a focused team effort to advance the capabilities of the
ECP software stack in terms of communication-avoiding Krylov solvers and
advanced preconditioning techniques featuring fine-grained parallelism.
Previously developed techniques that are available as prototype codes -- as
well as novel algorithm developments -- are turned into production-quality
implementations and integrated into the ECP software ecosystem
as part of the Trilinos~\footnote{\url{https://trilinos.org/}} and the
Ginkgo~\footnote{\url{https://github.com/ginkgo-project/ginkgo}} software
stacks.
%With the PEEKS project focus being on algorithm development an software design
%and leading developers of the Trilinos and Ginkgo software packages being
%involved in the PEEKS project, a strong focus is on software interoperability
%and software sustainability. In consequence, there exists a strong link to the
%other ECP math libraries and the xSDK4ECP project coordinating the ECP
%mathematical library interoperability efforts. All technology developed in
%PEEKS is available and disseminated via the xSDK software stack.
\paragraph{Key Challenges}
Developing preconditioned iterative solvers for the US flagship supercomputers
deployed in ECP, we acknowledge four major challenges coming from the hardware
architecture:
\begin{enumerate}
\item
Fine-grained parallelism in a single node that has to be exploited efficiently
by the iterative solver and the preconditioner.
\item
Rising communication and synchronization cost.
\item
Computational power growing much faster than memory power, resulting on
increased pressure on the bandwidth of all cache/memory levels.
\item
Low-precision special function units like Tensor cores that are increasingly
adopted by hardware architectures require sophisticated numerical schemes to be
useful for general purpose scientific computing.
\end{enumerate}
All challenges require the redesign of existing iterative solvers with respect
to higher parallelism, % within all building blocks
a reduced number of
communication and synchronization points, favoring computations over
communication, and adopting multiprecision algorithms for efficient hardware
utilization.
In the last few decades, numerous efforts
have investigated the potential of communication-avoiding (CA) and pipelined
Krylov solvers~\cite{yamazakiipdps2014,Cornelis2018TheCC},
% ; however, the implementations usually remained in prototype status and rarely made it into production code.
% Similarly, significant effort was put into developing
as well as new preconditioning
techniques that allow for the efficient parallelization of the preconditioner
setup and the preconditioner
application~\cite{chowisc2015,anzteuropa2015,ANZT20181}.
However, most implementations were experimental and rarely adopted by application
code.
Also the concept of accelerating iterative methods by using lower precision
formats for parts of the computations or memory access was extensively
investigated in literature~\cite{carson1,carson2,doi:10.1002/cpe.4460},
while production-ready implementations are still scarce.
%In the PEEKS project we address the challenge of turning prototype
%implementations into production-ready functionality by improving robustness
%and
%safeguarding against numerical breakdown, developing application- and
%architecture-specific optimizations, and integrating into ECP application
%projects and into a sustainable and extensible mathematical software stack.
\paragraph{Solution Strategy}
The primary thrusts of the PEEKS project are:
\begin{enumerate}
\item \textbf{Architecture-portable software design:}
In the Ginkgo C++ software, we design and develop a next-generation
sparse linear algebra library able to run on multi- and manycore
architectures. The library design decouples
algorithm implementations from hardware-specific kernel implementations,
thereby allowing extensibility as well as
architecture-specific kernel optimization.
\item \textbf{Sustainability efforts:}
The Ginkgo software development cycle adheres to the Better Scientific
Software (BSSw) design principles~\cite{betterscientificsoftware} that
ensure production-quality code by featuring unit testing, automated
configuration and installation, Doxygen code documentation, as well as a
continuous integration and continuous benchmarking
framework~\cite{pasc_anzt}. Ginkgo is an
open source effort licensed under BSD 3-clause and ships with the latest
version of the xSDK package (v.0.5.0).
\item \textbf{Pipelined and CA Krylov methods:}
We realize pipelined and
communication-avoiding Krylov methods in production-quality code, and
we are actively collaborating with the ECP ExaWind project to integrate
our new features into their application~\cite{Yamazaki-lowsynch}.
\item \textbf{ParILUT -- A parallel threshold ILU:} We are spearheading
the manycore-parallel computation of threshold-based
incomplete factorization preconditioners~\cite{sisc_anzt,ipdps_anzt}.
\item \textbf{Adaptive precision block-Jacobi:} We realized a
production-ready block-Jacobi preconditioner that reduces the runtime by
carefully selecting the storage format of the distinct block inverses
without impacting the preconditioner quality~\cite{toms_anzt}.
\item \textbf{Software-defined events (SDE):} We team up with the ECP
Exa-PAPI project to design and realize an ecosystem for software-defined
events. The idea is to provide
% application scientists with
easy access
to library-, domain- and solver-specific metrics via the PAPI interface.
This avoids cumbersome code instrumentation and library recompilation for
debugging algorithm behavior or identifying performance
bottlenecks~\cite{doi:10.1177/1094342019846287}.
\end{enumerate}
\paragraph{Recent Progress}
\begin{enumerate}
\item
For improving the Ginkgo software quality and performance reproducibility, we
realized a continuous benchmarking system permanently evaluating the
performance of central building blocks and archiving the data~\cite{pasc_anzt}.
We
also realized a web-based Ginkgo Performance Explorer that allows
interactive exploration of archived performance
data~\cite{gpewebpage}.
\item
We implemented and released five variations of communication-avoiding
and pipelined Krylov solvers in the Belos Trilinos package.
\item
We demonstrated the efficient use of communication-avoiding Krylov methods in Trilinos
inside wind turbine simulations of the ECP ExaWind project~\cite{Yamazaki-lowsynch}.
\item
We deployed ParILUT, the first production-ready manycore-parallel algorithm for
generating threshold-based incomplete factorization preconditioners and
demonstrated significant speedups over state-of-the-art
algorithms~\cite{ipdps_anzt} (see Figure~\ref{fig:ParILUTperf}).
%\item
%We released a production-ready adaptive precision block-Jacobi preconditioner
%in the Ginkgo software library. This preconditioner separates the memory
%format
%from the computation format, adapts the memory format to the numerical
%properties, and reduces the runtime cost on high-end GPUs by about 20\%
%without
%impacting the preconditioner quality~\cite{toms_anzt}.
\end{enumerate}
\begin{figure}[htb]
\centering
\includegraphics[width=5in]{projects/2.3.3-MathLibs/2.3.3.13-CLOVER/parilutspeedup}
\caption{\label{fig:ParILUTperf}Speedup of the ParILUT over conventional
threshold-ILU generation on different manycore architectures. Test problems
are taken from the Suite Sparse Matrix Collection.}
\end{figure}
\paragraph{Next Steps}
Our next efforts are:
\begin{enumerate}
\item \textbf{Low-synchronous orthogonalization:} The success of
communication-avoiding Krylov methods motivates pushing the synchronization
limits further by deploying low-synchronous orthogonalization methods.
(Collaboration with the ExaWind team at NREL.)
\item \textbf{Parallel incomplete factorization preconditioner
application:} With the advances in the parallel incomplete factorization
preconditioner generation, the focus increasingly turns to the efficient
preconditioner application. We enhance the concept of sparse approximate
inverse approximation for incomplete factorization preconditioners, and
extend the scope to novel hardware architectures featuring attractive
performance in the low-precision regimes.
% \item \textbf{Get-set usage of software-defined events:} Together with the
% Exa-PAPI team, deployed software-defined events (SDE) in the Ginkgo sparse
% linear algebra library. These provide the user with access to
% domain-specific events like, e.g., preconditioner invocations,
% synchronizations, precision format changes. With building blocks differing
% in the resource usage, we investigate the possibility of instant power and
%% frequency scaling for reducing the power and energy footprint.
% \item \textbf{Graph analytics kernels:} Preconditioning techniques like
% block Jacobi have a strong need for efficient and low-overhead graph
% analytics tools identifying strongly-connected components. We deploy GPU
% kernels providing this functionality while introducing only negligible
% overhead to the preconditioner generation.
\item \textbf{Multiprecision sparse matrix formats:} Operations with sparse
matrices are memory-bound on virtually all architectures. We investigate
how splitting the matrix into several operators stored in value-optimized,
less complex floating point precision formats can help improve
performance.
\item \textbf{Polynomial preconditioners:} The communication cost of
numerical preconditioners is high. In particular, for communication-avoiding
pipelined Krylov methods, the synchronization required by standard
preconditioning can become a bottleneck. We will deliver a new
polynomial preconditioner in Trilinos (Belos) and investigate its effectiveness
for ECP applications.
\end{enumerate}
| {
"alphanum_fraction": 0.8134873618,
"avg_line_length": 52.75,
"ext": "tex",
"hexsha": "3ad3a201996f937aae033757a1954230c212f88a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "00cafc696b651133c01022794a9e05a1361dd451",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "epourmal/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.3-MathLibs/2.3.3.13-CLOVER/2.3.3.13-PEEKS.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "00cafc696b651133c01022794a9e05a1361dd451",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "epourmal/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.3-MathLibs/2.3.3.13-CLOVER/2.3.3.13-PEEKS.tex",
"max_line_length": 111,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "00cafc696b651133c01022794a9e05a1361dd451",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "epourmal/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.3-MathLibs/2.3.3.13-CLOVER/2.3.3.13-PEEKS.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2274,
"size": 10128
} |
\documentclass[a4paper, 12pt]{article}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{longtable}
\usepackage{pdflscape}
\usepackage{algorithm}
\usepackage{graphicx}
\usepackage[noend]{algpseudocode}
\usepackage{url}
\usepackage{tikz}
\usetikzlibrary{arrows}
\newlength\tindent
\setlength{\tindent}{\parindent}
\setlength{\parindent}{0pt}
\renewcommand{\indent}{\hspace*{\tindent}}
\newtheorem{thm}{Theorem}
\newtheorem{cor}{Corollary}[thm]
\newtheorem{lemma}{Lemma}[thm]
\title{COMP 312 Project Individual Report}
\author{Daniel Braithwaite}
\begin{document}
\pagenumbering{gobble}
\maketitle
\newpage
\pagenumbering{arabic}
\section{Individual Report}
We chose to study the queue at the Bluebridge ferry terminal. There were three different departure times (morning, lunch and dinner), which we all took turns collecting data from. Following this, we used Python scripts to take our data and convert it to a form we could work with.\\
We then fitted our data to distributions, which was a difficult part of the project as none of us had much idea about how to fit distributions or perform goodness-of-fit tests. Luckily, we were able to find a Python package called 'SciPy' to help us with this.\\
Finally we created and ran simulations of our models to collect performance measures about them.\\
Unfortunately, we found that our simulations weren't as accurate as we hoped; this was because the arrival rate wasn't constant over the periods we were monitoring the service.\\
A better approach to make the simulation more accurate would have been to model the arrival rate as a function of time.\\
The project description suggested having one person work on each of the components. Rather than doing this, we all worked on all of the parts of the project. We felt this was a better way to split up the work, and it gave us all experience with each part of the project.\\
I helped write the code that fitted and graphed our data with the three different distributions, and wrote the Python simulation for the empirical model.\\
When we were investigating the non-constant arrival rate, I created a Python script to graph a histogram of the minutes before departure of all the arrivals. This not only showed the non-constant arrival rate but also that there was a peak of arrivals around 40 to 50 minutes before departure. This happens to be the same amount of time that Bluebridge recommends you arrive before departure.\\
If we were to start the project again, I would say it would be good to think more about the system we were going to study and what other information could be interesting to collect. For example, we just collected the arrivals and services, but it would also have been interesting to record information like the average group size.\\
Another interesting thing to do would have been to create a model that accounted for the non-constant arrival rate. This would have allowed us to make more valuable and interesting recommendations to the business.\\
Taking this project from start to finish and being able to apply things we were learning in the course helped reinforce what I was learning, especially the data analysis and modelling parts. It was very enjoyable to be able to use what we had collected to create a theoretical plan to help a business. One of the managers at Bluebridge was actually interested in the outcome of our study; knowing this made the project a lot more enjoyable.\\
Our team used the chat app called "Slack", which helped us be more productive and made it a lot easier to communicate with team members. We were able to use this application to easily message the team and share files. This contributed to our team being effective. Towards the end of the project we were having weekly meetings where we would go to a computer lab and work together. If starting this project again, I would like to maintain consistent weekly meetings from the start, as I found this always stimulated good discussion about the work we had to complete.
\end{document} | {
"alphanum_fraction": 0.7889052528,
"avg_line_length": 72.75,
"ext": "tex",
"hexsha": "8b1077809fc6d2aa0282f460cd67344d71c80bb8",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-04-23T23:02:31.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-04-23T23:02:31.000Z",
"max_forks_repo_head_hexsha": "50c6a904e1c53c03bce9928975607c35fd741e33",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "danielbraithwt/University",
"max_forks_repo_path": "Undergraduate/COMP312/Group/Individual Report/report.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "50c6a904e1c53c03bce9928975607c35fd741e33",
"max_issues_repo_issues_event_max_datetime": "2016-12-09T00:28:42.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-12-09T00:17:19.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "danielbraithwt/University",
"max_issues_repo_path": "Undergraduate/COMP312/Group/Individual Report/report.tex",
"max_line_length": 565,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "50c6a904e1c53c03bce9928975607c35fd741e33",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "danielbraithwt/University",
"max_stars_repo_path": "Undergraduate/COMP312/Group/Individual Report/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 929,
"size": 4074
} |
\documentclass{memoir}
\usepackage{notestemplate}
%\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}
%\institute{Rice University}
%\faculty{Faculty of Whatever Sciences}
%\department{Department of Mathematics}
%\title{Class Notes}
%\subtitle{Based on MATH xxx}
%\author{\textit{Author}\\Gabriel \textsc{Gress}}
%\supervisor{Linus \textsc{Torvalds}}
%\context{Well, I was bored...}
%\date{\today}
\begin{document}
% \maketitle
% Notes taken on 05/10/21
Now we apply this theorem to finite fields. Consider \(\mathbb{F}_{p^{n}}\), the splitting field of \(x^{p^{n}}-x\). This is Galois over \(\mathbb{F}_p\). Thus we have \(\left| \textrm{Aut}(\mathbb{F}_{p^{n}} / \mathbb{F}_p )\right| = [ \mathbb{F}_{p^{n}}: \mathbb{F}_p ] = n \). This gives us \(\textrm{Gal}(\mathbb{F}_{p^{n}} / \mathbb{F}_p ) \cong \Z / n\Z\), and the Galois group is cyclic, generated by the Frobenius endomorphism.\\
One can see then that all subfields \(\mathbb{F}_p\subset E\subset \mathbb{F}_{p^{n}}\) have the form \(E \cong \mathbb{F}_{p^{d}}\) for some \(d\mid n\). Of course, this means that \(E / \mathbb{F}_p\) is necessarily Galois as well!
\section{Applications of Galois Theory}
\label{sec:applications_of_galois_theory}
\begin{prop}
The irreducible polynomial \(x^{4}+1 \in \Z[x]\) is reducible over \(\mathbb{F}_p\) for any prime \(p\).
\end{prop}
\begin{proof}
One can check this directly for \(p=2\). If \(p>2\), then observe that \(p \equiv 1, 3, 5,\) or \(7 \pmod 8\), and hence \(p^2 \equiv 1 \pmod 8\). Therefore \(8 \mid p^2-1\), and so \(x^{8}-1 \mid x^{p^2-1}-1\) over \(\mathbb{F}_p\).\\
Of course, \(x^{4}+1 \mid x^{8}-1\), and so any root of \(x^{4}+1\) is a root of \(x^{p^2}-x\) and hence lies in the field \(\mathbb{F}_{p^2}\). Since \([\mathbb{F}_{p^2}:\mathbb{F}_p] = 2\), the degree over \(\mathbb{F}_p\) of the extension generated by such a root is at most 2. If \(x^{4}+1\) were irreducible over \(\mathbb{F}_p\), that degree would have to be 4; hence \(x^{4}+1\) must be reducible.
\end{proof}
\begin{prop}
\begin{align*}
x^{p^{n}}-x = \prod_{d\mid n} \; \prod_{\substack{f \text{ monic irreducible in } \mathbb{F}_p[x] \\ \deg f = d}} f
\end{align*}
\end{prop}
We can use this recursively as \(n\) increases.
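For instance, a quick sanity check of this factorization with \(p = 2\) and \(n = 2\): the monic irreducible polynomials over \(\mathbb{F}_2\) of degree dividing \(2\) are \(x\) and \(x+1\) (degree \(1\)) together with \(x^{2}+x+1\) (degree \(2\)), and indeed
\begin{align*}
x^{4}-x = x(x+1)(x^{2}+x+1) \quad \text{in } \mathbb{F}_2[x].
\end{align*}
Comparing degrees on both sides also yields the counting identity \(p^{n} = \sum_{d\mid n} d\, N_p(d)\), where \(N_p(d)\) denotes the number of monic irreducible polynomials of degree \(d\) in \(\mathbb{F}_p[x]\).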
\end{document}
| {
"alphanum_fraction": 0.6584587323,
"avg_line_length": 49.8409090909,
"ext": "tex",
"hexsha": "f72abbee34d5dfe50f4005f6378f468530870227",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "gjgress/LibreMath",
"max_forks_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture29 - GalThry_FiniteFields.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "gjgress/Libera-Mentis",
"max_issues_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture29 - GalThry_FiniteFields.tex",
"max_line_length": 427,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "gjgress/Libera-Mentis",
"max_stars_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture29 - GalThry_FiniteFields.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z",
"num_tokens": 799,
"size": 2193
} |
\documentclass[10pt]{article}
%==============================
% Document Metadata
%==============================
\usepackage[pdftex,
pdfauthor={Rukmal Weerawarana},
pdftitle={Homework 1 Solutions - FE 621},
pdfsubject={FE 621 - Computational Methods in Finance}
]{hyperref}
%==============================
% Package Imports
%==============================
\usepackage[ruled]{algorithm2e} % typeset algorithms
\usepackage[authordate, maxcitenames=1]{biblatex-chicago} % chicago bibliography style
\usepackage{amsmath} % math environment stuff
\usepackage{amssymb} % additional math symbols
\usepackage[toc, page]{appendix} % Appendix referencing
\usepackage{booktabs} % Table lines
\usepackage{comment} % enables the use of multi-line comments (\ifx \fi)
\usepackage[skip=5pt, labelfont=bf]{caption} % caption formatting
\usepackage{csvsimple} % CSV import to Table
\usepackage{fancyhdr} % Header
\usepackage{fancyvrb} % Verbatim text
\usepackage{float} % Controlling figure border
\usepackage[headings]{fullpage} % Set all margins to 1.5 cm
\usepackage{graphicx} % Figures
\usepackage{listings} % code embedding
\usepackage{longtable} % Multipage tables
\usepackage{multirow} % Multirow cells in tables
\usepackage{pmboxdraw} % Box characters for file tree
\usepackage[dvipsnames]{xcolor} % colors for code
%==============================
% Configuration
%==============================
% Figure outline configuration
% \floatstyle{boxed}
% \restylefloat{figure}
% Bibliography configuration
\addbibresource{../bibliography.bib}
% Remapping bibliography underscores (_) and tildes (~) because Mendeley has weird exporting
% Solution from: https://tex.stackexchange.com/questions/309980/parsing-underscores-in-urls-from-mendeley
\DeclareSourcemap{ % Used when .bib/Bibliography is compiled, not when document is
\maps{
\map{ % Replaces '{\_}', '{_}' or '\_' with just '_'
\step[fieldsource=url,
match=\regexp{\{\\\_\}|\{\_\}|\\\_},
replace=\regexp{\_}]
}
\map{ % Replaces '{'$\sim$'}', '$\sim$' or '{~}' with just '~'
\step[fieldsource=url,
match=\regexp{\{\$\\sim\$\}|\{\~\}|\$\\sim\$},
replace=\regexp{\~}]
}
}
}
% Code display configuration
\newcommand*\lstinputpath[1]{\lstset{inputpath=#1}} % Setting path
\lstset{
language=Python,
basicstyle=\footnotesize\ttfamily,
commentstyle=\ttfamily\color{purple!40!black},
identifierstyle=\color{blue},
keywordstyle=\color{ForestGreen},
numbers=left,
numberstyle=\ttfamily\color{gray}\footnotesize,
stepnumber=1,
numbersep=5pt,
backgroundcolor=\color{white},
showspaces=false,
showstringspaces=false,
showtabs=false,
frame=single,
tabsize=2,
captionpos=b,
breaklines=true,
breakatwhitespace=false,
title=\lstname
}
\lstset{
language=R,
basicstyle=\footnotesize\ttfamily,
commentstyle=\ttfamily\color{purple!40!black},
identifierstyle=\color{blue},
keywordstyle=\color{ForestGreen},
numbers=left,
numberstyle=\ttfamily\color{gray}\footnotesize,
stepnumber=1,
numbersep=5pt,
backgroundcolor=\color{white},
showspaces=false,
showstringspaces=false,
showtabs=false,
frame=single,
tabsize=2,
captionpos=b,
breaklines=true,
breakatwhitespace=false,
title=\lstname
}
% Header and Footer configuration
\pagestyle{fancy} % set page style
\fancyhead{} % override header
\fancyfoot{} % override footer
\renewcommand{\headrulewidth}{.4pt} % set header rule width
\renewcommand{\footrulewidth}{.4pt} % set footer rule width
\lhead{Homework Assignment 1} % set left header
\rhead{Rukmal Weerawarana} % set right header
\lfoot{\textit{FE 621}: Computational Methods in Finance} % set left footer
\rfoot{Page \thepage} % set right footer
%==============================
% Document Content
%==============================
\begin{document}
\thispagestyle{plain}
\pagenumbering{roman} % Changing numbering to Roman numerals for first pages
%==============================
% Document Title
%==============================
\noindent
\large\textbf{Homework Assignment 1} \hfill \textbf{Rukmal Weerawarana} \\
\normalsize \textit{FE 621}: Computational Methods in Finance \hfill \textit{rweerawa@stevens.edu} $\mid$ 104-307-27 \\
\textit{Instructor}: Ionut Florescu \hfill Department of Financial Engineering \\
2/20/2019 \hfill Stevens Institute of Technology
\noindent\rule{\linewidth}{.1em}
%==============================
% Overview
%==============================
\section*{Overview}
In this Homework Assignment, we explore various numerical optimization methods through the lens of the Black-Scholes-Merton Option pricing model\footnote{\cite{Shreve2004}}. Using this, we calculate and explore the implied volatility of options for various assets traded on the market. Furthermore, we also explore numeric methods of differential calculation to compute the Greeks of these candidate options. Finally, we explore numeric integration and the behavior of various quadrature methods.
Unless otherwise stated, the following shorthand notation is used to distinguish between dates:
\begin{itemize}
\item \textbf{DATA1} - Wednesday, February 6 2019 (\textit{2/6/19});
\item \textbf{DATA2} - Thursday, February 7 2019 (\textit{2/7/19}).
\end{itemize}
The content of this Homework Assignment is divided into three sections; the first discusses data gathering, formatting, and a discussion of the assets being examined. The second contains data analysis, and an exploration of implied volatility through the Black-Scholes-Merton pricing framework and related computations. Finally, the third section discusses numerical integration and the convergence of various quadrature rules.
\begin{center}
\textit{See Appendix~\ref{appendix:source} for specific question implementations, and the project GitHub repository\footnote{\cite{Weerawarana2019}} for full source code of the {\normalfont \texttt{fe621}} Python package.}
\end{center}
%==============================
% Table of Contents
%==============================
\newpage
\tableofcontents
%==============================
% Section 1
%==============================
\newpage
\pagenumbering{arabic} % Changing numbering to arabic numerals for main content
\section{Data Overview}
\subsection{Asset Descriptions}
\subsubsection[\textit{SPY} - SPDR S\&P 500 ETF]{\textit{SPY} - SPDR S\&P 500 ETF\footnote{\cite{StateStreetGlobalAdvisors2019}}}
% Note: Heading is weird here because of footnote. See: https://texfaq.org/FAQ-ftnsect
The S\&P 500 (\textit{Standard \& Poor's 500}) is a stock market index tracking 500 of the largest companies, by market capitalization, listed on stock exchanges in the United States. Here, market capitalization is defined as the number of outstanding shares multiplied by the current share price. A stock market index is designed to be a metric that market observers can use as a benchmark to gauge the relative health of the stock market, by analyzing the aggregate performance of its largest components.
However, this index is not the same as the \textit{SPY} ETF. An ETF (\textit{Exchange Traded Fund}) is a basket of stocks designed to track a specific index or benchmark. That is, it provides investors with exposure to an index or benchmark without their having to own all of the underlying assets that constitute it. In addition to higher liquidity, this type of investment also offers lower transaction costs and a lower required minimum investment to gain exposure to a given index or benchmark. It is traded on an exchange, akin to a typical traded asset.
\subsubsection[\textit{VIX} - CBOE Volatility Index]{\textit{VIX} - CBOE Volatility Index\footnote{\cite{CBOEChicagoBoardOptionsExchange2019}}}
The CBOE (\textit{Chicago Board Options Exchange}) volatility index, \textit{VIX}, is an exchange traded product (\textit{ETP}) designed to give investors exposure to the market's expectation of 30-day volatility. It is priced using the implied volatilities of a large set of put and call options on the S\&P 500 index, to gauge investor sentiment. Typically, the price of the VIX has an inverse relationship to the price of the S\&P 500 index. Similar to an ETF, an ETP is also traded on an exchange as a typical traded asset.
\subsection{Data Gathering}
For the assignment, we downloaded monthly options on \textit{Amazon Inc.} (ticker: AMZN) and \textit{S\&P 500 ETF} (ticker: SPY) at various strike prices for the following dates:
\begin{itemize}
\item \textit{02/15/19} - Friday, February 15 2019;
\item \textit{03/15/19} - Friday, March 15 2019;
\item \textit{04/18/19} - Thursday, April 18 2019.
\end{itemize}
A wide variety of option strike prices were considered, with the following ranges:
\begin{itemize}
\item \textit{AMZN} - \$1315 to \$1970 in increments of \$5 (130 strike prices, 470 total options);
\item \textit{SPY} - \$216 to \$320 in increments of \$1 (87 strike prices, 364 total options).
\end{itemize}
Intra-day minute closing price data was gathered for both put and call options with expiration dates and strike prices detailed above. This intra-day data was gathered for the trading day \textit{2/6/19} (February 6 2019; \textbf{DATA1}). Additionally, intra-day minute closing price data was also downloaded for each of the underlying assets. This data was downloaded for both \textit{2/6/19} (February 6 2019; \textbf{DATA1}), and \textit{2/7/19} (February 7 2019; \textbf{DATA2}).
This data detailed above was gathered utilizing \textit{Rblpapi}\footnote{\cite{Armstrong2018}}, which provides an R interface to data on the Bloomberg Terminal\footnote{\cite{BloombergL.P.2019}}. The data download was automated, and corresponding intra-day prices for each of the options were output to individual files. The source code for this implementation is available in Appendix~\ref{appendix:source:q1:bloomberg}.
Furthermore, as a proxy for the \textit{risk-free rate}, we chose to utilize the effective Federal Funds Rate (FFR). This is the interest rate at which depository institutions in the United States lend reserve balances to other depository institutions overnight. This data was gathered for both dates, corresponding to \textbf{DATA1} and \textbf{DATA2}. The effective FFR is published daily by the US Federal Reserve Board of Governors, and is expressed as a yield per annum.\footnote{\cite{BoardofGovernorsoftheFederalReserveSystem2019}}
\subsubsection{Data Cleaning}
For easier programmatic access, the data was placed in a hierarchical structure, corresponding to the \textbf{DATA1}, \textbf{DATA2} data division. Each of the option and asset prices for the corresponding days were placed in the requisite sub-folders. This directory structure is reproduced below.
\VerbatimInput{bin/data_tree.txt}
Option price filenames were changed to OOC format option names, discussed further below. This was done utilizing a cleaning script, written in Python. This script employs utility functions from the \texttt{fe621} Python package\footnote{\cite{Weerawarana2019}}.
\subsection{Option Naming Convention}
A modern convention for naming option contracts was proposed by the Options Clearing Corporation (OCC) in 2008\footnote{\cite{OptionsSymbologyInitiative2008}}, and adopted in 2010. The OCC is an organization that acts as both the issuer and guarantor for options and futures contracts. The OCC is governed by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC). The current convention for option naming is best explained by example.
Consider the option code, \textit{AMZN190215C01960000}. This corresponds to a \textbf{Call Option} on \textbf{Amazon Inc. (AMZN)}, with a strike price of \textbf{\$1960.00} and an expiration date of \textbf{2/15/19} (February 15 2019).
The methodology of this nomenclature is explained in detail below:
% Note: The number here denotes the "priority" of the pagebreak. It can range from 0-4, with 4 being highest priority (i.e. right now), and 0 being the lowest.
\pagebreak[3]
\begin{center}
\textbf{\textcolor{red}{AMZN}\textcolor{MidnightBlue}{19}\textcolor{Bittersweet}{02}\textcolor{YellowOrange}{15}\textcolor{RoyalPurple}{C}\textcolor{ForestGreen}{01960}\textcolor{violet}{000}}
\end{center}
\begin{itemize}
\item \textbf{\textcolor{red}{AMZN}} - Ticker of the company (arbitrary length; always first sequence of characters)
\item \textbf{\textcolor{MidnightBlue}{19}} - Expiration year of the contract (shortened to two digits, i.e. 2019 $\rightarrow$ 19)
\item \textbf{\textcolor{Bittersweet}{02}} - Expiration month of the contract
\item \textbf{\textcolor{YellowOrange}{15}} - Expiration day of the contract
\item \textbf{\textcolor{RoyalPurple}{C}} - Type of option (\textit{C} for call, \textit{P} for put)
\item \textbf{\textcolor{ForestGreen}{01960}} - Dollar component of strike price (in \$; always 5 digits)
\item \textbf{\textcolor{violet}{000}} - ${\frac{1}{1000}}^\text{th}$ Dollar component of strike price (in $\frac{1}{1000}$\$; always 3 digits)
\end{itemize}
Similarly, the following option code corresponds to a \textbf{Put Option} on \textbf{SPDR S\&P 500 ETF (SPY)}, with a strike price of \textbf{\$287.50} and an expiration date of \textbf{3/15/19} (March 15 2019):
\begin{center}
\textbf{\textcolor{red}{SPY}\textcolor{MidnightBlue}{19}\textcolor{Bittersweet}{03}\textcolor{YellowOrange}{15}\textcolor{RoyalPurple}{P}\textcolor{ForestGreen}{00287}\textcolor{violet}{500}}
\end{center}
Finally, the following option code corresponds to a \textbf{Call Option} on the \textbf{CBOE Volatility Index (VIX)}, with a strike price of \textbf{\$16.35} and an expiration date of \textbf{4/18/19} (April 18 2019):
\begin{center}
\textbf{\textcolor{red}{VIX}\textcolor{MidnightBlue}{19}\textcolor{Bittersweet}{04}\textcolor{YellowOrange}{18}\textcolor{RoyalPurple}{C}\textcolor{ForestGreen}{00016}\textcolor{violet}{350}}
\end{center}
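To make the decomposition concrete, the following minimal sketch (a hypothetical helper, not part of the \texttt{fe621} package) splits an OCC-style symbol into its components:
\begin{lstlisting}[language=Python]
import re
from datetime import date

# Hypothetical helper illustrating how an OCC-style option symbol
# decomposes into its components.
OCC_PATTERN = re.compile(
    r"^(?P<ticker>[A-Z]+)"       # underlying ticker (arbitrary length)
    r"(?P<year>\d{2})"           # expiration year (two digits)
    r"(?P<month>\d{2})"          # expiration month
    r"(?P<day>\d{2})"            # expiration day
    r"(?P<type>[CP])"            # option type: C = call, P = put
    r"(?P<dollars>\d{5})"        # dollar component of the strike
    r"(?P<thousandths>\d{3})$"   # 1/1000-dollar component of the strike
)

def parse_occ_symbol(symbol):
    """Split an OCC-style option symbol into its components."""
    m = OCC_PATTERN.match(symbol)
    if m is None:
        raise ValueError("Not a valid OCC option symbol: " + symbol)
    g = m.groupdict()
    return {
        "ticker": g["ticker"],
        "expiration": date(2000 + int(g["year"]), int(g["month"]), int(g["day"])),
        "type": "call" if g["type"] == "C" else "put",
        "strike": int(g["dollars"]) + int(g["thousandths"]) / 1000.0,
    }

# Example: AMZN190215C01960000 -> AMZN call, strike 1960.0, expiring 2019-02-15
print(parse_occ_symbol("AMZN190215C01960000"))
\end{lstlisting}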
%==============================
% Section 2
%==============================
\newpage
\lstinputpath{..}
\section{Data Analysis}
\begin{center}
\textit{Note: All Python scripts reproduced in this section are a subset of the {\normalfont \texttt{fe621}}\footnote{\cite{Weerawarana2019}} Python package developed for this class.}
\end{center}
\subsection{Black-Scholes Model}
With the probabilities $d_1$ and $d_2$ defined as:
\begin{gather*}
d_1 = \frac{\log \left( \frac{S_t}{K} \right) + \left( r + \frac{\sigma^2}{2} \right) (T-t)}{\sigma \sqrt{T-t}} \\
d_2 = d_1 - \sigma \sqrt{T-t} \\
\Phi(x) = \int_{-\infty}^{x} \phi(z) dz = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{\frac{-z^2}{2}} dz
\end{gather*}
\lstinputlisting{fe621/black_scholes/util.py}
\pagebreak[4]
\begin{center}
\textbf{\textit{Note:} The following assumes the dividend rate, $q = 0$.}
\end{center}
\subsubsection{Put Option}
The Black-Scholes Option price for a European Put ($P(S_t)$) option is defined as:
\begin{gather*}
P(S_t) = K e^{-r(T-t)} \Phi(-d_2) - S_t \Phi(-d_1)
\end{gather*}
\lstinputlisting{fe621/black_scholes/put.py}
\subsubsection{Call Option}
The Black-Scholes Option price for a European Call ($C(S_t)$) option is defined as:
\begin{gather*}
C(S_t) = S_t \Phi(d_1) - K e^{-r(T-t)} \Phi(d_2) \\
\end{gather*}
\lstinputlisting{fe621/black_scholes/call.py}
\subsubsection{Put-Call Parity}
The relationship between the price of a Call and Put option is governed by Put-Call parity:
\begin{gather*}
P(S_t) = C(S_t) - S_t + K e^{-r(T-t)}
\end{gather*}
\lstinputlisting{fe621/black_scholes/parity.py}
\subsubsection{The Greeks} \label{sec:q2:greeks}
The Greeks are the quantities representing the sensitivity of the price of a derivative with respect to changes in the underlying parameters. The following formulas are implemented to calculate each of the Greeks using the Black-Scholes option pricing formula. These formulas are derived in full in (\cite{Stefanica2011}) and (\cite{Weerawarana2016}).
\begin{center}
\textbf{\textit{Note:} The following assumes the dividend rate, $q = 0$.}
\end{center}
\textbf{Delta}
The Delta ($\Delta$) of an option is the first derivative of an option with respect to the price of the underlying asset at time $t$, $S_t$.
\begin{gather*}
\Delta(C) = \frac{\partial C(S_t)}{\partial S_t} = \Phi(d_1)
\end{gather*}
\textbf{Gamma}
The Gamma ($\Gamma$) of an option is the second derivative of an option with respect to the price of the underlying asset at time $t$, $S_t$.
\begin{gather*}
\Gamma(C) = \frac{\partial^2 C(S_t)}{\partial S_t^2} = \frac{\phi(d_1)}{S_t \sigma \sqrt{T-t}}
\end{gather*}
\textbf{Vega}
The Vega ($\nu$) of an option is the first derivative of an option with respect to the volatility of the underlying asset at time $t$, $\sigma$.
\begin{gather*}
\nu(C) = \nu(P) = \frac{\partial C(S_t)}{\partial \sigma} = S_t \sqrt{T-t} \, \phi(d_1)
\end{gather*}
\lstinputlisting{fe621/black_scholes/greeks.py}
\newpage
\subsection{Numeric Optimization}
\subsubsection{Bisection Method}
In this section, we implement the Bisection optimization method. The bisection algorithm is outlined in Algorithm \ref{alg:bisection}. The algorithm is implemented recursively.
\begin{algorithm}[h]
\SetAlgoNoLine
\KwIn{Input function, $f$ to be optimized; must have sign change. Search space start and stop points, $a$ and $b$. Tolerance level, $\epsilon$.}
\KwOut{Point $x^* \in [a, b]$ where $f(x^*) = 0$.}
Let midpoint = $m$\;
\Repeat{$(b - a) < \epsilon$}{
$m = \frac{a + b}{2}$\;
\If{$f(a) \times f(m) < 0$}{$b=m$}
\If{$f(b) \times f(m) < 0$}{$a=m$}
}
\Return $\frac{a + b}{2}$\;
\caption{Bisection Algorithm}
\label{alg:bisection}
\end{algorithm}
\lstinputlisting{fe621/optimization/bisection.py}
\subsubsection{Newton Method}
In this section, we implement the Newton optimization method. The Newton method algorithm is outlined in Algorithm~\ref{alg:newton}.\footnote{\cite{Stefanica2011}} The algorithm is implemented recursively.
\begin{algorithm}[h]
\SetAlgoNoLine
\KwIn{A differentiable function $f : \mathbb{R}^a \rightarrow \mathbb{R}^b \, \forall \, a, b \in \mathbb{N}_{>0}$. Starting guess for the root $x_0$. Tolerance level, $\epsilon$.}
\KwOut{$x^* \in \mathbb{R}^a$, such that $f(x^*) = 0$}
$k = 1$\;
\Repeat{$\lvert x_{k} - x_{k-1}\rvert < \epsilon$}{
$x_{k+1} = x_k - \frac{f(x_k)}{f^\prime(x_k)}$\;
$k = k + 1$
}
\Return $x_{k+1}$\;
\caption{Newton's Method}
\label{alg:newton}
\end{algorithm}
\lstinputlisting{fe621/optimization/newton.py}
\subsubsection{Convergence Comparison}
Here, we compare the performance of each of the optimization methods described above, the Bisection method and Newton method. This was done by computing the average daily implied volatility on the complete SPY option chain in the dataset.
The average daily implied volatility is computed by first calculating the implied volatility by-minute. Then, the mean of these minute-level implied volatilities is computed and is treated as the average daily implied volatility of the given option. For this comparison, the tolerance level of each of the termination conditions was set to $1 \times 10^{-4}$.
\begin{table}[h]
\centering
\csvautotabular{bin/imp_vol_convergence.csv}
\caption{Convergence comparison of average daily implied volatility computation on the SPY option chain using the Bisection and Newton optimization methods.}
\label{table:imp_vol_convergence}
\end{table}
The time elapsed for these computations, and other related statistics under each of the two optimization methods are presented in Table~\ref{table:imp_vol_convergence}.
Despite having a theoretical quadratic convergence rate, Newton's method results in slower performance compared to the Bisection method. This is evident from both the total time elapsed, and the average time per operation (computed to include dropped option computations for consistency).
This can be attributed to the fact that some of the minute-level implied volatility optimizations do not have solutions. The Bisection method reaches a state of "no solution" faster than Newton's method, as it employs a technique of reducing the possible range of the solution. This converging search space would suggest it discovers a state of "no solution" faster than the unbounded search space of the Newton method. In principle - on the condition that the existence of a solution is guaranteed - the Newton method will converge faster than the Bisection method, given a reasonable initial guess.
\newpage
\subsection{Implied Volatility}
In this section, we utilize the functions and data described above to calculate the average implied volatility of each of the option chains. This was done for the entire dataset using the Bisection Method. Additionally, we also discuss the differences in average daily implied volatility between \textit{in-the-money} and \textit{out-of-the-money} options.
\subsubsection{Average Daily Implied Volatility} \label{section:q2:avg_imp_vol}
Average daily implied volatility was computed for each option, across all strike prices and expiration dates, for both SPY and AMZN option chains. This optimization on the aggregate dataset was completed using the Bisection Method.
This was done by first computing the implied volatility for each minute, solving for some $\sigma$ such that ${(C(S_t) |_{\sigma} - P = 0)}$ or ${(P(S_t) |_{\sigma} - P = 0)}$ for a call or put option, respectively. Then, the mean of each of these implied volatilities was computed to obtain the daily average implied volatility for an option with a given strike price and expiration date. For this comparison, the tolerance level of each of the termination conditions was set to $1 \times 10^{-7}$.
The complete dataset of average daily implied volatility is reproduced for the complete option chains on SPY in Appendix~\ref{appendix:q2:spy_vol} and AMZN in Appendix~\ref{appendix:q2:amzn_vol}.
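As an illustration of this per-minute procedure, the sketch below uses SciPy's \texttt{brentq} bracketing root finder in place of the package's Bisection routine, and assumes hypothetical minute-level price arrays for a single option and its underlying:
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call (zero dividend yield)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def implied_vol(price, S, K, r, tau, lo=1e-4, hi=5.0):
    """Solve BS(sigma) = observed price for sigma by root bracketing."""
    return brentq(lambda sig: bs_call(S, K, r, sig, tau) - price,
                  lo, hi, xtol=1e-7)

def average_daily_implied_vol(option_prices, spot_prices, K, r, tau):
    """Mean of the per-minute implied volatilities; minutes with no
    solution (no sign change in the bracket) are dropped."""
    vols = []
    for P, S in zip(option_prices, spot_prices):
        try:
            vols.append(implied_vol(P, S, K, r, tau))
        except ValueError:
            continue
    return float(np.mean(vols)) if vols else float("nan")
\end{lstlisting}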
\newpage
\subsection{Volatility Plots}
In this section, we explore the visual relationship between the computed average daily implied volatility (see Section~\ref{section:q2:avg_imp_vol}), the strike price, and the expiration date of the options. The source code for this question is reproduced in Appendix~\ref{appendix:source:q2:vol_plots}.
\subsubsection{Volatility Smile}
The Volatility Smile is the graph of the relationship between the strike price and the average daily implied volatility of the option:
\begin{center}
$\hat{\sigma} = \hat{f}(K)$
\end{center}
\begin{figure}[!hb] % the !hb forces it to be placed here, and at the bottom of the page. Source: https://robjhyndman.com/hyndsight/latex-floats/
\begin{tabular}{|c|c|}
\hline
\includegraphics[width=.47\textwidth]{bin/vol_smile/SPY_Call_2DVolSmile.png} &
\includegraphics[width=.47\textwidth]{bin/vol_smile/SPY_Put_2DVolSmile.png} \\
(a) SPY Call Option Volatility Smile &
(b) SPY Put Option Volatility Smile \\
\hline
\includegraphics[width=.47\textwidth]{bin/vol_smile/AMZN_Call_2DVolSmile.png} &
\includegraphics[width=.47\textwidth]{bin/vol_smile/AMZN_Put_2DVolSmile.png} \\
(c) AMZN Call Option Volatility Smile &
(d) AMZN Put Option Volatility Smile \\
\hline
\end{tabular}
\caption{Volatility Smiles of call and put option chains on AMZN and SPY. Plots the relationship between the strike price and implied volatility for various maturities.}
\label{fig:volatility_smiles}
\end{figure}
The various options (of the same type and underlying asset) are graphed on the same axes, and different expiration dates are displayed in different colors. The Volatility Smile is plotted for both put and call options on both SPY and AMZN in Figure~\ref{fig:volatility_smiles}.
\subsubsection{Volatility Surface}
The Volatility Surface is the graph of the relationship between the strike price, the time to maturity, and the average daily implied volatility of the option:
\begin{center}
$\hat{\sigma} = \hat{f}(K, \sqrt{T - t})$
\end{center}
The various options (of the same type and underlying asset) are graphed on the same axes. The Volatility Surface is plotted for both put and call options on both SPY and AMZN in Figure~\ref{fig:volatility_surfaces}.
\begin{figure}[!hb]
\begin{tabular}{|c|c|}
\hline
\includegraphics[width=.47\textwidth]{bin/vol_surface/SPY_Call_3DVolSurface.png} &
\includegraphics[width=.47\textwidth]{bin/vol_surface/SPY_Put_3DVolSurface.png} \\
(a) SPY Call Option Volatility Surface &
(b) SPY Put Option Volatility Surface \\
\hline
\includegraphics[width=.47\textwidth]{bin/vol_surface/AMZN_Call_3DVolSurface.png} &
\includegraphics[width=.47\textwidth]{bin/vol_surface/AMZN_Put_3DVolSurface.png} \\
(c) AMZN Call Option Volatility Surface &
(d) AMZN Put Option Volatility Surface \\
\hline
\end{tabular}
\caption{Volatility Surfaces of call and put option chains on AMZN and SPY. Plots the relationship between the strike price, time to maturity, and the implied volatility.}
\label{fig:volatility_surfaces}
\end{figure}
\newpage
\subsection{Implied Volatility Analysis}
We also compared the average daily implied volatility of options \textit{in-the-money} and \textit{out-of-the-money}. For this comparison, we defined \textit{money-ness} using a band of $\pm5\%$ around the current underlying asset price: options whose strikes fall within this band are treated as \textit{in-the-money}, and as \textit{out-of-the-money} otherwise. This comparison data is presented in Table~\ref{table:itm_otm_comparison}.
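A one-line sketch of this classification rule (a hypothetical helper, with \texttt{band} standing for the $5\%$ threshold):
\begin{lstlisting}[language=Python]
def moneyness_class(strike, spot, band=0.05):
    """Classify an option by the +/- 5% band definition used above."""
    return "in-the-money" if abs(strike - spot) <= band * spot else "out-of-the-money"
\end{lstlisting}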
\begin{table}[h]
\centering
\csvautotabular{bin/itm_otm_vol_analysis.csv}
\caption{Comparison of \textit{in-the-money} and \textit{out-of-the-money} options through the lens of their average daily implied volatility.}
\label{table:itm_otm_comparison}
\end{table}
It is immediately apparent from Table~\ref{table:itm_otm_comparison} that the implied volatility of the \textit{in-the-money} option chains on both AMZN and SPY is lower than that of their \textit{out-of-the-money} counterparts. This is to be expected, as realizing prices farther away from the current price would imply that the underlying asset has a larger volatility in price.
Additionally, an analysis of Figure~\ref{fig:volatility_smiles} and Figure~\ref{fig:volatility_surfaces} echoes the results of the volatility comparison between \textit{in-the-money} and \textit{out-of-the-money} options above. Options with a closer maturity date consistently have a higher implied volatility than options with a farther maturity date. This behavior is also expected, as strike prices farther from the current asset price have longer to be realized, implying a lower level of volatility in the underlying asset price.
Furthermore, the difference between the implied volatilities of both flavors of options on SPY and AMZN is also to be expected. As the SPY ETF tracks the \textit{S\&P 500 Index}, it captures the volatility of the basket as a whole, compared to AMZN options, which track a single company. The idiosyncratic risk of a single company is always greater than that of an averaged basket of assets, due to the benefit provided by diversification.
Finally, compared to the current level of the VIX (\$15.42 Closing Price for \textbf{DATA1}) the ratio between the options \textit{in-the-money} and \textit{out-of-the-money} are reasonable. This can be inferred from the current (relatively low) value of the VIX, as its price is based on the prices of \textit{out-of-the-money} put and call options on SPY.
\newpage
\subsection{The Greeks}
In this section, we compute the Greeks for the options. To do this, we employ the estimate of the average daily implied volatility (see Section~\ref{section:q2:avg_imp_vol}). We compute the Greeks using both the analytical formula (see Section~\ref{sec:q2:greeks}), and by estimation of the derivatives using the central finite difference Method.
\subsubsection{Central Finite Difference Method}
The central finite difference method is a framework for computing the numerical derivative of a three-times-differentiable function $f$ in an interval around a point $a$. Numerical approximations for the first and second derivatives are then\footnote{\cite{Stefanica2011}}:
\begin{gather*}
\text{Let } h > 0 \\
f^\prime(a) \approx \frac{f(a + h) - f(a - h)}{2h} + O(h^2) \\
f^{\prime\prime}(a) \approx \frac{f(a + h) - 2 f(a) + f(a - h)}{h^2} + O(h^2)
\end{gather*}
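A minimal sketch of these approximations, where \texttt{price\_fn} is any single-argument pricing function (for example, a Black-Scholes call price with all parameters other than the bumped one held fixed) and \texttt{h} is a hypothetical bump size:
\begin{lstlisting}[language=Python]
def fd_first(price_fn, x, h=1e-4):
    """Central-difference estimate of the first derivative at x."""
    return (price_fn(x + h) - price_fn(x - h)) / (2.0 * h)

def fd_second(price_fn, x, h=1e-4):
    """Central-difference estimate of the second derivative at x."""
    return (price_fn(x + h) - 2.0 * price_fn(x) + price_fn(x - h)) / h ** 2

# Delta and Gamma bump the spot price; Vega bumps the volatility.
# (call_price is a hypothetical Black-Scholes pricing function.)
# delta = fd_first(lambda s: call_price(s, K, r, sigma, tau), S)
# gamma = fd_second(lambda s: call_price(s, K, r, sigma, tau), S)
# vega  = fd_first(lambda v: call_price(S, K, r, v, tau), sigma)
\end{lstlisting}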
\subsubsection{Analytical and Estimated Greeks}
The analytical and estimated Delta, $\Delta$, Gamma, $\Gamma$, and Vega, $\nu$, for the complete SPY and AMZN option chains are presented in Appendix~\ref{appendix:q2:spy_greeks} and Appendix~\ref{appendix:q2:amzn_greeks}, respectively. The source code for this computation is reproduced in Appendix~\ref{appendix:source:q2:greeks}.
\subsection{DATA2 Computed Prices}
Finally, we compute option prices utilizing the closing price data for DATA2. This was accomplished using the risk-free rate for DATA2, and correspondingly computed times-to-maturity. The computed prices are presented for the SPY and AMZN option chains in Appendix~\ref{appendix:q2:spy_data2} and Appendix~\ref{appendix:q2:amzn_data2}, respectively. The source code for this computation is reproduced in Appendix~\ref{appendix:source:q2:data2}.
%==============================
% Section 3
%==============================
\newpage
\section{Numerical Integration}
\subsection{Quadrature Methods}
In this section, we implement the \textit{Trapezoidal Rule} and \textit{Simpson's Rule} quadrature methods.
\begin{gather*}
\text{Let data} = \boldsymbol{x} \\
\text{Let $i^\text{th}$ element of $\boldsymbol{x}$} = x_i
\end{gather*}
\subsubsection{Trapezoidal Rule}
\begin{gather*}
\text{Let Trapezoidal rule approximation} = T_N(f) \\
\Rightarrow T_N(f)
= \sum_{i=1}^{N} \left[ \left( \frac{f(x_{i-1}) + f(x_i)}{2} \right) \times h \right] \\
\Rightarrow h \times \sum_{i=1}^N \left[ \frac{f(x_{i-1}) + f(x_i)}{2} \right]
= h \times \left( \frac{1}{2}f(x_0) + f(x_1) + \cdots + f(x_{N-1}) + \frac{1}{2}f(x_N)\right) \\
\therefore \, T_N(f) = hf(\boldsymbol{x}) - \frac{h}{2}(f(x_0) + f(x_N))
\end{gather*}
\lstinputlisting{fe621/numerical_integration/trapezoidal.py}
\pagebreak[4]
\subsubsection{Simpson's Rule}
The following equation is derived in full in (\cite{Florescu2019}).
\begin{gather*}
\text{Let Simpson's rule approximation} = S_N(f) \\
\Rightarrow S_N(f) \approx \frac{h}{6} \times \sum_{i=1}^N \left[ f(x_{i-1}) + 4f \left( \frac{x_{i-1} + x_i}{2} \right) + f(x_i) \right] \\
= \frac{h}{6} \left( \sum_{i=1}^N [f(x_{i-1}) + f(x_i)] + 4 \times \sum_{i=1}^N \left[ f \left( \frac{x_{i-1} + x_i}{2} \right) \right] \right) \\
\\
\text{Note that $\left( \frac{x_{i-1} + x_i}{2} \right)$ is the midpoint between the points in $\boldsymbol{x}$.} \\
\text{Let the above} = \boldsymbol{x_\text{mid}} \\
\therefore \, S_N(f) \approx \frac{h}{6} \left( 2f(\boldsymbol{x}) - (f(x_0) + f(x_N)) + 4f(\boldsymbol{x}_\text{mid}) \right)
\end{gather*}
\lstinputlisting{fe621/numerical_integration/simpsons.py}
\pagebreak[3]
\subsection{Truncation Error Analysis} \label{section:q3:trunc_error}
To examine the behavior of each of the quadrature methods described above, we approximate the integral of the following function:
\begin{gather*}
f(x) =
\begin{cases}
\frac{\sin{(x)}}{x}, & \quad \text{for} \, x \neq 0, \\
1, & \quad \text{for} \, x = 0.
\end{cases}
\end{gather*}
We parameterize the start and stop points of the quadrature methods with a variable $a$, such that ${\text{start} = -a}$ and ${\text{stop} = a}$. Furthermore, we define the number of segments with variable $N$.
\begin{gather*}
\text{Let approximation with parameters $a$ and $N$} = I_{N,a}
\end{gather*}
It is known analytically that the value of the integral $\int_{-\infty}^{\infty} f(x)\, dx = \pi$. We evaluate the performance of each of the quadrature methods with various values of $a$ and $N$. Then, we compute the \textit{truncation error} of the approximation, defined as:
\begin{gather*}
\text{Truncation error for approximation with parameters $a$ and $N$} = \left| I_{N,a} - \pi \right|
\end{gather*}
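A self-contained sketch of this experiment for the Trapezoidal rule, independent of the \texttt{fe621} implementation and with an illustrative (not exhaustive) grid of $a$ and $N$ values:
\begin{lstlisting}[language=Python]
import numpy as np

def sinc_integrand(x):
    """f(x) = sin(x)/x with the removable singularity at zero filled in."""
    safe_x = np.where(x == 0.0, 1.0, x)
    return np.where(x == 0.0, 1.0, np.sin(safe_x) / safe_x)

def trapezoid(f, a, N):
    """Composite Trapezoidal rule for the integral of f over [-a, a]."""
    x = np.linspace(-a, a, N + 1)
    y = f(x)
    h = 2.0 * a / N
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

# Illustrative grid only; the tables report a = 1e2..1e6 and N = 1e3..1e7.
for a in (100, 10_000):
    for N in (10_000, 1_000_000):
        err = abs(trapezoid(sinc_integrand, a, N) - np.pi)
        print(f"a = {a:>7}, N = {N:>9}: truncation error = {err:.3e}")
\end{lstlisting}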
\begin{table}[h]
\centering
\csvautotabular{bin/numerical_integration/trapezoidal_trunc_error.csv}
\caption{Trapezoidal quadrature rule truncation error for varying values of $a$ and $N$.}
\label{table:trapezoidal_trunc_error}
\end{table}
\begin{table}[h]
\centering
\csvautotabular{bin/numerical_integration/simpsons_trunc_error.csv}
\caption{Simpsons quadrature rule truncation error for varying values of $a$ and $N$.}
\label{table:simpsons_trunc_error}
\end{table}
Table~\ref{table:trapezoidal_trunc_error} and Table~\ref{table:simpsons_trunc_error} report the truncation error for the Trapezoidal and Simpson's quadrature rules, respectively. The script used to produce these tables is reproduced in Appendix~\ref{appendix:q3:source:trunc_error}. Variations of $N$ and $a$ are explored in increasing powers of 10, with $a$ progressing from 100 to 1,000,000, and $N$ from 1,000 to 10,000,000.
It is evident from Table~\ref{table:trapezoidal_trunc_error} that the Trapezoidal quadrature rule performs relatively well across all values of $a$, even at relatively low values of $N$. Compared to Simpson's quadrature rule truncation error (Table~\ref{table:simpsons_trunc_error}), the Trapezoidal quadrature rule also performs relatively better with larger values of $N$, and small values of $a$.
A potential explanation for this may be the interpolating behavior of Simpson's quadrature rule. The function $\frac{\sin(x)}{x}$ is significantly more linear than quadratic in small intervals, and thus the quadratic interpolating behavior of Simpson's quadrature rule is a poor approximation heuristic for the function with low values of $a$.
Finally, it can be observed that both quadrature rule approximations converge commensurately as the values of $a$ and $N$ increase. However, it is clear that the Trapezoidal quadrature rule approximation converges at a faster rate than the Simpson's rule approximation with increasing values of $a$ and $N$.
\subsection{Convergence Analysis}
Typically, the true value of the objective integral is unknown. In this case, we instead evaluate the rate of change of the computed integral (i.e. its convergence) with respect to the number of segments, $N$. We assign an arbitrary convergence criterion, $\epsilon$, to test for convergence with progressively increasing (in powers of 10) values of $N$.
\begin{gather*}
\text{Let approximation with parameter $N$} = I_N \\
\text{Repeat while:} \left| I_N - I_{N_\text{old}} \right| > \epsilon
\end{gather*}
We evaluate the number of iterations required for a convergence level of $\epsilon = 10^{-3}$ for the Trapezoidal and Simpson's quadrature rules. The output of this evaluation is reproduced in Table~\ref{table:quadrature_convergence}. The solution source code for this analysis is reproduced in Appendix~\ref{appendix:q3:source:convergence_analysis}. The \texttt{fe621} package\footnote{\cite{Weerawarana2019}} sub-module used in this analysis is presented below.
\lstinputlisting{fe621/numerical_integration/convergence.py}
\begin{table}[h]
\centering
\csvautotabular{bin/numerical_integration/convergence.csv}
\caption{Analysis of segments required for convergence under the Trapezoidal and Simpson's quadrature rules.}
\label{table:quadrature_convergence}
\end{table}
Analyzing the results in Table~\ref{table:quadrature_convergence}, it is evident that the number of segments required for convergence under the Trapezoidal quadrature rule is significantly smaller than that required under Simpson's quadrature rule. This difference is significant, with the Trapezoidal quadrature rule requiring two orders of magnitude fewer segments than Simpson's quadrature rule for convergence. This behavior is in agreement with the previous analysis of convergence with respect to varying values of $N$ and $a$, explored in Section~\ref{section:q3:trunc_error}.
\subsubsection{Arbitrary Function}
Additionally, we also evaluate each quadrature rule with respect to the number of segments required for convergence with an additional arbitrary integral:
\begin{gather*}
g(x) = 1 + e^{-x^{2}} \cos{(8x^{\frac{2}{3}})} \\
\int_0^2 g(x) \, dx
\end{gather*}
\begin{table}[h]
\centering
\csvautotabular{bin/numerical_integration/arb_convergence.csv}
\caption{Analysis of segments required for convergence of an arbitrary integral under the Trapezoidal and Simpson's quadrature rules.}
\label{table:arb_function_convergence}
\end{table}
The estimates and segments required for convergence for the integral $\int_0^2 g(x) \, dx$ are presented in Table~\ref{table:arb_function_convergence}. The source code for this analysis is reproduced in Appendix~\ref{appendix:q3:source:arb_convergence_analysis}.
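A simplified, self-contained sketch of the same convergence loop for $\int_0^2 g(x)\,dx$, using an inline Trapezoidal rule rather than the package implementation:
\begin{lstlisting}[language=Python]
import numpy as np

def g(x):
    return 1.0 + np.exp(-x ** 2) * np.cos(8.0 * x ** (2.0 / 3.0))

def trapezoid_ab(f, a, b, N):
    """Composite Trapezoidal rule on [a, b] with N segments."""
    x = np.linspace(a, b, N + 1)
    y = f(x)
    h = (b - a) / N
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

# Increase N in powers of 10 until successive estimates differ by < 1e-3
eps, N = 1e-3, 10
prev = trapezoid_ab(g, 0.0, 2.0, N)
while True:
    N *= 10
    cur = trapezoid_ab(g, 0.0, 2.0, N)
    if abs(cur - prev) < eps:
        break
    prev = cur
print(f"Converged with N = {N} segments; estimate = {cur:.6f}")
\end{lstlisting}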
%==============================
% References
%==============================
\newpage
\printbibliography
%==============================
% Appendix
%==============================
\newpage
\appendix % all sections after this are appendix sections
% Resetting input path
\lstinputpath{}
\section{Computed Implied Volatility} \label{appendix:q2:imp_vol}
\subsection{SPY Option Chain} \label{appendix:q2:spy_vol}
\csvreader[
longtable=l|cccc,
table head=
\toprule\bfseries Option Name &\bfseries Expiration Date &\bfseries Type &\bfseries Strike &\bfseries Implied Volatility \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/spy_data1_vol.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five}{\one & \two & \three & \four & \five}
\newpage
\subsection{AMZN Option Chain} \label{appendix:q2:amzn_vol}
\csvreader[
longtable=l|cccc,
table head=
\toprule\bfseries Option Name &\bfseries Expiration Date &\bfseries Type &\bfseries Strike &\bfseries Implied Volatility \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/amzn_data1_vol.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five}{\one & \two & \three & \four & \five}
\newpage
\section{Analytically Computed and Estimated Greeks} \label{appendix:q2:greeks}
\subsection{SPY Option Chain Greeks} \label{appendix:q2:spy_greeks}
\csvreader[
longtable=l|cccccc,
table head=
\toprule \multirow{2}{*}{\bfseries Option Name} & \multicolumn{3}{c}{\bfseries Analytical} & \multicolumn{3}{c}{\bfseries Estimated} \\ & $\Delta$ & $\Gamma$ & $\nu$ & $\Delta$ & $\Gamma$ & $\nu$ \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/greeks/spy_greeks.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five, 6=\six, 7=\seven}{\one & \two & \three & \four & \five & \six & \seven}
\newpage
\subsection{AMZN Option Chain Greeks} \label{appendix:q2:amzn_greeks}
\csvreader[
longtable=l|cccccc,
table head=
\toprule \multirow{2}{*}{\bfseries Option Name} & \multicolumn{3}{c}{\bfseries Analytical} & \multicolumn{3}{c}{\bfseries Estimated} \\ & $\Delta$ & $\Gamma$ & $\nu$ & $\Delta$ & $\Gamma$ & $\nu$ \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/greeks/amzn_greeks.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five, 6=\six, 7=\seven}{\one & \two & \three & \four & \five & \six & \seven}
\newpage
\section{DATA2 Computed Prices} \label{appendix:q2:data2_prices}
\subsection{SPY Option Chain} \label{appendix:q2:spy_data2}
\csvreader[
longtable=l|cccc,
table head=
\toprule\bfseries Option Name &\bfseries Expiration Date &\bfseries Type &\bfseries Strike &\bfseries Computed Price \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/data2/spy_prices.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five}{\one & \two & \three & \four & \five}
\newpage
\subsection{AMZN Option Chain} \label{appendix:q2:amzn_data2}
\csvreader[
longtable=l|cccc,
table head=
\toprule\bfseries Option Name &\bfseries Expiration Date &\bfseries Type &\bfseries Strike &\bfseries Computed Price \\ \midrule \endhead \bottomrule \endfoot,
late after line=\\
]{bin/data2/amzn_prices.csv}{1=\one, 2=\two, 3=\three, 4=\four, 5=\five}{\one & \two & \three & \four & \five}
\newpage
\section{Solution Source Code} \label{appendix:source}
\subsection{Question 1 Implementation}
\subsubsection{Bloomberg Terminal Data Download} \label{appendix:source:q1:bloomberg}
\lstinputlisting{question_solutions/question_1.R}
\newpage
\subsection{Question 2 Implementation}
\subsubsection{Optimization Method Convergence Comparison} \label{appendix:source:q2:convergence}
\lstinputlisting{question_solutions/question_2_convergence.py}
\subsubsection{Implied Volatility Computation} \label{appendix:source:q2:imp_vol}
\lstinputlisting{question_solutions/question_2_imp_vol.py}
\subsubsection{Implied Volatility Analysis} \label{appendix:source:q2:vol_analysis}
\lstinputlisting{question_solutions/question_2_vol_analysis.py}
\subsubsection{Volatility Plots} \label{appendix:source:q2:vol_plots}
\lstinputlisting{question_solutions/question_2_vol_plots.py}
\subsubsection{The Greeks} \label{appendix:source:q2:greeks}
\lstinputlisting{question_solutions/question_2_greeks.py}
\subsubsection{DATA2 Price Computation} \label{appendix:source:q2:data2}
\lstinputlisting{question_solutions/question_2_data2.py}
\newpage
\subsection{Question 3 Implementation} \label{appendix:q3}
\subsubsection{Truncation Error Analysis} \label{appendix:q3:source:trunc_error}
\lstinputlisting{question_solutions/question_3_trunc_error.py}
\subsubsection{Convergence Segment Analysis} \label{appendix:q3:source:convergence_analysis}
\lstinputlisting{question_solutions/question_3_convergence.py}
\subsubsection{Arbitrary Function Convergence Segment Analysis} \label{appendix:q3:source:arb_convergence_analysis}
\lstinputlisting{question_solutions/question_3_arbitrary_area.py}
%==============================
% Document End
%==============================
\end{document}
| {
"alphanum_fraction": 0.6844331828,
"avg_line_length": 55.8928571429,
"ext": "tex",
"hexsha": "77e208a0473f64b274e33bdb643b2068cb8e27bb",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-04-23T07:32:44.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-04-23T07:32:44.000Z",
"max_forks_repo_head_hexsha": "9c7cef7931b58aed54867acd8e8cf1928bc6d2dd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rukmal/FE-621-Homework",
"max_forks_repo_path": "Homework 1/Homework 1 Solutions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9c7cef7931b58aed54867acd8e8cf1928bc6d2dd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rukmal/FE-621-Homework",
"max_issues_repo_path": "Homework 1/Homework 1 Solutions.tex",
"max_line_length": 608,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "9c7cef7931b58aed54867acd8e8cf1928bc6d2dd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rukmal/FE-621-Homework",
"max_stars_repo_path": "Homework 1/Homework 1 Solutions.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-11T07:49:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-29T04:34:50.000Z",
"num_tokens": 11991,
"size": 45385
} |
In this section, we detail the main issues for \mrs coordination
in industrial domains, then we provide a detailed discussion on coordination
approaches, highlighting challenges and main solution techniques.
\section{Multi-Robot System for Logistics Applications}
In this thesis we focus on industrial scenarios where robots have a high
degree of autonomy and operate in a dynamic environment.
% coocoo
The article \cite{coocoo} presents the Kiva warehouse-management system, which creates a new paradigm
for pick-pack-and-ship warehouses that improves worker productivity\footnote{https://www.amazonrobotics.com/\#/}.
The Kiva system uses movable storage shelves that can be lifted by small, autonomous robots.
By bringing the product to the worker, productivity is increased by a factor of two or more,
while simultaneously improving accountability and flexibility.
The key innovation in the Kiva system is the application of
inexpensive robots capable of lifting and carrying
three-foot-square shelving units, called \textit{inventory pods}.
The robots, called \textit{drive units}, transport the inventory pods from storage
locations to stations where workers can pick items off the shelves and
put them into shipping cartons. Throughout the
day, the picker stays at her station while a continuous stream of robots presents
pick-faces. By moving the inventory to the worker, rather than the
other way around, the authors report that worker productivity at least doubles.
The Kiva drive units operate in a controlled, known environment, greatly simplifying
the design problem and making the solution practical.
Another distinguishing attribute of \mrs is the extent to which agents are cooperative
(in the sense that they must coordinate activities to achieve a system goal)
or are self-interested and have independent, often conflicting, objectives.
Although the overall system is cooperative, the Kiva robots are essentially independent.
Warehouses and distribution centers play a critical role in the flow of goods from
manufacturers to consumers. They serve as giant routing centers in which pallets
of products from different manufacturers are split, and the items are redirected
into outgoing containers.
The drive units are small enough to fit under the inventory pod and are outfitted
with a mechanical lifting mechanism that allows them to lift pods off
the ground. The pods consist of a stack of trays,
each of which is subdivided into bins. A variety of
tray sizes and bin sizes creates the mixture of storage locations for the profile
of products the warehouse stores.
A Kiva installation is arranged on a
grid with storage zones in the middle and inventory stations spread around the perimeter.
The drive units are used to move the inventory pods with the correct bins
from their storage locations to the inventory stations where a pick worker removes
the desired products from the desired bin. Note that the pod
has four faces, and the drive unit may need to rotate the pod in order to present
the correct face.
When a picker is done with a pod, the drive unit
stores it in an empty storage location.
For path planning, the environment is defined as a grid that constitutes
a two-dimensional graph of paths, whose edges may be given weights at design time.
The drive units use a standard implementation of $A^*$ to plan paths to storage locations
and inventory stations. The drive unit agents also maintain a list of high-level
goals and are responsible for prioritizing the goals and accomplishing them as efficiently
as possible. Each drive unit agent then decides which station to visit first and in
what sequence to present the pod faces so as to minimize travel time.
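As a rough illustration of this kind of grid-based planning (a generic sketch in Python, not the actual Kiva planner, whose implementation details are not public), a minimal $A^*$ search over a weighted four-connected grid could look as follows, assuming every cell cost is at least one so that the Manhattan-distance heuristic remains admissible.
\begin{verbatim}
import heapq

def astar_grid(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] is the traversal cost of
    cell (r, c), or None for a blocked cell."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]      # entries are (f, g, cell)
    best_g = {start: 0}
    parent = {}
    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:                   # reconstruct the path
            path = [cell]
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return list(reversed(path))
        if g > best_g.get(cell, float('inf')):
            continue                       # stale queue entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                ng = g + grid[nr][nc]
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    parent[nxt] = cell
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None                            # no path exists
\end{verbatim}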
% token passing
More recent papers in the logistics scenario, \cite{mapf} and \cite{mapd}, define
the Multi-Agent Pickup and Delivery (MAPD) problem, in which a large number of agents attend to a
stream of incoming pickup-and-delivery tasks. They use a Token Passing (TP) approach
to implement a MAPD algorithm that is efficient and effective. The MAPD algorithm takes the
kinematic constraints of real robots directly into account during planning, and computes
continuous agent movements with given velocities that work on non-holonomic robots rather than
discrete agent movements with uniform velocity.
TP assumes, like many Multi-Agent pathfinding algorithms, discrete agent movements
in the main compass directions with uniform velocity but can use a post-processing step
to adapt its paths to continuous forward movements with given translational velocities
and point turns with given rotational velocities.
Unfortunately, the resulting paths might then not be effective since planning is oblivious
to this transformation. TP needs to repeatedly plan time-minimal paths for agents
that avoid collisions with the paths of the other agents.
The important implication is that agents repeatedly plan paths for themselves,
considering the other agents as dynamic obstacles that follow their paths and with which
collisions need to be avoided. The agents use space-time $A^*$ for this single-agent path planning.
A set of endpoints is any subset of cells that contains at least all start cells of agents and all
pickup and delivery cells of tasks. The pickup and delivery cells are called task endpoints.
The other endpoints are called non-task endpoints.
TP operates as follows for a given set of endpoints: It uses
a token (a synchronized block of shared memory) that stores
the task set and the current paths, one for each agent.
The system repeatedly updates the task set in the token to contain
all unassigned tasks in the system and then sends the token
to some agent that is currently not following a path. The
agent with the token considers all tasks in the task set whose
pickup and delivery cells are different from the end cells of
all paths in the token.
An overview of how TP works:
\begin{enumerate}
\item If such tasks exist, then the agent
assigns itself that task among these tasks whose pickup cell
it can arrive at the earliest, removes the task from the task
set, computes two time-minimal paths in the token, one that
moves the agent from its current cell to the pickup cell of
the task and then one that moves the agent from the pickup
cell to the delivery cell of the task, concatenates the two
paths into one path, and stores the resulting path.
\item If no such tasks exist and the agent is not in the delivery cell
of any task in the task set, then it stores the empty path in
the token (to wait at its current cell).
\item Otherwise, the
agent computes and stores a time-minimal path in the token
that moves the agent from its current cell to some endpoint
that is different from both the delivery cells of all tasks in
the task set and from the end cells of all paths in the token.
\end{enumerate}
Each path the
agent computes has two properties: (1) It avoids collisions
with all other paths in the token; (2) No other paths in the
token use its end cell after its end time. Finally, the agent
releases the token, follows its path, and waits at the end cell
of the path.
They demonstrate the benefit of their approach for automated warehouses.
However, they impose kinematic constraints, for instance that a robot can only turn by 90 degrees at a time
and can perform only one task at a time\footnote{Lifelong Path Planning with Kinematic Constraints for Multi-Agent Pickup and Delivery https://www.youtube.com/watch?v=RTJvJYJVxJk\&t=30s}.
This limitation is not present in our system.
% Profe logic
The article \cite{maxsum} focuses on approaches that are based on algorithms
widely used to solve graphical models and constraint optimization problems, such as
the max-sum algorithm. The authors analyse the coordination problem faced by a set of robots
operating in a warehouse logistics application. In this context, robots must transport
items from loading to unloading bays so as to complete packages to be delivered to customers.
Robots must cooperate to maximize the number of packages completed per unit of time.
To this end, a crucial component is avoiding interference when moving in the environment.
They show how this problem can be formalised as a Distributed Constraint Optimization
Problem (DCOP) and provide a solution based on the binary max-sum algorithm.
In more detail, in this paper \cite{maxsum} they provide a DCOP model for the task
assignment problem faced by robots involved in logistics operations in a warehouse.
Among the various solution approaches for DCOPs, they advocate the use of
heuristic algorithms, and specifically max-sum, an iterative message-passing
approach that has been shown to provide solutions of high quality for systems
operating in real time and with limited computation and communication resources.
Max-sum works best when the decisions of one robot affect only a small subset of the team, because
the message update step, a key operation of max-sum, has a computational complexity that
is exponential in the number of robots that can perform the same task.
This exponential element can be a significant limitation for large-scale, real-time systems.
To combat this, they show that for specific types of constraints and using binary variables,
this exponential element can be reduced to a polynomial one. Hence, the max-sum
approach can be used for large-scale systems that must operate under real-time constraints.
In this work, we consider a similar setting in which a set of robots is involved
in transportation tasks for logistics. However, we focus on the specific problem
of task assignment.
\section{Coordination in Multi-Robot Systems}
Coordination for \mrs has been investigated from several diverse
perspectives, and nowadays there is a wide range of techniques that can be used to
orchestrate the actions and movements of robots operating in the same environment.
Specifically, the ability to effectively coordinate the actions of a \mrs is a key
requirement in several application domains that range from disaster response to
environmental monitoring, military operations, manufacturing and logistics.
In all such domains, coordination has been addressed using various frameworks and
techniques, and there are several survey papers dedicated to categorizing such different
approaches and identifying the most prominent issues when developing a \mrs.
The paper \cite{cooros} presents and evaluates new ROS packages for coordinated
multi-robot exploration. The packages allow completely distributed control and do
not rely on (but allow) central controllers.
Their integration, including application-layer
protocols, allows out-of-the-box installation and execution. The
communication package enables reliable ad hoc communication,
allowing local maps to be exchanged between robots and
merged into a global map (for more detail see Section \ref{ros:glplanner}).
Exploration uses the global map
to spatially spread the robots and decrease exploration time. The
intention of the implementation is to offer basic functionality for
coordinated multi-robot systems and to enable other research
groups to work experimentally on multi-robot systems.
They use the terms ``local'' and ``global'' to distinguish the context of a single robot from
that of the complete multi-robot system.
A local map is the map created by each individual robot, while the global map includes
local maps of all robots. Communication enables the exchange of data. Lastly, coordinated
exploration utilizes communication and the global map to organize the \mrs by assigning
frontiers to robots.
Their contribution is to present ROS packages that enable multi-robot exploration by implementing
the aforementioned required components:
\begin{itemize}
\item ad hoc communication between robots.
\item construction of global maps from local maps.
\item exploration of unknown environments.
\end{itemize}
% professor's paper
In this work \cite{focoo}, the authors focus on \mrs coordination, presenting a survey of recent
work in the area and specifically examining the forms of cooperation and coordination realized
in \mrs.
Robotic systems may range from simple sensors, acquiring and processing data, to
complex human-like machines, able to interact with the environment in fairly complex ways.
Moreover, it is not easy to give a definition of the level of autonomy that is required
for a robot in order to be considered an entity acting in the environment, as opposed
to a simple machine that provides services to the operator.
From an engineering standpoint,
a \mrs can improve the effectiveness of a robotic system either in terms of the performance
in accomplishing certain tasks, or in terms of the robustness and reliability of the system,
which can be increased by modularization.
In fact, \mrs are useful not only when the robots can accomplish different functions, but
also when they have the same capabilities. Even when a single robot can achieve the given task,
the possibility of deploying a team of robots can improve the performance of the overall system.
Technological improvements, both in the hardware and in the associated software,
are two of the key reasons behind the growing interest in \mrs.
The increased availability of complex sensor devices and robotic platforms
in research laboratories favored their development and customization, resulting
in robots equipped with reliable and effective hardware that improves their basic
capabilities. In addition, the software techniques developed for robotic applications
take advantage of the hardware improvements and provide complex and reliable solutions for the basic
tasks that a robot should be able to perform while acting in real-world environments:
localization, path planning, object transportation, object recognition and tracking, etc.
In addition, the work in this area can be classified from several points of view.
Their main motivation is the study and evaluation of the ability to take advantage
of coordination to improve system performance. Therefore,
the classification they propose is focused on the coordination aspects
and thus inspired by the relationships with the field of multi-agent systems.
They propose a taxonomy for classifying the works on \mrs that is characterized by two
groups of dimensions: \textit{coordination dimensions} and \textit{system dimensions}.
For a suitable classification of the works it is important to clearly define the dimensions
that are used:
\begin{itemize}
\item \textit{cooperation level:} is the ability of the system to cooperate in order
to accomplish a specific task. A cooperative system is composed of "robots that operate together
to perform some global task".
\item \textit{knowledge level:} is concerned with the knowledge that each robot in the team
has about its teammates.
\item \textit{coordination level:} is the mechanism used for cooperation, in which the actions performed by each
robotic agent are combined "in such a way that the whole ends up being a coherent and high-performance operation".
The underlying feature is the \textit{coordination protocol}, which is defined as a set
of rules that the robots must follow in order to interact with each other in the environment.
\item \textit{organization level:} is the way the decision system is realized within the \mrs.
It introduces a distinction in the forms of coordination, distinguishing centralized approaches
from distributed ones. In particular, a centralized system has an agent (\textit{leader}) that is in
charge of organizing the work of the other agents; the leader
is involved in the decision process for the whole team, while
the other members can act only according to the directions of
the leader.
Instead, a distributed system is composed of agents which are completely autonomous in the
decision process with respect to each other; in this class of systems a leader does not exist.
\end{itemize}
In this paper they have addressed the recent developments in the field of \mrs, focusing
on those approaches that are targeted to specific applications and motivated by engineering
considerations. Specifically, they have presented a taxonomy with the aim of highlighting
the coordination aspects of the recent proposals in the literature: they define a set of coordination
dimensions for the classification of the approaches to team coordination, together with a set of system dimensions that account
for the design choices that are most relevant to the team organization.
In our system, a central coordinator assigns the actions to execute to all the robots, which know the static environment and carry out their assigned tasks independently.
Another important article \cite{market-based} focuses on coordination for complex tasks.
Complex tasks are tasks that can be solved in many possible ways. Their work is currently
limited to complex tasks that can be decomposed into multiple subtasks related by Boolean logic operators.
They generalize task descriptions into task trees, which allows tasks to be traded
in a market setting at variable levels of abstraction.
The task allocation problem addresses the issue of finding task-to-robot assignments that optimize
global cost objectives.
They address the general problem of allocating complex tasks to a team of autonomous robots.
Complex tasks are tasks that require high-level decision-making or planning,
and may have many potential solution strategies.
Complex tasks are usually identified with problems involving multiple interacting components.
These interactions can come from relationships between subtasks such
as Boolean logic operations or precedence constraints.
Additionally, if there are multiple robots, complex tasks may
require reasoning about interactions between the robots executing them.
Specifically, they look at tasks hierarchically
related by \textit{AND} and \textit{OR} logical operators.
The main contributions of this paper are to show that the complex task allocation problem
can be more efficiently solved by not decoupling the solution into separate allocation
and decomposition phases, and to propose a solution concept that unifies these two stages.
The approach is not optimal, but produces highly efficient solutions in unknown
and dynamic domains using distributed local knowledge and decentralized planning
to continually improve upon global costs.
In contrast to this article, in our system we have simple tasks that are composed into more complex tasks.
The interconnection between the various tasks is given by a heuristic that normalizes the cost of the route
based on the number of objects transported in a composed task.
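As an illustrative sketch (the exact expression is shown here only as an example, not as the
precise formula used in the implementation), such a normalization can be written as
\[
\tilde{c}(T) = \frac{c_{\mathrm{route}}(T)}{n_{\mathrm{obj}}(T)} ,
\]
where $c_{\mathrm{route}}(T)$ is the route cost of a composed task $T$ and $n_{\mathrm{obj}}(T)$
is the number of objects it transports, so that composing more deliveries into a single route
lowers the normalized cost.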
% final sentence
Given our focus on logistic scenarios, here we restrict our attention to coordination
approaches based on optimization, and specifically on task assignment, as this is the most
common framework for our reference application domain.
| {
"alphanum_fraction": 0.8103293808,
"avg_line_length": 66.1149825784,
"ext": "tex",
"hexsha": "6472f2f7b54c7c2795fb179a2f98400d4faba49e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Davidemb/LogisticAgent_ws",
"max_forks_repo_path": "LaTex/Tesi/chapter/background_related_works.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Davidemb/LogisticAgent_ws",
"max_issues_repo_path": "LaTex/Tesi/chapter/background_related_works.tex",
"max_line_length": 187,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Davidemb/LogisticAgent_ws",
"max_stars_repo_path": "LaTex/Tesi/chapter/background_related_works.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3807,
"size": 18975
} |
\documentclass[12pt]{article}
\usepackage{fullpage}
\usepackage{microtype} % microtypography
\usepackage{array}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{amsthm}
%% Header
\usepackage{fancyhdr}
\fancyhf{}
\fancyhead[C]{CS 136 - 2022s - Checkpoint2 Submission}
\fancyfoot[C]{\thepage} % page number
\renewcommand\headrulewidth{0pt}
\pagestyle{fancy}
\usepackage[headsep=0.5cm,headheight=2cm]{geometry}
%% Hyperlinks always blue, no weird boxes
\usepackage[hyphens]{url}
\usepackage[colorlinks=true,allcolors=black,pdfborder={0 0 0}]{hyperref}
%%% Doc layout
\usepackage{parskip}
\usepackage{times}
%%% Write out problem statements in blue, solutions in black
\usepackage{color}
\newcommand{\officialdirections}[1]{{\color{blue} #1}}
%%% Avoid automatic section numbers (we'll provide our own)
\setcounter{secnumdepth}{0}
\begin{document}
~~\\ %% add vert space
{\Large{\bf Student Names: TODO}}
{\Large{\bf Collaboration Statement:}}
Turning in this assignment indicates you have abided by the course Collaboration Policy:
\url{www.cs.tufts.edu/comp/136/2022s/index.html#collaboration-policy}
Total hours spent: TODO
We consulted the following resources:
\begin{itemize}
\item TODO
\item TODO
\item $\ldots$
\end{itemize}
\newpage
These are the official instructions for checkpoint 2. You can find instructions on how to submit at \url{www.cs.tufts.edu/comp/136/2022s/checkpoint2.html}
\textbf{Please consult the full project description at \url{https://www.cs.tufts.edu/comp/136/2022s/project.html} in addition to this document when working on this checkpoint. It gives details on what we expect.}
\section{Applying Model to Dataset}
In this section, you should describe the implementation of your model from checkpoint 1. If you have chosen to implement a model that we did not cover in a Coding Practice (CP), please describe any design choices you made when implementing it. If you used code from a CP, please specify exactly which part of the code. Your implementation should be your own (do not just use an existing package), but if you build off of a CP code base, you can use the implementation you submitted for the assignment, including the starter code. If you are using a model not covered by a CP, but the starter code for a CP is useful for you, you are welcome to use it, but please describe how you have done so.
Be sure to describe any issues you ran into when applying your model/learning method to your dataset, as well as how you have addressed them. For example, did you have trouble scaling the model to run in a reasonable amount of time on your dataset? If so, what changes to either the code or dataset did you make and what was the outcome?
Please additionally submit the code for this assignment in the separate Checkpoint 2 code submission.
\textbf{Section grading rubric:}
\begin{itemize}
\item Describe how you have implemented your model (3 points)
\item Describe any bottlenecks you ran into (3 points)
\item Describe how you addressed bottlenecks (3 points)
\item Submitted code to implement the model and generate results in the following section (10 points)
\end{itemize}
\section{Evaluating Hypotheses from Checkpoint 1}
In this section, you should describe the outcome of 3 of your hypotheses from checkpoint 1. Separately for each of your 3 hypotheses, please include the information described in the example hypothesis section below.
\subsection{Hypothesis 1 (Example)}
Each hypothesis should include no more than 1/2 page of text (excluding your result).
\begin{itemize}
\item Briefly reiterate your hypothesis.
\item Describe how you evaluated your hypothesis in 2-3 sentences. Be sure to specify your performance metric and any design choices. For example, if you computed likelihood, be sure to specify whether it is computed on a held-out test set, and if it is, how the test set was held-out (e.g. random sampling, instances with a specific property, etc.)
\item Include a specific result generated by the code you submit, relating to the performance metric. This can be a graph or a table of numbers. Your graph should include a title, a legend (where applicable), and clear labels on the axes.
\item Describe the behavior of the result in 1-2 sentences. This should include a description of what you see on the graph (for example, line A is higher than line B in the left half of the graph).
\item Analyze the implications of the result in approximately 1 paragraph. This should link back to the specific dataset and model/learning method properties you included in your original hypothesis.
\item Was your hypothesis correct? Spend 2-3 sentences reflecting on why that might be the case.
\end{itemize}
\textbf{Subsection grading rubric (for each hypothesis):}
\begin{itemize}
\item Describe implementation details of how you evaluated your hypothesis (1 point)
\item Include a specific result linked to the evaluation of your hypothesis (3 points)
\item Is your result coherently presented (axis labels, titles, legends etc) (1 point)
\item Description of the behavior of result (2 points)
\item Analysis of implication of result (3 points)
\item Link back to hypothesis: why was or wasn't it right? (2 points)
\end{itemize}
\section{Proposing an Upgrade to your Model or Learning Method}
This section should be no more than 1/2 page total.
Based on your results from the previous section, describe an idea for an upgrade to your model or learning method. First, spend 2-3 sentences describing the upgrade. You should be specific about which of the 4 options your upgrade falls into (see the project description for the full list). Then, write a short list (3-4 elements at most) of the changes you will need to make to implement your upgrade (these don't need to be exhaustive, we just want to get you thinking about what needs to happen, and provide feedback on how to approach it). Finally, explain why this upgrade might address a problem found in your previous hypotheses (2-3 sentences). If you would prefer to focus on a different problem, that is also ok, but make sure to describe the problem.
This section is largely to help us provide you with feedback and resources since you will have to include a more specific description of the upgrade in the next project checkpoint. The more detail you include here, the more helpful our feedback will be. While you are only required to submit one idea, you can submit up to 3 for feedback.
\textbf{Subsection grading rubric:}
\begin{itemize}
\item Which option does your upgrade fall under (1 point)
\item Briefly describe your proposed upgrade (3 points)
\item Short list of implementation changes that need to be made for upgrade (2 points)
\item Explain why upgrade might be helpful (3 points)
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7777615216,
"avg_line_length": 56.0245901639,
"ext": "tex",
"hexsha": "39647f962c8d71af0710dbc89fb309789f1ab235",
"lang": "TeX",
"max_forks_count": 12,
"max_forks_repo_forks_event_max_datetime": "2022-03-28T21:21:25.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-12-07T19:46:50.000Z",
"max_forks_repo_head_hexsha": "ab8df93092dba940b5cf95437bc7aeb906c8964b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "martin-buck/cs136-22s-assignments",
"max_forks_repo_path": "checkpoint2/chkpt2_template.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ab8df93092dba940b5cf95437bc7aeb906c8964b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "martin-buck/cs136-22s-assignments",
"max_issues_repo_path": "checkpoint2/chkpt2_template.tex",
"max_line_length": 769,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ab8df93092dba940b5cf95437bc7aeb906c8964b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "martin-buck/cs136-22s-assignments",
"max_stars_repo_path": "checkpoint2/chkpt2_template.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1645,
"size": 6835
} |
%example.tex
\documentclass[12pt,a4paper]{report}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{lazylatex}
\usepackage{lipsum}
\usepackage{amsmath}
\tcbuselibrary{documentation}
\begin{document}
%
\tableofcontents
\section{Introduction}
\lib{lazylatex} is a {\LaTeX} package inspired by sphinx-rtd-theme. It is built with {\LaTeX} packages such as tcolorbox, minted, tikz, etc. Not all text elements are a simulation of rtd-theme; it also adopts text element styles from different sources. Please consult the \fname{example.tex} source file of this \emph{pdf} for usage and examples of some of the commands.
\section{Packages Used}
\begin{itemize}
\item \lib{geometry} - for page dimension setup
\item \lib{xcolor} - for better colour definitions
\item \lib{tcolorbox} - for inline markup and admonitions
\item \lib{minted} - for code blocks
\item \lib{tikz} - additional TeX features (rotation and placement), also for graphics
\item \lib{wrapfig} - for the floating area (size and placement) when creating a sidebar
\item \lib{tabularx}, \lib{array}, \lib{colortbl} - for styling tables
\item \lib{hyperref} - for link styling and viewer setup.
\item \lib{setspace} - for line spacing and indented blocks
\end{itemize}
\section{Structural Elements}
Uses native {\LaTeX} structural elements (Section, Subsection, etc.). Paragraphs are indented, even the first paragraph in each section; this can easily be turned off using the \pre{setlength} {\LaTeX} command
\section{Paragraph Level Markup}
\subsection{Inline Markup}
Documents contain text and may contain inline markup: \pre{inline literals}, \emph{emphasis}, \textbf{strong emphasis}, stand-alone links (\url{https://www.wikibooks.org}), external hyperlinks (\href{https://www.wikibooks.org}{wikibooks home}), internal references (Eq:\ref{eqn:euler}, Code:\ref{myCodeLabel} with \nameref{myCodeLabel}) and \guilabel{some action}. For a software library or package use \lib{PackageName}, and file names are indicated with \fname{filename.ext}
\subsection{Math}
As usual,
\begin{equation}
e^{i\pi}+1=0
\label{eqn:euler}
\end{equation}
\subsection{Line Blocks}
Line blocks are styled in two different ways, \emph{indent blocks} and \emph{hanging blocks}. The former takes an optional number in units of \emph{cm}, a measure of how much indentation is needed. Examples:\\
\subsubsection{Indent Block}
\begin{docCommand}%
[]{indentBlock}{\oarg{optional left width}\marg{content}}\tcbdocmarginnote{syntax}
Where \meta{optional left width} in \emph{cm} is the width with which the block is shifted to right.
\end{docCommand}
\noindent
\textbf{Example:}
\indentBlock{}{is is an indent block tas. Mauris ut leo. Cras viverra this is an indent block tas. Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu tellus sit}
\subsubsection{Hanging Block}
\begin{docCommand}%
[]{indentHang}{\marg{content}}
Where \meta{content} is the text.
\end{docCommand}
\noindent
\textbf{Example:}
\indentHang{this is an indent block tas. Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu tellus sit amet tortor gravida placerat. Integer sapien est, iaculis in, preti}\\
\begin{note}
Both \cs{indentBlock} and \cs{indentHang} commands must precede a blank line or new line \LaTeX\ character.
\end{note}
\subsection{Code Blocks}
Three styles of code blocks are possible in \lib{lazylatex}: 1) \pre{literal} code blocks, 2) \pre{code} blocks with syntax highlighting and line numbers, 3) \pre{lazycode} with the properties of \pre{code} plus caption and label. Examples:
\noindent
\textbf{normal code block}
\begin{code}{go}
package main
import "fmt"
func main() {
var x int = 7
fmt.Println(x)
fmt.Println(&x)
}
\end{code}\\
\noindent
\textbf{literal code block}\\
If we run the above program
\begin{literal}
>> go run pointers.go
7
0xc00001c0a8
\end{literal}\\
\noindent
\textbf{captioned code block}
\begin{lazycode}[Code Caption,label={myCodeLabel},nameref={code caption}]{go}
package main
import "fmt"
func main() {
var x int = 73
var xptr *int = &x
fmt.Println(x)
fmt.Println(&x)
fmt.Println(xptr)
fmt.Println(*xptr)
}
\end{lazycode}
\subsection{Admonitions}
There are four admonitions \pre{note}, \pre{tip}, \pre{warning} and \pre{error}. They all take one optional argument which is the title of the admonition. If not given they have a default value of \textbf{!Note}, \textbf{!Tip}, \textbf{!Warning} and \textbf{!Error} respectively. Examples:\\
\begin{note}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris. Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices.
\end{note}
\begin{tip}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris. Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem.
\end{tip}
\begin{warning}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris. Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque.
\end{warning}
\begin{error}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris. Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque.
\end{error}
\begin{note}[Custom Title]
Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices.
\end{note}
\begin{tip}[This Hint is cool]
Donec vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem.
\end{tip}
\subsection{Sidebar}
The \pre{sidebar} feature is rendered with the \lib{wrapfig} {\LaTeX} package, which controls where the sidebar is placed in the document and how wide it is. Adding a caption and label is also possible. Text will wrap around the sidebar. \textbf{Example}:\\
\lipsum[1]
\begin{wrapfigure}{r!}{0.6\textwidth}
\begin{sidebar}{My Heading}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris. Nam arcu lib ero, nonummy eget, consectetuer id, vulputate a, magna. Donec vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.
\end{sidebar}
\end{wrapfigure}
\lipsum[2]
\section{Lists \& Tables}
\label{listntables}
Numbered and bulleted lists work pretty well in {\LaTeX}, but it does not provide options for \textbf{Field lists} and \textbf{Hlists}. \lib{lazylatex} provides the \pre{llist} environment to create Field Lists and Hlists. Tables are created with \pre{lazytable}. Both environments take two arguments: first \oarg{optional width}, which is the width of the list or table on the page, and second \marg{alignment}, the alignment of the elements in the table or list.\\
\begin{tip}[Command]
\begin{docEnvironment}%
[doclang/environment content=content]%
{llist or lazytable}{\oarg{optional width}\marg{alignment}}
Where:\\
\oarg{optional width} is specified in the units of \cs{pagewidth}\\
\noindent
\marg{alignment} is specified using a combination of \pre{X},\pre{Y},\pre{Z}, \pre{L\marg{column width}}, \pre{R\marg{column width}}, \pre{C\marg{column width}}.\\
\noindent
\marg{column width} option for \pre{L,R,C} are specified with a real number indicating the width each column must take within the specified \oarg{optional width}\\
\noindent
\pre{X} and \pre{L} - align the column content to left, \pre{Y} and \pre{R} aligns the content to right and \pre{Z} and \pre{C} aligns the content to center
\end{docEnvironment}
\end{tip}
\subsection{Native {\LaTeX} List}
\begin{itemize}
\item Item A
\item Item B
\item Item C
\end{itemize}
\subsection{Hlists}
\begin{tip}[Example]
\begin{docEnvironment}%
[doclang/environment content=content]%
{llist}{\oarg{optional width}\marg{alignment}}
Where \meta{content} is a set of \cs{litem} separated by \&. \oarg{optional width} is [0.5\cs{textwidth}] and \marg{alignment} is \brackets{XXX}\\
Refer Section ~\ref{listntables} for values each option can take.
\end{docEnvironment}
\end{tip}
\bigskip
\begin{llist}[0.5\textwidth]{XXX}
\litem{item 00} & \litem{item 01} & \litem{item 02}\\
\litem{item 10} & \litem{item 11} & \litem{item 12}\\
\litem{item 20} & \litem{item 21} & \litem{item 22}\\
\end{llist}
\subsection{Field List}
\begin{tip}[Example]
\begin{docEnvironment}%
[doclang/environment content=content]%
{llist}{\oarg{optional width}\marg{alignment}}
Where \meta{content} is a set of text separated by \&. \oarg{optional width} is default i.e \cs{textwidth} and \marg{alignment} is \brackets{L\brackets{0.35}X}\\
Refer Section ~\ref{listntables} for values each option can take.
\end{docEnvironment}
\end{tip}
\bigskip
\begin{llist}{L{0.35}X}
\textbf{Author:} & Anoop Chandran \\
\textbf{Address:} & 123 Example Street, Example, Example \\
\textbf{Contact:} & strivetobelazy@gmail.com \\
\textbf{Authors:} & Me; Myself; I \\
\textbf{Organization:} & humankind \\
\textbf{Date:} & \today \\
\textbf{Status:} & This is a ``work in progress" \\
\textbf{Revision:} & Revision: 100001 \\
\textbf{Version:} & 0.01 \\
\textbf{Copyright:} & This document has been placed in the public domain. You may do with it as you wish.
\end{llist}
\subsection{Tables}
Three examples are shown for tables; however, one can combine the features of all the tables in different ways.
\begin{tip}[Example 1]
\begin{docEnvironment}%
[doclang/environment content=content]%
{lazytable}{\oarg{optional width}\marg{alignment}}
Where \meta{content} is a set of text separated by \&. \oarg{optional width} is default i.e \cs{textwidth} and \marg{alignment} is \brackets{L\brackets{1}$\vert$ R\brackets{0.5}$\vert$ R\brackets{0.5}$\vert$ C\brackets{2}}, argument sum, $1+0.5+0.5+2=$num columns\\
Refer Section ~\ref{listntables} for values each option can take.
\end{docEnvironment}
\end{tip}
\bigskip
\tcbdocmarginnote{variable width columns}
\begin{lazytable}{L{1}|R{0.5}|R{0.5}|C{2}}
label 00 & label 01 & label 02 & label 03 \\
item 10 & item 11 & item 12 & item 13 \\
item 20 & item 21 & item 22 & item 23 \\
\end{lazytable}
The following examples have captions; to do this one has to wrap \pre{lazytable} inside a \pre{table} environment. The caption and label are placed inside this \pre{table} environment but outside \pre{lazytable}.\\
\begin{tip}[Example 2]
\begin{docEnvironment}%
[doclang/environment content=content]%
{lazytable}{\oarg{optional width}\marg{alignment}}
\oarg{optional width} is default i.e \cs{textwidth} and \marg{alignment} is \brackets{X$\vert$$\vert$Y$\vert$Y$\vert$Y$\vert$Y$\vert$}
Refer Section ~\ref{listntables} for values each option can take.
\end{docEnvironment}
\end{tip}
\bigskip
\tcbdocmarginnote{fixed width columns}
\begin{table}[h!]
\begin{lazytable}{X||Y|Y|Y|Y|Y}
\bf Group & \bf One & \bf Two & \bf Three & \bf Four & \bf Sum\\
\hline
\hline
Red & 1000.00 & 2000.00 & 3000.00 & 4000.00 & 10000.00\\
\hline
Green & 2000.00 & 3000.00 & 4000.00 & 5000.00 & 14000.00\\
\hline
Blue & 3000.00 & 4000.00 & 5000.00 & 6000.00 & 18000.00\\
\hline
Sum & 6000.00 & 9000.00 & 12000.00 & 15000.00 & 42000.00
\end{lazytable}
\caption{this is a table}
\label{mytable}
\end{table}
To align a table with reduced width at the center of the page, wrap the \pre{lazytable} inside a \pre{center} environment.\\
\begin{tip}[Example 3]
\begin{docEnvironment}%
[doclang/environment content=content]%
{lazytable}{\oarg{optional width}\marg{alignment}}
\oarg{optional width} is 0.6\cs{textwidth} and \marg{alignment} is \brackets{Z$\vert$Z$\vert$Z$\vert$Z}\\
Refer Section ~\ref{listntables} for values each option can take.
\end{docEnvironment}
\end{tip}
\bigskip
\begin{table}[h!]
\begin{center}
\begin{lazytable}[0.6\textwidth]{Z|Z|Z|Z}
\bf Group & \bf One & \bf Two & \bf Three\\
\hline
\hline
Red & 1000.00 & 2000.00 & 10000.00\\
\hline
Green & 2000.00 & 3000.00 & 14000.00\\
\hline
Blue & 3000.00 & 4000.00 & 18000.00\\
\hline
Sum & 6000.00 & 9000.00 & 42000.00\\
\hline
Blue & 3000.00 & 4000.00 & 18000.00\\
\hline
Gray & 3000.00 & 4000.00 & 18000.00\\
\end{lazytable}
\caption{this is a table2}
\label{mytable2}
\end{center}
\end{table}
%
\noindent
\textbf{Example 4:}\\
\begin{table}[h!]
\begin{lazytable}{L{1.25}|C{0.75}|C{0.5}|C{0.5}}
\bf Header row,(header rows optional) & \bf Header 2 & \bf Header 3 & \bf Header 4\\
\hline\hline
body row 1, column 1 & column 2 & column 3 & column 4\\
\hline
body row 2 & \multicolumn{3}{c}{Cells may span columns}\\
\hline
body row 3 & & & \\
\hline
body row 4 & & & \\
\hline
body row 5 & \multicolumn{3}{c}{Cells may also be empty}\\
\end{lazytable}
\caption{this is a table2}
\label{mytable2}
\end{table}
\end{document}
| {
"alphanum_fraction": 0.7326261095,
"avg_line_length": 43.5754716981,
"ext": "tex",
"hexsha": "54dbbd7375d590ebde6121864821b293845138a0",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-11-17T02:20:57.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-07-22T00:43:51.000Z",
"max_forks_repo_head_hexsha": "e642184e0dc81cf4a8d562ec7f5b198efa7c6133",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gaoajia/lazylatex",
"max_forks_repo_path": "docs/example.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e642184e0dc81cf4a8d562ec7f5b198efa7c6133",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gaoajia/lazylatex",
"max_issues_repo_path": "docs/example.tex",
"max_line_length": 475,
"max_stars_count": 17,
"max_stars_repo_head_hexsha": "a638b9dce3b1e09f328d3672dfcafd268e3b3572",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "anoopkcn/lazylatex",
"max_stars_repo_path": "docs/example.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-16T07:25:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-06-14T08:56:32.000Z",
"num_tokens": 4573,
"size": 13857
} |
\chapter{\label{chapter:Propagators}GMAT's Propagators}
\chapauthor{Darrel J. Conway}{Thinking Systems, Inc.}
Chapter~\ref{chapter:PropagatorStates} describes the classes used to represent the state of the
objects in a Mission Control Sequence, but does not define the pieces that perform the time
evolution for that state. Those components -- the propagators -- are described in this chapter.
\section{The Propagator Classes}
The components that are instrumental in time evolution are shown in
Figure~\ref{figure:PropagatorClasses}.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{Images/ThePropagatorClasses.eps}
\caption[Classes Used to Propagate State Data]{\label{figure:PropagatorClasses}Classes Used
to Propagate State Data. Base classes are shown in orange. Classes that instantiate the objects
used in numerical integration are in green. Analytic propagation elements are in blue. Classes
that are used to drive the propagation process, and other base level classes, are shown in yellow.}
\end{center}
\end{figure}
The numerical integration portion of the propagation system, shown in green in the figure, consists
of a differential equation model and numerical integrator paired to perform the precision
integration. The ODEModel class is a container class that accumulates all of the differential
equation data modeled in the mission, and reports the resulting changes in the elements of the
state vector. Details of the force model components of the differential equation model are described
in Chapter~\ref{chapter:ForceModel}. Other differential equation models are described separately.
\section{The Propagator Base Class}
\subsection{Class Attributes}
\begin{itemize}
\item \textbf{PropVector *thePropVector}
\end{itemize}
\subsection{Class Methods}
\begin{itemize}
\item \textbf{bool Initialize()}
\end{itemize}
\section{Numerical Integration}
\subsection{\label{section:TheODEModel} The Derivative Models}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{Images/ThePhysicalModelClasses.eps}
\caption[The Derivative Model Classes]{\label{figure:PhysicalModelClasses}The Derivative Model
Classes. This figure shows the classes used to provide derivative information to GMAT's
integrators.}
\end{center}
\end{figure}
\subsection{Initialization of the Derivative Model}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{Images/PropagatorInitialization.eps}
\caption[Propagator Initialization]{\label{figure:PropagatorInitialization}Propagator
Initialization. This sequence diagram shows the process used to prepare a propagator for use in
the Mission Control Sequence.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{Images/InitializationoftheDerivativeModel.eps}
\caption[Derivative Model Initialization]{\label{figure:DerivativeModelInitialization}Derivative
Model Initialization. This sequence diagram shows the process used to build the data that is
numerically integrated during propagation.}
\end{center}
\end{figure}
\subsection{Finalizing Initialization: The PrepareToPropagate() Method}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{Images/ThePrepareToPropagateMethod.eps}
\caption[Final Propagator Initialization:PrepareToPropagate())]
{\label{figure:PrepareToPropagate}Final Propagator Initialization:
PrepareToPropagate()}
\end{center}
\end{figure}
\subsection{Propagation}
\subsection{Completing Propagation}
\section{Analytic Propagation}
\section{File Based Propagation}
\section{Propagation Examples}
\subsection{\label{section:IntegratorExample}A Numerical Integration Example}
\subsection{\label{section:SpiceExample}A SPICE File Propagation Example}
\subsection{\label{section:MixedModePropagation}A Mixed Mode Example}
| {
"alphanum_fraction": 0.8125329121,
"avg_line_length": 33.9107142857,
"ext": "tex",
"hexsha": "810a3207182bd16b9ced409ddc0d9d1bc9489264",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z",
"max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f",
"max_forks_repo_licenses": [
"NASA-1.3"
],
"max_forks_repo_name": "ddj116/gmat",
"max_forks_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Propagators.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f",
"max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z",
"max_issues_repo_licenses": [
"NASA-1.3"
],
"max_issues_repo_name": "ddj116/gmat",
"max_issues_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Propagators.tex",
"max_line_length": 100,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Randl/GMAT",
"max_stars_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Propagators.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z",
"num_tokens": 905,
"size": 3798
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UMB-CS110-2015S: Introduction to Computing
% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/UMB-CS110-2015S
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def \topDirectory {.}
\def \texDirectory {\topDirectory/src/main/tex}
\documentclass[12pt,letterpaper,twoside]{article}
\usepackage{\texDirectory/template/style/directives}
\usepackage{\texDirectory/template/style/assignment}
\input{\texDirectory/template/config}
\begin{document}
\doc{title}{Lab Session Problems}
\doc{points}{0}
\prepare{header}
\subsection*{Week 5}
\hfill \textbf{March 5, 2015}
Consider a two-dimensional array \texttt{matrix} with $11$ rows and $11$ columns, corresponding to a Cartesian plot with x-axis $[-10, 10]$ and y-axis $[-10, 10]$. Initialize this matrix such that each element is zero. Imagine that the matrix corresponds to an $11$ meter by $11$ meter map that represents the footsteps of a subject in the real world: every element of the matrix in which the subject is or has been is marked with one, and every square meter the person has not visited yet is marked with zero. Imagine that someone is initially at position \texttt{matrix[6][6]}.
\begin{enumerate}[itemsep=0pt]
\item At every instant of time, a random number between $1$ and $4$ is generated and, based on the generated number, the person decides to go up, right, down or left. Generate such a number and imagine the subject has moved in the corresponding direction. Show the current footsteps of the person in a program \texttt{MagicWorld1.java} (a minimal sketch of this step is given after the list).
\item Write a program \texttt{MagicWorld2.java} that computes how many moves it would take until the person reaches an element on the edge of the world.
\item Modify the previous program into \texttt{MagicWorld3.java} such that when the person moves in one direction, they do not go back in that direction on the next step. How many moves would it take in this scenario for the person to reach the edge of the world?
\item Try to modify your program into \texttt{MagicWorld4.java}, presenting an algorithm that can maximize the number of moves it takes for the person to reach the edge of the world.
\end{enumerate}
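A minimal sketch of the first task, shown only as an illustration (the class layout and variable names below are not prescribed by the assignment), could look as follows:
\begin{verbatim}
import java.util.Random;

public class MagicWorld1 {
    public static void main(String[] args) {
        int[][] matrix = new int[11][11];  // every square starts unvisited (zero)
        int row = 6, col = 6;              // subject starts at matrix[6][6]
        matrix[row][col] = 1;              // mark the starting square as visited
        Random rand = new Random();
        int move = rand.nextInt(4) + 1;    // random number between 1 and 4
        if (move == 1)      row--;         // up
        else if (move == 2) col++;         // right
        else if (move == 3) row++;         // down
        else                col--;         // left
        matrix[row][col] = 1;              // mark the new square as visited
        for (int i = 0; i < 11; i++) {     // print the current footsteps
            for (int j = 0; j < 11; j++) {
                System.out.print(matrix[i][j] + " ");
            }
            System.out.println();
        }
    }
}
\end{verbatim}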
\end{document}
| {
"alphanum_fraction": 0.7371331348,
"avg_line_length": 67.1714285714,
"ext": "tex",
"hexsha": "9f873535090ea0ac440a04f58b677cf2216eb593",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "UMB-CS110-2015S/Assignments",
"max_forks_repo_path": "src/main/tex/labs/l05.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_issues_repo_issues_event_max_datetime": "2019-03-17T16:39:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-08-22T15:44:45.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "UMB-CS110-2015S/Assignments",
"max_issues_repo_path": "src/main/tex/labs/l05.tex",
"max_line_length": 582,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "UMB-CS110-2015S/Assignments",
"max_stars_repo_path": "src/main/tex/labs/l05.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:40.000Z",
"num_tokens": 559,
"size": 2351
} |
\section{Power Requirements}
\index{Power Requirements}
This section deals with the power budget required to run a WebBrick.
\subsection{Power Supply}
\index{Power Supply}
WebBricks should be run from a power supply that delivers 12.6V to 18V. 12.6V is a minimum to achieve a full 0-10V range on
the analogue outputs.
A quiescent WebBrick will consume 55-60mA at 12.6V, however the WebBrick in real use will be supplying power to a range of external items
including:
\begin{itemize}
\item{\bf LEDs} The WebBrick may drive up to 8 mimic LEDs, each consuming 5mA. {\it 40mA}
\item{\bf Relays} There are two relays, each using 30mA to hold the contacts closed. {\it 60mA}
\item{\bf Temperature Sensors} There may be up to 5 sensors, these are driven periodically. {\it 5mA peak}
\item{\bf Analogue Outputs} There are four buffered 0-10V outputs each capable of supplying 20mA. {\it 80mA}
\item{\bf General Output Drive} The digital outputs can each supply 5mA, although only four are presented as TTL.
If a rotary encoder is connected, it will use 2mA. Driving the triac and open collector gates requires 2mA each. {\it 40mA}
\end{itemize}
Total power budget, with all outputs driven and all sensors connected: 280mA.
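As a cross-check, assuming the lower quiescent figure of 55mA is included in this total, the items above add up as 55 + 40 + 60 + 5 + 80 + 40 = 280mA.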
\subsubsection{Back Feed}
\index{Back Feed}
Because the WebBrick consumes so little current, it is possible to inadvertently
power it through the coils of external relays connected to
the open collector outputs. This only becomes an issue if one needs to fully power
down a WebBrick. To avoid this situation we suggest
that the +ve side of relay coils is driven from the same power supply that drives the WebBricks.
"alphanum_fraction": 0.7355658199,
"avg_line_length": 43.3,
"ext": "tex",
"hexsha": "9333bbe0f095c27bf353a9e95bca6b9a8a90f17b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cf81416653f091bacfbf29eb6e4507db33ac0ca6",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "LawrenceK/webbrick",
"max_forks_repo_path": "Documentation/PowerRequirements.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cf81416653f091bacfbf29eb6e4507db33ac0ca6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "LawrenceK/webbrick",
"max_issues_repo_path": "Documentation/PowerRequirements.tex",
"max_line_length": 142,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "cf81416653f091bacfbf29eb6e4507db33ac0ca6",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "LawrenceK/webbrick",
"max_stars_repo_path": "Documentation/PowerRequirements.tex",
"max_stars_repo_stars_event_max_datetime": "2019-01-21T13:10:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-21T13:10:49.000Z",
"num_tokens": 450,
"size": 1732
} |
%% LaTeX2e Template by Stephen Iota (https://stepheniota.com/)
%% last updated: May 2019
\documentclass{article}
\usepackage[a4paper,margin=2cm]{geometry}
\usepackage[utf8]{inputenc}
%\usepackage[noadjust]{cite}
\usepackage{lipsum}
\usepackage{amsmath,amssymb,amsfonts,physics}
\usepackage{mathtools} % for boxed answers in align environments
\usepackage{graphicx} % \includegraphics{ }
\usepackage[shortlabels]{enumitem} % change labels in enum/item environments
\usepackage[dvipsnames]{xcolor} % colored links=
%\usepackage[big]{titlesec} % [small,medium,big] << controls size of *section text
%\usepackage{fancyhdr} %http://tug.ctan.org/tex-archive/macros/latex/contrib/fancyhdr/fancyhdr.pdf
\usepackage[
colorlinks=true,
citecolor=green!50!black,
linkcolor=Red,
urlcolor=green!50!black,
hypertexnames=false]{hyperref}
%%%%%%%%%%%%%%%%%%
%% New Commands %%
%%%%%%%%%%%%%%%%%%
\newcommand{\email}[1]{\texttt{\href{mailto:#1}{#1}}}
\newcommand{\HH}[0]{\ensuremath{\mathcal{H}}}
%\newcommand{\ave}[1]{$\langle #1 \rangle$}
\renewcommand{\d}[1]{\ensuremath{\operatorname{d}\!{#1}}}
\newenvironment{question}[0]{
\vspace{2mm}
\noindent
\itshape
}
\newenvironment{solution}[0]{
\vspace{2mm}
\noindent
\textbf{Solution:}
}
%%%%%%%%%%%%%%%%%%
%% Front Matter %%
%%%%%%%%%%%%%%%%%%
%\pagenumbering{gobble} % no page numbers
\graphicspath{{figures/}} % set directory for figures
%\setcounter{section}{-1} % start with section 0
\numberwithin{equation}{section}
%%%%%%%%%%%%%
%%% Title %%%
%%%%%%%%%%%%%
\begin{document}
\begin{center}
{\Large \textsc{Statistical Mechanics}: \textbf{Problem Set 3}}
\end{center}
\vspace{.5mm}
%%%%%%%%%%
%% INFO %%
%%%%%%%%%%
\begin{tabular}{rl}
\textsc{Name}: & Stephen Iota (\email{siota001@ucr.edu})
\\
\textsc{Course}: & Physics 133 (Spring 2019), Prof.~Kuhlman
\\
\textsc{Date}: & \today
\end{tabular}
\vspace{2mm}
%%%%%%%%%%%%%%
%% PROBLEMS %%
%%%%%%%%%%%%%%
\noindent
Sethna problems 3.6, 3.8, 5.1 and 5.5. All final answers are \boxed{\text{boxed}}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% CONNECTING TWO MACRO SYSTEMS %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Connecting two macroscopic systems}
An isolated system with energy $E$ is composed of two macroscopic subsystems, each of fixed volume $V$ and number of particles $N$. The subsystems are weakly coupled, so the sum of their energies is $E_1 + E_2 = E$. The volume of the energy surface of a system with the Hamiltonian $\mathcal{H}$ is given by
\begin{align}
\Omega(E) &= \int \frac{\d{\mathbb{P}}\d{\mathbb{Q}}}{h^{3N}} \ \delta(E - \mathcal{H}(\mathbb{P},\mathbb{Q}))
\\
&= \int \frac{\d{\mathbb{P}_1}\d{\mathbb{Q}_1}}{h^{3N_1}}
\frac{\d{\mathbb{P}_2}\d{\mathbb{Q}_2}}{h^{3N_2}} \
\delta(E - (\mathcal{H}_1(\mathbb{P}_1,\mathbb{Q}_1)+\mathcal{H}_2(\mathbb{P}_2,\mathbb{Q}_2))).
\label{omega}
\end{align}
\begin{question}
Derive the formula $\Omega(E) = \int \d{E_1} \ \Omega_1(E_1) \Omega_2(E-E_1)$ for the volume of the energy surface of the whole system using Dirac $\delta$-functions.
\end{question}
\begin{solution}
Beginning with the definition for the volume of the energy surface of a system with Hamiltonian $\HH$ [\ref{omega}], we insert the identity $ \int \d{E_1} \ \delta(E_1 - \HH_1) = 1$. Grouping the integrals together, we notice the definition for $\Omega_1(E_1)$ where the delta-function picks out $E_1 = \HH_1$. We then do the same for the next integral, where the delta-function now picks out $E - E_1 = \HH_2$.
\begin{align}
\Omega(E) &= \iiint \d{E_1} \ \delta(E_1 - \HH_1) \
\frac{\d{\mathbb{P}_1}\d{\mathbb{Q}_1}}{h^{3N_1}}
\frac{\d{\mathbb{P}_2}\d{\mathbb{Q}_2}}{h^{3N_2}} \
\delta(E - (\HH_1+\HH_2))
\\
&= \int \d{E_1}
\int \frac{\d{\mathbb{P}_2}\d{\mathbb{Q}_2}}{h^{3N_2}}\ \delta(E - (\HH_1+\HH_2))
\int \frac{\d{\mathbb{P}_1}\d{\mathbb{Q}_1}}{h^{3N_1}}\ \delta(E_1 - \HH_1)
\\
&= \int \d{E_1} \ \Omega_1(E_1)
\int \frac{\d{\mathbb{P}_2}\d{\mathbb{Q}_2}}{h^{3N_2}}\delta(E - E_1 - \HH_2)
\\
\Aboxed{
\Omega(E) &= \int \d{E_1} \ \Omega_1(E_1) \ \Omega_2(E - E_1)
}
\end{align}
\end{solution}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% MICROCANONICAL ENERGY FLUCT %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Microcanonical energy fluctuations}
For two subsystems with energy $E_1$ and $E_2 = E - E_1$ the probability density of $E_1$ is a Gaussian with variance
\begin{align}
\sigma_{E_1}^{2} &= -k_B/(\pdv[2]{S_1}{E_1} + \pdv[2]{S_{2}}{E_2}). \label{variance}
\end{align}
\begin{question}
\textnormal{\textbf{(a)}}
Show that
\begin{align}
\frac{1}{k_B} \pdv[2]{S}{E} &= - \frac{1}{k_BT} \frac{1}{Nc_vT}
\label{fluctuation}
\end{align}
where $c_v$ is the inverse of the total specific heat at constant volume.
\end{question}
\begin{solution}
The equilibrium entropy is $ S = k_B \log(\Omega(E)) $ where temperature is defined as $ 1/T \equiv \partial S / \partial E.$ Using some sleight of hand Leibniz notation\footnote{Mathematicians beware!}:
\begin{align}
\frac{1}{k_B}\pdv[2]{S}{E} &= \frac{1}{k_B}\pdv{E}\pdv{S}{E}
\\
&= \frac{1}{k_B}\pdv{E} \frac{k_B}{\Omega}\pdv{\Omega}{E}
\\
&= \frac{1}{k_B}\pdv{E} \frac{1}{T}
= \frac{1}{k_B} \pdv{E} \pdv{T}{T} \frac{1}{T}
\\
&= \frac{1}{k_B} \pdv{T}{E} \pdv{\frac{1}{T}}{T}
= -\frac{1}{k_B} \frac{1}{Nc_v} \frac{1}{T^2}
\\
\Aboxed{\frac{1}{k_B}\pdv[2]{S}{E} &= -\frac{1}{k_B T} \frac{1}{Nc_vT}}
\end{align}
where we made use of the specific heat $Nc_v \equiv \partial E / \partial T$.
\end{solution}
\begin{question}
\textnormal{\textbf{(b)}}
If $c_{v1}$ and $c_{v2}$ are the specific heats per particle for two subsystems of $N$ particles each, show using eqns [\ref{variance}] and [\ref{fluctuation}] that
\begin{align}
\frac{1}{c_{v1}} + \frac{1}{c_{v2}} = \frac{Nk_BT^2}{\sigma_{E_1}^2}.
\end{align}
\end{question}
\begin{solution}
This one is pure massaging equations, specifically [\ref{variance}] and [\ref{fluctuation}].
\begin{align}
\sigma_{E_1}^{2} &= -k_B/\bigg(\pdv[2]{S_1}{E_1} + \pdv[2]{S_{2}}{E_2}\bigg)
\\
\pdv[2]{S_1}{E_1} + \pdv[2]{S_{2}}{E_2} &= -k_B/\sigma_{E_1}^{2}
\end{align}
We substitute in the specific heat $Nc_v$ for each subsystem.
\begin{align}
-\frac{1}{NT^2}\bigg( \frac{1}{c_{v1}} + \frac{1}{c_{v2}} \bigg) &= -\frac{k_B}{\sigma_{E_1}^{2}}
\\
\Aboxed{\frac{1}{c_{v1}} + \frac{1}{c_{v2}} &= \frac{Nk_BT^2}{\sigma_{E_1}^{2}}} \label{sum}
\end{align}
\end{solution}
\begin{question}
\textnormal{\textbf{(c)}}
Using the equipartition theorem, write the temperature in terms of $K$. Show that $c_v^{(1)} = 3k_B/2$ for the momentum degrees of freedom. In terms of $K$ and $\sigma_K$, solve for the total specific heat of the molecular dynamics simulation.
\end{question}
\vspace{2mm}
\noindent
\textbf{Solution:}\\
\emph{Temperature in terms of $K$.}
The kinetic energy of a particle moving in a direction $i$ is given by $p_i^2/2m$. By the equipartition theorem, each of the three translational degrees of freedom contributes, on average, $\frac{1}{2}k_B T$ to the particle's kinetic energy. Thus for a gas of $N$ atoms, its total average kinetic energy is
\begin{align}
K &= \frac{3}{2} N k_B T.\label{kinetic}
\end{align}
This allows us to write the temperature as a function of energy as $\boxed{T(K) = \frac{2}{3}\frac{E}{Nk_B}}$.
\vspace{2mm}
\noindent
\emph{Momentum degrees of freedom.}
Remembering the definition of specific heat, we rewrite energy in terms of temperature to show $c_v^{(1)} = 3k_B/2$ for the momentum degrees of freedom.
\begin{gather}
Nc_{v1} = \pdv{E}{T_1} = \pdv{T_1} \bigg( \frac{3}{2} N k_b T_1 \bigg)
\\
\boxed{c_{v1} = 3/2 k_B}\label{specific_heat}
\end{gather}
\vspace{2mm}
\noindent
\emph{Total specific heat.}
First note that $\sigma_K = \sigma_{E}$.
Let $K = \langle E_1 \rangle$. Since the kinetic energy does not depend on the spatial configuration $\mathbb{Q}$ and the potential energy does not depend on the momentum configuration $\mathbb{P}$, we can treat our system as two uncoupled subsystems, where $c_{v1}$ is the kinetic contribution to the specific heat and $c_{v2}$ is the configurational contribution. Plug [\ref{kinetic}] and [\ref{specific_heat}] into [\ref{sum}].
\begin{gather}
    \frac{1}{c_{v2}} = \frac{Nk_BT^2}{\sigma^2_{E_1}} - \frac{2}{3k_B}
    \\
    \frac{1}{c_{v2}} = \frac{1}{9 N k_B \sigma^2_{E_1}} \bigg(4K^2 - 6N \sigma^2_{E_1} \bigg)
    \\
    c_{v2} = \frac{9 N k_B \sigma^2_{E_1}}{4K^2 - 6N \sigma^2_{E_1}}
    = \frac{9 N k_B \sigma^2_K}{4K^2 - 6N\sigma^2_K}
\end{gather}
Then we find that $c_{v1} + c_{v2}$ is
\begin{align}
c_v &= c_{v1} + c_{v2}
\\
&= \frac{9 N k_B \sigma^2_K}{4K^2 - 6N\sigma^2_K} + \frac{3}{2}k_B
\\
\Aboxed{
c_v &= k_B \frac{K^2}{\frac{2}{3}K^2 - N\sigma_K^2}}
\end{align}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% MICROCANONICAL ENERGY FLUCT %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Life and the heat death of the universe}
\emph{Living beings intercept entropy flows}; they use low-entropy sources of energy and emit high-entropy forms of the same energy. Freeman Dyson presumed that an intelligent being generates a fixed entropy $\Delta S$ per thought. Let's investigate how living beings might evolve to cope with the cooling and dimming we expect during the heat death of the Universe.\\
Assume that a being draws heat Q from a hot reservoir at $T_1$ and radiates it away to a cold reservoir at $T_2$.
\begin{question}
\textnormal{\textbf{(a)}}
\textbf{\emph{Energy needed per thought.}}
What is the minimum energy $Q$ needed per thought, in terms of $\Delta S$ and $T_2$?
\end{question}
\begin{solution}
The change in entropy $\Delta S$ is given by
\begin{align}
\Delta S &= \frac{Q_2}{T_2} - \frac{Q_1}{T_1}\label{entropy}.
\end{align}
If our heat bath $T_1$ is very hot, we can ignore the last term in [\ref{entropy}] and we're left with
\begin{align}
\Delta S &= \frac{Q_2}{T_2}
\end{align}
Thus, the minimum energy $Q$ needed per thought is given by
\begin{align}
\Aboxed{ Q &= T_2\Delta S.}\label{Q}
\end{align}
\end{solution}
\begin{question}
\textnormal{\textbf{(b)}}
\textbf{\emph{Time needed per thought to radiate energy.}}
Write an expression for the maximum rate of thoughts per unit time $\dv*{H}{t}$ (the inverse of the time $\Delta t$ per thought), in terms of $\Delta S$, $C$, and $T_2$.
\end{question}
\begin{solution}
Let $H$ be the number of thoughts. Dyson shows that the power radiated by our intelligent-being-as-entropy-producer is about
$P = C T^3_2$.
It follows that the energy radiated for a thought of time $\tau$ is
$E = C T^3_2 \tau$.
This energy is simply energy $Q$, the heat energy taken from the hot bath $T_1$.
The time-per-thought $\tau$ is
\begin{align}
\tau &= \frac{\Delta S}{C T^2_2}\label{time}
\end{align}
The time needed per thought $\dv*{H}{t}$ is [\ref{time}]'s reciprocal.
\begin{align}
\Aboxed{ \dv{H}{t} &= \frac{C T^2_2}{\Delta S}}
\end{align}
\end{solution}
\begin{question}
\textnormal{\textbf{(c)}}
\textbf{\emph{Number of thoughts for an ecologically efficient being.}}
\emph{Our Universe is expanding ($R \sim t$); the microwave background radiation has a characteristic temperature $\Theta (t) \sim R^{-1}$ which is getting lower as the Universe expands due to the Doppler effect.}
How many thoughts $H$ can an ecologically efficient being have between now and time infinity, in terms of $\Delta S$, $C$, $A$, and the current time $t_0$?
\end{question}
\begin{solution}
An ecologically efficient being wants to radiate as little heat as possible;
we choose $T_2$ to be equal to the microwave background $\Theta(t)$ since it cannot radiate heat at a temperature below this.
We integrate the rate of thoughts with respect to time to get the number of thoughts $H$ for a given time scale, using
$ \Theta(t) \sim A/t$.
\begin{align}
H &= \int_{t_0}^{\infty} \! \d{H}
= \int_{t_0}^{\infty} \! \d{t} \ \dv{H}{t}
= \int_{t_0}^{\infty} \! \d{t} \ \frac{C T_2^2}{\Delta S}\label{H-integral}
\\
&= \frac{C A^2}{\Delta S} \int_{t_0}^{\infty} \! \d{t} \
\frac{1}{t^2}
= \frac{C A^2}{\Delta S} t^{-1} \bigg|_{\infty}^{t_0}
\end{align}
The number of thoughts an efficient being can have between now and time infinity is
\begin{align}
\Aboxed{H &= \frac{CA^2}{\Delta S}\frac{1}{t_0}}.
\end{align}
\end{solution}
\begin{question}
\textnormal{\textbf{(d)}}
\textbf{\emph{Time without end: greedy beings.}}
\emph{Dyson would like his beings to be able to think an infinite number of thoughts before the Universe ends, but consume a finite amount of energy.
He proposes that beings radiate at a temperature $T_2(t) \sim t^{-3/8}$.} Show that with Dyson's cooling schedule, the total number of $H$ is infinite, but the total energy consumed $U$ is finite.
\end{question}
\begin{solution}
Take $T_2(t) = At^{-3/8}$ and integrate [\ref{H-integral}] to show that the number of thoughts at Dyson's proposed radiation temperature is infinite.
\begin{align}
H &= \int_{t_0}^{\infty}\! \d{t} \ \dv{H}{t} = \frac{CA^2}{\Delta S}\int_{t_0}^{\infty} \! \d{t} \ \Bigg(\frac{1}{t^{3/8}} \Bigg)^2
= \frac{8}{2}\frac{CA^2}{\Delta S} \ t^{1/4}\ \bigg|_{t = t_0}^{t = \infty} = \boxed{\infty}\label{H}
\end{align}
However, the total energy consumed is still finite. The total energy consumed $U$ is equal to the number of thoughts [\ref{H}] multiplied by the energy per thought [\ref{Q}].
\begin{gather}
\dv{U}{t} = Q \dv{H}{t} = T_2\Delta S \dv{H}{t} = CT_2^3
\\
U = CA^3\int_{t_0}^{\infty} \! \d{t} \
\Bigg(\frac{1}{t^{3/8}} \Bigg)^3 = CA^3\int_{t_0}^{\infty} \! \d{t} \ t^{-9/8}
= 8 C A^3 t^{-1/8} \ \bigg|_{\infty}^{t_0}
\end{gather}
We find that the total energy consumed is finite.
\begin{align}
\Aboxed{U &= 8 C A^3 \ t_0^{-1/8}}
\end{align}
\end{solution}
%%%%%%%%%%%%%%%%%%%%%
%% PRESSURE-VOLUME %%
%%%%%%%%%%%%%%%%%%%%%
\newpage
\section{Pressure-volume diagrams}
A monatomic ideal gas in a piston is cycled around the path in the $P-V$ diagram in fig.~\ref{P-V}. Leg \textbf{a} cools at constant volume by connecting to a heat bath at $T_c$; leg \textbf{b} heats at a constant pressure by connecting to a heat bath at $T_h$; leg \textbf{c} compresses at constant temperature while remaining connected to the bath at $T_h$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.3\linewidth]{PSet3_Fig1}
\caption{$P-V$ diagram.\label{P-V}}
\end{figure}
\begin{question}
Which of the following six statements are true?
\end{question}
\begin{enumerate}[(1)]
\item The cycle is reversible; no net entropy is created in the Universe.
\\
\textbf{False.} Legs \textbf{a} and \textbf{b} both increase the net entropy of the Universe: in each leg the gas exchanges heat with a bath (the cold bath at $T_c$ and the hot bath at $T_h$, respectively) while its temperature differs from that of the bath, so the entropy $\Delta S = \Delta Q/T$ gained by the absorber exceeds the entropy lost by the emitter.
\item The cycle acts as a refrigerator, using work from the piston to draw energy from the cold bath into the hot bath, cooling the cold bath.
\\
\textbf{False.} In leg \textbf{a}, when the system is in contact with the cold bath, heat energy is being transferred \textit{from} the system \textit{to} the cold bath.
\item The cycle acts as an engine, transferring heat from the hot bath to the cold bath and doing positive net work on the outside world.
\\
\textbf{False.} $W = \int P \d{V} < 0$; the work done on the outside world is negative. Engines do positive work on their environments.
\item The work done per cycle has magnitude
$|W| = P_0V_0 \left|4\log{4} - 3\right|$.
\\
\textbf{True.} It can be shown that the area integral $\int P \d{V}$ has magnitude $P_0V_0 \left|4\log{4} - 3\right|$ (I did it on the whiteboard in discussion!); the computation is spelled out after this list.
\item The heat transferred into the cold bath, $Q_c$, has magnitude $|Q_c| = (9/2)P_0V_0$.
\\
\textbf{True.} Consider leg \textbf{a}, where no work is being done. The heat energy transferred is given by $Q = \Delta U - W$ with $W = 0$. Initially, the internal energy is $U_i = (3/2)Nk_BT_h = (3/2)4P_0V_0$. When the cycle reaches leg \textbf{b}, the internal energy is $U_f = (3/2)Nk_BT_c = (3/2)P_0V_0$. Therefore, $|Q_c| = |\Delta U| = (9/2)P_0V_0$.
\item The heat transferred from the hot bath, $Q_h$, plus the net work $W$ done by the piston onto the gas, equals the heat $Q_c$ transferred into the cold bath.
\\
\textbf{True.} This is a statement of total energy conservation in the system.
\end{enumerate}
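For completeness, the area integral behind statement (4) can be carried out explicitly. The corner coordinates of the cycle are not restated here, so the following assumes, consistently with the internal energies quoted in statement (5), that leg \textbf{a} is the isochore at $V_0$ from $4P_0$ down to $P_0$, leg \textbf{b} is the isobar at $P_0$ from $V_0$ to $4V_0$, and leg \textbf{c} is the isotherm at $T_h$ (with $Nk_BT_h = 4P_0V_0$) back to the starting point; $\log$ denotes the natural logarithm. The work done by the gas per cycle is then
\begin{align}
W = \int P \d{V} &= 0 + P_0\,(4V_0 - V_0) + \int_{4V_0}^{V_0} \! \d{V} \ \frac{Nk_BT_h}{V}
\\
&= 3P_0V_0 + 4P_0V_0\log{\frac{V_0}{4V_0}} = P_0V_0\left(3 - 4\log{4}\right),
\end{align}
where the three terms are the contributions of legs \textbf{a}, \textbf{b}, and \textbf{c}, respectively. The work done on the outside world is indeed negative, consistent with statement (3), and its magnitude is $|W| = P_0V_0\left|4\log{4} - 3\right|$.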
\end{document}
| {
"alphanum_fraction": 0.6669521595,
"avg_line_length": 39.7712895377,
"ext": "tex",
"hexsha": "7fb78e084080a6cf3866b64d2408fefe305bec34",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2018ae62e384205459175dbed525930e607f8956",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "stepheniota/Physics133-UCR",
"max_forks_repo_path": "solved/P133_PSet3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2018ae62e384205459175dbed525930e607f8956",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "stepheniota/Physics133-UCR",
"max_issues_repo_path": "solved/P133_PSet3.tex",
"max_line_length": 422,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2018ae62e384205459175dbed525930e607f8956",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "stepheniota/Physics133-UCR",
"max_stars_repo_path": "solved/P133_PSet3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5727,
"size": 16346
} |
\documentclass{article}
%%%%%%%%%%%%%%%%%%%%%%%%%
% Packages & Macros
%%%%%%%%%%%%%%%%%%%%%%%%%
% For including graphics
\usepackage{graphicx}
% For title page
\usepackage{datetime}
\newdateformat{monthyeardate}{\monthname[\THEMONTH] \THEYEAR}
% For supporting linking
\usepackage{hyperref}
\hypersetup{colorlinks,urlcolor=blue,linkcolor=blue}
% For table colouring (in command line tables)
\usepackage{colortbl}
% For supporting footnotes in tables
\usepackage{footnote}
\makesavenoteenv{tabular}
% For centering multi-line captions
\usepackage[justification=centering]{caption}
% For string comparison in macros
\usepackage{etoolbox}
%%%%%%%%%%%%%%%%%%%%%%%%%
% Tool-Specific Macros
%%%%%%%%%%%%%%%%%%%%%%%%%
\input{macros}
\newcommand{\ToolName}{Line-Goto/From Tool\@\xspace}
\newcommand{\menu}[2]{%
\ifthenelse{\equal{#1}{1}}{Goto/Froms to Line}{}%
\ifthenelse{\equal{#1}{2}}{Line to Goto/Froms}{}%
}
\newcommand{\func}[2]{%
\ifthenelse{\equal{#1}{1}}{\cmd{goto2Line}}{}%
\ifthenelse{\equal{#1}{2}}{\cmd{line2Goto}}{}%
}
\newcommand{\toolFolder}{\cmd{LineToGotoFrom}}
\newcommand{\demoName}{\cmd{Line2GotoFromDemo}\@\xspace}
%%%%%%%%%%%%%%%%%%%%%%%%%
% Document
%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Title Page
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{\ToolName}
\date{\monthyeardate\today}
\maketitle
\vfill
\begin{figure}
\centering
\includegraphics[]{../figs/McSCert_Logo.pdf} \\
McMaster Centre for Software Certification (McSCert)
\end{figure}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Table of Contents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tableofcontents
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Introduction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
% Briefly, what is the tool?
% Provide any background or references.
The \ToolName is used in \Matlab/\Simulink to convert signal lines to \goto/\from blocks, as well as \goto/\from blocks to signal lines.
% Why is it useful?
The tool aims to facilitate software development activities in \Matlab \Simulink by automating actions that developers frequently perform when creating, modifying, or maintaining models. Using \goto/\from blocks or signal lines, where appropriate, helps to increase readability. In models with many signal line crossings, converting some signal lines to \goto/\from blocks helps to declutter the model. Conversely, using signal lines for straightforward connections, instead of \goto/\from blocks, allows developers to more easily follow the visual data flow.
% Is there more information?
%\subsection*{More Information}
%For more information about ..., an interested reader is referred to:
%
%\vspace{1em}
% <citation goes here>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% How to Use the Tool
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{How to Use the Tool}
This section describes what must be done to setup the \ToolName, as well as how to use the tool.
%---------------------------------------
% What needs to be done before the tool can be used?
% What needs to be done to a model in order for it to work with the tool?
%---------------------------------------
\subsection{Prerequisites and Installation}
\begin{enumerate}
\item Use \Matlab/\Simulink 2011b or newer.
\item To install the tool, use one of the following approaches:
\begin{enumerate}
\item \textbf{Download the \file{.zip} from GitHub}
\begin{enumerate}
\item Unzip the contents into your desired location.
\item Add the unzipped folder and subfolders to your \mpath.
\item Download the \href{https://github.com/McSCert/Simulink-Utility}{Simulink-Utility} in the same manner. Add the folder and subfolders to your \mpath also. This is a dependency for the tool to work correctly.
\end{enumerate}
\item \textbf{Use the Git command line}
\begin{enumerate}
\item Use the following command to download the tool and any necessary submodules.
\begin{verbatim}
git clone --recursive https://github.com/McSCert/LineToGotoFrom
\end{verbatim}
\item Add the folder and subfolders to your \mpath.
\end{enumerate}
\item \textbf{If you already have the files}
\begin{enumerate}
\item Add the tool folder and subfolders to your \mpath.
\end{enumerate}
\end{enumerate}
\item Run \href{https://www.mathworks.com/help/simulink/ug/registering-customizations.html}{sl\_refresh\_customizations} to refresh the Context Menu.
\item Ensure your model is open and unlocked.
\end{enumerate}
\paragraph{Troubleshooting:} If running the command ``\cmd{which line2Goto}'' indicates that the script is not found, then the tool needs to be added to the \mpath. For information on adding files to the \mpath, please see the \href{https://www.mathworks.com/help/matlab/matlab_env/add-remove-or-reorder-folders-on-the-search-path.html}{MathWorks documentation}.
%---------------------------------------
% How/when do you access the tool?
%---------------------------------------
\subsection{Getting Started}
The tool can be used via the \Simulink Context Menu, which can be viewed by right-clicking in a model. The following options can be available, depending on what is selected in the model. These options are listed below and shown in Figure~\ref{FIG:contextMenu}.
\begin{itemize}
\item \emph{\menu{1}} -- Available when one or more \goto/\from blocks are selected.
\item \emph{\menu{2}} -- Available when one or more signal lines are selected.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{../figs/ContextMenu}
\caption{How the tool will appear in the \Simulink Context Menu.}
\label{FIG:contextMenu}
\end{figure}
%---------------------------------------
% What are the main uses of the tool?
%---------------------------------------
\newpage
\subsection{Functionality}
This section describes the tool functionality when being used from the \Simulink Context Menu (Figure~\ref{FIG:contextMenu}).
\subsubsection*{\menu{1}}
Selecting one or more \goto/\from blocks and then selecting \cmd{\menu{1}} from the Context Menu will convert the selected blocks to signal line connections.
Multiple blocks can be selected by either dragging the cursor over several blocks, or by pressing \cmd{shift} and then selecting blocks.
\goto/\from blocks with global or scoped visibilities, i.e., those with corresponding blocks outside of the current subsystem, will not be converted.
\subsubsection*{\menu{2}}
Selecting one or more signal lines and then selecting \cmd{\menu{2}} from the Context Menu will convert the selected signal lines to \goto/\from connections. If a signal line has a name which is a valid variable name, it will automatically be used as the \goto/\from tag. If the line has no name, or its name is not a valid variable name, the propagated signal name will be used as the tag. If propagation is off or the propagated signal name is not valid, the user will be prompted to provide a tag name through the GUI shown in Figure~\ref{FIG:prompt_name}. If the tag name entered is not a valid variable name, the user will be prompted to provide another name. If the tag corresponds to existing \goto/\from blocks that are within scope, and would therefore conflict with them, the user will be prompted to provide another name.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{../figs/Prompt_Name}
\caption{Tool dialog window prompting for a \goto/\from tag.}
\label{FIG:prompt_name}
\end{figure}
This operation can be done on multiple signal lines at a time. To do so, multiple lines can be selected by either dragging the cursor over several signal lines, or by pressing shift and then selecting the desired signal lines.
\textit{\textbf{Note:} For \Matlab 2011b \Simulink, it is not possible to directly right-click on multiple signal lines. To overcome this, you must also select one of the source blocks of a selected signal line, and perform the right-click on this block.}
%---------------------------------------
% What are the configuration options for the tool?
%---------------------------------------
\subsection{Configuration Parameters}
The configuration file \cmd{config.txt} is included in \cmd{\toolFolder\textbackslash src}. The following configuration parameters are utilized by the tool, and can be modified by the user in order to tailor tool functionality:
\begin{itemize}
	\item \cmd{resize\_block} --- Enables or disables the resizing of \goto/\from block length to a specific size.
\item \cmd{static\_resize} --- Enables or disables the ability to resize \goto/\from block length to a fixed value. Otherwise, \goto/\from blocks will be resized dynamically. This parameter is used only when \cmd{resize\_block} is enabled.
\item \cmd{static\_length} --- The number of pixels that \goto/\from blocks are resized to lengthwise, when \cmd{static\_resize} is enabled.
\item \cmd{px\_per\_letter} --- The number of pixels to allocate per letter of a \goto/\from
tag, that the block will be resized to. This parameter is used when \cmd{static\_length} is disabled (i.e., dynamic resizing is enabled).
\item \cmd{block\_offset} --- The distance in pixels between \goto/\from blocks and the blocks that they are connected to.
\item \cmd{line\_routing} --- Enables or disables \emph{autorouting} when adding new lines.
\item \cmd{from\_signal\_naming} --- Enables or disables the naming of signals out of the new \from block(s) to match the signal name going into the \goto block.
\item \cmd{from\_signal\_propagation} --- Enables or disables the propagation of signals through the new \from block(s).
\end{itemize}
Please see the configuration file for more details regarding parameter usage and accepted values. These parameters can be modified with \Matlab open, and do not require that \Matlab be restarted for the changes to take effect.
%---------------------------------------
% What else does the tool do?
%---------------------------------------
\subsection{Errors and Warnings}
Any errors or warnings during tool use will be visible in the \Matlab Command Window.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Example
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Example}
Use the command \demoName in the \Matlab Command Window to open the example model, shown in Figure~\ref{FIG:demo1}. There are \goto/\from blocks in this example with tag \block{A}. To transform them into a signal line connection, right-click on the \goto block or one of the \from blocks (or select all of these blocks and right-click), and then choose the \cmd{\menu{1}} option from the Context Menu. The resulting model is given in Figure~\ref{FIG:demo2}. Likewise, to transform the line named \keyword{signalB} to a \goto/\from block connection, right-click on the signal line and select the \cmd{\menu{2}} option. The resulting model is given in Figure~\ref{FIG:demo3}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../figs/Demo1}
\caption{\ToolName demo model.}
\label{FIG:demo1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../figs/Demo2}
\caption{Resulting model after \cmd{Goto/Froms to Line} transformation on \block{A} \goto/\from blocks.}
\label{FIG:demo2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../figs/Demo3}
\caption{Resulting model after \cmd{Line to Goto/Froms} transformation on \keyword{signalB}.}
\label{FIG:demo3}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Matlab Commands
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Matlab Commands}
The tool can also be used via the \Matlab command line, with the following functions.
%---------------------------------------
% Command 1
%---------------------------------------
\begin{center}
\begin{tabular}{| >{\columncolor[gray]{0.9}}l | p{8.5cm} |} \hline
Function & \func{1}~ \\ \hline
Syntax & \func{1}~(\args{address, blocks}) \\ \hline
Description & Converts selected local \goto/\from block connections into signal lines.\\ \hline
Inputs & \args{address}: Path of where the \goto/\from blocks reside in the model. \newline
\args{blocks}: Cell array of \goto/\from block pathnames or handles to convert. \\ \hline
Outputs & N/A \\ \hline
\end{tabular}
\end{center}
\paragraph{Example:} The following command transforms the \goto/\from blocks with tag \keyword{A}, named \block{From0}, into a signal line connection in model \demoName. The resulting model is shown as Figure~\ref{FIG:demo2}.
\begin{center}
\cmd{\func{1}~(`\demoName', \{`test2/From0'\})}
\end{center}
%---------------------------------------
% Command 2
%---------------------------------------
\begin{center}
\begin{tabular}{| >{\columncolor[gray]{0.9}}l | p{8.5cm} |} \hline
Function & \func{2}~ \\ \hline
Syntax & \func{2}~(\args{address, line, tag}) \\ \hline
Description & Converts a signal line into a \goto/\from connection.\\ \hline
Inputs & \args{address}: Path of where the signal line resides in the model. \newline
\args{line}: Handle of the signal line to convert. \newline
\args{tag}: Valid variable name\footnote{\url{https://www.mathworks.com/help/matlab/ref/isvarname.html}} char array, to be used as the \goto/\from tag. \\ \hline
Outputs & N/A \\ \hline
\end{tabular}
\end{center}
\paragraph{Example:} The following command transforms the signal line named \keyword{signalB}, with line handle given as variable \keyword{lh}, into a \goto/\from block connection in model \demoName. The resulting model is shown as Figure~\ref{FIG:demo3}.
\begin{center}
\cmd{\func{2}~(`\demoName', lh, `signalB')}
\end{center}
\textit{\textbf{Note:} Included with this tool are two functions, \cmd{gcl} and \cmd{gcls}, which get the current line handle(s) for one or more lines, respectively. They are provided to assist with the command line operation of this tool.}
\end{document} | {
"alphanum_fraction": 0.6707891223,
"avg_line_length": 49.5853658537,
"ext": "tex",
"hexsha": "7fbf4620e857556ef90c5e52e418d27b9d8caed7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cf13577aec3d412d22da7999be22ce9a826d03fc",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "McSCert/Simulink-LineToGotoFrom",
"max_forks_repo_path": "doc/tex/LineToGotoFrom_UserGuide.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cf13577aec3d412d22da7999be22ce9a826d03fc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "McSCert/Simulink-LineToGotoFrom",
"max_issues_repo_path": "doc/tex/LineToGotoFrom_UserGuide.tex",
"max_line_length": 814,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cf13577aec3d412d22da7999be22ce9a826d03fc",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "McSCert/Simulink-LineToGotoFrom",
"max_stars_repo_path": "doc/tex/LineToGotoFrom_UserGuide.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3549,
"size": 14231
} |
%
% Copyright 2014, General Dynamics C4 Systems
%
% This software may be distributed and modified according to the terms of
% the GNU General Public License version 2. Note that NO WARRANTY is provided.
% See "LICENSE_GPLv2.txt" for details.
%
% @TAG(GD_GPL)
%
%macros for API documentation
\newcommand{\param}[3]{\texttt{#1}&\texttt{#2}\\ }
\newcommand{\inputapidoc}[1] {\input{parts/api/#1.tex}}
\newcommand{\apidoc}[7]
{
\subsection{\label{api:#1}#2}
\texttt{#4}
\vspace*{6pt}
#3
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{tabularx}{\textwidth}{llX}
\toprule
\textbf{Type} & \textbf{Name} & \textbf{Description} \\
\midrule
#5
\bottomrule
\end{tabularx}
\end{minipage}
\end{center}
\textit{Return value:} #6 \par
\textit{Description:} #7 \par
\vfill
}
%Common parameter descriptions
\newcommand{\destcspacedesc}{CPTR to the CNode that forms the root of the destination CSpace. Must be at a depth of 32.}
\newcommand{\destindexdesc}{CPTR to the destination slot. Resolved from the root of the destination CSpace.}
\newcommand{\destdepthdesc}{Number of bits of dest\_index to resolve to find the destination slot.}
\newcommand{\srccspacedesc}{CPTR to the CNode that forms the root of the source CSpace. Must be at a depth of 32.}
\newcommand{\srcindexdesc}{CPTR to the source slot. Resolved from the root of the source CSpace.}
\newcommand{\srcdepthdesc}{Number of bits of src\_index to resolve to find the source slot.}
\newcommand{\rightsdesc}{The rights inherited by the new capability. Possible values for this type are given in \autoref{sec:cap_rights}.}
\newcommand{\badgedesc}{Badge to be applied to the new capability.}
\newcommand{\cspacedesc}{CPTR to the CNode at the root of the CSpace where the capability will be found. Must be at a depth of 32.}
\newcommand{\indexdesc}{CPTR to the capability. Resolved from the root of the \_service parameter.}
\newcommand{\depthdesc}{Number of bits of index to resolve to find the capability being operated on.}
\newcommand{\ioportcapdesc}{An IO port capability.}
\newcommand{\ioportdescread}{The port to read from.}
\newcommand{\ioportdescwrite}{The port to write to.}
\newcommand{\ioportdatadesc}{Data to write to the IO port.}
\newcommand{\pagecapdesc}{Capability to the page to map.}
\newcommand{\pdcapdesc}{Capability to the VSpace which will contain the mapping.}
\newcommand{\vaddrdesc}{Virtual address to map the page into.}
\newcommand{\vmcaprightsdesc}{Rights for the mapping. Possible values for this type are given in \autoref{sec:cap_rights}.}
\newcommand{\vmattribsdescarm}{VM Attributes for the mapping. Possible values for this type are given in \autoref{ch:vspace}.}
\ifxeightsix
\newcommand{\vmattribsdescintel}{VM Attributes for the
mapping. Possible values for this type are given in \autoref{ch:vspace}. }
\fi
\newcommand{\tcbcapdesc}{Capability to the TCB which is being operated on.}
\newcommand{\irqhandlercapdesc}{The IRQ handler capability.}
\newcommand{\invokedcapdesc}{The capability to be invoked.}
\newcommand{\messageinfodesc}{The messageinfo structure for the IPC.}
\newcommand{\senderdesc}{The badge of the endpoint capability that was invoked by the sender is written to this address. This parameter is ignored if \texttt{NULL}.}
\newcommand{\resumetargetdesc}{The invocation should also resume the destination thread.}
\newcommand{\suspendsourcedesc}{The invocation should also suspend the source thread.}
\newcommand{\archflagsdesc}{Architecture dependent flags. These have no meaning on \ifxeightsix{either IA-32 or}\fi{} ARM.}
\newcommand{\threadpriodesc}{The thread's new priority.}
\newcommand{\threadcspacerootdesc}{The new CSpace root.}
\newcommand{\threadcspacedatadesc}{Optionally set the guard and guard size of the new root CNode. If set to zero, this parameter has no effect.}
\newcommand{\threadvspacerootdesc}{The new VSpace root.}
\newcommand{\threadvspacedatadesc}{Has no effect on \ifxeightsix{IA-32 or}\fi{} ARM processors.}
\newcommand{\threadbufferdesc}{Location of the thread's IPC buffer. Must be 512-byte aligned. The IPC buffer may not cross a page boundary.}
\newcommand{\threadbufferpagedesc}{Capability to a page containing the thread's IPC buffer.}
\newcommand{\excepthanddesc}{CPTR to the endpoint which receives IPCs when this thread faults. This capability is in the CSpace of the thread being configured.}
\newcommand{\asidassignpooldesc}{The ASID pool which is being assigned to. Must not be full. Each ASID pool can contain 1024 entries.}
\newcommand{\asidassignpddesc}{The page directory that is being assigned to an ASID pool. Must not already be assigned to an ASID pool.}
%Return value descriptions
\newcommand{\messageinforetdesc}{A \texttt{seL4\_MessageInfo\_t} structure as described in \autoref{sec:messageinfo}.}
\newcommand{\noret}{This method does not return anything.}
\newcommand{\errorenumdesc}{A return value of \texttt{0} indicates success. A non-zero value indicates that an error occurred. See \autoref{sec:errors} for a description of the message register and tag contents upon error.}
\newcommand{\domcapdesc}{Capability allowing domain configuration.}
\newcommand{\domargdesc}{The thread's new domain.}
\section{Error Codes}
\label{sec:errors}
Invoking a capability with invalid parameters will result in an error.
seL4 system calls return an error code in the message tag and a short
error description in the message registers to aid the programmer in
determining the cause of errors.\\
\subsection{Invalid Argument}
A non-capability argument is invalid.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_InvalidArgument} \\
\ipcbloc{IPCBuffer[0]} & Invalid argument number \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Invalid Capability}
A capability argument is invalid.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_InvalidCapability} \\
\ipcbloc{IPCBuffer[0]} & Invalid capability argument number \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Illegal Operation}
The requested operation is not permitted.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_IllegalOperation} \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Range Error}
An argument is out of the allowed range.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_RangeError} \\
\ipcbloc{IPCBuffer[0]} & Minimum allowed value \\
\ipcbloc{IPCBuffer[1]} & Maximum allowed value \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Alignment Error}
A supplied argument does not meet the alignment requirements.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_AlignmentError} \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Failed Lookup}
A capability could not be looked up.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_FailedLookup} \\
\ipcbloc{IPCBuffer[0]} & 1 if the lookup failed for a source capability, 0 otherwise\\
\ipcbloc{IPCBuffer[1]} & Type of lookup failure\\
\ipcbloc{IPCBuffer[2..]} & Lookup failure description as described in \autoref{sec:lookup_fail_desc}\\
\bottomrule
\end{tabularx}
\vfill
\subsection{Delete First}
A destination slot specified in the syscall arguments is occupied.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_DeleteFirst} \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Revoke First}
The object currently has other objects derived from it and the requested
invocation cannot be performed until either these objects are deleted or
the revoke invocation is performed on the capability.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_RevokeFirst} \\
\bottomrule
\end{tabularx}
\vfill
\subsection{Not Enough Memory}
The \obj{Untyped Memory} object does not have enough unallocated space to
complete the \apifunc{seL4\_Untyped\_Retype}{untyped_retype} request.
\begin{tabularx}{\textwidth}{p{0.25\textwidth}X}
\toprule
Field & Meaning \\
\midrule
\ipcbloc{Label} & \enummem{seL4\_NotEnoughMemory} \\
\ipcbloc{IPCBuffer[0]} & Amount of memory available in bytes\\
\bottomrule
\end{tabularx}
\vfill
\section{System Calls}
\inputapidoc{sel4_send}
\inputapidoc{sel4_wait}
\inputapidoc{sel4_call}
\inputapidoc{sel4_reply}
\inputapidoc{sel4_nbsend}
\inputapidoc{sel4_replywait}
\inputapidoc{sel4_yield}
\inputapidoc{sel4_notify}
\clearpage
\section{Architecture-Independent Object Methods}
\label{sec:kobj_api}
\inputapidoc{cnode_copy}
\inputapidoc{cnode_delete}
\inputapidoc{cnode_mint}
\inputapidoc{cnode_move}
\inputapidoc{cnode_mutate}
\inputapidoc{cnode_recycle}
\inputapidoc{cnode_revoke}
\inputapidoc{cnode_rotate}
\inputapidoc{cnode_savecaller}
\inputapidoc{debug_halt}
\inputapidoc{debug_putchar}
\inputapidoc{domainset_set}
\inputapidoc{irq_controlget}
\inputapidoc{irq_handleracknowledge}
\inputapidoc{irq_handlerclear}
\inputapidoc{irq_handlersetendpoint}
\inputapidoc{tcb_configure}
\inputapidoc{tcb_copyregisters}
\inputapidoc{tcb_readregisters}
\inputapidoc{tcb_resume}
\inputapidoc{tcb_setipcbuffer}
\inputapidoc{tcb_setpriority}
\inputapidoc{tcb_setspace}
\inputapidoc{tcb_suspend}
\inputapidoc{tcb_writeregisters}
\inputapidoc{untyped_retype}
\ifxeightsix
\clearpage
\section{IA-32-Specific Object Methods}
\label{sec:kobj_api_intel}
\inputapidoc{ia32_ASID_controlmakepool}
\inputapidoc{ia32_ASID_poolassign}
\inputapidoc{ia32_IO_portin8}
\inputapidoc{ia32_IO_portin16}
\inputapidoc{ia32_IO_portin32}
\inputapidoc{ia32_IO_portout8}
\inputapidoc{ia32_IO_portout16}
\inputapidoc{ia32_IO_portout32}
\inputapidoc{ia32_io_pagetable_map}
\inputapidoc{ia32_page_mapio}
\inputapidoc{ia32_page_map}
\inputapidoc{ia32_page_remap}
\inputapidoc{ia32_page_unmap}
\inputapidoc{ia32_page_getaddress}
\inputapidoc{ia32_pagetable_map}
\inputapidoc{ia32_pagetable_unmap}
\fi
\clearpage
\section{ARM-Specific Object Methods}
\label{sec:kobj_api_arm}
\inputapidoc{arm_asidcontrol_makepool}
\inputapidoc{arm_asidpool_assign}
\inputapidoc{arm_page_flushcaches}
\inputapidoc{arm_page_map}
\inputapidoc{arm_page_remap}
\inputapidoc{arm_page_unmap}
\inputapidoc{arm_page_getaddress}
\inputapidoc{arm_pagetable_map}
\inputapidoc{arm_pagetable_unmap}
| {
"alphanum_fraction": 0.774787403,
"avg_line_length": 33.7570977918,
"ext": "tex",
"hexsha": "14ece6b185fb6102a4e26da86ff04283d38d1c33",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "52900e395383af11f03dd3fe695fa677fc525f19",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "winksaville/sel4-minimal-helloworld",
"max_forks_repo_path": "kernel/manual/parts/api.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "52900e395383af11f03dd3fe695fa677fc525f19",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "winksaville/sel4-minimal-helloworld",
"max_issues_repo_path": "kernel/manual/parts/api.tex",
"max_line_length": 223,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "2f2d88aa0f06b35f80c12bc06556100d44298c10",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "winksaville/sel4-min-sel4",
"max_stars_repo_path": "kernel/manual/parts/api.tex",
"max_stars_repo_stars_event_max_datetime": "2016-09-09T13:53:10.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-09-09T13:53:10.000Z",
"num_tokens": 3192,
"size": 10701
} |
%-------------------------
% Resume in Latex
% Author : Sourabh Bajaj
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\usepackage{tabularx}
\input{glyphtounicode}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
% Ensure that generate pdf is machine readable/ATS parsable
\pdfgentounicode=1
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
% Just in case someone needs a heading that does not need to be in a list
\newcommand{\resumeHeading}[4]{
\begin{tabular*}{0.99\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubSubheading}[2]{
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
  \textbf{\href{https://github.com/jameschern}{\Large Wukki}} & Email: \href{mailto:jameschen1664@gmail.com}{jameschen1664@gmail.com}\\
  \href{https://github.com/jameschern}{https://github.com/jameschern} & Mobile: +86-13213291664 \\
\end{tabular*}
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
      {Henan University of Chinese Medicine}{Zhengzhou, China}
{Bachelor of Science in Computer Science}{2019 -- 2023}
\resumeSubHeadingListEnd
%-----------EXPERIENCE-----------------
\section{Experience}
% --------Multiple Positions Heading------------
% \resumeSubSubheading
% {Software Engineer I}{Oct 2014 - Sep 2016}
% \resumeItemListStart
% \resumeItem{Apache Beam}
% {Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines}
% \resumeItemListEnd
% \resumeSubHeadingListEnd
%-------------------------------------------
%-----------PROJECTS-----------------
\section{Projects}
\resumeSubHeadingListStart
\resumeSubItem{QuantSoftware Toolkit}
{Open source python library for financial data analysis and machine learning for finance.}
\resumeSubItem{Github Visualization}
{Data Visualization of Git Log data using D3 to analyze project trends over time.}
\resumeSubItem{Recommendation System}
{Music and Movie recommender systems using collaborative filtering on public datasets.}
\resumeSubItem{Mac Setup}
{Book that gives step by step instructions on setting up developer environment on Mac OS.}
\resumeSubHeadingListEnd
%--------PROGRAMMING SKILLS------------
\section{Skills}
\resumeSubHeadingListStart
\resumeSubItem{Languages}
      {Chinese, English, Japanese}
    \resumeSubItem{Programming languages}
      {Rust, Go, Java, JavaScript, SQL, Kotlin, C++, C\#}
    \resumeSubItem{Skills}
      {Front-end Developer, Back-end Developer, Illustration, Graphic Design}
\resumeSubHeadingListEnd
%------Certification----------
\section{Certification}
\resumeSubHeadingListStart
\resumeSubItem{CET Score}
{CET-4:473 \ CET-6:550}
\resumeSubItem{CATTI Score}
{}
\resumeSubItem{JIPT}
{}
\resumeSubItem{IELTS}
{}
\resumeSubHeadingListEnd
%-------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.6613745561,
"avg_line_length": 29.0121212121,
"ext": "tex",
"hexsha": "a0eaf5866bb859d6bb57fbab39723d1d139b0edb",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "eacd1ee4f2573416e100ca02ad6af0f9cb76ad47",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jameschern/resume",
"max_forks_repo_path": "wukki-resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "eacd1ee4f2573416e100ca02ad6af0f9cb76ad47",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jameschern/resume",
"max_issues_repo_path": "wukki-resume.tex",
"max_line_length": 129,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "eacd1ee4f2573416e100ca02ad6af0f9cb76ad47",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jameschern/resume",
"max_stars_repo_path": "wukki-resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1411,
"size": 4787
} |
\documentclass[simplex.tex]{subfiles}
% DO NOT INCLUDE PREAMBLES/PACKAGES HERE!!
% packages are inherited from preamble.tex; you can compile this on its own
\begin{document}
\subsection{CLARITY}
\subsubsection{A low-latency pipeline for processing CLARITY data in the cloud}
We are working on migrating our existing CLARITY pipeline to run entirely on virtualized infrastructure in the cloud.
This workflow includes ingesting into ndstore, aligning to a reference atlas, and storing a registered stack back into ndstore.
Currently, ndstore is working in the cloud, with manual ingest of data.
The next steps are to deploy LDDMM via Docker containers to run within the cloud environment.
\end{document}
| {
"alphanum_fraction": 0.8093883357,
"avg_line_length": 46.8666666667,
"ext": "tex",
"hexsha": "69a72f5c479c6bb46cd7a8582b1676f9ea88bde2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "openconnectome/SIMPLEX_Q2",
"max_forks_repo_path": "Reporting/reports/2017-03Q1/clarity.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "openconnectome/SIMPLEX_Q2",
"max_issues_repo_path": "Reporting/reports/2017-03Q1/clarity.tex",
"max_line_length": 127,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "openconnectome/SIMPLEX_Q2",
"max_stars_repo_path": "Reporting/reports/2017-03Q1/clarity.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 160,
"size": 703
} |
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
% Include other packages here, before hyperref.
% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex. (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
\usepackage[breaklinks=true,bookmarks=false]{hyperref}
\cvprfinalcopy % *** Uncomment this line for the final submission
\def\cvprPaperID{****} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
% Pages are numbered in submission mode, and unnumbered in camera-ready
%\ifcvprfinal\pagestyle{empty}\fi
\setcounter{page}{1}
\begin{document}
%%%%%%%%% TITLE
\title{Recognizing Ancient Greek Handwriting Using Modern Training Data}
\author{Sean Karlage\\
University of Kentucky\\
Lexington, KY\\
{\tt\small sean.karlage@uky.edu}
}
% For a paper whose authors are all at the same institution,
% omit the following lines up until the closing ``}''.
% Additional authors and addresses can be added with ``\and'',
% just like the second author.
% To save space, use either the email address or home page, not both
\maketitle
%\thispagestyle{empty}
\section{Progress}
\begin{itemize}
\item Completely changed my project around after realizing that it was not feasible to use off-the-shelf OCR scanners to try to recognize handwritten text
\item At the recommendation of Dr. Seales, dropped the notion of scanning my own papyrus on the scanner in Marksbury because it would take too long to complete and "the scanner is really finicky"
\item Iterated through several project ideas until I was able to come up with the current idea
\item Did some initial research and tried to locate datasets. Found the GCDB database of handwritten characters. I would like to evaluate classifiers that have been trained with this data on letterforms from the ancient Greek writings at http://sites.lib.byu.edu/scholarsarchive/byu-multi-spectral-imaging-project/
\item Using OpenCV + Python, built a simple kNN classifier that was trained and evaluated on the GCDB data, obtaining 66.8\% accuracy in recognition
\end{itemize}
\section{Issues}
\begin{itemize}
\item Still do not have manually-segmented letterforms from BYU corpora, but expect to have them in the coming weeks with the help of Scrolls research team members. This is the key part of the project
\item Need to create more classifiers using different ML techniques to try and get the best detector possible
\item Need to determine which features would be the best for testing on modern Greek handwriting and then evaluating on ancient Greek
\end{itemize}
%%%%%%%%% ABSTRACT
\begin{abstract}
   While recognition of machine-printed text with automated procedures is considered a solved problem by most computer scientists, recognition of handwriting is still a difficult problem in the field of text recognition. On top of these difficulties, recognition of ancient manuscript letterforms is even more difficult due to document deformities and letterform occlusion. Because of the lack of datasets for ancient Greek handwriting, classifiers employing a variety of machine learning techniques were trained on the [GCDB] dataset and evaluated on both samples from that dataset and on manually-segmented letterforms from an ancient Greek manuscript corpus.
\end{abstract}
%%%%%%%%% BODY TEXT
\section{Introduction}
Automated recognition of machine-printed text is generally considered to be a solved problem in the field of computer vision. Many commercial and open-source systems exist that are designed to take as input images or scans of machine-printed text and output corresponding text elements with a high degree of accuracy. However, this automated process is much more difficult when it comes to recognizing handwritten text for a variety of reasons: different writing styles, various font embellishments, irregular line widths, etc.
Recognition of ancient handwritten text, in particular, represents a unique challenge in this field; in addition to the above-mentioned problems, texts can be malformed, occluded by foreign material on the document, or may simply have degraded due to age. The ability to automatically recognize such text would be a real boon for document researchers, as they would not only be able to interpret ancient text more easily, but could also contribute to the digital archival of that text.
%-------------------------------------------------------------------------
\subsection{Motivation}
As a part of the larger ``volume cartography'' project occurring at [VisCenter], a need arose for automated recognition of ancient Greek handwritten letterforms. Because the volume of data that could potentially contain text is large, manually parsing flattened, textured volume data for possible letterforms is infeasible. Furthermore, in addition to team member validation, professional analysis from experts in ancient Greek writings would be required in order to evaluate and validate any potential discoveries.
As a first pass, however, automated recognition of potential letterforms provides a good estimate of the signal-to-noise ratio and an evaluation of letterform extraction procedures. To the author's knowledge, no open or commercial database of segmented ancient Greek handwritten letterforms exists to serve as a training dataset for a potential text classifier. Instead, the author proposes to train classifiers on datasets of modern Greek handwriting, in particular the dataset proposed in [GCDB], and to evaluate them on hand-labeled text from the document collection of [BYU].
%------------------------------------------------------------------------
\section{Related Work}
Automated recognition of Early Christian Greek manuscripts has been carried out previously in [CIL], where the authors employed a segmentation-free approach by detecting open and closed cavities in letterforms and using those as a basis with which to extract features of each character. The system developed by the authors obtained a recall value of 89\% for simple letterforms in their testing dataset.
\begin{itemize}
\item The database of letterforms used by the authors is not available for further study to the best of my knowledge. I have contacted one of the authors and am waiting to hear back
\end{itemize}
The [HisDoc] project involved not only handwriting recognition and digitization of manuscript text data, but also the development of an information retrieval system around the recovered data. The authors developed this system and applied it primarily to three text corpora in different languages (Latin, German, and English, respectively). Their system accurately segmented and recognized text from each corpus and, using a neural network-based approach, achieved word error rates of less than 10\%.
[Diem1], [Diem2] proposed a binarization-free approach to recognizing letterforms from ancient Slavonic documents using SIFT features, SVM classifiers, and finally a weighted voting algorithm based on pre-classified local descriptors. Not applying binarization filters to the input documents retained more information from each letterform, particularly for those that were heavily degraded or occluded by stains or tears in the manuscript.
The authors of [GRUHD] were the first to develop an open dataset of modern Greek handwriting that contained extensive metadata about participants, as well as providing very well-segmented letterforms and words from each individual author. The sample text contains simple Greek words as well as both uppercase and lowercase Greek letters and numerals from 1000 individual contributors. This database has been supplanted by [GCDB], from the same authors, with improvements to database architecture and archival. GCDB also contains Greek word samples and letterforms from 305 unique contributors.
%-------------------------------------------------------------------------
\section{Problem Statement}
Given the lack of available ancient Greek handwriting datasets, building and evaluating a detector on live data is difficult. In order to approximate a recognition system, we propose to answer the following primary research question:
\begin{quote}
\textit{Can a handwriting recognition system that is trained on modern Greek handwriting be used to classify segmented letterforms from ancient Greek manuscripts?}
\end{quote}
A variety of classifiers will be built and validated against both the training dataset and live data from the corpora of [BYU].
%-------------------------------------------------------------------------
\section{Approach}
In order to get a variety of results from training data, a host of classifiers will be developed using a variety of common learning algorithms. A fraction of the dataset from [GCDB] will be used to train each classifier, and the remaining samples will comprise the testing/evaluation dataset. Each trained classifier will also be evaluated on individual letterforms extracted from the corpora of [BYU].
%-------------------------------------------------------------------------
\subsection{k-Nearest Neighbors}
A very simple k-Nearest Neighbors classifier was developed using Python and OpenCV. The built-in kNN classifier in OpenCV was used as the base, and was trained on 50\% of the GCDB dataset. The remaining half of the data served as evaluation data.
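To make the procedure concrete, the listing below sketches this classifier using OpenCV's \texttt{cv2.ml} module. It is illustrative rather than the exact code used: \texttt{load\_gcdb} is a placeholder for the GCDB loader, the $20 \times 20$ size normalization, the raw-pixel feature vector, and $k=5$ are assumptions, and \texttt{cv2.ml} is the OpenCV~3 interface (older releases expose \texttt{cv2.KNearest} instead).
\begin{verbatim}
import cv2
import numpy as np

# Placeholder loader returning segmented GCDB letterform
# images and their integer class labels.
images, labels = load_gcdb()

# Feature vectors: raw intensities of size-normalized glyphs.
samples = np.array(
    [cv2.resize(im, (20, 20)).flatten() for im in images],
    dtype=np.float32)
responses = np.array(labels, np.float32).reshape(-1, 1)

# 50/50 split into training and evaluation halves.
half = len(samples) // 2
train, train_lbl = samples[:half], responses[:half]
test, test_lbl = samples[half:], responses[half:]

# OpenCV's built-in k-Nearest Neighbors classifier.
knn = cv2.ml.KNearest_create()
knn.train(train, cv2.ml.ROW_SAMPLE, train_lbl)
_, result, _, _ = knn.findNearest(test, k=5)
print("Accuracy:", np.mean(result == test_lbl))
\end{verbatim}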
\begin{itemize}
\item As I try more classifiers, I will put them in subsections below this one
\item Once I get the extracted letterforms from the BYU corpora, I will go back and evaluate the trained kNN classifier on those letterforms and report the results in the appropriate section
\item Also want to try out other features such as Zernike moments from [Kale]
\end{itemize}
%-------------------------------------------------------------------------
\section{Evaluation}
\subsection{k-Nearest Neighbors}
The simple k-Nearest Neighbors classifier was evaluated on the GCDB dataset, with 50\% of the dataset comprising the training data, and the other 50\% comprising the evaluation data. The classifier was run for differing values of $k$ in the range [1, 20]. The plot of accuracy vs. values of $k$ is shown in Figure 1.
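A sketch of the corresponding sweep, continuing from the listing in the previous section (so \texttt{knn}, \texttt{test}, and \texttt{test\_lbl} are assumed to already exist), is shown below; the output path \texttt{res/figure\_1.png} simply mirrors the figure file used here and is illustrative only.
\begin{verbatim}
import matplotlib.pyplot as plt

# Evaluate the trained classifier for k = 1..20 and plot
# recognition accuracy against k.
ks = list(range(1, 21))
acc = []
for k in ks:
    _, result, _, _ = knn.findNearest(test, k=k)
    acc.append(float(np.mean(result == test_lbl)))

plt.plot(ks, acc, marker="o")
plt.xlabel("k")
plt.ylabel("recognition accuracy")
plt.savefig("res/figure_1.png")
\end{verbatim}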
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{res/figure_1.png}
\caption{Plot of k-Nearest Neighbors accuracy vs. increasing values of $k$}
\end{center}
\end{figure}
\begin{itemize}
\item Would like to have a color plot of the clusters in the future here as well
\item As I add more classifiers, I will expand this section in tandem with the above \textit{Approach} section
\end{itemize}
%-------------------------------------------------------------------------
\section{Discussion}
\begin{itemize}
\item Trained kNN classifier on GCDB dataset, achieved a maximum recognition accuracy of 66.8\%
\item Will list more summary results for each classifier
\item Once I get evaluation data from BYU corpora, will compare results and determine whether this approach is viable or not
\end{itemize}
%-------------------------------------------------------------------------
\section{References}
{\small
\bibliographystyle{ieee}
\bibliography{egbib}
}
\end{document}
| {
"alphanum_fraction": 0.7554000875,
"avg_line_length": 73.7741935484,
"ext": "tex",
"hexsha": "a86d7ad2465f2bf32c402f951218c8b9141f4fd9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1bd7210047119bab478a8362298a5b1f79d934a0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "get9/greek-hw-detector",
"max_forks_repo_path": "paper/egpaper_final.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1bd7210047119bab478a8362298a5b1f79d934a0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "get9/greek-hw-detector",
"max_issues_repo_path": "paper/egpaper_final.tex",
"max_line_length": 662,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "1bd7210047119bab478a8362298a5b1f79d934a0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "get9/greek-hw-detector",
"max_stars_repo_path": "paper/egpaper_final.tex",
"max_stars_repo_stars_event_max_datetime": "2015-05-27T16:42:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-05-27T16:42:20.000Z",
"num_tokens": 2345,
"size": 11435
} |
\section{Scholarships and Awards}
\denseouterlist{
\entry{ACM ICPC 16\textsuperscript{th} place \hfill 2016\fillyear{\textendash 2015}}
\entry{Top start-up certification SW Maestro 7\textsuperscript{th} \hfill 2017\fillyear{\textendash 2017}}
\entry{Google Machine Learning Challenge Korea 5\textsuperscript{th} place \hfill 2017\fillyear{\textendash 2017}}
}
| {
"alphanum_fraction": 0.7645502646,
"avg_line_length": 29.0769230769,
"ext": "tex",
"hexsha": "2a282b66713f55b4fdf50e8cb988ba53f0dd0c17",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "780742e1055cd12dbce8e0eeca651839cc0f8c13",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jiunbae/curriculum-vitae",
"max_forks_repo_path": "styles/sections/awards.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "780742e1055cd12dbce8e0eeca651839cc0f8c13",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jiunbae/curriculum-vitae",
"max_issues_repo_path": "styles/sections/awards.tex",
"max_line_length": 118,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "780742e1055cd12dbce8e0eeca651839cc0f8c13",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jiunbae/curriculum-vitae",
"max_stars_repo_path": "styles/sections/awards.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 130,
"size": 378
} |
% This is part of the TFTB Reference Manual.
% Copyright (C) 1996 CNRS (France) and Rice University (US).
% See the file refguide.tex for copying conditions.
\markright{tfrrmsc}
\section*{\hspace*{-1.6cm} tfrrmsc}
\vspace*{-.4cm}
\hspace*{-1.6cm}\rule[0in]{16.5cm}{.02cm}
\vspace*{.2cm}
{\bf \large \sf Purpose}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
Reassigned Morlet Scalogram time-frequency distribution.
\end{minipage}
\vspace*{.2cm}
{\bf \large \sf Synopsis}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
\begin{verbatim}
[tfr,rtfr,hat] = tfrrmsc(x)
[tfr,rtfr,hat] = tfrrmsc(x,t)
[tfr,rtfr,hat] = tfrrmsc(x,t,N)
[tfr,rtfr,hat] = tfrrmsc(x,t,N,f0t)
[tfr,rtfr,hat] = tfrrmsc(x,t,N,f0t,trace)
\end{verbatim}
\end{minipage}
\vspace*{.5cm}
{\bf \large \sf Description}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
        {\ty tfrrmsc} computes the Morlet scalogram and its reassigned
        version. The reassigned Morlet scalogram has the following
        expression, where $h(t)$ is a Gaussian window:
\begin{eqnarray*}
\hspace*{-.2cm}SC_x^{(r)}(t',a';h)=\iint_{-\infty}^{+\infty} {a'}^2\
SC_x(t,a;h)\ \delta(t'-\hat{t}(x;t,a))\ \delta(a'-\hat{a}(x;t,a))\
\dfrac{dt\ da}{a^2},
\end{eqnarray*}
where
\begin{eqnarray*}
\hat{t}(x;t,a)=t-\Re\left\{a\ \dfrac{T_x(t,a;\ens{T}_h)\ T_x^*(t,a;h)}
{|T_x(t,a;h)|^2}\right\} \\
\hat{\nu}(x;t,a)=\dfrac{\nu_0}{\hat{a}(x;t,a)}=\dfrac{\nu_0}{a} +
\Im\left\{\dfrac{T_x(t,a;\ens{D}_h)\ T_x^*(t,a;h)}{2\pi a\
|T_x(t,a;h)|^2}\right\}
\end{eqnarray*}
with $\ens{T}_h(t)=t\ h(t)$ and $\ens{D}_h(t)=\frac{dh}{dt}(t)$. $SC_x(t,a;h)$ denotes
the scalogram and $T_x(t,a;h)$ the wavelet transform :
\[SC_x(t,a;h)=\left|T_x(t,a;h)\right|^2=\frac{1}{|a|}\ \left|\int_{-\infty}^{+\infty}
x(s)\ h^*\left(\dfrac{s-t}{a}\right)\ ds\right|^2.\]
\hspace*{-.5cm}\begin{tabular*}{14cm}{p{1.5cm} p{8cm} c}
Name & Description & Default value\\
\hline
{\ty x} & analyzed signal ({\ty Nx=length(x)})\\
{\ty t} & the time instant(s) & {\ty (1:Nx)}\\
{\ty N} & number of frequency bins & {\ty Nx}\\
{\ty f0t} & time-bandwidth product of the mother wavelet
& {\ty 2.5}\\
{\ty trace} & if nonzero, the progression of the algorithm is shown
& {\ty 0}\\
\hline \end{tabular*} \end{minipage}
\newpage
\hspace*{1.5cm} \begin{minipage}[t]{13.5cm}
\hspace*{-.5cm}\begin{tabular*}{14cm}{p{1.5cm} p{8cm} c}
Name & Description & Default value\\ \hline
{\ty tfr, rtfr} & time-frequency representation and its reassigned
version\\
{\ty hat} & complex matrix of the reassignment vectors\\
\hline
\end{tabular*}
\vspace*{.2cm}
When called without output arguments, {\ty tfrrmsc} runs {\ty tfrqview}.
\end{minipage}
\vspace*{.5cm}
{\bf \large \sf Example}
\begin{verbatim}
sig=fmlin(64,0.1,0.4);
tfrrmsc(sig,1:64,64,2.1,1);
\end{verbatim}
\vspace*{.5cm}
{\bf \large \sf See Also}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
all the {\ty tfr*} functions.
\end{minipage}
\vspace*{.5cm}
{\bf \large \sf Reference}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
[1] F. Auger, P. Flandrin ``Improving the Readability of Time-Frequency and
Time-Scale Representations by the Reassignment Method'' IEEE Transactions
on Signal Processing, Vol. 43, No. 5, pp. 1068-89, 1995.
\end{minipage}
\makeatletter \@ifundefined{rootpath}{\input{../../setup/preamble.tex}}\makeatother
\worksheetstart{Roslyn Extension}{0}{February 10, 2015}{}{../../}
This chapter describes how the Roslyn C\# compiler is extended to support the constructs of \stmnamesp described in \bsref{chap:stm_design}. \bsref{sec:roslyn_extension_strategy} describes the overall extension strategy. In \bsref{sec:roslyn_lexer_parser_changes} the changes made to the lexer and parser are described. Following this, \bsref{sec:syntax_tree_transformations} presents examples of the transformations made on the syntax tree by the extension while \bsref{sec:roslyn_extension_testing} describes the testing approach employed to test the Roslyn extension. Finally, \bsref{sec:design_and_int_revisited} describes areas where the initial prototype implementation, described in this chapter, conflicts with the intended \ac{STM} design described in \bsref{chap:stm_design}.
\label{chap:roslyn_extension}
\section{Extension Strategy}
\label{sec:roslyn_extension_strategy}
The Roslyn C\# compiler is extended by modifying the lexing and parsing phases with support for the language constructs described in \bsref{chap:stm_design}. The extended parsing phase outputs an extended syntax tree containing direct representations of the language features provided by \stmname. The syntax tree is then analyzed to identify \stmnamesp constructs, followed by a transformation where the language extension of \stmnamesp is transformed into equivalent C\# code which utilizes the \ac{STM} library described in \bsref{chap:implementation}. This syntax tree is then passed to the remaining C\# compiler phases, utilizing the compiler's semantic analysis and code generation implementations. The approach is visualized in \bsref{fig:compiler_pipeline_extension}, which is a modified version of \bsref{fig:api_vs_compiler_pipeline}. The transformation phase utilizes both the extended syntax tree and symbol information gathered through the Roslyn \ac{API}. By making modifications in the early phases, the number of changes required is minimized, as the rest of the phases can be reused without modifications. Furthermore, modifications are done on the stable syntax tree, rather than the unstable bound tree, as described in \bsref{sec:compile_phases}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{\rootpath/worksheets/roslyn_extension/figures/compiler_pipeline_extension}
\caption{Extension occurs at the syntax tree and symbol level. The parser is extended to output an extended syntax tree and transformation of this tree occurs before the binding and IL emission phases.}
\label{fig:compiler_pipeline_extension}
\end{figure}
The described approach was selected due to the following reasons:
\begin{enumerate}
\item As described in \bsref{subsec:roslyn_first_phase} the C\# lexer uses a simple \bscode{switch} case to identify tokens, which chooses the type of token to identify based on the first character of the token, while the parser uses a recursive descent technique. Both the lexer and parser are implemented by hand. The techniques employed have a low degree of complexity, as the first method uses a simple \bscode{switch} and the latter method corresponds to parsing one non-terminal at a time. Thus modifying the lexer and parser is simpler than if more complex techniques had been employed, such as \ac{LALR} parsing\cite{nunes2003cps}.
\item The Roslyn compiler generates the nodes composing its syntax trees along with factory methods and a visitor pattern implementation based on the content of an \ac{XML} file. Therefore adding new nodes to represent the extended language constructs, such as the \bscode{atomic} block and \bscode{retry} statement, simply amounts to adding definitions for these nodes to the \ac{XML}, allowing the employed tool to generate source code for the new nodes.
\item As described in \bsref{sec:syntax_trees} Roslyn's syntax trees are designed to be fast to modify by reusing underlying green nodes instead of creating complete copies\cite[p. 6]{ng2012roslyn}. This speaks for conducting transformation on the level of the syntax tree despite its immutable implementation.
\item The Roslyn project has been designed to allow programmers to utilize the information gathered during the compilation phases by analyzing the syntax tree, the information in the symbol table, and the results of semantics analysis. These parts of the compiler are exposed as a public \ac{API} allowing access to both syntactic and semantic information. Utilizing this \ac{API} during the transformation phase allows the transformation to draw on the existing semantic analysis to answer questions such as, what method is the target of a method invocation, without the need for implementing complex analysis.
\item By parsing an extended syntax tree and transforming it into a regular C\# syntax tree, the existing semantic analysis and code emission implementations can be utilized.
\end{enumerate}
Despite the many advantages of the selected approach, a number of disadvantages also exist. The following disadvantages have been identified:
\begin{enumerate}
\item By modifying the syntax tree, the roundtripable property, described in \bsref{sec:syntax_trees}, is lost, as the syntax tree no longer represents the original source code. Consequently, any error or warning generated by the compiler refers to the transformed source code, which the programmer does not see, as opposed to the original \stmname source code. This requires implementing our own code analysis to give meaningful errors attached to the original source code. Alternatively, the transformation could be performed at a later compilation phase, e.g. modifying the bound tree in the binding phase or making changes before emitting code in the emit phase. This would preserve the roundtripable property, but also limit the reuse of the existing compiler.
\item As \bscode{atomic} local variables are translated into variables of a corresponding \ac{STM} type, the types of these need to be known. C\# allows the programmer to utilize type inference for local variables using the \bscode{var} keyword. Consequently the type of a local variable is not required to be defined in the source code. To remedy this, the extension must infer the type before translation. Roslyn offers the possibility to evaluate the type of an expression, which is used to infer the type of \bscode{atomic} local variables with unknown types. Consequently, the extension must perform some of the work that the Roslyn compiler does at later stages of the compilation, reducing the reuse of the later compilation phases.
\end{enumerate}
\section{Lexing \& Parsing Phases}
This section describes the changes conducted in order to extend the lexing and parsing phases of the Roslyn C\# compiler to support the constructs described in \bsref{chap:stm_design}.
\label{sec:roslyn_lexer_parser_changes}
\subsection{Lexer Changes}
As described in \bsref{chap:stm_design}, \stmnamesp introduces three new keywords: \bscode{atomic}, \bscode{orelse}, and \bscode{retry}. Consequently, the lexer has been extended to identify these keywords as tokens of the correct kind. The C\# lexer initially lexes keyword tokens as if they are identifiers. If an identifier token corresponds to a keyword, a keyword token with the correct kind is returned instead.
In order to identify the new keywords, their definitions have been added to the lookup methods of the \bscode{SyntaxKindFacts} class. This class defines the \bscode{GetKeywordKind} method, which the lexer uses to identify the keyword kind of an identifier, if the identifier represents a keyword. Additionally, the \bscode{SyntaxKindFacts} class defines the \bscode{GetText} method for determining the string representation of a keyword based on its kind.
To allow tokens to represent the new keywords, the \bscode{AtomicKeyword}, \bscode{OrelseKeyword}, and \bscode{RetryKeyword} entries have been added to the \bscode{SyntaxKind} enumeration. As described in \bsref{subsubsec:roslyn_kinds}, the \bscode{SyntaxKind} enumeration contains an entry for each type of node, token, or trivia in C\#. Whenever the lexer identifies an occurrence of one of the new keywords, a token with the corresponding kind is returned. For example, an occurrence of the \bscode{atomic} keyword results in a token with the kind \bscode{AtomicKeyword} being returned.
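The change to the keyword lookup can be sketched as shown in \bsref{lst:lexer_keyword_sketch}. The listing is illustrative only: the real \bscode{GetKeywordKind} method covers every existing C\# keyword, and its exact structure in the extended compiler may differ.
\begin{lstlisting}[label=lst:lexer_keyword_sketch,
caption={Sketch of the Extended Keyword Lookup},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, string}] % Start your code-block
// Illustrative sketch: only the added cases are shown, the existing
// C# keywords are handled by the cases elided below.
public static SyntaxKind GetKeywordKind(string text)
{
    switch (text)
    {
        // ... cases for the existing C# keywords ...
        case "atomic": return SyntaxKind.AtomicKeyword;
        case "orelse": return SyntaxKind.OrelseKeyword;
        case "retry": return SyntaxKind.RetryKeyword;
        default: return SyntaxKind.None;
    }
}
\end{lstlisting}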
\subsection{Syntax Tree Extension}
The design described in \bsref{chap:stm_design} adds the \bscode{atomic}, \bscode{orelse}, and \bscode{retry} constructs, which the existing syntax tree cannot express. Therefore, the syntax tree must be extended to support these language constructs. As described in \bsref{subsec:roslyn_syntax_tree_generation}, the nodes composing the syntax tree, along with factory methods and the visitor pattern implementation, are generated on the basis of an \ac{XML} file. Adding additional nodes to the syntax tree therefore amounts to defining them in the \ac{XML} notation, which has been done for the three previously mentioned constructs. \bsref{lst:roslyn_extension_tre_xml} shows the \ac{XML} code defining the \bscode{AtomicStatementSyntax} node which represents an \bscode{atomic} block in the syntax tree. Line 1 defines the name and base class of the node while line 2 defines its kind. The \bscode{AtomicStatement} kind has been added to the \bscode{SyntaxKind} enumeration as described previously. Line 3 defines a property on the node which holds the token representing the \bscode{AtomicKeyword} which starts the definition of the atomic block. The property has a constraint, defined on line 4, specifying the kind of token that can be associated with the property, as well as a comment given on lines 5-9. Line 11 defines a statement property which holds the block of statements associated with the defined \bscode{atomic} block. Line 18 defines a property containing a \bscode{SyntaxList} of \bscode{orelse} blocks associated with the \bscode{atomic} block. This relationship has been modeled after the relationship between a C\# \bscode{try} statement and its \bscode{catch} clauses, as both have a zero-to-many association. Finally, lines 25-27 define a comment for the \bscode{AtomicStatementSyntax}, while lines 28-30 define a comment for the factory method.
\begin{lstlisting}[label=lst:roslyn_extension_tre_xml,
caption={AtomicStatement \ac{XML} definition},
language=XML,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
morekeywords={Name, Base, Type}] % Start your code-block
<Node Name="AtomicStatementSyntax" Base="StatementSyntax">
<Kind Name="AtomicStatement"/>
<Field Name="AtomicKeyword" Type="SyntaxToken">
<Kind Name="AtomicKeyword"/>
<PropertyComment>
<summary>
Gets a SyntaxToken that represents the atomic keyword.
</summary>
</PropertyComment>
</Field>
<Field Name="Statement" Type="StatementSyntax">
<PropertyComment>
<summary>
Gets a StatementSyntax that represents the statement to be executed when the condition is true.
</summary>
</PropertyComment>
</Field>
<Field Name="Orelses" Type="SyntaxList<OrelseSyntax>">
<PropertyComment>
<summary>
Gets a SyntaxList containing the orelse blocks associated with the atomic statement.
</summary>
</PropertyComment>
</Field>
<TypeComment>
<summary>Represents an atomic block.</summary>
</TypeComment>
<FactoryComment>
<summary>Creates an AtomicStatementSyntax node</summary>
</FactoryComment>
</Node>
\end{lstlisting}
Declaration of transactional local variables, fields and parameters does not require modifications to the syntax tree as the standard nodes for these constructs allow a collection of modifiers to be associated with the declaration. Any atomic modifiers are simply added to the collection along with any other modifiers.
\subsection{Parser Changes}
As described in \bsref{subsec:roslyn_first_phase}, the C\# parser uses a recursive descent strategy implemented by hand. As customary for recursive descent implementations, each non-terminal has a method responsible for parsing that particular non-terminal. For the new \bscode{atomic}, \bscode{orelse}, and \bscode{retry} constructs, such methods have been added. Furthermore, the methods for parsing local variables, fields, parameters, and properties have been modified to allow for an atomic modifier, as well as generating error messages for any unsupported modifier combinations such as \bscode{atomic const} and \bscode{readonly const}, as defined in \bsref{subsec:design_trans_field}. Errors are associated with the erroneous nodes as is customary for the Roslyn compiler. A later compilation phase generates error messages and cancels code emission if any errors are present in the syntax tree.
\bsref{lst:parse_atomic_block} shows the \bscode{ParseAtomicBlock} method responsible for parsing an \bscode{atomic} block. Line \ref{line:ab_keyword} parses the \bscode{atomic} keyword by returning a token representing the keyword, while line \ref{line:ab_block} parses the block of statements representing the transaction body. On lines \ref{line:ab_orelse_start} to \ref{line:ab_orelse_end} any \bscode{orelse} blocks associated with the \bscode{atomic} statement are parsed, if any are present. Line \ref{line:ab_allocate} allocates a \bscode{SyntaxListBuilder}. The \bscode{Allocate} method reuses existing space if possible, in order to limit the overhead of allocation. As seen on line \ref{line:ab_parse_orelse}, the parsing of the actual \bscode{orelse} block is delegated to its corresponding method. On line \ref{line:ab_free} the space allocated by the call on line \ref{line:ab_allocate} is freed. The \bscode{try finally} construct ensures that the space is freed in case of an exception. Finally, on line \ref{line:ab_return} the syntax factory is used to create the \bscode{AtomicStatementSyntax} that is returned.
\begin{lstlisting}[label=lst:parse_atomic_block,
caption={Method for parsing \bscode{atomic} block},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
escapechar=~,
morekeywords={atomic, retry, orElse, var, get, set}] % Start your code-block
private StatementSyntax ParseAtomicBlock()
{
var atomicKeyword = this.EatToken(SyntaxKind.AtomicKeyword);~\label{line:ab_keyword}~
var block = this.ParseEmbeddedStatement(false);~\label{line:ab_block}~
var orelses = SyntaxFactory.List<OrelseSyntax>();~\label{line:ab_orelse_start}~
var orelseBuilder = _pool.Allocate<OrelseSyntax>();~\label{line:ab_allocate}~
try
{
while (this.CurrentToken.Kind == SyntaxKind.OrelseKeyword)
{
var clause = ParseOrelse();~\label{line:ab_parse_orelse}~
orelseBuilder.Add(clause);
}
orelses = orelseBuilder.ToList();~\label{line:ab_tolist}~
}
finally
{
_pool.Free(orelseBuilder);~\label{line:ab_free}~
}~\label{line:ab_orelse_end}~
return _syntaxFactory.AtomicStatement(atomicKeyword, block, orelses);~\label{line:ab_return}~
}
\end{lstlisting}
%error codes, const atomic, readonly atomic
\subsection{Symbol Changes}
The symbols representing fields, local variables, and parameters have been modified with a new \bscode{IsAtomic} property of type \bscode{bool}, indicating whether the symbol represents an atomic variation of one of these constructs. The logic which creates the symbol table has further been modified to, for each of these constructs, determine whether the declaration is atomic and set the \bscode{IsAtomic} property to the appropriate value.
For each usage of a field, local variable, or parameter, the semantic model allows for the retrieval of a symbol representing the corresponding declaration. Based on the added \bscode{IsAtomic} property, this symbol can be used to determine whether the usage represents a usage of an atomic variable. For cases where only accesses to atomic variables are of interest in relation to the \ac{STM} system, the symbol extension makes it easy to distinguish between accesses to atomic and non-atomic variables.
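A minimal sketch of how a transformation step might use this information is shown in \bsref{lst:symbol_isatomic_sketch}. The helper method is hypothetical and the exact Roslyn \ac{API} surface used by the extension may differ; the \bscode{IsAtomic} member is the property added by the extension.
\begin{lstlisting}[label=lst:symbol_isatomic_sketch,
caption={Sketch of Querying the Extended Symbols},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, string}] % Start your code-block
// Hypothetical helper: given the semantic model and an identifier node,
// determine whether the identifier refers to an atomic declaration.
private static bool RefersToAtomicVariable(SemanticModel model, IdentifierNameSyntax node)
{
    var symbol = model.GetSymbolInfo(node).Symbol;
    var field = symbol as IFieldSymbol;
    if (field != null) return field.IsAtomic;         // property added by the extension
    var local = symbol as ILocalSymbol;
    if (local != null) return local.IsAtomic;         // property added by the extension
    var parameter = symbol as IParameterSymbol;
    if (parameter != null) return parameter.IsAtomic; // property added by the extension
    return false;
}
\end{lstlisting}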
\section{Syntax Tree Transformations}
\label{sec:syntax_tree_transformations}
This section presents the syntax tree transformations performed during the compilation process. In order to prevent errors due to ambiguity between type names, the transformation process uses fully qualified names\cite[p. 73]{csharp2013specificaiton} for any types in the \ac{STM} library. In the examples presented in this section the simple names have been used, in order to improve readability.
\subsection{Atomic Block}
In \bsref{subsec:design_atomic_block} the design for the \bscode{atomic} block is described. \bsref{lst:before_atomic_block} depicts the syntax of the atomic block before transformation.
The transformation of an atomic block is done using the following four steps:
\begin{enumerate}
\item Construct a lambda expression with a body equal to that of the \bscode{atomic} block.
\item Construct lambda expressions for any \bscode{orelse} blocks associated with the \bscode{atomic} block, with bodies equal to that of their respective original definition.
\item Construct a syntax node for the invocation of the \bscode{STMSystem.Atomic} method, supplied with the created lambda expressions as arguments.
\item Replace the \bscode{atomic} block with the invocation of the \bscode{STMSystem.Atomic} method.
\end{enumerate}
For the syntax shown in \bsref{lst:before_atomic_block}, the transformation produces the output shown in \bsref{lst:after_atomic_block}.
\begin{lstlisting}[label=lst:before_atomic_block,
caption={\bscode{atomic} Block Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
atomic
{
//Block
}
orelse
{
//Orelse block
}
\end{lstlisting}
\begin{lstlisting}[label=lst:after_atomic_block,
caption={\bscode{atomic} Block After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
STMSystem.Atomic(() => {
//Block
},
() => {
//Orelse block
});
\end{lstlisting}
As a consequence of translating an atomic block into a lambda expression, return statements inside the lambda expression no longer return out of the enclosing method, as intended by the design described in \bsref{subsec:design_atomic_block}. In order to ensure the wanted semantics, an analysis is performed to identify all atomic blocks containing return statements. For each such block, a return statement is added in front of the generated \bscode{STMSystem.Atomic} invocation. Return statements inside nested transactions must return out of the enclosing method. To accommodate this, the analysis also identifies nested transactions.
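\bsref{lst:atomic_block_return_sketch} sketches the shape of the generated code for an \bscode{atomic} block whose body ends in a return statement. The \bscode{ComputeResult} method is hypothetical, and it is assumed that the \ac{STM} library provides an overload of \bscode{STMSystem.Atomic} which returns the value produced by the transaction body.
\begin{lstlisting}[label=lst:atomic_block_return_sketch,
caption={Sketch of Return Statement Handling},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
// Original STM# source (sketch):
//   atomic {
//       return ComputeResult();
//   }
//
// Generated code: a return statement is placed in front of the
// STMSystem.Atomic invocation so that the value computed inside the
// transaction leaves the enclosing method.
return STMSystem.Atomic(() => {
    return ComputeResult();
});
\end{lstlisting}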
\subsection{Field Types}\label{subsec:extension_field}
In \bsref{subsec:design_trans_field} the design of transactional fields is described. Any field declared \bscode{atomic} must have its type substituted with the corresponding \ac{STM} type in order for the \ac{STM} system to track any changes to the variable. If a specialized type exists, such as \bscode{TMInt} for \bscode{int}, then that type is used. Otherwise, the generic \bscode{TMVar} is used. As the \ac{STM} types act as wrapper objects that allow the \ac{STM} system to track how the wrapped values are accessed, all atomic fields must be initialized to an instance of an \ac{STM} type, as accessing the wrapped value will otherwise cause a \bscode{NullReferenceException}. If an initializer expression is given as part of the field declaration, the constructor of the wrapping \ac{STM} object is given the expression as an argument, initializing the wrapped value to the value computed by the initializer expression. If no initializer expression is given, the wrapped value is initialized to the default value for the wrapped type by instantiating the \ac{STM} object using its parameterless constructor. The transformation of an atomic field declaration follows the steps described below:
\begin{enumerate}
\item Determine the type of the wrapping \ac{STM} object, based on the type of the field declaration.
\item For each variable declared as part of the field declaration, construct an object instantiation expression following the approach described above. This expression serves as the new initializer for the particular variable it was created for.
\item Construct a new field declaration with the same access modifiers and variable names as the original declaration, but substituting the type with the \ac{STM} object type, and initializer expressions with the created object instantiation expressions.
\item Replace the original field declaration with the constructed field declaration.
\end{enumerate}
\bsref{lst:before_atomic_field} presents an example of two \bscode{atomic} field declarations before transformation, while \bsref{lst:after_atomic_field} shows the result of applying the transformation. Line \ref{line:af_field1_before} of \bsref{lst:before_atomic_field} is transformed to line \ref{line:af_field1_after} of \bsref{lst:after_atomic_field}. The type of the field is changed to the specialized integer \ac{STM} type \bscode{TMInt}, and the initializer expression is used to initialize the value of the created \bscode{TMInt} object. Line \ref{line:af_field2_before} of \bsref{lst:before_atomic_field} is transformed to line \ref{line:af_field2_after} of \bsref{lst:after_atomic_field}. The type is transformed to \bscode{TMVar<string>}. For \bscode{field3}, whose original definition does not contain an initializer expression, an initializer expression has been created following the previously described procedure.
\begin{lstlisting}[ label=lst:before_atomic_field,
caption={\bscode{atomic} Field Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
public class AtomicFieldExample
{
private atomic int field1 = 1;~\label{line:af_field1_before}~
public atomic string field2 = "Hello world!", field3;~\label{line:af_field2_before}~
}
\end{lstlisting}
\begin{lstlisting}[label=lst:after_atomic_field,
caption={\bscode{atomic} Field After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, string}] % Start your code-block
public class AtomicFieldExample
{
private TMInt field1 = new TMInt (1);~\label{line:af_field1_after}~
public TMVar<string> field2 = new TMVar<string>("Hello world!"), field3 = new TMVar<string>();~\label{line:af_field2_after}~
}
\end{lstlisting}
\subsection{Properties}\label{sec:roslyn_extension_properties}
In \bsref{subsec:design_properties} the design for transactional properties was described. In order to provide the wanted semantics for the automatic form, the transformation involves two parts: an atomic backing field, and a manual property. The transformation takes the following approach for each transactional property identified in the syntax tree:
\begin{enumerate}
\item Construct a get-body with the access modifier of the original property's get. The get-body contains a block that returns the value of the backing field. The backing field is not yet constructed, but a method is used to determine its future identifier to ensure a correct reference.
\item Construct a set-body with the access modifiers of the original property's set. The set-body contains a block where \bscode{value} is assigned to the backing field.
\item Construct a manual property with the access modifier of the original property and the get-body and set-body constructed earlier.
\item Construct a private transactional field of the same type as the original property, used as backing field.
\item Insert the new property after the original property and replace the original property with the private transactional field.
\end{enumerate}
Transactional properties are transformed before transactional fields. This enables the transformation of transactional properties to simply generate a backing field which is declared atomic and rely on the transformation of transactional fields to transform the type of the field to the correct \ac{STM} type. No transformation has to be done for the manual form, as the transactional field used for backing the property is processed as described in \bsref{subsec:extension_field}. The automatic form before transformation is exemplified in \bsref{lst:before_atomic_property}, where the transformation result is shown in \bsref{lst:after_atomic_property}.
\begin{lstlisting}[ label=lst:before_atomic_property,
caption={\bscode{atomic} Property Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orElse, var, get, set, string}] % Start your code-block
class Car {
public atomic int KmDriven { get; set; }
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_property,
caption={\bscode{atomic} Property After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orElse, var, get, set}] % Start your code-block
class Car {
private atomic int _kmDriven;
public int KmDriven {
get {
return _kmDriven;
}
set {
_kmDriven = value;
}
}
}
\end{lstlisting}
\subsection{Local Variables}
In \bsref{subsec:local_variables} the design for transactional local variables is described. Similar to fields modified with the \bscode{atomic} keyword, \bscode{atomic} local variables must have their types substituted with the corresponding \ac{STM} types. Thus, the approach is similar to the one described in \bsref{subsec:extension_field}, with the exception that a local variable can be declared without specifying the type by using the \bscode{var} keyword, relying on compile-time type inference to determine the type. Since the type has not yet been determined at the point of the transformation, Roslyn's \ac{API} is utilized to infer the type and replace the \bscode{var} keyword with the type. This is done by identifying all local declaration statements with the type \bscode{var} and the \bscode{atomic} modifier.
The transformation is exemplified in \bsref{lst:before_atomic_variable}, where on line \ref{line:av_variable1_before} a local variable is declared by using the \bscode{var} keyword. The r-value of the statement is an expression whose type can be identified using the Roslyn compiler's \ac{API}. In \bsref{lst:after_atomic_variable} on line \ref{line:av_variable1_after} the result shows that the type was inferred to be a \bscode{string}, and thus the \bscode{var} is replaced by a \bscode{TMVar<string>}.
\begin{lstlisting}[ label=lst:before_atomic_variable,
caption={\bscode{atomic} Local Variables Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
public class AtomicVarExample
{
public void Method()
{
atomic var variable1 = "Hello World!";~\label{line:av_variable1_before}~
atomic int variable2 = 42;~\label{line:av_variable2_before}~
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_variable,
caption={\bscode{atomic} Local Variables After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, string}] % Start your code-block
public class AtomicVarExample
{
public void Method()
{
TMVar<string> variable1 = new TMVar<string>("Hello World!");~\label{line:av_variable1_after}~
TMInt variable2 = new TMInt(42);~\label{line:av_variable2_after}~
}
}
\end{lstlisting}
\subsection{Accessing Transactional Variables}
\label{subsec:roslyn_extension_accessing_variables}
As described in \bsref{subsec:stm_impl_transactional_variables}, setting the value of a transactional variable is done through the \bscode{Value} property of the supplied \ac{STM} types. As the type of any field, local variable, or parameter declared \bscode{atomic} in \stmnamesp is changed to the corresponding \ac{STM} type, any assignment must go through the \bscode{Value} property. Additionally, the \ac{STM} types provide implicit conversion to the wrapped value when appearing as r-values. The transformation cannot, however, rely on implicit conversion in all cases as, for example, the comparison of two \bscode{TMInt} objects results in reference comparison instead of the expected integer comparison. As such, the r-value appearances of any transactional field, local variable, or parameter must also be transformed to access the \bscode{Value} property, which ensures that the wrapped object is accessed instead of the \ac{STM} object. Special handling is given to the \bscode{++} and \bscode{--} operators, as the numeric \ac{STM} types supply transactional implementations of these operators. The transformation ensures that these operators can be used on a transactional variable directly, instead of on its \bscode{Value} property.
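\bsref{lst:atomic_increment_sketch} illustrates this special case for a hypothetical \bscode{atomic int} field named \bscode{counter}, which the extension has transformed into a \bscode{TMInt}.
\begin{lstlisting}[label=lst:atomic_increment_sketch,
caption={Sketch of Increment Operator Handling},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
// counter is assumed to be an atomic int field, transformed into a TMInt.
// An ordinary r-value access is rewritten to go through the Value property:
int doubled = counter.Value * 2;
// The ++ operator is left to operate on the TMInt object itself, as the
// numeric STM types supply transactional operator implementations:
counter++;
\end{lstlisting}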
The transformation of access to transactional variables is divided into two parts. The first deals with the usage of a transactional variable occurring as a single identifier such as \bscode{someTMVar}. The second part handles member access expressions such as \bscode{object.tmfield1.tmfield2}. While these two implementations differ in the syntactic constructs they work on, they both follow the overall steps described below:
\begin{enumerate}
\item Identify each usage of a transactional field, local variable, or parameter including both r- and l-value occurrences.
\item Construct a member access expression that accesses the \bscode{Value} property of the identified variable.
\item Replace the usage of the variable with the constructed member access expression.
\end{enumerate}
\bsref{lst:before_atomic_usage} presents an example of accessing a transactional field, a local variable, and a parameter. \bsref{lst:after_atomic_usage} shows the result of applying the transformation. The assignment on line \ref{line:au_assigment} of \bsref{lst:before_atomic_usage} is transformed to access the \bscode{Value} property of both its left and right side, as both of the variables involved are transactional. The result of the transformation is shown on line \ref{line:au_assigment_after} of \bsref{lst:after_atomic_usage}. The member access expression on line \ref{line:au_member_access} of \bsref{lst:before_atomic_usage} is likewise transformed to access the \bscode{Value} property of both transactional fields involved. The resulting code is shown on line \ref{line:au_member_access_after} of \bsref{lst:after_atomic_usage}.
\begin{lstlisting}[ label=lst:before_atomic_usage,
caption={Usage of \bscode{atomic} Variables Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
public class AtomicExample
{
public atomic AtomicExample aField;
public AtomicExample ExampleMethod(atomic int i)
{
atomic int k = 0;
k = i;~\label{line:au_assigment}~
return aField.aField;~\label{line:au_member_access}~
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_usage,
caption={Usage of \bscode{atomic} Variables After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
public class AtomicExample
{
public TMVar<AtomicExample> aField = new TMVar<AtomicExample>();
public AtomicExample ExampleMethod(TMInt i)
{
TMInt k = new TMInt(0);
k.Value = i.Value;~\label{line:au_assigment_after}~
return aField.Value.aField.Value;~\label{line:au_member_access_after}~
}
}
\end{lstlisting}
\subsection{Parameters}
\label{subsec:roslyn_extension_parameters}
As with transactional local variables and transactional fields, the type of a parameter declared atomic must be changed to the corresponding \ac{STM} type in order for the \ac{STM} system to track assignments to it. \bsref{lst:before_atomic_parameter} presents a method taking two \bscode{atomic} parameters while \bsref{lst:after_atomic_parameter} presents the result of applying the parameter transformation. Each atomic parameter is transformed individually and any \bscode{ref} or \bscode{out} modifiers are preserved.
\begin{lstlisting}[ label=lst:before_atomic_parameter,
caption={\bscode{atomic} Parameters Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, bool, ref, string}] % Start your code-block
public class AtomicParameterExample
{
public void TestMethod(atomic int x, bool b, atomic ref string s)~\label{line:ap_before}~
{
//Body
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_parameter,
caption={\bscode{atomic} Parameters After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ref, bool, string}] % Start your code-block
public class AtomicParameterExample
{
public void TestMethod(TMInt x, bool b, ref TMVar<string> s)~\label{line:ap_after}~
{
//Body
}
}
\end{lstlisting}
\subsubsection{Transactional Output Parameters}
Transactional output parameters, described in \bsref{subsec:stm_desgin_out_parameters}, require additional handling, as C\# requires every execution path in the method body to assign a value to the parameter\cite[p. 42]{sestoft2011c}. As described in \bsref{subsec:roslyn_extension_accessing_variables}, any assignment to a variable is replaced with an assignment to its \bscode{Value} property. As a consequence, no assignments occur to the transactional parameter itself, which results in an error in the generated code. To rectify this error, an assignment, assigning a new \ac{STM} object of the same type as the parameter, is generated at the top of the method body for every \bscode{atomic out} parameter of that particular method. For each \bscode{atomic out} parameter, the \stmnamesp transformation contains the following steps:
\begin{enumerate}
\item Construct an object initialization expression that creates a new \ac{STM} object of the same type as the parameter.
\item Construct an assignment statement that assigns the newly constructed object initialization expression to the \bscode{atomic out} parameter.
\item Insert the assignment statement as the first statement in the body of the enclosing method declaration.
\end{enumerate}
\bsref{lst:before_atomic_out_parameter} shows a method with an \bscode{atomic out} parameter while \bsref{lst:after_atomic_out_parameter} shows the result of applying the transformation.
Based on the \bscode{atomic out} parameter declared on line \ref{line:aout_before} of \bsref{lst:before_atomic_out_parameter} the assignment on line \ref{line:aout_after} of \bsref{lst:after_atomic_out_parameter} is generated.
\begin{lstlisting}[ label=lst:before_atomic_out_parameter,
caption={\bscode{atomic out} Parameter Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ret, out}] % Start your code-block
public class AtomicOutExample
{
public static void TestMethodAtomic(atomic out int i, int j)~\label{line:aout_before}~
{
i = 12;
j = 12;
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_out_parameter,
caption={\bscode{atomic out} Parameter After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ret, out}] % Start your code-block
public class AtomicOutExample
{
public static void TestMethodAtomic(out TMInt i, int j)
{
i = new TMInt();~\label{line:aout_after}~
i.Value = 12;
j = 12;
}
}
\end{lstlisting}
While the generated assignment solves the problem of assignments to \bscode{atomic out} parameters, it introduces two new problems. Firstly, C\# requires that every \bscode{out} parameter is assigned a value before a method exits, generating a compile time error if that is not the case\cite[p. 94]{csharp2013specificaiton}. This error message is lost for transactional output parameters due to the generated assignment. Secondly, C\# requires that an \bscode{out} parameter is assigned to before reading from it\cite[p. 94]{csharp2013specificaiton}, generating a compile time error if a read occurs before an \bscode{out} parameter is assigned a value. If a read occurs before any assignment in the original source code, no error is given due to the generated assignment. In both cases a meaningful error can be generated by applying analysis to a transactional output parameter's \bscode{Value} property by following the rules defined in \cite[p. 95]{csharp2013specificaiton}. No such analysis has, however, been implemented in the initial prototype of \stmname.
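The first problem can be illustrated with the hypothetical example in \bsref{lst:atomic_out_lost_error_sketch}: a method that never assigns its \bscode{atomic out} parameter compiles without complaint after transformation, because the generated assignment satisfies the definite assignment rules.
\begin{lstlisting}[label=lst:atomic_out_lost_error_sketch,
caption={Sketch of a Lost Definite Assignment Error},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, out}] % Start your code-block
// Hypothetical STM# source, rejected by plain C# for a regular out parameter
// because i is never assigned:
//   public static void Forgot(atomic out int i) { }
//
// Generated code: the inserted initialization makes the parameter
// definitely assigned, so no error is reported.
public static void Forgot(out TMInt i)
{
    i = new TMInt();
}
\end{lstlisting}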
\subsection{Arguments}
As described in \bsref{subsec:stm_desgin_value_parameters}, \stmnamesp supports value parameters. Replacing the type of an atomic parameter with the corresponding \ac{STM} type, as described in \bsref{subsec:roslyn_extension_parameters}, results in a type mismatch when calling a method with an \bscode{atomic} parameter of some type $T$, as the parameter is no longer of type $T$, but of $T$'s corresponding \ac{STM} type. As a result, a transformation must be applied to arguments passed to an \bscode{atomic} parameter, ensuring that the argument represents an object of the required \ac{STM} type. The transformation applied when an argument is passed to an \bscode{atomic} parameter goes through the following steps:
\begin{enumerate}
\item Determine the \ac{STM} type of the formal parameter to which the argument corresponds.
\item Construct an object initialization expression, which creates an object of the previously determined \ac{STM} type, where the argument expression is given as argument to the constructor of the object.
\item Replace the argument with the constructed object initialization expression.
\end{enumerate}
\bsref{lst:before_atomic_argument} presents an example of calling a method with an \bscode{atomic} parameter, while \bsref{lst:after_atomic_argument} shows the result of the transformation. Line \ref{line:aa_param_before} is transformed as described in \bsref{subsec:roslyn_extension_parameters}. Line \ref{line:aa_before} in \bsref{lst:before_atomic_argument} is transformed to line \ref{line:aa_after} in \bsref{lst:after_atomic_argument}. The argument to the \bscode{atomic} parameter is replaced with an object initialization expression that creates a new \bscode{TMInt} object which is initialized using the original argument.
\begin{lstlisting}[ label=lst:before_atomic_argument,
caption={\bscode{atomic} Argument Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ret, out}] % Start your code-block
public class AtomicArgumentExample
{
public static void TestMethod(atomic int x, int y)~\label{line:aa_param_before}~
{
//Body
}
public static void Main()
{
TestMethod(1, 2);~\label{line:aa_before}~
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_argument,
caption={\bscode{atomic} Argument After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ret, out}] % Start your code-block
public class AtomicArgumentExample
{
public static void TestMethod(TMInt x, int y)
{
//Body
}
public static void Main()
{
TestMethod(new TMInt(1), 2);~\label{line:aa_after}~
}
}
\end{lstlisting}
\subsubsection{Ref/Out Arguments}
Supporting \bscode{atomic} \bscode{ref} and \bscode{out} parameters, as described in \bsref{subsubsec:trans_ref_out_args}, presents a problem due to the type mismatch as a result of transforming \bscode{atomic} variables. \bscode{ref} and \bscode{out} parameters require the argument to be an assignable variable of the same type as the parameter. However as the type of the parameter is transformed, a variable of type $T$ cannot be passed directly as \bscode{ref} or \bscode{out} to an \bscode{atomic} parameter of type $T$.
Four different cases exist for passing \bscode{ref} and \bscode{out} arguments. These are:
\begin{enumerate}
\item $T$ $\quad \rightarrow \quad$ \bscode{atomic} $T$
\item \bscode{atomic} $T$ $\quad \rightarrow\quad$ $T$
\item \bscode{atomic} $T$ $\quad \rightarrow\quad$ \bscode{atomic} $T$
\item $T$ $\quad \rightarrow \quad$ $T$
\end{enumerate}
where $T$ is some arbitrary type, \bscode{atomic} $T$ is $T$'s corresponding \ac{STM} type, and \bscode{atomic} $T$ $\quad \rightarrow \quad$ $T$ represents passing an argument of the \ac{STM} type corresponding to $T$ into a parameter of type $T$. The third and fourth cases are handled by the C\# compiler, so transformation must only be applied in the first and second cases.
For each argument node in the syntax tree the transformation goes through the following steps:
\begin{enumerate}
\item Determine whether the argument represents one of the two previously described cases. If so, the remaining transformation steps are applied.
\item Construct a local variable with a type definition equal to that of the parameter, initialized using the original argument, and insert it just before the method call.
\item Replace the original argument with an identifier equal to the name of the previously generated intermediate local variable, passed using the same modifier as the original argument.
\item Construct an assignment syntax node that assigns the value of the intermediate local variable to the original argument and insert it just after the method call.
\end{enumerate}
\bsref{lst:before_atomic_ref} shows an example containing three method calls, where the first and third calls fall into one of the categories which require transformation. Applying the transformation to the example presented in \bsref{lst:before_atomic_ref} produces a syntax tree representing the code shown in \bsref{lst:after_atomic_ref}. Line \ref{line:aref1} of \bsref{lst:before_atomic_ref} is transformed to lines \ref{line:aref1_after_1} to \ref{line:aref1_after_3} of \bsref{lst:after_atomic_ref}. As the parameter is \bscode{atomic}, the local variable inserted on line \ref{line:aref1_after_1} is of the parameter's corresponding \ac{STM} type. The argument to the method call has been replaced as seen on line \ref{line:aref1_after_2}. The assignment on line \ref{line:aref1_after_3} assigns the value computed by the called method to the original argument. Line \ref{line:aref2} of \bsref{lst:before_atomic_ref} is not transformed as it represents the case \bscode{atomic} $T$ $\quad \rightarrow \quad$ \bscode{atomic} $T$. Line \ref{line:aref3} of \bsref{lst:before_atomic_ref} is transformed much like line \ref{line:aref1}, except that the generated local variable is of type \bscode{int} instead of an \ac{STM} type, as the parameter is not declared \bscode{atomic}.
C\# requires the argument for a \bscode{ref} or \bscode{out} parameter to be a variable. For the cases $T$ $\quad \rightarrow \quad$ \bscode{atomic} $T$ and \bscode{atomic} $T$ $\quad \rightarrow \quad$ $T$, compile-time analysis has been implemented to generate an error if the original argument does not correspond to a variable, since the argument in the generated code is always the intermediate variable and would therefore always pass the standard check.
%In order simplify the language and maintain the same semantics whenever a atomic parameter or variable is passed as an argument or
\begin{lstlisting}[ label=lst:before_atomic_ref,
caption={\bscode{ref} Arguments Before Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ref, out}] % Start your code-block
public class AtomicRefExample
{
public static void TestMethodAtomic(atomic ref int i)
{
i = 12;
}
public static void TestMethod(ref int i)
{
i = 12;
}
public static void Main()
{
int i = 10;
atomic int iAtomic = 10;
TestMethodAtomic(ref i);~\label{line:aref1}~
TestMethodAtomic(ref iAtomic);~\label{line:aref2}~
TestMethod(ref iAtomic);~\label{line:aref3}~
}
}
\end{lstlisting}
\begin{lstlisting}[ label=lst:after_atomic_ref,
caption={\bscode{ref} Arguments After Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ref, out}] % Start your code-block
public class AtomicRefExample
{
public static void TestMethodAtomic(ref TMInt i)
{
i.Value = 12;
}
public static void TestMethod(ref int i)
{
i = 12;
}
public static void Main()
{
int i = 10;
TMInt iAtomic = new TMInt(10);
TMInt _gen1 = new TMInt(i);~\label{line:aref1_after_1}~
TestMethodAtomic(ref _gen1);~\label{line:aref1_after_2}~
i = _gen1.Value;~\label{line:aref1_after_3}~
TestMethodAtomic(ref iAtomic);~\label{line:aref2_after_1}~
int _gen3 = iAtomic.Value;~\label{line:aref3_after_1}~
TestMethod(ref _gen3);~\label{line:aref3_after_2}~
iAtomic.Value = _gen3;~\label{line:aref3_after_3}~
}
}
\end{lstlisting}
\subsection{Retry}
In \bsref{sec:sync_design} the design of conditional synchronization through the use of \bscode{retry} is described. As \bscode{retry} is a keyword used as a statement, much like the \bscode{break} and \bscode{continue} statements in C\#, the transformation needs to identify all \bscode{retry} keywords and replace each with an invocation of the static method \bscode{STMSystem.Retry} defined in the \ac{STM} library, described in \bsref{subsec:impl_retry}. The library then carries out the effect of the \bscode{retry} statement. To inform programmers of unintended behavior, analysis has been implemented that generates an error if the \bscode{retry} statement is used outside of an \bscode{atomic} or \bscode{orelse} block.
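\bsref{lst:retry_transformation_sketch} sketches the rewrite, using a hypothetical transactional field \bscode{itemsAvailable} declared as \bscode{atomic int}; only the shape of the transformation is of interest here.
\begin{lstlisting}[label=lst:retry_transformation_sketch,
caption={Sketch of the \bscode{retry} Transformation},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
// Original STM# source (sketch):
//   atomic {
//       if (itemsAvailable == 0)
//           retry;
//       itemsAvailable--;
//   }
//
// After transformation the retry statement becomes a library call:
STMSystem.Atomic(() => {
    if (itemsAvailable.Value == 0)
        STMSystem.Retry();
    itemsAvailable--;
});
\end{lstlisting}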
\section{Testing}\label{sec:roslyn_extension_testing}
Testing is employed to ensure the Roslyn extension fulfills the design decisions of \stmnamesp and the integration with existing language features described in \bsref{chap:stm_design}.
For the reasons described in \bsref{sec:stm_impl_testing}, unit testing is chosen for testing the Roslyn extension. Unit testing is valuable in relation to the maintenance and further development of the Roslyn extension. Furthermore, Roslyn is still under heavy development, which means that radical changes are to be expected. The unit test suite makes it possible to verify that the extension still works whenever new changes are made to Roslyn. The disadvantage of unit testing being more time consuming than ad hoc testing is outweighed by the many advantages gained. Additionally, each unit test is built using a black-box testing strategy\cite[p. 87]{graham2008foundations}, where the program is viewed as a black box and focus is only on the input and output. This is done as the focus of the unit tests is on the functionality of the compiler. That is, the tests focus on what the compiler does, not on how it does it. Additionally, it fits well with how a compiler is normally treated: as a black box where only the input and output are available.
Each \ac{STM} construct and each integration with an existing language feature is covered by at least a single unit test. Furthermore, integration with disallowed existing language features is tested, e.g. there is a unit test that ensures an error is produced if the programmer tries to use the \bscode{retry} statement outside of an \bscode{atomic} or \bscode{orelse} block. This results in a rather extensive test suite, which is valuable in relation to the correctness of the \ac{STM} constructs and integration with language features of \stmname.
\bsref{lst:empty_atomic_block} shows an example of a unit test for an empty atomic block. On line \ref{line:ab_finalstr} the string \bscode{finalStr} to be compiled is generated using the \bscode{MakeSkeletonWithMain} method, which generates a test namespace, class and main method, with the supplied string arguments \bscode{strInClass} and \bscode{strInMain} included. Afterwards, the string is written to a test file, which is used as an argument to the method on line \ref{line:ab_runcsc}, which executes \bscode{csc.exe} (the C\# command line compiler) and returns a \bscode{CmdRes} object that contains the command line output. On line \ref{line:ab_assertcsc} it is checked that the command line output does not contain any warning, error, or invalid exit code. For unit tests that check error generation, e.g. an illegal existing language feature used with an \ac{STM} construct, this step is not present; instead, \bscode{CmdRes} can be checked for whether it contains the expected error or warning. On line \ref{line:ab_expecstr} the expected result of the compilation string \bscode{expecStr} is generated, and on line \ref{line:ab_compiledstr} the actual compilation string \bscode{compiledStr} is fetched. In order to retrieve the actual compilation string, csc.exe is extended with an additional argument called \bscode{stmiout}, which writes the source code after \ac{STM} transformation to the specified path. This argument is used in the method on line \ref{line:ab_runcsc}. Finally, on line \ref{line:ab_asserteq} the expected string and the compiled string are checked for equality; if they are equal the test passes, otherwise it fails. Before checking for equality, the \bscode{AssertEqualStrings} method removes formatting from the strings.
\begin{lstlisting}[float, label=lst:empty_atomic_block,
caption={Empty Atomic Block Unit Test},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
escapechar=~,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set, ref, out, Test}] % Start your code-block
[Test]
public void AtomicBlockEmpty()
{
string finalStr = MakeSkeletonWithMain(~\label{line:ab_finalstr}~
strInClass: "",
strInMain: "atomic{\n\t\t\t"+
"}");
StringToTestFile(finalStr);~\label{line:ab_strtofile}~
CmdRes res = RunCscWithOutAndStmIOut();~\label{line:ab_runcsc}~
AssertCmdRes(res);~\label{line:ab_assertcsc}~
string expecStr = MakeSkeletonWithMain(~\label{line:ab_expecstr}~
strInClass: "",
strInMain: STM.STMNameSpace + ".STMSystem.Atomic(" +
"() =>" +
"{" +
"});");
string compiledStr =~\label{line:ab_compiledstr}~ TestFileToString(currentCompiledCsFile);
AssertEqualStrings(expecStr, compiledStr);~\label{line:ab_asserteq}~
}
\end{lstlisting}
%\toby[i]{Hvad med errors? Errorcode og stm diagnostics bag?}
%ERRORS: (hvordan smides de) - er det her det skal besrkives?
%STM diagnostic for semantic errors
%errors i paraser, kan bruge ErrorCode Enum osv.
\section{Design and Integration Revisited}\label{sec:design_and_int_revisited}
This section describes areas in which the initial prototype implementation described in this chapter conflicts with the intended design and language integration described in \bsref{chap:stm_design}. Ideally, no conflicts would exist, as the analysis in the design and language integration chapter would have foreseen them. However, implementing the design gave insight into unforeseen areas.
\subsection{Transactional Local Variables and Fields}
As described in \bsref{subsec:extension_field} and \bsref{subsec:local_variables}, local variables and fields of type $T$ declared \bscode{atomic} are transformed to declarations with a type equal to $T$'s corresponding \ac{STM} type. As part of this process, the local variable or field is initialized to an \ac{STM} object of the correct type, using any initializer expression if present. If not, the value of the variable is initialized to the default value of type $T$ through the \ac{STM} object's parameterless constructor.
For fields this presents no issue, as fields are always initialized to the default value of their type if no initializer expression is present\cite[p. 93]{csharp2013specificaiton}. This is, however, not the case for local variables. The C\# compiler generates an error if a local variable is accessed before it has been assigned a value\cite[p. 96]{csharp2013specificaiton}. As \bscode{atomic} local variables are always initialized, this error will never occur for such variables, which can lead to unintended behavior in cases where the programmer forgets to assign an \bscode{atomic} local variable. In such cases, the default value of the \bscode{atomic} local variable's original type will be the value accessed, potentially leading to some confusion.
In order to provide similar error messages for \bscode{atomic} local variables and regular C\# local variables, the Roslyn flow analysis, which detects whether a given accessed local variable has been definitely assigned, must be extended to track both initial assignment, as part of the variable declaration, and assignment to the \ac{STM} object's \bscode{Value} property, as opposed to assignments directly to the variable. Such analysis has, however, not been included in the initial prototype described in this chapter.
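The issue can be illustrated with the hypothetical snippet in \bsref{lst:unassigned_atomic_local_sketch}, showing the generated code for a declaration \bscode{atomic int count;} that is read without ever being assigned.
\begin{lstlisting}[label=lst:unassigned_atomic_local_sketch,
caption={Sketch of an Unassigned \bscode{atomic} Local Variable},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
public class UnassignedAtomicLocalExample
{
    public void Method()
    {
        // The original STM# source declared "atomic int count;" and read count
        // without assigning it. Plain C# would reject the read of an unassigned
        // local; the generated initialization below makes the read legal and
        // silently yields the default value 0.
        TMInt count = new TMInt();
        System.Console.WriteLine(count.Value);
    }
}
\end{lstlisting}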
\subsection{Transactional Optional Parameters}
C\# requires that the default value given when declaring an optional parameter is one of the following\cite[p. 309]{csharp2013specificaiton}:
\begin{enumerate}
\item A constant expression
\item An expression of the form \bscode{new S()} where \bscode{S} is a value type
\item An expression of the form \bscode{default(S)} where \bscode{S} is a value type
\end{enumerate}
As described in \bsref{subsec:roslyn_extension_parameters}, the transformation of an \bscode{atomic} parameter transforms the type of the parameter to one of the \ac{STM} types. Providing a default value to an \bscode{atomic} parameter therefore amounts to initializing a new \ac{STM} object of the same type as the parameter, with the defined default value supplied to its constructor. All \ac{STM} types are, however, reference types, and the creation of a new \ac{STM} object can therefore, per the three rules described above, not be supplied as the replacement default value, as this would result in a compile time error. As a result, default values for transactional value parameters are not implemented in the initial prototype described in this chapter. Analysis has been implemented to produce an error if an \bscode{atomic} value parameter is given a default value. This is done in the parsing phase, by checking if a parameter is declared with the \bscode{atomic} modifier and has an optional value.
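The restriction can be illustrated with the hypothetical declaration in \bsref{lst:atomic_optional_parameter_sketch}; the direct transformation of the default value is shown only in comments, as the resulting declaration is rejected by the C\# compiler.
\begin{lstlisting}[label=lst:atomic_optional_parameter_sketch,
caption={Sketch of a Rejected \bscode{atomic} Optional Parameter},
language=Java,
showspaces=false,
showtabs=false,
breaklines=true,
showstringspaces=false,
breakatwhitespace=true,
commentstyle=\color{greencomments},
keywordstyle=\color{bluekeywords},
stringstyle=\color{redstrings},
morekeywords={atomic, retry, orelse, var, get, set}] % Start your code-block
// Hypothetical STM# source:
//   public void Drive(atomic int km = 100) { ... }
//
// A direct transformation would have to produce the following declaration,
// which C# rejects: new TMInt(100) is neither a constant expression nor a
// default value of a value type, and TMInt is a reference type.
//   public void Drive(TMInt km = new TMInt(100)) { ... }
\end{lstlisting}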
\subsection{Transactional Ref/Out Parameters \& Arguments}\label{subsec:roslyn_extension_ref_out_revisited}
As described in \bsref{subsec:stm_desgin_ref_parameters} and \bsref{subsec:stm_desgin_out_parameters} the intended design was that assignments to transactional \bscode{ref/out} parameters would take effect when the transaction in which the assignment takes place commits. This behavior holds true for the case:
\begin{itemize}
\item $\bscode{atomic} \quad T \quad \rightarrow \quad \bscode{atomic} \quad T$
\end{itemize}
That is, the case involving an \bscode{atomic} argument and parameter. For the cases:
\begin{itemize}
\item $T \quad \rightarrow \quad \bscode{atomic} \quad T$
\item $\bscode{atomic} \quad T \quad \rightarrow \quad T$
\end{itemize}
the behavior is, however, different. Due to the type mismatch between the argument and parameter, an intermediate local variable is passed as the actual argument, and the original argument is assigned immediately after the call finishes. This conflicts with traditional C\#, where any assignment to a \bscode{ref/out} parameter takes effect immediately\cite[p. 76]{sestoft2011c}. If the parameter is declared \bscode{atomic} and the method call is wrapped in an \bscode{atomic} block, there is no difference in timing, as any assignments to atomic variables take effect when the transaction commits. However, if the call is not wrapped in an \bscode{atomic} block, the difference in timing can have unintended consequences, especially when considering concurrent execution. Due to this issue, a number of ways of handling \bscode{atomic ref/out} parameters and arguments were investigated:
\begin{enumerate}
\item Warn the programmer if one of the two problematic cases is used.
\item Disallow the two problematic cases described above.
\item Disallow \bscode{atomic ref/out} parameters.
\end{enumerate}
In C\#, passing a volatile field as \bscode{ref/out} results in the field being treated as non-volatile within the method body\cite{csharpVolatileRef}. If such an operation is detected, the compiler emits a warning informing the programmer of the change in semantics. The same approach could be taken whenever one of the two problematic cases described earlier is detected.
Disallowing the \bscode{ref/out} cases where both an \bscode{atomic} and a non-\bscode{atomic} variable/parameter are involved removes the cases where the desired timing cannot be provided. However, it would, for example, no longer be possible to pass an \bscode{atomic} integer into an \bscode{out} integer parameter such as that of \bscode{Int32.TryParse}. Such a restriction limits the flexibility of \bscode{atomic} variables, and the programmer would need to circumvent it through intermediate variables, which clutters the code and reduces readability.
Disallowing \bscode{atomic ref/out} parameters prevents the programmer from declaring \bscode{atomic ref/out} parameters, which removes the timing issues present in the cases where an intermediate \bscode{atomic} variable is required. However, the ability to pass an \bscode{atomic} variable by reference is lost, and the case of passing a variable of type \bscode{atomic} $T$ into a parameter of type $T$ remains unsolved. Consequently, this would limit the orthogonality of the \bscode{atomic} construct.
For the initial prototype presented in this chapter the first option, warning the programmer of the change in semantics, has been selected. This ensures that the change in semantics is not hidden from the programmer, thereby allowing her to adapt the program. This choice follows the approach taken by C\# in the case of a \bscode{volatile} field passed by reference into a method, as described above.
\worksheetend
"alphanum_fraction": 0.7809376544,
"avg_line_length": 75.6706875754,
"ext": "tex",
"hexsha": "01bbabb684634546028d7702a75f210a62a91a5f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "96741cffdbf14bfa5ca0664ad96c1834f8bf9d07",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Felorati/Thesis",
"max_forks_repo_path": "report/worksheets/roslyn_extension/roslyn_extension.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "96741cffdbf14bfa5ca0664ad96c1834f8bf9d07",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Felorati/Thesis",
"max_issues_repo_path": "report/worksheets/roslyn_extension/roslyn_extension.tex",
"max_line_length": 1874,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "96741cffdbf14bfa5ca0664ad96c1834f8bf9d07",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Felorati/Thesis",
"max_stars_repo_path": "report/worksheets/roslyn_extension/roslyn_extension.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 14883,
"size": 62731
} |
\chapter{Mass and Inertia}
\section{Empty Aircraft Moments of Inertia}
The aircraft is divided into structure groups whose masses are estimated. Each group is assumed to be a homogeneous rigid body with a simple shape, which allows its moment of inertia to be calculated using an exact closed-form expression, given e.g. in \cite{HousnerHudson1980}.
Steiner's theorem (the parallel axis theorem), given by the following expression, is used to express the inertia tensor of each structure group in the Body Axis System \cite{Taylor2005, ResnickHalliday2011}.
\begin{equation}
\label{eq-mass-steiners}
{\boldsymbol I}_b
=
{\boldsymbol I}_0
+
m
\left[
\begin{matrix}
y^2 + z^2 & -xy & -xz \\
-yx & x^2 + z^2 & -yz \\
-zx & -zy & x^2 + y^2 \\
\end{matrix}
\right]
\end{equation}
The sum of the inertia tensors of all structure groups gives the empty aircraft inertia tensor:
\begin{equation}
{\boldsymbol I}_b = \sum_{j} {\boldsymbol I}_{j,b}
\end{equation}
\section{Variable Masses}
All variable masses (crew, fuel, payload, etc.) are considered to be point masses. The inertia tensor of a point mass can be calculated using formula (\ref{eq-mass-steiners}) with ${\boldsymbol I}_0 = 0$, since a point mass has no inertia about its own center. These tensors are then added to the empty aircraft inertia tensor, giving the total aircraft inertia tensor.
The aircraft's total first moment of mass is given as follows:
\begin{equation}
{\vec S}_b = \sum_{j} m_j {\vec r}_{CM,j,b}
\end{equation}
The position of the aircraft center of mass, including variable masses, is then given by the following formula:
\begin{equation}
{\vec r}_{CM,b} = \frac{ {\vec S}_b }{ \sum_{j} m_j }
\end{equation}
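As an illustration, the following minimal sketch (Python with NumPy; the group masses, positions and local inertia tensors are made-up placeholder values, not data for any particular aircraft) shows how the total inertia tensor and center of mass position could be assembled from the formulas above.
\begin{verbatim}
import numpy as np

def steiner_term(m, r):
    # parallel-axis contribution of mass m located at r = [x, y, z]
    x, y, z = r
    return m * np.array([[y*y + z*z, -x*y,      -x*z],
                         [-y*x,       x*x + z*z, -y*z],
                         [-z*x,      -z*y,       x*x + y*y]])

# structure groups: (mass, position of group center of mass, local tensor I_0)
groups = [(120.0, [1.2, 0.0, 0.3], np.diag([15.0, 40.0, 50.0])),
          (300.0, [2.5, 0.0, 0.0], np.diag([60.0, 90.0, 120.0]))]
# variable masses (crew, fuel, payload): point masses, so I_0 = 0
points = [(80.0, [2.0, 0.4, 0.1]), (200.0, [2.6, 0.0, -0.2])]

I_total = sum(I0 + steiner_term(m, r) for m, r, I0 in groups) \
        + sum(steiner_term(m, r) for m, r in points)

masses    = [m for m, _, _ in groups] + [m for m, _ in points]
positions = [r for _, r, _ in groups] + [r for _, r in points]
S    = sum(m * np.array(r) for m, r in zip(masses, positions))  # first moment
r_cm = S / sum(masses)                                          # center of mass
\end{verbatim}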
\lab{Application}{Cracking Blackjack}{Cracking Blackjack}
\label{Ch:BJ}
\objective{This section teaches how to exploit the weaknesses of a pseudorandom number generator based on linear congruences.}
\section*{Blackjack}
%Lab \ref{BJ}
Blackjack is a card game that involves the use of randomness. The game is simple: the dealer deals the player and himself two cards each. He flips over his first card so the player can see it. The player then has to choose to take another card ("hit") or not ("stand"). If the player hits, he gets another card and again has the choice to hit or stand.
The goal is to get your hand as close to 21 as possible without going over. Face cards are worth 10 points. Aces count as either 11 or 1. The value of the other cards is equal to the number on the card.
Once the player has decided to stand the dealer flips over his second card and deals himself cards until his hand value is 17 or greater.
If the player's hand value goes above 21, he automatically loses. If his value is 21 or below and the dealer's is above 21, then the player wins. If both hands are at 21 or under, the hand with the higher value wins. If both hands have the same value, the game is a tie.
\section*{Shuffling Algorithms}
One use of pseudorandom number generators (PRNGs) is to shuffle cards. The main goal of these shuffling algorithms is that the card order be random, so that no player has an advantage based on order. Online gambling sites often post their algorithms publicly; the only thing they do not post is the seed value. Often the time in milliseconds is used as the seed.
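As a sketch of the idea (this is illustrative Python, not the code used by any gambling site or by the files below, and the parameters here are placeholders), an LCG-based shuffle can be written in a few lines:
\begin{verbatim}
import numpy as np

def lcg(seed, a, c, mod, n):
    # generate n pseudorandom numbers from a linear congruential generator
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % mod
        out.append(x)
    return out

def shuffle_order(seed, a, c, mod):
    # the card order is the argsort of 52 LCG outputs
    return np.argsort(lcg(seed, a, c, mod, 52))
\end{verbatim}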
\section*{Cracking Blackjack}
For these next problems you will need three provided files: Black.py, BlackEasy.py, and bjHelp.py. Black.py and BlackEasy.py are programs that run games of Blackjack and use a Linear Congruential Generator (LCG) to shuffle the cards. They print out 52 numbers, and the argsort of those numbers is the order of the cards. The parameters for BlackEasy.py are $a=2521$, $c=13$, $mod=2^{16}$; for Black.py they are $a=25214903917$, $c=11$, $mod=2^{48}$. To play, run python followed by the file name (Black.py or BlackEasy.py) and the number of games. Both are initially seeded by the time.
bjHelp.py contains two functions that will help you "predict" the cards:
SuffleHack(n,a,c,mod,seed) gives the first n card shuffles given the parameters of an LCG. The shuffles are represented by numbers.
Hacker(Stats,['card','card','card']) takes Stats, the output of SuffleHack, and a list of 3 cards (see below). It prints all shuffles in Stats, as lists of cards, whose first three cards match the input list.
The trick to being able to "predict" the cards is to find the initial seed value.
Cards are written as A, 2-10, J, Q, or K combined with heart, diamond, club, or spade, in single quotes. Examples: '6diamond', 'Kclub'
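A minimal sketch of the seed-search idea follows (this is not the provided bjHelp.py code, and the mapping from shuffle positions to card names is an assumption made only for illustration): try every possible seed, generate the shuffle it would produce, and keep the seeds whose first few cards match what was actually dealt.
\begin{verbatim}
RANKS = ['A'] + [str(n) for n in range(2, 11)] + ['J', 'Q', 'K']
SUITS = ['heart', 'diamond', 'club', 'spade']
DECK  = [r + s for s in SUITS for r in RANKS]   # assumed deck ordering

def cards_for_seed(seed, a, c, mod):
    order = shuffle_order(seed, a, c, mod)      # from the sketch above
    return [DECK[i] for i in order]

def candidate_seeds(observed, a=2521, c=13, mod=2**16):
    # BlackEasy.py parameters; observed is a list like ['6diamond', 'Kclub']
    return [s for s in range(mod)
            if cards_for_seed(s, a, c, mod)[:len(observed)] == observed]
\end{verbatim}
Once only a single candidate seed remains, the same generator can be stepped forward to predict the cards in the following hands.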
\begin{problem}
Play 10 games of BlackEasy.py and by the 5th game be able to predict the cards. You can write your own functions or use the ones in bjHelp.py. You will want to open two command prompts: one to play the game and one to predict the cards.
\end{problem}
Not too hard; that is because there are only $2^{16}$ possible seed values. For this next one you will have to look at more hands before you can determine the initial seed value.
\begin{problem}
Play 20 games of Black.py and by the 15th game be able to predict the cards.
\end{problem}
% Chapter 7
\chapter{Evaluations} % Main chapter title
\thispagestyle{fancy}
\label{eval} % For referencing the chapter elsewhere, use \ref{Chapter1}
\lhead{Chapter 7. \emph{Evaluations}} % This is for the header on each page - perhaps a shortened title
\doublespacing
\setlength{\parindent}{1cm}
%\section{Conclusion}
%
%\section{Future Work}
%
%\subsection{Summarizing Event Related Content}
%
%\subsection{Identifying Insightful Opinionated Content Related to Events}
%
%\subsection{Event Topic Modeling}
%
%\subsection{Event-specific Recommendations}
%
%\subsection{Distributed Processing of EventIdentityInfoGraph}
%
%\subsection{Event Ontology for Social Media}
\section{Evaluation Baselines}
In order to evaluate the performance of \textit{EventIdentityInfoRank} we selected six different techniques to act as baselines. The six techniques, a brief explanation of each, and the reason behind their choice are discussed below.
\begin{enumerate}
\item \textbf{LexRank} - LexRank is a popular graph based algorithm commonly used for \textit{extractive text summarization} of documents \cite{erkan2004lexrank}. It uses a stochastic graph-based method for computing the relative importance of textual units for natural language processing. The task of \textit{extractive text summarization} is based on the concept of identifying the most important sentences in a document or a set of documents. Importance is defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. LexRank computes sentence importance based on the concept of eigenvector centrality \cite{ruhnau2000eigenvector} in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity \cite{garcia2006cosine} is used as the adjacency matrix of the graph representation of sentences. The LexRank algorithm is implemented using an open-source Python module named sumy\footnote{\tiny https://pypi.python.org/pypi/sumy/0.1.0}, which ranks the event-related tweets by treating them as individual sentences.
The objective of ranking natural language sentences in terms of their importance makes it very similar to the \textit{EventIdentityInfoRank} algorithm proposed in this dissertation for ranking tweets instead of textual documents. \textit{EventIdentityInfoRank} additionally ranks other units of information, such as hashtags, text units, users and URLs, simultaneously, and does not take into account the similarity of the tweets with a centroid pseudo-tweet.
\item \textbf{TextRank} - TextRank is another popularly used technique for summarization of textual documents \cite{Mihalcea2004}. It is also a graph-based ranking model for text processing that can be successfully used in natural language applications. Its mechanism is very similar to PageRank \cite{page1999pagerank}; however, instead of ranking web pages based on their linking structure, it ranks text units based on their linking structure. It can be used for identifying salient sentences as well as key words of a document. Its objective of identifying key words and important sentences is also similar to our objective of finding important tweets.
The algorithm was implemented in this dissertation with slight modifications from its original form, in order to make it suitable for this context. Instead of the heterogeneous relationships created in \textit{EventIdentityInfoGraph}, homogeneous relationships were created between the \textit{event identity information units}. Cosine similarity ($\ge 0.10$) \cite{garcia2006cosine} was used as the measure of relatedness between tweets (a small sketch of this computation is given after this list), and the association scores of the hashtags, text units, users and URLs were based on their co-occurrence, normalized between 0 and 1. Users were associated whenever they mentioned each other in tweets, and the association score was measured by the number of mentions, normalized between 0 and 1.
\item \textbf{Centroid} - Centroid is one of the techniques previously used in the literature for solving a part of the problem addressed in this dissertation, namely ranking tweets. The technique is used for identifying high quality, informative and useful tweets related to an event \cite{becker2011selecting}. In order to implement it as a baseline, the tweets for a single event in a given time period were considered as one cluster. After pre-processing the tweets, the centroid of the cluster was calculated and the tweets were ordered in decreasing order of their similarities with the centroid.
\item \textbf{SeenRank} - SeenRank is a proprietary algorithm commercially used by seen.co for generating event summaries and highlights from Twitter. \textit{SeenRank} was considered the state-of-the-art technique. Although the true working of the algorithm is unknown, the task that the algorithm achieves is similar to the task of \textit{EventIdentityInfoRank}. In order to use this technique as one of the baselines, tweets about the events tracked by the EIIM framework were also collected from seen.co. The collection task was achieved using their API, found at http://developer.seen.co/, and a freely available Python wrapper, pySeen\footnote{https://github.com/dxmahata/pySeen}, developed during the experiment for collecting data from seen.co. Each tweet collected from the website has a score assigned to it using SeenRank. This score was used for arranging the tweets in descending order. The ordering of the tweets was confirmed with the company's co-founder in order to be sure that a greater score reflects a higher ranking.
\item \textbf{RTRank} - The number of retweets is a good measure of the popularity of a tweet and is also used by Twitter for ranking its search results. It is commonly used by other platforms for ranking tweets as well, as already pointed out in Chapter \ref{review}. Therefore, tweets ordered by decreasing number of retweets were also considered as one of the baselines. This scheme is referred to as RTRank throughout this dissertation.
\item \textbf{Logistic Regression Model} - This technique is the logistic regression model, implemented for initializing the informativeness score of the tweets in the \textit{Event Information Quality} component of the EIIM life cycle. The generic informativeness score assigned by the logistic regression model is different from the final event-specific informativeness score assigned by \textit{EventIdentityInfoRank}. Also the logistic regression model acts as a good representative of the common challenges with supervised approaches. This is explained with an example.
On manually analyzing the informative tweets we tried to assess whether training a classifier for detecting informative tweets for an event is good enough to identify valuable event-specific information. Although the tweets on which the logistic regression model was trained were related to events, tweets like \textit{RT @BFDealz: http://t.co/TSJAigrVJI WHEELS SUPER TREASURE HUNT SUPERIZED HARLEY DAVIDSON FAT BOY LONG CARD 2014 \#cpac2014 \#sxsw} were classified as informative, despite not containing any event-specific information.
This was probably because of the choice of features for the model, which were generic and not event-specific. The model did not take into account the presence of features that were popular and specific to the events, like popular hashtags, text units, etc. Popularity alone might not work, as it is often misused by spammers. It is also challenging to come up with a list of such event-specific features. Moreover, even if one could compile such a list, it would be difficult to set thresholds on each such feature in order to qualify it as event-specific. Also, a supervised classification model does not have the ability to simultaneously rank tweets, hashtags, text units, URLs and users in terms of event-specific informativeness. After going through the existing literature it was assumed that the challenges discussed above would be a shortcoming of any supervised model and that there is a need for an alternative, feasible approach. It is also difficult to predict the event-specific informativeness of the URLs shared along with the tweets, as it might be necessary to analyze the content pointed to by the URLs. Also, not all the URLs contain text; they might be images or videos providing valuable information about an event. Similarly, it is often a challenging task to identify the users who are producing event-specific informative content. This motivated devising a novel framework that solves all the above problems.
Therefore, the developed logistic regression model was considered as one of the baselines in order to make sure that \textit{EventIdentityInfoRank} improves upon the initial generic informativeness score already assigned to the tweets at the start of the iteration and assigns event-specific informativeness scores on convergence. In other words, tweets having a high score in the final ranking should be more useful and informative than those ranked highly by the initial ranking obtained using the logistic regression model.
\end{enumerate}
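As referenced above, the following short sketch (illustrative Python written for this chapter, not the exact implementation) shows the cosine-similarity computation used when relating tweets in the TextRank baseline and when ordering tweets by their similarity to the centroid in the Centroid baseline.
\begin{verbatim}
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    # term-frequency vectors of two (pre-processed) tweets
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot  = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# TextRank baseline: connect two tweets when similarity >= 0.10
# Centroid baseline: rank tweets by similarity to the cluster centroid
\end{verbatim}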
Due to the unavailability of proper baseline techniques for ranking hashtags, text units, URLs and users in terms of event-specific informativeness, the results obtained for them are not compared with any other approach. However, their average scores and sample results are reported. Please refer to Chapter \ref{eiim} for the sample results.
\section{Evaluation Setup and Objectives}
The rankings obtained using \textit{EventIdentityInfoRank} were evaluated on the datasets (refer to Chapter \ref{eiim}, \textit{Event Reference Collection} component of the EIIM life cycle) collected for the events ``Millions March NYC'' and ``Sydney Siege Crisis'', by comparing its performance with the selected baselines. A subset of tweets for each event for a given time period (one hour) was selected. The choice of the time period was made on the basis of the intersection of the time period of the tweets collected by the EIIM framework and that provided by Seen for the same event. There were 21641 tweets for Millions March NYC and 37429 tweets for Sydney Siege, respectively. Ranked tweets were obtained for all the seven approaches. For all the approaches except \textit{SeenRank}, the tweets were sorted in decreasing order using the ranking score as the primary key and the time of posting as the secondary key. This was done in order to get the most informative yet recent tweets at the top of the order. For \textit{SeenRank} the tweets were sorted by the scores assigned to them by Seen, as showing recent informative tweets for an event is one of the features of their platform.
A well-accepted, standard user evaluation approach was followed for judging the event-specific informativeness of the ranked tweets and also of the hashtags, text units, URLs, and users. A team of three independent annotators, comprising graduate students who had taken a course in Information Retrieval, was assigned the task of annotation. The necessary background on the events was given to the annotators, along with suitable resources for learning more about the events. The annotation schemes are presented next.
\subsection{Tweet Annotation}
The ranked tweets were annotated on an event-specific informativeness scale of 1 to 3 by the three independent annotators. Sample tweets were provided for each score level, taking the 'Sydney Siege Crisis' event as an example.
\begin{itemize}
\item The value of 1 was assigned to tweets that do not contain any event-related information (for e.g. \textit{SteveSmith becomes Australias 45th Test captain http://t.co/nYh9DqRXxh \#sydneysiege \#MartinPlace Lindt \#MYEFO \#siege Ray Hadley Muslims ISIS}).
\item Value of 2 was assigned to tweets that were related to the event yet they did not provide useful event-specific information (for e.g. \textit{RT @TheDavidStevens: It wasn't just the policeman grabbing that girl in his arms, it was every Australian watching on too \#sydneysiege} ).
\item A value of 3 was assigned to tweets that not only provided useful event-specific informative content but also led the user to more detailed information following the URLs mentioned in the tweet (for e.g. \textit{RT @FoxNews: MORE: Police confirm 3 hostages escape Sydney cafe, unknown number remain inside http://t.co/pcAt91LIdS \#Sydneysiege}).
\end{itemize}
The annotators assigned scores to the top 100 tweets ranked according to each of the seven strategies. Thereafter, \textit{Inter Indexer Consistency} (IIC) values \cite{rolling1981indexing} were computed for the annotations of the two datasets. The average IIC scores obtained for the two events are shown in Table \ref{avgiicMillionsMarchNyc} and Table \ref{avgiicSydneySiege}, respectively. The IIC values for both events fall in the acceptable range of annotation accuracy. A tweet might be assigned three different scores by the annotators; in such a scenario the average of the three scores was calculated and rounded down to the nearest integer so that each tweet gets a single score. The total average scores for the top 100 tweets for both events are reported in Tables \ref{avgiicMillionsMarchNyc} and \ref{avgiicSydneySiege}.
\subsection{Hashtags, Text Units and URL Annotations}
A similar annotation strategy was adopted for annotating the top 50 hashtags, text units and URLs obtained using \textit{EventIdentityInfoRank}. For hashtags and text units the annotators were asked to look at the tweets that contained them. The following scoring strategy was used.
\begin{itemize}
\item If the tweets containing them primarily led to event-specific informative content then a score of 3 was assigned.
\item If the tweets containing them led to related but not so informative content about the event then they were assigned a score of 2.
\item Hashtags and text units that were irrelevant and did not lead to any event related content, were assigned a score of 1.
\end{itemize}
Similarly, the annotators visited the link for each URL and, based on the content, assigned it a score between 1 and 3. If a URL pointed to a video or an image, they further visited the tweet containing it in order to understand the context and scored it accordingly. Table \ref{avgiicMillionsMarchNyc} and Table \ref{avgiicSydneySiege} show their average IIC scores and total average scores for the top 50 ranks.
\subsection{User Annotations}
For annotating users, five random tweets were selected for each of the top 50 users ranked according to \textit{EventIdentityInfoRank}. A user was assigned a score of 3 if more than three of his five tweets got a score of 3 on the event-specific informativeness scale as explained earlier. If exactly three of his tweets got a score of 3, the user got a score of 2. Otherwise, a score of 1 was assigned to the user. Table \ref{avgiicMillionsMarchNyc} and Table \ref{avgiicSydneySiege} show the average IIC scores and total average scores for the top 50 users.
\begin{table}[htbp]
\centering
\caption{Avg IIC scores and total avg scores of annotations for Millions March NYC event.}
\label{avgiicMillionsMarchNyc}
\begin{tabular}{|c|c|c|}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Millions March \\ NYC\end{tabular}} & \textbf{IIC} & \textbf{\begin{tabular}[c]{@{}c@{}}Total Avg \\ Score (1-3)\end{tabular}} \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Hashtags\end{tabular}} & 0.786 & 1.980 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Text Units\end{tabular}} & 0.880 & 1.320 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative URLs\end{tabular}} & 0.926 & 2.560 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Users\end{tabular}} & 0.700 & 2.386 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 100 event-specific\\ informative Tweets\end{tabular}} & 0.760 & 2.59 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Avg IIC scores and total avg scores of annotations for Sydney Siege event.}
\label{avgiicSydneySiege}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Sydney Siege} & \textbf{IIC} & \textbf{\begin{tabular}[c]{@{}c@{}}Total Avg\\ Score (1-3)\end{tabular}} \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Hashtags\end{tabular}} & 0.880 & 2.027 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Text Units\end{tabular}} & 0.986 & 1.487 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative URLs\end{tabular}} & 0.893 & 2.413 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 50 event-specific\\ informative Users\end{tabular}} & 0.646 & 2.353 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Top 100 event-specific\\ informative Tweets\end{tabular}} & 0.83 & 2.62 \\ \hline
\end{tabular}
\end{table}
\subsection{NDCG@n and Precision@n}
After being assured of the consistency and accuracy of the annotations, \textit{Normalized Discounted Cumulative Gain} (NDCG@n) \cite{jarvelin2002cumulated} and Precision@n \cite{baeza1999modern} values at each of the hundred recall levels were computed. The NDCG@n values consider both the position and the event-specific informativeness scores of the tweets. The NDCG value up to position $p$ in the ranking is given by equation \ref{eq62}, where $DCG_{p}$ denotes the \textit{discounted cumulative gain up to position p} and is calculated using equation \ref{eq61}, and $IDCG_{p}$ denotes the \textit{ideal discounted cumulative gain} value up to position $p$ in the ranking, or in other words the maximum possible $DCG_{p}$ value up to position $p$. $rel_{i}$ denotes the graded relevance of the result at position $i$. In the context of this evaluation $rel_{i}$ represents the average rounded score on the scale of (1-3) that has been assigned by the annotators to the tweet at position $i$ in the ranked list of top 100 tweets.
\begin{equation}
\label{eq61}
DCG_{p} = \sum_{i=1}^{p}\frac{2^{rel_{i}}-1}{log(i+1)}
\end{equation}
\begin{equation}
\label{eq62}
nDCG_{p} = \frac{DCG_{p}}{IDCG_{p}}
\end{equation}
Precision@n is measured using equation \ref{eq63}. A tweet was considered relevant if it had a score of either 3 or 2, and irrelevant if it had a score of 1.
\begin{equation}
\label{eq63}
\frac{No.\,of\, relevant\, tweets\, at\, position\, n}{n}
\end{equation}
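The following minimal sketch (illustrative Python, assuming a list of the rounded annotator scores for a ranked list of tweets) shows how NDCG@n and Precision@n values of this form can be computed from these definitions.
\begin{verbatim}
import math

def dcg(scores, p):
    # scores[i] is the graded relevance of the tweet at rank i+1
    return sum((2**rel - 1) / math.log(i + 2)
               for i, rel in enumerate(scores[:p]))

def ndcg(scores, p):
    return dcg(scores, p) / dcg(sorted(scores, reverse=True), p)

def precision(scores, n):
    # relevant: score of 2 or 3; irrelevant: score of 1
    return sum(1 for rel in scores[:n] if rel >= 2) / float(n)

annotations = [3, 3, 2, 1, 3, 2, 2, 1, 3, 1]   # made-up example scores
print(ndcg(annotations, 10), precision(annotations, 10))
\end{verbatim}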
NDCG@n and Precision@n values were calculated for all the seven approaches on each of the datasets. Figures \ref{millionsmarchndcg} and \ref{sydneysiegendcg} show the NDCG curves for all the seven approaches on the `Millions March NYC' and the `Sydney Siege Crisis' events, respectively, for up to 20 recall levels. Tables \ref{sydneysiegendcgtable} and \ref{sydneysiegeprecisiontable} present the NDCG@n values and Precision@n values for different recall levels up to 100 for the `Sydney Siege Crisis' event. Similarly, Tables \ref{millionsmarchndcgtable} and \ref{millionsmarchnycprecisiontable} present the NDCG@n values and Precision@n values for different recall levels up to 100 for the `Millions March Nyc' event. It is quite evident from the figures and the tables that the \textit{EventIdentityInfoRank} approach outperforms all the baselines, including the state-of-the-art approach of \textit{SeenRank}, in gaining event-specific information.
\begin{figure}[htbp]
\centering
\includegraphics[height=4.5in,width=6in]{Figures/EventIdentityInfoRankPerformanceMillionsMarchNyc.jpg}
\caption{Performance comparison of ranking techniques using NDCG scores for `Millions March Nyc' event.}
\label{millionsmarchndcg}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=4.5in,width=6in]{Figures/EventIdentityInfoRankPerformanceSydneySiege.jpg}
\caption{\small Performance comparison of ranking techniques using NDCG scores for `Sydney Siege Crisis' event.}
\label{sydneysiegendcg}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3in,width=5.5in]{Figures/MillionsMarchNycCorrectedNDCG.jpg}
\caption{Performance comparison of ranking techniques using NDCG scores for `Millions March Nyc' event.}
\label{millionsmarchndcgtable}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3in,width=5.5in]{Figures/sydneysiegecorrectedndcg.jpg}
\caption{Performance comparison of ranking techniques using NDCG scores for `Sydney Siege Crisis' event.}
\label{sydneysiegendcgtable}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3in,width=5.5in]{Figures/MillionsMarchNycCorrectedPrecision.jpg}
\caption{Performance comparison of ranking techniques using precision scores for `Millions March Nyc' event.}
\label{millionsmarchnycprecisiontable}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3in,width=5.5in]{Figures/sydneysiegeprecisioncorrected.jpg}
\caption{Performance comparison of ranking techniques using precision scores for `Sydney Siege Crisis' event.}
\label{sydneysiegeprecisiontable}
\end{figure}
On considering only the top 10 tweets, a substantial information gain of the \textit{EventIdentityInfoRank} algorithm over the state-of-the-art (\textit{SeenRank}) and over the baseline that performed second best was observed for both events. On comparing the values of NDCG@10 for the two events, it was found that the \textit{EventIdentityInfoRank} algorithm performs 13.96\% (Millions March NYC) and 34.07\% (Sydney Siege) better than the second best baseline technique in identifying event-specific informative tweets. When compared with \textit{SeenRank}, the \textit{EventIdentityInfoRank} algorithm was 64.53\% (Millions March NYC) and 56.59\% (Sydney Siege) better.
We also reasoned about the poor performance of \textit{TextRank} for both events. Since \textit{TextRank} allowed random walks between homogeneous nodes, the strong association of non-informative nodes with informative ones might have lowered the final scores of the informative nodes. The strong association of non-informative nodes with informative ones can be attributed to spamming activity, as already explained in Chapter \ref{challenges}, Section \ref{veracity}. This also shows that the EIIM framework is robust against spam and is very effective in identifying the most informative content related to events from the noisy stream of tweets in Twitter.
\documentclass{llncs}
\newcommand{\X}{{\bf X}}
\newcommand{\x}{{\bf x}}
\newcommand{\Y}{{\bf Y}}
\newcommand{\y}{{\bf y}}
\newcommand{\Z}{{\bf Z}}
\newcommand{\z}{{\bf z}}
\newcommand{\bs}{\boldsymbol}
\newcommand{\bSigma}{\boldsymbol \Sigma}
\usepackage{amsmath,amssymb,algorithm,algorithmic}
\usepackage{times}
\usepackage{setspace,verbatim}
\usepackage{epsfig,url,subfigure}
\begin{document}
\title{Partial sparse canonical correlation analysis (PSCCA) for population
studies in medical imaging}
\author{Anonymous}
\institute{Anonymous}
\maketitle
\begin{abstract}
We propose a new multivariate method, partial sparse canonical
correlation analysis (PSCCA), for computing the statistical
comparisons needed by population studies in medical imaging. PSCCA is a
multivariate generalization of linear regression that allows one to statistically parameterize imaging studies in terms of
multiple views of the population (e.g., the full collection of
measurements taken from an image set along with batteries of cognitive
or genetic data) while controlling for nuisance variables. This paper
develops the theory of PSCCA, provides an algorithm and illustrates
PSCCA performance on both simulated and real datasets. We show, as a
first application and evaluation of this new methodology, that
PSCCA can improve detection power over mass univariate approaches
while retaining the interpretability and biological plausibility of
the estimated effects. We also discuss the strengths, limitations and
future potential of this methodology.
\end{abstract}
\section{Introduction}
% pubmed references to MRI :
% 2000-2001 --- 25561
% 2001-2002 --- 27053
% 2003-2004 --- 31708
% 2005-2006 --- 38620
% 2008-2009 --- 49288
% 2009-2010 --- 49323
% pubmed references to MRI brain :
% 2000-2001 --- 9938
% 2001-2002 ---
% 2003-2004 --- 12655
% 2005-2006 ---
% 2008-2009 ---
% 2009-2010 --- 19676 ------
% MRI brain statistical parametric mapping 783 results ,
% MRI brain gives
The number of neuroimaging studies published annually has doubled from
9,938 in 2000-2001 to 19,676 in 2009-2010
(\url{http://www.ncbi.nlm.nih.gov/pubmed/}). This growth has been
accompanied by increasing diversity in the types of data being
collected; Imaging studies now often include not only various
structural and functional modalities but also neurocognitive
batteries, genetics, and environmental measurements. However, the
statistical methods have changed relatively little over the past
twenty years -- until very recently (e.g., \cite{Tosun2010a}). The
increasing size of imaging datasets and the concomitant desire for
performing integrative studies across modalities points to the need
for new multivariate statistical methods that elegantly handle large,
multi-view datasets. These methods should retain or even improve
detection power over traditional mass-univariate (MU) models such as
statistical parametric mapping (SPM) which uses the univariate form of
the general linear model (GLM). Repeatedly applying the univariate GLM (or
linear regression) at each voxel leads to loss of detection power due
to the well-known multiple comparisons problem.
Canonical Correlation Analysis (CCA)~\cite{hotellingcca} is a
traditional multivariate generalization of standard linear regression
\cite{kshirsagar}. CCA inherently avoids the multiple-comparisons penalty
associated with MU methods by symmetrically maximizing the correlation
between the full matrices representing two views of the data (here
denoted $\Y$ and $\X$). The matrix {$\X$} might represent a
tabulation of all demographic data, including genetics, diagnosis,
behavioral measures, age, etc. while $\Y$ may be a matrix of all the
imaging measurements. In contrast, traditional univariate models only
enable the predicted value to be a vector while the predictors may be
a matrix. Thus, linear regression is a special case of CCA. In both
traditional regression and CCA, the number of predictors, $p$, must be
fewer than the number of observations, $n$ (subjects in our
neuroimaging study). For CCA (or multivariate regression), this
restriction holds for both sets of predictors $p$, $q$, contained in
$\X, \Y$.
Recently, sparse (or penalized) canonical covariance analyses (SCCovA)
~\cite{parkhomenko,witten,lykou} have been proposed as an
approximation to CCA specifically for the high dimensional ($p\gg n$)
setting.\footnote{SCCovA substitutes the identity matrix for
within-view covariance matrices and thus analyzes cross-covariance
structure, not correlation structure \cite{cherry}. Thus, SCCA
(versus SCCovA) does not depend on how the observations are scaled.}
The {\em sparseness} in penalized methods improves interpretability by
including in the model only the most important variables from the
large set of $p$ (and/or $q$) predictors. From a medical imaging
researcher's perspective, the benefit is that only the most predictive
variables (e.g. parts of the brain) will emerge in the results
provided by a penalized statistical tool. Hence, brain regions are
highlighted in a way that is similar to SPM. Furthermore, regions
selected by SCCovA (or similarly sparse canonical correlation analysis
(SCCA)) are treated statistically as a collective (or `network') as
opposed to MU methods which treat each predictor as an independent
variable.
Despite prior studies using SCCovA~\cite{parkhomenko,witten,lykou,Avants2010b}, we are unaware of
previous work that studies factoring (``partialling'') out nuisance
variables within the penalized CCA framework. While this problem is
addressed in the $p < n$ setting by partial canonical correlation
analysis (PCCA)\cite{timm}, no penalized formulation has yet been
proposed.
This paper contributes the theory of Partial Sparse CCA (PSCCA) along
with a novel and efficient iterative algorithm for PSCCA. PSCCA (like
CCA) performs a global multivariate test of the association between
two modalities that quantify a study's subjects while accounting for a
third set of nuisance variables. It generalizes linear regression and
is inherently, sparsely multivariate in multiple views of the data
unlike MU and standard support vector machines (SVM).
The general PSCCA formulation has many applications. PSCCA may be
applied to almost any statistical scenario in medical imaging studies
traditionally handled by SPM. PSCCA is able to identify the subset of
the brain most correlated with non-imaging variable(s) of interest
(for instance, a cognitive battery) while factoring out confounding
effects (age, gender). Alternatively, we may apply PSCCA to the case
where both views of the data are high-dimensional, for instance, to
identify correlations between different imaging modalities
independently from covariates such as scanner, gender, etc. PSCCA
thus enables complex studies of multiple view data that contains many
more variables than observations.
The rest of the paper is organized as follows: in the next section we
provide a brief review of CCA and Sparse CCA. In Section 3, we present
the theory behind our novel Partial Sparse CCA (PSCCA) framework and
describe the iterative algorithm for performing PSCCA. In Section 4,
we provide experimental results on synthetic and real world
neuroimaging datasets and we conclude with a brief summary in Section
5.
Neuropsychological measures detect high-level cognitive function that
emerges from complex, large-scale neural
networks that include multiple gray matter (GM) regions integrated by
white matter (WM) projections. The clinically-validated Philadelphia
Brief Assessment of Cognition (PBAC) \cite{Libon2007} examines several
cognitive domains, including executive functioning/working memory
(exe), language (lang), visuospatial skills (vs), visual/verbal
episodic memory (mem) and social comportment/behavior (behav). We use
the open-source sparse canonical correlation analysis for neuroimaging
(SCCAN) software to {\bf cross-validate} putative relationships
between the neural substrate and cognition. We thus relate three quantitative measures: the PBAC, GM density (GMD) derived from T1 MRI, and the fractional anisotropy (FA) of WM derived from DTI.
%What we are doing here ( a few sentences )
%This paper will detail and illustrate our approach to performing
%neuroimaging studies in the style of traditional formulations but
%using a new, powerful multivariate pscca. We highlight in both
%simulated and real data the advantages and disadvantages of this
%method.
\section{Brief review: CCA and sparse CCA (SCCA)}
\begin{comment}{
Consider a standard neuroimaging population study with both male and
female subjects between 20 and 40 years of age each of which measured
via a series of MRI scans. Each subject is also designated either
patient or control. A standard regression analysis will treat the
quantitative imaging measurement as the dependent variable ($\y$) and
age, gender and diagnosis group as covariates $({\X})$ where each test
is performed independently at each voxel. Detection power is
compromised by the multiple-comparisons problem incurred by the number
of imaging measurements (millions) as well as the confounding
variables (age, gender). It is compromised because one must perform
additional statistical corrections to the $p$-values output by MU
statistics. If we are using MU models and
want to test the whole brain for effects, then
we can not do much about the first problem. For the second problem,
we can factor out the effects of these unwanted (confounding)
variables by regressing on the residuals ($\y -\X\beta$).
}
\end{comment}
% CCA~\cite{hotellingcca} is the analog to Principal Component Analysis
% (PCA) for pairs of matrices. PCA computes the directions of maximum
% covariance between elements in a single matrix, whereas CCA computes
% the directions of maximal correlation between a pair of matrices.
% See~\cite{taylor:cca} for a general review of CCA along with some
% representative applications. Unlike PCA, CCA does not depend on how the observations are scaled.
More specifically, given a set of $n$ paired observation vectors
$\{(y_1,x_1),...,(y_n,x_n)\}$--in our case the two matrices are the
quantitative imaging measurement ({\Y}) and age, gender, diagnosis ({\X}) matrices --we would like to simultaneously find the directions
${\bs{\bs\phi_{\Y}}}$ and
${\bs{\bs\phi_{\X}}}$ that maximize the correlation of
the projections of ${\Y}$ onto ${\bs{\bs\phi_{\Y}}}$
with the projections of ${\X}$ onto
${\bs{\bs\phi_{\X}}}$. This is expressed as
\begin{equation}
\label{cca1}
\rho=\max_{{\bs\phi_Y}, {\bs\phi_X}}
\frac{{\bs\phi_{\X}^T\bs\Sigma_{\X\Y}}\bs\phi_{\Y}}{\sqrt{\bs\phi_{\X}^T\bs\Sigma_{\X\X}\bs\phi_{\X}}\sqrt{\bs\phi_{\Y}^T\bs\Sigma_{\Y\Y}\bs\phi_{\Y}}}
\end{equation}
where ${\bs\Sigma_{\X\X}}$, ${\bs\Sigma_{\Y\Y}}$ and ${\bs\Sigma_{\X\Y}}$ are the auto- and cross-covariance matrices, i.e. $\X^T\X$, $\Y^T\Y$ and $\X^T\Y$, respectively. The above objective can also be thought of as maximizing the numerator $\bs\phi_{\X}^T\bs\Sigma_{\X\Y}\bs\phi_{\Y}$ subject to $\bs\phi_{\X}^T\bs\Sigma_{\X\X}\bs\phi_{\X} =1$ and $\bs\phi_{\Y}^T\bs\Sigma_{\Y\Y}\bs\phi_{\Y}=1$.
Now, define change of basis as:
\begin{equation}
\label{basisChange}
\bs\psi_{\X} = \bs\Sigma_{\X\X}^{1/2}\bs\phi_{\X}, \;\;\;\;\;\; \bs\psi_{\Y} = \bs\Sigma_{\Y\Y}^{1/2}\bs\phi_{\Y}
\end{equation}
Then, substituting~(\ref{basisChange}) in~(\ref{cca1}) we get
\begin{equation}
\label{subs}
\rho= \max_{{\bs\psi_{\Y}}, {\bs\psi_{\X}}} \frac{\bs\psi_{\X}^T \bs\Sigma_{\X\X}^{-1/2}\bs\Sigma_{\X\Y}\bs\Sigma_{\Y\Y}^{-1/2}\bs\psi_Y}{\|\bs\psi_{\X}\| \|\bs\psi_{\Y}\|}
\end{equation}
The whitening transform is used to convert covariances to correlations
and also to de-correlate auto-correlation matrices. In CCA, this
normalizes the data such that the optimization can maximize the
cross-correlation. The standard whitening transform is defined as
$\X_w= \X\bs\Sigma_{\X\X}^{-1/2}$ and $\Y_w=
\Y\bs\Sigma_{\Y\Y}^{-1/2}$. Applying the whitening transform
to~(\ref{subs})
\begin{equation}
\label{simplifiedcca}
Corr(\X_w\bs\psi_{\X},\Y_w\bs\psi_{\Y})=\rho=\max_{{\bs\psi_{\Y}}, {\bs\psi_{\X}}} \frac{\bs\psi_{\X}^T \bs\Sigma_{\X_w\Y_w}\bs\psi_Y}{\|\bs\psi_{\X}\| \|\bs\psi_{\Y}\|}
\end{equation}
where $\bs\Sigma_{\X_w\Y_w} = \X_w^T\Y_w$.
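For concreteness, a small sketch of this whitening step (illustrative Python/NumPy written here, not the released implementation; the symmetric inverse square root is obtained from an eigendecomposition, with near-zero eigenvalues dropped in the spirit of a pseudoinverse) is:
\begin{verbatim}
import numpy as np

def whiten(X, eps=1e-10):
    # returns X * Sigma_XX^{-1/2}, with Sigma_XX = X^T X
    Sigma = X.T @ X
    w, V = np.linalg.eigh(Sigma)            # symmetric PSD matrix
    inv_sqrt = np.where(w > eps, 1.0 / np.sqrt(np.maximum(w, eps)), 0.0)
    return X @ (V * inv_sqrt) @ V.T
\end{verbatim}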
As mentioned earlier, CCA results in vectors $\bs\psi_{\X}$,
$\bs\psi_{\Y}$ that are not sparse, and these vectors are not unique
if $p > n$. In most biomedical imaging applications, $p$ is
large and, one needs to find a linear combination of the
variables in $\X_w$ and $\Y_w$ that has large correlation but is also
sparse in the variables that enter the model.
\begin{comment}
Several researchers propose sparse formulations of canonical
covariance analysis: Witten et al.~\cite{witten} use penalized matrix
decomposition to enforce sparsity; Parkhomenko et
al.~\cite{parkhomenko} use soft-max thresholding in an iterative
algorithm. Both papers assume that within-view covariance matrices
are well-approximated by the identity. Finally, Lykou and
Whittaker~\cite{lykou} propose a LARS~\cite{lars} style algorithm for
obtaining sparsity in the loadings.
\end{comment}
While several researchers propose sparse formulations of canonical
covariance analysis \cite{parkhomenko,witten,lykou}, none of the these
methods handle confounding variables---a highly desirable modeling
property for many biomedical and neuroimaging applications. In the
next section, we detail the PSCCA solution to this problem.
\begin{comment}
We formulate a sparse canonical correlation analysis optimization
with an embedded step that factors out the effect of nuisance
variables. We also provide an efficient power iteration based
algorithm to compute the directions of maximum partial
correlation. Our approach incorporates the normalization constraints
required for correlation (versus covariance) analysis; We call it
Partial Sparse CCA (PSCCA).
\end{comment}
\section{PSCCA (Partial Sparse Canonical Correlation Analysis)}
As described earlier, let $\X$ be the matrix with columns containing voxels from one set of
images of $n$ subjects; $\Y$ is the matrix with columns containing
the second set of measurements from the same $n$ subjects and further
let $\Z$ be the matrix of confounding variables (age, gender, etc.) for
our neuroimaging problem. The second set of measurements may be
voxels from another imaging modality, scores from a battery of
neuropsychological tests or a much simpler feature such as a binary
diagnosis variable. Also, let $\lambda_{\X}$ and $\lambda_{\Y}$ ($\in
[0,1]$) (where higher values indicate more sparsity) be the user defined
parameters which control the sparsity for either set of the canonical
variates. The sparseness parameters can, alternatively, be chosen automatically
from the data so as to maximize the correlation (or likelihood) between the canonical variates.
PCCA~\cite{timm} finds the correlation between $\X$ and $\Y$ after removing (``partialling out'') the linear effect of the confounding variables $\Z$.
We denote the $\X$ and $\Y$ matrices with the effect of $\Z$ ``partialled'' out as $\X^{\backslash\Z}$ and $\Y^{\backslash\Z}$. Regressing $\X$ against $\Z$ using standard least squares ($\|\X -\Z\bs\beta\|^2$) gives $\bs\beta= \bs\Sigma_{\Z\Z}^{-1}\Z^T\X$.
Thus, the residual\footnote{Note that $\X^{\backslash\Z}$ is actually what is called the residual $\X-\Z\bs\beta$ in a least squares regression problem.} can be written as $\X^{\backslash\Z}=\X - \Z\bs\Sigma_{\Z\Z}^{-1}\Z^T\X$. Applying the whitening transform to $\Z$ as $\Z_w =\Z \Sigma_{\Z\Z}^{-1/2}$, we get $\X^{\backslash\Z}=\X - \Z_w\Z_w^T\X$.
We can write similar equations for the residual when $\Y$ is regressed against $\Z$.
Now, we can write the complete variance-covariance matrix of the residuals as:
\begin{eqnarray}
\label{matrices}
\begin{bmatrix}
\bs\Sigma_{\X\X}^{\backslash \Z} & \bs\Sigma_{\X\Y}^{\backslash \Z} \\
\bs\Sigma_{\Y\X}^{\backslash \Z} & \bs\Sigma_{\Y\Y}^{\backslash \Z}
\end{bmatrix}
&=&
\begin{bmatrix}
\X^T\X -\X^T \Z_w\Z_w^T\X &\;\;\;\; \X^T\Y -\X^T \Z_w\Z_w^T\Y\\
\Y^T\X -\Y^T \Z_w\Z_w^T\X &\;\;\;\; \Y^T\Y -\Y^T \Z_w\Z_w^T\Y
\end{bmatrix}
.
\end{eqnarray}
%The matrices $\bSigma_{X_wX_w\backslash Z_w}$ etc. are the variance-covariance matrices of the residual vectors $\bs r_X$ and $\bs r_Y$ and are defined as:
%\begin{eqnarray}
%\label{matrices}
%\begin{bmatrix}
% \bSigma_{11\backslash 3} & \bSigma_{12\backslash 3} \\
% \bSigma_{21\backslash 3} & \bSigma_{22\backslash 3}
%\end{bmatrix}
%&=&
%\begin{bmatrix}
% \bSigma_{11} -\bSigma_{13}\bSigma_{33}^{-1}\bSigma_{31} &\;\;\;\; \bSigma_{12} -\bSigma_{13}\bSigma_{33}^{-1}\bSigma_{32}\\
% \bSigma_{21} -\bSigma_{23}\bSigma_{33}^{-1}\bSigma_{31} &\;\;\;\; \bSigma_{22} -\bSigma_{23}\bSigma_{33}^{-1}\bSigma_{33}
%\end{bmatrix}
%\end{eqnarray}
%where we denote $\X_w \rightarrow 1$, $\Y_w \rightarrow 2$ and $\Z_w \rightarrow 3$ as shorthand.
% Equation~(\ref{matrices}) can easily be extended to handle more than two matrices and also to ``partial'' out the effect of more than one confounding covariate matrices.
The PCCA problem can therefore be written as:
\begin{equation}
\label{pscca}
\rho_{PCCA}=\max_{{\bs\phi_{\Y}}, {\bs\phi_{\X}}} \frac{\bs\phi_{\X}^T \bs\Sigma_{\X\Y}^{\backslash \Z}\bs\phi_Y}{\sqrt{\bs\phi_{\X}^T\bs\Sigma_{\X\X}^{\backslash \Z}\bs\phi_{\X}}\sqrt{\bs\phi_{\Y}^T\bs\Sigma_{\Y\Y}^{\backslash \Z}\bs\phi_{\Y}}}
\end{equation}
Changing the basis as for simple CCA, we get
\begin{equation}
\label{basisChangePSCCA}
\bs\psi_{\X} = (\bs\Sigma_{\X\X}^{\backslash \Z})^{1/2}\bs\phi_{\X}, \;\;\;\;\;\; \bs\psi_{\Y} = (\bs\Sigma_{\Y\Y}^{\backslash \Z})^{1/2}\bs\phi_{\Y}
\end{equation}
and substituting~(\ref{basisChangePSCCA}) in~(\ref{pscca}) gives
\begin{equation}
\label{subsPSCCA}
\rho_{PCCA}= \max_{{\bs\psi_{\Y}}, {\bs\psi_{\X}}} \frac{\bs\psi_{\X}^T (\bs\Sigma_{\X\X}^{\backslash \Z})^{-1/2}\bs\Sigma_{\X\Y}(\bs\Sigma_{\Y\Y}^{\backslash \Z})^{-1/2}\bs\psi_{\Y}}{\|\bs\psi_{\X}\| \|\bs\psi_{\Y}\|} .
\end{equation}
After some algebraic manipulation we can write the PCCA objective compactly as
\begin{equation}
\label{psccacompact}
\rho_{PCCA}=\max_{{\bs\psi_{\Y}}, {\bs\psi_{\X}}} \frac{\bs\psi_{\X}^T \bs\Sigma_{\X_w\Y_w}^{\backslash \Z}\bs\psi_{\Y}}{\|\bs\psi_{\X}\| \|\bs\psi_{\Y}\|}
\end{equation}
where $\X_w=\X(\bs\Sigma_{\X\X}^{\backslash \Z})^{-1/2}$ and
$\Y_w=\Y(\bs\Sigma_{\Y\Y}^{\backslash \Z})^{-1/2}$. Note the
difference in the whitening transform from the one used in simple CCA;
here we are using the covariance matrix with $\Z$ partialled out
to whiten $\X$ and $\Y$.
Finally, the above objective after incorporating the user specified $\ell_1$ sparsity penalties ($\lambda_{\X}$ and $\lambda_{\Y}$) and under the constraints $\bs\psi_{\X}^T\bs\psi_{\X}=\bs\psi_{\Y}^T\bs\psi_{\Y}=1$ can be written as:
\begin{equation}
\label{psccaSparseConst}
\rho_{PSCCA}=\max_{{\bs\psi_{\Y}}, {\bs\psi_{\X}}} \{ \bs\psi_{\X}^T \bs\Sigma_{\X_w\Y_w}^{\backslash \Z}\bs\psi_{\Y} - \lambda_{\X}\|\bs\psi_{\X}\|_1 - \lambda_{\Y}\|\bs\psi_{\Y}\|_1\}
\end{equation}
Our optimization strategy for (\ref{psccaSparseConst}) combines power
iteration and soft thresholding to compute the canonical vectors while
satisfying the sparsity constraints. The approach, described in the next
section, uses an alternating least squares method
\cite{golub} extended to include sparsity constraints
\cite{cichocki}.
%\caption{\baselineskip 12pt \small Cartoons illustrating (a) SCCA ;
% (b) partial SCCA ; (c) part SCCA. }
%\label{fig:cartoon}
%\end{figure}
\subsection{PSCCA Algorithm}
Following \cite{golub}, we propose a power iteration based algorithm
for PSCCA for the general problem of finding principal eigenvectors of
the matrices. This numerical approach does not require one to ever
explicitly form the full $\X_{w}^T \Y_{w}$ matrix and is therefore
appropriate for large datasets where the number of columns in both views may
count in the millions or more. In all steps below, we employ the
pseudoinverse when needed. In addition, the function $(x)_+$ is equal to $x$ if $x \geq 0$ and $0$ if $x < 0$, and
\begin{equation}
Sign(x)= \begin{cases} -1, & \mbox{if } x<0 \\0, & \mbox{if } x=0 \\1, & \mbox{if } x>0 \end{cases}
\end{equation}
Note that positivity or negativity constraints on the
$\bs\psi_{\X}, \bs\psi_{\Y}$ may be trivially included with a minor
modification to Algorithm \ref{partial-mic}.
\vspace{-0.1in}
\begin{algorithm}[htdp]
\small \caption{\bf Computing principal eigenvectors for PSCCA}
\label{partial-mic}
\begin{algorithmic}[1]
\STATE Apply the whitening transformation to {\Z} to get $\Z_{w}$.
\STATE Compute $\X^{\backslash \Z}$ and $\Y^{\backslash \Z}$ and the whitened matrices $\X_{w}$ and $\Y_{w}$.
\STATE Select the (fractional) sparsity parameters $\lambda_{\X}$ and $\lambda_{\Y}$
\STATE Randomly initialize $\bs \psi_{\X}^0$ and $\bs \psi_{\Y}^0$ ($\sim \mathcal{N}(0,1)$) and set $k=0$.
\WHILE {$\Delta$ Corr($\bs X_w \bs \psi_{\X}^{k}$, $\bs Y_w \bs \psi_{\Y}^{k}$) $>$ $\epsilon$}
\STATE Compute $\bs \psi_{\X}^{k+1}= {\X_w}^T {\Y_w} \bs \psi_{\Y}^{k} - {\X_w}^T {\Z_w} {\Z_w}^T {\Y_w} \bs \psi_{\Y}^{k}$
\STATE Soft-Max Sparseness: $\bs \psi_{\X}^{k+1} \leftarrow (\|\bs \psi_{\X}^{k+1}\| - max(\bs \psi_{\X}^{k+1})*\lambda_{\X})_+ Sign(\bs \psi_{\X}^{k+1})$
\STATE Normalize: $\bs \psi_{\X}^{k+1} \leftarrow \frac{\bs \psi_{\X}^{k+1}}{\|\bs \psi_{\X}^{k+1}\|}$\\
//Repeat Same Procedure for $\bs \psi_{\Y}$ \\
\STATE Compute $\bs \psi_{\Y}^{k+1}= {\Y_w}^T {\X_w} \bs \psi_{\X}^{k+1} - {\Y_w}^T {\Z_w} {\Z_w}^T {\X_w} \bs \psi_{\X}^{k+1}$
\STATE Soft-Max Sparseness: $\bs \psi_{\Y}^{k+1} \leftarrow (\|\bs \psi_{\Y}^{k+1}\| - max(\bs \psi_{\Y}^{k+1})*\lambda_{\Y})_+ Sign(\bs \psi_{\Y}^{k+1})$
\STATE Normalize: $\bs \psi_{\Y}^{k+1} \leftarrow \frac{\bs \psi_{\Y}^{k+1}}{\|\bs \psi_{\Y}^{k+1}\|}$
\STATE k $\leftarrow$ k+1
\ENDWHILE
\end{algorithmic}
\end{algorithm}
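For concreteness, the following is a minimal NumPy sketch of the alternating update in Algorithm~\ref{partial-mic}. It is not the released C++ implementation; the thresholding rule, the convergence test and the default parameter values are simplifying assumptions.
\begin{verbatim}
import numpy as np

def soft_threshold(v, frac):
    # Hypothetical "soft-max sparseness" step: threshold at frac * max|v|
    # and keep the signs (the exact rule used in the paper may differ).
    thr = frac * np.max(np.abs(v))
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def pscca_leading_pair(Xw, Yw, Zw, lam_x=0.25, lam_y=0.25,
                       eps=1e-6, max_iter=500, seed=0):
    # Xw, Yw, Zw are assumed to be column-centered, whitened matrices.
    rng = np.random.default_rng(seed)
    psi_y = rng.standard_normal(Yw.shape[1])
    corr_old = 0.0
    for _ in range(max_iter):
        # View 1 update with the Z-explained part projected out, then threshold.
        t = Yw @ psi_y
        psi_x = Xw.T @ t - Xw.T @ (Zw @ (Zw.T @ t))
        psi_x = soft_threshold(psi_x, lam_x)
        psi_x /= np.linalg.norm(psi_x) + 1e-12
        # View 2 update, same structure.
        s = Xw @ psi_x
        psi_y = Yw.T @ s - Yw.T @ (Zw @ (Zw.T @ s))
        psi_y = soft_threshold(psi_y, lam_y)
        psi_y /= np.linalg.norm(psi_y) + 1e-12
        corr = np.corrcoef(Xw @ psi_x, Yw @ psi_y)[0, 1]
        if abs(corr - corr_old) < eps:   # stop when the correlation stabilizes
            break
        corr_old = corr
    return psi_x, psi_y, corr
\end{verbatim}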
We use permutation testing on $\X$, $\Y$ to assess significance where
the test statistic is the partial correlation between the two main views.
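A minimal sketch of this permutation test is given below; \texttt{pscca\_corr} is a placeholder for any function returning the PSCCA correlation, and the number of permutations is an illustrative choice.
\begin{verbatim}
import numpy as np

def permutation_p_value(X, Y, Z, pscca_corr, n_perm=1000, seed=0):
    # Permute the rows of X and Y (breaking their pairing) and count how often
    # the permuted statistic exceeds the one observed on the original data.
    rng = np.random.default_rng(seed)
    t = pscca_corr(X, Y, Z)
    exceed = 0
    for _ in range(n_perm):
        Xp = X[rng.permutation(X.shape[0])]
        Yp = Y[rng.permutation(Y.shape[0])]
        if pscca_corr(Xp, Yp, Z) > t:
            exceed += 1
    return exceed / n_perm
\end{verbatim}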
%\noindent{}
%\begin{description}
%\item [Whiten:]Apply the whitening transformation to {\X}, {\Y}, {\Z}.
% VectorType temp=q*w_q;
% wpnew=p.transpose()*( temp - this->m_MatrixRRt*temp );
%\item [Begin Loop:]for power iteration.
%\item [~~View 1:]Compute $\x= {\X}^T {\Y} \y - {\X}^T {\Z} {\Z}^T {\Y} \y$.
%\item [~~Soft-Max Sparseness \& Normalization $\x$:] Enforce $\x$
% sparseness and set $\x \leftarrow \frac{\x}{\|\x\|}$.
%\item [~~View 2:]Compute $\y= {\Y}^T {\X} \x - {\Y}^T {\Z} {\Z}^T
% {\X} \x$
%\item [~~Soft-Max Sparseness \& Normalization $\y$:] As in $\x$ step.
%\item [~~CC:]Compute $Corr( {\X} \x , {\Y} \y )$.
%\item [End loop:]Check the correlation and stop when converged.
%\end{description}
%For multiple eigenvectors, use the Lanczos algorithm.
\begin{comment}
\subsection{Assessing significance}
A well-known difficulty with neuroimaging studies, particularly when
sample sizes are small, is the potential for the methods to find biologically implausible structure in the data. While the
potential for this problem can never be fully eliminated, we seek to
minimize the confound by using an empirical approach to significance
testing based on permutations:
\begin{enumerate}
\item Compute the true PSCCA correlation~$t=$PSCCA$(\X,\Y,\Z)$.
\item Initialize $p=0$.
\item {\bf For} $N$ simulations {\bf do }
\item ~~~Permute the rows of $\X$, $\Y$ to get $\X_p, \Y_p$.
\item ~~~Compute $t_p=$ PSCCA$(\X_p,\Y_p,\Z)$.
\item ~~~if $t_p > t , ~~p=p+1$.
\item {\bf done }
\item {\bf Return} the p-value, $p/N$.
\end{enumerate}
The number of simulations should be selected to provide a reasonable
sampling of the permutation space.
\end{comment}
\section{Results}
The code for the PSCCAN implementation, the simulation study and the
neuroimaging study will be made available at publication time.
\subsection{Simulations} Define a ``true'' linear signal vector with
$n$ entries, ${\bs v}$, such that the value of each entry is ${\bs
v}_i=i/n$ where $i$ indexes the vector. A second signal is a vector drawn from a zero mean unit
variance Gaussian distribution, ${\bs g_x}$ with $p$ entries. The
first view is then $\X ={\bs v}^T {\bs g_x}$ and we similarly generate
$\Y $ with $n \times q$ entries. We optionally add noise to both
views. In 100 low-noise simulations, SCCA produces a significant
association. However, when we use $\Z = {\bs v} + $ {\em noise} as a
covariate in PSCCA on $\X$ and $\Y$, then no significant association
exists. Both results are as expected and provide a sanity check on
our theory and implementation. The second experimental validation of
our implementation and theory generates $\X$ and $\Y$ where the first
$p/2, q/2$ columns are derived from ${\bs v}$. The second $p/2, q/2$
columns in $\X, \Y$ are derived from a different ``true'' signal (${\bs v}_2$) with
a less strong linear relationship than in the first half of the
matrices. Thus, when we use SCCA with sparseness
$\lambda_{\X}=\lambda_{\Y}=0.25$, the first half of the matrix is
selected. PSCCA selects the second half of the matrix when $\Z$ is
used as confounding covariate. Both are significant across permutations. Due to
noise, in some simulations, a few entries from the first half of the
matrix may enter the model with low weight. If we add a column containing signal derived from ${\bs
v}_2$ to $\Z$ then, as predicted, PSCCA results become insignificant. Figure~\ref{fig:sim}
shows the vectors $\bs\phi_{\X}$ selected by SCCA and PSCCA on the
same input data where PSCCA uses $\Z$ (derived from ${\bs v}$ alone)
as confounding covariate.
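A minimal sketch of how such simulated data can be generated is shown below; the dimensions and noise level are illustrative choices, not those used in the reported simulations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, q, noise = 100, 50, 40, 0.1
v = np.arange(1, n + 1) / n                    # "true" signal, v_i = i/n
g_x = rng.standard_normal(p)                   # Gaussian loading for view X
g_y = rng.standard_normal(q)                   # Gaussian loading for view Y
X = np.outer(v, g_x) + noise * rng.standard_normal((n, p))
Y = np.outer(v, g_y) + noise * rng.standard_normal((n, q))
Z = (v + noise * rng.standard_normal(n)).reshape(-1, 1)   # covariate derived from v
\end{verbatim}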
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{simulation_result_mix.pdf}
\end{center}
\vspace{-0.2in}
\caption{The black hollow circles show the non-zero entries in
$\bs\phi_{\X}$ that are selected by SCCA, that is, the value of the vector
$\bs\phi_{\X}$. The red full circles show the non-zero entries in
the vector $\bs\phi_{\X}$ that are selected by PSCCA. The $\Z$ signal factors
out the confounding signal in the first half of the matrix leaving
the second signal of interest in the second half to be the source of
the significant association.}
\label{fig:sim}
\end{figure}
% This analysis can be achieved by simulating imaging data and two other
% views (age, cognition) in such a way that the age is the true hidden variable that
% generates both cognition and imaging measurements. PSCCA should then
% detect an insignificant association between cognition and imaging when
% age is used as the confounding variable. Similarly, PSCCA should
% detect a significant association between age and imaging when
% cognition is a confounding variable.
\subsection{Comparison of regression and PSCCA on OASIS data}
Our first evaluation on real data employs PSCCA as a form of
multivariate regression between imaging, diagnosis and nuisance variables.
We employ a subset of the freely available OASIS dataset to compare
PSCCA to mass-univariate linear regression. This subset of the OASIS
data contains elderly subjects (n=38) in addition to subjects with
Alzheimer's disease (n=31) of both genders (39 F, 30 M) and with ages
that range between 62 and 98 years. Our evaluation criterion compares
both methods' power to detect the known anatomical distribution of
AD-related atrophy in gray matter (hippocampus, cuneus, temporal lobe)
\cite{Avants2010b} where gray matter was segmented and normalized by using standard
open-source software. We use the whole brain, in template space, as the region of interest in order to
challenge the power of the MU method relative to the
single test performed by multivariate PSCCA. We assume that the
researcher has pre-selected the sparseness parameter for the study.
We choose $\lambda$ (sparsity parameter) for the gray matter voxels such that 10\% of the
ROI (contained in the $\X$ matrix) will be selected by PSCCA. The
$\Y$ matrix, in this case, is the diagnosis vector that defines
whether a subject is control or patient. The nuisance matrix $\Z$
contains age and gender variables. We run both the MU
statistics (via the {\bf R} program) and our own independently
developed PSCCA implementation (C++ based, BSD license, open-source)
on identical input data. Using false discovery rate (FDR) correction
on the regression-based p-values for diagnosis, we find that the
minimum q-value is 0.183, thus insignificant after correction. In
contrast, PSCCA shows significant effects at the $p=0.041$ level
(10{,}000 permutations). We visualize the regions that emerge from PSCCA
by overlaying the first canonical vector $\bs \psi_{\X}$ on the brain.
Figure~\ref{fig:comp} compares the PSCCA output with the regression
results overlaid on the brain at the level of $p=0.01$ uncorrected.
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{figs/MUvPSCCAN.pdf}
% \includegraphics[width=50mm]{Pvalue_f_of_sparsenss_and_corr.pdf}
\end{center}
\vspace{-0.1in}
\caption{PSCCAN (right) versus mass-univariate uncorrected statistics
(left). Both methods reveal similar areas of the brain. However, the
mass-univariate results cannot be considered significant (after FDR
correction) due to the multiple comparisons problem. It is possible
that another correction method would retain some of the mass-univariate
effects but we choose FDR because it is standard and only moderately
conservative. We show, at right, the relationship of estimated
significance to variations in the sparseness parameter (for the image
voxel matrix $\X$) and PSCCA correlation. The significant region is
outlined in a dashed box. In a real study, one would only use the
pre-selected sparseness parameter.}
\label{fig:comp}
\end{figure}
\subsection{PBAC}
\paragraph{Description of cohort: }
104 subjects had complete cognitive evaluations as well as Siemens
3.0T T1 and DTI collected at the University of Pennsylvania. The
cohort contained subjects diagnosed with Alzheimer's disease (AD;
n=23), behavioral-variant frontotemporal degeneration (bvFTD; n=31),
primary progressive aphasia (PPA) variants of FTD (n=24), extrapyramidal
motor disorders (n=17), and elderly controls (n=9). We use
multiple different disorders only to obtain variance in cognition but
do not include clinical diagnosis in subsequent analyses. We
normalized and segmented \cite{Avants2011a} the DT and T1 images using
the ITK-based open-source ANTs \cite{Avants2011} and Pipedream software.
\paragraph{Part 1-imaging \& cognition:} SCCAN identifies, for each PBAC domain, uniquely related WM regions. We perform the same type of study, independently, using GMD. The output from each run of SCCAN is the subset of WM / GM voxels that (as a set) relate most significantly to the cognitive domain of interest (e.g. language).
\newline
\newline
We found significant associations between GM and each domain of
cognition consistent with putative neuroanatomical substrates at the
p<0.001 level. FA in WM was related to exec, soc, and vs (p<0.05), weakly
related to memory (p=0.072), and unrelated to lang (p=0.89).
\newline
\paragraph{Part 2:}~We directly tested relationships between
cognition-specific GMD and FA voxels derived from Part 1. For
example, we test if language-defined GMD voxels relate significantly
to language-defined FA voxels, where SCCAN treats the voxels as a
set.
\newline
\newline
FA regions and GM regions were strongly related within each cognitive
domain (p<0.001) except for language where the relation was modest
(p<0.05). Fig 1 and 2 show the anatomic distribution of the
cognition-specific regions for both WM and GM for each cognitive
domain. Fig 3 shows the behav-based GMD and FA scatterplot.
\newline
\paragraph{Highlights:} Behav relates strongly to medial orbitofrontal cortex GM and genu in WM. Vs to occipital GM (cuneus) and occipital projections. Exec to insula, dorsolateral prefrontal, lateral orbitofrontal, and bilateral hippocampus GM regions and WM projections in the superior frontal lobe and the anterior callosum. Mem to GM insula, cuneus and posterior superior temporal gyrus, and WM fornix and superior longitudinal fasciculus.
\begin{figure}
\centering
\mbox{\subfigure{
\includegraphics[height=1.5in]{figs/gm}
}
\quad
\subfigure{
\includegraphics[height=1.5in]{figs/gm}
}
}
\caption{In this figure, we illustrate two dimensionality-reduction applications of the same
class of the multivariate correlation-based method, sparse canonical
correlation analysis (SCCA). At left, {\bf we use SCCA as a direct tool for quantifying significant
associations between cortical thickness and five neuropsychological
batteries}: social behavior (apathy/disinhibition), executive
function, language skill, visuospatial skill and memory. SCCA draws out a network of
voxels that, as a set, correlates optimally with each domain and
the location of these networks largely confirm putative brain-behavior
relationships. In a second study, right, {\bf we used SCCA as a feature
selection tool to identify diagnosis-relevant imaging signatures of ADHD} from a large dataset of
structural and resting state fMRI taken from ADHD and control
subjects. The figure at right shows the features from the rs-fMRI that are most relevant to ADHD-control classification and
improve performance by a factor of almost 10\% over using clinical variables
alone. Lateral inferior prefrontal cortex, middle frontal gyrus and thalamus
connectivity with anterior cingulate were most important.} \label{fig:fig3}
\end{figure}
\section{Discussion and Conclusion}
In this paper we proposed a new statistical tool that is
ideal for multivariate imaging studies.
% We presented the theory behind
% PSCCA and also provided a highly efficient algorithm for solving the
% optimization problem. Our formulation also has two user defined
% sparsity parameters $\lambda_{\X}$ and $\lambda_{\Y}$ which the
% researcher can chose based on domain knowledge or select automatically
% by optimizing on a held out development set.
Results on synthetic and
real world data (OASIS) further corroborate our hypothesis that PSCCA
is able to increase detection power in the presence of covariates and
extract biologically plausible, multivariate patterns from neuroimaging
data. Specifically, PSCCA reveals significant patterns of difference
between elderly and AD subjects that are within brain regions known to
be affected by Alzheimer's tauopathy. Although the MU model fails to reveal
significant effects, there is notable similarity between regions selected by PSCCA and
those voxels in the brain that had uncorrected $p$-value $< 0.01$. %This study has limitations.
In our experiments we use only the
primary eigenvector from PSCCA.
Future work will analyze the effect of including additional
eigenvectors and will seek to further investigate alternatives for
assessing PSCCA significance in interpretable ways. Finally, as in standard correlation, one should take care to visualize PSCCA results to investigate the potential impact of outliers.
% {\bf FINISH}
% despite the fact that we show similar biologically plausible patterns to univariate
% regression, though with greater power,
% difficulty of interpretation remains ....
{\bf Our results confirm putative brain-behavior associations.} At
the least, they suggest relationships between cognitive variation
and large-scale GM-WM networks that vary uniquely with cognitive domain. Furthermore,
these results suggest that SCCAN may enhance detection power over
traditional univariate approaches. In particular, the significance
of GM and cognition relationships ($\approx$ p<0.001) far exceeds those of the FA
cognition relationships ($\approx$ p<0.05). Despite this, there is
significant association between FA-cognition identified voxels and GM-cognition
identified voxels ($\approx$ p<0.001).
%\noindent{\bf Acknowledgment}
% This work is supported by Grant XXX
% 1R01EB006266-01
% from the ...
%National Institute Of Biomedical Imaging and Bioengineering and administered through the UCLA Center for Computational Biology.
\bibliographystyle{IEEEbib}
\bibliography{./cca}
\end{document}
| {
"alphanum_fraction": 0.7454382685,
"avg_line_length": 53.6426553672,
"ext": "tex",
"hexsha": "c23e42cfdf8849cc5a688f2a049114aad68466a1",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-10-03T09:31:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-22T18:49:53.000Z",
"max_forks_repo_head_hexsha": "2e9241b13c319a3adbbeab5b61e7ddfbf228bd5a",
"max_forks_repo_licenses": [
"FSFAP"
],
"max_forks_repo_name": "stnava/sccan",
"max_forks_repo_path": "doc/psccan.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2e9241b13c319a3adbbeab5b61e7ddfbf228bd5a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"FSFAP"
],
"max_issues_repo_name": "stnava/sccan",
"max_issues_repo_path": "doc/psccan.tex",
"max_line_length": 443,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "2e9241b13c319a3adbbeab5b61e7ddfbf228bd5a",
"max_stars_repo_licenses": [
"FSFAP"
],
"max_stars_repo_name": "stnava/sccan",
"max_stars_repo_path": "doc/psccan.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-11T11:48:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-08-06T03:40:22.000Z",
"num_tokens": 10970,
"size": 37979
} |
\documentclass[a4paper, 12pt]{article}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage [autostyle, english = american]{csquotes}
\MakeOuterQuote{"}
\usepackage{url}
\usepackage{import}
\usepackage{tabularx}
\usepackage{booktabs}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage[margin=1.25in]{geometry}
\usepackage{caption}
\usepackage{multirow}
\usepackage[table]{xcolor}
\usepackage{rotating}
\usepackage{mathtools}
\usepackage[multiple]{footmisc}
\usepackage{xr}
\usepackage{breakcites}
\usepackage{matlab-prettifier}
\usepackage[]{mcode}
\usepackage{listings}
\usepackage{color}
\usepackage{hyperref}
\usepackage{authblk}
\title {Profile Ranking Adaptive Choice-Based Conjoint Analysis: A Complementary Approach to Utility-Based Analysis of Small Populations}
\author[1]{}
% \author[1]{Skyler Laney\thanks{skyler.laney@my.wheaton.edu}}
% \author[1]{Leo O'Malley\thanks{leo.omalley@my.wheaton.edu}}
% \author[1]{Cathy Shi\thanks{cathy.shi@my.wheaton.edu}}
% \author[1]{Danilo Diedrichs\thanks{danilo.diedrichs@wheaton.edu}}
% \affil[1]{Department of Mathematics, Wheaton College}
\date{}
\usepackage[]{mcode}
\usepackage{matlab-prettifier}
\usepackage{listings} %For code in appendix
\usepackage{color} %red, green, blue, yellow, cyan, magenta, black, white
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\usepackage{gensymb}
\usepackage{makeidx}
\makeindex
\pagestyle{empty}
\usepackage{endnotes}
\usepackage{lineno}
%\linenumbers
\begin{document}
%%%%For color MATLAB Scripts
\lstset
{ %Formatting for code in appendix
language=Matlab,
basicstyle=\scriptsize,
numbers=left,
stepnumber=1,
showstringspaces=false,
tabsize=1,
breaklines=true,
breakatwhitespace=false,
}
\maketitle
\hrulefill
\externaldocument{targettable}
\externaldocument{SimpleEquityModel}
\vspace{.7in}
\begin{abstract}
To analyze Adaptive Choice-Based Conjoint (ACBC) survey samples from small populations, a new methodology called profile ranking based ACBC (PR-ACBC) is proposed as a complement to utility-based ACBC. PR-ACBC offers a form of validation especially useful for small survey data with high variances in partworth utilities. Without requiring knowledge of partworth utilities, PR-ACBC deduces from sample data the maximum likelihood population ranking for known population sizes using a multivariate hypergeometric distribution, and for unknown population sizes using Lagrange-multiplier-based optimization. A population ranking interval can easily be computed for each profile as an interval in which the population profile ranking must lie. Various distance measures from statistical ranking theory are used to analyze profile decomposition (the importance of attribute levels in choice tasks), and multidimensional scaling (MDS) is used for visual representation of attribute importances, which is of particular value in comparing sample sub-groups. The methodology of PR-ACBC is introduced using a toy survey, and its application is illustrated by a recent survey administered to faith-based and non-faith-based disaster relief organizations belonging to the National Voluntary Organizations Active in Disaster (VOAD).
\end{abstract}
%% THIS IS FOR A SHORT SCRIPT
% \begin{table}[!htpb]
% \begin{tabular}{|l|}\toprule
% {\bf MATLABScript.m}\\\hline
% \parbox[b]{5.75in}{\lstinputlisting[style=Matlab-editor]{MATLABScript.m}}\\\hline\hline
% \bottomrule
% \end{tabular}
% \end{table}
%% THIS IS FOR A LONG SCRIPT WHICH MUST BE SPLIT INTO
%% SHORTER BLOCK OF CODE YOU CAN SPECIFY THE
%% RANGE OF LINE NUMBERS DISPLAYED AND NUMBER
%% OF THE FIRST LINE
% \begin{table}[!htpb]
% \centering
% \begin{tabular}{|l|}\hline
% MATLABScript.m. (p1 of 1)\\\hline
% \parbox[b]{5.8in}{\lstinputlisting[style=Matlab-editor,firstline=20, lastline=32, firstnumber=20]{MATLABScript.m}}\\\hline
% \end{tabular}
% \end{table}
\vspace{1in}
\section{Introduction}
Adaptive Choice-Based Conjoint (ACBC) analysis surveys are a widely utilized, well-developed and highly effective type of conjoint analysis (Orme and Chrzan, 2017). While the Max-Diff approach to select the best and worst among several profiles has generated a great deal of research interest, we focus on ACBC surveys whose choice tasks are designed with the simplest choice between just two concepts. Following Sawtooth's Lighthouse survey creation tool, we structure our choice-task stage as a tournament beginning with 16 profiles close to the respondent's \#1 profile called ``Build Your Own'' (BYO). For large samples, use of a sophisticated statistical method such as hierarchical-Bayesian Markov-Chain Monte-Carlo (HB MCMC) simulation (Rossi et al. 2005) is extremely effective for estimating partworth utilities and their variances. In the case of very small samples (e.g.\ $n\le 15$) from a small population (e.g.\ $N\le 50$), the variances in partworth utility may hamper both accurate prediction of choice experiments and ranking of attribute importances. Profile ranking (PR-) ACBC is therefore introduced as a method serving as a validity check for profile choice prediction based on partworth levels, as well as for attribute importance rankings derivable from the partworth utilities. PR-ACBC utilizes distance measures from statistical ranking theory for attribute decomposition as an alternative to partworth utilities for choice predictions, and multi-dimensional scaling (MDS) as a means to assess attribute importances (Alvo and Yu 2014).
In Section 2, we introduce basic PR-ACBC methodology by means of a generic survey with only 4 profiles constructed from 2 attributes each with 2 levels. We begin with a fundamental observation that the exact sample profile rankings directly obtainable from choice tournament data cannot be predicted by multiple linear regression of part-worth utilities. PR-ACBC then proceeds to analyze survey tournament data without requiring any knowledge of partworth utilities. Maximum likelihood estimate (MLE) population rankings for known population sizes are obtainable by a discrete multivariate hypergeometric distribution (Oberhofer and Kaufman 1987), and for unknown population sizes by multivariable calculus optimization using Lagrange multipliers (Stewart 2016). Population ranking intervals (PRIs) are easily computed from sample tournament data as one-dimensional intervals which must contain the population profile rankings. Statistical ranking theory distance measures are then used for profile attribute decomposition (akin to partworth levels) and multi-dimensional scaling (MDS) for attribute importances. These are important for comparing and contrasting sample subgroups. In Section 3 we illustrate PR methodology using a recent ACBC survey deployed to both faith-based and non-faith-based disaster relief organizations. The context motivating this methodological study is a sequel to a novel application of ACBC in disaster-response research (Gralla et al. 2014).
\section{PR-ACBC Methodology}
\subsection{Simple Example}
Consider a generic ACBC survey with just 2 attributes each having 2 levels. We designate the 4 possible profiles $A=11, B=12, C=21, D=22$, where $X=x_1x_2$ designates that profile $X$ has level $x_1$ for the first attribute and level $x_2$ for the second attribute. We consider a population with $N=8$ possible respondents from which sample data for $n=4$ randomly selected respondents is obtained by a head-to-head choice tournament of the 4 profiles. A sample tournament outcome is shown in Figure \ref{SimpleTourn}.
\begin{figure}[!htpb]
\centering
\includegraphics[width=1.75in, height=1.5in]{SimpleTourn.png}
\caption{One possible tournament outcome ranks profile A=11 first, B=12 second, and the remaining two profiles C=21 and D=22 third. }
\label{SimpleTourn}
\end{figure}
{\flushleft For} this respondent, profile A=11 is ranked 1 (tournament winner), profile B=12 is ranked 2 (runner-up), and profiles C=21 and D=22 are both ranked 3. Note that each of the $n=4$ sample respondents will have a tournament outcome, which we can compile into an \emph{Individual Ranking Table (IRT)}, an example of which is shown in Table \ref{Tab1}.
\begin{table}[!htpb]
\centering
\scriptsize
\begin{tabular}{c|ccc}
&\multicolumn{3}{c}{Rank}\\
Respondent& 1 & 2 & 3\\\hline
1& A&B&C,D\\
2& A &C&B,D \\
3& B &C&A,D \\
4& A &C&B,D \\
\end{tabular}
\caption{Individual Ranking Table. Tournament outcomes for a sample of 4 respondents.}
\label{Tab1}
\end{table}
{\flushleft From} this data, it is easy to compute the \emph{Sample Ranking Table (SRT)} which indicates how many respondents assigned a particular rank to each profile (Table \ref{Tab2}).
\begin{table}[!htpb]
\scriptsize
\centering
\begin{tabular}{c|ccc|c}
&\multicolumn{3}{c}{Rank}&\\
Profile& 1 & 2 & 3&SPR\\\hline
A& 3&0&1&1.5\\
B& 1 &1&2 &2.25\\
C& 0 &3&1&2.25 \\
D& 0 &0&4&3 \\
\end{tabular}
\caption{Sample Ranking Table (SRT). The $i,j$ entry of the 4x3 matrix forming the body of the table shows the number of respondents who assigned to the profile in the margin of row $i$, the rank shown by the header in column $j$. The sample profile ranking (SPR) is the arithmetic average of the sample respondent rankings.}
\label{Tab2}
\end{table}
Given any profile $X$ and a sample of size $n=4$, the \emph{sample profile ranking (SPR)} $\rho_n(X)=\rho_4(X)$ is obtained as the arithmetic average of the rankings assigned by the $n=4$ respondents. For example, for A=11, $\rho_4(A)=[3(1)+1(3)]/4=1.5$, and for C=21 it is $\rho_4(C) = [3(2)+1(3)]/4=2.25$.
In this case, profile A=11 has the best sample profile ranking, profiles B=12 and C=21 are tied for second, and profile D=22 comes in fourth.
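These averages can be checked with a few lines of code (a sketch; the matrix simply restates Table~\ref{Tab1} with columns ordered A--D):
\begin{verbatim}
import numpy as np

# Rows = respondents 1-4, columns = profiles A, B, C, D (from Table 1).
irt = np.array([[1, 2, 3, 3],
                [1, 3, 2, 3],
                [3, 1, 2, 3],
                [1, 3, 2, 3]])
spr = irt.mean(axis=0)
print(dict(zip("ABCD", spr)))   # {'A': 1.5, 'B': 2.25, 'C': 2.25, 'D': 3.0}
\end{verbatim}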
Our main question is ``\emph{what can we infer from the SRT about the population ranking table (PRT)?}'', the latter meaning the profile ranking table for all $N=8$ population respondents. To begin, note that there are 12 different possible tournament outcomes, each determined by specifying the winner and the runner-up (Table \ref{Tab3}):
\begin{table}[!htpb]
\scriptsize
\centering
\begin{tabular}{c|ccc}
&\multicolumn{3}{c}{Rank}\\
Outcome& 1 & 2 & 3\\\hline
$O_1$& A&B&C,D\\
$O_2$& A &C&B,D \\
$O_3$& A&D&B,C\\
$O_4$& B&A&C,D\\
$O_5$& B &C&A,D\\
$O_6$& B&D&A,C\\
$O_7$& C&A&B,D\\
$O_8$& C &B&A,D\\
$O_9$& C&D&A,B\\
$O_{10}$& D&A&B,C\\
$O_{11}$& D&B&A,C\\
$O_{12}$& D&C&A,B\\
\end{tabular}
\caption{The 12 possible tournament outcomes.}
\label{Tab3}
\end{table}
{\flushleft Each} member of the population with size $N=8$ has an outcome belonging to the set $\mathcal{S}=\{O_1,...,O_{12}\}$. In other words, the outcomes for a population with $N=8$ consists of $N=8$ draws (with replacement) from a hat containing the 12 outcomes in the set $\mathcal{S}$.
\begin{table}[!htpb]\centering
\scriptsize
\begin{tabular}{c|ccc|c}
&\multicolumn{3}{c}{Rank}&\\
Respondent& 1 & 2 & 3&Outcome\\\hline
1& A&B&C,D&$O_1$\\
2& A &C&B,D&$O_2$ \\
3& B &C&A,D & $O_5$\\
4& A &C&B,D & $O_2$\\\hline
5 & \multicolumn{3}{c|}{unknown}&unknown\\
6& \multicolumn{3}{c|}{unknown}&unknown\\
7& \multicolumn{3}{c|}{unknown}&unknown\\
8& \multicolumn{3}{c|}{unknown}&unknown\\
\end{tabular}
\caption{{\small The individual ranking table for the entire population must consist of the 4 sample outcomes plus an additional 4 outcomes which are unknown.}}
\label{Tab4}
\end{table}
{\flushleft Returning to our example, as shown in Table \ref{Tab4}, } we note that \emph{any} outcomes for respondents 5, 6, 7 and 8 could in principle have resulted in the observed sample outcomes for respondents 1, 2, 3 and 4. However, not all choices for the unknown outcomes would have the same probability of generating a sample with $n= 4$ such that two of the outcomes are $O_2$, and one each of the outcomes are $O_1$ and $O_5$. For example, let us consider first the case where the unknown population outcomes are exactly equal to the observed sample. That is, the outcomes for the unknown 4 population members also consist of two with outcome $O_2$ and one each with outcomes $O_1$ and $O_5$. As a result, the total population consists of four with outcome $O_2$ and two each with outcomes $O_1$ and $O_5$. The resulting \emph{population ranking table} (PRT) shown in Table \ref{Tab5} is in this case obtained by simply doubling each entry in the body of the SRT (Table \ref{Tab2}).
\begin{table}[!htpb]
\centering
\scriptsize
\begin{tabular}{c|ccc|c}
&\multicolumn{3}{c}{Rank}&\\
Profile& 1 & 2 & 3&PPR\\\hline
A& 6&0&2&1.5\\
B& 2 &2&4 &2.25\\
C& 0 &6&2&2.25 \\
D& 0 &0&8&3 \\
\end{tabular}
\caption{{\small Population Ranking Table (PRT) for a population $N=2n=8$ obtained by doubling each entry in the body of the SRT of Table \ref{Tab2}. In this case, the population profile rankings (PPR) are identical to the sample profile rankings (SPR)}}
\label{Tab5}
\end{table}
\subsection{A Fundamental Observation}
In this section we explain why least squares multiple regression (LSMR) will not in general give exact sample profile rankings. In other words, part-worth utilities can only approximate sample rankings.
\subsection{Analytic Approach}
Least squares multiple regression (LSMR) can be used to predict sample profile rankings based on individual sample outcomes as we will now explain using our simple 2-attribute, 2-level survey and generic sample data for 4 respondents. Table \ref{Tab7} gives the dataset \{($U_i,V_i,Y_i$)\} ($i = 1,...,16$) where
\begin{eqnarray*}
U_i&=& 1 \textup{ if attribute 1 has level 1, and 0 if it has level 2}\\
V_i&=& 1 \textup{ if attribute 2 has level 1, and 0 if it has level 2}\\
Y_i&=& \textup{ Respondent's ranking of a profile with $U=U_i$, $V=V_i$.}
\end{eqnarray*}
\begin{table}[!htpb]
\centering
\small
\begin{tabular}{cc|cccc}
\multicolumn{2}{c}{} &\multicolumn{4}{c}{Output Ranking Data}\\\hline
$U$ & $V$ & Respondent 1& Respondent 2& Respondent 3& Respondent 4\\ \hline
1 &1&$Y_1$&$Y_2$&$Y_3$&$Y_4$\\
1 &0&$Y_5$&$Y_6$&$Y_7$&$Y_8$ \\
0 &1&$Y_9$&$Y_{10}$&$Y_{11}$&$Y_{12}$ \\
0 &0&$Y_{13}$&$Y_{14}$&$Y_{15}$&$Y_{16}$ \\\hline
\end{tabular}
\caption{{\small Dataset ranking structure by respondents.}}
\label{Tab7}
\end{table}
This dataset has certain properties:
\begin{itemize}
\item
Each column consists of a respondent's profile rankings, and hence contains the values $1, 2, 3, 3$ in any order.
\item
Table \ref{Tab7} can also be represented in the form of Table \ref{Tab8}, by which we see that $$\sum U_i = \sum V_i = \sum U_i^2 = \sum V_i^2= 8, $$ and $$ \sum U_iV_i = 4,$$ where the symbol $\sum $ represents $\displaystyle \sum_{i=1}^{16}$.
\end{itemize}
\begin{table}[!htpb]
\centering
\small
\begin{tabular}{cc|c}
$U$ & $V$ & Rank\\ \hline
1& 1& $Y_1$\\
1& 1& $Y_2$\\
1& 1& $Y_3$\\
1& 1& $Y_4$\\
1& 0& $Y_5$\\
1& 0& $Y_6$\\
1& 0& $Y_7$\\
1& 0& $Y_8$\\
0& 1& $Y_9$\\
0& 1& $Y_{10}$\\
0& 1& $Y_{11}$\\
0& 1& $Y_{12}$\\
0& 0& $Y_{13}$\\
0& 0& $Y_{14}$\\
0& 0& $Y_{15}$\\
0& 0& $Y_{16}$\\\hline
\end{tabular}
\caption{{\small Dataset's ranking structure with respondents combined.}}
\label{Tab8}
\end{table}
Using least squares multiple linear regression (LSMR) on the dataset in Table \ref{Tab8}, we estimate each sample profile ranking $Y_i$ as $\hat{Y}_i$:
$$
\hat{Y}_i=c_0 + c_1 U_i + c_2 V_i,
$$
{\flushleft where} the regression coefficients $c_0,c_1,c_2$ are determined by minimizing the sum of squared residuals (SSR):
$$
SSR = \sum_{i=1}^{16}(Y_i-\hat{Y}_i)^2
=\sum_{i=1}^{16}(Y_i-(c_0 + c_1 U_i + c_2 V_i))^2.
$$
To minimize the SSR, we set the partial derivatives with respect to $c_0,c_1$ and $c_2$, equal to zero:
$$\frac{\partial SSR}{\partial c_0} = \frac{\partial SSR}{\partial c_1} = \frac{\partial SSR}{\partial c_2} = 0.$$
This yields the linear system:
$$\begin{cases}
nc_0 + c_1\sum U_i + c_2\sum V_i = \sum Y_i\\
c_0\sum U_i + c_1\sum U_i^2 + c_2\sum U_iV_i = \sum U_iY_i\\
c_0\sum V_i + c_1\sum U_iV_i + c_2\sum V_i^2 = \sum V_iY_i
\end{cases},$$\\
which is equivalent to the matrix equation:
\[
\begin{bmatrix}
n& \sum U_i& \sum V_i \\
\sum U_i&\sum U_i^2&\sum U_iV_i\\
\sum V_i& \sum U_iV_i& \sum V_i^2\\
\end{bmatrix}
%
\begin{bmatrix}
c_0 \\
c_1\\
c_2
\end{bmatrix}
=
\begin{bmatrix}
\sum Y_i\\
\sum U_iY_i\\
\sum V_iY_i
\end{bmatrix}
.
\]
{\flushleft Simplifying} the sums and using Cramer's Rule we obtain the regression coefficients $c_0,c_1$ and $c_2$:
$$
c_0 =
\frac{
\begin{vmatrix}
\sum Y_i & 8 & 8\\
\sum U_iY_i& 8 & 4\\
\sum V_iY_i & 4 & 8
\end{vmatrix}
}{256} = \frac{
\begin{vmatrix}
\sum Y_i & 2 & 2\\
\sum U_iY_i& 2 & 1\\
\sum V_iY_i & 1 & 2
\end{vmatrix}
}{16},
$$
$$
c_1 =
\frac{
\begin{vmatrix}
16 & \sum Y_i & 8 \\
8 & \sum U_iY_i& 4\\
8 & \sum V_iY_i & 8
\end{vmatrix}
}{256} = \frac{
\begin{vmatrix}
4 & \sum Y_i & 2 \\
2 & \sum U_iY_i& 1\\
2 & \sum V_iY_i & 2
\end{vmatrix}
}{16}, \textup{and}
$$
$$
c_2 =
\frac{
\begin{vmatrix}
16 & 8 & \sum Y_i\\
8 & 8 & \sum U_iY_i \\
8 & 4 & \sum V_iY_i
\end{vmatrix}
}{256} = \frac{
\begin{vmatrix}
4 & 2 & \sum Y_i\\
2 & 2 & \sum U_iY_i \\
2 & 1 & \sum V_iY_i
\end{vmatrix}
}{16}.
$$
The LSMR predicted profile rankings $ \hat{Y}_{uv}$ are given by:
$$\hat{Y}_{11} = c_0 + c_1 + c_2 =
\frac{2\sum U_iY_i + 2\sum V_iY_i - \sum Y_i}{16},$$
$$\hat{Y}_{10} = c_0 + c_1 =
\frac{2\sum U_iY_i - 2\sum V_iY_i + \sum Y_i}{16},$$
$$\hat{Y}_{01} = c_0 + c_2 =
\frac{-2\sum U_iY_i + 2\sum V_iY_i + \sum Y_i}{16}, \textup{ and}$$
$$\hat{Y}_{00} = c_0 =
\frac{-2\sum U_iY_i - 2\sum V_iY_i +3 \sum Y_i}{16}.$$
The corresponding actual sample profile rankings obtained by averaging the respondent rankings are:
$$\bar{Y}_{11} = \frac{Y_1+Y_2+Y_3+Y_4}{4},$$
$$\bar{Y}_{10} = \frac{Y_5+Y_6+Y_7+Y_8}{4},$$
$$\bar{Y}_{01} = \frac{Y_9+Y_{10}+Y_{11}+Y_{12}}{4}, \textup{ and}$$
$$\bar{Y}_{00} = \frac{Y_{13}+Y_{14}+Y_{15}+Y_{16}}{4}.$$
{\flushleft We} thus have the following theorem: \emph{The sample profiles' LSMR predicted and actual rankings are the same if and only if the following four equations hold}
\begin{equation}
\textup{(profile 11)} : 2\sum U_iY_i + 2\sum V_iY_i - \sum Y_i = 4( Y_1+Y_2+Y_3+Y_4 ),
\end{equation}
\label{eq:14}
\begin{equation}
\textup{(profile 10)} : 2\sum U_iY_i - 2\sum V_iY_i + \sum Y_i = 4( Y_5+Y_6+Y_7+Y_8 ),
\end{equation}
\label{eq:15}
\begin{equation}
\textup{(profile 01)} : -2\sum U_iY_i + 2\sum V_iY_i + \sum Y_i = 4(Y_9+Y_{10}+Y_{11}+Y_{12} ), \textup{and}
\end{equation}
\label{eq:16}
\begin{equation}
\textup{(profile 00)} : -2\sum U_iY_i - 2\sum V_iY_i +3 \sum Y_i = 4(Y_{13}+Y_{14}+Y_{15}+Y_{16} ).
\end{equation}
\label{eq:17}
{\flushleft Furthermore}, by adding these equations, we obtain a corollary:
\emph{The following is a necessary condition for the LSMR predicted sample profile rankings to equal the sample profile rankings:}
\begin{equation}
\sum U_iY_i = \sum V_iY_i.
\label{cor}
\end{equation}
Table \ref{Tab9} gives an example where the actual (average) profile rankings and the LSMR-predicted (part-worth based) profile rankings are equal.
\begin{table}[!htpb]
\centering
\scriptsize
\begin{tabular}{cc|cccc|c|c|c}
\multicolumn{2}{c}{} &\multicolumn{4}{c}{Respondents}\\\hline
$U$ & $V$ & R 1& R 2& R 3& R 4 &Actual Sample Rank&Predicted Sample Rank $=c_0+c_1U+c_2V$ & Residual Error\\ \hline
1 &1&1&1&3&1&1.5&3-.75(1)-.75(1)=1.5&0\\
1 &0&2&3&1&3&2.25&3-.75(1)-.75(0)=2.25&0 \\
0 &1&3&2&2&2&2.25 &3-.75(0)-.75(1)=2.25&0 \\
0 &0&3&3&3&3& 3 &3-.75(0)-.75(0)=3&0\\\hline
\end{tabular}
\caption{{\small Predicted rankings for the sample outcomes in Table \ref{Tab1}.}}
\label{Tab9}
\end{table}
{\flushleft In} this case, the regression coefficients are
$c_0=3$, $c_1=-.75$, $c_2=-.75$. The actual sample profile rankings obtained by averaging the respondent rankings are equal to the predicted profile rankings obtained by LSMR, shown in Table \ref{Tab9}.
{\flushleft Moreover}, the equality (\ref{cor}) in the corollary holds:
$$\sum U_iY_i=\sum V_iY_i = 15.$$
The dataset in Table \ref{Tab10} shows 4 respondents whose predicted and actual profile rankings are not equal. In this case, the regression coefficients are
$c_0=3$, $c_1=-1$, $c_2=-.5$. The actual profile rankings obtained by averaging the respondent rankings are not equal to the estimated profile rankings obtained by LSMR since all of the profile residuals are non-zero. This must be so since the corollary's condition (\ref{cor}) does not hold.
%(14 $\textdoublebarslash$ 16).
\begin{table}[!htpb]
\centering
\scriptsize
\begin{tabular}{cc|cccc|c|c|c}
\multicolumn{2}{c}{} &\multicolumn{4}{c}{Respondents}\\\hline
$U$ & $V$ & R 1& R 2& R 3& R 4 &Actual Rank&Predicted Rank $=c_0+c_1U+c_2V$ & $|$Residual Error$|$\\ \hline
1 &1&1&1&1&2&1.25&3-1(1)-.5(1)=1.5&0.25\\
1 &0&2&3&3&1&2.25&3-1(1)-.5(0)=2&0.25\\
0 &1&3&3&2&3&2.75 &3-1(0)-.5(1)=2.5&0.25 \\
0 &0&3&2&3&3&2.75 &3-1(0)-.5(0)=3&0.25 \\\hline
\end{tabular}
\caption{{\small LSMR predicted profile rankings for a sample will in general involve residual errors. }}
\label{Tab10}
\end{table}
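Both tables can be reproduced with an ordinary least-squares fit. The following is a sketch using NumPy; the row ordering of profiles 11, 10, 01, 00 and the respondent data are those of Tables~\ref{Tab9} and~\ref{Tab10}.
\begin{verbatim}
import numpy as np

def lsmr_check(ranks_by_profile):
    # ranks_by_profile: 4 x 4 array, rows = profiles 11, 10, 01, 00,
    # columns = the four respondents.
    U = np.repeat([1.0, 1.0, 0.0, 0.0], 4)
    V = np.repeat([1.0, 0.0, 1.0, 0.0], 4)
    Y = np.asarray(ranks_by_profile, float).ravel()
    A = np.column_stack([np.ones_like(U), U, V])
    c, *_ = np.linalg.lstsq(A, Y, rcond=None)        # c = (c0, c1, c2)
    predicted = (A @ c).reshape(4, 4)[:, 0]          # one value per profile
    actual = Y.reshape(4, 4).mean(axis=1)            # average respondent ranks
    return c, predicted, actual

# Table 9 data: predicted equals actual (zero residuals).
print(lsmr_check([[1, 1, 3, 1], [2, 3, 1, 3], [3, 2, 2, 2], [3, 3, 3, 3]]))
# Table 10 data: residuals of 0.25 on every profile.
print(lsmr_check([[1, 1, 1, 2], [2, 3, 3, 1], [3, 3, 2, 3], [3, 2, 3, 3]]))
\end{verbatim}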
\subsection{Geometric Interpretation}
The conditions in the theorem above for whether or not the LSMR profile ranking predictions are error-free may be understood geometrically by considering points in an $x_1x_2x_3$ coordinate system in which the $x_1x_2$ coordinates represent the profile and the $x_3$ coordinate the ranking. For our simple generic survey, if the four points representing the sample profile rankings are co-planar, there is no error; otherwise, the LSMR predicted profile rankings will have a residual error (Figure \ref{sec4fig}).
\begin{figure}[!htpb]
\centering
\includegraphics[width=2in,height=2in]{sec4fig.eps}
\caption{{\small In special cases such as the one described in Table \ref{Tab9}, the 4 points representing the sample profile rankings are co-planar ($Y_i=3-.75U_i-.75V_i$, shown in solid outline). Otherwise, as in Table \ref{Tab10}, the utility-based sample profile rankings will have residual errors as shown by the sample rankings and plane $Y_i=3-U_i-.5V_i$ (dashed outline). }}
\label{sec4fig}
\end{figure}
{\flushleft Such a} geometric interpretation is not possible for surveys involving more than 2 attributes, in which case standard LSMR residual analysis indicates the error in sample profile rankings using regression coefficients.
\subsection{Maximum Likelihood Estimation}
\subsubsection{Known Population Size}
\subsubsection{Unknown Population Size}
Let us assume that each member of the population with $N=8$ is equally likely to occur in our sample with $n=4$. Assume further that $p_i$ is the probability that a respondent's outcome is $O_i$. The probability $p$ of observing the sample consisting of one $O_1$, two $O_2$'s and one $O_5$ is
\begin{equation}
p=f(p_1,p_2,p_5)=\frac{4!}{1!2!1!}[p_1p_2^2p_5],
\end{equation}
\label{eq:1}
{\flushleft where} $g(p_1,p_2,p_5)=p_1+p_2+p_5=1$.
The values of $p_1^*,p_2^*,$ and $p_5^*$ which maximize $H(p_1,p_2,p_5)=\ln (f(p_1,p_2,p_5))$ (and hence also maximize $p=f(p_1,p_2,p_5)$) are obtained using Lagrange multipliers:
\begin{eqnarray*}
\nabla H(p_1^*,p_2^*,p_5^*) & = & \lambda \nabla g
(p_1^*,p_2^*,p_5^*),
\end{eqnarray*}
{\flushleft and therefore}
\begin{eqnarray*}
\frac{1}{p_1^*} & = & \lambda\\
\frac{2}{p_2^*} & = & \lambda\\
\frac{1}{p_5^*} & = & \lambda.
\end{eqnarray*}
{\flushleft (The scalar quantity $\lambda$ is called a Lagrange multiplier.) Using} $p_1^*+p_2^*+p_5^*=1$ gives $\frac{1}{\lambda} + \frac{2}{\lambda}+\frac{1}{\lambda}=1$ and so $ \lambda = 4$. Hence, the values $p_1^*=\frac{1}{4}, p_2^*=\frac{1}{2}$, $p_5^*=\frac{1}{4}$ maximize the probability of the observed sample outcomes. Thus, the PRT in Table \ref{Tab5} can be interpreted as the expected rankings for a population of size 8 which maximizes the likelihood of the observed sample outcomes. The likelihood of the observed sample would be lower for any other expected population rankings such as a PRT corresponding to a population whose response outcomes consist of two $O_1$'s, two $O_5$'s, three $O_2$'s and the outcome $O_7$ (which reverses the winner and runner-up in outcome $O_2$). The revised PRT is shown in Table \ref{Tab6}.
\begin{table}[!htpb]
\centering
\scriptsize
\begin{tabular}{c|ccc|c}
&\multicolumn{3}{c}{Rank}&\\
Profile& 1 & 2 & 3&PPR\\\hline
A& 5&1&2&1.625\\
B& 2 &2&4 &2.25\\
C& 1 &5&2&2.125 \\
D& 0 &0&8&3 \\
\end{tabular}
\caption{{\small A revised population ranking table (PRT$_1$) with $N=2n=8$ obtained by replacing an outcome $O_2$ with outcome $O_7$ in the population with $N=8$ represented by Table \ref{Tab5}. Note that the population ranking of profile A increases by .125, while profile C's decreases by .125.}}
\label{Tab6}
\end{table}
In general, let $n_k$ be the number of sample outcomes $O_k$ ($k=1,2,...,K$) and let $p_k$ be the probability that a respondent in the population has outcome $O_k$ $(k= 1, 2, ..., K)$. The likelihood function $f(p_1, p_2, ..., p_K)$ giving the probability of observing the sample values $n_1, ..., n_{K}$ is given by
\begin{equation}
f(p_1, ...., p_K)= \frac{n!}{n_1!n_2!\cdot\cdot\cdot n_K!} \prod_{k=1}^K p_k^{n_k},
\end{equation}
\label{eq:4}
{\flushleft with} $\sum_{k=1}^{K}n_k=n$ and $\sum_{k=1}^{K}p_k=1$.
We seek to find the values $p_1^*, ..., p_{K}^*$ which maximize the likelihood function $f$, or equivalently, the log-likelihood function
\begin{equation}
H(p_1, ..., p_K)=\ln f = \ln(n!) - \sum_{k=1}^{K} \ln(n_k!) +\sum_{k=1}^{K} n_k\ln(p_k),
\end{equation}
\label{eq:5}
{\flushleft subject} to the constraint $g(p_1, ..., p_{K})=p_1+p_2+...+p_K=1$. Properties of gradients imply that the optimal values $p_i^*$ must satisfy
\begin{equation}
\nabla H(p_1^*, ..., p_K^*) = \lambda \nabla g(p_1^*, ..., p_{K}^*).
\end{equation}
\label{eq:6}
{\flushleft It} follows that for $k=1, ..., K$,
\begin{equation}
\frac{n_k}{p_k^*}=\lambda.
\end{equation}
\label{eq:7}
{\flushleft Hence,} $n=\sum_{k=1}^{K} n_k = \lambda \sum_{k=1}^{K} p_k^* = \lambda$, and so the probabilities $p_k^* = \frac{n_k}{n}$ give the maximum likelihood of the observed sample outcomes $n_k$ ($k=1, 2, ..., K$). For any sample of size $n$ and number $n_k$ of observed outcomes $O_k$ ($k=1, 2, ...K$), the maximum likelihood probabilities $p_k^*=\frac{n_k}{n}$
indicate that for a population of size $N$, the expected number $N_k$ of outcomes $O_k$ is given by $E(N_k)=p_k^* N.$ A maximum-likelihood population could be simulated by augmenting the observed $n$ sample outcomes with $N-n$ additional random draws, where the probability of outcome $O_k$ at each draw is given by $p_k^*$. For a large number of such randomly constructed populations of size $N$, for each $k$ the average number of population outcomes $O_k$ is approximately $p_k^* N$.
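In the small example of Section 2 this amounts to the following (a sketch; plain Python suffices):
\begin{verbatim}
# Sample of n = 4 with one O_1, two O_2 and one O_5; population size N = 8.
n_k = {"O1": 1, "O2": 2, "O5": 1}
n = sum(n_k.values())
N = 8
p_star = {k: v / n for k, v in n_k.items()}          # p_k* = n_k / n
expected = {k: p * N for k, p in p_star.items()}     # E(N_k) = p_k* N
print(p_star)    # {'O1': 0.25, 'O2': 0.5, 'O5': 0.25}
print(expected)  # {'O1': 2.0, 'O2': 4.0, 'O5': 2.0}
\end{verbatim}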
\subsection{Population Ranking Intervals}
Maximum likelihood provides some insight into the expected population profile rankings. In this section, given ranking data for a sample consisting of $n$ survey respondents selected at random from a population with $N> n$ respondents, we will show how to construct confidence intervals which are guaranteed to include the population profile rankings.
\subsection{Population Ranking Range}
Returning to our simple example, in which each profile is ranked 1, 2, or 3, let $\rho_k(X)$ denote the ranking of profile $X$ based on tournament results for $k$ respondents. Note that for any population of $k$ respondents ($n\le k \le N$) which contains an observed sample of size $n$, the following inequality must hold:
\begin{equation}
\frac{k+n(\rho_n(X)-1)}{k}\le \rho_k(X) \le \frac{3k+n(\rho_n(X)-3)}{k}.
\label{eq6}
\end{equation}
{\flushleft This} interval containing $\rho_k(X) $ is obtained by either (i) assigning the rank 1 to $X$ for all $k-n$ members of the population not in the sample (lower bound for $\rho_k(X)$); or (ii) assigning the rank 3 to $X$ for all $k-n$ non-sample population members (upper bound for $\rho_k(X)$). Taking $k=N$, a sample of size $n$ provides a 100\% confidence interval $$\frac{N+n(\rho_n(X)-1)}{N}\le \rho_N(X) \le \frac{3N+n(\rho_n(X)-3)}{N}$$ for the population ranking $\rho_N(X)$ of any profile $X$.
Insight into the confidence interval (\ref{eq6}) is gained when we write it in the form
\begin{equation}
\rho_n(X)-e_1 \le \rho_k(X) \le \rho_n(X)+e_2,
\end{equation}
\label{eq:9}
{\flushleft where} $e_1$ is the maximum distance that $\rho_k(X)$ can lie below $\rho_n(X)$ (towards the lower ranking bound 1), and $e_2$ is the maximum distance that $\rho_k(X)$ can lie above $\rho_n(X)$ (towards the upper ranking bound 3). Note further that
\begin{equation}
\frac{k+n(\rho_n(X)-1)}{k} = \rho_n(X)-e_1,
\end{equation}
\label{eq:10}
{\flushleft which implies}
\begin{equation}
e_1 = \frac{k-n}{k}(\rho_n(X)-1).
\end{equation}
\label{eq:11}
{\flushleft Similarly,}
\begin{equation}
\frac{3k+n(\rho_n(X)-3)}{k} = \rho_n(X)+e_2,
\end{equation}
\label{eq:12}
{\flushleft which} yields
\begin{equation}
e_2=\frac{k-n}{k}(3-\rho_n(X)).
\end{equation}
\label{eq:13}
{\flushleft Let} $\lambda=\frac{k-n}{k}$ be the proportion of the population which has not taken the survey. In both directions, the interval extends from $\rho_n(X)$ a distance $\lambda$ times the distance to the endpoints of the ranking interval $[1, 3]$. In addition, the length of this confidence interval is given by $e_1+e_2=2\lambda$, as seen in Figure \ref{CIfig}. Note that the coefficient 2 of $\lambda$ arises algebraically as the difference between the extreme rankings 1 and 3.
\begin{figure}[!htpb]
\centering
\includegraphics[width=6.5in, height=1.75in]{Confidence_Interval.png}
\caption{Given a profile $X$ and its sample ranking $\rho_n(X)$, a 100\% confidence interval for the population profile ranking $\rho_k(X)$ is determined by $\lambda=\frac{k-n}{k}$, the proportion of the population who have not taken the survey.}
\label{CIfig}
\end{figure}
\subsection{Application}
These results can be used to quantify the possible consequences of survey response bias. For the simple example introduced in Section 2, let us consider the following response scenarios. Assume that, out of the total population $N=8$, two respondents have outright refused to take the survey, four have completed it, and the other two have not yet replied. If the analysis is performed with only the sample of $n=4$ respondents, then the length of the confidence interval is $\frac{2(8-4)}{8}=1$. If one of the non-respondents is convinced to participate, the interval length is reduced to $\frac{2(8-5)}{8}=\frac{3}{4}$, improving the precision by 25\%. If both of the non-respondents participate, then the interval is further reduced to $\frac{2(8-6)}{8}=\frac{1}{2}$. In other words, the two non-respondents cause the confidence interval to be twice as large, an important consideration in seeking to elicit survey response.
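A short helper makes these intervals easy to compute; the sketch below reproduces profile A from the simple example and the interval lengths of the response-bias scenario just described.
\begin{verbatim}
def ranking_interval(rho_n, n, k, r_min=1, r_max=3):
    # 100% interval for the population profile ranking, using the bounds above.
    lam = (k - n) / k
    return rho_n - lam * (rho_n - r_min), rho_n + lam * (r_max - rho_n)

# Profile A from the simple example: sample ranking 1.5 with n = 4, N = 8.
print(ranking_interval(1.5, 4, 8))            # (1.25, 2.25)
# Interval lengths 2*lambda for n = 4, 5, 6 respondents out of N = 8.
print([2 * (8 - n) / 8 for n in (4, 5, 6)])   # [1.0, 0.75, 0.5]
\end{verbatim}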
\subsection{Attribute Decomposition via Two-Dimensional Multidimensional Scaling}
\subsection{Application: Profile Ranking Analysis of a Small Population Disaster Relief Survey}
In this section we show how to apply PR ACBC methodology to an actual survey.
\subsection{Humanitarian Disaster Relief}
PR-based ACBC methodology for analyzing small populations has many possible applications. In disaster relief, effectiveness of a response may depend on the quality of collaboration between organizations with a broad diversity of religious and ideological perspectives. For effective coordination of relief, it is important that humanitarian organizations understand the unique traits and characteristics that shape their disaster response decisions. Through comparison of these factors, it is possible to design optimal partnerships and joint endeavors between organizations that may fulfill distinct, yet complementary, humanitarian roles. Our research focused on how faith-based organizations (FBOs) prioritize key attributes affecting their decision whether or not to respond to an international humanitarian disaster.
To this end, we designed a survey that elicits "Go/No-Go" decision profile preferences for a small population of approximately 50 international FBOs with headquarters in the United States. The attributes and levels for this survey are displayed in Figure \ref{AL}. Different disaster scenarios face off against each other in the tournament stage and are given rankings based on their performance. The purpose of the ranking is to determine whether FBOs fill a certain niche in the disaster response landscape, as might be inferred from their ``number-one'' or ``top-three'' ranked disaster response profiles. As a case study, we discuss how our methodology was applied to data obtained from this survey, administered using Sawtooth's Lighthouse platform.
\begin{figure}[!htpb]
\centering
\includegraphics[width=5.75in, height=7in]{AttributeLevels.png}
\caption{{\small An ACBC survey with 4 attributes, each consisting of 3 levels. }}
\label{AL}
\end{figure}
\subsection{Ranking Method}
As shown in Figure \ref{AL}, our humanitarian survey consists of four attributes, each with three levels. Thus, the number of possible profiles is $3^4=81$. These are identified by four-digit numbers $X=x_1x_2x_3x_4$ where profile $X$ has level $x_1$ for the first attribute, level $x_2$ for the second attribute, level $x_3$ for the third attribute, and level $x_4$ for the fourth attribute. In the tournament stage of the competition, there are four rounds, in which sixteen profiles face off against each other in head-to-head match-ups, much like the FIFA World Cup Round of 16. The competing profiles are selected from the 81 possible profiles based on the respondent's BYO preferences. We assign a ranking of 1 to the tournament winner, 2 to the runner-up, 3 to the semifinal losers, 4 to the quarterfinal losers, 5 to the profiles that are eliminated in the first round, and 6 to those that do not appear in the tournament. By the two-week deadline after deploying the survey, 5 FBOs had responded, resulting in the sample ranking table shown in Table \ref{Tab13}. The responding population ($N=10$) most likely to produce this observed sample is also included.
\begin{table}[!htpb]
\scriptsize
\centering
\begin{tabular}{c|ccccc|ccccc|ccc|c}
Profile& 1 & 2 & 3 & 4 & 5 & 6&7&8&9&10&11&12&13&PR\\\hline
1212& 2& 3& 7& 3& 7& 7& 1& 7& 7& 7& 4& 1& 7& 4.846\\
1112&7 &1 &7 &7 &1 &2 &7 &5 &7 &3 &7 &7 &4 &5.000\\
1312&7 &7 &7 &7 &5 &5 &7 &1 &1 &1 &7 &7 &3 &5.000\\
3212&7 &7 &7 &5 &5 &1 &2 &7 &4 &7 &7 &7 &1 &5.154\\
1122&1 &7 &7 &4 &7 &7 &5 &7 &4 &7 &3 &3 &7 &5.308\\
2212&7 &5 &5 &7 &7 &7 &7 &2 &7 &4 &2 &2 &7 &5.308\\
1222&7 &7 &4 &7 &4 &5 &7 &4 &3 &2 &7 &7 &7 &5.462\\
1232&5 &7 &7 &5 &5 &5 &4 &5 &7 &5 &7 &7 &2 &5.462\\
2312&7 &7 &7 &1 &5 &3 &4 &7 &2 &7 &7 &7 &7 &5.462\\
2112&3 &7 &7 &4 &5 &4 &3 &7 &7 &7 &7 &7 &5 &5.615\\\hline
\end{tabular}
\caption{{\small Top 10 ranked profiles for FBOs ($n=13,N=50$) } }
\label{Tab13}
\end{table}
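The PR column of Table~\ref{Tab13} is simply the arithmetic mean of each profile's individual rankings; for instance, for profile 1212 (a quick check in code):
\begin{verbatim}
ranks_1212 = [2, 3, 7, 3, 7, 7, 1, 7, 7, 7, 4, 1, 7]   # row 1212 of Table 13
print(round(sum(ranks_1212) / len(ranks_1212), 3))      # 4.846
\end{verbatim}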
\subsection{Profile Rank Confidence Intervals}
Applying the results of Section 2, using the data in Table \ref{Tab13}, given a profile's sample ranking ($n=5$), we can construct confidence intervals for population rankings with $N=10$. For example, consider the profile 1232 which had the top ranking in the sample. In a population with $N=10$ and $\rho_n(1232)=4.2$, the value $\lambda=\frac{1}{2}$ results in the confidence interval shown in Figure \ref{fig:4}. Since the range of individual rankings is $6-1=5$, we compute the interval length as $5\lambda = 2.5$. It follows that the population ranking $\rho_{10}(1232)$ could be up to 1.6 less or .9 greater than the sample ranking $\rho_5(1232)=4.2$. After following up with organizations to whom we sent the survey, we received survey data from an additional 5 FBOs, and calculated $\rho_{10}(1232)=4.6$, which falls within our confidence interval (Figure \ref{fig:4}). This approach can be applied to any profile in Table \ref{Tab13} and provides a simple visualization of the extent to which sample profile rankings can capture population profile rankings.
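Explicitly, using the quantities above,
$$e_1=\lambda\,(\rho_5(1232)-1)=\tfrac{1}{2}(4.2-1)=1.6, \qquad e_2=\lambda\,(6-\rho_5(1232))=\tfrac{1}{2}(6-4.2)=0.9,$$
so the population ranking must satisfy $2.6 \le \rho_{10}(1232) \le 5.1$, and the observed value $\rho_{10}(1232)=4.6$ indeed lies in this interval.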
\begin{figure}[!htpb]
\centering
\includegraphics[width=5.75in, height=2.75in]{Confidence_Interval_2.png}
\caption{Confidence Interval for profile 1232 (n = 5, k = 10).}
\label{fig:4}
\end{figure}
\subsection{Visualization of Top Ranked Profiles}
A final outcome of PR-ACBC would be a visual representation of the top-ranked profiles. For the population rankings in Table \ref{Tab13}, there is a triple tie for the number 1 ranking. A visual display of top-ranked profiles such as the one shown in Figure \ref{fig:6} might enhance the GUI of existing ACBC software.
\begin{figure}[!htpb]
\centering
\includegraphics[width=5.75in, height=2.75in]{Fig6.png}
\caption{A visual lineup of top-ranked profiles could enhance the GUI of commercial ACBC software.}
\label{fig:6}
\end{figure}
\section{Conclusion}
Unlike conjoint analysis of survey data where the target populations are large and more suitable for conventional statistical tools, we have introduced a simple, intuitive approach to a small population's profile rankings based on sample data. The population most likely to yield the given sample results is expected to have the same profile rankings as the sample's. We also provide a new type of 100\% confidence interval for profile rankings without using standard deviations, which can be used to quantify maximum possible survey response bias. Furthermore, we have shown that part-worth utilities obtained by multiple linear regression can only replicate sample profile rankings under special conditions, with the residuals indicating the errors in predicted rankings.
For applications such as humanitarian disaster relief, sample profile ranks are more easily understandable to respondents than part-worth utilities. The intuitive nature of profile ranking allows quick and straightforward analysis of any small sample of disaster response organizations. Given that the sample comprises a relatively significant portion of the overall population, 100$\%$ confidence intervals provide an absolute range of possible population profile rankings without the complexity of statistical inference. Consequently, for small organizations with low operating costs, these techniques may serve as an effective, yet affordable, alternative to more sophisticated ACBC survey utility-based analysis software. Moreover, a visualization of top-ranked or bottom-ranked profiles is a good way to summarize PR ACBC survey results.
Major areas open to further research include analysis of different ranking systems for various types of choice tournaments and application of PR ACBC methodology to other small population studies.
\subsection*{Acknowledgements}
The authors would like to thank
Erica Gralla, Jarrod Goentzel, Timotius Kartawijaya, and Mike Veatch for their valuable contributions to this work.
\section*{References}
\begin{list}{}{\itemindent=-2em}
\small
\item Alvo, M., and Yu, P.L.H. 2014. \emph{Statistical Methods for Ranking Data}. Springer.
\item Gralla, E., Goentzel, J., and Fine G. 2014. Assessing trade-offs among multiple objectives for humanitarian aid delivery using expert preferences.
\emph{Production and Operations Management} 23(6), 978-989.
\item Oberhofer, W. and Kaufman, H. 1987. Maximum Likelihood Estimation of a Multivariate Hypergeometric Distribution. \emph{Sankhya: The Indian Journal of Statistics, Series B (1960-2002)}, Indian Statistical Institute, 49(2), 188-191.
\item Orme, B.K., and Chrzan, K. 2017. \emph{Becoming an Expert in Conjoint Analysis: Choice Modeling for Pros.} Sawtooth Software.
\item Rao, V. R. 2014. \emph{Applied Conjoint Analysis}. Springer.
\item Rossi, P., Allenby, G. and McCulloch, R. 2005. \emph{Bayesian Statistics and Marketing.} John Wiley \& Sons, Ltd.
\item Stewart, J. 2016. \emph{Calculus, Early Transcendentals (8E)}. Cengage Learning.
\end{list}
\end{document}
% Chapter 2
% % % % Some Macros
\newcommand{\laplacian}[1][G]{\ensuremath{L_{#1}^{+}}}
\newcommand{\reffformula}[1][\laplacian]{\ensuremath{ (\chi_u - \chi_v)^T \ #1 \ (\chi_u - \chi_v) }}
\newcommand{\proj}{\ensuremath{I - \frac{\textbf{1} \textbf{1}^T}{n}}}
\newcommand{\sqlaplacian}[1][G]{\ensuremath{\sqrt{L_{#1}^{+}}}}
% \newcommand{\Mset}[2]{\ensuremath{\mathbb{M}_{#1 \times #2}}}
% \newcommand\norm[1]{\left\lVert#1\right\rVert}
% from https://tex.stackexchange.com/questions/107186/how-to-write-norm-which-adjusts-its-size
\newcommand\norm[1]{\left\lVert#1\right\rVert}
%
\newcommand{\comment}[1]{}
% https://tex.stackexchange.com/questions/87303/multi-line-block-comments-in-latex
% from https://tex.stackexchange.com/questions/39390/writing-a-limit-so-that-the-subscript-goes-directly-underneath
\newcommand{\Lim}[1]{\raisebox{0.5ex}{\scalebox{0.8}{$\displaystyle \lim_{#1}\;$}}}
\chapter{Matrix Approach} % Main chapter title
\label{Chapter4} % For referencing the chapter elsewhere, use \ref{Chapter2}
%----------------------------------------------------------------------------------------
% \section{Colbourn, Day, Nel}
\section{Harvey, Xu}
\citet{harvey2016generating} proposed an $\mathcal{O}(N^\omega)$ algorithm (where $\omega$ is the exponent of fast matrix multiplication) for sampling a uniform spanning tree, which is much simpler than the earlier algorithm of \citet{COLBOURN1996268} with the same running time. The starting point for the algorithm is the relationship between effective resistance and the probability that an edge belongs to a uniform spanning tree, together with the fact that sampling an edge corresponds to contracting it and discarding an edge corresponds to deleting it. The following naive chain-rule algorithm works on the same principle.
% \subsection{Techniques used}
\subsubsection{Naive chain rule algorithm}
\begin{algorithm}[H]
\KwIn{$G = (V,E) \ \text{and} \ L_G^+$}
\KwOut{Set of edges corresponding to a random spanning tree}
\For{$e = (u,v) \in E$ }{
$R_e^{\text{eff}} = (\chi_u - \chi_v)^T \ L_G^+ \ (\chi_u - \chi_v)$\;
\eIf{$(X \sim \text{Bernoulli}(R_e^{\text{eff}})) = 1$} {
Add edge $e$ to the spanning tree\;
$G = G / e$\;
}{
$G = G \setminus e$ \;
}
Update $ L_G^+ $ \;
}
\caption{Sampling uniform spanning tree using chain rule}
\end{algorithm}
Computing $\laplacian$ takes $\mathcal{O}(N^3)$ time, hence the overall running time is $\mathcal{O}(MN^3)$. The main bottleneck of this algorithm is that the graph, and with it $\laplacian$, must be modified each time a decision is made on an edge.
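To make the chain rule concrete, the following Python/NumPy sketch is a direct transcription of the algorithm above for unweighted graphs (my own illustration, not code from the paper). Contraction is simulated by relabelling vertices, and the pseudoinverse is naively recomputed at every step, which is exactly the $\mathcal{O}(MN^3)$ bottleneck.
\begin{verbatim}
import numpy as np

def laplacian(n, edges):
    """Laplacian of an (unweighted) multigraph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def naive_chain_rule_tree(n, edges, rng=None):
    """O(M N^3): the pseudoinverse is recomputed after every edge decision."""
    rng = rng or np.random.default_rng()
    labels = list(range(n))     # contraction is simulated by relabelling vertices
    remaining = list(edges)
    tree = []
    while remaining:
        u0, v0 = remaining.pop(0)
        u, v = labels[u0], labels[v0]
        if u == v:              # edge became a self-loop, i.e. R_eff = 0: delete it
            continue
        current = [(labels[a], labels[b]) for a, b in remaining] + [(u, v)]
        Lp = np.linalg.pinv(laplacian(n, current))
        r_eff = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]  # (chi_u - chi_v)^T L^+ (chi_u - chi_v)
        if rng.random() < r_eff:                    # Bernoulli(R_eff)
            tree.append((u0, v0))
            labels = [u if l == v else l for l in labels]  # contract e: merge v into u
        # else: delete e (it is simply dropped from the edge list)
    return tree

# Example: on the triangle below every run returns two of the three edges.
print(naive_chain_rule_tree(3, [(0, 1), (1, 2), (0, 2)]))
\end{verbatim}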
\subsection{Summary of the paper}
At a high level, the main ideas used in the paper are:
\begin{enumerate}
\item Use a divide-and-conquer algorithm to break the graph into smaller parts, sample each part separately, and update the pseudoinverse of the Laplacian lazily, only on the relevant subgraph, when needed.
\item The important insight here is that the sampling probability of an edge depends only on 4 entries of the pseudoinverse of the Laplacian. Hence we do not need to update all the entries of the matrix when the graph is modified.
\item A well-known method to compute the inverse of a matrix under updates is the Sherman-Morrison-Woodbury formula. In this case, however, the formula has to be adapted to the setting where only a submatrix is modified.
\item Since contracting an edge decreases the number of vertices, it would be cumbersome to change the dimension of the matrix every time. They overcome this issue by considering the formula in the limit: when the graph is viewed as an electrical network, increasing the weight of an edge corresponds to shorting that link, so in the limit we get the same result as contracting the edge.
\item One of the main improvements over the previous algorithm of the same complexity (\citet{COLBOURN1996268}) is that the intricacies of LU decomposition are avoided, since the current algorithm uses only matrix inversion.
\item All the formulas are derived for the submatrix case, hence the complexity of a subproblem depends only on the size of the subproblem.
\end{enumerate}
\subsection{Harvey, Xu Algorithm}
Given a graph $G = (V_G, E_G)$, the algorithm proceeds by splitting the vertex set into two equal halves $V_G = S \uplus R$. They then define the edge subsets $E[S] = (S \times S) \cap E_G$ and $E[R, S] = (R \times S) \cap E_G$.
Suppose $S = S_1 \uplus S_2$ and $R = R_1 \uplus R_2$. Then we can see that
$$ E[S] = E[S_1] \cup E[S_2] \cup E[S_1, S_2] $$
$$ E[R, S] = \bigcup\limits_{i,j \in \{1,2\} } E[R_i, S_j] $$
These identities are used to recurse on the subproblems for each subset; in this manner each edge is visited exactly once by the algorithm.
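The following small Python sketch (mine, not the paper's) checks these two identities on a toy graph, i.e.\ that every edge of $E[S]$ and $E[R,S]$ is handed to exactly one of the sub-calls.
\begin{verbatim}
def split(S):
    s = sorted(S)
    return set(s[:len(s) // 2]), set(s[len(s) // 2:])

def E_within(S, E):       # E[S] = (S x S) intersect E
    return {(u, v) for (u, v) in E if u in S and v in S}

def E_crossing(R, S, E):  # E[R,S]: one endpoint in R, the other in S
    return {(u, v) for (u, v) in E if (u in R and v in S) or (u in S and v in R)}

E = {(0, 1), (0, 2), (1, 3), (2, 3), (1, 2), (3, 4), (4, 5), (0, 5)}
S, R = {0, 1, 2}, {3, 4, 5}
S1, S2 = split(S); R1, R2 = split(R)

assert E_within(S, E) == E_within(S1, E) | E_within(S2, E) | E_crossing(S1, S2, E)
assert E_crossing(R, S, E) == set().union(
    *(E_crossing(Ri, Sj, E) for Ri in (R1, R2) for Sj in (S1, S2)))
\end{verbatim}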
\subsubsection{Notation}
\begin{itemize}
\item $N$ - This is an auxiliary matrix which starts with $\laplacian$ and gets updated lazily as the algorithm progresses.
\item \textbf{function} $SampleSpanningTree(G = (V, E))$ - This is the main function from which the execution of the algorithm starts. It takes input a graph and outputs a set of edges corresponding to a random spanning tree.
\item \textbf{function} $SampleEdgesWithin(S)$ - This function takes input a set of vertices $S$ and returns sets $F$ and $D$ which are set of edges contracted and deleted respectively in the subgraph induced by $S$ on $G$.
\item \textbf{function} $SampleEdgesCrossing(R,S)$ - This function works similar to the above one but samples edges crossing the sets $R$ and $S$. Also the base case of the entire algorithm (case where $|R| = |S| = 1$) of making the decision to sample an edge is handled here.
\item \textbf{procedure} $Update(S,F,D)$ - This procedure updates the submatrix $N_{S,S}$ based on the formulas derived in \textbf{Corollary 1} and \textbf{Theorem 3}; a structural sketch of how these functions call one another follows this list.
\end{itemize}
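Putting the notation together, the following sketch (my own reconstruction, not the paper's pseudocode) shows only the shape of the recursion: the crucial calls to $Update(S,F,D)$, which lazily patch $N_{S,S}$ between the recursive calls, are omitted, so it is a map of the control flow rather than a working sampler. The base case shows where only four entries of $N$ are needed.
\begin{verbatim}
import numpy as np

def split(S):
    s = sorted(S)
    return set(s[:len(s) // 2]), set(s[len(s) // 2:])

def sample_edges_within(S, E, N, rng):
    if len(S) <= 1:
        return set(), set()
    S1, S2 = split(S)
    F1, D1 = sample_edges_within(S1, E, N, rng)      # Update(...) omitted here
    F2, D2 = sample_edges_within(S2, E, N, rng)      # ... and here
    F3, D3 = sample_edges_crossing(S1, S2, E, N, rng)
    return F1 | F2 | F3, D1 | D2 | D3

def sample_edges_crossing(R, S, E, N, rng):
    if not R or not S:
        return set(), set()
    if len(R) == 1 and len(S) == 1:                  # base case: decide one edge
        (u,), (v,) = tuple(R), tuple(S)
        if (u, v) not in E and (v, u) not in E:
            return set(), set()
        r_eff = N[u, u] + N[v, v] - 2 * N[u, v]      # only 4 entries of N are read
        return ({(u, v)}, set()) if rng.random() < r_eff else (set(), {(u, v)})
    R1, R2 = split(R); S1, S2 = split(S)
    F, D = set(), set()
    for Ri in (R1, R2):
        for Sj in (S1, S2):
            Fi, Di = sample_edges_crossing(Ri, Sj, E, N, rng)
            F |= Fi; D |= Di
    return F, D
\end{verbatim}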
\pagebreak
% https://texfaq.org/FAQ-ftncapt
\begin{figure}[h!]
\begin{minipage}{\textwidth}
\includegraphics[scale=0.7]{HX-algo-screenshot.png}
\caption[Harvey, Xu algorithm]%
{Harvey, Xu algorithm \footnote{This image is exactly reproduced from the paper.} }
\end{minipage}
\label{fig:alg}
\end{figure}
\pagebreak
\subsubsection{An Example}
The diagram below shows how the algorithm proceeds by recursively splitting the graph.
\begin{figure}[h!]
\begin{minipage}{\textwidth}
\begin{center}
\includegraphics[scale=0.85]{Figures/alg-tree}
% \caption{An example}\textsuperscript{2}
\caption[Example]{An example of Harvey, Xu algorithm \footnote{The function calls to $SampleEdgesWithin(S_1)$ and $SampleEdgesWithin(S_2)$ in turn would call their respective $SampleEdgesCrossing$ functions. But it would be the base case and would look similar to the one portrayed here and hence omitted for the sake of brevity.}}
\end{center}
\end{minipage}
\end{figure}
\pagebreak
\subsection{Structure of the paper}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.7]{deps}
\end{center}
\caption{Dependencies between the results}
\end{figure}
\subsection{Facts used}
\citet{ttao} has a nice derivation of the Sherman-Morrison formula.
\begin{HXf}[Woodbury matrix identity]
Let $ M \in \mathbb{M}_{n \times n} , U \in \mathbb{M}_{n \times k}, V \in \mathbb{M}_{n \times k}$. Suppose $M$ is non-singular then $M + UV^T$ is non-singular $\iff \ I + V^T M^{-1} U$ is non-singular. If $M + UV^T$ is non-singular, then
$$ (M + UV^T)^{-1} = M^{-1} - \left( M^{-1} \cdot U \cdot (I + V^TM^{-1}U)^{-1} \cdot V^T \cdot M^{-1}\right) $$
\end{HXf}
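As a quick numerical sanity check of this identity (not from the paper), the following NumPy snippet compares the Woodbury expression against a directly computed inverse for random matrices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
M = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned, non-singular
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))

Minv = np.linalg.inv(M)
woodbury = Minv - Minv @ U @ np.linalg.inv(np.eye(k) + V.T @ Minv @ U) @ V.T @ Minv
direct = np.linalg.inv(M + U @ V.T)
assert np.allclose(woodbury, direct)
\end{verbatim}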
% \begin{proof}
% TODO
% \end{proof}
\begin{HXf}
For any symmetric $L \in \mathbb{M}_{n \times n}$ with $\text{ker}(L) = \text{span}(\textbf{1})$, we have $LL^+ = I - \frac{\textbf{1} \cdot \textbf{1}^T}{n}$ and $P := I - \frac{\textbf{1} \cdot \textbf{1}^T}{n}$ is called the \textbf{projection matrix}.
\end{HXf}
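This fact is easy to verify numerically for a graph Laplacian; the snippet below (mine) does so for the path on 4 vertices.
\begin{verbatim}
import numpy as np

# Laplacian of a path on 4 vertices: its kernel is span(1)
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
n = L.shape[0]
P = np.eye(n) - np.ones((n, n)) / n
assert np.allclose(L @ np.linalg.pinv(L), P)
\end{verbatim}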
The following facts concern the behaviour of matrix operations (addition, multiplication, etc.)\ on sub-matrices. The first four are easy to see, so I have not derived them. For the last one I have written a derivation using the Schur complement, following Wikipedia.
\begin{HXf}[Sub-matrices]
For all the results below, $S$ denotes an index set and $S^c$ denotes its complement.
\begin{enumerate}
\item For any $A,B \in \Mset{n}{n}, (A + B)_{S,S} = A_{S,S} + B_{S,S}$
\item If $C = D \cdot E \cdot F$ then $C_{S,S} = D_{S,*} \cdot E \cdot F_{*,S}$
\item For $A \in \Mset{m}{n}, B \in \Mset{n}{l}$ , If $A_{S^c, S^c} = 0$ or $B_{S^c, S^c} = 0$ then \\ $(AB)_{S,S} = A_{S,S} \cdot B_{S,S}$
\item For any matrix $C$ where $C = D \cdot E \cdot F$ . If $D_{*, S^c} = 0$ and $F_{S^c, *} = 0$, then $C = D_{*, S} \cdot E_{S,S} \cdot F_{S,*}$
\item $D =
\begin{bmatrix}
M & 0 \\
0 & 0 \\
\end{bmatrix},
$
and $
E =
\begin{bmatrix}
A & B \\
X & Y \\
\end{bmatrix}
$
where $M, A \in \Mset{n}{n}$ and If $(MA - I)$ is invertible Then,
$$ (DE - I)^{-1} = \begin{bmatrix}
(MA - I)^{-1} & (MA - I)^{-1} \cdot M \cdot B \\
0 & -I \\
\end{bmatrix}
$$
\begin{proof}
$(DE - I)^{-1}$ can be computed using the Schur complement (\cite{wiki:shur}).
Suppose $N =
\begin{bmatrix}
P & Q \\
R & S \\
\end{bmatrix}
$ and the Schur complements of blocks $S$ and $P$ are $$N / S := P - QS^{-1}R \qquad N / P := S - RP^{-1}Q$$
Then $$N^{-1} =
\begin{bmatrix}
P^{-1} + (P^{-1} Q (N/P)^{-1} R P^{-1}) & -(P^{-1}Q(N/P)^{-1}) \\[0.3cm]
-((N/P)^{-1}RP^{-1}) & (N/P)^{-1} \\
\end{bmatrix}
$$
In our case $N =
\begin{bmatrix}
MA - I & MB \\
0 & -I \\
\end{bmatrix}
$ and $(N/P) = -I$. From this it follows that $$N^{-1} = (DE-I)^{-1} = \begin{bmatrix}
(MA - I)^{-1} & (MA - I)^{-1} \cdot M \cdot B \\
0 & -I \\
\end{bmatrix}
$$
\end{proof}
\end{enumerate}
\end{HXf}
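The block formula in the last item can also be checked numerically; the following snippet (a sanity check of my own, not part of the paper) builds random blocks and compares both sides.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2                               # M, A are n x n; B is n x m
M = rng.normal(size=(n, n))
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, m))
X = rng.normal(size=(m, n)); Y = rng.normal(size=(m, m))

D = np.block([[M, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
E = np.block([[A, B], [X, Y]])

lhs = np.linalg.inv(D @ E - np.eye(n + m))
MAinv = np.linalg.inv(M @ A - np.eye(n))
rhs = np.block([[MAinv, MAinv @ M @ B], [np.zeros((m, n)), -np.eye(m)]])
assert np.allclose(lhs, rhs)
\end{verbatim}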
\begin{HXf}
Let $A, B \in \Mset{n}{n}$ with $B$ being symmetric PSD. Suppose $x$ is an eigenvector of $AB$ corresponding to eigenvalue $\lambda$. Then $\sqrt{B} x$ is an eigenvector of $\sqrt{B}A\sqrt{B}$ corresponding to eigenvalue $\lambda$
\end{HXf}
\begin{HXf}[Laplacian and graph connectivity (Fiedler value)]
Let $G$ be a graph with $n$ vertices. Suppose $(\lambda_1, \lambda_2 \cdots \lambda_n)$ be the eigenvalues corresponding to the eigenvectors $(v_1, v_2 \cdots v_n) $ of the Laplacian of $G$ denoted as $L_G$. $L_G$ is symmetric PSD with $\lambda_1 = 0$ and $v_1 = \textbf{1}$. The following properties relate the eigenvalues of $L_G$ with the connectivity of $G$ :
\begin{enumerate}
\item $\lambda_2 > 0 \iff G$ is connected
\item $G$ is disconnected $\iff \exists z$ with $z^T \textbf{1} = 0$ and $z^T L_G z = 0$
\end{enumerate}
The above is true for $\laplacian$ also
\end{HXf}
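A small numerical illustration of property 1 (my own, using NumPy):
\begin{verbatim}
import numpy as np

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
    return L

connected    = lap(4, [(0, 1), (1, 2), (2, 3)])   # a path: connected
disconnected = lap(4, [(0, 1), (2, 3)])           # two separate edges

lam2 = lambda L: np.sort(np.linalg.eigvalsh(L))[1]
assert lam2(connected) > 1e-9        # lambda_2 > 0 for the connected graph
assert abs(lam2(disconnected)) < 1e-9   # lambda_2 = 0 for the disconnected one
\end{verbatim}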
\begin{HXd}[$\chi_u$]
$\chi_u$ is a vector of size $|V|$
\[
\chi_u(i) =
\begin{cases}
1,& \text{if } i = u\\
0, & \text{otherwise}
\end{cases}
\]
\end{HXd}
\begin{HXd}[Uniform random spanning tree]
Let $\hat{T}$ be the random variable denoting a uniformly random spanning tree, then $\mathbb{P}(\hat{T} = T) = \frac{1}{|\mathcal{T}|}$, where $\mathcal{T}$ is the set of all spanning trees of $G$.
\end{HXd}
\begin{HXf}
Given a graph $G = (V,E)$ with laplacian $L_G$, the effective resistance of an edge $e = \{u, v\} \in E$ is
$$ \reff = \reffformula $$
Then for any $e \in E$ we have $$\mathbb{P}(e \in \hat{T}) = \reff$$
\end{HXf}
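For a concrete example, on the triangle every edge has effective resistance $2/3$, and each edge appears in $2$ of the $3$ spanning trees. The snippet below (my own check, not from the paper) verifies the fact by brute force.
\begin{verbatim}
import numpy as np
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph
n = 3
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
Lp = np.linalg.pinv(L)

def is_spanning_tree(es):
    parent = list(range(n))
    def find(x):
        while parent[x] != x: x = parent[x]
        return x
    for u, v in es:
        ru, rv = find(u), find(v)
        if ru == rv: return False
        parent[ru] = rv
    return True

trees = [t for t in combinations(edges, n - 1) if is_spanning_tree(t)]
for u, v in edges:
    r_eff = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    frac = sum((u, v) in t for t in trees) / len(trees)
    assert abs(r_eff - frac) < 1e-9     # both equal 2/3 here
\end{verbatim}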
\subsection{Technical details of the results}
\subsubsection{Deletion}
The first step in obtaining an update formula for deletion is to make sure that $(I - L_D \laplacian)$ is invertible, since the inverse of this term is used in the expansion of $(L_G - L_D)^+$.
For the first direction of Lemma 1, the main result used is \textbf{Fact 5} ($G$ is disconnected $\iff \exists z$ with $z^T \textbf{1} = 0$ and $z^T \laplacian z = 0$). So if we can exhibit such a $z$ we are done. Using the hypothesis that $(I - L_D \laplacian)$ is singular together with \textbf{Fact 4}, they derive $ y^T \cdot \sqlaplacian \cdot L_{G \setminus D} \cdot \sqlaplacian \cdot y = 0$. As we can see, the remaining part is to show that $(z = \sqlaplacian y) \perp \textbf{1}$.
\begin{HXl}[Formulas in \textbf{Theorem 1} are well defined]
Let $G=(V,E)$ be a connected graph and $D \subseteq E$ then
$\left( I - L_D \laplacian \right)$ is invertible $\iff G \setminus D$ is connected
\end{HXl}
\begin{proof}
First let us show that if $(I - L_D\laplacian)$ is singular then $G \setminus D$ is disconnected.
\begin{itemize}
\item Since $(I - L_D\laplacian)$ is singular $\exists x \neq 0 \text{ s.t. }$ $(I - L_D\laplacian)x = 0$
\begin{alignat}{1}
& \implies L_D \laplacian \ x = x \\
& \implies 1 \in eigenvalues(L_D\laplacian) \\
& \implies 1 \in eigenvalues((L_G - L_{G \setminus D}) \laplacian)
% & \therefore
\end{alignat}
\item Let $x \perp \textbf{1}$ be an eigenvector of $(L_G - L_{G \setminus D}) \laplacian$ with eigenvalue 1.
\item By \textbf{Fact 4}, $y = \frac{\sqrt{\laplacian} x}{\norm{\sqrt{\laplacian}x}}$ is an eigenvector of $\sqrt{\laplacian} (L_G - L_{G \setminus D}) \sqrt{\laplacian}$
\begin{alignat}{1}
& y^T \cdot \sqrt{\laplacian} (L_G - L_{G \setminus D}) \sqrt{\laplacian} \cdot y = 1 \\
& y^T \sqrt{\laplacian} L_G \sqrt{\laplacian} y = \comment{HOW} y^T \laplacian L_G y = y^T P y \\
& y^T \left( I - \frac{\textbf{1} \textbf{1}^T}{n} \right) y = y^T y - \left( \frac{y^T \textbf{1} \textbf{1}^T y}{n} \right) = \comment{(HOW)} y^T y = 1
\end{alignat}
\item $\therefore y^T \sqrt{\laplacian} L_{G \setminus D} \sqrt{\laplacian} y = 0$ now if we consider $z = \sqrt{\laplacian} y$ and show that $z^T \textbf{1} = 0$ then we can use $\textbf{Fact 5}$ to complete the proof
\begin{alignat}{1}
& y^T \sqrt{\laplacian} \textbf{1} = x^T \sqrt{\laplacian} \sqrt{\laplacian} \textbf{1} = 0 \comment{\text{(HOW is 1 in kernel of } \laplacian)} \\
& G \setminus D \text{ is disconnected}
\end{alignat}
\end{itemize}
Now to prove the converse, If $G \setminus D$ is disconnected then $I - L_D \laplacian$ is singular
\begin{itemize}
\item If $G \setminus D$ is disconnected then $\exists y \perp \textbf{1}$ with $\norm{y} = 1$ such that
% \begin{alignat}{1}
\begin{enumerate}
\item $ y^T \cdot \sqlaplacian \cdot L_{G \setminus D} \cdot \sqlaplacian \cdot y = 0 \comment{(HOW)} $
\item $ y^T \cdot \sqlaplacian \cdot L_G \cdot \sqlaplacian \cdot y = y^T y = 1 $
\end{enumerate}
\item From (1) and (2) we get $y^T \sqlaplacian (L_G - L_{G \setminus D}) \sqlaplacian y= 1$
\begin{alignat}{1}
& \implies y^T \cdot \sqlaplacian \cdot L_D \cdot \sqlaplacian \cdot y = 1 \\
& \implies 1 \in \text{eigenvalues}(L_D\laplacian) \comment{(HOW)}\\
& \implies (I - L_D\laplacian) \text{ is singular}
\end{alignat}
\end{itemize}
\end{proof}
In \textbf{Theorem 1} they show that the formula for the updated pseudoinverse is indeed correct. This is shown using the identity $L L^+ = P$.
\begin{HXt}[Update identity for Deletion]
Let $G=(V,E)$ be a connected graph and $D \subseteq E$. If $G \setminus D$ is connected then
$$ (L_G - L_D)^+ = \laplacian - \left( \laplacian \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian\right)$$
\end{HXt}
\begin{proof}
If the right-hand side is indeed $(L_G - L_D)^+$, then multiplying it by $(L_G - L_D)$ must yield the projection $P$:
$$ (L_G - L_D) \cdot (L_G - L_D)^+ $$
$$ (L_G - L_D) \cdot \left(\laplacian - \left( \laplacian \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian\right) \right)$$
$$\left[P - L_D\laplacian \right] - \left[(L_G\laplacian - L_D\laplacian) \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian\right]$$
$$\left[P - L_D\laplacian \right] + \left[\left( (L_D\laplacian - I) + \frac{\textbf{1}\textbf{1}^T}{n} \right) \cdot \left((L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian\right)\right]$$
$$\left[P - L_D\laplacian \right] + \left[ (L_D\laplacian) + \left(\frac{\textbf{1}\textbf{1}^T}{n} \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian \right) \right]$$
We can see that $-\textbf{1}^T (L_D \laplacian -I) = -\textbf{1}^T L_D \laplacian + \textbf{1}^T = \textbf{1}^T$, since $\textbf{1}^T L_D = 0$. Hence $\textbf{1}^T (L_D \laplacian -I)^{-1} = -\textbf{1}^T$, so the last term vanishes: $\frac{\textbf{1}\textbf{1}^T}{n} \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian = -\frac{\textbf{1} \left(\textbf{1}^T L_D\right) \laplacian}{n} = 0$. Hence,
$$ P - L_D\laplacian + L_D\laplacian + 0 = P$$
\end{proof}
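A quick numerical check of the deletion formula (mine, not from the paper): on a $4$-cycle with one edge deleted, the update matches the directly computed pseudoinverse.
\begin{verbatim}
import numpy as np

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
    return L

n = 4
G = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
D = [(0, 1)]                           # removing it keeps G connected
L_G, L_D = lap(n, G), lap(n, D)
Lp = np.linalg.pinv(L_G)

update = Lp - Lp @ np.linalg.inv(L_D @ Lp - np.eye(n)) @ L_D @ Lp
assert np.allclose(update, np.linalg.pinv(L_G - L_D))
\end{verbatim}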
\begin{HXd}[Submatrix]
A submatrix of a matrix $A$ containing rows $S$ and columns $T$ is denoted as $A_{S,T}$.
\end{HXd}
\textbf{Corollary 1} modifies the update formula in \textbf{Theorem 1} to work for submatrices and hence reduces the complexity to $\mathcal{O}(|S|^{\omega})$. They do this by applying the submatrix facts \textbf{Fact 3.3} and \textbf{Fact 3.5}.
\begin{HXc}[Improved \textbf{Theorem 1} for submatrix]
Let $G=(V,E)$ be a connected graph and $D \subseteq E$. For $S \subseteq V$ define $ E[S] $ as $(S \times S) \cap E$. Suppose $D \subseteq E[S]$ and $G \setminus D$ is connected; then
$$ (L_G - L_D)_{S,S}^+ = (\laplacian)_{S,S} - \left( (\laplacian)_{S,S} \cdot ((L_D)_{S,S} \ (\laplacian)_{S,S} \; - \; I)^{-1} \cdot (L_D)_{S,S} \cdot (\laplacian)_{S,S} \right) $$
\end{HXc}
\begin{proof}
From \textbf{Theorem 1} we know that $(L_G - L_D)^+ = \laplacian - \left( \laplacian \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian\right)$. If we apply \textbf{Fact 3.1} to $(L_G - L_D)^+$ we get
$$ (\laplacian)_{S,S} - \left( \laplacian \cdot (L_D\laplacian-I)^{-1} \cdot L_D \cdot \laplacian \right)_{S,S} $$
Applying \textbf{Fact 3.3} we get \comment{(HOW)}
$$(\laplacian)_{S,S} - \left( (\laplacian)_{S,S} \cdot (L_D\laplacian-I)^{-1}_{S,S} \cdot (L_D)_{S,S} \cdot (\laplacian)_{S,S} \right) $$
Now applying \textbf{Fact 3.5} to $(L_D\laplacian-I)^{-1}_{S,S}$
\textbf{Fact 3.5} states that If
$D =
\begin{bmatrix}
M & 0 \\
0 & 0 \\
\end{bmatrix},
$
and $
E =
\begin{bmatrix}
A & B \\
X & Y \\
\end{bmatrix}
$where $M, A \in \Mset{n}{n}$ and If $(MA - I)$ is invertible Then,
$$ (DE - I)^{-1} = \begin{bmatrix}
(MA - I)^{-1} & (MA - I)^{-1} \cdot M \cdot B \\
0 & -I \\
\end{bmatrix}
$$
Here we have $L_D =
\begin{bmatrix}
(L_D)_{S,S} & 0 \\
0 & 0 \\
\end{bmatrix},
$
and $
\laplacian =
\begin{bmatrix}
(\laplacian)_{S,S} & (\laplacian)_{S,S^c} \\
(\laplacian)_{S^c,S} & (\laplacian)_{S^c,S^c} \\
\end{bmatrix}
$
$$\therefore \; (L_D\laplacian - I)^{-1} =
\begin{bmatrix}
((L_D)_{S,S} (\laplacian)_{S,S} - I)^{-1} & ((L_D)_{S,S} (\laplacian)_{S,S} - I)^{-1}(L_D)_{S,S}(\laplacian)_{S,S^c} \\
0 & -I \\
\end{bmatrix}
$$
whose $(S,S)$ block is $((L_D)_{S,S}(\laplacian)_{S,S} - I)^{-1}$.
Hence we get the required result
$$ (L_G - L_D)_{S,S}^+ = (\laplacian)_{S,S} - \left( (\laplacian)_{S,S} \cdot ((L_D)_{S,S} \ (\laplacian)_{S,S} \; - \; I)^{-1} \cdot (L_D)_{S,S} \cdot (\laplacian)_{S,S} \right) $$
\end{proof}
\subsubsection{Contraction}
The main approach proposed to handle contraction updates is to increase the weight of each edge to be contracted by a large value $k$ and then let $k \rightarrow \infty$.
\begin{HXd}[Incidence Matrix]
Let $G = (V,E)$. Given an edge $e = \{u,v\} \in E$, the incidence vector of $e$ is defined as $v_e = (\chi_u - \chi_v)$. Given a set of edges $D = \{e_1, e_2 \cdots e_m\} \subseteq E$, the incidence matrix of $D$ is defined as $B_D = [v_{e_1} | v_{e_2} \cdots | v_{e_m}]$.
\textbf{Note - } I have used a different notation for the incidence matrix than the original paper, which I found a bit confusing; $B$ is the common notation for incidence matrices in other resources such as \citet{TCS-054}.
\end{HXd}
\begin{HXd}[$G + ke$]
$G + ke$ is the weighted graph obtained by increasing $e$'s weight by $k$.
\end{HXd}
\begin{HXl}[Formulas in \textbf{Theorem 2} are well defined]
Let $G = (V, E)$ be a connected graph, let $F \subseteq E$ with $|F| = r$, and let $B_F$ be the incidence matrix of $F$. Then
$$ B_F^T \ \laplacian \ B_F \ \text{is invertible} \iff \text{ F is a forest} $$
\end{HXl}
\begin{proof}
First they show that\\
$ F$ is a forest $\implies B_F^T \ \laplacian \ B_F \ \text{is invertible}$.
The main idea of this direction is to show that $B_F^T \ \laplacian \ B_F $ is positive definite. This is enough because positive definite matrices are non-singular (otherwise they would have $0$ as an eigenvalue). They use the following claim.
\begin{HXcl}
The incidence matrix of an acyclic graph has full column rank.
\end{HXcl}
Hence for any $x \in \mathbb{R}^r$ with $x \neq 0$, let $y = B_F x$; by the claim, $y \neq 0$. Also $y^T \textbf{1} = x^T B_F^T \textbf{1} = 0$, hence $y \perp \text{ker}(\laplacian)$. Since $G$ is connected we have $\lambda_2(L_G) > 0$, and since $y$ is perpendicular to $\text{ker}(\laplacian)$ we can say that $y^T \laplacian y > 0$. Expanding $y = B_F x$ gives $x^T (B_F^T \ \laplacian \ B_F) x > 0$. Hence $B_F^T \ \laplacian \ B_F$ is positive definite and therefore invertible.
% Now for the converse (TODO)
\end{proof}
\begin{HXl}[Formulas in \textbf{Theorem 2} are well defined]
Let $G = (V, E)$ be a connected graph. Given $F \subseteq E$ and let $B_F$ be the incidence matrix of $F$. For any $k > 0$ ,
$$\text{If F is a forest then } \left( \frac{I}{k} + B_F^T \ \laplacian \ B_F \right) \ \text{is invertible for any k $> 0$}$$
\end{HXl}
\begin{proof}
Suppose $A, B$ are positive definite matrices; then $A + B$ is also positive definite. Indeed, since $A, B$ are positive definite we have $x^T A x > 0$ and $x^T B x > 0$ for any $x \neq 0$; combining these two inequalities gives $x^T (A + B) x > 0$, hence $A + B$ is positive definite.
By \textbf{Lemma 2}, $B_F^T \ \laplacian \ B_F$ is positive definite, and $I/k$ is also positive definite because all its eigenvalues are $1/k$ with $k > 0$. Since positive definite matrices are non-singular, $\left( \frac{I}{k} + B_F^T \ \laplacian \ B_F \right)$ is invertible.
\end{proof}
\textbf{Theorem 2} uses \textbf{Lemma 2} to show that the contraction update formula for finite $k$ is well defined.
\begin{HXt}[Contraction update formula for finite $k$]
Let $G = (V, E)$ be a connected graph. Given $F \subseteq E$ and let $B_F$ be the incidence matrix of $F$. For any $k > 0$,
$$ (L_G + k \ L_F)^+ = \laplacian - \left(\laplacian \cdot B_F \cdot (\frac{I}{k} + B_F^T \ \laplacian \ B_F)^{-1} \cdot B_F^T \cdot \laplacian \right)$$
\end{HXt}
\begin{proof}
They use the same strategy as in \textbf{Theorem 1}. Note also that $B_F B_F^T = L_F$.
$$\left[L_G + k B_F B_F^T \right] \cdot \left[\laplacian - \left(\laplacian \ B_F \ (\frac{I}{k} + B_F^T \ \laplacian \ B_F)^{-1} \ B_F^T \ \laplacian \right) \right]$$
$$= P + k B_F B_F^T \laplacian - \left( (L_G\laplacian B_F + kB_F B_F^T \laplacian B_F)\ (\frac{I}{k} + B_F^T \laplacian B_F)^{-1} \ B_F^T \ \laplacian \right)$$
Here $L_G \laplacian B_F = (I - \frac{\textbf{1} \textbf{1}^T}{n}) B_F = B_F$, because each column of $B_F$ sums to $0$.
$$ = P + k B_F B_F^T \laplacian - \left( k B_F \ (\frac{I}{k} + B_F^T \laplacian B_F) \ (\frac{I}{k} + B_F^T \laplacian B_F)^{-1} \ B_F^T \ \laplacian \right)$$
$$ = P + k B_F B_F^T \laplacian - k B_F B_F^T \laplacian = P$$
\end{proof}
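As with Theorem 1, the finite-$k$ contraction formula can be sanity-checked numerically (my own snippet, not from the paper): with a forest $F$ inside a $4$-cycle and $k = 10$, the update agrees with the directly computed pseudoinverse of $L_G + kL_F$.
\begin{verbatim}
import numpy as np

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
    return L

n, k = 4, 10.0
G = [(0, 1), (1, 2), (2, 3), (3, 0)]
F = [(0, 1), (1, 2)]                      # a forest inside G
B = np.zeros((n, len(F)))
for j, (u, v) in enumerate(F):
    B[u, j], B[v, j] = 1.0, -1.0          # incidence vectors chi_u - chi_v

L_G = lap(n, G)
Lp = np.linalg.pinv(L_G)
inner = np.linalg.inv(np.eye(len(F)) / k + B.T @ Lp @ B)
update = Lp - Lp @ B @ inner @ B.T @ Lp
assert np.allclose(update, np.linalg.pinv(L_G + k * B @ B.T))   # L_F = B B^T
\end{verbatim}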
This corollary is a direct consequence of the facts pertaining to submatrices (\textbf{Fact 3}).
\begin{HXc}[Improves \textbf{Theorem 2} for sub-matrices]
Let $G = (V, E)$ be a connected graph. Given $F \subseteq E$ and let $B_F$ be the incidence matrix of $F$. Suppose $F \subseteq E[S]$, where $S \subseteq V$. For any $k > 0$,
% $$
% \begin{multline*}
$$
(L_G + k \ L_F)^+_{S,S} = (\laplacian)_{S,S} -\left((\laplacian)_{S,S} \ (B_F)_{S,*} \ (\frac{I}{k} + (B_F^T)_{S,*} \ (\laplacian)_{S,S} \ (B_F)_{S,*})^{-1} \ (B_F^T)_{S,*} \ (\laplacian )_{S,S} \right)
$$
% \end{multline*}
% $$
\end{HXc}
In \textbf{Theorem 3} they extend the contraction formula from \textbf{Theorem 2} to the case $k \rightarrow \infty$. The main idea of the proof is to define $M_k = \laplacian[G + kF]$ \footnote{I have changed the notation to $M$ because the proof in the paper uses $N$ for two different formulas} and \\ $M = \laplacian - \left(\laplacian \ B_F \ (B_F^T \laplacian B_F)^{-1} \ B_F^T \ \laplacian\right)$, and then to show that $\Lim{k \to \infty} \norm{M_k - M} = 0$.
\begin{HXt}[Extends \textbf{Theorem 2} to $k \rightarrow \infty$ case]
For a forest $F_1 \subseteq E$, let $G(k) = G + k \ F_1$ as defined in \textbf{Definition 3} . Let $F_2 \subseteq E$ be disjoint from $F_1$ such that $F_1 \cup F_2$ is a forest. Let $B_{F_2}$ be the incidence matrix of $F_2$. For $k > 0$ define $N = \Lim{k \to \infty} \laplacian[G(k)]$
$$ \Lim{k \to \infty} \laplacian[G(k) + kF_2] = N - \left( N \cdot B_{F_2} \cdot (B_{F_2}^T \ N \ B_{F_2})^{-1} \cdot B_{F_2}^T \cdot N \right)$$
Also $\text{ker} \left( \Lim{k \to \infty} \laplacian[G(k) + kF_2] \right) = \text{span }(B_{F_1 \cup F_2} \cup \textbf{1})$
\end{HXt}
\begin{proof}
From \textbf{Theorem 2} we have $M_k = \laplacian - \left(\laplacian \cdot B_F \cdot (\frac{I}{k} + B_F^T \ \laplacian \ B_F)^{-1} \cdot B_F^T \cdot \laplacian \right)$ and they define $A = B_F^T \laplacian B_F$
$$ \norm{M_k - M} = \norm{\laplacian \ B_F \left( (\frac{I}{k} + A)^{-1} - A^{-1} \right) \ B_F^T \ \laplacian}$$
Using the property of matrix norm $\norm{X \ Y} \leq \norm{X} \ \norm{Y}$, they get
$$ \norm{M_k - M} \leq \norm{\laplacian}^2 \ \norm{B_F} \ \norm{B_F^T} \ \norm{\left( \frac{I}{k} + A \right)^{-1} - A^{-1}} \qquad (1)$$
They then expand the last term in the above inequality using the Sherman-Morrison-Woodbury identity, apply the norm inequality again, and with some straightforward manipulations show that
$$\Lim{k \to \infty} \norm{\left( \frac{I}{k} + A \right)^{-1} - A^{-1} } = 0$$
Now applying this to $(1)$ they prove that $\Lim{k \to \infty} \norm{M_k - M} = 0$
\end{proof}
\subsection{Summary}
The ideas discussed here, such as lazy updates, were used in \citet{10.1137/070684008} to obtain a randomized algorithm for non-bipartite matching that runs in $\mathcal{O}(N^\omega)$ time.
%----------------------------------------------------------------------------------------
% Author: Dean Serenevy 2012
% This document is hereby placed into the public domain.
\documentclass{article}
\usepackage{fontspec}
\usepackage[centering,width=10in,height=7.5in]{geometry}
\usepackage{parskip}
\parskip 0pt
\geometry{landscape}
\usepackage{array,amsfonts,amsmath,amssymb,graphicx}
\usepackage{multicol,xcolor,shortvrb,fancyvrb,xspace}
\setlength{\columnsep}{4em}
\let\epsilon\varepsilon\let\phi\varphi
\usepackage{syntax} %% http://tug.ctan.org/pkg/syntax-mdw
\usepackage{xifthen}
\newcolumntype{L}{>{\lit*\bgroup}l<{\egroup}}
% COMMANDS:
%----------
% _ | underscore, verticalbar
% "#1" a ShortVerb
% \lr#1#2 left-right line
% <#1> for meta-syntax
% \sl#1 for meta-syntax
% \opt#1 for optional parameters
% code fancyvrb environment allowing \commands
% \columnbreak for breaking columns
% \", \^ literal ", ^
% \bs \textbackslash
\makeatletter
\syn@shorts\relax\relax
\addspecial\|\catcode`\|\active
% \addspecial\"\catcode`\"\active%% can use commands in "..."
\shortverb{"}%% may not use commands in "..."
\def\opt#1{\textcolor[gray]{.5}{#1}}
\def\sl#1{{\textrm{\textsl{#1}}}}
\let\bs\textbackslash
\def\lr{\@ifstar\@lr@star\@lr}
\def\@lr#1#2{\@@lr{#1}{#2}}
\def\@lr@star#1#2{\@@lr{#1\\\mbox{}}{#2}}
\def\@@lr#1#2{\mbox{}\texttt{\spaceskip.35em\@plus.2em\@minus.15em\relax #1}\hfill #2\par}
\def\"{\char`"}
\def\^{\char`^}
\def\<{\char`<}
\def\>{\char`>}
\def\~{\char`~}
\def\dots{{\rmfamily\ifmmode \mathellipsis \else \textellipsis \fi}}
\newcommand{\Code}[1]{\texttt{\spaceskip.35em\@plus.2em\@minus.15em\relax #1}}
\newif\ifprototype
\newif\ifjquery
\newif\ifdeanjs
\makeatother
\catcode`\<\active\catcode`\>\active
\def<{\bgroup\normalfont\itshape}
\def>{\/\egroup}
\def\doif#1#2{\ifthenelse{\isempty{#1}}{}{#2}}
\DefineVerbatimEnvironment{code}{Verbatim}
{gobble=2,baselinestretch=.8,commandchars=\\\{\}}
\renewcommand{\section}[2][]{\par\vspace{3ex}{\large{\textbf{\doif{#1}{#1 }#2}}}\par}
\renewcommand{\subsection}[2][»]{\par\vspace{2ex}\textbf{#1 #2}\par}
\def\br{\par\vspace{1.75ex}\par}
\def\jQueryOnly{\includegraphics[height=1.5ex,width=1.5ex,keepaspectratio]{jquery_logo}}
\def\th{$^{\mathrm{th}}$}\def\st{$^{\mathrm{st}}$}\def\nd{$^{\mathrm{nd}}$}\def\rd{$^{\mathrm{rd}}$}
\pagestyle{empty}
\begin{document}\small
\begin{multicols*}3
% Set Booleans, which libraries to include
%-----------------------------------------
\jquerytrue
% \prototypetrue
\deanjstrue
\begin{center}
{
JavaScript
\ifprototype + prototype.js 1.6 \fi
\ifjquery + jQuery 1.8 (\jQueryOnly) \fi
} Reference Guide\\
All funcs/methods work in FF 52+, IE 10+ (Windows 8), Chrome\\
Version 0.4, May 2018 --- Dean Serenevy
\end{center}
% https://en.wikipedia.org/wiki/Google_Chrome_version_history
% https://en.wikipedia.org/wiki/Edge_browser
% https://en.wikipedia.org/wiki/Firefox_version_history
% https://en.wikipedia.org/wiki/Safari_version_history
% https://en.wikipedia.org/wiki/Internet_Explorer_version_history
\section{JavaScript Syntax}
% |<script src="../|\sl{foo.js}|"></script>|\\
% |<script language="javascript" type="text/javascript">
% <!--
%
% script goes here
%
% // -->
% </script>
% |+,-,*,/,%,++,--,=,+=,-=,*=,/=,%=|\hfill operators\\
\lr{var <foo>}{variable declaration}
\lr{var <foo> = new <Bar>(<args>);}{object instantiation}
\lr{\"5\"+\"5\"; 5+\"5\"; \"5\"+5;}{string concatenation (``55'')}
\lr{==, ===}{equality, identity (value and type)}
\lr{null, undefined, NaN, '', 0, false}{false values}
\br
\lr{if (<A>) \{ \} else if (<B>) \{ \} else \{ \}}{}
\lr{switch(<A>)\{case $C_1$:<X>; break; \dots~default:<X>;\}}{}
\lr{function <foo>(<a1>,<a2>)\{\dots\}}{varargs in "arguments"}
\lr{var i; for (i=0;i"<"=5;i++) \{\dots\}}{}
\lr{var i; for (i in A) \{ \dots~A[i] \dots\}}{}
\lr{while (<cond>) \{\dots\}; do \{\dots\} while (<cond>)}{}
\lr{"break", "continue"}{leave loop, next iteration}
\lr{try \{ \dots; throw \"<Blah>\"; \} catch(<err>) \{ \dots \}}{}
\ifprototype
\subsection{prototype Try}
\lr{Try.these(f1,f2,\dots)}{return first result to not fail}
\fi
\section{Builtin Functions / Constants}
\lr{alert(<msg>); confirm(<msg>); prompt(<msg>,<value>);}{}
\lr{setTimeout(\"<eval str>\"|<Func>, <milliseconds>);}{}
\lr{encodeURI(<str>)}{Encode special except ",/?:@\&=+\$\#"}
\lr{encodeURIComponent(<str>)}{Encode all specials}
% \lr{escape(<str>); unescape(<str>)}{Escape all but "*@-_+./"}
\lr{isFinite(<n>); isNaN(<n>)}{}
\lr{parseFloat(<str>); parseInt(<str>, <radix>)}{}
\lr{Infinity, NaN, undefined, null}{}
\section{Core "String"}
\lr{.length}{number of characters}
\lr{.charAt(<index>)}{returns single char string}
\lr{.charCodeAt(<index>)}{Unicode decimal code point}
\lr{.concat(<str2>,<str3>,\dots)}{concatenate}
\lr{.fromCharCode(<n1>,<n2>,\dots)}{Unicode decimal code}
\lr{.indexOf(<str>,\opt{<c>})}{start at char \textsl{c}}
\lr{.lastIndexOf(<str>,\opt{<c>})}{skip \textsl{c} chars from end}
\lr{.match(<regex>)}{array of matches (beware captures)}
\lr{.replace(<pat>,<newstr>)}{regex or string pattern}
\lr{}{\$1, \$2, \dots~allowed in <newstr>}
\lr{.search(<regex>)}{position of match or $-1$}
\lr{.slice(<beg>,\opt{<end>})}{0-based; negatives ok}
\lr{.split(<sep>,\opt{<limit>})}{w/\sl{limit} discards extras}
\lr{.substr(<beg>,\opt{<length>})}{0-based; negatives not ok}
\lr{.substring(<beg>,\opt{<end>})}{0-based; negatives not ok}
\lr{.toLowerCase(), .toUpperCase()}{}
\lr{.trim(), .trimStart(), .trimEnd()}{returned}
% \subsection{HTML wrapping}
% |x="foo"; y=x.bold()|\hfill y=``$<$b$>$foo$<$/b$>$''\\
% |.anchor(|\sl{name}|)|, |.link(|\sl{url}|)|\hfill \\
% |.fontcolor(|\sl{color}|)|\hfill name, rgb(\dots), or ``\#aabbcc''\\
% |.fontsize(|\sl{n}|)|\hfill n$\in\{1..7\}$\\
% |.bold()|, |.italics()|, |.fixed()|, |.strike()|\\
% |.sub()|, |.sup()|, |.big()|, |.small()|\\
\ifjquery
\subsection[\jQueryOnly]{jQuery string utils}
\lr{\$.trim(<str>)}{trim leading and trailing whitespace}
\lr{\$.parseXML(<str>)}{DOM object from string}
\lr{\$.parseJSON(<str>)}{``safe'' parsing}
\fi
\ifprototype
\subsection{prototype "String"}
\lr{.blank()}{"/\^\bs s+\$/" or length == 0}
\lr{.empty()}{length == 0}
\lr{.escapeHTML(), .unescapeHTML()}{}
\lr{.evalJSON(\opt{<sanitize>=false})}{}
\lr{.parseQuery(\opt{<sep>=\"\&\"})}{parse query string}
\lr{.sub(<pat>,<replace>,\opt{<count>=1})}{substitute \textsl{count} times}
\lr{.gsub(<pat>,<replace>)}{globally, <replace> is Str|Func}
\lr{}{func <replace>(<m>) \{ m.input, m[0], m.index \}}
\lr{.include(<str>)}{true if includes substring}
\lr{.startsWith(<str>)}{true if begins with given string}
\lr{.endsWith(<str>)}{true if ends with given string}
\lr{.strip()}{trim leading and trailing whitespace}
\lr{.succ()}{successor, ``foo baz'' $\to$ ``foo ba\{''}
\lr{.times(<n>)}{repeat <n> times}
\lr{.toArray()}{character array}
\lr{.truncate(<len>,\opt{<suffix>=\"\dots\"})}{trunc then append}
\lr{.capitalize()}{``foo bar'' $\to$ ``Foo bar''}
% \lr{\$w(<str>).invoke(\"capitalize\").join(\" \")}{``foo bar'' $\to$ ``Foo Bar''}
\lr{.camelize()}{``foo-bar'' $\to$ ``fooBar''}
\lr{.underscore()}{``fooBar'' $\to$ ``foo_bar''}
\lr{.dasherize()}{``foo_bar'' $\to$ ``foo-bar''}
\fi
\ifdeanjs
\subsection{Dean sprinth.js}
\lr{sprinth(<fmt>,<hash>)}{plain JS or prototype Hash}
Format replacements (escapes):\\
\begin{tabular}{@{}L@{~}l@{\hspace{1em}}L@{~}l@{}}
\%\% & literal \% & \%(<key>) & unescaped\\
\%\{<key>\} & HTML (\lit*{\&\<\>}) & \%[<key>] & JavaScript (\texttt{\"'})\\
\%\<<key>\> & URI {\tiny $\to$ \%2F\%20} & \%«<key>» & URI (except \lit*{/=?\&}\dots)\\
\end{tabular}\\
Ex: \verb!<p onclick="alert('%{[bar]}')">%{foo}</p>!
\fi
\ifdeanjs
\subsection{Dean sprintf.js {\mdseries\small -- perl-like}}
\lr{sprintf(<fmt>,<x1>,<x2>,\dots)}{surprisingly complete}
Supported formats: b,c,d,u,f,o,s,x,X\\
Supports: justification, padding, precision\\
Examples: "%02d", "%.3f", "%\x5s" $\to$ ``xxfoo'', "%- 5s"
\fi
\section{Core "RegExp" {\mdseries\small -- perl-like}}
\lit*{<r>=/<pat>/<mods>} or \lit*{<r>=new RegExp(\"<pat>\",\"<mods>\")}\\
\lit*{if (/<pat>/.test(<string>)) \{ {\dots} \}}\\
modifiers: "i", "g", "m"\\
brackets: "[abc]", "[^abc]", "[red|green|blue]"\\
metachars: ".\w\W\d\D\s\S\b\B\0\n\f\r\t"\\
quantifiers: "a+", "a*", "a{1,3}", "a?", "^a", "a$", "(?=a)", "(?!a)"%$
\lr{.lastIndex}{index to start next match, needs "/g"}
\lr{.source}{text of the RegExp pattern}
\lr{.test(<str>)}{test pattern on <str>, returns bool}
\lr{.exec(<str>)}{test pattern on <str>, returns match|null}
\lr{.compile(<newpat>)}{regex is now <newpat>}
\section{Core "Object"}
\lr{for (key in obj) \{ obj[key]{\dots} \}}{}
\ifjquery
\subsection[\jQueryOnly]{jQuery Object Utils}
\lr{\$.param(<obj>)}{string of query params}
\lr{\$.extend(\opt{<deep>=false}, <obj>,<obj1>,\dots)}{merge hashes}
\lr{
\$.isArray(<x>), \$.isEmptyObject(<x>), \$.isFunction(<x>),
\$.isNumeric(<x>), \$.isPlainObject(<x>), \$.isWindow(<x>),
\$.isXMLDoc(<x>)
}{}
\fi
\ifprototype
\subsection{prototype "Object"}
\lr{.clone()}{\textsl{shallow} copy}
\subsection{Call as \Code{Object.<foo>(<x>)} {\mdseries (does best effort)}:}
\lr{.inspect(<x>)}{string representation}
\lr{.isArray(<x>), .isElement(<x>), .isFunction(<x>), .isHash(<x>),
.isNumber(<x>), .isString(<x>), .isUndefined(<x>) .toHTML(<x>),
.toJSON(<x>), .toQueryString(<x>)}
{}
\fi
\section{Core "Array"}
\lr{for (i in items) \{ items[i]{\dots} \}}{}
\lr{.length}{}
\lr{.concat(<A1>,<A2>,\dots)}{non-destructive}
\lr{.pop(), .push(<elt>), .shift(), .unshift(<elt>)}{}
\lr{.join(<sep>)}{}
\lr{.reverse()}{destructive}
\lr{.slice(<beg>,\opt{<end>})}{}
\lr{.sort(\opt{func cmp(a,b)\{\}})}{return neg if a before b}
\lr{.splice(<beg>,<n>,\opt{$e_1$},\opt{$e_2$},\dots)}{remove <n>; insert $e_i$}
\ifjquery
\subsection[\jQueryOnly]{jQuery Iterable}
\lr{\$.each(<iter>,<func>(<idx>,<item>))}{return "false" to stop}
\lr{\$.grep(<iter>,<func>(<idx>,<item>),\opt{<invert>=false})}{}
\lr{\$.map(<iter>,<func>(<idx>,<item>))}{}
\lr{\$.inArray(<val>,<array>,\opt{<start>=0})}{returns index}
\lr{\$.merge(<iter>, <iter>)}{mutating concatenation}
\lr{.length}{}
\lr{.toArray()}{}
\lr{.each(<func>(<idx>,<item>))}{return "false" to stop early}
\lr{.get(<idx>)}{get item at index, negative OK}
\lr{.index(<sel>)}{index of first match}
\fi
\ifprototype
\section{prototype "Enumerable"}
Note: <id> = identity function; <ctx> = context (this)\\
\lr{.size()}{number of items}
\lr{.all, .any(\opt{<func>=<id>},\opt{<ctx>})}{are all/any true}
\lr{.partition(\opt{<func>=<id>},\opt{<ctx>})}{$\to$ [trueArray,falseArray]}
\br
\lr{.map(\opt{<func>=<id>},\opt{<ctx>})}{``map''; apply and collect}
\lr{.pluck(<prop>)}{like map(func(x)\{return x.<prop>\})}
\lr{.each(\opt{<func>=<id>},\opt{<ctx>})}{``foreach''; apply and discard}
\lr{.invoke(<method>,\opt{<args>})}{invoke method on each item}
\br
\lr{.grep(<filter>,\opt{<func>},\opt{<ctx>})}{search by RegEx|Str}
\lr{}{and collect <func>(x)}
\lr{.member(<elt>)}{true if <elt> in Enum}
\lr{.find, .findAll(\opt{<func>},\opt{<ctx>})}{first/all satisfying <func>}
\lr{.reject(\opt{<func>},\opt{<ctx>})}{opposite of findAll}
\br
\lr{.inGroupsOf(<n>,\opt{<filler>=null})}{AoA each with <n> items}
\lr{.eachSlice(<n>,\opt{<func>=<id>},\opt{<ctx>})}{like inGroupsOf but no}
\lr{}{filler and collect <func>(x)}
\lr{.inject(<init>,<func>,\opt{<ctx>})}{``reduce'' in other languages}
\lr{<A>.zip(<B>,<C>,\dots,\opt{<func>=<id>})}{result$_i$=<func>("["<A>$_i$,<B>$_i$,\dots"]")}
\br
\lr{.min, .max(\opt{<func>=<id>},\opt{<ctx>})}{min/max of <func>(x)}
\lr{.sortBy(<trans>,\opt{<ctx>})}{sort by <trans>(x)}
\fi
\ifprototype
\subsection{prototype "Array" {\mdseries [isa "Enumerable"]}}
\lr{\$A(\opt{<obj>})}{constructor}
\lr{\$w(<str>)}{split on whitespace}
\lr{\$R(<beg>,<end>,\opt{<exclusive>=false})}{range <beg>..<end>}
\br
\lr{.clear()}{destructive}
\lr{.compact()}{remove "null" and "undefined"}
\lr{.flatten()}{non-destructive}
\lr{.intersect(<array>)}{non-destructive}
\lr{.last()}{last item}
\lr{.indexOf(<item>,\opt{<beg>=0})}{}
\lr{.lastIndexOf(<item>,\opt{<neg_offset>})}{}
\lr{.reverse(\opt{<inline>=true})}{destructive if bool arg true}
\lr{.uniq(\opt{<sorted>=false})}{more efficient if sorted=true}
\lr{.without(<val1>,<val2>,\dots)}{non-destructive}
\fi
\section{Core "Math"}
Call as class methods/attrs: "Math.PI", "Math.abs(x)"
\br
".E"=$e^1$, ".LN2"=$\log_e2$, ".LN10"=$\log_e10$, ".LOG2E"=$\log_2e$,
".LOG10E"=$\log_{10}e$, ".PI", ".SQRT1_2"=$\sqrt{.5}$, ".SQRT2"=$\sqrt2$
\br
\lr{%
.sin(<x>)
.cos(<x>)
.tan(<x>)
}{}
\lr{%
.acos(<x>)
.asin(<x>)
.atan(<x>)
.atan2(<y>,<x>)
}{}
\lr{%
.abs(<x>)
.ceil(<x>)
.floor(<x>)
.round(<x>)
}{}
\lr{%
.exp(<x>)
.log(<x>)\textrm{=}$\log_ex$
.sqrt(<x>)
.pow(<x>,<y>)\textrm{=}$x^y$
}{}
\lr{%
.max(<x>,<y>)
.min(<x>,<y>)
.random()$\in[\,0,1)$
}{}
\section{Core "Number"}
\lr{.toExponential(<n>)}{<n> digits after; ``1.00e+4''}
\lr{.toFixed(<n>)}{<n> digits after; ``1234.00''}
\lr{.toPrecision(<n>)}{<n> digits total}
\lr{.toString(\opt{<radix>})}{\sl{radix}$\in\{2..36\}$}
\ifprototype
\subsection{prototype "Number"}
\lr{.abs(), .ceil(), .floor()}{Math.<foo>(x)}
\lr{.round()}{round to int -- Math.round(x)}
\lr{.succ()}{successor $(n+1)$}
\lr{.times(func <iter>(<i>)\{ \},\opt{<ctx>})}{repeat with count}
% \lr{.toColorPart()}{$\equiv$ ".toPaddedString(2,16)"}
\lr{.toPaddedString(<length>,\opt{<radix>})}{0-padded}
\fi
\ifprototype
\section{prototype "Hash" {\mdseries\small [isa "Enumerable"]}}
\lr{\$H(\opt{<obj>})}{constructor}
\lr{.keys()}{array of keys}
\lr{.values()}{array of values}
\lr{.get(<key>)}{only one key at a time}
\lr{.set(<key>,<value>)}{}
\lr{.unset(<key>)}{remove <key> from hash}
\lr{.each(func <iter>(<t>)\{ \},\opt{<ctx>})}{t.key, t.value}
\lr{.index(<value>)}{first key found with <value>}
\lr{.merge(<obj>)}{new merged hash}
\lr{.update(<obj>)}{destructive form of "merge"}
\lr{.toObject()}{downgrade to vanilla hash}
\lr{.toQueryString()}{}
\fi
\ifprototype
\section{prototype Functions}
\lr{.bind(<ctx>,\opt{<arg1>},\opt{<arg2>},\dots)}{set this=ctx, curry args}
\lr{.curry(<args>,\dots)}{curry arguments}
\lr{.defer(<args>,\dots)}{curry and run when bored}
\lr{.delay(<secs>,\opt{<args>},\dots)}{run after timeout (0.1 ok)}
\lr{.wrap(func <wrap>(<orig>,<args>,\dots)\{ {\dots} \})}{}
\fi
\section{Core "Date"}
\lr{var now = new Date()}{}
\lr{.getTime}{epoch time in milliseconds}
\lr{.getMonth(), .getDate()}{0--11, 1--31}
\lr{.getFullYear()}{4-digit year}
\lr{.getDay()}{day of week: Sunday=0--6}
\lr{.getHours(), .getMinutes(), .getSeconds(),
.getMilliseconds()}{0--23, 0--59, 0--59, 0--999}
Also:\\
\lr{.getUTC<Foo>(), .set<Foo>(<x>), .setUTC<Foo>(<x>)}{}
\br
\lr{.getTimezoneOffset()}{\#minutes offset from GMT}
\lr{.toLocaleDateString()}{``10/28/2009''}
\lr{.toLocaleTimeString()}{``11:02:48 PM''}
\lr{.toLocaleString()}{{\footnotesize Wed 28 Oct 2009 11:03:09 PM EDT}}
\lr{.toUTCString()}{``Thu, 29 Oct 2009 03:03:28 GMT''}
\ifprototype
\subsection{prototype "Date"}
\lr{.toJSON()}{``1969-12-31T16:00:00Z'' (always GMT)}
\fi
\newpage
\begin{center}
{
JavaScript
\ifprototype + prototype.js 1.6 \fi
\ifjquery + jQuery 1.8 (\jQueryOnly) \fi
} DOM Reference Guide\\
All funcs/methods work in FF 1.0+ and IE 6.0+\\
% {\footnotesize (Thus, probably all modern browsers)}}
Version 0.10, Sep 2012 --- Dean Serenevy
\end{center}
\section{"document" and related}
\lr{.domain, .referrer, .title, .URL}{document info}
\lr{navigator.cookieEnabled}{bool cookie test}
\lr{var h = window.history;}{}
\lr*{h.length, h.back(), h.forward(), h.go(<where>)}
{<where>=-1,-2,\dots or URL}
\lr{var l = window.location;}{}
\lr{l.hash}{set or return URL from "\#" on}
\lr{l.search}{query params from "?" on}
\lr{l.pathname}{path of url}
\lr{l.href}{entire url (include params+hash)}
\Code{l.hostname, location.port, l.host, l.protocol, l.reload(),
l.replace(<new_url>)}
\ifjquery
\subsection[\jQueryOnly]{jQuery "document"}
\lr{\$(<func>())}{callback now or when DOM ready}
\lr{\$(window).height()}{height of browser viewport}
\lr{\$(document).height()}{height of HTML document}
\fi
\ifprototype
\subsection{prototype "document"}
\lr*{.observe(\"dom:loaded\",<handler>)}
{DOM loaded, does not wait for images to finish}
\lr{.loaded}{bool: DOM tree ready for manipulation}
\lr{.viewport.getDimensions()}{\Code{\{width:<x>, height:<y>\}}}
\lr{.viewport.getHeight()}{pixel height of page in view}
\lr{.viewport.getWidth()}{pixel width of page in view}
\lr{.viewport.getScrollOffsets()}{\Code{\{left:<x>, top:<y>\}}}
\fi
\section{Core "Event"}
\lr{.type}{name of the event}
% \lr{.button|}{mouse button clicked when event triggered}
\lr{.keyCode||.which}{reliable only for keydown|keyup}
\lr{}{interpret via Str.fromCharCode(x)|constants below}
\lr*{.altKey, .ctrlKey, .metaKey, .shiftKey}
{true if corresponding key was down during event}
\subsection{Core events}
\lit*{on<foo>} for plain JS, but as-is in most JS libraries\\
Grey events have more than usual incompatibilities\\[-.5ex]
\lr{}{{\footnotesize http://www.quirksmode.org/dom/events/index.html}}
\br
\lr{load, error, abort}{event in page or image load}
\lr{}{Note: use end of doc "<script>" instead of \texttt{load}}
\lr{focus, blur}{element gains/loses focus}
\lr{\opt{keydown}, \opt{keypress}, keyup}{keyboard key event}
\lr{input}{content of a field changes (HTML5)}
\lr{\opt{change}}{content of a field changes (exit field)}
\lr{mousedown, mouseup}{recommended button events}
\lr{click, dblclick}{clicks -- also fires mouse-down/up!}
\lr{mouseenter, mouseleave}{over/not over element}
\lr{}{IE only, but emulated in major libraries}
\lr{mouseover, mouseout}{\textsl{hover}, re-fires ``out''}
\lr{}{then ``over'' for child elements (bubble)}
\lr{mousemove}{mouse is moved}
\lr{reset, submit}{form button events}
\lr{resize}{window or frame is resized}
\lr{select}{selection made in text input field}
\lr{unload}{user exits the page}
\subsection{Core Promise\small, Oct 2014 (C,F,E,S$-$IE)}
\begin{code}
new Promise(function (resolve, reject) \{
\dots resolve() \dots{OR}\dots reject() \dots{OR}\dots throw \dots
\}).catch(funcname).then(onsuccess, onfailure);
\end{code}
\ifjquery
\subsection[\jQueryOnly]{jQuery "Event"}
\lr{.data}{<data> passed at ".on()" time}
\fi
\ifprototype
\subsection{prototype "Event" XXX:TO FINISH}
\lr{func <handler>(<event>)\{/* this=<observed element> */\}}{}
<sel>: CSS selector expression
\br
\lr{.memo}{<memo> value from ".fire" call}
\lr{.findElement(<sel>)}{event element, then up ancestors}
% \lr{.findElement(<sel>)}{ancestor of event elmt match <sel>}
\lr{.isLeftClick(), .isMiddleClick(), .isRightClick()}{}
\lr{.stop()}{prevent default action and stop bubbling}
\lr{.pointer()}{page (document) absolute: \lit*{\{x:<a>, y:<b>\}}}
% .pointerX(), .pointerY()
%
% $('records').observe('click',function(event) {
% var clickedRow;
% clickedRow = event.findElement('tr');
% if (clickedRow) {
% this.down('th').update("You clicked record #" + clickedRow.readAttribute("data-recnum"));
% }
% });
\fi
\ifprototype
\subsection{prototype "Event" key constants}
\Code{%
.KEY_BACKSPACE, .KEY_TAB, .KEY_RETURN, \mbox{.KEY_ESC}, .KEY_LEFT,
.KEY_UP, .KEY_RIGHT, .KEY_DOWN, .KEY_DELETE, .KEY_HOME, .KEY_END,
.KEY_PAGEUP, .KEY_PAGEDOWN
}
\fi
\section{Core "Element"}
Attributes avail.~as props:~(others in line w/element)\\
\lr{.tagName}{}
\lr{.id, .title, .name, .accessKey, .tabIndex, .alt}{}
\lr{.style}{e.g.: \texttt{x.style.display\,=\,\"none\"}}{}
\lr{.innerHTML, .textContent}{for container objects}
\lr{.getAttribute(<name>), .setAttribute(<name>, <val>)}{}
\lr{.hasAttribute(\dots), .removeAttribute(\dots)}{}
\lr{.classList.add(<cls>); .remove(<cls>); .toggle(<cls>)}{}
\subsection{"a"\hfill {\tt\small .href}}
\subsection{"img"\hfill {\tt\small .src}}
\lr{.complete}{true if image finished downloading}
\subsection{"table"\hfill {\tt\small .cellPadding, .cellSpacing}}
\lr{.rows[]}{array of tr elements}
\lr{.caption}{caption element object}
\lr{.createCaption()}{new element inserted into DOM}
\lr{.deleteRow(<idx>)}{remove row from table}
\subsection{"td" or "th"\hfill {\tt\small .colSpan, .rowSpan, .vAlign}}
\lr{.cellIndex}{index within row}
\subsection{"tr"\hfill {\tt\small .vAlign}}
\lr{.cells[]}{array of td/th elements}
\lr{.rowIndex}{index within table}
\lr{.deleteCell(<idx>)}{remove cell from row}
\subsection{"form"\hfill {\tt\small .action, .method, .encoding}}
\lr{.reset(), .submit()}{corresponding action}
\ifjquery\lr{.serialize()}{format form as query string \jQueryOnly}\fi
\subsection{Form Elements\hfill {\tt\footnotesize .disabled,.readOnly,.value,.type}}
\lr{.form}{associated form}
\subsection{form:~\texttt{input type="button|reset|submit"}}
\lr{.click()}{}
\subsection{form:~\texttt{input type="checkbox|radio"}\hfill {\tt\small .checked}}
\lr{.defaultChecked}{would be checked by default?}
\subsection{form:~\texttt{input type="text|password"}\hfill {\tt\small .maxLength}}
\lr{.defaultValue}{}
\lr{.select()}{select (hilight) text in entry}
\subsection{form:~\texttt{option}\hfill {\tt\small .selected}}
\lr{.defaultSelected}{true if is default option}
\lr{.index}{position in dropdown list}
\lr{.text}{display text}
\subsection{form:~\texttt{select}\hfill {\tt\small .multiple, .size}}
\lr{.options[]}{array of option objects}
\lr{.length}{number of objects}
\lr{.selectedIndex}{}
\subsection{form:~\texttt{textarea}\hfill {\tt\small .cols, .rows}}
\lr{.defaultValue}{}
\lr{.select()}{select (hilight) text in entry}
\section{DOM Manipulation}
To save space, \textsl{doc} $\leftrightarrow$ \texttt{document}\\
\lr{<doc>.createElement(<tag-name>)}{}
\lr{.querySelector(<selectors>)}{meth of <doc> or Element}
\lr{.querySelectorAll(<selectors>)}{NodeList}
\br
\lr{<ele>.insertAdjacentHTML(<where>, <HTML>)}{<where> = }
\lr{.insertAdjacentElement(\dots)}{beforeend\,|\,afterend}
\lr{.insertAdjacentText(\dots)}{|\,beforebegin\,|\,afterbegin}
\br
\lr{.parentElement, .children}{}
\lr{.firstElementChild, .lastElementChild}{skip text}
\lr{.firstChild, .nextSibling}{beware text nodes}
\subsection{NodeList}
\lr{nl.length, nl[<i>], nl.forEach()}{like Array}
\ifjquery
\section[\jQueryOnly]{jQuery Elements}
\lr{.attr(<name>)}{get attr value}
\lr{.attr(<name>,<val>), .attr(<hash>)}{set attr values}
\lr{.removeAttr(\"<name1> <name2> \dots\")}{}
\br
\lr{.addClass(\"<class1> <class2> \dots\")}{}
\lr{.removeClass(\"<class1> <class2> \dots\")}{}
\lr{.hasClass(<name>)}{}
\lr{.toggleClass(<classes>,\opt{<switch>})}{force with <switch>}
\lr{.is(<what>)}{}
\br
\lr{.clone(\opt{<meta>=false})}{clones events if <meta>="true"}
\lr{.html(), .html(<html>)}{get/set inner HTML}
\lr{.text(), .text(<html>)}{get/set inner text content}
\lr{.val(), .val(<val>)}{get/set form field value}
\lr{.css(<prop>), .css(<prop>,<val>)}{get/set CSS props}
\lr{}{both "foo-bar" and "fooBar" format supported}
\lr{.data(<key>), .data(<key>,<val>)}{get/set arbitrary data}
\br
\lr*{.height(), .width(), .height(<val>), .width(<val>)}
{px size excluding border, margin, and padding}
\lr{.innerHeight(), .innerWidth()}{px size w/padding}
\lr*{.outerHeight(\opt{<margin>=false}), .outerWidth(\opt{\dots})}
{px size w/padding+border (+margin if "true")}
\br
\lr{.position()}{relative to parent, \Code{\{top:<y>,left:<x>\}}}
\lr{.offset(), .offset(<pos>)}{absolute, \Code{\{top:<y>,left:<x>\}}}
\fi
\ifjquery
\subsection{jQuery Selectors}
\lr{<tag>}{elements with tag name}
\lr{\#<id>}{element of given id}
\lr{.<class>}{element of given class}
\lr{<parent> {\>} <child>}{child selector}
\lr{<ancestor> <descendent>}{descendent match}
\lr{<prev> + <next>}{adjacency (selects <next>)}
\lr{<prev> {\~} <siblings>}{all siblings following <prev>}
\br
\lr{[<name>]}{has a <name> attribute}
\lr{[<name>\,=\,<value>]}{exactly equal}
\lr{[<name>\,\^=\,<value>]}{starts with match}
\lr{[<name>\,\$=\,<value>]}{ends with match}
\lr{[<name>\,\~=\,<value>]}{word match (space separators)}
\lr{[<name>\,!=\,<value>]}{not equal (or <name> not set)}
\lr{[<name>\,*=\,<value>]}{substring match}
\lr{[<name>\,|=\,<value>]}{equal or starts with ``<value>"-"''}
\br
\lr{:animated}{currently in animation sequence}
\lr{:button}{button or input type button}
\lr{:checked, :selected}{checked/selected elements}
\lr{:contains(<str>)}{text content contains <str>}
\lr{:focus}{currently focused}
\lr{:empty}{no children (including text nodes)}
\lr{:even, :odd, :first, :last}{}
\lr{:eq(<idx>), :lt(<idx>), :gt(<idx>)}{n-th matches}
\lr{:first-child, :last-child, :nth-child(<n>)}{}
\lr{:only-child, :parent}{}
\lr{:has(<sel>)}{has any descendent which matches <sel>}
\lr{:not(<sel>)}{does not match <sel>}
\lr{:disabled, :enabled, :hidden, :visible}{}
\lr{:header}{h1, h2, \dots}
\lr{:checkbox, :image, :input, :password, :radio, :reset, :submit, :text}{}
\fi
\ifjquery
\subsection{jQuery DOM manipulation}
\lr{\$(<html>), \$(\"\<<tag>/\>\", <attrs>)}{Create elements}
\begin{code}
\$("<div/>", \{ "class": "t", "text": "Click!",
"click": function()\{\$(this).toggleClass("t");\}
\}).appendTo("body");
\end{code}
\lr{<A>.append(<B>),<A>.prepend(<B>)}{}
\lr{<B>.appendTo(<A>),<B>.prependTo(<A>)}{}
\lr{}{insert <B> as last/first child of <A>}
\br
\lr{.index()}{index of element in list of siblings}
\lr{<A>.before(<B>),<A>.after(<B>)}{}
\lr{<B>.insertBefore(<A>),<B>.insertAfter(<A>)}{}
\lr{}{insert <B> as prev/next sibling to <A>}
\br
\lr{.remove()}{remove and free memory}
\lr{.empty()}{remove all children}
\lr{.detach()}{``remove'' but keep in memory}
\lr{.replaceWith(<elmt>)}{remove self and insert <elmt>}
\br
\lr{.wrap(<elmt>)}{clone of <elmt> as parent to each}
\lr{.wrapInner(<elmt>)}{clone of <elmt> as first child of each}
\lr{.wrapAll(<elmt>)}{single <elmt> as parent to collection}
\lr{.unwrap()}{remove element's parent}
\fi
\ifjquery
\subsection{jQuery DOM traversal}
These (except "is") produce new iterables\\
\lr{.eq(<idx>), .first(), .last()}{}
\lr{.slice(<beg>, \opt{<end>=last})}{negatives ok}
\lr{.filter(<sel>), .filter(<func>(<idx>))}{}
\lr{.add(<sel>), .add(<iter>), .add(<html>)}{}
\lr{.not(<sel>), .not(<func>(<idx>))}{}
\lr{.is(<sel>), .is(<func>(<idx>))}{returns bool}
\lr{.map(<func>(<idx>,<item>))}{builds iter of return vals}
\lr{.has(<sel>)}{has any descendent which matches <sel>}
\br
When <sel> provided, only return those matching it\\
\lr{.children(\opt{<sel>}), .parents(\opt{<sel>}), .siblings(\opt{<sel>})}{}
\lr{.next(\opt{<sel>}), .prev(\opt{<sel>}), .parent(\opt{<sel>})}{immediate}
\lr{.find(\opt{<sel>})}{descendent nodes}
\lr{.closest(<sel>)}{first self or ancestor}
\lr{.nextAll(\opt{<sel>}), .prevAll(\opt{<sel>})}{all siblings}
\lr*{.nextUntil(<sel>\,|<node>,\opt{filter}), .parentsUntil(\dots)}
{up to <sel> (not including) filtered by <filter>}
\lr{.contents()}{ALL children (node, text, comment)}
\fi
\ifjquery
\subsection{jQuery Element Effects}
duration <dur> is in ms or "fast" or "slow"\\
<func> called at end with "this" set to element\\
\lr{.delay(<dur>)}{delay before executing next effect}
\lr{.stop(\opt{<clear>=true},\opt{<jump_end>=false})}{stop anims}
\lr{.show(\opt{<dur>=0},\opt{<func>()}), .hide(\dots)}{anims w,h,alpha}
\lr{.slideDown(\opt{<dur>=400},\opt{<func>()}),.slideUp,.slideToggle}{}
\lr{.fadeIn(\opt{<dur>=400},\opt{<func>()}), .fadeOut, .fadeToggle}{}
\lr{.fadeTo(<dur>,<opacity>,\opt{<func>()})}{}
\lr{.animate(<css_props>, \opt{<dur>=0}, \opt{<func>()})}{}
% $('#foo').slideUp(300).delay(800).fadeIn(400);
\fi
\ifjquery
\subsection{jQuery Element Events}
\lr*{\$(\"\#Tab tbody\").on(\"click\", \"tr\", do_stuff)}
{in "do_stuff", "this" will be the tr element}
\br
\lr{.trigger(<event>, [\opt{args}])}{}
\lr{.on(<event>,\opt{<selector>},\opt{<data>},<func>(<event>))}{}
\lr{.one(<event>,\opt{<selector>},\opt{<data>},<func>(<event>))}{}
\lr*{.off(<event>,\opt{<selector>},<func>(<event>))}
{<selector> in "off" must exactly match "on" value}
\br
\lr{<event>.stopPropagation()}{do not trigger parents}
\lr{<event>.preventDefault()}{skip browser action}
Returning "false" from callback stops both
% \begin{code}
% // NOT needed for "ready"
% \$("\sl{stuff}").one("load", function() \{
% // do something
% \})
% .each(function() \{
% if(this.complete) \$(this).trigger("load");
% \});
% \end{code}
\fi
\ifprototype
\section{prototype "Selector"}
\lr{\$(<id>), \$(<id1>,<id2>,\dots)}{Element or array of elements}
\lr{\$\$(<sel1>,<sel2>,\dots)}{array of Elements by CSS selector}
\lr{Selector.findChildElements(<element>,<sel>)}{}
\lr{Selector.findElement(<elements>,<sel>,\opt{<index>=0})}{}
\lr{Selector.matchElements(<elements>,<sel>)}{grep/filter}
% \lr{Selector.split(<expression>)}{split comma sep. selectors}
\lr{<x> = new Selector(<expression>)}{}
\lr{<x>.findElements(<root>)}{like findChildElements}
\lr{<x>.match(<element>)}{true if element matches selector}
\fi
\ifprototype
\section{prototype "Element"}
\lr{new Element(<tagName>,<attribute_hash>)}{}
% Class methods:
% |.addMethods(<tagName>,<hash_of_meth})|}{}
\lr{.identify()}{(auto-gen if necessary) element id}
\lr{.classNames()}{array of class names}
\lr{.addClassName(<name>), .removeClassName(<name>),
.toggleClassName(<name>), .hasClassName(<name>)}
\br
\lr{.getStorage()}{Hash for arbitrary info on element}
\lr{.store(<key>,<val>), .retrieve(<key>,<default>)}{}
\br
\lr{.hide(), .show(), .toggle()}{change visibility}
\lr{.visible()}{boolean for visibility state}
\lr{.getStyle(<prop>)}{font-size|fontSize form}
\lr{.setStyle(<hash>)}{plain hash with camelCase keys}
\br
\lr{.makeClipping(), .undoClipping()}{simulate clip}
\lr{.makePositioned(), .undoPositioned()}{}
\lr{.getOpacity(), .setOpacity(<val>)}{<val> $\in[0,1]$}
\br
% \lr{.getHeight(), .getWidth()}{}
\lr{.getDimensions()}{\Code{\{width:<w>, height:<h>\}}}
\lr{.absolutize(), .relativize()}{}
\lr{.clonePosition(<src>,\opt{<opt>})}{<opt>: set<Foo> (bool;~Left,}
\lr{}{Top, Width, Height); offset<Foo> (int; Top, Left)}
\br
\lr{.scrollTo()}{scroll so element at top of viewport}
\lr{.getOffsetParent()}{closest positioned ancestor}
\lr{.positionedOffset()}{offset rel.~to ".getOffsetParent"}
\lr{.cumulativeOffset(), .cumulativeScrollOffset(), .viewportOffset()}
{\Code{\{0:<l>, 1:<t>, left:<l>, top:<t>\}}}
\fi
\ifprototype
\subsection{prototype Events}
\lr{.fire(<event>,\opt{<memo>},\opt{<bubble>=true})}{custom events ok}
\lr{.observe(<event>,<handler>)}{}
\lr{.stopObserving(\opt{<name>},\opt{<handler>})}{}
\fi
\ifprototype
\subsection{prototype DOM Traversal}
% {\tiny (http://www.eskimo.com/~bloo/indexdot/css/syntax/selectors/selectors.htm)}}
<sel>: CSS selector
\lr{.match(<sel>)}{true if element matches selector}
\lr{.descendantOf(<e2>)}{is "this" descendant of <e2>?}
\br
\lr{.ancestors(), .siblings()}{array of elements}
\lr{.childElements(), .descendants()}{ditto}
\lr{.nextSiblings(), .previousSiblings()}{ditto}
\lr{.adjacent(\opt{<sel>})}{siblings matching \textsl{sel}}
\lr{.select(<sel1>,<sel2>,\dots)}{descendants matching any}
\br
\lr{.firstDescendant()}{first non-text child}
\lr{.down(\opt{<sel>},\opt{<n>=0}), .up(\opt{<sel>},\opt{<n>=0})}{$n$\th~desc/parent}
\lr{.next(\opt{<sel>},\opt{<n>=0}), .previous(\opt{<sel>},\opt{<n>=0})}{$n$\th~sibling}
% \lr{.recursivelyCollect(<prop>)}{as in prop=``parentNode''}% or ``nextSibling''
\fi
\ifprototype
\subsection{prototype DOM Manipulation}
\lr{.insert(<what>)}{String of HTML | Element | Hash}
\lr{}{\Code{\{<where>:<what>\}}, <where>=before|after|top|bottom}
\lr{.remove()}{remove from DOM and return element}
\lr{.replace(<new>)}{replace element, return old element}
\lr{.update(<new_content>)}{replace element content}
\lr{.wrap(<wrapper>,<attr>)}{<wrapper>=Element|tag name}
\lr{.clone(\opt{<deep>=false})}{}
\br
\lr{.cleanWhitespace()}{remove ws-only text children}
\lr{.empty()}{contains only whitespace?}
\lr{.readAttribute(<name>), .writeAttribute(<name>,<val>)}{}
\lr{.writeAttribute(<hash>)}{}
\fi
\ifdeanjs
\section{Dean cookies.js}
\lr{setCookie(<name>,<val>,\opt{<expiredays>},\opt{<path>})}{}
\lr{getCookie(<name>), deleteCookie(<name>)}{}
\fi
\ifdeanjs
\section{Dean sort_tables.js}
\lr*{sortTable(<id>,<colNo>,\opt{<order>=1},\opt{<nrHeaders>=0})}
{$\text{\textsl{order}}\in\{1,-1\}$, leave top \textsl{nrHeaders} in place}
\fi
\section{Core AJAX}
\lr{JSON.parse(<str>), JSON.stringify(<obj>)}{}
\ifjquery
\section[\jQueryOnly]{AJAX in jQuery}
\subsection{Making Requests}
\begin{code}
\$.get(<url>,<data>,<success_func>)
.error(<f>(<jqXHR>,<jqMsg>,<serverMsg>))
.complete(<f>(<jqXHR>,<jqMsg>))
;
\end{code}
\lr{<jqMsg>}{= "null"|success|timeout|error|abort|parsererror}
\lr{<serverMsg>}{= ``Not Found''|``Internal Server Error''|\dots}
<type>: expected return type: xml|json|script|html
\br
\lr*{.load(<url>,<data>,<complete>(<text>,<jqMsg>,<jqXHR>))}{Loads response into element}
\lr{\$.get(<url>,<data>,<success>(<data>,<jqMsg>,<jqXHR>),<type>)}{}
\lr{\$.post(<url>,<data>,<success>(<data>,<jqMsg>,<jqXHR>),<type>)}{}
\br
\lr{\$.getJSON(<url>,<data>,<success>(<data>,<jqMsg>,<jqXHR>))}{}
\lr{\$.getScript(<url_js>,<success>(<data>,<jqMsg>,<jqXHR>))}{}
\br
\lr*{\$.ajax(<url>,<settings>)}{See: http://api.jquery.com/jQuery.ajax/}
\subsection{jqXHR methods}
\lr{.error(<func>), .success(<func>), .complete(<func>)}{}
\lr{.readyState}{"1"=open, "2"=sent, "3"=recving, "4"=loaded}
\lr{.status}{HTTP status code (when readyState $\ge3$)}
\lr{.statusText}{HTTP status text from server}
\lr{.responseXML, .responseText}{}
\lr{.getAllResponseHeaders()}{unparsed text headers}
\lr{.getResponseHeader(<name>)}{}
\lr{.abort()}{cancel network activity}
\subsection{AJAX Settings}
Set global settings via: \Code{\$.ajaxSetup(<settings>)}
\lr{\"beforeSend\": func(<jqXHR>, <settings>)}{}
\lr{\"xhrFields\": \{ <fields> \}}{}
\subsection{Global Event Handlers}
Attach to element which becomes "this"\\
\lr{.ajaxStart(<f>())}{Begin ajax batch}
\lr{.ajaxSend(<f>(<event>, <jqXHR>, <opt>))}{each request}
\lr{.ajaxComplete(<f>(<event>, <HttpReq>, <opt>))}{}
\lr{.ajaxSuccess(<f>(<event>, <HttpReq>, <opt>))}{}
\lr{.ajaxError(<f>(<event>, <jqXHR>, <settings>, <err>))}{}
\lr{.ajaxStop(<f>())}{End ajax batch}
\fi
% \section{XMLHttpResponse (prototype-ed)}
%
% \section{Timed Execution}
% new PeriodicalExecuter(callback,frequency)
%%%%%%%%%%%%%%%%%%%%%
% \columnbreak
%%%%%%%%%%%%%%%%%%%%%
\end{multicols*}
\end{document}
| {
"alphanum_fraction": 0.6344741502,
"avg_line_length": 34.1079207921,
"ext": "tex",
"hexsha": "358ef816a57f8c77579b118ba7c4f50de92ddabe",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f10654fc36de6920f55bcf6a3b769f7e504108e1",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "duelafn/refcards",
"max_forks_repo_path": "JavaScript/JavaScript.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f10654fc36de6920f55bcf6a3b769f7e504108e1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "duelafn/refcards",
"max_issues_repo_path": "JavaScript/JavaScript.tex",
"max_line_length": 100,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f10654fc36de6920f55bcf6a3b769f7e504108e1",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "duelafn/refcards",
"max_stars_repo_path": "JavaScript/JavaScript.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12460,
"size": 34449
} |
% based on example 7 in pythontex_gallery
% https://github.com/gpoore/pythontex/
\documentclass[12pt]{mmalatex}
\usepackage{examples}
\begin{document}
\section*{A table of derivatives and anti-derivatives}
This example is based upon a nice example in the Pythontex gallery, see
\ \url{https://github.com/gpoore/pythontex/}.
It uses a tagged block to capture the Mathematica output for later use
in the body of the LaTeX table.
\lstset{numbers=left}
\begin{minipage}[t]{0.65\textwidth}
\begin{mathematica}
(* Create a list of functions to include in the table *)
fun = {Sin[x], Cos[x], Tan[x],
ArcSin[x], ArcCos[x], ArcTan[x],
Sinh[x], Cosh[x], Tanh[x]};
eol = {"\\\\" , "\\\\", "\\\\",
"\\\\[5pt]", "\\\\[5pt]", "\\\\[5pt]",
"\\\\", "\\\\", " "};
ddxfun = D[#, x] & /@ fun;
intfun = Integrate[#, x] & /@ fun;
ddxfunHold = HoldForm[D[#, x]] & /@ fun;
intfunHold = HoldForm[Integrate[#, x]] & /@ fun;
(* mmaBeg (CalculusTable) *)
Do[Print[OutputForm[
ToString[TeXForm[ddxfunHold[[i]]]] <> "&=" <>
              ToString[TeXForm[ddxfun[[i]]]] <> "\\quad & \\quad" <>
ToString[TeXForm[intfunHold[[i]]]] <> "&=" <>
ToString[TeXForm[intfun[[i]]]] <>
eol[[i]]
]], {i,1,9}]
(* mmaEnd (CalculusTable) *)
\end{mathematica}
\end{minipage}
\hskip 1cm
\begin{minipage}[t]{0.35\textwidth}
\begin{latex}
\begin{align*}
\mma {CalculusTable}
\end{align*}
\end{latex}
\end{minipage}
\clearpage
\begin{align*}
\mma {CalculusTable}
\end{align*}
\end{document}
| {
"alphanum_fraction": 0.5753424658,
"avg_line_length": 25.9032258065,
"ext": "tex",
"hexsha": "8e9f89418056db5be7c632694a33c7d091e4acd0",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-03-30T17:17:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-27T03:29:40.000Z",
"max_forks_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "leo-brewin/hybrid-latex",
"max_forks_repo_path": "mathematica/examples/example-02.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "leo-brewin/hybrid-latex",
"max_issues_repo_path": "mathematica/examples/example-02.tex",
"max_line_length": 71,
"max_stars_count": 16,
"max_stars_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "leo-brewin/hybrid-latex",
"max_stars_repo_path": "mathematica/examples/example-02.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T23:16:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-12T06:31:49.000Z",
"num_tokens": 544,
"size": 1606
} |
%!TEX TS-program = xelatex
\documentclass[]{friggeri-cv-short}
\usepackage{multicol}
%command for emphasising points/keys among body
\newcommand{\imp}[1] {{\em #1}}
\definecolor{lightgray}{HTML}{000000}
\begin{document}
\header{Joey}{Pereira}
\begin{aside}
\section{contact}
%226 343.9309
\href{mailto:joey@pereira.io}{joey@pereira.io}
~
\href{http://pereira.io}{http://pereira.io}
\href{http://www.github.com/xlegoz}{github.com/xlegoz}
\section{languages}
\textbf{experienced with:}
Python
Javascript
Objective C
Racket
Java
Bash
~
\textbf{working knowledge:}
C
C++
PHP
Elixir
\section{tools}
Git
\LaTeX
MongoDB
PostgreSQL
\section{reverse engineering tools}
Java Bytecode Editor
Java Decompiler
.NET Reflector
Ollydbg
\section{interests}
distributed systems
reverse engineering
operating systems
machine learning
compilers and assemblers
~
innovating \& inventing
theoretical physics
fishing and sailing
entrepreneurship
new tech
\end{aside}
\section{about me}
I'm currently an undergraduate Computer Science student in the co-op program at the University of Waterloo. I'm always enthusiastic and passionate about entrepreneurship, software development, innovation, and learning.
% \section{qualifications}
%\vspace{-1.5\parskip}
%\begin{multicols}{3}
% avid leader \\ enjoys challenging problems \\ experienced with small teams \\ passion for learning \\ adaptable to any environment \\ {\em loves} computer science
%\end{multicols}
%\vspace{-2\parskip}
\section{education}
\begin{entrylist}
\entry
{2013-2018}
    {Bachelor of Computer Science, University of Waterloo}
{Waterloo, Canada}
{Projected minor in management studies based on interests of entrepreneurship \\
First year representative on Math Endowment Fund Funding Council \\
% Highly involved in campus volunteering and clubs \\
Cumulative average of \imp{86\%}}
%\entry
% {07-09 2013}
% {Startup Engineering Course}
% {Stanford - Coursera (Online)}
% {Learned about what it takes to develop an idea from a business perspective \\
% Introduced to node.js and using PaaS}
\end{entrylist}
\section{experience}
\begin{entrylist}
\entry
{09 2014-now}
{Software Engineer, PiinPoint}
{Waterloo, Canada}
    {Working with data visualization of location analytics in a fast-paced startup\\
     Rapidly learning technologies to build new components \\
     Independently taking on projects through the entire development process \\
     Using jQuery, AngularJS, React, Pyramid, Flask, MongoDB, PostgreSQL \\
     Created and published the product's initial iOS application
% Developing a web app with Javascript, Python, and MongoDB \\
% Building innovative solutions to spatial data visualizations and GIS tools
}
\entry
{2012-2013}
{Co-founder and Head Technician, J\&J Tech}
{Fergus, Canada}
{Investigated starting a \imp{small business} with a partner \\
Gained \imp{management skills} through advertising and customer handling \\
Underwent management and organizational \imp{responsibilities}}
% \entry
% {06–08 2012}
% {Service Assistant, PlanetCPU}
% {Fergus, Canada}
% {Diagnosed software and hardware problems under \imp{tight time constraints} \\ Worked as a \imp{team to complete tasks}, as well as run a market campaign \\
% Learned how to be a \imp{key contributor} to a small team and to the business}
\textbf{\emph{Volunteering}} \\
\entry
{2015-now}
{Organizer, Platform Team}
{Hack The North, uWaterloo}
    {Developing an API and web application for hackathon applicants \\
     Developing internal tools for organization and analysis of applicants}
\entry
{2014}
{Volunteer}
{Hack The North, uWaterloo}
    {Helped the event run smoothly and kept the hackers happy and feeling at home}
\entry
{2013-2014}
{Recruitment Ambassador}
{David Cheriton School of Computer Science, uWaterloo}
{Volunteered in University of Waterloo events for prospective students \\
Set up \imp{technical equipment} for the event \\
     Helped prospective students in \imp{discovering computer science} as a passion}
\entry
{2012-2013}
{Technical Organizer}
{Elora Road Christian Fellowship}
{Solved various \imp{technical problems under pressure} to maintain event quality\\
\imp{Facilitated} a small worship team and the operation of equipment}
\end{entrylist}
\section{projects}
\begin{entrylist}
\entry
{2015}
{SmartDoor \textsf{- Smart Deadbolt and Doorbell \em{- Python}}}
{Still in invention}
{Sniff various wireless protocols to pick up nearby devices \\
Push notifications upon seeing registered devices arrive \\
     Built using multiple Raspberry Pis, Python scripts, and a ton of antennas }
\entry
{2014}
{Pi Net \textsf{- Raspberry Pi Mesh Based Network}}
{Still in development}
    {Intended to explore the network interaction of multiple devices over a mesh topology network and the sustainability of an intranet across all nodes}
\entry
{2013-2014}
{JoOS \textsf{- Bare Bones Operating System \em{- NASM, C}}}
{\href{http://github.com/xlegoz/JoOS}{github.com/xlegoz/JoOS}}
{Goal of learning more about the operating system layer and theory \\
     Involved the i386 instruction set, low-level operation, and interrupt handling}
\entry
{2013}
{jjNES \textsf{- Nintendo Entertainment System Emulator \em{- Java}}}
{Still in research}
{Aimed to achieve a better understanding of computer architecture \\
Observed models of how components interact within the NES architecture \\
Involved an understanding of machine code, processor operations}
\end{entrylist}
\end{document}
| {
"alphanum_fraction": 0.7279833246,
"avg_line_length": 31.8066298343,
"ext": "tex",
"hexsha": "9ec29274ccc8f9762d96880e14a7b4eb04c9059a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "027d811cf830671ebe9522397a54e497910d419d",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "xLegoz/cv",
"max_forks_repo_path": "cv-short.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "027d811cf830671ebe9522397a54e497910d419d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "xLegoz/cv",
"max_issues_repo_path": "cv-short.tex",
"max_line_length": 217,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "027d811cf830671ebe9522397a54e497910d419d",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "xLegoz/cv",
"max_stars_repo_path": "cv-short.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1544,
"size": 5757
} |
\documentclass[12pt]{article}
\usepackage{graphics}
\begin{document}
\section*{NYU Physics 1---In-class Exam 2}
\vfill
\paragraph{Name:} ~
\paragraph{email:} ~
\paragraph{recitation:} ~
\vfill
This exam consists of two problems. Write only in this booklet. Be
sure to show your work.
\vfill ~
\clearpage
\section*{Problem 1}
\noindent~\hfill\includegraphics{../mp/pulley_table.eps}\hfill~
In the system shown, the strings are massless and inextensible and the
pulley is massless and frictionless. There is a frictional force
$F_\mathrm{f}=\mu\,N$ between block $m_1$ and the table, but when
released from rest, block $m_2$ falls.
(a) What is the acceleration $\vec{a}$ (magnitude and direction) of
mass $m_1$? Give your answer in terms of quantities shown in the
diagram.
\vfill
(b) When mass $m_2$ has fallen by a distance $\Delta h$, by how much
$\Delta U$ has the potential energy of the system changed? Has it
increased or decreased? By how much $\Delta K$ has the kinetic energy
of the system changed? Has it increased or decreased? Give your
answers in terms of $\Delta h$ and the symbols shown in the diagram.
\vfill
\emph{Note that you do not have to have solved part (a) correctly to
get part (b).}
\clearpage
\section*{Problem 2}
A physicist of mass $48~\mathrm{kg}$ stands (carefully) on a perfectly
frictionless ice rink, next to a block of ice of mass
$16~\mathrm{kg}$. Both begin at rest in the ``rink frame.'' The
physicist pushes on the block until it is moving away \emph{relative
to the physicist} at $1~\mathrm{m\,s^{-1}}$.
(a) When viewed in the rink frame, the block ends up moving in the
positive $x$ direction and the physicist ends up moving in the
negative $x$ direction, by conservation of momentum. What are the
final speeds of the block and physicist, in the rink frame?
\vfill
(b) How much work $W$ did the physicist do?
\vfill ~
\clearpage
[This page intentionally left blank for calculations or other work.]
\end{document}
| {
"alphanum_fraction": 0.7354417671,
"avg_line_length": 26.2105263158,
"ext": "tex",
"hexsha": "f1918b58910885a5a15c5aa16b84603dc152a43a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "davidwhogg/Physics1",
"max_forks_repo_path": "tex/old/q02_2002.tex",
"max_issues_count": 29,
"max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "davidwhogg/Physics1",
"max_issues_repo_path": "tex/old/q02_2002.tex",
"max_line_length": 70,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "davidwhogg/Physics1",
"max_stars_repo_path": "tex/old/q02_2002.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z",
"num_tokens": 575,
"size": 1992
} |
\documentclass[11pt, twoside, pdftex]{article}
% This include all the settings that we should use for the document
\newcommand{\PDFTitle}{Introduction to the Intel FPGA DevCloud}
\newcommand{\blue}[1]{{\color{blue}\sf{#1}}}
\newcommand{\commonPath}{../../Common}
\input{\commonPath/Docs/defaulttext.tex}
\input{\commonPath/Docs/preamble.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%
% Add title
\newcommand{\doctitle}{Introduction to the Intel FPGA DevCloud}
\newcommand{\dochead}{Introduction to the Intel FPGA DevCloud}
% Usually no need to change these two lines
\title{\fontfamily{phv}\selectfont{\doctitle} }
\chead{ \small{\textsc{\bfseries \dochead} } }
% Customizations
%%%%%%%%%%%%%%%%%%%%%%%%%
% Allows multiple figures per page
\renewcommand\floatpagefraction{.9}
\renewcommand\topfraction{.9}
\renewcommand\bottomfraction{.9}
\renewcommand\textfraction{.1}
\setcounter{totalnumber}{50}
\setcounter{topnumber}{50}
\setcounter{bottomnumber}{50}
\raggedbottom
%%%%%%%%%%%%%%%%%%
%%% DOCUMENT START
%\begin{document}
\begin{document}
\begin{table}
\centering
\begin{tabular}{p{5cm}p{4cm}}
\hspace{-3cm}
&
\raisebox{1\height}{\parbox[h]{0.5\textwidth}{\Large\fontfamily{phv}\selectfont{\textsf{\doctitle}}}}
\end{tabular}
\label{tab:logo}
\end{table}
\colorbox[rgb]{0,0.384,0.816}{\parbox[h]{\textwidth}{\color{white}\textsf{\textit{\textBar}}}}
\thispagestyle{plain}
\section{Introduction}
This tutorial provides an introduction to the cloud-based computing resource called the
Intel\textsuperscript{\textregistered} FPGA DevCloud.
Each computer in the DevCloud comprises both a high-end Intel
CPU and an Intel FPGA device. The focus of this tutorial is on the development of
Accelerator Functional Units (AFUs) for use on the DevCloud.
An AFU is a hardware component that can be implemented
in an FPGA and used to perform computations along with an Intel CPU. We show the required
steps for setting up a secure connection to the DevCloud, and describe various commands that
are available for AFU development. Detailed documentation on the Intel FPGA DevCloud can
be found on {\it GitHub} at \blue{https://github.com/intel/FPGA-Devcloud}.
{\bf Contents}:
\begin{itemize}
\item Obtaining a user account on the Intel FPGA DevCloud
\item Setting up a secure connection to the DevCloud using SSH
\item Configuring the DevCloud for AFU development
\item Compiling AFU hardware
\item Programming an FPGA on the DevCloud
\item Developing software programs that make use of an AFU
\end{itemize}
{\bf Requirements:}
\begin{itemize}
\item Working knowledge of Microsoft Windows and/or Linux operating systems
\item A computer running either Microsoft Windows (version 10 is recommended) or Linux
(Ubuntu, or a similar Linux distribution). The computer would typically be either a
desktop computer or laptop.
\item Access to the Internet from your computer
\end{itemize}
\clearpage
\newpage
\section{Getting Started}
Intel provides a set of tools called the
{\it Intel Acceleration Stack}\textsuperscript{\textregistered} for
developing applications that use both Intel processors and hardware acceleration devices,
such as FPGAs. The development tools provided by the acceleration stack are installed on the
DevCloud, and include features for designing both hardware and software components. In
this tutorial we show how to use these tools on the DevCloud for AFU development.
The first step to using the Intel FPGA DevCloud is to obtain a user account, which can be
done by filling in the form at
\blue{https://www.intel.com/content/www/us/en/forms/idz/devcloud-enrollment/fpga-request.html}.
The application process takes about one to two weeks.
Upon approval of your application an email message will be sent that provides a customized
link for setting up your DevCloud access. The part of the email that you click on to reach
the customized link is illustrated at the bottom of Figure~\ref{fig:welcome}.
Clicking on this link takes you to a webpage that includes the
connection link displayed in Figure~\ref{fig:connect}. On this webpage you click on
\blue{Connection Options} to reach the selections displayed in
Figure~\ref{fig:connect_terminal}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.8]{figures/welcome.png}
\caption{A part of the DevCloud welcome message.}
\label{fig:welcome}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.8]{figures/connect.png}
\caption{Connect to the DevCloud.}
\label{fig:connect}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.8]{figures/connect_terminal.png}
\caption{Connection options.}
\label{fig:connect_terminal}
\end{center}
\end{figure}
As indicated in Figure~\ref{fig:connect_terminal}, you should click to select the item
labeled \blue{Linux$^*$ or macOS} {\sf (SSH client)}. This action opens a webpage customized to
your DevCloud login information, as illustrated in Figure~\ref{fig:ssh}. The DevCloud account
indicated in the figure, \texttt{u42132}, is shown for illustrative purposes only--the
webpage is customized to set up each user's unique DevCloud account.
\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{figures/ssh.png}
\caption{Setting up a secure connection to the DevCloud.}
\label{fig:ssh}
\end{center}
\end{figure}
By clicking on the dark-blue box in ``step \texttt{1}'' of Figure~\ref{fig:ssh} you can download a
file onto your home computer that contains your DevCloud secure login information,
in {\it Secure Shell} (SSH) format.
Before you can follow ``step \texttt{2}'' of Figure~\ref{fig:ssh} you have to be using a
Linux environment, as discussed below.
\subsection{Using Linux to Connect to the DevCloud}
The SSH information downloaded using the link in Figure~\ref{fig:ssh} requires
a Linux environment. If you are already using a Linux system, then you can skip
to Section~\ref{sec:ssh}. But if you are using Microsoft Windows, then a Linux environment
has to be set up before continuing. We assume that
you are running Linux via the Microsoft Windows extension called
{\it Windows Subsystem for Linux} (WSL). If you are using another method of accessing Linux,
for example the {\it Cygwin} tools, then some differences may apply in the setup process.
If not already done on your computer, enable WSL. Instructions for enabling this feature of
Windows can be found by searching on the Internet. After WSL is enabled, download and
install the \texttt{Ubuntu} app from the Microsoft Windows Store. Open the Ubuntu app,
which provides you with a Linux terminal.
\subsubsection{Installing SSH Authentication}
\label{sec:ssh}
To complete ``step \texttt{2}'' of Figure~\ref{fig:ssh}, in a Linux terminal execute the command
\lstset{language=,numbers=none,escapechar=|}
\begin{lstlisting}
bash PATH/setup-devcloud-access-``userid''.txt
\end{lstlisting}
where \texttt{PATH} represents the directory where the ``setup'' file was saved when it was
downloaded in ``step 1''. This command extracts the SSH authentication information from the
setup file and stores it into your Linux user's home directory in $\sim$/.{\it ssh}. As
a final step modify a file permission by executing the command
\begin{lstlisting}
chmod 600 ~/.ssh/config
\end{lstlisting}
Now, login to the DevCloud by executing the command
\begin{lstlisting}
ssh devcloud
\end{lstlisting}
Once logged in, perform the following one-time set up. Edit the .{\it bashrc} file
in your home directory on the DevCloud. Several {\it command-line interface} (CLI) text
editors are available on the DevCloud, including {\it Vim}, {\it emacs}, {\it nano}, and
{\it pico}. At the end of your
.{\it bashrc} file append the line
\begin{lstlisting}
source /data/intel_fpga/devcloudLoginToolSetup.sh
\end{lstlisting}
This setting ensures that your environment is properly configured to execute additional
commands that are needed for AFU development. To apply this new setting to your login session,
logout of the DevCloud by typing either the end-of-transmission character
$^\wedge$\texttt{D} (hold down the \texttt{CTRL} key and press \texttt{d}),
or \texttt{exit}, and then login again by executing \texttt{ssh devcloud}.
\section{Using the DevCloud}
The command \texttt{ssh devcloud} makes a connection to a DevCloud computer named {\it login-2}.
This machine does not have access to any FPGA accelerator cards. To connect to a computer
that has the required hardware resources execute (on {\it login-2}) the command
\begin{lstlisting}
devcloud_login
\end{lstlisting}
This command allows you to make a connection to machines that offer five different types of
compute-resources. For this tutorial we assume that AFU development will be done using
an Arria 10 FPGA. If a different type of FPGA were to be used instead, then some of the
instructions below would need to be modified accordingly. Type \texttt{1} to select
\begin{lstlisting}
1) Arria 10 PAC Compilation and Programming - RTL AFU, OpenCL
\end{lstlisting}
You will then be presented with a choice of two versions of Arria 10 development tools.
Type \texttt{1} to select
\begin{lstlisting}
1) 1.2.1
\end{lstlisting}
You should now be connected to a computer that includes an Arria 10 FPGA card. In this
discussion we assume that the computer is named {\it s005-n003}. To complete your setup,
execute (on {\it s005-n003}) the command
\begin{lstlisting}
tools_setup
\end{lstlisting}
This command presents seven options for selecting development software. Type \texttt{5} to
choose
\begin{lstlisting}
5) Arria 10 PAC Compilation and Programming - RTL AFU, OpenCL
\end{lstlisting}
At this point the environment should be configured for access to the AFU development tools.
One of the software packages that we will use is called the
{\it Open Programmable Accelerator Engine} (OPAE), which is part of the Intel Acceleration
Stack. To verify the proper setup, type the command
\begin{lstlisting}
echo |\$|OPAE_PLATFORM_ROOT
\end{lstlisting}
This command should produce an output like the one illustrated in Figure~\ref{fig:setup}.
The OPAE software is installed on the DevCloud, and is also available in open-source form on
{\it GitHub} at \blue{https://github.com/OPAE}. This repository includes detailed documentation
about OPAE.
~\\
\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{figures/setup.png}
\caption{Verifying the proper setup.}
\label{fig:setup}
\end{center}
\end{figure}
\subsection{Copying Files to/from the DevCloud}
Files that are created on your home computer can be copied onto the DevCloud. One simple
way to perform the desired file-transfer is to use the {\it Secure Copy} (\texttt{scp}) program.
For example, to copy {\it homefile} from your computer to your
{\it home} directory ($\sim$) on the DevCloud you would use the command
\begin{lstlisting}
scp homefile devcloud:~/
\end{lstlisting}
Of course, you can copy files to other directories on the DevCloud by including the desired path
in the \texttt{scp} command. Similarly, \texttt{scp} can be used to copy a file from the
DevCloud to your home computer. For example, to copy {\it devfile} from the DevCloud to
the current working directory (\texttt{.}) on your home computer you would use the command
\begin{lstlisting}
scp devcloud:~/devfile .
\end{lstlisting}
The \texttt{scp} command can be used to copy an entire directory {\it tree}, including
sub-directories, by using the \texttt{-r} option. More information about \texttt{scp} can
be found by typing \texttt{man scp}. Note that \texttt{scp} should be used with some
caution, as it {\it clobbers} (overwrites) existing files without warning. Thus, if you
copy a file to a remote computer using \texttt{scp}, the file is overwritten on the remote
computer if it already exists.
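As an illustration of the \texttt{-r} option, the following command would copy a local
project directory named {\it myproject} (a placeholder name used only for this example),
including all of its sub-directories, into your home directory on the DevCloud:
\begin{lstlisting}
scp -r myproject devcloud:~/
\end{lstlisting}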
\subsection{Working with an AFU}
Intel documentation specifies that the source files for an AFU have to be structured in a
particular way. An example of a simple AFU, named {\it hello\_afu}, is provided as a sample
on the DevCloud in the directory
\begin{lstlisting}
|\$|OPAE_PLATFORM_ROOT/hw/samples
\end{lstlisting}
The AFU has a root directory called \texttt{hello\_afu}, which
contains directories named \texttt{hw} and \texttt{sw}. The \texttt{hw} directory contains a
directory called \texttt{rtl}, which holds the hardware source-code files.
Figure~\ref{fig:sample} (near the top) shows the contents of the sample \texttt{rtl}
folder. With your working directory set to \texttt{samples}, copy \texttt{hello\_afu} to your home directory with the command \texttt{cp -r hello\_afu \textasciitilde}. If you create a new AFU, then before it can be compiled for the first time you must complete the following steps, using \texttt{hello\_afu} as an example. Run \texttt{cd \textasciitilde/hello\_afu} then execute the
command
\begin{lstlisting}
afu_synth_setup --source hw/rtl/filelist.txt build_synth
\end{lstlisting}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{figures/sample.png}
\caption{Sample AFU hardware source-code files.}
\label{fig:sample}
\end{center}
\end{figure}
This command creates the directory named \texttt{build\_synth} inside the \texttt{hello\_afu}
directory, and copies a number of files that are needed to compile the AFU into a hardware
circuit. To perform the hardware compilation you would set your working directory to
\texttt{build\_synth} and execute the command
\begin{lstlisting}
run.sh
\end{lstlisting}
This command, which is found in the directory $\$$\texttt{OPAE\_PLATFORM\_ROOT/bin}, runs the
tools required to generate a hardware circuit for the AFU. In the case of the {\it hello\_afu}
example the circuit would be saved in a bitstream file called {\it hello\_afu.gbs}.
Some of the files created during hardware compilation can be ``cleaned up'' by executing
in the \texttt{build\_synth} directory the command
\begin{lstlisting}
clean.sh
\end{lstlisting}
\subsubsection{Downloading an AFU Bitstream into an FPGA}
You can download a bitstream file such as {\it hello\_afu.gbs} into an FPGA using a
two-step process. First, execute the command
\begin{lstlisting}
PACSign PR -t UPDATE -H openssl_manager -i hello_afu.gbs -o hello_afu_unsigned.gbs
\end{lstlisting}
Then, execute
\begin{lstlisting}
fpgasupdate hello_afu_unsigned.gbs
\end{lstlisting}
\subsubsection{Compiling Software Programs for an AFU}
Software programs that utilize an AFU have to be contained in its \texttt{sw} directory.
These files can be compiled into executable programs by using a special {\it Makefile}. An
example Makefile, as well as a C source-code file, can be found in the \texttt{sw} folder
for {\it hello\_afu}, as illustrated in Figure~\ref{fig:sample} (near the bottom).
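Assuming that the default target of the provided Makefile builds the host program (the exact
targets may differ), a typical way to compile the software for the {\it hello\_afu} example
would be
\begin{lstlisting}
cd ~/hello_afu/sw
make
\end{lstlisting}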
\subsubsection{Miscellaneous Commands}
Some miscellaneous Linux and DevCloud <{\it commands}> are described below.
\begin{description}
\item [< ! >] You can use \texttt{!} to recall a previously executed command.
For example, to
re-execute a previous \texttt{ssh} command you can type \texttt{!ssh}. The Linux shell will
search your command history for the last command that started with {\it ssh} and execute it again.
\item [< !:p >] This command is similar to \texttt{!}, except that it does not
{\it execute} the
recalled command. For example, if you wish to check for the most-recently executed command
that begins with {\it ssh}, you would type \texttt{!ssh:p}.
\item [< $\uparrow$ >] You can use $\uparrow$ to recall previously-executed commands.
\item [< $^\wedge$Z >] When you are using a program, for example a text editor, typing
$^\wedge$Z (hold down the \texttt{CTRL} key and press \texttt{z})
returns control to the Linux command-line prompt, but leaves your program
running in the {\it background}.
\item [< $\sim^\wedge$Z >] When you are connected to a {\it remote} computer, typing the
character sequence $\sim^\wedge$Z returns control to your {\it local} computer, but keeps the
session on the remote computer running in the {\it background}.
\item [< bg >] This command displays a list of all programs that you are running in
the background.
\item [< fg >] The \texttt{fg} command returns control to the most recently-suspended
background program. If there are multiple background programs then you can return control to
program $n$ by executing \texttt{fg \%n}.
\item [< ps >] This command provides a list of currently-executing processes.
\item [< kill >] The \texttt{kill} command can be used to terminate a process.
\item [< killall >] Occasionally you may get automatically logged out of a compute-machine
on the DevCloud, for example the machine {\it s005-n003}, even though you have not purposely
logged out. If this occurs then a subsequent issue may develop. Specifically, trying to reconnect
to a compute-machine using the command \texttt{devcloud\_login} may result in the error
\begin{lstlisting}
You are already logged into node =s005-n003 interactively.
\end{lstlisting}
In this case, you can reconnect to the machine by executing
\begin{lstlisting}
ssh s005-n003
\end{lstlisting}
After reconnecting, in some situations you may find that commands on the DevCloud do not
work properly. If this happens, then (as a last resort) you can kill all processes owned by
your {\it userid}, by using the \texttt{killall} command. For example, if your userid
were {\it u42132} you would type
\begin{lstlisting}
killall --user u42132
\end{lstlisting}
This command should terminate all processes owned by you and then disconnect from the
machine. If you do not get logged out, then execute the command one more time.
\item [< uuidgen >] This DevCloud command generates a {\it universally unique identifier} (uuid)
for an AFU.
\item [< pbsnodes >] This DevCloud command provides a listing of available compute-machines. To
control the display of this list use the command \texttt{pbsnodes | more}.
\end{description}
\section{Concluding Remarks}
This tutorial has provided an introduction to the Intel FPGA DevCloud for AFU development.
Further details about the DevCloud can be found on its {\it GitHub} site
at \blue{https://github.com/intel/FPGA-Devcloud}.
% Copyright and Trademark
\input{\commonPath/Docs/copyright.tex}
\end{document}
| {
"alphanum_fraction": 0.7658810081,
"avg_line_length": 45.3604938272,
"ext": "tex",
"hexsha": "4446318faf11fefee5fb2309de73d609c8fff153",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fpgacademy/Tutorials",
"max_forks_repo_path": "High_Level_Design/DevCloud_Introduction/DevCloud.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fpgacademy/Tutorials",
"max_issues_repo_path": "High_Level_Design/DevCloud_Introduction/DevCloud.tex",
"max_line_length": 384,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fpgacademy/Tutorials",
"max_stars_repo_path": "High_Level_Design/DevCloud_Introduction/DevCloud.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4749,
"size": 18371
} |
% Created 2020-05-27 Wed 17:32
% Intended LaTeX compiler: pdflatex
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\author{Augusto Peres}
\date{\today}
\title{Usage of the model checker}
\hypersetup{
pdfauthor={Augusto Peres},
pdftitle={Usage of the model checker},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 26.3 (Org mode 9.3.6)},
pdflang={English}}
\begin{document}
\maketitle
\section{Expressing the formulas}
\label{sec:org953ebfe}
The DTL formula we want to model check should be passed as one of the arguments.
To express the formulas, parentheses must be used whenever an operator is
applied; this includes \(\wedge\), \(\implies\), \(\vee\).
For example \(@_1[p \implies q]\) should be expressed as \texttt{@\_1((p)=>(q))}. Spaces
should also not be used to express formulas. Communication formulas are
expressed as \texttt{c\_2(otherformulas)}.
\textbf{Note}: When not using the bounded options it is not possible to use
\(\vee\), \(\wedge\) or any other dual operators.
\section{Expressing transition systems}
\label{sec:org82f870d}
The transition system should be encoded in a file. The lines in that file must
be of the following form:
\begin{enumerate}
\item \texttt{states 1 2 3 4}. This line lists the states present in the system
\item \texttt{initial 1 2}. This line states the initial states
\item \texttt{actions agent a1 a2}. This line states the actions of each agent; \texttt{agent}
  should be an integer
\item \texttt{symbols agent p1 p2 p1}. States the propositional symbols available to each
agent.
\item \texttt{label state agent p1 p2}. States the propositional symbols in the language
of the agent that are present in the state.
\item \texttt{state action state}. Describes the transition relation.
\end{enumerate}
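As an illustration, a small hypothetical file for a system with two agents and two states,
following the line formats above, could look as follows (all state, action and symbol names
are invented for this example and do not correspond to the test files used below):
\begin{verbatim}
states 1 2
initial 1
actions 1 a b
actions 2 a c
symbols 1 p1 p2
symbols 2 q1
label 1 1 p1
label 1 2 q1
label 2 1 p2
label 2 2 q1
1 a 2
2 b 1
\end{verbatim}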
\section{Using the model checker}
\label{sec:org6e540a1}
The model checker is used as a command line application with the following
options.
\subsection{Visualizations}
\label{sec:orgf8a45c4}
Transition systems can be output to the Graphviz format by calling
\begin{verbatim}
./Main -toGraphviz path-to-the-file-containing-the-transition-system
\end{verbatim}
This outputs the full transition system and the simplified transition
system, \emph{i.e.}, the transition system with all dead or unreachable states
removed.
The output can be copy-pasted \href{http://webgraphviz.com}{here} for visualization.
\subsection{Model checking}
\label{sec:org75f5e41}
To get a simple yes or no answer, use
\begin{verbatim}
./Main -modelCheck <path-to-transition-system> <formula> <number of agents>
\end{verbatim}
This uses the default automata theoretic approach.
To use the bounded approach
\begin{verbatim}
./Main -modelCheck <path-to-transition-system> <formula> <number of agents> -bounded <maxbound>
\end{verbatim}
To get one counter example we use
\begin{verbatim}
./Main -oneCounterExample <path-to-transition-system> <formula> <number of agents>
\end{verbatim}
This outputs something of the form
\begin{verbatim}
CounterExample [ [(s1, x1), (s2, x2)...(sn, x2)], [(sn, xn), ..., (sn, xn)] ]
\end{verbatim}
This corresponds to the infinite path in the product of the DTS with the
automaton that witnesses the persistence property. Projecting the first
coordinates yields an infinite path in the transition system.
To use the bounded model checking approach
\begin{verbatim}
./Main -oneCounterExample <path-to-transition-system> <formula> <number of agents> -bounded <max-bound>
\end{verbatim}
The output should be something of the form
\begin{verbatim}
fromList [("0_a":True)..("0_p1":True)...]
\end{verbatim}
This corresponds to the solution of the formula and the symbols present in each
state. The number before "\_" indicates the action taken at that step.
\section{Some examples}
\label{sec:org3624e08}
\begin{verbatim}
./Main -modelCheck t8States2Agents1.txt "(@_1(p1))=>~(@_2(q1))" 2
True
./Main -modelCheck t8States2Agents1.txt "@_2(X(c_1(p2)))" 2
True
./Main -modelCheck t8States2Agents4.txt "(@_1(c_2(q1)))=>(@_1(p1))" 2
False
./Main -modelCheck t8States2Agents4.txt "@_1(F((p1)/\\(p2)))" 2 -bounded 0
True
./Main -modelCheck t8States2Agents4.txt "@_1(F((p1)/\\(p2)))" 2 -bounded 2
False
./Main -oneCounterExample t8States2Agents4.txt "@_1(F((p1)/\\(p2)))" 2 -bounded 2
Just (fromList [(0_"a",True),(0_"b",False),(0_"c",False),(0_p1,False),(0_p2,False),(0_q1,False),(1_"a",False),(1_"b",True),(1_"c",False),(1_p1,True),(1_p2,False),(1_q1,False)],1)
\end{verbatim}
\end{document} | {
"alphanum_fraction": 0.7477968947,
"avg_line_length": 29.7875,
"ext": "tex",
"hexsha": "797fe68a9a4d4ca4a8095011c1aa2ebb722c0024",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a89a10d9b9be90bb7703d86b9b1b6537dec78d0a",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "AugustoPeres/DTL-Model-Checking",
"max_forks_repo_path": "dtl-model-checking/instructions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a89a10d9b9be90bb7703d86b9b1b6537dec78d0a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "AugustoPeres/DTL-Model-Checking",
"max_issues_repo_path": "dtl-model-checking/instructions.tex",
"max_line_length": 178,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a89a10d9b9be90bb7703d86b9b1b6537dec78d0a",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "AugustoPeres/DTL-Model-Checking",
"max_stars_repo_path": "dtl-model-checking/instructions.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1400,
"size": 4766
} |
\documentclass[svgnames]{article}
\usepackage{url}
\usepackage{listings}
\usepackage{color}
\usepackage{nomencl}
\usepackage{amssymb,mathtools}
\usepackage[ruled,vlined]{algorithm2e}
\usepackage{algpseudocode}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{afterpage}
\usepackage[english]{babel}
\usepackage{booktabs}
\usepackage[singlelinecheck=false]{caption}
% \usepackage{color}
\usepackage{float}
\usepackage{graphicx}
\usepackage{here}
% \usepackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{listings}
\usepackage{mathrsfs}
\usepackage{ragged2e}
\usepackage{subfig}
\usepackage{xcolor}
% \usepackage{math}
% \usepackage{pgfplots}^^M
% \usepackage{movie15}
\usepackage{pgfplots}
\usepackage{pgfplotstable}
\usepackage{physics}
% \usepackage[colorlinks=true,linkcolor=blue,pdfstartview=FitH]{hyperref}
\selectlanguage{english}
\pagestyle{plain}
% \definecolor{dkgreen}{rgb}{0,0.6,0}
% \definecolor{gray}{rgb}{0.5,0.5,0.5}
% \definecolor{mauve}{rgb}{0.58,0,0.82}
% \definecolor{carrotorange}{rgb}{0.93, 0.57, 0.13}
% \lstset{language=C,
% frame=tb,
% backgroundcolor=\color{white},
% commentstyle=\color{dkgreen},
% keywordstyle=\color{blue},
% numbers=none,
% stringstyle=\color{carrotorange},
% basicstyle=\footnotesize,
% breakatwhitespace=false,
% breaklines=true,
% captionpos=b,
% keepspaces=true,
% numbersep=5pt,
% showspaces=false,
% showstringspaces=false,
% showtabs=false,
% tabsize=2,
% language=C
% }
\textheight=21cm
\textwidth=17cm
%\topmargin=-1cm
\oddsidemargin=0cm
\parindent=0mm
\pagestyle{plain}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\global\let\date\relax
\newcounter{unomenos}
\setcounter{unomenos}{\number\year}
\addtocounter{unomenos}{-1}
\stepcounter{unomenos}
\makeatletter
\gdef\@date{ Course \arabic{unomenos}}
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%% definitions and macros %%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\SF}[1]{\mathsf{#1}}
\newcommand{\ER}{Erd\H{o}s-R\'enyi }
\newcommand{\RGMER}{\mathcal{G}_{\mathrm{ER}}}
\newcommand{\sd}{\text{sd}}
\newcommand{\Dr}{\text{Dr}}
\newcommand{\Indeg}{\text{Indeg}}
\newcommand{\outd}{\text{Outdeg}}
\newcommand{\Ker}{\text{Ker}}
\newcommand{\ime}{\text{Im}}
% \newcommand{\Im}{\text{Im}}
\newcommand{\CC}{C\nolinebreak\hspace{-.05em}\raisebox{.4ex}{\tiny\bf +}\nolinebreak\hspace{-.10em}\raisebox{.4ex}{\tiny\bf +}}
\def\CC{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}}
\newcommand\tab[1][1cm]{\hspace*{#1}}
% \newcommand{\Ger}{$\mathcal{G}_{ER}$}
\SetKwProg{Init}{init}{}{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{-1in}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{UU_logo.jpg}
\end{center}
\end{figure}
DEPARTMENT OF MATHEMATICS \\
\vspace*{0.15in}
Random Graph Models of a neocortical column in a \\rat's brain and their topological statistical distributions\\
\vspace*{0.3in}
\begin{large}
KIERAN BARBER\\
\end{large}
\vspace*{0.1in}
\rule{80mm}{0.1mm}\\
\vspace*{0.1in}
\begin{large}
Advisor \\
Raazesh Sainudiin
\end{large}
\end{center}
\end{titlepage}
\tableofcontents
\newpage
\section{Abstract}
Over the years, much work has been done to understand the driving forces behind how the brain functions. Recently, with the advent of cloud computing and the ability to store large volumes of data, reconstructing neocortical columns from the brain has become not only possible, but much easier. The Blue Brain Project (BBP) created multiple digital reconstructions of a rat's neocortical microcircuitry that closely resemble its biological features, including the numbers, types and densities of neurons and their synaptic connectivity, based on anatomical and physiological data obtained from experiments. What followed were sets of microconnectomes, represented as directed graphs. After applying stimuli to the microconnectomes, realisations of connectivity were observed. One of these observations has been identified as the Bio-M microconnectome (MC). To determine the complexity of the Bio-M MC, we use topological statistics on the local level as well as the global level. We study five models, each with an added level of complexity, ranging from the simplest random graph of \ER to models that account for known biological characteristics such as distance-dependence and neocortical layers. From our models, we observed a lack of complexity in the connectivity at the local level; however, we found higher levels of connectivity on the global level than first expected. All code needed to process the data and implement the models is made available \cite{Barber_Random_Graph_Models} under a permissive license.
\newpage
\input{tex_files/glossary}
\newpage
\input{tex_files/introduction}
\newpage
\input{tex_files/mathematicalPreliminaries}
\newpage
\input{tex_files/theConnectome}
\newpage
\input{tex_files/modelsForTheConnectome}
\newpage
\input{tex_files/results}
\newpage
\input{tex_files/conclusion}
\newpage
\section{Appendix}
% \subsection{Python Scripts}
% \url{https://github.com/lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork}
\bibliographystyle{plain}
\bibliography{references}
\end{document}
| {
"alphanum_fraction": 0.7123337584,
"avg_line_length": 34.0931677019,
"ext": "tex",
"hexsha": "6b4d486462c8eea1bd9f00b7f3c6d0b126bfba8a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork",
"max_forks_repo_path": "2020UUMScKieranBarber/main.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045",
"max_issues_repo_issues_event_max_datetime": "2021-02-12T15:51:42.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-12T15:21:35.000Z",
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork",
"max_issues_repo_path": "2020UUMScKieranBarber/main.tex",
"max_line_length": 1590,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork",
"max_stars_repo_path": "2020UUMScKieranBarber/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1578,
"size": 5489
} |
\chapter{VPN in ASA} | {
"alphanum_fraction": 0.75,
"avg_line_length": 20,
"ext": "tex",
"hexsha": "4a2c825ce0de8a3b4e654e551b8bdcb6804f0f4d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1234574bcba3c206263091ef90191fd9c41c624f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "buiquanghuy23103/CCNA",
"max_forks_repo_path": "Security/10-VPN-in-ASA.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1234574bcba3c206263091ef90191fd9c41c624f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "buiquanghuy23103/CCNA",
"max_issues_repo_path": "Security/10-VPN-in-ASA.tex",
"max_line_length": 20,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1234574bcba3c206263091ef90191fd9c41c624f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "buiquanghuy23103/CCNA",
"max_stars_repo_path": "Security/10-VPN-in-ASA.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 20
} |
\section{Results} \label{sec:results:main}
In the following subsections, the results for the changes to hicprediction introduced in
\autoref{sec:improve:main} will be presented and interpreted.
Furthermore, \autoref{sec:res:compare} provides a direct comparison
of the (improved) results from hicprediction to the ones from HiC-Reg.
\subsection{Avoiding training samples without protein data} \label{sec:res:removeEmptySamples}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\resizebox{0.5\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_emptyVsNonempty_narrowPeak.pdf_tex}}
\caption{Pearson correlation before/after removing samples}
\label{fig:results:emptyVsNonempty:Pearson}
\end{wrapfigure}
The effect of removing empty samples from the datasets was clearly visible,
both in terms of Pearson correlation and matrix plots,
see \autoref{fig:results:emptyVsNonempty:Pearson} and \ref{fig:results:pk:noEmpty:5k}.
It is obvious that inverted triangles and gradient-style predictions were removed by discarding empty samples,
but the prediction is also strongly fragmented, so that structures like TADs and loops are hardly visible in the plots.
For example, in GM12878 and K562, only about 25\% of the original \num{3227900} samples remained in the datasets,
which means that 75\% of all pairs in the predicted Hi-C matrix for K562 had the default interaction count zero.
This effect would be even more striking for the explanation example from \autoref{sec:improve:emptySamples},
where only 6 of \num{79900} samples would remain in the dataset.
In this (staged) case, the algorithm then just ``learns'' the relation
``sample exists $\Rightarrow$ read count is 100'',
without any of the features (start, end, window, distance) being considered as a decision criterion.
\begin{figure}[h]
\centering
\scriptsize
\import{figures/}{trianglesGradientsAfter2.pdf_tex}
\caption{Example prediction, without empty samples}
\label{fig:results:pk:noEmpty:5k}
\end{figure}
\subsection{Dealing with sparse input data} \label{sec:res:dealing_with_sparse_input}
Filtering fragmented output matrices like the ones obtained in \autoref{sec:res:removeEmptySamples}
was unsuccessful, since no parametrization of the 2D-Gaussian filters proved useful -- either the gaps were not
filled, or the structures in the matrix were
blurred too much, or both, see \autoref{fig:results:pk:smoothened:5k}.
Other kernel types and more sophisticated image-inpainting methods \cite{Elharrouss2019}
might do better, but choosing the appropriate ones and adapting them to the problem at hand would be
laborious and beyond the scope of this master's project.
Additionally, and even more important, it is not clear whether such sparse matrices as depicted in
\autoref{fig:results:pk:noEmpty:5k} would be a good starting point for suchlike approaches.
The missing regions can be several \num{100000}s of base pairs wide (e.\,g. \autoref{fig:results:pk:noEmpty:5k}, upper right),
and both the low Pearson correlation (\autoref{fig:results:emptyVsNonempty:Pearson}) and the lack of structure in the predicted matrices
(\autoref{fig:results:pk:noEmpty:5k}) suggest there might simply not be enough information
for imputing missing values, whatever the approach.
Filtering the inputs with one-dimensional Gaussian filters was more successful.
While it seems impossible to fill long stretches of empty bins by Gaussian smoothing alone, some structures became clearly visible
and the Pearson correlation also improved slightly, \autoref{fig:results:pk:sigma4p0:5k} and \ref{fig:results:pk:Pearson:sigma4p0:5k}.
Apart from the gaps remaining in the prediction, smoothing the proteins also has the disadvantage of blurring and shifting the structures in
the predictions compared to the target matrices.
This is typical for Gauss filters, but probably acceptable in this case.
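To make the input smoothing concrete, the following minimal Python sketch shows how a binned
protein signal can be smoothed with a one-dimensional Gaussian filter from SciPy; the toy
signal values, the function name and the choice of $\sigma=4.0$ are illustrative assumptions
and not the exact hicprediction implementation.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_protein_signal(binned_signal, sigma=4.0):
    # Smooth a per-bin protein signal with a 1D Gaussian kernel.
    # sigma is given in bins; 4.0 mirrors the setting referenced above.
    return gaussian_filter1d(np.asarray(binned_signal, dtype=float),
                             sigma=sigma)

# toy example: a sparse, peak-like signal with many empty bins
signal = [0, 0, 12, 0, 0, 0, 7, 0, 0, 0, 0, 3, 0, 0]
print(smooth_protein_signal(signal))
\end{verbatim}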
In predictions from bigwig files, there were far fewer empty bins and no inverted triangles or gradient-style regions,
\autoref{fig:results:bigwig:allsamples} and \autoref{fig:app:bw:others:5k} to \ref{fig:app:Pearson:furtherCellLines}
on page \pageref{fig:app:bw:others:5k}\,ff.
Due to the higher peak density in the bigwig files, the effect of leaving out
empty samples was sometimes hardly recognizable, compare \autoref{fig:results:bigwig:allsamples} and \ref{fig:results:bigwig:noemptysamples}.
The Pearson correlations were also similar for both approaches, and both were better than the ones from using peak files,
\autoref{fig:results:bigwig:Pearson}.
However, the effect of removing empty samples was found to be dependent on cell line, chromosome and genomic position.
Some chromosomes, e.\,g. 21, have large contiguous regions which seem to be devoid of any protein signal,
probably for biological reasons.
For such regions, removing empty samples was significantly better, \autoref{fig:app:GM12878:K562:chr21:25k:goodToRemove},
but detrimental for other regions of the same chromosome, see \autoref{fig:app:GM12878:K562:chr21:25k:badToRemove}.
This was unfortunately not reflected in the Pearson correlation, \autoref{fig:app:Pearson:GM12878:K562:chr21:25kb}.
Note that the area under the correlation curve is up to 0.7, which is a surprisingly high value considering
the modest look of the matrix plots, \autoref{fig:app:GM12878:K562:chr21:25k:goodToRemove}
and \ref{fig:app:GM12878:K562:chr21:25k:badToRemove}.
With regard to background signal and sequencing noise in the bigwig files,
the results from chromosome 17 suggest that the algorithm can at least partially
learn which ChIP-seq read count values are indicative of high interaction counts and which are not.
Beyond that, using a threshold to set signal values below a certain level to zero is possible,
but finding a value which works well across all cell lines, chromosomes and proteins seems challenging.
Too large thresholds again lead to sparsity, while too small thresholds
have no noteworthy influence on the result.
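Such a cutoff would amount to a one-line operation on the binned signal; the threshold below is purely illustrative:
\begin{verbatim}
# Illustrative sketch: zero out signal values below a fixed threshold.
import numpy as np

signal = np.random.rand(10000) * 10.0   # placeholder bigwig-derived signal
threshold = 1.0                         # illustrative value only
signal[signal < threshold] = 0.0
\end{verbatim}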
Avoiding empty bins in the inputs by increasing the bin size of the input matrix to 25\,kbp
worked surprisingly well, even without further measures, \autoref{fig:results:pk:allsamples:25k}.
Especially larger structures like the ones from 12...\SI{13}{\mega\bp} and 13...\SI{14}{\mega\bp} were usually at least indicated.
However, gradient-style regions and inverted triangles still occurred, albeit to a lesser extent,
since the underlying problem remains the same, see e.\,g. the regions around 13 and \SI{14.5}{\mega\bp}
in \autoref{fig:results:pk:allsamples:25k}.
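For clarity, coarsening a binned input track from 5\,kbp to 25\,kbp simply aggregates groups of five
consecutive bins, as in the following illustrative sketch (placeholder data, not the actual hicprediction code):
\begin{verbatim}
# Illustrative sketch: aggregate a 5 kbp binned track into 25 kbp bins
# by summing groups of five consecutive bins.
import numpy as np

track_5kb = np.random.rand(10000)            # placeholder 5 kbp track
factor = 25000 // 5000                       # = 5
usable = len(track_5kb) // factor * factor   # drop an incomplete last bin
track_25kb = track_5kb[:usable].reshape(-1, factor).sum(axis=1)
\end{verbatim}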
As expected, discarding empty samples led to less fragmentation with 25\,kbp than with 5\,kbp,
because fewer input bins were empty; the result was still not useful, \autoref{fig:results:pk:noemptysamples:25k}.
Again, it was not possible to fill in missing values with a Gauss filter without blurring the
matrix too much, \autoref{fig:results:pk:smoothened:25k}.
For 25\,kbp resolution, too, smoothing the proteins worked better than smoothing the predicted matrix.
Larger gaps could again not be closed, but apart from that, structures were often easier to identify than
without smoothing, compare \autoref{fig:results:pk:allsamples:25k} and \ref{fig:results:pk:proteinsSmoothened:25k}.
Predictions using bigwig files were also performed,
but at 25\,kbp resolution, it was even more difficult to say whether these were better or worse than the ones from peak files,
compare \autoref{fig:results:bigwig:allsamples:25k} and \ref{fig:results:pk:allsamples:25k}.
As already mentioned above, it cannot generally be said whether leaving out or keeping empty samples is better.
In the example matrix snippet from chromosome 17, it was again difficult to determine differences,
\autoref{fig:results:bigwig:allsamples:25k}, \ref{fig:results:bigwig:noemptysamples:25k}
and \ref{fig:results:bigwig:Pearson25k}, but this might not hold for all regions.
In terms of Pearson correlation, all predictions from 25\,kbp resolution were better
than the best one from \SI{5}{\kilo\bp}, \autoref{fig:results:bigwig:Pearson25k}, and structures indeed sometimes seemed
more recognizable in the plots for \SI{25}{\kilo\bp}.
Interestingly, the fragmented prediction from peak files without empty samples, \autoref{fig:results:pk:noemptysamples:25k},
showed one of the best Pearson correlations obtained thus far.
This is probably because interaction counts of discarded regions are set to zero,
which seems to be correct in many cases, especially in the distance range [0.4...1.0]\,Mbp.
Independent of the root cause, this example again underscores that relying on the Pearson correlation alone can be misleading,
because the corresponding matrices can be useless in practice.
For \SI{25}{\kilo\bp}, predictions were also performed using bigwig- and peak files simultaneously,
thus increasing the number of features in the samples from $3\cdot12+1=37$ to $3\cdot24+1=73$ ($2\times12$ start-, $2\times12$ window- and $2\times12$ end features plus
distance).
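Conceptually, this combination simply concatenates the two per-sample feature blocks and appends the
shared distance column once, as sketched below (illustrative Python with placeholder data, not the actual dataset code):
\begin{verbatim}
# Illustrative sketch: combine bigwig- and peak-derived features per sample.
import numpy as np

n = 1000                                   # placeholder number of samples
X_bigwig = np.random.rand(n, 36)           # 12 proteins x (start, window, end)
X_peaks  = np.random.rand(n, 36)
distance = np.random.rand(n, 1)            # shared distance feature
X_combined = np.hstack([X_bigwig, X_peaks, distance])   # shape (n, 73)
\end{verbatim}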
While the Pearson correlations were mostly better than with bigwig files alone, especially when keeping empty
samples, \autoref{fig:app:Pearson:GM12878:K562:chr17:25kb:combi} and \ref{fig:app:Pearson:GM12878:K562:chr21:25kb:combi},
not much improvement was visible in the predicted matrices, \autoref{fig:results:combined:chr17:25k}, \ref{fig:app:GM12878:K562:chr21:25k:combiFew} and
\ref{fig:app:GM12878:K562:chr21:25k:combiMany}.
Feature importance plots show that the influence of the start- and end features from the peak files was generally negligible,
but the window features were often considered more important than most features from
bigwig files, \autoref{fig:chr17:GM12878:K562:combined:featImport} and \ref{fig:chr21:GM12878:K562:combined:featImport}.
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_5k_sigma5p0.pdf_tex}
\caption{Example prediction from peaks, no empty samples, 5\,kbp, matrix filtered $\sigma=5.0$}
\label{fig:results:pk:smoothened:5k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_5k_smooth4p0.pdf_tex}
\caption{Example prediction from peaks, no empty samples, 5\,kbp, proteins filtered $\sigma=4.0$}
\label{fig:results:pk:sigma4p0:5k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_5k_smooth4p0.pdf_tex}
\caption{Pearson correlation, 5\,kbp, proteins filtered $\sigma=4.0$}
\label{fig:results:pk:Pearson:sigma4p0:5k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_bigwig_GM12878-on-K562.pdf_tex}
\caption{Example prediction from bigwig, all samples}
\label{fig:results:bigwig:allsamples}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_bigwig_removeEmpty_GM12878-on-K562.pdf_tex}
\caption{Example prediction from bigwig, no empty samples}
\label{fig:results:bigwig:noemptysamples}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{pearson_emptyVsNonempty_bigwig-GM12878-on-K562.pdf_tex}
\caption{Pearson correlation before/after removing samples, 5\,kbp}
\label{fig:results:bigwig:Pearson}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_25k_noRemoveEmpty.pdf_tex}
\caption{Example prediction from narrow-/broadPeak, 25\,kbp, all samples}
\label{fig:results:pk:allsamples:25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_25k_removeEmpty.pdf_tex}
\caption{Example prediction from peaks, no empty samples, 25\,kbp}
\label{fig:results:pk:noemptysamples:25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_25k_smoothMatrix1p0.pdf_tex}
\caption{Example prediction from peaks, no empty samples, 25\,kbp, matrix filtered $\sigma=1.0$}
\label{fig:results:pk:smoothened:25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_pk_GM12878-on-K562_25k_smoothProteins1p0.pdf_tex}
\caption{Example prediction from peaks, no empty samples, \SI{25}{\kilo\bp}, proteins filtered $\sigma=1.0$}
\label{fig:results:pk:proteinsSmoothened:25k}
\end{figure}
\begin{figure}[hp]
\tiny
\centering
\import{figures/}{GM12878-on-K562_chr17_featImportances_combined.pdf_tex}
\caption{Feature importances, GM12878 on K562, chr17, bigwig- and peak files combined}
\label{fig:chr17:GM12878:K562:combined:featImport}
\end{figure}
\begin{figure}[hp]
\tiny
\centering
\import{figures/}{GM12878-on-K562_chr21_featImportances_combined.pdf_tex}
\caption{Feature importances, GM12878 on K562, chr21, bigwig- and peak files combined}
\label{fig:chr21:GM12878:K562:combined:featImport}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15-bw-GM12878-on-K562-25k_noRemoveEmpty.pdf_tex}
\caption{Example prediction from bigwig, \SI{25}{\kilo\bp}, all samples}
\label{fig:results:bigwig:allsamples:25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_bw_GM12878-on-K562_25k_removeEmpty.pdf_tex}
\caption{Example prediction from bigwig, no empty samples, \SI{25}{\kilo\bp}}
\label{fig:results:bigwig:noemptysamples:25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_25k.pdf_tex}
\caption{Pearson correlation comparison}
\label{fig:results:bigwig:Pearson25k}
\end{figure}
\begin{figure}[hp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_GM12878-on-K562_25kb_combi.pdf_tex}
\caption{Example combined prediction, \SI{25}{\kilo\bp}}
\label{fig:results:combined:chr17:25k}
\end{figure}
\subsection{Scaling protein signal values and Hi-C interaction counts} \label{sec:res:normalization}
For three of the four tested value ranges for Hi-C interaction counts, [0...10], [0...100] and [0...1000],
predicted and (scaled) target value were in the desired order of magnitude and
the Pearson correlation was very similar to the status quo,
see \autoref{fig:app:Pearson:GM12878:K562:chr17:5kb:normalized} and
\ref{fig:app:GM12878:K562:chr17:normalizedMatrices}.
For value range [0...1], the Pearson correlation was notably worse than without scaling,
at least for distances greater than \SI{0.2}{\mega\bp}.
No obvious reason for this could be found in the plots of the predicted matrices,
but the usual ``logarithm plus 1''-representation is also not well suited
for matrices with interaction counts generally smaller than 1.
The target matrices, too, do not look very informative when scaled and plotted this way,
which is reason enough to avoid the [0...1] range.
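For reference, the scaling itself is a plain min-max transformation to the chosen target range,
as in the following illustrative sketch (placeholder data, not the actual hicprediction code):
\begin{verbatim}
# Illustrative sketch: min-max scaling of interaction counts to [0, upper].
import numpy as np

def scale_to_range(values, upper):
    vmin, vmax = values.min(), values.max()
    if vmax == vmin:                      # degenerate case: constant input
        return np.zeros_like(values)
    return (values - vmin) / (vmax - vmin) * upper

counts = np.random.randint(0, 500, size=10000).astype(float)  # placeholder counts
scaled = scale_to_range(counts, upper=1000.0)
\end{verbatim}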
\begin{figure}[hb]
\centering
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_normalized.pdf_tex}}
\caption{matrices scaled}
\label{fig:app:Pearson:GM12878:K562:chr17:5kb:normalized}
\end{subfigure}\hfill
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_protsNormalized.pdf_tex}}
\caption{proteins scaled}
\label{fig:app:Pearson:GM12878:K562:chr17:5kb:protsNormalized}
\end{subfigure}
\caption{Pearson correlations, matrices / proteins scaled}
\label{fig:res:Pearson:matrices:proteins:normalized:5kb}
\end{figure}
Feature scaling yielded the expected outcome in terms of Pearson correlations,
\autoref{fig:app:Pearson:GM12878:K562:chr17:5kb:protsNormalized}, i.\,e. all value ranges
had similar performance and were no better or worse than without scaling.
In terms of Hi-C matrices, the results were surprisingly worse than without scaling,
with errors in the predictions e.\,g. around 13.5 and \SI{14.0}{\mega\bp},
see \autoref{fig:app:GM12878:K562:chr17:normalizedProteins}.
The comparatively bad outcome might partially be due to the small set-to-zero threshold
which was applied after adjusting the value range, \autoref{sec:methods:normalization}.
As this is a non-continuous transformation of the value range, thresholding could in principle affect the splitting.
However, it would be very surprising if this was the main cause,
as only very few values fell below the threshold for each protein;
for example, in value range [0...10], at most 53, or 0.3\% of \num{16240} samples from a single protein were set to zero.
Additionally, the threshold hypothesis does not explain the errors in predictions with value range [0...1],
where the threshold was not applied.
On the other hand, it is also hard to imagine that
the errors in the predictions occurred by chance, as they were approximately at the same positions across all
concerned predictions.
To investigate the issue further, predictions were also made at \SI{25}{\kilo\bp} resolution.
Here, too, the outcome was partially unexpected.
While the Pearson correlations indicated a positive influence of scaling for all but
the division-by-mean method, \autoref{fig:app:Pearson:GM12878:K562:chr17:protNorms:25kb},
the plots of the matrices looked very similar to the non-scaled state,
\autoref{fig:app:GM12878:K562:chr17:normalizedProteins:25k}.
The reason for this behavior could unfortunately not be determined.
Because scaling input features does not seem very common for random forests, only a single paper was
found on the topic.
Here, Dinc et al. investigated the influence of value-range (min-max) scaling and z-score normalization\footnote{
subtract mean and divide by standard deviation}
on the performance of several classifiers for protein crystallization images \cite{Dinc2014}.
For the random forest classifier, some improvement was recorded for z-score normalization,
but only very little change for value-range scaling.
However, the reasons for the improvement were not investigated.
The previously mentioned study by Strobl and colleagues \cite{Strobl2007} does not cover feature scaling
directly, but found that features with different value ranges can cause bias in variable
selection, especially when used in combination with categorical variables and bootstrapping with replacement.
Within this master project, it could not be clarified what exactly caused the
unexpected results when scaling feature value ranges.
However, initial tests showed an interesting relation between the
floating point precision of the feature values and the predicted matrices,
which could be investigated in future studies.
Currently, hicprediction uses 32-bit floating point numbers rounded
to 6 digits after the decimal point -- and this may be insufficient
for small value ranges, considering the strongly nonuniform interaction count distribution,
\autoref{fig:GM12878:K562:interactionCountDistribution}.
For example, when using value range [0...1] and \SI{5}{\kilo\bp} matrix resolution,
there are \num{100000} different floating point numbers in the
interval [0,~0.100000) after rounding to six digits after the decimal point -- but more than half of the approximately 3.2 million samples have interaction
counts lying in this interval.
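The resulting loss of resolution is easy to demonstrate; in the following illustrative sketch,
the uniform random values merely stand in for scaled interaction counts:
\begin{verbatim}
# Illustrative sketch: rounding to six decimals leaves at most 100000
# distinct values in [0, 0.1), so millions of samples collide.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.random(3_200_000) * 0.1             # stand-in for scaled counts
rounded = np.round(counts.astype(np.float32), 6)
print(np.unique(rounded).size)                   # close to 100000
\end{verbatim}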
Since protein scaling yielded unfavorable results,
the remaining computations were performed without it.
\subsection{Concatenating datasets and predictions from different cell lines}\label{sec:res:concat}
In order to combine predictions from different cell lines on K562,
the single predictions were computed first, \autoref{fig:app:all:K562:chr17:singleVsJoint:5k},
top 4 panels.
It is obvious that not all cell lines are equally well suited for predicting K562,
which is probably due to different biological functions.
Although the prediction from HUVEC, for example, showed comparatively few structures,
the averaged prediction from GM12878, HMEC, HUVEC and NHEK on K562 was still acceptable,
partially even better than the best single-cell-line prediction,
\autoref{fig:res:Pearson:singleVsTogether:K562:chr17:5k} and
\ref{fig:app:all:K562:chr17:singleVsJoint:5k}.
Both the Pearson correlation and the plot of the predicted matrix look smoother than the ones
from single cell lines, which is due to the averaging process and thus not surprising.
The structures in the matrix plot were still distinguishable and not significantly worse
than the ones from single-cell-line predictions.
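The averaging itself is straightforward once the individual predictions are available as matrices;
a minimal sketch with placeholder data (not the actual hicprediction code) could look like this:
\begin{verbatim}
# Illustrative sketch: average predicted matrices from several training
# cell lines into one joint prediction for the target cell line.
import numpy as np

single_predictions = [np.random.rand(600, 600) for _ in range(4)]  # placeholders
joint_prediction = np.mean(single_predictions, axis=0)
\end{verbatim}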
The prediction from concatenated datasets did not perform as well as the averaged single predictions,
\autoref{fig:res:Pearson:singleVsTogether:K562:chr17:5k} and \ref{fig:app:all:K562:chr17:singleVsJoint:5k}, third panel from bottom,
but still better than the single predictions from ``less suitable'' cell lines.
It should, however, be noted that the datasets were concatenated without prior feature scaling, due
to the problems mentioned in \autoref{sec:res:normalization} above.
This is probably suboptimal, because the same features can have significantly different value ranges in different cell lines,
see \autoref{fig:app:prots:valueRangeBefore}.
The predictions from concatenated datasets might thus improve once the
existing feature scaling problems have been investigated and resolved.
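For reference, the concatenation approach simply stacks the per-cell-line training sets into one
table and trains a single model on it; the sketch below uses placeholder data and illustrative column names:
\begin{verbatim}
# Illustrative sketch: concatenate per-cell-line training sets with
# identical feature columns before fitting one regressor on all of them.
import numpy as np
import pandas as pd

cols = [f"f{i}" for i in range(37)] + ["target"]   # placeholder columns
datasets = [pd.DataFrame(np.random.rand(100, 38), columns=cols) for _ in range(4)]
combined = pd.concat(datasets, ignore_index=True)
\end{verbatim}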
\begin{figure}
\centering
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_allSingle-on-K562_chr17_5k.pdf_tex}}
\caption{single predictions}
\label{fig:res:Pearson:allSingle:K562:chr17:5kb}
\end{subfigure}\hfill%
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_allTogether-on-K562_chr17_5k.pdf_tex}}
\caption{joint predictions}
\label{fig:res:Pearson:allTogether:K562:chr17:5kb}
\end{subfigure}
\caption{Pearson correlations, single vs. joint predictions on K562}
\label{fig:res:Pearson:singleVsTogether:K562:chr17:5k}
\end{figure}
\subsection{Emphasizing certain samples} \label{sec:res:emphasizing}
After 200 runs of the tree-structured Parzen estimator, see \autoref{sec:methods:emphasizing},
the supposedly best parameters for weighting samples according to interaction count were determined as
$lower=292.635$, $upper=857.073$ and $k=14.625$.
This means that all samples with an interaction count in the range [292.635...857.073] were given integer weights such that
the sum of the weighted samples was approximately 14.625 times the weight sum (=number) of the unweighted samples.
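One way to realize such a weighting, sketched below with placeholder data, is to give every in-range sample
the same integer weight and pass the weight vector to the regressor's \texttt{fit} call via
\texttt{sample\_weight}; the exact scheme used in hicprediction may differ in its details, and the
$lower$, $upper$ and $k$ values are simply the ones quoted above:
\begin{verbatim}
# Illustrative sketch: integer weights for samples whose interaction count
# lies in [lower, upper]; their total weight is about k times the number
# of unweighted samples. In scikit-learn: fit(X, y, sample_weight=w).
import numpy as np

def interaction_count_weights(y, lower, upper, k):
    w = np.ones(len(y))
    in_range = (y >= lower) & (y <= upper)
    n_in, n_out = in_range.sum(), (~in_range).sum()
    if n_in > 0:
        w[in_range] = max(np.rint(k * n_out / n_in), 1.0)
    return w

y = np.random.rand(10000) * 1000.0   # placeholder interaction counts
w = interaction_count_weights(y, lower=292.635, upper=857.073, k=14.625)
\end{verbatim}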
Unfortunately, there were only six samples in the given range,
so the effect of interaction-count-based weighting on the predictions was quite small.
Both the resulting Pearson correlation and the matrix plots looked fairly similar for predictions with weighted samples
and usual predictions, \autoref{fig:res:Pearson:K562:chr17:5kb:weightingInteractionCts}
and \ref{fig:app:GM12878:K562:chr17:interactionCountWeighting:matrices}.
With another 200 runs of the estimator, the supposedly best parameters for weighting samples according to CTCF signal value
were determined as $lower=228.8$, $upper=670.112$ and $k=4.241$.
In this case, there were \num{285242} samples with CTCF start-, end- or window-feature in the given range,
so an influence of CTCF-based weighting on the results was to be expected, which was indeed reflected in the feature importances,
\autoref{fig:app:GM12878:K562:chr17:CTCFWeighting:importances}.
Interestingly, only the CTCF-window feature gained importance, while start- and end-feature
remained at low levels.
However, the Pearson correlation of the prediction from weighted samples was actually worse than before,
\autoref{fig:res:Pearson:K562:chr17:5kb:weightingCTCF},
and the matrix plots showed at least no significant improvement, \autoref{fig:app:GM12878:K562:chr17:CTCFWeighting:matrices}.
Removing all samples with a distance below \SI{5000}{\bp} surprisingly improved and also slightly smoothed the Pearson correlation,
\autoref{fig:res:Pearson:K562:chr17:5kb:noDiags}.
The matrix plots also clearly differed from the standard predictions, but it is difficult to say whether they are better,
\autoref{fig:app:GM12878:K562:chr17:noDiagonal}.
While some predicted structures were more distinct, there were also some which do not seem to match any real interacting regions,
and the contrast between interacting- and non-interacting regions also seemed lower than in the standard predictions.
TAD-based weighting of samples did not change the prediction results much.
Both the Pearson correlation and the resulting matrices were quite similar to the ones from standard predictions,
\autoref{fig:res:Pearson:K562:chr17:5kb:weightingTADs} and \ref{fig:app:GM12878:K562:chr17:TadWeighting}.
Note that for weighting factor $k=0.1$ the results were identical to the status quo,
because it turned out that $\frac{\sum_{weightedSamples}weight}{\sum_{unweightedSamples}weight}\approx 0.1$, so no weighting was performed at all
due to rounding (\autoref{sec:methods:emphasizing}).
\begin{figure}
\centering
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_chr17_readCountWeighting.pdf_tex}}
\caption{weighting according to interaction counts}
\label{fig:res:Pearson:K562:chr17:5kb:weightingInteractionCts}
\end{subfigure}\hfill%
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_chr17_CTCFweighting.pdf_tex}}
\caption{weighting according to CTCF signal value}
\label{fig:res:Pearson:K562:chr17:5kb:weightingCTCF}
\end{subfigure}
\vspace{5mm}
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_chr17_noDiagonal.pdf_tex}}
\caption{removing samples with distance $\leq$ \SI{5}{\kilo\bp}}
\label{fig:res:Pearson:K562:chr17:5kb:noDiags}
\end{subfigure}
\begin{subfigure}{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_5k_TADweighting.pdf_tex}}
\caption{weighting according to TADs}
\label{fig:res:Pearson:K562:chr17:5kb:weightingTADs}
\end{subfigure}
\caption{Pearson correlations for different sample weighting approaches}
\label{fig:res:Pearson:sampleWeighting:5k}
\end{figure}
\subsection{Replacing random forest by extra tree regression} \label{sec:res:extratrees}
Predictions using the extra trees algorithm were generally similar to predictions from random forests.
While Pearson correlations were in favor of the random forest algorithm, \autoref{fig:res:Pearson:extratreesVsForests},
the matrix plots looked fairly similar, \autoref{fig:app:GM12878:K562:chr17:forestVsExtratrees}.
It remains for future research whether this holds for other cell lines and chromosomes as well.
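Assuming a scikit-learn-based implementation, the swap only exchanges the estimator class, since
\texttt{RandomForestRegressor} and \texttt{ExtraTreesRegressor} share the same interface; the sketch
below uses synthetic data and illustrative parameters, not the actual hicprediction setup:
\begin{verbatim}
# Illustrative sketch: random forest vs. extra trees with identical interfaces.
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=37, random_state=0)  # synthetic data
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0).fit(X, y)
et = ExtraTreesRegressor(n_estimators=100, n_jobs=-1, random_state=0).fit(X, y)
pred_rf, pred_et = rf.predict(X), et.predict(X)
\end{verbatim}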
\begin{figure}[hb]
\centering
\scriptsize
\import{figures/}{pearson_GM12878-on-K562_extraTrees_vs_Forests.pdf_tex}
\caption{Pearson correlations extra trees vs. random forest}
\label{fig:res:Pearson:extratreesVsForests}
\end{figure}
\subsection{Comparison between HiC-Reg and hicprediction} \label{sec:res:compare}
Looking at the matrices obtained with hicprediction thus far, it was obvious that -- despite some improvements compared to the original state --
the predictions were still inferior to the ones published for HiC-Reg.
\autoref{fig:res:HicregVsHicprediction_supplementary} shows a direct juxtaposition between a cross-cell prediction from HiC-Reg --
reconstructed from data published along with the article \cite{Zhang2019} --
and the corresponding prediction from hicprediction.
In terms of Pearson correlation, the results from hicprediction and HiC-Reg were very different,
see \autoref{fig:res:Pearson:K562:chr17:5kb:HicregFromPublication}.
While hicprediction was sometimes better at smaller distances, in the given test case particularly obvious below \SI{0.5}{\mega\bp},
the published results from HiC-Reg were always -- not only in the test case plotted in \autoref{fig:res:Pearson:K562:chr17:5kb:HicregFromPublication} -- better
for larger distances.
To check whether the comparatively worse results from hicprediction were due to the underlying algorithm or to the input data,
new HiC-Reg predictions were made by converting the corresponding hicprediction training sets to text files and using them for
training and prediction with HiC-Reg.
It is obvious from \autoref{fig:res:HicregVsHicprediction_recomputed} that the results no longer differed much,
except for the value range. This indeed points to the input data as the main cause for the differences between hicprediction and HiC-Reg.
However, the computations from hicprediction were much more efficient both in terms of runtime and memory consumption,
see \autoref{tab:res:efficiency}. While training took almost three hours in HiC-Reg, it finished within around 90 seconds
in hicprediction -- and this even includes the time spent to split the data into five cross-validation sets and store them.
No efforts were made to find the cause for the significant runtime difference,
but it was obvious that HiC-Reg was using only one CPU, while hicprediction was set up to use multithreading on
all available (here, four) CPUs for the random forest regressor.
\begin{table}[htb]
\resizebox{\textwidth}{!}{
\begin{tabular}{lcclcc}
\hline
& \multicolumn{2}{c}{\textbf{training}} & & \multicolumn{2}{c}{\textbf{prediction}}
\\
& \multicolumn{1}{l}{runtime / min} & \multicolumn{1}{l}{memory / GB} & & \multicolumn{1}{l}{runtime / min} &
\multicolumn{1}{l}{memory / GB} \\ \hline
\textbf{HiC-Reg} & 175.58 & 4.29 & \multicolumn{1}{c}{} & 3.70 & 1.92
\\
\textbf{hicprediction} & 1.26 & 0.70 & \multicolumn{1}{c}{} & 1.18 & 1.17
\\ \hline
\end{tabular}}
\caption{Computational effort of HiC-Reg vs. hicprediction} \label{tab:res:efficiency}
\end{table}
For the example of CTCF in GM12878, the bedgraph input file converted from the corresponding BAM file
proved qualitatively quite similar to the example file in the HiC-Reg github repository
\cite{Roy2020}\footnote{full url:
\url{https://github.com/Roy-lab/HiC-Reg/blob/master/Scripts/aggregateSignalInRegion/wgEncodeBroadHistoneGm12878CtcfStdRawDataRep1_chr17.counts}},
see \autoref{fig:app:GM12878:CTCF:convertedInput3MB} and \ref{fig:app:GM12878:CTCF:convertedInput100KB}.
However, the newly created file contained about \num{55000} (14.2\%) more lines than the example file,
which suggests that the original input data to HiC-Reg must have been filtered in some way.
The paper \cite{Zhang2019} provides general information on how the data have been processed,
e.\,g. with regard to software tools, but unfortunately lacks detailed information for example on specific filtering procedures or parameters.
Irrespective of this, predictions with HiC-Reg using the converted BAM files did not yield useful results, see
\autoref{fig:res:HicregVsHicprediction_fromGroundUp}.
It could not be clarified what went wrong,
but gradient predictions like these can occur when only distance is
considered as a feature, cf. \autoref{sec:improve:emptySamples}.
This could have happened either due to a bug in HiC-Reg
or a misunderstanding of its input file formats, some of which are
only specified by example.
A concluding discussion of all changes made to hicprediction within this master project
will be given in \autoref{sec:discussion:main}.
\begin{figure}[hbp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_GM12878-on-K562_oursVsZhangPaper.pdf_tex}
\caption{HiC-Reg vs. hicprediction, GM12878 on K562, chromosome 17, \SI{5}{\kilo\bp}, HiC-Reg prediction reconstructed from published
data \cite{Zhang2019,zhangMaterial01,zhangMaterial02,zhangMaterial03}}
\label{fig:res:HicregVsHicprediction_supplementary}
\end{figure}
\begin{figure}[hbp]
\centering
\scriptsize
\import{figures/}{chr17-12-15_GM12878-on-K562_oursVsZhangPaper_noEmpty_sameInput.pdf_tex}
\caption{HiC-Reg vs. hicprediction, GM12878 on K562, chromosome 17, \SI{25}{\kilo\bp}, HiC-Reg input data converted from hicprediction datasets}
\label{fig:res:HicregVsHicprediction_recomputed}
\end{figure}
\begin{figure}[hbp]
\centering
\scriptsize
\import{figures/}{GM12878-on-K562_HicregFromGroundUp.pdf_tex}
\caption{HiC-Reg prediction from BAM files, GM12878 on K562, chromosome 17, \SI{5}{\kilo\bp}}
\label{fig:res:HicregVsHicprediction_fromGroundUp}
\end{figure}
\begin{figure}[hbp]
\centering
\begin{subfigure}[t]{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{Pearson_GM12878-on-K562_hicregFromSupplementary_vs_hicprediction.pdf_tex}}
\caption{HiC-Reg data from publication \cite{Zhang2019}}
\label{fig:res:Pearson:K562:chr17:5kb:HicregFromPublication}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.495\textwidth}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{Pearson_GM12878-on-K562_noEmpty_HicregVsHicprediction_sameInputs.pdf_tex}}
\caption{HiC-Reg input data converted from hicprediction datasets}
\label{fig:res:Pearson:K562:chr17:5kb:HicregFromConversion}
\end{subfigure}
\begin{subfigure}{.495\textwidth}
\vspace{5mm}
\centering
\resizebox{\textwidth}{!}{
\scriptsize
\import{figures/}{Pearson_GM12878-on-K562_HicregFromGroundUp.pdf_tex}}
\caption{HiC-Reg input data computed from BAM files}
\label{fig:res:Pearson:K562:chr17:5kb:HicregFromBam}
\end{subfigure}
\caption{Pearson correlation comparison HiC-Reg vs. hicprediction}
\label{fig:res:Pearson:hicRegVsHicprediction}
\end{figure} | {
"alphanum_fraction": 0.7862077144,
"avg_line_length": 57.8542024014,
"ext": "tex",
"hexsha": "3103c8398b22aa2d9f2883fb1881397eb3c23f51",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-09-04T11:59:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-04T11:59:08.000Z",
"max_forks_repo_head_hexsha": "9b7da97f09b037e0669948ec3ab14e1e78efc5e2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "MasterprojectRK/HiCPrediction",
"max_forks_repo_path": "hicprediction/report/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9b7da97f09b037e0669948ec3ab14e1e78efc5e2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "MasterprojectRK/HiCPrediction",
"max_issues_repo_path": "hicprediction/report/results.tex",
"max_line_length": 161,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "9b7da97f09b037e0669948ec3ab14e1e78efc5e2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "MasterprojectRK/HiCPrediction",
"max_stars_repo_path": "hicprediction/report/results.tex",
"max_stars_repo_stars_event_max_datetime": "2020-08-24T08:24:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-24T08:24:04.000Z",
"num_tokens": 9152,
"size": 33729
} |
\documentclass[]{andre-vechina-resume}
\usepackage{lipsum}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% LAST UPDATED DATE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\lastupdated
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% TITLE NAME
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\namesection{André}{Vechina}{
% \href{http://andrevechina.com}{andrevechina.com} |
\href{mailto:andrevechina@gmail.com}{andrevechina@gmail.com} |
+351 915606584 \\
LinkedIn://\href{https://www.linkedin.com/in/andrevechina}{\bf andrevechina} |
GitHub://\href{https://github.com/andrevechina}{\bf andrevechina}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EXPERIENCE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experience}
\begin{experience-item}
{Blip - Paddy Power Betfair}
{Software Developer}
{Oct 2014}
{Present}
{Porto, Portugal}
\begin{tightemize}
\item
Member of a Kanban squad building several web-based internal tools, mainly using ES6+, React and Redux.
Member of a cross-location team (Porto and Dublin) working in a highly collaborative agile environment.
Quality software is one of the main concerns: code coverage is over 95\% and each feature is implemented using the BDD process.
Active part in the recruitment process, performing technical interviews.
\item
Member of a Scrum team responsible for building Betfair's Sportsbook client-side rendered mobile web application.
Development and maintenance of a 500k+ LOC AngularJS application, as well as its supporting NodeJS-based web middleware, capable of scaling to over 1M active users (100k concurrent).
Monitoring and improvement of performance and scalability issues, both for data fetching and client-side rendering.
\end{tightemize}
\centering{
Javascript \tdot
React \tdot
Redux \tdot
React-Router \tdot
AngularJS\\
Webpack \tdot
Grunt \tdot
Jasmine \tdot
Mocha \tdot
Protractor\\
Kanban \tdot
Scrum \tdot
Continuous Integration \tdot
Continuous Deploy
}
\end{experience-item}
\begin{experience-item}
{Blip - Paddy Power Betfair}
{Software Engineer Intern}
{Apr 2014}
{Sep 2014}
{Porto, Portugal}
\begin{tightemize}
\item
Development of Betfair's Sportsbook server side rendered websites (both desktop and mobile) built using a Spring based framework.
\end{tightemize}
\centering{
Java \tdot
Javascript \tdot
Selenium
}
\end{experience-item}
\begin{experience-item}
{PT Inovação}
{Research Engineer Trainee}
{Apr 2013}
{Mar 2014}
{Aveiro, Portugal}
\begin{tightemize}
\item
Member of a small team that developed and integrated functional prototypes.
\item
Development and integration of a SIP WebRTC client and an SCC-AS (Service Centralization and Continuity Application Server) in an IMS environment.
\end{tightemize}
\centering{
Javascript \tdot
CoffeeScript \tdot
Spine.js \tdot
sipML5 \tdot
WebRTC \tdot
Java
}
\end{experience-item}
\begin{experience-item}
{Varident}
{Web Developer Intern}
{Jul 2011}
{Sep 2011}
{Somerville, NJ, USA}
\begin{tightemize}
\item
Member of a small development team that delivered custom-designed business websites and created a form builder for the in-house content management system (KickStart).
\end{tightemize}
\centering{
Javascript \tdot
jQuery
}
\end{experience-item}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EDUCATION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Education}
\runsubsection{University of Aveiro}
\descript{| Msc in Computer and Telematics Engineering}
\location{Sep 2007 - Dec 2012 | Aveiro, Portugal}
Final grade: {\color{subheadings} 15/20} \tdot Dissertation grade: {\color{subheadings} 17/20}\\
\location{Msc Dissertation:}
\begin{tightemize}
\item
Representation of semantic networks of biomedical terms.
\item
Development of a web platform for representation and analysis of relationships among biomedical entities.
\end{tightemize}
\centering{
Java \tdot
J2EE \tdot
JSF \tdot
Javascript
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PUBLICATIONS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Publications}
\bibliographystyle{ieeetr}
\bibliography{publications}
\nocite{*}
\end{document}
| {
"alphanum_fraction": 0.6270742358,
"avg_line_length": 29.7402597403,
"ext": "tex",
"hexsha": "4a49d3527f608f834ce0541d428af9b698a7fc79",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "83bcdd9ad16bb1ec86757f8ccad2e2d2b7232887",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "andrevechina/Vechina-Resume",
"max_forks_repo_path": "andre-vechina-resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "83bcdd9ad16bb1ec86757f8ccad2e2d2b7232887",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "andrevechina/Vechina-Resume",
"max_issues_repo_path": "andre-vechina-resume.tex",
"max_line_length": 202,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "83bcdd9ad16bb1ec86757f8ccad2e2d2b7232887",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "andrevechina/Vechina-Resume",
"max_stars_repo_path": "andre-vechina-resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1111,
"size": 4580
} |
\documentclass[color=usenames,dvipsnames]{beamer}\usepackage[]{graphicx}\usepackage[]{color}
% maxwidth is the original width if it is less than linewidth
% otherwise use linewidth (to make sure the graphics do not exceed the margin)
\makeatletter
\def\maxwidth{ %
\ifdim\Gin@nat@width>\linewidth
\linewidth
\else
\Gin@nat@width
\fi
}
\makeatother
\definecolor{fgcolor}{rgb}{0, 0, 0}
\newcommand{\hlnum}[1]{\textcolor[rgb]{0.69,0.494,0}{#1}}%
\newcommand{\hlstr}[1]{\textcolor[rgb]{0.749,0.012,0.012}{#1}}%
\newcommand{\hlcom}[1]{\textcolor[rgb]{0.514,0.506,0.514}{\textit{#1}}}%
\newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}%
\newcommand{\hlstd}[1]{\textcolor[rgb]{0,0,0}{#1}}%
\newcommand{\hlkwa}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}%
\newcommand{\hlkwb}[1]{\textcolor[rgb]{0,0.341,0.682}{#1}}%
\newcommand{\hlkwc}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}%
\newcommand{\hlkwd}[1]{\textcolor[rgb]{0.004,0.004,0.506}{#1}}%
\let\hlipl\hlkwb
\usepackage{framed}
\makeatletter
\newenvironment{kframe}{%
\def\at@end@of@kframe{}%
\ifinner\ifhmode%
\def\at@end@of@kframe{\end{minipage}}%
\begin{minipage}{\columnwidth}%
\fi\fi%
\def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep
\colorbox{shadecolor}{##1}\hskip-\fboxsep
% There is no \\@totalrightmargin, so:
\hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}%
\MakeFramed {\advance\hsize-\width
\@totalleftmargin\z@ \linewidth\hsize
\@setminipage}}%
{\par\unskip\endMakeFramed%
\at@end@of@kframe}
\makeatother
\definecolor{shadecolor}{rgb}{.97, .97, .97}
\definecolor{messagecolor}{rgb}{0, 0, 0}
\definecolor{warningcolor}{rgb}{1, 0, 1}
\definecolor{errorcolor}{rgb}{1, 0, 0}
\newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX
\usepackage{alltt}
%\documentclass[color=usenames,dvipsnames,handout]{beamer}
%\usepackage[roman]{../../lab1}
\usepackage[sans]{../../lab1}
\hypersetup{pdftex,pdfstartview=FitV}
%% New command for inline code that isn't to be evaluated
\definecolor{inlinecolor}{rgb}{0.878, 0.918, 0.933}
\newcommand{\inr}[1]{\colorbox{inlinecolor}{\texttt{#1}}}
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\begin{document}
%\setlength\fboxsep{0pt}
\begin{frame}[plain]
\huge
\centering \par
{\color{RoyalBlue}{Lab 11 -- ANCOVA}} \\
\vspace{1cm}
\Large
% October 29 \& 30, 2018 \\
FANR 6750 \\
\vfill
\large
Richard Chandler and Bob Cooper
\end{frame}
\section{Regression}
\begin{frame}
\frametitle{ANCOVA overview}
{%\bf
Scenario}
\begin{itemize}
\item We are interested in doing a one-way ANOVA
\item However, we need to account for variation associated with a
continuous predictor variable
\end{itemize}
\pause
\vfill
{%\bf
Additive model}
\[
y_{ij} = \mu + \alpha_i + \beta(x_{ij} - \bar{x}) + \varepsilon_{ij}
\]
\pause
\vfill
% \centering
% \bf
ANCOVA can be thought of as a hybrid between ANOVA and regression \\
\pause
\vfill
% \centering
% \bf
ANOVA, regression, and ANCOVA are linear models \\
\end{frame}
\begin{frame}[fragile]
\frametitle{The Diet Data}
% \small
Import the data and view the levels of the factor
\begin{knitrout}\small
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{dietData} \hlkwb{<-} \hlkwd{read.csv}\hlstd{(}\hlstr{"dietData.csv"}\hlstd{)}
\hlkwd{levels}\hlstd{(dietData}\hlopt{$}\hlstd{diet)}
\end{alltt}
\begin{verbatim}
## [1] "Control" "High" "Low" "Med"
\end{verbatim}
\end{kframe}
\end{knitrout}
\pause
\vspace{0.5cm}
{\large Reorder the levels of the factor, just for convenience}
\begin{knitrout}\small
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{levels}\hlstd{(dietData}\hlopt{$}\hlstd{diet)} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{Control}\hlstd{=}\hlstr{"Control"}\hlstd{,} \hlkwc{Low}\hlstd{=}\hlstr{"Low"}\hlstd{,}
\hlkwc{Med}\hlstd{=}\hlstr{"Med"}\hlstd{,} \hlkwc{High}\hlstd{=}\hlstr{"High"}\hlstd{)}
\hlkwd{levels}\hlstd{(dietData}\hlopt{$}\hlstd{diet)}
\end{alltt}
\begin{verbatim}
## [1] "Control" "Low" "Med" "High"
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{The diet data}
%<<scatter,echo=FALSE,fig.show="hide">>=
%plot(weight ~ age, dietData)
%@
%<<bp,echo=FALSE,fig.show=FALSE>>=
%boxplot(weight ~ diet, dietData, ylab="Weight")
%@
\begin{columns}
\begin{column}{0.5\textwidth}
\tiny %\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{plot}\hlstd{(weight} \hlopt{~} \hlstd{age, dietData)}
\end{alltt}
\end{kframe}
\end{knitrout}
\includegraphics[width=\textwidth]{figure/plot-wt-1}
\end{column}
\begin{column}{0.5\textwidth}
\tiny %\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{boxplot}\hlstd{(weight} \hlopt{~} \hlstd{diet, dietData,} \hlkwc{ylab}\hlstd{=}\hlstr{"Weight"}\hlstd{)}
\end{alltt}
\end{kframe}
\end{knitrout}
\includegraphics[width=\textwidth]{figure/bp-1}
\end{column}
\end{columns}
\end{frame}
\begin{frame}[fragile]
\frametitle{Simple linear regression using {\tt lm}}
% \scriptsize
\begin{knitrout}\scriptsize
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{fm1} \hlkwb{<-} \hlkwd{lm}\hlstd{(weight} \hlopt{~} \hlstd{age, dietData)}
\hlkwd{summary}\hlstd{(fm1)}
\end{alltt}
\begin{verbatim}
##
## Call:
## lm(formula = weight ~ age, data = dietData)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.6906 -1.2625 0.0522 1.0233 6.1680
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 21.32523 0.80685 26.430 < 2e-16 ***
## age 0.51807 0.06742 7.685 2.07e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.171 on 58 degrees of freedom
## Multiple R-squared: 0.5045, Adjusted R-squared: 0.496
## F-statistic: 59.05 on 1 and 58 DF, p-value: 2.072e-10
\end{verbatim}
\end{kframe}
\end{knitrout}
\pause
\footnotesize
{The two estimates correspond to the intercept and slope parameters}
\end{frame}
\begin{frame}[fragile]
\frametitle{Regression line and confidence interval}
{%\bf %\footnotesize
Regression lines and CIs can be created using
\inr{predict} \\}
\begin{enumerate}[\bf (1)]
\item Create a new {\tt data.frame} containing a sequence of
values of the predictor variable {\it age}
\item Predict {\it weight} using these values of {\it age}
\item Put predictions and data together for easy plotting
\end{enumerate}
\pause
\vfill
\small
\begin{knitrout}\footnotesize
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{age} \hlkwb{<-} \hlstd{dietData}\hlopt{$}\hlstd{age}
\hlstd{predData1} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(}\hlkwc{age}\hlstd{=}\hlkwd{seq}\hlstd{(}\hlkwd{min}\hlstd{(age),} \hlkwd{max}\hlstd{(age),} \hlkwc{length}\hlstd{=}\hlnum{50}\hlstd{))}
\hlstd{pred1} \hlkwb{<-} \hlkwd{predict}\hlstd{(fm1,} \hlkwc{newdata}\hlstd{=predData1,} \hlkwc{se.fit}\hlstd{=}\hlnum{TRUE}\hlstd{,}
\hlkwc{interval}\hlstd{=}\hlstr{"confidence"}\hlstd{)}
\hlstd{predictions1} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(pred1}\hlopt{$}\hlstd{fit, predData1)}
\end{alltt}
\end{kframe}
\end{knitrout}
\pause
\vfill
% {\bf Plot raw data and the regression results}
There's nothing special about \inr{length=50}, but in general, the longer the length of the sequence, the smoother the lines will look.
\end{frame}
\begin{frame}[fragile]
\frametitle{Regression line and confidence interval}
\scriptsize
\begin{knitrout}\scriptsize
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{plot}\hlstd{(weight} \hlopt{~} \hlstd{age,} \hlkwc{data}\hlstd{=dietData)} \hlcom{# raw data}
\hlkwd{lines}\hlstd{(fit} \hlopt{~} \hlstd{age,} \hlkwc{data}\hlstd{=predictions1,} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)} \hlcom{# fitted line}
\hlkwd{lines}\hlstd{(lwr} \hlopt{~} \hlstd{age,} \hlkwc{data}\hlstd{=predictions1,} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"gray"}\hlstd{)} \hlcom{# lower CI}
\hlkwd{lines}\hlstd{(upr} \hlopt{~} \hlstd{age,} \hlkwc{data}\hlstd{=predictions1,} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"gray"}\hlstd{)} \hlcom{# upper CI}
\end{alltt}
\end{kframe}
\end{knitrout}
\vspace{-9mm}
\begin{center}
\includegraphics[width=0.65\textwidth]{figure/plot-fm1-1}
\end{center}
\end{frame}
\section{One-way ANOVA}
\begin{frame}[fragile]
\frametitle{One-way ANOVA using {\tt lm}}
{%\bf
Change the \inr{contrasts} option so that the estimates will
correspond to the additive model, and then fit the ANOVA
}
\begin{knitrout}\small
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{options}\hlstd{(}\hlkwc{contrasts}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"contr.sum"}\hlstd{,} \hlstr{"contr.poly"}\hlstd{))}
\hlstd{fm2} \hlkwb{<-} \hlkwd{lm}\hlstd{(weight} \hlopt{~} \hlstd{diet, dietData)}
\hlkwd{summary.aov}\hlstd{(fm2)}
\end{alltt}
\begin{verbatim}
## Df Sum Sq Mean Sq F value Pr(>F)
## diet 3 54.6 18.216 2.053 0.117
## Residuals 56 496.9 8.873
\end{verbatim}
\end{kframe}
\end{knitrout}
\pause
\vfill
{%\bf
The \inr{aov} function gives identical results}
\begin{knitrout}\small
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{summary}\hlstd{(}\hlkwd{aov}\hlstd{(weight} \hlopt{~} \hlstd{diet, dietData))}
\end{alltt}
\begin{verbatim}
## Df Sum Sq Mean Sq F value Pr(>F)
## diet 3 54.6 18.216 2.053 0.117
## Residuals 56 496.9 8.873
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
% See Venables and Ripley (2002, pg 145)
\begin{frame}[fragile]
\frametitle{An alternative summary}
\scriptsize
\begin{knitrout}\tiny
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{summary}\hlstd{(fm2)}
\end{alltt}
\begin{verbatim}
##
## Call:
## lm(formula = weight ~ diet, data = dietData)
##
## Residuals:
## Min 1Q Median 3Q Max
## -7.6371 -1.9253 -0.0366 1.9770 5.4576
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.13962 0.38456 70.573 <2e-16 ***
## diet1 -1.02179 0.66608 -1.534 0.131
## diet2 -0.56593 0.66608 -0.850 0.399
## diet3 0.08027 0.66608 0.121 0.905
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.979 on 56 degrees of freedom
## Multiple R-squared: 0.09908, Adjusted R-squared: 0.05082
## F-statistic: 2.053 on 3 and 56 DF, p-value: 0.1169
\end{verbatim}
\end{kframe}
\end{knitrout}
\pause
\vfill
{%\bf
Because we changed the \inr{contrasts} option to {\tt
contr.sum}, the intercept is the grand mean ($\mu$) and the other
estimates are the effect sizes ($\alpha_i$) \\}
% contr.sum}, The intercept is the mean weight for the reference level
% (Control) when age=0. The other estimates indicate the
% difference from the reference level. \\}
\pause
\vfill
The effect size for the last level of diet ({\tt diet4}) isn't shown because it is not a unique parameter (i.e., it is a function of the other parameters: $\alpha_4 = -\alpha_1 - \alpha_2 - \alpha_3$). \\
\end{frame}
%\begin{comment}
\begin{frame}[fragile]
\frametitle{One-way ANOVA}
{%\centering
\small %\bf
The \inr{predict} function can also be used to obtain
group means, SEs, and CIs from a one-way ANOVA \\}
\footnotesize %\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{predData2} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(}\hlkwc{diet}\hlstd{=}\hlkwd{levels}\hlstd{(dietData}\hlopt{$}\hlstd{diet))}
\hlstd{pred2} \hlkwb{<-} \hlkwd{predict}\hlstd{(fm2,} \hlkwc{newdata}\hlstd{=predData2,}
\hlkwc{se.fit}\hlstd{=}\hlnum{TRUE}\hlstd{,} \hlkwc{interval}\hlstd{=}\hlstr{"confidence"}\hlstd{)}
\end{alltt}
\end{kframe}
\end{knitrout}
\pause
\vfill
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{predictions2} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(pred2}\hlopt{$}\hlstd{fit,} \hlkwc{SE}\hlstd{=pred2}\hlopt{$}\hlstd{se, predData2)}
\hlstd{predictions2}
\end{alltt}
\begin{verbatim}
## fit lwr upr SE diet
## 1 26.11783 24.57710 27.65856 0.7691199 Control
## 2 26.57368 25.03295 28.11442 0.7691199 Low
## 3 27.21988 25.67915 28.76062 0.7691199 Med
## 4 28.64707 27.10634 30.18780 0.7691199 High
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}
\frametitle{One-way ANOVA}
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=0.8\textwidth]{figure/barplot1-1}
\end{center}
\end{frame}
%\end{comment}
\section{ANCOVA}
\begin{frame}[fragile]
\frametitle{ANCOVA preliminaries}
\small
{ Additive model}
\[
y_{ij} = \mu + \alpha_i + \beta(x_{ij} - \bar{x}) + \varepsilon_{ij}
\]
\pause
\vfill
{Make sure the {\tt contrasts} are set as before}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{options}\hlstd{(}\hlkwc{contrasts}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"contr.sum"}\hlstd{,} \hlstr{"contr.poly"}\hlstd{))}
\end{alltt}
\end{kframe}
\end{knitrout}
\pause
\vfill
{Centering the covariate isn't required, but doing so allows the
intercept to be interpreted as the grand mean}
% \small
\footnotesize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{dietData}\hlopt{$}\hlstd{ageCentered} \hlkwb{<-} \hlstd{dietData}\hlopt{$}\hlstd{age} \hlopt{-} \hlkwd{mean}\hlstd{(dietData}\hlopt{$}\hlstd{age)}
\end{alltt}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{ANCOVA}
% \pause
{%\scriptsize
\footnotesize
%\bf
Put the covariate before the treatment
variable in the formula. \\}
%\scriptsize %\tiny
\begin{knitrout}\tiny
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{fm3} \hlkwb{<-} \hlkwd{lm}\hlstd{(weight} \hlopt{~} \hlstd{ageCentered} \hlopt{+} \hlstd{diet, dietData)}
\end{alltt}
\end{kframe}
\end{knitrout}
\pause
%\vfill
\begin{knitrout}\tiny
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{summary}\hlstd{(fm3)}
\end{alltt}
\begin{verbatim}
##
## Call:
## lm(formula = weight ~ ageCentered + diet, data = dietData)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.8214 -1.2213 -0.2519 1.2161 4.9185
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.1396 0.2406 112.787 < 2e-16 ***
## ageCentered 0.5573 0.0594 9.382 5.2e-13 ***
## diet1 -1.7446 0.4238 -4.116 0.00013 ***
## diet2 -0.3758 0.4173 -0.901 0.37171
## diet3 0.7819 0.4234 1.847 0.07020 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.864 on 55 degrees of freedom
## Multiple R-squared: 0.6536, Adjusted R-squared: 0.6284
## F-statistic: 25.94 on 4 and 55 DF, p-value: 4.147e-12
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{The ANOVA table}
{%\bf
The null hypothesis of no diet
effect is rejected, even though it was not rejected before.}
\small
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{summary.aov}\hlstd{(fm3)}
\end{alltt}
\begin{verbatim}
## Df Sum Sq Mean Sq F value Pr(>F)
## ageCentered 1 278.25 278.25 80.095 2.54e-12 ***
## diet 3 82.22 27.41 7.889 0.000182 ***
## Residuals 55 191.07 3.47
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{Predict weight}
{%\bf
Create predictions of {\tt weight} over a sequence of {\tt
age} values, for every level of {\tt diet}}
\small
\begin{knitrout}\small
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{ageC} \hlkwb{<-} \hlstd{dietData}\hlopt{$}\hlstd{ageCentered}
\hlstd{predData3} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(}
\hlkwc{diet}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlkwd{c}\hlstd{(}\hlstr{"Control"}\hlstd{,} \hlstr{"Low"}\hlstd{,} \hlstr{"Med"}\hlstd{,} \hlstr{"High"}\hlstd{),} \hlkwc{each}\hlstd{=}\hlnum{20}\hlstd{),}
\hlkwc{ageCentered}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlkwd{seq}\hlstd{(}\hlkwd{min}\hlstd{(ageC),} \hlkwd{max}\hlstd{(ageC),}
\hlkwc{length}\hlstd{=}\hlnum{20}\hlstd{),}
\hlkwc{times}\hlstd{=}\hlnum{4}\hlstd{))}
\hlstd{pred3} \hlkwb{<-} \hlkwd{predict}\hlstd{(fm3,} \hlkwc{newdata}\hlstd{=predData3,} \hlkwc{se.fit}\hlstd{=}\hlnum{TRUE}\hlstd{,}
\hlkwc{interval}\hlstd{=}\hlstr{"confidence"}\hlstd{)}
\hlstd{predictions3} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(pred3}\hlopt{$}\hlstd{fit, predData3)}
\end{alltt}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{Plot the regression lines}
\footnotesize %\small
\begin{knitrout}\footnotesize
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlstd{colrs} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"black"}\hlstd{,} \hlstr{"royalblue"}\hlstd{,} \hlstr{"orange"}\hlstd{,} \hlstr{"darkcyan"}\hlstd{)}
\hlkwd{plot}\hlstd{(weight} \hlopt{~} \hlstd{ageCentered, dietData,} \hlkwc{cex}\hlstd{=}\hlnum{1.2}\hlstd{,}
\hlkwc{pch}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlnum{15}\hlopt{:}\hlnum{18}\hlstd{,} \hlkwc{each}\hlstd{=}\hlnum{15}\hlstd{),}
\hlkwc{col}\hlstd{=}\hlkwd{rep}\hlstd{(colrs,} \hlkwc{each}\hlstd{=}\hlnum{15}\hlstd{))}
\hlkwd{lines}\hlstd{(fit} \hlopt{~} \hlstd{ageCentered, predictions3,} \hlkwc{subset}\hlstd{=diet}\hlopt{==}\hlstr{"Control"}\hlstd{,}
\hlkwc{col}\hlstd{=colrs[}\hlnum{1}\hlstd{],} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)}
\hlkwd{lines}\hlstd{(fit} \hlopt{~} \hlstd{ageCentered, predictions3,} \hlkwc{subset}\hlstd{=diet}\hlopt{==}\hlstr{"Low"}\hlstd{,} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,}
\hlkwc{col}\hlstd{=colrs[}\hlnum{2}\hlstd{],} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)}
\hlkwd{lines}\hlstd{(fit} \hlopt{~} \hlstd{ageCentered, predictions3,} \hlkwc{subset}\hlstd{=diet}\hlopt{==}\hlstr{"Med"}\hlstd{,} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,}
\hlkwc{col}\hlstd{=colrs[}\hlnum{3}\hlstd{],} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)}
\hlkwd{lines}\hlstd{(fit} \hlopt{~} \hlstd{ageCentered, predictions3,} \hlkwc{subset}\hlstd{=diet}\hlopt{==}\hlstr{"High"}\hlstd{,} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,}
\hlkwc{col}\hlstd{=colrs[}\hlnum{4}\hlstd{],} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)}
\hlkwd{legend}\hlstd{(}\hlnum{4}\hlstd{,} \hlnum{23}\hlstd{,} \hlkwd{c}\hlstd{(}\hlstr{"High"}\hlstd{,} \hlstr{"Med"}\hlstd{,} \hlstr{"Low"}\hlstd{,} \hlstr{"Control"}\hlstd{),} \hlkwc{pch}\hlstd{=}\hlnum{18}\hlopt{:}\hlnum{15}\hlstd{,}
\hlkwc{title}\hlstd{=}\hlstr{"Diet"}\hlstd{,} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{,} \hlkwc{col}\hlstd{=}\hlkwd{rev}\hlstd{(colrs))}
\end{alltt}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}
\frametitle{Plot the regression lines}
\vspace{-0.5cm}
\begin{center}
\only<1 | handout:0>{\includegraphics[width=0.8\textwidth]{figure/scatplot0-1}}
\only<2>{\includegraphics[width=0.8\textwidth]{figure/scatplot1-1}}
\end{center}
\end{frame}
\begin{frame}[fragile]
\frametitle{Multiple comparisons}
%{\bf Use {\tt multcomp} package}
\scriptsize
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlcom{## install.packages("multcomp")}
\hlkwd{library}\hlstd{(multcomp)}
\hlkwd{summary}\hlstd{(}\hlkwd{glht}\hlstd{(fm3,} \hlkwc{linfct}\hlstd{=}\hlkwd{mcp}\hlstd{(}\hlkwc{diet}\hlstd{=}\hlstr{"Tukey"}\hlstd{)))}
\end{alltt}
\begin{verbatim}
##
## Simultaneous Tests for General Linear Hypotheses
##
## Multiple Comparisons of Means: Tukey Contrasts
##
##
## Fit: lm(formula = weight ~ ageCentered + diet, data = dietData)
##
## Linear Hypotheses:
## Estimate Std. Error t value Pr(>|t|)
## Low - Control == 0 1.3688 0.6875 1.991 0.20389
## Med - Control == 0 2.5265 0.6973 3.623 0.00336 **
## High - Control == 0 3.0830 0.6832 4.513 < 0.001 ***
## Med - Low == 0 1.1577 0.6828 1.696 0.33583
## High - Low == 0 1.7143 0.6817 2.515 0.06861 .
## High - Med == 0 0.5566 0.6869 0.810 0.84931
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
\end{verbatim}
\end{kframe}
\end{knitrout}
\end{frame}
\begin{frame}[fragile]
\frametitle{Assignment}
{\bf \large Complete the following and upload your \R~script to ELC
before lab next week \\}
% \pause
\vfill
\begin{enumerate}[\bf \color{PineGreen} (1)]
% \item Run a new analysis using {\tt lm} to test for an interaction
% of diet and age
% \item Is the interaction significant? Report the $F$-value and
% $P$-value used for this test.
\item Fit an ANCOVA model to the data in {\tt treeData.csv}, which
represent the height of trees following a fertilizer
experiment. The covariate is pH.
\item Use: {\tt options(contrasts=c("contr.sum", "contr.poly"))}
so that your estimates correspond to the additive model from the
lecture notes
\item Interpret each of the estimates from {\tt lm}. What is the
null hypothesis associated with each $p$-value?
\item Plot the data and the regression lines. Use different colors
or symbols to distinguish the treatment groups.
\item Which fertilizer treatments are significantly different?
\end{enumerate}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.650891238,
"avg_line_length": 31.5327754533,
"ext": "tex",
"hexsha": "a28df69df3820375ad0c6223c73b69bed1527e9c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "89bce9e600c6d4601b70eb5cd02531fc1dfa950e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rbchan/exp-design",
"max_forks_repo_path": "labs/ANCOVA/lab-ANCOVA.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "89bce9e600c6d4601b70eb5cd02531fc1dfa950e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rbchan/exp-design",
"max_issues_repo_path": "labs/ANCOVA/lab-ANCOVA.tex",
"max_line_length": 236,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "89bce9e600c6d4601b70eb5cd02531fc1dfa950e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rbchan/exp-design",
"max_stars_repo_path": "labs/ANCOVA/lab-ANCOVA.tex",
"max_stars_repo_stars_event_max_datetime": "2019-02-27T17:03:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-08-01T17:16:11.000Z",
"num_tokens": 9063,
"size": 22609
} |
\chapter{Replacements for t-tests and ANOVA}
ANOVA is a common procedure in classical statistics, and is related to the
simpler idea of a t-test. These classical tests were designed for particular
kinds of problems, and in this chapter we will study similar problems but
solve them from a Bayesian point of view.
We will also use these examples to discuss some issues about the choice
of prior distributions when there are more than a few parameters. When there
are only a few parameters it is usually safe to assign a vague, wide prior
to describe your initial uncertainty (unless, of course, you have more
information than that). In higher dimensions, problems can arise if you do
this. One way of getting around these problems is to use a
{\it hierarchical model}.
\section{A T-Test Example}
This example is based on one given in a 1976 article by physicist E. T. Jaynes,
called ``Confidence Intervals vs. Bayesian Intervals''. This is a very strongly
worded paper and might be an interesting read for
those who are interested in the battle between frequentist and Bayesian statistics
when the latter was making its comeback in the second half of the 20th century.
It's also where I got the crazy confidence interval example from.
Two manufacturers, $1$ and $2$, both make ``widgets'', and we are interested
in figuring out which manufacturer makes the best widgets (on average), as
measured by their lifetime. To determine this, we obtain 9 widgets from
manufacturer $1$ and 4 widgets from manufacturer $2$, and measure their
lifetimes, in days. The results are given below:
\begin{eqnarray}
x^1 &=& \{41.26, 35.81, 36.01, 43.59, 37.50, 52.70, 42.43, 32.52, 56.20\}\\
x^2 &=& \{54.97, 47.07, 57.12, 40.84\}
\end{eqnarray}
These measurements can be summarised by the means and standard deviations, which
are $42 \pm 7.48$ for group $1$ and $50 \pm 6.48$ for group $2$.
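If you would like to verify these summaries, here is one way to do it in R (note that the standard deviations quoted above use the divisor $n$ rather than the more common $n - 1$):
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# The widget lifetimes (days)
x1 = c(41.26, 35.81, 36.01, 43.59, 37.50, 52.70, 42.43, 32.52, 56.20)
x2 = c(54.97, 47.07, 57.12, 40.84)

# Standard deviation with divisor n (this is what gives the quoted values)
pop_sd = function(v) sqrt(mean((v - mean(v))^2))

c(mean(x1), pop_sd(x1))  # approximately 42 and 7.48
c(mean(x2), pop_sd(x2))  # approximately 50 and 6.48
\end{minted}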
The question is: given this data, is there evidence that one of the manufacturers
is better than the other, and if so, by how much?
In classical statistics the standard procedure for this situation would be a
two-sample $t$-test. However, before we do anything I'd like you to consider
the numbers and use your intuition: what do {\it you} think about what the
evidence says?
An underlying assumption of a classical $t$-test is that the data are normally
distributed around the mean values for each group\footnote{Strictly speaking,
it's the probability distribution for the data given the parameters
that is normal; the data themselves may or may not look normally distributed.}.
We may as well adopt this
assumption for our Bayesian model. If we call the group 1
data points $\{x^1_1, x^1_2, ..., x^1_{N_1}\}$ and the group 2 data points
$\{x^2_1, x^2_2, ..., x^2_{N_2}\}$, then the likelihood is:
\begin{eqnarray}
x^1_i &\sim& \mathcal{N}\left(\mu_1, \sigma^2\right)\nonumber\\
x^2_i &\sim& \mathcal{N}\left(\mu_2, \sigma^2\right)\label{eq:ttest_likelihood}
\end{eqnarray}
where all the data points are independent given the parameters. Note the assumption that
the two groups have the same underlying (``population'') standard deviation $\sigma$. This is a popular
assumption in this kind of analysis but it is not necessarily well justified!
We will build our Bayesian models using this assumption, but it is not that
difficult to relax it if you want to. You could just include multiple
$\sigma$ parameters in the model,
just as we will include multiple $\mu$ parameters.
Instead of just one model for this situation, we will study three different
versions. Each model will have the same likelihood
as given above in Equation~\ref{eq:ttest_likelihood}, and the same prior
for $\sigma$. However, the models will all have different priors for $\mu_1$
and $\mu_2$.
We will be able to see that the choice of prior does
influence the results (of course), but in ways that make sense. Which of these
models is more appropriate in a practical situation would depend on the exact
situation. There is no ``one size fits all'' model.
\subsection{Likelihood}
To implement our model in JAGS, we can begin by specifying the likelihood
part like so:
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Sampling distribution/likelihood
for(i in 1:N1)
{
x1[i] ~ dnorm(mu1, 1/sigma^2)
}
for(i in 1:N2)
{
x2[i] ~ dnorm(mu2, 1/sigma^2)
}
\end{minted}
We have called our data arrays {\tt x1} and {\tt x2}, and we have also
assumed that the sample sizes {\tt N1} and {\tt N2} are defined, so our
{\tt data} list will need to be consistent with these choices. The parameters
we will be estimating are {\tt mu1}, {\tt mu2}, and {\tt sigma}, so we will
need to specify prior distributions for them. In the following sections, we'll
use the same prior for {\tt sigma}, so we may as well specify that now.
Let's use a log-uniform prior where $\sigma$ is between $e^{-10}$ and
$e^{10}$.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Prior for sigma
log_sigma ~ dunif(-10, 10)
sigma <- exp(log_sigma)
\end{minted}
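For reference, the corresponding {\tt data} list can be assembled in R like this, using the {\tt x1} and {\tt x2} vectors given earlier (how the model and data are then passed to JAGS depends on which interface you use):
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Data list, matching the variable names used in the model
data = list(x1=x1, x2=x2, N1=length(x1), N2=length(x2))
\end{minted}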
\subsection{Prior 1: Very Vague}
The last missing ingredients to finish the JAGS model are the priors for
{\tt mu1} and {\tt mu2}. For our first model, let's be really naive and assign
super-wide (very vague) normal priors.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Prior 1: Very Vague
mu1 ~ dnorm(0, 1/1000^2)
mu2 ~ dnorm(0, 1/1000^2)
\end{minted}
At first glance, this might seem like a fairly reasonable thing to do. In many
problems, it doesn't make much difference if we just use vague priors and get
on with the calculation (as opposed to thinking really hard about the prior,
and what is actually known about the parameters).
However, this prior has a number of properties that suggest it might not be
quite right: firstly, what is the probability that $\mu_1 = \mu_2$?
In classical t-tests, the whole point is to test the hypothesis that
the two ``population means'' (parameters) are equal. However, our prior
actually implies that the probability they are equal is 0! Therefore, no matter
what data we get, the posterior probability of $\mu_1 = \mu_2$ will always be
zero.
\subsection{Prior 2: They might be equal!}
The problem with Prior 1 is that we may think $\mu_1$ might exactly equal
$\mu_2$, and Prior 1 doesn't allow for this. So here's another way we might
set up the prior. We'll start by defining the prior for $\mu_1$ as we did
before. Then, when we consider $\mu_2$, we need a way of giving it a
50\% probability of equalling $\mu_1$, and if not, then it should have
a ``bi-exponential'' distribution centered around $\mu_1$.
Here is our solution. Read it carefully and make sure you understand what this
prior does.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# First mean
mu1 ~ dnorm(0, 1/1000^2)
# Prior for difference, mu2 - mu1
u ~ dunif(-1, 1)
# Length of exponential prior given difference != 0
L <- 5
size_of_difference <- step(u)*(-L*log(1 - u))
# To make the difference positive or negative
C ~ dbin(0.5, 1)
difference <- (2*C - 1)*size_of_difference
# Second mean
mu2 <- mu1 + difference
\end{minted}
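If you would like to check your understanding against something, here is one way to unpack what the code above does (this is only a restatement of the code, not an extra assumption). Since {\tt u} is uniform on $(-1, 1)$, the factor {\tt step(u)} is zero with probability $1/2$, so the difference is exactly zero half the time; this is the 50\% prior probability that $\mu_2 = \mu_1$. Given $u > 0$, $u$ is uniform on $(0, 1)$ and $-L\log(1 - u)$ has an exponential distribution with mean $L = 5$. Finally, {\tt C} is a fair coin flip that chooses the sign, so when the difference is nonzero, $\mu_2 - \mu_1$ has a bi-exponential distribution centred at zero, i.e. $\mu_2$ is centred around $\mu_1$.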
\subsection{Prior 3: Alright, they're not equal, but they might be {\it close}}
Prior 2 is also a little bit strange, if you think about it. If we're comparing
these two manufacturers of widgets, why would we think it is possible that the
two manufacturers are {\tt exactly} equal? Maybe we just think the parameters
$\mu_1$ and $\mu_2$ are likely to be {\it similar} in value.
In other words, we shouldn't worry so much about the
prior probability of $\mu_1 = \mu_2$, but we should at least make sure there's
a moderate prior probability that $\mu_1 \approx \mu_2$.
One way we could do this is by applying a normal prior to both $\mu_1$ and
$\mu_2$ with some mean (let's call it the ``grand mean'')
and some standard deviation (let's call it the ``diversity'').
That way, $\mu_1$ and
$\mu_2$ would both be likely to be somewhere around the grand mean, and
they would likely be different by roughly the size of the diversity.
The challenge now seems to be the choice of appropriate values for the grand
mean and the diversity. Fortunately, we don't actually have to choose them!
Instead, we can simply assign priors to the grand mean and the diversity as well.
This is our first example of a {\it hierarchical model}. In a hierarchical
model, instead of directly assigning priors to our parameters, we imagine that
we knew the values of some other parameters (called ``hyperparameters''), and
assign our prior for the parameters {\it given} the hyperparameters. Then we
assign a prior for the hyperparameters as well, to complete the model.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Hierarchical prior for the means
# Hyperparameters
grand_mean ~ dnorm(0, 1/1000^2)
log_diversity ~ dunif(-10, 10)
diversity <- exp(log_diversity)
# Prior for the parameters given the hyperparameters
mu1 ~ dnorm(grand_mean, 1/diversity^2)
mu2 ~ dnorm(grand_mean, 1/diversity^2)
\end{minted}
Samples (obtained using JAGS) of the three priors are shown in
Figure~\ref{fig:ttest1}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.8]{Figures/ttest1.pdf}
\caption{\it The three different priors we are trying for our Bayesian
equivalent of a t-test. The first prior simply asserts a large amount of
prior ignorance about the value of the two parameters $\mu_1$ and $\mu_2$.
The second is similar but applies 50\% probability to the proposition
$\mu_1 = \mu_2$. The third prior does not allow the two parameters to be
exactly equal, but enhances the probability that they are quite similar
in value.\label{fig:ttest1}}
\end{center}
\end{figure}
The posteriors are shown in Figure~\ref{fig:ttest2}.
The inferences are different, as you would expect, and that's entirely down
to the choice of the prior. Any summaries we make will therefore depend on
which prior we want to use.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.8]{Figures/ttest2.pdf}
\caption{\it The posterior distributions, given the widget data,
based on the three different priors for $\mu_1$ and $\mu_2$.\label{fig:ttest2}}
\end{center}
\end{figure}
The original question was whether manufacturer two was better, equal, or worse
than manufacturer one. We can answer that question by calculating the
posterior probabilities of $\mu_1 = \mu_2$, $\mu_1 < \mu_2$, and
$\mu_1 > \mu_2$. The results are shown in Table~\ref{tab:ttest_results}.
\begin{table}
\begin{center}
{\bf Prior Probabilities}:\\
\vspace{0.3cm}
\begin{tabular}{|l|c|c|c|}
\hline
Prior & $\mu_1 < \mu_2$ & $\mu_1 = \mu_2$ & $\mu_1 > \mu_2$\\
\hline
1 & 0.5 & 0 & 0.5\\
2 & 0.25 & 0.5 & 0.25\\
3 & 0.5 & 0 & 0.5\\
\hline
\end{tabular}\\
\vspace{0.5cm}
{\bf Posterior Probabilities}:\\
\vspace{0.3cm}
\begin{tabular}{|l|c|c|c|}
\hline
Prior & $\mu_1 < \mu_2$ & $\mu_1 = \mu_2$ & $\mu_1 > \mu_2$\\
\hline
1 & 0.945 & 0 & 0.055\\
2 & 0.491 & 0.430 & 0.079\\
3 & 0.629 & 0 & 0.372\\
\hline
\end{tabular}
\caption{\it Prior and posterior probabilities for three different hypotheses about
the two manufacturers, based on the models with the three different priors.
As you can see, the conclusions are quite sensitive to the choice of prior in
this case.
\label{tab:ttest_results}}
\end{center}
\end{table}
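For reference, probabilities like those in the table are just averages over the posterior samples. A minimal sketch in R, assuming the posterior samples from the Prior 2 model are stored in a data frame called {\tt results} with columns {\tt mu1} and {\tt mu2} (the same name, {\tt results}, is used for the output in the starling example later on):
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Posterior probabilities of the three hypotheses
mean(results$mu1 < results$mu2)
mean(results$mu1 == results$mu2)  # can only be nonzero under Prior 2
mean(results$mu1 > results$mu2)
\end{minted}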
Remember that Prior 1 did not assign any probability to the possibility of the
two parameters being equal. Therefore, no possible evidence can make the
posterior probability nonzero. However, according to this model, there
is quite strong evidence that $\mu_1 < \mu_2$, as the probability changed from
0.5 to 0.945.
Prior 2 did allow the two parameters to be equal, and if we use Prior 2, we
seem to have found very weak evidence that they are not in fact equal. The probability
decreased from 0.5 to 0.430. According to Prior 2, if $\mu_1 \neq \mu_2$, then
$\mu_1 < \mu_2$ is the next most likely scenario. However, Prior 2 has an issue
associated with it. Our prior says that if $\mu_2$ is not equal to $\mu_1$, then
it is likely to be close to $\mu_1$. Exactly how close we expect it to be is
set by the variable {\tt L} in the model.
If we were to make {\tt L} very large, then the data would go from weak evidence
against $\mu_1 = \mu_2$ to strong evidence for it! Why does this happen? Well,
if we increased {\tt L}, the prior probability that $\mu_2$ and $\mu_1$ are
close given that they're different is decreased. Then, the hypothesis that
the $\mu$s are different does not predict our data as well, since our data
looks like the $\mu$s are close together. Since it doesn't predict the data as
well as before, its posterior probability will be lower.
Some people think this sensitivity to the prior is a
danger of Bayesian inference (if you want, you can do a web search for the
``Jeffreys-Lindley paradox''),
but it is behaving logically: the wider we make
the prior, the lower we make the prior probability that $\mu_1$ and $\mu_2$
are close but not equal, giving the model no choice but to believe that they're
equal. If the results are sensitive to the prior, that's important, and you
should think about the logic of the problem to understand why.
Prior 3 seems like it's what we might want in general. It's often silly to think
two parameters might be {\it exactly} equal. What we really think is that there
is a difference, and it might be very small, or moderate or large.
\section{One-Way ANOVA}
One-way ANOVA can be considered as a generalisation of a t-test to more than
two groups. The question is usually phrased as a test of the hypothesis that
the group means are the same, versus the alternative that there is some difference.
As we saw in the Bayesian ``t-test'', it is possible (using clever tricks) to
make a model that has some prior probability that the group means are equal.
However, this gets more tricky with multiple groups. Therefore we will build our
one-way ANOVA model in a similar way to the ``hierarchical model'' version of the
t-test model. There will be one other major difference, but it is a difference
in the way the model is coded, not a conceptual difference.
In the t-test section our data set was composed of measurements in two groups
and our data list contained two vectors of measurements, called {\tt x1} and
{\tt x2}. The sampling distribution/likelihood part of our JAGS model also
needed two {\tt for} loops, one for each group.
If we have many groups (in the following example we will have four),
it can get awkward having to write all those loops. Therefore, when we develop
our ``one-way ANOVA'' model, we will format the data differently by putting
all measurements into a single vector {\tt x}. To make this work, we'll need
an extra vector in the dataset, which tells us which group each data point
belongs to.
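For example, the data list might be laid out like this (the numbers are made up purely to illustrate the format):
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Each measurement goes in x, and group[i] says which group it belongs to
data = list(x = c(78.1, 84.5, 79.2, 90.3, 87.6, 92.1),
            group = c(1, 1, 2, 2, 3, 3),
            N = 6,         # number of measurements
            N_groups = 3)  # number of groups
\end{minted}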
We'll use an example dataset on the masses of starlings (a type of bird).
The masses of some starlings were measured at four locations. We are interested
in the differences between the locations. How similar are they in terms of the
average weight of starlings? Are they basically the same, radically different,
or something in between? A boxplot of the data is shown in
Figure~\ref{fig:starling}, which seems to show substantial differences between
the locations. However, only ten starlings were measured at each location, so
we can't be absolutely sure of this, and our goal is to investigate how
sure we should be.
To solve this problem in a Bayesian way, we will treat it as a parameter
estimation problem with four $\mu$ parameters, one for each of the locations.
We will also need at least one parameter describing the standard deviation
of the starling masses at each location. For convenience we'll assume that's
the same across all locations, but it is straightforward to relax this
assumption later.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{Figures/starling.pdf}
\caption{\it The masses of starlings as measured at four different locations.
It seems as though the mean mass varies somewhat according to the location, and
our results will tell us how plausible this is.
\label{fig:starling}}
\end{center}
\end{figure}
\subsection{Hierarchical Model}
Our ``one-way ANOVA'' model is very much the same as our final ``t-test'' model,
except for the format of the dataset.
The main advantage of this model is that it generalises to more than two groups
in a very straightforward way; we no longer need to write separate for loops
for each group. As with the third ``t-test'' model, we are not seriously
considering the hypothesis that all of the group means (i.e. the $\mu$
parameters) are exactly equal, but we are allowing them to be quite close
together, or quite distinct in value, by using the hierarchical model
structure with the {\tt diversity} parameter.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
model
{
# Log-uniform prior for the scatter
log_sigma ~ dunif(-10, 10)
sigma <- exp(log_sigma)
# Hierarchical prior for the means
# Hyperparameters
grand_mean ~ dnorm(0, 1/1000^2)
log_diversity ~ dunif(-10, 10)
diversity <- exp(log_diversity)
# Parameters (one mean per group; N_groups is the number of groups
# and must be supplied in the data list)
for(i in 1:N_groups)
{
mu[i] ~ dnorm(grand_mean, 1/diversity^2)
}
# Sampling distribution/likelihood
for(i in 1:N)
{
x[i] ~ dnorm(mu[group[i]], 1/sigma^2)
}
}
\end{minted}
After running this model on the starling data, we can plot any results we
wish. In Figure~\ref{fig:trace_starlings}, I have plotted a trace plot of
$\mu_1$, the parameter for the mean weight of starlings at location 1. This
is a healthy trace plot, although there is a strange feature near iteration
1000 which we will discuss in the next section. Figure~\ref{fig:diversity}
shows the posterior distribution for the {\tt log\_diversity} hyperparameter,
which quantifies how different the groups really are. Our prior for this
parameter was U(-10, 10), and the posterior peaks at around 1.5, which
corresponds to {\tt diversity} $\approx$ 4.5, although there is a fair bit of
uncertainty. Notice also the long tail of the posterior on the left hand side.
Although we never allowed the $\mu$s to be exactly the same, we did allow them
to be close (and this corresponds to the diversity being low). The fact that
some posterior samples landed between -10 and 0 suggests there is a small
probability that the differences between groups are very small, despite the
fact that the data doesn't look that way.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{Figures/trace_starlings.pdf}
\caption{\it A trace plot of $\mu_1$ from a JAGS run on the starling data. Things
appear to be mixing well, except for an odd feature near iteration 1000.
\label{fig:trace_starlings}}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{Figures/diversity.pdf}
\caption{\it The posterior distribution for {\tt log\_diversity}.\label{fig:diversity}}
\end{center}
\end{figure}
As usual, we can use our posterior samples to calculate the posterior
probability of any hypothesis that we can think of based on the parameters.
Here are a couple of interesting examples:
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Is mu2 really greater than mu3?
> mean(results$mu[,2] > results$mu[,3])
[1] 0.672
# Is the diversity really less than 1 (log diversity less than 0)?
> mean(results$log_diversity < 0)
[1] 0.0272
\end{minted}
\subsection{MCMC Efficiency}
The hierarchical ``one-way ANOVA'' model given above works, but a quick look
at the trace plot suggests the mixing (how easily the MCMC algorithm is
able to move around) did have some difficulties
(see Figure~\ref{fig:trace_starlings}). In this particular example the problem
wasn't fatal, but this problem could be more severe with a different data set.
In some models, it makes sense to
consider the {\it parameterisation} of the model. There are actually different
ways to implement exactly the same model assumptions, but in a way that helps
the efficiency of the MCMC sampling. This is done by changing which parameters
are defined by ``{\verb|~|}'' and which are defined by ``{\tt <-}'', in a way
that keeps the meaning of the model intact, but forces JAGS to do the
exploration differently.
Let's look at a small subset of the above model: just the hierarchical prior
for the $\mu$s. Here it is:
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Hierarchical prior for the means
# Hyperparameters
grand_mean ~ dnorm(0, 1/1000^2)
log_diversity ~ dunif(-10, 10)
diversity <- exp(log_diversity)
# Parameters (one mean per group)
for(i in 1:N_groups)
{
mu[i] ~ dnorm(grand_mean, 1/diversity^2)
}
\end{minted}
To understand why this causes problems, we need to understand a little about
how JAGS works internally. JAGS uses two main MCMC methods, known as
{\it Gibbs Sampling} and {\it Slice Sampling}. Both of these methods usually
work by updating a single parameter or hyperparameter at a time, while keeping
all of the others fixed. Because JAGS is sampling the posterior, each
parameter will tend to move about as far as it can without the new value
becoming inconsistent with the data or the (joint) prior distribution. But
the above model doesn't just have problems exploring the posterior efficiently,
but would also have problems exploring the prior!
For example, if the current values of {\tt grand\_mean} and {\tt diversity}
are 50 and 5, and JAGS is moving the parameter {\tt mu[3]}, it will probably
move it to somewhere within the range 50 $\pm$ 5, roughly speaking, since
the prior for {\tt mu[3]} given {\tt grand\_mean}=50 and {\tt diversity}=5
is Normal($50, 5^2$). But another possibility that is (speaking loosely again)
compatible with the prior is to have {\tt grand\_mean}=-1500,
{\tt diversity}=10000, and {\tt mu[3]}=5600. How would the sampler move from
having {\tt mu[3]}=5 to having {\tt mu[3]}=5600? It certainly couldn't do this
while {\tt diversity} was still 5. Somehow, {\tt diversity} would have to be
much greater than 5. Yet when the sampler tries to increase the value of
{\tt diversity}, it won't be able to move very far, because that would make
it inconsistent with the values of the other {\tt mu} parameters!
Many MCMC methods (and importantly for us, the ones used by JAGS)
are inefficient when the posterior distribution has strong
{\it dependence} between different parameters. Unfortunately, in our one-way
ANOVA model, it's not just the posterior that has strong dependence, but
even the prior has strong dependence!
\subsection{An Alternative Parameterisation}
We will now look at an alternative way of implementing the hierarchical model,
that entails exactly the same assumptions (the same prior distributions and
sampling distribution), yet has computational advantages.
The alternative parameterisation is given below.
\begin{minted}[mathescape,
numbersep=5pt,
gobble=0,
frame=single,
framesep=2mm, fontsize=\small]{r}
# Hierarchical prior for the means
# Hyperparameters
grand_mean ~ dnorm(0, 1/1000^2)
log_diversity ~ dunif(-10, 10)
diversity <- exp(log_diversity)
# Parameters (one mean per group)
for(i in 1:N_groups)
{
n[i] ~ dnorm(0, 1)
mu[i] <- grand_mean + diversity*n[i]
}
\end{minted}
The only difference between this implementation and the original is the part
within the loop. Instead of defining the prior for the $\mu$s directly, we
have defined different parameters called {\tt n}, with standard normal priors.
We then compute the {\tt mu}s deterministically from the {\tt n}s. In this
alternative parameterisation, the prior for the {\tt n}s is completely
independent of {\tt grand\_mean} and {\tt diversity}, so sampling from the
prior would be extremely efficient, yet the implied prior for the {\tt mu}s
is exactly the same as before. Of course, the posterior (what we actually want
to sample) will still probably have dependence, but hopefully less.
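To see that the assumptions really are unchanged, write $g$ for {\tt grand\_mean} and $d$ for {\tt diversity}. If $n_i \sim \mathcal{N}(0, 1)$ and we define $\mu_i = g + d \, n_i$, then, given the hyperparameters, $\mu_i \sim \mathcal{N}(g, d^2)$, which is exactly the prior we assigned to the $\mu$s in the original parameterisation.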
Running this new version of the model on the starling data gives the trace
plot in Figure~\ref{fig:trace_starlings2}, which doesn't have any strange
features.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{Figures/trace_starlings2.pdf}
\caption{\it A trace plot of the parameter $\mu_1$ using the revised model.
\label{fig:trace_starlings2}}
\end{center}
\end{figure}
| {
"alphanum_fraction": 0.7400664239,
"avg_line_length": 44.8671454219,
"ext": "tex",
"hexsha": "86e73361f0f58639422d3851a7ab2c788c9be31a",
"lang": "TeX",
"max_forks_count": 11,
"max_forks_repo_forks_event_max_datetime": "2020-06-04T20:04:47.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-07-29T14:34:51.000Z",
"max_forks_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "xulinpan/stat331",
"max_forks_repo_path": "anova.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5",
"max_issues_repo_issues_event_max_datetime": "2015-07-10T08:48:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-07T05:00:32.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "xulinpan/stat331",
"max_issues_repo_path": "anova.tex",
"max_line_length": 103,
"max_stars_count": 55,
"max_stars_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "xulinpan/stat331",
"max_stars_repo_path": "anova.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-25T03:36:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-03-09T18:03:23.000Z",
"num_tokens": 6699,
"size": 24991
} |
\section{Extending \UTXO}
\label{sec:informal-eutxo}
Various forms of state machines have been proposed to characterise smart contract functionality that goes beyond what is possible with the basic \UTXO{} model --- see, for example, \cite{fsolidm,scilla} using Ethereum's account-based model. However, we might wonder whether we can extend the basic \UTXO{} model in such a way as to support more expressive state machines without switching to an account-based model.
Given that we can regard the individual transactions in a continuous chain of transactions as individual steps in the evolution of a state machine, we require two pieces of additional functionality from the \UTXO{} model:
\begin{inparaenum}[(a)]
\item we need to be able to maintain the machine state, and
\item we need to be able to enforce that the same contract code is used along the entire sequence of transactions --- we call this \emph{contract continuity}.
\end{inparaenum}
To maintain the machine state, we extend \UTXO{} outputs from being a
pair of a validator $\nu$ and a cryptocurrency value $\val$ to being a
triple \((\nu, \val, \delta)\) of validator, value, and a
\textit{datum} $\delta$, where $\delta$ contains arbitrary
contract-specific data. Furthermore, to enable validators to enforce
contract continuity, we pass the entirety of the transaction that
attempts to spend the output locked by a validator to the validator
invocation. Thus a validator can inspect the transaction that
attempts to spend its output and, in particular, it can ensure that the
contract output of that transaction uses validator code belonging to
the same contract --- often, this will be the same validator. Overall,
to check that an input with redeemer $\rho$ that is part of the
transaction $\mi{tx}$ is entitled to spend an output \((\nu, \val,
\delta)\), we check that \(\nu(\val, \delta, \rho, \mi{tx}) = \true\).
As we are allowing arbitrary data in $\delta$ and we enable the validator
$\nu$ to impose arbitrary validity constraints on the consuming
transaction $\mi{tx}$, the resulting \ExUTXO{} (\EUTXO{}) model goes
beyond enabling state machines. However, in this paper we restrict
ourselves to the implementation of state machines and leave the
investigation of further-reaching computational patterns to future
work.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{EUTxO_MultiSig_States.pdf}
\caption{Transition diagram for the multi-signature state machine; edges labelled with input from redeemer and transition constraints.}
\label{fig:multisig-machine}
\end{figure}
%
As a simple example of a state machine contract consider an $n$--of--$m$
multi-signature contract. Specifically, we have a given amount
$\val_\msc$ of some cryptocurrency and we require the approval of at
least $n$ out of an a priori fixed set of $m \geq n$ owners to spend
$\val_\msc$. With plain \UTXO{} (e.g., on Bitcoin), a multi-signature
scheme requires out-of-band (off-chain) communication to collect all
$n$ signatures to spend $\val_\msc$. On Ethereum, and also in the
\EUTXO{} model, we can collect the signatures on-chain, without any
out-of-band communication. To do so, we use a state machine operating
according to the transition diagram in
Figure~\ref{fig:multisig-machine}, where we assume that the threshold
$n$ and authorised signatures $\sigs_\auth$ with \(|\sigs_\auth| = m\)
are baked into the contract code.
In its implementation in the \EUTXO{} model, we use a validator
function $\nu_\msc$ accompanied by the datum $\delta_\msc$ to lock
$\val_\msc$. The datum $\delta_\msc$ stores the machine state,
which is of the form \(\Holding\) when only holding the locked value
or \(\Collecting{(\val, \kappa, d)}{\sigs}\) when collecting
signatures $\sigs$ for a payment of $\val$ to $\kappa$ by the deadline
$d$. The initial output for the contract is \((\nu_\msc, \val_\msc,
\Holding)\).
The validator $\nu_\msc$ implements the state transition diagram from
Figure~\ref{fig:multisig-machine} by using the redeemer of the spending input to determine the transition that needs to be taken. That redeemer (state machine input) can take four forms:
\begin{inparaenum}[(1)]
\item \(\Propose{\val, \kappa, d}\) to propose a payment of $\val$ to $\kappa$
by the deadline $d$,
\item \(\Add{\sig}\) to add a signature $\sig$ to a payment,
\item $\Cancel$ to cancel a proposal after its deadline expired, and
\item $\Pay$ to make a payment once all required signatures have been collected.
\end{inparaenum}
It then validates that the spending transaction $\mi{tx}$ is a valid
representation of the newly reached machine state. This implies that
$\mi{tx}$ needs to keep $\val_\msc$ locked by $\nu_\msc$ and that the
state in the datum $\delta^{\prime}_\msc$ needs to be the successor state
of $\delta_\msc$ according to the transition diagram.
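As a rough illustration only (this is not the notation of the formal development, and the types are simplified stand-ins), the datum and redeemer of the multi-signature machine could be represented by datatypes along the following lines:
{\small
\begin{verbatim}
-- Illustrative sketch; all types simplified
type Value     = Integer   -- amount of cryptocurrency
type Key       = String    -- payment recipient or signatory
type Deadline  = Integer   -- slot or time value
type Signature = String

-- Machine state, stored in the datum
data State = Holding
           | Collecting Value Key Deadline [Signature]

-- Machine input, supplied via the redeemer
data Input = Propose Value Key Deadline
           | Add Signature
           | Cancel
           | Pay
\end{verbatim}
}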
The increased expressiveness of the \EUTXO{} model goes far beyond
simple contracts such as this on-chain multi-signature contract. For
example, the complete functionality of the Marlowe domain-specific
language for financial contracts~\cite{marlowe} has been successfully
implemented as a state machine on the \EUTXO{} model.
| {
"alphanum_fraction": 0.7627511592,
"avg_line_length": 60.8941176471,
"ext": "tex",
"hexsha": "b689303bca7c984100bc53627418b3a19b752637",
"lang": "TeX",
"max_forks_count": 399,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T11:18:25.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-10-05T09:36:10.000Z",
"max_forks_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "AriFordsham/plutus",
"max_forks_repo_path": "papers/eutxo/informal-eutxo.tex",
"max_issues_count": 2493,
"max_issues_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T15:31:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-09-28T19:28:17.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "AriFordsham/plutus",
"max_issues_repo_path": "papers/eutxo/informal-eutxo.tex",
"max_line_length": 416,
"max_stars_count": 1299,
"max_stars_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "AriFordsham/plutus",
"max_stars_repo_path": "papers/eutxo/informal-eutxo.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-28T01:10:02.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-02T13:41:39.000Z",
"num_tokens": 1318,
"size": 5176
} |
\chapter{Type theory (old computation book chapter draft)}
The type theory described here is inspired by
Martin-L\"of type theory
and the Haskell programming language.
The type theory is constructive (intuitionistic) and dependent.
\section{Notation}
If \(a\) is a type, then \(|a|\) is the \emph{cardinality} of \(a\).
It is the number of inhabitants of that type.
\section{Common types}
\begin{itemize}
\item
$\Void$ is the empty type.
It has no inhabitant.
\item
$\Bool$ is the type of boolean values.
This type has two inhabitants: $\true$ and $\false$.
\item
$\Bit$ has two inhabitants: $0$ and $1$.
\item
$\Fun{a}{b}$ is the type of functions from $a$ to $b$.
The function can be partial.
This type can also be written $a \to b$.
\item
$\Pred{a}$ is the type of
logical predicates about objects of type $a$.
We define
\[ \Pred{a} = \Fun{a}{\Bool}. \]
\item
$\Nat$ is the type of natural numbers.
This type is defined using Peano axioms.
\item
$\Set{a}$ is the type of the set of elements of type $a$.
\item $\List{a} = \Kleene{a}$ is the Kleene closure of $a$.
A list of $a$ is an ordered collection of elements of type $a$, with duplicates allowed.
A list can also be viewed as a function from (zero-based) indices to its elements, so we can write $[x,y,z] \, 1 = y$.
\item $\Bits$ is the type of bitstrings.
\[
\Bits = \Kleene{\Bit}
\]
\item $\Either{a}{b}$ is the sum type or the union type
that consists of $\Left{x}$ for all $x : a$
and $\Right{y}$ for all $y : b$.
\item $\Pair{a}{b}$ is the product type
that consists of $(x,y)$ for all $x : a$ and $y : b$.
\item
$\Vect{a}{n}$ is the type of vectors
whose element type is $a$ and whose length is $n : \Nat$:
\[ \Vect{a}{n} = \Fun{(I n)}{a} \]
where
\[ I n = \{ 0, 1, 2, \ldots, n - 1 \} \]
or alternatively using predicate logic
\[ \Fa{x} (x : \Nat \wedge x < n \iff x : I n) \]
\item $\Typ$ is the type of types.
This implies
\[ \Typ : \Typ \]
(Does $\Typ : \Typ$ imply inconsistency?
Russell's paradox?
Unrestricted comprehension?
In fact, Girard's paradox shows that $\Typ : \Typ$ does make the theory inconsistent, which is why Martin-L\"of type theory uses a hierarchy of universes instead.)
This means that $\sfSet$ and $\sfRel$ can be thought of as the \emph{type functions}
\begin{align*}
\sfSet &: \Typ \to \Typ,
\\
\sfRel &: \Typ \to \Typ \to \Typ.
\end{align*}
\end{itemize}
The cardinality of $\Nat$ is $\aleph_0$ (aleph-null).
\section{Cardinality of types}
$|\List{a}| = |\Fun{\Nat}{a}|$.
\(|a \to b| = |b|^{|a|}\)
If a type has finitely many inhabitants,
then the cardinality of that type is the number of its inhabitants.
If there is a bijection between two types,
then those types have the same cardinality.
Type-theoretic restatement of Cantor's theorem?
There is no bijection between $a$ and $\Set{a}$.
$|a| < |\Set{a}|$.
This gives a kind of ordering on cardinalities:
\begin{itemize}
\item There is an injection from $a$ to $b$ if and only if $|a| \le |b|$.
\item There is a surjection from $a$ to $b$ if and only if $|a| \ge |b|$.
\item There is a bijection between $a$ and $b$ if and only if $|a| = |b|$.
\end{itemize}
Two sets are equinumerous (have the same cardinality) if and only if there is a bijection between them.
$|\sfT a| = |\sfT (\Set{a})|$?
\section{Cardinality theorems}
\begin{mthm}
\[
|a| \lneq |\Set{a}|
\]
\begin{proof}
This was proved by Georg Cantor using his diagonal argument:
the cardinality of a set is strictly less than that of its power set.
This is also what makes the beth numbers strictly increasing:
$\beth_n \lneq \beth_{n+1}$ for each natural number $n$.
\end{proof}
\end{mthm}
\begin{mthm}[Equinumerosity among one-parameter types]
For each $a$, all these types have the same cardinality:
$\Pred{a}$, $\Set{a}$.
\begin{proof}
Let $p : \Pred{a}$ be a predicate and $s : \Set{a}$ be a set.
We define $p$ and $s$ such that each object that satisfies the predicate $p$ is an element of the set $s$
and also such that each element of the set $s$ satisfies the predicate $p$.
\begin{align*}
F p &= \{ x \,|\, p x \}
\\
G s &= \lambda x \to x \in s
\end{align*}
The relationship is
\[ \FA{x} (p x \iff x \in s) \]
But what if $p x = x \not\in S$.
Or what if $p x = \neg\exists S ~ x \in S$?
Or what if $p x = \Fa{S} x \in S$?
Or what if $p x = x \in x$?
What if $p x = \neg (p x)$?
Isn't this prone to Russell's paradox?
Unrestricted comprehension?
FIXME?
Or is this not prone?
$p$ cannot refer to $s$?
Can it?
\end{proof}
\end{mthm}
Thus a predicate is a set and a set is a predicate.
It turns out that there is a name for this concept:
that set is the \emph{extension} of that predicate.
If $p$ is a predicate, then $p$ is also a set,
so we can write $x \in p$ to mean that $p x$ is true.
What if we assume that a predicate is equal to its own extension?
Now we make a bold but reasonable claim:
a predicate \emph{is} a set and a set \emph{is} a predicate.
This has some interesting consequences.
If we assume the equality, then $p$ becomes a fixed point of $\phi \mu$.
To see this, we have to define several functions.
Let $\phi$ be the flip combinator, that is $\phi f x y = f y x$.
Let $\mu$ be the set membership function, that is $\mu x y = x \in y$.
Recall that the $\eta$-reduction transforms $p x = q x$ to $p = q$.
\begin{align*}
p x &= x \in s
\\
&= \mu x s
\\
&= \phi \mu s x
\\
p &= \phi \mu s
\\
p &= s
\\
p &= \phi \mu p
\\
p &= \phi \mu (\phi \mu p)
\\
&= \phi \mu (\phi \mu (\phi \mu p))
\\
&= \ldots
\end{align*}
That implies that we can write strange but provable things like these:
\begin{align*}
1 \in \{0,1,2\} &= \{0,1,2\} 1 = \true
\\
3 \in \{0,1,2\} &= \{0,1,2\} 3 = \false
\\
(\lambda x \to x = 1) 1 &= 1 \in (\lambda x \to x = 1) = \true
\end{align*}
but this can be confusing at first.
Should we distinguish predicate and set?
Should we treat them as the same thing?
The membership operator $\in$ becomes swapped function application.
We can even generalize the notation $f x = x \in f$ to every function $f : a \to b$, not just predicates.
Let $f x = x + 1$. Then $f 0 = 0 \in f = 1$.
This may need some effort and time to get accustomed to,
but once you master it, you will be another mathematician.
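As an informal illustration of this identification, here is a small sketch in Haskell notation (ignoring the paradox-related questions raised above and sticking to ordinary total functions):
\begin{verbatim}
-- A "set" of a is just its characteristic predicate
type Set a = a -> Bool

-- Membership is (flipped) function application
member :: a -> Set a -> Bool
member x s = s x

example :: Bool
example = member 1 (\x -> x == (1 :: Integer))   -- True
\end{verbatim}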
\begin{mthm}[Equinumerosity among two-parameter types]
All these types have the same cardinality:
\begin{itemize}
\item $\Relab{a}{b}$, $\Relab{b}{a}$
\item $\Pred(a,b)$, $\Pred(b,a)$
\item $\Set{(a,b)}$, $\Set{(b,a)}$
\item $\Fun{a}{(\Set{b})}$, $\Fun{(\Set{a})}{b}$
\end{itemize}
\begin{proof}
Proving $|\Relab{a}{b}| = |\Relab{b}{a}|$ is simple.
Proving $|\Pred(a,b)| = |\Pred(b,a)|$ is simple.
$r : \Relab{a}{b}$ and $p : \Pred(a,b)$ and $f : a \to b \to \Bool$.
$r$ relates $x$ to $y$ if and only if $p(x,y)$ is true.
\begin{align*}
p z &= z \in r
\\ &= \mu z r
\\ &= \phi \mu r z
\\
p &= \phi \mu r
\end{align*}
Then let $p = r$.
Since there is a bijection between $\Pred{(a,b)}$ and $\Relab{a}{b}$
and between $\Pred{a}$ and $\Set{a}$,
there is a bijection between $\Relab{a}{b}$ and $\Set{(a,b)}$.
To prove that there is a bijection between $\Relab{a}{b}$ and $\Fun{a}{(\Set{b})}$,
we choose any $r : \Relab{a}{b}$ that is a relation
from objects of type $a$ to objects of type $b$.
Define the \emph{image of $x$ in $r$} as
$i r x = \{ y \,|\, \text{$r$ relates $x$ to $y$} \}$
where the type of $i$ is $\Relab{a}{b} \to a \to \Set{b}$.
We define the \emph{relation functionization} function $F$ as
\[ F r = \{ (x,Y) \,|\, i r x = Y \} \]
we capitalize $Y$ to highlight the fact that it is a set.
$G : \Fun{a}{(\Set{b})} \to \Relab{a}{b}$ is the \emph{function relationization} function.
\[ G f = \{ (x,y) \,|\, y \in f x \} \]
$G f$ relates $x$ to $y$ iff $y \in f x$.
We can see that $F(G f) = f$ and $G(F r) = r$.
Thus $F \circ G$ is the identity of $\Fun{a}{(\Set{b})}$
and $G \circ F$ is the identity of $\Relab{a}{b}$.
Thus $F$ and $G$ are inverses of each other.
???
\end{proof}
\end{mthm}
There is a mapping from $\Fun{a}{b}$ to $\Relab{a}{b}$.
There is a bijection between $\Relab{a}{b}$ and $\Relab{b}{a}$.
There is a bijection between $\Relab{b}{a}$ and $\Fun{b}{(\Set{a})}$.
This means that there is a bijection between $\Fun{a}{(\Set{b})}$ and $\Fun{b}{(\Set{a})}$.
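To make the functionization and relationization maps from the proof concrete, here is a sketch in Haskell notation, with lists standing in for sets (purely illustrative; the explicit {\tt domain} argument is an extra ingredient needed in this finite setting):
\begin{verbatim}
-- A relation from a to b, represented as a list of pairs
type Rel a b = [(a, b)]

-- F: turn a relation into the function sending x to its image
functionize :: Eq a => Rel a b -> (a -> [b])
functionize r x = [y | (x', y) <- r, x' == x]

-- G: turn such a function (over a given domain) back into a relation
relationize :: [a] -> (a -> [b]) -> Rel a b
relationize domain f = [(x, y) | x <- domain, y <- f x]
\end{verbatim}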
| {
"alphanum_fraction": 0.5633929576,
"avg_line_length": 35.4143426295,
"ext": "tex",
"hexsha": "d1b1498ff5ddcbb7f86531978017397d2f9524c9",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z",
"max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0",
"max_forks_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_forks_repo_name": "edom/work",
"max_forks_repo_path": "research/type-old.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0",
"max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z",
"max_issues_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_issues_repo_name": "edom/work",
"max_issues_repo_path": "research/type-old.tex",
"max_line_length": 113,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0",
"max_stars_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_stars_repo_name": "edom/work",
"max_stars_repo_path": "research/type-old.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2905,
"size": 8889
} |
\documentclass[a4paper,twoside]{report}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\rtbyear}{2019}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{times}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{pdfpages}
\usepackage{fancyhdr}
\usepackage[overlay]{textpos}
\usepackage{mathptmx}
\usepackage{anyfontsize}
\usepackage{longtable}
\usepackage{amsfonts} % mathbb font
\usepackage{t1enc}
\usepackage{booktabs,longtable}
% hyperlinks
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=black
}
\usepackage{color}
\usepackage{sectsty}
\usepackage{textcomp} % access \textquotesingle
\allsectionsfont{\sffamily}
\makeatletter
\newcommand\funcsection{%
\@startsection{section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\color{red}\sffamily\huge\bfseries}}
\makeatother
%% layout
\def\topfraction{.9}
\def\bottomfraction{.9}
\def\textfraction{.1}
\def\floatpagefraction{.9}
% paragraphs
\usepackage{parskip}
\setlength{\parindent}{0mm}
% space around longtables
\setlength{\LTpre}{6pt}
\setlength{\LTpost}{0pt}
% reduce space around headings
\usepackage[compact]{titlesec}
\titlespacing{\section}{0pt}{*0}{*0}
\titlespacing{\subsection}{0pt}{*0}{*0}
\titlespacing{\subsubsection}{0pt}{*0}{*0}
\renewcommand{\today}{\number\day \space%
\ifcase \month \or January\or February\or March\or April\or May%
\or June\or July\or August\or September\or October\or November\or December\fi, \number \year}
\IfFileExists{../../RELEASE}{\def\release{\input{../../RELEASE}}
\def\reldate{\today}}{\input{release.tex}}
%\usepackage{multind}
\usepackage{multicol}
% printindex stuff
\makeatletter
\def\printindex#1#2{\subsection*{#2}
\@input{#1.ind}}
\def\theindex{\parindent\z@
\parskip\z@ plus .3pt\relax\let\item\@idxitem}
\def\@idxitem{\par\hangindent 40pt}
\def\subitem{\par\hangindent 40pt \hspace*{20pt}}
\def\subsubitem{\par\hangindent 40pt \hspace*{30pt}}
\def\endtheindex{}
\def\indexspace{\par \vskip 10pt plus 5pt minus 3pt\relax}
\makeatother
\usepackage{fancyvrb}
\fvset{formatcom=\color{blue},fontseries=c,fontfamily=courier,xleftmargin=4mm,commentchar=!}
\DefineVerbatimEnvironment{Code}{Verbatim}{formatcom=\color{blue},fontseries=c,fontfamily=courier,fontsize=\footnotesize,xleftmargin=4mm,commentchar=!}
\pagestyle{empty}
\def\Mlab{MATLAB}
\begin{document}
\typeout{**Starting}
\setlength{\TPHorizModule}{1cm}
\setlength{\TPVertModule}{1cm}
%%%%%%%%%%%%%%%%% TITLE PAGE
\begin{textblock}{19}(-3.5,-4.5)
\includepdf{titlepage}
\fontsize{75}{80}\selectfont
\textbf{Spatial Math}\\[5pt]
\fontsize{45}{50}\selectfont
\textbf{Toolbox} for MATLAB\textsuperscript{\textregistered}\\[12pt]
Release \release
\vspace*{18cm}
\hfill\textbf{Peter Corke}\\[5pt]
\end{textblock}
\thispagestyle{empty}
\newpage
%%%%%%%%%%%%%%%%% INSIDE FRONT COVER
\typeout{**Inside front cover}
\vspace*{\fill}
\begin{tabular}{ll}
Release & \release \\
Release date & \reldate \\[20pt]
Licence & MIT \\
Toolbox home page & \url{https://github.com/petercorke/spatial-math} \\
Discussion group & \url{https://tiny.cc/rvcforum}
\end{tabular}
\vspace*{\fill}
\hrule
Copyright \textcopyright \rtbyear\ Peter Corke\\
peter.i.corke$@$gmail.com\\
\url{http://www.petercorke.com}
\vspace*{\fill}
\setlength{\fboxsep}{10pt}%
\typeout{done}
%%%%%%%%%%%%%%%% CONTENT
\pagestyle{headings} % Gives page headings at top of page
\lfoot{Spatial Math Toolbox \release\ for \Mlab\textsuperscript{\textregistered} }
\rfoot{Copyright \copyright Peter Corke \rtbyear}
\setcounter{section}{0}
\addcontentsline{toc}{section}{Preface}
\chapter*{Preface}
\typeout{**Preface front cover}
\pagestyle{fancyplain}
\begin{wrapfigure}{l}{4cm}
\vspace{-2ex}\includegraphics[width=3.5cm]{figs/frontcover.pdf}
\end{wrapfigure}
This is the first release of the Spatial Math Toolbox which has been refactored from the Robotics Toolbox for MATLAB.
The latter represents over twenty-five years of continuous
development and a substantial level of maturity -- a significant part of that code base was concerned with representing position, orientation
and pose in 2D and 3D as well as lines in 3D using Pl\"{u}cker coordinates.
This \Mlab\textsuperscript{\textregistered} Toolbox has a rich collection of functions for manipulating and converting between datatypes such as vectors, rotation matrices, unit-quaternions, quaternions, homogeneous transformations and twists which are necessary to represent position and orientation in 2- and 3-dimensions.
These are useful in the study of robotics and computer vision, but also for other fields of engineering and physics.
The Toolbox makes strong use of classes to represent many of the mathematical objects and also
includes Simulink\textsuperscript{\textregistered} blocks for some conversions.
The code is written in a straightforward manner which allows
for easy understanding, perhaps at the expense of computational efficiency.
If you feel strongly about computational efficiency then you can always
rewrite the function to be more efficient,
compile the M-file using the \Mlab\ compiler, or
create a MEX version.
The bulk of this manual is auto-generated from the comments in the \Mlab\ code itself.
For elaboration on the underlying principles, extensive illustrations and worked examples please consult
``\textit{Robotics, Vision \& Control}'' which provides a detailed discussion (720 pages, nearly 500 figures and over 1000 code examples) of how to use the Toolbox functions to solve many types of problems in robotics.
This version corresponds to the \textbf{second edition} of the book ``\textit{Robotics, Vision \& Control}'' published in June 2017 -- aka RVC2.
%\cleardoublepage
%\chapter*{Functions by category}
%\addcontentsline{toc}{section}{Functions by category}
%\begin{multicols}{2}
%\IfFileExists{funcidx_body.tex}{\input{funcidx_body.tex}}{}
%\end{multicols}
\cleardoublepage
\tableofcontents
\newpage
\chapter{Introduction}
As already mentioned this code has been refactored from the Robotics Toolbox for MATLAB. As that Toolbox evolved there has been
increasing adoption of classes, even for objects like rotation matrices and homogeneous transformation matrices which can be
represented easily using native MATLAB matrices. The motivations for this are:
\begin{enumerate}
\item Classes ensure type safety. For example, a $3\times3$ matrix could be either an SO(3) rotation matrix or an SE(2) homogeneous transformation, and an operation such as taking the transpose of an SE(3) homogeneous transformation is invalid.
Overloaded class operators ensure that only valid operations can be performed.
\item The classes support more descriptive constructors with names like \texttt{SO3.eul} which constructs an SO(3) object from Euler angles.
\item A sequence, or trajectory, using native matrices has to be represented by a 3-dimensional matrix, e.g. $4\times 4 \times N$. Using objects we can instead represent this as a 1-dimensional vector of objects.
\end{enumerate}
In RTB10 a set of classes have been introduced to represent orientation and pose in 2D and 3D: \texttt{SO2}, \texttt{SE2}, \texttt{SO3}, \texttt{SE3}, \texttt{Twist} and \texttt{UnitQuaternion}. These classes are fairly polymorphic, that is, they share many methods and operators\footnote{For example, you could substitute objects of class \texttt{SO3} and \texttt{UnitQuaternion} with minimal code change.}. All have a number of static methods that serve as constructors from particular representations. A trajectory is represented by a vector of these objects which makes code easier to read and
understand. Overloaded operators are used so the classes behave in a similar way to native matrices\footnote{The capability is extended so that we can element-wise multiple two vectors of transforms, multiply one transform over a vector of transforms or a set of points.}.
The relationship between the classical Toolbox functions and the new classes are shown in Fig \ref{fig:newfunctions}.
You can continue to use the classical functions. The new classes have methods with the names of classical functions to provide similar functionality. For instance
\begin{Code}
>> T = transl(1,2,3); % create a 4x4 matrix
>> trprint(T) % invoke the function trprint
>> T = SE3(1,2,3); % create an SE3 object
>> trprint(T) % invoke the method trprint
>> T.T % the equivalent 4x4 matrix
>> double(T) % the equivalent 4x4 matrix
\end{Code}
\begin{Code}
>> T = SE3(1,2,3); % create a pure translation SE3 object
>> T2 = T*T; % the result is an SE3 object
>> T3 = trinterp(T, T2, 5); % create a vector of five SE3 objects between T and T2
>> T3(1) % the first element of the vector
>> T3*T % each element of T3 multiplies T, giving a vector of five SE3 objects
\end{Code}
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{figs/CT-02-03.eps}
\includegraphics[width=\textwidth]{figs/CT-02-02.eps}
\caption{(top) new and classic methods for representing orientation and pose, (bottom) functions and methods to convert
between representations. Reproduced from ``\textit{Robotics, Vision \& Control, second edition, 2017}''}\label{fig:newfunctions}
\end{figure}
Options to RTB functions can now be strings\footnote{Introduced from MATLAB 2016b.} or character arrays, i.e. \texttt{rotx(45, "deg")} or \texttt{rotx(45, 'deg')}.
\section{Installing the Toolbox}
\subsection{Automatically from GitHub}
From MATLAB Desktop or Online use the AddOn Manager on the Home tab, and search for "spatial math" and click on the Spatial Math Toolbox.
It will be installed into the folder \texttt{MATLAB/Add-Ons/Collections/Spatial Math Toolbox/petercorke-spatial-math-xxxx} in your default MATLAB
documents folder\footnote{\texttt{xxxx} is part of git's hash and represents the version number.}.
This also works from MATLAB Online in which case it will be stored in \texttt{/MATLAB Add-Ons/Collections/Spatial Math Toolbox/petercorke-spatial-math-xxxx}.
The Toolbox will be automatically added to the end of your path. If you also have the Phased Array System Toolbox installed, then note that some of the Spatial Math functions will be shadowed. To check for this run
\begin{Code}
>> which rotx
\end{Code}
If this indicates a path other than the Spatial Math Toolbox folder described above, then either:
\begin{enumerate}
\item use \texttt{pathtool} to move the Phased Array System Toolbox to the end of the path
\item remove the Phased Array System Toolbox, if you don't need it, using the AddOn Manager.
\end{enumerate}
\subsection{Manually from GitHub}
Clone the repository to your own computer
\begin{Code}
>> git clone https://github.com/petercorke/spatial-math
\end{Code}
and ensure that the folder \texttt{spatial-math} is added to your MATLAB path.
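For example, from within \Mlab\ (adjust the path to wherever you cloned the repository):
\begin{Code}
>> addpath('spatial-math'); % add the toolbox folder to the path
>> savepath % optionally, remember the path for future sessions
\end{Code}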
\subsection{Notes on implementation and versions}
The Simulink blocks are implemented in Simulink itself with calls to MATLAB code, or as Level-1 S-functions (a prescribed coding format which allows MATLAB functions
to interface with the Simulink simulation engine).
Simulink allows signals to have matrix values but not (yet) object values. Transformations must be represented as matrices, as per the classic functions, not classes.
Very old versions of Simulink (prior to version 4) could only handle scalar signals which limited its usefulness for robotics.
\subsection{Documentation}
This document {\tt spatialmath.pdf} is a comprehensive manual that describes all functions in the Toolbox.
It is auto-generated from the comments in the \Mlab\ code and is fully hyperlinked:
to external web sites, from the table of contents to the functions, and from the ``See also'' lists of functions
to each other.
%The same documentation is available online in
%alphabetical order at \url{http://www.petercorke.com/RTB/r10/html/index_alpha.html}
%or by category at \url{http://www.petercorke.com/RTB/r10/html/index.html}.
%Documentation is also available via the \Mlab\ help browser, under supplemental software, as ``Robotics
%Toolbox".
\section{Compatible MATLAB versions}
The Toolbox has been tested under R2018b and R2019aPRE. Compatibility problems are increasingly likely the older your version of \Mlab\ is.
\section{Use in research}
If the Toolbox helps you in your endeavours then I'd appreciate you citing the Toolbox when you publish.
The details are:
\begin{verbatim}
@book{Corke17a,
Author = {Peter I. Corke},
Note = {ISBN 978-3-319-54413-7},
Edition = {Second},
Publisher = {Springer},
Title = {Robotics, Vision \& Control: Fundamental Algorithms in {MATLAB}},
Year = {2017}}
\end{verbatim}
or
\begin{quote}
P.I. Corke, Robotics, Vision \& Control: Fundamental Algorithms in MATLAB. Second edition. Springer, 2017. ISBN 978-3-319-54413-7.
\end{quote}
which is also given in electronic form in the CITATION file.
\subsection{Octave}
GNU Octave (www.octave.org) is an impressive piece of free software that implements a language that is close to, but not the same as, \Mlab. The Toolboxes currently do not work well with Octave, though as time goes by compatibility improves.
Many Toolbox functions work just fine under Octave, but most classes do not.
For up-to-date information about running the Toolbox with Octave check out the page \url{http://petercorke.com/wordpress/toolboxes/other-languages}.
\section{Support}
There is no support! This software is made freely available in the hope that you find it useful in solving whatever problems
you have to hand.
I am happy to correspond with people who have found genuine
bugs or deficiencies but my response time can be long and I can't guarantee that I respond to your email.
\textbf{I can guarantee that I will not respond to any requests for help with assignments or homework, no matter
how urgent or important they might be to you. That's what your teachers, tutors, lecturers and professors are paid to do.}
You might instead like to communicate with other users via
the Google Group called ``Robotics and Machine Vision Toolbox''
\begin{quote}
\url{http://tiny.cc/rvcforum}
\end{quote}
which is a forum for discussion.
You need to signup in order to post, and the signup process is moderated by me so allow a few
days for this to happen. I need you to write a few words about why you want to join the list
so I can distinguish you from a spammer or a web-bot.
\section{Contributing to the Toolboxes}
I am very happy to accept contributions for inclusion in future versions of the
toolbox. You will, of course, be suitably acknowledged.
\renewcommand{\section}{\funcsection}
%\setcounter{secnumdepth}{-1}
%\settocdepth{section}
\newpage
\chapter{Functions and classes}
\typeout{**Before include}
\IfFileExists{all.tex}{\input{all}}{}
\typeout{**After include}
\bibliographystyle{ieeetr}
\bibliography{strings,robot,control,dynamics,kinematics,force,grind,publist,software}
\end{document}
| {
"alphanum_fraction": 0.7665647299,
"avg_line_length": 43.407079646,
"ext": "tex",
"hexsha": "81d4d7b2e7cc8a0e4371c4832adc3cd9fca984e3",
"lang": "TeX",
"max_forks_count": 37,
"max_forks_repo_forks_event_max_datetime": "2022-03-18T17:03:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-05-19T05:51:11.000Z",
"max_forks_repo_head_hexsha": "6eeff4a79f14286705560b84f1fe72e0b7e0e7f7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rreichel86/spatialmath-matlab",
"max_forks_repo_path": "doc/manual/spatialmath.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "6eeff4a79f14286705560b84f1fe72e0b7e0e7f7",
"max_issues_repo_issues_event_max_datetime": "2020-05-03T22:41:45.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-01-08T21:19:11.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rreichel86/spatialmath-matlab",
"max_issues_repo_path": "doc/manual/spatialmath.tex",
"max_line_length": 600,
"max_stars_count": 99,
"max_stars_repo_head_hexsha": "6eeff4a79f14286705560b84f1fe72e0b7e0e7f7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rreichel86/spatialmath-matlab",
"max_stars_repo_path": "doc/manual/spatialmath.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T07:15:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-09T23:49:49.000Z",
"num_tokens": 3943,
"size": 14715
} |
\section{Results}
Results for two different cases are presented: one for the one-dimensional Ising model and one for the two-dimensional Ising model. We begin by looking at the 1D Ising model.
\subsection{1D Ising model}
\subsubsection{Fitting with linear regression}
For linear regression, the coefficients of $\bm{J}$ in \eqref{eq:1d-ising-linreg} are presented in figure \ref{fig:reg-coef-heatmap},
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda0.001}.pdf}
\caption{$\lambda=10^{-3}$}
\label{fig:linreg-hm-1e-3}
\end{subfigure} \\
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda0.1}.pdf}
\caption{$\lambda=10^{-1}$}
\label{fig:linreg-hm-1e-1}
\end{subfigure} \\
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda10.0}.pdf}
\caption{$\lambda=10^{1}$}
\label{fig:linreg-hm-1e2}
\end{subfigure}
\caption{Heat map plots of the $\bm{J}$ in \eqref{eq:1d-ising-linreg} retrieved from OLS, Ridge and Lasso. Gathered using $N_\mathrm{train}=5000$.}
\label{fig:reg-coef-heatmap}
\end{figure}
The $R^2$ score of the OLS, Ridge and Lasso can be seen in figure \ref{fig:linreg-r2},
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/r2_ols_ridge_lasso.pdf}
\caption{$R^2$ score for Ordinary Least Squares (OLS), Ridge and Lasso regression. Retrieved with $N_\mathrm{train}=5000$ and $N_\mathrm{test}=5000$ on a 1D Ising model of size $L=20$.}
\label{fig:linreg-r2}
\end{figure}
The bias-variance decomposition for Ridge and Lasso using bootstrap and cross validation can be viewed in figure \ref{fig:linreg-bias-variance-decomp-ridge} and \ref{fig:linreg-bias-variance-decomp-lasso}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/ridge_bs_bias_variance_analysis.pdf}
\caption{Bootstrap.}
\label{fig:linreg-bias-variance-decomp-bs-ridge}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/ridge_cv_bias_variance_analysis.pdf}
\caption{$k$-fold Cross Validation.}
\label{fig:linreg-bias-variance-decomp-cv-ridge}
\end{subfigure}
\caption{A bias-variance decomposition of Ridge regression using bootstrapping (\ref{fig:linreg-bias-variance-decomp-bs-ridge}) and cross-validation (\ref{fig:linreg-bias-variance-decomp-cv-ridge}).}
\label{fig:linreg-bias-variance-decomp-ridge}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/lasso_bs_bias_variance_analysis.pdf}
\caption{Bootstrap.}
\label{fig:linreg-bias-variance-decomp-bs-lasso}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/lasso_cv_bias_variance_analysis.pdf}
\caption{$k$-fold Cross Validation.}
\label{fig:linreg-bias-variance-decomp-cv-lasso}
\end{subfigure}
\caption{A bias-variance decomposition of Lasso regression using bootstrapping (\ref{fig:linreg-bias-variance-decomp-bs-lasso}) and cross-validation (\ref{fig:linreg-bias-variance-decomp-cv-lasso}).}
\label{fig:linreg-bias-variance-decomp-lasso}
\end{figure}
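A bootstrap bias-variance decomposition of the kind shown in the figures above can be sketched in a few lines. The snippet below is a minimal illustration only, not our actual implementation; it assumes pre-split design matrices and targets \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, \texttt{y\_test}, and uses scikit-learn's \texttt{Ridge}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge

def bootstrap_bias_variance(X_train, y_train, X_test, y_test,
                            n_bootstraps=100, lmbda=0.1, seed=42):
    # Estimate test MSE, bias^2 and variance of Ridge predictions
    # by refitting on bootstrap resamples of the training data.
    rng = np.random.default_rng(seed)
    y_pred = np.empty((n_bootstraps, len(y_test)))
    for b in range(n_bootstraps):
        idx = rng.integers(0, len(y_train), len(y_train))
        model = Ridge(alpha=lmbda).fit(X_train[idx], y_train[idx])
        y_pred[b] = model.predict(X_test)
    mse = np.mean((y_test[None, :] - y_pred) ** 2)
    bias2 = np.mean((y_test - y_pred.mean(axis=0)) ** 2)
    variance = np.mean(y_pred.var(axis=0))
    return mse, bias2, variance
\end{verbatim}
For Lasso, only the \texttt{Ridge} estimator needs to be swapped out.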
\subsubsection{Fitting with a neural network}
By setting the output activation function to the identity and by having zero hidden layers, we are essentially performing a regression analysis on the 1D Ising model. We generate the same amount of data by inputting the same RNG (random number generator) seed. A fit using $N_\mathrm{train}=400$ and $N_\mathrm{train}=5000$, with $N_\mathrm{test}=5000$, for $\lambda=10^{-3}, 10^{-1}, 10^1$ can be seen in figure \ref{fig:mlp-coefs}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.0001_N800}.pdf}
\caption{$N_\mathrm{train}=400$, $\lambda=10^{-3}$}
\label{fig:mlp-reg-heatmap400-lmb-3}
\end{subfigure} \qquad \qquad \qquad
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.0001_N100000}.pdf}
\caption{$N_\mathrm{train}=5000$, $\lambda=10^{-3}$}
\label{fig:mlp-reg-heatmap5000-lmb-3}
\end{subfigure} \\
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.1_N800}.pdf}
\caption{$N_\mathrm{train}=400$, $\lambda=10^{-1}$}
\label{fig:mlp-reg-heatmap400-lmb-1}
\end{subfigure} \qquad \qquad \qquad
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.1_N100000}.pdf}
\caption{$N_\mathrm{train}=5000$, $\lambda=10^{-1}$}
\label{fig:mlp-reg-heatmap5000-lmb-1}
\end{subfigure} \\
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda10.0_N800}.pdf}
\caption{$N_\mathrm{train}=400$, $\lambda=10^1$}
\label{fig:mlp-reg-heatmap400-lmb1}
\end{subfigure} \qquad \qquad \qquad
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda10.0_N100000}.pdf}
\caption{$N_\mathrm{train}=5000$, $\lambda=10^1$}
\label{fig:mlp-reg-heatmap5000-lmb1}
\end{subfigure} \\
\caption{Heat map plot of the coefficients of $\bm{J}$ in \eqref{eq:1d-ising-linreg} using neural networks with different regularizations for $\lambda=10^{-3}, 10^{-1}, 10^1$.}
\label{fig:mlp-coefs}
\end{figure}
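To make the equivalence between this degenerate network and ordinary linear regression concrete, the following toy sketch (made-up data, purely illustrative) trains a single linear layer on the MSE cost by plain gradient descent and compares it to the closed-form least-squares solution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(400, 20))       # toy design matrix
y = X @ rng.normal(size=20) + 0.01 * rng.normal(size=400)

w = np.zeros(20)            # "network" with zero hidden layers, identity output
eta = 0.1
for epoch in range(2000):   # plain gradient descent on the MSE cost
    w -= eta * 2.0 / len(y) * X.T @ (X @ w - y)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(w, w_ols, atol=1e-3))           # the two solutions agree
\end{verbatim}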
The $R^2$ score of the neural network using L$^1$, L$^2$ and no regularization can be seen in figure \ref{fig:mlp-r2},
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/{mlp_r2_ols_ridge_lasso800}.pdf}
\caption{$N_\mathrm{train}=400$}
\label{fig:mlp-r2-800}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.5]{../fig/{mlp_r2_ols_ridge_lasso100000}.pdf}
\caption{$N_\mathrm{train}=5000$}
\label{fig:mlp-r2-5000}
\end{subfigure}
\caption{$R^2$ score for the neural network using L$^1$ (Lasso), L$^2$ (Ridge) and no regularization (OLS). Retrieved $N_\mathrm{train}=400$ on the left and $N_\mathrm{train}=5000$ on the right, for a 1D Ising model of size $L=20$.}
\label{fig:mlp-r2}
\end{figure}
% TODO: rerun mlp regression with N_samples = 10000 as I run for too much :|
\subsection{2D Ising model}
As stated in the section about the 2D Ising model \ref{sec:2d-ising-model}, the classification will focus on evaluating the phases of different lattice configurations, and whether they are below or above the critical temperature. We begin by listing the results from the logistic regression.
\subsubsection{Classification through logistic regression}
In logistic regression we investigated the behavior of the classification and compared it to that of SciKit Learn\cite{scikit-learn}, using the standard logistic regression method\footnote{See \href{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html}{Logistic Regression documentation}} and
SciKit Learn's SGD (Stochastic Gradient Descent) implementation\footnote{See \href{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html}{SGD documentation}}. This gave the results found in figure \ref{fig:logreg-accuracy-sklearn-comparison}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/logistic_accuracy_sklearn_comparison.pdf}
\caption{The accuracy for our implementation of logistic regression versus that of SciKit learn.}
\label{fig:logreg-accuracy-sklearn-comparison}
\end{figure}
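For reference, the scikit-learn baselines can be reproduced with just a few lines. The sketch below is illustrative only: the data-loading helper \texttt{load\_ising\_2d} is hypothetical, and the solver settings are not necessarily the exact ones used for the figure.
\begin{verbatim}
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.metrics import accuracy_score

# X: flattened 2D Ising configurations, y: 0/1 phase labels
X_train, y_train, X_test, y_test = load_ising_2d()   # hypothetical helper

logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# loss="log_loss" in recent scikit-learn releases, loss="log" in older ones
sgd = SGDClassifier(loss="log_loss", max_iter=1000).fit(X_train, y_train)

print("LogisticRegression:", accuracy_score(y_test, logreg.predict(X_test)))
print("SGDClassifier:     ", accuracy_score(y_test, sgd.predict(X_test)))
\end{verbatim}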
\subsubsection{Classification through neural networks}
For classifying the states through a neural network, we looked at several different hyperparameters. All runs were made using $N_\mathrm{samples}=10000$ unless stated otherwise. The training fraction was 0.5. We start by comparing two different cost functions and their output layers,
\begin{itemize}
\item Cross entropy with softmax output layer \eqref{eq:ce-mlp-cost}
\item MSE with sigmoidal output layer \eqref{eq:mse-mlp-cost}.
\end{itemize}
The behavior of these cost functions over the epochs can be seen in figure \ref{fig:mlp-cost-function-comparison},
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_cost_functions.pdf}
\caption{A comparison in accuracy scores between the MSE and (CE) Cross Entropy loss functions over 500 epochs. The output layer for MSE is sigmoidal, the output layer for CE is softmax. The learning parameter was $\eta=0.001$ and we used the inverse learning rate\eqref{eq:inverse-eta}.}
\label{fig:mlp-cost-function-comparison}
\end{figure}
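Written out, the two cost/output pairings compared above amount to the following small functions. This is a generic sketch (our network code is not reproduced here), with \texttt{z} the linear output of the last layer and \texttt{y\_onehot} the one-hot encoded targets.
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_cost(z, y_onehot):
    # cross entropy with softmax output layer
    p = softmax(z)
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))

def mse_cost(z, y_onehot):
    # MSE with sigmoidal output layer
    a = 1.0 / (1.0 + np.exp(-z))
    return np.mean(np.sum((a - y_onehot) ** 2, axis=1))
\end{verbatim}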
We then wish to investigate the effects of having different initial weights. Given the initial weights \textit{large} and \textit{default} as listed in section \ref{sec:nn-weights}, we get the results as seen in figure \ref{fig:mlp-epoch-init-weights},
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_weight_inits.pdf}
\caption{A comparison in accuracy scores between the initial weights \textit{large} and \textit{default} as listed in section \ref{sec:nn-weights}. The run was for 500 epochs. The cost function was set to cross entropy and had softmax output activation. The learning parameter was $\eta=0.001$ and we used the inverse learning rate\eqref{eq:inverse-eta}.}
\label{fig:mlp-epoch-init-weights}
\end{figure}
% TODO: insert the lambda results here
An investigation into different layer activations (section \ref{sec:layer-acts}) was performed for both the MSE and the CE cost function. The results for MSE can be seen in figure \ref{fig:mlp-epoch-activations-mse}, and those for cross entropy in figure \ref{fig:mlp-epoch-activations-log-loss}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for MSE as cost function. The run was for 500 epochs. The learning rate was set with the inverse learning rate \eqref{eq:inverse-eta} with an $\eta_0=0.001$ and $\lambda=0.0$.}
\label{fig:mlp-epoch-activations-mse}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for cross entropy as cost function. The run was for 500 epochs. The learning rate was set with the inverse learning rate \eqref{eq:inverse-eta} with an $\eta_0=0.001$ and $\lambda=0.0$.}
\label{fig:mlp-epoch-activations-log-loss}
\end{figure}
We then move on to an investigation for different L$^2$ regularization strengths $\lambda$ versus different constant learning rates $\eta$. A run with 500 epochs, cross entropy and sigmoidal hidden layer activation can be seen in figure \ref{fig:mlp-eta-lambda},
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_lambda_eta.pdf}
\caption{The accuracy as function of the L$^2$ regularization parameter $\lambda$ and constant training rate $\eta$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The hidden layer was 10 neurons large.}
\label{fig:mlp-eta-lambda}
\end{figure}
A comparison of the accuracy score \eqref{eq:mlp-accuracy} as a function of the L$^2$ regularization parameter and the hidden layer size (the number of neurons) can be viewed in figure \ref{fig:mlp-lambda-neurons}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_lambda_neurons.pdf}
\caption{The accuracy score as function of the L$^2$ regularization parameter $\lambda$ and the number of neurons. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The learning rate was set with the inverse learning rate \eqref{eq:inverse-eta} with an $\eta_0=0.001$.}
\label{fig:mlp-lambda-neurons}
\end{figure}
The accuracy score \eqref{eq:mlp-accuracy} as a function of the hidden layer size (the number of neurons) and the training data size, as a percentage of the $N_\mathrm{samples}=10000$ training data, can be viewed in figure \ref{fig:mlp-neurons-ts}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_neurons_training_size.pdf}
\caption{The accuracy score as function of the number of neurons and the training data size percentage of $N_\mathrm{samples}=10000$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The learning rate was set with the inverse learning rate \eqref{eq:inverse-eta} with an $\eta_0=0.001$ and $\lambda=0.0$. }
\label{fig:mlp-neurons-ts}
\end{figure}
The accuracy score \eqref{eq:mlp-accuracy} as a function of the hidden layer size (the number of neurons) and the learning rate $\eta$ can be viewed in figure \ref{fig:mlp-neurons-eta}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_neurons_eta.pdf}
\caption{The accuracy score as a function of the number of neurons and the learning rate $\eta$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The regularization strength was set to $\lambda=0.0$.}
\label{fig:mlp-neurons-eta}
\end{figure}
The accuracy score \eqref{eq:mlp-accuracy} as a function of the L$^2$ regularization strength $\lambda$ and the mini-batch size in the SGD (algorithm \ref{alg:sgd}) can be viewed in figure \ref{fig:mlp-lambda-mb}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_lambda_mini_batch_size.pdf}
\caption{The accuracy score as a function of the L$^2$ regularization strength $\lambda$ and the mini-batch size in the SGD (algorithm \ref{alg:sgd}). The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation and the inverse learning rate \eqref{eq:inverse-eta} with $\eta_0=0.001$.}
\label{fig:mlp-lambda-mb}
\end{figure}
After choosing a set of optimal parameters, seen in table \ref{tab:opt_ols_result}, we get the results seen in figure \ref{fig:mlp-epoch-activations-mse-optimal} for the MSE cost function and in figure \ref{fig:mlp-epoch-activations-log-loss-optimal} for the CE cost function.
\begin{table}[H]
\centering
\caption{A set of optimal parameters}
\begin{tabular}{l l} % 6 columns
\specialrule{.1em}{.05em}{.05em}
Parameters & Values \\ \hline
$N_\mathrm{neurons}$ & 10 \\
$\lambda$ & 0.1 \\
$N_\mathrm{mb}$ & 20 \\
$N_\mathrm{epochs}$ & 500 \\
$\eta$ (constant) & 0.001 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\label{tab:opt_ols_result}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse2.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for MSE as cost function using \textit{optimal parameters}. The optimal parameters can be viewed in table \ref{tab:opt_ols_result}.}
\label{fig:mlp-epoch-activations-mse-optimal}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss2.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for cross entropy as cost function using \textit{optimal parameters}. The optimal parameters can be viewed in table \ref{tab:opt_ols_result}.}
\label{fig:mlp-epoch-activations-log-loss-optimal}
\end{figure}
Another set of optimal parameters, listed in table \ref{tab:opt_ols_result2}, provided us with the results seen in figures \ref{fig:mlp-epoch-activations-mse-optimal2} and \ref{fig:mlp-epoch-activations-log-loss-optimal2}.
% Taking the last 50 accuracy scores gives as with tanh activation function, Accuracy$=0.999(21)$.
\begin{table}[H]
\centering
\caption{Another set of optimal parameters}
\begin{tabular}{l l} % 6 columns
\specialrule{.1em}{.05em}{.05em}
Parameters & Values \\ \hline
$N_\mathrm{neurons}$ & 20 \\
$\lambda$ & 0.1 \\
$N_\mathrm{mb}$ & 30 \\
$N_\mathrm{epochs}$ & 500 \\
$\eta$ (constant) & 0.001 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\label{tab:opt_ols_result2}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse_optimal3.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for MSE as cost function using \textit{optimal parameters}. The optimal parameters can be viewed in table \ref{tab:opt_ols_result2}.}
\label{fig:mlp-epoch-activations-mse-optimal2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss_optimal3.pdf}
\caption{A comparison in accuracy scores between the hidden layer activation functions(see section \ref{sec:layer-acts}) for cross entropy as cost function using \textit{optimal parameters}. The optimal parameters can be viewed in table \ref{tab:opt_ols_result2}.}
\label{fig:mlp-epoch-activations-log-loss-optimal2}
\end{figure} | {
"alphanum_fraction": 0.7197459205,
"avg_line_length": 62.7560137457,
"ext": "tex",
"hexsha": "a883ec4370ec7a9418566d4ff04a59d35db5bf10",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hmvege/FYSSTK4155-Project2",
"max_forks_repo_path": "doc/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hmvege/FYSSTK4155-Project2",
"max_issues_repo_path": "doc/results.tex",
"max_line_length": 425,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hmvege/FYSSTK4155-Project2",
"max_stars_repo_path": "doc/results.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5487,
"size": 18262
} |
\chapter{Conclusion and Future Work}
\label{chap:conclusion-future-work}
In this dissertation,
we contributed to the research needs highlighted in~\cref{chap:introduction}.
The applicability of SSAT to the analysis of VLSI systems was examined.
We formulated a framework for the property evaluation of probabilistic design.
The average-case and worst-case analyses are encoded
as random-exist and exist-random quantified SSAT formulas, respectively.
Motivated by the emerging VLSI applications,
we further devised novel algorithms for random-exist and exist-random quantified SSAT formulas.
The proposed algorithms leverage the success from SAT/QBF-solving and model-counting communities
and advance the state-of-the-art of SSAT solving beyond the conventional DPLL-based search.
For random-exist quantified SSAT formulas,
we used minterm generalization and weighted model counting as subroutines
and employed SAT solvers and weighted model counters as plug-in engines.
For exist-random quantified SSAT formulas,
we proposed clause-containment learning,
which was inspired by clause selection from QBF solving.
Under the framework of clause-containment learning,
we explored three heuristics to strengthen learnt clauses.
Moreover, unlike previous exact approaches,
the proposed algorithms can solve approximate SSAT by deriving upper and lower bounds of satisfying probabilities.
Our evaluation showed the benefits of the proposed solvers over a wide range of formula instances.
Furthermore, our implementations and the benchmark suite of SSAT instances are open-source
for other researchers to base their work on top of our results.
To generalize SSAT beyond the PSPACE-complete complexity class for more complex problems,
we extended DQBF to its stochastic variant DSSAT and proved its NEXPTIME-completeness.
Compared to the PSPACE-complete SSAT,
DSSAT is more powerful and can succinctly model NEXPTIME-complete decision problems with uncertainty.
We demonstrated the DSSAT formulation of the analysis of probabilistic/approximate partial design
and gave a polynomial-time reduction from the NEXPTIME-complete Dec-POMDP to DSSAT.
We highlight several directions for future investigation.
First, in order to improve the scalability of probabilistic property evaluation,
we are interested in approximate approaches based on simulation techniques.
In addition to the conventional Monte Carlo method,
circuit simulation based on symbolic sampling~\cite{KravetsDAC19ECOSampling} may have much potential.
Another line of on-going work is to develop solvers for arbitrarily quantified SSAT and DSSAT formulas.
Recently, clause selection has been adapted to solve random-exist quantified SSAT formulas
and combined with the clause-containment learning in a recursive manner to solve general SSAT~\cite{Chen2021}.
It is also extended to DQBF~\cite{Tentrup2019},
which might provide a promising framework for DSSAT solving.
From the view of practical implementation,
SSAT solvers will benefit from a tight integration with model-counting components.
More advanced data structures, e.g., d-DNNF~\cite{Darwiche2001,Darwiche2002dDNNF}, could be also integrated.
In particular, \textit{incremental} model counting might be a key step to boost the performance of SSAT solvers
if the computational efforts among different counting queries can be effectively shared.
Motivated by approximate model counting,
we also hope to pursue a similar formulation for SSAT solving.
Specifically, we envisage a unified SSAT framework that allows users to control the solution precision,
in order to trade inexactness for better scalability.
Finally, we would like to bring SSAT to different research fields, especially to machine-learning applications.
The SSAT solvers developed in this dissertation have been applied to verify the fairness of supervised-learning algorithms~\cite{Ghosh2021}.
According to the reported data~\cite{Ghosh2021},
using the proposed SSAT solvers achieves several orders of magnitude improvement over the state-of-the-art tools.
This success shows the great potential and benefit of SSAT solving.
\iffalse
We have proposed a formal framework to probabilistic property
evaluation, under the worst-case and average-case scenarios.
Connections between probabilistic property evaluation and existing
solving techniques have been established. A novel BDD-based SSAT solver is proposed. A comparative experimental study has been performed to assess the capabilities
of different methods. Among the considered solutions, the proposed BDD-based SSAT solver, which makes use of circuit structures to construct BDD, currently tends to be the most robust in our experiments. Nevertheless, there are cases solvable only by approximate weighted model counting, but not by other methods. As the BDD-based method has its memory explosion problem, SSAT and model
counting approaches based on CNF formula might be more viable than the BDD one if their efficiency would be improved in the future. Our results may benefit the synthesis of probabilistic design, perhaps not only for silicon but also for genetic circuits, which are intrinsically stochastic. For future investigation, Monte-Carlo simulation may be incorporated to our proposed formal methods.
In this paper, we focused on solving random-exist quantified SSAT formulas.
In contrast to the previous DPLL-based algorithms, we proposed a novel algorithm using SAT solver and weighted model counter as underlying engines to improve computational efficiency.
Leveraging the great success of modern SAT solving techniques, the proposed algorithm outperforms the state-of-the-art method in the experiment on random $k$-CNF and strategic companies formulas.
Moreover, unlike previous exact SSAT methods, the proposed algorithm can be easily modified to solve approximate SSAT by deriving upper and lower bounds of satisfying probability.
We demonstrated the applicability of our SSAT solver to VLSI circuit analysis.
While the state-of-the-art solver fails to compute the exact satisfying probability, the proposed method succeeded in finding bounds of the formulas.
In several cases, the derived bounds are very close to, or even match the exact satisfying probability.
This approximation flexibility of our method can be helpful when SSAT is applied to real-world applications.
%This work might shed light on the application of solving approximate SSAT to real-world problems.
For future work, we intend to extend the proposed algorithm to arbitrary quantified SSAT formulas.
We developed a new approach to solving E-MAJSAT formulas. In contrast to prior methods based on DPLL search or knowledge compilation, we proposed the clause containment learning technique, inspired by clause selection recently developed in QBF evaluation, and design a novel algorithm to solve E-MAJSAT efficiently. Under the framework of clause containment learning, three enhancement techniques were proposed to improve the computational efficiency.
Experiment results show the benefit of our method.
%that our method achieves significant performance gains and memory savings over prior SSAT methods, and also provides useful lower bound information for cases where no information can be given by prior methods.
For future work, we intend to solve SSAT with general prefix structure.
In this paper, we extended DQBF to its stochastic variant DSSAT and proved its NEXPTIME-completeness.
Compared to the PSPACE-complete SSAT, DSSAT is more powerful to succinctly model NEXPTIME-complete decision problems with uncertainty.
The new formalism can be useful in applications such as artificial intelligence and system design.
Specifically, we demonstrated the DSSAT formulation of the analysis to probabilistic/approximate partial design, and gave a polynomial-time reduction from the NEXPTIME-complete Dec-POMDP to DSSAT.
We envisage the potential broad applications of DSSAT and plan solver development for future work.
%\textcolor{blue}{
We note that recent developments of \textit{clausal abstraction} for QBF~\cite{JanotaM15,RabeT15} and DQBF~\cite{Tentrup19} might provide a promising framework for DSSAT solving.
Clausal abstraction has been lifted to SSAT~\cite{ChenHJ21}, and we are investigating its feasibility for DSSAT.
%}
\fi | {
"alphanum_fraction": 0.8198972889,
"avg_line_length": 88.1368421053,
"ext": "tex",
"hexsha": "b83c8130ccd1aec2a429a59bc4dd85bc5c0d2c5b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "nianzelee/PhD-Dissertation",
"max_forks_repo_path": "paper/conclusion-future-work.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "nianzelee/PhD-Dissertation",
"max_issues_repo_path": "paper/conclusion-future-work.tex",
"max_line_length": 455,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nianzelee/PhD-Dissertation",
"max_stars_repo_path": "paper/conclusion-future-work.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z",
"num_tokens": 1689,
"size": 8373
} |
\subsection{Mulliken populations}
\index{Mulliken populations}
By default, the density matrix printed is the Coulson matrix, which
assumes that the atomic orbitals are orthogonalized.
If the assumption of orthogonality is not made, then the Mulliken density
matrix can be constructed. To construct the Mulliken density matrix (also known
as the Mulliken population analysis), the M.O.s must first be re-normalized,
using the overlap matrix, $S$:
$$
\psi_i^{'} = \psi_i\times S^{-\frac{1}{2}}.
$$
From these M.O.s, a Coulson population analysis is carried out. The off-diagonal terms
are simply the Coulson terms multiplied by the overlap:
$$
P_{\lambda\sigma\neq\lambda}'=S_{\lambda\sigma}2\sum_{i=1}^{occ}c_{\lambda i}
c_{\sigma i},
$$
while the on-diagonal terms are given by the Coulson terms, plus half the sum
of the off-diagonal elements:
$$
P_{\lambda \lambda}' =S_{\lambda\sigma}2\sum_{i=1}^{occ}c_{\lambda i}c_{\lambda i}
+ \frac{1}{2}\sum_{\sigma\neq\lambda}P_{\lambda \sigma}'.
$$
A check of the correctness of the Mulliken populations is to add the diagonal
terms: these should equal the number of electrons in the system.
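As an illustration only (this is not MOPAC source code), the procedure just described can be expressed in a few lines of Python/NumPy, assuming the orthogonalized M.O.\ coefficient matrix \texttt{C} (one column per M.O.) and the overlap matrix \texttt{S} are available:
\begin{verbatim}
import numpy as np

def mulliken_populations(C, S, n_occ):
    # Re-normalize the M.O.s by multiplying with S^(-1/2)
    w, V = np.linalg.eigh(S)
    C_p = (V @ np.diag(w ** -0.5) @ V.T) @ C

    # Coulson density matrix of the re-normalized M.O.s
    # (the first n_occ M.O.s are doubly occupied)
    P = 2.0 * C_p[:, :n_occ] @ C_p[:, :n_occ].T

    # Off-diagonal Mulliken terms: Coulson terms scaled by the overlap
    M = S * P                      # elementwise; S has a unit diagonal

    # Diagonal terms: Coulson term plus this orbital's half of the two-sided
    # overlap density, i.e. the row sum of the scaled off-diagonal elements
    share = M.sum(axis=1) - np.diag(M)
    np.fill_diagonal(M, np.diag(P) + share)
    return M                       # np.trace(M) equals the electron count
\end{verbatim}
For the H$_2$ example discussed below, this sketch reproduces the final Mulliken matrix with unit diagonal entries, so the trace equals the two electrons of the system.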
\subsubsection*{Theory of Mulliken Populations}
The NDDO methods (MNDO, AM1, PM3, and MNDO-$d$) all use Slater orbitals,
but an implication of one of the approximations made, that $\sum(F_{\mu \nu}-E_i\delta_{\mu\nu})
C_{\nu i} =0$, is that the conventional molecular orbitals are normalized
to unity:
$$
\psi_i=\sum_{\lambda}c_{\lambda i}\phi_{\lambda}
$$
with
$$
<\psi_i^2> = 1 = \sum_{\lambda}c_{\lambda i}^2
$$
For example, for H$_2$, the occupied M.O.\ is:
$$
\psi_1 = \sqrt{\frac{1}{2}}(\phi_{H_1}+\phi_{H_2}),
$$
and the unoccupied M.O.\ is:
$$
\psi_2 = \sqrt{\frac{1}{2}}(\phi_{H_1}-\phi_{H_2}).
$$
The diagonal of the density matrix is then constructed using the Coulson
formula:
$$
P_{1,1}=P_{2,2}=2.0\times\left(\sqrt{\frac{1}{2}}\right)^2 =1.0.
$$
The off-diagonal terms are constructed in the same way:
$$
P_{1,2}=P_{2,1}=2.0\times\left(\sqrt{\frac{1}{2}}\right)^2 =1.0.
$$
If, instead of using $\sum(F_{\mu \nu}-E_i\delta_{\mu\nu}) C_{\nu i} =0$,
$\sum(F_{\mu \nu}-E_i) C_{\nu i} =0$ is used, then the occupied and unoccupied
M.O.s become:
$$
\psi_1 = \sqrt{\frac{1}{2(1+S)}}(\phi_{H_1}+\phi_{H_2}),
$$
and the unoccupied M.O.\ is:
$$
\psi_2 = \sqrt{\frac{1}{2(1-S)}}(\phi_{H_1}-\phi_{H_2}).
$$
where $S$ is the overlap integral: $\int\phi_{H_1}\phi_{H_2}{\rm d}v$.
In this case, the Coulson population would give
$$
\begin{array}{cc|cc|}
& & \frac{1}{1+S} & \frac{1}{1+S} \\
P & = & & \\
& & \frac{1}{1+S} & \frac{1}{1+S} \\
\end{array}
$$
From this we see that the Coulson representation is unsuitable for two
reasons: first, the number of electrons in the system, represented by the
diagonal terms, does not add to 2.0. Second, the off-diagonal terms, which
should represent the number of electrons resulting from the overlap of the two
atomic orbitals, becomes unity as the overlap {\em decreases}.
To correct for this, it is physically meaningful to multiply the matrix
elements by the overlap. This gives:
$$
\begin{array}{cc|cc|}
& & \frac{1}{1+S} & \frac{S}{1+S} \\
P & = & & \\
& & \frac{S}{1+S} & \frac{1}{1+S} \\
\end{array}
$$
Now the off-diagonal terms accurately represent the number of electrons which are
associated with the overlap electron density. The total number of electrons
in the system is now correct: $ \frac{1}{1+S} $ on atom 1, $ \frac{1}{1+S} $
on atom 2, and $ \frac{2S}{1+S} $ in the overlap region, giving a total of 2.0.
Although this representation is correct, it is potentially misleading, in that
the diagonal terms do not add to the number of electrons. Mulliken reasoned
that the electron density resulting from the overlaps should be divided into
two equal parts and added to the diagonal terms. When that is done, we get:
$$
\begin{array}{cc|ll|}
& & \frac{1}{1+S}+\frac{S}{1+S} & \frac{S}{1+S} \\
P & = & & \\
& & \frac{S}{1+S} & \frac{1}{1+S} +\frac{S}{1+S}\\
\end{array}
$$
or
$$
\begin{array}{cc|ll|}
& & 1.0 & \frac{S}{1+S} \\
P & = & & \\
& & \frac{S}{1+S} & 1.0\\
\end{array}
$$
This simple example can be extended to systems involving heteroatoms and to
polyatomics, and is fully general.
The Mulliken analysis can be applied to semiempirical methods. To do this, it
is necessary to first convert the M.O.s from solutions of $\sum(F_{\mu
\nu}-E_i\delta_{\mu\nu}) C_{\nu i} =0$ to solutions of $\sum(F_{\mu \nu}-E_i)
C_{\nu i} =0$. The simplest way to do this is to take the conventional M.O.s
and multiply them by $S^{-\frac{1}{2}}$. In the case of H$_2$, the resulting
M.O.s are exactly correct; in general, a small error is introduced. This error
arises from the incomplete annihilation of the secular matrix elements, and is
quite unimportant.
| {
"alphanum_fraction": 0.6469194313,
"avg_line_length": 39.2558139535,
"ext": "tex",
"hexsha": "b838f9373aaceaf9002639a3a3e11463019d0351",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "openmopac/MOPAC-archive",
"max_forks_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "openmopac/MOPAC-archive",
"max_issues_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex",
"max_line_length": 97,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "openmopac/MOPAC-archive",
"max_stars_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-16T20:54:11.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-12-16T20:53:27.000Z",
"num_tokens": 1641,
"size": 5064
} |
\documentclass{article}
\usepackage{siunitx}
\usepackage{amsmath}
\title{Cardano's Formula for Cubic Equation}
\author{Ariyibi Joseph Iseoluwa}
\begin{document}
\maketitle
\begin{center}
\textbf{ABSTRACT}
\end{center}
Gerolamo Cardano was born in Pavia in 1501 as the illegitimate child of a jurist. He attended the University of Padua and became a physician in the town of Sacco, after being rejected by his home town of Milan. He became one of the most famous doctors in all of Europe, having treated the Pope. He was also an astrologer and an avid gambler, and he wrote the Book on Games of Chance, which was the first serious treatise on the mathematics of probability.\cite{Ariyibi2021}
\section{Introduction to Cardano's Formula}
For a cubic equation of the form $x^3 + a_1x^2 + a_2x + a_3 = 0$,
the parameters Q, R, S and T can be computed thus,\\
\raggedright
\begin{align}
\textbf{Q} &= \frac{3a_2 - a_1^2}{9} & \textbf{R} &= \frac{9a_1a_2-27a_3-2a_1^3}{54} \\
\textbf{S} &= \sqrt[3]{R + \sqrt{Q^3 + R^2}} &
\textbf{T} &= \sqrt[3]{R - \sqrt{Q^3 + R^2}}
\end{align}
To give the roots
$x_1= S + T-\frac{1}{3}a_1$ \\
$x_2= \frac{-(S+T)}{2} - \frac{a_1}{3} + i\frac{\sqrt{3}(S-T)}{2}$ \\
$x_3= \frac{-(S+T)}{2} - \frac{a_1}{3} - i\frac{\sqrt{3}(S-T)}{2}$ \\
Note: the coefficient of $x^3$ must be 1; if it is not, divide the equation through by the leading coefficient first.
\subsection{Some Examples}
\begin{itemize}
\item $x^3 - 3x^2 + 4 = 0$
\item $2x^3 + 6x^2 + 1 = 0$
\end{itemize}
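As a check of the formula, the first example can be worked through by hand. For $x^3 - 3x^2 + 4 = 0$ we have $a_1 = -3$, $a_2 = 0$ and $a_3 = 4$, so that
\begin{align}
Q &= \frac{3(0) - (-3)^2}{9} = -1, & R &= \frac{0 - 27(4) - 2(-3)^3}{54} = -1, \\
S &= \sqrt[3]{-1 + \sqrt{(-1)^3 + (-1)^2}} = -1, & T &= \sqrt[3]{-1 - \sqrt{0}} = -1,
\end{align}
which gives $x_1 = S + T - \frac{1}{3}a_1 = -1$ and $x_2 = x_3 = 2$; both values indeed satisfy the equation.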
\end{document} | {
"alphanum_fraction": 0.6761778368,
"avg_line_length": 45.6666666667,
"ext": "tex",
"hexsha": "78f12d1c881b084b59f240a2ca19b45ed042e919",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ise2005best/iseoluwaCSC102",
"max_forks_repo_path": "last commit for texstudio/cardano.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ise2005best/iseoluwaCSC102",
"max_issues_repo_path": "last commit for texstudio/cardano.tex",
"max_line_length": 479,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ise2005best/iseoluwaCSC102",
"max_stars_repo_path": "last commit for texstudio/cardano.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 573,
"size": 1507
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Acknowledgement for my thesis
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\vspace*{\fill}
\section*{\centering Acknowledgement}
\addcontentsline{toc}{section}{Acknowledgement}
First and foremost, I would like to thank my supervisor Mik, for giving me this project and the opportunity to learn and develop the bioinformatics skills required to complete it.
You have been a great supervisor; I have learnt a lot from you over the past two years, and I have enjoyed being a part of the Black lab.
\\
\noindent
I would like to thank my committee members, Dr.\ Anita Dunbier, Dr.\ Miriam Sharpe, and Dr.\ Peter Mace (temporary member), for being patient during my presentations that were full of statistics and heatmaps.
I am grateful for all the suggestions and input you have given me to make this a better project.
\\
\noindent
I would also like to thank both past and present members of the Black lab, for useful tips and tricks to get me through my Masters.
Especially James Boocock, for introducing me to a variety of useful tool sets (by the way, everyone should start using vim); Murray Cadzow, for organising the fortnightly ``Shit You Should Know About'' (a.k.a. SYSKA) sessions to teach us more about R and other useful tools; and Tom Kelly, for being my companion in the Fish-bowl and helping me out with some of the code in my project.
\\
\noindent
I would like to thank all of my friends and family, for all their support and fun memories throughout the years that I have been at University.
I wish I could name everyone, but extending this thesis any longer is probably not a good idea\ldots{}
Long story short, my life would not be nearly as interesting or enjoyable without you guys.
\\
\noindent
Lastly, a special thanks to Gerry and Tessa who generously proof-read my introduction.
\\
\vfill
| {
"alphanum_fraction": 0.7292875989,
"avg_line_length": 52.6388888889,
"ext": "tex",
"hexsha": "85932b676af2256f4041b59cbf28f36567294053",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rikutakei/mastersDoc",
"max_forks_repo_path": "thesis/misc/acknowledgement.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rikutakei/mastersDoc",
"max_issues_repo_path": "thesis/misc/acknowledgement.tex",
"max_line_length": 385,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rikutakei/mastersDoc",
"max_stars_repo_path": "thesis/misc/acknowledgement.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 416,
"size": 1895
} |
\section{Discussion and Analysis}
\label{sec:discussion}
In this section we discuss our results and compare them to the human score. We elaborate on potential limiting factors, which may be capping the maximum achievable accuracy. Furthermore, we analyze the model internals, try to interpret them, and distinguish between hard and easy samples in the dataset.
\subsection{Accuracy}
The page rank estimation task is difficult. Our model is provided with screenshots of up to eight web pages of a domain as well as their link structure. Given two samples of that kind, guessing the relative ranking is \textit{hard}. In the real world, a website's popularity depends on many factors other than its look. By achieving accuracies significantly beyond random guessing (50\%), we do however show that a website's look correlates to some extent with its rank. Obviously, a deduction on the causal relationship between look and rank (or the other way around?) would be purely speculative.
The human score gives an idea of the performance of our model. Even though the model achieves super-human results, it seems unlikely that more sophisticated methods can push the accuracy much further to, say, 90\%. We suspect the upper limit of purely vision-based rank estimation to be below 70\%.
Another aspect of difficulty is noise in the dataset: When crawling and screenshotting 100k websites, some errors are almost inevitable. A small but not negligible percentage of our data is therefore corrupt, e.g. because the website has not finished loading by the time we took the screenshot. Consequently, our model is confronted with all- or almost all-black screenshots occasionally, which harms the training process.
It is noteworthy that the dataset size was sufficient to prevent our CNN architecture from overfitting. We saw no need to make use of regularization methods such as weight decay or data augmentation because the accuracy on training and validation data was almost the same throughout all training runs. We therefore suspect that a model with greater capacity might be able to capture more fine-grained details of screenshots, thereby achieving a higher accuracy.
The accuracy score as proposed in Equation~\ref{eq:acc} has some \textit{unfairness} to it, because the distance of two compared samples is disregarded. For example, an erroneous assessment of the relative ranking of two samples with ranks \#12,000 and \#12,001 is contributing to the accuracy just like the confusion of samples with ranks \#1,000 and \#25,000, even though the latter should presumably be easier to disentangle. Figure~\ref{fig:accvsrank} supports this hypothesis by plotting the mean accuracy against the sample rank. Samples at the ends of the spectrum tend to have considerably higher accuracy scores than the ones in the middle, because on average they are juxtaposed to samples with a rank further away from them than samples in the middle. Note that an improvement from 58\% to 64\% is an improvement by a factor of two, because random guessing is at 50\%.
Alternative accuracy metric formulations could take the difficulty of a pairwise comparison into account.
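One possible variant, given here only as an illustrative sketch with assumed arrays of ground-truth ranks and model scores, weights every pairwise comparison by the rank distance of the two samples, so that confusing ranks \#12,000 and \#12,001 costs almost nothing while confusing \#1,000 and \#25,000 is penalized fully:
\begin{verbatim}
import numpy as np

def distance_weighted_accuracy(ranks, scores):
    # ranks:  ground-truth ranks (lower = more popular), shape (n,)
    # scores: model outputs (higher = predicted more popular), shape (n,)
    i, j = np.triu_indices(len(ranks), k=1)
    correct = np.sign(ranks[j] - ranks[i]) == np.sign(scores[i] - scores[j])
    weights = np.abs(ranks[i] - ranks[j])      # distant pairs count more
    return np.sum(weights * correct) / np.sum(weights)
\end{verbatim}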
\begin{figure}
\centering
\begin{tikzpicture}
\pgfplotsset{
scale only axis,
}
\begin{axis}[
legend pos=north west,
xlabel=Rank $r$,
ylabel=Accuracy,
xmin=1,xmax=100000,
]
%\addplot[mark=x,color=black,error bars/.cd, y dir=both,y explicit]
\addplot[mark=x,color=black]
coordinates {
(2042.0927536231884, 0.6323095560073853) +- (0.252880334854126,0.252880334854126)
(5994.739130434783, 0.6122098565101624) +- (0.23341456055641174,0.23341456055641174)
(9987.672131147541, 0.5961850881576538) +- (0.22007544338703156,0.22007544338703156)
(14314.282571912014, 0.6346387267112732) +- (0.19101302325725555,0.19101302325725555)
(18001.915745856353, 0.6088740229606628) +- (0.16929037868976593,0.16929037868976593)
(21796.904761904763, 0.6173704266548157) +- (0.152631938457489,0.152631938457489)
(25896.116013071896, 0.5939950942993164) +- (0.14189182221889496,0.14189182221889496)
(29956.708053691276, 0.5828483700752258) +- (0.12687844038009644,0.12687844038009644)
(34018.020188425304, 0.5788435935974121) +- (0.10543978214263916,0.10543978214263916)
(37994.73270013568, 0.592072069644928) +- (0.07398778200149536,0.07398778200149536)
(42004.752380952385, 0.5882502198219299) +- (0.05585600063204765,0.05585600063204765)
(45999.76454668471, 0.5834199786186218) +- (0.042080748826265335,0.042080748826265335)
(50028.14344827586, 0.5820223093032837) +- (0.03431824594736099,0.03431824594736099)
(53989.265027322406, 0.5824041962623596) +- (0.04247155413031578,0.04247155413031578)
(57966.04674220963, 0.5840225219726562) +- (0.05858512595295906,0.05858512595295906)
(61993.13711911358, 0.586538553237915) +- (0.07541247457265854,0.07541247457265854)
(66035.42020497804, 0.590856671333313) +- (0.0997852087020874,0.0997852087020874)
(69878.57388316152, 0.5914943218231201) +- (0.11854474246501923,0.11854474246501923)
(74233.90106007067, 0.6241846680641174) +- (0.1272871345281601,0.1272871345281601)
(77989.69867060562, 0.6133634448051453) +- (0.1402067393064499,0.1402067393064499)
(82030.64427480916, 0.6063210964202881) +- (0.17285984754562378,0.17285984754562378)
(86000.5987933635, 0.6055595278739929) +- (0.20112822949886322,0.20112822949886322)
(90098.17037037037, 0.686475932598114) +- (0.19338952004909515,0.19338952004909515)
(93881.48888888888, 0.5622369647026062) +- (0.2505538761615753,0.2505538761615753)
(98061.09816971714, 0.569052517414093) +- (0.26824629306793213,0.26824629306793213)
};\label{test_set}
\addplot[only marks,mark=o,color=black]
coordinates{
(1982.0199443413728, 0.6462764739990234) +- (0.2512509226799011,0.2512509226799011)
(5978.758321933425, 0.6189908385276794) +- (0.2340804934501648,0.2340804934501648)
(9902.487074030552, 0.5953881144523621) +- (0.22232778370380402,0.22232778370380402)
(14307.361990950227, 0.6325339078903198) +- (0.19089166820049286,0.19089166820049286)
(18004.98867240598, 0.6146559119224548) +- (0.17392544448375702,0.17392544448375702)
(21806.290449438202, 0.603556752204895) +- (0.15489085018634796,0.15489085018634796)
(25894.420545746387, 0.5976948142051697) +- (0.1361941397190094,0.1361941397190094)
(29980.35864022663, 0.5823514461517334) +- (0.12486264109611511,0.12486264109611511)
(33985.87261146497, 0.578705906867981) +- (0.10141576081514359,0.10141576081514359)
(38000.94506476106, 0.5942217111587524) +- (0.06816687434911728,0.06816687434911728)
(41999.57528089888, 0.5835291147232056) +- (0.055270206183195114,0.055270206183195114)
(45996.97393689986, 0.5826137065887451) +- (0.0386173278093338,0.0386173278093338)
(49984.56206737425, 0.5810558199882507) +- (0.03277594968676567,0.03277594968676567)
(53996.78541569902, 0.5815796256065369) +- (0.040436532348394394,0.040436532348394394)
(57971.61578449905, 0.5827121138572693) +- (0.05864390358328819,0.05864390358328819)
(62001.87233054782, 0.5853626132011414) +- (0.07834165543317795,0.07834165543317795)
(66026.0411736867, 0.589715838432312) +- (0.10076970607042313,0.10076970607042313)
(69899.80461711712, 0.5951372981071472) +- (0.11936279386281967,0.11936279386281967)
(74265.74804570054, 0.6281706094741821) +- (0.12839582562446594,0.12839582562446594)
(77984.103948026, 0.6186800599098206) +- (0.1424257755279541,0.1424257755279541)
(81995.44253770151, 0.6045624017715454) +- (0.17524945735931396,0.17524945735931396)
(86010.70334685598, 0.6146765351295471) +- (0.1952735334634781,0.1952735334634781)
(90092.59086444008, 0.6757708787918091) +- (0.20028313994407654,0.20028313994407654)
(93889.90120620333, 0.5717195272445679) +- (0.2455245852470398,0.2455245852470398)
(98082.11754874652, 0.5708103179931641) +- (0.26858896017074585,0.26858896017074585)
}; \label{train_set}
\addplot[dashed][domain=1:100000]{0.6};\label{test_set_avg}
\addlegendimage{/pgfplots/refstyle=test_set}\addlegendentry{Test set}
\addlegendimage{/pgfplots/refstyle=train_set}\addlegendentry{Train set}
\addlegendimage{/pgfplots/refstyle=test_set_avg}\addlegendentry{Test set average}
\end{axis}
\end{tikzpicture}
\caption[Accuracy plotted against rank]{Mean accuracy of samples within certain rank ranges. To compute the data points, batches of samples of 4k ranks were grouped and the mean accuracy was computed. There is a clear tendency of samples in the middle to be harder to classify than samples on the extreme ends of the rank spectrum. The reason for the spike for ranks between 88k and 92k is unknown to us. The model used to create the plot is [baseline+avg] with weighting function, its performance on the entire test set is indicated by the dashed, horizontal line. The train set points tends to behave similarly as the test set points. At low ranks, the model is significantly better on the training data which may be attributed to the rank dependent weighting function (see Section~\ref{sec:loss}).}
\label{fig:accvsrank}
\end{figure}
In summary, we can strongly assume that a correlation between page rank and visual features exists. Our model was able to learn features and make use of them to solve the pairwise ranking task with an accuracy of $62.68\%$. Based on the assumption that an upper limit below $70\%$ exists, which cannot be exceeded by purely vision-based models, the outcomes of our trainings are a success.
\subsection{Model Understanding}
Besides ranking websites, our model has the practical purpose of yielding somewhat interpretable information. In this section we try to understand the model internals and its behavior, and deduce information therefrom.
While it is a common method to look at the kernels of CNNs (see e.g. Figure 3 in \cite{krizhevsky:imagenet}), this approach is not reasonably applicable to our architecture because our filter tensors have a spatial dimensionality of at most $3\times3$. Instead, we analyze the CNN feature maps and retrieve hard and easy samples.
\textbf{Activation maps} are internal representations (latent variables) of an input at intermediate stages of a deep model. Specifically, we consider the input as such, as well as the activations of the CNN blocks Block1, Block2, Block3, and Block4 (before pooling and application of the ReLU activation function). The dimensionalities of these tensors are listed in Table~\ref{tab:activationmaptensors}.
\begin{table}
\centering
\begin{tabular}{lrr}
\textbf{Activation map} & \textbf{Desktop size} & \textbf{Mobile size}\\\hline
\textbf{Input} & $270\times480\times3$ & $333\times187\times3$\\
\textbf{Block1} & $266\times476\times32$ & $329\times183\times32$\\
\textbf{Block2} & $129\times234\times64$ & $160\times87\times64$\\
\textbf{Block3} & $39\times74\times128$ & $49\times25\times128$\\
\textbf{Block4} & $9\times20\times256$ & $12\times4\times256$\\
\end{tabular}
\caption[Size of feature extractor activation maps]{Size of the feature extractor activation feature maps for mobile and desktop screenshots. Dimensions are listed in order $h\times w\times c$, where $h$ is the height, $w$ the width, and $c$ the number of channels.}
\label{tab:activationmaptensors}
\end{table}
A qualitative analysis of the activation maps suggests that the model learns to discriminate different common elements of web pages such as natural images or text. We have observed some filters to get excited by layout elements like edges and corners. Figure~\ref{fig:activationmaps} shows some cherry-picked filters from all layers.
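Such an inspection can be reproduced with standard forward hooks. The snippet below is a generic sketch only: it assumes a PyTorch implementation and a feature extractor exposing sub-modules named \texttt{block1} through \texttt{block4}, which are placeholder names rather than the actual attributes of our model.
\begin{verbatim}
import torch

def capture_activation_maps(model, screenshot,
                            block_names=("block1", "block2", "block3", "block4")):
    # screenshot: float tensor of shape (3, H, W)
    activations, hooks = {}, []
    for name in block_names:
        module = getattr(model, name)
        hooks.append(module.register_forward_hook(
            lambda m, inp, out, name=name: activations.update({name: out.detach()})))
    with torch.no_grad():
        model(screenshot.unsqueeze(0))       # add a batch dimension
    for h in hooks:
        h.remove()
    return activations                       # maps block name -> feature tensor
\end{verbatim}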
\begin{figure}
\centering
\includegraphics[clip,width=\columnwidth]{resources/analysis/feat-map-28073-0.png}\\
\includegraphics[clip,width=\columnwidth]{resources/analysis/feat-map-28073-1.png}\\
\includegraphics[clip,width=\columnwidth]{resources/analysis/feat-map-28073-2.png}\\
\includegraphics[clip,width=\columnwidth]{resources/analysis/feat-map-28073-3.png}\\
\includegraphics[clip,width=\columnwidth]{resources/analysis/feat-map-28073-4.png}\\
\caption[Activation maps of the CNN for sample \#28073]{Cherry-picked activation maps of the CNN for sample \#28073. Top to bottom: Input, Block1, Block2, Block3, and Block4. The first row shows the three color channels of a desktop screenshot. The second row (Block 1) still resembles the original image very much because the receptive field of the convolution has not grown very much yet. The right-most filter in Block 2 seems to detect the look of natural images. It gets excited in areas of text and where the center image is positioned on the website. In contrast, the filter in the middle fires at places where text is present, but has lower activation magnitudes in the image areas. The left-most filter in Block 3 is excited by text. The left-most filter in Block 4 was triggered by the column layout of the site. It has a column of white pixels in the area where the side-bar is located. The same filters are visualized for two other websites in the appendix in Figures \ref{fig:activationmaps2} and \ref{fig:activationmaps3}.}
\label{fig:activationmaps}
\end{figure}
We introduce the notion of \textbf{hard and easy samples}. A hard sample from the dataset is characterized by the property that a given model's estimation of its rank diverges significantly from its actual rank. This can be quantified by evaluating Equation~\ref{eq:inference} (inference) and comparing the inferred rank with the ground-truth rank. Samples for which the difference is large are considered hard, because the model has trouble estimating their rank properly. Easy samples are the opposite: Relative to the other samples in the dataset, the model is capable of predicting the correct ranking relatively often.
Hard and easy samples have qualitative meaning because they are representative for websites of their particular rank: Give for instance an easy sample with a high rank, it can be assumed that this sample is representative for high ranked websites. A hard sample with a high rank might have attained its rank for reasons other than its look. On the other hand, easy samples with low rank are representative for websites with a low rank. By comparing easy samples with high/low rank we can distill the visual difference between high and low ranked websites. Note that this is more sophisticated than looking only at the pure dataset, i.e. comparing high-ranked samples with low-ranked samples, because the model acts as a filter, extracting visually representative samples.
We split the test dataset ranks into 25 bins (each containing 4k samples) and search for the hardest and easiest sample in each of those bins. The first desktop screenshot of each of those samples is depicted in Figure~\ref{fig:hardandeasysamples} (50 screenshots in total). We have noticed that samples in the middle of the spectrum had accuracies much closer to 50\%. For instance, the easiest sample in the bin $r^{(i)}\in{\left\{44000, \dots, 47999\right\}}$ (which is $i=44064$) has an accuracy of just $62.83\%$. All maximum/minimum accuracies vs. ranks can be seen in Figure~\ref{fig:minmaxaccvsrank}.
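The extraction of the hardest and easiest sample per bin is straightforward; the sketch below (array names assumed, $O(n^2)$ memory, illustrative only) computes each sample's accuracy over all pairwise comparisons and then takes the per-bin argmax/argmin.
\begin{verbatim}
import numpy as np

def hard_and_easy_per_bin(ranks, scores, bin_size=4000):
    # ranks:  ground-truth ranks, shape (n,); scores: model outputs, shape (n,)
    n = len(ranks)
    truly_higher = ranks[:, None] < ranks[None, :]    # i truly ranked higher than j
    pred_higher = scores[:, None] > scores[None, :]   # i predicted higher than j
    correct = truly_higher == pred_higher
    np.fill_diagonal(correct, False)
    acc = correct.sum(axis=1) / (n - 1)               # per-sample accuracy

    pairs = []
    for lo in range(0, int(ranks.max()) + 1, bin_size):
        in_bin = np.where((ranks >= lo) & (ranks < lo + bin_size))[0]
        if len(in_bin):
            pairs.append((in_bin[np.argmax(acc[in_bin])],   # easiest sample
                          in_bin[np.argmin(acc[in_bin])]))  # hardest sample
    return pairs
\end{verbatim}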
\begin{figure}
\centering
\makebox[\textwidth][c]{\includegraphics[width=1.4\textwidth]{resources/hard-easy-wide.jpg}}%
\caption[Easy and hard samples across all ranks]{Easy (left in each column) and hard (right in each column) samples across all ranks from high (top-left) to low (bottom-right); best seen on a computer screen. The images are the hardest/easiest samples in rank-bins of size $4000$. Each image corresponds to one cross-mark in Figure~\ref{fig:minmaxaccvsrank}. Some easy samples at the lower end of the rank spectrum have gray placeholder sites or godaddy placeholder content. The godaddy placeholder occurs four times, as an easy sample in the right-most column (fourth and fifth from the top) and as a hard sample in the middle and left-most column. Hard samples with high ranks look rather bad to a human as well, e.g. a 404 page, a gray login screen, or a directory listing.}\label{fig:hardandeasysamples}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\pgfplotsset{
scale only axis,
}
\begin{axis}[legend style={at={(0.5,+1.1)},anchor=south},
xlabel=Rank $r$,
ylabel=Accuracy,
xmin=1,xmax=100000,
]
\addplot[name path=F,mark=x,color=black]
coordinates {
(55, 0.9988577)
(4058, 0.95130163)
(8020, 0.91372573)
(12572, 0.8788553)
(16364, 0.84158)
(20383, 0.796549)
(24407, 0.76179886)
(28073, 0.7308363)
(32071, 0.7000541)
(36001, 0.6655444)
(40071, 0.6422774)
(44064, 0.6283292)
(51649, 0.6214754)
(55605, 0.62760776)
(59829, 0.64065415)
(63952, 0.66452235)
(67973, 0.700475)
(71939, 0.7335417)
(75917, 0.7663681)
(79826, 0.80586785)
(83884, 0.84512717)
(87970, 0.8857091)
(90855, 0.9140263)
(95235, 0.95502913)
(99609, 0.9919437)
};\label{max_acc}
\addplot[name path=G,mark=x,color=black]
coordinates{
(450, 0.009799795)
(4880, 0.061504237)
(9957, 0.10467143)
(12899, 0.13112487)
(16956, 0.17519389)
(20828, 0.21535502)
(24155, 0.24685866)
(28599, 0.2820297)
(32801, 0.32110864)
(36635, 0.3725125)
(40459, 0.40996814)
(44435, 0.44928756)
(51995, 0.47556064)
(55450, 0.4379246)
(59871, 0.38784343)
(62559, 0.35898516)
(67819, 0.3109481)
(70262, 0.28449467)
(75932, 0.25335178)
(79033, 0.20543498)
(83407, 0.16286899)
(86852, 0.12360969)
(91299, 0.09427042)
(95411, 0.0490591)
(99866, 0.008537245)
}; \label{min_acc}
\addplot[pattern=north west lines, pattern color=brown!50]fill between[of=F and G, soft clip={domain=1:100000}];\label{area}
\addplot[dashed][domain=1:100000]{0.6};\label{test_set_avg2}
\addplot[][domain=1:100000]{1-1/100000*x};
\addplot[solid][domain=1:100000]{0+1/100000*x};\label{test_set_trivial_bound}
\addlegendimage{/pgfplots/refstyle=max_acc}\addlegendentry{Maximum (easy samples)}
\addlegendimage{/pgfplots/refstyle=min_acc}\addlegendentry{Minimum (hard samples)}
\addlegendimage{/pgfplots/refstyle=area}\addlegendentry{All accuracies}
\addlegendimage{/pgfplots/refstyle=test_set_avg2}\addlegendentry{Average accuracy}
\addlegendimage{/pgfplots/refstyle=test_set_trivial_bound}\addlegendentry{Trivial bound}
\end{axis}
\end{tikzpicture}
\caption[Min/max accuracy against rank]{Min/max accuracy of samples in bins of size $4000$ plotted against their rank. It is apparent that the model struggles to estimate the rank of samples in the middle correctly, because doing so is inherently harder. Take for instance the sample with the highest rank: the model could theoretically achieve 100\% accuracy easily by predicting a high value for it. Likewise, it can constantly predict a low value for it, making it a hard sample (accuracy 0\%). This \textit{trivial} predicting gets harder as the ranks move away from the ends of the spectrum. The solid lines indicate where the trivial bound lies; for a randomly initialized model, maximum and minimum would (in expectation) lie on the trivial bound. Crosses exceeding the solid lines (i.e. crosses above or below the area enclosed by them) are better than trivial guessing. It is important to note, though, that the samples are drawn from the test set, so the model has not seen them before. The horizontal, dashed line marks the average model performance on the test set. The maximum accuracy for samples around $r=0.5$ is only slightly above $60\%$. We see this as an indicator for the difficulty of the task, supporting our speculation that the maximum achievable accuracy (based on visual features) is below $70\%$.}
\label{fig:minmaxaccvsrank}
\end{figure}
| {
"alphanum_fraction": 0.732436186,
"avg_line_length": 86.643153527,
"ext": "tex",
"hexsha": "d16f5728aa66881844e006db59b869ce7018c36d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-02-18T16:27:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-02-18T16:27:30.000Z",
"max_forks_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation",
"max_forks_repo_path": "docs/article/content/discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation",
"max_issues_repo_path": "docs/article/content/discussion.tex",
"max_line_length": 1324,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation",
"max_stars_repo_path": "docs/article/content/discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-03T20:10:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-27T05:59:40.000Z",
"num_tokens": 6031,
"size": 20881
} |
\documentclass{article}
\usepackage{amsmath}
\newcommand{\BigO}[1]{\ensuremath{\operatorname{O}\bigl(#1\bigr)}}
\begin{document}
\begin{center}
{\Huge \bf Sudoku $M*N$ Solver} \\
{\large Daniel Lovasko} \\[1\baselineskip]
\end{center}
\section*{Introduction}
A Sudoku solver written entirely in Prolog. It supports only square-shaped puzzles,
but the inner areas are allowed to be rectangular (e.g., a $20*20$ puzzle may have inner
areas of $5*4$, $4*5$, $2*10$, or $10*2$). GNU Prolog was chosen as the
Prolog flavour, mainly due to its licensing terms and its convenient IO library.
\section{Goal} The goal of this solver is to find all possible solutions for
the supplied puzzle using the chosen solving method.
\section{Command Line Arguments} Multiple argument counts are acceptable:
\paragraph{0 arguments} the user will be prompted after startup to supply the path to
the file containing the sudoku puzzle and the solving method name (for solving
method names and information, see the Algorithms section)
\paragraph{1 argument} the first argument will be understood as the path to the
file containing the sudoku puzzle; the user will be prompted for the solving method name
\paragraph{2 arguments} the first argument will be understood as the path to the file
containing the sudoku puzzle, the second as the solving method name
\\
\\
NOTE: All information supplied at runtime (as opposed to command line arguments) must
be surrounded with "".
\section{Data Format}
The puzzle is loaded from an external file saved on disk. This file must be in the
following format: the first line consists of two integers, each followed by a dot.
These numbers represent the width and height of an inner area of the puzzle,
respectively.
Next, the actual puzzle is expected. Each cell is represented as an integer,
ranging from 1 to width*height, followed by a dot. A 0, also followed by a dot, is
possible as well, meaning that the cell is not solved yet and is subject to the
solving algorithm. The number of cells is always (width*height) * (width*height);
so, for example, with width = 3 and height = 3, 81 cells are expected. Be careful to
provide the exact number of cells, as the program can behave strangely on incorrect
input.
\subsection{Example Puzzle File}
\begin{verbatim}
2. 2.
1. 0. 2. 0.
2. 0. 0. 0.
0. 3. 0. 0.
0. 0. 0. 4.
\end{verbatim}
\section{Data Structures and API}
The puzzle is represented by the sudoku/3 predicate. The first argument is M, the width
of each inner area; the second is N, the height of each inner area; and the last is Fields,
the list of fields. Fields are the internal program representation of the
user-supplied cells loaded from the puzzle file. Each field is represented by a
list of length 3, the first two elements being the X, Y coordinates and the last element
being Value of that field.
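For illustration, and assuming 1-based coordinates with X denoting the column and Y the row (this ordering is an assumption made here for the example, not taken from the source), the example puzzle from the Data Format section would roughly correspond to a term of the following shape:
\begin{verbatim}
sudoku(2, 2,
       [[1,1,1], [2,1,0], [3,1,2], [4,1,0],
        [1,2,2], [2,2,0], [3,2,0], [4,2,0],
        [1,3,0], [2,3,3], [3,3,0], [4,3,0],
        [1,4,0], [2,4,0], [3,4,0], [4,4,4]]).
\end{verbatim}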
Predicates for manipulating this data structure can be found in the get\_set.pl source file.
Read-only access to a specified row is provided by filter\_same\_row/3, with the
analogous procedures filter\_same\_column/3 and filter\_same\_area/3 for columns and
areas, respectively. These procedures take \BigO{(M \cdot N)^2} time.
Write access is supported by the set\_value/4 procedure, which also takes
\BigO{(M \cdot N)^2} time.
\section{Algorithms}
There are two basic approaches used in this program. First, the static solving
algorithm, which creates a list of fields up front and then backtracks
through this list, always inserting correct values into the puzzle. During the
first computation, no actual data modification is made and the heuristic method
does not take the field values into account.
On the other hand, the dynamic solving algorithm modifies the actual puzzle data during its
run. Its heuristic method is based on the field values.
Common code shared between these algorithms includes the sudoku puzzle get/set API
calls and predicates concerning field correctness.
\subsection{Static Algorithms}
The algorithm can be divided into two steps. The first step is creating a list of the
fields that were not pre-solved; the order of the fields in this list is based on a
heuristic approach. Each individual technique is described in its own subsection.
The second step is using Prolog's internal backtracking engine to try to fill in each
field, element by element.
\subsubsection{Method 1: Relatives}
This heuristic approach sorts fields by the following criterion: how many fields in the
same column, row, or area are already solved or added to the list. These are all called
the `relatives'. In each step, it chooses the field with the most `relatives'.
\subsubsection{Method 2: Neighbours}
This heuristic approach sorts fields by the following criterion: how many of the adjacent
fields are already solved or added to the list. These are all called the `neighbours'.
In each step, it chooses the field with the most `neighbours'.
\subsection{Method 3: Adhoc (Dynamic Algorithm)}
In each step of this algorithm, all yet unsolved fields are evaluated by a
heuristic (fields with fewer possible correct values are ranked higher) and
then filled in. No list is created; the actual puzzle is modified, so
each step needs to recalculate the heuristic values.
\section{Generator}
IMPORTANT: I did not write this code myself; it was copied from the FreeBSD ports
collection. My contribution is the interface described below.
\subsection{make}
This command handles the compilation of the C source code. libcurses is needed
for compilation.
\subsection{make generate}
Uses the compiled binary sudoku generator to create a set of puzzles with size
3x3.
\subsection{make solve}
Invokes the compiled binary sudoku solver (written in Prolog) with each
generated puzzle, for each solving technique.
\end{document}
| {
"alphanum_fraction": 0.7895777179,
"avg_line_length": 51.0550458716,
"ext": "tex",
"hexsha": "d59a1d516d424180acd85ca04084b3c34ea32f98",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "282f7a677e4f9d8bde8334eb19c6940ff39fe4bd",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "lovasko/NoveZamky",
"max_forks_repo_path": "doc/manual.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "282f7a677e4f9d8bde8334eb19c6940ff39fe4bd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "lovasko/NoveZamky",
"max_issues_repo_path": "doc/manual.tex",
"max_line_length": 83,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "282f7a677e4f9d8bde8334eb19c6940ff39fe4bd",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "lovasko/NoveZamky",
"max_stars_repo_path": "doc/manual.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1314,
"size": 5565
} |
\section{Common errors}
\label{sec:Common errors}
\subsection{Install Qt}
Inviwo uses the graphics library Qt, which isn't always installed properly. These instructions show how to download and install the latest version of Qt on Ubuntu Linux 20.04, which at the time of writing this user guide is version 5.12.3.
To download the installation file into the \emph{\textasciitilde/Downloads} directory, simply execute the commands below.
\begin{lstlisting}[frame = single, breaklines=true]
cd ~/Downloads
wget http://download.qt.io/official_releases/qt/5.12/5.12.3/qt-opensource-linux-x64-5.12.3.run
\end{lstlisting}
When the installation file has finished downloading, you won't yet have permission to run it. Change the permissions and run the file by executing the commands below, entering your superuser password when prompted.
\begin{lstlisting}[frame = single, breaklines=true]
chmod +x qt-opensource-linux-x64-5.12.3.run
sudo ./qt-opensource-linux-x64-5.12.3.run
\end{lstlisting}
A Qt installer is now shown on the screen. Notice that the manual installation will force an installation of the Qt editor, as shown in step 6. The entire installation will occupy approximately 5.12 GB. Follow the instructions in figure \ref{fig:Qt} to complete the installation.
After the installation is done, the path to Qt needs to be added to the system. Add the necessary paths by executing the commands below.
\begin{lstlisting}[frame = single, breaklines=true]
cd /usr/lib/x86_64-linux-gnu/qtchooser
sudo echo "/opt/Qt5.12.3/5.12.3/gcc_64/bin" | sudo tee -a default.conf
sudo echo "/opt/Qt5.12.3/5.12.3/gcc_64/lib" | sudo tee -a default.conf
\end{lstlisting}
The system is now ready for an Inviwo installation.
\begin{figure}[H]
\centering
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt1.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt2.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt3.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt4.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt6.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt7.png}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=1\textwidth]{Images/Qt8.png}
\end{subfigure}
\caption{Instructions for installation of Qt 5.12.3.}
\label{fig:Qt}
\end{figure}
\subsection{Set the INVIWO\_HOME path}
Before using ENVISIoN's GUI it is good to check that the environment variable INVIWO\_HOME is set to the correct value. Otherwise the system will not find the modules inviwopy or inviwopyapp. If you have started to use the GUI, you will receive the following message if the variable is not set:
\newline
``Module error: No module named '[The missing module]'. Can not find module. Please check that the environment variable INVIWO\_HOME is set to the correct value in the computers system settings.''
\newline
\subsection{The module Inviwopyapp did not build}
If the module inviwopyapp did not build when building in Visual Studio, open the project once again in Visual Studio 2017 and make sure ``inviwopyapp'' is listed to the right in the ``Solution Explorer''; see figure \ref{fig:VisStudioInviwopyapp}. Inviwopyapp should be listed under ``minimals''.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.6]{Images/VisualStudioinviwopyapp.png}
\caption{The ``Solution Explorer'' in Visual Studio 2017}
\label{fig:VisStudioInviwopyapp}
\end{figure}
Now try to build the project again, but by choosing ``Build'' in the upper menu and then ``Build Solution'' instead of pressing Fn + F5. The build should start; keep track of possible errors during the build. When it has finished, check that inviwopyapp has been built.
| {
"alphanum_fraction": 0.7494473102,
"avg_line_length": 52.1923076923,
"ext": "tex",
"hexsha": "0420281354e8f25721ca3e011944d7870738f66d",
"lang": "TeX",
"max_forks_count": 9,
"max_forks_repo_forks_event_max_datetime": "2021-03-04T10:12:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-03-20T12:26:05.000Z",
"max_forks_repo_head_hexsha": "95cf5e74411807e21909f1a0d47b52fd6318a855",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "adamskor/ENVISIoN",
"max_forks_repo_path": "docs/User guide/CommonError.tex",
"max_issues_count": 23,
"max_issues_repo_head_hexsha": "95cf5e74411807e21909f1a0d47b52fd6318a855",
"max_issues_repo_issues_event_max_datetime": "2022-03-25T18:51:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-05-28T20:50:21.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "adamskor/ENVISIoN",
"max_issues_repo_path": "docs/User guide/CommonError.tex",
"max_line_length": 296,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "95cf5e74411807e21909f1a0d47b52fd6318a855",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "adamskor/ENVISIoN",
"max_stars_repo_path": "docs/User guide/CommonError.tex",
"max_stars_repo_stars_event_max_datetime": "2019-02-25T17:21:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-09T21:50:40.000Z",
"num_tokens": 1107,
"size": 4071
} |
\chapter{Requirements}
The following sections list the requirements and their respective priorities.
Additionally it is indicated whether and how they are currently implemented.
\section{Non-Functional Requirements}
\paragraph{Transparency (Must Have High)}
\emph{Varroa has to be comprehensible for the user.}
This requirement is fulfilled.
Varroa implements expressive logging with configurable levels and outputs.
This enables a rich insight into the execution process of Varroa.
\paragraph{10.000.000 MQTT Clients (Must Have High)}
\emph{Varroa has to be able to generate a large amount of clients.}
Even though we can already generate large numbers of clients, we have not yet verified the 10.000.000 benchmark.
\paragraph{Scalability (Must Have)}
\emph{Varroa should scale vertically with relatively low scaling costs.}
This requirement is fulfilled.
To do so Varroa is organised as a distributed system and can easily be extended with new instances running as Agents.
\paragraph{Determinism (Must Have)}
\emph{Varroa has to work in deterministic ways, meaning it should produce the same result for the same Scenario every time.}
This requirement is fulfilled.
To guarantee this determinism several measures are taken, ensuring deterministic distribution and execution of a Scenario (see chapter \ref{sec:Architecture}).
\paragraph{Distributed (Must Have Low)}
\emph{Varroa is a distributed system.}
This requirement is fulfilled.
Varroa is organized as a distributed system.
It is composed of a Commander and multiple Agents.
These components can be run on separate machines.
\paragraph{Usability (Very Important)}
\emph{Varroa has to be easily usable.}
This requirement is fulfilled.
The user can easily define custom Scenarios as XML files supported by XSD validation and execute them (see chapter \ref{sec:ScenarioConcept} and \ref{sec:Execution}).
\paragraph{Code Quality (Important)}
\emph{Varroa's code quality should be very high.}
This requirement is fulfilled.
To ensure high code quality multiple measures are taken.
An example for these measures is the consequent use of nullability annotations as well as broad test code coverage.
We also followed strict rules regarding code reviews: before new code was merged into the master branch, it had to be reviewed by another team member.
\paragraph{Stability (Important)}
\emph{Varroa has to run in a stable manner.}
This requirement is fulfilled.
Stability is ensured by the use of integration tests and general unit tests.
\paragraph{Resource efficiency (Important)}
\emph{Varroa has to use the available computation and memory resources efficiently.}
Resource efficiency was proven while testing Varroa, as it only used a fraction of the attacked broker's RAM usage.
\paragraph{User / Developer Guide (Somewhat Important)}
\emph{Varroa needs a User / Developer Guide.}
This requirement is fulfilled.
The documentation educates the user on the construction of a custom Scenario as well as the execution of it on a Varroa Distributed System (see \ref{sec:ScenarioConcept} and \ref{sec:Execution}).
\paragraph{Automation capacity (Somewhat Important)}
\emph{Varroa should be automatable.}
This requirement is partially fulfilled. It can be automated by an external script.
\paragraph{User Interface}
\emph{Varroa should have a usable user interface.}
This requirement is fulfilled.
The user can interact with Varroa using a command line interface or using Kubernetes.
\paragraph{Multi Platform Support (Nice To Have)}
\emph{Varroa should run on multiple platforms.}
This requirement is fulfilled.
Due to the utilisation of Docker, Varroa can run on several platforms (see chapter \ref{sec:Execution}).
\section{Functional Requirements}
The following sections list the functional requirements and their respective priorities.
\paragraph{Scenarios (Must Have High)}
\emph{Varroa must be able to execute user defined MQTT Scenarios.}
This requirement is fulfilled.
Varroa has the ability to parse and execute Scenarios, according to the concept explained in Chapter \ref{sec:ScenarioConcept}.
\paragraph{MQTT Specification Confirmity (Must Have High)}
\emph{Varroa must conform to the MQTT specification.}
This requirement is fulfilled.
Varroa uses the HiveMQ MQTT Client library for its MQTT Actions.
The library ensures specification conformity in both MQTT version 3 and MQTT version 5.
\paragraph{All Transports (Must Have)}
\emph{Varroa has to support all transports that are possible in MQTT.}
This requirement is not fulfilled.
Currently Varroa does not support WebSockets.
\paragraph{Reports (Must Have)}
\emph{Varroa has to report the findings of the testing process.}
This requirement is fulfilled (see Chapter \ref{sec:Reporting}).
\paragraph{User Feedback (Must Have Low)}
\emph{Varroa has to give users feedback during runtime.}
This requirement is fulfilled by extensive logging.
\paragraph{Documentation (Very Important)}
\emph{Varroa has to be documented. }
This requirement is fulfilled.
This document contains a user guide and explains the architecture and inner workings of Varroa.
\paragraph{Metrics (Very Important)}
\emph{Varroa has to record and expose internal metrics.}
This requirement is partially fulfilled.
Varroa provides multiple metrics for the Scenario (see Section \ref{sec:metrics}) but not for internal processes.
\paragraph{Deployment Convenience (Important)}
\emph{Varroa must be easily deployable.}
This requirement is fulfilled. Due to the use of Docker Varroa Instances can be easily started and orchestrated without much configuration (see chapter \ref{sec:Execution}).
\paragraph{Topology Change (Less important)}
\emph{Varroa must be able to simulate changes to the network topology.}
This requirement is not fulfilled.
It is not possible to simulate network level behaviour of the clients in the current version.
\paragraph{Simple and Expert Mode (Somewhat Important)}
\emph{Varroa has to have two user modes.
One for experienced users and one for inexperienced users.}
This requirement is not fulfilled.
Since Varroa does not have a graphical user interface in this version, simple and expert modes are not implemented.
\paragraph{Notifications (Nice To Have)}
\emph{Varroa must notify the user about the testing process by email.}
This requirement is not fulfilled in the current version. | {
"alphanum_fraction": 0.8009754563,
"avg_line_length": 41.2727272727,
"ext": "tex",
"hexsha": "0bfefd115c2c74dd181282b1d8d8c732079e9593",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b0b11729b4d582156ea499d480911b12a1dda840",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "mqtt-varroa/varroa-documentation",
"max_forks_repo_path": "LaTeX/Parts/Requirements.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "b0b11729b4d582156ea499d480911b12a1dda840",
"max_issues_repo_issues_event_max_datetime": "2019-06-18T13:31:46.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-05T10:19:21.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "mqtt-varroa/varroa-dokumentation",
"max_issues_repo_path": "LaTeX/Parts/Requirements.tex",
"max_line_length": 195,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "b0b11729b4d582156ea499d480911b12a1dda840",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "mqtt-varroa/varroa-dokumentation",
"max_stars_repo_path": "LaTeX/Parts/Requirements.tex",
"max_stars_repo_stars_event_max_datetime": "2020-09-23T15:29:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-22T14:17:19.000Z",
"num_tokens": 1402,
"size": 6356
} |
\chapter{Marketing Analytics}\label{ch:5}
\begin{remark}{Outline}
In this chapter, we present two different real-world example-dependent
cost-sensitive marketing problems, namely, churn modeling and direct marketing. Both problems deal
with identifying those customers with certain characteristics, with the objective of maximizing the
results of the different CRM strategies.
First, we introduce the churn modeling problem and present the proposed financial evaluation
measure for a churn campaign in Section \ref{sec:5:churn}. Lastly, in Section
\ref{sec:5:directmarketing}, the direct marketing problem is presented, including the proposed
example-dependent cost-sensitive cost matrix for evaluating this problem.
\end{remark}
\section{Churn modeling}
\label{sec:5:churn}
Customer churn predictive modeling deals with predicting the probability of a customer defecting
using historical, behavioral and socio-economical information. This tool is of great benefit to
subscription based companies allowing them to maximize the results of retention campaigns. The
problem of churn predictive modeling has been widely studied by the data mining and machine learning
communities. It is usually tackled by using classification algorithms in order to learn the
different patterns of both the churners and non-churners. Nevertheless, current state-of-the-art
classification algorithms are not well aligned with commercial goals, in the sense that the models
fail to include the real financial costs and benefits during the training and evaluation phases. In
the case of churn, evaluating a model based on a traditional measure such as accuracy or predictive
power does not yield the best results when measured by the actual financial cost, i.e., the
investment per subscriber on a loyalty campaign and the financial impact of failing to detect a
real churner versus wrongly predicting a non-churner as a churner.
In this section, we propose a cost-sensitive framework for customer churn predictive
modeling. First, in Section \ref{sec:5:1:intro}, we introduce
the problem of churn modeling. Then in Section \ref{sec:5:1:evaluation}, we present a financial
based measure for evaluating the effectiveness of a churn campaign taking into account the
available
portfolio of offers, their individual financial cost and probability of offer acceptance depending
on the customer profile. Finally, in Section~\ref{sec:5:1:data}, we describe the real-world churn
modeling dataset that will be used during the experiments.
\subsection{Flow analysis of a churn campaign}
\label{sec:5:1:intro}
The two main objectives of subscription-based companies are to acquire new subscribers and
retain those they already have, mainly because profits are directly linked with the number of
subscribers. In order to maximize the profit, companies must increase the customer base by
incrementing sales while decreasing the number of churners. Furthermore, it is common knowledge
that retaining a customer is about five times less expensive than acquiring a new one
\citep{Farris2010}; this creates pressure to have better and more effective churn campaigns.
A typical churn campaign consists of identifying, from the current customer base, which customers are
more likely to leave the company, and making them an offer in order to avoid that behavior.
With this in mind, companies use intelligence to create and improve retention and collection
strategies. In the first case, this usually implies an offer that can be either a discount or a
free upgrade during a certain span of time. In both cases the company has to assume a cost for that
offer; therefore, accurate prediction of the churners becomes important. The logic of this flow is
shown in \figurename{ \ref{fig:ch5:1}}.
The churn campaign process starts with the sales that increase the customer base every month;
however, each month there is also a group of customers who decide to leave the company for many
reasons. The objective of a churn model is to identify those customers before they make the
decision to defect.
Using a churn model, those customers more likely to leave are predicted as churners and
an offer is made in order to retain them. However, it is known that not all customers will accept
the offer: in the case when a customer is planning to defect, it is possible that the offer is not
good enough to retain him, or that the reason for defecting cannot be influenced by an offer.
Using historical information, it is estimated that a customer will accept the offer with
probability $\gamma$.
On the other hand, there is the case in which the churn model misclassifies a non-churner as a
churner, also known as a false positive. In that case the customer will always accept the offer,
which means an additional cost to the company, since those misclassified customers did not have the
intention of leaving.
\begin{figure}[t]
\centering
\includegraphics[width=11.5cm]{ch5_fig1} %CHANGE TO 12cm
\caption{Flow analysis of a churn campaign \citep{Verbraken2012}}
\label{fig:ch5:1}
\end{figure}
In the case where the churn model predicts customers as non-churners, there is also the possibility
of a misclassification, in which an actual churner is predicted as a non-churner. Since
these customers do not receive an offer, they will leave the company; these cases are known as
false negatives. Lastly, there is the case where the customers are actually non-churners; then
there is no need to make a retention offer to these customers, since they will continue to be part
of the customer base.
It can be seen that a churn campaign (or churn model) has three main points. First, avoid false
positives, since there is a financial cost to making an offer where it is not needed. Second, find
the right offer to give to those customers identified as churners. And lastly, decrease
the number of false negatives.
From a machine learning perspective, a churn model is a classification algorithm, in the sense
that, using historical information, a prediction is made of which current customers
are more likely to defect. This model is normally created using one of a number of
well-established algorithms (logistic regression, neural networks, random forests, among
others) \citep{Ngai2009,KhakAbi2010}. Then, the model is evaluated using measures such as
misclassification error, receiver operating characteristic ($ROC$), Kolmogorov-Smirnov ($KS$)
or \mbox{$F_1Score$} statistics \citep{Verbeke2012}.
However, these measures may not be the most appropriate evaluation criteria when
evaluating a churn model, because they tacitly assume that misclassification errors carry the
same cost, and similarly for the correctly classified examples. This assumption does not hold in many
real-world applications such as churn modeling, since the financial losses of misidentifying a
churner are quite different from those of misclassifying a non-churner as a churner \citep{Glady2009}.
Furthermore, the accuracy measure also assumes that the class distribution
among examples is constant and balanced \citep{Provost1998}, and typically the distributions of a
churn data set are skewed \citep{Verbeke2012}.
In the next section, we propose a new financial based measure for evaluating the effectiveness of
a voluntary churn campaign taking into account the available portfolio of offers, their
individual financial cost and probability of acceptance depending on the customer profile.
\subsection{Proposed evaluation measure of a churn campaign}
\label{sec:5:1:evaluation}
Different studies have proposed measures to deal with the cost sensitivity involved in
evaluating a churn model. In \citep{Neslin2006}, a profit-based measure was proposed by starting
with the confusion matrix and multiplying it with the expected profit of each case.
\begin{align}\label{eq:5:profit1}
Profit_1 = (TP+FP)\bigg[ & \left(\gamma CLV + C_o(1-\gamma)(-C_a)\right)\pi_1\gamma \nonumber \\
& -C_o-C_a\bigg]-N\cdot C_a,
\end{align}
with $C_a$ being the cost of contacting a customer (such that $N\cdot C_a$ is the fixed
administrative cost of running the campaign), $C_o$ the average cost of the retention offer,
$\pi_1$ the prior churn rate and $CLV$ the average customer lifetime value, i.e., the present
value of the expected profit that a
customer will generate. Moreover, as discussed in \citep{Verbraken2013}, if the average instead of
the total profit is considered and the fixed cost $N\cdot C_a$ is discarded since it is irrelevant
for the classifier selection, the profit can be expressed as:
\begin{align}\label{eq:5:profit2}
Profit_2 = &TP\left(\gamma(CLV-C_o-C_a)+(1-\gamma)C_a \right) \nonumber \\
&+FP(-C_o-C_a).
\end{align}
Nevertheless, equations (\ref{eq:5:profit1}) and (\ref{eq:5:profit2}) assume that every customer
has the same $CLV$ and $C_o$, whereas this is not true in practice. In fact, different customers
have a very different $CLV$, and not all offers can be made to every customer, neither do they have
the same impact across customers. In order to obtain a more business oriented measure, we first
analyze the financial impact of the different decisions, i.e., false positives, false negatives,
true positives and true negatives, for each customer.
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{ch5_fig2}
\caption{Financial impact of the different decisions, i.e., False positives, false negatives,
true positives and true negatives}
\label{fig:ch5:2}
\end{figure}
In \figurename{ \ref{fig:ch5:2}}, the financial impact of a churn model is shown. Note that we take
into account the costs and not the profit in each case.
When a customer is predicted to be a churner, an offer is made with the objective of avoiding
the customer defecting. However, if a customer is actually a churner, he may or may not accept the
offer, accepting with probability $\gamma_i$. If the customer accepts the offer, the financial impact is
equal to the cost of the offer ($C_{o_i}$) plus the administrative cost of contacting the
customer ($C_a$). On the other hand, if the customer declines the offer, the cost is the
expected income that the client would otherwise generate, also called the customer lifetime value
($CLV_i$), plus $C_a$. Lastly, if the customer is not actually a churner, he will be happy to
accept the offer and the cost will be $C_{o_i}$ plus $C_a$.
In the case that the customer is predicted as a non-churner, there are two possible outcomes.
Either the customer is not a churner, in which case the cost is zero, or the customer is a churner
and the cost is $CLV_i$.
\begin{table}[b]
\centering
\footnotesize
\begin{tabular}{c|c|c}
\multicolumn{3}{c}{}\\
\multicolumn{1}{c|}{} & Actual Positive& Actual Negative \\
\multicolumn{1}{c|}{} & $y_i=1$& $y_i=0$ \\
\hline
Predicted Positive & $C_{TP_i}=\gamma_iC_{o_i}$ &
\multirow{2}{*}{$C_{FP_i}=C_{o_i}+C_a$}\\
% Predicted Positive & \multirow{
% 2}{*}{$C_{TP_i}=\gamma_iC_{o_i}+(1-\gamma_i)(CLV_i+C_a)$} &
% \multirow{2}{*}{$C_{FP_i}=C_{o_i}+C_a$}\\
$c_i=1$ & $+(1-\gamma_i)(CLV_i+C_a)$ &\\
\hline
Predicted Negative & \multirow{ 2}{*}{$C_{FN_i}=CLV_i$} & \multirow{
2}{*}{$C_{TN_i}=0$} \\
$c_i=0$ & &\\
%\hline
\end{tabular}
\caption{Proposed churn modeling example-dependent cost matrix}
\label{tab:ch5:1}
\end{table}
The different costs are summarized in \tablename{ \ref{tab:ch5:1}}. Then using the cost
matrix, and the example-dependent cost-sensitive framework as described in Section
\ref{sec:3:csmeasures}, an example-dependent cost statistic is calculated as:
\begin{align}
Cost_i &= y_i(c_i C_{TP_i} + (1-c_i)C_{FN_i})& \nonumber \\
& + (1-y_i)(c_i C_{FP_i} + (1-c_i)C_{TN_i})& \nonumber \\
% &=y_i(c_i (\gamma_iC_{o_i}+(1-\gamma_i)(CLV_i+C_a))& \nonumber \\
% & + (1-c_i)CLV_i)& \nonumber \\
% & + (1-y_i)(c_i (C_{o_i}+C_a) + (1-c_i)(0)) &\nonumber \\
&= y_i(c_i\left(\gamma_i(C_{o_i}-CLV_i-C_a)-C_{o_i}\right)+CLV_i)&\nonumber \\
& +c_i(C_{o_i}+C_a),&
\end{align}
leading to a total cost of:
\begin{equation}
Cost = \sum_{i=1}^N Cost_i.
\end{equation}
Furthermore, using (\ref{eq:3:savings}), the savings are calculated as:
\begin{equation}
  Savings = \frac{Cost_l - Cost}{Cost_l}.
\end{equation}
In almost all cases the costless class ($Cost_l$) will be the negative class, as typically the
distribution of a churn dataset is skewed towards the non-churners \citep{Verbeke2012}. Given that
$Cost_l$ can be expressed as $Cost(f_0)$, or simply $Cost$ with $c_i=0$ $\forall i$:
\begin{equation}
Cost_l = \sum_{i=1}^{N} y_i CLV_i.
\end{equation}
This is consistent with the notion that if no model is used, the total cost would be the
sum of the customer lifetime values of the actual churners, which gives the insight
that the $Savings$ measure consists of comparing the financial impact of a campaign that uses a
classification model against not using a model at all.
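To make the evaluation concrete, the following is a minimal Python sketch of how the cost matrix of \tablename{ \ref{tab:ch5:1}} and the $Savings$ measure could be computed; the toy inputs and variable names are chosen by us purely for illustration and this is not the code used for the experiments.
\begin{verbatim}
import numpy as np

def churn_cost(y, c, clv, c_offer, gamma, c_admin):
    """Example-dependent cost of predictions c for true labels y.

    Cost matrix: C_TP = gamma*C_o + (1-gamma)*(CLV + C_a),
                 C_FP = C_o + C_a,  C_FN = CLV,  C_TN = 0.
    """
    c_tp = gamma * c_offer + (1 - gamma) * (clv + c_admin)
    c_fp = c_offer + c_admin
    c_fn = clv
    c_tn = np.zeros_like(clv)
    cost_i = (y * (c * c_tp + (1 - c) * c_fn)
              + (1 - y) * (c * c_fp + (1 - c) * c_tn))
    return cost_i.sum()

def savings(y, c, clv, c_offer, gamma, c_admin):
    cost = churn_cost(y, c, clv, c_offer, gamma, c_admin)
    cost_l = (y * clv).sum()   # cost of predicting everyone as a non-churner
    return (cost_l - cost) / cost_l

# Toy example with three customers (all values are made up for illustration).
y     = np.array([1, 0, 1])            # true churn labels
c     = np.array([1, 0, 0])            # predicted labels
clv   = np.array([150.0, 80.0, 200.0]) # individual customer lifetime values
c_o   = np.array([10.0, 5.0, 12.0])    # individual offer costs
gamma = np.array([0.4, 0.5, 0.3])      # offer acceptance probabilities
print(savings(y, c, clv, c_o, gamma, c_admin=1.0))
\end{verbatim}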
\subsubsection{Customer lifetime value}
Lastly, one of the key values to calculate the $Savings$ is the customer lifetime value. Within
marketing there exists a common misconception between customer profitability and customer lifetime
value. The two terms are usually used in an interchangeable way,
creating confusion about what the actual objective of a churn modeling campaign should be. Several
studies have proposed models providing a unique definition of both terms
\citep{Neslin2006,Pfeifer2004,Milne1999a,VanRaaij2003}. Customer
profitability indicates the difference between the income and the cost
generated by a customer $i$ during a financial period $t$. It is defined as:
\begin{equation}
CP_{i,t} = \mu \cdot s_{i,t},
\end{equation}
where $s_{i,t}$ refers to the consumption of customer $i$ during time period $t$, and $\mu$ refers
to the average marginal profit by unit product usage.
Moreover, we are interested in the expected income that a particular customer will
generate in the future, in other words, in calculating the expected sum of
discounted future earnings \citep{Neslin2006}. Therefore, the $CLV_i$ is defined as:
\begin{equation}
CLV_i = \sum_{t=1}^T\frac{\mu \cdot s_{i,t}}{(1+r)^t},
\end{equation}
where $r$ is the discount rate, and $T$ the number of time periods.
Typically $T$ should be considered large enough since without prior
knowledge a customer is expected to keep being a customer for the foreseeable future. In practice
$T$ is set up to be infinity~\citep{Glady2009}. Also, for simplicity, it can be assumed that
$s_{i,t+1}=s_{i,t}\cdot (1+g)$ $\forall {i,t}$, which means that there is a constant growth $g$ in
the customer consumption. Given that, the customer lifetime value can be re-written as
\begin{equation}
CLV_i = \sum_{t=1}^\infty\frac{ (1+g)^t}{(1+r)^t}\cdot \mu\cdot s_{i,1},
\end{equation}
which, in the case of $g<r$, is a geometric series and can be expressed as
\begin{equation}
CLV_i = \frac{\mu\cdot s_{i,1}}{(r-g)}.
\end{equation}
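For illustration, take hypothetical values of $\mu\cdot s_{i,1}=10$ Euros of profit per period, a discount rate of $r=5\%$ and a consumption growth of $g=1\%$; according to the last expression, this customer would have $CLV_i = 10/(0.05-0.01) = 250$ Euros.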
\subsection{Churn modeling database}
\label{sec:5:1:data}
For our experiments we use a dataset provided by a TV cable provider.
The dataset consists of active customers during the first semester of 2014.
The total dataset contains 9,410 individual registries, each one with 45 attributes,
including a churn label indicating whenever a customer is a churner.
This label was created internally in the company, and can be regarded as highly accurate.
In the dataset only 455 customers are churners, leading to a churn ratio of 4.83\%.
\subsubsection{Offer acceptance calculation}
In practice companies have a set of offers to make to a customer as part of the retention
campaign. They vary from discounts to upgrades, among others. In the particular case of a TV cable
provider, the offers include adding a new set of channels, changing the TV receiver to one with new
technology (i.e., high definition, video recording, 4K), or to offer a discount on the monthly
bill. Unsurprisingly, not all offers apply to all clients. For instance, a customer that already
has all the channels can not be offered a new set of channels. Moreover, an offer usually means an
additional cost to the company and not all offers have the same cost or the same impact in
reducing churn.
Taking into account the cost and the implications of the offers, the problem can be
summarized as making each customer the offer that will maximize the acceptance rate and, more
importantly, reduce the overall cost.
\begin{figure}[t!]
\centering
\includegraphics[width=10cm]{ch5_fig3}
\caption{Acceptance rate ($\gamma$) of the best offer for each customer profile. As expected,
the higher the churn rate the lower the acceptance rate, as it is more difficult to make a
good offer to a customer which is more likely to defect. }
\label{fig:ch5:3}
\end{figure}
In order to calculate the acceptance probability $\gamma_i$, a champion-challenger process was carried out.
First, the customers were grouped into clusters according to their behavioral and socio-economical
characteristics. In particular the K-means algorithm was used \citep{Marslan2009}.
Then for a period of two months, randomly selected offers were made to the customers and their
response was evaluated. Unfortunately, for confidentiality reasons we can not describe the
different clusters, or the actual offer made to each customer. Nevertheless, in
\figurename{~\ref{fig:ch5:3}}, the average churn rate and acceptance rate $\gamma_i$ per cluster are
shown. As expected, the higher the churn rate the lower the acceptance rate, as it is more difficult
to make a good offer to a customer that is more likely to defect.
\section{Direct marketing}
\label{sec:5:directmarketing}
In direct marketing the objective is to classify those customers who are more likely to have a
certain response to a marketing campaign \citep{Ngai2009}. We used a direct marketing dataset
\citep{Moro2011} available on the UCI machine learning repository \citep{UCI2013}. The dataset
contains 45,000 clients of a Portuguese bank who were contacted by phone between March 2008 and
October 2010 and received an offer to open a long-term deposit account with attractive interest
rates. The dataset contains features such as age, job, marital status, education, average yearly
balance and current loan status and the label indicating whether or not the client accepted
the offer.
This problem is example-dependent cost sensitive, since there are different costs of false
positives and false negatives. Specifically, in direct marketing, false positives have the cost
of contacting the client, and false negatives have the cost due to the loss of income by failing
to contact a client that otherwise would have opened a long-term deposit.
\begin{table}[t]
\centering
\footnotesize
\begin{tabular}{c|c|c}
\multicolumn{1}{c|}{} & Actual Positive& Actual Negative \\
\multicolumn{1}{c|}{} & $y_i=1$& $y_i=0$ \\
\hline
Predicted Positive & \multirow{ 2}{*}{$C_{TP_i}=C_a$} & \multirow{ 2}{*}{$C_{FP_i}=C_a$}
\\
$c_i=1$ & &\\
\hline
Predicted Negative & \multirow{ 2}{*}{$C_{FN_i}=Int_i$} & \multirow{ 2}{*}{$C_{TN_i}=0$}
\\
$c_i=0$ & &\\
%\hline
\end{tabular}
\caption{Direct marketing example-dependent cost matrix}
\label{tab:5:d_mat}
\end{table}
We propose a direct marketing example-dependent cost matrix as shown in \mbox{\tablename{
\ref{tab:5:d_mat}}}, where $C_a$ is the administrative cost of contacting the client, as in the credit
card fraud case, and $Int_i$ is the expected income when a client opens a long-term deposit. This last
term is defined as the long-term deposit amount times the interest rate spread.
In order to estimate $Int_i$, first the long-term deposit amount is assumed to be 20\% of the
average yearly balance, and second, the interest rate spread is estimated to be 2.463\%, which
is the average between 2008 and 2010 of the retail banking sector in Portugal as reported by the
Portuguese central bank. Given that, $Int_i$ is equal to $\left( balance * 20\% \right) *
2.463\%$.
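For a hypothetical client with an average yearly balance of $1000$ Euros, this would give $Int_i = \left( 1000 \cdot 20\% \right) \cdot 2.463\% \approx 4.93$ Euros.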
\section{Summary of the datasets}
In this section we present the different marketing datasets.
For each dataset we used a pre-defined cost matrix as shown in Section~\ref{sec:5:churn} and
Section~\ref{sec:5:directmarketing}. Moreover, the datasets are split into training, validation
and testing sets, containing 50\%, 25\% and 25\% of the examples, respectively. Afterwards,
an under-sampling of the negative examples is made, and we perform the
cost-proportionate rejection-sampling and cost proportionate over-sampling procedures, as described
in Section~\ref{sec:3:costsampling}. \tablename{~\ref{tab:5:databases}},
summarizes the different datasets.
\begin{table}%[ht!]
\centering
\footnotesize
\begin{tabular}{l l c c c } %sum 7.7
\hline
\textbf{Database}& \textbf{Set}& $N$ & $\pi_1$ & Cost (Euros) \\
\hline
Churn&Total&9,410&4.83&580,884\\
Modeling&Training ($t$)&3,758&5.05&244,542\\
&Under-sampled ($u$) &374&50.80&244,542\\
&Rejection-sampled ($r$)&428&41.35&431,428\\
&Over-sampled ($o$) &5,767&31.24&2,350,285\\
&Validation&2,824&4.77&174,171\\
&Testing&2,825&4.42&162,171\\
\hline
Direct &Total&37,931&12.62&59,507\\
Marketing&Training ($t$)&15,346&12.55&24,304\\
&Under-sampled ($u$)&3,806&50.60&24,304\\
&Rejection-sampled ($r$)&1,644&52.43&20,621\\
&Over-sampled ($o$)&22,625&40.69&207,978\\
&Validation&11,354&12.30&16,154\\
&Testing&11,231&13.04&19,048\\
\hline
\end{tabular}
\caption{Summary of the marketing datasets, where $N$ is the number of examples and $\pi_1$ is
the percentage of positive examples.}
\label{tab:5:databases}
\end{table}
% \makeatletter
% \setlength{\@fptop}{0pt}
% \makeatother | {
"alphanum_fraction": 0.7554856074,
"avg_line_length": 55.0778894472,
"ext": "tex",
"hexsha": "be8856236b227c55acab8b4058314ad7b59282f7",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2020-02-11T13:47:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-25T17:16:32.000Z",
"max_forks_repo_head_hexsha": "8aedb00cba939b6f8a2f891f453a37206db1f635",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "albahnsen/phd-thesis",
"max_forks_repo_path": "tex/chapters/chapter05.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8aedb00cba939b6f8a2f891f453a37206db1f635",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "albahnsen/phd-thesis",
"max_issues_repo_path": "tex/chapters/chapter05.tex",
"max_line_length": 101,
"max_stars_count": 9,
"max_stars_repo_head_hexsha": "8aedb00cba939b6f8a2f891f453a37206db1f635",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "albahnsen/phd-thesis",
"max_stars_repo_path": "tex/chapters/chapter05.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-09T12:02:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-07T13:31:49.000Z",
"num_tokens": 5771,
"size": 21921
} |
\documentclass[a4paper, 11pt]{article}
\usepackage[margin=2cm]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{multirow}
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\setlength{\parindent}{0em}
\title{\textbf{COMP90056 Assignment A Report}}
\author{Tingsheng (Tinson) Lai (731319)}
\date{2019}
\begin{document}
\maketitle
\section{Introduction}
This report is a summary of the implementation and experimentation undertaken in the second assignment. It mainly focuses on two techniques, $\ell_{0}$ sampling and sparse recovery, which are also mutually interrelated. The former, the $\ell_{0}$ sampler, is an essential sampler used on a stream with a biased distribution to assign equal probability to each item present in the stream. Another important use case is when the stream is neither an insertion stream nor even a strict turnstile\footnote{In the remaining part of the report, unless otherwise indicated, a turnstile stream is always strict in any circumstances.} stream but a general stream. Sparse vector recovery is the fundamental technique embedded in the sampler to handle complex models of streaming data. It can achieve a high success rate for recovering a sparse vector with a minor amount of memory.
\section{Theory \& Implementation}
\subsection{Sparse Recovery}
\subsubsection{1-Sparse Recovery}
Starting with 1-sparse recovery: this is a data structure used to recover a 1-sparse vector with only three integers. A $k$-sparse vector, by definition, is a vector containing exactly $k$ element(s) with non-zero frequency. The traditional technique requires a hash map to summarise the frequency of each item in order to determine whether a given stream forms a 1-sparse vector. This becomes infeasible in a space-constrained setting when processing streams with a large variety of items, and it is extremely slow as well. Instead, we only need to record three numbers, which are:
\begin{itemize}
\item $w_{1} = \sum^{n}_{i = 1} f_{i} = F_{1}$, where $f_{i}$ is the sum of all updates retrieved from the stream for item $i$
\item $w_{2} = \sum^{n}_{i = 1} i \cdot f_{i}$
\item $w_{3} = \left( \sum^{n}_{i = 1} q^{i} \cdot f_{i} \right) \textit{ mod } p$, where $p$ is a large prime number and $q$ is a random number between $\left[ 2,p \right]$\footnote{This differs to what the note says but setting $q$ to 0 or 1 is pointless, thus the implementation deliberately ignore these two candidates.}
\end{itemize}
The update process is fast as it only needs integer computations. It also saves space, as the structure needs merely three integers. One of the major functionalities of this structure is to identify and distinguish the cases where the sparsity $s$ of the stream is:
\begin{itemize}
\item $s = 0 \Rightarrow w_{1} = 0 \wedge w_{2} = 0 \wedge w_{3} = 0$, but the reverse is not always true. However, a non-zero integer solution that simultaneously satisfies these three equations is very rare, especially when $q$ is extremely large, hence it is safe to claim that the reverse holds most of the time. This claim is supported later by the experiments. An implicit fact here is that $w_{1} = 0 \wedge \left( w_{2} \neq 0 \vee w_{3} \neq 0 \right) \Rightarrow s \geq 2$. This fact is critical for handling general streams.
\item $s = 1 \Rightarrow w_{2} \textit{ mod } w_{1} = 0 \wedge w_{3} = w_{1} \cdot q^{w_{2} / w_{1}} \textit{ mod } p$, where, similarly, the reverse does not always hold. Letting $q$ be a prime number can mitigate the collisions caused by the modulus operator. Technically, however, randomly selecting a prime number in a range involves multiple complicated computations and verifications, and the complexity grows polynomially. In practice, a combination of large $q$ and $p$ is sufficient to handle most circumstances, and the experiments also support this at later stages. Meanwhile, based on the fact that $i \in \mathbb{Z}^{+}$, $w_{1}$ and $w_{2}$ must have the same sign even if the incoming data is from a general stream.
\item $s \geq 2$, when the structure does not meet the conditions listed previously.
\end{itemize}
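A minimal Python sketch of this structure is given below; the parameter choices and names are illustrative only and do not reproduce the submitted implementation.
\begin{verbatim}
import random

class OneSparseRecovery:
    """Tracks w1, w2, w3 to decide whether a stream of (index, update)
    pairs is 0-sparse, 1-sparse or (almost surely) at least 2-sparse."""

    def __init__(self, p=(1 << 61) - 1):
        self.p = p                          # large (Mersenne) prime
        self.q = random.randint(2, p)       # random base in [2, p]
        self.w1 = self.w2 = self.w3 = 0

    def update(self, i, c):
        self.w1 += c
        self.w2 += i * c
        self.w3 = (self.w3 + c * pow(self.q, i, self.p)) % self.p

    def query(self):
        if self.w1 == 0 and self.w2 == 0 and self.w3 == 0:
            return None                     # 0-sparse (with high probability)
        if (self.w1 != 0 and self.w2 % self.w1 == 0 and self.w2 // self.w1 > 0
                and self.w3 == (self.w1 *
                                pow(self.q, self.w2 // self.w1, self.p)) % self.p):
            return self.w2 // self.w1, self.w1  # the unique item and its frequency
        return "FAIL"                       # at least 2-sparse (with high probability)
\end{verbatim}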
Given the assumption that all arithmetic operations can be completed in constant time\footnote{May not be applicable to multiplication, division and modulus. An explicit improvement for the modulus computation of a Mersenne prime is to simplify it into a constant number of bit operations, which is achieved in the implementation.}, the computation complexity of $w_{1}$ and $w_{2}$ is constant. For $w_{3}$, a recursive divide-and-conquer approach to compute the exponentiation can reduce the theoretical complexity to $O \left( \log{n} \right)$, but recursion incurs unnecessary overhead. An elegant workaround is to introduce an accumulator and square it each time until the exponent reaches the nearest power of 2 smaller than the index, then repeat this process for the remaining exponent. Both of these techniques achieve a theoretical upper bound of $O \left( \log{n} \right)$ on the computation complexity.
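The squaring trick mentioned above is essentially iterative exponentiation by squaring; one standard variant is sketched below (in Python the built-in \texttt{pow(q, i, p)} already provides the same functionality).
\begin{verbatim}
def mod_pow(base, exp, mod):
    """Compute (base ** exp) % mod with O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # lowest remaining bit set: fold base into result
            result = (result * base) % mod
        base = (base * base) % mod   # square the base for the next bit
        exp >>= 1
    return result
\end{verbatim}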
Modulus operator has two properties heavily used in the implementation to avoid problems caused by integer overflow in low-level languages, which are:
\begin{itemize}
\item $\left( a + b \right) \textit{ mod } p = \left( a \textit{ mod } p + b \textit{ mod } p \right) \textit{ mod } p$
\item $\left( a \cdot b \right) \textit{ mod } p = \left( a \textit{ mod } p \cdot b \textit{ mod } p \right) \textit{ mod } p$
\end{itemize}
\subsubsection{$k$-Sparse Recovery}
Building on top of the 1-sparse recovery, multiple (precisely, $k$) 1-sparse recovery structures are incorporated into a single structure using a technique similar to the count-min sketch discussed in the previous assignment. The $\delta$ parameter controls the expected maximum error rate in the output; the experimental results show this parameter to be a precise estimate of the error rate. The theoretical bound of the complexity is $\Omega \left( k \cdot \log{\delta} \cdot \log{n} \right)$\footnote{The computation time of the hash function was not considered in this complexity, so it is a lower bound}.
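Structurally, this can be sketched as follows, building on the \texttt{OneSparseRecovery} sketch above (the hash family and the exact number of repetitions are simplifying assumptions made here, not the submitted implementation):
\begin{verbatim}
import math
import random

class KSparseRecovery:
    """Attempts to recover a vector with at most k non-zero entries,
    failing with probability roughly delta."""

    def __init__(self, k, delta, p=(1 << 61) - 1):
        self.rows = max(1, math.ceil(math.log(1 / delta)))  # independent repetitions
        self.cols = 2 * k                                    # cells per repetition
        self.p = p
        # one simple hash h(i) = ((a*i + b) mod p) mod cols per repetition
        self.hashes = [(random.randint(1, p - 1), random.randint(0, p - 1))
                       for _ in range(self.rows)]
        self.cells = [[OneSparseRecovery(p) for _ in range(self.cols)]
                      for _ in range(self.rows)]

    def update(self, i, c):
        for r, (a, b) in enumerate(self.hashes):
            j = ((a * i + b) % self.p) % self.cols
            self.cells[r][j].update(i, c)

    def query(self):
        recovered = {}
        for row in self.cells:
            for cell in row:
                out = cell.query()
                if out not in (None, "FAIL"):
                    recovered[out[0]] = out[1]   # item -> frequency
        return recovered
\end{verbatim}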
\subsection{$\ell_{0}$ Sampler}
Using a $k$-wise independent hash family to approximate the $\epsilon$-min hash family is a simple but appropriate workaround as described in \cite{INDYK200184}. The simplicity of insertion streams reduces the complexity of implementation of the sampler. The uniformity of $k$-wise independent hash function guarantees the sample from the stream is uniformly distributed.
In a more flexible streaming model where negative updates exist, simple techniques for the insertion stream is not suitable as it does not support subtraction. The previously defined $s$-sparse structure came to the rescue. Multiple levels of $s$-sparse structures are combined with a hash function to incorporate both uniformity and error tolerance into the sampler to handle strict turnstile streams or even general streams. The hash function is expected to distribute the items sampled from the stream onto different levels based on the returned hash value. Causing an $\ell_{0}$ sampler to fail requires at least fulfilling the $s$-sparse structure at the highest level. The sampler can still accommodate data from streams with higher sparsity than expected.
\section{Experiment}
\subsection{$\ell_{0}$ Sampler}
The experiment used a uniform distribution alongside a binomial distribution to test whether the sampler assigned equal probability to each item during sampling. From the graphs below, it is clear that the $\ell_{0}$ sampler did not follow the original distribution when selecting items. The binomial distribution simulates a heavy hitter for the items centred around the mean success rate of the distribution. The selection of items was not entirely uniform due to collisions incurred by hashing. Based on the observation that the sampler did not favour the items with higher weights in the centre of the second graph, it is fair to claim that the $\ell_{0}$ sampler works as it was initially designed.
\begin{center}
\includegraphics[scale=0.5]{graphs/a/1-Uniform.jpg}
\includegraphics[scale=0.5]{graphs/a/1-Binomial.jpg}
\end{center}
Regarding the $\ell_{0}$ sampler's performance on a turnstile or general stream: since the generated stream has an extremely large range, $\left[ 1,2^{16}-1 \right]$, but the total stream cannot be too long\footnote{The execution time is too long even for one update.}, the generated graph is rather trivial. However, an interesting observation from the results is that there was no failure during sampling even when the sparsity $s$ grew to approximately $2k$ in the sampler.
\subsection{Sparse Recovery}
\subsubsection{1-Sparse Recovery}
The table below gives a sketch of the execution results for the 1-sparse structure under different sparsities, on different streams, and with different numbers of unique items. Repeating this process did not cause oscillations in the results, so randomly picking one output is still representative.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& & $s = 0$ & $s = 1$ & $s = 2$ & $s = 3$ & $s = 4$ & $s = 5$ & $s = 6$ \\ \hline
\multirow{2}{*}{$unique = 500$} & \textbf{Turnstile} & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-9}
& \textbf{General} & 100 & 99 & 100 & 100 & 100 & 99 & 99 \\ \hline
\multirow{2}{*}{$unique = 5000$} & \textbf{Turnstile} & 100 & 99 & 99 & 100 & 100 & 100 & 100 \\ \cline{2-9}
& \textbf{General} & 100 & 99 & 100 & 99 & 100 & 100 & 99 \\ \hline
\end{tabular}
\end{table}
\subsubsection{$k$-Sparse Recovery}
Similar to the previous table, the results did not vary much over a sequence of consecutive executions; the table shows the lowest output picked from the results. It is a little confusing that its ability to handle general streams is even better than for strict turnstile streams. One possible justification is that a strict turnstile stream will always end up with non-negative frequencies for all numbers. Since a bigger number is likely to be factorised into more numbers, which increases the probability of collisions, the difficulty of distinguishing between $k$-sparse and $(k+1)$-sparse grows significantly.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& & $s = 0$ & $s = 1$ & $s = 2$ & $s = 3$ & $s = 4$ & $s = 5$ & $s = 6$ \\ \hline
\multirow{2}{*}{$unique = 500$} & \textbf{Turnstile} & 100 & 98 & 94 & 70 & 90 & 95 & 100 \\ \cline{2-9}
& \textbf{General} & 100 & 99 & 95 & 88 & 92 & 99 & 99 \\ \hline
\multirow{2}{*}{$unique = 5000$} & \textbf{Turnstile} & 100 & 99 & 92 & 68 & 75 & 94 & 99 \\ \cline{2-9}
& \textbf{General} & 99 & 100 & 94 & 90 & 92 & 97 & 100 \\ \hline
\end{tabular}
\end{table}
\section{Future Improvement}
It would be worth seeing how pairwise independence improves the $\ell_{0}$ sampling mentioned in \cite{Cormode:2014:UFA:2631033.2631080}. The initial design of the $\ell_{0}$ sampler uses an $\epsilon$-min-wise independent hash family, but the parameter $\epsilon$ was not considered in the generation of the $k$-wise independent hash functions.
\bibliographystyle{ieeetr}
\bibliography{main}
\end{document}
| {
"alphanum_fraction": 0.664286281,
"avg_line_length": 146.5465116279,
"ext": "tex",
"hexsha": "cebd49cf6fd23c184b1a35e84ee4342bd3bd1b36",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "adc65917942ce0057cd51602f700c8a7e09cfaea",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "laitingsheng/2019S2-COMP90056",
"max_forks_repo_path": "Assignment/B/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "adc65917942ce0057cd51602f700c8a7e09cfaea",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "laitingsheng/2019S2-COMP90056",
"max_issues_repo_path": "Assignment/B/report.tex",
"max_line_length": 923,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "adc65917942ce0057cd51602f700c8a7e09cfaea",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "laitingsheng/2019S2-COMP90056",
"max_stars_repo_path": "Assignment/B/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3060,
"size": 12603
} |
\input{../../style/preamble}
\input{../../latex-math/basic-math}
\input{../../latex-math/basic-ml}
\newcommand{\titlefigure}{figure_man/biasvariance_scheme.png}
\newcommand{\learninggoals}{
\item Understand why overfitting happens
\item Know how overfitting can be avoided
\item Know regularized empirical risk minimization
}
\title{Introduction to Machine Learning}
\date{}
\begin{document}
\lecturechapter{Introduction to Regularization}
\lecture{Introduction to Machine Learning}
\section{Motivation for Regularization}
\begin{vbframe}{Example: Overfitting}
\begin{itemize}
\item Assume we want to predict the daily maximum \textbf{ozone level} in LA given a data set containing $50$ observations.
\item The data set contains $12$ features describing time conditions (e.g., weekday, month),
the weather (e.g., temperature at different weather stations, humidity, wind speed) or geographic variables (e.g., the pressure gradient).
\item We fit a linear regression model using \textbf{all} of the features
$$
\fxt = \thetab^T\xv = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... + \theta_{12} x_{12}
$$
with the $L2$ loss.
\item We evaluate the performance with $10$ times $10$-fold CV.
\end{itemize}
\vfill
\begin{footnotesize}
We use (a subset of) the \texttt{Ozone} data set from the \texttt{mlbench} package. This way, we artificially create a \enquote{high-dimensional} dataset by reducing the number of observations drastically while keeping the number of features fixed.
\end{footnotesize}
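A minimal sketch of this fit-and-evaluate loop in Python is shown below; the slides use the \texttt{Ozone} data via \texttt{mlbench} in R, so the random placeholder data here is only a stand-in for illustration.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))   # placeholder: 50 observations, 12 features
y = rng.normal(size=50)         # placeholder target (ozone level)

model = LinearRegression()      # linear model fitted with the L2 loss
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv,
                         scoring="neg_mean_squared_error")
print("mean CV test MSE:", -scores.mean())
\end{verbatim}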
\framebreak
While our model fits the training data almost perfectly (left), it generalizes poorly
to new test data (right). We overfitted.
\lz
\begin{figure}
\includegraphics[width=0.8\textwidth]{figure_man/example01.png}\\
\end{figure}
\end{vbframe}
\begin{vbframe}{Avoid Overfitting}
Why can \textbf{overfitting} happen? And how to avoid it?
\begin{enumerate}
\item Not enough data \\
$\to$ collect \textbf{more data}
\item Data is noisy \\
$\to$ collect \textbf{better data} (reduce noise)
\item Models are too complex \\
$\to$ use \textbf{less complex models}
\item Aggressive loss optimization \\
$\to$ \textbf{optimize less}
\end{enumerate}
\framebreak
\textbf{Approach 1: Collect more data}
\lz
We repeat the evaluation with $10$ times $10$-fold CV for an increased dataset size.
The fit worsens slightly, but the test error decreases.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figure_man/avoid-overfitting01.png}\\
\end{figure}
Good idea, but often not feasible in practice.
\framebreak
\textbf{Approach 3: Reduce complexity}
\lz
We try the simplest model we can think of: the constant model. For the $L2$ loss, the optimal constant model is
$$
\fxt = \frac{1}{n}\sumin \yi
$$
We then increase the complexity of the model step-by-step by adding one feature at a time.
\framebreak
We can control the complexity of the model by including/excluding features.
We can try out all feature combinations and investigate the model fit.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figure_man/avoid-overfitting02.png}\\
\end{figure}
\vfill
\begin{footnotesize}
Note: For simplicity, we added the features in one specific (clever) order, so we cheated a bit. Also note there are $2^{12} = 4096$ potential feature combinations.
\end{footnotesize}
\framebreak
\textbf{Approach 4: Optimize less}
\lz
Now we use polynomial regression with temperature as the only feature to predict the ozone level, i.e.,
$$\fxt = \sum^{d}_{i=0} \theta_i (x_T)^{i} .$$
We choose $d = 15$, which gives a very flexible model that can be prone to overfitting for small data sets. \\
\medskip
In this example, we don't solve for $\hat\theta$ directly, but instead, we use the gradient descent algorithm to find $\hat\theta$ stepwise.
\framebreak
We want to stop the optimization early when the generalization error starts to degrade.
\begin{figure}
\includegraphics[width=0.7\textwidth]{figure_man/mean-squ-error.png}\\
\end{figure}
\footnotesize{Note: For polynomial regression, gradient descent usually needs many iterations before it starts to overfit. Hence a very small training set was chosen to accelerate this effect.}
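A minimal Python sketch of this procedure is given below; the made-up data, the fixed step size and the held-out set standing in for the generalization error are all assumptions for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=20)                  # tiny training set
y = np.sin(3 * x) + 0.1 * rng.normal(size=20)
x_val = rng.uniform(-1, 1, size=200)             # held-out data
y_val = np.sin(3 * x_val)

d = 15
X = np.vander(x, d + 1, increasing=True)         # polynomial features
X_val = np.vander(x_val, d + 1, increasing=True)

theta = np.zeros(d + 1)
best_theta, best_err = theta.copy(), np.inf
for step in range(20000):
    grad = 2 * X.T @ (X @ theta - y) / len(x)    # gradient of the L2 loss
    theta -= 1e-2 * grad                         # gradient descent step
    val_err = np.mean((X_val @ theta - y_val) ** 2)
    if val_err < best_err:                       # remember the best iterate
        best_theta, best_err = theta.copy(), val_err
print("validation MSE at early-stopping point:", best_err)
\end{verbatim}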
\framebreak
We have two contradictory goals:
\begin{itemize}
\item \textbf{maximizing the fit} (minimizing the train loss)
\item \textbf{minimizing the complexity} of the model.
\end{itemize}
We need to find the \enquote{sweet spot}.
\begin{center}
\begin{figure}
\includegraphics[width=0.6\textwidth]{figure_man/complexity-vs-fit.png}
\end{figure}
\end{center}
\framebreak
Until now, we could either include a feature completely or not at all.
\lz
Instead of controlling the complexity in a discrete way by specifying the number of features,
we might prefer to control the complexity \textbf{on a continuum} from simple to complex.
\begin{center}
\begin{figure}
\includegraphics[width=0.6\textwidth]{figure_man/complexity-vs-fit-continuous.png}
\end{figure}
\end{center}
\end{vbframe}
\section{Regularized Empirical Risk Minimization}
\begin{vbframe}{Regularized Empirical Risk Minimization}
Recall, empirical risk minimization with a complex hypothesis set tends to overfit. A major tool to handle overfitting is \textbf{regularization}.
\lz
In the broadest sense, regularization refers to any modification made to a learning algorithm that is intended to reduce its generalization error but not its training error.
\lz
Explicitly or implicitly, such modifications represent the preferences we have regarding the elements of the hypothesis set.
\framebreak
Commonly, regularization takes the following form:
$$
\riskrf = \riskef + \lambda \cdot J(f) = \sumin \Lxyi + \lambda \cdot J(f)
$$
\begin{itemize}
\item $J(f)$ is called \textbf{complexity penalty}, \textbf{roughness penalty} or \textbf{regularizer}.
\item $\lambda > 0$ is called \textbf{complexity control} parameter.
\item $J(f)$ measures the \enquote{complexity} of a model and penalizes it in the fit.
\item As for $\riske$, often $\riskr$ and $J$ are defined on $\thetab$ instead of $f$, so $\riskrt = \risket + \lambda \cdot J(\thetab)$.
\end{itemize}
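For a parametric model with squared loss, the regularized risk can be written down directly; the quadratic penalty in the Python sketch below is just one concrete choice of $J$ used for illustration.
\begin{verbatim}
import numpy as np

def regularized_risk(theta, X, y, lam):
    """Empirical risk (squared loss) plus lam * J(theta), here with
    the quadratic penalty J(theta) = sum(theta**2)."""
    residuals = X @ theta - y
    return np.sum(residuals ** 2) + lam * np.sum(theta ** 2)
\end{verbatim}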
\framebreak
\textbf{Remarks:}
\begin{itemize}
\item Note that we now face an optimization problem with two criteria:
\begin{enumerate}
\item models should fit well (low empirical risk),
\item but not be too complex (low $J(f)$).
\end{enumerate}
\item We decide to combine the two in a weighted sum and to control
the trade-off via the complexity control parameter $\lambda$.
\item $\lambda$ is hard to set manually and is usually selected via cross-validation (see later).
\item $\lambda = 0$: The regularized risk $\riskrf$ reduces to the unregularized empirical risk $\riskef$.
\item If $\lambda$ goes to infinity, we stop caring about the loss/fit and models become as \enquote{simple} as possible.
\end{itemize}
\framebreak
\center
\vspace*{0.5cm}
\includegraphics[width=0.6\textwidth]{figure_man/biasvariance_scheme.png} \\
\footnotesize{Hastie, The Elements of Statistical Learning, 2009 (p. 225)}
\end{vbframe}
\endlecture
\end{document}
| {
"alphanum_fraction": 0.7479321464,
"avg_line_length": 29.353909465,
"ext": "tex",
"hexsha": "c742bd845c39ca967abf74971de6eed4e359016d",
"lang": "TeX",
"max_forks_count": 56,
"max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z",
"max_forks_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "jukaje/lecture_i2ml",
"max_forks_repo_path": "slides/regularization/slides-regu-intro.tex",
"max_issues_count": 323,
"max_issues_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62",
"max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "jukaje/lecture_i2ml",
"max_issues_repo_path": "slides/regularization/slides-regu-intro.tex",
"max_line_length": 249,
"max_stars_count": 93,
"max_stars_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "jukaje/lecture_i2ml",
"max_stars_repo_path": "slides/regularization/slides-regu-intro.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z",
"num_tokens": 1933,
"size": 7133
} |
\section{Paging}
\paragraph{Hierarchical Page Table --- two-level page table}
\begin{itemize}
\item \textbf{Layout}: on a 32-bit machine with 4KiB pages, divide the virtual address into
\begin{itemize}
\item \emph{page number} (p): 20 bits
\item \emph{page offset} (d): 12 bits
\end{itemize}
\item \textbf{Table Paging}: table can be paged to save memory -- subdivide vpn:
\begin{itemize}
\item index in \emph{page directory} ($ p_1 $): 10 bits
\item index in \emph{page table} ($ p_2 $): 10 bits
\end{itemize}
\item for ranges of 1024 invalid pages, reset present bit in page directory
\begin{itemize}
\item[$ \to $] save space of second-level page table
\end{itemize}
\end{itemize}
\begin{figure}[h]\centering\label{TwoLevelPageTable}\includegraphics[width=0.33\textwidth]{TwoLevelPageTable}\end{figure}
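For illustration, splitting such a 32-bit virtual address into the two indices and the offset can be sketched as follows (Python is used here only for brevity of the bit manipulation):
\begin{verbatim}
def split_virtual_address(va):
    """Split a 32-bit virtual address for a two-level page table
    with 4 KiB pages (10 + 10 + 12 bits)."""
    d  = va & 0xFFF          # 12-bit page offset
    p2 = (va >> 12) & 0x3FF  # 10-bit index into the page table
    p1 = (va >> 22) & 0x3FF  # 10-bit index into the page directory
    return p1, p2, d

print(split_virtual_address(0x12345678))  # -> (72, 837, 1656)
\end{verbatim}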
\paragraph{Linear Inverted Page Table}
\begin{itemize}
\item \textbf{Problem}: large AS (64 bit) but only few mapped virtual addresses
\begin{itemize}
\item[$ \to $] much memory wasted on page tables
\item[$ \to $] lookup slow due to many levels of hierarchy
\end{itemize}
\item \textbf{Idea}: invert page table mapping
\begin{itemize}
\item map physical frame to virtual page instead of other way around
\item single page table for \emph{all processes} (exactly one table per system)
\item one page table entry for each physical page frame
\end{itemize}
\item \textbf{Advantage}: less overhead for page table meta data
\item \textbf{Disadvantage}: increases time needed to search table when page reference occurs
\end{itemize}
\begin{figure}[h]\centering\label{LinearInvertedPageTable}\includegraphics[width=0.33\textwidth]{LinearInvertedPageTable}\end{figure}
\paragraph{Hashed Inverted Page Table}
\begin{itemize}
\item \textbf{Hash Anchor Table}: limits search to at most a few page-table entries
\end{itemize}
\paragraph{Translation Lookaside Buffer --- Motivation}
\begin{itemize}
\item \textbf{Naive paging is slow}:
\begin{itemize}
\item every load/store requires multiple memory references
\item 4-level hierarchy: 5 memory references for every load/store (4 page directory/table references, 1 data access)
\end{itemize}
\item \textbf{Idea}: add cache that stores recent memory translations
\begin{itemize}
\item \emph{translation lookaside buffer} (TLB) maps [vpn] to [pfn, protection]
\item typically 4-way to fully associative hardware cache in MMU
\item typically 64-2048 entries
\item typically 95\%-99\% hit rate
\end{itemize}
\end{itemize}
\paragraph{TLB --- Operation}
\begin{itemize}
\item on every load/store:
\begin{itemize}
\item check if translation result is cached in TLB (\emph{TLB hit})
\item otherwise walk page tables, insert result into TLB (\emph{TLB miss})
\end{itemize}
\item \textbf{Quick}: can compare many TLB entries in parallel in hardware
\end{itemize}
\paragraph{TLB --- TLB Miss}
\begin{itemize}
\item \textbf{Process}:
\begin{itemize}
\item evict entry from TLB on TLB miss
\item load entry for missing virtual address into TLB
\end{itemize}
\item \textbf{Variants}: \emph{software-managed} and \emph{hardware-managed}
\item \textbf{software-managed TLB}:
\begin{itemize}
\item OS receives \emph{TLB miss exception}
\item OS decides which entry to evict (drop) from TLB
\item OS generally walks page tables in software to fill new TLB entry
\item TLB entry format specified in \emph{instruction set architecture} (ISA)
\end{itemize}
\item \textbf{hardware-managed TLB}:
\begin{itemize}
\item evict TLB entry based on hardware-encoded policy
\item walk page table in hardware $ \to $ resolve address mapping
\end{itemize}
\end{itemize}
\paragraph{TLB --- Address Space Identifiers}
\begin{itemize}
\item \textbf{Problem}: vpn dependent on AS
\begin{itemize}
\item vpns in different AS can map to different pfns
\item[$ \to $] need to clear TLB on AS switch
\end{itemize}
\item \textbf{Idea}: solve vpn ambiguity with additional identifiers in TLB
\item \textbf{ASID}: TLB has \emph{address space identifier} (ASID) in every entry
\begin{itemize}
\item map [vpn, ASID] to [pfn, protection]
\item[$ \to $] avoids TLB flush at every address-space switch
\item[$ \to $] less TLB misses: some TLB entries still present from last time process ran
\end{itemize}
\end{itemize}
\paragraph{TLB --- Reach}
\begin{itemize}
\item[=] amount of memory accessible with TLB hits: $ \text{TLB reach} = \text{TLB size} \times \text{page size} $
\item \textbf{Ideally}: working set of each process is stored in TLB (otherwise high degree of TLB misses)
\item \textbf{Increase page size}:
\begin{itemize}
\item[+] fewer TLB entries needed to cover the same amount of memory
\item[-] increase internal fragmentation
\end{itemize}
\item \textbf{multiple page sizes}:
\begin{itemize}
\item[+] allows applications that map larger memory areas to increase TLB coverage with minimal fragmentation increase
\end{itemize}
\item \textbf{increase TLB size}:
\begin{itemize}
\item[-] expensive
\end{itemize}
\end{itemize}
\paragraph{TLB --- Effective Access Time}
\begin{itemize}
\item \textbf{Associative lookup}: takes $ \tau $ time units (e.g., $ \tau = 1\text{ns} $)
\item \textbf{Memory cycle}: takes $ \mu $ time units (e.g., $ \mu = 100\text{ns} $)
\item \textbf{TLB hit ratio} $ \alpha $: percentage of all memory accesses with cached translation (e.g., $ \alpha = 99\% $)
\item \textbf{Effective Access Time} (EAT) for linear page table without cache:
\begin{equation*}
\text{EAT} = (\tau + \mu)\alpha + (\tau + 2\mu)(1-\alpha) = \tau + 2\mu - \mu\alpha
\end{equation*}
\end{itemize}
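With the example values above ($ \tau = 1\text{ns} $, $ \mu = 100\text{ns} $, $ \alpha = 99\% $) this gives
\begin{equation*}
\text{EAT} = 1 + 2 \cdot 100 - 100 \cdot 0.99 = 102\text{ns},
\end{equation*}
i.e. close to the cost of a single memory cycle, compared to $ \tau + 2\mu = 201\text{ns} $ when no translation is ever cached.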
\begin{summary}
\begin{itemize}
\item page tables communicate between OS and MMU hardware
\begin{itemize}
\item how virtual addresses in each address space translate to physical addresses
\item which kind of accesses the MMU should allow/signal to the OS
\end{itemize}
\item different page table layouts have been developed
\begin{itemize}
\item linear page table
\item hierarchical page tables
\item inverted page tables
\item hashed page tables
\end{itemize}
\item performing page table lookups for every memory access significantly slows down execution time of programs
\begin{itemize}
\item translation lookaside buffer (TLB) caches previously performed page table lookups
\item typical TLBs cover $ 95\%-99\% $ of all translations
\end{itemize}
\end{itemize}
\end{summary}
| {
"alphanum_fraction": 0.7136384232,
"avg_line_length": 41.38125,
"ext": "tex",
"hexsha": "14aa6a69ca5807bc6a840569fc5fb3c4b194beeb",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Jintzo/OS",
"max_forks_repo_path": "chapters/11_Paging.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_issues_repo_issues_event_max_datetime": "2017-12-31T11:57:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-12-02T12:22:38.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Jintzo/OS",
"max_issues_repo_path": "chapters/11_Paging.tex",
"max_line_length": 133,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Jintzo/OS",
"max_stars_repo_path": "chapters/11_Paging.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1954,
"size": 6621
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% ZEBRA User Guide -- LaTeX Source %
% %
% Chapter Utilities %
% %
% The following external EPS files are referenced: %
% fshunt1, fshunt2, fshunt3, fshunt4, fshunt5, fshunt6, fshunt7 %
% %
% Editor: Michel Goossens / AS-MI %
% Last Mod.: 8 Dec. 1990 mg %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Utilities}
\section{Operations on data structures}
\subsection{Dropping a bank and its dependents}
\par \Rind{MZDROP} allows one to drop either the {\bf bank}
or the {\bf linear structure} at L ('L' option).
Dropping a bank implies dropping also the whole partial
data structure which depends on this bank downwards.
\par Dropped banks stay in memory with the drop status bit set,
links pointing to them continue to do so,
except for the immediate structural link indicated via the
origin link of the bank at L, which is bridged or reset to zero,
whichever is appropriate.
\Subr{CALL MZDROP (IXSTOR,L,CHOPT)}
\index{bank!drop}
\Idesc
\begin{DL}{MMMMM}
\item[IXSTOR]Index of the store containing the bank to be dropped
\item[L]Address of the bank or linear structure to be dropped
\item[CHOPT]Character variable specifying the action desired:
\begin{DL}{MM}
\item['']Default - Drop the bank at L and its vertical dependents
i.e. the next link of this bank is not followed.
\item['L']Drop the linear structure pointed to by L
i.e. the next link of the bank at L is followed until
all the banks in the linear structure and its dependents have been
dropped.
\item['V']Drop only the partial data structure
dependent vertically on the bank at L, but not the bank itself.
\end{DL}
\end{DL}
\subsection{Set one of the status bits in the bank of a data structure}
\par By following the structural links,
\Rind{MZFLAG} sets the selected status bit into the status words
of all the banks of the data structure supported
by the vertical links of the specified start bank.
Optionally it can include in the marking
also the banks of the linear structure supported
by link 0 of the start bank and all their dependents.
The start bank itself may or may not be marked.
\Subr{CALL MZFLAG (IXSTOR,L,IBIT,CHOPT)}
\index{bank!status bit}
\Idesc
\begin{DL}{MMMM}
\item[IXSTOR]Index of the store containing the bank to be flagged
\item[L]Address of the start bank supporting the partial
data structure.
No action if {\tt L=0}.
\item[IBIT]The bit number of the status bit to be set (using the convention that
the least significant bit in a word is identified with the number 1).
\item[CHOPT]Character variable specifying the action desired:
\begin{DL}{MM}
\item['']Default - Flag the bank at L (and its vertical dependents),
but the next link of this bank is not followed.
\item['L']Flag the linear structure pointed to by L,
i.e. the next link of the bank at L is followed.
\item['V']Flag only the partial data structure dependent vertically
on the bank at L, but not the bank itself.
\item['Z']Set to zero the status bit IBIT in each bank to be marked.
In the default case bit IBIT in the status word is set to one.
\end{DL}
\end{DL}
\subsubsection{Example:}
\begin{verbatim}
CALL MZFLAG (0,LQMAIN,IQDROP,'L')
\end{verbatim}
drops the complete 'Main' data structure, and is equivalent to
\begin{verbatim}
CALL MZDROP (0,LQMAIN,'L'),
\end{verbatim}
except that it does not alter the contents of {\tt LQMAIN}.
\subsection{Change structural relations}
\par Because of the presence of the reverse pointers,
the operation of moving a bank by relinking from one data structure
to another one is a non trivial operation.
The routine \Rind{ZSHUNT} is provided to execute such an operation.
\par \Rind{ZSHUNT} may be used to modify the links associated with
either a single bank ({\tt IFLAG=0})
or a whole linear structure ({\tt IFLAG=1}), using information provided
by the parameters
{\tt LSUP} and {\tt JBIAS}, which have the same significance as in
\Rind{MZLIFT}.
\Subr{CALL ZSHUNT (IXSTOR,LSH,LSUP,JBIAS,IFLAG)}
\index{data structure!change}
\Idesc
\begin{DL}{MMMM}
\item[IXSTOR]Index of the store containing the bank to be shunted
{\tt IXDIV}, the index of the division containing
the bank to be shunted, may be given instead
\item[LSH]The address of the data structure to be shunted
\item[LSUP]if {\tt JBIAS < 0:} the address of the new supporting up bank\\
if {\tt JBIAS = 0:} the address of the new supporting previous bank\\
if {\tt JBIAS = 1:} the new supporting link
\item[JBIAS]if {\tt JBIAS < 1:} the link bias in the new supporting bank\\
if {\tt JBIAS = 1:} the origin link in bank {\tt LSH}
will point to link {\tt LSUP}\\
if {\tt JBIAS = 2:} detach without insertion
\item[IFLAG]if {\tt IFLAG = 0:} shunt the one single bank at {\tt LSH}\\
if {\tt IFLAG = 1:} shunt the whole linear structure pointed to by {\tt LSH}
\end{DL}
\par If the bank or the structure to be relinked is in fact inserted
or added into an existing linear structure,
both must be contained in the same division.
\subsubsection{Examples}
\begin{figure}
\epsffile{fshunt1.ps}
\caption{ZSHUNT - The original data structures}
\label{FSHUNT1}
\end{figure}
\par Originally we have the data structures shown in
Figure~\ref{FSHUNT1}, where any bank may support further
dependent partial data structures since
the corresponding vertical structural links are not changed by \Rind{ZSHUNT}.
\par In what follows the notation {\tt Lxx} is used to designate
a link pointing to bank {\tt xx}.
\par Each example below
refers to the starting situation described in Figure~\ref{FSHUNT1}.
\begin{figure}
\epsffile{fshunt2.ps}
\caption{ZSHUNT - Add bank (and dependents) to front of linear structure}
\label{FSHUNT2}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA2,LUN,-7,0)
\end{verbatim}
\par This moves a single bank (with its dependents, if any) out of
a linear structure, and inserts it at the head of the linear
structure supported by link -7 of the bank UN.
\begin{figure}
\epsffile{fshunt3.ps}
\caption{ZSHUNT - Move part of linear structure in front of another one}
\label{FSHUNT3}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA2,LUN,-7,1)
\end{verbatim}
\par This is the same as in Figure~\ref{FSHUNT2},
except that the (partial) linear
structure starting with bank A2 is relinked.
\begin{figure}
\epsffile{fshunt4.ps}
\caption{ZSHUNT - Move bank into a linear structure}
\label{FSHUNT4}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA2,LN2,0,0)
\end{verbatim}
\par This is again the same as in Figure~\ref{FSHUNT2},
but the bank is inserted inside the linear structure,
rather than at its front.
\begin{figure}
\epsffile{fshunt5.ps}
\caption{ZSHUNT - Move a bank to a top level structure}
\label{FSHUNT5}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA2,LQMAIN,1,0)
\end{verbatim}
\par This re-links bank A2 to be the first in the top-level linear
structure supported by {\tt LQMAIN}.
\begin{figure}
\epsffile{fshunt6.ps}
\caption{ZSHUNT - Attach a linear structure to a top level link}
\label{FSHUNT6}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA1,LHEAD,1,1)
\end{verbatim}
\par Supposing {\tt LHEAD=0} initially; this connects the linear structure
to the (structural) link {\tt LHEAD}, i.e.
the origin link of the header bank A1
points back to the location of {\tt LHEAD}.
\begin{figure}
\epsffile{fshunt7.ps}
\caption{ZSHUNT - Detach a linear structure}
\label{FSHUNT7}
\end{figure}
\begin{verbatim}
CALL ZSHUNT (0,LA1,0,2,1)
\end{verbatim}
\par This removes the linear structure from its old position
without inserting it into a new one.
This should only be temporary; one should insert the floating
structure into a new position by a second call to \Rind{ZSHUNT}
soon after.
\section{Copy a data structure}
\par A data structure can be copied from memory to memory by using
routine \Rind{MZCOPY}. The data structure can be in one or more divisions
in one store or in ``stand-alone'' memory and can be copied to a division
in the same store, a different one or to ``stand-alone'' memory.
\par The case of ``stand-alone'' or ``flat''
memory copies is intended for communication
between separate processes running on the same computer through sharable
memory (formally FORTRAN common blocks). The information must belong to
a single data structure
(see the ZEBRA reference manual for more details).
\Subr{CALL MZCOPY (IXDVFR,LENTRY,IXDVTO,*LSUP*,JBIAS,CHOPT)}
\index{data structure!copy}
\Idesc
\begin{DL}{MMMM}
\item[IXDVFR]Index of division(s) containing the data structure to be copied
\item[LENTRY]Address of the data structure to be copied
\item[IXDVTO]Index of the {\bf particular}
division into which the data structure has to be copied.
\item[*LSUP*]
\item[JBIAS]{\tt JBIAS < 1: LSUP} is the supporting bank in the target division
and JBIAS is the link bias specifying where the data structure has to be
introduced into this bank, i.e. the data structure will be connected
to LQ(LSUP+JBIAS).\\
{\tt JBIAS = 1: LSUP} is the supporting link, i.e. the data structure
is connected to {\tt LSUP} (top level data structure)\\
{\tt JBIAS = 2:} Stand alone data structure, no connection.
\item[CHOPT]Character variable specifying the selected options.
\begin{DL}{MMMMMMM}
\item[Data structure]
\begin{DL}{MM}
\item[' ']Copy the data structure at {\tt LENTRY} (the next link is not
followed).
\item['D']Copy complete division(s)\\
default: Dropped banks are squeezed out\\
\phantom{default:} (slower but maybe more economic than 'DI')
\item['DI']Immediate dump of divisions with dropped banks included
\item['L']Copy the data structure supported by the
linear structure at {\tt LENTRY} (the next link of {\tt LENTRY} is followed)
\item['S']Copy only the single bank at {\tt LENTRY}.
\end{DL}
\item[others]
\begin{DL}{MM}
\item['N']No links, i.e. linkless handling.
By default links are significant
\item['P']Permit error returns.
By default an error exit with a call to \Rind{ZTELL}{\tt (15,1)} is generated.
\item['Z']Zero all links pointing outside the data structure.
This is implied if origin and target stores are different.
\end{DL}
\end{DL}
\end{DL}
\Odesc
\begin{DL}{MMMM}
\item[*LSUP*]For JBIAS = 1 or 2, LSUP receives
the entry address to the data structure
\end{DL}
\par For a discussion of the cases of ``stand alone'' memory, the user is
referred to the ZEBRA reference manual.
\index{stand alone memory}
\subsection{MZCOPY return codes}
\index{QUEST!IQUEST}
\begin{DL}{MMMMM}
\item[IQUEST(1)]Error status
\begin{DL}{M}
\item[0]Normal completion
\item[1]LENTRY invalid
\item[2]Bank chaining clobbered in the input data structure
\item[3]Not enough space for the input data structure
\item[4]The data structure is larger than the target space
\item[5]The data structure to be copied is empty
\item[6]Bank chaining clobbered in the output data structure
\end{DL}
\end{DL}
\subsection{Examples}
\begin{verbatim}
CALL MZCOPY (0,LENTRY,IXDIV,LTOP,1,' ')
\end{verbatim}
copies the data structure at {\tt LENTRY} in the primary store into division
{\tt IXDIV}. The copied data structure will be addressable via the
top level link {\tt LTOP}.
\begin{verbatim}
CALL MZCOPY (IXDVFR,LENTRY,IXDVTO,LSUP,-1,'D')
\end{verbatim}
copies the division identified by the identifier {\tt IXDVFR} to the division
specified by {\tt IXDVTO} squeezing out unused space.
The entry address into the
data structure in division {\tt IXDVFR} is {\tt LENTRY}.
In the target division
with index {\tt IXDVTO} this data structure will be attached to link -1
in bank {\tt LSUP} and be addressable as {\tt LCOPY = LQ2(LSUP-1)}
\section{Sort the banks of a linear structure}
\par The routines described below
re-arrange the horizontal linking
within a given linear structure such that the values of the
keywords contained in
each bank increase monotonically when moving through the linear
structure with {\tt L=LQ(L)}.
For equal keyword values the original order is preserved.
\par Key-words may be either floating-point, integer or Hollerith.
For Hollerith sorting a collating sequence
inherent in the representation is used,
thus the results will depend on the machine.
\par Sorting may be done either for a
{\bf single keyword} in every bank
or for a {\bf key vector} in every bank:
\index{linear structure!sort}
\Subr{CALL ZSORT (IXSTOR,LLS,JKEY)}
\par Sorts banks according to a single floating-point keyword
\Subr{CALL ZSORTI (IXSTOR,LLS,JKEY)}
\par Sorts banks according to a single integer keyword
\Subr{CALL ZSORTH (IXSTOR,LLS,JKEY)}
\par Sorts banks according to a single Hollerith keyword
\par
\Subr{CALL ZSORV (IXSTOR,LLS,JKEY,NKEYS)}
\par Sorts banks according to a floating-point key vector
\Subr{CALL ZSORVI (IXSTOR,LLS,JKEY,NKEYS)}
\par Sorts banks according to an integer key vector
\Subr{CALL ZSORVH (IXSTOR,LLS,JKEY,NKEYS)}
\par Sorts banks according to a Hollerith key vector
\Idesc
\begin{DL}{MMMM}
\item[IXSTOR]Store index.
\item[JKEY]{\tt Q(L+JKEY)} - The key word or the first word of the key vector.
\item[NKEYS]Number of words in the key vector.
\item[LLS]Address of the first bank of the linear structure.
\end{DL}
\par The execution time taken by these routines is a function
of the reordering which needs to be done.
For perfect order the operation is a simple verification pass
through the structure.
The maximum time is taken if the banks are arranged with
decreasing keywords.
\par Sorting relinks the banks such that the keywords are in
increasing order.
If one needs them in decreasing order one can use routine \Rind{ZTOPSY}
(see below).
\section{Operations on linear structures}
\par The routines described in this section perform service operations
on linear structures.
The parameter {\tt LLS} is the address of the first bank
of the linear structure.
\index{linear structure!reverse order}
\Subr{CALL ZTOPSY (IXSTOR,LLS)}
\par This routine reverses the order of the banks in the linear structure,
i.e. the first bank becomes the last, and the last the first,
for traversing the structure with {\tt L=LQ(L)}.
\index{linear structure!bridging}
\Subr{CALL ZPRESS (IXSTOR,LLS)}
\par This routine removes, by bridging, dead banks still present
in the linear structure pointed to by {\tt LLS}.
\section{Interrogate a linear structure}
\par These routines perform service functions for linear structures.
The parameter {\tt LLS} is the address of the first bank
of the linear structure.
\index{linear structure!interrogate}
\Func{LFCALL = LZLAST (IXSTOR,LLS)}
\par This function searches the linear structure pointed to by {\tt LLS}
for its end.
It returns in {\tt LF} the address of the last bank in the structure.
{\tt LF = 0} is returned if the structure is empty.
\Func{LFCALL = LZFIND (IXSTOR,LLS,IT,JW)}
\par This function searches the linear structure pointed to by {\tt LLS}
for the first bank containing {\tt IT} in word {\tt JW};
it returns its address in {\tt LF}. If none: {\tt LF=0}.
\Func{LFCALL = LZLONG (IXSTOR,LLS,N,IT,JW)}
\par Same functionality as function {\tt LZFIND},
but {\tt IT} is a vector of {\tt N} words expected
in words {\tt JW} to {\tt JW+N-1} of the bank.
\Func{LFCALL = LZBYT (IXSTOR,LLS,IT,JBIT,NBITS)}
\par Similar to function \Rind{LZFIND},
but it looks for a bank having IT in the byte of the status word
starting at {\tt JBIT} and with a width of {\tt NBITS} bits.
\Func{LFCALL = LZFVAL (IXSTOR,LLS,VAL,TOL,JW)}
\par Same functionality as function \Rind{LZFIND},
but it looks for a bank having in word {\tt JW} a floating point number
which is equal to VAL within the tolerance {\tt TOL}.
\Func{NCALL = NZBANK (IXSTOR,LLS)}
\par Function which counts the number of banks in the linear
structure pointed to by {\tt LLS}.
\Func{NCALL = NZFIND (IXSTOR,LLS,IT,JW)}
\par Function similar to \Rind{LZFIND} but for all banks.
It returns the number of such banks in {\tt N}
and stores the addresses of the first 100 such banks into {\tt IQUEST},
starting at {\tt IQUEST(1)} in common {\tt /QUEST/}.
\index{QUEST!IQUEST}
\Func{NCALL = NZLONG (IXSTOR,LLS,N,IT,JW)}
\par Function similar to \Rind{LZLONG} but for all banks.
It returns the number of such banks in {\tt N}
and stores the addresses of the first 100 such banks into {\tt IQUEST},
starting at {\tt IQUEST(1)} in common {\tt /QUEST/}.
\index{QUEST!IQUEST}
\section{Locate a bank in a division}
\par
The routines of this section do not operate on data structures as such,
but they perform a sequential search on the banks in a division.
\subsection{Sequential scan by Hollerith identifier}
\par
Function \Rind{LZFIDH} performs a sequential scan over the banks of a specified
division, starting with the bank following the specified bank,
and returns
the address of the first bank with the specified Hollerith identifier.
\Func{LFCALL = LZFIDH (IXDIV,IDH,LGO)}
\index{division!scan}
\Idesc
\begin{DL}{MMM}
\item[IXDIV]The index of the division to be scanned
\item[IDH]The 4 character {\bf\it Hollerith} identifier (NOT a CHARACTER
variable).
\item[LGO]The address of the bank {\bf after} which the scan has to start.\\
{\tt LGO = 0} means start with the first bank in the division.
\end{DL}
\subsection{Sequential scan by Hollerith and numeric identifier}
\par
Function \Rind{LZFID} performs a sequential scan over the banks of a specified
division, starting with the bank following the specified bank and returns
the address of the first bank with the specified Hollerith/numeric
identifier pair.
\Func{LFCALL = LZFID (IXDIV,IDH,IDN,LGO)}
\index{division!scan}
\Idesc
\begin{DL}{MMM}
\item[IXDIV]The index of the division to be scanned
\item[IDH]The 4 character {\bf\it Hollerith} identifier (NOT a CHARACTER
variable).
\item[IDN]The numeric identifier
\item[LGO]The address of the bank {\bf after} which the scan has to start.
{\tt LGO = 0} means start with the first bank in the division.
\end{DL}
\subsection{Division scan by bank identifier}
\par Starting at the first bank in the specified division,
function \Rind{LZLOC} performs a sequential scan through the
division, and returns
the address of the {\bf first bank} with the specified
Hollerith/numeric bank identifier pair.
\Func{LFCALL = LZLOC (IXDIV,CHIDH,IDN)}
\index{division!scan}
\Idesc
\begin{DL}{MMM}
\item[IXDIV]The index of the division to be scanned
\item[CHIDH]Character variable containing the Hollerith identifier
\item[IDN]The numeric identifier
\end{DL}
\subsection{Number of words available in a division}
\par The number of words available in a division can be obtained
by issuing a call to \Rind{MZNEED}.
If desired (option 'G') a garbage collection can be performed to
get the needed number of words.
\Subr{CALL MZNEED (IXDIV,NWNEED,CHOPT)}
\Idesc
\begin{DL}{MMMM}
\item[IXDIV]The index of the division where the free space is needed
\item[NWNEED]The number of words needed in the division
\item[CHOPT]Character variable specifying the desired option
\begin{DL}{MM}
\item[' ']Do not garbage collect the division to free space
\item['G']Garbage collect the division to increase the space
\end{DL}
\end{DL}
\subsection{Note}
\index{QUEST!IQUEST}
\par Variable {\tt IQUEST(11)} in common {\tt QUEST} contains the space
available with the words needed already subtracted,
i.e. a negative value in {\tt IQUEST(11)} signals that there are
no {\tt NWNEED} words available in the given division.
\section{Global operation aids}
\subsection{Set the program phase}
\par Primarily to avoid recovery to ``next event'' at the wrong moment,
ZEBRA needs to know in which phase the user program is at any
given moment. This is accomplished with the routine \Rind{ZPHASE},
described in the ZEBRA reference manual.
\label{SR_ZPHASE}% Special treatment for not described routine
\index{program!phase}
\subsection{\protect\label{ZEND}Normal program termination}
\Subr{CALL ZEND}
\par The routine
\Rind{ZEND} is defined to be the entry-point for normal run
termination. This routine is normally {\bf provided by the user}
to close files and print accumulated results.
It is important that all termination operations are
done through this routine,
if the user wants them to happen even in abnormal run termination.
\par It would normally look like this:
\index{program!termination!normal}
\begin{verbatim}
SUBROUTINE ZEND
CALL ZPHASE (-3) ! start termination
. . . ! any user termination code
CALL MZEND
STOP
END
\end{verbatim}
\subsection{CALL MZEND}
\par \Rind{MZEND} is a ZEBRA routine which prints statistics about
the usage of all divisions.
\par A user routine similar to \Rind{ZEND} is defined for taking over control
of fatal error termination. Its name is \Rind{ZABEND} and it is
described in the next paragraph.
This should perform any extra operations needed
for fatal termination and then it should transfer
to \Rind{ZEND} for termination.
\subsection{Abnormal program termination}
\par To cope with situations where the program ends abnormally,
an entry point \Rind{ZFATAL} is defined.
\index{program!termination!abnormal}
\par A second routine \Rind{ZFATAM} is identical to \Rind{ZFATAL},
except that it prints a message,
given as a character string to the routine.
\par The routines \Rind{ZFATAL} and \Rind{ZFATAM} are {\bf supplied by the
system}. They are protected against recovery loops, and they must
{\bf not} be supplied by the user.
They should be called only when the run cannot usefully continue.
If the application program discovers such a fatal condition,
it too should call \Rind{ZFATAL} or \Rind{ZFATAM},
preceded with some diagnostic printing or
with loading to {\tt IQUEST} in common {\tt /QUEST/} some clue to the trouble.
\label{SR_ZFATAL}% Special treatment for not described routine
\label{SR_ZFATAM}% Special treatment for not described routine
\index{QUEST!IQUEST}
\par The calling sequences are, for example:
\begin{verbatim}
CALL ZFATAL or
CALL ZFATAM ('MY ERROR MESSAGE')
\end{verbatim}
\par Routine \Rind{ZABEND}
\label{SR_ZABEND}% Special treatment for not described routine
receives control from \Rind{ZFATAL} to handle fatal run termination.
This routine may be supplied by the user.
\par The ZEBRA system contains the standard routine as follows:
\begin{verbatim}
SUBROUTINE ZABEND
+CDE,ZSTATE ! Specifies NQPHAS, the program phase
CALL ZPOSTM('TCWM') ! Print a DZSNAP in 'TCWM' mode
! for the faulty store
IF (NQPHAS.LT.0) STOP ! Immediate stop if 'initialization'
! or 'termination' phase
NQPHAS = -2 ! Set 'termination' phase
CALL ZEND
END
\end{verbatim}
\par This \Rind{ZABEND} routine is not just a dummy; it generates
an optimal post-mortem
dump, including a subroutine trace-back, followed by any normal
user output programmed in \Rind{ZEND}. Transfer to \Rind{ZEND} takes place if
the break-down happened during normal operation, but not if the
program is still in the initialization phase or if it is already under
\Rind{ZEND} control.
\par The parameter to \Rind{ZPOSTM} is passed from there to \Rind{DZSNAP}
\label{SR_ZPOSTM}% Special treatment for not described routine
to select the options for dumping the dynamic store
(see page~\pageref{SR_DZSNAP} for details).
\subsection{Recovery through \Rind{ZTELL}-\Rind{ZTELUS}}
\label{SR_ZTELL}% Special treatment for not described routine
\par During normal operation any request from the user for space
with \Rind{MZWORK}, \Rind{MZBOOK}/\Rind{MZLIFT},
\Rind{MZDIV} and \Rind{MZPUSH} is satisfied,
after garbage collection if that is necessary and possible.
If, however, the request cannot be satisfied,
the normal course of the program must be changed.
To relieve the user of the burden of checking for success
after each space request,
the garbage collector will normally send control to the user at the
entry-point \Rind{QNEXT} (via \Rind{ZTELL} and the KERNLIB routine
\Rind{QNEXTE)}, to skip the current event and to continue
by processing the next one.
\index{KERNLIB}
\par Other ZEBRA packages, apart from MZ, and the user code, may
have similar problems.
Therefore a general trouble-control routine \Rind{ZTELL} has been
included in ZEBRA.
This is a switching routine with several modes of continuation
controlled by the user routine \Rind{ZTELUS},
one of which is to send control to \Rind{QNEXT}.
\index{program!recovery}
\par For more details about the recovery facilities in ZEBRA, the user
is referred to the ZEBRA reference manual.
| {
"alphanum_fraction": 0.7378995068,
"avg_line_length": 43.1435986159,
"ext": "tex",
"hexsha": "299714af76b0958740fdacc406034da86fc3f732",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "berghaus/cernlib-docs",
"max_forks_repo_path": "zebra/zebutils.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "berghaus/cernlib-docs",
"max_issues_repo_path": "zebra/zebutils.tex",
"max_line_length": 80,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "berghaus/cernlib-docs",
"max_stars_repo_path": "zebra/zebutils.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z",
"num_tokens": 6818,
"size": 24937
} |
%% ============================================================================
%%
%% Part A / PhD Progress Report
%%
%% Author: Jakob Lysgaard Rørsted (Mosumgaard)
%%
%% Future work and outlook
%% ============================================================================
\chapter{Future Work}
\label{chap:future}
In the following, I will outline the topics I will work on in the final part of
my PhD \ldots | {
"alphanum_fraction": 0.4385542169,
"avg_line_length": 29.6428571429,
"ext": "tex",
"hexsha": "66cd78c24222350771a267cdda23a5bde4940df4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b12758fb3c4908e54aae4a0c0a98454bd4166a3d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jakobmoss/templates",
"max_forks_repo_path": "partA/main/future.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "b12758fb3c4908e54aae4a0c0a98454bd4166a3d",
"max_issues_repo_issues_event_max_datetime": "2020-01-06T11:24:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-01-13T09:33:11.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jakobmoss/templates",
"max_issues_repo_path": "partA/main/future.tex",
"max_line_length": 79,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "b12758fb3c4908e54aae4a0c0a98454bd4166a3d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jakobmoss/templates",
"max_stars_repo_path": "partA/main/future.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T22:03:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-19T15:02:58.000Z",
"num_tokens": 79,
"size": 415
} |
\documentclass[12pt, a4paper]
{article}
\usepackage[margin=2cm]{geometry}
\usepackage{svg}
\setsvg{inkscape=inkscape -z -D,svgpath=images/}
\usepackage{float}
\usepackage{amsmath}
\usepackage{todonotes}
\usepackage{xspace}
\usepackage{booktabs}
\usepackage[]{algorithm2e}
\title{Pybricks motor control algorithms}
\author{The Pybricks authors}
% Generic macros
\providecommand{\lr}[1]{\left(#1\right)}
\providecommand{\sub}[1]{_{\text{#1}}}
\renewcommand{\sup}[1]{^{\text{#1}}}
% omega symbols
\providecommand{\w}{\omega}
\providecommand{\wt}{\w^*}
\providecommand{\wref}{\w\sub{ref}}
\providecommand{\wmax}{\w\sub{max}}
% theta symbols
\renewcommand{\th}{\theta}
\providecommand{\thref}{\th\sub{ref}}
% alpha symbols
\renewcommand{\a}{\alpha}
% math
\providecommand{\minab}[2]{\min\,\lr{{#1},\,\,{#2}}}
\providecommand{\abs}[1]{\left|#1\right|}
\providecommand{\inlineadd}{\,\,+\!\!=\,\,}
\providecommand{\inlinesubtract}{\,\,-\!\!=\,\,}
%
%
% Begin Document
%
%
\begin{document}
\maketitle
\tableofcontents
\pagebreak
\section{Control system overview}
\begin{figure}[H]
\centering
\fontsize{8}{10}\selectfont
\includesvg[inkscapelatex=false,width=1\textwidth]{control}
\caption{
Control system overview.
\label{fig:controloverview}}
\end{figure}
\section{Reference trajectories}
When the user gives a motor command, we compute the reference trajectories for
the motor angle ($\thref$) and angular velocity ($\wref$) as a function of
time ($t$). These trajectories describe the ideal motion that would be followed
if the motor is not subject to external loads or disturbances.
\subsection{Trajectory definition}
For a typical maneuver, the reference trajectories for $\thref$ and $\wref$
are shown in Figure \ref{fig:plots}. Initially, when the user executes the
command at time $t_0$, the motor has a given initial angle $\th_0$ and a given
initial angular velocity $\w_0$. The motor accelerates with a magnitude
$\abs{\a_0}$ until it reaches the user-specified target angular velocity $\wt$
at time $t_1$. The target speed is maintained until it begins decelerating at
time $t_2$, in order to stop precisely at $t_3$. During this maneuver, at the
corresponding times, the motor angle reference traverses from $\th_0$ through
$\th_3$.
The reference trajectories $\thref(t)$ and $\wref(t)$ are uniquely specified by
the time instants $t_0$, $t_1$, $t_2$, $t_3$, the angles $\th_0$, $\th_1$,
$\th_2$, $\th_3$, the initial velocity $\w_0$, the target velocity $\wt$, the
final velocity $\w_3$, the acceleration $\a_0$ and deceleration $\a_2$.
Depending on which command is executed, we are given a subset of these
parameters and we have to compute the dependent variables.
\begin{figure}[H]
\centering
\includesvg[width=0.9\textwidth]{trajectory}
\caption{
Reference velocity (top) and reference angle (bottom).
\label{fig:plots}}
\end{figure}
If these parameters are known, the trajectories $\wref(t)$ and $\thref(t)$ are
given by
%
\begin{align}
\label{eq:wref}
\wref(t)&=
\begin{cases}
\w_0 & \text{if} \quad t = t_0\\
\w_0 + \a_0(t-t_0) & \text{if} \quad t < t_1\\
\w_1=\w_2=\wt & \text{if} \quad t_1 \leq t \leq t_2\\
\w_2 + \a_2(t-t_2) & \text{if}\quad t_2 < t < t_3\\
\w_3 & \text{if} \quad t = t_3
\end{cases}\\[1em]
\label{eq:thref}
\thref(t)&=
\begin{cases}
\th_0 & \text{if} \quad t = t_0\\
\th_0 + \w_0(t-t_0) + \dfrac{1}{2}\a_0(t-t_0)^2 &
\text{if} \quad t_0 < t \leq t_1\\
\th_1 + \w_1(t-t_1) & \text{if} \quad t_1 < t \leq t_2\\
\th_2 +\w_2(t-t_2)+\dfrac{1}{2}\a_2(t-t_2)^2 &
\text{if}\quad t_2 < t < t_3\\
\th_3 & \text{if} \quad t = t_3
\end{cases}
\end{align}
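Once all of these parameters are known, evaluating \eqref{eq:wref} and \eqref{eq:thref} is a direct piecewise computation. A Python sketch is given below; the variable names are illustrative and this is not the embedded implementation.
\begin{verbatim}
def reference(t, t0, t1, t2, t3, th0, th1, th2, th3, w0, wt, w3, a0, a2):
    """Evaluate (theta_ref, omega_ref) at time t, assuming all
    trajectory parameters have already been computed."""
    if t <= t0:
        return th0, w0
    if t <= t1:
        dt = t - t0
        return th0 + w0 * dt + 0.5 * a0 * dt * dt, w0 + a0 * dt
    if t <= t2:
        return th1 + wt * (t - t1), wt
    if t < t3:
        dt = t - t2
        return th2 + wt * dt + 0.5 * a2 * dt * dt, wt + a2 * dt
    return th3, w3
\end{verbatim}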
Some parameters are known because they are measured or because they are
specified by the user, while others are to be computed from the known
parameters. Which of the parameters are known depends on the user-specified
maneuver.
If the user specifies to rotate the motor for a certain duration with
\texttt{run\_time(speed, duration)}, the final time $t_3=t_0+t\sub{duration}$
is known but we must compute the corresponding angle $\th_3$. If instead the
final angle $\th_3$ is specified by the user command, we have to compute the
final time $t_3$. This applies to \texttt{run\_target(speed, target)}, where
$\th_3=\th\sub{target}$. The following two sections provide the formulas to
compute the unknown parameters for both cases.
\begin{table}[H]
\centering
\caption{Overview of known and computed trajectory parameters}
\label{tab:parameters}
\begin{tabular}{@{}lllll@{}}
\toprule
& Known & Obtained from & Computed & Method \\
& & user command & & \\ \midrule
Time-based &
$t_0$, $\th_0$, $\w_0$ &
$\abs{\a_0}$, $\abs{\a_2}$, $\wt$, $t_3$, $\w_3$ &
$\a_0$, $\a_2$, $t_1$, $t_2$, $\th_1$, $\th_2$, $\boldsymbol{\th_3}$
& Section \ref{sec:timebasedref}\\
Angle-based &
$t_0$, $\th_0$, $\w_0$ &
$\abs{\a_0}$, $\abs{\a_2}$, $\wt$, $\th_3$, $\w_3$ &
$\a_0$, $\a_2$, $t_1$, $t_2$, $\boldsymbol{t_3}$, $\th_1$, $\th_2$ &
Section \ref{sec:anglebasedref} \\
\bottomrule
\end{tabular}
\end{table}
For a typical single command, the final speed is always zero ($\w_3 = 0$), in
which case $\th(t) \equiv \th_3$ for $t > t_3$. We will also allow the user to
specify $\w_3 = \wt$. This can be used to blend subsequent commands together
without stopping.
\subsection{Calculating trajectory parameters given time target}
\label{sec:timebasedref}
This section derives the parameters in Table \ref{tab:parameters} for a
time-based maneuver. Without loss of generality, we will assume that the
target angular velocity is nonnegative:
%
\begin{align}
\label{eq:t:forwardmaneuver}
\wt \geq 0, \quad \w_3 \in \{0, \wt\}
\end{align}
%
If it is not, we can mirror the inputs along the $\w=0$ line, perform the
following computations, and mirror back the final result. This is discussed
in Section \ref{sec:t:reversing}.
In a time-based maneuver, we are only concerned with angular velocity control,
while the final angle is arbitrary. In principle, this means we are only
concerned with tracking $\wref(t)$ as shown in the top graph of
Figure \ref{fig:plots}. However, that graph depicts only one possible angular
velocity trajectory, for a particular set of parameters.
Figure \ref{fig:time} captures four possible types of angular velocity
reference trajectories, which differ in initial speed compared to the
target speed and final speed.
\begin{figure}[H]
\centering
\includesvg[width=0.8\textwidth]{timebased}
\caption{
Time-based motions with a duration of $t_3-t_0$ for various initial
conditions, with a stationary endpoint (left), or a nonzero final speed
(right). The trajectory is determined by the initial speed $\w_0$,
which may be equal to or greater than the target $\wt$ (blue),
or lower than the target (green). It could also be so low or high that
it will not be able to reach the target speed before completion
(orange and red).\label{fig:time}}
\end{figure}
Because we allow only positive duration arguments, we have $t_3-t_0 > 0$ by
definition. In order to ensure that the motor is able to reach $\w_3$ at time
$t_3$, the initial angular velocity must be bound by the gray area in Figure
\ref{fig:time}. The slope magnitude of the upper boundary equals the magnitude
of the acceleration $\abs{\a_0}$ or deceleration $\abs{\a_2}$, whichever is
larger. The lower boundary slope corresponds to the acceleration magnitude
$\abs{\a_0}$. We also limit the target speed such that it is able to
decelerate to the final speed with the given deceleration $\a_2$. This gives
the constraints
%
\begin{align}
-\abs{\a_0} \lr{t_3-t_0} &\leq \w_0 \leq \max \{\abs{\a_0}, \abs{\a_2}\} \lr{t_3-t_0} + \w_3\label{eq:t:timeboundary1}\\[1em]
0 &\leq \wt \leq \abs{\a_2} \lr{t_3-t_0} + \w_3\nonumber
\end{align}
%
In all cases, the trajectory decelerates between $t_2$ and $t_3$, so that
$\a_2 = - \abs{\a_2} < 0$. The initial acceleration between $t_0$ and $t_1$
depends on the initial speed $\w_0$ with respect to the \mbox{target $\wt$},
which gives
%
\begin{align}
\label{eq:t:accel0}
\a_0 &=
\begin{cases}
\phantom{-}\abs{\a_0} & \text{if} \quad \w_0 < \wt\\
-\abs{\a_0} & \text{otherwise}
\end{cases}
\end{align}
%
The case of equality is handled intrinsically within the
second case by ensuring that $\a_0$ is never used if $\w_0 = \wt$.
\subsubsection{Standard case}
\label{sec:t:standard}
The first step is to accelerate or decelerate to reach the target speed $\wt$.
Solving for the intersection time $t_1$ gives:
%
\begin{align}
\label{eq:t:t1mt0:standard}
\lr{t_1 - t_0}\sub{standard} &= \dfrac{\wt-\w_0}{\a_0}
\end{align}
%
Similarly, the time $t_2$ at which we start decelerating becomes defined by:
%
\begin{align}
\label{eq:t:t3mt2:standard}
\lr{t_3 - t_2}\sub{standard} &= \dfrac{\w_3-\wt}{\a_2}\\[1em]
\label{eq:t:t2mt1:standard}
\lr{t_2 - t_1}\sub{standard} &=
(t_3-t_0) - \lr{t_1 - t_0}\sub{standard} - \lr{t_3 - t_2}\sub{standard}
\end{align}
%
Since the target speed is reached, the constant speed value is simply
\begin{align}
\lr{\w_1}\sub{standard} &= \wt
\end{align}
%
The result is valid if and only if $\lr{t_2 - t_1}\sub{standard}\geq 0$.
Otherwise, we resort to the cut-short case covered below.
\subsubsection{Cut short case}
\label{sec:t:cutshort}
If the initial velocity is too low or if the maneuver is too short to be able
to reach the target velocity, it accelerates until it must begin to
decelerate, as shown by the first segment of the red line in Figure
\ref{fig:time}. Solving for the intersection time $t_1$
gives:
%
\begin{align}
\label{eq:t:t1mt0:cutshort}
\lr{t_1 - t_0}\sub{cut short} &=
\dfrac{\w_3-\w_0 - \a_2(t_3-t_0)}{\a_0-\a_2}
\end{align}
%
This result also applies if the initial acceleration is negative (orange line).
Zero division would occur if $\a_2 = \a_0$. Since $\a_2 < 0$, this is a concern
only when $\a_0 < 0$. However, this never happens since $\a_2 = \a_0 < 0$
implies that the standard case in Section \ref{sec:t:standard} has a valid
solution.
Similarly, the time when we start decelerating ($t_2$) becomes defined by:
%
\begin{align}
\label{eq:t:t3mt2:cutshort}
\lr{t_3 - t_2}\sub{cut short} &= (t_3 - t_0) - (t_1 - t_0)\sub{cut short}
\\[1em]
\label{eq:t:t2mt1:cutshort}
\lr{t_2 - t_1}\sub{cut short} &= 0
\end{align}
%
When cut short, the target speed $\wt$ is not reached but it peaks out at
%
\begin{align}
\lr{\w_1}\sub{cut short} &= \w_0 + \a_0(t_1 - t_0)\sub{cut short}
\end{align}
\subsubsection{Cut short case with $\w_3 = \w_1$ and $\a_0 > 0$}
\label{sec:t:cutshortw3}
If $\w_3 = \wt$ and $\a_0 > 0$, there is only the increasing
ramp for the whole duration of the maneuver, indicated by the
red line in the right graph of Figure \ref{fig:time}, giving:
%
\begin{align}
\w_1 = \w_3 := \w_0 + \a_0(t_3 - t_0)
\end{align}
%
and accordingly $t_3=t_2=t_1$.
\subsubsection{Reversing and unreversing the final and target speed}
\label{sec:t:reversing}
The aforementioned derivation assumes $\wt \geq \w_3 \geq 0$
\eqref{eq:t:forwardmaneuver} to reduce the number of (similar) cases that must
be accounted for. This section shows how to transform a given time based
maneuver to match this assumption, calculate the trajectory, and map the final
result back to obtain the originally requested command.
\begin{itemize}
\item Cap $\w_0$ and $\wt$ using \eqref{eq:t:timeboundary1}.
\item Let the boolean $a := \wt < 0$.
\item If $a$, then invert all speeds: $\w_0 := -\w_0$, $\wt := -\wt$,
$\w_3 := -\w_3$.
\item Calculate time and speed intersections using Sections \ref{sec:t:standard}--
\ref{sec:t:cutshortw3}.
\item If $a$, then invert the results as shown in Section \ref{sec:invert}.
\end{itemize}
\subsubsection{Intermediate angles (all cases)}
Having derived expressions to evaluate $t_1$, $t_2$, and $\w_1$, the remaining
parameters of Table \ref{tab:parameters} to compute are the angles
$\th_1$, $\th_2$, and $\th_3$, which can be derived by integrating the
angular velocity reference signal \eqref{eq:thref}:
% %
\begin{align}
\label{eq:t:anglepar1}
\th_1 &= \th_0 + \w_0(t_1-t_0)+\dfrac{1}{2}\a_0(t_1-t_0)^2\\
\label{eq:t:anglepar2}
\th_2&=\th_1+ \w_1(t_2-t_1)\\
\label{eq:t:anglepar3}
\th_3 &=\th_2+ \w_2(t_3-t_2)+\dfrac{1}{2}\a_2(t_3-t_2)^2
\end{align}
%
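Once the durations and the constant speed are known, the angles follow by
plain accumulation, as in this illustrative Python sketch of
\eqref{eq:t:anglepar1}--\eqref{eq:t:anglepar3} (variable names are
assumptions of the sketch):
\begin{verbatim}
def intermediate_angles(th0, w0, w1, a0, a2, dt1, dt2, dt3):
    """Integrate the speed reference over the three phases (sketch)."""
    th1 = th0 + w0 * dt1 + 0.5 * a0 * dt1 ** 2
    th2 = th1 + w1 * dt2                         # constant speed phase
    th3 = th2 + w1 * dt3 + 0.5 * a2 * dt3 ** 2   # w2 equals w1
    return th1, th2, th3
\end{verbatim}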
\subsection{Calculating trajectory parameters given angle target}
\label{sec:anglebasedref}
This section derives the parameters in Table \ref{tab:parameters} for an
angle-based maneuver. For simplicity, the derivation will assume that
the target angle is greater than the initial angle. This
means that the motor must move forward to reach its goal:
%
\begin{align}
\label{eq:a:forwardmaneuver}
\th_3 &> \th_0\\
\wt &> 0
\end{align}
%
If it is not, we can mirror the inputs along the $\th_3$ line, perform the
following computations, and mirror back the final result. This is discussed
in Section \ref{sec:a:reversing}.
In an angle-based maneuver, the end time $t_3$ is arbitrary, so the trajectory
is best analyzed in a ($\th$, $\w$) phase plot. This is shown in Figure
\ref{fig:positions} for various initial conditions indicated with blue dots.
To reduce the complexity of quadratic solutions on the microcontroller, we
restrict the final velocity to be either $\w_3=0$ or $\w_3=\w_2=\w_1$, implying
either deceleration to zero or no deceleration at all. Possible end states are
indicated with orange dots. In all cases $\a_2 < 0$.
\begin{figure}[H]
\centering
\includesvg[width=1\textwidth]{angbased}
\caption{
Phase portrait of trajectory from different types of initial conditions
indicated with blue dots:
(a) Nonnegative initial speed with a
sufficient distance from target to have a constant speed phase.
(b) Same as (a), except with negative initial speed.
(c) Nonnegative initial speed without a constant speed phase because
the target is too close.
(d) Same as (c), except with negative initial speed.
\label{fig:positions}}
\end{figure}
The typical trajectory is similar to cases (a) and (c): the motor starts
with a nonnegative velocity, accelerates, optionally runs through a constant
speed phase, and then decelerates to standstill at the target.
If the initial speed is negative ($\w_0 < 0$) as in initial conditions (b)
and (d), the motor slows down and goes backwards in the process. Once the
velocity passes through zero, the remaining trajectory is just like case (a)
and (c). For all trajectory types, it is convenient to define the common
zero-speed angle $\th_f$ as indicated
with green dots in Figure~\ref{fig:positions}:
%
\begin{align}
\th_f = \th_0 - \dfrac{1}{2 \a_0}\w_0^2
\end{align}
Because we allow only positive speeds, we have $\th_3-\th_0 > 0$ by
definition. To ensure that the motor is able to reach $\w_3$ at time
$t_3$, the initial angular velocity must be bounded by the gray area in Figure
\ref{fig:positions}. The upper boundary corresponds to the maximum initial
speed from which we can still decelerate to the target angle in time.
In particular, we restrict the initial speed to the value from which we can
decelerate with either $\abs{\a_0}$ or $\abs{\a_2}$, whichever is larger.
There is no need for a negative lower bound. To see this, consider cases (b)
and (d) in Figure~\ref{fig:positions}: a negative initial speed makes it move
farther from the target angle, eliminating the risk of overshooting it. This
gives the constraint:
%
\begin{align}
\w_0 \leq \sqrt{\w_3^2 + 2 \max\{ \abs{\a_0}, \abs{\a_2}\}\lr{\th_3-\th_0}}
\end{align}
%
Likewise, we bound the strictly positive target speed to a value from which we
can still decelerate to the final speed with the given deceleration
$\abs{\a_2}$:
%
\begin{align}
0 < \wt \leq \sqrt{\w_3^2 + 2\abs{\a_2}\lr{\th_3-\th_0}}
\end{align}
%
In all cases, the trajectory decelerates between $t_2$ and $t_3$, so that
$\a_2 = - \abs{\a_2} < 0$. The initial acceleration between $t_0$ and $t_1$
depends on the initial speed $\w_0$ with respect to the \mbox{target $\wt$},
which gives
%
\begin{align}
\label{eq:a:accel0}
\a_0 &=
\begin{cases}
\phantom{-}\abs{\a_0} & \text{if} \quad \w_0 < \wt\\
-\abs{\a_0} & \text{otherwise}
\end{cases}
\end{align}
%
The case of equality is handled intrinsically within the
second case by ensuring that $\a_0$ is never used if $\w_0 = \wt$.
\subsubsection{Standard case with $\w_3 \in \{0, \wt\}$}
\label{sec:a:standard}
In the standard maneuver, the motor accelerates or decelerates
until it reaches the target speed $\wt$, as shown for cases (a) and (b)
in Figure \ref{fig:positions}.
Solving for the intersection with $\wt$ gives:
%
\begin{align}
\label{eq:a:t1mt0:standard}
\lr{\th_1}\sub{standard} &= \th_f + \dfrac{1}{2\a_0}(\wt)^2\\[1em]
\lr{\th_2}\sub{standard} &= \th_3 + \dfrac{1}{2\a_2}\lr{(\wt)^2 - \w_3^2}
\end{align}
%
%
Since the target speed is reached, the constant speed value is simply
\begin{align}
\lr{\w_1}\sub{standard} &= \wt
\end{align}
%
The standard case is valid if and only if:
\begin{align}
\label{eq:a:t1mt0:standardvalidity}
\lr{\th_1}\sub{standard} &< \lr{\th_2}\sub{standard}
\end{align}
%
Otherwise, we have to evaluate the cut-short case.
%
\subsubsection{Cut short case with $\w_3 = 0$}
\label{sec:a:cutshortw3is0}
If the initial velocity is too low or if the
maneuver is too short to be able to reach the target velocity, the motor
accelerates until it must begin to decelerate, as in cases (c) and (d) in
Figure \ref{fig:positions}.
Solving for the intersection angle $\th_1=\th_2$ at which $\w_1=\w_2$ gives:
%
\begin{align}
\label{eq:a:cutshort}
\lr{\th_1}\sub{cut short} &= \lr{\th_2}\sub{cut short}\\[1em]
\dfrac{1}{2\a_0}\w_1^2 + \th_f &= \th_3 + \dfrac{1}{2\a_2} \w_1^2
\end{align}
%
which can be solved for $\w_1$ as:
%
\begin{align}
\label{eq:a:cutshortsolve}
\w_1^2 = 2 \dfrac{\a_0\a_2}{\a_2-\a_0}\lr{\th_3 - \th_f}
\end{align}
%
from which $\th_1=\th_2$ follows via \eqref{eq:a:cutshort}.
%
When cut short, the target speed $\wt$ is not reached but the
peak $\w_1 \geq 0$ can be obtained as the square root
of \eqref{eq:a:cutshortsolve}.
%
\subsubsection{Cut short case with $\w_3 = \w_1$ and $\a_0 > 0$}
\label{sec:a:cutshortw3isw1}
If $\a_0 > 0$ and there is no deceleration phase but the target speed still
cannot be reached, we have $\w_3=\w_1 < \wt$ with $\th_1=\th_2=\th_3$:
%
\begin{align}
\label{eq:a:cutshortw1}
\w_1^2 &= 2\a_0\lr{\th_3 - \th_f}
\end{align}
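The angle-based cases combine into a single dispatch, sketched below in
illustrative Python (not the actual implementation; the ramp-only case of
this section is omitted for brevity and the argument names are assumptions):
\begin{verbatim}
def angle_based_intersections(th0, th3, w0, wt, w3, a0, a2):
    """Find th1, th2 and the constant speed w1 (sketch).

    Assumes the forward convention th3 > th0, wt > 0, a2 < 0,
    and w3 in {0, wt}.
    """
    th_f = th0 - w0 ** 2 / (2 * a0)   # common zero-speed angle
    # Standard case: accelerate to wt, cruise, then decelerate to w3.
    th1 = th_f + wt ** 2 / (2 * a0)
    th2 = th3 + (wt ** 2 - w3 ** 2) / (2 * a2)
    if th1 < th2:
        return th1, th2, wt
    # Cut short with w3 = 0: the two ramps meet below wt.
    w1_sq = 2 * a0 * a2 / (a2 - a0) * (th3 - th_f)
    w1 = w1_sq ** 0.5
    th1 = th_f + w1_sq / (2 * a0)
    return th1, th1, w1
\end{verbatim}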
\subsubsection{Reversing and unreversing the final and target speed}
\label{sec:a:reversing}
The aforementioned derivation assumes $\th_3 > \th_0$ and so $\wt > 0$ to
reduce the number of (similar) cases that must be accounted for. This section
shows how to transform a given angle-based maneuver to match this assumption,
calculate the trajectory, and map the final result back to obtain the
originally requested command.
\begin{itemize}
\item Let the boolean $a := \th_3 < \th_0$.
\item If $a$, then invert targets as:
$\th_3 := 2 \th_0 - \th_3$, $\wt := -\wt$, $\w_0 := -\w_0$,
$\w_3 := -\w_3$.
\item Calculate angle and speed intersections using
Sections \ref{sec:a:standard}--\ref{sec:a:cutshortw3isw1}.
\item If $a$, then reverse results using Section \ref{sec:invert}.
\end{itemize}
\subsubsection{Intermediate times (all cases)}
Having derived expressions to evaluate $\th_1$, $\th_2$, and $\w_1$, the
remaining parameters of Table \ref{tab:parameters} to compute are the times
$t_1$, $t_2$, and $t_3$:
%
\begin{align}
t_1 - t_0 &= \dfrac{\w_1-\w_0}{\a_0}\\[1em]
t_2 - t_1 &= \dfrac{\th_2-\th_1}{\w_1}\\[1em]
t_3 - t_2 &= \dfrac{\w_3-\w_1}{\a_2}
\end{align}
\subsection{Making a stationary trajectory}
\label{sec:stationary}
For a stationary hold trajectory, we have:
%
\begin{align}
t_3 = t_2 = t_1 = t_0 \\[1em]
\th_3 = \th_2 = \th_1 = \th_0 \\[1em]
\w_1 = \w_0 = 0 \\[1em]
\a_0 = \a_2 = 0
\end{align}
\subsection{Reversing a trajectory}
\label{sec:invert}
In Sections \ref{sec:timebasedref} and \ref{sec:anglebasedref} several
assumptions were made to ensure that the calculated trajectory is always
forwards with $\wt > 0$. If the original target speed was negative, the
newly computed maneuver can be reversed as follows:
%
\begin{align}
\th_1 &:= 2 \th_0 - \th_1\\[1em]
\th_2 &:= 2 \th_0 - \th_2\\[1em]
\th_3 &:= 2 \th_0 - \th_3\\[1em]
\w_0 &:= -\w_0\\[1em]
\w_1 &:= -\w_1\\[1em]
\a_0 &:= -\a_0\\[1em]
\a_2 &:= -\a_2
\end{align}
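In code, reversing is a sign flip and a mirroring about $\th_0$, as in this
illustrative Python sketch (the dictionary representation of the trajectory
is an assumption of the sketch):
\begin{verbatim}
def reverse_trajectory(traj):
    """Mirror a computed forward trajectory about th0 (sketch)."""
    th0 = traj["th0"]
    for key in ("th1", "th2", "th3"):
        traj[key] = 2 * th0 - traj[key]   # mirror angles about th0
    for key in ("w0", "w1", "a0", "a2"):
        traj[key] = -traj[key]            # flip speeds and accelerations
    return traj
\end{verbatim}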
\subsection{Stretching trajectories for synchronization}
In some applications, two or more separate trajectories are executed in
parallel to synchronize their movements. Typically, each trajectory has its own
target angle $\th_3$. To make them run in parallel, we slow down the shorter
maneuvers such that they take as long as the longest maneuver. For this
analysis, let the trajectory with superscript $0$ be the one that takes the longest, so that
%
\begin{align}
t^0_3 - t^0_0 \geq t^i_3 - t^i_0 \quad \forall \quad i
\end{align}
For synchronization we require that for all other trajectories $i$ we have:
%
\begin{align}
t^i_1 &= t^0_1=t_1\\[1em]
t^i_2 &= t^0_2=t_2\\[1em]
t^i_3 &= t^0_3=t_3
\end{align}
%
Each trajectory still has to reach its own target $\th^i_3$.
Using (\ref{eq:t:anglepar1}--\ref{eq:t:anglepar3}) this gives the constraint:
%
\begin{align}
\label{eq:stretchconstraint1}
\th^i_3 - \th^i_0 &= \w^i_0(t_1-t_0)+\dfrac{1}{2}\a^i_0(t_1-t_0)^2+
\w^i_1(t_2-t_1)+ \w^i_1(t_3-t_2)+\dfrac{1}{2}\a^i_2(t_3-t_2)^2
\end{align}
%
Likewise, each trajectory has to reach its top speed $\w^i_1$ and its final
speed $\w^i_3$ in the same time spans as the longest maneuver, which gives the
two additional constraints:
%
\begin{align}
\label{eq:stretchconstraint2}
\a^i_0 &= \dfrac{\w^i_1-\w_0^i}{t_1 - t_0}\\[1em]
\label{eq:stretchconstraint3}
\a_2^i &= \dfrac{\w_3^i-\w_1^i}{t_3 - t_2}
\end{align}
%
With three constraints we can solve for the three unknowns $\a^i_0$, $\a^i_2$,
and $\w^i_1$. To do so, solve for $\w^i_1$ by substituting
\eqref{eq:stretchconstraint2}, \eqref{eq:stretchconstraint3} into
\eqref{eq:stretchconstraint1}:
%
\begin{align}
\label{eq:stretchconstraintsolved}
\w^i_1 = \dfrac{2\lr{\th_3^i-\th_0^i}-\w_0^i\lr{t_1-t_0} -
\w_3^i\lr{t_3-t_2}}{t_3-t_0 + t_2 - t_1}
\end{align}
%
Since $t_2 - t_1 \geq 0$, zero division is avoided if $t_3 - t_0 > 0$. If $t_3
- t_0 = 0$, then we have a stationary trajectory as per Section
\ref{sec:stationary}. If $\w_3^i$ was nonzero, it must be lowered as well by
setting $\w_3^i := \w_1^i$.
Once $\w^i_1$ is known, $\a^i_0$ and $\a^i_2$ follow directly from
\eqref{eq:stretchconstraint2}, \eqref{eq:stretchconstraint3}. Zero division is
avoided because $\a^i_0$ is undefined (not used) when $t_1 = t_0$ and $\a^i_2$
is undefined (not used) when $t_3 = t_2$. The intermediate angles $\th_1^i$
and $\th_2^i$ can be obtained from
(\ref{eq:t:anglepar1}--\ref{eq:t:anglepar2}).
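The stretching computation can be summarized by the following illustrative
Python sketch (not the actual implementation; passing the trajectory-specific
quantities as explicit arguments is an assumption of the sketch):
\begin{verbatim}
def stretch_trajectory(th0_i, th3_i, w0_i, w3_i, t0, t1, t2, t3):
    """Match trajectory i to the phase times of the longest one (sketch)."""
    # Peak speed from the angle constraint.
    numer = 2 * (th3_i - th0_i) - w0_i * (t1 - t0) - w3_i * (t3 - t2)
    w1_i = numer / ((t3 - t0) + (t2 - t1))
    # Accelerations from the speed constraints; None marks an acceleration
    # that is not used because its phase has zero duration.
    a0_i = (w1_i - w0_i) / (t1 - t0) if t1 > t0 else None
    a2_i = (w3_i - w1_i) / (t3 - t2) if t3 > t2 else None
    return w1_i, a0_i, a2_i
\end{verbatim}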
\end{document}
| {
"alphanum_fraction": 0.6834846813,
"avg_line_length": 36.3672839506,
"ext": "tex",
"hexsha": "6f2ec413f08a9ecd63db129007fe027cb7dc91da",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4cb036fdcdcf1240576efae772375a6b37e8a7ba",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Novakasa/pybricks-micropython",
"max_forks_repo_path": "lib/pbio/doc/control/control.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4cb036fdcdcf1240576efae772375a6b37e8a7ba",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Novakasa/pybricks-micropython",
"max_issues_repo_path": "lib/pbio/doc/control/control.tex",
"max_line_length": 129,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4cb036fdcdcf1240576efae772375a6b37e8a7ba",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Novakasa/pybricks-micropython",
"max_stars_repo_path": "lib/pbio/doc/control/control.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7942,
"size": 23566
} |
\documentclass[10pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{bm}
\usepackage{xspace}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\setcounter{MaxMatrixCols}{20}
\usepackage[capitalise]{cleveref}
\usepackage[per-mode=symbol]{siunitx}
\usepackage{url}
\newcommand{\xk}{\ensuremath{\bm{x}_k}\xspace}
\newcommand{\yk}{\ensuremath{\bm{y}_k}\xspace}
\newcommand{\xkk}{\ensuremath{\bm{x}_{k+1}}\xspace}
\newcommand{\xx}[1]{\ensuremath{\bm{x}_{#1}}\xspace}
\newcommand{\uk}{\ensuremath{\bm{u}_k}\xspace}
\newcommand{\ukk}{\ensuremath{\bm{u}_{k+1}}\xspace}
\newcommand{\uu}[1]{\ensuremath{\bm{u}_{#1}}\xspace}
\newcommand{\duk}{\ensuremath{\bm{\dot{u}}_k}\xspace}
\newcommand{\dukk}{\ensuremath{\bm{\dot{u}}_{k+1}}\xspace}
\newcommand{\duu}[1]{\ensuremath{\bm{\dot{u}}_{#1}}\xspace}
\newcommand{\ek}{\ensuremath{\bm{e}_k}\xspace}
\newcommand{\ekk}{\ensuremath{\bm{e}_{k+1}}\xspace}
\newcommand{\ee}[1]{\ensuremath{\bm{e}_{#1}}\xspace}
\newcommand{\rk}{\ensuremath{\bm{r}_k}\xspace}
\newcommand{\ymin}{\ensuremath{\bm{y}_\text{min}}\xspace}
\newcommand{\ymax}{\ensuremath{\bm{y}_\text{max}}\xspace}
\newcommand{\umin}{\ensuremath{\bm{u}_\text{min}}\xspace}
\newcommand{\umax}{\ensuremath{\bm{u}_\text{max}}\xspace}
\newcommand{\dumin}{\ensuremath{\bm{\dot{u}}_\text{min}}\xspace}
\newcommand{\dumax}{\ensuremath{\bm{\dot{u}}_\text{max}}\xspace}
\newcommand{\epsy}{\ensuremath{\epsilon_{y}}\xspace}
\newcommand{\epsu}{\ensuremath{\epsilon_{u}}\xspace}
\newcommand{\epsdu}{\ensuremath{\epsilon_{\dot{u}}}\xspace}
\newcommand{\epst}{\ensuremath{\epsilon_T}\xspace}
\newcommand{\code}[1]{\texttt{#1}\xspace}
\newcommand{\R}{\ensuremath{\mathbb{R}\xspace}}
\newcommand{\ts}{\ensuremath{t_s}}
\newcommand{\Co}{\ensuremath{C_1}}
\newcommand{\Cc}{\ensuremath{C_2}}
\def\pwidth{0.49}
\begin{document}
\section{Problem formulation}
The \code{MPCProblem} class makes it possible to solve Model Predictive Control (MPC) problems.
The class considers the following state space representation of the system
\begin{align}
\xkk &= A\xk + B\uk\\
\yk &= \Co\xk
\end{align}
where $\xk \in \R^n$, $\yk \in \R^q$, and $\uk \in \R^p$ represent the state of the system, the output, and the control at time step $k$, respectively.
$A \in \R^{n \times n}$, $B \in \R^{n \times p}$, and $\Co \in \R^{q \times n}$ are the state, the control, and the output matrices.
The matrices given to the class are the discrete time matrices, so if you have the continuous time state space representation of your system, you should first discretize it for a particular sampling time.
In general, the \code{MPCProblem} class solves the following quadratic optimization problem over the time horizon $T$:
\begin{equation}
\label{eq:min}
\min_s s^\intercal Q s
\end{equation}
with
\begin{equation}
\label{eq:s}
s = [\bm{e}, \bm{u}, \bm{\dot{u}}, \epsy, \epsu, \epsdu, \epst]
\end{equation}
subject to
\begin{align}
\uu{0} &= \bm{u}^* \label{eq:u0}\\
\ek &= \Co\xk - \rk,\quad k = 0,\dots,T \label{eq:error}\\
-\epst &\leq \ee{T} \leq \epst \label{eq:terminal}\\
\ukk &= \uk + \duk \cdot \ts,\quad k = 0,\dots,T-1 \label{eq:duk}\\
\Cc\xkk &\leq \ymax + \epsy,\quad k = 0,\dots,T-1 \label{eq:ymax}\\
\Cc\xkk &\geq \ymin - \epsy,\quad k = 0,\dots,T-1 \label{eq:ymin}\\
\ukk &\leq \umax + \epsu,\quad k = 0,\dots,T-1 \label{eq:umax}\\
\ukk &\geq \umin - \epsu,\quad k = 0,\dots,T-1 \label{eq:umin}\\
\ukk - \uk &\leq \dumax\cdot\ts + \epsdu,\quad k = 0,\dots,T-1 \label{eq:dumax}\\
\ukk - \uk &\geq \dumin\cdot\ts - \epsdu,\quad k = 0,\dots,T-1 \label{eq:dumin}\\
\epsy &\geq 0 \label{eq:epsy}\\
\epsu &\geq 0 \label{eq:epsu}\\
\epsdu &\geq 0 \label{eq:epsdu}\\
\epst &\geq 0 \label{eq:epst}\\
\end{align}
where
\begin{equation}
\label{eq:xk}
\xk = A^{k} \xx{0} + \sum_{i=0}^{k-1} A^{k-i-1}B \uu{i}
\end{equation}
In \cref{eq:min}, $s$ is the minimization variable and $Q$ a diagonal matrix, which can be modified by the user to weight the minimization terms.
The $Q$ matrix must be positive definite, so all entries on the diagonal must be strictly positive.
This is required because of the resolution method used by the solver, which requires problems to be strictly convex.
This kind of problem is simpler to solve, so the solution can be computed very efficiently.
In turn, we cannot set diagonal entries to 0, which requires the problem to be expressed in a non-intuitive way.
For example, in standard MPC problems \xk is part of the minimization variable, so the error constraint is formulated as
\begin{align}
\xkk &= A\xk + B\uk\\
\ek &= \Co\xk - \rk,
\end{align}
which is much easier to read than the combination of \cref{eq:error,eq:xk}.
Clearly, \xk should not be part of the minimization: you want to minimize the error, not the state.
With a solver accepting semi-definite programs, this is not a problem, as we can simply set the diagonal entries of the $Q$ matrix regarding the \xk terms to 0.
With the QuadProg++\footnote{\url{https://github.com/liuq/QuadProgpp}} solver used here this is not possible, which is why the problem is formulated in a more ``complex'' way.
The $s$ variable is composed of a set of subvariables:
\begin{itemize}
\item $\bm{e}$, output error: this is the set of output errors for each time step $k$ in the considered time horizon. $\ek \in \R^q$ is the difference between the output of the system ($\Co\xk$) and the target reference value ($\rk$) (constraint in \cref{eq:error}) for $k = 0,\dots,T$.
\item $\bm{u}$, control: this is the set of control actions for each time step $k$ in the considered time horizon, with $\uk \in \R^p$ for $k = 0,\dots,T-1$. The first value (\uu{0}) is fixed and it is a parameter of the problem (\cref{eq:u0}).
\item $\bm{\dot{u}}$, control derivative: this is the set of control derivatives, i.e., $\duk \in \R^p$ is the difference between \ukk and \uk (constraint in \cref{eq:duk}) for $k = 0,\dots,T-1$. This term is optional and can be disabled when instantiating the problem, disregarding the control derivative in the minimization.
\item \epsy: slack variable for the violation of output constraints. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
\item \epsu: slack variable for the violation of control constraints. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
\item \epsdu: slack variable for the violation of control derivative constraints. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
\item \epst: slack variable for the terminal constraint. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and transforms the terminal constraint into $\ee{T} = 0$.
\end{itemize}
With respect to the constraints, the interpretation is the following:
\begin{itemize}
\item \cref{eq:terminal}: this is called the terminal constraint and indicates that, at the end of the time horizon, we want our output error to be zero, or very close to it. Notice that this might make the problem unfeasible, e.g., when setting a very small time horizon. For this reason, the constraint can be disabled. Alternatively to disabling the constraint, it is possible to enable a slack variable for it, so that the constraint can be violated but at a certain cost.
\item \cref{eq:duk}: this constraint simply defines that two successive control actions differ by the control derivative. When the minimization on the control derivative is disabled, this constraint is not used.
\item \cref{eq:ymax,eq:ymin}: constraints on output limiting the maximum and the minimum output value. Notice that we use a different output matrix than the one of the error constraint (\cref{eq:error}), which we call \Cc. The reason is simple: imagine that our state is composed of the acceleration and the speed of a vehicle and that we want to reach a target speed while limiting the acceleration. If we use the same output matrix this is not possible (except with some ugly hack), but by using two different output matrices we increase the flexibility while lowering the computational effort. In the document and in the code, $\Cc \in \R^{q_2 \times n}$. These constraints can be violated if the slack variable for the output is enabled. If the slack variable on output is not enabled, then the \epsy variable is not considered.
\item \cref{eq:umax,eq:umin}: constraints on control limiting the maximum and the minimum control value. These constraints can be violated if the slack variable for the control is enabled. If the slack variable on control is not enabled, then the \epsu variable is not considered.
\item \cref{eq:dumax,eq:dumin}: constraints on control derivative limiting the maximum and the minimum control derivative value. The limit is multiplied by the sampling time \ts, so changing the sampling time doesn't require the user to change the \dumax and the \dumin values. These constraints can be violated if the slack variable for the control derivative is enabled. If the slack variable on control derivative is not enabled, then the \epsdu variable is not considered.
\item \cref{eq:epsy,eq:epsu,eq:epsdu,eq:epst}: these constraints simply force the slack variables to be positive.
\end{itemize}
Finally, \cref{eq:xk} defines \xk in terms of the variables of the problem.
\xx{0} and \uu{0} are the initial state and control action, respectively, while \uk for $k = 1,\dots,T-1$ are the control actions computed by the algorithm.
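As an illustration of \cref{eq:xk}, the state at step $k$ can be obtained by iterating the dynamics, which is equivalent to evaluating the sum. This Python sketch is illustrative only and is not part of the library's API:
\begin{verbatim}
import numpy as np

def state_at_step(A, B, x0, controls):
    """Evaluate x_k = A^k x_0 + sum_{i<k} A^(k-i-1) B u_i (sketch).

    A: (n, n), B: (n, p), x0: (n,), controls: sequence of k vectors (p,).
    """
    x = np.asarray(x0, dtype=float)
    for u in controls:          # unrolls the sum one step at a time
        x = A @ x + B @ u
    return x
\end{verbatim}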
\section{Sample problem}
The library comes with a sample program (\texttt{test-mpc.cc}) that solves an MPC problem where the state is described by the following system of differential equations:
\begin{equation}
\dot{\bm{x}} = \begin{cases}
\dot{a} = -\frac{1}{\tau} a + \frac{1}{\tau} u\\
\dot{v} = a
\end{cases}
\end{equation}
The system state represents the acceleration and the speed of a vehicle, where the acceleration is subject to an actuation lag modeled as a first order lag with a time constant $\tau$.
By writing the system in the standard state-space representation we obtain
\begin{align}
\dot{\bm{x}} &= G\bm{x} + H\bm{u}\\
\bm{y} &= \Co\bm{x}
\end{align}
where
\begin{equation}
G = \begin{bmatrix}-\frac{1}{\tau} & 0\\1 & 0\end{bmatrix},\quad H = \begin{bmatrix}\frac{1}{\tau}\\0\end{bmatrix},\quad \Co = \begin{bmatrix}0 & 1\end{bmatrix}
\end{equation}
We discretize the continuous time representation by transforming $H$ and $G$ into $A$ and $B$, respectively, so that the state-space representation now becomes
\begin{align}
\xkk &= A\xk + B\uk\\
\yk &= \Co\xk
\end{align}
where
\begin{equation}
A = e^{G \cdot \ts},\quad B = \int_0^{\ts} e^{G \lambda} d\lambda H
\end{equation}
obtaining
\begin{equation}
A = \begin{bmatrix}
e^{-\frac{\ts}{\tau}} & 0\\
\tau\left(1 - e^{-\frac{\ts}{\tau}}\right) & 1\\
\end{bmatrix},\quad
B = \begin{bmatrix}
1 - e^{-\frac{\ts}{\tau}}\\
\ts + \tau\left(e^{-\frac{\ts}{\tau}} - 1\right)
\end{bmatrix}
\end{equation}
$A$, $B$, and $C$ are thus the matrices we pass to the \texttt{MPCProblem} class given a particular choice of the time constant $\tau$ and the sampling time \ts.
In the provided example, we set $\ts = \SI{0.1}{\second}$ and $\tau = \SI{0.5}{\second}$, obtaining
\begin{equation}
A = \begin{bmatrix}
0.8187 & 0\\
0.09063 & 1\\
\end{bmatrix},\quad
B = \begin{bmatrix}
0.1812\\
0.009365\\
\end{bmatrix}
\end{equation}
The initial state $\xx{0}$ and the initial control $\uu{0}$ are set to $[0, 0]^\intercal$ and $[0]$, respectively, the reference $\rk$ is set to $[1]$, the time horizon $T$ to 60 steps (\SI{6}{\second}), the terminal constraint is enabled, and there are no bounds on the output, control, and control derivative variables.
Finally, all the weights in the $Q$ matrix are set to 1.
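As a quick numerical check of the discretization (illustrative Python, not part of the library), the values of $A$ and $B$ above can be reproduced from $\ts = \SI{0.1}{\second}$ and $\tau = \SI{0.5}{\second}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

ts, tau = 0.1, 0.5

G = np.array([[-1.0 / tau, 0.0],
              [ 1.0,       0.0]])

A = expm(G * ts)   # approx [[0.8187, 0], [0.0906, 1]]
B = np.array([[1.0 - np.exp(-ts / tau)],
              [ts + tau * (np.exp(-ts / tau) - 1.0)]])
print(np.round(A, 5))
print(np.round(B, 6))   # approx [[0.181269], [0.009365]]
\end{verbatim}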
In the root folder you find a bash script (\texttt{test-mpc.sh}) that runs the test application under different output parameters.
If the script finds an \texttt{R} installation, it will also plot the results to PDF files.
The first example (\cref{fig:test01}) shows the results for the default parameters, i.e., the ones previously described.
The graph shows the different quantities considered in the optimization problem, i.e., the actual speed and the target speed, the acceleration, the control input, and the control derivative.
As expected, the solution brings the system to the target speed while also minimizing the effort on control and control derivative.
\begin{figure}[h]
\centering
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test01}
\caption{Results for default parameters.}
\label{fig:test01}
\end{minipage}
\hfill
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test02}
\caption{Results for lower weights on control and control derivative.}
\label{fig:test02}
\end{minipage}
\end{figure}
In the second example (\cref{fig:test02}) we lower the weights for the control and the control derivative terms in the minimization, setting them to $0.01$ instead of $1$.
Control and control derivative are now much larger, as we told our solver that we do not care too much about minimizing them.
This results in a faster settling time.
In the third example (\cref{fig:test03}) we use the parameters of the second but we add upper and lower bounds on control actions, i.e., \SI{1}{\meter\per\second\squared} and \SI{-1}{\meter\per\second\squared}, respectively.
The result is very similar to the second, the only difference being that the control action is ``truncated'' at \SI{1}{\meter\per\second\squared} as required by the constraint.
This causes a slightly longer settling time with respect to the second example.
\begin{figure}[h]
\centering
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test03}
\caption{Results for lower weights on control and control derivative, plus bounded control action.}
\label{fig:test03}
\end{minipage}
\hfill
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test04}
\caption{Results for lower weights on control and control derivative, plus bounded control and control derivative action.}
\label{fig:test04}
\end{minipage}
\end{figure}
In the fourth example (\cref{fig:test04}), compared to the third, we add a constraint on the control derivative as well, i.e., $\SI{-0.5}{\meter\per\second\cubed} \leq \duk \leq \SI{0.5}{\meter\per\second\cubed}$.
The effect is clearly visible, both in the control derivative bounds and in the control input.
The bound on control derivative causes a ``linear'' increase and decrease of the control action.
The additional bound causes the settling time to increase.
The fifth example (\cref{fig:test05}) is the same as the second (i.e., we lower the weights for control and control derivative) but, in addition, we add a bound to the maximum acceleration (\SI{0.6}{\meter\per\second\squared}).
The plot shows that the system reaches the target speed, but the acceleration never exceeds the bound set by the constraint.
\begin{figure}[h]
\centering
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test05}
\caption{Results for lower weights on control and control derivative, plus bound on acceleration.}
\label{fig:test05}
\end{minipage}
\hfill
\begin{minipage}[t]{\pwidth\textwidth}
\includegraphics[width=\textwidth]{./fig/test06}
\caption{Results for lower weights on control and control derivative, plus bound on acceleration and slack variable for output bounds enabled.}
\label{fig:test06}
\end{minipage}
\end{figure}
The final example (\cref{fig:test06}) enables the slack variable for the output constraint set in the fifth example, setting the weight penalization for the slack variable in the $Q$ matrix to 10.
In this case the solver is allowed to violate the constraint on acceleration, as can be seen in the plot.
The constraint bound is highlighted by the gray dashed line, and the solver exceeds this limit, reaching roughly \SI{0.75}{\meter\per\second\squared}.
\end{document}
| {
"alphanum_fraction": 0.7175477196,
"avg_line_length": 64.4329501916,
"ext": "tex",
"hexsha": "13d287869651238215d3344bcec62b8d508d3d82",
"lang": "TeX",
"max_forks_count": 13,
"max_forks_repo_forks_event_max_datetime": "2022-03-12T11:16:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-02-03T15:22:44.000Z",
"max_forks_repo_head_hexsha": "a030421cbbcca8d3eb5e50b7cd0335bac0057fb0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "michele-segata/mpclib",
"max_forks_repo_path": "doc/mpclib.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "a030421cbbcca8d3eb5e50b7cd0335bac0057fb0",
"max_issues_repo_issues_event_max_datetime": "2018-07-22T13:19:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-01-03T15:23:49.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "michele-segata/mpclib",
"max_issues_repo_path": "doc/mpclib.tex",
"max_line_length": 836,
"max_stars_count": 22,
"max_stars_repo_head_hexsha": "a030421cbbcca8d3eb5e50b7cd0335bac0057fb0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "michele-segata/mpclib",
"max_stars_repo_path": "doc/mpclib.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-16T17:50:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-07-17T10:20:31.000Z",
"num_tokens": 4914,
"size": 16817
} |
% This LaTeX was auto-generated from an M-file by MATLAB.
% To make changes, update the M-file and republish this document.
\subsection*{gTrigT.m}
\begin{par}
\textbf{Summary:} Test the gTrig function, which computes (at least) the mean and the variance of the transformed variable for a Gaussian distributed input $x\sim\mathcal N(m,v)$. Check the outputs using Monte Carlo, and the derivatives using finite differences.
\end{par} \vspace{1em}
\begin{verbatim}function gTrigT(m, v, i, e)\end{verbatim}
\begin{par}
\textbf{Input arguments:}
\end{par} \vspace{1em}
\begin{verbatim}m mean vector of Gaussian [ d ]
v covariance matrix [ d x d ]
i vector of indices of elements to augment [ I x 1 ]
e (optional) scale vector; default: 1 [ I x 1 ]\end{verbatim}
\begin{par}
Copyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.
\end{par} \vspace{1em}
\begin{par}
Last modified: 2013-03-25
\end{par} \vspace{1em}
\begin{lstlisting}
function gTrigT(m, v, i, e)
\end{lstlisting}
\subsection*{Code}
\begin{lstlisting}
% create a default test if no input arguments are given
if ~nargin
D = 4;
m = randn(D,1);
v = randn(D); v = v*v'+eye(D);
i = [2; 4]; I = 2*length(i);
e = exp(randn(size(i)));
else
  D = length(m); I = 2*length(i); % sizes also needed when arguments are given
  if nargin < 4, e = ones(size(i)); end % default scale vector
end
n = 1e6; % Monte Carlo sample size
delta = 1e-4; % for finite difference approx
x = bsxfun(@plus, m, chol(v)'*randn(D,n));
y = bsxfun(@times, [e; e], [sin(x(i,:)); cos(x(i,:))]);
y = y(reshape(1:I,I/2,2)',:); % reorder rows
[M, V, C] = gTrig(m, v, i, e);
Q = cov([x' y']); Qv = Q(D+1:end,D+1:end); Qc = v\Q(1:D,D+1:end);
disp(['mean: gTrig Monte Carlo'])
disp([M mean(y,2)]); disp([' ']);
disp(['var: gTrig Monte Carlo'])
disp([V(:) Qv(:)]); disp([' ']);
disp(['cov: gTrig Monte Carlo'])
disp([C(:) Qc(:)]); disp(' ');
disp('dMdm')
for j = 1:I
checkgrad(@gTrigT0, m, delta, v, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I)]);
end
disp(' ');
disp('dVdm')
for j = 1:I*I
checkgrad(@gTrigT1, m, delta, v, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I*I)]);
end
disp(' ');
disp('dCdm')
for j = 1:I*D
checkgrad(@gTrigT2, m, delta, v, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I*D)]);
end
disp(' ');
disp('dMdv')
for j = 1:I
checkgrad(@gTrigT3, v(tril(ones(length(v)))==1), delta, m, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I)]);
end
disp(' ');
disp('dVdv')
for j = 1:I*I
checkgrad(@gTrigT4, v(tril(ones(length(v)))==1), delta, m, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I*I)]);
end
disp(' ');
disp('dCdv')
for j = 1:I*D
checkgrad(@gTrigT5, v(tril(ones(length(v)))==1), delta, m, i, e, j);
disp(['this was element # ' num2str(j) '/' num2str(I*D)]);
end
\end{lstlisting}
\begin{lstlisting}
function [f, df] = gTrigT0(m, v, i, e, j)
[M, V, C, dMdm] = gTrig(m, v, i, e);
f = M(j); df = dMdm(j,:);
function [f, df] = gTrigT1(m, v, i, e, j)
[M, V, C, dMdm, dVdm] = gTrig(m, v, i, e);
dVdm = reshape(dVdm,[size(V) length(m)]);
dd = length(M); p = fix((j+dd-1)/dd); q = j-(p-1)*dd;
f = V(p,q); df = squeeze(dVdm(p,q,:));
function [f, df] = gTrigT2(m, v, i, e, j)
[M, V, C, dMdm, dVdm, dCdm] = gTrig(m, v, i, e);
dCdm = reshape(dCdm,[size(C) length(m)]);
dd = length(M); p = fix((j+dd-1)/dd); q = j-(p-1)*dd;
f = C(p,q); df = squeeze(dCdm(p,q,:));
function [f, df] = gTrigT3(v, m, i, e, j)
d = length(m);
vv(tril(ones(d))==1) = v; vv = reshape(vv,d,d);
vv = vv + vv' - diag(diag(vv));
[M, V, C, dMdm, dVdm, dCdm, dMdv] = gTrig(m, vv, i, e);
dMdv = reshape(dMdv,[length(M) size(v)]);
f = M(j); df = squeeze(dMdv(j,:,:));
df = df+df'-diag(diag(df)); df = df(tril(ones(d))==1);
function [f, df] = gTrigT4(v, m, i, e, j)
d = length(m);
vv(tril(ones(d))==1) = v; vv = reshape(vv,d,d);
vv = vv + vv' - diag(diag(vv));
[M, V, C, dMdm, dVdm, dCdm, dMdv, dVdv] = gTrig(m, vv, i, e);
dVdv = reshape(dVdv,[size(V) size(v)]);
dd = length(M); p = fix((j+dd-1)/dd); q = j-(p-1)*dd;
f = V(p,q); df = squeeze(dVdv(p,q,:,:));
df = df+df'-diag(diag(df)); df = df(tril(ones(d))==1);
function [f, df] = gTrigT5(v, m, i, e, j)
d = length(m);
vv(tril(ones(d))==1) = v; vv = reshape(vv,d,d);
vv = vv + vv' - diag(diag(vv));
[M, V, C, dMdm, dVdm, dCdm, dMdv, dVdv, dCdv] = gTrig(m, vv, i, e);
dCdv = reshape(dCdv,[size(C) size(v)]);
dd = length(M); p = fix((j+dd-1)/dd); q = j-(p-1)*dd;
f = C(p,q); df = squeeze(dCdv(p,q,:,:));
df = df+df'-diag(diag(df)); df = df(tril(ones(d))==1);
\end{lstlisting}
| {
"alphanum_fraction": 0.5528421053,
"avg_line_length": 30.2547770701,
"ext": "tex",
"hexsha": "7a1e6fe0b205088d9e935e268bb2a6f08213cade",
"lang": "TeX",
"max_forks_count": 36,
"max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z",
"max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "sahandrez/quad_pilco",
"max_forks_repo_path": "doc/tex/gTrigT.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834",
"max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "sahandrez/quad_pilco",
"max_issues_repo_path": "doc/tex/gTrigT.tex",
"max_line_length": 262,
"max_stars_count": 53,
"max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "SJTUGuofei/pilco-matlab",
"max_stars_repo_path": "doc/tex/gTrigT.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z",
"num_tokens": 1839,
"size": 4750
} |
% !TEX root = ../thesis.tex
\chapter{Preliminaries}
\label{chap:preliminaries}
There are several topological spaces that play a key role
when describing properties of the Hopf map $h : S^3 \to S^2\!$.
These include of course the domain $S^3$ and codomain $S^2\!$,
which are traditionally defined as subspaces of $\R^4$ and $\R^3$ respectively.
As we will see later,
it is useful to consider $S^3$ and $S^2$ as quotient spaces of $\C^2 \setminus \set{0}$
or subspaces of the quaternion algebra $\H$ instead.
Both $\C^2$ and $\H$ are isomorphic to $\R^4$ as a real vector space,
but their additional structure sheds light on various properties of the Hopf map.
We will therefore briefly examine these spaces before defining the Hopf map.
In later chapters we will explore the differentiable structure of these spaces,
but for now we focus on their topological properties.
Furthermore, group actions are used extensively throughout this chapter,
so we quickly recall some of the terms involved.
\section{Definitions}
\definition
Let $X$ be a set with additional structure,
such as a vector space or a topological space.
The \emph{automorphism group} of $X$,
denoted $\Aut(X)$, is the group of bijections $X \to X$
that preserve its structure.
More formally we can say that $X$ is an object in a concrete category $\mathcal{A}$,
a category equipped with a faithful functor $\mathcal{A} \to \mathbf{Sets}$,
the forgetful functor.
The automorphism group is the group of invertible morphisms $X \to X$
in this category.
\example
For a group $G$, $\Aut(G)$ is the group of group isomorphisms $G \to G$.
For a module $M$ over a ring $R$,
or for a vector space $V$ over a field $F$,
the automorphism group consists of $R$-linear bijections $M \to M$
and $F$-linear bijections $V \to V$ respectively.
For a topological space $X$,
$\Aut(X)$ is the group of homeomorphisms $X \to X$.
For sets, the automorphism group is simply the group of bijections
from the set to itself.
\definition
Let $G$ be a group and $X$ a set with additional structure.
A \emph{group action} of $G$ on $X$ is a group homomorphism $\phi : G \to \Aut(X)$.
If $X$ is an object in the category $\mathcal{A}$ \,(e.g. the category of vector spaces,
topological spaces, etc.),
we say $G$ acts on $X$ in this category.
Given an element $x \in X$ and $g \in G$,
we will often write $g \cdot x$ for the element $\phi(g)(x)$.
In the definition above,
the group $G$ acts \emph{from the left} on $X$.
For $g, h \in G$ and $x \in X$,
we have $(gh) \cdot x = g \cdot (h \cdot x)$.
Sometimes we encounter a natural \emph{antihomomorphism} $G \to \Aut(X)$.
In this case,
we say that $G$ acts \emph{from the right} on $X$ in the category $\mathcal{A}$,
and to make the notation more natural we write $x \cdot g$ instead of $g \cdot x$,
so that $x \cdot (gh) = (x \cdot g) \cdot h$.
\definition
Let $G$ and $X$ be as before, and $x \in X$.
The \emph{orbit} of $x$ is the set
\[ Gx = \set{ g \cdot x \mid g \in G } \]
Having the same orbit defines an equivalence relation on $X$,
and the quotient with respect to this relation is called the \emph{orbit space},
written $X/G$.
\definition
Let $G$ and $X$ be as before, and $x \in X$.
The \emph{stabiliser} of $x$ is the subgroup
\[ G_x = \set{ g \in G \mid g \cdot x = x } \]
\proposition[prop:open-quotient-map]
Let $X$ be a topological space, $G$ a group that acts on $X$.
When $X/G$ is endowed with the quotient topology,
the quotient map $q : X \surj X/G$ is an open map.
\proof
Let $U \subseteq X$ be open,
define $V = q(U)$.
$V$ is open if and only if $q^{-1}(V)$ is open
by definition of the quotient topology.
We have
\[ q^{-1}(V)
= \bigcup_{g \in G} g \cdot U \]
Because the automorphisms of the group action are homeomorphisms,
they are open maps,
so $g \cdot U$ is open for all $g \in G$.
Hence, $q^{-1}(V)$ is the union of open sets,
so it is open.
\qed
\definition
A \emph{topological group} is a group $G$ that is also a Hausdorff space,
such that the map $G \times G \to G$, $(g, h) \mapsto gh^{-1}$ is continuous
when $G \times G$ is endowed with the product topology.
This is equivalent to the statement that multiplication and inversion are continuous;
see for example \parencite[ch.~\textsc{iii}, \S~1.1]{bourbaki1971} or \parencite[p.~276]{szekeres2004}.
\definition
Let $G$ be a topological group and $X$ a topological space,
such that $G$ acts on $X$ in the category of sets.
The action is said to be \emph{continuous} if
\[ G \times X \longto X,
\quad (g, x) \longmapsto g \cdot x \]
is a continuous map.
It follows immediately from this definition that
$x \mapsto g \cdot x$ is a homeomorphism for all $g \in G$ when $G$ acts continuously on $X$,
therefore $G$ acts on $X$ in the category of topological spaces;
the group homomorphism $G \to \Aut(X)$ is actually a group homomorphism $G \to \Homeo(X)$.
\theorem{Universal property of the quotient topology}[thm:universal-property-quotient-topology]
Let $X$ and $Y$ be topological spaces and $\sim$ an equivalence relation on $X$.
Denote by $q : X \surj X\modsim$ the quotient map.
Let $f : X \to Y$ be a continuous map such that
for all $x, y \in X$ it holds that $x \sim y$ implies $f(x) = f(y)$.
(Such $f$ is said to be \emph{compatible} with the equivalence relation.)
Then there exists a unique continuous map $g : X\modsim \to Y$ that makes the following diagram commute:
\vspace{-\parskip}
\begin{center}
\begin{tikzcd}[column sep = small] &
X \ar[dl, two heads, swap, "q"]
\ar[dr, "f"] & \\
X\modsim \ar[rr, dashed, swap, "\exists_! g"] & & Y
\end{tikzcd}
\end{center}
\proof
See for example \parencite[ch.~\textsc{i}, \S~3.4]{bourbaki1971}.
\proposition
Let $G_1$ and $G_2$ be groups, and $X$ a set.
Suppose that $G_1 \times G_2$ acts on $X$ in the category of sets.
This implies that the subgroups $G_1$ and $G_2$ act on $X$ individually as well.
Then the following holds:
\begin{enumerate}
\item There is a natural action of $G_1$ on $X / G_2$ in the category of sets.
\item If $X$ is a topological space and $G_1 \times G_2$ acts in the category of topological spaces,
$G_1$ acts on $X / G_2$ in this category.
\item If $G_1$ and $G_2$ are topological groups such that $G_1 \times G_2$ acts continuously on $X$,
then $G_1$ acts continuously on $X / G_2$.
\end{enumerate}
\proof
Let $x \in X$ such that $[x] \in X / G_2$, and $g \in G_1$.
Define $g \cdot [x] = [g \cdot x]$.
We have to show that this action is well-defined.
Suppose that $[x] = [y]$ for some $y \in X$.
Then there exists an $h \in G_2$ such that $x = h \cdot y$.
Because $h$ and $g$ commute in $G_1 \times G_2$,
we have
\[ g \cdot x = g \cdot (h \cdot y) = (g, h) \cdot y = h \cdot (g \cdot y) \]
Thus, we find $[g \cdot x] = [g \cdot y]$.
That this defines a homomorphism $G_1 \to \Aut(X / G_2)$ follows
from the fact that $G_1 \to \Aut(X)$ is a homomorphism.
This proves statement \textbbf{i}.
Suppose that $X$ is a topological space
and $G_1 \times G_2$ acts on $X$ in the category of topological spaces.
Let $g \in G_1$, then $g$ induces a homeomorphism $\phi : X \to X$,
and a bijection $\psi : X / G_2 \to X / G_2$.
Denote by $q : X \surj X / G_2$ the quotient map,
then $q \circ \phi$ is a continuous map $X \to X / G_2$
that satisfies $\psi \circ q = q \circ \phi$ due to statement \textbbf{i}.
This means that $q \circ \phi$ is compatible with the quotient map,
so by the universal property of the quotient topology (theorem \ref{thm:universal-property-quotient-topology}),
there exists a unique continuous map $\psi'$ such that $q \circ \phi = \psi' \circ q$.
Uniqueness implies that $\psi' = \psi$,
therefore $\psi$ is continuous.
The same argument applies to $\psi^{-1}$,
hence $\psi$ is a homeomorphism.
This shows that $G_1$ acts on $X / G_2$ in the category of topological spaces,
which proves statement \textbbf{ii}.
To prove statement \textbbf{iii},
we will use the following diagram:
\begin{center}
\begin{tikzcd}[row sep = large]
&
X \ar[dr, two heads, "q"] & \\
G_1 \times X \ar[ur, "a"]
\ar[r, "f"]
\ar[dr, two heads, swap, "r"] &
G_1 \times {(X / G_2)} \ar[r, "T"] &
X / G_2 \\ &
{(G_1 \times X)} / G_2 \ar[u, dashed, "\exists_! \phi"]
\ar[ur, dashed, swap, "\exists_! \psi"] &
\end{tikzcd}
\end{center}
The map $a : G_1 \times X \to X$,
$(g, x) \mapsto g \cdot x$
is continuous because it is the restriction of $G_1 \times G_2 \times X \to X$
that is continuous by assumption.
Let $q: X \surj X / G_2$ denote the quotient map.
Define $f : G_1 \times X \to G_1 \times {(X / G_2)}$ as $f = (\id, q)$.
This map is continuous because both of its coordinates are.
(See proposition 1 of \parencite[ch.~\textsc{i}, \S~4.1]{bourbaki1971}.)
Furthermore $f$ is open,
for $\id$ and $q$ are open. (See proposition~\ref{prop:open-quotient-map}.)
Let $G_2$ act on $G_1 \times X$ by $h \cdot (g, x) = (g, h \cdot x)$
where $g \in G_1, h \in G_2, x \in X$,
and let $r$ denote the quotient map.
$f$ is compatible with $r$,
so by the universal property of the quotient topology (theorem \ref{thm:universal-property-quotient-topology}),
there exists a unique continuous map
$\phi$ that makes the bottom left triangle of the diagram commute.
The map is given by $[g, x] \mapsto (g, [x])$
and its inverse is given by $(g, [x]) \mapsto [g, x]$.
An open set in $(G_1 \times X) / G_2$
is the image under $r$ of an open set in $G_1 \times X$,
so from commutativity it follows that $\phi$ is an open map.
Hence, $\phi$ is a homeomorphism.
On the top of the diagram, we have the map $q \circ a : G_1 \times X \to X / G_2$,
given by $(g, x) \mapsto [g \cdot x]$.
As the composition of continuous maps it is continuous,
and it is compatible with $r$.
Thus, by the universal property of the quotient topology,
there exists a unique continuous map $\psi$
such that $\psi \circ r = q \circ a$.
Composing with $\phi^{-1}$, we find that the map
\[ T : G_1 \times {(X / G_2)} \longto X / G_2,
\quad (g, [x]) \longmapsto [g \cdot x] \]
is continuous,
which proves claim \textbbf{iii}.
Furthermore, the above diagram commutes.
\qed
\theorem[thm:quotient-map-factors]
Let $G_1$ and $G_2$ be groups and $X$ a topological space
such that $G_1 \times G_2$ acts on $X$.
Then $X / (G_1 \times G_2)$ is canonically homeomorphic to $(X / G_1) / G_2$.
In particular, the quotient map $X \surj X / (G_1 \times G_2)$ factors over $X / G_1$.
\proof
Let
$q_1 : X \surj X / G_1$,
$q_2 : (X / G_1) \surj (X / G_1) / G_2$,
and $q_{12} : X \surj X / (G_1 \times G_2)$ denote the quotient maps.
Then we have the following commutative diagram:
\begin{center}
\begin{tikzcd}[row sep = huge, column sep = large]
X \ar[r, two heads, "q_1"]
\ar[d, two heads, "q_{12}"] &
X / G_1 \ar[d, two heads, "q_2"]
\ar[dl, dashed, bend right = 7, swap, "\phi_1"] \\
X / (G_1 \times G_2) \ar[r, dashed, bend left = 13, "\phi_{12}"] &
(X / G_1) / G_2 \ar[l, dashed, bend left = 13, "\phi_2"]
\end{tikzcd}
\end{center}
The map $q_2 \circ q_1$ is continuous and compatible with $q_{12}$,
so by the universal property of the quotient topology (theorem \ref{thm:universal-property-quotient-topology})
there exists a unique continuous map $\phi_{12}$ that makes the diagram commute.
Because $q_{12}$ is compatible with $q_1$,
there exists a unique continuous map $\phi_1$ such that $q_{12} = \phi_1 \circ q_1$.
It follows that $\phi_1$ is compatible with $q_2$,
so there exists a unique continuous map $\phi_2$ that makes the diagram commute.
Now we see that $\phi_{12}$ and $\phi_2$ are continuous inverses of one another,
hence $X / (G_1 \times G_2)$ and $(X / G_1) / G_2$ are homeomorphic.
\qed
\subsection*{Physical interpretation}
Groups are prevalent in mathematics.
In physics, groups are often encountered in the context of symmetries.
In that case one may think of a group as a set of transformations of a system,
transformations under which a certain property is invariant.
For instance, angular momentum is invariant under rotation of space,
and four-momentum is invariant under Lorentz transformations.
A \emph{group action} generalises this idea.
Elements of the group induce a transformation of a system.
By applying all possible transformations to a point,
we obtain the \emph{orbit} of a point.
For instance, when we let the Lorentz group act on Minkowski space,
the orbit of a timelike vector is all of the light cone (past and future).
Often, a group encodes transformations that we are \emph{not} interested in.
The \emph{orbit space} is what remains if we consider points that differ
by such a transformation to be equal.
For example, the orbit space of the Lorentz group action on Minkowski space
consists of four elements:
the origin, the class of null (or light-like) vectors,
the class of timelike vectors,
and the class of spacelike vectors.
The \emph{stabiliser} of a point is the subgroup of transformations under which the point is invariant.
Topology is the branch of mathematics that studies abstract properties of space.
It gives us the tools to study properties that do not depend
on exact distances, but rather on overall shape.
For instance, one would like to think of a garden hose
as a one dimensional system where water can move back and forth,
regardless of how the hose is bent or twisted.
Topology allows us to ignore the bending and twisting.
Virtually all spaces that occur in physics are topological spaces:
$\R^3\!$, Minkowski space, Hilbert spaces, etc.
Often these spaces have additional structure
such as a metric or inner product,
but many properties can be derived from the topology alone.
An important example of such a property
is \emph{continuity} of a map between topological spaces,
a notion that is prevalent throughout physics.
Many of the groups encountered in this thesis happen to have a natural topology as well.
In this case, an action on another topological space can be \emph{continuous}.
The definition given in this section codifies our intuition:
if two group elements that are near act on a point,
the resulting points should be near as well.
\section{Projective space}
\label{sec:projective-space}
It is possible to identify $\R^4$ and $\C^2$ as four-dimensional real vector spaces,
by identifying the standard basis $(e_1, e_2, e_3, e_4)$ with the basis $((1, 0), (i, 0), (0, 1), (0, i))$.
The space $\C^2 \setminus \set{0}$ will be prevalent in the rest of this section,
so we introduce a shorthand notation. Furthermore, we embed $S^1$ in $\C$.
\definition
$\CZ = \C^2 \setminus \set{0}$.
\definition
The \emph{unit circle} is defined by
\[ S^1 = \set{ \, z \in \C \mid 1 = |\,z\,| \, } \]
This is a group under multiplication.
\definition[def:s3-real]
The \emph{three-sphere} is defined by
\[ S^3 = \set{ \, x \in \R^4 \mid 1 = \nsq{x} } \]
Here $\|{}\cdot{}\|$ denotes the regular Euclidean norm.
By identifying $\R^4$ with $\C^2$ as above,
we can consider $S^3$ to be a subset of $\CZ$.
Consider the multiplicative group $\Rpos$ of positive real numbers.
It acts continuously on $\C^2$ (in the category of real vector spaces) by scalar multiplication,
and this action can be restricted to $\CZ$ (in the category of sets).
This allows us to give an alternative definition of $S^3$ as a quotient:
\definition[def:s3-complex]
$\SC$ is the orbit space of $\CZ$ with respect to the $\Rpos$ action.
Denote by $r : \CZ \surj \SC$ the quotient map.
$\C^2$ is endowed with its regular topology induced by the Euclidean metric,
and $\SC$ is endowed with the quotient topology.
Intuitively, this definition is not that different from definition~\ref{def:s3-real}.
Every point $p$ on the three-sphere defines a ray from the origin through $p$.
This ray, except for the origin, is the orbit of $p$ under the $\Rpos$ action.
In other words, every orbit can be represented by a point at unit distance from the origin.
The quotient map $r$ corresponds to projection onto the sphere.
\proposition[prop:s3-equivalence]
$S^3$ and $\SC$ as defined in definition~\ref{def:s3-real} and \ref{def:s3-complex} are homeomorphic.
\proof
\newcommand*{\RZ}{{\R^4_{\,\circ}}}
Write $\R^4 \setminus \set{0} = \RZ$.
Let $i : S^3 \to \RZ$ be the inclusion,
and let $\phi : \R^4 \to \C^2$ be the vector space isomorphism
induced by the identification of the bases given earlier in this section.
The inclusion $i$ is continuous, and the restriction $\phi|_\RZ = \phi_\circ$ is a homeomorphism.
Therefore, the composition $\psi = r \circ \phi_\circ \circ i : S^3 \to \SC$ is continuous.
Consider the map
\vspace{-0.3\parskip}
\[ \CZ \longto S^3,
\quad x \longmapsto \phi_\circ^{-1} \left( \frac{x}{\|x\|} \right) \vspace{0.3\parskip} \]
which is continuous and compatible with $r$.
By the universal property of the quotient topology (theorem~\ref{thm:universal-property-quotient-topology}),
this map induces a unique continuous map $\psi^{-1} : \SC \to S^3$ that is the inverse of $\psi$.
Thus, $\psi$ is a homeomorphism.
\qed
Consider the multiplicative group $\C^*$ (the complex plane minus the origin).
It acts continuously on $\C^2$ (in the category of complex vector spaces) by scalar multiplication,
and this action can be restricted to $\CZ$.
This allows us to define the projective space:
\definition[def:complex-projective-line]
The \emph{complex projective line} $\PC$ is the orbit space of $\CZ$ with respect to the $\C^*$ action.
Denote by $q : \CZ \surj \PC$ the quotient map.
$\PC$ is endowed with the quotient topology.
Elements of $\PC$ are indicated by \emph{homogeneous coordinates}:
if $(z_1, z_2) \in \C^2$ is nonzero,
then we write $(z_1 : z_2)$ for $q(z_1, z_2)$.
We can embed $\C$ in $\PC$ via $z \mapsto (z : 1)$.
The only point that is not reached in this manner is $(1 : 0)$.
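To make homogeneous coordinates concrete, note for instance that
\[ (2 : 1 + i) \;=\; \bigl( (1-i)(1+i) : (1+i) \bigr) \;=\; (1-i : 1) \]
so both expressions denote the same point of $\PC$;
the two representatives differ by the scalar $1 + i \in \C^*\!$,
and under the embedding above this point corresponds to $z = 2/(1+i) = 1-i$.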
\theorem[thm:s2-homeom-p1c]
There exists a homeomorphism between $S^2$ and $\PC$.
\proof
We will postpone the proof until section \ref{sec:hopf-quaternionic},
and prove this with the aid of quaternions in theorem~\ref{thm:hopf-map-equivalence}.
For an alternative proof, see \parencite[ch.~\textsc{viii}, \S~4.3]{bourbaki1974}.
The general linear group $\GLC$
of invertible complex $2 \times 2$ matrices
acts on $\C^2$ by matrix multiplication.
This induces a group action of $\GLC$ on $\CZ$.
Furthermore, the groups $\Rpos$, $S^1$, and $\C^*$ are isomorphic to subgroups of $\GLC$:
given an element $z \in \C^*\!$,
we can identify it with the matrix
\[ \begin{pmatrix} z & 0 \\ 0 & z \end{pmatrix} \]
in the centre of $\GLC$.
$\C^*$ is isomorphic to the direct product $\Rpos \times S^1$:
this is the decomposition of a complex number into its modulus and argument.
It follows that $S^1$ and $\Rpos$ are central in $\GLC$, because their elements correspond to scalar matrices.
Consequently, $S^1$ and $\Rpos$ are normal in $\GLC$.
\subsection*{Informal summary}
The \emph{projective space} $\PC$ is a construction with several interpretations.
For starters, $\PC$ can be thought of as $\C$ with one extra point,
a point “at infinity”.
This allows us to talk about $z_1/z_2$ even when $z_2$ is zero.
Instead of $z_1/z_2$, we write $(z_1 : z_2)$, called \emph{homogeneous coordinates}.
Secondly, theorem \ref{thm:s2-homeom-p1c} tells us that $\PC$ can be thought of as the unit sphere $S^2\!$.
(In fact, $\PC$ is sometimes called the \emph{Riemann sphere}.)
A \emph{homeomorphism} between two spaces
is a function, both one-to-one and onto,
that preserves all topological properties.
From a topological point of view, $\PC$ and $S^2$ are the same space.
This means that when we formulate the Hopf map later on
— a function from $S^3$ to $S^2$ —
we can express it as a function to $\PC$.
This expression is significantly simpler than the one involving Cartesian coordinates on $S^2\!$.
\section{Quaternions}
\label{sec:quaternions}
\definition
The \emph{quaternion algebra} $\H$ is
the real noncommutative algebra with basis $(1, i, j, k)$.
Multiplication is given by the identities
\[ i^2 = j^2 = k^2 = -1,
\quad i\!j = k, \ jk = i, \ ki = j,
\quad j i = -k, \ kj = -i, \ ik = -j \]
and $1$ commutes with all elements.
In particular, $\H$ is a ring and a four-dimensional real vector space.
Analogously to complex numbers,
this algebra has an involution $\overline{\raisebox{0pt}[0.5em]{${}\cdot{}$}}$
called \emph{conjugation}
that flips the sign of the $i$, $j$, and $k$ components.
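As a small worked example of these rules,
multiplication in $\H$ is indeed not commutative:
\[ (1 + i)\,j = j + ij = j + k,
\qquad j\,(1 + i) = j + ji = j - k \]
while conjugation gives, for instance, $\overline{j + k} = -j - k$.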
\definition
The \emph{trace} is the map $\Tr : \H \to \R,\ q \mapsto q + \overline{q}$.
Because the imaginary parts cancel, the trace of a quaternion is real.
Furthermore, the trace is $\R$-linear.
The reals commute with all quaternions,
so $\Tr(q)$ commutes with $q$ for all $q \in \H$.
Because $\overline{q} = \Tr(q) - q$,
it follows that $q$ and $\overline{q}$ commute.
\definition
The standard inner product on $\H$ is given by
\[ \inp{{}\cdot{}}{{}\cdot{}} : \H \times \H \longto \R,
\quad (p, q) \longmapsto \tfrac{1}{2} \Tr(p\overline{q}) = \tfrac{1}{2}(p\overline{q} + q \overline{p}) \]
Symmetry is clear from the definition,
and bilinearity follows from the linearity of the trace.
For positive definiteness,
remark that for $q = a + bi + cj + dk$,
we have $q \overline{q} = a^2 + b^2 + c^2 + d^2\!$.
Therefore $\inp{q}{q} \geq 0$,
and $\inp{q}{q} = 0 \implies a = b = c = d = 0 \implies q = 0$.
\definition
The \emph{norm} of $q \in \H$ is given by $\nsq{q} = q \overline{q}$.
Because $q$ and $\overline{q}$ commute,
$q\overline{q} = \smallfrac{1}{2}(q \overline{q} + \overline{q} q)$,
so the norm is induced by the inner product.
This norm coincides with the Euclidean norm on $\H$
as real vector space with orthonormal basis $(1, i, j, k)$.
Therefore,
$\H$ with the topology induced by the norm is homeomorphic to $\R^4\!$.
Because conjugation reverses the order of multiplication,
the norm is multiplicative:
for $p, q \in \H$, we have
\[ \nsq{pq}
= (pq)\overline{(pq)}
= p \, q \overline{q} \, \overline{p}
= p \nsq{q} \, \overline{p}
= \nsq{q} p \overline{p}
= \nsq{q} \nsq{p} \]
Because $q \overline{q} = \nsq{q}\!$,
we have $q^{-1} = \overline{q} \, \|q\|^{-2}$ for $\|q\| \neq 0$.
Therefore, $\H$ is a division algebra:
every nonzero element has an inverse.
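Written out in coordinates, for $q = a + bi + cj + dk \neq 0$ we have
\[ q^{-1} = \frac{\overline{q}}{\nsq{q}} = \frac{a - bi - cj - dk}{a^2 + b^2 + c^2 + d^2},
\qquad \text{for example} \quad (1 + i + j + k)^{-1} = \tfrac{1}{4}(1 - i - j - k) \]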
\proposition
$\H^* = \H \setminus \set{0}$ is a topological group.
\proof
Multiplication is continuous,
because for $p, q \in \H^*\,$,
the components of the product $pq$
can be written as a polynomial in the components of $p$ and $q$.
Inversion is continuous,
because the components of $q^{-1}$ are rational functions of the components of $q$,
which do not vanish because $\nsq{q} \neq 0$.
See also \parencite[ch.~\textsc{viii}, \S~1.4]{bourbaki1974}.
\qed
With this machinery,
we can give a quaternionic definition of $S^3$ and $S^2\!$.
Whereas the definitions in section \ref{sec:projective-space} emphasise
how $S^3$ and $S^2$ are quotients with respect to a group action,
the quaternionic definitions emphasise the group structure on the three-sphere itself,
and the action of $S^3$ on $S^2\!$.
Let us revisit the three-sphere as defined in definition~\ref{def:s3-real}.
By identifying $\H$ with $\R^4$ as a normed real vector space
via the basis given earlier in this section,
we can consider $S^3$ to be a subset of $\H$,
the set of quaternions with unit norm:
\[ S^3 = \set{ q \in \H \mid 1 = \nsq{q} } \]
This set is closed under multiplication due to the multiplicativity of the norm,
and it contains $1$.
Therefore, this is a subgroup of $\H^*\!$.
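For example, $\tfrac{1}{\sqrt{2}}(1 + i)$ and $\tfrac{1}{\sqrt{2}}(1 + j)$ both lie on $S^3\!$,
and so does their product:
\[ \bigl(\tfrac{1}{\sqrt{2}}(1 + i)\bigr)\bigl(\tfrac{1}{\sqrt{2}}(1 + j)\bigr)
= \tfrac{1}{2}(1 + i + j + k),
\qquad \nsq{\tfrac{1}{2}(1 + i + j + k)} = \tfrac{1}{4}(1 + 1 + 1 + 1) = 1 \]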
We can embed $S^2$ in $S^3\!$,
but in $\R^4$ there is no preferred way of doing so.
For quaternions, there is one natural choice:
\definition[def:s2-quaternion]
The \emph{two-sphere} $S^2 = \set{ q \in S^3 \mid \Tr(q) = 0 }$,
the set of pure imaginary quaternions with unit norm.
This definition coincides with the conventional definition of $S^2$
when $\R^3$ is identified with the subspace of $\H$ spanned by $i$, $j$, and $k$.
$S^2$ may alternatively be written as $\set{q \in S^3 \mid \inp{1}{q} = 0} = 1^\perp \cap S^3$.
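For instance, $i$, $j$, $k$, and $\tfrac{1}{\sqrt{2}}(i + j)$ all lie on $S^2\!$,
whereas $1 \in S^3$ does not, because $\Tr(1) = 2 \neq 0$.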
The group $\H^*$ acts
on $\H$ in the category of $\R$-algebras via the following homomorphism:
\[ \phi : \H^* \longto \Aut(\H),
\quad
p \longmapsto (q \mapsto p q p^{-1}) \]
Because quaternion multiplication is continuous,
this is a continuous action.
By restriction to the subgroup $S^3\!$,
we get a continuous action of $S^3$ on $\H$.
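For example, conjugation by $i$ fixes $1$ and $i$,
and rotates the plane spanned by $j$ and $k$ by $\pi$ radians:
\[ i \cdot j = i j i^{-1} = (ij)(-i) = k(-i) = -j,
\qquad i \cdot k = i k i^{-1} = (ik)(-i) = (-j)(-i) = -k \]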
\proposition[prop:invariant-inner-product]
The inner product on $\H$ is invariant under the action of $\H^*\!$.
\proof
Let $p \in \H^*\!$, $q_1, q_2 \in \H$,
then we have
\begin{align*}
2 \, \inp{p \cdot q_1}{p \cdot q_2}
&= p q_1 p^{-1} \overline{p q_2 p^{-1}} + p q_2 p^{-1} \overline{p q_1 p^{-1}} \\
&= p q_1 \nsq{p^{-1}} \overline{q_2} \, \overline{p} + p q_2 \nsq{p^{-1}} \overline{q_1} \, \overline{p} \\
&= \nsq{p^{-1}} p (q_1 \overline{q_2} + q_2 \overline{q_1}) \overline{p} \\
&= \nsq{p^{-1}} \, \nsq{p} \, 2 \, \inp{q_1}{q_2} \\
&= 2 \, \inp{q_1}{q_2}
\tag*{\qed}
\end{align*}
\corollary[cor:s3-action]
Identify $\R^3$ with the subspace of $\H$ spanned by $i$, $j$, and $k$.
Then $\R^3 = 1^\perp$ and $S^2 = 1^\perp \cap S^3$ are invariant under the action of $\H^*\!$,
which means $\H^*$ and its subgroup $S^3$ act continuously on $\R^3$ and $S^2\!$.
$\C$ is a commutative subring of~$\H$.
As real vector spaces with bases $(1, i)$ and $(1, i, j, k)$,
$\C$ can be identified with the subspace of $\H$ spanned by $1$ and $i$.
The stabiliser of $i \in \H$ consists of the nonzero elements that commute with $i$.
These elements are linear combinations of $1$ and $i$,
so we have $\H^*_i = \C^*$ and $S^3_i = S^1\!$.
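For example, $j$ does not commute with $i$, because $ij = k$ while $ji = -k$,
whereas every nonzero $a + bi$ with $a, b \in \R$ does:
\[ (a + bi) \, i = ai + bi^2 = i \, (a + bi) \]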
\proposition[prop:s3-isom-su2c]
$S^3$ is isomorphic to $\SUC$, the group of unitary $2 \times 2$ matrices with determinant $1$.
\proof
Define the unitary matrices
\[ I = \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}
\qquad \sigma_1 = \begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}
\qquad \sigma_2 = \begin{pmatrix}0 & -i \\ i & 0\end{pmatrix}
\qquad \sigma_3 = \begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix} \]
These matrices are sometimes called the \emph{Pauli spin matrices}.
Let $\phi: \H \to \Mat(2 \times 2, \C)$ be the $\R$-linear extension of
\[ 1 \longmapsto I,
\quad i \longmapsto -i \sigma_1,
\quad j \longmapsto -i \sigma_2,
\quad k \longmapsto -i \sigma_3 \]
Let $\psi$ be the restriction of $\phi$ to $S^3\!$.
All matrices in the image of $\psi$ are unitary,
and a little computation shows that for $q \in S^3\!$,
$\det \psi(q) = 1$.
The matrices $I, -i\sigma_1, -i\sigma_2, -i\sigma_3$
satisfy the same multiplication rules as $1, i, j, k$.
That is, $(-i\sigma_1)(-i\sigma_2) = -\sigma_1\sigma_2 = -i\sigma_3$, etc.
Therefore, $\psi$ is a group homomorphism $S^3 \to \SUC$.
This homomorphism is surjective (see \parencite[p.~173]{szekeres2004}),
and $1$ is the only element in its kernel.
Therefore, $\psi$ is an isomorphism.
\qed
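The computation behind $\det \psi(q) = 1$ can be written out explicitly.
For $q = a + bi + cj + dk$ we have
\[ \phi(q) = \begin{pmatrix} a - di & -(c + bi) \\ c - bi & a + di \end{pmatrix},
\qquad \det \phi(q) = (a^2 + d^2) + (b^2 + c^2) = \nsq{q} \]
so for $q \in S^3$ the matrix $\psi(q)$ has determinant $1$,
and it is of the form
$\left(\begin{smallmatrix} \alpha & \beta \\ -\overline{\beta} & \overline{\alpha} \end{smallmatrix}\right)$
with $|\alpha|^2 + |\beta|^2 = 1$, which is unitary.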
\theorem[thm:s3-surj-so3r]
The map
\[ \phi: S^3 \longto \SOR,
\quad q \longmapsto (x \mapsto q \cdot x) \]
is a surjective group homomorphism with kernel $\set{\pm 1}$.
Here $x \in \R^3 \cong \Span(i, j, k)$.
\proof
The map $x \mapsto q \cdot x$ is linear,
and orthogonality follows from
the fact that the inner product is invariant under the action,
as shown in proposition~\ref{prop:invariant-inner-product}.
To show that $x \mapsto q \cdot x$ is not a reflection,
note that $\det : \textup{O}_3(\R) \to \R$ is a continuous map
(see for example \parencite[p.~281]{hatcher2002}).
When orthogonal maps are written as matrices,
the entries of $\phi(q)$ are polynomials in the components of $q$,
so $\phi$ is continuous.
By composition we get a continuous map $S^3 \to \set{\pm 1}$.
Because $S^3$ is connected, this map must be constant.
% (See for instance proposition~3.4.11 of \parencite[p.~92]{runde2005}.)
The determinant of $\id$ is $1$,
so all $q \in S^3$ induce an orthogonal map with positive determinant.
To show that the kernel of $\phi$ is $\set{\pm 1}$,
suppose that $q \in S^3$ is such that $q \cdot x = q x q^{-1} = x$ for all $x \in \R^3\!$.
Then $q$ commutes with all $x \in \R^3\!$,
so $q$ must be real.
Because $\nsq{q} = 1$, it follows that $q = 1$ or $q = -1$.
To prove surjectivity,
suppose that $\rho \in \SOR$ is an anticlockwise rotation
of $\alpha$ radians about an axis spanned by $u \in \R^3\!$,
where $\nsq{u} = 1$.
Then the quaternion $q = \cos(\smallfrac{1}{2}\alpha) + u \sin(\smallfrac{1}{2}\alpha)$
will map to $\rho$.
To see this,
note that all points on the axis of rotation are fixed points,
for $u$ commutes with $q$.
Furthermore, suppose that $v \in \R^3$ is such that $\inp{u}{v} = 0$.
Set $q_0 = \cos(\smallfrac{1}{2}\alpha)$ and $\vec{q} = u \sin(\smallfrac{1}{2}\alpha)$.
By using identities from \parencite[p.~157]{szekeres2004},
we find
\newcommand*{\vv}{v} % Could switch to v with an arrow, but to me it is just noise.
\newcommand*{\vq}{\vec{q}}
\begin{align*}
q \cdot v &= (q_0 + \vq) \vv (q_0 - \vq)
% \\ &= (q_0 + \vq) (\inp{v}{\vq} + q_0 \vec{v} - \vec{v} \times \vq)
\\ &= (q_0 + \vq) (q_0 \vv - \vv \times \vq)
\\ &= -\inp{\vq}{q_0 \vv - \vv \times \vq} + q_0 (q_0 \vv - \vv \times \vq) + \vq \times {(q_0 \vv - \vv \times \vq)}
\\ &= q_0^2 \vv - q_0 \vv \times \vq + q_0 \vq \times \vv - \vq \times {(\vv \times \vq)}
\\ &= q_0^2 \vv - 2 q_0 \vv \times \vq - \vv \inp{\vq}{\vq} + \vq \inp{\vq}{\vv}
\\ &= (\cos^2(\smallfrac{1}{2}\alpha) - \sin^2(\smallfrac{1}{2}\alpha)) v
- 2 \cos(\smallfrac{1}{2}\alpha)\sin(\smallfrac{1}{2}\alpha) \, {v \times u}
\\ &= \cos(\alpha) v + \sin(\alpha) \, {u \times v}
\end{align*}
Here we used that $\inp{\vec{q}}{v} = 0$ and $\inp{\vec{q}}{\vec{q}} = \sin^2(\smallfrac{1}{2}\alpha)$,
and in the last step the double angle formulas.
This demonstrates that $q$ rotates $v$ anticlockwise by $\alpha$ radians about $u$.
We saw already that $x \mapsto q \cdot x$ is an orthogonal map with determinant $1$.
Therefore, $q$ maps to $\rho$.
\qed
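As an example, take $u = k$ and $\alpha = \smallfrac{1}{2}\pi$,
an anticlockwise quarter turn about the $k$-axis, which should send $i$ to $j$.
The corresponding quaternion is
$q = \cos(\smallfrac{1}{4}\pi) + k \sin(\smallfrac{1}{4}\pi) = \tfrac{1}{\sqrt{2}}(1 + k)$,
and indeed
\[ q \cdot i = q \, i \, q^{-1}
= \tfrac{1}{2} (1 + k) \, i \, (1 - k)
= \tfrac{1}{2} (i + j)(1 - k)
= \tfrac{1}{2} (i + j + j - i)
= j \]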
\corollary[cor:transitive-s3-action]
$S^3$ acts transitively on $S^2\!$,
for every point on $S^2$ can be mapped
into any other point on $S^2$ by a rotation of the sphere.
The proof of theorem~\ref{thm:s3-surj-so3r} gives us a way to explicitly get
a $q \in S^3$ such that $q \cdot i = p$ for any $p \in S^2$:
we rotate $i$ onto $p$ with a rotation of $\R^3\!$.
If $p = -i$, $q = j$ will suffice,
so suppose $p \neq -i$.
Then an axis that we can rotate about is the one spanned by $i + p$,
which bisects the angle between $i$ and $p$,
so we need to rotate by $\pi$ radians.
We find
\begin{equationref}
\label{eqn:transitive-s3-action}
q = \frac{i + p}{\|i + p\|}
\end{equationref}
To verify that this works,
note that for $p \in S^2$ we have
$p \overline{p} = 1$ and $\overline{p} = -p$,
so $p^2 = -1$.
It then follows that
\[ p^2 = -1
% \implies pi + p^2 = -1 + pi
\enskip \implies \enskip p(i + p) = (i + p)i
\enskip \implies \enskip p(i + p)\overline{(i + p)} = (i + p)i\overline{(i + p)} \]
Multiplying by $\|i + p\|^{-2}$ on both sides then yields $p = q i q^{-1}$.
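For instance, to rotate $i$ onto $p = j$,
the formula gives $q = \tfrac{1}{\sqrt{2}}(i + j)$,
with $q^{-1} = \overline{q} = -\tfrac{1}{\sqrt{2}}(i + j)$, and indeed
\[ q \, i \, q^{-1}
= -\tfrac{1}{2} (i + j) \, i \, (i + j)
= -\tfrac{1}{2} (-1 - k)(i + j)
= \tfrac{1}{2} (1 + k)(i + j)
= \tfrac{1}{2} (i + j + j - i)
= j \]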
\subsection*{Physical interpretation}
Just as the complex numbers are an extension of the real numbers,
the quaternions are an extension of the complex numbers.
These extensions come at a cost:
when going from $\R$ to $\C$, you have to give up the ordering.
When going from $\C$ to $\H$, you have to give up commutativity.
Apart from their rich structure that is interesting in its own right,
quaternions have many useful applications.
Considered as a subset of $\H$,
$S^3$ inherits a group structure.
Theorem~\ref{thm:s3-surj-so3r} tells us that this group is in a sense
twice $\SOR$: every rotation of $\R^3$ is represented by two antipodal quaternions.
When traversing a great circle through $1$ in $S^3\!$,
the points $1$ and $-1$ both correspond to the identity in $\SOR$.
This path in $S^3$ corresponds to a $4\pi$ rotation of $\R^3\!$,
and after a $2\pi$ rotation we will have moved from $1 \in S^3$ to $-1 \in S^3\!$.
This property is reminiscent of \emph{spinors},
and indeed proposition~\ref{prop:s3-isom-su2c} links the unit quaternions to the Pauli spin matrices.
\batchmode
%
\providecommand{\GRBcNUMBER}{\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces}%
\providecommand{\GRBcVERSION}{\GRBcMAYOR.\GRBcMINOR.\GRBcTECHNICAL\ignorespaces}%
\providecommand{\GRBcBPATHL}{{\ttfamily/opt/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /linux64}}%
\providecommand{\GRBcRSPATHL}{\mbox{{\ttfamily/opt/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /linux64}}}%
\providecommand{\GRBcRSPATHM}{{\ttfamily/Library/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /mac64}}%
\providecommand{\GRBcRSPATHW}{{\ttfamily{c:/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /win64}}}%
\providecommand{\GRBcBPATHM}{{\ttfamily/Library/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /mac64}}%
\providecommand{\GRBcMPATHW}{{\ttfamily{c:/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /win64}}}%
\providecommand{\GRBcBPATHW}{{\ttfamily{c:{\textbackslash}gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces {\textbackslash}win64}}}%
\providecommand{\GRBcBPATHWW}{{\ttfamily{c:{\textbackslash}gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces {\textbackslash}win32}}}%
\providecommand{\GRBcBPATHWF}{{\ttfamily{c:/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /win64}}}%
\providecommand{\GRBcMPATH}{\GRBplatform{{\ttfamily/opt/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /linux64}}{{\ttfamily/Library/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /mac64}}{{\ttfamily{c:/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /win64}}}\ignorespaces}%
\providecommand{\GRBcBPATH}{\GRBplatform{{\ttfamily/opt/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /linux64}}{{\ttfamily/Library/gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /mac64}}{{\ttfamily{c:{\textbackslash}gurobi\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces {\textbackslash}win64}}}\ignorespaces}%
\providecommand{\GRBcCSPATHL}{{\ttfamily/opt/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /linux64}}%
\providecommand{\GRBcCSPATHM}{{\ttfamily/Library/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /mac64}}%
\providecommand{\GRBcCSPATHW}{{\ttfamily{c:/gurobi\_server\GRBcMAYOR\GRBcMINOR\GRBcTECHNICAL\ignorespaces /win64}}}%
\providecommand{\GRBcDPATHL}{{\ttfamily/examples/data/}}%
\providecommand{\GRBcDPATHM}{{\ttfamily/examples/data/}}%
\providecommand{\GRBcDPATHW}{{\ttfamily{\textbackslash}examples{\textbackslash}data{\textbackslash}}}%
\providecommand{\GRBcDPATH}{\GRBplatform{{\ttfamily/examples/data/}}{{\ttfamily/examples/data/}}{{\ttfamily{\textbackslash}examples{\textbackslash}data{\textbackslash}}}\ignorespaces}%
\providecommand{\GRBcOS}{\GRBplatform{Linux}{Mac}{64-bit Windows}\ignorespaces}%
\providecommand{\GRBcPlatform}{\GRBplatform{linux64}{mac64}{win64}\ignorespaces}%
\providecommand{\GRBcLicenseEmail}{{\htmladdnormallink{key@gurobi.com}{mailto:key@gurobi.com}}}%
\providecommand{\tagformat}[1]{%
Tag for #1s.
If you will be retrieving the solution to your
model in JSON format, you should define a tag for every #1 that you plan
to retrieve solution information for. Each #1 tag must be unique,
and only tagged #1s will appear in the
\hyperref{JSON solution string}{}{}{format:JSON}.
Tags must consist of printable US-ASCII characters. Using
extended characters or escaped characters will result in an
error. The maximum supported length for a tag is 10240 characters.}
%
\providecommand{\EnvargPool}{%
The machine pool. Machine pools allow you to create fixed
configurations on the Instant Cloud website (capturing things
like type of machine, geographic region, etc.), and then launch
and share machines from client programs without having to
restate the configuration information each time you launch a
machine. May be \texttt{NULL} (or an empty string), in which case
your job will be launched in the default pool associated with
your cloud license.}
%
\providecommand{\EnvargSecretKey}{%
The secret key for your Gurobi Instant Cloud license. This can
be retrieved from the Gurobi Instant Cloud website. When used in
combination with your \texttt{accessID}, this allows you to launch
Instant Cloud instances and submit jobs to them. Note that you
should keep your secret key private.}
%
\providecommand{\EnvargAccessID}{%
The access ID for your Gurobi Instant Cloud license. This can be
retrieved from the Gurobi Instant Cloud website. When used in
combination with your \texttt{secretKey}, this allows you to launch
Instant Cloud instances and submit jobs to them.}
%
\providecommand{\EnvargTlsInsecure}{%
Indicates whether to use insecure mode in the TLS (Transport
Layer Security). Set this to 0 unless your server administrator
tells you otherwise.}
%
\providecommand{\EnvargGroup}{The name of the Compute Server group.}
%
\providecommand{\EnvargPassword}{%
The password for gaining access to the specified Compute Server
cluster. Pass an empty string if no password is required.}
%
\providecommand{\EnvargComputeServer}{%
A Compute Server. You can refer to the server using its name or
its IP address. If you are using a non-default port, the server
name should be followed by the port number (e.g.,
\texttt{server1:61000})}
%
\providecommand{\EnvargTimeout}{%
Queue timeout (in seconds). If the job doesn't reach the front of
the queue before the specified timeout, the call will exit with a
\texttt{JOB\_REJECTED} error. Use -1 to indicate
that the call should never timeout.}
%
\providecommand{\EnvargRouter}{%
The router for a Compute Server cluster. A router can be used to
improve the robustness of a Compute Server deployment. You
should refer to the router using either its name or its IP
address. If no router is used (which is the typical case), pass
an empty string.}
%
\providecommand{\EnvargPriorityCloud}{%
The priority of the job. Priorities must be between -100 and
100, with a default value of 0 (by convention). Higher priority
jobs are chosen from the server job queue before lower priority
jobs.}
%
\providecommand{\EnvargPriorityCS}{%
The priority of the job. Priorities must be between -100 and
100, with a default value of 0 (by convention). Higher priority
jobs are chosen from the server job queue before lower priority
jobs.\ Depending on the configuration of the
server, a job with priority 100 runs immediately, bypassing the
job queue and ignoring the job limit on the server. You should
exercise caution with priority 100 jobs, since they can severely
overload a server, which can cause jobs to fail, and in extreme
cases can cause the server to crash. This behavior is managed by
the {\ttfamily HARDJOBLIMIT}, and is disabled by default. Refer
to the \htmladdnormallink{Gurobi Remote Services Reference
Manual}{../remoteservices/remoteservices.html} for more
information on starting Compute Server options.}
%
\providecommand{\envreadprm}[1]{%
This #1 will also check the current working directory for a file
named {\ttfamily gurobi.env}, and it will attempt to read
parameter settings from this file if it exists. The file should
be in \hyperref{PRM}{}{}{format:PRM} format (briefly, each line
should contain a parameter name, followed by the desired value
for that parameter).}
%
\providecommand{\envreadlicparam}[1]{%
This #1 will also populate any
parameter ({\ttfamily ComputeServer}, {\ttfamily TokenServer},
{\ttfamily ServerPassword}, etc.) specified in your {\ttfamily
gurobi.lic} file.}
%
\providecommand{\envrecommendeduse}{%
In general, you should aim to create a single Gurobi environment in
your program, even if you plan to work with multiple models. Reusing
one environment is much more efficient than creating and destroying
multiple environments. The one exception is if you are writing a
multi-threaded program, since environments are not thread safe. In
this case, you will need a separate environment for each of your
threads.}
%
\providecommand{\CbStopMultiobjStub}{%
the optimization process of one of the optimization steps in
a multi-objective MIP problem without stopping the hierarchical
optimization process}
%
\providecommand{\CbStopMultiobjDesc}[1]{%
Interrupt the optimization process of one of the optimization steps in
a multi-objective MIP problem without stopping the hierarchical
optimization process.
Only available for multi-objective MIP models and when the {\ttfamily
where} member variable is not equal to {\ttfamily #1} (see the
\hyperref{Callback Codes}{}{}{sec:CallbackCodes} section for more
information).}
%
\providecommand{\CbStopMultiobjUsage}{%
You would typically stop a multi-objective optimization step
by querying the last finished number of multi-objectives steps,
and using that number to stop the current step and move
on to the next hierarchical objective (if any) as shown in the
following example:}
%
\providecommand{\CbStopMultiobjMOS}{%
You should refer to the section on \hyperref{Multiple
Objectives}{}{}{sec:MultipleObjectives} for information on how to
specify multiple objective functions and control the trade-off between
them.}
%
\providecommand{\CbStopMultiobjObjnumArg}{%
\hspace{20 mm}{\textbf{objnum}}: The number of the multi-objective optimization step
to interrupt. For processes running locally, this argument can have the
special value -1, meaning to stop the current step.}
%
\providecommand{\CbArgumentDetail}[2]{%
\hspace{20 mm}{\textbf{#1}}: The \texttt{#1} argument that was passed into the user
callback by the Gurobi optimizer. This argument must be passed
unmodified from the user callback to \texttt{#2()}.}
%
\providecommand{\EnvCreationDescNolic}[1]{%
This #1 will also check the current working directory for a file
named {\ttfamily gurobi.env}, and it will attempt to read
parameter settings from this file if it exists. The file should
be in \hyperref{PRM}{}{}{format:PRM} format (briefly, each line
should contain a parameter name, followed by the desired value
for that parameter).
In general, you should aim to create a single Gurobi environment in
your program, even if you plan to work with multiple models. Reusing
one environment is much more efficient than creating and destroying
multiple environments. The one exception is if you are writing a
multi-threaded program, since environments are not thread safe. In
this case, you will need a separate environment for each of your
threads.}
%
\providecommand{\EnvCreationDescStart}[1]{%
This #1 will also populate any
parameter ({\ttfamily ComputeServer}, {\ttfamily TokenServer},
{\ttfamily ServerPassword}, etc.) specified in your {\ttfamily
gurobi.lic} file.\ This #1 will also check the current working directory for a file
named {\ttfamily gurobi.env}, and it will attempt to read
parameter settings from this file if it exists. The file should
be in \hyperref{PRM}{}{}{format:PRM} format (briefly, each line
should contain a parameter name, followed by the desired value
for that parameter).
After that, it will apply all parameter changes specified by the user
prior to this call. Note that this might overwrite parameters set in
the license file, or in the {\ttfamily gurobi.env} file, if present.
After all these changes are performed, the code will actually
activate the environment, and make it ready to work with models.
In general, you should aim to create a single Gurobi environment in
your program, even if you plan to work with multiple models. Reusing
one environment is much more efficient than creating and destroying
multiple environments. The one exception is if you are writing a
multi-threaded program, since environments are not thread safe. In
this case, you will need a separate environment for each of your
threads.}
%
\providecommand{\EnvCreationDesc}[1]{%
This #1 will also populate any
parameter ({\ttfamily ComputeServer}, {\ttfamily TokenServer},
{\ttfamily ServerPassword}, etc.) specified in your {\ttfamily
gurobi.lic} file.\ This #1 will also check the current working directory for a file
named {\ttfamily gurobi.env}, and it will attempt to read
parameter settings from this file if it exists. The file should
be in \hyperref{PRM}{}{}{format:PRM} format (briefly, each line
should contain a parameter name, followed by the desired value
for that parameter).
In general, you should aim to create a single Gurobi environment in
your program, even if you plan to work with multiple models. Reusing
one environment is much more efficient than creating and destroying
multiple environments. The one exception is if you are writing a
multi-threaded program, since environments are not thread safe. In
this case, you will need a separate environment for each of your
threads.}
%
\providecommand{\EmptyEnvDesc}[1]{%
You should use #1 if you want to set parameters before actually
starting the environment. This can be useful if you want to
connect to a Compute Server, a Token Server, the Gurobi
Instant Cloud or a Cluster Manager. See the \hyperref{Empty
Environment}{}{}{sec:EmptyEnvironment}~Section for more details.}
%
\providecommand{\GRBBatchOptimizationDesc}[5]{%
Gurobi Compute Server enables programs to offload
optimization computations onto dedicated servers.
The Gurobi Cluster Manager adds a number of additional
capabilities on top of this. One important one,
\emph{batch optimization}, allows you to build an
optimization model with your client program,
submit it to a Compute Server cluster (through the Cluster
Manager), and later check on the status of the model
and retrieve its solution. You can use a
\hyperref{Batch object}{}{}{#4} to make it easier to
work with batches.
For details on batches, please refer to the
\hyperref{Batch Optimization}{}{}{sec:BatchOptimization}
section.}
%
\providecommand{\GrbMaxintNote}{%
One important note about integer-valued parameters: while the maximum
value that can be stored in a signed integer is $2^{31}-1$, we use a
{\ttfamily MAXINT} value of 2,000,000,000. Attempting to set an integer
parameter to a value larger than this maximum will produce an error.}
%
\providecommand{\GRBwarmstartwarning}{%
Note that if you provide a valid starting extreme point, either through
{\ttfamily \hyperref{PStart}{}{}{attr:PStart},
\hyperref{DStart}{}{}{attr:DStart}}, or through {\ttfamily
\hyperref{VBasis}{}{}{attr:VBasis}, \hyperref{CBasis}{}{}{attr:CBasis}},
then LP presolve will be disabled. For models where presolve
greatly reduces the problem size, this might hurt performance.}
%
\providecommand{\Farkasdesc}{%
Together, attributes \hyperref{FarkasDual}{}{}{attr:FarkasDual} and
\hyperref{FarkasProof}{}{}{attr:FarkasProof} provide a certificate of
infeasibility for the given problem. Specifically, they can
be used to form the following inequality from the original
constraints that is trivially infeasible:
\begin{displaymath} \bar{a}x = \lambda^tAx \leq \lambda^tb = -\beta + \sum\limits_{j:\bar{a}_j<0}\bar{a}_jU_j + \sum\limits_{j:\bar{a}_j>0}\bar{a}_jL_j,\end{displaymath}
where $\beta>0$, $L_j$\ is the lower bound of variable $x_j$, $U_j$\ is the upper
bound of variable $x_j$, $\lambda_i \geq 0$\ if the $i$-th constraint
has a $\leq$\ sense, $\lambda_i \leq 0$\ if the $i$-th constraint has a
$\geq$\ sense, $\bar{a}_j \geq 0$\ if $U_j = \infty$, and $\bar{a}_j
\leq 0$\ if $L_j = -\infty$.
This constraint can not be satisfied for any $\beta>0$.
The FarkasProof attribute provides $\beta$, and the FarkasDual attributes
provide $\lambda$\ multipliers for the original constraints.
This attribute is only available when parameter
\hyperref{InfUnbdInfo}{}{}{parameter:InfUnbdInfo} is set to 1.
}
%
\providecommand{\pythonname}{%
Note that \texttt{name} will be stored as an ASCII string. Thus, a name
like {\ttfamily 'A${\rightarrow}$B'} will produce an error, because
'${\rightarrow}$' can not be represented as an ASCII character. Note
also that names that contain spaces are strongly discouraged,
because they can't be written to LP format files.}
%
\providecommand{\pythonnames}{%
The given name will be subscripted by the index of the generator
expression, so if the index is an integer, \texttt{c} would become
\texttt{c[0]}, \texttt{c[1]}, etc.
Note that the generated names will be stored as ASCII strings, so you
should avoid using names that contain non-ASCII characters. In
addition, names that contain spaces are strongly discouraged, because
they can't be written to LP format files.}
%
\providecommand{\getmultiobjenvdesc}{%
Create/retrieve a multi-objective
environment for the objective with the given index. This
environment enables fine-grained control over the multi-objective
optimization process. Specifically, by changing parameters on this
environment, you modify the behavior of the optimization that
occurs during the corresponding pass of the multi-objective
optimization.
Each multi-objective environment starts with a copy of the current
model environment.
Please refer to the discussion of
\hyperref{Multiple Objectives}{}{}{sec:MultipleObjectives} for
information on how to specify multiple objective functions and control
the trade-off between them.}
%
\providecommand{\discardmultiobjenvsdesc}{%
Discard all multi-objective environments associated with the model,
thus restoring multi-objective optimization to its default behavior.
Please refer to the discussion of
\hyperref{Multiple Objectives}{}{}{sec:MultipleObjectives} for
information on how to specify multiple objective functions and control
the trade-off between them.}
%
\providecommand{\pythonlazynote}{%
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{Model.update}{}{}{pythonmethod:Model.update}),
optimize the model
(using \hyperref{Model.optimize}{}{}{pythonmethod:Model.optimize}),
or write the model to disk
(using \hyperref{Model.write}{}{}{pythonmethod:Model.write}).}
%
\providecommand{\dotnetlazynote}{%
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel.Update}{}{}{dotnetmethod:GRBModel.Update}),
optimize the model
(using \hyperref{GRBModel.Optimize}{}{}{dotnetmethod:GRBModel.Optimize}),
or write the model to disk
(using \hyperref{GRBModel.Write}{}{}{dotnetmethod:GRBModel.Write}).}
%
\providecommand{\javalazynote}{%
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel.update}{}{}{javamethod:GRBModel.update}),
optimize the model
(using \hyperref{GRBModel.optimize}{}{}{javamethod:GRBModel.optimize}),
or write the model to disk
(using \hyperref{GRBModel.write}{}{}{javamethod:GRBModel.write}).}
%
\providecommand{\cpplazynote}{%
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel::update}{}{}{cppmethod:GRBModel::update}),
optimize the model
(using \hyperref{GRBModel::optimize}{}{}{cppmethod:GRBModel::optimize}),
or write the model to disk
(using \hyperref{GRBModel::write}{}{}{cppmethod:GRBModel::write}).}
%
\providecommand{\clazynote}[2]{
Note that, due to our lazy update approach, the #1 won't actually be #2
until you update the model
(using \hyperref{GRBupdatemodel}{}{}{routine:GRBupdatemodel}),
optimize the model
(using \hyperref{GRBoptimize}{}{}{routine:GRBoptimize}),
or write the model to disk
(using \hyperref{GRBwrite}{}{}{routine:GRBwrite}).}
%
\providecommand{\parameterpointer}{
Please consult the \hyperref{parameter section}{}{}{sec:Parameters}
for a complete list of Gurobi parameters, including descriptions of
their purposes and their minimum, maximum, and default values.}
%
\providecommand{\GCfunction}{
A piecewise-linear approximation of the function is
added to the model. The details of the approximation are controlled
using the following four attributes (or using the
parameters with the same names):
\hyperref{FuncPieces}{}{}{attr:FuncPieces},
\hyperref{FuncPieceError}{}{}{attr:FuncPieceError},
\hyperref{FuncPiecesLength}{}{}{attr:FuncPieceLength}, and
\hyperref{FuncPieceRatio}{}{}{attr:FuncPieceRatio}.
For details, consult the
\hyperref{General Constraint}{}{}{subsubsection:GenConstrFunction}
discussion.
}
%
\providecommand{\GCoptionstring}{
A string that can be used to set the attributes that control
the piecewise-linear approximation of this function constraint.
To assign a value to an attribute, follow the attribute
name with an equal sign and the desired value (with no spaces).
Assignments for different attributes should be separated by spaces
(e.g. "FuncPieces=-1 FuncPieceError=0.001").}
%
\providecommand{\GCget}[1]{
Retrieve the data associated with a general constraint of type #1.
Calling this method for a general constraint of a different type
leads to an exception.
You can query the \hyperref{GenConstrType}{}{}{attr:GenConstrType}
attribute to determine the type of the general constraint.}
%
\providecommand{\GCgetC}[1]{
Retrieve the data associated with a general constraint of type #1.
Calling this method for a general constraint of a different type
leads to an error return code.
You can query the \hyperref{GenConstrType}{}{}{attr:GenConstrType}
attribute to determine the type of the general constraint.}
%
\providecommand{\MRlogicalvec}[1]{A logical vector that indicates whether a #1\ appears in the computed IIS.}%
\providecommand{\MRspace}{\ }%
\providecommand{\MRhlink}[3]{\htmladdnormallink{#1}{#2}}%
\providecommand{\MRhrefH}[3]{\hyperref{#1}{}{}{#2}}%
\providecommand{\MRrefremoteservices}{\htmladdnormallink{Gurobi Remote Services Reference Manual}{../remoteservices/remoteservices.html}}%
\providecommand{\MRcode}[1]{{\ttfamily #1}}%
\providecommand{\MRdescription}[1]{#1}%
\providecommand{\MRdetail}{}%
\providecommand{\MRdeqn}[2]{\begin{displaymath}#1\end{displaymath}}%Must be in one line for the Rd code to work%
\providecommand{\MReqn}[2]{$#1$}%Must be in one line....%
\providecommand{\MRendargs}{}%
\providecommand{\MRenddetail}{}%
\providecommand{\MRendheading}{}%
\providecommand{\MRemph}[1]{{\em #1}}%
\providecommand{\MRexample}[1]{{\ttfamily #1}}%
\providecommand{\MRapi}{\MRalternative{MATLAB}{R}}%
\providecommand{\MRapiR}{\MRalternative{MATLAB\textregistered}{R}}%
\providecommand{\MRlang}{\MRalternative{matlab}{r}}%
\providecommand{\MRext}{\MRalternative{m}{R}}%
\providecommand{\MRarray}{\MRalternative{array}{list}}%
\providecommand{\MRarrays}{\MRalternative{arrays}{of lists}}%
\providecommand{\MRarraystructs}{\MRalternative{struct array}{list of lists}}%
\providecommand{\MRassign}{\MRalternative{=}{<-}}%
\providecommand{\MRcendl}{\MRalternative{;}{}}%
\providecommand{\MRfalse}{\MRalternative{false}{False}}%
\providecommand{\MRfield}{\MRalternative{field}{named component}}%
\providecommand{\MRfields}{\MRalternative{fields}{named components}}%
\providecommand{\MRfieldsep}{\MRalternative{.}{\$}}%
\providecommand{\MRfieldsepstr}{\MRalternative{period}{dollar sign}}%
\providecommand{\MRinitstruct}[1]{\MRalternative{}{#1 <- list()\\}}%
\providecommand{\MRnan}{\MRalternative{nan}{NA}}%
\providecommand{\MRnulldefault}{\MRalternative{}{=NULL}}%
\providecommand{\MRpos}[1]{\MRalternative{(#1)}{[[#1]]}}%
\providecommand{\MRrepone}[1]{\MRalternative{ones(#1,1)}{rep(1,#1)}}%
\providecommand{\MRstringtype}{\MRalternative{cell array}{character vector}}%
\providecommand{\MRsection}{\section}%
\providecommand{\MRsubsection}{\subsection}%
\providecommand{\MRsubsubsection}{\subsubsection}%
\providecommand{\MRstruct}{\MRalternative{struct}{list}}%
\providecommand{\MRstructs}{\MRalternative{structs}{lists}}%
\providecommand{\MRtwoarray}[2]{\MRalternative{c(#1,#2)}{[#1\ #2]}}%
\providecommand{\MRvpos}[1]{\MRalternative{(#1)}{[#1]}}%
\providecommand{\MRmodelarg}{%
\hspace{20 mm}{\textbf{model}}: The model {\ttfamily\MRalternative{struct}{list}} must contain a valid Gurobi model.
See the
\hyperref{{\ttfamily{model}}}{}{}{\MRalternative{matlab}{r}:model}\ argument section
for more information.
}
%
\providecommand{\MRparamsarg}{%
\hspace{20 mm}{\textbf{params}}: The params {\ttfamily\MRalternative{struct}{list}}, when provided, contains a list of modified Gurobi parameters. See the
\hyperref{{\ttfamily{params}}}{}{}{\MRalternative{matlab}{r}:params}\ argument section
for more information.
}
%
\providecommand{\MRenvarg}{%
\hspace{20 mm}{\textbf{env}}: The env {\ttfamily\MRalternative{struct}{list}}, when provided, allows you to
use Gurobi Compute Server or Gurobi Instant Cloud. See the
\hyperref{{\ttfamily{env}}}{}{}{\MRalternative{matlab}{r}:env}\ argument section
for more information.
}
%
\providecommand{\MRGCitem}[2]{\item[#2] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}#2}}
%
\providecommand{\MRGCOitem}[2]{\item[#2 (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}#2}}
%
\providecommand{\MRGCresvar}[1]{\item[resvar] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}resvar}. Index of the
variable in the left-hand side of the constraint.}
%
\providecommand{\MRGCindex}[1]{\item[vars] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}vars}, it is a vector of
indices of variables in the right-hand side of the constraint.}
%
\providecommand{\MRGCname}[2]{\item[name (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}name}. When present,
specifies the name of the $i$-th #2\ general constraint.}
%
\providecommand{\MRGCfuncxvar}[1]{\item[xvar] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}xvar}. Index of the
variable in the right-hand side of the constraint.}
%
\providecommand{\MRGCfuncyvar}[1]{\item[yvar] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}yvar}. Index of the
variable in the left-hand side of the constraint.}
%
\providecommand{\MRGCfuncname}[2]{\item[name (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}name}. When present,
specifies the name of the $i$-th #2\ function constraint.}
%
\providecommand{\MRGCfuncoptions}[2]{%
\item[funcpieces (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}funcpieces}. When present, specifies the
\hyperref{FuncPieces}{}{}{attr:FuncPieces} attribute of the $i$-th
#2\ function constraint.
\item[funcpiecelength (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}funcpiecelength}. When present, specifies the
\hyperref{FuncPieceLength}{}{}{attr:FuncPieceLength} attribute of the $i$-th
#2\ function constraint.
\item[funcpieceerror (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}funcpieceerror}. When present, specifies the
\hyperref{FuncPieceError}{}{}{attr:FuncPieceError} attribute of the $i$-th
#2\ function constraint.
\item[funcpieceratio (optional)] Specified via {\ttfamily
model{\MRalternative{.}{\$}}#1{\MRalternative{(i)}{[[i]]}}{\MRalternative{.}{\$}}funcpieceratio}. When present, specifies the
\hyperref{FuncPieceRatio}{}{}{attr:FuncPieceRatio} attribute of the $i$-th
#2\ function constraint.
}
%
\providecommand{\GRBwriteDescription}[1]{%
This #1~is the general entry point for writing optimization data
to a file. It can be used to write optimization models, solutions
vectors, basis vectors, start vectors, or parameter settings. The
type of data written is determined by the file suffix.
File formats are described in the
\hyperref{File Format}{}{}{sec:FileFormats} section.
Note that writing a model to a file will process all pending model
modifications. However, writing other model information (solutions,
bases, etc.) will not.
Note also that when you write a Gurobi parameter file (PRM), both
integer and double parameters not at their default value will be
saved, but no string parameter will be saved into the file.
}
%
\providecommand{\GRBwriteFilenameArgument}[2]{%
\hspace{20 mm}{\textbf{filename}}: The name of the file to be written. The file
type is encoded in the file name suffix. Valid suffixes are
\texttt{.mps}, \texttt{.rew}, \texttt{.lp}, or \texttt{.rlp} for writing
the model itself, \texttt{.ilp} for writing just the IIS associated
with an infeasible model (see \hyperref{#1}{}{}{#2} for further
information), \texttt{.sol} for writing the current solution,
\texttt{.mst} for writing a start vector, \texttt{.hnt} for writing a
hint file, \texttt{.bas} for writing an LP basis, \texttt{.prm}
for writing modified parameter settings, \texttt{.attr} for writing
model attributes, or \texttt{.json} for writing solution
information in JSON format. If your system has compression
utilities installed (e.g., {\ttfamily 7z} or {\ttfamily zip} for Windows,
and {\ttfamily gzip}, {\ttfamily bzip2}, or {\ttfamily unzip} for
Linux or Mac OS), then the files can be compressed, so additional
suffixes of \texttt{.gz}, \texttt{.bz2}, or \texttt{.7z} are
accepted.}
%
\providecommand{\GRBgetJSONSolutionDescription}[2]{%
After a call to \hyperref{#1}{}{}{#2}, this method returns the
resulting solution and related model attributes as a JSON
string. Please refer to the
\hyperref{JSON solution format}{}{}{format:JSON} section for details.}
%
\providecommand{\GRBoptimizebatchDescription}{%
Submit a new batch request to the Cluster Manager.
Returns the {\ttfamily BatchID} (a string), which
uniquely identifies the job in the Cluster Manager and
can be used to query the status of this request (from this
program or from any other). Once the request has completed,
the \texttt{BatchID} can also be used to retrieve the associated
solution. To submit a batch request, you must tag at
least one element of the model by setting one of the
\hyperref{VTag}{}{}{attr:VTag}, \hyperref{CTag}{}{}{attr:CTag} or
\hyperref{QCTag}{}{}{attr:QCTag} attributes. For more details on
batch optimization, please refer to the \hyperref{Batch
Optimization}{}{}{sec:BatchOptimization} section.
Note that this routine will process all pending model modifications.}
%
\providecommand{\GRBgetbatchDescription}[2]{%
Given a {\ttfamily BatchID}, as returned by {#1}, and a Gurobi
environment that can connect to the appropriate
Cluster Manager (i.e., one where parameters
\hyperref{CSManager}{}{}{parameter:CSManager},
\hyperref{UserName}{}{}{parameter:UserName}, and
\hyperref{ServerPassword}{}{}{parameter:ServerPassword}
have been set appropriately), this function returns a {#2}.
With it, you can query the current status
of the associated batch request and, once the batch request has
been processed, you can query its solution. Please refer to the
\hyperref{Batch Optimization}{}{}{sec:BatchOptimization} section for
details and examples.}
%
\providecommand{\GRBabortbatchDescription}[1]{%
This {#1}\ instructs the Cluster Manager to abort the processing of this
batch request, changing its status to
{\ttfamily ABORTED}. Please refer to the \hyperref{Batch
Status Codes}{}{}{sec:BatchStatusCodes} section for further details.}
%
\providecommand{\GRBdiscardbatchDescription}[1]{%
This {#1}\ instructs the Cluster Manager to remove all information
related to the batch request in question, including the stored
solution if available. Further queries for the associated batch
request will fail with error code {\ttfamily
GRB\_ERROR\_DATA\_NOT\_AVAILABLE}. Use this function with
care, as the removed information can not be recovered later on.}
%
\providecommand{\GRBretrybatchDescription}[1]{%
This {#1}\ instructs the Cluster Manager to retry optimization of a failed
or aborted batch request, changing its status to
{\ttfamily SUBMITTED}. Please refer to the \hyperref{Batch
Status Codes}{}{}{sec:BatchStatusCodes} section for further details.}
%
\providecommand{\GRBgetbatchjsonsolutionDescription}[1]{%
This {#1}\ retrieves the solution of a
completed batch request from a Cluster Manager. The solution is returned as a
\hyperref{JSON solution string}{}{}{format:JSON}. For this call to
succeed, the status of the batch request must be {\ttfamily COMPLETED}.
Please refer to the \hyperref{Batch Status
Codes}{}{}{sec:BatchStatusCodes} section for further details.}
%
\providecommand{\GRBwritebatchjsonsolutionDescription}[1]{%
This {#1}\ returns the stored solution of a completed
batch request from a Cluster Manager. The solution is returned in
a gzip-compressed JSON file. The file name you provide must end with a {\ttfamily
.json.gz} extension. The JSON format is described in the
\hyperref{JSON solution format}{}{}{format:JSON} section. Note that
for this call to succeed, the status of the batch request must be
{\ttfamily COMPLETED}. Please refer to the \hyperref{Batch
Status Codes}{}{}{sec:BatchStatusCodes} section for further details.}
%
\providecommand{\GRBupdatebatchDescription}[1]{%
All Batch attribute values are cached locally, so queries
return the value received during the last communication with the
Cluster Manager. This {#1}\ refreshes the values of all attributes with
the values currently available in the Cluster Manager (which involves
network communication).}
%
\providecommand{\GRBbatchattrdetail}{%
Note that all Batch attributes are cached locally, and are only
updated when you create a client-side batch object or when you
explicitly update this cache (by calling the appropriate
{\ttfamily update} function -
\hyperref{GRBupdatebatch}{}{}{routine:GRBupdatebatch} for {\bf C},
\hyperref{update}{}{}{pythonmethod:Batch.update} for {\bf Python}, etc.).}
%
\providecommand{\GRBcallbackWarning}{%
Note that changing parameters from within a callback is not supported;
doing so may lead to undefined behavior.
}
%
\providecommand{\GRBvarindexDescription}[1]{%
This {#1}\ returns the current index, or order, of the variable in the
underlying constraint matrix.
{\large \textbf{Return value:}}
%
\begin{tabular}{l@{:\ }l}
$= -2$\ & removed\\
$= -1$\ & not in model\\
$\geq 0$\ & index of the variable in the model\\
\end{tabular}
Note that the index of a variable may change after
subsequent model modifications.}
%
\providecommand{\GRBconstrindexDescription}[1]{%
This {#1}\ returns the current index, or order, of the constraint in
the underlying constraint matrix.
{\large \textbf{Return value:}}
%
\begin{tabular}{l@{:\ }l}
$= -2$\ & removed\\
$= -1$\ & not in model\\
$\geq 0$\ & index of the constraint in the model\\
\end{tabular}
Note that the index of a constraint may change after
subsequent model modifications.}
%
\providecommand{\GRBscaledimage}[1]{%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/#1.png}}}
\documentclass{article}
\RequirePackage{ifthen}
\usepackage{gurobidoc}
\usepackage{html}
\usepackage{color}
\usepackage{xspace}
\usepackage{graphicx}
\usepackage{verbatim}
\usepackage{fancyvrb}
\definecolor{darkred}{rgb}{.50, .10, .10}
%
\providecommand{\routineonearg}[4]{
\subsubsection{#1}
\label{routine:#1}
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4 ) \\
\end{tabular}
}
%
\providecommand{\routinetwoarg}[6]{
\subsubsection{#1}
\label{routine:#1}
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4, \\
& & #5 & #6 ) \\
\end{tabular}
}
%
\providecommand{\routinethreearg}[8]{
\subsubsection{#1}
\label{routine:#1}
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4, \\
& & #5 & #6, \\
& & #7 & #8 ) \\
\end{tabular}
}
%
\providecommand{\routinefourarg}[9]{
\subsubsection{#1}
\label{routine:#1}
\begin{tabular}{llll}
int & {\large \color{darkred} \textbf{#1}} ( & #2 & #3, \\
& & #4 & #5, \\
& & #6 & #7, \\
& & #8 & #9 ) \\
\end{tabular}
}
%
\providecommand{\routinefourplus}[9]{
\subsubsection{#1}
\label{routine:#1}
\begin{tabular}{llll}
int & {\large \color{darkred} \textbf{#1}} ( & #2 & #3, \\
& & #4 & #5, \\
& & #6 & #7, \\
& & #8 & #9, \\
}
%
\providecommand{\anotherarg}[2]{
& & #1 & #2, \\
}
%
\providecommand{\lastarg}[2]{
& & #1 & #2 ) \\
\end{tabular}
}
%
\providecommand{\argstext}[1]{{\large \textbf{#1}}}
%
\providecommand{\args}{{\large \textbf{Arguments:}}}
%
\providecommand{\argument}[2]{\hspace{20 mm}{\textbf{#1}}: #2}
%
\providecommand{\returnvalue}[1]{
{\large \textbf{Return value:}}
A non-zero return value indicates that a problem occurred while #1.
Refer to the \hyperref{Error Code}{}{}{sec:ErrorCodes} table for a
list of possible return values. Details on the error can be obtained
by calling \hyperref{GRBgeterrormsg}{}{}{routine:GRBgeterrormsg}.}
%
\providecommand{\returnval}[1]{
{\large \textbf{Return value:}}
#1}
%
\providecommand{\clazynote}[2]{
Note that, due to our lazy update approach, the #1 won't actually be #2
until you update the model
(using \hyperref{GRBupdatemodel}{}{}{routine:GRBupdatemodel}),
optimize the model
(using \hyperref{GRBoptimize}{}{}{routine:GRBoptimize}),
or write the model to disk
(using \hyperref{GRBwrite}{}{}{routine:GRBwrite}).}
%
\providecommand{\parameterpointer}{
Please consult the \hyperref{parameter section}{}{}{sec:Parameters}
for a complete list of Gurobi parameters, including descriptions of
their purposes and their minimum, maximum, and default values.}
%
\newenvironment{exampleenv}{{\large\bf Example usage:}}{}%
\providecommand{\exampleMR}[1]{
{\large\bf Example usage:}\\
{\ttfamily
#1}}
%
\newenvironment{cppzeroarg}[2]{
\begin{tabular}{ll}
#2 & {\large \color{darkred} \textbf{#1}} ( ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\newenvironment{cpponearg}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4 ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\signatureoneplus}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4, \\
}
%
\newenvironment{cpplastarg}{
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\cppargs}{{\large \textbf{Arguments:}}}
%
\providecommand{\cpplazynote}{
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel::update}{}{}{cppmethod:GRBModel::update}),
optimize the model
(using \hyperref{GRBModel::optimize}{}{}{cppmethod:GRBModel::optimize}),
or write the model to disk
(using \hyperref{GRBModel::write}{}{}{cppmethod:GRBModel::write}).}
%
\newenvironment{javazeroarg}[2]{
\begin{tabular}{ll}
#2 & {\large \color{darkred} \textbf{#1}} ( ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\newenvironment{javaonearg}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4 ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\signatureoneplus}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4, \\
}
%
\newenvironment{javalastarg}{
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\jargs}{{\large \textbf{Arguments:}}}
%
\providecommand{\javalazynote}{
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel.update}{}{}{javamethod:GRBModel.update}),
optimize the model
(using \hyperref{GRBModel.optimize}{}{}{javamethod:GRBModel.optimize}),
or write the model to disk
(using \hyperref{GRBModel.write}{}{}{javamethod:GRBModel.write}).}
%
\newenvironment{dotnetproperty}[2]{
\begin{tabular}{ll}
#2 & {\large \color{darkred} \textbf{#1}}\\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\newenvironment{dotnet0arg}[2]{
\begin{tabular}{ll}
#2 & {\large \color{darkred} \textbf{#1}} ( ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\newenvironment{dotnet1arg}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4 ) \\
\end{tabular}
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\signatureoneplus}[4]{
\begin{tabular}{llll}
#2 & {\large \color{darkred} \textbf{#1}} ( & #3 & #4, \\
}
%
\newenvironment{dotnetlastarg}{
\begin{list}{}{}
}{\end{list}}
%
\providecommand{\dotnetargs}{{\large \textbf{Arguments:}}}
%
\providecommand{\dotnetlazynote}{
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{GRBModel.Update}{}{}{dotnetmethod:GRBModel.Update}),
optimize the model
(using \hyperref{GRBModel.Optimize}{}{}{dotnetmethod:GRBModel.Optimize}),
or write the model to disk
(using \hyperref{GRBModel.Write}{}{}{dotnetmethod:GRBModel.Write}).}
%
\newenvironment{pythonfunction}[2]{
\csname subsubsection\endcsname{#1()}
\label{pythonmethod:#1}
{\large \color{darkred} \textbf{#1}} ( #2 )
{}{}}{}
%
\newenvironment{pythonmethod}{%
}{}
%
\providecommand{\pymethod}[3]{
\subsubsection{#1.#2()}
\label{pythonmethod:#1.#2}
{\large \color{darkred} \textbf{#2}} ( #3 )
}
%
\providecommand{\pyproperty}[2]{
\subsubsection{#1.#2}
\label{pythonmethod:#1.#2}
{\large \color{darkred} \textbf{#2}}
}
%
\providecommand{\pythonargstext}[1]{{\large \textbf{#1}}}
%
\providecommand{\pythonargs}{{\large \textbf{Arguments:}}}
%
\providecommand{\pythonlazynote}{
Note that, due to our lazy update approach, the change won't actually
take effect until you update the model
(using \hyperref{Model.update}{}{}{pythonmethod:Model.update}),
optimize the model
(using \hyperref{Model.optimize}{}{}{pythonmethod:Model.optimize}),
or write the model to disk
(using \hyperref{Model.write}{}{}{pythonmethod:Model.write}).}
%
\providecommand{\MRrefname}{\MRalternative{matlab}{r}}%
\providecommand{\MRhrefI}[3]{
\hyperref{{\ttfamily{#1}}}{}{}{#2}}
%
\providecommand{\MRfunctiona}[3]{
\subsubsection{#1()}
\label{\MRalternative{matlab}{r}:#1}
{
\MRalternative{
\begin{tabular}{ll}
{\color{darkred} \large \textbf{#1} } & ( #2 ) \\
{\color{darkred} \large \textbf{#1} } & ( #3 ) \\
\end{tabular}
}{
\begin{tabular}{ll}
{\color{darkred} \large \textbf{#1} } & ( #3 ) \\
\end{tabular}
}
}
}
%
\providecommand{\MRfunctionb}[4]{
\subsubsection{#1()}
\label{\MRalternative{matlab}{r}:#1}
{
\MRalternative{%
\begin{tabular}{ll}
{\color{darkred} \large \textbf{#1} } & ( #2 ) \\
{\color{darkred} \large \textbf{#1} } & ( #3 ) \\
{\color{darkred} \large \textbf{#1} } & ( #4 ) \\
\end{tabular}
}{
\begin{tabular}{ll}
{\color{darkred} \large \textbf{#1} } & ( #4 ) \\
\end{tabular}
}
}
}
%
\providecommand{\MRargs}{{\large \textbf{Arguments:}}}
%
\providecommand{\heading}[1]{{\large \textbf{#1}}}%
\providecommand{\subheading}[1]{\textbf{#1}}
%
\providecommand{\vurb}[1]{\texttt{#1}}
%
\providecommand{\Verbatiminput}[1]{\verbatiminput{#1}}
\usepackage[latin1]{inputenc}
\makeatletter
\makeatletter
\count@=\the\catcode`\_ \catcode`\_=8
\newenvironment{tex2html_wrap}{}{}%
\catcode`\<=12\catcode`\_=\count@
\newcommand{\providedcommand}[1]{\expandafter\providecommand\csname #1\endcsname}%
\newcommand{\renewedcommand}[1]{\expandafter\providecommand\csname #1\endcsname{}%
\expandafter\renewcommand\csname #1\endcsname}%
\newcommand{\newedenvironment}[1]{\newenvironment{#1}{}{}\renewenvironment{#1}}%
\let\newedcommand\renewedcommand
\let\renewedenvironment\newedenvironment
\makeatother
\let\mathon=$
\let\mathoff=$
\ifx\AtBeginDocument\undefined \newcommand{\AtBeginDocument}[1]{}\fi
\newbox\sizebox
\setlength{\hoffset}{0pt}\setlength{\voffset}{0pt}
\addtolength{\textheight}{\footskip}\setlength{\footskip}{0pt}
\addtolength{\textheight}{\topmargin}\setlength{\topmargin}{0pt}
\addtolength{\textheight}{\headheight}\setlength{\headheight}{0pt}
\addtolength{\textheight}{\headsep}\setlength{\headsep}{0pt}
\setlength{\textwidth}{349pt}
\newwrite\lthtmlwrite
\makeatletter
\let\realnormalsize=\normalsize
\global\topskip=2sp
\def\preveqno{}\let\real@float=\@float \let\realend@float=\end@float
\def\@float{\let\@savefreelist\@freelist\real@float}
\def\liih@math{\ifmmode$\else\bad@math\fi}
\def\end@float{\realend@float\global\let\@freelist\@savefreelist}
\let\real@dbflt=\@dbflt \let\end@dblfloat=\end@float
\let\@largefloatcheck=\relax
\let\if@boxedmulticols=\iftrue
\def\@dbflt{\let\@savefreelist\@freelist\real@dbflt}
\def\adjustnormalsize{\def\normalsize{\mathsurround=0pt \realnormalsize
\parindent=0pt\abovedisplayskip=0pt\belowdisplayskip=0pt}%
\def\phantompar{\csname par\endcsname}\normalsize}%
\def\lthtmltypeout#1{{\let\protect\string \immediate\write\lthtmlwrite{#1}}}%
\usepackage[tightpage,active]{preview}
\newbox\lthtmlPageBox
\newdimen\lthtmlCropMarkHeight
\newdimen\lthtmlCropMarkDepth
\long\def\lthtmlTightVBox#1#2{%
\setbox\lthtmlPageBox\vbox{\hbox{\catcode`\_=8 #2}}%
\lthtmlCropMarkHeight=\ht\lthtmlPageBox \advance \lthtmlCropMarkHeight 6pt
\lthtmlCropMarkDepth=\dp\lthtmlPageBox
\lthtmltypeout{^^J:#1:lthtmlCropMarkHeight:=\the\lthtmlCropMarkHeight}%
\lthtmltypeout{^^J:#1:lthtmlCropMarkDepth:=\the\lthtmlCropMarkDepth:1ex:=\the \dimexpr 1ex}%
\begin{preview}\copy\lthtmlPageBox\end{preview}}}%
\long\def\lthtmlTightFBox#1#2{%
\adjustnormalsize\setbox\lthtmlPageBox=\vbox\bgroup %
\let\ifinner=\iffalse \let\)\liih@math %
{\catcode`\_=8 #2}%
\@next\next\@currlist{}{\def\next{\voidb@x}}%
\expandafter\box\next\egroup %
\lthtmlCropMarkHeight=\ht\lthtmlPageBox \advance \lthtmlCropMarkHeight 6pt
\lthtmlCropMarkDepth=\dp\lthtmlPageBox
\lthtmltypeout{^^J:#1:lthtmlCropMarkHeight:=\the\lthtmlCropMarkHeight}%
\lthtmltypeout{^^J:#1:lthtmlCropMarkDepth:=\the\lthtmlCropMarkDepth:1ex:=\the \dimexpr 1ex}%
\begin{preview}\copy\lthtmlPageBox\end{preview}}%
\long\def\lthtmlinlinemathA#1#2\lthtmlindisplaymathZ{\lthtmlTightVBox{#1}{#2}}
\def\lthtmlinlineA#1#2\lthtmlinlineZ{\lthtmlTightVBox{#1}{#2}}
\long\def\lthtmldisplayA#1#2\lthtmldisplayZ{\lthtmlTightVBox{#1}{#2}}
\long\def\lthtmldisplayB#1#2\lthtmldisplayZ{\edef\preveqno{(\theequation)}%
\lthtmlTightVBox{#1}{\let\@eqnnum\relax#2}}
\long\def\lthtmlfigureA#1#2\lthtmlfigureZ{\let\@savefreelist\@freelist
\lthtmlTightFBox{#1}{#2}\global\let\@freelist\@savefreelist}
\long\def\lthtmlpictureA#1#2\lthtmlpictureZ{\let\@savefreelist\@freelist
\lthtmlTightVBox{#1}{#2}\global\let\@freelist\@savefreelist}
\def\lthtmlcheckvsize{\ifdim\ht\sizebox<\vsize
\ifdim\wd\sizebox<\hsize\expandafter\hfill\fi \expandafter\vfill
\else\expandafter\vss\fi}%
\providecommand{\selectlanguage}[1]{}%
\makeatletter \tracingstats = 1
\begin{document}
\pagestyle{empty}\thispagestyle{empty}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength hsize=\the\hsize}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength vsize=\the\vsize}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength hoffset=\the\hoffset}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength voffset=\the\voffset}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength topmargin=\the\topmargin}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength topskip=\the\topskip}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength headheight=\the\headheight}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength headsep=\the\headsep}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength parskip=\the\parskip}\lthtmltypeout{}%
\lthtmltypeout{latex2htmlLength oddsidemargin=\the\oddsidemargin}\lthtmltypeout{}%
\makeatletter
\if@twoside\lthtmltypeout{latex2htmlLength evensidemargin=\the\evensidemargin}%
\else\lthtmltypeout{latex2htmlLength evensidemargin=\the\oddsidemargin}\fi%
\lthtmltypeout{}%
\makeatother
\setcounter{page}{1}
\onecolumn
% !!! IMAGES START HERE !!!
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap2629}%
\scalebox{1.0}{\includegraphics[width=60mm]{logo}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap2637}%
\scalebox{1.0}{\includegraphics[width=6in]{graphics/api}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
%
\providecommand{\quadterminfo}[1]{%
A quadratic term is represented using three values: a pair of indices
(stored in \texttt{qrow} and \texttt{qcol}), and a coefficient (stored in
\texttt{qval}). The three argument arrays provide the corresponding values
for each quadratic term. For example, to represent
$2 x_0^2 + x_0 x_1 + x_1^2$, you would have \texttt{#1=3},
\texttt{qrow[] = \{0, 0, 1\}}, \texttt{qcol[] = \{0, 1, 1\}}, and
\texttt{qval[] = \{2.0, 1.0, 1.0\}}.}%
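%
As a small illustration of this layout, here is a sketch in plain Python (not a
Gurobi API call): it uses the array names \texttt{qrow}, \texttt{qcol}, and
\texttt{qval} from the text, plus an illustrative count variable
\texttt{numqnz} standing in for the count argument, to store and evaluate
$2 x_0^2 + x_0 x_1 + x_1^2$.
\begin{verbatim}
# Triplet representation of 2*x0^2 + x0*x1 + x1^2 (illustrative sketch only).
numqnz = 3                # number of quadratic terms
qrow = [0, 0, 1]          # first variable index of each term
qcol = [0, 1, 1]          # second variable index of each term
qval = [2.0, 1.0, 1.0]    # coefficient of each term

x = [1.5, -2.0]           # sample values for x0 and x1
quad = sum(qval[k] * x[qrow[k]] * x[qcol[k]] for k in range(numqnz))
# quad = 2*1.5**2 + 1.5*(-2.0) + (-2.0)**2 = 5.5
\end{verbatim}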
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9414}%
$y = \max(x_1, x_2, \ldots, c)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9416}%
$y = \min(x_1, x_2, \ldots, c)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9418}%
$y = \vert x\vert$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9420}%
$y = x_1 \land x_2 \land x_3 \land \cdots$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9422}%
$y = x_1 \lor x_2 \lor x_3 \lor \cdots$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9424}%
$y = 1 \rightarrow a'x \leq b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9426}%
$y = \mbox{pwl}(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9428}%
$y = p_0 x^d + p_1 x^{d-1} + \cdots + p_{d-1} x + p_{d}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9430}%
$y = e^x$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9432}%
$y = a^x$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9434}%
$y = \log_e(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9436}%
$y = \log_a(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9438}%
$y = x^a$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9440}%
$y = \sin(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9442}%
$y = \cos(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9444}%
$y = \tan(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9446}%
$r = \max\{x_1,\ldots,x_n,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9448}%
$r$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9450}%
$x_1,\ldots,x_n$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9452}%
$c$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9456}%
$n$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9458}%
$x_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9462}%
$r = \min\{x_1,\ldots,x_n,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9478}%
$r = \mbox{abs}\{x\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9482}%
$x$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9488}%
$r = \mbox{and}\{x_1,\ldots,x_n\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9492}%
$1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9498}%
$0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9508}%
$r = \mbox{or}\{x_1,\ldots,x_n\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9528}%
$z = f \rightarrow a^Tx \leq b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9530}%
$z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9532}%
$f \in \{0,1\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9534}%
$a^Tx \leq b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9536}%
$z = 1-f$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9538}%
$=$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9540}%
$\geq$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9546}%
$f$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9556}%
$a_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9558}%
$y =
f(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9562}%
$y$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9586}%
$x^d$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9588}%
$d+1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9590}%
$y = \exp(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9606}%
$a > 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9614}%
$y = \log(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9648}%
$a$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9684}%
$x^TQx + q^Tx \le b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9686}%
$2 x_0^2 + x_0 x_1 +
x_1^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline9690}%
$Q$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23170}%
$Bx=b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23172}%
$B$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23174}%
$b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23178}%
$B^Tx=b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23180}%
$B^T$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23186}%
$Bx=A_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23190}%
$A_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23192}%
$A$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23194}%
$j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23198}%
$i$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23200}%
$B^{-1} A$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline23202}%
$B^{-1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline36722}%
$= -2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline36724}%
$= -1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline36726}%
$\geq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{paragraph}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
%
\providecommand{\GRBPyDoubleIneqWarning}{%
\begin{description}
\item[{\large\rmfamily\textbf{Warning}}] A constraint can only have a single comparison operator.\\
While {\ttfamily 1 <= x + y <= 2} or {\ttfamily 1 <= x[i] + y[i] <= 2 for i in range(3)}
may look like valid constraints, our Python API won't interpret them as intended,
which will almost certainly result in unexpected behavior.
\end{description}
}%
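%
A minimal gurobipy sketch of how to state the intended bounds (assuming a
working gurobipy installation): either split the double inequality into two
one-sided constraints, or use a single range constraint via
\texttt{Model.addRange}.
\begin{verbatim}
import gurobipy as gp

m = gp.Model("double-ineq")
x = m.addVar(name="x")
y = m.addVar(name="y")

# Instead of writing 1 <= x + y <= 2 in a single expression,
# add the two one-sided constraints separately:
m.addConstr(x + y >= 1, name="lower")
m.addConstr(x + y <= 2, name="upper")

# Or express the same bounds as one range constraint:
m.addRange(x + y, 1, 2, name="range")
\end{verbatim}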
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline97601}%
${\rightarrow}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline100222}%
$A x = b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline100224}%
$<$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline100226}%
$>$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline100230}%
$x_{Q_L}' Q x_{Q_R} + c' x_c = \mbox{rhs}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline109176}%
$Ax + b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline111763}%
$2.0*x$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline111765}%
$2.0*var$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline112298}%
$x^TQx + c^Tx + \mathrm{alpha}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline112302}%
$\ell \le x \le u$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline112306}%
$x^TQc\, x + q^Tx \le \mathrm{beta}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline112308}%
$x_i$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline113221}%
$\mathrm{alpha}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116295}%
\begin{displaymath}x[\mathrm{resvar}] = \max\left\{\mathrm{con},x[j]:j\in\mathrm{vars}\right\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116327}%
$-\infty$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116296}%
\begin{displaymath}x[\mathrm{resvar}] = \min\left\{\mathrm{con},x[j]:j\in\mathrm{vars}\right\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116331}%
$\infty$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116297}%
\begin{displaymath}x[\mathrm{resvar}] = \vert x[\mathrm{argvar}]\vert\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116298}%
\begin{displaymath}x[\mathrm{resvar}] = \mathrm{and}\{x[i]:i\in\mathrm{vars}\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116299}%
\begin{displaymath}x[\mathrm{resvar}] = \mathrm{or}\{x[i]:i\in\mathrm{vars}\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116300}%
\begin{displaymath}x[\mathrm{binvar}] = \mathrm{binval}\Rightarrow\sum\left( x\MRalternative{(j)}{[[j]]}\cdot\mathrm{a}\MRalternative{(j)}{[[j]]}\right) \ \mathrm{sense}\ \mathrm{rhs}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116341}%
$x[\mathrm{binvar}]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116343}%
$\sum\left(x[\mathrm{vars}\MRalternative{(j)}{[[j]]}]\cdot\mathrm{val}\MRalternative{(j)}{[[j]]}\right)\ \mathrm{sense}\ \mathrm{rhs}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116347}%
$\leq$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116301}%
\begin{displaymath}x[\mathrm{yvar}] = f(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116302}%
\begin{displaymath}x[\mathrm{yvar}] = p_0 x[\mathrm{xvar}]^d + p_1 x[\mathrm{xvar}]^{d-1} + ... + p_{d-1} x[\mathrm{xvar}] + p_{d}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116303}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{exp}(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116304}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{a}^{x[\mathrm{xvar}]}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116305}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{log}(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116306}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{log}(x[\mathrm{xvar}])/\mathrm{log}(a)\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116307}%
\begin{displaymath}x[\mathrm{yvar}] = x[\mathrm{xvar}]^\mathrm{a}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116308}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{sin}(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116309}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{cos}(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath116310}%
\begin{displaymath}x[\mathrm{yvar}] = \mathrm{tan}(x[\mathrm{xvar}])\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
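The function constraints listed above have direct counterparts in the
object-oriented APIs. The following is a minimal sketch, assuming the
\texttt{gurobipy} helpers \texttt{addGenConstrMax} and
\texttt{addGenConstrExp}; all variable and constraint names are illustrative
only.
\begin{verbatim}
# Minimal sketch: a max constraint and y = exp(x) stated via the
# (assumed) gurobipy general-constraint helpers.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("genconstr-example")
xs = m.addVars(3, lb=-GRB.INFINITY, name="x")
r = m.addVar(lb=-GRB.INFINITY, name="r")

# r = max{x[0], x[1], x[2], 1.7}
m.addGenConstrMax(r, [xs[0], xs[1], xs[2]], constant=1.7, name="maxcon")

# yvar = exp(xvar); how the solver treats the nonlinear function
# (e.g. a piecewise-linear approximation) is governed by its
# function-constraint settings.
xvar = m.addVar(lb=0.0, ub=2.0, name="xvar")
yvar = m.addVar(lb=0.0, ub=GRB.INFINITY, name="yvar")
m.addGenConstrExp(xvar, yvar, name="expcon")

m.optimize()
\end{verbatim}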
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116465}%
$\vert ObjBound-ObjVal\vert/\vert ObjVal\vert$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116467}%
$ObjBound$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline116469}%
$ObjVal$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
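These quantities are exposed as model attributes once a MIP solve has produced
an incumbent. A minimal sketch, assuming the standard \texttt{gurobipy}
attributes \texttt{ObjVal}, \texttt{ObjBound}, \texttt{MIPGap}, and
\texttt{SolCount} (the input file name is hypothetical):
\begin{verbatim}
# Minimal sketch: reading the incumbent value, the best bound, and the
# relative gap they define (assumed gurobipy attributes).
import gurobipy as gp

m = gp.read("model.mps")   # hypothetical input file
m.optimize()

if m.SolCount > 0 and m.ObjVal != 0.0:
    # |ObjBound - ObjVal| / |ObjVal|, the quantity reported as MIPGap
    gap = abs(m.ObjBound - m.ObjVal) / abs(m.ObjVal)
    print(f"gap = {gap:.4%}  (MIPGap attribute = {m.MIPGap:.4%})")
\end{verbatim}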
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125374}%
$3 x + 4 y \leq 5z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125376}%
$x > y$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125378}%
$x-y$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125380}%
$x \leq M b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125386}%
$M$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125394}%
$3 x^2 + 4 y^2 + 5 z \leq 10$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125400}%
$x^Tx \le y^{2}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125406}%
$x^Tx \le y z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125420}%
$f()$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125424}%
$r = \max\{x_1,\ldots,x_k,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125428}%
$x_1,\ldots,x_k$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125432}%
$(r=3, x_1=2, x_2=3, x_3=0)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125434}%
$r = \max\{x_1,x_2,x_3,1.7\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125436}%
$3$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125438}%
$2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125444}%
$1.7$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125446}%
$r = \min\{x_1,\ldots,x_k,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125460}%
$(r=3, x=-3)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125464}%
$r = \mbox{and}\{x_1,\ldots,x_k\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125474}%
$(r=1, x_1=1, x_2=1, x_3=1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125476}%
$r = \mbox{and}\{x_1,x_2,x_3\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125478}%
$r = \mbox{or}\{x_1,\ldots,x_k\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125488}%
$y = f \rightarrow a^Tx \leq b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125496}%
$y \neq f$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125498}%
$y = 1-f$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125506}%
$(x, y)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125510}%
$(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath125366}%
\begin{displaymath}
\begin{array}{rcll}
r & = & x_j + s_j & \mbox{ for all } j = 1,\ldots,k \\
r & = & c + s_{k+1} & \\
z_1 + \ldots + z_{k+1} & = & 1 & \\
SOS1(s_j, z_j) & & & \mbox{ for all } j = 1,\ldots,k+1 \\
s_j & \geq & 0 & \mbox{ for all } j = 1,\ldots,k+1 \\
z_j & \in & \{0,1\} & \mbox{ for all } j = 1,\ldots,k+1
\end{array}
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125514}%
$r \geq \max\{x_1,\ldots,x_k,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125522}%
$s_j \geq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125524}%
$r \leq \max\{x_1,\ldots,x_k,c\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125526}%
$z_j \in \{0,1\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125528}%
$s_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125530}%
$z_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125532}%
$z_j = 1 \rightarrow s_j = 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125540}%
$r = x_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125544}%
$r = c$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath125367}%
\begin{displaymath}
\begin{array}{rcll}
r & \geq & x_j \;\;\mbox{ for all } j = 1,\ldots,k \\
r & \geq & c
\end{array}
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125554}%
$y = p_0 x^n + p_1 x^{n-1} + ... + p_{n-1} x + p_n$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125566}%
$y = \ln(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125574}%
$a \geq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap125912}%
\scalebox{1.0}{\includegraphics[width=2.5in]{graphics/func}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125584}%
$y=x^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125588}%
$1.0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125592}%
$[0,2]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125594}%
$P_{u1}(0,0), P_{u2}(1,1), P_{u3}(2,4)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125596}%
$P_{l1}(0,-0.25), P_{l2}(1, 0.75), P_{l3}(2,3.75)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125598}%
$0.0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125600}%
$P_{l1}, P_{l2}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125602}%
$P_{l3}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125606}%
$P_{u1}, P_{u2}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125608}%
$P_{u3}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125610}%
$0.6$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125614}%
$P_{ui}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125616}%
$0.4$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125618}%
$P_{li}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125620}%
$(0, -0.1), (1,
0.9)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125622}%
$(2, 3.9)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125624}%
$-1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125632}%
$0.25$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125646}%
$y = 2x-1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125650}%
$(x, y) = (1,
1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125652}%
$x=0.9$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125654}%
$x=1.1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125658}%
$1.01$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125660}%
$x=1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125662}%
$3 x
+ 4 y + 2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125664}%
$f(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap125960}%
\scalebox{1.0}{\includegraphics[width=4in]{graphics/pwl}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125668}%
$(1, 1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125670}%
$(3, 2)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125672}%
$(5, 4)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125674}%
$f(1) = 1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125676}%
$f(3) = 2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125678}%
$f(5) = 4$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125682}%
$f(-1)=0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125684}%
$f(6)=5$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath125368}%
\begin{displaymath}
\mathtt{x} = [x_1, \ldots, x_n], \quad \mathtt{y} = [y_1, \ldots, y_n]
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath125369}%
\begin{displaymath}
f(v) =
\left\{
\begin{array}{ll}
y_1 + \frac{y_2-y_1}{x_2-x_1} (v - x_1), & \mathrm{if}\; v \le x_1,\\[7pt]
y_i + \frac{y_{i+1}-y_i}{x_{i+1}-x_i} (v - x_i), & \mathrm{if}\; v \ge x_i\; \mathrm{and}\; v \le x_{i+1}, \\[7pt]
y_n + \frac{y_n - y_{n-1}}{x_n - x_{n-1}} (v - x_n), & \mathrm{if}\; v \ge x_n. \\[7pt]
\end{array}
\right.
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
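The same breakpoints can be handed directly to the solver. A minimal sketch,
assuming \texttt{gurobipy}'s \texttt{Model.setPWLObj}; the variable name and
bounds are illustrative:
\begin{verbatim}
# Minimal sketch: attaching the piecewise-linear objective defined by the
# breakpoints (1,1), (3,2), (5,4) to a variable (assumed gurobipy API).
import gurobipy as gp

m = gp.Model("pwl-example")
v = m.addVar(lb=-1.0, ub=6.0, name="v")

xpts = [1, 3, 5]   # breakpoint x-coordinates (non-decreasing)
ypts = [1, 2, 4]   # corresponding function values

# Outside the first and last breakpoints the function is extended
# linearly, so f(-1) = 0 and f(6) = 5, matching the definition above.
m.setPWLObj(v, xpts, ypts)

m.optimize()
\end{verbatim}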
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125688}%
$x=x_i$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125692}%
$(x_{i-1}, y_{i-1}),(x_i, y_i),
(x_{i+1}, y_{i+1}), (x_{i+2}, y_{i+2})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125694}%
$x_i = x_{i+1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125696}%
$y_i \neq y_{i+1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125698}%
$x_{i-1} \le x < x_{i}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125700}%
$(x_{i-1}, y_{i-1})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125702}%
$(x_i, y_i)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125706}%
$x_{i} \le x < x_{i+2}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125708}%
$(x_{i+1}, y_{i+1})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125710}%
$(x_{i+2}, y_{i+2})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125718}%
$y_i$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125720}%
$y_{i+1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125730}%
$(x_{i-2}, y_{i-2}), (x_{i-1}, y_{i-1}), (x_i, y_i),
(x_{i+1}, y_{i+1}), (x_{i+2}, y_{i+2})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125732}%
$x_{i-1} = x_i = x_{i+1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125734}%
$y_i \neq y_{i-1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125738}%
$y_{i-1}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap125968}%
\scalebox{1.0}{\includegraphics[width=4in]{graphics/pwljump}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125754}%
$(-1, 0), (1, 1), (1, 2), (3, 2), (3, 0), (3,2)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125762}%
$(1,2)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125764}%
$(3,0)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap125976}%
\scalebox{1.0}{\includegraphics[width=3.5in]{graphics/convex0}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125768}%
$f(1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap125982}%
\scalebox{1.0}{\includegraphics[width=3.5in]{graphics/convex1}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125770}%
$3
x^2 + 4 y^2 + 2 x y + 2 x + 3$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap126012}%
\scalebox{1.0}{\includegraphics[width=3.5in]{graphics/convex2}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline125772}%
$10^{-6}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{section}
\stepcounter{section}
\newedenvironment{oneintattr}[2]{
\csname subsubsection\endcsname{#1}
\begin{tabular}{ll}
\textbf{Type:} & int \\
\textbf{Modifiable:} & #2 \\
\end{tabular}
}{
\par
For examples of how to query or modify attributes, refer to
our \hyperref{Attribute Examples}{}{}{sec:AttributeExamples}.}%
\newedenvironment{onedoubleattr}[2]{
\csname subsubsection\endcsname{#1}
\begin{tabular}{ll}
\textbf{Type:} & double \\
\textbf{Modifiable:} & #2 \\
\end{tabular}
}{
\par
For examples of how to query or modify attributes, refer to
our \hyperref{Attribute Examples}{}{}{sec:AttributeExamples}.}%
\newedenvironment{onecharattr}[2]{
\csname subsubsection\endcsname{#1}
\begin{tabular}{ll}
\textbf{Type:} & char \\
\textbf{Modifiable:} & #2 \\
\end{tabular}
}{
\par
For examples of how to query or modify attributes, refer to
our \hyperref{Attribute Examples}{}{}{sec:AttributeExamples}.}%
\newedenvironment{onestringattr}[2]{
\csname subsubsection\endcsname{#1}
\begin{tabular}{ll}
\textbf{Type:} & string \\
\textbf{Modifiable:} & #2 \\
\end{tabular}
}{
\par
For examples of how to query or modify attributes, refer to
our \hyperref{Attribute Examples}{}{}{sec:AttributeExamples}.}%
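For reference, attributes of the four types above are queried and modified in
the same way. A minimal sketch, assuming the standard \texttt{gurobipy}
attribute interface (\texttt{getAttr}, \texttt{setAttr}, and direct property
access); the model and variable are illustrative:
\begin{verbatim}
# Minimal sketch: querying and modifying int/double/char/string
# attributes (assumed gurobipy interface).
import gurobipy as gp

m = gp.Model("attr-example")
x = m.addVar(name="x")
m.update()                        # make pending changes visible

print(m.NumVars)                  # int attribute, query only
print(m.getAttr("NumVars"))       # equivalent getAttr form

x.LB = 1.0                        # double attribute, modifiable
x.setAttr("UB", 5.0)              # equivalent setAttr form
print(x.VType)                    # char attribute ('C', 'B', 'I', ...)
print(x.VarName)                  # string attribute
\end{verbatim}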
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130253}%
$k$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath130245}%
\begin{displaymath} \bar{a}x = \lambda^tAx \leq \lambda^tb = -\beta + \sum\limits_{j:\bar{a}_j<0}\bar{a}_jU_j + \sum\limits_{j:\bar{a}_j>0}\bar{a}_jL_j,\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130263}%
$\beta>0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130265}%
$L_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130269}%
$U_j$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130273}%
$\lambda_i \geq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130279}%
$\lambda_i \leq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130285}%
$\bar{a}_j \geq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130287}%
$U_j = \infty$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130289}%
$\bar{a}_j
\leq 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130291}%
$L_j = -\infty$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130295}%
$\beta$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130297}%
$\lambda$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath130246}%
\begin{displaymath}
\begin{array}{ll}
\mathrm{minimize} & c'x \\
\mathrm{subject\ to} & Ax \ge b \\
& x \ge 0
\end{array}
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath130247}%
\begin{displaymath}
\begin{array}{ll}
\mathrm{maximize} & b'y \\
\mathrm{subject\ to} & A'y \le c \\
& y \ge 0
\end{array}
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130301}%
$\ge$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130303}%
$\ge 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130305}%
$\le$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130307}%
$\le 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
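After an LP of this form is solved to optimality, the dual solution of the pair
above is available through attributes. A minimal sketch, assuming the standard
\texttt{gurobipy} attributes \texttt{Pi} (dual value of a constraint) and
\texttt{RC} (reduced cost of a variable); the input file name is hypothetical:
\begin{verbatim}
# Minimal sketch: reading the dual values and reduced costs that
# correspond to the primal/dual pair above (assumed gurobipy API).
import gurobipy as gp
from gurobipy import GRB

m = gp.read("lp_model.lp")   # hypothetical LP input file
m.optimize()

if m.Status == GRB.OPTIMAL:
    for c in m.getConstrs():
        print(c.ConstrName, c.Pi)   # dual value (shadow price) per row
    for v in m.getVars():
        print(v.VarName, v.RC)      # reduced cost per column
\end{verbatim}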
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130357}%
$-2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130369}%
$a^Tx \le b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130371}%
$a^Tx + s = b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130373}%
$a^Tx + s$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130377}%
$l$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130379}%
$q$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130381}%
$s$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130383}%
$g$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130387}%
$l+q+s$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130389}%
$l+q+s+g$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130391}%
$i-l-q-s$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130401}%
$a^Ty \ge c$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130403}%
$a^Ty - z = c$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130409}%
$a^Ty - z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline130447}%
$|z|$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132596}%
$m$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132598}%
$cols$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132602}%
$set$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132606}%
$X$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132618}%
$Set$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132630}%
$m.NumVars$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline132632}%
$m.numvars$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\newedenvironment{oneparam}[2]{
\csname subsubsection\endcsname{#1}
#2
}{
\par
For examples of how to query or modify parameter values from
our different APIs, refer to our
\hyperref{Parameter Examples}{}{}{sec:ParameterExamples}.}%
%
\providecommand{\intparamvalues}[3]{
\begin{tabular}{@{}ll}
\textbf{Type:} & int \\
\textbf{Default value:} & #1 \\
\textbf{Minimum value:} & #2 \\
\textbf{Maximum value:} & #3 \\
\end{tabular}
\par
}%
%
\providecommand{\doubleparamvalues}[3]{
\begin{tabular}{@{}ll}
\textbf{Type:} & double \\
\textbf{Default value:} & #1 \\
\textbf{Minimum value:} & #2 \\
\textbf{Maximum value:} & #3 \\
\end{tabular}
\par
}%
%
\providecommand{\stringparamvalues}[1]{
\begin{tabular}{@{}ll}
\textbf{Type:} & string \\
\textbf{Default value:} & #1 \\
\end{tabular}
\par
}%
%
\providecommand{\note}[1]{
\textbf{Note:} #1 \\
}%
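Parameters of the int, double, and string kinds described by these tables are
set and read uniformly. A minimal sketch, assuming the standard
\texttt{gurobipy} parameter interface (\texttt{setParam} and the
\texttt{Params} object); the specific values are illustrative:
\begin{verbatim}
# Minimal sketch: setting and reading parameters of the three kinds
# (assumed gurobipy interface).
import gurobipy as gp

m = gp.Model("param-example")

m.setParam("TimeLimit", 60.0)     # double parameter, by name
m.Params.MIPGap = 1e-4            # double parameter, via Params
m.Params.Threads = 4              # int parameter
m.setParam("LogFile", "run.log")  # string parameter

print(m.Params.TimeLimit)         # read back the current value
\end{verbatim}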
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline137945}%
$2^{31}-1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline141800}%
$(10 X01^2 + 2 X01 * X02 + 2 X02 * X01 + 2 X02^2)/2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline141802}%
$10 X01^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline141804}%
$4 X01*X02$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline141806}%
$2 X02^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_indisplay166312}%
$\displaystyle \begin{array}{ll}
\mathrm{minimize} & y - 1.3 x (1-z) + (1-z) \\
\mathrm{subject\ to} & 2 y - 3 x + 1.7 w = 1.7 \\
& -y + x + x z (1-v) \ge 0 \\
& -y \le 0,\\
& v, w, x, y, z \in \{0, 1\}.
\end{array}
$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline142649}%
$AA^T$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143410}%
$1 + x + 2y$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143412}%
$y
+ 2z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143418}%
$-1 \cdot (1 + x + 2y) + 2 \cdot (y + 2z) = -1 - x + 4z$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143420}%
$10^6$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143424}%
$10$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143426}%
$5$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143428}%
$100$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath143394}%
\begin{displaymath}\mathrm{base\_value} = \max\{bestsol, bestbound + |bestbound|*rgap,
bestbound + agap\},\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143432}%
$bestsol$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143434}%
$bestbound$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143436}%
$rgap$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143438}%
$agap$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143442}%
$20$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143444}%
$120$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143454}%
$\{2, 2, 1\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143456}%
$\{0.10, 0.05, 0.00\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143458}%
$\{0,
1, 2\}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline143462}%
$\max\{10 \cdot 0.10, 10 \cdot 0.05, 0, 1\}~=~1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{section}
\stepcounter{section}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145144}%
\begin{displaymath}
\begin{array}{rcl}
x - 6y &=&1\\
0.333x - 2y &= & .333
\end{array}
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145145}%
\begin{displaymath}
y := 0.1665x - 0.1665
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145146}%
\begin{displaymath}\begin{array}{rcl}
x - 6\cdot(0.1665x - 0.1665) &=& 1\\
\Leftrightarrow 0.001x &=& 0.001
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145206}%
$x = 1,\ y = 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145147}%
\begin{displaymath}\begin{array}{rcl}
x - 6y&=&1\\
0.3333333333333333x - 2y&=&0.3333333333333333
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145148}%
\begin{displaymath}
y := 0.1666666666666667x - 0.1666666666666667
\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145149}%
\begin{displaymath}\begin{array}{rcl}
x - 6\cdot(0.1666666666666667x - 0.1666666666666667) &=& 1\\
\Leftrightarrow 2\cdot10^{-16} x + 1 + 2\cdot10^{-16} &\approx& 1
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145208}%
$x = 6y + 1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145210}%
$1\approx 1+2\cdot10^{-16}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145218}%
$\mathtt{\leq}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145220}%
$a \cdot x \leq b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145226}%
$a \cdot y \leq c$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145150}%
\begin{displaymath}\begin{array}{lll}
\min&0\\
s.t.&x \leq&0\\
&x\geq &10^{-10}\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145232}%
$10^{-6}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145234}%
$10^{-5}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145236}%
$10^3$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145238}%
$10^{-9}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145240}%
$x\in[-10^{-6},10^{-6}]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145242}%
$x\in[-10^{10},10^{10}]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145244}%
$10^{-16}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145151}%
\begin{displaymath}(P) \max \{cx : Ax = b, l\leq x\leq
u\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145246}%
$D$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145248}%
$D_{ii} > 0,\,\forall i$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145250}%
$(P)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145252}%
$(P_D)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145152}%
\begin{displaymath}(P_D) \max \{cD x': AD x' = b, D^{-1} l \leq
x' \leq D^{-1} u\}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145256}%
$10^4$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145260}%
$10^{10}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145264}%
$P$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145266}%
$f_1(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145270}%
$f_2(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145272}%
$\bar{f}(x) = M f_1(x) +
f_2(x)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145280}%
$10^{-13}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145153}%
\begin{displaymath}\begin{array}{rcl}
10^{-7}x + 10y &\leq& 10\\
x+10^4z&\leq&10^3\\
x,y,z&\geq&0,
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145282}%
$[10^{-7},10^4]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145286}%
$10^5$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145154}%
\begin{displaymath}\begin{array}{rcl}
10^{-2}x' + 10y &\leq& 10\\
10^2x'+10z&\leq&1\\
x',y,z&\geq&0,
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145290}%
$x=10^5x'$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145292}%
$[10^{-2},10^2]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145294}%
$[10^{-3},10^{6}]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145155}%
\begin{displaymath}\begin{array}{rcl}
x - 10^{6} y &\geq& 0 \\
y&\in&[0,10]
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145156}%
\begin{displaymath}\begin{array}{rcl}
x - 10 y_1 &\geq& 0\\
y_1 - 10 y_2 &=& 0\\
y_2 - 10 y_3 &=& 0\\
y_3 - 10 y_4 &=& 0\\
y_4 - 10 y_5 &=& 0\\
y_5 - 10 y &=& 0\\
y&\in&[0,10]
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145296}%
$y=-10^{-6},\ x=-1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145158}%
\begin{displaymath}\begin{array}{rcl}
x - 10^{3} y' &\geq& 0 \\
y'&\in&[0,10^4]\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145298}%
$10^{-3} y' = y$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145302}%
$-10^{-3}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145306}%
$-10^{-9}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145159}%
\begin{displaymath}\begin{array}{rcl}
x&\leq&10^6y\\
x&\geq&0\\
y&\in& \{0,1\},
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145310}%
$y = 0.0000099999$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145316}%
$9.999$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145160}%
\begin{displaymath}\begin{array}{rcl}
x&\leq&10^3y\\
x &\geq& 0\\
y &\in & \{0,1\}
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145332}%
$x \leq 0.01$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145336}%
$y = 0 \Rightarrow x = 0$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145338}%
$10^9$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145161}%
\begin{displaymath} 6\cdot10^6 / 0.00099 = 6.0606\cdot10^9.\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145340}%
$10^{12}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145342}%
$10^{-8}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145346}%
$0.1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145350}%
$10^{-17}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145360}%
$A^{-1}b$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145364}%
$\varepsilon$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145366}%
$A^{-1}(b+\varepsilon)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145162}%
\begin{displaymath}\eta(b,\varepsilon):=\frac{\|A^{-1}b\|}{\|A^{-1}(b+\varepsilon)\|}/\frac{\|b\|}{\|b+\varepsilon\|}.\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145163}%
\begin{displaymath}\kappa(A):=\max_{b,\varepsilon}\eta(b,\varepsilon).\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145164}%
\begin{displaymath}\kappa(A)=\frac{\lambda_{\max}}{\lambda_{\min}},\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145374}%
$\lambda_{\max}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145376}%
$\lambda_{\min}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145165}%
\begin{displaymath}\kappa(A)=\|A\|\,\|A^{-1}\|.\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145380}%
$\kappa(A)=10^k$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145390}%
$\kappa$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145166}%
\begin{displaymath}\begin{array}{ll}
\max & cx\\
s.t. & Ax \leq b.\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145167}%
\begin{displaymath}\begin{array}{lrrl}
\max & x + y & \vec{c}=&(1,1)\\
s.t. & -x \leq 0 & A_{1\cdot}=&(-1,0)\\
& x \leq 1 & A_{2\cdot}=&(1,0)\\
& -y \leq 0 & A_{3\cdot}=&(0,-1)\\
& y \leq 1 & A_{4\cdot}=&(0,1).\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145394}%
$b^t:=(0,1,0,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145168}%
\begin{displaymath}\max_{x\in\mathbb{R}^2}\{ \vec{c}x:Ax\leq b\}.\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145396}%
$\vec{c}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145398}%
$x^*$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap145964}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw3.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145402}%
$\vec{c}x$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145412}%
$\tilde{b}^t=(\varepsilon,1,0,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145414}%
$\tilde{\vec{c}}=(1+\varepsilon,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145416}%
$\tilde{A_{4\cdot}}=(\varepsilon,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap145970}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw4.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145418}%
$A_{4\cdot}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145420}%
$x \leq 1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145422}%
$100 x \leq 100$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145424}%
$x + \varepsilon y\leq 1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145426}%
$100 x + \varepsilon y \leq 100$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145169}%
\begin{displaymath}\begin{array}{lrrl}
\max & y & \vec{c}=&(0,1)\\
s.t. & -x \leq 0 & A_{1\cdot}=&(-1,0)\\
& x \leq 1 & A_{2\cdot}=&(1,0)\\
& -y \leq 0 & A_{3\cdot}=&(0,-1)\\
& y \leq 1 & A_{4\cdot}=&(0,1).\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap145986}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw5.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145428}%
$x^1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145430}%
$x^3$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145436}%
$x^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145170}%
\begin{displaymath}\begin{array}{lrrl}
\max & \varepsilon x + y & \vec{c}=&(\varepsilon,1)\\
s.t. & -x \leq 0 & A_{1\cdot}=&(-1,0)\\
& x \leq 1 & A_{2\cdot}=&(1,0)\\
& -y \leq 0 & A_{3\cdot}=&(0,-1)\\
& y \leq 1 & A_{4\cdot}=&(0,1).\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap145992}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw6.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145464}%
$(0,0)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145466}%
$(0,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145468}%
$(10^6,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145470}%
$(10^6,0)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145476}%
$1-10^6\varepsilon$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145171}%
\begin{displaymath}\begin{array}{lrrl}
\max & y & \vec{c}=&(0,1)\\
s.t. & -x + \varepsilon y \leq 1 & A_{1\cdot}=&(-1,\varepsilon)\\
& x + \varepsilon y \leq 1 & A_{2\cdot}=&(1,\varepsilon)\\
& -y \leq 0 & A_{3\cdot}=&(0,-1)\\
\end{array}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap146010}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw7.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145486}%
$A_{1\cdot}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145488}%
$A_{2\cdot}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145490}%
$x^*=(0,\frac{1}{\varepsilon})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145504}%
$(1+\delta,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145506}%
$\tilde{x}^*=(-\frac{\delta}{2},\frac{2+\delta}{2\varepsilon})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145508}%
$L_1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145172}%
\begin{displaymath}\|x^*-\tilde{x}^*\|_1 = \frac{|\delta|}{2}+\frac{|\delta|}{\varepsilon}\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145510}%
$\delta$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145524}%
$(-1,\delta)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145526}%
$\tilde{x}^*=(1-\frac{2\varepsilon}{\varepsilon+\delta},\frac{2}{\varepsilon+\delta})$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145528}%
$\delta=\varepsilon/2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145532}%
$\frac{1}{\varepsilon}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145534}%
$\frac{4}{3\varepsilon}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmldisplayA{displaymath145173}%
\begin{displaymath}\lim_{\varepsilon\rightarrow0^+}\|x^*\|=\infty\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145538}%
$y \leq 10^4$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145542}%
$10^{-4}$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmldisplayA{displaymath145174}%
\begin{displaymath}\sin(2\pi\frac{i}{10^6}) x + \cos(2\pi\frac{i}{10^6}) y \leq
1,\,\forall i\in\{1,\ldots,10^6\},\end{displaymath}%
\lthtmldisplayZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145548}%
$\mathbb{R}^2$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap146024}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw1.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145552}%
$[-1,1]$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
{\newpage\clearpage
\lthtmlpictureA{tex2html_wrap146046}%
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw2.png}}%
\lthtmlpictureZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145560}%
$\vec{c}_1$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145568}%
$\vec{c}_1=(0,1)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
{\newpage\clearpage
\lthtmlinlinemathA{tex2html_wrap_inline145572}%
$\vec{c}_2=(1,0)$%
\lthtmlindisplaymathZ
\lthtmlcheckvsize\clearpage}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsection}
\stepcounter{subsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\stepcounter{subsubsection}
\end{document}
| {
"alphanum_fraction": 0.8021874025,
"avg_line_length": 27.9501000858,
"ext": "tex",
"hexsha": "2caa4e0fcd6d48d927473bf86540e3bfab4e017e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-15T11:20:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-15T11:20:30.000Z",
"max_forks_repo_head_hexsha": "75d05ca72dab67b065c8cbab983b1b83a21fede8",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Practical-Formal-Methods/clam-racetrack",
"max_forks_repo_path": "barto_big/eran/gurobi900/linux64/docs/refman/images.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "75d05ca72dab67b065c8cbab983b1b83a21fede8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Practical-Formal-Methods/clam-racetrack",
"max_issues_repo_path": "barto_big/eran/gurobi900/linux64/docs/refman/images.tex",
"max_line_length": 325,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "75d05ca72dab67b065c8cbab983b1b83a21fede8",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Practical-Formal-Methods/clam-racetrack",
"max_stars_repo_path": "ring/eran/gurobi900/linux64/docs/refman/images.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-17T04:03:21.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-05-19T17:35:30.000Z",
"num_tokens": 61521,
"size": 195483
} |
\documentclass[aspectratio=169]{beamer}
\usetheme{metropolis} % Use metropolis theme
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{eso-pic}
\usepackage{graphics}
\usepackage{tikz}
\usepackage[export]{adjustbox}
\usepackage{multicol}
\usepackage{listings}
\usepackage{helvet}
\usepackage{booktabs}
\usepackage{threeparttable}
\usepackage{upquote}
\title{Data Quality Assurance}
\date{\today}
\author{Benjamin Daniels} % Name of author(s) of session here
\institute{Development Impact Evaluation (DIME) \newline The World Bank }
\setbeamercolor{background canvas}{bg=white} % Sets background color
% The below command places the World Bank logo and DIME logo to the right corner
\titlegraphic{%
\begin{picture}(0,0)
\put(330,-180){\makebox(0,0)[rt]{\includegraphics[width=3cm]{img/WB_logo}}}
\end{picture}%
\begin{picture}(0,0)
\put(390,-180){\makebox(0,0)[rt]{\includegraphics[width=1.5cm]{img/i2i}}}
\end{picture}%
}
%%% Section page with picture of Light bulb
\makeatletter
\defbeamertemplate*{section page}{mytheme}[1][]{
\centering
\begin{minipage}{22em}
\raggedright
\usebeamercolor[fg]{section title}
\usebeamerfont{section title}
\par
\ifx\insertsubsectionhead\@empty\else%
\usebeamercolor[fg]{subsection title}%
\usebeamerfont{subsection title}%
\fi
\ifstrempty{#1}{}{%
\includegraphics[width=100mm, height=60mm]{#1}%
}
\insertsectionhead\\[-1ex]
\insertsubsectionhead
\usebeamertemplate*{progress bar in section page}
\end{minipage}
\par
\vspace{\baselineskip}
}
\makeatother
%%% Define a command to include picture in section,
%%% make section, and revert to old template
\newcommand{\sectionpic}[2]{
\setbeamertemplate{section page}[mytheme][#2]
\section{#1}
\setbeamertemplate{section page}[mytheme]
}
%%% The command below allows for the text that contains Stata code
\lstset{ %
backgroundcolor=\color{white},
basicstyle=\tiny,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
commentstyle=\color{green},
escapeinside={\%*}{*)},
extendedchars=true,
frame=single,
numbers=left,
numbersep=5pt,
numberstyle=\tiny\color{gray},
rulecolor=\color{black},
showspaces=false,
showstringspaces=false,
showtabs=false,
stringstyle=\color{mauve},
tabsize=2,
title=\lstname,
morekeywords={not,\},\{,preconditions,effects },
deletekeywords={time}
}
%% The command below places the light bulb logo in the top right corner of the title slide
\begin{document}
{
\usebackgroundtemplate{\includegraphics[height=55mm, right]{img/top_right_corner.pdf}}
\maketitle
}
\begin{frame}{Data Quality Control/Assurance (QC/QA)}
\begin{itemize}[<default overlay specification>]
\item<1> What is quality data?
\newline - Data that is not systematically biased.
\newline - Data that does not misstate representativeness or coverage.
\item<1> Think of everything that might go wrong
\item<1> Setting up a data quality checklist is much better than doing it ad-hoc as results are pouring in.
\newline - They take some time to prepare but are worth it!
\end{itemize}
\end{frame}
\begin{frame}{When can data be low-quality?}
\begin{itemize}[<default overlay specification>]
\item<1> \textbf{Respondents are human}, so they have imperfect recall and motivation, they can become tired or annoyed.
\item<1> \textbf{Enumerators are human}, so they can make mistakes, quickly fill answers to unasked questions when they’re sure they know the answer, or even just fake interviews.
\item<1> \textbf{Research assistants are human}, so they often fail to implement quality-control efforts in a timely manner, they shy away from conflict and don’t confront under-performing staff, and they often operate with chronic shortages of time and prior experience.
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Data quality assurance}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Enumerators have really hard jobs!
\newline - They are often traveling to new places and meeting people who are more or less friendly.
\newline - Weather, pollution, congestion, and other conditions can be challenging.
\newline - Instructions can be unclear.
\newline - Respondents may not match what the listing describes.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=75mm]{img/Quality}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{3 lines of defense against low-quality data}
\begin{multicols}{2}
\begin{enumerate}[<default overlay specification>]
\item<1> \textbf{Survey design}
\newline - Use constraints and relevance in survey coding. Make sure questions are clear in English and survey language. Pilot survey effectively so team is well-practiced.
\item<1> \textbf{Field management}
\newline - Enumerators and managers “buy-in” to quality. Check in constantly and make team feel valued. Unless egregious/fraudulent, errors are yours.
\item<1> \textbf{High-frequency checks}
\newline - Demonstrate that quality. Catch key mistakes early. Provide visible performance metrics.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=65mm, right]{img/Quality2}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{Well-coded surveys}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Relevance fields and constraint fields in survey design can provide instant feedback to enumerators.
\item<1> It also reminds them you care about this and have put in a lot of work to make it easy for them to do right!
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=70mm, right]{img/Survey}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{3 lines of defense against low-quality data}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Data entry starts with a page containing:
\newline - Location/site ID [dropdown]
\newline - Respondent ID [dropdown?]
\newline - Enumerator ID [dropdown]
\item<1> The next page preloads the corresponding names from those IDs based on the Universe and says:
\newline - You indicated that this survey was completed in [City Name], at [Clinic Name], with [Provider Name] by [SP Name]. Is that correct? (Answer: Yes/No)
\item<1> The enumerator checks this against the written names on the assignments, if those exist, and reports back any inconsistency.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=65mm, right]{img/Survey2}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{Key tool in survey design: tracking sheets}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Data quality in the field is best supported by “continuous” accountability.
\item<1> Tracking should be compared against the prepopulated list, if possible.
\item<1> Survey tracking sheets, if used, must be filled in every day; they are “core tasks”, not “in addition” to the survey work.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=65mm, right]{img/Survey3}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{SurveyCTO allows “case management”}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Surveys are pre-assigned to individuals (potentially also to enumerators).
\item<1> Completion can be monitored in real-time against the preset list.
\item<1> This works if you know the list of who should be interviewed (but not if you don’t).
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=65mm, right]{img/Survey4}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{Reminder: Back-Check Surveys}
\begin{itemize}[<default overlay specification>]
\item<1> \textbf{ The answers are compared with the original survey}
\newline - Every team and every surveyor must be back-checked as soon as possible, and regularly.
\item<1> \textbf{ Should cover 1\% of sample, with 20\% being administered in the first 2 weeks of field work.}
\newline - Random sub-sample.
\newline - Include missing respondents to verify that your team is not biasing your sample by not tracking hard to find respondents.
\newline - Observations flagged in other quality tests
\leavevmode \newline - Surveys of enumerators who are suspected of cheating.
\end{itemize}
\end{frame}
\begin{frame}{Back-Check Variables: use [bcstats]}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/Backcheck}
\end{figure}
\end{frame}
\begin{frame}[fragile]{In the office: High-frequency checks}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Done by you:
\newline - Prepare code / instructions for HFCs before the survey goes to field.
\newline - Monitor flags reporting and address in the field.
\item<1> Done by the research assistant:
\newline - Maintain HFC code.
\newline - Daily data download.
\newline - Daily flags reporting.
\newline - This should be a one-click process.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=65mm, right]{img/HFCs}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{SurveyCTO allows tracking dashboards}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> \textbf{Examples}
\newline - \url{https://www.surveycto.com/best-practices/hacking-google-sheets-for-real-time-dashboards/}
\newline - \url{https://www.surveycto.com/case-studies/monitoring-and-visualization/}
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=75mm, right]{img/Dashboard}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{SurveyCTO has new workflow tools for QA}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> Quality issues can be dealt with as they arise:
\newline - Enumerator is consistently fast/slow.
\newline - Back-check disagrees with survey
\newline - Random resurveying
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=70mm, right]{img/Workflow}
\end{figure}
\end{multicols}
\end{frame}
\begin{frame}[fragile]{Daily downloads of data support routine checks}
\begin{multicols}{2}
\begin{itemize}[<default overlay specification>]
\item<1> \textbf{Enumerator checks}
\newline - Check the percentage of “don’t know” and “refusal” responses by enumerator.
\newline - Check the distribution of responses for key questions by enumerator.
\newline - Check the number of surveys per day by enumerator.
\newline - Check the average interview duration by enumerator.
\newline - Check the duration of consent by enumerator.
\item<1> \textbf{ Project checks}
\newline - Overall survey progress relative to planned sample.
\newline - Summaries of key research variables.
\newline - Two-way summaries of survey variables by demographic/geographic characteristics.
\newline - Attrition rates by type and treatment status.
\newline - Maps/GIS: all observations where they’re meant to be?
\end{itemize}
\end{multicols}
\end{frame}
\begin{frame}{SCTO can also automate some of these checks}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/HFCs2}
\end{figure}
\end{frame}
%%%%%%%%%%%%% heading of section 1 %%%%%%%%%%%%%%%%%%%%%%%
\sectionpic{Treatment Monitoring}{img/section_slide}
\begin{frame}{What is treatment monitoring?}
\begin{itemize}[<default overlay specification>]
\item<1> Ensuring that individuals to whom treatment is assigned actually receive treatment (or at least an offer of treatment) and that control households do not receive treatment.
\item<1> Treatment contamination is when some of the control group receives the treatment.
\item<1> It should be reduced to as great an extent as possible.
\end{itemize}
\end{frame}
\begin{frame}{Example: Savings, Grants, Training}
\begin{itemize}[<default overlay specification>]
\item<1> In the context of new rural roads, we study the impact of complementary interventions for increasing productivity:
\item<1> Mechanisms for facilitating access to capital for investment . . .
\begin{figure}
\centering
\includegraphics[width=50mm]{img/Monitoring}
\end{figure}
\item<1> Developing the soft skills necessary to be a successful entrepreneur.
\begin{figure}
\centering
\includegraphics[width=50mm]{img/Monitoring2}
\end{figure}
\end{itemize}
\end{frame}
\begin{frame}{Savings accounts}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/Savings}
\end{figure}
\end{frame}
\begin{frame}{Personal initiative training}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/Training}
\end{figure}
\end{frame}
\begin{frame}{How can we monitor treatment fidelity?}
\begin{itemize}[<default overlay specification>]
\item<1> Physical checks (from the field)
\newline Perform random in-person checks to ensure treatment is applied to the units selected for treatment, as stated in the IE protocol.
\item<1> Administrative data (from the office)
\newline Check attendance and account records to ensure treatment is applied only to treatment HH and not to the control group.
\end{itemize}
\end{frame}
\begin{frame}{Physical checks}
\begin{itemize}[<default overlay specification>]
\item<1> Ensure treatment is offered:
\leavevmode \newline - to the correct households (complicated by poor geospatial data, names as written on official identity cards, e.g., “El Loco”)
\newline - in the following order: HH head, spouse, oldest child above 18
\newline - according to the script to the extent possible so that all training offers are the same
\newline - using marketing materials as planned
\item<1> For the training, could have checked to ensure participants were only those to whom treatment was offered
\end{itemize}
\end{frame}
\begin{frame}{Administrative data}
\begin{itemize}[<default overlay specification>]
\item<1> Training
\leavevmode \newline - Check attendance and pre-/post- tests to ensure that only HH to which treatment was offered are attending training.
\item<1> Savings accounts
\newline - Check accounts opened to ensure the account opener is related in some way to the household (based on BL data).
\newline - Correct functioning of matching mechanism (i.e., individuals achieving pre-determined savings goals receive match).
\newline - After each match period, the commercial bank sent an Excel file with accounts, savings behavior, and amount of match (if any) to be received by account-holders.
\newline - Important to ensure that IE team and implementing partner (in this case, a bank) on the same page and that treatment is not applied incorrectly.
\end{itemize}
\end{frame}
\begin{frame}{Other field challenges}
\begin{itemize}[<default overlay specification>]
\item<1> Re: the training, the implementing NGO found that allowing community leaders to participate in the training would increase take-up among community members.
\newline - But this raises concerns over spillovers (control HH in communities where leaders participate may benefit from the training in ways that control HH in other communities do not).
\newline - Ultimately, have to weigh potential for spillovers against possibility of reduced take-up.
\end{itemize}
\end{frame}
\begin{frame}{Other field challenges}
\begin{itemize}[<default overlay specification>]
\item<1> Two implementing organizations, each of which offered treatment without coordinating with the other.
\item<1> The NGO did not know of the bank’s presence and vice versa.
\item<1> Led to confusion and mistrust on the part of community members directed at both implementers.
\item<1> In the case of the combined training + matched grant treatment, this led to reduced take-up: some HH that had accepted the training elected not to participate after the account offer (banks have a really bad rep in Nicaragua).
\end{itemize}
\end{frame}
\begin{frame}{Bottom line}
\begin{itemize}[<default overlay specification>]
\item<1> Things are going to go wrong!
\item<1> Plan as much as you can.
\item<1> Establish good communication with your team (internally, at the WB, as well as with implementing partners).
\item<1> Keep calm and . . . resolve issues as quickly and as effectively as possible
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Final thougts section
\begin{frame}{Conclusion}
Thank You!
\vspace{20mm}
For more information or further questions please contact:
\newline Benjamin Daniels (\url{bdaniels@worldbank.org})
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The End
\sectionpic{The End}{img/section_slide}
\end{document} | {
"alphanum_fraction": 0.7427464999,
"avg_line_length": 33.3697813121,
"ext": "tex",
"hexsha": "2db01c43a9d459a70e82357e4a401b06451f29ac",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2021-07-19T14:18:42.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-21T01:00:28.000Z",
"max_forks_repo_head_hexsha": "a0c37f6dc1bb04a8bf3f66e3959c0bf223ba5c03",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "samaddarsushmita/DIME-MSIE-Workshop",
"max_forks_repo_path": "Material/Labs/Stata-Track-2/Presentations/Data Quality/Data_Quality.tex",
"max_issues_count": 19,
"max_issues_repo_head_hexsha": "a0c37f6dc1bb04a8bf3f66e3959c0bf223ba5c03",
"max_issues_repo_issues_event_max_datetime": "2019-06-14T16:44:56.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-04-18T20:20:12.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "samaddarsushmita/DIME-MSIE-Workshop",
"max_issues_repo_path": "Material/Labs/Stata-Track-2/Presentations/Data Quality/Data_Quality.tex",
"max_line_length": 273,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "a0c37f6dc1bb04a8bf3f66e3959c0bf223ba5c03",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "worldbank/DIME-MSIE-Workshop",
"max_stars_repo_path": "Material/Labs/Stata-Track-2/Presentations/Data Quality/Data_Quality.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-16T05:17:35.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-14T08:36:15.000Z",
"num_tokens": 4642,
"size": 16785
} |
\documentclass[course, english]{Notes}
\title{Software Security}
\subject{CS}
\author{Pontus Persson}
\email{ponpe412@student.liu.se}
\speaker{Ulf Karg\'{e}n}
\date{04}{11}{2015}
\dateend{15}{01}{2016}
\place{LiTH}
\begin{document}
\lecture{04}{11}{2015}
\section{Introduction}
\subsection{SQL Slammer}
\begin{itemize}
\item One UDP packet to MSSQL (2003) allowed an attacker to run arbitrary code
\item 90\% of vulnerable machines infected within 10 min
\end{itemize}
\subsection{Stux}
\begin{itemize}
\item First cyber warfare weapon
\item 4 Windows zero-days
\item Targeted centrifuges in an Iranian nuclear enrichment facility
\item Reprogrammed centrifuges to spin them out of control and intercepted comms to tell operators everything was ok
\end{itemize}
\subsection{Software Security today}
\begin{itemize}
\item 10-15 years ago: most attacks "for fun"
\item Today: almost exclusively motivated by political or economic gain
\item Attacks almost exclusively target client rather than server software
\item PDF-readers, MS Office, etc. Are viable targets
\end{itemize}
\subsection{Common types of defects}
\begin{itemize}
\item Buffer overflows
\item Race conditions
\item Encoding bugs
\item Double free
\item \ldots
\end{itemize}
\section{Organization}
\begin{itemize}
\item 10 lectures
\item 3 labs
\begin{itemize}
\item Pong
\item Static analysis
\item Web security
\end{itemize}
\end{itemize}
\lecture{05}{11}{2015}
\section{Secure SW developent}
Security should not be patched in later. Security considerations must permeate all phases of the software development cycle.
\\ Insert security activities such as pen-testing, risk analysis and static analysis into all parts of development.
\subsection{Security requirements}
\begin{itemize}
\item Requirements are gathered during the initial phase of the software development life cycle
\item Look at misuse cases
\end{itemize}
\subsubsection{Misuse cases}
\begin{itemize}
\item Identify threats and countermeasures
\item Identify how we do \textbf{not} want our program to be used
\end{itemize}
Examples in slides.\\
Goes well together with risk analysis.
\subsection{Risk analysis}
\subsubsection{CORAS}
\begin{enumerate}
\item Experts and clients decide what system to protect. Draw a lo-fi
	image of the system
\item Define assets, high-level risk analysis and communication flow
\item Prioritize assets, create consequence scale and likelihood values.
Create risk evaluation matrix
\item Create threat diagrams
\item Estimate risks in the diagram (consequence and likelihood)
\item Risk evaluation, estimates are confirmed or adjusted
\item Risk treatment
\end{enumerate}
\subsubsection{Attack trees}
Risk evaluation but from the attacker's POV. Very easy, see more in the slides.
\subsection{Software development Lifecycle (SDL)}
The team must complete 16 mandatory security activities to comply with the MS SDL
process.
\subsubsection{Pre-SDL}
Education
\begin{itemize}
\item Threat modeling
\item Secure coding
\item Privacy
\end{itemize}
\subsubsection{Requirements}
\begin{itemize}
\item Operational environment
\item Quality gates (no compiler warnings when committing code, X number
		of manual CRs) to go to the next phase
\item Bug bars (no known critical bugs left in system) overall
\item What needs to be pentested
\item Privacy rating
\end{itemize}
\subsubsection{Design}
\begin{itemize}
\item Attack surface reduction
\item Threat modeling
\end{itemize}
\subsubsection{Implementation}
\begin{itemize}
\item Publish list of approved tools
\item List to be approved by external security advisor
\item Teams should analyze all functions and APIs that will be used in
		conjunction with the software development and prohibit those that are unsafe
\item All code should be scanned for prohibited functions and APIs
\item Static analysis of code should be performed
\end{itemize}
\subsubsection{Verification}
\begin{itemize}
\item Dynamic analysis (monitor problems with memory corruption, user
privilege issues etc.)
\item Fuzz testing (deliberately introduce malformed or random data to
	an app during dynamic analysis)
\item Update threat model and attack surface analysis, account for any
design or implementation changes to the system.
\end{itemize}
\subsubsection{Release}
\begin{itemize}
\item First point of contact: on-call contacts with decision-making authority
	available within 24 hours.
\item Security service plans for code inherited from other groups
\item Same goes for 3rd party libraries
\end{itemize}
\subsubsection{Final security review}
\begin{itemize}
\item Independent panel goes through everything
\begin{itemize}
\item Passed FSR -- good to go
\item Passed with exceptions -- good to go, but fix problems in the near future
\item Not passed -- must address the SDL requirements that are not
	fulfilled
\end{itemize}
\item If privacy rating P1, independent review
\item Everything needs to be archived for future projects
\end{itemize}
\subsection{Secure design patterns}
Categorized by level of abstraction: architecture, design or implementation.
\begin{description}
\item[Architecture level] Focus on high-level allocation of responsibilities (e.g.\ privilege separation)
\item[Design level] Internal design of a single component (e.g.\ secure factory)
\item[Implementation level] Secure logger etc.
\end{description}
\subsubsection{Privilege separation}
Reduce the amount of code that runs as a privileged user.
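A minimal sketch of the idea on a POSIX system (my own illustration, not from the lecture; the numeric user and group IDs are made up): do the privileged work first, then permanently drop privileges before touching untrusted input.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* privileged phase: e.g. bind a low port, open a log file */

    /* drop group privileges before user privileges */
    if (setgid(1000) != 0 || setuid(1000) != 0) {
        perror("failed to drop privileges");
        return EXIT_FAILURE;
    }

    /* unprivileged phase: parse untrusted input, serve requests */
    return EXIT_SUCCESS;
}
\end{verbatim}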
\subsubsection{Secure factory}
Separate the security-dependent logic involved in creating an object from the
basic functionality of the created object.
\subsubsection{Secure chain of responsibility}
Decouple the logic that determines privileges from the portion of the program
that is requesting the privileges.
\subsubsection{Secure logger}
Prevent attacker from gathering potentially useful information about the system
from system logs or editing logs.
\subsubsection{Clear sensitive information}
Consider whether sensitive information may have been stored in reusable resources
after a user session or application has run.
\lecture{06}{11}{2015}
\section{Vulnerabilities in C/C++}
Problems with programs that compile to machine code.
\subsection{Assembly language primer}
\begin{itemize}
\item All processes see 4 GB of private, contiguous virtual memory (on a 32-bit system)
\item The stack is located high and grows down
\item The main executable (text) and its Data and BSS segments are located in low
	memory
\item The heap is located above Text, Data and BSS and grows upwards.
\begin{itemize}
\item Used for dynamically allocated memory (malloc,
new)
\end{itemize}
\item x86 is a little-endian architecture: the first byte of e.g.\ a 4-byte
	word is the least significant byte (see the sketch below).
\end{itemize}
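A quick way to observe the byte order (my own sketch, not from the lecture):
\begin{verbatim}
#include <stdio.h>

int main(void) {
    unsigned int word = 0x11223344;
    unsigned char *p = (unsigned char *) &word;
    /* on a little-endian x86 machine this prints: 44 33 22 11 */
    for (int i = 0; i < 4; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}
\end{verbatim}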
\lecture{12}{11}{2015}
\subsubsection{More vulnerabilities}
\begin{itemize}
\item Integer overflows: wrap-arounds can cause a length check to pass (see the sketch below)
\item Type conversions: unsigned to signed, int to short, etc.
\end{itemize}
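A hypothetical sketch (not from the lecture) of how an unsigned wrap-around can defeat a length check:
\begin{verbatim}
#include <string.h>

/* If len1 + len2 overflows size_t, the sum wraps around and the
 * check passes even though the data does not fit in dst. */
int append(char *dst, size_t dstsize,
           const char *a, size_t len1,
           const char *b, size_t len2) {
    if (len1 + len2 > dstsize)      /* defect: wrap-around */
        return -1;
    memcpy(dst, a, len1);
    memcpy(dst + len1, b, len2);    /* out-of-bounds write */
    return 0;
}
\end{verbatim}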
\subsection{Non memory corruption vulnerabilities}
\begin{itemize}
\item Race conditions (the window between an access check and the use)
\item Out-of-bounds reads (Heartbleed)
\end{itemize}
\lecture{13}{11}{2015}
\section{Software Engineering Reviews}
Inspections are the best way to find defects in code and other documents, at
least measured by the number of anomalies found.\\
Used for:
\begin{itemize}
\item Find defects
\item Training
\item Communication
\item Hostage taking
\end{itemize}
\subsection{Fagan review}
\subsubsection{Process}
\begin{itemize}
\item Initial
\begin{itemize}
\item Check criteria
\item Plan
\item Overview
\end{itemize}
\item Individual
\begin{itemize}
\item Preparation, or
\item Detection
\end{itemize}
\item Group
\begin{itemize}
\item Detection, or
\item Collection
\item Inspection record
\item Data collection
\end{itemize}
\item Exit
\begin{itemize}
\item Change
\item Follow-up
\item Document \& data handling
\end{itemize}
\end{itemize}
\subsubsection{Inspection record}
\begin{itemize}
\item Identification
\item Location
\item Description
\item Decision for entire document
\begin{itemize}
\item Pass w/ changes
\item Reinspect
\end{itemize}
\end{itemize}
\subsubsection{Data collection}
\begin{itemize}
\item Number of defects
\item Classes of defects
\item Severity
\item Other statistics
\end{itemize}
\lecture{18}{11}{2015}
\section{Static Analysis}
Static program analysis analyses computer programs statically, without executing
them.
\begin{itemize}
\item No need to run the program; can be used before deployment
\item No need to restrict to a single input as in testing
\item Useful in compiler optimization and in finding security vulnerabilities
\item Often applied to source code, sometimes to object code
\item Usually highly automated (with the possibility of some user
	interaction)
\end{itemize}
An analysis algorithm is sound if, each time it reports that the
program is safe wrt.\ some errors, the original program is indeed safe wrt.\
those errors. An algorithm is complete if, each time it is given a program that
is safe wrt.\ some errors, it reports the program safe wrt.\ those errors.
\\
We try to approximate all possible configurations; this gives us an over- or
under-approximation.
\begin{itemize}
\item A sound analysis cannot give false negatives
\item A complete analysis cannot give false positives
\end{itemize}
\subsection{Syntactic Analysis}
\subsubsection{Splint}
Splint is neither sound nor complete. Kind of a fancy version of grep. Looks for
``dangerous'' patterns.
\subsection{Abstract interpretation}
Suppose you have a program analysis that captures the program behaviour but that
is inefficient or uncomputable. You want an analysis that is efficient but that
can also over-approximate all behaviours of the program.
\\
Consider a language with multiplication, addition and subtraction of integers.
If you are only interested in the sign of a variable, you can associate
$\set{+,0,-}$ with each configuration instead of $\mathcal{Z}$. \\
For an integer variable, the set of concrete values at a location is in
$\mathcal{P}(\mathcal{Z})$. $S_1 \sqsubseteq S_2$ means that $S_1$ is more
precise than $S_2$. Example: $\mathcal{P}(\set{0,+}) \sqsubseteq
\mathcal{P}(\set{+,0,-})$.
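\\
A small worked example of my own (not from the slides), writing $\cdot^\#$, $+^\#$ and $-^\#$ for the abstract versions of the operators:
\begin{align*}
\alpha(\set{2,7}) &= \set{+}, & \alpha(\set{-3}) &= \set{-},\\
\set{+} \cdot^\# \set{+} &= \set{+}, & \set{+} \cdot^\# \set{-} &= \set{-},\\
\set{+} +^\# \set{+} &= \set{+}, & \set{+} -^\# \set{+} &= \set{+,0,-}.
\end{align*}
So if $x$ and $y$ are both abstracted to $\set{+}$, the analysis can conclude that $x*y$ is positive, but it must over-approximate $x-y$ to ``any sign''.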
\subsubsection{Lattice}
A pair $(Q, \leq)$ is a lattice if each pair $p,q \in Q$ has
\begin{itemize}
\item a greatest lower bound $p \sqcap q$ with $\leq$ (meet) and
\item a least upper bound $p \sqcup q$ with $\leq$ (join)
\end{itemize}
\subsubsection{Galois connections}
$(\alpha,\gamma)$\\
Concretization function $\gamma$, abstraction function $\alpha$. E.g., let $S$ be the set of all
prime numbers; then $\alpha(S)
= \set{+}=A$ and $\gamma(A)=\set{i \mid i>0}$. \\
Also, $S\sqsubseteq_c \gamma \circ
\alpha(S)$ and $\alpha \circ \gamma(A) \sqsubseteq_a A$.
\\
\lecture{19}{11}{2015}
\subsection{Symbolic execution}
Symbolic execution testing is complete, but not sound.
\begin{description}
\item[SMT]Satisfiability Modulo Theory
\end{description}
Similar to SAT solvers, SMT solvers work on ranges and not just boolean values.
\begin{itemize}
\item Most common form of software validation
\item Explores only one possible execution at a time
\item For each new value, run a new test
\item On a 32-bit machine, exhaustively testing an integer equality requires $2^{32}$
	different tests
\item The idea in symbolic testing is to associate \textbf{symbolic
	values} with the variables
\item Along the path, maintain a Path Constraint (PC) and symbolic state
	($\sigma$)
\item The PC collects constraints on the variables' values along a path
\item $\sigma$ associates variables to symbolic expressions
\item We get concrete values if the PC is satisfiable
\item Negate a condition in the path constraint to get another path (see the sketch below)
\end{itemize}
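A tiny hypothetical C example (my own, not from the slides) of how PC and $\sigma$ evolve:
\begin{verbatim}
/* With symbolic inputs a0 and b0 for a and b:
 *   after the assignment, sigma maps x -> 2*a0;
 *   the true branch is reached under PC: 2*a0 == b0 && a0 > 10;
 *   an SMT solver gives a concrete witness, e.g. a0 = 11, b0 = 22;
 *   negating a conjunct of the PC explores the other paths. */
void f(int a, int b) {
    int x = 2 * a;
    if (x == b && a > 10) {
        /* error path */
    }
}
\end{verbatim}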
\subsection{Hoare logic and deductive reasoning}
Total correctness requires that:
\begin{itemize}
\item If the pre-condition holds
\item then the implementation terminates
\item after termination, the post-condition holds
\end{itemize}
A Hoare triple $\set{P}$ stmt $\set{R}$ consists of:
\begin{itemize}
\item a predicate pre-condition $P$
\item an instruction stmt
\item A predicate post-condition $R$
\end{itemize}
\subsubsection{Weakest precondition}
\begin{itemize}
\item if $\set{P}$ stmt $\set{R}$ holds, and every $P'$ such that $\set{P'}$
	stmt $\set{R}$ satisfies $P' \Rightarrow P$, then $P$ is the weakest
	precondition of $R$ with respect to stmt, written $wp(stmt, R)$
\end{itemize}
For example, $wp(x=x+1,\ x\geq 1)=(x\geq 0)$.
\begin{itemize}
\item $wp(x=E,R)=R[x/E]$, i.e. replace each occurence of x in R by E.
\end{itemize}
For instance
\begin{align*}
&wp(x=3,x==5)=(x == 5)[x/3]=(3 == 5) == false \\
&wp(x=3,x>0)=(x>0)[x/3]=(3 >0) == true \\
&wp(stmt;stmt',R) = wp(stmt, wp(stmt', R))
\label{Weakest precondition}
\end{align*}
In order to establish $\set{P}\ (while(B)\ do \set{\ stmt\ })\ \set{R}$, we need to find
an invariant $Inv$ (a small worked example follows the list):
\begin{itemize}
\item $P \Rightarrow Inv$
\item $\set{Inv \wedge B}$ stmt $\set{Inv}$
\item $(Inv \wedge \neg B) \Rightarrow R$
\end{itemize}
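A small worked example of my own (not from the slides): for
$\set{i = 0 \wedge n \geq 0}\ (while(i < n)\ do \set{\ i := i + 1\ })\ \set{i = n}$,
the invariant $Inv: i \leq n$ works, since
\begin{itemize}
	\item $(i = 0 \wedge n \geq 0) \Rightarrow i \leq n$
	\item $\set{i \leq n \wedge i < n}\ i := i + 1\ \set{i \leq n}$, because $i < n \Rightarrow i + 1 \leq n$
	\item $(i \leq n \wedge \neg (i < n)) \Rightarrow i = n$
\end{itemize}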
\end{document}
| {
"alphanum_fraction": 0.7618417982,
"avg_line_length": 32.5493670886,
"ext": "tex",
"hexsha": "3803a862fe63f59a0f3f2992c598863ab2b910fa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "565c426ba5c47bb9d94844ddf34017f358e2ae0c",
"max_forks_repo_licenses": [
"Beerware"
],
"max_forks_repo_name": "PontusPersson/lecture-notes",
"max_forks_repo_path": "TDDC90/TDDC90-notes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "565c426ba5c47bb9d94844ddf34017f358e2ae0c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Beerware"
],
"max_issues_repo_name": "PontusPersson/lecture-notes",
"max_issues_repo_path": "TDDC90/TDDC90-notes.tex",
"max_line_length": 124,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "565c426ba5c47bb9d94844ddf34017f358e2ae0c",
"max_stars_repo_licenses": [
"Beerware"
],
"max_stars_repo_name": "PontusPersson/lecture-notes",
"max_stars_repo_path": "TDDC90/TDDC90-notes.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-17T18:50:02.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-17T18:50:02.000Z",
"num_tokens": 3538,
"size": 12857
} |
\subsection{Classifying with probabilistic decision trees}
Previously, our decision tree classifier was binary.
We can instead adapt the mixed tree model and use a probit model at each leaf.
| {
"alphanum_fraction": 0.807106599,
"avg_line_length": 21.8888888889,
"ext": "tex",
"hexsha": "9be61523f4a54b6a2b99666c46a8b83e2f2aa40b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex",
"max_line_length": 80,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 38,
"size": 197
} |
\subsection{Attack characteristics}
The attack is highly stable and reliable: it involves no probabilistic aspects, such as the need to win a race condition. If the environment fulfills all requirements, the attack is almost guaranteed to work, because almost all aspects of the attack rely on tricks that are also used for legitimate purposes; the environment therefore inherently supports the attack. The only exception is the modification of the HTTP payload, for which the obvious countermeasure is HTTPS. However, the transition from HTTP to HTTPS-only web traffic is still ongoing and far from finished.
Getting into the right environment to deploy the attack is the most challenging obstacle to employing the tool successfully. The following requirements apply:
\begin{description}
\item[Position in the network] The attacker has to achieve a MitM position between the victim and the server; this means either being on the same network as the victim or taking over a gateway or router that the victim uses.
\item[Insecure cookies] The cookie the attacker wants to intercept must not have the secure flag set, since otherwise the victim's browser would not include it in insecure requests, making it impossible to intercept.
\item[HTTP traffic] The attacker needs insecure HTTP traffic in which the image tag can be injected.
\end{description}
\noindent The impact of the attack can be severe, since cookies can contain extremely sensitive information such as web session tokens. If the web service is public, this allows the attacker to take over the session and hence impersonate the victim. In practice, every online service the victim logs on to is an interesting target.
\subsection{Defense mechanisms}
It will be quite hard for most victims to detect or mitigate this attack, since it generates no errors on either the client or the server side. The attack could be detected by noticing the vast number of ARP packets, although this is highly impractical for most users, since it is low level and hence requires technical knowledge. The other option is to detect the changes made to the HTTP traffic: the victim or server would have to notice that the request header or response body was tampered with. Although both remain completely legitimate according to the standards, they often contain newly introduced padding. This is obviously not definitive proof that someone is tampering with the connection, but it is a strong indicator. | {
"alphanum_fraction": 0.8124048706,
"avg_line_length": 187.7142857143,
"ext": "tex",
"hexsha": "866fa1bed20baf8f109bb72c45e909c975d6e697",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6ac98ac9abe56259565d45854f58daf4ab808d65",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "akbokha/fhttp",
"max_forks_repo_path": "docs/src/paper/attack_analysis.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "6ac98ac9abe56259565d45854f58daf4ab808d65",
"max_issues_repo_issues_event_max_datetime": "2018-04-07T09:09:53.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-22T23:38:09.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "akbokha/fhttp",
"max_issues_repo_path": "docs/src/paper/attack_analysis.tex",
"max_line_length": 805,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "6ac98ac9abe56259565d45854f58daf4ab808d65",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "akbokha/fhttp",
"max_stars_repo_path": "docs/src/paper/attack_analysis.tex",
"max_stars_repo_stars_event_max_datetime": "2018-03-19T10:16:37.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-19T10:16:37.000Z",
"num_tokens": 506,
"size": 2628
} |
\section{PyLith Parameter Viewer}
\label{sec:pylith:parameter:viewer}
\newfeature{v2.2.0}
The PyLith Parameter Viewer provides a graphical user interface for
viewing the parameters associated with a PyLith simulation and the
version information for PyLith and its dependencies. This viewer is
an updated and interactive interface to the information generated
by the \filename{pylith\_info} script. It displays the hierarchy of components
and the parameters for each one, including default values.
\section{Installation}
The PyLith Parameter Viewer is included in the PyLith binary
distributions and PyLith Docker container for versions 2.1.5 and
later. Additionally, the PyLith Installer will install the Parameter
Viewer by default. For manual installation you can download the
PyLith Parameter Viewer tarball from the PyLith software page
(\url{https://geodynamics.org/cig/software/pylith/}). After
downloading the tarball, unpack it. We recommend unpacking the tarball
in the top-level PyLith directory.
\begin{shell}
$ tar -xvf pylith_parameters-1.1.0.tgz
\end{shell}
\section{Running the Parameter Viewer}
The steps to run the parameter viewer are:
\begin{enumerate}
\item Generate the parameter JSON file.
\item Start the web server (if not already running).
\item Load the parameter JSON file.
\end{enumerate}
\subsection{Generate the parameter JSON file}
The parameter viewer uses a JSON file with all of the parameters
collected from \filename{cfg} files, command line arguments, etc as
input. This file can be generated using \filename{pylith\_info} (see
Section \ref{sec:pylith\_info}) and, by default, it will be generated
whenever a \filename{pylith} simulation is run. When using
\filename{pylith\_info}, the name of the parameter file can be set via a
command line argument. When using \filename{pylith}, the
DumpParametersJSON component contains a property for the name of the
file. You can set the filename on the command line
\begin{shell}
$ pylith --dump_parameters.filename=FILENAME.json
\end{shell}
or within a .cfg file
\begin{cfg}
<h>[pylithapp.dump_parameters]</h>
<p>filename</p> = FILENAME.json
\end{cfg}
Currently, the JSON parameter file cannot be used to run a PyLith
simulation. This feature will be added in an upcoming release.
\subsection{Start the web server}
Change to the directory containing the \filename{pylith\_paramviewer}
script (usually the \filename{parametersgui} directory under the top-level
\filename{pylith} directory), and run the \filename{pylith\_paramviewer}
script. This will start a simple Python-based web server on your local
computer.
\begin{shell}
$ cd parametersgui
$ ./pylith_paramviewer
\end{shell}
The script will instruct you to point your web browser to a local
port on your computer. The default is \url{http://127.0.0.1:9000}.
You can change the default port using the \filename{-{}-port} command
line argument to the \filename{pylith\_paramviewer} script.
\section{Using the Parameter Viewer}
When you point your web browser to the correct port, you should see
the PyLith Parameter Viewer as shown in Figure
\ref{fig:parameters:gui:startup}. Click the \textsf{Choose File}
button and navigate to the desired JSON parameter file. The viewer
tarball includes a sample parameter file
\filename{sample\_parameters.json}. Click the \textsf{Reload} button
to reload the same JSON parameter file if you regenerate it. To select
a new JSON parameter file, click the \textsf{Choose File} button and
navigate to the desired file.
\begin{figure}[htbp]
\fbox{\includegraphics[width=5in]{runpylith/figs/paramgui_startup}}
\caption{Screenshot of PyLith Parameter Viewer in web browser upon startup.}
\label{fig:parameters:gui:startup}
\end{figure}
\subsection{Version Information}
Click on the \textsf{Version} tab to examine the version information.
This tab displays the same version information shown with the
\filename{-{}-version} command line argument to \filename{pylith} in
an easy to read layout. This includes information about the platform
on which \filename{pylith} or \filename{pylith\_info} was run, the
PyLith version, and versions of the dependencies, as shown in Figure
\ref{fig:parameters:gui:version}.
\begin{figure}[htbp]
\fbox{\includegraphics[width=5in]{runpylith/figs/paramgui_version}}
\caption{Screenshot of \textsf{Version} tab of the PyLith Parameter Viewer
with sample JSON parameter file.}
\label{fig:parameters:gui:version}
\end{figure}
\subsection{Parameter Information}
Click on the \textsf{Parameters} tab to examine the hierarchy of
components and the parameters for each. You can expand/collapse the
Component Hierarchy tree in the left panel by clicking on the
triangles or facility name in blue to the left of the equals sign
(Figure \ref{fig:parameters:gui:parameters:empty}). Clicking on the
component in red to the right of the equals sign will show its
parameters in the right panel (Figure
\ref{fig:parameters:gui:parameters:empty}). The selected facility in
the left panel whose parameters are shown in the right panel will be
highlighted via a gray background (Figure
\ref{fig:parameters:gui:parameters:selected}).
\begin{figure}[htbp]
\fbox{\includegraphics[width=5in]{runpylith/figs/paramgui_parameters}}
\caption{Screenshot of \textsf{Parameters} tab of the PyLith Parameter Viewer
with sample JSON parameter file before selecting a component in the
left panel.}
\label{fig:parameters:gui:parameters:empty}
\end{figure}
\begin{figure}[htbp]
\fbox{\includegraphics[width=5in]{runpylith/figs/paramgui_detail}}
\caption{Screenshot of \textsf{Parameters} tab of the PyLith Parameter Viewer
with sample JSON parameter file with the \facility{z\_neg} facility
selected.}
\label{fig:parameters:gui:parameters:selected}
\end{figure}
% End of file
| {
"alphanum_fraction": 0.7893108782,
"avg_line_length": 39.8561643836,
"ext": "tex",
"hexsha": "d14db0db2bb6f64ffc54dc9e13d2f695636041fd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Grant-Block/pylith",
"max_forks_repo_path": "doc/userguide/runpylith/parametersgui.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Grant-Block/pylith",
"max_issues_repo_path": "doc/userguide/runpylith/parametersgui.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Grant-Block/pylith",
"max_stars_repo_path": "doc/userguide/runpylith/parametersgui.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1500,
"size": 5819
} |
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amsthm,amssymb,amsfonts}
\usepackage{enumitem}
\usepackage{tabu}
\usepackage{xcolor}
\usepackage{mathtools}
\usepackage{tcolorbox}
\usepackage{changepage}
\usepackage{kpfonts}
\usepackage{picture}
\usepackage{venndiagram}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\R}{\mathbb{R}}
\begin{document}
\section*{CHAPTER 7}
\noindent
\textbf{Problem 3:} Consider a card game, using the standard $52$-card deck, in which each of four players is dealt thirteen cards. Compute the probabilities that a specific player: $(a)$ has no clubs; $(b)$ has exactly ten clubs; $(c)$ has at least three of four aces.
\vspace*{5cm}
\noindent
\textbf{Problem 4:} (Cont.) In the same card game, what is the probability that each player is dealt a jack?
\vspace*{5cm}
\noindent
\textbf{Problem 9:} Recall that there are $100$ U.S. senators (two from each state).
\begin{enumerate}[label=(\roman*)]
\item If two senators are chosen at random, what is the probability that they are from the same state?
\item If the $100$ senators are organized into disjoint sets of two, what is the probability that, in each set, the two senators are from the same state?
\item In a committee of ten senators, what is the probability that no two are from the same state? (Assume that there are no restrictions on committee memberships)
\end{enumerate}
\vspace*{3cm}
% ================================================================================================================================
% ================================================================================================================================
\section*{CHAPTER 8}
\noindent
\textbf{Problem 3:} Compute the probability that, in a sample of $10$ people in the population at large, $2$ have their birthdays in May or June, $3$ have their birthdays in December or January, and the $5$ remaining ones have their birthdays during the rest of the year. (Just give the formula. Also, you may assume for simplicity that all the months have the same number of days.)
\vspace*{5cm}
\noindent
\textbf{Problem 4:} Two fair dice are thrown repeatedly until for the first time their sum exceeds $4$. What is the distribution of the trial number of that event?
\vspace*{5cm}
\noindent
\textbf{Problem 5:} Check the accuracy of the approximation of the binomial distribution with parameters $50$ and $\frac{1}{50}$ by the Poisson distribution with parameter $1$. Compute both distributions for the values $0$, $1$, $3$, and $5$.
\vspace*{5cm}
\noindent
\textbf{Problem 8:} In a particular ESP (extrasensory perception) experiment, an experimenter looks at one of five cards on each trial, and the subject is to guess at which card the experimenter is looking. Assuming that the subject does not have ESP, what is the distribution of the outcome (successful guess, unsuccessful guess) of a trial?
\vspace*{5cm}
% ================================================================================================================================
% ================================================================================================================================
\section*{CHAPTER 9}
\noindent
\textbf{Problem 1:} Archie has three coins in his pocket: a standard coin, a coin with heads on both sides, and a coin with tails on both sides. He pulls one coin out of his pocket, looks at one side of the coin, and notices that it is a tail. He reasons that the probability of seeing a head on the other side of this coin is $\frac{1}{2}$. Do you agree with his reasoning?
\vspace*{5cm}
\noindent
\textbf{Problem 2:} Let $a$, $b$, and $c$ be outcomes in some finite sample space $\Omega$ having $2^{\Omega}$ as a field of events, with some probability measure $\mathbb{P}$. You are told that $\mathbb{P}(\{ a,b \} \lvert \{ b,c \}) = \alpha$ and that $\mathbb{P}(\{ c \}) = \beta$.
\begin{enumerate}[label=(\roman*)]
\item Compute $\mathbb{P}(\{ b \})$ in terms of $\alpha$ and $\beta$.
\item Give some possible values for $\alpha$ and $\beta$.
\item Find constraints on $\alpha$ and $\beta$, that is, find a general expression constraining the possible values of $\alpha$ and $\beta$.
\item Find an expression constraining the possible values of $\mathbb{P}(\{ a \})$.
\end{enumerate}
\vspace*{5cm}
\noindent
\textbf{Problem 8:} Consider, in some ethnic group, the set of all families having two children. Let us assume that, for such families, the probability of having a boy is $.5$. Take one family at random and suppose that one of their children is a boy. What is the probability that the other child is also a boy?
\vspace*{5cm}
\noindent
\textbf{Problem 12:} An astronomer has detected punctual signals from an unknown source in the sky. The signals are of two kinds, which she denotes `$A$' and `$B$.' She assumes that the occurrence of the signals is governed by a random process, namely, that the number of signals of any kind received in the course of one hour has a Poisson distribution with parameter $\lambda$. When a signal occurs, it is an `$A$' signal with probability $\theta$ and a `$B$' signal with probability $1-\theta$. Write a formula for the distribution of `$A$' signals received in one hour.
\vspace*{5cm}
\end{document} | {
"alphanum_fraction": 0.6680451128,
"avg_line_length": 54.8453608247,
"ext": "tex",
"hexsha": "07c3146077a1b653161cf6f88ad75862d81689d3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jShiohaha/math-classes",
"max_forks_repo_path": "probability-theory/exam-review/exam-1/EXAM_1_REVIEW.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jShiohaha/math-classes",
"max_issues_repo_path": "probability-theory/exam-review/exam-1/EXAM_1_REVIEW.tex",
"max_line_length": 574,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jShiohaha/math-classes",
"max_stars_repo_path": "probability-theory/exam-review/exam-1/EXAM_1_REVIEW.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1336,
"size": 5320
} |
% This LaTeX was auto-generated from an M-file by MATLAB.
% To make changes, update the M-file and republish this document.
%%% \documentclass{article}
%%% \usepackage{graphicx}
%%% \usepackage{color}
%%% \sloppy
%%% \definecolor{lightgray}{gray}{0.5}
\setlength{\parindent}{0pt}
%%% \begin{document}
\subsection*{testM}
\begin{par}
Example for algorithm testM. The algorithm is useful only for testing the QWTB toolbox. It calculates the maximal and minimal value of the record. The MCM is calculated by the wrapper.
\end{par} \vspace{1em}
\begin{par}
See also \lstinline{qwtb}
\end{par} \vspace{1em}
\subsubsection*{Contents}
\begin{itemize}
\setlength{\itemsep}{-1ex}
\item Generate sample data
\item Call algorithm
\item Plot results
\end{itemize}
\subsubsection*{Generate sample data}
\begin{par}
Two quantities are prepared: \lstinline{x} and \lstinline{y}.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
x = []; y = [];
x.v = [1:20];
y.v = [1:14 13:-1:8];
\end{lstlisting}
\begin{par}
All uncertainties are set to 1.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
x.u = x.v.*0 + 1;
y.u = y.v.*0 + 1;
\end{lstlisting}
\begin{par}
Quantities are put into data input structure \lstinline{DI}.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
DI = [];
DI.x = x;
DI.y = y;
\end{lstlisting}
\begin{par}
Create calculation settings \lstinline{CS} and set uncertainty calculation method to Monte Carlo method. Allow randomization of uncertainties by the QWTB toolbox.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
CS = [];
CS.unc = 'mcm';
CS.mcm.randomize = 1;
\end{lstlisting}
\subsubsection*{Call algorithm}
\begin{par}
Use QWTB to apply algorithm \lstinline{testM} to data \lstinline{DI} with calculation settings \lstinline{CS}.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
DO = qwtb('testM', DI, CS);
\end{lstlisting}
\begin{lstlisting}[style=output]
QWTB: default correlation matrix generated for quantity `x`
QWTB: quantity x was randomized by QWTB
QWTB: default correlation matrix generated for quantity `y`
QWTB: quantity y was randomized by QWTB
QWTB: uncertainty calculation by means of wrapper or algorithm
\end{lstlisting} \color{black}
\subsubsection*{Plot results}
\begin{par}
Plot the input data and the calculated maximal and minimal values as red and green lines, with uncertainties represented by dashed lines.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
figure
hold on
errorbar(DI.x.v, DI.y.v, DI.y.u, 'xb')
plot([DI.x.v(1) DI.x.v(end)], [DO.max.v DO.max.v], '-r', 'linewidth', 3)
plot([DI.x.v(1) DI.x.v(end)], [DO.max.v - DO.max.u DO.max.v - DO.max.u], '--r', 'linewidth', 3)
plot([DI.x.v(1) DI.x.v(end)], [DO.min.v DO.min.v], '-g', 'linewidth', 3)
plot([DI.x.v(1) DI.x.v(end)], [DO.min.v - DO.min.u DO.min.v - DO.min.u], '--g', 'linewidth', 3)
plot([DI.x.v(1) DI.x.v(end)], [DO.max.v + DO.max.u DO.max.v + DO.max.u], '--r', 'linewidth', 3)
plot([DI.x.v(1) DI.x.v(end)], [DO.min.v + DO.min.u DO.min.v + DO.min.u], '--g', 'linewidth', 3)
legend('original data (DI.x.v, DI.y.v)', 'line at maximum value (DO.max.v)', 'uncertainty', 'line at minimum value (DO.min.v)', 'uncertainty', 'location', 'southoutside')
xlabel('quantity x')
ylabel('quantity y')
title('input data and results of testM algorithm')
hold off
\end{lstlisting}
\begin{center}
\includegraphics[width=0.7\textwidth]{algs_examples_published/testM_alg_example_01.pdf}
\end{center}
\begin{par}
Plot a histogram of the calculated maximal value, i.e.\ the probability density function simulated by the Monte Carlo method.
\end{par} \vspace{1em}
\begin{lstlisting}[style=mcode]
figure
hist(DO.max.r, 50)
title('histogram of Monte Carlo method results of maximum value (DO.max.r)')
\end{lstlisting}
\begin{center}
\includegraphics[width=0.7\textwidth]{algs_examples_published/testM_alg_example_02.pdf}
\end{center}
%%% \end{document}
| {
"alphanum_fraction": 0.7045337455,
"avg_line_length": 29.6335877863,
"ext": "tex",
"hexsha": "338a80b21b87d90dd1f454d48c20a5b78ffdd940",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2020-09-17T12:59:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-11T02:12:47.000Z",
"max_forks_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "qwtb/qwtb",
"max_forks_repo_path": "doc/algs_examples_published/doc_testM.tex",
"max_issues_count": 18,
"max_issues_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34",
"max_issues_repo_issues_event_max_datetime": "2021-07-13T11:33:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-12-09T13:08:38.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "qwtb/qwtb",
"max_issues_repo_path": "doc/algs_examples_published/doc_testM.tex",
"max_line_length": 171,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "qwtb/qwtb",
"max_stars_repo_path": "doc/algs_examples_published/doc_testM.tex",
"max_stars_repo_stars_event_max_datetime": "2015-12-09T13:18:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-12-09T13:18:54.000Z",
"num_tokens": 1217,
"size": 3882
} |
\par
\section{Prototypes and descriptions of {\tt A2} methods}
\label{section:A2:proto}
\par
This section contains brief descriptions including prototypes
of all methods that belong to the {\tt A2} object.
\par
\subsection{Basic methods}
\label{subsection:A2:proto:basics}
\par
As usual, there are four basic methods to support object creation,
setting default fields, clearing any allocated data, and free'ing
the object.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
A2 * A2_new ( void ) ;
\end{verbatim}
\index{A2_new@{\tt A2\_new()}}
This method simply allocates storage for the {\tt A2} structure
and then sets the default fields by a call to
{\tt A2\_setDefaultFields()}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setDefaultFields ( A2 *mtx ) ;
\end{verbatim}
\index{A2_setDefaultFields@{\tt A2\_setDefaultFields()}}
The structure's fields are set to default values:
{\tt type} = {\tt SPOOLES\_REAL},
{\tt n1} = {\tt inc1} = {\tt n2} = {\tt inc2} = {\tt nowned} = 0 and
{\tt entries} = {\tt NULL} .
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_clearData ( A2 *mtx ) ;
\end{verbatim}
\index{A2_clearData@{\tt A2\_clearData()}}
This method clears the object and free's any owned data.
If {\tt nowned > 0} and {\tt entries} is not {\tt NULL},
then {\tt DVfree(entries)} is called to free the storage.
It calls {\tt A2\_setDefaultFields()}.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_free ( A2 *mtx ) ;
\end{verbatim}
\index{A2_free@{\tt A2\_free()}}
This method releases any storage by a call to
{\tt A2\_clearData()} and then free the space for {\tt mtx}.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Instance methods}
\label{subsection:A2:proto:instance}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_nrow ( A2 *mtx ) ;
\end{verbatim}
\index{A2_nrow@{\tt A2\_nrow()}}
This method returns the number of rows in the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_ncol ( A2 *mtx ) ;
\end{verbatim}
\index{A2_ncol@{\tt A2\_ncol()}}
This method returns the number of columns in the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_inc1 ( A2 *mtx ) ;
\end{verbatim}
\index{A2_inc1@{\tt A2\_inc1()}}
This method returns the primary increment, the stride in memory
(with respect to real or complex entries)
between adjacent entries in the same column.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_inc2 ( A2 *mtx ) ;
\end{verbatim}
\index{A2_inc2@{\tt A2\_inc2()}}
This method returns the secondary increment, the stride in memory
(with respect to real or complex entries)
between adjacent entries in the same row.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double * A2_entries ( A2 *mtx ) ;
\end{verbatim}
\index{A2_entries@{\tt A2\_entries()}}
This method returns a pointer to the base address of the entries.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double * A2_row ( A2 *mtx, int irow ) ;
\end{verbatim}
\index{A2_row@{\tt A2\_row()}}
This method returns a pointer to the leading element of row {\tt irow}.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt entries} is {\tt NULL},
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double * A2_column ( A2 *mtx, int jcol ) ;
\end{verbatim}
\index{A2_column@{\tt A2\_column()}}
This method returns a pointer to the leading element
of column {\tt jcol}.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt entries} is {\tt NULL},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_realEntry ( A2 *mtx, int irow, int jcol, double *pValue ) ;
\end{verbatim}
\index{A2_realEntry@{\tt A2\_realEntry()}}
This method fills {\tt *pValue} with the entry in
location {\tt (irow, jcol)}.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt pValue} is {\tt NULL},
or if the matrix is not real,
or {\tt irow} is not in {\tt [0,n1-1]},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_complexEntry ( A2 *mtx, int irow, int jcol,
double *pReal, double *pImag ) ;
\end{verbatim}
\index{A2_complexEntry@{\tt A2\_complexEntry()}}
This method fills {\tt (*pReal,*pImag)} with the entry in
location {\tt (irow, jcol)}.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt pReal} or {\tt pImag} is {\tt NULL},
or if the matrix is not complex,
or {\tt irow} is not in {\tt [0,n1-1]},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setRealEntry ( A2 *mtx, int irow, int jcol, double value ) ;
\end{verbatim}
\index{A2_setRealEntry@{\tt A2\_setRealEntry()}}
This method sets entry {\tt (irow,jcol)} to {\tt value}.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
or if the matrix is not real,
or {\tt irow} is not in {\tt [0,n1-1]}
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setComplexEntry ( A2 *mtx, int irow, int jcol,
double real, double imag ) ;
\end{verbatim}
\index{A2_setComplexEntry@{\tt A2\_setComplexEntry()}}
This method sets entry {\tt (irow,jcol)} to {\tt (real,imag)}.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
or if the matrix is not complex,
or {\tt irow} is not in {\tt [0,n1-1]}
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_pointerToRealEntry ( A2 *mtx, int irow, int jcol, double **ppValue ) ;
\end{verbatim}
\index{A2_pointerToRealEntry@{\tt A2\_pointerToRealEntry()}}
This method sets {\tt *ppValue} to the pointer
of the {\tt (irow,jcol)} entry.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt ppValue} is {\tt NULL},
or if the matrix is not real,
or if {\tt irow} is not in {\tt [0,n1-1]},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_pointerToComplexEntry ( A2 *mtx, int irow, int jcol,
double **ppReal, double **ppImag ) ;
\end{verbatim}
\index{A2_pointerToComplexEntry@{\tt A2\_pointerToComplexEntry()}}
This method sets {\tt *ppReal} to the pointer to the real part
of the {\tt (irow,jcol)} entry,
and sets {\tt *ppImag} to the pointer to the imaginary part
of the {\tt (irow,jcol)} entry.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt ppReal} or {\tt ppImag} is {\tt NULL},
or if the matrix is not complex,
or if {\tt irow} is not in {\tt [0,n1-1]},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Initialize methods}
\label{subsection:A2:proto:initial}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_init ( A2 *mtx, int type, int n1, int n2, int inc1, int inc2,
double *entries ) ;
\end{verbatim}
\index{A2_init@{\tt A2\_init()}}
This is the basic initializer method.
We require that {\tt mtx} not be {\tt NULL},
{\tt type} be either {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX},
{\tt n1} and {\tt n2} both be positive,
and both {\tt inc1} and {\tt inc2} both be
positive and that one of them be equal to one.
Also, we only initialize a full matrix, i.e.,
one of
{\tt inc1 = 1} and {\tt inc2 = nrow}
or
{\tt inc1 = ncol} and {\tt inc2 = 1}
must hold.
\par
The object is first cleared with a call to {\tt A2\_clearData()}.
If {\tt entries} is {\tt NULL}, then {\tt n1*n2} new entries
are allocated, {\tt mtx->entries} is set to this address, and {\tt nowned}
is set to {\tt n1*n2}.
If {\tt entries} is not {\tt NULL}, then {\tt mtx->entries} is set
to {\tt entries} and {\tt nowned} is set to zero.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
or if {\tt n1}, {\tt n2}, {\tt inc1} or {\tt inc2} are
less than or equal to zero,
or if the matrix is not full matrix
(i.e., {\tt inc1} must be {\tt 1} and {\tt inc2} must be {\tt n1},
{\it or}
{\tt inc1} must be {\tt n2} and {\tt inc2} must be {\tt 1}),
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_subA2 ( A2 *mtxA, A2 *mtxB,
int firstrow, int lastrow, int firstcol, int lastcol ) ;
\end{verbatim}
\index{A2_subA2@{\tt A2\_subA2()}}
This initializer method makes the object {\tt mtxA} point into a
submatrix of object {\tt mtxB}, as
\begin{verbatim}
A(0:lastrow-firstrow,0:lastcol-firstcol) = B(firstrow:lastrow, firstcol:lastcol)
\end{verbatim}
Note, {\tt firstrow}, {\tt lastrow}, {\tt firstcol} and {\tt lastcol}
must satisfy {\tt 0 <= firstrow <= lastrow < mtxB->n1}
and {\tt 0 <= firstcol <= lastcol < mtxB->n2}.
Object {\tt mtxA} does not own its entries, but points into the
entries of {\tt mtxB}.
\par \noindent {\it Error checking:}
If {\tt mtxA} or {\tt mtxB} are {\tt NULL},
or if {\tt firstrow} or {\tt lastrow} are out of range,
or if {\tt firstcol} or {\tt lastcol} are out of range,
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\end{enumerate}
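\par
\noindent
As a hedged usage sketch (not part of the original documentation; the header
file name is an assumption), the basic and initializer methods above combine
as follows.
\begin{verbatim}
#include "A2.h"      /* assumed header for the A2 object */

void example ( void ) {
   A2   *mtx ;
   int  irow, jcol, nrow = 4, ncol = 3 ;

   mtx = A2_new() ;
/*
   4 x 3 real matrix, column major (inc1 = 1, inc2 = nrow).
   entries is NULL, so the object allocates and owns its storage.
*/
   A2_init(mtx, SPOOLES_REAL, nrow, ncol, 1, nrow, NULL) ;
   for ( jcol = 0 ; jcol < ncol ; jcol++ ) {
      for ( irow = 0 ; irow < nrow ; irow++ ) {
         A2_setRealEntry(mtx, irow, jcol, (double) (irow + jcol)) ;
      }
   }
   A2_free(mtx) ;
}
\end{verbatim}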
\par
\subsection{Methods used in the $QR$ factorization}
\label{subsection:A2:proto:QR}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_makeStaircase ( A2 *A ) ;
\end{verbatim}
\index{A2_makeStaircase@{\tt A2\_makeStaircase()}}
This method permutes the rows of {\tt A} by the location of the
leading nonzero of each row.
Upon return, the matrix is in {\it staircase} form.
\par \noindent {\it Error checking:}
If {\tt A} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_QRreduce ( A2 *A, DV *workDV, int msglvl, FILE *msgFile ) ;
\end{verbatim}
\index{A2_QRreduce@{\tt A2\_QRreduce()}}
This method computes $A = QR$ factorization.
On return, the matrix $Q$ is not available, and $R$ is found in the
upper triangle or upper trapezoid of {\tt A}.
The Householder vectors are stored in the lower triangle of {\tt
A}, with $v_j(j) = 1.0$.
The return value is the number of floating point operations that
were executed.
\par \noindent {\it Error checking:}
If {\tt A} or {\tt workDV} is {\tt NULL},
or if {\tt msglvl > 0} and {\tt msgFile} if {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_computeQ ( A2 *Q, A2 *A, DV *workDV, int msglvl, FILE *msgFile ) ;
\end{verbatim}
\index{A2_computeQ@{\tt A2\_computeQ()}}
This method computes $Q$ from the $A = QR$ factorization computed
in {\tt A2\_QRreduce()}.
Note: {\tt A} and {\tt Q} must be column major.
\par \noindent {\it Error checking:}
If {\tt Q}, {\tt A} or {\tt workDV} is {\tt NULL},
or if {\tt msglvl > 0} and {\tt msgFile} if {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_applyQT ( A2 *Y, A2 *A, A2 *X, DV *workDV, int msglvl, FILE *msgFile ) ;
\end{verbatim}
\index{A2_applyQT@{\tt A2\_applyQT()}}
This method computes $Y = Q^T X$ (if real) or $Y = Q^H X$ (if
complex), where $Q$ is stored in Householder vectors inside $A$.
We assume that {\tt A2\_QRreduce()} has been previously called with {\tt A}
as an argument.
Since $Y$ is computed column-by-column, $X$ and $Y$ can be the same
{\tt A2} object.
The {\tt workDV} object is resized as necessary.
Note: {\tt Y}, {\tt A} and {\tt X} must be column major.
\par \noindent {\it Error checking:}
If {\tt Y}, {\tt A}, {\tt X} or {\tt workDV} is {\tt NULL},
or if {\tt msglvl > 0} and {\tt msgFile} if {\tt NULL},
or if {\tt Y}, {\tt A} or {\tt X} is not column major,
or if the types of {\tt Y}, {\tt A} and {\tt X} are not the same,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
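\par
\noindent
A hedged usage sketch (not part of the original documentation): assuming that
the {\tt DV} object follows the same {\tt \_new()}/{\tt \_free()} convention as
{\tt A2}, and that {\tt A}, {\tt X} and {\tt Y} are column major and already
initialized, a factor-and-apply sequence looks like this.
\begin{verbatim}
#include <stdio.h>
#include "A2.h"      /* assumed header names */
#include "DV.h"

void qr_sketch ( A2 *A, A2 *X, A2 *Y ) {
   DV     *workDV = DV_new() ;
   double ops ;
/*
   overwrite A with R (upper triangle) and the Householder
   vectors (lower triangle)
*/
   ops = A2_QRreduce(A, workDV, 0, stdout) ;
   fprintf(stdout, "\n QR reduction : %.0f operations", ops) ;
/*
   Y := Q^T X (or Q^H X in the complex case), using the
   Householder vectors stored in A
*/
   A2_applyQT(Y, A, X, workDV, 0, stdout) ;
   DV_free(workDV) ;
}
\end{verbatim}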
\par
\subsection{Norm methods}
\label{subsection:A2:proto:norms}
\par
These methods return a norm of a row or a column, or the easily
computable norms of the matrix.
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_maxabs ( A2 *mtx ) ;
\end{verbatim}
\index{A2_maxabs@{\tt A2\_maxabs()}}
This method returns magnitude of the entry with largest magnitude.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_frobNorm ( A2 *mtx ) ;
\end{verbatim}
\index{A2_frobNorm@{\tt A2\_frobNorm()}}
This method returns the Frobenius norm of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_oneNorm ( A2 *mtx ) ;
\end{verbatim}
\index{A2_oneNorm@{\tt A2\_oneNorm()}}
This method returns the one norm of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_infinityNorm ( A2 *mtx ) ;
\end{verbatim}
\index{A2_infinityNorm@{\tt A2\_infinityNorm()}}
This method returns the infinity norm of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_oneNormOfColumn ( A2 *mtx, int jcol ) ;
\end{verbatim}
\index{A2_oneNormOfColumn@{\tt A2\_oneNormOfColumn()}}
This method returns the one-norm of column {\tt jcol} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_twoNormOfColumn ( A2 *mtx, int jcol ) ;
\end{verbatim}
\index{A2_twoNormOfColumn@{\tt A2\_twoNormOfColumn()}}
This method returns the two-norm of column {\tt jcol} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_infinityNormOfColumn ( A2 *mtx, int jcol ) ;
\end{verbatim}
\index{A2_infinityNormOfColumn@{\tt A2\_infinityNormOfColumn()}}
This method returns the infinity-norm of column {\tt jcol}
of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_oneNormOfRow ( A2 *mtx, int irow ) ;
\end{verbatim}
\index{A2_oneNormOfRow@{\tt A2\_oneNormOfRow()}}
This method returns the one-norm of row {\tt irow} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_twoNormOfRow ( A2 *mtx, int irow ) ;
\end{verbatim}
\index{A2_twoNormOfRow@{\tt A2\_twoNormOfRow()}}
This method returns the two-norm of row {\tt irow} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double A2_infinityNormOfRow ( A2 *mtx, int irow ) ;
\end{verbatim}
\index{A2_infinityNormOfRow@{\tt A2\_infinityNormOfRow()}}
This method returns the infinity-norm of row {\tt irow}
of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}, or {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Sort methods}
\label{subsection:A2:proto:sort}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_permuteRows ( A2 *mtx, int nrow, int index[] ) ;
\end{verbatim}
\index{A2_permuteRows@{\tt A2\_permuteRows()}}
The {\tt index[]} vector contains the {\it row ids} of the leading
{\tt nrow} rows.
This method permutes the leading {\tt nrow} rows of the matrix
so that the {\tt index[]} vector is in ascending order.
This method calls {\tt A2\_sortRowsUp()} but does not overwrite
the {\tt index[]} vector.
\par
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt index[]} is {\tt NULL},
or if {\tt nrow < 0} or {\tt nrow > n1},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_permuteColumns ( A2 *mtx, int nrow, int index[] ) ;
\end{verbatim}
\index{A2_permuteColumns@{\tt A2\_permuteColumns()}}
The {\tt index[]} vector contains the {\it column ids} of the leading
{\tt ncol} columns.
This method permutes the leading {\tt ncol} columns of the matrix
so that the {\tt index[]} vector is in ascending order.
This method calls {\tt A2\_sortColumnsUp()} but does not overwrite
the {\tt index[]} vector.
\par
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt index[]} is {\tt NULL},
or if {\tt ncol < 0} or {\tt ncol > n2},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_sortRowsUp ( A2 *mtx, int nrow, int rowids[] ) ;
\end{verbatim}
\index{A2_sortRowsUp@{\tt A2\_sortRowsUp()}}
This method sorts the leading {\tt nrow} rows of the matrix
into ascending order with respect to the {\tt rowids[]} vector.
The return value is the number of row swaps made.
\par
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt rowids} is {\tt NULL},
or if {\tt nrow < 0} or {\tt nrow > n1},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_sortColumnsUp ( A2 *mtx, int ncol, int colids[] ) ;
\end{verbatim}
\index{A2_sortColumnsUp@{\tt A2\_sortColumnsUp()}}
This method sorts the leading {\tt ncol} columns of the matrix
into ascending order with respect to the {\tt colids[]} vector.
The return value is the number of column swaps made.
\par
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt colids} is {\tt NULL},
or if {\tt ncol < 0} or {\tt ncol > n2},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Utility methods}
\label{subsection:A2:proto:utilities}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_sizeOf ( A2 *mtx ) ;
\end{verbatim}
\index{A2_sizeOf@{\tt A2\_sizeOf()}}
This method returns the number of bytes owned by this object.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_shiftBase ( A2 *mtx, int rowoff, int coloff ) ;
\end{verbatim}
\index{A2_shiftbase@{\tt A2\_shiftBase()}}
This method is used to shift the base of the entries and adjust
dimensions of the {\tt A2} object.
\begin{verbatim}
mtx(0:n1-rowoff-1,0:n2-coloff-1) := mtx(rowoff:n1-1,coloff:n2-1)
\end{verbatim}
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL}
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_rowMajor ( A2 *mtx ) ;
\end{verbatim}
\index{A2_rowMajor@{\tt A2\_rowMajor()}}
This method returns 1 if the storage is row major,
otherwise it returns zero.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_columnMajor ( A2 *mtx ) ;
\end{verbatim}
\index{A2_columnMajor@{\tt A2\_columnMajor()}}
This method returns 1 if the storage is column major,
otherwise it returns zero.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_transpose ( A2 *mtx ) ;
\end{verbatim}
\index{A2_transpose@{\tt A2\_transpose()}}
This method replaces {\tt mtx} with its transpose.
Note, this takes $O(1)$ operations since we just swap dimensions and
increments.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractRow ( A2 *mtx, double row[], int irow ) ;
\end{verbatim}
\index{A2_extractRow@{\tt A2\_extractRow()}}
This method fills the {\tt row[]} vector with row {\tt irow} of the
matrix.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt entries} or {\tt row} are {\tt NULL},
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractRowDV ( A2 *mtx, DV *rowDV, int irow ) ;
\end{verbatim}
\index{A2_extractRowDV@{\tt A2\_extractRowDV()}}
This method fills the {\tt rowDV} object with row {\tt irow} of the
matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt rowDV} are {\tt NULL},
or if the matrix is not real,
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractRowZV ( A2 *mtx, ZV *rowZV, int irow ) ;
\end{verbatim}
\index{A2_extractRowZV@{\tt A2\_extractRowZV()}}
This method fills the {\tt rowZV} object with row {\tt irow} of the
matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt rowZV} are {\tt NULL},
or if the matrix is not complex,
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractColumn ( A2 *mtx, double col[], int jcol ) ;
\end{verbatim}
\index{A2_extractColumn@{\tt A2\_extractColumn()}}
This method fills the {\tt col[]} vector
with column {\tt jcol} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt entries} or {\tt col} are {\tt NULL},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractColumnDV ( A2 *mtx, DV *colDV, int jcol ) ;
\end{verbatim}
\index{A2_extractColumnDV@{\tt A2\_extractColumnDV()}}
This method fills the {\tt colDV} object
with column {\tt jcol} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt colDV} are {\tt NULL},
or if the matrix is not real,
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_extractColumnZV ( A2 *mtx, ZV *colZV, int jcol ) ;
\end{verbatim}
\index{A2_extractColumnZV@{\tt A2\_extractColumnZV()}}
This method fills the {\tt colZV} object
with column {\tt jcol} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt colZV} are {\tt NULL},
or if the matrix is not complex,
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setRow ( A2 *mtx, double row[], int irow ) ;
\end{verbatim}
\index{A2_setRow@{\tt A2\_setRow()}}
This method fills row {\tt irow} of the matrix
with the entries in the {\tt row[]} vector.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt entries} or {\tt row[]} are {\tt NULL},
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setRowDV ( A2 *mtx, DV rowDV, int irow ) ;
\end{verbatim}
\index{A2_setRowDV@{\tt A2\_setRowDV()}}
This method fills row {\tt irow} of the matrix
with the entries in the {\tt rowDV} object.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt rowDV} are {\tt NULL},
or if the matrix is not real,
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setRowZV ( A2 *mtx, ZV rowZV, int irow ) ;
\end{verbatim}
\index{A2_setRowZV@{\tt A2\_setRowZV()}}
This method fills row {\tt irow} of the matrix
with the entries in the {\tt rowZV} object.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt rowZV} are {\tt NULL},
or if the matrix is not complex,
or if {\tt irow} is not in {\tt [0,n1-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setColumn ( A2 *mtx, double col[], int jcol ) ;
\end{verbatim}
\index{A2_setColumn@{\tt A2\_setColumn()}}
This method fills column {\tt jcol} of the matrix with
the entries in the {\tt col[]} vector.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt col[]} are {\tt NULL},
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setColumnDV ( A2 *mtx, DV colDV, int jcol ) ;
\end{verbatim}
\index{A2_setColumnDV@{\tt A2\_setColumnDV()}}
This method fills column {\tt jcol} of the matrix with
the entries in the {\tt colDV} object.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt colDV} are {\tt NULL},
or if the matrix is not real,
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_setColumnZV ( A2 *mtx, ZV colZV, int jcol ) ;
\end{verbatim}
\index{A2_setColumnZV@{\tt A2\_setColumnZV()}}
This method fills column {\tt jcol} of the matrix with
the entries in the {\tt colZV} object.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt colZV} are {\tt NULL},
or if the matrix is not complex,
or if {\tt jcol} is not in {\tt [0,n2-1]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_fillRandomUniform ( A2 *mtx, double lower, double upper, int seed ) ;
\end{verbatim}
\index{A2_fillRandomUniform@{\tt A2\_fillRandomUniform()}}
This method fills the matrix with random numbers taken from a
uniform distribution on {\tt [lower,upper]} using the {\tt Drand}
object.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_fillRandomNormal ( A2 *mtx, double mean, double variance, int seed ) ;
\end{verbatim}
\index{A2_fillRandomNormal@{\tt A2\_fillRandomNormal()}}
This method fills the matrix with random numbers taken from a
normal distribution with mean {\tt mean} and variance {\tt
variance} using the {\tt Drand} object.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_fillWithIdentity ( A2 *mtx ) ;
\end{verbatim}
\index{A2_fillWithIdentity@{\tt A2\_fillWithIdentity()}}
This method fills the matrix with the identity matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL} or if {\tt n1 != n2},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_zero ( A2 *mtx ) ;
\end{verbatim}
\index{A2_zero@{\tt A2\_zero()}}
This method fills the matrix with zeros.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_copy ( A2 *mtxA, A2 *mtxB ) ;
\end{verbatim}
\index{A2_copy@{\tt A2\_copy()}}
This method copies entries from matrix {\tt mtxB}
into matrix {\tt mtxA}.
Note, {\tt mtxA} and {\tt mtxB} need not be of the same size,
the leading {\tt min(mtxA->n1,mtxB->n1)} rows
and {\tt min(mtxA->n2,mtxB->n2)} columns are copied.
\par \noindent {\it Error checking:}
If {\tt mtxA} or {\tt mtxB} is {\tt NULL},
or if the matrices are not of the same type,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_sub ( A2 *mtxA, A2 *mtxB ) ;
\end{verbatim}
\index{A2_sub@{\tt A2\_sub()}}
This method subtracts entries in matrix {\tt mtxB}
from entries in matrix {\tt mtxA}.
Note, {\tt mtxA} and {\tt mtxB} need not be of the same size,
the leading {\tt min(mtxA->n1,mtxB->n1)} rows
and {\tt min(mtxA->n2,mtxB->n2)} columns are subtracted.
\par \noindent {\it Error checking:}
If {\tt mtxA} or {\tt mtxB} is {\tt NULL},
or if the matrices are not of the same type,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_swapRows ( A2 *mtx, int irow1, int irow2 ) ;
\end{verbatim}
\index{A2_swapRows@{\tt A2\_swapRows()}}
This method swaps rows {\tt irow1} and {\tt irow2} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
or if {\tt irow1} or {\tt irow2} are out of range,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_swapColumns ( A2 *mtx, int jcol1, int jcol2 ) ;
\end{verbatim}
\index{A2_swapColumns@{\tt A2\_swapColumns()}}
This method swaps columns {\tt jcol1} and {\tt jcol2} of the matrix.
\par \noindent {\it Error checking:}
If {\tt mtx} is {\tt NULL},
or if {\tt jcol1} or {\tt jcol2} are out of range,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_copyEntriesToVector ( A2 *mtx, int length, double dvec[],
int copyflag, int storeflag ) ;
\end{verbatim}
\index{A2_copyEntriesToVector@{\tt A2\_copyEntriesToVector()}}
This method copies selected entries from {\tt mtx} into the vector
{\tt dvec[]} with length {\tt length}.
The return value is the number of entries copied.
This method is used during the $QR$ factorization to extract factor
entries and update matrix entries from a front.
All entries may be copied, or
only the diagonal, lower or upper entries,
and the entries may be copied to {\tt dvec[]} by rows or by
columns.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt dvec} is {\tt NULL},
or if {\tt length} is not as large as the number of entries to be
copied,
or if {\tt copyflag} is not one of {\tt A2\_STRICT\_LOWER},
{\tt A2\_LOWER}, {\tt A2\_DIAGONAL}, {\tt A2\_UPPER},
{\tt A2\_STRICT\_UPPER} or {\tt A2\_ALL\_ENTRIES},
or if {\tt storeflag} is not one of {\tt A2\_BY\_ROWS}
or {\tt A2\_BY\_COLUMNS},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
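\par
The sketch below is not part of the library; it shows one way the
utility methods above might be combined during a factorization step:
the upper entries of a front are copied into a work vector by rows and
the front is then cleared.
The header name, and the assumption that {\tt mtx} and {\tt dvec[]}
have been set up by the caller with {\tt dvec[]} large enough, are
illustrative only.
\begin{verbatim}
#include <stdio.h>
#include "A2.h"      /* assumed header name for the A2 object */

/* sketch only: harvest the upper entries of a front, then zero the front */
int harvestUpper ( A2 *mtx, int length, double dvec[] ) {
   int   nent ;

   nent = A2_copyEntriesToVector(mtx, length, dvec, A2_UPPER, A2_BY_ROWS) ;
   A2_zero(mtx) ;

   return(nent) ;
}
\end{verbatim}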
\par
\subsection{IO methods}
\label{subsection:A2:proto:IO}
\par
There are the usual eight IO routines plus a method to write the
object to a Matlab file.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_readFromFile ( A2 *mtx, char *fn ) ;
\end{verbatim}
\index{A2_readFromFile@{\tt A2\_readFromFile()}}
\par
This method reads an {\tt A2} object from a file.
It tries to open the file and if it is successful,
it then calls {\tt A2\_readFromFormattedFile()} or
{\tt A2\_readFromBinaryFile()},
closes the file
and returns the value returned from the called routine.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fn} are {\tt NULL},
or if {\tt fn} is not of the form
{\tt *.a2f} (for a formatted file)
or {\tt *.a2b} (for a binary file),
an error message is printed and the method returns zero.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_readFromFormattedFile ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_readFromFormattedFile@{\tt A2\_readFromFormattedFile()}}
\par
This method reads an {\tt A2} object from a formatted file
whose pointer is {\tt fp}.
If there are no errors in reading the data,
the value {\tt 1} is returned.
If an IO error is encountered from {\tt fscanf}, zero is returned.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_readFromBinaryFile ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_readFromBinaryFile@{\tt A2\_readFromBinaryFile()}}
\par
This method reads an {\tt A2} object from a binary file
whose pointer is {\tt fp}.
If there are no errors in reading the data,
the value {\tt 1} is returned.
If an IO error is encountered from {\tt fread}, zero is returned.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_writeToFile ( A2 *mtx, char *fn ) ;
\end{verbatim}
\index{A2_writeToFile@{\tt A2\_writeToFile()}}
\par
This method writes an {\tt A2} object to a file.
It tries to open the file and if it is successful,
it then calls {\tt A2\_writeToFormattedFile()} or
{\tt A2\_writeToBinaryFile()},
closes the file
and returns the value returned from the called routine.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fn} are {\tt NULL},
or if {\tt fn} is not of the form
{\tt *.a2f} (for a formatted file)
or {\tt *.a2b} (for a binary file),
an error message is printed and the method returns zero.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_writeToFormattedFile ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_writeToFormattedFile@{\tt A2\_writeToFormattedFile()}}
\par
This method writes an {\tt A2} object to a formatted file
whose pointer is {\tt fp}.
If there are no errors in writing the data,
the value {\tt 1} is returned.
If an IO error is encountered from {\tt fprintf}, zero is returned.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int A2_writeToBinaryFile ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_writeToBinaryFile@{\tt A2\_writeToBinaryFile()}}
\par
This method writes an {\tt A2} object to a binary file
whose pointer is {\tt fp}.
If there are no errors in writing the data,
the value {\tt 1} is returned.
If an IO error is encountered from {\tt fwrite}, zero is returned.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_writeForHumanEye ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_writeForHumanEye@{\tt A2\_writeForHumanEye()}}
\par
This method writes an {\tt A2} object to a file in an easily
readable format.
The method {\tt A2\_writeStats()} is called to write out the
header and statistics.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_writeStats ( A2 *mtx, FILE *fp ) ;
\end{verbatim}
\index{A2_writeStats@{\tt A2\_writeStats()}}
\par
This method writes a header and some statistics to a file.
\par \noindent {\it Error checking:}
If {\tt mtx} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void A2_writeForMatlab ( A2 *mtx, char *mtxname, FILE *fp ) ;
\end{verbatim}
\index{A2_writeForMatlab@{\tt A2\_writeForMatlab()}}
\par
This method writes the entries of the matrix to a file
in Matlab format.
The name of the matrix is {\tt mtxname}.
\par \noindent {\it Error checking:}
If {\tt mtx}, {\tt mtxname} or {\tt fp} are {\tt NULL},
an error message is printed and zero is returned.
%-----------------------------------------------------------------------
\end{enumerate}
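\par
As a short illustration of the IO methods (again not part of the
library), the sketch below writes an {\tt A2} object to a formatted
file and reads it back into a second object.
The file name is chosen for the example; the {\tt .a2f} suffix selects
formatted IO as described above, and both calls return zero on failure.
\begin{verbatim}
#include <stdio.h>
#include "A2.h"      /* assumed header name for the A2 object */

/* sketch only: both objects are assumed to be created elsewhere */
int roundTrip ( A2 *mtxA, A2 *mtxB ) {
   int   rc ;

   rc = A2_writeToFile(mtxA, "front.a2f") ;
   if ( rc == 0 ) {
      return(0) ;   /* write failed; an error message was already printed */
   }
   return(A2_readFromFile(mtxB, "front.a2f")) ;
}
\end{verbatim}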
\section{Pickpocket}\label{sec:pickpocket}
\textbf{Skill, Passive, Source(10 Gold), Repeatable}\\
You can add your relevant game mode level to checks concerning the picking of pockets.
This includes actually stealing a person's wallet, but also tricks like stealing a ring off of their finger while shaking hands, or recognizing how much money a person might have on them.
"alphanum_fraction": 0.7989276139,
"avg_line_length": 93.25,
"ext": "tex",
"hexsha": "6644a2a4b208d78156e9d4b89e9f3823ab280ba0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "perks/skills/pickpocket.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "perks/skills/pickpocket.tex",
"max_line_length": 188,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "perks/skills/pickpocket.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 85,
"size": 373
} |
\section{Previous work}
The AMOP \cite{Kiczales:1991:AMP:574212} contains a section titled
``Living with Circularity'', which describes the essential nature of
the two kinds of issues discussed here, namely \emph{bootstrapping
issues} and \emph{metastability issues}. That section does not
contain a complete list of all possible issues in any implementation
of \clos{}, and probably could not contain such a list, since it would
depend on the exact organization of each particular implementation.
The section in the AMOP has two subsections, one for bootstrapping
issues and one for metastability issues. The subsection on
bootstrapping issues is more comprehensive.
\subsection{Bootstrapping issues}
The subsection in the AMOP on bootstrapping issues contains two explicit
issues.
The first one involves the class \texttt{standard-class}, which is the
metaclass of all standard classes, including itself. The authors
simply suggest creating this class manually.
The second issue involves the fact that generic functions are required
in order to create classes, but during bootstrapping, there are no
generic functions, since generic functions are instances of classes.
The technique used to handle such issues is to define ordinary
functions to contain code for essential methods, so that such
functions can be called during bootstrapping. To avoid code
duplication, the methods defined later in the bootstrapping process
simply call those functions.
\subsection{Metastability issues}
The subsection in the AMOP on metastability issues also contains two
issues.
The first issue involves the function \texttt{slot-value}. As
described, the scenario does not correspond to the specification,
because the signature of the function \texttt{slot-value-using-class}
used in the scenario is different from its definition in the
specification. Either way, the basis of the scenario is that
\texttt{slot-value} on some instance would need to access the list of
slot descriptions of the class of the instance, and that list is
contained in a slot, so that a recursive use of \texttt{slot-value}
would be required on the class of the instance. However, in a
high-performance implementation, a slot reader would not call
\texttt{slot-value}. The reason is that \texttt{slot-value} is much
too general, so that unnecessary work would be done. In particular,
\texttt{slot-value} must find a slot description metaobject with a
particular name, whereas this name is already known in the slot
reader function. Instead, in a high-performance implementation, the
slot reader would access the slot directly by location.%
\footnote{The situation is a bit more complicated due to the fact that
the location may vary according to the exact subclass of the
specializer of the reader method. In fact, it can even be the case
that the slot has a different allocation in different subclasses.}
As described in the introduction of this paper, the second issue has
to do with \texttt{compute-discriminating-function}. Again, the
scenario described is an approximation of that of a real
high-performance implementation. Their example involves adding a
method to some generic function \texttt{F}, which would trigger the
computation of a new discriminating function for \texttt{F}. The
metastability issue occurs when \texttt{F} happens to be the function
\texttt{compute-discriminating-function}. In that case,
\texttt{compute-discriminating-function} would be called with itself
as an argument, in which case, according to the AMOP, ``the game would
of course be over.'' Even in a naive implementation without
effective-method caching, the scenario would be more complicated than
that. In such an implementation,
\texttt{compute-discriminating-function} would call
\texttt{compute-applicable-methods-using-classes}%
\footnote{We omit the possibility of the presence of \texttt{eql}
specializers in order to keep the description manageable.}
and then the function \texttt{compute-effective-method}. In an
implementation without caching, the real metastability issue occurs in
these last two functions. When one of these functions is called, the
discriminating function will be invoked, and therefore they will be
called recursively.
In a high-performance implementation, on the other hand, what really
happens depends on the contents of the cache. If the cache contains
an entry that applies to instances of the class
\texttt{standard-generic-function}, then no metastability issue is
present. In such an implementation, the issue occurs only in the
initial stages where the cache is empty, and after the cache has been
flushed, should the implementation use this technique.
%% LocalWords: metastability metaclass accessors specializer
%% LocalWords: metaobject subclasses specializers
"alphanum_fraction": 0.8056537102,
"avg_line_length": 51.7311827957,
"ext": "tex",
"hexsha": "34a3890f61fb4a5de0c1d196a558f41e91ee2d63",
"lang": "TeX",
"max_forks_count": 80,
"max_forks_repo_forks_event_max_datetime": "2022-03-15T05:30:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-03-06T12:52:05.000Z",
"max_forks_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "gwerbin/SICL",
"max_forks_repo_path": "Papers/Satiation/sec-previous.tex",
"max_issues_count": 85,
"max_issues_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_issues_repo_issues_event_max_datetime": "2022-02-18T11:06:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-03-25T00:31:09.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "gwerbin/SICL",
"max_issues_repo_path": "Papers/Satiation/sec-previous.tex",
"max_line_length": 72,
"max_stars_count": 842,
"max_stars_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "gwerbin/SICL",
"max_stars_repo_path": "Papers/Satiation/sec-previous.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T14:03:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-12T15:44:23.000Z",
"num_tokens": 1078,
"size": 4811
} |
\section{sysmonitor.pl \label{ssysmonitor}}
\cc{sysmonitor.pl} monitors the system status and provides notification of alarm conditions via
files written to a specified directory, and via calling the alarm delivery system. In particular,
\cc{lcdmonitor} reads this directory to pick up current alarms.
Some of the conditions currently monitored include:
\begin{itemize}
\item TIC logging running (via its status file)
\item reference oscillator logging running
\item reference is locked
\item reference has lost power (PRS10 only)
\item GPS logging running
\item GPS receiver is tracking sufficient satellites
\item RAID status (where RAID is used)
\item NTP reference clocks are healthy
\end{itemize}
The run time for an alarm must integrate up to the configured threshold before an alarm is issued.
Similarly, the run time for a clearing alarm must integrate to zero before a clear is issued.
\subsection{usage}
\cc{sysmonitor.pl} is normally started by the init system, for example by \cc{systemd} on Debian.
To run \cc{sysmonitor.pl} on the command line, use:
\begin{lstlisting}
sysmonitor.pl [OPTION]
\end{lstlisting}
The command line options are:
\begin{description*}
\item[-c \textless file\textgreater] use the specified configuration file
\item[-d] run in debugging mode
\item[-h] print help and exit
\item[-v] print version information and exit
\end{description*}
To manually run \cc{sysmonitor.pl}, you may need to disable the system service
and kill any running \cc{sysmonitor.pl} process.
\subsection{configuration file}
The configuration file uses the format described in \ref{sConfigFileFormat}.\\
{\bfseries alarm path}\\
This defines the directory to which alarm notifications are written.\\
\textit{Example:}
\begin{lstlisting}
alarm path = /usr/local/log/alarms
\end{lstlisting}
{\bfseries alarm threshold}\\
This defines the threshold at which alarms are raised. The units are seconds.\\
\textit{Example:}
\begin{lstlisting}
alarm threshold = 60
\end{lstlisting}
{\bfseries alerter queue}\\
Alarms can be delivered by other methods using \cc{alerter}. This entry defines the queue used by \cc{alerter}.\\
\textit{Example:}
\begin{lstlisting}
alerter queue = /usr/local/log/alert.log
\end{lstlisting}
{\bfseries gpscv account}\\
This defines the account used for GPSCV processing (and implicitly, the location of \cc{gpscv.conf}).\\
\textit{Example:}
\begin{lstlisting}
gpscv account = cvgps
\end{lstlisting}
{\bfseries log file}\\
This defines the file used for logging of sysmonitor's operation and alarm events.\\
\textit{Example:}
\begin{lstlisting}
log file = /usr/local/log/sysmonitor.log
\end{lstlisting}
{\bfseries ntp account}\\
This defines the account used for NTP-related logging and processing.\\
\textit{Example:}
\begin{lstlisting}
ntp account = ntp-admin
\end{lstlisting}
{\bfseries ntpd refclocks}\\
This specifies a list of sections, each of which defines an \cc{ntpd} refclock that is to be monitored.\\
\textit{Example:}
\begin{lstlisting}
ntpd refclocks = PPS,NMEA
\end{lstlisting}
An \cc{ntpd} refclock section looks like:\\
\begin{lstlisting}
[NMEA]
refid = 127.127.20.0
name = NMEA
\end{lstlisting}
\subsection{log file}
\cc{sysmonitor.pl} creates a log file. The default file is \cc{/usr/local/log/sysmonitor.log}.
"alphanum_fraction": 0.771935188,
"avg_line_length": 34.4315789474,
"ext": "tex",
"hexsha": "75189c9aa0543548c0132536f9130fd6171b846f",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-01-28T05:54:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-06-18T19:48:55.000Z",
"max_forks_repo_head_hexsha": "34c7641ddace2bfaa13175367d4f5dfc4861d2dc",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "openttp/openttp",
"max_forks_repo_path": "doc/manual/srcs/sysmonitor.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "34c7641ddace2bfaa13175367d4f5dfc4861d2dc",
"max_issues_repo_issues_event_max_datetime": "2019-05-12T01:15:00.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-03-24T04:02:50.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "openttp/openttp",
"max_issues_repo_path": "doc/manual/srcs/sysmonitor.tex",
"max_line_length": 111,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "34c7641ddace2bfaa13175367d4f5dfc4861d2dc",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "openttp/openttp",
"max_stars_repo_path": "doc/manual/srcs/sysmonitor.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-16T09:32:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-08-15T03:15:35.000Z",
"num_tokens": 867,
"size": 3271
} |
\documentclass{article}
\usepackage{amsmath}
\usepackage[margin=1.0in]{geometry}
\usepackage{xcolor}
\begin{document}
\noindent
Does $\displaystyle \sum_{n=1}^\infty \frac{\cos \frac\pi3 n}{n^2}$
diverge, converge absolutely, or converge conditionally?
\subsection*{Note}
The series $\displaystyle \sum_{n=1}^\infty \frac{\cos \frac\pi3 n}{n^2}$ is NOT an alternating series.
\subsection*{Solution}
We consider
\[
\sum_{n=1}^\infty \left|\frac{\cos \frac\pi3 n}{n^2}\right|
= \sum_{n=1}^\infty \frac{|\cos \frac\pi3 n|}{n^2}
\]
Note that
\[ 0 \leq |\cos \frac\pi3 n| \leq 1\]
so, dividing all three parts of the inequality by $n^2$, we have
\[ 0 \leq \frac{|\cos \frac\pi3 n|}{n^2} \leq \frac1{n^2}.\]
The series $\displaystyle \sum \frac1{n^2}$ converges by the $p$-test. Since the terms of the series $\displaystyle \sum_{n=1}^\infty \frac{|\cos \frac\pi3 n|}{n^2}$ are positive and bounded above by $\frac1{n^2}$, the series $\displaystyle \sum_{n=1}^\infty \frac{|\cos \frac\pi3 n|}{n^2}$ converges by the Direct Comparison Test.
Therefore the series $\displaystyle \sum_{n=1}^\infty \frac{\cos \frac\pi3 n}{n^2}$ converges by the Absolute Convergence Test. In fact, the series $\displaystyle \sum_{n=1}^\infty \frac{\cos \frac\pi3 n}{n^2}$ is absolutely convergent.
\end{document}
\documentclass[letterpaper]{article}
\usepackage[margin=1in]{geometry}
\usepackage[hidelinks]{hyperref}
\usepackage{amsmath}
\hypersetup{colorlinks,allcolors=blue}
\setcounter{tocdepth}{2}
\begin{document}
\begin{center}
\Large
Canadian Amateur Radio Operator Guide
Advanced
\large
with notes by VE2AAY
\end{center}
\pagenumbering{roman}
\tableofcontents
\newpage
\section*{Keywords}
\addcontentsline{toc}{section}{Keywords}
\begin{description}
\item[farad] The unit of capacitance (symbol: F), 1 farad is the capacitance of a capacitor that has a charge of 1 coulomb when applied voltage drop of 1 volt.
\item[henry] The unit of inductance (symbol: H), 1 henry is the amount of inductance that causes a voltage of one volt, when the current is changing at a rate of one ampere per second.
\item[reactance] the nonresistive component of impedance in an AC circuit, arising from the effect of inductance or capacitance or both and causing the current to be out of phase with the electromotive force causing it.
The unit of reactance is the ohm.
\item[RC circuit] A resistor-capacitor circuit (RC circuit), or RC filter or RC network, is an electric circuit composed of resistors and capacitors driven by a voltage or current source. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit.
\item[RL circuit] A resistor-inductor circuit (RL circuit), or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor and is the simplest type of RL circuit.
\item[RLC circuit] A resistor-inductor-capacitor circuit (RLC circuit) consists of a resistance, a capacitance and an inductance connected to an alternating supply.
\item[Skin Effect] The tendency of AC to flow in an increasingly thinner layer at the surface of a conductor as frequency increases.
\end{description}
\newpage
\pagenumbering{arabic}
\section{Advanced Theory}
\subsection{Fundamentals}
\subsubsection{Fields}
\begin{description}
\item[electromagnetic field] the magnetic field created around a conductor carrying current.
\item[magnetic field] a space around a magnet or a conductor where a magnetic force is present.
The `Left-hand Rule':
position the left hand with your thumb pointing in the direction of electron flow;
encircle the conductor with the remaining fingers, the fingers point in the direction of the magnetic lines of force.
Using conventional current flow, this would become the Right-hand rule.
A magnetic field is oriented about a conductor in relation to the direction of electron flow \textbf{in the direction determined by the left-hand rule}.
\item[electrostatic field] the electric field present between objects with different static electrical charges.
An electrostatic field is the electric field present between objects with different static electrical charges.
Voltage across a capacitor creates an electrostatic field between the plates.
\item[electric field] a space where an electrical charge exerts a force (attraction or repulsion) on other charges.
\end{description}
\begin{itemize}
\item Electromagnetic and electrostatic fields are capable of storing energy. Energy in this state is \textbf{potential energy}.
\end{itemize}
\subsubsection{Capacitors}
Capacitors store energy in an \textbf{electrostatic field}.
The capacitance in \textbf{farads} is one factor influencing how much energy can be stored in a capacitor.
One farad accepts a charge of one coulomb when subjected to one volt.
Capacitive reactance is calculated as follows:
$$ X_C = \frac{1}{2 \pi f C} $$
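As a worked illustration (the component values here are chosen for the example, not taken from the question bank), a 100 pF capacitor at 7 MHz presents a capacitive reactance of
$$ X_C = \frac{1}{2 \pi \times 7 \times 10^{6} \times 100 \times 10^{-12}} \approx 227\ \Omega $$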
\subsubsection{Inductors}
Inductors store energy in an electromagnetic field.
The inductance in \textbf{henries} is one factor influencing how much energy can be stored in an inductor.
One henry produces one volt of counter EMF with current changing at a rate of one ampere per second.
Inductive reactance is calculated as follows:
$$ X_L = 2 \pi f L $$
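Similarly, for illustration, a 2.5 $\mathrm{\mu H}$ inductor at 7 MHz presents an inductive reactance of
$$ X_L = 2 \pi \times 7 \times 10^{6} \times 2.5 \times 10^{-6} \approx 110\ \Omega $$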
\subsubsection{Skin Effect}
Skin Effect is the tendency of AC to flow in an increasingly thinner layer at the surface of a conductor as frequency increases.
\begin{itemize}
\item Skin Effect causes most of an RF current to flow along the surface of a conductor.
\item The resistance of a conductor is different for RF currents than for direct currents because of the Skin Effect.
\end{itemize}
\newpage
\subsection{Resistor-capacitor circuits}
\subsubsection{Time constant}
The time constant of an RC circuit (in seconds), is equal to the product of the circuit resistance (in ohms) and the circuit capacitance (in farads).
$$ \tau = R \cdot C $$
It is the time required to:
\begin{itemize}
\item \textbf{charge} the capacitor from an initial charge voltage of zero to approximately \textbf{63.2\%} of the value of an applied DC \textbf{voltage}, or
\item \textbf{discharge} the capacitor to approximately \textbf{36.8\%} of its initial charge \textbf{voltage}.
\end{itemize}
When charging a capacitor, every time constant reduces the difference between the capacitor voltage and the applied DC voltage by 63.2\%.
$$ \Delta V_f = \Delta V_i \cdot 0.368^{t / \tau} $$
Where $ \Delta V $ is the difference between the voltage of the capacitor and the applied DC voltage, and $ t $ is time in seconds.
When discharging a capacitor, every time constant reduces the voltage of the capacitor by 63.2\%.
$$ V_f = V_i \cdot 0.368^{t / \tau} $$
Where $ V $ is the voltage of the capacitor and $ t $ is time in seconds.
\begin{itemize}
\item Charging a capacitor, the voltage after 1, 2, and 5 time constants is respectively 63\%, 87\%, and 100\% of the final value.
\item Discharging a capacitor, the voltage is respectively 37\% ($ 100 - 63 $) and 13\% ($ 100 - 87 $) of the initial voltage after 1 and 2 time constants.
\end{itemize}
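As a worked example with assumed component values, a 100 $\mathrm{k\Omega}$ resistor and a 10 $\mathrm{\mu F}$ capacitor give
$$ \tau = R \cdot C = (100 \times 10^{3}) \times (10 \times 10^{-6}) = 1\ \mathrm{s} $$
so, charging from zero toward a 20 V supply, the capacitor reaches about $0.632 \times 20 \approx 12.6$ V after one time constant and about $0.87 \times 20 \approx 17.4$ V after two.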
\newpage
\subsection{Resistor-inductor circuits}
\subsubsection{Time constant}
The time constant of an RL circuit (in seconds), is equal to the circuit inductance (in henries) divided by the circuit resistance (in ohms).
$$ \tau = L / R $$
It is the time required to:
\begin{itemize}
\item \textbf{build} the \textbf{current} in the circuit up to \textbf{63.2\%} of its maximum value.
\end{itemize}
When building current in a circuit, every time constant reduces the difference between the circuit current and its maximum value by 63.2\%.
$$ \Delta I_f = \Delta I_i \cdot 0.368^{t / \tau} $$
Where $ \Delta I $ is the difference between the current in a circuit and the maximum value, and $ t $ is time in seconds.
\begin{itemize}
\item The current after 1, 2 and 5 time constants is respectively 63\%, 87\%, and 100\% of the final value.
\end{itemize}
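For example, with assumed values of $L = 5$ H and $R = 10\ \Omega$, the time constant is
$$ \tau = L / R = 0.5\ \mathrm{s} $$
so the current reaches roughly 63\% of its final value 0.5 s after the supply is applied, and about 87\% after 1 s.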
\subsubsection{Counter electromotive force}
Back EMF or `counter electromotive force' is the voltage induced by changing current in an inductor. It is the force opposing changes in current through inductors.
\textbf{Back EMF} is \textbf{A voltage that opposes the applied EMF}.
\newpage
\subsection{Resistor-inductor-capacitor circuits}
\subsubsection{Resonance}
Resonance occurs in an RLC circuit when the supply frequency causes the voltages across L and C to be equal and opposite in phase.
At resonance, reactances are equal.
$$ X_L = X_C $$
The resonant frequency of an RLC circuit is calculated as follows for inductance in henries and capacitance in farads:
$$ f = \frac{1}{2 \pi \sqrt{LC}} $$
Restating the equation for frequency in megahertz, inductance in microhenries, and capacitance in picofarads results in the following equation:
$$ f(\mathrm{MHz}) = \frac{1000}{2 \pi \sqrt{L (\mathrm{\mu H}) \cdot C (\mathrm{p F})}} $$
These equations can be rearranged to solve for inductance or capacitance as follows:
\begin{align*}
L &= \frac{1}{4 \pi^2 f^2 C} \\
C &= \frac{1}{4 \pi^2 f^2 L}
\end{align*}
Or in megahertz, microhenries, and picofarads:
\begin{align*}
L (\mathrm{\mu H}) &= \frac{1000000}{4 \pi^2 f (\mathrm{MHz})^2 C (\mathrm{p F})} \\
C (\mathrm{p F}) &= \frac{1000000}{4 \pi^2 f (\mathrm{MHz})^2 L (\mathrm{\mu H})}
\end{align*}
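As a worked example with assumed values, $L = 2\ \mathrm{\mu H}$ and $C = 15\ \mathrm{pF}$ resonate at
$$ f(\mathrm{MHz}) = \frac{1000}{2 \pi \sqrt{2 \times 15}} \approx 29.1\ \mathrm{MHz} $$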
\subsubsection{Quality factor}
The quality factor or selectivity, $ Q $, is proportional to the inverse of bandwidth for a constant resonant frequency.
This relationship is illustrated by the fact that resonant frequency is equal to the product of the quality factor and bandwidth of a circuit.
The Q factor can be calculated by dividing resistance by reactance.
\begin{align*}
Q &= \frac{R}{X} \\
Q &= \frac{R}{2 \pi f L} = 2 \pi f C R = R \sqrt{\frac{C}{L}}
\end{align*}
A resistor is often included in a parallel resonant circuit \textbf{to decrease the Q and increase the bandwidth}.
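For instance, with assumed values, a circuit resonant at 7.0 MHz with $Q = 100$ has a bandwidth of $7.0\ \mathrm{MHz} / 100 = 70\ \mathrm{kHz}$; adding a parallel resistor lowers the Q and widens this bandwidth.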
\newpage
\section{Advanced Components and Circuits}
\subsection{Filters}
There are 4 categories of filters: high-pass, low-pass, band-pass, and band-stop.
The three general groupings of filters are \textbf{high-pass, low-pass, and band-pass}.
Note: \textbf{The bandwidth of a fast-scan TV channel is 6 MHz; that is much too wide for any of the filters listed}.
\subsubsection{Crystal filters}
A crystal lattice filter is \textbf{filter with narrow bandwidth and steep skirts made using quartz crystals}.
\begin{description}
\item[Crystal lattice filter] uses two matched pairs of series crystals and a higher-frequency matched pair of shunt crystals in a balanced configuration.
\item[Half-lattice crystal filter] uses two crystals in an unbalanced configuration.
\end{description}
The frequency separation between the crystals (\textbf{relative frequencies of the individual crystals}) sets the bandwidth and the response shape.
Speech frequencies on a communication-grade SSB voice channel range from 300 hertz to 3000 hertz and thus require a bandwidth of $ 2.7 \mathrm{kHz} $.
$ 2.4 \mathrm{kHz} $ is a good compromise between fidelity and selectivity.
\begin{itemize}
\item For single-sideband phone emissions, $ \mathbf{2.4} \mathrm{\textbf{kHz}} $ would be the bandwidth of a good crystal lattice filter.
\end{itemize}
A quartz crystal filter is superior to an LC filter for narrow bandpass applications because of the \textbf{crystal's high Q}.
Piezoelectric crystals behave like tuned circuits with an extremely high Quality Factor (in excess of 25 000). Their accuracy and stability are outstanding.
\begin{itemize}
\item Electrically, a crystal looks like \textbf{a very high Q tuned circuit}.
\end{itemize}
The piezoelectric property of quartz (generating electricity under mechanical stress, bending when subjected to electric field) is used in crystal-based oscillators, radio-frequency crystal filters, such as the lattice filter, and crystal microphones.
The Active Filter is based on an active device, generally an operational amplifier, and a network of resistors and capacitors.
\begin{itemize}
\item Crystal oscillators, filters, and microphones depend upon the \textbf{piezoelectric effect}.
\item Crystals are not applicable to \textbf{Active Filters}.
\end{itemize}
Piezoelectricity is generated by \textbf{deforming certain crystals}.
The piezoelectric property of quartz is two-fold:
\begin{itemize}
\item apply mechanical stress to a crystal and it produces a small electrical field;
\item subject quartz to an electrical field and the crystal changes dimensions slightly.
\end{itemize}
Crystals are capable of resonance either at a fundamental frequency depending on their physical dimensions or at overtone frequencies near odd-integer multiples (3rd, 5th, 7th, etc.).
Piezoelectric crystals can serve as filters because of their extremely high ``Q'' (greater than 25 000) or as stable, noise-free, and accurate frequency references.
\subsubsection{Butterworth and Chebyshev filters}
The Butterworth class of filters exhibit ``maximally flat response'': smooth response, no passband ripple.
Their frequency response is as flat as mathematically possible in the passband, no bumps or variations (ripple) [first described by British engineer Stephen Butterworth].
\begin{itemize}
\item The distinguishing feature of a Butterworth filter is that \textbf{it has a maximally flat response over its pass-band}.
\item The primary advantage of the Butterworth filter over the Chebyshev filter is that \textbf{it has maximally flat response over its passband}.
\end{itemize}
Here is a mnemonic trick: ``The Butterworth's response is smooth as butter''.
The Chebyshev class of filters [in honour of Pafnuty Chebyshev, a Russian mathematician] have steeper cutoff slopes and more ripple than Butterworth filters.
Elliptic filters are sharper than the previous two.
\begin{itemize}
\item The distinguishing feature of a Chebyshev filter is that \textbf{it allows ripple in the passband in return for steeper skirts}.
\item The primary advantage of the Chebyshev filter over the Butterworth filter is that \textbf{it allows ripple in the passband in return for steeper skirts}.
\item \textbf{A Chebyshev filter} is described as having ripple in the passband and a sharp cutoff.
\end{itemize}
\subsubsection{Resonant cavities}
The quarter wavelength Resonant Cavity behaves like a very high Q filter.
Due to their physical size, they become practical only at VHF frequencies.
\begin{itemize}
\item At 50 MHz (6 m), the length of the cavity is \textbf{1.5 m} (one quarter wavelength).
\item The \textbf{cavity} filter type is not suitable for use at audio and low radio frequencies.
\end{itemize}
\subsubsection{Helical resonators}
The Helical Resonator, based on the concept of a resonant helically-wound section of transmission line within a shielded enclosure, achieves selectivity comparable to the quarter-wave resonant cavity but with a substantial size reduction.
\begin{itemize}
\item A device which helps with receiver overload and spurious responses at VHF, UHF and above may be installed in the receiver front end. It is called a \textbf{helical resonator}.
\end{itemize}
\newpage
\subsection{Semiconductors}
\subsubsection{Materials}
The most basic semiconductor materials are silicon and germanium.
Atoms in metallic elements hold their peripheral electrons loosely, such materials make good conductors.
Peripheral electrons in non-metallic elements are tightly bound, such materials are insulators.
Germanium and silicon fall somewhere between the two categories but are mostly insulators when pure.
Doping with impurities increases their conductivity.
\begin{itemize}
\item An element which is sometimes an insulator and sometimes a conductor is called a \textbf{semiconductor}.
\item \textbf{Silicon and germanium} are widely used in semiconductor devices exhibit both metallic and non-metallic characteristics.
\item Silicon, in its pure form, is \textbf{an insulator}.
\item A semiconductor is said to be doped when it has added to it small quantities of \textbf{impurities}.
\end{itemize}
Pure germanium and silicon are doped with impurities to produce the basic semiconductor materials.
Certain doping impurities add free electrons, forming N-Type material while others accept electrons, thus creating `holes' found in P-Type material.
\begin{itemize}
\item \textbf{P-type} semiconductor material contains fewer free electrons than pure germanium or silicon crystals.
\item \textbf{Holes} are the majority charge carriers in P-type semiconductor material.
\item \textbf{N-type} semiconductor material contains more free electrons than pure germanium or silicon crystals.
\item \textbf{Free electrons} are the majority charge carriers in N-type semiconductor material.
\end{itemize}
Gallium arsenide (GaAs) devices can work at higher frequencies with less noise than their silicon counterparts.
\begin{itemize}
\item \textbf{At microwave frequencies}, gallium-arsenide is used as a semiconductor material in preference to germanium or silicon.
\end{itemize}
\subsubsection{Diodes}
\begin{description}
\item[Zener diodes] maintain a constant voltage across a range of currents.
\item[The Varactor] (or Varicap) is a diode used under reverse bias as a ``voltage-variable capacitor''.
\item[Hot-carrier diodes] (or Schottky-barrier diodes) have lower forward voltage and good high-frequency response:
\begin{itemize}
\item their speed make them useful in Very High Frequency mixers or detectors;
\item in power circuits, they are excellent rectifiers in switching power supplies.
\end{itemize}
\item[PIN diodes] (with a layer of undoped or lightly doped `intrinsic' silicon between the P and N regions) are used as switches or attenuators.
\end{description}
\begin{itemize}
\item The principal characteristic of a Zener diode is \textbf{a constant voltage under conditions of varying current}.
\item A Zener diode is a device used to \textbf{regulate voltage}.
\item A \textbf{Varactor} varies its internal capacitance as the voltage applied to its terminals varies.
\item A common use for the hot-carrier (Schottky) diode is \textbf{as VHF and UHF mixers and detectors}.
\item One common use for PIN diodes is \textbf{As an RF switch}.
\end{itemize}
Diodes conduct in one direction only:
under forward bias, maximum forward current is limited by acceptable junction temperature.
The voltage drop across the junction (volts) multiplied by the forward current (amperes) gives rise to heat dissipation (watts).
Surviving a reverse bias is determined by the Peak Inverse Voltage (PIV) rating.
\begin{itemize}
\item \textbf{Junction temperature} limits the maximum forward current in a junction diode.
\item \textbf{Maximum forward current and peak inverse voltage (PIV)} are the major ratings for junction diodes.
\end{itemize}
There are two main categories of semiconductor diodes:
\begin{description}
\item[Point-contact diodes] where a small metal whisker touches the semiconductor material, exhibit low capacitance and serve as RF detectors or UHF mixers.
\item[Junction diodes] are formed with adjacent blocks of P and N material; these are usable from DC to microwave.
\end{description}
\begin{itemize}
\item A common use for point contact diodes is \textbf{as an RF detector}.
\end{itemize}
To calculate power, voltage, or current through a diode:
$$ P = I V $$
where $ P $ is power in watts, $ I $ is current in amperes, and $ V $ is voltage in volts.
\begin{itemize}
\item For example, if a Zener diode rated at 10 V and 50 W was operated at maximum dissipation rating, it would conduct \textbf{5 A}.
\end{itemize}
Heat flows from hot to cold.
If ambient temperature is higher, less heat can be drained from the junction, the junction will reach maximum safe operating temperature quicker.
\begin{itemize}
\item If the temperature is increased, the power handling capability is \textbf{less}.
\end{itemize}
\subsubsection{Transistors}
\begin{description}
\item[In a `common base' configuration] where the Emitter is the input and the Collector is the output, the Alpha factor (or common base forward current transfer ratio) is a ratio of a change in Collector current to the corresponding change in Emitter current.
\item[In a `common emitter' configuration] where the Base is the input and the Collector is the output, the Beta factor (or common emitter forward current gain) is a ratio of a change in Collector current to a given change in Base current.
\end{description}
The Beta factor applies equally to a Common Collector configuration where the Base is also the input.
\begin{itemize}
\item The alpha of a bipolar transistor is \textbf{the change of collector current with respect to emitter current}.
\item The beta of a bipolar transistor is \textbf{the change of collector current with respect to base current}.
\item The change of collector current with respect to base current is called the \textbf{beta}.
\item The alpha of a bipolar transistor is specified for \textbf{common base} configuration.
\item The beta of a bipolar transistor is specified for \textbf{common emitter or common collector} configurations.
\end{itemize}
The terms Emitter, Collector and Base refer to bipolar transistors, of which there are two types: NPN and PNP\@.
The Base-Emitter junction must be forward-biased for Base current to exist.
A positive voltage on the Base supposes P material for conduction to take place, the `sandwich' is thus NPN\@.
Inversely, a negative Base voltage relates to a PNP.
\begin{itemize}
\item \textbf{An NPN transistor} conducts electricity from a negative emitter to a positive collector when its base voltage is made positive.
\item \textbf{A PNP transistor} conducts electricity from a positive emitter to a negative collector when its base is made negative.
\end{itemize}
Because alpha is a number smaller than 1, many authors refer to it as the `common base forward current transfer ratio' rather than a gain.
\begin{itemize}
\item The alpha of a bipolar transistor in common base configuration is \textbf{forward current gain}.
\item The alpha of a bipolar transistor is equal to $\pmb{\beta / (1 + \beta)}$.
\item The current gain of a bipolar transistor in common emitter or common collector compared to common base configuration is \textbf{high to very high}.
\item The beta of a bipolar transistor is equal to $\pmb{\alpha / (1 - \alpha)}$.
\end{itemize}
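As a quick check of these relations (the beta value is assumed for illustration), a transistor with $\beta = 100$ has $\alpha = \beta / (1 + \beta) = 100 / 101 \approx 0.99$; conversely, $\alpha = 0.99$ gives $\beta = \alpha / (1 - \alpha) = 0.99 / 0.01 = 99$.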
\newpage
\subsection{Amplifiers, mixers, and frequency multipliers}
\newpage
\section{Measurements}
\newpage
\section{Power Supplies}
\newpage
\section{Transmitters, Modulation, and Processing}
\newpage
\section{Receivers}
\newpage
\section{Feedlines - Matching and Antenna Systems}
\end{document}
\section{Measurement Procedure}
\setenumerate[2]{label=(\arabic*)}
\begin{enumerate}
\item Adjustment of the Stokes’ viscosity measurement device.
\begin{enumerate}
\item Adjust the knobs beneath the base so that the plumb bob points at the center
of the base.
\item Turn on the two lasers, adjust the beams so that they are parallel and aim
at the plumb line.
\item Remove the plumb and place the graduated flask with castor oil at the
center of the base.
\item Place the guiding pipe on the top of the viscosity measurement device.
\item Put a metal ball into the pipe and check whether the ball, falling down in
the oil, blocks the laser beams. If not, repeat Step 1.
\end{enumerate}
\item Measurement of the (constant) velocity of a falling ball.
\begin{enumerate}
\item Measure the vertical distance $s$ between the two laser beams at least three
times.
\item Put a metal ball into the guiding pipe. Start the stopwatch when the ball
passes through the first beam, and stop it when it passes through the second
one. Record the time $t$ and repeat the procedure at least six times.
\end{enumerate}
\item Measurement of the ball density $\rho_2$.
\begin{enumerate}
\item Use electronic scales to measure the mass of 40 metal balls. Calculate the
average to find the mass of a single ball.
\item Use a micrometer to measure the diameter of the metal balls. Repeat
ten times and calculate the average value.
\item Calculate the ball density $\rho_2$.
\end{enumerate}
\item Measurement of the density $\rho_1$ of the castor oil using the provided
densimeter (one measurement). Use a calliper to measure the inner diameter $D$
of the graduated flask six times. Read the ambient temperature from the
thermometer placed in the lab.
\item Calculate the value of the viscosity coefficient $\mu$ using Eq. (5); a hedged numerical sketch follows this list.
\end{enumerate}
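Equation (5) is not reproduced in this excerpt; the short Python sketch below therefore assumes the usual wall-corrected Stokes result for this setup,
$\mu = (\rho_2-\rho_1)\,g\,d^2\,t / \left[18\,s\,(1+2.4\,d/D)\right]$,
with $d$ the ball diameter, and all numerical values are placeholders to be replaced by the measured averages.
\begin{verbatim}
import math

# Placeholder measurements (replace with the averaged lab values).
g = 9.8             # gravitational acceleration, m/s^2
m_ball = 0.033e-3   # average mass of one ball, kg
d = 2.0e-3          # average ball diameter, m
D = 6.0e-2          # inner diameter of the graduated flask, m
s = 0.15            # vertical distance between the laser beams, m
t = 8.0             # average fall time between the beams, s
rho_oil = 960.0     # castor-oil density rho_1 from the densimeter, kg/m^3

# Ball density rho_2 from mass and diameter (step 3 of the procedure).
rho_ball = m_ball / (math.pi * d**3 / 6.0)

# Terminal velocity and viscosity from the assumed wall-corrected Stokes law.
v = s / t
mu = (rho_ball - rho_oil) * g * d**2 / (18.0 * v * (1.0 + 2.4 * d / D))
print(mu)   # Pa * s
\end{verbatim}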
| {
"alphanum_fraction": 0.7633832976,
"avg_line_length": 38.9166666667,
"ext": "tex",
"hexsha": "79ce5e0b8e45710115665cba4c9811992657a030",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "iamwrm/VP141",
"max_forks_repo_path": "E2/part/3mp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "iamwrm/VP141",
"max_issues_repo_path": "E2/part/3mp.tex",
"max_line_length": 80,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "iamwrm/VP141",
"max_stars_repo_path": "E2/part/3mp.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z",
"num_tokens": 453,
"size": 1868
} |
\documentclass[a4paper,10pt]{paper}%%%%where rsproca is the template name
%%%% *** Do not adjust lengths that control margins, column widths, etc. ***
%%%%%%%%%%% Defining Enunciations %%%%%%%%%%%
\newtheorem{theorem}{\bf Theorem}[section]
\newtheorem{condition}{\bf Condition}[section]
\newtheorem{corollary}{\bf Corollary}[section]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage[version=4]{mhchem}
\usepackage{mathalfa}
\usepackage{psfrag}
\usepackage{subfig}
\usepackage{multirow}
\newtheorem{defi}{Definition}
\usepackage[margin=1in]{geometry}
\newcommand*{\bc}{k_{\textup{B}}}
\newcommand{\ecoli}{{\em E.coli}}
\newcommand{\yeast}{{\em S. cerevisiae}}
\newcommand{\leftP}{({left})}
\newcommand{\rightP}{({right})}
\newcommand{\siac}{{\em N}-acetyl\-neura\-minic acid}
\newcommand{\PPS}{{pentose phosphate cycle}}
\newcommand{\one}{($i$) }
\newcommand{\two}{($ii$) }
\newcommand{\three}{($iii$) }
\newcommand{\four}{($iv$) }
\begin{document}
%%%% Article title to be placed here
\title{Statistical physics of codon usage bias}
\author{%%%% Author details
Deng Yun, Jeremie Kalfon, Dominique Chu, Tobias von der Haar}
%%%%%%%%% Insert author address here
\institution{School of Computing, University of Kent, CT2 7NF, Canterbury, UK\\d.f.chu@kent.ac.uk}
%%%% Subject entries to be placed here %%%%
%\subject{statistical mechanics, theory of computing, theoretical biology}
%%%% Keyword entries to be placed here %%%%
\maketitle
%%%% Abstract text to be placed here %%%%%%%%%%%%
\begin{abstract}
We model codon usage bias
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\keywords{biological computing, entropy, computational performance}
\section{Introduction }
Codon usage bias (CUB), the preferred usage of particular codons over others encoding the same amino acid, is an established phenomenon. The principal forces that shape CUB are thought to be mutation, selection, and random drift (reviewed in \cite{fantomas7}). While this view appears generally accepted, there is little consensus over precisely what generates the selective force. A number of individual mechanisms have been proposed either by way of correlation or through experimental evidence, but it is currently unclear whether selection has multiple causes or a single dominant cause, and if so, what the dominant cause may be.
\par
The proposed selective forces can be categorised into two groups. The first group comprises forces that act at the level of codons only, i.e. they are independent of the sequence in which the codon is located and lead to uniform bias across the entire genome. We will henceforth refer to this as {\em beanbag selection}. Examples include tRNA-based selection models where codon usage is matched to the supply of tRNAs \cite{iki}, and GC content based models where codon usage is matched to constraints imposed by some preferred proportion of G and C bases in the DNA sequence \cite{fantomas8,fantomas1,fantomas2}.
\par
Alternatively, the second group comprises forces that act at the sequence level, resulting in {\em sequence level selection (SLS)}. Examples include codon usage-dependent control of protein levels \cite{myembopaper} and of protein quality \cite{fantomas9,pff}. Evolutionary selection based on such mechanisms can differ between individual sequences and thus does not necessarily result in uniform CUB across the genome. Sequence-level forces acting on CUB have gained increased appreciation in recent years through the demonstration that specific codon usage patterns enable the functioning of biological mechanisms as diverse as the mammalian cell cycle \cite{fantomas10}, the mechanism by which sub-physiological temperatures engender neuroprotection \cite{fantomas11} and fungal circadian clocks \cite{23417067}. It has also been observed that codon usage may change along a coding sequence \cite{fantomas4,ramppaper}. Codon usage can also influence both mRNA \cite{fantomas5} and protein structures \cite{fantomas6,relcodon} and as such impact biological function.
\par
In addition to the limited examples listed above, many additional correlations between CUB and other cellular parameters have been described \cite{fantomas7,29018283}. Moreover, experimental effects of altered codon usage have been observed for which no underlying mechanism is known \cite{tobiasandlynnepaper}. Based on these observations it is likely that many different selection pressures, both known and unknown, simultaneously act to shape CUB in genomes.
\par
Some recent studies have begun to address the interplay between different types of selective pressure. For example, the ``Effective Number of Codons'' approach \cite{2110097} was specifically developed in order to separate effects of gene length and amino acid bias from other forces, and a number of approaches were developed to separate the effect of background GC bias from other forces \cite{7713409,8893856,11729162,12140252}. A common approach in these studies appears to be that signals of interest are to be separated from confounding signals. However, in the context of organismal evolution all signals contributing to selection on codon usage bias are ultimately of interest, and the existing approaches have left unaddressed the question of how the many different selection pressures come together to shape the evolution of codon usage.
\par
Treating all forces acting on codon usage bias explicitly is not feasible due to the many unknown parameters involved. However, for many systems governed by large numbers of competing influences, it is now well established that the details of these influences may not matter anymore and that instead a global behavior emerges that can be described by sometimes surprisingly simple and universal laws. The classical example of a system with a high number of internal degrees of freedom is a gas. It is impossible to measure and track the positions and momenta of each molecule in a volume of air. The key insight of statistical physics is that it is not necessary to do so either, because the aggregate behavior of ideal gases is amenable to a simple description involving just a few macro-variables related to one another by the ideal gas law. It is now well known that this insight is not limited to gases: there are many well-known examples of simple macroscopic laws emerging from complex underlying behaviors, including scaling laws in biology \cite{gwest}, word frequencies in texts \cite{zipf}, spatial structures of genomes \cite{fantomas21} and evolution \cite{baksneppen,baksneoppen2}.
\par%contributions
As the main contribution of this paper, we derive from first principles a novel, parsimonious and general model of codon evolution as a random walk based on ideas of stochastic thermodynamics and information thermodynamics. This model has two free parameters and is in good approximation, but not exactly, a multinomial distribution. The second main contribution of this study consists of the results of fitting this model to a comprehensive genomic dataset consisting of 462 fungal genomes from the ENSEMBL database \cite{together}. Such datasets are only now becoming available in sufficient numbers to probe thermodynamic features of CUB in any depth. We find the model we derive to be a surprisingly good description of the distribution of codons across the fungal genomes. Moreover, we also find that most sequences we considered fit within a narrow area of the parameter space. However, within this narrow area, individual sequences are distributed almost randomly in parameter space, with relatively weak correlation between sequences of the same type.
\par
As a third contribution, we find unambiguous evidence for SLS-type forces leaving a pervasive signature on the distribution of codons over genes in fungal genomes. This provides evidence from an evolutionary perspective for recent observations that connect codon usage to translational control in a number of different settings, including human development and diseases such as cancer \cite{xxx}.
\par
Finally, as a fourth contribution, we propose a new quantitative description of codon usage bias that, while summarized as a single number, takes into account not only the relative proportion of codons but crucially also how they are distributed across the genome. We argue that this captures the selection pressure more accurately than measures that merely quantify the relative abundance of codons, such as CAI \cite{cai} or tAI \cite{tai}.
\section{Results}
\subsection{The random walk model}
In a mathematical sense, the evolution of codons over time is a discrete space, continuous time random walk. Here we take the approach that, rather than considering the entire genome as a random walker, we view each gene as an independent random walker. More precisely, since we consider the random walk to take place within the space of synonymous codon sequences, each gene represents up to 18 independent random walkers, one for each amino acid. This means that each genome is then an ensemble of random walkers. For as long as this ensemble is large enough, the distribution of walkers can then be analyzed so as to infer selective pressures acting on the codon usage within this genome.
\par
For each amino acid, we will consider each gene $g$ as a sequence of synonymous codons, representing a {\em subsequence} of the gene; see Method section \ref{dataset} for details on how we generated these subsequences. Each such subsequence has a length $L^{\textrm{A},g}$ which depends on the gene and the amino acid A considered. Each subsequence consists of $k_1$ codons of type 1, $k_2$ codons of type 2, \ldots, $k_{|C^\textrm{A}|}$ codons of type $|C^\textrm{A}|$, where $|C^\textrm{A}|$ is the total number of codons encoding for amino acid A. Here, we assign arbitrarily which codons are type 1, 2 and so on, but once we have made the choice we keep it fixed for all analyses. For example, the number of codons for glutamic acid is $|C^E|=2$. See table \ref{symbols} for a summary of the symbols used.
%
%
\begin{table}
\centering
\begin{tabular}{|l|l|}
{\bf Symbol}& {\bf Meaning} \\\hline
$|A^g|$ & Number of occurrences of amino acid $A$ in gene $g$\\\hline
$C^\textrm{A}$ & A codon type of amino acid A\\\hline
$|C^\textrm{A}|$ & Number of different codons for amino acid A\\\hline
$C_i^\textrm{A}$ & $i$-th codon type of amino acid A\\\hline
$|C^\textrm{A}|\in \{1,2,3,4,6\}$ & Possible values of the number of codons for amino acid A\\\hline
$k^\textrm{A,g}_i,k^\textrm{A}_i, k_i$ & The number of codons of type $i$ of amino acid A occurring in gene $g$.\\\hline
$L^\textrm{A,g} := \sum_i k^\textrm{A,g}_i$ & The number of occurrences of A in gene $g$.\\\hline
\end{tabular}
\caption{Explanation of the symbols used.}
\label{symbols}
\end{table}
%
%
\par
Given this, we can now consider each possible configuration $\{k_1,\ldots, k_{|C^\textrm{A}|}\}$ as a state. From any such state, the random walker can access all states that are 1 synonymous mutation away. % EDIT: state changes happens 1 synonymous mutation at a time.
For example, codon 1 may be mutated to codon 2, which would correspond to the transition from $\{k_1, k_2, \ldots, k_{|C^\textrm{A}|}\}$ to $\{k_1-1,k_2+1, \ldots, k_{|C^\textrm{A}|}\}$. In the case of only two codons, where $|C^\textrm{A}|=2$, this random walk reduces to a 1-dimensional discrete state random walk in continuous time with $L^{\textrm{A},g} +1$ states, corresponding to $L^{\textrm{A},g}$ codons being of type 1, $L^{\textrm{A},g}-1$ codons being of type 1,\ldots, $0$ codons being of type 1; see supplementary information for more detail on the model.
\par%simplifying assumptions we make
Throughout this contribution, we make a number of simplifying assumptions about the nature of the random walk. Firstly, we assume that non-synonymous mutations are negligible, i.e. the rate of mutation from a codon to a non-synonymous codon is zero. Secondly, we assume that the mutation rates between synonymous codons are {\em a priori} the same, i.e. the random walk is unbiased. Any deviations from this assumption are due to evolutionary selection pressures (including effects of random drift). Thirdly, the random walker is in a steady state. Continuing evolutionary pressure could therefore change individual sequences, but will not, on the whole, change the statistics of the codon distribution. Fourthly, throughout this article we are not concerned with the spatial arrangements of codons across a gene or genome, but we only record how many codons of a particular kind are to be found in a particular subsequence.
\par
In order to derive predictions for the distribution of codons across sequences in response to specific selective pressures, we devised a theoretical model of the dynamics of codon evolution based on stochastic thermodynamics \cite{seifertreview}. Based on that, we can then conceptualize each sequence $i$ as having an energy $E_i$, where $E_i$ depends on the codon composition of the sequence and the selection pressure. In steady state the probability of observing a sequence with energy $E_i$, i.e. the probability to find the random walker in state $i$, is then given by the Boltzmann distribution $P(E_i) = \exp(-E_i/T)/\sum_i \exp(-E_i/T)$, where we have assumed that the Boltzmann constant $\bc=1$. In this model $T$ is a constant that in a physical system would correspond to the temperature of the system, but we will interpret this here as an abstract temperature that is not in a clear relationship with the ambient temperature experienced by the organism. We are setting $T=1$, but as we will find below, some selection scenarios will force a different value of the temperature. Having established this conceptual framework, we are now able to determine the energy that is implied by various selection scenarios, which in turn leads to a prediction for a steady state Boltzmann distribution of random walkers/sequences, which can be compared to data.
\par
The simplest energy function can be derived for the beanbag model and no selection forces acting on codon usage. In this contribution, we will mainly consider the case of amino acids with 2 codons only. In this case we find that $E_i=-\ln {L\choose i}$, where $i=k_1$ is the number of codons of the first type; see SI for calculation. The corresponding Boltzmann distribution coincides with the binomial distribution, as expected. This simplest model can be readily expanded to include a beanbag model with a global codon usage bias $q$ for codon 1, yielding an energy $\hat E_i = E_i + \ln\left( (1-q)^i / q^i \right)$. Again, the resulting Boltzmann distribution coincides with the binomial distribution with bias $p=q$.
\par
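Spelling out this step (with $T=1$): the Boltzmann weight of state $i$ is
\begin{equation*}
e^{-\hat E_i} = {L\choose i}\left(\frac{q}{1-q}\right)^{i} \propto {L\choose i}\, q^{i}\,(1-q)^{L-i},
\end{equation*}
where the omitted factor $(1-q)^{-L}$ does not depend on $i$ and cancels upon normalization, so the steady state is indeed the binomial distribution with bias $q$.
\par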
The preceding two models are beanbag models and assume that selection acts merely on the global composition of codons. In the model this manifests itself in the assumption that the mutation rate from codon 1 to codon 2 is proportional to the number of codons $k_1$ in the sequence; see derivation in SI for details. We obtain an SLS model, where the selection bias depends on the composition of the subsequence and the rate of mutation from codon 1 to codon 2 becomes proportional to $k_1^\xi$, and the rate from codon 2 to codon 1 becomes proportional to $(L-k_1)^\gamma$. This assumption leads to an energy for a sequence with $k_1=i$ codons of type 1 given by the {\em full model} (see SI for derivation):
%
%
\begin{equation}
\bar E_i= \xi E_i + T (\gamma - \xi) \ln(i!).
\label{fullmodel}
\end{equation}
%
%
In order to understand the model it is instructive to consider first the special case of $\xi=\gamma\neq 1$. In this case, the second term on the right hand side disappears and the energy is the same as in the biased beanbag model, but with a modified inverse temperature $\xi$. For this particular combination of parameters, there will be no selection pressure affecting global usage of codons, but there will be a SLS affecting how codons are distributed across sequences. For $\xi=\gamma = 1$ the full model \ref{fullmodel} reduces to the binomial distribution with $q=0.5$ exactly. The second term becomes relevant when $\xi\neq\gamma$. It represents an effective ``potential'' that biases codon evolution in one direction and one codon becomes preferred over the other, resulting in a global shift of codon usage as a consequence of SLS. Note that this SLS model is different from the biased beanbag model/binomial model in that it does not require a constant factor that biases evolution, but is realized through an exponent on the entropic rate.
\par
We now define an inverse temperature $T^{-1} :=(\xi + \gamma)/2$ for the model as the simplest function that is symmetric in the two parameters and reduces to the inverse temperature in the case of $\xi = \gamma$. While this temperature is unrelated to the physical temperature of the organism, it has an interpretation in terms of the width of the steady state distribution. The ``colder'' the distribution, the more the probability mass is concentrated around the maximum of the standard case of $T=1$. In the extreme case of $T=0$ all sequences would equal the most probable sequence. Hotter temperatures analogously correspond to wider distributions. As $T\to\infty$ the sequence distribution becomes flat, such that all sequences are equally likely to be observed. We will find here that actual genomes tend to be moderately hot with $1<T<2$ for most sequences.
\par
The full model \ref{fullmodel} is not a generalization of the binomial distribution for $q\neq 1/2$, in the sense it cannot be tuned in general such that the corresponding Boltzmann distribution coincides with the binomial distribution. However, given the relevant lengths of sequences, we can always find values for the parameters such that the full model approximates, to high degrees of accuracy, any given binomial distribution. A consequence of this is that, given the statistical error in the parameter estimation, it would not be possible to reject SLS based on the data, even if the underlying distribution of the data was binomial (meaning that the beanbag model is the correct model). If, on the other hand, we find that sequences are better fitted by the full model, rather than the binomial distribution --- as indeed we will observe --- then we will be able to reject beanbag selection as the sole evolutionary driving force.
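To make this concrete, the following short Python sketch (ours, not the authors' code) evaluates the Boltzmann weights $e^{-\bar E_i/T}$ of eq.~\ref{fullmodel}, assuming $T=1$ inside the energy term as in the special case discussed above, and checks that $\xi=\gamma=1$ recovers the symmetric binomial distribution exactly:
\begin{verbatim}
import math

def full_model_pmf(L, xi, gamma, T=1.0):
    # Boltzmann weights exp(-E_bar_i / T), with E_i = -ln C(L, i) and
    # E_bar_i = xi * E_i + T * (gamma - xi) * ln(i!), then normalised.
    weights = []
    for i in range(L + 1):
        E = -math.log(math.comb(L, i))
        E_bar = xi * E + T * (gamma - xi) * math.lgamma(i + 1)
        weights.append(math.exp(-E_bar / T))
    Z = sum(weights)
    return [w / Z for w in weights]

# Sanity check: xi = gamma = 1 coincides with Binomial(L, 1/2).
L = 12
p_full = full_model_pmf(L, 1.0, 1.0)
p_binom = [math.comb(L, i) * 0.5 ** L for i in range(L + 1)]
assert max(abs(a - b) for a, b in zip(p_full, p_binom)) < 1e-12
\end{verbatim}
Moving $\xi$ and $\gamma$ away from each other then skews the distribution towards one codon, illustrating how SLS can produce a global codon usage bias without any constant biasing factor.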
\subsection{Fitting the model to data}
%
%
\begin{figure}
\centering
\subfloat[][]{\includegraphics[angle=-0,width=0.8\textwidth]{histogram.eps}\label{histogram}}\\
\subfloat[][]{\includegraphics[angle=0,width=0.45\textwidth]{fullagainstbinomControl.eps}\label{fullagainstbinomcontrol}}
\subfloat[][]{\includegraphics[angle=0,width=0.45\textwidth]{fullagainstbinom.eps} \label{fullagainstbinom}}
\caption{ \protect\subref{histogram} Histogram for the residuals of fits to the binomial distribution and the full model for both the real data and the control. The data is shown on a logarithmic scale. In the control, codons have been replaced by random synonymous codons with a bias corresponding to the global codon usage bias. The purple line shows the histogram of the mean-residuals of the binomial model to the real data. The distribution is clearly shifted to the right, showing that the binomial model fits the data worse on the whole. On the other hand, the residuals of the full model overlap perfectly with the distribution of the mean-residuals resulting from the fit of both the binomial and the full model to the control data. \protect\subref{fullagainstbinomcontrol} Plotting the residuals arising from the full model fit to the control data against the residuals from the binomial. Points above the diagonal indicate subsequences where the full model is a better fit than the binomial model. Points on the diagonal indicate that both models fit the subsequence equally well. \protect\subref{fullagainstbinom} Same comparison, but for real data. The contour lines indicate the density of the control data in \protect\subref{fullagainstbinomcontrol} }
\end{figure}
%
%
%%describe the fitting below
In order to understand whether or not there is evidence for SLS or the beanbag-model, we first fitted each subsequence with $4\leq L^{\textrm{A},g}\leq 15$ of our fungal dataset to a binomial distribution. This resulted in $45702$ individual fits. We found that the vast majority of subsequences are fitted very well by a binomial distribution, with mean residuals between $\exp(-9)$ and $\exp(-4)$, peaking at $\exp(-7)$. Visual inspection of a number of examples suggests that these mean residuals indicate a reasonably good fit of the data. The only fitting parameter in the model is the bias $p$, which can be interpreted as the probability that codon 1 is chosen, i.e. the {\em global codon usage bias} $q$. Since we fitted each length separately we obtained, for each species and each amino acid, 10 different estimates for the global codon usage bias $q$. Pairwise comparison of the estimates of $q$ between length 15 and the other lengths yielded extremely good correlations, with Pearson coefficient $> 0.9$. % insert a relevant table.
Taken on their own, these results seem to point to codons being distributed binomially, which is consistent with the beanbag model.
\par
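The exact fitting procedure is described in the supplementary information; purely as an illustration, a fit of the bias $q$ to the empirical distribution of codon-1 counts for a fixed subsequence length, minimizing the mean residual, could look as follows (function and variable names are ours and hypothetical, not the authors'):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def fit_binomial_bias(k1_counts, L):
    # k1_counts: one codon-1 count per subsequence of length L.
    emp = np.bincount(k1_counts, minlength=L + 1) / len(k1_counts)
    def mean_residual(q):
        return np.mean(np.abs(emp - binom.pmf(np.arange(L + 1), L, q)))
    fit = minimize_scalar(mean_residual, bounds=(1e-6, 1 - 1e-6),
                          method="bounded")
    return fit.x, fit.fun   # estimated global bias q and its mean residual
\end{verbatim}
\par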
As a comparison we also fitted the full model to the data thus obtaining estimated values for the parameters $\xi$ and $\gamma$ of the full model and another mean residual indicating how well the full model can be fitted to the data.
\par
%Here is the code to perform the calculation below
%source("readResiduals.R")
%#determine the number of points that are outside 0,2
%tmp<-nolF[which(nolF$a<2 & nolF$b<2 & nolF$a>0 & nolF$b>0),]
%length(tmp[[1]])/length(nolF[[1]])
%length(nolF[[1]])
\par
%To generate this data, jump to tag SUMMARYDATA in figures.Rscript
%
%Fungi (full)
% Min. 1st Qu. Median Mean 3rd Qu. Max.
%0.0000000 0.0001480 0.0002830 0.0005512 0.0005630 0.0182800
%
%Fungi binomial
% Min. 1st Qu. Median Mean 3rd Qu. Max.
%0.000006 0.000448 0.000836 0.001208 0.001532 0.018010
%
%FULL data below:
%summary(nolB$residual)
% Min. 1st Qu. Median Mean 3rd Qu. Max.
%0.000006 0.000450 0.000845 0.001613 0.001560 0.206000
% summary(nolF$residual)
% Min. 1st Qu. Median Mean 3rd Qu. Max.
%0.0000000 0.0001490 0.0002850 0.0009934 0.0005720 0.3515000
For all our datasets, the typical values of the parameters $\gamma$ and $\xi$ are small and positive with $96.39$\% of the fits resulting in $0<\gamma,\xi<2$, which indicates that at least some of the subsequence distributions are non-binomial. The quality of the fits can be quantified by considering the mean-residuals. Comparing the mean-residuals obtained from fitting the full model with those obtained from fitting the binomial model indicates that the former is a better description of the data {\em on the whole} in the sense that the distribution of mean residuals is shifted to the left towards smaller mean residuals; see fig. \ref{histogram}. This can be quantified. The median for the residuals of the full model is $0.0002850$, and as such roughly 3 times smaller than the corresponding value for the binomial fits, which is $0.000845$.
\par
The better fit of the full model could be merely a reflection of the fact that it has more parameters than the binomial model. We therefore prepared a control set of distributions. This control set consists of the same subsequences that the real data set contains, but with all codons replaced by a random synonymous codon according to the global codon usage bias; see supplementary information for a description and for the control data-set. By construction this control set implements the beanbag model exactly, meaning that the sequence composition of sub-sequences is distributed according to the binomial distribution with a global codon usage bias $q$ corresponding to empirically measured values. Fitting both the full model and the binomial model to this control data results in mean-residuals that are visually indistinguishable from one another reflecting the above cited fact that the full model can approximate binomial data; see fig. \ref{histogram}.
\par
The quality of the fit of the binomial model to the binomial data of the control-set can be viewed as a benchmark for the best mean-residuals that can be obtained given the statistical noise inherent in the dataset. An inspection of the histogram in fig. \ref{histogram} reveals that the residuals obtained from fitting the full model to the real data are only minimally shifted to the right of this optimal benchmark. This allows the conclusion that the full model captures almost all of the variation of the underlying real data, thus capturing its essence. From this we conclude that the beanbag model (which implies a binomial distribution) is not sufficient to explain how codons are distributed across the genome in fungi. Instead, it is necessary to postulate sequence-level selection in order to account for the distribution of codons over subsequences. The full model, as formulated in eq. \ref{fullmodel}, does account for the distribution.
\par
So far, we have concluded that the full model is a better fit to the distribution of codons on the whole, but we do not know whether this applies to all individual subsequences, or whether there is only a subset of sequences that is better described by the full model, whereas the rest is equally well described by the binomial model. To decide this, we plot the mean residuals for each sub-sequence against the mean residual obtained from fitting the full model to the same subsequence. We did this for both the control dataset described above and for the real data. It is instructive to first consider the former; see fig.~\ref{fullagainstbinomcontrol}. This analysis confirms that most subsequences of the control data are approximately equally well fitted by the binomial and the full model,
although the density of points appears to be higher below the diagonal, indicating that the binomial model fits the control data somewhat better. This reflects the fact that the full model can only approximate the binomial distribution.
\par
Turning now to the real data, the same analysis leads to a high density of points with low mean residual for the full model, but high residual for the binomial model; see fig. \ref{fullagainstbinom}. The sequences corresponding to these points are not well fitted by the binomial model, but are well fitted by the full model. These sequences are consistent with sequence-level selection, but not with the beanbag model. In contrast, those subsequences along the diagonal, where the residuals of the full model and the binomial model are similar, can be equally well explained by the beanbag model and SLS.
\par
%
%
\begin{figure}
\psfrag{a}{$\xi$}
\psfrag{b}{$\gamma$}
\centering
\subfloat[][]{\includegraphics[width=0.35\textwidth]{fitResultsFungi.eps}\label{fitresultsfungi}}
\subfloat[][]{\includegraphics[width=0.35\textwidth]{fitResultsRandom.eps}\label{fitresultsrandom}}\\
\subfloat[][]{\includegraphics[width=0.35\textwidth]{globalTemperatureHist.eps}\label{globaltemperaturehist}}
\subfloat[][]{\includegraphics[width=0.35\textwidth]{detailTemperatureHist.eps}\label{detailtemperaturehist}}\\
\subfloat[][]{\includegraphics[width=0.35\textwidth]{globalEuclideanHist.eps}\label{globaleuclideanhist}}
\subfloat[][]{\includegraphics[width=0.35\textwidth]{detailEuclideanHist.eps}\label{detaileuclideanhist}}
\caption{\protect\subref{fitresultsfungi} The fitted parameters $\protect\xi$ and $\protect\gamma$ for each of the 2-codon amino acids for all 462 fungal species in our dataset. We are limiting ourselves to those amino acid sequences that have a sublength of 15. Each dot shows the $\xi$ and $\gamma$ value thus obtained for a particular amino acid. The fitted values largely concentrate into the interval $[0,1.5]$. The color indicates the density of points in the area; red indicates a high point density. \protect\subref{fitresultsrandom} Comparing the fitted parameters obtained from the full model (red) to the fitted parameters obtained from the control (blue). The plot shows actual points rather than density. \protect\subref{globaltemperaturehist} Distribution of inverse temperature in the fungal data-set showing all sub-lengths and all species. The control data peaks around an inverse temperature of 1, whereas the real data is distributed around a lower inverse temperature. \protect\subref{detailtemperaturehist} Distribution of inverse temperature for two different species, including all amino acids and all sublengths; this is a subset of the data in \protect\subref{globaltemperaturehist}. \protect\subref{globaleuclideanhist} Same data as in \protect\subref{globaltemperaturehist}, but showing the Euclidean distance in parameter space from the no-selection model. The control data clearly has a smaller distance to the non-selection model, indicating that considering only the global codon usage bias underestimates the selection pressure. \protect\subref{detaileuclideanhist} Same data, but for two species only. }
\label{fourspecies}
\label{fitresults}
\end{figure}
%
%
%
For any given subsequence, it is not possible to decide unambiguously whether it is generated by a binomial model or SLS. The aggregate data, however, reveal a much higher density of points in the north-western corner of fig. \ref{fullagainstbinom}, indicating sub-sequences that are well fitted by the full model, and less well by the binomial.
\par
A different perspective on the difference between the models can be obtained from the distribution of subsequences in $\xi, \gamma$ space; see fig. \ref{fitresultsrandom}. It reveals that the real data concentrates at high density in the south-west of the plot, where the density of subsequences for the binomial control data is very low. At the same time, there is also a significant area of overlap between the two datasets.
%
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
$L$ & $\xi$ & $\gamma$ \\
\hline
14 & 0.6993699& 0.7338252 \\
13 & 0.7030388& 0.7452789 \\
12 & 0.6872513&0.7510245 \\
11& 0.6553442& 0.7677006 \\
10& 0.5658081& 0.7656194 \\
9& 0.5617377& 0.8087374 \\
8& 0.5708965& 0.8071532 \\
7& 0.5716121& 0.8196577 \\
6& 0.5124212& 0.8129648 \\
5& 0.4267368& 0.7883095 \\\hline
\end{tabular}
\caption{Pearson correlation coefficients between the parameter fits for $\xi$ and $\gamma$ for fungi, for each subsequence length $L$.} %EDIT: maybe explain correlation to what
\label{corrtable}
\end{table}
%
%
\subsubsection{Defining distance}
Fitting the full model to real data reveals that SLS has shaped the codon usage bias beyond merely changing the global codon usage bias. We now describe how to quantify the evolutionary pressure on a sequence. Established measures, such as the CAI \cite{cai} or tAI \cite{tai}, even though they differ in details, are based on the global codon usage biases. The presence of SLS means, however, that measures based on the global codon usage bias may be incomplete, because they capture only global codon usage. Selection may also affect how codons are distributed across subsequences, which is not captured by the global codon usage bias alone. A quantity that captures this is the inverse temperature of subsequences.
The need for a measure of SLS forces on genomes is highlighted by the special case of subsequences in our fungal dataset that have (virtually) no global codon usage bias but still bear the signatures of SLS, i.e. most of these sequences are not distributed binomially. Fig.~\ref{nocubtemperaturehist} shows that among the subsequences that have no apparent global codon usage bias the majority are much hotter than one would expect from a binomial distribution. The inverse temperature of this subset peaks at around $T^{-1} =1/2$, whereas one would expect a binomial distribution to peak strongly around an inverse temperature of 1.
\par
%
%
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{nocubTemperatureHist.eps}
\caption{Distribution of distances $\mathcal D$ from the no-selection case for sequences with no codon usage bias. To obtain this, we selected all subsequences where the global codon usage bias towards codon 1 is between $0.495$ and $0.5$. The beanbag model of selection would predict that these sequences have a distance of 0. It is apparent that there are many examples of sequences that have no global bias, but are at the same time subject to a SLS pressure, as evidenced by a distance that is different from 0.}
\label{nocubtemperaturehist}
\end{figure}
%
%
\par
For sequences with global codon usage bias, the inverse temperature can be viewed as a measure of distance from the beanbag model. However, this measure is only statistically correct and not reliable for individual sequences. Fig. \ref{globaltemperaturehist} indicates that even perfectly binomial data may result in temperature estimates which are on average 1, but with a spread around the mean. This spread is partially due to statistical error, but crucially also due to the fact that the full model does not reduce exactly to binomial models with $q\neq 1/2$. Therefore, while real data clearly is distributed around higher temperatures than the control data, for individual points the temperature is only a probabilistic indicator of the distance from the beanbag model.
\par
A compound measure that quantifies the selection pressure on individual sequences, irrespective of whether it arises from beanbag or SLS forces, is the distance of the sequence from the no-selection case. Concretely, the no-selection case corresponds to the binomial distribution with $q=1/2$, which corresponds exactly to the case of $\xi=\gamma=1$. Any deviation from that indicates a selection pressure. We thus propose as a measure of the evolutionary pressure the Euclidean distance of a subsequence from the point $\xi=\gamma=1$ in parameter space:
%
%
\begin{equation}
\mathcal D:= \sqrt{(1-\xi)^2 + (1-\gamma)^2} \tag{\textrm{Selection pressure}}
\end{equation}
%
%
In the case of no global codon usage bias, the selection pressure is in a simple relationship to the inverse temperature $\mathcal D = | 1- T^{-1}|= | 1- \xi|$.
\par
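As a purely illustrative example with hypothetical fitted values, a subsequence with $\xi=0.6$ and $\gamma=0.8$ has $\mathcal D = \sqrt{0.4^2 + 0.2^2} \approx 0.45$ and an inverse temperature $T^{-1} = (\xi+\gamma)/2 = 0.7$, i.e.\ it lies measurably away from the no-selection point $\xi=\gamma=1$.
\par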
We focused the discussion above on the fungal dataset. Yet, a repetition of the same analyses for 560 species of bacteria and 126 species of protists yielded qualitatively and quantitatively similar results; see supplementary information. Notably, the parameters of the model $\xi$ and $\gamma$ were distributed over the same range and differed only in minor ways in their temperature, such that those sequences, too, bear the signatures of SLS.
\section{Discussion}
At present there is no consensus on what precisely drives codon usage bias. A main outcome of this contribution is that, whatever the evolutionary drivers, we cannot think of them as solely acting at the level of the codon. In fungi we found a clear and strong signature of sequence-level selection. We found almost identical results for protists and bacteria. Our study cannot exclude that beanbag selection is acting simultaneously, but if it acts as a sole driver at all, it is restricted to a subset of the genomes only.
\par
A further result of our study is that the evolution of codon usage can be summarized by a parsimonious mathematical model (eq. \ref{fullmodel}) with only 2 parameters. This model is derived from first principles and it is the simplest model that is consistent with SLS. Its two parameters can be directly interpreted in terms of selection forces namely as the exponents modifying the rate of synonymous mutations from one codon to another one.
\par
A central part of the full model is the effective ``selection potential,'' which encapsulates the evolutionary pressures that act on codon usage. In the random walk picture this potential has a clear interpretation as the force that acts on the random walker and biases the steady state probabilities relative to the entropic case of no-selection. We do not claim here that this potential exists in biology. There is no uniform force that biases the codon usage. Instead, we rather think of this potential as the aggregate result of many evolutionary drivers acting simultaneously. Listing these forces and disentangling how they act and interact is perhaps an intractable task. It is all the more surprising, then, that a simple expression can capture their aggregate effect.
\par
A consequence of SLS is that selection forces on codon usage bias do not exclusively manifest themselves in a global codon usage bias, but also, less visibly, in the way codons are distributed. This means that traditional codon usage indices, such as CAI or tAI, tend to underestimate the real codon usage bias. From the full model we derived a measure of the distance from the no-selection case, which takes into account not only the global codon usage bias but also deviations from the binomial distribution, i.e. SLS effects. In the special case of no global codon usage bias, traditional metrics would conclude that there is no selection at all. However, we showed that in fact even those sequences do show signatures of selection (see fig. \ref{nocubtemperaturehist}).
%Higher level selection important
%utionut
%Even though there is no global codon usage bias, does not mean that there is no selection.
\par
%Comment on AA with more than 2 codons
We limited our analysis above to the 9 amino acids that are encoded by 2 codons only. In principle, there is no theoretical difficulty in extending the model to the remaining 9 amino acids. The binomial distribution needs to be replaced by a multinomial distribution and the full model needs to be adapted to include an extra parameter for each additional codon. In practice, the analysis becomes problematic for two reasons. Firstly, with more codons the number of possible subsequence compositions grows quickly, but the number of sequences does not. As a consequence, there are fewer examples per configuration, which increases the statistical error. Secondly, and connected to this, fitting a model with 4 or more parameters to noisy data becomes unreliable and does not yield meaningful results. To a limited extent it is possible to compare the distribution of subsequences of these codons to a multinomial distribution. For the vast majority of subsequences no meaningful conclusions can be reached because they are dominated by noise. However, for the few high-probability subsequences, it appears that the distribution is consistent with a multinomial distribution; see supplementary information for details. Hence, while beanbag-selection is still acting on those amino acids, SLS is likely not.
\par
Based on general considerations this is not entirely surprising. The same statistical error that makes the analysis of amino acids with more than 2 codons difficult also affects the cell itself, in the sense that the effects of even a moderate selection pressure at the level of the sequence will remain ineffective against the high levels of mutational noise.
\section{Methods}
\subsection{The dataset}
\label{dataset}
We downloaded all datasets from ENSEMBL {\tt https://www.ensembl.org}. For each species we downloaded the coding sequences of interest (CDS files) for further analysis in two steps. We downloaded locally 462 species from the Fungi kingdom (release 36 in AUG 2017), 442 species from the Bacteria kingdom (release 40 in JUL 2018), and 143 species from the Protista kingdom (release 40 in JUL 2018). All the species names and corresponding download weblinks are in the supplementary file ``species.xlsx''. We then further processed the downloaded files to calculate codon usage bias for each subsequence. To do this we converted each gene sequence into a valid codon sequence. We removed all genes whose number of nucleotides was not a multiple of 3, which indicates an error in the ORF. This excluded 35748 erroneous genes out of 4554328 total genes in the Fungi kingdom, 6384 out of 1286467 in the Bacteria kingdom, and 25142 out of 1439975 in the Protista kingdom. Using the remaining genes we then determined subsequence distributions as described above.
\par
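As an illustration of this conversion and filtering step (a toy sketch with made-up sequences, not the pipeline actually used), a gene is kept only if its length is a multiple of 3 and is then split into consecutive codons:
\begin{verbatim}
def to_codons(cds):
    # Return the codon list, or None if the CDS length indicates an ORF error.
    if len(cds) % 3 != 0:
        return None
    return [cds[i:i + 3] for i in range(0, len(cds), 3)]

genes = {"geneA": "ATGGAAGAGTGA", "geneB": "ATGGAAGA"}   # toy sequences
codon_seqs = {name: codons
              for name, codons in ((n, to_codons(s)) for n, s in genes.items())
              if codons is not None}                      # geneB is discarded
\end{verbatim}
\par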
For each species, a control coding sequence was produced by replacing each codon with a random synonymous codon (which could be the same as the one in the original sequence). The probability of choosing a random synonymous codon was biased according to the observed global codon usage bias of the respective species, such that in the control data the codons were distributed according to the multinomial distribution by construction.
\par
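A minimal sketch of this randomisation (the two-codon example and the bias values are hypothetical): each codon of a 2-codon amino acid is replaced by a synonymous codon drawn with probabilities equal to the species-wide global codon usage bias, and the other amino acids are handled analogously with their own synonymous sets.
\begin{verbatim}
import random

# Hypothetical synonymous set and measured global bias for glutamic acid.
synonymous = {"GAA": ["GAA", "GAG"], "GAG": ["GAA", "GAG"]}
global_bias = {"GAA": 0.7, "GAG": 0.3}

def randomise(codons):
    out = []
    for c in codons:
        if c in synonymous:
            choices = synonymous[c]
            weights = [global_bias[x] for x in choices]
            out.append(random.choices(choices, weights=weights, k=1)[0])
        else:
            out.append(c)   # other amino acids would need their own tables
    return out
\end{verbatim}
\par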
We then prepared the data by splitting each gene into (up to) 18 {\em codon subsequences} or simply {\em subsequences}. We do this as follows: For each gene $g$ in the dataset we find all the codons that code for a particular amino acid A and discard all other codons. Thus, we have reduced the gene $g$ to a subsequence of codons of length $L^{\textrm{A},g}$. We do this for each gene and each species in the dataset, and thus generate all subsequences for amino acid A. We do this one by one for each amino acid. Each subsequence of an amino acid A and gene $g$ in the set is characterized by its total length $L^{\textrm{A},g}$, and the number of occurrences $k_i^{\textrm{A},g}$ of each codon type in the sequence. | {
"alphanum_fraction": 0.7801750797,
"avg_line_length": 134.3067092652,
"ext": "tex",
"hexsha": "b63058b7f5e15be4b5f212a49248dab9539226a5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b31033ba4e20bbd260d9ce1e31659ee97d24bc74",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jkobject/CodonUsageBias",
"max_forks_repo_path": "kaflonDTA/correction_remarks_nebel.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b31033ba4e20bbd260d9ce1e31659ee97d24bc74",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jkobject/CodonUsageBias",
"max_issues_repo_path": "kaflonDTA/correction_remarks_nebel.tex",
"max_line_length": 1717,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b31033ba4e20bbd260d9ce1e31659ee97d24bc74",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jkobject/CodonUsageBias",
"max_stars_repo_path": "kaflonDTA/correction_remarks_nebel.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10135,
"size": 42038
} |
\pagestyle{myheadings} \setcounter{page}{1} \setcounter{footnote}{0}
\section{~Coupling with NUOPC} \label{app:nuopc}
\newcounters
\vssub
\subsection{~Introduction} \label{sec:nuopcintro}
\vssub
\ws\ as of v6.02 has a component cap {\code wmesmf} that interfaces with the multi-grid routines via the
National Unified Operational Prediction Capability (NUOPC)\footnote{https://earthsystemcog.org/projects/nuopc/}
Layer which specifies the use of the Earth System Modeling Framework (ESMF)
\footnote{https://www.earthsystemcog.org/projects/esmf/}
for coupling with other earth systems such as atmosphere, ocean, ice and storm surge models.
This cap is meant to be flexible and is already in use in multiple different coupled models at NOAA and the US Navy.
This section describes how to build, install, configure and run the NUOPC cap. It assumes a basic knowledge of \ws.
\vssub
\subsection{~Building and Installing the NUOPC Cap} \label{sec:nuopcbuild}
\vssub
To make the library that will contain the \ws\ code and cap that can be included in a NUOPC coupled model, use
{\code model\/esmf\/Makefile}. For example, in the NOAA Environmental Modeling System (NEMS) the command is:
\command{make ww3\_nems}
This makefile will subsequently call {\code w3\_make}. As part of this process a {\code nuopc.mk} makefile
fragment will also be created, which tells NUOPC/ESMF where the \ws\ library is located.
Note there is a new switch {\code WRST} which will add 10 m wind to the restart file and use that wind field
at the initial time step of the wave model. This can be used in situations where the coupled atmospheric model
does not have 10 m wind speeds at initialization.
\vssub
\subsection{~Import/Export Fields in the NUOPC Cap} \label{sec:nuopcfields}
\vssub
The available fields for import and export are listed below. Please see Section \ref{sec:nuopcconfig} for
information on how to activate coupling for an import field.
\noindent Import Fields:
\begin{itemize}
\item sea\_surface\_height\_above\_sea\_level
\item surface\_eastward\_sea\_water\_velocity
\item surface\_northward\_sea\_water\_velocity
\item eastward\_wind\_at\_10m\_height
\item northward\_wind\_at\_10m\_height
\item sea\_ice\_concentration
\end{itemize}
\noindent Export Fields:
\begin{itemize}
\item wave\_induced\_charnock\_parameter
\item wave\_z0\_roughness\_length
\item eastward\_stokes\_drift\_current
\item northward\_stokes\_drift\_current
\item eastward\_wave\_bottom\_current
\item northward\_wave\_bottom\_current
\item wave\_bottom\_current\_period
\item eastward\_wave\_radiation\_stress
\item eastward\_northward\_wave\_radiation\_stress
\item northward\_wave\_radiation\_stress
\end{itemize}
\vssub
\subsection{~Configuration of Input Files for the NUOPC Cap} \label{sec:nuopcconfig}
\vssub
The required \ws\ input file is the ww3\_multi.inp or ww3\_multi.nml file. To specify that a particular input field
is to be obtained via coupling rather than from a file, 'CPL:' is put in front of the input grid specification for that
particular input field.
Note that current limitations of the NUOPC cap are that there can only be one input grid (whether it is one of the
model grids or an input grid) and one export grid (the first computational grid if there are multiple computational
grids). The grid can be unstructured or structured.
Note that while the start and end times for the run are determined by the NUOPC driver, the start time, end time and
frequency of output are still determined by the ww3\_multi input file.
\vssub
\subsection{~Running the NUOPC Cap} \label{sec:nuopcrun}
\vssub
The cap is designed to be used in coupled systems that are outside the scope of this documentation.
A script to run a regression test of the standalone \ws\ cap is provided in the {\code regtest\/run\_esmf\_test\_suite}
script.
| {
"alphanum_fraction": 0.7871119228,
"avg_line_length": 43.5568181818,
"ext": "tex",
"hexsha": "d149b670e6a212cfeaea02882fa0207ffe864af6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_forks_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_forks_repo_name": "minsukji/ci-debug",
"max_forks_repo_path": "WW3/manual/app/nuopc.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_issues_repo_name": "minsukji/ci-debug",
"max_issues_repo_path": "WW3/manual/app/nuopc.tex",
"max_line_length": 119,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_stars_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_stars_repo_name": "minsukji/ci-debug",
"max_stars_repo_path": "WW3/manual/app/nuopc.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1038,
"size": 3833
} |
\section{Results}
The algorithm was tested with different audio samples. The most important tests were done with anechoic samples of a single violin player, as this was closest to the original purpose. We also tested the algorithm with samples of recorded electric guitar, and an already heavily processed piece of music. Different parameter values for the algorithm were also tested.
Figure~\ref{fig:dryspec} shows the spectrogram of an unprocessed, anechoic violin recording. The partials are clear and the vibrato of the player is also easy to see, particularly around the $2.5s$ time point.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{dry_spec.png}
\caption{Spectrogram of dry signal.}
\label{fig:dryspec}
\end{figure}
Figure~\ref{fig:effspec} shows the spectrogram of the same sample as figure~\ref{fig:dryspec}, but this time passed through the Warm Chorus algorithm. Though the partials are still some what visible, the spectrogram is now more blurry and the vibrato of the original recordings appears to be masked by the effect.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{effected_spec.png}
\caption{Spectrogram of effected signal.}
\label{fig:effspec}
\end{figure}
The blurriness is explained by two factors. Firstly, as is also audible, the algorithm gives the input a reverberated quality due to the reasonably long delays. This blurs the spectrogram in the horizontal direction. Secondly, the harmonisers spread the frequency content, which blurs the spectrogram in the vertical direction.
The algorithm's output has no obvious periodic pulse. The output gives the impression of many players playing the same thing in a space. The spaciousness seems inseparable from the impression of a section of players.
When compared with a generic chorus algorithm, the Warm Chorus is much more convincing and realistic. However, the generic chorus has a distinct sound of its own.
When using some extreme settings, the Warm Chorus algorithm may produce quite unpleasant results. After some point, the detuning of the input makes the output sound audibly out-of-tune. Similarly, too much simulated distance between the players results in losing temporal accuracy.
| {
"alphanum_fraction": 0.8099547511,
"avg_line_length": 81.8518518519,
"ext": "tex",
"hexsha": "ba33a3b5ae0aca37557d227a68e150efdf0088a4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cfa716c633150949fa115a15d7e29fdf1fbd06e1",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "Gastron/WarmChorus",
"max_forks_repo_path": "report/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cfa716c633150949fa115a15d7e29fdf1fbd06e1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "Gastron/WarmChorus",
"max_issues_repo_path": "report/results.tex",
"max_line_length": 366,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "cfa716c633150949fa115a15d7e29fdf1fbd06e1",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "Gastron/WarmChorus",
"max_stars_repo_path": "report/results.tex",
"max_stars_repo_stars_event_max_datetime": "2018-11-21T17:07:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-21T17:07:29.000Z",
"num_tokens": 490,
"size": 2210
} |
\documentclass[11pt]{article}
\usepackage{amsfonts, amsmath, amssymb, amsthm, fullpage, mdwlist, fancyhdr}
\setlength{\textheight}{9.25in}
\setlength{\textwidth}{6.5in}
\setlength{\topmargin}{0.0in}
\setlength{\headheight}{0.0in}
\setlength{\headsep}{0.0in}
\setlength{\leftmargin}{0.0in}
\setlength{\oddsidemargin}{0.0in}
\setlength{\parindent}{0pc}
\begin{document}
\title{Functional Equations}
\author{Alex Zhu}
\date{November 29, 2013}
\maketitle
This handout will cover three common techniques for dealing with functional equations.
\section{Techniques}
\subsection{Injectivity / Surjectivity}
A function $f$ is \emph{injective} if, whenever $f(a) = f(b)$, we have $a = b$. A function $f$ is \emph{surjective} if for all $r$ in the codomain of $f$, there is some $x$ such that $f(x) = r$. (So if $f$ is a real-valued function, $f$ is surjective if each real number is in the range of $f$.)
\\\\
\textbf{Example 1.} The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $x + f(x) = f(f(x))$ for every $x \in \mathbb{R}$. Find all solutions of the equation $f(f(x)) = 0$.
\\\\
\textbf{Solution.} If $f(x) = f(y)$ for some $x$ and $y$, then $x = f(f(x)) - f(x) = f(f(y)) - f(y) = y$, so $f$ is injective. Setting $x = 0$ gives $0 + f(0) = f(f(0))$. Since $f$ is injective, this means $f(0) = 0$. Thus, $f(f(0)) = 0$, so $x = 0$ is a solution to $f(f(x)) = 0$. It is also the only solution, since $f(f(x)) = 0 = f(f(0))$ implies $f(x) = f(0)$, so $x = 0$.
\\\\
\textbf{Example 2.} (ISL 2002) Find all functions $f$ from the reals to the reals such that
\[ f(f(x) + y) = 2x + f(f(y) - x) \]
holds for all real $x$ and $y$.
\\\\
\textbf{Solution.} Setting $y = -f(x)$ gives $f(0) = 2x + f(f(-f(x)) - x)$, that is, $f(f(-f(x)) - x) = f(0) - 2x$. Since $f(0) - 2x$ takes every real value as $x$ varies, $f$ is surjective. Take $c$ so that $f(c) = 0$, and set $x = c$. We have $f(y) = 2c + f(f(y) - c)$. Since $f$ is surjective, $f(y)$ takes on all real values, so for all $z$, we have $z = 2c + f(z - c)$. Thus, $f(z) = z - c$ for all $z$. It is easy to verify that any function of this form satisfies the functional equation.
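Explicitly, substituting $f(z) = z - c$ into both sides confirms this:
\[ f(f(x) + y) = (x - c + y) - c = x + y - 2c, \qquad 2x + f(f(y) - x) = 2x + (y - c - x) - c = x + y - 2c. \]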
\subsection{Easy Values}
When approaching a functional equation, there are a number of simple values that one should try substituting, such as $x = 0$, $x = 1$, or $x = y$.
\\\\
\textbf{Example 3.} Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(xf(y)+y) = f(x)f(y) + y$ for all real $x$ and $y$.
\\\\
\textbf{Solution.} Substitute $x = 0$ to get $f(y) = f(0)f(y) + y$ for all $y$. Setting $y = 0$ yields $f(0) = f(0)^2$, so $f(0) = 0$ or $f(0) = 1$. If $f(0) = 1$, then $f(y) = f(0)f(y) + y = f(y) + y$ for all $y$, so $y = 0$ for all $y$, which is absurd. Thus, $f(0) = 0$, so $f(y) = y$ for all $y$. It is easy to verify that this function satisfies the functional equation.
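Explicitly, with $f(t) = t$ both sides equal $xy + y$: indeed, $f(xf(y) + y) = xy + y$ and $f(x)f(y) + y = xy + y$.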
\subsection{Cauchy's Equation}
Often, functional equations get reduced to something of the form $f(x+y) = f(x) + f(y)$. If some weak condition holds, we can conclude that $f(x) = cx$ for some constant $c$. In particular, if we can show that $f$ is continuous, that $f$ is monotonic over some interval, or that $f$ is bounded from above or below over some interval, then $f(x) = cx$ for all $x$. Here, we will show that if $f(x+y) = f(x) + f(y)$ and $f$ is increasing over $\mathbb{R}$, then $f(x) = cx$ for some $c$. (The other cases are left as an exercise.)
\\\\
Setting $x=y=0$ gives $f(0) = 0$. For any positive integer $n$ and any real $x$, we have $f(\underbrace{x + x + \cdots + x}_n) = \underbrace{f(x) + f(x) + \cdots + f(x)}_n$, so $f(nx) = nf(x)$ for all positive integers $n$. Additionally, since $0 = f(-x + x) = f(-x) + f(x)$, we have $f(-x) = -f(x)$ for all $x$, whence $f(nx) = nf(x)$ for all integers $n$. Moreover, we have $f(n(x/n)) = n f(x/n)$, so $f(x/n) = f(x)/n$ for each nonzero integer $n$. It follows that for any rational $q = \frac{m}{n}$, we have $f(qx) = f(mx/n) = mf(x/n) = (m/n)f(x) = qf(x)$.
\\\\
Let $c = f(1)$; note that $c = f(1) > f(0) = 0$ because $f$ is increasing. We have $f(q) = qf(1) = cq$ for all rational $q$. Take any real number $x$. For any positive real number $\epsilon$, we can find rationals $r$ and $s$ such that $r < x < s$ and $r,s \in (x - \epsilon/c, x + \epsilon/c)$. Since $f$ is increasing, $f(r) = cr \leq f(x) \leq f(s) = cs$. Thus, $c(r - x) \leq f(x) - cx \leq c(s - x)$. Since $|r-x| < \epsilon/c$ and $|s - x| < \epsilon/c$, we have $|f(x) - cx| < \epsilon$. Since this holds for each positive $\epsilon$, we must have $f(x) = cx$ for all $x$, as desired.
\subsection{Additional Tips}
\begin{enumerate}
\item When you get a solution for the functional equation, don't forget to verify that the solution works.
\item Often you will be able to deduce that, at each point, a function $f$ satisfying the functional equation agrees with one of two candidates---say, you may deduce $f(x)^2 = x^2$, and conclude that for each $x$, $f(x) = x$ or $f(x) = -x$. From this alone you cannot conclude that $f(x) = x$ for all $x$ or that $f(x) = -x$ for all $x$: a priori, the choice could vary from point to point, so this possibility must be ruled out separately.
\end{enumerate}
\section{Problems}
\begin{enumerate}
\item Find all increasing functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(x+y) = f(x)f(y)$ for all real $x$ and $y$.
\item Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(x+y) = f(x) + f(y)$ and $f(xy) = f(x)f(y)$ for all real $x$ and $y$.
\item Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(x^2 + y) = f(x) + f(y^2)$ for all real $x$ and $y$.
% \item Find all increasing functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(f(x) + y) = x + f(y)$ for all real $x$ and $y$.
\item (USAMO 2002) Let $\mathbb{R}$ be the set of real numbers. Determine all functions $f : \mathbb{R} \to \mathbb{R}$ such that
\[ f(x^2 - y^2) = xf(x) - yf(y) \]
for all pairs of real numbers $x$ and $y$.
% \item (EGMO 2012) Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that \[ f(yf(x+y) + f(x)) = 4x + 2y f(x+y) \] for all $x, y \in \mathbb{R}$.
\item Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(xf(x) + f(y)) = f(x)^2 + y$.
% \item (IMO 1992) Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $f(x^2 + f(y)) = y + f(x)^2$ for all $x,y \in \mathbb{R}$.
\item (USAMO 2000) Call a real-valued function $f$ \emph{very convex} if
\[ \frac{f(x) + f(y)}{2} \geq f \left( \frac{x+y}{2} \right) + |x - y| \]
holds for all real numbers $x$ and $y$. Prove that no very convex function exists.
\item (IMO 2008) Find all functions $ f: (0, \infty) \mapsto (0, \infty)$ such that
\[ \frac {\left( f(w) \right)^2 + \left( f(x) \right)^2}{f(y^2) + f(z^2) } = \frac {w^2 + x^2}{y^2 + z^2} \]
for all positive real numbers $ w,x,y,z,$ satisfying $ wx = yz.$
\item Find all polynomials $f$ with real coefficients such that $f(x) f(2x^2) = f(2x^3 + x)$.
\item (IMO 2009) Determine all functions $ f$ from the set of positive integers to the set of positive integers such that, for all positive integers $ a$ and $ b$, there exists a non-degenerate triangle with sides of lengths
\[ a, f(b), \text{ and } f(b + f(a) - 1). \]
(A triangle is non-degenerate if its vertices are not collinear.)
% substitute variables for each other
\item (ISL 2007) Find all functions $f : \mathbb{R}^+ \to \mathbb{R}^+$ satisfying $f(x+f(y)) = f(x+y) + f(y)$ for all pairs of positive reals $x$ and $y$. ($\mathbb{R}^+$ is the set of positive reals.)
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.6011785188,
"avg_line_length": 72.4951456311,
"ext": "tex",
"hexsha": "79faa79cd58134eed18075964674787c3e87fc92",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a7bcc5eaf5c352e7b659c60194dafae7a19e1874",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "diego9627/mathOrgConvert",
"max_forks_repo_path": "example-files/functional_equations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a7bcc5eaf5c352e7b659c60194dafae7a19e1874",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "diego9627/mathOrgConvert",
"max_issues_repo_path": "example-files/functional_equations.tex",
"max_line_length": 557,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a7bcc5eaf5c352e7b659c60194dafae7a19e1874",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "diego9627/mathOrgConvert",
"max_stars_repo_path": "example-files/functional_equations.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2770,
"size": 7467
} |
\chapter{RANDOM EFFECTS TABLES FOR THE BIRD SPECIES}
Note that the ``Residual'' row presents the random error term, $\epsilon$. These values were ignored in the analysis.
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{American Goldfinch}
\label{American Goldfinch}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.2769126 & 0.52622 \\ \hline
city & Intercept & 0.0170802 & 0.13069 \\ \hline
NHalfDays & Intercept & 0.0009424 & 0.0307 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0091336 & 0.09557 \\ \hline
Residual & & 0.5117337 & 0.71536 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Black-billed Magpie}
\label{Black-billed Magpie}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.18718 & 0.4326 \\ \hline
city & Intercept & 0.02261 & 0.1504 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.01063 & 0.1031 \\ \hline
Residual & & 0.25656 & 0.5065 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Black-capped Chickadee}
\label{Black-capped Chickadee}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.17376 & 0.41684 \\ \hline
city & Intercept & 0.034414 & 0.18551 \\ \hline
NHalfDays & Intercept & 0.001475 & 0.03841 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.00497 & 0.07049 \\ \hline
Residual & & 0.173698 & 0.41677 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Blue Jay}
\label{Blue Jay}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.1797844 & 0.42401 \\ \hline
city & Intercept & 0.0164977 & 0.12844 \\ \hline
NHalfDays & Intercept & 0.0002742 & 0.01656 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0076209 & 0.0873 \\ \hline
Residual & & 0.2386128 & 0.48848 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Brown Creeper}
\label{Brown Creeper}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.0055615 & 0.07458 \\ \hline
city & Intercept & 0.0006456 & 0.02541 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0 & 0 \\ \hline
Residual & & 0.0408037 & 0.202 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Chestnut-backed Chickadee}
\label{Chestnut-backed Chickadee}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.16533 & 0.4066 \\ \hline
city & Intercept & 0.01781 & 0.13346 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.00689 & 0.08301 \\ \hline
Residual & & 0.20546 & 0.45327 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Chipping Sparrow}
\label{Chipping Sparrow}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.089829 & 0.29972 \\ \hline
city & Intercept & 0.007325 & 0.08559 \\ \hline
NHalfDays & Intercept & 0.001693 & 0.04115 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0 & 0 \\ \hline
Residual & & 0.244769 & 0.49474 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Common Grackle}
\label{Common Grackle}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.171613 & 0.41426 \\ \hline
city & Intercept & 0.003542 & 0.05951 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.006067 & 0.07789 \\ \hline
Residual & & 0.829976 & 0.91103 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Common Redpoll}
\label{Common Redpoll}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.175402 & 0.41881 \\ \hline
city & Intercept & 0.129613 & 0.36002 \\ \hline
NHalfDays & Intercept & 0.002685 & 0.05182 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0 & 0 \\ \hline
Residual & & 0.808522 & 0.89918 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Dark-eyed Junco}
\label{Dark-eyed Junco}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.2346261 & 0.48438 \\ \hline
city & Intercept & 0.0455615 & 0.21345 \\ \hline
NHalfDays & Intercept & 0.0008802 & 0.02967 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0136565 & 0.11686 \\ \hline
Residual & & 0.4199296 & 0.64802 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Downy Woodpecker}
\label{Downy Woodpecker}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.081578 & 0.28562 \\ \hline
city & Intercept & 0.010689 & 0.10339 \\ \hline
NHalfDays & Intercept & 0.001016 & 0.03187 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.00653 & 0.08081 \\ \hline
Residual & & 0.120721 & 0.34745 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{European Starling}
\label{European Starling}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.195598 & 0.44226 \\ \hline
city & Intercept & 0.005131 & 0.07163 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.012898 & 0.11357 \\ \hline
Residual & & 0.694701 & 0.83349 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Evening Grosbeak}
\label{Evening Grosbeak}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.42765 & 0.65395 \\ \hline
city & Intercept & 0.05215 & 0.22836 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.00264 & 0.05138 \\ \hline
Residual & & 0.71113 & 0.84329 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Hairy Woodpecker}
\label{Hairy Woodpecker}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.0356293 & 0.18876 \\ \hline
city & Intercept & 0.0038428 & 0.06199 \\ \hline
NHalfDays & Intercept & 0.0003211 & 0.01792 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0023614 & 0.04859 \\ \hline
Residual & & 0.0813611 & 0.28524 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Mountain Chickadee}
\label{Mountain Chickadee}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.0356293 & 0.18876 \\ \hline
city & Intercept & 0.0038428 & 0.06199 \\ \hline
NHalfDays & Intercept & 0.0003211 & 0.01792 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0023614 & 0.04859 \\ \hline
Residual & & 0.0813611 & 0.28524 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Mourning Dove}
\label{Mourning Dove}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.265376 & 0.51515 \\ \hline
city & Intercept & 0.006261 & 0.07912 \\ \hline
NHalfDays & Intercept & 0.00337 & 0.05806 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.017 & 0.13038 \\ \hline
Residual & & 0.4813 & 0.69376 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Northern Mockingbird}
\label{Northern Mockingbird}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.00812 & 0.0901 \\ \hline
city & Intercept & 0.00102 & 0.03192 \\ \hline
NHalfDays & Intercept & 0.0000582 & 0.00763 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0 & 0 \\ \hline
Residual & & 0.0379 & 0.19456 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Northern Cardinal}
\label{Northern Cardinal}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.272015 & 0.52155 \\ \hline
city & Intercept & 0.059878 & 0.2447 \\ \hline
NHalfDays & Intercept & 0.009218 & 0.09601 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.008608 & 0.09278 \\ \hline
Residual & & 0.225365 & 0.47473 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Pine Grosbeak}
\label{Pine Grosbeak}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.1908 & 0.4368 \\ \hline
city & Intercept & 0.03666 & 0.1915 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0 & 0 \\ \hline
Residual & & 0.54211 & 0.7363 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Pine Siskin}
\label{Pine Siskin}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.2292297 & 0.47878 \\ \hline
city & Intercept & 0.0280484 & 0.16748 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.0006648 & 0.02578 \\ \hline
Residual & & 0.6927975 & 0.83234 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Red-bellied Woodpecker}
\label{Red-bellied Woodpecker}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.026744 & 0.16354 \\ \hline
city & Intercept & 0.002125 & 0.0461 \\ \hline
NHalfDays & Intercept & 0 & 0 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.002155 & 0.04642 \\ \hline
Residual & & 0.057358 & 0.23949 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{Tufted Titmouse}
\label{Tufted Titmouse}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.109101 & 0.33031 \\ \hline
city & Intercept & 0.015438 & 0.12425 \\ \hline
NHalfDays & Intercept & 0.001416 & 0.03763 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.00652 & 0.08075 \\ \hline
Residual & & 0.160669 & 0.40084 \\ \hline
\end{longtable}
% Please add the following required packages to your document preamble:
% \usepackage{longtable}
% Note: It may be necessary to compile the document several times to get a multi-page table to line up properly
\begin{longtable}[c]{|l|l|l|l|}
\caption{White-throated Sparrow}
\label{White-throated Sparrow}\\
\hline
Groups & Name & Variance & Std. Dev. \\ \hline
\endhead
%
LOC\_ID & Intercept & 0.240152 & 0.4901 \\ \hline
city & Intercept & 0.083445 & 0.2889 \\ \hline
NHalfDays & Intercept & 0.002411 & 0.0491 \\ \hline
Effort\_Hrs\_Atleast & Intercept & 0.002052 & 0.0453 \\ \hline
Residual & & 0.320554 & 0.5662 \\ \hline
\end{longtable}
\chapter{GUIDELINES FOR PFW DATA}
\includepdf[pages=-]{appendix/pfwGuidelines.pdf}
\chapter{SCRIPT IMPLEMENTATION USING PYTHON FOR APPENDING CITIES}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\lstinputlisting[language=Python]{appendix/adding_cities.py}
\chapter{SCRIPT IMPLEMENTATION USING PYTHON FOR COLLECTING WEATHER UNDERGROUND CLIMATE DATA}
\lstinputlisting[language=Python]{appendix/adding_temp_500_0.py}
| {
"alphanum_fraction": 0.6103105451,
"avg_line_length": 40.8767123288,
"ext": "tex",
"hexsha": "b4092c3a7fd6eb7b3215a88ac04652137070c274",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6eff0b7138abd77ab46c7f6149a6bdf1054f7c40",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "siddkahal/MastersThesis_Latex_Doc",
"max_forks_repo_path": "appendix-outline.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6eff0b7138abd77ab46c7f6149a6bdf1054f7c40",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "siddkahal/MastersThesis_Latex_Doc",
"max_issues_repo_path": "appendix-outline.tex",
"max_line_length": 111,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6eff0b7138abd77ab46c7f6149a6bdf1054f7c40",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "siddkahal/MastersThesis_Latex_Doc",
"max_stars_repo_path": "appendix-outline.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5657,
"size": 17904
} |
\documentclass[a4paper,11pt]{article}
\usepackage{geometry}
\geometry{
a4paper,
total={170mm,257mm},
left=20mm,
top=20mm,
}
\usepackage{multirow}
\usepackage{colortbl}
\usepackage{hhline}
\usepackage{lipsum} %%% Lorem ipsum
\setlength{\headheight}{30.0pt}
\setlength{\footskip}{20pt}
\usepackage{hyperref}
\hypersetup{
colorlinks=True,
linkcolor={blue!20!black},
filecolor=magenta,
urlcolor=cyan,
}
\usepackage[export]{adjustbox}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\usepackage{multicol}
\pagestyle{fancy}
\fancyhf{}
\rhead{\textit{Pul074BEX004}}
\lhead{\textit{Amrit Prasad Phuyal}}
\rfoot{\thepage}
\usepackage{mathpazo} % Palatino font
\usepackage{graphicx}
\usepackage{float}
\input{./AnsENV.tex} %% Answer environment
\input{./QueENV.tex} %% Question Environment
\input{./CoverPage.tex} %%% cover page
\include{./CMD output.tex} %%% Cmd OUTPUT blue background
\begin{document}
%%%% COver page
\CP{Computer Network}{Lab \#7}{Configuration of Dynamic Routing using RIP and OSPF}
{SHARAD KUMAR GHIMIRE}
%%%%%%%%%%%%%%%%%%%%
\pagenumbering{gobble}
\renewcommand{\contentsname}{Table of Contents}
\tableofcontents
%\pagebreak
%\listoffigures
% \pagebreak
% \listoftables
\pagebreak
\lstlistoflistings
\pagebreak
\listoffigures
\pagebreak
\pagenumbering{arabic}
\section{Title} {\large Configuration of Dynamic Routing using RIP and OSPF}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Objective}
\begin{itemize}
\item To be familiar with dynamic routing
\item To configure dynamic routing using RIP and OSPF
\item To observe how dynamic routing can automatically adapt to changes in network topology
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%
\section{Requirement}
\begin{itemize}
\item Network simulation tool: Packet Tracer
\end{itemize}
%%%%%%%%%%%%%%%%%%%
\section{Procedure}
With the help of Cisco Packet Tracer, we simulated subnetting of different IP ranges and explored the configuration of dynamic routing using RIP and OSPF. We performed ping and traceroute between different PCs and compared the results before and after enabling RIP and OSPF dynamic routing.
\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Exercises:}
%%%%%%%%%%%%%%%%%%%%%%%%%11111111111111111111111
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{Q}
{
What is a dynamic routing? How it differs with static routing? Explain briefly.
}
\end{Q}
\begin{A}
{
Dynamic routing is an adaptive type of routing in which a router adjusts its routing table automatically according to changes in the network topology. It is less secure than static routing and uses protocols such as BGP, RIP and OSPF.
Static routes are defined manually on every router for every network, whereas dynamic routing detects topology changes and propagates them to neighbouring routers. Static routing uses simple algorithms, while dynamic routing relies on more complex ones. Static routing is suitable for small networks and requires no additional resources, whereas dynamic routing is suitable for larger networks and consumes additional router resources.
}
\end{A}
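As an illustration using this lab's topology (assuming /24 masks), reaching Network 4 from Router 0 with static routing would require manually entering a route such as\\
Router(config)\# \textbf{ip route 202.60.6.0 255.255.255.0 202.60.1.2}\\
on Router 0, and similar entries on every other router for every remote network, whereas RIP or OSPF, once enabled as in the next answer, learns and updates these routes automatically.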
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%22222222222222222222222222222222222
\begin{Q}
{
List out the dynamic routing configuration commands (that you have used in lab) of router
with their syntax and examples.
}
\end{Q}
\begin{A}
{
{\large\textbf{RIP}}\\
\textbf{router rip}\\
\textbf{network network-number}\\\\
Router(config)\# \textbf{router rip}\\
Router(config-router)\# \textbf{network 192.5.5.0}\\
Router(config-router)\# \textbf{network 205.7.5.0}\\
\HRule
{\large\textbf{OSPF}}\\
\textbf{router ospf PROCESS-ID}\\
\textbf{ network IP\_ADDRESS WILDCARD\_MASK area AREA\_ID}\\\\
R1\# \textbf{configure terminal}\\
R1(config)\# \textbf{router ospf 1}\\
R1(config-router)\# \textbf{network 102.108.109.16 0.0.0.3 area 0}\\
R1(config-router)\# \textbf{end}\\
}
\end{A}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%33333333333333333333333333333333333
\begin{Q}
{
Note down the observation of each steps with necessary commands specified in activities
A, B and C mentioned above and comment on the result by explaining the reason in detail.
}
\end{Q}
%
%
%
%
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
%
%
%
\addtocontents{lol}{\protect\subsection*{\HRule \\ Activities A\\ \HRule}}
\addtocontents{lof}{\protect\subsection*{\HRule \\ Activities A\\ \HRule}}
\subsubsection{Activities A}
{\bfseries \textit{A. Create the following network topology using Packet Tracer and perform the followings:}}
\begin{figure}[H]
\centering
\includegraphics[scale=0.7,cframe=blue 0.5pt 3pt]{./FIG/Lab7A.jpg}
\caption{Network topology Lab 7A}
\end{figure}
\begin{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA11111111111111111111111
\item\textbf{Configure the hostname, console password and enable password in each Router.}
\addtocontents{lol}{\protect\subsubsection*{A.1 : Routers Configuration}}
\CMD{./CODES/A_config0.txt}{Configuring hostname, console, enable and vty passwords for Router 0}
% \usepackage{colortbl}
\begin{table}[H]
\centering
\begin{tabular} {| m{6em}| m{6em}| m{9em} | m{8em}| m{7em} |}
\hline
{\cellcolor[rgb]{0.278,0.671,0.984}}\textbf{S.N} & \textbf{Hostname} & \textbf{Console Password} & \textbf{Enable Password} & \textbf{vty Password} \\
\hline
{\cellcolor[rgb]{0.278,0.671,0.984}}Router 0 & AMRIT\_0 & amrit & 403 & phuyal \\
\hline
{\cellcolor[rgb]{0.278,0.671,0.984}}Router 1 & AMRIT\_1 & amrit & 403 & phuyal \\
\hline
{\cellcolor[rgb]{0.278,0.671,0.984}}Router 2 & AMRIT\_2 & amrit & 403 & phuyal \\
\hline
{\cellcolor[rgb]{0.278,0.671,0.984}}Router 3 & AMRIT\_3 & amrit & 403 & phuyal \\
\hline
\end{tabular}
\caption{Hostname, console password, enable password and vty password for each router}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA222222222222222222222222222
\item\textbf{ Configure each interfaces of Router with given IP address and appropriate interface description.}
\addtocontents{lol}{\protect\subsubsection*{A.2 : Assign IP to Interfaces}}
\CMD{./CODES/A_IP0.txt}{Configuring each interface of Router0}
\CMD{./CODES/A_IP3.txt}{Configuring each interface of Router3}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\
\begin{tabular}{|l|l|l|l|}
\hline
\rowcolor[rgb]{0.443,0.831,1} \textbf{Router no.} & {\cellcolor[rgb]{0.325,1,0.784}}\textbf{GigabitEthernet} & \textbf{Assigned IP} & \textbf{Description} \\
\hline
\multirow{2}{*}{\textbf{Router 0~ }} & {\cellcolor[rgb]{0.325,1,0.784}}0/0 & 202.60.1.1 & Connected to Router 1~~ \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/1 & 202.60.0.1 & Connected to Network 1 \\
\hline
\multirow{3}{*}{\textbf{Router 1~~ }} & {\cellcolor[rgb]{0.325,1,0.784}}0/0 & 202.60.1.2 & Connected to Router 0 \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/1 & 202.60.3.1 & Connected to Router 2 \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/2 & 202.60.2.1 & Connected to Network 2 \\
\hline
\multirow{3}{*}{\textbf{~Router 2~~ }} & {\cellcolor[rgb]{0.325,1,0.784}}0/0 & 202.60.3.2 & Connected to Router 1 \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/1 & 202.60.5.1 & Connected to Router 3 \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/2 & 202.60.4.1 & Connected to Network 3 \\
\hline
\multirow{2}{*}{\textbf{Router 3 }} & {\cellcolor[rgb]{0.325,1,0.784}}0/0 & 202.60.5.2 & Connected to Router 2 \\
\hhline{|~---|}
& {\cellcolor[rgb]{0.325,1,0.784}}0/1 & 202.60.6.1 & Connected to Network 4 \\
\hline
\end{tabular}
\caption{Assigned IPs and description for all interfaces}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAA3333333333333333333333333333333
\item\textbf{ Configure the IP address and default gateway on each computer as specified in figure above. }
% \usepackage{multirow}
% \usepackage{colortbl}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{6em}| m{8em}| m{9em} | m{8em} |}
\hline
\rowcolor[rgb]{0.235,1,0.808} \textbf{Network no.~} & \textbf{Default gateway} & \textbf{~Device name} & \textbf{Assigned IP} \\
\hline
{\cellcolor[rgb]{0.259,0.753,1}} & \multirow{2}{*}{202.60.0.1} & Server 0 & 202.60.0.2 \\
\hhline{|>{\arrayrulecolor[rgb]{0.259,0.753,1}}-~>{\arrayrulecolor{black}}--|}
\multirow{-2}{*}{{\cellcolor[rgb]{0.259,0.753,1}}Network 1} & & PC0 & 202.60.0.3 \\
\hline
{\cellcolor[rgb]{0.259,0.753,1}} & \multirow{2}{*}{202.60.2.1} & PC1 & 202.60.2.2 \\
\hhline{|>{\arrayrulecolor[rgb]{0.259,0.753,1}}-~>{\arrayrulecolor{black}}--|}
\multirow{-2}{*}{{\cellcolor[rgb]{0.259,0.753,1}}Network 2} & & PC4 & 202.60.2.3 \\
\hline
{\cellcolor[rgb]{0.259,0.753,1}} & \multirow{2}{*}{202.60.4.1} & PC5 & 202.60.4.2 \\
\hhline{|>{\arrayrulecolor[rgb]{0.259,0.753,1}}-~>{\arrayrulecolor{black}}--|}
\multirow{-2}{*}{{\cellcolor[rgb]{0.259,0.753,1}}Network 3} & & PC2 & 202.60.4.3 \\
\hline
{\cellcolor[rgb]{0.259,0.753,1}} & \multirow{2}{*}{202.60.6.1} & Server 1 & 202.60.6.2 \\
\hhline{|>{\arrayrulecolor[rgb]{0.259,0.753,1}}-~>{\arrayrulecolor{black}}--|}
\multirow{-2}{*}{{\cellcolor[rgb]{0.259,0.753,1}}Network 4} & & PC3 & 202.60.6.3 \\
\hline
\end{tabular}
\caption{Device names, assigned IPs and default gateways}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA44444444444444444444444444444
\item\textbf{ Enable telnet on each Router. }
Already enabled in Activity A.1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA55555555555555555555555555555
\item\textbf{ Observe the output of the command \textit{show ip route} in each Router and note it down.}
\addtocontents{lol}{\protect\subsubsection*{A.5 : Routing table of Routers}}
\CMD{./CODES/A_Showip0.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/A_Showip1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/A_Showip2.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/A_Showip3.txt}{\textit{show ip route} Router 3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA666666666666666666666666666
\item\textbf{ Observe the output while using ping command from PC0 to PC0, PC1, PC2, PC3, Server0,
Server1, Router0, Router1, Router2 and Router3.
}
\addtocontents{lol}{\protect\subsubsection*{A.6 : Ping from PC0}}
\CMD{./CODES/AP0-r01.txt}{Ping from PC0 to Router 0 : 0/1}
\CMD{./CODES/AP0-r22.txt}{Ping from PC0 to Router 2 : 0/2}
\CMD{./CODES/AP0-3.txt}{Ping from PC0 to PC3}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \textbf{Ping status} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & \multirow{-4}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{ Successful}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{PC0}} & Server 1 & \multirow{-12}{*}{{\cellcolor[rgb]{1,0.173,0.09}} \textbf{Failed} } \\
\hline
\end{tabular}
\caption{Ping from PC0 to all Routers, PCs and Servers}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA777777777777777777777777777777777
\item\textbf{ Similarly, observe the output while using ping command from PC1 to all other computers,
servers and routers.
}
\addtocontents{lol}{\protect\subsubsection*{A.7 : Ping from PC1}}
\CMD{./CODES/AP1-r01.txt}{Ping from PC1 to Router 0 : 0/1}
\CMD{./CODES/AP1-r22.txt}{Ping from PC1 to Router 2 : 0/2}
\CMD{./CODES/AP1-3.txt}{Ping from PC1 to PC3}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping status}} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & \multicolumn{1}{l|}{\multirow{-4}{*}{{\cellcolor[rgb]{1,0.173,0.09}}\textbf{Failed}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & \multirow{-4}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{PC1}} & Server 1 & \multirow{-8}{*}{{\cellcolor[rgb]{1,0.173,0.09}}\textbf{ Failed }} \\
\hhline{>{\arrayrulecolor{black}}|-->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
\end{tabular}
\arrayrulecolor{black}
\caption{Ping from PC1 to all Routers, PCs and Servers}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA888888888888888888888888888
\item\textbf{ Repeat the process from all other computers and routers.}
\addtocontents{lol}{\protect\subsubsection*{A.8 : Ping from other PCs and Router}}
\begin{enumerate}
\item \textbf{Ping from PC2}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.a : Ping from PC2}}
\CMD{./CODES/AP2-r01.txt}{Ping from PC2 to Router 0 : 0/1}
\CMD{./CODES/AP2-1.txt}{Ping from PC2 to PC1}
\CMD{./CODES/AP2-3.txt}{Ping from PC2 to PC3}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \textbf{Ping status} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & \multirow{-8}{*}{{\cellcolor[rgb]{1,0.173,0.09}}\textbf{Failed} } \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & \multicolumn{1}{l}{\multirow{-4}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & \multicolumn{1}{l}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & \multicolumn{1}{l}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & \multicolumn{1}{l}{{\cellcolor[rgb]{1,0.173,0.09}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{PC2}} & Server 1 & \multicolumn{1}{l}{\multirow{-4}{*}{{\cellcolor[rgb]{1,0.173,0.09}}\textbf{Failed}}} \\
\hhline{>{\arrayrulecolor{black}}|-->{\arrayrulecolor[rgb]{1,0.173,0.09}}-}
\end{tabular}
\caption{Ping from PC2 to all Routers, PCs and Servers}
\arrayrulecolor{black}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
\item \textbf{Ping from PC3}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.b : Ping from PC3}}
\CMD{./CODES/AP3-r01.txt}{Ping from PC3 to Router 0 : 0/1}
\CMD{./CODES/AP3-1.txt}{Ping from PC3 to PC1}
\CMD{./CODES/AP3-2.txt}{Ping from PC3 to PC2}
\CMD{./CODES/AP3-s1.txt}{Ping from PC3 to Server 1}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular} {| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \textbf{Ping status} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{1,0.173,0.09}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.173,0.09}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & \multirow{-12}{*}{{\cellcolor[rgb]{1,0.173,0.09}} \textbf{Failed}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}} \textbf{Successful}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{PC3}} & Server 1 & \multicolumn{1}{l}{{\cellcolor[rgb]{0.376,1,0.882}}} \\
\hhline{>{\arrayrulecolor{black}}|-->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
\end{tabular}
\arrayrulecolor{black}
\caption{Ping from PC3 to all Routers, PCs and Servers}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
\item \textbf{Ping from Router 0}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.c : Ping from Router 0}}
\CMD{./CODES/APr0-0.txt}{Ping from Router 0 to PC0}
\CMD{./CODES/APr0-1.txt}{Ping from Router 0 to PC1}
\CMD{./CODES/APr0-2.txt}{Ping from Router 0 to PC2}
\CMD{./CODES/APr0-s1.txt}{Ping from Router 0 to Server 1}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \textbf{Ping status} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & \multirow{-5}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Router 0}} & Server 1 & \multirow{-11}{*}{{\cellcolor[rgb]{1,0.141,0.059}} \textbf{Failed}} \\
\hline
\end{tabular}
\caption{Ping from Router 0 to all Routers, PCs and Servers}
\end{table}
\item \textbf{Ping from Router 1}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.d : Ping from Router 1}}
\CMD{./CODES/APr1-0.txt}{Ping from Router 1 to PC0}
\CMD{./CODES/APr1-1.txt}{Ping from Router 1 to PC1}
\CMD{./CODES/APr1-r21.txt}{Ping from Router 1 to Router 2: 0/1}
\CMD{./CODES/APr1-s1.txt}{Ping from Router 1 to Server 1}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping status}} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & \multicolumn{1}{l|}{\multirow{-3}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed} }} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & \multirow{-6}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Router 1}} & Server 1 & \multirow{-7}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed}} \\
\hhline{>{\arrayrulecolor{black}}|-->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
\end{tabular}
\caption{Ping from Router 1 to all Routers, PCs and Servers}
\arrayrulecolor{black}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\item \textbf{Ping from Router 2}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.e : Ping from Router 2}}
\CMD{./CODES/APr2-0.txt}{Ping from Router 2 to PC0}
\CMD{./CODES/APr2-r10.txt}{Ping from Router 2 to Router 1 : 0/0}
\CMD{./CODES/APr2-2.txt}{Ping from Router 2 to PC2}
\CMD{./CODES/APr2-3.txt}{Ping from Router 2 to PC3}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping status}} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & \multicolumn{1}{l|}{{\cellcolor[rgb]{1,0.141,0.059}}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & \multicolumn{1}{l|}{\multirow{-5}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed} }} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & \multirow{-2}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & \multirow{-5}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{>{\arrayrulecolor{black}}|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Router 2}} & Server 1 & \multirow{-3}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed}} \\
\hhline{>{\arrayrulecolor{black}}|-->{\arrayrulecolor[rgb]{1,0.141,0.059}}-}
\end{tabular}
\caption{Ping from Router 2 to all Routers, PCs and Servers}
\arrayrulecolor{black}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\item \textbf{Ping from Router 3}
\addtocontents{lol}{\protect\subsubsection*{\quad \quad \small A.8.f : Ping from Router 3}}
\CMD{./CODES/APr3-0.txt}{Ping from Router 3 to PC0}
\CMD{./CODES/APr3-1.txt}{Ping from Router 3 to PC1}
\CMD{./CODES/APr3-2.txt}{Ping from Router 3 to PC2}
\CMD{./CODES/APr3-s1.txt}{Ping from Router 3 to Server 1}
% \usepackage{colortbl}
% \usepackage{multirow}
% \usepackage{hhline}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{9em}| m{12em}| m{9em} |}
\hline
{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Sending Host} & \textbf{Destination} & \textbf{Ping status} \\
\hline
{\cellcolor[rgb]{0.333,0.686,1}} & PC0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Server 0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 1 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC1 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/0 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/2 & {\cellcolor[rgb]{1,0.141,0.059}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{1,0.141,0.059}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC2 & \multirow{-11}{*}{{\cellcolor[rgb]{1,0.141,0.059}}\textbf{Failed} } \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}--|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 2 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/0 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & Router 3 : 0/1 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.333,0.686,1}} & PC3 & {\cellcolor[rgb]{0.376,1,0.882}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.333,0.686,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.376,1,0.882}}->{\arrayrulecolor{black}}|}
\multirow{-16}{*}{{\cellcolor[rgb]{0.333,0.686,1}}\textbf{Router 3}} & Server 1 & \multirow{-5}{*}{{\cellcolor[rgb]{0.376,1,0.882}}\textbf{Successful}} \\
\hline
\end{tabular}
\caption{Ping from Router 3 to all Routers, PCs and Servers}
\end{table}
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAA99999999999999999999999999999
\item\textbf{From PC0 enter into Router0 using telnet and configure RIP.}
\addtocontents{lol}{\protect\subsubsection*{A.9 : Telnet to Router 0 and configure RIP. }}
\CMD{./CODES/AT0-0R.txt}{Telnet to Router 0 and configure RIP}
%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAA10 10 10 10 10 10 1 01 0
\item\textbf{From there enter into Router1 using telnet and configure RIP.}
\addtocontents{lol}{\protect\subsubsection*{A.10 : Telnet to Router 1 and configure RIP. }}
\CMD{./CODES/AT0-1R.txt}{Telnet to Router 1 and configure RIP}
%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAA11 11 11 11 11 11 11 11
\item\textbf{Repeat the process for Router2 and Router3.}
\addtocontents{lol}{\protect\subsubsection*{A.11 : Telnet to Router 2 and 3 and configure RIP. }}
From Router 1 we further Telnet into Router 2 and Router 3 to configure RIP on them.
\CMD{./CODES/AT0-2R.txt}{Telnet to Router 2 and configure RIP}
\CMD{./CODES/AT0-3R.txt}{Telnet to Router 3 and configure RIP}
%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAA 12 12 12 12 12 12 12 12 12
\item\textbf{Repeat the step from 5 to 8 and note down the output by observing it.}
\addtocontents{lol}{\protect\subsubsection*{A.12 : Repeat the step from 5 to 8 after RIP }}
\CMD{./CODES/A_Showip0R.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/A_Showip1R.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/A_Showip2R.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/A_Showip3R.txt}{\textit{show ip route} Router 3}
We can clearly see that there are additional entries, marked with the initial \textbf{R}, in the routing table of every router compared with those available in Activity A.5. Although we have configured RIP only for the directly connected networks, the routing table now includes all the other routes learned through communication between the routers. In Router 0 we entered only the neighboring networks into RIP, yet its routing table contains a routing path to Network 4.
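For reference, the RIP configuration applied on each router has the following general form. This is only a sketch: the exact \texttt{network} statements per router are in the Telnet listings of Activities A.9 to A.11, and the two networks shown here are taken from Router 0's side of the topology as examples.
\begin{verbatim}
Router0> enable
Router0# configure terminal
Router0(config)# router rip
Router0(config-router)# network 200.60.0.0    ! advertise each directly connected network
Router0(config-router)# network 202.60.1.0
Router0(config-router)# end
\end{verbatim}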
\CMD{./CODES/AP0-r22R.txt}{Ping from PC0 to Router 2 : 0/2}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC4 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{PC0}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status PC0 to all others after RIP}
\end{table}
\CMD{./CODES/AP1-3R.txt}{Ping from PC1 to PC3}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC4 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{PC1}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status PC1 to all others after RIP}
\end{table}
\CMD{./CODES/AP2-1R.txt}{Ping from PC2 to PC1}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC4 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{PC2}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status PC2 to all others after RIP}
\end{table}
\CMD{./CODES/AP3-s1R.txt}{Ping from PC3 to Server 1}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC4 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{PC3}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status PC3 to all others after RIP}
\end{table}
\CMD{./CODES/APr0-1R.txt}{Ping from Router 0 to PC1}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{Router 0}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status Router 0 to all others after RIP}
\end{table}
\CMD{./CODES/APr1-0R.txt}{Ping from Router 1 to PC0}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{Router 1}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status Router 1 to all others after RIP}
\end{table}
\CMD{./CODES/APr2-2R.txt}{Ping from Router 2 to PC2}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{Router 2}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status Router 2 to all others after RIP}
\end{table}
\CMD{./CODES/APr3-0R.txt}{Ping from Router 3 to PC0}
\begin{table}[H]
\centering
\arrayrulecolor{black}
\begin{tabular}{| m{10em}| m{10em}| m{10em} |}
\hline
\multicolumn{1}{|l|}{\textbf{Sending Host}} & \textbf{Destination} & \multicolumn{1}{l|}{\textbf{Ping Status}} \\
\hline
{\cellcolor[rgb]{0.141,0.525,1}} & PC1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 0 : 0/2 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC3 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/0 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & Router 1 : 0/1 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
{\cellcolor[rgb]{0.141,0.525,1}} & PC5 & {\cellcolor[rgb]{0.42,0.988,0.827}} \\
\hhline{|>{\arrayrulecolor[rgb]{0.141,0.525,1}}->{\arrayrulecolor{black}}->{\arrayrulecolor[rgb]{0.42,0.988,0.827}}->{\arrayrulecolor{black}}|}
\multirow{-10}{*}{{\cellcolor[rgb]{0.141,0.525,1}}\textbf{Router 3}} & PC6 & \multirow{-10}{*}{{\cellcolor[rgb]{0.42,0.988,0.827}} \textbf{Successful}} \\
\hline
\end{tabular}
\caption{ Ping Status Router 3 to all others after RIP}
\end{table}
After successfully configuring RIP on every router, the routers start to exchange the information available in their routing tables with their neighbors, which makes it possible for each PC to reach every other PC.
%%%%%%%%%%%%%%%%%%%%%%%%%%AAAAAAAAAAAAAAAAAAAAAAAAAA 13 13 13 13 13 13 13 13 1 3 13
\item\textbf{Use tracert command to observe the output from each PC to all other PCs.}
\addtocontents{lol}{\protect\subsubsection*{A.13 : Trace Route after RIP. }}
\CMD{./CODES/AT0-1.txt}{Trace Route from PC0 to PC1}
\CMD{./CODES/AT0-2.txt}{Trace Route from PC0 to PC2}
\CMD{./CODES/AT0-3.txt}{Trace Route from PC0 to PC3}
\CMD{./CODES/AT1-2.txt}{Trace Route from PC1 to PC2}
\CMD{./CODES/AT1-3.txt}{Trace Route from PC1 to PC3}
\CMD{./CODES/AT2-3.txt}{Trace Route from PC2 to PC3}
\CMD{./CODES/AT3-4.txt}{Trace Route from PC3 to PC4}
\end{enumerate}
We can observe that the packets followed the optimal path to their destinations.
\pagebreak
%
%
%
%
%%%%%%%%%%%%%%%%%%%%%%BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
%%%%%%%%%%%%%%%%%%%%%%BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
%%%%%%%%%%%%%%%%%%%%%%BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
%%%%%%%%%%%%%%%%%%%%%BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
%%%%%%%%%%%%%%%%%%%%%%BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
%
%
%
\addtocontents{lol}{\protect\subsection*{\HRule \\ Activities B\\ \HRule}}
\addtocontents{lof}{\protect\subsection*{\HRule \\ Activities B\\ \HRule}}
\subsubsection{Activities B}
{\bfseries \textit{B. To increase the reliability, add a new router to connect Switch0 with Switch1 as shown in
figure below. Assign IP address to both interfaces of router as specified. Configure RIP in this
router (for each of the network connected to it) and observe the followings}}
\begin{figure}[H]
\centering
\includegraphics[scale=0.7,cframe=blue 0.5pt 3pt]{./FIG/Lab7B.jpg}
\caption{Network topology Lab 7B}
\end{figure}
First, disable the passive interface for Network 1 and Network 4 so that Router 4 can receive updates.
\CMD{./CODES/BR4IPR.txt}{Assigning IP and Configuring RIP on Router 4}
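As a minimal sketch of what this involves (the authoritative commands are in the listing above; the interface names used here are assumptions):
\begin{verbatim}
Router4(config)# router rip
Router4(config-router)# no passive-interface FastEthernet0/0   ! accept/send updates towards Network 1
Router4(config-router)# no passive-interface FastEthernet0/1   ! accept/send updates towards Network 4
\end{verbatim}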
\begin{enumerate}
\item \textbf{Observe the output of show ip route command in each router, and compare with that of
previous case that is specified in Activity A.}
\addtocontents{lol}{\protect\subsubsection*{B.1 : \textit{show ip route} in each Router and Compare }}
\CMD{./CODES/B_Showip0R.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/B_Showip1R.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/B_Showip2R.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/B_Showip3R.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/B_Showip4R.txt}{\textit{show ip route} Router 4}
After enabling RIP on the additional router (Router 4), the routing table in each router has been updated so that each destination is reached with the minimum number of hops, i.e. through the shortest path. For example, in Router 0 there was previously an entry to reach Network 4 through the 202.60.1.0 network taking 3 hops, but it has now been updated to go through 200.60.0.10 in 1 hop.
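In terms of the RIP metric (hop count), the change in Router 0's route to Network 4 can be summarized as
\[
\text{metric}_{\text{before}} = 3 \text{ hops (via 202.60.1.0)}, \qquad
\text{metric}_{\text{after}} = 1 \text{ hop (via 200.60.0.10)},
\]
and RIP always installs the route with the smaller hop count.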
\item \textbf{Use tracert command from each PC to each another PC and note down the result. Note how
the changing network topology is addressed by dynamic routing automatically.}
\addtocontents{lol}{\protect\subsubsection*{B.2 : \textit{tracert} in each Router and Compare }}
\CMD{./CODES/BT0-1.txt}{Trace Route from PC0 to PC1}
\CMD{./CODES/BT0-2.txt}{Trace Route from PC0 to PC2}
\CMD{./CODES/BT0-3.txt}{Trace Route from PC0 to PC3}
\CMD{./CODES/BT1-2.txt}{Trace Route from PC1 to PC2}
\CMD{./CODES/BT1-3.txt}{Trace Route from PC1 to PC3}
\CMD{./CODES/BT2-3.txt}{Trace Route from PC2 to PC3}
\CMD{./CODES/BT3-4.txt}{Trace Route from PC3 to PC4}
\CMD{./CODES/BT3-5.txt}{Trace Route from PC3 to PC5}
\CMD{./CODES/BT3-0.txt}{Trace Route from PC3 to PC0}
With RIP and its shortest-path selection, each router now has routes to reach every destination through the shortest available path. Taking PC0 and PC3 as an example: before introducing Router 4 between Network 1 and Network 4, packets from PC0 had to travel through Router 1, then Router 2, then Router 3, and finally to PC3, but now packets can reach PC3 in 1 hop through Router 4.
\item \textbf{Now disconnect the link between Router0 and Router1 and observe the output while using
tracert command from each PC to each another PC. Note how changing network topology
is addressed in dynamic routing. Also observe the routing table of each router using show
ip route.}
\addtocontents{lol}{\protect\subsubsection*{B.3 : Trace Route and Routing Table after disconnecting \textbf{202.60.1.0/24} line}}
\CMD{./CODES/BT0-1D1.txt}{Trace Route from PC0 to PC1}
\CMD{./CODES/BT5-0D1.txt}{Trace Route from PC5 to PC0}
\CMD{./CODES/BT4-0D1.txt}{Trace Route from PC4 to PC0}
\CMD{./CODES/B_Showip0D1.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/B_Showip1D1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/B_Showip2D1.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/B_Showip3D1.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/B_Showip4D1.txt}{\textit{show ip route} Router 4}
Through the \textit{tracert} output of the PCs and the routing tables of the routers, it is clear that the routers have successfully adapted to the change in topology and hence determined the shortest alternative route to each destination. For example, packets from PC4 previously travelled through 202.60.1.0 to Router 0 and then to PC0, but as that link is down the routers have determined the alternative path through Router 2, then Router 3, then Router 4, and finally to PC0.
\item \textbf{Similarly observe the routing table of each router and output of traceroute/tracert command
by removing different links between routers as well as by connecting the links.}
\addtocontents{lol}{\protect\subsubsection*{B.4 : Trace Route and Routing Table after disconnecting \textbf{202.60.3.0/24} line}}
Trace Route and Routing Table after disconnecting the \textbf{202.60.3.0/24} line:
\CMD{./CODES/BT0-2D3.txt}{Trace Route from PC0 to PC2}
\CMD{./CODES/BT1-5D3.txt}{Trace Route from PC1 to PC5}
\CMD{./CODES/BT3-4D3.txt}{Trace Route from PC3 to PC4}
\CMD{./CODES/B_Showip0D3.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/B_Showip1D3.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/B_Showip2D3.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/B_Showip3D3.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/B_Showip4D3.txt}{\textit{show ip route} Router 4}
Here PC1 uses the routes through PC4 to reach PC2 after the link connecting these networks is disconnected.
\item \textbf{Note down how the changing network topology is addressed by dynamic routing protocol
automatically to determine the optimal path to reach each of the destination network.}
Through the RIP protocol, routing information is periodically shared between the routers, so when any change is detected a router first updates its own routing table and then sends updates about the change to its neighbors.
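On Cisco routers the RIP update interval defaults to 30 seconds, and the related timers can be tuned under the RIP process. A minimal sketch, assuming the default values:
\begin{verbatim}
Router(config)# router rip
Router(config-router)# timers basic 30 180 180 240
! update, invalid, holddown and flush timers respectively (the defaults)
\end{verbatim}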
\end{enumerate}
\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%
%
%
%
%
%%%%%%%%%%%%%%%%CCCCCCCCCCCCCCCCCCCCC
%%%%%%%%%%%%%%%%CCCCCCCCCCCCCCCCCCCCC
%%%%%%%%%%%%%%%%CCCCCCCCCCCCCCCCCCCCC
%%%%%%%%%%%%%%%%CCCCCCCCCCCCCCCCCCCCC
%%%%%%%%%%%%%%%%CCCCCCCCCCCCCCCCCCCCC
%
%
%
%
\addtocontents{lol}{\protect\subsection*{\HRule \\ Activities C\\ \HRule}}
\addtocontents{lof}{\protect\subsection*{\HRule \\ Activities C\\ \HRule}}
\subsubsection{Activities C}
{\bfseries \textit{C. Create the network topology as shown in figure below and perform the following activities:}}
\begin{figure}[H]
\centering
\includegraphics[scale=0.61,cframe=blue 0.5pt 3pt]{./FIG/Lab7C.jpg}
\caption{Network topology Lab 7C}
\end{figure}
\begin{enumerate}
\item \textbf{You have given an IP addresses of 200.100.50.0/24. You have to divide this address range
for different LANs i.e. LAN1, LAN2, LAN3 and LAN4 interconnected as shown in figure
above, each LAN having 60, 27, 25 and 12 number of hosts. In addition to this there are
few networks having only two host in each (i.e. connection between two routers). Allocate
the IP address range for each of the sub-networks with their network address, broadcast
address and subnet mask. Also list out the unused range of IP addresses (if any)}
% \usepackage{colortbl}
\begin{table}[H]
\centering
\begin{tabular} {| m{3em} | m{3em} | m{4em} | m{6em} | m{2em} | m{7em} | m{7em} | m{6em} |}
\rowcolor[rgb]{0.278,0.573,0.792} \textbf{Subnet Name} & \textbf{Needed Size} & \textbf{Allocated Size} & \textbf{Network Address} & \textbf{Mask} & \textbf{Dec Mask} & \textbf{Assignable Range} & \textbf{Broadcast Address} \\
\hline
\textbf{LAN 1} & 60 & 62 & 200.100.50.0 & /26 & 255.255.255.192 & 200.100.50.1 - 200.100.50.62 & 200.100.50.63 \\
\hline
\textbf{LAN 2} & 27 & 30 & 200.100.50.64 & /27 & 255.255.255.224 & 200.100.50.65 - 200.100.50.94 & 200.100.50.95 \\
\hline
\textbf{LAN 3} & 25 & 30 & 200.100.50.96 & /27 & 255.255.255.224 & 200.100.50.97 - 200.100.50.126 & 200.100.50.127 \\
\hline
\textbf{LAN 4} & 12 & 14 & 200.100.50.128 & /28 & 255.255.255.240 & 200.100.50.129 - 200.100.50.142 & 200.100.50.143 \\
\hline
\textbf{Link 1} & 2 & 2 & 200.100.50.144 & /30 & 255.255.255.252 & 200.100.50.145 - 200.100.50.146 & 200.100.50.147 \\
\hline
\textbf{Link 2} & 2 & 2 & 200.100.50.148 & /30 & 255.255.255.252 & 200.100.50.149 - 200.100.50.150 & 200.100.50.151 \\
\hline
\textbf{Link 3} & 2 & 2 & 200.100.50.152 & /30 & 255.255.255.252 & 200.100.50.153 - 200.100.50.154 & 200.100.50.155 \\
\hline
\end{tabular}
\caption{Assigning IP addresses through VLSM}
\end{table}
The unused IP range is \textbf{ 200.100.50.156 - 200.100.50.255}
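The allocated sizes follow directly from the number of host bits each mask leaves free. As a worked example for LAN 1, which needs 60 hosts:
\[
2^{h} - 2 \geq 60 \;\Rightarrow\; h = 6, \qquad
\text{prefix length} = 32 - 6 = 26, \qquad
2^{6} - 2 = 62 \text{ usable addresses},
\]
while each router-to-router link needs only two addresses, so a /30 subnet ($2^{2} - 2 = 2$ usable addresses) is sufficient.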
\item \textbf{On the basis of your division configure IP address for each interface of routers. Also configure the IP address and default gateway of each PC and server.}
\addtocontents{lol}{\protect\subsubsection*{C.2 : Configure IP in Router and PCs}}
\CMD{./CODES/C_ConfigR0.txt}{Configure Router 0}
\CMD{./CODES/C_ConfigR1.txt}{Configure Router 1}
\CMD{./CODES/C_ConfigR2.txt}{Configure Router 2}
\CMD{./CODES/C_ConfigR3.txt}{Configure Router 3}
\begin{figure}[H]
\centering
\includegraphics[scale=0.81,cframe=blue 0.5pt 3pt]{./FIG/CPC0.jpg}
\caption{Config IP and Default gateway in PC 0}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.81,cframe=blue 0.5pt 3pt]{./FIG/CPC3.jpg}
\caption{Config IP and Default gateway in PC 3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.81,cframe=blue 0.5pt 3pt]{./FIG/CPC5.jpg}
\caption{Config IP and Default gateway in PC 5}
\end{figure}
\item \textbf{Enable routing in between LANs using OSPF. Test the connectivity from a pc of any LAN
to pc of any another LAN by using ping.}
\addtocontents{lol}{\protect\subsubsection*{C.3 : Enable Routing using OSPF and Testing}}
\CMD{./CODES/C_RouteR0.txt}{Routing using OSPF Router 0}
\CMD{./CODES/C_RouteR1.txt}{Routing using OSPF Router 1}
\CMD{./CODES/C_RouteR2.txt}{Routing using OSPF Router 2}
\CMD{./CODES/C_RouteR3.txt}{Routing using OSPF Router 3}
\CMD{./CODES/CP0-1.txt}{Ping from PC0 to PC1}
\CMD{./CODES/CP1-2.txt}{Ping from PC1 to PC2}
\CMD{./CODES/CP2-3.txt}{Ping from PC2 to PC3}
\CMD{./CODES/CP0-2.txt}{Ping from PC0 to PC2}
\CMD{./CODES/CP3-1.txt}{Ping from PC3 to PC1}
\CMD{./CODES/CP3-0.txt}{Ping from PC3 to PC0}
All pings are successful after using the OSPF routing protocol. Similar to RIP, OSPF also adapts the routing table to changes in the topology, and OSPF-learned routes are marked with the initial \textbf{O} in the routing table.
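For reference, the OSPF configuration on each router follows the pattern sketched below; the exact subnets advertised by each router are in the listings above, and the use of process ID 1 with a single area 0 is an assumption typical of such a lab:
\begin{verbatim}
Router(config)# router ospf 1
Router(config-router)# network 200.100.50.0   0.0.0.63 area 0   ! LAN 1 (/26)
Router(config-router)# network 200.100.50.144 0.0.0.3  area 0   ! Link 1 (/30)
\end{verbatim}
The wildcard mask is the inverse of the subnet mask, so a /26 subnet uses 0.0.0.63 and a /30 link uses 0.0.0.3.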
\item \textbf{Note down the result of traceroute using traceroute command from PC0 to PC3}
\addtocontents{lol}{\protect\subsubsection*{C.4 : Trace Route from PC0 to PC3}}
\CMD{./CODES/CT0-3.txt}{Trace Route from PC0 to PC3}
\item \textbf{Observe the routing table in each router by using show ip route command.}
\addtocontents{lol}{\protect\subsubsection*{C.5 : Routing Table}}
\CMD{./CODES/C_Showip0.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/C_Showip1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/C_Showip2.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/C_Showip3.txt}{\textit{show ip route} Router 3}
\end{enumerate}
\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%
%
%
%5%%%%%%%%%%%%%%%%%%%%DDDDDDDDDDDDDDDDDDDD
%5%%%%%%%%%%%%%%%%%%%%DDDDDDDDDDDDDDDDDDDD
%5%%%%%%%%%%%%%%%%%%%%DDDDDDDDDDDDDDDDDDDD
%5%%%%%%%%%%%%%%%%%%%%DDDDDDDDDDDDDDDDDDDD
%5%%%%%%%%%%%%%%%%%%%%DDDDDDDDDDDDDDDDDDDD
%
%
%
%
%
%
%
\addtocontents{lol}{\protect\subsection*{\HRule \\ Activities D\\ \HRule}}
\addtocontents{lof}{\protect\subsection*{\HRule \\ Activities D\\ \HRule}}
\subsubsection{Activities D}
{\bfseries \textit{D. Now connect an additional router as shown in the figure below:}}
\begin{figure}[H]
\centering
\includegraphics[scale=0.62,cframe=blue 0.5pt 3pt]{./FIG/Lab7D.jpg}
\caption{Network topology Lab 7D}
\end{figure}
\begin{enumerate}
\item \textbf{Configure the interfaces of Router4 with appropriate IP addresses and enable OSPF in it.
Now note down the result of traceroute command from PC3 to PC0 and routing table in
each router. Compare the result of previous.}
\addtocontents{lol}{\protect\subsubsection*{D.1 : Configure IP and Enable OSPF in Router 4}}
\CMD{./CODES/D_ConfigR4.txt}{Configure Router 4}
\CMD{./CODES/D_RouteR4.txt}{Routing using OSPF Router 4}
\CMD{./CODES/DT3-0.txt}{Trace Route from PC3 to PC0}
\CMD{./CODES/D_Showip0.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/D_Showip1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/D_Showip2.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/D_Showip3.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/D_Showip4.txt}{\textit{show ip route} Router 4}
With the addition of Router 4 there are noticeable changes in the routing tables of all routers.
From the above trace route and the routing table of Router 3, one can see how OSPF adapts to changes and determines the shortest path to a destination.
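Unlike RIP's hop count, OSPF selects routes by minimizing the accumulated link cost, which on Cisco routers defaults to a reference bandwidth divided by the interface bandwidth:
\[
\text{cost}_{\text{link}} = \frac{10^{8} \text{ bit/s}}{\text{interface bandwidth (bit/s)}}, \qquad
\text{cost}_{\text{path}} = \sum_{\text{links on the path}} \text{cost}_{\text{link}},
\]
and Dijkstra's algorithm picks the path with the smallest total cost, which is why the routes change as soon as a cheaper path through Router 4 becomes available.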
\item \textbf{Remove a link i.e. the link between router 4 and switch 0 and then observe the result of
traceroute command from PC3 to PC0 and routing table in each router. Compare the result
of previous. Note down how the routing is updated to address the changing topology.}
\addtocontents{lol}{\protect\subsubsection*{D.2 : Removing a Link and observing Route path and Routing table}}
The link between Router 4 and switch 2 of LAN 1 (200.100.50.0/26) is removed.
\CMD{./CODES/DT3-0D1.txt}{Trace Route from PC3 to PC0}
\CMD{./CODES/D_Showip0D1.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/D_Showip1D1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/D_Showip2D1.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/D_Showip3D1.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/D_Showip4D1.txt}{\textit{show ip route} Router 4}
\item \textbf{Again connect the link and repeat the observations.}
The observation is similar to that of Activity D.1, i.e. the state before the link was disconnected.
\item \textbf{You can observe by removing another link between Router0 and Router1 also.}
\addtocontents{lol}{\protect\subsubsection*{D.4 : Removing Link 1 and observing Route path and Routing table}}
After removing Link 1 (200.100.50.144/30), the following observations were made.
\CMD{./CODES/DT1-0L1.txt}{Trace Route from PC1 to PC0}
\CMD{./CODES/D_Showip0L1.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/D_Showip1L1.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/D_Showip2L1.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/D_Showip3L1.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/D_Showip4L1.txt}{\textit{show ip route} Router 4}
\item \textbf{Similarly you can observe by removing other links.}
\addtocontents{lol}{\protect\subsubsection*{D.5 : Removing Link 2 and observing Route path and Routing table}}
After removing Link 2 (200.100.50.148/30), the following observations were made.
\CMD{./CODES/DT1-2L2.txt}{Trace Route from PC1 to PC2}
\CMD{./CODES/D_Showip0L2.txt}{\textit{show ip route} Router 0}
\CMD{./CODES/D_Showip1L2.txt}{\textit{show ip route} Router 1}
\CMD{./CODES/D_Showip2L2.txt}{\textit{show ip route} Router 2}
\CMD{./CODES/D_Showip3L2.txt}{\textit{show ip route} Router 3}
\CMD{./CODES/D_Showip4L2.txt}{\textit{show ip route} Router 4}
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%4444444444444444444444444444444444
\begin{Q}
{
How dynamic routing can address the changing topology of a network automatically?
Explain with reference to the observation of your lab exercise.
}
\end{Q}
\begin{A}
{
Dynamic routing is an adaptive routing technique. In this lab we explore RIP and OSPF. In dynamic routing, each router shares its routing information with its neighbors, and a neighbor updates its routing table whenever a shorter path to any network is found.
RIP uses a distance-vector algorithm, whereas OSPF uses Dijkstra's algorithm, to determine the shortest path to a destination.
}
\end{A}
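The distance-vector behavior of RIP mentioned above can be written as the Bellman--Ford relation, in which each router $x$ recomputes its estimated distance to a destination $y$ from the advertisements of its neighbors:
\[
D_x(y) = \min_{v \in N(x)} \bigl\{ c(x,v) + D_v(y) \bigr\},
\]
where $N(x)$ is the set of neighbors of $x$ and $c(x,v)$ is the cost of the link to neighbor $v$ (one hop in RIP). OSPF instead floods link-state information and runs Dijkstra's algorithm over the complete topology database to find the lowest-cost path.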
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
In this lab we familiarized ourselves with dynamic routing and its protocols, RIP and OSPF.
In Activity A we created the setup, configured RIP, and tested it with the help of the ping and tracert commands. In Activity B we connected an additional router and studied the changes it makes to the routing tables. We also observed the output after disconnecting different links, and how the network reacts and stays connected during that time. Activities C and D are based on OSPF, with an additional touch of VLSM.
\end{document} | {
"alphanum_fraction": 0.4665967292,
"avg_line_length": 75.4398773006,
"ext": "tex",
"hexsha": "7b1b390c4576e4a05340527368d8908784ea64db",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2022-01-17T12:19:26.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-19T09:04:46.000Z",
"max_forks_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "amritphuyal/LATEX",
"max_forks_repo_path": "Computer network LABS/LAB 7/CN LAB 7 Amrit Prasad Phuyal.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "amritphuyal/LATEX",
"max_issues_repo_path": "Computer network LABS/LAB 7/CN LAB 7 Amrit Prasad Phuyal.tex",
"max_line_length": 451,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "amritphuyal/LATEX",
"max_stars_repo_path": "Computer network LABS/LAB 7/CN LAB 7 Amrit Prasad Phuyal.tex",
"max_stars_repo_stars_event_max_datetime": "2020-10-01T08:20:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-01T08:20:34.000Z",
"num_tokens": 35405,
"size": 122967
} |
The class inspected is \textbf{ProductDisplayWorker}.\\
It belongs to the package \textit{org.apache.ofbiz.order.shoppingcart.product}\\
The class inheritance is the following:
\begin{verbatim}
java.lang.Object
org.apache.ofbiz.order.shoppingcart.product.ProductDisplayWorker
org.apache.ofbiz.order.shoppingcart.product.ProductPromoWorker
org.apache.ofbiz.order.shoppingcart.product.ProductPromoWorker.ActionResultInfo
org.apache.ofbiz.order.shoppingcart.product.ProductStoreCartAwareEvents
\end{verbatim}
This class is part of the usage of a \textbf{Worker pattern}.
It consists in the creation of a \textit{Worker object} that performs operations on a
specific type, or on different types, of object.
This pattern is really helpful for the maintenance and the writing of the code
because it permits splitting the object we want to manage from the operations on that
object, in order to maintain a well-structured and smaller class in terms of lines of code.
In addition, this class contains a private static class used in the methods of \textbf{ProductDisplayWorker}.
Usually this pattern is used with another pattern called the \textbf{Manager pattern}; in fact, also in this case, Apache OFBiz, we find an Order Manager that
is in charge of all the payments.
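As an illustration of the idea (a simplified sketch, not code from OFBiz; the class and method names below are hypothetical), such a worker is typically a non-instantiable class exposing static operations that act only on the objects passed to them:
\begin{verbatim}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified sketch of the "worker" style described above:
// a non-instantiable holder of static operations over domain objects.
public final class ExampleDisplayWorker {

    private ExampleDisplayWorker() { }  // no instances, only static helpers

    // Keeps no state of its own; works only on the arguments it receives.
    public static List<String> namesOfDisplayable(List<ExampleProduct> products) {
        List<String> names = new ArrayList<>();
        for (ExampleProduct p : products) {
            if (p.isDisplayable()) {
                names.add(p.getName());
            }
        }
        return names;
    }
}

// Minimal hypothetical domain type used by the sketch.
class ExampleProduct {
    private final String name;
    private final boolean displayable;

    ExampleProduct(String name, boolean displayable) {
        this.name = name;
        this.displayable = displayable;
    }

    String getName() { return name; }
    boolean isDisplayable() { return displayable; }
}
\end{verbatim}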
\section{Class code}
For reader's convenience, the whole content of the \textbf{ProductDisplayWorker} Java class source file is reported below.
\lstinputlisting{./ProductDisplayWorker.java}
| {
"alphanum_fraction": 0.8157162726,
"avg_line_length": 57.52,
"ext": "tex",
"hexsha": "168e98d205c641dca7ae9e30e978750d38f887ee",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "FrancescoZ/PowerEnJoy",
"max_forks_repo_path": "Workspace/Code Inspection/Introduction/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "FrancescoZ/PowerEnJoy",
"max_issues_repo_path": "Workspace/Code Inspection/Introduction/introduction.tex",
"max_line_length": 157,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "FrancescoZ/PowerEnJoy",
"max_stars_repo_path": "Workspace/Code Inspection/Introduction/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 319,
"size": 1438
} |
%*****************************************
\chapter{Tables}\label{ch05:tables}
%*****************************************
Excel workbooks are designed to store data and it is challenging to organize that data to yield meaningful information. Excel has many features that can help organize data and find needed information efficiently. Setting up data as a table from the onset allows users to sort, filter, total, and subtotal the data easily. In Excel, a table is a collection of data about a subject stored in adjacent rows and columns. Tables can improve the look and feel of a worksheet. This chapter explores how to set up, edit, and work with Excel tables effectively. These skills will be demonstrated in the context of a multi-sheet file that shows national average weather for two vastly different cities in the United States. Weather data is often voluminous and difficult to summarize since so much is collected every hour of every day and providing meaningful summaries of such data is a useful skill. The skills learned using weather data in this chapter can be transferred to data found in any discipline or field.
\section{Basic Table Skills}
\begin{center}
\begin{objbox}{Learning Objectives}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Understand table structure.
\item Plan, create, and edit a table.
\item Freeze rows and columns.
\item Sort data in a table.
\end{itemize}
\end{objbox}
\end{center}
This section reviews the fundamental skills for setting up and maintaining an Excel table. The objective for this chapter is the construction of a multi-sheet file to keep track of two cities' national weather data for the month of January. Organizing, maintaining, and reporting data are essential skills for employees in most industries.
Figure \ref{05:fig01} shows the completed workbook that will be created in this chapter. Notice that this workbook contains four worksheets. The first worksheet contains weather data for Portland, Maine, the second contains weather data for Portland, Oregon, the third organizes the Oregon data by week, and the fourth contains subtotals for the Oregon data.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig01}
\caption{Completed National Weather Workbook}
\label{05:fig01}
\end{figure}
\subsection{Creating a Table}
When data is presented in long lists or columns, it helps if the table is set up properly. Here are some rules of data-entry etiquette to follow when creating a table from scratch.
\begin{itemize}
\item Whenever possible, organize information using adjacent (neighboring) columns and rows.
\item Start the table in the upper-left corner of the worksheet and work down the sheet.
\item Do not skip columns and rows just to ``space out'' the information. To place white space between information in adjacent columns and rows, widen columns, heighten rows, or change the alignment.
\item Reserve a single column at the left edge of the table for the table's row headings or identifying information.
\item Reserve a single row at the top of the table for the table's column headings.
\item If the table requires a title, put the title in the row(s) above the column headings.
\end{itemize}
Following these rules will help ensure that the sorts, filters, totals, and subtotals applied to the table will return the desired results. With these rules in mind, begin working on the \textit{National Weather} workbook.
\begin{enumbox}
\begin{enumerate}
\item Open data file \fmtWorksheet{CH5-Data} and save the file as \fmtWorksheet{CH5-National Weather}.
\item Click the \fmtWorksheet{Portland ME} worksheet tab to activate it.
\item Notice that the data is in adjacent columns and rows. The upper-left corner of the table is in \fmtLoc{A5} and the titles are above the column headings in \fmtLoc{Row 5}.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Insert $ \Rightarrow $ Tables $ \Rightarrow $ Table}.
\item The dialog box illustrated in Figure \ref{05:fig02} pops up. Excel has correctly determined that the data range is $ A5 $:$ E33 $ and that the table has headers.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig02}
\caption{Create Table}
\label{05:fig02}
\end{figure}
\item Click \fmtButton{OK}.
\item Click \fmtLoc{A5}.
\item Adjust all column widths so that the complete headings are visible in \fmtLoc{Row 5} with the filter arrows showing. The filter arrows are the down-arrow buttons that appear in the header row when the table is created.
\end{enumerate}
\end{enumbox}
After this, the top of the worksheet will look like Figure \ref{05:fig03}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig03}
\caption{Weather Table}
\label{05:fig03}
\end{figure}
Notice that a new ribbon tab, \textit{Table Tools Design}, appears when the mouse is clicked inside the table. This ribbon tab contains controls to edit, style, and add functionality to the table. (\fmtNewExcel{Excel 365}. The tab is named \textit{Table Design}, not \textit{Table Tools Design}.)
\begin{enumbox}
\begin{enumerate}
\item Click the \fmtWorksheet{Portland OR} worksheet tab to activate it.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Insert $ \Rightarrow $ Tables $ \Rightarrow $ Table}.
\item Be sure Excel automatically determines the data is in \fmtLoc{A5:E33} and that it has headers.
\item Click \fmtButton{OK}.
\item Click \fmtLoc{A5}.
\item Adjust all column widths so that the complete headings are visible in \fmtLoc{Row 5} with the filter arrows showing.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Create a Table}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click on the top left cell in the data.
\item Click \textit{Insert $ \Rightarrow $ Tables $ \Rightarrow $ Table}.
\item Make sure the data range is properly identified and \textit{My table has headers} is checked.
\item Click \textit{OK}.
\item Click on the top left cell again.
\item Adjust all columns widths so the complete headings with the filter arrows are showing.
\end{itemize}
\end{sklbox}
\end{center}
\subsection{Formatting Tables}
There are many ways to format an Excel table. One of the easiest is to apply a preset \textit{Table Style} using Light, Medium, or Dark colors. A variety of \textit{Table Style Options} are also available, as listed in Table \ref{05:tab01}.
\begin{table}[H]
\rowcolors{1}{}{tablerow} % zebra striping background
{\small
%\fontsize{8}{10} \selectfont %Replace small for special font size
\begin{longtable}{L{1.0in}L{3.00in}} %Left-aligned, Max width: 4.25in
\textbf{Table Style} & \textbf{Description} \endhead
\hline
Header Row & Top row of the table that includes column headings\\
Total Row & Row added to the bottom that applies column summary calculations\\
First Column & Formatting added to the left-most column in the table\\
Last Column & Formatting added to the right-most column in the table\\
Banded Rows & Alternating rows of color added to make it easier to see rows of data\\
Banded Columns & Alternating columns of color added to make it easier to see columns of data\\
Filter Button & Buttons that appear at the top of each column that list options for sorting and filtering\\
\rowcolor{captionwhite}
\caption{Table Style Options}
\label{05:tab01}
\end{longtable}
} % End small
\end{table}
Add formatting to both of the Portland weather tables in the following steps.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} sheet to activate it.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Table Tools Design $ \Rightarrow $ Table Styles $ \Rightarrow $ More Down Arrow}. (\fmtNewExcel{Excel 365}. The tab is named \textit{Table Design}, not \textit{Table Tools Design}.)
\item A gallery of table styles will appear as in Figure \ref{05:fig04}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.75\linewidth}]{gfx/ch05_fig04}
\caption{Table Styles}
\label{05:fig04}
\end{figure}
\item In the Table Styles gallery, in the Medium Section, click \textit{Table Style Medium} $ 7 $ (this is a green-colored style).
\item Uncheck \fmtButton{Table Tools Design $ \Rightarrow $ Table Style Options $ \Rightarrow $ Banded Rows}.
\item The alternating colored rows will disappear. The data in the table is now more difficult to read.
\item Try some of the other options in the Table Style Options group. When finished, be sure that \textit{Header Row}, \textit{Banded Rows}, and \textit{Filter Button} are the only ones checked, as in Figure \ref{05:fig05}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.65\linewidth}]{gfx/ch05_fig05}
\caption{Ribbon Table Style Options}
\label{05:fig05}
\end{figure}
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\subsection{Adding Data to Tables}
Over time, new data will need to be added to a table, and that should be added in a blank row. The easiest way to do this is to enter the data below the last row in the table, then sort the table to arrange its data. If data must be added in a specific place in the middle of a table, insert a blank row and add the data there.
Add the last three days of the month to the \textit{Portland ME} and \textit{Portland OR} worksheets. The following steps walk through this process.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} worksheet.
\item Click on \fmtLoc{A34} (the left-most cell below the last row in the table).
\item Enter the data from the following table.
\end{enumerate}
\end{enumbox}
\begin{table}[H]
\rowcolors{1}{}{tablerow} % zebra striping background
{\small
%\fontsize{8}{10} \selectfont %Replace small for special font size
\begin{longtable}{R{0.50in}R{0.50in}R{0.50in}R{0.50in}R{0.50in}} %Left-aligned, Max width: 4.25in
\textbf{Day} & \textbf{High} (\textdegree F) & \textbf{Low} (\textdegree F) & \textbf{Rain} (inches) & \textbf{Snow} (inches) \endhead
\hline
$ 29 $ & $ 31.4 $ & $ 13.3 $ & $ 0.12 $ & $ 0.59 $ \\
$ 30 $ & $ 31.6 $ & $ 3.4 $ & $ 0.08 $ & $ 0.47 $ \\
$ 31 $ & $ 31.7 $ & $ 13.5 $ & $ 0.12 $ & $ 0.63 $ \\
\rowcolor{captionwhite}
\caption{Portland, Maine data}
\label{05:tab02}
\end{longtable}
} % End small
\end{table}
Notice that the banded row formatting continues as additional rows are added to the tables.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland OR} worksheet.
\item Click on \fmtLoc{A34} (the left-most cell below the last row in the table).
\item Enter the data from the following table.
\end{enumerate}
\end{enumbox}
\begin{table}[H]
\rowcolors{1}{}{tablerow} % zebra striping background
{\small
%\fontsize{8}{10} \selectfont %Replace small for special font size
\begin{longtable}{R{0.50in}R{0.50in}R{0.50in}R{0.50in}R{0.50in}} %Left-aligned, Max width: 4.25in
\textbf{Day} & \textbf{High} (\textdegree F) & \textbf{Low} (\textdegree F) & \textbf{Rain} (inches) & \textbf{Snow} (inches) \endhead
\hline
$ 29 $ & $ 48.8 $ & $ 36.2 $ & $ 0.16 $ & $ 0.00 $ \\
$ 30 $ & $ 49.0 $ & $ 36.2 $ & $ 0.11 $ & $ 0.32 $ \\
$ 31 $ & $ 49.1 $ & $ 36.1 $ & $ 0.16 $ & $ 0.00 $ \\
\rowcolor{captionwhite}
\caption{Portland, Oregon data}
\label{05:tab03}
\end{longtable}
} % End small
\end{table}
\begin{enumbox}
\begin{enumerate}
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\subsection{Finding and Editing Data}
It is inevitable that data errors which need to be corrected will appear in a table. While it is possible to visually scan through a table to find errors, this can be a tedious and tiresome process, especially if the table is large. Excel can help with this through the \textit{Find} command. When using \textit{Find}, the best practice is to start with the cell pointer in cell $ A1 $ to ensure that all the data in the worksheet is included in the search.
A temperature of $ 3.4 $ degrees was entered erroneously in the \textit{Portland ME} sheet. It should have been $ 13.4 $. To fix this error, complete the following steps.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} sheet.
\item Press the \fmtKeystroke{Ctrl}+\fmtKeystroke{Home} keys together to go to the top of the sheet (cell \fmtLoc{A1}).
\item Click \fmtButton{Home $ \Rightarrow $ Editing $ \Rightarrow $ Find \& Select $ \Rightarrow $ Find}.
\item In the \fmtButton{Find} box, type $ 3.4 $, and then click \fmtButton{Find Next}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig06}
\caption{Find and Replace}
\label{05:fig06}
\end{figure}
\item Click the \fmtButton{Close} button.
\item Replace $ 3.4 $ in the \textit{Low} column for Day $ 10 $ with $ 13.4 $.
\item Now switch to the \fmtWorksheet{Portland OR} sheet and find the Snow error of $ 0.32 $ in Day $ 3 $. Change it to $ 0.12 $.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Finding and Replacing Data}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click \textit{Home $ \Rightarrow $ Editing $ \Rightarrow $ Find \& Select $ \Rightarrow $ Find}.
\item In the \textit{Find} box, type the phrase to find then click \textit{Find Next}.
\item Continue clicking \textit{Find Next} until the desired phrase is found.
\item Click \textit{Close} and edit the data.
\end{itemize}
\end{sklbox}
\end{center}
\subsection{Freeze Rows and Columns}
When panes are ``frozen'' in a worksheet, Microsoft Excel keeps specific rows or columns visible in the table as it is scrolled on the screen. For example, if the first row in the spreadsheet contains labels, that row might be frozen to make sure that the column labels remain visible as the sheet is scrolled down. Follow these steps to freeze the worksheet headings and keep them visible on the screen.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} sheet.
\item Click in \fmtLoc{A6}, the left-most cell \textit{below} the headings row.
\item Click \fmtButton{View $ \Rightarrow $ Window $ \Rightarrow $ Freeze Panes $ \Rightarrow $ Freeze Panes} (see Figure \ref{05:fig07}).
\item Scroll up and down the sheet and notice that the headings are always displayed at the top of the table. Also, notice that a thin line appears under \fmtLoc{Row 5} to mark the bottom of the frozen section.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.65\linewidth}]{gfx/ch05_fig07}
\caption{Freeze Pane}
\label{05:fig07}
\end{figure}
To unfreeze the headings, follow these steps.
\begin{enumbox}
\begin{enumerate}
\item Click \fmtButton{View $ \Rightarrow $ Window $ \Rightarrow $ Freeze Panes $ \Rightarrow $ Unfreeze Panes}
\end{enumerate}
\end{enumbox}
\subsection{Simple Sort}
Content in a table can be sorted alphabetically, numerically, and in many other ways. Sorting helps organize data by ordering one or more columns in the table. Table \ref{05:tab04} describes the different sort orders available for each column of data.
\begin{table}[H]
\rowcolors{1}{}{tablerow} % zebra striping background
{\small
%\fontsize{8}{10} \selectfont %Replace small for special font size
\begin{longtable}{L{0.75in}L{1.00in}L{0.65in}L{1.50in}} %Left-aligned, Max width: 4.25in
\textbf{Sort Order} & \textbf{Text} & \textbf{Numbers} & \textbf{Dates} \endhead
\hline
Ascending & Alphabetical (A-Z) & Smallest to Largest & Chronological (oldest to newest)\\
Descending & Reverse Alphabetical (Z-A) & Largest to Smallest & Reverse Chronological (newest to oldest)\\
\rowcolor{captionwhite}
\caption{Sort Options}
\label{05:tab04}
\end{longtable}
} % End small
\end{table}
Suppose it is important to know what the snowiest day was in January in Portland, Maine. One way to find that is to sort the \textit{Snow} column in Descending order.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} sheet.
\item Click on the \fmtButton{Down Arrow} to the right of the header in \fmtLoc{E5}, \textit{Snow (inches)}.
\item Click \textit{ZA$ \downarrow $ Sort Largest to Smallest} (see Figure \ref{05:fig08}).
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.65\linewidth}]{gfx/ch05_fig08}
\caption{Sort by One Column}
\label{05:fig08}
\end{figure}
If this is done correctly, the snowiest day, January 3rd (in \textit{Row} $ 6 $) with $ 0.73 $ inches of snow, will sort to the top of the list. Notice the filter arrow changes in the snow column to a downward pointing arrow to indicate that the column is sorted in descending order (largest to smallest).
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig09}
\caption{Snowiest Days in Maine}
\label{05:fig09}
\end{figure}
\begin{enumbox}
\begin{enumerate}
\item Now switch to the \fmtWorksheet{Portland OR} sheet and repeat these sort steps to find the snowiest day in Oregon. Check the worksheet with Figure \ref{05:fig10}.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig10}
\caption{Snowiest Days in Oregon}
\label{05:fig10}
\end{figure}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Sort a Column}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click on the filter down arrow to the right of the header in the column to be sorted.
\item Click on the choice \textit{AZ}$ \downarrow $ or \textit{ZA}$ \downarrow $ to sort the data in that column.
\end{itemize}
\end{sklbox}
\end{center}
\subsection{Multi-Level Sort}
Sometimes a table needs to be sorted by more than one column at a time to efficiently analyze the data. For example, if the data included different types of loans from several bank offices, it would need to be sorted by the type of loan and then by bank office name to clearly see the different groups of loans. As another example, if a worksheet included a list of grades for students over their time in high school, the data should be sorted first by student name, then by grade level (freshman, sophomore, junior, and senior) so that each student's grades appear in chronological order.
For the weather data, determine how cold the snow days were in Oregon.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland OR} sheet.
\item Click cell \fmtLoc{A6}.
\item Click \fmtButton{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Sort}.
\item Click the down-arrow for \textit{Column Sort By} and select \textit{Snow (inches)}.
\item Click the down-arrow for \textit{Order} and select \textit{Largest to Smallest}.
\item To add a 2nd level sort, click \fmtButton{Add Level} at the top left corner of the dialog box.
\item Click the down-arrow for \textit{Then by} and select \textit{Low (\textdegree F)}.
\item In the same row, click the down-arrow for \textit{Order} and select \textit{Smallest to Largest}. The dialog box should look like Figure \ref{05:fig11}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig11}
\caption{Multi-Level Sort}
\label{05:fig11}
\end{figure}
\item Click \fmtButton{OK}.
\item The table sort results should look like Figure \ref{05:fig12}. Notice that for the two days with $ 0.08 $ inches of snow, the low temp of $ 35.5 $ on Day $ 9 $ is displayed before the low temp of $ 36.2 $ on Day $ 25 $. The lower of the two is listed first. Also notice that the filter arrows changed on the sorted columns to show how they are sorted.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig12}
\caption{Multi-Level Sort Results}
\label{05:fig12}
\end{figure}
\subsection{Custom Sorts}
In most cases, the data should be sorted in ``typical'' sort order: numbers sorted highest to lowest, words sorted alphabetically, etc. Some data does not make sense when sorted this way. For example, if the days of the week are sorted alphabetically, the result would be Friday, Monday, Saturday, Sunday, Thursday, Tuesday, and Wednesday. This order would be of no use to anyone! Similarly, the months of the year would not make sense alphabetically.
To illustrate how days can be sorted, a \textit{Week} number column was added to the \textit{Weekly OR} sheet. Then, the \textit{Day} column was changed from a number to names like Sunday. This sheet facilitates further analysis of the data to see if there are weekly trends in the weather. To look for those trends, sort the \textit{Weekly OR} sheet by \textit{Week} and then by \textit{Day}.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Weekly OR} worksheet.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Insert $ \Rightarrow $ Tables $ \Rightarrow $ Table}.
\item Click \fmtButton{OK}.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Sort}.
\item Click the down-arrow for \textit{Column Sort By} and select \textit{Week}.
\item Click the down-arrow for \textit{Order} and select \textit{Smallest to Largest}.
\item To add a 2nd level sort, click \fmtButton{Add Level} at the top left corner of the dialog box.
\item Click the down-arrow for \textit{Then by} and select \textit{Day}.
\item In the same row, click the down-arrow for \textit{Order} and select \textit{Custom List}. The dialog box shown in Figure \ref{05:fig13} will appear on the screen.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig13}
\caption{Custom Lists}
\label{05:fig13}
\end{figure}
\item Click on \fmtButton{Sunday, Monday, Tuesday, etc.} in the \textit{Custom} lists on the left-side of the dialog box. NOTE: Make sure to select the list with the days of the week spelled out, not the abbreviations for the days of the week.
\item Click \fmtButton{OK}.
\item The \textit{Sort} dialog box should look like Figure \ref{05:fig14}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig14}
\caption{Sort Dialog Box}
\label{05:fig14}
\end{figure}
\item Click \fmtButton{OK}.
\item The sorted table should now look like Figure \ref{05:fig15}. Notice the data is in \textit{Week} order and, within each week, in \textit{Day} order.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig15}
\caption{Custom Sort}
\label{05:fig15}
\end{figure}
\begin{center}
\begin{tkwbox}{Key Take-Aways}
\textbf{Basic Table Skills}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Tables are made up of adjacent rows and columns of data with a single row of column headings at the top.
\item Tables are created by clicking in the top left-most cell in the data and clicking \textit{Insert $ \Rightarrow $ Tables $ \Rightarrow $ Table}.
\item There is a gallery of styles and options to choose from to format a table.
\item To add data, it is best to add it one row below the bottom of the table. The table can then be re-sorted to organize the data.
\item Freezing headings keeps column headings displayed while scrolling through the table data.
\item Filter arrows in the table headings sort the data by a single column. Click \textit{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Sort} to sort by two or more columns at a time.
\item Custom Sorts can be used when data needs to be sorted in a special way (such as days of the week).
\end{itemize}
\end{tkwbox}
\end{center}
\section{Advanced Table Skills}
\begin{center}
\begin{objbox}{Learning Objectives}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Filter table data.
\item Add a total row to a table.
\item Insert subtotals into a table.
\end{itemize}
\end{objbox}
\end{center}
\subsection{Filtering Data}
When an Excel table is first created, filter arrows appear in all the column headings. Those arrows can be used to sort the data by a single column. These same arrows can also be used to filter or limit the displayed data within a column. There are many ways to filter data within a column depending on whether the data is text or numeric.
Notice there is sometimes more than one way to filter data (\ie with a filter choice or a checked box). There are also single criteria filters and multi-criteria filters. As an introduction to filtering, look at just the first week of data in the \textit{Weekly OR} sheet.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Weekly OR} worksheet.
\item Click cell \fmtLoc{B5}.
\item Click the filter arrow to the right of the \textit{Week} heading.
\item Click the \fmtButton{Select All} checkbox to deselect all the checkbox choices.
\item Click on $ 1 $ to select just Week $ 1 $.
\item Click \fmtButton{OK}.
\end{enumerate}
\end{enumbox}
The table should look like Figure \ref{05:fig16}. Only seven rows of Week One are visible in the table. Notice in the Status Bar at the bottom of the screen the message ``$ 7 $ of $ 31 $ records found''. Also notice that the filter arrow in the Week heading has changed to a funnel which indicates that this column is currently filtered.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig16}
\caption{Filter}
\label{05:fig16}
\end{figure}
Follow this procedure to remove the filter.
\begin{enumbox}
\begin{enumerate}
\item Click the funnel next to the Week heading.
\item Click \fmtButton{Clear Filter from ``Week''}.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Filter a Column}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click the filter arrow to the right of the heading in the column to be filtered.
\item Click the \textit{Select All} checkbox to deselect all of the checkbox choices.
\item Click on the checkboxes to filter by.
\item Click \textit{OK}.
\end{itemize}
\bigskip
\textbf{Un-Filter a Column}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click the funnel to the right of the heading in the column to be filtered.
\item Select \textit{Clear Filter}.
\end{itemize}
\end{sklbox}
\end{center}
As an example of using a numeric filter, find the days in Portland ME when it was warmer than $ 32 $ degrees.
\begin{enumbox}
\begin{enumerate}
\item Click in the \fmtWorksheet{Portland ME} worksheet.
\item Click cell \fmtLoc{B5}.
\item Click on the filter arrow next to the \textit{High} heading.
\item Click \fmtButton{Number Filters $ \Rightarrow $ Greater Than...}.
\item The \textit{Custom AutoFilter} dialog box will appear on the screen.
\item Enter \fmtTyping{32} in the space to the right of \textit{is Greater than}. The \textit{Custom AutoFilter} dialog box should now match Figure \ref{05:fig17}.
\item Click \fmtButton{OK}.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig17}
\caption{AutoFilter Dialog Box}
\label{05:fig17}
\end{figure}
It is now easy to see that it was above $ 32 $ degrees for only the first three days, as illustrated in Figure \ref{05:fig18}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig18}
\caption{Maine Filter Results}
\label{05:fig18}
\end{figure}
Review sorting and filtering in the following steps.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Weekly OR} worksheet.
\item Sort the table by \textit{Week} (smallest to largest).
\item Filter the \textit{Day} to only show \textit{Monday}.
\item Compare the results to Figure \ref{05:fig19}.
\item After checking the results, clear the \textit{Day} filter.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig19}
\caption{Oregon Filter Results}
\label{05:fig19}
\end{figure}
\subsection{Filtering Using the Slicer}
Beginning in Excel $ 2013 $, slicers were added as another way to filter table data. A slicer is useful because it clearly indicates what data is shown in the table after the data has been filtered.
Try using the Slicer to filter the \textit{Portland OR} data table.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland OR} worksheet.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Insert $ \Rightarrow $ Filters $ \Rightarrow $ Slicer}.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{infobox}{Potential Error}
\textbf{Slicer Connections}
\\
\\
After clicking the \textit{Slicer} button, if Excel displays an \textit{Existing Connections} box with error messages about no connections, then the table was not active. To correct this error, click anywhere inside the table to activate it and then try the \textit{Slicer} button again.
\end{infobox}
\end{center}
\begin{enumbox}
\begin{enumerate}
\item Click on \fmtButton{Day} in the \textit{Insert Slicers} dialog box, and then click \fmtButton{OK} (see Figure \ref{05:fig20a}).
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.50\linewidth}]{gfx/ch05_fig20a}
\caption{Slicer Dialog Box}
\label{05:fig20a}
\end{figure}
\item Drag the slicer box so the top left corner is near the center of cell \fmtLoc{G5}.
\item Notice that when the Slicer was inserted, a \textit{Slicer Options} tab appeared on the ribbon. (\fmtNewExcel{Excel 365}. This tab is named \textit{Slicer} rather than \textit{Slicer Options}.) This tab contains links to change the style and size of the slicer box or individual slicer buttons.
\item Click \fmtButton{Slicer Tools Options $ \Rightarrow $ Slicer Styles $ \Rightarrow $ Expand Arrow}. The styles illustrated in Figure \ref{05:fig20} will become available.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig20}
\caption{Slicer Styles}
\label{05:fig20}
\end{figure}
\begin{enumbox}
\begin{enumerate}
\item Select the first choice under \textit{Dark}.
\item Click \fmtButton{Slicer Tools Options $ \Rightarrow $ Size $ \Rightarrow $ Width} and enter \fmtTyping{1} inch. (Note: Do NOT set the size in the \textit{Buttons} group.)
\item Click in the slicer and scroll down to Day $ 15 $. Click the \fmtButton{15} button to show only the data for day $ 15 $ in the data table.
\item Click the \textit{Multiselect} button in the top center of the slicer to permit the selection of multiple buttons.
\item Click the Slicer buttons for Days $ 10 $ through $ 14 $. The table should now show the data from Days $ 10-15 $.
\item Using the filter arrow on the right edge of the \textit{Day} header in \fmtLoc{A5}, sort the column from smallest to largest to show the days in order as in Figure \ref{05:fig21}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig21}
\caption{Slicer Results}
\label{05:fig21}
\end{figure}
\item Click the red \fmtButton{X} in the top right corner of the slicer to remove the filter and show the complete data file.
\item Using the filter arrow on the right edge of the \textit{Day} header in \fmtLoc{A5}, sort the column from smallest to largest to show the days in order.
\item Click in the top margin of the slicer to select the slicer tool. It should have a thick border and drag handles. Press \fmtKeystroke{Delete} to remove the slicer.
\end{enumerate}
\end{enumbox}
\subsection{Total Rows}
Adding a total row to the bottom of a table makes it easy to see summary data for one or more of the columns. A total row can summarize the whole table or, when a filter is applied, only the rows that remain visible. Total rows can easily be toggled on and off as the need for summary data arises.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} worksheet.
\item Clear the filter from the \textit{High} column by clicking the filter down-arrow and then \textit{Clear Filter from High (\textdegree F)}.
\item Click cell \fmtLoc{A5}.
\item Click \fmtButton{Table Tools Design $ \Rightarrow $ Table Style Options $ \Rightarrow $ Total Row}. (\fmtNewExcel{Excel 365}. The tab is named \textit{Table Design} rather than \textit{Table Tools Design}.)
\item Scroll to the bottom of the table to the \textit{Total Row}. Notice the total for the \textit{Snow} data.
\item Click cell \fmtLoc{D37} (in the \textit{Rain} column), and then click the down-arrow that appears on the right of the cell.
\item Choose \fmtButton{Sum} to add a sum to the \textit{Total Row} in the \textit{Rain} column.
\item To see the Average rainfall for the month, click on the arrow again and choose \fmtButton{Average}.
\item Change the total in cell \fmtLoc{E37} (\textit{Snow}) to see the Average snowfall for the month.
\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Decrease Decimal} to change the decimal places in \fmtLoc{D37} and \fmtLoc{E37} to $ 2 $. Compare the \textit{Total Row} to Figure \ref{05:fig22}.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig22}
\caption{Total Row}
\label{05:fig22}
\end{figure}
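As a side note on how the Total Row works: when a summary function such as Sum or Average is chosen from the drop-down in a Total Row cell, Excel builds the calculation with its \textit{SUBTOTAL} function rather than a plain \textit{SUM} or \textit{AVERAGE}, so rows hidden by a filter are ignored. A minimal sketch of the formulas behind the two averages chosen above, assuming the default structured references Excel generates for this table, is:
\begin{verbatim}
=SUBTOTAL(101,[Rain (inches)])   Average of the Rain column (D37)
=SUBTOTAL(101,[Snow (inches)])   Average of the Snow column (E37)
\end{verbatim}
The first argument selects the summary function: $ 101 $ corresponds to Average and $ 109 $ to Sum.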
Now switch to the \textit{Weekly OR} sheet and add a Slicer and Total Row to this table.
\begin{enumbox}
\begin{enumerate}
\item Open the \fmtWorksheet{Weekly OR} sheet.
\item If the \textit{Day} column is still filtered, clear it by clicking the filter down-arrow and then clicking the \textit{(Select All)} checkbox.
\item Add a Slicer for the \textit{Day} column to the sheet.
\item Drag the slicer so the top left corner is near the center of cell \fmtLoc{H5}.
\item Resize the slicer as desired and apply any Slicer Style.
\item Select Monday through Friday in the Slicer so that Saturday and Sunday data do \textit{NOT} show in the table.
\item Add a Total Row that averages the \textit{High} and \textit{Low} columns. The averages should be \textit{High}: $ 47.0 $ and \textit{Low}: $ 35.8 $.
\item Change the \textit{Total} label to \textit{Average} by clicking cell \fmtLoc{A37} and typing \fmtTyping{Average}.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Add a Total Row}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click \textit{Table Tools Design $ \Rightarrow $ Table Style Options $ \Rightarrow $ Total Row}.
\item Scroll to the bottom of the table to find the Total Row.
\item Click in one of the columns in the Total Row, and then click the down-arrow that appears to the right of the cell.
\item Choose \textit{Sum} to add a sum to the Total Row in the column.
\item To see the Average for a column, click on the arrow again and choose \textit{Average}. Some other choices in the Total Row are Count (for words), Count Numbers, Max, and Min.
\end{itemize}
\end{sklbox}
\end{center}
\begin{center}
\begin{sklbox}{Skill Refresher}
\textbf{Add a Slicer}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Click \textit{Insert $ \Rightarrow $ Filters $ \Rightarrow $ Slicer}.
\item Check the box for the column to which a Slicer is added.
\item Click \textit{OK}.
\end{itemize}
\end{sklbox}
\end{center}
\subsection{Subtotaling}
Subtotals and grand totals can be easily calculated for a column in a table. This is a powerful tool that quickly displays multiple levels of summary data within the table. It can give management a report of higher-level summary data one minute and then be switched back to detailed data the next. \textit{It is important to save often during this process and follow the steps carefully.} It is recommended to make a copy of the data to be subtotaled and place it in a new sheet, so the summarized, subtotaled data can be saved separately if desired.
The following steps summarize the process to add subtotals to a worksheet.
\begin{enumbox}
\begin{enumerate}
\item Sort by the column to subtotal on.
\item Convert the table back to a normal Excel range since a table cannot contain a subtotal.
\item Click \textit{Data $ \Rightarrow $ Outline $ \Rightarrow $ Subtotal}.
\item To limit the displayed data further, click \textit{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Filter}.
\end{enumerate}
\end{enumbox}
To practice subtotaling, determine what the weather looks like for each day of the week.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Weekly OR} sheet.
\item Hover the mouse over the \fmtWorksheet{Weekly OR} sheet tab at the bottom of the screen, hold the \fmtKeystroke{Ctrl} key down and then click-and-drag the sheet to the right until it is past all the existing sheets.
\item When a sheet icon with a \textbf{+} sign is visible, let go of the mouse button and then the \fmtKeystroke{Ctrl} key. A \fmtWorksheet{Weekly OR (2)} sheet will appear.
\item Right-click on the new sheet tab, select \fmtButton{Rename}, type \fmtTyping{Subtotal OR}, and then press \fmtKeystroke{Enter}.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\item Remove all filters in the table by clicking \fmtButton{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Clear}.
\item Click in the top margin of the slicer to select the slicer tool. It should have a thick border and drag handles. Press \fmtKeystroke{Delete} to remove the slicer.
\item Click \fmtButton{Data $ \Rightarrow $ Sort \& Filter $ \Rightarrow $ Sort}.
\item Click the down-arrow for \textit{Column Sort By} and select \textit{Day}.
\item Click the down-arrow for \textit{Order} and select \textit{Custom List}.
\item Select the custom list Sunday, Monday, Tuesday, etc. (See Figure \ref{05:fig13} through Figure \ref{05:fig15} for a review of Custom Sorting.)
\item Click \fmtButton{OK} then click \fmtButton{OK} a second time.
\item Before adding subtotals, convert the table back to a regular range. To do this, click \fmtButton{Table Tools Design $ \Rightarrow $ Tools $ \Rightarrow $ Convert to Range} (see Figure \ref{05:fig23}). (\fmtNewExcel{Excel 365}. This tab is called \textit{Table Design} rather than \textit{Table Tools Design}.)
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig23}
\caption{Convert to Range}
\label{05:fig23}
\end{figure}
\item When a box pops up with a warning about converting the table, click \fmtButton{Yes}.
\item Click cell \fmtLoc{A6}.
\item Click \fmtButton{Data $ \Rightarrow $ Outline $ \Rightarrow $ Subtotal}.
\item In the \textit{Subtotal} window, make the choices shown in Figure \ref{05:fig24}. It is essential to select the column that the data is sorted by in the \textit{At each change in} field at the top of the window.
\item Click \fmtButton{OK}.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig24}
\caption{Subtotal Window}
\label{05:fig24}
\end{figure}
The data should look like Figure \ref{05:fig25}. Successful subtotaling shows only one subtotal for each group in the column sorted by. (\textit{HINT}: If there is more than one Subtotal for the same group (\ie one of the days of the week in the example), then the column was not sorted before subtotaling. Remove the subtotals using the \textit{Remove All} button in Figure \ref{05:fig24}, sort the table, and then subtotal again.)
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig25}
\caption{Subtotal Results}
\label{05:fig25}
\end{figure}
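For readers curious about what the Subtotal command actually inserts: each \textit{Average} row it creates contains a \textit{SUBTOTAL} formula over that group's detail rows, and the \textit{Grand Average} row contains one over the whole column. A minimal sketch follows; the column letter and cell ranges here are illustrative only, since the real ranges depend on where each day's rows fall after sorting.
\begin{verbatim}
=SUBTOTAL(1,C6:C10)    Average of one day's values (illustrative range)
=SUBTOTAL(1,C6:C40)    Grand Average over the whole column (illustrative range)
\end{verbatim}
The function number $ 1 $ selects Average; the command would use $ 9 $ if Sum had been chosen in the \textit{Use function} box.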
Notice the three \textit{Outline} buttons circled in the upper-left corner of the spreadsheet. These control the amount of subtotaled data that is displayed. Table \ref{05:tab06} describes the different \textit{Outline} buttons.
\begin{table}[H]
\rowcolors{1}{}{tablerow} % zebra striping background
{\small
%\fontsize{8}{10} \selectfont %Replace small for special font size
\begin{longtable}{L{0.50in}L{3.00in}} %Left-aligned, Max width: 4.25in
\textbf{Button} & \textbf{Content Displayed} \endhead
\hline
Level 1 & Only the grand average\\
Level 2 & Subtotals and grand total\\
Level 3 & Individual records, subtotals, and grand total\\
\rowcolor{captionwhite}
\caption{Subtotal Outline Buttons}
\label{05:tab06}
\end{longtable}
} % End small
\end{table}
Click the three \textit{Outline} buttons to see the difference in the data displayed.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtButton{$ 1 $ Outline} button in the upper left-hand corner of the sheet.
\item Only the \textit{Grand Average} row with averages for High, Low, Rain, and Snow should be visible.
\item Click on the \fmtButton{$ 2 $ Outline} button.
\item The average for each day of the week along with the \textit{Grand Average} are now visible.
\item Click on the \fmtButton{+ Sign} button to the left of the Sunday Average row.
\item This expands just the \textit{Sunday} data and displays the individual records for this subset of the data. Clicking on \fmtButton{+ Sign} buttons expands a portion of the data at a time. Clicking on \fmtButton{– Sign} buttons hides a portion of the data at a time.
\item Click on the \fmtButton{$ 3 $ Outline} button.
\item All the individual records along with the subtotals, and \textit{Grand Average} should be displayed.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{tkwbox}{Key Take-Aways}
\textbf{Advanced Table Skills}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Filtering is an easy way to see a subset of the data. Filtering arrows appear to the right of each column heading when the table has a header row.
\item Data can be filtered by text or numerically.
\item A slicer is another way to filter in Excel that provides a set of filtering buttons on the sheet.
\item Adding a total row to a table is a quick, efficient way to see summary statistics for one or more columns in a table.
\item Subtotaling provides a way to quickly add totals to groups within a column along with providing a grand total at the bottom of the table.
\item Subtotal Outline buttons allow users to see all of the subtotaled data, just the totals and grand total, or simply the grand total.
\item $ + $ and $ - $ buttons within subtotaling allow a user to expand and hide portions of the subtotaled data.
\end{itemize}
\end{tkwbox}
\end{center}
\section{Preparing to Print}
\begin{center}
\begin{objbox}{Learning Objectives}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Review options for professional page setup for printing.
\item Understand how to insert a picture to enhance the visual appearance of a worksheet.
\item Preview worksheets containing tables to ensure they will print in a professional manner.
\end{itemize}
\end{objbox}
\end{center}
\subsection{Previewing a Worksheet}
Now that the weather data has been sorted, filtered, and subtotaled as needed, it is time to print the worksheets. Start with the \textit{Portland ME} worksheet.
\begin{enumbox}
\begin{enumerate}
\item Click on the \fmtWorksheet{Portland ME} worksheet.
\item Click cell \fmtLoc{A1}.
\end{enumerate}
\end{enumbox}
Notice that cells $ A1 $, $ A2 $, and $ A3 $ are not merged and centered over the entire width of the table.
\begin{enumbox}
\begin{enumerate}
\item Select cell \fmtLoc{A1} and click \fmtButton{Home $ \Rightarrow $ Alignment $ \Rightarrow $ Merge \& Center}. This should split \fmtLoc{A1} into four cells (\fmtLoc{A1:D1}).
\item Select the range \fmtLoc{A1:E1} and click \fmtButton{Home $ \Rightarrow $ Alignment $ \Rightarrow $ Merge \& Center}. Cell \fmtLoc{A1} should now be merged across \fmtLoc{A1:E1}.
\item Repeat these steps for \fmtLoc{A2} and \fmtLoc{A3}.
\end{enumerate}
\end{enumbox}
Next, open \textit{Print Preview} and determine what page setup options need to be set.
\begin{enumbox}
\begin{enumerate}
\item Click \fmtButton{File $ \Rightarrow $ Print}.
\item Notice that the table is not centered on the page; it should be.
\item At the bottom of the \textit{Settings} section, click the link for \fmtButton{Page Setup}. This opens the \textit{Page Setup} dialog box.
\item Click on the \fmtButton{Margins} tab. See Figure \ref{05:fig26}.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig26}
\caption{Page Setup}
\label{05:fig26}
\end{figure}
\item In the \textit{Center on page} section, check the box for \fmtButton{Horizontally}.
\item Click \fmtButton{OK}. The table should now be centered horizontally on the page.
\item Next, add a footer with the workbook filename and worksheet name.
\item At the bottom of the \textit{Print Preview Settings} section, click the link for \fmtButton{Page Setup}. This opens the \textit{Page Setup} dialog box.
\item Click the \fmtButton{Header/Footer} tab then click the \fmtButton{Custom Footer} button.
\item In the \textit{Left section:} box type \fmtTyping{File:} (making sure to leave a space after the colon).
\item Click the \fmtButton{Insert File Name} button (see Figure \ref{05:fig26a}).
\item In the \textit{Right section:} box type \fmtTyping{Worksheet:} (making sure to leave a space after the colon).
\item Click the \fmtButton{Insert Sheet Name} button.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.85\linewidth}]{gfx/ch05_fig26a}
\caption{File/Sheet Name Button Locations}
\label{05:fig26a}
\end{figure}
\item The completed Footer dialog box should look like Figure \ref{05:fig27}. Click the \fmtButton{OK} button twice to return to \textit{Print Preview}.
\item Confirm that the footer appears correctly, then click the arrow at the top left corner of the screen to exit \textit{Print Preview}.
\end{enumerate}
\end{enumbox}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig27}
\caption{Custom Footer}
\label{05:fig27}
\end{figure}
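As a point of reference, the \fmtButton{Insert File Name} and \fmtButton{Insert Sheet Name} buttons do not type the actual names; they insert placeholder codes that Excel resolves when the sheet is previewed or printed. If the footer was built as described above, the two sections should contain text similar to the following sketch of what appears in the dialog box:
\begin{verbatim}
Left section:    File: &[File]
Right section:   Worksheet: &[Tab]
\end{verbatim}
At print time, \texttt{\&[File]} is replaced by the workbook filename and \texttt{\&[Tab]} by the worksheet name.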
\subsection{Inserting an Image to Enhance a Worksheet}
Next, add a small weather-related graphic to the worksheet to enhance its appearance. In Excel, an image can come from a file on the local hard drive or be downloaded from an online source. For this exercise, use the graphic included in the student files for this chapter.
\begin{enumbox}
\begin{enumerate}
\item \fmtOldExcel{Excel 2016} Click \fmtButton{Insert $ \Rightarrow $ Illustrations $ \Rightarrow $ Pictures}. (This inserts an image from the local computer. To search for an online image, click the \fmtButton{Online Pictures} button.) (See Figure \ref{05:fig28}.)
\item \fmtNewExcel{Excel 365} Click \fmtButton{Insert $ \Rightarrow $ Illustrations $ \Rightarrow $ Pictures $ \Rightarrow $ This Device}. (This inserts an image from the local computer. To search for an online image, click the \fmtButton{Online Pictures} button in the \textit{Pictures} dropdown menu.)
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig28}
\caption{Insert Pictures}
\label{05:fig28}
\end{figure}
\item Navigate to the folder containing the Chapter $ 5 $ files and double-click the \textit{CH5-Weather.png} image file.
\end{enumerate}
\end{enumbox}
The image now appears on the worksheet, but not in the desired location. It is also slightly larger than desired (see Figure \ref{05:fig29}).
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig29}
\caption{Inserted Image}
\label{05:fig29}
\end{figure}
\begin{enumbox}
\begin{enumerate}
\item Place the mouse pointer in the image so that the pointer changes to crosshairs. Drag the image so that the top left corner is at the left edge of cell \fmtLoc{E1}.
\item Using the resizing handle in the bottom right corner of the image, resize the image so that it does not cover any of the table. Hint: drag diagonally to the left and up. The bottom right corner of the image will end up near the left edge of cell \fmtLoc{F4}.
\item Check \textit{Print Preview} again to make sure the worksheet with the image added looks good.
\item Exit \textit{Print Preview}.
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
\end{enumerate}
\end{enumbox}
\subsection{Previewing the Remaining Worksheets}
Before considering this workbook finished, confirm that the remaining worksheets are all printing appropriately.
\begin{enumbox}
\begin{enumerate}
\item Click the \fmtWorksheet{Portland OR} worksheet.
\item Click \fmtButton{File $ \Rightarrow $ Print}.
\item No changes need to be made to this worksheet so click the arrow in the top left corner of the screen to exit \textit{Print Preview}.
\item Click the \fmtWorksheet{Weekly OR} worksheet.
\item Click \fmtButton{File $ \Rightarrow $ Print}.
\item Notice that the Slicer is printing on a second page. To fix this, set the \textit{Page Scaling} to \fmtButton{Fit All Columns on One Page}.
\item If one or more Slicer buttons are cut off, exit \textit{Print Preview}, and resize the Slicer so that all of the buttons display then return to \textit{Print Preview}.
\item Confirm the worksheet, including the slicer, is printing appropriately.
\item Click the arrow in the top left corner of the screen to exit \textit{Print Preview}.
\item Click the \fmtWorksheet{Subtotal OR} worksheet.
\item Click \fmtButton{File $ \Rightarrow $ Print}.
\item Click \fmtButton{Page Setup} at the bottom of the \textit{Settings} area.
\item Center this worksheet horizontally on the page.
\item Click the arrow in the top left corner of the screen to exit \textit{Print Preview}.
%filesave CH5-National Weather
\item Save the \fmtWorksheet{CH5-National Weather} workbook.
%fileclose CH5-National Weather
\item Compare the worksheet with the self-check answer key (\fmtWorksheet{CH5-National Weather Solution}) and then close and submit the \fmtWorksheet{CH5-National Weather} workbook as directed by the instructor.
\end{enumerate}
\end{enumbox}
\begin{center}
\begin{tkwbox}{Key Take-Aways}
\textbf{Printing}
\\
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item When working with Excel workbooks, the final step should always be to review the worksheets in \textit{Print Preview} to make sure they are printing appropriately.
\item Images can be added to a worksheet to enhance its appearance. Be sure to resize and move them appropriately so they do not detract from the data.
\end{itemize}
\end{tkwbox}
\end{center}
\section{Chapter Practice}
\subsection{Tables for a Tourism Company}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig30}
\caption{Chapter Practice Completed Exercise}
\label{05:fig30}
\end{figure}
Travel and tour companies need to keep track of client data as well as travel/tour options and tour guides. Keeping up-to-date, accurate records is essential to their bottom line. To run a tour company, employees must be able to manipulate their data quickly and easily. This exercise illustrates how to use the skills presented in this chapter to generate the data needed daily by a tourism company. Figure \ref{05:fig30} shows one of the completed worksheets.
\begin{enumbox}
\begin{enumerate}
\item Open the data file \fmtWorksheet{PR5-Data} and save the file as \fmtWorksheet{PR5-Canyon Trails}.
\item In \fmtLoc{J4}, calculate \textit{Total Cost} (\fmtTyping{Guests*Per Person Cost}). Copy the formula down the column.
\item Format \fmtLoc{Column I} and \fmtLoc{Column J} as \textit{Accounting} with no decimal places.
\item Center all headings in \fmtLoc{Row }$ 3 $.
\item Click cell \fmtLoc{A3}. Insert a table with headers for the range \fmtLoc{A3:J53}.
\item Adjust column widths within the table so that all the headings are completely visible.
\item Rename \fmtWorksheet{Sheet $ 1 $} to \fmtTyping{Current Tours}. Sort this sheet alphabetically (A to Z) by \textit{Last Name}.
\item Make a copy of the \fmtWorksheet{Current Tours} sheet and rename it \fmtWorksheet{Tours by Canyon}.
\item Place the \fmtWorksheet{Tours by Canyon} sheet to the right of the \fmtWorksheet{Current Tours} sheet.
\item Sort the \fmtWorksheet{Tours by Canyon} worksheet by \textit{Tour Canyon} (A to Z), then \textit{Home Country} (A to Z), and then \textit{Last Name} (A to Z).
\item Make another copy of the \fmtWorksheet{Current Tours} sheet and rename it \fmtWorksheet{US Guests}.
\item Place the \fmtWorksheet{US Guests} sheet to the right of the \fmtWorksheet{Tours by Canyon} sheet.
\item Filter the \fmtWorksheet{US Guests} worksheet so that only guests with a \textit{Home Country} of the United States show.
\item Sort the worksheet by \textit{Tour State} (A to Z).
\item Add a Total Row that sums the \textit{Guests} and \textit{Total Cost} columns.
\item Make another copy of the \fmtWorksheet{Current Tours} sheet and rename it \fmtWorksheet{European Guests}.
\item Place the \fmtWorksheet{European Guests} sheet to the right of the \fmtWorksheet{US Guests} sheet.
\item On the \fmtWorksheet{European Guests} worksheet, hide the \textit{Average Age} column.
\item Insert a slicer in the \fmtWorksheet{European Guests} sheet for \textit{Home Country}. Move the slicer so its top left corner is near the center of \fmtLoc{K3}. Click-and-drag the bottom right corner of the slicer so it is near the center of \fmtLoc{N16}.
\item Select both \textit{Germany} and the \textit{United Kingdom} on the slicer.
\item Sort the filtered sheet by \textit{Home Country} (A to Z) and then \textit{Last Name} (A to Z).
\item Make one more copy of the \fmtWorksheet{Current Tours} sheet and rename it \fmtWorksheet{Tours by State}.
\item Place the \fmtWorksheet{Tours by State} sheet to the right of the \fmtWorksheet{European Guests} sheet.
\item Subtotal the sheet by \textit{Tour State}, summing the \textit{Total Cost} column. (Remember to sort the worksheet by \textit{Tour State} before applying subtotals.)
\item Change the name of the \fmtWorksheet{Tours by State} sheet to \fmtWorksheet{$ 5-7 $ Day Tours by State}. Filter out $ 3 $-day tours in the table.
\item Save the \fmtWorksheet{PR5-Canyon Trails} workbook.
\item On each worksheet, make the following print setup changes.
\begin{itemize}
\item Add a footer with the worksheet name in the center.
\item Change to \fmtButton{Landscape Orientation}.
\item Set the scaling to \fmtButton{Fit All Columns on One Page}.
\end{itemize}
\item For any worksheets that print on more than one page, add Print Titles to repeat the first three rows at the top of each page.
\item Make sure the sheets are in the following order from left to right: \fmtWorksheet{Current Tours}, \fmtWorksheet{Tours by Canyon}, \fmtWorksheet{US Guests}, \fmtWorksheet{European Guests}, and \fmtWorksheet{$ 5-7 $ Day Tours by State}.
\item Save the \fmtWorksheet{PR5-Canyon Trails} workbook.
\item Compare the workbook with the self-check answer key (\fmtWorksheet{PR5-Canyon Trails Solution}) and then close and submit the \fmtWorksheet{PR5-Canyon Trails} workbook as directed by the instructor.
\end{enumerate}
\end{enumbox}
\section{Scored Assessment}
\subsection{Tables for a Retail Company}
Retail companies with online and in-store sales have a huge quantity of data to keep track of. Keeping track of sales, costs, and profits on a daily basis is essential to making the most of a business. This exercise illustrates how to use the skills presented in this chapter to generate the data needed on a daily basis by a retail company. Figure \ref{05:fig31} shows the completed worksheet.
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig31}
\caption{Scored Assessment Completed Exercise}
\label{05:fig31}
\end{figure}
\begin{enumbox}
\begin{enumerate}
\item Open \fmtWorksheet{SC5-Data} and save the file as \fmtWorksheet{SC5-Dynamite Customer Sales}.
\item Click on the \fmtWorksheet{Sales} worksheet.
\item In \fmtLoc{I4}, enter a \fmtTyping{Vlookup} function that will find the \textit{Product Price} for the Product in \fmtLoc{E4} using the table in the \fmtWorksheet{Product Table} worksheet. In the \fmtTyping{Vlookup} function, fill in the required parameters using the values below along with Figure \ref{05:fig32} as a reference; the assembled formulas are shown after these steps. (Notice that the \textit{Table\_array} parameter uses absolute cell references.)
\begin{itemize}
\item \textbf{Lookup\_value}: E$ 4 $
\item \textbf{Table\_array}: 'Product Table'!\$A\$$ 1 $:\$D\$$ 13 $
\item \textbf{Col\_index\_num}: $ 4 $
\item \textbf{Range\_lookup}: False
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch05_fig32}
\caption{\fmtTyping{Vlookup} window}
\label{05:fig32}
\end{figure}
\item Copy cell \fmtLoc{I4} to \fmtLoc{I5:I52}.
\item In \fmtLoc{J4}, enter a \fmtTyping{Vlookup} function that will find the \textit{Product Cost} for the \textit{Product} in \fmtLoc{E4} using the table in the \fmtWorksheet{Product Table} worksheet. Note that the COL\_INDEX\_NUM needs to be $ 3 $ instead of $ 4 $.
\item Copy \fmtLoc{J4} to \fmtLoc{J5:J52}.
\item In \fmtLoc{K4}, calculate \textit{Profit} (\textit{Product Price} $ – $ \textit{Product Cost}).
\item Copy \fmtLoc{K4} to \fmtLoc{K5:K52}.
\item Format \fmtLoc{Column I}, \fmtLoc{Column J}, and \fmtLoc{Column K} as \textit{Accounting} with two decimal places.
\item Click cell \fmtLoc{A3}. Insert a table with headers for the range \fmtLoc{A3:K52}. \textit{BE CAREFUL HERE}: Excel will try to insert a table starting with \fmtLoc{A2}. Be certain the range starts with \fmtLoc{A3}.
\end{enumerate}
\end{enumbox}
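For reference, the formulas produced by the steps above can also be typed directly into the cells. Based on the parameters listed, and on \textit{Product Price} being calculated in \fmtLoc{Column I} and \textit{Product Cost} in \fmtLoc{Column J}, they should assemble to something like:
\begin{verbatim}
I4:  =VLOOKUP(E4,'Product Table'!$A$1:$D$13,4,FALSE)    Product Price
J4:  =VLOOKUP(E4,'Product Table'!$A$1:$D$13,3,FALSE)    Product Cost
K4:  =I4-J4                                             Profit
\end{verbatim}
Each formula is then copied down its column as described in the steps.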
\noindent
\textbf{Online Sales by Date}
\begin{enumbox}
\begin{enumerate}
\item Make a copy of the \fmtWorksheet{Sales} sheet.
\item Rename \fmtWorksheet{Sales (2)} to \fmtWorksheet{Online Sales by Date}.
\item Place \fmtWorksheet{Online Sales by Date} to the right of the \fmtWorksheet{Sales} worksheet.
\item On the \fmtWorksheet{Online Sales by Date} worksheet, filter \textit{Sales Type} so that only \textit{Online Sales} are displayed.
\item Sort the filtered data by \textit{Date Sold} (Oldest to Newest).
\end{enumerate}
\end{enumbox}
\noindent
\textbf{June Sales by Country}
\begin{enumbox}
\begin{enumerate}
\item Make a copy of the \fmtWorksheet{Sales} sheet and rename it \fmtWorksheet{June Sales by Country}.
\item Place \fmtWorksheet{June Sales by Country} to the right of the \fmtWorksheet{Online Sales by Date} worksheet.
\item On the \fmtWorksheet{June Sales by Country} worksheet, filter \textit{Date Sold} to show only \textit{June} dates by using the \textit{Between} option under \textit{Date Filters}.
\item Sort the worksheet by \textit{Country} (A to Z) and then by \textit{Name} (A to Z).
\end{enumerate}
\end{enumbox}
\noindent
\textbf{Sales by Product}
\begin{enumbox}
\begin{enumerate}
\item Make another copy of the \fmtWorksheet{Sales} sheet and rename it \fmtWorksheet{Sales by Product}.
\item Place \fmtWorksheet{Sales by Product} to the right of the \fmtWorksheet{June Sales by Country} worksheet.
\item In the \fmtWorksheet{Sales by Product} worksheet, hide the \textit{Region} column.
\item Insert a slicer for \textit{Product Sold}. Move the slicer so its top left corner is near the center of \fmtLoc{L3}. Click-and-drag the bottom right corner of the slicer so it is near the center of \fmtLoc{O16}.
\item Select both \textit{DETA100} and \textit{DETA200} in the slicer (all other products should be off).
\item Sort the filtered sheet by \textit{Product Sold} (A to Z).
\item Add a \textit{Total Row} that includes the average for the \textit{Product Price}, \textit{Product Cost}, and \textit{Profit} columns (see the note following this list).
\item Change the heading in \fmtLoc{A53} to \fmtTyping{Average}.
\end{enumerate}
\end{enumbox}
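\noindent
As a side note, when you pick \textit{Average} from a \textit{Total Row} drop-down list, Excel normally inserts a \fmtTyping{SUBTOTAL} function behind the scenes (for the \textit{Profit} column it should look something like \fmtTyping{=SUBTOTAL(101,[Profit])}), so the result reflects only the rows left visible by the slicer.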
\noindent
\textbf{Subtotals by Date}
\begin{enumbox}
\begin{enumerate}
\item Make a copy of the \fmtWorksheet{Sales} sheet and rename it \fmtWorksheet{Subtotals by Date}.
\item Place \fmtWorksheet{Subtotals by Date} to the right of the \fmtWorksheet{Sales by Product} worksheet.
\item In the \fmtWorksheet{Subtotals by Date} worksheet, subtotal the sheet by \textit{Date Sold} (Oldest to Newest), summing the \textit{Profit} column.
\item Click the \fmtButton{2 Outline} button to show just the subtotals by date and the Grand Total.
\end{enumerate}
\end{enumbox}
\noindent
\textbf{Subtotals by Type}
\begin{enumbox}
\begin{enumerate}
\item Make a copy of the \fmtWorksheet{Sales} sheet and rename it \fmtWorksheet{Subtotals by Type}.
\item Place \fmtWorksheet{Subtotals by Type} to the right of the \fmtWorksheet{Subtotals by Date} worksheet.
\item Subtotal the sheet by \textit{Sales Type}, summing the \textit{Profit} column.
\item Add a second subtotal that subtotals by \textit{Type} and averages the \textit{Profit} column. (\textit{Hint}: uncheck \fmtButton{Replace Current Subtotals} in the \textit{Subtotal} dialog box.)
\item Notice that four outline buttons appear with the second subtotal. Experiment to determine which outline button to click to display the \textit{Average} and \textit{Total} subtotals for both \textit{Online} and \textit{Retail} along with the \textit{Grand Average} and \textit{Grand Total}.
\end{enumerate}
\end{enumbox}
\noindent
\textbf{Print Settings}
\begin{enumbox}
\begin{enumerate}
\item For each worksheet, click \fmtButton{Insert $ \Rightarrow $ Text $ \Rightarrow $ Header \& Footer} to add a custom footer with the worksheet name in the center.
\item Preview each worksheet in \textit{Print Preview} and make any necessary changes for professional printing. (\textit{Hint}: Orientation, page scaling, and print titles might need to be adjusted.)
\item Double-check that the sheets are in the following order from left to right: \fmtWorksheet{Sales}, \fmtWorksheet{Online Sales by Date}, \fmtWorksheet{June Sales by Country}, \fmtWorksheet{Sales by Product}, \fmtWorksheet{Subtotals by Date}, \fmtWorksheet{Subtotals by Type}, and \fmtWorksheet{Product Table}.
\item Save and close the \fmtWorksheet{SC5-Dynamite Customer Sales} workbook.
\item Submit the \fmtWorksheet{SC5-Dynamite Customer Sales} workbook as directed by the instructor.
\end{enumerate}
\end{enumbox} | {
"alphanum_fraction": 0.7387844269,
"avg_line_length": 51.6398066076,
"ext": "tex",
"hexsha": "0b66e7e22b50cb9d8890c1c02d7cce4f01d9489c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "grself/cll_excel",
"max_forks_repo_path": "Chapters/05Tables.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "grself/cll_excel",
"max_issues_repo_path": "Chapters/05Tables.tex",
"max_line_length": 1006,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "grself/cll_excel",
"max_stars_repo_path": "Chapters/05Tables.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 18584,
"size": 64085
} |
\hypertarget{section}{%
\section{1}\label{section}}
\bibverse{1} Paul, an apostle of Christ+ 1:1 ``Christ'' means ``Anointed
One''. Jesus through the will of God, and Timothy our brother, to the
assembly of God which is at Corinth, with all the saints who are in the
whole of Achaia: \bibverse{2} Grace to you and peace from God our Father
and the Lord Jesus Christ.
\bibverse{3} Blessed be the God and Father of our Lord Jesus Christ, the
Father of mercies and God of all comfort, \bibverse{4} who comforts us
in all our affliction, that we may be able to comfort those who are in
any affliction, through the comfort with which we ourselves are
comforted by God. \bibverse{5} For as the sufferings of Christ abound to
us, even so our comfort also abounds through Christ. \bibverse{6} But if
we are afflicted, it is for your comfort and salvation. If we are
comforted, it is for your comfort, which produces in you the patient
enduring of the same sufferings which we also suffer. \bibverse{7} Our
hope for you is steadfast, knowing that, since you are partakers of the
sufferings, so you are also of the comfort.
\bibverse{8} For we don't desire to have you uninformed, brothers,+ 1:8
The word for ``brothers'' here and where context allows may also be
correctly translated ``brothers and sisters'' or ``siblings.''
concerning our affliction which happened to us in Asia: that we were
weighed down exceedingly, beyond our power, so much that we despaired
even of life. \bibverse{9} Yes, we ourselves have had the sentence of
death within ourselves, that we should not trust in ourselves, but in
God who raises the dead, \bibverse{10} who delivered us out of so great
a death, and does deliver, on whom we have set our hope that he will
also still deliver us, \bibverse{11} you also helping together on our
behalf by your supplication; that, for the gift given to us by means of
many, thanks may be given by many persons on your behalf.
\bibverse{12} For our boasting is this: the testimony of our conscience
that in holiness and sincerity of God, not in fleshly wisdom but in the
grace of God, we behaved ourselves in the world, and more abundantly
toward you. \bibverse{13} For we write no other things to you than what
you read or even acknowledge, and I hope you will acknowledge to the
end--- \bibverse{14} as also you acknowledged us in part---that we are
your boasting, even as you also are ours, in the day of our Lord Jesus.
\bibverse{15} In this confidence, I was determined to come first to you,
that you might have a second benefit, \bibverse{16} and by you to pass
into Macedonia, and again from Macedonia to come to you, and to be sent
forward by you on my journey to Judea. \bibverse{17} When I therefore
planned this, did I show fickleness? Or the things that I plan, do I
plan according to the flesh, that with me there should be the ``Yes,
yes'' and the ``No, no?'' \bibverse{18} But as God is faithful, our word
toward you was not ``Yes and no.'' \bibverse{19} For the Son of God,
Jesus Christ, who was preached among you by us---by me, Silvanus, and
Timothy---was not ``Yes and no,'' but in him is ``Yes.'' \bibverse{20}
For however many are the promises of God, in him is the ``Yes.''
Therefore also through him is the ``Amen'', to the glory of God through
us.
\bibverse{21} Now he who establishes us with you in Christ and anointed
us is God, \bibverse{22} who also sealed us and gave us the down payment
of the Spirit in our hearts.
\bibverse{23} But I call God for a witness to my soul, that I didn't
come to Corinth to spare you. \bibverse{24} We don't control your faith,
but are fellow workers with you for your joy. For you stand firm in
faith.
\hypertarget{section-1}{%
\section{2}\label{section-1}}
\bibverse{1} But I determined this for myself, that I would not come to
you again in sorrow. \bibverse{2} For if I make you grieve, then who
will make me glad but he who is made to grieve by me? \bibverse{3} And I
wrote this very thing to you, so that when I came, I wouldn't have
sorrow from them of whom I ought to rejoice; having confidence in you
all that my joy would be shared by all of you. \bibverse{4} For out of
much affliction and anguish of heart I wrote to you with many tears, not
that you should be made to grieve, but that you might know the love that
I have so abundantly for you.
\bibverse{5} But if any has caused sorrow, he has caused sorrow not to
me, but in part (that I not press too heavily) to you all. \bibverse{6}
This punishment which was inflicted by the many is sufficient for such a
one; \bibverse{7} so that, on the contrary, you should rather forgive
him and comfort him, lest by any means such a one should be swallowed up
with his excessive sorrow. \bibverse{8} Therefore I beg you to confirm
your love toward him. \bibverse{9} For to this end I also wrote, that I
might know the proof of you, whether you are obedient in all things.
\bibverse{10} Now I also forgive whomever you forgive anything. For if
indeed I have forgiven anything, I have forgiven that one for your sakes
in the presence of Christ, \bibverse{11} that no advantage may be gained
over us by Satan, for we are not ignorant of his schemes.
\bibverse{12} Now when I came to Troas for the Good News of Christ, and
when a door was opened to me in the Lord, \bibverse{13} I had no relief
for my spirit, because I didn't find Titus my brother, but taking my
leave of them, I went out into Macedonia.
\bibverse{14} Now thanks be to God who always leads us in triumph in
Christ, and reveals through us the sweet aroma of his knowledge in every
place. \bibverse{15} For we are a sweet aroma of Christ to God in those
who are saved and in those who perish: \bibverse{16} to the one a stench
from death to death, to the other a sweet aroma from life to life. Who
is sufficient for these things? \bibverse{17} For we are not as so many,
peddling the word of God. But as of sincerity, but as of God, in the
sight of God, we speak in Christ.
\hypertarget{section-2}{%
\section{3}\label{section-2}}
\bibverse{1} Are we beginning again to commend ourselves? Or do we need,
as do some, letters of commendation to you or from you? \bibverse{2} You
are our letter, written in our hearts, known and read by all men,
\bibverse{3} being revealed that you are a letter of Christ, served by
us, written not with ink, but with the Spirit of the living God; not in
tablets of stone, but in tablets that are hearts of flesh.
\bibverse{4} Such confidence we have through Christ toward God,
\bibverse{5} not that we are sufficient of ourselves to account anything
as from ourselves; but our sufficiency is from God, \bibverse{6} who
also made us sufficient as servants of a new covenant, not of the letter
but of the Spirit. For the letter kills, but the Spirit gives life.
\bibverse{7} But if the service of death, written engraved on stones,
came with glory, so that the children of Israel could not look
steadfastly on the face of Moses for the glory of his face, which was
passing away, \bibverse{8} won't service of the Spirit be with much more
glory? \bibverse{9} For if the service of condemnation has glory, the
service of righteousness exceeds much more in glory. \bibverse{10} For
most certainly that which has been made glorious has not been made
glorious in this respect, by reason of the glory that surpasses.
\bibverse{11} For if that which passes away was with glory, much more
that which remains is in glory.
\bibverse{12} Having therefore such a hope, we use great boldness of
speech, \bibverse{13} and not as Moses, who put a veil on his face so
that the children of Israel wouldn't look steadfastly on the end of that
which was passing away. \bibverse{14} But their minds were hardened, for
until this very day at the reading of the old covenant the same veil
remains, because in Christ it passes away. \bibverse{15} But to this
day, when Moses is read, a veil lies on their heart. \bibverse{16} But
whenever someone turns to the Lord, the veil is taken away.
\bibverse{17} Now the Lord is the Spirit; and where the Spirit of the
Lord is, there is liberty. \bibverse{18} But we all, with unveiled face
seeing the glory of the Lord as in a mirror, are transformed into the
same image from glory to glory, even as from the Lord, the Spirit.
\hypertarget{section-3}{%
\section{4}\label{section-3}}
\bibverse{1} Therefore, seeing we have this ministry, even as we
obtained mercy, we don't faint. \bibverse{2} But we have renounced the
hidden things of shame, not walking in craftiness nor handling the word
of God deceitfully, but by the manifestation of the truth commending
ourselves to every man's conscience in the sight of God. \bibverse{3}
Even if our Good News is veiled, it is veiled in those who are dying,
\bibverse{4} in whom the god of this world has blinded the minds of the
unbelieving, that the light of the Good News of the glory of Christ, who
is the image of God, should not dawn on them. \bibverse{5} For we don't
preach ourselves, but Christ Jesus as Lord, and ourselves as your
servants for Jesus' sake, \bibverse{6} seeing it is God who said,
``Light will shine out of darkness,''+ 4:6 Genesis 1:3 who has shone in
our hearts to give the light of the knowledge of the glory of God in the
face of Jesus Christ.
\bibverse{7} But we have this treasure in clay vessels, that the
exceeding greatness of the power may be of God and not from ourselves.
\bibverse{8} We are pressed on every side, yet not crushed; perplexed,
yet not to despair; \bibverse{9} pursued, yet not forsaken; struck down,
yet not destroyed; \bibverse{10} always carrying in the body the putting
to death of the Lord Jesus, that the life of Jesus may also be revealed
in our body. \bibverse{11} For we who live are always delivered to death
for Jesus' sake, that the life also of Jesus may be revealed in our
mortal flesh. \bibverse{12} So then death works in us, but life in you.
\bibverse{13} But having the same spirit of faith, according to that
which is written, ``I believed, and therefore I spoke.''+ 4:13 Psalm
116:10 We also believe, and therefore we also speak, \bibverse{14}
knowing that he who raised the Lord Jesus will raise us also with Jesus,
and will present us with you. \bibverse{15} For all things are for your
sakes, that the grace, being multiplied through the many, may cause the
thanksgiving to abound to the glory of God.
\bibverse{16} Therefore we don't faint, but though our outward person is
decaying, yet our inward person is renewed day by day. \bibverse{17} For
our light affliction, which is for the moment, works for us more and
more exceedingly an eternal weight of glory, \bibverse{18} while we
don't look at the things which are seen, but at the things which are not
seen. For the things which are seen are temporal, but the things which
are not seen are eternal.
\hypertarget{section-4}{%
\section{5}\label{section-4}}
\bibverse{1} For we know that if the earthly house of our tent is
dissolved, we have a building from God, a house not made with hands,
eternal, in the heavens. \bibverse{2} For most certainly in this we
groan, longing to be clothed with our habitation which is from heaven,
\bibverse{3} if indeed being clothed, we will not be found naked.
\bibverse{4} For indeed we who are in this tent do groan, being
burdened, not that we desire to be unclothed, but that we desire to be
clothed, that what is mortal may be swallowed up by life. \bibverse{5}
Now he who made us for this very thing is God, who also gave to us the
down payment of the Spirit.
\bibverse{6} Therefore we are always confident and know that while we
are at home in the body, we are absent from the Lord; \bibverse{7} for
we walk by faith, not by sight. \bibverse{8} We are courageous, I say,
and are willing rather to be absent from the body and to be at home with
the Lord. \bibverse{9} Therefore also we make it our aim, whether at
home or absent, to be well pleasing to him. \bibverse{10} For we must
all be revealed before the judgment seat of Christ that each one may
receive the things in the body according to what he has done, whether
good or bad.
\bibverse{11} Knowing therefore the fear of the Lord, we persuade men,
but we are revealed to God, and I hope that we are revealed also in your
consciences. \bibverse{12} For we are not commending ourselves to you
again, but speak as giving you occasion of boasting on our behalf, that
you may have something to answer those who boast in appearance and not
in heart. \bibverse{13} For if we are beside ourselves, it is for God.
Or if we are of sober mind, it is for you. \bibverse{14} For the love of
Christ compels us; because we judge thus: that one died for all,
therefore all died. \bibverse{15} He died for all, that those who live
should no longer live to themselves, but to him who for their sakes died
and rose again.
\bibverse{16} Therefore we know no one according to the flesh from now
on. Even though we have known Christ according to the flesh, yet now we
know him so no more. \bibverse{17} Therefore if anyone is in Christ, he
is a new creation. The old things have passed away. Behold,+ 5:17
``Behold'', from ``ἰδοὺ'', means look at, take notice, observe, see, or
gaze at. It is often used as an interjection. all things have become
new. \bibverse{18} But all things are of God, who reconciled us to
himself through Jesus Christ, and gave to us the ministry of
reconciliation; \bibverse{19} namely, that God was in Christ reconciling
the world to himself, not reckoning to them their trespasses, and having
committed to us the word of reconciliation.
\bibverse{20} We are therefore ambassadors on behalf of Christ, as
though God were entreating by us: we beg you on behalf of Christ, be
reconciled to God. \bibverse{21} For him who knew no sin he made to be
sin on our behalf, so that in him we might become the righteousness of
God.
\hypertarget{section-5}{%
\section{6}\label{section-5}}
\bibverse{1} Working together, we entreat also that you do not receive
the grace of God in vain. \bibverse{2} For he says, ``At an acceptable
time I listened to you. In a day of salvation I helped you.''+ 6:2
Isaiah 49:8
Behold, now is the acceptable time. Behold, now is the day of salvation.
\bibverse{3} We give no occasion of stumbling in anything, that our
service may not be blamed, \bibverse{4} but in everything commending
ourselves as servants of God: in great endurance, in afflictions, in
hardships, in distresses, \bibverse{5} in beatings, in imprisonments, in
riots, in labors, in watchings, in fastings, \bibverse{6} in pureness,
in knowledge, in perseverance, in kindness, in the Holy Spirit, in
sincere love, \bibverse{7} in the word of truth, in the power of God, by
the armor of righteousness on the right hand and on the left,
\bibverse{8} by glory and dishonor, by evil report and good report, as
deceivers and yet true, \bibverse{9} as unknown and yet well known, as
dying and behold---we live, as punished and not killed, \bibverse{10} as
sorrowful yet always rejoicing, as poor yet making many rich, as having
nothing and yet possessing all things.
\bibverse{11} Our mouth is open to you, Corinthians. Our heart is
enlarged. \bibverse{12} You are not restricted by us, but you are
restricted by your own affections. \bibverse{13} Now in return---I speak
as to my children---you also open your hearts.
\bibverse{14} Don't be unequally yoked with unbelievers, for what
fellowship do righteousness and iniquity have? Or what fellowship does
light have with darkness? \bibverse{15} What agreement does Christ have
with Belial? Or what portion does a believer have with an unbeliever?
\bibverse{16} What agreement does a temple of God have with idols? For
you are a temple of the living God. Even as God said, ``I will dwell in
them and walk in them. I will be their God and they will be my
people.''+ 6:16 Leviticus 26:12; Jeremiah 32:38; Ezekiel 37:27
\bibverse{17} Therefore ```Come out from among them, and be separate,'
says the Lord. `Touch no unclean thing. I will receive you.+ 6:17 Isaiah
52:11; Ezekiel 20:34,41 \bibverse{18} I will be to you a Father. You
will be to me sons and daughters,'
says the Lord Almighty.''+ 6:18 2 Samuel 7:14; 7:8
\hypertarget{section-6}{%
\section{7}\label{section-6}}
\bibverse{1} Having therefore these promises, beloved, let's cleanse
ourselves from all defilement of flesh and spirit, perfecting holiness
in the fear of God.
\bibverse{2} Open your hearts to us. We wronged no one. We corrupted no
one. We took advantage of no one. \bibverse{3} I say this not to condemn
you, for I have said before that you are in our hearts to die together
and live together. \bibverse{4} Great is my boldness of speech toward
you. Great is my boasting on your behalf. I am filled with comfort. I
overflow with joy in all our affliction.
\bibverse{5} For even when we had come into Macedonia, our flesh had no
relief, but we were afflicted on every side. Fightings were outside.
Fear was inside. \bibverse{6} Nevertheless, he who comforts the lowly,
God, comforted us by the coming of Titus, \bibverse{7} and not by his
coming only, but also by the comfort with which he was comforted in you
while he told us of your longing, your mourning, and your zeal for me,
so that I rejoiced still more.
\bibverse{8} For though I grieved you with my letter, I do not regret
it, though I did regret it. For I see that my letter made you grieve,
though just for a while. \bibverse{9} I now rejoice, not that you were
grieved, but that you were grieved to repentance. For you were grieved
in a godly way, that you might suffer loss by us in nothing.
\bibverse{10} For godly sorrow produces repentance leading to salvation,
which brings no regret. But the sorrow of the world produces death.
\bibverse{11} For behold, this same thing, that you were grieved in a
godly way, what earnest care it worked in you. Yes, what defense,
indignation, fear, longing, zeal, and vindication! In everything you
demonstrated yourselves to be pure in the matter. \bibverse{12} So
although I wrote to you, I wrote not for his cause that did the wrong,
nor for his cause that suffered the wrong, but that your earnest care
for us might be revealed in you in the sight of God. \bibverse{13}
Therefore we have been comforted. In our comfort we rejoiced the more
exceedingly for the joy of Titus, because his spirit has been refreshed
by you all. \bibverse{14} For if in anything I have boasted to him on
your behalf, I was not disappointed. But as we spoke all things to you
in truth, so our glorying also which I made before Titus was found to be
truth. \bibverse{15} His affection is more abundantly toward you, while
he remembers all of your obedience, how with fear and trembling you
received him. \bibverse{16} I rejoice that in everything I am confident
concerning you.
\hypertarget{section-7}{%
\section{8}\label{section-7}}
\bibverse{1} Moreover, brothers, we make known to you the grace of God
which has been given in the assemblies of Macedonia, \bibverse{2} how in
a severe ordeal of affliction, the abundance of their joy and their deep
poverty abounded to the riches of their generosity. \bibverse{3} For
according to their power, I testify, yes and beyond their power, they
gave of their own accord, \bibverse{4} begging us with much entreaty to
receive this grace and the fellowship in the service to the saints.
\bibverse{5} This was not as we had expected, but first they gave their
own selves to the Lord, and to us through the will of God. \bibverse{6}
So we urged Titus, that as he had made a beginning before, so he would
also complete in you this grace. \bibverse{7} But as you abound in
everything---in faith, utterance, knowledge, all earnestness, and in
your love to us---see that you also abound in this grace.
\bibverse{8} I speak not by way of commandment, but as proving through
the earnestness of others the sincerity also of your love. \bibverse{9}
For you know the grace of our Lord Jesus Christ, that though he was
rich, yet for your sakes he became poor, that you through his poverty
might become rich. \bibverse{10} I give advice in this: it is expedient
for you who were the first to start a year ago, not only to do, but also
to be willing. \bibverse{11} But now complete the doing also, that as
there was the readiness to be willing, so there may be the completion
also out of your ability. \bibverse{12} For if the readiness is there,
it is acceptable according to what you have, not according to what you
don't have. \bibverse{13} For this is not that others may be eased and
you distressed, \bibverse{14} but for equality. Your abundance at this
present time supplies their lack, that their abundance also may become a
supply for your lack, that there may be equality. \bibverse{15} As it is
written, ``He who gathered much had nothing left over, and he who
gathered little had no lack.''+ 8:15 Exodus 16:18
\bibverse{16} But thanks be to God, who puts the same earnest care for
you into the heart of Titus. \bibverse{17} For he indeed accepted our
exhortation, but being himself very earnest, he went out to you of his
own accord. \bibverse{18} We have sent together with him the brother
whose praise in the Good News is known throughout all the assemblies.
\bibverse{19} Not only so, but he was also appointed by the assemblies
to travel with us in this grace, which is served by us to the glory of
the Lord himself, and to show our readiness. \bibverse{20} We are
avoiding this, that any man should blame us concerning this abundance
which is administered by us. \bibverse{21} Having regard for honorable
things, not only in the sight of the Lord, but also in the sight of men.
\bibverse{22} We have sent with them our brother whom we have many times
proved earnest in many things, but now much more earnest, by reason of
the great confidence which he has in you. \bibverse{23} As for Titus, he
is my partner and fellow worker for you. As for our brothers, they are
the apostles of the assemblies, the glory of Christ. \bibverse{24}
Therefore show the proof of your love to them before the assemblies, and
of our boasting on your behalf.
\hypertarget{section-8}{%
\section{9}\label{section-8}}
\bibverse{1} It is indeed unnecessary for me to write to you concerning
the service to the saints, \bibverse{2} for I know your readiness, of
which I boast on your behalf to those of Macedonia, that Achaia has been
prepared for the past year. Your zeal has stirred up very many of them.
\bibverse{3} But I have sent the brothers so that our boasting on your
behalf may not be in vain in this respect, that, just as I said, you may
be prepared, \bibverse{4} lest by any means, if anyone from Macedonia
comes there with me and finds you unprepared, we (to say nothing of you)
would be disappointed in this confident boasting. \bibverse{5} I thought
it necessary therefore to entreat the brothers that they would go before
to you and arrange ahead of time the generous gift that you promised
before, that the same might be ready as a matter of generosity, and not
of greediness.
\bibverse{6} Remember this: he who sows sparingly will also reap
sparingly. He who sows bountifully will also reap bountifully.
\bibverse{7} Let each man give according as he has determined in his
heart, not grudgingly or under compulsion, for God loves a cheerful
giver. \bibverse{8} And God is able to make all grace abound to you,
that you, always having all sufficiency in everything, may abound to
every good work. \bibverse{9} As it is written, ``He has scattered
abroad. He has given to the poor. His righteousness remains forever.''+
9:9 Psalm 112:9
\bibverse{10} Now may he who supplies seed to the sower and bread for
food, supply and multiply your seed for sowing, and increase the fruits
of your righteousness, \bibverse{11} you being enriched in everything
for all generosity, which produces thanksgiving to God through us.
\bibverse{12} For this service of giving that you perform not only makes
up for lack among the saints, but abounds also through much giving of
thanks to God, \bibverse{13} seeing that through the proof given by this
service, they glorify God for the obedience of your confession to the
Good News of Christ and for the generosity of your contribution to them
and to all, \bibverse{14} while they themselves also, with supplication
on your behalf, yearn for you by reason of the exceeding grace of God in
you. \bibverse{15} Now thanks be to God for his unspeakable gift!
\hypertarget{section-9}{%
\section{10}\label{section-9}}
\bibverse{1} Now I Paul, myself, entreat you by the humility and
gentleness of Christ, I who in your presence am lowly among you, but
being absent am bold toward you. \bibverse{2} Yes, I beg you that I may
not, when present, show courage with the confidence with which I intend
to be bold against some, who consider us to be walking according to the
flesh. \bibverse{3} For though we walk in the flesh, we don't wage war
according to the flesh; \bibverse{4} for the weapons of our warfare are
not of the flesh, but mighty before God to the throwing down of
strongholds, \bibverse{5} throwing down imaginations and every high
thing that is exalted against the knowledge of God and bringing every
thought into captivity to the obedience of Christ, \bibverse{6} and
being in readiness to avenge all disobedience when your obedience is
made full.
\bibverse{7} Do you look at things only as they appear in front of your
face? If anyone trusts in himself that he is Christ's, let him consider
this again with himself, that even as he is Christ's, so we also are
Christ's. \bibverse{8} For even if I boast somewhat abundantly
concerning our authority, which the Lord gave for building you up and
not for casting you down, I will not be ashamed, \bibverse{9} that I may
not seem as if I desire to terrify you by my letters. \bibverse{10} For,
``His letters'', they say, ``are weighty and strong, but his bodily
presence is weak, and his speech is despised.'' \bibverse{11} Let such a
person consider this, that what we are in word by letters when we are
absent, such are we also in deed when we are present.
\bibverse{12} For we are not bold to number or compare ourselves with
some of those who commend themselves. But they themselves, measuring
themselves by themselves, and comparing themselves with themselves, are
without understanding. \bibverse{13} But we will not boast beyond proper
limits, but within the boundaries with which God appointed to us, which
reach even to you. \bibverse{14} For we don't stretch ourselves too
much, as though we didn't reach to you. For we came even as far as to
you with the Good News of Christ, \bibverse{15} not boasting beyond
proper limits in other men's labors, but having hope that as your faith
grows, we will be abundantly enlarged by you in our sphere of influence,
\bibverse{16} so as to preach the Good News even to the parts beyond
you, not to boast in what someone else has already done. \bibverse{17}
But ``he who boasts, let him boast in the Lord.''+ 10:17 Jeremiah 9:24
\bibverse{18} For it isn't he who commends himself who is approved, but
whom the Lord commends.
\hypertarget{section-10}{%
\section{11}\label{section-10}}
\bibverse{1} I wish that you would bear with me in a little foolishness,
but indeed you do bear with me. \bibverse{2} For I am jealous over you
with a godly jealousy. For I promised you in marriage to one husband,
that I might present you as a pure virgin to Christ. \bibverse{3} But I
am afraid that somehow, as the serpent deceived Eve in his craftiness,
so your minds might be corrupted from the simplicity that is in Christ.
\bibverse{4} For if he who comes preaches another Jesus whom we didn't
preach, or if you receive a different spirit which you didn't receive,
or a different ``good news'' which you didn't accept, you put up with
that well enough. \bibverse{5} For I reckon that I am not at all behind
the very best apostles. \bibverse{6} But though I am unskilled in
speech, yet I am not unskilled in knowledge. No, in every way we have
been revealed to you in all things.
\bibverse{7} Or did I commit a sin in humbling myself that you might be
exalted, because I preached to you God's Good News free of charge?
\bibverse{8} I robbed other assemblies, taking wages from them that I
might serve you. \bibverse{9} When I was present with you and was in
need, I wasn't a burden on anyone, for the brothers, when they came from
Macedonia, supplied the measure of my need. In everything I kept myself
from being burdensome to you, and I will continue to do so.
\bibverse{10} As the truth of Christ is in me, no one will stop me from
this boasting in the regions of Achaia. \bibverse{11} Why? Because I
don't love you? God knows.
\bibverse{12} But what I do, that I will continue to do, that I may cut
off opportunity from those who desire an opportunity, that in which they
boast, they may be recognized just like us. \bibverse{13} For such men
are false apostles, deceitful workers, masquerading as Christ's
apostles. \bibverse{14} And no wonder, for even Satan masquerades as an
angel of light. \bibverse{15} It is no great thing therefore if his
servants also masquerade as servants of righteousness, whose end will be
according to their works.
\bibverse{16} I say again, let no one think me foolish. But if so, yet
receive me as foolish, that I also may boast a little. \bibverse{17}
That which I speak, I don't speak according to the Lord, but as in
foolishness, in this confidence of boasting. \bibverse{18} Seeing that
many boast after the flesh, I will also boast. \bibverse{19} For you
bear with the foolish gladly, being wise. \bibverse{20} For you bear
with a man if he brings you into bondage, if he devours you, if he takes
you captive, if he exalts himself, or if he strikes you on the face.
\bibverse{21} To my shame, I speak as though we had been weak. Yet in
whatever way anyone is bold (I speak in foolishness), I am bold also.
\bibverse{22} Are they Hebrews? So am I. Are they Israelites? So am I.
Are they the offspring+ 11:22 or, seed of Abraham? So am I.
\bibverse{23} Are they servants of Christ? (I speak as one beside
himself.) I am more so: in labors more abundantly, in prisons more
abundantly, in stripes above measure, and in deaths often. \bibverse{24}
Five times I received forty stripes minus one from the Jews.
\bibverse{25} Three times I was beaten with rods. Once I was stoned.
Three times I suffered shipwreck. I have been a night and a day in the
deep. \bibverse{26} I have been in travels often, perils of rivers,
perils of robbers, perils from my countrymen, perils from the Gentiles,
perils in the city, perils in the wilderness, perils in the sea, perils
among false brothers; \bibverse{27} in labor and travail, in watchings
often, in hunger and thirst, in fastings often, and in cold and
nakedness.
\bibverse{28} Besides those things that are outside, there is that which
presses on me daily: anxiety for all the assemblies. \bibverse{29} Who
is weak, and I am not weak? Who is caused to stumble, and I don't burn
with indignation?
\bibverse{30} If I must boast, I will boast of the things that concern
my weakness. \bibverse{31} The God and Father of the Lord Jesus Christ,
he who is blessed forever more, knows that I don't lie. \bibverse{32} In
Damascus the governor under King Aretas guarded the Damascenes' city,
desiring to arrest me. \bibverse{33} I was let down in a basket through
a window by the wall, and escaped his hands.
\hypertarget{section-11}{%
\section{12}\label{section-11}}
\bibverse{1} It is doubtless not profitable for me to boast, but I will
come to visions and revelations of the Lord. \bibverse{2} I know a man
in Christ who was caught up into the third heaven fourteen years
ago---whether in the body, I don't know, or whether out of the body, I
don't know; God knows. \bibverse{3} I know such a man (whether in the
body, or outside of the body, I don't know; God knows), \bibverse{4} how
he was caught up into Paradise and heard unspeakable words, which it is
not lawful for a man to utter. \bibverse{5} On behalf of such a one I
will boast, but on my own behalf I will not boast, except in my
weaknesses. \bibverse{6} For if I would desire to boast, I will not be
foolish; for I will speak the truth. But I refrain, so that no man may
think more of me than that which he sees in me or hears from me.
\bibverse{7} By reason of the exceeding greatness of the revelations,
that I should not be exalted excessively, a thorn in the flesh was given
to me: a messenger of Satan to torment me, that I should not be exalted
excessively. \bibverse{8} Concerning this thing, I begged the Lord three
times that it might depart from me. \bibverse{9} He has said to me, ``My
grace is sufficient for you, for my power is made perfect in weakness.''
Most gladly therefore I will rather glory in my weaknesses, that the
power of Christ may rest on me.
\bibverse{10} Therefore I take pleasure in weaknesses, in injuries, in
necessities, in persecutions, and in distresses, for Christ's sake. For
when I am weak, then am I strong. \bibverse{11} I have become foolish in
boasting. You compelled me, for I ought to have been commended by you,
for I am in no way inferior to the very best apostles, though I am
nothing. \bibverse{12} Truly the signs of an apostle were worked among
you in all perseverance, in signs and wonders and mighty works.
\bibverse{13} For what is there in which you were made inferior to the
rest of the assemblies, unless it is that I myself was not a burden to
you? Forgive me this wrong!
\bibverse{14} Behold, this is the third time I am ready to come to you,
and I will not be a burden to you; for I seek not your possessions, but
you. For the children ought not to save up for the parents, but the
parents for the children. \bibverse{15} I will most gladly spend and be
spent for your souls. If I love you more abundantly, am I loved the
less? \bibverse{16} Even so, I myself didn't burden you. But you might
say that being crafty, I caught you with deception. \bibverse{17} Did I
take advantage of you by anyone of those whom I have sent to you?
\bibverse{18} I exhorted Titus, and I sent the brother with him. Did
Titus take any advantage of you? Didn't we walk in the same spirit?
Didn't we walk in the same steps?
\bibverse{19} Again, do you think that we are excusing ourselves to you?
In the sight of God we speak in Christ. But all things, beloved, are for
your edifying. \bibverse{20} For I am afraid that perhaps when I come, I
might find you not the way I want to, and that I might be found by you
as you don't desire, that perhaps there would be strife, jealousy,
outbursts of anger, factions, slander, whisperings, proud thoughts, or
riots, \bibverse{21} that again when I come my God would humble me
before you, and I would mourn for many of those who have sinned before
now, and not repented of the uncleanness, sexual immorality, and
lustfulness which they committed.
\hypertarget{section-12}{%
\section{13}\label{section-12}}
\bibverse{1} This is the third time I am coming to you. ``At the mouth
of two or three witnesses shall every word be established.''+ 13:1
Deuteronomy 19:15 \bibverse{2} I have warned previously, and I warn
again, as when I was present the second time, so now, being absent, I
write to those who have sinned before now and to all the rest that if I
come again, I will not spare, \bibverse{3} seeing that you seek a proof
of Christ who speaks in me who is not weak, but is powerful in you.
\bibverse{4} For he was crucified through weakness, yet he lives through
the power of God. For we also are weak in him, but we will live with him
through the power of God toward you.
\bibverse{5} Examine your own selves, whether you are in the faith. Test
your own selves. Or don't you know about your own selves, that Jesus
Christ is in you?---unless indeed you are disqualified. \bibverse{6} But
I hope that you will know that we aren't disqualified.
\bibverse{7} Now I pray to God that you do no evil; not that we may
appear approved, but that you may do that which is honorable, though we
may seem to have failed. \bibverse{8} For we can do nothing against the
truth, but for the truth. \bibverse{9} For we rejoice when we are weak
and you are strong. We also pray for this: your becoming perfect.
\bibverse{10} For this cause I write these things while absent, that I
may not deal sharply when present, according to the authority which the
Lord gave me for building up and not for tearing down.
\bibverse{11} Finally, brothers, rejoice! Be perfected. Be comforted. Be
of the same mind. Live in peace, and the God of love and peace will be
with you. \bibverse{12} Greet one another with a holy kiss.
\bibverse{13} All the saints greet you.
\bibverse{14} The grace of the Lord Jesus Christ, God's love, and the
fellowship of the Holy Spirit be with you all. Amen.
| {
"alphanum_fraction": 0.7676321565,
"avg_line_length": 57.7574334898,
"ext": "tex",
"hexsha": "8187e707fa70d725409cb2fe914d757175893cff",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "bibliadelpueblo/BibliaLibre",
"max_forks_repo_path": "Bibles/English.WorldEnglishBibleUS/out/tex/64-2 Corinthians.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "bibliadelpueblo/BibliaLibre",
"max_issues_repo_path": "Bibles/English.WorldEnglishBibleUS/out/tex/64-2 Corinthians.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "bibliadelpueblo/BibliaLibre",
"max_stars_repo_path": "Bibles/English.WorldEnglishBibleUS/out/tex/64-2 Corinthians.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 9854,
"size": 36907
} |
%-------------------------
% Resume in Latex
% Author : Xinyue Ou
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[pdftex]{hyperref}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.375in}
\addtolength{\evensidemargin}{-0.375in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
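% \resumeItem{label}{text}: bulleted entry with a bold label followed by plain text
% \resumeSimpleItem{text}: bulleted entry with plain text only
% \resumeSubheading{title}{location}{subtitle}{dates}: two-line entry (e.g., employer/school and role/degree)
% \resumeProjectHeading{title}{dates}: one-line project heading
% \resumeSimpleProjectHeading{title}{dates}{description}: project heading followed by a one-item description list
% \resumeSubHeadingListStart/End and \resumeItemListStart/End open and close the surrounding itemize environments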
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{#2 \vspace{-2pt}}
}
}
\newcommand{\resumeSimpleItem}[1]{
\item\small{
{#1 \vspace{-2pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeProjectHeading}[2]{
\vspace{-3pt}\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-1pt}
}
\newcommand{\resumeSimpleProjectHeading}[3]{
\vspace{-3pt}\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-1pt}
{\begin{itemize}[leftmargin=*]
\item\small{#3}
\end{itemize}}\vspace{-5pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{\Large Xinyue Ou} & Email : xinyue.ou@yahoo.com \\
& Mobile : +1 858-405-4803\\
\end{tabular*}
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{University of California, San Diego}{La Jolla, CA}
{Master of Science in Computer Science}{Sept 2017 -- Mar 2019}
\resumeItemListStart
\resumeItem{Courses: }{Operating Systems, Security, Database, Networked Systems, Neural Networks, Parallel Computing.}
\resumeItemListEnd
\resumeSubheading
{Shanghai Jiao Tong University}{Shanghai, China}
{Bachelor of Science in Electrical and Computer Engineering}{Sept 2013 -- Aug 2017}
\resumeSubHeadingListEnd
%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
\resumeSubHeadingListStart
\resumeItem{Languages}{: C/C++, Go, Java, Python, OCaml, Scala, JavaScript, SQL. }\vspace{-5pt}
\resumeItem{Skills}{: Distributed Systems, Multiprogramming, NoSQL, Machine Learning}\vspace{-5pt}
\resumeItem{Tools}{: Git, TensorFlow, PyTorch, Docker, Gradle, Maven, Django, ProtoBuf, ANTLR, Netty, Flume, gRPC. }
\resumeSubHeadingListEnd
%-----------EXPERIENCE-----------------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
{Google, Inc.}{Mountain View, CA}
{Software Engineer Intern @ Display Ads Infrastructure}{June 2018 -- Sept 2018}
\resumeItemListStart
\resumeItem{}{Developed a data pipeline in \textbf{Flume} to generate user profile digests for ads recommendation, fixing 75\% of the missing data generation and traffic tracking.}
\resumeItem{}{Deployed 10+ jobs through \textbf{Borg} and executed a phased rollout.}
\resumeItem{}{Implemented a web UI using \textbf{Django} to monitor the results of a petabyte-scale data service.}
\resumeItemListEnd
\resumeSubheading
{Intel Asia-Pacific R\&D Center}{Shanghai, China}
{Software Engineer Intern @ BigDL Data Analytics}{Feb 2017 -- July 2017}
\resumeItemListStart
\resumeItem{}
{Used Bash and Python to efficiently port Neural Network modules from Scala to Python.}
\resumeItem{}
{Designed a memory-sharing mechanism for buffers in Neural Network modules that saves 50\% of memory usage.}
\resumeItem{}{Built a graph converter in Scala that converts machine learning models into graphs to accelerate the training of NN modules.}
\resumeItemListEnd
% \resumeSubheading
% {Ailibi Medical Technology Co.}{Shanghai, China}
% {Software Engineer Intern}{Dec 2014 -- Feb 2015}
% \resumeItemListStart
% \resumeItem{}
% {Built the company's official website using HTML/CSS/JQuery.}
% \resumeItem{}
% {Documenting user requirements and specifications for the company's mobile app.}
% \resumeItemListEnd
\resumeSubHeadingListEnd
%-----------PROJECTS-----------------
\section{Projects}
\resumeSubHeadingListStart
\resumeProjectHeading{Key Value Storage Server - Go}{Feb 2019 -- Mar 2019}
\resumeItemListStart
\resumeSimpleItem{Designed a strongly consistent key-value storage system with linearizability and idempotent transactions that can tolerate multiple server failures and network partitions.}
\resumeSimpleItem{Implemented a full RAFT protocol with leader election, log replication, log compaction and snapshots.}
\resumeSimpleItem{Gained extensive exposure to Go's multithreading semantics.}
\resumeItemListEnd
\resumeProjectHeading{Vulnerability Excavation of Secure DDS Systems - Ocaml, Python, Docker}{Oct 2018 -- Dec 2018}
\resumeItemListStart
\resumeSimpleItem{Developed a passive network reconnaissance technique that extracts the underlying data object topology of Secure DDS systems, a widely used protocol for IoT devices. Tested the reconnaissance process in a Docker containerized environment within a software-defined network.}
\resumeSimpleItem{Built system validation and penetration testing tools for DDS access control using OCaml and Imandra, a formal verification tool.}
\resumeItemListEnd
% \resumeProjectHeading
% {Tessaract: Triton Dropbox Service (Team Leader) - Java, gRPC}{May 2018 -- June 2018}
% \resumeItemListStart
% \resumeSimpleItem{Implemnted simplified RAFT consensus protocol for server cluster leader election and log replication.}
% \resumeSimpleItem{Designed an achitecture that separates storage server and client-handling server.}
% \resumeSimpleItem{Developed server-client logics with gRPC and ProtoBuffer.}
%\resumeItemListEnd
% \resumeProjectHeading
% {RNN Music Generator - Python}{Feb 2018 -- Mar 2018}
% {Implemented a char-based generative LSTM model using PyTorch and trained it to “write” songs.}
% \resumeProjectHeading
% {XQuery Processor (Team Leader) - Java, ANTLR}{Feb 2018 -- Mar 2018}
% \resumeItemListStart
% \resumeSimpleItem{Developed XQuery compiler with ANTLR and Java JDOM2 to process XQuery and generate output.}
% \resumeSimpleItem{Optimized join operator using Union Find to cluster joins and table size to determine join order.}
% \resumeSimpleItem{Used gradle to set up the project and manage package dependency.}
% \resumeItemListEnd
\resumeProjectHeading
{Multiprogramming Support for Nachos Kernel - Java}{Oct 2017 -- Nov 2017}
\resumeItemListStart
\resumeSimpleItem{Designed a virtual memory management system that enables demand paging, lazy paging and page swapping.}
\resumeSimpleItem{Managed multithread programming using mutex, semaphore and conditional variable.}
\resumeSimpleItem{Implemented file-related and process-related system calls for Nachos kernel.}
\resumeItemListEnd
% \resumeProjectHeading
% {Passive Entry Passive Start Smart Key System - Java, Android}{Jun 2017 -- Aug 2017}
% { Built an Android App for motor vehicle key control based on Bluetooth Low Energy protocol. Developed with FastBLE framework and implemented noise filtering algorithm.}
% \resumeProjectHeading
% {Optimization of CT Image Reconstruction - MatLab}{Mar 2016 -- May 2016}
% {Used CGLS method to reconstruct medical image and optimized the computation speed by profiling. Built a GUI in MatLab which can inspect each slice in any given direction and the 3D image. }
% \resumeProjectHeading
% { Sign Language Translator Based on PIC32 - C}{Jun 2016 -- Aug 2016}
% {Constructed a sign language translator based on hand gestures captured by flex sensors and speed sensors. Developed a succinct data structure that makes use of small storage space of PIC32 to store signals and convert the matching to voice via VS1003 module}
% \resumeProjectHeading
% {Foreground Extraction Based on Optical Flow - C++}{Sep 2014 -- Mar 2015}
% {Used OpenCV library to compute intensive optical flow fields for each frame and discretized the vector fields.Tracked the foreground object based on the largest connected component, }
% \resumeSubHeadingListEnd
%
%-----------More, Other Projects-----------------
%\section{Side Projects}
% \resumeSubHeadingListStart
\resumeSimpleProjectHeading{Triton Router - C}{Nov 2018 -- Dec 2018}{
Implemented a simple router that supports ARP, ICMP and IP.
}
\resumeSimpleProjectHeading{Sliding Window Protocol for Flow Control - C}{Oct 2019}{
Implemented an efficient Go-back-N sliding window protocol with CRC error detection.
}
% \resumeSimpleProjectHeading
% {NewChain: A BitCoin Immitator - Java, Netty}{July 2018 -- Aug 2018}
% {Implemented a bitcoin-like ledger service using Netty framework.}
\resumeSimpleProjectHeading
{UCSD Class Schedule Calendar Generator - Javascript}{Feb 2018}
{Designed a chrome extension that can generate a calendar file by parsing the web page of UCSD WebReg. }
\resumeSimpleProjectHeading{Gossip Membership Protocol - C++}{Nov 2017 -- Dec 2017}
{Implemented a gossip membership protocol in an emulated distributed system.}
\resumeSubHeadingListEnd
%-------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.7030518629,
"avg_line_length": 42.728,
"ext": "tex",
"hexsha": "cfefbb95cb989c463f5c63711781df18bca3285e",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2020-01-14T18:28:00.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-08-15T05:28:52.000Z",
"max_forks_repo_head_hexsha": "cab1c6d0c563096dcb5dd852e361d7ef8fc3bf3e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "plrectco/Resume",
"max_forks_repo_path": "Resume_XinyueOu.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cab1c6d0c563096dcb5dd852e361d7ef8fc3bf3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "plrectco/Resume",
"max_issues_repo_path": "Resume_XinyueOu.tex",
"max_line_length": 294,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "cab1c6d0c563096dcb5dd852e361d7ef8fc3bf3e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "plrectco/Resume",
"max_stars_repo_path": "Resume_XinyueOu.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-05T07:08:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-08-15T06:52:18.000Z",
"num_tokens": 2829,
"size": 10682
} |
\chapter{Introduction}
\label{chapter:Introduction}
The discovery of the Higgs boson in July 2012 by the ATLAS and CMS experiments completes the Standard Model of particle physics.
Nonetheless, the description of the Universe with only the Standard Model (SM) is known to be incomplete.
The difficulty to model gravity in the same theoretical framework, the hierarchy problem, or the existence of Dark Matter are some of the many aspects of Nature that the SM cannot explain.
This Thesis presents a search for new phenomena in $\pp$ collisions at $\sqrt{s}=\unit[8]{TeV}$ recorded with the ATLAS detector at the LHC collider.
The final state under investigation is defined by the presence of a very energetic jet, large missing transverse energy, a maximum of three reconstructed jets, and no reconstructed leptons, leading to a monojet-like configuration.
The monojet final state constitutes a very clean and distinctive signature for new physics processes.
After the discovery of the Higgs boson and the constraints on the masses of first and second generation squarks and gluinos up to the TeV scale, much attention has turned to searches for third generation squarks.
These searches are motivated by naturalness arguments, which point to relatively light stops and sbottoms, and therefore to the possibility of producing them at the LHC.
The monojet analysis is interpreted in terms of pair production of stops and sbottoms, and in terms of inclusive searches for pair production of squarks, and gluinos.
In particular, this final state has large sensitivity to supersymmetric models involving a very compressed mass spectrum of the superpartners in the final state (also known as ``compressed scenarios'').
This study is performed to complement existing searches for squarks and gluinos that are not sensitive to such compressed spectra.
Monojet final states have been used traditionally to search for large extra dimensions and the production of Dark Matter (DM).
In this context, limits on the parameters of models involving the direct production of Kaluza-Klein towers of gravitons, neutralinos, or light gravitinos in gauge mediated supersymmetry breaking scenarios, are also considered.
This Thesis is organized as follows.
Chapter~\ref{chapter:StandardModel} provides an introduction to the SM theory, the QCD phenomenology at hadron colliders and the different Monte Carlo simulators used in the analysis.
Different scenarios for physics beyond the SM are described in Chapter~\ref{chapter:BSM}.
Chapter~\ref{chapter:StatisticalModel} introduces the statistical model and the hypothesis testing that is used in the analysis.
The LHC collider and the ATLAS experiment are described in Chapter~\ref{chapter:ATLASDetector}.
Chapter~\ref{chapter:ReconstructionOfObjects} details the reconstruction of the different physics objects in ATLAS, and Chapter~\ref{chapter:MonojetAnalysis} describes the event selection, the background determination and the systematic uncertainties in full detail.
The final results and their interpretations in terms of the different models are discussed from Chapters~\ref{chapter:Interpretations} to \ref{chapter:ADDGravitonProduction}.
Finally, Chapter~\ref{chapter:conclusions} is devoted to conclusions.
The document is complemented with several appendices.
The results presented in this thesis have led to the following publications by the ATLAS Collaboration:
\begin{itemize}
\item \emph{Search for pair-produced top squarks decaying into charm quarks and the lightest neutralinos using $\unit[20.3]{fb^{-1}}$ of $\pp$ collisions at $\sqrt{s} = \unit[8]{TeV}$ with the ATLAS detector at the LHC}, ATLAS-CONF-2013-068, \url{http://cds.cern.ch/record/1562880/}.
\item \emph{Search for pair-produced third-generation squarks decaying via charm quarks or in compressed supersymmetric scenarios in $pp$ collisions at $\sqrt{s}=8~$TeV with the ATLAS detector}, Phys. Rev. D90.052008, \href{http://www.arXiv.org/abs/arXiv:1407.0608/}{arXiv:1407.0608 [hep-ex]}.
\end{itemize}
The monojet results have also contributed to the summary notes of the searches for third generation squarks and the searches for inclusive squarks and gluinos in Run~I, which were not yet public at the time this Thesis was printed.
Furthermore, the interpretations of the monojet analysis in terms of large extra dimensions and the production of dark matter have significantly improved the previous ATLAS results, and are used to cross check the results from a new dedicated analysis in preparation.
| {
"alphanum_fraction": 0.8111918925,
"avg_line_length": 108.0714285714,
"ext": "tex",
"hexsha": "fb1dd4e004eb5e1b44e768435173d443a46a2226",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b4582c8c1c5858878dfdb8e69986a55c1aeb9e3e",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "rogercaminal/PhDThesis",
"max_forks_repo_path": "Introduction/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b4582c8c1c5858878dfdb8e69986a55c1aeb9e3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "rogercaminal/PhDThesis",
"max_issues_repo_path": "Introduction/introduction.tex",
"max_line_length": 293,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b4582c8c1c5858878dfdb8e69986a55c1aeb9e3e",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "rogercaminal/PhDThesis",
"max_stars_repo_path": "Introduction/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 985,
"size": 4539
} |
%!TEX root = install.tex
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Copyright (c) 2012-2020 by The University of Queensland
% http://www.uq.edu.au
%
% Primary Business: Queensland, Australia
% Licensed under the Apache License, version 2.0
% http://www.apache.org/licenses/LICENSE-2.0
%
% Development until 2012 by Earth Systems Science Computational Center (ESSCC)
% Development 2012-2013 by School of Earth Sciences
% Development from 2014 by Centre for Geoscience Computing (GeoComp)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Notes about compilers
\chapter{Installing from Source}\label{chap:source}
This chapter describes installing \escript from source on Unix/POSIX-like
systems (including MacOSX) and Windows 10.
\section{Parallel Technologies}\label{sec:par}
It is likely that the computer you run \escript on will have more than one processor core.
Escript can make use of multiple cores [in order to solve problems more quickly] if it is told to do so,
but this functionality must be enabled at compile time.
Section~\ref{sec:needpar} gives some rough guidelines to help you determine what you need.
There are two technologies which \escript can employ here.
\begin{itemize}
\item OpenMP -- the more efficient of the two [thread level parallelism].
\item MPI -- Uses multiple processes (less efficient), needs less help from
the compiler (not supported on Windows).
\end{itemize}
Escript is primarily tested on recent versions of the GNU and Intel suites
(``g++'' / ``icpc''), and Microsoft Visual C++ (MSVC). However, it also passes
our tests when compiled using ``clang++''. Escript now requires compiler
support for some features of the C++11 standard. See
Appendix~\ref{app:cxxfeatures} for a list.
Our current test compilers include:
\begin{itemize}
\item g++ 8
\item clang++ 7
\item intel icpc v17
\item MSVC 2017 or 2019
\end{itemize}
Note that:
\begin{itemize}
\item OpenMP will not function correctly for g++ $\leq$ 4.2.1 (and is not currently supported by clang).
\item icpc v11 has a subtle bug involving OpenMP and C++ exception handling, so this combination should not be used.
\end{itemize}
\subsection{What parallel technology do I need?}\label{sec:needpar} If you are
using any version of Linux released in the past few years, then your system
compiler will support \openmp with no extra work (and give better performance);
so you should use it. MSVC 2017 and 2019 also have \openmp support on Windows
(\openmp 2.0). You will not need MPI unless your computer is some form of
cluster. MPI is not recommended on Windows as it will interfere with Jupyter.
If you are using BSD or MacOSX and you are just experimenting with \escript, then performance is
probably not a major issue for you at the moment so you don't need to use either \openmp or MPI.
This also applies if you write and polish your scripts on your computer and then send them to a cluster to execute.
If in the future you find escript useful and your scripts take significant time to run, then you may want to recompile
\escript with more options.
Note that even if your version of \escript has support for \openmp or MPI, you will still need to tell the system to
use it when you run your scripts.
If you are using the \texttt{run-escript} launcher, then this is controlled by
the \texttt{-t}, \texttt{-p}, and \texttt{-n} flags.
If not, then consult the documentation for your MPI libraries (or the compiler documentation in the case of OpenMP
\footnote{It may be enough to set the \texttt{OMP\_NUM\_THREADS} environment variable.}).
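For example, a script could be launched as follows (the script name \texttt{myscript.py} and the core counts are illustrative; see the User's Guide for the precise flag semantics):
\begin{shellCode}
# OpenMP only: ask the launcher for 4 threads
run-escript -t4 myscript.py

# MPI + OpenMP: 2 nodes, 4 processes per node, 2 threads per process
run-escript -n2 -p4 -t2 myscript.py

# without the launcher, setting this variable may be sufficient for OpenMP
# (running without the launcher may need additional environment setup)
export OMP_NUM_THREADS=4
\end{shellCode}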
If you are using MacOSX, then see the next section, if not, then skip to Section~\ref{sec:build}.
\section{MacOS}
This release of \escript has only been tested on OSX 10.13.
For this section we assume you are using either \texttt{homebrew} or \texttt{MacPorts} as a package
manager\footnote{Note that package managers will make changes to your computer based on programs configured by other people from
various places around the internet. It is important to satisfy yourself as to the security of those systems.}.
You can of course install prerequisite software in other ways.
For example, we have had \emph{some} success changing the default
compilers used by those systems. However, this is more complicated and we do not provide a guide here.
\noindent Both of those systems require the XCode command line tools to be installed\footnote{As of OSX10.9, the
command \texttt{xcode-select --install} will allow you to download and install the commandline tools.}.
\section{Building}\label{sec:build}
\esfinley is built using \textit{SCons}. To simplify the installation process, we have prepared \textit{SCons} \textit{_options.py} files for a number of common systems\footnote{These are correct at the time of writing but later versions of those systems may require tweaks.
Also, these systems represent a cross section of possible platforms rather than meaning those systems get particular support.}.
The options files are located in the \textit{scons/templates} directory. We suggest that the file most relevant to your OS
be copied from the templates directory to the scons directory and renamed to the form XXXX_options.py where XXXX
should be replaced with your computer's (host-)name.
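For instance, on a machine named \texttt{toybox} that follows the Debian instructions below, this could be done as follows (the template file name is illustrative; pick the one matching your system):
\begin{shellCode}
cp scons/templates/buster_py3_options.py scons/toybox_options.py
\end{shellCode}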
If your particular system is not in the list below, or if you want a more customised
build,
see Section~\ref{sec:othersrc} for instructions.
\begin{itemize}
\item Debian - \ref{sec:debsrc}
\item Ubuntu - \ref{sec:ubsrc}
\item OpenSuse - \ref{sec:susesrc}
\item Centos - \ref{sec:centossrc}
\item Fedora - \ref{sec:fedorasrc}
\item MacOS (macports) - \ref{sec:macportsrc}
\item MacOS (homebrew) - \ref{sec:homebrewsrc}
\item FreeBSD - \ref{sec:freebsdsrc}
\item Windows - \ref{sec:windowssrc}
\end{itemize}
Once these are done proceed to Section~\ref{sec:cleanup} for cleanup steps.
\noindent All of these instructions assume that you have obtained the \escript source (and uncompressed it if necessary).
\subsection{Debian}\label{sec:debsrc}
\noindent These instructions were prepared on Debian 10 \textit{Buster}.
\noindent As a preliminary step, you should install the dependencies that \esfinley requires from the repository.
If you intend to use Python 2.7, then you should install the following
\begin{shellCode}
sudo apt-get install python-dev python-numpy python-pyproj python-gdal
sudo apt-get install python-sympy python-matplotlib python-scipy
sudo apt-get install libboost-python-dev libboost-random-dev
sudo apt-get install libnetcdf-dev libnetcdf-cxx-legacy-dev libnetcdf-c++4-dev
sudo apt-get install scons lsb-release libsuitesparse-dev gmsh
\end{shellCode}
\noindent If you intend to use Python 3.0+, then you should install the following
\begin{shellCode}
sudo apt-get install python3-dev python3-numpy python3-pyproj python3-gdal
sudo apt-get install python3-sympy python3-matplotlib python3-scipy
sudo apt-get install libboost-python-dev libboost-random-dev
sudo apt-get install libnetcdf-dev libnetcdf-cxx-legacy-dev libnetcdf-c++4-dev
sudo apt-get install libsuitesparse-dev scons lsb-release gmsh
\end{shellCode}
\noindent In the source directory execute the following (substitute \textit{buster_py2} or \textit{buster_py3} for XXXX):
\begin{shellCode}
scons -j1 options_file=scons/templates/XXXX_options.py
\end{shellCode}
\noindent If you wish to test your build, you can use the following:
\begin{shellCode}
scons -j1 py_tests options_file=scons/templates/XXXX_options.py
\end{shellCode}
% \begin{optionalstep}
% If for some reason, you wish to rebuild the documentation, you would also need the following:
% \begin{shellCode}
% sudo aptitude install python-sphinx doxygen python-docutils texlive
% sudo aptitude install zip texlive-latex-extra latex-xcolor
% \end{shellCode}
% \end{optionalstep}
\subsection{Ubuntu}\label{sec:ubsrc}
These instructions were prepared on Ubuntu 20.04 LTS \textit{Focal Fossa}. \newline
\noindent As a preliminary step, you should install the dependencies that \esfinley requires from the repository.
% If you intend to use Python 2.7, then you should install the following packages:
% \begin{shellCode}
% sudo apt-get install python-dev python-numpy python-pyproj python-gdal
% sudo apt-get install python-sympy python-matplotlib python-scipy
% sudo apt-get install libnetcdf-cxx-legacy-dev libnetcdf-c++4-dev libnetcdf-dev
% sudo apt-get install libboost-random-dev libboost-python-dev libboost-iostreams-dev
% sudo apt-get install scons lsb-release libsuitesparse-dev
% \end{shellCode}
For Python 3.0+, you should instead install the following packages:
\begin{shellCode}
sudo apt-get install python3-dev python3-numpy python3-pyproj python3-gdal
sudo apt-get install python3-sympy python3-matplotlib python3-scipy
sudo apt-get install libnetcdf-cxx-legacy-dev libnetcdf-c++4-dev libnetcdf-dev
sudo apt-get install libboost-random-dev libboost-python-dev libboost-iostreams-dev
sudo apt-get install scons lsb-release libsuitesparse-dev
\end{shellCode}
% \begin{optionalstep}
% If for some reason, you wish to rebuild the documentation, you would also need the following:
% \begin{shellCode}
% sudo aptitude install python-sphinx doxygen python-docutils texlive
% sudo aptitude install zip texlive-latex-extra latex-xcolor
% \end{shellCode}
% \end{optionalstep}
% \noindent Then navigate to the source directory and execute the following (substitute \textit{focus_py2} or \textit{focus_py3} as appropriate for XXXX):
% \begin{shellCode}
% scons -j1 options_file=scons/templates/XXXX_options.py
% \end{shellCode}
\noindent Then navigate to the source directory and execute the following
\begin{shellCode}
scons -j1 options_file=scons/templates/focus_options.py
\end{shellCode}
% \noindent If you wish to test your build, you can use the following:
% \begin{shellCode}
% scons -j1 py_tests options_file=scons/templates/XXXX_options.py
% \end{shellCode}
\subsection{OpenSuse}\label{sec:susesrc}
These instructions were prepared using OpenSUSE Leap 15.2. \newline
\noindent As a preliminary step, you should install the dependencies that \esfinley requires from the repository.
\noindent If you intend to use Python 2.7, then you should install the following packages
\begin{shellCode}
sudo zypper in python-devel python2-numpy python-gdal
sudo zypper in python2-scipy python2-sympy python2-matplotlib
sudo zypper in gcc gcc-c++ scons netcdf-devel libnetcdf_c++-devel
sudo zypper in libboost_python-py2_7-1_66_0-devel libboost_numpy-py2_7-1_66_0-devel
sudo zypper in libboost_iostreams1_66_0-devel suitesparse-devel
\end{shellCode}
\noindent If you intend to use Python 3.0, then you should instead install the following packages
\begin{shellCode}
sudo zypper in python3-devel python3-numpy python3-GDAL
sudo zypper in python3-scipy python3-sympy python3-matplotlib
sudo zypper in gcc gcc-c++ scons netcdf-devel libnetcdf_c++-devel
sudo zypper in libboost_python-py3-1_66_0-devel libboost_numpy-py3-1_66_0-devel
sudo zypper in libboost_random1_66_0-devel libboost_iostreams1_66_0-devel
sudo zypper in suitesparse-devel
\end{shellCode}
\noindent Now to build escript itself.
\noindent In the escript source directory execute the following (substitute \textit{opensuse_py2} or \textit{opensuse_py3} as appropriate for XXXX):
\begin{shellCode}
scons -j1 options_file=scons/templates/XXXX_options.py
\end{shellCode}
\noindent If you wish to test your build, you can use the following:
\begin{shellCode}
scons -j1 py_tests options_file=scons/templates/XXXX_options.py
\end{shellCode}
\noindent Now go to Section~\ref{sec:cleanup} for cleanup.
\subsection{CentOS}\label{sec:centossrc}
It is possible to install \escript on both CentOS releases $7$ and $8$. We include separate instructions for each of these CentOS releases in this section.
\subsubsection{CentOS release $7$}
The core of escript works; however, some functionality is not available because the default packages for some dependencies in CentOS are too old.
At present, it is not possible to compile \escript using Python 3.0+ on CentOS $7$ as Python 3.0+ versions of many of the dependencies are not currently available in any of the CentOS repositories, but this may change in the future.
In this section we only outline how to install a version of \escript that uses Python 2.7.
\noindent First, add the \texttt{EPEL} repository.
\begin{shellCode}
yum install epel-release.noarch
\end{shellCode}
\noindent Install packages:
\begin{shellCode}
yum install netcdf-devel netcdf-cxx-devel gdal-python
yum install python-devel numpy scipy sympy python2-scons
yum install python-matplotlib gcc gcc-c++ boost-devel
yum install boost-python gdal-devel suitesparse-devel pyproj
\end{shellCode}
\noindent Now to build escript itself.
In the escript source directory:
\begin{shellCode}
scons -j1 options_file=scons/templates/centos7_0_options.py
\end{shellCode}
\noindent Now go to Section~\ref{sec:cleanup} for cleanup.
\subsubsection{CentOS release $8$}
The core of escript works in CentOS $8$; however, some functionality is not available because the default packages for some dependencies in CentOS are too old. These instructions are for Python 3.
First, add the EPEL, PowerTools and Okay repositories:
\begin{shellCode}
yum update
yum install epel-release.noarch dnf-plugins-core
yum config-manager --set-enabled PowerTools
rpm -ivh http://repo.okay.com.mx/centos/8/x86_64/release/okay-release-1-3.el8.noarch.rpm
yum update
\end{shellCode}
Now, install the packages:
\begin{shellCode}
yum install python3-devel python3-numpy python3-scipy python3-pyproj
yum install boost-devel boost-python3 boost-python3-devel
yum install gcc gcc-c++ scons
yum install suitesparse suitesparse-devel
\end{shellCode}
Finally, you can compile \escript with the command
\begin{shellCode}
scons -j1 options_file=scons/templates/centos8_0_options.py
\end{shellCode}
\subsection{Fedora}\label{sec:fedorasrc}
These instructions were prepared using Fedora $31$ Workstation.
\noindent To build a version of \escript that uses Python 2.7, install the following packages:
\begin{shellCode}
yum install gcc-c++ scons suitesparse-devel
yum install python2-devel boost-python2-devel
yum install python2-scipy
yum install netcdf-devel netcdf-cxx-devel netcdf-cxx4-devel
\end{shellCode}
\noindent To build a version of \escript that uses Python 3.0+, install the following packages:
\begin{shellCode}
yum install gcc-c++ scons suitesparse-devel
yum install python3-devel boost-python3-devel
yum install python3-scipy python3-pyproj python3-matplotlib
yum install netcdf-devel netcdf-cxx-devel netcdf-cxx4-devel
\end{shellCode}
\noindent In the source directory execute the following (substitute \textit{fedora_py2} or \textit{fedora_py3} for XXXX):
\begin{shellCode}
scons -j1 options_file=scons/templates/XXXX_options.py
\end{shellCode}
\noindent Now go to Section~\ref{sec:cleanup} for cleanup.
\subsection{MacOS 10.10 and later (macports)}\label{sec:macportsrc}
The following will install the capabilities needed for the \texttt{macports_10.10_options.py} file (later versions can use the same options file).
\begin{shellCode}
sudo port install scons
sudo port select --set python python27
sudo port install boost
sudo port install py27-numpy
sudo port install py27-sympy
sudo port select --set py-sympy py27-sympy
sudo port install py27-scipy
sudo port install py27-pyproj
sudo port install py27-gdal
sudo port install netcdf-cxx
sudo port install silo
\end{shellCode}
\begin{shellCode}
scons -j1 options_file=scons/templates/macports_10.10_options.py
\end{shellCode}
\subsection{MacOS 10.13 and later (homebrew)}\label{sec:homebrewsrc}
The following will install the capabilities needed for the \texttt{homebrew_10.13_options.py} file.
\begin{shellCode}
brew install scons
brew install boost-python
brew install netcdf
\end{shellCode}
There do not appear to be formulae for \texttt{sympy} or \texttt{pyproj} so if you wish to use those features, then
you will need to install them separately.
\begin{shellCode}
scons -j1 options_file=scons/templates/homebrew_10.13_options.py
\end{shellCode}
\subsection{FreeBSD}\label{sec:freebsdsrc}
At the time of writing, \texttt{numpy} does not install correctly on FreeBSD.
Since \texttt{numpy} is a critical dependency for \escript, we have been unable to test on FreeBSD.
\begin{comment}
\subsubsection{Release 10.0}
Install the following packages:
\begin{itemize}
\item python
\item scons
\item boost-python-libs
\item bash
\item netcdf
\item silo
\item py27-scipy
\item py27-gdal
\item py27-matplotlib
\item py27-pyproj
\item py27-sympy
\end{itemize}
\noindent Next choose (or create) your options file.
For the setup as above the escript source comes with a prepared file in
\texttt{scons/templates/freebsd10.0_options.py}.
Finally to build escript issue the following in the escript source directory
(replace the options file as required):
\begin{shellCode}
scons -j1 options_file=scons/templates/freebsd10.0_options.py
\end{shellCode}
\emph{Note:} Some packages installed above are built with gcc 4.7. Somewhere
in the toolchain a system-installed gcc library is pulled in which is
incompatible with the one from version 4.7 and would prevent escript from
executing successfully. As explained in the FreeBSD
documentation\footnote{see \url{http://www.freebsd.org/doc/en/articles/custom-gcc/article.html}}
this can be fixed by adding a line to \texttt{/etc/libmap.conf}:
\begin{shellCode}
libgcc_s.so.1 gcc47/libgcc_s.so.1
\end{shellCode}
\end{comment}
\subsection{Windows}\label{sec:windowssrc}
\noindent These instructions were prepared for Microsoft Windows 10.
\noindent Start by installing \escript dependencies.
\begin{itemize}
\item Microsoft Visual Studio
\begin{enumerate}
\item Download the Microsoft Visual Studio Community 2017 (or VS 2019 if
preferred) installer from
\begin{itemize}
\item[] \url{https://visualstudio.microsoft.com}.
\end{itemize}
\item Launch the Visual Studio installer, selecting Individual components:
\begin{itemize}
\item VS 2017: \textbf{VC++ 2017 latest v141 tools} \\
VS 2019: \textbf{MSVC v142 - VS 2019 C++ build tools}
\item \textbf{Windows 10 SDK}
\end{itemize}
\end{enumerate}
\item Anaconda
\begin{enumerate}
\item Download the Python 3.7 64-Bit Graphical Installer for Windows from
\begin{itemize}
\item[] \url{https://anaconda.org/}.
\end{itemize}
\item Launch the Anaconda installer, selecting installation type: \textbf{Just
Me} and destination folder: \newline \verb!C:\Users\%USERNAME%\Anaconda3!.
\end{enumerate}
\end{itemize}
\noindent Next, open Windows Command Prompt (\file{cmd.exe}) and set-up the
\escript dependencies.
\begin{itemize}
\item Conda environment
\begin{enumerate}
\item Create and activate a new environment
\begin{shellCode}
C:\Users\%USERNAME%\Anaconda3\Scripts\activate
conda create --name escript python=3.7
conda deactivate
C:\Users\%USERNAME%\Anaconda3\Scripts\activate escript
\end{shellCode}
\item Install required conda modules
\begin{shellCode}
conda install numpy==1.15.4 matplotlib==2.2.2 sympy==1.1.1
boost gdal git pyproj scons scipy m2-patch mumps gmsh
-c defaults -c conda-forge
\end{shellCode}
\end{enumerate}
\item Vcpkg
\begin{enumerate}
\item Build vcpkg package manager
\begin{shellCode}
cd C:\Users\%USERNAME%
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
bootstrap-vcpkg
\end{shellCode}
\item Install the CppUnit vcpkg package
\begin{shellCode}
vcpkg install cppunit:x64-windows
\end{shellCode}
\end{enumerate}
\end{itemize}
\noindent Once the dependencies are installed and set-up, you can download and
build \escript from source.
\begin{enumerate}
\item Activate the conda environment (if not active).
\begin{shellCode}
C:\Users\%USERNAME%\Anaconda3\Scripts\activate escript
\end{shellCode}
\item Set-up the Command Prompt for the 64-bit MSVC command line build environment.
\begin{shellCode}
"C:\Program Files (x86)\Microsoft Visual Studio\2017\
Community\VC\Auxiliary\Build\vcvars64.bat"
\end{shellCode}
\item Add CppUnit to the Windows System Path.
\begin{shellCode}
set PATH=%PATH%;C:\Users\%USERNAME%\vcpkg\packages\cppunit_x64-windows\bin
\end{shellCode}
\item Download the \escript source code tarball from
\begin{itemize}
\item[] \url{https://launchpad.net/escript-finley}
\end{itemize}
Extract the tarball to \verb!C:\Users\%USERNAME%\escript!
\item Build and install the netCDF-4 C++ library before starting the \escript
build. Download the netCDF-4 C++ v4.3.1 source code tarball from
\begin{itemize}
\item[] \url{https://github.com/Unidata/netcdf-cxx4/archive/v4.3.1.tar.gz}
\end{itemize}
Extract the tarball to \verb!C:\Users\%USERNAME%\escript!
\item Apply the provided patch.
\begin{shellCode}
cd C:\Users\%USERNAME%\escript\netcdf-cxx4-4.3.1
patch -p1 < ..\src\tools\anaconda\Anaconda3\netcdf-cxx4.patch
\end{shellCode}
\item Configure, build, and install netcdf-cxx4.
\begin{shellCode}
mkdir build
cd build
cmake -G "Visual Studio 15 2017 Win64" -DBUILD_SHARED_LIBS=OFF
-DCMAKE_INSTALL_PREFIX="%CONDA_PREFIX%\Library"
-DCMAKE_LIBRARY_PATH="%CONDA_PREFIX%\Library\lib"
-DCMAKE_PREFIX_PATH="%CONDA_PREFIX%\Library"
-DNETCDF_LIB_NAME="netcdf" -DHDF5_LIB_NAME="hdf5" ..
cmake --build . --config Release
cmake --build . --config Release --target install
\end{shellCode}
\item Kick-off the \escript build when the netCDF-4 C++ install is complete.
\begin{shellCode}
cd C:\Users\%USERNAME%\escript\src
scons -j1 build_full options_file=scons/templates/windows_msvc141_options.py
\end{shellCode}
\item Once the build completes successfully, you can validate \escript using
the provided test script.
\begin{shellCode}
python utest.py C:\Users\%USERNAME%\escript\src\build -t8
\end{shellCode}
\end{enumerate}
\subsection{Other Systems / Custom Builds}\label{sec:othersrc}
\escript has support for a number of optional packages.
Some, like \texttt{netcdf}, need to be enabled at compile time, while others, such as \texttt{sympy} and the projection packages
used in \downunder, are checked at run time.
For the second type, you can install them at any time (ensuring that python can find them) and they should work.
For the first type, you need to modify the options file and recompile with scons.
The rest of this section deals with this.
To avoid having to specify the options file each time you run scons, copy an existing \texttt{_options.py} file from the
\texttt{scons/} or \texttt{scons/templates/} directories. Put the file in the \texttt{scons} directory and name
it \textit{yourmachinename}\texttt{_options.py}.\footnote{If the name
has - or other non-alpha characters, they must be replaced with underscores in the filename.}
For example: on a machine named toybox, the file would be \texttt{scons/toybox_options.py}.
Individual lines can be enabled/disabled, by removing or adding \# (the python comment character) to the beginning of the line.
For example, to enable OpenMP, change the line
\begin{verbatim}
#openmp = True
\end{verbatim}
to
\begin{verbatim}
openmp = True
\end{verbatim}
If you are using libraries which are not installed in the standard places (or have different names) you will need to
change the relevant lines.
A common need for this would be using a more recent version of the boost::python library.
You can also change the compiler or the options passed to it by modifying the relevant lines.
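For example, to point the build at a Boost installation outside the standard search paths, lines similar to the following could be enabled (check your chosen template for the exact variable spelling; the paths and library name are illustrative and must match your installation):
\begin{verbatim}
boost_prefix = ['/opt/boost/include', '/opt/boost/lib']
boost_libs = ['boost_python37']
\end{verbatim}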
\subsubsection*{MPI}
If you wish to enable or disable MPI, or if you wish to use a different implementation of MPI, you can use the \texttt{mpi}
configuration variable.
You will also need to ensure that the \texttt{mpi_prefix} and \texttt{mpi_libs} variables are uncommented and set correctly.
To disable MPI use, \verb|mpi = 'none'|.
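A sketch of the relevant lines for an OpenMPI installation might look like this (the flavour string, paths, and library names are illustrative; consult your template file for values valid on your system):
\begin{verbatim}
mpi = 'OPENMPI'
mpi_prefix = '/usr/lib/x86_64-linux-gnu/openmpi'
mpi_libs = ['mpi_cxx', 'mpi']
\end{verbatim}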
\subsubsection{Testing}
As indicated earlier, you can test your build using \texttt{scons py_tests}.
Note, however, that some features like \texttt{netCDF} are optional for using \escript; the tests will report a failure if
they are missing.
\section{Cleaning up}
\label{sec:cleanup}
Once the build (and optional testing) is complete, you can remove everything except:
\begin{itemize}
\item bin
\item esys
\item lib
\item doc
\item CREDITS
\item LICENSE
\item README
\end{itemize}
The last three aren't strictly required for operation.
The \texttt{doc} directory is not required either but does contain examples of escript scripts.
You can run escript using \texttt{\textit{path_to_escript_files}/bin/run-escript}, where \texttt{\textit{path_to_escript_files}} is replaced with the real path.
\begin{optionalstep}
You can add the escript \texttt{bin} directory to your \texttt{PATH} variable.
The launcher will then take care of the rest of the environment.
\end{optionalstep}
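In a bash-style shell this could look like the following (the installation path is illustrative):
\begin{shellCode}
export PATH=/opt/escript/bin:$PATH
run-escript myscript.py
\end{shellCode}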
\section{Optional Extras}
Some other packages which might be useful include:
\begin{itemize}
\item Lapack and UMFPACK --- direct solvers (install the relevant libraries and enable them in the options file).
\item support for the Silo file format (install the relevant libraries and enable them in the options file).
\item VisIt --- visualisation package. It can be used independently, but our \texttt{weipa} library can build a VisIt
plug-in to allow direct visualisation of escript simulations.
\item gmsh --- meshing software used by our \texttt{pycad} library.
\item Mayavi2 --- another visualisation tool.
\end{itemize}
\subsection{Trilinos}
\escript now has some support for Trilinos\footnote{\url{https://trilinos.org/}}
solvers and preconditioners.
The most significant limitation is that the current Trilinos release does not
support block matrices so \escript can only use Trilinos solvers for single
PDEs (i.e. no PDE systems).
If your distribution does not provide Trilinos packages you can build a working
version from source (see Appendix~\ref{app:trilinos}).
\section{Testing \escript}\label{chap:utest}
\escript has extensive testing that can be used to confirm that the program is working correctly. To run the unit testing, compile \escript with the flag \texttt{build_full}. This will build \escript normally and then create a shell script named \texttt{utest.sh}. Once this file has been created, you can run unit testing using the command
\begin{shellCode}
sh utest.sh path_to_build_folder '-tT -nN -pP'
\end{shellCode}
where \texttt{T}, \texttt{N} and \texttt{P} represent the number of threads, nodes and processes to run the testing on. Some of these terms can be omitted. For example, to run the testing in serial, you would run
\begin{shellCode}
sh utest.sh path_to_build_folder '-t1'
\end{shellCode}
Note that a careless selection of these parameters may cause the testing program to skip many of the tests. For example, if you compile \escript with OpenMP enabled but then instruct the testing program to run on a single thread, many of the OpenMP tests will not be run.
| {
"alphanum_fraction": 0.7828905697,
"avg_line_length": 42.3996840442,
"ext": "tex",
"hexsha": "3a3f6a4ef11a48ce66fcfe3806bcad229024c637",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "markendr/esys-escript.github.io",
"max_forks_repo_path": "doc/install/source.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "markendr/esys-escript.github.io",
"max_issues_repo_path": "doc/install/source.tex",
"max_line_length": 340,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "markendr/esys-escript.github.io",
"max_stars_repo_path": "doc/install/source.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7172,
"size": 26839
} |
\subsection{Quick Introduction}
Another package inheriting the namespace package is the replica
management in |saga::replica|. It introduces logical files, which
are namespace entries that have a number of locations (URLs)
attached pointing to identical physical copies of the same file.
The replica package is again a very simple extension of the namespace
package: the class |logical_file| is a namespace |entry| class with
a number of additional methods which allow managing the list of
attached URLs:
\begin{mycode}[label=Managing replica locations]
// open a logical file
saga::url u ("/replica_1");
saga::replica::logical_file lf (u, saga::replica::Create
| saga::replica::ReadWrite);
// Add a replica location, replicate the file
lf.add_location (saga::url ("file://localhost//etc/passwd"));
lf.replicate (saga::url ("file://localhost//tmp/passwd"),
saga::replica::Overwrite);
// list all locations
std::vector <saga::url> replicas = lf.list_locations ();
for ( unsigned int i = 0; i < replicas.size (); i++ )
{
std::cout << "replica: " << replicas[i] << std::endl;
}
// remove the first location
lf.remove_location (replicas[0]);
\end{mycode}
Managing replicas is fairly simple, as the package only covers methods to
list, add, delete, and update replica locations on logical files.
Additionally, logical files and logical directories can have meta
data attached, which are sets of string-based key-value
pairs\footnote{For brevity, the example does not distinguish between scalar and
vector attributes.}:
\begin{mycode}[label=Managing replica meta data]
// open a logical file
saga::url u ("/replica_1");
saga::replica::logical_file lf (u, saga::replica::Create
| saga::replica::ReadWrite);
// get all attributes
std::vector <std::string> keys = lf.list_attributes ();
// print the keys and values
for ( unsigned int i = 0; i < keys.size (); i++ )
{
std::string key = keys[i];
std::string val;
if ( lf.attribute_is_vector (key) )
{
std::vector <std::string> vals = lf.get_vector_attribute (key);
val = vals[0] + " ...";
}
else
{
val = lf.get_attribute (key);
}
std::cout << key << " -> " << val << std::endl;
}
\end{mycode}
The attentive reader will notice that the attribute management
is in accordance with the attribute management part of the SAGA
\LF.
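For completeness, the following sketch shows how meta data could be attached in the first place, using the generic attribute setters of the SAGA \LF{} (the key names and values are purely illustrative):
\begin{mycode}[label=Setting replica meta data]
  // open a logical file
  saga::url u ("/replica_1");
  saga::replica::logical_file lf (u, saga::replica::Create
                                     | saga::replica::ReadWrite);

  // attach a scalar attribute ...
  lf.set_attribute ("experiment", "run_42");

  // ... and a vector attribute
  std::vector <std::string> tags;
  tags.push_back ("raw");
  tags.push_back ("calibrated");
  lf.set_vector_attribute ("tags", tags);
\end{mycode}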
\subsection{Reference}
\begin{mycode}[label=Prototypes: saga::replica]
namespace saga
{
namespace replica
{
namespace metrics
{
char const * const logical_file_modified
= "logical_file.Modified";
char const * const logical_file_deleted
= "logical_file.Deleted";
char const * const logical_directory_created_entry
= "logical_directory.CreatedEntry";
char const * const logical_directory_modified_entry
= "logical_directory.ModifiedEntry";
char const * const logical_directory_deleted_entry
= "logical_directory.DeletedEntry";
}
enum flags
{
Unknown = /* -1, */ saga::name_space::Unknown ,
None = /* 0, */ saga::name_space::None ,
Overwrite = /* 1, */ saga::name_space::Overwrite ,
Recursive = /* 2, */ saga::name_space::Recursive ,
Dereference = /* 4, */ saga::name_space::Dereference ,
Create = /* 8, */ saga::name_space::Create ,
Exclusive = /* 16, */ saga::name_space::Exclusive ,
Lock = /* 32, */ saga::name_space::Lock ,
CreateParents = /* 64, */ saga::name_space::CreateParents ,
// 128, reserved for Truncate
// 256, reserved for Append
Read = 512,
Write = 1024,
      ReadWrite     = 1536,
// 2048, reserved for Binary
};
class logical_file
: public saga::name_space::entry,
public saga::attributes
{
public:
logical_file (session const & s,
saga::url url,
int mode = Read);
logical_file (saga::url url,
int mode = Read);
logical_file (saga::object const & o);
logical_file (void);
~logical_file (void);
logical_file & operator= (saga::object const & o);
void add_location (saga::url url);
void remove_location (saga::url url);
      void update_location (saga::url url_old,
                            saga::url url_new);
std::vector<saga::url>
list_locations (void);
void replicate (saga::url url,
int flags = None);
};
class logical_directory
: public saga::name_space::directory,
public saga::attributes
{
public:
logical_directory (saga::session const & s,
saga::url url,
int mode = ReadWrite);
logical_directory (saga::url url,
int mode = ReadWrite);
logical_directory (saga::object const & o);
logical_directory (void);
~logical_directory (void);
logical_directory & operator=(saga::object const& o);
bool is_file (saga::url url);
std::vector<saga::url>
find (std::string name_pattern,
std::vector <std::string> key_pattern,
int flags = Recursive);
saga::replica::logical_file
open (saga::url url,
int flags = Read);
saga::replica::logical_directory
open_dir (saga::url url,
int flags = None);
};
}
}
\end{mycode}
\subsection{Replica Details}
| {
"alphanum_fraction": 0.5239040782,
"avg_line_length": 37.1988636364,
"ext": "tex",
"hexsha": "799beeb04db0802b952202378b049bae76c39a14",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-04-10T17:23:52.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-17T04:38:38.000Z",
"max_forks_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "saga-project/saga-cpp",
"max_forks_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_replica.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "saga-project/saga-cpp",
"max_issues_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_replica.tex",
"max_line_length": 81,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "7376c0de0529e7d7b80cf08b94ec484c2e56d38e",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "saga-project/saga-cpp",
"max_stars_repo_path": "docs/manuals/programming_guide/tex/saga-programming-guide_replica.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-12T11:05:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-09-15T16:24:14.000Z",
"num_tokens": 1449,
"size": 6547
} |
\chapter{Lit Review}
A second chapter!
\snip{lrchap1}
\snip{lrchap2} | {
"alphanum_fraction": 0.7428571429,
"avg_line_length": 11.6666666667,
"ext": "tex",
"hexsha": "037ef0eeb21f8de5ee9e6f2a01d40e31979f8a4f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "probablytom/thesis_template",
"max_forks_repo_path": "chapters/Lit_Review/chap.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "probablytom/thesis_template",
"max_issues_repo_path": "chapters/Lit_Review/chap.tex",
"max_line_length": 20,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "probablytom/thesis_template",
"max_stars_repo_path": "chapters/Lit_Review/chap.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 26,
"size": 70
} |
\section{Solving random-exist quantified SSAT}
\label{sect:ressat-technique}
Consider a random-exist quantified SSAT formula $\Qf=\random{} X,\exists Y.\pf(X,Y)$.
The satisfying probability of $\Qf$ equals the summation of weight of all SAT minterms over $X$, or, equivalently,
$1$ minus the summation of weight of all UNSAT minterms over $X$.
To identify an assignment $\as$ over $X$ as a SAT or an UNSAT minterm,
it suffices to check whether $\pcf{\pf(X,Y)}{\as}$ is satisfiable or not.
A naive solution to computing the satisfying probability of $\Qf$ is to exhaustively examine all assignments over $X$, classify them as SAT or UNSAT minterms, and aggregate the weight of collected minterms.
The above naive idea can be improved by exploiting the minterm-generalization techniques discussed in~\cref{sect:ressat-generalize}.
For instance, in~\cref{ex:ressat-assign},
$\as=x_1x_2$ is a SAT minterm over $\{x_1,x_2\}$ for $\pf(x_1,x_2,y_1,y_2)=x_1 \land (\lnot x_2 \lor y_1 \lor y_2)$.
Observe that $\pf(x_1,x_2,y_1,y_2)$ is satisfiable under the partial assignment $\as^+=x_1$.
In other words, the SAT minterm $\as$ can be generalized into the SAT cube $\as^+$, which contains two minterms.
Through the generalization analysis, multiple minterms can be collected in a single SAT-solving run,
enhancing the efficiency to enumerate all possible assignments over $X$.
As will be shown in Section~\ref{sect:ressat-evaluation},
the minterm-generalization techniques are essential to the efficiency of the proposed algorithm.
However, the weights of the collected cubes cannot be summed up directly due to the potential overlap between generalized cubes.
This difficulty is overcome by applying weighted model counting,
which aggregates the total weight of the collected cubes correctly, taking the overlap into account.
\begin{algorithm}[p]
\caption{Solving random-exist quantified SSAT formulas}
\label{alg:ressat}
\begin{algorithmic}[1]
\REQUIRE
$\Qf=\random{} X,\exists Y.\pf(X,Y)$ and a run-time limit \timeout
\ENSURE
Lower and upper bounds $(P_L,P_U)$ of $\spb{\Qf}$
\STATE $\select(X) := \top$
\STATE $C_\top := \emptyset$
\STATE $C_\bot := \emptyset$
\WHILE{($\sat{\select}$ \textbf{and} $\texttt{run-time} < \timeout$)}
\STATE $\as := \model{\select}$
\IF{($\sat{\pcf{\pf}{\as}}$)}
\STATE $\as^+ := \texttt{MinimalSatisfying}(\pf,\as)$
\STATE $C_\top := C_\top \cup \{\as^+\}$
\ELSE
\STATE $\as^+ := \texttt{MinimalConflicting}(\pf,\as)$
\STATE $C_\bot := C_\bot \cup \{\as^+\}$
\ENDIF
\STATE $\select := \select \land \lnot\as^+$
\ENDWHILE
\RETURN $(\texttt{ComputeWeight}(C_\top),1-\texttt{ComputeWeight}(C_\bot))$
\end{algorithmic}
\end{algorithm}
The above thoughts give rise to the proposed algorithm in~\cref{alg:ressat} to compute the satisfying probability of $\Qf=\random{} X,\exists Y.\pf(X,Y)$.
The proposed algorithm works as follows.
For now, assume the run-time limit \timeout to be infinity.
The effect of imposing a run-time limit on~\cref{alg:ressat} will be explained in~\cref{sect:ressat-approximate}.
Two SAT solvers are used in~\cref{alg:ressat}.
In addition to the SAT solver that holds the matrix CNF $\pf(X,Y)$,
the other SAT solver $\select(X)$, named the selector in the following,
is initialized as a tautology, i.e., without clauses.
The selector $\select(X)$ is in charge of selecting an assignment $\as$ over $X$.
After $\as$ is chosen, the matrix solver $\pf(X,Y)$ will check whether $\pcf{\pf(X,Y)}{\as}$ is satisfiable or not.
If $\pcf{\pf(X,Y)}{\as}$ is satisfiable,
$\as$ will be generalized into a SAT cube by the subroutine \texttt{MinimalSatisfying};
if $\pcf{\pf(X,Y)}{\as}$ is unsatisfiable,
$\as$ will be generalized into an UNSAT cube by the subroutine \texttt{MinimalConflicting}.
Instead of finding a \textit{minimum} satisfying or conflicting assignment,
which is computationally expensive,
we resort to finding a \textit{minimal} satisfying or conflicting assignment,
i.e., an assignment that has no literals removable without affecting the (un)satisfiability,
to leverage the efficient UNSAT-core computation for effective generalization.
After $\as$ is generalized to $\as^+$ and enlisted in $C_\bot$ or $C_\top$,
the negation of $\as^+$, which becomes a blocking clause,
will be conjoined with $\select$ to prune the assignments contained by $\as^+$.
The above process is repeated until $\select$ becomes unsatisfiable,
which signifies that the Boolean space spanned by $X$ has been exhaustively searched.
The subroutine \texttt{ComputeWeight} is then invoked to evaluate the weight of the collected cubes.
The subroutines \texttt{MinimalConflicting}, \texttt{MinimalSatisfying}, and \texttt{ComputeWeight} will be detailed below.
\subsection{Minimal satisfying assignment}
Given a SAT minterm $\as$ over $X$,
let $\mu$ be a satisfying assignment over $Y$ for $\pcf{\pf(X,Y)}{\as}$.
The subroutine \texttt{MinimalSatisfying} generalizes $\as$ to $\as^+$ by the following steps.
\begin{itemize}
\item[a)] Remove every clause $C$ in $\pcf{\pf(X,Y)}{\as}$ that contains some true literal from $\mu$.
\item[b)] For each literal $l$ in $\as$, drop $l$ and examine whether the rest of the clauses remain satisfied
by scanning these clauses and checking if each of them still contains some true literal.
If the rest of the clauses are still satisfied, discard $l$; otherwise, put $l$ in $\as^+$.
\end{itemize}
After the above steps, the SAT minterm $\as$ will be generalized into a minimal satisfying assignment $\as^+$.
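The procedure can be summarized by the following schematic pseudocode, where $\Gamma$ denotes the clauses of $\pcf{\pf}{\as}$ that are not already satisfied by $\mu$, and $\as^+ \setminus \{l\}$ denotes $\as^+$ with literal $l$ dropped:
\begin{algorithmic}[1]
\REQUIRE matrix $\pf$, SAT minterm $\as$ over $X$, satisfying assignment $\mu$ over $Y$
\ENSURE minimal satisfying assignment $\as^+$
\STATE $\Gamma :=$ clauses of $\pcf{\pf}{\as}$ containing no literal set true by $\mu$
\STATE $\as^+ := \as$
\FORALL{literals $l$ in $\as$}
\IF{every clause in $\Gamma$ contains some literal set true by $\as^+ \setminus \{l\}$}
\STATE $\as^+ := \as^+ \setminus \{l\}$
\ENDIF
\ENDFOR
\RETURN $\as^+$
\end{algorithmic}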
\subsection{Minimal conflicting assignment}
Let $\as$ be an UNSAT minterm over $X$ for $\pf(X,Y)$.
The analysis of unsatisfiability can be done with a modern SAT solver (e.g., using function \texttt{analyzeFinal()} in \minisat) to find a conjunction of literals from $\as$ responsible for the conflict.
However, in general this conjunction of literals might not be minimal,
and some of the literals could be dropped.
The subroutine \texttt{MinimalConflicting} takes the conjunction of literals responsible for the conflict computed by a SAT solver and makes it minimal as follows.
For each literal $l$ in the conjunction,
drop $l$ and examine whether $\pf(X,Y)$ remains unsatisfiable by invoking a SAT call.
If it is unsatisfiable, discard $l$; otherwise, put $l$ in $\as^+$.
After the above steps, the UNSAT minterm $\as$ will be generalized into a minimal conflicting assignment $\as^+$.
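A corresponding schematic sketch of the minimization loop is shown below, where $\gamma$ denotes the conflicting conjunction of literals returned by the SAT solver and $\as^+ \setminus \{l\}$ denotes $\as^+$ with literal $l$ dropped:
\begin{algorithmic}[1]
\REQUIRE matrix $\pf$, conflicting conjunction $\gamma$ of literals from $\as$
\ENSURE minimal conflicting assignment $\as^+$
\STATE $\as^+ := \gamma$
\FORALL{literals $l$ in $\gamma$}
\IF{\textbf{not} $\sat{\pcf{\pf}{\as^+ \setminus \{l\}}}$}
\STATE $\as^+ := \as^+ \setminus \{l\}$
\ENDIF
\ENDFOR
\RETURN $\as^+$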
\subsection{Weight computation}
The subroutine \texttt{ComputeWeight} aggregates the weight of collected cubes by invoking a weighted model counter.
Because a weighted model counter takes CNF formulas as input,
\texttt{ComputeWeight} first negates each collected cube to turn it into a clause,
and conjoins the resulting clauses into a CNF formula.
As the CNF formula is the negation of the disjunction of the cubes,
the weight of the cubes equals $1$ minus the weight of the CNF formula,
which is computed by a weighted model counter.
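In other words, for a set $C$ of collected cubes the subroutine evaluates
\begin{align*}
\texttt{ComputeWeight}(C) = 1 - \mathrm{WMC}\Big(\bigwedge_{\as^+ \in C} \lnot \as^+\Big),
\end{align*}
where $\mathrm{WMC}$ denotes weighted model counting under the weights specified by the randomized quantifiers.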
\subsection{Modification for approximate SSAT}
\label{sect:ressat-approximate}
The proposed algorithm can be easily modified to solve \textit{approximate SSAT},
where upper and lower bounds of the satisfying probability of an SSAT formula are computed.
Suppose~\cref{alg:ressat} is forced to terminate before the selector $\select$ becomes unsatisfiable.
The weights of the collected SAT and UNSAT cubes are still valid and can be aggregated by \texttt{ComputeWeight},
and the resulting weights reflect the lower and upper bounds of the satisfying probability, respectively.
The early termination can be triggered by imposing a run-time limit for~\cref{alg:ressat}.
Compared to previous DPLL-based approaches that branch on a single variable,
the proposed algorithm considers all randomly quantified variables together and exploits the concept of SAT and UNSAT cubes over the Boolean space spanned by randomly quantified variables,
making the intermediate collected SAT and UNSAT cubes convey useful information about the upper and lower bounds of the exact satisfying probability.
Compared to the DPLL-based state-of-the-art methods,
which cannot be easily modified for approximate SSAT,
the proposed method enjoys the flexibility of solving SSAT approximately or exactly,
depending on the imposed run-time constraint.
We note that the proposed algorithm is more efficient in memory consumption than previous DPLL-based algorithms.
Prior DPLL-based algorithms mostly apply subproblem memorization to avoid repeated computation on the same subproblem.
However, without special treatment, such memorization may result in rapid growth in memory usage.
On the other hand, in the proposed algorithm,
the number of collected cubes is greatly reduced by the minterm-generalization techniques,
which gives rise to its memory efficiency.
In our empirical evaluation,
the proposed algorithm consumed two orders of magnitude less memory than the state-of-the-art DPLL-based solver.
\begin{example}
\label{ex:ressat-solve}
Consider a random-exist quantified SSAT formula
\begin{align*}
\Qf=\random{0.5}r_1,\random{0.5}r_2,\random{0.5}r_3,\exists e_1,\exists e_2,\exists e_3.\pf,
\end{align*}
with $\pf$ consisting of the following clauses:
\begin{itemize}
\item[] $C_1: (r_1 \lor r_2 \lor e_1)$
\item[] $C_2: (r_1 \lor \lnot r_3 \lor e_2)$
\item[] $C_3: (r_2 \lor \lnot r_3 \lor \lnot e_1 \lor \lnot e_2)$
\item[] $C_4: (r_3 \lor e_3)$
\item[] $C_5: (r_3 \lor \lnot e_3)$
\end{itemize}
\begin{table}[t]
\centering
\caption{Solving process of~\cref{alg:ressat} on~\cref{ex:ressat-solve}}
\label{tbl:ressat-solve-example}
\small
\begin{tabular}{c|c|c|c|c}
Assignment & Minterm Type & Generalization & UB & LB \\
\hline
$\as_1=\lnot r_1 \lnot r_2 \lnot r_3$ & UNSAT & $\as_1^+=\lnot r_3$ & $0.5$ & $0$ \\
$\as_2=\lnot r_1 \lnot r_2 r_3$ & UNSAT & $\as_2^+=\lnot r_1 \lnot r_2$ & $0.375$ & $0$ \\
$\as_3=\lnot r_1 r_2 r_3$ & SAT & $\as_3^+=r_2r_3$ & $0.375$ & $0.25$ \\
$\as_4=r_1 \lnot r_2 r_3$ & SAT & $\as_4^+=r_1r_3$ & $0.375$ & $0.375$
\end{tabular}
\end{table}
The solving process is summarized in~\cref{tbl:ressat-solve-example}.
In the beginning, the selector $\select(r_1,r_2,r_3)$ is initialized without clauses,
and the sets $C_\top$ and $C_\bot$ to collect SAT and UNSAT cubes are empty.
Suppose $\select$ first selects an assignment $\as_1=\lnot r_1 \lnot r_2 \lnot r_3$.
Since $\pcf{\pf}{\as_1}$ is unsatisfiable due to the conflict between $C_4$ and $C_5$,
the subroutine \texttt{MinimalConflicting} returns $\as_1^+=\lnot r_3$,
which is the minimal conflicting assignment responsible for this conflict.
Note that this minimal conflicting assignment $\as_1^+$ reflects an upper bound of $0.5$ for $\spb{\Qf}$.
The selector $\select$ is then strengthened through conjunction with the negation of $\as_1^+$ to block the searched subspace.
Next, suppose $\as_2=\lnot r_1 \lnot r_2 r_3$ is selected.
Under $\as_2$,
formula $\pcf{\pf}{\as_2}$ is unsatisfiable due to the conflict among clauses $C_1$, $C_2$, and $C_3$,
and the minimal conflicting assignment $\as_2^+$ equals $\lnot r_1 \lnot r_2$.
After conjoining $\select$ with $\lnot \as_2^+$, suppose $\as_3=\lnot r_1 r_2 r_3$ is chosen.
Formula $\pcf{\pf}{\as_3}$ is satisfiable through the assignment $\mu_3=\lnot e_1 e_2 \lnot e_3$.
The subroutine \texttt{MinimalSatisfying} is invoked to generalize $\as_3$ to $\as_3^+=r_2r_3$,
which reflects a lower bound of $0.25$ for $\spb{\Qf}$.
Similarly, the negation of $\as_3^+$ is conjoined with $\select$.
Next, let the assignment chosen by $\select$ be $\as_4=r_1 \lnot r_2 r_3$.
Since $\pcf{\pf}{\as_4}$ is satisfiable through the assignment $\mu_4=\lnot e_1 \lnot e_2 \lnot e_3$,
assignment $\as_4$ is generalized to $\as_4^+=r_1r_3$ by \texttt{MinimalSatisfying}.
After being conjoined with $\lnot \as_4^+$,
formula $\select$ becomes unsatisfiable,
which indicates the Boolean space over $\{r_1,r_2,r_3\}$ has been explored exhaustively.
At the end, we have
$C_\bot=\{\as_1^+,\as_2^+\}=\{\lnot r_3,\lnot r_1 \lnot r_2\}$ and
$C_\top=\{\as_3^+,\as_4^+\}=\{r_2r_3,r_1r_3\}$.
The subroutine \texttt{ComputeWeight} is finally invoked and returns $0.375$ as the satisfying probability of $\Qf$.
For approximate SSAT solving, suppose the procedure is forced to terminate right after $\as_3^+$ is collected.
The subroutine \texttt{ComputeWeight} will be invoked over $C_\top=\{r_2r_3\}$ and $C_\bot=\{\lnot r_3,\lnot r_1 \lnot r_2\}$.
The cubes in $C_\top$ or $C_\bot$ are negated into CNF formulas for weighted model counting.
To compute an upper bound,
the UNSAT cubes $\lnot r_3$ and $\lnot r_1 \lnot r_2$ are rewritten into the CNF formula $(r_3)\land(r_1 \lor r_2)$, which yields a probability of $0.375$ with respect to the weights specified by the prefix.
This probability is the satisfying probability of the negation of the UNSAT cubes,
which gives an upper bound of $0.375$ for $\spb{\Qf}$.
Similarly, we can obtain a lower bound of $0.25$ for $\spb{\Qf}$ from the SAT cube $r_2 r_3$.
\end{example}
\section{Applications}
In this section, we discuss several applications of random-exist quantified SSAT formulas.
\subsection{Probability of success in planning}
Many planning problems can be formulated in terms of forall-exist QBFs,
i.e., QBFs of the form $\Qf=\forall X,\exists Y.\pf(X,Y)$.
Changing the universal quantifiers of these QBFs to randomized ones yields random-exist quantified SSAT formulas.
Under the game interpretation of QBFs,
the satisfying probability of such an SSAT formula corresponds to the likelihood
for the existential player to win the game if the universal player decides its moves at random.
In~\cref{sect:ressat-evaluation},
we will use the \textit{strategic companies} problem~\cite{Cadoli1997} as an example
to evaluate SSAT solvers on planning applications.
\subsection{Probabilistic circuit verification}
The second application is the formal verification of \textit{probabilistic design}.
As probabilistic errors are becoming more common in advanced nanometer technology,
the \textit{probabilistic equivalence checking} (PEC) problem asks for the probability that a probabilistic circuit produces outputs different from its faultless specification.
PEC can be encoded into a random-exist quantified SSAT formula~\cite{LeeTC18ProbDesign}. | {
"alphanum_fraction": 0.7201908859,
"avg_line_length": 67.018018018,
"ext": "tex",
"hexsha": "0e13791638b3cdef19bf1fd3adaf3b79f735cc37",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "nianzelee/PhD-Dissertation",
"max_forks_repo_path": "paper/random-exist-ssat/technique.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "nianzelee/PhD-Dissertation",
"max_issues_repo_path": "paper/random-exist-ssat/technique.tex",
"max_line_length": 206,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nianzelee/PhD-Dissertation",
"max_stars_repo_path": "paper/random-exist-ssat/technique.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z",
"num_tokens": 4147,
"size": 14878
} |