\documentclass[a4paper]{book}
\usepackage{style}
\pagestyle{fancy}
\fancyhf{}\fancyfoot[LE,RO]{\thepage}
\fancyhead[RE]{\textit{\leftmark}}
\fancyhead[LO]{\textit{\rightmark}}
\fancyhead[LE]{Algorithms}
\fancyhead[RO]{Notes by Joachim Favre}
\fancypagestyle{plain}{\fancyhf{}\fancyfoot[LE,RO]{\thepage}}
\title{Algorithms\\ Prof. Mikhail Kapralov --- EPFL}
\author{Notes by Joachim Favre}
\date{Computer science bachelor --- Semester 3 \\ Autumn 2022}
\begin{document}
\maketitle
\clearemptydoublepage
\thispagestyle{empty}
\vspace*{\fill}
I made this document for my own use, but I thought that typed notes might be of interest to others. There are certainly mistakes; it is impossible not to make any. If you find some, please feel free to share them with me (grammatical and vocabulary errors are of course also welcome). You can contact me at the following e-mail address:
\begin{center}
\texttt{joachim.favre@epfl.ch}
\end{center}
If you did not get this document through my GitHub repository, then you may be interested to know that I have one, on which I put these typed notes and their LaTeX code. Here is the link (make sure to read the README to understand how to download the files you are interested in):
\begin{center}
\url{https://github.com/JoachimFavre/EPFLNotesIN}
\end{center}
Please note that the content does not belong to me. I have made some structural changes, reworded some parts, and added some personal notes; but the wording and explanations come mainly from the Professor, and from the book on which they based their course.
I think it is worth mentioning that, in order to get these notes typed up, I took my notes in \LaTeX{} during the course, and then made some corrections. Typing up handwritten notes afterwards does not seem doable to me in terms of workload. To take notes in \LaTeX, I took my inspiration from the following article by Gilles Castel. If you want more details, feel free to contact me at my e-mail address, mentioned hereinabove.
\begin{center}
\url{https://castel.dev/post/lecture-notes-1/}
\end{center}
I would also like to specify that the words ``trivial'' and ``simple'' do not have, in this course, the definition you find in a dictionary. We are at EPFL, nothing we do is trivial. Something trivial is something that a random person in the street would be able to do. In our context, understand these words more as ``simpler than the rest''. Also, it is okay if you take a while to understand something that is said to be trivial (especially as I love using this word everywhere hihi).
Since you are reading this, I will give you a little advice. Sleep is a much more powerful tool than you may imagine, so do not neglect a good night of sleep in favour of studying (especially the night before an exam). I wish you to have fun during your exams.
\vspace*{\fill}
\vspace*{\fill}
\begin{center}
\initcurrdate
\def\setdateformat{Y--m--d}
\textit{Version \printdate}
\end{center}
\vspace*{\fill}
\clearemptydoublepage
\thispagestyle{empty}
\vspace*{\fill}
\begin{flushright}
\begin{minipage}{7cm}
{\itshape
To Gilles Castel, whose work has \\
inspired me this note taking method.
\vspace{1em}
Rest in peace, nobody \\
deserves to go so young.
}
\end{minipage}
\end{flushright}
\vspace*{\fill}
\vspace*{\fill}
\clearemptydoublepage
\tableofcontents
\cleardoublepage
\renewcommand{\cftchapleader}{\cftdotfill{\cftdotsep}}
\renewcommand*{\cftchapfont}{}
\listoflectures
\chapter{Summary by lecture}
\lecturetitlesummary{1}{2022-09-23}{I'm rambling a bit}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of algorithm and instance.
\item Recall on asymptotics (I ramble a bit on this subject, but I find it very interesting).
\item Definition of the sorting problem, and of loop invariants.
\item Explanation of the insertion sort algorithm.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{2}{2022-09-26}{Teile und herrsche}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Proof that insertion sort works, and analysis of its complexity.
\item Definition of divide-and-conquer algorithms.
\item Explanation of merge sort, and analysis of its complexity.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{3}{2022-09-30}{Trees which grow in the wrong direction}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Proof of correctness of merge sort.
\item Analysis of the complexity of merge sort, through the substitution method and through trees.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{4}{2022-10-03}{Master theorem}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation of the master theorem.
\item Explanation of how to count the number of inversions in an array.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{5}{2022-10-07}{Fast matrix multiplication}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation of a solution to the maximum subarray problem.
\item Explanation of a divide-and-conquer algorithm for number multiplication.
\item Explanation of Strassen's algorithm, a divide-and-conquer algorithm for matrix multiplication.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{6}{2022-10-10}{Heap sort}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of max-heap.
\item Explanation on how to store a heap.
\item Explanation of the \texttt{MaxHeapify} procedure.
\item Explanation on how to make a heap out of a random array.
\item Explanation on how to use heaps to make heapsort.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{7}{2022-10-14}{Queues, stacks and linked lists}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation on how to implement a priority queue through a heap.
\item Explanation on how to implement a stack.
\item Explanation on how to implement a queue.
\item Explanation on how to implement a linked list.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{8}{2022-10-17}{More trees growing in the wrong direction}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of binary search trees.
\item Explanation of how to search for an element, find the extrema, find the successor and predecessor of a given element, print the tree, and insert an element in a binary search tree.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{9}{2022-10-21}{Dynamic cannot be a pejorative word}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation on how to delete a node from a binary search tree.
\item Explanation of top-down with memoisation and bottom-up algorithms in Dynamic Programming, through the example of the Fibonacci numbers.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{10}{2022-10-24}{``There are 3 types of mathematicians: the ones who can count, and the ones who cannot'' (Prof. Kapralov) (what do you mean by ``this title is too long''?)}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Application of dynamic programming to the rod-cutting, change-making and matrix-multiplication problems.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{11}{2022-10-28}{LCS but not LoL's one}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Application of dynamic programming to the longest common subsequence problem.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{12}{2022-10-31}{More binary search trees}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation on how to use dynamic programming in order to find the optimal binary search tree given a sorted sequence and a list of probabilities.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{13}{2022-11-04}{An empty course.}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item No really, we only did some revisions for the midterm.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{14}{2022-11-07}{I love XKCD}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of directed and undirected graphs, and explanation on how to store them in memory.
\item Explanation of BFS.
\item Explanation of DFS, and of the depth-first forest and edge classification it implies.
\item Explanation of the parenthesis theorem.
\item Explanation of the white-path theorem.
\item Definition of directed acyclic graphs.
\item Proof that a directed graph is acyclic if and only if a DFS yields no back edge.
\item Definition of topological sort, and explanation of an algorithm to compute it.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{15}{2022-11-14}{I definitely really like this date}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of SCCs, and proof of their existence and uniqueness.
\item Definition of component graphs.
\item Explanation of Kosaraju's algorithm for finding component graphs.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{16}{2022-11-18}{This date is really nice too, though}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Definition of flow network and flow.
\item Definition of residual capacity and residual networks.
\item Explanation of the Ford-Fulkerson greedy algorithm for finding the maximum flow in a flow network.
\item Definition of the cut of a flow network, and its flow and capacity.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{17}{2022-11-21}{The algorithm may stop, or may not}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation and proof of the max-flow min-cut theorem.
\item Complexity analysis of the Ford-Fulkerson method.
\item Application of the Ford-Fulkerson method to the Bipartite matching problem and the Edge-disjoint paths problem.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{18}{2022-11-25}{Either Levi or Mikasa made this function}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item There exists no other Ackerman in the world, and when they wrote the term ``Inverse Ackermann function'', they definitely made a mistake while writing the word ``Ackerman''.
\item Definition of the disjoint-set data structures.
\item Explanation of how to implement a disjoint-set data structure through linked lists.
\item Explanation of how to implement a disjoint-set data structure through a forest of trees.
\item Definition of spanning trees.
\item Explanation of the minimum spanning tree problem.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{19}{2022-11-28}{Finding the optimal MST}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation and proof of Prim's algorithm for finding an MST.
\item Explanation and proof of Kruskal's algorithm for finding an MST.
\item Definition of the shortest path problem.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{20}{2022-12-02}{I like the structure of maths courses}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation of the Bellman-Ford algorithm for finding shortest paths and detecting negative cycles.
\item Proof of optimality of the Bellman-Ford algorithm.
\item Explanation of Dijkstra's algorithm for finding a shortest path in a weighted graph, and proof of its optimality.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{24}{2022-12-16}{Doing fun stuff with matrices (really)}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Applying dynamic programming to solve the all-pairs shortest paths problem.
\item Translating our dynamic algorithm to matrix usage, in order to use fast exponentiation.
\item Explanation of Floyd-Warshall's algorithm for solving the all-pairs shortest paths problem.
\item Explanation of Johnson's algorithm for solving the all-pairs shortest paths problem.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{21}{2022-12-05}{Stochastic probabilistic randomness}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Introduction to probabilistic analysis.
\item Definition of indicator random variables.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{22}{2022-12-09}{Hachis Parmentier}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Explanation of the birthday paradox, and proof of the birthday lemma.
\item Definition of hash function.
\item Explanation of hash tables.
\item Proof of an upper bound on the runtime complexity of unsuccessful search in hash tables.
\end{itemize}
\vspace{2em}
\lecturetitlesummary{23}{2022-12-12}{Quantum bogosort is a comparison sort in $\Theta\left(n\right)$}
\vspace{0.5em}
\begin{itemize}[left=0pt]
\item Proof of an upper bound on the runtime complexity of successful search in hash tables.
\item Explanation of quicksort.
\item Proof and analysis of naive quicksort.
\item Analysis of randomised quicksort.
\item Proof of the $\Omega\left(n\log\left(n\right)\right)$ lower bound for comparison sorts.
\end{itemize}
\vspace{2em}
\cleardoublepage
\input{Lecture01/lecture01.tex}
\input{Lecture02/lecture02.tex}
\input{Lecture03/lecture03.tex}
\input{Lecture04/lecture04.tex}
\input{Lecture05/lecture05.tex}
\input{Lecture06/lecture06.tex}
\input{Lecture07/lecture07.tex}
\input{Lecture08/lecture08.tex}
\input{Lecture09/lecture09.tex}
\input{Lecture10/lecture10.tex}
\input{Lecture11/lecture11.tex}
\input{Lecture12/lecture12.tex}
\input{Lecture13/lecture13.tex}
\input{Lecture14/lecture14.tex}
\input{Lecture15/lecture15.tex}
\input{Lecture16/lecture16.tex}
\input{Lecture17/lecture17.tex}
\input{Lecture18/lecture18.tex}
\input{Lecture19/lecture19.tex}
\input{Lecture20/lecture20.tex}
\input{Lecture24/lecture24.tex}
\input{Lecture21/lecture21.tex}
\input{Lecture22/lecture22.tex}
\input{Lecture23/lecture23.tex}
\clearemptydoublepage
\end{document}
\documentclass[a4paper]{article}
\usepackage{style}
\title{Introduction to machine learning --- BA3\\ Detailed summary}
\author{Joachim Favre\\ Course by Prof. Mathieu Salzmann }
\date{Autumn semester 2022}
\begin{document}
\maketitle
\cftsetindents{paragraph}{1.5em}{1em}
\setcounter{tocdepth}{5}
\tableofcontents
\initcurrdate
\def\setdateformat{Y--m--d}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\vspace*{\fill}
\begin{center}
\textit{Version \printdate}
\end{center}
\vspace*{\fill}
\newpage
\begin{parag}{Supervised and unsupervised learning}
In \important{supervised learning}, we are given data and its groundtruth labels. The goal is, given new data, to predict its labels, by doing regression or classification.
In \important{unsupervised learning}, we are only given data (without any label), and we want to output some information about it, by doing dimensionality reduction or clustering.
\end{parag}
\begin{parag}{Regression and classification}
The goal of \important{regression} is to predict a continuous value for a given sample. The goal of \important{classification} is to output a discrete label (typically encoded in one-hot encoding with 0s and 1s or -1s and 1s).
The main difference is that there is the notion of closeness in regression (when predicting a date, outputting 1970 when it should be 1980 is better than outputting 2100), which is not in classification (when predicting what is on a picture, outputting a car when it should be a cat is not better or worse than outputting an elephant).
\end{parag}
\begin{parag}{Dimensionality reduction}
Dimensionality reduction has two main advantages.
The first one is that it allows us to decrease the dimension of our data, which typically yields a tremendous speed-up while preserving most of the precision.
The second one is that, depending on the model, we can also map data back from the lower-dimensional space to the higher one. This can be very interesting since it allows us to create new samples. For instance, after applying dimensionality reduction to a set of human faces, we could create a new point $\bvec{y}$ randomly thanks to the distribution of our data, and map it back to high dimension. That way, we have created a new random face. Another use for this is denoising the data: mapping a sample to a lower-dimensional space and back to high dimensions may result in a lot less noise.
\end{parag}
\begin{parag}{Notations}
We consider the following notation throughout this course, with some slight exceptions when specified otherwise. $N$ is the number of samples we have, $D$ is the dimensionality (the number of components) of any input, and $C$ is the dimensionality of any output.
Unless specified otherwise, the input $\bvec{x}_i \in \mathbb{R}^{D+1}$ begins with a constant $1$ to account for a bias, followed by the input data. We let $\bvec{y}_i \in \mathbb{R}^C$ be the $i$\Th output. $x_i^{\left(k\right)}$ is the $k$\Th component of the $i$\Th input, and similarly for $y_i^{\left(k\right)}$. To sum up, we have:
\[\bvec{x}_i = \begin{pmatrix} 1 \\ x_i^{\left(1\right)} \\ \vdots \\ x_i^{\left(D\right)} \end{pmatrix} \in \mathbb{R}^{D+1}, \mathspace \bvec{y}_i = \begin{pmatrix} y_i^{\left(1\right)} \\ \vdots \\ y_i^{\left(C\right)} \end{pmatrix} \in \mathbb{R}^{C}\]
Any value output by our model will be represented by a $\hat{\bvec{y}} \in \mathbb{R}^{C}$, to make a difference with the groundtruth $\bvec{y} \in \mathbb{R}^C$.
We will also need weights, which represent the parameters of our model. $\bvec{w}_{\left(i\right)}$ is the weight vector used to compute the $i$\Th component of the output from any $\bvec{x}$. Its size must be such that dot products with $\bvec{x}$, i.e. $\bvec{x}^T \bvec{w}_{\left(i\right)}$, make sense. Thus, unless specified otherwise, we use $\bvec{w}_{\left(i\right)} \in \mathbb{R}^{D+1}$.
To simplify the notation and the computations, we will stack our values in matrices. Thus, we let:
\[X = \begin{pmatrix} \bvec{x}_1^T \\ \vdots \\ \bvec{x}_N^T \end{pmatrix} = \begin{pmatrix} 1 & \cdots & x_1^{\left(D\right)} \\ \vdots & \ddots & \vdots \\ 1 & \cdots & x_N^{\left(D\right)} \end{pmatrix} \in \mathbb{R}^{N \times \left(D+1\right)}\]
\[Y = \begin{pmatrix} \bvec{y}_1^T \\ \vdots \\ \bvec{y}_N^T \end{pmatrix} = \begin{pmatrix} y_1^{\left(1\right)} & \cdots & y_1^{\left(C\right)} \\ \vdots & \ddots & \vdots \\ y_N^{\left(1\right)} & \cdots & y_N^{\left(C\right)} \end{pmatrix} \in \mathbb{R}^{N \times C}\]
\[W = \begin{pmatrix} \bvec{w}_{\left(1\right)} & \hdots & \bvec{w}_{\left(C\right)} \end{pmatrix} \in \mathbb{R}^{\left(D+1\right)\times C}\]
\end{parag}
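As a concrete illustration of this notation, here is a minimal Python sketch (the data and names are illustrative, not from the course): it stacks three toy samples into the matrices $X$ and $Y$ defined above.

```python
import numpy as np

# N = 3 samples, D = 2 input dimensions, C = 2 output dimensions.
raw = np.array([[0.5, 1.0],
                [2.0, 3.0],
                [4.0, 5.0]])          # raw inputs, shape (N, D)
N, D = raw.shape

# Prepend the constant 1 to every input to account for the bias,
# giving the matrix X of stacked x_i^T, of shape (N, D + 1).
X = np.hstack([np.ones((N, 1)), raw])

Y = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # stacked groundtruth y_i^T, shape (N, C)
C = Y.shape[1]

# W stacks the weight vectors w_(1), ..., w_(C) as columns: shape (D + 1, C).
W = np.zeros((D + 1, C))

# A single product then yields the predictions for all samples at once.
Y_hat = X @ W                          # shape (N, C)
```

This is one reason for stacking everything in matrices: all $N$ predictions come from the single product $XW$.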
\begin{parag}{Feature expansion}
Increasing the number of dimensions of our input data from $D$ to $F$ may help our models (using non-linear functions, since linear ones would be of no help). Thus, we may define the following mapping:
\[\phi\left(\bvec{x}\right) = \begin{pmatrix} 1 & x^{\left(1\right)} & \cdots & x^{\left(D\right)} & \left(x^{\left(1\right)}\right)^2 & \cdots & \left(x^{\left(D\right)}\right)^2 & \cdots \end{pmatrix}^T \in \mathbb{R}^{F}\]
We also put it in a matrix to simplify notation:
\[\Phi = \begin{pmatrix} \phi\left(\bvec{x}_1\right)^T \\ \vdots \\ \phi\left(\bvec{x}_N\right)^T \end{pmatrix} \in \mathbb{R}^{N \times F} \]
We can replace $\bvec{x}$ by $\phi\left(\bvec{x}\right)$ and $X$ by $\Phi$ in almost every model, especially the ones which will be kernelised. Note that, in this case, we need $\bvec{w} \in \mathbb{R}^F$, and thus $W \in \mathbb{R}^{F \times C}$.
\begin{subparag}{Remark}
The 1 we added to the input data to account for the bias is some kind of feature expansion.
\end{subparag}
\begin{subparag}{Cover's Theorem}
Cover's theorem states (more or less) that, after a non-linear feature expansion into a higher-dimensional space, it is more likely for our data to be linearly separable.
\end{subparag}
\end{parag}
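The expansion above can be sketched in Python as follows (a toy degree-2 expansion without cross terms, matching the displayed formula; the names are illustrative):

```python
import numpy as np

def phi(x):
    """Map x in R^D to (1, x_1, ..., x_D, x_1^2, ..., x_D^2) in R^F, F = 2D + 1."""
    return np.concatenate([[1.0], x, x ** 2])

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # N = 2 raw samples in R^D, D = 2

# Phi stacks phi(x_i)^T row by row, giving shape (N, F).
Phi = np.vstack([phi(x) for x in X])
```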
\begin{parag}{Kernel}
We can notice that defining our $\phi$ functions for feature expansion can be really tedious. However, since most of our methods depend on a dot product of $\phi\left(\bvec{x}_i\right)^T \phi\left(\bvec{x}_j\right)$, which gives some kind of measure of similarity between $\bvec{x}_i$ and $\bvec{x}_j$ (since it is proportional to the cosine of their angle), we can define a similarity function named a \important{kernel}, such that:
\[k\left(\bvec{x}_i, \bvec{x}_j\right) = \phi\left(\bvec{x}_i\right)^T \phi\left(\bvec{x}_j\right)\]
As usual, we put everything in vectors and matrices to simplify our notation. We first have a way to measure the similarity between a sample $\bvec{x}_i$ and all the other samples:
\[k\left(X, \bvec{x}_i\right) = \begin{pmatrix} k\left(\bvec{x}_1, \bvec{x}_i\right) \\ \vdots \\ k\left(\bvec{x}_N, \bvec{x}_i\right) \end{pmatrix} \in \mathbb{R}^N\]
And we can then stack all of them in a matrix to define similarity between all pairs of samples:
\[K = \begin{pmatrix} k\left(\bvec{x}_1, \bvec{x}_1\right) & k\left(\bvec{x}_1, \bvec{x}_2\right) & \cdots & k\left(\bvec{x}_1, \bvec{x}_N\right) \\ k\left(\bvec{x}_2, \bvec{x}_1\right) & k\left(\bvec{x}_2, \bvec{x}_2\right) & \cdots & k\left(\bvec{x}_2, \bvec{x}_N\right) \\ \vdots & \vdots & \ddots & \vdots \\ k\left(\bvec{x}_N, \bvec{x}_1\right) & k\left(\bvec{x}_N, \bvec{x}_2\right) & \cdots & k\left(\bvec{x}_N, \bvec{x}_N\right)\end{pmatrix} \in \mathbb{R}^{N \times N}\]
Note that, since $k\left(\bvec{x}_i, \bvec{x}_j\right) = k\left(\bvec{x}_j, \bvec{x}_i\right)$ by the commutativity of the dot product, $K$ is symmetric ($K^T = K$).
\begin{subparag}{Remark}
The main advantage of a kernel is that we never need to know the mapping $\phi$ linked to it.
\end{subparag}
\begin{subparag}{Examples}
We can for instance use the \important{polynomial kernel}:
\[k\left(\bvec{x}_i, \bvec{x}_j\right) = \left(\bvec{x}_i^T \bvec{x}_j + c\right)^d\]
$c$ is often set to $1$ and $d$ to $2$. For this kernel, the corresponding mapping $\phi$ is known: up to multiplicative constants, it consists of all the possible monomials of degree less than or equal to $d$ in the components of $\bvec{x}$.
We can also use the \important{Gaussian kernel} (or radial basis function (RBF)):
\[k\left(\bvec{x}_i, \bvec{x}_j\right) = \exp\left(- \frac{\left\|\bvec{x}_i - \bvec{x}_j\right\|^2}{2 \sigma ^2}\right)\]
$\sigma$ is typically chosen relatively to the data.
\end{subparag}
\end{parag}
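Both example kernels and the matrix $K$ can be sketched as follows (toy data; the names are illustrative):

```python
import numpy as np

def poly_kernel(xi, xj, c=1.0, d=2):
    # Polynomial kernel (x_i^T x_j + c)^d.
    return (xi @ xj + c) ** d

def rbf_kernel(xi, xj, sigma=1.0):
    # Gaussian (RBF) kernel exp(-||x_i - x_j||^2 / (2 sigma^2)).
    return np.exp(-np.sum((xi - xj) ** 2) / (2 * sigma ** 2))

X = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])                       # N = 3 toy samples

# K[i, j] = k(x_i, x_j); symmetric since the dot product commutes.
N = len(X)
K = np.array([[poly_kernel(X[i], X[j]) for j in range(N)] for i in range(N)])
```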
\begin{parag}{Representer theorem}
The minimiser of a regularised empirical risk function can be represented as a linear combination of the expanded feature vectors. In other words, we can write:
\[\bvec{w}^* = \sum_{i=1}^{N} \alpha_i^* \phi\left(\bvec{x}_i\right) = \Phi^T \bvec{\alpha}^*\]
where $\bvec{\alpha}^* \in \mathbb{R}^N$.
\begin{subparag}{Remark}
This theorem is really important to kernelise algorithms. When using it, the goal is to get rid of $\Phi$, since we do not know the mapping $\phi$. Switching our view to the variables $\bvec{\alpha}$ instead of the variables $\bvec{w}$ is typically a way to do so.
\end{subparag}
\end{parag}
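As a small numerical sanity check of this theorem (an illustrative sketch, using the regularised least-squares objective of ridge regression with $C = 1$): the minimiser computed directly in $\bvec{w}$-space coincides with $\Phi^T \bvec{\alpha}^*$, where $\bvec{\alpha}^* = \left(K + \lambda I\right)^{-1} \bvec{y}$ only involves the kernel matrix $K = \Phi \Phi^T$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 5, 3
Phi = rng.standard_normal((N, F))      # expanded features, shape (N, F)
y = rng.standard_normal(N)
lam = 0.1

# Primal solution: w* = (Phi^T Phi + lam I)^-1 Phi^T y.
w_primal = np.linalg.solve(Phi.T @ Phi + lam * np.eye(F), Phi.T @ y)

# Dual solution via the representer theorem: w* = Phi^T alpha*,
# with alpha* = (K + lam I)^-1 y and K = Phi Phi^T.
K = Phi @ Phi.T
alpha = np.linalg.solve(K + lam * np.eye(N), y)
w_dual = Phi.T @ alpha
```

The two solutions agree, confirming that $\bvec{w}^*$ indeed lies in the span of the expanded features.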
\begin{parag}{Loss function}
The \important{loss function} $\ell \left(\hat{\bvec{y}}_i, \bvec{y}_i\right)$ computes an error value between the prediction and the true value.
This is a measure of the error for any given prediction.
\end{parag}
\begin{parag}{Empirical risk}
Given $N$ training samples $\left\{\left(\bvec{x}_i, \bvec{y}_i\right)\right\}$, the \important{empirical risk} is defined as:
\[R\left(\left\{\bvec{x}_i\right\}, \left\{\bvec{y}_i\right\}, W\right) = \frac{1}{N} \sum_{i=1}^{N} \ell \left(\hat{\bvec{y}}_i, \bvec{y}_i\right)\]
This represents the global error on the training samples; it is what we try to minimise:
\[W^* = \argmin_{W} R\left(\left\{\bvec{x}_i\right\}, \left\{\bvec{y}_i\right\}, W\right)\]
\begin{subparag}{Regularised}
Sometimes, we want to regularise our objective function, so that we prevent the weights from becoming too large and causing a lot of overfitting. We then instead seek to minimise:
\[E\left(W\right) = R\left(W\right) + \lambda E_W\left(W\right)\]
where $\lambda$ is a hyperparameter and $E_W\left(W\right)$ is the regulariser.
A regulariser that can be used is Tikhonov regularisation, which will be used in ridge regression:
\[E_W\left(W\right) = \left\|W\right\|^2_F\]
where $\left\|W\right\|_F^2$ is the squared Frobenius norm of $W$, meaning the sum of its values squared.
\end{subparag}
\end{parag}
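The regularised objective can be sketched as follows (the squared-error loss and the toy data are illustrative choices):

```python
import numpy as np

def empirical_risk(X, Y, W):
    """Mean squared loss over the N training samples."""
    Y_hat = X @ W
    return np.mean(np.sum((Y_hat - Y) ** 2, axis=1))

def objective(X, Y, W, lam):
    # E(W) = R(W) + lam * ||W||_F^2, with the Tikhonov regulariser
    # being the sum of the squared entries of W (squared Frobenius norm).
    return empirical_risk(X, Y, W) + lam * np.sum(W ** 2)

X = np.array([[1.0, 0.0], [1.0, 1.0]])    # N = 2 samples, with bias column
Y = np.array([[0.0], [1.0]])
W = np.zeros((2, 1))
```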
\begin{parag}{Gradient descent}
The goal of \important{gradient descent} is to minimise a function (an empirical risk $R\left(W\right)$ in this context). The idea is to begin with an estimate $W_0$ (typically completely random), and then to update it iteratively, by following the direction of greatest decrease (the opposite of the gradient):
\[W_k = W_{k-1} - \eta \nabla_{W} R\left(W_{k-1}\right)\]
where $\eta > 0$ is the learning rate.
This algorithm can then be stopped when the change in the function value falls below a threshold, $\left|R\left(W_{k}\right) - R\left(W_{k-1}\right)\right| < \delta_R$, when the change in the parameters falls below a threshold, $\left\|W_{k} - W_{k-1}\right\| < \delta_w$, or when a maximum number of iterations (also known as epochs) is reached (even though this last criterion gives no guarantee of convergence).
\begin{subparag}{Remark}
This algorithm does not always converge and, when it does, it does not necessarily reach a minimum, let alone the global minimum.
\end{subparag}
\end{parag}
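The iteration above can be sketched as follows, on a mean-squared-error risk (the toy data, learning rate and stopping rule are illustrative):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])              # N = 3 samples with a bias column
y = np.array([1.0, 2.0, 3.0])           # groundtruth: y = 1 + x

def risk(w):
    # R(w) = (1/N) ||X w - y||^2
    return np.mean((X @ w - y) ** 2)

def grad(w):
    # Gradient of R with respect to w.
    return 2.0 / len(X) * X.T @ (X @ w - y)

w = np.zeros(2)                          # initial estimate W_0
eta = 0.1                                # learning rate
for _ in range(2000):                    # cap on the number of epochs
    w_new = w - eta * grad(w)            # step opposite to the gradient
    if abs(risk(w_new) - risk(w)) < 1e-12:   # change-in-function threshold
        w = w_new
        break
    w = w_new
```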
\begin{parag}{Evaluation metrics}
Once a supervised machine learning model is trained, we want to understand how well it performs on unseen test data (which must absolutely be kept separate from the training data).
We could use the loss function for this, but we may also use a different metric.
For regression, we typically use \important{Mean Squared Error} (MSE):
\[\text{MSE} = \frac{1}{N_t} \sum_{i=1}^{N_t} \left\|\hat{\bvec{y}}_i - \bvec{y}_i\right\|^2\]
where $N_t$ is the number of test samples.
For classification, this is typically more complicated. Defining $TP$ to be the number of true positive predictions (samples correctly predicted positive), $FN$ to be the number of false negative predictions (samples incorrectly predicted negative), and similarly for $TN$ and $FP$, we can define the \important{accuracy} (the percentage of correctly classified samples), the \important{precision} (the percentage of samples classified positive which are truly positive) and the \important{recall} (the percentage of positive samples that are correctly classified as positive):
\[\text{accuracy} = \frac{TP + TN}{N_t}, \mathspace \text{precision} = \frac{TP}{TP + FP}, \mathspace \text{recall} = \frac{TP}{TP + FN}\]
We can then combine the last two into the \important{F1 score}:
\[\text{F1} = 2 \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}\]
We can typically then use accuracy and the F1 score to see how well our classification model did.
\begin{subparag}{Remark}
There are many more metrics for regression and classification. For the former, we could quote RMSE (root mean squared error), MAE (mean absolute error) or the MAPE (mean absolute percentage error). For the latter, making a confusion matrix or computing the AUC (area under the ROC curve) can give good insights too.
\end{subparag}
\end{parag}
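These metrics can be sketched on toy predictions as follows (the data is illustrative; positives are labelled 1, negatives 0):

```python
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

# Count the four outcome types by comparing prediction to groundtruth.
TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

N_t = len(y_true)
accuracy = (TP + TN) / N_t
precision = TP / (TP + FP)
recall = TP / (TP + FN)            # TP / P, with P = TP + FN positive samples
f1 = 2 * precision * recall / (precision + recall)
```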
\begin{parag}{Decision boundary}
A classifier leads to a decision boundary. This is an object of dimension $D-1$ (a line if our data lies on a plane for instance), which splits the space into two regions: one where samples are considered positive (the predicted value is closer to the value of positive samples), and one where they are considered to be negative (the predicted value is closer to the value of negative samples).
Since the boundary is the set of points whose predicted value is equidistant from the two labels, it is parametrised by the following equation:
\[\hat{\bvec{y}}\left(\bvec{x}\right) = \frac{y_{pos} + y_{neg}}{2}\]
where $y_{pos}$ is the value for positive samples, and $y_{neg}$ is the value for negative samples.
\begin{subparag}{Remark}
A classifier is said to be linear if its decision boundary is a hyperplane (a straight line if the data lies on a plane, for instance).
\end{subparag}
\end{parag}
\begin{parag}{Margin}
If $C = 1$, the orthogonal distance between the decision boundary and the nearest sample is called the \important{margin}.
\imagehere[0.5]{margin.png}
\end{parag}
\begin{parag}{Overfitting}
When we increase the complexity of the model (by changing its hyperparameters), we at first get better and better results on both the training and the test data. However, there is a point at which increasing the complexity keeps decreasing the error on the training data but increases the error on the test data. This very general phenomenon is named \important{overfitting}.
\end{parag}
\begin{parag}{Cross-validation}
Cross-validation is a way to find good hyperparameters that prevent overfitting. We test different models (in a predefined set), assign to each of them an error value, and pick the one yielding the smallest error.
The idea of \important{$k$-fold cross-validation} is, to evaluate the error of a model, to first randomly split the dataset into $k$ partitions (where $k$ is predefined). Then, we do $k$ iterations: at iteration $i$ we drop the $i$\Th partition, train the model on the $\left(k-1\right)$ other folds merged, and use the $i$\Th partition to compute the error. At the end of all the iterations, we average all the errors.
Note that we never test the model on data we used to train it, which helps avoid overfitting. Also, we use all of our training data, giving a complete insight over it (doing only one iteration would give less information about the model). It is important to notice that the larger $k$ is, the less data we waste but the more expensive this method becomes.
\begin{subparag}{Remark}
Note that setting $k = N$ (where $N$ is the number of training samples) is also named \important{leave-one-out cross-validation}. This is really expensive but wastes (almost) no data.
Another much cheaper way to do cross-validation is to split our training data into training and validation data, using \important{validation-set cross-validation}. This is like doing only one iteration of $k$-fold cross-validation, meaning that it is very cheap but wastes a lot of data.
\end{subparag}
\end{parag}
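$k$-fold cross-validation can be sketched as follows (the model and error function here are placeholders; any train/evaluate pair would do):

```python
import random

def k_fold_error(samples, k, train, error):
    """Average the k held-out-fold errors of the model returned by `train`."""
    samples = samples[:]
    random.Random(0).shuffle(samples)            # random partition of the data
    folds = [samples[i::k] for i in range(k)]    # k folds
    errors = []
    for i in range(k):
        held_out = folds[i]                      # drop the i-th partition...
        train_set = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train(train_set)                 # ...train on the k-1 others...
        errors.append(error(model, held_out))    # ...and evaluate on it
    return sum(errors) / k                       # average over the iterations

# Toy usage: the "model" is the mean of the training targets, the error is MSE.
data = [(x, 2.0 * x) for x in range(10)]
mean_model = lambda ts: sum(y for _, y in ts) / len(ts)
mse = lambda m, fs: sum((m - y) ** 2 for _, y in fs) / len(fs)
err = k_fold_error(data, 5, mean_model, mse)
```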
\begin{parag}{Data representation}
All the models we will see only work for fixed-size data. If we want to handle data of varying size (such as text), a good way is to consider a \important{bag of words}: count the number of times each word from a dictionary appears in the text, and put these counts in a big vector.
Note that we don't need to consider the whole English dictionary: picking the set of words appearing in the training data is enough. Also, it is often interesting to turn our vector into a histogram: divide it by the number of words in the sample, so that we instead have a distribution, giving less importance to the length of the text.
We can also apply this to images. To do so, we need to extract ``words'', meaning fixed-size picture elements.
\end{parag}
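The bag-of-words representation can be sketched as follows (toy texts; the vocabulary is built from the training data only, as described above):

```python
from collections import Counter

train_texts = ["the cat sat", "the dog sat on the mat"]
# Vocabulary: the set of words appearing in the training data, in a fixed order.
vocab = sorted({w for t in train_texts for w in t.split()})

def bag_of_words(text, normalise=True):
    """Count vocabulary words in `text`; optionally divide by its length
    to obtain a histogram, so that text length matters less."""
    counts = Counter(text.split())
    vec = [counts[w] for w in vocab]
    if normalise:
        n = len(text.split())
        vec = [v / n for v in vec]
    return vec

v = bag_of_words("the cat sat on the cat", normalise=False)
```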
\begin{parag}{Pre-processing}
The training data might have problems: it might have noise (because of measurement errors), incorrect values, and so on. To fix those, a good idea is to do pre-processing.
\begin{subparag}{Normalisation}
To begin with, a good idea is to scale each individual feature dimension to fall within a specified range (so that we don't give more impact to a dimension ranging from 1000 to 10000 than to another dimension ranging from 0 to 1). This can typically be done by using \important{normalisation}, such as $z$-score normalisation:
\[\widetilde{x}_i^{\left(d\right)} = \frac{x_i^{\left(d\right)} - \mu^{\left(d\right)}}{\sigma^{\left(d\right)}}\]
where $\mu^{\left(d\right)}$ is the mean of the $d$\Th dimension, and $\sigma^{\left(d\right)}$ is its standard deviation.
Note that there are many other ways to do normalisation, such as min-max normalisation (computing $\widetilde{x}_i = \frac{x_i - x_{min}}{x_{max} - x_{min}}$), max-normalisation (computing $\widetilde{x}_i = \frac{x_i}{x_{max}}$ where $x_{max}$ is the maximum in absolute value) or decimal scaling (computing $\widetilde{x}_i = \frac{x_i}{10^k}$ where $k$ is the smallest integer such that $\left|\widetilde{x}_i\right| \leq 1$ for the largest $\widetilde{x}_i$).
\end{subparag}
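As a quick sketch (assuming numpy, not from the course), $z$-score normalisation can be written as:

```python
import numpy as np

def z_score_normalise(X):
    # scale each feature dimension to zero mean and unit standard deviation
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# one feature ranging in the thousands, another between 0 and 1
X = np.array([[1000.0, 0.2], [2000.0, 0.4], [3000.0, 0.9]])
X_tilde = z_score_normalise(X)   # both dimensions now have comparable scale
```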
\begin{subparag}{Imbalanced data}
Another important thing to consider is \important{imbalanced data}. There might be 10 times as much data in one class as in another (between-class imbalance), or data inside a class might have many representatives in some regions of space and far fewer in others (within-class imbalance).
To fix those problems, we can either work on the data, or on the cost function.
To work on the data, the first method is to decrease the data set by \important{undersampling}: we remove samples randomly, or more intelligently by removing samples considered redundant. The second method is to increase the data set by \important{oversampling}: we replicate some of the samples exactly (which might lead to overfitting), or do interpolation (which might create a lot of noise).
To work on the cost function, we can give more impact to some samples:
\[R\left(\left\{\bvec{x}_i\right\}, \left\{\bvec{y}_i\right\}, W\right) = \frac{1}{N} \sum_{i=1}^{N} \beta_i \ell \left(\hat{\bvec{y}}_i, \bvec{y}_i\right)\]
We can for instance use weights inversely proportional to the class frequency, such as $\beta_i = 1 - \frac{N_k}{N}$.
\end{subparag}
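For instance, the weighting $\beta_i = 1 - \frac{N_k}{N}$ can be computed as follows (a small numpy sketch, not from the course):

```python
import numpy as np

def class_weights(y, n_classes):
    # beta_i = 1 - N_k / N, where k is the class of sample i:
    # samples from frequent classes get smaller weights
    N = len(y)
    counts = np.bincount(y, minlength=n_classes)
    return 1.0 - counts[y] / N

y = np.array([0, 0, 0, 1])            # class 0 is three times more frequent
beta = class_weights(y, n_classes=2)  # [0.25, 0.25, 0.25, 0.75]
```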
\end{parag}
\begin{parag}{Ridge regression}
The output of \important{ridge regression} is computed by a simple dot product:
\[\hat{\bvec{y}}_i = W^T \phi\left(\bvec{x}_i\right)\]
The training objective function we want to minimise is a squared Euclidean distance regularised by the sum of squares of the weights:
\[E\left(W\right) = R\left(W\right) + \lambda E_W \left(W\right) = \sum_{i=1}^{N} \left\|\hat{\bvec{y}}_i - \bvec{y}_i\right\|^2 + \lambda \left\|W\right\|^2_F\]
where $\left\|W\right\|^2_F$ is the Frobenius norm of $W$, meaning the sum of the square of all its values, and $\lambda \geq 0$ is a hyperparameter.
This can be solved explicitly:
\[W^* = \left(\Phi^T \Phi + \lambda I_F\right)^{-1} \Phi^T Y\]
where $I_F$ is the $F \times F$ identity matrix.
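This closed form translates directly into code; here is a minimal numpy sketch (the data is synthetic, for illustration only):

```python
import numpy as np

def ridge_fit(Phi, Y, lam):
    # W* = (Phi^T Phi + lambda I_F)^{-1} Phi^T Y, solved without
    # explicitly forming the inverse
    F = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(F), Phi.T @ Y)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 3))            # N = 50 samples, F = 3 features
W_true = np.array([[1.0], [-2.0], [0.5]])
Y = Phi @ W_true                          # noiseless targets
W = ridge_fit(Phi, Y, lam=1e-8)           # tiny lambda: close to least squares
```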
\begin{subparag}{Linear regression}
Setting $\lambda = 0$, we get the special case of \important{linear regression}. Then, the closed-form formula can be rephrased as:
\[W^* = \left(\Phi^T \Phi\right)^{-1} \Phi^T Y = \Phi^{\dagger} Y\]
where $\Phi^{\dagger}$ is the Moore-Penrose pseudo-inverse.
\end{subparag}
\begin{subparag}{Kernelisation}
Using the representer theorem, we can find that:
\[A^* = \left(K + \lambda I_N\right)^{-1} Y\]
This is not of much use on its own, but we can use this result to find how we predict a value $\hat{\bvec{y}}$ for a new $\bvec{x}$:
\[\hat{\bvec{y}} = Y^T \left(K + \lambda I_N\right)^{-1} k\left(X, \bvec{x}\right)\]
Note that the value $Y^T \left(K + \lambda I_N\right)^{-1}$ can be computed once during training, and then be reused at every evaluation of the model.
\end{subparag}
\begin{subparag}{Classification}
Ridge regression can be used for classification tasks, by inputting the result into a softmax function (defined right after), but this does not work very well because we are not encoding this in the objective function.
This is named a \important{least-square classifier}.
\end{subparag}
\end{parag}
\begin{parag}{Logistic regression}
In \important{logistic regression} (which is a linear classification algorithm), we consider negative samples to be $y_i = 0$.
The output of logistic regression is computed by using the \important{softmax function}:
\[\hat{y}^{\left(k\right)} = \frac{\exp\left(\bvec{w}^T_{\left(k\right)} \bvec{x}\right)}{\sum_{j=1}^{C} \exp\left(\bvec{w}^T_{\left(j\right)} \bvec{x}\right)} \in \left[0, 1\right]\]
where $\hat{y}^{\left(k\right)}$ represents the probability that the sample $\bvec{x}$ is in class $k$.
The empirical risk we try to minimise is the \important{cross entropy}:
\[R\left(W\right) = - \sum_{i=1}^{N} \sum_{k=1}^{C} y_i^{\left(k\right)} \ln\left(\hat{y}_i^{\left(k\right)}\right)\]
Note that, for each sample $i$, the label $y_i^{\left(k\right)}$ is non-zero only for a single class $k$ (the labels are one-hot encoded).
There is no closed-form formula, so we need the gradient of the empirical risk in order to do gradient descent (the solution is unique since the function is convex):
\[\nabla_{W} R\left(W\right) = \sum_{i=1}^{N} \bvec{x}_i \left(\hat{\bvec{y}}_i - \bvec{y}_i\right)^T\]
\begin{subparag}{One dimension}
Let's consider $C = 1$. This special case of the softmax function is named the \important{logistic function}:
\[\hat{y}\left(\bvec{x}\right) = \sigma\left(\bvec{w}^T \bvec{x}\right) = \frac{1}{1 + \exp\left(- \bvec{w}^T \bvec{x}\right)}\]
One-dimensional cross-entropy can be rewritten as:
\[R\left(\bvec{w}\right) = -\sum_{i \in P}^{} \ln\left(\hat{y}_i\left(\bvec{w}\right)\right) - \sum_{i \in N}^{} \ln\left(1 - \hat{y}_i\left(\bvec{w}\right)\right)\]
where $P$ is the set of positive samples and $N$ the set of negative samples.
The gradient of one-dimensional cross-entropy is:
\[\nabla_{\bvec{w}} R\left(\bvec{w}\right) = \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)\bvec{x}_i\]
\end{subparag}
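As a sketch (not from the course), gradient descent on the one-dimensional cross-entropy can be implemented as follows, assuming the bias term is appended to each sample:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def logistic_gd(X, y, lr=0.1, steps=2000):
    # minimise the cross-entropy by gradient descent; the gradient is
    # sum_i (y_hat_i - y_i) x_i, as derived above
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        y_hat = sigmoid(X @ w)
        w -= lr * X.T @ (y_hat - y) / len(y)
    return w

# linearly separable 1-D data; first column is the appended bias term
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = logistic_gd(X, y)
predictions = (sigmoid(X @ w) >= 0.5).astype(float)
```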
\begin{subparag}{Kernelisation}
This algorithm can be kernelised, even though this is not very common.
\end{subparag}
\end{parag}
\begin{parag}{Support vector machine}
In \important{support vector machine} (SVM) classification (which is also a linear classifier), we consider negative samples to be $y_i = -1$. Also, we let $\widetilde{\bvec{w}}$ be the vector of parameters without $w^{\left(0\right)}$, and $\bvec{x} \in \mathbb{R}^D$ have no appended 1. Note that we only consider $C = 1$ for now.
The idea of SVM is to maximise the size of the margin. A prediction is given by whether $w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i$ is closest to $-1$ or $1$ (as always for linear classifiers), and the decision boundary is given by $w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x} = \frac{-1 + 1}{2} = 0$.
We show in one of the following paragraphs that the problem can be formulated as:
\[\argmin_{\bvec{w}, \left\{\xi_i\right\}} \frac{1}{2} \left\|\widetilde{\bvec{w}}\right\|^2 + C \sum_{i=1}^{N} \xi_i, \]
subject to $y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right) \geq 1 - \xi_i$ and $\xi_i \geq 0$, for all $i$. Note that $C$ is a hyperparameter, and the $\xi_i$ are \important{slack variables} that we minimise jointly. Those slack variables allow data which is not linearly separable to still be used.
When $\xi_i = 0$, the point is on the correct side of the margin, as it should be. If $0 < \xi_i < 1$, then the point $i$ is on the wrong side of the margin, but correctly classified. If $\xi_i = 1$, the point is on the decision boundary (and thus misclassified). If $\xi_i > 1$, then the point is on the wrong side of the decision boundary, and thus misclassified.
This representation of the problem is known as the \important{primal problem}.
\begin{subparag}{Support vectors}
We notice that, for the margin to be maximised, there must be at least a point from each class lying on it. Such points are named \important{support vectors}.
\end{subparag}
\begin{subparag}{Hinge loss}
By rewriting the constraints, we get:
\[\xi_i \geq 1 - y_i\left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right)\]
For samples $i$ that satisfy the original margin constraint (they are neither in the margin nor misclassified), we have $\xi_i = 0$ (since slack variables are forced to be non-negative). For samples $i$ which do not, the inequality is tight at the optimum. This allows us to rewrite our SVM primal problem as:
\[\argmin_{\bvec{w}} \frac{1}{2C} \left\|\widetilde{\bvec{w}}\right\|^2 + \sum_{i=1}^{N} \max\left(0, 1 - y_i\left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right)\right),\]
which is now unconstrained, since the slack variables have been eliminated.
This new term is called the \important{hinge loss}:
\[\max\left(0, 1 - y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right)\right)\]
\end{subparag}
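The hinge loss itself is a one-liner; here is a small numpy sketch with made-up values:

```python
import numpy as np

def hinge_loss(w0, w, X, y):
    # max(0, 1 - y_i (w0 + w^T x_i)) for every sample
    margins = y * (w0 + X @ w)
    return np.maximum(0.0, 1.0 - margins)

X = np.array([[2.0], [0.5], [-2.0]])
y = np.array([1.0, 1.0, -1.0])
# with w0 = 0 and w = 1: only the sample inside the margin is penalised
losses = hinge_loss(0.0, np.array([1.0]), X, y)   # [0.0, 0.5, 0.0]
```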
\begin{subparag}{Dual problem}
We can reformulate our problem with one variable per training sample (meaning that we have $N$ variables instead of $D+1$):
\[\argmax_{\left\{\alpha_i\right\}} \left(\sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j \bvec{x}_i^T \bvec{x}_j\right),\]
subject to $\sum_{i=1}^{N} \alpha_i y_i = 0$ and $0 \leq \alpha_i \leq C$ for all $i$.
The solution is equivalent to the primal problem:
\[\widetilde{\bvec{w}}^* = \sum_{i=1}^{N} \alpha_i^* y_i \bvec{x}_i \implies \hat{y}\left(\bvec{x}\right) = w^{\left(0\right)*} + \sum_{i = 1}^{N} \alpha_i y_i \bvec{x}_i^T \bvec{x}\]
Note that $w^{\left(0\right)}$ can also be found thanks to those $\alpha_i^*$.
We can show that, at the solution, we have:
\[\alpha_i^* \left(y_i \left(w^{\left(0\right)*} + \widetilde{\bvec{w}}^{*T} \bvec{x}_i\right) - 1 + \xi_i^*\right) = 0\]
In other words, for every sample, either of those terms is equal to 0. The samples for which $\alpha_i^* \neq 0$ are the support vectors. However, most samples are not support vectors, so we can decrease the computations by leaving $\mathcal{S}$ to be the set of support vectors:
\[\hat{y}\left(\bvec{x}\right) = w^{\left(0\right)*} + \sum_{i \in\mathcal{S}}^{} \alpha_i^* y_i \bvec{x}_i^T \bvec{x}\]
\end{subparag}
\begin{subparag}{Kernelisation}
We need the dual problem to kernelise the SVM:
\[\argmax_{\left\{\alpha_i\right\}} \left(\sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j k\left(\bvec{x}_i, \bvec{x}_j\right)\right),\]
subject to $\sum_{i=1}^{N} \alpha_i y_i = 0$ and $0 \leq \alpha_i \leq C$ for all $i$.
The prediction is also similar:
\[\hat{y}\left(\bvec{x}\right) = w^{\left(0\right)*} + \sum_{i \in \mathcal{S}}^{} \alpha_i^* y_i k\left(\bvec{x}_i, \bvec{x}\right)\]
Note that we still have $\alpha_i^* = 0$ for all samples that are not support vectors.
\end{subparag}
\begin{subparag}{Multi-class SVM}
To generalise our algorithm to multiple classes, there are several approaches. The idea is always to combine several binary classifiers.
One way is \important{one-vs-rest}: we train one classifier per class, stating whether a sample is in class $i$ or not. Another way is \important{one-vs-one}: we train one classifier per pair of classes, telling whether a sample is closer to class $i$ or to class $j$, and then pick the class that wins most often. However, in both cases, some samples will get ambiguous answers (belonging to multiple classes, or to none).
\end{subparag}
\begin{subparag}{Primal derivation}
Let's consider how we got the formula for the primal model, since it may typically help to remember and understand it.
First, we know that any two points on the decision boundary have the same prediction (which is 0), which yields that:
\[0 = \hat{y}_1 - \hat{y}_2 = \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_1\right) - \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_2\right) = \widetilde{\bvec{w}}^T \left(\bvec{x}_1 - \bvec{x}_2\right)\]
This is a dot product, and it thus means that $\widetilde{\bvec{w}}$ is orthogonal to the decision boundary.
Second, we use this information to split any point into a component colinear to $\widetilde{\bvec{w}}$, and one orthogonal to it (meaning colinear to the decision boundary):
\[\bvec{x} = \bvec{x}_{\perp} + r \frac{\widetilde{\bvec{w}}}{\left\|\widetilde{\bvec{w}}\right\|}\]
where $r$ is the signed orthogonal distance of any point to the decision boundary.
Now, looking at the prediction yielded by this point, we get:
\[\hat{y}\left(\bvec{x}\right) = w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x} = w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_{\perp} + r \frac{\widetilde{\bvec{w}}^T \widetilde{\bvec{w}}}{\left\|\widetilde{\bvec{w}}\right\|} = y\left(\bvec{x}_{\perp}\right) + r\left\|\widetilde{\bvec{w}}\right\|\]
However, we know that $y\left(\bvec{x}_{\perp}\right) = 0$ since it is on the decision boundary, meaning that:
\[\hat{y}\left(\bvec{x}\right) = r\left\|\widetilde{\bvec{w}}\right\| \iff r = \frac{\hat{y}\left(\bvec{x}\right)}{\left\|\widetilde{\bvec{w}}\right\|}\]
which is, to recall, the signed orthogonal distance of $\bvec{x}$ to the decision boundary.
Note that we can then make use of the ground truth label being $-1$ or 1 to make an unsigned distance:
\[\widetilde{r}_i = y_i r_i = \frac{y_i\left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right)}{\left\|\widetilde{\bvec{w}}\right\|}\]
This is what we would like to maximise, but doing so directly is hard. We thus need to turn it into an equivalent problem. To do so, we notice that there is an infinite number of equivalent solutions, since only the direction of $\bvec{w}$ matters and its magnitude does not. This can be proven mathematically by multiplying our weights by a factor $\lambda$, and seeing that we get the same $\widetilde{r}_i$ as above (the $\lambda$ cancels out in the fraction).
So, we may as well require that the margin has size $\frac{1}{\left\|\widetilde{\bvec{w}}\right\|}$, meaning that:
\[r_i \geq \frac{1}{\left\|\widetilde{\bvec{w}}\right\|} \iff r_i \left\|\widetilde{\bvec{w}}\right\| \geq 1 \iff y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right) \geq 1\]
Now, maximising the margin means maximising $\frac{1}{\left\|\widetilde{\bvec{w}}\right\|}$ which can be shown to be equivalent to minimising $\left\|\widetilde{\bvec{w}}\right\|^2$. Our problem has thus become:
\[\argmin_{\bvec{w}} \frac{1}{2} \left\|\widetilde{\bvec{w}}\right\|^2, \mathspace \text{subject to } y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right) \geq 1, \ \forall i\]
where the factor $\frac{1}{2}$ was added for convenience.
However, if the data is not linearly separable, we need a way to let some samples violate this rule. This is done by adding \important{slack variables} $\xi_i \geq 0$ for each sample:
\[y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right) \geq 1 - \xi_i\]
In other words, if $0 < \xi_i < 1$, the sample $i$ lies inside the margin but is still correctly classified. If $\xi_i \geq 1$, then the sample $i$ is misclassified.
We minimise those variables jointly with our original problem, giving us our final formulation:
\[\argmin_{\bvec{w}, \left\{\xi_i\right\}} \frac{1}{2} \left\|\widetilde{\bvec{w}}\right\|^2 + C \sum_{i=1}^{N} \xi_i, \]
subject to $y_i \left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right) \geq 1 - \xi_i$ and $\xi_i \geq 0$, for all $i$.
\end{subparag}
\end{parag}
\begin{parag}{$K$-nearest neighbours}
The idea of \important{$k$-nearest neighbours} (kNN) is to compute the distance between the test sample $\bvec{x}$ and all training samples $\left\{\bvec{x}_i\right\}$ and find the $k$ samples with minimum distances. Then, we can do classification by finding the most common label amongst these $k$ nearest neighbours, or do regression by computing $\hat{\bvec{y}}$ as a function of those neighbours and their distance to $\bvec{x}$.
To compute close points efficiently, we can use data structures such as $k$-d trees.
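A brute-force version of kNN classification fits in a few lines (a numpy sketch with toy data, not from the course):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k):
    # compute the distance from x to every training sample,
    # keep the k closest ones, and take the majority label
    distances = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
label = knn_classify(X_train, y_train, np.array([0.05, 0.0]), k=3)  # 0
```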
\begin{subparag}{Remark}
The result of this model depends on the choice of the distance function. One can take the \important{Euclidean distance}:
\[d\left(\bvec{x}_i, \bvec{x}\right) = \sqrt{\sum_{d=1}^{D} \left(x_i^{\left(d\right)} - x^{\left(d\right)}\right)^2}\]
However, for other structures such as histograms (where each data point has components between 0 and 1 that sum to 1, like a probability distribution), we instead tend to use the Chi-square distance:
\[d\left(\bvec{x}_i, \bvec{x}\right) = \sqrt{\chi^2 \left(\bvec{x}_i, \bvec{x}\right)} = \sqrt{\sum_{d=1}^{D} \frac{\left(x_i^{\left(d\right)} - x^{\left(d\right)}\right)^2}{x_i^{\left(d\right)} + x^{\left(d\right)}}} \]
The important thing to remember from this paragraph is that the choice of the distance function is important.
\end{subparag}
\begin{subparag}{Curse of dimensionality}
Because of a principle named the \important{curse of dimensionality}, we need exponentially more points to cover a space as the number of dimensions increases. Using dimensionality reduction is a good idea with this algorithm.
\end{subparag}
\begin{subparag}{Complexity}
Unlike for most other models, increasing this model's hyperparameter $k$ decreases the complexity: the higher the $k$, the less complex the decision boundary, and thus the less overfitting we get.
\end{subparag}
\end{parag}
\begin{parag}{Neural networks}
Neural networks can do both classification and regression (depending on the output representation and the empirical risk used, typically square loss for regression and cross-entropy for classification), and their main advantage is that they learn a good model during the training.
This method is named \important{neural network}, \important{multi-layer perceptron} (MLP), or \important{deep learning} (as long as there are at least two hidden layers).
The idea is to have layers composed of neurons. Every neuron of a layer is connected to every neuron of the previous layer, meaning that its value is computed as a weighted sum over them, plus a bias. Moreover, and this is what prevents our model from being just a big linear regression, each neuron's value is passed through a non-linear activation function. Mathematically speaking, this is given by:
\[\bvec{z}_{\left(l \right)} = f_{\left(l \right)} \left(W_{\left(l \right)}^T \bvec{z}_{\left(l - 1\right)}\right)\]
where $\bvec{z}_{\left(0\right)} = \bvec{x}$ is the input layer, $\bvec{z}_{\left(L+1\right)} = \hat{\bvec{y}}$ is the output layer, $L$ is the number of hidden layers (layers which are neither input nor output), and $f_{\left(l\right)}$ is applied component-wise to the vector. Note that each $\bvec{z}_{\left(l\right)}$ has a ``bias term'' just like the input data (meaning a 1 appended at the beginning).
This is trained using stochastic gradient descent, by focusing on a single loss term $\ell \left(\hat{\bvec{y}}_i, \bvec{y}_i\right)$ at a time. To do so, we need to compute the gradients $\frac{\partial \ell_i}{\partial W_l^{\left(k, j\right)}}$ (where we consider the loss of the $i$\Th sample, and the weight at position $\left(k, j\right)$ of the weight matrix of layer $l$). This is done by an algorithm named \important{backpropagation}.
We can notice that, by the chain rule (and abusing slightly of the notation of the derivative):
\[\frac{\partial \ell _i}{\partial W_l} = \frac{\partial z_{\left(l\right)}}{\partial W_{l}} \frac{\partial z_{\left(l+1\right)}}{\partial z_{\left(l\right)}} \cdots \frac{\partial z_{\left(L\right)}}{\partial z_{\left(L-1\right)}} \frac{\partial \ell _i}{\partial z_{\left(L\right)}} \]
We can compute $\frac{\partial z_{\left(l\right)}}{\partial z_{\left(l-1\right)}}$ and $\frac{\partial z_{\left(l\right)}}{\partial W_l}$ rather easily if we store the values of each layer during the forward pass. We can then propagate those values backwards, computing the gradients from the end and updating the weights as we go.
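To make the forward pass and backpropagation concrete, here is a minimal numpy sketch of a one-hidden-layer network with ReLU activation and square loss (biases are omitted for brevity, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))    # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))    # hidden -> output weights

X = rng.normal(size=(64, 2))
Y = (X[:, :1] + X[:, 1:]) ** 2             # a simple non-linear target

lr = 0.01
losses = []
for _ in range(300):
    # forward pass, storing intermediate values for backpropagation
    a1 = X @ W1
    z1 = np.maximum(0.0, a1)               # ReLU activation
    Y_hat = z1 @ W2
    losses.append(np.mean((Y_hat - Y) ** 2))
    # backward pass: chain rule, from the loss back to each weight matrix
    dY = 2.0 * (Y_hat - Y) / len(X)
    dW2 = z1.T @ dY
    dz1 = dY @ W2.T
    da1 = dz1 * (a1 > 0)                   # derivative of ReLU
    dW1 = X.T @ da1
    W1 -= lr * dW1
    W2 -= lr * dW2
```

With a small enough learning rate, the training loss decreases over the iterations.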
\begin{subparag}{Activation functions}
There are multiple choices for activation functions. The important thing is that they are non-linear.
We can for instance take the ReLU (Rectified Linear Unit) activation function:
\begin{functionbypart}{f\left(a\right)}
a, & \text{if } a > 0 \\
0, & \text{otherwise}
\end{functionbypart}
Another choice could be the sigmoid:
\[f\left(a\right) = \frac{1}{1 + \exp\left(-a\right)}\]
\end{subparag}
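Both activation functions are direct to implement (a numpy sketch):

```python
import numpy as np

def relu(a):
    # Rectified Linear Unit: a if a > 0, otherwise 0
    return np.maximum(0.0, a)

def sigmoid(a):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-a))

a = np.array([-2.0, 0.0, 3.0])
relu_out = relu(a)      # [0.0, 0.0, 3.0]
mid = sigmoid(0.0)      # 0.5
```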
\begin{subparag}{Convolutional network}
When working with pictures, just vectorising them may yield very many elements, while destroying the fact that the picture is inherently two-dimensional. A way to circumvent this problem is to use convolutions.
To make a convolution, we need a small matrix of weights (plus a bias). We center this matrix on a pixel, compute the resulting weighted sum, add the bias, and use this as our result. Shifting the matrix to center it on each pixel in turn yields a new picture. This allows us to extract some features of the original picture, such as its edges.
We can also use multiple filters to create multiple channels, increasing the amount of data. If at some point we have 3 channels (for instance) and want to do a convolution with a $5 \times 5$ filter, then we use matrices of size $5 \times 5 \times 3$ (going three-dimensional over our channels). Note that we can also use other parameters, such as strides (for instance, skipping one pixel out of two) or padding (adding some pixels on the edges).
We can also use pooling layers, splitting the pixels into $k \times k$ squares and extracting only one value per square (by taking the maximum or the average, for instance). This allows us to decrease the size of our pictures.
The main interest then comes from stacking those operations. For instance, beginning with a $28 \times 28$ input, we may apply convolutions to get a $3 \times 24 \times 24$ layer (we lose some pixels if we require the filter to stay entirely inside the picture) and then a pooling layer to get a $3 \times 12 \times 12$ layer.
The idea of a \important{convolutional neural network} (CNN) is to apply such convolutions (typically right after the input layer, but also in-between later layers). The key point is that we also optimise the values inside the filters of our convolutions; their gradients are again computed by backpropagation.
\end{subparag}
\end{parag}
\begin{parag}{Principal component analysis}
Sometimes, data is given in many dimensions but actually lies in many fewer. The idea of \important{principal component analysis} (PCA) is to project the data onto some orthogonal axes (spanning a lower-dimensional space), in a way that maximises the kept variance.
Leaving $\bar{\bvec{x}} = \frac{1}{N}\sum_{i=1}^{N} \bvec{x}_i$ to be the mean, we have:
\[\bvec{y}_i = W^T \left(\bvec{x}_i - \bar{\bvec{x}}\right)\]
To find this matrix $W \in \mathbb{R}^{D \times d}$ (where $d$ is the number of dimensions after the projection), we first need to consider the data covariance matrix:
\[C = \frac{1}{N} \sum_{i=1}^{N} \left(\bvec{x}_i - \bar{\bvec{x}}\right) \left(\bvec{x}_i - \bar{\bvec{x}}\right)^T\]
Then, picking the $d$ eigenvectors of $C$ with the highest eigenvalues, we get our matrix:
\[W = \begin{pmatrix} & & \\ \bvec{w}_{\left(1\right)} & \cdots & \bvec{w}_{\left(d\right)} \\ & & \end{pmatrix} \in \mathbb{R}^{D \times d}\]
The explained variance yielded by our projection can be found by computing:
\[\text{exval} = \frac{\sum_{j=1}^{d} \lambda_j}{\sum_{j=1}^{D} \lambda_j}\]
Using PCA can make the computations of the models much faster without losing much precision, and can also give some insight into the data.
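The procedure above can be sketched in a few lines of numpy (toy data, for illustration only):

```python
import numpy as np

def pca(X, d):
    # center the data, build the covariance matrix, and project onto
    # the d eigenvectors with the largest eigenvalues
    x_bar = X.mean(axis=0)
    Xc = X - x_bar
    C = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:d]
    W = eigvecs[:, order]
    explained = eigvals[order].sum() / eigvals.sum()
    return Xc @ W, W, explained

rng = np.random.default_rng(0)
t = rng.normal(size=100)
# 2-D points lying almost exactly on a line: one dominant direction
X = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=100)])
Y, W, explained = pca(X, d=1)   # explained variance close to 1
```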
\begin{subparag}{Remark}
Since the axes on which we project the data are orthogonal, we have:
\[W^T W = I_d\]
To make sure of this, we need to take the eigenvectors so that they are orthogonal. This is always possible because $C$ is symmetric, thanks to the spectral theorem (which also guarantees that the eigenvalues are real, and can thus be compared).
\end{subparag}
\begin{subparag}{Mapping}
From our computation, we can notice that, for any point $\bvec{y} \in\mathbb{R}^d$, we can move it to the high-dimensional space:
\[\hat{\bvec{x}} = \bar{\bvec{x}} + W \bvec{y}\]
This yields all the advantages presented in the first section of this document.
\end{subparag}
\begin{subparag}{Kernelisation}
PCA can be kernelised in a non-trivial fashion.
First, we need to account for the fact that our data may not be centered in input-space (this was done above by considering the mean of our input values), letting:
\[\widetilde{K} = K - 1_N K - K 1_N + 1_N K 1_N\]
where $1_N$ is an $N \times N$ matrix, with every element equal to $\frac{1}{N}$.
Leaving $\bvec{a}$ to be the vector of unknowns given by the representer theorem, we can find that it follows the following equation:
\[\widetilde{K} \bvec{a} = \lambda N \bvec{a}\]
This is an eigenvalue problem, which could be solved to find a solution $\bvec{a}$. From there, we can project our data:
\[y_i = \sum_{j=1}^{N} a_j k\left(\bvec{x}_i, \bvec{x}_j\right)\]
supposing that we want $d = 1$.
If we want $d > 1$, we can again take the $d$ eigenvectors with greatest eigenvalues, and compute each component of $\bvec{y}_i$ thanks to a different eigenvector.
Note that we can no longer map data from $d$ dimensions back to $D$ dimensions with the kernelised method (since it would require us to know the feature expansion mapping $\phi\left(\bvec{x}\right)$).
\end{subparag}
\end{parag}
\begin{parag}{Autoencoder}
Another way to do dimensionality reduction is through a neural network.
The idea is to have a double-funnel-shaped neural network: an encoder decreasing the dimension, a bottleneck layer with $d$ neurons, and a decoder increasing the number of dimensions back.
\imagehere[0.7]{Autoencoder.png}
We can train it to output the same data as its input, using a least-squares empirical risk. That way, the bottleneck must learn an intelligent code.
\begin{subparag}{Remark}
This can also be used to both do dimensionality reduction and mapping data back from the low-dimensional space.
\end{subparag}
\begin{subparag}{Convolutional autoencoder}
We can use convolutional neural networks for autoencoders. To do so, the decoder uses operations inverting those convolutions.
\end{subparag}
\end{parag}
\begin{parag}{Fisher linear discriminant analysis}
Even though \important{Fisher linear discriminant analysis} (LDA) is a dimensionality reduction algorithm, it is a supervised learning one. Its goal is to project the data onto a lower-dimensional space, while keeping classes (hence the supervision) clustered. It considers that clustering should be done in terms of compactness: distances between points within a cluster should be small, whereas distances across clusters should be large.
The goal is to minimise the distances within a class, meaning the distance of the elements of a class to their class center $E_W\left(\bvec{w}\right)$, while maximising the distance of cluster centers (weighted by the number of elements in the class) $E_B\left(\bvec{w}\right)$. This is better expressed thanks to the within-class scatter matrix $S_W$ and the between-class scatter matrix $S_B$:
\[S_W = \sum_{c=1}^{C} \sum_{i \in c}^{} \left(\bvec{x}_i - \bvec{\mu}_c\right) \left(\bvec{x}_i - \bvec{\mu}_c\right)^T, \mathspace S_B = \sum_{c=1}^{C} N_c \left(\bvec{\mu}_c - \bar{\bvec{x}}\right)\left(\bvec{\mu}_c - \bar{\bvec{x}}\right)^T\]
where $\bar{\bvec{x}}$ is the mean of all the samples, and $\bvec{\mu}_c$ the mean of the values in class $c$.
Our problem can be specified as the following generalised eigenvector problem:
\[S_B \bvec{w}_{\left(1\right)} = \lambda_1 S_W \bvec{w}_{\left(1\right)}\]
As for PCA, we are in fact looking for the $d$ eigenvectors with the largest eigenvalues.
\end{parag}
\begin{parag}{$K$-means clustering}
The idea of \important{$k$-means clustering} is to consider that clustering should also be done relatively to compactness.
To do so, we consider $K$ (a hyperparameter) cluster centers $\left\{\bvec{\mu}_1, \ldots, \bvec{\mu}_K\right\}$. Until convergence, we assign each data point $\bvec{x}_i$ to its nearest center $\bvec{\mu}_k$, and then move each $\bvec{\mu}_k$ to the mean of the points assigned to it.
This algorithm is guaranteed to converge, even though it may not converge to the expected solution: some clusters may even end up completely empty. A good way to mitigate this problem is to run it multiple times with different initialisations, and pick the solution minimising the sum of the distances between each point and its assigned cluster center.
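This alternation (often called Lloyd's algorithm) can be sketched as follows (numpy, toy data):

```python
import numpy as np

def k_means(X, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialise the centers on K distinct random data points
    centers = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the mean of its points
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = k_means(X, K=2)   # two well-separated clusters
```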
\begin{subparag}{Hyperparameter}
To choose $K$, a good way is to draw a graph of the average within-cluster sum of distances with respect to the number of clusters, and pick a point at its ``elbow'' (where the drop in the $y$-axis becomes less significant).
\end{subparag}
\end{parag}
\begin{parag}{Spectral clustering}
The idea of \important{spectral clustering} is to consider that clustering should instead be done in terms of connectivity: we group the points based on the edges of a graph, removing some of the longest edges. Let us first consider the case where we only want two clusters, meaning that we only need one cut.
To create the graph, we need a way to give weights to edges in order to represent their affinity, a way to do so is:
\[w\left(i, j\right) = \exp\left(\frac{-\left\|\bvec{x}_i - \bvec{x}_j\right\|^2}{\sigma ^2}\right)\]
where $\sigma$ is a hyperparameter. Note that this weight decreases as the distance between $\bvec{x}_i$ and $\bvec{x}_j$ increases. Also, considering this full graph may be expensive, so we can restrict it to the $k$ nearest neighbours of each point.
The goal is now to find a partition $\left\{A, B\right\}$ minimising the sum of the weights of the edges we have to remove. We name this value the cut:
\[\text{cut}\left(A, B\right) = \sum_{i \in A}^{} \sum_{j \in B}^{} w\left(i, j\right)\]
Just minimising the cut might favour imbalanced partitions, so we also define the degree $d_i$ of a node in the graph, and the volume of a partition:
\[d_i = \sum_{j}^{} w\left(i, j\right), \mathspace \text{vol}\left(A\right) = \sum_{i \in A}^{} d_i\]
Our goal is now to minimise a normalised cut:
\[\text{ncut}\left(A, B\right) = \frac{\text{cut}\left(A, B\right)}{\text{vol}\left(A\right)} + \frac{\text{cut}\left(A, B\right)}{\text{vol}\left(B\right)}\]
This problem is NP-hard, but it can be relaxed as the following eigenvector problem:
\[\left(D - W\right)\bvec{y} = \lambda D \bvec{y}\]
where $D$ is the diagonal degree matrix ($D_{i, i} = d_i$) and $W$ is the affinity matrix of the graph ($W_{i, j} = w\left(i, j\right)$).
The eigenvector with smallest eigenvalue can be shown to be a vector of all ones with eigenvalue 0. We are thus looking for the eigenvector with second smallest eigenvalue. A positive value in this vector indicates that the corresponding point belongs to one partition, and a negative value to the other.
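As a sketch (not from the course), the two-way partition can be computed by symmetrising the generalised eigenproblem, since $(D - W)\bvec{y} = \lambda D \bvec{y}$ is equivalent to a standard eigenproblem for $D^{-1/2}(D - W)D^{-1/2}$:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    # affinity matrix: w(i, j) = exp(-||x_i - x_j||^2 / sigma^2)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-sq_dists / sigma ** 2)
    d = W.sum(axis=1)                        # node degrees
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    # symmetrised form of (D - W) y = lambda D y
    L_sym = D_isqrt @ (np.diag(d) - W) @ D_isqrt
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    y = D_isqrt @ eigvecs[:, 1]              # second smallest eigenvalue
    return y >= 0                            # the sign gives the partition

X = np.array([[0.0, 0.0], [0.2, 0.0], [4.0, 4.0], [4.2, 4.0]])
partition = spectral_bipartition(X)   # separates the two point groups
```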
\begin{subparag}{Remark}
Since we had to relax the problem, the solution is not always optimal.
\end{subparag}
\begin{subparag}{$K$-way partitions}
To obtain more than two clusters, we have two choices.
The first one is to recursively apply the two-way partitioning algorithm. This is inefficient and unstable.
The second one is to find the $K$ eigenvectors with smallest eigenvalues, which leads to each point being represented by a $K$-dimensional vector. We can then apply $K$-means clustering to those resulting vectors.
\end{subparag}
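A minimal sketch of the embedding step of the second approach (the helper name is mine, and I again use the symmetrised form $D^{-1/2}\left(D - W\right)D^{-1/2}$ of the relaxed problem):

```python
import numpy as np

def spectral_embedding(W, K):
    """Map each point to a K-dimensional spectral coordinate (sketch)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)     # eigenvalues in ascending order
    return D_inv_sqrt @ vecs[:, :K]     # one K-dimensional row per point

# Affinity matrix with three obvious groups of two points each (block diagonal)
W = np.kron(np.eye(3), np.ones((2, 2))) - np.eye(6)
emb = spectral_embedding(W, 3)
```

The rows of \texttt{emb} are the $K$-dimensional representations of the points; any $K$-means implementation can then cluster them. Points in the same group get (near-)identical rows.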
\end{parag}
\end{document}
MATH 101 (en) – Analysis I (English)
Notes for the course given in Fall 2021
Teacher: Roberto Svaldi
Head Assistant: Stefano Filipazzi
Notes written by Zsolt Patakfalvi & Roberto Svaldi
Thursday 12th October, 2023
This work is licensed under a Creative Commons "Attribution-NonCommercial-NoDerivatives 4.0 International" license.
CONTENTS

1 Proofs
2 Basic notions
2.1 Sets
2.2 Number sets
2.2.1 Half lines, intervals, balls
2.2.2 Extended real numbers
2.3 Bounds
2.3.1 Basic definitions, properties, and results
2.3.2 Archimedean property of R
2.3.3 An alternative definition for infimum/supremum
2.3.4 Infimum and supremum for subsets of Z
2.4 Rational numbers vs real numbers
2.4.1 √3 is a real number
2.4.2 Integral part
2.4.3 Rational numbers are dense in R
2.4.4 Irrational numbers are dense in R
2.5 Absolute value
2.5.1 Properties of the absolute value
2.5.2 Triangular inequality
3 Complex numbers
3.1 Definition
3.2 Operations between complex numbers
3.3 Polar form
3.4 Euler formula
3.5 Finding solutions of equations with complex coefficients
3.5.1 Solving complex equations
4 Sequences
4.1 Recursive sequences
4.2 Induction
4.3 Bernoulli inequality and (non-)boundedness of geometric sequences
4.4 Limit of a sequence
4.4.1 Limits and algebra
4.5 Squeeze theorem
4.5.1 Limits of recursive sequences
4.5.2 Unbounded sets and infinite limits
4.6 More convergence criteria
4.7 Monotone sequences
4.8 Liminf, limsup
4.9 Subsequences
4.10 Cauchy convergence
1 PROOFS

The means to explore analysis from a mathematical viewpoint within this course will be mathematical proofs. Part of the goal of the course will be for you to learn how to prove mathematical statements via mathematical proofs.

There are two main types of proof that we will encounter:

◦ Constructive proof: an argument in which, starting from certain hypotheses/assumptions, one tries to explicitly construct a mathematical object, or to explicitly show that a certain mathematical property holds for a mathematical object;

◦ Proof by contradiction: an argument in which we assume that the conclusion that we are trying to reach does not hold, and we show that this assumption, together with our starting hypotheses, leads to a contradiction.

You have probably already encountered many constructive proofs; on the other hand, the reader may be encountering proofs by contradiction for the first time. So, let us start by giving a classical example of a proof by contradiction.

Before we explain our first example, let us recall that the set of rational numbers is the set of numbers of the form a/b, with a, b integers and b ≠ 0, where the following identification between different fractions holds: for any non-zero integer c, a/b = (a·c)/(b·c).

We shall start by showing a classical argument by contradiction. For the time being we shall assume that we know how to construct the real numbers, and that we know that √3, that is, the positive solution to the equation X² − 3 = 0, is a real number. For a more detailed discussion about the real numbers, we refer the reader to Section 2.

Proposition 1.1. The real number √3 is not a rational number.

We are going to use a proof by contradiction; that is, we are going to assume that √3 is rational and derive, by means of logical implications, a contradiction to some other fact that we already know, or to some other fact that is implied by the assumed rationality of √3.

Let us recall here that a natural number p is prime if and only if the only natural numbers that divide p are 1 and p itself.

Exercise 1.2. Prove that the following two properties of a natural number p are equivalent:
◦ p is prime;
◦ if a, b are natural numbers and p divides ab, then either p divides a or p divides b.
Proof of Proposition 1.1. Assume that √3 is rational. Thus, we may write

√3 = a/b    (1.2.a)

for some integers a and b ≠ 0. As √3 > 0, a and b must have the same sign. If they are both negative, by multiplying both by −1 we may assume that they are positive. Hence, we will assume that a, b are both positive integers.
Furthermore, by dividing both a and b by their greatest common divisor gcd(a, b)¹, we may assume

¹Let us recall here the Fundamental Theorem of Arithmetic: any natural number n can be written uniquely as a product of powers of prime numbers: namely, n = p1^k1 · p2^k2 · ... · pm^km, where p1, ..., pm are distinct prime numbers and k1, ..., km are natural numbers > 0. For example, 36 = 4 · 9 = 2² · 3². In view of this, given two natural numbers a, b, gcd(a, b) is defined by writing it as a product gcd(a, b) = q1^j1 · q2^j2 · ... · qr^jr, where the qi are the primes that divide both a and b and ji is the maximal natural number such that qi^ji divides both a and b.
that a and b are relatively prime, that is, they do not share any prime factors. Multiplying both sides of (1.2.a) by b, then, since b ≠ 0,

b√3 = a.    (1.2.b)

Squaring both sides of (1.2.b) yields

b² · 3 = a².    (1.2.c)

Hence, as 3 divides the left hand side of (1.2.c), 3 must divide the right hand side, too. As 3 is prime, by Exercise 1.2 it divides a, so that for some integer r,

a = 3r.    (1.2.d)

Substituting the relation (1.2.d) into equation (1.2.c), we obtain that

b² · 3 = (3r)² = 9r².

Hence, b² = 3r², which implies that 3 | b². (We write x | y, with x, y integers, to mean that x divides y.) Again, as 3 is prime, then, since 3 | b²,

3 | b.    (1.2.e)

But (1.2.d) and (1.2.e) together contradict the relatively prime assumption on a and b. Thus, we obtained a contradiction with our original assumption, so that √3 is not a rational number. □
Remark 1.3. The proof of Proposition 1.1 is a nice example of a proof by contradiction. On the other hand, it does not tell us much about the nature of √3. What is √3? Is it a real number? How can we define real numbers? What notable properties do those have? We will get back to these questions in Sections 2.2-2.4.

We can generalize the above proof to any prime number p ∈ N.

Exercise 1.4. Imitate the proof of Proposition 1.1 to show that for every prime number p ∈ N, √p is not rational.

In particular, Exercise 1.4 implies that √2 ∉ Q.

As easy as it may seem at a first glance to find and write mathematical proofs, one ought to be extremely careful: it is indeed very easy to write wrong proofs! This is often due to the fact that one may assume something wrong in the course of a proof: if the premise of an implication is false, then anything can follow from it.
Example 1.5. Here is an example of an (incorrect) proof showing that 1 is the largest natural
number, a fact that is clearly false, since 2 > 1 and 2 ∈N.
Claim. 1 is the largest integer.
WRONG PROOF. Let l be the largest integer. Then l ≥ l², so that l − l² = l(1 − l) ≥ 0. Hence, there are two possibilities for l(1 − l) ≥ 0:
a) either l < 0 and 1 − l ≤ 0; or,
b) l ≥ 0 and 1 − l ≥ 0.
As 0 is an integer, we must be in case b), so that l ≥ 0 and l ≤ 1. Hence l = 1.
This claim cannot possibly be true: in fact, 2 is definitely an integer and 2 > 1. Even better, the set of integral numbers is not bounded from above², that is, there is no real number C such that z ≤ C for all z ∈ Z.
What went wrong in the above proof? All the algebraic manipulations that we made following the first line of the proof appear to be correct. [Go back and check that!!] Thus, the issue must be contained in the (absurd) assumption we made in the first sentence: Let l be the largest integer. In fact, as we just explained, there cannot be a largest element in the set of integers: given an integer l, then l + 1 is also an integer and l + 1 > l, which clearly shows that the above assumption was unreasonable.
Analysis is mostly focused on the study of real and complex numbers³ and their properties. Even more generally, analysis is concerned with studying (or analyzing, hence the name Analysis) functions defined over the real numbers (alternatively, over the complex numbers) with values in the real numbers (alternatively, in the complex numbers) and their important properties⁴. In order to carry out such analysis, we will often need to deal with infinity. Roughly speaking, we will often be interested in understanding numbers/functions from an infinitely small or an infinitely large viewpoint. Our main goal will be to provide a framework to treat in a formal mathematical way all the different aspects of infinity in the realm of real/complex numbers. To make slightly better sense of this statement, you may try to think of (and formalize) how to define the speed of a particle moving linearly on a rod at a given time t.
How should we define the real numbers? Even more importantly, how can we represent them numerically? Intuitively, we have been taught that real numbers are those numbers that we can represent numerically by writing down a decimal expansion, for example,
√2 = 1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360...

As suggested by this example, it may be the case that when we try to represent certain real numbers, we have to account for an infinite decimal part⁵ of the expansion, that is, there is an infinite sequence of digits to the right of the decimal dot ".". Hence, we may at first be tempted to adopt the following definition of the set of real numbers:

The real numbers are all those numbers that we can represent with a decimal expansion whose integral part (the digits to the left of ".") can be written using a finite number of digits (chosen in the set {0, 1, 2, ..., 9}), whereas its decimal part (the digits to the right of ".") is any infinite sequence of digits (again chosen in the set {0, 1, 2, ..., 9}).

While this may seem, at first, an intuitively fine definition of the real numbers, it actually hides some subtleties. Here we illustrate one of the main ones: we show that the correspondence between a real number and its decimal expansion is not unique. An example is given by the following proposition, which also provides a great basic example of how we deal with infinity in Analysis.

²We will give a formal definition of what being bounded from above means later, cf. Definition 2.8.
³See Section 3 for the definition and basic properties of complex numbers.
⁴Some of the most important classes of functions that we will encounter are those of continuous, differentiable, integrable, and analytic functions, but there are many other classes of functions that are heavily studied in analysis.
⁵The decimal part of the expansion is the part of the expansion that lies on the right hand side of the point ".". For example, the decimal part of the expansion of 41369.57693 is the sequence 57693. The integral part of the decimal expansion is instead the part of the expansion that lies on the left hand side of the point ".". The integral part of 41369.57693 is 41369. The integral part always has finite length, that is, it can be written using a finite number of digits.
Proposition 1.6. 0.9̄ = 1.

By 0.9̄ we denote the real number whose decimal representation is given by an infinite sequence of 9s in the decimal part, 0.999999...

Proof. We give two proofs, neither of which is completely correct, at least as far as our current definition and knowledge of the real numbers go. Nevertheless, we carefully explain what the issues are in each case; we also explain how these issues will be clarified and taken care of during this course.

(1) First, an elementary proof:

9 · 0.9̄ = (10 − 1) · 0.9̄ = 10 · 0.9̄ − 1 · 0.9̄ = 9.9̄ − 0.9̄ = 9.

So, 0.9̄ is a solution of the equation 9X − 9 = 0; the only solution to this equation is clearly X = 1, thus 0.9̄ = 1.

At first sight, this proof is definitely a reasonable one from the point of view of the algebraic manipulations that we carried out. However, we assumed that we know what 0.9̄ is. Moreover, we also assumed that we can algebraically manipulate 0.9̄ as usual, despite the fact that it has an infinite decimal expansion. None of these facts is that clear if you think about it, as we have not really defined what the properties of numbers like 0.9̄ are.
So, what kind of number is 0.9̄? What are its properties? For example, what algebraic manipulations are we allowed to make with it?

(2) Analysis provides us with a precise definition of 0.9̄:

0.9̄ := Σ_{i=1}^∞ 9/10^i.

On the other hand, what kind of mathematical object is Σ_{i=1}^∞ 9/10^i? This is a series, and we will study series in detail in Section 4. By definition,

Σ_{i=1}^∞ 9/10^i := lim_{n→∞} ( Σ_{i=1}^n 9/10^i ).

We have yet to learn a precise definition of lim, thus we cannot quite continue in a precise way from here; nevertheless, we continue the argument for completeness. If you are not comfortable with it now, that is completely OK, just skip this part of the proof.

However, before we proceed, we need to show an identity for the sum of the elements of a geometric series⁶.
Claim. Let a ∈ R, a ≠ 1. Then,

a + a² + · · · + aⁿ = (a − a^(n+1)) / (1 − a).    (1.6.f)

Proof of the Claim. To prove this equality, we just multiply the left hand side by 1 − a to obtain:

(a + a² + · · · + aⁿ)(1 − a) = a − a·a + a² − a²·a + a³ − · · · − a^(n−1)·a + aⁿ − aⁿ·a = a − a^(n+1).

This shows that (1.6.f) indeed holds, since to obtain the form of the equation in the statement of the claim it suffices to divide both sides by 1 − a. □

⁶A geometric series is a series whose elements are of the form c·a^q, for c, a ∈ R and q ∈ N. This will be explicitly defined when we introduce series later; hence, do not worry about this definition for now.
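The identity (1.6.f) can be checked numerically for a few values of a and n (a small sketch, not part of the notes; the function names are mine):

```python
def geometric_sum(a, n):
    # Left hand side of (1.6.f): a + a^2 + ... + a^n
    return sum(a**i for i in range(1, n + 1))

def closed_form(a, n):
    # Right hand side of (1.6.f): (a - a^(n+1)) / (1 - a), valid for a != 1
    return (a - a**(n + 1)) / (1 - a)
```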
And then we can proceed to show the statement:

Σ_{i=1}^∞ 9/10^i = 9 · Σ_{i=1}^∞ 1/10^i = 9 · lim_{n→∞} ( Σ_{i=1}^n 1/10^i )
= 9 · lim_{n→∞} ( (1/10 − 1/10^(n+1)) / (1 − 1/10) )
= 9 · ( (1/10 − lim_{n→∞} 1/10^(n+1)) / (1 − 1/10) )
= 9 · ( (1/10) / (1 − 1/10) ) = 9 · (1/9) = 1. □

In Section 2 and in the following one, we will introduce all the necessary tools, definitions, notations and conventions to answer all of the questions that were raised in these first few pages.
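A small numerical illustration (not part of the notes; the helper name is mine) of the partial sums Σ_{i=1}^n 9/10^i approaching 1: each finite partial sum is strictly below 1, while the values get arbitrarily close to 1 as n grows.

```python
def partial_sum(n):
    # n-th partial sum of the series sum_{i=1}^infty 9 / 10^i
    return sum(9 / 10**i for i in range(1, n + 1))
```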
2 BASIC NOTIONS

2.1 Sets
A set S is a collection of objects called elements. If a is an element of S, we say that a
belongs to S or that S contains a, and we write a ∈S. If an element a is not in S, we then
write a ̸∈S. If the elements a, b, c, d, . . . form the set S, we write S = {a, b, c, d, . . . }. We can
also define a set simply by specifying that its elements are given by some condition, and we
write
S := {s | s satisfies some condition}.
Notation 2.1. The symbol := indicates that we are identifying the object on the LHS (left
hand side) of “:=” with the object on the RHS (right hand side) of “:=”. You can read it as
“defined as”.
Example 2.2. The set S = {0, 1, 2, 3, 4, 5} of natural numbers that are at most 5 can be
defined as follows
S := {n | n is a natural number and n ≤5}.
A set T is said to be a subset of a set S if any element of T is also an element of S. If T is a subset of S, we denote this by writing T ⊆ S. Given a set S, one can always define a subset T ⊆ S, T := {s ∈ S | s satisfies some condition}, that is, T is the set formed by those elements of S that satisfy the given condition.
Example 2.3. The subset 2N of N of even natural numbers can be defined as
2N := {n ∈N | 2 divides n}.
If T ⊆S, it may happen that there are elements of S which are not contained in T. In this
case we say that T is a strict subset of S, or that T is strictly included/contained in S. When
we want to stress that we know that a subset T of a set S is strictly included in S we shall
write T ⊊S.
Example 2.4. 2N ⊊N since 1 ̸∈2N.
If we just write T ⊆S, we mean that T is a subset of S that may be equal to S, but we are
not making any particular statement about whether or not T is a strict subset of S. Hence, in
the previous Example 2.4, we may have also used the notation 2N ⊆N and that would have
been correct. To write that a set T is not a subset of a set S, we write T ̸⊆S.
We will consider the standard operations between sets, such as intersection, union, and taking the complement. More precisely, given two subsets U, V, we define:

Intersection: U ∩ V := {x | x ∈ U and x ∈ V };
Union: U ∪ V := {x | x ∈ U or x ∈ V };
Complement: U \ V := {x | x ∈ U and x ∉ V }.

Exercise 2.5. Given sets E, F and D, prove that the following relations hold:

Commutativity: E ∩ F = F ∩ E and E ∪ F = F ∪ E;
Associativity: D ∩ (E ∩ F) = (D ∩ E) ∩ F and D ∪ (E ∪ F) = (D ∪ E) ∪ F;
Distributivity: D ∩ (E ∪ F) = (D ∩ E) ∪ (D ∩ F) and D ∪ (E ∩ F) = (D ∪ E) ∩ (D ∪ F);
De Morgan laws: (E ∩ F)ᶜ = Eᶜ ∪ Fᶜ and (E ∪ F)ᶜ = Eᶜ ∩ Fᶜ.
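The relations of Exercise 2.5 can be sanity-checked on small finite sets (a sketch; complements are taken inside a fixed universe U of my own choosing, since the notation Eᶜ presupposes one):

```python
# Small finite sets for checking the relations of Exercise 2.5.
U = set(range(10))                 # the universe, inside which we complement
E, F, D = {1, 2, 3}, {3, 4, 5}, {2, 5, 7}

def comp(S):
    # Complement of S inside the universe U
    return U - S
```

In Python, `&` is intersection, `|` is union, and `-` is set difference, so `E - F` plays the role of E \ F.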
2.2 Number sets

There are a few important sets that we are going to work with all along this course:

(1) ∅: the empty set; it is the set which has no elements, ∅ := { }.
Exercise 2.6. Show that for any set S, ∅ ⊆ S.

(2) N: the set of natural numbers, N := {0, 1, 2, 3, 4, 5, 6, ...}.
N is well ordered, that is, all its non-empty subsets contain a smallest element. We will prove that later in Proposition 2.34.

(3) Z: the set of integral numbers⁷, Z := {..., −1, 0, 1, ...}.

(4) Q: the set of rational numbers, Q := {a/b | a ∈ Z and b ∈ Z \ {0}}, where we impose the following identification between fractions: a/b = (a·c)/(b·c), for c ∈ Z \ {0}.

(5) R: the set of real numbers. It is not easy to actually construct it, and there are some subtleties in trying to define real numbers by means of their decimal representation, as we have already understood from Proposition 1.6.

Remark 2.7. In this course, we will not attempt to provide a rigorous construction of the set of real numbers R, although there are many equivalent constructions. If you are curious, you can click here to find out more about these constructions. Instead of going through the construction of R in the course, we proceed to list here certain properties that uniquely define R [we also do not prove such uniqueness, but, please, believe it] and we will assume them going forward:

(1) Q ⊆ R;

(2) R is an ordered field:
◦ the word field refers to the fact that addition, subtraction, and multiplication are all well-defined operations within R; moreover, these operations respect the commutativity, associativity and distributivity properties, and for all x ∈ R, x ≠ 0, it is possible to define a multiplicative inverse x⁻¹ such that x · x⁻¹ = 1;
◦ the word ordered refers to the fact that, given two elements x, y ∈ R, we can always decide whether x < y, x > y, or x = y; moreover, this comparison is also compatible with the operations that make R into a field.
(3) R satisfies the Infimum Axiom 2.22, which will be introduced in the next section.

The following inclusions hold among the sets just defined:

∅ ⊊ N ⊊ Z ⊊ Q ⊊ R.

To justify these inclusions:
◦ ∅ ⊊ N: N is non-empty. For example, 0 ∈ N.
◦ N ⊊ Z: an integral number can also be negative, for example, −1 ∈ Z, while natural numbers are always non-negative; thus Z ∋ −1 ∉ N.
◦ Z ⊊ Q: 1/2 ∈ Q, but 1/2 ∉ Z.
◦ Q ⊊ R: we saw in Proposition 1.1 that √3 ∉ Q; we will prove formally in Section 2.4.1 that √3 ∈ R.

⁷We will often call an integral number an "integer".
2.2.1 Half lines, intervals, balls

We introduce here further notation regarding the real numbers and some special classes of subsets that we will be using all throughout the course.

(1) Invertible real numbers: R* := {x ∈ R | x ≠ 0}.

(2) Closed half lines: R+ := {x ∈ R | x ≥ 0}, R− := {x ∈ R | x ≤ 0}. At times, these are also denoted by R≥0 and R≤0, respectively.

(3) Open half lines: R*+ := {x ∈ R | x > 0}, R*− := {x ∈ R | x < 0}. At times, these are also denoted by R>0 and R<0, respectively.

We use the analogous definitions also for the sets N*, Z*, Q*; N+, Q+, Z+; N−, Q−, Z−; N*+, Q*+, Z*+; N*−, Q*−, Z*−.

(4) Bounded intervals: if a < b are real numbers, we define

Open bounded interval: ]a, b[ := {x ∈ R | a < x < b}.
Closed bounded interval: [a, b] := {x ∈ R | a ≤ x ≤ b}.
Half-open bounded intervals: ]a, b] := {x ∈ R | a < x ≤ b} and [a, b[ := {x ∈ R | a ≤ x < b}.

If a = b, then [a, b] = [a, a] = {a}. When we say that a subset I is a bounded interval of R of extremes a < b, we mean that I may be any one of [a, b], [a, b[, ]a, b], ]a, b[.
(5) Open balls: let a, δ ∈ R, δ > 0; we define the open ball B(a, δ) ⊆ R of radius δ and center a as

B(a, δ) := ]a − δ, a + δ[.

(6) Closed balls: let a, δ ∈ R, δ ≥ 0; we define the closed ball B(a, δ) ⊆ R of radius δ and center a as

B(a, δ) := [a − δ, a + δ].
When δ = 0, then B(a, 0) = {a}.
2.2.2 Extended real numbers

The extended real line is the set

R̄ := {−∞, +∞} ∪ R.

The symbol +∞ (resp. −∞) is called "plus infinity" (resp. "minus infinity"). In this course ±∞ shall not be treated as numbers: they are just symbols indicating two elements of the extended real line R̄. That means that we will not try to make sense of algebraic operations
involving ±∞; thus, be very careful not to treat these as numbers. If you think about it carefully, you can see that it is hard to coherently define, for example, the result of the addition +∞ + (−∞).

Later in the course we will use these symbols extensively. For the time being, we just want to use them to define the following subsets of R. Let a ∈ R; then

Open unbounded intervals: ]a, +∞[ := {x ∈ R | x > a}, ]−∞, a[ := {x ∈ R | x < a}.
Closed unbounded intervals: [a, +∞[ := {x ∈ R | x ≥ a}, ]−∞, a] := {x ∈ R | x ≤ a}.

Finally,

]−∞, +∞[ := R.

These sets are also called open/closed half lines, or open/closed unbounded intervals, or open/closed extended intervals, where open/closed is determined by whether or not a belongs to the set.

So, from now on, when we say that a subset I of R is an interval, we will mean that I has one of the following forms:
◦ [a, b], ]a, b[, ]a, b], [a, b[, with a, b ∈ R, a < b;
◦ [a, +∞[, ]a, +∞[, ]−∞, a], ]−∞, a[, with a ∈ R;
◦ ]−∞, +∞[ = R.

2.3 Bounds

We now start entering the realm of modern (and rigorous) analysis. We start by defining some important properties of subsets of R.

2.3.1 Basic definitions, properties, and results

Definition 2.8. Let S be a non-empty subset of R.
(1) A real number a ∈ R is an upper (resp. lower) bound for S if s ≤ a (resp. s ≥ a) holds for all s ∈ S.
(2) If S has an upper (resp. a lower) bound, then S is said to be bounded from above (resp. bounded from below).
(3) The set S is said to be bounded if it is bounded both from above and below.
For a set S ⊆ R, upper and lower bounds are in general not unique.

Example 2.9.
(1) The set N ⊂ R is bounded from below, since ∀n ∈ N, n ≥ 0; in particular, 0 is a lower bound. In fact, any negative real number is also a lower bound for N.
On the other hand, N is not bounded from above. While this fact may appear intuitively clear, it is not immediately clear how to prove it formally. Can you find a proof using only the concepts and tools that we have introduced so far in the course? The answer is no, at this point of the course. For a formal proof of the unboundedness of N, we shall need Archimedes' property for R, see Proposition 2.30.
(2) Z is neither bounded from above nor from below. In fact, it cannot be bounded from above since N ⊆ Z. It is also not bounded from below: if a lower bound l ∈ R existed for Z, then −l would be an upper bound for N, which we saw above does not hold. [Prove this assertion in detail!]
(3) The set S := {n² | n ∈ Z} is bounded from below: in fact, ∀n ∈ Z, n² ≥ 0, thus 0 is a lower bound. On the other hand, it is not bounded from above. In fact, assume for the sake of contradiction that S were bounded from above, i.e., that there exists u ∈ R with u ≥ s, ∀s ∈ S. Since for any n ∈ N, n² ≥ n, it would follow that u ≥ n for all n ∈ N, but this contradicts part (1).

(4) The set S := {n³ | n ∈ Z} is neither bounded from above nor from below. [Prove it! The proof is similar to that in part (2).]

(5) The set S := {sin(n²) | n ∈ Z} is bounded, since for all x ∈ R, −1 ≤ sin x ≤ 1. Examples of possible lower bounds are −5 and −13; examples of possible upper bounds are 1 and 27. As sin x ∈ [−1, 1], it is certainly true that
◦ any real number y such that y ≥ 1 is an upper bound for S, while
◦ any real number y such that y ≤ −1 is a lower bound for S.

(6) Let S := [3, 5[ = {x ∈ R | 3 ≤ x < 5}. Then, 5 is an upper bound for S since for any element x of S, x < 5. Moreover, if c is a real number and c > 5, then c is also an upper bound for S, since c > 5 > x for all x ∈ S.
The same reasoning shows that 3 is a lower bound for S and that, for any real number d such that d < 3, d is a lower bound for S as well.
(It is left to you to prove that in this example you will obtain the exact same conclusions if, instead of considering the interval [3, 5[, you considered any of the intervals [3, 5], ]3, 5], ]3, 5[.)
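The bounds in Example 2.9(6) can be illustrated on finitely many sample points of [3, 5[ (a sketch with my own helper names; of course, checking a finite sample proves nothing about the full interval):

```python
def is_upper_bound(c, S):
    # c is an upper bound for S if s <= c for every s in S
    return all(s <= c for s in S)

def is_lower_bound(c, S):
    # c is a lower bound for S if s >= c for every s in S
    return all(s >= c for s in S)

# 1000 equally spaced sample points of [3, 5[ (a stand-in for the interval)
S = [3 + 2 * k / 1000 for k in range(1000)]
```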
Using the discussion of the above examples, we summarize here some of the main properties
of upper and lower bounds.
Proposition 2.10. Let S ⊂R be a non-empty set. Let c ∈R.
(1) If u is an upper bound for S, then any d ≥ u is also an upper bound for S.
(2) If l is a lower bound for S, then any e ≤ l is also a lower bound for S.
(3) If T ⊆ S is a non-empty subset and c is a lower (resp. an upper) bound for S, then c is also a lower (resp. an upper) bound for T.
(4) If T ⊆ S is a non-empty subset and T is not bounded from above (resp. from below), then S is also not bounded from above (resp. from below).
(5) If S is a bounded interval of extremes a < b, then the set of lower bounds (resp. of upper bounds) of S is given by ]−∞, a] (resp. [b, +∞[).
(6) If S := [b, +∞[ or S := ]b, +∞[, b ∈ R, then the set of lower bounds of S is given by ]−∞, b].
(7) If S := ]−∞, a] or S := ]−∞, a[, a ∈ R, then the set of upper bounds of S is given by [a, +∞[.

Proof.
(1) Let u be an upper bound for S. Then ∀s ∈ S, u ≥ s. If d ≥ u, then ∀s ∈ S, d ≥ u ≥ s; in particular, d ≥ s, which shows the desired property.
(2) Analogous to (1) and left as an exercise (see the sheet from Week 2).
(3) If c is a lower bound for S, then c ≤s for all element s ∈S. Since T ⊆S, this means
that any element t ∈T is also an element of S. Hence, a fortiori, the inequality c ≤s,
∀s ∈S implies also that c ≤t, ∀t ∈T.
The case of an upper bound is analogous, it suffices to change the verse of the inequalities.
(4) Since T is not bounded from above, this means that ∀u ∈ R, there exists an element xu ∈ T (which will in general depend on the real number u we fix) such that xu > u. As T ⊆ S, then xu ∈ S; hence ∀u ∈ R, there exists an element xu ∈ S such that xu > u, and u cannot be an upper bound for S. As this holds ∀u ∈ R, S is also not bounded from above.
The case of T not bounded from below is analogous; it suffices to reverse the direction of the inequalities.
(5) Let us assume that S := ]a, b] = {x ∈ R | a < x ≤ b}. The other cases are similar; it is left to you to prove that you will obtain the exact same conclusions if instead of considering the interval ]a, b], you considered any of the intervals [a, b], [a, b[, ]a, b[.
Then, a is a lower bound for S, since for all s ∈ S, a < s. Also, any real number d < a is a lower bound for S, since d < a < s, for all s ∈ S. Similarly, b is an upper bound for S, since ∀s ∈ S, s ≤ b, by definition. Thus, for any real number c > b, then c > b ≥ s, ∀s ∈ S, and c is an upper bound for S. Then, parts (1) and (2) imply that any element of the half line [b, +∞[ (resp. ] −∞, a]) is an upper bound (resp. a lower bound) for S. To conclude, we need to show that no real number c > a (resp. d < b) is a lower bound (resp. an upper bound) of S. To show this, it suffices to show that there exists an
element m ∈ S such that m < c. Since c > a, then a < a + (c − a)/2 < c. If a + (c − a)/2 ∈ S, it suffices to take m := a + (c − a)/2. If a + (c − a)/2 ̸∈ S, then a + (c − a)/2 > b, hence c > b, and it suffices to take m := b.
(6), (7) Analogous to the proof of (5).
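The witness construction in the proof of part (5) can be checked numerically. The following Python sketch is my addition, not part of the notes; the function name is illustrative. For S = ]a, b] and a candidate lower bound c > a, it produces the element m ∈ S with m < c used in the proof.

```python
# Sanity check of Proposition 2.10 (5): for S = ]a, b], any c > a fails to
# be a lower bound, witnessed by an explicit element m of S with m < c.

def lower_bound_counterexample(a, b, c):
    """For S = ]a, b] and a candidate lower bound c > a, return an
    element m of S with m < c, following the proof of part (5)."""
    assert c > a, "the construction only applies when c > a"
    m = a + (c - a) / 2          # midpoint between a and c
    if m > b:                    # the midpoint fell outside S, so c > b
        m = b                    # and b itself is an element of S below c
    assert a < m <= b            # m really belongs to S = ]a, b]
    assert m < c                 # m is below c, so c is no lower bound
    return m

# With S = ]3, 5]: c = 4 is not a lower bound, witnessed by m = 3.5.
print(lower_bound_counterexample(3, 5, 4))    # -> 3.5
# A candidate far above b is defeated by m = b itself.
print(lower_bound_counterexample(3, 5, 100))  # -> 5
```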
We have just seen that upper/lower bounds of a set S are never unique, when some exist. Moreover, if S is an interval of extremes a < b, then a is a lower bound and b is an upper bound. We may be tempted to ask whether in general there exist upper/lower bounds of a set S ⊆ R that are elements of S itself, and what we can say in that case. In general, this is not always true, but nonetheless upper/lower bounds of S that belong to S are very special elements of S.
Definition 2.11. Let S ⊆R be a non-empty set.
(1) The maximum of S is a real number M ∈S which is also an upper bound for S.
(2) The minimum of S is a real number m ∈S which is also a lower bound for S.
In Definition 2.11, we used the definite article “the” to introduce the maximum and minimum of a set of real numbers. This suggests that they should both be uniquely determined. This is indeed the content of the next proposition.
Proposition 2.12. Let S be a non-empty subset of R. If max S (resp. min S) exists, then it
is unique.
Notation 2.13. For S ⊆R, we denote the maximum (resp. the minimum) of S by max S
(resp. min S).
Proof. Suppose, for the sake of contradiction, that a maximum of S exists and it is not unique.
Then there are at least two distinct numbers n, n′ ∈R which are both a maximum for S. As
n, n′ are distinct, i.e., n ̸= n′, we can assume that n < n′. As n′ is a maximum, then n′ ∈S.
But as n is also a maximum, in particular, n is also an upper bound, i.e., n ≥s, ∀s ∈S; hence,
also n ≥n′, which is in contradiction with our assumption above that n′ > n.
You can apply a similar argument for the uniqueness of the minimum.
Example 2.14.
(1) Let us define S := ]1, 2[ = {x ∈ R | 1 < x < 2}. Then S has neither a minimum nor a maximum.
In fact, if u ∈R is an upper bound for S, then, by definition, u ≥x, ∀x ∈]1, 2[, which
implies that u ≥2. Hence u ̸∈]1, 2[.
Analogously, if l ∈R is a lower bound for S, then, by definition, l ≤x, ∀x ∈]1, 2[, which
implies that l ≤1. Hence l ̸∈]1, 2[.
(2) S := [1, 2] has both a minimum and a maximum.
min S = 1, since 1 ∈S and 1 ≤s, ∀s ∈S, so that 1 is also a lower bound for S.
max S = 2, since 2 ∈S and 2 ≥s, ∀s ∈S, so that 2 is also an upper bound for S.
(3) Let a < b be real numbers. S := ]a, b] has a maximum but no minimum.
max S = b, since b ∈ S and b ≥ s, ∀s ∈ S, so that b is also an upper bound for S.
min S does not exist, since any lower bound for S is ≤ a, hence there is no lower bound that is contained in S.
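The definitions above are easy to test mechanically on finite sets, where minima and maxima always exist (unlike for ]1, 2[ in item (1)). The following small Python sketch is my addition, not part of the original notes; the helper names are mine.

```python
# Check Definition 2.11 on a finite sample: max S is an element of S that
# is also an upper bound, and min S is an element that is a lower bound.

def is_upper_bound(u, S):
    """True if u >= s for every s in S."""
    return all(u >= s for s in S)

def is_lower_bound(l, S):
    """True if l <= s for every s in S."""
    return all(l <= s for s in S)

S = {1.0, 1.25, 1.5, 2.0}        # a finite subset of the interval [1, 2]
M, m = max(S), min(S)

assert M in S and is_upper_bound(M, S)   # M = 2.0 is the maximum of S
assert m in S and is_lower_bound(m, S)   # m = 1.0 is the minimum of S
# 1.5 belongs to S but fails to bound it from above, so it is no maximum:
assert not is_upper_bound(1.5, S)
print(m, M)   # -> 1.0 2.0
```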
The above examples suggest that it should not be hard to understand when an interval
S admits a maximum or a minimum. Indeed, the following characterization is an immediate
consequence of Definition 2.11 and of Proposition 2.10.
Proposition 2.15. Let S ⊆R be a bounded interval of extremes a < b.
(1) The maximum of S exists if and only if b ∈S. In this case, max S = b.
(2) The minimum of S exists if and only if a ∈ S. In this case, min S = a.
When S is not an interval, it may be more complicated to understand whether a maxi-
mum/minimum exists.
Example 2.16.
(1) Take S := {(n−1)/n | n ∈ Z∗+}. Then S has a minimum but it does not have a maximum.
Indeed, min S = 0, since 0 = (1−1)/1 ∈ S and (n−1)/n ≥ 0, ∀n ∈ Z∗+, so that 0 is a lower bound which belongs to S. However, S does not have a maximum. To see this, let l ∈ R, then:
(i) assume that l < 1. Then a natural number n satisfies n > 1/(1−l) if and only if 1 − 1/n = (n−1)/n > 1 − (1−l) = l. Thus, l cannot be an upper bound for S, hence a fortiori it cannot be a maximum either.
(ii) on the other hand, if l ≥ 1, then l ̸∈ S, so no such l can be a maximum for S.
One can actually show that the upper bounds of S are exactly the real numbers ≥ 1; indeed, it is easy to show that any l ≥ 1 is an upper bound for S, since 1 − 1/n ≤ 1 ≤ l, for all n ∈ Z∗+. On the other hand, (i) above shows that no real number l < 1 can be an upper bound for S. Hence, 1 is the least of all possible upper bounds for S.
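As a companion to this example (my addition, not part of the original notes), here is a quick numerical look at a finite sample of S, using Python's exact rationals to avoid rounding issues.

```python
# Numerical look at S = {(n-1)/n | n in Z*_+}: the values increase towards 1
# without ever reaching it, and min S = 0 is attained at n = 1.

from fractions import Fraction

S = [Fraction(n - 1, n) for n in range(1, 2001)]   # finite sample of S

assert min(S) == 0                      # the minimum 0 = (1-1)/1 is attained
assert all(s < 1 for s in S)            # 1 is an upper bound, never attained
# For any l < 1, every n > 1/(1-l) gives (n-1)/n > l, as in case (i):
l = Fraction(999, 1000)
n = 1001                                # n > 1/(1 - l) = 1000
assert Fraction(n - 1, n) > l           # so l is not an upper bound
print(float(S[-1]))                     # -> 0.9995, creeping up to sup S = 1
```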
Example 2.16.3 above suggests that we might need a new notion generalizing the concept of maximum/minimum. In that example, 1 is very close to being the maximum of S := {(n−1)/n | n ∈ Z∗+}, as it is the least of all possible upper bounds. On the other hand, 1 cannot be the maximum of S as 1 ̸∈ S. This phenomenon motivates the next definition.
Definition 2.17. Let S ⊆R be a non-empty subset.
(1) If the set U of all upper bounds of S is non-empty and U admits a minimum u ∈ U, then we call u the supremum of S.
(2) If the set L of all lower bounds of S is non-empty and L admits a maximum l ∈ L, then we call l the infimum of S.
Remark 2.18. Let S ⊆R be a non-empty subset.
If the set U of all upper bounds of S is empty, then S is not bounded from above, cf. Definition 2.8. In this case, the supremum of S does not exist, by the above definition.
Similarly, if the set L of all lower bounds of S is empty, then S is not bounded from below, cf. Definition 2.8. In this case, the infimum of S does not exist, by the above definition.
As in the case of maximum/minimum, the use of the definite article in Definition 2.17
suggests that, when they exist, the supremum/infimum of a non-empty subset of R should be
unique.
Proposition 2.19. Let S be a non-empty subset of R. If sup S (resp. inf S) exists, then it is
unique.
Notation 2.20. For S ⊆R, we denote the supremum (resp. the infimum) of S by sup S (resp.
inf S), when those exist as real numbers.
If S is not bounded from above, we write sup S = +∞. If S is not bounded from below, we
write inf S = −∞.
Proof. By definition, if the supremum of S exists, it is the minimum of the set
U := {u ∈ R | u is an upper bound for S}.
As the minimum of a set is unique when it exists, cf. Proposition 2.12, the conclusion follows at once. You can apply a similar argument for the uniqueness of the infimum.
Example 2.21.
(1) Let S := {(n−1)/n | n ∈ Z∗+}. Then, sup S = 1, cf. Example 2.16.3.
(2) Take S := {n³ | n ∈ Z}. Then, S is unbounded. Thus, inf S, sup S do not exist as real numbers.
(3) If S is a bounded interval of extremes a < b, then
sup S = b,
inf S = a.
Indeed, we saw in Proposition 2.10 that the set of lower (resp. upper) bounds of S is
] −∞, a] (resp. [b, +∞[).
(4) Similarly, if S := [a, +∞[ or S := ]a, +∞[, a ∈ R, then inf S = a, while sup S does not exist since S is not bounded from above.
(5) If S := ] −∞, b] or S := ] −∞, b[, b ∈ R, then sup S = b, while inf S does not exist since S is not bounded from below.
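Item (2) of this example can also be sanity-checked numerically. The sketch below is my addition, not from the notes: it samples S = {n³ | n ∈ Z} and shows that every candidate bound from a small grid is eventually overtaken in both directions.

```python
# Sampling S = {n^3 | n in Z} shows it is unbounded in both directions,
# so neither inf S nor sup S exists as a real number.

def escapes_bound(values, u):
    """True if some sampled value exceeds the candidate upper bound u."""
    return any(v > u for v in values)

cubes = [n ** 3 for n in range(-100, 101)]

# every candidate upper bound from a grid is overtaken by some cube ...
assert all(escapes_bound(cubes, u) for u in [10.0, 1e3, 1e5])
# ... and every candidate lower bound is undercut by some cube
assert all(any(v < l for v in cubes) for l in [-10.0, -1e3, -1e5])
print(min(cubes), max(cubes))   # -> -1000000 1000000
```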
How do we know whether the supremum or infimum of a non-empty subset S ⊆R exist
as real numbers? We saw in Remark 2.18 that a necessary condition for the existence of the
supremum (resp. infimum) of S is that S be bounded from above (resp. below).
On the other hand, if, for example, S is bounded from above (resp. below), then we know
that the set U (resp. L) of all upper (resp. lower) bounds of S is non-empty. Hence, it is
legitimate to ask if U (resp. L), when non-empty, admits a least (resp. largest) element.
The existence of the largest of all possible lower bounds (resp. of the least of all possible
upper bounds) is one of the features of the construction of the real numbers. As we have already
mentioned, we are not going to explain the construction of R, so we will assume the existence
of such elements. Indeed, it suffices to assume the following axiom, which then implies the full
existence of infima and suprema, cf. Corollary 2.26.
|
Axiom 2.22. [Infimum axiom] Each non-empty subset S of R∗+ admits an infimum (which is a real number).
Remark 2.23. In Mathematics, an axiom is a statement that we assume to be true, without requiring a formal proof for it. When we introduce an axiom, we are free to use the properties stated in the axiom, without requiring a proof for them, and we can use those to derive other mathematical properties of the objects that we are studying.
The property stated in the Infimum Axiom is a very important one. In a sense, which we will try to make more precise when we introduce sequences of real numbers, this property says that R does not contain any gaps. While at this time this is a rather nebulous statement, let us at least show that this axiom does not hold for all the number sets that we have introduced so far, cf. Section 2.2: indeed, it is possible to show that the Infimum Axiom fails for Q, for example, cf. Example 2.24 below. Hence, the Infimum Axiom is indeed an axiom stating a (very relevant) property that is peculiar to the real numbers and, as such, in this course we actually utilize it to characterize the real numbers; again, cf. Remark 2.7.
Example 2.24. Let S := ]√3, 5[ ∩ Q.⁸ Then S ⊆ R∗+ and the Infimum Axiom implies that inf S exists in the real numbers. We will show in Example 2.46 that inf S = √3. In particular, the set of lower bounds of S coincides with the real numbers ≤ √3.
Since S, by its very definition, is also a subset of Q, we may wonder whether it is possible to find a largest rational number l among the rational numbers which are lower bounds for S. Such l ∈ Q would then be an infimum for S among the rational numbers. By the above observation,
we know that if such l existed, then l < √3, since √3 ̸∈ Q, cf. Proposition 2.38, and l is certainly a lower bound for S. But then, Proposition 2.44 shows that there exists a rational number m such that l < m < √3. As m < √3, then we know that m is also a lower bound for S. This is clearly a contradiction, as m ∈ Q and is a lower bound for S, while we had assumed that l was the largest of all lower bounds of S that are rational. Hence, the infimum of S cannot exist in Q.
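The density step used here (Proposition 2.44 in the notes, not restated in this chunk) can be illustrated computationally. The sketch below is my own; `rational_between` is a hypothetical helper that truncates a decimal expansion, not the construction used in the notes. It squeezes a rational m with l < m < √3 for a given rational l < √3.

```python
# Given a rational l < sqrt(3), squeeze a rational m with l < m < sqrt(3),
# so no rational l can be the largest rational lower bound of ]sqrt(3), 5[.

from fractions import Fraction
import math

def rational_between(l, x, digits=6):
    """Return a rational m with l < m < x, by truncating x (assumed
    irrational and > l) to `digits` decimal places, refining if needed."""
    d = 10 ** digits
    m = Fraction(math.floor(x * d), d)      # rational truncation of x, m < x
    while m <= l:                           # refine until m clears l
        d *= 10
        m = Fraction(math.floor(x * d), d)
    return m

l = Fraction(17, 10)                        # 1.7 < sqrt(3) = 1.732...
m = rational_between(l, math.sqrt(3))
assert l < m < Fraction(math.sqrt(3))       # strictly between, and rational
print(m)                                    # -> 34641/20000
```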
Axiom 2.22 requires that we work with subsets of R∗+ to be guaranteed to find their infimum. But, in general, we can find the infimum also for sets that are not necessarily contained in R∗+, as long as we have some lower bounds.
Example 2.25. The infimum of a set S ⊆ R can exist even when S ̸⊆ R∗+. For example, let S := {x ∈ R | x > −√17}. As S contains −1, for example, then S ̸⊆ R∗+. On the other hand, by Proposition 2.10.6, the set of lower bounds of S is given by ] −∞, −√17]. Hence, inf S = −√17.
Using the Infimum Axiom 2.22, we can actually prove that the infimum (resp. the supre-
mum) exists for any subset S ⊆R which is bounded from below (resp. from above).
Corollary 2.26. Let S ⊆R be a non-empty set.
(1) If S is bounded from below, then S admits an infimum.
(2) If S is bounded from above, then S admits a supremum.
Proof.
(1) As S is bounded from below, there exists a lower bound l ∈ R for S, that is, l ≤ s, for all s ∈ S. We can rewrite the previous inequality as
s − l ≥ 0, ∀s ∈ S. (2.26.a)
⁸See Section 2.4.1 for a formal proof that √3 is actually a real number.
Let W ⊆ R be the subset obtained by translating the elements of S by −l + 1,
W := {s − l + 1 | s ∈ S}.
Why did we choose to translate the elements of S by −l + 1? The reason is that W ⊆ R∗+: in fact, by (2.26.a), s − l + 1 ≥ 1 > 0, for all s ∈ S.⁹ As W ⊆ R∗+, the Infimum Axiom 2.22 implies that inf W exists, call it a := inf W. Then a is the largest lower bound for the set W.
How can we use a to compute inf S? To construct W, we translated all elements of S
by −l + 1. If we translate the elements of W back by l −1, then we undo what we
did before and we recover S. So, what happens if we translate a by l −1 as well? The
number we obtain by this translation should be the largest lower bound for S, as addition
is compatible with the order relation. Let us verify this.
Let a′ := a + l −1. Then a′ ≤w + l −1 for any element w ∈W. As any w ∈W is of
the form w = s −l + 1 for some s ∈S, then w + l −1 = s. Hence, a′ ≤s for all s ∈S
and a′ is a lower bound for S. If a′ is not the largest lower bound for S, then there is a
real number b′ > a′ which is a lower bound for S. But then b′ −l + 1 > a = a′ −l + 1
and b′ −l + 1 would be a lower bound for W [prove it!]. But this is a contradiction, since
a = inf W.
(2) The details are left to the reader. Here is a sketch.
Let S′ ⊆R be the set constructed by flipping the sign of the elements of S,
S′ := {−x | x ∈S}.
Since S is bounded from above, then S′ is bounded from below. [Prove this!] Then by
part (1), inf S′ exists. It is left to you to show that sup S = −inf S′.
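The translation trick in part (1) can be mimicked on a finite sample, where the infimum is simply the smallest element. This check is my addition, not part of the notes.

```python
# Illustration of the proof of Corollary 2.26 (1): shifting S by -l + 1
# lands it inside the strictly positive reals, and shifting the infimum
# back by l - 1 recovers inf S.

S = [-2.5, -1.0, 0.0, 3.5]       # a finite set, bounded below
l = -3.0                         # some lower bound for S

W = [s - l + 1 for s in S]       # translated copy of S
assert all(w >= 1 for w in W)    # W sits inside R*_+, as in the proof

a = min(W)                       # infimum of the (finite) translated set
a_prime = a + l - 1              # translate back by l - 1
assert a_prime == min(S)         # ... and we recover inf S = min S
print(a_prime)                   # -> -2.5
```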
We have seen the definition of infimum/supremum and minimum/maximum. Both the infimum (resp. supremum) and the minimum (resp. maximum) of a set S, provided that they exist, are lower bounds (resp. upper bounds) for S. Can we be more precise about the relationship among these notions?
Example 2.27. Let S := [3, 5[ ⊆R. Then, min S = 3 = inf S. On the other hand, max S
does not exist as sup S = 5 is the least upper bound and 5 ̸∈S; hence no upper bound of S is
contained in S, as any element of S is < 5.
The example above seems to suggest that, at least for intervals, if the minimum (resp.
maximum) of an interval exists, then it should coincide with the infimum (resp. the supremum)
of the interval. This property actually holds for any non-empty subset S ⊂R, as long as the
minimum (resp. maximum) of S exists.
Proposition 2.28. Let S ⊆ R be a non-empty set.
(1) If min S exists, then min S = inf S.
(2) If max S exists, then max S = sup S.
Proof. We prove (1), whereas (2) is left as an exercise. As min S exists, then S is bounded
from below, since min S is in particular a lower bound, cf. Definition 2.11. Hence, inf S exists,
by Corollary 2.26. Then inf S ≥min S since inf S is the largest of all lower bounds. On the
other hand, min S ∈S, and inf S ≤s, for all s ∈S. In particular, inf S ≤min S. Thus,
inf S ≤min S and inf S ≥min S, which implies that inf S = min S.
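Both Proposition 2.28 and the sign-flip trick sup S = −inf S′ from the proof of Corollary 2.26 (2) are easy to verify on finite samples, where min and max always exist and coincide with inf and sup. The following small check is my addition.

```python
# On finite sets, min plays the role of inf and max the role of sup, so
# Proposition 2.28 holds trivially; we also verify sup S = -inf S' with
# S' := {-x | x in S}, the sign-flipped copy from Corollary 2.26 (2).

samples = [
    [3.0, 4.2, 4.9],            # a sample of [3, 5[, where min = inf = 3
    [-7.5, -1.0, 0.0, 2.25],
]

for S in samples:
    S_prime = [-x for x in S]   # flip the sign of every element
    # the largest element of S reappears as minus the smallest of S'
    assert max(S) == -min(S_prime)

print(max(samples[0]), -min([-x for x in samples[0]]))  # -> 4.9 4.9
```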