algo_block_0
\documentclass[a4paper]{book} \usepackage{style} \pagestyle{fancy} \fancyhf{}\fancyfoot[LE,RO]{\thepage} \fancyhead[RE]{\textit{\leftmark}} \fancyhead[LO]{\textit{\rightmark}} \fancyhead[LE]{Algorithms} \fancyhead[RO]{Notes by Joachim Favre} \fancypagestyle{plain}{\fancyhf{}\fancyfoot[LE,RO]{\thepage}} \title{Alg...
algo
latex
intro_to_ML_block_0
\documentclass[a4paper]{article} \usepackage{style} \title{Introduction to machine learning --- BA3\\ Detailed summary} \author{Joachim Favre\\ Course by Prof. Mathieu Salzmann } \date{Autumn semester 2022} \begin{document} \maketitle \cftsetindents{paragraph}{1.5em}{1em} \setcounter{tocdepth}{5} \tableofcontents...
intro_to_ML
latex
intro_to_ML_block_1
\begin{parag}{Supervised and unsupervised learning} In \important{supervised learning}, we are given data and its ground-truth labels. The goal is, given new data, to predict its labels, through regression or classification. In \important{unsupervised learning}, we are only given data (without any label), an...
intro_to_ML
latex
intro_to_ML_block_2
\begin{parag}{Regression and classification} The goal of \important{regression} is to predict a continuous value for a given sample. The goal of \important{classification} is to output a discrete label (typically encoded in one-hot encoding with 0s and 1s or -1s and 1s). The main difference is that there is the noti...
intro_to_ML
latex
intro_to_ML_block_3
\begin{parag}{Dimensionality reduction} Dimensionality reduction has two main advantages. The first one is that it allows us to decrease the dimension of our data, which typically yields a tremendous speed-up while preserving a lot of the precision. The second one is that, depending on the model, we can also map data...
intro_to_ML
latex
intro_to_ML_block_4
\begin{parag}{Notations} We consider the following notation throughout this course, with some slight exceptions when specified otherwise. $N$ is the number of samples we have, $D$ is the dimensionality (the number of components) of any input, and $C$ is the dimensionality of any output. Unless specified otherwise,...
intro_to_ML
latex
intro_to_ML_block_5
\begin{parag}{Feature expansion} Increasing the number of dimensions of our input data from $D$ to $F$ may help our models (using non-linear functions, since linear ones would be of no help). Thus, we may define the following function: \[\phi\left(\bvec{x}\right) = \begin{pmatrix} 1 & x^{\left(1\right)} & \cdots & x^{\left(D\...
intro_to_ML
latex
intro_to_ML_block_6
\begin{subparag}{Remark} The 1 we added to the input data to account for the bias is some kind of feature expansion. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_7
\begin{subparag}{Cover's Theorem} Cover's theorem states (more or less) that, if we apply a non-linear feature expansion, then our data is more likely to be linearly separable. \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_8
\begin{parag}{Kernel} We can notice that defining our $\phi$ functions for feature expansion can be really tedious. However, since most of our methods depend on the dot product $\phi\left(\bvec{x}_i\right)^T \phi\left(\bvec{x}_j\right)$, which gives some kind of measure of similarity between $\bvec{x}_i$ and $\bvec{x...
intro_to_ML
latex
intro_to_ML_block_9
\begin{subparag}{Remark} The main advantage of a kernel is that we don't need to know what function $\phi$ is linked to it. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_10
\begin{subparag}{Examples} We can for instance use the \important{polynomial kernel}: \[k\left(\bvec{x}_i, \bvec{x}_j\right) = \left(\bvec{x}_i^T \bvec{x}_j + c\right)^d\] $c$ is often set to $1$ and $d$ to 2. For this kernel, the corresponding mapping $\phi$ is known. This is, except for multiplicative constants...
intro_to_ML
latex
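The correspondence between the polynomial kernel and an explicit mapping $\phi$ can be checked numerically. A minimal NumPy sketch (not from the notes; function names are mine, and the mapping shown is the standard one for $D = 2$, $d = 2$, $c = 1$, up to the ordering of components):

```python
import numpy as np

def polynomial_kernel(xi, xj, c=1.0, d=2):
    """Polynomial kernel k(xi, xj) = (xi^T xj + c)^d."""
    return (xi @ xj + c) ** d

def phi(x):
    """Explicit feature map matching the kernel for D = 2, d = 2, c = 1
    (up to the ordering of components)."""
    return np.array([1.0,
                     np.sqrt(2) * x[0], np.sqrt(2) * x[1],
                     x[0] ** 2, x[1] ** 2,
                     np.sqrt(2) * x[0] * x[1]])

xi = np.array([1.0, 2.0])
xj = np.array([3.0, -1.0])
# The kernel value equals the dot product of the expanded features.
assert np.isclose(polynomial_kernel(xi, xj), phi(xi) @ phi(xj))
```

This illustrates the point of the kernel trick: the left-hand side costs $O(D)$, while the explicit mapping lives in a space of dimension $O(D^2)$.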
intro_to_ML_block_11
\begin{parag}{Representer theorem} The minimizer of a regularized empirical risk function can be represented as a linear combination of expanded features. In other words, we can write: \[\bvec{w}^* = \sum_{i=1}^{N} \alpha_i^* \phi\left(\bvec{x}_i\right) = \Phi^T \bvec{\alpha}^*\] where $\bvec{\alpha} \in \mathbb{R}...
intro_to_ML
latex
intro_to_ML_block_12
\begin{subparag}{Remark} This theorem is really important to kernelise algorithms. When using it, the goal is to get rid of the $\Phi$, since we do not know the mapping $\phi$. Switching our view to the variables $\bvec{\alpha}$ instead of the variables $\bvec{w}$ is typically a way to do so. \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_13
\begin{parag}{Loss function} The \important{loss function} $\ell \left(\hat{\bvec{y}}_i, \bvec{y}_i\right)$ computes an error value between the prediction and the true value. This is a measure of the error for any given prediction. \end{parag}
intro_to_ML
latex
intro_to_ML_block_14
\begin{parag}{Empirical risk} Given $N$ training samples $\left\{\left(\hat{\bvec{x}}_i, \hat{\bvec{y}}_i\right)\right\}$, the \important{empirical risk} is defined as: \[R\left(\left\{\hat{\bvec{x}}_i\right\}, \left\{\hat{\bvec{y}}_i\right\}, W\right) = \frac{1}{N} \sum_{i=1}^{N} \ell \left(\hat{\bvec{y}}_i, \bvec{...
intro_to_ML
latex
intro_to_ML_block_15
\begin{subparag}{Regularised} Sometimes, we want to regularise our objective function, so that we prevent the weights from becoming too large, which leads to overfitting. We then instead seek to minimise: \[E\left(W\right) = R\left(W\right) + \lambda E_W\left(W\right)\] where $\lambda$ is a hyperparameter and $E_W\left(W...
intro_to_ML
latex
intro_to_ML_block_16
\begin{parag}{Gradient descent} The goal of \important{gradient descent} is to minimise a function (an empirical risk $R\left(W\right)$ in this context). The idea is to begin with an estimate $W_0$ (typically completely random), and then to update it iteratively, by following the direction of greatest decrease (the op...
intro_to_ML
latex
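The update rule above can be sketched in a few lines. A minimal NumPy example (not from the notes; the function name and the toy objective are mine), minimising a simple quadratic whose minimum is known:

```python
import numpy as np

def gradient_descent(grad, w0, eta=0.1, n_iters=100):
    """Iteratively step opposite the gradient: w_{t+1} = w_t - eta * grad(w_t)."""
    w = w0
    for _ in range(n_iters):
        w = w - eta * grad(w)
    return w

# Toy objective R(w) = (w - 3)^2, whose gradient is 2(w - 3); minimum at w = 3.
w_star = gradient_descent(lambda w: 2 * (w - 3.0),
                          w0=np.array([0.0]), eta=0.1, n_iters=200)
```

With too large a learning rate `eta` the iterates diverge, which is one concrete instance of the non-convergence mentioned in the remark below.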
intro_to_ML_block_17
\begin{subparag}{Remark} This algorithm does not always converge and, when it does, not necessarily to a minimum nor to the global minimum. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_18
\begin{parag}{Evaluation metrics} Once a supervised machine learning model is trained, we want to be able to understand how well it performs on unseen test data (which must absolutely be kept separate from the training data). We could use the loss function, but we may also use a different one. For regression, we typical...
intro_to_ML
latex
intro_to_ML_block_19
\begin{subparag}{Remark} There are many more metrics for regression and classification. For the former, we could quote RMSE (root mean squared error), MAE (mean absolute error) or the MAPE (mean absolute percentage error). For the latter, making a confusion matrix or computing the AUC (area under the ROC curve) can gi...
intro_to_ML
latex
intro_to_ML_block_20
\begin{parag}{Decision boundary} A classifier leads to a decision boundary. This is an object of dimension $D-1$ (a line if our data lies on a plane for instance), which splits the space into two regions: one where samples are considered positive (the predicted value is closer to the value of positive samples), and on...
intro_to_ML
latex
intro_to_ML_block_21
\begin{subparag}{Remark} A classifier is said to be linear if its decision boundary is a hyperplane (a straight line if the data lies on a plane, for instance). \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_22
\begin{parag}{Margin} If $C = 1$, the orthogonal distance between the decision boundary and the nearest sample is called the margin. \imagehere[0.5]{margin.png} \end{parag}
intro_to_ML
latex
intro_to_ML_block_23
\begin{parag}{Overfitting} When we increase the complexity of the model (by changing the hyperparameters) we get better and better results for both training and test data. However, there is a point at which increasing the complexity keeps decreasing error on training data but increases the error on test data. This is a...
intro_to_ML
latex
intro_to_ML_block_24
\begin{parag}{Cross-validation} Cross-validation is a way to find good hyperparameters that prevent overfitting. We test different models (in a predefined set), assign to each of them an error value, and pick the one yielding the smallest error. The idea of \important{$k$-fold cross-validation} is, to evaluate the e...
intro_to_ML
latex
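The $k$-fold procedure can be sketched as follows. A minimal NumPy version (not from the notes; the function name and the `train_and_eval` callback are mine — the callback stands for "train on the first pair of arrays, return an error on the second"):

```python
import numpy as np

def k_fold_cv(X, y, train_and_eval, k=5, seed=0):
    """Split the data into k folds; each fold serves once as the validation set.
    Returns the mean validation error over the k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        errors.append(train_and_eval(X[train], y[train], X[val], y[val]))
    return float(np.mean(errors))

# Toy usage: a "model" that predicts the training mean, scored with MSE.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2 * X[:, 0]
err = k_fold_cv(X, y,
                lambda Xt, yt, Xv, yv: float(np.mean((yv - yt.mean()) ** 2)))
```

To choose a hyperparameter, one would run this once per candidate value and keep the candidate with the smallest returned error.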
intro_to_ML_block_25
\begin{subparag}{Remark} Note that taking $k = N$ (where $N$ is the number of training samples) is also sometimes named \important{leave-one-out cross-validation}. This is really expensive but wastes (almost) no data. Another way to do cross-validation, which is much cheaper, is to split our training data into trai...
intro_to_ML
latex
intro_to_ML_block_26
\begin{parag}{Data representation} All the models we will see only work for fixed size data. If we want to handle data of varying size (such as text or pictures), a good way is to consider \important{bag of words}: consider the number of times each word from a dictionary appears in the text and put this as a big vecto...
intro_to_ML
latex
intro_to_ML_block_27
\begin{parag}{Pre-processing} The training data might have problems: it might have noise (because of measurement errors), incorrect values, and so on. To fix those, a good idea is to do pre-processing.
intro_to_ML
latex
intro_to_ML_block_28
\begin{subparag}{Normalisation} To begin with, a good idea is to scale each individual feature dimension to fall within a specified range (so that we don't give more impact to a dimension ranging from 1000 to 10000 than to another dimension ranging from 0 to 1). This can typically be done by using \important{normalisa...
intro_to_ML
latex
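The normalisation described above can be sketched with z-scoring. A minimal NumPy example (not from the notes; the function name is mine). Note that the statistics are computed on the training data only and then reused for the test data:

```python
import numpy as np

def standardise(X_train, X_test):
    """Z-score normalisation: mean and standard deviation are computed on the
    training data only, then applied to both training and test data."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0  # avoid division by zero for constant features
    return (X_train - mu) / sigma, (X_test - mu) / sigma

# One feature ranging in the thousands, one in [0, 1]: after standardisation
# both have comparable impact.
X_train = np.array([[1000.0, 0.1], [2000.0, 0.5], [3000.0, 0.9]])
X_test = np.array([[1500.0, 0.3]])
Xtr, Xte = standardise(X_train, X_test)
```

Computing the statistics on the training set only avoids leaking information from the test set into the model.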
intro_to_ML_block_29
\begin{subparag}{Imbalanced data} Another important thing to consider is \important{imbalanced data}. There might be 10 times as much data in one class as in another (between-class imbalance), or data inside a class might have a lot of representatives at some points in space and much fewer at other points (within-class i...
intro_to_ML
latex
intro_to_ML_block_30
\begin{parag}{Ridge regression} The output of \important{ridge regression} is computed by a simple dot product: \[\hat{\bvec{y}}_i = W^T \phi\left(\bvec{x}_i\right)\] The training objective function we want to minimise is a squared Euclidean distance regularised by the sum of squares of the weights: \[E\left(W\ri...
intro_to_ML
latex
intro_to_ML_block_31
\begin{subparag}{Linear regression} Setting $\lambda = 0$, we get the special case of \important{linear regression}. Then, the closed-form formula can be rephrased as: \[W^* = \left(\Phi^T \Phi\right)^{-1} \Phi^T Y = \Phi^{\dagger} Y\] where $\Phi^{\dagger}$ is the Moore-Penrose pseudo-inverse. \end{subpara...
intro_to_ML
latex
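The closed form above maps directly onto NumPy. A minimal sketch (not from the notes; the data is synthetic and noise-free, so the true weights are recovered exactly):

```python
import numpy as np

# Fit y = Phi w in the least-squares sense via the Moore-Penrose pseudo-inverse.
rng = np.random.default_rng(0)
Phi = np.c_[np.ones(50), rng.normal(size=(50, 2))]  # bias column + 2 features
w_true = np.array([0.5, 2.0, -1.0])
y = Phi @ w_true                                     # noise-free targets

w_star = np.linalg.pinv(Phi) @ y                     # Phi^dagger y
# Equivalent closed form when Phi^T Phi is invertible:
w_alt = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
```

`pinv` is the more robust choice: it also handles the case where $\Phi^T \Phi$ is singular, where the explicit inverse would fail.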
intro_to_ML_block_32
\begin{subparag}{Kernelisation} Using the representer theorem, we can find that: \[A^* = \left(K + \lambda I_N\right)^{-1} Y\] This is not of much use on its own, but we can use this result to find how we predict a value $\hat{\bvec{y}}$ for a new $\bvec{x}$: \[\hat{\bvec{y}} = Y^T \left(K + \lambda I_N\right)^...
intro_to_ML
latex
intro_to_ML_block_33
\begin{subparag}{Classification} Ridge regression can be used for classification tasks, by inputting the result into a softmax function (defined right after), but this does not work very well because the classification goal is not encoded in the objective function. This is named a \important{least-square classifier}. \end{subpar...
intro_to_ML
latex
intro_to_ML_block_34
\begin{parag}{Logistic regression} In \important{logistic regression} (which is a linear classification algorithm), we consider negative samples to be $y_i = 0$. The output of logistic regression is computed by using the \important{softmax function}: \[\hat{y}^{\left(k\right)} = \frac{\exp\left(\bvec{w}^T_{\left(k...
intro_to_ML
latex
intro_to_ML_block_35
\begin{subparag}{One dimension} Let's consider $C = 1$. This special case of the softmax function is named the \important{logistic function}: \[\hat{y}\left(\bvec{x}\right) = \sigma\left(\bvec{w}^T \bvec{x}\right) = \frac{1}{1 + \exp\left(- \bvec{w}^T \bvec{x}\right)}\] One-dimensional cross-entropy can be rewritte...
intro_to_ML
latex
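Both functions are short to implement, and the reduction of the softmax to the logistic function in the two-class case can be checked numerically. A minimal NumPy sketch (not from the notes; function names are mine):

```python
import numpy as np

def sigmoid(a):
    """Logistic function sigma(a) = 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    """Softmax over the last axis; subtracting the max improves numerical
    stability without changing the result."""
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With two classes, the softmax probability of the first class reduces to the
# logistic function applied to the difference of the two scores.
scores = np.array([2.0, -1.0])
assert np.isclose(softmax(scores)[0], sigmoid(scores[0] - scores[1]))
```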
intro_to_ML_block_36
\begin{subparag}{Kernelisation} This algorithm can be kernelised, even though this is not very common. \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_37
\begin{parag}{Support vector machine} In \important{support vector machine} (SVM) classification (which is also a linear classifier), we consider negative samples to be $y_i = -1$. Also, we leave $\widetilde{\bvec{w}}$ to be the vector of parameters without $w^{\left(0\right)}$, and $\bvec{x} \in \mathbb{R}^D$ to not ...
intro_to_ML
latex
intro_to_ML_block_38
\begin{subparag}{Support vectors} We notice that, for the margin to be maximised, there must be at least one point from each class lying on it. Such points are named \important{support vectors}. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_39
\begin{subparag}{Hinge loss} By rewriting the constraints, we get: \[\xi_i \geq 1 - y_i\left(w^{\left(0\right)} + \widetilde{\bvec{w}}^T \bvec{x}_i\right)\] For samples $i$ that satisfy the support vector machine problem (they are neither in the margin nor misclassified), we have $\xi_i = 0$ (since they are forced...
intro_to_ML
latex
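The slack reformulation above is exactly the hinge loss. A minimal NumPy sketch (not from the notes; the function name and the toy data are mine), using labels $y_i \in \{-1, +1\}$:

```python
import numpy as np

def hinge_loss(w0, w_tilde, X, y):
    """Average hinge loss max(0, 1 - y_i (w0 + w_tilde^T x_i)) over the samples,
    with labels y_i in {-1, +1}; each term equals the optimal slack xi_i."""
    margins = y * (w0 + X @ w_tilde)
    return np.maximum(0.0, 1.0 - margins).mean()

X = np.array([[2.0], [-2.0], [0.5]])
y = np.array([1.0, -1.0, 1.0])
# With w0 = 0 and w_tilde = (1,): the first two points are outside the margin
# (loss 0), the third is inside it (loss 1 - 0.5 = 0.5).
loss = hinge_loss(0.0, np.array([1.0]), X, y)
```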
intro_to_ML_block_40
\begin{subparag}{Dual problem} We can reformulate our problem by introducing one variable per training sample (meaning that we have $N$ variables instead of $\left(D+1\right)$): \[\argmax_{\left\{\alpha_i\right\}} \left(\sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j \bvec{x...
intro_to_ML
latex
intro_to_ML_block_41
\begin{subparag}{Kernelisation} We need the dual problem to kernelise the SVM: \[\argmax_{\left\{\alpha_i\right\}} \left(\sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j k\left(\bvec{x}_i, \bvec{x}_j\right)\right),\] subject to $\sum_{i=1}^{N} \alpha_i y_i = 0$ and $0 \l...
intro_to_ML
latex
intro_to_ML_block_42
\begin{subparag}{Multi-class SVM} To generalise our algorithm to multiple classes, we have several options. The idea is always to use several binary classifiers. One way is to use \important{one-vs-rest}: we train classifiers stating whether a sample is in class $i$ or not. Another way is to use \important{one-vs-one...
intro_to_ML
latex
intro_to_ML_block_43
\begin{subparag}{Primal derivation} Let's consider how we got the formula for the primal model, since it may typically help to remember and understand it. First, we know that any two points on the decision boundary have the same prediction (which is 0), which yields that: \[0 = \hat{y}_1 - \hat{y}_2 = \left(w^{\le...
intro_to_ML
latex
intro_to_ML_block_44
\begin{parag}{$K$-nearest neighbours} The idea of \important{$k$-nearest neighbours} (kNN) is to compute the distance between the test sample $\bvec{x}$ and all training samples $\left\{\bvec{x}_i\right\}$ and find the $k$ samples with minimum distances. Then, we can do classification by finding the most common label ...
intro_to_ML
latex
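The kNN classification rule can be sketched directly from its description. A minimal NumPy version (not from the notes; the function name is mine), using the Euclidean distance and a majority vote:

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=3):
    """Classify x by majority vote among its k nearest training samples,
    measured with the Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
y_train = np.array([0, 0, 1, 1])
label = knn_classify(np.array([0.2, 0.1]), X_train, y_train, k=3)
```

Note that there is no training phase at all: all the work happens at prediction time, which is why the method gets expensive for large training sets.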
intro_to_ML_block_45
\begin{subparag}{Remark} The result of this model depends on the choice of the distance function. One can take the \important{Euclidean distance}: \[d\left(\bvec{x}_i, \bvec{x}\right) = \sqrt{\sum_{d=1}^{D} \left(x_i^{\left(d\right)} - x^{\left(d\right)}\right)^2}\] However, for some other structures such as hist...
intro_to_ML
latex
intro_to_ML_block_46
\begin{subparag}{Curse of dimensionality} Because of a principle named the \important{curse of dimensionality}, we need exponentially more points to cover a space as the number of dimensions increases. Using dimensionality reduction is a good idea with this algorithm. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_47
\begin{subparag}{Complexity} Unlike most of the other models, increasing the hyperparameter of this model (the $k$) leads to decreased complexity: the higher the $k$, the less complex the decision boundary is and thus the less overfitting we have. \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_48
\begin{parag}{Neural networks} Neural networks can do both classification and regression (depending on the output representation and the empirical risk used, typically square loss for regression and cross-entropy for classification), and their main advantage is that they learn a good model during the training. This ...
intro_to_ML
latex
intro_to_ML_block_49
\begin{subparag}{Activation functions} There are multiple choices for activation functions. The important thing is that they are non-linear. We can for instance take the ReLU (Rectified Linear Unit) activation function: \begin{functionbypart}{f\left(a\right)} a, & \text{if } a > 0 \\ 0, & \text{otherwise} \end{f...
intro_to_ML
latex
intro_to_ML_block_50
\begin{subparag}{Convolutional network} When working with pictures, just vectorising them may give a lot of elements while removing the fact that the picture is inherently two-dimensional. A way to circumvent this problem is using convolutions. To make a convolution, we need a small matrix of elements (plus a bias)....
intro_to_ML
latex
intro_to_ML_block_51
\begin{parag}{Principal component analysis} Sometimes, we realise that data is given in many dimensions, but actually lies in many fewer. The idea of \important{principal component analysis} (PCA) is to project the data onto some orthogonal axes (of lower dimension), in a way that maximises the kept variance. L...
intro_to_ML
latex
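The PCA procedure can be sketched via an eigendecomposition of the covariance matrix. A minimal NumPy version (not from the notes; the function name is mine), on 2D data that lies almost on a line:

```python
import numpy as np

def pca(X, d):
    """Project centred data onto the d eigenvectors of the covariance matrix
    with the largest eigenvalues (the principal components)."""
    x_bar = X.mean(axis=0)
    Xc = X - x_bar
    C = Xc.T @ Xc / len(X)                # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: valid since C is symmetric
    W = eigvecs[:, np.argsort(eigvals)[::-1][:d]]  # top-d eigenvectors
    return Xc @ W, W, x_bar

rng = np.random.default_rng(0)
t = rng.normal(size=100)
# Almost-collinear 2D data: one dominant direction of variance.
X = np.c_[t, 2 * t + 0.01 * rng.normal(size=100)]
Y, W, x_bar = pca(X, d=1)
```

Since the eigenvectors returned by `eigh` are orthonormal, `W.T @ W` is the identity $I_d$, as stated in the remark below; `x_bar + Y @ W.T` maps the projected points back to the original space.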
intro_to_ML_block_52
\begin{subparag}{Remark} Since the axes on which we project the data are orthogonal, we have: \[W^T W = I_d\] To make sure of this, we need to pick the eigenvectors so that they are orthogonal. This can always be done because $C$ is symmetric, thanks to the spectral theorem (this theorem also allows to know that we...
intro_to_ML
latex
intro_to_ML_block_53
\begin{subparag}{Mapping} From our computation, we can notice that, for any point $\bvec{y} \in\mathbb{R}^d$, we can move it to the high-dimensional space: \[\hat{\bvec{x}} = \bar{\bvec{x}} + W \bvec{y}\] This yields all the advantages presented in the first section of this document. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_54
\begin{subparag}{Kernelisation} PCA can be kernelised in a non-trivial fashion. First, we need to account for the fact that our data may not be centered in input-space (this was done above by considering the mean of our input values), letting: \[\widetilde{K} = K - 1_N K - K 1_N + 1_N K 1_N\] where $1_N$ is an $...
intro_to_ML
latex
intro_to_ML_block_55
\begin{parag}{Autoencoder} Another way to do dimensionality reduction is through a neural network. The idea is to have a double funnel shaped neural network: an encoder decreasing the dimension, a layer with $d$ neurons, and a decoder increasing back the number of dimensions. \imagehere[0.7]{Autoencoder.png} We c...
intro_to_ML
latex
intro_to_ML_block_56
\begin{subparag}{Remark} This can also be used to both do dimensionality reduction and mapping data back from the low-dimensional space. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_57
\begin{subparag}{Convolutional autoencoder} We can use convolutional neural networks for autoencoders. To do so, we use the inverse functions of those convolutions. \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_58
\begin{parag}{Fisher linear discriminant analysis} Even though \important{Fisher linear discriminant analysis} (LDA) is a dimensionality reduction algorithm, it is a supervised learning one. Its goal is to project data on lower space, while keeping classes (hence the supervision) clustered. It considers that clusterin...
intro_to_ML
latex
intro_to_ML_block_59
\begin{parag}{$K$-means clustering} The idea of \important{$k$-means clustering} is to consider that clustering should also be done relatively to compactness. To do so, we consider $K$ (a hyperparameter) cluster centers $\left\{\bvec{\mu}_1, \ldots, \bvec{\mu}_K\right\}$. While we have not converged, we assign each ...
intro_to_ML
latex
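The two alternating steps can be sketched as follows. A minimal NumPy version (not from the notes; the function name and the toy data are mine):

```python
import numpy as np

def k_means(X, K, n_iters=50, seed=0):
    """Alternate between assigning each point to its nearest centre and
    moving each centre to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(n_iters):
        # Assignment step: nearest centre for every point.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                           axis=1)
        # Update step: recompute each centre as the mean of its cluster.
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    # Final assignment, consistent with the returned centres.
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                       axis=1)
    return centers, labels

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0.0, 0.5, size=(20, 2)),
          rng.normal(8.0, 0.5, size=(20, 2))]
centers, labels = k_means(X, K=2)
```

The result depends on the random initialisation of the centres, so in practice the algorithm is usually restarted several times and the run with the smallest within-cluster sum of distances is kept.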
intro_to_ML_block_60
\begin{subparag}{Hyperparameter} To choose the $K$, a good way is to draw a graph of the average within-cluster sum of distances with respect to the number of clusters, and pick a point at its ``elbow'' (where the drop in the $y$-axis becomes less significant). \end{subparag} \end{parag}
intro_to_ML
latex
intro_to_ML_block_61
\begin{parag}{Spectral clustering} The idea of \important{spectral clustering} is to consider that clustering should instead be done relatively to connectivity: we group the points based on edges in a graph, and remove some of the longest edges. Let us first consider the case where we only want to make 2 c...
intro_to_ML
latex
intro_to_ML_block_62
\begin{subparag}{Remark} Since we had to relax the problem, the solution is not always optimal. \end{subparag}
intro_to_ML
latex
intro_to_ML_block_63
\begin{subparag}{$K$-way partitions} To obtain more than two clusters, we have two choices. The first one is to recursively apply the two-way partitioning algorithm. This is inefficient and unstable. The second one is to find $K$ eigenvectors. This leads to each point being represented by a $K$-dimensional vector....
intro_to_ML
latex
analysis_1_chunk_0
MATH 101 (en) – Analysis I (English). Notes for the course given in Fall 2021. Teacher: Roberto Svaldi. Head Assistant: Stefano Filipazzi. Notes written by Zsolt Patakfalvi & Roberto Svaldi. Thursday 12th October, 2023. This work is licensed under a Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” ...
analysis_1
pdf
analysis_1_chunk_1
CONTENTS: 1 Proofs; 2 Basic notions; 2.1 Sets; 2.2 Number sets; 2.2.1 Half lines, intervals, balls; 2.2...
analysis_1
pdf
analysis_1_chunk_2
2.4.3 Rational numbers are dense in R; 2.4.4 Irrational numbers are dense in R; 2.5 Absolute value; 2.5.1 Properties of the absolute value; ...
analysis_1
pdf
analysis_1_chunk_3
4.2 Induction; 4.3 Bernoulli inequality and (non-)boundedness of geometric sequences; 4.4 Limit of a sequence; 4.4.1 Limits and algebra; ...
analysis_1
pdf
analysis_1_chunk_4
1 PROOFS The means to explore analysis from a mathematical viewpoint within this course will be mathematical proofs. Part of the goal of the course will be for you to learn how to prove mathematical statements via mathematical proofs. There are two main types of proof that we will encounter: ◦ Constructive proof: an arg...
analysis_1
pdf
analysis_1_chunk_5
◦ p is prime; ◦ if a, b are natural numbers and p divides ab, then either p divides a or p divides b. Proof of Proposition 1.1. Assume that √3 is rational. Thus, we may write √3 = a/b (1.2.a) for some integers a and b ≠ 0. As √3 > 0, a and b should have the same sign. If they are both negative, by multiplying both by...
analysis_1
pdf
analysis_1_chunk_6
that a and b are relatively prime, that is, they do not share any prime factors. Multiplying both sides of (1.2.a) by b, then, since b ≠ 0, b√3 = a. (1.2.b) Squaring both sides of (1.2.b) yields b^2 · 3 = a^2. (1.2.c) Hence, as 3 divides the left hand side of (1.2.c), 3 must divide the right hand side, too. Thus, a = ...
analysis_1
pdf
analysis_1_chunk_7
to be extremely careful: it is indeed very easy to write wrong proofs! This is often due to the fact that one may assume something wrong in the course of a proof: if the premise of an implication is false, then anything can follow from it. Example 1.5. Here is an example of an (incorrect) proof showing that 1 is th...
analysis_1
pdf
analysis_1_chunk_8
This claim cannot possibly be true: in fact, 2 is definitely an integer and 2 > 1. Even better, the set of integers is not bounded from above2, that is, there is no real number C such that z ≤ C for all z ∈ Z. What went wrong in the above proof? All the algebraic manipulations that we made following the first line...
analysis_1
pdf
analysis_1_chunk_9
we can represent numerically by writing down a decimal expansion, for example, √2 = 1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360... As this example suggests, it may be the case that when we try to represent certain real ...
analysis_1
pdf
analysis_1_chunk_10
3See Section 3 for the definition and basic properties of complex numbers. 4Some of the most important classes of functions that we will encounter are those of continuous, differentiable, integrable, and analytic functions, but there are many other classes of functions that are heavily studied in analysis. 5The ...
analysis_1
pdf
analysis_1_chunk_11
Proposition 1.6. $0.\overline{9} = 1$. By $0.\overline{9}$ we denote the real number whose decimal representation is given by an infinite sequence of 9s in the decimal part, 0.999999.... Proof. We give two proofs, none of which is completely correct, at least as far as our current definition and knowledge of the real numbers go. Nevertheless, w...
analysis_1
pdf
analysis_1_chunk_12
way from here, nevertheless we continue the argument for completeness. If you are not comfortable with it now, it is completely OK, just skip this part of the proof. However, before we proceed, we need to show an identity for the sum of elements in a geometric series6. Claim. Let a ∈ R, a ≠ 1. Then, a + a^2 + · · · + a^n...
analysis_1
pdf
analysis_1_chunk_13
And then we can proceed showing the statement: \[\sum_{i=1}^{\infty} \frac{9}{10^i} = 9 \cdot \sum_{i=1}^{\infty} \frac{1}{10^i} = 9 \cdot \lim_{n \to \infty} \sum_{i=1}^{n} \frac{1}{10^i} = 9 \cdot \lim_{n \to \infty} \frac{\frac{1}{10} - \frac{1}{10^{n+1}}}{1 - \frac{1}{10}} = 9 \cdot \frac{\frac{1}{10} - \lim_{n \to \infty} \frac{1}{10^{n+1}}}{1 - \frac{1}{10}} = 9 \cdot \frac{\frac{1}{10}}{1 - \frac{1}{10}} = 9 \cdot \frac{1}{9} = 1.\] In Section 2 and in the following one, we will introduce all the necessary tools, definitions, notatio...
analysis_1
pdf
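The geometric-sum identity invoked in this computation admits a one-line telescoping argument. A sketch in LaTeX (not part of the original notes):

```latex
% Telescoping proof of the claim, for a \neq 1:
%   a + a^2 + \dots + a^n = \frac{a - a^{n+1}}{1 - a}
\[
  (1 - a)\sum_{i=1}^{n} a^{i}
    = \sum_{i=1}^{n} a^{i} - \sum_{i=1}^{n} a^{i+1}
    = a - a^{n+1},
  \qquad\text{so}\qquad
  \sum_{i=1}^{n} a^{i} = \frac{a - a^{n+1}}{1 - a}.
\]
```

With $a = \frac{1}{10}$, this is exactly the partial-sum formula used in the computation of $\sum_{i=1}^{\infty} 9/10^i$.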
analysis_1_chunk_14
2 BASIC NOTIONS 2.1 Sets A set S is a collection of objects called elements. If a is an element of S, we say that a belongs to S or that S contains a, and we write a ∈ S. If an element a is not in S, we then write a ∉ S. If the elements a, b, c, d, . . . form the set S, we write S = {a, b, c, d, . . . }. We can also defi...
analysis_1
pdf
analysis_1_chunk_15
write T ⊊ S. Example 2.4. 2N ⊊ N since 1 ∉ 2N. If we just write T ⊆ S, we mean that T is a subset of S that may be equal to S, but we are not making any particular statement about whether or not T is a strict subset of S. Hence, in the previous Example 2.4, we may have also used the notation 2N ⊆ N and that would have been...
analysis_1
pdf
analysis_1_chunk_16
2.2 Number sets There are a few important sets that we are going to work with all along this course: (1) ∅: the empty set; it is the set which has no elements, ∅:= { }. Exercise 2.6. Show that for any set S, ∅⊆S. (2) N : the set of natural numbers, N := {0, 1, 2, 3, 4, 5, 6, . . . }. N is well ordered, that is, all its...
analysis_1
pdf
analysis_1_chunk_17
defined a multiplicative inverse x^{-1} such that x · x^{-1} = 1; ◦ the word ordered refers to the fact that given two elements x, y ∈ R we can always decide whether x < y, or x > y, or x = y; moreover, this comparison is also compatible with the operations that make R into a field. (3) R satisfies the Infimum Axiom 2.22, that wi...
analysis_1
pdf
analysis_1_chunk_18
2.2.1 Half lines, intervals, balls We introduce here further notation regarding the real numbers and some special classes of subsets that we will be using all throughout the course. (1) Invertible real numbers: R* := {x ∈ R | x ≠ 0}. (2) Closed half lines: R+ := {x ∈ R | x ≥ 0}, R− := {x ∈ R | x ≤ 0}. At times, these are als...
analysis_1
pdf
analysis_1_chunk_19
a as B(a, δ) := ]a − δ, a + δ[. (6) Closed balls: let a, δ ∈ R, δ ≥ 0; we define the closed ball B̄(a, δ) ⊆ R of radius δ and center a as B̄(a, δ) := [a − δ, a + δ]. When δ = 0, then B̄(a, 0) = {a}. 2.2.2 Extended real numbers The extended real line is the set R̄ := {−∞, +∞} ∪ R. The symbol +∞ (resp. −∞) is called “plus infinity” (r...
analysis_1
pdf
analysis_1_chunk_20
involving ±∞; thus, be very careful not to treat these as numbers. If you think about it carefully, you can see that it is hard to coherently define, for example, the result of the addition +∞ + (−∞). Later in the course we will use these symbols extensively. For the time being, we just want to use them to define the following...
analysis_1
pdf
analysis_1_chunk_21
for all s ∈S. (2) If S has an upper (resp. a lower) bound then S is said to be bounded from above (resp. bounded from below). (3) The set S is said to be bounded if it is bounded both from above and below. For a set S ⊆R in general upper and lower bounds are not unique. Example 2.9. (1) The set N ⊂R is bounded from bel...
analysis_1
pdf
analysis_1_chunk_22
(3) The set S := {n^2 | n ∈ Z} is bounded from below: in fact, ∀n ∈ Z, n^2 ≥ 0, thus 0 is a lower bound. On the other hand, it is not bounded. In fact, assume for the sake of contradiction that S were bounded from above, i.e., that there exists u ∈ R with u ≥ s, ∀s ∈ S. Since for any n ∈ N, n^2 ≥ n, then it would follow that u > n, ...
analysis_1
pdf
analysis_1_chunk_23
[3, 5], ]3, 5], ]3, 5[.) Using the discussion of the above examples, we summarize here some of the main properties of upper and lower bounds. Proposition 2.10. Let S ⊂R be a non-empty set. Let c ∈R. (1) If u is an upper bound for S, then for any d ≥u, d is also an upper bound for S. (2) If l is a lower bound for S, the...
analysis_1
pdf
analysis_1_chunk_24
(3) If c is a lower bound for S, then c ≤ s for every element s ∈ S. Since T ⊆ S, any element t ∈ T is also an element of S. Hence, a fortiori, the inequality c ≤ s, ∀s ∈ S implies also that c ≤ t, ∀t ∈ T. The case of an upper bound is analogous; it suffices to change the direction of the inequalities. (4) Since T is n...
analysis_1
pdf
analysis_1_chunk_25
for S. To conclude we need to show that no real number c > a (resp. d < b) is a lower bound (resp. an upper bound) of S. To show this, it suffices to show that there exists an element m ∈ S such that m < c. Since c > a, then a < a + (c−a)/2 < c. If a + (c−a)/2 ∈ S, it suffices to take m := a + (c−a)/2. If a + (c−a)/2 ∉ S, then a + ...
analysis_1
pdf
analysis_1_chunk_26
n, n′ are distinct, i.e., n ≠ n′, we can assume that n < n′. As n′ is a maximum, then n′ ∈ S. But as n is also a maximum, in particular, n is also an upper bound, i.e., n ≥ s, ∀s ∈ S; hence, also n ≥ n′, which is in contradiction with our assumption above that n′ > n. You can apply a similar argument for the uniqueness of...
analysis_1
pdf
analysis_1_chunk_27
consequence of Definition 2.11 and of Proposition 2.10. Proposition 2.15. Let S ⊆ R be a bounded interval of extremes a < b. (1) The maximum of S exists if and only if b ∈ S. In this case, max S = b. (2) The minimum of S exists if and only if a ∈ S. In this case, min S = a. When S is not an interval, it may be more complica...
analysis_1
pdf
analysis_1_chunk_28
{(n−1)/n | n ∈ Z*+}, as it is the least of all possible upper bounds. On the other hand, 1 cannot be the maximum of S as 1 ∉ S. This phenomenon motivates the next definition. Definition 2.17. Let S ⊆ R be a non-empty subset.
analysis_1
pdf
analysis_1_chunk_29
(1) If the set U of all upper bounds of S is non-empty and U admits a minimum u ∈ U, then we call u the supremum of S. (2) If the set L of all lower bounds of S is non-empty and L admits a maximum l ∈ L, then we call l the infimum of S. Remark 2.18. Let S ⊆ R be a non-empty subset. If the set U of all upper bounds of S i...
analysis_1
pdf
analysis_1_chunk_30
You can apply a similar argument for the uniqueness of the minimum. Example 2.21. (1) Let S := {(n−1)/n | n ∈ Z*+}. Then, sup S = 1, cf. Example 2.16.3. (2) Take S := {n^3 | n ∈ Z}. Then, S is unbounded. Thus, inf S, sup S do not exist. (3) If S is a bounded interval of extremes a < b, then sup S = b, inf S = a. Indeed, ...
analysis_1
pdf
analysis_1_chunk_31
Axiom 2.22. [Infimum axiom] Each non-empty subset S of R*+ admits an infimum (which is a real number). Remark 2.23. In Mathematics, an axiom is a statement that we are going to assume to be true, without requiring a formal proof for it. When we introduce an axiom, we are free to use the properties stated in the axiom, ...
analysis_1
pdf
analysis_1_chunk_32
we know that if such l existed, then l < √3, since √3 ∉ Q, cf. Proposition 2.38, and l is certainly a lower bound for S. But then, Proposition 2.44 shows that there exists a rational number m such that l < m < √3. As m < √3, then we know that m is also a lower bound for S. This is clearly a contradiction, as m ∈ Q na...
analysis_1
pdf
analysis_1_chunk_33
Let W ⊆ R be the subset obtained by translating the elements of S by −l + 1, W := {s − l + 1 | s ∈ S}. Why did we choose to translate the elements of S by −l + 1? The reason is that W ⊆ R*+: in fact, by (2.26.a), s − l + 1 ≥ 1 > 0, for all s ∈ S.9 As W ⊆ R*+, the Infimum Axiom 2.22 implies that inf W exists; call it a := inf W. Th...
analysis_1
pdf
analysis_1_chunk_34
S′ := {−x | x ∈ S}. Since S is bounded from above, S′ is bounded from below. [Prove this!] Then by part (1), inf S′ exists. It is left to you to show that sup S = −inf S′. We have seen the definition of infimum/supremum and minimum/maximum. Both the infimum (resp. supremum) and minimum (resp. maximum) of a set S, provi...
analysis_1
pdf