Im(Φ)  Image of linear mapping Φ
ker(Φ)  Kernel (null space) of a linear mapping Φ
span[b1]  Span (generating set) of b1
tr(A)  Trace of A
det(A)  Determinant of A
| · |  Absolute value or determinant (depending on context)
∥ · ∥  Norm; Euclidean, unless specified
λ  Eigenvalue or Lagrange multiplier
Eλ  Eigenspace corresponding to eigenvalue λ
Draft (2023-10-18) of “Mathematics for Machine Learning”. Feedback: https://mml-book.com. |
Symbol  Typical meaning
x ⊥ y  Vectors x and y are orthogonal
V  Vector space
V⊥  Orthogonal complement of vector space V
$\sum_{n=1}^{N} x_n$  Sum of the $x_n$: $x_1 + \dots + x_N$
$\prod_{n=1}^{N} x_n$  Product of the $x_n$: $x_1 \cdot \dots \cdot x_N$
θ  Parameter vector
∂f/∂x  Partial derivative of f with respect to x
df/dx  Total derivative of f with respect to x
∇  Gradient
f∗ = min_x f(x)  The smallest function value of f
x∗ ∈ arg min_x f(x)  The value x∗ that minimizes f (note: arg min returns a set of values)
L  Lagrangian
L  Negative log-likelihood
$\binom{n}{k}$  Binomial coefficient, n choose k
V_X[x]  Variance of x with respect to the random variable X
E_X[x]  Expectation of x with respect to the random variable X
Cov_{X,Y}[x, y]  Covariance between x and y
X ⊥⊥ Y | Z  X is conditionally independent of Y given Z
X ∼ p  Random variable X is distributed according to p
N(µ, Σ)  Gaussian distribution with mean µ and covariance Σ
Ber(µ)  Bernoulli distribution with parameter µ
Bin(N, µ)  Binomial distribution with parameters N, µ
Beta(α, β)  Beta distribution with parameters α, β
Table of Abbreviations and Acronyms

Acronym  Meaning
e.g.  Exempli gratia (Latin: for example)
GMM  Gaussian mixture model
i.e.  Id est (Latin: this means)
i.i.d.  Independent, identically distributed
MAP  Maximum a posteriori
MLE  Maximum likelihood estimation/estimator
ONB  Orthonormal basis
PCA  Principal component analysis
PPCA  Probabilistic principal component analysis
REF  Row-echelon form
SPD  Symmetric, positive definite
SVM  Support vector machine
©2023 M. P. Deisenroth, A. A. Faisal, C. S. Ong. Published by Cambridge University Press (2020).

Part I

Mathematical Foundations
This material is published by Cambridge University Press as Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong (2020). This version is free to view and download for personal use only. Not for re-distribution, re-sale, or use in derivative works. ©by M. P. Deisenroth, A. A. Faisal, and C. S. Ong.
1 Introduction and Motivation
Machine learning is about designing algorithms that automatically extract valuable information from data. The emphasis here is on “automatic”, i.e., machine learning is concerned with general-purpose methodologies that can be applied to many datasets, while producing something that is meaningful. There are three concepts that are at the core of machine learning: data, a model, and learning.
Since machine learning is inherently data driven, data is at the core of machine learning. The goal of machine learning is to design general-purpose methodologies to extract valuable patterns from data, ideally without much domain-specific expertise. For example, given a large corpus of documents (e.g., books in m...
While machine learning has seen many success stories, and software is readily available to design and train rich and flexible machine learning systems, we believe that the mathematical foundations of machine learning are important in order to understand fundamental principles upon which more complicated machine learni...
1.1 Finding Words for Intuitions
A challenge we face regularly in machine learning is that concepts and words are slippery, and a particular component of the machine learning system can be abstracted to different mathematical concepts. For example, the word “algorithm” is used in at least two different senses in the context of machine learning. In the first sense, we use the phrase “machine learning algorithm” to mean a system that makes predictions based on input data. We refer to these algorithms as predictors. In the second sense, we use the exact same phrase “machine learning algorithm” to mean a system that adapts some internal parameters of the predictor so that it performs well on future unseen input data. Here we refer to this adaptation as training.
This book will not resolve the issue of ambiguity, but we want to highlight upfront that, depending on the context, the same expressions can mean different things. However, we attempt to make the context sufficiently clear to reduce the level of ambiguity.
The first part of this book introduces the mathematical concepts and foundations needed to talk about the three main components of a machine learning system: data, models, and learning. We will briefly outline these components here, and we will revisit them again in Chapter 8 once we have discussed the necessary mathematical concepts.
While not all data is numerical, it is often useful to consider data in a number format. In this book, we assume that data has already been appropriately converted into a numerical representation suitable for reading into a computer program. Therefore, we think of data as vectors. As another illustration of how subtle words are, there are (at least) three different ways to think about vectors: a vector as an array of numbers (a computer science view), a vector as an arrow with a direction and magnitude (a physics view), and a vector as an object that obeys addition and scaling (a mathematical view).
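The computer science and mathematics views can be contrasted in a short sketch (using NumPy purely for illustration; the variable names are our own): a vector is stored as an array of numbers, but what makes it a vector mathematically is that it supports addition and scalar multiplication.

```python
import numpy as np

# Computer science view: a vector is an array of numbers.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Mathematics view: a vector is an object that can be added to other
# vectors and multiplied by scalars, yielding another vector.
z = 2.0 * x + y
print(z)  # [ 6.  9. 12.]
```

The physics view (an arrow with direction and magnitude) corresponds to interpreting the same array geometrically, e.g., its length is the Euclidean norm ∥z∥.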
A model is typically used to describe a process for generating data, similar to the dataset at hand. Therefore, good models can also be thought of as simplified versions of the real (unknown) data-generating process, capturing aspects that are relevant for modeling the data and extracting hidden patterns from it.
We now come to the crux of the matter, the learning component of machine learning. Assume we are given a dataset and a suitable model. Training the model means to use the data available to optimize some parameters of the model with respect to a utility function that evaluates how well the model predicts the training data.
Most training methods can be thought of as an approach analogous to climbing a hill to reach its peak, where the peak corresponds to a maximum of some desired performance measure. However, in practice, we are interested in the model performing well on unseen data. Performing well on data that we have already seen (training data) may only mean that we found a good way to memorize the data. However, this may not generalize well to unseen data, and, in practical applica...
Let us summarize the main concepts of machine learning that we cover in this book: |
We represent data as vectors.
We choose an appropriate model, either using the probabilistic or optimization view.
We learn from available data by using numerical optimization methods with the aim that the model performs well on data not used for training.
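These three concepts can be sketched end to end in a few lines. This is a minimal illustration with hypothetical data (generated from y = 2x + 1, which the learner does not know); the model, variable names, and learning rate are our own choices, not the book's.

```python
# Data: inputs and targets represented as vectors of numbers.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by the unknown process y = 2x + 1

# Model: a line y ≈ a*x + b with parameters (a, b).
def predict(a, b, x):
    return a * x + b

# Learning: numerically optimize (a, b) to minimize the mean squared
# error on the training data, here with plain gradient descent.
a, b = 0.0, 0.0
lr = 0.05
n = len(xs)
for _ in range(2000):
    grad_a = sum(2 * (predict(a, b, x) - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (predict(a, b, x) - y) for x, y in zip(xs, ys)) / n
    a -= lr * grad_a
    b -= lr * grad_b

# The learned parameters approach the true (2, 1), and the model also
# predicts well at an input not used for training (generalization).
print(round(a, 2), round(b, 2))       # ≈ 2.0 1.0
print(round(predict(a, b, 5.0), 2))   # ≈ 11.0
```

Note that low training error alone is not the goal; the check at x = 5.0, a point outside the training set, is what probes generalization rather than memorization.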
1.2 Two Ways to Read This Book |
We can consider two strategies for understanding the mathematics for machine learning: |
Bottom-up: Building up the concepts from foundational to more advanced. This is often the preferred approach in more technical fields, such as mathematics. This strategy has the advantage that the reader at all times is able to rely on their previously learned concepts. Unfortunately, for a practitioner many of the f...
Top-down: Drilling down from practical needs to more basic requirements. This goal-driven approach has the advantage that the readers know at all times why they need to work on a particular concept, and there is a clear path of required knowledge. The downside of this strategy is that the knowledge is built on potent...
We decided to write this book in a modular way to separate foundational (mathematical) concepts from applications so that this book can be read in both ways. The book is split into two parts, where Part I lays the mathematical foundations and Part II applies the concepts from Part I to a set of fundamental machine learning problems.
Figure 1.1 The foundations and four pillars of machine learning.