\cleardoublepage
\lecture{1}{2022-09-23}{I'm rambling a bit}{}
\chapterafterlecture{Introduction}
\begin{parag}{Definition: Algorithm}
An \important{algorithm} is any well-defined computational procedure that takes some value, or set of values as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
\end{parag}
\begin{parag}{Definition: Instance}
Given a problem, an \important{instance} is a set of precise inputs.
\begin{subparag}{Remark}
Note that for a problem that expects a number $n$ as input, ``a positive integer'' is not an instance, whereas $232$ would be one.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture01/NaiveArithmetic.code}
ans = 0
for i = 1, 2, ..., n
    ans = ans + i
return ans
\end{filecontents*}
\begin{parag}{Example: Arithmetic series}
Let's say that, given $n$, we want to compute $\sum_{i=1}^{n} i$. There are multiple ways to do so.
\begin{subparag}{Naive algorithm}
The first algorithm that could come to mind is to compute the sum directly:
\importcode{Lecture01/NaiveArithmetic.code}{pseudo}
This algorithm is very space efficient, since it only stores 2 numbers. However, it has a time-complexity of $\Theta\left(n\right)$ elementary operations, which is not optimal.
\end{subparag}
\begin{subparag}{Clever algorithm}
A much better way is to simply use the arithmetic partial series formula, yielding:
\importcode{Lecture01/CleverArithmetic.code}{pseudo}
This algorithm is both very efficient in space and in time. This shows that the first algorithm we think of is not necessarily the best.
\end{subparag}
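\begin{subparag}{Personal note: The formula}
As a reminder (my own addition, not from the lecture), the formula in question is the arithmetic series identity:
\[\sum_{i=1}^{n} i = \frac{n\left(n+1\right)}{2}\]
so the whole computation reduces to one addition, one multiplication and one division, i.e. $\Theta\left(1\right)$ elementary operations.
\end{subparag}
\end{parag}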
\begin{parag}{Complexity analysis}
We want to analyse algorithm complexities and, to do so, we need a model. We will consider that any primitive operation (basically any line of pseudocode) takes a constant amount of time. Different lines may take different amounts of time, but those times do not depend on the sizes of the inputs.
\begin{subparag}{Remark}
When comparing asymptotic behaviour, we have to be careful about the fact that it is exactly that: asymptotic. In other words, some algorithms which behave less well when $n$ is very large might be better when $n$ is very small.
As a personal remark, it makes me think about galactic algorithms. There ...
\begin{parag}{Personal note: Definitions}
We say that $f\left(x\right) \in O\left(g\left(x\right)\right)$, or more informally $f\left(x\right) = O\left(g\left(x\right)\right)$, read ``$f$ is big-O of $g$'', if there exist an $M \in \mathbb{R}_+$ and an $x_0 \in \mathbb{R}$ such that:
\[\left|f\left(x\right)\right| \leq M\left|g\left(x\right)\right|, \mathspace \forall x \geq x_0\]
\end{parag}
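\begin{parag}{Personal note: Example}
As a quick sanity check (my own example, not from the lecture), $3x^2 + 5 \in O\left(x^2\right)$: taking $M = 4$ and $x_0 = 3$, we have $x^2 \geq 5$ for all $x \geq 3$, and hence:
\[\left|3x^2 + 5\right| \leq 3x^2 + x^2 = 4x^2 = M\left|x^2\right|, \mathspace \forall x \geq x_0\]
\end{parag}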
\begin{parag}{Personal note: Intuition}
We can have the following intuition:
\begin{itemize}
\item $f\left(x\right) \in O\left(g\left(x\right)\right)$ means that $f$ grows slower than (or as fast as) $g$ when $x \to \infty$.
\item $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$ means that $f$ grows faster than (or as fast as) $g$ when $x \to \infty$.
\item $f\left(x\right) \in \Theta\left(g\left(x\right)\right)$ means that $f$ grows exactly as fast as $g$ when $x \to \infty$.
\end{itemize}
\end{parag}
\begin{parag}{Personal note: Theorem}
Let $f$ and $g$ be two functions, such that the following limit exists or diverges:
\[\lim_{x \to \infty} \frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} = \ell \in \mathbb{R} \cup \left\{\infty\right\}\]
We can draw the following conclusions, depending on the value of $\ell$:
\begin{itemize}
\item If $\ell = 0$, then $f\left(x\right) \in O\left(g\left(x\right)\right)$.
\item If $\ell = \infty$, then $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$.
\item If $0 < \ell < \infty$, then $f\left(x\right) \in \Theta\left(g\left(x\right)\right)$.
\end{itemize}
\begin{subparag}{Proof}
We will only prove the third point; the other two are left as exercises to the reader.
First, we can see that $\ell > 0$, since $\ell \neq 0$ by hypothesis and since $\frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} \geq 0$ for all $x$.
We can apply the definition of the limi...
\begin{subparag}{Example}
Let $a \in \mathbb{R}$ and $b \in \mathbb{R}_+$. Let us compute the following ratio:
\[\lim_{n \to \infty} \frac{\left|\left(n + a\right)^b\right|}{\left|n^b\right|} = \lim_{n \to \infty} \left|\left(1 + \frac{a}{n}\right)^{b}\right| = 1\]
which allows us to conclude that $\left(n + a\right)^b \in \Theta\left(n^b\right)$.
\end{subparag}
\begin{subparag}{Side note: Link with series}
You can go read my Analyse 1 notes on my GitHub (in French) if you want more information, but there is an interesting link with series we can make here.
You can convince yourself that if $a_n \in \Theta\left(b_n\right)$, then $\sum_{n = 1}^{\infty} \left|a_n\right|$ and $\sum_{n = 1}^{\infty} \left|b_n\right|$ either both converge or both diverge.
\end{subparag}
\begin{parag}{Definition: The sorting problem}
For the \important{sorting problem}, we take a sequence of $n$ numbers $\left(a_1, \ldots, a_n\right)$ as input, and we want to output a reordering of those numbers $\left(a_1', \ldots, a_n'\right)$ such that $a_1' \leq \ldots \leq a_n'$.
\begin{subparag}{Example}
Given the input $\left(5, 2, 4, 6, 1, 3\right)$, a correct output is $\left(1, 2, 3, 4, 5, 6\right)$.
\end{subparag}
\begin{subparag}{Personal note: Remark}
It is important to have the same numbers at the start and at the end. Otherwise, this would allow algorithms such as the Stalin sort (remove all elements which are not in order, leading to a complexity of $\Theta\left(n\right)$) or the Nagasaki sort (clear the list, leading to a complexity of $\Theta\left(1\right)$).
\end{subparag}
\begin{parag}{Definition: In place algorithm}
An algorithm solving the sorting problem is said to be \important{in place} when the numbers are rearranged within the array (with at most a constant number of variables outside the array at any time).
\end{parag}
\begin{parag}{Loop invariant}
We will see algorithms which we will need to prove correct. One method to do so is a \important{loop invariant}: a property that stays true at every iteration of a loop. The idea is very similar to induction.
To use a loop invariant, we need to do three steps. In the \important{initialisation} step, we show that the invariant holds before the first iteration of the loop. In the \important{maintenance} step, we show that, if the invariant holds before an iteration, then it still holds before the next one. Finally, in the \important{termination} step, we use the invariant at the end of the loop to prove that the algorithm is correct.
\end{parag}
\section{Insertion sort}
\begin{filecontents*}[overwrite]{Lecture01/InsertionSort.code}
for j = 2 to n:
    key = a[j]
    // Insert a[j] into the sorted sequence a[1..(j-1)].
    i = j - 1
    while i > 0 and a[i] > key
        a[i + 1] = a[i]
        i = i - 1
    a[i + 1] = key
\end{filecontents*}
\begin{parag}{Insertion sort}
The idea of \important{insertion sort} is to sort the sequence iteratively: at each step, we insert the next element at its right place among the elements already processed.
This algorithm can be formulated as:
\importcode{Lecture01/InsertionSort.code}{pseudo}
We can see that this algorithm is in place.
\end{parag}
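\begin{parag}{Personal note: Python sketch}
Here is the same algorithm as runnable Python (a personal sketch: the file name, the \texttt{python} language tag and the 0-indexing are my own assumptions, not the course's):
\begin{filecontents*}[overwrite]{Lecture01/InsertionSortSketch.py}
def insertion_sort(a):
    """Sort the list a in place and return it (0-indexed version)."""
    for j in range(1, len(a)):
        key = a[j]
        # Insert a[j] into the already-sorted prefix a[0..j-1].
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]  # shift larger elements one slot to the right
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # prints [1, 2, 3, 4, 5, 6]
\end{filecontents*}
\importcode{Lecture01/InsertionSortSketch.py}{python}
\end{parag}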
\lecture{2...
\begin{parag}{Proof}
Let us prove that insertion sort works by using a loop invariant.
We take as an invariant that at the start of each iteration of the outer for loop, the subarray \texttt{a[1\ldots (j-1)]} consists of the elements originally in \texttt{a[1\ldots (j-1)]} but in sorted order.
\begin{enumerate}
\i...
\begin{parag}{Complexity analysis}
We can see that the first line is executed $n$ times, and the lines which do not belong to the inner loop are executed $n-1$ times (the first line of a loop is executed one more time than its body, since we need one last comparison before knowing we can exit the loop). We only ne...
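\begin{subparag}{Personal note: Worst case}
To sketch the missing computation (my reconstruction): in the worst case, a reverse-sorted input, the inner loop performs $j - 1$ shifts for each $j$, so the total number of elementary operations is of order:
\[\sum_{j=2}^{n} \left(j - 1\right) = \frac{n\left(n-1\right)}{2} = \Theta\left(n^2\right)\]
\end{subparag}
\end{parag}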
\begin{parag}{Divide-and-conquer}
We will use a powerful algorithmic approach: recursively dividing the problem into smaller subproblems.
We first \important{divide} the problem into $a$ subproblems of size $\frac{n}{b}$ that are smaller instances of the same problem. We then \important{conquer} the subproblems by solving them recursively. Finally, we \important{combine} the subproblem solutions into a solution for the original problem.
\begin{parag}{Merge sort}
Merge sort is a divide-and-conquer algorithm:
\importcode{Lecture02/MergeSort.code}{pseudo}
For it to be efficient, we need an efficient merge procedure. Note that merging two sorted subarrays is rather easy: if we have two sorted piles of cards and we want to merge them, we only have to repeatedly take the smaller of the two top cards (see the sketch below).
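\begin{subparag}{Personal note: Python sketch}
A runnable Python sketch of this idea (my own; the file name and the non-in-place style are assumptions made for readability):
\begin{filecontents*}[overwrite]{Lecture02/MergeSortSketch.py}
def merge(left, right):
    """Merge two sorted lists: repeatedly take the smaller top card."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # append whatever remains of left ...
    out.extend(right[j:])  # ... or of right (at most one is non-empty)
    return out

def merge_sort(a):
    if len(a) <= 1:
        return a  # 0 or 1 elements: trivially sorted
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # prints [1, 2, 3, 4, 5, 6]
\end{filecontents*}
\importcode{Lecture02/MergeSortSketch.py}{python}
\end{subparag}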
\begin{subparag}{Remark}
The Professor put the following video on the slides, and I like it very much, so here it is (reading the comments, the dancers say ``teile und herrsche'', which is German for ``divide and conquer''):
\begin{center}
\url{https://www.youtube.com/watch?v=dENca26N6V4}
\end{center}
\end{subparag}
\end{parag}
\begin{parag}{Theorem: Correctness of Merge-Sort}
Assuming that the implementation of the \texttt{merge} procedure is correct, \texttt{mergeSort(A, p, r)} correctly sorts the numbers in $A\left[p\ldots r\right]$.
\begin{subparag}{Proof}
Let's do a proof by induction on $n = r - p$.
\begin{itemize}[left=0pt]
\item When $n = 0$, we have $r = p$, and thus $A\left[p \ldots r\right]$ is trivially sorted.
\item We suppose our statement is true for all $n \in \left\{0, \ldots, k-1\right\}$ for some $k$, and we want to prove it for...
\begin{parag}{Complexity analysis}
Let's analyse the complexity of merge sort.
Instantiating the general divide-and-conquer recurrence, we get:
\begin{functionbypart}{T\left(n\right)}
\Theta\left(1\right), \mathspace \text{if } n = 1\\
2T\left(\frac{n}{2}\right) + \Theta\left(n\right), \mathspace \text{otherwise}
\end{functionbypart}
\begin{subparag}{Proof: Upper bound}
We want to show that there exists a constant $a > 0$ such that $T\left(n\right) \leq an\log\left(n\right)$ for all $n \geq 2$ (meaning that $T\left(n\right) = O\left(n \log\left(n\right)\right)$), by induction on $n$.
\begin{itemize}[left=0pt]
\item For any constant $n \in \left...
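As a personal reconstruction of the key inductive step (with $c$ the constant hidden in the $\Theta\left(n\right)$ term):
\[T\left(n\right) \leq 2a\frac{n}{2}\log\left(\frac{n}{2}\right) + cn = an\log\left(n\right) - an + cn \leq an\log\left(n\right)\]
which holds as soon as we pick $a \geq c$.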
\begin{subparag}{Proof: Lower bound}
We want to show that there exists a constant $b > 0$ such that $T\left(n\right) \geq bn\log\left(n\right)$ for all $n \geq 1$ (meaning that $T\left(n\right) = \Omega\left(n \log\left(n\right)\right)$), by induction on $n$.
\begin{itemize}[left=0pt]
\item For $n = 1$, $T\left(n\...
\begin{subparag}{Proof: Conclusion}
Since $T\left(n\right) = O\left(n \log\left(n\right)\right)$ and $T\left(n\right) = \Omega\left(n \log\left(n\right)\right)$, we have proven that $T\left(n\right) = \Theta\left(n\log\left(n\right)\right)$.
\end{subparag}
\begin{subparag}{Remark}
The real recurrence relation for merge-sort is:
\begin{functionbypart}{T\left(n\right)}
c, \mathspace \text{if } n = 1 \\
T\left(\left\lfloor \frac{n}{2} \right\rfloor \right) + T\left(\left\lceil \frac{n}{2} \right\rceil \right) + c \cdot n.
\end{functionbypart}
Note that we are allowed...
\begin{parag}{Remark}
We have to be careful when using asymptotic notations with induction. For instance, if we know that $T\left(n\right) = 4T\left(\frac{n}{4}\right) + n$, and we want to prove that $T\left(n\right) = O\left(n\right)$, then we cannot just do:
\[T\left(n\right) \over{\leq}{IH} 4c \frac{n}{4} + n = cn + n = \left(c + 1\right)n\]
Indeed, this does not prove $T\left(n\right) \leq cn$: we must prove the \textit{exact} form of the inductive hypothesis, and $\left(c + 1\right)n \neq cn$.
\end{parag}
\begin{parag}{Other proof: Tree}
Another way of guessing the complexity of merge sort, which works for many recurrences, is to think of the entire recursion tree. A recursion tree is a tree (really?) in which each node corresponds to the cost of a subproblem. We can thus sum the costs within each level of the tree to obtain a per-level cost, and then sum all the per-level costs to obtain the total cost.
\begin{parag}{Tree: Other example}
Let's do another example with a tree, this time for a recurrence on which the master theorem (seen below) does not apply: we take $T\left(n\right) = T\left(\frac{n}{3}\right) + T\left(\frac{2n}{3}\right) + cn$. The tree looks like:
\imagehere[0.7]{Lecture03/TreeOtherExample.png}
Again, we notice that every level has a total cost of at most $cn$, and the longest root-to-leaf path has length $\log_{3/2}\left(n\right)$, which suggests $T\left(n\right) = O\left(n \log\left(n\right)\right)$.
\begin{parag}{Example}
Let's look at the following recurrence:
\[T\left(n\right) = T\left(\frac{n}{4}\right) + T\left(\frac{3}{4}n\right) + 1\]
We want to show it is $\Theta\left(n\right)$.
\begin{subparag}{Upper bound}
Let's prove that there exists a $b$ such that $T\left(n\right) \leq bn$. We consider the base case to be correct, by choosing $b$ to be large enough.
Let's do the inductive step. We get:
\autoeq{T\left(n\right) = T\left(\frac{1}{4}n\right) + T\left(\frac{3}{4}n\right) + 1 \leq b \frac{1}{4} n + b \frac{3}{4} n + 1 = bn + 1}
This does not allow us to conclude, since $bn + 1 \not\leq bn$.
\end{subparag}
\begin{subparag}{Upper bound (better)}
Let's now instead use the stronger induction hypothesis stating that $T\left(n\right) \leq bn - b'$. This gives us:
\autoeq{T\left(n\right) = T\left(\frac{1}{4}n\right) + T\left(\frac{3}{4}n\right) + 1 \leq b \frac{1}{4} n - b' + b \frac{3}{4} n - b' + 1 = bn - b' + \left(1 - b'\right) \leq bn - b'}
which holds as long as $b' \geq 1$, concluding the induction.
\end{subparag}
\begin{parag}{Master theorem}
Let $a \geq 1$ and $b > 1$ be constants. Also, let $T\left(n\right)$ be a function defined on the nonnegative integers by the following recurrence:
\[T\left(n\right) = aT\left(\frac{n}{b}\right) + f\left(n\right)\]
Then, $T\left(n\right)$ has the following asymptotic bounds:
\begin{enumerate}[left=0pt]
\item If $f\left(n\right) = O\left(n^{\log_b\left(a\right) - \epsilon}\right)$ for some constant $\epsilon > 0$, then $T\left(n\right) = \Theta\left(n^{\log_b\left(a\right)}\right)$.
\item If $f\left(n\right) = \Theta\left(n^{\log_b\left(a\right)}\right)$, then $T\left(n\right) = \Theta\left(n^{\log_b\left(a\right)} \log\left(n\right)\right)$.
\item If $f\left(n\right) = \Omega\left(n^{\log_b\left(a\right) + \epsilon}\right)$ for some constant $\epsilon > 0$, and if $af\left(\frac{n}{b}\right) \leq cf\left(n\right)$ for some constant $c < 1$ and all sufficiently large $n$, then $T\left(n\right) = \Theta\left(f\left(n\right)\right)$.
\end{enumerate}
\begin{subparag}{Example}
Let us consider the case of merge sort, thus $T\left(n\right) = 2T\left(\frac{n}{2}\right) + cn$. We get $a = b = 2$, so $\log_b\left(a\right) = 1$ and:
\[f\left(n\right) = \Theta\left(n^1\right) = \Theta\left(n^{\log_b\left(a\right)}\right)\]
This means that we are in the second case, and thus $T\left(n\right) = \Theta\left(n \log\left(n\right)\right)$.
\end{subparag}
\begin{subparag}{Tree}
To learn this theorem, we only need to get the intuition of why it works, and to be able to reconstruct it. To do so, we can draw a tree. The depth of this tree is $\log_b\left(n\right)$, and there are $a^{\log_b\left(n\right)} = n^{\log_b\left(a\right)}$ leaves. If a node does $f\left(n\right)$...
\begin{parag}{Application}
Let's use a modified version of merge sort to count the number of inversions in an array $A$ (an inversion is a pair $i < j$ such that $A\left[j\right] < A\left[i\right]$; we assume $A$ never contains the same value twice).
The idea is that we can just add a return value to merge sort: the number...
\begin{subparag}{Remark}
We can notice that there are at most $\frac{n\left(n-1\right)}{2}$ inversions (reached by a reverse-sorted array). It may seem surprising that our algorithm manages to count this value with a smaller complexity. This comes from the fact that, sometimes, we add much more than 1 at a time in the merge procedure.
\end{subparag}
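\begin{subparag}{Personal note: Python sketch}
A runnable sketch of the modified merge sort (my own reconstruction of the idea described above; the file name is hypothetical):
\begin{filecontents*}[overwrite]{Lecture04/CountInversionsSketch.py}
def sort_count(a):
    """Return (sorted copy of a, number of inversions in a)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = sort_count(a[:mid])
    right, inv_right = sort_count(a[mid:])
    out, i, j = [], 0, 0
    inv = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            # Every remaining element of left forms an inversion with
            # right[j]: here we may add much more than 1 at a time.
            inv += len(left) - i
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out, inv

print(sort_count([5, 2, 4, 6, 1, 3])[1])  # prints 9
\end{filecontents*}
\importcode{Lecture04/CountInversionsSketch.py}{python}
\end{subparag}
\end{parag}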
\begin{parag}{Maximum subarray problem}
We have an array of values representing stock prices, and we want to find when we should have bought and when we should have sold (retrospectively, so this is no investment advice). We want to buy when the price is as low as possible and sell when it is as high as possible. Note t...
\begin{subparag}{Remark}
We will make a $\Theta\left(n\right)$ algorithm to solve this problem in the third exercise series.
\end{subparag}
\end{parag}
\begin{parag}{Problem}
We want to multiply two numbers quickly. The regular algorithm seen in primary school is $O\left(n^2\right)$, but we suspect that we may be able to go faster.
We are given two integers $a, b$ with $n$ bits each (they are given to us through arrays of bits), and we want to output $a \cdot b$. This...
\begin{parag}{Fast multiplication}
We want to use a divide-and-conquer strategy.
Let's say we have an array of digits $a_0, \ldots, a_{n-1}$ giving us $a$, and an array of digits $b_0, \ldots, b_{n-1}$ giving us $b$ (we will use base 10 here, but it works in any base):
\[a = \sum_{i=0}^{n-1} a_i 10^{i}, \mathspace b = \sum_{i=0}^{n-1} b_i 10^{i}\]
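\begin{subparag}{Personal note: The split}
To make the divide step explicit (my reconstruction of the standard argument): writing $m = \frac{n}{2}$ and splitting each number into a high and a low half, $a = a_H 10^{m} + a_L$ and $b = b_H 10^{m} + b_L$, we get:
\[ab = a_H b_H 10^{2m} + \left(a_H b_L + a_L b_H\right) 10^{m} + a_L b_L\]
which requires 4 multiplications of $\frac{n}{2}$-digit numbers, plus additions and shifts.
\end{subparag}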
\begin{subparag}{Complexity analysis}
The recurrence of this algorithm is given by:
\[T\left(n\right) = 4T\left(\frac{n}{2}\right) + n\]
since addition takes linear time.
However, this solves to $T\left(n\right) = \Theta\left(n^2\right)$ by the master theorem \frownie.
\end{subparag}
\end{parag}
\begin{parag}{Karatsuba algorithm}
Karatsuba, a mathematician, realised that we do not need 4 multiplications. Indeed, let's compute the following value:
\[\left(a_L + a_H\right)\left(b_L + b_H\right) = a_L b_L + a_L b_H + a_H b_L + a_H b_H\]
This means that, having computed $a_L b_L$ and $a_H b_H$, we can extract $a_L b_H + a_H b_L$ from the product above by subtraction, for a total of only 3 multiplications.
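\begin{subparag}{Personal note: Python sketch}
A runnable sketch of the algorithm (mine; for simplicity it splits on decimal digits and uses Python integers, so the base case is a single-digit multiplication):
\begin{filecontents*}[overwrite]{Lecture05/KaratsubaSketch.py}
def karatsuba(a, b):
    """Multiply two non-negative integers with 3 recursive products."""
    if a < 10 or b < 10:
        return a * b
    m = max(len(str(a)), len(str(b))) // 2
    a_h, a_l = divmod(a, 10 ** m)   # a = a_h * 10^m + a_l
    b_h, b_l = divmod(b, 10 ** m)   # b = b_h * 10^m + b_l
    low = karatsuba(a_l, b_l)
    high = karatsuba(a_h, b_h)
    # (a_l + a_h)(b_l + b_h) - low - high = a_h*b_l + a_l*b_h
    cross = karatsuba(a_l + a_h, b_l + b_h) - low - high
    return high * 10 ** (2 * m) + cross * 10 ** m + low

print(karatsuba(1234, 5678))  # prints 7006652
\end{filecontents*}
\importcode{Lecture05/KaratsubaSketch.py}{python}
\end{subparag}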
\begin{subparag}{Complexity analysis}
The recurrence of this algorithm is given by:
\[T\left(n\right) = 3T\left(\frac{n}{2}\right) + n\]
This solves to $T\left(n\right) = \Theta\left(n^{\log_2\left(3\right)}\right)$, which is better than the primary school algorithm.
Note that we are cheating a bit on the complex...
\begin{parag}{Remark}
Note that, in most cases, we work with 64-bit numbers, which can be multiplied in constant time on a 64-bit CPU. The algorithm above is in fact really useful for huge numbers (in cryptography, for instance).
\end{parag}
\begin{parag}{Problem}
We are given two $n \times n$ matrices, $A = \left(a_{ij}\right)$ and $B = \left(b_{ij}\right)$, and we want to output an $n \times n$ matrix $C = \left(c_{ij}\right)$ such that $C = AB$.
Basically, when computing the value of $c_{ij}$, we compute the dot-product of the $i$\Th row of $A$ and the $j$\Th column of $B$.
\begin{subparag}{Example}
For instance, for $n = 2$:
\autoeq{\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{pmatrix}}
\end{subparag}
\begin{parag}{Naive algorithm}
The naive algorithm is:
\importcode{Lecture05/naiveMatrixMultiplication.code}{pseudo}
\begin{subparag}{Complexity}
There are three nested for-loops, so we get a runtime of $\Theta\left(n^3\right)$.
\end{subparag}
\end{parag}
\begin{parag}{Divide and conquer}
We can realise that multiplying matrices is like multiplying submatrices. If $A$ and $B$ are two $n \times n$ matrices, then we can split them into submatrices and get:
\[\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}\]
\begin{subparag}{Complexity}
Since we are splitting our multiplication into 8 multiplications of $\frac{n}{2} \times \frac{n}{2}$ matrices, we get the following recurrence relation:
\[T\left(n\right) = 8T\left(\frac{n}{2}\right) + n^2\]
since adding two matrices takes $O\left(n^2\right)$ time. By the master theorem, this solves to $T\left(n\right) = \Theta\left(n^3\right)$: no better than the naive algorithm.
\end{subparag}
\begin{parag}{Strassen's algorithm}
Strassen realised that we only need to perform 7 recursive multiplications of $\frac{n}{2} \times \frac{n}{2}$ matrices rather than $8$. This gives us the recurrence:
\[T\left(n\right) = 7T\left(\frac{n}{2}\right) + \Theta\left(n^2\right)\]
where the $\Theta\left(n^2\right)$ comes from additions and subtractions of $\frac{n}{2} \times \frac{n}{2}$ matrices. This solves to $T\left(n\right) = \Theta\left(n^{\log_2\left(7\right)}\right) = O\left(n^{2.81}\right)$ by the master theorem.
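\begin{subparag}{Personal note: The seven products}
For reference, here is the standard formulation of Strassen's seven products (taken from the literature, not from the lecture):
\autoeq{M_1 = \left(A_{11} + A_{22}\right)\left(B_{11} + B_{22}\right), \mathspace M_2 = \left(A_{21} + A_{22}\right)B_{11}, \mathspace M_3 = A_{11}\left(B_{12} - B_{22}\right), \mathspace M_4 = A_{22}\left(B_{21} - B_{11}\right), \mathspace M_5 = \left(A_{11} + A_{12}\right)B_{22}, \mathspace M_6 = \left(A_{21} - A_{11}\right)\left(B_{11} + B_{12}\right), \mathspace M_7 = \left(A_{12} - A_{22}\right)\left(B_{21} + B_{22}\right)}
from which the four blocks of $C$ are recovered with additions and subtractions only:
\autoeq{C_{11} = M_1 + M_4 - M_5 + M_7, \mathspace C_{12} = M_3 + M_5, \mathspace C_{21} = M_2 + M_4, \mathspace C_{22} = M_1 - M_2 + M_3 + M_6}
\end{subparag}
\end{parag}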
\begin{parag}{Remark}
Strassen was the first to beat $\Theta\left(n^3\right)$; since then, algorithms with better and better complexities have been found (even though the best ones currently known are galactic algorithms).
\end{parag}
\cleardoublepage
\lecture{6}{2022-10-10}{Heap sort}{}
\chapterafterlecture{Great data structures}
\begin{parag}{Nearly-complete binary tree}
A binary tree of depth $d$ is \important{nearly complete} if its first $d-1$ levels are full and if, at level $d$, whenever a node is present, all the nodes to its left are also present.
\begin{subparag}{Terminology}
The size of a tree is its number of vertices.
\end{subparag}
\begin{subparag}{Example}
For instance, the tree on the left is a nearly-complete binary tree of depth $3$, but the one on the right is not:
\imagehere{Lecture06/NearlyCompleteBinaryTreeExample.png}
Both binary trees are of size 10.
\end{subparag}
\end{parag}
\begin{parag}{Heap}
A \important{heap} (or max-heap) is a nearly-complete binary tree such that, for every node, the key (the value stored at that node) of each of its children is less than or equal to its own key.
\begin{subparag}{Examples}
For instance, the nearly-complete binary tree of depth 3 on the left is a max-heap, but the one on the right is not:
\imagehere{Lecture06/MaxHeapExample.png}
\end{subparag}
\begin{subparag}{Observations}
We notice that the maximum is necessarily at the root. Also, the keys along any path from a leaf to the root form a non-decreasing sequence.
\end{subparag}
\begin{subparag}{Remark 1}
We can define the min-heap to be like the max-heap, except that the property each node follows is that the key of its children is greater than or equal to its key.
\end{subparag}
\begin{subparag}{Remark 2}
We must not confuse heaps with binary search trees (which we will define later): they are very similar, but binary search trees satisfy a more restrictive property.
\end{subparag}
\end{parag}
\begin{parag}{Height}
The height of a node is defined to be the number of edges on a \textit{longest} simple path from the node \textit{down} to a leaf.
\begin{subparag}{Example}
For instance, in the following picture, the node holding 10 has height 1, the node holding 14 has height 2, and the one holding a 2 has height 0.
\imagehere[0.7]{Lecture06/MaxHeapExample2.png}
\end{subparag}
\begin{subparag}{Remark}
We note that, if we have $n$ nodes, we can bound the height $h$ of any node:
\[h \leq \log_2\left(n\right)\]
Also, we notice that the height of the root is the largest height of any node in the tree; this is defined to be the \important{height of the heap}. It is thus $\Theta\left(\log\left(n\right)\right)$.
\begin{parag}{Storing a heap}
We will store a heap in an array, layer by layer: we take the first layer and store it in the array, then we store the next layer right after it, and we continue this way until the end.
Let's consider that we store our numbers in an array with indices starting at 1. The children of the node stored at index $i$ are then at indices $\text{left}\left(i\right) = 2i$ and $\text{right}\left(i\right) = 2i + 1$, and its parent is at index $\text{parent}\left(i\right) = \left\lfloor \frac{i}{2} \right\rfloor$.
\begin{subparag}{Example}
For instance, let's consider again the following tree, but looking at the index of each node:
\imagehere[0.7]{Lecture06/MaxHeapExample2-Array.png}
This would be stored in memory as:
\[A = \left[16, 14, 10, 8, 7, 9, 3, 2, 4, 1\right]\]
Then, the left child of $i = 3$ is $\text{left}\left(3\right) = 2 \cdot 3 = 6$, which indeed holds the value $A\left[6\right] = 9$.
\end{subparag}
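\begin{subparag}{Personal note: Python sketch}
The index arithmetic in runnable form (a personal sketch; slot 0 of the list is left unused so that the array stays 1-indexed):
\begin{filecontents*}[overwrite]{Lecture06/HeapIndexingSketch.py}
def left(i):
    return 2 * i        # left child of node i (1-indexed)

def right(i):
    return 2 * i + 1    # right child of node i

def parent(i):
    return i // 2       # parent of node i (floor division)

# A[0] is unused so that the root sits at index 1.
A = [None, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
print(A[left(3)])   # prints 9, the left child of the node holding 10
\end{filecontents*}
\importcode{Lecture06/HeapIndexingSketch.py}{python}
\end{subparag}
\end{parag}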
\begin{parag}{Max heapify}
To manipulate a heap, we need to \important{max-heapify}. Given an $i$ such that the subtrees of $i$ are heaps (this condition is important), this procedure ensures that the subtree rooted at $i$ is a heap (satisfying the heap property). The only violation we could have is at $i$ itself: its key may be smaller than that of one of its children, in which case we swap it with its largest child and recurse on the corresponding subtree (see the sketch below).
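\begin{subparag}{Personal note: Python sketch}
A runnable sketch of the procedure (mine; 1-indexed with \texttt{A[0]} unused, matching the storage convention above):
\begin{filecontents*}[overwrite]{Lecture06/MaxHeapifySketch.py}
def max_heapify(A, i, n):
    """Restore the heap property at node i of A[1..n], assuming
    both subtrees of i already are max-heaps."""
    l, r = 2 * i, 2 * i + 1
    largest = i
    if l <= n and A[l] > A[largest]:
        largest = l
    if r <= n and A[r] > A[largest]:
        largest = r
    if largest != i:
        # The root of this subtree violates the property:
        # swap it with its largest child and recurse downwards.
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)
\end{filecontents*}
\importcode{Lecture06/MaxHeapifySketch.py}{python}
\end{subparag}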
\begin{subparag}{Complexity}
Asymptotically, we do at most a constant amount of work per level below our node $i$, yielding a complexity of $O\left(\text{height}\left(i\right)\right)$.
Also, we are working in place: besides the array itself, which takes $\Theta\left(n\right)$ space, we only use a constant number of variables.
\end{subparag}
\begin{subparag}{Remark}
This procedure is the main primitive we use to work with heaps; it is really important.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture06/buildMaxHeap.code}
procedure buildMaxHeap(A, n)
    for i = floor(n/2) downto 1
        maxHeapify(A, i, n)
\end{filecontents*}
\begin{parag}{Building a heap}
To make a heap from an unordered array $A$ of length $n$, we can use the following \texttt{buildMaxHeap} procedure:
\importcode{Lecture06/buildMaxHeap.code}{pseudo}
The idea is that the nodes strictly larger than $\left\lfloor \frac{n}{2} \right\rfloor$ are leaves (no node after $\left\lfloor \frac{n}{2} \right\rfloor$ has a child), and leaves are trivially max-heaps; we then repair the heap property bottom-up on all the internal nodes.
\begin{subparag}{Complexity}
We can use the fact that \texttt{maxHeapify} is $O\left(\text{height}\left(i\right)\right)$ to compute the complexity of our new procedure. We can note that there are approximately $2^\ell $ nodes at the $\ell $\Th level (this is not really true for the last level, but it is not important)...
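As a personal reconstruction of the computation: a node at height $h$ costs $O\left(h\right)$, and there are at most $\left\lceil \frac{n}{2^{h+1}} \right\rceil$ nodes of height $h$, so the total cost is:
\[\sum_{h=0}^{\left\lfloor \log_2\left(n\right) \right\rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O\left(h\right) = O\left(n \sum_{h=0}^{\infty} \frac{h}{2^h}\right) = O\left(n\right)\]
since the series $\sum_{h=0}^{\infty} \frac{h}{2^h} = 2$ converges.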
\begin{subparag}{Correctness}
To prove the correctness of this algorithm, we can use a loop invariant: at the start of every iteration of the for loop, each node $i+1, \ldots, n$ is the root of a max-heap.
\begin{enumerate}[left=0pt]
\item At the start, each node $\left\lfloor \frac{n}{2} \right\rfloor + 1, \ldots, n$ is a leaf, and is thus trivially the root of a max-heap.
\begin{parag}{Heapsort}
Now that we have built our heap, we can use it to sort our array:
\importcode{Lecture06/Heapsort.code}{pseudo}
A max-heap is only useful for one thing: getting the maximum element. Once we get it, we swap it into its right place at the end of the array. We can then max-heapify the new, smaller tree (excluding the element we just put in its right place) and repeat (see the sketch below).
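\begin{subparag}{Personal note: Python sketch}
A runnable sketch (mine), reusing the \texttt{max\_heapify} function from the earlier personal note:
\begin{filecontents*}[overwrite]{Lecture06/HeapsortSketch.py}
def heapsort(A, n):
    """Sort A[1..n] in place (1-indexed, A[0] unused)."""
    # Build the max-heap bottom-up.
    for i in range(n // 2, 0, -1):
        max_heapify(A, i, n)
    # Repeatedly move the maximum to its final slot, then repair
    # the heap on the remaining prefix.
    for end in range(n, 1, -1):
        A[1], A[end] = A[end], A[1]
        max_heapify(A, 1, end - 1)
\end{filecontents*}
\importcode{Lecture06/HeapsortSketch.py}{python}
\end{subparag}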
\begin{subparag}{Complexity}
We run the heap repair $O\left(n\right)$ times, and each run is $O\left(\log\left(n\right)\right)$, thus our algorithm has complexity $O\left(n \log\left(n\right)\right)$.
It is interesting to see that, here, the good complexity comes from a really good data structure. This i...
\begin{subparag}{Remark}
We can note that, unlike Merge Sort, this sorting algorithm is in place.
\end{subparag}
\end{parag}
\lecture{7}{2022-10-14}{Queues, stacks and linked lists}{}
\begin{parag}{Definition: Priority queue}
A priority queue maintains a dynamic set $S$ of elements, where each element has a key (an associated value that regulates its importance). This is a more constraining data structure than an array, since we cannot access arbitrary elements.
We want to have the following operations: \texttt{Insert(S, x)}, \texttt{Maximum(S)}, \texttt{Extract-Max(S)} and \texttt{Increase-Key(S, x, k)}.
\begin{subparag}{Usage}
Priority queues have many uses; the most important one for us will be Dijkstra's algorithm, which we will see later in this course.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture07/PriorityQueueHeapIncreaseKey.code}
procedure HeapIncreaseKey(A, i, key)
    if key < A[i]:
        error "new key is smaller than current key"
    A[i] = key
    while i > 1 and A[parent(i)] < A[i]
        swap A[i] and A[parent(i)]
        i = parent(i)
\end{filecontents*}
\begin{parag}{Using a heap}
Let us try to implement a priority queue using a heap.
\begin{subparag}{Maximum}
Since we are using a heap, we get two procedures almost for free. \texttt{Maximum(S)} simply returns the root; this is $\Theta\left(1\right)$.
For \texttt{Extract-Max(S)}, we can move the last element of the array to the root and run \texttt{Max-Heapify} on the root (like what we do in heap-sort); this is $O\left(\log\left(n\right)\right)$.
\end{subparag}
\begin{subparag}{Increase key}
To implement \texttt{Increase-Key}, after having changed the key of our element, we make it go up until its parent has a bigger key than it:
\importcode{Lecture07/PriorityQueueHeapIncreaseKey.code}{pseudo}
This looks a lot like max-heapify, and it is thus $O\left(\log\left(n\right)\right)$.
\end{subparag}
\begin{subparag}{Insert}
To insert a new key into the heap, we can increment the heap size, insert a new node in the last position with the key $-\infty$, and then increase that $-\infty$ to \texttt{key} using \texttt{Heap-Increase-Key}:
\importcode{Lecture07/PriorityQueueHeapInsert.code}{pseudo}
\end{subparag}
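\begin{subparag}{Personal note: Python sketch}
Both operations in runnable form (a personal sketch; as before, the file name is hypothetical and the array is 1-indexed with slot 0 unused):
\begin{filecontents*}[overwrite]{Lecture07/PriorityQueueSketch.py}
import math

def heap_increase_key(A, i, key):
    """Raise the key of node i to key and float it up to its place."""
    if key < A[i]:
        raise ValueError("new key is smaller than current key")
    A[i] = key
    while i > 1 and A[i // 2] < A[i]:
        A[i // 2], A[i] = A[i], A[i // 2]  # swap with the parent
        i //= 2

def heap_insert(A, key):
    """Grow the heap by one -infinity node, then raise it to key."""
    A.append(-math.inf)
    heap_increase_key(A, len(A) - 1, key)

A = [None]  # empty heap, slot 0 unused
for k in [5, 2, 4, 6, 1, 3]:
    heap_insert(A, k)
print(A[1])  # Maximum(S): prints 6
\end{filecontents*}
\importcode{Lecture07/PriorityQueueSketch.py}{python}
\end{subparag}
\end{parag}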
\begin{parag}{Remark}
We can make min-priority queues with min-heaps similarly.
\end{parag}
\begin{parag}{Introduction}
We realised that the heap was really great because it led to very efficient algorithms. So, let's try to find more great data structures.
\end{parag}
\begin{parag}{Definition: Stack}
A stack is a data structure in which we can insert elements (\texttt{Push(S, x)}) and delete elements (\texttt{Pop(S)}). It is known as last-in, first-out (LIFO), meaning that the element we get by using the \texttt{Pop} procedure is the one that was inserted most recently.
\begin{subparag}{Intuition}
This is really like a physical stack: we put elements on top of one another, and then we can only take elements back from the top.
\end{subparag}
\begin{subparag}{Usage}
Stacks are everywhere in computing: a computer has a call stack, and that is how it operates.
Another usage is to check whether an expression with parentheses, brackets and curly braces is well-parenthesised. Indeed, we can go through the characters of our expression: when we get an opening symbol, we push it onto the stack; when we get a closing symbol, we pop the stack and check that the two symbols match. At the end, the stack must be empty (see the sketch below).
\end{subparag}
\begin{parag}{Stack implementation}
A good way to implement a stack is using an array.
We have an array of size $n$, and a pointer \texttt{S.top} to the last element (some space in the array can be unused).
\begin{subparag}{Empty}
To know if our stack is empty, we can simply return \texttt{S.top == 0}. This definitely has a complexity of $O\left(1\right)$.
\end{subparag}
\begin{subparag}{Push}
To push an element onto our stack, we can do:
\importcode{Lecture07/StackPush.code}{pseudo}
Note that, in reality, we would need to verify that we have the space to add one more element, so as not to get an \texttt{IndexOutOfBoundsException}.
We can notice that this is executed in constant time.
\end{subparag}
\begin{subparag}{Pop}
Popping an element is very similar to pushing:
\importcode{Lecture07/StackPop.code}{pseudo}
We can notice that this is also done in constant time.
\end{subparag}
\end{parag}
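\begin{parag}{Personal note: Python sketch}
The array-based implementation in runnable form (my sketch; the names mirror the \texttt{S.top} convention of the notes, and the bound checks are the ones mentioned above):
\begin{filecontents*}[overwrite]{Lecture07/StackSketch.py}
class Stack:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.top = 0                 # number of elements currently stored

    def is_empty(self):
        return self.top == 0         # O(1)

    def push(self, x):
        if self.top == len(self.data):
            raise OverflowError("stack overflow")
        self.data[self.top] = x
        self.top += 1                # O(1)

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        self.top -= 1
        return self.data[self.top]   # O(1)
\end{filecontents*}
\importcode{Lecture07/StackSketch.py}{python}
\end{parag}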
\begin{parag}{Queue}
A queue is a data structure in which we can insert elements (\texttt{Enqueue(Q, x)}) and delete elements (\texttt{Dequeue(Q)}). It is known as first-in, first-out (FIFO), meaning that the element we get by using the \texttt{Dequeue} procedure is the one that was inserted the least recently.
\begin{subparag}{Intuition}
This is really like a queue in real life: the people who get out of the queue first are those who have been waiting the longest.
\end{subparag}