\begin{subparag}{Usage}
Queues are also used a lot, for instance in packet switches on the Internet.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture07/Enqueue.code}
procedure Enqueue(Q, x)
Q[Q.tail] = x
if Q.tail == Q.length
Q.tail = 1
else
Q.tail = Q.tail + 1
\end{filecontents*}
\begin{fil...
\begin{parag}{Queue implementation}
We have an array $Q$, a pointer \texttt{Q.head} to the first element of the queue, and \texttt{Q.tail} to the place after the last element.
\imagehere[0.5]{Lecture07/QueuePointers.png}
\begin{subparag}{Enqueue}
To insert an element, we can simply use the \texttt{tail} pointer, making sure to have it wrap around the array if needed:
\importcode{Lecture07/Enqueue.code}{pseudo}
Note that, in real life, we must check for overflow. We can observe that this procedure is executed in constant time.
\end{subparag}
\begin{subparag}{Dequeue}
To get an element out of our queue, we can use the \texttt{head} pointer:
\importcode{Lecture07/Dequeue.code}{pseudo}
Note that, again, in real life, we must check for underflow. Also, this procedure again runs in constant time.
\end{subparag}
\end{parag}
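The array-based queue above can be sketched in Python. This is a minimal illustration, not the notes' exact pseudocode: it uses 0-based indices and modular arithmetic instead of the explicit if/else wrap-around, and it includes the overflow/underflow checks mentioned above (the class and attribute names are my own):

```python
class Queue:
    """Fixed-capacity FIFO queue backed by a circular array (illustrative sketch)."""

    def __init__(self, capacity):
        self.items = [None] * capacity
        self.head = 0          # index of the first element
        self.tail = 0          # index one past the last element
        self.size = 0          # tracked explicitly to detect over/underflow

    def enqueue(self, x):
        if self.size == len(self.items):
            raise OverflowError("queue is full")
        self.items[self.tail] = x
        self.tail = (self.tail + 1) % len(self.items)  # wrap around
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        x = self.items[self.head]
        self.head = (self.head + 1) % len(self.items)  # wrap around
        self.size -= 1
        return x
```

Both operations run in constant time, exactly as argued above.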
\begin{parag}{Summary}
Both stacks and queues are very efficient and have natural operations. However, they support only a limited set of operations (for instance, we cannot search them). Also, implementations using arrays have a fixed capacity.
\end{parag}
\begin{parag}{Linked list}
The idea of a linked list is that, instead of using predefined memory slots like an array, each object is stored at some point in memory and holds a pointer to the next element (meaning that the elements do not need to follow each other in memory). In an array we cannot increase the size a...
\begin{parag}{Operations}
Let's consider the operations we can do with a linked list.
\begin{subparag}{Search}
We can search in a linked list just as we search in an unsorted array:
\importcode{Lecture07/LinkedListSearch.code}{pseudo}
We can note that, if $k$ cannot be found, then this procedure returns \texttt{nil}. Also, clearly, this procedure is $O\left(n\right)$.
\end{subparag}
\begin{subparag}{Insertion}
We can insert an element at the first position of a doubly linked list by doing:
\importcode{Lecture07/LinkedListInsert.code}{pseudo}
We are basically just rewiring the pointers of \texttt{L.head}, \texttt{x} and the first elements for everything to work.
This runs in $O\left(1\right)...
\begin{subparag}{Delete}
Given a pointer to an element $x$, we want to remove it from $L$:
\importcode{Lecture07/LinkedListDelete.code}{pseudo}
We are basically just rewiring the pointers, by making sure we do not modify things that don't exist.
When we are working with a linked list with a sentinel, this algorith...
\begin{parag}{Summary}
A linked list is interesting because it does not have a fixed capacity, and we can insert and delete elements in $O\left(1\right)$ (as long as we have a pointer to the given element). However, searching is $O\left(n\right)$, which is not great (as we will see later).
\end{parag}
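The three operations above can be sketched together in Python as a doubly linked list. This is an illustrative translation of the pseudocode (class names are my own); it shows the $O\left(n\right)$ search and the $O\left(1\right)$ insert/delete given a pointer to a node:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

class LinkedList:
    """Doubly linked list sketch: O(n) search, O(1) insert/delete."""

    def __init__(self):
        self.head = None

    def search(self, k):
        # Walk the list just as we scan an unsorted array.
        x = self.head
        while x is not None and x.key != k:
            x = x.next
        return x  # None plays the role of nil when k is absent

    def insert(self, x):
        # Rewire pointers so that x becomes the first element.
        x.next = self.head
        if self.head is not None:
            self.head.prev = x
        self.head = x
        x.prev = None

    def delete(self, x):
        # x is a pointer to a node already in the list; guard against
        # modifying neighbours that do not exist.
        if x.prev is not None:
            x.prev.next = x.next
        else:
            self.head = x.next
        if x.next is not None:
            x.next.prev = x.prev
```

Without a sentinel, the boundary checks on \texttt{prev} and \texttt{next} are needed; a sentinel node would remove them.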
\lecture{8}{2...
\begin{parag}{Intuition}
Let's think about the following game: Alice thinks about an integer $n$ between 1 and 15, and Bob must guess it. To do so, he can make guesses, and Alice tells him if the number is correct, smaller, or larger.
Intuitively, it seems like we can make a much more efficient algorithm ...
\begin{parag}{Definition: Binary Search Trees}
A binary search tree is a binary tree (which is \textit{not} necessarily nearly-complete), which follows the following core property: for any node $x$, if $y$ is in its left subtree then $y.key < x.key$, and if $y$ is in the right subtree of $x$, then $y.key \geq x.key$.
\begin{subparag}{Example}
For instance, here is a binary search tree of height $h = 3$:
\imagehere{Lecture08/BinarySearchtreeExample.png}
We could also have the following binary search tree of height $h = 14$:
\imagehere{Lecture08/BinarySearchtreeExampleBad.png}
We will see that good binary search trees are one ...
\begin{subparag}{Remark}
Even though binary search trees are not necessarily nearly-complete, we can notice that their property is much more restrictive than that of heaps.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture08/BinarySearchTreeSearch.code}
procedure TreeSearch(x, k)
if x == Nil ...
\begin{parag}{Searching}
We designed this data-structure for searching, so the implementation is not very complicated:
\importcode{Lecture08/BinarySearchTreeSearch.code}{pseudo}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture08/BinarySearchTreeMinimum.code}
procedure TreeMinimum(x)
while x.left != Nil
x = ...
\begin{parag}{Extrema}
We can notice that the minimum element is located in the leftmost node, and the maximum is located in the rightmost node.
\imagehere[0.7]{Lecture08/BinarySearchTreeExtrma.png}
This gives us the following procedure to find the minimum element, in complexity $O\left(h\right)$:
\importcode{Lec...
\begin{parag}{Successor}
The successor of a node $x$ is the node $y$ where $y.key$ is the smallest key such that $y.key > x.key$. For instance, if we have a tree containing the numbers $1, 2, 3, 5, 6$, the successor of 2 is 3 and the successor of 3 is 5.
To find the successor of a given element, we can split into two cas...
\begin{parag}{Printing}
To print a binary tree, we have three methods: \important{preorder}, \important{inorder} and \important{postorder} tree walk. They all run in $\Theta\left(n\right)$.
The preorder tree walk looks like:
\importcode{Lecture08/PreorderTreeWalk.code}{pseudo}
The inorder has the \texttt{print k...
\begin{parag}{Insertion}
To insert an element, we search for it and, upon reaching the position where it should be, we insert it there.
\importcode{Lecture08/BinarySearchTreeInsert.code}{pseudo}
\end{parag}
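The operations seen so far (search, minimum, insertion, and the inorder walk) can be sketched together in Python. This is an illustrative translation of the procedures above (names are my own), each of which runs in $O\left(h\right)$ except the walk, which is $\Theta\left(n\right)$:

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def tree_search(x, k):
    # Follow the BST property: smaller keys to the left, others to the right.
    while x is not None and x.key != k:
        x = x.left if k < x.key else x.right
    return x  # None plays the role of Nil

def tree_minimum(x):
    # The minimum lives in the leftmost node.
    while x.left is not None:
        x = x.left
    return x

def tree_insert(root, z):
    # Walk down as in a search, then attach z as a leaf.
    parent, x = None, root
    while x is not None:
        parent = x
        x = x.left if z.key < x.key else x.right
    if parent is None:
        return z                  # tree was empty
    if z.key < parent.key:
        parent.left = z
    else:
        parent.right = z
    return root

def inorder(x, out):
    # The inorder walk yields the keys in sorted order.
    if x is not None:
        inorder(x.left, out)
        out.append(x.key)
        inorder(x.right, out)
    return out
```

Note that the inorder walk outputting sorted keys is a direct consequence of the BST property.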
\lecture{9}{2022-10-21}{Dynamic cannot be a pejorative word}{}
\begi...
\begin{parag}{Deletion}
When deleting a node $z$, we can consider three cases. If $z$ has no child, then we can just remove it. If it has exactly one child, then we can make that child take $z$'s position in the tree. If it has two children, then we can find its successor $y$ (which is at the leftmost of its right sub...
\begin{parag}{Balancing}
Note that neither our insertion nor our deletion procedures keep the height low. For instance, creating a binary search tree by inserting in order elements of a sorted list of $n$ elements makes a tree of height $n-1$.
There are balancing tricks when doing insertions and deletions, such as r...
\begin{parag}{Summary}
We have been able to make search, max, min, predecessor, successor, insertion and deletion in $O\left(h\right)$.
Note that binary search trees are really interesting because we can easily insert elements. This is the main thing which makes them more interesting than a basic binary searc...
\begin{parag}{Introduction}
Dynamic programming is a way to design algorithms and has little to do with programming. The idea is that we never compute the same thing twice, by remembering calculations already made.
This name was coined in the 50s, by a computer scientist who was trying to get money for his research,...
\begin{parag}{Fibonacci numbers}
Let's consider the Fibonacci numbers:
\[F_0 = 1, \mathspace F_1 = 1, \mathspace F_n = F_{n-1} + F_{n-2}\]
The first idea to compute this is through the given recurrence relation:
\importcode{Lecture09/FibonacciRecurrence.code}{pseudo}
However, this is $\Theta\left(2^n\right)$....
\begin{subparag}{Top-down with memoisation}
The code looks like:
\importcode{Lecture09/FibonacciTopDownWithMemoization.code}{pseudo}
We can note that this has a much better runtime complexity than the naive one, since it is $\Theta\left(n\right)$. Also, we use about the same amount of memory since, for th...
\begin{subparag}{Bottom-up}
The code looks like:
\importcode{Lecture09/BottomUpFibonacci.code}{pseudo}
Generally, the bottom-up version is slightly more optimised than top-down with memoisation.
\end{subparag}
\end{parag}
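Both variants can be sketched in Python (using the notes' convention $F_0 = F_1 = 1$; function names are my own):

```python
def fib_memo(n, memo=None):
    # Top-down with memoisation: each value is computed once, Theta(n) total.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return 1                 # F_0 = F_1 = 1, as in the notes
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    # Bottom-up: fill the table from the base cases upward.
    f = [1] * (n + 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```

The bottom-up version avoids the recursion overhead, which is why it is generally slightly faster in practice.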
\begin{parag}{Designing a dynamic programming algorithm}
Dynamic programming is a good idea when our problem has an optimal substructure: the problem consists of making a choice, which leaves one or several subproblems to solve, where finding the optimal solution to all those subproblems allows us to get the optimal s...
\begin{parag}{Problem}
Let's say we have a rod of length $n$, and we want to sell it on the market. We can sell it as is, but we could also cut it into pieces of different lengths. The goal is that, given the prices $p_i$ for lengths $i=1, \ldots, n$, we must find the way to cut our rod which gives us the most money.
\end{parag}
\begin{parag}{Brute force}
Let's first consider the brute force case: why work smart when we can work fast. We have $n-1$ places where we can cut the rod, at each place we can either cut or not, so we have $2^{n-1}$ possibilities to cut the rod (which is not great).
Also, as we will show in the \nth{5} exercise seri...
\begin{parag}{Theorem: Optimal substructure}
We can notice that, if the leftmost cut in an optimal solution is after $i$ units, and an optimal way to cut a solution of size $n-i$ is into rods of sizes $s_1$, \ldots, $s_k$, then an optimal way to cut our rod is into rods of sizes $i, s_1, \ldots, s_k$.
\begin{subparag}{Proof}
Let's prove the optimality of our solution. Let $i, o_1, \ldots, o_\ell$ be an optimal solution (which exists by assumption). Also, let $s_1, \ldots, s_k$ be an optimal solution to the subproblem of cutting a rod of size $n-i$. Since this second solution is an optimal solution to the subproblem,...
\begin{parag}{Dynamic programming}
The theorem above shows the optimal substructure of our problem, meaning that we can apply dynamic programming. Letting $r\left(n\right)$ be the optimal revenue from a rod of length $n$, we get by the structural theorem that we can express $r\left(n\right)$ recursively as:
\begin...
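The recurrence for $r\left(n\right)$ can be computed bottom-up in Python. This is a sketch under the convention that \texttt{p[i]} is the price of a rod of length $i$ (the price list in the test is hypothetical):

```python
def rod_cut(p, n):
    """Bottom-up DP for rod cutting; p[i] is the price of a rod of length i."""
    r = [0] * (n + 1)            # r[j] = best revenue for a rod of length j
    for j in range(1, n + 1):
        # Choose the leftmost cut of length i, then use the optimal
        # revenue already computed for the remaining length j - i.
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]
```

Each of the $n$ table entries takes $O\left(n\right)$ work, so the runtime is $\Theta\left(n^2\right)$, instead of the brute force's $2^{n-1}$ possibilities.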
\begin{parag}{Problem}
We need to give change for some amount of money $W$ (a positive integer), knowing that we have $n$ distinct coin denominations (positive integers too) $0 < w_1 < \ldots < w_n$. We want to know the minimum number of coins needed in order to make the change:
\[\min\left\{...
\begin{parag}{Solution}
We first want to see our optimal substructure. To do so, we need to define our subproblems. Thus, let $r\left[w\right]$ be the smallest number of coins needed to make change for $w$. We can note that if we choose which coin $i$ to use first and know the optimal solution to make $W - w_i$, we ca...
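The subproblem definition above translates into a short bottom-up DP in Python (a sketch; the function name is my own). For each value $v$ we try every coin $w_i$ as the "first" coin and reuse the optimal answer for $v - w_i$:

```python
INF = float("inf")

def min_coins(w, W):
    """Smallest number of coins from denominations w summing to W."""
    r = [0] + [INF] * W          # r[v] = fewest coins making value v
    for v in range(1, W + 1):
        for wi in w:
            if wi <= v and r[v - wi] + 1 < r[v]:
                r[v] = r[v - wi] + 1   # use coin wi first, then optimum for v - wi
    return r[W]                  # stays infinite if no change is possible
```

The runtime is $\Theta\left(nW\right)$: $W$ subproblems, each trying all $n$ denominations.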
\begin{parag}{Problem}
We have $n$ matrices $A_1, \ldots, A_n$, where each matrix $A_i$ can have a different size $p_{i-1} \times p_i$. We have seen that scalar multiplications are what takes the most time, so we want to minimise the number of computations we do. To do so, we want to output a full parenthesisation of t...
\begin{parag}{Theorem: Optimal substructure}
We can notice that, if the outermost parenthesisation in an optimal solution is $\left(A_1 \cdots A_i\right)\left(A_{i+1} \cdots A_n\right)$, and if $P_L$ and $P_R$ are optimal parenthesisation for $A_1 \cdots A_i$ and $A_{i+1} \cdots A_n$, respectively; then, $\left(\left(...
\begin{subparag}{Proof}
Let $\left(\left(O_L\right) \cdot \left(O_R\right)\right)$ be an optimal parenthesisation (we know it has this form by hypothesis), where $O_L$ and $O_R$ are parenthesisation for $A_1 \cdots A_i$ and $A_{i+1} \cdots A_n$, respectively. Also, let $M\left(P\right)$, be the number of scalar multip...
\begin{parag}{Dynamic programming}
We can note that our theorem gives us the following recursive formula, where $m\left[i, j\right]$ is the optimal number of scalar multiplications for calculating $A_i \cdots A_j$:
\begin{functionbypart}{m\left[i, j\right]}
0, \mathspace \text{if } i = j \\
\min_{i \leq k < j} \left(m\left[i, k\right] + m\left[k+1, j\right] + p_{i-1} p_k p_j\right), \mathspace \text{if } i < j
\end{functionbypart}
\begin{parag}{Summary}
To summarise, we first choose where to make the outermost parenthesis:
\[\left(A_1 \cdots A_k\right)\left(A_{k+1} \cdots A_n\right)\]
Then, we noticed the optimal substructure: to obtain an optimal solution, we need to parenthesise the two remaining expressions in an optimal way. This gives u...
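The recurrence for $m\left[i, j\right]$ can be filled bottom-up by increasing chain length. This Python sketch (function name my own) returns only the optimal cost, not the parenthesisation itself:

```python
def matrix_chain_order(p):
    """m[i][j] = minimal scalar multiplications for A_i ... A_j,
    where A_i has dimensions p[i-1] x p[i]; keys are 1-based like the notes."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)          # outermost split after A_k
            )
    return m[1][n]
```

There are $\Theta\left(n^2\right)$ subproblems, each minimising over $O\left(n\right)$ splits, giving $\Theta\left(n^3\right)$ total time.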
\begin{parag}{Problem}
We have two sequences $X = \left\langle x_1, \ldots, x_m \right\rangle$ and $Y = \left\langle y_1, \ldots, y_n \right\rangle$, and we want to find the longest common subsequence (LCS; it does not have to be consecutive, as long as it is in order) common to both.
For instance, if...
\begin{subparag}{Remark}
This problem can for instance be useful if we want a way to compute how far two strings are from each other: the length of the longest common subsequence is one way to measure this distance.
\end{subparag}
\end{parag}
\begin{parag}{Brute force}
We can note that brute force does not work, since we have $2^m$ subsequences of $X$ and that each subsequence takes $\Theta\left(n\right)$ time to check (since we need to scan $Y$ for the first letter, then scan it for the second, and so on, until we know if this is also a subsequence of $Y$...
\begin{parag}{Theorem: Optimal substructure}
We can note the following idea. Let's say we start at the end of both words and move to the left step-by-step (the other direction would also work), considering one letter from each word at any time. If the two letters are the same, then we can take it to be in the subsequence...
\begin{subparag}{Proof of the first part of the first point}
Let's prove the first point, supposing for contradiction that $z_k \neq x_i = y_j$.
However, there is a contradiction, since we can just create $Z' = \left\langle z_1, \ldots, z_k, x_i \right\rangle$. This is indeed a new subsequence of $X_i$ and $Y_j$ which...
\begin{subparag}{Proof of the rest}
The proofs of the second part of the first point, and of the second and third points (which are very similar), are considered trivial and left as exercises to the reader. \smiley
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture11/LongestCommonSubsequenceCompu...
\begin{parag}{Dynamic programming}
The theorem above gives us the following recurrence, where $c\left[i, j\right]$ is the length of an LCS of $X_i$ and $Y_j$:
\begin{functionbypart}{c\left[i, j\right]}
0, \mathspace \text{if } i = 0 \text{ or } j = 0 \\
c\left[i-1, j-1\right] + 1, \mathspace \text{if } i, j > 0 \text{ and } x_i = y_j \\
\max\left(c\left[i, j-1\right], c\left[i-1, j\right]\right), \mathspace \text{if } i, j > 0 \text{ and } x_i \neq y_j
\end{functionbypart}
\begin{subparag}{Intuition}
Conceptually, for $X = \left\langle B, A, B, D, B, A \right\rangle$ and $Y = \left\langle D, A, C, B, C, B, A \right\rangle$, we are drawing the following table:
\imagehere{Lecture11/LongestCommonSubsequenceTableArrowsExample.png}
where the numbers are stored in the array \texttt{c} and th...
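The table of $c\left[i, j\right]$ values can be filled bottom-up in Python (a sketch; the function name is my own). Strings stand in for the sequences, with \texttt{X[:i]} playing the role of $X_i$:

```python
def lcs_length(X, Y):
    """Bottom-up DP; c[i][j] = length of an LCS of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1          # letters match: extend
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])  # drop one letter
    return c[m][n]
```

Filling the $\left(m+1\right) \times \left(n+1\right)$ table takes $\Theta\left(mn\right)$ time, a huge improvement over the $2^m$ brute force.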
\begin{parag}{Optimal binary search trees}
The goal is that, given a \textit{sorted} sequence $K = \left\langle k_1, \ldots, k_n\right\rangle$ of $n$ distinct keys, we want to build a binary search tree. There is a slight twist: we know that the probability to search for $k_i$ is $p_i$ (for instance, on social media, p...
\begin{subparag}{Example}
For instance, let us consider the following tree and probability table:
\imagehere[0.5]{Lecture12/BinarySearchTreeDPExample.png}
\begin{center}
\begin{tabular}{c|ccccc}
$i$ & 1 & 2 & 3 & 4 & 5 \\
\hline
$p_i$ & $0.25$ & $0.2$ & $0.05$ & $0.3$ & $0.3$
\end{tabular}
\end{center}
Then, we ca...
\begin{subparag}{Remark}
Designing a good binary search tree is equivalent to designing a good binary search strategy.
\end{subparag}
\end{parag}
\begin{parag}{Observation}
We notice that the optimal binary search tree might not have the smallest height, nor does it necessarily have the highest-probability key at the root.
\end{parag}
\begin{parag}{Brute force}
Before doing anything too fancy, let us start by considering the brute force algorithm: we construct every $n$-node BST, fill in the keys for each of them, and compute their expected search cost. We can finally pick the tree with the smallest expected search cost.
However, there are exponentially m...
\begin{parag}{Theorem: Optimal substructure}
We know that:
\[\exval \left[\text{search cost}\right] = \sum_{i=1}^{n} \left(\text{depth}_T\left(k_i\right) + 1\right)p_i\]
Thus, if we increase the depth by one, we have:
\[\exval \left[\text{search cost deeper}\right] = \sum_{i=1}^{n} \left(\text{depth}_T\left(k_i\right) + 2\right)p_i = \exval\left[\text{search cost}\right] + \sum_{i=1}^{n} p_i\]
\begin{parag}{Bottom-up algorithm}
We can write our algorithm as:
\importcode{Lecture12/OptimalBST.code}{pseudo}
We needed to be careful not to compute the sums again and again.
We note that there are three nested loops, thus the total runtime is $\Theta\left(n^3\right)$. Another way to see this is that we have $...
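The bottom-up computation can be sketched in Python as follows. This is an illustrative $O\left(n^3\right)$ version computing only the optimal expected cost (no dummy keys, names my own); note the running weight sums, which avoid recomputing $\sum p_i$ again and again:

```python
def optimal_bst_cost(p):
    """Expected search cost of an optimal BST for keys 1..n with
    search probabilities p[1..n] (p[0] is unused)."""
    n = len(p) - 1
    # e[i][j] = cost of an optimal BST on keys i..j (0 when the range is empty);
    # w[i][j] = sum of p[i..j], maintained incrementally.
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j]   # running sum avoids recomputation
            e[i][j] = min(
                e[i][r - 1] + e[r + 1][j]  # optimal left and right subtrees
                for r in range(i, j + 1)   # try every key r as the root
            ) + w[i][j]                    # every key moves one level deeper
    return e[1][n]
```

The three nested loops (length, left endpoint, root choice) make the $\Theta\left(n^3\right)$ runtime visible directly.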
\begin{parag}{Introduction}
Graphs are everywhere. For instance, in social media, when we have a bunch of entities and relationships between them, graphs are the way to go.
\end{parag}
\begin{parag}{Definition: Graph}
A \important{graph} $G = \left(V, E\right)$ consists of a vertex set $V$, and an edge set $E$ that contains pairs of vertices.
\begin{subparag}{Terminology}
We can have a \important{directed graph}, where such pairs are ordered (if $\left(a, b\right) \in E$, then it is not necessarily the case that $\left(b, a\right) \in E$), or an \important{undirected graph} where such pairs are non-ordered (if $\left(a, b\right) \in E$, then $\left(b, a\ri...
\begin{subparag}{Personal remark}
It is funny that we are beginning graph theory right now because, very recently, XKCD published a comic about this subject:
\imagehere[0.6]{Lecture14/XKCD2694.png}
\begin{center}
\url{https://xkcd.com/2694/}
\end{center}
\end{subparag}
\end{parag}
\begin{parag}{Definition: Degree}
For a graph $G = \left(V, E\right)$, the \important{degree} of a vertex $u \in V$, denoted $\text{degree}\left(u\right)$, is its number of outgoing edges.
\begin{subparag}{Remark}
For an undirected graph, the degree of a vertex is its number of neighbours.
\end{subparag}
\end{parag}
\begin{parag}{Storing a graph}
We can store a graph in two ways: \important{adjacency lists} and \important{adjacency matrices}. Note that either of these two representations can be extended to include other attributes, such as edge weights.
\begin{subparag}{Adjacency lists}
We use an array. Each index represents a vertex, where we store the pointer to the head of a list containing its neighbours.
For an undirected graph, if $a$ is in the list of $b$, then $b$ is in the list of $a$.
For instance, for the undirected graph above, we could have the adja...
\begin{subparag}{Adjacency matrix}
We can also use a $\left|V\right| \times \left|V\right|$ matrix $A = \left(a_{ij}\right)$, where:
\begin{functionbypart}{a_{ij}}
1, \mathspace \text{if } \left(i, j \right) \in E \\
0, \mathspace \text{otherwise}
\end{functionbypart}
We can note that, for an undirected graph,...
\begin{subparag}{Complexities}
Let us consider the different complexities of our two representations. The space complexity of adjacency lists is $\Theta\left(V + E\right)$ and the one of adjacency matrices is $\Theta\left(V^2\right)$. Then, the time to list all vertices adjacent to $u$ is $\Theta\left(\text{degree}\le...
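Both representations can be built in a few lines of Python (an illustrative sketch with vertices numbered $0, \ldots, n-1$; function names are my own):

```python
def to_adjacency_list(n, edges, directed=False):
    """Array of lists: index u holds u's neighbours."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)     # undirected: each edge appears in both lists
    return adj

def to_adjacency_matrix(n, edges, directed=False):
    """|V| x |V| matrix with a[i][j] = 1 iff (i, j) is an edge."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = 1
        if not directed:
            a[v][u] = 1          # undirected: the matrix is symmetric
    return a
```

The space usage of the two builders reflects the complexities above: $\Theta\left(V + E\right)$ for the lists, $\Theta\left(V^2\right)$ for the matrix.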
\section{Primitives for traversing and searching a graph}
\begin{filecontents*}[overwrite]{Lecture14/BreadthFirstSearch.code}
procedure BFS(G, s):
for u in G.V without s:
u.d = infinity
s.d = 0
Q = empty queue
Enqueue(Q, s)
while !Q.isEmpty():
u = Dequeue(Q)
for v in G.adj[u]:
if v.d == infinity:
v.d = u.d + 1
Enqueue(Q, v)
\end{filecontents*}
\begin{parag}{Breadth-First Search}
We have as input a graph $G = \left(V, E\right)$, which is either directed or undirected, and some source vertex $s \in V$. The goal is, for all $v \in V$, to output the distance from $s$ to $v$, named $v.d$.
\begin{subparag}{Algorithm}
The idea is to send some kind of wave out from $s$. It will first hit all vertices which are 1 edge from $s$. From those points, we can again send waves, which will hit all vertices at a distance of two edges from $s$, and so on. In other words, beginning with the root, we look...
\begin{subparag}{Analysis}
The informal proof of correctness is that we always consider the nodes closest to the root first, meaning that whenever we visit a node, we could not have reached it with a shorter distance. We will do a formal proof for the generalisation of this algorithm, Dijkstra's algorithm.
The runtime ...
\begin{subparag}{Observation}
We note that BFS may not reach all the vertices (if they are not connected).
\end{subparag}
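The BFS procedure above can be sketched in Python using a real queue (an illustrative translation; the adjacency structure is a plain dictionary, and unreached vertices keep an infinite distance, matching the observation above):

```python
from collections import deque

INF = float("inf")

def bfs(adj, s):
    """Distances (in edges) from s to every vertex; INF if unreachable."""
    d = {u: INF for u in adj}
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if d[v] == INF:      # first discovery happens along a shortest path
                d[v] = d[u] + 1
                q.append(v)
    return d
```

Every vertex is enqueued at most once and every edge examined a constant number of times, giving the $\Theta\left(V + E\right)$ runtime.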
\begin{subparag}{Remark}
We can save the shortest-path tree by keeping track of the edge that discovered each vertex. Note that each vertex (other than the root, and whose distance is not infinite) has exactly one such edge, and thus this is indeed a tree. Then, when given a vertex, we can find the shortest p...
\begin{parag}{Depth-First Search}
BFS goes through every connected vertex, but not necessarily every edge. We would now want to make an algorithm that goes through every edge. Note that this algorithm may seem very abstract and \textit{useless} for now but, as we will see right after, it gives us a very interesting i...
\begin{subparag}{Algorithm}
Our algorithm can be stated the following way, where \texttt{WHITE} means not yet visited, \texttt{GREY} means currently being visited, and \texttt{BLACK} means finished visiting:
\importcode{Lecture14/DepthFirstSearch.code}{pseudo}
\end{subparag}
\begin{subparag}{Example}
For instance, here is a run of DFS on the following graph, where we have two calls to DFS-visit (one on $b$ and one on $e$):
\imagehere[0.7]{Lecture14/DFSExample.png}
\end{subparag}
\begin{subparag}{Analysis}
The runtime is $\Theta\left(V + E\right)$. Indeed, $\Theta\left(V\right)$ since every vertex is discovered once, and $\Theta\left(E\right)$ since each edge is examined once if it is a directed graph and twice if it is an undirected graph.
\end{subparag}
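The colouring scheme and discovery/finishing times can be sketched in Python (an illustrative translation; the restart loop at the bottom ensures every vertex is reached, even across disconnected components):

```python
def dfs(adj):
    """Discovery and finishing times for every vertex of the graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {u: WHITE for u in adj}
    d, f = {}, {}
    time = 0

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time              # discovery: the opening parenthesis
        colour[u] = GREY
        for v in adj[u]:
            if colour[v] == WHITE:
                visit(v)
        colour[u] = BLACK
        time += 1
        f[u] = time              # finish: the closing parenthesis

    for u in adj:                # restart so that every vertex is discovered
        if colour[u] == WHITE:
            visit(u)
    return d, f
```

The nesting of the intervals $\left[d, f\right]$ is exactly the parenthesis theorem discussed below.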
\begin{subparag}{XKCD}
XKCD's viewpoint on the ways we have to traverse a graph:
\imagehere[0.6]{Lecture14/XKCD2407.png}
\begin{center}
\url{https://xkcd.com/2407/}
\end{center}
And there is also the following great XKCD on the slides:
\imagehere{Lecture14/XKCD761.png}
\begin{center}
\url{https://xkcd.com/761...
\begin{parag}{Depth-First forest}
Just as BFS leads to a tree, DFS leads to a forest (a set of trees).
Indeed, we can again consider the edge that we used to discover a given vertex to be an edge linking this vertex to its parent. Since trees might be disjoint but we are running DFS so that every edge is discovere...
\begin{subparag}{Remark}
Since we have trees, in particular, DFS leads to a certain partial ordering of our nodes: a node can be a descendant of another in a tree (or have no relation, because they are not in the same tree).
\end{subparag}
\begin{subparag}{Formal definition}
Very formally, each tree is made of edges $\left(u, v\right)$ such that $u$ (currently explored) is grey and $v$ is white (not yet explored) when $\left(u, v\right)$ is explored.
\end{subparag}
\end{parag}
\begin{parag}{Parenthesis theorem}
We can think of the discovery time as an opening parenthesis and the finishing time as a closing parenthesis. Let us note $u$'s discovery and finishing times by brackets and $v$'s discovery and finishing times by braces. Then, to make a well-parenthesised formulation, we have only the fo...
\begin{parag}{White-path theorem}
Vertex $v$ is a descendant of $u$ if and only if, at time $u.d$, there is a path from $u$ to $v$ consisting of only white vertices (except for $u$, which was just coloured grey).
\end{parag}
\begin{parag}{Edge classification}
A depth-first-search run gives us a classification of edges:
\begin{enumerate}
\item Tree edges are edges making our trees, the edges which we used to visit new nodes when running \texttt{DFSVisit}.
\item Back edges are edges $\left(u, v\right)$ where $u$ is a descendant (a child,...
\begin{subparag}{Example}
For instance, in the following graph, tree edges are represented in orange, back edges in blue, forward edges in red and cross edges in green.
\imagehere[0.9]{Lecture14/EdgeClassificationExample.png}
Note that, in this DFS forest, we have two trees.
\end{subparag}
\begin{subparag}{Remark}
In the DFS of an undirected graph, it no longer makes sense to make the distinction between back and forward edges. We thus call both of them back edges.
Also, in an undirected graph, we cannot have any cross edge.
\end{subparag}
\begin{subparag}{Observation}
A different starting point for DFS will lead to a different edge classification.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Directed acyclic graph}
A \important{directed acyclic graph} (DAG) is a directed graph $G$ such that there are no cycles (what a definition). In other words, for all $u, v \in V$ where $u \neq v$, if there exists a path from $u$ to $v$, then there exists no path from $v$ to $u$.
\end{parag}
\begin{parag}{Topological sort}
We have as input a directed acyclic graph, and we want to output a linear ordering of vertices such that, if $\left(u, v\right) \in E$, then $u$ appears somewhere before $v$.
\begin{subparag}{Use}
This can for instance be really useful for dependency resolution when compiling files: we need to know in what order we need to compile files.
\end{subparag}
\begin{subparag}{Example}
For instance, let us say that, as good computer scientists, we made the following graph to remember which clothes we absolutely need to put before other clothes, in order to get dressed in the morning:
\imagehere[0.6]{Lecture14/GraphGettingDressedUpInTheMorning.png}
Then, we would want to ...
\begin{parag}{Theorem}
A directed graph $G$ is acyclic if and only if a DFS of $G$ yields no back edges.
\begin{subparag}{Proof $\implies$}
We want to show that a cycle implies a back edge.
Let $v$ be the first vertex discovered in the cycle $C$, and let $\left(u, v\right)$ be its preceding edge in $C$ (meaning that $u$ is also in the cycle). At time $v.d$, the vertices in $C$ form a white path from $v$ to $u$, and hence $u$ is...
\begin{subparag}{Proof $\impliedby$}
We want to show that a back edge implies a cycle.
We suppose by hypothesis that there is a back edge $\left(u, v\right)$. Then, $v$ is an ancestor of $u$ in the depth-first forest. Therefore, there is a path from $v$ to $u$, which together with the edge $\left(u, v\right)$ creates a cycle.
\qed
\end{subparag}
\end{parag}
\begin{parag}{Algorithm}
The idea of topological sort is to call DFS on our graph (starting from any vertex), in order to compute finishing times $v.f$ for all $v \in V$. We can then output vertices in order of decreasing finishing time.
\begin{subparag}{Example}
For instance, let's consider the graph above. Running DFS, we may get:
\imagehere[0.6]{Lecture14/GraphGettingDressedUpInTheMorning-DFSed.png}
Then, outputting the vertices by decreasing $v.f$, we get the exact topological order shown above. Note that it could have naturally outputted a dif...
\begin{subparag}{Proof of correctness}
We want to show that, if the graph is acyclic and $\left(u, v\right) \in E$, then $v.f < u.f$.
When we traverse the edge $\left(u, v\right)$, $u$ is grey (since we are currently considering it). We can then split our proof for the different colours $v$ can have:
\begin{enumera...
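The topological sort algorithm (DFS, then output vertices by decreasing finishing time) can be sketched in Python. Appending a vertex when it finishes and then reversing the list is equivalent to sorting by decreasing $v.f$ (an illustrative sketch; names are my own):

```python
def topological_sort(adj):
    """Linear ordering of the vertices of a DAG: u comes before v
    whenever (u, v) is an edge."""
    visited = set()
    order = []

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)
        order.append(u)          # u is appended exactly when it finishes

    for u in adj:                # start DFS from every undiscovered vertex
        if u not in visited:
            visit(u)
    return order[::-1]           # reverse = decreasing finishing time
```

This inherits DFS's $\Theta\left(V + E\right)$ runtime; the correctness argument is exactly the one proven above ($v.f < u.f$ for every edge $\left(u, v\right)$).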
\begin{parag}{Definition: Connected vertex}
Two vertices of an undirected graph are connected if there exists a path between those two vertices.
\begin{subparag}{Remark}
To know if two vertices are connected, we can run BFS from one of them, and see whether the other vertex has a finite distance.
\end{subparag}
\begin{subparag}{Observation}
For directed graphs, this definition no longer really makes sense. Since we still want something similar, we will define strongly connected components right after.
\end{subparag}
\end{parag}
\begin{parag}{Notation: Path}
In a graph, if there is a path from $u$ to $v$, then we note $u \leadsto v$.
\end{parag}
\begin{parag}{Definition: Strongly connected component}
A \important{strongly connected component} (SCC) of a directed graph $G = \left(V, E\right)$ is a maximal set of vertices $C \subseteq V$ such that, for all $u, v \in C$, both $u \leadsto v$ and $v \leadsto u$.