\begin{subparag}{Remark} To verify that we indeed have an SCC, we first verify that every vertex can reach every other vertex. We then also need to verify that it is maximal, which we can do by adding any vertex that has a connection to the potential SCC, and verifying that what it yields is not an SCC. \end{subparag...
\begin{subparag}{Example} For instance, the first example is not an SCC since $c\not\leadsto b$; the second is not one either, since we could add $f$, so it is not maximal: \imagehere[0.7]{Lecture15/WrongSCCExample.png} However, here are all of the SCCs of the graph: \imagehere[0.7]{Lecture15/SCCExample.png} \end{s...
\begin{parag}{Theorem: Existence and uniqueness of SCCs} Any vertex belongs to one and only one SCC.
\begin{subparag}{Proof} First, we notice that a vertex always belongs to at least one SCC, since we can always make an SCC containing that single vertex (adding enough elements to make it maximal). This shows existence. Second, let us suppose for contradiction that SCCs are not unique. Thus, for some grap...
\begin{parag}{Definition: Component graph} For a directed graph (digraph) $G = \left(V, E\right)$, its \important{component graph} $G^{SCC} = \left(V^{SCC}, E^{SCC}\right)$ is defined to be the graph where $V^{SCC}$ has a vertex for each SCC in $G$, and $E^{SCC}$ has an edge between the corresponding SCCs in G...
\begin{subparag}{Example} For instance, for the digraph hereinabove: \imagehere[0.5]{Lecture15/ComponentGraphExample.png} \end{subparag} \end{parag}
\begin{parag}{Theorem} For any digraph $G$, its component graph $G^{SCC}$ is a DAG (directed acyclic graph).
\begin{subparag}{Proof} Let's suppose for contradiction that $G^{SCC}$ has a cycle. This means that we can reach one SCC of $G$ from another SCC (or more); and thus any element of the first SCC has a path to the elements of the second SCC, and reciprocally. However, this means that we could merge the SCCs, contradictin...
\begin{parag}{Definition: Graph transpose} Let $G$ be a digraph (directed graph). The \important{transpose} of $G$, written $G^T$, is the graph where all the edges have their direction reversed: \[G^T = \left(V, E^T\right), \mathspace \text{where } E^T = \left\{\left(u, v\right): \left(v, u\right) \in E\right\}\]
\begin{subparag}{Remark} We call this a transpose since the transpose of $G$ is basically given by transposing its adjacency matrix. \end{subparag}
\begin{subparag}{Observation} We can create $G^T$ in $\Theta\left(V + E\right)$ time if we are using adjacency lists. \end{subparag} \end{parag}
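As a quick illustration of this observation, here is a minimal Python sketch (the dict-of-lists adjacency representation is an assumption, not something fixed by the notes); every edge is touched exactly once, which gives the claimed $\Theta\left(V + E\right)$ bound:

```python
# Hypothetical adjacency-list representation: a dict mapping each vertex
# to the list of its out-neighbours.
def transpose(graph):
    # Start with an empty adjacency list for every vertex, so isolated
    # vertices survive the transpose.
    gt = {u: [] for u in graph}
    # Each edge (u, v) is visited exactly once, giving Theta(V + E) time.
    for u in graph:
        for v in graph[u]:
            gt[v].append(u)
    return gt
```

Transposing twice gives back the original graph, which is an easy sanity check on the implementation.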
\begin{parag}{Theorem} A graph and its transpose have the same SCCs. \end{parag}
\begin{parag}{Kosaraju's algorithm} The idea of Kosaraju's algorithm to compute component graphs efficiently is: \begin{enumerate} \item Call \texttt{DFS($G$)} to compute the finishing times $u.f$ for all $u$. \item Compute $G^T$. \item Call \texttt{DFS($G^T$)} where the order of the main loop of this procedure goes...
\begin{subparag}{Unicity} Since SCCs are unique, the result will always be the same, even though graphs can be traversed in very different ways with DFS. \end{subparag}
\begin{subparag}{Analysis} Since each of the three steps takes $\Theta\left(V + E\right)$ time, our algorithm runs in $\Theta\left(V + E\right)$. \end{subparag}
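The three steps of the algorithm can be sketched in Python as follows (a hedged illustration rather than the course's pseudocode; it assumes the digraph is given as a dict mapping each vertex to its list of out-neighbours):

```python
def kosaraju_scc(graph):
    """Return the SCCs of a digraph given as {vertex: [out-neighbours]}."""
    # Step 1: DFS on G, recording vertices in order of finishing time.
    visited, finish_order = set(), []

    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        finish_order.append(u)

    for u in graph:
        if u not in visited:
            dfs1(u)

    # Step 2: compute the transpose G^T.
    gt = {u: [] for u in graph}
    for u in graph:
        for v in graph[u]:
            gt[v].append(u)

    # Step 3: DFS on G^T, taking roots in decreasing finishing time;
    # each tree of this second DFS is exactly one SCC.
    visited.clear()
    sccs = []
    for u in reversed(finish_order):
        if u not in visited:
            component, stack = [], [u]
            visited.add(u)
            while stack:
                x = stack.pop()
                component.append(x)
                for v in gt[x]:
                    if v not in visited:
                        visited.add(v)
                        stack.append(v)
            sccs.append(component)
    return sccs
```

For instance, on a graph with the cycle $a \to b \to c \to a$ plus an extra vertex $d$ reachable from $c$, this returns the two components $\left\{a, b, c\right\}$ and $\left\{d\right\}$.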
\begin{subparag}{Intuition} The main intuition for this algorithm is to realise that elements of an SCC can be reached from one another both when going forwards (in the regular graph) and backwards (in the transposed graph). Thus, we first compute some kind of ``topological sort'' (this is not a real one since we don't have...
\begin{subparag}{Personal remark} The Professor used the name ``magic algorithm'' since we do not prove this theorem and it seems very magical. I feel like it is better to give it its real name, but probably it is important to know its informal name for exams. \end{subparag} \end{parag} \lecture{16}{2022-11-18}{...
\begin{parag}{Basic problem} The basic problem solved by flow networks is shipping as much of a resource as possible from one node to another. Edges have a weight which, if they were pipes, would represent their flow capacity. The question is then how to maximise the rate of flow from the source to the sink.
\begin{subparag}{Applications} This has many applications. For instance, evacuating people out of a building: given the exits and the corridor sizes, we can then know how many people we could evacuate in a given time. Another application is finding the best way to ship goods on roads, or disrupting it in another ...
\begin{parag}{Definition: Flow network} A \important{flow network} is a directed graph $G = \left(V, E\right)$, where each edge $\left(u, v\right)$ has a capacity $c\left(u, v\right) \geq 0$. This function is such that $c\left(u, v\right) = 0$ if and only if $\left(u, v\right) \not\in E$. Finally, we have a \impor...
\begin{parag}{Definition: Flow} A \important{flow} is a function $f: V \times V \to \mathbb{R}$ satisfying the following two constraints. First, the capacity constraint states that, for all $u, v \in V$, we have: \[0 \leq f\left(u, v\right) \leq c\left(u, v\right)\] In other words, the flow cannot be greater ...
\begin{subparag}{Notation} We will denote flows on a flow network by writing $f\left(u, v\right) / c\left(u, v\right)$ on each edge. For instance, we could have: \imagehere[0.55]{Lecture16/FlowNetworkExample.png} \end{subparag} \end{parag}
\begin{parag}{Definition: Value of a flow} The value of a flow $f$, denoted $\left|f\right|$, is: \[\left|f\right| = \sum_{v \in V}^{} f\left(s, v\right) - \sum_{v \in V}^{} f\left(v, s\right)\] which is the flow out of the source minus the flow into the source.
\begin{subparag}{Observation} By the flow conservation constraint, this is equivalent to the flow into the sink minus the flow out of the sink: \[\left|f\right| = \sum_{v \in V}^{} f\left(v, t\right) - \sum_{v \in V}^{} f\left(t, v\right)\] \end{subparag}
\begin{subparag}{Example} For instance, for the flow graph and flow hereinabove: \[\left|f\right| = \left(1 + 2\right) - 0 = 3\] \end{subparag} \end{parag}
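As a toy check of this definition (storing a flow as a dict keyed by edges is just an illustrative choice, not from the notes):

```python
# Toy example: a flow given as a dict {(u, v): f(u, v)}.  The value of
# the flow is the flow out of the source minus the flow into it.
def flow_value(flow, source):
    out_flow = sum(f for (u, v), f in flow.items() if u == source)
    in_flow = sum(f for (u, v), f in flow.items() if v == source)
    return out_flow - in_flow
```

On the flow of the example above (1 and 2 leaving the source, nothing entering it), this returns $\left(1 + 2\right) - 0 = 3$.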
\begin{parag}{Goal} The idea is now to develop an algorithm that, given a flow network, finds a maximum flow. The basic idea that could come to mind is to take a random path through our network, consider its bottleneck link, and send this value of flow along this path. We then have a new graph, with capacities redu...
\begin{parag}{Definition: Residual capacity} Given a flow network $G$ and a flow $f$, the \important{residual capacity} is defined as: \begin{functionbypart}{c_f\left(u, v\right)} c\left(u, v\right) - f\left(u, v\right), \mathspace \text{if } \left(u, v\right) \in E \\ f\left(v, u\right), \mathspace \text{if } \lef...
\begin{subparag}{Example} For instance, if we have an edge $\left(u, v\right)$ with capacity $c\left(u, v\right) = 5$ and current flow $f\left(u, v\right) = 3$, then $c_f\left(u, v\right) = 5 - 3 = 2$ and $c_f\left(v, u\right) = f\left(u, v\right) = 3$. \end{subparag}
\begin{subparag}{Remark} This definition is the reason why we do not want antiparallel edges: the notation is much simpler without. \end{subparag} \end{parag}
\begin{parag}{Definition: Residual network} Given a flow network $G$ and flow $f$, the \important{residual network} $G_f$ is defined as: \[G_f = \left(V, E_f\right), \mathspace \text{where } E_f = \left\{\left(u, v\right) \in V \times V: c_{f}\left(u, v\right) > 0\right\}\] We basically use our residual capacity f...
\begin{parag}{Definition: Augmenting path} Given a flow network $G$ and flow $f$, an \important{augmenting path} is a simple path (never going twice on the same vertex) from $s$ to $t$ in the residual network $G_f$. \important{Augmenting the flow} $f$ by this path means applying the minimum capacity over the path: ...
\begin{parag}{Ford-Fulkerson algorithm} The idea of the Ford-Fulkerson greedy algorithm for finding the maximum flow in a flow network is, like the one we had before, to improve our flow iteratively, but using residual networks in order to cancel wrong choices of paths.
\begin{subparag}{Example} Let's consider again our non-trivial flow network, and the suboptimal flow our naive algorithm found: \imagehere[0.5]{Lecture16/FlowNetworkExampleSubOptimal.png} Now, the residual network looks like: \imagehere[0.5]{Lecture16/FlowNetworkSubOptimal-ResidualNetwork.png} Now, the new algo...
\begin{subparag}{Proof of optimality} We will want to prove its optimality. However, to do so, we need the following definitions. \end{subparag} \end{parag}
\begin{parag}{Definition: Cut of a flow network} A \important{cut of a flow network} $G = \left(V, E\right)$ is a partition of $V$ into $S$ and $T = V \setminus S$ such that $s \in S$ and $t \in T$. In other words, we split our graph into nodes on the source side and nodes on the sink side.
\begin{subparag}{Example} For instance, we could have the following cut (where nodes from $S$ are coloured in black, and ones from $T$ are coloured in white): \imagehere[0.6]{Lecture16/FlowNetworkCutExample.png} Note that the cut does not necessarily have to be a straight line (since, anyway, straight lines make no...
\begin{parag}{Definition: Net flow across a cut} The \important{net flow across a cut} $\left(S, T\right)$ is: \[f\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} f\left(u, v\right) - \sum_{\substack{u \in S \\ v \in T}}^{} f\left(v, u\right)\] This is basically the flow leaving $S$ minus the flow ente...
\begin{subparag}{Example} For instance, on the graph hereinabove, it is: \[f\left(S, T\right) = 12 + 11 - 4 = 19\] \end{subparag} \end{parag}
\begin{parag}{Property} Let $f$ be a flow. For any cut $S, T$: \[\left|f\right| = f\left(S, T\right)\]
\begin{subparag}{Proof} We make a proof by structural induction on $S$. \begin{itemize}[left=0pt] \item If $S = \left\{s\right\}$, then the net flow is the flow out from $s$ minus the flow into $s$, which is exactly equal to the value of the flow. \item Let's say $S = S' \cup \left\{w\right\}$, supposing $\left|f\...
\begin{parag}{Definition: Capacity of a cut} The \important{capacity of a cut} $S, T$ is defined as: \[c\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} c\left(u, v\right)\]
\begin{subparag}{Example} For instance, on the graph hereinabove, the capacity of the cut is: \[12 + 14 = 26\] Note that we do not add the 9, since it goes in the wrong direction. \end{subparag}
\begin{subparag}{Observation} This value, however, \textit{depends} on the cut. \end{subparag} \end{parag} \lecture{17}{2022-11-21}{The algorithm may stop, or may not}{}
\begin{parag}{Property} For any flow $f$ and any cut $\left(S, T\right)$, then: \[\left|f\right| \leq c\left(S, T\right)\]
\begin{subparag}{Proof} Starting from the left hand side: \[\left|f\right| = f\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} f\left(u, v\right) - \underbrace{\sum_{\substack{u \in S\\ v \in T}}^{} f\left(v, u\right)}_{\geq 0}\] And thus: \[\left|f\right| \leq \sum_{\substack{u \in S \\ v \in T}}^{} f...
\begin{parag}{Definition: Min-cut} Let $f$ be a flow. A \important{min-cut} is a cut with minimum capacity. In other words, it is a cut $\left(S_{min}, T_{min}\right)$, such that for any cut $\left(S, T\right)$: \[c\left(S_{min}, T_{min}\right) \leq c\left(S, T\right)\]
\begin{subparag}{Remark} By the property above, the value of the flow is less than or equal to the min-cut: \[\left|f\right| \leq c\left(S_{min}, T_{min}\right)\] We will prove right after that, in fact, $\left|f_{max}\right| = c\left(S_{min}, T_{min}\right)$. \end{subparag} \end{parag}
\begin{parag}{Max-flow min-cut theorem} Let $G = \left(V, E\right)$ be a flow network, with source $s$, sink $t$, capacities $c$ and flow $f$. Then, the following propositions are equivalent: \begin{enumerate} \item $f$ is a maximum flow. \item $G_f$ has no augmenting path. \item $\left|f\right| = c\left(S, T\righ...
\begin{subparag}{Remark} This theorem shows that the Ford-Fulkerson method gives the optimal value. Indeed, it terminates when $G_f$ has no augmenting path, which is, as this theorem says, equivalent to having found a maximum flow. \end{subparag}
\begin{subparag}{Proof $\left(1\right) \implies \left(2\right)$} Let's suppose for contradiction that $G_f$ has an augmenting path $p$. However, then, the Ford-Fulkerson method would augment $f$ by $p$ to obtain a flow with increased value. This contradicts the fact that $f$ was a maximum flow. \end{subparag}
\begin{subparag}{Proof $\left(2\right) \implies \left(3\right)$} Let $S$ be the set of nodes reachable from $s$ in the residual network, and $T = V \setminus S$. Every edge going out of $S$ in $G$ must be at capacity. Indeed, otherwise, we could reach a node outside $S$ in the residual network, contradicting the co...
\begin{subparag}{Proof $\left(3\right) \implies \left(1\right)$} We know that $\left|f\right| \leq c\left(S, T\right)$ for all cuts $S, T$. Therefore, if the value of the flow is equal to the capacity of some cut, it cannot be improved. This shows its maximality. \qed \end{subparag} \end{parag} \begin{filecontents...
\begin{parag}{Summary} All this shows that our Ford-Fulkerson method for finding a max-flow works: \importcode{Lecture17/FordFulkerson-MaxFlow.code}{pseudo} Also, when we have found a max-flow, we can use our flow to find a min-cut: \importcode{Lecture17/FordFulkerson-MinCut.code}{pseudo} \end{parag}
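A possible self-contained sketch of the method in Python, finding augmenting paths with breadth-first search (i.e. the Edmonds-Karp variant); the capacity-dict representation is an assumption, not the course's pseudocode:

```python
from collections import deque

def max_flow(capacity, s, t):
    """BFS-based Ford-Fulkerson (Edmonds-Karp) on a capacity dict
    {u: {v: c(u, v)}}.  Returns the value of a maximum flow."""
    # Residual capacities, initialised to the original capacities,
    # with a reverse entry of 0 for every edge.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    value = 0
    while True:
        # Find an augmenting path in the residual network with BFS.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value  # no augmenting path left: flow is maximum
        # Bottleneck capacity along the path found.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse ones.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        value += bottleneck
```

The reverse residual entries are what allow a later iteration to cancel flow sent along a wrong path, exactly as in the example with the suboptimal flow above.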
\begin{parag}{High complexity analysis} It takes $O\left(E\right)$ to find a path in the residual network (using breadth-first search for instance). Each time, the flow value is increased by at least 1. Thus, the running time has a worst case of $O\left(E \left|f_{max}\right|\right)$. We can note that, indeed, there...
\begin{parag}{Lower complexity analysis} In fact, if we don't choose our paths randomly and if the capacities are integers (or rational numbers; this does not really matter, since we could then just multiply everything by the least common multiple of the denominators and get an equivalent problem), then we can get a much better complexity....
\begin{subparag}{Proof} We will not show those two affirmations in this course. \end{subparag} \end{parag}
\begin{parag}{Observation} If the capacities of our network are irrational, then the Ford-Fulkerson method may fail to terminate. \end{parag}
\begin{parag}{Application: Bipartite matching problem} Let's consider the bipartite matching problem. It is easier to explain it with an example. We have $N$ students applying for $M \geq N$ jobs, where each student gets several offers. Every job can be taken at most once, and every student can have at most one job. ...
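Because every augmenting path in the corresponding unit-capacity flow network has bottleneck 1, the max-flow computation can be specialised to repeated augmenting-path searches directly on the bipartite graph. A sketch under that view (the `offers` dict is a hypothetical input format):

```python
def max_bipartite_matching(offers):
    """Maximum matching where 'offers' maps each student to the list of
    jobs they were offered; equivalent to max-flow with unit capacities."""
    match = {}  # job -> student currently holding it

    def try_assign(student, seen):
        for job in offers[student]:
            if job in seen:
                continue
            seen.add(job)
            # Take the job if it is free, or if its current holder can be
            # moved elsewhere (this mirrors augmenting along a residual path).
            if job not in match or try_assign(match[job], seen):
                match[job] = student
                return True
        return False

    matched = sum(try_assign(s, set()) for s in offers)
    return matched, match
```

For instance, with offers `{'a': ['x'], 'b': ['x', 'y'], 'c': ['y', 'z']}`, all three students can be matched, even though a greedy first-come assignment would need to reshuffle.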
\begin{parag}{Application: Edge-disjoint paths problem} In an undirected graph, we may want to know the maximum number of routes we can take that do not share a common road. To do so, we set an edge of capacity 1 in both directions for every road (in a non-anti-parallel fashion, as seen earlier). Then, the ma...
\begin{parag}{Disjoint-set data structures} The idea of \important{disjoint-set data structures} is to maintain a collection $\mathcal{S} = \left\{S_1, \ldots, S_k\right\}$ of disjoint sets, which can change over time. Each set is identified by a representative, which is some member of the set. It does not matter whic...
\begin{subparag}{Remark} This data structure is also called union-find. \end{subparag} \end{parag}
\begin{parag}{Linked list representation} A way to represent this data structure is through a linked list. To do so, each set is an object looking like a singly linked list. Each set object is represented by a pointer to the head of the list (which we will take as the representative) and a pointer to the tail of the l...
\begin{subparag}{Make-Set} For the procedure \texttt{Make-Set(x)}, we can just create a singleton list containing $x$. This is easily done in time $\Theta\left(1\right)$. \end{subparag}
\begin{subparag}{Find} For the procedure \texttt{Find(x)}, we can follow the pointer back to the list object, and then follow the head pointer to the representative. This is also done in time $\Theta\left(1\right)$. \end{subparag}
\begin{subparag}{Union} For the procedure \texttt{Union(x, y)}, everything gets more complicated. We notice that we can append a list to the end of another list. However, we will need to update all the elements of the list we appended to point to the right set object, which will take a lot of time if its size is big. ...
\begin{parag}{Theorem} Let us consider a linked-list implementation of a disjoint-set data structure. With the weighted-union heuristic, a sequence of (any) $m$ operations takes $O\left(m + n\log\left(n\right)\right)$ time, where $n$ is the number of elements our structure ends with after those operations. Without thi...
\begin{subparag}{Proof with} The inefficiency comes from constantly rewiring our elements when running the \texttt{Union} procedure. Let us count how many times an element $i$ may get rewired if, amongst those $m$ operations, there are $n$ \texttt{Union} calls. When we merge a set $A$ containing $i$ and another set $...
\begin{subparag}{Proof without} Let's say that we have $n$ elements each in a singleton set and that our $m$ operations consist in always appending the list of the first set to the second one, through unions. This way, the first set will get a size constantly growing. Thus, we will have to rewire $1 + 2 + \ldots + n-1...
\begin{subparag}{Remark} This kind of analysis is amortised complexity analysis: we don't make our analysis on a single operation, since a really bad case may happen; on average, however, everything is fine. \end{subparag} \end{parag} \begin{filecontents*}[overwrite]{Lecture18/DisjointSetForestMakeSet.code} pro...
\begin{parag}{Forest of trees} Now, let's consider instead a much better idea. We make a forest of trees (which are \textit{not} binary), where each tree represents one set, and the root is the representative. Also, since we are working with trees, naturally each node only points to its parent. \imagehere[0.4]{Lectu...
\begin{subparag}{Make-Set} \texttt{Make-Set(x)} can be done easily by making a single-node tree. \importcode{Lecture18/DisjointSetForestMakeSet.code}{pseudo} The rank will be defined and used in the \texttt{Union} procedure. \end{subparag}
\begin{subparag}{Find} For \texttt{Find(x)}, we can just follow parent pointers up to the root. However, we can also use the following great heuristic: \important{path compression}. The \texttt{Find(x)} procedure follows a path to the root. Thus, we can make all those elements' parent be the representative directly (in ord...
\begin{subparag}{Union} For \texttt{Union(x, y)}, we can make the root of one of the trees the child of another. Again, we can optimise this procedure with another great heuristic: \important{union by rank}. For the \texttt{Find(x)} procedure to be efficient, we need to keep the height of our trees as small as possi...
\begin{subparag}{Complexity} Let's also consider applying $m$ operations to a datastructure with $n$ elements. We can show that, using both union by rank and path compression, we have a complexity of $O\left(m \alpha\left(n\right)\right)$, where $\alpha\left(n\right)$ is the inverse Ackermann function. This function...
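The two heuristics together can be sketched as follows in Python (a minimal illustration, not the course's pseudocode):

```python
class DisjointSet:
    """Forest implementation with union by rank and path compression."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find(self, x):
        # Path compression: point every visited node straight at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

Note that the rank is only an upper bound on the height once path compression starts flattening the trees, which is why the two heuristics combine so well.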
\begin{parag}{Application: Connected components} For instance, we can construct a disjoint-set data structure for all the connected components of an undirected graph. Using the fact that, in an undirected graph, two elements are connected if and only if there is a path between them: \importcode{Lecture18/ConnectedCom...
\begin{subparag}{Example} For instance, in the following graph, we have two connected components: \imagehere[0.7]{Lecture18/ExampleConnectedComponents.png} This means that our algorithm will give us two disjoint sets in the end. \end{subparag}
\begin{subparag}{Analysis} We notice that we have $V$ elements, and we have at most $V + 3E$ union or find operations. Thus, using the best implementation we saw for disjoint set data structures, we get a complexity of $O\left(\left(V + E\right) \alpha\left(V\right)\right) \approx O\left(V + E\right)$. For the other...
\begin{parag}{Definition: Spanning tree} A spanning tree of a graph $G$ is a set $T$ of edges that is acyclic and spanning (it connects all vertices).
\begin{subparag}{Example} For instance, the following is a spanning tree: \imagehere[0.6]{Lecture18/ExampleSpanningTree.png} However, the following is not a spanning tree since it has no cycle but is not spanning (the node $e$ is never reached): \imagehere[0.6]{Lecture18/NotSpanningTreeExample1.png} Similarly, t...
\begin{subparag}{Remark} The number of edges of a spanning tree is $E_{span} = V - 1$. \end{subparag} \end{parag}
\begin{parag}{Minimum spanning tree (MST)} The goal is now, given an undirected graph $G = \left(V, E\right)$ and weights $w\left(u, v\right)$ for each edge $\left(u, v\right) \in E$, to output a spanning tree of minimum total weight (the one whose sum of edge weights is smallest).
\begin{subparag}{Application: Communication networks} This problem can have many applications. For instance, let's say we have some cities between which we can make communication lines at different costs. Finding how to connect all the cities at the smallest cost possible is exactly an application of this problem. \e...
\begin{subparag}{Application: Clustering} Another application is clustering. Let's consider the following graph, where edge weights equal the distance between nodes: \imagehere[0.5]{Lecture18/MinimumSpanningTreesClustering.png} Then, to find $n$ clusters, we can make the minimum spanning tree (which will want to use ...
\begin{parag}{Definition: Cut} Let $G = \left(V, E\right)$ be a graph. A \important{cut} $\left(S, V \setminus S\right)$ is a partition of the vertices into two non-empty disjoint sets $S$ and $V \setminus S$. \end{parag}
\begin{parag}{Definition: Crossing edge} Let $G = \left(V, E\right)$ be a graph, and $\left(S, V \setminus S\right)$ be a cut. A \important{crossing edge} is an edge connecting a vertex from $S$ to a vertex from $V \setminus S$. \end{parag} \lecture{19}{2022-11-28}{Finding the optimal MST}{}
\begin{parag}{Theorem: Cut property} Let $S, V \setminus S$ be a cut. Also, let $T$ be a tree on $S$ which is part of a MST, and let $e$ be a crossing edge of minimum weight. Then, there is a MST of $G$ containing both $e$ and $T$. \imagehere[0.5]{Lecture19/CutProperty.png}
\begin{subparag}{Proof} Let us consider the MST $T$ is part of. If $e$ is already in it, then we are done. Since there must be a crossing edge (to span both $S$ and $V \setminus S$), if $e$ is not part of the MST, then another crossing edge $f$ is part of the MST. However, we can just replace $f$ by $e$: since $w\...
\begin{parag}{Prim's algorithm} The idea of Prim's algorithm for finding MSTs is to greedily construct the tree by always picking the crossing edge with smallest weight.
\begin{subparag}{Proof} Let's do this proof by structural induction on the number of nodes in $T$. Our base case is trivial: starting from any point, a single element is always a subtree of a MST. For the inductive step, we can just see that starting with a subtree of a MST and adding the crossing edge with smallest...
\begin{subparag}{Implementation} We need to keep track of all the crossing edges at every iteration, and to be able to efficiently find the minimum crossing edge at every iteration. Checking out all outgoing edges is not really good since it leads to $O\left(E\right)$ comparisons at every iteration and thus a total...
\begin{subparag}{Analysis} Initialising $Q$ and the first for loop take $O\left(V \log\left(V\right)\right)$ time. Then, decreasing the key of $r$ takes $O\left(\log\left(V\right)\right)$. Finally, in the while loop, we make $V$ \texttt{extractMin} calls---leading to $O\left(V \log\left(V\right)\right)$---and at most ...
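A possible Python sketch of the algorithm; instead of a decrease-key operation it uses a ``lazy'' binary heap that simply skips entries made stale by cheaper edges, which also yields an $O\left(E \log\left(V\right)\right)$ bound (the adjacency representation is an assumption):

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm on an undirected graph {u: [(v, w), ...]}.
    Returns the list of MST edges as (u, v, w) tuples."""
    in_tree = {root}
    edges = []
    # Heap of candidate crossing edges, keyed by weight.
    frontier = [(w, root, v) for v, w in graph[root]]
    heapq.heapify(frontier)
    while frontier and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue  # stale entry: v was already reached more cheaply
        # (w, u, v) is a minimum-weight crossing edge: add it greedily.
        in_tree.add(v)
        edges.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(frontier, (wx, v, x))
    return edges
```

Each popped edge is a minimum-weight crossing edge for the current cut $\left(S, V \setminus S\right)$, so the cut property guarantees the greedy choice is safe.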
\begin{parag}{Kruskal's algorithm} Let's consider another way to solve this problem. The idea of Kruskal's algorithm for finding MSTs is to start from a forest $T$ with all nodes being in singleton trees. Then, at each step, we greedily add the cheapest edge that does not create a cycle. The forest will have been me...
\begin{subparag}{Proof} Let's do a proof by structural induction on the number of edges in $T$ to show that $T$ is always a sub-forest of a MST. The base case is trivial since, at the beginning, $T$ is a union of singleton vertices and thus, definitely, it is the sub-forest of any tree on the graph (and of any MST, ...
\begin{subparag}{Implementation} To implement this algorithm, we need to be able to efficiently check whether the cheapest edge creates a cycle. However, this is the same as checking whether its endpoints belong to the same component, meaning that we can use a disjoint-set data structure. We can thus implement our al...
\begin{subparag}{Analysis} Initialising \texttt{result} is in $O\left(1\right)$, the first for loop represents $V$ \texttt{makeSet} calls, sorting $E$ takes $O\left(E \log\left(E\right)\right)$ and the second for loop is $O\left(E\right)$ \texttt{findSets} and \texttt{unions}. We thus get a complexity of: \[\underbr...
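A possible Python sketch of Kruskal's algorithm (the `(weight, u, v)` edge-list input format and the stripped-down union-find, which uses path halving only, are simplifications for illustration):

```python
def kruskal_mst(vertices, edges):
    """Kruskal's algorithm: 'edges' is a list of (w, u, v) tuples.
    Uses a minimal union-find to detect cycles."""
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:          # endpoints in different components:
            parent[ru] = rv   # the edge creates no cycle, so keep it
            mst.append((w, u, v))
    return mst
```

Sorting the edge list dominates the running time, matching the $O\left(E \log\left(E\right)\right)$ term in the analysis above.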
\begin{parag}{Definition: Shortest path problem} Let $G = \left(V, E\right)$ be a directed graph with edge-weights $w\left(u, v\right)$ for all $\left(u, v\right) \in E$. We want to find the path from $a \in V$ to $b \in V$, $\left(v_0, v_1, \ldots, v_k\right)$, such that its weight $\sum_{i=1}^{k} w\left(v_{i-1}, v...
\begin{subparag}{Variants} Note that there are many variants of this problem. In \important{single-source}, we want to find the shortest path from a given source vertex to every other vertex of the graph. In \important{single-destination}, we want to find the shortest path from every vertex in the graph to a given d...
\begin{parag}{Negative-weight edges} Note that we will try to allow negative weights, as long as there is no negative-weight cycle (a cycle whose weights sum to a negative value) reachable from the source (since then we could just keep going around the cycle and all nodes would have distance $-\infty$). In fact, one of our algorithms 
\begin{subparag}{Remark} Dijkstra's algorithm, which we will present in the following course, only works with nonnegative weights. \end{subparag}
\begin{subparag}{Application} This can for instance be really interesting for exchange rates. Let's say we have some exchange rate for some given currencies. We are wondering if we can make an infinite amount of money by trading money to a currency, and then to another, and so on until, when we come back to the first ...
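The trick hinted at here is to turn products into sums: replace each exchange rate $r$ by an edge weight $-\log\left(r\right)$, so that a cycle whose rate product exceeds 1 becomes a negative-weight cycle. A toy check with made-up rates:

```python
import math

# Hypothetical exchange rates: rates[u][v] units of v per unit of u.
rates = {
    'usd': {'eur': 0.9},
    'eur': {'gbp': 0.9},
    'gbp': {'usd': 1.3},
}

# Reduction: an arbitrage cycle has a rate product > 1, which is the
# same as its -log weights summing to a negative value.
weight = {u: {v: -math.log(r) for v, r in nbrs.items()}
          for u, nbrs in rates.items()}

cycle = ['usd', 'eur', 'gbp', 'usd']
total = sum(weight[u][v] for u, v in zip(cycle, cycle[1:]))
product = 0.9 * 0.9 * 1.3  # = 1.053 > 1, so arbitrage exists
```

Here `total` comes out negative exactly because the rate product around the cycle is greater than 1, so a negative-cycle detection algorithm would flag this arbitrage opportunity.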