| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
graph theory
|
Proof that n-dependent Set in Graph theory is NP Complete
|
https://cs.stackexchange.com/questions/132334/proof-that-n-dependent-set-in-graph-theory-is-np-complete
|
<p>Consider an undirected graph <span class="math-container">$G$</span>. A subset <span class="math-container">$S \subseteq V(G)$</span> is n-dependent if for every <span class="math-container">$x \in S$</span>, <span class="math-container">$d_{\langle S\rangle}(x) \leq n-1$</span>, where <span class="math-container">$\langle S\rangle$</span> denotes the subgraph induced by <span class="math-container">$S$</span>. The n-dependence number of <span class="math-container">$G$</span>, denoted <span class="math-container">$\beta_n(G)$</span>, is the maximum cardinality of an n-dependent set of <span class="math-container">$G$</span>. That is, the n-dependence number of <span class="math-container">$G$</span> is the maximum cardinality of a subset of vertices <span class="math-container">$S$</span> such that <span class="math-container">$\Delta(\langle S\rangle) < n$</span>.</p>
<p>Consider the following question:</p>
<p><strong>Instance</strong>: Given a graph <span class="math-container">$G$</span> and positive integers <span class="math-container">$n$</span> and <span class="math-container">$k$</span></p>
<p><strong>Question</strong>: Does there exist <span class="math-container">$S \subseteq V(G)$</span> such that <span class="math-container">$S$</span> is an n-dependent set and <span class="math-container">$|S| \geq k$</span>?
Prove that the n-dependent set problem in undirected graphs is NP-complete.</p>
<p>This was a midterm question and my answer was this:</p>
<blockquote>
<p>For the sake of argument let's assume that there exists an algorithm <span class="math-container">$\mathcal{A}(G, n, k)$</span> that can solve the given problem in polynomial-time. Now let's say we are given an undirected graph <span class="math-container">$G$</span> with a number <span class="math-container">$k$</span> and are asked to check if the graph contains an independent set of size <span class="math-container">$k$</span>. To solve this problem in polynomial-time, we are going to call <span class="math-container">$\mathcal{A}(G, 1, k)$</span>. But since the independent set decision is NP-complete, we can't solve it in polynomial-time, hence <span class="math-container">$\mathcal{A}$</span> does not exist.</p>
</blockquote>
<p>I can't understand why this answer is wrong (I got 0). Please explain.</p>
| 100
|
|
graph theory
|
Difference between graph-partitioning and graph-clustering
|
https://cs.stackexchange.com/questions/54683/difference-between-graph-partitioning-and-graph-clustering
|
<p>What is the difference between graph-partitioning and graph-clustering in graph theory?</p>
|
<p>Graph partitioning and graph clustering are informal concepts, which (usually) mean partitioning the vertex set under some constraints (for example, the number of parts) such that some objective function is maximized (or minimized). We usually have some specific constraints and objective function in mind. However, <em>graph partitioning</em> and <em>graph clustering</em>, as vague informal concepts, are pretty much the same.</p>
| 101
|
graph theory
|
A Question about a Question related to Graph Theory and Maximum Flow
|
https://cs.stackexchange.com/questions/88899/a-question-about-a-question-related-to-graph-theory-and-maximum-flow
|
<p>The following question is from the book "Introduction to Algorithms" by Cormen and three other authors.</p>
<p>$26.2-10$<br>
Show how to find a maximum flow in a network $G = (V,E)$ by a sequence of at
most $|E|$ augmenting paths. (Hint: Determine the paths after finding the maximum
flow.)</p>
<p>I find this question confusing because the hint contradicts the question. Is it asking you to find the maximum flow in a graph, or is it asking you to find a path?</p>
<p>Recall that for a given flow graph $G$ there might be several flows that yield the maximum flow. Is this question asking you to find all the flows that produce a maximum flow? Do you think this question is properly worded?</p>
<p>Bob</p>
|
<p>It's asking you to prove that there exists a sequence of $|E|$ augmenting paths that yields the maximum flow.</p>
<p>The hint suggests: suppose you already knew the maximum flow. Then use that information to choose $|E|$ augmenting paths that will yield that maximum flow.</p>
<p>Yes, this sounds weird. Obviously what you have proven will not be useful as a maximum-flow algorithm (you would need to know the maximum flow already, so it's no use in computing the maximum flow). Think of it as proving a theoretical fact, rather than trying to design a useful algorithm.</p>
| 102
|
graph theory
|
A question about the graph theory or data structure and algorithms
|
https://cs.stackexchange.com/questions/155295/a-question-about-the-graph-theory-or-data-structure-and-algorithms
|
<p>I would like to ask this question as I am not sure about the answer.</p>
<p>Let <span class="math-container">$G=(V,E)$</span> be a connected, undirected graph, and let <span class="math-container">$x,y\in V$</span> be two different vertices. Let <span class="math-container">$A$</span> be the problem of finding the shortest simple path between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, and let <span class="math-container">$B$</span> be the problem of finding the longest simple path between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Which of the following statement about <span class="math-container">$A$</span> and <span class="math-container">$B$</span> is true?</p>
<ol>
<li>Both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> can be solved in polynomial time</li>
<li><span class="math-container">$A$</span> is not known to be solvable in polynomial time but <span class="math-container">$B$</span> can be solved in polynomial time</li>
<li><span class="math-container">$A$</span> can be solved in polynomial time but <span class="math-container">$B$</span> is not known to be solvable in polynomial time</li>
<li>It is known that both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> can be solved in polynomial time</li>
<li>It is not known whether either <span class="math-container">$A$</span> or <span class="math-container">$B$</span> can be solved in polynomial time</li>
</ol>
<p>I think option 3 is correct because the time complexity for finding the shortest path between two nodes is <span class="math-container">$O(n^2)$</span> but for <span class="math-container">$B$</span>, I am not sure.</p>
<p>Could you please explain? Thank you so much</p>
|
<p>Problem <span class="math-container">$A$</span> can be solved in polynomial time. Problem <span class="math-container">$B$</span> is NP-hard.
This means that options 2 and 5 are certainly false.</p>
<p>We don't know whether option 1 is true. If it is true, then P=NP.</p>
<p>Option 3 is currently true (but might become false in the future if P is proven to be equal to NP).</p>
<p>Option 4 is currently false but might become true in the future (if P is proven to be equal to NP).</p>
<p>If we live in a world where P=NP but no proof of it is known, then option 1 and option 3 are both currently true: the problems can be solved in polynomial time, even though we do not yet know it.</p>
| 103
|
graph theory
|
What is the proper way to write logic formula, say concerning graph theory?
|
https://cs.stackexchange.com/questions/157369/what-is-the-proper-way-to-write-logic-formula-say-concerning-graph-theory
|
<p>Say for example I'd like to state that there exists a pair of vertices such that they form an edge in one graph but not some other graph. I'd go about it as follows:</p>
<p><span class="math-container">$$ \exists u, v \in V, (u,v) \in G, (u,v) \not\in H $$</span></p>
<p>My main question here is: Is my use of commas fine (it seems odd, since there's commas between <span class="math-container">$u$</span> and <span class="math-container">$v$</span> already which don't seem to be used in the same manner), or should I use vertical lines, colons, semicolons, logical <span class="math-container">$\land$</span> ... and if so where exactly? Is there a proper way to do this which I just haven't found or is me assuming everyone has their own distinct way correct?</p>
|
<p>The standard logical notation I have seen among computer scientists for saying there exists <span class="math-container">$x \in X$</span> such that <span class="math-container">$\varphi(x)$</span> holds is to write</p>
<p><span class="math-container">$$\exists x \in X . \varphi(x).$$</span></p>
<p>In other words, we use a period. Sometimes people will instead write</p>
<p><span class="math-container">$$\exists x \in X \; \varphi(x)$$</span></p>
<p>i.e., they use no separator (just some space). There are many more variants, for instance I have seen <span class="math-container">$(\exists x \in X) (\varphi(x))$</span> in communities that are more focused on mathematics or logic.</p>
<p>So, in short, there are many conventions. Ultimately, I think if you use a comma, people will understand what you mean.</p>
<p>If it were me, I would write something like</p>
<p><span class="math-container">$$\exists u,v \in V . (u,v) \in G \land (u,v) \notin H,$$</span></p>
<p>and use logical connectives (like <span class="math-container">$\land$</span>) to express a series of statements that must be true, rather than listing them with commas. But I think people will understand what you wrote.</p>
<p>See also <a href="https://math.stackexchange.com/q/197554/14578">https://math.stackexchange.com/q/197554/14578</a> and <a href="https://math.stackexchange.com/q/79190/14578">https://math.stackexchange.com/q/79190/14578</a>.</p>
| 104
|
graph theory
|
Rank of a graph in matroid theory
|
https://cs.stackexchange.com/questions/150541/rank-of-a-graph-in-matroid-theory
|
<p>I was going through the concept of graphs as matroids and I came upon the rank of a graph. Wikipedia lists it as <span class="math-container">$n - c$</span>, <span class="math-container">$n = |V|$</span>, <span class="math-container">$c =$</span> # of connected components.</p>
<p>I do understand the rank and nullity of matrices, and the rank of a graph corresponds to the rank of its incidence matrix.
However, I am not understanding how
<span class="math-container">$r(G) = |V| - c$</span>, <span class="math-container">$c = $</span> # of connected components,
and the definition of rank as the maximum size of a subforest of <span class="math-container">$G$</span> are equivalent.</p>
<p>I tried looking it up online but found no satisfactory explanation. Any resources that would be helpful to understand the concept would be great.</p>
|
<p>The rank of a graphic matroid is the size of a spanning forest, which consists of a spanning tree in each connected component. A spanning tree for a connected component with <span class="math-container">$m$</span> vertices contains <span class="math-container">$m - 1$</span> edges. Summing over all <span class="math-container">$c$</span> components, whose sizes add up to <span class="math-container">$n$</span>, we see that a spanning forest contains <span class="math-container">$n - c$</span> edges.</p>
| 105
|
graph theory
|
Graph theory: BFS (Breadth First Search) - why is current processed first?
|
https://cs.stackexchange.com/questions/117964/graph-theory-bfs-breadth-first-search-why-is-current-processed-first
|
<p>I am referencing some code I found on GeeksForGeeks.com: why is the current node printed (and processed) first, before its children are processed? Wouldn't "breadth first" mean "process children first, then process parent"? Or is that only for trees? I can't be the only one who doesn't understand this, so instead of flaming me, could somebody please simply post the answer?</p>
<pre><code>void Graph::DFSUtil(int v, bool visited[])
{
    visited[v] = true;
    cout << v << " ";          // <-- why is this printed FIRST?

    // Recur for all the vertices adjacent to this vertex
    list<int>::iterator i;
    for (i = adj[v].begin(); i != adj[v].end(); ++i)
        if (!visited[*i])
            DFSUtil(*i, visited);
}

// DFS traversal of the vertices reachable from v.
// It uses recursive DFSUtil()
void Graph::DFS(int v)
{
    // Mark all the vertices as not visited
    bool *visited = new bool[V];
    for (int i = 0; i < V; i++)
        visited[i] = false;

    // Call the recursive helper function to print DFS traversal
    DFSUtil(v, visited);
}
</code></pre>
|
<p>1) In answer to the semantic question "Wouldn't 'breadth first' mean 'Process children first, then process parent'?"</p>
<p>This question is a duplicate of the following: <a href="https://cs.stackexchange.com/questions/107187/what-is-the-meaning-of-breadth-in-breadth-first-search">What is the meaning of 'breadth' in breadth first search?</a></p>
<p>2) In answer to the technical question "Why is the current node printed (and processed) first before its children are processed?":</p>
<p>BFS processes nodes in the following order: the starting vertex, then all the vertices at distance 1, then all the vertices at distance 2, etc. As pointed out by @user111398, you need to mark a node as visited before processing it, or you will go into an infinite loop if the graph is cyclic. If you mark the parent but don't process it before (marking and) processing its children, then you are processing the nodes in the following order: all the nodes at maximum distance, then all the nodes at maximum distance minus one, etc. This is reverse-BFS, not BFS.</p>
| 106
|
graph theory
|
Formulate the Marriage Problem into a Maximum-flow problem (Graph theory)
|
https://cs.stackexchange.com/questions/33569/formulate-the-marriage-problem-into-a-maximum-flow-problem-graph-theory
|
<p>Suppose I have $M=\{1,\ldots, n\}$ men and $W = \{1, \ldots, n\}$ women and $B =\{1, \ldots, m\}$ brokers, such that each broker knows a subset of $M \times W$, and for each pair in this subset a marriage can be set up between the corresponding man and woman.</p>
<p>Each broker $i$ can set up a maximum of $b_i$ marriages and a person can only be married once. Also we assume all marriages are heterosexual.</p>
<p>I want to determine the maximum number of marriages possible, and I want to show that the answer can be found by solving a <strong>maximum-flow problem</strong>.</p>
<p><img src="https://i.sstatic.net/JDLGU.png" alt="enter image description here"></p>
<p>What I've tried:</p>
<p>Make source and sink nodes with opposite demand. Then make a node for each ordered pair $(i,j)$ where $i$ is a woman and $j$ is a man. For each broker $k$ make a corresponding node and introduce an arc with capacity $b_k$. For each node $(i,j)$ make an arc from broker $k$ with capacity $1$ if broker $k$ can arrange that marriage, and $0$ otherwise.</p>
<p>However, after this I stop. I need to keep track of state, that is, ensure that no person gets married twice!</p>
|
<p>You shouldn't need to keep track of state. This can all be handled with capacity constraints over the nodes. The network can be structured as follows:</p>
<p>Start with the graph where one partition consists only of the "men" nodes and the other partition consists of the "women" nodes. Now, add a node for each broker $b$ and for each marriage pair $(m, w)$ on $b$'s list create two edges $(m, b)$ and $(b, w)$. Finally add a source node connected to all men and a sink node connected to all women (you can add another arc between those to simplify constraint cases if you want, but it's not necessary).</p>
<p>So now we have a graph, but to have a proper flow network we need capacity constraints. For each man and woman node we have a capacity constraint of 1, since each of them can marry at most one person. For the $i$th broker node, add a capacity constraint of $b_i$. This way the total flow (number of marriages) going through the broker is at most $b_i$.</p>
<p>Compute maximum flow over this graph and you should be good to go.</p>
| 107
|
graph theory
|
Applications of Graph Automorphisms
|
https://cs.stackexchange.com/questions/65391/applications-of-graph-automorphisms
|
<p>I've seen the topic of the automorphism group appear in several introductory graph theory books I've looked at. It always feel oddly disjointed and poorly motivated to me.</p>
<p>Is there any practical (or impractical for that matter) applications of knowing the automorphism group of a graph?</p>
|
<p>Automorphisms capture a natural notion of symmetry of graphs. As a result, they can be used to speed up algorithms that would otherwise run slowly, by pruning the search space.</p>
<p>For example, integer programming is usually solved via branch-and-bound. However, if the formulation is degenerate, this can take far longer than necessary to run, because it has to keep checking symmetric parts of the tree. We can use graph automorphisms to compute the orbits of variables in the linear programming problem, and then treat parts with the same orbit as identical. A recent application of such techniques to MILP can be read <a href="http://www.optimization-online.org/DB_FILE/2012/10/3659.pdf" rel="noreferrer">here</a>.</p>
| 108
|
graph theory
|
Using an undirected graph to represent an ordered pair?
|
https://cs.stackexchange.com/questions/151497/using-an-undirected-graph-to-represent-an-ordered-pair
|
<p>Set theory depends on a set membership function <span class="math-container">$\epsilon$</span> which is a class of ordered pairs. Is it possible to construct <em>the ordered pair</em> from an undirected graph of <em>unordered pairs</em>? Alternatively, is there a way to construct a undirected graph that represents a directed graph?</p>
<p>The graph may be traversed from either a fixed start node or, optionally, any node. I realize most theories of graphs depend on set theory, and I am considering the case where set theory depends on this structure of graph theory for its ordered pair. I'm interested because the undirected graph appears more basic than the directed one, as no order is required.</p>
<p>I was thinking of the following undirected graph for ordered pairs, where the vertices are points and the edges are lines. The start node is at the top. The structure of order can be distinguished by the two subgraphs at depth 4. The leaves below depth 4 represent an ordered value which depends on the depth of that member of the pair.</p>
<p><a href="https://i.sstatic.net/8YGiI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8YGiI.jpg" alt="enter image description here" /></a>
I'd appreciate any guidance.</p>
| 109
|
|
graph theory
|
What does "local" mean?
|
https://cs.stackexchange.com/questions/86260/what-does-local-mean
|
<p>I study graph theory on my own using Diestel's <em>Graph Theory</em> book (with Algorithmic graph theory in mind). I don't understand what <em>local property</em>, <em>global property</em>, <em>locality</em> mean given a graph $G$. </p>
<p>For example, on the page 5 it says </p>
<blockquote>
<p>The average degree quantifies globally what is measured <strong>locally</strong> by the
vertex degrees: the number of edges of $G$ per vertex. Sometimes it will
be convenient to express this ratio directly, as $\varepsilon(G) := |E|/|V|$.</p>
</blockquote>
<p>In particular, I found the following phrases including the word <em>local...</em></p>
<ul>
<li><em>local information</em> (pg. 46)</li>
<li><em>maximum local density</em> (pg. 61)</li>
<li><em>the above local structures</em> (pg. 101)</li>
<li><em>locally looks like a tree</em> (pg. 110)</li>
<li><em>there is a local reason for it</em> (pg. 110)</li>
<li><em>we are looking for local implications of global assumptions</em> (pg. 181).</li>
</ul>
<p>and many more...</p>
<p>Could someone explain (possibly with examples) what these <em>local</em> and <em>global</em> mean in the context of the graph theory?</p>
|
<p><strong>Local property</strong> - a property that relates to a specific vertex and its near neighbors. For example, the number of neighbors or 2nd-order neighbors a vertex $v$ has. "<em>locally looks like a tree</em>" could more formally be written as: the subgraph induced by the vertices within distance $k$ of the vertex $v$ is a tree.</p>
<p><strong>Global property</strong> - a property that relates to the graph, like its diameter, average degree, number of connected components or even its size.</p>
| 110
|
graph theory
|
Understanding The Mapping Of Edges to Nodes In A Graph Theory Problem
|
https://cs.stackexchange.com/questions/59995/understanding-the-mapping-of-edges-to-nodes-in-a-graph-theory-problem
|
<p>I am really confused with this <a href="https://community.topcoder.com/stat?c=problem_statement&pm=13707" rel="nofollow">problem</a>.</p>
<p><strong>Here's the problem:</strong> <br></p>
<p>You have $N$ points numbered $1$ through $N$, inclusive, and $N$ arrows, again numbered $1$ through $N$, inclusive. No two arrows start at the same place, but multiple arrows can point to the same place, and arrows can start and end in the same place. The arrow from place $i$ points to place $a[i-1]$ ($a$ being an array representing the game board with $N$ elements, and $i$ being between $1$ and $N$, inclusive). There are $0$ to $N$ tokens, inclusive, placed in those places, which, in each round, move along the arrows from their current place. If two or more tokens are in the same place, then you lose that game. But if that doesn't happen for the $K$ rounds specified, then you win the game. There may be multiple ways to solve the problem; two ways are different if there is some $i$ such that at the beginning of the game place $i$ did contain a token in one case but not in the other. Count those ways and return their count modulo $1,000,000,007$.</p>
<p>The whole problem is confusing to me, but what really confuses me is that it states that the arrow that starts from $i$ goes to $a[i-1]$. How I understand it, for the first example ($\{1,2,3\}$, $5$, returns $8$), if $a[1]=1$, $a[2]=2$, and $a[3]=3$, then $3$ maps to $2$ and $2$ maps to $1$, but then $1$ maps to $0$ (and point $0$ doesn't exist).</p>
<p>What would be more correct would be if $a[0]=1$, $a[1]=2$, and $a[2]=3$, but then all the points would map to themselves (though it says in the example that the tokens don't move during the rounds).</p>
<p>I am probably way off, but I couldn't find many explanations, and the ones I found didn't make any sense to me, and I couldn't find many visual depictions either. </p>
|
<p>$a[0] = 1, a[1] = 2, a[2] = 3$. </p>
<p>"The arrow from place $i$ points to place $a[i−1]$"</p>
<p>So yes, all $3$ places in this sample point to themselves. It actually says in the explanation that "in each round each token will stay in the same place". </p>
<p>In the second sample, all three places point towards the first one.</p>
<p>In the third sample, you have a $2$-cycle: $1$ points to $2$ and vice-versa. </p>
<p>Is everything clear now?</p>
| 111
|
graph theory
|
Research on exact-cover problem and graph theory for NP-complete problems?
|
https://cs.stackexchange.com/questions/142318/research-on-exact-cover-problem-and-graph-theory-for-np-complete-problems
|
<p>Sorry for the vagueness, but I'm trying to study the latest progress on the exact cover problem and using graphs for NP-complete problems. Googling around has not been very helpful. I understand the basics but I can't find a key journal in this area or where to look online. Any references would be greatly appreciated!</p>
| 112
|
|
graph theory
|
What is the difference between regular trees and phylogenetic trees in terms of graph theory?
|
https://cs.stackexchange.com/questions/49355/what-is-the-difference-between-regular-trees-and-phylogenetic-trees-in-terms-of
|
<p>If I am not mistaken, a tree is any graph that does not contain cycles.</p>
<p>However, I am currently taking a bioinformatics course where we deal a lot with algorithms on phylogenetic trees. Usually you are given a phylogenetic tree with <span class="math-container">$n$</span> leaves, and then you run some algorithm that does something trivial like a simple traversal of the tree, and you get an <span class="math-container">$O(n)$</span> time bound, without anything being said about the number of internal nodes.</p>
<p>However, given a regular tree with <span class="math-container">$n$</span> leaves, the total number of internal nodes can be infinite.</p>
<p>For example the following is a tree with one leaf (two if you consider it to be rooted) and infinite number of internal nodes:</p>
<p><a href="https://i.sstatic.net/I09Fg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I09Fg.jpg" alt="enter image description here" /></a></p>
<p>So what is it about phylogenetic trees that lets you express the running time of various algorithms in terms of the number of leaves, even though your algorithm actually traverses the entire phylogenetic tree?</p>
|
<p>Phylogenetic trees are usually "full binary trees", that is, rooted trees in which every internal node has exactly two children. A full binary tree with $n$ leaves has exactly $n-1$ internal nodes, and hence $2n-1$ nodes in total, so traversing the whole tree still takes $O(n)$ time.</p>
| 113
|
graph theory
|
How to find a Graph Embedding given a metric space?
|
https://cs.stackexchange.com/questions/90473/how-to-find-a-graph-embedding-given-a-metric-space
|
<p>I am interested to learn more about <a href="https://math.stackexchange.com/questions/520768/whats-the-relation-between-topology-and-graph-theory">topological graph theory</a> and <a href="https://en.wikipedia.org/wiki/Graph_embedding" rel="nofollow noreferrer">Graph Embedding</a>.</p>
<p>Assume I have a <a href="https://en.wikipedia.org/wiki/Metric_space" rel="nofollow noreferrer">metric space</a>, <span class="math-container">$d \colon M \times M \to \mathbb{R}$</span> and a graph, <span class="math-container">$G=(V,E)$</span>. </p>
<p>What is a rigorous way to define a valid <a href="http://mathworld.wolfram.com/GraphEmbedding.html" rel="nofollow noreferrer">embedding</a> of <span class="math-container">$G$</span> given <span class="math-container">$d$</span> and how to find such an <a href="https://mathoverflow.net/questions/33043/algorithm-for-embedding-a-graph-with-metric-constraints">embedding</a>? </p>
<p>Edit: I found this <a href="https://medium.com/@eliorcohen/node2vec-embeddings-for-graph-data-32a866340fef" rel="nofollow noreferrer">post</a> about embedding graphs.</p>
| 114
|
|
graph theory
|
Can any object be written as a graph?
|
https://cs.stackexchange.com/questions/48888/can-any-object-be-written-as-a-graph
|
<p>Lately graph theory has come into everyday practice with graph databases. So I wonder if any object can be written as a graph? I don't have the formal definition of an object, but say it is a C <code>struct</code> or a Java <code>Object</code>, isn't it so that we can express objects as graphs and that it is a more convenient and advantageous representation than an algebraic relational representation (rdbms)?</p>
| 115
|
|
graph theory
|
Undirected graph with exponential number of simple cycles
|
https://cs.stackexchange.com/questions/101241/undirected-graph-with-exponential-number-of-simple-cycles
|
<p>Hey I am new to graph theory and this question has me stuck for hours.</p>
<p>What is an example of an undirected graph with n nodes where the number of simple cycles is exponential in n?</p>
<p>I was looking at complete graphs, but here's the catch: the total number of edges should be in Theta of n.</p>
<p>Please help! I need to prove the correctness of such a graph but I can't find an example of the graph in the first place. </p>
|
<p><a href="https://en.wikipedia.org/wiki/M%C3%B6bius_ladder" rel="nofollow noreferrer">The Möbius ladder</a> <span class="math-container">$M_{2n}$</span>, also called the pizza graph, which has <span class="math-container">$2n$</span> vertices and <span class="math-container">$3n$</span> edges, has <span class="math-container">$2^n+n^2-n+1$</span> simple cycles. Here <span class="math-container">$n\ge3$</span>.</p>
<p>The following two views of the Möbius ladder <span class="math-container">$M_{16}$</span> are also taken from <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_ladder" rel="nofollow noreferrer">the same Wikipedia entry</a>. Be careful that the central point in the pizza view is not a vertex of the graph. <a href="https://upload.wikimedia.org/wikipedia/commons/7/71/Moebius-ladder-16-animated.svg" rel="nofollow noreferrer">Here</a> is an animation showing the transformation between the two views.</p>
<p><a href="https://i.sstatic.net/kKdEK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kKdEK.png" alt="Two views of the Möbius ladder M16 from Wikipedia"></a></p>
<hr>
<p>Exercise 1. (easy) Show there are at least <span class="math-container">$2^{n-1}$</span> simple cycles in <span class="math-container">$M_{2n}$</span>.</p>
<p>Exercise 2. Show there are <span class="math-container">$2^n+n^2-n+1$</span> simple cycles in <span class="math-container">$M_{2n}$</span>.</p>
| 116
|
graph theory
|
How to remove cycles from a directed graph
|
https://cs.stackexchange.com/questions/90481/how-to-remove-cycles-from-a-directed-graph
|
<p>I saw <a href="https://stackoverflow.com/questions/6284469/how-to-remove-cycles-in-an-unweighted-directed-graph-such-that-the-number-of-ed">this</a> from SO which led to <a href="https://en.wikipedia.org/wiki/Feedback_arc_set" rel="noreferrer">Feedback Arc Set</a>, which describes the problem nicely:</p>
<blockquote>
<p>In graph theory, a directed graph may contain directed cycles, a one-way loop of edges. In some applications, such cycles are undesirable, and we wish to eliminate them and obtain a directed acyclic graph (DAG).</p>
</blockquote>
<p>I am wondering how this is done. Given a graph such as this:</p>
<pre><code>a -> b
b -> c
c -> d
d -> a
</code></pre>
<p>Or a for loop flattened out such as:</p>
<pre><code>somemethod -> forloop start
forloop start -> forloop next
forloop next -> forloop result
forloop next -> forloop next // i+1
forloop next -> forloop end
forloop end -> forloop result
forloop result -> next method
</code></pre>
<p>Wondering how you can possibly remove the cycles from a graph like that.</p>
|
<p>There is a paper, "Breaking Cycles in Noisy Hierarchies", which discusses leveraging the graph hierarchy to delete cycle edges and thereby reduce a directed graph to a DAG.</p>
<p>The reduced DAG will maintain the graph hierarchy of the original graph as much as possible. </p>
<p>The corresponding code is available on Github: <a href="https://github.com/zhenv5/breaking_cycles_in_noisy_hierarchies" rel="noreferrer">https://github.com/zhenv5/breaking_cycles_in_noisy_hierarchies</a></p>
| 117
|
graph theory
|
Is there an algorithm for getting the boundary of a non-planar graph?
|
https://cs.stackexchange.com/questions/124367/is-there-an-algorithm-for-getting-the-boundary-of-a-non-planar-graph
|
<p>This is my first question here!</p>
<p>If I have a non-planar graph where every vertex connects to 3 other vertices, and where the edges are allowed to intersect, how do I find the boundary of the graph?</p>
<p>For example, in the graph below, the pink line shows the boundary of the graph which needs to be found. The boundary should also not intersect with itself.</p>
<p><a href="https://i.sstatic.net/IYc6t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYc6t.jpg" alt="Graph boundary image" /></a></p>
<p>Is there anything in graph theory which would help with this?</p>
| 118
|
|
graph theory
|
number of edges in a graph
|
https://cs.stackexchange.com/questions/23532/number-of-edges-in-a-graph
|
<p>I got a problem related to graph theory - </p>
<p>Consider an undirected graph $G$ where self-loops are not allowed. The vertex set of $G$ is
$\{(i,j) : 1 \le i, j \le 12\}$. There is an edge between $(a, b)$ and $(c, d)$ if $|a-c| \le 1$ and $|b-d| \le 1$.
The number of edges in this graph is </p>
<p>The answer is given as 506,
but I am calculating it as 600; please see the attachment.</p>
<p>I am unable to get why it is coming as 506 instead of 600.</p>
<p>Thanks<img src="https://i.sstatic.net/IKmgx.png" alt="enter image description here"></p>
|
<p>For a grid in the range of $[n_1,n_2]$, according to the problem statement, the number of edges is:</p>
<p>$$\#edges=\frac{8 \times (n_2-n_1+1)^2- 4\times 5-4\times3\times(n_2-n_1-1)}{2}$$</p>
<p><strong>explanation</strong>:
Suppose every node has a degree of 8; then the sum of the degrees is $8\times(n_2-n_1+1)^2$. For each corner we included 5 extra edges that must be removed (the term $4\times5$), and for every other border node we counted 3 extra edges (the term $4\times3\times(n_2-n_1-1)$).</p>
<p>The solution for your problem is:</p>
<p>$$\frac{8(12-1+1)^2-4(5)-4\times3\times(12-1-1)}{2}=\frac{8(144)-20-12(10)}{2}=\frac{1012}{2}=506$$</p>
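<p>As a sanity check (my addition, not part of the original answer), the count is easy to verify by brute force:</p>

```python
from itertools import combinations

# Enumerate all unordered vertex pairs of the 12 x 12 grid and count those
# with |a-c| <= 1 and |b-d| <= 1 (self-loops are excluded automatically,
# since combinations never pairs a vertex with itself).
vertices = [(i, j) for i in range(1, 13) for j in range(1, 13)]
edges = sum(1 for (a, b), (c, d) in combinations(vertices, 2)
            if abs(a - c) <= 1 and abs(b - d) <= 1)
print(edges)  # 506
```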
| 119
|
graph theory
|
Memory needed for computational graph
|
https://cs.stackexchange.com/questions/67711/memory-needed-for-computational-graph
|
<p>Suppose we have a set of equations like this</p>
<pre><code>p7=f(p1+p6); p6=f(p2+p5); p5=f(p3+p4); p4=f(p3); p3=f(p2); p2=f(p1); p1=f()
</code></pre>
<p>It can be represented by computational graph below</p>
<p><img src="https://i.sstatic.net/LKTOM.png" width="352" height="352"> </p>
<p>If each intermediate value takes 1 unit of memory, you need at least 4 units to compute p7 without any duplicate computation.</p>
<p>Is there an algorithm for estimating memory needed in this setting for a general DAG?</p>
<p>I found a <a href="http://www.autodiff.org/ad04/abstracts/Hascoet.pdf" rel="nofollow noreferrer">paper</a> called "Adjoint Dataflow Analysis" for estimating this for restricted set of graphs, but it feels like this ought to be a problem that is covered more generally in graph theory.</p>
|
<p>Your problem sounds similar to one-shot (black) pebbling. <a href="https://www.jair.org/media/4030/live-4030-7803-jair.pdf" rel="nofollow noreferrer">Wu, Austrin, Pitassi, and Liu</a>, in their paper titled <em>Inapproximability of treewidth, one-shot pebbling, and related layout problems</em> (J. Artificial Intelligence Res. 49 (2014), 569–600), show that it is (probably) hard to compute the optimal cost (which corresponds to your memory).</p>
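<p>For intuition (my own sketch, under the simplifying assumptions stated in the comments), here is how to compute the peak memory of one <em>given</em> evaluation order; choosing the order that minimizes this number is exactly the hard pebbling problem:</p>

```python
def peak_memory(order, deps):
    """Peak number of unit-sized values live at once when evaluating a
    DAG in the given topological order. Assumes a value may be freed
    after its last use, and a newly computed value may reuse a slot
    freed at the same step (an output may overwrite a dead input)."""
    last_use = {}
    for i, v in enumerate(order):
        for u in deps[v]:
            last_use[u] = i
    live, peak = set(), 0
    for i, v in enumerate(order):
        freed = {u for u in deps[v] if last_use[u] == i}
        peak = max(peak, len(live) + (0 if freed else 1))
        live = (live - freed) | {v}
    return peak

# The graph from the question: p1 = f(); p2 = f(p1); ...; p7 = f(p1 + p6)
deps = {'p1': [], 'p2': ['p1'], 'p3': ['p2'], 'p4': ['p3'],
        'p5': ['p3', 'p4'], 'p6': ['p2', 'p5'], 'p7': ['p1', 'p6']}
print(peak_memory(['p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7'], deps))  # 4
```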
| 120
|
graph theory
|
How to solve an arrangement problem at the Archive Nationale of France using graph theory?
|
https://cs.stackexchange.com/questions/48511/how-to-solve-an-arrangement-problem-at-the-archive-nationale-of-france-using-gra
|
<p>Good evening! I'm currently doing an internship at the Archives Nationales de France, and I encountered a situation I wanted to solve using graphs...</p>
<h1>I. The dusty situation</h1>
<p>We want to optimize the arrangement of books of my library according to their height in order to minimize their archive cost. The height and thickness of the books are known. We already arranged the books in ascending order of height <span class="math-container">$H_1,H_2,\dots,H_n$</span> (I don't know if it was the best thing but... that's the way we did it). Knowing each book's thickness, we can determine for each <span class="math-container">$H_i$</span> class the necessary thickness for their arrangement, call it <span class="math-container">$L_i$</span> (for example, the books that are <span class="math-container">$H_i = 23\,\mathrm{cm}$</span> tall might have total thickness <span class="math-container">$L_i = 300\,\mathrm{cm}$</span>).</p>
<p>The library can custom manufacture shelves, indicating the wished length and height (no problem with depth). A shelf of height <span class="math-container">$H_i$</span> and length <span class="math-container">$x_i$</span> costs <span class="math-container">$F_i+C_ix_i$</span>, where <span class="math-container">$F_i$</span> is a fixed cost and and <span class="math-container">$C_i$</span> is the cost of the shelf per length unit.</p>
<p>Note that a shelf of height <span class="math-container">$H_i$</span> can be used to store books of height <span class="math-container">$H_j$</span>
with <span class="math-container">$j\leq i$</span>. We want to minimize the cost.</p>
<p>My tutor suggested I model this problem as a path-finding problem.
The model might involve <span class="math-container">$n+1$</span> vertices indexed from <span class="math-container">$0$</span> to <span class="math-container">$n$</span>. My mentor suggested I work out the existing conditions, each edge's meaning, and how to work out the valuation <span class="math-container">$v(i,j)$</span> associated with the edge <span class="math-container">$(i,j)$</span>. I would also be OK with other solutions as well as insights.</p>
<p>For instance we have for the <em>Convention</em> (a dark period of the French History) such an array:</p>
<p><span class="math-container">\begin{array}{|c|rr}
i & 1 & 2 & 3 & 4\\
\hline
H_i & 12\,\mathrm{cm} & 15\,\mathrm{cm} & 18\,\mathrm{cm} & 23\,\mathrm{cm}\\
L_i & 100\,\mathrm{cm} & 300\,\mathrm{cm} & 200\,\mathrm{cm} & 300\,\mathrm{cm} \\
\hline
F_i & 1000€ & 1200€ & 1100€ & 1600€ \\
C_i & 5€/\mathrm{cm} & 6€/\mathrm{cm} & 7€/\mathrm{cm} & 9€/\mathrm{cm}\\
\end{array}</span></p>
<h1>II. The assumptions of a trainee bookworm</h1>
<p>I think I have to choose among the Dijkstra, Bellman, or Bellman-Kalaba algorithms... I'm trying to find out which one in the following subsections.</p>
<h2>1.Conditions</h2>
<p>We are here with a pathfinding problem between a vertex <span class="math-container">$0$</span> and a vertex <span class="math-container">$n$</span>, where <span class="math-container">$n$</span> must be reachable from <span class="math-container">$0$</span> (that is to say, a path (or a walk) must exist between <span class="math-container">$0$</span> and <span class="math-container">$n$</span>).</p>
<h2>2.What to compute (updated (25/10/2015))</h2>
<p><em>Work still in progress, as I don't yet know which vertices and which edges to model...</em></p>
<h3>My best guess</h3>
<p>I think we get rid of at least one type of shelf every time we find a shortest path from the array, but that's only my assumption... ;)</p>
<p>I think the best way to model how to buy shelves and store our books must look like the following graph, <em>(but, please, feel free to criticize my method! ;))</em></p>
<p><a href="https://i.sstatic.net/HYcQC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HYcQC.png" alt="from 0 graph" /></a></p>
<p>vertices:</p>
<ul>
<li><span class="math-container">$i\in[1,4]$</span> are shelves we can use to store our books.</li>
<li><span class="math-container">$0$</span> is the state where no book is stored. Using this vertex allows me to use each cost formulas (edges).</li>
</ul>
<p>edges: <span class="math-container">$F_i+C_ix_i,i\in[1,4]$</span> are the costs of using a type of shelf.
For instance: <span class="math-container">$F_1+C_1x_1$</span> from 0 is the cost of using only type 1 shelves to store our parchments, manuscripts...</p>
<p>Yet, from here I don't know how to create my shortest path problem.</p>
<p>Indeed, I would not know where would I have stowed all my books.</p>
<p>This leads me to another idea...</p>
<h3>another idea...</h3>
<p><a href="https://i.sstatic.net/Uax0A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uax0A.png" alt="to 0 graph" /></a></p>
<p>Here, I am searching for the shortest path from a given vertex to the 0 state, that is to say, knowing that the highest document is <span class="math-container">$type \ i$</span> tall, I am searching for the cheapest way to arrange my documents.</p>
<p>vertices:</p>
<ul>
<li><span class="math-container">$i\in[1,4]$</span> are shelves we can use to store our books.</li>
<li><span class="math-container">$0$</span> is the state where all books are stored. Using this vertex allows me to use each cost formulas (edges).</li>
</ul>
<p>edges: <span class="math-container">$F_i+C_ix_i,i\in[1,4]$</span> are the costs of using a type of shelf.
For instance: <span class="math-container">$F_1+C_1x_1$</span> from 3 is the cost of using <span class="math-container">$type \ 1$</span> shelves after using <span class="math-container">$type \ 3$</span> shelves to store our parchments, manuscripts...</p>
<p>Yet, I don't know where to put <span class="math-container">$F_4+C_4x_4$</span>.</p>
<h2>3.How to compute</h2>
<p>I think that we have to start with the tallest shelves, since we can then store the smaller books on them as well...</p>
<p>Do</p>
<blockquote>
<p>We take the <span class="math-container">$L_n$</span> cm of books of height <span class="math-container">$H_{i=n}$</span> on a shelf of their height, plus <span class="math-container">$z$</span> cm of books of height <span class="math-container">$H_{i=n-1}$</span>, until it becomes more expensive than taking the <span class="math-container">$H_{i=n-1}$</span> shelf.
Then <span class="math-container">$i=i-1$</span></p>
</blockquote>
<p>While <span class="math-container">$i\neq0$</span></p>
<p>Finally, I don't know how to make <span class="math-container">$x$</span> varying...</p>
<p>That is to say how to choose to put <span class="math-container">$x_i$</span> documents in <span class="math-container">$4$</span> or <span class="math-container">$3$</span> for instance.</p>
|
<p>I see you as asking, "I want to solve this with Dijkstra's algorithm but I can't set up a good graph to run on," therefore I will present you with such a graph.</p>
<h1>A digraph where vertices are sets of shelved books.</h1>
<p>Okay, we have books with heights $H_n,$ $1 \le n \le N$ and widths $W_n,$ with the heights listed in ascending order, and we want to group them into shelves.</p>
<p>Reuse these numbers for solution nodes $n,$ where that node represents a solution state "all books $i \le n$ have been shelved." We will therefore start at node $0$ and seek to get to node $N$ by the shortest path with Dijkstra's algorithm. These nodes are the vertices of our graph.</p>
<p>We then draw from node $i$ to any node $j \gt i$ a directed edge which assumes that all of those intermediary books will be shelved with one shelf, i.e. the length of this edge is $$L_{ij} = F_j + C_j~\sum_{n=i+1}^j W_n,$$where I have assumed that when you were saying the cost of the sum was $F_i + C_i x_i$ the subscript $i$ on the $x_i$ was totally meaningless.</p>
<p>Dijkstra's algorithm will then give us a shortest-length path to node $N.$ </p>
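<p>A sketch of this construction in Python (my addition, using the table from the question; the "width sum" for classes <span class="math-container">$i+1..j$</span> is just the sum of the <span class="math-container">$L$</span> values, and since the graph is a DAG on <span class="math-container">$n+1$</span> nodes a plain Dijkstra suffices):</p>

```python
import heapq

# Node i means "all height classes <= i are shelved"; the edge i -> j buys
# one shelf of height H_j holding classes i+1 .. j, at cost
# F_j + C_j * (L_{i+1} + ... + L_j).  Data from the question's table.
H = [12, 15, 18, 23]          # heights (cm), ascending
L = [100, 300, 200, 300]      # required shelf length per height class (cm)
F = [1000, 1200, 1100, 1600]  # fixed cost per shelf
C = [5, 6, 7, 9]              # cost per cm of shelf

N = len(H)
dist = [0] + [float('inf')] * N
pq = [(0, 0)]
while pq:
    d, i = heapq.heappop(pq)
    if d > dist[i]:
        continue
    for j in range(i + 1, N + 1):
        cost = F[j - 1] + C[j - 1] * sum(L[i:j])
        if d + cost < dist[j]:
            dist[j] = d + cost
            heapq.heappush(pq, (d + cost, j))
print(dist[N])  # 9600: type-3 shelving for classes 1-3, type-4 for class 4
```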
| 121
|
graph theory
|
Directed Trees: Finding all the edges and vertices in a specific direction
|
https://cs.stackexchange.com/questions/112388/directed-trees-finding-all-the-edges-and-vertices-in-a-specific-direction
|
<p>I am an electrical engineer without experience in graph theory. However, I have a problem which I believe can be solved by graph theory. We have a directed tree, such as the one below. We want to find all the vertices and edges starting from a vertex in a given direction. For instance, in the figure below, we want to identify the vertices and edges downward from vertex 1 (the red lines). Is there an algorithm to do this (preferably in MATLAB)? The edges have equal weight. </p>
<p><a href="https://i.sstatic.net/FoJQz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FoJQz.jpg" alt="enter image description here"></a></p>
| 122
|
|
graph theory
|
Is this graph Hamiltonian?
|
https://cs.stackexchange.com/questions/141068/is-this-graph-hamiltonian
|
<p>My case is a <em>directed</em> graph with <span class="math-container">$n$</span> nodes with <span class="math-container">$(n-1)^2+1$</span> edges. I have done the following till now.</p>
<p>We know that the maximum number of edges for a directed graph <span class="math-container">$K_n$</span> on <span class="math-container">$n$</span> nodes is <span class="math-container">$n(n-1)$</span> edges. The graph in my problem statement is <span class="math-container">$G(V,E)$</span> with <span class="math-container">$|V| = n$</span> and <span class="math-container">$|E|$</span> = <span class="math-container">$(n-1)^2+1$</span>.</p>
<p>Now, <span class="math-container">$n(n-1) - ((n-1)^2 + 1) = n-2$</span>, so any such graph can be obtained from <span class="math-container">$K_n$</span> by deleting exactly <span class="math-container">$n-2$</span> edges from <span class="math-container">$K_n$</span>.</p>
<blockquote>
<p>Is my approach correct till now? How can I apply induction to prove the graph is Hamiltonian? I'm new to graph theory and inductions. As such, a comprehensive simple explanation would be much appreciated.</p>
<p>If not induction, is there any other way to prove this?</p>
</blockquote>
|
<p>The complete digraph of <span class="math-container">$n$</span> nodes, <span class="math-container">$K_n$</span> has <span class="math-container">$n(n-1)$</span> edges. Describe a digraph of <span class="math-container">$n$</span> nodes with <span class="math-container">$n(n-1)-\delta$</span> edges as a digraph "with <span class="math-container">$\delta$</span> edges removed".</p>
<h3>A proof by induction</h3>
<p>The following is an outline to prove by induction that every digraph of <span class="math-container">$n$</span> nodes with <span class="math-container">$n-2$</span> edges removed contains a Hamiltonian cycle.</p>
<p>The base cases, <span class="math-container">$n=2$</span> and <span class="math-container">$n=3$</span>, are easy to verify.</p>
<p>Suppose <span class="math-container">$n\gt3$</span>. Let <span class="math-container">$G$</span> be such a graph. There are two cases.</p>
<ul>
<li>There is one node with exactly one edge from it or to it removed.<br />
Let that node be <span class="math-container">$u$</span>. By induction hypothesis, there is one Hamiltonian cycle for the induced subgraph of the remaining nodes. Verify that cycle can be modified to pass <span class="math-container">$u$</span> as well, hence becoming a Hamiltonian cycle of <span class="math-container">$G$</span>.</li>
<li>Otherwise, for each node, either no edge from it or to it is removed, or at least two edges from it or to it are removed.<br />
Let <span class="math-container">$v$</span> be a node of the former kind and <span class="math-container">$w$</span> be a node of the latter kind. Let <span class="math-container">$G'$</span> be the induced subgraph of the remaining <span class="math-container">$n-2$</span> nodes. Since <span class="math-container">$2(n-2)\gt n-2$</span> and there are <span class="math-container">$2(n-2)$</span> possible edges between <span class="math-container">$w$</span> and a node in <span class="math-container">$G'$</span>, there must be one edge of <span class="math-container">$G$</span> between <span class="math-container">$w$</span> and some node of <span class="math-container">$G'$</span>. By induction hypothesis, <span class="math-container">$G'$</span> contains a Hamiltonian cycle <span class="math-container">$C$</span>. Verify that <span class="math-container">$C$</span> can be modified to include that edge as well as pass <span class="math-container">$v$</span>, becoming a Hamiltonian cycle of <span class="math-container">$G$</span>.</li>
</ul>
<h3>Explanation of Yuval's neat answer</h3>
<p>Consider <strong>all (directed) Hamiltonian cycles in <span class="math-container">$K_n$</span></strong>. What is the total number of edges in them, with duplicity counted?</p>
<ul>
<li>Let <span class="math-container">$f$</span> be the number of all Hamiltonian cycles. Since each cycle contains <span class="math-container">$n$</span> edges, that total number is <span class="math-container">$nf$</span>.</li>
<li>The number of times an edge appears in those cycles is the same for each edge, thanks to symmetry. Denote it by <span class="math-container">$p$</span>. Since there are <span class="math-container">$n(n-1)$</span> distinct edges, that total number is <span class="math-container">$n(n-1)p$</span>.</li>
</ul>
<p>We have,
<span class="math-container">$$ nf = n(n-1)p,\ \ \text{ i.e., }\ \ f= (n-1)p $$</span></p>
<p>Let us remove edges from <span class="math-container">$K_n$</span> so as to obtain the given graph <span class="math-container">$G$</span>. Since removing an edge affects only Hamiltonian cycles in which that edge appears, removing <span class="math-container">$n-2$</span> edges will affect at most <span class="math-container">$(n-2)p$</span> Hamiltonian cycles. Since <span class="math-container">$f=(n-1)p > (n-2)p$</span>, at least one Hamiltonian cycle will not be affected after removing <span class="math-container">$n-2$</span> edges. That is, there is at least one Hamiltonian cycle in <span class="math-container">$G$</span>. <span class="math-container">$\quad\checkmark$</span></p>
<p>Stating the explanation in terms of probability and expectation, we shall obtain Yuval's answer.</p>
<hr />
<p>The only facts about Hamiltonian cycle used in this proof are that it has <span class="math-container">$n$</span> edges and that the concept is symmetric to each edge. We have, in fact, proved the following remarkable proposition.</p>
<p><em>Given <span class="math-container">$n\ge2$</span>, digraph <span class="math-container">$G$</span> of <span class="math-container">$n$</span> nodes with <span class="math-container">$n-2$</span> edges removed and digraph <span class="math-container">$D$</span> of <span class="math-container">$n$</span> nodes with <span class="math-container">$n$</span> edges, <span class="math-container">$G$</span> must contain a subgraph that is isomorphic to <span class="math-container">$D$</span>.</em></p>
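<p>The claim is also easy to confirm exhaustively for small <span class="math-container">$n$</span>; the following check (my addition) verifies it for <span class="math-container">$n=4$</span>:</p>

```python
from itertools import combinations, permutations

# Check for n = 4: K_4 has 12 arcs, so a digraph with (n-1)^2 + 1 = 10 arcs
# is K_4 minus any n-2 = 2 arcs.  Verify every such digraph has a directed
# Hamiltonian cycle.
n = 4
arcs = [(u, v) for u in range(n) for v in range(n) if u != v]

def hamiltonian(arc_set):
    return any(all((p[i], p[(i + 1) % n]) in arc_set for i in range(n))
               for p in permutations(range(n)))

ok = all(hamiltonian(set(arcs) - set(removed))
         for removed in combinations(arcs, n - 2))
print(ok)  # True
```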
| 123
|
graph theory
|
Finding a Hamiltonian Path through the complete graph on 37 vertices: $K_{37}$
|
https://cs.stackexchange.com/questions/39833/finding-a-hamiltonian-path-through-the-complete-graph-on-37-vertices-k-37
|
<p>I'm planning on making a fiber art $K_{37}$ (like the one I laser etched with help: <a href="http://www.thingiverse.com/thing:88130" rel="nofollow">K37: The complete graph on 37 nodes, svg</a>). To accomplish this, the plan is to construct 37 pegs equally spaced in a annulus made from wood and then to string yarn between them. Since $K_{37}$ is a complete graph, it has a Hamiltonian path. My knowledge of computational graph theory is nil, and I am aware that this is not exactly an easy problem (see <a href="http://en.wikipedia.org/wiki/Hamiltonian_path_problem" rel="nofollow">wikipedia: Hamiltonian Path Problem</a>, for instance)</p>
|
<p>It seems that you're trying to construct the graph $K_{37}$ from string and nails without cutting the string. For this, you don't want a <a href="https://en.wikipedia.org/wiki/Hamiltonian_path" rel="nofollow">Hamiltonian path</a> (a path that visits every vertex exactly once) but an <a href="https://en.wikipedia.org/wiki/Eulerian_path" rel="nofollow">Euler trail</a> (a walk that visits every edge exactly once).</p>
<p>In the case of a complete graph, finding an Euler trail is trivial: start at any vertex and move from vertex to vertex without repeating an edge. As long as the last thing you do is returning to the initial vertex, you've created an Euler trail. (This is essentially <a href="https://en.wikipedia.org/wiki/Eulerian_path#Fleury.27s_algorithm" rel="nofollow">Fleury's algorithm</a>, except that checking you've not disconnected the graph is trivial for a complete graph.)</p>
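<p>If an explicit construction is wanted, Hierholzer's algorithm produces the closed trail directly; a sketch (my addition) specialized to a complete graph on an odd number of pegs, where every vertex has even degree:</p>

```python
def euler_circuit(n):
    """Closed trail through K_n (n odd) using every edge exactly once."""
    adj = {u: set(range(n)) - {u} for u in range(n)}
    stack, circuit = [0], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)   # consume the edge {u, v}
            stack.append(v)
        else:
            circuit.append(stack.pop())
    return circuit

trail = euler_circuit(37)
print(len(trail) - 1)  # 666 = 37*36/2 edges of K_37, each strung exactly once
```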
| 124
|
graph theory
|
Clarification sought for definition of a cut that respects a set A of edges in Graph Theory
|
https://cs.stackexchange.com/questions/14607/clarification-sought-for-definition-of-a-cut-that-respects-a-set-a-of-edges-in-g
|
<p>From CLRS (3rd edition), I came have this question on page 626:</p>
<p>Given these definitions from the text,</p>
<p>DEFINITIONS:
Given an undirected graph G = (V, E),</p>
<ol>
<li>A CUT (S, V - S) of G is a partition of V,</li>
<li>A LIGHT EDGE over a cut is any edge crossing the cut with a weight smaller than or equal to that of any other edge crossing that cut,</li>
<li>A cut RESPECTS a set A of edges if no edge in A crosses the cut.</li>
</ol>
<p>In every example in the text, the set A is coincident with one of the partitions of the cut but I cannot see why this must be. Given an arbitrary new vertex v', it could be added to the partition that contains A and no edge from v' to A would cross the cut. The definition of respects says that a cut cannot divide A, not that A must define the partition.</p>
<p>There is clearly something here I missed since that does not seem consistent with the theorems. Can anyone point out my error?</p>
<p>+++++++++++++++</p>
<p>Amendment.
Thanks for the help. I also now realize that Theorem 23.1 only says that a light edge is a safe edge, not that such a cut is guaranteed to find every safe edge, since some edges may have both vertices in the same partition as A. I see that in GENERIC-MST(G), line 3 only says to find an edge. It is not until the Kruskal and Prim algorithms that the method by which we find this edge is explained, and neither depends upon the partition being coincident with the set of vertices defined by the set of edges A.</p>
<p>I find it interesting that in Kleinberg Tardos that they take a different approach and define a cut property (4.17, pg 145) that avoids this (what I find awkward) development of the ideas. </p>
|
<p>One mistake you're making with the statement:</p>
<blockquote>
<p>In every example in the text, the set A is coincident with one of the partitions of the cut but I cannot see why this must be.</p>
</blockquote>
<p>$A$ is a set of <em>edges</em>, while a cut is a partition of <em>vertices</em>: $(S, V - S)$.</p>
<p>Suppose, as you say, that $A$ is a subset of the set $\{(u,v): u,v \in S\}$. Then the cut $(S,V-S)$ respects $A$, because none of the edges with both ends in $S$ cross the cut.</p>
<p>If you add a new vertex $v'$ into $S$, along with some edges from $v'$ to other vertices $x$ in $S$, then $A$ can include ALL of these edges and the cut $(S,V-S)$ still respects $A$, because $v'$ and all of those edges lie entirely within $S$.</p>
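<p>A tiny executable restatement of this point (my sketch):</p>

```python
def respects(S, A):
    """The cut (S, V - S) respects edge set A iff no edge of A crosses it,
    i.e. every edge of A has both endpoints on the same side."""
    return all((u in S) == (v in S) for u, v in A)

S = {1, 2, 3}
A = [(1, 2), (2, 3)]            # edges entirely inside S
print(respects(S, A))           # True
print(respects(S | {4}, A))     # True: growing S cannot make A cross the cut
print(respects(S, [(3, 4)]))    # False: (3, 4) crosses the cut
```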
| 125
|
graph theory
|
In the dataflow programming paradigm programs are modeled as directed graphs. Are the edges of the graph variables? And are the vertexes functions?
|
https://cs.stackexchange.com/questions/128583/in-the-dataflow-programming-paradigm-programs-are-modeled-as-directed-graphs-ar
|
<p>As I understand it in dataflow programming, programs are structured as directed graphs, an example of which is below
<a href="https://i.sstatic.net/qII4K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qII4K.png" alt="enter image description here" /></a></p>
<p>Is it true to say that the arrows (or edges) represent the variables within a program and the vertices (blue circles) represent programmatic functions? Or is this too much of a simplification?</p>
<p>I am interested in understanding how dataflow languages actually apply graph theory.</p>
|
<p>To be precise, the nodes represent the <em><strong>blocks</strong></em> (each consisting of a set of statements) and the edges represent <em><strong>possible data flow</strong></em> (the execution path).</p>
<p>Now basically the data flow analysis gives the information regarding the definition and use of data in the program (<em><strong>Available Expression, Reaching Definition, Live variable</strong></em>). This information is needed for code optimization.</p>
<p>And moving on to the graph theory applications in the compiler design, I would say <em><strong>Register allocation</strong></em> is the best example (Btw this can be done after gathering information from the data flow analysis).</p>
<p>This is a vast area, and not easy to explain everything clearly. So I would highly recommend reading a good book on compilers to get deeper insights.</p>
| 126
|
graph theory
|
Is There a Term for "Factoring" a Graph by an Equivalence Relation on Nodes?
|
https://cs.stackexchange.com/questions/152000/is-there-a-term-for-factoring-a-graph-by-an-equivalence-relation-on-nodes
|
<p>I have a coding problem I'm running into that feels like it's solved:</p>
<p>Given a (directed) graph, and an equivalence relation on nodes, merge the equivalent nodes in a way that preserves the graph structure to create a "factor" graph, in the sense that every edge connected to one of the equivalent nodes is connected to the new node (in the same orientation, in the directed case).</p>
<p>What, if anything, is this problem called?</p>
<p>I am graph-theory and computer-science illiterate, so excuse me if this is a painfully ubiquitous transformation, but I am having trouble finding anything, likely because I'm stuck thinking about the problem in these mathematical terms.</p>
| 127
|
|
graph theory
|
Find closed loops in an undirected graph given an adjacency list
|
https://cs.stackexchange.com/questions/56076/find-closed-loops-in-an-undirected-graph-given-an-adjacency-list
|
<p>I am trying to find all the cycles in an undirected graph, given the adjacency list of the vertices, with an output of all the cycles in the form of the vertices they are made up of.</p>
<p>For example: <a href="https://i.sstatic.net/Ku4fd.jpg" rel="nofollow noreferrer">https://i.sstatic.net/Ku4fd.jpg</a></p>
<p>The example depicts the graph drawn from the adjacency list, along with the desired cycle output.</p>
<pre><code>1: 2 3 5
2: 1 3 4
3: 1 2 4 5
4: 2 3 5
5: 1 3 4
</code></pre>
<p>The output from the algorithm would be</p>
<pre><code>Cycles:
1 2 3
1 3 5
2 3 4
3 4 5
</code></pre>
<p>Note that I am not trying to find ALL possible cycles in the graph but rather all the loops. We assume for this problem that any vertex is connected to at least 2 others.</p>
<p>I am new to discrete maths, algorithms and graph theory, any help would be greatly appreciated.</p>
| 128
|
|
graph theory
|
Given all maximal independent sets of a graph, find the maximum indepdent set
|
https://cs.stackexchange.com/questions/93405/given-all-maximal-independent-sets-of-a-graph-find-the-maximum-indepdent-set
|
<p>I am new to the independent set problem in graph theory. As per my understanding so far, an independent set is a set of vertices in which no two vertices are adjacent. And a maximal independent set is a set of vertices to which, if any vertex is added, an edge will be created. </p>
<p>I understand this. Now my question: given all the maximal independent sets of a graph, the maximum independent set of that graph will be the maximal independent set with maximum cardinality, right?</p>
<p>Note: As per my understanding there can be more than one maximum independent set but their cardinality will be same.</p>
|
<p>If an independent set is <em>maximal</em>, it means you cannot add any more vertices into the set. As you correctly suggest, this does <em>not</em> mean that the set is necessarily a maximum independent set in the graph. </p>
<p>You are also correct in saying that every maximum independent set is maximal, and that a graph can have several maximum independent sets.</p>
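<p>A tiny brute-force illustration of the distinction (my addition), on the path graph <span class="math-container">$1-2-3$</span>: the set <span class="math-container">$\{2\}$</span> is maximal but not maximum.</p>

```python
from itertools import combinations

vertices = {1, 2, 3}
edges = {frozenset((1, 2)), frozenset((2, 3))}   # the path 1 - 2 - 3

def independent(s):
    return all(frozenset(p) not in edges for p in combinations(s, 2))

def maximal(s):
    # Independent, and no remaining vertex can be added.
    return independent(s) and all(not independent(s | {v})
                                  for v in vertices - s)

print(maximal({2}), maximal({1, 3}))   # True True: both are maximal
maximum = max((set(s) for r in range(len(vertices) + 1)
               for s in combinations(sorted(vertices), r) if independent(s)),
              key=len)
print(maximum)                         # {1, 3}: only this one is maximum
```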
| 129
|
graph theory
|
n-Cube as a Cayley Graph
|
https://cs.stackexchange.com/questions/23724/n-cube-as-a-cayley-graph
|
<p>I'm taking a class on graph theory that uses "Graph Theory (Graduate Texts in Mathematics)" by Bondy and Murty. One of the questions is about Cayley graphs and the n-cube, and I don't understand how to interpret it. It runs as follows:</p>
<blockquote>
<p>Let $\Gamma$ be a group and $S$ be a subset of $\Gamma$ not including the
identity element. Suppose that the inverse of every element in $S$ also
belongs to $S$. The Cayley graph of $\Gamma$ with respect to $S$ is the graph
$CG(\Gamma, S)$ with vertex set $\Gamma$ in which two vertices $x$ and $y$ are
adjacent iff $xy^{-1}\in S$.</p>
</blockquote>
<p>Okay. I follow so far.</p>
<blockquote>
<p>Recall that the n-cube is the graph whose vertex set is the set of all
n-tuples of 0s and 1s, where two n-tuples are adjacent if they differ
in precisely one coordinate.</p>
</blockquote>
<p>Makes sense.</p>
<blockquote>
<p>Show that the n-cube is a Cayley graph.</p>
</blockquote>
<p>What does it mean to talk about "$xy$" when $x$ and $y$ are n-tuples? What is the inverse of an n-tuple?</p>
<p>Someone I asked about the problem suggested that I treat $\Gamma$ here as the additive group $({\mathbb Z}/2{\mathbb Z})^n$, and so take $xy^{-1}$ to mean elementwise subtraction of $y$ from $x$, mod 2. But then it seems like $(0, 0, ..., 0)\in\Gamma$ and, since it's the identity element, every vertex will have an edge connecting to it, and that isn't what the n-cube looks like. Googling, I also see that there is an <a href="http://en.wikipedia.org/wiki/Tuple#Tuples_as_nested_sets" rel="nofollow">interpretation of tuples as nested sets</a>, but then I don't see how the product of two nested sets would ever be in S, since it will have a different cardinality from either of the original tuples. Interpreting the tuples as vectors can't work either since then $xy^{-1}$ will have different dimensions than either of the original tuples.</p>
<p>What is this question asking?</p>
|
<p>The question is asking you to find a suitable group $\Gamma$ and a subset $S$, so that $CG(\Gamma,S)$ is the n-cube.</p>
<p>Since by definition of $CG$, the nodes are the elements of $\Gamma$, you don't have a choice here, leaving the group operation and $S$.</p>
<p>$xy$ and the inverse of $y$ are then understood with respect to the group operation you choose.</p>
<p>Also be aware that the identity element being in $\Gamma$ does not imply that all other vertices are connected with it, since the edges are determined by $S$, not $\Gamma$.</p>
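<p>To make this concrete (my addition, following the question's suggestion of the additive group <span class="math-container">$(\mathbb{Z}/2\mathbb{Z})^n$</span> with <span class="math-container">$S$</span> the set of unit vectors, so the identity is <em>not</em> in <span class="math-container">$S$</span>), a quick check for <span class="math-container">$n=3$</span>:</p>

```python
from itertools import product

# In (Z/2Z)^n every element is its own inverse, so x y^{-1} is just the
# coordinatewise XOR.  With S = {unit vectors}, adjacency in CG(Gamma, S)
# means the tuples differ in exactly one coordinate -- the n-cube.
n = 3
S = {tuple(1 if i == k else 0 for i in range(n)) for k in range(n)}
is_cube = all(
    (tuple((a + b) % 2 for a, b in zip(x, y)) in S)
    == (sum(a != b for a, b in zip(x, y)) == 1)
    for x in product((0, 1), repeat=n)
    for y in product((0, 1), repeat=n)
)
print(is_cube)  # True
```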
| 130
|
graph theory
|
Reference request on computer networks
|
https://cs.stackexchange.com/questions/142304/reference-request-on-computer-networks
|
<p>I read about the correctness of iterative and recursive algorithms. I need a sequence of book recommendations for learning the mathematical aspects of computer networks and data communications, such as the applications of graph theory and number theory in routing and in security and error control, respectively. Please provide layer-wise recommendations. I am asking this question as I was unable to find a consolidated list.</p>
<p>Number theory is used in parity codes for error control. Graph theory is used in optimal routing.</p>
| 131
|
|
graph theory
|
What is the difference between Partition and Division?
|
https://cs.stackexchange.com/questions/139384/what-is-the-difference-between-partition-and-division
|
<p>While reading graph theory, I came across different definitions that use partitions and divisions, and I was wondering: are these terms the same or different?</p>
<p>Can anyone explain their difference in set theory?</p>
<p>I know this is a simple question, but it is hardly ever discussed, and mistakes are often made here.</p>
|
<p><strong>I will explain this with an example:</strong></p>
<p><em>Consider a set <span class="math-container">$S$</span> together with a collection of <span class="math-container">$k$</span> subsets of it:</em>
<span class="math-container">$$ \{ S_1, S_2, S_3, S_4, \ldots, S_k \}$$</span></p>
<h3>In Partition of Set S :</h3>
<p>Then we can form the set again by</p>
<p><span class="math-container">$ S=S_1\cup S_2\cup S_3\cup S_4...........\cup S_k$</span></p>
<p><span class="math-container">$where\ S_i\cap S_j = \phi$</span></p>
<p><span class="math-container">$and\ i\neq j\ and\ i,j\in \mathcal{N} $</span></p>
<h3>In Division of Set S :</h3>
<p>We just have this</p>
<p><span class="math-container">$ S=S_1\cup S_2\cup S_3\cup S_4...........\cup S_k$</span></p>
| 132
|
graph theory
|
Space complexity of directed and undirected graph
|
https://cs.stackexchange.com/questions/49062/space-complexity-of-directed-and-undirected-graph
|
<p>I have started reading graph theory from <a href="https://mitpress.mit.edu/books/introduction-algorithms" rel="nofollow">Introduction to Algorithms</a>. The author starts by saying that if the graph is dense then:</p>
<p>$$|E|\text{ close to }|V|^2$$ else if the graph is sparse then:</p>
<p>$$|E|\text{ is much less than }|V|^2$$</p>
<p>According to me, the above two relations make sense in respect of a complete graph, where the total number of edges is $\frac{|V|(|V|-1)}{2}$; hence, $E = O(V^2)$.</p>
<p>Hence, we choose the adjacency list representation, where the total length of all the lists is $2|E|$ for an undirected graph and $|E|$ for a directed graph. After that he simply concludes that the space requirement for the adjacency list representation is $\Theta(V+E)$. I am really stuck on how he came to this conclusion without any explanation. According to me it should be $O(V*(V-1))$, because for each vertex the maximum possible number of edges is $V-1$. What am I doing wrong?</p>
|
<p>It's hard to know for sure from what you've written, but I suspect what you're doing wrong is that you're considering what the space requirement would be for a complete graph. For a complete graph, the space requirement for the adjacency list representation is indeed $\Theta(V^2)$ -- this is consistent with what is written in the book, as for a complete graph, we have $E=V(V-1)/2 =\Theta(V^2)$, so $\Theta(V+E)=\Theta(V^2)$.</p>
<p>However, you shouldn't limit yourself to just complete graphs. There are other graphs that aren't complete, and have fewer edges than the complete graph. In general, the space for the adjacency list representation is $\Theta(V+E)$; this fact holds for all graphs, regardless of how many edges they have.</p>
<p>Why is this true? It's because for each vertex you have a pointer to the head of a linked list. That's $\Theta(V)$ space for all of those pointers. Also, the total number of nodes across all the linked lists is equal to $2E$, as each edge $(u,v)$ appears twice: once in the list for $u$, and once in the list for $v$. Therefore the space needed for all of the linked lists is $\Theta(E)$. Adding the space for the pointers and the space for the lists gives $\Theta(V+E)$.</p>
<hr>
<p>You <em>could</em> say that the space requirement for the adjacency list is $O(V^2)$. It's true that it will never be more than that. However, for some graphs it will be much less. For instance, in a graph that is a simple path ($v_1 \to v_2 \to \dots \to v_n$), there are $V$ vertices and $E=V-1$ edges in total. For that graph, the adjacency list representation will need $\Theta(V+E)=\Theta(V)$ space. Thus the $O(V^2)$ upper bound is very loose and too pessimistic.</p>
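The counting in the answer can be made concrete with a small sketch (my own illustration; the helper name is made up) that builds an adjacency list for the path graph and tallies its storage cells:

```python
def adjacency_list(n, edges):
    """Undirected adjacency list: one list head per vertex,
    one list node per edge endpoint (2|E| nodes in total)."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

# Path v0 - v1 - v2 - v3 - v4: |V| = 5, |E| = 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
adj = adjacency_list(5, edges)
list_nodes = sum(len(neighbors) for neighbors in adj.values())
assert list_nodes == 2 * len(edges)   # each edge appears in two lists
# Storage: |V| head pointers + 2|E| list nodes = Theta(V + E),
# far below the O(V^2) worst case for this sparse graph.
```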
| 133
|
graph theory
|
Does an algorithm exist that transforms any connected graph, cyclic or not, into tree form?
|
https://cs.stackexchange.com/questions/126500/does-an-algorithm-exist-that-transforms-any-connected-graph-cyclic-or-not-into
|
<p>I developed an algorithm that transforms any simple connected graph, cyclic or not, into a tree.
The resulting tree is syntax-preserving, in the sense that it allows reconstructing the original input graph and only the original input graph. In other words, the constructed tree preserves adjacency information while resolving cycles.
Moreover, I assume that, since the constructed tree allows reconstructing only the original input graph, no structurally different graph can exist that would result in an identical tree form. </p>
<p>I did some research on algorithms that transform graphs into tree form, but I was unable to find another algorithm that works for any simple connected graph as described above.
However, I am pretty sure something like this must exist. </p>
<p>So maybe one more experienced in graph theory can help me out.
It would be highly appreciated. </p>
<p>Alex</p>
|
<p>In a general graph, the number of edges can go up to <span class="math-container">$\mathcal{O}(n^2)$</span> (where <span class="math-container">$n$</span> is the number of vertices). But in a tree, that number is always <span class="math-container">$n-1$</span>.</p>
<p>So we see that it is impossible to convert a graph to a tree while preserving syntax without increasing the memory for each node by a linear factor, or increasing the number of nodes significantly.</p>
<p>Let us define a unique number for every node (in any way you like).</p>
<p>The algorithm now will be:</p>
<ol>
<li>Let <span class="math-container">$\mathcal{T}$</span> be the tree from the BFS algorithm on the graph (starting from node 1, for uniqueness)</li>
<li>For every edge <span class="math-container">$e=(i,j)$</span> in the graph, that is not in <span class="math-container">$\mathcal{T}$</span> already, w.l.o.g assume <span class="math-container">$i\le j$</span>, and add a new node with value <span class="math-container">$j$</span> to <span class="math-container">$\mathcal{T}$</span>, and a connection between it and the (original) node <span class="math-container">$i$</span> in the tree.</li>
</ol>
<p>Note that in the new tree, there will be a lot of nodes that have the same number.</p>
<p>To reconstruct the original graph, we will do the following:</p>
<ol>
<li>Create a node for every unique number in <span class="math-container">$\mathcal{T}$</span>.</li>
<li>Add an edge <span class="math-container">$(i,j)$</span> if there is a node with number <span class="math-container">$i$</span> that has an edge to some node with value <span class="math-container">$j$</span></li>
</ol>
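The construction and its inverse can be sketched as follows — an illustrative implementation under the labelling convention above (nodes numbered from 1, BFS from node 1); the function names are my own:

```python
from collections import deque

def graph_to_tree(n, edges):
    """BFS tree from node 1, plus one extra leaf per non-tree edge (i, j):
    the leaf carries label j and hangs off the original node i."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    parent = {1: None}
    queue = deque([1])
    tree_labels = {1: 1}          # tree node id -> original vertex label
    tree_edges = []
    while queue:
        u = queue.popleft()
        for w in sorted(adj[u]):  # sorted for a deterministic tree
            if w not in parent:
                parent[w] = u
                tree_labels[w] = w
                tree_edges.append((u, w))
                queue.append(w)
    in_tree = {frozenset(e) for e in tree_edges}
    next_id = n + 1
    for u, v in edges:
        if frozenset((u, v)) not in in_tree:
            i, j = min(u, v), max(u, v)
            tree_labels[next_id] = j          # duplicate leaf labelled j
            tree_edges.append((i, next_id))   # attached to original node i
            next_id += 1
    return tree_labels, tree_edges

def reconstruct(tree_labels, tree_edges):
    """Recover the original edge set by reading labels off the tree."""
    return {frozenset((tree_labels[a], tree_labels[b])) for a, b in tree_edges}

# Triangle 1-2-3: one cycle, so exactly one duplicate leaf appears.
labels, tedges = graph_to_tree(3, [(1, 2), (2, 3), (1, 3)])
assert len(tedges) == 3 and len(labels) == 4
assert reconstruct(labels, tedges) == {frozenset(e) for e in [(1, 2), (2, 3), (1, 3)]}
```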
| 134
|
graph theory
|
Doubt in vertex connectivity less than edge connectivity
|
https://cs.stackexchange.com/questions/109400/doubt-in-vertex-connectivity-less-than-edge-connectivity
|
<p>I recently started graph theory. I understand why edge connectivity is at most the minimum degree (remove all edges incident to a minimum-degree vertex). My doubt is with the second part of the proof, when the given graph is not complete: how do we prove there that the vertex connectivity is at most the edge connectivity?
I am confused here. Please clarify my doubt.</p>
|
<p>I'm not sure I understand your question, so this is the specific question I'm answering:</p>
<blockquote>
<p>Why is the vertex connectivity of a graph always less than or equal to its edge connectivity?</p>
</blockquote>
<p>If that's wrong, please let me know in the comments or edit the question.</p>
<hr>
<p>The vertex connectivity of a graph is defined as <em>the smallest number of vertices you can delete to make the graph no longer connected</em>. The edge connectivity is the same, except substitute "edge" for "vertex".</p>
<p>So, let's take a graph <span class="math-container">$G$</span>, and say its edge connectivity is <span class="math-container">$e_c$</span>. This means, by definition, there's some set of edges <span class="math-container">$E_c$</span>, such that deleting all those edges will make <span class="math-container">$G$</span> no longer connected, and <span class="math-container">$|E_c| = e_c$</span> (there are <span class="math-container">$e_c$</span> different edges in the set).</p>
<p>Let's assume that the vertex connectivity is greater than <span class="math-container">$e_c$</span>. I'm going to show that this leads to a contradiction.</p>
<p>Let's go through <span class="math-container">$E_c$</span> and take one arbitrary endpoint from each edge. (At random, or always take the tail, etc, doesn't matter.) Call this new set of endpoints <span class="math-container">$V_{Ec}$</span>. There's one for each edge, so there are <span class="math-container">$e_c$</span> total.</p>
<p>Deleting a vertex includes deleting every edge that touches it, so deleting every vertex in <span class="math-container">$V_{Ec}$</span> must delete every edge in <span class="math-container">$E_c$</span>. Thus, deleting every vertex in <span class="math-container">$V_{Ec}$</span> makes the graph disconnected.</p>
<p>But we only deleted <span class="math-container">$e_c$</span> different vertices! If the vertex connectivity is any larger than <span class="math-container">$e_c$</span>, we have a contradiction—since the vertex connectivity is the <em>smallest</em> number of vertices you can delete to disconnect the graph. And we just showed you can do it with fewer.</p>
<p>Therefore, the vertex connectivity can never be larger than the edge connectivity.</p>
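For small graphs, both connectivities can be computed by brute force, which makes the inequality easy to check — an illustrative sketch of my own (exponential time, for demonstration only):

```python
from itertools import combinations

def connected(vertices, edges):
    """DFS connectivity check, restricted to the given vertex set."""
    vertices = set(vertices)
    if len(vertices) <= 1:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == vertices

def vertex_connectivity(V, E):
    for k in range(len(V)):
        for cut in combinations(V, k):
            rest = set(V) - set(cut)
            if len(rest) >= 2 and not connected(rest, E):
                return k
    return len(V) - 1   # complete-graph convention

def edge_connectivity(V, E):
    for k in range(len(E) + 1):
        for cut in combinations(E, k):
            if not connected(V, [e for e in E if e not in cut]):
                return k
    return len(E)

# Bowtie: two triangles sharing vertex 2 -> a cut vertex, but no bridge.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
assert vertex_connectivity(V, E) == 1
assert edge_connectivity(V, E) == 2
assert vertex_connectivity(V, E) <= edge_connectivity(V, E)
```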
| 135
|
graph theory
|
Intuition behind eigenvalues of an adjacency matrix
|
https://cs.stackexchange.com/questions/109963/intuition-behind-eigenvalues-of-an-adjacency-matrix
|
<p>I am currently working to understand the use of the <a href="https://en.wikipedia.org/wiki/Cheeger_bound" rel="noreferrer">Cheeger bound</a> and of Cheeger's inequality, and their use for spectral partitioning, conductance, expansion, etc, but I still struggle to have a start of an intuition regarding the second eigenvalue of the adjacency matrix.<br>
Usually, in graph theory, most of the concepts we come across of are quite simple to intuit, but in this case, I can't even come up with what kind of graphs would have a second eigenvalue being very low, or very high.<br>
I've been reading similar questions asked here and there on the SE network, but they usually refer to eigenvalues in different fields (<a href="https://math.stackexchange.com/questions/243533/how-to-intuitively-understand-eigenvalue-and-eigenvector">multivariate analysis</a>, <a href="https://mathoverflow.net/questions/201027/how-can-we-interpret-the-eigenvalues-and-eigenvectors-of-euclidean-distance-matr">Euclidian distance matrices</a>, <a href="https://stats.stackexchange.com/questions/2892/intuition-interpretation-of-a-distribution-of-eigenvalues-of-a-correlation-mat">correlation matrices</a> ...).<br>
But nothing about spectral partitioning and graph theory.</p>
<p>Can someone try and share his intuition/experience of this second eigenvalue in the case of graphs and adjacency matrices?</p>
|
<p>The second (in magnitude) eigenvalue controls the rate of convergence of the random walk on the graph. This is explained in many lecture notes, for example <a href="https://web.archive.org/web/20170829012125/https://people.eecs.berkeley.edu/%7Eluca/cs278-08/lecture11.pdf" rel="nofollow noreferrer">lecture notes of Luca Trevisan</a>. Roughly speaking, the L2 distance to uniformity after <span class="math-container">$t$</span> steps can be bounded by <span class="math-container">$\lambda_2^t$</span>.</p>
<p>Another place where the second eigenvalue shows up is the <a href="https://en.wikipedia.org/wiki/Planted_clique" rel="nofollow noreferrer">planted clique problem</a>. The starting point is the observation that a random <span class="math-container">$G(n,1/2)$</span> graph contains a clique of size <span class="math-container">$2\log_2 n$</span>, but the greedy algorithm only finds a clique of size <span class="math-container">$\log_2 n$</span>, and no better efficient algorithm is known. (The greedy algorithm just picks a random node, throws away all non-neighbors, and repeats.)</p>
<p>This suggests <em>planting</em> a large clique on top of <span class="math-container">$G(n,1/2)$</span>. The question is: how big should the clique be, so that we can find it efficiently. If we plant a clique of size <span class="math-container">$C\sqrt{n\log n}$</span>, then we could identify the vertices of the clique just by their degree; but this method only works for cliques of size <span class="math-container">$\Omega(\sqrt{n\log n})$</span>. We can improve this using spectral techniques: if we plant a clique of size <span class="math-container">$C\sqrt{n}$</span>, then the <em>second eigenvector</em> encodes the clique, as <a href="https://people.math.ethz.ch/%7Esudakovb/hidden-clique.pdf" rel="nofollow noreferrer">Alon, Krivelevich and Sudakov</a> showed in a classic paper, <em>Finding a Large Hidden Clique in a
Random Graph</em>.</p>
<p>More generally, the first few eigenvectors are useful for partitioning the graph into a small number of clusters. See for example Chapter 3 of <a href="https://web.archive.org/web/20160921172643/http://people.eecs.berkeley.edu/%7Eluca/books/expanders.pdf" rel="nofollow noreferrer">lecture notes of Luca Trevisan</a>, which describes higher-order Cheeger inequalities.</p>
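The mixing-rate claim can be checked numerically for a tiny example — a sketch of my own, using the lazy walk P = (I + D⁻¹A)/2 to avoid the periodicity of bipartite graphs:

```python
import numpy as np

# Lazy random walk on the 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = 0.5 * (np.eye(4) + A / A.sum(axis=1, keepdims=True))

# The second-largest eigenvalue magnitude controls the mixing rate.
magnitudes = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = magnitudes[1]
assert abs(lam2 - 0.5) < 1e-9   # lazy C4 walk: eigenvalues 1, 1/2, 1/2, 0

# L2 distance to the uniform stationary distribution decays like lam2**t.
p = np.array([1.0, 0.0, 0.0, 0.0])   # start at vertex 0
u = np.full(4, 0.25)
d0 = np.linalg.norm(p - u)
for _ in range(10):
    p = p @ P
assert np.linalg.norm(p - u) <= lam2**10 * d0 + 1e-12
```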
| 136
|
graph theory
|
Directed Acyclic Graph partition into minimum subgraphs with a constraint
|
https://cs.stackexchange.com/questions/112221/directed-acyclic-graph-partition-into-minimum-subgraphs-with-a-constraint
|
<p><a href="https://i.sstatic.net/20JSo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/20JSo.jpg" alt="enter image description here"></a></p>
<p>I have this problem, not sure there is a name for it, wherein a Directed Acyclic Graph has different colored nodes. The idea is to partition it into minimum number of subgraphs with the following 2 constraints:-</p>
<ol>
<li>A sub-graph should have nodes of similar color</li>
<li>A sub-graph cannot directly or indirectly depend on its own output</li>
</ol>
<p>Example: in the attached picture, the sub-graph with yellow nodes is invalid, since the input from the red node breaks rule #2: the 4th node from the top has an input that depends on the output of the 2nd node, via the red node (outside the sub-graph).
Hence the algorithm should partition after #2 or #3 so that 2 and 4 are in different sub-graphs.</p>
<p>Am sure this is a pretty common problem in Graph theory and must have a name and a standard algorithm for it. Thanks in advance for any pointers to it!</p>
| 137
|
|
graph theory
|
What are some applications of computing the permanent of a matrix?
|
https://cs.stackexchange.com/questions/14438/what-are-some-applications-of-computing-the-permanent-of-a-matrix
|
<p>What are some applications that require computing the <a href="http://en.wikipedia.org/wiki/Permanent" rel="nofollow">permanent of a matrix?</a></p>
<p>One application I know of is related to graph theory and matchings: the number of perfect matchings of a bipartite graph is the permanent of its biadjacency matrix.</p>
<p>I am curious to know more applications of matrix permanent.</p>
|
<p>Valiant proved that <a href="http://en.wikipedia.org/wiki/Permanent_is_sharp-P-complete" rel="noreferrer">the permanent is $\# P$-complete</a>, which means that an efficient algorithm for computing the permanent could be used to solve any problem in <a href="http://en.wikipedia.org/wiki/Sharp-P" rel="noreferrer">$\# P$</a>, such as counting the number of satisfying assignments of a CNF formula, the number of Hamiltonian circuits, the number of $k$-colorings, and so on. In particular, it could be used to solve NP-complete problems.</p>
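For small matrices, the permanent can be computed with Ryser's inclusion-exclusion formula and checked against a direct enumeration of perfect matchings — an illustrative sketch:

```python
from itertools import combinations, permutations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^{|S|} * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

def count_perfect_matchings(B):
    """Brute force over bijections of the two sides; equals perm(B)."""
    n = len(B)
    return sum(all(B[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Biadjacency matrix of the 6-cycle as a bipartite graph: 2 perfect matchings.
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
assert permanent(B) == 2 == count_perfect_matchings(B)
assert permanent([[1] * 3] * 3) == 6   # K_{3,3} has 3! perfect matchings
```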
| 138
|
graph theory
|
Is there a "well known" example of a constraint satisfaction problem on a 3-element set which is polynomial-time solvable?
|
https://cs.stackexchange.com/questions/98307/is-there-a-well-known-example-of-a-constraint-satisfaction-problem-on-a-3-elem
|
<p>I'm basically looking for an example (in maybe graph theory) of a constraint satisfaction problem which has a 3-element set as a domain and the problem is known to be polynomial-time solvable.</p>
|
<p>If you want a graph, it needs to be bipartite. Hence the path of two edges or any subgraph thereof. (Here I am following the convention that graphs have no loops. As David Richerby points out in a comment, graphs with loops also have polynomial-time CSPs (by virtue of triviality).)</p>
<p>A more interesting example is linear algebra over the three-element field: The domain is the set of congruence classes <span class="math-container">$\bmod 3$</span> and the constraint relations are given by linear equations.</p>
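Such a system can be solved in polynomial time by Gaussian elimination over GF(3) — a minimal sketch of my own (note that 1 and 2 are each their own inverse mod 3, since 2·2 = 4 ≡ 1):

```python
def solve_mod3(A, b):
    """Gaussian elimination over GF(3). Returns one solution or None."""
    n, m = len(A), len(A[0])
    M = [[x % 3 for x in row] + [bi % 3] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x * M[r][c] % 3 for x in M[r]]   # pivot is self-inverse
        for i in range(n):
            f = M[i][c]
            if i != r and f:
                M[i] = [(x - f * y) % 3 for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][m] for i in range(r, n)):
        return None                 # 0 = nonzero: inconsistent system
    x = [0] * m                     # free variables set to 0
    for row, c in zip(M, pivots):
        x[c] = row[m]
    return x

# x + y = 1, x + 2y = 2 (mod 3)  ->  x = 0, y = 1
assert solve_mod3([[1, 1], [1, 2]], [1, 2]) == [0, 1]
# x + y = 1, 2x + 2y = 1 is inconsistent (doubling the first gives 2).
assert solve_mod3([[1, 1], [2, 2]], [1, 1]) is None
```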
| 139
|
graph theory
|
N-Guest Table, Graph Problem
|
https://cs.stackexchange.com/questions/77178/n-guest-table-graph-problem
|
<p>The Queen of England wants to organize a set of tables for n guests who speak different languages. The tables have to be arranged so that every guest can speak with both the neighbor on their right and the neighbor on their left.</p>
<p>Four possibilities come up:</p>
<ol>
<li>One table for all guests.</li>
<li>One table for all guests, but we alternate between men and women.</li>
<li>Guests are grouped 2 by 2 (n/2 tables).</li>
</ol>
<p>The problem seems to me similar to the Minimum Spanning Tree problem. The graph would have guests as nodes, connected if the two guests speak a common language. But there seems to be more to it than this. I'm going to keep trying to find a classic graph problem that matches this one. Any help is greatly appreciated. I have an exam in graph theory next week. Wish me luck.</p>
| 140
|
|
graph theory
|
Is the optimal order of graph vertices s.t. minimizes edges to later vertices a well-known problem?
|
https://cs.stackexchange.com/questions/68894/is-the-optimal-order-of-graph-vertices-s-t-minimizes-edges-to-later-vertices-a
|
<p>I'm a little unfamiliar with graph theory, and I found an interesting problem in my work that I do not know if its already well-known or can be easily mapped to another one. If I were to express the problem more formally:</p>
<p>Given a directed unweighted graph $\langle V,E \rangle$, find a total order on the vertices $v_1 < v_2 < \dots < v_n$ that minimizes $\vert F \vert$, where $F = \{ \langle v_i,v_j \rangle \ \vert\ v_i < v_j , \langle v_i,v_j \rangle \in E \}$. That is, find a total order on the vertices that minimizes the number of edges that go "forward" in the order.</p>
<p>Lets suppose a simple directed graph:</p>
<p><a href="https://i.sstatic.net/7gN1L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7gN1L.png" alt="Simple directed graph"></a></p>
<p>The list of edges would be $[ a \rightarrow b $, $b \rightarrow c$, $a \rightarrow c ]$.</p>
<p>A bad order would be $b < a < c$, because, for example, $c$ comes after $b$ in the order and we have the edge $b \rightarrow c $. Therefore, $F = \{ \langle b,c \rangle , \langle a,c \rangle \}$, $\vert F\vert = 2$.</p>
<p>A good order would be $c < b < a$, because $b < a$ and $c < b$. This gives $F = \emptyset$ and therefore the optimum: $\vert F\vert = 0$.</p>
<p>I have graphs that are dense, with some strongly connected components, and when trying to solve it using an SMT solver it performs really badly on non-trivial instances. My intuition says that it's like a pigeonhole problem. So, my questions are:</p>
<ul>
<li>Is this a well-known problem in the graph-theory community?</li>
<li>Can be easily mapped to any other problem?</li>
<li>If so, there is any good algorithm to solve it?</li>
<li>Can a good heuristic be computed easily?</li>
</ul>
<p>Many thanks in advance :)</p>
|
<p><a href="https://cs.stackexchange.com/a/68906/1984">Szymon Stankiewicz</a> is right -- this problem is basically Feedback Arc Set, which is unfortunately NP-complete. But I have to mention that a very similar graph property, which goes by the slightly alarming name of <em>agony</em>, can actually be computed in just $O(m^2)$ time, where $m$ is the number of edges. The agony of a graph is essentially a weighted form of the size of the feedback arc set, in which we weight each violated arc by "how much" it is violated. Specifically, if we group the vertices into horizontal levels so that every downward edge has a penalty of 0 and every horizontal or upward edge has a penalty equal to one more than the number of levels crossed, the agony is the minimum possible penalty that could be produced by any such grouping.</p>
<p>See <a href="http://users.ics.aalto.fi/ntatti/papers/tatti14agony.pdf" rel="nofollow noreferrer">http://users.ics.aalto.fi/ntatti/papers/tatti14agony.pdf</a> for the $O(m^2)$ algorithm, due to Nikolaj Tatti, which will construct an agony-minimising hierarchy. (Note that I myself have only read the abstract and enough to confirm that an actual hierarchy will be produced, not just the minimal agony score of one).</p>
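For tiny instances, the minimum number of forward edges can still be found exhaustively, which is handy for sanity-checking heuristics — an illustrative sketch (exponential in general, as expected for a feedback-arc-set-style problem):

```python
from itertools import permutations

def min_forward_edges(vertices, edges):
    """Exhaustive search over all total orders (fine only for tiny graphs)."""
    best = len(edges)
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        best = min(best, sum(pos[u] < pos[v] for u, v in edges))
    return best

# The example from the question: the order c < b < a leaves no forward edge.
assert min_forward_edges("abc", [("a", "b"), ("b", "c"), ("a", "c")]) == 0
# A 2-cycle always has exactly one forward edge, whichever order is chosen.
assert min_forward_edges("ab", [("a", "b"), ("b", "a")]) == 1
```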
| 141
|
graph theory
|
Proving Clique Number of a Regular Graph
|
https://cs.stackexchange.com/questions/90799/proving-clique-number-of-a-regular-graph
|
<p>I am very new to Graph Theory and I am trying to prove the following statement from a problem set for my class: </p>
<p>Prove that if G is a regular graph on n vertices $(n \ge 2)$, then $\omega(G) \in \{1, 2, 3,... \lfloor n / 2 \rfloor, n\}$</p>
<p>I am confused by the part where it places the clique number to be in this set: $\omega(G) \in \{1, 2, 3,... \lfloor n / 2 \rfloor, n\}$. Why can the clique number be only in the first half of this set (or it can be n) and why can't it be anything between $\lfloor n / 2 \rfloor$ and $ n$? </p>
<p>How can I go about proving this claim? Any tips would be appreciated. </p>
|
<p>Let $G$ be a $d$-regular graph on $n$ vertices containing an $a$-clique $A$, and let $B$ denote the other $b:=n-a$ vertices. Suppose that $a>b$.</p>
<p>Let $e$ be the number of edges connecting $A$ to $B$. Every vertex in $A$ has $a-1$ edges going to the other vertices in $A$, and so $d-(a-1)$ edges going to vertices in $B$. Hence $e = a(d-(a-1))$. Similarly, every vertex in $B$ has at most $b-1$ edges going to the other vertices in $B$, and so at least $d-(b-1)$ edges going to vertices in $A$. Hence $e \geq b(d-(b-1))$. It follows that
$$ b(d-(b-1)) \leq a(d-(a-1)). $$
Subtracting the left-hand side from the right-hand side, we get
$$
0 \leq (ad-a^2+a)-(bd-b^2+b) = (ad-bd)-(a^2-b^2)+(a-b) = (a-b)(d-a-b+1).
$$
Since $a > b$, it follows that $d \geq a+b-1 = n-1$. In other words, $G$ is the complete graph.</p>
<p>Summarizing, if a regular graph contains a clique on more than half the vertices then it is the complete graph. Therefore the clique number of a regular graph is either at most $\lfloor n/2 \rfloor$ or $n$.</p>
<hr>
<p>Let us now show that the upper bound $\lfloor n/2 \rfloor$ is tight. That is, for every $n$ there exists a regular graph on $n$ vertices whose clique number is $\lfloor n/2 \rfloor$.</p>
<p>When $n$ is even, there exist regular graphs on $n$ vertices with clique number $n/2$, for example two disjoint copies of $K_{n/2}$.</p>
<p>When $n=4m+1$, there exist regular graphs on $n$ vertices with clique number $2m$: take two disjoint copies of $K_{2m}$, add a new vertex connected to $m$ vertices from each copy of $K_{2m}$, and add a matching between the remaining $m$ vertices of each copy of $K_{2m}$. When $m=1$, this gives the 5-cycle.</p>
<p>When $n=4m+3$, the following construction gives a regular graph on $n$ vertices with clique number $2m+1$. Take two disjoint copies of $K_{2m+1}$, say on vertices $x_1,\ldots,x_{2m+1}$ and $y_1,\ldots,y_{2m+1}$. Connect $x_i$ to $y_i$ for all $i$, and connect $x_i$ to $y_{i+1}$ for $i=1,\ldots,m$. Finally, add an additional vertex connected to $x_{m+1},\ldots,x_{2m+1}$ and to $y_1,y_{m+2},\ldots,y_{2m+1}$.</p>
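The even case is easy to verify by brute force — a small illustrative check of my own, where `clique_number` is a naive exponential search:

```python
from itertools import combinations

def clique_number(n, edges):
    """Largest k such that some k vertices are pairwise adjacent."""
    E = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                return k
    return 0

def is_regular(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return len(set(deg)) == 1

# Two disjoint copies of K_3 on 6 vertices: 2-regular, omega = 3 = floor(6/2).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
assert is_regular(6, edges)
assert clique_number(6, edges) == 3
```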
| 142
|
graph theory
|
determine Eulerian or Hamiltonian
|
https://cs.stackexchange.com/questions/140167/determine-eulerian-or-hamiltonian
|
<p><a href="https://i.sstatic.net/tzRlh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tzRlh.png" alt="enter image description here" /></a></p>
<p>I am a beginner in graph theory and just found this question in a book after completing a few topics, and I was wondering how to approach such questions.
For Eulerian, I can say that the graph has a vertex of odd degree and hence is not Eulerian, but how can I determine whether the graphs are Hamiltonian or not?</p>
|
<p>That's a very good question, and the easy answer is that checking whether a graph is Eulerian is much simpler than checking whether a graph is Hamiltonian. You're diving head-first into the field of complexity theory and the famous question <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="noreferrer">P vs NP</a>. (Further reading: <a href="https://cs.stackexchange.com/questions/9556/what-is-the-definition-of-p-np-np-complete-and-np-hard">What is the definition of P, NP, NP-complete and NP-hard</a>.)</p>
<p>As you said, a connected graph is Eulerian if and only if all of its vertices have even degrees.</p>
<p>For checking if a graph is Hamiltonian, I could give you a "certificate" (or "witness") if it indeed was Hamiltonian. However, there is no anti-certificate, i.e., a certificate showing that the graph is non-Hamiltonian; checking if a graph is <em>not Hamiltonian</em> is a co-NP-complete problem.</p>
<p>In fact, we believe that any certificate of non-Hamiltonicity must be super-polynomially large (this holds unless NP = co-NP).</p>
<hr />
<p>To answer the question:</p>
<ol>
<li>Try every permutation of vertices, and if one of the permutations is a cycle, then the graph is Hamiltonian. If so, you get a certificate.</li>
<li>If no permutation was a cycle, the graph is not Hamiltonian. You cannot convince your friends that the graph is non-Hamiltonian without trying all permutations*.</li>
</ol>
<p>(Ps, using algorithmic techniques (DP) you don't have to try <em>every</em> permutation, but still exponentially many.)</p>
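The brute-force procedure above can be sketched as follows — illustrative only, and exponential time, as discussed:

```python
from itertools import permutations

def is_hamiltonian(n, edges):
    """Try every cyclic order of the vertices (fix vertex 0 to kill rotations)."""
    E = {frozenset(e) for e in edges}
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % n])) in E for i in range(n)):
            return True   # this permutation is a certificate
    return False

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
path   = [(0, 1), (1, 2), (2, 3)]
assert is_hamiltonian(4, square)       # C4 is its own Hamiltonian cycle
assert not is_hamiltonian(4, path)     # a path graph has no Hamiltonian cycle
```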
| 143
|
graph theory
|
Find all combinations of adjacent records matching a graph template
|
https://cs.stackexchange.com/questions/145901/find-all-combinations-of-adjacent-records-matching-a-graph-template
|
<p>I have a graph theory or combinatorics problem that probably has a solution, but I haven't been able to find it. The problem can be simple: in the second figure below, choose one yellow block from each oval such that the blue edges between the blocks look like the edges in the first figure below. Find all possible such combinations.</p>
<p>The background to this problem is that we want to allow a user to define an arbitrary undirected graph to serve as a query template, for example:
<a href="https://i.sstatic.net/PJcxW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PJcxW.png" alt="undirected graph n0-n1-n2-n4-n3-n1" /></a></p>
<p>For every node, the user can define criteria that must be satisfied, and we then query multiple different APIs to get matching records, with each record being associated with one edge of the query template, as illustrated in the figure below:
<a href="https://i.sstatic.net/w4Dzk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w4Dzk.png" alt="records overlaid " /></a></p>
<p>An example of a record would be <code>AC</code>, with <code>A</code> matching the criteria specified for <code>n0</code> and with <code>C</code> matching the criteria for <code>n1</code>. The line between <code>A</code> and <code>C</code> indicates the two of them together form a record.</p>
<p>We want to find every combination (unordered set of records) where every record within each combination meets these two conditions:</p>
<ol>
<li>one record per query template edge, e.g., <code>AC</code> for <code>e0</code>, <code>CF</code> for <code>e1</code>, etc</li>
<li>records that share a query template node (like <code>n1</code>) must also share a corresponding record node (like <code>C</code>)</li>
</ol>
<p>For the figure above, we should get the combinations below:</p>
<pre><code>{ AC, CF, FK, KI, IC }
{ AC, CF, FK, KI, ID }
{ AC, CH, HK, KI, IC }
{ AC, CH, HK, KI, ID }
{ BC, CF, FK, KI, IC }
{ BC, CF, FK, KI, ID }
{ BC, CH, HK, KI, IC }
{ BC, CH, HK, KI, ID }
{ BD, DH, HK, KI, IC }
{ BD, DH, HK, KI, ID }
</code></pre>
<p>How would I properly describe this problem in graph theory or combinatorics terminology? Is there an optimal solution to finding the combinations of records, considering that we're trying to avoid limitations on the query template and the number of records per query template edge could be in the hundreds of thousands?</p>
|
<p>If I understand your problem accurately, it is NP-hard. As such, there is no efficient solution.</p>
<p>I will show a reduction from 3SAT. In particular, suppose we have a 3SAT formula <span class="math-container">$\varphi$</span> with variables <span class="math-container">$x_1,\dots,x_n$</span> and <span class="math-container">$m$</span> clauses. For each <span class="math-container">$i$</span>, introduce an oval with two nodes inside it, one for <span class="math-container">$x_i$</span> and one for <span class="math-container">$\overline{x_i}$</span>. For each clause, introduce two ovals, with the following shape:</p>
<p><a href="https://i.sstatic.net/3M13e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3M13e.png" alt="gadget for reduction" /></a></p>
<p>Draw a blue line between <span class="math-container">$A$</span> and the node for the first literal in the clause; a blue line between <span class="math-container">$B$</span> and the node for the second literal in the clause; and a blue line between <span class="math-container">$C$</span> and the node for the third literal in the clause. That is the record graph.</p>
<p>Finally, draw a query template that requires there to be a line between each of the two ovals of each gadget; and a line between the left oval of each gadget and the corresponding three ovals for the three variables occurring in the clause.</p>
<p>Now, if I understand your matching rules correctly, there is a satisfying assignment to <span class="math-container">$\varphi$</span> iff the query template has a match in the record graph.</p>
| 144
|
graph theory
|
Why does Skiena reserve space for n+1 adjacency lists?
|
https://cs.stackexchange.com/questions/57722/why-does-skiena-reserve-space-for-n1-adjacency-lists
|
<p>I am reading up on graph theory from the book <em>Algorithm Design Manual - Skiena</em>. And he shows a structure of a graph as follows : </p>
<pre><code>#define MAXV 100 /* maximum number of vertices */
#define MAXDEGREE 50 /* maximum outdegree of a vertex */
typedef struct {
int v; /* neighboring vertex */
int weight; /* edge weight */
} edge;
typedef struct {
edge edges[MAXV+1][MAXDEGREE]; /* adjacency info */
int degree[MAXV+1]; /* outdegree of each vertex */
int nvertices; /* number of vertices in the graph */
int nedges; /* number of edges in the graph */
} graph;
</code></pre>
<p>I am confused about why the size of adjacency list or the degree is MAXV+1. Is it to handle edges that begin and end in the same vertex E(v,v) ?</p>
|
<p>Note that the author uses 1 instead of 0 as the starting point of an array $A$. In other words, $A[0]$ is wasted. For example, to initialize a graph:</p>
<pre><code>for (i=1; i<=MAXV; i++) g->degree[i] = 0;
</code></pre>
<hr>
<p>By the way, in the <em>second edition</em> of this book, it is (Section 5.2):</p>
<pre><code>#define MAXV 1000 /* maximum number of vertices*/
typedef struct {
int y; /* adjacency info */
int weight; /* edge weight, if any */
struct edgenode *next; /* next edge in list */
} edgenode;
typedef struct {
edgenode *edges[MAXV+1]; /* adjacency info */
int degree[MAXV+1]; /* outdegree of each vertex */
int nvertices; /* number of vertices in graph */
int nedges; /* number of edges in graph */
bool directed; /* is the graph directed? */
} graph;
</code></pre>
| 145
|
graph theory
|
What is theory behind graphs relations?
|
https://cs.stackexchange.com/questions/28135/what-is-theory-behind-graphs-relations
|
<p>I have been trying to understand, what is the actual meaning of 2 graphs being:</p>
<pre><code>Symmetric
Transitive
Reflexive
A graph being a subgraph of another graph
</code></pre>
<p>And other similar relations: let's say I have two graphs containing thousands of nodes and edges. What do these terms mean with respect to those two graphs?</p>
|
<p>I have never seen the first three properties applied to a graph in the way that you are asking; however, the final property, subgraphs, is straightforward.</p>
<p>By definition: A subgraph of a graph G is a graph whose vertex set is a subset of that of G, and whose adjacency relation is a subset of that of G restricted to this subset.</p>
<p>In plainer terms: if you have graphs $H=(V_2,E_2)$ and $G=(V_1,E_1)$, where $E$ is the edge set and $V$ is the vertex set, then $H$ is a subgraph of $G$ if we can pair up each vertex in $V_2$ with one in $V_1$. With this pairing in mind, if we can then pair up each edge in $E_2$ with its counterpart in $E_1$, then $H$ is a subgraph; if there are edges in $E_2$ that have no counterpart in $E_1$, then it is not.</p>
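For labelled graphs, where the pairing is simply the identity on vertex names, the definition can be checked directly — a minimal sketch:

```python
def is_subgraph(H, G):
    """H = (V_H, E_H) is a subgraph of G = (V_G, E_G) if its vertex set is a
    subset of G's and every one of its edges also appears in G."""
    VH, EH = H
    VG, EG = G
    edges_G = {frozenset(e) for e in EG}
    return set(VH) <= set(VG) and all(frozenset(e) in edges_G for e in EH)

G = ({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)])   # the 4-cycle
assert is_subgraph(({1, 2, 3}, [(1, 2), (2, 3)]), G)
assert not is_subgraph(({1, 3}, [(1, 3)]), G)   # edge (1,3) is not in G
```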
| 146
|
graph theory
|
Decomposition of graph to subgraphs according to parallel edges
|
https://cs.stackexchange.com/questions/117696/decomposition-of-graph-to-subgraphs-according-to-parallel-edges
|
<p>I am supposed to calculate all-pair shortest path lengths of a graph. However, I first need the graph to be decomposed/expanded to a <strong>simple</strong> one based on the presence of parallel edges. </p>
<p>If N parallel edges exist between any two vertices <strong>A</strong> and <strong>B</strong>, I need to create N replicas of both vertices. Each replica of <strong>A</strong> will be connected to <strong>one and only one</strong> replica of <strong>B</strong>, and vice versa. In addition, all replicas of a vertex must be <strong>fully</strong> connected to each other.</p>
<p>As an example:-</p>
<pre>
A === B
</pre>
<p>will become </p>
<pre>
A1 ----- B1
| |
A2 ----- B2
</pre>
<p>Does this formulation match any well-defined graph theory problem? I am trying to come up with an algorithm that can make use of a GPU's speed, since the graphs I am dealing with can become huge, and I am trying to do it by manipulating the adjacency matrix.</p>
| 147
|
|
graph theory
|
How was the four color theorem proved using brute-force search?
|
https://cs.stackexchange.com/questions/118873/how-was-the-four-color-theorem-proved-using-brute-force-search
|
<p>I recently learned some graph theory in Discrete Structures for Computer Science, we learned about the Four Color theorem, I realize there is a mathematical proof for this topic, but how was it initially proved using computation 50 years ago?</p>
| 148
|
|
graph theory
|
Prove that at least as many edges as vertices implies a cycle
|
https://cs.stackexchange.com/questions/66391/prove-that-at-least-as-many-edges-as-vertices-implies-a-cycle
|
<p>I am EXTREMELY confused on where to start with this problem. We recently just started learning about graph theory and I don't know where to begin.</p>
<blockquote>
<p>Prove that in a connected graph G with $p$ vertices, $q$ edges, and at least one cycle, $q \ge p$</p>
</blockquote>
<p>How do I begin with this question? Any help would be greatly appreciated. Thank you so much.</p>
|
<p>If the graph has a cycle on $k$ vertices, those vertices are connected by $k$ edges. Also, the graph is connected, so each of the remaining $p-k$ vertices must be joined to the rest of the graph, which requires at least $p-k$ extra edges (with distinct endpoints). In total the graph has at least $p$ edges, i.e. $q \geq p$.</p>
| 149
|
graph theory
|
Status on Naoki Katoh's "Rectangle Wiring Problem" (minimum length tree to cover a partitioned rectangle)?
|
https://cs.stackexchange.com/questions/119665/status-on-naoki-katohs-rectangle-wiring-problem-minimum-length-tree-to-cover
|
<p>I have found this interesting problem in graph theory and geometry which is allegedly an open problem, but the latest status seems to be from 01/25/02. I can't seem to find any more information about it, not even other papers describing it.</p>
<p><a href="https://i.sstatic.net/jT7Ub.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jT7Ub.png" alt="Problem"></a></p>
|
<p>This problem has been studied since 2002, under the name <em>minimum-length corridor problem</em>. </p>
<p>The problem is known to be NP-complete[1,2], but there is a constant factor polynomial time approximation algorithm[3].</p>
<p>There is more work on this problem, which you can find by searching the citation network of the papers below.</p>
<hr>
<p>[1]: GONZALEZ-GUTIERREZ, Arturo; GONZALEZ, Teofilo F. <em>Complexity of the minimum-length corridor problem</em>. Computational Geometry, 2007, 37.2: 72-103.</p>
<p>[2]: BODLAENDER, Hans L., et al. <em>On the minimum corridor connection problem and other generalized geometric problems</em>. Computational Geometry, 2009, 42.9: 939-951.</p>
<p>[3]: GONZALEZ-GUTIERREZ, Arturo; GONZALEZ, Teofilo F. <em>Approximation Algorithms for the Minimum-Length Corridor and Related Problems</em>. In: CCCG. 2007. p. 253-256.</p>
| 150
|
graph theory
|
Is it necessary to cover all the vertices in an Euler path?
|
https://cs.stackexchange.com/questions/92930/is-it-necessary-to-cover-all-the-verticies-in-eular-path
|
<p>I was going through graph theory and came across the term Euler path; some people prefer Euler trail, as vertices can repeat.</p>
<p>According to the definition from wiki (<a href="https://en.wikipedia.org/wiki/Eulerian_path" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Eulerian_path</a>), Euler path is defined as under</p>
<blockquote>
<p>In graph theory, an Eulerian trail (or Eulerian path) is a trail in a graph which visits every edge exactly once.</p>
</blockquote>
<p>Following are the conditions for Euler path,</p>
<blockquote>
<p>An undirected graph (G) has an Eulerian path if and only if every vertex has even degree except 2 vertices which will have odd degree, and all of its vertices with nonzero degree belong to a single connected component.</p>
</blockquote>
<p>So, as per the definition, I need to cover all edges, with all vertices of nonzero degree lying in a single connected component.</p>
<p>The same information is stated here at geeks for geeks(<a href="https://www.geeksforgeeks.org/eulerian-path-and-circuit/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/eulerian-path-and-circuit/</a>).</p>
<p>So if I only need to consider vertices with nonzero degree, then can the graph (G) be disconnected?</p>
<p>Note: These are the other information I collected from the wiki page</p>
<p>1) A finite connected graph (with all vertices of even degree except 2 or 0 vertices of odd degree) will have an Euler path.
2) But an Euler path can also be present in a disconnected graph, as shown in the following picture:</p>
<p><a href="https://i.sstatic.net/tiywr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tiywr.jpg" alt="enter image description here"></a></p>
<p>3) Doubt: does the following graph have an Euler path?</p>
<p>My answer: no, as not all vertices are in the same connected component.</p>
<p><a href="https://i.sstatic.net/wOGv7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wOGv7.jpg" alt="enter image description here"></a></p>
|
<p>It really comes down to your definition of an Euler trail. The one I'm familiar with is similar to the one on Wikipedia:</p>
<blockquote>
<p>An <em>Euler trail</em> is a trail (path that allows repeats) that uses every edge exactly once.</p>
</blockquote>
<p>With this definition, an Euler trail doesn't have to touch every vertex: if there's a vertex with no edges, the Euler trail doesn't have to go anywhere near it.</p>
<p>However, if there's a vertex with positive degree, and another vertex with positive degree, and they're not (weakly) connected, then there can be no Euler trail. Because if you use the edges on one, there's no way to reach the edges on the other.</p>
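<p>The two conditions together can be sketched in a few lines (a hedged sketch with my own function name; adjacency lists, undirected, each edge listed from both endpoints):</p>

```python
from collections import deque

def has_euler_trail(adj):
    """Check the Euler-trail conditions discussed above: at most two
    vertices of odd degree, and all positive-degree vertices lying in
    a single connected component. Isolated vertices are ignored."""
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    if odd not in (0, 2):
        return False
    active = [v for v in adj if adj[v]]   # vertices with at least one edge
    if not active:
        return True                       # no edges at all: trivially yes
    seen, q = {active[0]}, deque([active[0]])
    while q:                              # BFS over one component
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return all(v in seen for v in active)

# A path 1-2-3 with an isolated vertex 4 still has an Euler trail:
print(has_euler_trail({1: [2], 2: [1, 3], 3: [2], 4: []}))  # True
```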
| 151
|
graph theory
|
Find all the ways to choose $k$ objects from a list of $n$ objects (using a graph?)
|
https://cs.stackexchange.com/questions/138207/find-all-the-ways-to-choose-k-objects-from-a-list-of-n-objects-using-a-grap
|
<p>I was playing around with graph theory and I noticed that a directed integer graph with unique vertices <span class="math-container">$V$</span> and edges <span class="math-container">$E$</span> such that each vertex only points to vertices with a higher value can be used to enumerate all <span class="math-container">$n \choose 2$</span> ways to choose <span class="math-container">$2$</span> values from a total of <span class="math-container">$\left|V\right|$</span> possible values.</p>
<p>For example:</p>
<p><a href="https://i.sstatic.net/XaMdd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XaMdd.png" alt="enter image description here" /></a></p>
<p>Here, <span class="math-container">$V = \left\{ 0, 1, 2, 3, 4, 5, 6 \right\}$</span> and <span class="math-container">$E = \left\{ (0,1),(0,2),\cdots,(5,6) \right\}$</span> and creating the initial graph is of the order <span class="math-container">$O(n^2)$</span>. <span class="math-container">$E$</span> readily contains all possible pairs of numbers chosen from the list.</p>
<p>I was wondering if the same strategy could be used to enumerate any <span class="math-container">$n \choose k$</span> where <span class="math-container">$0 \leq k \leq n$</span>, or if there are more efficient algorithms for this purpose.</p>
|
<p>To generalize your approach to <span class="math-container">$k$</span>-subsets of an <span class="math-container">$n$</span>-set, you would need to build a hypergraph. Ordinary graph edges are relations between <em>pairs</em> of vertices. Hypergraphs allow relations between arbitrary sets of vertices. They are <em>extremely</em> general objects, essentially representing families of sets.</p>
<p><a href="https://stackoverflow.com/q/35867033">This question</a> provides two good answers for how to (reasonably) efficiently enumerate these subsets. The first uses bit magic and is a special case when <span class="math-container">$n$</span> is small enough. The second takes a recursive approach which saves having to compute the first <span class="math-container">$k-1$</span> elements of a <span class="math-container">$k$</span> subset multiple times, but only really helps if enumerating over several consecutive values of <span class="math-container">$k$</span>.</p>
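<p>The recursive approach mentioned above can be sketched as follows (a hand-rolled version of what Python's <code>itertools.combinations</code> already provides):</p>

```python
def k_subsets(items, k):
    """Yield all k-element subsets of `items`.

    Each subset either contains items[0] (recurse for the remaining
    k-1 picks on the tail) or avoids it (recurse on the tail with the
    same k), which is exactly the identity C(n, k) = C(n-1, k-1) + C(n-1, k).
    """
    if k == 0:
        yield []
        return
    if k > len(items):
        return
    head, tail = items[0], items[1:]
    for rest in k_subsets(tail, k - 1):
        yield [head] + rest            # subsets containing items[0]
    yield from k_subsets(tail, k)      # subsets avoiding items[0]

from math import comb
pairs = list(k_subsets(list(range(7)), 2))
print(len(pairs) == comb(7, 2))  # True: 21 pairs, matching the graph example
```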
| 152
|
graph theory
|
About computer science and category theory
|
https://cs.stackexchange.com/questions/23872/about-computer-science-and-category-theory
|
<p>I read that category theory has a lot to do with how programs and information can be organised. Can category theory simplify various programming strategies? If a specific category is represented as a directed graph, is this similar to flow charts used in programming?</p>
| 153
|
|
graph theory
|
Is it possible to have a 2 by 2 rigid framework without having a corresponding connected bipartite graph?
|
https://cs.stackexchange.com/questions/159737/is-it-possible-to-have-a-2-by-2-rigid-framework-without-having-a-corresponding-c
|
<p>According to the theorem(see reference) on the rigidity of frameworks:</p>
<blockquote>
<p>A rectangular framework is rigid if and only if its associated bipartite graph is connected.</p>
</blockquote>
<p>Now consider the case for a 2-by-2 rectangular framework.</p>
<p><a href="https://i.sstatic.net/XPgX3s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XPgX3s.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/FHYqLs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FHYqLs.png" alt="enter image description here" /></a></p>
<p>In this case, if we draw a brace in the first and the third squares, the framework becomes rigid (since, by observation, none of the smaller squares can be skewed without skewing the braced squares).</p>
<p>However, the resulting bipartite graph is not a connected one. Since the theorem establishes a one-to-one correspondence between the associated bipartite graph being connected and the rigidity of the framework, the example that I have come up with here seems to violate the theorem.</p>
<p><a href="https://i.sstatic.net/PBq3i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PBq3i.png" alt="enter image description here" /></a></p>
<p>What am I getting wrong here?</p>
<hr />
<p><strong>Reference:</strong></p>
<p>Theorem 1.4.1, Gross, J. L., & Yellen, J. (2005). Graph Theory and Its Applications. In Chapman and Hall/CRC eBooks. Informa. <a href="https://doi.org/10.1201/9781420057140" rel="nofollow noreferrer">https://doi.org/10.1201/9781420057140</a></p>
|
<p>The picture shows that the given framework is not a rigid framework since it can be skewed in the following manner:</p>
<p><a href="https://i.sstatic.net/KDGPO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KDGPO.png" alt="enter image description here" /></a></p>
| 154
|
graph theory
|
minimum number of edges that should be added to an undirected graph to make it a tree
|
https://cs.stackexchange.com/questions/134364/minimum-number-of-edges-that-should-be-added-to-an-undirected-graph-to-make-it-a
|
<p>Basically, it's this <a href="http://rosalind.info/problems/tree/" rel="nofollow noreferrer">rosalind</a> problem.</p>
<p>You're given a number of nodes and an adjacency list.
My initial guess was that the answer was the number of connected components minus 1, since by joining every connected component you would have a connected graph, and since it's stated that there are no cycles, that would be a tree.</p>
<p>Why is this approach wrong? The real answer is just the number of nodes, minus 1, minus the number of edges, which I understand, but I can't see how this is not equivalent to my answer.</p>
<p>Also, the sample dataset given bugs me. I see three connected components, so I don't see why the answer is not 2.
Bear in mind, I'm almost new to graph theory, so I'm sorry if I'm missing something simple.</p>
|
<p>Your first guess is correct. Sometimes there is more than one way to write the same solution.</p>
<p>Clearly, if there are <span class="math-container">$k$</span> connected components you'll need exactly <span class="math-container">$k-1$</span> edges to connect them (without forming any cycle).</p>
<p>On the other hand, a tree with <span class="math-container">$n$</span> nodes must have exactly <span class="math-container">$n-1$</span> edges, so if the graph is acyclic and already has <span class="math-container">$m$</span> edges, then it is missing <span class="math-container">$n-1-m$</span> edges.</p>
<p>Regarding the example: the given graph has <span class="math-container">$n=10$</span> vertices, <span class="math-container">$m=6$</span> edges, and <span class="math-container">$k=4$</span> connected components (not 3), so the answer is <span class="math-container">$3=k-1=n-m-1$</span>. The sets of vertices in each connected component are <span class="math-container">$\{1,2,8\}, \{3\}, \{4, 6, 10\}$</span>, and <span class="math-container">$\{5, 7, 9\}$</span>.</p>
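<p>Both formulas can be checked against each other with a few lines of code (a sketch; the edge list below is chosen to match the components listed above and may differ from the exact Rosalind sample):</p>

```python
from collections import deque

def components(n, edges):
    """Number of connected components of a graph on vertices 1..n (BFS)."""
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, count = set(), 0
    for s in adj:
        if s not in seen:
            count += 1
            seen.add(s)
            q = deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        q.append(w)
    return count

n, edges = 10, [(1, 2), (2, 8), (4, 10), (6, 10), (5, 9), (7, 9)]
k, m = components(n, edges), len(edges)
print(k - 1, n - 1 - m)  # both answers agree: 3 and 3
```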
| 155
|
graph theory
|
I want to learn more about computer science. Where should I start?
|
https://cs.stackexchange.com/questions/131528/i-want-to-learn-more-about-computer-science-where-should-i-start
|
<p>Hello computer scientists,</p>
<p>I am a mathematician. I have taken some undergraduate courses in C++, Python, assembly language, boolean algebra, logic, graph theory, etc. I would like to learn more about computer science because I think it's cool. Could you give me some advice as to where to start? Thanks!</p>
|
<p>This sounds like a better fit for say academia.stackexchange, as this is mostly subjective, so bear in mind this is mostly "just my opinion".</p>
<hr />
<p>My advice would be to start with algorithms, at a high level: learning about the big algorithmic paradigms (greedy, dynamic programming, linear programming...), runtime analysis (all the Landau notations and derivatives, amortised analysis...), and going through the sorting algorithms at least. A good resource for that could be [1]. This part should be seen as "fun problem solving", I reckon.</p>
<p>In parallel, I would study the foundations of computer science and computability (easier if you've studied Logic before):</p>
<ul>
<li>Starting with finite automata/rational languages, and grammars</li>
<li>Building your way up to Turing Machines (with equivalency to recursive functions and lambda calculus)</li>
<li>Finally getting to the distinction between computable and undecidable</li>
</ul>
<p>Along the way you should have got a small introduction to complexity theory, which should enable you to understand what the classes P and NP are, as well as what an NP-complete problem is.</p>
<hr />
<p>[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2009. Introduction to Algorithms, Third Edition (3rd. ed.). The MIT Press.</p>
| 156
|
graph theory
|
How to Prove NP-Completeness of Minimum Crossing Problem?
|
https://cs.stackexchange.com/questions/44296/how-to-prove-np-completeness-of-minimum-crossing-problem
|
<blockquote>
<p>In graph theory, the crossing number cr(G) of a graph G is the lowest
number of edge crossings of a plane drawing of the graph G.
(from wikipedia)</p>
</blockquote>
<p>I know that the problem of deciding whether the <a href="https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)" rel="nofollow">crossing number</a> of a graph is less than or equal to <code>K</code> was proven to be NP-complete by Garey and Johnson in 1983. I couldn't find the actual proof papers, though.</p>
<p>So, what is the proof idea? </p>
<p>One way would be to reduce (in polynomial time) from some known NPC problem but to which one? </p>
|
<p>Here is a short summary of the <a href="http://epubs.siam.org/doi/abs/10.1137/0604033" rel="nofollow">original paper</a>. More modern treatments can be found in a <a href="http://arxiv.org/pdf/1204.0660.pdf" rel="nofollow">paper</a> of Cabello proving a hardness of approximation result. We are concerned with three different problems:</p>
<p><strong>CROSSING NUMBER</strong>: Given a graph $G$ and a number $K$, can it be drawn with at most $K$ crossings?</p>
<p><strong>BIPARTITE CROSSING NUMBER</strong>: Given a connected bipartite graph $G$ and a number $K$, can it be drawn in the unit square so that vertices of one part are on the top side, vertices of the other part are on the bottom side, all edges are within the square, and there are at most $K$ crossings?</p>
<p><strong>OPTIMAL LINEAR ARRANGEMENT</strong>: Given a graph $G$ and a number $K$, can we arrange the vertices of $G$ in a line so that the total length of all edges is at most $K$.</p>
<p>The latter is known to be NP-complete. The reduction is from OPTIMAL LINEAR ARRANGEMENT to BIPARTITE CROSSING NUMBER, and then from BIPARTITE CROSSING NUMBER to CROSSING NUMBER.</p>
<p>There are several variants of the concept of <em>crossing number</em>, and the one meant here can be gleaned from the proof that crossing number is in NP: given a graph $G$ and a number $K$, to show that the crossing number is at most $K$, identify up to $K$ pairs of crossing edges, for each one introduce a new vertex, and add the appropriate edges so that the resulting graph becomes planar; planarity can be tested in polynomial (indeed, linear) time.</p>
<p><strong>OPTIMAL LINEAR ARRANGEMENT to BIPARTITE CROSSING NUMBER</strong>: Given an instance $G=(V,E),K$ of OPTIMAL LINEAR ARRANGEMENT, we construct an instance of BIPARTITE CROSSING NUMBER as follows:</p>
<ul>
<li>For each $v \in G$, we include two vertices $v_1,v_2$ on opposite bipartitions, and $|E|^2$ copies of the edge $(v_1,v_2)$.</li>
<li>Fix some ordering of $V$. For each $(x,y) \in E$ such that $x < y$, include the edge $(x_1,y_2)$.</li>
<li>Set the new $K$ to be $|E|^2(K-|E|) + |E|^2 - 1$.</li>
</ul>
<p>Given a good arrangement $A$ of the vertices in $V$, the intended drawing of the new graph has both copies of $V$ equally spaced and ordered according to $A$, with edges $(v_1,v_2)$ being vertical (but slightly fanned so that they don't cross each other) and the "real" edges being straight. A "real" edge spanning a stretch of $\ell$ crosses $(\ell-1)|E|^2$ vertical edges, for a total of $(K-|E|)|E|^2$ crossings. There are also at most $\binom{|E|}{2} < |E|^2$ crossings between "real" edges.</p>
<p>Conversely, any drawing of the new graph with fewer than $|E|^4$ crossings must be of the form above (due to the vertical edges), and so we can read off a good arrangement of the original graph.</p>
<p><strong>BIPARTITE CROSSING NUMBER to CROSSING NUMBER</strong>: Given an instance $G=(V_1,V_2,E),K$ of BIPARTITE CROSSING NUMBER, we construct an instance of CROSSING NUMBER as follows:</p>
<ul>
<li>We take all vertices in $V_1,V_2$ along with two special vertices $o_1,o_2$.</li>
<li>We include all original edges.</li>
<li>We connect $o_i$ to each vertex in $V_i$ using $3K+1$ edges (for $i=1,2$).</li>
<li>We connect $o_1$ and $o_2$ using $3K+1$ edges.</li>
<li>We use the same $K$.</li>
</ul>
<p>Given a good drawing of the original graph, we draw the new graph by putting $o_1$ above $V_1$, $o_2$ below $V_2$, the edges from $o_i$ to $V_i$ are (almost) straight, and the edges connecting $o_1$ and $o_2$ go around the entire construction. This adds no new crossings.</p>
<p>In the other direction, we show that every good embedding of the new graph must be of this form, in a step by step fashion, completing the proof. I'm leaving this part to the reader.</p>
| 157
|
graph theory
|
A criterion for the planar graph to have unique dual
|
https://cs.stackexchange.com/questions/54540/a-criterion-for-the-planar-graph-to-have-unique-dual
|
<p>I get stuck with the following two criteria, both about the uniqueness of plane embeddings of a given planar graph. The first one says that a planar graph admits a unique plane embedding iff it is a subdivision of a 3-connected planar graph (e.g. from the book "Planar Graphs: Theory and Algorithms"). The second one (from the paper "The uniquely embeddable planar graphs") says that uniqueness holds iff the graph is 3-connected planar, plus some exclusions. I suppose they are based on two different definitions of equivalence. As for me, I consider plane-embedding equivalence as an equivalence of the respective complexes. Indeed, each plane graph is in fact a triple $(V,E,F)$, where $F$ denotes the set of faces. So the equivalence means that there is a map that translates the first complex to another one and preserves incidence between vertices and edges, edges and faces, and vertices and faces.</p>
<p>My question is, particularly, under what definition does the first criterion hold? The proof that the book gives is a bit opaque to me on that matter, because the definition of uniqueness that it gives is not that strict.</p>
|
<p>First let me mention that the definition of being uniquely embeddable requires ANY graph isomorphism (e.g. one that just renames symmetric vertices, or any other automorphism permutation) to be extendable (not necessarily uniquely) to a topological (or combinatorial) one (see the chapter on planar graphs in Diestel's Graph Theory for these definitions), in contrast to the usual understanding of this definition, namely that for any two planar embeddings there EXISTS some topological (or combinatorial) isomorphism. The difference between the topological and combinatorial isomorphism definitions is negligible for 2-connected planar graphs (see Diestel's book). In particular, this means that if we reflect or twist some symmetric graph (as done in the proof of necessity of the aforementioned criterion "uniquely embeddable if and only if a subdivision of a 3-connected planar graph" in the book "Planar Graphs: Theory and Algorithms"), we get a topologically non-isomorphic embedding in general. The graph $K_{2,3}$ witnesses that the necessity proof there is wrong, though the sufficiency direction holds.</p>
<p>What I mean by complex-based isomorphism is just combinatorial one from Diestel book.</p>
<p>If we allow my unusual definition of uniqueness, namely that any two embeddings are topologically isomorphic, then there are examples of 2-connected graphs that are not subdivisions of 3-connected planar graphs yet are uniquely embeddable.</p>
<p>As for the aforementioned paper "The uniquely embeddable planar graphs", the special definition given there is, according to its Theorem 3.5, stronger than topological isomorphism (it is not robust to transpositions of symmetric vertices, though close to what Yuval assumed).</p>
| 158
|
graph theory
|
Negative edge weight in Dijkstra
|
https://cs.stackexchange.com/questions/142844/negative-edge-weight-in-dijkstra
|
<p>Suppose we are given an undirected graph <span class="math-container">$G$</span> such that a bridge edge of <span class="math-container">$G$</span> has negative weight.</p>
<p><a href="https://en.wikipedia.org/wiki/Bridge_(graph_theory)#:%7E:text=In%20graph%20theory%2C%20a%20bridge,can%20uniquely%20determine%20a%20cut." rel="nofollow noreferrer">From Wikipedia:</a></p>
<blockquote>
<p>In graph theory, a bridge, isthmus, cut-edge, or cut arc is an edge of a graph whose deletion increases the graph's number of connected components.</p>
</blockquote>
<p>Now there is a claim:</p>
<blockquote>
<p>Dijkstra's algorithm finds the correct shortest simple path even if the bridge edge has negative weight.</p>
</blockquote>
<p>I think this claim is correct, but I can't show it. I <a href="https://cs.stackexchange.com/questions/7649/dijkstras-algorithm-for-undirected-graphs-with-negative-edges">already know</a> that if the edges incident to the source <span class="math-container">$s$</span> have negative weight, then Dijkstra's algorithm can still find the correct shortest path.</p>
| 159
|
|
graph theory
|
Connected but not adjacent vertex
|
https://cs.stackexchange.com/questions/112660/connected-but-not-adjacent-vertex
|
<p>Are there specific terms or adjectives in graph theory to name these two situations?</p>
<ul>
<li>Two vertices are non-adjacent (disjoint? I have seen that the term "disjoint" is rather used for paths with non-common vertices or edges).</li>
<li>Two connected non-adjacent vertices (the shortest path or paths connecting them have a length <span class="math-container">$> 1$</span>).</li>
</ul>
|
<p>You've exactly described the first situation, i.e. we indeed say that <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are non-adjacent. For the second we do the same, but if <span class="math-container">$G$</span> is not connected we also specify that they lie in the same connected component of <span class="math-container">$G$</span>.</p>
<p>These are already simple and well-understood descriptions and there's nothing "more standard". However, you are also free to use your own definitions in your actual use case if it's worth it.</p>
| 160
|
graph theory
|
Enumeration of tree vertices such that each vertex has unique neighbor appearing before it
|
https://cs.stackexchange.com/questions/156294/enumeration-of-tree-vertices-such-that-each-vertex-has-unique-neighbor-appearing
|
<p><em>(Diestel, Graph Theory)</em> <strong>Corollary 1.5.2:</strong> Every tree has an enumeration of the vertices <span class="math-container">$\{v_1, v_2\ldots v_n\}$</span> such that each vertex <span class="math-container">$v_i$</span>, with <span class="math-container">$i\geq 2$</span>, has a unique neighbour in <span class="math-container">$\{v_1, v_2\ldots v_{i-1}\}$</span>.</p>
<p>I am wondering if there is an efficient algorithm that could produce such an enumeration in <span class="math-container">$O(n)$</span>.</p>
|
<p>Pick an arbitrary root. Do a preorder traversal. Now, each vertex has a unique neighbour prior to it in the ordering, namely its parent.</p>
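<p>A minimal sketch of that argument (iterative DFS preorder on an adjacency-list tree; the function name is my own):</p>

```python
def tree_enumeration(adj, root):
    """Return a preorder listing of the tree's vertices. Every vertex
    after the first is discovered from its (unique) parent, so its
    parent is its unique earlier neighbour in the ordering."""
    order, stack, seen = [], [root], {root}
    while stack:
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return order

# Path 1-2-3 with an extra leaf 4 attached to vertex 2:
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
order = tree_enumeration(adj, 1)
pos = {v: i for i, v in enumerate(order)}
# Each vertex after the first has exactly one neighbour before it:
print(all(sum(pos[w] < pos[v] for w in adj[v]) == 1 for v in order[1:]))  # True
```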
| 161
|
graph theory
|
Meaning of source here
|
https://cs.stackexchange.com/questions/144139/meaning-of-source-here
|
<p>In graph theory, a source of a directed graph <span class="math-container">$D = (V(D), E(D))$</span> is a vertex of it whose in-degree is zero.</p>
<p>The book CLRS makes these statements:</p>
<p>Given a graph <span class="math-container">$G = (V, E)$</span> and a distinguished source vertex <span class="math-container">$s$</span>, breadth-first
search systematically explores the edges of <span class="math-container">$G$</span> to “discover” every vertex that is
reachable from <span class="math-container">$s$</span>.</p>
<p>I know this is an amateur question but does source have the same meaning here (at least when the graph is directed) or it's just some word the book uses without any particular reason? Maybe by source it means a root.</p>
|
<p>In that context <em>source</em> is just a way to give a specific name to the vertex <span class="math-container">$s$</span>.
It makes sense to use that word since it is the vertex from which all shortest-paths computed using BFS emanate.</p>
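<p>To illustrate the role of the source, here is a standard BFS sketch (not CLRS's exact pseudocode) computing distances from <span class="math-container">$s$</span>; vertices not reachable from the source simply never appear:</p>

```python
from collections import deque

def bfs_distances(adj, s):
    """Distances (in edges) from the source s to every vertex reachable
    from it. `adj` maps each vertex to a list of neighbours."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

adj = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s'], 'c': ['a'], 'z': []}
print(bfs_distances(adj, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}; 'z' unreachable
```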
| 162
|
graph theory
|
algorithm to delete nodes that completely removes connectivity
|
https://cs.stackexchange.com/questions/70380/algorithm-to-delete-nodes-that-completely-removes-connectivity
|
<p>This is a distributed systems problem, but perhaps the graph theory gurus can help me.</p>
<p>I need an algorithm that tells me which nodes to remove from the graph to completely remove connectivity.</p>
<p>See example:</p>
<p><a href="https://i.sstatic.net/eoLoF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eoLoF.png" alt="enter image description here"></a> </p>
<p>The most intelligent thing to do would be to remove the bcast nodes:</p>
<p><a href="https://i.sstatic.net/0689s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0689s.png" alt="enter image description here"></a></p>
<p>Can you help me? One way to solve this would be using a SAT or ILP solver, but I want to know if there are any other algorithms for this kind of task.</p>
|
<p>Your problem is: given a graph, find a minimum-sized set of vertices whose removal disconnects the graph.</p>
<p>This is the "minimum vertex cut" problem. There are standard algorithms for this, based on network flow and the max-flow-min-cut theorem. You need to modify the graph slightly to make two copies of nodes, then apply network flow (network flow on the original graph finds a minimum edge cut; you want a minimum vertex cut). See <a href="https://en.wikipedia.org/wiki/Vertex_separator" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Vertex_separator</a>, <a href="https://en.wikipedia.org/wiki/Cut_(graph_theory)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cut_(graph_theory)</a>, <a href="https://cstheory.stackexchange.com/q/2877/5038">https://cstheory.stackexchange.com/q/2877/5038</a>.</p>
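<p>To make the objective concrete, here is a tiny brute-force sketch of a minimum vertex cut (exponential, only for toy graphs; real solvers use the flow-based construction described above):</p>

```python
from itertools import combinations
from collections import deque

def is_connected(adj, vertices):
    """BFS connectivity check restricted to the given vertex set."""
    vs = list(vertices)
    if len(vs) <= 1:
        return True
    seen, q = {vs[0]}, deque([vs[0]])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w in vertices and w not in seen:
                seen.add(w)
                q.append(w)
    return len(seen) == len(vs)

def min_vertex_cut(adj):
    """Smallest vertex set whose removal disconnects the graph, found by
    trying all subsets in order of size. Returns None for complete graphs
    (which have no vertex cut)."""
    nodes = set(adj)
    for size in range(len(nodes) - 1):
        for cut in combinations(nodes, size):
            remaining = nodes - set(cut)
            if len(remaining) >= 2 and not is_connected(adj, remaining):
                return set(cut)
    return None

# Two triangles joined by the bridge 2-3: removing vertex 2 (or 3) disconnects.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(min_vertex_cut(adj))  # a single vertex: {2} or {3}
```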
| 163
|
graph theory
|
Optimizing order of graph reduction to minimize memory usage
|
https://cs.stackexchange.com/questions/1752/optimizing-order-of-graph-reduction-to-minimize-memory-usage
|
<p>Having extracted the data-flow in some rather large programs as directed, acyclic graphs, I'd now like to optimize the order of evaluation to minimize the maximum amount of memory used.</p>
<p>That is, given a graph {1 -> 3, 2 -> 3, 4 -> 5, 3 -> 5}, I'm looking for an algorithm that will decide the order of graph reduction to minimize the number of 'in-progress' nodes, in this particular case to decide that it should be reduced in the order 1-2-3-4-5; avoiding the alternative ordering, in this case 4-1-2-3-5, which would leave the output from node 4 hanging until 3 is also complete.</p>
<p>Naturally, if there are two nodes using the output from a third, then it only counts once; data is not copied unnecessarily, though it does hang around until both of those nodes are reduced.</p>
<p>I would also quite like to know what this problem is called, if it has a name. It looks similar to the graph bandwidth problem, only not quite; the problem statement may be defined in terms of path/treewidth, but I can't quite tell, and am unsure if I should prioritize learning that branch of graph theory right now.</p>
| 164
|
|
graph theory
|
Planar Embedding with Some Nodes Constrained
|
https://cs.stackexchange.com/questions/79662/planar-embedding-with-some-nodes-constrained
|
<p>I've read about basic planar-graph embedding and about embedding a planar graph onto a set of fixed points, but I was wondering how one might constrain the locations of some nodes—perhaps to a set of points—while allowing others complete freedom.</p>
<p>Is this achievable with a current popular algorithm, or with minor modification thereto? I guess the nature of the problem would lend itself better to grid-based embeddings, right?</p>
<p>I'm entirely new to graph theory, so it's possible the answer is obvious to those familiar with the field. I didn't see anything about it while searching.</p>
|
<p>This was meant to be a comment but it was a bit too long, sorry!</p>
<p>There is a well known algorithm to draw a planar graph, namely Tutte's drawing algorithm. The input graph is assumed to be 3-connected and planar. The idea of the algorithm is to fix the position of vertices of a face in convex position and from those coordinates deduce the positions for the rest of the vertices. The resulting drawing being a planar drawing of the input graph. Perhaps you can find more about this at [<a href="https://en.wikipedia.org/wiki/Tutte_embedding" rel="nofollow noreferrer">1</a>].</p>
<p>With respect to grids, there are a couple of well known algorithms for drawing planar 3-connected graphs. One is Schnyder's algorithm (through a Schnyder wood decomposition of the edges), the other the canonical-ordering algorithm (through a partition of the vertices of the graph). Both of these algorithms run in $O(n)$ time and produce drawings in an $O(n)\times O(n)$ grid, where $n$ is the number of vertices of the input graph. At [<a href="https://www2.cs.arizona.edu/~kobourov/schnyder.pdf" rel="nofollow noreferrer">2</a>] you may find an overview of these two algorithms for the case of planar triangulations. You could even go to references [4] and [9] cited there for the original papers. It is worth noting that these algorithms have been generalized to work on 3-connected planar graphs as stated above.</p>
<p>Are you thinking about drawings subject to more specific constraints?</p>
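<p>As an aside, the idea behind Tutte's algorithm can be sketched numerically: fix an outer face in convex position and repeatedly move every other vertex to the barycentre of its neighbours (this relaxation approximates the solution of Tutte's linear system; a proper implementation would solve that system directly):</p>

```python
def tutte_layout(adj, fixed, iterations=2000):
    """Approximate Tutte embedding: `fixed` maps outer-face vertices to
    convex positions; every other vertex is iteratively moved to the
    barycentre of its neighbours."""
    pos = dict(fixed)
    for v in adj:
        pos.setdefault(v, (0.0, 0.0))   # interior vertices may start anywhere
    for _ in range(iterations):
        for v in adj:
            if v in fixed:
                continue
            xs = [pos[w][0] for w in adj[v]]
            ys = [pos[w][1] for w in adj[v]]
            pos[v] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return pos

# Wheel graph: hub 4 joined to a 4-cycle fixed at the unit-square corners.
adj = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4],
       4: [0, 1, 2, 3]}
fixed = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
pos = tutte_layout(adj, fixed)
print(pos[4])  # the hub settles at the centre, (0.5, 0.5)
```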
| 165
|
graph theory
|
Computing theory: can a single node be a subgraph?
|
https://cs.stackexchange.com/questions/41856/computing-theory-can-a-single-node-be-a-subgraph
|
<p>Can a single node be considered a subgraph?</p>
<p>For example, if I had this graph, G:</p>
<p><code>X</code>-----<code>Y</code> </p>
<p>and I deleted Y, leaving me with the graph</p>
<p><code>X</code></p>
<p>is this a subgraph (induced) of G?
<br><br><br><br>
What about the following argument? </p>
<p>Assume a single node can be considered a graph. Any graph is an induced subgraph of itself. Therefore, a single node graph has a single-node induced subgraph.</p>
<p>Though this is only valid if a single node can be considered a graph.
<br><br><br><br>
In computing theory, what is the generally accepted norm?</p>
|
<blockquote>
<p>What about the following argument?</p>
<p>Assume a single node can be considered a graph. Any graph is an induced subgraph of itself. Therefore, a single node graph has a single-node induced subgraph.</p>
<p>Though this is only valid if a single node can be considered a graph.</p>
</blockquote>
<p>That's completely circular. If a single node can be a graph, you can ask about its subgraphs and, sure, every graph is a subgraph of itself. But if a single node can't be a graph, it doesn't make sense to ask about subgraphs of something that isn't a graph.</p>
<p>In general, a single vertex is considered to be a graph, referred to as the "trivial graph". However, it's something of a special case in that it's often an exception to statements one might wish to make when proving things. For example, every connected graph contains at least one edge... except for the trivial graph; every graph has a proper subgraph... except for the trivial graph; etc. Because of this, writers often exclude the trivial graph from consideration. So, for example, in the "notation" section of a graph theory paper, you often see a statement such as "Except where stated otherwise, we assume that every graph contains at least one edge".</p>
<p>In this respect, asking whether the trivial graph is a graph is a bit like asking whether zero is a natural number. Some people will jump up and down and insist that it is; some people will jump up and down and insist that it isn't; the best plan is to say that it is or isn't according to what makes your life easiest in any particular situation.</p>
| 166
|
graph theory
|
Textbook request for Linear Cellular Automata, if possible with an abstract algebraic approach
|
https://cs.stackexchange.com/questions/148713/textbook-request-for-linear-cellular-automata-if-possible-with-an-abstract-alge
|
<p>I am a final-semester pure math undergrad, and I became interested in linear cellular automata after reading Klaus Sutner's <a href="https://www.link.cs.cmu.edu/15859-s11/notes/sutner.pdf" rel="nofollow noreferrer">article</a>. In the article a little abstract algebra is used (linear mappings, a vector space over <span class="math-container">$F_2$</span>), and I used a bit of group theory to prove tiny things here and there.</p>
<p><strong>Question:</strong> Can someone please recommend me their favorite introductory textbook on linear cellular automata? I do not know how it is usually approached, but if possible an abstract algebraic approach is much appreciated (also helps keep my algebra fresh)!</p>
<p>As to my background (so you can judge if a book would be too hard for me or not), I have taken a standard linear algebra class, 2 abstract algebra classes (groups, rings, fields), 2 classes of graph theory (pretty far, we covered Turan graphs, Ramsey theory, etc), and a standard theory of algorithm class (it was a common class with CS students).</p>
| 167
|
|
algorithm complexity
|
How is algorithm complexity modeled for functional languages?
|
https://cs.stackexchange.com/questions/74494/how-is-algorithm-complexity-modeled-for-functional-languages
|
<p>Algorithm complexity is designed to be independent of lower level details but it is based on an imperative model, e.g. array access and modifying a node in a tree take O(1) time. This is not the case in pure functional languages. The Haskell list takes linear time for access. Modifying a node in a tree involves making a new copy of the tree.</p>
<p>Should there then be an alternative model of algorithm complexity for functional languages?</p>
|
<p>If you assume that the $\lambda$-calculus is a good model of functional programming languages, then one may think: the $\lambda$-calculus has a
seemingly simple notion of
time-complexity: just count
the number of $\beta$-reduction
steps $(\lambda x.M)N \rightarrow M[N/x]$. </p>
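<p>As a concrete (purely illustrative) rendering of this cost model, the sketch below implements leftmost-outermost reduction over de Bruijn-indexed terms in Python and counts the β-steps. It is a toy for counting, not an efficient evaluator:</p>

```python
# Terms: ('var', k) with de Bruijn index k, ('lam', body), ('app', f, a).

def shift(t, d, c=0):
    """Shift free variables (indices >= cutoff c) by d."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, j, s):
    """Substitute s for variable j in t (capture-avoiding via shifts)."""
    if t[0] == 'var':
        return s if t[1] == j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """Perform one leftmost-outermost beta step; None if t is in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                      # the redex (\x.M) N
            return shift(subst(f[1], 0, shift(a, 1)), -1)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        if r is not None:
            return ('app', f, r)
    elif t[0] == 'lam':
        r = step(t[1])
        if r is not None:
            return ('lam', r)
    return None

def normalize(t, limit=10000):
    """Reduce to normal form, returning (normal_form, number_of_beta_steps)."""
    steps = 0
    while steps < limit:
        r = step(t)
        if r is None:
            return t, steps
        t, steps = r, steps + 1
    raise RuntimeError("step limit exceeded; the term may diverge")
```

<p>For example, <code>(\x. x x)(\y. y)</code> normalizes to the identity in two β-steps.</p>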
<p>But is this a good complexity measure? </p>
<p>To answer this
question, we should clarify what we mean by complexity measure in the
first place. One good answer is given by the <i>Slot and van Emde
Boas thesis</i>: any good complexity measure should have
a polynomial
relationship to the canonical notion of time-complexity defined using
Turing machines. In other words, there should be a 'reasonable'
encoding $tr(.)$ from $\lambda$-calculus terms to Turing machines such that, for some polynomial $p$, for
each term $M$ of size $|M|$: $M$ reduces to a value in $p(|M|)$ $\beta$-reduction steps exactly
when $tr(M)$ reduces to a value in $p(|tr(M)|)$ steps of a Turing machine.</p>
<p>For a long time, it was unclear if this can be achieved in the λ-calculus. The main problems are the following.</p>
<ul>
<li>There are terms that produce normal forms (in a polynomial number of steps) that are of exponential size. Even writing down the normal forms takes exponential time.</li>
<li>The chosen reduction strategy plays an important role. For example, there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of <a href="http://www.cs.unibo.it/~asperti/SLIDES/optimal.pdf" rel="noreferrer">optimal λ-reduction</a>), but whose complexity is non-elementary (meaning worse than exponential).</li>
</ul>
<p>The paper "<a href="https://arxiv.org/abs/1601.01233" rel="noreferrer">Beta Reduction is Invariant, Indeed</a>" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class <strong><a href="https://en.wikipedia.org/wiki/Time_complexity#Polynomial_time" rel="noreferrer">P</a></strong> of polynomial time functions, assuming <a href="https://en.wikipedia.org/wiki/Evaluation_strategy" rel="noreferrer">leftmost-outermost call-by-name</a> reductions. The key insight is that the exponential blow-up can only happen for 'uninteresting' reasons which can be defeated by proper sharing. In other words, the class <strong>P</strong> is the same whether you define it counting Turing machine steps or (leftmost-outermost) $\beta$-reductions.</p>
<p>I'm not sure what the situation is for other evaluation strategies.
I'm not aware that a similar programme has been carried out for space complexity.</p>
| 168
|
algorithm complexity
|
Good text on algorithm complexity
|
https://cs.stackexchange.com/questions/3201/good-text-on-algorithm-complexity
|
<p>Where should I look for a good introductory text in algorithm complexity? So far, I have had an Algorithms class, and several language classes, but nothing with a theoretical backbone. I get the whole complexity, but sometimes it's hard for me to differentiate between O(1) and O(n) plus there's the whole theta notation and all that, basic explanation of P=NP and simple algorithms, tractability. I want a text that covers all that, and that doesn't require a heavy mathematical background, or something that can be read through.</p>
<p>LE: I'm still in highschool, not in University, and by heavy mathematical background I mean something perhaps not very high above Calculus and Linear Algebra (it's not that I can't understand it, it's the fact that for example learning Taylor series without having done Calculus I is a bit of a stretch; that's what I meant by not mathematically heavy. Something in which the math, with a normal amount of effort can be understood). And, do pardon if I'm wrong, but theoretically speaking, a class at which they teach algorithm design methods and actual algorithms should be called an "Algorithms" class, don't you think?
In terms of my current understanding, infinite series, limits and integrals I know (most of the complexity books I've glanced at seemed to use those concepts), but you've lost me at the Fast Fourier Transform.</p>
|
<p>It is my very personal opinion that the book of <a href="http://www.aw-bc.com/info/kleinberg/" rel="noreferrer">Jon Kleinberg and Éva Tardos</a> is the best book for studying the design and analysis of efficient algorithms. It might not be as comprehensive as <em>Cormen et al.</em>, but it is a great textbook. Let me point out why I think this book might suit your interests best:</p>
<ul>
<li>you don't need heavy math machinery for the proofs</li>
<li>the book often gives great intuition about why something works (or not); this is in my opinion very important for beginners and self-learners</li>
<li>a very intuitive approach to NP-completeness</li>
<li>it has a great chapter on how to deal with NP-complete problems in practice</li>
<li>it focuses on design patterns, which might help you to design your own clever algorithms</li>
</ul>
<p>You should also notice that there is a lot of free material available on the web. Great <a href="http://compgeom.cs.uiuc.edu/~jeffe/teaching/algorithms/" rel="noreferrer">lecture notes</a> are provided by Jeff Erickson, and you can even watch the whole MIT class <a href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/" rel="noreferrer">"Introduction to Algorithms"</a> taught by Charles Leiserson and Erik D. Demaine. Cool stuff!</p>
| 169
|
algorithm complexity
|
Recursive algorithm. Complexity
|
https://cs.stackexchange.com/questions/73690/recursive-algorithm-complexity
|
<p>I need your help with time complexity.</p>
<p>I have this recursive function:
$$
F(n) = \left\{
\begin{array}{l l}
F(n-2) + 10 F\big(\frac{n}{6}\big)^2 + 6 F\big(\frac{n}{7}\big) + \frac{n^4}{5} & \text{if } n > 1\\
2 & \text{otherwise}\\
\end{array} \right.
$$</p>
<p>and I had to do the same thing using recursion in C#, so I came up with this :</p>
<pre><code> public static int F1(int n)
{
if (n > 1) return F1(n - 2) + 10 * F1((int)Math.Pow((n/6),2)) + 6 * F1(n / 7) + ((int)Math.Pow(n,4) / 5);
else return 2;
}
</code></pre>
<p>I need help calculating the time complexity of this algorithm; I don't even know where to start or how to do it. Could you please give me a hand?</p>
<p>Thank you for any kind of help.</p>
| 170
|
|
algorithm complexity
|
Algorithmic Complexity of Statistical Estimators
|
https://cs.stackexchange.com/questions/65987/algorithmic-complexity-of-statistical-estimators
|
<p>This might be very basic but I am interested in evaluating the algorithmic complexity of an estimator of the form:</p>
<p>$$\hat{\theta} = \text{argmin}_{\theta} \;\; Q_n (\theta)$$</p>
<p>where $Q_n(\theta)$ denotes some objective function of interest (e.g. a negative log-likelihood) computed on a sample of length $n$. $\hat{\theta}$ is assumed to be obtained through some numerical optimization method (typically a stepwise procedure). Under this setting, how could I compute the algorithmic complexity of $\hat{\theta}$?</p>
<p>I am really not sure if this makes sense but here is how I approach this problem:</p>
<ul>
<li>Suppose that the numerical procedure used to compute $\hat{\theta}$ requires $S$ steps to converge.</li>
<li>Assume that there exists a deterministic function, say $f(p)$, where $p$ denotes the dimension of $\theta$, such that $S \leq f(p)$ and $f(p) < \infty$ for $p < \infty$.</li>
<li>Assume that $\mathcal{O} (Q_n(\theta)) = g(n)$ for all $\theta$.</li>
<li>Then $\mathcal{O}(\hat{\theta})$ is simply given by</li>
</ul>
<p>$$\mathcal{O}(\hat{\theta}) = \mathcal{O}(Q_n(\theta) S) = g(n),$$</p>
<p>since $p$ is assumed to be fixed (and bounded). This seems really too simple... Any comments would be more than welcome. Thank you very much.</p>
|
<p>The bound $O(Q_n(θ) \cdot S(p))$ represents only the cost of evaluating $Q_n$ once per step of the "numerical optimization method"; it ignores all other costs that the procedure incurs.</p>
<p>Without looking at the whole algorithm, little more can be done.</p>
<p><em>Note:</em> I very deliberately replaced $S$ with $S(p)$. That is because there is no reason, per se, to believe that $S$ is a constant. You need to be more careful about setting up your cost functions; <a href="https://cs.stackexchange.com/questions/23593/is-there-a-system-behind-the-magic-of-algorithm-analysis">our reference question may be helpful</a>.</p>
| 171
|
algorithm complexity
|
Algorithm complexity similar to Selection Sort
|
https://cs.stackexchange.com/questions/98523/algorithm-complexity-similar-to-selection-sort
|
<p>I'm working on an algorithm that takes an array of <span class="math-container">$N$</span> values and it will iterate through each of the values and in each of them iterate through the rest of the array to the right. So the first value will check <span class="math-container">$N-1$</span> positions, the next one will check <span class="math-container">$N-2$</span>, etc</p>
<p>I'm a bit confused about the time complexity of my algorithm. I remembered that Selection Sort does a similar operation and I checked it's considered <span class="math-container">$O(N^2)$</span> best and worst case. Why is that? If both algorithms check less and less positions every new position I imagined it would be some complexity between <span class="math-container">$O(N)$</span> and <span class="math-container">$O(N*log(N))$</span> and not <span class="math-container">$O(N^2)$</span>.</p>
<p>Another question. If I did my algorithm only for half of the array what would the complexity be? I mean starting at the middle position and only doing the algorithm to the right side for example so the middle position would check only <span class="math-container">$N/2-1$</span> positions and the next only <span class="math-container">$N/2-2$</span>, etc.
I guess it would be <span class="math-container">$O((N/2)^2)$</span> which would be <span class="math-container">$O(N^2/4)$</span> and so still <span class="math-container">$O(N^2)$</span>?</p>
|
<p>You have to count. Intuition is necessary, but you must calculate.</p>
<p><span class="math-container">$$ 1+2+\cdots+(N-1) = N(N-1)/2 \in \mathcal{O}(N^2)$$</span> by the Gaussian summation formula.</p>
<p>In asymptotic notation, constant factors are removed. For example:</p>
<p><span class="math-container">$\mathcal{O}(N/k) = \mathcal{O}(N)$</span> as long as <span class="math-container">$k$</span> is a constant.</p>
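<p>Counting the comparisons directly confirms the sum. Below is a small illustrative Python sketch (my own, not part of the answer above) that instruments selection sort and checks the count against the Gaussian formula:</p>

```python
def selection_sort_count(a):
    """Selection sort that also counts element comparisons."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1      # every pair (i, j) with i < j is compared once
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a, comparisons
```

<p>Regardless of the input order the count is always (N-1) + (N-2) + ... + 1 = N(N-1)/2, which is why the best and worst cases coincide at O(N^2).</p>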
| 172
|
algorithm complexity
|
What is the algorithmic complexity of this?
|
https://cs.stackexchange.com/questions/121269/what-is-the-algorithmic-complexity-of-this
|
<p>I'm practicing leet code questions and want to understand more fully how to determine Big O notation. What is the algorithmic complexity of my solution to the following problem? </p>
<p><strong>O(n^2) ?</strong> For every item in n I could loop an additional n[i] times. Or is it <strong>O(n)</strong> ?</p>
<p>Given an array of non-negative integers, you are initially positioned at the first index of the array.</p>
<p>Each element in the array represents your maximum jump length at that position.</p>
<p>Determine if you are able to reach the last index.</p>
<p>Input: [2,3,1,1,4]</p>
<p>Output: true</p>
<p>Explanation: Jump 1 step from index 0 to 1, then 3 steps to the last index.</p>
<pre class="lang-js prettyprint-override"><code>var canJump = function(nums) {
let validIndices = {};
for(let i = nums.length - 1; i >= 0; i--) {
let currentNum = nums[i];
if(currentNum + i >= nums.length - 1) {
validIndices[i] = true;
} else {
while(currentNum > 0) {
if(validIndices[i + currentNum]) {
validIndices[i] = true;
}
currentNum--;
}
}
}
return !!validIndices[0];
};
</code></pre>
|
<p>Your algorithm has O(n^2) worst case complexity. You have a for loop of O(n). Inside your for loop you have a while loop which is O(currentNum). So if currentNum is O(n) (e.g. nums[i] = nums.length for each i) then the complexity is O(n*n) = O(n^2).</p>
<p>Hint</p>
<p>Try not to recalculate things. You could use Dynamic programming technique. You need:</p>
<pre><code>def canJump(i):
    if valid[i] has been assigned a True or False value:
        return valid[i]
    # else compute valid[i]
    if i + nums[i] >= len(nums) - 1:
        valid[i] = True
        return valid[i]
    valid[i] = any(canJump(i + jump) for jump in range(1, nums[i] + 1))
    return valid[i]

print(canJump(0))
</code></pre>
<p>This is just pseudocode. I have not tried it.
This way you only visit an element of nums once so it’s O(n).</p>
<p>[EDIT]<br>
Actually I am wrong. My code is also O(n^2). Here is an O(n) solution.</p>
<pre><code>at_least = n - 1        # index of the last element (0-based)
for i from n - 2 down to 0:
    if nums[i] + i >= at_least:
        at_least = i
return True if at_least == 0 else False
</code></pre>
<p>Now I think this is right. Sorry for the confusion. I got confused too.</p>
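<p>For completeness, here is a runnable transcription of that last greedy idea (my own sketch, assuming 0-based indexing, with the starting value taken as the last index):</p>

```python
def can_jump(nums):
    # leftmost index from which the last index is known to be reachable
    at_least = len(nums) - 1
    for i in range(len(nums) - 2, -1, -1):
        if i + nums[i] >= at_least:
            at_least = i
    return at_least == 0
```

<p>Each element is visited once, so this is O(n) time and O(1) extra space.</p>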
| 173
|
algorithm complexity
|
The exact relation between complexity classes and algorithm complexities
|
https://cs.stackexchange.com/questions/9909/the-exact-relation-between-complexity-classes-and-algorithm-complexities
|
<p>Do all algorithms which have polynomial time complexity belong to the class P? And does the class P contain only things of polynomial complexity? </p>
<p>Do all algorithms which have non-polynomial complexity belong to NP, to NP-hard, or to both?</p>
<p>I am just trying to understand the basic relationship.</p>
|
<p>$P$ is defined as the class of (decision) problems that have an algorithm that solves them in polynomial time (in a TM, or a polynomially-equivalent model). Thus, $P$ contains exactly these problems, no more and no less.</p>
<p>As for $NP$- the situation is more delicate. A problem is in $NP$ if it has a nondeterministic algorithm that runs in polynomial time. An equivalent, more user-friendly definition is that, given a solution to the problem, you can verify its correctness in time polynomial in the size of the problem. For example, given a graph and a path that is claimed to be Hamiltonian, you can verify in polynomial time that it is indeed a Hamiltonian path. Thus, the problem of deciding if a graph has a Hamiltonian path is in $NP$.</p>
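<p>As a small illustration of such a certificate check (my own sketch, not part of the original answer), verifying a claimed Hamiltonian path takes time linear in the path length:</p>

```python
def is_hamiltonian_path(adj, path):
    """Certificate check: path visits every vertex exactly once and
    consecutive vertices are adjacent. adj maps vertex -> set of neighbours."""
    return (sorted(path) == sorted(adj) and
            all(path[k + 1] in adj[path[k]] for k in range(len(path) - 1)))
```

<p>Finding such a path is the (presumably) hard part; checking a given one is easy, which is exactly what membership in NP requires.</p>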
<p>Clarification: $NP$ is a class of <em>problems</em>, not of <em>algorithms</em>. An algorithm doesn't belong to $NP$.</p>
<p>Now, some problems are known not to have a polynomial time algorithm. This doesn't mean that they are in $NP$. In fact, some problems are known not to be in $NP$. For example, any $NEXP$-hard problem.</p>
<p>Regarding $NP$-hard problems - since we don't know whether $P=NP$ or not, we don't know if every problem outside $P$ is $NP$-hard. If $NP=P$, then every problem is $NP$-hard (except $\Sigma^*$ and $\emptyset$).</p>
<p>This answer (which is by far incomplete) covers about 3 weeks of material in a basic complexity course. Perhaps consider thoroughly reading a textbook, such as Sipser's "Theory of Computation".</p>
| 174
|
algorithm complexity
|
algorithm with linear time complexity
|
https://cs.stackexchange.com/questions/152316/algorithm-with-linear-time-complexity
|
<p>We say that if an algorithm takes time p(n) on an input of size n, where p is a polynomial of degree y, then the algorithm's complexity is O(n^y).</p>
<p>In the image, when n is very large it does not seem to matter much whether it's x^2 or 2x^2, but it clearly seems to matter whether it's x or 2x. So why do we approximate something like 3000x to x?
<a href="https://i.sstatic.net/LBeey.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LBeey.png" alt="enter image description here" /></a></p>
<p>thank you</p>
| 175
|
|
algorithm complexity
|
Relationship of algorithm complexity and automata class
|
https://cs.stackexchange.com/questions/52748/relationship-of-algorithm-complexity-and-automata-class
|
<p>I have been unable to find a graph depicting, or text answering, the following question: Is there a direct relationship between the complexity of an algorithm (such as the best/worst case of quicksort) and the class of automata that can implement the algorithm? For example, is there a range of complexities that pushdown automata can express? If the answer is yes, is there a resource depicting the relationship? Thanks!</p>
|
<p>Yes, there are relationships in many cases!</p>
<p>For example, it is known that any language which is accepted by a reversal-bounded counter machine is in $P$ (see <a href="http://www.sciencedirect.com/science/article/pii/0022000081900283">here</a>).</p>
<p>Similarly, we know that all regular languages are in $P$, since there's a polynomial time algorithm for determining if an NFA accepts a given word.</p>
<p>There are too many to enumerate here, but in general, more limited computation models are in easier complexity classes.</p>
| 176
|
algorithm complexity
|
Algorithm complexity
|
https://cs.stackexchange.com/questions/70976/algorithm-complexity
|
<p>For example, we have $T(N) = T(N/2) + T(N/5) + O(N)$.
So:</p>
<p>$2T(N/2) + O(N) \leq T(N) \leq 2T(N/5) + O(N)$</p>
<p>$O(N) \leq T(N) \leq O(N)$. Thus, $T(N) = O(N)$. Is it correct?</p>
| 177
|
|
algorithm complexity
|
Algorithm Complexity Question
|
https://cs.stackexchange.com/questions/150440/algorithm-complexity-question
|
<p>this is my first question on this site and I would like to preface this by saying I am not very savvy when it comes to Computer Science. So, I will try to ask this the best I can.</p>
<p>I was doing some research on polynomial time because that is what I was told hashing algorithms run in; correct me if I am wrong. With that, I read that for an input of size n there is a running time of n^k. Is that correct? If so, how do I know what k is? I know it is a constant, but how do I know what it is for a given algorithm? Also, is the running time just the value of n^k?</p>
<p>Thanks and sorry if this was poorly written, I am just trying to learn about crypto and some of this stuff on my own.</p>
| 178
|
|
algorithm complexity
|
Algorithmic Complexity of Recognizing Claw-Free Graphs
|
https://cs.stackexchange.com/questions/162703/algorithmic-complexity-of-recognizing-claw-free-graphs
|
<p>Let <span class="math-container">$H=\left(V_H, E_H\right)$</span> and <span class="math-container">$G=(V, E)$</span> be graphs. A <em>subgraph isomorphism</em> from <span class="math-container">$H$</span> to <span class="math-container">$G$</span> is a function <span class="math-container">$f: V_H \rightarrow V$</span> such that if <span class="math-container">$(u, v) \in E_H$</span>, then <span class="math-container">$(f(u), f(v)) \in E$</span>. <span class="math-container">$f$</span> is an <em>induced subgraph isomorphism</em> if in addition if <span class="math-container">$(u, v) \notin E_H$</span>, then <span class="math-container">$(f(u), f(v)) \notin E$</span>.</p>
<p>A <em>claw</em> is another name for the complete bipartite graph <span class="math-container">$K_{1,3}$</span>. A <em>claw-free</em> graph is a graph that does not have a claw as an induced subgraph.</p>
<p>I know, in general, that induced subgraph isomorphism is an NP-complete problem. However, the situation may be different for certain special induced subgraphs, for example <span class="math-container">$C_3$</span>.</p>
<p>Claw-free graphs were initially studied as a generalization of line graphs (the line graph <span class="math-container">$L(G)$</span> of any graph <span class="math-container">$G$</span> is claw-free), and Roussopoulos (1973) and Lehot (1974) described <strong>linear time algorithms</strong> for recognizing line graphs and reconstructing their original graphs.</p>
<p>Our question is, what is the algorithmic complexity of identifying whether a graph contains an induced claw? Are they polynomial, or even linear?</p>
|
<p>A graph is, as you say, claw-free if and only if it does not contain <span class="math-container">$K_{1,3}$</span> as an induced subgraph.</p>
<p>This gives rise to the trivial <span class="math-container">$n^4$</span> algorithm: for every set of four vertices, are the degrees of the induced subgraph <span class="math-container">$3,1,1,1$</span>?</p>
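<p>That trivial check can be sketched directly (an illustrative Python sketch; the adjacency-list encoding is my own assumption):</p>

```python
from itertools import combinations

def has_induced_claw(n, edges):
    """O(n^4) brute force: some 4-subset induces K_{1,3}, i.e. the induced
    degree sequence is (3, 1, 1, 1)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for quad in combinations(range(n), 4):
        degs = sorted(sum(1 for w in quad if w in adj[v]) for v in quad)
        if degs == [1, 1, 1, 3]:
            return True
    return False
```
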
<p>Steven, in another thread, points to a paper by <a href="https://www.cs.princeton.edu/%7Ehy2/files/graph-cr.pdf" rel="noreferrer">Williams, Wang, Williams, and Yu</a> that gives an algorithm for claw detection running in matrix-multiplication time <span class="math-container">$O(n^\omega)$</span>, where <span class="math-container">$\omega < 2.3728$</span>.</p>
<p><a href="https://epubs.siam.org/doi/abs/10.1137/1.9781611973730.111" rel="noreferrer">Williams, Wang, Williams, and Yu, Finding Four-Node Subgraphs in Triangle Time, SODA 2015.</a></p>
| 179
|
algorithm complexity
|
Algorithmic complexity of a Maximum Capacity Representatives variant
|
https://cs.stackexchange.com/questions/77192/algorithmic-complexity-of-a-maximum-capacity-representatives-variant
|
<p>I have been trying to find the algorithmic complexity of a problem that I have. I am almost sure it is either NP-hard or NP-complete but I cannot find any proof. Recently, I found that my problem can be something similar to a special instance of the Maximum Capacity Representatives problem, which is NP-complete. However, the objective function to optimize in my case is different than the one in the MCR problem.</p>
<p>The problem that I am trying to solve is the following:</p>
<p>INSTANCE: Disjoint sets $S_1, \ldots, S_m$ and, for any $i \neq j$, $x \in S_i$, and $y
\in S_j$, a nonnegative capacity $c(x,y)$.</p>
<p>SOLUTION: A system of representatives $T$, i.e., a set $T$ such that, for any $i$, $\vert T \cap S_i\vert=1$.</p>
<p>MEASURE: $\min \{c(x,y): x,y \in T \}$.</p>
<p>And my goal is to maximize $\min \{c(x,y): x,y \in T \}$.</p>
<p>Do you know any way to determine the complexity of my problem? Is there any well-known problem in the literature that can be reduced to this one?</p>
<p>Thanks in advance.</p>
|
<p>I'm assuming that the goal of your optimization problem is to maximize $\min \{ c(x,y) : x,y \in T \}$. The decision problem is then:</p>
<blockquote>
<p>Given a system $S_1,\ldots,S_m$ of disjoint sets, a cost function $c\colon S \times S \to \mathbb{R}_+$ (where $S = S_1 \cup \cdots \cup S_m$) and a number $\gamma \in \mathbb{R}_+$, decide whether there is a choice of representatives $t_i \in S_i$ such that $c(t_i,t_j) \geq \gamma$ for all $i \neq j$.</p>
</blockquote>
<p>This problem is NP-hard (and so NP-complete), by reduction from max clique. Given a graph $G = (V,E)$ and a number $m$, let $S_i = \{i\} \times V$, define $c((i,v),(j,w))$ to be 1 if $(v,w) \in E$ and 0 otherwise, and let $\gamma = 1$. The graph $G$ contains an $m$-clique if and only if the answer to your decision problem is affirmative.</p>
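<p>A tiny brute-force sketch of this reduction (illustrative only; the enumeration is exponential, the point is just that the two instances line up):</p>

```python
from itertools import product

def has_choice(sets, cap, gamma):
    """Brute force: is there a system of representatives whose minimum
    pairwise capacity is at least gamma?"""
    for choice in product(*sets):
        if all(cap(choice[a], choice[b]) >= gamma
               for a in range(len(choice))
               for b in range(a + 1, len(choice))):
            return True
    return False

# Reduction from max clique on a small (hypothetical) graph:
# a triangle {0, 1, 2} plus the pendant edge (2, 3), so no 4-clique exists.
V = [0, 1, 2, 3]
E = {(0, 1), (0, 2), (1, 2), (2, 3)}

def cap(x, y):
    (_, v), (_, w) = x, y
    return 1 if (v, w) in E or (w, v) in E else 0

def clique_instance(m):
    # S_i = {i} x V, as in the reduction above
    return [[(i, v) for v in V] for i in range(m)]
```

<p>Picking the same vertex twice yields capacity 0 between the two copies, so a yes-answer forces m pairwise-adjacent distinct vertices, i.e. an m-clique.</p>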
| 180
|
algorithm complexity
|
Improve algorithmic complexity
|
https://cs.stackexchange.com/questions/116916/improve-algorithmic-complexity
|
<p>We have an array of size N. We have to perform Q queries on it; each query contains an index I, for which we do:</p>
<pre><code>for j=I+1 to N:
if A[j]<A[I]:
A[j]=0
</code></pre>
<p>The queries are not independent of each other, so we need to use the changed array every time.</p>
<p>I have given it a lot of thought but was only able to come up with a brute-force solution of complexity O(Q*N). Can anyone suggest a better solution?</p>
<p>Eg:-</p>
<pre><code>Array- 4 3 4 3 2, Query-3 2
After Query 1(Index 3, element 3)- 4 3 4 3 0
After Query 2(Index 2,element 4)-4 3 4 0 0
</code></pre>
|
<p>Solving this problem benefits from geometric intuition. Think of each index <span class="math-container">$i$</span> as giving a point <span class="math-container">$(i, A[i])$</span> in 2D space. We can then think of assigning <span class="math-container">$A[j]=0$</span> as removing the point <span class="math-container">$(j, A[j])$</span> from this space. Now a query with index <span class="math-container">$i$</span> means removing all points that are to the right of and below the point <span class="math-container">$(i, A[i])$</span>.</p>
<p>Now the set of query points seen so far can be summarized by a set of points <span class="math-container">$(i_1, A[i_1]), (i_2, A[i_2]), \ldots, (i_k, A[i_k])$</span>, for which <span class="math-container">$i_p < i_{p+1}$</span> and <span class="math-container">$A[i_p] < A[i_{p+1}]$</span> for all <span class="math-container">$1 \le p < k$</span>. A point <span class="math-container">$(j, A[j])$</span> is removed if there is some <span class="math-container">$p$</span> for which <span class="math-container">$i_p < j$</span> and <span class="math-container">$A[i_p] > A[j]$</span>. The picture <a href="https://en.wikipedia.org/wiki/Maxima_of_a_point_set#/media/File:Maxima_of_a_point_set.svg" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Maxima_of_a_point_set#/media/File:Maxima_of_a_point_set.svg</a> from Wikipedia might give intuition for this (though it is left-right mirrored compared to our definition).
This set representation can be maintained with a balanced binary search tree. We can check in <span class="math-container">$O(\log n)$</span> time if a point <span class="math-container">$(j, A[j])$</span> is removed by finding the largest <span class="math-container">$i<j$</span> such that <span class="math-container">$(i, A[i])$</span> is in the set, and checking if <span class="math-container">$A[i] > A[j]$</span>. For performing the query in your problem, first check if the given point <span class="math-container">$(i, A[i])$</span> is already removed. If it is not, insert it into the set in <span class="math-container">$O(\log n)$</span> and repeatedly check whether the next element with <span class="math-container">$j>i$</span> has <span class="math-container">$A[j] < A[i]$</span>, and remove it. This can be done in <span class="math-container">$O(k \log n)$</span>, where <span class="math-container">$k$</span> is the number of removed elements, so the amortized complexity is <span class="math-container">$O(\log n)$</span>.</p>
<p>In summary, your queries can be answered in <span class="math-container">$O(\log n)$</span> amortized complexity.</p>
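<p>A compact Python sketch of this scheme (my own illustration, with indices taken 0-based to match the worked example and values assumed nonnegative; the sorted list with <code>bisect</code> makes insertions O(n) here, whereas a balanced BST would give the stated O(log n) bounds):</p>

```python
from bisect import bisect_left, insort

def process(A, queries):
    """Answer the queries and return the final array.  `stair` holds the
    recorded query points (index, value), increasing in both coordinates;
    position j is zeroed iff some recorded (p, A[p]) has p < j and A[p] > A[j]."""
    stair = []

    def zeroed(j, vj):
        p = bisect_left(stair, (j, -1)) - 1    # largest recorded index < j
        return p >= 0 and stair[p][1] > vj

    for i in queries:
        if zeroed(i, A[i]):                    # A[i] is effectively 0: no effect
            continue
        insort(stair, (i, A[i]))
        p = bisect_left(stair, (i, A[i])) + 1
        while p < len(stair) and stair[p][1] < A[i]:
            del stair[p]                       # dominated staircase point

    return [0 if zeroed(j, A[j]) else A[j] for j in range(len(A))]
```
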
| 181
|
algorithm complexity
|
Time complexity and space complexity in recursive algorithm
|
https://cs.stackexchange.com/questions/13055/time-complexity-and-space-complexity-in-recursive-algorithm
|
<pre><code>"The designer of an algorithm needs to balance between space complexity and time
complexity." - Comment on the validity of the statement in the context of recursive
algorithms.
</code></pre>
<p>This is a question from my university's previous paper, but I couldn't find a decent answer. Actually I am confused about how a developer can minimize the time complexity of a recursive function. I get that if there is <strong>tail recursion</strong> then the space complexity can be minimized, but I can't get the idea for time complexity. </p>
|
<p>One thing that comes to mind is <a href="http://en.wikipedia.org/wiki/Memoization" rel="nofollow noreferrer">memoization</a>. A simple, well-studied problem for this is the Fibonacci numbers; the plain recursion is as follows:</p>
<pre><code>fib(int n)
{
if (n < 3)
return 1;
return fib(n-1) + fib(n-2);
}
</code></pre>
<p>But with memoization, we can use an auxiliary array to get rid of extra calls:</p>
<pre><code>f[1]=f[2] = 1;
fib(int n)
{
if (n < 3)
return 1;
if (f[n] == 0)
f[n] = fib(n-1) + fib(n-2);
return f[n];
}
</code></pre>
<p>This simple change, reduces the time from $\Theta(\phi^n)$ to $\Theta(n)$.</p>
<p>The memoization technique sometimes uses more memory, but it is much faster; this is one of the tradeoffs a software developer should be aware of.</p>
<p><strong>Explanation of the memoization of Fibonacci numbers:</strong></p>
<p>First we create an array $f$ to save the values that have already been computed. This is the main part of all memoization algorithms: instead of making many repeated recursive calls, we save the results already obtained in previous steps of the algorithm. As shown in the algorithm, we set $f[1],f[2]$ to $1$.</p>
<p>In the first <code>if</code> we check whether we are at a base case or not. We could remove this <code>if</code> statement, but I left it in to make the code easier to read.</p>
<p>In the second <code>if</code>, we check whether the value of <code>fib(n)</code> has already been computed. This prevents multiple calls for the same number. For example, suppose we want to compute <code>f(6)</code>: in the normal recursion we get the first recursion tree shown in the figure below, and in the memoized version we get the second tree. The reason is that with memoization we compute each green vertex only once, save it in memory (the array $f$), and fetch it later if it is needed again. </p>
<p>In the following figure, green nodes are the ones that actually have to be computed (in this approach), yellow nodes are precomputed ones, and red nodes are the nodes that are repeatedly recomputed in the plain recursion. </p>
<p>As is clear from the image, in the normal case only <code>f(1)</code> and <code>f(2)</code> are precomputed, but in the memoized case every smaller value is already available when needed, which gives an exponentially smaller recursion tree. (With memoization the number of red nodes is zero, whereas in the plain recursion it is exponential.)</p>
<p><img src="https://i.sstatic.net/WEwIY.png" alt="First tree is normal recursion tree, second one is by memoization."></p>
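<p>To make the tradeoff measurable, here is a small Python sketch (my own illustration, not part of the original answer; the names <code>fib_naive</code> and <code>fib_memo</code> are mine) that counts how many recursive calls each version makes:</p>

```python
from functools import lru_cache

calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    # plain recursion: the number of calls grows like phi^n
    calls["naive"] += 1
    if n < 3:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # memoized: the body runs once per distinct n, so Theta(n) calls,
    # at the cost of Theta(n) extra memory for the cache
    calls["memo"] += 1
    if n < 3:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), calls["naive"])  # 6765 13529
print(fib_memo(20), calls["memo"])    # 6765 20
```

<p>The same answer is produced either way, but the memoized version makes only 20 calls for $n=20$ instead of 13529 — the "red nodes" simply disappear.</p>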
| 182
|
algorithm complexity
|
Is every algorithm's complexity $\Omega(1)$ and $O(\infty)$?
|
https://cs.stackexchange.com/questions/8998/is-every-algorithms-complexity-omega1-and-o-infty
|
<p>From what I've read, Big O is the absolute worst amount of complexity an algorithm can exhibit, over all inputs. On the other side, Big Omega is the best possible efficiency, i.e. the lowest complexity.</p>
<p>Can it be said then that every algorithm has a complexity of $O(\infty)$ since infinite complexity is the worst ever possible?
By the same token, can it be said that every algorithm is $\Omega(1)$ since the absolute lowest complexity an algorithm can be is a constant?</p>
|
<p>To be clear, Big O and Big Omega are classes of functions. So if I have for example $\Omega(1)$, that's a set of a whole bunch of functions.</p>
<p>An algorithm's complexity is a function giving how many steps the algorithm takes on each input. This function may be in a class like $\Omega(1)$, or not.</p>
<p>$\Omega(1)$ is the class of functions $f(x)$ that are greater than some constant $c$ as $x$ goes to infinity. According to the technical definition, the function $f(x) = 0$ is not $\Omega(1)$, so an algorithm that never takes any steps (for instance) would not have complexity $\Omega(1)$. But virtually all algorithms would have complexity in $\Omega(1)$. (For example algorithms that always take at least one step.)</p>
<p>Since $\infty$ is not a number, $O(\infty)$ is not defined under any definition I've seen. It does not seem interesting to make this definition ("all functions which are either finite or undefined"? but that is just all functions). So intuitively the answer to your question is yes, infinity is an upper bound on all algorithms' running times, but technically it's somewhat meaningless.</p>
<p>As a sidenote, we could extend little-o to include a domain of infinity by saying $f(x)$ is $o(\infty)$ if $f(x)$ is finite (i.e. well-defined) for all $x$. Under a definition like that, an algorithm that does not halt on some inputs would not have complexity $o(\infty)$. But every algorithm that does halt on all inputs would.</p>
<hr>
<p>Edit/PS as pointed out by Yuval, we can give a slightly better answer for $O(\infty)$. Let's restrict attention to algorithms that halt on all inputs. Then we can define a function that grows like the maximum number of steps taken by <em>any</em> algorithm that halts. Call it $B(n)$. Then every algorithm's running time will be $O(B(n))$.</p>
<p>To define the function, let's take Yuval's suggestion and let $B(n) = $ the maximum number of steps taken by any halting TM of up to $n$ states on an input of up to $n$ bits. Now given any algorithm, it will be encoded as a Turing Machine, so it will have a certain number of states; say $N$ of them. Then the running time of the algorithm on inputs of size $m \geq N$ is at most $B(m)$. We can see this because $B(m)$ takes the maximum over all Turing Machines up to a certain size, including this one, so $B(m)$ will be at least as big as this TM's runtime.</p>
<p>See also: <a href="http://en.wikipedia.org/wiki/Busy_beaver" rel="noreferrer">http://en.wikipedia.org/wiki/Busy_beaver</a></p>
| 183
|
algorithm complexity
|
Suffix Tree algorithm complexity
|
https://cs.stackexchange.com/questions/41199/suffix-tree-algorithm-complexity
|
<p>I really get confused by all the different complexities you find around. One is $O(n \log n)$, the next $O(n \cdot |\Sigma|)$. Personally I think it's the last one, but I'm really not that confident with it to say so. Well on average we go $\log n$ deep and need at max $|\Sigma|$ steps to find a corresponding node that matches (or not). Thus I would come up with $O(n \cdot |\Sigma| \cdot \log n)$.</p>
<blockquote>
<p>Repeat the following for all suffixes of the given string, right to left.</p>
<ol>
<li>Scan if its in tree
<ul>
<li>Not in tree -> add it as new node</li>
<li>Is partly in tree -> fork here, such that the matching part remains</li>
</ul></li>
<li>Go back to step 1 until the sequence that was observed equals the source</li>
</ol>
</blockquote>
|
<p>There is no one complexity for all suffix tree algorithms. There are multiple algorithms, with different running times. It is not helpful to talk about this as though there was only one complexity that applies to all algorithms for computing a suffix tree. If you want to ask what is the running time of an algorithm for this task, you need to specify which algorithm you are asking about.</p>
<p>There are also multiple different models. Some algorithms are designed for the regime where $\Sigma=\{0,1\}$ or where you have a small finite-sized alphabet (e.g., $|\Sigma|=256$), where it is reasonable to consider $|\Sigma| \in O(1)$. Consequently, the running time for those algorithms might not include any dependence on $|\Sigma|$ -- which is reasonable if you assume that $|\Sigma| \in O(1)$. Other algorithms are designed to work even when the size of the alphabet is large. For those, the dependence on $|\Sigma|$ is at the heart of the problem. Therefore, you'll also find some algorithms designed for the large-alphabet regime, and typically the running times that are quoted for those algorithms <em>do</em> show the dependence on $|\Sigma|$. Therefore, when you read a paper on a suffix tree algorithm, if you want to use the algorithm on a situation where the alphabet size is large, you need to carefully examine the assumptions that were made during the running time analysis. Does the quoted running time include the dependence on $|\Sigma|$? Does the paper assume $|\Sigma| \in O(1)$?</p>
<p>Finally, the dependence on $|\Sigma|$ can depend on subtle aspects of how the data structures are instantiated. Therefore, a high-level description of the algorithm is often not enough -- you often need to know how the tree is represented and how the individual steps are instantiated to tell what the dependence on $|\Sigma|$ will be.</p>
<p>I suggest you read <a href="https://cs.stackexchange.com/q/6842/755">What are the effects of the alphabet size on construct algorithms for suffix trees?</a></p>
<p>I also suggest you take a look at <a href="https://cs.stackexchange.com/q/13669/755">What is the difference between an algorithm, a language and a problem?</a>, to help you think clearly about the difference between a problem vs an algorithm.</p>
| 184
|
algorithm complexity
|
Algorithm Complexity Analysis on functional programming language implementations
|
https://cs.stackexchange.com/questions/63900/algorithm-complexity-analysis-on-functional-programming-language-implementations
|
<p>I've learned <a href="https://cs.stackexchange.com/questions/63889/is-there-a-decision-algorithm-with-time-complexity-of-%d3%a8n%c2%b2?noredirect=1#comment135296_63889">today</a> that algorithm analysis differs based on computational model. It is something I've never thought about or heard of. </p>
<p>An example given to me, that illustrated it further, by User <a href="https://cs.stackexchange.com/users/43599/chi">@chi</a> was:</p>
<blockquote>
<p>E.g. consider the task: given $(i,x_1 ,…,x_n )$
return
$x_i$
. In RAM this can be solved in $O(1)$
since array access is constant-time. Using TMs, we need to scan the whole input, so it's $O(n)$</p>
</blockquote>
<p>This makes me wonder about functional languages; From my understanding, "Functional languages are intimately related to the lambda calculus" (from a comment by Yuval Filmus on <a href="https://cs.stackexchange.com/a/44307/22046">here</a>). So, if functional languages are based on lambda calculus, but they run on RAM based machines, what is the proper way to perform complexity analysis on algorithms implemented using purely functional data structures and languages? </p>
<p>I have not had the opportunity to read <a href="http://rads.stackoverflow.com/amzn/click/0521663504" rel="nofollow noreferrer">Purely Functional Data Structures</a> but I have looked at the Wikipedia page for the subject, and it seems that some of the data structures do replace traditional arrays with:</p>
<blockquote>
<p>"Arrays can be replaced by map or random access list, which admits purely functional implementation, but the access and update time is logarithmic." </p>
</blockquote>
<p>In that case, the computational model would be different, correct? </p>
|
<p>It depends on the semantics of your functional language. You can't do algorithm analysis on programming languages in isolation, because you don't know what the statements actually mean. The specification for your language needs to provide sufficiently detailed semantics. If your language specifies everything in terms of lambda calculus you need some cost measure for reductions (are they O(1) or do they depend on the size of the term you reduce?). </p>
<p>I think that most functional languages don't do it that way and instead provide more useful statements like "function calls are O(1), appending to the head of a list is O(1)", things like that.</p>
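<p>To make the last point concrete, here is a minimal Python sketch (my own illustration, not part of the original answer) of the kind of cost statement a functional language typically makes: a persistent cons-list where prepending is $O(1)$ because the tail is shared, while random access is $O(n)$ because you must walk the spine:</p>

```python
from typing import Any, Optional, Tuple

# A cell is (head, tail); None is the empty list.
Cell = Optional[Tuple[Any, "Cell"]]

def cons(x, xs: Cell) -> Cell:
    # O(1): shares the existing tail, nothing is copied
    return (x, xs)

def index(xs: Cell, i: int):
    # O(i): must walk i cells down the spine
    while i > 0:
        xs = xs[1]
        i -= 1
    return xs[0]

lst: Cell = None
for v in [3, 2, 1]:
    lst = cons(v, lst)   # builds the list 1 -> 2 -> 3

print(index(lst, 0), index(lst, 2))  # 1 3
```

<p>Note that <code>cons(0, lst)</code> leaves <code>lst</code> untouched — persistence is exactly what makes the $O(1)$ prepend possible, and what forces logarithmic or linear costs for operations that arrays do in constant time.</p>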
| 185
|
algorithm complexity
|
Brute force Delaunay triangulation algorithm complexity
|
https://cs.stackexchange.com/questions/2400/brute-force-delaunay-triangulation-algorithm-complexity
|
<p>In the book <a href="http://www.cs.uu.nl/geobook/" rel="noreferrer">"Computational Geometry: Algorithms and Applications"</a> by Mark de Berg et al., there is a very simple brute force algorithm for computing Delaunay triangulations. The algorithm uses the notion of <em>illegal edges</em> -- edges that may not appear in a valid Delaunay triangulation and have to be replaced by some other edges. On each step, the algorithm just finds these illegal edges and performs required displacements (called <em>edge flips</em>) till there are no illegal edges.</p>
<blockquote>
<p>Algorithm <strong>LegalTriangulation</strong>(<span class="math-container">$T$</span>)</p>
<p><em>Input</em>. Some triangulation <span class="math-container">$T$</span> of a point set <span class="math-container">$P$</span>.<br />
<em>Output</em>. A legal triangulation of <span class="math-container">$P$</span>.</p>
<p><strong>while</strong> <span class="math-container">$T$</span> contains an illegal edge <span class="math-container">$p_ip_j$</span><br />
<strong>do</strong><br />
<span class="math-container">$\quad$</span> Let <span class="math-container">$p_i p_j p_k$</span> and <span class="math-container">$p_i p_j p_l$</span> be the two triangles adjacent to <span class="math-container">$p_ip_j$</span>.<br />
<span class="math-container">$\quad$</span> Remove <span class="math-container">$p_ip_j$</span> from <span class="math-container">$T$</span>, and add <span class="math-container">$p_kp_l$</span> instead.<br/>
<strong>return</strong> <span class="math-container">$T$</span>.</p>
</blockquote>
<p>I've heard that this algorithm runs in <span class="math-container">$O(n^2)$</span> time in worst case; however, it is not clear to me whether this statement is correct or not. If yes, how can one prove this upper bound?</p>
|
<p>A Delaunay triangulation can be considered as the lower convex hull of the 2d point set lifted to the paraboloid. Thus, if you take your 2d point set and assign to every point $(x_i,y_i)$ a $z$-coordinate $z_i=x_i^2+y_i^2$, then the projection of the lower convex hull into the $xy$-plane gives you the Delaunay triangulation.</p>
<p>Using this perspective, what does it mean for an edge $(p_i,p_j)$ to be illegal? First of all, for every triangulation $T$ we can use the parabolic map to get a 3d (triangulated) surface that projects down to $T$. Of course, this surface is not necessarily convex; if it were convex, $T$ would be the Delaunay triangulation. Simply speaking, the edge $(p_i,p_j)$ is an obstruction to the convexity of the surface, a <em>concave</em> edge. When flipping this edge we change the situation on the lifted surface only locally. So let's look at the 4 points $p_i,p_j,p_k,p_l$. In 3d they form a tetrahedron that projects down to a quadrilateral. Since the two triangles $p_ip_jp_k$ and $p_ip_jp_l$ define the concave edge $(p_i,p_j)$, the triangles $p_kp_lp_i$ and $p_kp_lp_j$ define a convex edge $(p_l,p_k)$. Therefore, flipping an illegal edge corresponds to replacing a concave edge by a convex edge in the lifting. Notice that these flips might turn other convex edges into concave edges.</p>
<p><img src="https://i.sstatic.net/1j3tY.png" alt="3D Flip interpretation">
<em>Remark: The image is not geometrically correct and should only be considered as a sketch.</em></p>
<p>Let $T'$ be the triangulation after the flip. The lifted surface of $T'$ "contains" the surface of $T$. By this I mean that if you watch the two surfaces from the $xy$ plane you see only triangles from the surface of $T'$ (or triangles that are in both surfaces). You could also say that the surface of $T'$ encloses more volume. Also, the edge $(p_i,p_j)$ lies now "behind" the lifted surface induced by $T'$ when watching from the $xy$ plane. </p>
<p>During the flip sequence we get a sequence of surfaces with strictly increasing volume. Thus, the edge $(p_i,p_j)$ lies "behind" all these surfaces. Hence, it can never reappear during the flipping process. Since there are only $\binom{n}{2}$ possible edges, we have at most $O(n^2)$ flips.</p>
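<p>Under the lifting view, the illegality test itself is a determinant sign: $p_l$ lies inside the circumcircle of $p_ip_jp_k$ exactly when the lifted $p_l$ lies below the plane through the lifted $p_i,p_j,p_k$. Here is a hedged Python sketch of that test (my own illustration; the function name <code>in_circle</code> is mine), obtained by row-reducing the classic $4\times 4$ lifted determinant to a $3\times 3$ one:</p>

```python
def in_circle(a, b, c, d):
    # True iff d lies strictly inside the circumcircle of the
    # counterclockwise triangle a, b, c. Each row is (p - d) with the
    # lifted coordinate p_x^2 + p_y^2 - (d_x^2 + d_y^2); the sign of
    # the 3x3 determinant decides the side of the lifted plane.
    rows = []
    for p in (a, b, c):
        rows.append((p[0] - d[0], p[1] - d[1],
                     p[0] ** 2 + p[1] ** 2 - d[0] ** 2 - d[1] ** 2))
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = rows
    det = (a1 * (b2 * c3 - b3 * c2)
           - a2 * (b1 * c3 - b3 * c1)
           + a3 * (b1 * c2 - b2 * c1))
    return det > 0

# unit circle through three counterclockwise points:
# the origin is inside, (2, 0) is outside
a, b, c = (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)
print(in_circle(a, b, c, (0.0, 0.0)))  # True
print(in_circle(a, b, c, (2.0, 0.0)))  # False
```

<p>With this predicate, <strong>LegalTriangulation</strong> flips the edge $(p_i,p_j)$ precisely when <code>in_circle(p_i, p_j, p_k, p_l)</code> holds for the two adjacent triangles.</p>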
| 186
|
algorithm complexity
|
A question about parallel algorithm complexity
|
https://cs.stackexchange.com/questions/7371/a-question-about-parallel-algorithm-complexity
|
<p>When in a Parallel algorithm we say:</p>
<blockquote>
<p>"This algorithm is done in $O(1)$ time using $O(n\log n)$ work, with $n$-exponential probability, or alternatively, in $O(\log n)$ time using $O(n)$ work, with $n$-exponential probability."</p>
</blockquote>
<p>Can we then implement this algorithm on a quad-core computer (with just 4 threads) with $n=100,000$?</p>
<p>The other question is what is the "$n$-exponential probability" in this sentence?</p>
<p>Thanks.</p>
|
<p>You are probably in the realm of asynchronous parallel computations where units of work are performed by processors at their pace and communication is performed explicitly. This model is a good approximation to many real life parallel computers such as PC clusters or multicore CPUs.</p>
<p>You have an algorithm that can be represented as $O(n \log n)$ units of work each taking constant time or as $O(n)$ units of work each taking $O(\log n)$ time.
Here $n$ is a parameter that characterizes the size of the problem.</p>
<p>The units of work can be executed on a parallel computer with a fixed number of sequential processing elements (e.g. processor cores). It depends on the algorithm whether the work units can complete while other work units have not started, or whether the computations will have to interleave. </p>
<p>In a practical computer interleaving can be achieved through pre-emption and context switches.</p>
| 187
|
algorithm complexity
|
"Which complexity represents a majority of algorithms?"
|
https://cs.stackexchange.com/questions/95987/which-complexity-represents-a-majority-of-algorithms
|
<p>Student asked me this question. During lectures on algorithm complexity I've shown similar picture (<a href="https://towardsdatascience.com/linear-time-vs-logarithmic-time-big-o-notation-6ef4227051fb" rel="nofollow noreferrer">source</a>):</p>
<p><a href="https://i.sstatic.net/y4RLi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y4RLi.png" alt="image graph"></a></p>
<p>After I explained and gave examples for each category, a student asked: "Which of these represents the majority of algorithms?" I suspect the question is not quite well-posed, in the sense that new algorithms are created all the time and that it depends on the field; the majority of sorting algorithms, for example, are no faster than $n \log n$.</p>
<p>What would be your answer?</p>
<p>This question is only about the Big O as that was the end of the lecture.</p>
<p>Thanks!</p>
|
<p>My answer would be that there is no running time that represents the majority of algorithms. Some common algorithms (for example, BFS and DFS) run in linear time; some run in loglinear time (for example, sorting); dynamic programming algorithms run in superlinear polynomial time (for example, the standard algorithm for edit distance runs in quadratic time); algorithms involving matrix multiplication run (practically) in cubic time, and theoretically faster; SAT solvers have worst case exponential running time; and some algorithms involved in verification have even faster growing worst case running time, or are not guaranteed to terminate at all.</p>
| 188
|
algorithm complexity
|
Complexity of Search Algorithm
|
https://cs.stackexchange.com/questions/82340/complexity-of-search-algorithm
|
<p>I have an algorithm which searches a sorted int array for two elements which sum up to a searched value. First I thought that the complexity is $\mathcal{O}(n)$, but the interpolation search algorithm has a similar approach and has $\mathcal{O}(\log \log n)$ complexity for uniformly distributed elements. </p>
<p>Which is the right complexity and why?</p>
<pre class="lang-java prettyprint-override"><code>boolean hasPairWithSum(int[] elements, int sum) {
int start = 0;
int end = elements.length-1;
while (start < end) {
int tempSum = elements[start] + elements[end];
if (tempSum == sum) return true;
if (tempSum > sum) {
end--;
} else {
start++;
}
}
return false;
}
</code></pre>
|
<p>This method works in linear time, because <code>end - start</code> decreases by 1 on each iteration. It's $n - 1$ initially, hence the loop will make at most $n - 1$ iterations.</p>
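<p>A direct Python transcription of the same scan (my sketch; the name <code>has_pair_with_sum</code> is mine) makes the bound visible: each iteration moves exactly one of the two indices toward the other, so the loop body runs at most $n-1$ times:</p>

```python
def has_pair_with_sum(elements, target):
    # two pointers over a sorted array; end - start shrinks by 1 per
    # iteration, so there are at most len(elements) - 1 iterations
    start, end = 0, len(elements) - 1
    while start < end:
        s = elements[start] + elements[end]
        if s == target:
            return True
        if s > target:
            end -= 1      # sum too big: drop the largest element
        else:
            start += 1    # sum too small: drop the smallest element
    return False

print(has_pair_with_sum([1, 3, 5, 18], 8))   # True  (3 + 5)
print(has_pair_with_sum([1, 3, 5, 18], 10))  # False
```

<p>The $\mathcal{O}(\log \log n)$ bound of interpolation search does not apply here: that algorithm searches for a single value and can jump, whereas this scan must be prepared to visit every element once.</p>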
| 189
|
algorithm complexity
|
approximation algorithm with polynomial complexity
|
https://cs.stackexchange.com/questions/75286/approximation-algorithm-with-polynomial-complexity
|
<p>It might be a silly question; I did read the course notes on approximation algorithms carefully, but when I saw the words "approximation algorithm with polynomial complexity", I couldn't understand what they meant. I searched a lot, and here is my understanding.</p>
<p>First, for some NP-complete or NP-hard problems, we can try to solve them approximately, because no polynomial-time exact algorithm is known. So, compared with a theoretically optimal solution, some optimization problems can be solved approximately by an approximation algorithm, such as the vertex cover problem.</p>
<p>Then there is the term polynomial complexity: an algorithm is polynomial if, for some $k > 0$, its running time on inputs of size $n$ is $O(n^k)$.</p>
<p>So does "approximation algorithm with polynomial complexity" mean an approximation algorithm whose running time is polynomial? And does that mean there are approximation algorithms with other complexities, such as exponential complexity?</p>
<p>By the way, from the approximation algorithm for the vertex cover problem, I know what the algorithm is (briefly: take an arbitrary edge from the set E' of remaining edges, add both of its endpoints to the set S, remove every edge sharing a vertex with that edge, and continue until E' is empty). But how do I verify that it has polynomial complexity?</p>
<p>I would really appreciate it if someone could resolve my confusion.</p>
|
<p>As you say, we believe there is no polynomial-time algorithm for solving an NP-hard problem. So if we wanted to have a polynomial-time algorithm, it seems like we need to give up the hope of always finding an optimal solution. So you are right: typically, we strive to find approximation algorithms that run in time polynomial in the input size. But sure, there is nothing that prevents you from considering approximation algorithms that run in exponential-time (or for example in <a href="https://en.wikipedia.org/wiki/Parameterized_complexity" rel="nofollow noreferrer">FPT-time</a> for some parameter).</p>
<p>Sometimes these approximation algorithms are very simple to state. This is perhaps often the case for greedy algorithms, like the one you mention for vertex cover. To see that it runs in time polynomial in the size of the input graph, you need to perform a runtime analysis in the usual way.</p>
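<p>For the vertex cover algorithm you describe, the runtime analysis is short. A hedged Python sketch (my own; the name <code>vertex_cover_2approx</code> is mine) of the greedy procedure:</p>

```python
def vertex_cover_2approx(edges):
    # Greedy 2-approximation: pick any remaining edge, add BOTH of its
    # endpoints to the cover, then discard every edge that is now
    # covered. Each pass removes at least one edge and scans the edge
    # list once, so the running time is O(E^2) -- clearly polynomial.
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]
        cover.add(u)
        cover.add(v)
        remaining = [e for e in remaining
                     if e[0] not in cover and e[1] not in cover]
    return cover

# path 1-2-3-4: an optimal cover {2, 3} has size 2, and the greedy
# cover is guaranteed to be at most twice the optimum
cover = vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])
print(sorted(cover), len(cover))  # [1, 2, 3, 4] 4
```

<p>The analysis pattern is the usual one: bound the number of loop iterations (at most $|E|$) and the cost per iteration (at most $O(|E|)$), and multiply.</p>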
| 190
|
algorithm complexity
|
Async Distributed Algorithm Time Complexity
|
https://cs.stackexchange.com/questions/90946/async-distributed-algorithm-time-complexity
|
<p>In theory of distributed systems, I understand message complexity and time complexity are common performance measures, with the first being the number of messages sent in the overall execution of the algorithm, and the time being the number of "steps" it takes to complete the algorithm. </p>
<p>My question is, with asynchronous distributed algorithms, where we assume that at least one message is received and processed per step, how can the time complexity differ from the message complexity? If I have to send m messages to complete the algorithm, it seems to me the worst case time complexity would be $m$, sending one message per time step. For example, the time complexity of an async flood algorithm is $O(n)$ while the message complexity is $O(m)$. Why is the time complexity not $O(m)$ too? Here $m$ is the number of edges and $n$ is the number of vertices. Same goes for constructing a rooted spanning tree.</p>
<p>Again, this is for async systems where messages don't have to be received in parallel.</p>
|
<p>In a distributed system, messages can be sent in parallel. Vertex #1 can send a message at the same time as vertex #2 is sending a message. So, the total amount of time to complete the algorithm might be much less than the total number of messages sent.</p>
| 191
|
algorithm complexity
|
Is "super-exponential" a precise definition of algorithmic complexity?
|
https://cs.stackexchange.com/questions/99605/is-super-exponential-a-precise-definition-of-algorithmic-complexity
|
<p>I cannot seem to find a precise definition of what "super-exponential" is supposed to refer to when one's talking about an algorithm's time complexity.</p>
<p>For instance, if an algorithm runs for <span class="math-container">$nC(n-1)$</span> steps, where <span class="math-container">$C(\cdot)$</span> is the Catalan number, is this algorithm super-exponential in <span class="math-container">$n$</span>? </p>
|
<p>"Super-exponential" just means more than exponential, so a function is super-exponential if it grows faster than any exponential function. More formally, this means that it is <span class="math-container">$\omega(c^n)$</span> for every constant <span class="math-container">$c$</span>, i.e., if <span class="math-container">$\lim_{n\to\infty} f(n)/c^n=\infty$</span> for all constants <span class="math-container">$c$</span>.</p>
<p>Conversely, a function is "sub-exponential" if it is <span class="math-container">$o(c^n)$</span> for every constant <span class="math-container">$c>1$</span>, i.e., <span class="math-container">$\lim_{n\to\infty} f(n)/c^n=0$</span> for all constants <span class="math-container">$c>1$</span>.</p>
<p>Asymptotically, the <span class="math-container">$n$</span>th Catalan number is <span class="math-container">$\Theta(4^n\, n^{-3/2})$</span>. This is <span class="math-container">$o(4^n)$</span>, so the Catalan numbers are not super-exponential; it is <span class="math-container">$\omega(2^n)$</span>, so they're not subexponential either. The Catalan numbers are just exponential.</p>
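<p>A quick numerical check of that asymptotic (my own sketch, not part of the original answer): the ratio of consecutive Catalan numbers is $C(n+1)/C(n) = (4n+2)/(n+2)$, which tends to $4$, so the growth is exponential with base 4 up to a polynomial factor:</p>

```python
from math import comb

def catalan(n):
    # closed form: C(n) = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

# C(n+1)/C(n) = (4n + 2)/(n + 2) -> 4 as n grows:
# exponential base 4, hence neither sub- nor super-exponential
for n in [10, 100, 1000]:
    print(n, catalan(n + 1) / catalan(n))
```

<p>So by the definitions above, $nC(n-1)$ from the question is plain exponential as well: the extra factor $n$ is polynomial and cannot push $\Theta(4^n\,n^{-3/2})$ past $\omega(c^n)$ for every $c$.</p>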
<p>An exception to the above definitions is that, in some contexts, functions of the form <span class="math-container">$b^{n^k}$</span> for constants <span class="math-container">$b,k>1$</span> are considered to be exponential, even though
<span class="math-container">$$
\lim_{n\to\infty}\frac{b^{n^k}}{c^n}=\lim_{n\to\infty}b^{n^k-n\log_b c}=\infty\,.$$</span>
For example, the complexity class <strong>EXP</strong> is defined as the class of languages decided by Turing machines running in time <span class="math-container">$O(2^{n^k})$</span> for any <span class="math-container">$k$</span>. Thanks to <a href="https://cs.stackexchange.com/users/683/yuval-filmus">Yuval Filmus</a> for pointing this out.</p>
| 192
|
algorithm complexity
|
What would Dijkstra's shortest path algorithm complexity be with the following data structure?
|
https://cs.stackexchange.com/questions/85311/what-would-dijkstras-shortest-path-algorithm-complexity-be-with-the-following-d
|
<p>Considering $n$ pieces of data, what would Dijkstra's shortest path algorithm's time complexity be if it was stored using a data structure with the following properties? </p>
<p>• delete the record with the minimum value of the key (complexity $O(\log n)$);</p>
<p>• decrease the key of some record (complexity $O(1)$);</p>
<p>• find the record with the minimum value of the key (complexity $O(1)$);</p>
<p>• insert a new record (complexity $O(1)$).</p>
<p>I think it would be $O(n \log n)$, but am having trouble proving it. Does anyone have suggestions? Thanks!</p>
| 193
|
|
algorithm complexity
|
Complexity analysis of an unsolvable algorithmic problem?
|
https://cs.stackexchange.com/questions/56556/complexity-analysis-of-an-unsolvable-algorithmic-problem
|
<p>In my automata theory class, for our term project we are required to present a complexity analysis for our algorithmic problem. I have chosen an unsolvable problem, and my professor mentioned off-the-cuff that any unsolvable problem would have infinite complexity.</p>
<p>However, this strikes me as strange. It feels intuitive to me to say that an unsolvable problem would have no complexity, as any attempt to solve it would be impossible. The quantity of infinity says to me that "if you had unlimited time, you could solve it", but what if it's a question that the answer simply cannot be computed for?</p>
<p>Is this O(infinity) concept common in algorithmic complexity analyses?</p>
|
<p>Basically, this is a degenerate case. We can adopt the convention that the running time of any algorithm is $\infty$ (so its complexity is $\infty$), or we can adopt the convention that it doesn't have a complexity. For informal conversation, probably it doesn't matter much what convention you adopt, as long as it's clear which you're using and what you mean.</p>
<p>See also <a href="https://cs.stackexchange.com/q/55271/755">how to calculate time complexity of non terminating loops</a> for discussion of the running time of an algorithm that never terminates, and <a href="https://cs.stackexchange.com/q/13669/755">What is the difference between an algorithm, a language and a problem?</a> for background on the difference between the complexity of an algorithm vs the complexity of a problem.</p>
| 194
|
algorithm complexity
|
Time complexity for logarithmic algorithm
|
https://cs.stackexchange.com/questions/165211/time-complexity-for-logarithmic-algorithm
|
<p>I am trying to find complexity for following algorithm. It is from "The Algorithm Design Manual" book.</p>
<pre><code>for k = 1 to n:
x = k
while (x < n):
        print '*'
x = 2x
</code></pre>
<p>I simulated algorithm for some values. Each time inner loop operates on <code>n-k</code> value.</p>
<pre><code>k=1
x=1
x=2
x=4
x=8
...
k=2
x=2
x=4
x=8
x=16
k=3
x=3
x=6
x=12
</code></pre>
<p>And I do think that it has complexity of</p>
<p><span class="math-container">$\sum\limits_{k=1}^{n}k*\lg(n-k)$</span></p>
<p>What do you think?</p>
<p><strong>Edit 1</strong></p>
<p>After some time, I think it should be <span class="math-container">$\sum\limits_{k=1}^{n}\lg(n-k)$</span></p>
|
<p><span class="math-container">$\sum\limits_{k=1}^{n}\log \frac{n}{k} = \sum\limits_{k=1}^{n}\log n-\sum\limits_{k=1}^{n}\log k = \log n^n - \log n! = n \log n - (n \log n - \Theta(n)) = \Theta(n)$</span>, using Stirling's approximation <span class="math-container">$\log n! = n \log n - \Theta(n)$</span>. So the total number of inner-loop iterations, and hence the running time, is <span class="math-container">$O(n)$</span>.</p>
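<p>An empirical check (my own sketch): counting the prints confirms the linear bound. Because each inner loop runs $\lceil \log_2(n/k)\rceil$ times, the total comes out close to $2n$, which is still $\Theta(n)$:</p>

```python
def star_count(n):
    # simulate the loop from the question and count the '*' prints
    count = 0
    for k in range(1, n + 1):
        x = k
        while x < n:
            count += 1
            x *= 2
    return count

# the count/n ratio stays bounded (around 2) as n grows
for n in [100, 1000, 10000]:
    print(n, star_count(n), round(star_count(n) / n, 3))
```
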
| 195
|
algorithm complexity
|
Complexity class of an algorithm
|
https://cs.stackexchange.com/questions/122216/complexity-class-of-an-algorithm
|
<p>What is the complexity class of an algorithm that runs in <span class="math-container">$n^{\mathcal{O}(\sqrt{n}\log n)}$</span> time? </p>
<p>As <span class="math-container">$n$</span> gets large, <span class="math-container">$\sqrt{n}\log n$</span> increases at a very slow rate. Does this mean that the algorithm has the same complexity as <span class="math-container">$n^{\mathcal{O}(1)}$</span>, which would be in <span class="math-container">$P$</span>?</p>
|
<p>No, <span class="math-container">$\sqrt{n}$</span> increases far faster than <span class="math-container">$O(1)$</span>, and <span class="math-container">$n^{\sqrt{n}}$</span> grows far faster than <span class="math-container">$n^{O(1)}$</span>: rewriting in base 2, <span class="math-container">$n^{\sqrt{n}\log n} = 2^{\sqrt{n}(\log n)^2}$</span>, and the exponent <span class="math-container">$\sqrt{n}(\log n)^2$</span> eventually exceeds <span class="math-container">$c \log n$</span> for every constant <span class="math-container">$c$</span>. So it certainly does not have the same runtime. See <a href="https://cs.stackexchange.com/q/824/755">Sorting functions by asymptotic growth</a>.</p>
<p>There may be no standard named complexity class; the complexity class is simply the class of all problems solvable by algorithms that run in time <span class="math-container">$n^{O(\sqrt{n} \log n)}$</span>, and there's probably not much more to say.</p>
| 196
|
algorithm complexity
|
Algorithmic complexity of Sub-array with sum = target algorithm
|
https://cs.stackexchange.com/questions/63837/algorithmic-complexity-of-sub-array-with-sum-target-algorithm
|
<p>Question: Given an array of positive integers and a target total of X, find if there exists a contiguous subarray with sum = X </p>
<p>E.g., if the array is [1, 3, 5, 18] and X = 8, the output is True; if X = 10, the output is False.</p>
<p>The approach I can think of is to expand a sub-array window until the sum of the sub-array is == target or > target. If it is > target, shrink the sub-array by moving its first element to the right.</p>
<p>It appears that the worst-case complexity is O(N), since I am moving either the start or the end index of the sub-array, so in the worst case I will spend at most 2*N iterations. Is that analysis correct?</p>
<pre><code>BOOL checkIfArrHasSum(long *arr, size_t size, long target)
{
    long currSum = arr[0];
long startInd = 0;
long nextIndToCheck = 1;
while (nextIndToCheck < size)
{
if(currSum == target) return YES;
if (currSum + arr[nextIndToCheck] == target)
return YES;
else if(currSum + arr[nextIndToCheck] < target)
{
currSum = currSum + arr[nextIndToCheck];
nextIndToCheck++;
}
else
{
currSum = currSum - arr[startInd];
startInd++;
}
        if(startInd == nextIndToCheck)
        {
            // window became empty: restart it as the single element arr[startInd]
            nextIndToCheck = startInd+1;
            currSum = arr[startInd];
        }
}
return NO;
}
</code></pre>
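<p>For reference, the same idea written as a standard sliding window in Python (a sketch of mine, not a transcription of the code above) makes the O(N) argument easier to see: each step of the loop advances either the right end or the left end of the window, so there are at most 2N steps in total:</p>

```python
def has_subarray_with_sum(arr, target):
    # sliding window over positive integers: grow on the right,
    # shrink on the left; start and end each advance at most n times
    start = 0
    curr = 0
    for end, value in enumerate(arr):
        curr += value                 # grow the window to the right
        while curr > target and start <= end:
            curr -= arr[start]        # shrink from the left
            start += 1
        if curr == target:
            return True
    return False

print(has_subarray_with_sum([1, 3, 5, 18], 8))   # True  (3 + 5)
print(has_subarray_with_sum([1, 3, 5, 18], 10))  # False
```

<p>Note the window argument relies on all elements being positive, as the problem states; with negative numbers, shrinking the window no longer guarantees a smaller sum.</p>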
| 197
|
|
algorithm complexity
|
MST: Prim's algorithm complexity, why not $O(EV \lg V)$?
|
https://cs.stackexchange.com/questions/13608/mst-prims-algorithm-complexity-why-not-oev-lg-v
|
<p>According to CLRS, the Prim's algorithms is implemented as below -- </p>
<blockquote>
<p>$\mathtt{\text{MST-PRIM}}(G,w,r)$ </p>
<ul>
<li>for each $u \in V[G]$ do<br>
<ul>
<li>$\mathtt{\text{key}}[u] \leftarrow \infty$ </li>
<li>$\pi[u] \leftarrow \mathtt{\text{NIL}}$ </li>
</ul></li>
<li>$\mathtt{\text{key}}[r] \leftarrow 0$ </li>
<li>$Q \leftarrow V[G]$ </li>
<li>while $Q \ne \emptyset$ do // ... $O(V)$
<ul>
<li>$u$ $\leftarrow$ $\mathtt{\text{EXTRACT-MIN}}(Q)$ // ... $O(\lg V)$<br>
<ul>
<li>for each $v \in \mathtt{\text{adj}}[u]$ do // ... $O(E)$<br>
<ul>
<li>if $v \in Q$ and $w(u,v) \gt \mathtt{\text{key}}[v]$
<ul>
<li>then $\pi[v] \leftarrow u$
<ul>
<li>$\mathtt{\text{key}}[v] \leftarrow w(u,v)$ // $\mathtt{\text{DECREASE-KEY}}$ ... $O(\lg V)$</li>
</ul></li>
</ul></li>
</ul></li>
</ul></li>
</ul></li>
</ul>
</blockquote>
<p>The book says the total complexity is $O(V \lg V + E \lg V) \approx O(E \lg V)$. However, what I understood is that the inner <code>for</code> loop with the <code>DECREASE-KEY</code> operation will cost $O(E \lg V)$, and the outer <code>while</code> loop encloses both the <code>EXTRACT-MIN</code> and the inner <code>for</code> loop, so the total complexity should be $O(V (\lg V + E \lg V)) = O(V \lg V + EV \lg V) \approx O(EV \lg V)$. </p>
<p>Why is the complexity analysis not performed that way, and what is wrong with my formulation?</p>
|
<p>The complexity is derived as follows. The initialization phase costs <span class="math-container">$O(V)$</span>. The <span class="math-container">$while$</span> loop is executed <span class="math-container">$\left| V \right|$</span> times. The <span class="math-container">$for$</span> loop nested within the <span class="math-container">$while$</span> loop is executed <span class="math-container">$degree(u)$</span> times for the current vertex <span class="math-container">$u$</span>, not <span class="math-container">$O(E)$</span> times per iteration. Summing over all vertices, the handshaking lemma implies that there are <span class="math-container">$\Theta(E)$</span> implicit DECREASE-KEYs in total. Therefore, the complexity is: <span class="math-container">$\Theta(V)* T_{EXTRACT-MIN} + \Theta(E) * T_{DECREASE-KEY}$</span>.</p>
<p>The actual complexity depends on the data structure actually used in the algorithm.
Using an array, <span class="math-container">$T_{EXTRACT-MIN} = O(V), T_{DECREASE-KEY} = O(1)$</span>, complexity is <span class="math-container">$O(V^2)$</span> in the worst case.</p>
<p>Using a binary heap, <span class="math-container">$T_{EXTRACT-MIN} = O(\log V), T_{DECREASE-KEY} = O(\log V)$</span>, complexity is <span class="math-container">$O(E \log V)$</span> in the worst case. Here is why: since the graph is connected, <span class="math-container">$\left| E \right| \ge \left| V \right| - 1$</span>, so <span class="math-container">$O(V \log V + E \log V) = O(E \log V)$</span>; also, <span class="math-container">$E$</span> is at most <span class="math-container">$V^2$</span> in the worst case (a dense graph). This is probably the point you missed.</p>
<p>Using a Fibonacci Heap, <span class="math-container">$T_{EXTRACT-MIN} = O(\log V)$</span> amortized, <span class="math-container">$T_{DECREASE-KEY} = O(1)$</span> amortized, complexity is <span class="math-container">$O(E + V \log V)$</span> in the worst case. </p>
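To make the accounting concrete, here is a sketch of Prim's algorithm with a binary heap in Python (the graph and function name are mine). Python's `heapq` has no DECREASE-KEY, so the sketch uses the common lazy-deletion workaround: every relaxation pushes a fresh entry and stale entries are skipped on extraction. That is still O(E) pushes and pops at O(log E) = O(log V) each, matching the O(E log V) bound:

```python
import heapq

def prim_mst_weight(adj, r=0):
    """Total weight of a minimum spanning tree of a connected graph.

    adj maps each vertex to a list of (neighbour, weight) pairs.
    Lazy deletion replaces DECREASE-KEY: O(E) heap operations,
    each costing O(log E) = O(log V), so O(E log V) overall."""
    in_tree = set()
    total = 0
    heap = [(0, r)]                      # (key, vertex)
    while heap:
        key, u = heapq.heappop(heap)     # EXTRACT-MIN
        if u in in_tree:
            continue                     # stale entry, skip it
        in_tree.add(u)
        total += key
        for v, w in adj[u]:              # degree(u) iterations, Theta(E) in total
            if v not in in_tree:
                heapq.heappush(heap, (w, v))   # the "implicit DECREASE-KEY"
    return total
```

On the triangle with edges 0-1 (weight 1), 1-2 (weight 2) and 0-2 (weight 4), this returns 3, the weight of the MST {0-1, 1-2}.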
| 198
|
algorithm complexity
|
Algorithm Time complexity analysis for algorithm having two different time complexities
|
https://cs.stackexchange.com/questions/63361/algorithm-time-complexity-analysis-for-algorithm-having-two-different-time-compl
|
<p>I'm implementing an algorithm that analyzes several properties of a large set of integers; its time complexity depends on $N$ (the set length) and $M$ (the number of bits needed to represent the numbers). I'm having trouble figuring out how to express its time complexity because I don't know how to handle the following situation:</p>
<p>Empirically, when $N$ is constant and $M$ grows, the operation count grows at a constant rate ($R1$) until it reaches a threshold that depends on the value of $N$; after that, the operation count does not grow any more.</p>
<p>On the other hand, when $M$ is constant and $N$ grows, the operation count also grows at a constant rate ($R2$) until it reaches a threshold that depends on the value of $M$; after that, it continues growing, but at a slower rate.</p>
<p><a href="https://i.sstatic.net/bpaC5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bpaC5.png" alt="enter image description here"></a></p>
<p>My understanding is that the algorithm has two different time complexities depending on the values of $N$ and $M$.</p>
<p>Is there any way to integrate this behavior into a single Big-O expression?</p>
<p>And if not, what would be the right way to describe the time complexity of an algorithm with these properties?</p>
|
<p>No, you can't determine the asymptotic worst-case running time from these two graphs.</p>
<p>First, plotting empirical running times is not a reliable way to determine worst-case running time: <a href="https://cs.stackexchange.com/q/857/755">How to fool the plot inspection heuristic?</a>. So, the approach of graphing performance on some testcases isn't a reliable way to get a "big-O" running time.</p>
<p>Second, knowing just these two graphs isn't enough to uniquely determine how it depends as a function of both $N,M$. There are multiple possibilities for functions $f(N,M)$ that are consistent with both of these graphs. For example, a two-variable function $f(N,M)$ is not uniquely determined by the one-variable functions $f(100,M)$ and $f(N,100)$.</p>
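A toy illustration of that second point (the two functions below are invented for the example, and both constants are pinned at 100): two very different cost functions can agree perfectly on both one-variable slices that a pair of graphs would show, yet diverge everywhere else.

```python
def f1(N, M):
    return N + M

def f2(N, M):
    return N + M + (N - 100) * (M - 100)

# Identical along both "slices" (N fixed at 100, or M fixed at 100) ...
assert all(f1(100, M) == f2(100, M) for M in range(1, 1000))
assert all(f1(N, 100) == f2(N, 100) for N in range(1, 1000))

# ... yet far apart away from those slices:
print(f1(200, 200))  # 400
print(f2(200, 200))  # 10400
```

So two experiments, one per variable, cannot pin down a two-variable running time.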
<p>Third, I strongly suspect that those two graphs don't tell the whole story. For instance, you show us only a single graph for when $N$ is held constant and $M$ is varied. But I suspect the shape of the graph (or the slope of the line) might depend on exactly which constant value of $N$ you choose: a very large value of $N$ might lead to a different graph than a small value of $N$. This further highlights that we don't have enough information to uniquely infer the running time.</p>
<hr>
<p>So what <em>should</em> you do? Rather than plotting running times and treating the algorithm as a black box, it's probably better to start by looking at the algorithm itself. Look at the pseudocode of the algorithm and analyze its worst-case running time using standard techniques. See, e.g., <a href="https://cs.stackexchange.com/q/23593/755">Is there a system behind the magic of algorithm analysis?</a> and <a href="https://cs.stackexchange.com/q/192/755">How to come up with the runtime of algorithms?</a>. This allows a mathematically rigorous analysis that is provably correct and that covers the running time for all combinations of $N,M$.</p>
| 199
|