Dataset columns: category (stringclasses, 107 values) · title (string, length 15–179) · question_link (string, length 59–147) · question_body (string, length 53–33.8k) · answer_html (string, length 0–28.8k) · __index_level_0__ (int64, 0–1.58k)
algorithm complexity
Complexity of recursive algorithm
https://cs.stackexchange.com/questions/121142/complexity-of-recursive-algorithm
<p>I'm having a hard time understanding the time complexity of my solution for the <a href="https://leetcode.com/problems/combination-sum/" rel="nofollow noreferrer">combination sum problem</a>. The problem is as follows:</p> <blockquote> <p>Given a set of candidate numbers (<code>candidates</code>) (without duplicates) and a target number (<code>target</code>), find all unique combinations in candidates where the candidate numbers sum to <code>target</code>.</p> <p>The same repeated number may be chosen from candidates an unlimited number of times.</p> </blockquote> <p>Below is my solution written in Java using recursion:</p> <pre><code>public List&lt;List&lt;Integer&gt;&gt; combinationSum(int[] candidates, int target) { Arrays.sort(candidates); List&lt;List&lt;Integer&gt;&gt; results = new ArrayList&lt;&gt;(); recurse(results, candidates, target, 0, new ArrayList&lt;&gt;()); return results; } private void recurse(List&lt;List&lt;Integer&gt;&gt; results, int[] candidates, int target, int idx, List&lt;Integer&gt; acc) { if (target == 0) { results.add(new ArrayList&lt;&gt;(acc)); return; } for (int i = idx; i &lt; candidates.length; i++) { if (candidates[i] &gt; target) { return; } acc.add(candidates[i]); recurse(results, candidates, target - candidates[i], i, acc); acc.remove(acc.size() - 1); } } </code></pre> <p>One can observe that the problem size of each recursive step is potentially unchanged, and the depth of the recursion is bounded by the <code>target</code> value; e.g. if the <code>candidates</code> array contains the number 1, the recursion will happen <code>target</code> times. 
If I simplify the code, the interesting part is:</p> <pre><code>private void recurse(List&lt;List&lt;Integer&gt;&gt; results, int[] candidates, int target, int idx, List&lt;Integer&gt; acc) { if (target == 0) { results.add(new ArrayList&lt;&gt;(acc)); return; } for (int i = idx; i &lt; candidates.length; i++) { acc.add(candidates[i]); recurse(results, candidates, target - candidates[i], i, acc); acc.remove(acc.size() - 1); } } </code></pre> <p>This feels like <code>O(candidates.length * target)</code> for the most pessimistic <code>candidates</code> input containing the number 1.</p> <p>Since my solution is not really a divide-and-conquer algorithm, I guess that I can't apply <a href="https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)" rel="nofollow noreferrer">the master theorem</a>. It feels like a backtracking algorithm, but I'm not familiar with finding upper bounds for that type of algorithm.</p> <p>Can someone please advise how to approach the complexity analysis of the above code?</p>
<p>The obvious algorithm tries all subsets, i.e., is <span class="math-container">$O(2^n)$</span> if you have <span class="math-container">$n$</span> elements to select from. And your algorithm does that (try with the first element, try without the first element).</p> <p>The complexity is given by the recurrence:</p> <p><span class="math-container">$$T(n) = 2 T(n - 1), \quad T(0) \text{ given}$$</span></p> <p>(try two possibilities for the first one, recurse on the <span class="math-container">$n - 1$</span> others for each).</p>
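One way to sanity-check such bounds is to instrument the recursion with a call counter. Below is a hedged Python port of the Java solution above (the port and its names are illustrative, not the original poster's code). With `candidates = [1]` the recursion depth is exactly `target`, while adding more small candidates makes the call count blow up:

```python
def combination_sum(candidates, target):
    """Python port of the Java solution, with a recursion-call counter (sketch)."""
    candidates = sorted(candidates)
    results, counter = [], [0]

    def recurse(target, idx, acc):
        counter[0] += 1
        if target == 0:
            results.append(list(acc))
            return
        for i in range(idx, len(candidates)):
            if candidates[i] > target:
                return          # candidates are sorted, so we can stop early
            acc.append(candidates[i])
            recurse(target - candidates[i], i, acc)
            acc.pop()

    recurse(target, 0, [])
    return results, counter[0]

# With candidates = [1], the recursion simply walks target, target-1, ..., 0:
results, calls = combination_sum([1], 10)   # calls grows linearly here
```

With several candidates, the counter grows exponentially in `target`, consistent with the exponential analysis above.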
200
algorithm complexity
Time complexity of Prim&#39;s algorithm
https://cs.stackexchange.com/questions/85870/time-complexity-of-prims-algorithm
<p>There is this Prim's algorithm I am studying, the time complexity of which is $O(n^2)$ (in the adjacency-matrix representation).</p> <p>As far as I have understood, that is because we have to check all the nodes for every node; that is, when checking the first node's best edge, we have to check edges from the current node to all other nodes. That makes $(n-1)$ edges at most. We check this for all nodes, so it would be $n(n-1)$ edges to check at most.</p> <p>So why is the time complexity $O(n^2)$?</p> <p>In addition, why don't we consider the edges which create a loop, and why don't we omit them in the algorithm? Does that make any difference in the time complexity?</p>
<p>The time complexity is $O(n^2)$ because $O(n\cdot(n-1)) = O(n^2)$.</p> <p>Big-O notation describes the worst-case performance of an algorithm; it does not give the exact number of steps the algorithm will make, only its overall order of growth.</p> <p>For example $$O(2n) = O(n)\\O(3n) = O(n)\\O(\frac{n}{2}) = O(n)\\O(2n^2) = O(n^2)$$</p>
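For concreteness, here is a minimal Python sketch (names and structure are my own, not from the question) of the adjacency-matrix version: each of the $n$ rounds scans all $n$ vertices for the cheapest crossing edge, which is exactly the $n(n-1)$-style check the question describes and gives $O(n^2)$ overall:

```python
INF = float("inf")

def prim_total_weight(adj):
    """O(n^2) Prim on an adjacency matrix; adj[u][v] = edge weight or INF."""
    n = len(adj)
    in_tree = [False] * n
    best = [INF] * n            # cheapest known edge linking each vertex to the tree
    best[0] = 0                 # start the tree at vertex 0
    total = 0
    for _ in range(n):          # n rounds ...
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        for v in range(n):      # ... each scanning all n vertices: O(n^2) total
            if not in_tree[v] and adj[u][v] < best[v]:
                best[v] = adj[u][v]
    return total
```

Edges that would create a cycle are never chosen because both endpoints are already marked `in_tree`; skipping them costs nothing extra, so they do not change the $O(n^2)$ bound.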
201
algorithm complexity
Is the DPLL algorithm complexity in terms of # of clauses or # of variables?
https://cs.stackexchange.com/questions/14004/is-the-dpll-algorithm-complexity-in-terms-of-of-clauses-or-of-variables
<p>I'm a bit confused how worst case complexity is estimated for the <a href="http://en.wikipedia.org/wiki/DPLL_algorithm" rel="nofollow">DPLL algorithm</a>. Is it in terms of number of clauses, number of variables, or something else?</p>
<p>In the papers I've read the time complexity of DPLL is expressed in terms of the number of variables in the CNF formula. Using the number of clauses is inappropriate in general because it is known that random k-SAT instances go through an easy-hard-easy transition if you fix the number of variables and increase the number of clauses. The solution space goes from underconstrained to overconstrained as the number of clauses increases with the hard instances clustering between those extremes.</p>
202
algorithm complexity
Level Ancestor Query long-paths algorithm complexity of sqrt n
https://cs.stackexchange.com/questions/130826/level-ancestor-query-long-paths-algorithm-complexity-of-sqrt-n
<p>Can someone explain why the complexity of a query for LA is √n when we only decompose the tree into long paths? How can we show/prove that?</p> <p>How can we prove that the <strong>number of paths can be as high as O(√n)</strong>? Please help, I've been struggling for a couple of days now.</p> <p>Stage 1: <a href="https://en.wikipedia.org/wiki/Level_ancestor_problem#Naive_methods" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Level_ancestor_problem#Naive_methods</a></p> <p><strong>Long-path decomposition of a Tree</strong></p> <p><a href="https://i.sstatic.net/kpaCD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kpaCD.png" alt="enter image description here" /></a></p> <p>This is a recursive method that decomposes a given tree into paths. This stage starts off by finding the longest root-to-leaf path in the tree. It then removes this path from the tree, which breaks the remainder of the tree into sub-trees, and then recursively processes each sub-tree. Every time a path is decomposed, an array is created in association with the path that contains the elements on the path from the root to the leaf. The base case of this recursion is when the tree is a path, in which case its removal leaves an empty graph. Each vertex v has a unique ladder, which is the ladder containing it, and we call it &quot;v's ladder&quot;. However, after this pre-processing stage, the queries cannot be answered quickly. In fact, in order to answer a level ancestor query, the algorithm needs to jump from one path to another until it reaches the root, and there could be Θ(√n) such paths on a leaf-to-root path. This leads us to an algorithm that can pre-process the tree in O(n) time and answer queries in O(√n) time.</p> <p>I only need to know how to show that the number of paths can be O(√n) in the worst case.</p>
203
algorithm complexity
Definition of complexity of an algorithm
https://cs.stackexchange.com/questions/157219/definition-of-complexity-of-an-algorithm
<p>In a sorting algorithm, While computing the complexity of an algorithm why do we only account for the number of comparisons done but not the number of swappings done? And what's the formal definition of complexity?</p>
<p>The formal definition of the complexity of an algorithm is its runtime as a function of the length of the input. For example, to represent a number <span class="math-container">$n$</span> you need about <span class="math-container">$\log_2(n)$</span> bits, so the actual time complexity is the runtime relative to <span class="math-container">$\log_2(n)$</span> bits, not relative to <span class="math-container">$n$</span>.</p> <p>For instance, an algorithm may run in linear time in terms of the input's numerical size, but not in terms of the input's length.</p> <p>I would encourage you to read about pseudo-polynomial complexity. For example, say you have an algorithm that runs in <span class="math-container">$O(n^2)$</span>, where <span class="math-container">$n$</span> is the numerical value of the input. Since you need <span class="math-container">$\log_2(n)$</span> bits to represent the number <span class="math-container">$n$</span>, the algorithm actually runs in exponential time relative to the length of its representation in bits.</p> <p>Why is an algorithm that runs in <span class="math-container">$O(n^2)$</span> in terms of the numerical value of the input actually exponential?</p> <p>As we said, to represent the number <span class="math-container">$n$</span> you need <span class="math-container">$x=\log_2(n)$</span> bits. Thus this is equivalent to an exponential complexity in terms of <span class="math-container">$x$</span>, the number of bits of the numerical value: <span class="math-container">$O(2^{2x}) = O(2^{2\log_2(n)}) = O(n^2)$</span>.</p> <p>An algorithm that runs in polynomial time relative to the numerical value of the input, but in exponential time relative to the number of bits needed to represent that input, is called pseudo-polynomial.</p> <p>This is why some algorithms which run in polynomial or even linear time in the numerical value aren't considered efficient.</p> <p>As for your first question, as Yuval and Iqazra pointed out: 
In comparison-based sorting, the elementary operation is comparing input values.</p> <p>In most comparison-based sorting algorithms, in order to swap two values you first need to compare them: you need information about the two values in order to decide whether to swap them at all.</p> <p>Take bubble sort, for instance. The number of comparisons is <span class="math-container">$O(n^2)$</span>; however, the number of swaps can be much smaller.</p>
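A standard illustration of a pseudo-polynomial running time (my example, not from the answer above) is trial-division primality testing: it takes $O(\sqrt{n}) \subseteq O(n)$ division steps in the numerical value $n$, which is exponential in the $\log_2(n)$-bit length of the input:

```python
import math

def is_prime_trial(n):
    """Trial division: O(sqrt(n)) steps in the VALUE n.
    Since n occupies only about log2(n) bits, this is exponential
    in the input LENGTH -- a pseudo-polynomial algorithm."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```

Adding one bit to the input roughly doubles the value of $n$, and hence multiplies the step count by about $\sqrt{2}$: polynomial in the value, exponential in the length.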
204
algorithm complexity
Complexity of recursive Fibonacci algorithm
https://cs.stackexchange.com/questions/14733/complexity-of-recursive-fibonacci-algorithm
<p>Using the following recursive Fibonacci algorithm:</p> <pre><code>def fib(n): if n==0: return 0 elif n==1: return 1 return (fib(n-1)+fib(n-2)) </code></pre> <p>If I input the number 5 to find fib(5), I know this will output 5, but how do I examine the complexity of this algorithm? How do I calculate the steps involved? </p>
<p>Most of the time, you can represent recursive algorithms using recurrence equations. In this case the recurrence for this algorithm is $T(n) = T(n-1) + T(n-2) + \Theta(1)$. Then you can find the closed form of the equation using the substitution method or the expansion method (or any other method used to solve recurrences). In this case you get $T(n) = \Theta(\phi^n)$, where $\phi$ is the golden ratio ($\phi = \frac{1 + \sqrt{5}}{2}$).</p> <p>If you want to find out more about how to solve recurrences, I strongly recommend you read chapter 4 of <a href="http://rads.stackoverflow.com/amzn/click/0262033844">Introduction to Algorithms</a>.</p>
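You can also watch the $\Theta(\phi^n)$ growth directly by counting calls: the call count satisfies the recurrence $C(n) = C(n-1) + C(n-2) + 1$, which grows at the same $\phi^n$ rate. A small instrumented Python sketch (the instrumentation is mine, not part of the original algorithm):

```python
def fib_calls(n, counter=None):
    """Naive recursive Fibonacci, returning (fib(n), total number of calls)."""
    if counter is None:
        counter = [0]           # fresh counter for each top-level invocation
    counter[0] += 1
    if n < 2:
        return n, counter[0]
    a, _ = fib_calls(n - 1, counter)
    b, _ = fib_calls(n - 2, counter)
    return a + b, counter[0]
```

The ratio of successive call counts approaches $\phi \approx 1.618$, which is the exponential blow-up the closed form predicts.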
205
algorithm complexity
Euclid&#39;s Algorithm Time Complexity
https://cs.stackexchange.com/questions/72200/euclids-algorithm-time-complexity
<p>I have a question about Euclid's Algorithm for finding greatest common divisors.</p> <p>gcd(p,q) where p > q and q is an n-bit integer.</p> <p>I'm trying to follow a time complexity analysis of the algorithm (the input is n bits, as above):</p> <pre><code>gcd(p,q) if (p == q) return q if (p &lt; q) return gcd(q,p) while (q != 0) temp = p % q p = q q = temp return p </code></pre> <p>I already understand that the sum of the two numbers, <code>u + v</code> where <code>u</code> and <code>v</code> stand for the initial values of <code>p</code> and <code>q</code>, reduces by a factor of at least <code>1/2</code> per iteration.</p> <p>Now let <code>m</code> be the number of iterations of this algorithm. We want to find the smallest integer <code>m</code> such that <code>(1/2)^m(u + v) &lt;= 1</code>.</p> <p>Here is my question. I get that the sum of the two numbers at each iteration is upper-bounded by <code>(1/2)^m(p + q)</code>. But I don't really see why the maximum <code>m</code> is reached when this quantity is <code>&lt;= 1</code>. </p> <p>The answer is O(n) for an n-bit <code>q</code>, but this is where I'm getting stuck.</p> <p>Please help!!</p>
<p>Here is the idea of the proof. Throughout, we assume that $p \geq q$.</p> <p>Let $p^{(t)},q^{(t)}$ be the values of $p,q$ after $t$ iterations, so that $p^{(0)}=p$ and $q^{(0)}=q$.</p> <p>Let $F_m$ be the $m$th Fibonacci number, which satisfies $F_m = \Theta(\varphi^m)$, where $\varphi = \frac{1+\sqrt{5}}{2} &gt; 1$. Suppose that $p \leq F_m$. If $q \leq F_{m-1}$ then $p^{(1)} = q \leq F_{m-1}$. If $q \geq F_{m-1}$ then $q^{(1)} = p \bmod q \leq F_{m-2}$, and so $p^{(2)} = q^{(1)} \leq F_{m-2}$.</p> <p>This argument shows that if $p \leq F_m$ then after at most (roughly) $m$ steps, the algorithm will terminate (since we cannot have $p \leq F_0 = 0$). Since $F_m$ grows exponentially, it's not hard to check that $p \leq F_{O(n)}$, where $n$ is now the length of $p$ in bits. Therefore the algorithm terminates in $O(n)$ steps.</p> <p>If you want a bound depending on $\min(p,q)$ rather than on $\max(p,q)$, simply notice that $p^{(1)} = q$, and run the above analysis on $p^{(1)}$.</p>
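The Fibonacci worst case in this proof can be checked empirically: feeding the algorithm two consecutive Fibonacci numbers forces one mod step per Fibonacci index, so the step count grows linearly in the bit length. A minimal Python sketch (the helper names are my own):

```python
def gcd_steps(p, q):
    """Iterative Euclid, returning (gcd, number of mod steps performed)."""
    steps = 0
    while q != 0:
        p, q = q, p % q
        steps += 1
    return p, steps

def fib_pair(m):
    """Return (F_m, F_{m+1}) with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a, b
```

Since $F_m = \Theta(\varphi^m)$, the index $m$ (and hence the step count) is $O(\log p) = O(n)$ for an $n$-bit input, matching the bound above.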
206
algorithm complexity
Simplex algorithm experimental complexity
https://cs.stackexchange.com/questions/76347/simplex-algorithm-experimental-complexity
<p>For a school project I am doing on linear programming, I've implemented the simplex algorithm in Python. </p> <p>I was hoping to check the complexity on a number of matrices. </p> <p>Preferably, they would have to be in standard form ($Ax=b$ and $x\geq0$) (or easily put into standard form) and would have to have a finite infimum (to avoid the unbounded case). Do you have any ideas? Or could you point me towards a paper or someone who has already done this? </p>
207
algorithm complexity
Complexity of Longest Palindromic Subsequence Algorithm
https://cs.stackexchange.com/questions/93104/complexity-of-longest-palindromic-subsequence-algorithm
<p>I'm trying to find the longest palindromic subsequence for any string and I've tried two approaches: </p> <ol> <li>Recursive Algorithm</li> <li>Dynamic Programming</li> </ol> <p>Dynamic programming should be the better option in this case because of the time complexity, but the time complexity of both algorithms is $O(n^2)$. My question is: if the time complexity is the same for both algorithms, why is the dynamic programming approach considered better than the recursive solution? I'm following the algorithms in <a href="https://www.geeksforgeeks.org/dynamic-programming-set-12-longest-palindromic-subsequence/" rel="nofollow noreferrer">this post</a>. It says that the time complexity of the dynamic programming algorithm is much better than the worst-case time complexity of the naive recursive implementation. What could be the worst case?</p>
<p>The dynamic programming approach is indeed <code>O(n^2)</code>. However, the recursive solution is exponential in <code>n</code>: any time two characters don't match, a subproblem of size <code>k</code> is converted into <code>2</code> subproblems of size <code>k-1</code> each. It's easy to see that, with all letters different, this produces a complexity of <code>O(2^n)</code>.</p> <p><a href="https://ideone.com/vrLD4W" rel="nofollow noreferrer">Here</a> is the example from the page you mention, modified to count and output the number of calls. A local copy to prevent link rot:</p> <pre><code>#include &lt;stdio.h&gt; #include &lt;string.h&gt; int max (int x, int y) {return (x &gt; y) ? x : y;} int count; int lps (char * seq, int i, int j) { count += 1; if (i == j) return 1; if (seq[i] == seq[j] &amp;&amp; i + 1 == j) return 2; if (seq[i] == seq[j]) return lps (seq, i + 1, j - 1) + 2; return max (lps (seq, i, j - 1), lps(seq, i + 1, j)); } int main () { char seq [] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; for (int n = 1; n &lt;= 26; n++) { count = 0; printf ("n = %d: answer = %d", n, lps (seq, 0, n - 1)); printf (", count = %d\n", count); } return 0; } </code></pre> <p>The output is:</p> <pre><code>n = 1: answer = 1, count = 1 n = 2: answer = 1, count = 3 n = 3: answer = 1, count = 7 n = 4: answer = 1, count = 15 n = 5: answer = 1, count = 31 ... n = 24: answer = 1, count = 16777215 n = 25: answer = 1, count = 33554431 n = 26: answer = 1, count = 67108863 </code></pre>
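For contrast with the exponential recursion, here is a minimal Python sketch (mine, not taken from the linked page) of the $O(n^2)$ dynamic-programming version, which fills each $(i, j)$ cell exactly once instead of revisiting subproblems:

```python
def lps_dp(seq):
    """Length of the longest palindromic subsequence, O(n^2) time and space."""
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]     # dp[i][j] = LPS length of seq[i..j]
    for i in range(n - 1, -1, -1):
        dp[i][i] = 1                     # a single character is a palindrome
        for j in range(i + 1, n):
            if seq[i] == seq[j]:
                dp[i][j] = dp[i + 1][j - 1] + 2
            else:
                dp[i][j] = max(dp[i][j - 1], dp[i + 1][j])
    return dp[0][n - 1]
```

There are $O(n^2)$ cells and each takes $O(1)$ work, so the all-distinct worst case that explodes the naive recursion costs the DP nothing extra.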
208
algorithm complexity
Complexity of the Dijkstra algorithm
https://cs.stackexchange.com/questions/57226/complexity-of-the-dijkstra-algorithm
<p>I'm a little confused about computing the time complexity of the <code>Dijkstra</code> algorithm. It is said that the complexity is in $O(|V|^2)$ - <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" rel="nofollow" title="Wikipedia - Dijkstra">Wikipedia - Dijkstra</a>, which I understand. It's because for each node, we could theoretically relax edges going to each vertex, so it is $n \cdot n$ times, or rather $n(n-1)$. </p> <p>On the other hand, I can't figure out why this complexity isn't $O(|E|+|V|)$. For each $v \in V$, we relax only those edges $e$ which weren't computed yet. If vertex $v$ is already computed (red in the animation on the linked Wikipedia page), we don't need to work with it anymore.</p> <p>So can I say that in one way, it is true that <code>Dijkstra</code> is in $O(|V|)$, but if we put the number of edges into the computation, can I say that it is in $O(|V|+|E|)$? </p>
<blockquote> <p>For each v from V, we relax only those edges e which weren't computed yet. If vertex v is already computed (red on gif above), we don't need to work with it anymore.</p> </blockquote> <p>You are assuming that each edge is visited only once, but this assumption is not quite right. Let's say we have two sets $S$ and $S'$, such that $V=S \cup S'$ and $S$ is the set of vertices for which we have found the shortest path from source $s$. Each time, we need to find an edge $e=(u,v)$ ($u \in S$ and $v \in S'$) that sits on a shortest path, but how do you find this edge? You need to either </p> <p>(1) use a brute-force algorithm and spend $O(|V|)$ to look at all edges $e=(u,v)$ ($u \in S$ and $v \in S'$) for finding the minimum one, which takes $O(|V|^2)$ overall (because each time you are looking again at edges that are not on a shortest path),</p> <p>or</p> <p>(2) use a min-heap and spend $O( \log |V| ) $ for finding that edge, and achieve $O( (|V| + |E|)\cdot \log |V| )$ overall running time. </p> <p>However, if the graph is <strong>unweighted</strong>, your assumption is right and you can achieve the running time $O(|V|+|E|)$.</p>
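Option (2) is the standard binary-heap implementation. A minimal Python sketch using `heapq` (the names and the "stale entry" handling are my own choices, not from the answer):

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]}. Returns {vertex: shortest distance}.
    Each heap operation costs O(log |V|); each edge can push at most one
    entry, giving O((|V| + |E|) log |V|) overall."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale entry: u was already settled
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The `continue` line is where the "each edge visited once" intuition fails for weighted graphs: a vertex may be pushed several times before it is settled, and the heap is what keeps that affordable.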
209
algorithm complexity
Compare Complexity of Graph Algorithm
https://cs.stackexchange.com/questions/52796/compare-complexity-of-graph-algorithm
<p>Assume I know that there is an algorithm of complexity<br> $ \mathcal{O}( \log ( \vert V \vert^2 \vert E \vert ) ) $ for a graph $G(E,V)$.</p> <p>How do I compare this, for example, to the complexity of<br> $ \mathcal{O}( \vert V \vert^2 + \vert E \vert ) $ or other complexity algorithms on $G$? </p> <p>There has to be a connection between $\vert E \vert$ and $\vert V \vert $, if I am right, which is needed to solve this. But the only thing I found is $\max \vert E \vert = \frac{\vert V \vert }{2}(\vert V \vert -1)$, which does not help here and in many other cases.</p> <p>Or should I just assume $\max \vert E \vert = \frac{\vert V \vert }{2}(\vert V \vert -1)$ and $\min \vert E \vert = \vert V \vert -1$ ?<br> Btw. must a vertex be connected via an edge to the graph per definition?</p>
<p>First of all $\min (|E|) = 0$, since the graph can be disconnected. $\min(|E|) = |V| - 1$ is true only for connected graphs. Whether a vertex needs to be connected to the graph depends on the problem being considered. In general, a vertex can be isolated. So, in general, $0 \leq |E| \leq {{|V|}\choose{2}}$ if the graph is not a multi-graph. If the graph is a multi-graph then there is no upper limit to $|E|$.</p> <p>Secondly, if we are comparing $O(\log(|V|^2|E|))$ against something like $O(|V|^2+|E|)$, the former will always be better than the latter, since $O(\log(|V|^2|E|)) = O(\log|E|+2\log|V|)$.</p> <p>So the question is whether to analyze algorithms in terms of $|E|$ (i.e. $|E|$ and $|V|$) or only in terms of $|V|$.</p> <p>Of course what you say about needing to relate $|V|$ and $|E|$ is correct. However, whenever a complexity is stated in terms of $|E|$, it is better than, say, substituting $|V|^2$ for $|E|$. Note that $|E|$ is always $O(|V|^2)$, even in the cases when it is smaller, for example for trees. $O()$ is an upper bound.</p> <p>If the analysis of an algorithm is in terms of $|E|$, then we get the complexity bound for all cases, whether the graph is dense or sparse. Thus an $|E||V|$-time algorithm is considered better than a $|V|^3$-time algorithm.</p> <p>In the cases where the graph is a tree, or a graph with max degree $d=O(1)$, $|E|$ is only $O(|V|)$. So, as an example, an $O(|E||V|)$ algorithm will be considered better than an $O(|V|^3)$ algorithm. Thus, if we are able to do a tighter analysis, we are sure that the algorithm will fare better on non-worst-case inputs.</p> <p>As pointed out by Raphael, $\Theta(|E|)$ is not the same as $O(|V|^2)$. $\Theta$ analysis is better wherever applicable. But usually we don't give a $\Theta$ analysis, because if we say some algorithm is $\Theta(f(n))$, and for some easy input the algorithm runs in $o(f(n))$ time, our statement will be wrong.</p>
210
algorithm complexity
Time Complexity of Algorithm
https://cs.stackexchange.com/questions/14298/time-complexity-of-algorithm
<p>I need help with finding out the time complexity of the following algorithm:</p> <pre><code>procedure VeryOdd(integer n): for i from 1 to n do if i is odd then for j from i to n do x = x + 1 for j from 1 to i do y = y + 1 </code></pre> <p>This is my attempt:</p> <p>$$ Loop1 = \Theta(n)$$ $$ Loop2 = \Theta(n)$$ $$ Loop3 = O(n)$$</p> <p>And we also know that loop2 and loop3 are executed only on every second iteration of the outer loop. So we know that:</p> <p>$$T(n) = \Theta(n) \cdot \frac{1}{2}(\Theta(n) + O(n)) = \Theta(n^2)$$</p> <p>Now to the thing I'm not so sure about, namely: is Loop3 really $$O(n)$$ and if yes, then is $$\Theta(n) + O(n) = \Theta(n)$$</p> <p>Thanks in advance</p>
<p>$$ Loop1 = \Theta(n) $$ Since the two inner loops together run about $n$ times per (odd) iteration of the outer loop, $$ Loop2 + Loop3 = \Theta(n) $$ $$ T(n) = \Theta(n) \cdot \frac{1}{2} \Theta(n) = \Theta(n^2) $$</p>
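The claim can be checked by simulating the loop bounds directly: for each odd $i$, Loop 2 runs $n-i+1$ times and Loop 3 runs $i$ times, so together they contribute exactly $n+1$ increments. A small Python sketch (mine, mirroring the pseudocode above):

```python
def very_odd_steps(n):
    """Count the x/y increments performed by VeryOdd(n)."""
    steps = 0
    for i in range(1, n + 1):
        if i % 2 == 1:
            steps += (n - i + 1)   # inner loop: j = i..n
            steps += i             # inner loop: j = 1..i
    return steps

# For each odd i the two inner loops do exactly n+1 increments,
# so the total is ceil(n/2) * (n+1): clearly Theta(n^2).
```

For example, the closed form gives $50 \cdot 101 = 5050$ increments for $n = 100$, which the simulation confirms.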
211
algorithm complexity
Counterexample to an Algorithm Complexity Statement
https://cs.stackexchange.com/questions/100944/counterexample-to-an-algorithm-complexity-statement
<p>Say I have the following statement:</p> <blockquote> <p>If <span class="math-container">$f(n) = O(s(n))$</span> and <span class="math-container">$g(n) = O(r(n))$</span>, then <span class="math-container">$f(n) - g(n) = \Theta(s(n) - r(n))$</span>.</p> </blockquote> <p>What would be a counterexample to this?</p>
<p>A lot of counterexamples! Take <span class="math-container">$f(n) = n, s(n) = n, g(n) = n, r(n) = n^2$</span>. Then <span class="math-container">$f(n) - g(n) = 0$</span>, which is certainly not <span class="math-container">$\Theta(s(n) - r(n)) = \Theta(n - n^2)$</span>.</p>
212
algorithm complexity
What time complexity is a reachability algorithm?
https://cs.stackexchange.com/questions/124959/what-time-complexity-is-a-reachability-algorithm
<p>I've read there are ways you can determine all reachable pairs using Strongly Connected Components. But I want to calculate all reachable nodes on the fly, so I don't have to store a massive reachability matrix in RAM. What sort of time complexity would be possible for an algorithm to calculate all reachable nodes in a directed graph, from a single node?</p> <p><a href="https://i.sstatic.net/SLpaw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SLpaw.png" alt="enter image description here"></a></p> <p>Here's a naive algorithm I came up with; I'm not sure of its time complexity. <span class="math-container">$O(V!)$</span>?</p> <p>It seems to have an <span class="math-container">$O(V)$</span> space complexity though.</p> <p>I've read about the Bellman-Ford algorithm, with a time complexity of <span class="math-container">$O(EV)$</span>, which is essentially <span class="math-container">$O(V^3)$</span>, and the Floyd-Warshall algorithm, which is <span class="math-container">$O(V^3)$</span>. They require <span class="math-container">$O(V)$</span> and <span class="math-container">$O(V^2)$</span> space complexity, respectively.</p> <p>The problem is that only inputs (incoming edges) can be determined in constant time. So, one would have to find (in <span class="math-container">$O(V)$</span> time) all outputs for a particular node. What I actually did in my solution is invert the graph using a similar technique, before running DFS. But I don't know if this is optimal... Also, due to a copy of the graph being stored in memory, my solution has a space complexity worse than the Bellman-Ford algorithm's. If this time complexity is also worse, I may as well use Bellman-Ford's algorithm.</p>
<p>You can calculate all reachable nodes in <span class="math-container">$O(V+E)$</span> time and <span class="math-container">$O(V)$</span> space, using DFS on the fly.</p> <p>If you want to use less than <span class="math-container">$O(V)$</span> space, then the problem becomes much more challenging. I recommend looking at how the <a href="https://en.wikipedia.org/wiki/SPIN_model_checker" rel="nofollow noreferrer">SPIN</a> model checker works. See <a href="https://en.wikipedia.org/wiki/Bitstate_hashing" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bitstate_hashing</a>. Even then, the space required is probably still <span class="math-container">$O(V)$</span>, just with a lower constant factor.</p>
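The $O(V+E)$ time and $O(V)$ space bound comes from an ordinary DFS (or BFS) that marks each vertex at most once and examines each edge at most once; a minimal iterative Python sketch (names are mine):

```python
def reachable(adj, source):
    """All nodes reachable from source in a directed graph.
    adj: {u: [v, ...]}. O(V + E) time, O(V) extra space."""
    seen = {source}
    stack = [source]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:      # each vertex is pushed at most once
                seen.add(v)
                stack.append(v)
    return seen
```

Every vertex enters `seen` at most once and every edge is scanned at most once, which is where the $O(V+E)$ bound comes from.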
213
algorithm complexity
Finding time complexity of an algorithm
https://cs.stackexchange.com/questions/68745/finding-time-complexity-of-an-algorithm
<p>I am to find the time complexity of my algorithm and I found one method <a href="https://en.wikipedia.org/wiki/Time_complexity" rel="nofollow noreferrer">here</a>.<br> However, I am really not sure about its correctness, thus I would like to check it.<br> Here is my algorithm (N is a set (len(N) is constant) and C is a number):</p> <pre><code>def f(N, C, k = 0): if C &lt; 0: return 0 if C == 0: return 1 if not N: return 0 i = 0 for l in N: if l &gt;= k: i += f(N, C - l, l) return i </code></pre> <p>And to my mind the time complexity is equal to:<br> a+b+c+d+len(N)*C, where a, b, c, d are constants.</p>
214
algorithm complexity
Complexity of dynamic programming algorithm for Knapsack
https://cs.stackexchange.com/questions/43175/complexity-of-dynamic-programming-algorithm-for-knapsack
<p>Dynamic programming algorithm for Knapsack is stated to have complexity $\mathcal O (nW)$.</p> <p>However, I've also seen the complexity stated as $\mathcal O (n^2V)$, where $V=\max v_i$.</p> <p>(Here $n$ is the number of items and $W$ the weight limit).</p> <p>I see from the algorithm that the first complexity measure is correct: <a href="http://en.wikipedia.org/wiki/Knapsack_problem" rel="nofollow">http://en.wikipedia.org/wiki/Knapsack_problem</a></p> <p>Can someone tell me, why the other complexity measure works ?</p>
<p>The first complexity measure is in terms of the target weight, the second in terms of the largest value. Since $W \leq nV$ (or rather, we can assume that $W \leq nV$), the first estimate $O(nW)$ implies the second $O(n^2V)$. So $O(nW)$ is a stronger estimate than $O(n^2V)$.</p>
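For reference, here is a minimal Python sketch of the $O(nW)$ algorithm (my own compressed one-dimensional variant of the table from the linked Wikipedia article): one pass per item, one cell per capacity $0..W$, constant work per cell.

```python
def knapsack(values, weights, W):
    """0/1 knapsack: max value with total weight <= W, in O(n*W) time."""
    dp = [0] * (W + 1)                  # dp[w] = best value achievable at capacity w
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):  # iterate backwards so each item is used once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]
```

The two nested loops make the $n \cdot W$ cell count explicit, which is exactly why the bound is pseudo-polynomial: $W$ can be exponential in its bit length.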
215
algorithm complexity
Recursive euclid algorithm for gcd complexity
https://cs.stackexchange.com/questions/90573/recursive-euclid-algorithm-for-gcd-complexity
<p>I am trying to calculate the complexity of the Euclidean algorithm for finding the greatest common divisor (gcd) (recursive version). </p> <p>Here's the pseudo code:</p> <pre><code>algorithm euclid(a,b) if b = 0 then return a else return euclid(b, a mod b) end if </code></pre> <p>My initial thought is that every time the recursive function is called, its complexity is O(1). I also know that $b\ge F_k$ for k calls of the function, as I have proven by induction previously (see proof below). This means that the runtime grows with the input b, and this is where I am stuck. Any help would be greatly appreciated!</p> <p><strong>Additional Question:</strong> Does calculating the complexity of the recursive algorithm work in any way similar to the calculation of the complexity of the iterative algorithm in this post: <a href="https://cs.stackexchange.com/questions/72200/euclids-algorithm-time-complexity">Euclid&#39;s Algorithm Time Complexity</a>? What would be the similarities and differences? </p> <p><strong>Proof by induction</strong>, that $a\ge F_{k+1}$ and $b \ge F_k$ iff $a&gt;b\ge 0$ for $k \ge 1$ recursive calls of the function:</p> <ol> <li><p>Let $n_0=1$; then a%b = 0 and gcd(a,b) = b. From this it follows that $F_1 = 1 \le b$, since if $b&lt;1$, $k$ would have to be $k = 0$. $F_2 = 1 \le a$ is also valid, since otherwise $a&gt;b$ would be invalid.</p></li> <li><p>$a\ge F_{n+1}$ and $b\ge F_n$ for $k = n$ is valid. </p></li> <li><p>To show: For $k=n+1$: $a \ge F_{n+2}$ and $b\ge F_{n+1}$</p> <p>Let <em>euclid</em>$(a_{n+1},b_{n+1})$ be a call of the recursive function above.</p> <p>Then for $q \in \mathbb{N}, q&gt;0$: $b_{n+1} = a_n \ge F_{n+1}$</p></li> </ol> <p>and</p> <p>$$a_{n+1} = b_{n+1}\cdot q+b_n = a_n\cdot q+b_n \ge F_{n+1}\cdot q+F_n \ge F_{n+1}+F_n = F_{n+2}$$ Q.E.D </p>
216
algorithm complexity
Complexity of an algorithm with multiple inputs
https://cs.stackexchange.com/questions/53650/complexity-of-an-algorithm-with-multiple-inputs
<p>I've just started reading about the complexity of algorithms, but everywhere I look, it is only defined for one input $n$. For example an algorithm is cubic if its complexity is $O(n^3)$.</p> <p>But what about when the complexity depends on several inputs? For example if an algorithm has complexity $O(n^2k)$, is it 'cubic', or maybe 'quadratic in $n$ and linear in $k$'?</p> <p>I've also seen phrases such as 'cubic in $k$ and $n$'; what does this mean exactly?</p>
<p>[If I have time, I'll answer the rest of the question later.]</p> <blockquote> <p>I've also seen phrases such as 'cubic in $k$ and $n$'; what does this mean exactly?</p> </blockquote> <p>It's vague, unfortunately, and you should avoid writing anything like this, ever. It means, at the very least, that the complexity depends somehow on $k^3$ and&nbsp;$n^3$, but I'm sure you'd already figured that out. Beyond that, it's impossible to say much. It's unclear whether it's some function of $k^3+n^3$ or some function of $k^3n^3$ or something else. It's also unclear whether they're talking about an upper bound or a lower bound: if I told you that a recipe needs a kilo of potatoes, you'd assume that was an upper bound, but if I told you that a restaurant needs kilos of potatoes, you'd assume a lower bound.</p>
217
algorithm complexity
Time complexity of algorithms
https://cs.stackexchange.com/questions/139140/time-complexity-of-algorithms
<p>I have some questions that I don't understand about time complexity.</p> <ol> <li>Given that the worst case complexity of the algorithm <span class="math-container">$A$</span> is <span class="math-container">$O(f(n))$</span> and the best case complexity of <span class="math-container">$A$</span> is <span class="math-container">$Ω(g(n))$</span>. It follows that <span class="math-container">$f(n) ∈ Ω(g(n))$</span>.</li> <li>Given that the best case complexity of the algorithm <span class="math-container">$A$</span> is <span class="math-container">$O(f(n))$</span> and the worst case complexity of <span class="math-container">$A$</span> is <span class="math-container">$Ω(g(n))$</span>. It follows that <span class="math-container">$f(n) ∈ Ω(g(n))$</span>.</li> <li>Given that the average case complexity of the algorithm <span class="math-container">$A$</span> is <span class="math-container">$Θ(f(n))$</span> and the worst case complexity of A is <span class="math-container">$O(g(n))$</span>. It follows that <span class="math-container">$f(n) ∈ O(g(n))$</span>.</li> </ol> <p>I will appreciate if you can help me understand those!</p> <p>Thanks a lot!</p>
<p>The first and the last statements are correct, while the second one is <em>incorrect</em>.</p> <h2>Statement 1</h2> <p>Denote by <span class="math-container">$T_{min}$</span> the actual running time of the algorithm <span class="math-container">$A$</span> in the best case, and <span class="math-container">$T_{max}$</span> in the worst case. By how we chose <span class="math-container">$T_{min}$</span> and <span class="math-container">$T_{max}$</span> it follows that <span class="math-container">$T_{min}\le T_{max}$</span>.</p> <p>From our assumptions, <span class="math-container">$T_{min}=\Omega(g(n))\implies T_{min}\ge c_1g(n)$</span>. Also from our assumptions, <span class="math-container">$T_{max}=O(f(n))\implies T_{max}\le c_2f(n)$</span>.</p> <p>Combining them together we get:</p> <p><span class="math-container">$c_1g(n)\le T_{min} \le T_{max} \le c_2f(n)$</span>, which means that <span class="math-container">$g(n)\le \frac{c_2}{c_1}f(n)\implies f(n)=\Omega(g(n))$</span></p> <h2>Statement 2</h2> <p>Consider the following algorithm:</p> <pre class="lang-py prettyprint-override"><code>if lst[0] != 0:
    for x in lst:
        print(x)
</code></pre> <p>And consider the inputs <span class="math-container">$I_1:=[0,1,2,3,...,n]$</span> and <span class="math-container">$I_2:=[1,2,3,...,n+1]$</span>. Clearly, the algorithm takes <span class="math-container">$O(1)$</span> time with input <span class="math-container">$I_1$</span>, but <span class="math-container">$\Omega(n)$</span> time with input <span class="math-container">$I_2$</span>. Obviously, <span class="math-container">$n\neq O(1)$</span> and thus the statement is incorrect.</p> <h2>Statement 3</h2> <p>Repeat the proof of statement 1. Note that also <span class="math-container">$T_{avg}\le T_{max}$</span> and thus the proof still holds.</p>
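To make the Statement 2 counterexample concrete, here is a rough step-counting sketch (my own addition, not part of the original answer; the exact constants don't matter, only the growth):

```python
def steps(lst):
    """Count the operations the example algorithm performs on lst."""
    count = 1              # the comparison lst[0] != 0
    if lst[0] != 0:
        count += len(lst)  # one print per element
    return count

n = 1000
I1 = [0] + list(range(1, n + 1))  # best-case input: constant work
I2 = list(range(1, n + 2))        # worst-case input: linear work

print(steps(I1))  # 1, independent of n
print(steps(I2))  # n + 2, grows linearly with n
```

So a best case of $O(1)$ coexists with a worst case of $\Omega(n)$, yet $n \neq O(1)$, which is exactly why statement 2 fails.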
218
algorithm complexity
Calculating the complexity of an algorithm exercises
https://cs.stackexchange.com/questions/97402/calculating-the-complexity-of-an-algorithm-exercises
<p>I am really bad at correctly calculating the complexity of a given algorithm. I would like to know if there is a book or online resource where I can find many exercises that ask you to calculate the complexity of a given algorithm.</p> <p>Thank you!</p>
219
algorithm complexity
Complexity time needed for any algorithm
https://cs.stackexchange.com/questions/62727/complexity-time-needed-for-any-algorithm
<p>How can I get at least a small introduction to the time complexity of algorithms? How can I tell that an algorithm needs O(log^2 N) time, and so on? Is there any basic reference for this topic? I need it for my project.</p>
220
algorithm complexity
Optimal algorithmic complexity of &quot;a nonrepetitive stack&quot;?
https://cs.stackexchange.com/questions/154426/optimal-algorithmic-complexity-of-a-nonrepetitive-stack
<p>I'm wondering about the optimal complexity - or at the very least, some way of achieving non-terrible complexity - of a particular stack variant, that I'm calling a 'nonrepetitive stack'.</p> <p>A nonrepetitive stack is like an ordinary stack, except that a push that would result in a repeated subsequence fails, not updating the stack but instead returning the location of the subsequence that would be repeated. (To disambiguate, because &quot;repeated subsequence&quot; can mean multiple things: I mean multiple contiguous copies of contiguous subsequences. If you treat the stack as a string, something matching <code>.*(.+)\1.*</code>.)</p> <p>(Assume the usual model, e.g. comparing two individual items for equality is <span class="math-container">$O(1)$</span>.)</p> <p>The completely naive approach would be to check the entire stack for any repeated subsequences after each push, and undo the push and fail if one is found. Each check, and thus push, appears to be <span class="math-container">$O(n^3)$</span> in the current size of the stack, worst-case.</p> <p>We can do somewhat better by instead noting that the stack can never actually include any repeated substrings (as you cannot introduce a repeated substring by popping from a stack, and pushes that would introduce a repeated substring instead fail), and so a push only needs to check potential repeats that include the top of stack. This gets us down to <span class="math-container">$O(n^2)$</span> in the size of the stack per push.</p> <p>Some (terrible) Python pseudocode for this approach, to illustrate (again: this code is just to illustrate. Please don't focus on the exact code.)</p> <pre class="lang-python prettyprint-override"><code>def push(s, x):
    s.append(x)
    for i in range(1, len(s)//2 + 1):
        # this comparison is _not_ O(1) time.
        if s[-2*i:-i] == s[-i:]:
            ret = s[-i:], len(s)-2*i, len(s)-i
            s.pop()
            return True, *ret
    return False, None

def pop(s):
    return s.pop()

def check(act, exp):
    assert act == exp, (act, exp)

s = []
check(push(s, &quot;A&quot;), (False, None))
check(push(s, &quot;B&quot;), (False, None))
check(push(s, &quot;A&quot;), (False, None))
check(s, [&quot;A&quot;, &quot;B&quot;, &quot;A&quot;])
# ABAB would have a repeated subsequence AB AB
check(push(s, &quot;B&quot;), (True, [&quot;A&quot;, &quot;B&quot;], 0, 2))
check(pop(s), &quot;A&quot;)
# ABB would have a repeated subsequence B B
check(push(s, &quot;B&quot;), (True, [&quot;B&quot;], 1, 2))
</code></pre> <p>(Worst-case complexity here is <span class="math-container">$\sum_{i=1}^{n/2 + 1}i$</span>, which is <span class="math-container">$O(n^2)$</span>. Best-case for a successful push is if each array comparison compares one element and then short-circuits, which works out to <span class="math-container">$O(n)$</span>. [n.b. I know that CPython's array comparison with slices I showed doesn't actually work that way.] Best-case for an unsuccessful push is, of course, <span class="math-container">$O(1)$</span>.)</p> <p>Is there a way of doing better here? In particular, is it possible to achieve (amortized) worst-case complexity for pushes that is sublinear in the size of the stack?</p>
<p>Kosolobov [1] solved this exact problem. The first algorithm in the paper supports stack operations on a string while detecting a repeated substring, and each operation takes amortized <span class="math-container">$O(\log m)$</span> time where <span class="math-container">$m$</span> is the maximum string length so far.</p> <p>The algorithm works for unordered alphabets (only equality comparison between characters are allowed), and the time complexity is essentially optimal in this setting because detecting a repeated substring requires <span class="math-container">$\Omega(n \log n)$</span> time for unordered alphabets [2].</p> <ul> <li>[1]: Kosolobov, Dmitry. &quot;Online detection of repetitions with backtracking.&quot; Annual Symposium on Combinatorial Pattern Matching. Springer, Cham, 2015. <a href="https://arxiv.org/abs/1412.4471" rel="nofollow noreferrer">https://arxiv.org/abs/1412.4471</a></li> <li>[2]: Main, Michael G., and Richard J. Lorentz. &quot;An O(n log n) algorithm for finding all repetitions in a string.&quot; Journal of Algorithms 5.3 (1984): 422-432.</li> </ul>
221
algorithm complexity
Computation Complexity of stream algorithm
https://cs.stackexchange.com/questions/87316/computation-complexity-of-stream-algorithm
<p>I want to analyze a clustering algorithm that clusters a data stream. Since the stream can be unbounded, I cannot write O(N^2) or explicitly denote the size of the stream. How can I use the arrival rate of the data for the computational complexity? Do you have any other ideas for a correct analysis of stream computation (for clustering problems)?</p> <p>Thanks!</p>
<p>One standard way to measure the running time of streaming algorithms is to count the amount of time taken <em>per item</em> that the algorithm processes. For instance, one algorithm might take $O(1)$ time per item.</p> <p>Also it is common to analyze the memory usage (space complexity) of these algorithms, as in many practical situations memory can be a limiting factor. That is straightforward to measure. Usually the space complexity is measured not as a function of the number of elements seen, but rather as a function of something else -- e.g., the accuracy of the answer, or the number of distinct items in the input stream, or something else that is appropriate to the application and that makes it possible to do analysis.</p> <p>I suggest you read about streaming algorithms. There's lots of work on those topics and those fields certainly measure the running time of their algorithms. See, e.g., <a href="https://en.wikipedia.org/wiki/Streaming_algorithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Streaming_algorithm</a> and textbooks on the subject.</p>
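For illustration (my own sketch, not part of the original answer), here is a streaming computation that takes O(1) time per arriving item and O(1) space regardless of the stream's length, which is the kind of per-item accounting described above:

```python
class StreamingMean:
    """Maintain the mean of a stream with O(1) time per item and O(1) space."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, x):
        # constant work per arriving item, independent of how many came before
        self.count += 1
        self.total += x

    def mean(self):
        return self.total / self.count if self.count else 0.0

s = StreamingMean()
for x in [2, 4, 6, 8]:
    s.add(x)
print(s.mean())  # 5.0
```

A clustering sketch would do more work per item, but the analysis style is the same: bound the cost of `add` and the memory held between items.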
222
algorithm complexity
Time complexity of Rabin-Karp algorithm
https://cs.stackexchange.com/questions/93009/time-complexity-of-rabin-karp-algorithm
<blockquote> <p><strong>$n$ : length of text T</strong> </p> <p><strong>$m$ : length of pattern P</strong></p> </blockquote> <p>When I studied the Rabin-Karp algorithm, I learned that the best case of this algorithm is $\theta(n-m+1)$, because a hashed number may be too small to be reduced modulo some other number.</p> <p>But when talking with my friends about this best-case complexity, I became very curious whether the above theta notation can be represented by $\theta(n)$ as $n$ goes to infinity.</p> <p>At first, it seemed reasonable to me that $\theta(n)=\theta(n-m+1)$ is correct because it is asymptotic notation. But if $m$ is always $n-1$, can we use $\theta(n)$ in this situation? I'm confused about the notation, and what I'm curious about is the question below.</p> <blockquote> <p>What is the right time complexity notation of the Rabin-Karp algorithm?</p> <p>$\theta(n)$ or $\theta(n-m+1)$?</p> </blockquote>
<p>The running time you quote isn't correct. If $n = m$ then it takes $\Omega(n)$ to verify that $T = P$. The best case running time of <em>any</em> string matching algorithm is $\Omega(m)$, since this is how long it takes to verify a match. Perhaps you are disregarding the time it takes to compute the hashes.</p> <p>Ignoring all of this, let us suppose that we are considering an algorithm whose running time is $\Theta(n-m+1)$, where $m \leq n$ are two parameters. As your example makes clear, this is <em>not</em> the same as $\Theta(n)$, since if $m$ is close to $n$ then $n-m+1$ is much smaller than $n$. Generally speaking, an asymptotic expression depending on two parameters cannot be reduced to an asymptotic expression depending on only one of them.</p> <p>In practice, often $m$ is itself a known function of $n$, and this can be used to reduce $\Theta(n-m+1)$ to an asymptotic expression depending on a single parameter. For example, if $m = n/2$ then $\Theta(n-m+1) = \Theta(n)$, whereas if $m = n$, then $\Theta(n-m+1) = \Theta(1)$.</p>
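For reference, a minimal Rabin-Karp sketch (my own illustration; the base and modulus are arbitrary choices) makes both cost components visible: $\Theta(m)$ to hash the pattern and the first window, then $\Theta(n-m+1)$ window positions, each with an O(1) rolling-hash update:

```python
def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    """Return the indices where pattern occurs in text (rolling-hash sketch)."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, mod)            # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                   # Theta(m): hash pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):           # Theta(n - m + 1) window positions
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:                    # O(1) rolling update per shift
            t_hash = ((t_hash - ord(text[i]) * h) * base + ord(text[i + m])) % mod
    return matches

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
```

Note the final verification `text[i:i + m] == pattern` costs up to $\Theta(m)$ per candidate match, which is the point made above: confirming a match can never be cheaper than $\Omega(m)$.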
223
algorithm complexity
Complexity of an algorithm with nested loops
https://cs.stackexchange.com/questions/163411/complexity-of-an-algorithm-with-nested-loops
<p>How do I calculate the complexity of the algorithm below? I would like to know how to compute the sums that arise, in order to obtain a formula as a function of n. I know that in general this algorithm has O(n^3) complexity, but how can I show that?</p> <pre><code>int i, j, k, s;

for (i = 0; i &lt; N-1; i++)
    for (j = i+1; j &lt; N; j++)
        for (k = 1; k &lt; j; k++)
            s = 1;
</code></pre>
224
algorithm complexity
The complexity of the algorithm with loops
https://cs.stackexchange.com/questions/63216/the-complexity-of-the-algorithm-with-loops
<p>I have an algorithm that contains the following loops:</p> <pre><code>for (int i = 0; i &lt; size; ++i) {
    for (int j = i + 1; j &lt; size; ++j) {
        // Do stuff
    }
}
</code></pre> <p>I found that this algorithm has $O(n^2)$ complexity, but I can't understand why. I.e., if $N = 4$ then $n^2 = 16$, but my loop has only 6 iterations, just about half of the $n^2$ value.</p> <p>P.S. I have never understood how to measure the complexity of an algorithm; I can only understand how to write it mathematically.</p>
<p>Your "stuff" will get executed $N(N - 1)/2 = 0.5N^2 - 0.5N$ times. When analyzing the asymptotic complexity, only the highest order term is kept, and multiplicative constants are removed, leaving you with $O(N^2)$. </p> <p>It works this way because we're interested in what happens when $N$ goes to infinity (scalability).</p>
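As a quick sanity check (added for illustration, not part of the original answer), brute-force counting confirms the $N(N-1)/2$ closed form, including the 6 iterations observed for $N = 4$:

```python
def count_iterations(size):
    """Count how many times the inner body of the nested loops runs."""
    count = 0
    for i in range(size):
        for j in range(i + 1, size):
            count += 1
    return count

# the closed form N*(N-1)/2 matches the empirical count for several sizes
for n in [4, 10, 100]:
    assert count_iterations(n) == n * (n - 1) // 2

print(count_iterations(4))  # 6, matching the question's observation
```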
225
algorithm complexity
calculate the complexity of binary search algorithm
https://cs.stackexchange.com/questions/92416/calculate-the-complexity-of-binary-search-algorithm
<p>How do I calculate the complexity of this algorithm in the best case, the worst case, and on average, assuming that the probability that the element is in the list is 0≤p≤1 and that all positions are equally likely?</p>
226
algorithm complexity
Complexity of set operations in algorithm
https://cs.stackexchange.com/questions/67053/complexity-of-set-oprations-in-algorithm
<p>I am designing a graph algorithm. Some steps of the algorithm are set operations (union, difference, intersection, set membership).</p> <p>Can I treat them as $\mathcal{O}(1)$ operations? Has anyone used them as $\mathcal{O}(1)$ operations in a paper?</p> <p>What minimum complexity should I assume for these operations? References are welcome.</p>
227
algorithm complexity
computational complexity of a tree pruning algorithm
https://cs.stackexchange.com/questions/86698/computational-complexity-of-a-tree-pruning-algorithm
<p>I've just designed an algorithm to prune a tree related to a particular fluid dynamics problem, and I need to determine its computational complexity; however, since I'm just a newcomer (from mechanical engineering) to computational complexity theory, I'm not sure about the following reasoning:</p> <p><a href="https://i.sstatic.net/bg1nA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bg1nA.png" alt="enter image description here"></a></p> <p>The while loop iterates until all elements of $L$ are examined. Moreover, the running time of each line is assumed to be 1.</p> <p>AFAIK, the outer loop (line 1) counts $n$, assuming that $L$ has $n$ elements. Thus, the computational complexity is proportional to the input size. Moreover, each inner operation of the algorithm counts as 1, so the overall computational complexity should be $O(n)$.</p> <p>Can you please confirm the credibility of the reasoning above?</p>
228
algorithm complexity
Time Complexity: Intuition for Recursive Algorithm
https://cs.stackexchange.com/questions/92859/time-complexity-intuition-for-recursive-algorithm
<p>I decided to learn more about dynamic programming, so I started reading the Dynamic Programming chapter of the CLRS book.</p> <p>The first example problem presented there is Rod Cutting (15.1). Given a rod of length n and a list of prices for rods of any size, figure out how to cut the rod so that the price of the pieces is maximized (one can only cut at integral positions).</p> <p>The first recursive algorithm presented there is the following:</p> <pre><code>CutRod(p, n)
    if n == 0
        return 0
    q = -inf
    for i = 1 to n
        q = max(q, p[i] + CutRod(p, n - 1))
    return q
</code></pre> <p>n is the size of the rod and p an array that contains the prices.</p> <p>I understand the algorithm; the problem I have is that I thought intuitively the time complexity of such an algorithm would be O(b^d) (where b is the branching factor and d the depth of the recursion tree), which would be O(n^n).</p> <p>In the book the recurrence relation is presented: T(0) = 1 and T(n) = 1 + sum(j=0, n-1, T(j)). Then it is explained that the complexity following from this is O(2^n), which can easily be seen by expanding the recurrence relation.</p> <p>How can I quickly see that my initial intuition was wrong? And in general, when looking at a recursive algorithm, how can I figure out whether the time complexity is O(b^d) or not?</p>
<p>Based on the code you show there, your intuition is right.</p> <p>However, it looks like there is a typo in the code and the next-to-last line should have been</p> <pre><code> q = max(q, p[i] + CutRod(p, n-i)) </code></pre> <p>i.e., <code>n-i</code> rather than <code>n-1</code>. Try to work through a proof of correctness, or through a few examples, to see why I say that. The running time analysis they show is for the corrected code, rather than the code with the typo, and then once you make that correction to the code, the recurrence relation they provide is correct.</p>
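To see the correction in action, here is a Python transcription of the fixed algorithm (a sketch under the assumption that p[i] holds the price of a rod of length i; the price table below is the classic one from CLRS, added here for illustration):

```python
def cut_rod(p, n):
    """Naive recursive rod cutting with the n - i fix.

    p[i] is the price of a rod of length i; runs in O(2^n) time,
    matching the book's recurrence T(n) = 1 + sum of T(0)..T(n-1).
    """
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        # n - i (not n - 1): the remaining rod after cutting off a piece of length i
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

# CLRS price table for lengths 0..10
p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod(p, 4))  # 10 (two pieces of length 2, price 5 each)
```

With the `n - 1` typo instead, every branch of the loop would recurse on the same subproblem and the recursion would not explore the actual cut choices.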
229
algorithm complexity
Can a time-optimal algorithm have a time complexity better than its space complexity?
https://cs.stackexchange.com/questions/165152/can-a-time-optimal-algorithm-have-a-time-complexity-better-than-its-space-comple
<p>That's what I'm formally asking:</p> <p>Let the algorithm <span class="math-container">$A$</span> have the worst-case time complexity <span class="math-container">$\Theta(f(n))$</span>, such that for any algorithm <span class="math-container">$B$</span> with the worst-case time complexity <span class="math-container">$\Theta(g(n))$</span> doing the same job as <span class="math-container">$A$</span>, <span class="math-container">$g(n) \in \Omega(f(n))$</span>. Let <span class="math-container">$\Theta(s(n))$</span> be the worst-case space complexity for the algorithm <span class="math-container">$A$</span>, such that for any algorithm <span class="math-container">$B$</span> doing the same job as <span class="math-container">$A$</span>, having the same time complexity as it and the space complexity <span class="math-container">$\Theta(p(n))$</span>, <span class="math-container">$p(n) \in \Omega(s(n))$</span>. Can <span class="math-container">$f(n) \in o(s(n))$</span>?</p> <p>Intuitively, for any such algorithm, we need to spend <span class="math-container">$O(1)$</span> time for each <span class="math-container">$O(1)$</span> of the memory, therefore the space complexity would be worse or the same as the time complexity, that results in the &quot;no&quot; answer to the question. It's just an intuition anyway, that's why I'm asking the question here.</p>
230
algorithm complexity
Check Welzl&#39;s algorithm time complexity
https://cs.stackexchange.com/questions/147944/check-welzls-algorithm-time-complexity
<p>From the <a href="https://en.wikipedia.org/wiki/Smallest-circle_problem#Welzl%27s_algorithm" rel="nofollow noreferrer">wiki</a>, this is the algorithm, and we know that the final complexity is O(n), but how we reach this is my problem:</p> <pre><code>algorithm welzl is
    input: Finite sets P and R of points in the plane, |R| ≤ 3.
    output: Minimal disk enclosing P with R on the boundary.

    part 1: if P is empty or |R| = 3 then
                return trivial(R)
    part 2: choose p in P (randomly and uniformly)
    part 3: D := welzl(P − {p}, R)
    part 4: if p is in D then
                return D
    part 5: return welzl(P − {p}, R ∪ {p})
</code></pre> <p><strong>My try:</strong><br /> Easiest one, part 1 takes O(1) time.<br /> part 2 takes O(1) time.<br /> part 3 is something like T(n) = T(n-1).<br /> part 4 takes O(1) time.<br /> part 5 is something like T(n) = T(n-1), because we eliminate 1 point and grow R, but we will work more with P, so that would be the recursive equation.</p> <p>If I wrote my analysis correctly (and I'm sure it has incorrect parts), I don't know how to combine these 5 parts' time complexities in order to reach O(n). Is the final equation like this?<span class="math-container">$$T(n) = 2T(n-1)$$</span><br /> But then the final answer will be <a href="https://cs.stackexchange.com/questions/18900/how-do-i-show-tn-2tn-1-k-is-o2n">exponential</a>.<br /> If I made any mistake, can someone help me please?</p>
<p>Note that the expected running time analysis is <span class="math-container">$O(n)$</span>. It is given in the <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.1450&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">original</a> paper itself (Section 2).</p> <p>Here, I am simply analyzing the worst-case time complexity of the algorithm. Note that Part <span class="math-container">$3$</span> and Part <span class="math-container">$5$</span> are not the same.</p> <p>Let <span class="math-container">$T(n,r)$</span> denote the complexity of the algorithm when <span class="math-container">$|P| = n$</span> and <span class="math-container">$|R| = r$</span>. Then, Part <span class="math-container">$3$</span> corresponds to <span class="math-container">$T(n-1,r)$</span> and Part <span class="math-container">$5$</span> corresponds to <span class="math-container">$T(n-1,r+1)$</span>.</p> <p><span class="math-container">\begin{align} T(n,r) &amp;= T(n-1,r) + T(n-1,r+1) + O(1) \\ &amp;= T(n-2,r) + T(n-2,r+1) + T(n-1,r+1) + O(1) \\ &amp;= \dotsc \\ &amp;\leq n \cdot T(n,r+1) + O(n) \quad \textrm{(since $T(0,r) = O(1)$)}\\ &amp;\leq n^2 \cdot T(n,r+2) + O(n^2) \\ &amp;\leq n^3 \cdot T(n,r+3) + O(n^3) \end{align}</span></p> <p>At the start of the algorithm, we have <span class="math-container">$r = 0$</span>. Also, <span class="math-container">$T(n,3) = O(1)$</span>, corresponding to Part <span class="math-container">$1$</span>. Therefore, <span class="math-container">$T(n,0) = O(n^3)$</span> is the worst-case time complexity of the algorithm.</p>
231
algorithm complexity
I do not know if my algorithm complexity is $P$, $NP$ or $NP-hard$?
https://cs.stackexchange.com/questions/77021/i-do-not-know-if-my-algorithm-complexity-is-p-np-or-np-hard
<p>I developed an algorithm but am just not sure what the complexity of the algorithm is. I provide a brief description of it below:</p> <p>"For the $N$ user case, there are $B(N)$ Decision Variables. $B(N)$ is the Bell Number; for those of you who are not familiar with Bell Numbers, please note that $B(N)$ grows exponentially with $N$. In every turn, I have to choose the minimum Decision Variable among all of the Decision Variables and do a certain action accordingly."</p> <p>I know that the sorting problem is considered a Polynomial $(P)$ problem. But I am confused about whether my algorithm is considered $P$, $NP$ or $NP-hard$, since the number of decision variables being sorted grows exponentially with $N$.</p>
<p>Your question is somewhat malformed because <strong>P</strong> and <strong>NP</strong> are classes of problems, not algorithms. For example, as you state, the <em>problem</em> of sorting is in <strong>P</strong>. However, that doesn't mean that <em>every</em> sorting algorithm runs in polynomial time. For example, the well-known joke algorithm <a href="https://en.wikipedia.org/wiki/Bogosort" rel="nofollow noreferrer">bogosort</a> sorts lists by trying every possible permutation until it finds one that is sorted. It has running time approximately $n!$, which is far from polynomial. If a problem has polynomial time complexity, that means that there exists an algorithm that solves it in polynomial time, but it doesn't mean that all algorithms for the problem are that efficient.</p> <p>It's not possible to say what the complexity of your problem is, or what the running time of your algorithm is, because you haven't given a precise description of either. The fact that it involves choosing a "best" set from some exponentially large set doesn't necessarily mean the problem has to have exponential time complexity: there could be a smart way of picking the best set without considering all the options. For example, there are polynomial-time algorithms for finding the shortest path between two points in a graph, even though there may be exponentially many paths. </p>
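To make the distinction concrete, here is a sketch of bogosort (my own illustration, not part of the original answer): sorting as a *problem* is in <strong>P</strong>, yet this particular *algorithm* for it has expected running time around $n \cdot n!$:

```python
import random

def is_sorted(lst):
    """Check whether lst is in nondecreasing order, in O(n) time."""
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

def bogosort(lst):
    """Joke algorithm: shuffle until sorted.

    Each attempt costs O(n), and a uniformly random permutation is sorted
    with probability 1/n!, so the expected cost is about n * n!.
    The existence of this algorithm says nothing about the problem's class.
    """
    while not is_sorted(lst):
        random.shuffle(lst)
    return lst

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```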
232
algorithm complexity
Time complexity of a backtrack algorithm
https://cs.stackexchange.com/questions/13181/time-complexity-of-a-backtrack-algorithm
<p>I've developed the following backtrack algorithm, and I'm trying to find out its time complexity.</p> <p>A set of $K$ integers defines a set of modular distances between all pairs of them. In this algorithm, I considered the inverse problem of reconstructing all integer sets which realize a given distance multiset, i.e.:</p> <p><br> Inputs: $D=\{p_i−p_j \mod N, i≠j \},K $ <br> Output : $P=\{p_1,p_2,...,p_K\},\qquad p_i \in \{0,1,2,...,N-1\},\qquad p_i &gt; p_j $ for $i&gt;j$ <br></p> <p>Simply put, the algorithm sets up $K$ blanks to be filled. Initially, it puts 1 in the first blank. For the second blank, it looks for the first integer that, if added to P, doesn't produce any difference exceeding the existing differences in $D$. It then does the same for the next blanks. While filling a blank, if it has checked all possible integers and found no suitable integer for that blank, it goes back to the previous blank and looks for the next suitable integer for it. If all blanks are filled, it has finished its job; otherwise, it means that there weren't any possible $P$'s for this $D$.</p> <p>Here's my analysis so far. Since the algorithm checks at most all members of $\{2,...,N\}$ for each blank (an upper bound), there are $N-1$ searches for each blank. If each visited blank were filled at visiting time, the complexity would be $O((K-1)(N-1))$, since we have $K-1$ blanks (assuming the first one is filled with 1). But the algorithm is more complex, since for some blanks it goes backward, and some blanks may be visited more than once. I'm looking for the worst-case complexity, i.e., the case where all blanks are visited and no solution is found.</p>
<p>The running time of your algorithm is at most $N (N-1) (N-2) \cdots (N-K+1)$, i.e., $N!/(N-K)!$. This is $O(N^K)$, i.e., exponential in $K$.</p> <p>Justification: There are $N$ possible choices for what you put into the first blank, and in the worst case you might have to explore each. There are $N-1$ choices for the second blank, and so on. You can draw a tree of the choices made: the first level shows the choice of what to put in the first blank, the second level shows the choice of what to put in the second blank, and so on. The degree of the root is $N$; the degree of the nodes at the second level is $N-1$; and so on. The number of leaves is the product of the degrees at each level, i.e., $N (N-1) (N-2) \cdots (N-K+1)$. In the worst case, your algorithm might have to explore every possible node in this tree (if it is not able to stop early before reaching the $K$th level and backtrack from a higher-up node). Therefore, this is a valid upper bound for the running time of your algorithm.</p> <p>If you want a tighter analysis, here is the exact worst-case running time (not an upper bound). The number of leaves in your search tree, in the worst case, is the number of strictly increasing sequences of length $K$ over $\{1,\dots,N\}$ that start with 0. (We can assume without loss of generality that the first blank contains a 0, as you point out, which is why we can restrict to sequences that start with a 0.) That number is exactly $C(N-1,K-1) = (N-1)!/((K-1)!(N-K)!)$.</p> <p>This is a tighter analysis, but it doesn't save us from exponential running time. When $N\gg K$, $C(N-1,K-1)$ is still $O(N^K)$, i.e., exponential in $K$.</p> <p>That said, evaluating your algorithm experimentally (by testing it on some real data sets) would probably be a better way to evaluate your algorithm than trying to derive a worst-case running time. You might want to compare it to the performance of translating your problem into a SAT instance and using an off-the-shelf SAT solver. Depending upon the value of $N$ and $K$, there might be other better alternatives as well.</p> <p>See also the following question for a closely related problem, and for algorithms to solve it:</p> <ul> <li><a href="https://cstheory.stackexchange.com/q/17307/5038">How to obtain the unknown values ai,bj given an unordered list of ai−bjmodN?</a></li> </ul>
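As a numeric sanity check on the two bounds above (my own addition), the tight leaf count $C(N-1,K-1)$ is indeed never larger than the crude bound $N!/(N-K)!$:

```python
from math import comb, factorial

def naive_bound(N, K):
    """Leaves of the full search tree: N * (N-1) * ... * (N-K+1)."""
    return factorial(N) // factorial(N - K)

def tight_bound(N, K):
    """Strictly increasing length-K sequences over {1..N} with a fixed first entry."""
    return comb(N - 1, K - 1)

# the tight bound never exceeds the naive one
for N, K in [(10, 3), (20, 5), (30, 10)]:
    assert tight_bound(N, K) <= naive_bound(N, K)

print(naive_bound(10, 3), tight_bound(10, 3))  # 720 36
```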
233
algorithm complexity
Complexity of a recursive bignum multiplication algorithm
https://cs.stackexchange.com/questions/14685/complexity-of-a-recursive-bignum-multiplication-algorithm
<p>We have started learning about analysis of recursive algorithms and I got the gist of it. However, there are some questions, like the one I'm going to post, that confuse me a little.</p> <h3>The exercise</h3> <blockquote> <p>Consider the problem of multiplying two big integers, i.e. integers represented by a large number of bits that cannot be handled directly by the ALU of a single CPU. This type of multiplication has applications in data security where big integers are used in encryption schemes. The elementary-school algorithm for multiplying two n-bit integers has a complexity of $O(n^2)$. To improve this complexity, let x and y be the two n-bit integers, and use the following algorithm</p> <pre><code>Recursive-Multiply(x, y)
    Write x = x1 * 2^(n/2) + x0   // x1 and x0 are the high order and low order n/2 bits
          y = y1 * 2^(n/2) + y0   // y1 and y0 are the high order and low order n/2 bits
    Compute x1+x0 and y1+y0
    p = Recursive-Multiply(x1+x0, y1+y0)
    x1y1 = Recursive-Multiply(x1, y1)
    x0y0 = Recursive-Multiply(x0, y0)
    Return x1y1*2^n + (p-x1y1-x0y0)*2^(n/2) + x0y0
</code></pre> <p>(a) Explain how the above algorithm works and provides the correct answer.</p> <p>(b) Write a recurrence relation for the number of basic operations for the above algorithm.</p> <p>(c) Solve the recurrence relation and show that its complexity is <span class="math-container">$O(n^{\lg 3})$</span></p> </blockquote> <h3>My conjecture</h3> <ol> <li>Since the method is being called three times, the complexity is going to be <span class="math-container">$3C(n/2) + n/2$</span>.</li> </ol> <h3>My questions</h3> <ol> <li><p>What do they mean by hi-lo order bits?</p> </li> <li><p>How can I use a recurrence relation on this if I don't know how each recursion works?</p> </li> </ol>
<p>The idea is to divide each $n$-bit integer into two halves of $n/2$ bits. For example, $10100010$ would be divided into $1010$ and $0010$. Naively, we would need to multiply four such halves, but in fact there is a way to do with only three. (This is similar to matrix multiplication algorithms such as Strassen's.) The algorithm described by the exercise is known as <a href="http://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow">Karatsuba multiplication</a>.</p> <p>Regarding your other question, once you have reduced the computation of running time to a recurrence relation, you no longer need to understand anything more about the recursion in order to compute the running time - this is the main point of this abstraction.</p>
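A compact sketch of the idea in Python (my own illustration; it uses bit shifts to split each integer into its high-order and low-order halves, and the base-case threshold of 16 is an arbitrary choice):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers.

    Three recursive half-size products instead of four, giving the
    recurrence T(n) = 3 T(n/2) + O(n) and hence O(n^{lg 3}) bit operations.
    """
    if x < 16 or y < 16:                       # small base case: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)  # high- and low-order halves of x
    y1, y0 = y >> half, y & ((1 << half) - 1)  # high- and low-order halves of y
    p = karatsuba(x1 + x0, y1 + y0)            # one product of the sums
    hi = karatsuba(x1, y1)
    lo = karatsuba(x0, y0)
    # p - hi - lo equals the middle term x1*y0 + x0*y1
    return (hi << (2 * half)) + ((p - hi - lo) << half) + lo

print(karatsuba(1234, 5678))  # 7006652
```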
234
algorithm complexity
Definition of space complexity when algorithm cycles.
https://cs.stackexchange.com/questions/79946/definition-of-space-complexity-when-algorithm-cycles
<p>I'm reading side by side my class notes and Papadimitriou's Computational Complexity book. At this point they are talking about space complexity. They give rules for computing the space used by an algorithm that runs on a multi-tape Turing machine:</p> <ol> <li>We count the cells used.</li> <li>If we don't write on the input tape, these cells don't count.</li> <li>If the output cells are written from left to right, they don't count.</li> </ol> <p>The final requirement is expressed differently in Papadimitriou's book and in my notes. In the book it is written:</p> <blockquote> <p>The cursor of the input string does not wander off into the blank symbols after the end of the input. It is a useful technical requirement, but not necessary.</p> </blockquote> <p>In my notes:</p> <blockquote> <p>In an algorithm where space is counted, there can exist computations that never end, but one can always transform the algorithm into another that doesn't cycle.</p> </blockquote> <p>So how does one measure the space complexity of an algorithm that may cycle forever? Are these statements equivalent to each other?</p>
<blockquote> <p>So how does one measure the space complexity of an algorithm that may cycle forever?</p> </blockquote> <p>When we say that some language $L$ belongs to some complexity class, it is assumed that a TM (algorithm) <strong>decides</strong> $x \in L$ in a finite amount of space and time. In other words, the TM/algorithm terminates, i.e., it may not cycle forever. An algorithm that loops forever may well use finite space, but it still cannot decide.</p> <p>But I don't understand the following statement:</p> <blockquote> <p>In an algorithm where space is counted, there can exist computations that never end, but one can always transform the algorithm into another that doesn't cycle.</p> </blockquote>
235
algorithm complexity
Algorithm complexity of this program is O(n) or O(n^3)?
https://cs.stackexchange.com/questions/86238/algorithm-complexity-of-this-program-is-on-or-on3
<p>I am very confused about why the complexity of this program is O(n).</p> <pre><code>int j = 0;
int i;
for (i = 0; i &lt; n; i++) // O(n)
{
    for (i = 0; i &lt; n; i++) // O(n)
    {
        while (j &lt; n) // O(n)
        {
            Statement;
            j++;
        }
    }
}
</code></pre> <p>I am totally new to algorithms. Any help and explanation will be appreciated.</p>
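An empirical check, added here as an aside (it is not part of the original question): transcribing the loops literally into Java and counting executions shows why the total is linear. Note that both <code>for</code> loops share the variable <code>i</code> and <code>j</code> is never reset, so <code>Statement</code> runs exactly <code>n</code> times in total.

```java
public class LoopCount {
    // Faithful transcription of the question's code (BOTH for loops use the
    // same variable i, and j is never reset across iterations), counting how
    // often "Statement" executes.
    static long countStatements(int n) {
        long statements = 0;
        int j = 0;
        int i;
        for (i = 0; i < n; i++) {
            for (i = 0; i < n; i++) {
                while (j < n) {
                    statements++;  // "Statement"
                    j++;
                }
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            System.out.println(n + " -> " + countStatements(n));  // prints n
        }
    }
}
```

The inner <code>for</code> drives <code>i</code> to <code>n</code>, so the outer loop body runs only once, and the <code>while</code> increments <code>j</code> from 0 to <code>n</code> exactly once overall: O(n).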
236
algorithm complexity
Complexity inversely proportional to $n$
https://cs.stackexchange.com/questions/3495/complexity-inversely-propotional-to-n
<p>Is it possible an algorithm complexity decreases by input size? Simply $O(1/n)$ possible?</p>
<p>Consider an algorithm with some running time bounded by $f(n)$ and suppose that $f(n) \in O(1/n)$. That means that there is some constant $c$ such that for sufficiently large values of $n$, it holds that $$f(n) \leq c\frac{1}{n}.$$ Clearly, for any fixed $c$ and sufficiently large $n$, the right side will be strictly less than $1$, which requires $f(n)=0$, since $f$ maps to $\mathbb{N}$. In my understanding, even an algorithm that immediately terminates, takes at least $1$ step (namely to terminate), i.e., $\forall n\colon f(n)\ge 1$. So no such algorithm can exist. </p>
237
algorithm complexity
Termination proof and complexity of a algorithm
https://cs.stackexchange.com/questions/106224/termination-proof-and-complexity-of-a-algorithm
<p>I have written the following algorithm.</p> <ul> <li><span class="math-container">$select(\Pi)$</span> selects the first element from <span class="math-container">$\Pi$</span>. When there are no elements in <span class="math-container">$\Pi$</span>, it returns <span class="math-container">$\emptyset$</span>. Always terminates. Worst-case complexity <span class="math-container">$O(1)$</span>.</li> <li><span class="math-container">$processing(e)$</span> takes an element <span class="math-container">$e$</span> as input and processes it. Based on the processing, it outputs either an element <span class="math-container">$r$</span> or <span class="math-container">$\emptyset$</span>. Always terminates. Complexity not known.</li> <li><span class="math-container">$update(r,\Pi,\Delta)$</span> takes <span class="math-container">$r,\Pi,\Delta$</span> as input and updates a selection criterion. Based on the new criterion, it transfers already-seen elements from <span class="math-container">$\Delta$</span> back to <span class="math-container">$\Pi$</span>. Always terminates.
Worst case complexity <span class="math-container">$O(sizeof(\Pi+\Delta))$</span></li> </ul> <p><span class="math-container">$\textbf{Input} - \textbf{a set of elements $\Pi$}$</span></p> <p><span class="math-container">$\textbf{Output} - \textbf{a set of elements $C$}$</span></p> <p><span class="math-container">$C$</span>=<span class="math-container">$\emptyset$</span>;</p> <p><span class="math-container">$\Delta$</span>=<span class="math-container">$\emptyset$</span>;</p> <p><span class="math-container">$e$</span>=<span class="math-container">$select(\Pi)$</span>;</p> <p><span class="math-container">$\textbf{While}$</span>(<span class="math-container">$e \not=\emptyset$</span>){</p> <p><span class="math-container">$\hspace{10mm}r$</span>=<span class="math-container">$processing(e)$</span>;</p> <p><span class="math-container">$\hspace{10mm}\textbf{if}$</span>(<span class="math-container">$r \not=\emptyset$</span>){</p> <p><span class="math-container">$\hspace{20mm} C$</span>=<span class="math-container">$C\cup r$</span>;</p> <p><span class="math-container">$\hspace{20mm} \Pi$</span>=<span class="math-container">$\Pi-e$</span>;</p> <p><span class="math-container">$\hspace{20mm} \Pi,\Delta$</span>=<span class="math-container">$update(r,\Pi,\Delta)$</span>;</p> <p><span class="math-container">$\hspace{10mm}\}\textbf{else}$</span>{</p> <p><span class="math-container">$\hspace{20mm} \Pi$</span>=<span class="math-container">$\Pi-e$</span>;</p> <p><span class="math-container">$\hspace{20mm} \Delta$</span>=<span class="math-container">$\Delta\cup e$</span>;</p> <p><span class="math-container">$\hspace{10mm}$</span>}</p> <p><span class="math-container">$\hspace{10mm}e$</span>=<span class="math-container">$select(\Pi)$</span>;</p> <p>}</p> <p><span class="math-container">$\textbf{return}$</span> <span class="math-container">$C$</span>;</p> <p>Proof Sketch ---- The termination condition of the algorithm - when <span class="math-container">$\Pi$</span> is <span 
class="math-container">$\emptyset$</span>. Each iteration of the loop involves a <span class="math-container">$processing$</span> call, which always terminates. If <span class="math-container">$processing$</span> does not return any result within 10 seconds, the algorithm aborts the <span class="math-container">$processing$</span> call and treats it as returning <span class="math-container">$\emptyset$</span>. We can say that the algorithm always terminates since the sizes of <span class="math-container">$\Pi$</span> and <span class="math-container">$\Delta$</span> change monotonically in opposite directions, and at least one always changes. Thus, eventually <span class="math-container">$\Pi$</span> becomes empty and the algorithm terminates.</p> <p>Can anyone please suggest how I can formally complete the proof?</p> <p>Can anyone please suggest how we can compute the worst-case complexity, given that we do not know the complexity of the <span class="math-container">$processing$</span> function? How can we use an oracle for the <span class="math-container">$processing$</span> function in that analysis?</p>
238
algorithm complexity
Design an algorithm with linear complexity
https://cs.stackexchange.com/questions/157540/design-an-algorithm-with-linear-complexity
<p>Let A[1 : n] be a vector of n integers such that all elements except O(n^2/3) of them are between 1 and 10n. Design an algorithm with linear complexity that sorts A.</p> <p>Beyond the algorithm itself, what I can't fully understand is why we are provided with the information &quot;all elements except O(n^2/3)&quot;. It is certainly information that guides the choice of algorithm in some way. What considerations can we draw from this information? How can it guide us in choosing the best algorithm?</p>
<p>The reason you are provided that information is that, without this extra promise, it is not possible to sort in linear time. With this extra promise, it <em>is</em> possible to sort in linear time.</p> <p>It's your exercise, so I will leave it to you to come up with the strategy to sort such an array in linear time. Once you've found such an algorithm, hopefully you will be able to see why the same can't be used on any array (without that promise).</p>
239
algorithm complexity
Algorithm with amortized time complexity
https://cs.stackexchange.com/questions/153341/algorithm-with-amortized-time-complexity
<p>While I understand the process of considering/observing an algorithm and finding the average time necessary to perform an operation that happens in this algorithm, I still cannot quite grasp the idea, or rather the expression:</p> <p><strong>&quot;The algorithm has an amortized time complexity which is constant/linear etc.&quot;</strong></p> <p>What should one understand when someone says the above?</p> <p>One further question: Are the operations considered of the same type? What I mean by that, I'll try to showcase with an example:</p> <p>If we use pushback() to insert an element into a dynamic array, the operation here is the insertion of an element. Sometimes the operation is cheap (in terms of the time it requires to execute) and sometimes it is expensive. But there is only one type of operation here, the pushback operation. So can we talk about the amortized time of an algorithm, in other words the average time for an operation, when different types of operations are taking place in the algorithm?</p> <p>Sorry for the lack in my vocabulary. CS is not my major or main degree!</p>
<p>The amortized time is indeed the average time over a larger number of repetition of an operation (large enough that the variations are smoothed away).</p> <p>A typical example is a growable array supporting <code>push_back</code> operations. When there is room at the end of the allocated space, a <code>push_back</code> costs <span class="math-container">$O(1)$</span>. But when the allocated space is full, you increase it by doubling, and transfer all elements. This takes time <span class="math-container">$O(n)$</span>, but only when <span class="math-container">$n$</span> is a power of <span class="math-container">$2$</span>, so that the average remains <span class="math-container">$O(1)$</span>.</p> <p>In principle, only one operation is involved. In some cases, several can be (say insertions+deletions), provided this makes sense.</p>
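The doubling argument can be checked empirically. Below is a minimal growable-array sketch in Java (an illustration, not tied to any particular library) that counts the element copies caused by resizing:

```java
import java.util.Arrays;

public class GrowableArray {
    private int[] data = new int[1];
    private int size = 0;
    long copies = 0;  // total element copies caused by resizing

    // Doubling on overflow: n pushes cause fewer than 2n element copies in
    // total (1 + 2 + 4 + ... < 2n), so the amortized cost per push is O(1)
    // even though an individual push can cost O(n).
    void pushBack(int x) {
        if (size == data.length) {
            copies += size;
            data = Arrays.copyOf(data, 2 * data.length);
        }
        data[size++] = x;
    }

    int get(int i) { return data[i]; }

    public static void main(String[] args) {
        GrowableArray a = new GrowableArray();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) a.pushBack(i);
        // Total copy work stays below 2n, so the average cost per push is O(1).
        System.out.println("pushes=" + n + ", copies=" + a.copies);
    }
}
```

For a million pushes, the copy count is 2^20 - 1 = 1048575, comfortably below 2n, which is the amortized O(1) claim in numbers.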
240
algorithm complexity
Is there any relationship between time complexity and space complexity of an algorithm?
https://cs.stackexchange.com/questions/110934/is-there-any-relationship-between-time-complexity-and-space-complexity-of-an-alg
<p>For example:</p> <p>If algorithm A takes an input of size n, and has a time complexity of O(a^n) and a space complexity of O(1)</p> <p>Is there a way to increase the space complexity to something like O(n^2) that would guarantee that the time complexity would decrease?</p>
<blockquote> <p>If algorithm A takes an input of size n, and has a time complexity of O(a^n) and a space complexity of O(1)</p> </blockquote> <p>First of all we do not know any <em>exponential</em> or <em>sub-exponential time</em> algorithm that requires only <span class="math-container">$O(1)$</span> space, having said this, it is difficult to reason about a hypothetical algorithm "<em>A</em>" because the spatial and temporal complexity are closely linked to the functioning of the algorithms and are (generally) proportional to each other, however, the answer to your question is no.</p> <p>Let's try to reason starting from a <span class="math-container">$3SAT$</span> instance. Now we know that <span class="math-container">$3SAT$</span> is <strong><em>NP-Complete</em></strong> (best known time complexity for <span class="math-container">$3SAT$</span> is currently <span class="math-container">$O(k^n)$</span> with <span class="math-container">$ K=1.439$</span> for a deterministic algorithm) and <span class="math-container">$3SAT$</span> <span class="math-container">$∈$</span> <strong><em>PSPACE</em></strong> , in fact space complexity of <span class="math-container">$3SAT$</span> is <span class="math-container">$O(n)$</span>. Now it is difficult to imagine how increasing the space to a constant <span class="math-container">$k$</span> (in your question <span class="math-container">$k = 2$</span>) may lead to a decrease in the execution time of the algorithm that solves <span class="math-container">$3SAT$</span> ... in fact always keep in mind that an increase in space also corresponds to a proportional increase over time. Let me conclude by recalling the relationships between temporal and spatial complexity classes for which we believe all inclusions to be strict:</p> <p><span class="math-container">$L⊆NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆EXPSPACE$</span></p>
241
algorithm complexity
How to find an algorithm&#39;s complexity from actual running times
https://cs.stackexchange.com/questions/112083/how-to-find-an-algorithms-complexity-from-actual-running-times
<p>I have a certain algorithm which I can run, but I do not have access to its code. Thus, it works as a black box. I would like to know the order of complexity of this algorithm on a certain set of instances, which grow in size as a function of <span class="math-container">$n$</span>.</p> <p>Now, I have collected running times from <span class="math-container">$n = 1$</span> up to <span class="math-container">$n = 5000$</span>, which is as far as my computer can go in a reasonable amount of time.</p> <p>I have plotted my data using Python and fitted some simple regressions (exponential, quadratic, cubic...), but I still can't decide which function best fits the algorithm. Obviously, if I try regression with a polynomial of higher degree, I will get a tighter curve, so in case the running time is a polynomial in <span class="math-container">$n$</span>, I don't know how I could decide which is the one that really fits.</p> <p>With instances up to <span class="math-container">$n = 1000$</span>, it seems like the cubic polynomial is the best approximation. 
However, when I plot the instances up to <span class="math-container">$n = 5000$</span>, things become less clear (in the second graph I compare the actual running times against the approximations obtained for up to <span class="math-container">$n =1000$</span>, while the first one compares against the approximations up to <span class="math-container">$n = 500$</span>)</p> <p><a href="https://i.sstatic.net/uQYqg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uQYqg.png" alt="enter image description here"></a> <a href="https://i.sstatic.net/VYjUa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VYjUa.png" alt="enter image description here"></a></p> <p>(in addition, I feel there is something off with floating-point calculations, as two of the curves below get really messy)</p> <p>Any suggestions?</p> <p>A bit of context: I am working on a particular class of hard Quantified Boolean Formulas (QBF) and I want to test their running times on different available QBF-solvers. These formulas verify a certain interesting property: they have linear-size proofs in the QU-resolution proof system. However, QBF-solvers act as black boxes for me, and, although I can see from the data that they do not have the "predicted" linear running times, I would like to know how the complexity grows on these particular instances. In particular, it would be great to give some approximate bound or even confirm that they do not seem to have exponential growth. Thanks.</p>
<p>You can do regression on the whole data to get a better fit.</p>
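One concrete way to do that regression, assuming the growth is roughly polynomial, $t \approx c\,n^k$: fit a straight line to $(\log n, \log t)$; the slope estimates the exponent $k$, and a clearly super-linear drift in the log-log plot would instead suggest exponential growth. A minimal least-squares sketch (the synthetic timings are an assumption for illustration, not the asker's data):

```java
public class LogLogFit {
    // Least-squares slope of log(t) versus log(n).
    // If t ~ c * n^k, the points lie on a line with slope k.
    static double slope(double[] n, double[] t) {
        int m = n.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < m; i++) {
            double x = Math.log(n[i]), y = Math.log(t[i]);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        return (m * sxy - sx * sy) / (m * sxx - sx * sx);
    }

    public static void main(String[] args) {
        // Synthetic timings t = 3 * n^3 should give slope ~ 3.
        double[] n = {100, 200, 500, 1000, 2000, 5000};
        double[] t = new double[n.length];
        for (int i = 0; i < n.length; i++) t[i] = 3 * Math.pow(n[i], 3);
        System.out.println(slope(n, t));
    }
}
```

Unlike fitting ever-higher-degree polynomials, the log-log slope does not improve just by adding parameters, which addresses the overfitting worry in the question.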
242
algorithm complexity
Can an algorithm complexity be lower than its tight low bound / higher than its tight high bound?
https://cs.stackexchange.com/questions/128174/can-an-algorithm-complexity-be-lower-than-its-tight-low-bound-higher-than-its
<p>The worst case time complexity of a given algorithm is <span class="math-container">$\theta(n^3logn)$</span>.<br /> Is it possible that the worst time complexity is <span class="math-container">$\Omega(n^2)$</span>?<br /> Is it possible that the worst time complexity is <span class="math-container">$O(n^4)$</span>?<br /> The average time complexity is <span class="math-container">$O(n^4)$</span>?</p> <p>IMO it is possible as long as you control the constant <span class="math-container">$c$</span>, but then what's the point of mentioning any other bound than the tight bounds?</p>
<p>Let <span class="math-container">$t(x)$</span> be the time taken for input <span class="math-container">$x\in \{0,1\}^*$</span>, and <span class="math-container">$T(n)=\max_{x\in\{0,1\}^*,|x|=n}t(x)$</span> is the worst case.</p> <p>If <span class="math-container">$T\in\theta(n^3\log(n))$</span>, this means that there are constants <span class="math-container">$C_1,C_2$</span> and <span class="math-container">$N$</span> such that for all <span class="math-container">$n&gt; N$</span> you have <span class="math-container">$$C_1n^3\log(n)\leq T(n)\leq C_2n^3\log(n)$$</span>.</p> <p>Since <span class="math-container">$n^3\log(n)\geq n^2$</span> for <span class="math-container">$n&gt;1$</span>, then <span class="math-container">$C_1n^2\leq C_1n^3\log(n)\leq T(n)$</span>. Therefore, <span class="math-container">$T\in\Omega(n^2)$</span>.</p> <p>We also have that <span class="math-container">$n^3\log(n)\leq n^4$</span>, for all <span class="math-container">$n&gt;1$</span>. Therefore, <span class="math-container">$T(n)\leq C_2n^3\log(n)\leq C_2n^4$</span>.</p> <p>This implies that <span class="math-container">$T\in O(n^4)$</span>.</p> <p>Finally, <span class="math-container">$$\begin{align}A(n)&amp;=E(t(x), |x|=n)\\&amp;\leq E(T(n), |x|=n)\\&amp;=T(n)E(1,|x|=n)\\&amp;=T(n)\\&amp;\leq C_2n^3\log(n)\\&amp;\leq C_2n^4\end{align}$$</span>, for <span class="math-container">$n&gt;1$</span>. Therefore, the average number of steps satisfies <span class="math-container">$A\in O(n^4)$</span>.</p> <hr /> <p>Sometimes computing tight bounds is hard, while more relaxed bounds are more accessible.</p>
243
algorithm complexity
Definition of efficiency versus complexity of algorithm
https://cs.stackexchange.com/questions/71858/definition-of-efficiency-versus-complexity-of-algorithm
<p>When I read introductory textbooks I get contradictory answers. In some cases efficiency and complexity are treated the same, and the big-O notation is used to indicate that (for example) for an O(n) algorithm the time to execute is linearly proportional to the input dataset size.</p> <p>From other sources I have read that efficiency is a measure of the computing resources required to execute an algorithm, but not the same as the complexity of the algorithm. So, for example, is it possible to have an algorithm (say, to multiply two decimal numbers) that is efficient in terms of its resources (it does not need much to execute) but complex in the sense that, as the input dataset grows, the time to execute is O(n^2)?</p>
<p>An algorithm is often called "efficient" if its runtime is short compared to the inherent difficulty of the problem.</p> <p>For example, you cannot sort arbitrary arrays by comparing keys in less than O(n log n). Once sorted, you can look up values in O(log n). Looking up a value in the sorted array by doing a linear search has lower complexity than sorting the array, but it is inefficient, because linear search takes time O(n) when the lookup could be done in O(log n).</p>
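The gap can be made concrete by counting comparisons. A small Java sketch (illustrative only; probe counts stand in for running time):

```java
public class LookupCost {
    // Comparison (probe) counts for linear vs binary search on a sorted array.

    static int linearProbes(int[] a, int key) {
        int probes = 0;
        for (int x : a) {
            probes++;
            if (x == key) break;
        }
        return probes;  // up to n probes
    }

    static int binaryProbes(int[] a, int key) {
        int probes = 0, lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            probes++;
            if (a[mid] == key) break;
            if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
        }
        return probes;  // at most about log2(n) + 1 probes
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = 2 * i;  // already sorted
        int key = a[n - 1];  // worst case for linear search
        System.out.println("linear=" + linearProbes(a, key)
                + " binary=" + binaryProbes(a, key));
    }
}
```

Both searches are correct on a sorted array; the linear one is simply inefficient relative to what the problem allows, which is the distinction the answer draws.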
244
algorithm complexity
Algorithm with no closed-form exact complexity
https://cs.stackexchange.com/questions/60622/algorithm-with-no-closed-form-exact-complexity
<p>Does there exist an algorithm for which an exact complexity <em>provably</em> cannot be expressed in <a href="https://en.wikipedia.org/wiki/Closed-form_expression" rel="nofollow">closed-form</a>? </p> <p>Here closed-form means a <em>finite</em> composition of addition, subtraction, product, division, factorial, power with any exponent, logarithm, trigonometric function, inverse trigonometric function, hyperbolic function, and inverse hyperbolic function. You may choose a subset of the above functions to allow in a closed-form expression; this makes the problem easier. However, the larger the set of allowed functions, the better, since this also answers the problem for the subsets. </p> <p><em>Exact complexity</em> is a function from the input-set to a real number. You may group the input by some property, and then study <em>exact worst-case complexity</em> instead (or <em>exact best-case complexity</em>).</p> <p>Any computational model will do, as well as counting any resource (e.g. number of comparisons). To close off a trivial solution, a function without a closed-form expression cannot be a primitive operation of the computational model.</p> <p>If yes, is there a simple example of such an algorithm?</p>
<p>The runtime of an algorithm that computes the Ackermann function can't be expressed using primitive recursive functions. All commonly known named functions (well, besides the Ackermann function) are primitive recursive as far as I know.</p>
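For reference, the Ackermann function itself is short to write down. A naive recursive sketch in Java (added for illustration; only tiny inputs are feasible, which already hints at how fast it grows):

```java
public class Ackermann {
    // Standard two-argument Ackermann function:
    //   A(0, n) = n + 1
    //   A(m, 0) = A(m - 1, 1)
    //   A(m, n) = A(m - 1, A(m, n - 1))
    // Grows faster than any primitive recursive function of its arguments.
    static long ack(long m, long n) {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

    public static void main(String[] args) {
        System.out.println(ack(2, 3));  // 9
        System.out.println(ack(3, 3));  // 61
    }
}
```

Already A(4, 2) has 19729 decimal digits, so any algorithm computing A(m, n) must run for at least that many steps just to write the output, which is why its runtime outgrows every primitive recursive bound.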
245
algorithm complexity
Bentley–Ottmann algorithm time complexity issue
https://cs.stackexchange.com/questions/52979/bentley-ottmann-algorithm-time-complexity-issue
<p>In the <a href="https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm" rel="nofollow">Bentley–Ottmann algorithm</a>, regarding:</p> <blockquote> <p>Find the segments r and t that are immediately below and above s in T (if they exist) and if their crossing forms a potential future event in the event queue, <strong>remove it</strong>.</p> </blockquote> <p>That means every insertion of a segment may lead to an intersection-event deletion.</p> <p>I am wondering what makes the algorithm stay within the stated time complexity.</p> <p>Is it valid to say that it is because there are up to n such insertions, leading to up to n such deletions, which costs O(n log n) operations at most, hence the time complexity is not affected?</p>
246
algorithm complexity
Binary search algorithm - worst-case complexity
https://cs.stackexchange.com/questions/67387/binary-search-algorithm-worst-case-complexity
<p>I tried to calculate the worst case of binary search (not binary search tree). My calculations: <span class="math-container">$$T(n) = T\left(\frac{n}{2}\right) + 1$$</span> <span class="math-container">$$T(n) = T\left(\frac{n}{4}\right) + (1+1) = T\left(\frac{n}{8}\right) + (1+1+1) = {\dots} = T\left(\frac{n}{2^{k}}\right)+(1\cdot k) $$</span> <span class="math-container">$$T(n)=T(1) + (1\cdot k) = c_{1} + (1\cdot k) = c_{1} + log_{2}n = c_{1}+\frac{log(n)}{log(2)} $$</span> Finally, the complexity should be <span class="math-container">$$O(log(n)) $$</span> Is this a good way to prove the worst-case complexity of the binary search algorithm? Did I make any mistakes?</p>
<p>A much better way is to use the master method :), check that out!</p>
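Spelling out that hint (an elaboration, not part of the original answer), the recurrence from the question in the generic form of the master theorem:

```latex
\begin{aligned}
&T(n) = a\,T(n/b) + f(n)
  \quad\text{with } a = 1,\ b = 2,\ f(n) = \Theta(1),\\[2pt]
&c_{\mathrm{crit}} = \log_b a = \log_2 1 = 0,
  \qquad f(n) = \Theta(1) = \Theta\!\left(n^{c_{\mathrm{crit}}} \log^{0} n\right),\\[2pt]
&\text{so case 2 applies with } k = 0:\quad
  T(n) = \Theta\!\left(n^{c_{\mathrm{crit}}} \log^{k+1} n\right) = \Theta(\log n).
\end{aligned}
```

This agrees with the unrolling in the question and additionally gives the matching lower bound, since case 2 yields a $\Theta$ bound rather than just $O$.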
247
algorithm complexity
Need help verifying the complexity of an algorithm
https://cs.stackexchange.com/questions/159948/need-help-verifying-the-complexity-of-an-algorithm
<p>I have the following algorithm, which takes as input a non-negative integer n:</p> <pre><code>i = n
while i &gt; 0 do:
    i = i - 1
    j = 1
    while j &lt;= n do:
        j = 2j
</code></pre> <p>The outer while loop has complexity O(n) and the inner loop has complexity O(log n). So I assume the complexity of the algorithm as a whole is O(n log n).</p> <p>Is this correct? Thank you.</p>
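A quick empirical check, added here as an aside (not part of the original question): counting the executions of <code>j = 2j</code> gives exactly <code>n * (floor(log2(n)) + 1)</code> for n &gt;= 1, which matches the O(n log n) estimate.

```java
public class DoublingLoop {
    // Mirrors the pseudocode from the question and counts executions of
    // "j = 2j". Each outer iteration doubles j from 1 until it exceeds n,
    // i.e. floor(log2(n)) + 1 times, for a total of n * (floor(log2(n)) + 1).
    static long count(int n) {
        long executions = 0;
        int i = n;
        while (i > 0) {
            i = i - 1;
            long j = 1;
            while (j <= n) {
                j = 2 * j;
                executions++;
            }
        }
        return executions;
    }

    public static void main(String[] args) {
        System.out.println(count(8));     // 8 * 4 = 32
        System.out.println(count(1024));  // 1024 * 11 = 11264
    }
}
```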
248
algorithm complexity
Shell algorithm knuth sequence time complexity analysis
https://cs.stackexchange.com/questions/160442/shell-algorithm-knuth-sequence-time-complexity-analysis
<p>Given this shell sort algorithm implementation:</p> <pre><code>void shell(float *a, int l, int r)
{
    int i, j, h;
    for (h = 1; 3*h + 1 &lt;= r - l; h = 3*h + 1);
    for (; h &gt; 0; h /= 3) {
        for (i = l + h; i &lt;= r; ++i) {
            for (j = i; j &gt;= l + h &amp;&amp; a[j] &lt; a[j-h]; j -= h) {
                swap(a + j - h, a + j);
            }
        }
    }
}
</code></pre> <p>I want to analyse its time complexity, but I am having some difficulty modelling it mathematically as nested summations; can you help with that, please? We know that for the Knuth sequence the time complexity is <span class="math-container">$O(n^{3/2})$</span>.</p>
249
algorithm complexity
Is there an algorithm for algorithms time/space complexity optimisation?
https://cs.stackexchange.com/questions/51516/is-there-an-algorithm-for-algorithms-time-space-complexity-optimisation
<p>In the 1950s, a number of methods for <a href="https://en.wikipedia.org/wiki/Circuit_minimization_for_Boolean_functions" rel="nofollow">circuit minimization for Boolean functions</a> were invented. Is there an extension of those methods, or anything similar, for optimising the time or space complexity of algorithms?</p> <p>For example, an implementation of <a href="https://en.wikipedia.org/wiki/Bubble_sort" rel="nofollow">bubble sort</a> as an input for such an algorithm would produce an implementation of a sorting algorithm with time complexity closer to $O(n\log n)$.</p>
<p>Look up <a href="https://en.wikipedia.org/wiki/Blum&#39;s_speedup_theorem">Blum's speedup theorem</a> (yes, this article is less than informative; look at a book on complexity theory). It essentially says that there are programs for which there is a program doing the same job that is faster by any specified margin for almost all input data.</p> <p>By <a href="https://en.wikipedia.org/wiki/Rice&#39;s_theorem">Rice's theorem</a>, it is impossible to know if two given programs do the same job.</p> <p>Yes, for some <em>very restricted</em> notions of "program", given an example one can construct the "best possible" program for the job. Important classes, even. But a very far cry from anything which is able to express bubblesort.</p>
250
algorithm complexity
Time complexity of tree algorithm
https://cs.stackexchange.com/questions/165081/time-complexity-of-tree-algorithm
<p>I'm new to recurrence relations and master theorem so trying to learn. Say there's an algorithm <span class="math-container">$A$</span> whose input is the root of a binary tree <span class="math-container">$T$</span>. <span class="math-container">$A$</span> recurses so that it's called on each and every node in <span class="math-container">$T$</span> exactly once. The time complexity of <span class="math-container">$A$</span> called on a node <span class="math-container">$X$</span> is <span class="math-container">$O(number\:of\:nodes\:in\:subtree\: rooted\:at\:X)$</span>.</p> <p>What's the overall big O time complexity of <span class="math-container">$A$</span> (probably in terms of <span class="math-container">$N$</span>, the total number of nodes in <span class="math-container">$T$</span>)?</p> <p>My (informal) approach is to imagine <span class="math-container">$T$</span> as a maximally imbalanced tree (single line of nodes straight down). Then the time complexities of the nodes starting from root is <span class="math-container">$N$</span>, <span class="math-container">$N-1$</span>, .... <span class="math-container">$1$</span>, of which there are <span class="math-container">$N$</span>. That becomes <span class="math-container">$(N+1)*(N/2) == N^2/2+N/2 == O(N^2)$</span>.</p> <p>However I'm not sure this holds for other types of binary trees (such as perfect). 
I'm struggling to come up with a formal approach.</p> <p>I'm trying to use <a href="https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)#Generic_form" rel="nofollow noreferrer">master theorem</a> and believe the recurrence relation is <span class="math-container">$T(n) = 2T(n/2) + f(n)$</span>.</p> <p><span class="math-container">$c_{crit} = log_2 2 = 1$</span>.</p> <p>Worst case of <span class="math-container">$f(n)$</span> is at root of <span class="math-container">$T$</span> which is <span class="math-container">$O(n)$</span>, so <span class="math-container">$f(n) = O(n^c)$</span> gives <span class="math-container">$c = 1$</span>, and thus it cannot be Case 1 or 3 because <span class="math-container">$c == c_{crit}$</span>. So it must be Case 2. But how do I determine the value of <span class="math-container">$k$</span> in <span class="math-container">$f(n) = \theta(n^{c_{crit}}log^{k}n)$</span>?</p>
<p>Note that the complexity of the algorithm depends on the tree <span class="math-container">$T$</span>. For the maximally imbalanced tree, as you computed the complexity is <span class="math-container">$O(n^2)$</span>. Another important observation is that the complexity on this particular type of tree is <span class="math-container">$\Omega(n^2)$</span>.</p> <p>If you take a perfectly balanced complete tree, then then the root node has cost <span class="math-container">$n$</span>; nodes at level <span class="math-container">$1$</span> have costs at most <span class="math-container">$n/2$</span>; and so on... If you sum up the costs, you get <span class="math-container">$n + 2 \cdot n/2 + 4 \cdot n/4 + \dots + n \cdot 1 = O(n \log n)$</span>. Moreover, the complexity over this particular type of tree is <span class="math-container">$\Omega(n \log n)$</span>.</p> <p>Now, let us talk about any general tree. Any node of the tree has cost at most <span class="math-container">$n$</span>. Suppose we assume this worst cost for every node in the tree, then the complexity is at most <span class="math-container">$n^2 = O(n^2)$</span>. Simple! This complexity is tight for general tree, since you already showed a worst case example of a maximally unbalanced tree to be <span class="math-container">$\Omega(n^2)$</span>.</p>
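The two totals can be written out directly. A small Java sketch (illustrative; it evaluates closed recursions for the node-cost sums rather than building actual trees):

```java
public class SubtreeCost {
    // Total work = sum over all nodes of (size of subtree rooted there),
    // evaluated for the two extreme tree shapes.

    // Chain of n nodes: subtree sizes are n, n-1, ..., 1,
    // so the total is n(n+1)/2 = Theta(n^2).
    static long chainCost(int n) {
        long total = 0;
        for (int s = 1; s <= n; s++) total += s;
        return total;
    }

    // Perfect binary tree with n = 2^h - 1 nodes:
    // cost(n) = n + 2 * cost((n-1)/2), which solves to Theta(n log n).
    static long perfectCost(int n) {
        if (n == 0) return 0;
        return n + 2 * perfectCost(n / 2);  // n/2 == (n-1)/2 for odd n
    }

    public static void main(String[] args) {
        System.out.println("chain(1000)   = " + chainCost(1000));    // 500500
        System.out.println("perfect(1023) = " + perfectCost(1023));  // 9217
    }
}
```

For n = 1023 (h = 10) the perfect-tree total is h*2^h - (2^h - 1) = 9217, roughly n log n, while the chain's 500500 is roughly n^2/2, matching the two bounds in the answer.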
251
algorithm complexity
Time Complexity and Optimization for the Algorithm?
https://cs.stackexchange.com/questions/50072/time-complexity-and-optimization-for-the-algorithm
<p>I have found an algorithm to check whether a Hamiltonian cycle exists in a graph, but I am not able to compute/analyse its time complexity.</p> <p>The algorithm is as follows:</p> <ol> <li>Label all the vertices with distinct prime numbers.</li> <li>Label all edges with weight equal to 1.</li> <li>Now remove one vertex at a time. While removing a vertex v, if there is an edge between u and v and an edge between v and w, then add an edge between u and w with weight = weight(u->v)*weight(v->w)*label(v).</li> <li>If at the end you end up with only one vertex with self edges, and there is a self edge whose weight equals the product of all the primes of the removed vertices, then there is a Hamiltonian cycle.</li> </ol> <p>I have proved the algorithm correct but am unable to find its time complexity. I think there is also room for optimization, as we don't need to add edges whose weight divides the weight of some other edge already present. If someone can give some optimization to this algorithm it may turn out to be polynomial, thus proving P = NP.</p>
<p>Unfortunately, whenever you start thinking about algorithms that rely on the nice prime factorization property, things start breaking down. Why? Look at the sizes of the objects your algorithm manipulates. Generating the labels is actually the easy part: there are infinitely many primes, and the $n$-th prime is $O(n \log n)$, so a sieve produces $n$ prime labels in polynomial time. The problem is step 3: an edge weight encodes an entire path through the removed vertices, and two vertices can be connected by exponentially many distinct paths, so the contracted graph has to carry exponentially many parallel edges (or weights) to preserve the information the final check needs. So even though the algorithm may be workable for small input sizes, it is not a polynomial-time algorithm, and no P = NP proof follows from it.</p>
252
algorithm complexity
Complexity of peak finding algorithm for N dimensions
https://cs.stackexchange.com/questions/170041/complexity-of-peak-finding-algorithm-for-n-dimensions
<p>I am totally new in this area.</p> <p>Let's say we have a list of random numbers and we have to find one peak value (any single one is fine), where an element <span class="math-container">$a$</span> is a peak if it is <span class="math-container">$\geq$</span> both of its neighbours (or its only neighbour, if it is on the edge). For example:</p> <p><a href="https://i.sstatic.net/3KNtaoJl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KNtaoJl.png" alt="enter image description here" /></a></p> <p>So here 4 is a peak, and so are 5, 8, 5 and 7. But our algorithm has to find only 1 peak, not all of them.</p> <p>So if we use the binary search (which is optimal here, I think):</p> <p>For a 1D binary search, the complexity is:</p> <p><span class="math-container">$$\mathcal O=log_2(n)$$</span></p> <p>For a 2D matrix m x n, I heard that the complexity is:</p> <p><span class="math-container">$$\mathcal O=mlog_2(n)$$</span></p> <p>I am interested in what the complexity is for 3D, 4D and so on for binary search.</p> <p>What is the complexity of this algorithm in arbitrary N dimensions? Is there a formula for the complexity as a function of the number of dimensions we are in?</p>
<p>First of all, I recommend against using the term &quot;binary search&quot; in this context, since it's usually reserved for the specific problem of finding an element in a sorted list. I'd recommend calling it &quot;divide and conquer&quot;.</p> <p>Second, big-O notation looks like <span class="math-container">$O(f(n))$</span> and not <span class="math-container">$O = f(n)$</span>. Also, you don't need to write the base in the logarithm, since the base is equivalent to a constant factor, which the big-O notation hides anyways. By that I mean that <span class="math-container">$O(\log_2 n)$</span> and <span class="math-container">$O(\ln n)$</span> and <span class="math-container">$O(\log_{10} n)$</span> are all the same thing (e.g. <span class="math-container">$\ln n = (\ln 2)\log_2 n$</span>), so people just omit the base and write <span class="math-container">$O(\log n)$</span>.</p> <p>Terminology and notation aside, for an <span class="math-container">$N$</span>-dimensional grid of size <span class="math-container">$n$</span>, it's possible to find a peak in <span class="math-container">$O(n^{N-1}\log n)$</span> time. To do this, you can convert the problem to a 1D peak-finding algorithm by collapsing <span class="math-container">$N-1$</span> of the dimensions, and apply the 1D peak-finding algorithm.</p> <p>Think about how you'd do that, and then click on the spoiler:</p> <blockquote class="spoiler"> <p> Take the max over <span class="math-container">$N-1$</span> of the dimensions. Construct a length-<span class="math-container">$n$</span> array <span class="math-container">$y$</span> with <span class="math-container">$y_i = \max_{i_2, \ldots, i_N} x_{i,i_2,\ldots,i_N}$</span>, and apply the 1D peak finding algorithm on <span class="math-container">$y$</span>. Why does this work?</p> </blockquote>
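To make the spoiler concrete, here is a small Python sketch (names and layout are my own, not from the answer): a 1D divide-and-conquer peak finder, and the 2D reduction that evaluates the collapsed array lazily, so only a logarithmic number of its entries — each costing one row scan — are ever computed.

```python
# 1D peak finding by divide and conquer: O(log n) probes.
def find_peak_1d(a):
    """Return an index i with a[i] >= each existing neighbour."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < a[mid + 1]:
            lo = mid + 1          # some peak lies strictly to the right
        else:
            hi = mid              # some peak lies at mid or to the left
    return lo

# 2D reduction from the spoiler: run the 1D search on y[i] = max(x[i]),
# computing each y[i] lazily so only O(log n) rows are ever scanned.
def find_peak_2d(x):
    y = lambda i: max(x[i])       # one row scan: O(n) per evaluation
    lo, hi = 0, len(x) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if y(mid) < y(mid + 1):
            lo = mid + 1
        else:
            hi = mid
    return lo, x[lo].index(max(x[lo]))
```

For an n-by-n grid this performs O(log n) row scans of cost O(n) each, i.e. O(n log n); the same lazy collapse over N−1 dimensions gives the O(n^{N-1} log n) bound stated in the answer.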
253
algorithm complexity
Constant in Complexity of SQRT algorithm
https://cs.stackexchange.com/questions/30558/constant-in-complexity-of-sqrt-algorithm
<p>This is my first question on CS, so I apologize if it is off-topic.</p> <p>If we use Newton's method for finding the square root, then the complexity is $O(M(n))$ (using Wikipedia's notation: $M(n)$ is the cost of multiplying two $n$-digit numbers).</p> <p>I would like to know what the constant hidden in the Landau notation is in this case, and/or how I can calculate it.</p>
254
algorithm complexity
Using software to calculate the complexity of an algorithm
https://cs.stackexchange.com/questions/12475/using-software-to-calculate-the-complexity-of-an-algorithm
<p>I am somewhat a beginner, and I have often seen complexity being calculated for various algorithms but they never actually gave me a very clear idea about how it is done. Can someone please point some resources where I can learn to calculate the complexity of an algorithm?</p> <p>Secondly, is there some software that calculates the space and time complexity for an algorithm? I have seen that <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity" rel="nofollow">cyclomatic complexity</a> can be calculated by software.</p>
<p>Depending on your background, the <a href="http://en.wikipedia.org/wiki/Introduction_to_Algorithms">CLRS book</a> is a solid introduction. In the very first chapter, they walk you through how to analyze a simple algorithm in terms of both correctness (showing the algorithm really solves the problem) and complexity (how many steps the algorithm performs). There are lots of other books out there that some people prefer.</p> <p>In general, there is no software that does this for you. Coming up with the right invariants etc. is somewhat of an art requiring insight and experience. Read more about complexity theory, and you'll discover some inherent impossibilities related to automating such analysis.</p>
255
algorithm complexity
Calculate the complexity of an algorithm
https://cs.stackexchange.com/questions/76465/calculate-the-complexity-of-an-algorithm
<p>I have an algorithm here and I need to calculate its complexity:</p> <pre><code>for (i=1;i&lt;n;i++) for (j=1;j&lt;i*i;j++) if (j%i==0) for (k=0;k&lt;j;k++) sum++; </code></pre> <p>First of all, I think there are different complexities for the best, average and worst case, but I don't know how to find them. I had one thought: in the best case I only have the two outer fors, and I count the 'if' as the basic operation. So I get a double sum (ΣΣ 1) with bounds given by the values of i,j in the for loops. That's as far as I got.</p>
<p>The number of times that the <em>if</em> is executed is $$ \sum_{i=1}^{n-1} (i^2-1) = \Theta(n^3). $$ The number of times that <em>sum</em> is incremented is $$ \sum_{i=1}^{n-1} \sum_{\substack{1 \leq j &lt; i^2 \\ i \mid j}} j = \sum_{i=1}^{n-1} \sum_{k=1}^{i-1} ki = \sum_{i=1}^{n-1} i \binom{i}{2} = \Theta(n^4). $$ Altogether, we get a running time of $\Theta(n^4)$.</p>
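Both sums can be checked empirically. Here is a hedged Python sketch (my own code, not from the answer) that simulates the loop nest, counting the if-tests and the sum increments, and compares the counters against the answer's closed forms:

```python
from math import comb

def counts(n):
    """Simulate the triple loop, counting 'if' tests and sum increments."""
    ifs = incs = 0
    for i in range(1, n):
        for j in range(1, i * i):
            ifs += 1              # one evaluation of j % i == 0
            if j % i == 0:
                incs += j         # the innermost loop body runs j times
    return ifs, incs

# Compare with the two closed forms from the answer for a few n:
for n in (5, 10, 20):
    ifs, incs = counts(n)
    assert ifs == sum(i * i - 1 for i in range(1, n))          # Theta(n^3)
    assert incs == sum(i * comb(i, 2) for i in range(1, n))    # Theta(n^4)
```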
256
algorithm complexity
What is the algorithmic complexity of DFS under the cache oblivious model?
https://cs.stackexchange.com/questions/65409/what-is-the-algorithmic-complexity-of-dfs-under-the-cache-oblivious-model
<p>Consider the basic non-recursive DFS algorithm on a graph G=(V,E) (python-like pseudocode below) that uses array-based adjacency lists, a couple of arrays of size V, and a dynamic array stack of size &lt;= V. If I understand correctly, this is a cache oblivious algorithm since no information about the memory configuration is provided. Moreover, even if we suppose that <em>every</em> memory access in this algorithm produces a cache miss (the number of cache misses per line is given in comments), we would still have O(V+E) memory transfers, so I postulate this would be a worst case complexity upper bound under the CO-model. Is this correct?</p> <pre><code>def dfs(G, s):
    visited = [False for v in G]                            # V
    stack = [s]                                             # 1
    nxt = [0 for v in G]                                    # V
    degree = [len(v) for v in G]                            # V + E
    while stack:
        v = stack.pop(-1)                                   # 1
        visited[v] = True                                   # 1
        while nxt[v] &lt; degree[v] and visited[G[v][nxt[v]]]: # 6
            nxt[v]+=1                                       # 1
        if nxt[v] &lt; degree[v]:                              # 2
            stack.append(v)                                 # 1
            stack.append(G[v][nxt[v]])                      # 3

def main():
    # e.g. dfs order of this graph is 0 1 2 3
    #
    # 0 -&gt; 2
    # | \  ^
    # | \  |
    # v  v |
    # 3 -&gt; 1
    #
    G = [[1,3,2],[2],[],[1]]
    dfs(G, 0)
</code></pre>
257
algorithm complexity
Notation for average case complexity of an algorithm
https://cs.stackexchange.com/questions/14960/notation-for-average-case-complexity-of-an-algorithm
<p>I'm just wondering what the correct notation is when referring to an average case complexity of an algorithm that was calculated by doing empirical analysis.</p> <p>For example, I have tested my algorithm and fitted the results to the curve $f(n)=2.65\times 10^{-15}\cdot(2.17^{n})$ and in my report right now I'm saying something like: </p> <blockquote> <p><em>the average case complexity was found to be $\approx 2.65\times 10^{-15}\cdot(2.17^{n})$</em>, </p> </blockquote> <p>but I would rather say something like </p> <blockquote> <p><em>the average case complexity is $\in \Theta(2.17^{n})$</em>. </p> </blockquote> <p>But I'm not sure if this is technically correct because the result hasn't been theoretically proven, only empirically tested and fitted to the curve.</p>
<p>The notation $\Theta(f(n))$ isn't reserved for worst-case complexity, it's asymptotic notation applicable to functions in general. So you can state that the average case complexity is $\Theta(2.17^n)$, and it means exactly what you think it does. (Regardless of whether this has been proved or not.)</p>
258
algorithm complexity
What&#39;s better for an algorithm complexity, O(log n) or amortized O(log n)?
https://cs.stackexchange.com/questions/12714/whats-better-for-an-algorithm-complexity-olog-n-or-amortized-olog-n
<p><strong>Some context:</strong> I have to write a program that sorts the lines of a file in C for Linux. Since I have to read all lines of the file (with <code>fgets()</code> for example), I'm thinking about inserting them in a tree-like structure in a sorted manner using the so-called tree sort algorithm.</p> <p>Looking for a self-balancing tree structure I came across two that may be interesting, the <a href="http://en.wikipedia.org/wiki/Red-black_tree" rel="noreferrer">red-black tree</a> and the <a href="http://en.wikipedia.org/wiki/Splay_tree" rel="noreferrer">splay tree</a>. According to Wikipedia the red-black tree has an <code>O(log n)</code> worst case and the splay tree has <em>amortized</em> <code>O(log n)</code>.</p> <p><strong>The actual question:</strong> I know how to roughly compare some complexity levels in O notation, but what is amortized time? Given two algorithms, one that runs in <code>O(log n)</code> and the other in amortized <code>O(log n)</code>, which one would be preferable?</p>
<p>$O(\log n)$ in the worst case implies $O(\lg n)$ amortized.</p> <p>Basically, given two data structures supporting the same operator, one in $O(\lg n)$ time in the worst case and the other in $O(\lg n)$ amortized time, the first one is considered superior asymptotically: being $O(\lg n)$ time in the worst case means that each call of the operator will be supported in this time, while having an amortized complexity of $O(\lg n)$ means that some (very few) operator calls can take $O(n)$ time. Usually, the concept of amortized analysis is used for data structures whose amortized complexity is <em>better</em> than their worst case complexity.</p> <p>As an illustration, consider a data structure for integers, storing each such integer $x$ as a string of bits (e.g. $x=8$ represented by $(1,0,0,0)$), and the operator $x.inc()$ which increments $x$ by $1$.</p> <ul> <li>In the worst case (e.g. on $x=7$ represented by $(1,1,1)$), the operator $x.inc()$ corresponds to $\log(x)+1$ binary writes (e.g. writing $(1,0,0,0)$, corresponding to $8$).</li> <li>In the best case (e.g. on $x=8$), the operator $x.inc()$ corresponds to exactly one binary write (e.g. changing the last bit of $(1,0,0,0)$ to $1$, giving $(1,0,0,1)$).</li> </ul> <p>In a sequence of increments of the same integer object (e.g. enumerating all integers from $0$), the "best case" described above happens half of the time, and the worst case only rarely (on the increments reaching $2,4,8,16,\dots$). A case requiring $i$ bit writes happens $1/2^i$ of the time. Summing all those costs gives an amortized cost of $\sum_i i/2^i \in O(1)$. Hence the operator $inc()$ has a worst case complexity of $O(\lg n)$ but an amortized complexity of $O(1)$.</p> <p>The notion of <a href="http://en.wikipedia.org/wiki/Amortized_analysis">amortized analysis</a> is well explained on Wikipedia.
You might want to see the page on the <a href="http://en.wikipedia.org/wiki/Potential_method">Potential method</a> for more details.</p> <p>Hope it helps!</p>
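The binary-counter illustration can be simulated directly. A small Python sketch (my own, following the answer's setup) counts the bit writes and checks that although a single increment can be expensive, the total stays linear:

```python
def increment(bits):
    """Increment a little-endian binary counter in place.
    Returns the number of bit writes performed."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                # clear the trailing 1s
        i += 1
    if i == len(bits):
        bits.append(1)             # the counter grew by one bit
    else:
        bits[i] = 1
    return i + 1                   # i clears plus one set

n = 1024
counter = [0]
total = sum(increment(counter) for _ in range(n))
# A single increment can cost Theta(lg n) writes, but the total over n
# increments stays below 2n: amortized O(1) per call.
assert total < 2 * n
```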
259
algorithm complexity
Complexity of dynamic card game algorithm
https://cs.stackexchange.com/questions/52668/complexity-of-dynamic-card-game-algorithm
<p>Consider the following dynamic card game with a regular deck of 26 red cards and 26 black cards. A dealer draws the unturned cards one by one, and we can ask him to stop at any time. For every red card drawn, we get 1 dollar and lose 1 dollar for every black card drawn. The problem consists in finding an algorithm which returns the expected value of the game. If we denote by $b$ and $r$, respectively, the number of black and red cards left in the deck at any time, the expected value of the game $E(b,r)$ satisfies:</p> <p>$$E(b,r)=\max\left\{b-r,\frac{b}{b+r}\,E(b-1,r)+\frac{r}{b+r}\,E(b,r-1)\right\}\,,$$</p> <p>with boundary conditions $E(0,r)=0$ and $E(b,0)=b$. The expected value of the game is therefore given by $E(26,26)$.</p> <p>My question is, if we implement the recursive algorithm associated with the above formula, how can we determine its complexity? Using the trivial cases of $E(1,1)$ and $E(2,2)$, it would appear that we are dealing with exponential complexity, but is there a way to prove this properly, and if so, what is the number of necessary operations to compute $E(n,n)$ for an arbitrarily large integer $n$? Any ideas or references to literature would be greatly appreciated.</p>
<p>As pointed out in the comments, your recurrence is wrong (though equivalent for $b=r$; it's a recurrence for $E(b,r)+b-r$). Also, you can solve this efficiently using memoization or its more principled cousin, dynamic programming. The dynamic programming solution calculates iteratively $E(i,j)$ for $i+j=0,1,\ldots,52$.</p> <p>Finally, to answer your question, if you implement you original solution, you get a running time satisfying the recurrence $$ T(b,r) = C + T(b-1,r) + T(b,r-1), $$ with initial conditions $T(b,0) = O(1)$, $T(0,r) = O(1)$. If $T'(b,r) = T(b,r)-C$ then $$ T'(b,r) = T'(b-1,r) + T'(b,r-1), $$ a recurrence solved by $\alpha \binom{b+r}{r}$. Taking the initial conditions into account, we get that $$ T(b,r) = \Theta\left(\binom{b+r}{r}\right). $$ In particular, $$ T(n,n) = \Theta\left(\binom{2n}{n}\right) = \Theta\left(\frac{4^n}{\sqrt{n}}\right). $$ (This assumes all arithmetic is $O(1)$. The running time is actually somewhat larger since the relevant numbers grow fast, but this is a (multiplicative) lower-order factor.)</p>
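For illustration, here is a hedged Python sketch of the memoization the answer recommends, implementing the recurrence exactly as stated in the question (the function name `E` and the use of `lru_cache` are my own choices; as the answer notes, the recurrence really computes the game value plus $b-r$, but the point here is the evaluation cost):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(b, r):
    """The question's recurrence, memoized."""
    if b == 0:
        return 0.0
    if r == 0:
        return float(b)
    keep_going = (b / (b + r)) * E(b - 1, r) + (r / (b + r)) * E(b, r - 1)
    return max(b - r, keep_going)

# Only (b+1)*(r+1) distinct subproblems exist, so memoization turns the
# Theta(C(b+r, r)) tree of naive recursive calls into Theta(b*r) work.
value = E(26, 26)
```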
260
algorithm complexity
Complexity of algorithm waiting $e^{n}$ seconds
https://cs.stackexchange.com/questions/167613/complexity-of-algorithm-waiting-en-seconds
<p>A dumb question in complexity theory.</p> <p>Let's consider an algorithm that solves the following problem:</p> <p>is <span class="math-container">$e^{n}$</span> time passed?</p> <pre><code>f(n): 1. let t = get_current_time() 2. wait(e^n) 3. return get_current_time()-t </code></pre> <p>Naively, I'd say this algorithm takes exponential time given instruction 2, thus placing it outside the P class. However, it's easy to verify its answer is correct by computing <span class="math-container">$e^{n}$</span>, thus placing it inside NP.</p> <p>What am I missing?</p>
<p>There are some problems in your question:</p> <ol> <li>What is the computation model, and what is the operation &quot;wait&quot; in your model?</li> <li>&quot;Is <span class="math-container">$e^n$</span> time passed?&quot; is a very unclear statement. What is the input, and what is the question asked?</li> <li>&quot;this algorithm takes exponential time given instruction 2, thus placing it outside the P class&quot; suggests that you are forced to execute the algorithm to get an answer.</li> </ol>
261
algorithm complexity
Suffix array construction algorithm linear complexity constant
https://cs.stackexchange.com/questions/92336/suffix-array-construction-algorithm-linear-complexity-constant
<p>A non-recursive linear Suffix Array construction algorithm is presented in this thesis: <a href="https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.190/Mitarbeiter/baier/gsaca.pdf" rel="nofollow noreferrer">Linear-time Suffix Sorting</a>. The author claims that, overall, the algorithm runs at $O(n)$. While it is well explained, there are only hints that this is case. The implementation of the algorithm <a href="https://github.com/waYne1337/gsaca/blob/master/gsaca.c" rel="nofollow noreferrer">is in this GitHub repository</a> written in C.</p> <p>My question is: What are the complexity constants (say $O(\alpha n + \beta)$ where $\alpha$ and $\beta$ are constants) for C code between lines 92 and 186?</p>
<p>Constants like this are generally not calculated for the run time of algorithms, because it's not clear what they would count - would they count the number of some sort of primitive C operation executed, or the number of x86-64 operations, or the number of arm64 operations, or the microcode operations, or ...?</p> <p>More frequently than <em>calculating</em> the constants, people <em>measure</em> the run time and show the $\alpha$ that fits best. They will also sometimes calculate the number of times a specific operation is executed, like comparisons or swaps in a sorting algorithm.</p>
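As an illustration of measuring rather than calculating, here is a hedged Python sketch (entirely my own, not from the thesis or the repository) that times a linear-time function at several sizes and least-squares fits $t(n) \approx \alpha n + \beta$; the resulting $\alpha$ is machine- and implementation-dependent:

```python
import time

def fit_linear_constant(f, sizes):
    """Run f on list inputs of the given sizes and least-squares fit
    t(n) ~ alpha*n + beta. Returns (alpha, beta) in seconds."""
    xs, ys = [], []
    for n in sizes:
        data = list(range(n))
        t0 = time.perf_counter()
        f(data)
        ys.append(time.perf_counter() - t0)
        xs.append(float(n))
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    alpha = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    beta = (sy - alpha * sx) / m
    return alpha, beta

# sum() is Theta(n), so the fitted alpha is roughly its per-element cost
# on this machine.
alpha, beta = fit_linear_constant(sum, [10**4, 10**5, 10**6])
```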
262
algorithm complexity
Kosaraju&#39;s algorithm&#39;s time complexity
https://cs.stackexchange.com/questions/39955/kosarajus-algorithms-time-complexity
<p>I've been reading up on Kosaraju's algorithm for computing the strongly connected components of a directed graph, and I found that</p> <ol> <li>using an adjacency list representation gives a time complexity of $\Theta(V+E)$.</li> <li>using an adjacency matrix representation gives a time complexity of $O(V^{2})$.</li> </ol> <p>Why $\Theta$ in the first case and $O$ in the second case?</p>
<p>Any algorithm for computing strongly connected components from an adjacency matrix must examine at least $\binom{|V|}{2}$ entries in the worst case (proof below). In particular, since Kosaraju's algorithm is correct, its (worst-case) complexity on adjacency matrices is $\Omega(|V|^2)$, and so also $\Theta(|V|^2)$. There is no particular reason why Wikipedia used big $O$ for one and big $\Theta$ for the other – probably a different author.</p> <p>Edit: One possible reason for using big $O$ rather than big $\Theta$ is that sometimes the algorithm is lucky and runs faster. Whether this happens or not could depend on the particular implementation of DFS.</p> <p>Suppose $A$ is an algorithm examining fewer than $\binom{|V|}{2}$ entries. We will show that the algorithm cannot always know what the strongly connected components are. Run the algorithm, and for each entry of the adjacency matrix queried by the algorithm, return $0$. When the algorithm terminates, there are more than $\binom{|V|}{2}$ off-diagonal entries not queried, and in particular there exist vertices $i \neq j$ so that neither the $(i,j)$ nor the $(j,i)$ entry is queried. We could complete the adjacency matrix in two different ways: the all zeroes matrix, or the one in which both these entries are $1$. In the former case, there are $n$ strongly connected components; in the latter, only $n-1$.</p>
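For reference, here is a hedged Python sketch of Kosaraju's two passes over adjacency lists (my own code, not from any particular textbook): each pass visits every vertex and every list entry a constant number of times, which is where $\Theta(V+E)$ comes from, while with an adjacency matrix merely scanning one vertex's neighbourhood already costs $\Theta(V)$.

```python
def kosaraju_scc(adj):
    """Return comp[v] = SCC id of v for a digraph given as adjacency lists."""
    n = len(adj)
    order, seen = [], [False] * n

    def dfs1(u):
        # Iterative DFS on the original graph, recording finish order.
        stack = [(u, iter(adj[u]))]
        seen[u] = True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()

    for u in range(n):
        if not seen[u]:
            dfs1(u)

    radj = [[] for _ in range(n)]      # reversed graph: Theta(V + E)
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)

    comp, c = [-1] * n, 0
    for u in reversed(order):          # second pass, on the reversed graph
        if comp[u] == -1:
            stack = [u]
            comp[u] = c
            while stack:
                v = stack.pop()
                for w in radj[v]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1
    return comp
```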
263
algorithm complexity
Calculating Time Complexity of Algorithm Using Incrementor Variable
https://cs.stackexchange.com/questions/112130/calculating-time-complexity-of-algorithm-using-incrementor-variable
<p>I am trying to calculate the time complexity of an algorithm using <code>n</code> in the code below.</p> <p>I have a working solution to a coding challenge to sort a stack using only another stack, and I've added a counter variable <code>n</code> that is incremented anywhere an element in the stack is pushed, popped, or held.</p> <p>The following code is written in JavaScript: </p> <pre><code>const sortStack = (stack) =&gt; { let n = 0; sorted = new Stack(); while (stack.storage.length) { tmp = stack.pop(); n += 1; if (tmp &gt;= sorted.peek()) { sorted.push(tmp); n += 1; } else { while (tmp &lt; sorted.peek()) { stack.push(sorted.pop()); n += 1; } sorted.push(tmp); n += 1; } } console.log("n: ", n); return sorted; } sortedStack = sortStack(s); sortedStack.printContents(); </code></pre> <p>If my calculations and usage of <code>n</code> are correct, then this algorithm has an input <code>n</code> of 6 (length of <code>stack</code>) with a final <code>n</code> of 30, which would give it a time complexity of O(N*5). </p> <p>Is this correct? </p>
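One way to sanity-check the analysis is to run the same counting on inputs of growing size. This Python port (my own, mirroring the JavaScript above) shows that on the worst-case order — elements popped in decreasing value — the final count grows quadratically with the input length, rather than being a fixed multiple of it:

```python
def sort_stack_count(stack):
    """Python port of the routine above, counting pushes/pops like n does."""
    ops = 0
    sorted_ = []
    while stack:
        tmp = stack.pop()
        ops += 1
        if not sorted_ or tmp >= sorted_[-1]:
            sorted_.append(tmp)
            ops += 1
        else:
            while sorted_ and tmp < sorted_[-1]:
                stack.append(sorted_.pop())   # move blockers back
                ops += 1
            sorted_.append(tmp)
            ops += 1
    return ops

# Worst case: each fresh element evicts everything inserted so far, costing
# 3m + 2 ops when m elements are already sorted, so the total is
# 3k(k-1)/2 + 2k = O(k^2) for k elements.
for k in (1, 2, 3, 10, 25):
    assert sort_stack_count(list(range(1, k + 1))) == 3 * k * (k - 1) // 2 + 2 * k
```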
264
algorithm complexity
Time complexity of a tree-based algorithm
https://cs.stackexchange.com/questions/106350/time-complexity-of-a-tree-based-algorithm
<p>I solved a practice interview problem that was sent to me by the Daily Coding Problem mailing list. I am now curious about the exact time complexity of my solution.</p> <h2>Problem Statement</h2> <blockquote> <p>Given the mapping a = 1, b = 2, ... z = 26, and an encoded message, count the number of ways it can be decoded.</p> <p>For example, the message '111' would give 3, since it could be decoded as 'aaa', 'ka', and 'ak'. You can assume that the messages are decodable. For example, '001' is not allowed.</p> </blockquote> <p>I made an assumption that my solution should accept any type of mapping, not just the one mentioned in the question. So the running time of the algorithm is parametrized by the encoded message length and the type of mapping used.</p> <h2>Attempted Solution</h2> <pre><code>class Node:
    def __init__(self, val):
        self.val = val
        self.children = []

    def add_child(self, child):
        self.children.append(child)


def count_encodings(cipher, mapping):
    # what is the longest possible map value
    longest_val = max([len(str(x)) for x in mapping.values()])
    root = Node('')
    create_cipher_subtree(root, cipher, longest_val, mapping)
    return count_leaves(root)


def create_cipher_subtree(node, cipher, longest_val, mapping):
    for part_len in range(1, min(longest_val, len(cipher)) + 1):
        curr_part = cipher[:part_len]
        if curr_part in mapping.values():
            child = Node(curr_part)
            node.add_child(child)
            remaining_part = cipher[part_len:]
            if remaining_part:
                create_cipher_subtree(child, remaining_part, longest_val, mapping)


def count_leaves(node):
    if not node.children:
        return 1
    count = 0
    for child in node.children:
        count += count_leaves(child)
    return count
</code></pre> <p>We can then reproduce the example solution as follows:</p> <pre><code>from string import ascii_lowercase

# start enumerating at 1 so that a = 1, ..., z = 26
mapping = {k: str(v) for v, k in enumerate(ascii_lowercase, 1)}
cipher = '111'
print(count_encodings(cipher, mapping))
</code></pre> <p>In short, this solution constructs a tree, like this:</p> <pre><code>''
├── '1'
│   ├── '1'
│   │   └── '1'
│   └── '11'
└── '11'
    └── '1'
</code></pre> <p>Then the number of leaf nodes is counted.</p> <h2>Explanation</h2> <p>First, the algorithm checks all possible values in the mapping and records the length of the longest value (<code>longest_val</code>).</p> <p>We then create a tree, where each node's <code>val</code> field is a part of the encoded message (<code>cipher</code>) that corresponds to a single mapping value; the root is the only node whose <code>val</code> is the empty string. The concatenation of the nodes' <code>val</code> fields along a path from the root to a leaf is one possible way of encoding.</p> <p>The tree is created as follows:</p> <ol> <li>Check if the first character of <code>cipher</code> can be interpreted as a mapping value.</li> <li>If yes, create a node with that value recorded and make it a child of the root. Then, pass the <code>remaining_part</code> of the encoded message (everything past the first character) to the child and repeat the process from there.</li> <li>Check if the first two characters of <code>cipher</code> can be interpreted as a mapping value.</li> <li>Repeat step 2, but now <code>val</code> is two characters. The <code>remaining_part</code> would be everything in the encoded message past the first two characters. This would create another child node of the root.</li> <li>If <code>longest_val</code> was 3, we would then check if the first 3 characters of <code>cipher</code> can be interpreted as a mapping value....
And so on.</li> </ol> <p>After the tree is created, we count the number of leaf nodes, which corresponds to number of possible messages that can produce the provided encoding.</p> <h2>Complexity Analysis</h2> <p>I know that creating the tree of all possible ways the message could have been encoded might have been an overkill for this problem (in terms of space use), but doing it this way helped me better reason about the solution.</p> <p>I am now unsure about the exact relation between a mapping choice and message length, and the answer to the question. What is the time complexity of this solution? </p> <p>I am more interested in how various mappings affect the complexity. E.g. if some of the values in the mapping were 3-digit numbers, then many nodes in the tree would have 3 children. Does this increase the complexity of the algorithm? How would one capture this fact when writing Big-Oh expression for the algorithm?</p>
<p>You can show a simple exponential lower bound for you algorithm as follows.</p> <ol> <li>Assume we have a <span class="math-container">$d$</span> digit encoding (2 in your initial example).</li> <li>For any cipher text of length <span class="math-container">$n$</span> we can create a worst-case encoding of <span class="math-container">$n$</span> 1's.</li> <li>Now we divide the input into <span class="math-container">$n \ / \ d$</span> segments of length <span class="math-container">$d$</span>.</li> <li>For any of these segments, we have <span class="math-container">$2^{d-1}$</span> ways of decoding it. <ul> <li>Take <span class="math-container">$d = 4$</span> for example, with a segment "1111" we have the following decodings: <ol> <li>"1111"</li> <li>"1", "111"</li> <li>"1", "1", "11"</li> <li>"1", "1", "1", "1"</li> <li>"1", "11", "1"</li> <li>"11", "11"</li> <li>"11", "1", "1"</li> <li>"111", "1"</li> </ol></li> </ul></li> <li>Now we can decode all of these <span class="math-container">$n \ / \ d$</span> segments separately so we can multiply this together to get:</li> </ol> <p><span class="math-container">$$(2^{d-1})^{n/d} = 2^{(d-1)n/d}$$</span></p> <hr> <p>You can get a tighter bound for this however by noting that the number of decodings for a string of ones is:</p> <p><span class="math-container">$$f(n) = \begin{cases} 1 &amp; n = 1\\ 2 &amp; n = 2\\ \vdots &amp; \vdots\\ 2^{d-1} &amp; n = d\\ \sum_{i = 1}^d f(n - i) \end{cases}$$</span></p> <p>If we plug in for <span class="math-container">$d=2$</span> we actually see something familiar: <span class="math-container">$$f(n) = \begin{cases} 1 &amp; n = 1\\ 2 &amp; n = 2\\ f(n-1) + f(n-2) \end{cases}$$</span> This is the Fibonacci Sequence! This, we <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression" rel="nofollow noreferrer">know is exponential</a> in <span class="math-container">$n$</span>. 
If you try <span class="math-container">$d=3$</span> you get the <a href="http://oeis.org/wiki/Tribonacci_numbers" rel="nofollow noreferrer">Tribonacci sequence</a>. As you increase <span class="math-container">$d$</span> you simply get <a href="https://en.wikipedia.org/wiki/Generalizations_of_Fibonacci_numbers#Fibonacci_numbers_of_higher_order" rel="nofollow noreferrer">higher order Fibonacci sequences</a>. All of these will be exponential in the worst case.</p>
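The count itself can be computed without building the tree; this Python sketch (my own, not from the answer) counts decodings with a right-to-left dynamic program and reproduces the Fibonacci pattern for the two-digit case $d=2$:

```python
def count_decodings(cipher, values):
    """Number of ways to split cipher into pieces drawn from `values`,
    via a right-to-left DP: O(len(cipher) * max piece length)."""
    n = len(cipher)
    longest = max(len(v) for v in values)
    vals = set(values)
    ways = [0] * (n + 1)
    ways[n] = 1                           # the empty suffix decodes one way
    for i in range(n - 1, -1, -1):
        ways[i] = sum(ways[i + k]
                      for k in range(1, min(longest, n - i) + 1)
                      if cipher[i:i + k] in vals)
    return ways[0]

digits = [str(v) for v in range(1, 27)]   # a = 1 .. z = 26, so d = 2
# On '1' * n the counts follow the Fibonacci recurrence f(n) = f(n-1) + f(n-2):
assert [count_decodings('1' * n, digits) for n in range(1, 7)] == [1, 2, 3, 5, 8, 13]
```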
265
algorithm complexity
Problems with proven complexity but no algorithm yet found
https://cs.stackexchange.com/questions/63005/problems-with-proven-complexity-but-no-algorithm-yet-found
<p>Does there exist a computable problem P such that:</p> <ul> <li>It is proven that P can be solved by an algorithm with a certain complexity.</li> <li>The best known algorithm is, unluckily, still slower (of greater complexity) than the proven bound.</li> </ul> <p><strong>Example:</strong></p> <ul> <li>A problem that we have proven can be solved in the worst case in $O(n^2)$ operations.</li> <li>The best currently known algorithm runs in $\Omega(n^3)$ time.</li> </ul>
266
algorithm complexity
Complexity of exponential algorithm, optimised with memoization?
https://cs.stackexchange.com/questions/68491/complexity-of-exponential-algorithm-optimised-with-memoization
<p>I was solving a problem, where one part of it was the following:</p> <p>"Given an m-sided die (values in [1,m]) that will be rolled n times, calculate the probability that the total sum of the rolls will be higher than b"</p> <p>Initially, I implemented a naive exponential solution, exploring the whole state space (DFS-like). Then, I realised that this algorithm repeats the calculation for a given number of remaining rolls and current sum. So, I optimised it, saving these calculations and reusing them later. For example, let's say we have a 4-sided die that will be rolled 2 times (m=4, n=2). The below image shows the state space: <a href="https://i.sstatic.net/yKu2c.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yKu2c.jpg" alt="enter image description here"></a></p> <p>In green are the values that were cached from previous computations and will be returned directly from memory. In red are the values that will be computed normally. I have been trying to calculate the complexity of the algorithm:</p> <ul> <li>The initial algorithm, where there is no "caching" of values, obviously has an exponential complexity of <code>O(m^n)</code></li> <li><p>The iterations needed for the optimised algorithm follow the below pattern:</p> <p>For (m=4,n=2): 11 = 4 + 7 = 4 + [(4*2)-1]</p> <p>For (m=4,n=3): 21 = 4 + 7 + 10 = 4 + [(4*2)-1] + [(4*3)-2]</p> <p>For (m=4,n=4): 34 = 4 + 7 + 10 + 13 = 4 + [(4*2)-1] + [(4*3)-2] + [(4*4)-3]</p> <p>...</p></li> </ul> <p>This seems to be a mathematical sequence, but I can't calculate the exact formula. Any help much appreciated.</p>
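For what it's worth, the pattern in the bullet list can be summed directly; a hedged Python sketch (assuming, as the examples suggest, that the k-th term is m·k − (k−1)):

```python
def iterations(m, n):
    """Sum of the per-level terms quoted in the question: the k-th term is
    m*k - (k-1), e.g. 4, 4*2-1, 4*3-2, ... for m = 4."""
    return sum(m * k - (k - 1) for k in range(1, n + 1))

# The sequence has the closed form (m-1)*n*(n+1)/2 + n, i.e. O(m * n^2) --
# consistent with one memoized state per (rolls left, current sum) pair.
for n, expected in [(2, 11), (3, 21), (4, 34)]:
    assert iterations(4, n) == expected
    assert iterations(4, n) == (4 - 1) * n * (n + 1) // 2 + n
```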
267
transformer architecture
What is prefix monotonicity?
https://cs.stackexchange.com/questions/7685/what-is-prefix-monotonicity
<p>I have a background in computer architecture and only a cursory understanding of process networks. For a paper I am writing I need to understand prefix monotonicity properly.</p> <p>For now I have "a stream transformer is prefix monotonic if its output for a given input record r is dependent only on the input stream up to and including r, and independent of whether r is the last record in the stream". But this was gathered by word of mouth and I am not sure it is the proper approach.</p> <p>I would welcome suggestions for:</p> <ul> <li>proper formal background and definitions;</li> <li>useful analogies to explain the concept to a newcomer (the audience of the paper needs to understand prefix monotonicity but may not be knowledgeable about TCS).</li> </ul>
<p>This publication provides a definition of prefix monotonicity: <a href="http://www4.informatik.tu-muenchen.de/publ/papers/broy_acm93.pdf" rel="nofollow">link</a></p> <p>Definition:</p> <p><em>"Prefix monotonicity reflects a basic property of communicating systems: assume we have observed a finite sequence of output messages for a corresponding finite sequence of input messages. Then if we observe additional input (thus the old input sequence is a prefix of the extended one) we may just observe additional output (thus the old output sequence is a prefix of the extended one). Prefix monotonicity provides a notion of causality between input and output. It reflects the stepwise consumption of input and production of output and guarantees the existence of least fixed points, which is mandatory for giving meaning to communication feedback loops."</em></p>
268
transformer architecture
How can we improve the capture of hierarchical structures in Graph Neural Networks (GNNs)?
https://cs.stackexchange.com/questions/171414/how-can-we-improve-the-capture-of-hierarchical-structures-in-graph-neural-networ
<p>Graph Neural Networks (GNNs) have proven to be powerful tools for modeling relationships in structured data, but one of their main limitations is their difficulty in capturing hierarchical structures within a graph. Message-passing methods primarily focus on local relationships, which can limit their ability to represent multi-level dependencies.</p> <p>Some proposed solutions include the use of graph pooling techniques to create more abstract representations or hybrid architectures combining GNNs with transformers. However, these approaches often face challenges such as over-smoothing or loss of structural information.</p> <p>Given this context, what are the most efficient methods to capture hierarchical structures in GNNs without compromising model expressiveness or significantly increasing computational cost?</p>
269
attention mechanism
Are there any neural NLG systems which don&#39;t generate in left-to-right order?
https://cs.stackexchange.com/questions/99987/are-there-any-neural-nlg-systems-which-dont-generate-in-left-to-right-order
<p>For a while, all classification tasks in natural language processing were based on simple RNNs, which operate in strict word-by-word order. Adding gating mechanisms increased the ability to "look back", and the newer addition of context vectors, which let the model train attention over different words during the task, has made classification of text less about "left-to-right" reading and more about selective focusing.</p> <p>However, I have never seen a seq2seq or any other <strong>natural language generation</strong> system (machine translation, image2seq, etc.) that generates the desired output in anything other than sequential left-to-right order. It seems this would be very powerful. Are there any examples of using attention not only in encoders, but also in decoders?</p>
270
GPT model
Why GPT model is a higher order hidden markov model
https://cs.stackexchange.com/questions/160891/why-gpt-model-is-a-higher-order-hidden-markov-model
<p>I have read the GPT-1 paper, and my understanding is that it works as follows: <span class="math-container">$U$</span> is input tokens, <span class="math-container">$h_0=UW_e+W_p$</span>, <span class="math-container">$h_i=\text{transformer_block}(h_{i-1})$</span> and the output is a probability vector <span class="math-container">$P(u)=\text{softmax}(h_nW_e^T)$</span>, and the hidden states are the final latent layer before softmax, so the hidden states are <span class="math-container">$h_n(U)$</span>? I do not really understand, how to formalise this as a hidden Markov chain similar to the definition in <a href="https://en.wikipedia.org/wiki/Hidden_Markov_model#Definition" rel="nofollow noreferrer">Wikipedia</a>?</p>
<p>The statement that GPT is a higher order hidden Markov model should not be taken too seriously. It is intended as a slogan or intuition, not something that is intended to be rigorously proven.</p> <p>Let <span class="math-container">$x_1,x_2,\dots,x_k$</span> denote the words of the text, in order (<span class="math-container">$x_1$</span> is the first word, etc.).</p> <p>GPT is a model for predicting the next word, given all prior words. In other words, it estimates <span class="math-container">$p(x_i | x_1,x_2,\dots,x_{i-1})$</span>. Hopefully you can see how this resembles the formulation of a Markov model.</p> <p>It is higher-order, because it uses <em>all</em> prior words to predict the next word, not just the previous few words.</p>
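To make the intuition concrete, here is a toy count-based order-k Markov predictor (my own illustration, not anything from the GPT paper): it conditions only on the last k tokens of the context, whereas GPT conditions on the entire prefix, which is the sense in which GPT is "higher order".

```python
from collections import Counter, defaultdict

def train_ngram(tokens, k):
    """Count-based order-k Markov model: condition on at most the last k tokens."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - 1):
        context = tuple(tokens[max(0, i - k + 1):i + 1])
        counts[context][tokens[i + 1]] += 1
    return counts

def predict(counts, context, k):
    """Most frequent next token given the truncated context, or None if unseen."""
    ctx = tuple(context[-k:])
    if ctx not in counts:
        return None
    return counts[ctx].most_common(1)[0][0]

tokens = "the cat sat on the mat the cat ran".split()
model = train_ngram(tokens, 2)
print(predict(model, ["the", "cat"], 2))  # a next-word guess based only on ("the", "cat")
```

An order-k model can only distinguish histories that differ in their last k tokens; GPT's conditioning on the full prefix (up to its context window) is what the slogan gestures at.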
271
GPT model
Question about word embeddings in a specific language model - GPT-2
https://cs.stackexchange.com/questions/116184/question-about-word-embeddings-in-a-specific-language-model-gpt-2
<p>How were the <a href="https://openai.com/blog/better-language-models/" rel="nofollow noreferrer">GPT-2</a> token embeddings constructed? </p> <p>The authors mention that they used Byte Pair Encoding to construct their vocabulary. But BPE is a compression algorithm that returns a list of subword tokens that would best compress the total vocabulary (and allow rare words to be encoded efficiently).</p> <p>My question is: how was that list of strings turned into the vectors that they actually used for training the model? The papers they published on the <a href="https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow noreferrer">original GPT</a> and its <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow noreferrer">follow-up GPT-2</a> don't seem to specify those details.</p>
272
GPT model
What are the 175 billion parameters used in the GPT-3 language model?
https://cs.stackexchange.com/questions/156130/what-are-the-175-billion-parameters-used-in-the-gpt-3-language-model
<p>I am currently working my way through <em><a href="https://arxiv.org/abs/2005.14165" rel="nofollow noreferrer">Language Models are Few-Shot Learners</a></em>, the initial 75-page paper about <a href="https://en.wikipedia.org/wiki/GPT-3" rel="nofollow noreferrer">GPT-3</a>, the language model that ChatGPT spawned from.</p> <p>In it, they mention several times that they are using <strong>175 billion parameters</strong>, orders of magnitude more than previous experiments by others. They show this table, for 8 models ranging from 125 million params to 175 billion params.</p> <p><a href="https://i.sstatic.net/rsKhP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rsKhP.png" alt="enter image description here" /></a></p> <p>Then they say:</p> <blockquote> <p>Table 2.1 shows the sizes and architectures of our 8 models. Here n_params is the total number of trainable parameters, n_layers is the total number of layers, d_model is the number of units in each bottleneck layer (we always have the feedforward layer four times the size of the bottleneck layer, d_ff = 4 ∗ d_model), and d_head is the dimension of each attention head. All models use a context window of n_ctx = 2048 tokens. We partition the model across GPUs along both the depth and width dimension in order to minimize data-transfer between nodes. The precise architectural parameters for each model are chosen based on computational efficiency and load-balancing in the layout of models across GPU’s. Previous work [KMH+20] suggests that validation loss is not strongly sensitive to these parameters within a reasonably broad range.</p> </blockquote> <p>I am not an expert in machine learning; I just know basic RNNs and how they work with just a few parameters and a few layers (I don't know, like 5 parameters and 5 layers max, it's been a while). What are the things counted as <strong>parameters</strong> in this 175 billion parameter network? How does the network look with its 96 layers? How many nodes are there per layer, sort of thing?</p> <p>I am trying to understand this paper and eventually how ChatGPT works, and getting to section 2 so far, I haven't seen what you would use as inputs/parameters to such a large model. The ones you learn in school are tiny compared to this. Hoping for a little illumination on what could be going on.</p>
<p>The 175 billion parameters in the GPT-3 language model are the values the model uses to make predictions about the next word or words in a piece of text. They are essentially the weights applied to the input data: in a neural network, the parameters are the values that are learned and adjusted during training in order to minimize the difference between the predicted output and the desired output.</p> <p>The GPT-3 model has 96 layers, meaning it is composed of 96 stacked transformer blocks. Each layer is made up of a number of units; for the largest model, the paper's table gives d_model = 12,288 units per layer, with the feed-forward sublayer four times as wide.</p> <p>To use the GPT-3 model, you provide it with some input data, such as a sentence or a paragraph of text. The model processes this input through its 96 layers, using its 175 billion learned parameters, to produce a probability distribution over the next word or words. Because its predictions are based on both the input and those learned parameters, it is able to generate human-like text as a result.</p>
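As a rough sanity check on the 175 billion figure, here is a back-of-the-envelope count (a sketch that ignores biases, layer norms, and other small terms). The bulk of a transformer's parameters sit in two places per layer: the four attention projection matrices (roughly 4*d^2 weights) and the feed-forward block (roughly 8*d^2, since d_ff = 4*d), plus the token and position embeddings:

```python
def gpt_param_estimate(n_layers, d_model, n_vocab=50257, n_ctx=2048):
    """Rough transformer parameter count, ignoring biases and layer norms.
    Per layer: 4*d^2 for the attention projections (Q, K, V, output)
             + 8*d^2 for the feed-forward block (d -> 4d -> d, since d_ff = 4*d).
    Plus token embeddings (n_vocab * d) and position embeddings (n_ctx * d)."""
    per_layer = 4 * d_model**2 + 8 * d_model**2
    embeddings = n_vocab * d_model + n_ctx * d_model
    return n_layers * per_layer + embeddings

# GPT-3 "175B": 96 layers, d_model = 12288 (from Table 2.1)
print(f"{gpt_param_estimate(96, 12288):,}")  # about 174.6 billion
```

With n_layers = 96 and d_model = 12,288, this lands at about 174.6 billion, close to the quoted 175B, which shows where almost all of those parameters live: in the per-layer weight matrices.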
273
GPT model
Is there a way to connect a deep language model output to input?
https://cs.stackexchange.com/questions/115948/is-there-a-way-to-connect-a-deep-language-model-output-to-input
<p>In models like GPT-2, TXL and Grover, is there a good way to know which input weights (tokens) resulted in each token of the output? </p>
274
GPT model
Difference between Byte Pair Encoding (BPE), Sequitur, and Re-Pair
https://cs.stackexchange.com/questions/171396/difference-between-byte-pair-encoding-bpe-sequitur-and-re-pair
<p>I looked at the Wikipedia pages for <a href="https://en.wikipedia.org/wiki/Byte_pair_encoding" rel="nofollow noreferrer">Byte Pair Encoding (BPE)</a>, <a href="https://en.wikipedia.org/wiki/Sequitur_algorithm" rel="nofollow noreferrer">Sequitur</a>, and <a href="https://en.wikipedia.org/wiki/Re-Pair" rel="nofollow noreferrer">Re-Pair</a>. These algorithms were published in <a href="http://www.pennelynn.com/Documents/CUJ/HTML/94HTML/19940045.HTM" rel="nofollow noreferrer">1994</a>, <a href="https://arxiv.org/pdf/cs/9709102" rel="nofollow noreferrer">1997</a>, and <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=892708" rel="nofollow noreferrer">1999</a>.</p> <p>These algorithms all appear to be based on a similar idea: given an input string, iteratively recognise commonly occurring pairs of symbols (&quot;digrams&quot;), and compress them into newly defined symbols (effectively abbreviations). The result can be expressed as a context-free grammar (CFG), which can be used to reconstruct/decode the input string.</p> <p>The Re-Pair paper mentions differences with Sequitur:</p> <blockquote> <p>...because Sequitur processes the message in a left-to-right manner and maintains its two invariants (uniqueness and utility) at all times, it does not necessarily choose as grammar rules the phrases that might eventually lead to the most compact representation.</p> </blockquote> <p>But as far as I can tell, neither of these papers mentions/references/discusses BPE, despite the fact it first appeared several years earlier. BPE is now very important in LLMs, EG it is used by several <a href="https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow noreferrer">GPT</a> models.</p> <p><strong>So my question is: how are algorithms such as Sequitur and Re-Pair different to BPE?</strong></p>
275
GPT model
Generate product description from product specifications
https://cs.stackexchange.com/questions/167540/generate-product-description-from-product-specifications
<p>I am looking for a python NLP library that can generate a proper product description based on product features provided to it.</p> <p>So far, I have tried the <strong>transformers</strong> library, and this is the code:</p> <pre><code>from transformers import pipeline

def generate_description(specs):
    # Construct input text from specifications
    input_text = &quot;This is a&quot;
    if 'category' in specs:
        input_text += f&quot; {specs['category']}&quot;
    else:
        input_text += &quot;n item&quot;
    if 'condition' in specs:
        input_text += f&quot; in {specs['condition']} condition&quot;
    if 'battery_health' in specs:
        input_text += f&quot; with battery health: {specs['battery_health']}&quot;
    if 'cosmetic_damage' in specs:
        input_text += f&quot; and {specs['cosmetic_damage']} cosmetic damage&quot;
    input_text += &quot;.&quot;

    # Generate description using GPT-2 model
    generator = pipeline(&quot;text-generation&quot;, model=&quot;gpt2&quot;)
    description = generator(input_text, max_length=100, num_return_sequences=1)[0]['generated_text']
    return description

# Example specifications
specs = {
    'category': 'laptop',
    'condition': 'like new',
    'battery_health': 'good',
    'cosmetic_damage': 'minor scratches'
}

# Generate description
description = generate_description(specs)
print('DESCRIPTION : ', description)
</code></pre> <p>but the generated description is not even slightly accurate.</p> <p>I am looking for some library that works somewhat similar to: <a href="https://ahrefs.com/writing-tools/product-description-generator" rel="nofollow noreferrer">https://ahrefs.com/writing-tools/product-description-generator</a></p> <p>I would be using this along with a web application where dynamic fields for different products would be filled and I would require a description for the product.</p> <p>I have also tried storing different combinations of features and their resultant description in a DB and tried fetching them, but there seem to be just too many combinations for this method to be feasible.</p>
<p>Any ideas regarding which library to use or how to solve this issue would be appreciated.</p>
<p>Shopping questions (requests for us to recommend a library or software package) are off-topic on Stack Exchange.</p> <p>One possible way to achieve your goals is to use a large language model, like GPT4, to generate a description. GPT2 is likely to be terrible at this. Try GPT4.</p>
276
LSTM
What are the inputs to an LSTM for Slot Filling Task
https://cs.stackexchange.com/questions/71032/what-are-the-inputs-to-an-lstm-for-slot-filling-task
<p>I am confused about the inputs to a Long Short-Term Memory (LSTM) network for the slot filling task in Spoken Language Understanding.</p> <p>Before I worked on this, I implemented a language model with a Recurrent Neural Network (RNN) and then with an LSTM. The input to the RNN and LSTM language models was a one-hot vector, which represented each word.</p> <p>Now, when moving on to the slot filling task for an LSTM, I am having trouble figuring out what the input would be. I know that a one-hot vector representation is not enough for this task because the outputs along each time step are slot labels. I have a dictionary (in Python) that maps words to indices (which I can turn into a one-hot vector), and I also have a dictionary with labels (that are used for slot filling), which I got from the ATIS data. Here is an example:</p> <p><a href="https://i.sstatic.net/Fo5hD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fo5hD.png" alt="enter image description here"></a></p> <p>I know I need the above two dictionaries to accomplish the slot filling task, but I cannot figure out how to use them as inputs for the LSTM. Furthermore, I have been using the basic LSTM structure, and for the language model LSTM I built, the output at each time step went through a Softmax function. Is this what will be required for slot filling too?</p> <p>I am in high school and do not have anyone to contact, so any help is really appreciated. Thank you so much.</p>
277
LSTM
What is Temperature in LSTM (and neural networks generally)?
https://cs.stackexchange.com/questions/79241/what-is-temperature-in-lstm-and-neural-networks-generally
<p>One of the hyperparameters for LSTM networks is temperature. What is it?</p>
<p><strong>Temperature</strong> is a hyperparameter of LSTMs (and neural networks generally) used to control the randomness of predictions by scaling the logits before applying softmax. For example, in TensorFlow’s Magenta <a href="https://github.com/tensorflow/magenta/blob/5cbbfb94ff4f506f1dd1f711e4704a1b3279a385/magenta/models/melody_rnn/melody_rnn_generate.py#L82" rel="noreferrer">implementation</a> of LSTMs, temperature represents how much to divide the logits by before computing the softmax.</p> <p>When the temperature is 1, we compute the softmax directly on the logits (the unscaled output of earlier layers), while with a temperature of 0.6 the model computes the softmax on $\frac{logits}{0.6}$, resulting in larger values. Performing softmax on larger values makes the LSTM <strong>more confident</strong> (less input is needed to activate the output layer) but also <strong>more conservative</strong> in its samples (it is less likely to sample from unlikely candidates). Using a higher temperature produces a softer probability distribution over the classes, and makes the RNN more “easily excited” by samples, resulting in <strong>more diversity</strong> and also <strong>more mistakes</strong>.</p> <p>Neural networks produce class probabilities from a logit vector $\mathbf{z} = (z_1,\ldots,z_n)$ by applying the softmax function, which produces the probability vector $\mathbf{q} = (q_1,\ldots,q_n)$ by comparing $z_i$ with the other logits:</p> <p>$q_i = \frac{\exp{(z_i/T)}}{\sum_j\exp{(z_j/T)}}\tag{1}$</p> <p>where $T$ is the temperature parameter, normally set to 1.</p> <p>The softmax function normalizes the candidates at each iteration of the network based on their exponential values by ensuring the network outputs are all between zero and one at every timestep.</p> <p>A higher temperature therefore increases the sensitivity to low-probability candidates. 
In LSTMs, the candidate, or sample, can be a letter, a word, or musical note, for example:</p> <blockquote> <p>For high temperatures (${\displaystyle \tau \to \infty }$), all [samples] have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature (${\displaystyle \tau \to 0^{+}}$), the probability of the [sample] with the highest expected reward tends to 1. </p> </blockquote> <p>- from <a href="https://en.wikipedia.org/wiki/Softmax_function" rel="noreferrer">Wikipedia article on softmax function</a></p> <h1>Reference</h1> <p>Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015). <a href="https://arxiv.org/abs/1503.02531" rel="noreferrer">arXiv</a></p>
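As a concrete illustration (a standalone sketch, not Magenta's actual code), here is a softmax with a temperature parameter; dividing the logits by a smaller T sharpens the distribution, while a larger T flattens it toward uniform:

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Softmax on logits scaled by 1/T, numerically stabilized by subtracting the max."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, T=1.0))  # moderately peaked
print(softmax_with_temperature(logits, T=0.5))  # sharper: the top class dominates
print(softmax_with_temperature(logits, T=5.0))  # flatter: closer to uniform
```

Sampling from the T=0.5 distribution is the "more confident, more conservative" regime described above; sampling from the T=5.0 distribution is the "more diversity, more mistakes" regime.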
278
LSTM
Time Series Prediction with an LSTM
https://cs.stackexchange.com/questions/60138/time-series-prediction-with-an-lstm
<p>I have a time series that I want to predict with an LSTM. I am able to get very good results using 50 datapoints predicting 51, but I struggle to get any accuracy using something like 200 datapoints to predict 220. After an epoch, my network outputs 0 for all inputs. Is there a technique for predicting multiple timesteps ahead of the final output with a neural network?</p> <p>For example, would it make more sense to predict 1 timestep ahead 20 times in a row feeding the outputs back in to get to that 20th timestep? Training it on a sequence followed by the timestep 20 ahead does not seem to work so far.</p>
<p>Yes, you could try applying the LSTM iteratively 20 times. In other words: use the first 200 datapoints to predict the 201th; then use datapoints 2..201 to predict the 202th; and so on, until you predict the 220th. You'll have to evaluate how well this works on a test set; it might work, or it might not.</p> <p>This could still fail badly. It could even be that there is just no way to predict 20 timesteps out. For instance, it's possible that the short-term correlation is high but the long-term correlation is low. Think of the weather: it's possible to predict tomorrow's weather with relatively high accuracy, but seems to be extremely hard to predict the weather 3 weeks out. So there might just be fundamental barriers to making predictions that far into the future.</p>
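A sketch of that iterative scheme (here `model_predict` is a stand-in for your trained LSTM; the toy linear extrapolator is only there to make the example runnable):

```python
def rollout(model_predict, history, n_steps):
    """Iteratively predict n_steps ahead by feeding each prediction back in.
    model_predict: callable taking a window (list of floats), returning the next value.
    In practice model_predict would wrap a trained LSTM."""
    window = list(history)
    preds = []
    for _ in range(n_steps):
        nxt = model_predict(window)
        preds.append(nxt)
        window = window[1:] + [nxt]  # slide the window forward by one step
    return preds

# Toy stand-in model: extrapolates a linear trend from the last two points
toy = lambda w: 2 * w[-1] - w[-2]
print(rollout(toy, [1.0, 2.0, 3.0], 5))  # [4.0, 5.0, 6.0, 7.0, 8.0]
```

Note that errors compound: each prediction is fed back in as if it were a true observation, which is one reason the 20-steps-out forecast can degrade badly even when one-step-ahead accuracy is high.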
279
LSTM
Hessian-Free instead of LSTM for Recurrent Net Machine Translation
https://cs.stackexchange.com/questions/38144/hessian-free-instead-of-lstm-for-recurrent-net-machine-translation
<p>Last year, Ilya Sutskever and collaborators came out with a <a href="http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf" rel="nofollow">paper</a> about a recurrent LSTM net that learns sequence to sequence mappings for machine translation. It's somewhat surprising that the authors used LSTM instead of Hessian-Free to train this net since the first author was one of the innovators behind the development of Hessian-Free methods for recurrent nets (<a href="http://www.icml-2011.org/papers/532_icmlpaper.pdf" rel="nofollow">citation</a>).</p> <p>I was wondering if anyone has tried Hessian-Free for learning sequence to sequence mappings for machine translation. If so, does it work? Is its performance inferior to LSTM's in some way?</p> <p>Suggestions of other places to post this kind of question are welcome; although, the obvious place - metaoptimize - is currently so slow and spam-ridden I decided to try here first. </p>
280
LSTM
What Happens if I swap the forget gate and update gate in LSTM model?
https://cs.stackexchange.com/questions/129526/what-happens-if-i-swap-the-forget-gate-and-update-gate-in-lstm-model
<p>Consider the following equations used in LSTM (taken from Andrew Ng's course on sequence models)</p> <p>In an LSTM model, the LSTM cell has three inputs at any time step t</p> <ul> <li>Input(<span class="math-container">$X_t , a^{(t-1)}, C^{(t-1)})$</span>, <br><br> Here <span class="math-container">$X_t$</span> is the input vector, <span class="math-container">$a^{(t-1)}$</span> is the previous hidden state and <span class="math-container">$ C^{(t-1)}$</span> is the previous cell state <br> <br></li> </ul> <p>Now the new cell state <span class="math-container">$c^t$</span> is given by the following formula: <br><br> <span class="math-container">$C^t = $</span> forget_gate * <span class="math-container">$C^{(t-1)} + $</span> update_gate * <span class="math-container">$\overline{C^t}$</span> <br><br> <strong>Question</strong>: <br><br> If I swap the places of forget_gate and update_gate, I still get a valid <span class="math-container">$C^t$</span>, so why are we multiplying the previous cell state by the forget gate only and the current cell state by the update gate only? What if I multiply the previous cell state by the update gate?</p> <p>Edit: After swapping, the formula would look like this, <br> <br> <span class="math-container">$C^t = $</span> update_gate * <span class="math-container">$C^{(t-1)} + $</span> forget_gate * <span class="math-container">$\overline{C^t}$</span></p> <p><a href="https://i.sstatic.net/an40r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/an40r.png" alt="enter image description here" /></a></p>
<p>The two formulas are mathematically equivalent; the only change is that you have swapped the names of the variables (changes them to names that are less intuitive, compared to what effect they have), but that doesn't affect the behavior of the system.</p>
281
LSTM
How does the forget layer of an LSTM work?
https://cs.stackexchange.com/questions/118865/how-does-the-forget-layer-of-an-lstm-work
<p>Can someone explain the mathematical intuition behind the forget layer of an LSTM?</p> <p>So as far as I understand it, the cell state is essentially long term memory embedding (correct me if I'm wrong), but I'm also assuming it's a matrix. Then the forget vector is calculated by concatenating the previous hidden state and the current input and adding the bias to it, then putting that through a sigmoid function that outputs a vector then that gets multiplied by the cell state matrix.</p> <p>How does a concatenation of the hidden state of the previous input and the current input with the bias help with what to forget?</p> <p>Why is the previous hidden state, current input and the bias put into a sigmoid function? Is there some special characteristic of a sigmoid that creates a vector of important embeddings?</p> <p>I'd really like to understand the theory behind calculating the cell states and hidden states. Most people just tell me to treat it like a black box, but I think that, in order to have a successful application of LSTMs to a problem, I need to know what's going on under the hood. If anyone has any resources that are good for learning the theory behind why cell state and hidden state calculation extract key features in short and long term memory I'd love to read it.</p>
<p>Think of it like this: The cell state <span class="math-container">$c_t$</span> is a vector. The forget vector <span class="math-container">$f_t$</span> is used to choose which parts of the cell state to "forget". We update the cell state with something like <span class="math-container">$c_t = f_t \circ c_{t-1}$</span> (it's actually more complicated, but let's start with that, to gain intuition). Suppose <span class="math-container">$f_t$</span> were a vector of 0's and 1's. In the coordinates where <span class="math-container">$f_t$</span> is 1, the value of <span class="math-container">$c_{t-1}$</span> would be copied over to <span class="math-container">$c_t$</span> (it's not forgotten). In the coordinates where <span class="math-container">$f_t$</span> is 0, <span class="math-container">$c_t$</span> is reset to zero and the value of <span class="math-container">$c_{t-1}$</span> is ignored (it's forgotten). So, the forget vector can be used to control in which positions we forget values from the previous cell state vector.</p> <p>Now what remains is to figure out a way to choose a forget vector <span class="math-container">$f_t$</span>. In general we might want to choose which positions to forget based on both the current input <span class="math-container">$x_t$</span> and the previous hidden state <span class="math-container">$h_{t-1}$</span>. So, we should compute <span class="math-container">$f_t$</span> as some function of <span class="math-container">$x_t$</span> and <span class="math-container">$h_{t-1}$</span>. Many choices of how to represent that function might be possible, but an LSTM chooses a specific function for this. In an LSTM, this is done by a single-layer fully-connected neural network. A single-layer fully-connected neural network concatenates all of the inputs, then multiplies them by a matrix, adds a bias, and feeds the result to an activation layer (in this case, sigmoid activation). 
So that's why the formula for <span class="math-container">$f_t$</span> looks the way it does: that formula is capturing what a single-layer fully-connected neural network does.</p>
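Numerically, the gating step looks like this (a minimal sketch of just the forget-gate part; `f_logits` stands in for the pre-activation value W_f . [h_{t-1}, x_t] + b_f):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forget_gate_step(c_prev, f_logits):
    """One element-wise application of the simplified update c_t = f_t * c_{t-1}.
    The sigmoid squashes each logit into (0, 1), acting as a soft keep/forget switch."""
    f = [sigmoid(z) for z in f_logits]
    return [fi * ci for fi, ci in zip(f, c_prev)]

c_prev = [3.0, -2.0, 5.0]
# Large positive logit -> gate near 1 (keep); large negative -> near 0 (forget);
# logit of 0 -> gate of 0.5 (half-remembered)
print(forget_gate_step(c_prev, [10.0, -10.0, 0.0]))
```

The first coordinate is kept almost intact, the second is wiped to nearly zero, and the third is halved, which is exactly the "soft 0's and 1's" behavior described above.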
282
LSTM
Intuitive description for training of LSTM (with forget gate/peephole)?
https://cs.stackexchange.com/questions/12871/intuitive-description-for-training-of-lstm-with-forget-gate-peephole
<p>I am a CS undergraduate (I don't know much about AI though; I did not take any courses on it, and knew nothing about NNs until recently) who is about to do a school project in AI, so I picked a topic regarding grammar induction (of context-free languages and perhaps some subset of context-sensitive languages) using reinforcement learning on a neural network. I started by studying previous successful approaches to see if they can be tweaked, and now I am trying to understand the approach using supervised learning with Long Short-Term Memory. I am reading <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.4170">"Learning to Forget: Continual Prediction with LSTM"</a>. I am also reading the paper on peepholes, but it seems even more complicated and I'm trying something simpler first. I think I understand how the memory cell and the network topology work. What I do not get right now is the training algorithm. So I have some questions to ask:</p> <ul> <li><p>How exactly do different inputs get distinguished? Apparently the network is not reset after each input, and there is no special symbol to delimit different inputs. Does the network just receive a continuous stream of strings without any clues on where one input ends and the next one begins?</p></li> <li><p>What is the time lag between the input and the corresponding target output? Certainly some amount of time lag is required, and thus the network can never be trained to produce a target output from an input that it has not had enough time to process. If it was not the Reber grammar that was used, but something more complicated that could potentially require a lot more information to be stored and retrieved, the amount of time needed to access the information might vary depending on the input, something that probably cannot be predicted when we decide on the time lag for training.</p></li> <li><p>Is there a more intuitive explanation of the training algorithm? 
I find it difficult to figure out what is going on behind all the complicated formulas, and I would need to understand it because I need to tweak it into a reinforcement learning algorithm later.</p></li> <li><p>Also, the paper did not mention anything regarding noisy <strong>training</strong> data. I have read somewhere else that the network can handle noisy testing data very well. Do you know if LSTM can handle situations where the training data has some chance of being corrupted/ridden with superfluous information?</p></li> </ul>
<p>LSTM is designed to process a stream of data chunks (each chunk being the set of inputs for the network at this point in time) that arrive over time, observe features occurring in the data, and yield output accordingly. The time lag (delay) between occurrences of the features to recognize may vary and may be prolonged.</p> <p>One would then train the network by streaming training examples in randomized ordering, which should also have some timeshift noise added in the form of idle passes (have the network activate when inputs are at default idle values, e.g. when there is no audio in the case of a speech processor) [exception: if any training data should obey periodic timeshift patterns, such as music, then the timeshift noise should keep the timeshifting synchronized, e.g. in music making sure a start-of-measure training example isn't shifted to mid-measure and so forth].</p> <p>It is also possible to have a semi-supervised setup where the network is always in a training configuration and is trained with examples that expect output of an idle value when no feature is present, or the appropriate expected value when a feature is presented.</p> <p>If feedback-format training is desired it can be emulated by:</p> <ol> <li>saving the internal state (time t)</li> <li>activating the network on current inputs (now at t+1)</li> <li>supervisory process evaluates the output obtained at t <ul> <li>3a if correction is needed first rewind to the saved state (rewinds network back to t)</li> <li>3b generate a training example with the correction</li> <li>3c run a train (backprop) pass for this slice rather than an activation</li> </ul></li> </ol> <p>thus one implements a feedback-style system since training examples are basically only created while the network is "getting it wrong." 
The feedback format is useful if one wants the network to attempt improvisation (like Schmidhuber's music example).</p> <ul> <li>it should be pointed out that part of the correction feedback (and thus training examples) necessarily includes those that enforce idle-valued output when features are not present at the current time</li> </ul> <p>It was mentioned by the OP that [there is no separation of inputs] except that actually there is. If one thinks of a voice recognition scenario, one has periods of utterances (features the LSTM should detect) interspersed with long periods of silence. So to address the concern, it would be fair to say those periods of silence are in fact separating the sequenced groups of inputs (those silences too are actually a feature the network needs to detect and learn to respond to with idle-valued outputs, i.e. learn to do nothing during silence).</p> <h2>A note about resetting of the network</h2> <p>Any reset or recalling of a saved network state in the LSTM sense means "go back in time", thus undoing any learning the LSTM performed prior to the reset.</p> <p>Thus you were correct in stating LSTMs are not reset prior to each training sample nor training epoch. LSTMs want their data streamed, or provided in an 'online' manner, so to speak.</p>
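The save/rewind/train loop in steps 1-3 can be sketched as follows (the `save_state`/`restore_state`/`activate`/`train_step` interface is hypothetical, not a real library API, and `TinyNet` is a dummy stand-in so the loop is runnable):

```python
class TinyNet:
    """Minimal stand-in with the save/rewind interface assumed by the loop below."""
    def __init__(self):
        self.state = 0.0
    def save_state(self):
        return self.state
    def restore_state(self, s):
        self.state = s
    def activate(self, x):
        self.state += x          # fake recurrence: accumulate the input
        return self.state
    def train_step(self, x, target):
        self.state = target      # fake "learning": jump straight to the target

def feedback_train(net, stream, supervisor):
    for x in stream:
        saved = net.save_state()        # (1) remember the state at time t
        y = net.activate(x)             # (2) activate on the current input
        target = supervisor(x, y)       # (3) supervisor evaluates the output
        if target is not None:          # (3a) correction needed:
            net.restore_state(saved)    #      rewind the network back to t
            net.train_step(x, target)   # (3b/3c) train on the corrected example

net = TinyNet()
# Supervisor issues a correction (target 0.0) whenever the output exceeds 1.0
feedback_train(net, [0.5, 0.7, 0.2], lambda x, y: 0.0 if y > 1.0 else None)
print(net.state)
```

As the answer notes, training examples are only generated when the supervisor decides the network is "getting it wrong"; correct outputs pass through untouched.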
283
LSTM
LSTM : What should I do if I am always getting an output too close to one value?
https://cs.stackexchange.com/questions/134959/lstm-what-should-i-do-if-i-am-always-getting-an-output-too-close-to-one-value
<p>I am training a model for ham and spam classification using LSTM. I am indicating the spams as 0, and the hams as 1. However, the dataset has many more hams than spams, so I tend to get an output very close to 1. That means the output is almost always above 0.5. So I have two questions about this:</p> <p>Q1. Should I choose the classification threshold based on the ratio between ham and spam in the dataset? (e.g., if there are 4000 ham and 1000 spam, should I count an output higher than 0.8 as ham and lower than 0.8 as spam?)</p> <p>Q2. If this is not a legitimate way, do you have any solutions for my problem?</p>
<p>The usual starting point is that if the score is above 0.5, classify it as ham, otherwise as spam. If most emails are ham, then it makes sense that most emails give you a score above 0.5, so you have not said anything that indicates there is a problem.</p> <p>This approach assumes that the proportion of ham vs spam in the training set is the same as the proportion at test time.</p> <p>If that doesn't work, one standard approach is to choose a threshold, and everything with a score above the threshold is treated as ham, everything below as spam. A standard way to set a threshold is, after you've trained the LSTM, choose the optimal threshold based on the training set (i.e., that maximizes the accuracy on the training set, etc.), or on a validation set.</p>
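The threshold selection in the last paragraph can be done by brute force over the observed scores (a sketch; in practice you would run this on a held-out validation set rather than the training set):

```python
def best_threshold(scores, labels):
    """Pick the threshold that maximizes accuracy on a labeled set.
    labels: 1 = ham, 0 = spam; a score at or above the threshold is classified as ham.
    Candidate thresholds are the observed scores themselves, plus one above the max."""
    candidates = sorted(set(scores)) + [1.1]
    best_t, best_acc = 0.5, -1.0
    for t in candidates:
        acc = sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

scores = [0.95, 0.90, 0.85, 0.80, 0.75, 0.70]
labels = [1,    1,    1,    1,    0,    0   ]  # class imbalance pushes all scores high
t, acc = best_threshold(scores, labels)
print(t, acc)  # a threshold of 0.80 separates these perfectly
```

This is the scenario from the question: every score is above 0.5, yet a well-chosen threshold still separates the classes.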
284
LSTM
Is there something as good as a GRU or LSTM but simpler?
https://cs.stackexchange.com/questions/83939/is-there-something-as-good-as-a-gru-or-lstm-but-simpler
<p>I was just reading this paper: <a href="https://arxiv.org/pdf/1701.05923.pdf" rel="nofollow noreferrer">Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks, Rahul Dey and Fathi M. Salem</a></p> <p>It seems to me that perhaps the architectures of LSTMs and GRUs are overly complicated, and that the same problems could probably be solved with a simpler architecture.</p> <p>I get the theory behind LSTMs and GRUs, as they are kind of trying to model short-term memory. But really all they need to do is get rid of the vanishing/exploding gradient problem of RNNs.</p> <p>What is the latest research? Is there something simpler than a GRU?</p> <p>Edit: Actually I found something called an MGU (minimal gated unit), which claims to be simpler. What's the latest?</p>
285
LSTM
How long can the short memory last in the RNN?
https://cs.stackexchange.com/questions/142325/how-long-can-the-short-memory-last-in-the-rnn
<p>For a recurrent neural network, the LSTM is a model of how the network works. However, consider the case where the input is a long paragraph or even an article, <span class="math-container">$$c_1c_2...c_n$$</span> where the <span class="math-container">$c_i$</span> are characters. The LSTM works as expected when <span class="math-container">$n$</span> is not a large number. But what if <span class="math-container">$n$</span> is a large number, say <span class="math-container">$10^5$</span>? Clearly, the short-term memory would not work as expected in the LSTM model.</p> <p>Logically, with each input <span class="math-container">$c_{a+i}$</span>, where <span class="math-container">$a$</span> is some fixed integer and <span class="math-container">$i\geq 1$</span>, the &quot;information&quot; or &quot;probability&quot; of the outcome contributed at <span class="math-container">$c_a$</span> gets &quot;modified&quot; or even &quot;suppressed&quot;, which is the reason why the LSTM works. However, for sufficiently large <span class="math-container">$i$</span>, the information at <span class="math-container">$c_a$</span> might be completely suppressed.</p> <p>How long can the short memory last in the RNN? And how would this affect the training?</p>
286
LSTM
Why does an RNN output differently based on the training sequence
https://cs.stackexchange.com/questions/113158/why-does-a-rnn-network-output-different-based-on-the-training-sequence
<p>I've set up an RNN LSTM network in Java using DL4J as the library.</p> <p>I currently have 500 examples of positive text, and 500 examples of negative text.</p> <p>When I fit the training data by first training all the negatives and then all the positives, my predictions only favor high positive responses even for things that would be considered negative in my training examples.</p> <p>And if I reverse it and train positive first and then train negative last, everything is favored as negative with high (80-90%) confidence.</p> <p>However, when I train in an oscillating pattern such as: Positive, Negative, Positive, Negative and so on, the predictions become accurate again and similar examples of negative text are picked up relatively accurately and vice versa for positive text.</p> <p>I've only just started studying machine learning in general but I can't quite find any resources explaining why the sequence of training affected my model so strongly. Is this normally how an LSTM network should behave or is oscillating the training data the correct approach?</p> <p>Would the classification of a message's negative intent be something an LSTM RNN network should be used for or should I consider another network?</p>
<p>RNNs are pattern recognition tools. It isn't entirely clear to me exactly what you are trying to do, but if you simply intend for it to classify positive and negative messages, a regular neural network might be better suited.</p> <p>What your RNN does (if implemented correctly) is learn classifications in the context of the sequence, i.e., it is learning: what are the odds of a given input X being positive/negative, knowing the preceding N outcomes.</p> <p>This should already be giving you some insight into why different orderings of the training data skew the results. If your training set is all positives followed by all negatives, then the vast majority of your training set will be teaching the network that a sequence of positives almost certainly indicates a positive message, and vice versa.</p> <p>Building a good dataset is by far the most important part of applying machine learning, and I suggest you look for some high-level explanations of the different types of ML and some general information on bias in data.</p>
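The ordering effect described above is usually avoided by shuffling the combined training set before each epoch, rather than hand-interleaving the classes; a minimal sketch with placeholder data (the example strings here are stand-ins, not real training text):

```python
import random

random.seed(0)  # seeded only so this sketch is reproducible

positives = [("positive example %d" % i, 1) for i in range(500)]
negatives = [("negative example %d" % i, 0) for i in range(500)]

training_set = positives + negatives
random.shuffle(training_set)  # mix the classes before (each epoch of) training

# after shuffling, neither class occupies one contiguous block:
pos_in_first_half = sum(label for _, label in training_set[:500])
```

The asker's oscillating pattern is a perfectly deterministic special case of this; random shuffling generalizes it to imbalanced class counts as well.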
287
LSTM
How can I modify this detail in the article &quot;Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications&quot;?
https://cs.stackexchange.com/questions/144325/how-can-i-modify-this-detail-in-the-article-extending-multi-sense-word-embeddi
<p>This question is about the paper <a href="https://arxiv.org/pdf/2103.15330.pdf" rel="nofollow noreferrer">Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications</a>.</p> <p>I am interested in the transformer part of the paper, and the main structure of the algorithm is represented in the following image:</p> <p><a href="https://i.sstatic.net/okHk2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/okHk2.jpg" alt="enter image description here" /></a></p> <p>Before the main question, I have other questions which perhaps will help with the main one:</p> <p>What is the role of the DECODER? Why do I need an encoder/decoder?</p> <p>Main question:</p> <p>In the paper the authors replace the transformer encoder with a bi-LSTM and the transformer decoder with an LSTM.</p> <p>What are the other options for replacing the encoder/decoder part of the algorithm? Is it possible to replace the encoder/decoder at once by a single structure?</p>
<p>Thanks for being interested in our work.</p> <p>The role of the decoder is to model the dependency between the codebook embeddings. For example, in this case, outputting an embedding close to sings might be correlated with outputting an embedding close to microphone.</p> <p>There are several reasons that we chose to use a seq2seq (encoder/decoder) architecture. For example, we want to compare with related work such as skip-thought. In addition, the sentence length varies but we want to output a fixed number of embeddings.</p> <p>If you want, you can input a fixed number of multiple special tokens into a transformer encoder and use the corresponding hidden states as the codebook embedding. We find that this encoder-only architecture is more likely to output almost identical embeddings (i.e., multiple embeddings collapse into a single embedding), especially when using a transformer with many layers such as BERT. We are investigating some solutions to this problem now.</p>
288
LSTM
Which neural network is good for predicting the value of strings
https://cs.stackexchange.com/questions/131318/which-neural-network-is-good-for-predecting-the-value-of-strings
<p>I have a dataset that contains some strings. A numeric value is assigned to each string. I want to develop a machine learning (deep learning) model that takes a string and predicts its value. What neural network do you suggest for this model? Should I use an RNN (LSTM)?</p>
<p>Assuming the strings are variable-length, a recurrent neural network such as an LSTM would be a reasonable choice; a 1-D convolutional network is another option.</p>
289
LSTM
Signal translation with Seq2Seq model
https://cs.stackexchange.com/questions/130020/signal-translation-with-seq2seq-model
<p>I'm currently doing some research on signal processing and I have a dataset which includes the signal itself and its &quot;translation&quot;.</p> <p><a href="https://i.sstatic.net/Bnp2P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bnp2P.png" alt="A signal and its translation" /></a></p> <p>So I want to use a Many-to-Many RNN to translate the first into the second.</p> <p>After spending a week reading about the different options I have, I ended up learning about RNNs and Seq2Seq models. I believe this is the right solution for the problem (correct me if I'm wrong).</p> <p>Now, as the input and the output are of the same length, I don't need to add padding and thus I tried a simple LSTM layer and TimeDistributed Dense layer (Keras):</p> <pre><code>model = Sequential([ LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2), TimeDistributed(Dense(units=1, activation=&quot;softmax&quot;)) ]) model.compile(optimizer='adam', loss='categorical_crossentropy') </code></pre> <p>But the model seems to learn nothing from the sequence, and when I plot the &quot;prediction&quot;, it is nothing but values between 0 and 1.</p> <p>As you can see, I'm a beginner and the code I wrote might not make sense to you, but I need guidance on a few questions:</p> <ul> <li>Does the model make sense for the problem I'm trying to solve?</li> <li>Am I using the right loss/activation functions?</li> <li>And finally, please correct/teach me</li> </ul>
<p>I am skeptical that machine learning is the right tool for this problem. I would look for a more direct solution, perhaps using peak detection or changepoint detection, or some other form of classical method for time-series analysis.</p> <p>If you do use machine learning, cross-entropy loss is not the right loss for your problem, and you will definitely need to change that.</p>
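One concrete issue in the posted Keras model, consistent with the flat predictions described: a `softmax` over a single unit is identically 1, so `TimeDistributed(Dense(units=1, activation="softmax"))` produces a constant output no matter what the network learns. A quick NumPy check of that fact:

```python
import numpy as np

def softmax(z):
    """Standard softmax over a vector of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# softmax over a single logit always normalizes to exactly 1,
# regardless of the logit's value:
outputs = [softmax(np.array([logit]))[0] for logit in (-10.0, 0.0, 3.7)]
```

So for a real-valued target signal, a `linear` output activation with a mean-squared-error loss (rather than softmax with categorical cross-entropy) would be the natural starting point, in line with the answer above.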
290
LSTM
Can an artificial neural network convert from cartesian coordinates to polar coordinates?
https://cs.stackexchange.com/questions/51090/can-an-artificial-neural-network-convert-from-cartesian-coordinates-to-polar-coo
<p>Given cartesian coordinates $x$ and $y$ as input, can a neural network output $r$ and $\theta$, the equivalent polar coordinates?</p> <p>This would seem to require an approximation of the pythagorean theorem (which requires approximations of $x^2$ and $\sqrt{x}$) and $\sin$, $\cos$, or $\tan$ approximations. Is this possible?</p> <p>If so, how many hidden layers would it take? I'm using an LSTM.</p>
<p>I don't know if this answers your question (or at least part of it).</p> <p>According to the <a href="https://en.wikipedia.org/wiki/Universal_approximation_theorem" rel="nofollow noreferrer">Universal approximation theorem for ANNs</a>, it is possible (at least within a region of interest).</p> <p>The question of how many hidden layers (and what architecture) an ANN should have is, AFAIK, an open problem, in the sense that there is no result to determine architecture and/or number of layers with respect to specific final results (partly, this is due to the non-constructive proof of the above result), although you might want to see <a href="https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw">here</a> and <a href="https://stackoverflow.com/questions/9436209/how-to-choose-number-of-hidden-layers-and-nodes-in-neural-network">here</a>.</p> <p>Here is a review of methods to fix the number of hidden layers in ANNs over the past 20 years: <a href="http://www.hindawi.com/journals/mpe/2013/425740/" rel="nofollow noreferrer">Review on Methods to Fix Number of Hidden Neurons in Neural Networks</a></p> <p>Most ANNs are built by a trial-and-error process.</p> <p><em>Update (just an idea)</em></p> <p>By modeling the transformations from cartesian to polar (and approximating square roots, etc.) as a discrete system, maybe one can transfer this design into an ANN model (but I won't pursue this further right now).</p>
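As a small empirical illustration of the universal-approximation point (not from the original answer), here is a toy two-layer network in plain NumPy, with hypothetical layer sizes, trained by gradient descent to approximate $r=\sqrt{x^2+y^2}$ on $[-1,1]^2$; predicting $\theta$ would work the same way with an atan2 target:

```python
import numpy as np

rng = np.random.default_rng(0)

# training data: points in [-1, 1]^2 and their radii
X = rng.uniform(-1, 1, size=(512, 2))
r = np.sqrt((X ** 2).sum(axis=1, keepdims=True))

# one hidden layer of 32 tanh units, linear output
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.1
losses = []
for step in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # predicted radius
    err = pred - r
    losses.append(float((err ** 2).mean()))
    # backpropagation through the two layers
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = dpred @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The training loss drops well below the variance of $r$ itself, illustrating that this smooth mapping is learnable in practice with a single hidden layer; the number of layers needed is, as the answer notes, found by trial and error.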
291
LSTM
RNN input shape for sequence generation on Tensorflow
https://cs.stackexchange.com/questions/68612/rnn-input-shape-for-sequence-generation-on-tensorflow
<p>I would like to train an RNN with LSTM cells in TensorFlow to predict the next word of a sequence. Words are N-length vectors of 0s and 1s. By looking at different tutorials, I saw that the input tensor has a shape like this</p> <pre><code>seq_input = tf.placeholder(tf.float32, [n_steps, batch_size, seq_width]) </code></pre> <p>However, it is not clear to me what these shapes represent. How do I reshape my input words to match this shape? </p>
292
LSTM
Do all the cells in a recurrent neural network share learned parameters?
https://cs.stackexchange.com/questions/88891/do-all-the-cells-in-a-recurrent-neural-network-share-learned-parameters
<p>Most descriptions of modern RNNs present a "folded" characterisation, that is to say, a single cell with a loop back to itself transmitting the hidden state from one step to the next. However, in implementations the RNN is computed "unfolded", so a new cell is created for every step of the sequence up to some maximum sequence length, and the state is passed from one cell to the next.</p> <p>My question is: are the learned parameters shared between all the cells in the unfolded sequence? E.g. in the case of a stack of LSTMs, does each LSTM have its own set of forget, input-gate, candidate and output parameters, or does the whole stack share and update a common set?</p>
<p>Indeed, the copies of a cell in an unfolded version share their learned parameters.</p> <p>Why is it done this way? If the sequence processed by the LSTM is always the same length, we could conceivably get a better result with different parameters, but there are two key caveats:</p> <ol> <li>Shared parameters are faster to learn</li> <li>We want to be able to process cases when the length of the sequence processed is not fixed!</li> </ol>
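The sharing can be made concrete in a few lines: an unrolled vanilla RNN applies the same weight matrices at every time step (hypothetical sizes, plain NumPy; a stacked LSTM would share each layer's gate parameters across steps in the same way):

```python
import numpy as np

rng = np.random.default_rng(42)
W_xh = rng.normal(size=(4, 8))   # input -> hidden weights, shared by all steps
W_hh = rng.normal(size=(8, 8))   # hidden -> hidden weights, shared by all steps

def run(xs):
    """Unfold the RNN over a sequence of any length with ONE set of weights."""
    h = np.zeros(8)
    for x in xs:                          # every "cell" in the unfolded graph
        h = np.tanh(x @ W_xh + h @ W_hh)  # ...reuses W_xh and W_hh
    return h

h_short = run(rng.normal(size=(3, 4)))    # length-3 sequence
h_long = run(rng.normal(size=(10, 4)))    # length-10 sequence, same weights
```

Note that the second caveat falls out directly: the same two matrices handle sequences of length 3 and length 10 without any change.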
293
LSTM
How would you go about creating an algorithm that should generate a Shakespearean sonnet on any given theme
https://cs.stackexchange.com/questions/92523/how-would-you-go-about-creating-a-algorithm-that-should-generate-a-shakespearean
<p>I need to create an algorithm that is going to create a Shakespearean sonnet for a specific theme. This theme should be generated out of Twitter tweets that have some hashtag.</p> <p>My current idea goes like </p> <p><strong>While training:</strong></p> <ul> <li>break up sonnets and other kinds of poems into words (or characters, I have not decided yet)</li> <li>extract key words from the sonnet using tf/idf</li> <li>convert key words into a vector that will represent a theme using doc2vec</li> <li>use these words and a theme vector as input to an LSTM network to generate a sonnet by predicting the next word (character) on a particular theme but without a sonnet structure</li> <li>create a discriminator LSTM network with the same input but trained only on sonnets, the main function of which is going to be to find how much this poem looks like a sonnet</li> <li>combine the generator and discriminator networks into a GAN</li> </ul> <p><strong>While generating</strong></p> <ul> <li>find some number of tweets with this hashtag</li> <li>extract keywords from these tweets using tf/idf</li> <li>convert these key words into one vector using doc2vec </li> <li>generate a sonnet with the trained model using these parameters</li> </ul> <p>I am only a beginner in machine learning, so I would like to hear the opinions of more experienced data scientists: is this algorithm going to work and what can be improved?</p> <p>Thank you.</p>
294
LSTM
What&#39;s the input to the decoder in a sequence to sequence autoencoder?
https://cs.stackexchange.com/questions/69432/whats-the-input-to-the-decoder-in-a-sequence-to-sequence-autoencoder
<p>What's the input to the decoder part of a sequence to sequence autoencoder? I've seen certain examples of such an autoencoder (using LSTM's more often than not) but am still unclear.</p> <ul> <li><p>For example, here in this often-cited <a href="https://pdfs.semanticscholar.org/6506/d13a84f90f8620fd028cfe5b8b9d0444a6d2.pdf" rel="noreferrer">paper</a> by Dai &amp; Le ('Semi Supervised Sequence Learning'), we have the following diagram:</p> <p><a href="https://i.sstatic.net/A2hds.png" rel="noreferrer"><img src="https://i.sstatic.net/A2hds.png" alt="Dai &amp; Le"></a></p> <p>What's the input to the decoder portion of the autoencoder here? In this example it's 'W-X-Y-Z.' But in general, is it the same as the input to the encoder? Or is it using the output from the previous timestep/LSTM cell as input?</p></li> <li><p>Similarly, in another popular <a href="https://arxiv.org/pdf/1502.04681.pdf" rel="noreferrer">paper</a> by Srivastava et. al ('Unsupervised Learning of Video Representations using LSTMs'), they have the following diagram:</p> <p><a href="https://i.sstatic.net/oUjbk.png" rel="noreferrer"><img src="https://i.sstatic.net/oUjbk.png" alt="Srivastava et al"></a></p> <p>It seems they're using the reversed input from the encoder as input here. However, there's a section as follows:</p> <blockquote> <p>The decoder can be of two kinds – conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted input in Fig. 2 is present. An unconditioned decoder does not receive that input.</p> </blockquote> <p>In the unconditioned decoder, what input does the decoder receive?</p></li> </ul>
<p>I was wondering the same thing and just stumbled across a nice <a href="http://cs.stanford.edu/~quocle/tutorial2.pdf" rel="noreferrer">tutorial</a> by Quoc V. Le. The following explanation deals with the conditional case since this seems to be the common case. My explanation is based on, and the image is taken from, chapter 5, <em>Sequence output prediction with Recurrent Neural Networks</em>.</p> <h3>Background</h3> <p>We only regard a decoder with a <em>single</em>-cell RNN which has:</p> <ul> <li><strong>W</strong> input to hidden weights</li> <li><strong>U</strong> hidden to hidden weights (ignore that first weight is different here)</li> <li><strong>V</strong> hidden to label weights</li> </ul> <p>$$ f(x) = Vh_T \\ h_t = \sigma ( U h_{t-1} + W x_t ) ~~~ \text{for} ~ t = 1, \dots, T \\ h_0 = \sigma ( W x_0 ) $$</p> <p><a href="https://i.sstatic.net/RZD6L.png" rel="noreferrer"><img src="https://i.sstatic.net/RZD6L.png" alt="enter image description here"></a></p> <p>Since we are doing sequence prediction, we are not only interested in the last output of the RNN but rather the output at every timestep. Therefore the conditional decoder <a href="https://arxiv.org/pdf/1508.04025.pdf" rel="noreferrer">is designed to predict each label based on the previous ones</a>; mathematically, it splits the conditional probability in the following way:</p> <p>$$ p(y|x) = \prod_{i=1}^n p(y_{i} | y_{i-1}, \dots, y_{1}, x) $$</p> <p>As is often done in machine learning, you can transform the probability into a score/energy which consists of a more convenient summation:</p> <p>$$ \log p(y|x) = \sum_{i=1}^n \log p(y_{i} | y_{i-1}, \dots, y_{1}, x) $$</p> <p>The maximization of that score over our data is the objective.
That can be equivalently formulated as the minimization of $ - \sum_{(x,y) \in D} \log p(y|x) $ where $D$ is our training set.</p> <p>To complete this thought: when this probability is approximated, we are actually interested in the argument $\theta_{min}$ of the minimization of $ - \sum_{(x,y) \in D} \log f(y|x,\theta) $ where $f(y|x,\theta)$ is our function approximator, e.g. an RNN, and $\theta$ is the vector containing all weights $U$, $W$ and $V$.</p> <h3>During Training</h3> <p>During training we have the ground truth at hand, so we can feed the decoder (see image) with the respective previous label at each timestep. At the first timestep we use the output of the encoder. Easy peasy.</p> <h3>During Inference</h3> <p>Here we obviously do not have the ground truth, so instead we use the output of the previous timestep. This however poses another problem: to find the sequence of maximum probability, we would have to compute every possible sequence and its probability according to the probability function defined above.</p> <p>Now, <a href="https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/translate.py#L282" rel="noreferrer">apparently</a> one "greedy" approach is to just ignore the above probability model and take the argmax at each timestep.</p> <p>Another, more faithful approach is <em>Beam Search</em>, which heuristically looks at a subset of probable sequences and picks the one with maximum probability among them.
</p> <p>Things to note:</p> <ul> <li>Inference is stopped when the End-Of-Sequence symbol (<code>&lt;EOS&gt;</code>) is returned (greedy: when a timestep's argmax is <code>&lt;EOS&gt;</code>; beam search: the currently regarded sequence leads to <code>&lt;EOS&gt;</code>)</li> <li>Neither inference method guarantees retrieving the sequence with maximum probability</li> <li>The output of $f(x,\theta)$ needs to be a probability distribution at each timestep, so the final activation of the RNN is usually a softmax: $f(x) = \text{softmax}(Vh_T)$</li> </ul> <h3>Unconditioned Case</h3> <p>I'd say, based on the above explanation and the paper you referenced, that the unconditioned case only gets input at the first timestep of the decoder and then just "works with" the propagation of the hidden states. So the second of the RNN equations changes to $h_t = \sigma ( U h_{t-1} )$.</p> <p>Then the outputs at each step would not be conditioned on the previous timestep's output, but rather be <em>unconditioned</em>. This way the modelled probability would become</p> <p>$$ p(y|x) = \prod_{i=1}^n p(y_{i} | x) $$</p> <p>and the greedy inference strategy would become valid.</p>
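The greedy strategy can be sketched as a short loop; everything here (the `step` function, the toy weights, the vocabulary size) is a hypothetical stand-in for a real decoder cell and softmax:

```python
import numpy as np

rng = np.random.default_rng(1)
EOS = 0                       # token 0 plays the role of <EOS>
V = rng.normal(size=(5, 5))   # toy hidden -> vocabulary weights (vocab size 5)

def step(h, token):
    """Stand-in for one decoder timestep: a real model would embed the
    previous token, run the RNN cell, and apply a softmax to the scores."""
    h = np.tanh(h + 0.3 * token)
    return h, h @ V

def greedy_decode(h0, max_len=20):
    """Take the argmax at every timestep, stopping at <EOS> or max_len."""
    tokens, h, tok = [], h0, 1  # token 1 plays the role of a start symbol
    for _ in range(max_len):
        h, scores = step(h, tok)
        tok = int(scores.argmax())
        if tok == EOS:
            break
        tokens.append(tok)
    return tokens

decoded = greedy_decode(np.ones(5))
```

Beam search replaces the single `argmax` with keeping the top-k partial sequences, scored by their summed log-probabilities as in the objective above.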
295
LSTM
Clarification about RNN encoder-decoder equation
https://cs.stackexchange.com/questions/148739/clarification-about-rnn-encoder-decoder-equation
<p>In the paper by <a href="https://arxiv.org/pdf/1406.1078.pdf" rel="nofollow noreferrer">Cho et al.</a>, section 2.3 details the equations for the modified LSTM cell in the RNN used in the paper's implementation. The equation in question is:</p> <p><a href="https://i.sstatic.net/nRRBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nRRBI.png" alt="enter image description here" /></a></p> <p>Here, the output of the reset gate (<strong>r</strong>) is element-wise multiplied with the previous hidden state <strong>h</strong><sub>&lt;<em>t</em>-1&gt;</sub>, and then matrix multiplied with the <strong>U</strong> matrix.</p> <p>Later on, Appendix section <strong>A.1.1</strong> explains the equations for the decoder, specifically:</p> <p><a href="https://i.sstatic.net/Qv10J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qv10J.png" alt="enter image description here" /></a></p> <p>My question: Is <em>r'<sub>j</sub></em> element-wise multiplied with the corresponding expression or not?</p>
<p><span class="math-container">$\mathbf{r}$</span> is a vector; <span class="math-container">$r'_j$</span> is a scalar; so this is multiplying a scalar (<span class="math-container">$r'_j$</span>) by a vector (the part in <span class="math-container">$[...]$</span>). There is only one way to multiply a scalar by a vector: you multiply each coordinate of the vector by the scalar.</p>
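Numerically, multiplying the scalar r'_j by the bracketed vector is just elementwise scaling (the values here are arbitrary toy numbers):

```python
import numpy as np

r_prime_j = 0.5                        # the scalar r'_j
bracket = np.array([1.0, -2.0, 4.0])   # the bracketed vector (toy values)
scaled = r_prime_j * bracket           # every coordinate scaled by r'_j
# scaled == [0.5, -1.0, 2.0]
```

This is the one way a scalar can multiply a vector, exactly as the answer states.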
296
LSTM
Is deepfake detection viable?
https://cs.stackexchange.com/questions/131326/is-deepfake-detection-viable
<p>I'm thinking of doing a project on deepfake detection, but I'm not entirely sure if it is viable. Based on my understanding, how it works is that deepfake generation programs have a generative and a discriminative network, and eventually after training, the system reaches an equilibrium where the discriminative network can't detect real vs fake faces. I was thinking of building a CNN-LSTM architecture where I analyze not only single image frames, but image frames over time as well to better discern between real videos and deepfakes, but I'm not sure if this is viable. Any help or resources would be appreciated.</p>
297
LSTM
Can we supervise on the hidden states of RNN?
https://cs.stackexchange.com/questions/141532/can-we-supervise-on-the-hidden-states-of-rnn
<p>I'm trying to build a history-dependent model with machine learning, whose underlying physical model has a clear definition of its &quot;internal state variable&quot; (a state derived from historical inputs) and of how this variable interacts with the inputs to get the outputs. Mathematically this reads: <span class="math-container">$y_t=f(x_t,h_t)$</span>, <span class="math-container">$h_t=g(x_1, ..., x_{t-1})$</span>.</p> <p>Intuitively, this can be realized by an RNN (or LSTM/GRU). Now I wonder if we can add supervision on the state variables to the loss function to improve prediction. For instance, we extract the hidden states <span class="math-container">$H$</span> output from our RNN and pass them through another branch of a feed-forward network to predict the internal state variables <span class="math-container">$h$</span>. The loss function then has two parts: one for the model outputs <span class="math-container">$\sum (y-\hat{y})^2$</span>, and the other for the model internal states <span class="math-container">$\sum(h-\hat{h})^2$</span>.</p> <p>Any insight on the following questions is welcome:</p> <ul> <li>Will this approach work? (Does it make sense?)</li> <li>Potential issues?</li> </ul> <p>I'll also be really grateful if anybody can point to closely related work in the answers! Thanks in advance!</p>
<p>There are several examples of performing supervision on hidden states for history-dependent models.</p> <p><strong>Latent variables</strong></p> <p>Loss functions are commonly applied to internal states such as latent variables in <em>variational autoencoders</em> (VAEs).</p> <p><strong>Hidden states</strong></p> <p>Supervision <strong>directly on hidden states</strong>, as opposed to latent states in RNNs, has been performed for imposing sparsity, as in [<a href="https://www2.informatik.uni-hamburg.de/wtm/publications/2019/YWH19/ICANN2019_202_final_v4.pdf" rel="nofollow noreferrer">1</a>]:</p> <blockquote> <p>In order to impose sparsity, we introduce an L1-norm loss term over the output gate at layer <span class="math-container">$l$</span> as</p> <p><span class="math-container">$$\mathcal{L}^S(o^l(x,\theta)) = \sum_{1 \leq b \leq B} \sum_{1 \leq t \leq T} | o_t^l(x^{(b)})|.$$</span></p> </blockquote> <p><strong>Autoregressive Neural Networks</strong></p> <p>Additionally, <em>auxiliary latent variables</em> have been used to increase the flexibility of variational inference.
An example of this is given in [<a href="https://arxiv.org/abs/1606.04934" rel="nofollow noreferrer">2</a>] where hidden state <span class="math-container">$\mathbf{h}$</span> and latent state <span class="math-container">$\mathbf{z}_{t-1}$</span> are fed into an autoregressive neural network, with final output <span class="math-container">$\mathbf{z}_t$</span>.</p> <h1>References</h1> <p><a href="https://www2.informatik.uni-hamburg.de/wtm/publications/2019/YWH19/ICANN2019_202_final_v4.pdf" rel="nofollow noreferrer">1</a> Learning Sparse Hidden States in Long Short-Term Memory, <a href="https://www2.informatik.uni-hamburg.de/wtm/publications/2019/YWH19/ICANN2019_202_final_v4.pdf" rel="nofollow noreferrer">(paper)</a></p> <p><a href="https://arxiv.org/abs/1606.04934" rel="nofollow noreferrer">2</a> Improved Variational Inference with Inverse Autoregressive Flow, <a href="https://arxiv.org/abs/1606.04934" rel="nofollow noreferrer">(arxiv)</a></p>
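The two-part loss proposed in the question is simple to express; a NumPy sketch (the weighting hyperparameter `lam` is a hypothetical knob one would tune, not from the question or answer):

```python
import numpy as np

def combined_loss(y_pred, y_true, h_pred, h_true, lam=0.1):
    """Output loss plus a supervised penalty on the predicted internal states."""
    output_loss = ((y_pred - y_true) ** 2).sum()   # sum (y - y_hat)^2
    state_loss = ((h_pred - h_true) ** 2).sum()    # sum (h - h_hat)^2
    return output_loss + lam * state_loss

y_pred, y_true = np.array([1.0, 2.0]), np.array([1.5, 2.0])
h_pred, h_true = np.array([0.0, 1.0]), np.array([0.0, 0.0])
loss = combined_loss(y_pred, y_true, h_pred, h_true, lam=0.1)
# output term = 0.25, state term = 1.0, so loss = 0.25 + 0.1 * 1.0 = 0.35
```

In a real training loop the two terms would both be differentiated with respect to the shared RNN weights, so the state supervision acts as an auxiliary task shaping the hidden representation.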
298
LSTM
How can one measure the time dependency of an RNN?
https://cs.stackexchange.com/questions/129437/how-can-one-measure-the-time-dependency-of-an-rnn
<p>Most of the discussion about RNN and LSTM alludes to the varying ability of different RNNs to capture &quot;long term dependency&quot;. However, most demonstrations use generated text to show the absence of long term dependency for vanilla RNN.</p> <p>Is there any way to explicitly measure the time dependency of a given trained RNN, much like ACF and PACF of a given ARMA time series?</p> <p>I am currently trying to look at the (Frobenius norm of) gradients of memories <span class="math-container">$s_k$</span> against input <span class="math-container">$x_l$</span>, where <span class="math-container">$l\le k$</span>, summed over training examples <span class="math-container">$\{x^i\}_{i=1}^N$</span> - <span class="math-container">$$\text{Dep}(k,l):=\sum_{i=1}^N \big\|\frac{\partial s_k}{\partial x_l}(x^i)\big\|_F$$</span> I would like to know if there are more refined or widely-used alternatives to this prototype.</p> <p>I am working with time series so I treat the inputs <span class="math-container">$\{x_t\}$</span> as realization of a random process <span class="math-container">$\{X_t\}$</span>, thus by &quot;current&quot; I mean <span class="math-container">$x_i,s_i$</span> for some fixed <span class="math-container">$i$</span>, &quot;the past&quot; I mean <span class="math-container">$\{x_j\}_{j=1}^{i-1},\{s_j\}_{j=1}^{i-1}$</span> and &quot;time&quot; I mean the index <span class="math-container">$t$</span>.</p> <p>I guess that the &quot;long-term dependency&quot; in literature refers to the sensitivity of the current memory <span class="math-container">$s_k$</span> w.r.t. past inputs <span class="math-container">$\{x_j\}_{j=1}^{k-1}$</span>, hence the prototype I formulated.</p>
<p>I am not aware of any standard or widely-used metric for this. I think which metric is appropriate would depend on what you want to use it for.</p> <p>The issue with RNNs is &quot;forgetting&quot;. If you feed a long sequence of inputs <span class="math-container">$x=(x_1,\dots,x_n)$</span> into an RNN, where <span class="math-container">$n$</span> is too large, the problem is that often the final decision is determined by the last few values (<span class="math-container">$\ldots,x_{n-1},x_n$</span>) and the earliest values (<span class="math-container">$x_1,x_2,\ldots$</span>) have been &quot;forgotten&quot; and do not affect the final decision. This is undesirable in many settings.</p> <p>Your metric would be one reasonable way to get a feeling for this. Another reasonable way might be to feed in an input <span class="math-container">$x=(x_1,x_2,\dots,x_n)$</span>, then change just <span class="math-container">$x_1$</span> to get a new input <span class="math-container">$x'=(x'_1,x_2,\dots,x_n)$</span>, feed in <span class="math-container">$x'$</span>, and compare the outputs of the RNN on <span class="math-container">$x$</span> vs <span class="math-container">$x'$</span>; and repeat for many training samples or test samples <span class="math-container">$x$</span>.</p>
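The perturbation test suggested in the answer is easy to prototype: run a toy vanilla RNN on a sequence, perturb only the first input, and measure how far the final hidden state moves. All weights below are arbitrary (a trained RNN would be used in practice); the small recurrent weights make forgetting visible quickly:

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(0, 0.5, (3, 8))   # toy input -> hidden weights
W_hh = rng.normal(0, 0.1, (8, 8))   # small recurrence -> fast forgetting

def final_state(xs):
    """Run a vanilla RNN over the sequence, return the last hidden state."""
    h = np.zeros(8)
    for x in xs:
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h

def sensitivity_to_x1(xs):
    """How much the final state moves when only the first input is perturbed."""
    perturbed = xs.copy()
    perturbed[0] = perturbed[0] + 1.0
    return float(np.linalg.norm(final_state(xs) - final_state(perturbed)))

recent = sensitivity_to_x1(rng.normal(size=(3, 3)))    # x_1 is 3 steps back
distant = sensitivity_to_x1(rng.normal(size=(50, 3)))  # x_1 is 50 steps back
```

The influence of the perturbed first input decays with sequence length, which is exactly the forgetting the answer describes; averaging such norms over many samples recovers a Dep(k, l)-style statistic like the one the asker proposed.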
299