id int64 1 141k | title stringlengths 15 150 | body stringlengths 45 28.5k | tags stringlengths 1 102 | label int64 1 1 | text stringlengths 128 28.6k | source stringclasses 1
|---|---|---|---|---|---|---|
6,360 | Will this algorithm terminate on any input? | <p>One can compress data with straight-line grammars. An algorithm that employs this technique is called <em>Sequitur</em>. If I understood correctly, Sequitur basically starts with one rule representing the input and then does these three steps till the grammar does not change anymore:</p>

<ol>
<li>For each rule, try to find any sequences of symbols in any other rule that match the rule's right-hand side, and replace these sequences by the rule's left-hand side.</li>
<li>For each pair of adjacent symbols in any right hand side, find all non-overlapping other pairs of adjacent symbols that are equal to the original pair. If there are any other pairs, add a new nonterminal, replace all occurrences of these pairs by the new nonterminal and add a new rule that defines the nonterminal.</li>
<li>For each nonterminal that appears exactly once on all right-hand sides of all rules, replace its occurrence by its definition, remove the nonterminal and the rule that defines it.</li>
</ol>

<p>For each (non-empty) input, can one guarantee that the above algorithm terminates?</p>
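As an illustration, step 2 (the digram-replacement step) can be sketched in Python. The grammar representation and helper names here are my own assumptions for the sketch, not Sequitur's actual implementation:

```python
def nonoverlap_count(rhs, d):
    """Greedily count non-overlapping occurrences of digram d in rhs."""
    i = c = 0
    while i < len(rhs) - 1:
        if (rhs[i], rhs[i + 1]) == d:
            c += 1
            i += 2
        else:
            i += 1
    return c

def replace_digram(rhs, d, new):
    """Replace non-overlapping occurrences of digram d by the symbol new."""
    out, i = [], 0
    while i < len(rhs):
        if i < len(rhs) - 1 and (rhs[i], rhs[i + 1]) == d:
            out.append(new)
            i += 2
        else:
            out.append(rhs[i])
            i += 1
    return out

def digram_step(rules, fresh):
    """One application of step 2 on a grammar {nonterminal: rhs-list}.
    If some adjacent pair occurs at least twice (non-overlapping) across
    all right-hand sides, replace it everywhere by a fresh nonterminal
    and add a defining rule. Returns True iff the grammar changed."""
    digrams = {(r[i], r[i + 1]) for r in rules.values() for i in range(len(r) - 1)}
    for d in sorted(digrams):  # sorted only to make the choice deterministic
        if sum(nonoverlap_count(r, d) for r in rules.values()) >= 2:
            new = fresh()
            for lhs in list(rules):
                rules[lhs] = replace_digram(rules[lhs], d, new)
            rules[new] = list(d)
            return True
    return False

def expand(rules, sym):
    """Expand a symbol back to the terminal string it derives."""
    return [t for s in rules[sym] for t in expand(rules, s)] if sym in rules else [sym]
```

Whichever digram gets replaced, expanding the start rule must still yield the original input; that invariant, together with a measure on the grammar, is the kind of thing a termination argument would track.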
 | algorithms algorithm analysis formal grammars data compression correctness proof | 1 | Will this algorithm terminate on any input? -- (algorithms algorithm analysis formal grammars data compression correctness proof)
 | habedi/stack-exchange-dataset |
6,363 | Nim game tree + minimax | <p><img src="https://i.stack.imgur.com/2hUx4.jpg" alt="search tree"></p>

<p><strong>Problem : Two players have in front of
them a single pile of objects, say a stack of 7 pennies. The first player divides the original
stack into two stacks that must be unequal. Each player alternately thereafter does the
same to some single stack when it is his turn to play. The game proceeds until each stack has
either just one penny or two—at which point continuation becomes impossible. The player
who first cannot play is the loser. Show, by drawing a game tree, whether any of the players
can always win.</strong></p>

<p>Why does the state 6-1 not go to 3-3-1? If we have 6-1 pennies, we can split the stack of 6 into 3 and 3, and we have 3-3-1 pennies. So why isn't 3-3-1 a child of 6-1?</p>
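To see which states are legal children and who wins, the whole game can be brute-forced. This is a hypothetical sketch (the function names are mine): the only legal move is splitting one stack into two <em>unequal</em> stacks, and the player who cannot move loses.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stacks):
    """stacks: a sorted tuple of stack sizes.
    Returns True iff the player to move can force a win."""
    moves = []
    for i, s in enumerate(stacks):
        rest = stacks[:i] + stacks[i + 1:]
        # Split s into a + b with a < b. The equal split (a == b, e.g.
        # 6 -> 3+3) is illegal, which is exactly why 3-3-1 is not
        # generated as a child of 6-1.
        for a in range(1, (s + 1) // 2):
            moves.append(tuple(sorted(rest + (a, s - a))))
    if not moves:
        return False  # cannot play -> the current player loses
    return any(not first_player_wins(m) for m in moves)
```

For a single stack of 7, this memoized minimax reports that the first player loses, matching what the game tree shows.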
 | artificial intelligence search trees game theory | 1 | Nim game tree + minimax -- (artificial intelligence search trees game theory)
 | habedi/stack-exchange-dataset |
6,368 | Can you have a binary search tree with O(logn + M) property for the following case | <p>Let $n$ be the number of strings, sorted in lexicographical order and stored in a balanced binary search tree. You are given a prefix $x$, and $M$ of the strings have the prefix $x$. I have devised the following algorithm: I search until I find the first occurrence of the prefix $x$ in one of the nodes. After that I run an inorder traversal from it, printing only the strings that have prefix $x$, in order. </p>

<p>For example, for the sorted strings $[ACT,BAT,CAB,CAT]$ and the prefix $x = CA$, I would print $CAB$ and $CAT$. </p>
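The intended $O(\log n + M)$ behaviour is easiest to see on a sorted array, which here stands in for the balanced BST (a sketch under that assumption; the function name is mine):

```python
import bisect

def strings_with_prefix(sorted_strings, prefix):
    """Binary search for the first candidate (O(log n)), then report
    the M consecutive matches (O(M)); total O(log n + M)."""
    i = bisect.bisect_left(sorted_strings, prefix)
    out = []
    while i < len(sorted_strings) and sorted_strings[i].startswith(prefix):
        out.append(sorted_strings[i])
        i += 1
    return out
```

In a BST the same idea works because the $M$ matching strings are consecutive in the in-order sequence, so the traversal can stop as soon as a visited string no longer has the prefix.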
 | binary trees search trees | 1 | Can you have a binary search tree with O(logn + M) property for the following case -- (binary trees search trees)
 | habedi/stack-exchange-dataset |
6,371 | Proving DOUBLE-SAT is NP-complete | <p>The well-known SAT problem is defined <a href="http://en.wikipedia.org/wiki/Boolean_satisfiability_problem">here</a> for reference. </p>

<p>The DOUBLE-SAT problem is defined as</p>

<p>$\qquad \mathsf{DOUBLE\text{-}SAT} = \{\langle\phi\rangle \mid \phi \text{ has at least two satisfying assignments}\}$</p>

<p>How do we prove it to be NP-complete? </p>

<p>More than one way to prove will be appreciated. </p>
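One common route (sketched here as a suggestion; it may not be the intended proof) reduces SAT to DOUBLE-SAT: given $\phi$, output $\phi' = \phi \wedge (z \vee \neg z)$ for a fresh variable $z$, which exactly doubles the number of satisfying assignments. A brute-force counter illustrates the effect:

```python
from itertools import product

def count_sat(clauses, n):
    """Count satisfying assignments of a CNF formula over variables 1..n.
    clauses: iterable of tuples of DIMACS-style literals (+i or -i)."""
    total = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            total += 1
    return total
```

Since $(z \vee \neg z)$ is a tautology, adding the fresh unconstrained variable doubles the count, so $\phi$ is satisfiable iff $\phi'$ has at least two satisfying assignments.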
 | complexity theory np complete satisfiability | 1 | Proving DOUBLE-SAT is NP-complete -- (complexity theory np complete satisfiability)
 | habedi/stack-exchange-dataset |
6,374 | How to correlate a matrix of values to get a coordinated point? | <p>I have an $n \times m$ matrix updated in real time (about every 10 ms) with values between 0 and 1024, and I want to derive multitouch-trackpad behaviour from it, that is:</p>

<ul>
<li>generate one or more points on the surface given the values on the matrix,</li>
<li>size each of those points according to the underlying values.</li>
</ul>

<p>The raw logs linked below contain lines of 9x9 matrix updates. As an example, consider the following matrix, with a touch in the middle:</p>

<pre><code>[ [ 12, 7,12 ],
 [ 12,129,19 ],
 [ 12, 11,22 ] ]
</code></pre>

<p>The goal is to mimic the behaviour of a common touchpad (like on every smartphone, or laptop). So, I'm getting values from a evenly distributed matrix of capacitive sensors on a physical object, which are processed by a microcontroller into a matrix, and I want to get coordinates and weight of one or several points.</p>

<p>The idea would be to get something like <a href="https://www.youtube.com/watch?v=SiC-EfQ1fh4" rel="nofollow">this</a> (of course, I don't expect to have more than 2 or 3 detected points, and that level of precision with a matrix that small).</p>

<p>Here are a few example raw logs:</p>

<ul>
<li><a href="http://m0g.net/~guyzmo/touch_diag.log" rel="nofollow">http://m0g.net/~guyzmo/touch_diag.log</a> </li>
<li><a href="http://m0g.net/~guyzmo/touch_double.log" rel="nofollow">http://m0g.net/~guyzmo/touch_double.log</a></li>
</ul>

<p><strong>Edits</strong>:</p>

<p>Thinking about my problem led me to this idea: I think I should use some kind of interpolation to increase the resolution of the matrix, in a way that keeps the new values additive.</p>

<p>i.e. imagine we have the following matrix :</p>

<pre><code>[ [ 200, 200, 150 ],
 [ 150, 150, 80 ],
 [ 80, 80, 40 ] ]
</code></pre>

<p>and we want to interpolate it somehow into something that would look like this (the values are made up; they are just to illustrate the idea):</p>

<pre><code>[ [ 200, 400, 200, 175, 150 ],
 [ 175, 200, 175, 150, 125 ],
 [ 150, 170, 150, 125, 80 ],
 [ 100, 125, 100, 80, 60 ],
 [ 80, 80, 80, 60, 40 ] ]
</code></pre>

<p>I've looked at interpolation algorithms, and it looks like the one closest to our needs is Hermite interpolation. But although I have <a href="http://paulbourke.net/miscellaneous/interpolation/" rel="nofollow">read up</a> on interpolation methods, I don't know how to apply it to a matrix.</p>
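For locating a single touch, a standard first step (a sketch; the threshold value and function name are my assumptions) is a thresholded weighted centroid over the raw matrix, which already gives sub-cell resolution without upsampling:

```python
def weighted_centroid(matrix, threshold=0):
    """Return ((x, y), weight) of the cells above threshold, weighted by
    their values, or None if no cell exceeds the threshold."""
    total = sx = sy = 0.0
    for y, row in enumerate(matrix):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                sx += x * v
                sy += y * v
    if total == 0:
        return None
    return (sx / total, sy / total), total
```

On the 3x3 example above, only the 129 survives a threshold of 50, so the centroid lands on the middle cell; with a lower threshold the neighbouring values pull the centroid between cells. For multiple touches, one would first segment the matrix into connected regions above the threshold and take one centroid per region.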
 | algorithms matrices | 1 | How to correlate a matrix of values to get a coordinated point? -- (algorithms matrices)
 | habedi/stack-exchange-dataset |
6,378 | What is the time complexity of calling successor $n$ times during tree traversal? | <p>According to some <a href="http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/binarySearchTree.htm" rel="nofollow noreferrer">sources</a>, the time complexity of finding the successor of a node in a tree is $O(h)$. So, if the tree is well balanced, the height $h=\log n$, and the successor function takes time $O(\log n)$. 
Yet, according to this <a href="https://stackoverflow.com/questions/12447499/time-complexity-of-bst-inorder-traversal-if-implemented-this-way">stackoverflow post on the time complexity of an inorder traversal of a binary search tree</a>, if you call the successor function $n$ times, the time complexity is $O(n)$.</p>

<p>What resolves the apparent contradiction between:</p>

<blockquote>
 <p>If I call the successor function once, the time complexity is $O(h)$, which could be $O(n)$ or $O(\log n)$, depending on the kind of tree.</p>
</blockquote>

<p>AND</p>

<blockquote>
 <p>If I call the successor $n$ times, the time complexity is $O(n)$ in a balanced tree.</p>
</blockquote>

<p>Shouldn't tree traversal take $O(n^2)$ or $O(n\log n)$ time?</p>
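The resolution is amortization: over a full traversal, each tree edge is walked at most twice (once down, once up), so the $n$ successor calls cost $O(n)$ in total even though a single call can cost $O(h)$. A small instrumented sketch (class and function names are mine) makes the bound concrete by counting pointer moves:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.parent = key, None, None, None

def bst_insert(root, key):
    """Insert key into a BST with parent pointers; returns the root."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        side = 'left' if key < cur.key else 'right'
        child = getattr(cur, side)
        if child is None:
            node = Node(key)
            node.parent = cur
            setattr(cur, side, node)
            return root
        cur = child

def traverse_via_successor(root):
    """In-order traversal by repeated successor; counts pointer moves."""
    moves = 0
    node = root
    while node.left:                      # find the minimum
        node, moves = node.left, moves + 1
    out = []
    while node:
        out.append(node.key)
        if node.right:                    # successor: min of right subtree
            node, moves = node.right, moves + 1
            while node.left:
                node, moves = node.left, moves + 1
        else:                             # successor: first left-ancestor
            while node.parent and node is node.parent.right:
                node, moves = node.parent, moves + 1
            node = node.parent
            if node:
                moves += 1
    return out, moves
```

With $n$ nodes the tree has $n-1$ edges, so the total number of moves is at most $2(n-1)$, regardless of how the individual successor calls distribute that cost.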
 | binary trees search trees | 1 | What is the time complexity of calling successor $n$ times during tree traversal? -- (binary trees search trees)
 | habedi/stack-exchange-dataset |
6,382 | Graph Closeness - Different result with gephi and NodeXL | <p>I'm writing a JavaScript library for calculating graph measurements such as degree centrality, eccentricity, closeness and betweenness.</p>

<p>In order to validate my library, I use two existing applications, <a href="http://gephi.org" rel="nofollow">Gephi</a> and <a href="http://nodexl.codeplex.com/" rel="nofollow">NodeXL</a>, to run the calculations.
The problem is that I get what look like different results.</p>

<p>I built a simple graph:</p>

<pre><code> (A) ----- (B)
 | |
 | | 
 (C) ----- (D)
</code></pre>

<p>Gephi gave those results:</p>

<pre><code>A ecc=2 close=1.333 bet=0.5
B ecc=2 close=1.333 bet=0.5
C ecc=2 close=1.333 bet=0.5
D ecc=2 close=1.333 bet=0.5
</code></pre>

<p>NodeXL gave those results:</p>

<pre><code>A close=0.25 bet=0.5
B close=0.25 bet=0.5
C close=0.25 bet=0.5
D close=0.25 bet=0.5
</code></pre>

<p>Note that NodeXL does not calculate eccentricity.</p>

<p>Which one is right?<br>
Are the results really different?</p>

<p>I didn't normalize (or at least not intend to normalize) any results.</p>
 | graphs terminology | 1 | Graph Closeness - Different result with gephi and NodeXL -- (graphs terminology)
 | habedi/stack-exchange-dataset |
6,385 | Multitape Turing machines against single tape Turing machines | <p><em>Introduction</em>: I recently learned that a multi-tape Turing Machine $\text{TM}_k$ is no more "powerful" than a single tape Turing machine $\text{TM}$. The proof that $\text{TM}_k \equiv \text{TM}$ is based on the idea that a $\text{TM}$ can simulate a $\text{TM}_k$ by using a unique character to separate the respective areas of each of the $k$ tapes.</p>

<p>Given this idea, how would we prove that a process taking $t(n)$ time on a $\text{TM}_k$ can be simulated by a 2-tape Turing machine $\text{TM}_2$ in $O(t(n)\log t(n))$ time?</p>
 | time complexity turing machines simulation tape complexity | 1 | Multitape Turing machines against single tape Turing machines -- (time complexity turing machines simulation tape complexity)
 | habedi/stack-exchange-dataset |
6,391 | How to prove $L \cdot L^{*} = L^{+}$ | <p>How can one formally prove</p>

<p>$L \cdot L^{*} = L^{+}$</p>

<p>It looks obvious to me since with the concatenation you get rid of $\varepsilon$, but I cannot think of a formal proof through induction or something.</p>
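One standard route (a sketch, assuming the usual definitions $L^* = \bigcup_{i \ge 0} L^i$ and $L^+ = \bigcup_{i \ge 1} L^i$) avoids induction on words entirely and just distributes concatenation over union:

```latex
L \cdot L^{*}
  = L \cdot \bigcup_{i \ge 0} L^{i}
  = \bigcup_{i \ge 0} L \cdot L^{i}
  = \bigcup_{i \ge 0} L^{i+1}
  = \bigcup_{i \ge 1} L^{i}
  = L^{+}
```

The second equality (concatenation distributes over arbitrary unions) is the step that replaces the induction; it can be checked element-wise: $w \in L \cdot \bigcup_i L^i$ iff $w = uv$ with $u \in L$ and $v \in L^i$ for some $i$, iff $w \in \bigcup_i L \cdot L^i$.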
 | formal languages regular languages | 1 | How to prove $L \cdot L^{*} = L^{+}$ -- (formal languages regular languages)
 | habedi/stack-exchange-dataset |
6,393 | When are 2 decision/optimization problems equivalent? | <p>Does anybody know a good definition of two decision/optimization problems being equivalent? </p>

<p>I am asking since, for example, if arbitrary polynomial-time computations are allowed, any two problems in NP could be considered equivalent.</p>
 | complexity theory terminology undecidability | 1 | When are 2 decision/optimization problems equivalent? -- (complexity theory terminology undecidability)
 | habedi/stack-exchange-dataset |
6,405 | Maximum number of nodes with height h | <p>How is $\frac{n}{2^{h+1}}$ the maximum possible number of nodes at height $h$ in a binary search tree or heap? I saw this used in the book's proof of the asymptotic bound on the <code>build_heap</code> function, but I don't get it.</p>
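For heaps, the claim (usually stated with a ceiling) can be sketched by induction on the height; this is an outline, not the book's exact proof:

```latex
% Claim: a binary heap on n nodes has at most \lceil n / 2^{h+1} \rceil
% nodes of height h.
%
% Base case h = 0: in the array representation, the leaves are exactly
% the nodes \lfloor n/2 \rfloor + 1, \dots, n, so there are
% n - \lfloor n/2 \rfloor = \lceil n/2 \rceil of them.
%
% Step: deleting all leaves leaves a heap on \lfloor n/2 \rfloor nodes
% in which every remaining node's height has dropped by exactly 1, so
% the original height-h nodes are the new height-(h-1) nodes. By the
% induction hypothesis their number is at most
%   \bigl\lceil \lfloor n/2 \rfloor / 2^{h} \bigr\rceil
%     \le \lceil n / 2^{h+1} \rceil .
```

For the `build_heap` bound, this count is then combined with the $O(h)$ cost of sifting a height-$h$ node down, and the resulting sum $\sum_h \lceil n/2^{h+1}\rceil \, O(h)$ evaluates to $O(n)$.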
 | data structures binary trees | 1 | Maximum number of nodes with height h -- (data structures binary trees)
 | habedi/stack-exchange-dataset |
6,406 | Can $f$ be not computable even if $L$ is decidable? | <p>I am trying to teach myself computability theory with a textbook. According to my book, a function $f$ over an alphabet $A=\{a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z\}$ is computable iff the language</p>

<p>$$
L = \{s\#^j\sigma : s\in A^*, \sigma \in A, \text{ the }j\text{'th symbol of } f(s)\text{ is } \sigma\}$$</p>

<p>is decidable. Why is that? Couldn't a function $f$ be not computable even if $L$ is decidable?</p>
 | formal languages computability undecidability | 1 | Can $f$ be not computable even if $L$ is decidable? -- (formal languages computability undecidability)
 | habedi/stack-exchange-dataset |
6,410 | Solving a recurrence relation with √n as parameter | <p>Consider the recurrence </p>

<p>$\qquad\displaystyle T(n) = \sqrt{n} \cdot T\bigl(\sqrt{n}\bigr) + c\,n$ </p>

<p>for $n \gt 2$ with some positive constant $c$, and $T(2) = 1$.</p>

<p>I know the Master theorem for solving recurrences, but I'm not sure as to how we could solve this relation using it. How do you approach the square root parameter?</p>
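One common trick (a sketch; it works because the additive term is linear) is to divide through by $n$ so the square root turns into a simple descent:

```latex
% Let S(n) = T(n) / n. Dividing the recurrence by n gives
%   \frac{T(n)}{n} = \frac{T(\sqrt{n})}{\sqrt{n}} + c
%   \quad\Longrightarrow\quad S(n) = S(\sqrt{n}) + c .
% Each step replaces n by \sqrt{n}, i.e. halves \log n, so it takes
% \Theta(\log \log n) steps to reach the base case n = 2. Hence
%   S(n) = \Theta(\log \log n)
%   \quad\text{and}\quad
%   T(n) = n \cdot S(n) = \Theta(n \log \log n).
```

Equivalently, the substitution $n = 2^m$, $m = 2^j$ turns the recurrence into one over $j$ that decreases by 1 per step, which is the same $\log\log n$ count.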
 | asymptotics recurrence relation master theorem | 1 | Solving a recurrence relation with √n as parameter -- (asymptotics recurrence relation master theorem)
 | habedi/stack-exchange-dataset |
6,414 | To prove Turing-completeness, is it enough to prove capability of producing arbitrary output? | <p>Turing completeness is typically proved by showing that the machine in question can simulate a machine already known to be Turing-complete.</p>

<p>Can the same be achieved by showing that the machine in question is capable of generating arbitrary output, if proper input is given?</p>
 | simulation turing completeness | 1 | To prove Turing-completeness, is it enough to prove capability of producing arbitrary output? -- (simulation turing completeness)
 | habedi/stack-exchange-dataset |
6,415 | Context Free Grammar for language | <p>The language is $L = \{a^{i} b^{j} c^{k} \;|\; k \neq 2j\}$. I'm trying to write a grammar for this language, what I have so far is:</p>

<p>$S \rightarrow AT_{1} \;|\; AT_{2} \;|\; AT_{3} \;|\; AB \;|\; AC$</p>

<p>$A \rightarrow aA \;|\; \varepsilon$ </p>

<p>$B \rightarrow bB \;|\; \varepsilon$</p>

<p>$C \rightarrow cC \;|\; \varepsilon$</p>

<p>$T_{1} \rightarrow bbB'T_{1}c \;|\; \varepsilon $ (for $2j > k$)(1)</p>

<p>$B' \rightarrow bB' \;|\; b$</p>

<p>$T_{2} \rightarrow bT_{2}ccC'\;|\; \varepsilon$ (for $2j < k$)</p>

<p>$C' \rightarrow cC' \;|\; c$</p>

<p>$T_{3} \rightarrow bT_{3}c \;|\; \varepsilon$ (for $j = k$)</p>

<p>The problem I am having is that the string $bbccc$ cannot be generated although it is valid: in that case $j = 2$ and $k = 3$, so $2\times 2 > 3$, which corresponds to production rule (1). How can I fix this?</p>
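Independently of the grammar fix, it helps to have a reference membership test to check candidate grammars against; a minimal sketch (the function name is mine):

```python
import re

def in_language(s):
    """Membership test for L = { a^i b^j c^k : k != 2j }."""
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    if m is None:
        return False  # not of the shape a^i b^j c^k at all
    j, k = len(m.group(2)), len(m.group(3))
    return k != 2 * j
```

Note that the checker also flags some easy-to-miss cases: the empty string and any string of $a$'s alone have $j = k = 0$, so $k = 2j$ and they are <em>not</em> in $L$, which a candidate grammar must not generate either.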
 | formal languages formal grammars context free | 1 | Context Free Grammar for language -- (formal languages formal grammars context free)
 | habedi/stack-exchange-dataset |
6,418 | Find non-regular $L$ such that $L \cup L^R$ is regular? | <p>I've been studying for an exam I have tomorrow, and I was looking through some previous sample exam questions, when I came across this problem:</p>

<blockquote>
 <p>Give a non-regular language $L$ such that $L \cup L^R$ is regular.</p>
</blockquote>

<p>I've been sitting here and thinking and thinking, and I can't seem to come up with a situation where this is valid. I've determined a few things based on my understanding of non-regular languages, as well as the problem itself:</p>

<ul>
<li>$L$ must be infinite.</li>
<li>$L$ must involve some kind of counting.</li>
<li>$L$ must contain multiple letters (i.e. it cannot be composed of entirely $a$s).</li>
</ul>

<p>Given this, I went through a few basic possibilities:</p>

<ul>
<li>$a^ib^i$ : This would result in $L \cup L^R$ being irregular also.</li>
<li>$(ab)^i(ba)^i$ (or something else palindromic) : Again, this would result in $L \cup L^R$ being irregular also. (Any palindrome would, as $L = L^R$.)</li>
<li>$a^pb^q$ (where $p$ and $q$ are prime) : This, too, would result in $L \cup L^R$ being irregular also, though it would be a very much broader language, which I think is a step in the right direction.</li>
</ul>

<p>After I got this far, I think the key is in creating some language that, when unioned with itself, forms something akin to $a^*b^*$ or $(ab)^*$. The broader the words within the language, the easier it seems to define. But I can't seem to quite wrap my head around doing this.</p>

<p>Does anyone have a hint/spoiler or possible solution to this?</p>

<p><em>(NB: My professor does not post solutions.)</em></p>
 | formal languages regular languages | 1 | Find non-regular $L$ such that $L \cup L^R$ is regular? -- (formal languages regular languages)
 | habedi/stack-exchange-dataset |
6,419 | Proving the language of words with equal numbers of symbols non-context-free | <blockquote>
 <p><strong>Possible Duplicate:</strong><br>
 <a href="https://cs.stackexchange.com/questions/265/how-to-prove-that-a-language-is-not-context-free">How to prove that a language is not context-free?</a> </p>
</blockquote>



<p>I'm having a hard time figuring this out, any help is appreciated. </p>

<p>Let EQUAL be the language of all words over $\Sigma = \{a,b,c\}$ that have the same number of $a$’s, $b$’s and $c$’s:</p>

<p>$\qquad \text{EQUAL} = \{ w \in \Sigma^* \mid |w|_a = |w|_b = |w|_c \}$</p>

<p>The order of the letters doesn't matter. How can you prove that EQUAL is non-context-free?</p>
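One standard route (a hint-level sketch, not a full write-up) uses a closure property rather than pumping EQUAL directly:

```latex
% Context-free languages are closed under intersection with regular
% languages. If EQUAL were context-free, then so would be
%   EQUAL \cap a^{*} b^{*} c^{*} = \{ a^{n} b^{n} c^{n} : n \ge 0 \},
% since a^{*} b^{*} c^{*} is regular. But \{ a^n b^n c^n \} is the
% textbook non-context-free language (by the pumping lemma for CFLs),
% a contradiction. Hence EQUAL is not context-free.
```

This indirection is what handles the "order doesn't matter" aspect: intersecting with $a^*b^*c^*$ pins the letters into a fixed order, where pumping is easy.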
 | formal languages context free | 1 | Proving the language of words with equal numbers of symbols non-context-free -- (formal languages context free)
 | habedi/stack-exchange-dataset |
6,420 | Is it more effective to vote for a woman? | <p>A certain political party wants to encourage women to participate in their primary elections, so they decide, that the 4th position is reserved for a woman. That is, if there is no woman in the top 4 positions, then the woman with the largest number of votes will be promoted to the 4th position, and the candidates at positions 4 and below (5, 6, 7...) will be demoted one position (of course, if there is initially a woman in one of the top 4 positions, then no promotion/demotion will take place).</p>

<p>There are two candidates that I support equally, one is a man and the other is a woman. Is it true that, if I vote for the woman, my vote is more effective?</p>

<p>In a more extreme case, where the 1st position is reserved for a woman, it's clear that my vote is most effective when I give it to the woman, because this is my only chance of sending my favorite candidate to the 1st position; voting for the man, in this case, will never bring my favorite candidate to the 1st position. </p>

<p>Intuitively, it seems to be the same with the 4th position reserved, because, if I vote for the man and he reaches position $\leq 4$, he might be demoted, but if I vote for the woman and she reaches position $\leq 4$, she might be promoted, so my single vote may be worth a lot.</p>

<p>However, I am looking for a formal proof that this is the case (or maybe a disproof?)</p>
 | game theory voting | 1 | Is it more effective to vote for a woman? -- (game theory voting)
 | habedi/stack-exchange-dataset |
6,423 | What are the different types of databases? | <p>Is there a study or classification available on the different types of databases? (Examples include structured, unstructured, semi-structured, relational, object-oriented, folksonomies, etc.) </p>
 | terminology reference request databases | 1 | What are the different types of databases? -- (terminology reference request databases)
<p>Is there a study or classification available on the different types of databases? (Examples include structured, unstructured, semi-structured, relational, object-oriented, folksonomies, etc.) </p>
 | habedi/stack-exchange-dataset |
6,435 | Base of logarithm in runtime of Prim's and Kruskal's algorithms | <p>For Prim's and Kruskal's algorithms there are many implementations, which give different running times. However, suppose our implementation of Prim's algorithm has runtime $O(|E| + |V|\cdot \log(|V|))$ and Kruskal's algorithm has runtime $O(|E|\cdot \log(|V|))$.</p>

<p>What is the base of the $\log$?</p>
 | algorithms time complexity graphs algorithm analysis runtime analysis | 1 | Base of logarithm in runtime of Prim's and Kruskal's algorithms -- (algorithms time complexity graphs algorithm analysis runtime analysis)
<p>For Prim's and Kruskal's algorithms there are many implementations, which give different running times. However, suppose our implementation of Prim's algorithm has runtime $O(|E| + |V|\cdot \log(|V|))$ and Kruskal's algorithm has runtime $O(|E|\cdot \log(|V|))$.</p>

<p>What is the base of the $\log$?</p>
 | habedi/stack-exchange-dataset |
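The base of the logarithm turns out not to matter inside $O(\cdot)$: by the change-of-base identity, logarithms to different bases differ only by a constant factor, which big-O absorbs. A short worked derivation:

```latex
\log_a n = \frac{\log_b n}{\log_b a}
         = \underbrace{\frac{1}{\log_b a}}_{\text{a constant}} \cdot \log_b n ,
\qquad \text{so} \qquad
O(|E|\cdot \log_a |V|) = O(|E|\cdot \log_b |V|) \quad \text{for any fixed } a, b > 1 .
```

Base 2 is the natural reading, since the $\log$ factor typically comes from the height of a binary heap or from comparison sorting, but any base yields the same asymptotic class.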
6,443 | Simplification of regular expression and conversion into finite automata | <p>This is a beginner's question. I am reading the book "Introduction to Computer Theory" by Daniel Cohen, but I end up confused about the simplification of regular expressions and finite automata. I want to create an FA for the regular expression</p>

<p>$\qquad \displaystyle (a+b)^* (ab+ba)^+a^+\;.$</p>

<p>My first question is: how can we simplify this expression? Can we write the middle part as $(ab+ba)(ab+ba)^*$? Will this simplify the expression?</p>

<p>My second question is whether the automaton given below is equivalent to this regular expression? If not, what is the mistake?</p>

<p><img src="https://i.stack.imgur.com/m4Agi.png" alt="enter image description here"></p>

<p>This is not homework, but I want to learn from this basic example. Please bear with me, as I am a beginner.</p>
 | formal languages automata finite automata regular expressions | 1 | Simplification of regular expression and conversion into finite automata -- (formal languages automata finite automata regular expressions)
<p>This is a beginner's question. I am reading the book "Introduction to Computer Theory" by Daniel Cohen, but I end up confused about the simplification of regular expressions and finite automata. I want to create an FA for the regular expression</p>

<p>$\qquad \displaystyle (a+b)^* (ab+ba)^+a^+\;.$</p>

<p>My first question is: how can we simplify this expression? Can we write the middle part as $(ab+ba)(ab+ba)^*$? Will this simplify the expression?</p>

<p>My second question is whether the automaton given below is equivalent to this regular expression? If not, what is the mistake?</p>

<p><img src="https://i.stack.imgur.com/m4Agi.png" alt="enter image description here"></p>

<p>This is not homework, but I want to learn from this basic example. Please bear with me, as I am a beginner.</p>
 | habedi/stack-exchange-dataset |
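The rewriting proposed in the question is an instance of the general identity $X^+ = XX^*$ ($X^+$ means one or more repetitions of $X$), so it is always valid, though it does not make the expression shorter. As a non-authoritative sanity check, one can compare the two forms of the full expression exhaustively on short strings over $\{a,b\}$ in Python:

```python
import re
from itertools import product

# Middle part (ab+ba)^+ versus the proposed (ab+ba)(ab+ba)^*,
# embedded in the full expression (a+b)* (ab+ba)^+ a^+.
ORIGINAL = re.compile(r'[ab]*(?:ab|ba)+a+$')
REWRITTEN = re.compile(r'[ab]*(?:ab|ba)(?:ab|ba)*a+$')

def agree_up_to(max_len: int) -> bool:
    # Exhaustively compare the two regexes on every string over {a, b}
    # of length at most max_len.
    for n in range(max_len + 1):
        for w in map(''.join, product('ab', repeat=n)):
            if bool(ORIGINAL.match(w)) != bool(REWRITTEN.match(w)):
                return False
    return True
```

An exhaustive check over bounded lengths is only evidence, of course; the actual justification is the definition of $X^+$ as "one or more copies of $X$", i.e. $XX^*$.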
6,446 | Modification of Hamilton Path | <p>Although I know that the <a href="http://en.wikipedia.org/wiki/Hamiltonian_path_problem" rel="nofollow noreferrer">Hamilton Path problem</a> is ${\sf NP}$-complete, I think the following variant can be solved in polynomial time:</p>

<blockquote>
 <p>Given a planar graph with vertex set $V$, edge set $E$, start node $S$ and target node $F$,
 our task is to find a Hamiltonian path from $S$ to $F$ or report that no such path exists.</p>
 
 <p><em>Last condition</em>: In the path, in addition to selecting the directly connected vertices, 
 we can also choose those connected to exactly one neighbor.</p>
 
 <p><strong>Edit</strong>: The degree of any vertex is at most four ($\deg(v_i) \le 4$).</p>
</blockquote>

<p>Does anyone have any ideas how to prove that this can be solved in polynomial time? </p>

<p>It can be hard to understand, so I will give an example: </p>

<p><img src="https://i.stack.imgur.com/meTSp.png" alt="Examples"></p>

<p>In the left example, for $S=1,F=12$, the solution is the path $1, 11, 8, 7, 5, 9, 2, 10, 4, 6, 3, 12$. </p>

<p>In the right example, for $S=1,F=15$, there is no Hamiltonian path.</p>
 | algorithms complexity theory graphs np hard | 1 | Modification of Hamilton Path -- (algorithms complexity theory graphs np hard)
<p>Although I know that the <a href="http://en.wikipedia.org/wiki/Hamiltonian_path_problem" rel="nofollow noreferrer">Hamilton Path problem</a> is ${\sf NP}$-complete, I think the following variant can be solved in polynomial time:</p>

<blockquote>
 <p>Given a planar graph with vertex set $V$, edge set $E$, start node $S$ and target node $F$,
 our task is to find a Hamiltonian path from $S$ to $F$ or report that no such path exists.</p>
 
 <p><em>Last condition</em>: In the path, in addition to selecting the directly connected vertices, 
 we can also choose those connected to exactly one neighbor.</p>
 
 <p><strong>Edit</strong>: The degree of any vertex is at most four ($\deg(v_i) \le 4$).</p>
</blockquote>

<p>Does anyone have any ideas how to prove that this can be solved in polynomial time? </p>

<p>It can be hard to understand, so I will give an example: </p>

<p><img src="https://i.stack.imgur.com/meTSp.png" alt="Examples"></p>

<p>In the left example, for $S=1,F=12$, the solution is the path $1, 11, 8, 7, 5, 9, 2, 10, 4, 6, 3, 12$. </p>

<p>In the right example, for $S=1,F=15$, there is no Hamiltonian path.</p>
 | habedi/stack-exchange-dataset |
6,456 | How many different max-heaps exist for a list of n integers? | <p>How many different max-heaps exist for a list of <span class="math-container">$n$</span> integers?</p>

<p>Example: list <code>[1, 2, 3, 4]</code></p>

<p>For example, the max-heap can be <code>4 3 2 1</code>:</p>

<pre><code> 4
 / \
 3 2
 /
1
</code></pre>

<p>or <code>4 2 3 1</code>:</p>

<pre><code> 4 
 / \
 2 3 
 /
1
</code></pre>
 | data structures combinatorics heaps | 1 | How many different max-heaps exist for a list of n integers? -- (data structures combinatorics heaps)
<p>How many different max-heaps exist for a list of <span class="math-container">$n$</span> integers?</p>

<p>Example: list <code>[1, 2, 3, 4]</code></p>

<p>For example, the max-heap can be <code>4 3 2 1</code>:</p>

<pre><code> 4
 / \
 3 2
 /
1
</code></pre>

<p>or <code>4 2 3 1</code>:</p>

<pre><code> 4 
 / \
 2 3 
 /
1
</code></pre>
 | habedi/stack-exchange-dataset |
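As a side note, the list `[1, 2, 3, 4]` admits a third max-heap, `4 3 1 2`. For distinct keys the count depends only on $n$: the maximum must sit at the root, and the complete-tree shape fixes the sizes of the two subtrees, so all that remains is to choose which keys go into the left subtree. A sketch of this standard recurrence (the function name is mine):

```python
from math import comb

def num_max_heaps(n: int) -> int:
    # H(n) = C(n-1, L) * H(L) * H(R): choose which of the n-1 non-maximal
    # keys fill the left subtree (its size L is forced by the shape of a
    # complete binary tree on n nodes), then recurse on both sides.
    if n <= 2:
        return 1
    h = n.bit_length() - 1                 # tree height, floor(log2 n)
    last = n - (2 ** h - 1)                # nodes on the bottom level
    left = (2 ** (h - 1) - 1) + min(last, 2 ** (h - 1))  # left-subtree size
    right = n - 1 - left
    return comb(n - 1, left) * num_max_heaps(left) * num_max_heaps(right)
```

`num_max_heaps(4)` gives 3 and `num_max_heaps(5)` gives 8, matching a hand enumeration for small $n$.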
6,463 | Hardness of ambiguity/non-ambiguity for context-free grammars | <p>A <a href="http://en.wikipedia.org/wiki/Formal_grammar" rel="nofollow">grammar</a> is <em><a href="http://en.wikipedia.org/wiki/Ambiguous_grammar" rel="nofollow">ambiguous</a></em> if at least one of the words in the
language it defines can be parsed in more than one way. A simple example of an ambiguous grammar is
$$
 E \rightarrow E+E \ |\ E*E \ |\ 0 \ |\ 1 \ |\ ...
$$
because the string 1+2*3 can be parsed as (1+2)*3 and 1+(2*3). For
context free grammars (CFGs) ambiguity is not decidable [1, 2]. This implies that non-ambiguity is also not decidable. Moreover, at least one of ambiguity and
non-ambiguity cannot even be recursively enumerable, for otherwise
ambiguity of a given CFG $G$ could be decided by running the
enumeration of ambiguity and non-ambiguity together and seeing which
one contains $G$ (and one of them must).</p>

<p>So which problem is harder in this sense? Ambiguity or non-ambiguity?</p>

<ol>
<li><p>D. G. Cantor, On The Ambiguity Problem of Backus Systems.</p></li>
<li><p>R. W. Floyd, On ambiguity in phrase structure languages.</p></li>
</ol>
 | computability formal grammars context free undecidability ambiguity | 1 | Hardness of ambiguity/non-ambiguity for context-free grammars -- (computability formal grammars context free undecidability ambiguity)
<p>A <a href="http://en.wikipedia.org/wiki/Formal_grammar" rel="nofollow">grammar</a> is <em><a href="http://en.wikipedia.org/wiki/Ambiguous_grammar" rel="nofollow">ambiguous</a></em> if at least one of the words in the
language it defines can be parsed in more than one way. A simple example of an ambiguous grammar is
$$
 E \rightarrow E+E \ |\ E*E \ |\ 0 \ |\ 1 \ |\ ...
$$
because the string 1+2*3 can be parsed as (1+2)*3 and 1+(2*3). For
context free grammars (CFGs) ambiguity is not decidable [1, 2]. This implies that non-ambiguity is also not decidable. Moreover, at least one of ambiguity and
non-ambiguity cannot even be recursively enumerable, for otherwise
ambiguity of a given CFG $G$ could be decided by running the
enumeration of ambiguity and non-ambiguity together and seeing which
one contains $G$ (and one of them must).</p>

<p>So which problem is harder in this sense? Ambiguity or non-ambiguity?</p>

<ol>
<li><p>D. G. Cantor, On The Ambiguity Problem of Backus Systems.</p></li>
<li><p>R. W. Floyd, On ambiguity in phrase structure languages.</p></li>
</ol>
 | habedi/stack-exchange-dataset |
6,470 | Iterative binary search analysis | <p>I'm a little bit confused about the analysis of <a href="http://en.wikipedia.org/wiki/Binary_search" rel="nofollow">binary search</a>.
In almost every paper, the writer assumes that the array size $n$ is always $2^k$.
I understand that the time complexity becomes $\log(n)$ (worst case) under this assumption. But what if $n \neq 2^k$?</p>

<p>For example if $n=24$, then we have
5 iterations for 24<br>
4 i. for 12<br>
3 i. for 6<br>
2 i. for 3<br>
1 i. for 1</p>

<p>So how do we get the result $k=\log n$ in this example (I mean of course every similar example whereby $n\neq2^k$)?</p>
 | algorithms time complexity algorithm analysis runtime analysis search algorithms | 1 | Iterative binary search analysis -- (algorithms time complexity algorithm analysis runtime analysis search algorithms)
<p>I'm a little bit confused about the analysis of <a href="http://en.wikipedia.org/wiki/Binary_search" rel="nofollow">binary search</a>.
In almost every paper, the writer assumes that the array size $n$ is always $2^k$.
I understand that the time complexity becomes $\log(n)$ (worst case) under this assumption. But what if $n \neq 2^k$?</p>

<p>For example if $n=24$, then we have
5 iterations for 24<br>
4 i. for 12<br>
3 i. for 6<br>
2 i. for 3<br>
1 i. for 1</p>

<p>So how do we get the result $k=\log n$ in this example (I mean of course every similar example whereby $n\neq2^k$)?</p>
 | habedi/stack-exchange-dataset |
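No power-of-two assumption is actually needed: each iteration at least halves the remaining range, so the worst case satisfies $T(n) = T(\lfloor n/2 \rfloor) + 1$ with $T(1) = 1$, which gives $T(n) = \lfloor \log_2 n \rfloor + 1 = O(\log n)$ for every $n$. A tiny sketch that just counts the halvings:

```python
def worst_case_iterations(n: int) -> int:
    # Counts how many times n can be halved (integer division) before
    # reaching 0, i.e. floor(log2 n) + 1: the worst-case number of
    # probes binary search makes on an array of size n.
    count = 0
    while n >= 1:
        count += 1
        n //= 2
    return count
```

For $n = 24$ this returns 5, matching the hand count 24, 12, 6, 3, 1 in the question; in general it equals `n.bit_length()` in Python.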
6,473 | Why is $\sum_{j=0}^{\lfloor\log (n-1)\rfloor}2^j$ in $\Theta (n)$? | <p>I am trying to understand the summation in the amortized analysis of a hash table from an <a href="http://videolectures.net/mit6046jf05_leiserson_lec13/" rel="nofollow noreferrer">MIT lecture video</a> (at time 16:09). </p>

<p>Although you guys don't have to go and look at the video, I feel that the summation he does is wrong so I will attach the screenshot of the slide.</p>

<p><img src="https://i.stack.imgur.com/EBRfs.jpg" alt="MIT Lecture Slide"></p>
 | algorithms data structures algorithm analysis mathematical analysis discrete mathematics | 1 | Why is $\sum_{j=0}^{\lfloor\log (n-1)\rfloor}2^j$ in $\Theta (n)$? -- (algorithms data structures algorithm analysis mathematical analysis discrete mathematics)
<p>I am trying to understand the summation in the amortized analysis of a hash table from an <a href="http://videolectures.net/mit6046jf05_leiserson_lec13/" rel="nofollow noreferrer">MIT lecture video</a> (at time 16:09). </p>

<p>Although you guys don't have to go and look at the video, I feel that the summation he does is wrong so I will attach the screenshot of the slide.</p>

<p><img src="https://i.stack.imgur.com/EBRfs.jpg" alt="MIT Lecture Slide"></p>
 | habedi/stack-exchange-dataset |
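The summation on the slide is a finite geometric series. Writing $k = \lfloor \log_2 (n-1) \rfloor$, so that $2^k \le n - 1 < 2^{k+1}$:

```latex
\sum_{j=0}^{k} 2^j = 2^{k+1} - 1 ,
\qquad\text{with}\qquad
n - 1 \;\le\; 2^{k+1} - 1 \;\le\; 2(n-1) - 1 \;<\; 2n .
```

The sum is therefore squeezed between $n - 1$ and $2n$, hence it is $\Theta(n)$.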
6,476 | Minimum s-t cut in weighted directed acyclic graphs with possibly negative weights | <p>I ran into the following problem:</p>

<p>Given a directed acyclic graph with real-valued edge weights, and two vertices s and t, compute the minimum s-t cut.</p>

<p>For general graphs this is NP-hard, since one can trivially reduce max-cut to it by simply negating the edge weights (correct me if I'm wrong).</p>

<p>What is the situation with DAGs? Can min-cut (or max-cut) be solved in polynomial time? Is it NP-hard and, if so, are there any known approximation algorithms?</p>

<p>I tried to find work on this but wasn't able to (maybe I'm just using wrong keywords in my searches), so I was hoping somebody may know (or find) something about this.</p>
 | algorithms complexity theory graphs weighted graphs | 1 | Minimum s-t cut in weighted directed acyclic graphs with possibly negative weights -- (algorithms complexity theory graphs weighted graphs)
<p>I ran into the following problem:</p>

<p>Given a directed acyclic graph with real-valued edge weights, and two vertices s and t, compute the minimum s-t cut.</p>

<p>For general graphs this is NP-hard, since one can trivially reduce max-cut to it by simply negating the edge weights (correct me if I'm wrong).</p>

<p>What is the situation with DAGs? Can min-cut (or max-cut) be solved in polynomial time? Is it NP-hard and, if so, are there any known approximation algorithms?</p>

<p>I tried to find work on this but wasn't able to (maybe I'm just using wrong keywords in my searches), so I was hoping somebody may know (or find) something about this.</p>
 | habedi/stack-exchange-dataset |
6,480 | Commonly used Error Correcting Codes | <p>We know error correcting codes are parameterized as (n,k,d) codes. I wanted to know the values of these parameters for some commonly used error correcting codes in computer memories or in DRAMs, etc.</p>

<p>I just wanted to see some values for these parameters, used in real life applications.</p>
 | computer architecture error correcting codes | 1 | Commonly used Error Correcting Codes -- (computer architecture error correcting codes)
<p>We know error correcting codes are parameterized as (n,k,d) codes. I wanted to know the values of these parameters for some commonly used error correcting codes in computer memories or in DRAMs, etc.</p>

<p>I just wanted to see some values for these parameters, used in real life applications.</p>
 | habedi/stack-exchange-dataset |
6,488 | First-order logic arity defines decidability? | <p>I've read that first-order logic is in general undecidable, and that it is decidable only when restricted to unary (monadic) predicates. (I think that's propositional logic, correct me if I am wrong.)</p>

<p>The question is <strong>why arity leads to undecidable problems?</strong></p>

<p>I would like to see some reference material, or at least some simple <em>example</em> of it, as a way to think in this passage from unary to n-ary and why it leads to undecidable problems. </p>
 | reference request logic undecidability satisfiability first order logic | 1 | First-order logic arity defines decidability? -- (reference request logic undecidability satisfiability first order logic)
<p>I've read that first-order logic is in general undecidable, and that it is decidable only when restricted to unary (monadic) predicates. (I think that's propositional logic, correct me if I am wrong.)</p>

<p>The question is <strong>why arity leads to undecidable problems?</strong></p>

<p>I would like to see some reference material, or at least some simple <em>example</em> of it, as a way to think in this passage from unary to n-ary and why it leads to undecidable problems. </p>
 | habedi/stack-exchange-dataset |
6,491 | Resolution and incomplete Knowledge Base | <p>Assume I have an incomplete knowledge base, for example:</p>

<pre><code>(rich(dave), poor(dave)) // dave is either poor or rich

(not rich(dave), not poor(dave)) // dave is not poor and rich at the same time.
</code></pre>

<p>My questions are: (1) If I do resolution on the above clauses, will I get the empty clause? (2) If yes, does that mean my knowledge base is inconsistent?</p>
 | logic knowledge representation reasoning | 1 | Resolution and incomplete Knowledge Base -- (logic knowledge representation reasoning)
<p>Assume I have an incomplete knowledge base, for example:</p>

<pre><code>(rich(dave), poor(dave)) // dave is either poor or rich

(not rich(dave), not poor(dave)) // dave is not poor and rich at the same time.
</code></pre>

<p>My questions are: (1) If I do resolution on the above clauses, will I get the empty clause? (2) If yes, does that mean my knowledge base is inconsistent?</p>
 | habedi/stack-exchange-dataset |
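A useful way to see what resolution can derive here is to test the clause set for consistency first: the empty clause is derivable by resolution only from an unsatisfiable set. A small truth-table sketch (the dictionary representation is my own):

```python
from itertools import product

# A clause maps each atom to the truth value required by its literal:
# True for a positive literal, False for a negated one.
KB = [
    {'rich': True,  'poor': True},    # rich(dave) OR poor(dave)
    {'rich': False, 'poor': False},   # NOT rich(dave) OR NOT poor(dave)
]

def satisfiable(clauses):
    # Enumerate all assignments to the two atoms and return a model
    # satisfying every clause, or None if the set is unsatisfiable.
    for rich, poor in product([False, True], repeat=2):
        model = {'rich': rich, 'poor': poor}
        if all(any(model[atom] == val for atom, val in c.items())
               for c in clauses):
            return model
    return None
```

`satisfiable(KB)` finds a model (e.g. poor and not rich), so this knowledge base is consistent, and resolution can never produce the empty clause from it; resolving the two clauses on either atom only yields tautologies such as rich(dave) OR NOT rich(dave).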
6,500 | Can a graph have a cycle containing all nodes but not Hamiltonian? | <p>Is it possible for a graph to have a closed walk that goes through all the nodes, but no Hamiltonian cycle (i.e. the walk goes through some nodes more than once)? If yes, can anyone prove it? If not, can anyone give a counterexample?</p>
 | graphs | 1 | Can a graph have a cycle containing all nodes but not Hamiltonian? -- (graphs)
<p>Is it possible for a graph to have a closed walk that goes through all the nodes, but no Hamiltonian cycle (i.e. the walk goes through some nodes more than once)? If yes, can anyone prove it? If not, can anyone give a counterexample?</p>
 | habedi/stack-exchange-dataset |
6,503 | BPP search: what does boosting correctness entail? | <p>It is not really clear to me how, and if, I can do boosting for correctness (or error reduction) on a <a href="http://en.wikipedia.org/wiki/BPP_%28complexity%29" rel="nofollow">BPP</a> (bounded-error probabilistic polynomial-time) search problem. Can anyone explain to me how it works?</p>

<p>By a BPP search problem, I mean a problem where the algorithm can return false positives/negatives, a correct solution, or no-solution. Here's a definition:</p>

<p>A probabilistic polynomial-time algorithm $A$ solves the search problem of the relation $R$ if</p>

<ul>
<li>for every $x ∈ S$, $Pr[A(x) ∈ R(x)] > 1 - μ(|x|)$</li>
<li>for every $x ∉ SR$, $Pr[A(x) = \text{no-solution}] > 1 - μ(|x|)$</li>
</ul>

<p>where $R(x)$ is the set of solutions for the problem and $μ(|x|)$ is a negligible function (i.e., the algorithm rarely fails).</p>

<p>So now I would like to increase my probability of getting a good answer, how can I do it?</p>

<hr>

<p>~ ".. boosting for correctness.." : a way to increase the success probability of the algorithm (generally by multiple runs of the probabilistic algorithm), i.e., when the problem has a solution, the algorithm likely returns a valid one.</p>
 | probabilistic algorithms search problem | 1 | BPP search: what does boosting correctness entail? -- (probabilistic algorithms search problem)
<p>It is not really clear to me how, and if, I can do boosting for correctness (or error reduction) on a <a href="http://en.wikipedia.org/wiki/BPP_%28complexity%29" rel="nofollow">BPP</a> (bounded-error probabilistic polynomial-time) search problem. Can anyone explain to me how it works?</p>

<p>By a BPP search problem, I mean a problem where the algorithm can return false positives/negatives, a correct solution, or no-solution. Here's a definition:</p>

<p>A probabilistic polynomial-time algorithm $A$ solves the search problem of the relation $R$ if</p>

<ul>
<li>for every $x ∈ S$, $Pr[A(x) ∈ R(x)] > 1 - μ(|x|)$</li>
<li>for every $x ∉ SR$, $Pr[A(x) = \text{no-solution}] > 1 - μ(|x|)$</li>
</ul>

<p>where $R(x)$ is the set of solutions for the problem and $μ(|x|)$ is a negligible function (i.e., the algorithm rarely fails).</p>

<p>So now I would like to increase my probability of getting a good answer, how can I do it?</p>

<hr>

<p>~ ".. boosting for correctness.." : a way to increase the success probability of the algorithm (generally by multiple runs of the probabilistic algorithm), i.e., when the problem has a solution, the algorithm likely returns a valid one.</p>
 | habedi/stack-exchange-dataset |
6,504 | How can I convert the Turing machine that recognizes language $L$ into an unrestricted grammar? | <p>According to <a href="http://en.wikipedia.org/wiki/Unrestricted_grammar">this Wikipedia article</a>, unrestricted grammars are equivalent to Turing machines. The article notes that I can convert any Turing machine into an unrestricted grammar, but it only shows how to convert a grammar to a Turing machine.</p>

<p>How do I actually do that and convert the Turing machine that recognizes language $L$ into an unrestricted grammar? I have tried replacing transition rules with grammar rules, but a Turing machine can have many different configurations of states as well...</p>
 | formal grammars turing machines simulation | 1 | How can I convert the Turing machine that recognizes language $L$ into an unrestricted grammar? -- (formal grammars turing machines simulation)
<p>According to <a href="http://en.wikipedia.org/wiki/Unrestricted_grammar">this Wikipedia article</a>, unrestricted grammars are equivalent to Turing machines. The article notes that I can convert any Turing machine into an unrestricted grammar, but it only shows how to convert a grammar to a Turing machine.</p>

<p>How do I actually do that and convert the Turing machine that recognizes language $L$ into an unrestricted grammar? I have tried replacing transition rules with grammar rules, but a Turing machine can have many different configurations of states as well...</p>
 | habedi/stack-exchange-dataset |
6,506 | Some questions regarding compilers and assemblers | <p>I have several basic questions in mind that I need to clear up.</p>

<p><strong>Statement 1:</strong> A compiler converts human-readable code to object code, which is then converted to machine code (an executable) by the linker.</p>

<p>Am I right here?</p>

<p>At <a href="http://en.wikipedia.org/wiki/Object_file" rel="nofollow">Wikipedia</a>, it is written that</p>

<pre><code>Object files are produced by an assembler, compiler, or other language
translator, and used as input to the linker.
</code></pre>

<p><strong>Question 1:</strong> An assembler converts assembly-language code (<code>MOV A, B</code>, <code>ADD C</code>) to machine code. In the case of a high-level language like C++, the machine code is generated by the linker, as above, so the assembler is not used anywhere. How, then, can it create an object file as written above? </p>

<p>Intermediate code is generated to make the code run on different architectures.</p>

<p><strong>Question 2:</strong> Are the *.class (bytecode) files created by the Java compiler object files? If yes, can we say that the JVM that runs them is a type of linker (even though it does not create an executable)?</p>

<p><strong>Question 3:</strong> When we compile a C++ program in Turbo C++, we get *.obj files, which are the object files. Can we use them to generate an executable on some other architecture?</p>
 | terminology compilers code generation | 1 | Some questions regarding compilers and assemblers -- (terminology compilers code generation)
<p>I have several basic questions in mind that I need to clear up.</p>

<p><strong>Statement 1:</strong> A compiler converts human-readable code to object code, which is then converted to machine code (an executable) by the linker.</p>

<p>Am I right here?</p>

<p>At <a href="http://en.wikipedia.org/wiki/Object_file" rel="nofollow">Wikipedia</a>, it is written that</p>

<pre><code>Object files are produced by an assembler, compiler, or other language
translator, and used as input to the linker.
</code></pre>

<p><strong>Question 1:</strong> An assembler converts assembly-language code (<code>MOV A, B</code>, <code>ADD C</code>) to machine code. In the case of a high-level language like C++, the machine code is generated by the linker, as above, so the assembler is not used anywhere. How, then, can it create an object file as written above? </p>

<p>Intermediate code is generated to make the code run on different architectures.</p>

<p><strong>Question 2:</strong> Are the *.class (bytecode) files created by the Java compiler object files? If yes, can we say that the JVM that runs them is a type of linker (even though it does not create an executable)?</p>

<p><strong>Question 3:</strong> When we compile a C++ program in Turbo C++, we get *.obj files, which are the object files. Can we use them to generate an executable on some other architecture?</p>
 | habedi/stack-exchange-dataset |
6,507 | Stochastic algorithm | <p>We have a stochastic random source that emits the bit $0$ (or $1$) with probability $1/2$. 
We want to generate a uniform distribution on the set S = $\{0, 1,..., n-1\}$. </p>

<p>Which algorithm outputs each value $i\in S$ with probability $1/n$? And how many random bits are needed?</p>
 | algorithms randomness sampling | 1 | Stochastic algorithm -- (algorithms randomness sampling)
<p>We have a stochastic random source that emits the bit $0$ (or $1$) with probability $1/2$. 
We want to generate a uniform distribution on the set S = $\{0, 1,..., n-1\}$. </p>

<p>Which algorithm outputs each value $i\in S$ with probability $1/n$? And how many random bits are needed?</p>
 | habedi/stack-exchange-dataset |
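A standard approach is rejection sampling: draw $k = \lceil \log_2 n \rceil$ fair bits to get a uniform value in $\{0, \dots, 2^k - 1\}$, and retry whenever it lands outside $\{0, \dots, n-1\}$; conditioned on acceptance the result is uniform on $S$. Each attempt succeeds with probability $n/2^k > 1/2$, so the expected number of bits used is less than $2\lceil \log_2 n \rceil$ (and when $n$ is not a power of two, no algorithm can succeed with a fixed, bounded number of bits). A sketch, simulating the bit source with Python's `random`:

```python
import random

def uniform_sample(n: int) -> int:
    # Rejection sampling from fair coin flips: build a k-bit uniform
    # value and retry until it falls in {0, ..., n-1}.
    k = max(1, (n - 1).bit_length())   # ceil(log2 n) bits per attempt
    while True:
        x = 0
        for _ in range(k):
            x = (x << 1) | random.randint(0, 1)   # one fair bit
        if x < n:          # accept: conditioned on this, x is uniform
            return x
```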
6,514 | Breadth First Search with cost | <p>I am looking for tutorials/references that discuss a variant of breadth-first search that takes path costs into consideration, but I could not find much information.</p>

<p>Could someone refer a tutorial?</p>
 | algorithms reference request graphs search algorithms | 1 | Breadth First Search with cost -- (algorithms reference request graphs search algorithms)
<p>I am looking for tutorials/references that discuss a variant of breadth-first search that takes path costs into consideration, but I could not find much information.</p>

<p>Could someone refer a tutorial?</p>
 | habedi/stack-exchange-dataset |
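For what it's worth, BFS extended with path costs is usually discussed under the name uniform-cost search (essentially Dijkstra's algorithm with a goal test): the FIFO queue is replaced by a priority queue keyed on the cheapest known path cost. A minimal sketch (the graph representation is my own choice):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    # graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    # Returns (cost, path) for a cheapest path, or None if unreachable.
    frontier = [(0, start, [start])]     # priority queue keyed on cost
    expanded = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in expanded:
            continue
        expanded.add(node)
        for nbr, c in graph.get(node, []):
            if nbr not in expanded:
                heapq.heappush(frontier, (cost + c, nbr, path + [nbr]))
    return None
```

With unit edge costs this expands nodes in exactly BFS order; with arbitrary nonnegative costs it behaves like Dijkstra's algorithm.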
6,515 | Reduction of A_LBA to E_LBA | <p>I have a rather interesting one to ponder and would love to get an answer for it. We were discussing the topic of mapping reduction today in my computing theory course, and I was wondering why this reduction can't exist, $A_{LBA} \leq_{m} E_{LBA}$, since both of them concern linear bounded automata (LBAs). I do realize that $E_{LBA}$ is undecidable, $A_{LBA}$ is decidable, and the normal proof uses $A_{TM}$, or $E_{TM}$, to prove the undecidability of $E_{LBA}$. I am just curious why the proof uses a Turing machine to prove a result about LBAs, and my professor could not resolve my confusion. Is this reduction possible, and why or why not?</p>

<p><strong>Definitions:</strong></p>

<p>$A_{LBA} = \{\langle M, w\rangle \mid \text{$M$ is a linear bound automaton that accepts the string $w$}\}$</p>

<p>$E_{LBA} = \{\langle M \rangle \mid \text{$M$ is a linear bound automaton with $L(M)=\emptyset$}\}$</p>

<p>$A_{TM}$ and $E_{TM}$ are the equivalent problems for Turing Machines.</p>
 | computability turing machines reductions undecidability | 1 | Reduction of A_LBA to E_LBA -- (computability turing machines reductions undecidability)
<p>I have a rather interesting one to ponder and would love to get an answer for it. We were discussing the topic of mapping reduction today in my computing theory course, and I was wondering why this reduction can't exist, $A_{LBA} \leq_{m} E_{LBA}$, since both of them concern linear bounded automata (LBAs). I do realize that $E_{LBA}$ is undecidable, $A_{LBA}$ is decidable, and the normal proof uses $A_{TM}$, or $E_{TM}$, to prove the undecidability of $E_{LBA}$. I am just curious why the proof uses a Turing machine to prove a result about LBAs, and my professor could not resolve my confusion. Is this reduction possible, and why or why not?</p>

<p><strong>Definitions:</strong></p>

<p>$A_{LBA} = \{\langle M, w\rangle \mid \text{$M$ is a linear bound automaton that accepts the string $w$}\}$</p>

<p>$E_{LBA} = \{\langle M \rangle \mid \text{$M$ is a linear bound automaton with $L(M)=\emptyset$}\}$</p>

<p>$A_{TM}$ and $E_{TM}$ are the equivalent problems for Turing Machines.</p>
 | habedi/stack-exchange-dataset |
6,517 | what is semantics? | <p>There are many popular languages. But computer scientists tell us that in order to understand the behaviour of programs in those languages and to argue about program behavior definitely and unambiguously (e.g. prove that two programs are equivalent), we need to translate them into another, well-understood language. They call such a language "a semantics". Authors propose one of many semantics, explain the meaning of their constructions, and show how you can translate your language into theirs. Once you do that, everybody will understand your program with certainty, they say. </p>

<p>This looks good, yet I do not understand something. Are they telling us that they introduce another language in order to understand the first one? Why do we understand it better than the original one? Why is one semantics better than another? Why not learn the semantics of C right away, instead of inventing another language for describing the semantics of C? The same applies to syntax: why don't I ask the same question regarding syntax?</p>

<p><strong>PS</strong> In the comments I hear that semantics does not mean another language or translation into it. But Formal Semantics for VHDL says that if you understand something in only one way then you do not understand it and "meaning of meaning" can be specified if we supply a language with a mechanism that translates it into another (known) language. That is, "semantics is a Relation between formal systems". Hennessy, in <a href="https://www.scss.tcd.ie/Matthew.Hennessy/slexternal/resources/sembookWiley.pdf">Semantics of Programming Languages</a>, says that semantics allows for formal processing of the program "meaning", when semantics is supplied as BNF or Syntax Diagram. What is a formal system if not a language? </p>

<p><strong>PS2</strong> Can I say that hardware synthesis of a given HDL program into an interconnection of gates is a process of semantics extraction? We translate a (high-level) description into a (low-level) language that we understand afterwards.</p>
 | formal languages semantics | 1 | what is semantics? -- (formal languages semantics)
<p>There are many popular languages. But computer scientists tell us that in order to understand the behaviour of programs in those languages and to argue about program behavior definitely and unambiguously (e.g. prove that two programs are equivalent), we need to translate them into another, well-understood language. They call such a language "a semantics". Authors propose one of many semantics, explain the meaning of their constructions, and show how you can translate your language into theirs. Once you do that, everybody will understand your program with certainty, they say. </p>

<p>This looks good, yet I do not understand something. Are they telling us that they introduce another language in order to understand the first one? Why do we understand it better than the original one? Why is one semantics better than another? Why not learn the semantics of C right away, instead of inventing another language for describing the semantics of C? The same applies to syntax: why don't I ask the same question regarding syntax?</p>

<p><strong>PS</strong> In the comments I hear that semantics does not mean another language or translation into it. But Formal Semantics for VHDL says that if you understand something in only one way then you do not understand it and "meaning of meaning" can be specified if we supply a language with a mechanism that translates it into another (known) language. That is, "semantics is a Relation between formal systems". Hennessy, in <a href="https://www.scss.tcd.ie/Matthew.Hennessy/slexternal/resources/sembookWiley.pdf">Semantics of Programming Languages</a>, says that semantics allows for formal processing of the program "meaning", when semantics is supplied as BNF or Syntax Diagram. What is a formal system if not a language? </p>

<p><strong>PS2</strong> Can I say that HW synthesis of given HDL program into interconnection of gates, is a process of semantics extraction? We translate (high-level) description into the (low-level) language that we understand, afterwards.</p>
 | habedi/stack-exchange-dataset |
6,519 | How asymptotically bad is naive shuffling? | <p>It's well-known that this 'naive' algorithm for shuffling an array by swapping each item with another randomly-chosen one doesn't work correctly:</p>

<pre><code>for (i=0..n-1)
 swap(A[i], A[random(n)]);
</code></pre>

<p>Specifically, since at each of $n$ iterations, one of $n$ choices is made (with uniform probability), there are $n^n$ possible 'paths' through the computation; because the number of possible permutations $n!$ doesn't divide evenly into the number of paths $n^n$, it's impossible for this algorithm to produce each of the $n!$ permutations with equal probability. (Instead, one should use the so-called <em>Fisher-Yates</em> shuffle, which essentially replaces the call to choose a random number from [0..n) with a call to choose a random number from [i..n); that's moot to my question, though.)</p>

<p>What I'm wondering is, how 'bad' can the naive shuffle be? More specifically, letting $P(n)$ be the set of all permutations and $C(\rho)$ be the number of paths through the naive algorithm that produce the resulting permutation $\rho\in P(n)$, what is the asymptotic behavior of the functions </p>

<p>$\qquad \displaystyle M(n) = \frac{n!}{n^n}\max_{\rho\in P(n)} C(\rho)$ </p>

<p>and </p>

<p>$\qquad \displaystyle m(n) = \frac{n!}{n^n}\min_{\rho\in P(n)} C(\rho)$? </p>

<p>The leading factor is to 'normalize' these values: if the naive shuffle is 'asymptotically good' then </p>

<p>$\qquad \displaystyle \lim_{n\to\infty}M(n) = \lim_{n\to\infty}m(n) = 1$. </p>

<p>I suspect (based on some computer simulations I've seen) that the actual values are bounded away from 1, but is it even known if $\lim M(n)$ is finite, or if $\lim m(n)$ is bounded away from 0? What's known about the behavior of these quantities?</p>
 | algorithms algorithm analysis asymptotics probability theory randomness | 1 | How asymptotically bad is naive shuffling? -- (algorithms algorithm analysis asymptotics probability theory randomness)
<p>It's well-known that this 'naive' algorithm for shuffling an array by swapping each item with another randomly-chosen one doesn't work correctly:</p>

<pre><code>for (i=0..n-1)
 swap(A[i], A[random(n)]);
</code></pre>

<p>Specifically, since at each of $n$ iterations, one of $n$ choices is made (with uniform probability), there are $n^n$ possible 'paths' through the computation; because the number of possible permutations $n!$ doesn't divide evenly into the number of paths $n^n$, it's impossible for this algorithm to produce each of the $n!$ permutations with equal probability. (Instead, one should use the so-called <em>Fisher-Yates</em> shuffle, which essentially replaces the call to choose a random number from [0..n) with a call to choose a random number from [i..n); that's moot to my question, though.)</p>

<p>What I'm wondering is, how 'bad' can the naive shuffle be? More specifically, letting $P(n)$ be the set of all permutations and $C(\rho)$ be the number of paths through the naive algorithm that produce the resulting permutation $\rho\in P(n)$, what is the asymptotic behavior of the functions </p>

<p>$\qquad \displaystyle M(n) = \frac{n!}{n^n}\max_{\rho\in P(n)} C(\rho)$ </p>

<p>and </p>

<p>$\qquad \displaystyle m(n) = \frac{n!}{n^n}\min_{\rho\in P(n)} C(\rho)$? </p>

<p>The leading factor is to 'normalize' these values: if the naive shuffle is 'asymptotically good' then </p>

<p>$\qquad \displaystyle \lim_{n\to\infty}M(n) = \lim_{n\to\infty}m(n) = 1$. </p>

<p>I suspect (based on some computer simulations I've seen) that the actual values are bounded away from 1, but is it even known if $\lim M(n)$ is finite, or if $\lim m(n)$ is bounded away from 0? What's known about the behavior of these quantities?</p>
 | habedi/stack-exchange-dataset |
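The quantities $M(n)$ and $m(n)$ asked about above can be computed exactly for small $n$ by enumerating all $n^n$ choice sequences. A minimal Python sketch (illustrative, not part of the original question):

```python
import itertools
from collections import Counter
from fractions import Fraction
from math import factorial

def naive_shuffle_counts(n):
    """For each permutation rho, count how many of the n^n
    choice sequences of the naive shuffle produce rho."""
    counts = Counter()
    for choices in itertools.product(range(n), repeat=n):
        a = list(range(n))
        for i, j in enumerate(choices):
            a[i], a[j] = a[j], a[i]
        counts[tuple(a)] += 1
    return counts

n = 3
c = naive_shuffle_counts(n)
norm = Fraction(factorial(n), n ** n)
print(sorted(c.values()))      # path counts per permutation
print(norm * max(c.values()))  # M(3)
print(norm * min(c.values()))  # m(3)
```

For $n = 3$ the 27 paths split as 4, 4, 4, 5, 5, 5 over the six permutations, giving $M(3) = 10/9$ and $m(3) = 8/9$; the enumeration is only feasible for very small $n$, of course, which is why the question asks about asymptotics.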
6,521 | Reduce the following problem to SAT | <p>Here is the problem. Given $k, n, T_1, \ldots, T_m$, where each $T_i \subseteq \{1, \ldots, n\}$, is there a subset $S \subseteq \{1, \ldots, n\}$ of size at most $k$ such that $S \cap T_i \neq \emptyset$ for all $i$? I am trying to reduce this problem to SAT. My idea of a solution would be to have a variable $x_i$ for each of $1$ to $n$. For each $T_i$, create a clause $(x_{i_1} \vee \cdots \vee x_{i_r})$ if $T_i = \{i_1, \ldots, i_r\}$ (writing $r$ for $|T_i|$, to avoid clashing with the bound $k$). Then AND all these clauses together. But this clearly isn't a complete solution, as it does not represent the constraint that $S$ must have at most $k$ elements. I know that I must create more variables, but I'm simply not sure how. So I have two questions:</p>

<ol>
<li>Is my idea of solution on the right track?</li>
<li>How should the new variables be created so that they can be used to represent the cardinality $k$ constraint?</li>
</ol>
 | complexity theory reductions np hard | 1 | Reduce the following problem to SAT -- (complexity theory reductions np hard)
<p>Here is the problem. Given $k, n, T_1, \ldots, T_m$, where each $T_i \subseteq \{1, \ldots, n\}$, is there a subset $S \subseteq \{1, \ldots, n\}$ of size at most $k$ such that $S \cap T_i \neq \emptyset$ for all $i$? I am trying to reduce this problem to SAT. My idea of a solution would be to have a variable $x_i$ for each of $1$ to $n$. For each $T_i$, create a clause $(x_{i_1} \vee \cdots \vee x_{i_r})$ if $T_i = \{i_1, \ldots, i_r\}$ (writing $r$ for $|T_i|$, to avoid clashing with the bound $k$). Then AND all these clauses together. But this clearly isn't a complete solution, as it does not represent the constraint that $S$ must have at most $k$ elements. I know that I must create more variables, but I'm simply not sure how. So I have two questions:</p>

<ol>
<li>Is my idea of solution on the right track?</li>
<li>How should the new variables be created so that they can be used to represent the cardinality $k$ constraint?</li>
</ol>
 | habedi/stack-exchange-dataset |
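One concrete (if naive) way to add the cardinality constraint, without any new variables, is the binomial encoding: for every set of $k+1$ variables, add a clause saying they are not all true. It produces $\binom{n}{k+1}$ clauses, so it only scales for small $k$; sequential-counter or totalizer encodings (which do introduce new variables) do better. A Python sketch, checked by brute force (function names are illustrative):

```python
import itertools

def hitting_set_cnf(n, k, sets):
    """CNF for: exists S subset of {1..n}, |S| <= k, S meets every T_i.
    Positive literal i means 'i is in S'."""
    clauses = [sorted(T) for T in sets]                  # S ∩ T_i ≠ ∅
    # naive at-most-k: no k+1 variables may all be true
    for combo in itertools.combinations(range(1, n + 1), k + 1):
        clauses.append([-v for v in combo])
    return clauses

def brute_force_sat(n, clauses):
    """Tiny satisfiability check by exhaustive assignment."""
    for bits in itertools.product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return assign
    return None

sets = [{1, 2}, {2, 3}, {3, 4}]
print(brute_force_sat(4, hitting_set_cnf(4, 2, sets)) is not None)  # True: S = {2, 3}
print(brute_force_sat(4, hitting_set_cnf(4, 1, sets)) is not None)  # False: no single hitter
```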
6,525 | NP-Completeness - Proof by Restriction | <p>I'm reading Garey & Johnson's <em>"Computers and Intractability"</em> and I'm at the part <em>"Some Techniques for Proving NP-Completeness"</em>. Here's the text about Proof by Restriction:</p>

<blockquote>
 <p>Proof by restriction is the simplest, and perhaps most frequently
 applicable, of our three proof types. An NP-completeness proof by
 restriction for a given problem $L \in NP$ consists simply of showing that
 $L$ contains a known NP-complete problem $L'$ as a special case. The heart
 of such a proof lies in the specification of the additional
 restrictions to be placed on the instances of $L$ so that the resulting
 restricted problem will be identical to $L'$. We do not require that the
 restricted problem and the known NP-complete problem be exact
 duplicates of one another, but rather that there be an "obvious"
 one-to-one correspondence between their instances that preserves "yes"
 and "no" answers.</p>
</blockquote>

<p>And I'm trying to learn this technique by example, but need some help.</p>

<p>(If you have the book, my example is on page 65, 27th printing)</p>

<p>They prove that <em>Multiprocessor Scheduling</em> is NP-complete with the following proof:</p>

<p>(Paraphrasing):</p>

<blockquote>
 <p>Restrict to PARTITION by allowing only instances in which $m = 2$ and $D$
 $=$ half the total sum of the "lengths".</p>
</blockquote>

<p>Here $m$ is the number of processors and $D$ is the maximum allowed sum of "lengths" per processor. </p>

<p>This is obviously a special case of multiprocessor scheduling which is solvable by solving the PARTITION problem, and there's no confusion there.</p>

<p>But, I'm not sure why this proof holds. </p>

<p>Excerpt from above: <em>"The heart of such a proof lies in the specification of additional restrictions to be placed on the instances of $L$ so that the resulting restricted problem will be identical to $L'$ ".</em></p>

<p>The way I see it that means we have to find the special case, and then find restrictions that show us that this problem can always be reduced to the special case. What we're trying to do here is show that Problem $A$ (MS) is at least as hard as Problem $B$ (PARTITION), so why would a simple special case be enough here? Is it because there's an obvious way to map to this special case that I'm missing? Or perhaps because $m = 1$ is trivial and we know that the problem will only get harder with a higher $m$, and that $D$ is always arbitrary, therefore $A$ must be at least as hard as $B$ (I feel like I'm just guessing now :p)</p>

<p>I hope it is clear where I get lost. </p>

<p><strong>TLDR; Why is it enough to find a special case that is solvable by an NP-Complete problem? Don't we need some reduction to complete the proof?</strong></p>
 | complexity theory np complete proof techniques | 1 | NP-Completeness - Proof by Restriction -- (complexity theory np complete proof techniques)
<p>I'm reading Garey & Johnson's <em>"Computers and Intractability"</em> and I'm at the part <em>"Some Techniques for Proving NP-Completeness"</em>. Here's the text about Proof by Restriction:</p>

<blockquote>
 <p>Proof by restriction is the simplest, and perhaps most frequently
 applicable, of our three proof types. An NP-completeness proof by
 restriction for a given problem $L \in NP$ consists simply of showing that
 $L$ contains a known NP-complete problem $L'$ as a special case. The heart
 of such a proof lies in the specification of the additional
 restrictions to be placed on the instances of $L$ so that the resulting
 restricted problem will be identical to $L'$. We do not require that the
 restricted problem and the known NP-complete problem be exact
 duplicates of one another, but rather that there be an "obvious"
 one-to-one correspondence between their instances that preserves "yes"
 and "no" answers.</p>
</blockquote>

<p>And I'm trying to learn this technique by example, but need some help.</p>

<p>(If you have the book, my example is on page 65, 27th printing)</p>

<p>They prove that <em>Multiprocessor Scheduling</em> is NP-complete with the following proof:</p>

<p>(Paraphrasing):</p>

<blockquote>
 <p>Restrict to PARTITION by allowing only instances in which $m = 2$ and $D$
 $=$ half the total sum of the "lengths".</p>
</blockquote>

<p>Here $m$ is the number of processors and $D$ is the maximum allowed sum of "lengths" per processor. </p>

<p>This is obviously a special case of multiprocessor scheduling which is solvable by solving the PARTITION problem, and there's no confusion there.</p>

<p>But, I'm not sure why this proof holds. </p>

<p>Excerpt from above: <em>"The heart of such a proof lies in the specification of additional restrictions to be placed on the instances of $L$ so that the resulting restricted problem will be identical to $L'$ ".</em></p>

<p>The way I see it that means we have to find the special case, and then find restrictions that show us that this problem can always be reduced to the special case. What we're trying to do here is show that Problem $A$ (MS) is at least as hard as Problem $B$ (PARTITION), so why would a simple special case be enough here? Is it because there's an obvious way to map to this special case that I'm missing? Or perhaps because $m = 1$ is trivial and we know that the problem will only get harder with a higher $m$, and that $D$ is always arbitrary, therefore $A$ must be at least as hard as $B$ (I feel like I'm just guessing now :p)</p>

<p>I hope it is clear where I get lost. </p>

<p><strong>TLDR; Why is it enough to find a special case that is solvable by an NP-Complete problem? Don't we need some reduction to complete the proof?</strong></p>
 | habedi/stack-exchange-dataset |
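The correspondence behind the restriction can be checked mechanically on small instances: a PARTITION instance with lengths $l_1, \ldots, l_n$ is a yes-instance exactly when the scheduling instance with the same lengths, $m = 2$ and $D = \lfloor (\sum_i l_i)/2 \rfloor$ is. A brute-force sketch (illustrative names, exponential-time on purpose):

```python
from itertools import combinations, product

def has_partition(lengths):
    """Can the multiset be split into two parts of equal sum?"""
    total = sum(lengths)
    if total % 2:
        return False
    return any(sum(c) == total // 2
               for r in range(len(lengths) + 1)
               for c in combinations(lengths, r))

def schedulable(lengths, m, deadline):
    """Can the tasks be assigned to m processors, each loaded <= deadline?"""
    for assign in product(range(m), repeat=len(lengths)):
        loads = [0] * m
        for t, p in zip(lengths, assign):
            loads[p] += t
        if max(loads) <= deadline:
            return True
    return False

for lengths in ([1, 2, 3], [3, 1, 1, 2, 2, 1], [2, 2, 3], [5, 5, 4, 3, 3]):
    d = sum(lengths) // 2
    print(lengths, has_partition(lengths), schedulable(lengths, 2, d))
```

The two columns always agree, which is the "obvious one-to-one correspondence preserving yes and no answers" the book appeals to: the restricted scheduling instances just *are* PARTITION instances under a change of notation.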
6,528 | Mathematical model on which current computers are built | <p>It is said that "The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation." [Wikipedia]</p>

<p>So on which model are current machines built?</p>
 | turing machines computer architecture machine models | 1 | Mathematical model on which current computers are built -- (turing machines computer architecture machine models)
<p>It is said that "The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation." [Wikipedia]</p>

<p>So on which model are current machines built?</p>
 | habedi/stack-exchange-dataset |
6,532 | Equivalence of GFp and Gp in LTL | <p>In linear time logic, is $\mathbf{GF}p$ equivalent to $ \mathbf{G}p$ ?</p>

<p><em>$\mathbf{GF}p$ means that it is always the case that p is true eventually.</em></p>

<p>Let $\mathbf{G} p$ be defined as: $\forall j \ge0,\ p$ holds in the suffix $q_j, q_{j+1}, q_{j+2},\ldots$</p>

<p>and since: 
<em>formula $φ$ holds for state machine $M$ if $φ$ holds for all possible traces of $M$</em></p>

<p>Isn't $\mathbf{G}$ in $\mathbf{GF}p$ redundant then? </p>
 | logic linear temporal logic | 1 | Equivalence of GFp and Gp in LTL -- (logic linear temporal logic)
<p>In linear time logic, is $\mathbf{GF}p$ equivalent to $ \mathbf{G}p$ ?</p>

<p><em>$\mathbf{GF}p$ means that it is always the case that p is true eventually.</em></p>

<p>Let $\mathbf{G} p$ be defined as: $\forall j \ge0,\ p$ holds in the suffix $q_j, q_{j+1}, q_{j+2},\ldots$</p>

<p>and since: 
<em>formula $φ$ holds for state machine $M$ if $φ$ holds for all possible traces of $M$</em></p>

<p>Isn't $\mathbf{G}$ in $\mathbf{GF}p$ redundant then? </p>
 | habedi/stack-exchange-dataset |
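The two formulas can be separated on an ultimately periodic trace (a finite prefix followed by a loop repeated forever), where both are easy to evaluate: $\mathbf{G}p$ requires $p$ at every position, while $\mathbf{GF}p$ requires $p$ infinitely often, which on such traces amounts to $p$ occurring somewhere in the loop. A small sketch (illustrative):

```python
def holds_G(p, prefix, loop):
    """G p on the infinite word prefix . loop^omega: p at every position."""
    return all(p in state for state in prefix + loop)

def holds_GF(p, prefix, loop):
    """GF p on prefix . loop^omega: p infinitely often,
    i.e. p holds somewhere inside the repeated loop."""
    return any(p in state for state in loop)

# Trace where p holds exactly at the even positions: ({p} {})^omega
prefix, loop = [], [{'p'}, set()]
print(holds_GF('p', prefix, loop))  # True: p keeps recurring
print(holds_G('p', prefix, loop))   # False: p fails at position 1
```

So $\mathbf{G}$ in $\mathbf{GF}p$ is not redundant: the alternating trace satisfies $\mathbf{GF}p$ but not $\mathbf{G}p$.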
6,541 | Why does $A_\text{TM} \le_m \text{HALTING} \le_m \text{HALTING}^\varepsilon$? | <p>I have a book that proves the halting problem with this simple statement:</p>

<p>$$
A_\text{TM} \le_m \text{HALTING} \le_m \text{HALTING}^\varepsilon
$$</p>

<p>It states that $A_\text{TM}$, the language consisting of pairs $\langle M, \omega \rangle$ for which a Turing machine $M$ accepts $\omega$, reduces to the halting problem, which is therefore undecidable.</p>

<p>What does this mean? What does the notation $\le_m$ indicate?</p>
 | turing machines reductions undecidability halting problem | 1 | Why does $A_\text{TM} \le_m \text{HALTING} \le_m \text{HALTING}^\varepsilon$? -- (turing machines reductions undecidability halting problem)
<p>I have a book that proves the halting problem with this simple statement:</p>

<p>$$
A_\text{TM} \le_m \text{HALTING} \le_m \text{HALTING}^\varepsilon
$$</p>

<p>It states that $A_\text{TM}$, the language consisting of pairs $\langle M, \omega \rangle$ for which a Turing machine $M$ accepts $\omega$, reduces to the halting problem, which is therefore undecidable.</p>

<p>What does this mean? What does the notation $\le_m$ indicate?</p>
 | habedi/stack-exchange-dataset |
6,549 | Call by value-result vs. call by reference? | <p>From my Googling, it appears that call by value-result is similar to call by reference in that it changes values in the caller, but it's different in that the changes don't take place until the callee exits, and that if the same variable is passed as more than one argument, it'll be treated as separate values in the callee instead of the same value as in call by reference.</p>

<p>Neither fact helps me explain why call by value-result produces different output than call by reference in the following code:</p>

<pre><code>program Param (input, output); 
var
 a, b: integer;
 procedure p (x, y : integer); 
 begin
 x := x + 2 ; 
 a := x * y ; (1)
 x := x + 1 (2)
 end; 
begin
 a := 1 ;
 b := 2 ;
 p (a, b) ; 
 writeln (a)
end.
</code></pre>

<p>Edit: here's my understanding of things:
The insight here is that in CBR, in line (1), both <code>a</code> and <code>x</code> point to the same thing, so assigning to <code>a</code> updates both <code>a</code> and <code>x</code> to <code>x * y</code> which is 6. But in CBVR, <code>a</code> and <code>x</code> point to different things, so line 1 only updates <code>a</code> to 6. <code>x</code> remains 3. Then CBR updates <code>a</code> right away so <code>a</code> ends up being 7 outside <code>p</code>. But CBVR updates <code>a</code> to whatever <code>x</code> is at the end of <code>p</code>, which is 4, so even though <code>a</code> was 6 in line (1), after <code>p</code> exits it's changed to 4.</p>
 | terminology programming languages semantics evaluation strategies | 1 | Call by value-result vs. call by reference? -- (terminology programming languages semantics evaluation strategies)
<p>From my Googling, it appears that call by value-result is similar to call by reference in that it changes values in the caller, but it's different in that the changes don't take place until the callee exits, and that if the same variable is passed as more than one argument, it'll be treated as separate values in the callee instead of the same value as in call by reference.</p>

<p>Neither fact helps me explain why call by value-result produces different output than call by reference in the following code:</p>

<pre><code>program Param (input, output); 
var
 a, b: integer;
 procedure p (x, y : integer); 
 begin
 x := x + 2 ; 
 a := x * y ; (1)
 x := x + 1 (2)
 end; 
begin
 a := 1 ;
 b := 2 ;
 p (a, b) ; 
 writeln (a)
end.
</code></pre>

<p>Edit: here's my understanding of things:
The insight here is that in CBR, in line (1), both <code>a</code> and <code>x</code> point to the same thing, so assigning to <code>a</code> updates both <code>a</code> and <code>x</code> to <code>x * y</code> which is 6. But in CBVR, <code>a</code> and <code>x</code> point to different things, so line 1 only updates <code>a</code> to 6. <code>x</code> remains 3. Then CBR updates <code>a</code> right away so <code>a</code> ends up being 7 outside <code>p</code>. But CBVR updates <code>a</code> to whatever <code>x</code> is at the end of <code>p</code>, which is 4, so even though <code>a</code> was 6 in line (1), after <code>p</code> exits it's changed to 4.</p>
 | habedi/stack-exchange-dataset |
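The two mechanisms in the Pascal program can be simulated directly; the sketch below (Python, illustrative) models the globals as a dictionary, with call by reference aliasing the cell for <code>a</code>, and call by value-result copying in at entry and copying out at exit:

```python
def p_call_by_reference(env):
    # x aliases env['a'], y aliases env['b'], so writes to x ARE writes to a
    env['a'] += 2                   # x := x + 2   -> a = 3
    env['a'] = env['a'] * env['b']  # a := x * y   -> a = x = 6
    env['a'] += 1                   # x := x + 1   -> a = 7

def p_call_by_value_result(env):
    x, y = env['a'], env['b']       # copy in: x = 1, y = 2
    x += 2                          # x := x + 2   -> x = 3
    env['a'] = x * y                # a := x * y   -> a = 6, x still 3
    x += 1                          # x := x + 1   -> x = 4
    env['a'], env['b'] = x, y       # copy out at exit: a overwritten to 4

env = {'a': 1, 'b': 2}
p_call_by_reference(env)
print(env['a'])                     # 7

env = {'a': 1, 'b': 2}
p_call_by_value_result(env)
print(env['a'])                     # 4
```

This matches the understanding in the edit: under value-result the direct assignment to <code>a</code> in line (1) is later overwritten by the copy-out of <code>x</code> when <code>p</code> exits.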
6,552 | When can a greedy algorithm solve the coin change problem? | <p>Given a set of coins with different denominations $c_1, \ldots, c_n$ and a value $v$, you want to find the least number of coins needed to represent the value $v$.</p>

<p>E.g. for the coinset 1,5,10,20 this gives 2 coins for the sum 6 and 6 coins for the sum 19. </p>

<p>My main question is: when can a greedy strategy be used to solve this problem?</p>

<hr>

<p>Bonus points: Is this statement plain incorrect? (From: <a href="https://stackoverflow.com/questions/6025076/how-to-tell-if-greedy-algorithm-suffices-for-the-minimum-coin-change-problem/6031625#6031625">How to tell if greedy algorithm suffices for the minimum coin change problem?</a>)</p>

<blockquote>
 <p>However, this paper has a proof that if the greedy algorithm works for the first largest denom + second largest denom values, then it works for them all, and it suggests just using the greedy algorithm vs the optimal DP algorithm to check it.
 <a href="http://www.cs.cornell.edu/~kozen/papers/change.pdf" rel="noreferrer">http://www.cs.cornell.edu/~kozen/papers/change.pdf</a></p>
</blockquote>

<p>Ps. note that the answers in that thread are incredibly crummy- that is why I asked the question anew.</p>
 | algorithms combinatorics greedy algorithms | 1 | When can a greedy algorithm solve the coin change problem? -- (algorithms combinatorics greedy algorithms)
<p>Given a set of coins with different denominations $c_1, \ldots, c_n$ and a value $v$, you want to find the least number of coins needed to represent the value $v$.</p>

<p>E.g. for the coinset 1,5,10,20 this gives 2 coins for the sum 6 and 6 coins for the sum 19. </p>

<p>My main question is: when can a greedy strategy be used to solve this problem?</p>

<hr>

<p>Bonus points: Is this statement plain incorrect? (From: <a href="https://stackoverflow.com/questions/6025076/how-to-tell-if-greedy-algorithm-suffices-for-the-minimum-coin-change-problem/6031625#6031625">How to tell if greedy algorithm suffices for the minimum coin change problem?</a>)</p>

<blockquote>
 <p>However, this paper has a proof that if the greedy algorithm works for the first largest denom + second largest denom values, then it works for them all, and it suggests just using the greedy algorithm vs the optimal DP algorithm to check it.
 <a href="http://www.cs.cornell.edu/~kozen/papers/change.pdf" rel="noreferrer">http://www.cs.cornell.edu/~kozen/papers/change.pdf</a></p>
</blockquote>

<p>Ps. note that the answers in that thread are incredibly crummy- that is why I asked the question anew.</p>
 | habedi/stack-exchange-dataset |
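Coin systems for which greedy is always optimal are called canonical (1, 5, 10, 20 is one), and one practical way to test a given system is simply to compare greedy against the dynamic-programming optimum over a range of values. A sketch (illustrative):

```python
def greedy_change(coins, v):
    """Repeatedly take the largest coin that still fits."""
    n = 0
    for c in sorted(coins, reverse=True):
        n += v // c
        v %= c
    return n if v == 0 else None

def optimal_change(coins, v):
    """Classic DP over amounts 0..v."""
    INF = float('inf')
    dp = [0] + [INF] * v
    for amount in range(1, v + 1):
        for c in coins:
            if c <= amount:
                dp[amount] = min(dp[amount], dp[amount - c] + 1)
    return dp[v] if dp[v] < INF else None

coins = [1, 5, 10, 20]
print(greedy_change(coins, 6), optimal_change(coins, 6))    # 2 2
print(greedy_change(coins, 19), optimal_change(coins, 19))  # 6 6
# A non-canonical system: greedy takes 4+1+1 where 3+3 is optimal.
print(greedy_change([1, 3, 4], 6), optimal_change([1, 3, 4], 6))  # 3 2
```

The Kozen-Zaks paper linked in the question shows that if greedy ever fails, it already fails for some value smaller than the sum of the two largest denominations, so running this comparison for all $v$ up to that bound is a complete test of canonicity.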
6,568 | Inherent ambiguity of the language $L_2 = \{a^nb^mc^m \;|\; m,n \geq 1\}\cup \{a^nb^nc^m \;|\; m,n \geq 1\}$ | <p>I went through a question asking me to choose the inherently ambiguous language among a set of options.</p>

<p>$$L_1 = \{a^nb^mc^md^n \;|\; m,n \geq 1\}\cup \{a^nb^nc^md^m \;|\; m,n \geq 1\}$$
$$and$$
$$L_2 = \{a^nb^mc^m \;|\; m,n \geq 1\}\cup \{a^nb^nc^m \;|\; m,n \geq 1\}$$</p>

<p>The solution said that $L_1$ is ambiguous while $L_2$ isn't. It generated the following grammar for $L_1$</p>

<p>$S \rightarrow S_1\;|\;S_2$</p>

<p>$S_1 \rightarrow AB$</p>

<p>$A \rightarrow aAb\;|\;ab$</p>

<p>$B \rightarrow cBd\;|\;cd$</p>

<p>$S_2 \rightarrow aS_2d\;|\;aCd$</p>

<p>$C \rightarrow bCc\;|\;bc$</p>

<p>Now for the string <code>abcd</code>, it will generate two parse trees; so it is ambiguous.</p>

<p>But a similar grammar can be created for $L_2$ too</p>

<p>$S \rightarrow S_1|S_2$</p>

<p>$S_1 \rightarrow Ac$</p>

<p>$A \rightarrow aAb\;|\;\epsilon$</p>

<p>$S_2 \rightarrow aB$</p>

<p>$B \rightarrow bBc\;|\;\epsilon$</p>

<p>And it will also generate two parse trees for <code>abc</code>. Why isn't it ambiguous then?</p>

<p>If you need,
$L_2$ can be written as $\{a^nb^pc^m\;|\; n=p \;\; or \;\; m=p\}$</p>
 | formal languages formal grammars context free ambiguity | 1 | Inherent ambiguity of the language $L_2 = \{a^nb^mc^m \;|\; m,n \geq 1\}\cup \{a^nb^nc^m \;|\; m,n \geq 1\}$ -- (formal languages formal grammars context free ambiguity)
<p>I went through a question asking me to choose the inherently ambiguous language among a set of options.</p>

<p>$$L_1 = \{a^nb^mc^md^n \;|\; m,n \geq 1\}\cup \{a^nb^nc^md^m \;|\; m,n \geq 1\}$$
$$and$$
$$L_2 = \{a^nb^mc^m \;|\; m,n \geq 1\}\cup \{a^nb^nc^m \;|\; m,n \geq 1\}$$</p>

<p>The solution said that $L_1$ is ambiguous while $L_2$ isn't. It generated the following grammar for $L_1$</p>

<p>$S \rightarrow S_1\;|\;S_2$</p>

<p>$S_1 \rightarrow AB$</p>

<p>$A \rightarrow aAb\;|\;ab$</p>

<p>$B \rightarrow cBd\;|\;cd$</p>

<p>$S_2 \rightarrow aS_2d\;|\;aCd$</p>

<p>$C \rightarrow bCc\;|\;bc$</p>

<p>Now for the string <code>abcd</code>, it will generate two parse trees; so it is ambiguous.</p>

<p>But a similar grammar can be created for $L_2$ too</p>

<p>$S \rightarrow S_1|S_2$</p>

<p>$S_1 \rightarrow Ac$</p>

<p>$A \rightarrow aAb\;|\;\epsilon$</p>

<p>$S_2 \rightarrow aB$</p>

<p>$B \rightarrow bBc\;|\;\epsilon$</p>

<p>And it will also generate two parse trees for <code>abc</code>. Why isn't it ambiguous then?</p>

<p>If you need,
$L_2$ can be written as $\{a^nb^pc^m\;|\; n=p \;\; or \;\; m=p\}$</p>
 | habedi/stack-exchange-dataset |
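The number of parse trees a grammar assigns to a string can be counted mechanically, which confirms the asker's observation: the grammar given for $L_2$ does assign two trees to <code>abc</code>, so that particular *grammar* is ambiguous. A brute-force parse counter for it (illustrative; fine for tiny strings only):

```python
GRAMMAR = {
    'S':  [['S1'], ['S2']],
    'S1': [['A', 'c']],
    'A':  [['a', 'A', 'b'], []],
    'S2': [['a', 'B']],
    'B':  [['b', 'B', 'c'], []],
}

def count_parses(sym, s):
    """Number of parse trees of nonterminal sym deriving the string s."""
    if sym not in GRAMMAR:              # terminal symbol
        return 1 if s == sym else 0
    return sum(count_splits(rhs, s) for rhs in GRAMMAR[sym])

def count_splits(rhs, s):
    """Number of ways the symbol sequence rhs derives s."""
    if not rhs:
        return 1 if s == '' else 0
    total = 0
    for i in range(len(s) + 1):
        left = count_parses(rhs[0], s[:i])
        if left:                        # skip dead splits (also ensures termination)
            total += left * count_splits(rhs[1:], s[i:])
    return total

print(count_parses('S', 'abc'))    # 2: one tree via S1, one via S2
print(count_parses('S', 'aabbc'))  # 1
print(count_parses('S', 'abbcc'))  # 1
```

The ambiguity of this one grammar says nothing about *inherent* ambiguity, which is a property of the language and quantifies over all grammars for it; that distinction is exactly what the question turns on.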
6,588 | Reducing the integer factorization problem to an NP-Complete problem | <p>I'm struggling to understand the relationship between NP-Intermediate and NP-Complete. I know that if P != NP based on Ladner's Theorem there exists a class of languages in NP but not in P or in NP-Complete. Every problem in NP can be reduced to an NP-Complete problem, however I haven't seen any examples for reducing a suspected NPI problem (such as integer factorization) into an NP-Complete problem. Does anyone know of any example of this or another NPI->NPC reduction?</p>
 | np complete reductions factoring | 1 | Reducing the integer factorization problem to an NP-Complete problem -- (np complete reductions factoring)
<p>I'm struggling to understand the relationship between NP-Intermediate and NP-Complete. I know that if P != NP based on Ladner's Theorem there exists a class of languages in NP but not in P or in NP-Complete. Every problem in NP can be reduced to an NP-Complete problem, however I haven't seen any examples for reducing a suspected NPI problem (such as integer factorization) into an NP-Complete problem. Does anyone know of any example of this or another NPI->NPC reduction?</p>
 | habedi/stack-exchange-dataset |
6,589 | Does Max-SNP hard imply NP-hard | <p>I have difficulties understanding the definition of the class <a href="http://en.wikipedia.org/wiki/SNP_%28complexity%29">Max-SNP</a> (optimization variant of <strong>strict NP</strong>), thus I have to following basic question:</p>

<blockquote>
  <p>If a problem is known to be Max-SNP hard, does this imply NP-hardness of the problem?</p>
</blockquote>
 | complexity theory np hard | 1 | Does Max-SNP hard imply NP-hard -- (complexity theory np hard)
<p>I have difficulties understanding the definition of the class <a href="http://en.wikipedia.org/wiki/SNP_%28complexity%29">Max-SNP</a> (optimization variant of <strong>strict NP</strong>), thus I have to following basic question:</p>

<blockquote>
  <p>If a problem is known to be Max-SNP hard, does this imply NP-hardness of the problem?</p>
</blockquote>
 | habedi/stack-exchange-dataset |
6,590 | Language acceptance by DFA | <p>I have some questions regarding acceptance of a language by DFA</p>

<ol>
<li>Whether more than one DFA can accept a language </li>
<li>Whether a DFA can accept more than one language</li>
</ol>
 | automata regular languages finite automata | 1 | Language acceptance by DFA -- (automata regular languages finite automata)
<p>I have some questions regarding acceptance of a language by DFA</p>

<ol>
<li>Whether more than one DFA can accept a language </li>
<li>Whether a DFA can accept more than one language</li>
</ol>
 | habedi/stack-exchange-dataset |
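The first question can be illustrated concretely: distinct DFAs can accept the same language. The sketch below (illustrative) runs a DFA given as a transition table and compares a 2-state and a 3-state machine that both accept the words over {a} with an even number of a's; for the second question, note that each DFA determines exactly one set of accepted strings.

```python
def accepts(delta, start, finals, word):
    """Run a DFA given as a transition dict {(state, symbol): state}."""
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in finals

# Two-state DFA for "even number of a's"
d1 = {('even', 'a'): 'odd', ('odd', 'a'): 'even'}
# A different, three-state DFA for the same language
d2 = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 1}

for k in range(6):
    w = 'a' * k
    print(w, accepts(d1, 'even', {'even'}, w), accepts(d2, 0, {0, 2}, w))
```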
6,604 | How to modify semantic actions when removing left-recursion from a grammar | <p>Is there any algorithm that tells us how to modify semantic actions associated with a left-recursive grammar? For example, we have the following grammar, and its associated semantic actions:</p>

<p>$ S \rightarrow id = expr $ { S.s = expr.size }</p>

<p>S $\rightarrow$ if expr then $S_1$ else $S_2$ { $S_1.t = S.t + 2; $
$S_2.t = S.t + 2;$ $S.s = expr.size + S_1.size + S_2.size + 2;$ }</p>

<p>S $\rightarrow$ while expr do $S_1$ { $S_1.t = S.t + 4;$ $S.s = expr.size + S_1.s + 1;$ }</p>

<p>S $\rightarrow$ $S_1$ ; $S_2$ {$S_1.t = S_2.t = S.t;$ $S.s = S_1.s + S_2.s; $ }</p>

<p>Clearly the non-recursive version of the grammar is:</p>

<p>S $\rightarrow$ id = expr T </p>

<p>S $\rightarrow$ if expr then $S_1$ else $S_2$ T</p>

<p>S $\rightarrow$ while expr do $S_1$ T</p>

<p>T $\rightarrow$ ; $S_2$ T</p>

<p>T $\rightarrow$ $\epsilon$</p>

<p>But we also need to change the semantic actions accordingly. Any ideas how this can be done?</p>
 | formal grammars compilers semantics left recursion | 1 | How to modify semantic actions when removing left-recursion from a grammer -- (formal grammars compilers semantics left recursion)
<p>Is there any algorithm that tells us how to modify semantic actions associated with a left-recursive grammar? For example, we have the following grammar, and its associated semantic actions:</p>

<p>$ S \rightarrow id = expr $ { S.s = expr.size }</p>

<p>S $\rightarrow$ if expr then $S_1$ else $S_2$ { $S_1.t = S.t + 2; $
$S_2.t = S.t + 2;$ $S.s = expr.size + S_1.size + S_2.size + 2;$ }</p>

<p>S $\rightarrow$ while expr do $S_1$ { $S_1.t = S.t + 4;$ $S.s = expr.size + S_1.s + 1;$ }</p>

<p>S $\rightarrow$ $S_1$ ; $S_2$ {$S_1.t = S_2.t = S.t;$ $S.s = S_1.s + S_2.s; $ }</p>

<p>Clearly the non-recursive version of the grammar is:</p>

<p>S $\rightarrow$ id = expr T </p>

<p>S $\rightarrow$ if expr then $S_1$ else $S_2$ T</p>

<p>S $\rightarrow$ while expr do $S_1$ T</p>

<p>T $\rightarrow$ ; $S_2$ T</p>

<p>T $\rightarrow$ $\epsilon$</p>

<p>But we also need to change the semantic actions accordingly. Any ideas how this can be done?</p>
 | habedi/stack-exchange-dataset |
6,611 | Functions between sets? | <p>I recently took a practice exam for the Computer Science GRE and this was one of the questions: </p>

<blockquote>
 <p>Assume that set $A$ has 5 elements and set $B$ has 4 elements, how many functions exist from set $A$ to set $B$?</p>
</blockquote>

<p>I had no idea what this meant; I don't recall ever studying functions between sets. Could someone shed some light on this question for me?</p>
 | combinatorics sets | 1 | Functions between sets? -- (combinatorics sets)
<p>I recently took a practice exam for the Computer Science GRE and this was one of the questions: </p>

<blockquote>
 <p>Assume that set $A$ has 5 elements and set $B$ has 4 elements, how many functions exist from set $A$ to set $B$?</p>
</blockquote>

<p>I had no idea what this meant; I don't recall ever studying functions between sets. Could someone shed some light on this question for me?</p>
 | habedi/stack-exchange-dataset |
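A function from $A$ to $B$ independently assigns each of the 5 elements of $A$ one of the 4 elements of $B$, so there are $4^5 = 1024$ such functions. This can be confirmed by enumeration (element names below are placeholders):

```python
import itertools

A = ['a1', 'a2', 'a3', 'a4', 'a5']
B = ['b1', 'b2', 'b3', 'b4']

# Each function A -> B is a tuple of images, one entry per element of A.
functions = list(itertools.product(B, repeat=len(A)))
print(len(functions), len(B) ** len(A))  # 1024 1024
```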
6,616 | Need a practical solution for creating pattern database(5-5-5) for 15-Puzzle | <p>I have asked this exact question on <a href="https://stackoverflow.com/questions/13229722/need-a-practical-solution-for-creating-pattern-database5-5-5-for-15-puzzle">StackOverflow</a>. I did not get the answer that I was looking for. Please read this question fully before answering. Thank You.<br><br>
For the static pattern database (5-5-5), see <a href="http://reference.kfupm.edu.sa/content/d/i/disjoint_pattern_database_heuristics_56916.pdf" rel="nofollow noreferrer">this</a> (pages 283 and 290), or there is an explanation below. For <a href="http://en.wikipedia.org/wiki/Fifteen_puzzle" rel="nofollow noreferrer">What is the 15-puzzle?</a><br>
I am creating a static pattern database (5-5-5). This code is to fill entries into the first table. I am doing it via the recursive function <code>insertInDB()</code>. The first input to the recursive function is this (actually the input puzzle contains it in a 1-D array; for better understanding I have represented it as 2-D below)<br></p>

<hr>

<p>1 2 3 4<br>
0 6 0 0<br>
0 0 0 0<br>
0 0 0 0<br></p>

<hr>

<p>This is my code : <br></p>

<pre><code>class DBClass
{
 public Connection connection;
 public ResultSet rs;
 public PreparedStatement ps1;
 public PreparedStatement ps2;
 public int k;
 String read_statement,insert_statement;

 public DBClass()
 {
 try {
 Class.forName("com.mysql.jdbc.Driver");
 } catch (ClassNotFoundException e) {
 // TODO Auto-generated catch block
 e.printStackTrace();
 }
 try {
 connection = DriverManager
 .getConnection("jdbc:mysql://localhost/feedback?"
 + "user=ashwin&password=ashwin&autoReconnect=true&useUnicode=true&characterEncoding=utf8&validationQuery=Select 1");
 insert_statement="insert into staticpdb1(hash,permutation,cost) values(?,?,?)";
 read_statement="select SQL_NO_CACHE * from staticpdb1 where hash=? and permutation= ? LIMIT 1";
 ps1=connection.prepareStatement(read_statement, ResultSet.TYPE_SCROLL_SENSITIVE, 
 ResultSet.CONCUR_UPDATABLE);
 ps2=connection.prepareStatement(insert_statement);
 k=0;
 } catch (SQLException e) {
 // TODO Auto-generated catch block
 e.printStackTrace();
 }
 }
 public int updateIfNecessary(FifteenPuzzle sub) 
 {
 String str=sub.toDBString();
 try
 {

 ps1.setInt(1, sub.hashcode());
 ps1.setString(2,str);
 rs=ps1.executeQuery();
 if(rs.next())
 {
 //if a row exists, check if the cost is greater than sub's
 int cost=rs.getInt(3);
 if(sub.g_n<cost) //if the cost of sub is less than db row's cost
 {
 //replace the cost
 rs.updateInt(3, sub.g_n);
 rs.updateRow();
 return 1; //only examine - do not insert
 }
 else
 return 0; //dont examine - return

 }
 else
 return 2; //insert and examine
 }
 catch(SQLException e)
 {

 System.out.println("here1"+e);
 System.err.println("reported recursion level was "+e.getStackTrace().length);
 return 0;
 }
 finally{

 try{
 rs.close();}
 catch(final Exception e1)
 {
 System.out.println("here2"+e1);
 }

 }


 }
 public void insert(FifteenPuzzle sub)
 {

 try{
 String str=sub.toDBString();


 ps2.setInt(1,sub.hashcode());
 ps2.setString(2, str);
 ps2.setInt(3,sub.g_n);
 ps2.executeUpdate();
 ps2.clearParameters();
 }catch(SQLException e)
 {
 System.out.println("here3"+e);
 }
 }

 public void InsertInDB(FifteenPuzzle sub) throws SQLException
 {

 System.out.println(k++);

 int i;

 int p=updateIfNecessary(sub);
 if(p==0)
 {
 System.out.println("returning");
 return;
 }
 if(p==2)
 {
 insert(sub);
 System.out.println("inserted");
 }


 //FifteenPuzzle temp=new FifteenPuzzle(sub.puzzle.clone(),2,sub.g_n);
 for(i=0;i<sub.puzzle.length;i++)
 {
 if(sub.puzzle[i]!=0)
 {

 //check the positions it can be moved to
 if(i%4!=0 && sub.puzzle[i-1]==0) //left
 {
 //create another clone and increment the moves
 FifteenPuzzle temp_inner=new FifteenPuzzle(sub.puzzle.clone(),2,sub.g_n+1);
 //exchange positions
 int t=temp_inner.puzzle[i];
 temp_inner.puzzle[i]=temp_inner.puzzle[i-1];
 temp_inner.puzzle[i-1]=t;
 InsertInDB(temp_inner);
 }
 if(i%4!=3 && sub.puzzle[i+1]==0) //right
 {
 //create another clone and increment the moves
 FifteenPuzzle temp_inner=new FifteenPuzzle(sub.puzzle.clone(),2,sub.g_n+1);
 //exchange positions
 int t=temp_inner.puzzle[i];
 temp_inner.puzzle[i]=temp_inner.puzzle[i+1];
 temp_inner.puzzle[i+1]=t;
 InsertInDB(temp_inner);
 }
 if(i/4!=0 && sub.puzzle[i-4]==0) //up
 {
 //create another clone and increment the moves
 FifteenPuzzle temp_inner=new FifteenPuzzle(sub.puzzle.clone(),2,sub.g_n+1);
 //exchange positions
 int t=temp_inner.puzzle[i];
 temp_inner.puzzle[i]=temp_inner.puzzle[i-4];
 temp_inner.puzzle[i-4]=t;
 InsertInDB(temp_inner);
 }
 if(i/4!=3 && sub.puzzle[i+4]==0) //down
 {
 //create another clone and increment the moves
 FifteenPuzzle temp_inner=new FifteenPuzzle(sub.puzzle.clone(),2,sub.g_n+1);
 //exchange positions
 int t=temp_inner.puzzle[i];
 temp_inner.puzzle[i]=temp_inner.puzzle[i+4];
 temp_inner.puzzle[i+4]=t;
 InsertInDB(temp_inner);

 }
 } 
 }
 } // end of InsertInDB
} // end of class DBClass
</code></pre>

<p><br><br>
The function <strong>insertInDB(FifteenPuzzle fp)</strong> in the class is the recursive function. It is called first from the main function, with the fifteen-puzzle array argument (<code>puzzle</code> is an integer array field of the class <code>FifteenPuzzle</code>) being <code>1,2,3,4,0,6,0,0,0,0,0,0,0,0,0,0</code> (same as the matrix shown above). Before explaining the other functions, I will briefly explain what a static pattern database is (because of the comments below).<br></p>

<h2>What is a (5-5-5) static pattern database for 15-Puzzle?</h2>

<p>Pattern databases are heuristics used to solve the fifteen puzzle (they can be used for other puzzles too, but here I will talk only about the 15-Puzzle). A heuristic is a number used to determine which state to expand next; it is like a cost for each state. Here a state is a <em>permutation</em> of the 15-Puzzle. For simple puzzles like the 8-Puzzle, the heuristic can be <strong>Manhattan distance</strong>. It gives, for each misplaced tile, the minimum number of moves to reach <strong>its</strong> goal position. The Manhattan distances of all the tiles are then added up to give the cost for that state. Manhattan distance gives a lower bound on the number of moves required to reach the goal state, i.e. you cannot reach the goal state with fewer moves than the Manhattan distance. <strong>BUT</strong> Manhattan distance, though admissible, is not a very good heuristic, because it does not consider the tiles nearby. If a tile has to be moved to its goal position, the nearby tiles also have to be moved, and the number of moves increases. So, clearly, for these puzzles the actual cost is usually much greater than
the Manhattan distance.<br>
To <strong>overcome</strong> this (the weakness of Manhattan distance) and take the other tiles into account, pattern databases were introduced.
A static pattern database holds the heuristics for sub-problems, i.e. for a group of tiles to reach their goal positions. Since you are calculating the number of moves needed to bring this group of tiles to their goal positions, the other tiles in the group are taken into account whenever a tile is moved. So this is a better heuristic, and it is almost always greater than the Manhattan distance.<br>
A 5-5-5 static pattern database is just a form of static pattern database where the number of groups is 3, two of them containing 5 tiles each while the third contains 6 (the 6th being the blank tile).</p>
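Since the explanation above leans on the Manhattan-distance heuristic, here is a small sketch of it (my code, not part of the asker's program; it assumes the plain full-board version, with the state stored like the asker's <code>puzzle</code> field: a 1-D array of 16 ints where 0 marks an empty or ignored cell and tile <code>t</code> belongs at index <code>t-1</code>):

```java
// A sketch (mine, not from the question) of the Manhattan-distance heuristic
// for the 15-puzzle. The state is a 1-D array of 16 ints: 0 marks an
// empty/ignored cell, and tile t belongs at index t-1 in the goal state.
class ManhattanDistance {
    static int manhattan(int[] puzzle) {
        int sum = 0;
        for (int i = 0; i < puzzle.length; i++) {
            int tile = puzzle[i];
            if (tile == 0) continue;            // blanks contribute nothing
            int goal = tile - 1;                // goal index of this tile
            sum += Math.abs(i / 4 - goal / 4)   // row distance
                 + Math.abs(i % 4 - goal % 4);  // column distance
        }
        return sum;
    }
}
```

A pattern-database entry for a group is never smaller than the sum of the Manhattan distances of that group's tiles, which is why the database heuristic is at least as strong.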

<h2>One of the groups is this matrix :<br></h2>

<p>1 2 3 4<br>
0 6 0 0<br>
0 0 0 0<br>
0 0 0 0<br></p>

<hr>

<p>I am calculating the heuristics/number_of_moves for all permutations of this group to reach the above configuration and <strong>inserting them into my database</strong>. <br> The total number of combinations (also the number of rows in the db) possible is 
<br></p>

<pre><code>16!/(16-5)! = 524160
</code></pre>

<p><br> Now, the other functions: <code>updateIfNecessary(FifteenPuzzle)</code> checks whether the array of the passed FifteenPuzzle object is already present in the database. If it is, it checks whether the current object's cost is less than the cost in the DB. If yes, it replaces it with the current cost; otherwise it does nothing. The function <code>insert(FifteenPuzzle)</code> inserts a new permutation with its cost.<br><br>
<strong>NOTE :</strong> <code>FifteenPuzzle.g_n</code> is the cost for the puzzle. For the initial puzzle that represents the matrix above, the cost is <code>0</code>, and for each move the cost is incremented by <code>1</code>.<br><br></p>

<p>I have set the stack size to <code>-Xss128m</code> (1024, 512 and 256 were giving a fatal error) in the run configuration. <br>
Currently the recursion number or the depth is <strong><code>7,500,000</code> and counting</strong>(value of <code>System.out.println(k++);</code>).
<br> The total number of combinations possible is 
<br></p>

<pre><code>16!/(16-5)! = 524160
</code></pre>

<p><br>
But the depth has already reached 7,500,000. This is because of the generation of duplicate states. Currently the number of entries in the database is <strong>513423</strong>. You might think that there are only about 10,000 entries left to fill. But the rate at which entries are made has decreased drastically, to about <strong>1 entry every 30 min</strong>. At this rate it will never finish. <br><br>
I need a solution that is practical - <strong>with or without recursion</strong>. Is it possible?</p>
 | algorithms artificial intelligence recursion | 1 | Need a practical solution for creating pattern database(5-5-5) for 15-Puzzle -- (algorithms artificial intelligence recursion)
 | habedi/stack-exchange-dataset |
6,618 | How to implement a prolog interpreter in a purely functional language? | <p>Is there a clear reference, with pseudo-code, on how to go about implementing a Prolog interpreter in a purely functional language? What I have found so far either deals only with imperative languages, is merely a demonstration of Prolog implemented in itself, or offers no concrete algorithm to use for interpretation. I would be very appreciative of an answer.</p>
 | functional programming prolog logic programming | 1 | How to implement a prolog interpreter in a purely functional language? -- (functional programming prolog logic programming)
 | habedi/stack-exchange-dataset |
6,626 | How do I show that whether a PDA accepts some string $\{ w!w \mid w \in \{ 0, 1 \}^*\}$ is undecidable? | <p>How do I show that the problem of deciding whether a PDA accepts some string of the form $\{ w!w \mid w \in \{ 0, 1 \}^*\}$ is undecidable?</p>

<p>I have tried to reduce this problem to another undecidable one such as whether two context-free grammars accept the same language. However, I'm not sure how to use it as a subroutine.</p>
 | formal languages automata context free undecidability pushdown automata | 1 | How do I show that whether a PDA accepts some string $\{ w!w \mid w \in \{ 0, 1 \}^*\}$ is undecidable? -- (formal languages automata context free undecidability pushdown automata)
 | habedi/stack-exchange-dataset |
6,627 | Implementing addition for a binary counter | <p>A binary counter is represented by an infinite array of 0 and 1.</p>

<p>I need to implement the action $\text{add}(k)$ which adds $k$ to the value represented in the array.</p>

<p>The obvious way is to add 1, k times. Is there a more efficient way?</p>
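For concreteness, the "obvious way" described above can be sketched like this (a hypothetical implementation, not a definitive one, with a growable list standing in for the infinite array, least-significant bit first):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a binary counter with add(k) realized as k repeated increments,
// exactly the "obvious way" mentioned in the question.
class BinaryCounter {
    private final List<Integer> bits = new ArrayList<>(); // bits.get(0) = LSB

    void increment() {
        int i = 0;
        while (i < bits.size() && bits.get(i) == 1) {     // clear trailing 1s
            bits.set(i, 0);
            i++;
        }
        if (i < bits.size()) bits.set(i, 1);              // first 0 becomes 1
        else bits.add(1);                                 // grow the "infinite" array
    }

    void add(int k) {
        for (int j = 0; j < k; j++) increment();          // k increments
    }

    long value() {                                        // for inspection only
        long v = 0;
        for (int i = bits.size() - 1; i >= 0; i--) v = 2 * v + bits.get(i);
        return v;
    }
}
```

Each increment touches only the trailing block of 1s, so increments are cheap on average, but <code>add(k)</code> done this way still performs $k$ separate increments.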
 | algorithms data structures efficiency | 1 | Implementing addition for a binary counter -- (algorithms data structures efficiency)
 | habedi/stack-exchange-dataset |
6,634 | Greedy algorithms tutorial | <p>Could anyone point me to a simple tutorial on greedy algorithms for minimum spanning trees, i.e. Kruskal's and Prim's methods?</p>

<p>I am looking for a tutorial which </p>

<ul>
<li>does not include all the mathematical notation </li>
<li>explains algorithm along with the analysis of the running time.</li>
</ul>
 | algorithm analysis greedy algorithms | 1 | Greedy algorithms tutorial -- (algorithm analysis greedy algorithms)
 | habedi/stack-exchange-dataset |
6,637 | Memoized Palindrome Subsequence | <p>I am trying to find the maximum palindrome subsequence, and after going through some tutorials I came up with a memoized version. But I am not sure about the runtime. I want to know whether the following algorithm will work. Could someone also explain what the runtime will be?</p>

<pre><code>Memoized-Palindrome(A, n)
initialize longest[i][j] = 0 for all i and j
then return Memoized-Palindrome1(A, 1, n, longest)

Memoized-Palindrome1(A, i, j, longest)
if longest[i][j] > 0 return longest[i][j]
if (j - i) <= 1 return j - i
if A[i] == A[j] 
 then longest[i][j] = 2 + Memoized-Palindrome1(A, i+1, j-1, longest)
 else 
 longest[i][j] = max(Memoized-Palindrome1(A, i+1, j, longest), Memoized-Palindrome1(A, i, j-1, longest))
return longest[i][j]
</code></pre>
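For comparison, here is a runnable Java version (my sketch, not the asker's code) with the conventional base cases; each $(i, j)$ pair is computed at most once and does constant work outside its recursive calls, so the running time is $O(n^2)$:

```java
// Memoized longest palindromic subsequence, top-down, O(n^2) time and space.
class Lps {
    static int lps(String a) {
        int n = a.length();
        int[][] memo = new int[n][n];     // 0 means "not computed yet"
        return solve(a, 0, n - 1, memo);
    }

    private static int solve(String a, int i, int j, int[][] memo) {
        if (i > j) return 0;              // empty range
        if (i == j) return 1;             // a single character is a palindrome
        if (memo[i][j] > 0) return memo[i][j];
        int best;
        if (a.charAt(i) == a.charAt(j)) {
            best = 2 + solve(a, i + 1, j - 1, memo);
        } else {
            best = Math.max(solve(a, i + 1, j, memo),
                            solve(a, i, j - 1, memo));
        }
        memo[i][j] = best;
        return best;
    }
}
```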
 | algorithm analysis dynamic programming memoization | 1 | Memoized Palindrome Subsequence -- (algorithm analysis dynamic programming memoization)
 | habedi/stack-exchange-dataset |
6,640 | Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there any cross-pollination / influence in their ideas / work? | <p><strong>Question:</strong></p>

<p><strong>To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?</strong></p>

<p>I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form of A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here, or vice versa. (Opinions are of course welcome, but references / links would be better!)</p>

<p><strong>Context:</strong></p>

<p>Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings.</p>

<p>However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. </p>

<p>A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model.</p>

<p>That's the software / algorithms / programming side of things.</p>

<p>When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL.</p>

<p>Meanwhile, Moore, having been using and refining Forth as a language, but using it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21.</p>

<p>Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems).</p>

<p>Which leads to the question as headlined...</p>

<p><strong>Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?</strong></p>
 | programming languages history | 1 | Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there any cross-pollination / influence in their ideas / work? -- (programming languages history)
 | habedi/stack-exchange-dataset |
6,641 | Counting sort on non-integers - why not possible? | <p>What is it about the structure of <a href="http://en.wikipedia.org/wiki/Counting_sort">counting sort</a> that makes it work only on integers?</p>

<p>Surely strings can be counted?</p>

<pre><code>''' allocate an array Count[0..k] ; initialize each array cell to zero ; THEN '''
for each input item x:
 Count[key(x)] = Count[key(x)] + 1
total = 0
for i = 0, 1, ... k:
 c = Count[i]
 Count[i] = total
 total = total + c

''' allocate an output array Output[0..n-1] ; THEN '''
for each input item x:
 store x in Output[Count[key(x)]]
 Count[key(x)] = Count[key(x)] + 1
return Output
</code></pre>

<p>Where above does the linear time break down if you try to use counting sort on strings instead (assuming you have strings of fixed length)? </p>
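As a concrete rendering of the pseudocode above (a Java sketch, assuming the simplest case where each item is its own key, an integer in $0..k$): note that <code>count</code> is indexed directly by the key, and that direct indexing is exactly what is at issue, since treating a fixed-length-$L$ string over an alphabet of size $s$ as an integer key would blow the key range up to $s^L$ values.

```java
// A direct rendering of the counting-sort pseudocode above, for the case
// where each item is its own key, an integer in [0, k].
class CountingSort {
    static int[] sort(int[] input, int k) {
        int[] count = new int[k + 1];
        for (int x : input) count[x]++;          // histogram of keys
        int total = 0;
        for (int i = 0; i <= k; i++) {           // prefix sums: start offsets
            int c = count[i];
            count[i] = total;
            total += c;
        }
        int[] output = new int[input.length];
        for (int x : input) {                    // stable placement
            output[count[x]] = x;
            count[x]++;
        }
        return output;
    }
}
```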
 | algorithms sorting | 1 | Counting sort on non-integers - why not possible? -- (algorithms sorting)
 | habedi/stack-exchange-dataset |
6,649 | Proving that NPSPACE $\subseteq$ EXPTIME | <p>I am following <strong>"Introduction to the theory of computation" by Sipser</strong>.</p>

<p>My question is about the relationship of different classes, which is presented in <strong>Chapter 8.2, The Class PSPACE</strong>.</p>

<p>$P \subseteq NP \subseteq PSPACE = NPSPACE \subseteq EXPTIME$</p>

<p>I am trying to understand why the following part is true: $NPSPACE \subseteq EXPTIME$.</p>

<p>The explanation from the textbook is following: </p>

<p><em>"For $f(n)\geq n$, a TM that uses $f(n)$ space can have at most $f(n)2^{O(f(n))}$ different configurations, by a simple generalization of the proof of Lemma 5.8 on page 194. A TM computation that halts may not repeat a configuration. Therefore a TM that uses space $f(n)$ must run in time $f(n)2^{O(f(n))}$, so $NPSPACE \subseteq EXPTIME$."</em></p>

<p>I am trying to understand why this is true: why must a TM that uses $f(n)$ space run in time $f(n)2^{O(f(n))}$? Let's try to reverse-engineer the formula: $n$ is the length of the input, $2$ is the size of the alphabet, and $f(n)$ is the space that the TM uses on the second (work) tape, with $f(n) \geq n$; but how should one explain what $O(f(n))$ means? Apparently $2^{O(f(n))}$ expresses a configuration, so $O(f(n))$ must express the union of the transition function and the alphabet, but it seems I have this wrong. The most intriguing question is why, in the end, $f(n)2^{O(f(n))}$ is expressed in terms of time; the transition from space to time is very vague to me.</p>
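One way to unpack that count (my own accounting; the symbols $Q$ and $\Gamma$ for the state set and tape alphabet are the usual conventions, not taken from this passage): a configuration is determined by the control state, the head position, and the work-tape contents, so

```latex
\[
  \#\,\text{configurations} \;\le\;
      \underbrace{|Q|}_{\text{state}}
      \cdot \underbrace{f(n)}_{\text{head position}}
      \cdot \underbrace{|\Gamma|^{f(n)}}_{\text{tape contents}}
  \;=\; f(n)\cdot 2^{\log_2 |Q| \,+\, f(n)\log_2 |\Gamma|}
  \;=\; f(n)\, 2^{O(f(n))}.
\]
```

The $O(f(n))$ in the exponent thus absorbs the constants $\log_2|Q|$ and $\log_2|\Gamma|$. Since a halting computation can never repeat a configuration, the number of configurations bounds the number of steps, which is how a space bound turns into a time bound: $f(n)\,2^{O(f(n))} \le 2^{c f(n)}$ for some constant $c$, i.e. at most exponential time when $f$ is a polynomial.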

<p>I would greatly appreciate it if someone could explain this relationship to me.</p>
 | complexity theory time complexity space complexity | 1 | Proving that NPSPACE $\subseteq$ EXPTIME -- (complexity theory time complexity space complexity)
<p>I am following <strong>"Introduction to the theory of computation" by Sipser</strong>.</p>

<p>My question is about relationship of different classes which is present in <strong>Chapter 8.2. The Class PSPACE</strong>.</p>

<p>$P \subseteq NP \subseteq PSPACE = NPSPACE \subseteq EXPTIME$</p>

<p>I am trying to understand why the the following part is true $NPSPACE \subseteq EXPTIME$.</p>

<p>The explanation from the textbook is following: </p>

<p><em>"For $f(x)\geq n$, a TM that uses $f(x)$ space can have at most $f(n)2^{O(f(n))}$ different configurations, by a simple generalization of the proof of the Lemma 5.8 on page 194. A TM computation that halts may not repeat a configuration. Therefore a TM that uses space $f(n)$ must run in time $f(n)2^{O(f(n))}$, so $NPSPACE \subseteq EXPTIME$"</em></p>

<p>I am trying to understand why this is true, i.e., why a TM that uses $f(n)$ space must run in time $f(n)2^{O(f(n))}$. Let's try to reverse-engineer the formula: $n$ is the length of the input, $2$ is the size of the alphabet, $f(n)$ is the space the TM uses on the second (work) tape, and $f(n) \geq n$; but how do we explain what $O(f(n))$ means? Apparently $2^{O(f(n))}$ expresses the number of configurations, so $O(f(n))$ must account for the transition function and the alphabet, but it seems I have this wrong. The most intriguing question is why, in the end, $f(n)2^{O(f(n))}$ is expressed in terms of time; the transition from space to time is very vague to me.</p>

<p>I would really appreciate it if someone could explain this relationship to me.</p>
 | habedi/stack-exchange-dataset |
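The counting behind the quoted lemma can be made concrete. The sketch below uses illustrative parameters (`num_configurations` is a name I am introducing, not Sipser's): a configuration is a (state, head position, tape contents) triple, and counting them shows where the $f(n)2^{O(f(n))}$ bound comes from.

```python
# A configuration of a space-f(n) TM is (state, head position, tape contents):
#   |Q| choices of state, f(n) head positions, |Gamma|**f(n) tape contents.
# Since |Q| * |Gamma|**f(n) <= 2**(c * f(n)) for a constant c, the total is
# f(n) * 2**O(f(n)).  A halting computation never repeats a configuration,
# so its running time is bounded by the same quantity -- that is the whole
# space-to-time step.
import math

def num_configurations(num_states: int, alphabet_size: int, space: int) -> int:
    return num_states * space * alphabet_size ** space

count = num_configurations(7, 4, 10)        # |Q| = 7, |Gamma| = 4, f(n) = 10
assert count == 7 * 10 * 4 ** 10 == 73400320

# It fits under f(n) * 2**(c * f(n)) with c independent of f(n):
c = math.ceil(math.log2(7) + math.log2(4))  # c = 5 here
assert count <= 10 * 2 ** (c * 10)
```

The constants $|Q|$ and $\log_2|\Gamma|$ disappear into the $O(\cdot)$ in the exponent, which is exactly what $2^{O(f(n))}$ means.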
6,650 | How can P =? NP enhance integer factorization | <p>If ${\sf P}$ does in fact equal ${\sf NP}$, how would this enhance our algorithms to factor integers faster? In other words, what kind of insight would this fact give us into understanding integer factorization better?</p>
 | complexity theory computability np complete p vs np factoring | 1 | How can P =? NP enhance integer factorization -- (complexity theory computability np complete p vs np factoring)
<p>If ${\sf P}$ does in fact equal ${\sf NP}$, how would this enhance our algorithms to factor integers faster? In other words, what kind of insight would this fact give us into understanding integer factorization better?</p>
 | habedi/stack-exchange-dataset |
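One standard observation can be sketched in code. The decision problem "does $N$ have a nontrivial divisor $\le k$?" is in NP, so if P = NP it would have a polynomial-time algorithm, and binary search would then extract an actual factor with $O(\log N)$ calls to it. Below, `has_divisor_leq` is a brute-force stand-in for that hypothetical polynomial-time decider (the names and structure are my own illustration):

```python
def has_divisor_leq(n: int, k: int) -> bool:
    # Stand-in for the hypothetical polynomial-time decider that would
    # exist if P = NP.  Here it is brute force, purely for illustration.
    return any(n % d == 0 for d in range(2, k + 1))

def find_factor(n: int) -> int:
    # Binary search for the smallest nontrivial divisor using the decider.
    if not has_divisor_leq(n, n - 1):
        return n                      # n is prime: no nontrivial divisor
    lo, hi = 2, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_divisor_leq(n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

assert find_factor(91) == 7           # 91 = 7 * 13
assert find_factor(97) == 97          # prime
```

The point of the sketch: search (finding a factor) reduces to decision with only logarithmic overhead, which is why a polynomial decider would yield polynomial factorization.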
6,655 | How can I prove that a build max heap's amortized cost is $O(n)$? | <p>Suppose a <strong>build</strong> max-heap operation runs bubble down over a heap. How does its amortized cost equal $O(n)$?</p>
 | data structures runtime analysis heaps | 1 | How can I prove that a build max heap's amortized cost is $O(n)$? -- (data structures runtime analysis heaps)
<p>Suppose a <strong>build</strong> max-heap operation runs bubble down over a heap. How does its amortized cost equal $O(n)$?</p>
 | habedi/stack-exchange-dataset |
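While the question asks for a proof, the standard claim (total cost $O(n)$, i.e., $O(1)$ amortized per node) is easy to observe empirically. This is my own illustrative code, counting only element comparisons: bottom-up construction with sift-down stays under $2n$ comparisons, matching the $\sum_h \lceil n/2^{h+1}\rceil \cdot O(h) = O(n)$ analysis.

```python
def build_max_heap(a):
    """Bottom-up max-heap construction, counting element comparisons."""
    n = len(a)
    comparisons = 0

    def sift_down(i):
        nonlocal comparisons
        while True:
            l, r = 2 * i + 1, 2 * i + 2
            largest = i
            if l < n:
                comparisons += 1
                if a[l] > a[largest]:
                    largest = l
            if r < n:
                comparisons += 1
                if a[r] > a[largest]:
                    largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):   # only internal nodes are sifted
        sift_down(i)
    return comparisons

import random
random.seed(0)
for n in (1, 2, 10, 1000, 4096):
    a = [random.random() for _ in range(n)]
    c = build_max_heap(a)
    assert all(a[i] >= a[2 * i + 1] for i in range(n) if 2 * i + 1 < n)
    assert all(a[i] >= a[2 * i + 2] for i in range(n) if 2 * i + 2 < n)
    assert c <= 2 * n                      # linear, far below n log n
```

The key fact behind the bound is that the sum of node heights in an $n$-node complete binary tree is at most $n$, and each sift does at most 2 comparisons per level.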
6,665 | Intuition behind Relativization | <p>I am taking a course on Computational Complexity. My problem is that I don't understand the <strong>relativization method</strong>. I tried to find a bit of intuition in many textbooks, unfortunately so far with no success. I would appreciate it if someone could shed some light on this topic so that I can continue by myself.
The following few sentences are questions and my thoughts about relativization; they should help navigate the discussion. </p>

<p>Very often relativization is compared with diagonalization, a method that helps distinguish between countable and uncountable sets. It somehow follows from relativization that the $P$ versus $NP$ question cannot be solved by diagonalization. I don't really see why relativization shows that diagonalization is useless here, and if it is useless, why exactly it is useless.</p>

<p>The idea behind an oracle Turing machine $M^A$ is at first very clear. However, when it comes to $NP^A$ and $P^A$ the intuition disappears. An oracle is a black box designed for a specific language; it answers, in one step, whether the string given to it is in the language. As I understand it, a TM that contains an oracle just performs some auxiliary operations and asks the oracle. So the core of the TM is the oracle, and everything else is less important. What, then, is the difference between $P^A$ and $NP^A$, even though the oracle in both of them answers in one step?</p>

<p>The last thing is proving the existence of an oracle $B$ such that $P^B \neq NP^B$. I found the proof in several textbooks, and in all of them it seems very vague. I tried to use <strong>"Introduction to the Theory of Computation" by Sipser, Chapter 9, Intractability</strong>, and didn't get the idea behind the construction of a list of all polynomial-time oracle TMs $M_i$.</p>

<p>This is more or less everything I know about relativization; I would appreciate it if someone decided to share his/her thoughts on the topic.</p>

<p><strong>Addendum</strong>: in one of the textbooks I found an example of an $NP^B$ language (Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak, Theorem 3.7, page 74): $U_B=\left \{ 1^n : \text{some string of length } n \text{ is in } B\right \}$, a unary language. I believe that $1, 11, 111, 1111, \ldots$ are all in $U_B$. The authors affirm that such a language is in $NP^B$, which I cannot understand, since the oracle for $B$ can resolve any membership query in time 1. Why do we need a nondeterministic TM with an oracle? If this is not a good example of an $NP^B$ language, please give one that demonstrates the existence of such languages.</p>
 | complexity theory np complete complexity classes relativization np | 1 | Intuition behind Relativization -- (complexity theory np complete complexity classes relativization np)
<p>I am taking a course on Computational Complexity. My problem is that I don't understand the <strong>relativization method</strong>. I tried to find a bit of intuition in many textbooks, unfortunately so far with no success. I would appreciate it if someone could shed some light on this topic so that I can continue by myself.
The following few sentences are questions and my thoughts about relativization; they should help navigate the discussion. </p>

<p>Very often relativization is compared with diagonalization, a method that helps distinguish between countable and uncountable sets. It somehow follows from relativization that the $P$ versus $NP$ question cannot be solved by diagonalization. I don't really see why relativization shows that diagonalization is useless here, and if it is useless, why exactly it is useless.</p>

<p>The idea behind an oracle Turing machine $M^A$ is at first very clear. However, when it comes to $NP^A$ and $P^A$ the intuition disappears. An oracle is a black box designed for a specific language; it answers, in one step, whether the string given to it is in the language. As I understand it, a TM that contains an oracle just performs some auxiliary operations and asks the oracle. So the core of the TM is the oracle, and everything else is less important. What, then, is the difference between $P^A$ and $NP^A$, even though the oracle in both of them answers in one step?</p>

<p>The last thing is proving the existence of an oracle $B$ such that $P^B \neq NP^B$. I found the proof in several textbooks, and in all of them it seems very vague. I tried to use <strong>"Introduction to the Theory of Computation" by Sipser, Chapter 9, Intractability</strong>, and didn't get the idea behind the construction of a list of all polynomial-time oracle TMs $M_i$.</p>

<p>This is more or less everything I know about relativization; I would appreciate it if someone decided to share his/her thoughts on the topic.</p>

<p><strong>Addendum</strong>: in one of the textbooks I found an example of an $NP^B$ language (Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak, Theorem 3.7, page 74): $U_B=\left \{ 1^n : \text{some string of length } n \text{ is in } B\right \}$, a unary language. I believe that $1, 11, 111, 1111, \ldots$ are all in $U_B$. The authors affirm that such a language is in $NP^B$, which I cannot understand, since the oracle for $B$ can resolve any membership query in time 1. Why do we need a nondeterministic TM with an oracle? If this is not a good example of an $NP^B$ language, please give one that demonstrates the existence of such languages.</p>
 | habedi/stack-exchange-dataset |
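The addendum can be illustrated concretely. The oracle answers one membership query per step; it does not enumerate $B$. So to decide $1^n \in U_B$ deterministically one may need $2^n$ queries, while a nondeterministic machine guesses the witness string and verifies it with a single query. A toy sketch (the oracle is just a Python set here; all names are mine):

```python
# Why U_B = {1^n : some string of length n is in B} sits naturally in NP^B.
B = {"0110", "10"}            # a toy oracle language

def oracle(w: str) -> bool:   # one "unit-cost" membership query
    return w in B

def verify(n: int, certificate: str) -> bool:
    # NP^B-style verifier: the certificate is the guessed string;
    # one oracle call suffices.
    return len(certificate) == n and oracle(certificate)

def decide_deterministically(n: int) -> bool:
    # Without nondeterminism: up to 2^n oracle queries.
    from itertools import product
    return any(oracle("".join(bits)) for bits in product("01", repeat=n))

assert verify(4, "0110")             # right guess: a single query decides
assert not verify(4, "0000")
assert decide_deterministically(2)   # "10" has length 2
assert not decide_deterministically(3)
```

This is only an intuition aid, not a proof: the separation proof constructs $B$ adversarially against each polynomial-time oracle machine in turn.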
6,672 | Prove multiple cell move instructions don't increase power of Turing Machines | <p>How can you prove that multiple-cell-move instructions, for example (X, Y, 5R) and (X, Y, 17L), do not increase the power of a Turing Machine? </p>
 | turing machines | 1 | Prove multiple cell move instructions don't increase power of Turing Machines -- (turing machines)
<p>How can you prove that multiple-cell-move instructions, for example (X, Y, 5R) and (X, Y, 17L), do not increase the power of a Turing Machine? </p>
 | habedi/stack-exchange-dataset |
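The standard proof idea is a simulation: compile each $k$-cell move into $k$ single-cell moves routed through $k-1$ fresh "pass-through" states that shuttle the head without rewriting anything. A hedged sketch of that compilation step (`expand`, the tuple encoding, and the state names are my own illustration, not from any textbook):

```python
def expand(state, read, write, k, direction, target, alphabet, fresh):
    """Compile the multi-move instruction (state, read) -> (write, k*direction,
    target) into single-move transitions.  Returns a list of tuples
    (state, read_symbol, write_symbol, move, next_state)."""
    if k == 1:
        return [(state, read, write, direction, target)]
    chain = [f"{fresh}_{i}" for i in range(1, k)]    # k-1 fresh states
    rules = [(state, read, write, direction, chain[0])]
    for i, s in enumerate(chain):
        nxt = chain[i + 1] if i + 1 < len(chain) else target
        for sym in alphabet:                          # pass through: rewrite nothing
            rules.append((s, sym, sym, direction, nxt))
    return rules

rules = expand("q0", "X", "Y", 5, "R", "q1", alphabet=("0", "1", "_"), fresh="aux")
assert len(rules) == 1 + 4 * 3        # one real step + 4 moves x 3 symbols
assert rules[0] == ("q0", "X", "Y", "R", "aux_1")
assert rules[-1][4] == "q1"
```

Since the compiled machine visits exactly the same configurations (with a constant-factor slowdown), the multi-move instructions add no computational power.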
6,676 | Confusion about big-O notation comparison of two functions | <p>On page 16 of this <a href="http://www.cs.berkeley.edu/~vazirani/algorithms/all.pdf" rel="noreferrer">algorithms book</a>, it states:</p>

<blockquote>
 <p>For example, suppose we are choosing between two algorithms for a particular computational task. One takes $f_1(n) = n^2$ steps, while the other takes $f_2(n) = 2n + 20$ steps (Figure 0.2).</p>
</blockquote>

<p>He then goes on to say:</p>

<blockquote>
 <p>This superiority ... (of $f_2$ over $f_1$) ... is captured by the big-O notation: $f_2 = O(f_1)$, because ...</p>
</blockquote>

<p>Now my problem is that in the original quote, he said that $f_1(n) = n^2$ steps and $f_2(n) = 2n+20$ steps, and thus $f_1 = O(n^2)$ and $f_2 = O(n)$ (big-O is defined in Section 0.3). But the second quote above states $f_2 = O(f_1)$, which means $f_2 = O(n^2)$ and contradicts his definition of big-O notation. What have I missed?</p>
 | asymptotics landau notation | 1 | Confusion about big-O notation comparison of two functions -- (asymptotics landau notation)
<p>On page 16 of this <a href="http://www.cs.berkeley.edu/~vazirani/algorithms/all.pdf" rel="noreferrer">algorithms book</a>, it states:</p>

<blockquote>
 <p>For example, suppose we are choosing between two algorithms for a particular computational task. One takes $f_1(n) = n^2$ steps, while the other takes $f_2(n) = 2n + 20$ steps (Figure 0.2).</p>
</blockquote>

<p>He then goes on to say:</p>

<blockquote>
 <p>This superiority ... (of $f_2$ over $f_1$) ... is captured by the big-O notation: $f_2 = O(f_1)$, because ...</p>
</blockquote>

<p>Now my problem is that in the original quote, he said that $f_1(n) = n^2$ steps and $f_2(n) = 2n+20$ steps, and thus $f_1 = O(n^2)$ and $f_2 = O(n)$ (big-O is defined in Section 0.3). But the second quote above states $f_2 = O(f_1)$, which means $f_2 = O(n^2)$ and contradicts his definition of big-O notation. What have I missed?</p>
 | habedi/stack-exchange-dataset |
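The two statements are not actually in conflict: $f_2 = O(n)$ and $f_2 = O(f_1) = O(n^2)$ are both true, because big-O gives an upper bound and any $O(n)$ function is automatically $O(n^2)$ as well. Concrete witnesses can be checked mechanically (my own illustrative code):

```python
# f2(n) = 2n + 20 is O(f1(n)) with f1(n) = n^2: find concrete c and n0.
f1 = lambda n: n * n
f2 = lambda n: 2 * n + 20

# With c = 1, the smallest n0 that works is 6:
n0 = next(n for n in range(1, 100)
          if all(f2(m) <= f1(m) for m in range(n, 100)))
assert n0 == 6                # 2*6 + 20 = 32 <= 36 = 6^2
assert f2(5) > f1(5)          # 30 > 25, so n0 = 5 is too small
assert all(f2(n) <= f1(n) for n in range(6, 10**4))
```

Once $n^2 - 2n - 20 > 0$ (its positive root is $1+\sqrt{21} \approx 5.58$), the quadratic dominates forever, which is all the definition requires.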
6,678 | What is the difference between abstract and concrete data structures? | <p>I thought an associative array (i.e., map or dictionary) and a hash table were the same concept, until I saw on <a href="http://en.wikipedia.org/wiki/Abstract_data_type#Implementation" rel="nofollow noreferrer">Wikipedia</a> that</p>

<blockquote>
 <p>For dictionaries with very small numbers of bindings, it may make
 sense to implement the dictionary using an association list, a linked
 list of bindings. ...</p>
 
 <p>The most frequently used general purpose implementation of an
 associative array is with a hash table: an array of bindings, together
 with a hash function that maps each possible key into an array index.
 ...</p>
 
 <p>Dictionaries may also be stored in binary search trees or in data
 structures specialized to a particular type of keys such as radix
 trees, tries, Judy arrays, or van Emde Boas trees. ...</p>
</blockquote>

<p>So I think my problem lies in not knowing that an associative array (i.e., map or dictionary) is an abstract data type, that a hash table is a concrete data structure, and that different concrete data structures can be used to implement the same abstract data type. </p>

<p>My questions would be</p>

<ul>
<li><p>What is the difference and relation between abstract data structures and concrete data structures?</p></li>
<li><p>What examples are for each of them (abstract and concrete data structures)? The more the better.</p></li>
<li><p>Is there a list of what concrete data structures can be used to implement what abstract data structures? It would be nice to have one.</p></li>
</ul>
 | data structures abstract data types | 1 | What is the difference between abstract and concrete data structures? -- (data structures abstract data types)
<p>I thought an associative array (i.e., map or dictionary) and a hash table were the same concept, until I saw on <a href="http://en.wikipedia.org/wiki/Abstract_data_type#Implementation" rel="nofollow noreferrer">Wikipedia</a> that</p>

<blockquote>
 <p>For dictionaries with very small numbers of bindings, it may make
 sense to implement the dictionary using an association list, a linked
 list of bindings. ...</p>
 
 <p>The most frequently used general purpose implementation of an
 associative array is with a hash table: an array of bindings, together
 with a hash function that maps each possible key into an array index.
 ...</p>
 
 <p>Dictionaries may also be stored in binary search trees or in data
 structures specialized to a particular type of keys such as radix
 trees, tries, Judy arrays, or van Emde Boas trees. ...</p>
</blockquote>

<p>So I think my problem lies in not knowing that an associative array (i.e., map or dictionary) is an abstract data type, that a hash table is a concrete data structure, and that different concrete data structures can be used to implement the same abstract data type. </p>

<p>My questions would be</p>

<ul>
<li><p>What is the difference and relation between abstract data structures and concrete data structures?</p></li>
<li><p>What examples are for each of them (abstract and concrete data structures)? The more the better.</p></li>
<li><p>Is there a list of what concrete data structures can be used to implement what abstract data structures? It would be nice to have one.</p></li>
</ul>
 | habedi/stack-exchange-dataset |
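A minimal sketch of the distinction (class names are my own): one abstract data type — a map with `put`/`get` — realized by two different concrete data structures. Both satisfy the same interface and behave identically; they differ only in representation and cost.

```python
class AssocListMap:
    """Map ADT backed by an association list: O(n) lookup."""
    def __init__(self):
        self.pairs = []
    def put(self, key, value):
        for i, (k, _) in enumerate(self.pairs):
            if k == key:
                self.pairs[i] = (key, value)
                return
        self.pairs.append((key, value))
    def get(self, key):
        for k, v in self.pairs:
            if k == key:
                return v
        raise KeyError(key)

class HashTableMap:
    """Map ADT backed by a hash table with separate chaining:
    expected O(1) lookup."""
    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]
    def put(self, key, value):
        b = self._bucket(key)
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)
                return
        b.append((key, value))
    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

for Map in (AssocListMap, HashTableMap):   # same ADT behaviour
    m = Map()
    m.put("a", 1); m.put("b", 2); m.put("a", 3)
    assert m.get("a") == 3 and m.get("b") == 2
```

The ADT is the contract (`put`/`get` semantics); the concrete structure is whatever representation honors it, which is why the Wikipedia passage can list association lists, hash tables, search trees, and tries as interchangeable implementations.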
6,680 | Is the codomain/range of a hash function always $\mathbb{Z}$ or $\mathbb{N}$? | <p>From <a href="http://en.wikipedia.org/wiki/Hash_function#Universal_hashing" rel="nofollow">Wikipedia</a></p>

<blockquote>
 <p>A hash function is any algorithm or subroutine that maps <strong>large data
 sets of variable length</strong>, called keys, to <strong>smaller data sets of a
 fixed length</strong>. For example, a person's name, having a variable
 length, could be hashed to a single <strong>integer</strong>. The values returned
 by a hash function are called hash values, hash codes, hash sums,
 checksums or simply hashes.</p>
</blockquote>

<p>I wonder whether the range/codomain of a hash function is always the set of natural numbers or integers, because their values always seem to be used as indices into some array.</p>
 | terminology hash | 1 | Is the codomain/range of a hash function always $\mathbb{Z}$ or $\mathbb{N}$? -- (terminology hash)
<p>From <a href="http://en.wikipedia.org/wiki/Hash_function#Universal_hashing" rel="nofollow">Wikipedia</a></p>

<blockquote>
 <p>A hash function is any algorithm or subroutine that maps <strong>large data
 sets of variable length</strong>, called keys, to <strong>smaller data sets of a
 fixed length</strong>. For example, a person's name, having a variable
 length, could be hashed to a single <strong>integer</strong>. The values returned
 by a hash function are called hash values, hash codes, hash sums,
 checksums or simply hashes.</p>
</blockquote>

<p>I wonder whether the range/codomain of a hash function is always the set of natural numbers or integers, because their values always seem to be used as indices into some array.</p>
 | habedi/stack-exchange-dataset |
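One common answer, sketched below: formally the codomain is often a set of fixed-length bit strings (a SHA-256 digest is 32 bytes), and the "integer" view is a reinterpretation applied when the hash feeds a table. The reduction to an array index is a further step on top of that (illustrative values; the capacity 1024 is arbitrary):

```python
import hashlib

key = "Alice"
digest = hashlib.sha256(key.encode()).digest()   # codomain: 32-byte strings
as_int = int.from_bytes(digest, "big")           # reinterpreted as an integer
index = as_int % 1024                            # reduced to an array index

assert len(digest) == 32
assert 0 <= index < 1024
```

So the integer/index codomain is a usage convention for hash tables rather than part of the definition; checksums and cryptographic digests are hash functions too, and their outputs are usually treated as byte strings.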
6,690 | How to quickly find a few bisimulations on a given labelled digraph? | <p>We are given a labelled directed graph, where both vertices (or states) and edges (or transitions) have labels. Informally, two states are bisimilar when they have the same label and can simulate each other's transitions, and the same holds again for the states they evolve to.</p>

<p>More formally, a binary relation $R \subseteq S \times S$ is a <a href="http://en.wikipedia.org/wiki/Simulation_preorder" rel="nofollow">simulation</a> iff $\forall (p,q) \in R$</p>

<ul>
<li>$p$ and $q$ have the same label, and</li>
<li>if $p \overset{a}{\longrightarrow} p'$ then $\exists q', q \overset{a}{\longrightarrow} q'$ and $(p',q') \in R$.</li>
</ul>

<p>A relation $R$ is a <a href="http://en.wikipedia.org/wiki/Bisimulation" rel="nofollow">bisimulation</a> iff $R$ and $R^{-1}$ are simulations. The largest bisimulation on the given system is called the <em>bisimilarity relation</em>. </p>

<p>There are algorithms for finding the bisimilarity relation, that run in $O(m \log n)$ time and $O(m+n)$ space, where $n$ is the number of states and $m$ the number of transitions. An example is the Paige-Tarjan RCP algorithm from 1987.</p>

<p>However, consider a simpler problem. Instead of computing the whole bisimilarity relation, I just want to find a few bisimilar pairs. Can it be done faster than in loglinear time? If so, how? For example, say one is given two states $p,q \in S$ that have the same label and can make the same transitions. What I find problematic is checking that the states they lead to are again bisimilar. In other words, one could also ask whether there is a quick way to decide if two given states are bisimilar.</p>
 | algorithms graphs process algebras | 1 | How to quickly find a few bisimulations on a given labelled digraph? -- (algorithms graphs process algebras)
<p>We are given a labelled directed graph, where both vertices (or states) and edges (or transitions) have labels. Informally, two states are bisimilar when they have the same label and can simulate each other's transitions, and the same holds again for the states they evolve to.</p>

<p>More formally, a binary relation $R \subseteq S \times S$ is a <a href="http://en.wikipedia.org/wiki/Simulation_preorder" rel="nofollow">simulation</a> iff $\forall (p,q) \in R$</p>

<ul>
<li>$p$ and $q$ have the same label, and</li>
<li>if $p \overset{a}{\longrightarrow} p'$ then $\exists q', q \overset{a}{\longrightarrow} q'$ and $(p',q') \in R$.</li>
</ul>

<p>A relation $R$ is a <a href="http://en.wikipedia.org/wiki/Bisimulation" rel="nofollow">bisimulation</a> iff $R$ and $R^{-1}$ are simulations. The largest bisimulation on the given system is called the <em>bisimilarity relation</em>. </p>

<p>There are algorithms for finding the bisimilarity relation, that run in $O(m \log n)$ time and $O(m+n)$ space, where $n$ is the number of states and $m$ the number of transitions. An example is the Paige-Tarjan RCP algorithm from 1987.</p>

<p>However, consider a simpler problem. Instead of computing the whole bisimilarity relation, I just want to find a few bisimilar pairs. Can it be done faster than in loglinear time? If so, how? For example, say one is given two states $p,q \in S$ that have the same label and can make the same transitions. What I find problematic is checking that the states they lead to are again bisimilar. In other words, one could also ask whether there is a quick way to decide if two given states are bisimilar.</p>
 | habedi/stack-exchange-dataset |
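For experimenting with the definitions, naive partition refinement is an easy baseline: it still computes the full bisimilarity partition (so it does not answer the speed question, which asks for something faster), but it makes "check that the successor states are again bisimilar" concrete. A hedged sketch on a toy system (all names and the encoding are mine; this is $O(nm)$ per round, nowhere near Paige–Tarjan speed):

```python
def bisimilar_classes(states, label, trans):
    """Naive partition refinement for strong bisimilarity on a finite
    labelled transition system.  trans maps (state, action) -> successors."""
    actions = {a for (_, a) in trans}
    block = {s: label[s] for s in states}          # initial split: by label
    while True:
        def sig(s):  # current block + set of blocks reachable per action
            return (block[s], frozenset(
                (a, frozenset(block[t] for t in trans.get((s, a), ())))
                for a in actions))
        ids, new_block = {}, {}
        for s in states:
            v = sig(s)
            ids.setdefault(v, len(ids))
            new_block[s] = ids[v]
        # refinement only ever splits blocks, so equal counts mean stable
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block

# Toy system: s1 -a-> s2 -b-> s3 mirrors t1 -a-> t2 -b-> t3,
# while u1 -a-> u2 lacks the final b-step.
states = ["s1", "s2", "s3", "t1", "t2", "t3", "u1", "u2"]
label  = {"s1": "X", "t1": "X", "u1": "X",
          "s2": "Y", "t2": "Y", "u2": "Y",
          "s3": "Z", "t3": "Z"}
trans  = {("s1", "a"): {"s2"}, ("s2", "b"): {"s3"},
          ("t1", "a"): {"t2"}, ("t2", "b"): {"t3"},
          ("u1", "a"): {"u2"}}

blocks = bisimilar_classes(states, label, trans)
assert blocks["s1"] == blocks["t1"]    # bisimilar
assert blocks["u1"] != blocks["s1"]    # u2's missing b-step propagates back
```

Note how the last assertion captures exactly the difficulty raised in the question: $u_1$ and $s_1$ agree locally, and only the refinement of their successors separates them.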
6,695 | Proving that if $\mathrm{NTime}(n^{100}) \subseteq \mathrm{DTime}(n^{1000})$ then $\mathrm{P}=\mathrm{NP}$ | <p>I'd really like your help with proving the following.</p>

<p>If $\mathrm{NTime}(n^{100}) \subseteq \mathrm{DTime}(n^{1000})$ then $\mathrm{P}=\mathrm{NP}$.</p>

<p>Here, $\mathrm{NTime}(n^{100})$ is the class of all languages that can be decided by a nondeterministic Turing machine in time $O(n^{100})$, and $\mathrm{DTime}(n^{1000})$ is the class of all languages that can be decided by a deterministic Turing machine in time $O(n^{1000})$.</p>

<p>Any help/suggestions?</p>
 | time complexity complexity classes p vs np | 1 | Proving that if $\mathrm{NTime}(n^{100}) \subseteq \mathrm{DTime}(n^{1000})$ then $\mathrm{P}=\mathrm{NP}$ -- (time complexity complexity classes p vs np)
<p>I'd really like your help with proving the following.</p>

<p>If $\mathrm{NTime}(n^{100}) \subseteq \mathrm{DTime}(n^{1000})$ then $\mathrm{P}=\mathrm{NP}$.</p>

<p>Here, $\mathrm{NTime}(n^{100})$ is the class of all languages that can be decided by a nondeterministic Turing machine in time $O(n^{100})$, and $\mathrm{DTime}(n^{1000})$ is the class of all languages that can be decided by a deterministic Turing machine in time $O(n^{1000})$.</p>

<p>Any help/suggestions?</p>
 | habedi/stack-exchange-dataset |
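A standard route is the padding argument; a hedged outline of it, with the routine details (format checking, tape bookkeeping) elided:

```latex
\textbf{Claim.} If $\mathrm{NTime}(n^{100}) \subseteq \mathrm{DTime}(n^{1000})$,
then $\mathrm{P}=\mathrm{NP}$.

\textbf{Sketch.} Let $L \in \mathrm{NP}$, say $L \in \mathrm{NTime}(n^{k})$
for some constant $k$.  Define the padded language
\[
  L' \;=\; \{\, x \,\#\, 1^{\,|x|^{k}} \;:\; x \in L \,\}.
\]
On an input of $L'$ of length $N \ge |x|^{k}$, a nondeterministic machine can
check the padding format and run the $|x|^{k}$-time NTM for $L$ on $x$, all in
time $O(N)$; hence $L' \in \mathrm{NTime}(N) \subseteq \mathrm{NTime}(N^{100})$.
By hypothesis, $L' \in \mathrm{DTime}(N^{1000})$.

To decide $L$ deterministically: on input $x$ of length $n$, write down
$x\,\#\,1^{\,n^{k}}$ (of size $O(n^{k})$) and run the deterministic decider for
$L'$ on it, taking time $O\big((n^{k})^{1000}\big)$ --- a polynomial in $n$.
Hence $L \in \mathrm{P}$.  Since $\mathrm{P} \subseteq \mathrm{NP}$ always,
$\mathrm{P}=\mathrm{NP}$. \qed
```

The exponents $100$ and $1000$ play no special role; any fixed pair would do, which is what makes padding the natural tool here.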
6,698 | Extract Max for a max-heap in $\log n + \log\log n$ comparisons | <p>Given a <strong>max heap</strong> with <strong>extract-max</strong> operation.</p>

<p>The basic version takes $2 \log n$ comparisons.
How can I make the running time just $\log n + \log\log n$ comparisons?
How about $\log n + \log\log\log n $ comparisons?</p>

<p>I thought of putting $-\infty$ on the heap root but not really sure what to do with it as it can go anywhere.</p>

<p>To be more precise, I'm only counting comparisons between array item values. I'm reading <a href="http://en.wikipedia.org/wiki/Introduction_to_Algorithms" rel="nofollow">CLRS</a> Chapter 6 (<strong>MAX-HEAPIFY</strong> and <strong>HEAP-EXTRACT-MAX</strong>).</p>
 | data structures heaps | 1 | Extract Max for a max-heap in $\log n + \log\log n$ comparisons -- (data structures heaps)
<p>Given a <strong>max heap</strong> with <strong>extract-max</strong> operation.</p>

<p>The basic version takes $2 \log n$ comparisons.
How can I make the running time just $\log n + \log\log n$ comparisons?
How about $\log n + \log\log\log n $ comparisons?</p>

<p>I thought of putting $-\infty$ on the heap root but not really sure what to do with it as it can go anywhere.</p>

<p>To be more precise, I'm only counting comparisons between array item values. I'm reading <a href="http://en.wikipedia.org/wiki/Introduction_to_Algorithms" rel="nofollow">CLRS</a> Chapter 6 (<strong>MAX-HEAPIFY</strong> and <strong>HEAP-EXTRACT-MAX</strong>).</p>
 | habedi/stack-exchange-dataset |
6,700 | Finding $c$ and $n_0$ for a big-O bound | <p>A book I am reading demonstrates how $5n^3 + 2n^2 + 22n + 6 = O(n^3)$, which I believe is true. After all, there exists a value $c$ for which $cn^3$ is always greater than $5n^3 + 2n^2 + 22n + 6$ for all $n$ greater than or equal to some value $n_0$.</p>

<p>However, the book then casually notes that $c = 5$ and $n_0 = 10$. Where did these values come from? What algebraic calculations were done (if any) to derive the $c$ and $n_0$ values?</p>
 | asymptotics landau notation | 1 | Finding $c$ and $n_0$ for a big-O bound -- (asymptotics landau notation)
<p>A book I am reading demonstrates how $5n^3 + 2n^2 + 22n + 6 = O(n^3)$, which I believe is true. After all, there exists a value $c$ for which $cn^3$ is always greater than $5n^3 + 2n^2 + 22n + 6$ for all $n$ greater than or equal to some value $n_0$.</p>

<p>However, the book then casually notes that $c = 5$ and $n_0 = 10$. Where did these values come from? What algebraic calculations were done (if any) to derive the $c$ and $n_0$ values?</p>
 | habedi/stack-exchange-dataset |
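One caveat worth checking numerically: since every lower-order term is positive, $c = 5$ on its own can never dominate $5n^3 + 2n^2 + 22n + 6$, so presumably the book's $c=5$, $n_0=10$ involves a step like absorbing the lower-order terms into one extra $n^3$ (giving $c=6$) or into the leading coefficient sum (giving $c=35$, $n_0=1$). Both standard choices verify mechanically (my own illustrative code):

```python
f = lambda n: 5 * n**3 + 2 * n**2 + 22 * n + 6

# c = 5 alone can never work: every lower-order term is positive.
assert all(f(n) > 5 * n**3 for n in range(1, 100))

# c = 6, n0 = 10 works: one extra n^3 absorbs 2n^2 + 22n + 6 once n is big.
assert all(f(n) <= 6 * n**3 for n in range(10, 10**4))
assert f(6) <= 6 * 6**3 and f(5) > 6 * 5**3   # smallest valid n0 for c = 6 is 6

# The lazy choice: c = 5 + 2 + 22 + 6 = 35 works from n0 = 1,
# since n^3 >= n^2 >= n >= 1 for all n >= 1.
assert all(f(n) <= 35 * n**3 for n in range(1, 10**4))
```

The takeaway: $c$ and $n_0$ are not unique; any pair that makes the inequality hold from $n_0$ onward witnesses the same big-O statement.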
6,704 | Finite State Automata for recognising consecutive characters | <p>I'm currently working on this question as part of some homework, and it has me stumped.</p>

<p><img src="https://i.stack.imgur.com/taFpH.png" alt="FSA Question"></p>

<p>I'm familiar with finite state automata (FSA), I know how they work and I've read everything I can find on Google, but nothing's helped me come any closer to a solution.</p>

<p>If I don't know the length of the input string, or I'm not searching for a particular pattern, how can I design a machine that will always land on the final state? </p>

<p>I've tried drawing some, but they always end up being a little off. </p>
 | formal languages finite automata | 1 | Finite State Automata for recognising consecutive characters -- (formal languages finite automata)
<p>I'm currently working on this question as part of some homework, and it has me stumped.</p>

<p><img src="https://i.stack.imgur.com/taFpH.png" alt="FSA Question"></p>

<p>I'm familiar with finite state automata (FSA), I know how they work and I've read everything I can find on Google, but nothing's helped me come any closer to a solution.</p>

<p>If I don't know the length of the input string, or I'm not searching for a particular pattern, how can I design a machine that will always land on the final state? </p>

<p>I've tried drawing some, but they always end up being a little off. </p>
 | habedi/stack-exchange-dataset |
6,711 | When does $1.00001^n$ exceed $n^{100001}$? | <p>I have been told than $n^{1000001} = O(1.000001^n)$. If that's the case, there must be some value $n$ at which $1.000001^n$ exceeds $n^{1000001}$.</p>

<p>However, when I consult Wolfram Alpha, I get a negative value for when that occurs.
<a href="http://www.wolframalpha.com/input/?i=1.000001%5Ex+%3D+x%5E1000001" rel="nofollow">http://www.wolframalpha.com/input/?i=1.000001%5Ex+%3D+x%5E1000001</a></p>

<p>Why is that? Shouldn't this value be really big instead of negative?</p>
 | asymptotics landau notation | 1 | When does $1.00001^n$ exceed $n^{100001}$? -- (asymptotics landau notation)
<p>I have been told that $n^{1000001} = O(1.000001^n)$. If that's the case, there must be some value $n$ at which $1.000001^n$ exceeds $n^{1000001}$.</p>

<p>However, when I consult Wolfram Alpha, I get a negative value for when that occurs.
<a href="http://www.wolframalpha.com/input/?i=1.000001%5Ex+%3D+x%5E1000001" rel="nofollow">http://www.wolframalpha.com/input/?i=1.000001%5Ex+%3D+x%5E1000001</a></p>

<p>Why is that? Shouldn't this value be really big instead of negative?</p>
 | habedi/stack-exchange-dataset |
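The crossover is indeed positive, just enormous; a solver may report a different (non-meaningful) real solution branch of the equation, which is presumably what happened here. Comparing in log space avoids astronomically large numbers and locates the true crossover (my own illustrative code; the "roughly $3.1 \times 10^{13}$" figure is an estimate from $n \approx 10^{12} \ln n$):

```python
import math

# 1.000001^n >= n^1000001  <=>  n*ln(1.000001) - 1000001*ln(n) >= 0
g = lambda n: n * math.log(1.000001) - 1000001 * math.log(n)

assert g(10**13) < 0          # the polynomial is still ahead
assert g(10**14) > 0          # the exponential has taken over

lo, hi = 10**13, 10**14
while hi - lo > 1:            # binary search for the crossover point
    mid = (lo + hi) // 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid

assert 10**13 < hi < 10**14   # crossover: roughly 3.1e13 -- huge but finite
```

This is the general shape of such comparisons: $c^n$ beats $n^k$ only once $n$ is on the order of $(k/\ln c)\,\ln n$, which here means tens of trillions.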
6,717 | Formalization of the shortest path algorithm to a linear program | <p>I'm trying to understand a formalization of the shortest path algorithm to a linear programming problem:</p>

<p>For a graph $G=(E,V)$, we defined $F(v)=\{e \in E \mid t(e)=v \}$ and $B(v)=\{ e \in E \mid h(e)=v\}$, where $t(e)$ is the tail of arc $e$ and $h(e)$ is its head.</p>

<p>Also, the right-hand sides of the constraints of the linear program were defined as $b_v=1$ for every node $v$ except the root $r$, from which we find all the shortest paths in the graph, where $b_r=-(n-1)$. It is written here: "We associate a flow (primal variable) $x_e$ with each arc $e \in E$."</p>

<p>The main linear program is to minimize $\sum\limits_{e\in E }c_ex_e$, subject to $\sum\limits_{e\in B(v)}x_e-\sum\limits_{e\in F(v)}x_e=b_v$ for all $v \in V$ and $x_e \geq 0$ for all $e \in E$, where $c_e$ is the length of arc $e$.</p>

<p>I'd really love your help with understanding what does $x_e$ represent. Is it the number of times I use $e$ in order to find all the shortest paths in the graph?</p>

<p>I don't understand why the constraint of this linear program is as it is: why should $\sum\limits_{e\in B(v)}x_e-\sum\limits_{e\in F(v)}x_e=b_v$ equal $1$ for every node and $-(n-1)$ for the root? If I think of a $3$-node path as the graph, for the middle node the condition equals $1$, which makes me think that I might have misunderstood what $x_e$ stands for.</p>
 | algorithms graphs shortest path linear programming | 1 | Formalization of the shortest path algorithm to a linear program -- (algorithms graphs shortest path linear programming)
<p>I'm trying to understand a formalization of the shortest path algorithm to a linear programming problem:</p>

<p>For a graph $G=(E,V)$, we defined $F(v)=\{e \in E \mid t(e)=v \}$ and $B(v)=\{ e \in E \mid h(e)=v\}$, where $t(e)$ is the tail of arc $e$ and $h(e)$ is its head.</p>

<p>Also, the right-hand sides of the constraints of the linear program were defined as $b_v=1$ for every node $v$ except the root $r$, from which we find all the shortest paths in the graph, where $b_r=-(n-1)$. It is written here: "We associate a flow (primal variable) $x_e$ with each arc $e \in E$."</p>

<p>The main linear program is to minimize $\sum\limits_{e\in E }c_ex_e$, subject to $\sum\limits_{e\in B(v)}x_e-\sum\limits_{e\in F(v)}x_e=b_v$ for all $v \in V$ and $x_e \geq 0$ for all $e \in E$, where $c_e$ is the length of arc $e$.</p>

<p>I'd really love your help with understanding what does $x_e$ represent. Is it the number of times I use $e$ in order to find all the shortest paths in the graph?</p>

<p>I don't understand why the constraint of this linear program is as it is: why should $\sum\limits_{e\in B(v)}x_e-\sum\limits_{e\in F(v)}x_e=b_v$ equal $1$ for every node and $-(n-1)$ for the root? If I think of a $3$-node path as the graph, for the middle node the condition equals $1$, which makes me think that I might have misunderstood what $x_e$ stands for.</p>
 | habedi/stack-exchange-dataset |
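A reading consistent with these constraints (inferred from the formulation, so treat it as a hedged interpretation): $x_e$ is the amount of flow on arc $e$ when one unit is shipped from the root $r$ to every other node, i.e., the number of root-to-node paths in the solution that use $e$. The $3$-node example from the question then checks out:

```python
# Toy instance: path graph r -> a -> b, n = 3 nodes.
# x_e = number of root-to-node paths that use arc e
# (one unit of flow is shipped from r to each of the n-1 other nodes).
edges = [("r", "a"), ("a", "b")]
x = {("r", "a"): 2,    # both paths r->a and r->a->b use this arc
     ("a", "b"): 1}    # only the path to b uses this arc

n = 3
b = {"r": -(n - 1), "a": 1, "b": 1}

for v in ("r", "a", "b"):
    inflow  = sum(x[e] for e in edges if e[1] == v)   # e in B(v): head is v
    outflow = sum(x[e] for e in edges if e[0] == v)   # e in F(v): tail is v
    assert inflow - outflow == b[v]                   # flow conservation
```

So the middle node's constraint equals $1$ not because one path visits it, but because it absorbs exactly one unit of flow (its own demand) while passing the rest through; the root supplies all $n-1$ units, hence $b_r = -(n-1)$.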
6,720 | Minimum weight triangulation | <p>I'm just curious about the pseudocode (or real source code, it doesn't matter) of the recursive version of this algorithm. In almost every book chapter/paper describing this topic, they mention that the recursive version takes exponential time and then give the code for the dynamic programming approach. I understand how the iterative version (dynamic programming, i.e., memoization) works, but I just wonder about the recursive version.
For reference, the key part of the iterative code is: <br/></p>

<blockquote>
 <p>$\ell$ ... left <br/>
 $r$ ... right <br/>
 $a$ ... apex <br/>
 $T$ ... triangulation </p>
 
 <p>$T_{\ell,r}= \min\{T_{\ell,a} + \text{perimeter}_{\ell,a,r} + T_{a,r}\}$</p>
</blockquote>

<p>So what would the recursive function <code>findOT()</code> look like in <br/>
pseudocode or one of these languages (C#, Java, C/C++, PHP, Javascript, SML)?</p>
 | algorithms computational geometry recursion | 1 | Minimum weight triangulation -- (algorithms computational geometry recursion)
<p>I'm just curious about the pseudocode (or real source code, it doesn't matter) of the recursive version of this algorithm. In almost every book chapter/paper describing this topic, they mention that the recursive version takes exponential time and then give the code for the dynamic programming approach. I understand how the iterative version (dynamic programming, i.e., memoization) works, but I just wonder about the recursive version.
For reference, the key part of the iterative code is: <br/></p>

<blockquote>
 <p>$\ell$ ... left <br/>
 $r$ ... right <br/>
 $a$ ... apex <br/>
 $T$ ... triangulation </p>
 
 <p>$T_{\ell,r}= \min\{T_{\ell,a} + \text{perimeter}_{\ell,a,r} + T_{a,r}\}$</p>
</blockquote>

<p>So what would the recursive function <code>findOT()</code> look like in <br/>
pseudocode or one of these languages (C#, Java, C/C++, PHP, Javascript, SML)?</p>
 | habedi/stack-exchange-dataset |
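The plain recursion is just the quoted recurrence transcribed literally, with no table. A minimal sketch (my own code; `find_ot`, the convex-polygon vertex indexing $l < a < r$, and using the triangle perimeter as the weight are assumptions matching the recurrence above):

```python
import math

def perimeter(pts, i, j, k):
    d = lambda p, q: math.dist(pts[p], pts[q])
    return d(i, j) + d(j, k) + d(i, k)

def find_ot(pts, l, r):
    """Plain recursion for T_{l,r}: exponential time, no memoisation --
    exactly the version the books skip."""
    if r - l < 2:                 # fewer than 3 vertices: nothing to triangulate
        return 0.0
    return min(find_ot(pts, l, a) + perimeter(pts, l, a, r) + find_ot(pts, a, r)
               for a in range(l + 1, r))

# Unit square: both triangulations cost 2 * (2 + sqrt(2)) = 4 + 2*sqrt(2).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
best = find_ot(square, 0, 3)
assert abs(best - (4 + 2 * math.sqrt(2))) < 1e-9
```

Memoising `find_ot` on the pair `(l, r)` (e.g. with `functools.lru_cache`) turns this exact code into the $O(n^3)$ dynamic program, which is why textbooks present only the table version.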
6,741 | probability wheel, redistribution of probabilities | <p>I have a contiguous ordered data structure (0 based index): </p>

<pre><code>x= [1/3, 1/3, 1/3]
</code></pre>

<p>Let's say I selected index 1 and increased its probability by 1/3. The rest of the probabilities each decrease by 1/6, so the total probability remains P = 1.</p>

<pre><code>x= [1/6, 2/3, 1/6]
</code></pre>

<p>Let's say I selected index 2 and increased its probability by 1/3. The rest of the probabilities need to decrease by 1/3 in total so that the total probability remains P = 1.</p>

<pre><code>x= [1/10, 2/5, 1/2]
</code></pre>

<p>Is there a name for this kind of data structure? I'd like to research that name and use a library instead of my custom rolled code if possible.</p>
 | data structures probability theory | 1 | probability wheel, redistribution of probabilities -- (data structures probability theory)
<p>I have a contiguous ordered data structure (0 based index): </p>

<pre><code>x= [1/3, 1/3, 1/3]
</code></pre>

<p>Let's say I selected index 1 and increased its probability by 1/3. The rest of the probabilities each decrease by 1/6, so the total probability remains P = 1.</p>

<pre><code>x= [1/6, 2/3, 1/6]
</code></pre>

<p>Let's say I selected index 2 and increased its probability by 1/3. The rest of the probabilities need to decrease by 1/3 in total so that the total probability remains P = 1.</p>

<pre><code>x= [1/10, 2/5, 1/2]
</code></pre>

<p>Is there a name for this kind of data structure? I'd like to research that name and use a library instead of my custom rolled code if possible.</p>
 | habedi/stack-exchange-dataset |
6,744 | Algorithms for graph generation using given properties | <p>There may be a large number of algorithms proposed for generating graphs satisfying some common properties (e.g., clustering coefficient, average shortest path length, degree distribution, etc).</p>

<p>My question concerns a specific case: I want to generate a few <em>undirected regular</em> graphs (i.e., every node in these graphs has the same number of neighbors) with different clustering coefficients and average shortest path lengths. More generally, by fixing a degree distribution, I want to generate graphs with different clustering coefficients and average shortest path lengths.</p>

<p>I wonder what the well-known algorithms for doing this are (in fact, are there any?), and what software is recommended for the same purpose?</p>
 | algorithms graphs sampling | 1 | Algorithms for graph generation using given properties -- (algorithms graphs sampling)
<p>There may be a large number of algorithms proposed for generating graphs satisfying some common properties (e.g., clustering coefficient, average shortest path length, degree distribution, etc).</p>

<p>My question concerns a specific case: I want to generate a few <em>undirected regular</em> graphs (i.e., every node in these graphs has the same number of neighbors) with different clustering coefficients and average shortest path lengths. More generally, by fixing a degree distribution, I want to generate graphs with different clustering coefficients and average shortest path lengths.</p>

<p>I wonder what the well-known algorithms for doing this are (in fact, are there any?), and what software is recommended for the same purpose?</p>
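One common recipe for regular graphs with tunable clustering is to start from a highly clustered ring lattice and apply random double edge swaps, which rewire edges while preserving every degree (NetworkX offers `random_regular_graph` and `double_edge_swap` for this). A self-contained sketch, with hypothetical helper names:

```python
import random

def ring_regular(n, d):
    """d-regular ring lattice (d even): connect each vertex to d/2 neighbours per side."""
    edges = set()
    for v in range(n):
        for k in range(1, d // 2 + 1):
            u, w = v, (v + k) % n
            edges.add((min(u, w), max(u, w)))
    return edges

def double_edge_swap(edges, tries, seed=0):
    """Random swaps (a,b),(c,d) -> (a,c),(b,d); preserves all degrees, lowers clustering."""
    rng = random.Random(seed)
    edges = set(edges)
    for _ in range(tries):
        (a, b), (c, d) = rng.sample(sorted(edges), 2)
        if len({a, b, c, d}) == 4:
            e1 = (min(a, c), max(a, c))
            e2 = (min(b, d), max(b, d))
            if e1 not in edges and e2 not in edges:
                edges -= {(a, b), (c, d)}
                edges |= {e1, e2}
    return edges
```

Varying the number of swaps interpolates between the lattice (high clustering, long paths) and a near-random regular graph (low clustering, short paths), while the degree sequence stays fixed throughout.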
 | habedi/stack-exchange-dataset |
6,753 | Cantor's diagonal method in simple terms? | <p>Could anyone please explain Cantor's diagonalization principle in simple terms?</p>
 | complexity theory sets uncountability | 1 | Cantor's diagonal method in simple terms? -- (complexity theory sets uncountability)
<p>Could anyone please explain Cantor's diagonalization principle in simple terms?</p>
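The finite version of the argument can be run as code: given any list of n binary strings of length n, flipping the diagonal produces a string that differs from the i-th string at position i, so it appears nowhere in the list. The same construction applied to an infinite enumeration of infinite binary sequences shows no enumeration can be complete.

```python
def diagonal(rows):
    """Return a binary string that differs from rows[i] at position i for every i,
    and therefore cannot equal any row in the list."""
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

rows = ["0000", "1111", "0101", "1010"]
d = diagonal(rows)   # differs from row i in position i
```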
 | habedi/stack-exchange-dataset |
6,755 | Voronoi diagram with given number of vertices and sites | <p>I want to draw a Voronoi diagram with 9 sites and with </p>

<ol>
<li>no vertex, </li>
<li>1 vertex, </li>
<li>4 vertices, and</li>
<li>7 vertices.</li>
</ol>

<p>How do I approach this question? The one with no vertex is easy: it can be done with collinear points. What about the others?</p>

<p>A figure for each would be appreciated.</p>
 | computational geometry | 1 | Voronoi diagram with given number of vertices and sites -- (computational geometry)
<p>I want to draw a Voronoi diagram with 9 sites and with </p>

<ol>
<li>no vertex, </li>
<li>1 vertex, </li>
<li>4 vertices, and</li>
<li>7 vertices.</li>
</ol>

<p>How do I approach this question? The one with no vertex is easy: it can be done with collinear points. What about the others?</p>

<p>A figure for each would be appreciated.</p>
 | habedi/stack-exchange-dataset |
6,768 | How is this grammar LL(1)? | <p>This is a question from the Dragon Book. This is the grammar:</p>

<blockquote>
 <p>$S \to AaAb \mid BbBa $<br>
 $A \to \varepsilon$<br>
 $B \to \varepsilon$ </p>
</blockquote>

<p>The question asks how to show that it is LL(1) but not SLR(1). </p>

<p>To prove that it is LL(1), I tried constructing its parsing table, but I am getting multiple productions in a cell, which is a contradiction.</p>

<p>Please explain how this is LL(1), and how one can prove it.</p>
 | formal grammars compilers parsers | 1 | How is this grammar LL(1)? -- (formal grammars compilers parsers)
<p>This is a question from the Dragon Book. This is the grammar:</p>

<blockquote>
 <p>$S \to AaAb \mid BbBa $<br>
 $A \to \varepsilon$<br>
 $B \to \varepsilon$ </p>
</blockquote>

<p>The question asks how to show that it is LL(1) but not SLR(1). </p>

<p>To prove that it is LL(1), I tried constructing its parsing table, but I am getting multiple productions in a cell, which is a contradiction.</p>

<p>Please explain how this is LL(1), and how one can prove it.</p>
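The FIRST sets of the two alternatives for S are disjoint ({a} versus {b}), which is exactly the LL(1) condition here. The "multiple productions in a cell" observation is likely the ε-productions of A and B landing in several *columns*: A → ε goes in M[A, a] and M[A, b] because FOLLOW(A) = {a, b}, but each cell still holds a single production, so there is no conflict. A small check with hypothetical helper names:

```python
NULLABLE = {"A", "B"}               # A and B derive only the empty string
FIRST_NT = {"A": set(), "B": set()} # so their FIRST sets are empty

def first(seq):
    """FIRST set of a string of grammar symbols (lowercase = terminal)."""
    out = set()
    for sym in seq:
        if sym.islower():           # terminal stops the scan
            out.add(sym)
            return out
        out |= FIRST_NT[sym]
        if sym not in NULLABLE:     # non-nullable nonterminal stops the scan
            return out
    return out

alt1 = first("AaAb")   # {'a'}
alt2 = first("BbBa")   # {'b'}
```

Since `alt1` and `alt2` are disjoint and neither alternative is nullable, the parser can always pick the right S-production from one lookahead token.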
 | habedi/stack-exchange-dataset |
6,771 | LR(1) - Items, Look Ahead | <p>I am having difficulties understanding the principle of lookahead in LR(1) items. How do I compute the lookahead sets?</p>

<p>Say for an example that I have the following grammar:</p>

<pre><code>S -> AB
A -> aAb | b
B -> d
</code></pre>

<p>Then the first state will look like this:</p>

<pre><code>S -> .AB , {look ahead}
A -> .aAb, {look ahead}
A -> .b, {look ahead}
</code></pre>

<p>I know what lookaheads are, but I don't know how to compute them. I have googled for answers, but there isn't any webpage that explains this in a simple manner.</p>

<p>Thanks in advance </p>
 | formal languages formal grammars context free parsing | 1 | LR(1) - Items, Look Ahead -- (formal languages formal grammars context free parsing)
<p>I am having difficulties understanding the principle of lookahead in LR(1) items. How do I compute the lookahead sets?</p>

<p>Say for an example that I have the following grammar:</p>

<pre><code>S -> AB
A -> aAb | b
B -> d
</code></pre>

<p>Then the first state will look like this:</p>

<pre><code>S -> .AB , {look ahead}
A -> .aAb, {look ahead}
A -> .b, {look ahead}
</code></pre>

<p>I know what lookaheads are, but I don't know how to compute them. I have googled for answers, but there isn't any webpage that explains this in a simple manner.</p>

<p>Thanks in advance </p>
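The rule is: when closing an item [A → α·Bβ, a], each added item [B → ·γ, t] gets one lookahead t per terminal in FIRST(βa). For the grammar above, [S → ·AB, $] spawns the A-items with lookahead FIRST(B$) = {d}. A sketch of that closure computation (helper names are mine; the naive FIRST recursion is fine here because the grammar has no left recursion):

```python
GRAMMAR = {"S": ["AB"], "A": ["aAb", "b"], "B": ["d"]}
NULLABLE = set()   # no nullable nonterminals in this grammar

def first_of(seq):
    """FIRST of a symbol string; anything not in GRAMMAR (incl. '$') is a terminal."""
    out = set()
    for sym in seq:
        if sym not in GRAMMAR:
            out.add(sym)
            return out
        for rhs in GRAMMAR[sym]:
            out |= first_of(rhs)
        if sym not in NULLABLE:
            return out
    return out

def closure(items):
    """Items are (lhs, rhs, dot, lookahead); expand nonterminals after the dot."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot, la) in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                B, beta = rhs[dot], rhs[dot + 1:]
                for t in first_of(beta + la):
                    for prod in GRAMMAR[B]:
                        if (B, prod, 0, t) not in items:
                            items.add((B, prod, 0, t))
                            changed = True
    return items

state0 = closure({("S'", "S", 0, "$")})
```

The initial state comes out as the question expects: the S-item carries $, and both A-items carry d, because d is the only terminal that can follow A (via B → d) in this context.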
 | habedi/stack-exchange-dataset |
6,773 | Finding negative cycles for cycle-canceling algorithm | <p>I am implementing the cycle-canceling algorithm to find an optimal solution for the min-cost flow problem. By finding and removing negative cost cycles in the residual network, the total cost is lowered in each round. To find a negative cycle I am using the Bellman-Ford algorithm.</p>

<p>My problem is:
Bellman-Ford only finds cycles that are reachable from the source, but I also need to find cycles that are not reachable.</p>

<p>Example: In the following network, we already applied a maximum flow. The edge $(A, B)$ makes it very expensive. In the residual network, we have a negative cost cycle with capacity $1$. Removing it would give us a cheaper solution using edges $(A, C)$ and $(C, T)$, but we cannot reach it from the source $S$.</p>

<p>Labels: Flow/Capacity, Cost</p>

<p><img src="https://i.stack.imgur.com/jKtUd.png" alt="enter image description here"></p>

<p>Of course, I could run Bellman-Ford repeatedly with each node as source, but that does not sound like a good solution. I'm a little confused because all the papers I read seem to skip this step.</p>

<p>Can you tell me how to use Bellman-Ford to find every negative cycle (reachable or not)?
And if not possible, which other algorithm do you propose?</p>
 | algorithms graphs shortest path network flow | 1 | Finding negative cycles for cycle-canceling algorithm -- (algorithms graphs shortest path network flow)
<p>I am implementing the cycle-canceling algorithm to find an optimal solution for the min-cost flow problem. By finding and removing negative cost cycles in the residual network, the total cost is lowered in each round. To find a negative cycle I am using the Bellman-Ford algorithm.</p>

<p>My problem is:
Bellman-Ford only finds cycles that are reachable from the source, but I also need to find cycles that are not reachable.</p>

<p>Example: In the following network, we already applied a maximum flow. The edge $(A, B)$ makes it very expensive. In the residual network, we have a negative cost cycle with capacity $1$. Removing it would give us a cheaper solution using edges $(A, C)$ and $(C, T)$, but we cannot reach it from the source $S$.</p>

<p>Labels: Flow/Capacity, Cost</p>

<p><img src="https://i.stack.imgur.com/jKtUd.png" alt="enter image description here"></p>

<p>Of course, I could run Bellman-Ford repeatedly with each node as source, but that does not sound like a good solution. I'm a little confused because all the papers I read seem to skip this step.</p>

<p>Can you tell me how to use Bellman-Ford to find every negative cycle (reachable or not)?
And if not possible, which other algorithm do you propose?</p>
 | habedi/stack-exchange-dataset |
6,791 | Lambda calculus outside functional programming? | <p>I'm a university student, and we're currently studying Lambda Calculus. However, I still have a hard time understanding exactly why this is useful for me. I realize if you do loads of functional programming it might be useful, however I reckon that it's not really needed for learning functional programming, what do you think?</p>

<p>Secondly, is there any use for Lambda Calculus within the realm of Computer Science but outside of functional programming languages? </p>
 | lambda calculus functional programming | 1 | Lambda calculus outside functional programming? -- (lambda calculus functional programming)
<p>I'm a university student, and we're currently studying Lambda Calculus. However, I still have a hard time understanding exactly why this is useful for me. I realize if you do loads of functional programming it might be useful, however I reckon that it's not really needed for learning functional programming, what do you think?</p>

<p>Secondly, is there any use for Lambda Calculus within the realm of Computer Science but outside of functional programming languages? </p>
 | habedi/stack-exchange-dataset |
6,797 | Modifying Dijkstra's algorithm for edge weights drawn from range $[1,…,K]$ | <p>Suppose I have a directed graph with edge weights drawn from range $[1,\dots, K]$ where $K$ is constant. If I'm trying to find the shortest path using <a href="http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm">Dijkstra's algorithm</a>, how can I modify the algorithm / data structure and improve the time complexity to $O(|V|+|E|)$?</p>
 | algorithms data structures shortest path weighted graphs | 1 | Modifying Dijkstra's algorithm for edge weights drawn from range $[1,…,K]$ -- (algorithms data structures shortest path weighted graphs)
<p>Suppose I have a directed graph with edge weights drawn from range $[1,\dots, K]$ where $K$ is constant. If I'm trying to find the shortest path using <a href="http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm">Dijkstra's algorithm</a>, how can I modify the algorithm / data structure and improve the time complexity to $O(|V|+|E|)$?</p>
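One standard answer is to replace the heap with a bucket queue (Dial's algorithm): with integer weights in $[1, K]$, all final distances lie in $[0, K(|V|-1)]$, so an array of lists indexed by distance suffices, and scanning the buckets costs $O(K|V| + |E|)$, which is $O(|V| + |E|)$ for constant $K$. A sketch (function and parameter names are mine):

```python
def dial_shortest_paths(n, adj, src, K):
    """Dijkstra with a bucket queue. adj[u] is a list of (v, w) with 1 <= w <= K.
    A vertex is settled when first popped from the bucket equal to its distance."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    buckets = [[] for _ in range(K * n + 1)]   # tentative labels never exceed K*n
    buckets[0].append(src)
    for d in range(len(buckets)):
        for u in buckets[d]:
            if dist[u] != d:        # stale entry from an earlier, worse label
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)
    return dist
```

Instead of deleting outdated heap entries, stale bucket entries are simply skipped when popped, which keeps the per-edge work constant.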
 | habedi/stack-exchange-dataset |
6,801 | Every simple undirected graph with more than $(n-1)(n-2)/2$ edges is connected | <p>If a graph with $n$ vertices has more than $\frac{(n-1)(n-2)}{2}$ edges then it is connected.</p>

<p>I am a bit confused about this question, since all I can prove is that for a graph to be connected you need at least $n-1$ edges, i.e., $|E| \geq n-1$.</p>
 | graphs | 1 | Every simple undirected graph with more than $(n-1)(n-2)/2$ edges is connected -- (graphs)
<p>If a graph with $n$ vertices has more than $\frac{(n-1)(n-2)}{2}$ edges then it is connected.</p>

<p>I am a bit confused about this question, since all I can prove is that for a graph to be connected you need at least $n-1$ edges, i.e., $|E| \geq n-1$.</p>
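The point of the bound is different from $|E| \geq n-1$: a *disconnected* graph on $n$ vertices has at most $\binom{n-1}{2} = \frac{(n-1)(n-2)}{2}$ edges (the worst case is $K_{n-1}$ plus an isolated vertex), so exceeding that count forces connectivity. Both claims can be brute-force checked for small $n$; a sketch for $n = 5$:

```python
from itertools import combinations

def connected(n, edges):
    """DFS from vertex 0 over an undirected edge list."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return len(seen) == n

n = 5
all_pairs = list(combinations(range(n), 2))
bound = (n - 1) * (n - 2) // 2          # 6 for n = 5

# Every graph with more than `bound` edges is connected ...
big = all(connected(n, es)
          for m in range(bound + 1, len(all_pairs) + 1)
          for es in combinations(all_pairs, m))

# ... and the bound is tight: K_{n-1} plus an isolated vertex has exactly `bound` edges.
tight = not connected(n, list(combinations(range(n - 1), 2)))
```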
 | habedi/stack-exchange-dataset |
6,807 | Matching girls with boys without mutual attraction (variant of maximum bipartite matching) | <p>Let us say you have a group of guys and and a group of girls. Each girl is either attracted to a guy or not, and vice versa. You want to match as many people as possible to a partner they like.</p>

<p>Does this problem have a name? Is it feasibly solvable? Sounds hard to me...</p>

<p>PS: note that since the attraction is not necessarily mutual, the standard max-flow solution does not work.</p>
 | algorithms graphs bipartite matching | 1 | Matching girls with boys without mutual attraction (variant of maximum bipartite matching) -- (algorithms graphs bipartite matching)
<p>Let us say you have a group of guys and a group of girls. Each girl is either attracted to a guy or not, and vice versa. You want to match as many people as possible to a partner they like.</p>

<p>Does this problem have a name? Is it feasibly solvable? Sounds hard to me...</p>

<p>PS: note that since the attraction is not necessarily mutual, the standard max-flow solution does not work.</p>
 | habedi/stack-exchange-dataset |
6,809 | LL grammars and left-recursivity | <p>Why are LL(k) and LL(∞) incompatible with left recursion? I understand that an LL(k) language can support left recursion provided that any ambiguity can be resolved with k lookahead tokens. But with an LL(∞) grammar, which types of ambiguity can't be resolved?</p>
 | formal grammars parsers left recursion | 1 | LL grammars and left-recursivity -- (formal grammars parsers left recursion)
<p>Why are LL(k) and LL(∞) incompatible with left recursion? I understand that an LL(k) language can support left recursion provided that any ambiguity can be resolved with k lookahead tokens. But with an LL(∞) grammar, which types of ambiguity can't be resolved?</p>
 | habedi/stack-exchange-dataset |
6,812 | Who (and when) first defined interval graphs? | <p>I've been searching google scholar for references and narrowed down the first mention to somewhere around <a href="http://books.google.com/ngrams/graph?content=interval%20graph&year_start=1800&year_end=2000&corpus=15&smoothing=0&share=">1963</a> with a very weird jitter in 1949.</p>

<p>So, I'm trying to track down the original paper introducing interval graphs for citation, but it's been rather elusive so far.</p>
 | graphs reference request history | 1 | Who (and when) first defined interval graphs? -- (graphs reference request history)
<p>I've been searching google scholar for references and narrowed down the first mention to somewhere around <a href="http://books.google.com/ngrams/graph?content=interval%20graph&year_start=1800&year_end=2000&corpus=15&smoothing=0&share=">1963</a> with a very weird jitter in 1949.</p>

<p>So, I'm trying to track down the original paper introducing interval graphs for citation, but it's been rather elusive so far.</p>
 | habedi/stack-exchange-dataset |
6,813 | How does increasing the page size affect the number of page faults? | <p>If we let the physical memory size remain constant,</p>

<ul>
<li>What effect does the size of the page have on the number of frames? </li>
<li>What effect does the number of frames have on the number of page faults?</li>
</ul>

<p>Also, please provide reference strings as an example. </p>
 | operating systems memory management virtual memory paging | 1 | How does increasing the page size affect the number of page faults? -- (operating systems memory management virtual memory paging)
<p>If we let the physical memory size remain constant,</p>

<ul>
<li>What effect does the size of the page have on the number of frames? </li>
<li>What effect does the number of frames have on the number of page faults?</li>
</ul>

<p>Also, please provide reference strings as an example. </p>
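With physical memory fixed, the number of frames is memory size divided by page size, so doubling the page size halves the frame count. Whether that helps depends on locality, which two toy reference strings can illustrate under FIFO replacement (the helper name and the byte-addressed strings are made up for the example):

```python
from collections import deque

def fifo_faults(addresses, page_size, mem_bytes):
    """Count page faults under FIFO replacement.
    frames = mem_bytes // page_size: bigger pages -> fewer frames."""
    frames = mem_bytes // page_size
    resident = deque()              # resident page numbers, in FIFO order
    faults = 0
    for addr in addresses:
        page = addr // page_size
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()  # evict the oldest page
            resident.append(page)
    return faults

dense  = [0, 1, 2, 3] * 2   # good spatial locality: neighbouring bytes
sparse = [0, 2, 4, 6] * 2   # touches only one byte per 2-byte page
```

With 4 bytes of memory: the dense string drops from 4 faults (page size 1) to 2 (page size 2), because each larger page prefetches a neighbour that is used. The sparse string goes from 4 faults to 8, because larger pages waste frame space on bytes that are never touched and the 2 remaining frames thrash.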
 | habedi/stack-exchange-dataset |
6,815 | What is the TAK function for? | <p>We covered this in class today. I understand the mechanics of it, but aside from being a nice example of recursion, does it serve any purpose? </p>

<p><img src="https://i.stack.imgur.com/DOcD0.png" alt="enter image description here"></p>

<p>Searching the web reveals lots of pages with the formula and its implementation in code; some talk about the author, but nothing about its purpose.</p>
 | algorithms recursion | 1 | What is the TAK function for? -- (algorithms recursion)
<p>We covered this in class today. I understand the mechanics of it, but aside from being a nice example of recursion, does it serve any purpose? </p>

<p><img src="https://i.stack.imgur.com/DOcD0.png" alt="enter image description here"></p>

<p>Searching the web reveals lots of pages with the formula and its implementation in code; some talk about the author, but nothing about its purpose.</p>
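The formula in the image is not searchable, so for reference here is one common variant in Python (note that sources differ: Takeuchi's original tarai function returns y in the base case, while this McCarthy-style variant returns z). Its practical purpose was largely as a benchmark: the deeply nested recursion stresses function-call overhead, which is why it was used to compare Lisp implementations.

```python
def tak(x, y, z):
    """McCarthy-style variant of Takeuchi's function: heavily recursive,
    historically used to benchmark function-call performance."""
    if x <= y:
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))
```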
 | habedi/stack-exchange-dataset |
6,823 | Why is this $f(n) \leq 6n^3 + n^2 \log n \in O(n^3)$ for all $n \geq 1$? | <p>I'm currently studying for an algorithms midterm in about 2 days and am reading from the beginning of the course, and stumbled upon this when I actually looked at the examples.</p>

<p>The question equation: $f(n) = 6n^3 + n^2\log n$</p>

<p>The exact line written for the answer is: $f(n) \leq 6n^3 + n^2 \cdot n$, for all $n \geq 1$, since $\log n \leq n$</p>

<p>First of all, I don't really see why the logarithm was removed or why it actually matters when the dominant piece is the $6n^3$. I also don't get why it's $n \geq 1$ instead of $n \geq 6$ (unless it's a continuation of the first one).</p>

<p>Been staring at it for about 15 minutes and still not getting how it comes down to $n \geq 1$. Would anybody be kind enough to give me a hint as to what's wrong?</p>
 | asymptotics landau notation | 1 | Why is this $f(n) \leq 6n^3 + n^2 \log n \in O(n^3)$ for all $n \geq 1$? -- (asymptotics landau notation)
<p>I'm currently studying for an algorithms midterm in about 2 days and am reading from the beginning of the course, and stumbled upon this when I actually looked at the examples.</p>

<p>The question equation: $f(n) = 6n^3 + n^2\log n$</p>

<p>The exact line written for the answer is: $f(n) \leq 6n^3 + n^2 \cdot n$, for all $n \geq 1$, since $\log n \leq n$</p>

<p>First of all, I don't really see why the logarithm was removed or why it actually matters when the dominant piece is the $6n^3$. I also don't get why it's $n \geq 1$ instead of $n \geq 6$ (unless it's a continuation of the first one).</p>

<p>Been staring at it for about 15 minutes and still not getting how it comes down to $n \geq 1$. Would anybody be kind enough to give me a hint as to what's wrong?</p>
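The logarithm is not "removed" — it is bounded: since $\log n \leq n$ for $n \geq 1$, we get $n^2 \log n \leq n^3$ and hence $f(n) \leq 7n^3$, which witnesses $f \in O(n^3)$ with constants $c = 7$, $n_0 = 1$ (no need for $n_0 = 6$; the 6 is absorbed into $c$). A quick numeric sanity check, taking log base 2:

```python
from math import log2

# For n >= 1, log2(n) <= n, hence n^2 * log2(n) <= n^3 and f(n) <= 7 * n^3.
ok = all(6 * n**3 + n**2 * log2(n) <= 7 * n**3 for n in range(1, 2001))
```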
 | habedi/stack-exchange-dataset |
6,833 | Solving recurrence with logarithm squared $T(n)=2T(n/2) + n \log^2n$ | <p>$T(n)=2T(n/2) + n\log^2(n)$.</p>

<p>If I try to substitute $m = \log(n)$ I end up with </p>

<p>$T(2^m)=2 T(2^{m-1}) + 2^m\log^{2}(2^m)$.</p>

<p>Which isn't helpful to me. Any clues?</p>

<p>PS: I hope this isn't too localized. I specified that the problem has a squared logarithm, which should make it findable for others wondering about the same thing.</p>
 | asymptotics proof techniques recurrence relation | 1 | Solving recurrence with logarithm squared $T(n)=2T(n/2) + n \log^2n$ -- (asymptotics proof techniques recurrence relation)
<p>$T(n)=2T(n/2) + n\log^2(n)$.</p>

<p>If I try to substitute $m = \log(n)$ I end up with </p>

<p>$T(2^m)=2 T(2^{m-1}) + 2^m\log^{2}(2^m)$.</p>

<p>Which isn't helpful to me. Any clues?</p>

<p>PS: I hope this isn't too localized. I specified that the problem has a squared logarithm, which should make it findable for others wondering about the same thing.</p>
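Unrolling instead of substituting: with $T(1)=0$ and $n=2^m$, level $i$ contributes $n\,\lg^2(n/2^i)$, so $T(n) = n\sum_{k=1}^{m} k^2 = n\,\frac{m(m+1)(2m+1)}{6} = \Theta(n \log^3 n)$. A numeric check of that closed form (assuming the base case $T(1)=0$):

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n * log2(n) ** 2

# For n = 2^m: T(n) = n * m*(m+1)*(2m+1)/6, i.e., Theta(n log^3 n).
n = 2 ** 16
ratio = T(n) / (n * log2(n) ** 3)   # should equal (m*(m+1)*(2m+1)/6) / m^3 for m = 16
```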
 | habedi/stack-exchange-dataset |
6,839 | Minimum cost subset of sensors covering targets | <p>I have a dynamic programming problem: </p>

<blockquote>
 <p>If I have a set of sensors covering targets (a target might be
covered by multiple sensors), how can I find the minimum cost subset of
 sensors covering all targets given each sensor has a cost?</p>
</blockquote>

<p>I have thought a lot about this, but I can't derive the recursive formula needed to write my program. The greedy algorithm does not always provide the correct minimum cost subset. My problem is that sensors overlap in covering targets. Any help?</p>

<p><strong>Example:</strong> I have a set of sensors $\{s_1,s_2,s_3\}$ with costs $\{1,\frac{5}{2},2\}$ and 3 targets $\{t_1,t_2,t_3\}$. The sensors cover $\{t_1,t_2\}$, $\{t_1,t_2,t_3\}$, and $\{t_2,t_3\}$ respectively, and I need to get the minimum cost subset by dynamic programming. For the above example, the greedy algorithm would give $s_1,s_3$ but the right answer is $s_2$ only.</p>
 | algorithms dynamic programming | 1 | Minimum cost subset of sensors covering targets -- (algorithms dynamic programming)
<p>I have a dynamic programming problem: </p>

<blockquote>
 <p>If I have a set of sensors covering targets (a target might be
covered by multiple sensors), how can I find the minimum cost subset of
 sensors covering all targets given each sensor has a cost?</p>
</blockquote>

<p>I have thought a lot about this, but I can't derive the recursive formula needed to write my program. The greedy algorithm does not always provide the correct minimum cost subset. My problem is that sensors overlap in covering targets. Any help?</p>

<p><strong>Example:</strong> I have a set of sensors $\{s_1,s_2,s_3\}$ with costs $\{1,\frac{5}{2},2\}$ and 3 targets $\{t_1,t_2,t_3\}$. The sensors cover $\{t_1,t_2\}$, $\{t_1,t_2,t_3\}$, and $\{t_2,t_3\}$ respectively, and I need to get the minimum cost subset by dynamic programming. For the above example, the greedy algorithm would give $s_1,s_3$ but the right answer is $s_2$ only.</p>
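This is weighted set cover; one recursive formula that handles overlaps is over subsets of still-uncovered targets: best(U) = min over sensors s with cover(s) ∩ U ≠ ∅ of cost(s) + best(U \ cover(s)). A bitmask sketch on the question's example (naming is mine):

```python
from functools import lru_cache

# Bit j of a mask is set iff target t_{j+1} is covered.
masks = [0b011, 0b111, 0b110]   # s1 covers {t1,t2}; s2 covers {t1,t2,t3}; s3 covers {t2,t3}
costs = [1, 5 / 2, 2]
ALL = 0b111

@lru_cache(maxsize=None)
def best(uncovered):
    """Minimum cost to cover the targets in the 'uncovered' bitmask."""
    if uncovered == 0:
        return 0.0
    out = float("inf")
    for m, c in zip(masks, costs):
        if m & uncovered:        # sensor helps, so the mask strictly shrinks
            out = min(out, c + best(uncovered & ~m & ALL))
    return out
```

Here `best(ALL)` returns 5/2, picking $s_2$ alone, while greedy-by-coverage-per-cost would assemble $s_1, s_3$ at total cost 3. The table has $2^{|targets|}$ states, so this is practical when the number of targets is moderate.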
 | habedi/stack-exchange-dataset |
6,840 | A programming language that can only implement computable bijective functions? | <p>Are there programming languages (or logics) that can implement (or express) a function $f:\mathbb{N}\to \mathbb{N}$ if and only if $f$ is a computable bijection?</p>
 | reference request programming languages logic reversible computing | 1 | A programming language that can only implement computable bijective functions? -- (reference request programming languages logic reversible computing)
<p>Are there programming languages (or logics) that can implement (or express) a function $f:\mathbb{N}\to \mathbb{N}$ if and only if $f$ is a computable bijection?</p>
 | habedi/stack-exchange-dataset |
6,843 | How can repeated addition/multiplication be done in polynomial time? | <p>I can see how adding 2 unsigned $n$-bit values is $O(n)$. We just go from the rightmost digits to the leftmost digits and add the digits up sequentially. We can also perform multiplication in polynomial time ($O(n^2)$) via the algorithm we all learned in grade school.</p>

<p>However, how can we add up or multiply say $i$ numbers together in polynomial time? After we add up 2 numbers together, we get a bigger number that will require more bits to represent. Same with multiplication.</p>

<p>How can we ensure that these extra bits do not produce exponential blowup?</p>
 | time complexity arithmetic integers | 1 | How can repeated addition/multiplication be done in polynomial time? -- (time complexity arithmetic integers)
<p>I can see how adding 2 unsigned $n$-bit values is $O(n)$. We just go from the rightmost digits to the leftmost digits and add the digits up sequentially. We can also perform multiplication in polynomial time ($O(n^2)$) via the algorithm we all learned in grade school.</p>

<p>However, how can we add up or multiply say $i$ numbers together in polynomial time? After we add up 2 numbers together, we get a bigger number that will require more bits to represent. Same with multiplication.</p>

<p>How can we ensure that these extra bits do not produce exponential blowup?</p>
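The bit growth is bounded, not exponential: the sum of $i$ $n$-bit numbers is at most $i(2^n - 1) < 2^{n + \lceil \log_2 i \rceil}$, so it has at most $n + \lceil \log_2 i \rceil$ bits; the product is below $2^{in}$, so it has at most $in$ bits. Both are polynomial in the input size $in$, so the schoolbook algorithms stay polynomial overall. A worst-case check:

```python
import math

i, n = 50, 16
nums = [(1 << n) - 1] * i   # worst case: i copies of the largest n-bit value

s = sum(nums)               # at most n + ceil(log2 i) bits
p = math.prod(nums)         # at most i * n bits
```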
 | habedi/stack-exchange-dataset |
6,846 | is there an example of an algorithm that has O(1/n)? | <blockquote>
 <p><strong>Possible Duplicate:</strong><br>
 <a href="https://cs.stackexchange.com/questions/3495/complexity-inversely-propotional-to-n">Complexity inversely propotional to $n$</a> </p>
</blockquote>



<p>I'm curious if anyone's come up with a problem or method where, as $n \to \infty$, $t \to 0$. Are there any such cases found in quantum computing?</p>
 | time complexity | 1 | is there an example of an algorithm that has O(1/n)? -- (time complexity)
<blockquote>
 <p><strong>Possible Duplicate:</strong><br>
 <a href="https://cs.stackexchange.com/questions/3495/complexity-inversely-propotional-to-n">Complexity inversely propotional to $n$</a> </p>
</blockquote>



<p>I'm curious if anyone's come up with a problem or method where, as $n \to \infty$, $t \to 0$. Are there any such cases found in quantum computing?</p>
 | habedi/stack-exchange-dataset |
6,847 | Is the k-clique problem NP-complete? | <p>In this Wikipedia article about the <a href="http://en.wikipedia.org/wiki/Clique_%28graph_theory%29">Clique problem in graph theory</a> it states in the beginning that the problem of finding a clique of size K, in a graph G is NP-complete:</p>

<blockquote>
 <p>Cliques have also been studied in computer science: finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result many algorithms for finding cliques have been studied.</p>
</blockquote>

<p>But in this other Wikipedia article about the <a href="http://en.wikipedia.org/wiki/Clique_problem">Clique problem in CS</a>
it says that solving the problem for a fixed size k is in P: it can be brute forced in polynomial time.</p>

<blockquote>
 <p>A brute force algorithm to test whether a graph G contains a k-vertex clique, and to find any such clique that it contains, is to examine each subgraph with at least k vertices and check to see whether it forms a clique. This algorithm takes time O(n^k k^2): there are O(n^k) subgraphs to check, each of which has O(k^2) edges whose presence in G needs to be checked. Thus, the problem may be solved in polynomial time whenever k is a fixed constant. When k is part of the input to the problem, however, the time is exponential.</p>
</blockquote>

<p>Is there something I am missing here? Maybe a difference in the wording of the problem? And what does the last sentence mean, that "When k is part of the input to the problem, however, the time is exponential."? Why is there a difference when the k is part of the input to the problem?</p>

<p>My idea is that to find a clique of size k in a graph G, we first choose a subset of k nodes from G, and test whether they are all adjacent to each other, which can be done in constant time. And repeat this until we have a clique of size k. The number of sets of k nodes we can choose from G is n! / (k! (n-k)!). </p>
 | complexity theory graphs np complete complexity classes | 1 | Is the k-clique problem NP-complete? -- (complexity theory graphs np complete complexity classes)
<p>In this Wikipedia article about the <a href="http://en.wikipedia.org/wiki/Clique_%28graph_theory%29">Clique problem in graph theory</a> it states in the beginning that the problem of finding a clique of size K, in a graph G is NP-complete:</p>

<blockquote>
 <p>Cliques have also been studied in computer science: finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result many algorithms for finding cliques have been studied.</p>
</blockquote>

<p>But in this other Wikipedia article about the <a href="http://en.wikipedia.org/wiki/Clique_problem">Clique problem in CS</a>
it says that solving the problem for a fixed size k is in P: it can be brute forced in polynomial time.</p>

<blockquote>
 <p>A brute force algorithm to test whether a graph G contains a k-vertex clique, and to find any such clique that it contains, is to examine each subgraph with at least k vertices and check to see whether it forms a clique. This algorithm takes time O(n^k k^2): there are O(n^k) subgraphs to check, each of which has O(k^2) edges whose presence in G needs to be checked. Thus, the problem may be solved in polynomial time whenever k is a fixed constant. When k is part of the input to the problem, however, the time is exponential.</p>
</blockquote>

<p>Is there something I am missing here? Maybe a difference in the wording of the problem? And what does the last sentence mean, that "When k is part of the input to the problem, however, the time is exponential."? Why is there a difference when the k is part of the input to the problem?</p>

<p>My idea is that to find a clique of size k in a graph G, we first choose a subset of k nodes from G, and test whether they are all adjacent to each other, which can be done in constant time. And repeat this until we have a clique of size k. The number of sets of k nodes we can choose from G is n! / (k! (n-k)!). </p>
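Two small corrections to that idea resolve the apparent contradiction: checking one subset is not constant time but $O(k^2)$ (it must inspect $\binom{k}{2}$ pairs), and there are $O(n^k)$ subsets, giving $O(n^k k^2)$. For a fixed constant $k$ that is a polynomial in $n$; when $k$ is part of the input, $n^k$ grows with the input and the decision problem is the NP-complete one. A brute-force sketch:

```python
from itertools import combinations

def has_clique(n, edges, k):
    """Try all C(n, k) vertex subsets; each check inspects C(k, 2) pairs,
    so the running time is O(n^k * k^2) -- polynomial only for fixed k."""
    E = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in E for p in combinations(sub, 2))
               for sub in combinations(range(n), k))

# K4 minus the edge (2, 3): the largest clique has size 3.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
```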
 | habedi/stack-exchange-dataset |
6,858 | Are monoids useful in optimization? | <p>Many common operations are <a href="http://en.wikipedia.org/wiki/Monoid">monoids</a>. Haskell has leveraged this observation to make many higher-order functions more generic (<code>Foldable</code> being one example).</p>

<p>There is one obvious way in which monoids can be used to improve performance: the programmer is asserting the operation's associativity, and so operations can be parallelized. </p>

<p>I'm curious if there are any other ways a compiler could optimize the code, knowing that we're dealing with a monoid. </p>
 | optimization compilers category theory | 1 | Are monoids useful in optimization? -- (optimization compilers category theory)
<p>Many common operations are <a href="http://en.wikipedia.org/wiki/Monoid">monoids</a>. Haskell has leveraged this observation to make many higher-order functions more generic (<code>Foldable</code> being one example).</p>

<p>There is one obvious way in which monoids can be used to improve performance: the programmer is asserting the operation's associativity, and so operations can be parallelized. </p>

<p>I'm curious if there are any other ways a compiler could optimize the code, knowing that we're dealing with a monoid. </p>
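The parallelization claim rests on a simple identity: associativity (plus an identity element) lets a fold be split into independent per-chunk folds whose results are combined afterwards, with the same answer as the sequential fold. A sketch in Python (the helper name is mine; the chunk folds are the parts a compiler or runtime could run in parallel):

```python
from functools import reduce
from itertools import islice

def chunked_fold(op, identity, xs, chunk=4):
    """Fold each chunk independently (these folds could run in parallel, since
    'op' is associative), then combine the per-chunk partial results."""
    it = iter(xs)
    partials = []
    while True:
        block = list(islice(it, chunk))
        if not block:
            break
        partials.append(reduce(op, block, identity))
    return reduce(op, partials, identity)

data = list(range(1, 11))
same = chunked_fold(lambda a, b: a + b, 0, data) == sum(data)
```

The same algebraic fact underlies tree-shaped reductions in MapReduce-style systems; beyond parallelism, a compiler that knows `op` is associative is also free to reassociate for strength reduction or vectorization, though whether a given compiler exploits this varies.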
 | habedi/stack-exchange-dataset |
6,859 | Increasing entropy of random walk | <p>Let $P$ be a transition matrix of a random walk in an undirected <strong>(possibly non-regular)</strong> graph $G$. Let $\pi$ be a distribution on $V(G)$. The Shannon entropy of $\pi$ is defined by </p>

<p>$$H(\pi)=-\sum_{v \in V(G)}\pi_v\cdot\log(\pi_v).$$</p>

<p>How do we prove that $H(P\pi)\ge H(\pi)$ ?</p>
 | entropy random walks | 1 | Increasing entropy of random walk -- (entropy random walks)
<p>Let $P$ be a transition matrix of a random walk in an undirected <strong>(possibly non-regular)</strong> graph $G$. Let $\pi$ be a distribution on $V(G)$. The Shannon entropy of $\pi$ is defined by </p>

<p>$$H(\pi)=-\sum_{v \in V(G)}\pi_v\cdot\log(\pi_v).$$</p>

<p>How do we prove that $H(P\pi)\ge H(\pi)$ ?</p>
 | habedi/stack-exchange-dataset |