id | title | body | tags | label | text | source
|---|---|---|---|---|---|---|
2,240 | Do Higher Order Functions provide more power to Functional Programming? | <p><em>I've asked a similar question <a href="https://cstheory.stackexchange.com/questions/11652/does-high-order-functions-provide-more-power-to-functional-programming">on cstheory.SE</a>.</em></p>

<p>According to <a href="https://stackoverflow.com/a/1990580/209629">this answer on Stack Overflow</a>, there is an algorithm that in a non-lazy pure functional programming language has $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding laziness to the FP language would make the algorithm $\Omega(n)$.</p>

<p>Is there any equivalent relationship comparing an FP language with and without higher-order functions? Is it still Turing complete? If it is, does the lack of higher-order functions make the language less "powerful" or efficient?</p>
 | complexity theory lambda calculus functional programming turing completeness | 1 | Do Higher Order Functions provide more power to Functional Programming? -- (complexity theory lambda calculus functional programming turing completeness)
<p><em>I've asked a similar question <a href="https://cstheory.stackexchange.com/questions/11652/does-high-order-functions-provide-more-power-to-functional-programming">on cstheory.SE</a>.</em></p>

<p>According to <a href="https://stackoverflow.com/a/1990580/209629">this answer on Stack Overflow</a>, there is an algorithm that in a non-lazy pure functional programming language has $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding laziness to the FP language would make the algorithm $\Omega(n)$.</p>

<p>Is there any equivalent relationship comparing an FP language with and without higher-order functions? Is it still Turing complete? If it is, does the lack of higher-order functions make the language less "powerful" or efficient?</p>
 | habedi/stack-exchange-dataset |
2,249 | voting scheme for peaceful coexistence | <p>Many areas in the world suffer from conflicts between two groups (usually ethnic or religious). For the purpose of this question, I assume that most people on both sides want to live in peace, but there are a few extremists who incite hatred and violence. The goal of this question is to find an objective way to filter out those extremists.</p>

<p>Imagine a town with 2 conflicting groups, A and B, each with N people. I propose the following voting scheme (which I explain from the point of view of group A, but it is entirely symmetric for the other group):</p>

<ul>
<li><strong>equality-rule</strong>: The number of people in each group must always remain equal.</li>
<li><strong>expel-vote</strong>: At any time, each person of group A can claim that a certain person of group B is "extremist", and start a vote. If more than 50% of the people in group A agree, then that certain person is expelled from town.</li>
<li><strong>counter-vote</strong>: To keep the equality-rule, a single person of group A should also leave the town. This person is selected by a vote among the people in group B (i.e. each person in group B votes for a single person in group A, and the one with the most votes is expelled from town).</li>
</ul>

<p>My intuition is that:</p>

<ul>
<li>On one hand, this scheme encourages people to be nice to people of the other group, so that they won't be subject to expel-votes.</li>
<li>On the other hand, the equality rule encourages people to think twice before starting an expel-vote, because this will put them in danger of expulsion in the counter-vote.</li>
</ul>

<p>[ADDITION]
Several questions can be asked about this scheme, for example:</p>

<ul>
<li>Under what conditions does it diverge to a situation where people vote and counter-vote until the number of citizens in one of the groups reaches 0?</li>
<li>Under what conditions does it stabilize in a situation where both groups have more than 0 citizens?</li>
<li>Under what conditions is the stable number of citizens more than half the initial number?</li>
</ul>

<p>Note that this scheme does not even try to reach an objective measure of "extremism". The only goal is stability.</p>

<p>I would like to know: has this voting scheme been studied in the past?</p>
 | reference request game theory voting | 1 | voting scheme for peaceful coexistence -- (reference request game theory voting)
<p>Many areas in the world suffer from conflicts between two groups (usually ethnic or religious). For the purpose of this question, I assume that most people on both sides want to live in peace, but there are a few extremists who incite hatred and violence. The goal of this question is to find an objective way to filter out those extremists.</p>

<p>Imagine a town with 2 conflicting groups, A and B, each with N people. I propose the following voting scheme (which I explain from the point of view of group A, but it is entirely symmetric for the other group):</p>

<ul>
<li><strong>equality-rule</strong>: The number of people in each group must always remain equal.</li>
<li><strong>expel-vote</strong>: At any time, each person of group A can claim that a certain person of group B is "extremist", and start a vote. If more than 50% of the people in group A agree, then that certain person is expelled from town.</li>
<li><strong>counter-vote</strong>: To keep the equality-rule, a single person of group A should also leave the town. This person is selected by a vote among the people in group B (i.e. each person in group B votes for a single person in group A, and the one with the most votes is expelled from town).</li>
</ul>

<p>My intuition is that:</p>

<ul>
<li>On one hand, this scheme encourages people to be nice to people of the other group, so that they won't be subject to expel-votes.</li>
<li>On the other hand, the equality rule encourages people to think twice before starting an expel-vote, because this will put them in danger of expulsion in the counter-vote.</li>
</ul>

<p>[ADDITION]
Several questions can be asked about this scheme, for example:</p>

<ul>
<li>Under what conditions does it diverge to a situation where people vote and counter-vote until the number of citizens in one of the groups reaches 0?</li>
<li>Under what conditions does it stabilize in a situation where both groups have more than 0 citizens?</li>
<li>Under what conditions is the stable number of citizens more than half the initial number?</li>
</ul>

<p>Note that this scheme does not even try to reach an objective measure of "extremism". The only goal is stability.</p>

<p>I would like to know: has this voting scheme been studied in the past?</p>
 | habedi/stack-exchange-dataset |
2,251 | Why is it seemingly easier to resume torrent downloads than browser downloads? | <p>I really wonder how torrent downloads can be resumed at a later point in time.
If such a technology exists, then why is it not possible in browsers?</p>

<p>It is often not possible to pause a browser download so that it can be resumed at a later point in time. Often, the download will start again from the beginning. But in the case of a torrent download, you can resume at any time.</p>

<p>One reason I could think of is that a browser makes an HTTP connection to the server which contains the file, and when this connection breaks, there is no data regarding how much of the file was saved, so no resume is possible.</p>

<p>Is there a fundamental reason why torrent downloads are easier to resume than web downloads?</p>
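For what it's worth, plain HTTP does support resuming: a client can send a <code>Range</code> request header, and a server that supports byte ranges replies with <code>206 Partial Content</code> instead of <code>200 OK</code>. A minimal sketch (the <code>url</code> and <code>path</code> arguments are placeholders):

```python
from urllib.request import Request, urlopen

def range_header(offset):
    """The Range request header asks the server for bytes from
    `offset` onward; a server that supports ranges answers with
    '206 Partial Content' instead of '200 OK'."""
    return {"Range": f"bytes={offset}-"} if offset > 0 else {}

def resume(url, path, offset):
    """Append the remaining bytes of `url` to the partial file at
    `path`, starting at `offset` (typically the partial file's size).
    The server must advertise 'Accept-Ranges: bytes' for this to work."""
    req = Request(url, headers=range_header(offset))
    with urlopen(req) as resp, open(path, "ab") as out:
        out.write(resp.read())
```

Torrent clients get resumability more cheaply because the protocol already splits every file into hashed pieces, so a client always knows exactly which pieces it holds.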
 | computer networks communication protocols | 1 | Why is it seemingly easier to resume torrent downloads than browser downloads? -- (computer networks communication protocols)
<p>I really wonder how torrent downloads can be resumed at a later point in time.
If such a technology exists, then why is it not possible in browsers?</p>

<p>It is often not possible to pause a browser download so that it can be resumed at a later point in time. Often, the download will start again from the beginning. But in the case of a torrent download, you can resume at any time.</p>

<p>One reason I could think of is that a browser makes an HTTP connection to the server which contains the file, and when this connection breaks, there is no data regarding how much of the file was saved, so no resume is possible.</p>

<p>Is there a fundamental reason why torrent downloads are easier to resume than web downloads?</p>
 | habedi/stack-exchange-dataset |
2,257 | Generating number of possibilities of popping two stacks to two other stacks | <p>Context: I'm working on <a href="https://stackoverflow.com/questions/10875675/how-to-find-out-all-the-popping-out-possibilities-of-two-stacks">this problem</a>:</p>

<blockquote>
 <p>There are two stacks here:</p>

<pre><code>A: 1,2,3,4 <- Stack Top
 B: 5,6,7,8
</code></pre>
 
 <p>A and B will pop out to other two stacks: C and D. For example: </p>

<pre><code> pop(A),push(C),pop(B),push(D).
</code></pre>
 
 <p>If an item has been popped out, it must be pushed to C or D immediately.</p>
</blockquote>

<p>The goal is to enumerate all possible stack contents of C and D after moving all elements.</p>

<p>More elaborately, the problem is this: If you have two source stacks with $n$ unique elements (all are unique, not just per stack) and two destination stacks and you pop everything off each source stack to each destination stack, generate all unique destination stacks - call this $S$.</p>

<p>The stack part is irrelevant, mostly, other than that it enforces a partial order on the result. If we have two source stacks and one destination stack, this is the same as generating all permutations without repetition of a set of $2n$ elements with $n$ 'A' elements and $n$ 'B' elements. Call this $O$.</p>

<p>Thus</p>

<p>$\qquad \displaystyle |O| = (2n)!/(n!)^2$</p>

<p>Now observe all possible bit sequences of length $2n$ (bit 0 representing popping source stack A/B and bit 1 pushing to destination stack C/D); call this $B$. $|B| = 2^{2n}$. We can surely generate $B$ and check if it has the correct number of pops from each destination stack to generate $|S|$. It's a little faster to recursively generate these to ensure their validity. It's faster still to generate $B$ and $O$ and then simulate, but it still has the issue of needing to check for duplicates.</p>

<p>My question</p>

<p>Is there a more efficient way to generate these?</p>

<p>Through simulation I found that the result follows <a href="http://oeis.org/A084773" rel="nofollow noreferrer">this sequence</a>, which is related to Delannoy numbers; I know very little about them, in case this suggests anything.</p>

<p>Here is my Python code:</p>

<pre><code>def all_subsets(items):
    if len(items) == 0:
        return [set()]
    subsets = all_subsets(items[1:])
    return [subset | {items[0]} for subset in subsets] + subsets

def result_sequences(perms):
    for perm in perms:
        whole_s = list(range(len(perm)))
        whole_set = set(whole_s)
        for send_to_c in all_subsets(whole_s):
            send_to_d = whole_set - set(send_to_c)
            yield [perm, send_to_c, send_to_d]

n = 4
# unique_permutations (defined elsewhere) yields the distinct
# arrangements of n 'a's and n 'b's
perms_ = list(unique_permutations([n, n], ['a', 'b']))
result = list(result_sequences(perms_))
</code></pre>
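The snippet calls a `unique_permutations` helper that is not shown; a possible stand-in (name and signature inferred from the call site):

```python
from itertools import permutations

def unique_permutations(counts, symbols):
    """Possible stand-in for the helper above: the distinct
    arrangements of symbols[i] repeated counts[i] times, e.g.
    unique_permutations([2, 2], ['a', 'b']) gives the
    4!/(2!2!) = 6 distinct orderings of 'aabb'."""
    items = [s for c, s in zip(counts, symbols) for _ in range(c)]
    return sorted(set(permutations(items)))
```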
 | algorithms combinatorics efficiency | 1 | Generating number of possibilities of popping two stacks to two other stacks -- (algorithms combinatorics efficiency)
<p>Context: I'm working on <a href="https://stackoverflow.com/questions/10875675/how-to-find-out-all-the-popping-out-possibilities-of-two-stacks">this problem</a>:</p>

<blockquote>
 <p>There are two stacks here:</p>

<pre><code>A: 1,2,3,4 <- Stack Top
 B: 5,6,7,8
</code></pre>
 
 <p>A and B will pop out to other two stacks: C and D. For example: </p>

<pre><code> pop(A),push(C),pop(B),push(D).
</code></pre>
 
 <p>If an item has been popped out, it must be pushed to C or D immediately.</p>
</blockquote>

<p>The goal is to enumerate all possible stack contents of C and D after moving all elements.</p>

<p>More elaborately, the problem is this: If you have two source stacks with $n$ unique elements (all are unique, not just per stack) and two destination stacks and you pop everything off each source stack to each destination stack, generate all unique destination stacks - call this $S$.</p>

<p>The stack part is irrelevant, mostly, other than that it enforces a partial order on the result. If we have two source stacks and one destination stack, this is the same as generating all permutations without repetition of a set of $2n$ elements with $n$ 'A' elements and $n$ 'B' elements. Call this $O$.</p>

<p>Thus</p>

<p>$\qquad \displaystyle |O| = (2n)!/(n!)^2$</p>

<p>Now observe all possible bit sequences of length $2n$ (bit 0 representing popping source stack A/B and bit 1 pushing to destination stack C/D); call this $B$. $|B| = 2^{2n}$. We can surely generate $B$ and check if it has the correct number of pops from each destination stack to generate $|S|$. It's a little faster to recursively generate these to ensure their validity. It's faster still to generate $B$ and $O$ and then simulate, but it still has the issue of needing to check for duplicates.</p>

<p>My question</p>

<p>Is there a more efficient way to generate these?</p>

<p>Through simulation I found that the result follows <a href="http://oeis.org/A084773" rel="nofollow noreferrer">this sequence</a>, which is related to Delannoy numbers; I know very little about them, in case this suggests anything.</p>

<p>Here is my Python code:</p>

<pre><code>def all_subsets(items):
    if len(items) == 0:
        return [set()]
    subsets = all_subsets(items[1:])
    return [subset | {items[0]} for subset in subsets] + subsets

def result_sequences(perms):
    for perm in perms:
        whole_s = list(range(len(perm)))
        whole_set = set(whole_s)
        for send_to_c in all_subsets(whole_s):
            send_to_d = whole_set - set(send_to_c)
            yield [perm, send_to_c, send_to_d]

n = 4
# unique_permutations (defined elsewhere) yields the distinct
# arrangements of n 'a's and n 'b's
perms_ = list(unique_permutations([n, n], ['a', 'b']))
result = list(result_sequences(perms_))
</code></pre>
 | habedi/stack-exchange-dataset |
2,259 | Finding interesting anagrams | <p>Say that $a_1a_2\ldots a_n$ and $b_1b_2\ldots b_n$ are two strings of the same length. An <strong>anagramming</strong> of two strings is a bijective mapping $p:[1\ldots n]\to[1\ldots n]$ such that $a_i = b_{p(i)}$ for each $i$.</p>

<p>There might be more than one anagramming for the same pair of strings. For example, if $a$ is <code>abcab</code> and $b$ is <code>cabab</code>, we have $p_1: [1,2,3,4,5]\to[4,5,1,2,3]$ and $p_2: [1,2,3,4,5] \to [2,5,1,4,3]$, among others.</p>

<p>We'll say that the <strong>weight</strong> $w(p)$ of an anagramming $p$ is the number of cuts one must make in the first string to get chunks that can be rearranged to obtain the second string. Formally, this is the number of values of $i\in[1\ldots n-1]$ for which $p(i)+1\ne p(i+1)$. That is, it is the number of points at which $p$ does <em>not</em> increase by exactly 1. For example, $w(p_1) = 1$ and $w(p_2) = 4$, because $p_1$ cuts <code>12345</code> once, into the chunks <code>123</code> and <code>45</code>, and $p_2$ cuts <code>12345</code> four times, into five chunks.</p>

<p>Suppose there exists an anagramming for two strings $a$ and $b$. Then at least one anagramming must have the least weight. Let's say this one is <strong>lightest</strong>. (There might be multiple lightest anagrammings; I don't care, because I am interested only in the weights.)</p>

<h2>Question</h2>

<p>I want an algorithm which, given two strings for which an anagramming exists, efficiently <strong>yields the exact weight of the lightest anagramming</strong> of the two strings. It is all right if the algorithm also yields a lightest anagramming, but it need not.</p>

<p>It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.</p>

<hr>

<h2>Motivation</h2>

<p>The reason this problem is of interest is as follows. It is very easy to make the computer search the dictionary and find anagrams, pairs of words that contain exactly the same letters. But many of the anagrams produced are uninteresting. For instance, the longest examples to be found in Webster's Second International Dictionary are:</p>

<blockquote>
 <p>cholecystoduodenostomy<br>
 duodenocholecystostomy</p>
</blockquote>

<p>The problem should be clear: these are uninteresting because they admit a very light anagramming that simply exchanges the <code>cholecysto</code>, <code>duodeno</code>, and <code>stomy</code> sections, for a weight of 2. On the other hand, this much shorter example is much more surprising and interesting:</p>

<blockquote>
 <p>coastline<br>
 sectional</p>
</blockquote>

<p>Here the lightest anagramming has weight 8.</p>

<p>I have a program that uses this method to locate interesting anagrams, namely those for which all anagrammings are of high weight. But it does this by generating and weighing all possible anagrammings, which is slow.</p>
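For concreteness, the slow generate-and-weigh baseline described above can be sketched in a few lines (factorial time, so only for small inputs):

```python
from itertools import permutations

def lightest_weight(a, b):
    """Brute-force baseline: try every bijection p with a[i] == b[p(i)]
    (0-indexed) and return the minimum weight, i.e. the number of
    positions where p does not increase by exactly 1."""
    n = len(a)
    best = None
    for p in permutations(range(n)):
        if all(a[i] == b[p[i]] for i in range(n)):
            w = sum(1 for i in range(n - 1) if p[i] + 1 != p[i + 1])
            if best is None or w < best:
                best = w
    return best
```

On the worked example from the question, `lightest_weight("abcab", "cabab")` returns 1, matching $w(p_1)$.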
 | algorithms strings search algorithms natural language processing | 1 | Finding interesting anagrams -- (algorithms strings search algorithms natural language processing)
<p>Say that $a_1a_2\ldots a_n$ and $b_1b_2\ldots b_n$ are two strings of the same length. An <strong>anagramming</strong> of two strings is a bijective mapping $p:[1\ldots n]\to[1\ldots n]$ such that $a_i = b_{p(i)}$ for each $i$.</p>

<p>There might be more than one anagramming for the same pair of strings. For example, if $a$ is <code>abcab</code> and $b$ is <code>cabab</code>, we have $p_1: [1,2,3,4,5]\to[4,5,1,2,3]$ and $p_2: [1,2,3,4,5] \to [2,5,1,4,3]$, among others.</p>

<p>We'll say that the <strong>weight</strong> $w(p)$ of an anagramming $p$ is the number of cuts one must make in the first string to get chunks that can be rearranged to obtain the second string. Formally, this is the number of values of $i\in[1\ldots n-1]$ for which $p(i)+1\ne p(i+1)$. That is, it is the number of points at which $p$ does <em>not</em> increase by exactly 1. For example, $w(p_1) = 1$ and $w(p_2) = 4$, because $p_1$ cuts <code>12345</code> once, into the chunks <code>123</code> and <code>45</code>, and $p_2$ cuts <code>12345</code> four times, into five chunks.</p>

<p>Suppose there exists an anagramming for two strings $a$ and $b$. Then at least one anagramming must have the least weight. Let's say this one is <strong>lightest</strong>. (There might be multiple lightest anagrammings; I don't care, because I am interested only in the weights.)</p>

<h2>Question</h2>

<p>I want an algorithm which, given two strings for which an anagramming exists, efficiently <strong>yields the exact weight of the lightest anagramming</strong> of the two strings. It is all right if the algorithm also yields a lightest anagramming, but it need not.</p>

<p>It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.</p>

<hr>

<h2>Motivation</h2>

<p>The reason this problem is of interest is as follows. It is very easy to make the computer search the dictionary and find anagrams, pairs of words that contain exactly the same letters. But many of the anagrams produced are uninteresting. For instance, the longest examples to be found in Webster's Second International Dictionary are:</p>

<blockquote>
 <p>cholecystoduodenostomy<br>
 duodenocholecystostomy</p>
</blockquote>

<p>The problem should be clear: these are uninteresting because they admit a very light anagramming that simply exchanges the <code>cholecysto</code>, <code>duodeno</code>, and <code>stomy</code> sections, for a weight of 2. On the other hand, this much shorter example is much more surprising and interesting:</p>

<blockquote>
 <p>coastline<br>
 sectional</p>
</blockquote>

<p>Here the lightest anagramming has weight 8.</p>

<p>I have a program that uses this method to locate interesting anagrams, namely those for which all anagrammings are of high weight. But it does this by generating and weighing all possible anagrammings, which is slow.</p>
 | habedi/stack-exchange-dataset |
2,263 | Probabilities of duplicate mail detection by comparing notes among servers | <p>I have the following problem:</p>
<blockquote>
<p>We want to implement a filtering strategy in e-mail servers to reduce the number of spam messages. Each server will have a buffer, and before sending an e-mail, it checks whether there is a duplicate of the same message in its own buffer and contacts k distinct neighboring servers at random to check whether the duplicate is in another buffer. In case any duplicate message is detected, it will be deleted as spam, otherwise it will be sent after all negative replies are received.</p>
<p>Let us assume that there are N mail servers, and that a spammer sends M copies of each spam mail. We assume that all copies are sent simultaneously and that each mail is routed to a mail server randomly.</p>
</blockquote>
<p>Given M, N and k, I need to find out the probabilities that no spam message is deleted (i.e. no server detects spam), that all spam messages are deleted (all servers detect spam), and that spam messages are deleted from at least one server.</p>
<p>So far, I have used combinations without repetition to find out the cases that need to be taken into account for given M and N. Now I need to find out the probability that one server receives at least two copies of a message, but I am at a complete loss. Could you please provide some insight into the problem?</p>
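One sub-question has a well-known closed form: "at least one server receives two or more copies" is exactly the birthday problem, with the N servers playing the role of days and the M copies the role of people. A sketch of that formula plus a Monte Carlo check (this deliberately ignores the k-neighbour lookup part of the scheme):

```python
import random
from math import perm  # math.perm requires Python 3.8+

def p_any_duplicate(M, N):
    """Probability that at least one of N servers receives two or
    more of the M copies (the birthday problem)."""
    if M > N:
        return 1.0                  # pigeonhole: a collision is certain
    # 1 - P(all M copies land on distinct servers)
    return 1 - perm(N, M) / N ** M

def simulate(M, N, trials=100_000):
    """Monte Carlo sanity check of the closed form."""
    hits = sum(
        1 for _ in range(trials)
        if len({random.randrange(N) for _ in range(M)}) < M
    )
    return hits / trials
```

With M = 23 and N = 365 this reproduces the classic birthday-problem value of about 0.507.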
 | combinatorics probability theory | 1 | Probabilities of duplicate mail detection by comparing notes among servers -- (combinatorics probability theory)
<p>I have the following problem:</p>
<blockquote>
<p>We want to implement a filtering strategy in e-mail servers to reduce the number of spam messages. Each server will have a buffer, and before sending an e-mail, it checks whether there is a duplicate of the same message in its own buffer and contacts k distinct neighboring servers at random to check whether the duplicate is in another buffer. In case any duplicate message is detected, it will be deleted as spam, otherwise it will be sent after all negative replies are received.</p>
<p>Let us assume that there are N mail servers, and that a spammer sends M copies of each spam mail. We assume that all copies are sent simultaneously and that each mail is routed to a mail server randomly.</p>
</blockquote>
<p>Given M, N and k, I need to find out the probabilities that no spam message is deleted (i.e. no server detects spam), that all spam messages are deleted (all servers detect spam), and that spam messages are deleted from at least one server.</p>
<p>So far, I have used combinations without repetition to find out the cases that need to be taken into account for given M and N. Now I need to find out the probability that one server receives at least two copies of a message, but I am at a complete loss. Could you please provide some insight into the problem?</p>
 | habedi/stack-exchange-dataset |
2,272 | Representing Negative and Complex Numbers Using Lambda Calculus | <p>Most tutorials on lambda calculus provide examples where positive integers and Booleans can be represented by functions. What about $-1$ and $i$?</p>
 | data structures lambda calculus integers real numbers | 1 | Representing Negative and Complex Numbers Using Lambda Calculus -- (data structures lambda calculus integers real numbers)
<p>Most tutorials on lambda calculus provide examples where positive integers and Booleans can be represented by functions. What about $-1$ and $i$?</p>
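One standard construction, sketched here in Python lambdas rather than raw λ-calculus (the decoding helpers `to_int`/`to_signed` are only for inspection): a signed integer is a pair of Church numerals $(a, b)$ meaning $a - b$, and a Gaussian integer $x + yi$ is a pair of signed integers.

```python
# Church numerals encode the naturals; a pair (a, b) of naturals
# encodes the signed integer a - b; a pair of signed integers then
# encodes a Gaussian integer x + y*i.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):                      # the Church numeral for k >= 0
    c = zero
    for _ in range(k):
        c = succ(c)
    return c

to_int = lambda c: c(lambda x: x + 1)(0)        # decode, for inspection only

pair = lambda a: lambda b: lambda f: f(a)(b)    # Church pairs
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)

neg_one = pair(zero)(church(1))                 # encodes 0 - 1 = -1
to_signed = lambda p: to_int(fst(p)) - to_int(snd(p))

imag_unit = pair(pair(zero)(zero))(pair(church(1))(zero))   # 0 + 1*i
```

Arithmetic on these encodings reduces to arithmetic on the components, e.g. $(a,b) + (c,d) = (a+c,\, b+d)$ for signed integers.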
 | habedi/stack-exchange-dataset |
2,280 | Does there exist any work on creating a Real Number/Probability Theory Framework in COQ? | <p><a href="http://en.wikipedia.org/wiki/Coq">Coq</a> is an interactive theorem prover that uses the calculus of inductive constructions, i.e. it relies heavily on inductive types. Using those, discrete structures like natural numbers, rational numbers, graphs, grammars, semantics, etc. are represented very concisely.</p>

<p>However, since I grew to like the proof assistant, I was wondering whether there are libraries for uncountable structures, like real numbers, complex numbers, probability bounds and such. I am of course aware that one cannot define these structures inductively (at least not as far as I know), but they can be defined axiomatically, using for instance the <a href="http://en.wikipedia.org/wiki/Real_number#Axiomatic_approach">axiomatic approach</a>.</p>

<p>Is there any work that provides basic properties, or even probabilistic bounds like Chernoff bound or union bound as a library?</p>
 | probability theory coq real numbers uncountability | 1 | Does there exist any work on creating a Real Number/Probability Theory Framework in COQ? -- (probability theory coq real numbers uncountability)
<p><a href="http://en.wikipedia.org/wiki/Coq">Coq</a> is an interactive theorem prover that uses the calculus of inductive constructions, i.e. it relies heavily on inductive types. Using those, discrete structures like natural numbers, rational numbers, graphs, grammars, semantics, etc. are represented very concisely.</p>

<p>However, since I grew to like the proof assistant, I was wondering whether there are libraries for uncountable structures, like real numbers, complex numbers, probability bounds and such. I am of course aware that one cannot define these structures inductively (at least not as far as I know), but they can be defined axiomatically, using for instance the <a href="http://en.wikipedia.org/wiki/Real_number#Axiomatic_approach">axiomatic approach</a>.</p>

<p>Is there any work that provides basic properties, or even probabilistic bounds like Chernoff bound or union bound as a library?</p>
 | habedi/stack-exchange-dataset |
2,292 | Computing follow sets conservatively for a PEG grammar | <p>Given a <a href="https://en.wikipedia.org/wiki/Parsing_expression_grammar" rel="nofollow">parsing expression grammar</a> (PEG) and the name of the start production, I would like to label each node with the set of characters that can follow it. I would be happy with a good approximation that is conservative: if a character can follow a node, then it must appear in the follow set.</p>

<p>The grammar is represented as a tree of named productions whose bodies contain nodes representing</p>

<ol>
<li>Character</li>
<li>Concatenation</li>
<li>Union</li>
<li>Non-terminal references</li>
</ol>

<p>So given a grammar in ABNF style syntax:</p>

<pre><code>A := B ('a' | 'b');
B := ('c' | 'd') (B | ());
</code></pre>

<p>where adjacent nodes are concatenated, <code>|</code> indicates union, single quoted characters match the character they represent, and upper case names are non-terminals.</p>

<p>If the grammar's start production is <code>A</code>, the annotated version might look like</p>

<pre><code>A := 
 (
 (B /* [ab] */)
 (
 ('a' /* eof */)
 | 
 ('b' /* eof */)
 /* eof */
 )
 /* eof */
 );

B :=
 (
 (
 ('c' /* [abcd] */)
 |
 ('d' /* [abcd] */)
 /* [abcd] */
 )
 (
 (B /* [ab] */)
 |
 ( /* [ab] */)
 /* [ab] */
 )
 );
</code></pre>

<p>I want this so that I can do some simplification on a PEG grammar. Since order is important in unions in PEG grammars, I want to partition the members of unions based on which ones could accept the same character so that I can ignore order between partition elements.</p>

<p>I'm using OMeta's grow-the-seed scheme for handling direct left-recursion in PEG grammars, so I need something that handles that. I expect that any scheme for handling scannerless CF grammars with order-independent unions that is conservative or correct would be conservative for my purposes.</p>

<p>Pointers to algorithms or source code would be much appreciated.</p>
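As a starting point, here is a conservative fixpoint sketch for the toy grammar above, treating the PEG as if it were a CFG (an over-approximation, which the question permits). It computes FOLLOW for the non-terminals; annotating every node follows the same recursion in `walk`:

```python
# Node kinds: ('char', c), ('seq', [...]), ('alt', [...]), ('ref', name), ('empty',)
EOF = 'eof'

grammar = {
    'A': ('seq', [('ref', 'B'),
                  ('alt', [('char', 'a'), ('char', 'b')])]),
    'B': ('seq', [('alt', [('char', 'c'), ('char', 'd')]),
                  ('alt', [('ref', 'B'), ('empty',)])]),
}

def nullable(node, guard=frozenset()):
    kind = node[0]
    if kind == 'char':
        return False
    if kind == 'empty':
        return True
    if kind == 'seq':
        return all(nullable(c, guard) for c in node[1])
    if kind == 'alt':
        return any(nullable(c, guard) for c in node[1])
    # 'ref': treat a recursive occurrence as non-nullable
    return node[1] not in guard and nullable(grammar[node[1]], guard | {node[1]})

def first(node, guard=frozenset()):
    kind = node[0]
    if kind == 'char':
        return {node[1]}
    if kind == 'empty':
        return set()
    if kind == 'alt':
        return set().union(*(first(c, guard) for c in node[1]))
    if kind == 'seq':
        out = set()
        for c in node[1]:
            out |= first(c, guard)
            if not nullable(c):
                break
        return out
    # 'ref'
    if node[1] in guard:
        return set()
    return first(grammar[node[1]], guard | {node[1]})

def follow_sets(start):
    follow = {name: set() for name in grammar}
    follow[start].add(EOF)
    changed = True

    def walk(node, after):
        # 'after' = set of characters that can follow this node
        nonlocal changed
        kind = node[0]
        if kind == 'ref' and not after <= follow[node[1]]:
            follow[node[1]] |= after
            changed = True
        elif kind == 'alt':
            for c in node[1]:
                walk(c, after)
        elif kind == 'seq':
            children = node[1]
            for i, c in enumerate(children):
                # what follows child i: FIRST of the remainder, plus
                # 'after' if the remainder is nullable
                rest_first, rest_nullable = set(), True
                for r in children[i + 1:]:
                    rest_first |= first(r)
                    if not nullable(r):
                        rest_nullable = False
                        break
                walk(c, rest_first | (after if rest_nullable else set()))

    while changed:
        changed = False
        for name, body in grammar.items():
            walk(body, follow[name])
    return follow
```

On the toy grammar with start production `A`, this yields FOLLOW(A) = {eof} and FOLLOW(B) = {a, b}, matching the annotated example.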
 | formal languages reference request formal grammars parsers | 1 | Computing follow sets conservatively for a PEG grammar -- (formal languages reference request formal grammars parsers)
<p>Given a <a href="https://en.wikipedia.org/wiki/Parsing_expression_grammar" rel="nofollow">parsing expression grammar</a> (PEG) and the name of the start production, I would like to label each node with the set of characters that can follow it. I would be happy with a good approximation that is conservative: if a character can follow a node, then it must appear in the follow set.</p>

<p>The grammar is represented as a tree of named productions whose bodies contain nodes representing</p>

<ol>
<li>Character</li>
<li>Concatenation</li>
<li>Union</li>
<li>Non-terminal references</li>
</ol>

<p>So given a grammar in ABNF style syntax:</p>

<pre><code>A := B ('a' | 'b');
B := ('c' | 'd') (B | ());
</code></pre>

<p>where adjacent nodes are concatenated, <code>|</code> indicates union, single quoted characters match the character they represent, and upper case names are non-terminals.</p>

<p>If the grammar's start production is <code>A</code>, the annotated version might look like</p>

<pre><code>A := 
 (
 (B /* [ab] */)
 (
 ('a' /* eof */)
 | 
 ('b' /* eof */)
 /* eof */
 )
 /* eof */
 );

B :=
 (
 (
 ('c' /* [abcd] */)
 |
 ('d' /* [abcd] */)
 /* [abcd] */
 )
 (
 (B /* [ab] */)
 |
 ( /* [ab] */)
 /* [ab] */
 )
 );
</code></pre>

<p>I want this so that I can do some simplification on a PEG grammar. Since order is important in unions in PEG grammars, I want to partition the members of unions based on which ones could accept the same character so that I can ignore order between partition elements.</p>

<p>I'm using OMeta's grow-the-seed scheme for handling direct left-recursion in PEG grammars, so I need something that handles that. I expect that any scheme for handling scannerless CF grammars with order-independent unions that is conservative or correct would be conservative for my purposes.</p>

<p>Pointers to algorithms or source code would be much appreciated.</p>
 | habedi/stack-exchange-dataset |
2,293 | Can the encodings set of a non-trivial class of languages which contains the empty set be recursively enumerable? | <p>Let $C$ be a non-trivial set of recursively enumerable languages ($\emptyset \subsetneq C \subsetneq \mathrm{RE}$) and let $L$ be the set of encodings of Turing machines that recognize some language in $C$: $$L=\{\langle M \rangle \mid L(M) \in C \}$$</p>

<p>Suppose that $\langle M_{loopy}\rangle \in L$, where $M_{loopy}$ is a TM that never halts.
I wonder if it is possible that $L \in \mathrm{RE}$?</p>

<p>By Rice's theorem I know that $L \notin \mathrm{R}$ (the set of recursive languages), so either $L \notin \mathrm{RE}$ or $\overline{L} \notin \mathrm{RE}$. Does it have to be the first option, since $\langle M_{loopy} \rangle \in L$?</p>
 | computability turing machines | 1 | Can the encodings set of a non-trivial class of languages which contains the empty set be recursively enumerable? -- (computability turing machines)
<p>Let $C$ be a non-trivial set of recursively enumerable languages ($\emptyset \subsetneq C \subsetneq \mathrm{RE}$) and let $L$ be the set of encodings of Turing machines that recognize some language in $C$: $$L=\{\langle M \rangle \mid L(M) \in C \}$$</p>

<p>Suppose that $\langle M_{loopy}\rangle \in L$, where $M_{loopy}$ is a TM that never halts.
I wonder if it is possible that $L \in \mathrm{RE}$?</p>

<p>By Rice's theorem I know that $L \notin \mathrm{R}$ (the set of recursive languages), so either $L \notin \mathrm{RE}$ or $\overline{L} \notin \mathrm{RE}$. Does it have to be the first option, since $\langle M_{loopy} \rangle \in L$?</p>
 | habedi/stack-exchange-dataset |
2,301 | Categorisation of type systems (strong/weak, dynamic/static) | <p>In short: how are type systems categorised in academic contexts? In particular, where can I find reputable sources that make the distinctions between different sorts of type systems clear?</p>

<p>In a sense the difficulty with this question is not that I can't find an answer, but rather that I can find too many, and none stand out as correct. The background is that I am attempting to improve an article on the Haskell wiki about <a href="http://www.haskell.org/haskellwiki/Typing">typing</a>, which currently claims the following distinctions:</p>

<ul>
<li>No typing: The language has no notion of types, or from a typed perspective: There is exactly one type in the language. Assembly language has only the type 'bit pattern', Rexx and Tk have only the type 'text', core MatLab has only the type 'complex-valued matrix'.</li>
<li>Weak typing: There are only few distinguished types and maybe type synonyms for several types. E.g. C uses integer numbers for booleans, integers, characters, bit sets and enumerations.</li>
<li>Strong typing: Fine grained set of types like in Ada, Wirthian languages (Pascal, Modula-2), Eiffel</li>
</ul>

<p>This is entirely contrary to my personal perception, which was more along the lines of:</p>

<ul>
<li>Weak typing: Objects have types, but are implicitly converted to other types when the context demands it. For example, Perl, PHP and JavaScript are all languages in which <code>"1"</code> can be used in more or less any context that <code>1</code> can.</li>
<li>Strong typing: Objects have types, and there are no implicit conversions (although overloading may be used to simulate them), so using an object in the wrong context is an error. In Python, indexing an array with a string or float throws a TypeError exception; in Haskell it will fail at compile time.</li>
</ul>
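The examples in the strong-typing bullet can be made concrete. A minimal executable sketch (Python only; the JavaScript coercion behaviour is paraphrased in the comments):

```python
# Strong typing in Python: mixing str and int is rejected rather than coerced.
# (In JavaScript, "1" + 1 silently evaluates to the string "11".)
try:
    result = "1" + 1          # no implicit conversion
except TypeError:
    result = "TypeError"      # Python refuses the operation

# Indexing a list with a string is likewise a TypeError, not a coercion.
try:
    item = [10, 20, 30]["0"]
except TypeError:
    item = None

# Conversions must be requested explicitly:
explicit = int("1") + 1       # 2
```

The same program text is thus a runtime error under one regime and a silent coercion under the other, which is the distinction the second characterisation draws.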

<p>I asked for opinions on this from other people more experienced in the field than I am, and one gave this characterisation:</p>

<ul>
<li>Weak typing: Performing invalid operations on data is not controlled or rejected, but merely produces invalid/arbitrary results.</li>
<li>Strong typing: Operations on data are only permitted if the data is compatible with the operation.</li>
</ul>

<p>As I understand it, the first and last characterisations would call C weakly-typed, the second would call it strongly-typed. The first and second would call Perl and PHP weakly-typed, the third would call them strongly-typed. All three would describe Python as strongly-typed.</p>

<p>I think most people would tell me "well, there is no consensus, there is no accepted meaning of the terms". If those people are wrong, I'd be happy to hear about it, but if they are right, then how <em>do</em> CS researchers describe and compare type systems? What terminology can I use that is less problematic?</p>

<p>As a related question, I feel the dynamic/static distinction is often given in terms of "compile time" and "run time", which I find unsatisfactory given that whether or not a language is compiled is not so much a property of that language as its implementations. I feel there should be a purely-semantic description of dynamic versus static typing; something along the lines of "a static language is one in which every subexpression can be typed". I would appreciate any thoughts, particularly references, that bring clarity to this notion.</p>
 | reference request programming languages type theory | 1 | Categorisation of type systems (strong/weak, dynamic/static) -- (reference request programming languages type theory)
<p>In short: how are type systems categorised in academic contexts; particularly, where can I find reputable sources that make the distinctions between different sorts of type system clear?</p>

<p>In a sense the difficulty with this question is not that I can't find an answer, but rather that I can find too many, and none stand out as correct. The background is I am attempting to improve an article on the Haskell wiki about <a href="http://www.haskell.org/haskellwiki/Typing">typing</a>, which currently claims the following distinctions:</p>

<ul>
<li>No typing: The language has no notion of types, or from a typed perspective: There is exactly one type in the language. Assembly language has only the type 'bit pattern', Rexx and Tk have only the type 'text', core MatLab has only the type 'complex-valued matrix'.</li>
<li>Weak typing: There are only few distinguished types and maybe type synonyms for several types. E.g. C uses integer numbers for booleans, integers, characters, bit sets and enumerations.</li>
<li>Strong typing: Fine grained set of types like in Ada, Wirthian languages (Pascal, Modula-2), Eiffel</li>
</ul>

<p>This is entirely contrary to my personal perception, which was more along the lines of:</p>

<ul>
<li>Weak typing: Objects have types, but are implicitly converted to other types when the context demands it. For example, Perl, PHP and JavaScript are all languages in which <code>"1"</code> can be used in more or less any context that <code>1</code> can.</li>
<li>Strong typing: Objects have types, and there are no implicit conversions (although overloading may be used to simulate them), so using an object in the wrong context is an error. In Python, indexing an array with a string or float throws a TypeError exception; in Haskell it will fail at compile time.</li>
</ul>

<p>I asked for opinions on this from other people more experienced in the field than I am, and one gave this characterisation:</p>

<ul>
<li>Weak typing: Performing invalid operations on data is not controlled or rejected, but merely produces invalid/arbitrary results.</li>
<li>Strong typing: Operations on data are only permitted if the data is compatible with the operation.</li>
</ul>

<p>As I understand it, the first and last characterisations would call C weakly-typed, the second would call it strongly-typed. The first and second would call Perl and PHP weakly-typed, the third would call them strongly-typed. All three would describe Python as strongly-typed.</p>

<p>I think most people would tell me "well, there is no consensus, there is no accepted meaning of the terms". If those people are wrong, I'd be happy to hear about it, but if they are right, then how <em>do</em> CS researchers describe and compare type systems? What terminology can I use that is less problematic?</p>

<p>As a related question, I feel the dynamic/static distinction is often given in terms of "compile time" and "run time", which I find unsatisfactory given that whether or not a language is compiled is not so much a property of that language as its implementations. I feel there should be a purely-semantic description of dynamic versus static typing; something along the lines of "a static language is one in which every subexpression can be typed". I would appreciate any thoughts, particularly references, that bring clarity to this notion.</p>
 | habedi/stack-exchange-dataset |
2,304 | Recursive, Recursively Enumerable and None of the Above | <p>Let </p>

<ul>
<li>$A = \mathrm{R}$ be the set of all languages that are recursive,</li>
<li>$B = \mathrm{RE} \setminus \mathrm{R}$ be the set of all languages that are recursively enumerable but not recursive and</li>
<li>$C = \overline{\mathrm{RE}}$ be the set of all languages that are not recursively enumerable.</li>
</ul>

<p>It is clear that for example $\mathrm{CFL} \subseteq A$.</p>

<p>What is a simple example of a member of set B?</p>

<p>What is a simple example of a member of set C?</p>

<p>In general, how do you classify a language as either A, B or C?</p>
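For intuition on membership in B, the halting problem is the classic example: it is semi-decidable by simulation. A minimal sketch (hypothetical encoding of a deterministic TM as a transition dictionary; the simulator returns only if the machine halts, so it semi-decides rather than decides):

```python
def run(delta, tape, state="q0", pos=0):
    """Simulate a deterministic TM given as a dict
    (state, symbol) -> (state, symbol_to_write, move in {"L", "R"}).
    Returns the step count if the machine halts; loops forever otherwise,
    i.e. this is a semi-decision procedure, not a decision procedure."""
    cells = dict(enumerate(tape))      # sparse tape, blank symbol "_"
    steps = 0
    while state != "halt":
        symbol = cells.get(pos, "_")
        state, write, move = delta[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps

# A machine that scans right over 1s and halts at the first blank:
delta = {("q0", "1"): ("q0", "1", "R"),
         ("q0", "_"): ("halt", "_", "R")}
```

Accepting by halting makes the halting language recursively enumerable; that it is not recursive is the usual diagonalisation, which is what places it in B rather than A.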
 | formal languages computability | 1 | Recursive, Recursively Enumerable and None of the Above -- (formal languages computability)
<p>Let </p>

<ul>
<li>$A = \mathrm{R}$ be the set of all languages that are recursive,</li>
<li>$B = \mathrm{RE} \setminus \mathrm{R}$ be the set of all languages that are recursively enumerable but not recursive and</li>
<li>$C = \overline{\mathrm{RE}}$ be the set of all languages that are not recursively enumerable.</li>
</ul>

<p>It is clear that for example $\mathrm{CFL} \subseteq A$.</p>

<p>What is a simple example of a member of set B?</p>

<p>What is a simple example of a member of set C?</p>

<p>In general, how do you classify a language as either A, B or C?</p>
 | habedi/stack-exchange-dataset |
2,317 | Complexity of space density and sequentiality | <p>I'm looking for some standard terminology, metrics and/or applications of the consideration of density and sequentiality of algorithms.</p>

<p>When we measure algorithms we tend to give the big-Oh notation such as $O(n)$ and usually we are measuring time complexity. Somewhat less frequently, though still often, we'll also measure the space complexity of an algorithm.</p>

<p>Given current computing systems, however, the density of memory and the sequence in which it is accessed play a major role in the practical performance of an algorithm. Indeed, there are scenarios where an algorithm with $O(\log n)$ time complexity and dispersed random memory access can be slower than an $O(n)$ algorithm with dense sequential memory access. I've not seen these aspects covered in formal theory before; surely such treatment must exist and I'm just ignorant here.</p>
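A tiny benchmark sketch of the effect (in CPython the gap is muted because list elements are boxed behind pointers; the same experiment in C shows a far larger gap, but the sums agree either way):

```python
import random
import time

n = 200_000
data = list(range(n))
sequential = list(range(n))
scattered = sequential.copy()
random.shuffle(scattered)              # same indices, random visiting order

def scan(indices):
    """Sum data in the given visiting order, returning (sum, seconds)."""
    start = time.perf_counter()
    total = sum(data[i] for i in indices)
    return total, time.perf_counter() - start

s_seq, t_seq = scan(sequential)
s_rnd, t_rnd = scan(scattered)
assert s_seq == s_rnd == n * (n - 1) // 2   # identical work, identical result
# t_rnd is typically larger than t_seq on large inputs (cache effects),
# even though both scans perform exactly n array reads.
```

Both scans are $\Theta(n)$ in every classical cost model, which is exactly why the classical models cannot express the difference the question is after.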

<p>What are the standard metrics, terms, and approaches to this space density and access sequentiality?</p>
 | complexity theory reference request terminology space complexity | 1 | Complexity of space density and sequentiality -- (complexity theory reference request terminology space complexity)
<p>I'm looking for standard terminology, metrics, and/or treatments concerning the density and sequentiality of memory access in algorithms.</p>

<p>When we measure algorithms we tend to give the big-Oh notation such as $O(n)$ and usually we are measuring time complexity. Somewhat less frequently, though still often, we'll also measure the space complexity of an algorithm.</p>

<p>Given current computing systems, however, the density of memory and the sequence in which it is accessed play a major role in the practical performance of an algorithm. Indeed, there are scenarios where an algorithm with $O(\log n)$ time complexity and dispersed random memory access can be slower than an $O(n)$ algorithm with dense sequential memory access. I've not seen these aspects covered in formal theory before; surely such treatment must exist and I'm just ignorant here.</p>

<p>What are the standard metrics, terms, and approaches to this space density and access sequentiality?</p>
 | habedi/stack-exchange-dataset |
2,326 | Type inference with product types | <p>I’m working on a compiler for a concatenative language and would like to add type inference support. I understand Hindley–Milner, but I’ve been learning the type theory as I go, so I’m unsure of how to adapt it. Is the following system sound and decidably inferable?</p>

<p>A term is a literal, a composition of terms, a quotation of a term, or a primitive.</p>

<p>$$ e ::= x \:\big|\: e\:e \:\big|\: [e] \:\big|\: \dots $$</p>

<p>All terms denote functions. For two functions $e_1$ and $e_2$, $e_1\:e_2 = e_2 \circ e_1$, that is, juxtaposition denotes reverse composition. Literals denote niladic functions.</p>

<p>The terms other than composition have basic type rules:</p>

<p>$$
\dfrac{}{x : \iota}\text{[Lit]} \\
\dfrac{\Gamma\vdash e : \sigma}{\Gamma\vdash [e] : \forall\alpha.\:\alpha\to\sigma\times\alpha}\text{[Quot]}, \alpha \text{ not free in } \Gamma
$$</p>

<p>Notably absent are rules for application, since concatenative languages lack it.</p>

<p>A type is either a literal, a type variable, or a function from stacks to stacks, where a stack is defined as a right-nested tuple. All functions are implicitly polymorphic with respect to the “rest of the stack”.</p>

<p>$$
\begin{aligned}
\tau & ::= \iota \:\big|\: \alpha \:\big|\: \rho\to\rho \\
\rho & ::= () \:\big|\: \tau\times\rho \\
\sigma & ::= \tau \:\big|\: \forall\alpha.\:\sigma
\end{aligned}
$$</p>

<p>This is the first thing that seems suspect, but I don’t know exactly what’s wrong with it.</p>

<p>To help readability and cut down on parentheses, I’ll assume that $a\:b = b \times (a)$ in type schemes. I’ll also use a capital letter for a variable denoting a stack, rather than a single value.</p>

<p>There are six primitives. The first five are pretty innocuous. <code>dup</code> takes the topmost value and produces two copies of it. <code>swap</code> changes the order of the top two values. <code>pop</code> discards the top value. <code>quote</code> takes a value and produces a quotation (function) that returns it. <code>apply</code> applies a quotation to the stack.</p>

<p>$$
\begin{aligned}
\mathtt{dup} & :: \forall A b.\: A\:b \to A\:b\:b \\
\mathtt{swap} & :: \forall A b c.\: A\:b\:c \to A\:c\:b \\
\mathtt{pop} & :: \forall A b.\: A\:b \to A \\
\mathtt{quote} & :: \forall A b.\: A\:b \to A\:(\forall C. C \to C\:b) \\
\mathtt{apply} & :: \forall A B.\: A\:(A \to B) \to B \\
\end{aligned}
$$</p>

<p>The last combinator, <code>compose</code>, ought to take two quotations and return the type of their concatenation, that is, $[e_1]\:[e_2]\:\mathtt{compose} = [e_1\:e_2]$. In the statically typed concatenative language <a href="http://www.cat-language.com/">Cat</a>, the type of <code>compose</code> is very straightforward.</p>

<p>$$
\mathtt{compose} :: \forall A B C D.\: A\:(B \to C)\:(C \to D) \to A\:(B \to D)
$$</p>

<p>However, this type is too restrictive: it requires that the production of the first function <em>exactly match</em> the consumption of the second. In reality, you have to assume distinct types, then unify them. But how would you write that type?</p>

<p>$$ \mathtt{compose} :: \forall A B C D E. A\:(B \to C)\:(D \to E) \to A \dots $$</p>

<p>If you let $\setminus$ denote a <em>difference</em> of two types, then I <em>think</em> you can write the type of <code>compose</code> correctly.</p>

<p>$$
\mathtt{compose} :: \forall A B C D E.\: A\:(B \to C)\:(D \to E) \to A\:((D \setminus C)\:B \to ((C \setminus D)\:E))
$$</p>

<p>This is still relatively straightforward: <code>compose</code> takes a function $f_1 : B \to C$ and one $f_2 : D \to E$. Its result consumes $B$ atop the consumption of $f_2$ not produced by $f_1$, and produces $D$ atop the production of $f_1$ not consumed by $f_2$. This gives the rule for ordinary composition.</p>

<p>$$
\dfrac{\Gamma\vdash e_1 : \forall A B.\: A \to B \quad \Gamma\vdash e_2 : \forall C D. C \to D}{\Gamma\vdash e_1 e_2 : ((C \setminus B)\:A \to ((B \setminus C)\:D))}\text{[Comp]}
$$</p>

<p>However, I don’t know that this hypothetical $\setminus$ actually corresponds to anything, and I’ve been chasing it around in circles for long enough that I think I took a wrong turn. Could it be a simple difference of tuples?</p>

<p>$$
\begin{align}
\forall A. () \setminus A & = () \\
\forall A. A \setminus () & = A \\
\forall A B C D. A B \setminus C D & = B \setminus D \textit{ iff } A = C \\
\text{otherwise} & = \textit{undefined}
\end{align}
$$</p>
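The tuple difference above can be given a direct operational reading. A minimal monomorphic sketch (my own names; no unification of type variables, just equality of the overlapping segment; stacks written bottom-to-top as lists):

```python
def compose_effect(effect1, effect2):
    """Compose the stack effects (ins, outs) of two functions run in sequence.
    The overlap between outs1 and ins2 must match; the leftovers give the
    (D minus C) B -> (C minus D) E shape from the question."""
    (ins1, outs1), (ins2, outs2) = effect1, effect2
    k = min(len(outs1), len(ins2))
    if k and outs1[len(outs1) - k:] != ins2[len(ins2) - k:]:
        raise TypeError("stack effects do not compose")
    ins = ins2[:len(ins2) - k] + ins1        # f2's extra demand sits deeper
    outs = outs1[:len(outs1) - k] + outs2    # f1's leftover production stays underneath
    return ins, outs

# push an int, then pop it: net effect is the identity ([] -> [])
assert compose_effect(([], ["int"]), (["int"], [])) == ([], [])
# push an int, then swap the top two: bool -> int bool
assert compose_effect(([], ["int"]),
                      (["bool", "int"], ["int", "bool"])) == (["bool"], ["int", "bool"])
```

This is only the monomorphic core; the real system would unify the overlapping segment instead of testing it for equality, and would thread row variables for the "rest of the stack".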

<p>Is there something horribly broken about this that I’m not seeing, or am I on something like the right track? (I’ve probably quantified some of this stuff wrongly and would appreciate fixes in that area as well.)</p>
 | programming languages logic compilers type theory type checking | 1 | Type inference with product types -- (programming languages logic compilers type theory type checking)
<p>I’m working on a compiler for a concatenative language and would like to add type inference support. I understand Hindley–Milner, but I’ve been learning the type theory as I go, so I’m unsure of how to adapt it. Is the following system sound and decidably inferable?</p>

<p>A term is a literal, a composition of terms, a quotation of a term, or a primitive.</p>

<p>$$ e ::= x \:\big|\: e\:e \:\big|\: [e] \:\big|\: \dots $$</p>

<p>All terms denote functions. For two functions $e_1$ and $e_2$, $e_1\:e_2 = e_2 \circ e_1$, that is, juxtaposition denotes reverse composition. Literals denote niladic functions.</p>

<p>The terms other than composition have basic type rules:</p>

<p>$$
\dfrac{}{x : \iota}\text{[Lit]} \\
\dfrac{\Gamma\vdash e : \sigma}{\Gamma\vdash [e] : \forall\alpha.\:\alpha\to\sigma\times\alpha}\text{[Quot]}, \alpha \text{ not free in } \Gamma
$$</p>

<p>Notably absent are rules for application, since concatenative languages lack it.</p>

<p>A type is either a literal, a type variable, or a function from stacks to stacks, where a stack is defined as a right-nested tuple. All functions are implicitly polymorphic with respect to the “rest of the stack”.</p>

<p>$$
\begin{aligned}
\tau & ::= \iota \:\big|\: \alpha \:\big|\: \rho\to\rho \\
\rho & ::= () \:\big|\: \tau\times\rho \\
\sigma & ::= \tau \:\big|\: \forall\alpha.\:\sigma
\end{aligned}
$$</p>

<p>This is the first thing that seems suspect, but I don’t know exactly what’s wrong with it.</p>

<p>To help readability and cut down on parentheses, I’ll assume that $a\:b = b \times (a)$ in type schemes. I’ll also use a capital letter for a variable denoting a stack, rather than a single value.</p>

<p>There are six primitives. The first five are pretty innocuous. <code>dup</code> takes the topmost value and produces two copies of it. <code>swap</code> changes the order of the top two values. <code>pop</code> discards the top value. <code>quote</code> takes a value and produces a quotation (function) that returns it. <code>apply</code> applies a quotation to the stack.</p>

<p>$$
\begin{aligned}
\mathtt{dup} & :: \forall A b.\: A\:b \to A\:b\:b \\
\mathtt{swap} & :: \forall A b c.\: A\:b\:c \to A\:c\:b \\
\mathtt{pop} & :: \forall A b.\: A\:b \to A \\
\mathtt{quote} & :: \forall A b.\: A\:b \to A\:(\forall C. C \to C\:b) \\
\mathtt{apply} & :: \forall A B.\: A\:(A \to B) \to B \\
\end{aligned}
$$</p>

<p>The last combinator, <code>compose</code>, ought to take two quotations and return the type of their concatenation, that is, $[e_1]\:[e_2]\:\mathtt{compose} = [e_1\:e_2]$. In the statically typed concatenative language <a href="http://www.cat-language.com/">Cat</a>, the type of <code>compose</code> is very straightforward.</p>

<p>$$
\mathtt{compose} :: \forall A B C D.\: A\:(B \to C)\:(C \to D) \to A\:(B \to D)
$$</p>

<p>However, this type is too restrictive: it requires that the production of the first function <em>exactly match</em> the consumption of the second. In reality, you have to assume distinct types, then unify them. But how would you write that type?</p>

<p>$$ \mathtt{compose} :: \forall A B C D E. A\:(B \to C)\:(D \to E) \to A \dots $$</p>

<p>If you let $\setminus$ denote a <em>difference</em> of two types, then I <em>think</em> you can write the type of <code>compose</code> correctly.</p>

<p>$$
\mathtt{compose} :: \forall A B C D E.\: A\:(B \to C)\:(D \to E) \to A\:((D \setminus C)\:B \to ((C \setminus D)\:E))
$$</p>

<p>This is still relatively straightforward: <code>compose</code> takes a function $f_1 : B \to C$ and one $f_2 : D \to E$. Its result consumes $B$ atop the consumption of $f_2$ not produced by $f_1$, and produces $D$ atop the production of $f_1$ not consumed by $f_2$. This gives the rule for ordinary composition.</p>

<p>$$
\dfrac{\Gamma\vdash e_1 : \forall A B.\: A \to B \quad \Gamma\vdash e_2 : \forall C D. C \to D}{\Gamma\vdash e_1 e_2 : ((C \setminus B)\:A \to ((B \setminus C)\:D))}\text{[Comp]}
$$</p>

<p>However, I don’t know that this hypothetical $\setminus$ actually corresponds to anything, and I’ve been chasing it around in circles for long enough that I think I took a wrong turn. Could it be a simple difference of tuples?</p>

<p>$$
\begin{align}
\forall A. () \setminus A & = () \\
\forall A. A \setminus () & = A \\
\forall A B C D. A B \setminus C D & = B \setminus D \textit{ iff } A = C \\
\text{otherwise} & = \textit{undefined}
\end{align}
$$</p>

<p>Is there something horribly broken about this that I’m not seeing, or am I on something like the right track? (I’ve probably quantified some of this stuff wrongly and would appreciate fixes in that area as well.)</p>
 | habedi/stack-exchange-dataset |
2,336 | Sorting algorithms which accept a random comparator | <p>Generic sorting algorithms generally take a set of data to sort and a comparator function which can compare two individual elements. If the comparator is an order relation¹, then the output of the algorithm is a sorted list/array.</p>

<p>I am wondering, though, which sort algorithms would actually <em>work</em> with a comparator that is not an order relation (in particular, one which returns a random result on each comparison). By "work" I mean here that they continue to return a permutation of their input and run at their typically quoted time complexity (as opposed to always degrading to the worst-case scenario, going into an infinite loop, or missing elements). The ordering of the results would be undefined, however. Even better, the resulting ordering would be a uniform distribution when the comparator is a coin flip.</p>

<p>From my rough mental calculation it appears that a merge sort would be fine with this and maintain the same runtime cost and produce a fair random ordering. I think that something like a quick sort would however degenerate, possibly not finish, and not be fair.</p>
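For reference, a minimal sketch (my own names) of merge sort driven by an arbitrary comparator, including a coin flip. It always terminates after $O(n \log n)$ comparisons and returns a permutation of the input, which is the "work" criterion above; whether the resulting distribution is uniform is exactly the open part of the question:

```python
import random

def merge(left, right, cmp):
    """Merge two runs using cmp; even a random cmp consumes every element
    exactly once, so the output is always a permutation of the inputs."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if cmp(left[i], right[j]) <= 0:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(xs, cmp):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid], cmp), merge_sort(xs[mid:], cmp), cmp)

coin_flip = lambda a, b: random.choice((-1, 1))   # not an order relation
shuffled = merge_sort(list(range(16)), coin_flip)
assert sorted(shuffled) == list(range(16))        # still a permutation
```

Termination holds because the recursion structure never consults the comparator, and each merge advances one index per comparison regardless of the answers it gets.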

<p>What other sorting algorithms (other than merge sort) would work as described with a random comparator?</p>

<hr>

<ol>
<li><p>For reference, a comparator is an order relation if it is a proper function (deterministic) and satisfies the axioms of an order relation:</p>

<ul>
<li>it is deterministic: <code>compare(a,b)</code> for a particular <code>a</code> and <code>b</code> always returns the same result.</li>
<li>it is transitive: <code>compare(a,b) and compare(b,c) implies compare(a,c)</code></li>
<li>it is antisymmetric <code>compare(a,b) and compare(b,a) implies a == b</code></li>
</ul></li>
</ol>

<p>(Assume that all input elements are distinct, so reflexivity is not an issue.)</p>

<p>A random comparator violates all of these rules. There are however comparators that are not order relations yet are not random (for example they might violate perhaps only one rule, and only for particular elements in the set).</p>
 | algorithms randomized algorithms sorting | 1 | Sorting algorithms which accept a random comparator -- (algorithms randomized algorithms sorting)
<p>Generic sorting algorithms generally take a set of data to sort and a comparator function which can compare two individual elements. If the comparator is an order relation¹, then the output of the algorithm is a sorted list/array.</p>

<p>I am wondering, though, which sort algorithms would actually <em>work</em> with a comparator that is not an order relation (in particular, one which returns a random result on each comparison). By "work" I mean here that they continue to return a permutation of their input and run at their typically quoted time complexity (as opposed to always degrading to the worst-case scenario, going into an infinite loop, or missing elements). The ordering of the results would be undefined, however. Even better, the resulting ordering would be a uniform distribution when the comparator is a coin flip.</p>

<p>From my rough mental calculation it appears that a merge sort would be fine with this and maintain the same runtime cost and produce a fair random ordering. I think that something like a quick sort would however degenerate, possibly not finish, and not be fair.</p>

<p>What other sorting algorithms (other than merge sort) would work as described with a random comparator?</p>

<hr>

<ol>
<li><p>For reference, a comparator is an order relation if it is a proper function (deterministic) and satisfies the axioms of an order relation:</p>

<ul>
<li>it is deterministic: <code>compare(a,b)</code> for a particular <code>a</code> and <code>b</code> always returns the same result.</li>
<li>it is transitive: <code>compare(a,b) and compare(b,c) implies compare(a,c)</code></li>
<li>it is antisymmetric <code>compare(a,b) and compare(b,a) implies a == b</code></li>
</ul></li>
</ol>

<p>(Assume that all input elements are distinct, so reflexivity is not an issue.)</p>

<p>A random comparator violates all of these rules. There are however comparators that are not order relations yet are not random (for example they might violate perhaps only one rule, and only for particular elements in the set).</p>
 | habedi/stack-exchange-dataset |
2,338 | How to prove that ε-loops are not necessary in PDAs? | <p>In the context of our investigation of <a href="https://cs.stackexchange.com/questions/110/determining-capabilities-of-a-min-heap-or-other-exotic-state-machines">heap automata</a>, I would like to prove that a particular variant can not accept non-context-sensitive languages. As we have no equivalent grammar model, I need a proof that uses only automata; therefore, I have to show that heap automata can be simulated by <a href="https://en.wikipedia.org/wiki/Linear_bounded_automaton" rel="nofollow noreferrer">LBA</a>s (or an equivalent model).</p>

<p>I expect the proof to work similarly to showing that pushdown automata accept a subset of the context-sensitive languages. However, all proofs I know work by</p>

<ul>
<li>using grammars -- here the fact is obvious by definition -- or</li>
<li>are unconvincingly vague (e.g. <a href="http://www.cs.uky.edu/~lewis/texts/theory/automata/lb-auto.pdf" rel="nofollow noreferrer">here</a>).</li>
</ul>

<p>My problem is that a PDA (resp. HA) can contain cycles of $\varepsilon$-transitions that may write symbols to the stack (resp. heap). An LBA can not simulate arbitrary iterations of such loops. From the Chomsky hierarchy obtained with grammars, we know that </p>

<ol>
<li>every context-free language has an $\varepsilon$-cycle-free PDA or</li>
<li>the simulating LBA can prevent iterating $\varepsilon$-cycles too often.</li>
</ol>

<p>Intuitively, this is clear: such cycles write symbols independently of the input, so the stack (heap) content holds only an amount of information linear in the length of the cycle (disregarding overlapping cycles for now). Also, you don't have a way to get rid of the stuff again (if you need to) other than using another $\varepsilon$-cycle. In essence, such cycles do not contribute to dealing with the input if iterated multiple times, so they are not necessary.</p>
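The $\varepsilon$-cycles in question can at least be located mechanically. A small sketch (hypothetical representation: only the $\varepsilon$-moves of the automaton, as an adjacency map from state to successor states, ignoring what each move pushes):

```python
def on_epsilon_cycle(eps, s):
    """Return True iff state s lies on a cycle of epsilon-transitions,
    i.e. s can reach itself via one or more epsilon-moves."""
    seen, stack = set(), list(eps.get(s, ()))
    while stack:
        t = stack.pop()
        if t == s:
            return True
        if t not in seen:
            seen.add(t)
            stack.extend(eps.get(t, ()))
    return False

# p <-> q form an epsilon-cycle; r only feeds into it:
eps = {"p": {"q"}, "q": {"p"}, "r": {"p"}}
assert on_epsilon_cycle(eps, "p") and not on_epsilon_cycle(eps, "r")
```

This only identifies the cycles; the hard part, bounding how often a simulating LBA must let them iterate, is precisely the question.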

<p>How can this argument be put rigorously/formally, especially considering overlapping $\varepsilon$-cycles?</p>
 | automata pushdown automata | 1 | How to prove that ε-loops are not necessary in PDAs? -- (automata pushdown automata)
<p>In the context of our investigation of <a href="https://cs.stackexchange.com/questions/110/determining-capabilities-of-a-min-heap-or-other-exotic-state-machines">heap automata</a>, I would like to prove that a particular variant can not accept non-context-sensitive languages. As we have no equivalent grammar model, I need a proof that uses only automata; therefore, I have to show that heap automata can be simulated by <a href="https://en.wikipedia.org/wiki/Linear_bounded_automaton" rel="nofollow noreferrer">LBA</a>s (or an equivalent model).</p>

<p>I expect the proof to work similarly to showing that pushdown automata accept a subset of the context-sensitive languages. However, all proofs I know work by</p>

<ul>
<li>using grammars -- here the fact is obvious by definition -- or</li>
<li>are unconvincingly vague (e.g. <a href="http://www.cs.uky.edu/~lewis/texts/theory/automata/lb-auto.pdf" rel="nofollow noreferrer">here</a>).</li>
</ul>

<p>My problem is that a PDA (resp. HA) can contain cycles of $\varepsilon$-transitions that may write symbols to the stack (resp. heap). An LBA can not simulate arbitrary iterations of such loops. From the Chomsky hierarchy obtained with grammars, we know that </p>

<ol>
<li>every context-free language has an $\varepsilon$-cycle-free PDA or</li>
<li>the simulating LBA can prevent iterating $\varepsilon$-cycles too often.</li>
</ol>

<p>Intuitively, this is clear: such cycles write symbols independently of the input, so the stack (heap) content holds only an amount of information linear in the length of the cycle (disregarding overlapping cycles for now). Also, you don't have a way to get rid of the stuff again (if you need to) other than using another $\varepsilon$-cycle. In essence, such cycles do not contribute to dealing with the input if iterated multiple times, so they are not necessary.</p>

<p>How can this argument be put rigorously/formally, especially considering overlapping $\varepsilon$-cycles?</p>
 | habedi/stack-exchange-dataset |
2,339 | per-record timeline consistency vs. monotonic writes | <p>It seems to me that the <em>per-record timeline consistency</em> as defined by Cooper et al. in "PNUTS: Yahoo!’s Hosted Data Serving Platform" mimics the (older?) definition of <em>monotonic writes</em>. From the paper:</p>

<blockquote>
 <p>per-record timeline consistency: all replicas of a given record apply
 all updates to the record in the same order.</p>
</blockquote>

<p>This is quite similar to <a href="http://regal.csep.umflint.edu/~swturner/Classes/csc577/Online/Chapter06/img26.html" rel="nofollow">a definition for monotonic writes</a>:</p>

<blockquote>
 <p>A write operation by a process on data item x is completed before any
 successive write operation on x by the same process.</p>
</blockquote>

<p>Can I conclude that those things are the same, or is there a difference that I misunderstand? Note that the link above also mentions possible copies of data item <code>x</code>, so monotonic writes include replicas.</p>
 | terminology distributed systems | 1 | per-record timeline consistency vs. monotonic writes -- (terminology distributed systems)
<p>It seems to me that the <em>per-record timeline consistency</em> as defined by Cooper et al. in "PNUTS: Yahoo!’s Hosted Data Serving Platform" mimics the (older?) definition of <em>monotonic writes</em>. From the paper:</p>

<blockquote>
 <p>per-record timeline consistency: all replicas of a given record apply
 all updates to the record in the same order.</p>
</blockquote>

<p>This is quite similar to <a href="http://regal.csep.umflint.edu/~swturner/Classes/csc577/Online/Chapter06/img26.html" rel="nofollow">a definition for monotonic writes</a>:</p>

<blockquote>
 <p>A write operation by a process on data item x is completed before any
 successive write operation on x by the same process.</p>
</blockquote>

<p>Can I conclude that those things are the same, or is there a difference that I misunderstand? Note that the link above also mentions possible copies of data item <code>x</code>, so monotonic writes include replicas.</p>
 | habedi/stack-exchange-dataset |
2,341 | Can exactly one of NP and co-NP be equal to P? | <p>Maybe I am missing something obvious, but can it be that P = co-NP $\subsetneq$ NP or vice versa? My feeling is that there must be some theorem that rules out this possibility.</p>
 | complexity theory p vs np | 1 | Can exactly one of NP and co-NP be equal to P? -- (complexity theory p vs np)
<p>Maybe I am missing something obvious, but can it be that P = co-NP $\subsetneq$ NP or vice versa? My feeling is that there must be some theorem that rules out this possibility.</p>
 | habedi/stack-exchange-dataset |
2,374 | How to describe algorithms, prove and analyse them? | <p>Before reading <em>The Art of Computer Programming (TAOCP)</em>, I have not considered these questions deeply. I would use pseudo code to describe algorithms, understand them and estimate the running time only about orders of growth. The <em>TAOCP</em> thoroughly changes my mind.</p>

<p><em>TAOCP</em> uses English mixed with steps and <em>goto</em> to describe an algorithm, and uses flow charts to picture the algorithm more readily. It seems low-level, but I find that there are some advantages, especially with flow charts, which I had largely ignored. We can label each arrow with an assertion about the current state of affairs at the time the computation traverses that arrow, and so make an inductive proof of the algorithm's correctness. The author says:</p>

<blockquote>
 <p>It is the contention of the author that we really understand why an algorithm is valid only when we reach the point that our minds have implicitly filled in all the assertions, as was done in Fig.4.</p>
</blockquote>

<p>I have not experienced such analysis before. Another advantage is that we can count the number of times each step is executed; this is easy to check with Kirchhoff's first law. I have never analysed the running time exactly, so some $\pm1$ terms might have been omitted when I was estimating it.</p>
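As a sketch of that kind of exact step counting in a higher-level language (instrumented insertion sort, my own code; the closed forms below come from summing how often the inner comparison executes, in the spirit of applying Kirchhoff's law to the flow chart):

```python
def insertion_sort_counted(values):
    """Sort a list and return (sorted_list, exact_comparison_count)."""
    a = list(values)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # one comparison per loop entry
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comparisons

n = 10
_, worst = insertion_sort_counted(list(range(n - 1, -1, -1)))
_, best = insertion_sort_counted(list(range(n)))
assert worst == n * (n - 1) // 2          # exactly n(n-1)/2, not merely O(n^2)
assert best == n - 1                      # exactly n-1 comparisons
```

Counting a named step instead of wall-clock time gives machine-independent constants, which is a large part of what TAOCP's assembly-level analyses deliver.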

<p>Analysis of orders of growth is sometimes useless. For example, we cannot distinguish quicksort from heapsort because both have $E(T(n))=\Theta(n\log n)$, where $E(X)$ is the expected value of the random variable $X$, so we should analyse the constants, say, $E(T_1(n))=A_1n\lg n+B_1n+O(\log n)$ and $E(T_2(n))=A_2n\lg n+B_2n+O(\log n)$; then we can compare $T_1$ and $T_2$ better. Also, sometimes we should compare other quantities, such as variances. A rough analysis of only the orders of growth of the running time is not enough. <em>TAOCP</em> translates the algorithms into assembly language and calculates the running time, which is too hard for me, so I want to know some techniques for analysing the running time a bit more roughly, in a way that is also useful for higher-level languages such as C, C++ or pseudocode.</p>

<p>And I want to know what style of description is mainly used in research works, and how to treat these problems.</p>
 | algorithms proof techniques runtime analysis | 1 | How to describe algorithms, prove and analyse them? -- (algorithms proof techniques runtime analysis)
<p>Before reading <em>The Art of Computer Programming (TAOCP)</em>, I had not considered these questions deeply. I would use pseudocode to describe algorithms, understand them, and estimate the running time only up to orders of growth. <em>TAOCP</em> thoroughly changed my mind.</p>

<p><em>TAOCP</em> uses English mixed with numbered steps and <em>goto</em> to describe algorithms, and uses flow charts to picture them more readily. It seems low-level, but I find there are some advantages, especially with the flow charts, which I had largely ignored. We can label each of the arrows with an assertion about the current state of affairs at the time the computation traverses that arrow, and make an inductive proof for the algorithm. The author says:</p>

<blockquote>
 <p>It is the contention of the author that we really understand why an algorithm is valid only when we reach the point that our minds have implicitly filled in all the assertions, as was done in Fig.4.</p>
</blockquote>

<p>I have not experienced such proofs before. Another advantage is that we can count the number of times each step is executed; this is easy to check with Kirchhoff's first law. I have not analysed the running time exactly, so some $\pm1$ might have been omitted when I was estimating it.</p>

<p>Analysis of orders of growth is sometimes useless. For example, we cannot distinguish quicksort from heapsort because both have $E(T(n))=\Theta(n\log n)$, where $E(X)$ is the expected value of the random variable $X$, so we should analyse the constants, say, $E(T_1(n))=A_1n\lg n+B_1n+O(\log n)$ and $E(T_2(n))=A_2n\lg n+B_2n+O(\log n)$; then we can compare $T_1$ and $T_2$ better. Also, sometimes we should compare other quantities, such as variances. A rough analysis of only the orders of growth of the running time is not enough. <em>TAOCP</em> translates the algorithms into assembly language and calculates the running time, which is too hard for me, so I want to know some techniques for analysing the running time a bit more roughly, in a way that is also useful for higher-level languages such as C, C++ or pseudocode.</p>

<p>And I want to know what style of description is mainly used in research works, and how to treat these problems.</p>
 | habedi/stack-exchange-dataset |
2,382 | Methods to evaluate a system of written rules | <p>I was trying to come up with a system that would evaluate bylaws for an organization so as to determine their underlying logic.</p>

<p>I think a first-order predicate system would work for representing the rules, which could be translated from the text via part-of-speech tagging and other NLP techniques. </p>

<p>Is there a systematic way to interpret the first-order logic rules as a whole, or some type of ML architecture that would work as a second layer to find similarities between the elements?</p>

<p>For example,</p>

<blockquote>
 <p>List of fun activities:</p>
 
 <ul>
 <li>golf</li>
 <li>coffee break</li>
 <li>pizza</li>
 </ul>
 
 <p>Bylaws:</p>
 
 <ol>
 <li><p>On Friday, we play golf</p></li>
 <li><p>On Friday or Saturday, we take a quick coffee break, and if it's Saturday, we get pizza</p></li>
 </ol>
</blockquote>

<p>Conclusion: our group has fun on weekends</p>

<p>It sounds far fetched, but I'm curious if it's possible. I also realize that perhaps more first-order logic would be a better fit for driving the conclusions of the second layer. </p>
 | machine learning algorithms pattern recognition logic | 1 | Methods to evaluate a system of written rules -- (machine learning algorithms pattern recognition logic)
<p>I was trying to come up with a system that would evaluate bylaws for an organization so as to determine their underlying logic.</p>

<p>I think a first-order predicate system would work for representing the rules, which could be translated from the text via part-of-speech tagging and other NLP techniques. </p>

<p>Is there a systematic way to interpret the first-order logic rules as a whole, or some type of ML architecture that would work as a second layer to find similarities between the elements?</p>

<p>For example,</p>

<blockquote>
 <p>List of fun activities:</p>
 
 <ul>
 <li>golf</li>
 <li>coffee break</li>
 <li>pizza</li>
 </ul>
 
 <p>Bylaws:</p>
 
 <ol>
 <li><p>On Friday, we play golf</p></li>
 <li><p>On Friday or Saturday, we take a quick coffee break, and if it's Saturday, we get pizza</p></li>
 </ol>
</blockquote>

<p>Conclusion: our group has fun on weekends</p>

<p>It sounds far fetched, but I'm curious if it's possible. I also realize that perhaps more first-order logic would be a better fit for driving the conclusions of the second layer. </p>
 | habedi/stack-exchange-dataset |
2,385 | Which is the minimal number of operations for intractability? | <p>If we have an algorithm that needs to run $n=2$ operations and then halt, I think we could say the problem it solves is tractable; but if $n=10^{120}$, although it could theoretically be solved, it seems to be intractable. And what about a problem that needs $n=10^{1000}$, or $n=10^{10^{1000}}$, operations? That seems an intractable problem for sure.</p>

<p>Then it seems there is some $k$ such that problems requiring $n\ge k$ operations are intractable, and those requiring $n\lt k$ are tractable.</p>

<p>I doubt that such a $k$ exists. Where is the limit? Can a technological advance turn some intractable problems, <strong>for a given n</strong>, into tractable ones?</p>

<p>I would like to read your opinion.</p>

<p><strong>EDIT</strong></p>

<p>I think this question is similar to asking whether the Church–Turing thesis is correct, because if the difference between solving a computable problem on a Turing machine and on any other Turing-complete machine is "only a constant" in the number of operations, then I think that asking about computability is the same as asking about effective calculability. Now I see that tractable means polynomial time, and intractable means there is no polynomial-time solution. But the difference between two machines, for the same (even tractable) problem, is about the Church–Turing thesis.</p>
 | complexity theory church turing thesis | 1 | Which is the minimal number of operations for intractability? -- (complexity theory church turing thesis)
<p>If we have an algorithm that needs to run $n=2$ operations and then halt, I think we could say the problem it solves is tractable; but if $n=10^{120}$, although it could theoretically be solved, it seems to be intractable. And what about a problem that needs $n=10^{1000}$, or $n=10^{10^{1000}}$, operations? That seems an intractable problem for sure.</p>

<p>Then it seems there is some $k$ such that problems requiring $n\ge k$ operations are intractable, and those requiring $n\lt k$ are tractable.</p>

<p>I doubt that such a $k$ exists. Where is the limit? Can a technological advance turn some intractable problems, <strong>for a given n</strong>, into tractable ones?</p>

<p>I would like to read your opinion.</p>

<p><strong>EDIT</strong></p>

<p>I think this question is similar to asking whether the Church–Turing thesis is correct, because if the difference between solving a computable problem on a Turing machine and on any other Turing-complete machine is "only a constant" in the number of operations, then I think that asking about computability is the same as asking about effective calculability. Now I see that tractable means polynomial time, and intractable means there is no polynomial-time solution. But the difference between two machines, for the same (even tractable) problem, is about the Church–Turing thesis.</p>
 | habedi/stack-exchange-dataset |
2,393 | How to feel intuitively that a language is regular | <p>Given a language $ L= \{a^n b^n c^n\}$, how can I say directly, without looking at production rules, that this language is not regular?</p>

<p>I could use the pumping lemma, but some people say that just by looking at the grammar they can tell this is not a regular one. How is that possible?</p>
 | formal languages regular languages pumping lemma intuition | 1 | How to feel intuitively that a language is regular -- (formal languages regular languages pumping lemma intuition)
<p>Given a language $ L= \{a^n b^n c^n\}$, how can I say directly, without looking at production rules, that this language is not regular?</p>

<p>I could use the pumping lemma, but some people say that just by looking at the grammar they can tell this is not a regular one. How is that possible?</p>
 | habedi/stack-exchange-dataset |
2,394 | Algorithm to check the 2∀-connectness property of a graph | <p>A graph is 2∀-connected (i.e., 2-edge-connected) if it remains connected even if any single edge is removed. Let G = (V, E) be a connected undirected graph. Develop an algorithm, as fast as possible, to check the 2∀-connectness of G.</p>

<p>I know the basic idea is to build a DFS tree and then use the DFS to check that every edge lies on a cycle (i.e., that no edge is a bridge). Any help would be appreciated.</p>

<p>What I expect to see is a detailed algorithm description (especially the initialization of the needed variables, which is sometimes obscure); the complexity analysis can be omitted.</p>
 | algorithms graphs efficiency | 1 | Algorithm to check the 2∀-connectness property of a graph -- (algorithms graphs efficiency)
<p>A graph is 2∀-connected (i.e., 2-edge-connected) if it remains connected even if any single edge is removed. Let G = (V, E) be a connected undirected graph. Develop an algorithm, as fast as possible, to check the 2∀-connectness of G.</p>

<p>I know the basic idea is to build a DFS tree and then use the DFS to check that every edge lies on a cycle (i.e., that no edge is a bridge). Any help would be appreciated.</p>

<p>What I expect to see is a detailed algorithm description (especially the initialization of the needed variables, which is sometimes obscure); the complexity analysis can be omitted.</p>
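<p>The bridge-checking idea above can be made concrete with DFS low-link values (Tarjan's bridge-finding algorithm). A sketch in Python (not from the question; the adjacency-list representation and function names are my own assumptions):</p>

```python
def find_bridges(n, edges):
    """Return the bridges of an undirected graph on vertices 0..n-1.

    An edge is a bridge iff it lies on no cycle; a connected graph
    is 2-edge-connected iff this list is empty.  Runs in O(V + E).
    """
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    disc = [-1] * n      # discovery time of each vertex (-1 = unvisited)
    low = [0] * n        # lowest discovery time reachable from the subtree
    bridges = []
    timer = 0

    def dfs(u, parent_edge):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v, idx in adj[u]:
            if idx == parent_edge:          # don't reuse the tree edge
                continue
            if disc[v] != -1:               # back edge: closes a cycle
                low[u] = min(low[u], disc[v])
            else:                           # tree edge: recurse
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:        # no cycle passes over (u, v)
                    bridges.append((u, v))

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

def is_two_edge_connected(n, edges):
    # the exercise states G is connected, so 2-edge-connectedness
    # reduces to the absence of bridges
    return not find_bridges(n, edges)
```

<p>For example, a triangle has no bridges, while a path on three vertices has two; the "initialization of needed variables" asked about amounts to the <code>disc</code> and <code>low</code> arrays above.</p>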
 | habedi/stack-exchange-dataset |
2,404 | Online Learning Resources for Discrete Mathematics | <p>Are there any good Discrete mathematics learning web resources with problem sets?</p>
 | reference request education discrete mathematics | 1 | Online Learning Resources for Discrete Mathematics -- (reference request education discrete mathematics)
<p>Are there any good Discrete mathematics learning web resources with problem sets?</p>
 | habedi/stack-exchange-dataset |
2,406 | Must Neural Networks always converge? | <h2>Introduction</h2>

<p><strong>Step One</strong></p>

<p>I wrote a standard backpropagating neural network, and to test it, I decided to have it map XOR.</p>

<p>It is a 2-2-1 network (with tanh activation function)</p>

<pre><code>X1 M1
 O1
X2 M2

B1 B2
</code></pre>

<p>For testing purposes, I manually set up the top middle neuron (M1) to be an AND gate and the lower neuron (M2) to be an OR gate (both output 1 if true and -1 if false).</p>

<p>Now, I also manually set up the connection M1-O1 to be -.5, M2-O1 to be 1, and 
B2 to be -.75</p>

<p>So if M1 = 1 and M2 = 1, the sum is (-0.5 +1 -0.75 = -0.25), and tanh(-0.25) ≈ -0.24</p>

<p>if M1 = -1 and M2 = 1, the sum is ((-0.5)*(-1) +1 -0.75 = 0.75), and tanh(0.75) ≈ 0.64</p>

<p>if M1 = -1 and M2 = -1, the sum is ((-0.5)*(-1) -1 -0.75 = -1.25), and tanh(-1.25) ≈ -0.85</p>

<p>This is a relatively good result for a "first iteration".</p>

<p><strong>Step Two</strong></p>

<p>I then proceeded to modify these weights a bit, and then train them using the error backpropagation algorithm (based on gradient descent). In this stage, I leave the weights between the input and middle neurons intact and modify only the weights between the middle neurons (and bias) and the output.</p>

<p>For testing, I set the weights to .5, .4 and .3 (respectively for M1, M2 and bias)</p>

<p>Here, however, I start having issues.</p>

<hr>

<h2>My Question</h2>

<p>I set my learning rate to .2 and let the program iterate through training data (A B A^B) for 10000 iterations or more.</p>

<p><em>Most</em> of the time, the weights converge to a good result. At times, however, the weights converge to (say) 1.5, 5.7, and .9, which results in a +1 output even for an input of {1, 1} (when the result should be a -1).</p>

<p>Is it possible for a relatively simple ANN which has a solution to not converge at all or is there a bug in my implementation?</p>
 | machine learning neural networks | 1 | Must Neural Networks always converge? -- (machine learning neural networks)
<h2>Introduction</h2>

<p><strong>Step One</strong></p>

<p>I wrote a standard backpropagating neural network, and to test it, I decided to have it map XOR.</p>

<p>It is a 2-2-1 network (with tanh activation function)</p>

<pre><code>X1 M1
 O1
X2 M2

B1 B2
</code></pre>

<p>For testing purposes, I manually set up the top middle neuron (M1) to be an AND gate and the lower neuron (M2) to be an OR gate (both output 1 if true and -1 if false).</p>

<p>Now, I also manually set up the connection M1-O1 to be -.5, M2-O1 to be 1, and 
B2 to be -.75</p>

<p>So if M1 = 1 and M2 = 1, the sum is (-0.5 +1 -0.75 = -0.25), and tanh(-0.25) ≈ -0.24</p>

<p>if M1 = -1 and M2 = 1, the sum is ((-0.5)*(-1) +1 -0.75 = 0.75), and tanh(0.75) ≈ 0.64</p>

<p>if M1 = -1 and M2 = -1, the sum is ((-0.5)*(-1) -1 -0.75 = -1.25), and tanh(-1.25) ≈ -0.85</p>

<p>This is a relatively good result for a "first iteration".</p>

<p><strong>Step Two</strong></p>

<p>I then proceeded to modify these weights a bit, and then train them using the error backpropagation algorithm (based on gradient descent). In this stage, I leave the weights between the input and middle neurons intact and modify only the weights between the middle neurons (and bias) and the output.</p>

<p>For testing, I set the weights to .5, .4 and .3 (respectively for M1, M2 and bias)</p>

<p>Here, however, I start having issues.</p>

<hr>

<h2>My Question</h2>

<p>I set my learning rate to .2 and let the program iterate through training data (A B A^B) for 10000 iterations or more.</p>

<p><em>Most</em> of the time, the weights converge to a good result. At times, however, the weights converge to (say) 1.5, 5.7, and .9, which results in a +1 output even for an input of {1, 1} (when the result should be a -1).</p>

<p>Is it possible for a relatively simple ANN which has a solution to not converge at all or is there a bug in my implementation?</p>
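<p>As a side note, the hand-set output neuron from Step One can be checked numerically; the weights and inputs below are taken directly from the question (a minimal sketch of the output unit only, not the full network):</p>

```python
import math

def output_neuron(m1, m2, w1=-0.5, w2=1.0, bias=-0.75):
    """Output of O1 given the middle-layer activations M1 and M2."""
    return math.tanh(w1 * m1 + w2 * m2 + bias)

case_and = output_neuron(1, 1)     # sum -0.25, tanh gives about -0.24
case_mix = output_neuron(-1, 1)    # sum  0.75, tanh gives about  0.64
case_nor = output_neuron(-1, -1)   # sum -1.25, tanh gives about -0.85
```

<p>The behaviour described is consistent with gradient descent settling into a poor region of the error surface: the XOR error landscape for such a small network is non-convex, so occasional bad runs from unlucky initial weights need not indicate a bug.</p>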
 | habedi/stack-exchange-dataset |
2,407 | Time complexity of a triple nested loop with squared indices | <p>I have seen this function in a past year's exam paper.</p>

<pre><code>public static void run(int n){
 for(int i = 1 ; i * i < n ; i++){
 for(int j = i ; j * j < n ; j++){
 for(int k = j ; k * k < n ; k++){

 }
 }
 }
}
</code></pre>

<p>After working through some examples, I guess it is a function whose time complexity is given by the following formula:</p>

<p><strong><em>let m = n^(1/2)</em></strong></p>

<p><strong><em>[m+(m-1)+(m-2)+...+3+2+1] + [(m-1)+(m-2)+...+3+2+1] + ...... + (3+2+1) + (2+1) + 1</em></strong></p>

<p>Edit: I have asked this math question <a href="https://math.stackexchange.com/a/159142/33103">here</a>; the answer is <strong>m(m+1)(m+2)/6</strong>.</p>

<p>Is this correct? If not, what is wrong; if yes, how would you translate it into big-O notation?
The question I want to ask is not <strong>only</strong> about this specific example, but also about how you would evaluate an algorithm. Let's say I can only work through some examples to watch the pattern that appears; but some algorithms are not that easy to evaluate. What is your way of evaluating, using this example?</p>

<p><strong>Edit:
@LuchianGrigore
@AleksG</strong></p>

<pre><code>public static void run(int n){
 for(int i = 1 ; i * i < n ; i++){
 for(int j = 1 ; j * j < n ; j++){
 for(int k = 1 ; k * k < n ; k++){

 }
 }
 }
 }
</code></pre>

<p>This is an example from my lecture notes. Each loop has time complexity <strong>n</strong> to the power of <strong>1/2</strong>; for each loop there is another n^(1/2) inside, so the total is n^(1/2) * n^(1/2) * n^(1/2) = n^(3/2).
Is the first example the same? Its count is less than the second example's, right?</p>

<p><strong>Edit,Add:</strong></p>

<p>How about this one? Is it <strong>log(n)*n^(1/2)*log(n^2)</strong></p>

<pre><code>for (int i = 1; i < n; i *= 2)
 for (int j = i; j * j < n; j++)
 for (int m = j; j < n * n; j *= 2)
</code></pre>
 | algorithm analysis runtime analysis loops | 1 | Time complexity of a triple nested loop with squared indices -- (algorithm analysis runtime analysis loops)
<p>I have seen this function in a past year's exam paper.</p>

<pre><code>public static void run(int n){
 for(int i = 1 ; i * i < n ; i++){
 for(int j = i ; j * j < n ; j++){
 for(int k = j ; k * k < n ; k++){

 }
 }
 }
}
</code></pre>

<p>After working through some examples, I guess it is a function whose time complexity is given by the following formula:</p>

<p><strong><em>let m = n^(1/2)</em></strong></p>

<p><strong><em>[m+(m-1)+(m-2)+...+3+2+1] + [(m-1)+(m-2)+...+3+2+1] + ...... + (3+2+1) + (2+1) + 1</em></strong></p>

<p>Edit: I have asked this math question <a href="https://math.stackexchange.com/a/159142/33103">here</a>; the answer is <strong>m(m+1)(m+2)/6</strong>.</p>

<p>Is this correct? If not, what is wrong; if yes, how would you translate it into big-O notation?
The question I want to ask is not <strong>only</strong> about this specific example, but also about how you would evaluate an algorithm. Let's say I can only work through some examples to watch the pattern that appears; but some algorithms are not that easy to evaluate. What is your way of evaluating, using this example?</p>

<p><strong>Edit:
@LuchianGrigore
@AleksG</strong></p>

<pre><code>public static void run(int n){
 for(int i = 1 ; i * i < n ; i++){
 for(int j = 1 ; j * j < n ; j++){
 for(int k = 1 ; k * k < n ; k++){

 }
 }
 }
 }
</code></pre>

<p>This is an example from my lecture notes. Each loop has time complexity <strong>n</strong> to the power of <strong>1/2</strong>; for each loop there is another n^(1/2) inside, so the total is n^(1/2) * n^(1/2) * n^(1/2) = n^(3/2).
Is the first example the same? Its count is less than the second example's, right?</p>

<p><strong>Edit,Add:</strong></p>

<p>How about this one? Is it <strong>log(n)*n^(1/2)*log(n^2)</strong></p>

<pre><code>for (int i = 1; i < n; i *= 2)
 for (int j = i; j * j < n; j++)
 for (int m = j; j < n * n; j *= 2)
</code></pre>
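<p>One way to sanity-check the closed form $m(m+1)(m+2)/6$ (with $m=\lfloor\sqrt{n-1}\rfloor$, since $i^2 \lt n$ means $i \le \lfloor\sqrt{n-1}\rfloor$) is to count the innermost iterations of the first snippet directly; a small Python sketch of my own, not from the exam paper:</p>

```python
import math

def count_iterations(n):
    """Number of times the innermost body of the first snippet runs."""
    count = 0
    i = 1
    while i * i < n:
        j = i
        while j * j < n:
            k = j
            while k * k < n:
                count += 1
                k += 1
            j += 1
        i += 1
    return count

def closed_form(n):
    m = math.isqrt(n - 1)            # largest i with i*i < n
    return m * (m + 1) * (m + 2) // 6

# the empirical count matches the closed form
for n in (2, 10, 100, 1000, 5000):
    assert count_iterations(n) == closed_form(n)
```

<p>Since $m(m+1)(m+2)/6 = \Theta(m^3)$ and $m=\Theta(\sqrt n)$, the first snippet is also $\Theta(n^{3/2})$, the same order as the second snippet, just with a smaller constant (roughly $1/6$ of its count).</p>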
 | habedi/stack-exchange-dataset |
2,411 | Attempt to write a function with cubed log runtime complexity $O(\log^3 n)$ | <p>I'm learning Data Structures and Algorithms now, and I have a practice question that asks me to write a function with O(log<sup>3</sup>n) complexity, which means log(n)*log(n)*log(n).</p>

<pre><code>public void run(int n) {
 for (int i = 1; i < n; i *= 2) {
 for (int j = 1; j < n; j *= 2) {
 for (int k = 1; k < n; k *= 2) {
 System.out.println("hi");
 }
 }
 }
}
</code></pre>

<p>I have come up with this solution, but I am not sure it is correct. Please help me out.</p>
 | time complexity | 1 | Attempt to write a function with cubed log runtime complexity $O(\log^3 n)$ -- (time complexity)
<p>I'm learning Data Structures and Algorithms now, and I have a practice question that asks me to write a function with O(log<sup>3</sup>n) complexity, which means log(n)*log(n)*log(n).</p>

<pre><code>public void run(int n) {
 for (int i = 1; i < n; i *= 2) {
 for (int j = 1; j < n; j *= 2) {
 for (int k = 1; k < n; k *= 2) {
 System.out.println("hi");
 }
 }
 }
}
</code></pre>

<p>I have come up with this solution, but I am not sure it is correct. Please help me out.</p>
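<p>The solution can be checked empirically: each loop multiplies its counter by 2 until it reaches n, so each runs about $\log_2 n$ times, and since the three loops are independent, the counts multiply. A quick Python check (my own, not part of the exercise):</p>

```python
def count_prints(n):
    """How many times "hi" is printed by the triple doubling loop."""
    count = 0
    i = 1
    while i < n:
        j = 1
        while j < n:
            k = 1
            while k < n:
                count += 1
                k *= 2
            j *= 2
        i *= 2
    return count

# for n an exact power of two, each loop runs exactly log2(n) times
for p in (1, 2, 3, 4, 10):
    assert count_prints(2 ** p) == p ** 3
```

<p>This suggests the nested doubling loops do realize $\Theta(\log^3 n)$, as required, with the usual caveat that no loop bound may depend on another loop's counter.</p>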
 | habedi/stack-exchange-dataset |
2,415 | Shortest distance between a point in A and a point in B | <blockquote>
 <p>Given two sets $A$ and $B$ each containing $n$ disjoint points
 in the plane, compute the shortest distance between a point in $A$ and a point in $B$, i.e., $\min \space \{\mbox{ } \text{dist}(p, q) \mbox{ } | \mbox{ } p \in A \land q \in B \space \} $.</p>
</blockquote>

<p>I am not sure if I am right, but this problem is very similar to problems that can be solved by linear programming in computational geometry. However, the reduction to LP is not straightforward. My problem also looks related to finding the thinnest strip between two sets of points, which obviously can be solved by LP in $O(n)$ time in 2-dimensional space.</p>
 | algorithms computational geometry | 1 | Shortest distance between a point in A and a point in B -- (algorithms computational geometry)
<blockquote>
 <p>Given two sets $A$ and $B$ each containing $n$ disjoint points
 in the plane, compute the shortest distance between a point in $A$ and a point in $B$, i.e., $\min \space \{\mbox{ } \text{dist}(p, q) \mbox{ } | \mbox{ } p \in A \land q \in B \space \} $.</p>
</blockquote>

<p>I am not sure if I am right, but this problem is very similar to problems that can be solved by linear programming in computational geometry. However, the reduction to LP is not straightforward. My problem also looks related to finding the thinnest strip between two sets of points, which obviously can be solved by LP in $O(n)$ time in 2-dimensional space.</p>
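<p>As a baseline before any LP or geometric machinery, the brute-force answer is a double loop; a hedged Python sketch (an $O(n\log n)$ algorithm exists via a divide-and-conquer closest-pair variant or Voronoi diagrams, but it is considerably longer):</p>

```python
import math

def min_dist_between_sets(A, B):
    """Naive O(|A| * |B|) minimum distance between two point sets."""
    best = math.inf
    for ax, ay in A:
        for bx, by in B:
            best = min(best, math.hypot(ax - bx, ay - by))
    return best
```

<p>For example, with A = [(0, 0), (5, 5)] and B = [(3, 4), (10, 0)], the minimum is achieved by (5, 5) and (3, 4), at distance $\sqrt 5$.</p>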
 | habedi/stack-exchange-dataset |
2,422 | How to prove 2-EXP != EXP | <p>I am guessing that this is also correct for <code>3-EXP</code>, <code>4-EXP</code>, etc.</p>

<p>Basically I should find a problem in <code>2-EXP</code> that is not in <code>EXP</code>.
Any examples?</p>
 | complexity theory | 1 | How to prove 2-EXP != EXP -- (complexity theory)
<p>I am guessing that this is also correct for <code>3-EXP</code>, <code>4-EXP</code>, etc.</p>

<p>Basically I should find a problem in <code>2-EXP</code> that is not in <code>EXP</code>.
Any examples?</p>
 | habedi/stack-exchange-dataset |
2,425 | Semi-decidable problems with linear bound | <p>Take a semi-decidable problem and an algorithm that finds the positive answer in finite time. The run-time of the algorithm, restricted to inputs with a positive answer, cannot be bounded by a computable function. (Otherwise we’d know how long to wait for a positive answer. If the algorithm runs longer than that we know that the answer is no and the problem would be solvable.)</p>

<p>My question is now: Can such an algorithm still have a run-time bound that is, say, linear (polynomial, constant, ...) in the input size, but with an uncomputable constant? Or would that still allow me to decide the problem? Are there examples?</p>
 | computability time complexity undecidability | 1 | Semi-decidable problems with linear bound -- (computability time complexity undecidability)
<p>Take a semi-decidable problem and an algorithm that finds the positive answer in finite time. The run-time of the algorithm, restricted to inputs with a positive answer, cannot be bounded by a computable function. (Otherwise we’d know how long to wait for a positive answer. If the algorithm runs longer than that we know that the answer is no and the problem would be solvable.)</p>

<p>My question is now: Can such an algorithm still have a run-time bound that is, say, linear (polynomial, constant, ...) in the input size, but with an uncomputable constant? Or would that still allow me to decide the problem? Are there examples?</p>
 | habedi/stack-exchange-dataset |
2,433 | A concrete example about string w and string x used in the proof of Rice's Theorem | <p>So, in lectures about Rice's Theorem, reduction is usually used to prove the theorem. The reduction usually consists of a construction of $M'$ that first simulates a TM $M$ on input $w$ (given as $\langle M,w \rangle$) and then, if $M$ accepts, simulates on an input $x$. $M'$ accepts if $x$ is accepted.</p>

<p>I really want a concrete input about $\langle M,w \rangle$ and $x$. For example:</p>

<blockquote>
 <p>$L = \{ \langle M\rangle \mid L(M) = \{\text{ stackoverflow }\}\}$; that is, $L$ contains all Turing machines whose language consists of exactly one string: "stackoverflow". $L$ is undecidable.</p>
</blockquote>

<p>What kind of $\langle M,w \rangle$ is to be simulated? </p>

<p>Suppose we have input x = "stackoverflow" or x = "this is stackoverflow" or any x with "stackoverflow" in it.</p>

<p>What if we first simulate a TM $M$, selected from among all possible TMs, whose language is only the single character $a$? So, we simulate this $\langle M,w \rangle$ with $w = a$, and surely it will be accepted. And then input $x$ is also accepted, according to the definition of $L$. </p>

<p>So, do we conclude that $\langle M,w \rangle$, whose language is a single $a$, is reducible to the $L$ that accepts all TMs that have "stackoverflow"?</p>

<p><strong>Edit:</strong> I've just looked up a brief definition of reduction. A reduction is a transformation from an easier but unknown problem to a harder but already known problem. If the harder problem is solvable, so is the easier one; otherwise, it is not. </p>

<p>Given that definition, I think the correct TM $M$ with its description $\langle M,w \rangle$ in my example should be a TM that accepts regular languages. This is the harder problem. If it is solvable, then my trivial $L$ with one string is solvable; but apparently it is not, according to the proof. We can effectively say we reduced the one-string-language problem to the regular-language problem and try to solve it. Previously, I thought it was the other way around: $\langle M,w \rangle$ is reduced to the one-string problem. </p>

<p>Is my thinking correct? </p>
 | computability | 1 | A concrete example about string w and string x used in the proof of Rice's Theorem -- (computability)
<p>So, in lectures about Rice's Theorem, reduction is usually used to prove the theorem. The reduction usually consists of a construction of $M'$ that first simulates a TM $M$ on input $w$ (given as $\langle M,w \rangle$) and then, if $M$ accepts, simulates on an input $x$. $M'$ accepts if $x$ is accepted.</p>

<p>I really want a concrete input about $\langle M,w \rangle$ and $x$. For example:</p>

<blockquote>
 <p>$L = \{ \langle M\rangle \mid L(M) = \{\text{ stackoverflow }\}\}$; that is, $L$ contains all Turing machines whose language consists of exactly one string: "stackoverflow". $L$ is undecidable.</p>
</blockquote>

<p>What kind of $\langle M,w \rangle$ is to be simulated? </p>

<p>Suppose we have input x = "stackoverflow" or x = "this is stackoverflow" or any x with "stackoverflow" in it.</p>

<p>What if we first simulate a TM $M$, selected from among all possible TMs, whose language is only the single character $a$? So, we simulate this $\langle M,w \rangle$ with $w = a$, and surely it will be accepted. And then input $x$ is also accepted, according to the definition of $L$. </p>

<p>So, do we conclude that $\langle M,w \rangle$, whose language is a single $a$, is reducible to the $L$ that accepts all TMs that have "stackoverflow"?</p>

<p><strong>Edit:</strong> I've just looked up a brief definition of reduction. A reduction is a transformation from an easier but unknown problem to a harder but already known problem. If the harder problem is solvable, so is the easier one; otherwise, it is not. </p>

<p>Given that definition, I think the correct TM $M$ with its description $\langle M,w \rangle$ in my example should be a TM that accepts regular languages. This is the harder problem. If it is solvable, then my trivial $L$ with one string is solvable; but apparently it is not, according to the proof. We can effectively say we reduced the one-string-language problem to the regular-language problem and try to solve it. Previously, I thought it was the other way around: $\langle M,w \rangle$ is reduced to the one-string problem. </p>

<p>Is my thinking correct? </p>
 | habedi/stack-exchange-dataset |
2,437 | What is the type theory judgement symbol? | <p>In type theory judgements are often presented with the following syntax:</p>

<p><img src="https://i.stack.imgur.com/7V5r2.png" alt="enter image description here"></p>

<p>My question is what is that symbol in the middle called? All the papers I've found seem to use an image rather than a unicode character so I can't look it up. I've also not found any type-theory reference which says what that symbol is (they explain what it means however).</p>

<p>So what character is that symbol and what is its proper name?</p>
 | logic terminology type theory | 1 | What is the type theory judgement symbol? -- (logic terminology type theory)
<p>In type theory judgements are often presented with the following syntax:</p>

<p><img src="https://i.stack.imgur.com/7V5r2.png" alt="enter image description here"></p>

<p>My question is what is that symbol in the middle called? All the papers I've found seem to use an image rather than a unicode character so I can't look it up. I've also not found any type-theory reference which says what that symbol is (they explain what it means however).</p>

<p>So what character is that symbol and what is its proper name?</p>
 | habedi/stack-exchange-dataset |
2,450 | How do I construct a doubly connected edge list given a set of line segments? | <blockquote>
 <p>For a given planar graph $G(V,E)$ embedded in the plane, defined by a set of line segments $E= \left \{ e_1,...,e_m \right \} $, where each segment $e_i$ is represented by its endpoints $\left \{ L_i,R_i \right \}$: construct a DCEL data structure for the planar subdivision, describe an algorithm, prove its correctness and show the complexity.</p>
</blockquote>

<p>According to <a href="http://en.wikipedia.org/wiki/DCEL" rel="noreferrer">this description of the DCEL data structure</a>, there are many connections between different objects (i.e. vertices, edges and faces) of the DCEL. So, a DCEL seems to be difficult to build and maintain.</p>

<p>Do you know of any efficient algorithm which can be used to construct a DCEL data structure?</p>
 | algorithms data structures computational geometry doubly connected edge list | 1 | How do I construct a doubly connected edge list given a set of line segments? -- (algorithms data structures computational geometry doubly connected edge list)
<blockquote>
 <p>For a given planar graph $G(V,E)$ embedded in the plane, defined by a set of line segments $E= \left \{ e_1,...,e_m \right \} $, where each segment $e_i$ is represented by its endpoints $\left \{ L_i,R_i \right \}$: construct a DCEL data structure for the planar subdivision, describe an algorithm, prove its correctness and show the complexity.</p>
</blockquote>

<p>According to <a href="http://en.wikipedia.org/wiki/DCEL" rel="noreferrer">this description of the DCEL data structure</a>, there are many connections between different objects (i.e. vertices, edges and faces) of the DCEL. So, a DCEL seems to be difficult to build and maintain.</p>

<p>Do you know of any efficient algorithm which can be used to construct a DCEL data structure?</p>
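<p>One standard construction (a sketch of the textbook approach, assuming segments intersect only at shared endpoints): create two twin half-edges per segment, then, around each vertex, sort the outgoing half-edges by angle and link each incoming half-edge to the clockwise-next outgoing one. A minimal Python sketch (names and representation are my own; face records are left implicit as the cycles of next-pointers):</p>

```python
import math
from collections import defaultdict

class HalfEdge:
    __slots__ = ("origin", "twin", "next")
    def __init__(self, origin):
        self.origin = origin   # origin vertex, a coordinate tuple
        self.twin = None
        self.next = None

def build_dcel(segments):
    """Build twin/next pointers for a planar subdivision.

    Runs in O(m log m) for m segments, dominated by the angular
    sort around each vertex.
    """
    halfedges = []
    outgoing = defaultdict(list)
    for p, q in segments:
        h1, h2 = HalfEdge(p), HalfEdge(q)
        h1.twin, h2.twin = h2, h1
        halfedges += [h1, h2]
        outgoing[p].append(h1)
        outgoing[q].append(h2)

    for v, edges in outgoing.items():
        # sort the half-edges leaving v counterclockwise by angle
        edges.sort(key=lambda h: math.atan2(h.twin.origin[1] - v[1],
                                            h.twin.origin[0] - v[0]))
        # a half-edge arriving at v (edges[i].twin) continues along
        # the clockwise-next outgoing half-edge, i.e. edges[i - 1]
        for i, h in enumerate(edges):
            h.twin.next = edges[i - 1]
    return halfedges

def count_faces(halfedges):
    """Each cycle of next-pointers bounds one face (incl. the outer one)."""
    seen, faces = set(), 0
    for h in halfedges:
        if id(h) not in seen:
            faces += 1
            while id(h) not in seen:
                seen.add(id(h))
                h = h.next
    return faces
```

<p>A quick consistency check via Euler's formula $V-E+F=2$: a unit square (4 segments) yields 2 faces and a single segment yields 1. Correctness rests on the invariant that the faces of the subdivision correspond exactly to the cycles of next-pointers, which the angular linking around each vertex establishes.</p>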
 | habedi/stack-exchange-dataset |
2,453 | If any 3 points are collinear | <blockquote>
 <p>Given a set $S$ of points $p_1,\dots,p_n$, give the most efficient algorithm for determining whether any 3 points of the set are collinear.</p>
</blockquote>

<p>The problem is that I started with the general definition, but I cannot continue to actually solving the problem.</p>

<p>What can we say about collinear points in general? Three points $a,b,c$ are collinear if the distance $d(a,c) = d(a,b)+d(b,c)$, in the case when $b$ is between $a$ and $c$.</p>

<p>The naive approach has $O(n(n-1)(n-2))=O(n^3)$ time complexity.</p>

<p>How to solve this problem, what should be the next step?</p>
 | algorithms computational geometry | 1 | If any 3 points are collinear -- (algorithms computational geometry)
<blockquote>
 <p>Given a set $S$ of points $p_1,..,p_2$ give the most efficient algorithm for determining if any 3 points of the set are collinear.</p>
</blockquote>

<p>The problem is that I started with the general definition, but I cannot proceed to actually solving the problem.</p>

<p>What can we say about collinear points in general? Three points $a,b,c$ are collinear if the distance $d(a,c) = d(a,b)+d(b,c)$ in the case when $b$ is between $a$ and $c$.</p>

<p>The naive approach has $O(n(n-1)(n-2))=O(n^3)$ time complexity.</p>

<p>How to solve this problem, what should be the next step?</p>
 | habedi/stack-exchange-dataset |
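A note on the distance test in the question above: with floating-point square roots it is numerically fragile. For integer coordinates, the cross-product test is exact. Below is a sketch of the naive $O(n^3)$ check using it (the standard improvement, not shown here, sorts the other points by slope around each point, giving $O(n^2 \log n)$):

```python
from itertools import combinations

def collinear(p, q, r):
    """True iff p, q, r lie on one line: the cross product of (q - p)
    and (r - p) is zero. Exact for integer coordinates."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0

def any_three_collinear(points):
    """Naive O(n^3) test over all triples."""
    return any(collinear(p, q, r) for p, q, r in combinations(points, 3))
```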
2,462 | Why is Turing completeness right? | <p>I am using a digital computer to write this message. Such a machine has a property which, if you think about it, is actually quite remarkable: It is <em>one machine</em> which, if programmed appropriately, can perform <em>any possible computation</em>.</p>

<p>Of course, calculating machines of one kind or another go back to antiquity. People have built machines for performing addition and subtraction (e.g., an abacus), multiplication and division (e.g., the slide rule), and more domain-specific machines such as calculators for the positions of the planets.</p>

<p>The striking thing about a computer is that it can perform <em>any</em> computation. Any computation at all. And all without having to rewire the machine. Today everybody takes this idea for granted, but if you stop and think about it, it's kind of amazing that such a device is possible.</p>

<p>I have two actual <em>questions</em>:</p>

<ol>
<li><p>When did mankind figure out that such a machine was possible? Has there ever been any serious <em>doubt</em> about whether it can be done? When was this settled? (In particular, was it settled before or after the first actual implementation?)</p></li>
<li><p>How did mathematicians <em>prove</em> that a Turing-complete machine really can compute everything?</p></li>
</ol>

<p>That second one is fiddly. Every formalism seems to have some things that <em>cannot</em> be computed. Currently "computable function" is <em>defined as</em> "anything a Turing-machine can compute". But how do we know there isn't some slightly more powerful machine that can compute more stuff? How do we know that Turing-machines are the correct abstraction?</p>
 | computability turing machines history | 1 | Why is Turing completeness right? -- (computability turing machines history)
<p>I am using a digital computer to write this message. Such a machine has a property which, if you think about it, is actually quite remarkable: It is <em>one machine</em> which, if programmed appropriately, can perform <em>any possible computation</em>.</p>

<p>Of course, calculating machines of one kind or another go back to antiquity. People have built machines for performing addition and subtraction (e.g., an abacus), multiplication and division (e.g., the slide rule), and more domain-specific machines such as calculators for the positions of the planets.</p>

<p>The striking thing about a computer is that it can perform <em>any</em> computation. Any computation at all. And all without having to rewire the machine. Today everybody takes this idea for granted, but if you stop and think about it, it's kind of amazing that such a device is possible.</p>

<p>I have two actual <em>questions</em>:</p>

<ol>
<li><p>When did mankind figure out that such a machine was possible? Has there ever been any serious <em>doubt</em> about whether it can be done? When was this settled? (In particular, was it settled before or after the first actual implementation?)</p></li>
<li><p>How did mathematicians <em>prove</em> that a Turing-complete machine really can compute everything?</p></li>
</ol>

<p>That second one is fiddly. Every formalism seems to have some things that <em>cannot</em> be computed. Currently "computable function" is <em>defined as</em> "anything a Turing-machine can compute". But how do we know there isn't some slightly more powerful machine that can compute more stuff? How do we know that Turing-machines are the correct abstraction?</p>
 | habedi/stack-exchange-dataset |
2,464 | Time-space tradeoff for missing element problem | <p>Here is a well-known problem.</p>

<p>Given an array $A[1\dots n]$ of positive integers, output the smallest positive integer not in the array.</p>

<p>The problem can be solved in $O(n)$ space and time: read the array, keep track in $O(n)$ space of whether each of $1,2,\dots,n+1$ occurred, then scan for the smallest missing element.</p>

<p>I noticed you can trade space for time. If you have $O(\frac{n}{k})$ memory only, you can do it in $k$ rounds and get time $O(k n)$. As a special case, there is obviously a constant-space quadratic-time algorithm.</p>

<p>My question is:</p>

<blockquote>
 <p>Is this the optimal tradeoff, i.e. does $\operatorname{time} \cdot \operatorname{space} = \Omega(n^2)$?
 In general, how does one prove such type of bounds?</p>
</blockquote>

<p>Assume RAM model, with bounded arithmetic and random access to arrays in O(1).</p>

<p>Inspiration for this problem: time-space tradeoff for palindromes in one-tape model (see for example, <a href="http://www.cs.uiuc.edu/class/fa05/cs475/Lectures/new/lec24.pdf" rel="noreferrer">here</a>).</p>
 | complexity theory time complexity space complexity | 1 | Time-space tradeoff for missing element problem -- (complexity theory time complexity space complexity)
<p>Here is a well-known problem.</p>

<p>Given an array $A[1\dots n]$ of positive integers, output the smallest positive integer not in the array.</p>

<p>The problem can be solved in $O(n)$ space and time: read the array, keep track in $O(n)$ space of whether each of $1,2,\dots,n+1$ occurred, then scan for the smallest missing element.</p>

<p>I noticed you can trade space for time. If you have $O(\frac{n}{k})$ memory only, you can do it in $k$ rounds and get time $O(k n)$. As a special case, there is obviously a constant-space quadratic-time algorithm.</p>

<p>My question is:</p>

<blockquote>
 <p>Is this the optimal tradeoff, i.e. does $\operatorname{time} \cdot \operatorname{space} = \Omega(n^2)$?
 In general, how does one prove such type of bounds?</p>
</blockquote>

<p>Assume RAM model, with bounded arithmetic and random access to arrays in O(1).</p>

<p>Inspiration for this problem: time-space tradeoff for palindromes in one-tape model (see for example, <a href="http://www.cs.uiuc.edu/class/fa05/cs475/Lectures/new/lec24.pdf" rel="noreferrer">here</a>).</p>
 | habedi/stack-exchange-dataset |
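The $k$-round tradeoff described in the question is short to write down. With a buffer of $w$ words, each round tests one block of $w$ candidate values against a full pass over the array (the function name and interface are my own):

```python
def smallest_missing(A, w):
    """Smallest positive integer not in A, using O(w) extra space and
    O(len(A) * ceil((len(A) + 1) / w)) time."""
    n = len(A)
    lo = 1
    while lo <= n + 1:
        hi = min(lo + w - 1, n + 1)
        seen = [False] * (hi - lo + 1)     # the only O(w) buffer
        for x in A:                        # one full pass per round
            if lo <= x <= hi:
                seen[x - lo] = True
        for i, s in enumerate(seen):
            if not s:                      # earlier blocks were fully present,
                return lo + i              # so this is the smallest missing value
        lo = hi + 1
    # unreachable: the n-element array cannot contain all of 1..n+1
```

With $w = \Theta(n/k)$ this is $k$ passes, i.e. time $O(kn)$ and space $O(n/k)$, matching the tradeoff stated in the question.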
2,470 | Efficient bandwidth algorithm | <p>Recently I sort of stumbled on a problem of finding an efficient topology given a weighted directed graph. Consider the following scenario:</p>

<ol>
<li><p>Node 1 is connected to 2,3,4 at 50 Mbps. Node 1 has 100 Mbps network card.</p></li>
<li><p>Node 3 is connected to 5 at 50 Mbps. Node 3 has 100 Mbps card.</p></li>
<li><p>Node 4 is connected to Node 3 at 40 Mbps. Node 4 has 100 Mbps card.</p></li>
</ol>

<p>(Sorry about not having a picture)</p>

<p>Problem: If Node 1 starts sending data to its immediate nodes (2 and 3), we can clearly see its network card capacity will be drained out after Node 3. Whereas if it were to <em>skip</em> node 3 and start sending to node 4, the data will eventually reach node 3 via 4 and hence node 5 will be getting data via node 3.
The problem becomes more complicated if all the links were of 50 Mbps and we can clearly see that node 2 and node 4 are the only way to reach all nodes.</p>

<p>Question: Is there an algorithm which gives the optimal path to ALL nodes keeping the network (card) capacity in mind? </p>

<p>I read about shortest-path and max-flow algorithms, but none of them seems to address my problem. Perhaps I'm missing something. I'd appreciate it if someone can help me out.</p>
 | algorithms graphs optimization linear programming | 1 | Efficient bandwidth algorithm -- (algorithms graphs optimization linear programming)
<p>Recently I sort of stumbled on a problem of finding an efficient topology given a weighted directed graph. Consider the following scenario:</p>

<ol>
<li><p>Node 1 is connected to 2,3,4 at 50 Mbps. Node 1 has 100 Mbps network card.</p></li>
<li><p>Node 3 is connected to 5 at 50 Mbps. Node 3 has 100 Mbps card.</p></li>
<li><p>Node 4 is connected to Node 3 at 40 Mbps. Node 4 has 100 Mbps card.</p></li>
</ol>

<p>(Sorry about not having a picture)</p>

<p>Problem: If Node 1 starts sending data to its immediate nodes (2 and 3), we can clearly see its network card capacity will be drained out after Node 3. Whereas if it were to <em>skip</em> node 3 and start sending to node 4, the data will eventually reach node 3 via 4 and hence node 5 will be getting data via node 3.
The problem becomes more complicated if all the links were of 50 Mbps and we can clearly see that node 2 and node 4 are the only way to reach all nodes.</p>

<p>Question: Is there an algorithm which gives the optimal path to ALL nodes keeping the network (card) capacity in mind? </p>

<p>I read about shortest-path and max-flow algorithms, but none of them seems to address my problem. Perhaps I'm missing something. I'd appreciate it if someone can help me out.</p>
 | habedi/stack-exchange-dataset |
2,471 | What is the bitwise xor of an interval? | <p>Let $\oplus$ be bitwise xor. Let $k,a,b$ be non-negative integers. $[a..b]=\{x\mid a\leq x, x\leq b\}$; it is called an integer interval.</p>

<p>What is a fast algorithm to find 
$\{ k\oplus x\mid x\in [a..b]\}$, expressed as a union of integer intervals?</p>

<p>One can prove that $[a+k..b-k]\subseteq \{ k\oplus x\mid x\in [a..b]\}$ by showing that $x-y\leq x\oplus y \leq x+y$.</p>

<p><strong>Edit:</strong> I should specify the actual input and output to remove ambiguity.</p>

<p>Input: $k, a, b$.</p>

<p>Output: $a_1, b_1, a_2, b_2,\ldots,a_m,b_m$. Such that:</p>

<p>$$
\{ k\oplus x\mid x\in [a..b]\} = \bigcup_{i=1}^m [a_i..b_i]
$$</p>
 | algorithms integers | 1 | What is the bitwise xor of an interval? -- (algorithms integers)
<p>Let $\oplus$ be bitwise xor. Let $k,a,b$ be non-negative integers. $[a..b]=\{x\mid a\leq x, x\leq b\}$; it is called an integer interval.</p>

<p>What is a fast algorithm to find 
$\{ k\oplus x\mid x\in [a..b]\}$, expressed as a union of integer intervals?</p>

<p>One can prove that $[a+k..b-k]\subseteq \{ k\oplus x\mid x\in [a..b]\}$ by showing that $x-y\leq x\oplus y \leq x+y$.</p>

<p><strong>Edit:</strong> I should specify the actual input and output to remove ambiguity.</p>

<p>Input: $k, a, b$.</p>

<p>Output: $a_1, b_1, a_2, b_2,\ldots,a_m,b_m$. Such that:</p>

<p>$$
\{ k\oplus x\mid x\in [a..b]\} = \bigcup_{i=1}^m [a_i..b_i]
$$</p>
 | habedi/stack-exchange-dataset |
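One concrete way to produce the decomposition (a sketch; it is not necessarily minimal in the number of output intervals): split $[a..b]$ into maximal aligned power-of-two blocks. For an aligned block $[p, p+2^t-1]$, XOR with $k$ leaves the low $t$ bits ranging over all values and only rewrites the high bits, so the image is again an aligned block of the same size.

```python
def xor_interval(k, a, b):
    """Return {k XOR x : x in [a..b]} as a list of intervals (lo, hi).
    Decomposes [a..b] into maximal aligned power-of-two blocks; each
    aligned block maps under XOR-by-k to another aligned block."""
    out = []
    lo = a
    while lo <= b:
        # grow the block while it stays aligned and inside [lo, b]
        t = 0
        while lo % (1 << (t + 1)) == 0 and lo + (1 << (t + 1)) - 1 <= b:
            t += 1
        size = 1 << t
        q = (k & ~(size - 1)) ^ lo   # high bits of k XOR high bits of the block
        out.append((q, q + size - 1))
        lo += size
    return out
```

The loop emits $O(\log(b-a+2))$ intervals, since at most two maximal blocks of each size can occur.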
2,487 | What is the time complexity of this function? | <p>This is an example in my lecture notes.
Is the time complexity of this function $O(n \log n)$?
In the worst case the function goes into the <code>else</code> branch, which has two nested loops with time complexities $\log n$ and $n$, so it is $O(n \log n)$. Am I right?</p>

<pre><code>int j = 3;
int k = j * n / 345;
if(k > 100){
 System.out.println("k: " + k);
}else{
 for(int i=1; i<n; i*=2){
 for(int j=0; j<i; j++){
 k++;
 }
 }
}
</code></pre>
 | complexity theory time complexity algorithm analysis runtime analysis | 1 | What is the time complexity of this function? -- (complexity theory time complexity algorithm analysis runtime analysis)
<p>This is an example in my lecture notes.
Is the time complexity of this function $O(n \log n)$?
In the worst case the function goes into the <code>else</code> branch, which has two nested loops with time complexities $\log n$ and $n$, so it is $O(n \log n)$. Am I right?</p>

<pre><code>int j = 3;
int k = j * n / 345;
if(k > 100){
 System.out.println("k: " + k);
}else{
 for(int i=1; i<n; i*=2){
 for(int j=0; j<i; j++){
 k++;
 }
 }
}
</code></pre>
 | habedi/stack-exchange-dataset |
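A quick harness (not part of the lecture notes, transcribed from the Java snippet above) that counts how often <code>k++</code> executes. Because the outer index doubles each time, the inner counts form the geometric series $1+2+4+\cdots < 2n$, which suggests the <code>else</code> branch is actually $O(n)$, not $O(n \log n)$:

```python
def inner_iterations(n):
    """Count how often k++ runs in the question's nested loops."""
    count, i = 0, 1
    while i < n:          # for (int i = 1; i < n; i *= 2)
        count += i        # the inner loop body runs exactly i times
        i *= 2
    return count
```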
2,492 | CLRS - Maxflow Augmented Flow Lemma 26.1 - don't understand use of def. in proof | <p>In Cormen et al., <em>Introduction to Algorithms</em> (3rd ed.), I don't get a line in the proof of Lemma 26.1 which states that the augmented flow $f\uparrow f'$ is a flow in $G$ and is s.t. $|f\uparrow f'| =|f|+|f'|$ (this is pp. 717-718).</p>

<p>My confusion: When arguing <em>flow-conservation</em> they use the definition of $f\uparrow f'$ in the first line to say that for each $u\in V\setminus\{s,t\}$</p>

<p>$$ \sum_{v\in V} (f\uparrow f')(u,v) = \sum_{v\in V} (f(u,v)+f'(u,v) - f'(v,u)), $$</p>

<p>where the augmented flow is defined as</p>

<p>$$ (f\uparrow f')(u,v) = \begin{cases} f(u,v)+f'(u,v) - f'(v,u) & \text{if $(u,v)\in E$}, \\
0 & \text{otherwise}. \end{cases} $$</p>

<p>Why can they ignore the 'otherwise' clause in the summation? I don't think the first clause evaluates to zero in all such cases. Do they use flow-conservation of $f$ and $f'$ in some way?</p>
 | algorithms network flow | 1 | CLRS - Maxflow Augmented Flow Lemma 26.1 - don't understand use of def. in proof -- (algorithms network flow)
<p>In Cormen et al., <em>Introduction to Algorithms</em> (3rd ed.), I don't get a line in the proof of Lemma 26.1 which states that the augmented flow $f\uparrow f'$ is a flow in $G$ and is s.t. $|f\uparrow f'| =|f|+|f'|$ (this is pp. 717-718).</p>

<p>My confusion: When arguing <em>flow-conservation</em> they use the definition of $f\uparrow f'$ in the first line to say that for each $u\in V\setminus\{s,t\}$</p>

<p>$$ \sum_{v\in V} (f\uparrow f')(u,v) = \sum_{v\in V} (f(u,v)+f'(u,v) - f'(v,u)), $$</p>

<p>where the augmented flow is defined as</p>

<p>$$ (f\uparrow f')(u,v) = \begin{cases} f(u,v)+f'(u,v) - f'(v,u) & \text{if $(u,v)\in E$}, \\
0 & \text{otherwise}. \end{cases} $$</p>

<p>Why can they ignore the 'otherwise' clause in the summation? I don't think the first clause evaluates to zero in all such cases. Do they use flow-conservation of $f$ and $f'$ in some way?</p>
 | habedi/stack-exchange-dataset |
2,495 | Book for algorithms beyond Cormen | <p>I've finished most of the material in Cormen's Intro to Algorithms book and I am looking for an algorithms book that covers material beyond Cormen's book. Are there any recommendations?</p>

<p>NOTE: I asked this on stackoverflow but wasn't all too happy with the answer. </p>

<p>NOTE: Looking at most of the comments I think ideally I would like to find a book that would cover the material of the 787 course in <a href="http://www.cs.wisc.edu/academic-programs/courses/cs-course-descriptions">this course description</a>.</p>
 | algorithms reference request books | 1 | Book for algorithms beyond Cormen -- (algorithms reference request books)
<p>I've finished most of the material in Cormen's Intro to Algorithms book and I am looking for an algorithms book that covers material beyond Cormen's book. Are there any recommendations?</p>

<p>NOTE: I asked this on stackoverflow but wasn't all too happy with the answer. </p>

<p>NOTE: Looking at most of the comments I think ideally I would like to find a book that would cover the material of the 787 course in <a href="http://www.cs.wisc.edu/academic-programs/courses/cs-course-descriptions">this course description</a>.</p>
 | habedi/stack-exchange-dataset |
2,501 | Neighbourhood in local search metaheuristic | <p>I cannot seem to find an answer to this question with Google, so I am going to ask here: is it required for a good neighbourhood function that it in principle (i. e. by recursively considering all neighbours of a certain solution - which is not practical) can reach all possible solutions?</p>

<p>My question is whether there are references in the literature that explicitly state it's a requirement - I can see that it is a good property of a neighbourhood.</p>
 | optimization heuristics | 1 | Neighbourhood in local search metaheuristic -- (optimization heuristics)
<p>I cannot seem to find an answer to this question with Google, so I am going to ask here: is it required for a good neighbourhood function that it in principle (i. e. by recursively considering all neighbours of a certain solution - which is not practical) can reach all possible solutions?</p>

<p>My question is whether there are references in the literature that explicitly state it's a requirement - I can see that it is a good property of a neighbourhood.</p>
 | habedi/stack-exchange-dataset |
2,503 | Eliminating useless productions resulting from PDA to CFG conversion | <p>In my class we used a Pushdown Automata to Context Free Grammar conversion algorithm that produces a lot of extraneous states.</p>

<p>For example, for two transitions, I am getting the following productions</p>

<blockquote>
 <p>$$\begin{gather*}
 \delta(q_0,1,Z) = (q_0,XZ) \\
 {}[q_0,Z,q_0] \to 1[q_0,X,q_0][q_0,Z,q_0] \\
 {}[q_0,Z,q_0] \to 1[q_0,X,q_1][q_1,Z,q_0] \\
 {}[q_0,Z,q_1] \to 1[q_0,X,q_0][q_0,Z,q_1] \\
 {}[q_0,Z,q_1] \to 1[q_0,X,q_1][q_1,Z,q_1] \\
\end{gather*}$$</p>
 
 <p>$$ \begin{gather*}
 \delta(q_1,0,Z) = (q_0,Z) \\
 {}[q_1,Z,q_0 ] \to 0[q_0,Z,q_0] \\
 {}[q_1,Z,q_1 ] \to 0[q_0,Z,q_1] \\
\end{gather*}$$</p>
</blockquote>

<p>How do I decide which states make it into the final productions, and which ones will be excluded?</p>
 | automata formal grammars context free pushdown automata | 1 | Eliminating useless productions resulting from PDA to CFG conversion -- (automata formal grammars context free pushdown automata)
<p>In my class we used a Pushdown Automata to Context Free Grammar conversion algorithm that produces a lot of extraneous states.</p>

<p>For example, for two transitions, I am getting the following productions</p>

<blockquote>
 <p>$$\begin{gather*}
 \delta(q_0,1,Z) = (q_0,XZ) \\
 {}[q_0,Z,q_0] \to 1[q_0,X,q_0][q_0,Z,q_0] \\
 {}[q_0,Z,q_0] \to 1[q_0,X,q_1][q_1,Z,q_0] \\
 {}[q_0,Z,q_1] \to 1[q_0,X,q_0][q_0,Z,q_1] \\
 {}[q_0,Z,q_1] \to 1[q_0,X,q_1][q_1,Z,q_1] \\
\end{gather*}$$</p>
 
 <p>$$ \begin{gather*}
 \delta(q_1,0,Z) = (q_0,Z) \\
 {}[q_1,Z,q_0 ] \to 0[q_0,Z,q_0] \\
 {}[q_1,Z,q_1 ] \to 0[q_0,Z,q_1] \\
\end{gather*}$$</p>
</blockquote>

<p>How do I decide which states make it into the final productions, and which ones will be excluded?</p>
 | habedi/stack-exchange-dataset |
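Whichever productions survive is decided after the fact by the standard useless-symbol elimination: first keep only <em>generating</em> nonterminals (those deriving some terminal string), then only those <em>reachable</em> from the start symbol. A sketch with a toy grammar encoding (the encoding and names are my own; the triple nonterminals like <code>[q0,Z,q0]</code> from the question would simply be dictionary keys here):

```python
def remove_useless(productions, start):
    """productions: dict nonterminal -> list of bodies (each a list of symbols);
    every symbol that is not a dict key counts as a terminal."""
    nts = set(productions)
    # Phase 1: fixpoint of "generating" nonterminals.
    generating = set()
    changed = True
    while changed:
        changed = False
        for A, bodies in productions.items():
            if A not in generating and any(
                all(s not in nts or s in generating for s in body)
                for body in bodies
            ):
                generating.add(A)
                changed = True
    pruned = {
        A: [b for b in bodies
            if all(s not in nts or s in generating for s in b)]
        for A, bodies in productions.items() if A in generating
    }
    # Phase 2: keep only nonterminals reachable from the start symbol.
    reachable, stack = {start}, [start]
    while stack:
        for body in pruned.get(stack.pop(), []):
            for s in body:
                if s in pruned and s not in reachable:
                    reachable.add(s)
                    stack.append(s)
    return {A: bodies for A, bodies in pruned.items() if A in reachable}
```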
2,508 | Could an artificial neural network algorithm be expressed in terms of map-reduce operations? | <p>Could an artificial neural network algorithm be expressed in terms of map-reduce operations? I am also interested more generally in methods of parallelization as applied to ANNs and their application to cloud computing. </p>

<p>I would think one approach would involve running a full ANN on each node and somehow integrating the results in order to treat the grid like a single entity (in terms of input/output and machine learning characteristics.) I would be curious even in this case what such an integrating strategy might look like.</p>
 | parallel computing artificial intelligence neural networks | 1 | Could an artificial neural network algorithm be expressed in terms of map-reduce operations? -- (parallel computing artificial intelligence neural networks)
<p>Could an artificial neural network algorithm be expressed in terms of map-reduce operations? I am also interested more generally in methods of parallelization as applied to ANNs and their application to cloud computing. </p>

<p>I would think one approach would involve running a full ANN on each node and somehow integrating the results in order to treat the grid like a single entity (in terms of input/output and machine learning characteristics.) I would be curious even in this case what such an integrating strategy might look like.</p>
 | habedi/stack-exchange-dataset |
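On the integration strategy: the usual way to cast training as map-reduce is data parallelism — each mapper computes the gradient of the loss on its shard of the data, and the reducer sums the pieces. Since the gradient is a sum over examples, the reduced result equals the single-machine gradient exactly. A sketch on a linear model standing in for the network (for a real ANN, each map task would run backpropagation on its shard instead):

```python
import numpy as np

# Toy data: 12 samples, 3 features, split into 3 shards of 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))
y = rng.normal(size=12)
w = rng.normal(size=3)                  # current weights

def map_grad(shard):
    """Map phase: un-normalized squared-error gradient on one shard."""
    Xs, ys = shard
    return Xs.T @ (Xs @ w - ys)

shards = [(X[i:i + 4], y[i:i + 4]) for i in range(0, 12, 4)]
partials = map(map_grad, shards)        # "map"
grad = sum(partials)                    # "reduce": elementwise sum
full_grad = X.T @ (X @ w - y)           # what a single machine would compute
```

The "one full ANN per node" idea from the question corresponds instead to ensembling, where the reduce step averages or votes over the per-node predictions rather than over gradients.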
2,519 | Efficiently calculating minimum edit distance of a smaller string at each position in a larger one | <p>Given two strings, $r$ and $s$, where $n = |r|$, $m = |s|$ and $m \ll n$, efficiently find the minimum edit distance between $s$ and the suffix of $r$ beginning at each position.</p>

<p>That is, for each suffix of $r$ beginning at position $k$, $r_k$, find the <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a> of $r_k$ and $s$ for each $k \in [0, |r|-1]$. In other words, I would like an array of scores, $A$, such that each position, $A[k]$, corresponds to the score of $r_k$ and $s$.</p>

<p>The obvious solution is to use the standard dynamic programming solution for each $r_k$ against $s$ considered separately, but this has the abysmal running time of $O(n m^2)$ (or $O(n d^2)$, where $d$ is the maximum edit distance). It seems like you should be able to re-use the information that you've computed for $r_0$ against $s$ for the comparison with $s$ and $r_1$.</p>

<p>I've thought of constructing a prefix tree and then running the dynamic programming algorithm for $s$ against the trie, but this still has worst case $O(n d^2)$ (where $d$ is the maximum edit distance), as the trie is only optimized for efficient lookup.</p>

<p>Ideally I would like something that has worst case running time of $O(n d)$ though I would settle for good average case running time. Does anyone have any suggestions? Is $O(n d^2)$ the best you can do, in general?</p>

<p>Here are some links that might be relevant though I can't see how they would apply to the above problem as most of them are optimized for lookup only:</p>

<ul>
<li><a href="http://stevehanov.ca/blog/index.php?id=114" rel="nofollow noreferrer">Fast and Easy Levenshtein distance using a Trie</a></li>
<li><a href="https://stackoverflow.com/questions/3183149/most-efficient-way-to-calculate-levenshtein-distance">SO: Most efficient way to calculate Levenshtein distance</a></li>
<li><a href="https://stackoverflow.com/questions/4057513/levenshtein-distance-algorithm-better-than-onm?rq=1">SO: Levenshtein Distance Algorithm better than $O(n m)$</a></li>
<li><a href="http://www.berghel.net/publications/asm/asm.php" rel="nofollow noreferrer">An extension of Ukkonen's enhanced dynamic programming ASM algorithm</a></li>
<li><a href="http://blog.notdot.net/2010/07/Damn-Cool-Algorithms-Levenshtein-Automata" rel="nofollow noreferrer">Damn Cool Algorithms: Levenshtein Automata</a></li>
</ul>

<p>I've also heard some talk about using some type of distance metric to optimize search (such as a <a href="http://en.wikipedia.org/wiki/BK-tree" rel="nofollow noreferrer">BK-tree</a>?) but I know little about this area and how it applies to this problem.</p>
 | algorithms runtime analysis strings dynamic programming string metrics | 1 | Efficiently calculating minimum edit distance of a smaller string at each position in a larger one -- (algorithms runtime analysis strings dynamic programming string metrics)
<p>Given two strings, $r$ and $s$, where $n = |r|$, $m = |s|$ and $m \ll n$, efficiently find the minimum edit distance between $s$ and the suffix of $r$ beginning at each position.</p>

<p>That is, for each suffix of $r$ beginning at position $k$, $r_k$, find the <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a> of $r_k$ and $s$ for each $k \in [0, |r|-1]$. In other words, I would like an array of scores, $A$, such that each position, $A[k]$, corresponds to the score of $r_k$ and $s$.</p>

<p>The obvious solution is to use the standard dynamic programming solution for each $r_k$ against $s$ considered separately, but this has the abysmal running time of $O(n m^2)$ (or $O(n d^2)$, where $d$ is the maximum edit distance). It seems like you should be able to re-use the information that you've computed for $r_0$ against $s$ for the comparison with $s$ and $r_1$.</p>

<p>I've thought of constructing a prefix tree and then running the dynamic programming algorithm for $s$ against the trie, but this still has worst case $O(n d^2)$ (where $d$ is the maximum edit distance), as the trie is only optimized for efficient lookup.</p>

<p>Ideally I would like something that has worst case running time of $O(n d)$ though I would settle for good average case running time. Does anyone have any suggestions? Is $O(n d^2)$ the best you can do, in general?</p>

<p>Here are some links that might be relevant though I can't see how they would apply to the above problem as most of them are optimized for lookup only:</p>

<ul>
<li><a href="http://stevehanov.ca/blog/index.php?id=114" rel="nofollow noreferrer">Fast and Easy Levenshtein distance using a Trie</a></li>
<li><a href="https://stackoverflow.com/questions/3183149/most-efficient-way-to-calculate-levenshtein-distance">SO: Most efficient way to calculate Levenshtein distance</a></li>
<li><a href="https://stackoverflow.com/questions/4057513/levenshtein-distance-algorithm-better-than-onm?rq=1">SO: Levenshtein Distance Algorithm better than $O(n m)$</a></li>
<li><a href="http://www.berghel.net/publications/asm/asm.php" rel="nofollow noreferrer">An extension of Ukkonen's enhanced dynamic programming ASM algorithm</a></li>
<li><a href="http://blog.notdot.net/2010/07/Damn-Cool-Algorithms-Levenshtein-Automata" rel="nofollow noreferrer">Damn Cool Algorithms: Levenshtein Automata</a></li>
</ul>

<p>I've also heard some talk about using some type of distance metric to optimize search (such as a <a href="http://en.wikipedia.org/wiki/BK-tree" rel="nofollow noreferrer">BK-tree</a>?) but I know little about this area and how it applies to this problem.</p>
 | habedi/stack-exchange-dataset |
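One observation (mine, not from the post): because every query is a full suffix of $r$ against all of $s$, a single DP over suffix pairs $D[i][j] = \mathrm{lev}(r[i{:}], s[j{:}])$ yields all $n+1$ answers in $O(nm)$ total — already better than rerunning the standard DP once per suffix:

```python
def suffix_edit_distances(r, s):
    """Return [lev(r[k:], s) for k in 0..len(r)] in O(len(r)*len(s)) total.
    D[i][j] = edit distance between the suffixes r[i:] and s[j:]."""
    n, m = len(r), len(s)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        D[n][j] = m - j              # r suffix empty: insert the rest of s
    for i in range(n - 1, -1, -1):
        D[i][m] = n - i              # s suffix empty: delete the rest of r
        for j in range(m - 1, -1, -1):
            cost = 0 if r[i] == s[j] else 1
            D[i][j] = min(D[i + 1][j] + 1,          # delete r[i]
                          D[i][j + 1] + 1,          # insert s[j]
                          D[i + 1][j + 1] + cost)   # match / substitute
    return [D[k][0] for k in range(n + 1)]
```

Whether this can be pushed further to the hoped-for $O(nd)$ is a separate question; banded variants in the spirit of Ukkonen's algorithm are the natural next step.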
2,521 | Is the type inference here really complicated? | <p>There's a <a href="https://stackoverflow.com/questions/9058430/why-doesnt-immutablemap-builder-build-pick-the-correct-type-parameters">question on SO</a> asking why in Java the right type doesn't get picked in a concrete case. I know that Java can't do it in such "complicated" cases, but I'm asking myself <em>WHY</em>?</p>

<p>The (for simplicity slightly modified) line failing to compile is</p>

<pre><code>Map<String, Number> m = ImmutableMap.builder().build();
</code></pre>

<p>and the methods are defined as<sup>1</sup></p>

<pre><code>class ImmutableMap {
 public static <K1, V1> Builder<K1, V1> builder() {...}
 ...
}

class Builder<K2, V2> {
 public ImmutableMap<K2, V2> build() {...}
 ...
}
</code></pre>

<p>The solution <code>K1=K2=String</code> and <code>V1=V2=Number</code> is obvious to everyone but the compiler. There are 4 variables here and I can see 4 trivial equations, so what's the problem with type inference here?</p>

<p><sup>1</sup>I simplified the <a href="http://guava-libraries.googlecode.com/git/guava/src/com/google/common/collect/ImmutableMap.java" rel="nofollow noreferrer">code piece from Guava</a> for this example and numbered the type variables to make it (hopefully) clearer.</p>
 | programming languages typing java type inference | 1 | Is the type inference here really complicated? -- (programming languages typing java type inference)
<p>There's a <a href="https://stackoverflow.com/questions/9058430/why-doesnt-immutablemap-builder-build-pick-the-correct-type-parameters">question on SO</a> asking why in Java the right type doesn't get picked in a concrete case. I know that Java can't do it in such "complicated" cases, but I'm asking myself <em>WHY</em>?</p>

<p>The (for simplicity slightly modified) line failing to compile is</p>

<pre><code>Map<String, Number> m = ImmutableMap.builder().build();
</code></pre>

<p>and the methods are defined as<sup>1</sup></p>

<pre><code>class ImmutableMap {
 public static <K1, V1> Builder<K1, V1> builder() {...}
 ...
}

class Builder<K2, V2> {
 public ImmutableMap<K2, V2> build() {...}
 ...
}
</code></pre>

<p>The solution <code>K1=K2=String</code> and <code>V1=V2=Number</code> is obvious to everyone but the compiler. There are 4 variables here and I can see 4 trivial equations, so what's the problem with type inference here?</p>

<p><sup>1</sup>I simplified the <a href="http://guava-libraries.googlecode.com/git/guava/src/com/google/common/collect/ImmutableMap.java" rel="nofollow noreferrer">code piece from Guava</a> for this example and numbered the type variables to make it (hopefully) clearer.</p>
 | habedi/stack-exchange-dataset |
2,524 | Getting parallel items in dependency resolution | <p>I have implemented a topological sort based on the <a href="http://en.wikipedia.org/wiki/Topological_sort">Wikipedia article</a> which I'm using for dependency resolution, but it returns a linear list. What kind of algorithm can I use to find the independent paths?</p>
 | algorithms graphs parallel computing scheduling | 1 | Getting parallel items in dependency resolution -- (algorithms graphs parallel computing scheduling)
<p>I have implemented a topological sort based on the <a href="http://en.wikipedia.org/wiki/Topological_sort">Wikipedia article</a> which I'm using for dependency resolution, but it returns a linear list. What kind of algorithm can I use to find the independent paths?</p>
 | habedi/stack-exchange-dataset |
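A common way to get the parallel groups (a sketch of Kahn's algorithm emitting levels instead of a flat list): repeatedly remove every node whose in-degree is currently zero; the nodes removed together form one batch of mutually independent items, and each batch depends only on earlier batches:

```python
from collections import defaultdict

def parallel_batches(n, edges):
    """Nodes 0..n-1; edge (u, v) means v depends on u.
    Returns batches of mutually independent nodes, in dependency order."""
    indeg = [0] * n
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    frontier = [u for u in range(n) if indeg[u] == 0]
    batches = []
    while frontier:
        batches.append(frontier)
        nxt = []
        for u in frontier:
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt
    if sum(len(b) for b in batches) != n:
        raise ValueError("dependency cycle detected")
    return batches
```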
2,531 | Time to construct a GNBA for LTL formula | <p>I have a problem with the proof for constructing a GNBA (<a href="https://en.wikipedia.org/wiki/Generalized_B%C3%BCchi_automaton" rel="nofollow">generalized nondeterministic Büchi automaton</a>) for an <a href="https://en.wikipedia.org/wiki/Linear_temporal_logic" rel="nofollow">LTL formula</a>:</p>

<p><strong>Theorem:</strong> For any LTL formula $\varphi$ there exists a GNBA $G_{\varphi}$ over alphabet $2^{AP}$ such that:</p>

<ol>
<li><p>$\operatorname{Word}(\varphi)=L_{\omega}(G_{\varphi})$.</p></li>
<li><p>$G_{\varphi}$ can be constructed in time and space $2^{O(|\varphi|)}$, where $|\varphi|$ is the size of $\varphi$.</p></li>
<li><p>The number of accepting states of $G_{\varphi}$ is bounded above by $O(|\varphi|)$.</p></li>
</ol>

<p>My problem lies in the proof of (2), that is, in the proof it says that the number of states in $G_{\varphi}$ is bounded by $2^{|\operatorname{subf}(\varphi)|}$ but since $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ (where $\operatorname{subf}(\varphi)$ is the set of all subformulae) the number of states is bounded by $2^{O(|\varphi|)}$. </p>

<p>But why does $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ hold? </p>
 | logic automata formal methods model checking linear temporal logic | 1 | Time to construct a GNBA for LTL formula -- (logic automata formal methods model checking linear temporal logic)
<p>I have a problem with the proof for constructing a GNBA (<a href="https://en.wikipedia.org/wiki/Generalized_B%C3%BCchi_automaton" rel="nofollow">generalized nondeterministic Büchi automaton</a>) for an <a href="https://en.wikipedia.org/wiki/Linear_temporal_logic" rel="nofollow">LTL formula</a>:</p>

<p><strong>Theorem:</strong> For any LTL formula $\varphi$ there exists a GNBA $G_{\varphi}$ over alphabet $2^{AP}$ such that:</p>

<ol>
<li><p>$\operatorname{Word}(\varphi)=L_{\omega}(G_{\varphi})$.</p></li>
<li><p>$G_{\varphi}$ can be constructed in time and space $2^{O(|\varphi|)}$, where $|\varphi|$ is the size of $\varphi$.</p></li>
<li><p>The number of accepting states of $G_{\varphi}$ is bounded above by $O(|\varphi|)$.</p></li>
</ol>

<p>My problem lies in the proof of (2), that is, in the proof it says that the number of states in $G_{\varphi}$ is bounded by $2^{|\operatorname{subf}(\varphi)|}$ but since $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ (where $\operatorname{subf}(\varphi)$ is the set of all subformulae) the number of states is bounded by $2^{O(|\varphi|)}$. </p>

<p>But why does $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ hold? </p>
 | habedi/stack-exchange-dataset |
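A sketch of the counting argument, stated from memory (it follows the usual textbook presentation, e.g. Baier and Katoen, so treat the constant as indicative): every subformula is rooted at a node of the syntax tree of $\varphi$, and the closure used for the GNBA states adds at most one negation per subformula.

```latex
% Each subformula of \varphi corresponds to (at least) one node of the
% syntax tree of \varphi, which has at most |\varphi| nodes, so there are
% at most |\varphi| distinct subformulas. If subf(\varphi) is taken to be
% the closure, i.e. it also contains the negation of every subformula,
% the count at most doubles:
\[
  |\mathrm{subf}(\varphi)| \;\le\; 2\,|\varphi| ,
  \qquad\text{hence}\qquad
  |Q| \;\le\; 2^{|\mathrm{subf}(\varphi)|}
      \;\le\; 2^{2|\varphi|} \;=\; 2^{O(|\varphi|)} .
\]
```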
2,539 | Minimizing the total variation of a sequence of discrete choices | <p>My setup is something like this: I have a sequence of sets of integers $C_i (1\leq i\leq n)$, with $|C_i|$ relatively small - on the order of four or five items for all $i$. I want to choose a sequence $x_i (1\leq i\leq n)$ with each $x_i\in C_i$ such that the total variation (either $\ell_1$ or $\ell_2$, i.e. $\sum_{i=1}^{n-1} |x_i-x_{i+1}|$ or $\sum_{i=1}^{n-1} \left(x_i-x_{i+1}\right)^2$) is minimized. While it seems like the choice for each $x_i$ is 'local', the problem is that choices can propagate and have non-local effects and so the problem seems inherently global in nature.</p>

<p>My primary concern is in a practical algorithm for the problem; right now I'm using annealing methods based on mutating short subsequences, and while they should be all right it seems like I ought to be able to do better. But I'm also interested in the abstract complexity — my hunch would be that the standard query version ('is there a solution of total variation $\leq k$?') would be NP-complete via a reduction from some constraint problem like 3-SAT but I can't quite see the reduction. Any pointers to previous study would be welcome — it seems like such a natural problem that I can't believe it hasn't been looked at before, but my searches so far haven't turned up anything quite like it.</p>
 | algorithms complexity theory optimization | 1 | Minimizing the total variation of a sequence of discrete choices -- (algorithms complexity theory optimization)
<p>My setup is something like this: I have a sequence of sets of integers $C_i (1\leq i\leq n)$, with $|C_i|$ relatively small - on the order of four or five items for all $i$. I want to choose a sequence $x_i (1\leq i\leq n)$ with each $x_i\in C_i$ such that the total variation (either $\ell_1$ or $\ell_2$, i.e. $\sum_{i=1}^{n-1} |x_i-x_{i+1}|$ or $\sum_{i=1}^{n-1} \left(x_i-x_{i+1}\right)^2$) is minimized. While it seems like the choice for each $x_i$ is 'local', the problem is that choices can propagate and have non-local effects and so the problem seems inherently global in nature.</p>

<p>My primary concern is in a practical algorithm for the problem; right now I'm using annealing methods based on mutating short subsequences, and while they should be all right it seems like I ought to be able to do better. But I'm also interested in the abstract complexity — my hunch would be that the standard query version ('is there a solution of total variation $\leq k$?') would be NP-complete via a reduction from some constraint problem like 3-SAT but I can't quite see the reduction. Any pointers to previous study would be welcome — it seems like such a natural problem that I can't believe it hasn't been looked at before, but my searches so far haven't turned up anything quite like it.</p>
 | habedi/stack-exchange-dataset |
2,546 | Find string that minimizes the sum of the edit distances to all other strings in set | <p>I have a set of strings $S$ and I am using the edit-distance (<a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow">Levenshtein</a>) to measure the distance between all pairs.</p>

<p>Is there an algorithm for finding the string $x$ which minimizes the sum of the distances to all strings in $S$, that is</p>

<p>$\arg\min_x \sum_{s \in S} \text{edit-distance}(x,s)$</p>

<p>It seems like there should, but I can't find the right reference.</p>
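<p>For concreteness, the objective being minimized can be spelled out with a standard Levenshtein DP; the following is only a sketch for evaluating a candidate $x$ by brute force, not a search procedure:</p>

```python
def edit_distance(a, b):
    """Standard Levenshtein DP, keeping one row at a time."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def total_distance(x, S):
    """The objective to be minimized over candidate strings x."""
    return sum(edit_distance(x, s) for s in S)
```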
 | algorithms reference request strings string metrics | 1 | Find string that minimizes the sum of the edit distances to all other strings in set -- (algorithms reference request strings string metrics)
<p>I have a set of strings $S$ and I am using the edit-distance (<a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow">Levenshtein</a>) to measure the distance between all pairs.</p>

<p>Is there an algorithm for finding the string $x$ which minimizes the sum of the distances to all strings in $S$, that is</p>

<p>$\arg\min_x \sum_{s \in S} \text{edit-distance}(x,s)$</p>

<p>It seems like there should, but I can't find the right reference.</p>
 | habedi/stack-exchange-dataset |
2,553 | Maximum number of points that two paths can reach | <p>Suppose we are given a list of $n$ points, whose $x$ and $y$ coordinates are all non-negative. Suppose also that there are no duplicate points. We can only go from point $(x_i, y_i)$ to point $(x_j, y_j)$ if $x_i \le x_j$ and $y_i \le y_j$. The question is: given these $n$ points, what is the maximum number of points that we can reach if we are allowed to draw two paths that connect points using the above rule? Paths must start from the origin and may contain repeated points. $(0, 0)$ is of course not included in the points reached.</p>

<p>An example: given $(2, 0), (2, 1), (1, 2), (0, 3), (1, 3), (2, 3), (3, 3), (2, 4), (1, 5), (1, 6)$, the answer is $8$ since we can take $(0, 0) \rightarrow (2, 0) \rightarrow (2, 1) \rightarrow (2, 3) \rightarrow (2, 4)$ and $(0, 0) \rightarrow (1, 2) \rightarrow (1, 3) \rightarrow (1, 5) \rightarrow (1, 6)$.</p>

<p>If we are allowed to draw only one path, I can easily solve the question by dynamic programming that runs in $O(n^2)$. I first sort the points by increasing $x_i+y_i$. Let $D[i]$ be the maximum number of points that a single path ending at point $i$ in the sorted list can reach. Then $D[1] = 1$ and $D[i] = \max\limits_{1\le j < i, x_j \le x_i, y_j \le y_i} D[j] + 1$. The answer then is just $\max\limits_{1\le i \le n} D[i]$.</p>
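<p>The single-path DP can be sketched as follows (a rough transcription, not tuned for constant factors; following the problem statement, the origin itself is not counted):</p>

```python
def max_single_path(points):
    """O(n^2) DP: sort by x+y, then D[i] is the length of the longest
    coordinatewise-nondecreasing chain ending at point i."""
    pts = sorted(points, key=lambda p: p[0] + p[1])
    n = len(pts)
    D = [1] * n
    for i in range(n):
        for j in range(i):
            if pts[j][0] <= pts[i][0] and pts[j][1] <= pts[i][1]:
                D[i] = max(D[i], D[j] + 1)
    return max(D, default=0)
```

On the example instance above, the best single path reaches 4 points.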

<p>But I cannot come up with a recurrence relation for two paths. If anyone has any idea about such a recurrence relation, I would be happy to hear what they are.</p>
 | computational geometry dynamic programming recurrence relation | 1 | Maximum number of points that two paths can reach -- (computational geometry dynamic programming recurrence relation)
<p>Suppose we are given a list of $n$ points, whose $x$ and $y$ coordinates are all non-negative. Suppose also that there are no duplicate points. We can only go from point $(x_i, y_i)$ to point $(x_j, y_j)$ if $x_i \le x_j$ and $y_i \le y_j$. The question is: given these $n$ points, what is the maximum number of points that we can reach if we are allowed to draw two paths that connect points using the above rule? Paths must start from the origin and may contain repeated points. $(0, 0)$ is of course not included in the points reached.</p>

<p>An example: given $(2, 0), (2, 1), (1, 2), (0, 3), (1, 3), (2, 3), (3, 3), (2, 4), (1, 5), (1, 6)$, the answer is $8$ since we can take $(0, 0) \rightarrow (2, 0) \rightarrow (2, 1) \rightarrow (2, 3) \rightarrow (2, 4)$ and $(0, 0) \rightarrow (1, 2) \rightarrow (1, 3) \rightarrow (1, 5) \rightarrow (1, 6)$.</p>

<p>If we are allowed to draw only one path, I can easily solve the question by dynamic programming that runs in $O(n^2)$. I first sort the points by increasing $x_i+y_i$. Let $D[i]$ be the maximum number of points that a single path ending at point $i$ in the sorted list can reach. Then $D[1] = 1$ and $D[i] = \max\limits_{1\le j < i, x_j \le x_i, y_j \le y_i} D[j] + 1$. The answer then is just $\max\limits_{1\le i \le n} D[i]$.</p>

<p>But I cannot come up with a recurrence relation for two paths. If anyone has any idea about such a recurrence relation, I would be happy to hear what they are.</p>
 | habedi/stack-exchange-dataset |
2,557 | How to simulate backreferences, lookaheads, and lookbehinds in finite state automata? | <p>I created a simple regular expression lexer and parser to take a regular expression and generate its parse tree. Creating a non-deterministic finite state automaton from this parse tree is relatively simple for basic regular expressions. However I can't seem to wrap my head around how to simulate backreferences, lookaheads, and lookbehinds.</p>

<p>From what I read in the purple dragon book I understood that to simulate a lookahead $r/s$ where the regular expression $r$ is matched if and only if the match is followed by a match of the regular expression $s$, you create a non-deterministic finite state automaton in which $/$ is replaced by $\varepsilon$. Is it possible to create a deterministic finite state automaton that does the same?</p>

<p>What about simulating negative lookaheads and lookbehinds? I would really appreciate it if you would link me to a resource which describes how to do this in detail.</p>
 | automata finite automata regular expressions | 1 | How to simulate backreferences, lookaheads, and lookbehinds in finite state automata? -- (automata finite automata regular expressions)
<p>I created a simple regular expression lexer and parser to take a regular expression and generate its parse tree. Creating a non-deterministic finite state automaton from this parse tree is relatively simple for basic regular expressions. However I can't seem to wrap my head around how to simulate backreferences, lookaheads, and lookbehinds.</p>

<p>From what I read in the purple dragon book I understood that to simulate a lookahead $r/s$ where the regular expression $r$ is matched if and only if the match is followed by a match of the regular expression $s$, you create a non-deterministic finite state automaton in which $/$ is replaced by $\varepsilon$. Is it possible to create a deterministic finite state automaton that does the same?</p>

<p>What about simulating negative lookaheads and lookbehinds? I would really appreciate it if you would link me to a resource which describes how to do this in detail.</p>
 | habedi/stack-exchange-dataset |
2,569 | Worst case $O(n \ln n)$ in place stable sort? | <p>I am having trouble finding good resources that give a worst case $O(n \ln n)$ <a href="http://en.wikipedia.org/wiki/In-place_algorithm">in place</a> <a href="http://www.algorithmist.com/index.php/Stable_Sort">stable</a> sorting algorithm. Does anyone know of any good resources?</p>

<p>Just a reminder, in place means it uses the array passed in and the sorting algorithm is only allowed to use constant extra space. Stable means that elements with the same key appear in the same order in the sorted array as they did in the original.</p>

<p>For example, naive merge sort is worst case $O(n \ln n)$ and stable but uses $O(n)$ extra space. Standard quicksort can be made stable, is in place but is worst case $O(n^2)$. Heapsort is in place, worst case $O(n \ln n)$ but isn't stable. <a href="http://en.wikipedia.org/wiki/Sorting_algorithm">Wikipedia</a> has a nice chart of which sorting algorithms have which drawbacks. Notice that there is no sorting algorithm that they list that has all three conditions of stability, worst case $O(n \ln n)$ and being in place.</p>

<p>I have found a paper called <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.8523&rep=rep1&type=pdf">"Practical in-place mergesort"</a> by Katajainen, Pasanen and Teuhola, which claims to have a worst case $O(n \ln n)$ in place stable mergesort variant. If I understand their results correctly, they use (bottom-up?) mergesort recursively on the first $\frac{1}{4}$ of the array and the latter $\frac{1}{2}$ of the array and use the second $\frac{1}{4}$ as scratch space to do the merge. I'm still reading through this so any more information on whether I'm interpreting their results correctly is appreciated.</p>

<p>I would also be very interested in a worst case $O(n \ln n)$ in place stable quicksort. From what I understand, modifying quicksort to be worst case $O(n \ln n)$ requires <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">selecting a proper pivot</a> which would destroy the stability that it would otherwise normally enjoy.</p>

<p>This is purely of theoretical interest and I have no practical application. I would just like to know the algorithm that has all three of these features.</p>
 | algorithms reference request sorting in place | 1 | Worst case $O(n \ln n)$ in place stable sort? -- (algorithms reference request sorting in place)
<p>I am having trouble finding good resources that give a worst case $O(n \ln n)$ <a href="http://en.wikipedia.org/wiki/In-place_algorithm">in place</a> <a href="http://www.algorithmist.com/index.php/Stable_Sort">stable</a> sorting algorithm. Does anyone know of any good resources?</p>

<p>Just a reminder, in place means it uses the array passed in and the sorting algorithm is only allowed to use constant extra space. Stable means that elements with the same key appear in the same order in the sorted array as they did in the original.</p>

<p>For example, naive merge sort is worst case $O(n \ln n)$ and stable but uses $O(n)$ extra space. Standard quicksort can be made stable, is in place but is worst case $O(n^2)$. Heapsort is in place, worst case $O(n \ln n)$ but isn't stable. <a href="http://en.wikipedia.org/wiki/Sorting_algorithm">Wikipedia</a> has a nice chart of which sorting algorithms have which drawbacks. Notice that there is no sorting algorithm that they list that has all three conditions of stability, worst case $O(n \ln n)$ and being in place.</p>

<p>I have found a paper called <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.8523&rep=rep1&type=pdf">"Practical in-place mergesort"</a> by Katajainen, Pasanen and Teuhola, which claims to have a worst case $O(n \ln n)$ in place stable mergesort variant. If I understand their results correctly, they use (bottom-up?) mergesort recursively on the first $\frac{1}{4}$ of the array and the latter $\frac{1}{2}$ of the array and use the second $\frac{1}{4}$ as scratch space to do the merge. I'm still reading through this so any more information on whether I'm interpreting their results correctly is appreciated.</p>

<p>I would also be very interested in a worst case $O(n \ln n)$ in place stable quicksort. From what I understand, modifying quicksort to be worst case $O(n \ln n)$ requires <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">selecting a proper pivot</a> which would destroy the stability that it would otherwise normally enjoy.</p>

<p>This is purely of theoretical interest and I have no practical application. I would just like to know the algorithm that has all three of these features.</p>
 | habedi/stack-exchange-dataset |
2,571 | 3-dimensional matching approximation algorithm (implementation details) | <p>I have a run-time implementation question regarding the 3-dimensional (unweighted 2-)approximation algorithm below:
How can I construct the maximum matching M_r in S_r in linear time in line 8?</p>

<p>$X, Y, Z $ are disjoint sets; a matching $M$ is a subset of $S$ s.t. no two triples in $M$ have the same coordinate at any dimension.</p>

<p>$
\text{Algorithm: unweighted 3-dimensional matching (2-approximation)} \\
\text{Input: a set $S\subseteq X \times Y \times Z$ of triples} \\
\text{Output: a matching M in S}
$</p>

<pre><code> 1) construct maximal matching M in S; 
 2) change = TRUE; 
 3) while (change) { 
 4) change = FALSE; 
 5) for each triple (a,b,c) in M { 
 6) M = M - {(a,b,c)}; 
 7) let S_r be the set of triples in S not contradicting M; 
 8) construct a maximum matching M_r in S_r; 
 9) if (M_r contains more than one triple) { 
10) M = M \cup M_r; 
11) change = TRUE; 
12) } else { 
13) M = M \cup {(a,b,c)}; 
14) } 
15) } 
</code></pre>
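<p>For reference, step 1 (a maximal, not maximum, matching) can be built greedily in one linear pass; a sketch:</p>

```python
def maximal_matching(S):
    """Greedy maximal 3-dimensional matching: scan the triples and keep
    one whenever it conflicts with nothing kept so far (no coordinate
    reused in any dimension)."""
    used_x, used_y, used_z = set(), set(), set()
    M = []
    for (a, b, c) in S:
        if a not in used_x and b not in used_y and c not in used_z:
            M.append((a, b, c))
            used_x.add(a); used_y.add(b); used_z.add(c)
    return M
```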

<hr>

<p>[1] <a href="http://faculty.cse.tamu.edu/chen/courses/cpsc669/2011/notes/ch9.pdf" rel="nofollow">http://faculty.cse.tamu.edu/chen/courses/cpsc669/2011/notes/ch9.pdf</a>, p. 326</p>
 | algorithms graphs approximation matching | 1 | 3-dimensional matching approximation algorithm (implementation details) -- (algorithms graphs approximation matching)
<p>I have a run-time implementation question regarding the 2-approximation algorithm for unweighted 3-dimensional matching below:
How can I construct the maximum matching M_r in S_r in linear time in line 8?</p>

<p>$X, Y, Z $ are disjoint sets; a matching $M$ is a subset of $S$ s.t. no two triples in $M$ have the same coordinate at any dimension.</p>

<p>$
\text{Algorithm: unweighted 3-dimensional matching (2-approximation)} \\
\text{Input: a set $S\subseteq X \times Y \times Z$ of triples} \\
\text{Output: a matching M in S}
$</p>

<pre><code> 1) construct maximal matching M in S; 
 2) change = TRUE; 
 3) while (change) { 
 4) change = FALSE; 
 5) for each triple (a,b,c) in M { 
 6) M = M - {(a,b,c)}; 
 7) let S_r be the set of triples in S not contradicting M; 
 8) construct a maximum matching M_r in S_r; 
 9) if (M_r contains more than one triple) { 
10) M = M \cup M_r; 
11) change = TRUE; 
12) } else { 
13) M = M \cup {(a,b,c)}; 
14) } 
15) } 
</code></pre>

<hr>

<p>[1] <a href="http://faculty.cse.tamu.edu/chen/courses/cpsc669/2011/notes/ch9.pdf" rel="nofollow">http://faculty.cse.tamu.edu/chen/courses/cpsc669/2011/notes/ch9.pdf</a>, p. 326</p>
 | habedi/stack-exchange-dataset |
2,575 | Finding the $k$th largest element in an evolving query data structure | <p>Basically, the problem I am solving is this. Initially, the array $A$ is empty. Then I am given data to fill the array and at any time I have to make a query to print the $|A|/3$-th largest element inserted so far.</p>

<p>I was solving the problem with segment trees, but I am not able to make a little modification to the query function of the segment tree. The query function that I wrote returns the largest element between indices $a_{\text{begin}}$ and $a_{\text{end}}$:</p>

<pre><code>int query(int Nodenumber, int t_begin, int t_end, int a_begin, int a_end)
{
    // node Nodenumber covers [t_begin, t_end]; the query range is [a_begin, a_end]
    if (t_begin >= a_begin && t_end <= a_end)
        return Tree[Nodenumber];   // node lies fully inside the query range
    else
    {
        int mid = (t_begin + t_end) / 2;
        int res = -1;

        if (mid >= a_begin && t_begin <= a_end)   // left child overlaps the range
            res = max(res, query(2*Nodenumber, t_begin, mid, a_begin, a_end));

        if (t_end >= a_begin && mid+1 <= a_end)   // right child overlaps the range
            res = max(res, query(2*Nodenumber+1, mid+1, t_end, a_begin, a_end));

        return res;
    }
}
</code></pre>

<p>Note to make a query, I call the query function as <code>query(1,0,N-1,QA,QB)</code>.</p>

<p>But I want to return the $|A|/3$-th largest element between indices $a_{\text{begin}}$ and $a_{\text{end}}$. So how should I modify the function to do this?</p>

<p>So updates and queries are interleaved in arbitrary order, up to $10^5$ times in total.</p>

<p>So, for solving the problem, did I pick the right data structure? I thought of using heaps, but that will be too slow, as I would have to pop $|A|/3$ elements from the top and reinsert them for every query.</p>
 | algorithms data structures | 1 | Finding the $k$th largest element in an evolving query data structure -- (algorithms data structures)
<p>Basically, the problem I am solving is this. Initially, the array $A$ is empty. Then I am given data to fill the array and at any time I have to make a query to print the $|A|/3$-th largest element inserted so far.</p>

<p>I was solving the problem with segment trees, but I am not able to make a little modification to the query function of the segment tree. The query function that I wrote returns the largest element between indices $a_{\text{begin}}$ and $a_{\text{end}}$:</p>

<pre><code>int query(int Nodenumber, int t_begin, int t_end, int a_begin, int a_end)
{
    // node Nodenumber covers [t_begin, t_end]; the query range is [a_begin, a_end]
    if (t_begin >= a_begin && t_end <= a_end)
        return Tree[Nodenumber];   // node lies fully inside the query range
    else
    {
        int mid = (t_begin + t_end) / 2;
        int res = -1;

        if (mid >= a_begin && t_begin <= a_end)   // left child overlaps the range
            res = max(res, query(2*Nodenumber, t_begin, mid, a_begin, a_end));

        if (t_end >= a_begin && mid+1 <= a_end)   // right child overlaps the range
            res = max(res, query(2*Nodenumber+1, mid+1, t_end, a_begin, a_end));

        return res;
    }
}
</code></pre>

<p>Note to make a query, I call the query function as <code>query(1,0,N-1,QA,QB)</code>.</p>

<p>But I want to return the $|A|/3$-th largest element between indices $a_{\text{begin}}$ and $a_{\text{end}}$. So how should I modify the function to do this?</p>

<p>So updates and queries are interleaved in arbitrary order, up to $10^5$ times in total.</p>

<p>So, for solving the problem, did I pick the right data structure? I thought of using heaps, but that will be too slow, as I would have to pop $|A|/3$ elements from the top and reinsert them for every query.</p>
 | habedi/stack-exchange-dataset |
2,576 | Most efficient algorithm to print 1-100 using a given random number generator | <p>We are given a random number generator <code>RandNum50</code> which generates a random integer uniformly in the range 1–50.
We may use only this random number generator to generate and print all integers from 1 to 100 in a random order. Every number must appear exactly once, and each number must be equally likely to appear at each position.</p>

<p>What is the most efficient algorithm for this?</p>
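<p>For context, here is one possible (not necessarily most efficient) sketch: combine two calls into a uniform value in 0–99, then drive a Fisher–Yates shuffle with rejection sampling. Python's <code>random.randint</code> stands in for the given <code>RandNum50</code>:</p>

```python
import random  # only as a stand-in source of randomness

def rand_num_50():
    """Stand-in for the given generator: uniform integer in 1..50."""
    return random.randint(1, 50)

def rand_below(n):
    """Uniform integer in 0..n-1 (n <= 100) built from RandNum50 alone."""
    while True:
        # 2*(a-1) is a uniform even value in 0..98; the parity of an
        # independent draw is a fair bit (25 odd, 25 even values in 1..50)
        r = 2 * (rand_num_50() - 1) + (rand_num_50() % 2)  # uniform 0..99
        if r < 100 - (100 % n):   # rejection step
            return r % n

def shuffled_1_to_100():
    """Fisher-Yates shuffle of 1..100 driven only by RandNum50."""
    a = list(range(1, 101))
    for i in range(99, 0, -1):
        j = rand_below(i + 1)
        a[i], a[j] = a[j], a[i]
    return a
```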
 | algorithms integers randomness random number generator | 1 | Most efficient algorithm to print 1-100 using a given random number generator -- (algorithms integers randomness random number generator)
<p>We are given a random number generator <code>RandNum50</code> which generates a random integer uniformly in the range 1–50.
We may use only this random number generator to generate and print all integers from 1 to 100 in a random order. Every number must appear exactly once, and each number must be equally likely to appear at each position.</p>

<p>What is the most efficient algorithm for this?</p>
 | habedi/stack-exchange-dataset |
2,582 | How to detect stack order? | <p>We take the sequence of integers from $1$ to $n$, and we push them onto a stack one by one in order. Between each push, we can choose to pop any number of items from the stack (from 0 to the current stack size).</p>

<p>Every time we pop a value from the stack, we will print it out.</p>

<p>For example, $1,2,3$ is printed out when we do <code>push, pop, push, pop, push, pop</code>. $3,2,1$ comes from <code>push, push, push, pop, pop, pop</code>. </p>

<p>However, $3,1,2$ is not a possible printout, because it is not possible to have $3$ printed followed by $1$, without seeing $2$ in between.</p>

<p>Question: <strong>How can we detect impossible orders like $3,1,2$?</strong></p>

<p>In fact, based on my observation, I have come out a potential solution. But the problem is I can't prove my observation is complete.</p>

<p>The program that I wrote with the following logic:</p>

<p>When the current value minus the next value is larger than 1, no value between current and next may appear after next. For example, if current = 3 and next = 1, then the value between current (3) and next (1) is 2, which cannot appear after next (1); hence $3,1,2$ violates the rule.</p>

<p>Does this cover all cases?</p>
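<p>Independent of the pattern-based rule above, one direct way to test a printout is to simulate the push/pop process itself; a sketch:</p>

```python
def is_valid_printout(seq):
    """Check whether seq (a permutation of 1..n) can be produced by the
    push/pop process by simulating it with an actual stack."""
    stack, next_push = [], 1
    for x in seq:
        while next_push <= x:        # push until x has been pushed
            stack.append(next_push)
            next_push += 1
        if not stack or stack[-1] != x:
            return False             # x is buried below another value
        stack.pop()
    return True
```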
 | algorithms stacks | 1 | How to detect stack order? -- (algorithms stacks)
<p>We take the sequence of integers from $1$ to $n$, and we push them onto a stack one by one in order. Between each push, we can choose to pop any number of items from the stack (from 0 to the current stack size).</p>

<p>Every time we pop a value from the stack, we will print it out.</p>

<p>For example, $1,2,3$ is printed out when we do <code>push, pop, push, pop, push, pop</code>. $3,2,1$ comes from <code>push, push, push, pop, pop, pop</code>. </p>

<p>However, $3,1,2$ is not a possible printout, because it is not possible to have $3$ printed followed by $1$, without seeing $2$ in between.</p>

<p>Question: <strong>How can we detect impossible orders like $3,1,2$?</strong></p>

<p>In fact, based on my observation, I have come out a potential solution. But the problem is I can't prove my observation is complete.</p>

<p>The program that I wrote with the following logic:</p>

<p>When the current value minus the next value is larger than 1, no value between current and next may appear after next. For example, if current = 3 and next = 1, then the value between current (3) and next (1) is 2, which cannot appear after next (1); hence $3,1,2$ violates the rule.</p>

<p>Does this cover all cases?</p>
 | habedi/stack-exchange-dataset |
2,583 | Finding the Largest "Ordered" Difference in Elements of an Array | <p>Suppose we are given an array of positive integers $P = [p_1, p_2, \dots, p_N]$ where each $p_i$ represents the price of a product on a different day $i = 1 \dots N$. </p>

<p>I would like to design an algorithm to find the maximum profit that one can make given this array of prices. Profit is made by buying at some date $i$ and selling at a later date $j$, so that $i < j$.</p>

<p>One easy solution is the following "exhaustive algorithm":</p>

<pre><code>profit = 0
for i = 1 to N-1 
 for j = i+1 to N
 if P(j) - P(i) > profit 
 profit = P(j) - P(i) 
</code></pre>
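<p>The pseudocode above, transcribed into runnable Python (a direct sketch):</p>

```python
def max_profit_exhaustive(P):
    """Try every buy day i and every later sell day j; Theta(N^2)."""
    profit = 0
    n = len(P)
    for i in range(n - 1):
        for j in range(i + 1, n):
            profit = max(profit, P[j] - P[i])
    return profit
```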

<p>The issue with this however is that it takes time $\Omega(N^2)$. </p>

<p>Can anyone think of something faster?</p>
 | algorithms arrays | 1 | Finding the Largest "Ordered" Difference in Elements of an Array -- (algorithms arrays)
<p>Suppose we are given an array of positive integers $P = [p_1, p_2, \dots, p_N]$ where each $p_i$ represents the price of a product on a different day $i = 1 \dots N$. </p>

<p>I would like to design an algorithm to find the maximum profit that one can make given this array of prices. Profit is made by buying at some date $i$ and selling at a later date $j$, so that $i < j$.</p>

<p>One easy solution is the following "exhaustive algorithm":</p>

<pre><code>profit = 0
for i = 1 to N-1 
 for j = i+1 to N
 if P(j) - P(i) > profit 
 profit = P(j) - P(i) 
</code></pre>

<p>The issue with this however is that it takes time $\Omega(N^2)$. </p>

<p>Can anyone think of something faster?</p>
 | habedi/stack-exchange-dataset |
2,587 | Time complexity version of the Church-Turing Thesis | <p>There's a <a href="http://plato.stanford.edu/entries/church-turing/#Bloopers" rel="nofollow">lot of debate</a> about what exactly the Church-Turing thesis is, but roughly it's the argument that "undecidable" should be considered equivalent to "undecidable by a universal turing machine."</p>

<p>I'm wondering if there's an analogous statement for time complexity, i.e. an argument that if some language is decided in $\Theta\left(f(n)\right)$ on a universal turing machine, then we should say its time complexity is $\Theta\left(f(n)\right)$. </p>

<p>This isn't equivalent to the CT thesis - e.g. quantum computers decide precisely those languages which are decidable in a non-quantum TM, but they may run that decision procedure more quickly.</p>
 | computability terminology turing machines church turing thesis | 1 | Time complexity version of the Church-Turing Thesis -- (computability terminology turing machines church turing thesis)
<p>There's a <a href="http://plato.stanford.edu/entries/church-turing/#Bloopers" rel="nofollow">lot of debate</a> about what exactly the Church-Turing thesis is, but roughly it's the argument that "undecidable" should be considered equivalent to "undecidable by a universal turing machine."</p>

<p>I'm wondering if there's an analogous statement for time complexity, i.e. an argument that if some language is decided in $\Theta\left(f(n)\right)$ on a universal turing machine, then we should say its time complexity is $\Theta\left(f(n)\right)$. </p>

<p>This isn't equivalent to the CT thesis - e.g. quantum computers decide precisely those languages which are decidable in a non-quantum TM, but they may run that decision procedure more quickly.</p>
 | habedi/stack-exchange-dataset |
2,595 | Operations on OBDD: negation through Shannon's expansion | <p>I have a problem with the application of the <a href="http://en.wikipedia.org/wiki/Shannon_expansion" rel="nofollow">Shannon expansion</a> for to obtain the negation of a formula boolean, than will need for implement the negation operator on OBDD (<a href="http://en.wikipedia.org/wiki/Binary_decision_diagram" rel="nofollow">Order Binary Decision Diagram</a>) that is, show that:</p>

<p>$\qquad \displaystyle \neg f(x_1,\ldots,x_n) = (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1})$</p>

<p>where $f|_{x_i=b}$ is the Boolean function in which $x_i$ is replaced by $b$, that is:</p>

<p>$\qquad \displaystyle f|_{x_i=b}(x_1,\ldots,x_n)=f(x_1,\ldots,x_{i-1},b,x_{i+1},\ldots,x_n)$.</p>

<p>The proof says:</p>

<p>$\qquad \displaystyle\neg f(x_1,\ldots,x_n) = \neg((\neg x_1 \wedge f|_{x_1=0}) \vee (x_1 \wedge f|_{x_1=1}))$. </p>

<p>Applying the negation (skip the intermediate steps), we get:</p>

<p>$\qquad \displaystyle (x_1 \wedge \neg x_1) \vee (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1}) \vee (\neg f|_{x_1=0} \wedge \neg f|_{x_1=1}) $. </p>

<p>Now $(x_1 \wedge \neg x_1)= \mathrm{false}$ can be dropped, which leads to</p>

<p>$\qquad \displaystyle (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1}) \vee (\neg f|_{x_1=0} \wedge \neg f|_{x_1=1}) $ </p>

<p>which in turn is, finally, equal to </p>

<p>$\qquad \displaystyle (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1})$.</p>

<p>Why does this hold?</p>
 | logic information theory | 1 | Operations on OBDD: negation through Shannon's expansion -- (logic information theory)
<p>I have a problem with the application of the <a href="http://en.wikipedia.org/wiki/Shannon_expansion" rel="nofollow">Shannon expansion</a> to obtain the negation of a Boolean formula, which I will need in order to implement the negation operator on OBDDs (<a href="http://en.wikipedia.org/wiki/Binary_decision_diagram" rel="nofollow">Ordered Binary Decision Diagrams</a>); that is, I want to show that:</p>

<p>$\qquad \displaystyle \neg f(x_1,\ldots,x_n) = (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1})$</p>

<p>where $f|_{x_i=b}$ is the Boolean function in which $x_i$ is replaced by $b$, that is:</p>

<p>$\qquad \displaystyle f|_{x_i=b}(x_1,\ldots,x_n)=f(x_1,\ldots,x_{i-1},b,x_{i+1},\ldots,x_n)$.</p>

<p>The proof says:</p>

<p>$\qquad \displaystyle\neg f(x_1,\ldots,x_n) = \neg((\neg x_1 \wedge f|_{x_1=0}) \vee (x_1 \wedge f|_{x_1=1}))$. </p>

<p>Applying the negation (skip the intermediate steps), we get:</p>

<p>$\qquad \displaystyle (x_1 \wedge \neg x_1) \vee (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1}) \vee (\neg f|_{x_1=0} \wedge \neg f|_{x_1=1}) $. </p>

<p>Now $(x_1 \wedge \neg x_1)= \mathrm{false}$ can be dropped, which leads to</p>

<p>$\qquad \displaystyle (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1}) \vee (\neg f|_{x_1=0} \wedge \neg f|_{x_1=1}) $ </p>

<p>which in turn is, finally, equal to </p>

<p>$\qquad \displaystyle (\neg x_1 \wedge \neg f|_{x_1=0}) \vee (x_1 \wedge \neg f|_{x_1=1})$.</p>

<p>Why does this hold?</p>
 | habedi/stack-exchange-dataset |
2,598 | Balanced weighting of edges in cactus graph | <p>Given a <a href="https://en.wikipedia.org/wiki/Cactus_graph" rel="nofollow">cactus</a>, we want to weight its edges in such a way that</p>

<ol>
<li>For each vertex, the sum of the weights of edges incident to the vertex is no more than 1.</li>
<li>The sum of all edge weights is maximized.</li>
</ol>

<p>Clearly the answer is no more than $\frac{n}{2}$ for $n$ vertices ($\sum d_i = 2D$ where $d_i$ is the sum for one vertex and $D$ is the sum over every edge). This bound is achievable for cycle graphs by weighting each edge 1/2.</p>

<p>I found a greedy algorithm for trees. Just assign 1 to edges incident to leaves and remove them and their neighbors from the graph in repeated passes. This prunes the cactus down to a bunch of interconnected cycles. At this point I assumed the remaining cycles were not interconnected and weighted each edge 1/2. This got 9/10 test cases but is, of course, incomplete.</p>

<p>So, how might we solve this problem for cacti in general? I would prefer hints to full solutions, but either is fine.</p>

<p><sub>
This question involves a problem from <a href="https://genesys.interviewstreet.com" rel="nofollow">an InterviewStreet CompanySprint</a>. I already competed but I'd like some thoughts on a problem (solutions aren't released, and I've been banging my head against the wall over this problem).
</sub></p>
 | algorithms graphs greedy algorithms | 1 | Balanced weighting of edges in cactus graph -- (algorithms graphs greedy algorithms)
<p>Given a <a href="https://en.wikipedia.org/wiki/Cactus_graph" rel="nofollow">cactus</a>, we want to weight its edges in such a way that</p>

<ol>
<li>For each vertex, the sum of the weights of edges incident to the vertex is no more than 1.</li>
<li>The sum of all edge weights is maximized.</li>
</ol>

<p>Clearly the answer is no more than $\frac{n}{2}$ for $n$ vertices ($\sum d_i = 2D$ where $d_i$ is the sum for one vertex and $D$ is the sum over every edge). This bound is achievable for cycle graphs by weighting each edge 1/2.</p>

<p>I found a greedy algorithm for trees. Just assign 1 to edges incident to leaves and remove them and their neighbors from the graph in repeated passes. This prunes the cactus down to a bunch of interconnected cycles. At this point I assumed the remaining cycles were not interconnected and weighted each edge 1/2. This got 9/10 test cases but is, of course, incomplete.</p>

<p>So, how might we solve this problem for cacti in general? I would prefer hints to full solutions, but either is fine.</p>

<p><sub>
This question involves a problem from <a href="https://genesys.interviewstreet.com" rel="nofollow">an InterviewStreet CompanySprint</a>. I already competed but I'd like some thoughts on a problem (solutions aren't released, and I've been banging my head against the wall over this problem).
</sub></p>
 | habedi/stack-exchange-dataset |
2,605 | Is rejection sampling the only way to get a truly uniform distribution of random numbers? | <p>Suppose that we have a random generator that outputs
numbers in the range $[0..R-1]$ with uniform distribution and we
need to generate random numbers in the range $[0..N-1]$
with uniform distribution.</p>

<p>Suppose that $N < R$ and $N$ does not evenly divide $R$;
in order to get a <strong>truly uniform distribution</strong> we can use the
<a href="http://en.wikipedia.org/wiki/Rejection_sampling">rejection sampling</a> method:</p>

<ul>
<li>if $k$ is the greatest integer such that $k N < R$</li>
<li>pick a random number $r$ in $[0..R-1]$</li>
<li>if $r < k N$ then output $r \mod N$, otherwise keep trying with other random numbers $r', r'', \ldots$ until the condition is met</li>
</ul>

<blockquote>
Is rejection sampling the only way to get a truly uniform discrete distribution?
</blockquote>

<p>If the answer is yes, why? </p>

<p>Note: if $N > R$ the idea is the same: generate a random number $r'$ in $[0..R^m-1]$, where $R^m \geq N$; for example $r' = R(\dots R(R r_1 + r_2)\dots)+r_m$ where $r_i$ is a random number in the range $[0..R-1]$</p>
 | probability theory randomness random number generator sampling | 1 | Is rejection sampling the only way to get a truly uniform distribution of random numbers? -- (probability theory randomness random number generator sampling)
<p>Suppose that we have a random generator that outputs
numbers in the range $[0..R-1]$ with uniform distribution and we
need to generate random numbers in the range $[0..N-1]$
with uniform distribution.</p>

<p>Suppose that $N < R$ and $N$ does not evenly divide $R$;
in order to get a <strong>truly uniform distribution</strong> we can use the
<a href="http://en.wikipedia.org/wiki/Rejection_sampling">rejection sampling</a> method:</p>

<ul>
<li>if $k$ is the greatest integer such that $k N < R$</li>
<li>pick a random number $r$ in $[0..R-1]$</li>
<li>if $r < k N$ then output $r \mod N$, otherwise keep trying with other random numbers $r', r'', \ldots$ until the condition is met</li>
</ul>

<blockquote>
Is rejection sampling the only way to get a truly uniform discrete distribution?
</blockquote>

<p>If the answer is yes, why? </p>

<p>Note: if $N > R$ the idea is the same: generate a random number $r'$ in $[0..R^m-1]$, where $R^m \geq N$; for example $r' = R(\dots R(R r_1 + r_2)\dots)+r_m$ where $r_i$ is a random number in the range $[0..R-1]$</p>
 | habedi/stack-exchange-dataset |
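The rejection loop described in the question above is short enough to sketch directly. This is a minimal, hypothetical Python sketch (the function name and the injectable `draw` source are my own, added for testability), using $k = \lfloor R/N \rfloor$:

```python
import random

def uniform_n(n, r, draw=None):
    """Sample uniformly from [0, n-1] given a uniform source over [0, r-1].

    Assumes n <= r.  With k = r // n, draws in [k*n, r-1] are rejected,
    so each residue 0..n-1 keeps exactly k equally likely preimages.
    """
    if draw is None:
        draw = lambda: random.randrange(r)
    k = r // n
    while True:
        x = draw()
        if x < k * n:       # accepted region: k full blocks of size n
            return x % n
```

Since $kN > R/2$, each draw is accepted with probability greater than $1/2$, so the expected number of draws is below $2$ even though the loop has no fixed bound.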
2,608 | How to random-generate a graph with Pareto-Lognormal degree nodes? | <p>I have read that the degree of nodes in a "knowledge" graph of people roughly follows a power law distribution, and more exactly can be approximated with a Pareto-Lognormal distribution.</p>

<p>Where can I find a kind of algorithm that will produce a random graph with this distribution?</p>

<p>See for example the paper <a href="http://www.cs.ucsb.edu/~alessandra/papers/ba048f-sala.pdf" rel="nofollow">Revisiting Degree Distribution Models for Social Graph Analysis</a> (page 4, equation 1) for a mathematical description (distribution function) of the kind of distribution I'm interested in.</p>
 | algorithms graphs probability theory randomness | 1 | How to random-generate a graph with Pareto-Lognormal degree nodes? -- (algorithms graphs probability theory randomness)
<p>I have read that the degree of nodes in a "knowledge" graph of people roughly follows a power law distribution, and more exactly can be approximated with a Pareto-Lognormal distribution.</p>

<p>Where can I find a kind of algorithm that will produce a random graph with this distribution?</p>

<p>See for example the paper <a href="http://www.cs.ucsb.edu/~alessandra/papers/ba048f-sala.pdf" rel="nofollow">Revisiting Degree Distribution Models for Social Graph Analysis</a> (page 4, equation 1) for a mathematical description (distribution function) of the kind of distribution I'm interested in.</p>
 | habedi/stack-exchange-dataset |
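A standard recipe for the question above is to separate the two concerns: sample a degree sequence from the target distribution (however the Pareto-Lognormal sampler is implemented), then realize it as a graph with the configuration model. A hypothetical Python sketch of the pairing step, with the degree sampler passed in as a parameter:

```python
import random

def configuration_model(n, sample_degree, rng=None):
    """Random multigraph whose degrees are drawn i.i.d. from sample_degree().

    Configuration model: give node i deg_i 'stubs', shuffle all stubs,
    and pair consecutive ones.  Self-loops and parallel edges may appear.
    """
    rng = rng or random.Random()
    degrees = [sample_degree() for _ in range(n)]
    if sum(degrees) % 2:                 # total degree must be even
        degrees[rng.randrange(n)] += 1
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    edges = [(stubs[2 * j], stubs[2 * j + 1])
             for j in range(len(stubs) // 2)]
    return degrees, edges
```

The result is a multigraph; in practice self-loops and parallel edges are deleted or the draw is retried, which perturbs the degree distribution only slightly for sparse graphs.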
2,614 | When does the function mapping a string to its prefix-free Kolmogorov complexity halt? | <p>In <em>Algorithmic Randomness and Complexity</em> from Downey and Hirschfeldt, it is stated on page 129 that </p>

<p>$\qquad \displaystyle \sum_{K(\sigma)\downarrow} 2^{-K(\sigma)} \leq 1$, </p>

<p>where $K(\sigma)\downarrow$ means that $K$ halts on $\sigma$, $\sigma$ being a binary string. $K$ denotes the prefix-free Kolmogorov complexity.</p>

<p>When does $K$ halt? I think it only halts on a finite number of inputs, since the classical proof of the non-computability of the Kolmogorov complexity gives an upper bound on the domain of $K$. But then, the finite set of inputs on which $K$ halts can be chosen arbitrarily (one just needs to store the finite number of complexities in the source code).</p>

<p>So is this sum well-defined? In other words, is the domain of $K$ well defined?</p>
 | computability terminology kolmogorov complexity descriptive complexity | 1 | When does the function mapping a string to its prefix-free Kolmogorov complexity halt? -- (computability terminology kolmogorov complexity descriptive complexity)
<p>In <em>Algorithmic Randomness and Complexity</em> from Downey and Hirschfeldt, it is stated on page 129 that </p>

<p>$\qquad \displaystyle \sum_{K(\sigma)\downarrow} 2^{-K(\sigma)} \leq 1$, </p>

<p>where $K(\sigma)\downarrow$ means that $K$ halts on $\sigma$, $\sigma$ being a binary string. $K$ denotes the prefix-free Kolmogorov complexity.</p>

<p>When does $K$ halt? I think it only halts on a finite number of inputs, since the classical proof of the non-computability of the Kolmogorov complexity gives an upper bound on the domain of $K$. But then, the finite set of inputs on which $K$ halts can be chosen arbitrarily (one just needs to store the finite number of complexities in the source code).</p>

<p>So is this sum well-defined? In other words, is the domain of $K$ well defined?</p>
 | habedi/stack-exchange-dataset |
2,615 | Prove that every two longest paths have at least one vertex in common | <p>If a graph $G$ is connected and has no path with a length greater than $k$, prove that every two paths in $G$ of length $k$ have at least one vertex in common. </p>

<p>I think that the common vertex should be in the middle of both paths, because if this were not the case then we could have a path of length $>k$. Am I right?</p>
 | graphs combinatorics | 1 | Prove that every two longest paths have at least one vertex in common -- (graphs combinatorics)
<p>If a graph $G$ is connected and has no path with a length greater than $k$, prove that every two paths in $G$ of length $k$ have at least one vertex in common. </p>

<p>I think that the common vertex should be in the middle of both paths, because if this were not the case then we could have a path of length $>k$. Am I right?</p>
 | habedi/stack-exchange-dataset |
2,623 | Is this language Context-Free? | <p>Is the language</p>

<p>$$L = \{a,b\}^* \setminus \{(a^nb^n)^n\mid n \geq1 \}$$</p>

<p>context-free? I believe that it is not a CFL, but I can't prove it using Ogden's lemma or the pumping lemma.</p>
 | formal languages context free pumping lemma | 1 | Is this language Context-Free? -- (formal languages context free pumping lemma)
<p>Is the language</p>

<p>$$L = \{a,b\}^* \setminus \{(a^nb^n)^n\mid n \geq1 \}$$</p>

<p>context-free? I believe that it is not a CFL, but I can't prove it using Ogden's lemma or the pumping lemma.</p>
 | habedi/stack-exchange-dataset |
2,638 | Does there exist a Turing complete typed lambda calculus? | <p>Do there exist any Turing complete typed lambda calculi? If so, what are a few examples?</p>
 | computability lambda calculus type theory | 1 | Does there exist a Turing complete typed lambda calculus? -- (computability lambda calculus type theory)
<p>Do there exist any Turing complete typed lambda calculi? If so, what are a few examples?</p>
 | habedi/stack-exchange-dataset |
2,641 | Witness for the $EU(\phi_1,\phi_2)$ using BDDs | <p>I wanted to ask if you know an algorithm to find the witness for $EU(\phi_1,\phi_2)$ (the CTL "Exists Until" formula) using BDDs (<a href="http://en.wikipedia.org/wiki/Binary_decision_diagram" rel="nofollow">Binary Decision Diagram</a>). In practice you should use the fixed point for calculating $EU(\phi_1,\phi_2)$, that is:</p>

<p>$\qquad \displaystyle EU(\phi_1,\phi_2)=\mu Q.\, (\phi_2 \vee (\phi_1 \wedge EX\, Q)) $</p>

<p>Unwinding the recursion, we get:</p>

<p>$\qquad \displaystyle \begin{align}
 Q_0 &= \textrm{false} \\
 Q_1 &= \phi_2 \\
 Q_2 &= \phi_2 \vee (\phi_1 \wedge EX \phi_2) \\
 \ \vdots
\end{align}$</p>

<p>and so on.</p>

<p>To generate a witness (path) we can do a forward reachability check within the sequence of $Q_i$'s, that is, find a path</p>

<p>$\qquad \displaystyle \pi= s_1 \rightarrow s_2 \rightarrow \cdots \rightarrow s_n$ </p>

<p>such that $s_i \in Q_{n-i} \cap R(s_{i-1})$ (where $R(s_{i-1})= \{ s \mid R(s_{i-1},s) \}$ and $R(s_{i-1},s)$ is the transition from $s_{i-1}$ to $s$), where $s_0 \in Q_n $ and $s_n \in Q_1=\phi_2$. </p>

<p>How can you do this with BDDs?</p>
 | formal methods model checking | 1 | Witness for the $EU(\phi_1,\phi_2)$ using BDDs -- (formal methods model checking)
<p>I wanted to ask if you know an algorithm to find the witness for $EU(\phi_1,\phi_2)$ (the CTL "Exists Until" formula) using BDDs (<a href="http://en.wikipedia.org/wiki/Binary_decision_diagram" rel="nofollow">Binary Decision Diagram</a>). In practice you should use the fixed point for calculating $EU(\phi_1,\phi_2)$, that is:</p>

<p>$\qquad \displaystyle EU(\phi_1,\phi_2)=\mu Q.\, (\phi_2 \vee (\phi_1 \wedge EX\, Q)) $</p>

<p>Unwinding the recursion, we get:</p>

<p>$\qquad \displaystyle \begin{align}
 Q_0 &= \textrm{false} \\
 Q_1 &= \phi_2 \\
 Q_2 &= \phi_2 \vee (\phi_1 \wedge EX \phi_2) \\
 \ \vdots
\end{align}$</p>

<p>and so on.</p>

<p>To generate a witness (path) we can do a forward reachability check within the sequence of $Q_i$'s, that is, find a path</p>

<p>$\qquad \displaystyle \pi= s_1 \rightarrow s_2 \rightarrow \cdots \rightarrow s_n$ </p>

<p>such that $s_i \in Q_{n-i} \cap R(s_{i-1})$ (where $R(s_{i-1})= \{ s \mid R(s_{i-1},s) \}$ and $R(s_{i-1},s)$ is the transition from $s_{i-1}$ to $s$), where $s_0 \in Q_n $ and $s_n \in Q_1=\phi_2$. </p>

<p>How can you do this with BDDs?</p>
 | habedi/stack-exchange-dataset |
2,646 | Is it possible to always construct a hamiltonian path on a tournament graph by sorting? | <p>Is it possible to always construct a hamiltonian path on a <a href="http://en.wikipedia.org/wiki/Tournament_%28graph_theory%29#Paths_and_cycles" rel="nofollow">tournament graph</a> $G=(V,E)$ by sorting (using any sorting algorithm) with the following total order:</p>

<p>$\qquad \displaystyle a \leq b \iff (a,b) \in E \lor \left(\exists\, c \in V. a \leq c \land c \leq b\right)$</p>

<p>For context, this came from an observation that the inductive construction in the above page seems to be equivalent to insertion sort using the given order. Is it possible to use other sorting algorithms?</p>
 | algorithms graphs sorting | 1 | Is it possible to always construct a hamiltonian path on a tournament graph by sorting? -- (algorithms graphs sorting)
<p>Is it possible to always construct a hamiltonian path on a <a href="http://en.wikipedia.org/wiki/Tournament_%28graph_theory%29#Paths_and_cycles" rel="nofollow">tournament graph</a> $G=(V,E)$ by sorting (using any sorting algorithm) with the following total order:</p>

<p>$\qquad \displaystyle a \leq b \iff (a,b) \in E \lor \left(\exists\, c \in V. a \leq c \land c \leq b\right)$</p>

<p>For context, this came from an observation that the inductive construction in the above page seems to be equivalent to insertion sort using the given order. Is it possible to use other sorting algorithms?</p>
 | habedi/stack-exchange-dataset |
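The observation in the question above can be made concrete: inserting each vertex immediately before the first vertex it beats is exactly the inductive construction, phrased as an insertion sort. A hypothetical Python sketch, with `beats(u, v)` standing in for the edge relation $(u,v) \in E$:

```python
def hamiltonian_path(vertices, beats):
    """Insertion-sort-style Hamiltonian path in a tournament.

    beats(u, v) is True iff the edge (u, v) is present.  Each vertex is
    inserted before the first vertex on the path that it beats, or
    appended if it beats no one on the path.
    """
    path = []
    for v in vertices:
        for i, u in enumerate(path):
            if beats(v, u):          # v -> u, so v may precede u
                path.insert(i, v)
                break
        else:
            path.append(v)           # v loses to every vertex on the path
    return path
```

Correctness only uses comparisons with adjacent vertices: the insertion point's predecessor beats $v$ and $v$ beats its successor, so transitivity of the comparison is never needed. Whether other sorting algorithms also work is precisely the question of whether they only ever rely on such locally valid comparisons.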
2,648 | Collectively pay the bill problem | <p>There are $n$ people at a table. The $i$th person has to pay $p_i$ dollars. </p>

<p>Some people don't have the right bills to pay exactly $p_i$, so they come up with the following algorithm.</p>

<blockquote>
 <p>First, everyone puts some of their money on the table. Then each individual takes back the money they overpaid. </p>
</blockquote>

<p>The bills have a fixed set of denominations (not part of the input).</p>

<p>An example:
Suppose there are two people, Alice and Bob. Alice owes \$5 and has five \$1 bills. Bob owes \$2 and has one \$5 bill. After Alice and Bob put all their money on the table, Bob takes back \$3, and everyone is happy.</p>

<p>Of course, there are times where one doesn't have to put <em>all</em> his money on the table. For example, if Alice had a thousand \$1 bills, it's not necessary for her to put them all on the table and then take most of them back.</p>

<p>I want to find an algorithm with the following properties: </p>

<ol>
<li><p>The input specifies the number of people, how much each person owes and how many bills of each denomination each person has.</p></li>
<li><p>The algorithm tells each person which bills to put on the table in the first round.</p></li>
<li><p>The algorithm tells each person which bills to remove from the table in the second round.</p></li>
<li><p>The number of bills put on the table + the number of bills removed from the table is minimized. </p></li>
</ol>

<p>If there is no feasible solution, the algorithm just returns an error.</p>
 | algorithms optimization | 1 | Collectively pay the bill problem -- (algorithms optimization)
<p>There are $n$ people at a table. The $i$th person has to pay $p_i$ dollars. </p>

<p>Some people don't have the right bills to pay exactly $p_i$, so they come up with the following algorithm.</p>

<blockquote>
 <p>First, everyone puts some of their money on the table. Then each individual takes back the money they overpaid. </p>
</blockquote>

<p>The bills have a fixed set of denominations (not part of the input).</p>

<p>An example:
Suppose there are two people, Alice and Bob. Alice owes \$5 and has five \$1 bills. Bob owes \$2 and has one \$5 bill. After Alice and Bob put all their money on the table, Bob takes back \$3, and everyone is happy.</p>

<p>Of course, there are times where one doesn't have to put <em>all</em> his money on the table. For example, if Alice had a thousand \$1 bills, it's not necessary for her to put them all on the table and then take most of them back.</p>

<p>I want to find an algorithm with the following properties: </p>

<ol>
<li><p>The input specifies the number of people, how much each person owes and how many bills of each denomination each person has.</p></li>
<li><p>The algorithm tells each person which bills to put on the table in the first round.</p></li>
<li><p>The algorithm tells each person which bills to remove from the table in the second round.</p></li>
<li><p>The number of bills put on the table + the number of bills removed from the table is minimized. </p></li>
</ol>

<p>If there is no feasible solution, the algorithm just returns an error.</p>
 | habedi/stack-exchange-dataset |
2,649 | A regular expression for a given formal language | <p>I wanted to ask if someone can help me to construct a regular expression over the alphabet $\{a,b,x\}$ for the language $L$ consisting of all strings that contain an odd number of $a$'s, and in which between each pair of consecutive $a$'s there is an even number of $b$'s (and an arbitrary number of $x$'s).</p>

<p>For example, $babbxbbxabbxaabxxbax \in L$, $bab \in L$, while $abba \notin L$ and $abbbaa \notin L$.</p>

<p>What is the approach?</p>
 | formal languages regular languages regular expressions | 1 | A regular expression for a given formal language -- (formal languages regular languages regular expressions)
<p>I wanted to ask if someone can help me to construct a regular expression over the alphabet $\{a,b,x\}$ for the language $L$ consisting of all strings that contain an odd number of $a$'s, and in which between each pair of consecutive $a$'s there is an even number of $b$'s (and an arbitrary number of $x$'s).</p>

<p>For example, $babbxbbxabbxaabxxbax \in L$, $bab \in L$, while $abba \notin L$ and $abbbaa \notin L$.</p>

<p>What is the approach?</p>
 | habedi/stack-exchange-dataset |
2,653 | NP-completeness and NP problems | <p>Suppose that someone found a polynomial algorithm for an NP-complete decision problem. Would this mean that we can modify the algorithm a bit and use it for solving the problems that are in NP but not NP-complete? Or would this just show the availability of a polynomial algorithm for each NP problem indirectly?</p>

<p>edit:
I know that when NP-complete problems have polynomial algorithms, all NP problems must have polynomial algorithms. The question I am asking is whether we can apply the discovered algorithm for the NP-complete problem to all NP problems just by modifying it, or whether we would just know indirectly that NP problems must have a polynomial algorithm. </p>
 | complexity theory | 1 | NP-completeness and NP problems -- (complexity theory)
<p>Suppose that someone found a polynomial algorithm for an NP-complete decision problem. Would this mean that we can modify the algorithm a bit and use it for solving the problems that are in NP but not NP-complete? Or would this just show the availability of a polynomial algorithm for each NP problem indirectly?</p>

<p>edit:
I know that when NP-complete problems have polynomial algorithms, all NP problems must have polynomial algorithms. The question I am asking is whether we can apply the discovered algorithm for the NP-complete problem to all NP problems just by modifying it, or whether we would just know indirectly that NP problems must have a polynomial algorithm. </p>
 | habedi/stack-exchange-dataset |
2,655 | How do we distinguish NP-complete problems from other NP problems? | <p>I just learned that when we have a polynomial algorithm for NP-complete problems, it is possible to use that algorithm to solve all NP problems. </p>

<p>So, the question is: how do we then distinguish non-NP-complete NP problems from NP-complete problems? It seems that all these problems will have a polynomial algorithm to convert them into other problems...</p>
 | complexity theory terminology np complete | 1 | How do we distinguish NP-complete problems from other NP problems? -- (complexity theory terminology np complete)
<p>I just learned that when we have a polynomial algorithm for NP-complete problems, it is possible to use that algorithm to solve all NP problems. </p>

<p>So, the question is: how do we then distinguish non-NP-complete NP problems from NP-complete problems? It seems that all these problems will have a polynomial algorithm to convert them into other problems...</p>
 | habedi/stack-exchange-dataset |
2,658 | How hard is finding the discrete logarithm? | <p>The <a href="http://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm</a> is the same as finding $b$ in $a^b=c \bmod N$, given $a$, $c$, and $N$.</p>

<p>I wonder what complexity groups (e.g. for classical and quantum computers) this is in, and what approaches (i.e. algorithms) are the best for accomplishing this task.</p>

<p>The wikipedia link above doesn't really give very concrete runtimes. I'm hoping for something more like what the best known methods are for finding such.</p>
 | algorithms complexity theory time complexity discrete mathematics | 1 | How hard is finding the discrete logarithm? -- (algorithms complexity theory time complexity discrete mathematics)
<p>The <a href="http://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm</a> is the same as finding $b$ in $a^b=c \bmod N$, given $a$, $c$, and $N$.</p>

<p>I wonder what complexity groups (e.g. for classical and quantum computers) this is in, and what approaches (i.e. algorithms) are the best for accomplishing this task.</p>

<p>The wikipedia link above doesn't really give very concrete runtimes. I'm hoping for something more like what the best known methods are for finding such.</p>
 | habedi/stack-exchange-dataset |
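In the generic-group setting of the question above, the classic $O(\sqrt N)$ time-space trade-off is Shanks's baby-step giant-step algorithm. A hypothetical Python sketch, assuming $\gcd(a, N) = 1$ so that $a^{-m} \bmod N$ exists (Python 3.8+ for the negative-exponent `pow`):

```python
from math import isqrt

def discrete_log(a, c, n):
    """An exponent b >= 0 with pow(a, b, n) == c % n, or None if none exists.

    Baby-step giant-step: write b = i*m + j with m ~ sqrt(n), tabulate
    the baby steps a^j, then walk giant steps c * (a^-m)^i until one
    lands in the table.  O(sqrt(n)) time and space.
    """
    m = isqrt(n) + 1
    baby = {}
    e = 1
    for j in range(m):
        baby.setdefault(e, j)        # keep the smallest exponent per value
        e = e * a % n
    inv_am = pow(a, -m, n)           # a^(-m) mod n, needs gcd(a, n) == 1
    gamma = c % n
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_am % n
    return None
```

For $\mathbb{Z}_p^*$ specifically, index-calculus methods (number field sieve variants) achieve subexponential $L_p[1/3]$ running time, and Shor's quantum algorithm solves the problem in polynomial time.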
2,660 | Is finding the longest path of a graph NP-complete? | <p>The problem of finding the largest subgraph of a graph that has a Hamiltonian path can be restated as finding the longest path of a graph. Is this NP-complete? Also, is finding the $k$-length path of a graph NP-complete? Is it still NP-complete if we require the path to visit a given vertex?</p>
 | complexity theory graphs np complete | 1 | Is finding the longest path of a graph NP-complete? -- (complexity theory graphs np complete)
<p>The problem of finding the largest subgraph of a graph that has a Hamiltonian path can be restated as finding the longest path of a graph. Is this NP-complete? Also, is finding the $k$-length path of a graph NP-complete? Is it still NP-complete if we require the path to visit a given vertex?</p>
 | habedi/stack-exchange-dataset |
2,669 | regular expression given the language | <p>The language is:
$$
L = \{ (a^n) (b^m) \mid n + m = 3k, k \ge 0 \}
$$</p>

<p>My attempt at an answer:
$$
(a \cup b)^{3k}
$$</p>

<p>This will work only if the $a$ or $b$ can change at each position in the string of length $3k$. If not, what can I do to fix this?</p>
 | formal languages regular languages regular expressions | 1 | regular expression given the language -- (formal languages regular languages regular expressions)
<p>The language is:
$$
L = \{ (a^n) (b^m) \mid n + m = 3k, k \ge 0 \}
$$</p>

<p>My attempt at an answer:
$$
(a \cup b)^{3k}
$$</p>

<p>This will work only if the $a$ or $b$ can change at each position in the string of length $3k$. If not, what can I do to fix this?</p>
 | habedi/stack-exchange-dataset |
2,676 | The $\text{k-key}$ problem | <p>Given an undirected graph, I define a structure called <em>k-key</em> as a path containing $k$ vertices which are connected to a simple cycle which contains $k$ vertices as well.</p>

<p>Here's the <em>k-key problem</em>: given an undirected graph $G$ and a number $k$, decide whether $G$ contains a $k$-key.</p>

<p>I want to show that the k-key problem is NP-complete.</p>

<p>I want to make a reduction from the 'Undirected Hamiltonian Cycle' problem, in which the input is a graph and the problem is to decide whether it contains a Hamiltonian cycle. I already know that this problem is NP-complete. The input for the reduction would be an undirected graph $G$ and the output would be a graph $G'$ and a number $k$. Can you please help me understand what manipulation I should do to the original graph in order to show this reduction? And why should it work?</p>
 | complexity theory np complete reductions np hard | 1 | The $\text{k-key}$ problem -- (complexity theory np complete reductions np hard)
<p>Given an undirected graph, I define a structure called <em>k-key</em> as a path containing $k$ vertices which are connected to a simple cycle which contains $k$ vertices as well.</p>

<p>Here's the <em>k-key problem</em>: given an undirected graph $G$ and a number $k$, decide whether $G$ contains a $k$-key.</p>

<p>I want to show that the k-key problem is NP-complete.</p>

<p>I want to make a reduction from the 'Undirected Hamiltonian Cycle' problem, in which the input is a graph and the problem is to decide whether it contains a Hamiltonian cycle. I already know that this problem is NP-complete. The input for the reduction would be an undirected graph $G$ and the output would be a graph $G'$ and a number $k$. Can you please help me understand what manipulation I should do to the original graph in order to show this reduction? And why should it work?</p>
 | habedi/stack-exchange-dataset |
2,677 | Algorithm for type conversion / signature matching | <p>I'm working on an expression typing system and looking for insights on what algorithms may be available which solve my problem -- or a proof that its complexity is too high to be reasonable to implement. The problem is defined below.</p>

<p>I have a set of types which form a directed graph $T = (V,E)$ (assume no cycles). This graph represents the allowed type conversions in a language. For example an edge $e_i = v_1 \rightarrow v_2$ indicates that $v_1$ can be implicitly converted to type $v_2$.</p>

<p>I have a set of parameter types for a function, expressed as a set $P = \{ p_1, \ldots, p_n : p_i \in V \}$. I also have a list of functions $F$ that might be applicable at this point. Each function has a signature (the types it accepts) $F_j = \{ t_1, \ldots, t_n : t_i \in V \}$.</p>

<p>The goal is to use a series of type conversions allowed by $T$ to convert $P$ into a signature compatible with any function in $F$. Conversion means moving along an edge in the graph to another type. Compatible means the converted parameter types match the function types.</p>

<p>If each conversion has a cost of 1, which function, if selected, has the minimum total conversion cost for all parameters?</p>

<hr>

<p><em>A very simple example</em>: Assume we have a graph of types <code>integer -> real -> complex</code>. Our parameters have the types <code>{ integer, real }</code>. We have a function with types <code>{ complex, complex }</code>. The first integer takes two conversion to match complex, and the real takes one conversion, for a total cost of three. We have another function with types <code>{ real, real }</code>. This has a cost of one and is thus the better match.</p>

<hr>

<p>My initial idea is to treat the search as a path through a graph and use a modified A* algorithm. Each of the possible functions is a goal in that graph, and each path between nodes represents the conversion of a single parameter type. With even a modest number of allowed type conversions however this becomes very inefficient.</p>
 | algorithms typing | 1 | Algorithm for type conversion / signature matching -- (algorithms typing)
<p>I'm working on an expression typing system and looking for insights on what algorithms may be available which solve my problem -- or a proof that its complexity is too high to be reasonable to implement. The problem is defined below.</p>

<p>I have a set of types which form a directed graph $T = (V,E)$ (assume no cycles). This graph represents the allowed type conversions in a language. For example an edge $e_i = v_1 \rightarrow v_2$ indicates that $v_1$ can be implicitly converted to type $v_2$.</p>

<p>I have a set of parameter types for a function, expressed as a set $P = \{ p_1, \ldots, p_n : p_i \in V \}$. I also have a list of functions $F$ that might be applicable at this point. Each function has a signature (the types it accepts) $F_j = \{ t_1, \ldots, t_n : t_i \in V \}$.</p>

<p>The goal is to use a series of type conversions allowed by $T$ to convert $P$ into a signature compatible with any function in $F$. Conversion means moving along an edge in the graph to another type. Compatible means the converted parameter types match the function types.</p>

<p>If each conversion has a cost of 1, which function, if selected, has the minimum total conversion cost for all parameters?</p>

<hr>

<p><em>A very simple example</em>: Assume we have a graph of types <code>integer -> real -> complex</code>. Our parameters have the types <code>{ integer, real }</code>. We have a function with types <code>{ complex, complex }</code>. The first integer takes two conversion to match complex, and the real takes one conversion, for a total cost of three. We have another function with types <code>{ real, real }</code>. This has a cost of one and is thus the better match.</p>

<hr>

<p>My initial idea is to treat the search as a path through a graph and use a modified A* algorithm. Each of the possible functions is a goal in that graph, and each path between nodes represents the conversion of a single parameter type. With even a modest number of allowed type conversions however this becomes very inefficient.</p>
 | habedi/stack-exchange-dataset |
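Instead of an A* search over the product space, one can run one BFS per parameter type in $T$ and then score every candidate function by table lookups. A hypothetical Python sketch of that idea (all names are my own):

```python
from collections import deque

def conversion_costs(edges, source):
    """BFS distances in the type-conversion graph: the minimum number of
    implicit conversions from `source` to every reachable type."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in edges.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_overload(edges, params, signatures):
    """Return (index, cost) of the cheapest-matching signature, or None.

    Parameters are matched to signature types positionally; a signature
    is skipped if any parameter type cannot reach its required type.
    """
    dists = [conversion_costs(edges, p) for p in params]
    best = None
    for idx, sig in enumerate(signatures):
        if len(sig) != len(params):
            continue
        try:
            cost = sum(d[t] for d, t in zip(dists, sig))
        except KeyError:
            continue                      # unreachable target type
        if best is None or cost < best[1]:
            best = (idx, cost)
    return best
```

This costs $O(|P| \cdot (|V| + |E|))$ for the BFS passes plus $O(|F| \cdot n)$ for scoring, independent of how long the individual conversion chains are.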
2,688 | Time series probability and mutual information | <p>There is a time series of say $100$ data points. I wish to assign symbols of $0, 1, 2$ for each unique data point. The issue is I have tried but got stuck since no matter I specify the symbols, the program just outputs probability of $1$'s and $0$'s. The following are the questions:</p>

<ol>
<li>How can I find the probabilities, or correct my code so that it outputs probabilities when the number of symbols is greater than 2?</li>
<li>How can I calculate entropy and mutual information in this case? I have read Matlab's entropy calculation <a href="http://www.mathworks.com/matlabcentral/fileexchange/14888" rel="nofollow">Mutual Information & Entropy</a> but cannot follow how to apply it here.</li>
</ol>
 | probability theory information theory | 1 | Time series probability and mutual information -- (probability theory information theory)
<p>There is a time series of say $100$ data points. I wish to assign symbols of $0, 1, 2$ for each unique data point. The issue is I have tried but got stuck since no matter I specify the symbols, the program just outputs probability of $1$'s and $0$'s. The following are the questions:</p>

<ol>
<li>How can I find the probabilities, or correct my code so that it outputs probabilities when the number of symbols is greater than 2?</li>
<li>How can I calculate entropy and mutual information in this case? I have read Matlab's entropy calculation <a href="http://www.mathworks.com/matlabcentral/fileexchange/14888" rel="nofollow">Mutual Information & Entropy</a> but cannot follow how to apply it here.</li>
</ol>
 | habedi/stack-exchange-dataset |
2,689 | Is Karp Reduction identical to Levin Reduction | <h3>Definition: Karp Reduction</h3>
<p>A language <span class="math-container">$A$</span> is Karp reducible to a language <span class="math-container">$B$</span> if there is a polynomial-time computable function <span class="math-container">$f:\{0,1\}^*\rightarrow\{0,1\}^*$</span> such that for every <span class="math-container">$x$</span>, <span class="math-container">$x\in A$</span> if and only if <span class="math-container">$f(x)\in B$</span>.</p>
<h3>Definition: Levin Reduction</h3>
<p>A search problem <span class="math-container">$V_A$</span> is Levin reducible to a search problem <span class="math-container">$V_B$</span> if there is polynomial time function <span class="math-container">$f$</span> that Karp reduces <span class="math-container">$L(V_A)$</span> to <span class="math-container">$L(V_B)$</span> and there are polynomial-time computable functions <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that</p>
<ol>
<li><p><span class="math-container">$\langle x, y \rangle \in V_A \implies \langle f(x), g(x,y) \rangle \in V_B$</span>,</p>
</li>
<li><p><span class="math-container">$\langle f(x), z \rangle \in V_B \implies \langle x, h(x,z) \rangle \in V_A$</span></p>
</li>
</ol>
<p>Are these reductions equivalent?</p>
<hr />
<p>I think the two definitions are equivalent. For any two <span class="math-container">$\mathsf{NP}$</span> languages <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, if <span class="math-container">$A$</span> is Karp reducible to <span class="math-container">$B$</span>, then <span class="math-container">$A$</span> is Levin reducible to <span class="math-container">$B$</span>.</p>
<p>Here is my proof:</p>
<p>Let <span class="math-container">$x$</span> and <span class="math-container">$\overline{x}$</span> be arbitrary instances of <span class="math-container">$A$</span>, and let <span class="math-container">$x'$</span> be an arbitrary instance of <span class="math-container">$B$</span>.
Suppose <span class="math-container">$V_A$</span> and <span class="math-container">$V_B$</span> are verifiers of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.
Let <span class="math-container">$y$</span> and <span class="math-container">$\overline{y}$</span> be arbitrary certificates of <span class="math-container">$x$</span> and <span class="math-container">$\overline{x}$</span> according to <span class="math-container">$V_A$</span>.
Let <span class="math-container">$z$</span> be that of <span class="math-container">$x'$</span> according to <span class="math-container">$V_B$</span>.</p>
<p>Construct new verifiers <span class="math-container">$V'_A$</span> and <span class="math-container">$V'_B$</span> with new certificates <span class="math-container">$y'$</span> and <span class="math-container">$z'$</span>:</p>
<p><span class="math-container">$V'_A(x,y'):$</span></p>
<ol>
<li><span class="math-container">$y'=\langle 0,\overline{x},\overline{y}\rangle$</span>: If <span class="math-container">$f(x)\ne f(\overline{x})$</span>, reject.
Otherwise output <span class="math-container">$V_A(\overline{x},\overline{y})$</span>.</li>
<li><span class="math-container">$y'=\langle 1,z\rangle$</span>: Output <span class="math-container">$V_B(f(x),z)$</span>.</li>
</ol>
<p><span class="math-container">$V'_B(x',z'):$</span></p>
<ol>
<li><p><span class="math-container">$z'=\langle 0,z\rangle$</span>: Output <span class="math-container">$V_B(x',z)$</span>.</p>
</li>
<li><p><span class="math-container">$z'=\langle 1,x,y\rangle$</span>: If <span class="math-container">$x'\ne f(x)$</span>, reject.
Otherwise output <span class="math-container">$V_A(x,y)$</span>.</p>
</li>
</ol>
<p>The polynomial-time computable functions <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are defined as below:</p>
<p><span class="math-container">$g(x,y')$</span></p>
<ol>
<li><p><span class="math-container">$y'=\langle 0,\overline{x},\overline{y}\rangle$</span>: Output <span class="math-container">$\langle 1,\overline{x},\overline{y}\rangle$</span>.</p>
</li>
<li><p><span class="math-container">$y'=\langle 1,z\rangle$</span>: Output <span class="math-container">$\langle 0,z\rangle$</span>.</p>
</li>
</ol>
<p><span class="math-container">$h(x',z')$</span></p>
<ol>
<li><p><span class="math-container">$z'=\langle 0,z\rangle$</span>: Output <span class="math-container">$\langle 1,z\rangle$</span>.</p>
</li>
<li><p><span class="math-container">$z'=\langle 1,x,y\rangle$</span>: Output <span class="math-container">$\langle 0,x,y\rangle$</span>.</p>
</li>
</ol>
<p>Let <span class="math-container">$Y_x$</span> be the set of all certificates of <span class="math-container">$x$</span> according to <span class="math-container">$V_A$</span> and <span class="math-container">$Z_{x'}$</span> be the set of all certificates of <span class="math-container">$x'$</span> according to <span class="math-container">$V_B$</span>.
Then the set of all certificates of <span class="math-container">$x$</span> according to <span class="math-container">$V'_A$</span> is <span class="math-container">$0\overline{x}Y_\overline{x}+1Z_{f(x)}$</span> such that <span class="math-container">$f(x)=f(\overline{x})$</span>,
and the set of all certificates of <span class="math-container">$x'$</span> according to <span class="math-container">$V'_B$</span> is <span class="math-container">$0Z_{x'}+1\overline{x}Y_\overline{x}$</span> such that <span class="math-container">$x'=f(\overline{x})$</span>.</p>
<p>(This is derived from the accepting language of <span class="math-container">$V'_A$</span> and <span class="math-container">$V'_B$</span>.)</p>
<p>Now let <span class="math-container">$x'=f(x)$</span>; the rest is easy to check.</p>
 | complexity theory reductions check my proof | 1 | Is Karp Reduction identical to Levin Reduction -- (complexity theory reductions check my proof)
<h3>Definition: Karp Reduction</h3>
<p>A language <span class="math-container">$A$</span> is Karp reducible to a language <span class="math-container">$B$</span> if there is a polynomial-time computable function <span class="math-container">$f:\{0,1\}^*\rightarrow\{0,1\}^*$</span> such that for every <span class="math-container">$x$</span>, <span class="math-container">$x\in A$</span> if and only if <span class="math-container">$f(x)\in B$</span>.</p>
<h3>Definition: Levin Reduction</h3>
<p>A search problem <span class="math-container">$V_A$</span> is Levin reducible to a search problem <span class="math-container">$V_B$</span> if there is a polynomial-time computable function <span class="math-container">$f$</span> that Karp reduces <span class="math-container">$L(V_A)$</span> to <span class="math-container">$L(V_B)$</span> and there are polynomial-time computable functions <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that</p>
<ol>
<li><p><span class="math-container">$\langle x, y \rangle \in V_A \implies \langle f(x), g(x,y) \rangle \in V_B$</span>,</p>
</li>
<li><p><span class="math-container">$\langle f(x), z \rangle \in V_B \implies \langle x, h(x,z) \rangle \in V_A$</span></p>
</li>
</ol>
<p>Are these reductions equivalent?</p>
<hr />
<p>I think the two definitions are equivalent. For any two <span class="math-container">$\mathsf{NP}$</span> languages <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, if <span class="math-container">$A$</span> is Karp reducible to <span class="math-container">$B$</span>, then <span class="math-container">$A$</span> is Levin reducible to <span class="math-container">$B$</span>.</p>
<p>Here is my proof:</p>
<p>Let <span class="math-container">$x$</span> and <span class="math-container">$\overline{x}$</span> be arbitrary instances of <span class="math-container">$A$</span>, and let <span class="math-container">$x'$</span> be an arbitrary instance of <span class="math-container">$B$</span>.
Suppose <span class="math-container">$V_A$</span> and <span class="math-container">$V_B$</span> are verifiers of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.
Let <span class="math-container">$y$</span> and <span class="math-container">$\overline{y}$</span> be arbitrary certificates of <span class="math-container">$x$</span> and <span class="math-container">$\overline{x}$</span> according to <span class="math-container">$V_A$</span>.
Let <span class="math-container">$z$</span> be that of <span class="math-container">$x'$</span> according to <span class="math-container">$V_B$</span>.</p>
<p>Construct new verifiers <span class="math-container">$V'_A$</span> and <span class="math-container">$V'_B$</span> with new certificates <span class="math-container">$y'$</span> and <span class="math-container">$z'$</span>:</p>
<p><span class="math-container">$V'_A(x,y'):$</span></p>
<ol>
<li><span class="math-container">$y'=\langle 0,\overline{x},\overline{y}\rangle$</span>: If <span class="math-container">$f(x)\ne f(\overline{x})$</span>, reject.
Otherwise output <span class="math-container">$V_A(\overline{x},\overline{y})$</span>.</li>
<li><span class="math-container">$y'=\langle 1,z\rangle$</span>: Output <span class="math-container">$V_B(f(x),z)$</span>.</li>
</ol>
<p><span class="math-container">$V'_B(x',z'):$</span></p>
<ol>
<li><p><span class="math-container">$z'=\langle 0,z\rangle$</span>: Output <span class="math-container">$V_B(x',z)$</span>.</p>
</li>
<li><p><span class="math-container">$z'=\langle 1,x,y\rangle$</span>: If <span class="math-container">$x'\ne f(x)$</span>, reject.
Otherwise output <span class="math-container">$V_A(x,y)$</span>.</p>
</li>
</ol>
<p>The polynomial-time computable functions <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are defined as below:</p>
<p><span class="math-container">$g(x,y')$</span></p>
<ol>
<li><p><span class="math-container">$y'=\langle 0,\overline{x},\overline{y}\rangle$</span>: Output <span class="math-container">$\langle 1,\overline{x},\overline{y}\rangle$</span>.</p>
</li>
<li><p><span class="math-container">$y'=\langle 1,z\rangle$</span>: Output <span class="math-container">$\langle 0,z\rangle$</span>.</p>
</li>
</ol>
<p><span class="math-container">$h(x',z')$</span></p>
<ol>
<li><p><span class="math-container">$z'=\langle 0,z\rangle$</span>: Output <span class="math-container">$\langle 1,z\rangle$</span>.</p>
</li>
<li><p><span class="math-container">$z'=\langle 1,x,y\rangle$</span>: Output <span class="math-container">$\langle 0,x,y\rangle$</span>.</p>
</li>
</ol>
<p>Let <span class="math-container">$Y_x$</span> be the set of all certificates of <span class="math-container">$x$</span> according to <span class="math-container">$V_A$</span> and <span class="math-container">$Z_{x'}$</span> be the set of all certificates of <span class="math-container">$x'$</span> according to <span class="math-container">$V_B$</span>.
Then the set of all certificates of <span class="math-container">$x$</span> according to <span class="math-container">$V'_A$</span> is <span class="math-container">$0\overline{x}Y_\overline{x}+1Z_{f(x)}$</span> such that <span class="math-container">$f(x)=f(\overline{x})$</span>,
and the set of all certificates of <span class="math-container">$x'$</span> according to <span class="math-container">$V'_B$</span> is <span class="math-container">$0Z_{x'}+1\overline{x}Y_\overline{x}$</span> such that <span class="math-container">$x'=f(\overline{x})$</span>.</p>
<p>(This is derived from the accepting language of <span class="math-container">$V'_A$</span> and <span class="math-container">$V'_B$</span>.)</p>
<p>Now let <span class="math-container">$x'=f(x)$</span>; the rest is easy to check.</p>
 | habedi/stack-exchange-dataset |
2,690 | Vertex coloring with an upper bound on the degree of the nodes | <p>Consider the set of graphs in which the maximum degree of the vertices is a constant number $\Delta$ independent of the number of vertices. Is the vertex coloring problem (that is, color the vertices with minimum number of colors such that no pair of adjacent nodes have the same color) on this set still NP-hard? Why?</p>
 | algorithms complexity theory graphs np complete | 1 | Vertex coloring with an upper bound on the degree of the nodes -- (algorithms complexity theory graphs np complete)
<p>Consider the set of graphs in which the maximum degree of the vertices is a constant number $\Delta$ independent of the number of vertices. Is the vertex coloring problem (that is, color the vertices with minimum number of colors such that no pair of adjacent nodes have the same color) on this set still NP-hard? Why?</p>
 | habedi/stack-exchange-dataset |
2,696 | Left recursion and left factoring -- which one goes first? | <p>If I have a grammar with a production that exhibits both left recursion and a common left factor, like</p>

<p>$\qquad \displaystyle F \to FBa \mid cDS \mid c$ </p>

<p>which one has priority, left recursion or left factoring?</p>
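<p>For concreteness: the usual recipe is to eliminate the immediate left recursion first and only then left-factor the result, since the alternatives produced by the first step may expose a common prefix. Below is a minimal sketch of the first step (a hypothetical helper that treats every grammar symbol as a single character):</p>

```python
def eliminate_left_recursion(nt, alts):
    """Rewrite A -> A x | y  as  A -> y A' ; A' -> x A' | ε."""
    rec = [a[len(nt):] for a in alts if a.startswith(nt)]   # left-recursive tails
    base = [a for a in alts if not a.startswith(nt)]        # non-recursive alternatives
    if not rec:
        return {nt: alts}
    new = nt + "'"
    return {nt: [b + new for b in base],
            new: [r + new for r in rec] + ["ε"]}
```

<p>On the grammar above this yields $F \rightarrow cDSF' \mid cF'$ and $F' \rightarrow BaF' \mid \epsilon$; the two $F$-alternatives now share the prefix $c$, so left factoring applies afterwards.</p>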
 | formal languages formal grammars parsers left recursion | 1 | Left recursion and left factoring -- which one goes first? -- (formal languages formal grammars parsers left recursion)
<p>If I have a grammar with a production that exhibits both left recursion and a common left factor, like</p>

<p>$\qquad \displaystyle F \to FBa \mid cDS \mid c$ </p>

<p>which one has priority, left recursion or left factoring?</p>
 | habedi/stack-exchange-dataset |
2,697 | Why is solving of diagonal quadratic equations over $\mathbb R$ and $\mathbb C$ in $P$? | <p>Let $\mathbb F\in\{\mathbb R, \mathbb C\}$ the field of real or complex numbers. Then [1, page 22 in the middle] claims that the following equation can easily be solved in deterministic polynomial time:
$$ \sum_{i=1}^n a_ix_i^2=b$$
with $a_i, b\in\mathbb F$. Some discussion suggests that the algorithmic model assumed in the paper is that of a machine that can perform field operations in one step.</p>

<p>My question is: What is the algorithm? Is it multidimensional Newton? That would be weird, because that algorithm only converges (in some cases) and does not give an exact solution. I'm quite unused to computational models over fields like the reals or complex numbers; maybe for someone more experienced this is crystal clear?</p>

<p>[1] <a href="http://www.math.uni-bonn.de/~saxena/papers/cubic-forms.pdf" rel="nofollow">Agrawal & Saxena, On the complexity of cubic forms, 2006.</a></p>
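<p>One candidate non-iterative algorithm, in case it helps: if $b=0$ take $x=0$; otherwise pick any $a_i$ with the same sign as $b$ and set $x_i=\sqrt{b/a_i}$, all other coordinates zero (over $\mathbb C$ any nonzero $a_i$ works, using a complex square root). A sketch of the real case:</p>

```python
import math

def solve_diagonal_real(a, b):
    """Find x with sum(a[i] * x[i]**2) == b over the reals, or None."""
    n = len(a)
    if b == 0:
        return [0.0] * n
    for i, ai in enumerate(a):
        if ai != 0 and b / ai > 0:        # a_i has the same sign as b
            x = [0.0] * n
            x[i] = math.sqrt(b / ai)
            return x
    return None  # every nonzero a_i has the wrong sign: no real solution
```

<p>Whether taking a square root counts as a single machine operation in the paper's model is an assumption here, not something the paper is quoted on.</p>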
 | algorithms real numbers | 1 | Why is solving of diagonal quadratic equations over $\mathbb R$ and $\mathbb C$ in $P$? -- (algorithms real numbers)
<p>Let $\mathbb F\in\{\mathbb R, \mathbb C\}$ the field of real or complex numbers. Then [1, page 22 in the middle] claims that the following equation can easily be solved in deterministic polynomial time:
$$ \sum_{i=1}^n a_ix_i^2=b$$
with $a_i, b\in\mathbb F$. Some discussion suggests that the algorithmic model assumed in the paper is that of a machine that can perform field operations in one step.</p>

<p>My question is: What is the algorithm? Is it multidimensional Newton? That would be weird, because that algorithm only converges (in some cases) and does not give an exact solution. I'm quite unused to computational models over fields like the reals or complex numbers; maybe for someone more experienced this is crystal clear?</p>

<p>[1] <a href="http://www.math.uni-bonn.de/~saxena/papers/cubic-forms.pdf" rel="nofollow">Agrawal & Saxena, On the complexity of cubic forms, 2006.</a></p>
 | habedi/stack-exchange-dataset |
2,698 | Concept of reduction and algorithm | <p>Suppose that someone found an algorithm A for an NP problem (that is not NP-complete) that uses an algorithm B for a PSPACE-complete or #P-complete problem during execution. (The remaining part of the algorithm takes polynomial time.)</p>

<p>Then suppose there is also an algorithm C for an NP problem that uses the polynomial-time part of the algorithm A. The rest of the algorithm C is actually an algorithm that solves NP-complete problems.</p>

<p>Then would this mean that PSPACE-complete or #P-complete collapse to NP-complete?</p>

<p>If so or if not, why would it be like that?</p>

<p>I am asking this question, because I seem to get confused during reading my computation textbook.</p>

<p>Edit:
I was a bit confused because in (scalar) function math, if g(x)=f(h(x)) and g(x)=f(q(x)), then h(x) and q(x) must be virtually the same. So my question was essentially the above; that was the parallel I was drawing between algorithms A and C.</p>
 | complexity theory | 1 | Concept of reduction and algorithm -- (complexity theory)
<p>Suppose that someone found an algorithm A for an NP problem (that is not NP-complete) that uses an algorithm B for a PSPACE-complete or #P-complete problem during execution. (The remaining part of the algorithm takes polynomial time.)</p>

<p>Then suppose there is also an algorithm C for an NP problem that uses the polynomial-time part of the algorithm A. The rest of the algorithm C is actually an algorithm that solves NP-complete problems.</p>

<p>Then would this mean that PSPACE-complete or #P-complete collapse to NP-complete?</p>

<p>If so or if not, why would it be like that?</p>

<p>I am asking this question, because I seem to get confused during reading my computation textbook.</p>

<p>Edit:
I was a bit confused because in (scalar) function math, if g(x)=f(h(x)) and g(x)=f(q(x)), then h(x) and q(x) must be virtually the same. So my question was essentially the above; that was the parallel I was drawing between algorithms A and C.</p>
 | habedi/stack-exchange-dataset |
2,704 | Is SAT in P if there are exponentially many clauses in the number of variables? | <p>I define a <em>long CNF</em> to contain at least $2^\frac{n}{2}$ clauses, where $n$ is the number of its variables. Let $\text{Long-SAT}=\{\phi: \phi$ is a satisfiable long CNF formula$\}$. </p>

<p>I'd like to know why $\text{Long-SAT} \in P$. At first I thought it was $\text{NPC}$, since I can do a polynomial-time reduction from $\text{SAT}$ to $\text{Long-SAT}$, no?</p>

<p>But maybe I can reduce $\text{2-SAT}$ to $\text{Long-SAT}$? How do I do that?</p>
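<p>A sanity check on why $\text{Long-SAT} \in P$ at all: a long CNF over $n$ variables has input length at least $2^{n/2}$, so trying all $2^n$ assignments is polynomial (roughly quadratic) in the input length. A sketch of that brute-force check, with clauses given as lists of signed 1-based literals:</p>

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Try all 2**n assignments; for a long CNF (at least 2**(n/2)
    clauses) this is polynomial in the input length."""
    for assign in product([False, True], repeat=n):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

<p>This also hints at why padding does not work in the other direction: blowing an arbitrary CNF up to $2^{n/2}$ clauses is an exponential-size transformation, not a polynomial-time reduction from $\text{SAT}$.</p>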
 | complexity theory np complete reductions satisfiability polynomial time | 1 | Is SAT in P if there are exponentially many clauses in the number of variables? -- (complexity theory np complete reductions satisfiability polynomial time)
<p>I define a <em>long CNF</em> to contain at least $2^\frac{n}{2}$ clauses, where $n$ is the number of its variables. Let $\text{Long-SAT}=\{\phi: \phi$ is a satisfiable long CNF formula$\}$. </p>

<p>I'd like to know why $\text{Long-SAT} \in P$. At first I thought it was $\text{NPC}$, since I can do a polynomial-time reduction from $\text{SAT}$ to $\text{Long-SAT}$, no?</p>

<p>But maybe I can reduce $\text{2-SAT}$ to $\text{Long-SAT}$? How do I do that?</p>
 | habedi/stack-exchange-dataset |
2,710 | Is finding dead-end nodes in NL? | <p>Given a directed graph $G$ and two nodes $s,t$, decide whether there is some node $s'$ such that $s'$ is reachable from $s$ while $t$ is <em>not</em> reachable from $s'$.</p>

<p>I am wondering whether this problem is in <a href="https://en.wikipedia.org/wiki/NL_%28complexity%29" rel="nofollow">NL</a>.</p>
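<p>For reference, the property itself is easy in deterministic polynomial time with two rounds of reachability; the question is only whether it fits in logarithmic space. A baseline sketch (not an NL algorithm):</p>

```python
from collections import deque

def reach(adj, u):
    """Set of nodes reachable from u (BFS)."""
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

def has_dead_end(adj, s, t):
    """Is there some s' reachable from s that cannot reach t?"""
    return any(t not in reach(adj, v) for v in reach(adj, s))
```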
 | complexity theory graphs | 1 | Is finding dead-end nodes in NL? -- (complexity theory graphs)
<p>Given a directed graph $G$ and two nodes $s,t$, decide whether there is some node $s'$ such that $s'$ is reachable from $s$ while $t$ is <em>not</em> reachable from $s'$.</p>

<p>I am wondering whether this problem is in <a href="https://en.wikipedia.org/wiki/NL_%28complexity%29" rel="nofollow">NL</a>.</p>
 | habedi/stack-exchange-dataset |
2,713 | Are regular expressions $LR(k)$? | <p>If I have a Type 3 grammar, it can be recognized by a pushdown automaton (without performing any operations on the stack), so I can treat regular languages as a special case of context-free languages. But can I know whether a Type 3 grammar is $LR(1)$, $LL(1)$, $SLR(1)$, etc. without constructing any parse tables?</p>
 | formal languages regular languages formal grammars parsers regular expressions | 1 | Are regular expressions $LR(k)$? -- (formal languages regular languages formal grammars parsers regular expressions)
<p>If I have a Type 3 grammar, it can be recognized by a pushdown automaton (without performing any operations on the stack), so I can treat regular languages as a special case of context-free languages. But can I know whether a Type 3 grammar is $LR(1)$, $LL(1)$, $SLR(1)$, etc. without constructing any parse tables?</p>
 | habedi/stack-exchange-dataset |
2,717 | Polygons generated by a set of segments | <p>Given a set of segments, I would like to compute the set of closed polygons inside the convex hull of the endpoints of those segments. The vertices of the polygons are the intersections of the segments. For example, if you draw the six segments obtained by restricting the lines $x=-1$, $x=0$, $x=1$, $y=-1$, $y=0$, $y=1$ to the square $[-1,1]\times[-1,1]$, I would like the algorithm to output the four unit squares around the origin.<img src="https://i.stack.imgur.com/I6qmZ.png" alt="The polygons I'm trying to compute"></p>
 | algorithms computational geometry | 1 | Polygons generated by a set of segments -- (algorithms computational geometry)
<p>Given a set of segments, I would like to compute the set of closed polygons inside the convex hull of the endpoints of those segments. The vertices of the polygons are the intersections of the segments. For example, if you draw the six segments obtained by restricting the lines $x=-1$, $x=0$, $x=1$, $y=-1$, $y=0$, $y=1$ to the square $[-1,1]\times[-1,1]$, I would like the algorithm to output the four unit squares around the origin.<img src="https://i.stack.imgur.com/I6qmZ.png" alt="The polygons I'm trying to compute"></p>
 | habedi/stack-exchange-dataset |
2,718 | What is the difference between quantum TM and nondeterministic TM? | <p>I was going through the discussion on the question <a href="https://cs.stackexchange.com/questions/125/how-to-define-quantum-turing-machines/">How to define quantum Turing machines?</a> and I feel that quantum TM and <em>nondeterministic</em> TM are one and the same. The answers to the other question do not touch on that. Are these two models one and the same?</p>

<p>If no,</p>

<ol>
<li>What are the differences between quantum TM and NDTM? </li>
<li>Is there any computation which a NDTM would do quicker than Quantum TM? </li>
<li>If that is the case, then a quantum TM is just a DTM; so why is there so much fuss about this technology? We already have so many DTMs. Why design a new DTM at all?</li>
</ol>
 | computability turing machines quantum computing nondeterminism | 1 | What is the difference between quantum TM and nondeterministic TM -- (computability turing machines quantum computing nondeterminism)
<p>I was going through the discussion on the question <a href="https://cs.stackexchange.com/questions/125/how-to-define-quantum-turing-machines/">How to define quantum Turing machines?</a> and I feel that quantum TM and <em>nondeterministic</em> TM are one and the same. The answers to the other question do not touch on that. Are these two models one and the same?</p>

<p>If no,</p>

<ol>
<li>What are the differences between quantum TM and NDTM? </li>
<li>Is there any computation which a NDTM would do quicker than Quantum TM? </li>
<li>If that is the case, then a quantum TM is just a DTM; so why is there so much fuss about this technology? We already have so many DTMs. Why design a new DTM at all?</li>
</ol>
 | habedi/stack-exchange-dataset |
2,722 | How would a neural network deal with an arbitrary length output? | <p>I've been looking into Recurrent Neural Networks, but I don't understand what the architecture of a neural network would look like when the output length is not necessarily fixed. </p>

<p>It seems like most networks I've read descriptions of require the output length to be equal to the input length or at least a fixed size. But how would you do something like convert a word to the string of corresponding phonemes? </p>

<p>The string of phonemes might be longer or shorter than the original word. I know you could sequence in the input characters using 8 input nodes (bitcode of the character) in a recurrent network, provided there's a loop in the network, but is this a common pattern for the output stream as well? Can you let the network provide something like a 'stop codon'?</p>

<p>I suppose a lot of practical networks, like those that do speech synthesis should have an output that is not fixed in length. How do people deal with that?</p>
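<p>For what it's worth, the 'stop codon' idea is a common pattern: the network emits one symbol per step, and a distinguished end-of-sequence token ends generation, so the model itself decides the output length (with a hard cap as a safety net). A framework-free sketch, where the hypothetical <code>step_fn</code> stands in for one recurrent step of a trained network:</p>

```python
def decode(step_fn, state, eos, max_len=100):
    """Feed the model its own last output until it emits `eos`
    (or until the max_len safety cap is hit)."""
    out, tok = [], None
    while len(out) < max_len:
        tok, state = step_fn(tok, state)
        if tok == eos:
            break
        out.append(tok)
    return out
```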
 | neural networks | 1 | How would a neural network deal with an arbitrary length output? -- (neural networks)
<p>I've been looking into Recurrent Neural Networks, but I don't understand what the architecture of a neural network would look like when the output length is not necessarily fixed. </p>

<p>It seems like most networks I've read descriptions of require the output length to be equal to the input length or at least a fixed size. But how would you do something like convert a word to the string of corresponding phonemes? </p>

<p>The string of phonemes might be longer or shorter than the original word. I know you could sequence in the input characters using 8 input nodes (bitcode of the character) in a recurrent network, provided there's a loop in the network, but is this a common pattern for the output stream as well? Can you let the network provide something like a 'stop codon'?</p>

<p>I suppose a lot of practical networks, like those that do speech synthesis should have an output that is not fixed in length. How do people deal with that?</p>
 | habedi/stack-exchange-dataset |
2,728 | Does a never-halting machine always loop? | <p>A Turing machine that returns to a previously encountered state with its read/write head on the same cell of the exact same tape will be caught in a loop. Such a machine doesn't halt.</p>

<p>Can someone give an example of a never-halting machine that doesn't loop?</p>
 | computability turing machines halting problem | 1 | Does a never-halting machine always loop? -- (computability turing machines halting problem)
<p>A Turing machine that returns to a previously encountered state with its read/write head on the same cell of the exact same tape will be caught in a loop. Such a machine doesn't halt.</p>

<p>Can someone give an example of a never-halting machine that doesn't loop?</p>
 | habedi/stack-exchange-dataset |
2,734 | Algorithm to find optimal currency denominations | <p>Mark lives in a tiny country populated by people who tend to over-think things. One day, the king of the country decides to redesign the country's currency to make giving change more efficient. The king wants to minimize the expected number of coins it takes to exactly pay any amount up to (but not including) the amount of the smallest paper bill.</p>

<p>Suppose that the smallest unit of currency is the Coin. The smallest paper bill in the kingdom is worth $n$ Coins. The king decides that there should not be more than $m$ different coin denominations in circulation. The problem, then, is to find an $m$-set $\{d_1, d_2, ..., d_m\}$ of integers from $\{1, 2, ..., n - 1\}$ which minimizes $\frac{1}{n-1}\sum_{i = 1}^{n-1}{c_1(i) + c_2(i) + ... + c_m(i)}$ subject to $c_1(i)d_1 + c_2(i)d_2 + ... + c_m(i)d_m = i$.</p>

<p>For instance, take the standard USD and its coin denominations of $\{1, 5, 10, 25, 50\}$. Here, the smallest paper bill is worth 100 of the smallest coin. It takes 4 coins to make 46 cents using this currency; we have $c_1(46) = 1, c_2(46) = 0, c_3(46) = 2, c_4(46) = 1, c_5(46) = 0$. However, if we had coin denominations of $\{1, 15, 30\}$, it would take only 3 coins: $c_1(46) = 1, c_2(46) = 1, c_3(46) = 1$. Which of these denomination sets minimizes the average number of coins to make any sum up to and including 99 cents?</p>

<p>More generally, given $n$ and $m$, how might one algorithmically determine the optimal set? Clearly, one might enumerate all viable $m$-subsets and compute the average number of coins it takes to make sums from 1 to $n - 1$, keeping track of the optimal one along the way. Since there are around $C(n - 1, m)$ $m$-subsets (not all of which are viable, but still), this would not be terribly efficient. Can you do better than that?</p>
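<p>For evaluating a candidate set, the inner computation is the standard coin-change dynamic program; an outer search can then enumerate (or heuristically explore) the $m$-subsets. A sketch, assuming the set always contains 1 so every amount is payable:</p>

```python
def min_coins(denoms, n):
    """dp[i] = fewest coins summing to exactly i, for 0 <= i < n."""
    INF = float("inf")
    dp = [0] + [INF] * (n - 1)
    for i in range(1, n):
        for d in denoms:
            if d <= i and dp[i - d] + 1 < dp[i]:
                dp[i] = dp[i - d] + 1
    return dp

def avg_coins(denoms, n):
    """Average over all amounts 1 .. n-1."""
    dp = min_coins(denoms, n)
    return sum(dp[1:]) / (n - 1)
```

<p>This reproduces the examples above: <code>min_coins((1, 5, 10, 25, 50), 100)[46] == 4</code> and <code>min_coins((1, 15, 30), 100)[46] == 3</code>.</p>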
 | algorithms optimization combinatorics integers | 1 | Algorithm to find optimal currency denominations -- (algorithms optimization combinatorics integers)
<p>Mark lives in a tiny country populated by people who tend to over-think things. One day, the king of the country decides to redesign the country's currency to make giving change more efficient. The king wants to minimize the expected number of coins it takes to exactly pay any amount up to (but not including) the amount of the smallest paper bill.</p>

<p>Suppose that the smallest unit of currency is the Coin. The smallest paper bill in the kingdom is worth $n$ Coins. The king decides that there should not be more than $m$ different coin denominations in circulation. The problem, then, is to find an $m$-set $\{d_1, d_2, ..., d_m\}$ of integers from $\{1, 2, ..., n - 1\}$ which minimizes $\frac{1}{n-1}\sum_{i = 1}^{n-1}{c_1(i) + c_2(i) + ... + c_m(i)}$ subject to $c_1(i)d_1 + c_2(i)d_2 + ... + c_m(i)d_m = i$.</p>

<p>For instance, take the standard USD and its coin denominations of $\{1, 5, 10, 25, 50\}$. Here, the smallest paper bill is worth 100 of the smallest coin. It takes 4 coins to make 46 cents using this currency; we have $c_1(46) = 1, c_2(46) = 0, c_3(46) = 2, c_4(46) = 1, c_5(46) = 0$. However, if we had coin denominations of $\{1, 15, 30\}$, it would take only 3 coins: $c_1(46) = 1, c_2(46) = 1, c_3(46) = 1$. Which of these denomination sets minimizes the average number of coins to make any sum up to and including 99 cents?</p>

<p>More generally, given $n$ and $m$, how might one algorithmically determine the optimal set? Clearly, one might enumerate all viable $m$-subsets and compute the average number of coins it takes to make sums from 1 to $n - 1$, keeping track of the optimal one along the way. Since there are around $C(n - 1, m)$ $m$-subsets (not all of which are viable, but still), this would not be terribly efficient. Can you do better than that?</p>
 | habedi/stack-exchange-dataset |
2,735 | Context-free grammar to a pushdown automaton | <p>I'm trying to convert a context free grammar to a pushdown automaton (PDA); I'm not sure how I'm gonna get an answer or show you my progress as it's a diagram... Anyway this is the last problem I have on a homework that's due later today, so I'd appreciate some kind of help, even if it's just an explanation of the correct answer's diagram. I need a PDA corresponding to this CFG:</p>

<p>$$S \rightarrow aSa | bSb | B$$
$$B \rightarrow bB | \epsilon$$</p>

<p>I know it will have to push X every time 'a' is read before a 'b', and pop X every time 'a' is read after a 'b'. But I'm not sure how to arrange the PDA in order to tell which a's came after b's. Also, I'm unsure of how to deal with the b's in terms of the stack, as there can be as many in the middle of the string as you want. Help appreciated.</p>

<p>Thanks, Pachun</p>
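<p>Not the diagram itself, but the standard one-state CFG-to-PDA construction may help: the stack holds a sentential form, a nonterminal on top is nondeterministically replaced by one of its right-hand sides, and a terminal on top must match (and consume) the next input symbol. An executable sketch of that construction for this grammar:</p>

```python
GRAMMAR = {"S": ["aSa", "bSb", "B"], "B": ["bB", ""]}

def accepts(s):
    """One-state PDA: the stack is a sentential form over {S, B, a, b}."""
    def step(i, stack):
        if not stack:                  # empty stack: accept iff input consumed
            return i == len(s)
        top, rest = stack[0], stack[1:]
        if top in GRAMMAR:             # nondeterministically expand nonterminal
            return any(step(i, rhs + rest) for rhs in GRAMMAR[top])
        return i < len(s) and s[i] == top and step(i + 1, rest)
    return step(0, "S")
```

<p>The generated language is $\{w b^k w^R : w \in \{a,b\}^*, k \ge 0\}$, which is why a single counter of X's is not enough: the automaton has to guess nondeterministically where the middle block of b's starts.</p>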
 | formal grammars context free pushdown automata | 1 | Context-free grammar to a pushdown automaton -- (formal grammars context free pushdown automata)
<p>I'm trying to convert a context free grammar to a pushdown automaton (PDA); I'm not sure how I'm gonna get an answer or show you my progress as it's a diagram... Anyway this is the last problem I have on a homework that's due later today, so I'd appreciate some kind of help, even if it's just an explanation of the correct answer's diagram. I need a PDA corresponding to this CFG:</p>

<p>$$S \rightarrow aSa | bSb | B$$
$$B \rightarrow bB | \epsilon$$</p>

<p>I know it will have to push X every time 'a' is read before a 'b', and pop X every time 'a' is read after a 'b'. But I'm not sure how to arrange the PDA in order to tell which a's came after b's. Also, I'm unsure of how to deal with the b's in terms of the stack, as there can be as many in the middle of the string as you want. Help appreciated.</p>

<p>Thanks, Pachun</p>
 | habedi/stack-exchange-dataset |
2,739 | Algorithm to translate a deterministic Büchi automaton to LTL (when possible) | <p><a href="http://en.wikipedia.org/wiki/Linear_temporal_logic">Linear temporal logic</a> and deterministic <a href="http://en.wikipedia.org/wiki/B%C3%BCchi_automaton">Büchi automata</a> are incomparable: DBA cannot express $FGa$, and LTL cannot express <em>"at least each odd letter is 'a'"</em>. But sometimes it is interesting to know whether the language of a DBA can be expressed in LTL.</p>

<p>I need an algorithm that decides whether a language of a given DBA is describable in LTL. Do you know algorithms for that?</p>
 | logic automata linear temporal logic buchi automata | 1 | Algorithm to translate a deterministic Büchi automaton to LTL (when possible) -- (logic automata linear temporal logic buchi automata)
<p><a href="http://en.wikipedia.org/wiki/Linear_temporal_logic">Linear temporal logic</a> and deterministic <a href="http://en.wikipedia.org/wiki/B%C3%BCchi_automaton">Büchi automata</a> are incomparable: DBA cannot express $FGa$, and LTL cannot express <em>"at least each odd letter is 'a'"</em>. But sometimes it is interesting to know whether the language of a DBA can be expressed in LTL.</p>

<p>I need an algorithm that decides whether a language of a given DBA is describable in LTL. Do you know algorithms for that?</p>
 | habedi/stack-exchange-dataset |
2,741 | Minimize the maximum component of a sum of vectors | <p>I'd like to learn something about this optimization problem: For given non-negative whole numbers $a_{i,j,k}$,
find a function $f$ minimizing the expression</p>

<p>$$\max_k \sum_i a_{i,f(i),k}$$</p>

<p>An example using a different formulation might make it clearer:
You're given a set of sets of vectors like</p>

<pre><code>{
 {(3, 0, 0, 0, 0), (1, 0, 2, 0, 0)},
 {(0, 1, 0, 0, 0), (0, 0, 0, 1, 0)},
 {(0, 0, 0, 2, 0), (0, 1, 0, 1, 0)}
}
</code></pre>

<p>Choose one vector from each set, so that the maximum component of their sum is minimal.
For example, you may choose</p>

<pre><code>(1, 0, 2, 0, 0) + (0, 1, 0, 0, 0) + (0, 1, 0, 1, 0) = (1, 1, 2, 1, 0)
</code></pre>

<p>with the maximum component equal to 2, which is clearly optimal here.</p>

<p>I'm curious if this is a well-known problem and what problem-specific approximate solution methods are available. It should be fast and easy to program (no <a href="http://en.wikipedia.org/wiki/Linear_programming#Integer_unknowns" rel="nofollow">ILP</a> solver, etc.). No exact solution is needed as it's only an approximation of the real problem.</p>

<hr>

<p>I see that I should have added some details about the problem instances I'm interested in:</p>

<ul>
<li>$i \in \{0, 1, \ldots, 63\}$, i.e., there are always 64 rows (when written as in the above example).</li>
<li>$j \in \{0, 1\}$, i.e., there are only 2 vectors per row.</li>
<li>$k \in \{0, 1, \ldots, N-1\}$ where $N$ (the vector length) is between 10 and 1000.</li>
</ul>

<p>Moreover, on each row the sum of the elements of all vectors is the same, i.e.,</p>

<p>$$\forall i, j, j':\quad \sum_k a_{i,j,k} = \sum_k a_{i,j',k}$$</p>

<p>and the sum of the elements of the sum vector is less than its length, i.e.,</p>

<p>$$\sum_k \sum_i a_{i,f(i),k} < N$$</p>
 | algorithms optimization linear programming | 1 | Minimize the maximum component of a sum of vectors -- (algorithms optimization linear programming)
<p>I'd like to learn something about this optimization problem: given non-negative integers $a_{i,j,k}$,
find a function $f$ minimizing the expression</p>

<p>$$\max_k \sum_i a_{i,f(i),k}$$</p>

<p>An example using a different formulation might make it clearer:
You're given a set of sets of vectors like</p>

<pre><code>{
 {(3, 0, 0, 0, 0), (1, 0, 2, 0, 0)},
 {(0, 1, 0, 0, 0), (0, 0, 0, 1, 0)},
 {(0, 0, 0, 2, 0), (0, 1, 0, 1, 0)}
}
</code></pre>

<p>Choose one vector from each set, so that the maximum component of their sum is minimal.
For example, you may choose</p>

<pre><code>(1, 0, 2, 0, 0) + (0, 1, 0, 0, 0) + (0, 1, 0, 1, 0) = (1, 1, 2, 1, 0)
</code></pre>

<p>with the maximum component equal to 2, which is clearly optimal here.</p>

<p>I'm curious if this is a well-known problem and what problem-specific approximate solution methods are available. It should be fast and easy to program (no <a href="http://en.wikipedia.org/wiki/Linear_programming#Integer_unknowns" rel="nofollow">ILP</a> solver, etc.). No exact solution is needed as it's only an approximation of the real problem.</p>

<hr>

<p>I see that I should have added some details about the problem instances I'm interested in:</p>

<ul>
<li>$i \in \{0, 1, \ldots, 63\}$, i.e., there are always 64 rows (when written as in the above example).</li>
<li>$j \in \{0, 1\}$, i.e., there are only 2 vectors per row.</li>
<li>$k \in \{0, 1, \ldots, N-1\}$ where $N$ (the vector length) is between 10 and 1000.</li>
</ul>

<p>Moreover, on each row the sum of the elements of all vectors is the same, i.e.,</p>

<p>$$\forall i, j, j':\quad \sum_k a_{i,j,k} = \sum_k a_{i,j',k}$$</p>

<p>and the sum of the elements of the sum vector is less than its length, i.e.,</p>

<p>$$\sum_k \sum_i a_{i,f(i),k} < N$$</p>
 | habedi/stack-exchange-dataset |
2,745 | Building a finite state transducer | <p>I know it's possible to build a Finite State Transducer for converting numbers from base 2 to base 4 or 8 or other powers of 2 (translating from base N to base N^M is easy). However, I've never seen an FST that can convert numbers from base 1 to base 2 or vice versa. Can an FST even do this? If so, can you please give some hints on building such an FST?</p>
 | automata finite automata | 1 | Building a finite state transducer -- (automata finite automata)
<p>I know it's possible to build a Finite State Transducer for converting numbers from base 2 to base 4 or 8 or other powers of 2 (translating from base N to base N^M is easy). However, I've never seen an FST that can convert numbers from base 1 to base 2 or vice versa. Can an FST even do this? If so, can you please give some hints on building such an FST?</p>
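<p>For the easy power-of-two case, here is a Python sketch of the transducer idea (reading the most significant bit first): the state is the pending high bit of the current pair, and one base-4 digit is emitted per completed pair. The base-1 direction is exactly the part that remains unclear to me:</p>

```python
def fst_bin_to_quat(bits):
    """Two-state transducer sketch for base 2 -> base 4 (MSB first).
    State: the pending high bit of the current pair, or None.
    The input is padded to even length so every pair completes."""
    if len(bits) % 2:
        bits = '0' + bits
    state, out = None, []
    for b in bits:
        if state is None:
            state = b                                 # remember bit, emit nothing
        else:
            out.append(str(2 * int(state) + int(b)))  # emit one base-4 digit
            state = None
    return ''.join(out)

# e.g. fst_bin_to_quat('1110') == '32'  (binary 1110 = 14 = 3*4 + 2)
```
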
 | habedi/stack-exchange-dataset |
2,749 | Finding Feature Representation Such That Two Samples Are Similar in Feature Space | <p>Consider one specific useful function of our human brain: the abstraction of objects. Take the example of two pictures: if we are told the pictures are similar, we actually draw conclusions about the aspects in which they are close to each other.</p>

<p>I'm considering whether a machine can have the ability described. More precisely, is it possible to find and select a set of feature representations of two samples (e.g. image, sound) such that under those representations, the samples are similar with respect to a metric, say a weighted Euclidean norm?</p>
 | machine learning artificial intelligence neural networks | 1 | Finding Feature Representation Such That Two Samples Are Similar in Feature Space -- (machine learning artificial intelligence neural networks)
<p>Consider one specific useful function of our human brain: the abstraction of objects. Take the example of two pictures: if we are told the pictures are similar, we actually draw conclusions about the aspects in which they are close to each other.</p>

<p>I'm considering whether a machine can have the ability described. More precisely, is it possible to find and select a set of feature representations of two samples (e.g. image, sound) such that under those representations, the samples are similar with respect to a metric, say a weighted Euclidean norm?</p>
 | habedi/stack-exchange-dataset |
2,752 | Weak hashing function for memorable IPv6 addresses | <p>IPv6 addresses in the form of <code>862A:7373:3386:BF1F:8D77:D3D2:220F:D7E0</code> are much harder to memorize or even transcribe than the 4 octets of IPv4. </p>

<p>There <a href="http://blog.jgc.org/2011/07/pronounceable-ipv6-addresses-wpa2-psk.html?m=1" rel="nofollow">have</a> <a href="http://www.halfbakery.com/idea/IPv6_20Worded_20Addresses#1260513928" rel="nofollow">been</a> attempts to mitigate this, making IPv6 addresses somehow more memorable.</p>

<p>Is there an intentionally-weak hashing function which could be reversed to find that the phrase, say, <a href="http://en.wikipedia.org/wiki/Dissociated_press" rel="nofollow">"This is relatively benign and easy to spot if the phrase is bent so as to be not worth paying"</a> would hash to a target IPv6 address? The hash would, of course, have many colliding inputs to choose from, and a potentially more memorable sentence, such as this example phrase, could be automatically offered.</p>

<p>I guess there are two parts: First a weak hash with good distribution in both directions. Second is an algorithm for selecting memorable phrases from among the many collisions (short, consisting of words from a specified language, perhaps even following a simplified grammar).</p>

<p>Although the hash function would need to be weak, I don't doubt that the effort is still significant; however, once the phrase is known, computing the hash to the target address is very quick.</p>

<p><strong>EDIT</strong></p>

<p>I found this related idea, <a href="https://en.wikipedia.org/wiki/Piphilology" rel="nofollow">Piphilology</a>, for memorizing some digits of π:</p>

<blockquote>
 <p>How I wish a drink, alcoholic of course, after the heavy lectures involving quantum mechanics!</p>
</blockquote>
 | cryptography computer networks hash user interface | 1 | Weak hashing function for memorable IPv6 addresses -- (cryptography computer networks hash user interface)
<p>IPv6 addresses in the form of <code>862A:7373:3386:BF1F:8D77:D3D2:220F:D7E0</code> are much harder to memorize or even transcribe than the 4 octets of IPv4. </p>

<p>There <a href="http://blog.jgc.org/2011/07/pronounceable-ipv6-addresses-wpa2-psk.html?m=1" rel="nofollow">have</a> <a href="http://www.halfbakery.com/idea/IPv6_20Worded_20Addresses#1260513928" rel="nofollow">been</a> attempts to mitigate this, making IPv6 addresses somehow more memorable.</p>

<p>Is there an intentionally-weak hashing function which could be reversed to find that the phrase, say, <a href="http://en.wikipedia.org/wiki/Dissociated_press" rel="nofollow">"This is relatively benign and easy to spot if the phrase is bent so as to be not worth paying"</a> would hash to a target IPv6 address? The hash would, of course, have many colliding inputs to choose from, and a potentially more memorable sentence, such as this example phrase, could be automatically offered.</p>

<p>I guess there are two parts: First a weak hash with good distribution in both directions. Second is an algorithm for selecting memorable phrases from among the many collisions (short, consisting of words from a specified language, perhaps even following a simplified grammar).</p>

<p>Although the hash function would need to be weak, I don't doubt that the effort is still significant; however, once the phrase is known, computing the hash to the target address is very quick.</p>

<p><strong>EDIT</strong></p>

<p>I found this related idea, <a href="https://en.wikipedia.org/wiki/Piphilology" rel="nofollow">Piphilology</a>, for memorizing some digits of π:</p>

<blockquote>
 <p>How I wish a drink, alcoholic of course, after the heavy lectures involving quantum mechanics!</p>
</blockquote>
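<p>Along the same lines, the lossless cousin of what I'm after is easy: chunk the 128 bits and map each chunk to a word. The sketch below uses a made-up 8-word demo list (3 bits per word, so 43 words per address); a realistic scheme would use a ~2048-word list, i.e. 11 bits per word and 12 words per address. The genuinely hard part, which this does not touch, is searching the collision space of a weak hash for a grammatical phrase:</p>

```python
import ipaddress

# Made-up demo wordlist: 8 words -> 3 bits per word -> 43 words per address.
# A realistic list would have ~2048 words (11 bits/word, 12 words/address).
WORDS = ["red", "ox", "sun", "map", "tea", "fog", "ivy", "elm"]
BITS = 3

def addr_to_words(addr):
    """Encode a 128-bit IPv6 address as words, least-significant chunk first."""
    n = int(ipaddress.IPv6Address(addr))
    chunks = (128 + BITS - 1) // BITS
    words = []
    for _ in range(chunks):
        words.append(WORDS[n & (len(WORDS) - 1)])
        n >>= BITS
    return words

def words_to_addr(words):
    """Invert addr_to_words, returning the compressed textual address."""
    n = 0
    for w in reversed(words):
        n = (n << BITS) | WORDS.index(w)
    return str(ipaddress.IPv6Address(n))
```

<p>The round trip is exact, e.g. <code>words_to_addr(addr_to_words("2001:db8::1"))</code> gives back <code>"2001:db8::1"</code>; the word sequence is long and arbitrary, which is precisely why I'm asking about collision-rich weak hashes instead.</p>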
 | habedi/stack-exchange-dataset |
2,762 | What is the average height of a binary tree? | <p>Is there any formal definition of the average height of a binary tree?</p>

<p>I have a tutorial question about finding the average height of a binary tree using the following two methods:</p>

<ol>
<li><p>The natural solution might be to take the average length of all possible
paths from the root to a leaf, that is</p>

<p>$\qquad \displaystyle \operatorname{avh}_1(T) = \frac{1}{\text{# leaves in } T} \cdot \sum_{v \text{ leaf of } T} \operatorname{depth}(v)$.</p></li>
<li><p>Another option is to define it recursively, that is the average height for a node is the average over the average heights of the subtrees plus
one, that is </p>

<p>$\qquad \displaystyle \operatorname{avh}_2(N(l,r)) = \frac{\operatorname{avh}_2(l) + \operatorname{avh}_2(r)}{2} + 1$</p>

<p>with $\operatorname{avh}_2(l) = 1$ for leaves $l$ and $\operatorname{avh}_2(\_) = 0$ for empty slots.</p></li>
</ol>

<p>Based on my current understanding, for example the average height of the tree $T$</p>

<pre><code> 1 
 / \
 2 3
 /
4
</code></pre>

<p>is $\operatorname{avh}_2(T) = 1.25$ by the second method, that is using recursion.</p>

<p>However, I still don't quite understand how to do the first one. $\operatorname{avh}_1(T) = (1+2)/2=1.5$ is not correct.</p>
 | graphs terminology combinatorics binary trees | 1 | What is the average height of a binary tree? -- (graphs terminology combinatorics binary trees)
<p>Is there any formal definition of the average height of a binary tree?</p>

<p>I have a tutorial question about finding the average height of a binary tree using the following two methods:</p>

<ol>
<li><p>The natural solution might be to take the average length of all possible
paths from the root to a leaf, that is</p>

<p>$\qquad \displaystyle \operatorname{avh}_1(T) = \frac{1}{\text{# leaves in } T} \cdot \sum_{v \text{ leaf of } T} \operatorname{depth}(v)$.</p></li>
<li><p>Another option is to define it recursively, that is the average height for a node is the average over the average heights of the subtrees plus
one, that is </p>

<p>$\qquad \displaystyle \operatorname{avh}_2(N(l,r)) = \frac{\operatorname{avh}_2(l) + \operatorname{avh}_2(r)}{2} + 1$</p>

<p>with $\operatorname{avh}_2(l) = 1$ for leaves $l$ and $\operatorname{avh}_2(\_) = 0$ for empty slots.</p></li>
</ol>

<p>Based on my current understanding, for example the average height of the tree $T$</p>

<pre><code> 1 
 / \
 2 3
 /
4
</code></pre>

<p>is $\operatorname{avh}_2(T) = 1.25$ by the second method, that is using recursion.</p>

<p>However, I still don't quite understand how to do the first one. $\operatorname{avh}_1(T) = (1+2)/2=1.5$ is not correct.</p>
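<p>Implementing both definitions literally in Python (trees as nested <code>(value, left, right)</code> tuples, <code>None</code> for empty slots), I get $\operatorname{avh}_1(T) = 1.5$ and $\operatorname{avh}_2(T) = 2.25$ for the example tree, not $1.25$, so either my value above or my reading of the base cases must be off:</p>

```python
# The example tree: 1 with children 2 and 3; 2 has a single left child 4.
T = (1, (2, (4, None, None), None), (3, None, None))

def leaf_depths(t, d=0):
    """Depths of all leaves (nodes with no children)."""
    if t is None:
        return []
    _, l, r = t
    if l is None and r is None:
        return [d]
    return leaf_depths(l, d + 1) + leaf_depths(r, d + 1)

def avh1(t):
    ds = leaf_depths(t)
    return sum(ds) / len(ds)           # average root-to-leaf depth

def avh2(t):
    if t is None:
        return 0                       # empty slot
    _, l, r = t
    if l is None and r is None:
        return 1                       # leaf
    return (avh2(l) + avh2(r)) / 2 + 1

# avh1(T) == 1.5 and avh2(T) == 2.25 for the tree above
```
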
 | habedi/stack-exchange-dataset |
2,764 | Canonical reference on agent-based computing | <p>I am interested in exploring the world of <a href="http://en.wikipedia.org/wiki/BDI_software_agent" rel="nofollow">BDI agents</a> (software agents that possess "beliefs, desires, intentions", essentially the agent has knowledge of the world, a set of motivations, and carries out certain plans).</p>

<p>I recently read A Canonical Agent Model for Healthcare Applications [1], which left me with a lot of questions, particularly about the specialization of different agent models for particular applications. </p>

<p>The particular modeling language used in their examples was ProForma, and I understand that this is more for the abstract specification of an agent, and that something like <a href="http://en.wikipedia.org/wiki/3APL" rel="nofollow">3APL</a> can be used as an actual programming language in this regard, with syntax like:</p>

<pre><code>BELIEFBASE {
 status(standby).
 at(0,0).
 location(r1,2,4).
 location(r5,6,1).
 dirty(r1).
 dirty(r5).
}
</code></pre>

<p>My question is, all of these systems clearly reflect years of cumulative effort, and rather than jumping into the deep end, I'd like to ease into this world of research a bit more slowly. Is there a canonical reference in this area that might provide a more general overview of all of these levels of organization, and of where the abstractions stop and the implementations begin?</p>

<hr>

<ol>
<li>Fox J., Glasspool, D., Modgil, S. <a href="http://www.sdela.dds.nl/entityresearch/fox_glasspool_modgil.pdf" rel="nofollow">A Canonical Agent Model for Healthcare Applications</a>. <em>IEEE Intelligent Systems, 21</em>(6), 21-28, 2006.</li>
</ol>
 | reference request human computing agent based computing | 1 | Canonical reference on agent-based computing -- (reference request human computing agent based computing)
<p>I am interested in exploring the world of <a href="http://en.wikipedia.org/wiki/BDI_software_agent" rel="nofollow">BDI agents</a> (software agents that possess "beliefs, desires, intentions", essentially the agent has knowledge of the world, a set of motivations, and carries out certain plans).</p>

<p>I recently read A Canonical Agent Model for Healthcare Applications [1], which left me with a lot of questions, particularly about the specialization of different agent models for particular applications. </p>

<p>The particular modeling language used in their examples was ProForma, and I understand that this is more for the abstract specification of an agent, and that something like <a href="http://en.wikipedia.org/wiki/3APL" rel="nofollow">3APL</a> can be used as an actual programming language in this regard, with syntax like:</p>

<pre><code>BELIEFBASE {
 status(standby).
 at(0,0).
 location(r1,2,4).
 location(r5,6,1).
 dirty(r1).
 dirty(r5).
}
</code></pre>

<p>My question is, all of these systems clearly reflect years of cumulative effort, and rather than jumping into the deep end, I'd like to ease into this world of research a bit more slowly. Is there a canonical reference in this area that might provide a more general overview of all of these levels of organization, and of where the abstractions stop and the implementations begin?</p>
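<p>To make the shape concrete for myself, here is a toy sense-deliberate-act loop in Python. This is only the generic BDI skeleton with made-up plan names, not 3APL or ProForma semantics:</p>

```python
# Toy BDI skeleton (hypothetical plan names, NOT 3APL/ProForma semantics):
# beliefs -> applicable plans -> selected intention -> action -> belief update.
beliefs = {
    "status": "standby",
    "at": (0, 0),
    "dirty": {"r1", "r5"},
    "location": {"r1": (2, 4), "r5": (6, 1)},
}
desires = ["clean_all"]

def applicable_plans(beliefs, desires):
    """Plans whose preconditions hold under the current beliefs."""
    plans = []
    if "clean_all" in desires:
        for room in sorted(beliefs["dirty"]):
            plans.append(("goto_and_clean", room))
    return plans

def step(beliefs, desires):
    """One deliberation cycle; returns the executed intention or None."""
    plans = applicable_plans(beliefs, desires)
    if not plans:
        return None
    intention = plans[0]                        # trivial selection strategy
    _, room = intention
    beliefs["at"] = beliefs["location"][room]   # act: move there
    beliefs["dirty"].discard(room)              # act + belief revision
    return intention

while step(beliefs, desires):
    pass                                        # runs until nothing is dirty
```

<p>The interesting design questions (plan libraries, commitment strategies, when to reconsider intentions) are exactly what the frameworks above formalize, which is why I'm after an overview reference rather than a toy.</p>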

<hr>

<ol>
<li>Fox J., Glasspool, D., Modgil, S. <a href="http://www.sdela.dds.nl/entityresearch/fox_glasspool_modgil.pdf" rel="nofollow">A Canonical Agent Model for Healthcare Applications</a>. <em>IEEE Intelligent Systems, 21</em>(6), 21-28, 2006.</li>
</ol>
 | habedi/stack-exchange-dataset |