2,770
Kolmogorov complexity of a decision problem
<p>What's the definition of Kolmogorov complexity for a decision problem? For example, how does one define the length of the shortest program that solves the 3SAT problem? Is it the "smallest" Turing machine that recognizes the 3SAT language?</p>&#xA;
computability terminology decision problem kolmogorov complexity 3 sat
1
habedi/stack-exchange-dataset
2,771
Assign m agents to N points by minimizing the total distance
<p>Suppose we have $N$ fixed points (a set $S$ with $|S| = N$) in the plane, and $m$ agents with fixed, known initial positions ($m &lt; N$) outside $S$. We want to move the agents so that, in the final configuration, they occupy distinct points of $S$. How can we achieve this while minimizing the total distance covered by the agents? </p>&#xA;
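One standard reading of this task is the minimum-cost assignment problem on a complete bipartite graph between agents and points, with Euclidean distances as edge weights; it is solvable in polynomial time, e.g. by the Hungarian algorithm. As a sketch only (not an efficient implementation), a brute-force Python version that tries every injective assignment, feasible just for tiny instances:

```python
from itertools import permutations
from math import dist, inf

def best_assignment(agents, points):
    """Try every injective assignment of agents to distinct points and
    return (targets, total_distance) minimizing the summed Euclidean
    distance. Exponential -- illustration only; use min-cost bipartite
    matching (e.g. the Hungarian algorithm) for real instances."""
    best, best_cost = None, inf
    for targets in permutations(points, len(agents)):
        cost = sum(dist(a, t) for a, t in zip(agents, targets))
        if cost < best_cost:
            best, best_cost = list(targets), cost
    return best, best_cost
```

For example, two agents at (0, 0) and (10, 0) with points {(1, 0), (9, 0), (5, 5)} are sent to (1, 0) and (9, 0) for a total distance of 2.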
algorithms graphs optimization
1
2,774
How to convert a non-embedding context free grammar to regular grammar?
<p>Please note that I am aware of the undecidability of converting a context-free grammar to a regular grammar. But given the non-embedding property of the input context-free grammar, is there an algorithm to convert it to a regular grammar, or directly to a DFA?</p>&#xA;
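Not an answer for the general non-self-embedding case, but the easy direction can at least be made concrete: a strictly right-linear grammar (every rule of the form $A \to aB$ or $A \to a$) is already an NFA with nonterminals as states. A hedged Python sketch, using a made-up rule encoding, that simulates this NFA on a word:

```python
def nfa_accepts(rules, start, word):
    """rules: {A: [(a, B), ...]} for productions A -> aB, with B = None
    standing for a terminal-only production A -> a (None acts as the
    single accepting state). Simulates the NFA read directly off a
    strictly right-linear grammar."""
    states = {start}
    for ch in word:
        nxt = set()
        for state in states:
            for terminal, target in rules.get(state, []):
                if terminal == ch:
                    nxt.add(target)  # target None = accept after this char
        states = nxt
    return None in states
```

Here `rules = {"S": [("a", "S"), ("b", None)]}` encodes $S \to aS \mid b$, i.e. the language $a^*b$.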
formal languages formal grammars context free
1
2,777
Determining how similar a given string is to a collection of strings
<p>I'm not sure if this question belongs here, and I apologize if not. What I am looking to do is to develop a programmatic way in which I can probabilistically determine whether a given string "belongs" in a bag of strings. For example, if I have a bag of 10,000 US city names, and then I have the string "Philadelphia", I would like some quantitative measure of how likely 'Philadelphia' is a US city name based on the US city names I already know. While I know I won't be able to separate real city names from fake city names in this context, I would at least expect to have strings such as "123.75" and "The quick red fox jumped over the lazy brown dogs" excluded given some threshold.</p>&#xA;&#xA;<p>To get started, I've looked at Levenshtein distance and poked around a bit on how that's been applied to problems at least somewhat similar to the one I'm trying to solve. One interesting application I found was plagiarism detection, with one paper describing how Levenshtein distance was used with a modified Smith-Waterman algorithm to score papers based on how likely they were a plagiarized version of a given base paper. My question is whether anyone could point me in the right direction with other established algorithms or methodologies that might help me. I get the feeling that this may be a problem someone in the past has tried to solve, but so far my Google-fu has failed me.</p>&#xA;
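One simple baseline in this direction is a character n-gram language model: estimate bigram frequencies from the bag of known strings and score a candidate by its average smoothed log-probability. A hedged Python sketch; the smoothing constant, padding symbols, and vocabulary estimate are arbitrary choices:

```python
from collections import Counter
from math import log

def train_bigrams(names):
    """Count character bigrams over the bag, with ^/$ as boundary marks."""
    counts = Counter()
    for name in names:
        padded = "^" + name.lower() + "$"
        counts.update(zip(padded, padded[1:]))
    return counts

def score(candidate, counts, alpha=1.0):
    """Average add-alpha-smoothed log-probability of the candidate's
    bigrams; higher = more similar to the training bag."""
    total = sum(counts.values())
    vocab = len(counts) + 1  # crude vocabulary-size estimate
    padded = "^" + candidate.lower() + "$"
    bigrams = list(zip(padded, padded[1:]))
    return sum(
        log((counts[bg] + alpha) / (total + alpha * vocab)) for bg in bigrams
    ) / len(bigrams)
```

Trained on a handful of city names, a name-like string scores strictly higher than "123.75", whose bigrams are all unseen; a fixed threshold on the score then gives the desired filter.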
algorithms reference request string metrics
1
2,783
A polynomial reduction from any NP-complete problem to bounded PCP
<p>Textbooks everywhere assume that the <a href="https://en.wikipedia.org/wiki/Post_correspondence_problem"><em>Bounded</em> Post Correspondence Problem</a> is NP-complete (no more than $N$ indexes allowed with repetitions). However, nowhere is one shown a simple (as in, something that an undergrad can understand) polynomial time reduction from another NP-complete problem.</p>&#xA;&#xA;<p>Yet every reduction I can think of is exponential (by $N$ or by the size of the series) in run-time. Perhaps it can be shown that it is reducible to SAT?</p>&#xA;
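The NP membership half can at least be made concrete: a solution is a sequence of at most $N$ indices, and checking one candidate sequence (the certificate) is just concatenation and comparison, i.e. polynomial in the instance size. A brute-force Python sketch over all candidate certificates (exponential overall, for illustration only):

```python
from itertools import product

def bounded_pcp(pairs, n_max):
    """Search every index sequence of length <= n_max (repetition allowed)
    for a Post correspondence match. The search is exponential, but
    checking a single candidate -- the NP certificate -- is just string
    concatenation and comparison, i.e. polynomial time."""
    for length in range(1, n_max + 1):
        for seq in product(range(len(pairs)), repeat=length):
            top = "".join(pairs[i][0] for i in seq)
            bottom = "".join(pairs[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None
```

On the classic instance from the linked Wikipedia article, $(a, baa), (ab, aa), (bba, bb)$, no match exists within $N = 2$ but one is found within $N = 4$.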
complexity theory np complete reductions
1
2,784
Message receipt verification in a cluster
<p>At my current project I had a network problem come up for which I could not find a solution. In a peer-to-peer network I needed to send an action to all peers, and each peer was to act on it only if it could verify that all other peers would also act on it.</p>&#xA;&#xA;<p>That is, given a network of peers $P = \{ P_1, \dots, P_n \}$, we wish to send, from some source peer $P_s$, a message to all other peers. This message contains an action which must be performed. The peer should perform this action if and only if every other peer will perform the action. That is, it performs the action if it can verify that all other peers will also have receipt of the action and can perform the same verification.</p>&#xA;&#xA;<p>The problem is subject to these conditions:</p>&#xA;&#xA;<ol>&#xA;<li>There is no implicit message delivery guarantee: if $P_x$ sends a message to $P_y$ there is no way for $P_x$ to know if $P_y$ gets the message. (Of course $P_y$ can send a receipt, but that receipt is subject to the same constraint.)</li>&#xA;<li>Additional messages with any payload may be created.</li>&#xA;<li>There is no total ordering on the messages received by peers. Messages can arrive in a different time-order than the one in which they were sent. This time-order may be unique per peer. <em>Two messages sent in order from $P_x$ to $P_y$ are very unlikely to arrive out of order.</em></li>&#xA;<li>Messages can arrive at any point in the future (so not only are they unordered, they can be indefinitely delayed). A message cannot inherently be detected as lost. <em>Most messages will be delivered quickly, or truly lost.</em></li>&#xA;<li>Each peer has a synchronized clock. It is accurate enough for scheduling an action and for approximately measuring transmission delays. It is however not accurate enough to establish a total ordering on messages using timestamps.</li>&#xA;</ol>&#xA;&#xA;<p>I was not able to find a solution.
I'm interested in a <em>guarantee</em> and not simply a high probability of being correct (which can be achieved simply by repeatedly sending confirmations from peer to peer and rejections upon any likely loss). My stumbling block is the inability to verify that any particular message actually arrived. So even if $P_x$ determines there is an error, there is no guaranteed way to tell the other peers about it.</p>&#xA;&#xA;<p>A negative confirmation is also acceptable. I have a suspicion that a guarantee cannot actually be achieved, only an arbitrarily high probability.</p>&#xA;
algorithms distributed systems computer networks fault tolerance
1
2,785
Where can I find the data of the computer experiments in the book "Neural Networks and Learning Machines"?
<p>The book <a href="http://rads.stackoverflow.com/amzn/click/0131471392" rel="nofollow">"Neural Networks and Learning Machines"</a> by Simon Haykin has many computer experiments to which many exercises are related. But there seems to be no data for these experiments available online. Where can I find them?</p>&#xA;
reference request data sets
1
2,791
When did $LR(k)$ acquire the meaning "left-to-right scan, rightmost derivation?"
<p>According to <a href="http://en.wikipedia.org/wiki/LR_parser#LR_and_Other_Kinds_of_Parsers">the Wikipedia article</a>, the L in $LR(k)$ means "left-to-right scan", and the "R" means "rightmost derivation." However, in <a href="http://classes.engr.oregonstate.edu/eecs/winter2012/cs480/assignments/Knuth-1965-TranslationofLanguages.pdf">Knuth's original paper on $LR(k)$ grammars</a>, he defines $LR(k)$ (on page 610) as a language that is "translatable from left to right with bound $k$."</p>&#xA;&#xA;<p>I am guessing that this new terminology was chosen to complement $LL(k)$ parsing's "left-to-right scan, leftmost derivation." That said, I don't know when the terminology changed meaning.</p>&#xA;&#xA;<p>Does anyone know where the newer acronym for $LR(k)$ comes from?</p>&#xA;
formal languages reference request terminology formal grammars parsers
1
2,792
Removing Left Recursion from Context-Free Grammars - Ordering of nonterminals
<p>I have recently implemented Paull's algorithm for removing left-recursion from context-free grammars:</p>&#xA;<blockquote>&#xA;<p>Assign an ordering <span class="math-container">$A_1, \dots, A_n$</span> to the nonterminals of the grammar.</p>&#xA;<p>for <span class="math-container">$i := 1$</span> to <span class="math-container">$n$</span> do begin<br />&#xA;<span class="math-container">$\quad$</span> for <span class="math-container">$j:=1$</span> to <span class="math-container">$i-1$</span> do begin<br />&#xA;<span class="math-container">$\quad\quad$</span> for each production of the form <span class="math-container">$A_i \to A_j\alpha$</span> do begin<br />&#xA;<span class="math-container">$\quad\quad\quad$</span> remove <span class="math-container">$A_i \to A_j\alpha$</span> from the grammar<br />&#xA;<span class="math-container">$\quad\quad\quad$</span> for each production of the form <span class="math-container">$A_j \to \beta$</span> do begin<br />&#xA;<span class="math-container">$\quad\quad\quad\quad$</span> add <span class="math-container">$A_i \to \beta\alpha$</span> to the grammar<br />&#xA;<span class="math-container">$\quad\quad\quad$</span> end<br />&#xA;<span class="math-container">$\quad\quad$</span> end<br />&#xA;<span class="math-container">$\quad$</span> end<br />&#xA;<span class="math-container">$\quad$</span> transform the <span class="math-container">$A_i$</span>-productions to eliminate direct left recursion<br />&#xA;end</p>&#xA;</blockquote>&#xA;<p>According to <a href="http://research.microsoft.com/pubs/68869/naacl2k-proc-rev.pdf" rel="nofollow noreferrer" title="Removing Left Recursion from Context-Free Grammars">this document</a>, the efficiency of the algorithm crucially depends on the ordering of the nonterminals chosen in the beginning; the paper discusses this issue in detail and suggests optimisations.</p>&#xA;<p>Some notation:</p>&#xA;<blockquote>&#xA;<p>We will say that a symbol <span class="math-container">$X$</span> is a
<em>direct left corner</em> of&#xA;a nonterminal <span class="math-container">$A$</span>, if there is an <span class="math-container">$A$</span>-production with <span class="math-container">$X$</span> as the left-most symbol on the right-hand side. We define the <em>left-corner relation</em> to be the reflexive transitive closure of the direct-left-corner relation, and we define the <em>proper-left-corner relation</em> to be the transitive closure of&#xA;the direct-left-corner relation. A nonterminal is <em>left recursive</em> if it is a proper left corner of itself; a nonterminal is <em>directly left recursive</em> if it is a direct left corner of itself; and a nonterminal is <em>indirectly left recursive</em> if it is left recursive, but not directly left recursive.</p>&#xA;</blockquote>&#xA;<p>Here is what the authors propose:</p>&#xA;<blockquote>&#xA;<p>In the inner loop of Paull’s algorithm, for nonterminals <span class="math-container">$A_i$</span> and <span class="math-container">$A_j$</span>, such that <span class="math-container">$i &gt; j$</span> and <span class="math-container">$A_j$</span> is a direct left corner of <span class="math-container">$A_i$</span>, we replace all occurrences of <span class="math-container">$A_j$</span> as a direct left corner of <span class="math-container">$A_i$</span> with all possible expansions of <span class="math-container">$A_j$</span>.</p>&#xA;<p>This only contributes to elimination of left recursion from the grammar if <span class="math-container">$A_i$</span> is a left-recursive nonterminal, and <span class="math-container">$A_j$</span> lies on a path that makes <span class="math-container">$A_i$</span> left recursive; that is, if <span class="math-container">$A_i$</span> is a left corner of <span class="math-container">$A_j$</span> (in addition to <span class="math-container">$A_j$</span> being a left corner of <span class="math-container">$A_i$</span>).</p>&#xA;<p>We could eliminate replacements that are useless in 
removing left recursion if we could order the nonterminals of the grammar so that, if <span class="math-container">$i &gt; j$</span> and <span class="math-container">$A_j$</span> is a direct left corner of <span class="math-container">$A_i$</span>, then <span class="math-container">$A_i$</span> is also a left corner of <span class="math-container">$A_j$</span>.</p>&#xA;<p>We can achieve this by ordering the nonterminals in decreasing order of the number of distinct left corners they have.</p>&#xA;<p>Since the left-corner relation is transitive, if C is a direct left corner of B, every left corner of C is also a left corner of B.</p>&#xA;<p>In addition, since we defined the left-corner relation to be reflexive, B is a left corner of itself.</p>&#xA;<p>Hence, if C is a direct left corner of B, it must follow B in decreasing order of number of distinct left corners, unless B is a left corner of C.</p>&#xA;</blockquote>&#xA;<p>All I want is to know how to order the nonterminals in the beginning, but I don't get it from the paper. Can someone explain it in a simpler way? Pseudocode would help me to understand it better.</p>&#xA;
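As far as I can tell, the proposal boils down to: compute each nonterminal's set of left corners (the reflexive-transitive closure of the direct-left-corner relation) and sort nonterminals by decreasing number of distinct left corners. A Python sketch, assuming a grammar encoded as {nonterminal: list of right-hand sides}:

```python
def left_corners(grammar):
    """Reflexive-transitive closure of the direct-left-corner relation.
    grammar: {nonterminal: [rhs, ...]}, each rhs a list of symbols."""
    direct = {a: {rhs[0] for rhs in rules if rhs} for a, rules in grammar.items()}
    corners = {a: {a} | direct[a] for a in grammar}  # reflexive + direct
    changed = True
    while changed:  # fixpoint iteration for transitivity
        changed = False
        for a in grammar:
            for x in list(corners[a]):
                extra = corners.get(x, set()) - corners[a]  # terminals: no entry
                if extra:
                    corners[a] |= extra
                    changed = True
    return corners

def paull_order(grammar):
    """Order nonterminals by decreasing number of distinct left corners,
    the initial ordering the linked paper suggests for Paull's algorithm."""
    corners = left_corners(grammar)
    return sorted(grammar, key=lambda a: len(corners[a]), reverse=True)
```

For $S \to Ab$, $A \to Ba$, $B \to c$, the left-corner sets are $\{S,A,B,c\}$, $\{A,B,c\}$, $\{B,c\}$, so the ordering is $S, A, B$.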
algorithms context free formal grammars efficiency left recursion
1
2,811
Perplexed by Rice's theorem
<p><strong>Summary:</strong> According to Rice's theorem, everything is impossible. And yet, I <em>do</em> this supposedly impossible stuff <em>all the time!</em></p>&#xA;&#xA;<hr>&#xA;&#xA;<p>Of course, Rice's theorem doesn't simply say "everything is impossible". It says something rather more specific: "Every property of a computer program is non-computable."</p>&#xA;&#xA;<p>(If you want to split hairs, every "non-trivial" property. That is, properties which <em>all</em> programs possess or <em>no</em> programs possess are trivially computable. But any other property is non-computable.)</p>&#xA;&#xA;<p>That's what the theorem says, or appears to say. And presumably a great number of very smart people have carefully verified the correctness of this theorem. But it seems to completely defy logic! There are <em>numerous</em> properties of programs which are <em>trivial</em> to compute! For example:</p>&#xA;&#xA;<ul>&#xA;<li><p>How many steps does a program execute before halting? To decide whether this number is finite or infinite is precisely the Halting Problem, which is non-computable. To decide whether this number is greater or less than some finite $n$ is <em>trivial!</em> Just run the program for up to $n$ steps and see if it halts or not. Easy!</p></li>&#xA;<li><p>Similarly, does the program use more or less than $n$ units of memory in its first $m$ execution steps? Trivially computable.</p></li>&#xA;<li><p>Does the program text mention a variable named $k$? A trivial textual analysis will reveal the answer.</p></li>&#xA;<li><p>Does the program invoke command $\sigma$? Again, scan the program text looking for that command name.</p></li>&#xA;</ul>&#xA;&#xA;<p>I can see plenty of properties that <em>do</em> look non-computable as well; e.g., how many additions does a complete run of the program perform? Well, that's nearly the same as asking how many <em>steps</em> the program performs, which is virtually the Halting Problem.
But it looks like there are boat-loads of program properties which are really, really <em>easy</em> to compute. And yet, Rice's theorem insists that none of them are computable.</p>&#xA;&#xA;<p>What am I missing here?</p>&#xA;
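The bulleted examples are decidable precisely because they are properties of the program's text or of a bounded prefix of its run, not of the function the program computes; Rice's theorem covers only semantic properties. The first bullet (run for at most $n$ steps) can be sketched in Python using a trace hook; counting Python trace events as "steps" is an arbitrary granularity choice, not part of the theorem:

```python
import sys

class StepLimit(Exception):
    """Raised by the tracer once the step budget is exhausted."""

def halts_within(func, max_steps):
    """Decidable cousin of the halting problem: does func() finish
    within max_steps traced events?"""
    steps = 0

    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > max_steps:
            raise StepLimit  # abort the traced code past the budget
        return tracer  # keep tracing line events in this frame

    sys.settrace(tracer)
    try:
        func()
        return True
    except StepLimit:
        return False
    finally:
        sys.settrace(None)  # always restore normal execution
```

A terminating computation returns `True`; an infinite loop is cut off at the budget and returns `False`, with no halting-problem magic involved.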
terminology computability undecidability rice theorem
1
2,814
Sums of Landau terms revisited
<p>I asked a (seed) question about sums of Landau terms <a href="https://cs.stackexchange.com/questions/366/what-goes-wrong-with-sums-of-landau-terms">before</a>, trying to gauge the dangers of abusing asymptotics notation in arithmetics, with mixed success.</p>&#xA;&#xA;<p>Now, <a href="https://cs.stackexchange.com/a/2803/98">over here</a> our recurrence guru <a href="https://cs.stackexchange.com/a/2803/98">JeffE</a> does essentially this:</p>&#xA;&#xA;<p>$\qquad \displaystyle \sum_{i=1}^n \Theta\left(\frac{1}{i}\right) = \Theta(H_n)$</p>&#xA;&#xA;<p>While the end result is correct, I think this is wrong. Why? If we spell out the implied existence of constants (for the upper bound only), we have</p>&#xA;&#xA;<p>$\qquad \displaystyle \sum_{i=1}^n c_i \cdot \frac{1}{i} \leq c \cdot H_n$.</p>&#xA;&#xA;<p>Now how do we compute $c$ from $c_1, \dots, c_n$? The answer is, I believe, that we cannot: $c$ has to be a bound for all $n$, but we get <em>more</em> $c_i$ as $n$ grows. We don't know anything about them; $c_i$ may very well depend on $i$, so we cannot assume a bound: a finite $c$ may not exist.</p>&#xA;&#xA;<p>In addition, there is this subtle issue of which variable goes to infinity on the left-hand side -- $i$ or $n$? Both? If $n$ (for the sake of compatibility), what is the meaning of $\Theta(1/i)$, knowing that $1 \leq i \leq n$? Does it not only mean $\Theta(1)$? If so, we can't bound the sum better than $\Theta(n)$.</p>&#xA;&#xA;<p>So, where does that leave us? Is it a blatant mistake? A subtle one? Or is it just the usual abuse of notation, and should we not look at $=$ signs like this one out of context? Can we formulate a (rigorously) correct rule to evaluate (certain) sums of Landau terms?</p>&#xA;&#xA;<p>I think that the main question is: what is $i$? If we consider it constant (as it <em>is</em> inside the scope of the sum) we can easily build counterexamples. If it is not constant, I have no idea how to read it.</p>&#xA;
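For reference, one possible convention (hedged as just that, one convention) that makes the sum rigorous: read $\Theta(1/i)$ under the sum as a single fixed function $f$ whose constants are uniform in $i$, rather than a fresh constant $c_i$ per term. The bound then follows termwise:

```latex
% Convention: f(i) = \Theta(1/i) means there exist constants c_1, c_2 > 0
% (independent of i) with  c_1/i \le f(i) \le c_2/i  for all  i \ge 1.
% Under this reading the sum is squeezed termwise:
c_1 H_n \;=\; \sum_{i=1}^{n} \frac{c_1}{i}
        \;\le\; \sum_{i=1}^{n} f(i)
        \;\le\; \sum_{i=1}^{n} \frac{c_2}{i} \;=\; c_2 H_n ,
% and hence  \sum_{i=1}^{n} f(i) = \Theta(H_n).
```

Without that uniformity in $i$ (a separate $c_i$ per term), the objection in the question stands and no such bound follows.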
terminology asymptotics landau notation
1
Sums of Landau terms revisited -- (terminology asymptotics landau notation) <p>I asked a (seed) question about sums of Landau terms <a href="https://cs.stackexchange.com/questions/366/what-goes-wrong-with-sums-of-landau-terms">before</a>, trying to gauge the dangers of abusing asymptotic notation in arithmetic, with mixed success.</p>&#xA;&#xA;<p>Now, <a href="https://cs.stackexchange.com/a/2803/98">over here</a> our recurrence guru <a href="https://cs.stackexchange.com/a/2803/98">JeffE</a> does essentially this:</p>&#xA;&#xA;<p>$\qquad \displaystyle \sum_{i=1}^n \Theta\left(\frac{1}{i}\right) = \Theta(H_n)$</p>&#xA;&#xA;<p>While the end result is correct, I think this is wrong. Why? If we make explicit the constants whose existence is implied (only for the upper bound), we have</p>&#xA;&#xA;<p>$\qquad \displaystyle \sum_{i=1}^n c_i \cdot \frac{1}{i} \leq c \cdot H_n$.</p>&#xA;&#xA;<p>Now how do we compute $c$ from $c_1, \dots, c_n$? The answer is, I believe, that we can not: $c$ has to be a bound for all $n$ but we get <em>more</em> $c_i$ as $n$ grows. We don't know anything about them; $c_i$ may very well depend on $i$, so we can not assume a bound: a finite $c$ may not exist.</p>&#xA;&#xA;<p>In addition, there is this subtle issue of which variable goes to infinity on the left-hand side -- $i$ or $n$? Both? If $n$ (for the sake of compatibility), what is the meaning of $\Theta(1/i)$, knowing that $1 \leq i \leq n$? Does it not only mean $\Theta(1)$? If so, we can't bound the sum better than $\Theta(n)$.</p>&#xA;&#xA;<p>So, where does that leave us? Is it a blatant mistake? A subtle one? Or is it just the usual abuse of notation and we should not look at $=$ signs like this one out of context? Can we formulate a (rigorously) correct rule to evaluate (certain) sums of Landau terms?</p>&#xA;&#xA;<p>I think that the main question is: what is $i$? If we consider it constant (as it <em>is</em> inside the scope of the sum) we can easily build counterexamples. 
If it is not constant, I have no idea how to read it.</p>&#xA;
habedi/stack-exchange-dataset
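The counterexample hinted at in the closing paragraph of the question above can be made explicit. This is a sketch under the reading that $i$ is treated as a constant inside each summand:

```latex
% Take g_i(n) = 1 for every i. With i fixed, 1/i is a positive constant,
% so each summand legitimately satisfies
\[
  g_i(n) = 1 = \Theta\!\left(\frac{1}{i}\right) \qquad (i \text{ constant}),
\]
% yet the sum grows linearly rather than logarithmically:
\[
  \sum_{i=1}^{n} g_i(n) = n \neq \Theta(H_n).
\]
% A sound summation rule therefore has to demand a single constant c with
% g_i(n) <= c/i uniformly for all 1 <= i <= n.
```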
2,816
Tighter analysis of modified Borůvka's algorithm
<p><a href="http://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm">Borůvka's algorithm</a> is one of the standard algorithms for calculating the minimum spanning tree for a graph $G = (V,E)$, with $|V| = n, |E| = m$.</p>&#xA;&#xA;<p>The pseudo-code is:</p>&#xA;&#xA;<pre><code>MST T = empty tree&#xA;Begin with each vertex as a component&#xA;While number of components &gt; 1&#xA; For each component c&#xA; let e = minimum edge out of component c&#xA; if e is not in T&#xA; add e to T //merging the two components connected by e&#xA;</code></pre>&#xA;&#xA;<p>We call each iteration of the outer loop a round. In each round, the inner loop cuts the number of components at least in half. Therefore there are at most $O(\log n)$ rounds. In each round, the inner loop looks at each edge at most twice (once from each component). Therefore the running time is at most $O(m \log n)$.</p>&#xA;&#xA;<p>Now suppose after each round, we remove all the edges which only connect vertices within the same component and also remove duplicate edges between components, so that the inner loop only looks at some number of edges m' &lt; m which are the minimum weight edges which connect two previously disconnected components. </p>&#xA;&#xA;<p><strong>How does this optimization affect the running time?</strong></p>&#xA;&#xA;<p>If we somehow knew that in each round, it would cut the number of edges in half, then the running time would be significantly improved:&#xA;$T(m) = T(m /2) + O(m) = O(m)$.</p>&#xA;&#xA;<p>However, while the optimization will dramatically reduce the number of edges examined, (only 1 edge by the final round, and at most # of components choose 2 in general), it's not clear how/if we can use this fact to tighten the analysis of the run-time. </p>&#xA;
algorithms algorithm analysis spanning trees
1
Tighter analysis of modified Borůvka's algorithm -- (algorithms algorithm analysis spanning trees) <p><a href="http://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm">Borůvka's algorithm</a> is one of the standard algorithms for calculating the minimum spanning tree for a graph $G = (V,E)$, with $|V| = n, |E| = m$.</p>&#xA;&#xA;<p>The pseudo-code is:</p>&#xA;&#xA;<pre><code>MST T = empty tree&#xA;Begin with each vertex as a component&#xA;While number of components &gt; 1&#xA; For each component c&#xA; let e = minimum edge out of component c&#xA; if e is not in T&#xA; add e to T //merging the two components connected by e&#xA;</code></pre>&#xA;&#xA;<p>We call each iteration of the outer loop a round. In each round, the inner loop cuts the number of components at least in half. Therefore there are at most $O(\log n)$ rounds. In each round, the inner loop looks at each edge at most twice (once from each component). Therefore the running time is at most $O(m \log n)$.</p>&#xA;&#xA;<p>Now suppose after each round, we remove all the edges which only connect vertices within the same component and also remove duplicate edges between components, so that the inner loop only looks at some number of edges m' &lt; m which are the minimum weight edges which connect two previously disconnected components. </p>&#xA;&#xA;<p><strong>How does this optimization affect the running time?</strong></p>&#xA;&#xA;<p>If we somehow knew that in each round, it would cut the number of edges in half, then the running time would be significantly improved:&#xA;$T(m) = T(m /2) + O(m) = O(m)$.</p>&#xA;&#xA;<p>However, while the optimization will dramatically reduce the number of edges examined, (only 1 edge by the final round, and at most # of components choose 2 in general), it's not clear how/if we can use this fact to tighten the analysis of the run-time. </p>&#xA;
habedi/stack-exchange-dataset
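The pseudo-code above, together with the proposed optimization of dropping edges that have become internal to a component, can be sketched as runnable code. The function name and the `(weight, u, v)` edge format are illustrative; only internal-edge removal is shown, not the deduplication of parallel edges between components, and distinct edge weights are assumed so that ties cannot cause ambiguity:

```python
# Boruvka's algorithm with union-find; after each round, edges whose
# endpoints ended up in the same component are discarded ("surviving").

def boruvka_mst(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = {}    # component root -> cheapest outgoing edge
        surviving = []   # edges kept for the next round
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue  # internal edge: drop it (the optimization)
            surviving.append((w, u, v))
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        edges = surviving
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:          # re-check: earlier merges may have joined them
                parent[ru] = rv
                mst.append((w, u, v))
                components -= 1
    return mst
```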
2,832
Is a push-down automaton with two stacks equivalent to a turing machine?
<p>In <a href="https://stackoverflow.com/a/559969/113124">this answer</a> it is mentioned</p>&#xA;&#xA;<blockquote>&#xA;  <p>A regular language can be recognized by a finite automaton. A context-free language requires a stack, and <strong>a context sensitive language requires two stacks (which is equivalent to saying it requires a full Turing machine)</strong>.</p>&#xA;</blockquote>&#xA;&#xA;<p>I want to know about the truth of the bold part above. Is it in fact true or not? What is a good way to arrive at an answer to this?</p>&#xA;
computability turing machines automata pushdown automata
1
Is a push-down automaton with two stacks equivalent to a turing machine? -- (computability turing machines automata pushdown automata) <p>In <a href="https://stackoverflow.com/a/559969/113124">this answer</a> it is mentioned</p>&#xA;&#xA;<blockquote>&#xA;  <p>A regular language can be recognized by a finite automaton. A context-free language requires a stack, and <strong>a context sensitive language requires two stacks (which is equivalent to saying it requires a full Turing machine)</strong>.</p>&#xA;</blockquote>&#xA;&#xA;<p>I want to know about the truth of the bold part above. Is it in fact true or not? What is a good way to arrive at an answer to this?</p>&#xA;
habedi/stack-exchange-dataset
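The usual argument behind the bold claim is constructive: a Turing-machine tape with a head can be simulated by two stacks, with one stack holding the cells to the left of the head and the other holding the head cell plus everything to its right. A minimal sketch of that tape simulation (class and symbol names are made up for illustration):

```python
# Two-stack simulation of a TM tape. 'left' holds cells left of the head
# (top of stack = nearest cell); 'right' holds the head cell on top plus
# everything to its right. BLANK stands for the tape's blank symbol.

BLANK = "_"

class TwoStackTape:
    def __init__(self, word):
        self.left = []
        self.right = list(reversed(word)) or [BLANK]

    def read(self):
        return self.right[-1]           # symbol under the head

    def write(self, sym):
        self.right[-1] = sym            # overwrite the head cell

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:              # ran off the written tape: extend
            self.right.append(BLANK)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else BLANK)
```

A full simulation would add the finite control (states and a transition table) on top of this tape, which is exactly what makes a two-stack PDA Turing-equivalent.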
2,834
How can I verify a solution to Travelling Salesman Problem in polynomial time?
<p>So, <a href="http://en.wikipedia.org/wiki/Travelling_salesman_problem#Computational_complexity">TSP (Travelling salesman problem) decision problem is <strong>NP complete</strong></a>.</p>&#xA;&#xA;<p>But I do not understand how I can verify that a given solution to TSP is in fact optimal in polynomial time, given that there is no way to find the optimal solution in polynomial time (which is because the problem is not in P)?</p>&#xA;&#xA;<p>Anything that might help me see that the verification can in fact be done in polynomial time?</p>&#xA;
complexity theory np complete traveling salesman
1
How can I verify a solution to Travelling Salesman Problem in polynomial time? -- (complexity theory np complete traveling salesman) <p>So, <a href="http://en.wikipedia.org/wiki/Travelling_salesman_problem#Computational_complexity">TSP (Travelling salesman problem) decision problem is <strong>NP complete</strong></a>.</p>&#xA;&#xA;<p>But I do not understand how I can verify that a given solution to TSP is in fact optimal in polynomial time, given that there is no way to find the optimal solution in polynomial time (which is because the problem is not in P)?</p>&#xA;&#xA;<p>Anything that might help me see that the verification can in fact be done in polynomial time?</p>&#xA;
habedi/stack-exchange-dataset
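The point the question circles around is that NP-completeness concerns the decision version ("is there a tour of cost at most $k$?"), whose certificate is a concrete tour that can be checked in polynomial time; verifying that a tour is *optimal* is a different question and is not known to be polynomial. A sketch of the decision-version check (function name and matrix representation are illustrative):

```python
# Verify a certificate for the TSP decision problem: given a distance
# matrix, a proposed tour, and a budget k, check in O(n) time that the
# tour visits every city exactly once and costs at most k.

def verify_tour(dist, tour, k):
    n = len(dist)
    if sorted(tour) != list(range(n)):   # must be a permutation of cities
        return False
    # closed-tour cost: sum of consecutive legs, wrapping around
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return cost <= k
```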
2,837
Is it intuitive to see that finding a Hamiltonian path is not in P while finding Euler path is?
<p>I am not sure I see it. From what I understand, edges and vertices are complements for each other and it is quite surprising that this difference exists.</p>&#xA;&#xA;<p>Is there a good / quick / easy way to see that in fact finding a Hamiltonian path should be much harder than finding a Euler path?</p>&#xA;
complexity theory graphs np complete intuition
1
Is it intuitive to see that finding a Hamiltonian path is not in P while finding Euler path is? -- (complexity theory graphs np complete intuition) <p>I am not sure I see it. From what I understand, edges and vertices are complements for each other and it is quite surprising that this difference exists.</p>&#xA;&#xA;<p>Is there a good / quick / easy way to see that in fact finding a Hamiltonian path should be much harder than finding a Euler path?</p>&#xA;
habedi/stack-exchange-dataset
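One way to make the asymmetry concrete: an Euler path exists iff the graph is connected (ignoring isolated vertices) and has zero or two odd-degree vertices, a purely local degree condition that is trivially checkable, while no comparably local criterion is known for Hamiltonian paths. A sketch of the Euler check for simple undirected graphs (function name invented for illustration):

```python
# Euler-path existence test: degree parity plus connectivity.
from collections import defaultdict

def has_euler_path(edges):
    deg = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
        adj[u].add(v); adj[v].add(u)
    if not deg:
        return True  # no edges: trivially has an (empty) Euler path
    # connectivity over the non-isolated vertices, via iterative DFS
    start = next(iter(deg))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if seen != set(deg):
        return False  # edge set is disconnected
    # Euler path iff 0 or 2 vertices of odd degree
    return sum(d % 2 for d in deg.values()) in (0, 2)
```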
2,840
Greedy choice and matroids (greedoids)
<p>As I was going through the material about the greedy approach, I came to know that knowledge of matroids (greedoids) would help me approach the problem properly. After reading about matroids I have roughly understood what matroids are. But how do you use the concept of a matroid for solving a given optimisation problem? </p>&#xA;&#xA;<p>Take, for example, the <a href="https://en.wikipedia.org/wiki/Activity_selection_problem" rel="noreferrer">activity selection problem</a>. What are the steps to use matroid theory for solving the problem?</p>&#xA;
algorithms graphs greedy algorithms matroids
1
Greedy choice and matroids (greedoids) -- (algorithms graphs greedy algorithms matroids) <p>As I was going through the material about the greedy approach, I came to know that knowledge of matroids (greedoids) would help me approach the problem properly. After reading about matroids I have roughly understood what matroids are. But how do you use the concept of a matroid for solving a given optimisation problem? </p>&#xA;&#xA;<p>Take, for example, the <a href="https://en.wikipedia.org/wiki/Activity_selection_problem" rel="noreferrer">activity selection problem</a>. What are the steps to use matroid theory for solving the problem?</p>&#xA;
habedi/stack-exchange-dataset
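The greedy recipe the matroid framework suggests is: order the candidates by the right key, then repeatedly take the best candidate that keeps the current selection "independent". For activity selection, independence means pairwise non-overlapping, and the right order is by finishing time. (Strictly, activity selection does not form a matroid; it fits the more general greedoid/interval-scheduling view, but the greedy shape is the same.) A sketch:

```python
# Greedy activity selection: sort by finishing time, keep an activity
# whenever it does not overlap the last one chosen (the "independence" test).

def select_activities(activities):
    # activities: list of (start, finish) pairs
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```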
2,845
Standard or Top Text on Applied Graph Theory
<p>I am looking for a reference text on applied graph theory and graph algorithms. Is there a standard text used in most computer science programs? If not, what are the most respected texts in the field? I have Cormen et al.</p>&#xA;
algorithms graphs reference request education books
1
Standard or Top Text on Applied Graph Theory -- (algorithms graphs reference request education books) <p>I am looking for a reference text on applied graph theory and graph algorithms. Is there a standard text used in most computer science programs? If not, what are the most respected texts in the field? I have Cormen et al.</p>&#xA;
habedi/stack-exchange-dataset
2,847
What is a good binary encoding for $\phi$-based balanced ternary arithmetic algorithms?
<p>I've been looking for a way to represent the <a href="http://en.wikipedia.org/wiki/Golden_ratio_base" rel="nofollow">golden ratio ($\phi$) base</a> more efficiently in binary. The standard binary golden ratio notation works but is horribly space inefficient. The Balanced Ternary Tau System (BTTS) is the best I've found but is quite obscure. The paper describing it in detail is <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.880" rel="nofollow">A. Stakhov, Brousentsov's Ternary Principle, Bergman's Number System and Ternary Mirror-symmetrical Arithmetic, 2002</a>. It is covered in less depth by <a href="http://neuraloutlet.wordpress.com/tag/ternary-tau-system/" rel="nofollow">this blog post</a>.</p>&#xA;&#xA;<p>BTTS is a <a href="http://en.wikipedia.org/wiki/Balanced_ternary" rel="nofollow">balanced ternary representation</a> that uses $\phi^2 = \phi + 1$ as a base and 3 values of $\bar 1$ ($-1$), $0$, and $1$ to represent addition or subtraction of powers of $\phi^2$. The table on page 6 of the paper lists integer values from 0 up to 10, and it can represent any $\phi$-based number as well.</p>&#xA;&#xA;<p>BTTS has some fascinating properties, but being ternary, I didn't think I'd be able to find a compact bit representation for it.</p>&#xA;&#xA;<p>Then I noticed that because of the arithmetic rules, the pattern $\bar 1 \bar 1$ never occurs as long as you only allow numbers $\ge 0$. This means that of the nine possible combinations for each pair of trits ($3^2$), only 8 ever occur, so we can encode 2 trits with 3 bits ($2^3$, a.k.a. octal). Also note that the left-most trit (and also the right-most for integers, because of the mirror-symmetric property) will only ever be $0$ or $1$ (again for positive numbers only), which lets us encode the left-most trit with only 1 bit.</p>&#xA;&#xA;<p>So a $2^n$-bit number can store $\lfloor 2^n/3\rfloor * 2 + 1$ balanced trits, possibly with a bit left over (maybe a good candidate for a sign bit). 
For example, we can represent $10 + 1 = 11$ balanced trits with $15 + 1 = 16$ bits, or $20 + 1 = 21$ balanced trits with $30 + 1 = 31$ bits, with 1 left over (32-bit). This has much better space density than ordinary golden ratio base binary encoding.</p>&#xA;&#xA;<p>So my question is, what would be a good octal (3-bit) encoding of trit pairs such that we can implement the addition and other arithmetic rules of the BTTS with as little difficulty as possible? One of the tricky aspects of this system is that carries happen in both directions, i.e. <br/>&#xA;$1 + 1 = 1 \bar 1 .1$ and $\bar 1 + \bar 1 = \bar 1 1.\bar 1$.</p>&#xA;&#xA;<p>This is my first post here, so please let me know if I need to fix or clarify anything.</p>&#xA;&#xA;<p>--<strong>Edit</strong>--</p>&#xA;&#xA;<p>ex0du5 asked for some clarification of what I need from a binary representation:</p>&#xA;&#xA;<ol>&#xA;<li>I want to be able to represent positive values of both integers and powers of $\phi$. The range of representable values need not be as good as binary, but it should be better than phinary per bit. I want to represent the largest possible set of phinary numbers in the smallest amount of space possible. Space takes priority over operation count for arithmetic operations.</li>&#xA;<li>I need addition to function such that carries happen in both directions. Addition will be the most common operation for my application. Consequently it should require as few operations as possible. If a shorter sequence of operations is possible using a longer bit representation (conflicting with goal 1), then goal 1 takes priority. 
Space is more important than speed.</li>&#xA;<li>Multiplication only needs to handle integers > 0 multiplied by a phinary number, not arbitrary phinary number multiplication, and so can technically be emulated with a series of additions, though a faster algorithm would be helpful.</li>&#xA;<li>I'm ignoring division and subtraction for now, but having algorithms for them would be a bonus.</li>&#xA;<li>I need to eventually convert a phinary number to a binary floating point approximation of its value, but this will happen only just prior to output. There will be no converting back and forth.</li>&#xA;</ol>&#xA;
algorithms data structures efficiency coding theory
1
What is a good binary encoding for $\phi$-based balanced ternary arithmetic algorithms? -- (algorithms data structures efficiency coding theory) <p>I've been looking for a way to represent the <a href="http://en.wikipedia.org/wiki/Golden_ratio_base" rel="nofollow">golden ratio ($\phi$) base</a> more efficiently in binary. The standard binary golden ratio notation works but is horribly space inefficient. The Balanced Ternary Tau System (BTTS) is the best I've found but is quite obscure. The paper describing it in detail is <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.880" rel="nofollow">A. Stakhov, Brousentsov's Ternary Principle, Bergman's Number System and Ternary Mirror-symmetrical Arithmetic, 2002</a>. It is covered in less depth by <a href="http://neuraloutlet.wordpress.com/tag/ternary-tau-system/" rel="nofollow">this blog post</a>.</p>&#xA;&#xA;<p>BTTS is a <a href="http://en.wikipedia.org/wiki/Balanced_ternary" rel="nofollow">balanced ternary representation</a> that uses $\phi^2 = \phi + 1$ as a base and 3 values of $\bar 1$ ($-1$), $0$, and $1$ to represent addition or subtraction of powers of $\phi^2$. The table on page 6 of the paper lists integer values from 0 up to 10, and it can represent any $\phi$-based number as well.</p>&#xA;&#xA;<p>BTTS has some fascinating properties, but being ternary, I didn't think I'd be able to find a compact bit representation for it.</p>&#xA;&#xA;<p>Then I noticed that because of the arithmetic rules, the pattern $\bar 1 \bar 1$ never occurs as long as you only allow numbers $\ge 0$. This means that of the nine possible combinations for each pair of trits ($3^2$), only 8 ever occur, so we can encode 2 trits with 3 bits ($2^3$, a.k.a. octal). 
Also note that the left-most trit (and also the right-most for integers, because of the mirror-symmetric property) will only ever be $0$ or $1$ (again for positive numbers only), which lets us encode the left-most trit with only 1 bit.</p>&#xA;&#xA;<p>So a $2^n$-bit number can store $\lfloor 2^n/3\rfloor * 2 + 1$ balanced trits, possibly with a bit left over (maybe a good candidate for a sign bit). For example, we can represent $10 + 1 = 11$ balanced trits with $15 + 1 = 16$ bits, or $20 + 1 = 21$ balanced trits with $30 + 1 = 31$ bits, with 1 left over (32-bit). This has much better space density than ordinary golden ratio base binary encoding.</p>&#xA;&#xA;<p>So my question is, what would be a good octal (3-bit) encoding of trit pairs such that we can implement the addition and other arithmetic rules of the BTTS with as little difficulty as possible? One of the tricky aspects of this system is that carries happen in both directions, i.e. <br/>&#xA;$1 + 1 = 1 \bar 1 .1$ and $\bar 1 + \bar 1 = \bar 1 1.\bar 1$.</p>&#xA;&#xA;<p>This is my first post here, so please let me know if I need to fix or clarify anything.</p>&#xA;&#xA;<p>--<strong>Edit</strong>--</p>&#xA;&#xA;<p>ex0du5 asked for some clarification of what I need from a binary representation:</p>&#xA;&#xA;<ol>&#xA;<li>I want to be able to represent positive values of both integers and powers of $\phi$. The range of representable values need not be as good as binary, but it should be better than phinary per bit. I want to represent the largest possible set of phinary numbers in the smallest amount of space possible. Space takes priority over operation count for arithmetic operations.</li>&#xA;<li>I need addition to function such that carries happen in both directions. Addition will be the most common operation for my application. Consequently it should require as few operations as possible. 
If a shorter sequence of operations is possible using a longer bit representation (conflicting with goal 1), then goal 1 takes priority. Space is more important than speed.</li>&#xA;<li>Multiplication only needs to handle integers > 0 multiplied by a phinary number, not arbitrary phinary number multiplication, and so can technically be emulated with a series of additions, though a faster algorithm would be helpful.</li>&#xA;<li>I'm ignoring division and subtraction for now, but having algorithms for them would be a bonus.</li>&#xA;<li>I need to eventually convert a phinary number to a binary floating point approximation of its value, but this will happen only just prior to output. There will be no converting back and forth.</li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
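The packing arithmetic in the question (3 bits per trit pair, plus 1 bit for the leading trit) can be sanity-checked with a few lines. This is a hypothetical reading of the scheme, written to reproduce the 16-bit and 32-bit figures quoted above; `trits_for_bits` is an invented name:

```python
# Capacity check: one leading trit costs 1 bit; every further pair of trits
# costs 3 bits (only 8 of the 9 pair values occur, since "-1 -1" is
# forbidden for non-negative numbers). Returns (trit capacity, spare bits).

def trits_for_bits(bits):
    pairs = (bits - 1) // 3          # 3 bits per trit pair
    leftover = (bits - 1) % 3        # unused bits (e.g. a sign bit)
    return 2 * pairs + 1, leftover   # +1 for the 1-bit leading trit
```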
2,855
Choosing an element from a set satisfying a predicate uniformly at random in $O(1)$ space
<p>We are given a set of objects, say integers, $S$. In addition, we are given a predicate $P$, for example $P(i): \Leftrightarrow i \geq 0$. We don't know in advance how many elements of $S$ satisfy the predicate $P$, but we would like to sample or choose an element uniformly at random from $S' = \{ i \mid i \in S \wedge P(i) \}$.</p>&#xA;&#xA;<p>The naive approach is to scan $S$ and for example record all the integers or indices for which $P$ holds, then choose one of them uniformly at random. The downside is that in the worst-case, we need $|S|$ space.</p>&#xA;&#xA;<p>For large sets or in say a streaming environment the naive approach is not acceptable. Is there an in-place algorithm for the problem?</p>&#xA;
algorithms randomized algorithms streaming algorithm in place
1
Choosing an element from a set satisfying a predicate uniformly at random in $O(1)$ space -- (algorithms randomized algorithms streaming algorithm in place) <p>We are given a set of objects, say integers, $S$. In addition, we are given a predicate $P$, for example $P(i): \Leftrightarrow i \geq 0$. We don't know in advance how many elements of $S$ satisfy the predicate $P$, but we would like to sample or choose an element uniformly at random from $S' = \{ i \mid i \in S \wedge P(i) \}$.</p>&#xA;&#xA;<p>The naive approach is to scan $S$ and for example record all the integers or indices for which $P$ holds, then choose one of them uniformly at random. The downside is that in the worst-case, we need $|S|$ space.</p>&#xA;&#xA;<p>For large sets or in say a streaming environment the naive approach is not acceptable. Is there an in-place algorithm for the problem?</p>&#xA;
habedi/stack-exchange-dataset
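One standard answer to the question above is reservoir sampling with a reservoir of size one (Vitter's "Algorithm R"): keep the $t$-th satisfying element with probability $1/t$; each satisfying element then ends up chosen with equal probability, in one pass and $O(1)$ extra space. A sketch:

```python
# Reservoir sampling (k = 1): uniform choice among stream elements
# satisfying pred, using constant space and a single pass.
import random

def sample_matching(stream, pred):
    chosen, count = None, 0
    for x in stream:
        if pred(x):
            count += 1
            if random.randrange(count) == 0:  # replace with probability 1/count
                chosen = x
    return chosen  # None if nothing matched
```

The correctness argument is a telescoping product: the $t$-th match survives all later rounds with probability $\frac{1}{t}\cdot\frac{t}{t+1}\cdots\frac{T-1}{T} = \frac{1}{T}$, where $T$ is the total number of matches.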
2,861
Are regular languages closed under sort (Parikh image)?
<p>Assume $L$ is a regular language over an ordered alphabet. Is the language built by taking every word in $L$ and sorting it always a regular language?</p>&#xA;
formal languages regular languages
1
Are regular languages closed under sort (Parikh image)? -- (formal languages regular languages) <p>Assume $L$ is a regular language over an ordered alphabet. Is the language built by taking every word in $L$ and sorting it always a regular language?</p>&#xA;
habedi/stack-exchange-dataset
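The answer is no, and the classic counterexample can at least be illustrated by brute force: with $a < b$, sorting the words of the regular language $(ab)^*$ yields $\{a^n b^n \mid n \ge 0\}$, the textbook non-regular language. The sketch below only checks small $n$; the actual proof that the sorted language is non-regular uses the pumping lemma:

```python
# Enumerate the sorted images of the first few words of (ab)*.
def sorted_words_of_ab_star(max_n):
    # word ("ab")^n, letters sorted with a < b, gives a^n b^n
    return {"".join(sorted("ab" * n)) for n in range(max_n + 1)}
```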
2,863
Determining the particular number in $O(n)$ time and space (worst case)
<p>$\newcommand\ldotd{\mathinner{..}}$Given that $A[1\ldotd n]$ are integers such that $0\le A[k]\le m$ for all $1\le k\le n$, and that each number in $A[1\ldotd n]$ except one particular number occurs an odd number of times, find the number that occurs an even number of times.</p>&#xA;&#xA;<p>There is an $\Theta(n\log n)$ algorithm: we sort $A[1\ldotd n]$ into $B[1\ldotd n]$, and break $B[1\ldotd n]$ into maximal pieces whose elements' values are the same, so we can count the occurrences of each element.</p>&#xA;&#xA;<p>I want to find a worst-case-$O(n)$-time-and-$O(n)$-space algorithm.</p>&#xA;&#xA;<p>Suppose that $m=\Omega(n^{1+\epsilon})$ with $\epsilon&gt;0$; then radix sort is not acceptable.&#xA;$\DeclareMathOperator{\xor}{xor}$&#xA;Binary bitwise operations are acceptable, for example, $A[1]\xor A[2]$.</p>&#xA;
algorithms search algorithms
1
Determining the particular number in $O(n)$ time and space (worst case) -- (algorithms search algorithms) <p>$\newcommand\ldotd{\mathinner{..}}$Given that $A[1\ldotd n]$ are integers such that $0\le A[k]\le m$ for all $1\le k\le n$, and that each number in $A[1\ldotd n]$ except one particular number occurs an odd number of times, find the number that occurs an even number of times.</p>&#xA;&#xA;<p>There is an $\Theta(n\log n)$ algorithm: we sort $A[1\ldotd n]$ into $B[1\ldotd n]$, and break $B[1\ldotd n]$ into maximal pieces whose elements' values are the same, so we can count the occurrences of each element.</p>&#xA;&#xA;<p>I want to find a worst-case-$O(n)$-time-and-$O(n)$-space algorithm.</p>&#xA;&#xA;<p>Suppose that $m=\Omega(n^{1+\epsilon})$ with $\epsilon&gt;0$; then radix sort is not acceptable.&#xA;$\DeclareMathOperator{\xor}{xor}$&#xA;Binary bitwise operations are acceptable, for example, $A[1]\xor A[2]$.</p>&#xA;
habedi/stack-exchange-dataset
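The natural $O(n)$-space approach is a counting table. Note the hedge: with ordinary hashing this gives *expected* rather than worst-case $O(n)$ time, so it does not by itself settle the question's worst-case requirement (that would need something stronger, e.g. exploiting the value range $m$ or perfect hashing):

```python
# Count occurrences and return the value with an even count.
# Expected O(n) time, O(n) space; worst-case time depends on the hash table.
from collections import Counter

def find_even_occurrence(a):
    for value, count in Counter(a).items():
        if count % 2 == 0:
            return value
    return None  # input violates the promise: no even-occurring value
```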
2,868
How to call something that can be either a terminal or a nonterminal?
<p>I had written a <a href="https://github.com/shabbyX/shCompiler" rel="nofollow">compiler compiler</a> a few years ago and I'm now cleaning it up, improving it, and turning it into C.</p>&#xA;&#xA;<p>However, I came across a terminology problem that I remember being unable to solve in the past either.</p>&#xA;&#xA;<p>Imagine an LL(k) stack. In this stack, you may have terminals, which are expected to be matched with the next token, or non-terminals that would expand based on the next token. In either case, there is a string in the stack.</p>&#xA;&#xA;<p>The word I am looking for is a term that means either a terminal or non-terminal. <a href="http://en.wikipedia.org/wiki/Terminal_and_nonterminal_symbols" rel="nofollow">Wikipedia</a> was of no help.</p>&#xA;&#xA;<p>To clarify a bit more, imagine a grammar with $t = \{a \mid a \text{ terminal}\}$ and $T = \{A \mid A \text{ non-terminal}\}$. If you have a set $X = \{x | x \in t \vee x \in T\}$, how would you refer to an element of $X$? "Grammar symbol"? "Grammar element"? "Terminal or non-terminal symbol"?</p>&#xA;&#xA;<p>I am in particular looking for a name as short and to the point as possible, since this will end up becoming a variable name!</p>&#xA;
terminology formal grammars compilers
1
How to call something that can be either a terminal or a nonterminal? -- (terminology formal grammars compilers) <p>I had written a <a href="https://github.com/shabbyX/shCompiler" rel="nofollow">compiler compiler</a> a few years ago and I'm now cleaning it up, improving it, and turning it into C.</p>&#xA;&#xA;<p>However, I came across a terminology problem that I remember being unable to solve in the past either.</p>&#xA;&#xA;<p>Imagine an LL(k) stack. In this stack, you may have terminals, which are expected to be matched with the next token, or non-terminals that would expand based on the next token. In either case, there is a string in the stack.</p>&#xA;&#xA;<p>The word I am looking for is a term that means either a terminal or non-terminal. <a href="http://en.wikipedia.org/wiki/Terminal_and_nonterminal_symbols" rel="nofollow">Wikipedia</a> was of no help.</p>&#xA;&#xA;<p>To clarify a bit more, imagine a grammar with $t = \{a \mid a \text{ terminal}\}$ and $T = \{A \mid A \text{ non-terminal}\}$. If you have a set $X = \{x | x \in t \vee x \in T\}$, how would you refer to an element of $X$? "Grammar symbol"? "Grammar element"? "Terminal or non-terminal symbol"?</p>&#xA;&#xA;<p>I am in particular looking for a name as short and to the point as possible, since this will end up becoming a variable name!</p>&#xA;
habedi/stack-exchange-dataset
2,869
What are staged functions (conceptually)?
<p>In a recent CACM article [1], the authors present an implementation for <em>staged functions</em>. They use the term as if it were well-known, and none of the references looks like an obvious introduction.</p>&#xA;&#xA;<p>They give a short explanation (emphasis mine and reference number changed; it's 22 in the original)</p>&#xA;&#xA;<blockquote>&#xA;  <p>In the context of program generation, multistage programming (MSP, staging for short) as established by Taha and Sheard [2] <strong>allows programmers to explicitly delay evaluation of a program expression to a later stage</strong> (thus, staging an expression). The present stage effectively acts as a code generator that composes (and possibly executes) the program of the next stage. </p>&#xA;</blockquote>&#xA;&#xA;<p>However, Taha and Sheard write (emphasis mine):</p>&#xA;&#xA;<blockquote>&#xA;  <p><strong>A multi-stage program is one that involves the generation, compilation, and execution of code, all inside the same process.</strong> Multi-stage languages express multi-stage programs. Staging, and consequently multi-stage programming, address the need for general purpose solutions which do not pay run-time interpretive overheads.</p>&#xA;</blockquote>&#xA;&#xA;<p>They then go on to several references to older work allegedly showing that staging is effective, which suggests that the concept is even older. They don't give a reference for the term itself.</p>&#xA;&#xA;<p>These statements seem to be orthogonal, if not contradictory; maybe what Rompf and Odersky write is an application of what Taha and Sheard propose, but maybe it is another perspective on the same thing. They seem to agree that an important point is that programs (re)write parts of themselves at runtime, but I do not know whether that is a necessary and/or sufficient ability.</p>&#xA;&#xA;<p>So, what is <em>staging</em>, and what are the respective interpretations of staging in this context? 
Where does the term come from?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1145/2184319.2184345">Lightweight Modular Staging: A Pragmatic Approach to Runtime Code Generation and Compiled DSLs</a> by T. Rompf and M. Odersky (2012)</li>&#xA;<li><a href="http://dx.doi.org/10.1016/S0304-3975%2800%2900053-0">MetaML and multi-stage programming with explicit annotations</a> by W. Taha and T. Sheard (2000)</li>&#xA;</ol>&#xA;
terminology programming languages meta programming
1
What are staged functions (conceptually)? -- (terminology programming languages meta programming) <p>In a recent CACM article [1], the authors present an implementation for <em>staged functions</em>. They use the term as if it were well-known, and none of the references looks like an obvious introduction.</p>&#xA;&#xA;<p>They give a short explanation (emphasis mine and reference number changed; it's 22 in the original)</p>&#xA;&#xA;<blockquote>&#xA;  <p>In the context of program generation, multistage programming (MSP, staging for short) as established by Taha and Sheard [2] <strong>allows programmers to explicitly delay evaluation of a program expression to a later stage</strong> (thus, staging an expression). The present stage effectively acts as a code generator that composes (and possibly executes) the program of the next stage. </p>&#xA;</blockquote>&#xA;&#xA;<p>However, Taha and Sheard write (emphasis mine):</p>&#xA;&#xA;<blockquote>&#xA;  <p><strong>A multi-stage program is one that involves the generation, compilation, and execution of code, all inside the same process.</strong> Multi-stage languages express multi-stage programs. Staging, and consequently multi-stage programming, address the need for general purpose solutions which do not pay run-time interpretive overheads.</p>&#xA;</blockquote>&#xA;&#xA;<p>They then go on to several references to older work allegedly showing that staging is effective, which suggests that the concept is even older. They don't give a reference for the term itself.</p>&#xA;&#xA;<p>These statements seem to be orthogonal, if not contradictory; maybe what Rompf and Odersky write is an application of what Taha and Sheard propose, but maybe it is another perspective on the same thing. 
They seem to agree that an important point is that programs (re)write parts of themselves at runtime, but I do not know whether that is a necessary and/or sufficient ability.</p>&#xA;&#xA;<p>So, what is <em>staging</em>, and what are the respective interpretations of staging in this context? Where does the term come from?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1145/2184319.2184345">Lightweight Modular Staging: A Pragmatic Approach to Runtime Code Generation and Compiled DSLs</a> by T. Rompf and M. Odersky (2012)</li>&#xA;<li><a href="http://dx.doi.org/10.1016/S0304-3975%2800%2900053-0">MetaML and multi-stage programming with explicit annotations</a> by W. Taha and T. Sheard (2000)</li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
2,872
Origins of the term "distributed hash table"
<p>I am currently researching for my diploma thesis in computer science with a topic in the area of distributed hash tables. Naturally, I came to the question where the term <em>distributed hash table</em> came from. (I know it is not rocket science to just derive it from <em>distributing a hash table</em>, but somebody somewhere must have come up with it.)</p>&#xA;&#xA;<p>Most papers I read referred to the original paper on <em>consistent hashing</em> and one of the first algorithms making use of it (e.g. Chord). I know that there was a lot of research on distributed databases in the 80s, so I figure that the term, or maybe the idea behind it, should be older than ~15 years.</p>&#xA;&#xA;<p>The motivation behind this question is that knowing an earlier date and maybe another term for a similar idea would possibly widen the range of useful information I could gather for my research. For example, what have others done that is similar to what I want to do, and where have they failed? Etc. etc.</p>&#xA;&#xA;<p>I tried to find more on this subject using <em>Structured Overlay Networks</em> as a search keyword, but the resulting definitions/papers are also quite young, which leaves me with the impression that the research topic might not be so old after all.</p>&#xA;&#xA;<p>Do any of you know of earlier research (maybe pre-90s?) in the topics of distributed hash tables and/or structured overlay networks? I'd be glad to hear some keywords which could lead me to more historic papers.</p>&#xA;
data structures terminology distributed systems hash tables history
1
Origins of the term "distributed hash table" -- (data structures terminology distributed systems hash tables history) <p>I am currently researching for my diploma thesis in computer science with a topic in the area of distributed hash tables. Naturally, I came to the question where the term <em>distributed hash table</em> came from. (I know it is not rocket science to just derive it from <em>distributing a hash table</em>, but somebody somewhere must have come up with it.)</p>&#xA;&#xA;<p>Most papers I read referred to the original paper on <em>consistent hashing</em> and one of the first algorithms making use of it (e.g. Chord). I know that there was a lot of research on distributed databases in the 80s, so I figure that the term, or maybe the idea behind it, should be older than ~15 years.</p>&#xA;&#xA;<p>The motivation behind this question is that knowing an earlier date and maybe another term for a similar idea would possibly widen the range of useful information I could gather for my research. For example, what have others done that is similar to what I want to do, and where have they failed? Etc. etc.</p>&#xA;&#xA;<p>I tried to find more on this subject using <em>Structured Overlay Networks</em> as a search keyword, but the resulting definitions/papers are also quite young, which leaves me with the impression that the research topic might not be so old after all.</p>&#xA;&#xA;<p>Do any of you know of earlier research (maybe pre-90s?) in the topics of distributed hash tables and/or structured overlay networks? I'd be glad to hear some keywords which could lead me to more historic papers.</p>&#xA;
habedi/stack-exchange-dataset
2,878
Universal simulation of Turing machines
<p>Let <span class="math-container">$f$</span> be a fixed time-constructable function.</p>&#xA;<p>The classical universal simulation result for TMs (Hennie and Stearns, 1966) states that there is a two-tape TM <span class="math-container">$U$</span> such that given</p>&#xA;<ul>&#xA;<li>the description of a TM <span class="math-container">$\langle M \rangle$</span>, and</li>&#xA;<li>an input string <span class="math-container">$x$</span>,</li>&#xA;</ul>&#xA;<p>runs for <span class="math-container">$g(|x|)$</span> steps and returns <span class="math-container">$M$</span>'s answer on <span class="math-container">$x$</span>. And <span class="math-container">$g$</span> can be taken to be any function in <span class="math-container">$\omega(f(n)\lg f(n))$</span>.</p>&#xA;<p>My questions are:</p>&#xA;<blockquote>&#xA;<ol>&#xA;<li><p>What is the best known simulation result on a single tape TM? Does the result above also still hold?</p>&#xA;</li>&#xA;<li><p>Is there any improvement on [HS66]? Can we simulate TMs on a two-tape TM for <span class="math-container">$f(n)$</span> steps in a faster way?&#xA;Can we take <span class="math-container">$g(n)$</span> to be in <span class="math-container">$\omega(f(n))$</span> in place of <span class="math-container">$\omega(f(n)\lg f(n))$</span>?</p>&#xA;</li>&#xA;</ol>&#xA;</blockquote>&#xA;
complexity theory reference request turing machines machine models simulation
1
Universal simulation of Turing machines -- (complexity theory reference request turing machines machine models simulation) <p>Let <span class="math-container">$f$</span> be a fixed time-constructable function.</p>&#xA;<p>The classical universal simulation result for TMs (Hennie and Stearns, 1966) states that there is a two-tape TM <span class="math-container">$U$</span> such that given</p>&#xA;<ul>&#xA;<li>the description of a TM <span class="math-container">$\langle M \rangle$</span>, and</li>&#xA;<li>an input string <span class="math-container">$x$</span>,</li>&#xA;</ul>&#xA;<p>runs for <span class="math-container">$g(|x|)$</span> steps and returns <span class="math-container">$M$</span>'s answer on <span class="math-container">$x$</span>. And <span class="math-container">$g$</span> can be taken to be any function in <span class="math-container">$\omega(f(n)\lg f(n))$</span>.</p>&#xA;<p>My questions are:</p>&#xA;<blockquote>&#xA;<ol>&#xA;<li><p>What is the best known simulation result on a single tape TM? Does the result above also still hold?</p>&#xA;</li>&#xA;<li><p>Is there any improvement on [HS66]? Can we simulate TMs on a two-tape TM for <span class="math-container">$f(n)$</span> steps in a faster way?&#xA;Can we take <span class="math-container">$g(n)$</span> to be in <span class="math-container">$\omega(f(n))$</span> in place of <span class="math-container">$\omega(f(n)\lg f(n))$</span>?</p>&#xA;</li>&#xA;</ol>&#xA;</blockquote>&#xA;
habedi/stack-exchange-dataset
2,886
Notation for operational semantics that can be used in code comments
<p>I'm defining an intermediate language for a multi-backend code generator that I'm writing. I want to document the operational semantics for this intermediate language in a way that is readable both from within the source code and from the generated documentation (ocamldoc). The notation introduced in "Types and Programming Languages" is great for a book, but I don't want to try to do the premises-over-conclusion style notation via ASCII art.</p>&#xA;&#xA;<p>Is there a widely recognized notation for operational semantics that doesn't require non-ASCII characters? I looked through various RFCs but can't find any that use a non-natural-language way of specifying semantics.</p>&#xA;
programming languages semantics operational semantics
1
Notation for operational semantics that can be used in code comments -- (programming languages semantics operational semantics) <p>I'm defining an intermediate language for a multi-backend code generator that I'm writing. I want to document the operational semantics for this intermediate language in a way that is readable both from within the source code and from the generated documentation (ocamldoc). The notation introduced in "Types and Programming Languages" is great for a book, but I don't want to try to do the premises-over-conclusion style notation via ASCII art.</p>&#xA;&#xA;<p>Is there a widely recognized notation for operational semantics that doesn't require non-ASCII characters? I looked through various RFCs but can't find any that use a non-natural-language way of specifying semantics.</p>&#xA;
habedi/stack-exchange-dataset
2,887
Why does NTIME consider the length of the longest computation?
<p>In Sipser's textbook "Introduction to the Theory of Computation, Second Edition," he defines nondeterministic time complexity as follows:</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $N$ be a nondeterministic Turing machine that is a decider. The <strong>running time</strong> of $N$ is the function $f : \mathbb{N} \rightarrow \mathbb{N}$, where $f(n)$ is the maximum number of steps that $N$ uses on any branch of its computation on any input of length $n$ [...].</p>&#xA;</blockquote>&#xA;&#xA;<p>Part of this definition says that the running time of the machine $N$ is the maximum number of steps taken by that machine on any branch. Is there a reason that all branches are considered? It seems like the length of the shortest accepting computation would be a better measure (assuming, of course, that the machine halts), since you would never need to run the machine any longer than this before you could conclude whether the machine was going to accept or not.</p>&#xA;
complexity theory time complexity terminology turing machines nondeterminism
1
Why does NTIME consider the length of the longest computation? -- (complexity theory time complexity terminology turing machines nondeterminism) <p>In Sipser's textbook "Introduction to the Theory of Computation, Second Edition," he defines nondeterministic time complexity as follows:</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $N$ be a nondeterministic Turing machine that is a decider. The <strong>running time</strong> of $N$ is the function $f : \mathbb{N} \rightarrow \mathbb{N}$, where $f(n)$ is the maximum number of steps that $N$ uses on any branch of its computation on any input of length $n$ [...].</p>&#xA;</blockquote>&#xA;&#xA;<p>Part of this definition says that the running time of the machine $N$ is the maximum number of steps taken by that machine on any branch. Is there a reason that all branches are considered? It seems like the length of the shortest accepting computation would be a better measure (assuming, of course, that the machine halts), since you would never need to run the machine any longer than this before you could conclude whether the machine was going to accept or not.</p>&#xA;
habedi/stack-exchange-dataset
2,889
Determining whether a CFG is $LL(k)$ for any $k$?
<p>In <a href="http://classes.engr.oregonstate.edu/eecs/winter2012/cs480/assignments/Knuth-1965-TranslationofLanguages.pdf" rel="nofollow">Knuth's original paper on $LR(k)$ grammars</a>, he proved that the decision problem "Given a CFG $G$, is there a $k$ such that $G$ is an $LR(k)$ grammar?" is undecidable.</p>&#xA;&#xA;<p>Is there a similar result showing that it is undecidable whether a given CFG is an $LL(k)$ grammar for some choice of $k$? Or is this problem known to be decidable?</p>&#xA;
formal languages computability formal grammars context free parsing
1
Determining whether a CFG is $LL(k)$ for any $k$? -- (formal languages computability formal grammars context free parsing) <p>In <a href="http://classes.engr.oregonstate.edu/eecs/winter2012/cs480/assignments/Knuth-1965-TranslationofLanguages.pdf" rel="nofollow">Knuth's original paper on $LR(k)$ grammars</a>, he proved that the decision problem "Given a CFG $G$, is there a $k$ such that $G$ is an $LR(k)$ grammar?" is undecidable.</p>&#xA;&#xA;<p>Is there a similar result showing that it is undecidable whether a given CFG is an $LL(k)$ grammar for some choice of $k$? Or is this problem known to be decidable?</p>&#xA;
habedi/stack-exchange-dataset
2,893
Bound on space for selection algorithm?
<p>There is a well-known worst-case $O(n)$ <a href="http://en.wikipedia.org/wiki/Selection_algorithm">selection algorithm</a> to find the $k$'th largest element in an array of integers. It uses a <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Properties_of_pivot">median-of-medians</a> approach to find a good enough pivot, partitions the input array in place and then recursively continues its search for the $k$'th largest element.</p>&#xA;&#xA;<p>If we weren't allowed to touch the input array, how much extra space would be needed in order to find the $k$'th largest element in $O(n)$ time? Could we find the $k$'th largest element in $O(1)$ extra space and still keep the runtime $O(n)$? For example, finding the maximum or minimum element takes $O(n)$ time and $O(1)$ space.</p>&#xA;&#xA;<p>Intuitively, I cannot imagine that we could do better than $O(n)$ space, but is there a proof of this?</p>&#xA;&#xA;<p>Can someone point to a reference or come up with an argument why the $\lfloor n/2 \rfloor$'th element would require $O(n)$ space to be found in $O(n)$ time?</p>&#xA;
algorithms algorithm analysis space complexity lower bounds
1
Bound on space for selection algorithm? -- (algorithms algorithm analysis space complexity lower bounds) <p>There is a well-known worst-case $O(n)$ <a href="http://en.wikipedia.org/wiki/Selection_algorithm">selection algorithm</a> to find the $k$'th largest element in an array of integers. It uses a <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Properties_of_pivot">median-of-medians</a> approach to find a good enough pivot, partitions the input array in place and then recursively continues its search for the $k$'th largest element.</p>&#xA;&#xA;<p>If we weren't allowed to touch the input array, how much extra space would be needed in order to find the $k$'th largest element in $O(n)$ time? Could we find the $k$'th largest element in $O(1)$ extra space and still keep the runtime $O(n)$? For example, finding the maximum or minimum element takes $O(n)$ time and $O(1)$ space.</p>&#xA;&#xA;<p>Intuitively, I cannot imagine that we could do better than $O(n)$ space, but is there a proof of this?</p>&#xA;&#xA;<p>Can someone point to a reference or come up with an argument why the $\lfloor n/2 \rfloor$'th element would require $O(n)$ space to be found in $O(n)$ time?</p>&#xA;
habedi/stack-exchange-dataset
2,907
What exactly is the difference between supervised and unsupervised learning?
<p>I am trying to understand clustering methods.</p>&#xA;&#xA;<p>What I think I understood:</p>&#xA;&#xA;<ol>&#xA;<li><p>In supervised learning, the categories/labels the data is assigned to are known before computation. So, the labels, classes or categories are being used in order to "learn" the parameters that are really significant for those clusters.</p></li>&#xA;<li><p>In unsupervised learning, datasets are assigned to segments, without the clusters being known.</p></li>&#xA;</ol>&#xA;&#xA;<p>Does that mean that, if I don't even know which parameters are crucial for a segmentation, I should prefer supervised learning?</p>&#xA;
machine learning data mining clustering
1
What exactly is the difference between supervised and unsupervised learning? -- (machine learning data mining clustering) <p>I am trying to understand clustering methods.</p>&#xA;&#xA;<p>What I think I understood:</p>&#xA;&#xA;<ol>&#xA;<li><p>In supervised learning, the categories/labels the data is assigned to are known before computation. So, the labels, classes or categories are being used in order to "learn" the parameters that are really significant for those clusters.</p></li>&#xA;<li><p>In unsupervised learning, datasets are assigned to segments, without the clusters being known.</p></li>&#xA;</ol>&#xA;&#xA;<p>Does that mean that, if I don't even know which parameters are crucial for a segmentation, I should prefer supervised learning?</p>&#xA;
habedi/stack-exchange-dataset
2,919
Inferring refinement types
<p>At work I’ve been tasked with inferring some type information about a dynamic language. I rewrite sequences of statements into nested <code>let</code> expressions, like so:</p>&#xA;&#xA;<pre><code>return x; Z =&gt; x&#xA;var x; Z =&gt; let x = undefined in Z&#xA;x = y; Z =&gt; let x = y in Z&#xA;if x then T else F; Z =&gt; if x then { T; Z } else { F; Z }&#xA;</code></pre>&#xA;&#xA;<p>Since I’m starting from general type information and trying to deduce more specific types, the natural choice is refinement types. For example, the conditional operator returns a union of the types of its true and false branches. In simple cases, it works very well.</p>&#xA;&#xA;<p>I ran into a snag, however, when trying to infer the type of the following:</p>&#xA;&#xA;<pre><code>function g(f) {&#xA; var x;&#xA; x = f(3);&#xA; return f(x);&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Which is rewritten to:</p>&#xA;&#xA;<pre><code>\f.&#xA; let x = undefined in&#xA; let x = f 3 in&#xA; f x&#xA;</code></pre>&#xA;&#xA;<p>HM would infer $\mathtt{f} : \mathtt{Int} \to \mathtt{Int}$ and consequently $\mathtt{g} : (\mathtt{Int} \to \mathtt{Int}) \to \mathtt{Int}$. The actual type I want to be able to infer is:</p>&#xA;&#xA;<p>$$\mathtt{g} : \forall \tau_1 \tau_2. \:(\mathtt{Int} \to \tau_1 \land \tau_1 \to \tau_2) \to \tau_2$$</p>&#xA;&#xA;<p>I’m already using functional dependencies to resolve the type of an overloaded <code>+</code> operator, so I figured it was a natural choice to use them to resolve the type of <code>f</code> within <code>g</code>. That is, the types of <code>f</code> in all its applications together uniquely determine the type of <code>g</code>. However, as it turns out, fundeps don’t lend themselves terribly well to variable numbers of source types.</p>&#xA;&#xA;<p>Anyway, the interplay of polymorphism and refinement typing is problematic. So is there a better approach I’m missing? 
I’m currently digesting “Refinement Types for ML” and would appreciate more literature or other pointers.</p>&#xA;
programming languages logic type theory type inference
1
Inferring refinement types -- (programming languages logic type theory type inference) <p>At work I’ve been tasked with inferring some type information about a dynamic language. I rewrite sequences of statements into nested <code>let</code> expressions, like so:</p>&#xA;&#xA;<pre><code>return x; Z =&gt; x&#xA;var x; Z =&gt; let x = undefined in Z&#xA;x = y; Z =&gt; let x = y in Z&#xA;if x then T else F; Z =&gt; if x then { T; Z } else { F; Z }&#xA;</code></pre>&#xA;&#xA;<p>Since I’m starting from general type information and trying to deduce more specific types, the natural choice is refinement types. For example, the conditional operator returns a union of the types of its true and false branches. In simple cases, it works very well.</p>&#xA;&#xA;<p>I ran into a snag, however, when trying to infer the type of the following:</p>&#xA;&#xA;<pre><code>function g(f) {&#xA; var x;&#xA; x = f(3);&#xA; return f(x);&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Which is rewritten to:</p>&#xA;&#xA;<pre><code>\f.&#xA; let x = undefined in&#xA; let x = f 3 in&#xA; f x&#xA;</code></pre>&#xA;&#xA;<p>HM would infer $\mathtt{f} : \mathtt{Int} \to \mathtt{Int}$ and consequently $\mathtt{g} : (\mathtt{Int} \to \mathtt{Int}) \to \mathtt{Int}$. The actual type I want to be able to infer is:</p>&#xA;&#xA;<p>$$\mathtt{g} : \forall \tau_1 \tau_2. \:(\mathtt{Int} \to \tau_1 \land \tau_1 \to \tau_2) \to \tau_2$$</p>&#xA;&#xA;<p>I’m already using functional dependencies to resolve the type of an overloaded <code>+</code> operator, so I figured it was a natural choice to use them to resolve the type of <code>f</code> within <code>g</code>. That is, the types of <code>f</code> in all its applications together uniquely determine the type of <code>g</code>. However, as it turns out, fundeps don’t lend themselves terribly well to variable numbers of source types.</p>&#xA;&#xA;<p>Anyway, the interplay of polymorphism and refinement typing is problematic. So is there a better approach I’m missing? 
I’m currently digesting “Refinement Types for ML” and would appreciate more literature or other pointers.</p>&#xA;
habedi/stack-exchange-dataset
2,923
Looking for a book that derives and constructs a model checking application
<p>I am teaching myself program verification and am currently learning <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="nofollow">proof assistants</a>. I have the book <a href="http://www.cambridge.org/gb/knowledge/isbn/item2327697/?site_locale=en_GB" rel="nofollow">Handbook of Practical Logic and Automated Reasoning</a> which gives the proofs necessary for the understanding of such a system, but more importantly for me it also gives an implementation of the necessary algorithms as <a href="http://www.cl.cam.ac.uk/~jrh13/atp/index.html" rel="nofollow">OCAML source</a>.</p>&#xA;&#xA;<p>I know that some of the tools listed in <a href="http://en.wikipedia.org/wiki/List_of_model_checking_tools" rel="nofollow">Wikipedia: Model Checking tools</a> and <a href="http://anna.fi.muni.cz/yahoda/" rel="nofollow">YAHODA: Verifications Tools Database</a> are open source, but I also prefer it when the theory, proofs, algorithms and source code are presented at the same time reinforcing each other, and in a progression building up to a final application.</p>&#xA;&#xA;<p>Is there such a book for model checking?</p>&#xA;&#xA;<p>EDIT </p>&#xA;&#xA;<p>I may have found what I am looking for in <a href="http://www.springer.com/computer/theoretical+computer+science/book/978-1-4471-4128-0" rel="nofollow">Mathematical Logic for Computer Science</a> with <a href="http://code.google.com/p/mlcs/" rel="nofollow">Prolog source</a>. As I don't have the book, does anyone know if this book fits the requirement?</p>&#xA;
reference request formal methods proof assistants model checking
1
Looking for a book that derives and constructs a model checking application -- (reference request formal methods proof assistants model checking) <p>I am teaching myself program verification and am currently learning <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="nofollow">proof assistants</a>. I have the book <a href="http://www.cambridge.org/gb/knowledge/isbn/item2327697/?site_locale=en_GB" rel="nofollow">Handbook of Practical Logic and Automated Reasoning</a> which gives the proofs necessary for the understanding of such a system, but more importantly for me it also gives an implementation of the necessary algorithms as <a href="http://www.cl.cam.ac.uk/~jrh13/atp/index.html" rel="nofollow">OCAML source</a>.</p>&#xA;&#xA;<p>I know that some of the tools listed in <a href="http://en.wikipedia.org/wiki/List_of_model_checking_tools" rel="nofollow">Wikipedia: Model Checking tools</a> and <a href="http://anna.fi.muni.cz/yahoda/" rel="nofollow">YAHODA: Verifications Tools Database</a> are open source, but I also prefer it when the theory, proofs, algorithms and source code are presented at the same time reinforcing each other, and in a progression building up to a final application.</p>&#xA;&#xA;<p>Is there such a book for model checking?</p>&#xA;&#xA;<p>EDIT </p>&#xA;&#xA;<p>I may have found what I am looking for in <a href="http://www.springer.com/computer/theoretical+computer+science/book/978-1-4471-4128-0" rel="nofollow">Mathematical Logic for Computer Science</a> with <a href="http://code.google.com/p/mlcs/" rel="nofollow">Prolog source</a>. As I don't have the book, does anyone know if this book fits the requirement?</p>&#xA;
habedi/stack-exchange-dataset
2,938
Can bottom-up architectures be effectively programmed in top-down paradigms?
<p>The <a href="http://en.wikipedia.org/wiki/Subsumption_architecture" rel="nofollow">subsumption architecture</a>, proposed by Rodney Brooks in 1986, is a "bottom-up" approach, in which robots are designed using simple hierarchical models. These models build upon and subsume the lower modules to form a final product. For example, a robot can be given a "find opening" module which is subsumed by a more abstract "find doorway" module, which is then itself subsumed by an "exit the playing field" module.</p>&#xA;&#xA;<p>Now, obviously, in a more "object-oriented" design, we could have started the design with the "exit the playing field" module, but for the sake of argument, assume some of the more primitive components (as "functions") will likely get reused in other higher concepts.</p>&#xA;&#xA;<p>Realizing that implementing this robot in a procedural language (or a functional one, for that matter) might be simplest, is it counterproductive to try to conceive of a subsumption-architecture-based robot in an object-oriented programming paradigm? (realizing also that it's perhaps difficult to weigh a software engineering paradigm against a robotics one) To obtain a solution, is there some form of "adapter" that can be implemented for increasing the effectiveness of using what may be two conflicting paradigms?</p>&#xA;
programming languages artificial intelligence algorithm design object oriented
1
Can bottom-up architectures be effectively programmed in top-down paradigms? -- (programming languages artificial intelligence algorithm design object oriented) <p>The <a href="http://en.wikipedia.org/wiki/Subsumption_architecture" rel="nofollow">subsumption architecture</a>, proposed by Rodney Brooks in 1986, is a "bottom-up" approach, in which robots are designed using simple hierarchical models. These models build upon and subsume the lower modules to form a final product. For example, a robot can be given a "find opening" module which is subsumed by a more abstract "find doorway" module, which is then itself subsumed by an "exit the playing field" module.</p>&#xA;&#xA;<p>Now, obviously, in a more "object-oriented" design, we could have started the design with the "exit the playing field" module, but for the sake of argument, assume some of the more primitive components (as "functions") will likely get reused in other higher concepts.</p>&#xA;&#xA;<p>Realizing that implementing this robot in a procedural language (or a functional one, for that matter) might be simplest, is it counterproductive to try to conceive of a subsumption-architecture-based robot in an object-oriented programming paradigm? (realizing also that it's perhaps difficult to weigh a software engineering paradigm against a robotics one) To obtain a solution, is there some form of "adapter" that can be implemented for increasing the effectiveness of using what may be two conflicting paradigms?</p>&#xA;
habedi/stack-exchange-dataset
2,939
Do non-computable functions grow asymptotically larger?
<p>I read about busy beaver numbers and how they grow asymptotically larger than any computable function. Why is this so? Is it because of the busy beaver function's non-computability? If so, then do all non-computable functions grow asymptotically larger than computable ones?</p>&#xA;&#xA;<p><strong>Edit:</strong></p>&#xA;&#xA;<p>Great answers below, but I would like to explain in plainer English what I understand of them.</p>&#xA;&#xA;<p>If there were a computable function f that grew faster than the busy beaver function, then the busy beaver function would be bounded by f. In other words, a Turing machine would simply need to run for f(n) many steps to decide the halting problem. Since we know the halting problem is undecidable, our initial presupposition is wrong. Therefore, the busy beaver function grows faster than all computable functions.</p>&#xA;
computability asymptotics
1
Do non-computable functions grow asymptotically larger? -- (computability asymptotics) <p>I read about busy beaver numbers and how they grow asymptotically larger than any computable function. Why is this so? Is it because of the busy beaver function's non-computability? If so, then do all non-computable functions grow asymptotically larger than computable ones?</p>&#xA;&#xA;<p><strong>Edit:</strong></p>&#xA;&#xA;<p>Great answers below, but I would like to explain in plainer English what I understand of them.</p>&#xA;&#xA;<p>If there were a computable function f that grew faster than the busy beaver function, then the busy beaver function would be bounded by f. In other words, a Turing machine would simply need to run for f(n) many steps to decide the halting problem. Since we know the halting problem is undecidable, our initial presupposition is wrong. Therefore, the busy beaver function grows faster than all computable functions.</p>&#xA;
habedi/stack-exchange-dataset
2,948
Counting with constant space bounded TMs
<p>The problem, coming from an interview question, is:</p>&#xA;&#xA;<blockquote>&#xA; <p>You have a stream of incoming numbers in range 0 to 60000 and you have&#xA; a function which will take a number from that range and return the&#xA; count of occurrence of that number till that moment. Give a suitable&#xA; Data structure/algorithm to implement this system.</p>&#xA;</blockquote>&#xA;&#xA;<p>The stream is infinite, so if fixed-size data structures are used, i.e. primitive types in Java or C, they will overflow. So there is the need to use data structures whose size grows over time. As pointed out by the interviewer, the memory occupied by those data structures will diverge.</p>&#xA;&#xA;<p>The model of computation is a Turing machine with three tapes:</p>&#xA;&#xA;<ul>&#xA;<li>infinite read-only one-way input tape;</li>&#xA;<li>constant space bounded read-write two-way work tape;</li>&#xA;<li>infinite write-only one-way output tape.</li>&#xA;</ul>&#xA;&#xA;<p>The main reason to choose the model above is that in the real world there is virtually no limit to the quantity of input that can be acquired using a keyboard or a network connection. Also, there is virtually no limit to the quantity of information that can be displayed on a monitor over time. But memory is limited and expensive.</p>&#xA;&#xA;<p>I modeled the problem as the problem of recognizing the language L of all couples (number, number of occurrences so far).</p>&#xA;&#xA;<p>As a corollary of Theorem 3.13 in Hopcroft-Ullman, I know that every language recognized by a constant space bounded machine is regular.</p>&#xA;&#xA;<p>But, at any given moment, the language L is a finite language, because the number of couples to be recognized is finite: 60001. 
So I can't use the pumping lemma for regular languages to prove that such language is not regular.</p>&#xA;&#xA;<p>Is there a way I can complete my proof?</p>&#xA;&#xA;<p>The original question is <a href="https://stackoverflow.com/questions/11708957/find-the-count-of-a-particular-number-in-an-infinite-stream-of-numbers-at-a-part">here</a>.</p>&#xA;
regular languages turing machines finite automata space complexity streaming algorithm
1
Counting with constant space bounded TMs -- (regular languages turing machines finite automata space complexity streaming algorithm) <p>The problem, coming from an interview question, is:</p>&#xA;&#xA;<blockquote>&#xA; <p>You have a stream of incoming numbers in range 0 to 60000 and you have&#xA; a function which will take a number from that range and return the&#xA; count of occurrence of that number till that moment. Give a suitable&#xA; Data structure/algorithm to implement this system.</p>&#xA;</blockquote>&#xA;&#xA;<p>The stream is infinite, so if fixed-size data structures are used, i.e. primitive types in Java or C, they will overflow. So there is the need to use data structures whose size grows over time. As pointed out by the interviewer, the memory occupied by those data structures will diverge.</p>&#xA;&#xA;<p>The model of computation is a Turing machine with three tapes:</p>&#xA;&#xA;<ul>&#xA;<li>infinite read-only one-way input tape;</li>&#xA;<li>constant space bounded read-write two-way work tape;</li>&#xA;<li>infinite write-only one-way output tape.</li>&#xA;</ul>&#xA;&#xA;<p>The main reason to choose the model above is that in the real world there is virtually no limit to the quantity of input that can be acquired using a keyboard or a network connection. Also, there is virtually no limit to the quantity of information that can be displayed on a monitor over time. But memory is limited and expensive.</p>&#xA;&#xA;<p>I modeled the problem as the problem of recognizing the language L of all couples (number, number of occurrences so far).</p>&#xA;&#xA;<p>As a corollary of Theorem 3.13 in Hopcroft-Ullman, I know that every language recognized by a constant space bounded machine is regular.</p>&#xA;&#xA;<p>But, at any given moment, the language L is a finite language, because the number of couples to be recognized is finite: 60001. 
So I can't use the pumping lemma for regular languages to prove that such a language is not regular.</p>&#xA;&#xA;<p>Is there a way I can complete my proof?</p>&#xA;&#xA;<p>The original question is <a href="https://stackoverflow.com/questions/11708957/find-the-count-of-a-particular-number-in-an-infinite-stream-of-numbers-at-a-part">here</a>.</p>&#xA;
habedi/stack-exchange-dataset
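For reference, a minimal sketch of the straightforward answer to the question above (Python, my own illustration; the class name and API are invented for the example): a table of arbitrary-precision counters. Python integers never overflow, which makes the space behaviour explicit — a counter holding the value c needs O(log c) bits, so memory does diverge as the question argues, but only logarithmically in the stream length.

```python
from collections import Counter

class StreamCounter:
    """Occurrence counts for a stream of numbers in range 0..60000.

    Each counter is an arbitrary-precision integer, so nothing ever
    overflows; a count of c occupies O(log c) bits, which is why the
    total memory diverges (slowly) on an infinite stream.
    """

    def __init__(self):
        self.counts = Counter()

    def feed(self, x):
        """Consume the next stream element."""
        if not 0 <= x <= 60000:
            raise ValueError("element out of range")
        self.counts[x] += 1

    def occurrences(self, x):
        """Count of occurrences of x seen so far."""
        return self.counts[x]
```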
2,952
Generating inputs for random-testing graph algorithms?
<p>When testing algorithms, a common approach is random testing: generate a significant number of inputs according to some distribution (usually uniform), run the algorithm on them and verify correctness. Modern testing frameworks can generate inputs automatically given the algorithm's signature, with some restrictions.</p>&#xA;&#xA;<p>If the inputs are numbers, lists or strings, generating such inputs is straightforward. Trees are harder, but still easy (using stochastic context-free grammars or similar approaches).</p>&#xA;&#xA;<p>How can you generate random graphs (efficiently)? Usually, picking graphs uniformly at random is not what you want: they should be connected, or planar, or cycle-free, or fulfill any other property. Rejection sampling seems suboptimal, due to the potentially huge set of undesirable graphs.</p>&#xA;&#xA;<p>What are useful distributions to look at? Useful here means that</p>&#xA;&#xA;<ul>&#xA;<li>the graphs are likely to test the algorithm at hand well and</li>&#xA;<li>they can be generated effectively and efficiently.</li>&#xA;</ul>&#xA;&#xA;<p>I know that there are many models for random graphs, so I'd appreciate some insight into which are best for graph generation in this regard.</p>&#xA;&#xA;<p>If "some algorithm" is too general, please use shortest-path finding algorithms as a concrete class of algorithms under test. Graphs for testing should be connected and rather dense (with high probability, or at least in expectation). For testing, the optimal solution would be to create random graphs around a shortest path so we <em>know</em> the desired result (without having to employ another algorithm).</p>&#xA;
algorithms graphs randomness software testing
1
Generating inputs for random-testing graph algorithms? -- (algorithms graphs randomness software testing) <p>When testing algorithms, a common approach is random testing: generate a significant number of inputs according to some distribution (usually uniform), run the algorithm on them and verify correctness. Modern testing frameworks can generate inputs automatically given the algorithm's signature, with some restrictions.</p>&#xA;&#xA;<p>If the inputs are numbers, lists or strings, generating such inputs is straightforward. Trees are harder, but still easy (using stochastic context-free grammars or similar approaches).</p>&#xA;&#xA;<p>How can you generate random graphs (efficiently)? Usually, picking graphs uniformly at random is not what you want: they should be connected, or planar, or cycle-free, or fulfill any other property. Rejection sampling seems suboptimal, due to the potentially huge set of undesirable graphs.</p>&#xA;&#xA;<p>What are useful distributions to look at? Useful here means that</p>&#xA;&#xA;<ul>&#xA;<li>the graphs are likely to test the algorithm at hand well and</li>&#xA;<li>they can be generated effectively and efficiently.</li>&#xA;</ul>&#xA;&#xA;<p>I know that there are many models for random graphs, so I'd appreciate some insight into which are best for graph generation in this regard.</p>&#xA;&#xA;<p>If "some algorithm" is too general, please use shortest-path finding algorithms as a concrete class of algorithms under test. Graphs for testing should be connected and rather dense (with high probability, or at least in expectation). For testing, the optimal solution would be to create random graphs around a shortest path so we <em>know</em> the desired result (without having to employ another algorithm).</p>&#xA;
habedi/stack-exchange-dataset
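One common recipe for the "connected and rather dense" requirement in the question above (this is my own sketch, not something prescribed in the question): first wire the vertices together with a random spanning tree, which guarantees connectivity, then add extra uniformly random edges until the desired density is reached.

```python
import random

def random_connected_graph(n, extra_edges, seed=None):
    """Generate a connected undirected graph on vertices 0..n-1.

    Links the vertices with a random spanning tree (guaranteeing
    connectivity), then adds `extra_edges` further distinct edges
    chosen uniformly at random.  Note: this does NOT sample uniformly
    from all connected graphs -- the attachment step biases the tree
    distribution (a uniform spanning tree would need e.g. Wilson's
    algorithm).
    """
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    # random spanning tree: attach each new vertex to a random earlier one
    for i in range(1, n):
        u = nodes[i]
        v = nodes[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    # densify with extra random edges (capped at the complete graph)
    while len(edges) < n - 1 + extra_edges and len(edges) < n * (n - 1) // 2:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges
```

For shortest-path testing one could additionally plant a known path among the tree edges before densifying, so the expected answer is known by construction.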
2,961
Example using Penetrance & Branching Factor in State-space Heuristic Search
<p>I need an example for how to calculate penetrance and branching factor of the search tree in state-space heuristic search. The definitions are as follows. <em>Penetrance</em> $P$ is defined by</p>&#xA;&#xA;<p>$\qquad \displaystyle P = \frac{L}{T}$</p>&#xA;&#xA;<p>and <em>branching factor</em> $B$ is defined by</p>&#xA;&#xA;<p>$\qquad \displaystyle \frac{B}{(B-1)} \cdot (B^L - 1) = T$ </p>&#xA;&#xA;<p>where $L$ is the length of the path from the root to the solution and $T$ the total number of nodes expanded.</p>&#xA;
artificial intelligence search algorithms
1
Example using Penetrance & Branching Factor in State-space Heuristic Search -- (artificial intelligence search algorithms) <p>I need an example for how to calculate penetrance and branching factor of the search tree in state-space heuristic search. The definitions are as follows. <em>Penetrance</em> $P$ is defined by</p>&#xA;&#xA;<p>$\qquad \displaystyle P = \frac{L}{T}$</p>&#xA;&#xA;<p>and <em>branching factor</em> $B$ is defined by</p>&#xA;&#xA;<p>$\qquad \displaystyle \frac{B}{(B-1)} \cdot (B^L - 1) = T$ </p>&#xA;&#xA;<p>where $L$ is the length of the path from the root to the solution and $T$ the total number of nodes expanded.</p>&#xA;
habedi/stack-exchange-dataset
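A small worked example for the question above (the numbers are mine, chosen only for illustration): suppose the search expanded $T = 100$ nodes and the solution lies at depth $L = 4$. Then $P = 4/100 = 0.04$, and since $\frac{B}{B-1}(B^L - 1) = B + B^2 + \dots + B^L$, the effective branching factor solves $B + B^2 + B^3 + B^4 = 100$, which a few lines of bisection find numerically:

```python
def penetrance(L, T):
    """Penetrance P = L / T."""
    return L / T

def branching_factor(L, T, tol=1e-9):
    """Solve B/(B-1) * (B**L - 1) = T for B > 1 by bisection.

    The left-hand side equals B + B**2 + ... + B**L, which is strictly
    increasing in B, so bisection applies (assumes T > L, so that a
    root with B > 1 exists).
    """
    f = lambda B: B / (B - 1) * (B ** L - 1) - T
    lo, hi = 1.0 + 1e-9, float(T)  # f(lo) < 0, f(hi) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For $L = 4$, $T = 100$ this gives $B \approx 2.85$.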
2,962
Period in postulate; what does it mean?
<p>While I am learning a lot from others here at the Computer Science site, I must admit that I don't get as much out of some questions and answers since I typically don't understand the theorems to the level necessary. I am currently reading <a href="http://rads.stackoverflow.com/amzn/click/0910319375" rel="nofollow noreferrer">How To Prove It - A Structured Approach</a> which is starting to make the theorems easier to read, but still does not get me to the point of being able to understand the theorems to the point that they add great insight to the question or answer.</p>&#xA;&#xA;<p>For this question <a href="https://cs.stackexchange.com/q/2646/268">Is it possible to always construct a hamiltonian path on a tournament graph by sorting?</a></p>&#xA;&#xA;<p>there is a use of a period in the premise.</p>&#xA;&#xA;<blockquote>&#xA; <p>$\qquad \displaystyle a \leq b \iff (a,b) \in E \lor \left(\exists\, c \in V. a \leq c \land c \leq b\right)$</p>&#xA;</blockquote>&#xA;&#xA;<p>What does the period mean? I would be expecting either a comma to mean conjunction or "or" to mean disjunction, but not a period. I don't see how this could be converted to logical statements.</p>&#xA;&#xA;<p>Note: <a href="https://cs.meta.stackexchange.com/users/232/jukka-suomela">Jukka Suomela</a> has already provided the answer in this <a href="https://cs.meta.stackexchange.com/q/493/268">CS meta question</a>.</p>&#xA;
terminology logic
1
Period in postulate; what does it mean? -- (terminology logic) <p>While I am learning a lot from others here at the Computer Science site, I must admit that I don't get as much out of some questions and answers since I typically don't understand the theorems to the level necessary. I am currently reading <a href="http://rads.stackoverflow.com/amzn/click/0910319375" rel="nofollow noreferrer">How To Prove It - A Structured Approach</a> which is starting to make the theorems easier to read, but still does not get me to the point of being able to understand the theorems to the point that they add great insight to the question or answer.</p>&#xA;&#xA;<p>For this question <a href="https://cs.stackexchange.com/q/2646/268">Is it possible to always construct a hamiltonian path on a tournament graph by sorting?</a></p>&#xA;&#xA;<p>there is a use of a period in the premise.</p>&#xA;&#xA;<blockquote>&#xA; <p>$\qquad \displaystyle a \leq b \iff (a,b) \in E \lor \left(\exists\, c \in V. a \leq c \land c \leq b\right)$</p>&#xA;</blockquote>&#xA;&#xA;<p>What does the period mean? I would be expecting either a comma to mean conjunction or "or" to mean disjunction, but not a period. I don't see how this could be converted to logical statements.</p>&#xA;&#xA;<p>Note: <a href="https://cs.meta.stackexchange.com/users/232/jukka-suomela">Jukka Suomela</a> has already provided the answer in this <a href="https://cs.meta.stackexchange.com/q/493/268">CS meta question</a>.</p>&#xA;
habedi/stack-exchange-dataset
2,965
Looking for books on creating and understanding theorems targeted at Computer Science
<p>In studying logic to understand verifying programs I have found that there are books on logic targeted at Computer Science e.g.</p>&#xA;&#xA;<ul>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/052154310X" rel="nofollow">Logic in Computer Science: Modelling and Reasoning about Systems </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/1447141288" rel="nofollow">Mathematical Logic for Computer Science </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/0521701465" rel="nofollow">Computability and Logic </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/0521899575" rel="nofollow">Handbook of Practical Logic and Automated Reasoning </a></li>&#xA;</ul>&#xA;&#xA;<p>With regards to books on understanding theorems targeted at Computer Science I find only one that may fit. As I don't have the book I can't say for sure.</p>&#xA;&#xA;<ul>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/081764220X" rel="nofollow">Handbook of Logic and Proof Techniques for Computer Science </a></li>&#xA;</ul>&#xA;&#xA;<p>Are there any books for understanding theorems targeted at Computer Science? In other words are there books for understanding syntax, semantics and construction of theorems that don't rely on a heavy math background and that give examples more from the world of computer science and explain in a style more natural to a person in computer science.</p>&#xA;&#xA;<p>EDIT</p>&#xA;&#xA;<p>After seeking more on this topic I have come upon the phrases "informal mathematics" and "mathematical discourse" which are starting to turn up useful info from Google. In particular the following: <a href="https://sites.google.com/site/clauszinn/verifying-informal-proofs/37_04.pdf?attredirects=0" rel="nofollow">Understanding Informal Mathematical Discourse</a> found at <a href="https://sites.google.com/site/clauszinn/verifying-informal-proofs/" rel="nofollow">Understanding Informal Mathematical Proofs</a></p>&#xA;
logic proof techniques books
1
Looking for books on creating and understanding theorems targeted at Computer Science -- (logic proof techniques books) <p>In studying logic to understand verifying programs I have found that there are books on logic targeted at Computer Science e.g.</p>&#xA;&#xA;<ul>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/052154310X" rel="nofollow">Logic in Computer Science: Modelling and Reasoning about Systems </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/1447141288" rel="nofollow">Mathematical Logic for Computer Science </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/0521701465" rel="nofollow">Computability and Logic </a></li>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/0521899575" rel="nofollow">Handbook of Practical Logic and Automated Reasoning </a></li>&#xA;</ul>&#xA;&#xA;<p>With regards to books on understanding theorems targeted at Computer Science I find only one that may fit. As I don't have the book I can't say for sure.</p>&#xA;&#xA;<ul>&#xA;<li><a href="http://rads.stackoverflow.com/amzn/click/081764220X" rel="nofollow">Handbook of Logic and Proof Techniques for Computer Science </a></li>&#xA;</ul>&#xA;&#xA;<p>Are there any books for understanding theorems targeted at Computer Science? In other words are there books for understanding syntax, semantics and construction of theorems that don't rely on a heavy math background and that give examples more from the world of computer science and explain in a style more natural to a person in computer science.</p>&#xA;&#xA;<p>EDIT</p>&#xA;&#xA;<p>After seeking more on this topic I have come upon the phrases "informal mathematics" and "mathematical discourse" which are starting to turn up useful info from Google. 
In particular the following: <a href="https://sites.google.com/site/clauszinn/verifying-informal-proofs/37_04.pdf?attredirects=0" rel="nofollow">Understanding Informal Mathematical Discourse</a> found at <a href="https://sites.google.com/site/clauszinn/verifying-informal-proofs/" rel="nofollow">Understanding Informal Mathematical Proofs</a></p>&#xA;
habedi/stack-exchange-dataset
2,971
Solving the recurrence relation $T(n) = 2T(\lfloor n/2 \rfloor) + n$
<p>Solving the recurrence relation $T(n) = 2T(\lfloor n/2 \rfloor) + n$.<br>&#xA;The book from which this example is taken falsely claims that $T(n) = O(n)$ by guessing $T(n) \leq cn$ and then arguing </p>&#xA;&#xA;<p>$\qquad \begin{align*} T(n) &amp; \leq 2(c \lfloor n/2 \rfloor ) + n \\ &amp;\leq cn +n \\ &amp;=O(n) \quad \quad \quad \longleftarrow \text{ wrong!!} \end{align*}$ </p>&#xA;&#xA;<p>since $c$ is constant. The error is that we have not proved the <em>exact</em> form of the inductive hypothesis.</p>&#xA;&#xA;<p>Above I have exactly quoted what the book says. Now my question is why cannot we write $cn+n=dn$ where $d=c+1$ and now we have $T(n) \leq dn$ and hence $T(n) = O(n)$?</p>&#xA;&#xA;<p>Note: </p>&#xA;&#xA;<ol>&#xA;<li>The correct answer is $T(n) =O(n \log n).$ </li>&#xA;<li>The book I am referring to here is <em>Introduction to algorithms</em> by Cormen et al., page 86, 3rd edition.</li>&#xA;</ol>&#xA;
proof techniques asymptotics recurrence relation landau notation induction
1
Solving the recurrence relation $T(n) = 2T(\lfloor n/2 \rfloor) + n$ -- (proof techniques asymptotics recurrence relation landau notation induction) <p>Solving the recurrence relation $T(n) = 2T(\lfloor n/2 \rfloor) + n$.<br>&#xA;The book from which this example is taken falsely claims that $T(n) = O(n)$ by guessing $T(n) \leq cn$ and then arguing </p>&#xA;&#xA;<p>$\qquad \begin{align*} T(n) &amp; \leq 2(c \lfloor n/2 \rfloor ) + n \\ &amp;\leq cn +n \\ &amp;=O(n) \quad \quad \quad \longleftarrow \text{ wrong!!} \end{align*}$ </p>&#xA;&#xA;<p>since $c$ is constant. The error is that we have not proved the <em>exact</em> form of the inductive hypothesis.</p>&#xA;&#xA;<p>Above I have exactly quoted what the book says. Now my question is why cannot we write $cn+n=dn$ where $d=c+1$ and now we have $T(n) \leq dn$ and hence $T(n) = O(n)$?</p>&#xA;&#xA;<p>Note: </p>&#xA;&#xA;<ol>&#xA;<li>The correct answer is $T(n) =O(n \log n).$ </li>&#xA;<li>The book I am referring to here is <em>Introduction to algorithms</em> by Cormen et al., page 86, 3rd edition.</li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
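The gap between the two bounds in the question above can also be seen numerically (my own illustration, with base case $T(1) = 1$): the ratio $T(n)/n$ grows without bound, so $T(n)$ is not $O(n)$, while $T(n)/(n \log n)$ stays bounded, consistent with $T(n) = \Theta(n \log n)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Exact value of T(n) = 2*T(floor(n/2)) + n, with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For powers of two, T(2**k) = 2**k * (k + 1), so:
#   T(n)/n           = k + 1       -> unbounded (T is not O(n))
#   T(n)/(n log2 n)  = (k + 1)/k   -> bounded (consistent with Theta(n log n))
ratios_linear = [T(2 ** k) / 2 ** k for k in range(1, 16)]
ratios_nlogn = [T(2 ** k) / (2 ** k * k) for k in range(1, 16)]
```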
2,973
Generalised 3SUM (k-SUM) problem?
<p>The <a href="http://en.wikipedia.org/wiki/3SUM">3SUM</a> problem tries to identify 3 integers $a,b,c$ from a set $S$ of size $n$ such that $a + b + c = 0$.</p>&#xA;&#xA;<p>It is conjectured that there is no better solution than quadratic, i.e. $O(n^2)$. Or to put it differently: $O(n \log(n) + n^2)$.</p>&#xA;&#xA;<p>So I was wondering if this would apply to the generalised problem: Find integers $a_i$ for $i \in [1..k]$ in a set $S$ of size $n$ such that $\sum_{i \in [1..k]} a_i = 0$.</p>&#xA;&#xA;<p>I think you can do this in $O(n \log(n) + n^{k-1})$ for $k \geq 2$ (it's trivial to generalise the simple $k=3$ algorithm).<br>&#xA;But are there better algorithms for other values of $k$?</p>&#xA;
complexity theory combinatorics complexity classes
1
Generalised 3SUM (k-SUM) problem? -- (complexity theory combinatorics complexity classes) <p>The <a href="http://en.wikipedia.org/wiki/3SUM">3SUM</a> problem tries to identify 3 integers $a,b,c$ from a set $S$ of size $n$ such that $a + b + c = 0$.</p>&#xA;&#xA;<p>It is conjectured that there is no better solution than quadratic, i.e. $O(n^2)$. Or to put it differently: $O(n \log(n) + n^2)$.</p>&#xA;&#xA;<p>So I was wondering if this would apply to the generalised problem: Find integers $a_i$ for $i \in [1..k]$ in a set $S$ of size $n$ such that $\sum_{i \in [1..k]} a_i = 0$.</p>&#xA;&#xA;<p>I think you can do this in $O(n \log(n) + n^{k-1})$ for $k \geq 2$ (it's trivial to generalise the simple $k=3$ algorithm).<br>&#xA;But are there better algorithms for other values of $k$?</p>&#xA;
habedi/stack-exchange-dataset
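For concreteness, here is the standard $O(n^2)$ routine for $k = 3$ that the question above proposes to generalise: sort once, then for each fixed smallest element sweep the remainder with two pointers (a sketch; whether duplicates should be reported is not specified in the question, so this version returns unique combinations).

```python
def three_sum_zero(nums):
    """Return all unique triples (a, b, c) from nums with a + b + c == 0.

    O(n^2) after an O(n log n) sort: fix the smallest element of the
    triple, then move two pointers inward over the rest of the array.
    """
    nums = sorted(nums)
    n = len(nums)
    out = set()
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                out.add((nums[i], nums[lo], nums[hi]))
                lo += 1
                hi -= 1
            elif s < 0:
                lo += 1   # total too small: raise the lower value
            else:
                hi -= 1   # total too large: lower the upper value
    return sorted(out)
```

The generalisation to k-SUM fixes $k - 2$ elements by nested loops and runs the same two-pointer sweep on the rest, giving the $O(n^{k-1})$ bound mentioned in the question.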
2,981
Showing that the set of TMs which visit the starting state twice on the empty input is undecidable
<p>I'm trying to prove that </p>&#xA;&#xA;<p>$L_1=\{\langle M\rangle \mid M \text{ is a Turing machine and visits } q_0 \text{ at least twice on } \varepsilon\} \notin R$.</p>&#xA;&#xA;<p>I'm not sure whether to reduce the halting problem to it or not. I tried to construct a new machine $M'$ for $(\langle M \rangle,w)$, such that $M'$ visits $q_0$ twice, iff $M$ halts on $w$. This is a specific $q_0$ given to me, but I didn't come to any smart construction that would yield the requested behaviour. Maybe it's easier to show that it's $RE$ and not $coRE$? It is obvious that it's in $RE$, and I need to show that $L_1^{c}$ is not in $RE$.</p>&#xA;&#xA;<p>What should I do?</p>&#xA;
turing machines reductions undecidability halting problem
1
Showing that the set of TMs which visit the starting state twice on the empty input is undecidable -- (turing machines reductions undecidability halting problem) <p>I'm trying to prove that </p>&#xA;&#xA;<p>$L_1=\{\langle M\rangle \mid M \text{ is a Turing machine and visits } q_0 \text{ at least twice on } \varepsilon\} \notin R$.</p>&#xA;&#xA;<p>I'm not sure whether to reduce the halting problem to it or not. I tried to construct a new machine $M'$ for $(\langle M \rangle,w)$, such that $M'$ visits $q_0$ twice, iff $M$ halts on $w$. This is a specific $q_0$ given to me, but I didn't come to any smart construction that would yield the requested behaviour. Maybe it's easier to show that it's $RE$ and not $coRE$? It is obvious that it's in $RE$, and I need to show that $L_1^{c}$ is not in $RE$.</p>&#xA;&#xA;<p>What should I do?</p>&#xA;
habedi/stack-exchange-dataset
2,982
Examples of undecidable problems whose intersection is decidable
<p>I know that given that two problems are undecidable it does not follow that their intersection must be undecidable. For example, take a property of languages $P$ such that it is undecidable whether the language accepted by a given pushdown automaton $M$ has that property. Clearly $P$ and $\lnot P$ are undecidable (for a given $M$) but $P \cap \lnot P$ is trivially decidable (it is always false).</p>&#xA;&#xA;<p>I wonder if there are any "real life" examples which do not make use of the "trick" above? When I say "real life" I do not necessarily mean problems which people come across in their day to day life, I mean examples where we do not take a problem and its complement. It would be interesting (to me) if there are examples where the intersection is not trivially decidable.</p>&#xA;
reference request undecidability decision problem
1
Examples of undecidable problems whose intersection is decidable -- (reference request undecidability decision problem) <p>I know that given that two problems are undecidable it does not follow that their intersection must be undecidable. For example, take a property of languages $P$ such that it is undecidable whether the language accepted by a given pushdown automaton $M$ has that property. Clearly $P$ and $\lnot P$ are undecidable (for a given $M$) but $P \cap \lnot P$ is trivially decidable (it is always false).</p>&#xA;&#xA;<p>I wonder if there are any "real life" examples which do not make use of the "trick" above? When I say "real life" I do not necessarily mean problems which people come across in their day to day life, I mean examples where we do not take a problem and its complement. It would be interesting (to me) if there are examples where the intersection is not trivially decidable.</p>&#xA;
habedi/stack-exchange-dataset
2,985
Micro-optimisation for edit distance computation: is it valid?
<p>On <a href="https://en.wikipedia.org/wiki/Levenshtein_distance#Computing_Levenshtein_distance">Wikipedia</a>, an implementation for the bottom-up dynamic programming scheme for the edit distance is given. It does not follow the definition completely; inner cells are computed thus:</p>&#xA;&#xA;<pre><code>if s[i] = t[j] then &#xA; d[i, j] := d[i-1, j-1] // no operation required&#xA;else&#xA; d[i, j] := minimum&#xA; (&#xA; d[i-1, j] + 1, // a deletion&#xA; d[i, j-1] + 1, // an insertion&#xA; d[i-1, j-1] + 1 // a substitution&#xA; )&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>As you can see, the algorithm <em>always</em> chooses the value from the upper-left neighbour if there is a match, saving some memory accesses, ALU operations and comparisons. </p>&#xA;&#xA;<p>However, deletion (or insertion) may result in a <em>smaller</em> value, thus the algorithm is locally incorrect, i.e. it breaks with the optimality criterion. But maybe the mistake does not change the end result -- it might be cancelled out.</p>&#xA;&#xA;<p>Is this micro-optimisation valid, and why (not)?</p>&#xA;
algorithms dynamic programming string metrics correctness proof program optimization
1
Micro-optimisation for edit distance computation: is it valid? -- (algorithms dynamic programming string metrics correctness proof program optimization) <p>On <a href="https://en.wikipedia.org/wiki/Levenshtein_distance#Computing_Levenshtein_distance">Wikipedia</a>, an implementation for the bottom-up dynamic programming scheme for the edit distance is given. It does not follow the definition completely; inner cells are computed thus:</p>&#xA;&#xA;<pre><code>if s[i] = t[j] then &#xA; d[i, j] := d[i-1, j-1] // no operation required&#xA;else&#xA; d[i, j] := minimum&#xA; (&#xA; d[i-1, j] + 1, // a deletion&#xA; d[i, j-1] + 1, // an insertion&#xA; d[i-1, j-1] + 1 // a substitution&#xA; )&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>As you can see, the algorithm <em>always</em> chooses the value from the upper-left neighbour if there is a match, saving some memory accesses, ALU operations and comparisons. </p>&#xA;&#xA;<p>However, deletion (or insertion) may result in a <em>smaller</em> value, thus the algorithm is locally incorrect, i.e. it breaks with the optimality criterion. But maybe the mistake does not change the end result -- it might be cancelled out.</p>&#xA;&#xA;<p>Is this micro-optimisation valid, and why (not)?</p>&#xA;
habedi/stack-exchange-dataset
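The shortcut in the question above is in fact safe: adjacent cells of the Levenshtein table differ by at most 1, so on a match $d[i-1,j-1]$ can never exceed $d[i-1,j]+1$ or $d[i,j-1]+1$, and taking it directly loses nothing. A quick empirical cross-check of the two variants (my own sketch) supports this:

```python
import itertools

def edit_distance(s, t, shortcut):
    """Levenshtein distance between s and t via bottom-up DP.

    With shortcut=True, a character match copies the upper-left cell
    directly (Wikipedia's micro-optimisation); with shortcut=False it
    takes the full three-way minimum as the definition prescribes.
    """
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                if shortcut:
                    d[i][j] = d[i - 1][j - 1]  # the micro-optimisation
                else:
                    d[i][j] = min(d[i - 1][j] + 1,      # deletion
                                  d[i][j - 1] + 1,      # insertion
                                  d[i - 1][j - 1])      # match, cost 0
            else:
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + 1)      # substitution
    return d[m][n]

# exhaustive check over short binary strings: the two variants agree
for a in range(5):
    for b in range(5):
        for s in itertools.product("ab", repeat=a):
            for t in itertools.product("ab", repeat=b):
                assert edit_distance(s, t, True) == edit_distance(s, t, False)
```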
2,987
Automated geometric theorem-proving using synthetic methods
<p>This question is about geometric theorem proving and is inspired by this <a href="https://math.stackexchange.com/questions/31192/is-it-possible-to-solve-any-euclidean-geometry-problem-using-a-computer"> Math.SE </a> post. Currently, Euclidean-geometric theorem provers, as referred to in the post, use coordinate geometry to convert a geometry problem into a set of algebraic equations. </p>&#xA;&#xA;<blockquote>&#xA; <p>Why haven't people developed a theorem prover that uses <i>synthetic</i> reasoning?</p>&#xA;</blockquote>&#xA;&#xA;<p>By 'synthetic' I mean reasoning from axioms. I feel that synthetic reasoning would be more insightful than solving a large number of equations; yet I am unsure about how well it lends itself to implementation. Can you offer more insight? What would be the benefits and drawbacks of such a prover?</p>&#xA;&#xA;<p>Also, I felt that my question would be more appropriate here than on Math.SE.</p>&#xA;
artificial intelligence automated theorem proving
1
Automated geometric theorem-proving using synthetic methods -- (artificial intelligence automated theorem proving) <p>This question is about geometric theorem proving and is inspired by this <a href="https://math.stackexchange.com/questions/31192/is-it-possible-to-solve-any-euclidean-geometry-problem-using-a-computer"> Math.SE </a> post. Currently, Euclidean-geometric theorem provers, as referred to in the post, use coordinate geometry to convert a geometry problem into a set of algebraic equations. </p>&#xA;&#xA;<blockquote>&#xA; <p>Why haven't people developed a theorem prover that uses <i>synthetic</i> reasoning?</p>&#xA;</blockquote>&#xA;&#xA;<p>By 'synthetic' I mean reasoning from axioms. I feel that synthetic reasoning would be more insightful than solving a large number of equations; yet I am unsure about how well it lends itself to implementation. Can you offer more insight? What would be the benefits and drawbacks of such a prover?</p>&#xA;&#xA;<p>Also, I felt that my question would be more appropriate here than on Math.SE.</p>&#xA;
habedi/stack-exchange-dataset
2,988
How fast can we find all Four-Square combinations that sum to N?
<p>A question was asked at Stack Overflow (<a href="https://stackoverflow.com/questions/11732555/how-to-find-all-possible-values-of-four-variables-when-squared-sum-to-n#comment15599644_11732555">here</a>):</p>&#xA;&#xA;<blockquote>&#xA; <p>Given an integer $N$, print out all possible&#xA; combinations of integer values of $A,B,C$ and $D$ which solve the equation $A^2+B^2+C^2+D^2 = N$.</p>&#xA;</blockquote>&#xA;&#xA;<p>This question is of course related to <a href="http://en.wikipedia.org/wiki/Lagrange%27s_four-square_theorem" rel="nofollow noreferrer">Bachet's Conjecture</a> in number theory (sometimes called Lagrange's Four Square Theorem because of his proof). There are some papers that discuss how to find a single solution, but I have been unable to find anything that talks about how fast we can find <em>all</em> solutions for a particular $N$ (that is, all <em>combinations</em>, not all <em>permutations</em>).</p>&#xA;&#xA;<p>I have been thinking about it quite a bit and it seems to me that it can be solved in $O(N)$ time and space, where $N$ is the desired sum. However, lacking any prior information on the subject, I am not sure if that is a significant claim on my part or just a trivial, obvious or already known result.</p>&#xA;&#xA;<p>So, the question then is, how fast can we find all of the Four-Square Sums for a given $N$?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>OK, here's the (nearly) O(N) algorithm that I was thinking of. 
First two supporting functions, a nearest integer square root function:</p>&#xA;&#xA;<pre><code> // the nearest integer whose square is less than or equal to N&#xA; public int SquRt(int N)&#xA; {&#xA; return (int)Math.Sqrt((double)N);&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>And a function to return all TwoSquare pairs summing from 0 to N:</p>&#xA;&#xA;<pre><code> // Returns a list of all sums of two squares less than or equal to N, in order.&#xA; public List&lt;List&lt;int[]&gt;&gt; TwoSquareSumsLessThan(int N)&#xA; {&#xA; //Make the index array&#xA; List&lt;int[]&gt;[] Sum2Sqs = new List&lt;int[]&gt;[N + 1];&#xA;&#xA; //get the base square root, which is the maximum possible root value&#xA; int baseRt = SquRt(N);&#xA;&#xA; for (int i = baseRt; i &gt;= 0; i--)&#xA; {&#xA; for (int j = 0; j &lt;= i; j++)&#xA; {&#xA; int sum = (i * i) + (j * j);&#xA; if (sum &gt; N)&#xA; {&#xA; break;&#xA; }&#xA; else&#xA; {&#xA; //make the new pair&#xA; int[] sumPair = { i, j };&#xA; //get the sumList entry&#xA; List&lt;int[]&gt; sumLst;&#xA; if (Sum2Sqs[sum] == null)&#xA; { &#xA; // make it if we need to&#xA; sumLst = new List&lt;int[]&gt;();&#xA; Sum2Sqs[sum] = sumLst;&#xA; }&#xA; else&#xA; {&#xA; sumLst = Sum2Sqs[sum];&#xA; }&#xA; // add the pair to the correct list&#xA; sumLst.Add(sumPair);&#xA; }&#xA; }&#xA; }&#xA;&#xA; //collapse the index array down to a sequential list&#xA; List&lt;List&lt;int[]&gt;&gt; result = new List&lt;List&lt;int[]&gt;&gt;();&#xA; for (int nn = 0; nn &lt;= N; nn++)&#xA; {&#xA; if (Sum2Sqs[nn] != null) result.Add(Sum2Sqs[nn]);&#xA; }&#xA;&#xA; return result;&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>Finally, the algorithm itself:</p>&#xA;&#xA;<pre><code> // Return a list of all integer quads (a,b,c,d), where:&#xA; // a^2 + b^2 + c^2 + d^2 = N,&#xA; // and a &gt;= b &gt;= c &gt;= d,&#xA; // and a,b,c,d &gt;= 0&#xA; public List&lt;int[]&gt; FindAllFourSquares(int N)&#xA; {&#xA; // get all two-square sums &lt;= N, in descending order&#xA; 
List&lt;List&lt;int[]&gt;&gt; Sqr2s = TwoSquareSumsLessThan(N);&#xA;&#xA; // Cross the descending list of two-square sums &lt;= N with&#xA; // the same list in ascending order, using a Merge-Match&#xA; // algorithm to find all combinations of pairs of two-square&#xA; // sums that add up to N&#xA; List&lt;int[]&gt; hiList, loList;&#xA; int[] hp, lp;&#xA; int hiSum, loSum;&#xA; List&lt;int[]&gt; results = new List&lt;int[]&gt;();&#xA; int prevHi = -1;&#xA; int prevLo = -1;&#xA;&#xA; // Set the Merge sources to the highest and lowest entries in the list&#xA; int hi = Sqr2s.Count - 1;&#xA; int lo = 0;&#xA;&#xA; // Merge until done ..&#xA; while (hi &gt;= lo)&#xA; {&#xA; // check to see if the points have moved&#xA; if (hi != prevHi)&#xA; {&#xA; hiList = Sqr2s[hi];&#xA; hp = hiList[0]; // these lists cannot be empty&#xA; hiSum = hp[0] * hp[0] + hp[1] * hp[1];&#xA; prevHi = hi;&#xA; }&#xA; if (lo != prevLo)&#xA; {&#xA; loList = Sqr2s[lo];&#xA; lp = loList[0]; // these lists cannot be empty&#xA; loSum = lp[0] * lp[0] + lp[1] * lp[1];&#xA; prevLo = lo;&#xA; }&#xA;&#xA; // do the two entries' sums together add up to N?&#xA; if (hiSum + loSum == N)&#xA; {&#xA; // they add up, so cross the two sum-lists over each other&#xA; foreach (int[] hiPair in hiList)&#xA; {&#xA; foreach (int[] loPair in loList)&#xA; {&#xA; // make a new 4-tuple and fill it&#xA; int[] quad = new int[4];&#xA; quad[0] = hiPair[0];&#xA; quad[1] = hiPair[1];&#xA; quad[2] = loPair[0];&#xA; quad[3] = loPair[1];&#xA;&#xA; // only keep those cases where the tuple is already sorted&#xA; //(otherwise it's a duplicate entry)&#xA; if (quad[1] &gt;= quad[2]) //(only need to check this one case, the others are implicit)&#xA; {&#xA; results.Add(quad);&#xA; }&#xA; //(there's a special case where all values of the 4-tuple are equal&#xA; // that should be handled to prevent duplicate entries, but I'm&#xA; // skipping it for now)&#xA; }&#xA; }&#xA; // both the HI and LO points must be moved after a Match&#xA; hi--;&#xA; 
lo++;&#xA; }&#xA; else if (hiSum + loSum &lt; N)&#xA; {&#xA; lo++; // too low, so must increase the LO point&#xA; }&#xA; else // must be &gt; N&#xA; {&#xA; hi--; // too high, so must decrease the HI point&#xA; }&#xA; }&#xA; return results;&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>As I said before, it should be pretty close to O(N); however, as Yuval Filmus points out, since the number of Four-Square solutions to N can be of order (N ln ln N), this algorithm could not be faster than that.</p>&#xA;
complexity theory time complexity number theory enumeration
1
How fast can we find all Four-Square combinations that sum to N? -- (complexity theory time complexity number theory enumeration) <p>A question was asked at Stack Overflow (<a href="https://stackoverflow.com/questions/11732555/how-to-find-all-possible-values-of-four-variables-when-squared-sum-to-n#comment15599644_11732555">here</a>):</p>&#xA;&#xA;<blockquote>&#xA; <p>Given an integer $N$, print out all possible&#xA; combinations of integer values of $A,B,C$ and $D$ which solve the equation $A^2+B^2+C^2+D^2 = N$.</p>&#xA;</blockquote>&#xA;&#xA;<p>This question is of course related to <a href="http://en.wikipedia.org/wiki/Lagrange%27s_four-square_theorem" rel="nofollow noreferrer">Bachet's Conjecture</a> in number theory (sometimes called Lagrange's Four Square Theorem because of his proof). There are some papers that discuss how to find a single solution, but I have been unable to find anything that talks about how fast we can find <em>all</em> solutions for a particular $N$ (that is, all <em>combinations</em>, not all <em>permutations</em>).</p>&#xA;&#xA;<p>I have been thinking about it quite a bit and it seems to me that it can be solved in $O(N)$ time and space, where $N$ is the desired sum. However, lacking any prior information on the subject, I am not sure if that is a significant claim on my part or just a trivial, obvious or already known result.</p>&#xA;&#xA;<p>So, the question then is, how fast can we find all of the Four-Square Sums for a given $N$?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>OK, here's the (nearly) O(N) algorithm that I was thinking of. 
First two supporting functions, a nearest integer square root function:</p>&#xA;&#xA;<pre><code> // the nearest integer whose square is less than or equal to N&#xA; public int SquRt(int N)&#xA; {&#xA; return (int)Math.Sqrt((double)N);&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>And a function to return all TwoSquare pairs summing from 0 to N:</p>&#xA;&#xA;<pre><code> // Returns a list of all sums of two squares less than or equal to N, in order.&#xA; public List&lt;List&lt;int[]&gt;&gt; TwoSquareSumsLessThan(int N)&#xA; {&#xA; //Make the index array&#xA; List&lt;int[]&gt;[] Sum2Sqs = new List&lt;int[]&gt;[N + 1];&#xA;&#xA; //get the base square root, which is the maximum possible root value&#xA; int baseRt = SquRt(N);&#xA;&#xA; for (int i = baseRt; i &gt;= 0; i--)&#xA; {&#xA; for (int j = 0; j &lt;= i; j++)&#xA; {&#xA; int sum = (i * i) + (j * j);&#xA; if (sum &gt; N)&#xA; {&#xA; break;&#xA; }&#xA; else&#xA; {&#xA; //make the new pair&#xA; int[] sumPair = { i, j };&#xA; //get the sumList entry&#xA; List&lt;int[]&gt; sumLst;&#xA; if (Sum2Sqs[sum] == null)&#xA; { &#xA; // make it if we need to&#xA; sumLst = new List&lt;int[]&gt;();&#xA; Sum2Sqs[sum] = sumLst;&#xA; }&#xA; else&#xA; {&#xA; sumLst = Sum2Sqs[sum];&#xA; }&#xA; // add the pair to the correct list&#xA; sumLst.Add(sumPair);&#xA; }&#xA; }&#xA; }&#xA;&#xA; //collapse the index array down to a sequential list&#xA; List&lt;List&lt;int[]&gt;&gt; result = new List&lt;List&lt;int[]&gt;&gt;();&#xA; for (int nn = 0; nn &lt;= N; nn++)&#xA; {&#xA; if (Sum2Sqs[nn] != null) result.Add(Sum2Sqs[nn]);&#xA; }&#xA;&#xA; return result;&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>Finally, the algorithm itself:</p>&#xA;&#xA;<pre><code> // Return a list of all integer quads (a,b,c,d), where:&#xA; // a^2 + b^2 + c^2 + d^2 = N,&#xA; // and a &gt;= b &gt;= c &gt;= d,&#xA; // and a,b,c,d &gt;= 0&#xA; public List&lt;int[]&gt; FindAllFourSquares(int N)&#xA; {&#xA; // get all two-square sums &lt;= N, in descending order&#xA; 
List&lt;List&lt;int[]&gt;&gt; Sqr2s = TwoSquareSumsLessThan(N);&#xA;&#xA; // Cross the descending list of two-square sums &lt;= N with&#xA; // the same list in ascending order, using a Merge-Match&#xA; // algorithm to find all combinations of pairs of two-square&#xA; // sums that add up to N&#xA; List&lt;int[]&gt; hiList, loList;&#xA; int[] hp, lp;&#xA; int hiSum, loSum;&#xA; List&lt;int[]&gt; results = new List&lt;int[]&gt;();&#xA; int prevHi = -1;&#xA; int prevLo = -1;&#xA;&#xA; // Set the Merge sources to the highest and lowest entries in the list&#xA; int hi = Sqr2s.Count - 1;&#xA; int lo = 0;&#xA;&#xA; // Merge until done ..&#xA; while (hi &gt;= lo)&#xA; {&#xA; // check to see if the points have moved&#xA; if (hi != prevHi)&#xA; {&#xA; hiList = Sqr2s[hi];&#xA; hp = hiList[0]; // these lists cannot be empty&#xA; hiSum = hp[0] * hp[0] + hp[1] * hp[1];&#xA; prevHi = hi;&#xA; }&#xA; if (lo != prevLo)&#xA; {&#xA; loList = Sqr2s[lo];&#xA; lp = loList[0]; // these lists cannot be empty&#xA; loSum = lp[0] * lp[0] + lp[1] * lp[1];&#xA; prevLo = lo;&#xA; }&#xA;&#xA; // do the two entries' sums together add up to N?&#xA; if (hiSum + loSum == N)&#xA; {&#xA; // they add up, so cross the two sum-lists over each other&#xA; foreach (int[] hiPair in hiList)&#xA; {&#xA; foreach (int[] loPair in loList)&#xA; {&#xA; // make a new 4-tuple and fill it&#xA; int[] quad = new int[4];&#xA; quad[0] = hiPair[0];&#xA; quad[1] = hiPair[1];&#xA; quad[2] = loPair[0];&#xA; quad[3] = loPair[1];&#xA;&#xA; // only keep those cases where the tuple is already sorted&#xA; //(otherwise it's a duplicate entry)&#xA; if (quad[1] &gt;= quad[2]) //(only need to check this one case, the others are implicit)&#xA; {&#xA; results.Add(quad);&#xA; }&#xA; //(there's a special case where all values of the 4-tuple are equal&#xA; // that should be handled to prevent duplicate entries, but I'm&#xA; // skipping it for now)&#xA; }&#xA; }&#xA; // both the HI and LO points must be moved after a Match&#xA; hi--;&#xA; 
lo++;&#xA; }&#xA; else if (hiSum + loSum &lt; N)&#xA; {&#xA; lo++; // too low, so must increase the LO point&#xA; }&#xA; else // must be &gt; N&#xA; {&#xA; hi--; // too high, so must decrease the HI point&#xA; }&#xA; }&#xA; return results;&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>As I said before, it should be pretty close to O(N); however, as Yuval Filmus points out, since the number of Four-Square solutions to N can be of order (N ln ln N), this algorithm could not be faster than that.</p>&#xA;
habedi/stack-exchange-dataset
2,994
Time complexity formula of nested loops
<p>I've just begun this stage 2 Compsci paper on algorithms, and stuff like this is not my strong point. I've come across this in my lecture slides.</p>&#xA;&#xA;<pre><code>int length = input.length();&#xA;for (int i = 0; i &lt; length - 1; i++) {&#xA; for (int j = i + 1; j &lt; length; j++) {&#xA; System.out.println(input.substring(i,j));&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>"In each iteration, the outer loop executes $\frac{n^{2}-(2i-1)n-i+i^{2}}{2}$ operations from the inner loop for $i = 0, \ldots, n-1$."</p>&#xA;&#xA;<p>Can someone please explain this to me step by step?</p>&#xA;&#xA;<p>I believe the formula above was obtained by using Gauss' formula for adding numbers... I think...</p>&#xA;
algorithms algorithm analysis runtime analysis loops
1
Time complexity formula of nested loops -- (algorithms algorithm analysis runtime analysis loops) <p>I've just begun this stage 2 Compsci paper on algorithms, and stuff like this is not my strong point. I've come across this in my lecture slides.</p>&#xA;&#xA;<pre><code>int length = input.length();&#xA;for (int i = 0; i &lt; length - 1; i++) {&#xA; for (int j = i + 1; j &lt; length; j++) {&#xA; System.out.println(input.substring(i,j));&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>"In each iteration, the outer loop executes $\frac{n^{2}-(2i-1)n-i+i^{2}}{2}$ operations from the inner loop for $i = 0, \ldots, n-1$."</p>&#xA;&#xA;<p>Can someone please explain this to me step by step?</p>&#xA;&#xA;<p>I believe the formula above was obtained by using Gauss' formula for adding numbers... I think...</p>&#xA;
habedi/stack-exchange-dataset
2,999
Problem with the definition of P
<p>In <em>"Introduction to Algorithms: 3rd Edition"</em> there is Theorem 34.2, which states</p>&#xA;&#xA;<blockquote>&#xA; <p>$P = \{ L \mid L \text{ is accepted by a polynomial-time algorithm} \}$</p>&#xA;</blockquote>&#xA;&#xA;<p><em>"Accepted in polynomial-time"</em> is defined by:</p>&#xA;&#xA;<blockquote>&#xA; <p>$L$ is accepted in polynomial time by an algorithm $A$ if it is accepted by $A$&#xA; and if in addition there exists a constant $k$ such that for any length-n string $x\in L$, &#xA; algorithm $A$ accepts $x$ in time $O(n^k)$.</p>&#xA;</blockquote>&#xA;&#xA;<p><em>"Accepted"</em> is defined by:</p>&#xA;&#xA;<blockquote>&#xA; <p>The language accepted by an algorithm $A$ is the set of strings&#xA; $L = \{ x \in \{0,1\}^* \mid A(x) = 1 \}$,&#xA; that is, the set of strings that the algorithm accepts.</p>&#xA;</blockquote>&#xA;&#xA;<p>But what if we take $k = 0$, and algorithm $A(\cdot) = 1$, which just returns 1 for everything?&#xA;Wouldn't that mean, that $P$ is just class of all languages?</p>&#xA;
complexity theory terminology time complexity polynomial time
1
Problem with the definition of P -- (complexity theory terminology time complexity polynomial time) <p>In <em>"Introduction to Algorithms: 3rd Edition"</em> there is Theorem 34.2, which states</p>&#xA;&#xA;<blockquote>&#xA; <p>$P = \{ L \mid L \text{ is accepted by a polynomial-time algorithm} \}$</p>&#xA;</blockquote>&#xA;&#xA;<p><em>"Accepted in polynomial-time"</em> is defined by:</p>&#xA;&#xA;<blockquote>&#xA; <p>$L$ is accepted in polynomial time by an algorithm $A$ if it is accepted by $A$&#xA; and if in addition there exists a constant $k$ such that for any length-n string $x\in L$, &#xA; algorithm $A$ accepts $x$ in time $O(n^k)$.</p>&#xA;</blockquote>&#xA;&#xA;<p><em>"Accepted"</em> is defined by:</p>&#xA;&#xA;<blockquote>&#xA; <p>The language accepted by an algorithm $A$ is the set of strings&#xA; $L = \{ x \in \{0,1\}^* \mid A(x) = 1 \}$,&#xA; that is, the set of strings that the algorithm accepts.</p>&#xA;</blockquote>&#xA;&#xA;<p>But what if we take $k = 0$ and the algorithm $A(\cdot) = 1$, which just returns 1 for everything?&#xA;Wouldn't that mean that $P$ is just the class of all languages?</p>&#xA;
habedi/stack-exchange-dataset
3,001
Why can L3 caches hold only shared blocks?
<p>In a recent CACM article [1], the authors present a way to improve scalability of shared and coherent caches. The core ingredient is assuming the caches are <em>inclusive</em>, that is higher-level caches (e.g. L3, one global cache) contain all blocks which are stored in their descendant lower-level caches (e.g. L1, one cache per core).</p>&#xA;&#xA;<p>Typically, higher-level caches are larger than their respective descendant caches together. For instance, some models of the Intel Core i7 series with four cores have an 8MB shared cache (L3) and 256KB private caches (L2), that is the shared cache can hold eight times as many blocks as the private caches in total.</p>&#xA;&#xA;<p>This seems to suggest that whenever the shared cache has to evict a block (in order to load a new block) it can find a block that is shared with none of the private caches² (pigeon-hole principle). However, the authors write:</p>&#xA;&#xA;<blockquote>&#xA; <p>[We] can potentially eliminate all recalls, but only if the associativity, or number of places in which a specific block may be cached, of the shared cache exceeds the aggregate associativity of the private caches. With sufficient associativity, [the shared cache] is guaranteed to find a nonshared block [...]. Without this worst-case associativity, a pathological cluster of misses could lead to a situation in which all blocks in a set of the shared cache are truly shared.</p>&#xA;</blockquote>&#xA;&#xA;<p>How is this possible, that is how can, say, 1MB cover 8MB? Clearly I miss some detail of how such cache hierarchies work. What does "associativity" mean here? "number of places in which a specific block may be cached" is not clear; I can only come up with the interpretation that a block can be stored multiple times in each cache, but that would make no sense at all. 
What would such a "pathological cluster of misses" look like?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1145/2209249.2209269">Why On-Chip Cache Coherence is Here to Stay</a> by M. M. K. Martin, M. D. Hill, D. J. Sorin (2012)</li>&#xA;<li>Assuming the shared caches knows which blocks are shared where. This can be achieved by explicit eviction notifications and tracking bit, which is also discussed in [1].</li>&#xA;</ol>&#xA;
terminology computer architecture cpu cache shared memory
1
Why can L3 caches hold only shared blocks? -- (terminology computer architecture cpu cache shared memory) <p>In a recent CACM article [1], the authors present a way to improve scalability of shared and coherent caches. The core ingredient is assuming the caches are <em>inclusive</em>, that is higher-level caches (e.g. L3, one global cache) contain all blocks which are stored in their descendant lower-level caches (e.g. L1, one cache per core).</p>&#xA;&#xA;<p>Typically, higher-level caches are larger than their respective descendant caches together. For instance, some models of the Intel Core i7 series with four cores have an 8MB shared cache (L3) and 256KB private caches (L2), that is the shared cache can hold eight times as many blocks as the private caches in total.</p>&#xA;&#xA;<p>This seems to suggest that whenever the shared cache has to evict a block (in order to load a new block) it can find a block that is shared with none of the private caches² (pigeon-hole principle). However, the authors write:</p>&#xA;&#xA;<blockquote>&#xA; <p>[We] can potentially eliminate all recalls, but only if the associativity, or number of places in which a specific block may be cached, of the shared cache exceeds the aggregate associativity of the private caches. With sufficient associativity, [the shared cache] is guaranteed to find a nonshared block [...]. Without this worst-case associativity, a pathological cluster of misses could lead to a situation in which all blocks in a set of the shared cache are truly shared.</p>&#xA;</blockquote>&#xA;&#xA;<p>How is this possible, that is how can, say, 1MB cover 8MB? Clearly I miss some detail of how such cache hierarchies work. What does "associativity" mean here? "number of places in which a specific block may be cached" is not clear; I can only come up with the interpretation that a block can be stored multiple times in each cache, but that would make no sense at all. 
What would such a "pathological cluster of misses" look like?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1145/2209249.2209269">Why On-Chip Cache Coherence is Here to Stay</a> by M. M. K. Martin, M. D. Hill, D. J. Sorin (2012)</li>&#xA;<li>Assuming the shared caches knows which blocks are shared where. This can be achieved by explicit eviction notifications and tracking bit, which is also discussed in [1].</li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
3,012
What if the electricity goes off while a file is being renamed?
<p>Suppose I'm renaming a file and the electricity goes off right in the middle. Naively, it looks like the file could be in some “half-renamed” state. Maybe the name would have half the old name and half the new name. Or maybe the file could disappear altogether because it no longer exists under the old name but it doesn't exist yet under the new name.</p>&#xA;&#xA;<p>How can a filesystem protect against loss of power while a file is renamed? Are these just theoretical techniques or are they used in practice?</p>&#xA;
operating systems filesystems fault tolerance
1
What if the electricity goes off while a file is being renamed? -- (operating systems filesystems fault tolerance) <p>Suppose I'm renaming a file and the electricity goes off right in the middle. Naively, it looks like the file could be in some “half-renamed” state. Maybe the name would have half the old name and half the new name. Or maybe the file could disappear altogether because it no longer exists under the old name but it doesn't exist yet under the new name.</p>&#xA;&#xA;<p>How can a filesystem protect against loss of power while a file is renamed? Are these just theoretical techniques or are they used in practice?</p>&#xA;
habedi/stack-exchange-dataset
3,014
Running time of CDCL compared to DPLL
<p>What's the complexity of Conflict-Driven Clause Learning SAT solvers, compared to DPLL solvers? Was it proven that CDCL is faster in general? Are there instances of SAT that are hard for CDCL but easy for DPLL?</p>&#xA;
complexity theory time complexity efficiency satisfiability sat solvers
1
Running time of CDCL compared to DPLL -- (complexity theory time complexity efficiency satisfiability sat solvers) <p>What's the complexity of Conflict-Driven Clause Learning SAT solvers, compared to DPLL solvers? Was it proven that CDCL is faster in general? Are there instances of SAT that are hard for CDCL but easy for DPLL?</p>&#xA;
habedi/stack-exchange-dataset
3,019
What is the novelty in MapReduce?
<p>A few years ago, <a href="https://en.wikipedia.org/wiki/Mapreduce">MapReduce</a> was hailed as a revolution in distributed programming. There have also been <a href="http://craig-henderson.blogspot.de/2009/11/dewitt-and-stonebrakers-mapreduce-major.html">critics</a>, but by and large there was enthusiastic hype. It even got patented! [1]</p>&#xA;&#xA;<p>The name is reminiscent of <code>map</code> and <code>reduce</code> in functional programming, but when I read (Wikipedia)</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Map step:</strong> The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem, and passes the answer back to its master node.</p>&#xA; &#xA; <p><strong>Reduce step:</strong> The master node then collects the answers to all the sub-problems and combines them in some way to form the output – the answer to the problem it was originally trying to solve.</p>&#xA;</blockquote>&#xA;&#xA;<p>or [2] </p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Internals of MAP:</strong> [...] MAP splits up the input value into words. [...] MAP is meant to associate each given key/value pair of the input with potentially many intermediate key/value pairs.</p>&#xA; &#xA; <p><strong>Internals of REDUCE:</strong> [...] [REDUCE] performs imperative aggregation (say, reduction): take many values, and reduce them to a single value.</p>&#xA;</blockquote>&#xA;&#xA;<p>I cannot help but think: this is <a href="https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm">divide &amp; conquer</a> (in the sense of Mergesort), plain and simple! 
So, is there (conceptual) novelty in MapReduce somewhere, or is it just a new implementation of old ideas useful in certain scenarios?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PALL&amp;p=1&amp;u=/netahtml/PTO/srchnum.htm&amp;r=1&amp;f=G&amp;l=50&amp;s1=7,650,331.PN.&amp;OS=PN/7,650,331&amp;RS=PN/7,650,331"> US Patent 7,650,331: "System and method for efficient large-scale data processing "</a> (2010)</li>&#xA;<li><a href="http://dx.doi.org/10.1016/j.scico.2007.07.001">Google’s MapReduce programming model — Revisited</a> by R. Lämmel (2007)</li>&#xA;</ol>&#xA;
algorithms distributed systems parallel computing algorithm design
1
What is the novelty in MapReduce? -- (algorithms distributed systems parallel computing algorithm design) <p>A few years ago, <a href="https://en.wikipedia.org/wiki/Mapreduce">MapReduce</a> was hailed as a revolution in distributed programming. There have also been <a href="http://craig-henderson.blogspot.de/2009/11/dewitt-and-stonebrakers-mapreduce-major.html">critics</a>, but by and large there was enthusiastic hype. It even got patented! [1]</p>&#xA;&#xA;<p>The name is reminiscent of <code>map</code> and <code>reduce</code> in functional programming, but when I read (Wikipedia)</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Map step:</strong> The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem, and passes the answer back to its master node.</p>&#xA; &#xA; <p><strong>Reduce step:</strong> The master node then collects the answers to all the sub-problems and combines them in some way to form the output – the answer to the problem it was originally trying to solve.</p>&#xA;</blockquote>&#xA;&#xA;<p>or [2] </p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Internals of MAP:</strong> [...] MAP splits up the input value into words. [...] MAP is meant to associate each given key/value pair of the input with potentially many intermediate key/value pairs.</p>&#xA; &#xA; <p><strong>Internals of REDUCE:</strong> [...] [REDUCE] performs imperative aggregation (say, reduction): take many values, and reduce them to a single value.</p>&#xA;</blockquote>&#xA;&#xA;<p>I cannot help but think: this is <a href="https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm">divide &amp; conquer</a> (in the sense of Mergesort), plain and simple! 
So, is there (conceptual) novelty in MapReduce somewhere, or is it just a new implementation of old ideas useful in certain scenarios?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PALL&amp;p=1&amp;u=/netahtml/PTO/srchnum.htm&amp;r=1&amp;f=G&amp;l=50&amp;s1=7,650,331.PN.&amp;OS=PN/7,650,331&amp;RS=PN/7,650,331"> US Patent 7,650,331: "System and method for efficient large-scale data processing "</a> (2010)</li>&#xA;<li><a href="http://dx.doi.org/10.1016/j.scico.2007.07.001">Google’s MapReduce programming model — Revisited</a> by R. Lämmel (2007)</li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
3,022
Maximum Independent Subset of 2D Grid Subgraph
<p>In the general case, finding a Maximum Independent Subset of a Graph is NP-Hard.</p>&#xA;&#xA;<p>However, consider the following subset of graphs:</p>&#xA;&#xA;<ul>&#xA;<li>Create an $N \times N$ grid of unit square cells.</li>&#xA;<li>Build a graph $G$ by creating a vertex corresponding to every cell. Notice that there are $N^2$ vertices.</li>&#xA;<li>Create an edge between two vertices if their cells share a side. Notice there are $2N(N-1)$ edges.</li>&#xA;</ul>&#xA;&#xA;<p>A Maximum Independent Subset of $G$ is obviously a checker pattern. A cell at the $R$th row and $C$th column is part of it if $R+C$ is odd.</p>&#xA;&#xA;<p>Now we create a graph $G'$ by copying $G$ and removing some vertices and edges. (If you remove a vertex, also remove all edges incident to it, of course. Also note you can remove an edge without removing either of the vertices it connects.)</p>&#xA;&#xA;<p>By what algorithm can we find a Maximum Independent Subset of $G'$?</p>&#xA;
algorithms graphs
1
Maximum Independent Subset of 2D Grid Subgraph -- (algorithms graphs) <p>In the general case, finding a Maximum Independent Subset of a Graph is NP-Hard.</p>&#xA;&#xA;<p>However, consider the following subset of graphs:</p>&#xA;&#xA;<ul>&#xA;<li>Create an $N \times N$ grid of unit square cells.</li>&#xA;<li>Build a graph $G$ by creating a vertex corresponding to every cell. Notice that there are $N^2$ vertices.</li>&#xA;<li>Create an edge between two vertices if their cells share a side. Notice there are $2N(N-1)$ edges.</li>&#xA;</ul>&#xA;&#xA;<p>A Maximum Independent Subset of $G$ is obviously a checker pattern. A cell at the $R$th row and $C$th column is part of it if $R+C$ is odd.</p>&#xA;&#xA;<p>Now we create a graph $G'$ by copying $G$ and removing some vertices and edges. (If you remove a vertex, also remove all edges incident to it, of course. Also note you can remove an edge without removing either of the vertices it connects.)</p>&#xA;&#xA;<p>By what algorithm can we find a Maximum Independent Subset of $G'$?</p>&#xA;
habedi/stack-exchange-dataset
3,027
Maximum Independent Set of a Bipartite Graph
<p>I'm trying to find the Maximum Independent Set of a Bipartite Graph.</p>&#xA;&#xA;<p>I found the following in some notes <strong>"May 13, 1998 - University of Washington - CSE 521 - Applications of network flow"</strong>:</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Problem:</strong></p>&#xA; &#xA; <p>Given a bipartite graph <span class="math-container">$G = (U,V,E)$</span>, find an independent set <span class="math-container">$U' \cup V'$</span> which is as large as possible, where <span class="math-container">$U' \subseteq U$</span> and <span class="math-container">$V' \subseteq V$</span>. A set is independent if there are no edges of <span class="math-container">$E$</span> between&#xA; elements of the set.</p>&#xA; &#xA; <p><strong>Solution:</strong></p>&#xA; &#xA; <p>Construct a flow graph on the vertices <span class="math-container">$U \cup V \cup \{s,t\}$</span>. For&#xA; each edge <span class="math-container">$(u,v) \in E$</span> there is an infinite capacity edge from <span class="math-container">$u$</span> to&#xA; <span class="math-container">$v$</span>. For each <span class="math-container">$u \in U$</span>, there is a unit capacity edge from <span class="math-container">$s$</span> to <span class="math-container">$u$</span>,&#xA; and for each <span class="math-container">$v \in V$</span>, there is a unit capacity edge from <span class="math-container">$v$</span> to&#xA; <span class="math-container">$t$</span>.</p>&#xA; &#xA; <p>Find a finite capacity cut <span class="math-container">$(S,T)$</span>, with <span class="math-container">$s \in S$</span> and <span class="math-container">$t \in T$</span>. Let&#xA; <span class="math-container">$U' = U \cap S$</span> and <span class="math-container">$V' = V \cap T$</span>. The set <span class="math-container">$U' \cup V'$</span> is&#xA; independent since there are no infinite capacity edges crossing the&#xA; cut. 
The size of the cut is <span class="math-container">$|U - U'| + |V - V'| = |U| + |V| - |U' \cup V'|$</span>. Thus, in order to make the independent set as large as&#xA; possible, we make the cut as small as possible.</p>&#xA;</blockquote>&#xA;&#xA;<p>So let's take this as the graph:</p>&#xA;&#xA;<pre><code>A - B - C&#xA; |&#xA;D - E - F&#xA;</code></pre>&#xA;&#xA;<p>We can split this into a bipartite graph as follows:&#xA;<span class="math-container">$(U,V)=(\{A,C,E\},\{B,D,F\})$</span></p>&#xA;&#xA;<p>We can see by brute-force search that the sole Maximum Independent Set is <span class="math-container">$A,C,D,F$</span>. Let's try to work through the solution above:</p>&#xA;&#xA;<p>So the constructed flow network adjacency matrix would be:</p>&#xA;&#xA;<p><span class="math-container">$$\begin{matrix}&#xA; &amp; s &amp; t &amp; A &amp; B &amp; C &amp; D &amp; E &amp; F \\&#xA;s &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\&#xA;t &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 \\&#xA;A &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\&#xA;B &amp; 0 &amp; 1 &amp; \infty &amp; 0 &amp; \infty &amp; 0 &amp; \infty &amp; 0 \\&#xA;C &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\&#xA;D &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \infty &amp; 0 \\&#xA;E &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; \infty &amp; 0 &amp; \infty \\&#xA;F &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \infty &amp; 0 \\&#xA;\end{matrix}$$</span></p>&#xA;&#xA;<p>Here is where I am stuck: the smallest finite capacity cut I see is a trivial one, <span class="math-container">$(S,T) =(\{s\},\{t,A,B,C,D,E,F\})$</span> with a capacity of 3.</p>&#xA;&#xA;<p>Using this cut leads to an incorrect solution of:</p>&#xA;&#xA;<p><span class="math-container">$$ U' = U \cap S = \{\}$$</span>&#xA;<span class="math-container">$$ V' = V \cap T = \{B,D,F\}$$</span>&#xA;<span class="math-container">$$ U' \cup V' = 
\{B,D,F\}$$</span></p>&#xA;&#xA;<p>Whereas we expected <span class="math-container">$U' \cup V' = \{A,C,D,F\}$</span>? Can anyone spot where I have gone wrong in my reasoning/working?</p>&#xA;
algorithms graphs network flow
1
Maximum Independent Set of a Bipartite Graph -- (algorithms graphs network flow) <p>I'm trying to find the Maximum Independent Set of a Bipartite Graph.</p>&#xA;&#xA;<p>I found the following in some notes <strong>"May 13, 1998 - University of Washington - CSE 521 - Applications of network flow"</strong>:</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Problem:</strong></p>&#xA; &#xA; <p>Given a bipartite graph <span class="math-container">$G = (U,V,E)$</span>, find an independent set <span class="math-container">$U' \cup V'$</span> which is as large as possible, where <span class="math-container">$U' \subseteq U$</span> and <span class="math-container">$V' \subseteq V$</span>. A set is independent if there are no edges of <span class="math-container">$E$</span> between&#xA; elements of the set.</p>&#xA; &#xA; <p><strong>Solution:</strong></p>&#xA; &#xA; <p>Construct a flow graph on the vertices <span class="math-container">$U \cup V \cup \{s,t\}$</span>. For&#xA; each edge <span class="math-container">$(u,v) \in E$</span> there is an infinite capacity edge from <span class="math-container">$u$</span> to&#xA; <span class="math-container">$v$</span>. For each <span class="math-container">$u \in U$</span>, there is a unit capacity edge from <span class="math-container">$s$</span> to <span class="math-container">$u$</span>,&#xA; and for each <span class="math-container">$v \in V$</span>, there is a unit capacity edge from <span class="math-container">$v$</span> to&#xA; <span class="math-container">$t$</span>.</p>&#xA; &#xA; <p>Find a finite capacity cut <span class="math-container">$(S,T)$</span>, with <span class="math-container">$s \in S$</span> and <span class="math-container">$t \in T$</span>. Let&#xA; <span class="math-container">$U' = U \cap S$</span> and <span class="math-container">$V' = V \cap T$</span>. The set <span class="math-container">$U' \cup V'$</span> is&#xA; independent since there are no infinite capacity edges crossing the&#xA; cut. 
The size of the cut is <span class="math-container">$|U - U'| + |V - V'| = |U| + |V| - |U' \cup V'|$</span>. Thus, in order to make the independent set as large as&#xA; possible, we make the cut as small as possible.</p>&#xA;</blockquote>&#xA;&#xA;<p>So let's take this as the graph:</p>&#xA;&#xA;<pre><code>A - B - C&#xA; |&#xA;D - E - F&#xA;</code></pre>&#xA;&#xA;<p>We can split this into a bipartite graph as follows:&#xA;<span class="math-container">$(U,V)=(\{A,C,E\},\{B,D,F\})$</span></p>&#xA;&#xA;<p>We can see by brute-force search that the sole Maximum Independent Set is <span class="math-container">$A,C,D,F$</span>. Let's try to work through the solution above:</p>&#xA;&#xA;<p>So the constructed flow network adjacency matrix would be:</p>&#xA;&#xA;<p><span class="math-container">$$\begin{matrix}&#xA; &amp; s &amp; t &amp; A &amp; B &amp; C &amp; D &amp; E &amp; F \\&#xA;s &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\&#xA;t &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 \\&#xA;A &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\&#xA;B &amp; 0 &amp; 1 &amp; \infty &amp; 0 &amp; \infty &amp; 0 &amp; \infty &amp; 0 \\&#xA;C &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\&#xA;D &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \infty &amp; 0 \\&#xA;E &amp; 1 &amp; 0 &amp; 0 &amp; \infty &amp; 0 &amp; \infty &amp; 0 &amp; \infty \\&#xA;F &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \infty &amp; 0 \\&#xA;\end{matrix}$$</span></p>&#xA;&#xA;<p>Here is where I am stuck: the smallest finite capacity cut I see is a trivial one, <span class="math-container">$(S,T) =(\{s\},\{t,A,B,C,D,E,F\})$</span> with a capacity of 3.</p>&#xA;&#xA;<p>Using this cut leads to an incorrect solution of:</p>&#xA;&#xA;<p><span class="math-container">$$ U' = U \cap S = \{\}$$</span>&#xA;<span class="math-container">$$ V' = V \cap T = \{B,D,F\}$$</span>&#xA;<span class="math-container">$$ U' \cup V' = 
\{B,D,F\}$$</span></p>&#xA;&#xA;<p>Whereas we expected <span class="math-container">$U' \cup V' = \{A,C,D,F\}$</span>? Can anyone spot where I have gone wrong in my reasoning/working?</p>&#xA;
habedi/stack-exchange-dataset
3,031
What is known about coRL and RL?
<p>Wondering about any known relations between the <a href="http://qwiki.stanford.edu/index.php/Complexity_Zoo%3aR#rl" rel="nofollow">$\mathsf{RL}$</a> complexity class (one-sided error with logarithmic space) and its complementary class, $\mathsf{coRL}$.</p>&#xA;&#xA;<p>Are they the same class?</p>&#xA;&#xA;<p>What is $\mathsf{coRL}$'s relation to $\mathsf{NL}$ and $\mathsf{P}$?</p>&#xA;
complexity theory complexity classes probabilistic algorithms
1
What is known about coRL and RL? -- (complexity theory complexity classes probabilistic algorithms) <p>Wondering about any known relations between the <a href="http://qwiki.stanford.edu/index.php/Complexity_Zoo%3aR#rl" rel="nofollow">$\mathsf{RL}$</a> complexity class (one-sided error with logarithmic space) and its complementary class, $\mathsf{coRL}$.</p>&#xA;&#xA;<p>Are they the same class?</p>&#xA;&#xA;<p>What is $\mathsf{coRL}$'s relation to $\mathsf{NL}$ and $\mathsf{P}$?</p>&#xA;
habedi/stack-exchange-dataset
3,039
Run time of product of polynomially bounded numbers
<p>Let $M$ denote a set of $n$ positive integers, each less than $n^c$.</p>&#xA;&#xA;<p>What is the runtime of computing $\prod_{m \in M} m$ with a deterministic Turing machine?</p>&#xA;
algorithms time complexity integers
1
Run time of product of polynomially bounded numbers -- (algorithms time complexity integers) <p>Let $M$ denote a set of $n$ positive integers, each less than $n^c$.</p>&#xA;&#xA;<p>What is the runtime of computing $\prod_{m \in M} m$ with a deterministic Turing machine?</p>&#xA;
habedi/stack-exchange-dataset
3,044
Is the set of minimal DFA decidable?
<p>Let $\mathrm{MIN}_{\mathrm{DFA}}$ be the collection of all the codings of DFAs that are minimal with respect to their number of states. I mean, if $\langle A \rangle \in \mathrm{MIN}_{\mathrm{DFA}}$ then for every other DFA $B$ with fewer states than $A$, $L(A)\ne L(B)$ holds. I'm trying to figure out why $\mathrm{MIN}_{\mathrm{DFA}} \in R$. How come it is decidable?</p>&#xA;&#xA;<p>What is it about this kind of DFA that is easy to decide?</p>&#xA;
formal languages computability automata finite automata
1
Is the set of minimal DFA decidable? -- (formal languages computability automata finite automata) <p>Let $\mathrm{MIN}_{\mathrm{DFA}}$ be the collection of all the codings of DFAs that are minimal with respect to their number of states. I mean, if $\langle A \rangle \in \mathrm{MIN}_{\mathrm{DFA}}$ then for every other DFA $B$ with fewer states than $A$, $L(A)\ne L(B)$ holds. I'm trying to figure out why $\mathrm{MIN}_{\mathrm{DFA}} \in R$. How come it is decidable?</p>&#xA;&#xA;<p>What is it about this kind of DFA that is easy to decide?</p>&#xA;
habedi/stack-exchange-dataset
3,061
A hash function with predicted collisions
<p>As far as I know, the more collision-resistant a hash function is, the better. But is there any way to define a hash function with <em>predicted</em> collisions? In other words, a hash function that collides for some known set of possible inputs and avoids collisions for other input values. To state the problem more simply:</p>&#xA;&#xA;<p>Let $A$ be some set of strings. &#xA;Define a function $f$ such that $f(x_i) \rightarrow y_i$ with $y_i \neq y_j$ for all $i,j \notin A$ with $i \neq j$, and otherwise $f(x_i) \rightarrow y$ where $y \notin \{y_i \mid i \notin A\}$ is constant.</p>&#xA;&#xA;<p>Is it possible? In addition, is it possible for the performance of such a hash function <em>not</em> to depend on the size of $A$?</p>&#xA;
cryptography hash
1
A hash function with predicted collisions -- (cryptography hash) <p>As far as I know, the more collision-resistant a hash function is, the better. But is there any way to define a hash function with <em>predicted</em> collisions? In other words, a hash function that collides for some known set of possible inputs and avoids collisions for other input values. To state the problem more simply:</p>&#xA;&#xA;<p>Let $A$ be some set of strings. &#xA;Define a function $f$ such that $f(x_i) \rightarrow y_i$ with $y_i \neq y_j$ for all $i,j \notin A$ with $i \neq j$, and otherwise $f(x_i) \rightarrow y$ where $y \notin \{y_i \mid i \notin A\}$ is constant.</p>&#xA;&#xA;<p>Is it possible? In addition, is it possible for the performance of such a hash function <em>not</em> to depend on the size of $A$?</p>&#xA;
habedi/stack-exchange-dataset
3,067
Why classes implicitly derive from only the Object Class?
<p>I do not have any argument opposing why we need only a single universal class. However, why not have two universal classes, say an Object and an AntiObject class? In nature and in science we find the concept of duality - like Energy &amp; Dark Energy; Male &amp; Female; Plus &amp; Minus; Multiply &amp; Divide; Electrons &amp; Protons; Integration &amp; Derivation; and in set theory. There are so many examples of dualism that it is a philosophy in itself. In programming itself we see Anti-Patterns, which help us to perform work in contrast to how we use Design patterns. I am not sure, but the usefulness of this duality concept may lie in creating garbage collectors that create AntiObjects that combine with free or loose Objects to destruct themselves, thereby releasing memory. Or maybe AntiObjects work along with Objects to create a self-modifying programming language - one that allows us to create safe self-modifying code, do evolutionary computing using genetic programming, or hide code to prevent reverse engineering. </p>&#xA;&#xA;<p>We call it object-oriented programming. Is that a limiting factor, or is there something fundamental I am missing in understanding the formation of programming languages?</p>&#xA;
programming languages type theory object oriented
1
Why classes implicitly derive from only the Object Class? -- (programming languages type theory object oriented) <p>I do not have any argument opposing why we need only a single universal class. However, why not have two universal classes, say an Object and an AntiObject class? In nature and in science we find the concept of duality - like Energy &amp; Dark Energy; Male &amp; Female; Plus &amp; Minus; Multiply &amp; Divide; Electrons &amp; Protons; Integration &amp; Derivation; and in set theory. There are so many examples of dualism that it is a philosophy in itself. In programming itself we see Anti-Patterns, which help us to perform work in contrast to how we use Design patterns. I am not sure, but the usefulness of this duality concept may lie in creating garbage collectors that create AntiObjects that combine with free or loose Objects to destruct themselves, thereby releasing memory. Or maybe AntiObjects work along with Objects to create a self-modifying programming language - one that allows us to create safe self-modifying code, do evolutionary computing using genetic programming, or hide code to prevent reverse engineering. </p>&#xA;&#xA;<p>We call it object-oriented programming. Is that a limiting factor, or is there something fundamental I am missing in understanding the formation of programming languages?</p>&#xA;
habedi/stack-exchange-dataset
3,071
Cost in time of constructing and running an NFA vs DFA for a given regex
<p>Repost from Stack Overflow:</p>&#xA;&#xA;<p>I'm going through past exams and keep coming across questions that I can't find an answer for in textbooks or on google, so any help would be much appreciated.</p>&#xA;&#xA;<p>The question I'm having problems with at the moment is as follows: </p>&#xA;&#xA;<blockquote>&#xA; <p>Given a regular expression (a|bb)*, derive an estimate of the cost in time for &#xA; converting it to a corresponding NFA and a DFA. Your answer should refer to&#xA; the size of the regular expression.</p>&#xA;</blockquote>&#xA;&#xA;<p>A similar question from another year is:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given that, for the above example, you know the size of the original regular&#xA; expression, |r| and the size of the input string |x|, explain how you would calculate the cost in time for constructing and running the NFA versus constructing&#xA; and running an equivalent DFA.</p>&#xA;</blockquote>&#xA;&#xA;<p>The resulting NFA for (a|bb)* has 9 states, while the DFA has 4. Even knowing this, I have no idea how to approach the question.</p>&#xA;
regular languages automata finite automata compilers
1
Cost in time of constructing and running an NFA vs DFA for a given regex -- (regular languages automata finite automata compilers) <p>Repost from Stack Overflow:</p>&#xA;&#xA;<p>I'm going through past exams and keep coming across questions that I can't find an answer for in textbooks or on google, so any help would be much appreciated.</p>&#xA;&#xA;<p>The question I'm having problems with at the moment is as follows: </p>&#xA;&#xA;<blockquote>&#xA; <p>Given a regular expression (a|bb)*, derive an estimate of the cost in time for &#xA; converting it to a corresponding NFA and a DFA. Your answer should refer to&#xA; the size of the regular expression.</p>&#xA;</blockquote>&#xA;&#xA;<p>A similar question from another year is:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given that, for the above example, you know the size of the original regular&#xA; expression, |r| and the size of the input string |x|, explain how you would calculate the cost in time for constructing and running the NFA versus constructing&#xA; and running an equivalent DFA.</p>&#xA;</blockquote>&#xA;&#xA;<p>The resulting NFA for (a|bb)* has 9 states, while the DFA has 4. Even knowing this, I have no idea how to approach the question.</p>&#xA;
habedi/stack-exchange-dataset
3,078
Algorithm that finds the number of simple paths from $s$ to $t$ in $G$
<p>Can anyone suggest a linear-time algorithm that takes as input a directed acyclic graph $G=(V,E)$ and two vertices $s$ and $t$ and returns the number of simple paths from $s$ to $t$ in $G$?<br>&#xA;I have an algorithm in which I will run a DFS (depth-first search), but if DFS finds $t$ then it will not change the color (from white to grey) of any of the nodes that lie on the path $s \rightsquigarrow t$, so that if this is a subpath of any other path then DFS goes through this subpath again. For example, consider the adjacency list where we need to find the number of paths from $p$ to $v$.<br>&#xA;$$\begin{array}{|c|c c c|}&#xA;\hline &#xA;p &amp;o &amp;s &amp;z \\ \hline&#xA;o &amp;r &amp;s &amp;v\\ \hline&#xA;s &amp;r \\ \hline&#xA;r &amp;y \\ \hline&#xA;y &amp;v \\ \hline&#xA;v &amp;w \\ \hline&#xA;z &amp; \\ \hline&#xA;w &amp;z \\ \hline&#xA;\end{array}$$&#xA;Here DFS will start with $p$, and then let's say it goes to $p \rightsquigarrow z$; since it does not encounter $v$, DFS will run normally. Now the second path is $psryv$; since it encounters $v$, we will not change the color of vertices $s,r,y,v$ to grey. Then the path $pov$, since the color of $v$ is still white. Then the path $posryv$, since the color of $s$ is white, and similarly the path $poryv$. Also a counter is maintained which gets incremented whenever $v$ is encountered.</p>&#xA;&#xA;<p>Is my algorithm correct? If not, what modifications are needed to make it correct? Any other approaches will be greatly appreciated.</p>&#xA;&#xA;<p><strong>Note</strong>: Here I have considered the DFS algorithm given in the book <em>"Introduction to Algorithms"</em> by Cormen et al., which colors the nodes according to their status. So if the node is unvisited, unexplored, or explored, then the color will be white, grey, or black respectively. All other things are standard.</p>&#xA;
algorithms graphs
1
Algorithm that finds the number of simple paths from $s$ to $t$ in $G$ -- (algorithms graphs) <p>Can anyone suggest a linear-time algorithm that takes as input a directed acyclic graph $G=(V,E)$ and two vertices $s$ and $t$ and returns the number of simple paths from $s$ to $t$ in $G$?<br>&#xA;I have an algorithm in which I will run a DFS (depth-first search), but if DFS finds $t$ then it will not change the color (from white to grey) of any of the nodes that lie on the path $s \rightsquigarrow t$, so that if this is a subpath of any other path then DFS goes through this subpath again. For example, consider the adjacency list where we need to find the number of paths from $p$ to $v$.<br>&#xA;$$\begin{array}{|c|c c c|}&#xA;\hline &#xA;p &amp;o &amp;s &amp;z \\ \hline&#xA;o &amp;r &amp;s &amp;v\\ \hline&#xA;s &amp;r \\ \hline&#xA;r &amp;y \\ \hline&#xA;y &amp;v \\ \hline&#xA;v &amp;w \\ \hline&#xA;z &amp; \\ \hline&#xA;w &amp;z \\ \hline&#xA;\end{array}$$&#xA;Here DFS will start with $p$, and then let's say it goes to $p \rightsquigarrow z$; since it does not encounter $v$, DFS will run normally. Now the second path is $psryv$; since it encounters $v$, we will not change the color of vertices $s,r,y,v$ to grey. Then the path $pov$, since the color of $v$ is still white. Then the path $posryv$, since the color of $s$ is white, and similarly the path $poryv$. Also a counter is maintained which gets incremented whenever $v$ is encountered.</p>&#xA;&#xA;<p>Is my algorithm correct? If not, what modifications are needed to make it correct? Any other approaches will be greatly appreciated.</p>&#xA;&#xA;<p><strong>Note</strong>: Here I have considered the DFS algorithm given in the book <em>"Introduction to Algorithms"</em> by Cormen et al., which colors the nodes according to their status. So if the node is unvisited, unexplored, or explored, then the color will be white, grey, or black respectively. All other things are standard.</p>&#xA;
habedi/stack-exchange-dataset
3,086
Is there a repository for the hierarchy of proofs?
<p>I am self-learning <a href="http://en.wikipedia.org/wiki/Proof_assistant">proof assistants</a> and decided to start on some basic proofs and work my way up. Since proofs are based on other proofs and so form a hierarchy, is there a repository of the hierarchy of proofs?</p>&#xA;&#xA;<p>I know I can pick a particular proof assistant and analyze its library to extract its hierarchy; however, if I want to find the next proof in a chain to prove, I can't when it is not in the library.</p>&#xA;&#xA;<p>In my mind I picture a graph, probably a <a href="http://en.wikipedia.org/wiki/Directed_acyclic_graph">DAG</a>, of all of the known mathematical proofs that can be expressed using English statements, not <a href="http://www.billthelizard.com/2009/07/six-visual-proofs_25.html">proofs using pictures</a>. This would be the master map (a map in the sense of starting at one point and traveling to another point via intermediate points), and for a particular proof assistant, one would have a subgraph of the master map. Then if one wanted to create a proof using a proof assistant that is found on the master map but not on the subgraph, by comparing the two graphs one could get an idea of the work needed to create the missing proof(s) for the proof assistant. </p>&#xA;&#xA;<p>I am aware that mathematical proofs are not necessarily easily convertible for use with a proof assistant; however, having a general idea of what to do is much better than none at all.</p>&#xA;&#xA;<p>Also, by having the master map, I can see if there are multiple paths from one point to another and choose a path that is more amenable to the particular proof assistant.</p>&#xA;&#xA;<p>EDIT</p>&#xA;&#xA;<p>In searching I found something similar for <a href="http://dlmf.nist.gov/">mathematical functions</a>. I did not find one for proofs at <a href="http://www.nist.gov/index.html">NIST</a>.</p>&#xA;
reference request logic proof assistants
1
Is there a repository for the hierarchy of proofs? -- (reference request logic proof assistants) <p>I am self-learning <a href="http://en.wikipedia.org/wiki/Proof_assistant">proof assistants</a> and decided to start on some basic proofs and work my way up. Since proofs are based on other proofs and so form a hierarchy, is there a repository of the hierarchy of proofs?</p>&#xA;&#xA;<p>I know I can pick a particular proof assistant and analyze its library to extract its hierarchy; however, if I want to find the next proof in a chain to prove, I can't when it is not in the library.</p>&#xA;&#xA;<p>In my mind I picture a graph, probably a <a href="http://en.wikipedia.org/wiki/Directed_acyclic_graph">DAG</a>, of all of the known mathematical proofs that can be expressed using English statements, not <a href="http://www.billthelizard.com/2009/07/six-visual-proofs_25.html">proofs using pictures</a>. This would be the master map (a map in the sense of starting at one point and traveling to another point via intermediate points), and for a particular proof assistant, one would have a subgraph of the master map. Then if one wanted to create a proof using a proof assistant that is found on the master map but not on the subgraph, by comparing the two graphs one could get an idea of the work needed to create the missing proof(s) for the proof assistant. </p>&#xA;&#xA;<p>I am aware that mathematical proofs are not necessarily easily convertible for use with a proof assistant; however, having a general idea of what to do is much better than none at all.</p>&#xA;&#xA;<p>Also, by having the master map, I can see if there are multiple paths from one point to another and choose a path that is more amenable to the particular proof assistant.</p>&#xA;&#xA;<p>EDIT</p>&#xA;&#xA;<p>In searching I found something similar for <a href="http://dlmf.nist.gov/">mathematical functions</a>. I did not find one for proofs at <a href="http://www.nist.gov/index.html">NIST</a>.</p>&#xA;
habedi/stack-exchange-dataset
3,090
For a Turing Machine $M_1$, how is the set of machines $M_2$ which are "shorter" than $M_1$ and which accept the same language decidable?
<p>I wonder how it can be that the following language is in $\mathrm R$.</p>&#xA;&#xA;<p>$L_{M_1}=\Bigl\{\langle M_2\rangle \;\Big|\;\; M_2 \text{ is a TM, and } L(M_1)=L(M_2), \text{ and } |\langle M_1\rangle| &gt; | \langle M_2 \rangle| \Bigr\} $</p>&#xA;&#xA;<p>(I know that it's in $\mathrm R$ since there's an answer for this multiple-choice question, but without explanation.)</p>&#xA;&#xA;<p>I immediately thought that $L_{M_1} \notin \textrm{co-RE} \cup \textrm{RE}$, since we know that checking whether two machines accept the same language is really not decidable. I came to think: is it immediately "false"? But it can't be, since there are a lot of Turing machines that accept the same language and have different codings.</p>&#xA;&#xA;<p>Thanks! </p>&#xA;
computability undecidability
1
For a Turing Machine $M_1$, how is the set of machines $M_2$ which are "shorter" than $M_1$ and which accept the same language decidable? -- (computability undecidability) <p>I wonder how it can be that the following language is in $\mathrm R$.</p>&#xA;&#xA;<p>$L_{M_1}=\Bigl\{\langle M_2\rangle \;\Big|\;\; M_2 \text{ is a TM, and } L(M_1)=L(M_2), \text{ and } |\langle M_1\rangle| &gt; | \langle M_2 \rangle| \Bigr\} $</p>&#xA;&#xA;<p>(I know that it's in $\mathrm R$ since there's an answer for this multiple-choice question, but without explanation.)</p>&#xA;&#xA;<p>I immediately thought that $L_{M_1} \notin \textrm{co-RE} \cup \textrm{RE}$, since we know that checking whether two machines accept the same language is really not decidable. I came to think: is it immediately "false"? But it can't be, since there are a lot of Turing machines that accept the same language and have different codings.</p>&#xA;&#xA;<p>Thanks! </p>&#xA;
habedi/stack-exchange-dataset
3,095
Dataflow framework for global analysis: Why meet, then apply?
<p>In Chapter 9 of the Dragon Book, the authors describe the dataflow framework for global analysis (described also in <a href="http://dragonbook.stanford.edu/lecture-notes/Stanford-CS243/l3-handout.pdf">these slides</a>). In this framework, an analysis is defined by a set of transfer functions, along with a <a href="http://en.wikipedia.org/wiki/Semilattice">meet semilattice</a>.</p>&#xA;&#xA;<p>At each step of the iteration, the algorithm works by maintaining two values for each basic block: an IN set representing information known to be true on input to the basic block, and an OUT set representing information known to be true on output from the basic block. The algorithm works as follows:</p>&#xA;&#xA;<ol>&#xA;<li>Compute the meet of the OUT sets for all predecessors of the current basic block, and set that value as the IN set to the current basic block.</li>&#xA;<li>Compute $f(IN)$ for the current basic block, where $f$ is a transfer function representing the effects of the basic block. Then set OUT for this block equal to this value.</li>&#xA;</ol>&#xA;&#xA;<p>I am confused about why this algorithm works by taking the meet of all the input blocks before applying the transfer function. In some cases (non-distributive analyses), this causes a loss of precision. Wouldn't it make more sense to apply the transfer function to each of the OUT values of the predecessors of the given block, then to meet all of those values together? Or is this not sound?</p>&#xA;&#xA;<p>Thanks!</p>&#xA;
compilers program optimization
1
Dataflow framework for global analysis: Why meet, then apply? -- (compilers program optimization) <p>In Chapter 9 of the Dragon Book, the authors describe the dataflow framework for global analysis (described also in <a href="http://dragonbook.stanford.edu/lecture-notes/Stanford-CS243/l3-handout.pdf">these slides</a>). In this framework, an analysis is defined by a set of transfer functions, along with a <a href="http://en.wikipedia.org/wiki/Semilattice">meet semilattice</a>.</p>&#xA;&#xA;<p>At each step of the iteration, the algorithm works by maintaining two values for each basic block: an IN set representing information known to be true on input to the basic block, and an OUT set representing information known to be true on output from the basic block. The algorithm works as follows:</p>&#xA;&#xA;<ol>&#xA;<li>Compute the meet of the OUT sets for all predecessors of the current basic block, and set that value as the IN set to the current basic block.</li>&#xA;<li>Compute $f(IN)$ for the current basic block, where $f$ is a transfer function representing the effects of the basic block. Then set OUT for this block equal to this value.</li>&#xA;</ol>&#xA;&#xA;<p>I am confused about why this algorithm works by taking the meet of all the input blocks before applying the transfer function. In some cases (non-distributive analyses), this causes a loss of precision. Wouldn't it make more sense to apply the transfer function to each of the OUT values of the predecessors of the given block, then to meet all of those values together? Or is this not sound?</p>&#xA;&#xA;<p>Thanks!</p>&#xA;
habedi/stack-exchange-dataset
3,101
Is the set of Turing machines which stops in at most 50 steps on all inputs, decidable?
<p>Let $F = \{⟨M⟩:\text{M is a TM which stops for every input in at most 50 steps}\}$. I need to decide whether <em>F</em> is decidable or recursively enumerable. I think it's decidable, but I don't know how to prove it.</p>&#xA;&#xA;<p><strong>My thoughts</strong></p>&#xA;&#xA;<p>This "50 steps" part immediately turns on the <strong>R</strong> sign for me. If it were for a specific input, it would be decidable. However, here it's for every input. Checking it for infinitely many inputs makes me think that the problem is <strong>co-RE</strong>, <em>i.e.</em> its complement is acceptable. </p>&#xA;&#xA;<p>Perhaps I can check the configurations and see that all configurations after 50 steps don't lead to an accept state; how do I do that?</p>&#xA;
computability undecidability
1
Is the set of Turing machines which stops in at most 50 steps on all inputs, decidable? -- (computability undecidability) <p>Let $F = \{⟨M⟩:\text{M is a TM which stops for every input in at most 50 steps}\}$. I need to decide whether <em>F</em> is decidable or recursively enumerable. I think it's decidable, but I don't know how to prove it.</p>&#xA;&#xA;<p><strong>My thoughts</strong></p>&#xA;&#xA;<p>This "50 steps" part immediately turns on the <strong>R</strong> sign for me. If it were for a specific input, it would be decidable. However, here it's for every input. Checking it for infinitely many inputs makes me think that the problem is <strong>co-RE</strong>, <em>i.e.</em> its complement is acceptable. </p>&#xA;&#xA;<p>Perhaps I can check the configurations and see that all configurations after 50 steps don't lead to an accept state; how do I do that?</p>&#xA;
habedi/stack-exchange-dataset
3,109
How to compute linear recurrence using matrix with fraction coefficients?
<p>What I'm trying to do is generate <a href="http://en.wikipedia.org/wiki/Motzkin_number" rel="nofollow">Motzkin numbers</a> mod a large number $10^{14} + 7$ (not prime), and it needs to compute the $n$th Motzkin number as fast as possible. From Wikipedia, the formula for the $n$th Motzkin number is defined as following:</p>&#xA;&#xA;<p>$\qquad \displaystyle \begin{align}&#xA; M_{n+1} &amp;= M_n + \sum_{i=0}^{n-1} M_iM_{n-1-i} \\&#xA; &amp;= \frac{2n+3}{n+3}M_n + \frac{3n}{n+3}M_{n-1}&#xA;\end{align}$ </p>&#xA;&#xA;<p>My initial approach is to use the second formula which is obviously faster, but the problem I ran into is the division since modular arithmetic rule doesn't apply.</p>&#xA;&#xA;<pre><code>void generate_motzkin_numbers() {&#xA; motzkin[0] = 1;&#xA; motzkin[1] = 1;&#xA; ull m0 = 1;&#xA; ull m1 = 1;&#xA; ull numerator;&#xA; ull denominator;&#xA; for (int i = 2; i &lt;= MAX_NUMBERS; ++i) {&#xA; numerator = (((2*i + 1)*m1 + 3*(i - 1)*m0)) % MODULO;&#xA; denominator = (i + 2);&#xA; motzkin[i] = numerator/denominator;&#xA; m0 = m1;&#xA; m1 = motzkin[i];&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Then I tried the second formula, but the running time is horribly slow because the summation:</p>&#xA;&#xA;<pre><code>void generate_motzkin_numbers_nested_recurrence() {&#xA; mm[0] = 1;&#xA; mm[1] = 1;&#xA; mm[2] = 2;&#xA; mm[3] = 4;&#xA; mm[4] = 9;&#xA; ull result;&#xA; for (int i = 5; i &lt;= MAX_NUMBERS; ++i) {&#xA; result = mm[i - 1];&#xA; for (int k = 0; k &lt;= (i - 2); ++k) {&#xA; result = (result + ((mm[k] * mm[i - 2 - k]) % MODULO)) % MODULO;&#xA; }&#xA; mm[i] = result;&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Next, I'm thinking of using matrix form which eventually can be speed up using exponentiation squaring technique, in other words $M_{n+1}$ can be computed as follows:&#xA;$$M_{n+1} = \begin{bmatrix} \dfrac{2n+3}{n+3} &amp; \dfrac{3n}{n+3} \\ 1 &amp; 0\end{bmatrix}^n \cdot \begin{bmatrix} 1 \\ 1\end{bmatrix}$$ &#xA;With exponentiation by squaring, 
this method's running time is $O(\log(n))$, which I guess is the fastest way possible, where <code>MAX_NUMBERS = 10,000</code>. Unfortunately, again the division with modular arithmetic is killing me. After applying the modulo to the numerator, the division is no longer accurate. So my question is: is there another technique to compute this recurrence modulo a number? I'm thinking of a dynamic programming approach for the summation, but I still think it's not as fast as this method. Any ideas or suggestions would be greatly appreciated. </p>&#xA;
algorithms recurrence relation efficiency integers
1
How to compute linear recurrence using matrix with fraction coefficients? -- (algorithms recurrence relation efficiency integers) <p>What I'm trying to do is generate <a href="http://en.wikipedia.org/wiki/Motzkin_number" rel="nofollow">Motzkin numbers</a> mod a large number $10^{14} + 7$ (not prime), and it needs to compute the $n$th Motzkin number as fast as possible. From Wikipedia, the formula for the $n$th Motzkin number is defined as following:</p>&#xA;&#xA;<p>$\qquad \displaystyle \begin{align}&#xA; M_{n+1} &amp;= M_n + \sum_{i=0}^{n-1} M_iM_{n-1-i} \\&#xA; &amp;= \frac{2n+3}{n+3}M_n + \frac{3n}{n+3}M_{n-1}&#xA;\end{align}$ </p>&#xA;&#xA;<p>My initial approach is to use the second formula which is obviously faster, but the problem I ran into is the division since modular arithmetic rule doesn't apply.</p>&#xA;&#xA;<pre><code>void generate_motzkin_numbers() {&#xA; motzkin[0] = 1;&#xA; motzkin[1] = 1;&#xA; ull m0 = 1;&#xA; ull m1 = 1;&#xA; ull numerator;&#xA; ull denominator;&#xA; for (int i = 2; i &lt;= MAX_NUMBERS; ++i) {&#xA; numerator = (((2*i + 1)*m1 + 3*(i - 1)*m0)) % MODULO;&#xA; denominator = (i + 2);&#xA; motzkin[i] = numerator/denominator;&#xA; m0 = m1;&#xA; m1 = motzkin[i];&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Then I tried the second formula, but the running time is horribly slow because the summation:</p>&#xA;&#xA;<pre><code>void generate_motzkin_numbers_nested_recurrence() {&#xA; mm[0] = 1;&#xA; mm[1] = 1;&#xA; mm[2] = 2;&#xA; mm[3] = 4;&#xA; mm[4] = 9;&#xA; ull result;&#xA; for (int i = 5; i &lt;= MAX_NUMBERS; ++i) {&#xA; result = mm[i - 1];&#xA; for (int k = 0; k &lt;= (i - 2); ++k) {&#xA; result = (result + ((mm[k] * mm[i - 2 - k]) % MODULO)) % MODULO;&#xA; }&#xA; mm[i] = result;&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Next, I'm thinking of using matrix form which eventually can be speed up using exponentiation squaring technique, in other words $M_{n+1}$ can be computed as follows:&#xA;$$M_{n+1} = \begin{bmatrix} \dfrac{2n+3}{n+3} 
&amp; \dfrac{3n}{n+3} \\ 1 &amp; 0\end{bmatrix}^n \cdot \begin{bmatrix} 1 \\ 1\end{bmatrix}$$ &#xA;With exponentiation by squaring, this method's running time is $O(\log(n))$, which I guess is the fastest way possible, where <code>MAX_NUMBERS = 10,000</code>. Unfortunately, again the division with modular arithmetic is killing me. After applying the modulo to the numerator, the division is no longer accurate. So my question is: is there another technique to compute this recurrence modulo a number? I'm thinking of a dynamic programming approach for the summation, but I still think it's not as fast as this method. Any ideas or suggestions would be greatly appreciated. </p>&#xA;
habedi/stack-exchange-dataset
3,110
Introduction into first order logic verification
<p>I am trying to teach myself different approaches to software verification. I have read some articles. As far as I have learned, propositional logic with temporal operators generally uses model checking with SAT solvers (in ongoing, reactive systems), but what about first-order logic with temporal operators? Does it use theorem provers? Or can it also use SAT?</p>&#xA;&#xA;<p>Any pointers to books or articles for beginners in this matter are much appreciated.</p>&#xA;
reference request logic formal methods sat solvers software verification
1
Introduction into first order logic verification -- (reference request logic formal methods sat solvers software verification) <p>I am trying to teach myself different approaches to software verification. I have read some articles. As far as I have learned, propositional logic with temporal operators generally uses model checking with SAT solvers (in ongoing, reactive systems), but what about first-order logic with temporal operators? Does it use theorem provers? Or can it also use SAT?</p>&#xA;&#xA;<p>Any pointers to books or articles for beginners in this matter are much appreciated.</p>&#xA;
habedi/stack-exchange-dataset
3,115
What different lines of reasoning and traditions lead to the conclusion that Software Engineering is or isn't part of Computer Science?
<p><strong>Background:</strong> Some people consider Software Engineering as a branch of Computer Science, while others consider that they are, or should be, separate. The former stance seems to be well presented in written works. On Wikipedia, Software Engineering <a href="http://en.wikipedia.org/wiki/Computer_science#Areas_of_computer_science" rel="nofollow noreferrer">is classified as Applied Computer Science</a>, along with, e.g., Artificial Intelligence and Cryptography. The <a href="http://www.acm.org/about/class/ccs98-html" rel="nofollow noreferrer">ACM Computing Classification</a> system places SE under Software, along with, e.g., Programming Languages and Operating Systems. <a href="http://www.csab.org/" rel="nofollow noreferrer">CSAB</a> has also <a href="http://web.archive.org/web/20090117183438/http://www.csab.org/comp_sci_profession.html" rel="nofollow noreferrer">considered SE as part of Computer Science</a>, and considered that</p>&#xA;&#xA;<blockquote>&#xA; <p>[...] it includes theoretical studies, experimental methods, and engineering design all in one discipline. [...] It is this close interaction of the theoretical and design aspects of the field that binds them together into a single discipline.<br>&#xA; [...]<br>&#xA; Clearly, the computer scientist must not only have sufficient training in the computer science areas to be able to accomplish such tasks, but must also have a firm understanding in areas of mathematics and science, as well as a broad education in liberal studies to provide a basis for understanding the societal implications of the work being performed.</p>&#xA;</blockquote>&#xA;&#xA;<p>While the above seems to reflect my own view, there is also the stance that the term Computer Science should be reserved for what is sometimes called Theoretical Computer Science, such as Computability Theory, Computational Complexity Theory, Algorithms and Data Structures, and that other areas should be split off into their own disciplines. 
In the introductory courses I took for my CS degree, the core of CS was defined via the questions "what can be automated?" (Computability Theory) and "what can be automated efficiently?" (Computational Complexity Theory). The "how" was then explored at length in the remaining courses, but one could well consider SE being so far from these core questions that it shouldn't be considered part of CS.</p>&#xA;&#xA;<p>Even <a href="https://cs.meta.stackexchange.com/questions/39/what-is-our-stance-on-software-engineering">here on CS.SE</a>, there has been debate about whether SE questions are on-topic, reflecting the problematic relationship between CS and SE.</p>&#xA;&#xA;<p><strong>Question:</strong> I'm wondering what lines of reasoning and traditions within Computer Science might lead to one conclusion or the other: that SE is, or should be, part of CS or that it is not. (This implies that answers should present both sides.)</p>&#xA;
software engineering
1
What different lines of reasoning and traditions lead to the conclusion that Software Engineering is or isn't part of Computer Science? -- (software engineering) <p><strong>Background:</strong> Some people consider Software Engineering as a branch of Computer Science, while others consider that they are, or should be, separate. The former stance seems to be well presented in written works. On Wikipedia, Software Engineering <a href="http://en.wikipedia.org/wiki/Computer_science#Areas_of_computer_science" rel="nofollow noreferrer">is classified as Applied Computer Science</a>, along with, e.g., Artificial Intelligence and Cryptography. The <a href="http://www.acm.org/about/class/ccs98-html" rel="nofollow noreferrer">ACM Computing Classification</a> system places SE under Software, along with, e.g., Programming Languages and Operating Systems. <a href="http://www.csab.org/" rel="nofollow noreferrer">CSAB</a> has also <a href="http://web.archive.org/web/20090117183438/http://www.csab.org/comp_sci_profession.html" rel="nofollow noreferrer">considered SE as part of Computer Science</a>, and considered that</p>&#xA;&#xA;<blockquote>&#xA; <p>[...] it includes theoretical studies, experimental methods, and engineering design all in one discipline. [...] 
It is this close interaction of the theoretical and design aspects of the field that binds them together into a single discipline.<br>&#xA; [...]<br>&#xA; Clearly, the computer scientist must not only have sufficient training in the computer science areas to be able to accomplish such tasks, but must also have a firm understanding in areas of mathematics and science, as well as a broad education in liberal studies to provide a basis for understanding the societal implications of the work being performed.</p>&#xA;</blockquote>&#xA;&#xA;<p>While the above seems to reflect my own view, there is also the stance that the term Computer Science should be reserved for what is sometimes called Theoretical Computer Science, such as Computability Theory, Computational Complexity Theory, Algorithms and Data Structures, and that other areas should be split off into their own disciplines. In the introductory courses I took for my CS degree, the core of CS was defined via the questions "what can be automated?" (Computability Theory) and "what can be automated efficiently?" (Computational Complexity Theory). The "how" was then explored at length in the remaining courses, but one could well consider SE being so far from these core questions that it shouldn't be considered part of CS.</p>&#xA;&#xA;<p>Even <a href="https://cs.meta.stackexchange.com/questions/39/what-is-our-stance-on-software-engineering">here on CS.SE</a>, there has been debate about whether SE questions are on-topic, reflecting the problematic relationship between CS and SE.</p>&#xA;&#xA;<p><strong>Question:</strong> I'm wondering what lines of reasoning and traditions within Computer Science might lead to one conclusion or the other: that SE is, or should be, part of CS or that it is not. (This implies that answers should present both sides.)</p>&#xA;
habedi/stack-exchange-dataset
3,119
Is it decidable whether a TM reaches some position on the tape?
<p>I have these questions from an old exam I'm trying to solve. For each problem, the input is an encoding of some Turing machine $M$.</p>&#xA;&#xA;<blockquote>&#xA; <p>For an integer $c&gt;1$, and the following three problems:</p>&#xA; &#xA; <ol>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $|x|+c$ position when running on $x$?</p></li>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $\max \{|x|-c,1 \}$ position when running on $x$?</p></li>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $(|x|+1)/c$ position when running on $x$?</p></li>&#xA; </ol>&#xA; &#xA; <p>How many problems are decidable? </p>&#xA;</blockquote>&#xA;&#xA;<p>Problem number (1), in my opinion, is in $\text {coRE} \smallsetminus \text R$ if I understand correctly, since I can run all inputs in parallel and stop if some input reached this position; to show that it's not in $\text R$, I can reduce the complement of <strong>Atm</strong> to it. I construct a Turing machine $M'$ as follows: for an input $y$, I check whether $y$ is a history of computation; if it is, then $M'$ runs right and doesn't stop; if it's not, then it stops.</p>&#xA;&#xA;<p>For (3), I believe that it is decidable since for $c \geqslant 2$ it is all the Turing machines that always stay on the first cell of the tape, since for a string of one char it can pass the first cell, so I need to simulate all the strings of length 1 for $|Q|+1$ steps (Is this correct?), and see if I'm using only the first cell in all of them.</p>&#xA;&#xA;<p>I don't really know what to do with (2).</p>&#xA;
computability turing machines undecidability
1
Is it decidable whether a TM reaches some position on the tape? -- (computability turing machines undecidability) <p>I have these questions from an old exam I'm trying to solve. For each problem, the input is an encoding of some Turing machine $M$.</p>&#xA;&#xA;<blockquote>&#xA; <p>For an integer $c&gt;1$, and the following three problems:</p>&#xA; &#xA; <ol>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $|x|+c$ position when running on $x$?</p></li>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $\max \{|x|-c,1 \}$ position when running on $x$?</p></li>&#xA; <li><p>Is it true that for every input $x$, M does not pass the $(|x|+1)/c$ position when running on $x$?</p></li>&#xA; </ol>&#xA; &#xA; <p>How many problems are decidable? </p>&#xA;</blockquote>&#xA;&#xA;<p>Problem number (1), in my opinion, is in $\text {coRE} \smallsetminus \text R$ if I understand correctly, since I can run all inputs in parallel and stop if some input reached this position; to show that it's not in $\text R$, I can reduce the complement of <strong>Atm</strong> to it. I construct a Turing machine $M'$ as follows: for an input $y$, I check whether $y$ is a history of computation; if it is, then $M'$ runs right and doesn't stop; if it's not, then it stops.</p>&#xA;&#xA;<p>For (3), I believe that it is decidable since for $c \geqslant 2$ it is all the Turing machines that always stay on the first cell of the tape, since for a string of one char it can pass the first cell, so I need to simulate all the strings of length 1 for $|Q|+1$ steps (Is this correct?), and see if I'm using only the first cell in all of them.</p>&#xA;&#xA;<p>I don't really know what to do with (2).</p>&#xA;
habedi/stack-exchange-dataset
3,134
Is #P closed under exponentiation? modulo?
<p>The complexity class $\newcommand{\sharpp}{\mathsf{\#P}}\sharpp$ is defined as </p>&#xA;&#xA;<p>$\qquad \displaystyle \sharpp = \{f \mid \exists \text{ polynomial-time NTM } M\ \forall x.\, f(x) = \#\operatorname{accept}_{M}(x)\}$. </p>&#xA;&#xA;<p>It is known that $\sharpp$ is closed under addition, multiplication and binomial coefficient. I was wondering if it is closed under power. For example, we are given a $\sharpp$ function $f$ and another $\sharpp$ function $g$. Is it true that $f^{g}$ or $g^{f}$ are $\sharpp$ functions as well? </p>&#xA;&#xA;<p>This is an edit after the question was answered.</p>&#xA;&#xA;<p>Is ($f$ modulo $g$) a $\sharpp$ function? How about when we are given a $\newcommand{\FP}{\mathsf{FP}}\FP$ function $h$? Then is ($f$ modulo $h$) a $\sharpp$ function? </p>&#xA;
complexity theory turing machines closure properties
1
Is #P closed under exponentiation? modulo? -- (complexity theory turing machines closure properties) <p>The complexity class $\newcommand{\sharpp}{\mathsf{\#P}}\sharpp$ is defined as </p>&#xA;&#xA;<p>$\qquad \displaystyle \sharpp = \{f \mid \exists \text{ polynomial-time NTM } M\ \forall x.\, f(x) = \#\operatorname{accept}_{M}(x)\}$. </p>&#xA;&#xA;<p>It is known that $\sharpp$ is closed under addition, multiplication and binomial coefficient. I was wondering if it is closed under power. For example, we are given a $\sharpp$ function $f$ and another $\sharpp$ function $g$. Is it true that $f^{g}$ or $g^{f}$ are $\sharpp$ functions as well? </p>&#xA;&#xA;<p>This is an edit after the question was answered.</p>&#xA;&#xA;<p>Is ($f$ modulo $g$) a $\sharpp$ function? How about when we are given a $\newcommand{\FP}{\mathsf{FP}}\FP$ function $h$? Then is ($f$ modulo $h$) a $\sharpp$ function? </p>&#xA;
habedi/stack-exchange-dataset
3,138
How Does Populating Pastry's Routing Table Work?
<p>I'm trying to implement the Pastry Distributed Hash Table, but some things are escaping my understanding. I was hoping someone could clarify.</p>&#xA;&#xA;<p><strong>Disclaimer</strong>: I'm not a computer science student. I've taken precisely two computer science courses in my life, and neither dealt with anything remotely complex. I've worked with software for years, so I feel I'm up to the implementation task, if I could just wrap my head around the ideas. So I may just be missing something obvious.</p>&#xA;&#xA;<p>I've read the paper that the authors published [1], and I've made some good progress, but I keep getting hung up on this one particular point in how the routing table works:</p>&#xA;&#xA;<p>The paper claims that</p>&#xA;&#xA;<blockquote>&#xA; <p>A node’s routing table, $R$, is organized into $\lceil \log_{2^b} N\rceil$ &#xA; rows with $2^b - 1$ entries each. The $2^b - 1$ entries at&#xA; row $n$ of the routing table each refer to a node whose nodeId shares&#xA; the present node’s nodeId in the first n digits, but whose $n + 1$th&#xA; digit has one of the $2^b - 1$ possible values other than the $n + 1$th&#xA; digit in the present node’s id.</p>&#xA;</blockquote>&#xA;&#xA;<p>The $b$ stands for an application-specific variable, usually $4$. Let's use $b=4$, for simplicity's sake. So the above is</p>&#xA;&#xA;<blockquote>&#xA; <p>A node’s routing table, $R$, is organized into $\lceil \log_{16} N\rceil$ rows &#xA; with $15$ entries each. The $15$ entries at&#xA; row $n$ of the routing table each refer to a node whose nodeId shares&#xA; the present node’s nodeId in the first n digits, but whose $n + 1$th&#xA; digit has one of the $2^b - 1$ possible values other than the $n + 1$th&#xA; digit in the present node’s id.</p>&#xA;</blockquote>&#xA;&#xA;<p>I understand that much. Further, $N$ is the number of servers in the cluster. 
I get that, too.</p>&#xA;&#xA;<p>My question is, if the row an entry is placed into depends on the shared length of the key, why the seemingly random limit on the number of rows? Each nodeId has 32 digits, when $b=4$ (128 bit nodeIds divided into digits of b bits). So what happens when $N$ gets high enough that $\lceil\log_{16} N\rceil &gt; 32$? I realise it would take 340,282,366,920,938,463,463,374,607,431,768,211,457 (if my math is right) servers to hit this scenario, but it just seems like an odd inclusion, and the correlation is never explained.</p>&#xA;&#xA;<p>Furthermore, what happens if you have a small number of servers? If I have fewer than 16 servers, I only have one row in the table. Further, under no circumstances would every entry in the row have a corresponding server. Should entries be left empty? I realise that I'd be able to find the server in the leaf set no matter what, given that few servers, but the same quandary is raised for the second row--what if I don't have a server that has a nodeId such that I can fill every possible permutation of the nth digit? Finally, if I have, say, four servers, and I have two nodes that share, say, 20 of their 32 digits, by some random fluke... 
should I populate 20 rows of the table for that node, even though that is far more rows than I could even come close to filling?</p>&#xA;&#xA;<p>Here's what I've come up with, trying to reason my way through this:</p>&#xA;&#xA;<ol>&#xA;<li>Entries are to be set to a null value if there is not a node that matches that prefix precisely.</li>&#xA;<li>Empty rows are to be added until enough rows exist to match the shared length of the nodeIds.</li>&#xA;<li>If, and only if, there is no matching entry for a desired message ID, fall back on a search of the routing table for a nodeId whose shared length is greater than or equal to the current nodeId's and whose entry is mathematically closer than the current nodeId's to the desired ID.</li>&#xA;<li>If no suitable node can be found in #3, assume this is the destination and deliver the message.</li>&#xA;</ol>&#xA;&#xA;<p>Do all four of these assumptions hold up? Is there somewhere else I should be looking for information on this?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1007/3-540-45518-3_18">Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems</a> by A. Rowstron and P. Druschel (2001) -- <a href="http://research.microsoft.com/~antr/PAST/pastry.pdf">download here</a></li>&#xA;</ol>&#xA;
algorithms data structures distributed systems hash tables
1
How Does Populating Pastry's Routing Table Work? -- (algorithms data structures distributed systems hash tables) <p>I'm trying to implement the Pastry Distributed Hash Table, but some things are escaping my understanding. I was hoping someone could clarify.</p>&#xA;&#xA;<p><strong>Disclaimer</strong>: I'm not a computer science student. I've taken precisely two computer science courses in my life, and neither dealt with anything remotely complex. I've worked with software for years, so I feel I'm up to the implementation task, if I could just wrap my head around the ideas. So I may just be missing something obvious.</p>&#xA;&#xA;<p>I've read the paper that the authors published [1], and I've made some good progress, but I keep getting hung up on this one particular point in how the routing table works:</p>&#xA;&#xA;<p>The paper claims that</p>&#xA;&#xA;<blockquote>&#xA; <p>A node’s routing table, $R$, is organized into $\lceil \log_{2^b} N\rceil$ &#xA; rows with $2^b - 1$ entries each. The $2^b - 1$ entries at&#xA; row $n$ of the routing table each refer to a node whose nodeId shares&#xA; the present node’s nodeId in the first n digits, but whose $n + 1$th&#xA; digit has one of the $2^b - 1$ possible values other than the $n + 1$th&#xA; digit in the present node’s id.</p>&#xA;</blockquote>&#xA;&#xA;<p>The $b$ stands for an application-specific variable, usually $4$. Let's use $b=4$, for simplicity's sake. So the above is</p>&#xA;&#xA;<blockquote>&#xA; <p>A node’s routing table, $R$, is organized into $\lceil \log_{16} N\rceil$ rows &#xA; with $15$ entries each. The $15$ entries at&#xA; row $n$ of the routing table each refer to a node whose nodeId shares&#xA; the present node’s nodeId in the first n digits, but whose $n + 1$th&#xA; digit has one of the $2^b - 1$ possible values other than the $n + 1$th&#xA; digit in the present node’s id.</p>&#xA;</blockquote>&#xA;&#xA;<p>I understand that much. Further, $N$ is the number of servers in the cluster. 
I get that, too.</p>&#xA;&#xA;<p>My question is, if the row an entry is placed into depends on the shared length of the key, why the seemingly random limit on the number of rows? Each nodeId has 32 digits, when $b=4$ (128 bit nodeIds divided into digits of b bits). So what happens when $N$ gets high enough that $\lceil\log_{16} N\rceil &gt; 32$? I realise it would take 340,282,366,920,938,463,463,374,607,431,768,211,457 (if my math is right) servers to hit this scenario, but it just seems like an odd inclusion, and the correlation is never explained.</p>&#xA;&#xA;<p>Furthermore, what happens if you have a small number of servers? If I have fewer than 16 servers, I only have one row in the table. Further, under no circumstances would every entry in the row have a corresponding server. Should entries be left empty? I realise that I'd be able to find the server in the leaf set no matter what, given that few servers, but the same quandary is raised for the second row--what if I don't have a server that has a nodeId such that I can fill every possible permutation of the nth digit? Finally, if I have, say, four servers, and I have two nodes that share, say, 20 of their 32 digits, by some random fluke... 
should I populate 20 rows of the table for that node, even though that is far more rows than I could even come close to filling?</p>&#xA;&#xA;<p>Here's what I've come up with, trying to reason my way through this:</p>&#xA;&#xA;<ol>&#xA;<li>Entries are to be set to a null value if there is not a node that matches that prefix precisely.</li>&#xA;<li>Empty rows are to be added until enough rows exist to match the shared length of the nodeIds.</li>&#xA;<li>If, and only if, there is no matching entry for a desired message ID, fall back on a search of the routing table for a nodeId whose shared length is greater than or equal to the current nodeId's and whose entry is mathematically closer than the current nodeId's to the desired ID.</li>&#xA;<li>If no suitable node can be found in #3, assume this is the destination and deliver the message.</li>&#xA;</ol>&#xA;&#xA;<p>Do all four of these assumptions hold up? Is there somewhere else I should be looking for information on this?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="http://dx.doi.org/10.1007/3-540-45518-3_18">Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems</a> by A. Rowstron and P. Druschel (2001) -- <a href="http://research.microsoft.com/~antr/PAST/pastry.pdf">download here</a></li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
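As a quick sanity check on the row-count formula in the Pastry question above — an illustrative sketch of my own, not code from the paper — the following computes $\lceil \log_{2^b} N\rceil$ with exact integer arithmetic (floating-point log risks off-by-one errors for huge $N$), showing why the nominal 32-row limit for $b=4$ and 128-bit nodeIds can never bind in practice:

```python
def routing_table_rows(n_servers: int, b: int = 4) -> int:
    """ceil(log_{2^b} N), the number of routing-table rows in Pastry.

    Computed with exact integer arithmetic: grow capacity = (2**b)**rows
    until it covers n_servers, counting the rows needed.
    """
    base = 2 ** b
    rows, capacity = 0, 1
    while capacity < n_servers:
        capacity *= base
        rows += 1
    return rows

# A small cluster needs very few rows...
print(routing_table_rows(10))        # 1
print(routing_table_rows(300))       # 3, since 16**2 = 256 < 300 <= 16**3
# ...and exceeding 32 rows needs N > 16**32 = 2**128 servers -- more
# than the number of distinct 128-bit nodeIds, hence the fixed limit.
print(routing_table_rows(16 ** 32))      # 32
print(routing_table_rows(16 ** 32 + 1))  # 33
```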
3,139
How to find contour lines for Appel's Hidden Line Removal Algorithm
<p>For fun I am trying to make a wire-frame viewer for the <a href="http://0x10c.com/doc/dcpu-16.txt" rel="nofollow noreferrer">DCPU-16</a>. I understand how to do everything except how to hide the lines that are hidden in the wire frame. All of the questions here on SO assume you have access to OpenGL; unfortunately I do not have access to anything like that for the DCPU-16 (or any kind of hardware acceleration).</p>&#xA;&#xA;<p>I found a fairly good description of Appel's algorithm on <a href="http://books.google.com/books?id=aVQnUfL3yEwC&amp;lpg=PA251&amp;ots=zCOEvuKqve&amp;dq=Arthur%20Appel%27s%20algorithm.&amp;pg=PA252#v=onepage&amp;q&amp;f=true" rel="nofollow noreferrer">Google Books</a>. However there is one issue I am having trouble figuring out.</p>&#xA;&#xA;<blockquote>&#xA; <p>Appel defined contour line as an edge shared by a front-facing and a&#xA; back-facing polygon, or unshared edge of a front facing polygon that&#xA; is not part of a closed polyhedron. An edge shared by two front-facing&#xA; polygons causes no change in visibility and therefore is not a contour&#xA; line. In Fig. 8.4, edges AB, EF, PC, GK and CH are contour lines,&#xA; whereas edges ED, DC and GI are not.</p>&#xA;</blockquote>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/Gajc7.png" alt="Fig. 8.4"></p>&#xA;&#xA;<p>I understand the rules of the algorithm and how it works once you have your contour lines; however, I do not understand what I need to do to determine if an edge is "<em>shared by a front-facing and a back-facing polygon, or unshared edge of a front facing polygon that is not part of a closed polyhedron</em>" from a coding point of view. I can look at a shape and know which lines are contour lines in my head, but I don't have a clue how to transfer that "understanding" into a coded algorithm.</p>&#xA;&#xA;<hr>&#xA;&#xA;<h2>Update</h2>&#xA;&#xA;<p>I have made some progress in determining contour lines. 
I found <a href="http://www.eng.buffalo.edu/courses/mae573/handouts/lecture13.pdf" rel="nofollow noreferrer">these</a> <a href="http://www.eng.buffalo.edu/courses/mae573/handouts/appel.pdf" rel="nofollow noreferrer">two</a> lecture notes from a University of Buffalo class on computer graphics.</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/xoe49.png" alt="enter image description here"></p>&#xA;&#xA;<blockquote>&#xA; <p>Consider the edges. These fall into three categories. </p>&#xA; &#xA; <ol>&#xA; <li>An edge joining two invisible faces is itself invisible. This will be deleted from the list and ignored. </li>&#xA; <li>An edge joining two potentially-visible faces is called a 'material edge' and will require further processing. </li>&#xA; <li>An edge joining a potentially-visible face and an invisible face is a special case of a 'material edge' and is also called a 'contour&#xA; edge'.</li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>Using the above two pieces of information I am able to get closer to being able to write this out as code, but I still have a long way to go.</p>&#xA;
algorithms computational geometry graphics
1
How to find contour lines for Appel's Hidden Line Removal Algorithm -- (algorithms computational geometry graphics) <p>For fun I am trying to make a wire-frame viewer for the <a href="http://0x10c.com/doc/dcpu-16.txt" rel="nofollow noreferrer">DCPU-16</a>. I understand how to do everything except how to hide the lines that are hidden in the wire frame. All of the questions here on SO assume you have access to OpenGL; unfortunately I do not have access to anything like that for the DCPU-16 (or any kind of hardware acceleration).</p>&#xA;&#xA;<p>I found a fairly good description of Appel's algorithm on <a href="http://books.google.com/books?id=aVQnUfL3yEwC&amp;lpg=PA251&amp;ots=zCOEvuKqve&amp;dq=Arthur%20Appel%27s%20algorithm.&amp;pg=PA252#v=onepage&amp;q&amp;f=true" rel="nofollow noreferrer">Google Books</a>. However there is one issue I am having trouble figuring out.</p>&#xA;&#xA;<blockquote>&#xA; <p>Appel defined contour line as an edge shared by a front-facing and a&#xA; back-facing polygon, or unshared edge of a front facing polygon that&#xA; is not part of a closed polyhedron. An edge shared by two front-facing&#xA; polygons causes no change in visibility and therefore is not a contour&#xA; line. In Fig. 8.4, edges AB, EF, PC, GK and CH are contour lines,&#xA; whereas edges ED, DC and GI are not.</p>&#xA;</blockquote>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/Gajc7.png" alt="Fig. 8.4"></p>&#xA;&#xA;<p>I understand the rules of the algorithm and how it works once you have your contour lines; however, I do not understand what I need to do to determine if an edge is "<em>shared by a front-facing and a back-facing polygon, or unshared edge of a front facing polygon that is not part of a closed polyhedron</em>" from a coding point of view. 
I can look at a shape and know which lines are contour lines in my head, but I don't have a clue how to transfer that "understanding" into a coded algorithm.</p>&#xA;&#xA;<hr>&#xA;&#xA;<h2>Update</h2>&#xA;&#xA;<p>I have made some progress in determining contour lines. I found <a href="http://www.eng.buffalo.edu/courses/mae573/handouts/lecture13.pdf" rel="nofollow noreferrer">these</a> <a href="http://www.eng.buffalo.edu/courses/mae573/handouts/appel.pdf" rel="nofollow noreferrer">two</a> lecture notes from a University of Buffalo class on computer graphics.</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/xoe49.png" alt="enter image description here"></p>&#xA;&#xA;<blockquote>&#xA; <p>Consider the edges. These fall into three categories. </p>&#xA; &#xA; <ol>&#xA; <li>An edge joining two invisible faces is itself invisible. This will be deleted from the list and ignored. </li>&#xA; <li>An edge joining two potentially-visible faces is called a 'material edge' and will require further processing. </li>&#xA; <li>An edge joining a potentially-visible face and an invisible face is a special case of a 'material edge' and is also called a 'contour&#xA; edge'.</li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>Using the above two pieces of information I am able to get closer to being able to write this out as code, but I still have a long way to go.</p>&#xA;
habedi/stack-exchange-dataset
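The three-way edge classification quoted in the Appel question above translates almost directly into code. This is a hypothetical sketch (the function and parameter names are mine, not from the lecture notes); it assumes each face has already been tested for front-facing-ness, e.g. by the sign of the dot product of its outward normal with the view direction:

```python
def classify_edge(front_a: bool, front_b=None) -> str:
    """Classify an edge from the visibility of its adjoining faces.

    front_a / front_b: True if the face is front-facing (potentially
    visible), False if back-facing. Pass front_b=None for an unshared
    edge of an open (non-closed) surface.
    """
    if front_b is None:
        # Appel: an unshared edge of a front-facing polygon is a contour line.
        return "contour" if front_a else "invisible"
    if front_a and front_b:
        return "material"    # two potentially-visible faces
    if front_a or front_b:
        return "contour"     # one visible face, one invisible face
    return "invisible"       # two invisible faces: delete and ignore

print(classify_edge(True, False))   # contour
print(classify_edge(True, True))    # material
print(classify_edge(False, False))  # invisible
print(classify_edge(True))          # contour (unshared front-facing edge)
```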
3,141
Pizza commercial claim of 34 million combinations
<p>A pizza commercial claims that you can combine their ingredients to 34 million different combinations. I didn't believe it, so I dusted off my rusty combinatorics skills and tried to figure it out. Here's what I have so far:&#xA;From the online ordering site I got the choices</p>&#xA;&#xA;<ol>&#xA;<li>crust (4 types, choose 1)</li>&#xA;<li>size (4 types, choose 1) some crusts are limited to a certain size - not accounting for that, but would like to.</li>&#xA;<li>cheese (5 types, choose 1)</li>&#xA;<li>sauce (4 types, choose 1)</li>&#xA;<li>sauce level (3 types, choose 1)</li>&#xA;<li>meats (9 types, choose up to 9)</li>&#xA;<li>non-meats (15 types, choose up to 15)</li>&#xA;</ol>&#xA;&#xA;<p>So I figured this was a combination problem (order is not important) and not an n choose k problem, null is allowed for anything but crust and crust, size, cheese, sauce and sauce level would all be choose only one. Meats and non-meats $2^?$? So that would be:</p>&#xA;&#xA;<ol>&#xA;<li>crust $\binom{4}{1}=4$</li>&#xA;<li>size $\binom{4}{1}=4$</li>&#xA;<li>cheese $\binom{5}{1}=5$</li>&#xA;<li>sauce $\binom{4}{1}=4$</li>&#xA;<li>sauce level $\binom{3}{1}=3$</li>&#xA;<li>meats $2^9 = 512$</li>&#xA;<li>non-meats $2^{15} = 32768$</li>&#xA;</ol>&#xA;&#xA;<p>At this point I'm stuck, how do I combine these to arrive at the total number of possible combinations?</p>&#xA;&#xA;<p>I found this <a href="http://mdm4u1.wetpaint.com/page/5.3+Problem+Solving+With+Combinations+%28Part+1%29" rel="noreferrer">site</a> helpful.</p>&#xA;&#xA;<p><strong>ETA:</strong>&#xA;If I don't account for the limitations on crust size - some crusts are only available in certain sizes - there are over 16 billion; 16,106,127,360 combinations available, so they were off by quite a bit.</p>&#xA;
combinatorics discrete mathematics
1
Pizza commercial claim of 34 million combinations -- (combinatorics discrete mathematics) <p>A pizza commercial claims that you can combine their ingredients to 34 million different combinations. I didn't believe it, so I dusted off my rusty combinatorics skills and tried to figure it out. Here's what I have so far:&#xA;From the online ordering site I got the choices</p>&#xA;&#xA;<ol>&#xA;<li>crust (4 types, choose 1)</li>&#xA;<li>size (4 types, choose 1) some crusts are limited to a certain size - not accounting for that, but would like to.</li>&#xA;<li>cheese (5 types, choose 1)</li>&#xA;<li>sauce (4 types, choose 1)</li>&#xA;<li>sauce level (3 types, choose 1)</li>&#xA;<li>meats (9 types, choose up to 9)</li>&#xA;<li>non-meats (15 types, choose up to 15)</li>&#xA;</ol>&#xA;&#xA;<p>So I figured this was a combination problem (order is not important) and not an n choose k problem, null is allowed for anything but crust and crust, size, cheese, sauce and sauce level would all be choose only one. Meats and non-meats $2^?$? So that would be:</p>&#xA;&#xA;<ol>&#xA;<li>crust $\binom{4}{1}=4$</li>&#xA;<li>size $\binom{4}{1}=4$</li>&#xA;<li>cheese $\binom{5}{1}=5$</li>&#xA;<li>sauce $\binom{4}{1}=4$</li>&#xA;<li>sauce level $\binom{3}{1}=3$</li>&#xA;<li>meats $2^9 = 512$</li>&#xA;<li>non-meats $2^{15} = 32768$</li>&#xA;</ol>&#xA;&#xA;<p>At this point I'm stuck, how do I combine these to arrive at the total number of possible combinations?</p>&#xA;&#xA;<p>I found this <a href="http://mdm4u1.wetpaint.com/page/5.3+Problem+Solving+With+Combinations+%28Part+1%29" rel="noreferrer">site</a> helpful.</p>&#xA;&#xA;<p><strong>ETA:</strong>&#xA;If I don't account for the limitations on crust size - some crusts are only available in certain sizes - there are over 16 billion; 16,106,127,360 combinations available, so they were off by quite a bit.</p>&#xA;
habedi/stack-exchange-dataset
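Since every choice in the pizza question above is independent, the per-category counts simply multiply (the rule of product); a short check reproduces the asker's 16-billion figure, which indeed dwarfs the advertised 34 million:

```python
# Rule of product: independent choices multiply. Single-pick categories
# contribute their option count; each optional topping doubles the total
# (in or out), so k toppings contribute 2**k subsets.
crust, size, cheese, sauce, sauce_level = 4, 4, 5, 4, 3
meat_subsets = 2 ** 9        # any subset of 9 meats
non_meat_subsets = 2 ** 15   # any subset of 15 non-meats

total = crust * size * cheese * sauce * sauce_level * meat_subsets * non_meat_subsets
print(f"{total:,}")  # 16,106,127,360
```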
3,149
What is the meaning of $O(m+n)$?
<p>This is a basic question, but I'm thinking that $O(m+n)$ is the same as $O(\max(m,n))$, since the larger term should dominate as we go to infinity? Also, that would be different from $O(\min(m,n))$. Is that right? I keep seeing this notation, especially when discussing graph algorithms. For example, you routinely see: $O(|V| + |E|)$ (e.g. see <a href="http://algs4.cs.princeton.edu/41undirected/">here</a>).</p>&#xA;
terminology asymptotics mathematical analysis landau notation
1
What is the meaning of $O(m+n)$? -- (terminology asymptotics mathematical analysis landau notation) <p>This is a basic question, but I'm thinking that $O(m+n)$ is the same as $O(\max(m,n))$, since the larger term should dominate as we go to infinity? Also, that would be different from $O(\min(m,n))$. Is that right? I keep seeing this notation, especially when discussing graph algorithms. For example, you routinely see: $O(|V| + |E|)$ (e.g. see <a href="http://algs4.cs.princeton.edu/41undirected/">here</a>).</p>&#xA;
habedi/stack-exchange-dataset
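The intuition in the $O(m+n)$ question above rests on a sandwich inequality: for $m, n \ge 0$, $\max(m,n) \le m+n \le 2\max(m,n)$, so the two quantities differ by at most a constant factor and define the same asymptotic class — while $\min(m,n)$ admits no such bound. A brute-force numeric check of both claims:

```python
# Exhaustive check of max(m, n) <= m + n <= 2 * max(m, n) on a grid.
# This two-sided bound is what makes O(m + n) and O(max(m, n)) identical.
for m in range(100):
    for n in range(100):
        assert max(m, n) <= m + n <= 2 * max(m, n)

# min(m, n) is NOT sandwiched: with m fixed at 1, m + n grows without
# bound while min(m, n) stays at 1, so O(min(m, n)) is a different class.
print(min(1, 10 ** 6), 1 + 10 ** 6)  # 1 1000001
```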
3,154
How to show two models of computation are equivalent?
<p>I'm seeking an explanation of how one could prove that two models of computation are equivalent. I have been reading books on the subject, but equivalence proofs are omitted. I have a basic idea about what it means for two models of computation to be equivalent (the automata view: if they accept the same languages). Are there other ways of thinking about equivalence? If you could help me understand how to prove that the Turing-machine model is equivalent to lambda calculus, that would be sufficient.</p>&#xA;
proof techniques computation models simulation
1
How to show two models of computation are equivalent? -- (proof techniques computation models simulation) <p>I'm seeking an explanation of how one could prove that two models of computation are equivalent. I have been reading books on the subject, but equivalence proofs are omitted. I have a basic idea about what it means for two models of computation to be equivalent (the automata view: if they accept the same languages). Are there other ways of thinking about equivalence? If you could help me understand how to prove that the Turing-machine model is equivalent to lambda calculus, that would be sufficient.</p>&#xA;
habedi/stack-exchange-dataset
3,156
Dynamic changes to classes or context activation -- how to treat existing objects in a consistent way?
<p>I am looking for references and papers on the following topic.</p>&#xA;<ol>&#xA;<li><p>In <em>general</em>, some programming languages allow dynamic changes to classes. As an example, a new instance variable ‘weight’ can be added to the class <code>Edge</code> (the class of unweighted edges of graphs). But what should happen with existing edge objects?</p>&#xA;<p>They can be upgraded to include the new instance variable with a default value, perhaps weight <code>0</code>, in the edge example. Or existing objects stay the same.</p>&#xA;</li>&#xA;<li><p>In <em>context-oriented</em> programming, similar situations can arise, when a context is dynamically activated at run-time. This may affect changes to methods which are currently executed (although I am concerned at the single thread execution at the moment).</p>&#xA;</li>&#xA;<li><p>Considering <em>design patterns</em>, when a proxy object wraps another object, references to the old object may expect certain invariants that the proxy object doesn’t adhere to. This may also lead to inconsistencies when an object is wrapped/’updated’ with a proxy object.</p>&#xA;</li>&#xA;</ol>&#xA;<p>Are there any references that list possible ways to treat the problem in case of dynamic changes/activation? Like the options to keep the state consistent?</p>&#xA;<p>I looked primarily in the communities of dynamic software evolution, context-oriented programming and software components. Are there other important communities I can search to find references?</p>&#xA;
reference request programming languages semantics object oriented
1
Dynamic changes to classes or context activation -- how to treat existing objects in a consistent way? -- (reference request programming languages semantics object oriented) <p>I am looking for references and papers on the following topic.</p>&#xA;<ol>&#xA;<li><p>In <em>general</em>, some programming languages allow dynamic changes to classes. As an example, a new instance variable ‘weight’ can be added to the class <code>Edge</code> (the class of unweighted edges of graphs). But what should happen with existing edge objects?</p>&#xA;<p>They can be upgraded to include the new instance variable with a default value, perhaps weight <code>0</code>, in the edge example. Or existing objects stay the same.</p>&#xA;</li>&#xA;<li><p>In <em>context-oriented</em> programming, similar situations can arise, when a context is dynamically activated at run-time. This may affect changes to methods which are currently executed (although I am concerned at the single thread execution at the moment).</p>&#xA;</li>&#xA;<li><p>Considering <em>design patterns</em>, when a proxy object wraps another object, references to the old object may expect certain invariants that the proxy object doesn’t adhere to. This may also lead to inconsistencies when an object is wrapped/’updated’ with a proxy object.</p>&#xA;</li>&#xA;</ol>&#xA;<p>Are there any references that list possible ways to treat the problem in case of dynamic changes/activation? Like the options to keep the state consistent?</p>&#xA;<p>I looked primarily in the communities of dynamic software evolution, context-oriented programming and software components. Are there other important communities I can search to find references?</p>&#xA;
habedi/stack-exchange-dataset
3,161
Why do larger input sizes imply harder instances?
<p><em>Below, assume we're working with an infinite-tape Turing machine.</em></p>&#xA;&#xA;<p>When explaining the notion of time complexity to someone, and why it is measured relative to the input size of an instance, I stumbled across the following claim:</p>&#xA;&#xA;<blockquote>&#xA; <p>[..] For example, it's natural that you'd need more steps to multiply two integers with 100000 bits, than, say multiplying two integers with 3 bits.</p>&#xA;</blockquote>&#xA;&#xA;<p>The claim is convincing, but somehow hand-waving. In all algorithms I came across, the larger the input size, the more steps you need. In more precise words, the time complexity is a <a href="http://mathworld.wolfram.com/IncreasingFunction.html">monotonically increasing function</a> of the input size.</p>&#xA;&#xA;<blockquote>&#xA; <p>Is it the case that time complexity is <em>always</em> an increasing function in the input size? If so, why is it the case? Is there a <em>proof</em> for that beyond hand-waving?</p>&#xA;</blockquote>&#xA;
complexity theory time complexity intuition
1
Why do larger input sizes imply harder instances? -- (complexity theory time complexity intuition) <p><em>Below, assume we're working with an infinite-tape Turing machine.</em></p>&#xA;&#xA;<p>When explaining the notion of time complexity to someone, and why it is measured relative to the input size of an instance, I stumbled across the following claim:</p>&#xA;&#xA;<blockquote>&#xA; <p>[..] For example, it's natural that you'd need more steps to multiply two integers with 100000 bits, than, say multiplying two integers with 3 bits.</p>&#xA;</blockquote>&#xA;&#xA;<p>The claim is convincing, but somehow hand-waving. In all algorithms I came across, the larger the input size, the more steps you need. In more precise words, the time complexity is a <a href="http://mathworld.wolfram.com/IncreasingFunction.html">monotonically increasing function</a> of the input size.</p>&#xA;&#xA;<blockquote>&#xA; <p>Is it the case that time complexity is <em>always</em> an increasing function in the input size? If so, why is it the case? Is there a <em>proof</em> for that beyond hand-waving?</p>&#xA;</blockquote>&#xA;
habedi/stack-exchange-dataset
3,176
Is Huffman Encoding always optimal?
<p>The requirement of the encoding to be <em>prefix free</em> results in large trees due to the tree having to be complete. Is there a threshold where fixed-length non-encoded storage of data would be more efficient than encoding the data?</p>&#xA;
information theory data compression
1
Is Huffman Encoding always optimal? -- (information theory data compression) <p>The requirement of the encoding to be <em>prefix free</em> results in large trees due to the tree having to be complete. Is there a threshold where fixed-length non-encoded storage of data would be more efficient than encoding the data?</p>&#xA;
habedi/stack-exchange-dataset
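An illustrative aside on the Huffman question above (my own sketch, not part of the post): a fixed-length code is itself a prefix code, and Huffman is optimal among prefix codes, so Huffman's expected codeword length can never exceed the fixed-length cost of ceil(log2 k) bits for k symbols, even though individual codewords may be longer. A tiny comparison:

```python
import heapq
import math

def huffman_lengths(freqs):
    """Codeword lengths from Huffman's algorithm (assumes >= 2 symbols).

    Heap entries are (total weight, tie-breaker, {symbol: depth so far});
    merging two entries pushes every contained symbol one level deeper.
    """
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
lengths = huffman_lengths(freqs)
avg = sum(freqs[s] * lengths[s] for s in freqs)   # expected bits per symbol
fixed = math.ceil(math.log2(len(freqs)))          # fixed-length cost
print(lengths, avg, fixed)
```

Here the tree does go deeper than the fixed-length code (codewords of length 3), yet the expected length is 1.75 bits versus 2 fixed bits; only for near-uniform frequencies do the two costs coincide.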
3,185
Floating point rounding
<p>Can an IEEE-754 floating point number &lt; 1 (i.e. generated with a random number generator which generates a number >= 0.0 and &lt; 1.0) ever be multiplied by some integer (in floating point form) to get a number equal to or larger than that integer due to rounding?</p>&#xA;&#xA;<p>i.e.</p>&#xA;&#xA;<pre><code>double r = random() ; // generates a floating point number in [0, 1)&#xA;double n = some_int ;&#xA;if (n * r &gt;= n) {&#xA; printf("Rounding Happened") ;&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>This might be equivalent to asking whether there exist N and R such that if R is the largest number less than 1 which can be represented in IEEE-754 then N * R >= N (where * and >= are appropriate IEEE-754 operators)</p>&#xA;&#xA;<p>This comes from <a href="https://stackoverflow.com/questions/1400505/postgresql-random-number-range-1-10/1400752#comment15929846_1400752">this question</a> based on <a href="http://www.postgresql.org/docs/9.1/static/datatype-numeric.html#DATATYPE-FLOAT" rel="nofollow noreferrer">this documentation</a> and the postgresql <a href="http://www.postgresql.org/docs/8.2/static/functions-math.html#FUNCTIONS-MATH-FUNC-TABLE" rel="nofollow noreferrer">random function</a></p>&#xA;
numerical analysis floating point rounding
1
Floating point rounding -- (numerical analysis floating point rounding) <p>Can an IEEE-754 floating point number &lt; 1 (i.e. generated with a random number generator which generates a number >= 0.0 and &lt; 1.0) ever be multiplied by some integer (in floating point form) to get a number equal to or larger than that integer due to rounding?</p>&#xA;&#xA;<p>i.e.</p>&#xA;&#xA;<pre><code>double r = random() ; // generates a floating point number in [0, 1)&#xA;double n = some_int ;&#xA;if (n * r &gt;= n) {&#xA; printf("Rounding Happened") ;&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>This might be equivalent to asking whether there exist N and R such that if R is the largest number less than 1 which can be represented in IEEE-754 then N * R >= N (where * and >= are appropriate IEEE-754 operators)</p>&#xA;&#xA;<p>This comes from <a href="https://stackoverflow.com/questions/1400505/postgresql-random-number-range-1-10/1400752#comment15929846_1400752">this question</a> based on <a href="http://www.postgresql.org/docs/9.1/static/datatype-numeric.html#DATATYPE-FLOAT" rel="nofollow noreferrer">this documentation</a> and the postgresql <a href="http://www.postgresql.org/docs/8.2/static/functions-math.html#FUNCTIONS-MATH-FUNC-TABLE" rel="nofollow noreferrer">random function</a></p>&#xA;
habedi/stack-exchange-dataset
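An empirical aside on the rounding question above (my own sketch, not from the post, assuming the default round-to-nearest mode): take the largest IEEE-754 double strictly below 1.0 and check whether multiplying it by an integer can ever round back up to that integer.

```python
import math

# Largest IEEE-754 double strictly below 1.0, i.e. 1 - 2**-53.
# This is the worst case: any random() result r satisfies r <= this value.
r = math.nextafter(1.0, 0.0)

def rounds_up_to_n(n: int) -> bool:
    """True if the rounded product n * r comes out >= n."""
    return n * r >= n

# Small integers plus the neighbourhoods of powers of two, where the
# spacing of representable doubles changes and rounding is most delicate.
suspects = list(range(1, 10_000)) + [2**k + d for k in range(1, 60) for d in (-1, 0, 1)]
offenders = [n for n in suspects if rounds_up_to_n(n)]
print(offenders)  # empirically empty here: the product stays below n
```

The experiment matches the arithmetic: for integer n, the deficit n·2⁻⁵³ is always at least half an ulp of n (exactly half only at powers of two, where the product is exactly representable and below n), so round-to-nearest never carries the product back up to n.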
3,201
Good text on algorithm complexity
<p>Where should I look for a good introductory text in algorithm complexity? So far, I have had an Algorithms class, and several language classes, but nothing with a theoretical backbone. I get the whole complexity, but sometimes it's hard for me to differentiate between O(1) and O(n) plus there's the whole theta notation and all that, basic explanation of P=NP and simple algorithms, tractability. I want a text that covers all that, and that doesn't require a heavy mathematical background, or something that can be read through.</p>&#xA;&#xA;<p>LE: I'm still in highschool, not in University, and by heavy mathematical background I mean something perhaps not very high above Calculus and Linear Algebra (it's not that I can't understand it, it's the fact that for example learning Taylor series without having done Calculus I is a bit of a stretch; that's what I meant by not mathematically heavy. Something in which the math, with a normal amount of effort can be understood). And, do pardon if I'm wrong, but theoretically speaking, a class at which they teach algorithm design methods and actual algorithms should be called an "Algorithms" class, don't you think?&#xA;In terms of my current understanding, infinite series, limits and integrals I know (most of the complexity books I've glanced at seemed to use those concepts), but you've lost me at the Fast Fourier Transform.</p>&#xA;
complexity theory reference request algorithm analysis education books
1
Good text on algorithm complexity -- (complexity theory reference request algorithm analysis education books) <p>Where should I look for a good introductory text in algorithm complexity? So far, I have had an Algorithms class, and several language classes, but nothing with a theoretical backbone. I get the whole complexity, but sometimes it's hard for me to differentiate between O(1) and O(n) plus there's the whole theta notation and all that, basic explanation of P=NP and simple algorithms, tractability. I want a text that covers all that, and that doesn't require a heavy mathematical background, or something that can be read through.</p>&#xA;&#xA;<p>LE: I'm still in highschool, not in University, and by heavy mathematical background I mean something perhaps not very high above Calculus and Linear Algebra (it's not that I can't understand it, it's the fact that for example learning Taylor series without having done Calculus I is a bit of a stretch; that's what I meant by not mathematically heavy. Something in which the math, with a normal amount of effort can be understood). And, do pardon if I'm wrong, but theoretically speaking, a class at which they teach algorithm design methods and actual algorithms should be called an "Algorithms" class, don't you think?&#xA;In terms of my current understanding, infinite series, limits and integrals I know (most of the complexity books I've glanced at seemed to use those concepts), but you've lost me at the Fast Fourier Transform.</p>&#xA;
habedi/stack-exchange-dataset
3,202
Undergraduate studies: Math or CS?
<p>hope it's possible for this question to be answered in an objective manner.</p>&#xA;&#xA;<p>I'm not studying in a liberal arts college so basically I only get to study one subject. Which would be the better preparation for future research (probably grad school) into theoretical CS? The standard undergraduate math course with analysis and algebra, or the standard undergraduate CS course with algos and concurrency and the like.</p>&#xA;
education
1
Undergraduate studies: Math or CS? -- (education) <p>hope it's possible for this question to be answered in an objective manner.</p>&#xA;&#xA;<p>I'm not studying in a liberal arts college so basically I only get to study one subject. Which would be the better preparation for future research (probably grad school) into theoretical CS? The standard undergraduate math course with analysis and algebra, or the standard undergraduate CS course with algos and concurrency and the like.</p>&#xA;
habedi/stack-exchange-dataset
3,209
What is the difference between a scripting language and a normal programming language?
<p>What is the difference between a programming language and a scripting language?&#xA;For example, consider C versus Perl.</p>&#xA;&#xA;<p>Is the only difference that scripting languages require only an interpreter and don't require compilation and linking?</p>&#xA;
programming languages
1
What is the difference between a scripting language and a normal programming language? -- (programming languages) <p>What is the difference between a programming language and a scripting language?&#xA;For example, consider C versus Perl.</p>&#xA;&#xA;<p>Is the only difference that scripting languages require only an interpreter and don't require compilation and linking?</p>&#xA;
habedi/stack-exchange-dataset
3,214
Is every linear-time algorithm a streaming algorithm?
<p>Over at <a href="https://cs.stackexchange.com/questions/3200/counting-inversion-pairs">this question about inversion counting</a>, I <a href="https://cs.stackexchange.com/questions/3200/counting-inversion-pairs#comment8724_3200">found a paper</a> that proves a lower bound on space complexity for all (exact) <a href="https://en.wikipedia.org/wiki/Streaming_algorithm" rel="nofollow noreferrer">streaming algorithms</a>. I have claimed that this bound extends to all linear time algorithms. This is a bit bold as in general, a linear time algorithm can jump around at will (random access) which a streaming algorithm cannot; it has to investigate the elements in order. It may perform multiple passes, but only constantly many (for linear runtime).</p>&#xA;&#xA;<p>Therefore my question:</p>&#xA;&#xA;<blockquote>&#xA; <p>Can every linear-time algorithm be expressed as a streaming algorithm with constantly many passes?</p>&#xA;</blockquote>&#xA;&#xA;<p>Random access seems to prevent a (simple) construction proving a positive answer, but I have not been able to come up with a counterexample either.</p>&#xA;&#xA;<p>Depending on the machine model, random access may not even be an issue, runtime-wise. I would be interested in answers for these models:</p>&#xA;&#xA;<ul>&#xA;<li>Turing machine, flat input</li>&#xA;<li>RAM, input as array</li>&#xA;<li>RAM, input as linked list</li>&#xA;</ul>&#xA;
algorithms streaming algorithm simulation lower bounds
1
Is every linear-time algorithm a streaming algorithm? -- (algorithms streaming algorithm simulation lower bounds) <p>Over at <a href="https://cs.stackexchange.com/questions/3200/counting-inversion-pairs">this question about inversion counting</a>, I <a href="https://cs.stackexchange.com/questions/3200/counting-inversion-pairs#comment8724_3200">found a paper</a> that proves a lower bound on space complexity for all (exact) <a href="https://en.wikipedia.org/wiki/Streaming_algorithm" rel="nofollow noreferrer">streaming algorithms</a>. I have claimed that this bound extends to all linear time algorithms. This is a bit bold as in general, a linear time algorithm can jump around at will (random access) which a streaming algorithm cannot; it has to investigate the elements in order. It may perform multiple passes, but only constantly many (for linear runtime).</p>&#xA;&#xA;<p>Therefore my question:</p>&#xA;&#xA;<blockquote>&#xA; <p>Can every linear-time algorithm be expressed as a streaming algorithm with constantly many passes?</p>&#xA;</blockquote>&#xA;&#xA;<p>Random access seems to prevent a (simple) construction proving a positive answer, but I have not been able to come up with a counterexample either.</p>&#xA;&#xA;<p>Depending on the machine model, random access may not even be an issue, runtime-wise. I would be interested in answers for these models:</p>&#xA;&#xA;<ul>&#xA;<li>Turing machine, flat input</li>&#xA;<li>RAM, input as array</li>&#xA;<li>RAM, input as linked list</li>&#xA;</ul>&#xA;
habedi/stack-exchange-dataset
3,226
How are all NP Complete problems similar?
<p>I'm reading a few proofs which prove a given problem is NP-complete. The proof technique has the following steps.</p>&#xA;&#xA;<ol>&#xA;<li>Prove that the current problem is in NP, i.e., given a certificate, prove&#xA;that it can be verified in polynomial time.</li>&#xA;<li>Take any known NP-complete problem (call it "Easy") and reduce <strong>all</strong>&#xA;of its instances to <strong>few</strong> instances of the given problem (call&#xA;it "Hard"). Note this is <strong>not</strong> necessarily a 1:1 mapping.</li>&#xA;<li>Prove that the above reduction can be done in polynomial time.</li>&#xA;</ol>&#xA;&#xA;<p>All is well here. Is this understanding right: "if you can solve any NP-complete problem in polynomial time, then all NP-complete problems can be solved in polynomial time"?</p>&#xA;&#xA;<p>If yes, then as per the above proof technique, let's say the "Easy" problem can be solved in polynomial time; how does that imply "Hard" can be solved in polynomial time? What am I missing here? Or is it true that the "Hard" problem can be reduced to the "Easy" problem too?</p>&#xA;
complexity theory np complete reductions
1
How are all NP Complete problems similar? -- (complexity theory np complete reductions) <p>I'm reading a few proofs which prove a given problem is NP-complete. The proof technique has the following steps.</p>&#xA;&#xA;<ol>&#xA;<li>Prove that the current problem is in NP, i.e., given a certificate, prove&#xA;that it can be verified in polynomial time.</li>&#xA;<li>Take any known NP-complete problem (call it "Easy") and reduce <strong>all</strong>&#xA;of its instances to <strong>few</strong> instances of the given problem (call&#xA;it "Hard"). Note this is <strong>not</strong> necessarily a 1:1 mapping.</li>&#xA;<li>Prove that the above reduction can be done in polynomial time.</li>&#xA;</ol>&#xA;&#xA;<p>All is well here. Is this understanding right: "if you can solve any NP-complete problem in polynomial time, then all NP-complete problems can be solved in polynomial time"?</p>&#xA;&#xA;<p>If yes, then as per the above proof technique, let's say the "Easy" problem can be solved in polynomial time; how does that imply "Hard" can be solved in polynomial time? What am I missing here? Or is it true that the "Hard" problem can be reduced to the "Easy" problem too?</p>&#xA;
habedi/stack-exchange-dataset
3,227
Uniform sampling from a simplex
<p>I am looking for an algorithm to generate an array of N random numbers, such that the sum of the N numbers is 1, and all numbers lie within 0 and 1. For example, N=3, the random point (x, y, z) should lie within the triangle:</p>&#xA;&#xA;<pre><code>x + y + z = 1&#xA;0 &lt; x &lt; 1&#xA;0 &lt; y &lt; 1&#xA;0 &lt; z &lt; 1&#xA;</code></pre>&#xA;&#xA;<p>Ideally I want each point within the area to have equal probability. If it's too hard, I can drop the requirement. Thanks.</p>&#xA;
algorithms randomness random number generator sampling
1
Uniform sampling from a simplex -- (algorithms randomness random number generator sampling) <p>I am looking for an algorithm to generate an array of N random numbers, such that the sum of the N numbers is 1, and all numbers lie within 0 and 1. For example, N=3, the random point (x, y, z) should lie within the triangle:</p>&#xA;&#xA;<pre><code>x + y + z = 1&#xA;0 &lt; x &lt; 1&#xA;0 &lt; y &lt; 1&#xA;0 &lt; z &lt; 1&#xA;</code></pre>&#xA;&#xA;<p>Ideally I want each point within the area to have equal probability. If it's too hard, I can drop the requirement. Thanks.</p>&#xA;
habedi/stack-exchange-dataset
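An aside on the sampling question above (a standard construction, not from the post): N independent Exponential(1) draws normalised by their sum are Dirichlet(1, ..., 1) distributed, i.e. exactly uniform over the simplex x1 + ... + xN = 1, xi >= 0.

```python
import random

def uniform_simplex(n: int, rng: random.Random) -> list[float]:
    """Uniform sample from {x : sum(x) = 1, x_i >= 0} via normalised
    Exponential(1) draws (the Dirichlet(1, ..., 1) construction)."""
    e = [rng.expovariate(1.0) for _ in range(n)]
    total = sum(e)
    return [x / total for x in e]

rng = random.Random(42)      # seeded so the sketch is reproducible
point = uniform_simplex(3, rng)
print(point, sum(point))     # coordinates in [0, 1], summing to ~1
```

Equivalently, one can sort N-1 uniform draws in [0, 1] and take the N consecutive gaps. Note that simply drawing N uniforms and normalising them is *not* uniform on the simplex; it biases mass toward the centre.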
3,240
Is $O$ contained in $\Theta$?
<p>So I have this question to prove a statement:</p>&#xA;&#xA;<p>$O(n)\subset\Theta(n)$...</p>&#xA;&#xA;<p>I don't need to know how to prove it, just that in my mind this makes no sense and I think it should rather be that $\Theta(n)\subset O(n)$. </p>&#xA;&#xA;<p>My understanding is that $O(n)$ is the set of all functions that do no worse than $n$ while $\Theta(n)$ is the set of all functions that do no better and no worse than $n$. </p>&#xA;&#xA;<p>Using this, I can think of the example of a constant function, say $g(n)=c$. This function will surely be an element of $O(n)$ as it will do no worse than $n$ as $n$ approaches a sufficiently large number. </p>&#xA;&#xA;<p>However, the same function $g$ would not be an element of $\Theta(n)$ as $g$ does do better than $n$ for large $n$... Then since $g \in O(n)$ and $g \not\in \Theta(n)$, we have $O(n)\not\subseteq\Theta(n)$. </p>&#xA;&#xA;<p>So is the question perhaps wrong? I've learnt it is dangerous to make that assumption; usually I have missed something, I just can't see what it might be in this case. </p>&#xA;&#xA;<p>Any thoughts?&#xA;Thanks a lot. </p>&#xA;
asymptotics mathematical analysis landau notation
1
Is $O$ contained in $\Theta$? -- (asymptotics mathematical analysis landau notation) <p>So I have this question to prove a statement:</p>&#xA;&#xA;<p>$O(n)\subset\Theta(n)$...</p>&#xA;&#xA;<p>I don't need to know how to prove it, just that in my mind this makes no sense and I think it should rather be that $\Theta(n)\subset O(n)$. </p>&#xA;&#xA;<p>My understanding is that $O(n)$ is the set of all functions that do no worse than $n$ while $\Theta(n)$ is the set of all functions that do no better and no worse than $n$. </p>&#xA;&#xA;<p>Using this, I can think of the example of a constant function, say $g(n)=c$. This function will surely be an element of $O(n)$ as it will do no worse than $n$ as $n$ approaches a sufficiently large number. </p>&#xA;&#xA;<p>However, the same function $g$ would not be an element of $\Theta(n)$ as $g$ does do better than $n$ for large $n$... Then since $g \in O(n)$ and $g \not\in \Theta(n)$, we have $O(n)\not\subseteq\Theta(n)$. </p>&#xA;&#xA;<p>So is the question perhaps wrong? I've learnt it is dangerous to make that assumption; usually I have missed something, I just can't see what it might be in this case. </p>&#xA;&#xA;<p>Any thoughts?&#xA;Thanks a lot. </p>&#xA;
habedi/stack-exchange-dataset
3,245
$L$ APX-hard thus PTAS for $L$ implies $\mathsf{P} = \mathsf{NP}$
<p>If $L$ is an APX-hard language, doesn't the existence of a PTAS for $L$ trivially imply $\mathsf{P} = \mathsf{NP}$?</p>&#xA;&#xA;<p>Since for example metric-TSP is in APX, but it is not approximable within 220/219 of OPT [1] unless $\mathsf{P} = \mathsf{NP}$. Thus if there were a PTAS for $L$ we could reduce metric-TSP using a PTAS reduction to $L$ and thus approximate OPT within arbitrary precision.</p>&#xA;&#xA;<p>Is my argument correct?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>[1] Christos H. Papadimitriou and Santosh Vempala. On the approximability of the traveling salesman problem. Combinatorica, 26(1):101–120, Feb. 2006.</p>&#xA;
complexity theory np complete approximation
1
$L$ APX-hard thus PTAS for $L$ implies $\mathsf{P} = \mathsf{NP}$ -- (complexity theory np complete approximation) <p>If $L$ is an APX-hard language, doesn't the existence of a PTAS for $L$ trivially imply $\mathsf{P} = \mathsf{NP}$?</p>&#xA;&#xA;<p>Since for example metric-TSP is in APX, but it is not approximable within 220/219 of OPT [1] unless $\mathsf{P} = \mathsf{NP}$. Thus if there were a PTAS for $L$ we could reduce metric-TSP using a PTAS reduction to $L$ and thus approximate OPT within arbitrary precision.</p>&#xA;&#xA;<p>Is my argument correct?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>[1] Christos H. Papadimitriou and Santosh Vempala. On the approximability of the traveling salesman problem. Combinatorica, 26(1):101–120, Feb. 2006.</p>&#xA;
habedi/stack-exchange-dataset
3,250
Proof of equivalence of parse-trees and derivations
<p>Intuitively, every derivation in a context-free grammar corresponds to a parse-tree and vice versa. </p>&#xA;&#xA;<p>Is this intuition correct? If so, how can I formalize and prove such a thing?</p>&#xA;
formal grammars context free
1
Proof of equivalence of parse-trees and derivations -- (formal grammars context free) <p>Intuitively, every derivation in a context-free grammar corresponds to a parse-tree and vice versa. </p>&#xA;&#xA;<p>Is this intuition correct? If so, how can I formalize and prove such a thing?</p>&#xA;
habedi/stack-exchange-dataset
3,251
How to show that a "reversed" regular language is regular
<p>I'm stuck on the following question:</p>&#xA;&#xA;<p>"Regular languages are precisely those accepted by finite automata. Given this fact, show that if the language $L$ is accepted by some finite automaton, then $L^{R}$ is also accepted by some finite automaton; $L^{R}$ consists of all words of $L$ reversed."</p>&#xA;
formal languages regular languages automata finite automata
1
How to show that a "reversed" regular language is regular -- (formal languages regular languages automata finite automata) <p>I'm stuck on the following question:</p>&#xA;&#xA;<p>"Regular languages are precisely those accepted by finite automata. Given this fact, show that if the language $L$ is accepted by some finite automaton, then $L^{R}$ is also accepted by some finite automaton; $L^{R}$ consists of all words of $L$ reversed."</p>&#xA;
habedi/stack-exchange-dataset
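A small executable sketch of the standard construction asked about above (the example automaton is my own): reverse every transition, swap the roles of the start and accepting states, and the resulting NFA accepts exactly the reversed words.

```python
from itertools import product

# Example DFA over {a, b} accepting exactly the words that end in "ab".
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
start, accept = 0, {2}

def dfa_accepts(w: str) -> bool:
    q = start
    for c in w:
        q = delta[(q, c)]
    return q in accept

def reversed_nfa_accepts(w: str) -> bool:
    """Run the reversal NFA: start in the old accepting states, follow
    transitions backwards, accept if the old start state is reachable."""
    states = set(accept)
    for c in w:
        states = {p for (p, a), q in delta.items() if a == c and q in states}
    return start in states

# Exhaustively check L(reversal) = { reverse(w) : w in L } up to length 4.
for w in (''.join(t) for n in range(5) for t in product('ab', repeat=n)):
    assert dfa_accepts(w) == reversed_nfa_accepts(w[::-1])
print("reversal construction agrees on all words up to length 4")
```

The subset simulation above doubles as the usual proof sketch: a run of the reversed automaton on w traces an accepting run of the original automaton on reverse(w) backwards, state by state.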
3,266
Heuristics for an Artificial Intelligence problem
<blockquote>&#xA; <p><strong>Problem</strong> : Given a (one dimensional) row containing $2N$ tiles arranged in $2N + 1$ spaces. There are $N$ black tiles (B), $N$ white tiles (W), and a single empty space. The tiles are initially in an arbitrary ordering. Our goal is to arrange the tiles such that all white tiles are positioned to the left of the black ones, and one black tile is in the rightmost position. The goal position of the empty space is not specified.<br>&#xA; Tiles can be moved to the empty space when the empty space is at most $N$ cells away. Hence there are at most $2N$ legal moves from each state. The cost of each move is the distance between the tile and the empty space to which it moves ($1$ to $N$).</p>&#xA;</blockquote>&#xA;&#xA;<p>So I am doing this problem with A* search algorithm with different heuristics(ofcourse admissible).So can anybody suggest me some heuristics.<br>&#xA;Thanks</p>&#xA;
artificial intelligence heuristics
1
Heuristics for an Artificial Intelligence problem -- (artificial intelligence heuristics) <blockquote>&#xA; <p><strong>Problem</strong> : Given a (one dimensional) row containing $2N$ tiles arranged in $2N + 1$ spaces. There are $N$ black tiles (B), $N$ white tiles (W), and a single empty space. The tiles are initially in an arbitrary ordering. Our goal is to arrange the tiles such that all white tiles are positioned to the left of the black ones, and one black tile is in the rightmost position. The goal position of the empty space is not specified.<br>&#xA; Tiles can be moved to the empty space when the empty space is at most $N$ cells away. Hence there are at most $2N$ legal moves from each state. The cost of each move is the distance between the tile and the empty space to which it moves ($1$ to $N$).</p>&#xA;</blockquote>&#xA;&#xA;<p>So I am doing this problem with the A* search algorithm with different heuristics (of course admissible). So can anybody suggest some heuristics?<br>&#xA;Thanks</p>&#xA;
habedi/stack-exchange-dataset
3,274
Free variables of (λx.xy)x and bound variables of λxy.x
<p>I was solving exercises on Lambda calculus. However, my solutions are different from the answers and I cannot see what is wrong.</p>&#xA;&#xA;<ol>&#xA;<li><p>Find free variables of $(\lambda x.xy)x$.<br>&#xA;My workings: $FV((\lambda x.xy)x)=FV(\lambda x.xy) \cup FV(x)=\{y\} \cup \{x\}=\{x,y\}$.<br>&#xA;The model answer: $FV((\lambda x.xy)x)=\{x\}$.</p></li>&#xA;<li><p>Find bound variables of $\lambda xy.x$.<br>&#xA;My workings: A variable $y$ has its binding but since it is not present in the body of the $\lambda$-abstraction it cannot be bound and thus $BV(\lambda xy.x)=\{x\}$ only.<br>&#xA;The model answer: $BV(\lambda xy.x)=\{x, y\}$.</p></li>&#xA;</ol>&#xA;
logic lambda calculus
1
Free variables of (λx.xy)x and bound variables of λxy.x -- (logic lambda calculus) <p>I was solving exercises on Lambda calculus. However, my solutions are different from the answers and I cannot see what is wrong.</p>&#xA;&#xA;<ol>&#xA;<li><p>Find free variables of $(\lambda x.xy)x$.<br>&#xA;My workings: $FV((\lambda x.xy)x)=FV(\lambda x.xy) \cup FV(x)=\{y\} \cup \{x\}=\{x,y\}$.<br>&#xA;The model answer: $FV((\lambda x.xy)x)=\{x\}$.</p></li>&#xA;<li><p>Find bound variables of $\lambda xy.x$.<br>&#xA;My workings: A variable $y$ has its binding but since it is not present in the body of the $\lambda$-abstraction it cannot be bound and thus $BV(\lambda xy.x)=\{x\}$ only.<br>&#xA;The model answer: $BV(\lambda xy.x)=\{x, y\}$.</p></li>&#xA;</ol>&#xA;
habedi/stack-exchange-dataset
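A side note on part 1 of the lambda-calculus question above, as an executable sketch (the term encoding is my own): under the standard recursive definition of free variables, the asker's computation is the expected one — y is free in the abstraction body, and the rightmost x occurs free outside the binder, giving {x, y}.

```python
# Terms: ('var', x), ('app', t, u), or ('lam', x, body).
def fv(t):
    """Free variables, by the standard recursive definition:
    FV(x) = {x};  FV(t u) = FV(t) | FV(u);  FV(lambda x. t) = FV(t) - {x}."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'app':
        return fv(t[1]) | fv(t[2])
    if tag == 'lam':
        return fv(t[2]) - {t[1]}
    raise ValueError(f"unknown term tag: {tag!r}")

# (λx. x y) x  — an application of the abstraction to a free occurrence of x
term = ('app',
        ('lam', 'x', ('app', ('var', 'x'), ('var', 'y'))),
        ('var', 'x'))
print(fv(term))  # {'x', 'y'}
```

The same recursion with `-` and `|` swapped around the binder case yields bound variables; whether a binder with no bound occurrence in the body (the y in λxy.x) counts as "bound" is exactly the convention dispute behind part 2.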
3,276
Given a set of sets, find the smallest set(s) containing at least one element from each set
<p>Given a set $\mathbf{S}$ of sets, I’d like to find a set $M$ such that every set $S$ in $\mathbf{S}$ contains at least one element of $M$. I’d also like $M$ to contain as few elements as possible while still meeting this criterion, although there may exist more than one smallest $M$ with this property (the solution is not necessarily unique).</p>&#xA;&#xA;<p>As a concrete example, suppose that the set $\mathbf{S}$ is the set of national flags, and for each flag $S$ in $\mathbf{S}$, the elements are the colors used in that nation’s flag. The United States would have $S = \{red, white, blue\}$ and Morocco would have $S = \{red, green\}$. Then $M$ would be a set of colors with the property that every national flag uses at least one of the colors in $M$. (<a href="https://secure.wikimedia.org/wikipedia/en/wiki/Olympic_rings#Symbol">The Olympic colors</a> blue, black, red, green, yellow, and white are an example of such an $M$, or at least were in 1920.)</p>&#xA;&#xA;<p>Is there a general name for this problem? Is there an accepted “best” algorithm for finding the set $M$? (I’m more interested in the solution itself than in optimizing the process for computational complexity.)</p>&#xA;
algorithms optimization sets
1
Given a set of sets, find the smallest set(s) containing at least one element from each set -- (algorithms optimization sets) <p>Given a set $\mathbf{S}$ of sets, I’d like to find a set $M$ such that every set $S$ in $\mathbf{S}$ contains at least one element of $M$. I’d also like $M$ to contain as few elements as possible while still meeting this criterion, although there may exist more than one smallest $M$ with this property (the solution is not necessarily unique).</p>&#xA;&#xA;<p>As a concrete example, suppose that the set $\mathbf{S}$ is the set of national flags, and for each flag $S$ in $\mathbf{S}$, the elements are the colors used in that nation’s flag. The United States would have $S = \{red, white, blue\}$ and Morocco would have $S = \{red, green\}$. Then $M$ would be a set of colors with the property that every national flag uses at least one of the colors in $M$. (<a href="https://secure.wikimedia.org/wikipedia/en/wiki/Olympic_rings#Symbol">The Olympic colors</a> blue, black, red, green, yellow, and white are an example of such an $M$, or at least were in 1920.)</p>&#xA;&#xA;<p>Is there a general name for this problem? Is there an accepted “best” algorithm for finding the set $M$? (I’m more interested in the solution itself than in optimizing the process for computational complexity.)</p>&#xA;
habedi/stack-exchange-dataset
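The problem described above is the classic hitting set problem (equivalent to set cover under a standard duality), which is NP-hard in general — so in practice "best" usually means a heuristic. The usual greedy rule achieves a logarithmic approximation factor but is not guaranteed minimal; a sketch with the flag example (data and names are mine):

```python
def greedy_hitting_set(families):
    """Greedy heuristic: repeatedly pick the element that hits the most
    still-unhit sets. Returns a valid hitting set, not necessarily minimum."""
    remaining = [set(s) for s in families]
    chosen = set()
    while remaining:
        counts = {}
        for s in remaining:
            for x in s:
                counts[x] = counts.get(x, 0) + 1
        best = max(counts, key=counts.get)   # element hitting the most sets
        chosen.add(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

flags = [{'red', 'white', 'blue'},   # e.g. United States
         {'red', 'green'},           # e.g. Morocco
         {'green', 'white'}]
m = greedy_hitting_set(flags)
print(m)  # a set of colours meeting every flag
```

For an exact minimum one would fall back on exhaustive search over subsets by increasing size, or an integer-programming formulation; for modest inputs like national flags both are feasible.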
3,278
What is a malformed token?
<p>I am reading Programming Language Pragmatics by Michael Scott. He says that on a first pass, a compiler will break a program into a series of tokens. He says that it will check for malformed tokens, like 123abc or $@foo (in C). </p>&#xA;&#xA;<p>What is a malformed token? A variable that does not meet the rules of variable-naming? An operator that does not exist (ex. "&lt;-")?</p>&#xA;&#xA;<p>Is this analogous to a misspelled word?</p>&#xA;
programming languages compilers
1
What is a malformed token? -- (programming languages compilers) <p>I am reading Programming Language Pragmatics by Michael Scott. He says that on a first pass, a compiler will break a program into a series of tokens. He says that it will check for malformed tokens, like 123abc or $@foo (in C). </p>&#xA;&#xA;<p>What is a malformed token? A variable that does not meet the rules of variable-naming? An operator that does not exist (ex. "&lt;-")?</p>&#xA;&#xA;<p>Is this analogous to a misspelled word?</p>&#xA;
habedi/stack-exchange-dataset
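As an executable aside on the question above (a toy scanner of my own, not any particular compiler's): with a longest-match rule, `123abc` is neither a valid number (digits may not run straight into letters) nor a valid identifier (which cannot start with a digit), so the scanner reports a malformed token — much like a misspelled word matching no dictionary entry.

```python
import re

# number = digits not immediately followed by a word character;
# identifier = letter or underscore, then word characters.
TOKEN = re.compile(r'\s*(?:(?P<num>\d+)(?!\w)|(?P<id>[A-Za-z_]\w*))')

def tokens(src: str) -> list[str]:
    out, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise ValueError(f"malformed token at {src[pos:]!r}")
        out.append(m.group().strip())
        pos = m.end()
    return out

print(tokens("abc 123"))       # ['abc', '123']
try:
    tokens("123abc")           # digits running into letters
except ValueError as e:
    print(e)                   # malformed token reported
```

Note the lookahead `(?!\w)` is what makes `123abc` fail outright instead of being split into `123` and `abc` — the maximal-munch discipline real scanners follow.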
3,286
Types of reductions and associated definitions of hardness
<p>Let A be reducible to B, i.e., $A \leq B$. Hence, the Turing machine accepting $A$ has access to an oracle for $B$. Let the Turing machine accepting $A$ be $M_{A}$ and the oracle for $B$ be $O_{B}$. The types of reductions:</p>&#xA;&#xA;<ul>&#xA;<li><p>Turing reduction: $M_{A}$ can make multiple queries to $O_{B}$.</p></li>&#xA;<li><p>Karp reduction: Also called "polynomial time Turing reduction": The input to $O_{B}$ must be constructed in polytime. Moreover, the number of queries to $O_{B}$ must be bounded by a polynomial. In this case: $P^{A} = P^{B}$.</p></li>&#xA;<li><p>Many-one Turing reduction: $M_{A}$ can make only one query to $O_{B}$, during its last step. Hence the oracle response cannot be modified. However, the time taken to construct the input to $O_{B}$ need not be bounded by a polynomial.&#xA;Equivalently: ($\leq_{m}$ denoting many-one reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq_{m} B$ if $\exists$ a computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote></li>&#xA;<li><p>Cook reduction: Also called "polynomial time many-one reduction": A many-one reduction where the time taken to construct an input to $O_{B}$ must be bounded by a polynomial.&#xA;Equivalently: ($\leq^{p}_{m}$ denoting many-one reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq^p_{m} B$ if $\exists$ a <em>poly-time</em> computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote></li>&#xA;<li><p>Parsimonious reduction: Also called "polynomial time one-one reduction": A Cook reduction where every instance of $A$ is mapped to a unique instance of $B$.&#xA;Equivalently: ($\leq^{p}_{1}$ denoting parsimonious reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq^p_{1} B$ if $\exists$ a <em>poly-time</em> computable bijection $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote>&#xA;&#xA;<p>These reductions preserve the 
number of solutions. Hence $\#M_{A} = \#O_{B}$.</p></li>&#xA;</ul>&#xA;&#xA;<p>We can define more types of reductions by bounding the number of oracle queries, but leaving those out, could someone kindly tell me if I have gotten the nomenclature for the different types of reductions used, correctly.&#xA;Are NP-complete problems defined with respect Cook reduction or parsimonious reduction? Can anyone kindly give an example of a problem that is NP-complete under Cook and not under parsimonious reduction.</p>&#xA;&#xA;<p>If I am not wrong, the class #P-Complete is defined with respect to Karp reductions.</p>&#xA;
complexity theory np complete reductions complexity classes
1
Types of reductions and associated definitions of hardness -- (complexity theory np complete reductions complexity classes) <p>Let $A$ be reducible to $B$, i.e., $A \leq B$. Hence, the Turing machine accepting $A$ has access to an oracle for $B$. Let the Turing machine accepting $A$ be $M_{A}$ and the oracle for $B$ be $O_{B}$. The types of reductions:</p>&#xA;&#xA;<ul>&#xA;<li><p>Turing reduction: $M_{A}$ can make multiple queries to $O_{B}$.</p></li>&#xA;<li><p>Karp reduction: Also called "polynomial time Turing reduction": The input to $O_{B}$ must be constructed in polytime. Moreover, the number of queries to $O_{B}$ must be bounded by a polynomial. In this case: $P^{A} = P^{B}$.</p></li>&#xA;<li><p>Many-one Turing reduction: $M_{A}$ can make only one query to $O_{B}$, during its last step. Hence the oracle response cannot be modified. However, the time taken to construct the input to $O_{B}$ need not be bounded by a polynomial.&#xA;Equivalently: ($\leq_{m}$ denoting many-one reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq_{m} B$ if $\exists$ a computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote></li>&#xA;<li><p>Cook reduction: Also called "polynomial time many-one reduction": A many-one reduction where the time taken to construct an input to $O_{B}$ must be bounded by a polynomial.&#xA;Equivalently: ($\leq^{p}_{m}$ denoting many-one reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq^p_{m} B$ if $\exists$ a <em>poly-time</em> computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote></li>&#xA;<li><p>Parsimonious reduction: Also called "polynomial time one-one reduction": A Cook reduction where every instance of $A$ is mapped to a unique instance of $B$.&#xA;Equivalently: ($\leq^{p}_{1}$ denoting parsimonious reduction)</p>&#xA;&#xA;<blockquote>&#xA; <p>$A \leq^p_{1} B$ if $\exists$ a <em>poly-time</em> computable bijection $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$.</p>&#xA;</blockquote>&#xA;&#xA;<p>These reductions preserve the number of solutions. Hence $\#M_{A} = \#O_{B}$.</p></li>&#xA;</ul>&#xA;&#xA;<p>We can define more types of reductions by bounding the number of oracle queries, but leaving those out, could someone kindly tell me whether I have gotten the nomenclature for the different types of reductions correct?&#xA;Are NP-complete problems defined with respect to Cook reductions or parsimonious reductions? Could someone kindly give an example of a problem that is NP-complete under Cook but not under parsimonious reduction?</p>&#xA;&#xA;<p>If I am not wrong, the class #P-Complete is defined with respect to Karp reductions.</p>&#xA;
habedi/stack-exchange-dataset
3,293
Produce decision version of the problem
<p>An optimisation problem requires minimising some function $f(x)$, where $x$ is a&#xA;vector of integers. What is the corresponding decision version of the problem?</p>&#xA;
complexity theory optimization reductions
1
Produce decision version of the problem -- (complexity theory optimization reductions) <p>An optimisation problem requires minimising some function $f(x)$, where $x$ is a&#xA;vector of integers. What is the corresponding decision version of the problem?</p>&#xA;
habedi/stack-exchange-dataset
3,296
Why does the experience propagation rule for checkers work in Tom Mitchell's book?
<p>In Tom Mitchell's book <a href="http://rads.stackoverflow.com/amzn/click/0070428077">"Machine Learning"</a>, Chap. 1, a checkers game is used to illustrate how machine learning can be applied to solve problems.</p>&#xA;&#xA;<p>An experience propagation rule is described for iterative learning of a hypothesis. Suppose a game has been played and watched by the program; the endgame state is labeled 100 for a win and -100 for a loss. Each state on the path toward the endgame is labeled $\hat{V}(Successor)$, where $\hat{V}(state)$ is the current model output on some state. The model is then trained on these new labels and, over many games, converges to a good checkers program.</p>&#xA;&#xA;<p>Why does this experience propagation rule work? It is mentioned in the book that it works quite well for most chess games.</p>&#xA;
machine learning
1
Why does the experience propagation rule for checkers work in Tom Mitchell's book? -- (machine learning) <p>In Tom Mitchell's book <a href="http://rads.stackoverflow.com/amzn/click/0070428077">"Machine Learning"</a>, Chap. 1, a checkers game is used to illustrate how machine learning can be applied to solve problems.</p>&#xA;&#xA;<p>An experience propagation rule is described for iterative learning of a hypothesis. Suppose a game has been played and watched by the program; the endgame state is labeled 100 for a win and -100 for a loss. Each state on the path toward the endgame is labeled $\hat{V}(Successor)$, where $\hat{V}(state)$ is the current model output on some state. The model is then trained on these new labels and, over many games, converges to a good checkers program.</p>&#xA;&#xA;<p>Why does this experience propagation rule work? It is mentioned in the book that it works quite well for most chess games.</p>&#xA;
habedi/stack-exchange-dataset
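The propagation rule described in the question above can be sketched in a few lines of Python. This is a hypothetical illustration, not Mitchell's actual code: the function name `propagate_targets` is invented here, and the win/loss labels of +100/-100 follow the question's description.

```python
def propagate_targets(game_states, v_hat, final_reward):
    """Build (state, training target) pairs from one played game.

    game_states:  states in play order, from first move to endgame.
    v_hat:        the current learned value estimate (a function of a state).
    final_reward: +100 for a win, -100 for a loss.

    Every intermediate state's target is the current estimate of the state
    that followed it; only the endgame state gets the true outcome label.
    """
    targets = []
    for i, state in enumerate(game_states):
        if i == len(game_states) - 1:
            targets.append((state, final_reward))          # endgame: true label
        else:
            targets.append((state, v_hat(game_states[i + 1])))  # bootstrap label
    return targets
```

Training $\hat{V}$ toward these targets pushes credit for the final outcome backwards one step per game played, which is one intuition for why repeated self-play lets the estimates converge.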
3,310
Which Is a Better Way of Obtaining Scales, Gaussian Blur or Down Sampling?
<p>In computer vision, scales are important when we carry out a scene analysis. Choosing different scales affects the result of the analysis. For example, if a face is relatively small in the scene, then details such as the nose and eyes will be omitted. On the other hand, details on larger faces become relatively more salient.</p>&#xA;&#xA;<p>I know both Gaussian blur with different sigmas and down-sampling the image can generate different scales. Which is more reasonable in a cognitive sense?</p>&#xA;
machine learning computer vision image processing
1
Which Is a Better Way of Obtaining Scales, Gaussian Blur or Down Sampling? -- (machine learning computer vision image processing) <p>In computer vision, scales are important when we carry out a scene analysis. Choosing different scales affects the result of the analysis. For example, if a face is relatively small in the scene, then details such as the nose and eyes will be omitted. On the other hand, details on larger faces become relatively more salient.</p>&#xA;&#xA;<p>I know both Gaussian blur with different sigmas and down-sampling the image can generate different scales. Which is more reasonable in a cognitive sense?</p>&#xA;
habedi/stack-exchange-dataset
3,312
Is the validity of some instance of an equational problem decidable?
<p>Is the following FOL-problem (equality is a logical symbol) &#xA;effectively decidable?</p>&#xA;&#xA;<p><strong>Given.</strong> A finite equation system $E$ and an equation $s = t$.</p>&#xA;&#xA;<p><strong>Question.</strong> Is there a substitution $\sigma$, such that $\sigma(E)&#xA;\models \sigma(s = t)$?</p>&#xA;&#xA;<p><strong>Some useful information.</strong> </p>&#xA;&#xA;<ol>&#xA;<li><p>Obviously one can restrict $\sigma$ to be a ground substitution.</p></li>&#xA;<li><p>This problem is decidable: Given a finite system $E$ of<br>&#xA;ground equations and a ground equation $s = t$, does $E \models s = t$<br>&#xA;hold? (cf. [1, Corollary 4.3.6])</p></li>&#xA;</ol>&#xA;&#xA;<p><strong>References</strong></p>&#xA;&#xA;<p>[1] Franz Baader, Tobias Nipkow: Term Rewriting and All That,&#xA;&copy; 1998 Cambridge University Press.</p>&#xA;
computability logic decision problem equality
1
Is the validity of some instance of an equational problem decidable? -- (computability logic decision problem equality) <p>Is the following FOL-problem (equality is a logical symbol) &#xA;effectively decidable?</p>&#xA;&#xA;<p><strong>Given.</strong> A finite equation system $E$ and an equation $s = t$.</p>&#xA;&#xA;<p><strong>Question.</strong> Is there a substitution $\sigma$, such that $\sigma(E)&#xA;\models \sigma(s = t)$?</p>&#xA;&#xA;<p><strong>Some useful information.</strong> </p>&#xA;&#xA;<ol>&#xA;<li><p>Obviously one can restrict $\sigma$ to be a ground substitution.</p></li>&#xA;<li><p>This problem is decidable: Given a finite system $E$ of<br>&#xA;ground equations and a ground equation $s = t$, does $E \models s = t$<br>&#xA;hold? (cf. [1, Corollary 4.3.6])</p></li>&#xA;</ol>&#xA;&#xA;<p><strong>References</strong></p>&#xA;&#xA;<p>[1] Franz Baader, Tobias Nipkow: Term Rewriting and All That,&#xA;&copy; 1998 Cambridge University Press.</p>&#xA;
habedi/stack-exchange-dataset
3,313
Formally describing a new domain specific programming language
<p>I am about to implement a domain specific language for the representation of social learning conventions. Part of the implementation is a formal description of the language - its 'calculus', symbols and logical expressions.</p>&#xA;&#xA;<p>My approach would be to describe the language by describing its grammar, but there are also concepts such as relations, dialogs and expectations that require a more theoretical approach and a description of the logic.</p>&#xA;&#xA;<p>I would like to ask for an example and a literature recommendation (papers, books) that would help me with this description. I feel relatively competent approaching this task, so I am not asking for total hand-holding, but help from a theoretician in this area would be GREATLY appreciated.</p>&#xA;
formal languages programming languages semantics
1
Formally describing a new domain specific programming language -- (formal languages programming languages semantics) <p>I am about to implement a domain specific language for the representation of social learning conventions. Part of the implementation is a formal description of the language - its 'calculus', symbols and logical expressions.</p>&#xA;&#xA;<p>My approach would be to describe the language by describing its grammar, but there are also concepts such as relations, dialogs and expectations that require a more theoretical approach and a description of the logic.</p>&#xA;&#xA;<p>I would like to ask for an example and a literature recommendation (papers, books) that would help me with this description. I feel relatively competent approaching this task, so I am not asking for total hand-holding, but help from a theoretician in this area would be GREATLY appreciated.</p>&#xA;
habedi/stack-exchange-dataset
3,314
Given a mechanical assembly as a graph, how to find an upper bound on number of assembly paths
<p>The rules are that you can only build from an existing part, so in the example below, where we start from A, B is the only option for the first move.</p>&#xA;&#xA;<p>A mechanical assembly might be represented as follows:</p>&#xA;&#xA;<pre><code> E&#xA; |&#xA; C&#xA; |&#xA;A-B&#xA; |&#xA; D&#xA; |&#xA; F&#xA;</code></pre>&#xA;&#xA;<p>The valid assembly paths when starting from A are:</p>&#xA;&#xA;<pre><code>A, B, C, E, D, F&#xA;A, B, C, D, E, F&#xA;A, B, C, D, F, E&#xA;A, B, D, F, C, E&#xA;A, B, D, C, F, E&#xA;A, B, D, C, E, F&#xA;</code></pre>&#xA;&#xA;<p>This is a fairly simple example, but providing an upper bound for an arbitrary assembly is difficult since it's related to the "connectivity" of the parts.</p>&#xA;&#xA;<p>n! would be an absolute upper bound, I guess, but I'm hoping to find something a little better.</p>&#xA;&#xA;<p>I've also looked at representing the graph with the parts (A, B, C, etc.) as the edges and applying Kirchhoff's theorem, but that doesn't work for sparsely connected graphs like the example above.</p>&#xA;&#xA;<p>Any information about the problem would help. I'm not sure if there's a formal description of this type of problem or not.</p>&#xA;
algorithms graphs
1
Given a mechanical assembly as a graph, how to find an upper bound on number of assembly paths -- (algorithms graphs) <p>The rules are that you can only build from an existing part, so in the example below, where we start from A, B is the only option for the first move.</p>&#xA;&#xA;<p>A mechanical assembly might be represented as follows:</p>&#xA;&#xA;<pre><code> E&#xA; |&#xA; C&#xA; |&#xA;A-B&#xA; |&#xA; D&#xA; |&#xA; F&#xA;</code></pre>&#xA;&#xA;<p>The valid assembly paths when starting from A are:</p>&#xA;&#xA;<pre><code>A, B, C, E, D, F&#xA;A, B, C, D, E, F&#xA;A, B, C, D, F, E&#xA;A, B, D, F, C, E&#xA;A, B, D, C, F, E&#xA;A, B, D, C, E, F&#xA;</code></pre>&#xA;&#xA;<p>This is a fairly simple example, but providing an upper bound for an arbitrary assembly is difficult since it's related to the "connectivity" of the parts.</p>&#xA;&#xA;<p>n! would be an absolute upper bound, I guess, but I'm hoping to find something a little better.</p>&#xA;&#xA;<p>I've also looked at representing the graph with the parts (A, B, C, etc.) as the edges and applying Kirchhoff's theorem, but that doesn't work for sparsely connected graphs like the example above.</p>&#xA;&#xA;<p>Any information about the problem would help. I'm not sure if there's a formal description of this type of problem or not.</p>&#xA;
habedi/stack-exchange-dataset
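The build rule in the question above — each new part must touch a part already placed — can be enumerated directly for small assemblies, which gives the exact count (the asymptotic upper-bound question remains open for large ones). A minimal sketch under stated assumptions: the adjacency dictionary encodes the example assembly, and `assembly_orders` is a name chosen here, not a standard library function.

```python
# Adjacency of the example assembly: A-B, B-C, C-E, B-D, D-F.
graph = {
    "A": {"B"},
    "B": {"A", "C", "D"},
    "C": {"B", "E"},
    "D": {"B", "F"},
    "E": {"C"},
    "F": {"D"},
}

def assembly_orders(graph, start):
    """Yield every ordering in which each new part is adjacent to
    some part that has already been placed."""
    def extend(placed, frontier):
        if len(placed) == len(graph):
            yield tuple(placed)
            return
        for part in sorted(frontier):          # sorted: deterministic output
            # Parts that become reachable once `part` is placed.
            next_frontier = (frontier | graph[part]) - set(placed) - {part}
            placed.append(part)
            yield from extend(placed, next_frontier)
            placed.pop()
    yield from extend([start], set(graph[start]))

orders = list(assembly_orders(graph, "A"))
print(len(orders))  # 6, matching the six paths listed in the question
```

This brute-force count is exponential in general, but it is a convenient way to sanity-check any closed-form bound (for a path graph it gives far fewer orderings than n!, which matches the intuition that sparse connectivity constrains the build).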