2,240
Do Higher Order Functions provide more power to Functional Programming?
<p><em>I've asked a similar question <a href="https://cstheory.stackexchange.com/questions/11652/does-high-order-functions-provide-more-power-to-functional-programming">on cstheory.SE</a>.</em></p>&#xA;&#xA;<p>According to <a href="https://stackoverflow.com/a/1990580/209629">this answer on Stack Overflow</a>, there is an algorithm that in a non-lazy pure functional programming language has $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding laziness to the FP language would make the algorithm $\Omega(n)$.</p>&#xA;&#xA;<p>Is there any equivalent relationship comparing an FP language with and without Higher Order Functions? Is it still Turing complete? If it is, does the lack of Higher Order Functions make the language less "powerful" or efficient?</p>&#xA;
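One well-known data point for the Turing-completeness part of the question: higher-order functions can be eliminated mechanically by Reynolds-style defunctionalization, so a first-order functional language stays Turing complete. A minimal sketch of the idea in Python (the function names and tags are mine, purely for illustration):

```python
# Higher-order version: 'map' takes a function value as an argument.
def map_ho(f, xs):
    return [f(x) for x in xs]

# Defunctionalized version: each function value becomes a tagged record,
# and a single first-order 'apply' interprets the tags.
def apply(closure, x):
    tag, env = closure
    if tag == "add":        # stands for: lambda x: x + env
        return x + env
    if tag == "scale":      # stands for: lambda x: x * env
        return x * env
    raise ValueError(tag)

def map_fo(closure, xs):
    return [apply(closure, x) for x in xs]

# Both versions compute the same result; only the representation differs.
assert map_ho(lambda x: x + 1, [1, 2, 3]) == map_fo(("add", 1), [1, 2, 3])
```

This speaks only to computability; whether the translation preserves asymptotic efficiency is exactly the kind of question asked above.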
complexity theory lambda calculus functional programming turing completeness
1
2,243
Is a device with restrictive execution policies Turing-complete?
<p>There are devices that do not allow users to load any application they want; they only run a limited class of applications approved by the device vendor.</p>&#xA;&#xA;<p>Take an iPhone as an example, where new applications are loaded (solely) from the App Store, and programs that would allow execution of arbitrary code by the user (without permission from Apple) are not permitted (e.g. Flash).</p>&#xA;&#xA;<p>Are such machines, on which users cannot execute arbitrary code themselves, still Turing-complete computers? Can they still be considered <em>universal Turing machines</em>?</p>&#xA;
computability turing completeness
0
2,244
Is connecting islands with pontoons NP-complete?
<p>I have a problem in my mind; I think it is an NPC problem, but I don't know how to prove it.</p>&#xA;&#xA;<p>Here is the problem:</p>&#xA;&#xA;<p>There are <strong>k</strong> islands in a very big lake, and there are <strong>n</strong> fan-shaped pontoons. The pontoons are all the same size but have different initial directions and different original positions in the lake. The pontoons can rotate freely around their centers of mass, at no cost.</p>&#xA;&#xA;<p>Now we need to move those pontoons so that all islands in the lake are connected. We can guarantee the number of pontoons is enough to connect all the islands.</p>&#xA;&#xA;<p><strong>[Note]: We cannot reuse the pontoons!!</strong></p>&#xA;&#xA;<p>The task is to find the solution with the minimum total moving distance of the pontoons that makes all islands connected. The distance of moving one pontoon is the distance between the original position of its center of mass and its deployed position.</p>&#xA;&#xA;<p>To make it clear, I have drawn a figure. Suppose we have 3 islands A, B and C, located somewhere in the lake, and several fan-shaped pontoons. The solution is the connection of A, B and C with the minimum total moving distance, shown in the bottom part of the figure. I hope it helps to understand the problem. :)</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/G6Hop.jpg" alt="enter image description here"></p>&#xA;&#xA;<p>It seems that the problem is an NPC one, but I don't know how to prove it. Can anyone help me with this?</p>&#xA;
complexity theory np complete np hard
0
2,248
How are threads implemented in different OSs?
<p>I was reading <strong>Linux Kernel Development</strong> by Robert Love, where I came across this</p>&#xA;&#xA;<blockquote>&#xA; <p>Linux takes an interesting approach to thread support: It does not&#xA; differentiate between threads and normal processes.To the kernel, all&#xA; processes are the same— some just happen to share resources.</p>&#xA;</blockquote>&#xA;&#xA;<p>I do not know much about OSs (aspire to know more) and kernels and hence the above quote raised a question about thread implementations in different OSs(at least the popular ones like Windows, Linux and Unix).</p>&#xA;&#xA;<p>Can someone please explain the different techniques for providing thread-support in an OS? ( and optionally contrast them)</p>&#xA;
operating systems process scheduling threads
0
2,249
Voting scheme for peaceful coexistence
<p>Many areas in the world suffer from conflicts between two groups (usually ethnic or religious). For the purpose of this question, I assume that most people on both sides want to live in peace, but there are a few extremists who incite hatred and violence. The goal of this question is to find an objective way to filter out those extremists.</p>&#xA;&#xA;<p>Imagine a town with 2 conflicting groups, A and B, each with N people. I propose the following voting scheme (which I explain from the point of view of group A, but it's entirely symmetric for the other group):</p>&#xA;&#xA;<ul>&#xA;<li><strong>equality-rule</strong>: The number of people in each group must always remain equal.</li>&#xA;<li><strong>expel-vote</strong>: At any time, each person of group A can claim that a certain person of group B is "extremist", and start a vote. If more than 50% of the people in group A agree, then that person is expelled from town.</li>&#xA;<li><strong>counter-vote</strong>: To keep the equality-rule, a single person of group A should also leave the town. This person is selected by a vote among the people in group B (i.e. each person in group B votes for a single person in group A, and the one with the most votes is expelled from town).</li>&#xA;</ul>&#xA;&#xA;<p>My intuition is that:</p>&#xA;&#xA;<ul>&#xA;<li>On one hand, this scheme encourages people to be nice to people of the other group, so that they won't be subject to expel-votes.</li>&#xA;<li>On the other hand, the equality-rule encourages people to think twice before starting an expel-vote, because this puts them in danger of expulsion in the counter-vote.</li>&#xA;</ul>&#xA;&#xA;<p>[ADDITION]&#xA;Several questions can be asked about this scheme, for example:</p>&#xA;&#xA;<ul>&#xA;<li>Under what conditions does it diverge to a situation where people vote and counter-vote, until the number of citizens in one of the groups reaches 0? 
</li>&#xA;<li>Under what conditions does it stabilize in a situation where both groups have more than 0 citizens? </li>&#xA;<li>Under what conditions is the stable number of citizens more than half the initial number?</li>&#xA;</ul>&#xA;&#xA;<p>Note that this scheme does not even try to reach an objective measure of "extremism". The only goal is stability.</p>&#xA;&#xA;<p>I would like to know: has this voting scheme been studied in the past?</p>&#xA;
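The divergence question can at least be explored numerically. Below is a toy simulation under one strong behavioral assumption that is entirely my own invention, not part of the scheme above: each person has a fixed "hostility" score, a group starts an expel-vote whenever the other group's most hostile member exceeds a threshold, and the counter-vote always removes the initiating group's most hostile member.

```python
def simulate(hostility_a, hostility_b, threshold):
    """Run expel-vote / counter-vote rounds until neither group's most
    hostile member exceeds the threshold, or the groups are empty.
    Assumes the groups start with equal sizes (the equality-rule)."""
    a, b = sorted(hostility_a), sorted(hostility_b)
    while a and b:
        if b[-1] > threshold:    # A expels B's most hostile member...
            b.pop()
            a.pop()              # ...and loses one member in the counter-vote.
        elif a[-1] > threshold:  # symmetric case for group B.
            a.pop()
            b.pop()
        else:
            break
        assert len(a) == len(b)  # the equality-rule holds after every round
    return a, b

# One expulsion round, then stable:
assert simulate([1, 2, 9], [1, 8, 3], threshold=5) == ([1, 2], [1, 3])
# Divergence to 0 citizens when everyone is "extremist":
assert simulate([9, 9], [9, 9], threshold=5) == ([], [])
```

Even this crude model shows both regimes asked about: stabilization with survivors, and mutual annihilation.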
reference request game theory voting
1
2,251
Why is it seemingly easier to resume torrent downloads than browser downloads?
<p>I really wonder how torrent downloads can be resumed at a later point in time.&#xA;If such a technology exists, then why is it not possible in browsers?</p>&#xA;&#xA;<p>It is often not possible to pause a browser download so that it can be resumed at a later point in time. Often, the download will start again from the beginning. But in the case of a torrent download, you can resume anytime.</p>&#xA;&#xA;<p>One reason I could think of is that a browser makes an HTTP connection to the server which contains the file, and when this connection breaks, there is no data regarding how much of the file was saved, so no resume is possible.</p>&#xA;&#xA;<p>Is there a fundamental reason why torrent downloads are easier to resume than web downloads?</p>&#xA;
computer networks communication protocols
1
2,257
Generating the number of possibilities of popping two stacks to two other stacks
<p>Context: I'm working on <a href="https://stackoverflow.com/questions/10875675/how-to-find-out-all-the-popping-out-possibilities-of-two-stacks">this problem</a>:</p>&#xA;&#xA;<blockquote>&#xA; <p>There are two stacks here:</p>&#xA;&#xA;<pre><code>A: 1,2,3,4 &lt;- Stack Top&#xA; B: 5,6,7,8&#xA;</code></pre>&#xA; &#xA; <p>A and B will pop out to two other stacks: C and D. For example: </p>&#xA;&#xA;<pre><code> pop(A),push(C),pop(B),push(D).&#xA;</code></pre>&#xA; &#xA; <p>If an item has been popped out, it must be pushed to C or D immediately.</p>&#xA;</blockquote>&#xA;&#xA;<p>The goal is to enumerate all possible stack contents of C and D after moving all elements.</p>&#xA;&#xA;<p>More elaborately, the problem is this: If you have two source stacks with $n$ unique elements each (all are unique, not just per stack) and two destination stacks, and you pop everything off each source stack to the destination stacks, generate all unique destination stacks - call this $S$.</p>&#xA;&#xA;<p>The stack part is irrelevant, mostly, other than it enforces a partial order on the result. If we have two source stacks and one destination stack, this is the same as generating all permutations without repetitions for a set of $2n$ elements with $n$ 'A' elements and $n$ 'B' elements. Call this $O$.</p>&#xA;&#xA;<p>Thus</p>&#xA;&#xA;<p>$\qquad \displaystyle |O| = (2n)!/(n!)^2$</p>&#xA;&#xA;<p>Now observe all possible bit sequences of length $2n$ (bit 0 representing popping source stack A/B and bit 1 pushing to destination stack C/D), call this $B$; $|B| = 2^{2n}$. We can surely generate $B$ and check that it has the correct number of pops from each source stack to generate $S$. It's a little faster to recursively generate these to ensure their validity. 
It's even faster still to generate $B$ and $O$ and then simulate, but it still has the issue of needing to check for duplicates.</p>&#xA;&#xA;<p><strong>My question</strong></p>&#xA;&#xA;<p>Is there a more efficient way to generate these?</p>&#xA;&#xA;<p>Through simulation I found the result follows <a href="http://oeis.org/A084773" rel="nofollow noreferrer">this sequence</a>, which is related to Delannoy numbers, about which I know very little, if this suggests anything.</p>&#xA;&#xA;<p>Here is my Python code:</p>&#xA;&#xA;<pre><code>def all_subsets(lst):&#xA;    if len(lst) == 0:&#xA;        return [set()]&#xA;    subsets = all_subsets(lst[1:])&#xA;    return [subset.union({lst[0]}) for subset in subsets] + subsets&#xA;&#xA;def result_sequences(perms):&#xA;    for perm in perms:&#xA;        whole_s = list(range(len(perm)))&#xA;        whole_set = set(whole_s)&#xA;        for send_to_c in all_subsets(whole_s):&#xA;            send_to_d = whole_set - set(send_to_c)&#xA;            yield [perm, send_to_c, send_to_d]&#xA;&#xA;n = 4&#xA;# unique_permutations (defined elsewhere) yields the distinct 'a'/'b' pop orders&#xA;perms_ = list(unique_permutations([n, n], ['a', 'b']))&#xA;result = list(result_sequences(perms_))&#xA;</code></pre>&#xA;
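For comparison, the simulate-and-deduplicate baseline can be written directly as a depth-first search over states (how many elements have been popped from each source, plus the current contents of C and D), collecting final configurations in a set. This is only a sketch of that baseline, useful for checking counts against the OEIS sequence, not an asymptotic improvement:

```python
def destination_stacks(A, B):
    """All distinct final (C, D) contents as bottom-to-top tuples.
    A and B list their elements in pop order (top of stack first)."""
    n, m = len(A), len(B)
    results = set()

    def go(i, j, C, D):
        if i == n and j == m:          # both source stacks exhausted
            results.add((C, D))
            return
        moves = []
        if i < n:
            moves.append((A[i], i + 1, j))   # pop next element of A
        if j < m:
            moves.append((B[j], i, j + 1))   # pop next element of B
        for elem, ni, nj in moves:
            go(ni, nj, C + (elem,), D)       # push popped element onto C
            go(ni, nj, C, D + (elem,))       # ...or onto D

    go(0, 0, (), ())
    return results

# Smallest nontrivial case: A = [1], B = [2] gives 6 distinct outcomes.
assert len(destination_stacks([1], [2])) == 6
```

For n = 1 the six outcomes are: both elements on C in either order, both on D in either order, and the two ways of splitting one element to each stack.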
algorithms combinatorics efficiency
1
2,259
Finding interesting anagrams
<p>Say that $a_1a_2\ldots a_n$ and $b_1b_2\ldots b_n$ are two strings of the same length. An <strong>anagramming</strong> of two strings is a bijective mapping $p:[1\ldots n]\to[1\ldots n]$ such that $a_i = b_{p(i)}$ for each $i$.</p>&#xA;&#xA;<p>There might be more than one anagramming for the same pair of strings. For example, if $a=$<code>abcab</code> and $b=$<code>cabab</code> we have $p_1:[1,2,3,4,5]\to[4,5,1,2,3]$ and $p_2:[1,2,3,4,5] \to [2,5,1,4,3]$, among others.</p>&#xA;&#xA;<p>We'll say that the <strong>weight</strong> $w(p)$ of an anagramming $p$ is the number of cuts one must make in the first string to get chunks that can be rearranged to obtain the second string. Formally, this is the number of values of $i\in[1\ldots n-1]$ for which $p(i)+1\ne p(i+1)$. That is, it is the number of points at which $p$ does <em>not</em> increase by exactly 1. For example, $w(p_1) = 1$ and $w(p_2) = 4$, because $p_1$ cuts <code>12345</code> once, into the chunks <code>123</code> and <code>45</code>, and $p_2$ cuts <code>12345</code> four times, into five chunks.</p>&#xA;&#xA;<p>Suppose there exists an anagramming for two strings $a$ and $b$. Then at least one anagramming must have the least weight. Let's say this one is <strong>lightest</strong>. (There might be multiple lightest anagrammings; I don't care, because I am interested only in the weights.)</p>&#xA;&#xA;<h2>Question</h2>&#xA;&#xA;<p>I want an algorithm which, given two strings for which an anagramming exists, efficiently <strong>yields the exact weight of the lightest anagramming</strong> of the two strings. It is all right if the algorithm also yields a lightest anagramming, but it need not.</p>&#xA;&#xA;<p>It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.</p>&#xA;&#xA;<hr>&#xA;&#xA;<h2>Motivation</h2>&#xA;&#xA;<p>The reason this problem is of interest is as follows. 
It is very easy to make the computer search the dictionary and find anagrams, pairs of words that contain exactly the same letters. But many of the anagrams produced are uninteresting. For instance, the longest examples to be found in Webster's Second International Dictionary are:</p>&#xA;&#xA;<blockquote>&#xA; <p>cholecystoduodenostomy<br>&#xA; duodenocholecystostomy</p>&#xA;</blockquote>&#xA;&#xA;<p>The problem should be clear: these are uninteresting because they admit a very light anagramming that simply exchanges the <code>cholecysto</code>, <code>duodeno</code>, and <code>stomy</code> sections, for a weight of 2. On the other hand, this much shorter example is much more surprising and interesting:</p>&#xA;&#xA;<blockquote>&#xA; <p>coastline<br>&#xA; sectional</p>&#xA;</blockquote>&#xA;&#xA;<p>Here the lightest anagramming has weight 8.</p>&#xA;&#xA;<p>I have a program that uses this method to locate interesting anagrams, namely those for which all anagrammings are of high weight. But it does this by generating and weighing all possible anagrammings, which is slow.</p>&#xA;
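For reference, the slow generate-and-weigh method described above can be written compactly (this is a brute force over all letter-respecting bijections, usable only for short words):

```python
from itertools import permutations

def lightest_weight(a, b):
    """Weight of the lightest anagramming of a and b, by brute force.
    Assumes an anagramming exists, i.e. b is a permutation of a."""
    n = len(a)
    best = n  # any anagramming has weight at most n - 1
    for p in permutations(range(n)):
        if all(a[i] == b[p[i]] for i in range(n)):   # p is an anagramming
            w = sum(1 for i in range(n - 1) if p[i] + 1 != p[i + 1])
            best = min(best, w)
    return best

assert lightest_weight("abcab", "cabab") == 1   # p_1 from the example above
assert lightest_weight("coastline", "sectional") == 8
```

Since all letters of "coastline" are distinct, it has exactly one anagramming, which makes the weight-8 claim easy to check.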
algorithms strings search algorithms natural language processing
1
2,263
Probabilities of duplicate mail detection by comparing notes among servers
<p>I have the following problem:</p>&#xA;<blockquote>&#xA;<p>We want to implement a filtering strategy in e-mail servers to reduce the number of spam messages. Each server will have a buffer, and before sending an e-mail, it checks whether there is a duplicate of the same message in its own buffer and contacts k distinct neighboring servers at random to check whether a duplicate is in another buffer. In case any duplicate message is detected, it will be deleted as spam; otherwise it will be sent after all negative replies are received.</p>&#xA;<p>Let us assume that there are N mail servers, and that a spammer sends M copies of each spam mail. We assume that all copies are sent simultaneously and that each mail is routed to a mail server randomly.</p>&#xA;</blockquote>&#xA;<p>Given M, N and k, I need to find out the probabilities that no spam message is deleted (i.e. no server detects spam), that all spam messages are deleted (all servers detect spam), and that spam messages are deleted from at least one server.</p>&#xA;<p>So far, I have used combinations without repetition to find out the cases that need to be taken into account for given M and N. Now I need to find out the probability that one server receives at least two copies of a message, but I am at a complete loss. Could you please provide some insight into the problem?</p>&#xA;
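For the specific sub-question at the end: under the stated model, the number of copies landing on one fixed server is Binomial(M, 1/N), so "at least two" is one minus the probabilities of zero and exactly one copy. A sketch (this addresses only that sub-question, not the k-neighbor lookups):

```python
def p_at_least_two(M, N):
    """P(a fixed server receives >= 2 of the M copies), assuming each
    copy is routed to one of N servers uniformly and independently."""
    p = 1.0 / N
    p0 = (1 - p) ** M                  # no copy lands on this server
    p1 = M * p * (1 - p) ** (M - 1)    # exactly one copy lands here
    return 1.0 - p0 - p1

# Sanity check: M = 2 copies, N = 2 servers -> both copies must hit the
# fixed server, which happens with probability (1/2)^2 = 1/4.
assert abs(p_at_least_two(2, 2) - 0.25) < 1e-12
```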
combinatorics probability theory
1
2,272
Representing Negative and Complex Numbers Using Lambda Calculus
<p>Most tutorials on Lambda Calculus provide examples where positive integers and Booleans can be represented by functions. What about $-1$ and $i$?</p>&#xA;
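One standard construction (not specific to any particular tutorial): represent an integer as a pair of naturals $(a, b)$ denoting $a - b$, and a Gaussian number such as $i$ as a pair of such integers (real part, imaginary part). Since pairs and naturals are both Church-encodable, everything reduces to what the tutorials already cover. A Python sketch of the arithmetic, with plain tuples standing in for the Church-encoded pairs:

```python
# An integer is a pair of naturals (a, b) denoting a - b.
def int_of(n):
    return (n, 0) if n >= 0 else (0, -n)

def int_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def int_neg(x):            # -(a - b) = b - a
    return (x[1], x[0])

def int_mul(x, y):         # (a-b)(c-d) = (ac+bd) - (ad+bc)
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c)

def int_value(x):          # only for inspection, not part of the encoding
    return x[0] - x[1]

# A complex (Gaussian) number is a pair (re, im) of such integers.
I = (int_of(0), int_of(1))    # the imaginary unit i

def cplx_mul(z, w):           # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    (a, b), (c, d) = z, w
    return (int_add(int_mul(a, c), int_neg(int_mul(b, d))),
            int_add(int_mul(a, d), int_mul(b, c)))

# i * i = -1, as required.
assert int_value(cplx_mul(I, I)[0]) == -1
assert int_value(cplx_mul(I, I)[1]) == 0
```

The same pair-of-pairs idea extends to rationals (pairs of integers) and, with more work, to computable reals.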
data structures lambda calculus integers real numbers
1
2,273
Number of ways of expressing a number as a sum of numbers in [a,b]
<p>I need an algorithm to calculate the number of ways of expressing a number N as a sum of numbers inside the interval [a, b].</p>&#xA;
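Assuming "ways" means the order of the summands does not matter (i.e. partitions of N into parts from [a, b]; for ordered sums the recurrence changes), a standard dynamic program works:

```python
def count_partitions(N, a, b):
    """Number of ways to write N as a sum of parts in [a, b],
    order not significant; each part may be used any number of times."""
    # ways[m] = number of partitions of m using the parts considered so far
    ways = [0] * (N + 1)
    ways[0] = 1
    for part in range(a, b + 1):
        for m in range(part, N + 1):
            ways[m] += ways[m - part]
    return ways[N]

assert count_partitions(4, 1, 2) == 3   # 1+1+1+1, 2+1+1, 2+2
```

Processing one part value at a time (outer loop) is what prevents counting reorderings of the same multiset twice; the running time is O((b - a) · N).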
algorithms combinatorics
0
2,274
Enumerating all the walks in a graph between a start vertex and a terminal vertex?
<p>I was reading about the concept of walks in a graph between a start vertex and a terminating vertex, and a problem struck me: is there any algorithm or method that can be used to enumerate all the distinct walks from a start vertex to a terminal vertex in a graph? If so, can you point me to some relevant links to study this problem, and what are some applications of solving it?</p>&#xA;
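One caveat worth noting: if any cycle is reachable between the two vertices, the number of walks is infinite, so an enumeration must either bound the walk length or restrict itself to simple paths. A bounded-length backtracking sketch:

```python
def walks(adj, start, goal, max_len):
    """All walks (as vertex sequences) from start to goal using at most
    max_len edges, in a graph given as an adjacency dict."""
    found = []

    def extend(walk):
        if walk[-1] == goal:
            found.append(list(walk))
        if len(walk) - 1 < max_len:          # number of edges used so far
            for nxt in adj.get(walk[-1], []):
                walk.append(nxt)
                extend(walk)
                walk.pop()                   # backtrack

    extend([start])
    return found

g = {"s": ["a", "t"], "a": ["t"], "t": []}
# Two walks from s to t with at most 2 edges: s->t and s->a->t.
assert sorted(walks(g, "s", "t", 2)) == [["s", "a", "t"], ["s", "t"]]
```

Dropping the length bound and instead skipping vertices already on the current walk turns this into an enumeration of simple paths.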
algorithms reference request graphs
0
2,280
Does there exist any work on creating a Real Number/Probability Theory Framework in COQ?
<p><a href="http://en.wikipedia.org/wiki/Coq">COQ</a> is an interactive theorem prover that uses the calculus of inductive constructions, i.e. it relies heavily on inductive types. Using those, discrete structures like natural numbers, rational numbers, graphs, grammars, semantics etc. are very concisely represented.</p>&#xA;&#xA;<p>However, since I grew to like the proof assistant, I was wondering whether there are libraries for uncountable structures, like real numbers, complex numbers, probability bounds and such. I am of course aware that one cannot define these structures inductively (at least not as far as I know), but they can be defined axiomatically, using for instance the <a href="http://en.wikipedia.org/wiki/Real_number#Axiomatic_approach">axiomatic approach</a>.</p>&#xA;&#xA;<p>Is there any work that provides basic properties, or even probabilistic bounds like Chernoff bound or union bound as a library?</p>&#xA;
probability theory coq real numbers uncountability
1
2,283
Why is the complexity of negative-cycle-cancelling $O(V^2AUW)$?
<p>We want to solve a minimal-cost-flow problem with a generic negative-cycle cancelling algorithm. That is, we start with a random valid flow, and then we do not pick any "good" negative cycles such as minimal average cost cycles, but use Bellman-Ford to discover a negative cycle and augment along the discovered cycle. Let $V$ be the number of nodes in the graph, $A$ the number of edges, $U$ the maximal capacity of an edge in the graph, and $W$ the maximal cost of an edge in the graph. Then, my learning materials claim: </p>&#xA;&#xA;<ul>&#xA;<li>The maximal costs at the beginning can be no more than $AUW$ </li>&#xA;<li>The augmentation along one negative cycle reduces the costs by at least one unit </li>&#xA;<li>The lower bound for the minimal costs is 0, because we don't allow negative costs </li>&#xA;<li>Each negative cycle can be found in $O(VA)$ </li>&#xA;</ul>&#xA;&#xA;<p>From this they conclude that the algorithm's complexity is $O(V^2AUW)$. I understand the logic behind each of the claims, but think that the complexity is different. Specifically, the maximal number of augmentations is bounded by one cost unit per augmentation, taking the costs from $AUW$ to zero, giving us a maximum of $AUW$ augmentations. We need to discover a negative cycle for each, so we multiply the maximal number of augmentations by the time needed to discover a cycle ($VA$) and arrive at $O(A^2VUW)$ for the algorithm. </p>&#xA;&#xA;<p>Could this be an error in the learning materials (this is a text provided by the professor, not a student's notes from the course), or is my logic wrong? </p>&#xA;
algorithms graphs algorithm analysis runtime analysis network flow
0
2,284
What's the difference between a bridge, a mediator and a wrapper?
<p>The slides from my course in software architecture hint that these are separate terms, but I can't seem to find the difference. Aren't all of them just translating interfaces?</p>&#xA;
terminology software engineering
0
2,292
Computing follow sets conservatively for a PEG grammar
<p>Given a <a href="https://en.wikipedia.org/wiki/Parsing_expression_grammar" rel="nofollow">parsing expression grammar</a> (PEG) grammar and the name of the start production, I would like to label each node with the set of characters that can follow it. I would be happy with a good approximation that is conservative -- if a character can follow a node then it must appear in the follower set.</p>&#xA;&#xA;<p>The grammar is represented as a tree of named productions whose bodies contain nodes representing</p>&#xA;&#xA;<ol>&#xA;<li>Character</li>&#xA;<li>Concatenation</li>&#xA;<li>Union</li>&#xA;<li>Non-terminal references</li>&#xA;</ol>&#xA;&#xA;<p>So given a grammar in ABNF style syntax:</p>&#xA;&#xA;<pre><code>A := B ('a' | 'b');&#xA;B := ('c' | 'd') (B | ());&#xA;</code></pre>&#xA;&#xA;<p>where adjacent nodes are concatenated, <code>|</code> indicates union, single quoted characters match the character they represent, and upper case names are non-terminals.</p>&#xA;&#xA;<p>If the grammar's start production is <code>A</code>, the annotated version might look like</p>&#xA;&#xA;<pre><code>A := &#xA; (&#xA; (B /* [ab] */)&#xA; (&#xA; ('a' /* eof */)&#xA; | &#xA; ('b' /* eof */)&#xA; /* eof */&#xA; )&#xA; /* eof */&#xA; );&#xA;&#xA;B :=&#xA; (&#xA; (&#xA; ('c' /* [abcd] */)&#xA; |&#xA; ('d' /* [abcd] */)&#xA; /* [abcd] */&#xA; )&#xA; (&#xA; (B /* [ab] */)&#xA; |&#xA; ( /* [ab] */)&#xA; /* [ab] */&#xA; )&#xA; );&#xA;</code></pre>&#xA;&#xA;<p>I want this so that I can do some simplification on a PEG grammar. Since order is important in unions in PEG grammars, I want to partition the members of unions based on which ones could accept the same character so that I can ignore order between partition elements.</p>&#xA;&#xA;<p>I'm using OMeta's grow-the-seed scheme for handling direct left-recursion in PEG grammars, so I need something that handles that. 
I expect that any scheme for handling scannerless CF grammars with order-independent unions that is conservative or correct would be conservative for my purposes.</p>&#xA;&#xA;<p>Pointers to algorithms or source code would be much appreciated.</p>&#xA;
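A conservative baseline in the spirit of the last paragraph is the classic CFG nullable/FIRST/FOLLOW fixpoint, which ignores PEG's ordered-choice pruning and therefore over-approximates the true follower sets, which is what was asked for. A sketch over a small node representation (the tuple encoding is mine); it computes FOLLOW per nonterminal for the example grammar above:

```python
EOF = "<eof>"

def follow_sets(prods, start):
    """Conservative FOLLOW sets per nonterminal.
    Nodes: ('char', c) | ('cat', [nodes]) | ('alt', [nodes]) | ('ref', name).
    ('cat', []) plays the role of epsilon."""
    nullable = {n: False for n in prods}
    first = {n: set() for n in prods}

    def node_nullable(nd):
        t = nd[0]
        if t == "char":
            return False
        if t == "cat":
            return all(node_nullable(c) for c in nd[1])
        if t == "alt":
            return any(node_nullable(c) for c in nd[1])
        return nullable[nd[1]]                       # 'ref'

    def node_first(nd):
        t = nd[0]
        if t == "char":
            return {nd[1]}
        if t == "alt":
            out = set()
            for c in nd[1]:
                out |= node_first(c)
            return out
        if t == "cat":
            out = set()
            for c in nd[1]:
                out |= node_first(c)
                if not node_nullable(c):             # later parts unreachable
                    break
            return out
        return first[nd[1]]                          # 'ref'

    changed = True
    while changed:                                   # fixpoint over recursion
        changed = False
        for n, body in prods.items():
            nb, fb = node_nullable(body), node_first(body)
            if nb != nullable[n] or fb - first[n]:
                nullable[n], first[n] = nb, first[n] | fb
                changed = True

    follow = {n: set() for n in prods}
    follow[start].add(EOF)

    def walk(nd, fol):                               # returns True if grew
        t = nd[0]
        if t == "ref":
            if fol - follow[nd[1]]:
                follow[nd[1]] |= fol
                return True
        elif t == "alt":
            return any([walk(c, fol) for c in nd[1]])
        elif t == "cat":
            grew, cur = False, fol
            for c in reversed(nd[1]):                # cur = FOLLOW of c
                grew |= walk(c, cur)
                cur = node_first(c) | (cur if node_nullable(c) else set())
            return grew
        return False

    while any([walk(body, follow[n]) for n, body in prods.items()]):
        pass
    return follow

# A := B ('a' | 'b');   B := ('c' | 'd') (B | ());
g = {
    "A": ("cat", [("ref", "B"), ("alt", [("char", "a"), ("char", "b")])]),
    "B": ("cat", [("alt", [("char", "c"), ("char", "d")]),
                  ("alt", [("ref", "B"), ("cat", [])])]),
}
assert follow_sets(g, "A")["B"] == {"a", "b"}
```

Annotating every interior node, as in the example output above, is the same walk with the computed `fol` recorded at each node instead of only at `ref` nodes.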
formal languages reference request formal grammars parsers
1
2,293
Can the set of encodings of a non-trivial class of languages which contains the empty set be recursively enumerable?
<p>Let $C$ be a non-trivial set of recursively enumerable languages ($\emptyset \subsetneq C \subsetneq \mathrm{RE}$) and let $L$ be the set of encodings of Turing machines that recognize some language in $C$: $$L=\{\langle M \rangle \mid L(M) \in C \}$$</p>&#xA;&#xA;<p>Suppose that $\langle M_{loopy}\rangle \in L$, where $M_{loopy}$ is a TM that never halts.&#xA;I wonder if it is possible that $L \in \mathrm{RE}$?</p>&#xA;&#xA;<p>By Rice's theorem I know that $L \notin \mathrm{R}$ (the set of recursive languages), so either $L \notin \mathrm{RE}$ or $\overline{L} \notin \mathrm{RE}$. Does it have to be the first option since $M_{loopy} \in L$?</p>&#xA;
computability turing machines
1
2,295
Is the following recurrence for this program's runtime correct?
<p>Let $f$ and $g$ be two functions and $p$ a number. Consider the following program:</p>&#xA;&#xA;<pre><code>Recurs(v,p) :&#xA; find s &lt; v such that f(s,v) &lt; v/2 and g(s,v-s) &lt; p&#xA;&#xA; if no such s exists then&#xA; return v&#xA; else if s &lt;= v/4 then &#xA; return v-s U Recurs(s,p)&#xA; else if s &gt; v/4 then &#xA; return Recurs(s,p) U Recurs(v-s,p)&#xA;end&#xA;</code></pre>&#xA;&#xA;<p>Can the recurrence for the running time of this recursion be $T(v)=T\left(\frac{v}{4}\right)+T\left(\frac{3v}{4}\right)+1$?</p>&#xA;
algorithms algorithm analysis runtime analysis recurrence relation
0
2,296
How to use greedy algorithm to solve this?
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/2188/how-to-use-greedy-algorithm-to-solve-this">How to use greedy algorithm to solve this?</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>You are given $n$ integers $a_1, \ldots, a_n$, all between $0$ and $l$. Under each integer $a_i$ you should write an integer $b_i$ between $0$ and $l$ with the requirement that the $b_i$'s form a non-decreasing sequence (i.e. $b_i \le b_{i+1}$ for all $i$). Define the deviation of such a sequence to be $\max(|a_1−b_1|,\ldots,|a_n−b_n|)$. Design an algorithm that finds the $b_i$'s with the minimum deviation in runtime $O(n\sqrt[4]{l})$.</p>&#xA;&#xA;<p>There were also two hints: one is to first find an algorithm in $O(nl)$ time; the other is that the runtime of the optimal algorithm is actually much less than $\Theta(n\sqrt[4]{l})$.</p>&#xA;&#xA;<p>I was able to find a solution that runs in $O(n^2)$ (without using any of the hints), but I have no idea how to find an algorithm that runs in $O(n\sqrt[4]{l})$. Can anyone offer some insight into this? Maybe give a rough sketch of your algorithm? Thanks!</p>&#xA;
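One standard route toward the first hint (and in fact past it, to $O(n \log l)$) is to binary-search the deviation $d$ and test feasibility greedily: scanning left to right, the smallest legal $b_i$ is $\max(b_{i-1}, a_i - d)$, and $d$ is feasible iff this never exceeds $\min(a_i + d, l)$. This is my own sketch, not necessarily the intended solution:

```python
def min_deviation(a, l):
    """Smallest d such that some non-decreasing b with 0 <= b_i <= l
    satisfies |a_i - b_i| <= d for all i."""
    def feasible(d):
        prev = 0
        for x in a:
            prev = max(prev, x - d)   # smallest b_i still allowed
            if prev > min(x + d, l):  # b_i forced above its upper bound
                return False
        return True

    lo, hi = 0, l                     # d = l is always feasible (b_i = 0)
    while lo < hi:                    # binary search: O(n log l) overall
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# a = [3, 1]: b = [2, 2] achieves deviation 1, and 0 is infeasible.
assert min_deviation([3, 1], 10) == 1
```

The greedy check is correct because keeping every $b_i$ as small as allowed leaves maximal room for the later elements.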
algorithms optimization greedy algorithms
0
2,300
Theoretical speed gain of quad core vs. single core
<p>I first asked this question at cstheory, but they suggested to ask my question here, so here it goes ...</p>&#xA;&#xA;<p>I'm working on my masters thesis and I need a theoretical value of the (average) speed gain that a quadcore processor brings compared to a singlecore processor, when they both use the same frequency. So, for example, the speed gain of a 2 GHz singlecore vs. a 2 GHz quadcore.</p>&#xA;&#xA;<p>Somewhere on the internet I've read that a quadcore is 2.6 times faster than a singlecore, but the author didn't mention any source, so I cannot use that in my thesis.</p>&#xA;&#xA;<p>I've been trying to calculate some things myself, but didn't come to a conclusion. I tried like this:</p>&#xA;&#xA;<pre><code>threads | quad core | single core | ratio &#xA;--------|-----------|-------------|-------&#xA;1 | 1 | 1 | 1&#xA;2 | 2 | 1/2 | 4&#xA;3 | 3 | 1/3 | 9&#xA;4 | 4 | 1/4 | 16&#xA;5 | 3+1/2 | 1/5 | 17.5&#xA;6 | 2+2(1/2) | 1/5 | 15&#xA;7 | 1+3(1/2) | 1/5 | 12.5&#xA;...&#xA;</code></pre>&#xA;&#xA;<p>This table represents the timeslices available to execute a task (I've assumed a fair 50/50 usage for each thread). For example, when using a singlecore and an application uses 3 threads, each thread can work 1/3 of the time, while with a quadcore each thread can work 100% of the time, because the 3 threads can be spread across separate cores. After playing with some calculations in Excel, I could not come to any conclusion.</p>&#xA;&#xA;<p>I'm a bit stuck and need some fresh ideas on how to get a theoretical number that represents how much faster a quad core is (on average) compared to a single core. Maybe some of you know some empirical numbers with a good reference to the source? Anyway, all help is welcome, because I'm a bit stuck and this is the last part I need to cover in my thesis.</p>&#xA;&#xA;<p>Thanks!</p>&#xA;
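There is no single theoretical number without fixing a workload model; the standard citable tool here is Amdahl's law, which gives the speedup on $c$ cores as a function of the fraction $p$ of execution time that is parallelizable. If the unsourced 2.6× figure is right, it simply corresponds to one particular $p$:

```python
def amdahl_speedup(p, cores):
    """Amdahl's law: speedup on `cores` cores when a fraction p of the
    single-core execution time is perfectly parallelizable."""
    return 1.0 / ((1.0 - p) + p / cores)

assert amdahl_speedup(1.0, 4) == 4.0                 # fully parallel: ideal 4x
assert abs(amdahl_speedup(0.0, 4) - 1.0) < 1e-12     # fully serial: no gain
# A ~2.6x quad-core speedup corresponds to p around 0.82:
assert abs(amdahl_speedup(0.82, 4) - 2.6) < 0.01
```

So a defensible thesis formulation is "speedup between 1× and 4× depending on the parallel fraction", with Amdahl's law as the reference, rather than any single average.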
computer architecture performance
0
2,301
Categorisation of type systems (strong/weak, dynamic/static)
<p>In short: how are type systems categorised in academic contexts; particularly, where can I find reputable sources that make the distinctions between different sorts of type system clear?</p>&#xA;&#xA;<p>In a sense the difficulty with this question is not that I can't find an answer, but rather that I can find too many, and none stand out as correct. The background is I am attempting to improve an article on the Haskell wiki about <a href="http://www.haskell.org/haskellwiki/Typing">typing</a>, which currently claims the following distinctions:</p>&#xA;&#xA;<ul>&#xA;<li>No typing: The language has no notion of types, or from a typed perspective: There is exactly one type in the language. Assembly language has only the type 'bit pattern', Rexx and Tk have only the type 'text', core MatLab has only the type 'complex-valued matrix'.</li>&#xA;<li>Weak typing: There are only few distinguished types and maybe type synonyms for several types. E.g. C uses integer numbers for booleans, integers, characters, bit sets and enumerations.</li>&#xA;<li>Strong typing: Fine grained set of types like in Ada, Wirthian languages (Pascal, Modula-2), Eiffel</li>&#xA;</ul>&#xA;&#xA;<p>This is entirely contrary to my personal perception, which was more along the lines of:</p>&#xA;&#xA;<ul>&#xA;<li>Weak typing: Objects have types, but are implicitly converted to other types when the context demands it. For example, Perl, PHP and JavaScript are all languages in which <code>"1"</code> can be used in more or less any context that <code>1</code> can.</li>&#xA;<li>Strong typing: Objects have types, and there are no implicit conversions (although overloading may be used to simulate them), so using an object in the wrong context is an error. 
In Python, indexing an array with a string or float throws a TypeError exception; in Haskell it will fail at compile time.</li>&#xA;</ul>&#xA;&#xA;<p>I asked for opinions on this from other people more experienced in the field than I am, and one gave this characterisation:</p>&#xA;&#xA;<ul>&#xA;<li>Weak typing: Performing invalid operations on data is not controlled or rejected, but merely produces invalid/arbitrary results.</li>&#xA;<li>Strong typing: Operations on data are only permitted if the data is compatible with the operation.</li>&#xA;</ul>&#xA;&#xA;<p>As I understand it, the first and last characterisations would call C weakly-typed, the second would call it strongly-typed. The first and second would call Perl and PHP weakly-typed, the third would call them strongly-typed. All three would describe Python as strongly-typed.</p>&#xA;&#xA;<p>I think most people would tell me "well, there is no consensus, there is no accepted meaning of the terms". If those people are wrong, I'd be happy to hear about it, but if they are right, then how <em>do</em> CS researchers describe and compare type systems? What terminology can I use that is less problematic?</p>&#xA;&#xA;<p>As a related question, I feel the dynamic/static distinction is often given in terms of "compile time" and "run time", which I find unsatisfactory given that whether or not a language is compiled is not so much a property of that language as its implementations. I feel there should be a purely-semantic description of dynamic versus static typing; something along the lines of "a static language is one in which every subexpression can be typed". I would appreciate any thoughts, particularly references, that bring clarity to this notion.</p>&#xA;
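The Python behavior cited above is easy to demonstrate; under the second characterisation this is strong (but dynamic) typing: the operation is rejected, but only at run time:

```python
items = [10, 20, 30]

try:
    items["1"]          # indexing a list with a str, not an int
    rejected = False
except TypeError:       # Python rejects the operation at run time
    rejected = True

assert rejected
# By contrast, a weakly typed language might coerce "1" to 1 and return 20.
```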
reference request programming languages type theory
1
2,302
Restricted version of vertex cover
<p>I am interested in the complexity of the restricted version of the vertex cover problem below:</p>&#xA;<blockquote>&#xA;<p><strong>Instance:</strong> A bipartite graph <span class="math-container">$G =(L, R, E)$</span> and an integer <span class="math-container">$K$</span>.</p>&#xA;<p><strong>Question:</strong> Is there <span class="math-container">$S \subset L$</span> with <span class="math-container">$|S| \leq K$</span> such that every vertex in <span class="math-container">$R$</span> has a neighbor in <span class="math-container">$S$</span> <span class="math-container">$(S$</span> is a vertex cover for <span class="math-container">$R)$</span>?</p>&#xA;</blockquote>&#xA;<p>Vertex cover is in <span class="math-container">$\mathsf{P}$</span> for bipartite graphs when we may choose <span class="math-container">$S \subset L \cup R$</span> to cover <span class="math-container">$L \cup R$</span>; and it is <span class="math-container">$\mathsf{NP}$</span>-complete for nonbipartite graphs. However, the problem I am looking at does not fit either case. Any pointers to where I could find an answer will be appreciated.</p>&#xA;
complexity theory algorithms graphs
0
2,303
Non-linear grammars
<p>I am looking for information about grammars which can be described by a non-linear equation such as a quadratic equation:</p>&#xA;&#xA;<p>$\qquad \displaystyle G \to G G a \mid b$</p>&#xA;&#xA;<p>or</p>&#xA;&#xA;<p>$\qquad \displaystyle G \to G G \mid y G z \mid z G y \mid \varepsilon$</p>&#xA;&#xA;<p>While there is lots of material about linear grammars, their connection with regular languages, etc., "quadratic grammars" (a term which doesn't even seem to exist) are only mentioned when an author presents some counterexamples for a parsing algorithm in order to show its limitations.</p>&#xA;&#xA;<p>Is there an autonomous treatment of grammars which can be described by general polynomial equations?</p>&#xA;
reference request formal grammars
0
2,304
Recursive, Recursively Enumerable and None of the Above
<p>Let </p>&#xA;&#xA;<ul>&#xA;<li>$A = \mathrm{R}$ be the set of all languages that are recursive,</li>&#xA;<li>$B = \mathrm{RE} \setminus \mathrm{R}$ be the set of all languages that are recursively enumerable but not recursive and</li>&#xA;<li>$C = \overline{\mathrm{RE}}$ be the set of all languages that are not recursively enumerable.</li>&#xA;</ul>&#xA;&#xA;<p>It is clear that for example $\mathrm{CFL} \subseteq A$.</p>&#xA;&#xA;<p>What is a simple example of a member of set B?</p>&#xA;&#xA;<p>What is a simple example of a member of set C?</p>&#xA;&#xA;<p>In general, how do you classify a language as either A, B or C?</p>&#xA;
formal languages computability
1
2,306
"Flow layouts" inside a GUI -- how do I come up with a good algorithm?
<p>I was trying to write some simple code for a "flow layout" manager and what I came up with initially was something like the following (semi-pseudocode):</p>&#xA;&#xA;<pre><code>int rowHeight = 0;&#xA;RECT rect = parent.getClientRect();&#xA;POINT pos = POINT(0, 0); // Offset from the client rect's top-left corner, row by row&#xA;&#xA;foreach (Window child in parent.children)&#xA;{&#xA; // POINT is a tuple of: (x, y)&#xA; // SIZE is a tuple of: (width, height)&#xA; // RECT is a tuple of: (left, top, right, bottom)&#xA; RECT proposed1 = RECT(rect.left + pos.x, rect.top + pos.y, rect.right, rect.bottom),&#xA; proposed2 = RECT(rect.left, rect.top + pos.y + rowHeight, rect.right, rect.bottom);&#xA; SIZE size1 = child.getPreferredSize(proposed1),&#xA; size2 = child.getPreferredSize(proposed2);&#xA; if (size1.width &lt;= proposed1.width)&#xA; {&#xA; child.put(proposed1); // same row&#xA; pos.x += size1.width;&#xA; rowHeight = max(rowHeight, size1.height);&#xA; }&#xA; else&#xA; {&#xA; child.put(proposed2); // new row&#xA; pos.x = size2.width; // advance past the child just placed&#xA; pos.y += rowHeight;&#xA; rowHeight = size2.height;&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>In other words, the algorithm is very simple:<br>&#xA;The layout manager asks every component, "is the remaining portion of the row enough for you?" and, if the component says "no, my width is too long", it places the component on the next row instead.</p>&#xA;&#xA;<p>There are two major problems with this approach:</p>&#xA;&#xA;<ul>&#xA;<li><p>This algorithm results in very long, thin components, because it is essentially greedy with the width of each component -- if a component wants the whole row, it will use the whole row (ugly), even if it could use a smaller width (but larger height).</p></li>&#xA;<li><p>It only works if you already <em>know</em> what the parent's size is -- but you might not! Instead, you might simply have a restriction, "the parent's size must be between these two dimensions", but the rest might be open-ended.</p></li>&#xA;</ul>&#xA;&#xA;<p>I am, however, at a loss as to how to come up with a better algorithm -- how do I figure out what would be a good size to 'propose' to the component? And even when I figure that out, what should I try to optimize, exactly? (The area, the width, the aspect ratio, the number of components on the screen, or something else?)</p>&#xA;&#xA;<p>Any ideas on how I should approach this problem?</p>&#xA;
algorithms computational geometry greedy algorithms user interface
0
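The coordinate bookkeeping of the greedy row layout described above can be made concrete with a runnable toy version; the simplifications (fixed child sizes instead of getPreferredSize, positions relative to the client origin) and all names are mine:

```python
# Hedged, runnable toy of the greedy "flow layout" idea.
# Children are (width, height) tuples; returns (x, y) positions
# relative to the parent's client origin.

def flow_layout(parent_width, children):
    x = y = row_height = 0
    positions = []
    for w, h in children:
        if x > 0 and x + w > parent_width:  # doesn't fit: start a new row
            x = 0
            y += row_height
            row_height = 0
        positions.append((x, y))
        x += w                              # advance past the placed child
        row_height = max(row_height, h)
    return positions

# three 40-wide children in a 100-wide parent: the third wraps to row 2
print(flow_layout(100, [(40, 10), (40, 20), (40, 10)]))
# [(0, 0), (40, 0), (0, 20)]
```

Note how this exhibits exactly the first complaint in the question: the layout is greedy in width and never trades width for height.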
2,311
Turing Machine-Like Formalism for The Actor Model
<p>Turing machines have a <a href="https://en.wikipedia.org/wiki/Turing_machine#Formal_definition">formal</a> description of how a computation is done, based on a symbol alphabet, states and transition rules. </p>&#xA;&#xA;<p><a href="https://en.wikipedia.org/wiki/Actor_model">The Actor Model</a> is sometimes mentioned as a more powerful computational model than Turing machines (not in what it can compute, but in other aspects). </p>&#xA;&#xA;<ol>&#xA;<li>Is the Actor Model a full-fledged Turing machine alternative as a computational model? </li>&#xA;<li>Does the Actor Model also have such a symbol-based formal description of computation, akin to the Turing machine's?</li>&#xA;<li>Are the actors assumed to be Turing machine equivalent - since each message is processed sequentially (and atomically)?</li>&#xA;</ol>&#xA;&#xA;<p>There are many theoretical results based on Turing machines, e.g. the halting problem, decidability, the relation to Gödel's incompleteness theorem, etc. </p>&#xA;&#xA;<p>Can these proofs be formally generalized to the Actor Model? Has this been done?</p>&#xA;
terminology computability reference request programming languages computation models
0
2,312
Solve a problem through reduction
<p>I am aware that for a problem to be considered NP-hard, any problem in NP must be reducible to it (the problem which you are trying to prove is NP-hard).</p>&#xA;&#xA;<p>Let's assume that you have proven that a problem <code>Y</code> is NP-hard, and you have a problem <code>X</code> which you know is in NP and would like to solve.</p>&#xA;&#xA;<p>To solve <code>X</code>, which of the following reductions would be carried out?</p>&#xA;&#xA;<ol>&#xA;<li>X -> Y</li>&#xA;<li>Y -> X</li>&#xA;</ol>&#xA;&#xA;<p>That is, would you reduce <code>X</code> to <code>Y</code> or vice versa, given that <code>X</code> is in NP and <code>Y</code> is NP-hard?</p>&#xA;
algorithms complexity theory reductions np hard
0
2,317
Complexity of space density and sequentiality
<p>I'm looking for some standard terminology, metrics and/or applications of the consideration of density and sequentiality of algorithms.</p>&#xA;&#xA;<p>When we measure algorithms we tend to give the big-Oh notation such as $O(n)$ and usually we are measuring time complexity. Somewhat less frequently, though still often, we'll also measure the space complexity of an algorithm.</p>&#xA;&#xA;<p>Given current computing systems, however, the density of memory and the sequence in which it is accessed play a major role in the practical performance of the algorithm. Indeed there are scenarios where an algorithm with time complexity $O(\log n)$ and dispersed random memory access can be slower than an $O(n)$ algorithm with dense sequential memory access. I've not seen these aspects covered in formal theory before; surely such a treatment must exist and I'm just ignorant of it.</p>&#xA;&#xA;<p>What are the standard metrics, terms, and approaches to this space density and access sequentiality?</p>&#xA;
complexity theory reference request terminology space complexity
1
2,319
How can a TM M' have a property P if it only accepts a single string x from language of P?
<p>Here is the document: <a href="http://spark-public.s3.amazonaws.com/automata/slides/19_tm4.pdf" rel="nofollow">More Undecidable Problems</a></p>&#xA;&#xA;<p>For a given property $P$ of languages, define $L_P$ as the set of all Turing machines (resp. their encodings) that accept languages with $P$, that is</p>&#xA;&#xA;<p>$\qquad \displaystyle L_{P} = \{ ⟨M⟩ \mid \mathcal{L}(M) \text{ has property } P \}$</p>&#xA;&#xA;<p>If $P$ is a trivial property, that is if $P$ holds for all or no languages, $L_P$ is decidable, too (as $L_P=\emptyset$ or $L_P = \{\langle M\rangle \mid M \text{ Turing machine }\}$). If $P$ is not trivial, $L_P$ is undecidable (by Rice's theorem), which means that membership in this language cannot be decided.</p>&#xA;&#xA;<p>We determine whether a machine $M$ has property $P$ by reducing $M$ to another machine $M'$, and check that if $M'$ accepts the string $x$ obtained from the initial string $w$, we can conclude that $L(M)$ and $L(M')$ have property $P$.</p>&#xA;&#xA;<p>However, as the title suggests, we only reduce a single string $w$ to $x$, and if $x$ is accepted we conclude that the whole of $L(M')$ has property $P$. Thus, $M'$ is obviously part of $L_P$. <em>What if some random strings in $L(M')$ do not have property $P$?</em></p>&#xA;&#xA;<p><em>What if $M$ accepts some string, but the reductions of those strings are not accepted by $M'$?</em></p>&#xA;&#xA;<p>A reduction from language $L$ to language $L'$ is an algorithm (a TM that always halts) that takes a string $w$ and converts it to a string $x$, with the property that <strong>$x$ is in $L'$ if and only if $w$ is in $L$</strong>.</p>&#xA;&#xA;<p><em>Does this imply the reduced language $L'$ will have every property $P$ that $L$ has?</em> We can conclude that if $L'$ is decidable, then $L$ is decidable as well and vice versa. <em>Can we conclude the same thing for property $P$?</em></p>&#xA;
computability turing machines
0
2,320
How to prove that a grammar is unambiguous?
<p>My problem is: how can I prove that a grammar is unambiguous?&#xA;I have the following grammar:&#xA;$$S&#xA;→ statement&#xA;∣ \mbox{if } expression \mbox{ then } S&#xA;∣ \mbox{if } expression \mbox{ then } S \mbox{ else } S$$</p>&#xA;&#xA;<p>I transformed it into the following grammar, which I think is unambiguous:</p>&#xA;&#xA;<ul>&#xA;<li><p>$ S → S_1 ∣ S_2 $</p></li>&#xA;<li><p>$S_1&#xA;→ \mbox{if } expression \mbox{ then } S&#xA;∣ \mbox{if } expression \mbox{ then } S_2 \mbox{ else } S_1$</p></li>&#xA;<li><p>$S_2&#xA;→ \mbox{if } expression \mbox{ then } S_2 \mbox{ else } S_2&#xA;∣ statement$</p></li>&#xA;</ul>&#xA;&#xA;<p>I know that an unambiguous grammar has exactly one parse tree for every word it generates.</p>&#xA;
context free formal grammars proof techniques ambiguity
0
2,321
Where should we place action symbols in a grammar?
<p>In the following grammar, used for math expressions (and in other grammars), how do I know where I should place the action symbols (@add, @mul, @pushID)? Is there an algorithm for it?</p>&#xA;&#xA;<pre>&#xA;E -> TE'&#xA;E'-> +T @add E'|ϵ&#xA;T -> FT'&#xA;T'-> xF @mul T'|ϵ&#xA;F -> (E)|@pushID id&#xA;</pre>&#xA;&#xA;<p>For example, why is @add between <code>+T</code> and <code>E'</code> and not after <code>E'</code>?&#xA;I searched for such an algorithm but didn't find anything useful.</p>&#xA;
formal grammars compilers
0
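The placement can be motivated operationally: an action fires as soon as the semantic values it needs are on the value stack, and before the recursion that consumes later operands. A hedged sketch of the scheme above as a recursive-descent evaluator (single digits for id, '*' for the grammar's x, the E' and T' tail recursion rendered as loops; all names mine):

```python
# Hedged illustration of action placement in  E' -> + T @add E'.
# @add fires right after "+ T" because exactly then both operands sit on
# the stack; firing it after E' would combine operands in the wrong order.

class Parser:
    def __init__(self, text):
        self.toks = list(text)
        self.pos = 0
        self.stack = []                 # semantic value stack

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self, t):
        assert self.peek() == t
        self.pos += 1

    def E(self):                        # E -> T E'
        self.T()
        while self.peek() == "+":       # E' -> + T @add E' | eps
            self.eat("+")
            self.T()
            b, a = self.stack.pop(), self.stack.pop()
            self.stack.append(a + b)    # @add

    def T(self):                        # T -> F T'
        self.F()
        while self.peek() == "*":       # T' -> * F @mul T' | eps
            self.eat("*")
            self.F()
            b, a = self.stack.pop(), self.stack.pop()
            self.stack.append(a * b)    # @mul

    def F(self):                        # F -> ( E ) | @pushID id
        if self.peek() == "(":
            self.eat("(")
            self.E()
            self.eat(")")
        else:
            self.stack.append(int(self.peek()))  # @pushID (single digits)
            self.pos += 1

p = Parser("2+3*4")
p.E()
print(p.stack[0])  # 14
```

Moving the append after the loop would sum operands only at the very end, losing left-to-right evaluation order, which is the usual reason translation schemes pin actions immediately after the symbols whose values they consume.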
2,326
Type inference with product types
<p>I’m working on a compiler for a concatenative language and would like to add type inference support. I understand Hindley–Milner, but I’ve been learning the type theory as I go, so I’m unsure of how to adapt it. Is the following system sound and decidably inferable?</p>&#xA;&#xA;<p>A term is a literal, a composition of terms, a quotation of a term, or a primitive.</p>&#xA;&#xA;<p>$$ e ::= x \:\big|\: e\:e \:\big|\: [e] \:\big|\: \dots $$</p>&#xA;&#xA;<p>All terms denote functions. For two functions $e_1$ and $e_2$, $e_1\:e_2 = e_2 \circ e_1$, that is, juxtaposition denotes reverse composition. Literals denote niladic functions.</p>&#xA;&#xA;<p>The terms other than composition have basic type rules:</p>&#xA;&#xA;<p>$$&#xA;\dfrac{}{x : \iota}\text{[Lit]} \\&#xA;\dfrac{\Gamma\vdash e : \sigma}{\Gamma\vdash [e] : \forall\alpha.\:\alpha\to\sigma\times\alpha}\text{[Quot]}, \alpha \text{ not free in } \Gamma&#xA;$$</p>&#xA;&#xA;<p>Notably absent are rules for application, since concatenative languages lack it.</p>&#xA;&#xA;<p>A type is either a literal, a type variable, or a function from stacks to stacks, where a stack is defined as a right-nested tuple. All functions are implicitly polymorphic with respect to the “rest of the stack”.</p>&#xA;&#xA;<p>$$&#xA;\begin{aligned}&#xA;\tau &amp; ::= \iota \:\big|\: \alpha \:\big|\: \rho\to\rho \\&#xA;\rho &amp; ::= () \:\big|\: \tau\times\rho \\&#xA;\sigma &amp; ::= \tau \:\big|\: \forall\alpha.\:\sigma&#xA;\end{aligned}&#xA;$$</p>&#xA;&#xA;<p>This is the first thing that seems suspect, but I don’t know exactly what’s wrong with it.</p>&#xA;&#xA;<p>To help readability and cut down on parentheses, I’ll assume that $a\:b = b \times (a)$ in type schemes. I’ll also use a capital letter for a variable denoting a stack, rather than a single value.</p>&#xA;&#xA;<p>There are six primitives. The first five are pretty innocuous. <code>dup</code> takes the topmost value and produces two copies of it. 
<code>swap</code> changes the order of the top two values. <code>pop</code> discards the top value. <code>quote</code> takes a value and produces a quotation (function) that returns it. <code>apply</code> applies a quotation to the stack.</p>&#xA;&#xA;<p>$$&#xA;\begin{aligned}&#xA;\mathtt{dup} &amp; :: \forall A b.\: A\:b \to A\:b\:b \\&#xA;\mathtt{swap} &amp; :: \forall A b c.\: A\:b\:c \to A\:c\:b \\&#xA;\mathtt{pop} &amp; :: \forall A b.\: A\:b \to A \\&#xA;\mathtt{quote} &amp; :: \forall A b.\: A\:b \to A\:(\forall C. C \to C\:b) \\&#xA;\mathtt{apply} &amp; :: \forall A B.\: A\:(A \to B) \to B \\&#xA;\end{aligned}&#xA;$$</p>&#xA;&#xA;<p>The last combinator, <code>compose</code>, ought to take two quotations and return the type of their concatenation, that is, $[e_1]\:[e_2]\:\mathtt{compose} = [e_1\:e_2]$. In the statically typed concatenative language <a href="http://www.cat-language.com/">Cat</a>, the type of <code>compose</code> is very straightforward.</p>&#xA;&#xA;<p>$$&#xA;\mathtt{compose} :: \forall A B C D.\: A\:(B \to C)\:(C \to D) \to A\:(B \to D)&#xA;$$</p>&#xA;&#xA;<p>However, this type is too restrictive: it requires that the production of the first function <em>exactly match</em> the consumption of the second. In reality, you have to assume distinct types, then unify them. But how would you write that type?</p>&#xA;&#xA;<p>$$ \mathtt{compose} :: \forall A B C D E. A\:(B \to C)\:(D \to E) \to A \dots $$</p>&#xA;&#xA;<p>If you let $\setminus$ denote a <em>difference</em> of two types, then I <em>think</em> you can write the type of <code>compose</code> correctly.</p>&#xA;&#xA;<p>$$&#xA;\mathtt{compose} :: \forall A B C D E.\: A\:(B \to C)\:(D \to E) \to A\:((D \setminus C)\:B \to ((C \setminus D)\:E))&#xA;$$</p>&#xA;&#xA;<p>This is still relatively straightforward: <code>compose</code> takes a function $f_1 : B \to C$ and one $f_2 : D \to E$. 
Its result consumes $B$ atop the consumption of $f_2$ not produced by $f_1$, and produces $D$ atop the production of $f_1$ not consumed by $f_2$. This gives the rule for ordinary composition.</p>&#xA;&#xA;<p>$$&#xA;\dfrac{\Gamma\vdash e_1 : \forall A B.\: A \to B \quad \Gamma\vdash e_2 : \forall C D. C \to D}{\Gamma\vdash e_1 e_2 : ((C \setminus B)\:A \to ((B \setminus C)\:D))}\text{[Comp]}&#xA;$$</p>&#xA;&#xA;<p>However, I don’t know that this hypothetical $\setminus$ actually corresponds to anything, and I’ve been chasing it around in circles for long enough that I think I took a wrong turn. Could it be a simple difference of tuples?</p>&#xA;&#xA;<p>$$&#xA;\begin{align}&#xA;\forall A. () \setminus A &amp; = () \\&#xA;\forall A. A \setminus () &amp; = A \\&#xA;\forall A B C D. A B \setminus C D &amp; = B \setminus D \textit{ iff } A = C \\&#xA;\text{otherwise} &amp; = \textit{undefined}&#xA;\end{align}&#xA;$$</p>&#xA;&#xA;<p>Is there something horribly broken about this that I’m not seeing, or am I on something like the right track? (I’ve probably quantified some of this stuff wrongly and would appreciate fixes in that area as well.)</p>&#xA;
programming languages logic compilers type theory type checking
1
2,332
Expression Problem – looking for a similar standard problem
<p>The <a href="http://en.wikipedia.org/wiki/Expression_problem" rel="nofollow">Expression Problem</a>, popularized by Philip Wadler, is often used as a standard problem to evaluate programming languages.</p>&#xA;&#xA;<p>I think it is a very clear and popular example and I wonder if there are any similar standard problems that are possibly also as widely used and as clear.</p>&#xA;&#xA;<p>So, are there any similar standard problems?</p>&#xA;&#xA;<p>In the case of Feature Oriented Programming (<a href="http://en.wikipedia.org/wiki/Feature_Oriented_Programming" rel="nofollow">link</a> <a href="http://en.wikipedia.org/wiki/FOSD_Program_Cubes#Applications" rel="nofollow">link</a>) I found some standard problems, like:</p>&#xA;&#xA;<ul>&#xA;<li>implementation of a stack with different features</li>&#xA;<li>implementation of linked lists</li>&#xA;<li>implementation of a calculator</li>&#xA;<li>the graph product line</li>&#xA;<li>stock broker and bank account examples</li>&#xA;<li>hierarchical display</li>&#xA;</ul>&#xA;
reference request programming languages software engineering
0
2,336
Sorting algorithms which accept a random comparator
<p>Generic sorting algorithms generally take a set of data to sort and a comparator function which can compare two individual elements. If the comparator is an order relation¹, then the output of the algorithm is a sorted list/array.</p>&#xA;&#xA;<p>I am wondering though which sort algorithms would actually <em>work</em> with a comparator that is not an order relation (in particular one which returns a random result on each comparison). By "work" I mean here that they continue to return a permutation of their input and run at their typically quoted time complexity (as opposed to always degrading to the worst-case scenario, going into an infinite loop, or missing elements). The ordering of the results would be undefined, however. Even better, the resulting ordering would be a uniform distribution when the comparator is a coin flip.</p>&#xA;&#xA;<p>From my rough mental calculation it appears that a merge sort would be fine with this and maintain the same runtime cost and produce a fair random ordering. I think that something like a quick sort would however degenerate, possibly not finish, and not be fair.</p>&#xA;&#xA;<p>What other sorting algorithms (other than merge sort) would work as described with a random comparator?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><p>For reference, a comparator is an order relation if it is a proper function (deterministic) and satisfies the axioms of an order relation:</p>&#xA;&#xA;<ul>&#xA;<li>it is deterministic: <code>compare(a,b)</code> for a particular <code>a</code> and <code>b</code> always returns the same result.</li>&#xA;<li>it is transitive: <code>compare(a,b) and compare(b,c) implies compare(a,c)</code></li>&#xA;<li>it is antisymmetric: <code>compare(a,b) and compare(b,a) implies a == b</code></li>&#xA;</ul></li>&#xA;</ol>&#xA;&#xA;<p>(Assume that all input elements are distinct, so reflexivity is not an issue.)</p>&#xA;&#xA;<p>A random comparator violates all of these rules. There are, however, comparators that are not order relations yet are not random (for example, they might violate only one rule, and only for particular elements of the set).</p>&#xA;
algorithms randomized algorithms sorting
1
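The merge-sort intuition is easy to test empirically: with a coin-flip comparator it still terminates after the usual number of comparisons and still returns a permutation of its input. A hedged sketch (note it deliberately does not check fairness; whether the resulting order is uniformly distributed is a separate question that this experiment does not settle):

```python
import random

# Hedged experiment: merge sort driven by an arbitrary comparator.
# With a coin-flip comparator it still terminates in O(n log n)
# comparisons and outputs a permutation of its input.

def merge_sort(xs, less):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    a, b = merge_sort(xs[:mid], less), merge_sort(xs[mid:], less)
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if less(a[i], b[j]):
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]          # append whichever side remains

random.seed(0)
coin = lambda x, y: random.random() < 0.5
data = list(range(10))
shuffled = merge_sort(data, coin)
assert sorted(shuffled) == data                       # still a permutation
assert merge_sort(data, lambda x, y: x < y) == data   # sane comparator sorts
print(shuffled)
```

The structural reason is that merge sort's control flow (split sizes, loop bounds) never depends on comparison outcomes, only the interleaving does; partition-based algorithms like quicksort do not share this property.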
2,338
How to prove that ε-loops are not necessary in PDAs?
<p>In the context of our investigation of <a href="https://cs.stackexchange.com/questions/110/determining-capabilities-of-a-min-heap-or-other-exotic-state-machines">heap automata</a>, I would like to prove that a particular variant cannot accept non-context-sensitive languages. As we have no equivalent grammar model, I need a proof that uses only automata; therefore, I have to show that heap automata can be simulated by <a href="https://en.wikipedia.org/wiki/Linear_bounded_automaton" rel="nofollow noreferrer">LBA</a>s (or an equivalent model).</p>&#xA;&#xA;<p>I expect the proof to work similarly to showing that pushdown automata accept a subset of the context-sensitive languages. However, all proofs I know either</p>&#xA;&#xA;<ul>&#xA;<li>use grammars -- here the fact is obvious by definition -- or</li>&#xA;<li>are unconvincingly vague (e.g. <a href="http://www.cs.uky.edu/~lewis/texts/theory/automata/lb-auto.pdf" rel="nofollow noreferrer">here</a>).</li>&#xA;</ul>&#xA;&#xA;<p>My problem is that a PDA (resp. HA) can contain cycles of $\varepsilon$-transitions that may write symbols to the stack (resp. heap). An LBA cannot simulate arbitrary iterations of such loops. From the Chomsky hierarchy obtained with grammars, we know that</p>&#xA;&#xA;<ol>&#xA;<li>every context-free language has an $\varepsilon$-cycle-free PDA or</li>&#xA;<li>the simulating LBA can prevent iterating $\varepsilon$-cycles too often.</li>&#xA;</ol>&#xA;&#xA;<p>Intuitively, this is clear: such cycles write symbols independently of the input, therefore the stack (heap) content only holds an amount of information linear in the length of the cycle (disregarding overlapping cycles for now). Also, you don't have a way to get rid of the stuff again (if you need to) other than using another $\varepsilon$-cycle. In essence, such cycles do not contribute to dealing with the input if iterated multiple times, so they are not necessary.</p>&#xA;&#xA;<p>How can this argument be put rigorously/formally, especially considering overlapping $\varepsilon$-cycles?</p>&#xA;
automata pushdown automata
1
2,339
per-record timeline consistency vs. monotonic writes
<p>It seems to me that the <em>per-record timeline consistency</em> as defined by Cooper et al. in "PNUTS: Yahoo!’s Hosted Data Serving Platform" mimics the (older?) definition of <em>monotonic writes</em>. From the paper:</p>&#xA;&#xA;<blockquote>&#xA; <p>per-record timeline consistency: all replicas of a given record apply&#xA; all updates to the record in the same order.</p>&#xA;</blockquote>&#xA;&#xA;<p>This is quite similar to <a href="http://regal.csep.umflint.edu/~swturner/Classes/csc577/Online/Chapter06/img26.html" rel="nofollow">a definition for monotonic writes</a>:</p>&#xA;&#xA;<blockquote>&#xA; <p>A write operation by a process on data item x is completed before any&#xA; successive write operation on x by the same process.</p>&#xA;</blockquote>&#xA;&#xA;<p>Can I conclude that those things are the same, or is there a difference that I misunderstand? Note that the link above also mentions possible copies of data item <code>x</code>, so monotonic write includes replicas.</p>&#xA;
terminology distributed systems
1
2,341
Can exactly one of NP and co-NP be equal to P?
<p>Maybe I am missing something obvious, but can it be that P = co-NP $\subsetneq$ NP or vice versa? My feeling is that there must be some theorem that rules out this possibility.</p>&#xA;
complexity theory p vs np
1
2,344
Mixed-strategy Nash equilibria
<p>Is the following statement always true:</p>&#xA;&#xA;<blockquote>&#xA; <p>if there is a mixed-strategy Nash equilibrium, then it is unique.</p>&#xA;</blockquote>&#xA;&#xA;<p>I know that there can be several pure-strategy Nash equilibria.</p>&#xA;
game theory
0
2,345
Uni-directional synchronization and locking issues
<p>Suppose there are two databases, $D_1$ and $D_2$. Let's further assume $D_1$ is always up and $D_2$ can be down sometimes. When it goes up again, it has to restart.</p>&#xA;&#xA;<p>$D_1$ is filled by say a dozen other systems with event messages. Those messages have IDs and might be updated or deleted. $D_2$ needs to be in sync with $D_1$, which is realized by:</p>&#xA;&#xA;<ul>&#xA;<li>on restart, pull all data from $D_1$. During this pull $D_1$ is locked, thus all senders to $D_1$ need to wait.</li>&#xA;<li>otherwise $D_1$ always informs $D_2$ of updates by sending all updates. (During fetching, the data is locked of course.)</li>&#xA;</ul>&#xA;&#xA;<p>Now the question: what kind of blocking behaviour can we expect from $D_1$ and $D_2$?</p>&#xA;&#xA;<p>In particular I find the following corner case interesting and instructive:</p>&#xA;&#xA;<p>$D_1$ currently has a long list of events and the sender systems send a lot of new events/updates. $D_2$ just went down, goes up now and needs to fetch events from $D_1$, thus blocking the whole chain.</p>&#xA;
distributed systems database theory
0
2,347
Reduction to equipartition problem from the partition problem?
<p>Equipartition Problem:</p>&#xA;&#xA;<p>Instance: $2n$ positive integers $x_1,\dots,x_{2n}$ such that their sum is even. Let $B$ denote half their sum, so that $\sum x_{i} = 2B$.</p>&#xA;&#xA;<p>Query: Is there a subset $I \subseteq [2n]$ of size $|I| = n$ such that $\sum_{i \in I} x_{i} = B$?</p>&#xA;&#xA;<p>Can the <a href="http://en.wikipedia.org/wiki/Partition_problem" rel="nofollow">partition problem</a> - the same as the above, but without the restriction on $|I|$ - be reduced to the above problem?</p>&#xA;
complexity theory reductions np hard
0
2,348
Can you specify a programming language without implementation?
<p>Is it theoretically possible to specify a programming language for which no implementation could exist? A programming language is a way of defining functions. An implementation means a method to execute a given program in that language on a given input, producing the output of the function corresponding to the program on that input.</p>&#xA;&#xA;<p>What are the minimal requirements for such a language?</p>&#xA;
formal languages computability programming languages
0
2,351
Speech vs Music classification
<p>I want to determine which parts of an audio file contain speech and which contain music.</p>&#xA;&#xA;<p>I hope someone has made something like this or can tell me where to start. Can you please suggest a method or tutorial for doing this?</p>&#xA;
reference request pattern recognition
0
2,353
Is computational power of Neural networks related to the activation function
<p>It is proven that neural networks with rational weights have the computational power of a universal Turing machine (<a href="http://www.math.rutgers.edu/~sontag/FTP_DIR/aml-turing.pdf" rel="noreferrer">Turing computability with Neural Nets</a>). From what I gather, it seems that using real-valued weights yields even more computational power, though I'm not certain of this.</p>&#xA;&#xA;<p>However, is there any correlation between the computational power of a neural net and its activation function? For example, if the activation function compares the input against the limit of a Specker sequence (something you can't do with a regular Turing machine, right?), does this make the neural net computationally "stronger"? Could someone point me to a reference in this direction?</p>&#xA;
computability neural networks
0
2,357
Length of mid part of the string in Pumping Lemma
<p>This is the statement of the pumping lemma from Wikipedia.</p>&#xA;&#xA;<blockquote>&#xA; <p>Let <span class="math-container">$L$</span> be a regular language. Then there exists an integer <span class="math-container">$p \ge 1$</span> (depending only on <span class="math-container">$L$</span>) such that every string <span class="math-container">$w$</span> in <span class="math-container">$L$</span> of length at least <span class="math-container">$p$</span> (<span class="math-container">$p$</span> is called the "pumping length") can be written as <span class="math-container">$w = x y z$</span> (i.e., <span class="math-container">$w$</span> can be divided into three substrings), satisfying the following conditions: </p>&#xA; &#xA; <ol>&#xA; <li><span class="math-container">$\lvert y \rvert \ge 1$</span> </li>&#xA; <li><span class="math-container">$\lvert x y \rvert \le p$</span> and </li>&#xA; <li>for all <span class="math-container">$i \ge 0$</span>, <span class="math-container">$x y^i z \in L$</span>.<br>&#xA; <span class="math-container">$y$</span> is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in <span class="math-container">$L$</span>). </li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>What confuses me about the definition of the pumping lemma are two requirements: <span class="math-container">$\lvert y \rvert \ge 1$</span> and <span class="math-container">$x y^i z \in L$</span> for all <span class="math-container">$i \ge 0$</span>. The way I read it, we are required to have <span class="math-container">$y$</span> of length one or greater, yet at the same time we can completely skip it, since <span class="math-container">$i = 0$</span> is allowed, i.e. effectively <span class="math-container">$\lvert y \rvert = 0$</span>.&#xA;Intuitively, it makes sense that we should be able to skip <span class="math-container">$y$</span> and still have the string be in <span class="math-container">$L$</span>.</p>&#xA;
formal languages regular languages pumping lemma
0
2,362
Problems in P with provably faster randomized algorithms
<p>Are there any problems in $\mathsf{P}$ that have randomized algorithms beating lower bounds on deterministic algorithms? More concretely, do we know any $k$ for which $\mathsf{DTIME}(n^k) \subsetneq \mathsf{PTIME}(n^k)$? Here $\mathsf{PTIME}(f(n))$ means the set of languages decidable by a randomized TM with constant-bounded (one or two-sided) error in $f(n)$ steps. </p>&#xA;&#xA;<blockquote>&#xA; <p>Does randomness buy us anything inside $\mathsf{P}$?</p>&#xA;</blockquote>&#xA;&#xA;<p>To be clear, I am looking for something where the difference is asymptotic (preferably polynomial, but I would settle for polylogarithmic), not just a constant.</p>&#xA;&#xA;<p><em>I am looking for algorithms asymptotically better in the worst case. Algorithms with better expected complexity are not what I am looking for. I mean randomized algorithms as in RP or BPP not ZPP.</em></p>&#xA;
algorithms complexity theory randomized algorithms
0
2,370
How to implement deep copy?
<p>Is there any scientific work about deep copying? So far I have only found source code (Java, Python, ...). However, there are various approaches, and nobody seems to evaluate them:</p>&#xA;&#xA;<ul>&#xA;<li>Reflection-based (Python)</li>&#xA;<li>Auto-generated shallow-copy-based (Java)</li>&#xA;<li>Compiler-generated polymorphic deep copy (anybody?)</li>&#xA;</ul>&#xA;&#xA;<p>The last one seems to be the most efficient, but I do not know if it is implemented anywhere.</p>&#xA;
programming languages compilers
0
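For concreteness, the "reflection-based" flavour can be sketched in a few lines: recurse over the object graph discovered at runtime, with a memo table keyed by object identity so shared and cyclic structure is preserved. This is roughly what Python's copy.deepcopy does; the sketch below is a simplified illustration, not an evaluated implementation:

```python
# Hedged sketch of reflection-based deep copy: walk the object graph,
# memoizing by id() so cycles and sharing are preserved. Simplified;
# Python's copy.deepcopy handles far more cases.

def deep_copy(obj, memo=None):
    memo = {} if memo is None else memo
    if id(obj) in memo:
        return memo[id(obj)]
    if isinstance(obj, (int, float, str, bool, type(None))):
        return obj                      # immutables: safe to share
    if isinstance(obj, list):
        clone = []
        memo[id(obj)] = clone           # register BEFORE recursing (cycles!)
        clone.extend(deep_copy(x, memo) for x in obj)
        return clone
    if isinstance(obj, dict):
        clone = {}
        memo[id(obj)] = clone
        for k, v in obj.items():
            clone[deep_copy(k, memo)] = deep_copy(v, memo)
        return clone
    # generic object: copy its attribute dict via reflection
    clone = object.__new__(type(obj))
    memo[id(obj)] = clone
    for k, v in vars(obj).items():
        setattr(clone, k, deep_copy(v, memo))
    return clone

a = [1, {"x": 2}]
a.append(a)                             # make it cyclic
b = deep_copy(a)
assert b is not a and b[1] is not a[1] and b[2] is b
```

Registering the clone in the memo before recursing is the design point that makes cycles terminate; an evaluation of the approaches listed above would presumably compare exactly such bookkeeping costs.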
2,374
How to describe algorithms, prove and analyse them?
<p>Before reading <em>The Art of Computer Programming (TAOCP)</em>, I had not considered these questions deeply. I would use pseudocode to describe algorithms, understand them, and estimate the running time only in terms of orders of growth. <em>TAOCP</em> thoroughly changed my mind.</p>&#xA;&#xA;<p><em>TAOCP</em> uses English mixed with steps and <em>goto</em> to describe algorithms, and uses flow charts to picture them more readily. It seems low-level, but I find that there are some advantages, especially with the flow charts, which I had largely ignored. We can label each of the arrows with an assertion about the current state of affairs at the time the computation traverses that arrow, and make an inductive proof for the algorithm. The author says:</p>&#xA;&#xA;<blockquote>&#xA; <p>It is the contention of the author that we really understand why an algorithm is valid only when we reach the point that our minds have implicitly filled in all the assertions, as was done in Fig.4.</p>&#xA;</blockquote>&#xA;&#xA;<p>I had never experienced such an approach. Another advantage is that we can count the number of times each step is executed; this is easy to check with Kirchhoff's first law. I have not analysed the running time exactly, so some $\pm1$ might have been omitted when I was estimating the running time.</p>&#xA;&#xA;<p>Analysis of orders of growth is sometimes useless. For example, we cannot distinguish quicksort from heapsort because both have $E(T(n))=\Theta(n\log n)$, where $EX$ is the expected value of the random variable $X$, so we should analyse the constants, say, $E(T_1(n))=A_1n\lg n+B_1n+O(\log n)$ and $E(T_2(n))=A_2n\lg n+B_2n+O(\log n)$, so that we can compare $T_1$ and $T_2$ better. Also, sometimes we should compare other quantities, such as variances. A rough analysis of the order of growth of the running time alone is not enough. Since <em>TAOCP</em> translates the algorithms into assembly language and calculates the running time there, which is too hard for me, I want to know some techniques for analysing the running time a bit more roughly, which would also be useful for higher-level languages such as C, C++ or pseudocode.</p>&#xA;&#xA;<p>I also want to know what style of description is mainly used in research work, and how to treat these problems.</p>&#xA;
algorithms proof techniques runtime analysis
1
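The "count each step and check with Kirchhoff's law" idea carries over to high-level languages: instrument the interesting branch and compare the counter against a closed form. A hedged sketch (mine, not MIX) for insertion sort, whose worst-case comparison count is exactly $n(n-1)/2$:

```python
# Hedged illustration of TAOCP-style exact step counting in Python:
# count comparisons in insertion sort and check the count against the
# closed form for the worst case (reversed input), not just O(n^2).

def insertion_sort(xs):
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0:
            comparisons += 1            # one execution of the comparison step
            if xs[j] <= key:
                break
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs, comparisons

n = 20
sorted_xs, c = insertion_sort(range(n, 0, -1))   # worst case: reversed
assert sorted_xs == list(range(1, n + 1))
assert c == n * (n - 1) // 2            # exact count: 190 for n = 20
print(c)  # 190
```

The same instrumentation on random inputs, averaged over many trials, recovers the leading constant of the expected count, which is the kind of "constant-level" comparison the question asks for.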
2,379
Is switching quantifiers allowed in this instance?
<p>In Logic In Computer Science (2nd Edition - Michael Huth and Mark Ryan), exercise 2.4.12.k is the following:</p>&#xA;&#xA;<blockquote>&#xA; <p>For each of the formulas of predicate logic below, either find a model which&#xA; does not satisfy it, or prove it is valid. </p>&#xA;</blockquote>&#xA;&#xA;<p>This one is difficult: </p>&#xA;&#xA;<blockquote>&#xA; <p>(∀x ∃y (P(x) → Q(y))) → (∃y ∀x (P(x) → Q(y)))</p>&#xA;</blockquote>&#xA;&#xA;<p>Not only am I not sure how to prove this, I'm not sure whether it is valid or not. I came across this question when revising for an exam and always hit a dead end with it.</p>&#xA;&#xA;<p>Any suggestions or insight is appreciated.</p>&#xA;
logic logical validity
0
2,382
Methods to evaluate a system of written rules
<p>I was trying to come up with a system that would evaluate bylaws for an organization so as to determine their underlying logic.</p>&#xA;&#xA;<p>I think a first-order predicate system would work for representing the rules, which could be translated from the text via part-of-speech tagging and other NLP techniques. </p>&#xA;&#xA;<p>Is there a systematic way to interpret the first-order logic rules as a whole, or some type of ML architecture that would work as a second layer to find similarities between the elements?</p>&#xA;&#xA;<p>For example,</p>&#xA;&#xA;<blockquote>&#xA; <p>List of fun activities:</p>&#xA; &#xA; <ul>&#xA; <li>golf</li>&#xA; <li>coffee break</li>&#xA; <li>pizza</li>&#xA; </ul>&#xA; &#xA; <p>Bylaws:</p>&#xA; &#xA; <ol>&#xA; <li><p>On Friday, we play golf</p></li>&#xA; <li><p>On Friday or Saturday, we take a quick coffee break, and if it's Saturday, we get pizza</p></li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>Conclusion: our group has fun on weekends</p>&#xA;&#xA;<p>It sounds far-fetched, but I'm curious if it's possible. I also realize that perhaps more first-order logic would be a better fit for driving the conclusions of the second layer. </p>&#xA;
machine learning algorithms pattern recognition logic
1
2,383
Has anyone found polynomial algorithm on Hamiltonian cycle isomorphism?
<p>As the title says, has anyone found a polynomial time algorithm for checking whether two graphs having a Hamiltonian cycle are isomorphic? Is this problem NP-complete?</p>&#xA;
complexity theory graphs time complexity
0
2,385
What is the minimal number of operations for intractability?
<p>If we have an algorithm that needs to run $n=2$ operations and then halt, I think we could say the problem it solves is tractable; but if $n=10^{120}$, although it is theoretically solvable, it seems intractable. And what about a problem that needs $n=10^{1000}$, or $n=10^{10^{1000}}$, operations? That seems an intractable problem for sure.</p>&#xA;&#xA;<p>Then it appears there is a $k$ such that problems requiring $n\ge k$ operations are intractable, and those with $n\lt k$ are tractable.</p>&#xA;&#xA;<p>I doubt that such a $k$ exists. Where is the limit? Can a technological advance turn some intractable problems <strong>for a given n</strong> into tractable ones? </p>&#xA;&#xA;<p>I would like to read your opinion.</p>&#xA;&#xA;<p><strong>EDIT</strong></p>&#xA;&#xA;<p>I think this question is similar to asking whether the Church–Turing thesis is correct, because if the difference between solving a computable problem on a Turing machine and on any other Turing-complete machine is "only a constant" in the number of operations, then asking about computability is the same as asking about effective calculability. Now I see that tractable means polynomial time, and intractable means having no polynomial-time solution. But the difference between two machines, for the same (even tractable) problem, is a matter of the Church–Turing thesis. </p>&#xA;
complexity theory church turing thesis
1
2,393
How to feel intuitively that a language is regular
<p>Given a language $ L= \{a^n b^n c^n\}$, how can I tell directly, without looking at production rules, that this language is not regular?</p>&#xA;&#xA;<p>I could use the pumping lemma, but some people say that just by looking at the language they can tell it is not regular. How is that possible?</p>&#xA;
formal languages regular languages pumping lemma intuition
1
2,394
Algorithm to check the 2∀-connectness property of a graph
<p>A graph is 2∀-connected if it remains connected even if any single edge is removed. Let G = (V, E) be a connected undirected graph. Develop an algorithm, as fast as possible, to check the 2∀-connectedness of G.</p>&#xA;&#xA;<p>I know the basic idea is to build a DFS search tree and then use the DFS to check whether each edge lies on a cycle. Any help would be appreciated.</p>&#xA;&#xA;<p>What I expect to see is a detailed algorithm description (especially the initialization of the needed variables, which is sometimes obscure); the complexity analysis can be omitted.</p>&#xA;
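For reference, the DFS idea the question gestures at is the standard bridge test: a connected graph stays connected after removing any single edge exactly when its DFS tree has no bridges, and an edge (u, v) of the tree is a bridge when the subtree below v cannot reach u or above except through that edge. A rough Python sketch (the adjacency-list representation and the iterative DFS are my own choices, not part of the question):

```python
def is_two_edge_connected(n, edges):
    """Check that an undirected graph on vertices 0..n-1 stays connected
    after removing any single edge, i.e. it is connected and bridgeless."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    disc = [-1] * n   # DFS discovery times, -1 = unvisited
    low = [0] * n     # low-link: earliest discovery time reachable
    timer = 0

    # iterative DFS from vertex 0; each frame is (vertex, parent edge, iterator)
    stack = [(0, -1, iter(adj[0]))]
    disc[0] = low[0] = timer
    timer += 1
    bridge_found = False
    while stack:
        u, parent_edge, it = stack[-1]
        advanced = False
        for v, eidx in it:
            if eidx == parent_edge:      # skip only the edge we came by
                continue
            if disc[v] == -1:            # tree edge: descend
                disc[v] = low[v] = timer
                timer += 1
                stack.append((v, eidx, iter(adj[v])))
                advanced = True
                break
            low[u] = min(low[u], disc[v])  # back edge
        if not advanced:
            stack.pop()
            if stack:
                p = stack[-1][0]
                low[p] = min(low[p], low[u])
                if low[u] > disc[p]:     # edge p-u is a bridge
                    bridge_found = True
    connected = all(d != -1 for d in disc)
    return connected and not bridge_found
```

The whole check runs in O(|V| + |E|), matching the usual linear-time answer to this exercise.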
algorithms graphs efficiency
1
2,400
Brute force Delaunay triangulation algorithm complexity
<p>In the book <a href="http://www.cs.uu.nl/geobook/" rel="noreferrer">&quot;Computational Geometry: Algorithms and Applications&quot;</a> by Mark de Berg et al., there is a very simple brute force algorithm for computing Delaunay triangulations. The algorithm uses the notion of <em>illegal edges</em> -- edges that may not appear in a valid Delaunay triangulation and have to be replaced by some other edges. On each step, the algorithm just finds these illegal edges and performs required displacements (called <em>edge flips</em>) till there are no illegal edges.</p>&#xA;<blockquote>&#xA;<p>Algorithm <strong>LegalTriangulation</strong>(<span class="math-container">$T$</span>)</p>&#xA;<p><em>Input</em>. Some triangulation <span class="math-container">$T$</span> of a point set <span class="math-container">$P$</span>.<br />&#xA;<em>Output</em>. A legal triangulation of <span class="math-container">$P$</span>.</p>&#xA;<p><strong>while</strong> <span class="math-container">$T$</span> contains an illegal edge <span class="math-container">$p_ip_j$</span><br />&#xA;<strong>do</strong><br />&#xA;<span class="math-container">$\quad$</span> Let <span class="math-container">$p_i p_j p_k$</span> and <span class="math-container">$p_i p_j p_l$</span> be the two triangles adjacent to <span class="math-container">$p_ip_j$</span>.<br />&#xA;<span class="math-container">$\quad$</span> Remove <span class="math-container">$p_ip_j$</span> from <span class="math-container">$T$</span>, and add <span class="math-container">$p_kp_l$</span> instead.<br/>&#xA;<strong>return</strong> <span class="math-container">$T$</span>.</p>&#xA;</blockquote>&#xA;<p>I've heard that this algorithm runs in <span class="math-container">$O(n^2)$</span> time in worst case; however, it is not clear to me whether this statement is correct or not. If yes, how can one prove this upper bound?</p>&#xA;
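The illegality test that drives the loop is usually implemented with the incircle predicate: for an edge $p_ip_j$ with opposite vertices $p_k$ and $p_l$, the edge is illegal when $p_l$ lies strictly inside the circumcircle of $p_i p_j p_k$. Here is a sketch of that predicate in the standard 3x3 determinant form (my own illustration, not code from the book; it assumes the triangle is given in counter-clockwise order):

```python
def in_circle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle of
    triangle (a, b, c), where a, b, c are in counter-clockwise order.
    Points are (x, y) pairs; this is the classic incircle determinant."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0
```

For example, the circumcircle of (0,0), (1,0), (0,1) has center (0.5, 0.5) and radius about 0.707, so (0.9, 0.9) is inside it and (2, 2) is not.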
algorithms time complexity algorithm analysis computational geometry runtime analysis
0
2,404
Online Learning Resources for Discrete Mathematics
<p>Are there any good Discrete mathematics learning web resources with problem sets?</p>&#xA;
reference request education discrete mathematics
1
2,406
Must Neural Networks always converge?
<h2>Introduction</h2>&#xA;&#xA;<p><strong>Step One</strong></p>&#xA;&#xA;<p>I wrote a standard backpropagating neural network, and to test it, I decided to have it map XOR.</p>&#xA;&#xA;<p>It is a 2-2-1 network (with tanh activation function)</p>&#xA;&#xA;<pre><code>X1 M1&#xA; O1&#xA;X2 M2&#xA;&#xA;B1 B2&#xA;</code></pre>&#xA;&#xA;<p>For testing purposes, I manually set up the top middle neuron (M1) to be an AND gate and the lower neuron (M2) to be an OR gate (both output 1 if true and -1 if false).</p>&#xA;&#xA;<p>Now, I also manually set up the connection M1-O1 to be -0.5, M2-O1 to be 1, and &#xA;B2 to be -0.75</p>&#xA;&#xA;<p>So if M1 = 1 and M2 = 1, the sum is (-0.5 + 1 - 0.75 = -0.25) and tanh(-0.25) ≈ -0.24</p>&#xA;&#xA;<p>if M1 = -1 and M2 = 1, the sum is ((-0.5)*(-1) + 1 - 0.75 = 0.75) and tanh(0.75) ≈ 0.64</p>&#xA;&#xA;<p>if M1 = -1 and M2 = -1, the sum is ((-0.5)*(-1) - 1 - 0.75 = -1.25) and tanh(-1.25) ≈ -0.85</p>&#xA;&#xA;<p>This is a relatively good result for a "first iteration".</p>&#xA;&#xA;<p><strong>Step Two</strong></p>&#xA;&#xA;<p>I then proceeded to modify these weights a bit, and then train them using the error backpropagation algorithm (based on gradient descent). In this stage, I leave the weights between the input and middle neurons intact, and just modify the weights between the middle (and bias) and output. </p>&#xA;&#xA;<p>For testing, I set the weights to 0.5, 0.4 and 0.3 (respectively for M1, M2 and bias)</p>&#xA;&#xA;<p>Here, however, I start having issues.</p>&#xA;&#xA;<hr>&#xA;&#xA;<h2>My Question</h2>&#xA;&#xA;<p>I set my learning rate to 0.2 and let the program iterate through the training data (A B A^B) for 10000 iterations or more.</p>&#xA;&#xA;<p><em>Most</em> of the time, the weights converge to a good result. 
However, at times, those weights converge to (say) 1.5, 5.7, and 0.9, which results in a +1 output even for an input of {1, 1} (when the result should be -1).</p>&#xA;&#xA;<p>Is it possible for a relatively simple ANN that has a solution not to converge at all, or is there a bug in my implementation?</p>&#xA;
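As a sanity check of Step One, the hand-set network really does compute XOR when only the signs of the outputs are read. A small sketch I added (the weights are the ones stated above; the gate encodings follow the question, with outputs in {+1, -1}):

```python
import math

def xor_net(x1, x2):
    """2-2-1 network with hand-set weights: M1 acts as AND, M2 as OR
    (both output +1 for true, -1 for false); output neuron uses
    M1->O1 = -0.5, M2->O1 = 1.0, bias B2 = -0.75, tanh activation."""
    m1 = 1 if (x1 == 1 and x2 == 1) else -1   # AND gate
    m2 = 1 if (x1 == 1 or x2 == 1) else -1    # OR gate
    return math.tanh(-0.5 * m1 + 1.0 * m2 - 0.75)

# Evaluate the four input combinations; the sign of each output
# matches XOR (positive for exactly one true input, negative otherwise).
results = {(x1, x2): xor_net(x1, x2) for x1 in (1, -1) for x2 in (1, -1)}
```

So the hand-set weights are a valid solution, and the training failures described below are about the dynamics of gradient descent (local minima and flat regions of tanh), not about the absence of a solution.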
machine learning neural networks
1
2,407
Time complexity of a triple nested loop with squared indices
<p>I have seen this function in a past exam paper.</p>&#xA;&#xA;<pre><code>public static void run(int n){&#xA; for(int i = 1 ; i * i &lt; n ; i++){&#xA; for(int j = i ; j * j &lt; n ; j++){&#xA; for(int k = j ; k * k &lt; n ; k++){&#xA;&#xA; }&#xA; }&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>After trying some examples, I guess it is a function whose time complexity is given by the following formula:</p>&#xA;&#xA;<p><strong><em>let m = n^(1/2)</em></strong></p>&#xA;&#xA;<p><strong><em>[m+(m-1)+(m-2)+...+3+2+1] + [(m-1)+(m-2)+...+3+2+1] + ...... + (3+2+1) + (2+1) + 1</em></strong></p>&#xA;&#xA;<p><em>Edit:</em> I have asked this math question <a href="https://math.stackexchange.com/a/159142/33103">here</a>; the answer is <strong>m(m+1)(m+2)/6</strong></p>&#xA;&#xA;<p>Is this correct? If not, what is wrong; if yes, how would you translate it to big-O notation?&#xA;The question that I want to ask is not <strong>only</strong> about this specific example, but also about how you would evaluate an algorithm in general. So far I can only try some examples and watch for the pattern that appears, but some algorithms are not that easy to evaluate; what is your way of evaluating, using this example?</p>&#xA;&#xA;<p><strong>Edit:&#xA;@LuchianGrigore&#xA;@AleksG</strong></p>&#xA;&#xA;<pre><code>public static void run(int n){&#xA; for(int i = 1 ; i * i &lt; n ; i++){&#xA; for(int j = 1 ; j * j &lt; n ; j++){&#xA; for(int k = 1 ; k * k &lt; n ; k++){&#xA;&#xA; }&#xA; }&#xA; }&#xA; }&#xA;</code></pre>&#xA;&#xA;<p>This is an example from my lecture notes: each loop has time complexity <strong>n</strong> to the power of <strong>1/2</strong>, and with each loop containing another n^(1/2) loop, the total is n^(1/2) * n^(1/2) * n^(1/2) = n^(3/2).&#xA;Is the first example the same? It is less than the second example, right?</p>&#xA;&#xA;<p><strong>Edit,Add:</strong></p>&#xA;&#xA;<p>How about this one? 
Is it <strong>log(n)*n^(1/2)*log(n^2)</strong></p>&#xA;&#xA;<pre><code>for (int i = 1; i &lt; n; i *= 2)&#xA; for (int j = i; j * j &lt; n; j++)&#xA; for (int m = j; j &lt; n * n; j *= 2)&#xA;</code></pre>&#xA;
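For the first (dependent-index) example, the closed form m(m+1)(m+2)/6 from the linked math answer can be checked empirically: with m the largest integer satisfying m*m < n, the loops enumerate exactly the triples 1 <= i <= j <= k <= m, and there are C(m+2, 3) = m(m+1)(m+2)/6 of those. Since that is roughly n^(3/2)/6, the first example is the same order Θ(n^(3/2)) as the second independent-index version, just with a smaller constant. A small counting sketch (my own check, not from the exam paper):

```python
import math

def count_iterations(n):
    """Count how often the innermost body of the dependent-index
    triple loop executes (direct translation of the Java code)."""
    count = 0
    i = 1
    while i * i < n:
        j = i
        while j * j < n:
            k = j
            while k * k < n:
                count += 1
                k += 1
            j += 1
        i += 1
    return count

def closed_form(n):
    """m(m+1)(m+2)/6 with m the largest integer such that m*m < n (n >= 2)."""
    m = math.isqrt(n - 1)
    return m * (m + 1) * (m + 2) // 6
```

For n = 10 we get m = 3 and a count of 3·4·5/6 = 10, and the two functions agree for every n.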
algorithm analysis runtime analysis loops
1
2,411
Attempt to write a function with cubed log runtime complexity $O(\log^3 n)$
<p>I'm learning data structures and algorithms now, and I have a practical question that asks me to write a function with running time O(log<sup>3</sup>n), which means log(n)*log(n)*log(n).</p>&#xA;&#xA;<pre><code>public void run(int n) {&#xA; for (int i = 1; i &lt; n; i *= 2) {&#xA; for (int j = 1; j &lt; n; j *= 2) {&#xA; for (int k = 1; k &lt; n; k *= 2) {&#xA; System.out.println("hi");&#xA; }&#xA; }&#xA; }&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>I have come up with this solution, but I am not sure it is correct. Please help me out.</p>&#xA;
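One way to convince yourself the solution is correct is to count the iterations: each doubling loop runs ⌈log₂ n⌉ times for n > 1, so the triple nesting executes the body ⌈log₂ n⌉³ times, which is Θ(log³ n). A small counting sketch (my own check, not part of the exercise):

```python
def count_runs(n):
    """Count the innermost body executions of the triple doubling loop
    (a direct translation of the Java code, minus the println)."""
    count = 0
    i = 1
    while i < n:
        j = 1
        while j < n:
            k = 1
            while k < n:
                count += 1
                k *= 2
            j *= 2
        i *= 2
    return count
```

For n = 8 each loop runs 3 times (i takes the values 1, 2, 4), giving 27 = 3³ executions; for n = 9 each runs 4 times, giving 64.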
time complexity
1
2,415
Shortest distance between a point in A and a point in B
<blockquote>&#xA; <p>Given two sets $A$ and $B$ each containing $n$ disjoint points&#xA; in the plane, compute the shortest distance between a point in $A$ and a point in $B$, i.e., $\min \space \{\mbox{ } \text{dist}(p, q) \mbox{ } | \mbox{ } p \in A \land q \in B \space \} $.</p>&#xA;</blockquote>&#xA;&#xA;<p>I am not sure if I am right, but this problem is very similar to problems that can be solved by linear programming in computational geometry. However, the reduction to LP is not straightforward. Also, my problem looks related to finding the thinnest strip between two sets of points, which can be solved by LP in $O(n)$ time in 2-dimensional space.</p>&#xA;
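For comparison, the trivial baseline that any geometric method has to beat is the brute-force scan over all pairs (a sketch of my own, assuming Euclidean distance in the plane):

```python
import math

def min_dist(A, B):
    """O(|A| * |B|) baseline: smallest Euclidean distance between a
    point of A and a point of B, with points given as (x, y) tuples."""
    return min(math.dist(p, q) for p in A for q in B)
```

The geometric approaches (e.g. a sweep or Voronoi-based method, as in the usual bichromatic closest pair literature) aim to bring this down to O(n log n).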
algorithms computational geometry
1
2,417
Are tamper attacks considered side-channel?
<p><a href="https://en.wikipedia.org/wiki/Side_channel_attack" rel="nofollow">Wikipedia</a> defines <em>side-channel attacks</em> as:</p>&#xA;&#xA;<blockquote>&#xA; <p>any attack based on information gained from the physical implementation of a cryptosystem</p>&#xA;</blockquote>&#xA;&#xA;<p>Usually in side channel attacks the implementations leak information (e.g., timing attack: the implementation leaks the time it takes to complete a task, etc.)</p>&#xA;&#xA;<p><strong>Are tampering-attacks also considered as side-channel attacks?</strong></p>&#xA;&#xA;<p>On one hand, tampering-attacks are (usually) attacks on the implementation itself.&#xA;On the other hand, the attack might be such that information only enters the device, and no information comes out of the device, so there is no "side-channel" that leaks the information.&#xA;(example: If we heat some access-control device, until it grants us the access. Or if we perform SQL injection that causes the device to grant the access (but leaks no secret other than that))</p>&#xA;
terminology cryptography
0
2,422
How to prove 2-EXP != EXP
<p>I am guessing that the analogous statement holds for <code>3-EXP</code>, <code>4-EXP</code>, etc.</p>&#xA;&#xA;<p>Basically, I should find a problem in <code>2-EXP</code> that is not in <code>EXP</code>.&#xA;Any examples?</p>&#xA;
complexity theory
1
2,425
Semi-decidable problems with linear bound
<p>Take a semi-decidable problem and an algorithm that finds the positive answer in finite time. The run-time of the algorithm, restricted to inputs with a positive answer, cannot be bounded by a computable function. (Otherwise we’d know how long to wait for a positive answer: if the algorithm runs longer than that, we know that the answer is no, and the problem would be decidable.)</p>&#xA;&#xA;<p>My question now is: can such an algorithm still have, say, a run-time bound linear (polynomial, constant, ...) in the input size, but with an uncomputable constant? Or would that still allow me to decide the problem? Are there examples?</p>&#xA;
computability time complexity undecidability
1
2,433
A concrete example about string w and string x used in the proof of Rice's Theorem
<p>So, in lectures about Rice's Theorem, a reduction is usually used to prove the theorem. The reduction usually consists of a construction of $M'$, using a TM $M$ in the form $\langle M,w \rangle$ to be simulated first, and an input $x$ to be simulated if $M$ accepts. $M'$ accepts if $x$ is accepted. </p>&#xA;&#xA;<p>I really want a concrete example of $\langle M,w \rangle$ and $x$. For example:</p>&#xA;&#xA;<blockquote>&#xA; <p>$L = \{ \langle M\rangle \mid L(M) = \{\text{ stackoverflow }\}\}$, that is, $L$ contains all Turing machines whose languages contain one string: "stackoverflow". $L$ is undecidable.</p>&#xA;</blockquote>&#xA;&#xA;<p>What kind of $\langle M,w \rangle$ is to be simulated? </p>&#xA;&#xA;<p>Suppose we have input x = "stackoverflow" or x = "this is stackoverflow" or any x with "stackoverflow" in it.</p>&#xA;&#xA;<p>What if we first simulate a TM $M$ selected from among all possible TMs, and this TM accepts only the single character $a$ as its language. So, we simulate this $\langle M,w \rangle$ with $w = a$, and surely it will be accepted. And then input $x$ is also accepted according to the definition of $L$. </p>&#xA;&#xA;<p>So, do we conclude that the $\langle M,w \rangle$ whose language is a single $a$ is reducible to the $L$ containing all TMs that accept "stackoverflow"?</p>&#xA;&#xA;<p><strong>Edit:</strong> I've just looked up a brief definition of reduction. A reduction is a transformation from an unknown but easier problem to a harder but already known problem. If the harder problem is solvable, so is the easier one. Otherwise, it's not. </p>&#xA;&#xA;<p>Given that definition, I think the correct TM $M$ with its description $\langle M,w \rangle$ in my example should be a TM that accepts a regular language. This is the harder problem. If this is solvable, then my trivial $L$ with one string is solvable. But apparently, according to the proof, it is not. 
We can effectively say we reduced from the one-string-language problem to the regular-language problem and try to solve that. Previously, I thought it was the other way around: $\langle M,w \rangle$ is reduced to the one-string problem. </p>&#xA;&#xA;<p>Is my thinking correct? </p>&#xA;
computability
1
2,437
What is the type theory judgement symbol?
<p>In type theory judgements are often presented with the following syntax:</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/7V5r2.png" alt="enter image description here"></p>&#xA;&#xA;<p>My question is what is that symbol in the middle called? All the papers I've found seem to use an image rather than a unicode character so I can't look it up. I've also not found any type-theory reference which says what that symbol is (they explain what it means however).</p>&#xA;&#xA;<p>So what character is that symbol and what is its proper name?</p>&#xA;
logic terminology type theory
1
2,442
Does a collision oracle for the pigeonhole subset sum problem produce solutions?
<p>I am reading "Efficient Cryptographic Schemes Provably as Secure as Subset Sum" by R. Impagliazzo and M. Naor (<a href="http://www.stevens.edu/algebraic/Files/SubsetSum/impagliazzo96efficient.pdf" rel="nofollow">paper</a>) and came across the following statement in the proof of Theorem 3.1 (pages 10-11):</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $\ l(n) = (1-c)n \ $ for $ \ c &gt; 0 \ $ ...</p>&#xA; &#xA; <p>Given $a_1, a_2, \cdots, a_n \in \{0,1\}^{l(n)}$ and a target sum $T$, we construct an input to the collision finding algorithm as follows:</p>&#xA; &#xA; <ol>&#xA; <li><p>Let the collision finding algorithm select a (non-empty) $ s_1 \in \{0,1\}^n $</p></li>&#xA; <li><p>compute $T' = \sum_{i \in s_1} a_i$. Choose a random $j$ such that $j \in s_1$ and define $a_j' = a_j - T' + T$.</p></li>&#xA; <li><p>Give the instance $a_1, a_2, \cdots , a_j', \cdots, a_n$ and $s_1$ to the algorithm that finds collisions. The algorithm attempts to find $s_2$ such that $f_{(a_1, a_2, \cdots, a_j', \cdots, a_n)}(s_2) = T'$.</p></li>&#xA; </ol>&#xA; &#xA; <p>If the algorithm returns $s_2$ that collides with $s_1$ and $j \notin s_2$, then <strong>$s_2$ is a solution to our original problem</strong>, since swapping $a_j$ and $a_j'$ does not affect the sum over $s_2$.</p>&#xA;</blockquote>&#xA;&#xA;<p>Where the emphasis is mine.</p>&#xA;&#xA;<p>Where $f$ concatenates $\stackrel{\rightarrow}{a}$ with the sum of the $a_i$'s:</p>&#xA;&#xA;<p>$$ f( \stackrel{\rightarrow}{ a } , S) = f_{(a_1, a_2, \cdots, a_n)}(S) = \ \stackrel{\rightarrow}{a}, \sum_{i \in S} a_i \mod 2^{l(n)} $$</p>&#xA;&#xA;<p>(taken from the top of page 3 from the same paper).</p>&#xA;&#xA;<p>For the life of me, I don't understand how $s_2$ is a solution to the original instance. Can someone elaborate on what they mean? 
What am I missing?</p>&#xA;&#xA;<p>The above definition for the subset sum problem is, if I'm not mistaken, just another form of the <a href="http://garden.irmacs.sfu.ca/?q=op/theoretical_computer_science/subset_sums_equality" rel="nofollow">pigeonhole subset sum problem</a> (i.e. $\sum_j a_j &lt; 2^n -1$ ). If I read the above right, they are claiming that, given an oracle that finds collisions, they can then construct a solution to the original (pigeonhole) subset sum problem but I do not see how this is done. Any help would be appreciated.</p>&#xA;
complexity theory computability np complete reductions
0
2,443
Finding the number of distinct permutations of length N with n different symbols
<p>I have a puzzle whose answer I have boiled down to finding the total number of permutations and which type of permutations they are.</p>&#xA;&#xA;<p>For example, if the string is of length ten, as in $w = aabbbaabba$, the total number of permutations will be </p>&#xA;&#xA;<p>$\qquad \displaystyle \frac{|w|!}{|w|_a! \cdot |w|_b!} = \frac{10!}{5!\cdot 5!}$</p>&#xA;&#xA;<p>Now had the string been of distinct characters, say $w'=abcdefghij$, I would have found the permutations by this algorithm: </p>&#xA;&#xA;<pre><code>for i = 1 to |w|&#xA; w = rotate(w)&#xA;w = rotate(w)&#xA;return w.head + rotate(w.tail)&#xA;</code></pre>&#xA;&#xA;<p>Can someone offer new ideas on this: how do I find the number of permutations of a string having repeated characters? Is there a mathematical/scientific name for what I am trying to do?</p>&#xA;
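The count being asked about is the multinomial coefficient: |w|! divided by the product of the factorials of the letter multiplicities (this is also what "permutations of a multiset" refers to). A small sketch of computing it (my own illustration):

```python
from collections import Counter
from math import factorial

def distinct_permutations(w):
    """Number of distinct arrangements of the letters of w:
    |w|! / (c1! * c2! * ...) where the ci are the letter counts."""
    total = factorial(len(w))
    for count in Counter(w).values():
        total //= factorial(count)   # each division is exact
    return total
```

For w = "aabbbaabba" (five a's, five b's) this gives 10!/(5!·5!) = 252, while a string of ten distinct characters gives the full 10! = 3628800.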
algorithms combinatorics strings word combinatorics
0
2,444
constrained cover on bipartite graphs
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/2302/restricted-version-of-vertex-cover">Restricted version of vertex cover</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>Suppose we have a bipartite graph $(A,B,E)$ and a positive integer $k$. Suppose that $k$ is smaller than $|A|$, and we want to find a $k$-element subset of $A$ that covers the most vertices of $B$. I can only come up with exponential algorithms. Is this in $P$?</p>&#xA;&#xA;<p>Also, is mixed integer programming for maximum flows in $P$? Maximum flow can easily be formulated as such.</p>&#xA;
algorithms graphs
0
2,449
Proving that the cover time for graph is exponential in the worst case
<p>How can I prove that the cover time for a directed graph $G$ can be exponential in the size of $G$?</p>&#xA;&#xA;<p>The cover time is the expected length of a random walk that visits all vertices.</p>&#xA;
algorithms graphs random walks
0
2,450
How do I construct a doubly connected edge list given a set of line segments?
<blockquote>&#xA; <p>For a given planar graph $G(V,E)$ embedded in the plane, defined by a set of line segments $E= \left \{ e_1,...,e_m \right \} $, each segment $e_i$ is represented by its endpoints $\left \{ L_i,R_i \right \}$. Construct a DCEL data structure for the planar subdivision, describe an algorithm, prove it's correctness and show the complexity.</p>&#xA;</blockquote>&#xA;&#xA;<p>According to <a href="http://en.wikipedia.org/wiki/DCEL" rel="noreferrer">this description of the DCEL data structure</a>, there are many connections between different objects (i.e. vertices, edges and faces) of the DCEL. So, a DCEL seems to be difficult to build and maintain.</p>&#xA;&#xA;<p>Do you know of any efficient algorithm which can be used to construct a DCEL data structure?</p>&#xA;
algorithms data structures computational geometry doubly connected edge list
1
2,452
Beating fair colorings with few edges
<p>I have been investigating parallel algorithms to compute certain two-dimensional dynamic programming recursions (on natural parameters); see also <a href="https://cs.stackexchange.com/questions/196/a-case-distinction-on-dynamic-programming-example-needed">here</a>. Under certain assumptions, cases one and two can actually be computed in parallel -- and very well. However, if you assume that communicating array entries from one thread to another is more expensive than a normal memory access (as might be the case on real machines), these algorithms are no longer always strongly work-efficient¹. In fact, I conjecture that in this scenario there is no generally applicable strongly work-efficient parallel algorithm for these classes of problems, even if we consider only non-pathological recursions.</p>&#xA;&#xA;<p>Towards proving this, I have made the following abstraction for the domain and parallel algorithms. Note that I assume here that such algorithms allocate computations of individual entries to processors in a deterministic way; I do not think the result changes if we allow nondeterminism/randomisation in this regard, but I have no proof.</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $G_n = (V_n, \emptyset)$ with $V_n = \{(i,j) \mid 1 \leq i,j \leq n \}$ be a family of empty $n\times n$ grid graphs. 
Let furthermore $c : \mathbb{N} \to (V_n \to \{1,\dots,p\})$ be a coloring for this family which asymptotically divides $V_n$ into equal parts, that is</p>&#xA; &#xA; <p>$\qquad \displaystyle |\{v \in V_n \mid c(n)(v) = c_i\}| \underset{n \to \infty}{\longrightarrow} \frac{n^2}{p}$</p>&#xA; &#xA; <p>for all colors $c_i \in \{1,\dots,p\}$.</p>&#xA;</blockquote>&#xA;&#xA;<p>The claim is that we can choose edges so that we create no cycles and no node has more than linearly many incoming edges, but there are quadratically many edges whose nodes have different colors²:</p>&#xA;&#xA;<blockquote>&#xA; <p>(For any such coloring, ) There is a family of sets of directed edges $E_n \subseteq V_n \times V_n$ so that</p>&#xA; &#xA; <ul>&#xA; <li>$((i,j), (i',j')) \in E_n \ \Longrightarrow i' \geq i \land j' \geq j$, that is edges do not point up or left³,</li>&#xA; <li>for all $n \in \mathbb{N}$, $(V_n,E_n)$ has no directed cycles,</li>&#xA; <li>$D_n := \max_{u \in V_n} \operatorname{indeg}(u) \in O(n)$ and</li>&#xA; <li>$C_n := |\{(u,v) \in E_n \mid c(n)(u) \neq c(n)(v) \}| \in \Omega(n^2)$.</li>&#xA; </ul>&#xA;</blockquote>&#xA;&#xA;<p>Is this (similar to) a known problem? Does it hold, and how can you (dis)prove it?</p>&#xA;&#xA;<hr>&#xA;&#xA;<h3>Example</h3>&#xA;&#xA;<p>Consider this coloring (which roughly corresponds to an algorithm I have investigated):</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/UclTl.png" alt="example coloring"><br>&#xA;<sup>[<a href="https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_2452.tikz" rel="nofollow noreferrer">source</a>]</sup></p>&#xA;&#xA;<p>For edges as implied by the <a href="https://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a> recursion, that is</p>&#xA;&#xA;<p>$\qquad \displaystyle E_n = \bigcup_{1 \leq i,j \leq n}\{(i,j)\} \times \{(i-1,j), (i-1,j-1), (i,j-1) \} \cap \{1,\dots,n\}^2$,</p>&#xA;&#xA;<p>we have $D_n = 3$ and $C_n = 8n-4$, so this is not the $E_n$ we are looking for. 
If we draw edges from every node to all others to its right, that is</p>&#xA;&#xA;<p>$\qquad \displaystyle E_n = \{ ((i,j),(i,j')) \mid j' &gt; j \}$,</p>&#xA;&#xA;<p>we get $D_n = n-1$ and $C_n \geq \frac{3}{4}n^2$, so the coloring is defeated.</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li>"Strongly work-efficient" means here that the parallel algorithm on $p \in \mathbb{N}$ cores does not take more time than $\frac{T^s}{p}$ in the limit, with $T^s$ the runtime of a (good) sequential algorithm.</li>&#xA;<li>That corresponds to a not-too-dense dependency structure of a recursion's domain which causes the parallel algorithm to communicate too many results between threads.</li>&#xA;<li>That corresponds to <a href="https://cs.stackexchange.com/questions/196/a-case-distinction-on-dynamic-programming-example-needed">case one</a>. Case two can be modelled similarly by requiring $i' &gt; i$.</li>&#xA;</ol>&#xA;
graphs colorings
0
2,453
If any 3 points are collinear
<blockquote>&#xA; <p>Given a set $S$ of points $p_1,\dots,p_n$, give the most efficient algorithm for determining if any 3 points of the set are collinear.</p>&#xA;</blockquote>&#xA;&#xA;<p>The problem is that I started with the general definition but cannot get further toward actually solving the problem.</p>&#xA;&#xA;<p>What can we say about collinear points in general? Three points $a,b,c$ are collinear if the distance $d(a,c) = d(a,b)+d(b,c)$, in the case when $b$ is between $a$ and $c$.</p>&#xA;&#xA;<p>The naive approach has $O(n(n-1)(n-2))=O(n^3)$ time complexity.</p>&#xA;&#xA;<p>How should I solve this problem; what should be the next step?</p>&#xA;
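One detail worth noting for the next step: collinearity of three points can be tested exactly with a cross product, avoiding the square roots hidden in the distance formulation (a small sketch, my own illustration):

```python
def collinear(a, b, c):
    """Exact test for points with numeric coordinates: a, b, c are
    collinear iff the cross product of (b - a) and (c - a) is zero."""
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])
```

The usual improvement over the naive O(n³) approach builds on this idea: for each point, sort the other n-1 points by slope (or angle) around it; three collinear points show up as equal slopes from a common point, giving O(n² log n) overall.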
algorithms computational geometry
1
2,457
Improve Markov Chain results
<p>Apologies for another Markov Chain question, but this one is best given its own question to avoid confusion. I am using a Markov Chain to get the 10 best search results from the union of 3 different search engines. The top 10 results are taken from each engine to form a set of 30 results.</p>&#xA;&#xA;<p>The chain starts at state x, a uniform distribution over the set S = {1,2,3,...,30}. If the current state is page P, select page Q uniformly from the union of the results from each search engine. If the rank of Q &lt; rank of P in 2 of the 3 engines that rank both P and Q, move to Q. Else, remain at P. </p>&#xA;&#xA;<p>This results in a number of pairwise comparisons being carried out. result2 is compared with result1, and a count is made of each time result2 ranks better than result1. The results are sorted by the results of the pairwise comparisons, with the lowest score ranked first. e.g.</p>&#xA;&#xA;<pre>&#xA;Engine Rankings: Pairwise Comparison:&#xA; eng1 eng2 eng3 result1 result2 result3 result4 result5&#xA;result1 1 2 2 result1 0 1 0 0 1&#xA;result2 4 3 1 result2 2 0 1 2 2&#xA;result3 2 4 5 result3 3 2 0 1 2&#xA;result4 5 5 3 result4 3 1 2 0 1&#xA;result5 3 1 4 result5 2 1 1 2 0&#xA;&#xA;</pre>&#xA;&#xA;<p>The problem with this example is that, if we add the total of each row in the pairwise comparison, we get {2,7,8,7,7}, leaving 3 different results with the same score. I'm wondering if there is a method to further sort these results so that I'm not left with several results that have the same score. I've seen Kemenization, but I can't see how it would apply. Can someone please give me some guidance?</p>&#xA;
algorithms machine learning markov chains
0
2,459
Prove that for a general data structure - operations Extract_min() and Insert(x) cost $\Omega(\log n)$?
<p>I've been given the following problem:</p>&#xA;&#xA;<p>Given a data structure $M$ that is based on comparisons and supports the following methods on a set of numbers $S$:</p>&#xA;&#xA;<ul>&#xA;<li>$\text{Insert}(x)$ – add $x$ to $S$</li>&#xA;<li>$\text{Extract_min}()$ – remove the minimal element in $S$ and return it </li>&#xA;</ul>&#xA;&#xA;<p>We can implement the above methods with a heap in $O(\log n)$; however, we're looking at &#xA;the bigger picture, a general case in which we have no guarantee that $M$ is indeed a heap. Prove that, &#xA;no matter what kind of data structure $M$ is, <strong>at least one</strong> of the methods that $M$ supports must take $\Omega(\log n)$.</p>&#xA;&#xA;<p><strong>My solution:</strong></p>&#xA;&#xA;<p>Every sorting algorithm that is based on comparisons must take at least $\Omega(n\log n)$ in the worst case – we'll prove that using a decision tree: we view any given comparison-based algorithm as a binary tree where each vertex is a <em>comparison</em> between 2 elements: </p>&#xA;&#xA;<ul>&#xA;<li>if the first is bigger than the second element – we'll go to the left child</li>&#xA;<li>if the second is bigger than the first element – we'll go to the right child</li>&#xA;</ul>&#xA;&#xA;<p>At the end, we'll have $n!$ leaves, which are the possible orderings of the elements.</p>&#xA;&#xA;<p>If the height of the tree is $h$, then:</p>&#xA;&#xA;<p>$$2^h \ge n! \quad\Longrightarrow\quad \log(2^h) \ge \log(n!) \quad\Longrightarrow\quad h \ge \log(n!) \quad\Longrightarrow\quad h = \Omega(n \log n)$$</p>&#xA;&#xA;<p>Then, if we have an $\Omega(n \log n)$ worst case for $n$ elements, we have $\Omega(\log n)$ per element. </p>&#xA;&#xA;<p>I'm not sure about this solution, so I'd appreciate corrections or anything else &#xA;you can come up with. </p>&#xA;
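The last step of the argument above is usually packaged as a reduction: n calls to Insert followed by n calls to Extract_min sort the input, so if both operations ran in o(log n), we would have a comparison-based sort beating the Ω(n log n) decision-tree bound, a contradiction. A sketch with Python's heapq standing in for the abstract structure M (my own illustration):

```python
import heapq

def sort_with_M(items):
    """Sorting by n Insert(x) calls followed by n Extract_min() calls.
    Any comparison-based M supporting these two operations yields a
    comparison sort, so Insert and Extract_min cannot both be o(log n).
    Here heapq plays the role of M, and both operations are O(log n)."""
    M = []
    for x in items:
        heapq.heappush(M, x)                       # Insert(x)
    return [heapq.heappop(M) for _ in items]       # n times Extract_min()
```

This also shows the bound is tight, since the heap achieves O(log n) for both operations simultaneously.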
algorithms data structures binary trees search trees
0
2,462
Why is Turing completeness right?
<p>I am using a digital computer to write this message. Such a machine has a property which, if you think about it, is actually quite remarkable: It is <em>one machine</em> which, if programmed appropriately, can perform <em>any possible computation</em>.</p>&#xA;&#xA;<p>Of course, calculating machines of one kind or another go back to antiquity. People have built machines for performing addition and subtraction (e.g., an abacus), multiplication and division (e.g., the slide rule), and more domain-specific machines such as calculators for the positions of the planets.</p>&#xA;&#xA;<p>The striking thing about a computer is that it can perform <em>any</em> computation. Any computation at all. And all without having to rewire the machine. Today everybody takes this idea for granted, but if you stop and think about it, it's kind of amazing that such a device is possible.</p>&#xA;&#xA;<p>I have two actual <em>questions</em>:</p>&#xA;&#xA;<ol>&#xA;<li><p>When did mankind figure out that such a machine was possible? Has there ever been any serious <em>doubt</em> about whether it can be done? When was this settled? (In particular, was it settled before or after the first actual implementation?)</p></li>&#xA;<li><p>How did mathematicians <em>prove</em> that a Turing-complete machine really can compute everything?</p></li>&#xA;</ol>&#xA;&#xA;<p>That second one is fiddly. Every formalism seems to have some things that <em>cannot</em> be computed. Currently "computable function" is <em>defined as</em> "anything a Turing-machine can compute". But how do we know there isn't some slightly more powerful machine that can compute more stuff? How do we know that Turing-machines are the correct abstraction?</p>&#xA;
computability turing machines history
1
2,464
Time-space tradeoff for missing element problem
<p>Here is a well-known problem.</p>&#xA;&#xA;<p>Given an array $A[1\dots n]$ of positive integers, output the smallest positive integer not in the array.</p>&#xA;&#xA;<p>The problem can be solved in $O(n)$ space and time: read the array, keep track in $O(n)$ space of whether $1,2,\dots,n+1$ occurred, and scan for the smallest element.</p>&#xA;&#xA;<p>I noticed you can trade space for time. If you have only $O(\frac{n}{k})$ memory, you can do it in $k$ rounds and get time $O(k n)$. As a special case, there is obviously a constant-space, quadratic-time algorithm.</p>&#xA;&#xA;<p>My question is:</p>&#xA;&#xA;<blockquote>&#xA; <p>Is this the optimal tradeoff, i.e. does $\operatorname{time} \cdot \operatorname{space} = \Omega(n^2)$?&#xA; In general, how does one prove this type of bound?</p>&#xA;</blockquote>&#xA;&#xA;<p>Assume the RAM model, with bounded arithmetic and random access to arrays in $O(1)$.</p>&#xA;&#xA;<p>Inspiration for this problem: the time-space tradeoff for palindromes in the one-tape model (see, for example, <a href="http://www.cs.uiuc.edu/class/fa05/cs475/Lectures/new/lec24.pdf" rel="noreferrer">here</a>).</p>&#xA;
complexity theory time complexity space complexity
1
2,466
Fastest algorithm for finding the longest palindrome subsequence
<p>First of all, we read a word and a desired size.<br>&#xA;Then we need to find the longest palindrome formed by characters of this word, used in order (i.e., a palindromic subsequence).<br>&#xA;For example, for size = 7 and word = "abcababac", the answer is 7 ("abababa"). </p>&#xA;&#xA;<p>Postscript: the length of the word is smaller than 3000.</p>&#xA;
algorithms strings subsequences
0
2,470
Efficient bandwidth algorithm
<p>Recently I sort of stumbled on a problem of finding an efficient topology given a weighted directed graph. Consider the following scenario:</p>&#xA;&#xA;<ol>&#xA;<li><p>Node 1 is connected to 2, 3, 4 at 50 Mbps. Node 1 has a 100 Mbps network card.</p></li>&#xA;<li><p>Node 3 is connected to 5 at 50 Mbps. Node 3 has a 100 Mbps card.</p></li>&#xA;<li><p>Node 4 is connected to Node 3 at 40 Mbps. Node 4 has a 100 Mbps card.</p></li>&#xA;</ol>&#xA;&#xA;<p>(Sorry about not having a picture)</p>&#xA;&#xA;<p>Problem: If Node 1 starts sending data to its immediate nodes (2 and 3), we can clearly see its network card capacity will be drained after Node 3. Whereas if it were to <em>skip</em> node 3 and start sending to node 4, the data would eventually reach node 3 via node 4, and hence node 5 would get data via node 3.&#xA;The problem becomes more complicated if all the links are 50 Mbps, where we can clearly see that node 2 and node 4 are the only way to reach all nodes.</p>&#xA;&#xA;<p>Question: Is there an algorithm which gives the optimal path to ALL nodes keeping the network (card) capacity in mind? </p>&#xA;&#xA;<p>I have read about shortest-path and max-flow algorithms, but none of them seems to address my problem; perhaps I'm missing something. I'd appreciate it if someone could help me out.</p>&#xA;
algorithms graphs optimization linear programming
1
2,471
What is the bitwise xor of an interval?
<p>Let $\oplus$ be bitwise xor. Let $k,a,b$ be non-negative integers. $[a..b]=\{x\mid a\leq x, x\leq b\}$; it is called an integer interval.</p>&#xA;&#xA;<p>What is a fast algorithm to express &#xA;$\{ k\oplus x\mid x\in [a..b]\}$ as a union of integer intervals?</p>&#xA;&#xA;<p>One can prove that $[a+k..b-k]\subseteq \{ k\oplus x\mid x\in [a..b]\}$ by showing that $x-y\leq x\oplus y \leq x+y$.</p>&#xA;&#xA;<p><strong>Edit:</strong> I should specify the actual input and output to remove ambiguity.</p>&#xA;&#xA;<p>Input: $k, a, b$.</p>&#xA;&#xA;<p>Output: $a_1, b_1, a_2, b_2,\ldots,a_m,b_m$ such that:</p>&#xA;&#xA;<p>$$&#xA;\{ k\oplus x\mid x\in [a..b]\} = \bigcup_{i=1}^m [a_i..b_i]&#xA;$$</p>&#xA;
algorithms integers
1
2,476
What are the minimum requirements for a language to be considered Turing Complete?
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/991/are-there-minimum-criteria-for-a-programming-language-being-turing-complete">Are there minimum criteria for a programming language being Turing complete?</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>I overheard a conversation on the topic and the conclusion that one gent came to was that in order to be Turing complete, given one has infinite storage, all one needs is a conditional control structure and a jump instruction. </p>&#xA;&#xA;<p>Is this true? </p>&#xA;&#xA;<p>If it is true, and Turing completeness requires that the language that is Turing complete be able to simulate every instruction available in another Turing complete language, how do those two simple elements achieve that?</p>&#xA;
computability programming languages turing completeness
0
2,482
Using Dijkstra's algorithm with negative edges?
<p>Most books explain that the algorithm doesn't work with negative edges because a node is deleted from the priority queue once it has been reached, since the algorithm assumes its shortest distance has been found. However, since negative edges can reduce the distance, a shorter distance might be found later; but since the node has been deleted, it cannot be updated.</p>&#xA;&#xA;<p>Wouldn't an obvious solution to this be to <em>not delete the node</em>? Why not keep the node in the queue, so that if a <em>shorter</em> distance is found later, it can be updated? If I am misunderstanding the problem, what <em>is</em> preventing the algorithm from being used with negative edges?</p>&#xA;
algorithms graphs shortest path
0
2,487
What is the time complexity of this function?
<p>This is an example in my lecture notes.&#xA;Is this function's time complexity $O(n \log n)$?&#xA;In the worst case the function goes into the <code>else</code> branch, with 2 nested loops of $\log n$ and $n$ iterations, so it is $O(n \log n)$. Am I right?</p>&#xA;&#xA;<pre><code>int j = 3;&#xA;int k = j * n / 345;&#xA;if(k &gt; 100){&#xA; System.out.println("k: " + k);&#xA;}else{&#xA; for(int i=1; i&lt;n; i*=2){&#xA; for(int m=0; m&lt;i; m++){&#xA; k++;&#xA; }&#xA; }&#xA;}&#xA;</code></pre>&#xA;
complexity theory time complexity algorithm analysis runtime analysis
1
2,489
Is NP-hard closed against cartesian product with arbitrary languages?
<p>If $L_1$ is NP-hard, $L_1 \times L_2$ is NP-hard for every $L_2 \neq \emptyset$, where</p>&#xA;&#xA;<p>$\qquad \displaystyle L_1 \times L_2 = \{(w_1,w_2) \mid w_1 \in L_1, w_2 \in L_2\}$</p>&#xA;&#xA;<p>Is this true or false, and why?</p>&#xA;&#xA;<p>I can't prove it, but I also can't find a counterexample.</p>&#xA;
complexity theory np hard
0
2,492
CLRS - Maxflow Augmented Flow Lemma 26.1 - don't understand use of def. in proof
<p>In Cormen et al., <em>Introduction to Algorithms</em> (3rd ed.), I don't get a line in the proof of Lemma 26.1 which states that the augmented flow $f\uparrow f'$ is a flow in $G$ and is s.t. $|f\uparrow f'| =|f|+|f'|$ (this is pp. 717-718).</p>&#xA;&#xA;<p>My confusion: When arguing <em>flow-conservation</em> they use the definition of $f\uparrow f'$ in the first line to say that for each $u\in V\setminus\{s,t\}$</p>&#xA;&#xA;<p>$$ \sum_{v\in V} (f\uparrow f')(u,v) = \sum_{v\in V} (f(u,v)+f'(u,v) - f'(v,u)), $$</p>&#xA;&#xA;<p>where the augmented flow is defined as</p>&#xA;&#xA;<p>$$ (f\uparrow f')(u,v) = \begin{cases} f(u,v)+f'(u,v) - f'(v,u) &amp; \text{if $(u,v)\in E$}, \\&#xA;0 &amp; \text{otherwise}. \end{cases} $$</p>&#xA;&#xA;<p>Why can they ignore the 'otherwise' clause in the summation? I don't think the first clause evaluates to zero in all such cases. Do they use flow conservation of $f$ and $f'$ in some way?</p>&#xA;
algorithms network flow
1
2,495
Book for algorithms beyond Cormen
<p>I've finished most of the material in Cormen's Intro to Algorithms book and I am looking for an algorithms book that covers material beyond Cormen's book. Are there any recommendations?</p>&#xA;&#xA;<p>NOTE: I asked this on stackoverflow but wasn't too happy with the answer. </p>&#xA;&#xA;<p>NOTE: Looking at most of the comments, I think ideally I would like to find a book that would cover the material of the 787 course in <a href="http://www.cs.wisc.edu/academic-programs/courses/cs-course-descriptions">this course description</a>.</p>&#xA;
algorithms reference request books
1
2,498
Finding at least two paths of same length in a directed graph
<p>Suppose we have a directed graph $G=(V,E)$ and two nodes $A$ and $B$.&#xA;I would like to know if there are already algorithms for the following decision problem: </p>&#xA;&#xA;<blockquote>&#xA; <p>Are there at least two paths between $A$ and $B$ of the same length?</p>&#xA;</blockquote>&#xA;&#xA;<p>How about the complexity? Can I solve it in polynomial time?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>I would like to add a new constraint on the graph; maybe it makes the problem easier to solve.&#xA;In the adjacency matrix, no column is empty. So every node has at least one incoming edge, and there is also at least one node connected to itself, i.e., a node $i$ such that $(i,i)$ is an edge in the graph.</p>&#xA;
complexity theory graphs time complexity
0
2,501
Neighbourhood in local search metaheuristic
<p>I cannot seem to find an answer to this question with Google, so I am going to ask here: is it required of a good neighbourhood function that it can, in principle (i.e. by recursively considering all neighbours of a certain solution - which is not practical), reach all possible solutions?</p>&#xA;&#xA;<p>My question is whether there are references in the literature that explicitly state this as a requirement - I can see that it is a good property of a neighbourhood.</p>&#xA;
optimization heuristics
1
2,503
Eliminating useless productions resulting from PDA to CFG converison
<p>In my class we used a Pushdown Automata to Context Free Grammar conversion algorithm that produces a lot of extraneous variables.</p>&#xA;&#xA;<p>For example, for two transitions, I am getting the following productions</p>&#xA;&#xA;<blockquote>&#xA; <p>$$\begin{gather*}&#xA; \delta(q_0,1,Z) = (q_0,XZ) \\&#xA; {}[q_0,Z,q_0] \to 1[q_0,X,q_0][q_0,Z,q_0] \\&#xA; {}[q_0,Z,q_0] \to 1[q_0,X,q_1][q_1,Z,q_0] \\&#xA; {}[q_0,Z,q_1] \to 1[q_0,X,q_0][q_0,Z,q_1] \\&#xA; {}[q_0,Z,q_1] \to 1[q_0,X,q_1][q_1,Z,q_1] \\&#xA;\end{gather*}$$</p>&#xA; &#xA; <p>$$ \begin{gather*}&#xA; \delta(q_1,0,Z) = (q_0,Z) \\&#xA; {}[q_1,Z,q_0] \to 0[q_0,Z,q_0] \\&#xA; {}[q_1,Z,q_1] \to 0[q_0,Z,q_1] \\&#xA;\end{gather*}$$</p>&#xA;</blockquote>&#xA;&#xA;<p>How do I decide which variables make it into the final grammar, and which ones will be excluded?</p>&#xA;
automata formal grammars context free pushdown automata
1
2,507
How to use dynamic programming to solve this?
<p>Here is the question: suppose we are given x cents, the amount we want to pay, and a 6-tuple (p, n, d, q, l, t) that represents respectively the number of pennies, nickels, dimes, quarters, loonies and toonies you have. Assume that you have enough coins to pay x cents. You do not have to pay exactly x cents; you can pay more. The cashier is assumed to be smart enough to give you back the optimal number of coins as change. We want to minimize the number of coins that changes hands, that is the number of coins you give to the cashier plus the number of coins the cashier gives back to you.</p>&#xA;&#xA;<p>For example, if we want to pay 99 cents and we have 99 pennies and 1 loonie, then the optimal solution would be to give the cashier the loonie and take back 1 penny.</p>&#xA;&#xA;<p>A particularly easy solution that occurs to me is to create a six-dimensional array. But in practice this is not feasible. So I am wondering if anyone can give me a small hint as to how to use dynamic programming to solve this (as this question looks intuitively to me like a DP problem). Once I have a hint, I can perhaps work out the remaining details myself. Thanks.</p>&#xA;
algorithms optimization dynamic programming
0
2,508
Could an artificial neural network algorithm be expressed in terms of map-reduce operations?
<p>Could an artificial neural network algorithm be expressed in terms of map-reduce operations? I am also interested more generally in methods of parallelization as applied to ANNs and their application to cloud computing. </p>&#xA;&#xA;<p>I would think one approach would involve running a full ANN on each node and somehow integrating the results in order to treat the grid like a single entity (in terms of input/output and machine learning characteristics.) I would be curious even in this case what such an integrating strategy might look like.</p>&#xA;
parallel computing artificial intelligence neural networks
1
2,511
Find all the special graphs which can reduced to the shortest paths graph
<p>I have a directed weighted graph $G = (V, E, W)$. There is always an edge from any vertex $i$ to any other vertex $j$, the weight $w(i,j)$ may be positive infinity, and there does not exist any negative cycle. </p>&#xA;&#xA;<p>Some algorithms will find the lengths (summed weights) of the shortest paths between all pairs of vertices, though they do not return details of the paths themselves. For instance, the <a href="http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm" rel="noreferrer">Floyd–Warshall algorithm</a> is straightforward, and it works. Let us denote the result by $G' = (V, E, W')$.</p>&#xA;&#xA;<p>In $G'$, it is possible that for an edge from $i$ to $j$, $w'(i,j) = w'(i, k_0) + w'(k_0, k_1) + \dots + w'(k_n, j)$. Let us make from $G'$ another graph $G''$ that is identical to $G'$ except that $w''(i,j) = \infty \neq w'(i,j)$. Then we know that running a shortest-paths algorithm on $G''$ will give $G'$.</p>&#xA;&#xA;<p>So given a $G'$, I would like to find all the graphs like $G''$, such that for all $i$ and $j$, $w''(i,j) \in \{ w'(i,j), \infty\}$, and $G''$ can be reduced to $G'$ via a shortest-paths algorithm.</p>&#xA;&#xA;<p>I hope my question is clear... I do not know if an algorithm for this already exists; does anyone have any idea?</p>&#xA;
algorithms graphs
0
2,517
Is there a formal name for this graph operation?
<p>I'm writing a small function to alter a graph in a certain way and was wondering if there is a formal name for the operation. The operation takes two distinct edges, injects a new node between the existing nodes of each edge and then adds an edge between the two new nodes. For example:</p>&#xA;&#xA;<pre><code>add new nodes a and b to the graph&#xA;let edge1 = (x,y), let edge2 = (u,v)&#xA;&#xA;delete edge (x,y)&#xA;create edges (x,a), (a,y)&#xA;&#xA;delete edge(u,v)&#xA;create edges(u,b), (b,v)&#xA;&#xA;create edge (a,b)&#xA;</code></pre>&#xA;
graphs terminology combinatorics
0
2,519
Efficiently calculating minimum edit distance of a smaller string at each position in a larger one
<p>Given two strings, $r$ and $s$, where $n = |r|$, $m = |s|$ and $m \ll n$, efficiently find the minimum edit distance between $s$ and each suffix of $r$.</p>&#xA;&#xA;<p>That is, for each suffix of $r$ beginning at position $k$, $r_k$, find the <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a> of $r_k$ and $s$ for each $k \in [0, |r|-1]$. In other words, I would like an array of scores, $A$, such that each position, $A[k]$, corresponds to the score of $r_k$ and $s$.</p>&#xA;&#xA;<p>The obvious solution is to use the standard dynamic programming solution for each $r_k$ against $s$ considered separately, but this has the abysmal running time of $O(n m^2)$ (or $O(n d^2)$, where $d$ is the maximum edit distance). It seems like you should be able to re-use the information that you've computed for $r_0$ against $s$ for the comparison with $s$ and $r_1$.</p>&#xA;&#xA;<p>I've thought of constructing a prefix tree and then trying to run the dynamic programming algorithm on $s$ against the trie, but this still has worst-case $O(n d^2)$ (where $d$ is the maximum edit distance) as the trie is only optimized for efficient lookup.</p>&#xA;&#xA;<p>Ideally I would like something that has a worst-case running time of $O(n d)$, though I would settle for good average-case running time. Does anyone have any suggestions? 
Is $O(n d^2)$ the best you can do, in general?</p>&#xA;&#xA;<p>Here are some links that might be relevant, though I can't see how they would apply to the above problem as most of them are optimized for lookup only:</p>&#xA;&#xA;<ul>&#xA;<li><a href="http://stevehanov.ca/blog/index.php?id=114" rel="nofollow noreferrer">Fast and Easy Levenshtein distance using a Trie</a></li>&#xA;<li><a href="https://stackoverflow.com/questions/3183149/most-efficient-way-to-calculate-levenshtein-distance">SO: Most efficient way to calculate Levenshtein distance</a></li>&#xA;<li><a href="https://stackoverflow.com/questions/4057513/levenshtein-distance-algorithm-better-than-onm?rq=1">SO: Levenshtein Distance Algorithm better than $O(n m)$</a></li>&#xA;<li><a href="http://www.berghel.net/publications/asm/asm.php" rel="nofollow noreferrer">An extension of Ukkonen's enhanced dynamic programming ASM algorithm</a></li>&#xA;<li><a href="http://blog.notdot.net/2010/07/Damn-Cool-Algorithms-Levenshtein-Automata" rel="nofollow noreferrer">Damn Cool Algorithms: Levenshtein Automata</a></li>&#xA;</ul>&#xA;&#xA;<p>I've also heard some talk about using some type of distance metric to optimize search (such as a <a href="http://en.wikipedia.org/wiki/BK-tree" rel="nofollow noreferrer">BK-tree</a>?), but I know little about this area and how it applies to this problem.</p>&#xA;
algorithms runtime analysis strings dynamic programming string metrics
1
2,521
Is the type inference here really complicated?
<p>There's a <a href="https://stackoverflow.com/questions/9058430/why-doesnt-immutablemap-builder-build-pick-the-correct-type-parameters">question on SO</a> asking why in Java the right type doesn't get picked in a concrete case. I know that Java can't do it in such "complicated" cases, but I'm asking myself <em>WHY</em>?</p>&#xA;&#xA;<p>The (for simplicity slightly modified) line failing to compile is</p>&#xA;&#xA;<pre><code>Map&lt;String, Number&gt; m = ImmutableMap.builder().build();&#xA;</code></pre>&#xA;&#xA;<p>and the methods are defined as<sup>1</sup></p>&#xA;&#xA;<pre><code>class ImmutableMap {&#xA; public static &lt;K1, V1&gt; Builder&lt;K1, V1&gt; builder() {...}&#xA; ...&#xA;}&#xA;&#xA;class Builder&lt;K2, V2&gt; {&#xA; public ImmutableMap&lt;K2, V2&gt; build() {...}&#xA; ...&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>The solution <code>K1=K2=String</code> and <code>V1=V2=Number</code> is obvious to everyone but the compiler. There are 4 variables here and I can see 4 trivial equations, so what's the problem with type inference here?</p>&#xA;&#xA;<p><sup>1</sup>I simplified the <a href="http://guava-libraries.googlecode.com/git/guava/src/com/google/common/collect/ImmutableMap.java" rel="nofollow noreferrer">code piece from Guava</a> for this example and numbered the type variables to make it (hopefully) clearer.</p>&#xA;
programming languages typing java type inference
1
2,524
Getting parallel items in dependency resolution
<p>I have implemented a topological sort based on the <a href="http://en.wikipedia.org/wiki/Topological_sort">Wikipedia article</a> which I'm using for dependency resolution, but it returns a linear list. What kind of algorithm can I use to find the independent paths?</p>&#xA;
algorithms graphs parallel computing scheduling
1
2,527
Algorithm to test a graph for $t$-transitivity
<p>I am looking for an algorithm which, given a graph $G$ and a natural number $t$, determines whether $G$ is <a href="http://en.wikipedia.org/wiki/Symmetric_graph">$t$-transitive</a>.</p>&#xA;&#xA;<p>I am also interested in knowing whether this problem is in P, NP, or NPC, and in any other interesting facts about its complexity.</p>&#xA;
algorithms graphs
0
2,528
Are supersets of non-regular languages also non-regular?
<p>I have to prove that if $L_1 \subset L_2$ and $L_1$ is not regular, then $L_2$ is not regular. This is my proof. Is it valid? </p>&#xA;&#xA;<p>Since $L_1$ is not regular, there does not exist a finite automaton $M_1$ such that $L_1$ is the language of $M_1$. Pick $x\in L_1$. So $x \in L_2$, and suppose that $L_2$ is regular. Then there exists a finite automaton $M_2$ such that $L_2$ is the language of $M_2$. Since $x \in L_2$ and $L_2$ is regular, there exists a state $s\in S$ such that from the initial state in $M_2$ there is a path labelled $x$ to this final state $s$. Since this holds for all $x \in L_1$, we can construct a finite automaton whose language is $L_1$, so $L_1$ is regular; we have reached a contradiction, so $L_2$ is not regular.</p>&#xA;&#xA;<p>Can this be done more easily?</p>&#xA;
formal languages regular languages automata finite automata check my proof
0
2,531
Time to construct a GNBA for LTL formula
<p>I have a problem with the proof for constructing a GNBA (<a href="https://en.wikipedia.org/wiki/Generalized_B%C3%BCchi_automaton" rel="nofollow">generalized nondeterministic Büchi automaton</a>) for a <a href="https://en.wikipedia.org/wiki/Linear_temporal_logic" rel="nofollow">LTL formula</a>:</p>&#xA;&#xA;<p><strong>Theorem:</strong> For any LTL formula $\varphi$ there exists a GNBA $G_{\varphi}$ over alphabet $2^{AP}$ such that:</p>&#xA;&#xA;<ol>&#xA;<li><p>$\operatorname{Word}(\varphi)=L_{\omega}(G_{\varphi})$.</p></li>&#xA;<li><p>$G_{\varphi}$ can be constructed in time and space $2^{O(|\varphi|)}$, where $|\varphi|$ is the size of $\varphi$.</p></li>&#xA;<li><p>The number of accepting states of $G_{\varphi}$ is bounded above by $O(|\varphi|)$.</p></li>&#xA;</ol>&#xA;&#xA;<p>My problem lies in the proof of (2): the proof says that the number of states in $G_{\varphi}$ is bounded by $2^{|\operatorname{subf}(\varphi)|}$, but since $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ (where $\operatorname{subf}(\varphi)$ is the set of all subformulae) the number of states is bounded by $2^{O(|\varphi|)}$. </p>&#xA;&#xA;<p>But why does $|\operatorname{subf}(\varphi)| \leq 2\cdot|\varphi|$ hold? </p>&#xA;
logic automata formal methods model checking linear temporal logic
1
2,532
What is the number of expressions containing n pairs of matching brackets with nesting limit?
<p>I know the answer <em>without</em> a nesting limit is the Catalan number. My question is, specifically, is there a recurrence relation that gives the number of expressions containing $n$ pairs of matching brackets such that no more than $l$ open brackets are unclosed at any given point?</p>&#xA;&#xA;<p>For instance, for $n=3$ and $l=2$ the answer is $4$. All possible combinations are $(())()$, $()(())$, $()()()$, $(()())$. We cannot have $((()))$ since there are three open brackets that are not closed in the middle.</p>&#xA;
formal languages combinatorics recurrence relation word combinatorics
0
2,536
How does worst-fit memory allocation react when encountering contiguous empty memory blocks?
<p>So I have a problem understanding how the <a href="http://courses.cs.vt.edu/csonline/OS/Lessons/MemoryAllocation/index.html" rel="nofollow">worst-fit protocol for memory allocation</a> reacts to contiguous blocks of empty memory. None of the examples I have found addresses this possibility.</p>&#xA;&#xA;<p>For example, say you have the following blocks (where 'O' stands for an occupied block and 'E' stands for an empty block) and are to allocate 10 MB via the worst-fit algorithm:</p>&#xA;&#xA;<pre><code>------------------------------------------------------------&#xA;|10 MB O | 40 MB E | 10 MB O | 20 MB E | 30 MB E | 10 MB O |&#xA;------------------------------------------------------------&#xA;----0---------1---------2---------3---------4---------5-----&#xA;</code></pre>&#xA;&#xA;<p>My question is: does the worst-fit algorithm select block 1, leaving behind a 30 MB hole in block 1, or does it select block 3, leaving behind a cumulative 40 MB hole between blocks 3 and 4? </p>&#xA;
algorithms operating systems memory allocation
0
2,537
Intersection and union of a regular and a non-regular language
<blockquote>&#xA; <p>Let $L_1$ be regular, $L_1 \cap L_2$ regular, $L_2$ not regular. Show that $L_1 \cup L_2$ is not regular or give a counterexample.</p>&#xA;</blockquote>&#xA;&#xA;<p>I tried this: Look at $L_1 \setminus (L_2 \cap L_1)$. This one is regular. I can construct a finite automaton for this: $L_1$ is regular, $L_2 \cap L_1$ is regular, so remove all the paths (a finite number) for $L_1 \cap L_2$ from the finite number of paths for $L_1$. So there is a finite number of paths left for this whole thing. This thing is disjoint from $L_2$, but how can I prove that the union of $L_1 \setminus (L_1 \cap L_2)$ (regular) and $L_2$ (not regular) is not regular?</p>&#xA;
formal languages regular languages
0