language modeling
How does a given complexity class change under polynomial reduction?
https://cs.stackexchange.com/questions/69686/how-does-a-given-complexity-class-change-under-polynomial-reduction
<p>Suppose we have a complexity class $C$ (say, for example, $C = DTIME(2^{cn})$). Take a language that belongs to $C$: $L \in C$. Define an arbitrary polynomial reduction from language $L$ to $L'$. To what complexity class does the resulting language $L'$ belong?</p> <p>Thoughts: I think it might depend on the model of computation. Let's assume that the complexity class is $C = DTIME(f(n))$, simulated on a deterministic Turing machine. Under a reduction function $h(n)$, the image of every language in this class would be in $DTIME(h(f(n)))$, but I'm not sure how to write/prove it formally.</p>
500
language modeling
Common models for inferring semantics (truth/factuality of statements)?
https://cs.stackexchange.com/questions/50930/common-models-for-inferring-semantics-truth-factuality-of-statements
<p>What are common <strong>models</strong> for creating rules for inferring meanings (e.g. truth values) of natural language statements such as if one wanted to infer the truth value of the input statement</p> <blockquote> <p>John Lennon is in Beatles.</p> </blockquote> <p>which would depend on, whether the model "knew" John Lennon is in Beatles.</p>
501
language modeling
Strongest criticisms of object-oriented languages?
https://cs.stackexchange.com/questions/171056/strongest-criticisms-of-object-oriented-languages
<p><a href="https://harmful.cat-v.org/software/c++/linus" rel="nofollow noreferrer">Linus Torvalds has famously attacked the object-oriented language C++</a>, but he didn't offer many specifics about why, besides saying C++ uses &quot;inefficient abstracted programming models&quot;. What exactly are the strongest criticisms of object-oriented languages? Are they &quot;inefficient&quot;?</p>
<p>I think Torvalds attacks C++ on two grounds.</p> <p>One is that the language is over-complicated and that the subset of C++ which is appropriate for Torvalds' use (as a programmer of computer operating systems), is roughly that part which is common with the C language.</p> <p>The other is that the C++ language attracts OOP fanatics, who in Torvalds' view are unsuited to programming computer operating systems.</p> <p>These two prongs do not necessarily overlap.</p> <p>C++ is not exclusively an OO language, but incorporates a large number of different features which have arisen over a long period of time from no single reigning philosophy, and the integration of which has always occurred within legacy constraints (never settled from a clean slate).</p> <p>Many of these features are abstractions which have not been designed with OS programmers in mind, whose exact workings can be difficult to grasp by paper-and-pencil methods, and which can become leaky or inefficient under circumstances which are difficult to anticipate.</p> <p>The overall complexity has become such that the whole C++ language is now widely regarded as fathomless to any one developer. This doesn't do when software designs require a high degree of control - that is, where a design does not decompose particularly well into independent parts or layers but requires extensive integration and all-round harmony (typically down to the hardware level for OSes), and for every developer to be able to engage with the whole design.</p> <p>So Torvalds argues that he doesn't get much bang for his buck from using C++, because everything extra it has over C are all the parts which are likely to end up excluded from use.</p> <p>By contrast, C was designed coherently by one man Dennis Ritchie, it was designed specifically for OS programming and to be non-abstract, and has been extended more conservatively than C++ in the years since it first emerged. 
Indeed, C has changed more conservatively perhaps because C++ has acted as a lightning rod for clamour for additional language complexity.</p> <p>The other thing is the culture and philosophy of the people that Torvalds would have to deal with, if C++ were in use and these people were attracted to try and work with Torvalds or work on Linux development.</p> <p>Torvalds is explicit that he sees the use of C as a natural deterrent to the involvement of these people - presumably because, again, C++ acts as a lightning rod for them, and the preclusion of any use of C++ means they see little or no hope of indulging in their preferred manner of design and programming which Torvalds opposes.</p> <p>It can be difficult to fully characterise the people in question but they are associated with a certain style of doing object-oriented programming, and they are often proficient with no other style or with any non-OO language.</p> <p>I don't think Torvalds is necessarily criticising OO languages in general and for all possible purposes, or criticising C++ merely because it has classes and OO features, as much as he is criticising certain practitioners who follow the style to which I've referred (and presumably, the extent to which they will seek to become involved and impose that style of development).</p>
502
language modeling
I've proven my language undecidable; what is left to prove it Turing equivalent?
https://cs.stackexchange.com/questions/112159/ive-proven-my-language-undecidable-what-is-left-to-prove-it-turing-equivalent
<p>Let us say that I have a computation model <span class="math-container">$A$</span>. Let us also say that I have shown that <span class="math-container">$A$</span> can be simulated by a Turing machine.</p> <p>I have not been able to prove that <span class="math-container">$A$</span> can simulate a Turing machine. In fact in trying to do so I have shown that doing so would require proving a notable unproven conjecture in mathematics. I have shown that <span class="math-container">$A$</span>'s halting problem is decidable iff this unproven conjecture is false.</p> <p>This means if I were to show <span class="math-container">$A$</span> to be Turing complete I would prove the conjecture. However it does not mean that if I proved <span class="math-container">$A$</span> to <em>not</em> be Turing complete I would disprove the conjecture. Since (to the best of my knowledge) a language can be Turing undecidable and still remain weaker than a Turing machine.</p> <p>I would like to show that the language is Turing equivalent iff the conjecture is true, but I have run into a block getting anywhere. Since I doubt I will prove a conjecture of this caliber it seems worthless to try and simulate a Turing machine in <span class="math-container">$A$</span>, but I don't know any non-constructive ways of proving Turing completeness.</p> <p>Is there any useful strategy I could employ or relevant theorem I could use here?</p> <hr> <p><sub>In case of curiosity the unsolved problem I am dealing with is <a href="https://en.wikipedia.org/wiki/Skolem_problem" rel="nofollow noreferrer">the Skolem problem</a></sub></p>
503
sequence-to-sequence model
Hidden Markov Models for Hand Gestures
https://cs.stackexchange.com/questions/129522/hidden-markov-models-for-hand-gestures
<p>I am completing a final year project for hand gesture recognition using Hidden Markov Models.</p> <p>I have a fair understanding of Hidden Markov Models and how they work using simple examples such as the <a href="https://web.stanford.edu/class/stats366/hmmR2.html" rel="nofollow noreferrer">Unfair Casino</a> and some <a href="https://en.wikipedia.org/wiki/Template:HMM_example" rel="nofollow noreferrer">Weather</a> examples.</p> <p>I am looking to implement multiple Hidden Markov Models where each model corresponds to a single gesture, similarly to <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.214.1377&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">this paper</a> where the observed states are the angles between the coordinates of different points. This would create a sequence of numbers from 0 to 18 as seen in <a href="https://i.sstatic.net/P7uUD.png" rel="nofollow noreferrer">Figure 3</a> and <a href="https://i.sstatic.net/FFEYh.png" rel="nofollow noreferrer">Figure 4</a>.</p> <p>What would the hidden states be in terms of this scenario?</p> <p>The weather example has the observations 'Walk', 'Shop' and 'Clean', which would be the numbers 0-18 in the hand gesture case; however, I do not know what the states 'Rainy' and 'Sunny' would correspond to in the hand gesture scenario.</p> <p><strong>Edit:</strong> I am generating a sequence of numbers that will correspond to a certain gesture using the method mentioned above. I will then use that sequence to train an HMM and will then test that HMM using another set of recorded numbers similar to the training set. Here is an example of my scenario:</p> <p>Recorded data of observed states (theta) during a gesture:</p> <pre><code>observations = [0,1,4,15,4,3,1,0,19,18,17,16,15,15,16,3,1,1,0,18...] 
</code></pre> <p>Recorded data of test gesture:</p> <pre><code>test = [0,2,4,15,4,2,1,0] </code></pre> <p>My goal is to create a model from the first set of observations (which will be much longer, as the gesture will be recorded many times) and determine the likelihood of the test gesture coming from said model.</p> <p>Will I need to generate a hidden state to create an accurate model of the gesture, or can I just use unsupervised training for a model?</p> <p>If I do have to use supervised training, should I create the hidden states using quadrants (i.e. 0-90 degrees = quadrant 1, 90-180 degrees = quadrant 2...)?</p>
<p>You say that your goal is to create a model and compute the likelihood of a test sample using that model. For that purpose, you do not need to find a way to interpret the hidden states, and you do not need to engineer the model so the hidden states have a meaning that you can understand. Instead, let the standard Baum-Welch algorithm compute a reasonable model that fits the training data, and don't worry about the meaning of the hidden states.</p> <p>If you had a physics model where you knew what the hidden states &quot;should&quot; be, based on some domain knowledge (e.g., about the movement patterns of bones and joints), then you could incorporate that into your model in hopes of improving the model... but that's not required. Since you don't seem to start from such a place, there is no need to try to force the hidden states to take on any particular human-understandable meaning.</p>
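To make the answer's suggestion concrete: once Baum-Welch has fit the parameters, scoring a test sequence is just the forward algorithm. Below is a minimal sketch in Python/NumPy; the two-state model and its parameters are made-up placeholders standing in for trained values, not part of the original answer.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    # Scaled forward algorithm: pi (N,), A (N, N), B (N, num_symbols).
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                  # rescale to avoid numerical underflow
        log_like += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return log_like + np.log(alpha.sum())

# Placeholder 2-state model over the 20 angle symbols (0..19); NOT trained.
rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = rng.dirichlet(np.ones(20), size=2)
score = forward_log_likelihood([0, 2, 4, 15, 4, 2, 1, 0], pi, A, B)
```

In practice you would obtain `pi`, `A` and `B` from Baum-Welch on the training observations, build one such model per gesture, and classify a test sequence by whichever model gives the highest log-likelihood.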
504
sequence-to-sequence model
Continuous Observation Densities in HMM
https://cs.stackexchange.com/questions/60131/continuous-observation-densities-in-hmm
<p>I've been reading about hidden Markov models and stumbled upon <em>A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition</em> by Lawrence R. Rabiner (<em>Proc. IEEE</em>, 77(2):257&ndash;286, 1989; <a href="http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf" rel="nofollow">PDF</a>). The following appears as equation&nbsp;49 in the section about continuous observations in hidden Markov models:</p> <p>$$b_j(O) = \sum_{m=1}^M c_{jm}\mathfrak{N}[O, \mu_{jm}, U_{jm}], \quad 1\leq j\leq N\,.$$</p> <p>What I want to know is how to use this equation given an estimation of the transition, emission, initial probabilities and a given continuous observed sequence to train the hidden Markov model using the Baum&ndash;Welch algorithm.</p> <p>Also I haven't read the rest of the paper, since I already have a good amount of knowledge about discrete hidden Markov models. I'm just trying to learn how to use continuous time series data to train an HMM.</p>
505
sequence-to-sequence model
Question about bigram model
https://cs.stackexchange.com/questions/52526/question-about-bigram-model
<p>I am trying to build a bigram letter model.</p> <p>I obtain a sequence of words in the form ['hello','I','am','Johnny'].</p> <p>First, I lowercase all the words to obtain: ['hello','i','am','johnny'].</p> <p>I am capable of building a bigram letter model, but I have read somewhere that you should provide some kind of empty strings / padding to the model.</p> <p>Does anybody know why you have to provide padding to the input data to build a proper language model? And how do you use padding on this sample input to build a letter model?</p> <p>I was thinking about putting a space in front of every word, but I am not convinced that this is the right solution - other options I am considering are adding padding at the end of every sentence, or after each sequence of 2 characters since this is a bigram letter model.</p>
<p>The padding is there because the distribution of letters at the beginning and end of words is very different from their distribution inside words. To capture that, separate the words with spaces and pad both ends with a space: <code>' hello i am johnny '</code>.</p>
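As a sketch of how that padding plays out in practice (function and variable names are my own), here is a tiny bigram letter model trained over the padded string:

```python
from collections import defaultdict

def train_bigram_letter_model(words):
    # Pad: join lowercased words with spaces and add a space at each end,
    # so boundary bigrams like (' ', 'h') and ('o', ' ') get counted.
    text = " " + " ".join(w.lower() for w in words) + " "
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    # Normalize counts into conditional probabilities P(next | current).
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

model = train_bigram_letter_model(["hello", "I", "am", "Johnny"])
```

With this scheme the space acts as a word-boundary symbol, so the model learns which letters tend to start words (everything conditioned on `' '`) and which tend to end them (everything that transitions into `' '`).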
506
sequence-to-sequence model
Probabilistic hardness of approximation or solution of NP-hard optimization problems under a probabilistic generative model for input data
https://cs.stackexchange.com/questions/29276/probabilistic-hardness-of-approximation-or-solution-of-np-hard-optimization-prob
<p>So in biology (DNA sequences), sequence alignment is a generalization of longest common subsequence where an alignment of two sequences is scored typically with a linear function of how many spaces are inserted into each sequence and how many times each possible pair of aligned characters appears in the alignment. Just like longest common subsequence, finding the optimal alignment of two strings under an arbitrary linear scoring scheme can be solved in quadratic time using dynamic programming. (Needleman-Wunsch algorithm). The longest subsequence problem and variants that use linear scoring schemes and ask for the optimal multiple sequence alignment are NP-hard when the number of input strings is not fixed.</p> <p>However, in biology, there is a probabilistic generative model that generates related DNA sequences. Starting with an unknown root ancestor DNA sequence, bifurcations occur that create two daughter sequences (species) that are independently derived from the ancestral sequence by potentially adding some characters in random locations, deleting some characters, and changing some characters. Then the bifurcations continue with additional changes at each level until the modern day DNA sequences of extant species are obtained. Then we want to align the modern day species' sequences (e.g. find the longest common subsequence in the simplest case) without knowing the exact ancestral sequences. In this case, fossil records can help identify the bifurcation events and estimate the sequence mutation rates after each bifurcation. 
So a reasonable estimate of the generative model that generated the related modern day DNA sequences can sometimes be obtained.</p> <p>Now, my question is, for such an NP-hard optimization problem with a well-defined probabilistic generative model that generates input data, has anyone studied the hardness of finding either an optimal or nearly optimal solution, where either the worst-case or expected running time depends on the parameters for the model that generates the input data? For example, if DNA mutation and insertion/deletion rates are very low for a particular group of species, then it should be fairly easy to get at least a nearly optimal alignment of all the DNA sequences using partial alignments and pruning and heuristics, without resorting to a full-blown exponential time solution.</p>
<p>Here is a similar recent example due to Mossel et al. There are $n$ vertices which are partitioned randomly into two classes. Two vertices of the same type are connected with probability $p$, and two of opposite types with probability $q$. For what values of $p, q$ can we recover the partition with high probability (with respect to $n$)? It turns out that if $p$ and $q$ are too close then it is statistically impossible, and otherwise a simple algorithm succeeds with high probability. </p> <p>The question you are asking is very specific, and the answer likely depends on the exact model and its parameters. It seems likely that for a low mutation rate, some iterative algorithm should work, perhaps even provably (with high probability).</p> <p>One way to formulate this question more concretely is to come up with constraints on the relations between the sequences, constraints that hold with high probability. It could be that for an appropriate choice of constraints, the problem becomes feasible. </p>
507
sequence-to-sequence model
How to generate a degree sequence of a degree distribution
https://cs.stackexchange.com/questions/55342/how-to-generate-a-degree-sequence-of-a-degree-distribution
<p>How to generate a degree sequence of a degree distribution that follows the power-law in which I know $N=10^2$ and $\gamma=2.5$?</p> <p>The degree distribution of power-law is $p_k \sim k^{-\gamma}$.</p> <p>I want to generate a power-law network using the <strong>configuration model</strong>, but to do that I need to know the degree sequence $seq={k_1, k_2, ..., k_N}$.</p> <p>Thanks</p>
<p>As you mention, the degree distribution follows a power law if the number of nodes of degree $k$ is roughly $N \times C^{-1} k^{-\gamma}$, where $C = \sum_{k = 1}^\infty k^{-\gamma} = \zeta(\gamma)$ (in your case, $C \approx 1.34$). Plugging in your numbers, we want roughly</p> <ol> <li>75 vertices of degree 1.</li> <li>13 vertices of degree 2.</li> <li>5 vertices of degree 3.</li> <li>2 vertices of degree 4.</li> <li>1 vertex of degree 5.</li> <li>1 vertex of degree 6.</li> <li>1 vertex of degree 7.</li> </ol> <p>This gives a total of 98 vertices. You can make it exactly 100 vertices in many ways &mdash; I'll let you think of one. Don't forget that the sum of degrees should be even.</p> <p>The (potential) problem is that <em>not every sequence is the degree sequence of a graph</em>. Sequences which are realized by some graph are known as <a href="http://mathworld.wolfram.com/GraphicSequence.html" rel="nofollow">graphic sequences</a>, and are characterized by the Erdős&ndash;Gallai criterion. If the resulting sequence doesn't satisfy this criterion, we're in trouble. However, I don't expect that to happen in your case. Indeed, my suggestion above (with 98 vertices) is graphic.</p>
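If a sampled (rather than expected-count) sequence is acceptable, here is a small Python sketch; the function name and the parity fix are my own choices, and it does not verify the Erdős&ndash;Gallai condition:

```python
import random

def power_law_degree_sequence(n, gamma, k_max=None, seed=0):
    # Sample degrees k = 1..k_max with probability proportional to k**-gamma.
    if k_max is None:
        k_max = n - 1          # a simple graph caps degrees at n - 1
    rng = random.Random(seed)
    ks = list(range(1, k_max + 1))
    weights = [k ** -gamma for k in ks]
    seq = rng.choices(ks, weights=weights, k=n)
    if sum(seq) % 2 == 1:      # the configuration model needs an even degree sum
        seq[seq.index(min(seq))] += 1
    return seq

seq = power_law_degree_sequence(100, 2.5)
```

The resulting `seq` can be fed directly to a configuration-model generator; if the sequence turns out not to be graphic, resample with a different seed.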
508
sequence-to-sequence model
multiple sequence alignment using HMM and simulated annealing
https://cs.stackexchange.com/questions/73765/multiple-sequence-alignment-using-hmm-and-simulated-annealing
<p>Can anyone help me with Multiple Sequence Alignment (MSA) using a Hidden Markov Model (HMM) by giving an example or a reference other than these two:</p> <p>1. <a href="https://www.fing.edu.uy/~alopeza/biohpc/papers/hmm/Eddy95b-Multiple_Alignment_using-HMM-preprint.pdf" rel="nofollow noreferrer">Eddy, S. R., et al., Multiple alignment using hidden Markov models</a>; 2. <a href="https://sco.h-its.org/exelixis/web/teaching/seminar2016/Example2.pdf" rel="nofollow noreferrer">Boer Jonas, Multiple alignment using hidden Markov models, Seminar Hot Topics in Bioinformatics</a>.</p> <p>I know that there are 3 states (match, deletion and insertion) and I know the emission and transition probabilities can be learned, e.g. by Viterbi training. What is vague to me is the chicken-and-egg problem: to do multiple alignment I need an HMM, but to train the HMM I need aligned sequences, and the sequences are unaligned. Also, with simulated annealing we can inject randomness into the model and get better solutions, and this algorithm is different from the E-M algorithm. I have another question: how many states should our HMM for this problem have at the first step? Does the number of states change during convergence, or is it fixed from the start?</p> <p>If anybody can help me understand what really happens in MSA with an HMM, I'll appreciate it.</p> <p>I should explain that many more sequences of DNA, RNA and protein have been found than we have information about the structure and function of each protein, so we do MSA to understand the similarities between sequences, find out whether they are homologous (share a common ancestor) or not, and infer the unknown structures and functions of sequences.</p>
<p>Part of this question already has an answer at <a href="https://www.biostars.org/p/245355/" rel="nofollow noreferrer">multiple sequence alignment using HMM and simulated annealing</a>. As for the number of states in the HMM, choosing it remains an open challenge.</p>
509
sequence-to-sequence model
Model checking and dependently typed languages for formal verification
https://cs.stackexchange.com/questions/170371/model-checking-and-dependently-typed-languages-for-formal-verification
<p>What are the differences and limitations between model checking and type-checking dependent types for verifying correctness?</p> <p>If I were to model a state machine in a language like Idris, what can't I verify that a model checker can and vice-versa? I can enforce valid transitions in Idris, but can I prove reachability or that after any sequence of events the system never reaches some invalid state?</p>
<p>Model checkers tend to be better for checking control-flow properties, i.e., about the sequence of events, particularly in the presence of concurrency, threads, etc.</p> <p>Dependent types tend to be better for checking data-flow properties, i.e., about the values and types that variables can hold, and typically are not used for sophisticated reasoning about which interleaving of control-flow paths are feasible.</p> <p>None of these are hard-and-fast limitations or boundaries. They just reflect what the tools are most commonly used for or most naturally suited to.</p> <p>Model checkers are often applied to a model of the system, rather than the code itself -- but not always. Dependent types are often applied to the source code itself, rather than a separate model -- but not always.</p>
510
sequence-to-sequence model
Quantum Computing - Relationship between Hamiltonian and Unitary model
https://cs.stackexchange.com/questions/28234/quantum-computing-relationship-between-hamiltonian-and-unitary-model
<p>When developing algorithms in quantum computing, I've noticed that there are two primary models in which this is done. Some algorithms - such as for the Hamiltonian NAND tree problem (Farhi, Goldstone, Gutmann) - work by designing a Hamiltonian and some initial state, and then letting the system evolve according to the Schrödinger equation for some time $t$ before performing a measurement.</p> <p>Other algorithms - such as Shor's algorithm for factoring - work by designing a sequence of unitary transformations (analogous to gates) and applying these transformations one at a time to some initial state before performing a measurement.</p> <p>My question is, as a novice in quantum computing, what is the relationship between the Hamiltonian model and the unitary transformation model? Some algorithms, like for the NAND tree problem, have since been adapted to work with a sequence of unitary transformations (Childs, Cleve, Jordan, Yonge-Mallo). Can every algorithm in one model be transformed into a corresponding algorithm in the other? For example, given a sequence of unitary transformations to solve a particular problem, is it possible to design a Hamiltonian and solve the problem in that model instead? What about the other direction? If so, what is the relationship between the time in which the system must evolve and the number of unitary transformations (gates) required to solve the problem?</p> <p>I have found several other problems for which this seems to be the case, but no clear-cut argument or proof that would indicate that this is always possible or even true. Perhaps it's because I don't know what this problem is called, so I am unsure what to search for.</p>
<p>To show that Hamiltonian evolution can simulate the circuit model, one can use the paper <a href="http://arxiv.org/abs/1205.3782">Universal computation by multi-particle quantum walk</a>, which shows that a very specific kind of Hamiltonian evolution (multi-particle quantum walks) is BQP complete, and thus can simulate the circuit model. </p> <p><a href="http://arxiv.org/abs/1004.5528">Here</a> is a survey paper on simulating quantum evolution on a quantum computer. One can use the techniques in this paper to simulate the Hamiltonian evolution model of quantum computers. To do this, one needs to use "Trotterization", which substantially decreases the efficiency of the simulation (although it only introduces a polynomial blowup in computation time). </p>
511
sequence-to-sequence model
Can the pseudo-random-sequence generator be described as a finite state automaton?
https://cs.stackexchange.com/questions/102839/can-the-pseudo-random-sequence-generator-be-described-as-a-finite-state-automato
<p>I am thinking of some real examples of <a href="https://en.wikipedia.org/wiki/Finite-state_machine" rel="nofollow noreferrer">FSAs</a> in order to help me learn how to use the FSA model. As I understand it, a pseudo-random-sequence generator should be a kind of <a href="https://en.wikipedia.org/wiki/Deterministic_finite_automaton" rel="nofollow noreferrer">DFA</a> or <a href="https://en.wikipedia.org/wiki/Finite-state_transducer" rel="nofollow noreferrer">FST</a>. But I cannot describe a pseudo-random-sequence generator (e.g. an <a href="https://en.wikipedia.org/wiki/Linear-feedback_shift_register" rel="nofollow noreferrer">LFSR</a>) using <span class="math-container">$(Q, \Sigma, \delta, q_{0}, F)$</span>.</p> <p>Specifically, the input of the pseudo-random-sequence generator is the seed, which is a short string, while its output is a long (or even infinite) string. Maybe we need to construct an FSA with a clock, but I have no idea. And what is <span class="math-container">$F$</span> (the set of final states) here? I cannot imagine what the language of a pseudo-random-sequence generator would be.</p>
<p>A finite-state automaton can output an unending sequence of outputs, so you don't need to do anything special. The regular language will contain all prefixes of possible output streams.</p> <p>If you really want to consider infinite strings, you could use <a href="https://en.wikipedia.org/wiki/B%C3%BCchi_automaton" rel="nofollow noreferrer">Büchi automata</a> to model that. Personally, I don't see the point.</p>
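To see why nothing special is needed, here is a small Python sketch of an LFSR viewed as a finite-state transducer: the state set $Q$ is the finite set of register contents, and each clock tick emits one output bit and moves to the next state. The function name and tap choice are illustrative, not from the original post.

```python
def lfsr_stream(seed_bits, taps, n_out):
    # State = current register contents: a finite set of 2**len(seed_bits) states.
    # Each step emits the last bit and shifts in the XOR of the tapped bits.
    state = list(seed_bits)
    out = []
    for _ in range(n_out):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# 4-bit register; this tap choice gives a maximal period of 2**4 - 1 = 15.
bits = lfsr_stream([1, 0, 0, 0], taps=[3, 2], n_out=30)
```

Since the state space is finite and the transition function is deterministic, the output stream is eventually periodic, which is exactly the finite-automaton behavior described in the answer.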
512
sequence-to-sequence model
Shell algorithm knuth sequence time complexity analysis
https://cs.stackexchange.com/questions/160442/shell-algorithm-knuth-sequence-time-complexity-analysis
<p>Given this shell sort implementation:</p> <pre><code>void shell(float *a, int l, int r) {
    int i, j, h;
    for (h = 1; 3*h + 1 &lt;= r - l; h = 3*h + 1);
    for (; h &gt; 0; h /= 3) {
        for (i = l + h; i &lt;= r; ++i) {
            for (j = i; j &gt;= l + h &amp;&amp; a[j] &lt; a[j-h]; j -= h) {
                swap(a + j - h, a + j);
            }
        }
    }
}
</code></pre> <p>I want to analyse its time complexity, but I am having some difficulty modelling it mathematically in the form of two summation sigmas. Can you help with that, please? We know that for the Knuth gap sequence the time complexity is <span class="math-container">$O(n^{3/2})$</span>.</p>
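One way to sanity-check an analysis is to count element comparisons empirically. Here is a Python port of the shell sort above (the comparison counter is instrumentation I added), whose count can be compared against $n^{3/2}$ for growing $n$:

```python
import random

def shell_sort_knuth(a):
    # Knuth gap sequence: h = 1, 4, 13, 40, ... (h = 3h + 1), as in the C code.
    n = len(a)
    h = 1
    while 3 * h + 1 <= n - 1:
        h = 3 * h + 1
    comparisons = 0
    while h > 0:
        for i in range(h, n):
            j = i
            while j >= h:
                comparisons += 1            # one element comparison a[j] < a[j-h]
                if a[j] < a[j - h]:
                    a[j], a[j - h] = a[j - h], a[j]
                    j -= h
                else:
                    break
        h //= 3
    return comparisons

random.seed(1)
data = [random.random() for _ in range(1000)]
ops = shell_sort_knuth(data)
```

Running this for several sizes and plotting `ops / n**1.5` should give a roughly bounded curve, consistent with the stated $O(n^{3/2})$ bound.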
513
sequence-to-sequence model
How to Generate Control flow graph from a Petri net model?
https://cs.stackexchange.com/questions/49264/how-to-generate-control-flow-graph-from-a-petri-net-model
<p>My research is mainly focused on generating test sequences automatically using Colored Petri nets. CFGs provide techniques for generating test sequences, but some papers say that test sequence generation methods based on a control flow graph sometimes suffer from a feasibility problem (i.e. some paths in a CFG may not be feasible). So I need a technique for drawing a control flow graph from the Colored Petri net model in such a way that, when I select paths from the control flow graph, it contains only the feasible paths. Then I will generate test sequences from the CFG. Can anyone help me find such a technique, or give me any suggestion?</p>
<p>Since you are interested in generating test sequences automatically using colored Petri nets, note that it's not clear that you need a reduction to control flow graphs (and all the related issues that come with it). Several techniques have been presented that use a variety of methods to generate test sequences from Petri nets. Some examples include:</p> <p><a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=496973" rel="noreferrer">H. Watanabe and T. Kudoh, "Test suite generation methods for concurrent systems based on coloured Petri nets," Software Engineering Conference, 1995. Proceedings., 1995 Asia Pacific, Brisbane, Qld., 1995, pp. 242-251.</a></p> <p><a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=633178" rel="noreferrer">J. Desel, A. Oberweis, T. Zimmer and G. Zimmermann, "Validation of information system models: Petri nets and test case generation," Systems, Man, and Cybernetics, 1997. Computational Cybernetics and Simulation., Orlando, FL, 1997, pp. 3401-3406 vol.4.</a></p> <p><a href="http://link.springer.com/chapter/10.1007%2F978-3-642-21834-7_17" rel="noreferrer">Xu, Dianxiang. "A tool for automated test code generation from high-level Petri nets." Applications and Theory of Petri Nets. Springer Berlin Heidelberg, 2011. 308-317.</a></p> <p>On the other hand, since Petri nets and control flow graphs are not semantically equivalent in general (and capture different types of information), the translation between the two is usually done within some specific domain (and under restrictions). (Although usually, control flow graphs can be transformed into equivalent Petri nets.) Some examples and discussions are given in:</p> <p><a href="http://dl.acm.org/citation.cfm?id=951600" rel="noreferrer">De Jong, Gjalt G. "Data flow graphs: system specification with the most unrestricted semantics." Proceedings of the conference on European design automation. IEEE Computer Society Press, 1991.</a></p> <p><a href="http://www.cs.uoregon.edu/research/paracomp/papers/mattms/matt_msthesis.pdf" rel="noreferrer">The Design Of A General Method For Constructing Coupled Scientific Simulations, Matthew J. Sottile</a></p> <p>Note that your question seems a bit too general. (Moreover, it also seems like an instance of the XY problem, as noted by D.W.) Without more details, it is hard to give a concrete and concise answer. Hopefully, the pointers above can get you closer to your desired goal.</p>
514
sequence-to-sequence model
Applying Baum-Welch to multiple observed sequences iteratively
https://cs.stackexchange.com/questions/157314/applying-baum-welch-to-multiple-observed-sequences-iteratively
<p>When using the Baum-Welch algorithm to train a hidden Markov model, you normally repeat it on some observed sequence iteratively until your values converge.</p> <p>If you have multiple observed sequences, Wikipedia tells you to run the update on all sequences in parallel, combine them into a new model, and apply this process iteratively.</p> <p>Is it also possible to run on different observed sequences iteratively, one after the other? I.e., first train on sequence S1, then on sequence S2, then S3, ..., then Sn, then start from S1 again. Would this converge at all?</p>
515
sequence-to-sequence model
What is a de Bruijn sequence exactly?
https://cs.stackexchange.com/questions/150856/what-is-a-de-bruijn-sequence-exactly
<p>I just discovered the term &quot;<a href="https://en.wikipedia.org/wiki/De_Bruijn_sequence" rel="nofollow noreferrer">de Bruijn sequence</a>&quot;, but don't quite follow what it means exactly (or how de Bruijn is pronounced :), &quot;<a href="https://www.biostars.org/p/7186/" rel="nofollow noreferrer">brown</a>&quot; I guess).</p> <p>There are two good resources I am looking at for better understanding (it seems like they have a good model of them):</p> <ul> <li><a href="http://debruijnsequence.org/db/home" rel="nofollow noreferrer">debruijnsequence.org</a></li> <li><a href="http://www.cis.uoguelph.ca/%7Esawada/papers/DBframework.pdf" rel="nofollow noreferrer"><em>A Framework for Constructing De Bruijn Sequences Via Simple Successor Rules by Gabric et al.</em></a></li> </ul> <p>Looking first at the website, they say that <span class="math-container">$k$</span> is the number of symbols in the alphabet (so for binary it's 2), and <span class="math-container">$n$</span> is the length of each substring in an overall string (I don't know what the string is labeled as, maybe &quot;DB sequence&quot; I guess, de Bruijn sequence, of substrings). They show this as a <span class="math-container">$k = 2, n = 4$</span> system:</p> <pre><code>0000 0001 0010 0101 1011 0110 1101 1010 0100 1001 0011 0111 1111 1110 1100 1000 </code></pre> <p>So I see, <span class="math-container">$n = 4$</span> (length of substring), and <span class="math-container">$k = 2$</span> (<span class="math-container">$1$</span> and <span class="math-container">$0$</span>).</p> <p>However, then I get lost at the sentence:</p> <blockquote> <p><code>0000101101001111</code> is a DB sequence where the 16 unique substrings of length 4 visited in order are:</p> </blockquote> <p>I get there are 16 unique substrings, but where does <code>0000101101001111</code> come from? Breaking it into 4, it is:</p> <pre><code>0000 1011 0100 1111 </code></pre> <p>That's the 1, 5, 9, 13 elements of that sequence. 
Why did they just use that to describe this DB sequence?</p> <p>Then in the <a href="http://www.cis.uoguelph.ca/%7Esawada/papers/DBframework.pdf" rel="nofollow noreferrer">linked paper</a>, they show:</p> <p><a href="http://www.cis.uoguelph.ca/%7Esawada/papers/DBframework.pdf" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gElVV.png" alt="enter image description here" /></a></p> <p>They say each of these lines is a DB sequence with <span class="math-container">$k = 2$</span> and <span class="math-container">$n = 6$</span>. So from all that I conclude that the substring is length 6 in this case. So I break down the first line:</p> <pre><code>000000 100001 100010 100011 100100 101100 110100 111101 010111 011011 1111 </code></pre> <p>It doesn't even break into equal length substrings all of length <span class="math-container">$6$</span>, what am I missing? I thought this should be substrings of length 6 which cover all possibilities of the substring combinations...</p> <p>So going back to <a href="https://en.wikipedia.org/wiki/De_Bruijn_sequence" rel="nofollow noreferrer">wikipedia</a>, they have some examples, let's see.</p> <blockquote> <p>Taking A = {0, 1}, there are two distinct B(2, 3): 00010111 and 11101000, one being the reverse or negation of the other.</p> </blockquote> <p>where <span class="math-container">$B(k, n)$</span>. Breaking the first down into 3 chunks, we have:</p> <pre><code>000 101 11 </code></pre> <p>Also not chunks of 3. What am I misinterpreting?</p>
<p>A de Bruijn sequence <span class="math-container">$B(k,n)$</span> has length <span class="math-container">$N=k^n$</span> and is <em>cyclic</em>, i.e. it can be extended by repeating the sequence.</p> <p>If it is <span class="math-container">$x_0,\ldots,x_{N-1}$</span> then the <span class="math-container">$N$</span> overlapping windows of length <span class="math-container">$n$</span>, <span class="math-container">$$ x_0\cdots x_{n-1}\\ x_1 \cdots x_{n}\\ ~\\ \vdots\\ ~\\ x_{N-1} \cdots x_{N+n-2} $$</span> (indices taken modulo <span class="math-container">$N$</span>), are all distinct and thus cover each <span class="math-container">$n$</span>-tuple over an alphabet of size <span class="math-container">$k$</span> exactly once.</p>
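To make the sliding-window reading concrete, here is a short Python check (a sketch added for illustration, using the exact sequence from the question): the substrings are the 16 *overlapping* cyclic windows of length 4, not consecutive chunks, and they come out in precisely the order listed in the question.

```python
# Verify the cyclic length-4 windows of the B(2,4) sequence from the question.
seq = "0000101101001111"
n = 4
N = len(seq)                                   # N = k^n = 2^4 = 16

# Doubling the string makes the cyclic wrap-around windows easy to take.
windows = [(seq * 2)[i:i + n] for i in range(N)]

print(windows)
# Every binary string of length 4 appears exactly once as a cyclic window.
assert len(set(windows)) == 16
```

Running this reproduces the listing `0000 0001 0010 0101 1011 ...` from the question, which is why that 16-character string describes the whole table.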
516
sequence-to-sequence model
Name of Generating One Value at a Time in Sequence Generation vs Encoder Decoder
https://cs.stackexchange.com/questions/88620/name-of-generating-one-value-at-a-time-in-sequence-generation-vs-encoder-decoder
<p>I have a question about machine learning, specifically recurrent models. For machine translation, recurrent neural networks show great promise; common here is an encoder-decoder architecture, which takes a source sentence, reads it, and then, based on a compressed representation, outputs a target sentence. In contrast, for sequence generation you can also output one symbol at a time (let's stick with characters), e.g. like the char-rnn. You condition your model to learn the next character based on the ones it has read, so the model can e.g. create h-e-l-l-o, one character at a time. What would you call this second approach? Does it have a name? Thanks</p>
517
sequence-to-sequence model
Is integer sorting possible in O(n) in the transdichotomous model?
https://cs.stackexchange.com/questions/41255/is-integer-sorting-possible-in-on-in-the-transdichotomous-model
<p>To my knowledge there doesn't exist an $O(n)$ worst-case algorithm that solves the following problem:</p> <blockquote> <p>Given a sequence of length $n$ consisting of finite integers, find the permutation where every element is less than or equal to its successor.</p> </blockquote> <p>But is there a proof that it doesn't exist, in the <a href="https://en.wikipedia.org/wiki/Transdichotomous_model" rel="nofollow">transdichotomous model of computation</a>?</p> <hr> <p>Note that I'm not limiting the range of the integers. I'm not limiting solutions to comparison sorts either.</p>
<p><a href="http://arxiv.org/abs/0706.4107" rel="nofollow">Integers can be stably sorted in $O(n)$ time with $O(1)$ additional space.</a> More precisely, if you have $n$ integers in the range $[1, n^c]$, the can be sorted in O(n) time.</p> <p>This was only shown a couple of years ago by a team including the late Mihai Pătrașcu (which should surprise nobody who is familiar with his work). It's a remarkable result which I'm surprised more people don't know about, because it means that the problem of sorting integers is (theoretically) solved.</p> <p>There is a practical algorithm (given in the paper above) if you're allowed to modify keys. Basically, you can compress sorted integers more than you can compress unsorted integers, and the extra space that you gain is precisely equal to the extra memory needed to do the radix sort. They also give an impractical algorithm which supports read-only keys.</p>
518
sequence-to-sequence model
Find sequence, given partial information about all pairs
https://cs.stackexchange.com/questions/16498/find-sequence-given-partial-information-about-all-pairs
<p>Let $\Sigma$ be a small, finite alphabet. Suppose we are given ${n\choose 2}$ sets $S_{i,j}$, where $S_{i,j} \subseteq \Sigma \times\Sigma$. I'd like to determine whether there exists a sequence $x_1,x_2,\dots,x_n \in \Sigma$ such that $(x_i,x_j) \in S_{i,j}$ for all $i,j$, and if so, find an example of such a sequence.</p> <p>Are there any good algorithms for this problem?</p> <p>Also: Suppose I model each set $S_{i,j}$ as a randomly chosen subset of $\Sigma \times \Sigma$ where each of the possible elements is included in $S_{i,j}$ with probability $p$ (independently of everything else). Thus, the expected size of each $S_{i,j}$ is $p \cdot |\Sigma|^2$. Is there any characterization of the range of values of $p$ for which this problem should be efficiently solvable?</p> <p>(In my application, $|\Sigma|=10$, if that helps.)</p> <p>This looks like some sort of 2-CSP (constraint satisfaction problem, where each constraint is on exactly 2 variables), but I don't know what more we might be able to say.</p>
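The question doesn't come with an algorithm, but for tiny $n$ and $|\Sigma|$ a brute-force checker at least pins down the problem statement. The helper `find_sequence` below is a hypothetical name for illustration, not a known algorithm for the general case (it is exponential in $n$):

```python
from itertools import product

def find_sequence(sigma, n, S):
    """Return some (x_1, ..., x_n) with (x_i, x_j) in S[(i, j)] for all
    i < j, or None. Brute force over |sigma|**n candidates; tiny n only."""
    for cand in product(sigma, repeat=n):
        if all((cand[i], cand[j]) in S[(i, j)]
               for i in range(n) for j in range(i + 1, n)):
            return cand
    return None

# Tiny instance over sigma = {0, 1} whose constraints force (0, 1, 0).
S = {(0, 1): {(0, 1)}, (0, 2): {(0, 0)}, (1, 2): {(1, 0)}}
print(find_sequence([0, 1], 3, S))   # (0, 1, 0)
```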
519
sequence-to-sequence model
Signal translation with Seq2Seq model
https://cs.stackexchange.com/questions/130020/signal-translation-with-seq2seq-model
<p>I'm currently doing some research on signal processing and I got a dataset which includes the signal in itself and its &quot;translation&quot;.</p> <p><a href="https://i.sstatic.net/Bnp2P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bnp2P.png" alt="A signal and its translation" /></a></p> <p>So I want to use a Many-to-Many RNN to translate the first into the second.</p> <p>After spending a week reading about the different options I have, I ended up learning about RNNs and Seq2Seq models. I believe this is the right solution for the problem (correct me if I'm wrong).</p> <p>Now, as the input and the output are of the same length, I don't need to add padding and thus I tried a simple LSTM layer and TimeDistributed Dense layer (Keras):</p> <pre><code>model = Sequential([
    LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2),
    TimeDistributed(Dense(units=1, activation=&quot;softmax&quot;))
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
</code></pre> <p>But the model seems to learn nothing from the sequence and when I plot the &quot;prediction&quot;, it is nothing but values between 0 and 1.</p> <p>As you can see, I'm a beginner and the code I wrote might not make sense to you, but I need guidance on a few questions:</p> <ul> <li>Does the model make sense for the problem I'm trying to solve?</li> <li>Am I using the right loss/activation functions?</li> <li>And finally, please correct/teach me</li> </ul>
<p>I am skeptical that machine learning is the right tool for this problem. I would look for a more direct solution, perhaps using peak detection or changepoint detection, or some other form of classical method for time-series analysis.</p> <p>If you do use machine learning, cross-entropy loss is not the right loss for your problem, and you will definitely need to change that.</p>
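One concrete reason the predictions look uninformative, consistent with the point about the loss above: softmax over a *single* output unit always evaluates to exactly 1, whatever the logit. A pure-Python sketch (this assumes `Dense(units=1, activation="softmax")` computes a per-step softmax over that one unit, which is an interpretation, not a Keras quotation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# With a single logit per time step, the softmax output is constant 1.0:
for logit in (-5.0, 0.0, 3.7):
    assert softmax([logit]) == [1.0]
print("softmax over one unit is always", softmax([3.7]))
```

For a real-valued target signal, a linear output with a regression loss (e.g. `activation=None` and `loss='mse'`) would be the conventional starting point; treat that as a hedged suggestion, not a guaranteed fix.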
520
sequence-to-sequence model
Is the time complexity of the Fibonacci sequence O(fib(n))?
https://cs.stackexchange.com/questions/135815/is-the-time-complexity-of-the-fibonacci-sequence-ofibn
<p>I started watching SICP lectures and am totally new to computer science. <a href="https://youtu.be/V_7mmwpgJHU?t=2581" rel="nofollow noreferrer">SICP. LEC 1B: Procedures and Processes; Substitution Model</a></p> <p>I don't know why the time complexity of the Fibonacci sequence is O(Fib(n)). So, I googled about it,</p> <p>he says</p> <blockquote> <p>There's a thing that grows exactly at Fibonacci numbers. It's a horrible thing. You wouldn't want to do it. The reason why the time has to grow that way is because we're presuming in the model-- the substitution model that I gave you, which I'm not doing formally here, I sort of now spit it out in a simple way-- but presuming that everything is done sequentially. That every one of these nodes in this tree has to be examined. And so since the number of nodes in this tree grows exponentially because I add a proportion of the existing nodes to the nodes I already have to add one, then I know I've got an exponential explosion here.</p> </blockquote> <p>Can anybody please explain what he's saying?</p>
<p><span class="math-container">$\DeclareMathOperator{\fib}{fib}$</span>If you choose to calculate <span class="math-container">$\fib(n)$</span> by using the naive recursion formula, then you need <span class="math-container">$\Theta (\fib(n))$</span> additions, and since the numbers involved grow to <span class="math-container">$c\cdot n$</span> bits for a not very large constant <span class="math-container">$c$</span>, it will take <span class="math-container">$\Theta(n \cdot \fib(n))$</span> operations. (I may be wrong here because most of the numbers involved may be much smaller than <span class="math-container">$c \cdot n$</span> bits).</p> <p>If you are reasonably clever and calculate <span class="math-container">$$\fib(0), \fib(1), \fib(2), \dots, \fib(n),$$</span> then you have <span class="math-container">$n$</span> additions of <span class="math-container">$c \cdot n$</span> bit numbers, which takes <span class="math-container">$\Theta(n^2))$</span> operations.</p> <p>You can use matrix exponentiation to calculate <span class="math-container">$\fib(n)$</span> faster for large <span class="math-container">$n$</span>, I'd estimate something like <span class="math-container">$M(n)$</span> where <span class="math-container">$M(n)$</span> is the time for an <span class="math-container">$n$</span>-bit multiplication.</p> <p>Most important is that for maybe <span class="math-container">$n = 100$</span> or <span class="math-container">$n = 200$</span>, your computer has not a chance to calculate <span class="math-container">$\fib(n)$</span> using the naive recursive algorithm within your lifetime, while the simple method of calculating consecutive values will find the same result within microseconds on a newish smartphone.</p>
521
sequence-to-sequence model
Computational power of Actor Model
https://cs.stackexchange.com/questions/50695/computational-power-of-actor-model
<p>In the question below, let TM be Turing machine, NTM be nondeterministic Turing machine and PTM be probabilistic Turing machine.</p> <p>In his paper "Actor Model of Computation: Scalable Robust Information Systems" Carl Hewitt proposes following hypothesis:</p> <blockquote> <p>All physically possible computation can be directly implemented using Actors.</p> </blockquote> <p>He comments that this hypothesis is an update to Church-Turing thesis that all physically computable functions can be implemented using the lambda calculus (or TM), stating that:</p> <blockquote> <p>It is a consequence of the Actor Model that there are some computations that cannot be implemented in the lambda calculus.</p> </blockquote> <p>Then he recalls Plotkin's informal proof of NTM to have property of bounded nondeterminism:</p> <blockquote> <p>Now the set of initial segments of execution sequences of a given nondeterministic program P, starting from a given state, will form a tree. The branching points will correspond to the choice points in the program. Since there are always only finitely many alternatives at each choice point, the branching factor of the tree is always finite. That is, the tree is finitary. Now König's lemma says that if every branch of a finitary tree is finite, then so is the tree itself. In the present case this means that if every execution sequence of P terminates, then there are only finitely many execution sequences. So if an output set of P is infinite, it must contain a nonterminating computation.</p> </blockquote> <p>Then he presents algorithm in Actor Model semantics which seem to have property of unbound nondeterminism, stating that the later proves the hypothesis above.</p> <p>The algorithm is a computation of integer value. 
For an NTM it is stated this way:</p> <p>Step 1: Either print 1 on the next square of tape or execute Step 3.</p> <p>Step 2: Execute Step 1.</p> <p>Step 3: Halt</p> <p>Its properties are: if the NTM halts, there is only a finite number of states which it can be in (bounded nondeterminism, as shown by Plotkin); and it may never halt whatsoever (it has a subtree of computation of infinite depth).</p> <p>For the Actor Model it is stated like this:</p> <ol> <li>Create an Actor which can receive two messages: 'go' (makes it increment a counter and send itself another 'go' message) and 'halt' (makes it return the counter value).</li> <li>Send this actor 'go' and 'halt' messages.</li> </ol> <p>Since Actor Model semantics state that a message is guaranteed to be delivered and there is no restriction on when it will be delivered, if the actor halts, there are infinitely many possible states in which it can be (unbounded nondeterminism); however, it will always halt.</p> <p>However, this really doesn't seem to me to imply the hypothesis stated, and it seems highly unlikely that we can actually achieve Actor Model semantics on real hardware.</p> <p>I'd like to propose the following questions:</p> <ol> <li><p>Does the property of unbounded nondeterminism really have something to do with computational power? Is the 'computing an unbounded integer' algorithm really impossible to implement on any TM?</p></li> <li><p>If so, what is the class of computations that requires it?</p></li> <li><p>Edsger Dijkstra argued that it is impossible to implement a system with unbounded nondeterminism; Tony Hoare agreed that the implementation should try to be reasonably fair. Is the level of fairness achievable today good enough to say that we can actually have computations which require this property implemented on physical hardware?</p></li> <li><p>If we add an entropy source to an NTM (or PTM) in the form of a fair random number generator, can we achieve the same unbounded nondeterminism property as in the Actor Model?</p></li> </ol>
522
sequence-to-sequence model
Model paths by regular languages
https://cs.stackexchange.com/questions/44733/model-paths-by-regular-languages
<p>I want to use a DFA to describe a sequence of movements in a 2D space (the language will be the set of paths accepted by the automaton in a particular case).</p> <p>That is a typical modeling problem: how can I encode a sequence of 2D movements in a DFA?</p> <p>In fact, walking through a DFA or NFA seems a process analogous to walking through the points of a map.</p> <p>A naive example could be: states as points with space coordinates (x, y), and transitions over an alphabet of "up, down", etc. That direct approach is impracticable because "the number of locations is infinite or simply too many". I'm looking for a better and more efficient encoding.</p> <p>Are there any studies about using regular languages for encoding paths or movements?</p>
<p>If the number of locations (e.g. points or regions) is finite, naively, you can say that these locations are your states and you can directly use a DFA with an alphabet containing UP and DOWN. But you already said it's impracticable for your case.</p> <p>Then, let's look at the case where the number of locations is infinite or simply too many. Basically, in this case, you need to recognize that the sequences UP-DOWN, UP-UP-DOWN-DOWN, UP-UP-UP-DOWN-DOWN-DOWN, etc. are equivalent, because an equal number of UPs and DOWNs gets you back to the starting point. This is the classical example of a non-regular language. Therefore I suggest looking into other automata, such as counter machines, if they are sufficient to capture your intentions.</p>
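The finite-location case mentioned above really is just a DFA whose states are the cells; a minimal sketch (grid size, start, and accepting cell are made-up illustration values), with one extra "dead" state for paths that leave the grid:

```python
# DFA whose states are the cells of a small grid; moves off the grid go
# to an implicit rejecting dead state. Accepting states = target cells.
W, H = 4, 3
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def run(path, start=(0, 0), accept=frozenset({(3, 2)})):
    x, y = start
    for step in path:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        if not (0 <= x < W and 0 <= y < H):
            return False               # dead state: walked off the grid
    return (x, y) in accept

print(run("RRRUU"))    # reaches (3, 2): accepted
print(run("UUU"))      # walks off the top edge: rejected
```

The non-regular part is exactly the second paragraph of the answer: recognizing "equal numbers of UP and DOWN" over an *unbounded* area needs a counter, which no finite table of states like `W * H` cells can provide.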
523
sequence-to-sequence model
Tolerance of object size variation for computer vision
https://cs.stackexchange.com/questions/152390/tolerance-of-object-size-variation-for-computer-vision
<p>I'm relatively new to Computer Vision.</p> <p>Let's say, for example, I have a sequence of images of a car driving away from a static camera into the horizon, and I want to use this image set for some bog-standard computer vision experimentation (e.g. to train a CNN to recognize a car). I label the car in each frame with a rectangular bounding box. At the start of the sequence, the area of annotation required to cover the visible car is approximately 1500x500 pixels. At the end of the sequence, the area is only 3x1 pixels. Does there come a point in this sequence of images, whether by some rule of thumb or by a derivable metric, at which it is no longer beneficial to label the car for training purposes (i.e. it will harm model performance)? I remember reading somewhere recently that it's not beneficial to train CV algorithms on objects that change size, but I'm not sure whether there are guidelines for setting acceptable tolerance thresholds.</p>
<p>Your labels should be based on what you want the model to output when used in deployment. If you use this in deployment, is it important to produce a 3x1 bounding box? If it is, you'd better include those images and label those cars. If it is important to not produce a 3x1 bounding box, you'd better include those images and not label those cars. If it doesn't matter whether your model produces a 3x1 bounding box for those tiny cars, then you might consider labelling those cars but marking them as &quot;don't-care&quot; so the model is neither penalized for producing a bounding box nor penalized for failing to produce a bounding box; or excluding those images from the training set.</p> <p>How do you choose the threshold? Choose it based on your application needs.</p> <p>How can you tell what is the impact of different values for the threshold? Probably you have to do this empirically. You can try an experiment where you set one threshold, train a model, and check to see what happens.</p> <p>Is there a rule of thumb? Not that I know of specifically, but the usual way to figure this out is to find a few papers that tackle the same task, and check to see what threshold they use. A plausible starting point is to copy whatever they have used.</p>
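The "don't-care" policy above can be implemented as a small preprocessing pass over the annotations. This is only a sketch: the field names (`width`, `height`, `ignore`) and the 32x32-pixel cutoff are invented for illustration and should be tuned to the application:

```python
# Mark boxes below a pixel-area threshold as "ignore" instead of deleting
# them, so a training loss can skip them rather than treat them as
# negatives. All field names here are hypothetical.
MIN_AREA = 32 * 32   # illustration value; choose per application needs

def apply_ignore_policy(annotations, min_area=MIN_AREA):
    out = []
    for ann in annotations:
        area = ann["width"] * ann["height"]
        out.append({**ann, "ignore": area < min_area})
    return out

anns = [{"label": "car", "width": 1500, "height": 500},
        {"label": "car", "width": 3, "height": 1}]
print(apply_ignore_policy(anns))
```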
524
sequence-to-sequence model
Performance of smoothing vs viterbi algorithm with HMMs
https://cs.stackexchange.com/questions/102697/performance-of-smooting-vs-viterbi-algorithm-with-hmms
<p>To experiment, I implemented a discrete HMM; the transition matrix and emission model are randomly, uniformly generated. Then, a sequence of random states and emissions are produced by the HMM. Then I run smoothing (forward-backward algorithm) to identify the most likely states at each time. Finally I run the viterbi algorithm to identify the most likely sequence. </p> <p>Because I know the true sequence of states, I can compare how well the two algorithms predict the actual sequence. With #states = 2, #emission types = 2, and sequenceLength=100, viterbi gets about 60% of the states correct, while smoothing gets about 73% correct. </p> <p>Does this make sense? My prior was that viterbi would get more correct than smoothing, but I'm relatively uncertain. Also, it seems disappointing that viterbi is only 10% better than random guessing. I'm wondering if there's a bug in my implementation. </p> <p>Any confirmation one way or the other would be greatly appreciated. </p>
<p>Viterbi is not necessarily always better than smoothing. Viterbi returns the MAP estimate, but for some regions in the input space, there might be so much uncertainty that it is better to go with the &quot;reject option&quot;. This is mentioned in Kevin Murphy's &quot;Machine Learning: a probabilistic perspective&quot;, section 17.4.1, and I quote:</p> <blockquote> <p>It is not surprising that smoothing makes fewer errors than Viterbi, since the optimal way to minimize bit-error rate is to threshold the posterior marginal.</p> </blockquote> <p>I can't say I completely understand this issue with Viterbi, but wanted to share nonetheless.</p>
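The experiment from the question can be re-run in pure Python; the sketch below (random 2-state HMM, sequence length 100) shows how the two decoders are usually implemented: smoothing takes the argmax of each posterior marginal from forward-backward, Viterbi backtracks the single MAP path. Accuracies will vary with the seed, so no particular ordering is asserted here.

```python
import random
from math import log

random.seed(0)
S, E, T = 2, 2, 100            # number of states, emission symbols, steps

def rand_dist(k):
    w = [random.random() + 1e-6 for _ in range(k)]   # strictly positive
    z = sum(w)
    return [x / z for x in w]

A  = [rand_dist(S) for _ in range(S)]   # transition matrix A[i][j]
B  = [rand_dist(E) for _ in range(S)]   # emission matrix B[i][e]
pi = rand_dist(S)                       # initial distribution

def sample(dist):
    r, c = random.random(), 0.0
    for i, p in enumerate(dist):
        c += p
        if r < c:
            return i
    return len(dist) - 1

# Simulate a hidden state sequence and its emissions.
states, obs, s = [], [], sample(pi)
for _ in range(T):
    states.append(s)
    obs.append(sample(B[s]))
    s = sample(A[s])

# Smoothing: forward-backward with per-step normalization.
a0 = [pi[i] * B[i][obs[0]] for i in range(S)]
alpha = [[x / sum(a0) for x in a0]]
for t in range(1, T):
    a = [sum(alpha[-1][j] * A[j][i] for j in range(S)) * B[i][obs[t]]
         for i in range(S)]
    alpha.append([x / sum(a) for x in a])
beta = [[1.0] * S]
for t in range(T - 2, -1, -1):
    b = [sum(A[i][j] * B[j][obs[t + 1]] * beta[0][j] for j in range(S))
         for i in range(S)]
    beta.insert(0, [x / sum(b) for x in b])
smooth = [max(range(S), key=lambda i: alpha[t][i] * beta[t][i])
          for t in range(T)]

# Viterbi in log space, with backtracking.
delta, psi = [[log(pi[i]) + log(B[i][obs[0]]) for i in range(S)]], []
for t in range(1, T):
    row, back = [], []
    for i in range(S):
        best = max(range(S), key=lambda j: delta[-1][j] + log(A[j][i]))
        row.append(delta[-1][best] + log(A[best][i]) + log(B[i][obs[t]]))
        back.append(best)
    delta.append(row)
    psi.append(back)
vit = [max(range(S), key=lambda i: delta[-1][i])]
for back in reversed(psi):
    vit.insert(0, back[vit[0]])

def acc(pred):
    return sum(p == q for p, q in zip(pred, states)) / T

print("smoothing accuracy:", acc(smooth))
print("viterbi accuracy:  ", acc(vit))
```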
525
sequence-to-sequence model
Problem with understanding two sided Matching Algorithm: maximium cardinality
https://cs.stackexchange.com/questions/120236/problem-with-understanding-two-sided-matching-algorithm-maximium-cardinality
<p>I am trying to understand the maximum cardinality problem in the context of stable matching algorithms. I am reading the following article at the link:</p> <p><a href="https://www.hindawi.com/journals/mpe/2015/241379/" rel="nofollow noreferrer">A Two-Sided Matching Decision Model Based on Uncertain Preference Sequences</a></p> <p>The article says that:</p> <blockquote> <p>In general, we can categorize two-sided matching problem into three typical kinds of models in terms of different decision objectives: stable matching, maximum cardinality matching, and maximum weight matching. In the first model, the objective is to seek a stable matching solution, and we count a solution as stable matching only when there does not exist any alternative pairing (𝐴, 𝐵) in which 𝐴 and 𝐵 are individually better off than they would be with the element currently matched. Gale and Shapley put forward an approach, also named Gale-Shapley algorithm, to get a stable matching solution in the perspective of mathematics and game theory, which symbolizes the beginning of two-sided matching research and enlightens the subsequent scholars to pay more attention to this topic. In the second model, the objective is to seek a solution in which the number of matching pairs is maximized.</p> </blockquote> <p>I am able to understand stable matching. I can’t understand how the number of matching pairs is maximized. This may occur because we have 2 sets: one of boys and the other of girls. One element in one set has more than one match in the other set. This might occur due to the preference sequences. Am I right about maximum cardinality?</p> <p>I understand a preference sequence as the order of preferences of the elements of one set over the other. Due to maximum cardinality, it is possible that an element of one set has the same preference for multiple elements of the other set.</p> <p>Am I right about preference sequences?</p> <p>Somebody please guide me.</p> <p>Zulfi.</p>
<p>The standard definition of the <a href="https://en.wikipedia.org/wiki/Stable_marriage_problem" rel="nofollow noreferrer">stable marriage problem</a> is, given <span class="math-container">$n$</span> men and <span class="math-container">$n$</span> women, find a stable matching that marries all of the men and women. Consequently, by definition, everyone will be matched and the number of matches will be exactly <span class="math-container">$n$</span> in any solution. So, no, what you mention cannot happen, if you use that as the definition of the stable marriage problem.</p> <p>I'm not 100% sure what your text is referring to when it talks about maximum cardinality matching, but I suspect it's referring to find a <a href="https://en.wikipedia.org/wiki/Matching_(graph_theory)#Maximum-cardinality_matching" rel="nofollow noreferrer">maximum matching in a bipartite graph</a>, which is completely separate problem. In this problem, the matching is not required to be a stable matching and it has nothing to do with the stable marriage problem.</p>
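To make the separate maximum-cardinality problem concrete: it arises when the graph has missing edges, so not everyone can necessarily be matched, and the goal is simply to maximize the number of pairs (no preferences at all). A sketch of the standard augmenting-path algorithm (Kuhn's algorithm, a simple relative of Hopcroft-Karp):

```python
def max_matching(n_left, n_right, edges):
    """Maximum-cardinality bipartite matching via augmenting paths
    (Kuhn's algorithm). edges[u] = right vertices adjacent to left u."""
    match_r = [-1] * n_right            # match_r[v] = left partner of v

    def try_augment(u, seen):
        for v in edges[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or v's partner can be rematched elsewhere.
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = sum(try_augment(u, set()) for u in range(n_left))
    return size, match_r

# 3 boys, 3 girls; boys 0 and 2 are only compatible with girl 0, so at
# most 2 pairs are possible, and the algorithm finds 2.
print(max_matching(3, 3, [[0], [0, 1], [0]]))
```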
526
sequence-to-sequence model
How to design this synchronous circuit?
https://cs.stackexchange.com/questions/12534/how-to-design-this-synchronous-circuit
<p>I have seen this model question on synchronous circuits, but I could not understand the logic; can anyone please help me?</p> <p>"Develop the state diagram for a synchronous sequential circuit which will recognize the bit sequence 1101 (i.e., every time the sequence 1101 is detected in the input bit stream, the circuit has to output a 1 and otherwise a 0)."</p> <p>What is the question here? Do we have to consider all possible combinations (2^4 = 16) to do this? If so, according to the question, do we have only one occasion where we get the output 1?</p> <p>Please explain it. -Regards</p>
<p>The question asks for a circuit which is given a sequence of bits, one in each round, and is supposed to output $1$ each time that the last $4$ bits form the subsequence $1101$; otherwise $0$ should be output. For example, if the input is $110110101$ then the output should be $000100100$. Naturally, the circuit will have to retain some memory.</p>
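The state diagram being asked for can be prototyped in code before drawing it. This sketch builds the transition table KMP-style (state = length of the longest prefix of "1101" matching the current input suffix) and reproduces the example output above:

```python
# Sequence detector for the pattern 1101, overlapping matches allowed.
PATTERN = "1101"

def kmp_automaton(pat):
    """Transition table: delta[state, bit] = length of the longest
    prefix of pat that is a suffix of (matched prefix + bit)."""
    delta = {}
    for state in range(len(pat) + 1):
        for bit in "01":
            s = pat[:state] + bit
            while s and not pat.startswith(s):
                s = s[1:]              # drop chars until it is a prefix
            delta[state, bit] = len(s)
    return delta

def detect(bits, pat=PATTERN):
    delta, state, out = kmp_automaton(pat), 0, []
    for b in bits:
        state = delta[state, b]
        out.append("1" if state == len(pat) else "0")
    return "".join(out)

print(detect("110110101"))   # "000100100", matching the example above
```

Reading the `delta` table off gives the 5-state diagram (states 0-4, with state 4 the "just matched" state) directly.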
527
sequence-to-sequence model
Which transducer models replacement in regex?
https://cs.stackexchange.com/questions/98625/which-transducer-models-replacement-in-regex
<p>I am looking for the right transducer that allows translating a sequence of literals into a sequence of the same literals (or a subset of them) in arbitrary order. For example: ABC => CAB, which, with simple production rules A->a, B->b, C->c, results in the output <em>cab</em>.</p> <p>I am not sure whether a pushdown automaton is enough to solve such a problem.</p> <p>Note: My question amounts to understanding which automaton model lies under the implementation of the widely used regex replacement strategy in common libraries, such as back references in Java (<a href="http://www.vogella.com/tutorials/JavaRegularExpressions/article.html#grouping-and-back-reference" rel="nofollow noreferrer">link</a>) or the replacement part of sed in bash (<a href="https://www.gnu.org/software/sed/manual/html_node/Back_002dreferences-and-Subexpressions.html" rel="nofollow noreferrer">link</a>).</p>
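To make the mechanism from the linked libraries concrete (this only illustrates backreference replacement, it does not settle which abstract transducer class is needed): capturing groups plus backreferences in the replacement string perform exactly the ABC => CAB reordering from the question, after which the production rules apply character by character.

```python
import re

# Capture A, B, C as groups 1-3, then emit them in the order 3, 1, 2.
rules = {"A": "a", "B": "b", "C": "c"}

reordered = re.sub(r"(A)(B)(C)", r"\3\1\2", "ABC")
output = "".join(rules[ch] for ch in reordered)
print(reordered, output)   # CAB cab
```

Note that this reorders within a pattern of *bounded* length; reordering over an unbounded window is a different (and harder) requirement.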
528
sequence-to-sequence model
How does an automaton model a computer or something else?
https://cs.stackexchange.com/questions/27645/how-does-an-automaton-model-a-computer-or-something-else
<ol> <li><p>An automaton, as I have seen so far, is used to tell <strong>if a string belongs to the language that the automaton recognizes</strong>. This is determined by the final state of the automaton running on the string as an input. I wonder what role the output of the automaton plays here for this decision problem?</p></li> <li><p>I saw that an automaton (e.g. a Turing machine) can model <strong>a computer</strong>. A computer takes a program (which can solve any problem, such as evaluating a function, optimizing a function, searching for something, ...) as an input, and outputs what the program asks the computer to do. So when an automaton models a computer, what do the inputs, outputs, states and transition/output rules of the automaton represent?</p></li> <li><p>Some also said that an automaton models <strong>an algorithm</strong>. My understanding is that an algorithm is informally a sequence of instructions (I can't find the formal definition of an algorithm, and wonder what it is), while an automaton has a set of rules for transition and output. So I wonder how to understand that an automaton models an algorithm?</p></li> </ol> <p>1 seems to be just a special case of 2, since the decision problem is just a kind of problem that a computer can solve.</p> <p>Regarding 2 and 3, if an automaton models a computer and a program is its input, since a program represents an algorithm, isn't an algorithm an input to an automaton instead? Thanks.</p>
<p>The impetus behind all of this comes from trying to define a mathematically rigorous model of what "computation" means. Alan Turing did this in 1936 by giving a definition of a simple abstract machine that seemed to be sufficient to describe everything that we'd call computation, a device we now call a Turing Machine.</p> <p>Over the years, people have investigated other models of computation that were restricted in what they could do and discovered that these simpler models were also interesting in themselves. This eventually led to what we might call the standard introduction to computability theory, as follows.</p> <p><strong>Finite Automata</strong>. Starting with this model, we have a finite collection of <strong>states</strong> and define transitions from one state to another depending on the current character being read in an input string. Some of these states are designated as <strong>final</strong> states and the convention is that if the automaton is in a final state after reading all the characters of the input, we say that the automaton <strong>accepts</strong> the input string.</p> <p><strong>(Important Digression): Languages and problems</strong>. Starting with a machine that can determine whether a string is or is not in a particular language, we can transform this task into a task of solving a problem. For instance, we might want to solve the problem "given nonnegative integers $n$ and $m$, are they both even or both odd?" It's not hard to see that this is equivalent to "can we make a finite automaton that accepts all and only strings in the language $L_1=\{a^nb^m \mid n\equiv m\pmod 2\}$?", where $a^nb^m$ is the string of $n$ copies of $a$ followed by $m$ copies of $b$. It turns out that a finite automaton can be built to do this. The point here is that any computational problem can be turned into a language recognition problem, where the language is effectively an encoding of the possible solutions.
In other words, the language $$ \{(0, 0), (0, 2), (0, 4),\dots, (2, 0), (2, 2), (2, 4), \dots, (1, 1), (1, 3), \dots\} $$ can, suitably coded, be recognized by a FA. </p> <p><strong>Other Automata</strong>. It happens that finite automata can't recognize solutions to all problems. For example, a FA (finite automaton) can't recognize the language $L_2=\{a^nb^n\mid n\ge 0\}$, in other words, it isn't powerful enough to determine whether two nonnegative integers $n, m$ are equal. If we modify our FA model by including a stack where we can push or pop symbols depending on the input string (as well as change to another state), then this new model (a <strong>pushdown automaton</strong> or PDA, as it's known) can recognize the language $L_2$ and also the language $L_3=\{a^nb^mc^{n+m}\}$, so a PDA can determine whether two numbers sum to another. It happens, though, that PDAs aren't powerful enough to do the same for products. We can continue this process, adding enhancements to each abstract model of computation, and get a hierarchy of increasingly more powerful machines. The exciting part is that this hierarchy quickly comes to an end.</p> <p><strong>Turing Machines</strong>. If we take a FA and add to it an arbitrarily long list (called a <strong>tape</strong>) initially containing the input and allow moves where the current state and the current contents of an element in the list determine the next state, what we write in the current list element, and whether we move one position forward or backward in the list, we have Turing's 1936 abstract machine, a <strong>Turing Machine</strong>, or TM. This simple device is extremely powerful: it can add, subtract, multiply, divide, move data to a different location on the tape, compare two chunks of data and base its actions on the results of the comparison, and do things like compute prime numbers and give the $n$-th digit of $\pi$.
In fact, it can do everything that modern computers can do and more, since, being an abstract machine, it has an arbitrarily large amount of "memory" on its tape, potentially vastly larger than the number of elementary particles in the universe.</p> <p><strong>Things to do With TMs</strong>. With this model, we can perform all the tasks we're used to:</p> <ul> <li><em>Implement Functions</em>. We can build a TM that, say, computes square roots: given two numbers on its tape, like "3.1416#7", we can define the moves of a TM to produce the square root of the input, 3.1416, to seven digits of precision, so at the end of its calculations, the tape would contain "1.7724559". This is just an <strong>algorithm</strong>: a finite sequence of steps in some model that produces a result according to our specifications.</li> <li><em>Language recognizers (and problem solvers)</em>. We could design a TM that recognizes the language of primes: $L_4=\{2,3,5,7,11,13,17,19, \dots\}$: given an input tape initially containing an integer $P$, this machine could determine if $P\in L_4$, namely if $P$ is the decimal representation of a prime number.</li> <li><em>Programmable Computers</em>. It's even possible to design a <strong>universal TM</strong> which takes a suitably coded description of a TM, $\langle M\rangle$ (perhaps by a listing of its move rules), and an input $X$ and simulate the action of $M$ on input $x$, which of course exactly what a real-world computer plus compiler does.</li> </ul> <hr> <p><strong>Remarks and Consequences</strong></p> <p>Fully fleshed out, this narrative has a flow, from simple machines to, ultimately, a machine that's more powerful than any modern physical computer. The narrative doesn't stop there, though. The <strong>Church-Turing Thesis</strong> says that <em>anything</em> that we would agree falls under our definition of "computation" can be accomplished by a TM. 
In other words, the Church-Turing Thesis is a confident bet that not only are TMs more powerful than any modern computer, but that they can accomplish <em>anything</em> we'd call computation on any future computer.</p> <p>It gets even better, though. Since the (uncountable) infinity of possible functions from the integers to themselves is vastly larger than the (countable) infinity of possible Turing Machines, there must be <em>lots of tasks that TMs (and hence computers) cannot do</em>. That might not be a real-world problem, since the functions that TMs can't compute are in a very real sense indescribable, but there are also plenty of seemingly programmable tasks that TMs (and hence computers) cannot do. For example: given a listing of a program and an input for that program, can we always determine whether that program will eventually halt when given that input? Professional programmers would love such a tool, but it is provably impossible to construct a program that determines, for all program listings and inputs, whether that program will enter an infinite loop. Similarly, there is no hope of determining, in all cases, whether a given program will behave according to some set of specifications. So our fascinating narrative has a sad ending: <strong>there are (infinitely many) tasks computers will never be able to do</strong>.</p>
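To make the "finite memory" idea concrete, here is a small illustrative sketch (not part of the original answer): if we code the pair $(m, n)$ as the string $a^mb^n$ (an assumed encoding), then "$m+n$ is even" is just "the input has even length", and a two-state DFA decides it.

```python
# A minimal DFA simulator. The coding of pairs as strings a^m b^n is an
# assumption made for illustration, not fixed by the text above.

def run_dfa(transitions, start, accepting, s):
    """Run a DFA given as a dict {(state, symbol): next_state}."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Two states track the parity of symbols seen so far -- finite memory.
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "odd", ("odd", "b"): "even"}

print(run_dfa(delta, "even", {"even"}, "aa" + "bbbb"))  # (2, 4): True
print(run_dfa(delta, "even", {"even"}, "a" + "bb"))     # (1, 2): False
```

The same two states work no matter how large $m$ and $n$ get, which is exactly why a FA can decide this even-sum property but not equality of $m$ and $n$.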
529
sequence-to-sequence model
Model marbel toy with finite automata
https://cs.stackexchange.com/questions/26406/model-marbel-toy-with-finite-automata
<p>I'm working through this exercise from the book by Hopcroft <em>et al.</em> Figure 1 below shows a marble-rolling toy. A marble is dropped at A or B. Levers $x_1,x_2$ and $x_3$ cause the marble to fall either to the left or to the right. Whenever a marble encounters a lever, the lever reverses after the marble passes, so the next marble takes the opposite branch.</p> <p><img src="https://i.sstatic.net/k6VK7.png" alt="enter image description here"></p> <p>I managed to solve part (a):</p> <p>a) Model this toy by a finite automaton: to model the toy, a state is represented as a sequence of three bits followed by r or a (previous input rejected or accepted). For instance, 010a means left, right, left, accepted. The transition table is</p> <p><img src="https://i.sstatic.net/F996R.png" alt="enter image description here"></p> <p>b) Informally describe the language of the automaton.</p> <p>But I cannot solve part (b). How can I approach it?</p> <p><strong>EDIT</strong> I found a solution <a href="http://www.eecs.wsu.edu/~cook/tcs/s3.html" rel="nofollow noreferrer">here</a> and modified my question using its suggestions. According to that solution, there is a case:</p> <p>The penultimate lever configuration is * \ /, where * means / or \; the input ends in 1, and X mod 4 = 0 or (X-3) mod 4 = 0 (X is the number of 0's and 1's).</p> <p>From this case I get the subcase:</p> <p>The penultimate lever configuration is * \ /, where * means / or \; the input ends in 1, and X mod 4 = 0.</p> <p>Then the restrictions are: the number of 1's, excluding the final one, is even, and the number of 0's is odd. I can verify this configuration with these restrictions for a few instances, but how can I prove that the $x_2$ lever configuration is \?</p>
<p>Count $A$'s and $B$'s. The next $A$ will exit at $D$ iff the first two levers are in the right position. When does that happen? The first lever toggles at every $A$, but the second lever interacts with the $B$'s.</p> <p>At the same time, the situation for $B$ is symmetric. The next $B$ will <em>not</em> exit at $D$ only if both of its levers are left. When does that happen?</p>
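A quick way to convince yourself of this counting argument is to simulate the toy. The wiring below is a stand-in (the actual routing comes from Figure 1, which isn't reproduced above), so treat `route` and the entry points as assumptions for illustration only.

```python
# Sketch: the FA state of the toy is just the lever positions.
# `route` is a hypothetical wiring, NOT the book's exact figure.

def drop(levers, entry, route):
    """Trace one marble, toggling each lever it passes.
    levers: {lever: 'L' or 'R'}; returns (new lever state, exit point)."""
    pos, levers = entry, dict(levers)
    while pos in levers:
        side = levers[pos]
        levers[pos] = "R" if side == "L" else "L"  # flips after the marble passes
        pos = route[(pos, side)]
    return levers, pos

route = {("x1", "L"): "C", ("x1", "R"): "x3",   # assumed wiring
         ("x2", "L"): "x3", ("x2", "R"): "C",
         ("x3", "L"): "C", ("x3", "R"): "D"}

state = {"x1": "L", "x2": "L", "x3": "L"}
state, out = drop(state, "x1", route)   # marble dropped at A, over x1 (assumed)
print(out, state)   # C {'x1': 'R', 'x2': 'L', 'x3': 'L'}
```

Replaying a whole input sequence this way and recording the exits lets you tabulate, for each count of $A$'s and $B$'s, which lever configuration the toy is in, which is exactly the case analysis the answer asks for.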
530
sequence-to-sequence model
Name for pattern of addressing items by sequence-of-creation
https://cs.stackexchange.com/questions/77537/name-for-pattern-of-addressing-items-by-sequence-of-creation
<p>I'm looking for the name of a particular pattern, and/or resources (articles) on its usage.</p> <p>The context of the pattern is a journalling system to operate on some collection, as in the example given here:</p> <p>We could model a webstore using patterns from event-sourcing. In such a model, we would have a collection (e.g. a set) of products. Manipulation of the set would be modeled as a stream of events; typical events to manipulate the set could be "add", "remove" and "change".</p> <p>All manipulation happens using the events; the resulting collection is constructed simply by replaying the events.</p> <p>Given the linear nature of the stream of events, we can address each of the items in the set using a unique natural number, being the number of items that were added before it.</p> <p>I.e. the first added item would be numbered 0, the second added item 1, etc.</p> <p>Deletion does not affect the number of added items, so it does not affect the next-used number. Of course, it does lead to "gaps" in the sequence. If we were to start with an empty set, add an item and then delete it, the index 0 points to no (currently existing) item, and the next item to be added will be addressed using 1.</p> <p>Such an addressing scheme could be useful in a distributed context, where a shared view exists for the history up to a certain point, but the histories diverge from that point onwards. Using the addressing scheme, the various parts of the distributed mechanism can unambiguously refer to elements from the set.</p> <p>Is this a well known pattern in the literature?</p>
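A minimal sketch of the addressing scheme described in the question (names are made up for illustration): an item's id is the number of add events that preceded it, and deletions retire ids without ever reusing them.

```python
# Illustrative sketch of "address = count of prior adds" with no id reuse.

class EventSourcedSet:
    def __init__(self):
        self.next_id = 0      # number of "add" events seen so far
        self.items = {}       # id -> value for currently existing items

    def add(self, value):
        item_id = self.next_id
        self.items[item_id] = value
        self.next_id += 1
        return item_id

    def remove(self, item_id):
        del self.items[item_id]   # the id itself stays retired forever

s = EventSourcedSet()
a = s.add("apple")       # id 0
s.remove(a)              # index 0 now points to no existing item
b = s.add("banana")      # id 1, never 0 again
print(a, b, sorted(s.items))   # 0 1 [1]
```

Because ids are derived purely from the (shared) event history, two replicas that agree on a prefix of the history assign the same ids to the same items, which is the distributed-reference property the question describes.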
531
sequence-to-sequence model
How to connect the math of recurrence relations to daily programming concepts
https://cs.stackexchange.com/questions/59989/how-to-connect-the-math-of-recurrence-relations-to-daily-programming-concepts
<p>What exactly are we doing from a CS perspective when we solve a recurrence relation and find a resulting formula for a sequence given a set of initial conditions? I just went through the "linear homogeneous recurrence relations of degree k with constant coefficients" bit in discrete math, basically understand the math part, and have a simple process for solving them mechanically.</p> <p>What I haven't seen yet is an explanation of <em>what</em> this corresponds to in a CS sense. I understand we will encounter recurrence relations in algorithms, which we haven't reached yet (next class), but I'm wondering: what exactly do the initial conditions and the sequence represent?</p> <p>For a trivial programming example, if we were to write a recursive function to process a directory and all its subdirectories, I understand conceptually that could be modeled as a recurrence relation because it is recursive, but I don't know <em>how</em>, and I don't know what the initial conditions and final formula for the sequence would represent in such a scenario.</p> <p><a href="https://i.sstatic.net/StgnA.jpg" rel="nofollow noreferrer">Here's an example of the type of relation I'm talking about</a>. So we solve these by reducing the relation to its characteristic equation, finding the roots, and then building a system of equations from the initial conditions $a_0, ..., a_j$ that we solve to find the constants. Finding the constants gives a closed formula for that particular sequence defined by those initial conditions.</p> <p>My question is, from a CS/programming/software engineering perspective what would we model using recurrence relations like this <em>other than algorithms</em>, and what would the initial conditions represent in those models?</p>
<p>In computer science, recurrence relations are often useful for analyzing the running time of algorithms. For instance, we might define a function $T(\cdot)$, so that $T(n)$ is the running time when running the algorithm on an input of size $n$. Often, for many kinds of algorithms, we can write a recurrence relation that expresses $T(n)$ in terms of smaller values, such as</p> <p>$$T(n) = 2T(n/2) + O(n).$$</p> <p>This would be saying that the running time to solve a problem of size $n$, is twice the time to solve a problem of half that size (perhaps because we recursively solve the first half and the second half using the same algorithm), plus $O(n)$ more (perhaps to combine/merge their two solutions).</p> <p>So, the numbers represent running times -- the number of operations the algorithm takes to solve the problem. You might like to take a look at <a href="https://cs.meta.stackexchange.com/q/599/755">Reference answers to frequently asked questions</a> and <a href="https://cs.stackexchange.com/q/192/755">How to come up with the runtime of algorithms?</a>. Also take a look at algorithms textbooks, especially their chapters on running time analysis and divide-and-conquer algorithms.</p> <p>The initial condition of a recurrence (e.g., the value of $T(1)$) depicts the running time of the algorithm on a small instance. Often, for a recursive algorithm, this is the running time of the base case.</p> <p>In computer science, the sequence we are considering is</p> <p>$$T(1),T(2),T(3),T(4),T(5),\dots,$$</p> <p>i.e., the sequence of running times as the input size increases. If you were to write down the sequence of running times for some algorithm, you might spot a pattern, where each term can be computed by a simple formula of a few previous terms (namely, the pattern indicated by the recurrence relation). 
That said, honestly, thinking about this as a sequence might not be the most helpful or intuitive viewpoint (at least, I don't find it to be so); I find it more intuitive to think of $T(\cdot)$ as a function rather than a sequence. The same mathematics still applies.</p> <p>Another way to get some intuition is to work through a few examples where recurrences are used to analyze the running time of some simple algorithms. I think that will also help you get a better intuition of what the mathematical expressions mean and how they relate to the algorithm.</p>
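As a concrete illustration (with the $O(n)$ term taken to be exactly $n$, an assumption made for the sake of the example), the divide-and-conquer recurrence above can be evaluated directly and checked against its closed form:

```python
# Sketch: the recurrence T(n) = 2 T(n/2) + n with T(1) = 1, evaluated
# by memoized recursion. The "+ n" cost is an illustrative simplification
# of the O(n) term in the answer.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k the closed form is T(n) = n*k + n, i.e. Theta(n log n) --
# the familiar cost of a merge-sort-style divide and conquer.
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n

print(T(8))   # 32
```

Here the initial condition `T(1) = 1` is the cost of the base case, and the sequence `T(1), T(2), T(4), ...` is the running time as the input size doubles.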
532
sequence-to-sequence model
How to interpret the execution of distributed computation from time-space diagram?
https://cs.stackexchange.com/questions/38429/how-to-interpret-the-execution-of-distributed-computation-from-time-space-diagra
<p>I'm following a course on Distributed Systems </p> <p><a href="http://www.ict.kth.se/courses/ID2203/index.html" rel="nofollow">http://www.ict.kth.se/courses/ID2203/index.html</a> </p> <p>and am currently learning about asynchronous models. I can't seem to reconcile a given time-space diagram of an execution with the symbolic definition of the execution as a sequence of configurations and events.</p> <p>In the model, an execution is defined as a sequence of configurations interleaved with events, as in:</p> <pre><code>&lt;C,E,C,E,C,E...&gt; </code></pre> <p>where the C's are configurations (states of the nodes, including their buffers) and the E's are events, which in this model are either Computation or Delivery events. </p> <p>A computation event, comp(i), takes place at a single node, indexed by i, and modifies the buffers of node_i and the internal state of node_i and nothing else. A delivery event, del(i,j,m), removes message m from the output buffer of node_i and adds it to the input buffer of node_j.</p> <p>A time-space diagram of a particular execution is given in Slide 11 of this particular lecture: <a href="http://www.ict.kth.se/courses/ID2203/material/Lecture_2._Unit_3._Computation_theorem_and_causality.pdf" rel="nofollow">http://www.ict.kth.se/courses/ID2203/material/Lecture_2._Unit_3._Computation_theorem_and_causality.pdf</a></p> <p>The arcs on a single timeline are comp(i) events at a particular node. The arcs between timelines are del(i,j,m) events between nodes i and j.</p> <p>I'd like to translate this diagram into a sequence of comp and del events, but it seems ambiguous. Specifically, a delivery is initiated from node 1 to node 2 -- del(1,2,m) -- and during the delivery a computation event at node 3 -- comp(3) -- takes place and completes before the delivery event completes.</p> <p>Since the events in an execution are, by definition, totally ordered, I'm not sure whether the comp(3) should precede the del(1,2,m) or vice versa. </p>
<p>Yes, it is ambiguous, and it is ok. A single space-time diagram can be linearized (collapsed into a compatible sequence of configurations and events) in many ways, potentially an exponential number of ways. E.g. imagine that you have 1000 nodes, and each of them performs a local operation. Obviously there's 1000! different ways to linearize this, because there are no inherent causality constraints in this situation.</p> <p>It is not true that events in an execution are totally ordered. They are only partially ordered, using the partial order given by causality constraints (most of which are "a message is sent BEFORE it is received"). There are many total orders compatible with this partial order.</p>
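The point is easy to see concretely with a brute-force sketch (illustrative, not from the lecture) that counts the total orders compatible with a partial order of events:

```python
# Count linearizations of a partial order by filtering all permutations.
# Fine for tiny examples; the count can be exponential in general.

from itertools import permutations

def linearizations(events, before):
    """All orderings of `events` respecting each pair (a, b) in `before`,
    meaning a must occur before b."""
    out = []
    for p in permutations(events):
        if all(p.index(a) < p.index(b) for a, b in before):
            out.append(p)
    return out

# Three independent local events: no causality constraints at all.
free = linearizations(["comp(1)", "comp(2)", "comp(3)"], [])
print(len(free))   # 6

# A send must precede its receive; the unrelated comp(3) can slot anywhere.
constrained = linearizations(["send(1)", "recv(2)", "comp(3)"],
                             [("send(1)", "recv(2)")])
print(len(constrained))   # 3
```

All three of the constrained linearizations describe the same space-time diagram, which is exactly the ambiguity the question observed.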
533
sequence-to-sequence model
ILP representation of the number of maximal 1 sequences in a row
https://cs.stackexchange.com/questions/107180/ilp-representation-of-the-number-of-maximal-1-sequences-in-a-row
<p>I am currently using an ILP to model events which occur on some input sequence from <span class="math-container">$1...n$</span>. These events modify the input sequence in order to obtain a desired sequence. Each event can happen on some consecutive range <span class="math-container">$(i,j)$</span> and no two events can overlap. Therefore, I can represent events by some matrix <span class="math-container">$a=\{0, 1\}^{n\times n}$</span> where <span class="math-container">$a_{i,j} = 1$</span> iff there is an event that begins at <span class="math-container">$i$</span> and ends at <span class="math-container">$j$</span> (<span class="math-container">$i&lt;=j$</span>).</p> <p>Unfortunately, this results in <span class="math-container">$O(n^2)$</span> variables and in some cases <span class="math-container">$O(n^3)$</span> or greater number of constraints. It would be ideal if instead I could represent these events as a row <span class="math-container">$a'=\{0,1\}^n$</span>, where <span class="math-container">$a'_i = 1 \iff \exists a_{p,q}\quad \text{s.t.}\quad p\leq i\leq q \land a_{p,q} = 1$</span><br> i.e. a position <span class="math-container">$a'_i=1$</span> iff there is an event that modifies position <span class="math-container">$i$</span>. </p> <p>Since I am optimizing over the number of events, at the moment I am minimizing <span class="math-container">$\sum a_{p,q}$</span>, since it represents the number of events. If I ditch <span class="math-container">$a$</span> and try to only use <span class="math-container">$a'$</span>, I am left with some row, for example <span class="math-container">$a'=[1,1,0,...,1,0,1]$</span>.</p> <p><strong>Question 1:</strong> I want to represent the total number of events given <span class="math-container">$a'$</span>. Currently, I am thinking <span class="math-container">$\sum a_i(1-a_{i+1})$</span> to count the number of "ends" of events. Obviously I can deal with the edge case as well. 
Unfortunately, this results in a product of binary variables. I know that I can represent this with a helper variable, and just sum over that, but I was wondering if there is a more efficient way.</p> <p><strong>Question 2:</strong> This question is more important, since my whole model revolves around this. An event will only affect the input elements <strong>after</strong> the end of the event. Think of events as duplications. If some range <span class="math-container">$(i,j)$</span> is duplicated, all elements <strong>after</strong> <span class="math-container">$j$</span> would shift to the right by <span class="math-container">$j-i+1$</span> slots. Following this idea of events being duplications, consider the input below:</p> <p><span class="math-container">$a,b,c,d,e$</span> and the desired sequence <span class="math-container">$a,b,a,b,c,d,e,d,e$</span>. We can achieve this by duplicating <span class="math-container">$a,b$</span> and <span class="math-container">$d,e$</span>, i.e. <span class="math-container">$a'=[1,1,0,1,1]$</span>. Notice how positions <span class="math-container">$0$</span> and <span class="math-container">$1$</span> did not shift at all, positions <span class="math-container">$2,3,4,5,6$</span> were originally <span class="math-container">$0,1,2,3,4$</span>, and positions <span class="math-container">$7$</span> and <span class="math-container">$8$</span> were originally <span class="math-container">$3$</span> and <span class="math-container">$4$</span>. </p> <p>Essentially, for every position <span class="math-container">$i$</span> in the resulting sequence, I would like to be able to determine which position <span class="math-container">$j$</span> it came from in the original sequence. Consider position <span class="math-container">$6$</span> in the resulting sequence: we can clearly see it came from position <span class="math-container">$4$</span>, a shift of <span class="math-container">$2$</span>. 
Similarly, position <span class="math-container">$5$</span> came from position <span class="math-container">$3$</span>. A pattern unfolds such that the shift of position <span class="math-container">$i$</span> is always equal to the <span class="math-container">$\sum_{j&lt;k} a'_j$</span> where <span class="math-container">$k$</span> is the largest index such that <span class="math-container">$k\leq i \land a_k=0$</span>. This is the same as saying that we are taking the sum of all values of <span class="math-container">$a'$</span> up to the most recent <span class="math-container">$0$</span> before or at <span class="math-container">$i$</span> in <span class="math-container">$a'$</span>.</p> <p>How can I model such shifting in an ILP? I.e. given <span class="math-container">$a’$</span>, how can I create a variable <span class="math-container">$h$</span> in the ILP such that <span class="math-container">$h_{r, l}=1$</span> iff <span class="math-container">$\sum_{j&lt;k} a'_j= l$</span> where <span class="math-container">$k$</span> is the largest index such that <span class="math-container">$k\leq r \land a_k=0$</span>. </p>
<p>I think this can be done with <span class="math-container">$9n$</span> variables, instead of <span class="math-container">$n^2$</span>. Define the following integer variables for use in your linear program:</p> <ul> <li><span class="math-container">$a_i = 1$</span> if index <span class="math-container">$i$</span> is within an event</li> <li><span class="math-container">$s_i = 1$</span> if an event starts at index <span class="math-container">$i$</span></li> <li><span class="math-container">$e_i = 1$</span> if an event ends at index <span class="math-container">$i$</span></li> <li><span class="math-container">$c_i = $</span> the number of positions until the end of this event (so that index <span class="math-container">$i+c_i-1$</span> is the end of the event containing <span class="math-container">$i$</span>), if <span class="math-container">$i$</span> is in an event, or <span class="math-container">$0$</span> if <span class="math-container">$i$</span> is not in an event</li> <li><span class="math-container">$d_i = $</span> is how many positions since the start of the event (so that index <span class="math-container">$i-d_i+1$</span> is the start of the event containing <span class="math-container">$i$</span>), if <span class="math-container">$i$</span> is in an event, or <span class="math-container">$0$</span> if <span class="math-container">$i$</span> is not in an event</li> <li><span class="math-container">$\ell^1_i = $</span> the length of the event containing <span class="math-container">$i$</span>, if <span class="math-container">$i$</span> is the end of an event, or <span class="math-container">$0$</span> if <span class="math-container">$i$</span> is not the end of an event</li> <li><span class="math-container">$\ell^2_i = $</span> the length of the event containing <span class="math-container">$i$</span>, if <span class="math-container">$i$</span> is the start of an event, or <span class="math-container">$0$</span> if <span 
class="math-container">$i$</span> is not the start of an event</li> <li><span class="math-container">$o^1_i = $</span> the amount index <span class="math-container">$i$</span> is shifted when copying to the new array, for the first copy of that element</li> <li><span class="math-container">$o^2_i = $</span> the amount index <span class="math-container">$i$</span> is shifted when copying to the new array, for the second copy of that element</li> </ul> <p>These can be enforced with the following equations:</p> <ul> <li><p><span class="math-container">$s_i = a_i \land \neg a_{i-1}$</span> (see <a href="https://cs.stackexchange.com/q/12102/755">Express boolean logic operations in zero-one integer linear programming (ILP)</a> for how to enforce this)</p></li> <li><p><span class="math-container">$e_i = a_i \land \neg a_{i+1}$</span></p></li> <li><p><span class="math-container">$c_i \ge c_{i+1} + 1 - n s_{i-1}$</span>, <span class="math-container">$c_i \le c_{i+1} + 1$</span>, <span class="math-container">$c_i \le n a_i$</span>, <span class="math-container">$c_i \ge e_i$</span>, <span class="math-container">$c_i \ge 0$</span></p></li> <li><p><span class="math-container">$d_i \ge d_{i-1} + 1 - n e_{i-1}$</span>, <span class="math-container">$d_i \le d_{i-1} + 1$</span>, <span class="math-container">$d_i \le n a_i$</span>, <span class="math-container">$d_i \ge s_i$</span>, <span class="math-container">$d_i \ge 0$</span></p></li> <li><p><span class="math-container">$\ell^1_i \le n e_i$</span>, <span class="math-container">$\ell^1_i \ge d_i - n(1-e_i)$</span>, <span class="math-container">$\ell^1_i \ge 0$</span></p></li> <li><p><span class="math-container">$\ell^2_i \le n s_i$</span>, <span class="math-container">$\ell^2_i \ge c_i - n (1-s_i)$</span>, <span class="math-container">$\ell^2_i \ge 0$</span></p></li> <li><p><span class="math-container">$o^1_i = o^1_{i-1} + \ell^1_{i-1}$</span>, <span class="math-container">$o^1_0 = 0$</span></p></li> <li><p><span 
class="math-container">$o^2_i = o^2_{i-1} + \ell^2_i$</span>, <span class="math-container">$o^2_0 = 0$</span></p></li> </ul> <p>Then the total number of events is <span class="math-container">$s_1+\dots+s_n$</span>, and you can model shifting as follows: index <span class="math-container">$i$</span> in the original array is shifted to indices <span class="math-container">$i+o^1_i$</span> and <span class="math-container">$i+o^2_i$</span> in the new array.</p> <p>Do check my work carefully. There could easily be off-by-one errors or other mistakes in the above.</p>
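As a quick sanity check of one of these constraints, the linearization of <span class="math-container">$s_i = a_i \land \lnot a_{i-1}$</span> can be verified by pure enumeration (no ILP solver needed; the three inequalities below are the standard encoding of "and with a negated input", shown here as an illustration):

```python
# Verify that the inequalities  s <= a,  s <= 1 - b,  s >= a - b
# force s = a AND (NOT b) for binary a, b, s -- the encoding used for
# s_i = a_i AND NOT a_{i-1} above.

from itertools import product

def and_not_feasible(a, b, s):
    return s <= a and s <= 1 - b and s >= a - b

for a, b in product((0, 1), repeat=2):
    feasible = [s for s in (0, 1) if and_not_feasible(a, b, s)]
    # Exactly one feasible value, and it equals a AND NOT b.
    assert feasible == [a & (1 - b)]

print("linearization matches the truth table")
```

The same enumeration trick works for checking each of the other constraint groups on small values of <span class="math-container">$n$</span> before handing the model to a solver.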
534
sequence-to-sequence model
Permute the subintervals of an interval partition to most closely align with a model partition
https://cs.stackexchange.com/questions/23391/permute-the-subintervals-of-an-interval-partition-to-most-closely-align-with-a-m
<p>You are given two things: A fixed initial 'model' partition of an interval, e.g.</p> <pre><code>I------I---I-----I-------I----... </code></pre> <p>where each <code>-</code> or <code>I</code> represents an element in a discrete time series and the <code>I</code>s are the partition boundaries. This can also be represented as a sequence of subinterval lengths, i.e. 7, 4, 6, 8, ...</p> <p>Then, you're given a new set of subinterval lengths, and the task is to arrange these in such a way as to get as many coincident <code>I</code>s as possible. Or equivalently, you are given a new partition on an interval of the same length (though, critically, the new partition may have greater or fewer elements) and the task is to shuffle the subintervals around to maximize alignment. So if the model was</p> <pre><code>I------I---I-----I-------I----I </code></pre> <p>and you are given 2, 11, 5, 12, i.e.</p> <pre><code>I-I----------I----I-----------I </code></pre> <p>then the solution would be 11, 2, 12, 5,</p> <pre><code>I----------I-I-----------I----I
           *             *
</code></pre> <p>achieving alignment at 2 locations (marked with an asterisk, compare to model).</p> <p>There is an additional constraint: The locations of the aligned subintervals must be distributed approximately randomly throughout the length of the solution. The simplest means of getting a partition with at least some alignment to the model would be to build the new partition segment by segment, drawing without replacement from the collection of test segments, aligning where possible. But this would strongly bias the occurrences of alignment towards the beginning of the time series, and is therefore not allowed. There is of course also the brute-force O(n!) enumeration, but my series are a little too long for that.</p> <p>Naturally a solution that finds the optimal permutation would be great, but one that is efficient and gets a permutation with a substantial fraction of the possible alignment would also be good. 
My current version is a variation on the 'simple' algorithm described above, except it draws only from a small subcollection of subintervals so as to avoid bias. I know it can be improved upon!</p>
<p>You can use integer linear programming (ILP) to solve this. In particular, here is how to use an ILP solver to test whether there exists a way to permute the second partition so that you find $k+1$ alignment points (including the two endpoints). Given an algorithm for this decision procedure, you can use binary search to find the optimal alignment.</p> <p>Call a "block" the (non-empty) sequence of consecutive intervals between two alignment points. The left and right endpoint of the interval are automatically alignment points. If there are $k+1$ distinct alignment points (including these two endpoints), there will be $k$ blocks.</p> <p>Let $\ell_1,\ell_2,\dots,\ell_n$ be the set of lengths of the first set of subintervals, and $\ell^*_1,\ell^*_2,\dots,\ell^*_{n^*}$ be the lengths of the second set of subintervals. These are constants provided as input.</p> <p>Introduce zero-or-one unknowns $x_{i,j}$ (for $1\le i\le n$ and $1 \le j \le k$), with the intent that $x_{i,j}=1$ means that the $i$th subinterval (of length $\ell_i$) from the first partition will end up in the $j$th block. Similarly, introduce zero-or-one unknowns $x^*_{i,j}$, with the intent that $x^*_{i,j}$ means that the $i$th subinterval (of length $\ell^*_i$) from the second partition will end up in the $j$th block.</p> <p>Now we get a set of linear inequalities, as follows:</p> <ul> <li><p>$\sum_i \ell_i x_{i,j} = \sum_i \ell^*_i x^*_{i,j}$, for all $j$. (This is two ways of representing the length of the $j$th block.)</p></li> <li><p>$\sum_j x_{i,j} = 1$ (each subinterval from the first partition ends up in exactly one block) and $\sum_j x^*_{i,j} = 1$.</p></li> <li><p>$x_{i,j} = 1 \land x_{i+1,j}=0 \implies x_{i+1,j+1} = 1$ (i.e., each block starts where the previous one ended). This can be expressed as the ILP constraint $x_{i,j} - x_{i+1,j} \le x_{i+1,j+1}$. 
Similarly for the $x^*$'s.</p></li> <li><p>$x_{i,j} = 1 \land x_{i'',j} = 1 \land i &lt; i' &lt; i'' \implies x_{i',j}=1$ (i.e., the subintervals in a block are consecutive). This can be expressed as the ILP constraint $x_{i,j} + x_{i'',j} - 1 \le x_{i',j}$ for all $i,i',i''$ with $i &lt; i' &lt; i''$. Similarly for the $x^*$'s.</p></li> <li><p>$x_{1,1} = 1$, $x^*_{1,1} = 1$, $x_{n,k}=1$, $x^*_{n^*,k}=1$.</p></li> </ul> <p>Of course, in the worst case ILP can take exponential time, but ILP solvers are pretty good and often perform well on many real-world problems, so it is probably worth a try.</p> <p>(You could also try coding this up using a SAT solver, but you'd need to implement arithmetic, so a front-end like STP would probably be a better bet than a plain SAT solver.)</p>
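For tiny instances, the ILP (or any heuristic) can be validated against the brute-force $O(n!)$ search the question mentions. A sketch, scoring a permutation by shared interior boundaries (the common final endpoint is excluded, matching the asterisks in the example); the model's last segment length, 5, is inferred here so the totals match (7+4+6+8+5 = 2+11+5+12 = 30):

```python
# Brute-force baseline for tiny instances: try every permutation of the
# new subinterval lengths and count boundaries shared with the model.
# Useful only for sanity-checking smarter (e.g. ILP-based) solutions.

from itertools import permutations, accumulate

def boundaries(lengths):
    """Interior boundary positions of a partition (final endpoint excluded)."""
    return set(list(accumulate(lengths))[:-1])

def best_alignment(model, pieces):
    target = boundaries(model)
    best = max(permutations(pieces),
               key=lambda p: len(boundaries(p) & target))
    return list(best), len(boundaries(best) & target)

perm, score = best_alignment([7, 4, 6, 8, 5], [2, 11, 5, 12])
print(perm, score)   # e.g. [11, 2, 12, 5] with 2 shared boundaries
```

Running the ILP and this baseline on the same small random instances and comparing the alignment counts is a cheap way to catch off-by-one errors in the constraints.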
535
sequence-to-sequence model
Complexity-theoretic view of Algebraic Numbers
https://cs.stackexchange.com/questions/173006/complexity-theoretic-view-of-algebraic-numbers
<p>Imagine an input-less Turing machine <span class="math-container">$M$</span> that runs potentially forever, outputting a string of 0's, 1's, and &quot;.&quot;s. <span class="math-container">$M$</span> can then be associated with the real number <span class="math-container">$x_M$</span> whose binary expansion is exactly the sequence of bits and dots produced by <span class="math-container">$M$</span> (with some arbitrary rules for repeated dots or no dots at all).</p> <p>The set of real numbers <span class="math-container">$\{x_M\}_M$</span>, as <span class="math-container">$M$</span> ranges over all Turing machines, is equivalent to the computable numbers, though via a somewhat different model.</p> <p>In this model of computable numbers, we have a very natural complexity-theoretic view of the rational numbers: a number <span class="math-container">$x$</span> is rational if and only if it is equal to <span class="math-container">$x_M$</span> for some Turing machine <span class="math-container">$M$</span> with only a finite tape (here, I imagine a read-write tape for working and a separate write-only output tape for producing the output. The space restriction only applies to the working tape). In one direction, any <span class="math-container">$M$</span> with a finite tape must eventually loop, meaning the sequence of bits must eventually cycle, giving a rational number. In the other direction, since the sequence of bits eventually repeats, we can construct the Turing machine <span class="math-container">$M$</span> as a path that ends in a loop, with the path corresponding to the initial digits and the loop corresponding to the repeating digits.</p> <p>My question is: is there an analogous characterization of the algebraic numbers in terms of Turing machines? The algebraic numbers are computable, so each algebraic number is <span class="math-container">$x_M$</span> for some Turing machine <span class="math-container">$M$</span>. 
Is there any natural complexity-theoretic property <span class="math-container">$P$</span> on <span class="math-container">$M$</span>, such that <span class="math-container">$\{x_M: M\text{ satisfies} P\}$</span> is exactly the set of algebraic numbers?</p>
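The finite-tape intuition for rationals can be seen directly in binary long division: the only state the division process ever needs is the current remainder, which lives in a set of size <span class="math-container">$q$</span>, so the digit stream of <span class="math-container">$p/q$</span> must eventually cycle. A small illustrative sketch:

```python
# Why a rational p/q needs only finitely many "tape states": binary long
# division only ever sees a remainder in {0, ..., q-1}, so the digit
# stream must eventually enter a cycle.

def binary_expansion(p, q, digits):
    """First `digits` binary digits of the fractional part of p/q (0 <= p < q)."""
    out, r = [], p
    for _ in range(digits):
        r *= 2
        out.append(r // q)
        r %= q            # the whole "machine state" is just this remainder
    return out

print(binary_expansion(1, 3, 8))   # [0, 1, 0, 1, 0, 1, 0, 1] -- eventually periodic
```

The open part of the question is whether some comparably natural resource bound (instead of "finite tape") carves out exactly the algebraic numbers.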
536
sequence-to-sequence model
Can most programs (except the IO part) be re-written as a sequence of matrix operations?
https://cs.stackexchange.com/questions/115153/can-most-programs-except-the-io-part-be-re-written-as-a-sequence-of-matrix-ope
<p>I got this idea recently. If we set aside the data IO part of software, imagine the data is in memory and we need to arrive at some decision (which product to recommend to a user, how to render the 3D world in a game) by processing that data. All of these tasks could be done through a sequence (probably a graph) of matrix and vector operations, such as multiply, add, sum, etc. </p> <p>For instance, I used to work for a car pool company, where there was a piece of code to decide which car should respond to a user. That logic was implemented with C++ STL containers and flow controls like if/else/break. Then I realized that it could be re-written as logistic regression (with the business logic stored in the linear predictive model), to a certain precision.</p> <p>In this sense, matrix operations form a programming language by themselves.</p> <p>Can anyone share any thoughts here? Is there any introductory discussion or wiki that I can read? </p>
<p>If you regard the output of a program as a function of its input then matrices can be used to represent <em>some</em> programs, namely those where the output is a linear function of the input. So a program that takes two arguments <span class="math-container">$a$</span> and <span class="math-container">$b$</span> and returns <span class="math-container">$a+b$</span> and <span class="math-container">$a-b$</span> could be represented by the matrix</p> <p><span class="math-container">$\begin{pmatrix} 1 &amp; 1 \\ 1 &amp; -1 \end{pmatrix}$</span></p> <p>But most programs are not linear functions. For example, if the program instead returns <span class="math-container">$a^2+b^2$</span> and <span class="math-container">$a^2-b^2$</span> how will you represent that as a matrix ?</p>
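In code (a plain-Python sketch of the answer's example): the matrix is the whole linear program, while the nonlinear variant needs a squaring step that no single matrix can supply.

```python
# The linear program (a, b) -> (a+b, a-b) is literally a matrix.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

M = [[1, 1],
     [1, -1]]

print(matvec(M, [3, 5]))   # [8, -2]

# The nonlinear variant (a^2+b^2, a^2-b^2) only fits this mold if we
# square the inputs first -- and squaring is not a matrix operation.
a, b = 3, 5
print(matvec(M, [a * a, b * b]))   # [34, -16]
```

This is also why the question's logistic-regression rewrite worked only "with certain precision": a linear model can approximate branching logic on a region of inputs, but it cannot reproduce an arbitrary nonlinear decision function exactly.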
537
sequence-to-sequence model
Recognizing interval graphs--&quot;equivalent intervals&quot;
https://cs.stackexchange.com/questions/23885/recognizing-interval-graphs-equivalent-intervals
<p>I was reading a paper for recognizing interval graphs. Here is an excerpt from the paper:</p> <blockquote> <p>Each interval graph has a corresponding interval model in which two intervals overlap if and only if their corresponding vertices are adjacent. Such a representation is usually far from unique. To eliminate uninteresting variations of the endpoint orderings, we shall consider the following block structure of endpoints: Denote the right (resp. left) endpoint of an interval $u$ by $R(u)$ (resp. $L(u)$). In an interval model, define a maximal contiguous set of right (resp. left) endpoints as an R-block (resp. L-block). Thus, the endpoints can be grouped as a left-right block sequence. Since an endpoint block is a set, the endpoint orderings within a block are ignored. It is easy to see that the overlapping relationship does not change if one permute the endpoint order within each block. Define two interval models for $G$ to be equivalent if either their left-right block sequences are identical or one is the reversal of the other. </p> </blockquote> <p>I am unable to understand the notion of equivalent intervals. Can someone help me?</p>
<p>Imagine scanning the real line from left to right. Whenever an interval starts or ends, you take note of it. For example, perhaps in some representation, at the point $3$ the intervals for $x_2,x_7$ start and the interval for $x_{15}$ ends, and at the point $4$ the interval for $x_2$ ends, and no points occur in between.</p> <p>Suppose we "remove" the portion $(3.25,3.75)$ of the real line, thus pushing the point $4$ back to $3.5$. This only affects the value of some coordinates, so we would like to say that these two interval representations are "the same". Indeed, they are equivalent under the definition you give, since if we make a list of which intervals start and end at what point, then the lists would be identical, the only difference being the value of the coordinates.</p> <p>Another operation we can do is reflecting the real line, say around zero. Now the interval for $x_2$ is $(-4,-3)$ instead of $(3,4)$, so that at the point $-4$ the interval $x_2$ starts, and at the point $-3$ the intervals for $x_2,x_7$ end and the interval for $x_{15}$ starts. Again, we would like both of these representations to count as "the same". This transformation is captured by the reversal clause in the definition you quote.</p>
538
sequence-to-sequence model
How do POMDPs and Dynamic Influence Diagrams differ?
https://cs.stackexchange.com/questions/50053/how-do-pomdps-and-dynamic-influence-diagrams-differ
<p>To give some perspective, first consider the following diagram comparing Markov Chains, HMMs, MDPs, and POMDPs (I'm not sure who to credit for it).</p> <pre> Fully observable Partially observable _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | no actions | Markov chain | HMM | |_ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | actions | MDP | POMDP | |_ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _| </pre> <p>Recall that an HMM allows us to model probability distributions over a sequence of observations. <em>Bayesian networks</em> (not pictured) are a generalization of HMMs which model conditional distributions over sets of random variables (<a href="http://mlg.eng.cam.ac.uk/zoubin/papers/ijprai.pdf" rel="nofollow">see here for a description</a>). When modeling a problem over time, one appends a time index to the model, resulting in a <em>dynamic Bayesian network</em>.</p> <p>A tool known as a <em>dynamic influence diagram</em> extends dynamic Bayesian networks to decision-making problems through the inclusion of actions that can affect the evolution of the problem. </p> <p><strong>My question is</strong>: how do dynamic influence diagrams and POMDPs compare? On the surface they seem like they are modeling the same problem type. What sort of problems are amenable to each tool?</p>
<p>I'll quote from the paper <a href="http://www.researchgate.net/profile/Piotr_Gmytrasiewicz/publication/221454855_Interactive_dynamic_influence_diagrams/links/00b7d5343215f5521e000000.pdf" rel="nofollow">"Interactive Dynamic influence diagrams"</a> (DIDs) by Polich and Gmytrasiewicz (AAMAS 2007):</p> <blockquote> <p>A dynamic influence diagram is a computational representation of a POMDP. </p> </blockquote> <p>They continue soon afterward:</p> <blockquote> <p>DIDs perform planning using a forward exploration technique known as reachability analysis. This technique explores the possible states of belief an agent may be in in the future, the likelihood of reaching each state of belief, and the expected utility of each belief state. The agent then adopts the plan which maximizes the expected utility. DIDs provide exact solutions for finite horizon POMDP problems, and finite look-ahead approximations for POMDPs of infinite horizon.</p> </blockquote> <p>To me this suggests that DIDs are, as the authors state, one possible <em>representation</em> of a POMDP, to which a specific look-ahead type solution method is related to. As look-ahead is limited to approximate solutions for infinite horizon POMDPs, this also suggests that DIDs can not represent <em>every</em> POMDP.</p>
539
sequence-to-sequence model
Infer probabilities, for concatenation of words
https://cs.stackexchange.com/questions/47458/infer-probabilities-for-concatenation-of-words
<p>Fix an alphabet $\Sigma$, and a set of words, $W = \{w_1,\dots,w_n\} \subseteq \Sigma^*$.</p> <p>I have a randomized model that works like this: Alice generates a random sequence of words, using some probability distribution over the words, and then I get to see their concatenation -- but I don't see where the word boundaries are. I want to infer what distribution on words Alice is using.</p> <p>More formally, Alice randomly picks a sequence of words, where each word is chosen randomly and independently according to the probabilities $p(w_1),\dots,p(w_n)$; then Alice outputs the concatenation of these words.</p> <p>With this model, we can work out the probability of any particular string $x \in \Sigma^*$. It is the sum of $p(s_1) \cdots p(s_k)$, taken over all sequences of words $s_1,\dots,s_k$ whose concatenation is $x$ and such that $s_i \in W$ for all $i$:</p> <p>$$P(x) = \sum_{s_1 \dots s_k = x} p(s_1) \cdots p(s_k).$$ </p> <p>If I knew the probabilities $p(w_1),\dots,p(w_n)$ (the distribution Alice is using), it'd be easy to compute $P(x)$ using dynamic programming: for each prefix $x_{1..i}$ of $x$, I compute $P(x_{1..i})$.</p> <p>But I have the inverse problem. I am given $w_1,\dots,w_n$ and $x \in \Sigma^*$, and I want to find probabilities $p(w_1),\dots,p(w_n)$ that make $P(x)$ as large as possible. How can I do that? Is there any efficient algorithm for this?</p> <p>I would be happy with a practical algorithm that works well enough in practice, or a heuristic that gives an approximate solution.</p> <p>I haven't managed to come up with any reasonable algorithm. In the special case where no word in $W$ is a prefix of any other word in $W$, there is a unique decomposition of $x$ into a sequence of words from $W$, and it's easy to use maximum likelihood methods to compute the optimal probabilities: the probability $p(w)$ is just the number of times $w$ appears in this decomposition, divided by the number of words in the decomposition. 
However, in general there might be exponentially many ways of decomposing $x$ into a sequence of words from $W$, so this strategy doesn't work in general. Is there a better method?</p>
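The forward dynamic program mentioned in the question (computing $P(x)$ when the probabilities are known) can be sketched as follows; the distribution `p` here is a made-up example, not part of the original question.

```python
# Sketch of the forward DP described above: given (hypothetical) word
# probabilities p, compute P(x) by summing over all decompositions of x.

def string_probability(x, p):
    """P[i] = probability of the prefix x[:i]; P[0] = 1 for the empty prefix."""
    n = len(x)
    P = [0.0] * (n + 1)
    P[0] = 1.0
    for i in range(1, n + 1):
        for w, pw in p.items():
            # A decomposition of x[:i] may end with the word w.
            if i >= len(w) and x[i - len(w):i] == w:
                P[i] += P[i - len(w)] * pw
    return P[n]

p = {"a": 0.5, "ab": 0.3, "b": 0.2}   # assumed example distribution
print(string_probability("ab", p))     # ~0.4: "a"+"b" (0.5*0.2) plus "ab" (0.3)
```

Note this runs in O(|x| · |W|) time for words of bounded length, matching the "easy with dynamic programming" claim in the forward direction; the inverse (maximizing) problem the question asks about is the hard part.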
540
sequence-to-sequence model
Given a sequence of sets, choose one element from each to get the lowest number of changes
https://cs.stackexchange.com/questions/153579/given-a-sequence-of-sets-choose-one-element-from-each-to-get-the-lowest-number
<p>Let <span class="math-container">$k,n \in \mathbb{N} $</span> and non-empty sets <span class="math-container">$X_1, X_2, \dots, X_n \subseteq \{1,2,\dots,k\}$</span>.</p> <p>Define the change-counting cost function <span class="math-container">$f: X_1 \times X_2 \times \dots \times X_n \to \mathbb{N}$</span> by <span class="math-container">$f(x_1, x_2, \dots, x_n) = \sum_{i=2}^n\delta(i)$</span></p> <p>where <span class="math-container">$\delta(i) = 1$</span> if <span class="math-container">$x_i \neq x_{i-1}$</span> and <span class="math-container">$0$</span> otherwise.</p> <p>How can we find <span class="math-container">$(x_1, x_2, \dots, x_n) \in X_1 \times X_2 \times \dots \times X_n$</span> which minimizes the cost function?</p> <p>For example, consider the sequence of sets <span class="math-container">$\{1,2\}, \{2,3\}, \{1,3\}, \{2,3\}$</span>. A solution can be one of <span class="math-container">$(1, 3, 3, 3)$</span>, <span class="math-container">$(2,3,3,3), (2,2,3,3)$</span>, which have one change. This is minimal because the sets' intersection is empty and hence we can't find a zero-change vector.</p> <p>Is there any efficient way to solve this?</p> <p>We have <span class="math-container">$k^n$</span> options to check in the worst case, so by brute force, we have exponential time.</p> <p>I read about the <a href="https://en.wikipedia.org/wiki/Assignment_problem" rel="nofollow noreferrer">Assignment problem</a> but couldn't model it to fit, though I'm not sure this can't be done.</p>
<p>For every i = 1 to n, and for every x in X<sub>i</sub>, calculate the cheapest sequence for the first i sets ending in x.</p> <p>For i = 1 it is trivial: every sequence has cost 0.</p> <p>To find the cheapest sequence for i+1 sets ending in x, the cost is either that of the cheapest sequence for i sets ending in x, or the cost of the cheapest sequence for i sets ending in anything other than x, plus 1.</p> <p>The total cost is the sum over i of (element count of set i-1 times element count of set i).</p> <p>Albert Hendrik's method is easier and faster in this case, but this method can be applied to any situation where the cost is a function of your choice in set i and your choice in set i+1. For example, if the cost is the absolute difference between two numbers, or something more complicated.</p>
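The dynamic program this answer describes can be sketched as follows; as a small refinement (my choice, not in the original answer), only the overall minimum of the previous column is needed when a change occurs.

```python
# Sketch of the DP above: best[x] = minimum number of changes for the sets
# seen so far, given that the element chosen from the latest set is x.

def min_changes(sets):
    best = {x: 0 for x in sets[0]}
    for s in sets[1:]:
        overall = min(best.values())  # cheapest ending, over any element
        new_best = {}
        for x in s:
            keep = best.get(x, float("inf"))      # previous element was also x: no change
            new_best[x] = min(keep, overall + 1)  # or switch from the cheapest ending
        best = new_best
    return min(best.values())

print(min_changes([{1, 2}, {2, 3}, {1, 3}, {2, 3}]))  # 1, matching the example
```

This runs in O(sum of set sizes) time rather than the product-of-adjacent-sizes bound, since `overall` is computed once per set.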
541
sequence-to-sequence model
Find maximal matching in tree in $O\left(\frac{n}{\log n}\right)$
https://cs.stackexchange.com/questions/104824/find-maximal-matching-in-tree-in-o-left-fracn-log-n-right
<p>As any tree can be described as a binary sequence (the <span class="math-container">$i$</span>-th bit is 0 if the edge goes down and 1 otherwise; every edge is travelled twice, up and down, so the sequence's length is <span class="math-container">$2 |V| - 2$</span>), any tree of size <span class="math-container">$n$</span> can be encoded as <span class="math-container">$O\left(\frac{n}{\log n}\right)$</span> integers (assuming an integer holds <span class="math-container">$\log n$</span> bits in our model).</p> <p>The task is to find a maximal matching in a tree given in the described form in <span class="math-container">$O\left(\frac{n}{\log n}\right)$</span> time (thus, linear in the integer count). </p> <p>I was wondering if any simple solution exists, as for trees I can find a maximal matching in <span class="math-container">$O\left(n\right)$</span> time using a simple dynamic programming approach. </p>
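The encoding described in the question can be sketched as follows (the bit convention, 0 for descending an edge and 1 for returning, is as stated above; the adjacency-list representation is my choice for illustration).

```python
# Sketch: encode a rooted tree as the bit sequence of its Euler tour,
# emitting 0 when an edge is traversed downwards and 1 when going back up.

def encode(tree, root=0):
    bits = []
    def dfs(u, parent):
        for v in tree[u]:
            if v != parent:
                bits.append(0)   # descend the edge u -> v
                dfs(v, u)
                bits.append(1)   # return along the same edge
    dfs(root, None)
    return bits

# The path 0 - 1 - 2 as adjacency lists; |V| = 3, so 2|V| - 2 = 4 bits.
tree = {0: [1], 1: [0, 2], 2: [1]}
print(encode(tree))  # [0, 0, 1, 1]
```

Packing these bits, log n at a time, into machine words gives the O(n / log n)-integer representation the question starts from.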
542
sequence-to-sequence model
Viterbi training vs. Baum-Welch algorithm
https://cs.stackexchange.com/questions/6664/viterbi-training-vs-baum-welch-algorithm
<p>I'm trying to find the most probable path (i.e., sequence of states) in a hidden Markov model (HMM) using the Viterbi algorithm. However, I don't know the transition and emission matrices, which I need to estimate from the observations (data).</p> <p>To estimate these matrices, which algorithm should I use: the Baum-Welch algorithm or the Viterbi training algorithm? Why?</p> <p>In case I should use the Viterbi training algorithm, can anyone provide good pseudocode for it (it's not easy to find)?</p>
<p>From my comment:</p> <p><a href="https://stats.stackexchange.com/questions/581/differences-between-baum-welch-and-viterbi-training">This</a> answer on Cross Validated may be of use. I wasn't aware of Viterbi training. I have only used BW or other EM-based methods in the past. Based on the answer in the link, I think BW would be the most useful. It seems Viterbi training gives no guarantee on bounds. However, the latter does have a useful application if BW takes far too long to compute.</p>
543
sequence-to-sequence model
How the Abstract DPLL Algorithm Works in SAT Solving
https://cs.stackexchange.com/questions/95505/how-the-abstract-dpll-algorithm-works-in-sat-solving
<p>I have come across many definitions of the DPLL algorithm but haven't been able to follow them. The ones that are closest to making sense to me are the ones based on state-transition systems with transition rules such as defined here:</p> <p><a href="http://homepage.cs.uiowa.edu/~tinelli/papers/NieOT-LPAR-04.pdf" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sVuDA.png" alt="enter image description here"></a></p> <p>$F$ is a CNF formula, $C$ is a clause, and $M$ is a model. A description of it is as follows:</p> <blockquote> <p>Here, a DPLL procedure will be modeled by a transition system: a set of states together with a relation, called the transition relation, over these states. States will be denoted by (possibly subscripted) $S$.... A state is either fail or a pair $M \parallel F$, where $F$ is a finite set of clauses and $M$ is a sequence of <em>annotated literals</em>.... We will not go into a complete formalization of annotated literals; it suffices to know that some literals $l$ will be annotated as being decision literals; this fact will be denoted here by writing $l^d$ (roughly, decision literals are the ones that have been added to $M$ by the Decide rule given below). Most of the time the sequence $M$ will be simply seen as a set of literals, denoting an assignment, i.e., ignoring both the annotations and the fact that $M$ is a sequence and not a set.</p> </blockquote> <p>I <a href="https://cs.stackexchange.com/a/10940/9864">understand</a> that a <em>literal</em> is an atom or its negation, but I don't understand what an <em>annotated literal</em> is, which is a core part of understanding what a model $M$ is in $M \parallel F$. 
I also don't understand what the decision literals are.</p> <p>Two examples are as follows.</p> <p><a href="http://homepage.cs.uiowa.edu/~tinelli/papers/NieOT-LPAR-04.pdf" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pt3in.png" alt="enter image description here"></a> <a href="http://homepage.cs.uiowa.edu/~tinelli/papers/NieOT-LPAR-04.pdf" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x0cjG.png" alt="enter image description here"></a></p> <p>This is as much as I can gather. This:</p> <p>$$\emptyset \parallel 1 \lor \bar{3}, \bar{1} \lor \bar{4} \lor 5 \lor 2, \bar{1} \lor \bar{2}$$</p> <p>Is essentially an implementation of this:</p> <p>$$M \parallel F$$</p> <p>Also, the commas are really $\land$, as in:</p> <p>$$\emptyset \parallel (1 \lor \bar{3}) \land (\bar{1} \lor \bar{4} \lor 5 \lor 2) \land (\bar{1} \lor \bar{2})$$</p> <p>So each of the blocks between the commas or $\land$ are clauses. On the left is a growing list of literals (atoms or their negation). On the right are the rules being applied (like <code>Decide</code>). But I don't understand (a) how the rules are being applied / why they are chosen at the time they are chosen, (b) how it results in an annotated literal for $M$ on the left, and (c) why some of them are marked in bold as the "decision" literals.</p> <p>The final result, from my understanding, is the model $M$ (on the left) that satisfies the formula $F$ on the right. Essentially, it's a valuation.</p> <p>Wondering if one could clarify how the first example works, how the annotated literals get added to the model $M$ on the left.</p>
<p>Describing DPLL as a series of state-transition rules is the worst way I've ever seen to aid understanding the algorithm. The pseudocode provided in the Wikipedia article on DPLL is much easier to understand if you're approaching the algorithm for the first time.</p> <pre><code>;Algorithm DPLL ;Input: A set of clauses Φ. ;Output: A Truth Value. function DPLL(Φ) if Φ is a consistent set of literals then return true; if Φ contains an empty clause then return false; for every unit clause {l} in Φ Φ ← unit-propagate(l, Φ); for every literal l that occurs pure in Φ Φ ← pure-literal-assign(l, Φ); l ← choose-literal(Φ); return DPLL(Φ ∧ {l}) or DPLL(Φ ∧ {not(l)}); </code></pre> <p><em>Decision literals</em> are the literals returned by the choose-literal function called above. <em>Annotated literals</em> are just literals that are members of M that are marked in some way so the DPLL implementation can distinguish decision literals from literals added to M by unit propagation. This marking is for the sake of the backjumping code, which must unroll assignments until the one responsible for the current conflict is found and then replace it.</p> <p>Note that the original DPLL algorithm did not have backjumping in it. It simply backtracked, either reversing the current assignment or returning false if the algorithm had run out of assignments to try. Backjumping was a later innovation.</p> <p>Note also that pure literal assignment isn't usually done in modern SAT solvers that implement some form of DPLL. It is a fairly expensive check to do in an algorithm expected to recurse and backtrack an exponential number of times, and the benefits of doing the check don't outweigh the costs.</p>
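A minimal runnable sketch of the pseudocode above (pure-literal elimination omitted, as the end of the answer suggests). The clause representation, frozensets of signed integers where `-l` is the negation of `l`, is my choice and not part of the original.

```python
# Minimal recursive DPLL: unit propagation plus branching on a decision literal.

def dpll(clauses):
    # Unit propagation: repeatedly satisfy single-literal clauses.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        # Drop satisfied clauses; remove the falsified literal elsewhere.
        clauses = [c - {-unit} for c in clauses if unit not in c]
        if any(len(c) == 0 for c in clauses):
            return False  # derived the empty clause: conflict
    if not clauses:
        return True  # every clause satisfied
    l = next(iter(clauses[0]))  # choose-literal: this is the "decision literal"
    return dpll(clauses + [frozenset([l])]) or dpll(clauses + [frozenset([-l])])

# (1 ∨ ¬3) ∧ (¬1 ∨ ¬4 ∨ 5 ∨ 2) ∧ (¬1 ∨ ¬2), the first example formula above.
f = [frozenset({1, -3}), frozenset({-1, -4, 5, 2}), frozenset({-1, -2})]
print(dpll(f))  # True: the formula is satisfiable
```

Because the branch tries a literal and then its negation, the literals added by the two recursive calls are exactly the annotated decision literals of the transition-system presentation.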
544
sequence-to-sequence model
Does weak consistency allow reordering of events?
https://cs.stackexchange.com/questions/70236/does-weak-consistency-allow-reordering-of-events
<p>I am studying a consistency model: <a href="http://www.e-reading.club/chapter.php/143358/221/Tanenbaum_-_Distributed_operating_systems.html" rel="nofollow noreferrer">weak consistency</a>. This model was first defined by Dubois et al. (1986), who gave it three properties:</p> <ol> <li>Accesses to synchronization variables are sequentially consistent.</li> <li>No access to a synchronization variable is allowed to be performed until all previous writes have completed everywhere.</li> <li>No data access (read or write) is allowed to be performed until all previous accesses to synchronization variables have been performed.</li> </ol> <p>And there is an example saying that the following sequence is weakly consistent: <a href="https://i.sstatic.net/xGttm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xGttm.png" alt="A valid weakly consistent sequence"></a> </p> <p>Here, S represents accessing synchronization variables. </p> <p>But how is this possible? I don't understand how the event R(x)2 can happen before R(x)1 on processor P3 while W(x)1 happens before W(x)2 on processor P1. </p>
<p>Weak consistency typically appears in distributed systems where changes to variables have to be transmitted via the network.<br> Before a synchronization there is no guarantee about the visibility of write operations.<br> On one processor <code>W(x)1</code> could arrive before <code>W(x)2</code>, while on another (here P3) <code>W(x)2</code> could arrive before <code>W(x)1</code>.</p>
545
sequence-to-sequence model
Minimum number of tree cuts so that each pair of trees alternates between strictly decreasing and strictly increasing
https://cs.stackexchange.com/questions/116854/minimum-number-of-tree-cuts-so-that-each-pair-of-trees-alternates-between-strict
<p>A gardener considers aesthetically appealing gardens in which the tops of sequential physical trees (eg palm trees) are always sequentially going up and down, that is:</p> <pre><code>| | | | | | | | | | </code></pre> <p>On the other hand, the following configurations would be invalid:</p> <pre><code>| | | | | | </code></pre> <p>reason: 3rd tree should be higher than the 2nd one</p> <pre><code>| | | | | | </code></pre> <p>reason: consecutive trees cannot have the same height</p> <p>Given a sequence of physical trees in a garden, what is the minimum number of physical trees which must be cropped/cut in order to achieve the pattern desired by that gardener?</p> <p>First, the heights of the physical trees in the garden can be represented by a sequence of integers. For instance, the three examples above can be represented as (3 1 2 1 3), (3 2 1), and (3 3).</p> <p>Mathematically speaking, the problem maps to finding the minimum number of negative sums which must be applied to a sequence of integers (a<sub>0</sub>, a<sub>1</sub>, ..., a<sub>N</sub>) so that each pair of consecutive integers (a<sub>i</sub>, a<sub>i+1</sub>) in this sequence alternates between strictly decreasing (a<sub>i</sub> &gt; a<sub>i+1</sub>) and strictly increasing (a<sub>i</sub> &lt; a<sub>i+1</sub>). Example: In (2, 3, 5, 7), the minimum number of negative sums is 1. A possible solution is to add -3 to the 3rd element, resulting in (2, 3, 2, 7).</p> <p>My search model is a graph where each node represents a sequence of physical tree heights and each edge represents a decrease of the height of a tree (from now on called &quot;cut&quot;). 
In this model, a possible path from the initial node to the goal node in the above example would be</p> <ul> <li>initial node: (2,3,5,7)</li> <li>action: sum -3 to a<sub>2</sub></li> <li>goal node: (2,3,2,7).</li> </ul> <p>I have used a breadth-first search to find the shortest path from the initial node to the goal node. The length of this shortest path is equal to the minimum number of trees that must be cut.</p> <p>The only improvement to this algorithm that I was able to think of was using a priority queue that orders the possible nodes to be explored in increasing order 1st by number of cuts (as traditional BFS already does) and 2nd by the number of &quot;errors&quot; in the sequence of integers in the node: triplets which do not match the required up/down pattern, ie. (a<sub>i</sub> &lt; a<sub>i+1</sub> and a<sub>i+1</sub> &lt; a<sub>i+2</sub>) OR (a<sub>i</sub> &gt; a<sub>i+1</sub> and a<sub>i+1</sub> &gt; a<sub>i+2</sub>), plus the number of consecutive equal numbers pairs (a<sub>i</sub> == a<sub>i+1</sub>) . This increases the probability that the goal node will be reachable from the first nodes with N-1 cuts in the queue when the times come to evaluate them. However, it is only useful to reduce the search space of nodes with N-1 cuts and not the complexity of the whole search.</p> <p>The time required to execute this algorithm grows exponentially with the number of trees and with the height of the trees. Is there any algorithm/idea which could be used to speed it up?</p>
<p>I'll describe two ways you could solve this problem. Either works. In some sense they are basically the same algorithm, just viewed from two different perspectives.</p> <h1>Dynamic programming algorithm</h1> <p>This can be solved in linear time with <a href="https://cs.stackexchange.com/tags/dynamic-programming/info">dynamic programming</a>. Let <span class="math-container">$d_i$</span> denote minimum number of <span class="math-container">$a_i,\dots,a_n$</span> that must be cut to produce an alternating sequence if you start in the downwards direction for the first pair (the pair <span class="math-container">$a_i,a_{i+1}$</span>) and don't cut <span class="math-container">$a_i$</span>, and <span class="math-container">$u_i$</span> the minimum number to produce an alternating sequence starting in the upwards direction if you don't cut <span class="math-container">$a_i$</span>, and <span class="math-container">$u'_i$</span> the minimum number to produce an alternating sequence starting in the upwards direction if you do cut <span class="math-container">$a_i$</span>. 
Then you can write down a recurrence relation that expresses <span class="math-container">$d_i,u_i,u'_i$</span> in terms of <span class="math-container">$d_{i+1},u_{i+1},u'_{i+1}$</span>, and you can evaluate it in <span class="math-container">$O(n)$</span> time using dynamic programming.</p> <p>In particular, the recurrence relation is <span class="math-container">$u'_i = 1 + d_{i+1}$</span> and</p> <p><span class="math-container">$$d_i = \begin{cases} \min(u_{i+1},u'_{i+1}) &amp;\text{if }a_i&gt;a_{i+1}\\ u'_{i+1} &amp;\text{otherwise.} \end{cases}$$</span></p> <p>(The option <span class="math-container">$u'_{i+1}$</span> is available regardless of how <span class="math-container">$a_i$</span> compares to <span class="math-container">$a_{i+1}$</span>, because a valley cut down to the ground is lower than any uncut tree.)</p> <p><span class="math-container">$$u_i = \begin{cases} d_{i+1} &amp;\text{if }a_i&lt;a_{i+1}\\ +\infty &amp;\text{otherwise.} \end{cases}$$</span></p> <p>Once you've computed all these values, the final answer for the minimum number of cuts needed for the sequence <span class="math-container">$a_1,\dots,a_n$</span> is <span class="math-container">$\min(d_1,u_1,u'_1)$</span>.</p> <h1>Graph search</h1> <p>Alternatively, we can solve this by building up a suitable graph and then finding the shortest path in this graph.</p> <p>Label a tree as a "peak" if it is higher than its neighbors in the final sequence, and a "valley" if it is lower than its neighbors in the final sequence. The final sequence will alternate between peaks and valleys. Here are the two key observations:</p> <ul> <li><p>The optimal solution will never cut any tree that ends up as a peak. (Any solution that involves cutting a peak will remain valid if you don't cut the peak, and that reduces the number of cuts by 1.)</p></li> <li><p>In the optimal solution, you can assume without loss of generality that every tree that ends up a valley is cut down to the ground, i.e., to the minimum height. 
(Any solution that involves cutting a valley only partway will remain valid if you cut it down to the ground.)</p></li> </ul> <p>Since we want to find an optimal solution, we will consider only solutions that follow both rules.</p> <p>Let <span class="math-container">$a_1,\dots,a_n$</span> be the sequence. We will build a graph with <span class="math-container">$3n$</span> vertices. Each vertex has the form <span class="math-container">$\langle i,t,c \rangle$</span> where <span class="math-container">$i \in \{1,2,\dots,n\}$</span> is an index that identifies a tree, <span class="math-container">$t$</span> indicates whether tree <span class="math-container">$i$</span> will be a peak or a valley in the final solution, and <span class="math-container">$c$</span> indicates whether tree <span class="math-container">$i$</span> is cut to the ground or uncut in the final solution. We'll have an edge from one vertex to the next if they can be adjacent in a final solution. Thus, we have the following edges:</p> <ul> <li><p><span class="math-container">$\langle i, \text{peak}, \text{no}\rangle \to \langle i+1, \text{valley}, \text{no} \rangle$</span>, with length 0, for those <span class="math-container">$i$</span> where <span class="math-container">$a_i&gt;a_{i+1}$</span></p></li> <li><p><span class="math-container">$\langle i, \text{peak}, \text{no}\rangle \to \langle i+1, \text{valley}, \text{yes} \rangle$</span>, with length 1, for all <span class="math-container">$i$</span></p></li> <li><p><span class="math-container">$\langle i, \text{valley}, \text{no}\rangle \to \langle i+1, \text{peak}, \text{no} \rangle$</span>, with length 0, for those <span class="math-container">$i$</span> where <span class="math-container">$a_i&lt;a_{i+1}$</span></p></li> <li><p><span class="math-container">$\langle i, \text{valley}, \text{yes}\rangle \to \langle i+1, \text{peak}, \text{no} \rangle$</span>, with length 0, for all <span class="math-container">$i$</span></p></li> </ul> <p>Finally, 
find the shortest path in this graph from a start vertex to an end vertex, where the start vertices are those of the form <span class="math-container">$\langle 1, *, *\rangle$</span> and the end vertices are those of the form <span class="math-container">$\langle n, *, *\rangle$</span>. The length of this path will correspond to the minimum number of cuts needed in the optimal solution, and the path itself can be used to reconstruct the final solution. This shortest path can be found in <span class="math-container">$O(n)$</span> time, since the graph is a DAG whose edges have length 0 or 1: process the vertices in increasing order of <span class="math-container">$i$</span> (or use 0-1 BFS with a deque).</p>
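A runnable sketch of the dynamic-programming variant of this answer. The base cases at the last tree are my assumption (no cuts needed for `d` and `u`, one cut for the cut-valley state, written `u2` for u'), and "cut" means cutting a valley down to the ground, as in the graph formulation; for that reason the cut-valley option for `d` is allowed unconditionally.

```python
# DP over the states: d = next pair goes down (current tree an uncut peak),
# u = next pair goes up (current tree an uncut valley), u2 = current tree
# cut to the ground (a valley), so the next pair goes up to an uncut peak.

INF = float("inf")

def min_cuts(a):
    n = len(a)
    d, u, u2 = 0, 0, 1  # assumed base cases at the last tree (index n-1)
    for i in range(n - 2, -1, -1):
        nd = min(u if a[i] > a[i + 1] else INF,  # next tree: uncut valley
                 u2)                             # next tree: valley cut to the ground
        nu = d if a[i] < a[i + 1] else INF       # next tree: peak (peaks stay uncut)
        nu2 = 1 + d                              # cut tree i; next tree is a peak
        d, u, u2 = nd, nu, nu2
    return min(d, u, u2)

print(min_cuts([2, 3, 5, 7]))  # 1: a single cut, e.g. (2, 3, 2, 7), suffices
print(min_cuts([3, 1, 2, 1, 3]))  # 0: already alternating
```

Each iteration touches only the three values from the previous position, so this is O(n) time and O(1) extra space.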
546
sequence-to-sequence model
Why use maximum likelihood for word prediction?
https://cs.stackexchange.com/questions/68134/why-use-maximum-likelihood-for-word-prediction
<p>According to <a href="https://www.tensorflow.org/tutorials/word2vec/" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/word2vec/</a>, the standard approach for predicting the next word in a word sequence is maximum likelihood. The predicted next word is the word that maximizes</p> <p>$$P(w_t|h)=\frac{\exp(\text{score}(w_t,h))}{\sum_{w'}\exp(\text{score}(w',h))}.$$</p> <p>To me, it seems that the word predicted from this model is always the word with the maximum score, in which case we could just use the score function for prediction.</p> <p>What does maximum likelihood provide that a simple score function does not?</p>
<p>That approach <em>is</em> using a score function (in a particular way). It is also using maximum likelihood. Both ways of describing the approach are valid.</p> <p>Why is it useful to think about this in terms of maximum likelihood? Because it gives a more systematic way to think about this problem and others like it.</p> <p>If we just want to think about maximizing a score function, the question that raises is -- what score function should we use? How do we choose one? It sounds arbitrary.</p> <p>In contrast, the maximum-likelihood approach gives us a principled way to choose the function. Here is how to derive that formula, given the maximum-likelihood principle. First, they have apparently decided to use the following approximation for the probability of seeing $w_t$ as the next word, given history $h$:</p> <p>$$P(w_t,h) = \exp(\text{score}(w_t,h)).$$</p> <p>Once you've decided to do that, then by the definition of conditional probability it follows that</p> <p>$$\begin{align*} P(w_t|h) &amp;= \frac{P(w_t,h)}{P(h)}\\ &amp;= \frac{P(w_t,h)}{\sum_{w'} P(w',h)}\\ &amp;= \frac{\exp(\text{score}(w_t,h))}{\sum_{w'}\exp(\text{score}(w',h))}. \end{align*}$$</p>
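A small sketch of the observation discussed here: the softmax is a monotone transformation of the scores, so the predicted (argmax) word is the same either way; what the normalization adds is a proper probability distribution. The scores below are hypothetical.

```python
# Softmax over hypothetical score(w, h) values: same argmax as the raw
# scores, but the outputs form a normalized probability distribution.
import math

def softmax(scores):
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

scores = {"cat": 2.0, "dog": 0.5, "fish": -1.0}  # assumed score(w, h) values
probs = softmax(scores)

best_by_score = max(scores, key=scores.get)
best_by_prob = max(probs, key=probs.get)
print(best_by_score, best_by_prob)  # cat cat -- identical predictions
```

The probabilities matter during training (the likelihood being maximized is a product of these normalized terms), even though prediction alone could use the raw scores.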
547
sequence-to-sequence model
Are there lossless data compression techniques that do not exploit repetitive patterns?
https://cs.stackexchange.com/questions/65096/are-there-lossless-data-compression-techniques-that-do-not-exploit-repetitive-pa
<p><em>Lossless Data compression</em> (source coding) algorithms heavily rely on repetitive pattern (redundancy).</p> <p>Is there a Lossless Data compression method/algorithm that is independent of repetitive pattern (redundancy)?</p> <p><strong>Note:</strong></p> <p>Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.</p> <p>Techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones. <strong><em>Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.</em></strong> This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.</p> <p><strong><em>A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums.</em></strong> This is called discrete wavelet transform.</p>
<p>Modeling data compression as a combination of statistical modeling and encoding seems a bit obsolete these days. In many compression algorithms, the most important step is combinatorial modeling, which finds structure in the data and uses the structure to transform the data into something that can be compressed with statistical methods. For example:</p> <ul> <li>Burrows-Wheeler transform permutes the characters based on the lexicographic ordering of the suffixes starting after them.</li> <li>Algorithms in the Lempel-Ziv family describe the text in terms of exact and/or inexact repeats.</li> <li>A graph can be described in terms of bicliques (subgraphs with an edge from every node in vertex set $A$ to every node in vertex set $B$).</li> <li>A text can be represented by a context-free grammar.</li> <li>A collection of similar strings can be described by the edit operations required to transform a reference string into each of the individual strings.</li> <li>The collection can also be described by a finite automaton recognizing a more general language, plus the paths used to produce each of the strings.</li> <li>A suffix array has self-repetitions, where each pointer in the source substring is incremented by 1 in the target substring.</li> <li>A (multi)set of integers can be represented as a binary sequence.</li> </ul> <p>In principle, one can describe these combinatorial models in statistical terms, but the statistical viewpoint may not be very useful. Furthermore, many practical algorithms skip the statistical modeling/encoding part completely or do them in a naive way, as decompression speed may be more important than maximal compression.</p> <p>All of these methods take advantage of the redundancy in the data or make it more explicit for the subsequent compression steps. After all, data compression is basically just getting rid of the redundancy in the data.</p>
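To make the first bullet concrete, here is a naive Burrows-Wheeler transform sketch in Python (quadratic and for exposition only; real implementations work via suffix arrays, and the `$` end marker is an assumption of this sketch):

```python
def bwt(text, end="$"):
    """Naive Burrows-Wheeler transform: sort all rotations of text+end
    and return the last column. The end marker must not occur in text."""
    s = text + end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("banana"))  # "annb$aa"
```

Notice how the output groups equal characters together; that is exactly the structure a later statistical coder can exploit.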
548
sequence-to-sequence model
Would Schmidhuber&#39;s theories of everything be capable of performing hypercomputation?
https://cs.stackexchange.com/questions/104469/would-schmidhubers-theories-of-everything-be-capable-of-performing-hypercomputa
<p>Jürgen Schmidhuber pointed out that a simple explanation of the universe would be analogous to a Turing machine programmed to execute all possible programs computing all possible histories for all types of computable physical laws. His work was based on Zuse's thesis.</p> <p>This hypothesis would be inside the domain of Digital Physics, a group of hypothetical models that propose that the universe is analogous to a computer or an automaton. </p> <p>(<a href="https://en.wikipedia.org/wiki/Digital_physics" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Digital_physics</a>)</p> <p>In 2000, he expanded this work by combining Ray Solomonoff's theory of inductive inference with the assumption that quickly computable universes are more likely than others. This work on digital physics also led to limit-computable generalizations of algorithmic information or Kolmogorov complexity and the concept of Super Omegas, which are limit-computable numbers that are even more random (in a certain sense) than Gregory Chaitin's number of wisdom Omega.</p> <p>So, since Chaitin's constant (<a href="https://en.wikipedia.org/wiki/Chaitin%27s_constant" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Chaitin%27s_constant</a>) would allow hypercomputation if it existed in the physics of the universe, would Schmidhuber's hypothesis then be able to perform hypercomputation? Also, the hypercomputation Wikipedia entry (<a href="https://en.wikipedia.org/wiki/Hypercomputation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Hypercomputation</a>) says (in "eventually correct" systems):</p> <blockquote> <p>A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. 
Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence.</p> </blockquote> <p>Also, super-recursive algorithms are closely related to hypercomputation; a super-recursive algorithm is just one way of defining hypercomputation. In the super-recursive algorithm Wikipedia entry (<a href="https://en.wikipedia.org/wiki/Super-recursive_algorithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Super-recursive_algorithm</a>), Schmidhuber's model is included as a super-recursive-algorithm-based model.</p> <p>So, knowing all of this, would Schmidhuber's hypothetical universes be based on hypercomputation? </p>
<p>If the hypothesis is that we live in a universe whose physics are computed by a Turing machine, then hypercomputation is trivially impossible in our universe.</p> <p>The constants you're talking about are limits of computable numbers. If all you have is a computer you can't produce such a number in finite time.</p> <p>It sounds like you're taking two different things named after the same mathematician and assuming they're somehow connected when they're not.</p> <p>(Disclaimer: I'm going off the information in the question alone.)</p>
549
sequence-to-sequence model
Markov Model to compute the probability on the $n^{th}$ day
https://cs.stackexchange.com/questions/90495/markov-model-to-compute-the-probaility-on-the-nth-day
<p>This is a question about Markov Models. Let's say we have the following situation: <a href="https://i.sstatic.net/Eu8cG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eu8cG.png" alt="enter image description here"></a></p> <p>Let's say that we want to find the probability that $2$ rainy days follow a nice day. You'd simply have $0.25 \cdot 0.5=0.125=12.5\%$. However, let's take it up a notch. What's the probability that on the $7^{th}$ day it's snowy? Or, in general, how would you find the probability that on the $n^{th}$ day the weather is of a certain type?</p> <p>I think that you could take one possible sequence of events such that on the $7^{th}$ day it snows, such as nice, rain, snow, nice, rain, snow and snow, which has a probability of $0.1953\%$, and then add up all such probabilities, but I'm not 100% sure.</p>
<p>Yes, you just add up the probabilities of all the possibilities. However, there's a conceptually clearer way of doing it that's less error-prone.</p> <p>The transition matrix tells you the probability of moving from one state to another in one step. The probability that it rains the day after tomorrow given that it's nice today is, as you've seen, given by $$\Pr(N\to N)\,\Pr(N\to R) + \Pr(N\to R)\,\Pr(R\to R) + \Pr(N\to S)\,\Pr(S\to R)\,.$$ If you look at this closely, you'll see that it's exactly the (Nice, Rain) entry of the square of the transition matrix. Similarly, the $n$th power of the transition matrix tells you the probability of moving between two states in $n$&nbsp;steps.</p> <p>Powers of the transition matrix can be computed efficiently with repeated squaring&nbsp;&ndash; for example $M^7 = (M^2)^2\times M^2\times M$ and there's the bonus that you get all the $n$-step probabilities instead of just one.</p>
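A short NumPy sketch of the matrix-power computation (the transition probabilities below are illustrative placeholders, not values read off the question's figure):

```python
import numpy as np

# Illustrative transition matrix, states ordered (Nice, Rain, Snow);
# entry [i, j] is the probability of moving from state i to state j.
M = np.array([
    [0.00, 0.50, 0.50],   # Nice
    [0.25, 0.50, 0.25],   # Rain
    [0.25, 0.25, 0.50],   # Snow
])

# The n-step transition probabilities are the n-th power of M.
M7 = np.linalg.matrix_power(M, 7)

# Probability it is snowy on day 7 given that day 0 is nice:
print(M7[0, 2])

# Repeated squaring gives the same result: M^7 = (M^2)^2 @ M^2 @ M
M2 = M @ M
assert np.allclose(M7, (M2 @ M2) @ M2 @ M)
```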
550
sequence-to-sequence model
What randomness really is
https://cs.stackexchange.com/questions/12136/what-randomness-really-is
<p>I'm a Computer Science student and am currently enrolled in a System Simulation &amp; Modelling course. It involves dealing with everyday systems around us and simulating them in different scenarios by generating random numbers from different distributions, like IID, Gaussian, etc. I've been working on the boids project, and a question just struck me: what exactly is "random"? I mean, for instance, every random number that we generate, even in our programming languages, e.g. via the <code>Math.random()</code> method in Java, is essentially generated following an "algorithm".</p> <p>How do we really know that a sequence of numbers that we produce is in fact random, and would it help us to simulate a certain model as accurately as possible?</p>
<p>The short answer is that no one knows what real randomness is, or if such a thing exists. If you want to quantify or measure the randomness of a discrete object, you would typically turn to <a href="http://en.wikipedia.org/wiki/Kolmogorov_complexity">Kolmogorov complexity</a>. Before Kolmogorov complexity, we had no way of quantifying randomness of say a sequence of numbers without considering the process that spawned it.</p> <p>Here's an intuitive example that was really bugging people back in the day. Consider a sequence of coin tosses. The outcome of one toss is either heads ($H$) or tails ($T$). Say we do two experiments, where we toss a coin 10 times. The first experiment $E_1$ gives us $H,H,H,H,H,H,H,H,H,H$. The second experiment $E_2$ gives us $T,T,H,T,H,T,T,H,T,H$. After seeing the outcome, you might be tempted to claim there was something wrong with the coin in $E_1$, or at least for some weird reason what you got is not random. But if you assume both $H$ and $T$ are as probable (the coin is fair), the probability of obtaining either $E_1$ or $E_2$ is equal to $(1/2)^{10}$. In fact, obtaining <em>any</em> specific sequence is as probable as any! Still, $E_2$ <em>feels</em> random, and $E_1$ does not.</p> <p>In general, since Kolmogorov complexity is not computable, one can't compute how random say a sequence of numbers is, no matter what kind of claimed "totally random" process spawned it.</p>
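Kolmogorov complexity itself is uncomputable, but the length of a lossless compression gives a crude, computable upper bound on it. A small Python sketch contrasting scaled-up versions of the two experiments:

```python
import random
import zlib

# E1-like outcome: 1000 heads in a row.
regular = b"H" * 1000

# E2-like outcome: 1000 fair coin tosses (seeded, so the sketch is repeatable).
random.seed(0)
irregular = bytes(random.choice(b"HT") for _ in range(1000))

# Both outcomes are equally probable under a fair coin,
# yet one is far more compressible than the other:
print(len(zlib.compress(regular)))    # a handful of bytes
print(len(zlib.compress(irregular)))  # close to the ~1000-bit entropy floor
```

This only gives an upper bound on Kolmogorov complexity; a short compressed length proves a sequence is non-random, but a long one never proves randomness.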
551
sequence-to-sequence model
Doubts on Definition of Indistinguishable Encryption in the Textbook
https://cs.stackexchange.com/questions/12457/doubts-on-definition-of-indistinguishable-encryption-in-the-textbook
<p>In the classic crypto textbook "Introduction to Modern Cryptography" by Jonathan Katz and Yehuda Lindell, there is a definition of indistinguishable encryption in the presence of an eavesdropper, which requires that for every probabilistic polynomial time adversary A there is a negligible function negl(n) such that</p> <p>$\Pr[PrivK_{A,\Pi}=1] \leq negl(n)$</p> <p>where PrivK is the indistinguishability experiment, and for the purpose of this question we only need to know that the experiment outcome is 1 iff the adversary makes the correct guess.</p> <p>My doubts are as follows. Consider a sequence of probabilistic polynomial time adversaries $\{A_i\}_{i \geq 1}$ whose advantage in the indistinguishability experiment is bounded by the following sequence of negligible functions</p> <p>$\Pr[PrivK_{A_i,\Pi}=1] \leq negl_i(n) = \frac{1}{(1+1/i)^n}$</p> <p>Clearly it is necessary for the above conditions to hold for an indistinguishable encryption. But is it a correct model/condition for real-world applications? For example, in practice we typically choose a sufficiently large n and set up some encryption scheme. However, there is always some adversary $A_i$ that wins the experiment with probability close to one. So what's wrong?</p>
<p>Depends on how different your algorithms are. If you can compute $A_i$ from $i$ in polynomial time, then $A'$ with $A'(n) = A_n(n)$ is a polynomial time adversary too and violates the assumption (as it has non-negligible success probability). This would make your series of $A_i$ <em>uniform</em>, and this is one usual class of adversaries to consider. </p> <p>If you'd like your crypto system to be secure against <em>any</em> such series of polynomially bounded adversaries, you have to consider <em>non-uniform</em> adversaries (at least <a href="http://en.wikipedia.org/wiki/Circuit_complexity#Polynomial-time_uniform" rel="nofollow">non-uniform in polynomial time</a>), e.g. by defining a single adversary as a series of (randomized) circuits of polynomial size.</p>
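A quick numeric illustration (Python; the parameter values are made up) of the tension raised in the question, and of why the diagonal adversary $A'(n) = A_n(n)$ matters: each bound $negl_i$ is negligible in $n$, yet for any fixed $n$ it tends to $1$ as $i$ grows:

```python
def negl(i, n):
    """The bound (1 + 1/i)^(-n) from the question."""
    return (1 + 1 / i) ** (-n)

# For a fixed adversary index i, the bound vanishes as n grows:
print(negl(10, 1000))    # astronomically small
# ...but for a fixed security parameter n, it approaches 1 as i grows:
print(negl(10**6, 128))  # very close to 1
```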
552
sequence-to-sequence model
General name for linked lists based on hashes
https://cs.stackexchange.com/questions/77502/general-name-for-linked-lists-based-on-hashes
<p>I am thinking of a particular data structure, but don't know the name of it.</p> <p>A sequence of elements may be modeled by a collection of some X, where each X consists of:</p> <ol> <li>The element, serialized as a bunch of bytes.</li> <li>[a] The hash of the previous X in the sequence, based on the bytes of both parts of that X in a predictable way, or [b] some mechanism to denote that no such X exists, i.e. that this is the first element.</li> </ol> <p>Given a database that contains (at least) all relevant Xs, and a lookup mechanism by hash, we can take a single X and reconstruct the sequence of elements from it.</p> <p>What is the general name (if it exists) of this data structure in the literature? I.e.</p> <ul> <li>What is the general name of X?</li> <li>What is the general name of a sequence of Xs?</li> </ul> <p>Note that although this looks like (and is inspired by) a blockchain or git commit object, it is more general: any list or sequence could be so rewritten.</p> <p>In comparison with the blockchain / git, the above does not make any statement about distributed databases, timestamps or contents of each element. It also notably differs from git at least in the sense that it is single-parent.</p>
<p>This is known as <a href="https://en.wikipedia.org/wiki/Hash_chain" rel="nofollow noreferrer">hash chaining</a>. The blockchain uses this technique, as do some kinds of <a href="https://en.wikipedia.org/wiki/Linked_timestamping" rel="nofollow noreferrer">timestamping</a> schemes (see, e.g., Haber &amp; Stornetta) and in some cryptographic schemes for append-only audit logs.</p>
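A minimal hash-chain sketch in Python; the field layout (`prev_hash || element`, with a 32-byte zero block as the "no predecessor" marker) is an assumption of this example, not a standard:

```python
import hashlib

GENESIS = b"\x00" * 32   # marker meaning "no previous X exists" (case [b])

def hash_x(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_chain(elements):
    """Each X is prev_hash || element; returns the list of Xs in order."""
    chain, prev = [], GENESIS
    for element in elements:
        x = prev + element
        chain.append(x)
        prev = hash_x(x)
    return chain

def verify(chain):
    """Recompute every hash and check each X points at its predecessor."""
    prev = GENESIS
    for x in chain:
        if x[:32] != prev:
            return False
        prev = hash_x(x)
    return True

chain = build_chain([b"first", b"second", b"third"])
print(verify(chain))  # True
```

Tampering with any X breaks the link to its successor, which is what makes the structure tamper-evident.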
553
sequence-to-sequence model
Job Shop Problem: How do you get an ordered sequence of operations from the disjunctive acyclic graph?
https://cs.stackexchange.com/questions/136804/job-shop-problem-how-do-you-get-an-ordered-sequence-of-operations-from-the-disj
<h3>Intro</h3> <p>The job shop problem is a classic scheduling theory problem. Given <span class="math-container">$N$</span> jobs and <span class="math-container">$M$</span> machines, a typical goal of the JSP is to minimise the <em>makespan</em> (starting time of the last operation + its processing time) over the set of jobs. Jobs must be processed on a sequence of machines in a particular order. An operation refers to a particular job being processed on a particular machine.</p> <hr /> <h3>Representation</h3> <p>A disjunctive graph is typically used to represent the problem of minimising makespan. Each <strong>node</strong> in the graph is labelled <span class="math-container">$(i,j)$</span> where <span class="math-container">$i$</span> refers to the job, and <span class="math-container">$j$</span> to the machine on which the job is to be run. <em>Conjunctive</em> arrows between nodes indicate precedence constraints for a particular job. Each conjunctive edge is given a weight, corresponding to the processing time <span class="math-container">$P_{i}$</span> for the particular operation <span class="math-container">$O_{i}$</span> at the <em>base</em> of the edge. <em>Disjunctive</em> edges are placed between operations that take place <em>on the same machine</em>. An example of two simple jobs is given below:</p> <p><a href="https://i.sstatic.net/fl733.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fl733.png" alt="enter image description here" /></a></p> <p>Note that <span class="math-container">$J_{1}$</span> is comprised of the upper row of nodes, and <span class="math-container">$J_{2}$</span> the lower row.</p> <hr /> <h3>Problem</h3> <p>My problem is with the terminology used to describe how to solve the JSP. 
The book &quot;Scheduling Theory, Algorithms, and Systems&quot; (Pinedo, 2008) states on page 181 that:</p> <blockquote> <p>A feasible schedule corresponds to a <em>selection</em> of one disjunctive arc from each pair such that the resulting directed graph is acyclic</p> </blockquote> <p>However, if I do just that, I can derive a graph that looks something like this:</p> <p><a href="https://i.sstatic.net/HiWKf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiWKf.png" alt="enter image description here" /></a></p> <p>Now, it is said that:</p> <blockquote> <p>The makespan of a feasible schedule is determined by the longest path in G(D) from the source U to the sink V</p> </blockquote> <p>And if I sum the longest <em>weighted</em> path, I can indeed derive a &quot;makespan&quot; value. However, I still don't get why this is a &quot;feasible schedule&quot;, because I don't really have a schedule at all. The path doesn't give me an ordering for how to execute the operations. That is because the path necessarily does not visit every operation, which means it does not give a feasible schedule at all.</p> <h3>Verdict</h3> <p>In light of what I have described above, my question is: <strong>How can I derive the full sequence of operations to perform from the disjunctive graph model of the job shop problem?</strong></p> <p><strong>Edit</strong>: There's a mistake in my graph; the last edges going to <span class="math-container">$V$</span> should be labelled with a weight.</p>
<p>The graph consisting of all vertices corresponding to a particular machine and the induced chosen edges forms a directed path. This path tells you exactly which jobs to run on that machine and in what order.</p> <p>The length of the longest weighted path from U to a job tells you exactly when you can start processing that job.</p>
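A sketch of both points in Python (the tiny two-job instance is invented for illustration): earliest start times come from longest weighted paths in the chosen acyclic orientation, and the makespan is the value at the sink.

```python
from collections import defaultdict

def earliest_starts(edges, source):
    """Longest weighted path from source to every node of a DAG.

    edges: list of (u, v, w), meaning u must finish w time units before
    v can start. Returns dict node -> earliest start time; the makespan
    is the value at the sink.
    """
    adj, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    dist = {n: float("-inf") for n in nodes}
    dist[source] = 0
    order = [n for n in nodes if indeg[n] == 0]
    i = 0
    while i < len(order):  # Kahn's algorithm: process in topological order
        u = order[i]
        i += 1
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    return dist

# Two jobs, two machines: J1 takes 3 on M1 then 2 on M2; J2 takes 4 on M2
# then 3 on M1. Both disjunctive arcs are oriented so that job 1 goes first.
edges = [
    ("U", "J1M1", 0), ("J1M1", "J1M2", 3), ("J1M2", "V", 2),
    ("U", "J2M2", 0), ("J2M2", "J2M1", 4), ("J2M1", "V", 3),
    ("J1M1", "J2M1", 3),   # chosen disjunctive arc on machine 1
    ("J1M2", "J2M2", 2),   # chosen disjunctive arc on machine 2
]
start = earliest_starts(edges, "U")
print(start["V"])  # the makespan
```

The per-machine order is read off the selected disjunctive arcs themselves, e.g. the arc J1M1 → J2M1 above says machine 1 runs job 1 before job 2.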
554
sequence-to-sequence model
When can a deterministic finite-state-automaton (DFSA) along with its input sequence be said to be a part of another DFSA?
https://cs.stackexchange.com/questions/136014/when-can-a-deterministic-finite-state-automaton-dfsa-along-with-its-input-sequ
<p>For a Finite State Automaton / Finite State Machine (FSM) <span class="math-container">$F$</span> that has an input alphabet, a set of possible states, an initial state, a set of possible final states and a state transition function, let a <strong>finite</strong> input sequence <span class="math-container">$S$</span> be given, such that at the end of this sequence the FSM enters a final state and stays in that state.</p> <p>Can this FSM <span class="math-container">$F$</span> along with the input sequence <span class="math-container">$S$</span> be considered a separate FSM <span class="math-container">$F'$</span>?</p> <p>Analogous to this, can a Turing machine <span class="math-container">$T$</span> along with a finite tape <span class="math-container">$P$</span> be considered a separate Turing machine <span class="math-container">$T'$</span>?</p> <p><strong>What are the conditions, if any, for this to be true, assuming it is true?</strong></p> <p>Note: I expect a formal proof, or a reference/outline of a formal proof, that proves that either of these can or cannot be done. Some theory related to this is also welcome.</p> <p><strong>My research:</strong></p> <h3>Closely related topic:</h3> <p><a href="https://www.tandfonline.com/doi/abs/10.1080/00207217908938690" rel="nofollow noreferrer">R. T. G. TAN (1979) Hardware and software equivalence, International Journal of Electronics, 47:6, 621-622, DOI: 10.1080/00207217908938690 </a></p> <p>I am aware of the principle of hardware and software equivalence, which states that a given task can be performed using hardware or software, i.e. <strong>digital hardware and software are equivalent models of computation</strong>. 
But I think my question is different from this one.</p> <h3>Motivation:</h3> <ul> <li><p>From this question ( <a href="https://cs.stackexchange.com/q/131943/115941">Is there code below microcode?</a> ) I think we can consider an FSM with its input sequence (microcode) to be a part of another FSM (the digital computer), but of course much more circuitry like Arithmetic and Logical Unit (ALU) and datapath is needed to make a computer. Microcode is used only for the control circuit.</p> </li> <li><p><a href="https://cs.stackexchange.com/a/28850/115941">This answer</a> <em>claims</em> the data in the RAM of a computer along with the CPU can be considered to be a part of a bigger circuit.</p> </li> </ul> <p>To quote:</p> <blockquote> <p>The circuit is fixed (it is the gates in the processor) and part of its input is data that depends upon the program you are executing (which is stored in the RAM of the computer). However, you could consider this a larger circuit where part of it is hardcoded (i.e., the program part of the input is hardcoded); then you can view a computer running a program as a big circuit with part that is universal and identical for all programs (the gates of the processor) and part that depends on the program (the hardcoded input), and this immediately gives a mapping from programs to circuits. The mapping is implemented by a compiler.</p> </blockquote>
<p>It's not clear to me how to interpret &quot;can be considered&quot;, so I'm going to identify one technical question that can be answered.</p> <p>Given a FSM <span class="math-container">$F$</span> and an input sequence <span class="math-container">$S$</span>, it is possible to build another FSM <span class="math-container">$F'$</span> so that the execution of <span class="math-container">$F'$</span> on the empty input is in one-to-one correspondence with the execution of <span class="math-container">$F$</span> on <span class="math-container">$S$</span> (and in particular, both end at the same end state(s); e.g., either <span class="math-container">$F$</span> accepts on <span class="math-container">$S$</span> and <span class="math-container">$F'$</span> accepts on the empty input; or <span class="math-container">$F$</span> rejects on <span class="math-container">$S$</span> and <span class="math-container">$F'$</span> rejects on the empty input).</p> <p>The proof is a straightforward application of the <a href="https://cs.stackexchange.com/q/71362/755">product construction</a>: we construct one FSM <span class="math-container">$F_0$</span> that outputs the fixed sequence <span class="math-container">$S$</span>, and then compute the parallel composition of <span class="math-container">$F_0$</span> with <span class="math-container">$F$</span>.</p> <p>The following is also true: given a Turing machine <span class="math-container">$T$</span> and a fixed input <span class="math-container">$P$</span> (i.e., initial state of the tape <span class="math-container">$P$</span>), then it is possible to construct another Turing machine <span class="math-container">$T'$</span> such that execution of <span class="math-container">$T$</span> on input <span class="math-container">$P$</span> has the same result as execution of <span class="math-container">$T'$</span> on any input.</p> <p>Formal proofs with Turing machines are often tedious and uninformative, so it's easier to see 
how this is true by considering a real-world program. For instance, suppose we have Python code that defines some function <code>t()</code>:</p> <pre><code>def t(x): ... </code></pre> <p>and we have some fixed string <code>p</code>. Consider the following Python function <code>t_prime</code> (named this way because <code>t'</code> is not a valid Python identifier):</p> <pre><code>def t(x): ... def t_prime(x): return t(p) </code></pre> <p>Then it is easy to see that the behavior of <code>t_prime</code> on any input is equivalent to the behavior of <code>t</code> on input <code>p</code>. (Here we have hard-coded a lexical constant string in the place indicated with <code>p</code> above.) You can do the same thing with Turing machines, where the machine first writes <span class="math-container">$P$</span> on the tape, and then starts executing <span class="math-container">$T$</span>, to define a Turing machine <span class="math-container">$T'$</span> that proves the claim above.</p>
555
sequence-to-sequence model
Shortest path given correct order of colours?
https://cs.stackexchange.com/questions/139815/shortest-path-given-correct-order-of-colours
<p>I have a directed graph <span class="math-container">$G=(V,E)$</span> where each vertex is a 4-D coordinate <span class="math-container">$v: (x, y, z, c)$</span> representing spatial coordinates <span class="math-container">$x, y, z \in \mathbb{R}$</span> and the non-physical parameter colour <span class="math-container">$c \in (c_{1}, c_{2},... c_{n})$</span>. The edge weights <span class="math-container">$\omega_{v_{a} \to v_{b}}: f(x, y, z)$</span> are purely a function of three of the four dimensions, as this is a <em>physical</em> problem.</p> <p><a href="https://i.sstatic.net/hZFh4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hZFh4.png" alt="enter image description here" /></a></p> <p>Trouble is, I wish to find the shortest path given a specific sequence of colors (e.g. <span class="math-container">$c_{2}, c_{5}, c_{1}, c_{1}, c_{3}, c_{4}$</span>). It's difficult to incorporate this into the weight function as it is non-physical. Are there well-established path traversal algorithms that find the shortest path through coloured vertices, given the starting vertex, the ending vertex, and the sequence it needs to pass through? I wonder if there is already a family of algorithms that describes this problem?</p> <p>I've considered something like a Hidden Markov Model that abstracts vertices of colours into hidden states, but I am not sure how to prevent one state from being visited again (whereas in a directed acyclic graph I can prevent revisiting the same vertex).</p>
<p>Use a <a href="https://cs.stackexchange.com/q/118977/755">product construction</a>, to construct a new graph whose vertices are given by tuples <span class="math-container">$(x,y,z,k)$</span>, where <span class="math-container">$k$</span> counts the index into the sequence of colors (i.e., we are currently at the <span class="math-container">$k$</span>th vertex in that sequence). Then, find the shortest path in this new graph using any standard shortest-path algorithm. If all weights are non-negative, you can use Dijkstra's algorithm on this graph; otherwise you can use Bellman-Ford on this graph.</p> <p>See <a href="https://cs.stackexchange.com/q/118977/755">How hard is finding the shortest path in a graph matching a given regular language?</a>. Your problem is a special case of the problem solved there. (Possibly also useful: <a href="https://cs.stackexchange.com/q/71362/755">Product construction for given two finite automata</a> .)</p> <p>Specifically, here is how the product construction works in your setting. Let <span class="math-container">$(c_1,c_2,\dots,c_q)$</span> be the desired sequence of colors. Then, for each <span class="math-container">$k=1,2,\dots,q-1$</span>, you'll have an edge <span class="math-container">$(x,y,z,k) \to (x',y',z',k+1)$</span> in the new graph if there is an edge <span class="math-container">$(x,y,z,c_k) \to (x',y',z',c_{k+1})$</span> in the original graph; the weight of the edge is copied over from the corresponding edge in the original graph. Now, find the shortest path from <span class="math-container">$(x_0,y_0,z_0,1)$</span> to <span class="math-container">$(x_1,y_1,z_1,q)$</span> in the new graph, where <span class="math-container">$(x_0,y_0,z_0,c_1)$</span> is the starting vertex in the original graph and <span class="math-container">$(x_1,y_1,z_1,c_q)$</span> is the ending vertex in the original graph.</p>
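A sketch of the layered-graph idea in Python, running Dijkstra over states $(v, k)$; the example graph, colors, and weights are invented for illustration:

```python
import heapq

def shortest_colored_path(edges, color, seq, start, goal):
    """Shortest start->goal path whose vertex colors spell out seq.

    edges: dict vertex -> list of (neighbor, weight), non-negative weights.
    color: dict vertex -> color.  seq: required color sequence (len >= 1).
    Returns the path length, or None if no such path exists.
    """
    if color[start] != seq[0] or color[goal] != seq[-1]:
        return None
    q = len(seq)
    dist = {(start, 0): 0.0}       # state (v, k): at v, matched seq[:k+1]
    heap = [(0.0, start, 0)]
    while heap:
        d, v, k = heapq.heappop(heap)
        if d > dist.get((v, k), float("inf")):
            continue               # stale heap entry
        if v == goal and k == q - 1:
            return d
        if k + 1 >= q:
            continue               # sequence exhausted, dead end
        for u, w in edges.get(v, []):
            if color[u] == seq[k + 1]:
                nd = d + w
                if nd < dist.get((u, k + 1), float("inf")):
                    dist[(u, k + 1)] = nd
                    heapq.heappush(heap, (nd, u, k + 1))
    return None

edges = {"a": [("b", 1.0), ("c", 4.0)], "b": [("d", 2.0)], "c": [("d", 1.0)]}
color = {"a": "red", "b": "blue", "c": "blue", "d": "red"}
print(shortest_colored_path(edges, color, ["red", "blue", "red"], "a", "d"))  # 3.0
```

Each original vertex appears once per layer $k$, so a vertex may be revisited at a later position in the color sequence, which is exactly what the HMM-style formulation in the question struggled with.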
556
sequence-to-sequence model
Solving the part-of-speech tagging problem with HMM
https://cs.stackexchange.com/questions/20185/solving-the-part-of-speech-tagging-problem-with-hmm
<p>There is a famous <a href="http://en.wikipedia.org/wiki/Part-of-speech_tagging" rel="nofollow">part-of-speech tagging problem</a> in Natural Language Processing. The popular solution is to use <a href="http://en.wikipedia.org/wiki/Hidden_Markov_model" rel="nofollow">Hidden Markov Models</a>.</p> <p>That is, given the sentence $x_1 \dots x_n$, we want to find the sequence of POS tags $y_1 \dots y_n$ such that $y_1 \dots y_n = \arg\max_{y_1 \dots y_n}p(Y,X)$.</p> <p>By Bayes' Theorem, $P(X,Y)=P(Y)P(X \mid Y)$.</p> <p>Solving POS tagging with an HMM implies the assumptions behind $p(y_i \mid y_{i-1})$ and $p(x_i \mid y_i)$.</p> <p>The first question: is there any particular reason why we prefer to solve it with a generative model with a lot of assumptions and not directly by estimating $P(Y \mid X)$? Given the training corpus, it's still possible to estimate $p(y_i \mid x_i)$.</p> <p>The second question: even when we are convinced that the generative model is preferred, why calculate it as $P(Y,X)=P(Y)P(X \mid Y)$ and not $P(X,Y)=P(X)P(Y \mid X)$? If we have an appropriate generative story, I can use $P(X,Y)=P(X)P(Y \mid X)$ as well; is it mentioned somewhere why the assumed generative story is preferred?</p>
<p>Isn't this exactly the same question you asked <a href="https://cs.stackexchange.com/questions/16777/hidden-markov-model-in-tagging-problem">previously</a>? I'll make some additional comments and add some links here. Hopefully that will help.</p> <blockquote> <p>is there any particular reason why we prefer to solve it with a generative model with a lot of assumptions and not directly by estimating $P(Y∣X)$, given the training corpus it's still possible to estimate $p(y_i∣x_i)$?</p> </blockquote> <p>It just depends. Choosing whether to model $P(X,Y)$ or $P(Y|X)$ is simply the choice of generative versus discriminative. Both have advantages. See the paper <a href="http://www.cs.cmu.edu/~aarti/Class/10701/readings/NgJordanNIPS2001.pdf" rel="nofollow noreferrer"> On Discriminative vs. Generative classifiers</a> by Ng and Jordan. One thing worth mentioning, that I didn't say last time, is that unsupervised learning in a generative framework is normally straightforward. This means it is also fairly obvious how to do semi-supervised learning. Semi-supervised learning can be very helpful for NLP tasks where the amount of unlabeled data is essentially infinite and labelled data is hard to obtain. Semi-supervised learning is typically not as easy in a discriminative framework. See <a href="http://en.wikipedia.org/wiki/Co-training" rel="nofollow noreferrer">Co-training</a> as an example of the latter.</p> <p>As for how one decomposes the joint, well, that's up to you. There's no rule saying you can't decompose it as $P(X,Y) = P(X)P(Y|X)$. Doing so would be perfectly valid, just not sensible. Notice that decomposing the joint this way includes the factor $P(Y|X)$ already. If you're ultimately interested in predicting $Y$ given $X$, then you should predict $$ \begin{align*} \arg\max_y P(Y=y,X=x) &amp;= \arg\max_y P(X=x)P(Y=y|X=x) \\ &amp;= \arg\max_y P(Y=y|X=x). \end{align*} $$ So you just use $P(Y|X)$ and ignore $P(X)$ and we're back at a discriminative classifier.</p>
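A tiny numeric check of the last identity (with a made-up joint distribution): dropping the factor $P(X=x)$ never changes the argmax.

```python
# Made-up joint distribution P(X, Y) over X in {0, 1}, Y in {0, 1, 2}.
joint = {
    (0, 0): 0.10, (0, 1): 0.25, (0, 2): 0.05,
    (1, 0): 0.20, (1, 1): 0.10, (1, 2): 0.30,
}
ys = [0, 1, 2]

def argmax_joint(x):
    """argmax over y of P(Y=y, X=x)."""
    return max(ys, key=lambda y: joint[(x, y)])

def argmax_conditional(x):
    """argmax over y of P(Y=y | X=x) = P(Y=y, X=x) / P(X=x)."""
    px = sum(joint[(x, y)] for y in ys)
    return max(ys, key=lambda y: joint[(x, y)] / px)

for x in (0, 1):
    assert argmax_joint(x) == argmax_conditional(x)
print(argmax_joint(0), argmax_joint(1))  # 1 2
```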
557
sequence-to-sequence model
How to setup a model for a guillotine cutting stock problem?
https://cs.stackexchange.com/questions/81961/how-to-setup-a-model-for-a-guillotine-cutting-stock-problem
<p><strong>Background.</strong> I'm reading papers about the cutting stock problem (CSP).</p> <ol> <li>Said Ben Messaoud, Chengbin Chu, Marie-Laure Espinouse (2008)<br> <a href="http://www.sciencedirect.com/science/article/pii/S0377221707009083" rel="nofollow noreferrer">Characterization and modelling of guillotine constraints</a>.<br> European Journal of Operational Research 191 (2008) 112–126.</li> <li>D.A. Wuttke, H.S. Heese (2017) <a href="http://dx.doi.org/10.1016/j.ejor.2017.07.036" rel="nofollow noreferrer">Two-dimensional cutting stock problem with sequence dependent setup times</a>, European Journal of Operational Research</li> </ol> <p><strong>Problem</strong></p> <p>There is a square canvas; the side of the canvas is $n\times a$.</p> <p>It is required to cut this canvas into $n^2$ equivalent squares with side $a$.</p> <p>For $n = 3$, we can easily get the solution: $9$ squares with sides $a_1=a_2=...=a_9=a$, where $a_i$ is the side of the $i$-th square, $i \in I = \{1,2,\ldots, n^2\}$. If we remove any square, we will have a classic <a href="https://en.wikipedia.org/wiki/15_puzzle" rel="nofollow noreferrer">$8$-puzzle</a> (left figure below, $n=3$).</p> <p><a href="https://i.sstatic.net/zh0pj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zh0pj.jpg" alt="enter image description here"></a></p> <p>First, let us omit the constraint on the squares' side sizes; they can be different: $$a_1 &lt;a_2 &lt;... &lt;a_i &lt;... &lt;a_{n ^ 2}. 
\tag{1}$$</p> <p>Second, let us add the following constraints on the squares' sides (right figure above, $n=3$):</p> <ol> <li><p>The sum of any two consecutive elements of the set $(1)$ must be greater than the following element.</p></li> <li><p>Any element of the set $(1)$ must be less than half of the canvas side, $n\times a$.</p></li> <li><p>The first element of the set $(1)$ must be greater than half the side of the equal-squares solution, i.e. $a/2$.</p></li> </ol> <p><strong>The task is to prove:</strong> </p> <p>a) the problem has a solution (integer or real), </p> <p>b) the solution is an $(n^2-1)$-puzzle. </p> <p><strong>Question.</strong> How does one set up a model for this optimization problem? </p> <p><strong>My attempt is:</strong></p> <p>I think I have a case of the <a href="https://en.wikipedia.org/wiki/Guillotine_problem" rel="nofollow noreferrer">guillotine cutting stock problem</a>. </p> <p>I have tried to write the constraints (C1)-(C3):</p> <p>$$ s.t. \left\{% \begin{array}{ll} a_{i+2} &lt; a_i + a_{i+1}, &amp; i = 1, 2, \ldots, n^2-2; \\ a_i &lt; \frac{n\times a}{2}, &amp; \forall i \in I; \\ 0 &lt; a/2 &lt; a_1. \\ \end{array}% \right.$$</p> <p><strong>Update.</strong></p> <p>Here is <a href="http://www.culand.ch/dev/SBPSolver.htm" rel="nofollow noreferrer">a list of software</a> (see the bottom of the page) to design, test, and solve your own original sliding block puzzles. </p>
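The constraints (C1)-(C3) are easy to sanity-check directly on a candidate list of sides; a minimal sketch (the example numbers are made up for illustration, not a claimed solution of the cutting problem):

```python
def satisfies_constraints(sides, a, n):
    # sides: candidate square sides, expected strictly increasing (1);
    # then the question's constraints (C1)-(C3) are checked in turn.
    increasing = all(s < t for s, t in zip(sides, sides[1:]))
    c1 = all(sides[i + 2] < sides[i] + sides[i + 1] for i in range(len(sides) - 2))
    c2 = all(s < n * a / 2 for s in sides)
    c3 = a / 2 < sides[0]
    return increasing and c1 and c2 and c3

# n = 3, a = 10: nine strictly increasing sides inside (a/2, 3a/2) = (5, 15).
assert satisfies_constraints(list(range(6, 15)), 10, 3)
assert not satisfies_constraints(list(range(4, 13)), 10, 3)  # first side <= a/2
```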
558
sequence-to-sequence model
How can I take advantage of the capabilities of a cluster
https://cs.stackexchange.com/questions/44995/how-can-i-take-advantage-of-the-capabilities-of-a-cluster
<p>I'm developing a model in C, and I need to run a couple of simulations (lengthy, heavy ones). I seem to have a cluster at my disposal, but I'm not very familiar with the concept. I understand that there are different "nodes"? My program is not designed to take advantage of this individually - it can't even take advantage of multiple processors, because the calculations have to be done in sequence. But I have to run multiple instances of the program, and I was hoping there'd be a way to run them on separate "nodes" (?) to avoid them having to share CPU time too much.</p> <p>What I know: the cluster runs Red Hat Enterprise Linux for Servers 5.3.</p> <p>Can anyone help?</p>
<p>In my experience, clusters usually work like this: you submit a job (a program) to it, and it allocates a node to run your job. You don't worry about allocating the nodes yourself.</p> <p>So in your case, simply submit multiple jobs (usually they provide some method to submit batch jobs), and the cluster will allocate a different node to each of them.</p>
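Submitting one job per simulation run usually reduces to a small loop around the cluster's submit command. A dry-run sketch: it only prints the commands, and <code>qsub</code> is an assumption (the PBS/Torque-style submit command), as are the job names and script path; substitute whatever your cluster's scheduler documents.

```python
# Dry run: build one submit command per simulation and print it instead
# of executing it.  `qsub`, `sim_*`, and `run_simulation.sh` are made-up
# stand-ins; adjust them for your scheduler before submitting for real.
runs = [1, 2, 3]
commands = [f"qsub -N sim_{r} ./run_simulation.sh {r}" for r in runs]
for cmd in commands:
    print(cmd)  # e.g. replace with subprocess.run(cmd.split()) to submit
```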
559
sequence-to-sequence model
What is the meaning of the output weights of a Conditional Random Field (CRF) model?
https://cs.stackexchange.com/questions/42301/what-is-the-meaning-of-the-output-weights-of-a-conditional-random-field-crf-mo
<h1>Problem</h1> <p>When training my <strong>linear chain CRF</strong> with annotated observations, I feed it a number of sequences containing observation values and a "ground-truth" label for each observation. I'm currently using the hCRF Matlab interface. (see <a href="http://sourceforge.net/projects/hcrf/" rel="nofollow">1</a>)</p> <p>In my case I have 4 continuous observation values (some in the interval [-1,1], some around [140, 200]; none outside [-10, 250] though). The label is an integer between 1 and 17, giving a total of 17 possible labels.</p> <h1>Question</h1> <p>After training, my model consists of 17*17 = 289 edge weights and 17*4 = 68 window weights. Can I understand the edge weights as some sort of probabilities for state transitions (even though they are not actual probabilities in the range [0,1])? And what exactly do the edge weights tell me?</p> <h1>Reference</h1> <p>For reference, the CRF produces conditional probabilities of the form</p> <p>$$p(y|x) = \frac{1}{Z(x)} \exp{ \left\{ \sum_{k=1}^{K} \lambda_k f_k(y, x) \right\}}$$</p> <p>where, to my understanding, the $\lambda_k$ are my edge weights and the $Z(x)$ are the window weights I get.</p> <p>I read some tutorials about CRFs but I still do not quite understand what the feature functions $f_k(y, x)$ look like if I simply put in sequences of 4-tuples (the 4 observation values). </p> <p>Thanks in advance for any pointers.</p>
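To see concretely why edge weights need not lie in [0,1], one can compute the distribution by brute force on a toy chain. The sketch below uses random stand-in weights (3 labels, 2 features, length 4), not a trained hCRF model; label sequences only become probabilities after exponentiating the summed weights and dividing by the partition function $Z(x)$.

```python
import math
import random
from itertools import product

# Toy linear-chain CRF with invented sizes and random stand-in weights.
# Edge and window weights are log-potentials: any real value is allowed.
random.seed(0)
L, F, T = 3, 2, 4
edge = [[random.gauss(0, 1) for _ in range(L)] for _ in range(L)]  # "edge weights"
emit = [[random.gauss(0, 1) for _ in range(F)] for _ in range(L)]  # "window weights"
x = [[random.gauss(0, 1) for _ in range(F)] for _ in range(T)]     # one observation sequence

def score(y):
    # Summed log-potentials of label sequence y: emissions plus transitions.
    s = sum(emit[y[t]][f] * x[t][f] for t in range(T) for f in range(F))
    return s + sum(edge[y[t - 1]][y[t]] for t in range(1, T))

Z = sum(math.exp(score(y)) for y in product(range(L), repeat=T))  # Z(x)
probs = [math.exp(score(y)) / Z for y in product(range(L), repeat=T)]
assert abs(sum(probs) - 1.0) < 1e-9  # a proper distribution over label sequences
```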
560
sequence-to-sequence model
Calculating with regexes
https://cs.stackexchange.com/questions/41564/calculating-with-regexes
<p>We use a regex engine (say, <a href="http://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions" rel="nofollow">PCRE</a>) that allows grouping subexpressions with parentheses and recalling the value they match in the search / replace expressions (backreferences, denoted by \i for matching the ith captured subexpression). It is known that such regexes have far more expressive power than the <a href="http://en.wikipedia.org/wiki/Regular_expression#Formal_language_theory" rel="nofollow">regular expressions</a> defined in formal language theory.</p> <p>I am interested in the following two categories of arithmetic functions on the unary representation of the natural numbers (<em>i.e.</em>, a sequence of $n$ symbols <code>a</code> represents the number $n$):</p> <h2>Transducers</h2> <p>In the first category, a function is given by a search string, a replace string and an operation <code>Replace All</code>, which substitutes the replace string for every occurrence of the search string. For instance:</p> <ul> <li>$\lambda n. 2n$ <ul> <li><code>"a"</code></li> <li><code>"aa"</code></li> </ul></li> <li>$\lambda n. \frac{n}{2}$ <ul> <li><code>"(a*)\1a?"</code></li> <li><code>"\1"</code></li> </ul></li> <li>$\lambda n. \frac{n}{3}$ <ul> <li><code>"(a*)\1\1a?a?"</code></li> <li><code>"\1"</code></li> </ul></li> <li>$\lambda n. n \bmod 2$ <ul> <li><code>"(a*)\1(a?)"</code></li> <li><code>"\2"</code></li> </ul></li> <li>Collatz function <ul> <li><code>"(a*)\1$|((a)*)"</code></li> <li><code>"\1\2\2\2\3"</code></li> </ul></li> <li>Smallest divisor greater than 1 <ul> <li><code>"(aa+?)\1+$|(a*)"</code></li> <li><code>"\1\2"</code></li> </ul></li> </ul> <h2>Acceptors</h2> <p>In the second category, a predicate is given by a regex pattern and an operation <code>Match</code> which tries to match the pattern from the beginning of the input sequence. The output is interpreted as <code>True</code> if the match succeeds, and <code>False</code> otherwise.
For instance:</p> <ul> <li><code>is_even</code> <ul> <li><code>"(aa)*$"</code></li> </ul></li> <li><code>is_odd</code> <ul> <li><code>"a(aa)*$"</code></li> </ul></li> <li><code>is_not_prime</code> (from <a href="http://montreal.pm.org/tech/neil_kandalgaonkar.shtml" rel="nofollow">Neil Kandalgaonkar</a>) <ul> <li><code>"a?$|(aa+)\1+$"</code></li> </ul></li> </ul> <h2>Question</h2> <p>Is it possible to devise a transducer for $\lambda n. n^2$ or an acceptor for <code>is_prime</code>? More generally, which arithmetic functions and predicates can be defined in such a model?</p>
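For what it's worth, the acceptors and transducers above can be exercised directly. Python's <code>re</code> module is not PCRE, but it supports the needed backreferences, and these particular patterns appear to behave the same way there; a small sketch:

```python
import re

def is_prime(n):
    # Negation of the is_not_prime acceptor "a?$|(aa+)\1+$" above;
    # fullmatch plays the role of the $-anchored Match operation.
    return re.fullmatch(r"a?|(aa+)\1+", "a" * n) is None

assert [n for n in range(2, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]

# The halving transducer: Replace All with "(a*)\1a?" -> "\1" computes n // 2.
assert re.sub(r"(a*)\1a?", r"\1", "a" * 7) == "a" * 3
```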
561
sequence-to-sequence model
Is there a model of ZF&#172;C where some program always terminates but has no loop variant?
https://cs.stackexchange.com/questions/117648/is-there-a-model-of-zf%c2%acc-where-some-program-always-terminates-but-has-no-loop-va
<p><a href="https://en.wikipedia.org/wiki/Loop_variant#Every_loop_that_terminates_has_a_variant" rel="nofollow noreferrer">Wikipedia has a proof</a> that every loop that terminates has a loop variant&mdash;a well-founded relation on the state space such that each iteration of the loop results in a state that is less than the previous iteration's state under the relation. Here, <em>well-founded</em> refers to the usual classical definition of a well-founded relation: every nonempty subset has a minimal element.</p> <p>The proof given in the linked article is as follows:</p> <ol> <li>Let the loop variant be the "iteration" relation, i.e. the reflexive transitive closure of the transition relation.</li> <li>Since the loop always terminates, the loop variant has no infinite descending chains.</li> <li>Apply the axiom of choice to conclude that the loop variant is well-founded.</li> </ol> <p>My question is about step 3. Using the full, uncountable axiom of choice here feels like swatting a fly with an atom bomb. <a href="https://en.wikipedia.org/wiki/Well-founded_relation" rel="nofollow noreferrer">Elsewhere on Wikipedia</a>, we have the following:</p> <blockquote> <p>Equivalently, assuming the axiom of dependent choice, a relation is well-founded if it contains no countable infinite descending chains: that is, there is no infinite sequence <span class="math-container">$x_0, x_1, x_2, \dots$</span> of elements of <span class="math-container">$X$</span> such that <span class="math-container">$x_{n+1}\ R\ x_n$</span> for every natural number <span class="math-container">$n$</span>.</p> </blockquote> <p>So the much weaker axiom of dependent choice is sufficient.</p> <p>It seems like it might be possible to weaken this assumption further. The state space of a computer program is <em>not</em> an arbitrary set from the entire Von Neumann universe of ZF. 
Maybe countable choice suffices, since the state space of any program is countable?</p> <p>On the other hand, if dependent choice is required and countable choice will not suffice, then (assuming ZF is consistent) there must exist a model of ZF + countable choice where there is some program that (a) always terminates, (b) has an iteration relation with no infinite descending chains, yet (c) has no well-founded loop variant. This seems deeply weird.</p> <p>My question is:</p> <ol> <li>Is there a model of ZF where a program always terminates but has no loop variant?</li> <li>If the answer to 1 is <em>yes</em>, then what is the weakest choice principle that, when added to ZF, changes the answer to <em>no</em>?</li> <li>If the answer to 1 is <em>yes</em>, is it possible to write down an explicit example of such a program (a la Harvey Friedman's explicit formulas equivalent to the strengths of ordinals), or does such a program necessarily correspond to a non-standard natural number?</li> </ol>
<p>I think you are really asking a question about the <em>definition of the notion of well-foundedness</em>.</p> <p>I think the notion of loop variants is a bit of a red herring here: I would argue that any reasonable definition of well-foundedness should enable proving that a loop is terminating iff there is a well-founded relation which acts as a variant for it, almost as a tautology.</p> <p>The issue is that the classical definition of a well-founded order <span class="math-container">$&lt;$</span> on <span class="math-container">$X$</span>:</p> <blockquote> <p>There are no infinite sequences <span class="math-container">$x_1&gt;x_2&gt;x_3&gt;\ldots$</span></p> </blockquote> <p>is not a very nice definition, either from a constructive standpoint or when one is uncomfortable with the use of the axiom of (dependent, thanks Andrej!) choice. Assuming the latter, this definition is equivalent to the much nicer definition:</p> <blockquote> <p>Every non-empty subset <span class="math-container">$P\subseteq X$</span> has a <em>minimal</em> element, that is, some <span class="math-container">$x\in P$</span> such that <span class="math-container">$y \not&lt; x$</span> for every <span class="math-container">$y\in P$</span>.</p> </blockquote> <p>This definition is already much nicer, and I think it can be used to prove the variant lemma without choice.</p> <p>Finally, the <em>constructive</em> version of well-foundedness is this:</p> <blockquote> <p>For every <span class="math-container">$P\subseteq X$</span>, if for every <span class="math-container">$x\in X$</span>, <span class="math-container">$\{\ y\ |\ y &lt; x\ \}\subseteq P$</span> implies <span class="math-container">$x\in P$</span>, then <span class="math-container">$P = X$</span>.</p> </blockquote> <p>This definition seems more unwieldy, but it is actually the one you want: it enables <em>induction</em> over well-founded orders, without using either choice or excluded middle. 
Assuming excluded middle, it is equivalent to the previous one.</p> <p>Finally, the business in the wikipedia article about ordinals is not really necessary for any technical analysis of termination, and, in addition, choice is not required if you define <span class="math-container">$\omega_1$</span> to be the order type of the set of countable ordinals (ordered by the prefix relation).</p>
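For concreteness, the constructive definition is essentially how well-foundedness is set up in proof assistants. Here is a sketch in Lean 4 style (the standard library already provides <code>Acc</code> and <code>WellFounded</code> in roughly this form; the primed names just avoid clashing with them):

```lean
-- x is accessible when every r-predecessor of x is accessible;
-- r is well-founded when every element is accessible.  Induction over
-- Acc' is exactly the constructive induction principle quoted above,
-- and needs neither choice nor excluded middle.
inductive Acc' {α : Type} (r : α → α → Prop) : α → Prop where
  | intro (x : α) (h : ∀ y, r y x → Acc' r y) : Acc' r x

def WellFounded' {α : Type} (r : α → α → Prop) : Prop :=
  ∀ x, Acc' r x
```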
562
sequence-to-sequence model
What kind of scheduling problem is this?
https://cs.stackexchange.com/questions/29501/what-kind-of-scheduling-problem-is-this
<p>I'm working on a problem and would like to do some research on similar problems to help refine my approach. Can anyone help me identify what kind of problem this is or, at least, what kind of problems it relates to?</p> <p>The basic model is a set of different job types, handled by a set of machines. There is also a dispatcher that carries parts between machines. I have estimates for the travel times involved when using a dispatcher to perform transfers and I also have estimates for the amount of time required that each job type takes to execute. The exact process flow between job types is dynamic (i.e. if there are $A$, $B$ and $C$ jobs, the result of $A$ can determine whether it gets routed to either $B$ or $C$.) New parts can arrive in the system and parts will eventually be routed out of the system after a job completes.</p> <p>So, in summary:</p> <p><strong>Setup</strong>: Finite set of job types ($J_1, J_2, J_3, ..., J_n$) and machines to execute them ($M_1, M_2, M_3, ..., M_m$); a given job type might have multiple machines that support it. We can designate $M_{INPUT}$ as a special 'input machine' which can hold multiple parts that are available to be brought in to the system.</p> <p><strong>Inputs</strong>: A set of parts arrive over time at $M_{INPUT}$, where the arrival time can be denoted as $T_i$ and they are each assigned a sequence of jobs to perform $S_i$, though the algorithm cannot know the exact sequence in advance.</p> <p><strong>Process</strong>:</p> <p>The dispatcher can take one of two actions at any given time:</p> <ol> <li>Decide to move between two machines, where the time to transfer is a known $\tau(M_{i_1},M_{i_2})$.</li> <li>Decide to exchange the part it currently holds with the part in the machine it is at. This exchange takes a fixed time $\tau_E$.</li> </ol> <p>Every job type takes a known time $D_k$. After a given job at a particular machine completes, it assigns the next job in its sequence $S_i$. 
If the sequence is exhausted, the part is routed to a special machine $M_{OUT}$ for routing out of the system.</p> <p><strong>Problem</strong>: The scheduler algorithm has to produce a series of dispatch orders (either move or exchange operations) to push parts through their job sequences and ultimately out of the system. The optimization criterion is to be capable of handling aggressive input sequences (more parts, more often), and to maximize the number of parts finally moved through $M_{OUT}$ over the execution time of the system.</p> <p>I believe this is a type of scheduling problem (it sounds somewhat similar to Job Shop Scheduling?). I'm currently approaching this by considering different orderings of upcoming transfers in a branching fashion, generating an estimated timeline up to a certain limit forward in time, then preferring the schedules that move the most parts in the least amount of time. I'm pretty sure this kind of problem has been studied more rigorously, however, and I would like to learn more about the theory behind it.</p>
563
sequence-to-sequence model
(Generally) How to specify asynchronous action with side effects using logic equations
https://cs.stackexchange.com/questions/104981/generally-how-to-specify-asynchronous-action-with-side-effects-using-logic-equ
<p>Say you have this function call sequence:</p> <pre><code>function all() { fn1() fn2() fn3() } </code></pre> <p>And say that <code>fn2</code> was asynchronous and caused all kinds of side effects:</p> <pre><code>var globalPacketCounter = 0 function fn2() { var httpRequest = ... httpRequest.on('data', function(){ globalPacketCounter++ }) drawGraphicsToDisplay() ... } </code></pre> <p>Something complicated and with unknown implementation, though you can probe it to determine the types of behavior it has, and read docs, etc.</p> <p>I'm wondering, generally speaking, what you can do to incorporate this into a model checking or symbolic evaluation system, or other verification system like Hoare logic or something. I would like to do this:</p> <pre><code>all() = /\ fn1() in state 1 /\ fn2() in state 2 /\ fn3() in state 3 /\ complete = true in state 4 </code></pre> <p>Some sort of logical statement. The question is (partially) if this is a valid approach; that is, treating the <code>fn2()</code> as a single step in a logic equation. The main part of the question though is how to generally do this. I would like to basically treat everything as logic functions but am not sure how to apply it to the "async with side effects" case.</p>
<p>The way to incorporate it in a model depends on (a) what aspects of system behavior you want the model to capture, (b) what properties you want to verify of the system, and (c) what kind of expressivity the model checker framework provides.</p> <p>In other words, there's no one way to model systems. One can build multiple models of the same system, each focusing on different aspects of the system, or modelling it to a different level of abstraction/precision.</p> <p>For instance, in something based on pi calculus, one possible way to model asynchronous operations is as spawning a separate process to perform that stuff concurrently.</p>
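One concrete (if toy-sized) realization of the "each call is a step" idea from the question is explicit-state enumeration: list the atomic steps, enumerate every interleaving of the async events with the main sequence, and check a property in every outcome. The step names and event counts below are assumptions invented to mirror the question, not a general-purpose model checker:

```python
# Atomic steps of the main sequence and of the async side effects.
main_steps = ["fn1", "fn2_start", "fn3"]
async_steps = ["data_event", "data_event"]  # say, two packets arrive

def run(order):
    # Re-execute one interleaving, tracking the two pieces of state we
    # care about: globalPacketCounter and a 'complete' flag set by fn3.
    counter, complete = 0, False
    for step in order:
        if step == "data_event":
            counter += 1
        elif step == "fn3":
            complete = True
    return counter, complete

def interleavings(a, b):
    # All orderings that preserve the internal order of a and of b.
    if not a or not b:
        yield list(a) + list(b)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

results = {run(o) for o in interleavings(main_steps, async_steps)}
assert results == {(2, True)}  # fn3 always runs; both packets always counted
```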
564
sequence-to-sequence model
Detecting palindromes in binary numbers using a finite state machine
https://cs.stackexchange.com/questions/32081/detecting-palindromes-in-binary-numbers-using-a-finite-state-machine
<p>In my first algorithms class we're creating these patterns that are supposed to model a finite state machine. We were given a task to think about whether we can figure out a way to detect palindromes in binary sequences (no points if we do, it's just food for thought).</p> <p>I specifically asked the professor, knowing a little about CS and that palindromes aren't regular, and that a finite state machine can only detect a regular language. But his answer surprised me, since he said that it is indeed possible and that he thinks we should be able to come up with a solution.</p> <p>This raises two questions:</p> <ol> <li>Maybe the binary sequence is a special type of palindrome that is regular? (I'm a little fuzzy on this)</li> <li>Or the technique we're using to represent the state machine is more powerful than I think.</li> </ol> <p>In case it's 2), I'll try to explain how we're representing the problem.</p> <p>Imagine a finite wall, which is supposed to be filled with predefined types of tiles. Each tile has four colors</p> <p><img src="https://i.sstatic.net/7qBa6.png" alt="tile"></p> <p>You can design any tile you want, and as many as you want, but there has to be a finite number of tiles. They can be arranged in a single row, or into multiple rows, but the topmost row has to always match against the <code>0</code> and <code>1</code> colors. The end colors of the wall also have to be defined ahead of time, and the tiles have to match those, and they also have to match the adjacent tiles.</p> <p>Here's an example of a pattern that detects a sequence of <code>01010101...01</code></p> <p><img src="https://i.sstatic.net/E8Uif.png" alt="wall"></p> <p>The question is, is this pattern more than just modeling a finite state machine? If not, how can I use this to detect palindromes?</p> <p><strong>Update: There has to be a finite number of tile designs, and the number of tiles has to be finite as well (the input will always be finite as well).
As for the rows, there can be an arbitrary number of them; the only condition is that the tiles in a lower row must match, in their top color, the tiles in the row above them. The number of colors isn't limited either, though it has to be finite.</strong></p>
<p><strong>In a nutshell</strong>: <em>As presented, with a single row of tiles, the tiling system is equivalent to a finite state automaton. It cannot recognize the set of palindromes, which is context-free but not regular. However, if the tiling system is extended, allowing as many rows as needed (possibly with the addition of a column on each side), then it becomes as powerful as a linear bounded automaton, recognizing context-sensitive languages, and thus also palindromes. The last section is a simple set of tiles to recognize palindromes.</em></p> <h2>Recognizing palindromes with a single row of tiles</h2> <p>Regarding your tiling system, I am missing some details. Is the number of tiles finite, or just the number of different tile designs? More precisely, while the number of designs can be finite, i.e. the same for all sequences to be recognized, the number of tiles of each design should be sufficient, which may depend on the sequence to be recognized, though each recognition would use only a finite number of tiles.</p> <p>If the number of tiles is finite, less than some fixed number $n$ that is independent of the sequence to be recognized, you can at best recognize finite sets of sequences, which is much less than all regular languages.</p> <p>Second point: can the number of different colors be set to any value, the same for all sequences of the language to be recognized? If not, you cannot recognize all regular languages.</p> <p>If you have any number of tiles, a finite number of designs, and any number of colors, that is indeed equivalent to finite state automata, where the colors stand for the states, and the tiles stand for the transitions.</p> <p>I am assuming you have only a single row of tiles, with the blue at the bottom, as seems to be implied by your drawing.</p> <p>I do wonder whether it helps understanding. Maybe so?</p> <p>As you said, palindromes do not form a regular set.
One disputable intuitive explanation is that palindrome recognition implies counting, and finite state machines cannot count. But there are formal ways of proving that.</p> <p>The language of palindromes is actually a <a href="http://en.wikipedia.org/wiki/Context-free_language" rel="nofollow noreferrer">Context-Free (CF) language</a>. Context-free languages are a strict superclass of the regular languages recognized by finite state automata. So any regular language is context-free, but the converse is false. For example, the language of palindromes is CF but not regular.</p> <p>Thus, <strong>palindromes cannot be recognized with a single row of tiles.</strong></p> <h2>What more could be said.</h2> <p>"Finite state automaton" (FSA) is implicitly the name of a device that has only a finite number of states, used to control a reading head that reads input from left to right on a tape, without ever leaving the input string area, and never writes.</p> <p>Finite state machines are usually considered as doing the same, except that some can also write on an output tape.</p> <p>If we are not too attached to established terminology, we could try to relax some of these constraints, while keeping the finite number of states.</p> <p>A first attempt could be to allow the head to move in any direction.</p> <p>That gives you what is called a <a href="http://en.wikipedia.org/wiki/Two-way_deterministic_finite_automaton" rel="nofollow noreferrer">two-way finite state automaton</a>. These seem more powerful, but it can be proved that they can do no more than FSAs (whether deterministic or not).</p> <p>Another possibility is to allow the automaton to overwrite the tape it is reading, but still without ever leaving the area that was occupied by the input string. This is called a <a href="http://en.wikipedia.org/wiki/Linear_bounded_automaton" rel="nofollow noreferrer">linear bounded automaton (LBA)</a>. The LBA is actually one of the most powerful automata there are.
They recognize all the <a href="http://en.wikipedia.org/wiki/Context-sensitive_language" rel="nofollow noreferrer">context-sensitive (CS) languages</a>, which include the CF languages.</p> <p>The problem is that they are so powerful that they are difficult to control, analyze or use. But they will recognize palindromes with a finite number of states.</p> <p>I have been excluding other types of automata which do have a finite number of states for control, but use unlimited memory, which one could perceive as having an unbounded number of states.</p> <h2>Extending the tiling system</h2> <p>In <a href="https://cs.stackexchange.com/questions/32081/detecting-palindromes-in-binary-numbers-using-a-finite-state-machine/32083#32083">FrankW's answer</a>, it is shown that, by extending the tiling system with several rows, one can recognize palindromes. He has a very interesting idea there, which I am trying to push here.</p> <p>It can be pushed further. If you allow an arbitrary number of rows, and add a column on each side, it seems that the tiles can actually mimic a linear bounded automaton. Hence, it becomes a very powerful computational system.</p> <p>I am saying "it seems" because I did not go through all the tedious details of the construction, but only tried to convince myself.</p> <p>However, rather than go through even the basic aspects of the construction, which are already complex, I will rely on existing results in automata theory.</p> <p>A row of tiles may be seen as the configuration of a one-dimensional bounded cellular automaton (BCA).
Columns represent the evolution of individual cells.</p> <p>The colors of left and right sides of tiles represent the information exchanged between adjacent cells, while the top and bottom colors represent the state of the cells before and after transitions.</p> <p>So it seems that a BCA can be simulated by the extended tiling system.</p> <p><a href="http://www.sciencedirect.com/science/article/pii/0020025576900220" rel="nofollow noreferrer">David Milgram showed in 1976</a> that BCA can simulate a LBA.</p> <p>Hence the extended tiling system can simulate a LBA.</p> <p>The extended tiling system is therefore a very powerful computational system, that can recognize context sensitive languages.</p> <p>Hence <strong>the extended tiling system can recognize palindromes</strong>, among many other things.</p> <p>Now, I am not giving you the details of the recipe to recognize palindromes and other things in this way, simply because it is very complicated, and no one would read it anyway, if I were able to write it without bugs.</p> <h2>A set of tiles to recognize palindromes</h2> <p>The general construction is far too complex to be used. However, here is a simple set of tiles to recognize palindromes on the alphabet $\{0,1\}$ as requested.</p> <p>As one might expect it is symmetrical.</p> <p>R and B stand for red and blue</p> <p>Matching leftmost and rightmost symbol 1, and removing them</p> <pre><code> 1 1 0 1 R 1 1 1 1 1 1 R R 1 0 R </code></pre> <p>Matching leftmost and rightmost symbol 0, and removing them</p> <pre><code> 0 1 0 0 R 0 0 0 0 0 0 R R 1 0 R </code></pre> <p>Filling in the sides with fully red tiles</p> <pre><code> R R R R </code></pre> <p>Creating the bottom line in blue</p> <pre><code> R R R B </code></pre> <p>But this has nothing to do with finite state machines, as far as I can tell.</p>
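The tile set above strips one matching pair of outermost symbols per row. Mirrored as plain string rewriting, that is just the recursive palindrome check, which may make the construction easier to see (a sketch of the rewriting idea, not a simulation of the tile semantics):

```python
def accepts(s):
    # Row by row, the tiles remove a matching leftmost/rightmost pair;
    # acceptance = the string shrinks to length <= 1.
    while len(s) > 1:
        if s[0] != s[-1]:
            return False
        s = s[1:-1]
    return True

assert accepts("0110") and accepts("010") and accepts("")
assert not accepts("01") and not accepts("0010")
```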
565
sequence-to-sequence model
Maxima of diagonals in a column wise and row wise sorted matrix
https://cs.stackexchange.com/questions/18211/maxima-of-diagonals-in-a-column-wise-and-row-wise-sorted-matrix
<p>Let $\{a_i\}$ and $\{b_i\}$ be non-decreasing sequences of non-negative integers.</p> <p>How fast can one find $$c_j=\max_{0 \leq i&lt; j}\{a_i+b_{j-i-1}\}$$ for all $0\leq j\leq n-1$?</p> <p>Naively, it takes $O(n^2)$ time, but I'm hoping monotonicity can help here.</p> <p>It's easy to observe that $\{c_i\}$ is also non-decreasing. If we consider the matrix $M$ where $M_{i,j} = a_i+b_j$, then it is a matrix sorted in both the row and column direction, and we are searching for the maximum element in every diagonal. </p> <p>However, if it's an arbitrary column wise and row wise sorted matrix, this problem requires $\Omega(n^2)$ time.</p> <p>Proof: Let all the numbers below the main diagonal be $\infty$. The elements in the $k$th diagonal are random numbers from $(k,k+1)$. Reading any entry provides no information about any other entry. </p> <p>Edit: This problem is much harder than I anticipated. We can model this problem as a convolution problem over the semiring $(\min,+)$ (take the dual, search for min instead of max), and it can be solved in $O(\frac{n^2}{\log n})$ time according to <a href="https://mathoverflow.net/a/11606/6886">an answer by Ryan Williams on MathOverflow</a>. It doesn't use the information that the sequence is non-decreasing though.</p>
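For reference, the naive $O(n^2)$ computation reads off directly from the formula; note the range $0 \le i &lt; j$ is empty at $j = 0$, so this sketch returns $c_1, \ldots, c_{n-1}$:

```python
def diag_max(a, b):
    # c_j = max over 0 <= i < j of a[i] + b[j - i - 1], straight from the
    # formula in the question; j starts at 1 because c_0 has an empty range.
    n = len(a)
    return [max(a[i] + b[j - i - 1] for i in range(j)) for j in range(1, n)]

# a and b nondecreasing; expected values checked by hand:
# c_1 = a0+b0 = 1, c_2 = max(a0+b1, a1+b0) = max(4, 2) = 4.
assert diag_max([1, 2, 4], [0, 3, 5]) == [1, 4]
```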
<p><s>Great question! <a href="http://cs.brown.edu/~pff/papers/dt-final.pdf" rel="noreferrer">"Distance Transforms of Sampled Functions", by Felzenszwalb and Huttenlocher</a> shows how to compute this in $O(n \lg n)$ time.</s></p> <p><a href="http://arxiv.org/abs/1212.4771" rel="noreferrer">"Necklaces, Convolutions, and X+Y", By Bremner et al.</a> shows a $O\left(\frac{n^2}{\frac{\lg^2 n}{(\lg \lg n)^3}}\right)$ algorithm for this problem on the real RAM and a $O(n \sqrt{n})$ algorithm in the nonuniform linear decision tree model.</p>
566
sequence-to-sequence model
Saving on array initialization
https://cs.stackexchange.com/questions/492/saving-on-array-initialization
<p>I recently read that it is possible to have arrays which need not be initialized, i.e. it is possible to use them without having to spend any time trying to set each member to the default value. i.e. you can start using the array as if it has been initialized by the default value without having to initialize it. (Sorry, I don't remember where I read this).</p> <p>For example as to why that can be surprising:</p> <p>Say you are trying to model a <em>worst</em> case $\mathcal{O}(1)$ hashtable (for each of insert/delete/search) of integers in the range $[1, n^2]$.</p> <p>You can allocate an array of size $n^2$ bits and use individual bits to represent the existence of an integer in the hashtable. Note: allocating memory is considered $\mathcal{O}(1)$ time.</p> <p>Now, if you did not have to initialize this array at all, any sequence of say $n$ operations on this hashtable is now worst case $\mathcal{O}(n)$.</p> <p>So in effect, you have a "perfect" hash implementation, which for a sequence of $n$ operations uses $\Theta(n^2)$ space, but runs in $\mathcal{O}(n)$ time!</p> <p>Normally one would expect your runtime to be at least as bad as your space usage!</p> <p>Note: The example above might be used for an implementation of a sparse set or sparse matrix, so it is not only of theoretical interest, I suppose.</p> <p>So the question is:</p> <blockquote> <p>How is it possible to have an array like data-structure which allows us to skip the initialization step?</p> </blockquote>
<p>This is a very general trick, which can be used for other purposes than hashing. Below I give an implementation (in pseudo-code).</p> <p>Let three uninitialized vectors $A$, $P$ and $V$ of size $n$ each. We will use these to do the operations requested by our data structure. We also maintain a variable $pos$. The operations are implemented as following:</p> <pre><code>init: pos &lt;- 0 set(i,x): if not(V[i] &lt; pos and P[V[i]] = i) V[i] &lt;- pos, P[pos] &lt;- i, pos &lt;- pos + 1 A[i] &lt;- x get(i): if (V[i] &lt; pos and P[V[i]] = i) return A[i] else return empty </code></pre> <p>The array $A$ simply stores the values that are passed through the $set$ procedure. The arrays $V$ and $P$ work as certificates that can tell if a given position in $A$ has been initialized.</p> <p>Note that at every moment the elements in $P$ ranging from $0$ to $pos-1$ are initialized. We can therefore safely use these values as a certificate for the initialized values in $A$. For every position $i$ in $A$ that is initialized, there is a corresponding element in the vector $P$ whose value is equal to $i$. This is pointed by $V[i]$. Therefore, if we look at the corresponding element, $P[V[i]]$ and its value is $i$, we know that $A[i]$ has been initialized (since $P$ never lies, because all the elements that we're considering are initialized). Similarly, if $A[i]$ is not initialized, then $V[i]$ may point either to a position in $P$ outside the range $0..pos-1$, when we know for sure that it's not initialized, or may point to a position within that range. But this particular $P[j]$ corresponds to a different position in $A$, and therefore $P[j] \neq i$, so we know that $A[i]$ has not been initialized. </p> <p>It's easy to see that all these operations are done in constant time. Also, the space used is $O(n)$ for each of the vectors, and $O(1)$ for the variable $pos$, therefore $O(n)$ in total.</p>
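The pseudo-code above transcribes directly into Python. One caveat: Python lists are eagerly initialized on allocation, so this only illustrates the certificate logic, not the real constant-time benefit (which needs a memory model where allocation is $O(1)$ and reads of uninitialized cells are permitted):

```python
class UninitArray:
    # The A/P/V certificate trick from the answer above.
    def __init__(self, n):
        self.A = [None] * n  # values (conceptually uninitialized)
        self.P = [0] * n     # P[0..pos-1] lists the initialized indices
        self.V = [0] * n     # V[i] points into P if A[i] was ever set
        self.pos = 0

    def _certified(self, i):
        # A[i] is initialized iff V[i] points into the certified prefix
        # of P and that P entry points back at i.
        return self.V[i] < self.pos and self.P[self.V[i]] == i

    def set(self, i, x):
        if not self._certified(i):
            self.V[i] = self.pos
            self.P[self.pos] = i
            self.pos += 1
        self.A[i] = x

    def get(self, i):
        return self.A[i] if self._certified(i) else None

u = UninitArray(8)
assert u.get(3) is None       # never set: reads as empty
u.set(3, 42)
assert u.get(3) == 42
assert u.get(4) is None       # untouched cells stay empty
```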
567
sequence-to-sequence model
Is relation extraction considered a subtask of information extraction?
https://cs.stackexchange.com/questions/133710/is-relation-extraction-considered-a-subtask-of-information-extraction
<p>I'm currently trying to investigate the relationship between relation extraction (RE) and event extraction (EE). Doing more reading on the two tasks has caused me to question my initial belief that RE is within the task of information extraction (IE).</p> <p>The reason I believed so is that RE is typically modeled as a prediction task where a model is given two entities and tasked to predict the relationship between them, whereas EE is typically modeled as a sequence labeling task to extract an event trigger and the event's arguments.</p> <p>More formally, I believe the two can be modeled as:</p> <p><span class="math-container">$$ \begin{align} \text{RE} &amp; \rightarrow p(r\ \vert\ e_1, e_2, c) \\ \text{EE} &amp; \rightarrow p(l_i \vert\ c)\quad \text{where}\ l \in \{\text{null}, \text{trigger}, \text{argument}\} \end{align} $$</span></p> <p>where <span class="math-container">$r$</span> denotes the relation, <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span> each denote the two entities, <span class="math-container">$c$</span> denotes the provided context, and <span class="math-container">$l_i$</span> refers to the label of the <span class="math-container">$i$</span>th token given context <span class="math-container">$c$</span>.</p> <p>Anyway, this has led me to believe that RE and EE are fundamentally different tasks, and to question whether RE even falls under the larger umbrella of IE. Any opinions or pointers to resources/papers are appreciated. Thanks.</p>
568
sequence-to-sequence model
An algorithm to find differences between routing paths
https://cs.stackexchange.com/questions/141340/an-algorithm-to-find-differences-between-routing-paths
<p>I need to come up with an algorithm that finds differences in the sequence of each product's routing (its sequence of processes). There are several processes aligned in order, and each process is operated by a specified piece of equipment.</p> <p>For example,</p> <p>suppose there are four processes (A - B - C - D) before the final product, and in each process there are several pieces of equipment (for example, a1, a2, a3, a4 can operate process A).</p> <p>The routing for each product is shown below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Process</th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr> </thead> <tbody> <tr> <td>Product1</td> <td>a1</td> <td>b3</td> <td>c3</td> <td>d2</td> </tr> <tr> <td>product2</td> <td>a2</td> <td>b1</td> <td>c2</td> <td>d1</td> </tr> <tr> <td>product3</td> <td>a1</td> <td>b2</td> <td>c1</td> <td>d1</td> </tr> <tr> <td>product4</td> <td>a1</td> <td>b1</td> <td>c2</td> <td>d3</td> </tr> <tr> <td>product5(defective)</td> <td>a2</td> <td>b4</td> <td>c3</td> <td>d2</td> </tr> <tr> <td>product6</td> <td>a1</td> <td>b3</td> <td>c2</td> <td>d3</td> </tr> <tr> <td>product7</td> <td>a1</td> <td>b2</td> <td>c2</td> <td>d3</td> </tr> </tbody> </table> </div> <p>If product 5 turned out to be defective, we may conclude that in process B, the equipment b4 most likely caused an issue for product 5.</p> <p>I want the equipment b4 to be the output of this algorithm, along with perhaps the other most likely causes of the defective product 5.</p> <p>Approach 1:</p> <p>I label-encoded each piece of equipment by column, trained a machine learning model, and calculated Shapley values for each column to see which process caused the issue. The problem is that in some cases there are few defective products, which causes an extreme data imbalance between normal and faulty products' routings.</p> <p>So approach 1 does not seem promising for this problem.</p> <p>If you could suggest other feasible algorithms, what would they be?</p>
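No answer accompanies this question here. As a minimal baseline of my own (not from the thread), one can skip model training entirely and compare how often each piece of equipment appears in defective versus normal routings — equipment that occurs only in defective products, like b4 in the table, surfaces immediately. All names below are illustrative:

```python
from collections import Counter

# Routings reproduced from the question's table.
normal = [
    ("a1", "b3", "c3", "d2"),   # product 1
    ("a2", "b1", "c2", "d1"),   # product 2
    ("a1", "b2", "c1", "d1"),   # product 3
    ("a1", "b1", "c2", "d3"),   # product 4
    ("a1", "b3", "c2", "d3"),   # product 6
    ("a1", "b2", "c2", "d3"),   # product 7
]
defective = [("a2", "b4", "c3", "d2")]   # product 5

norm_counts = Counter(e for route in normal for e in route)
def_counts = Counter(e for route in defective for e in route)

def suspicion(e):
    # Difference between the equipment's frequency among defective
    # routings and its frequency among normal routings.
    return def_counts[e] / len(defective) - norm_counts[e] / len(normal)

suspects = sorted(def_counts, key=suspicion, reverse=True)
print(suspects[0])   # -> b4
```

With very few defective samples this is only a ranking heuristic, not a significance test, but it sidesteps the class-imbalance problem that hurts the Shapley-value approach.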
569
sequence-to-sequence model
counteracting numerical instability in HMM training
https://cs.stackexchange.com/questions/72449/counteracting-numerical-instability-in-hmm-training
<p>I am training an HMM with Baum-Welch for part-of-speech tagging. I am training the model with 79 hidden variables (part-of-speech tags) and 80,000 observed variables (words). I am working with log probabilities. To give you an idea, I defined the necessary arithmetic operations like so:</p> <pre><code>#include &lt;cmath&gt;
#include &lt;limits&gt;

struct log_policy {
    template&lt;class T, class U&gt;
    static auto mul(const T t1, const U t2) { return t1 + t2; }

    template&lt;class T, class U&gt;
    static auto div(const T t1, const U t2) { return t1 - t2; }

    template&lt;class T, class U&gt;
    static auto add(const T t1, const U t2) { return std::log(std::exp(t1) + std::exp(t2)); }

    template&lt;class T, class U&gt;
    static auto sub(const T t1, const U t2) { return std::log(std::exp(t1) - std::exp(t2)); }

    template&lt;class T&gt;
    static auto linear_scale(const T t) { return std::exp(t); }

    template&lt;class T&gt;
    static auto native_scale(const T t) { return std::log(t); }

    template&lt;class T&gt;
    constexpr static auto make_additive_neutral_element() { return -1 * std::numeric_limits&lt;T&gt;::infinity(); }

    template&lt;class T&gt;
    constexpr static auto make_multiplicative_neutral_element() { return static_cast&lt;T&gt;(0); }

    template&lt;class T&gt;
    constexpr static auto make_annihilator() { return -1 * std::numeric_limits&lt;T&gt;::infinity(); }
};
</code></pre> <p>Anyway, besides working with log probabilities I still get numeric instability for longer training sentences. This is primarily due to the size of the set of observed variables. As the size is 80,000, each state in the HMM has an average emission probability of 1/80,000 for each word. I was wondering how to counteract this. Would reducing the number of emissions be a valid option?</p> <p>To further illustrate: this is a section of a debug representation of the forward trellis after running the forward algorithm on a training sequence. As you can see, the rightmost fifth of the table is full of <code>-inf</code>, and they all appear at the same time... 
which I don't quite know why yet.</p> <p>(The format for a cell is <code>&lt;hidden variable human readable name&gt; &lt;hidden variable internal name and state ID&gt; &lt;probabiliy of cell&gt; &lt;backpointer&gt;</code>)</p> <pre><code>($$( (0) : -40.3203, -1) ($$( (0) : -78.1552, -1) ($$( (0) : -115.029, -1) ($$( (0) : -151.865, -1) ($$( (0) : -188.706, -1) ($$( (0) : -225.547, -1) ($$( (0) : -262.389, -1) ($$( (0) : -299.23, -1) ($$( (0) : -336.071, -1) ($$( (0) : -372.913, -1) ($$( (0) : -409.754, -1) ($$( (0) : -446.596, -1) ($$( (0) : -483.437, -1) ($$( (0) : -520.278, -1) ($$( (0) : -557.12, -1) ($$( (0) : -593.961, -1) ($$( (0) : -630.802, -1) ($$( (0) : -667.644, -1) ($$( (0) : -704.485, -1) ($$( (0) : -741.955, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$( (0) : -inf, -1) ($$, (1) : -43.2152, -1) ($$, (1) : -77.935, -1) ($$, (1) : -114.923, -1) ($$, (1) : -151.76, -1) ($$, (1) : -188.602, -1) ($$, (1) : -225.444, -1) ($$, (1) : -262.285, -1) ($$, (1) : -299.127, -1) ($$, (1) : -335.968, -1) ($$, (1) : -372.809, -1) ($$, (1) : -409.651, -1) ($$, (1) : -446.492, -1) ($$, (1) : -483.333, -1) ($$, (1) : -520.175, -1) ($$, (1) : -557.016, -1) ($$, (1) : -593.857, -1) ($$, (1) : -630.699, -1) ($$, (1) : -667.54, -1) ($$, (1) : -704.382, -1) ($$, (1) : -741.667, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$, (1) : -inf, -1) ($$. (2) : -40.3012, -1) ($$. (2) : -77.9027, -1) ($$. (2) : -114.861, -1) ($$. (2) : -151.692, -1) ($$. (2) : -188.534, -1) ($$. (2) : -225.375, -1) ($$. (2) : -262.217, -1) ($$. (2) : -299.058, -1) ($$. (2) : -335.899, -1) ($$. (2) : -372.741, -1) ($$. (2) : -409.582, -1) ($$. (2) : -446.423, -1) ($$. (2) : -483.265, -1) ($$. (2) : -520.106, -1) ($$. (2) : -556.948, -1) ($$. (2) : -593.789, -1) ($$. (2) : -630.63, -1) ($$. (2) : -667.472, -1) ($$. 
(2) : -704.313, -1) ($$. (2) : -741.444, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($$. (2) : -inf, -1) ($( (3) : -45.7222, -1) ($( (3) : -78.2411, -1) ($( (3) : -115.133, -1) ($( (3) : -151.978, -1) ($( (3) : -188.82, -1) ($( (3) : -225.661, -1) ($( (3) : -262.502, -1) ($( (3) : -299.344, -1) ($( (3) : -336.185, -1) ($( (3) : -373.026, -1) ($( (3) : -409.868, -1) ($( (3) : -446.709, -1) ($( (3) : -483.551, -1) ($( (3) : -520.392, -1) ($( (3) : -557.233, -1) ($( (3) : -594.075, -1) ($( (3) : -630.916, -1) ($( (3) : -667.757, -1) ($( (3) : -704.599, -1) ($( (3) : -742.137, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($( (3) : -inf, -1) ($, (4) : -43.7637, -1) ($, (4) : -78.1276, -1) ($, (4) : -115.012, -1) ($, (4) : -151.848, -1) ($, (4) : -188.689, -1) ($, (4) : -225.53, -1) ($, (4) : -262.371, -1) ($, (4) : -299.213, -1) ($, (4) : -336.054, -1) ($, (4) : -372.896, -1) ($, (4) : -409.737, -1) ($, (4) : -446.578, -1) ($, (4) : -483.42, -1) ($, (4) : -520.261, -1) ($, (4) : -557.102, -1) ($, (4) : -593.944, -1) ($, (4) : -630.785, -1) ($, (4) : -667.626, -1) ($, (4) : -704.468, -1) ($, (4) : -741.732, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($, (4) : -inf, -1) ($. (5) : -43.5252, -1) ($. (5) : -77.9508, -1) ($. (5) : -114.713, -1) ($. (5) : -151.56, -1) ($. (5) : -188.401, -1) ($. (5) : -225.243, -1) ($. (5) : -262.084, -1) ($. (5) : -298.926, -1) ($. (5) : -335.767, -1) ($. (5) : -372.608, -1) ($. (5) : -409.45, -1) ($. (5) : -446.291, -1) ($. (5) : -483.132, -1) ($. (5) : -519.974, -1) ($. (5) : -556.815, -1) ($. (5) : -593.656, -1) ($. (5) : -630.498, -1) ($. (5) : -667.339, -1) ($. (5) : -704.18, -1) ($. (5) : -741.349, -1) ($. (5) : -inf, -1) ($. (5) : -inf, -1) ($. (5) : -inf, -1) ($. (5) : -inf, -1) ($. 
(5) : -inf, -1) ($. (5) : -inf, -1) ($. (5) : -inf, -1) </code></pre> <p>When I reduce the number of emissions and thus increase the average emission probability the problem goes away. But I am unsure whether that is the right way to go about this as the more emissions the model is trained with the better it will perform later on. </p>
<h2>Avoiding zero probabilities</h2> <p>You probably want to be using <a href="https://en.wikipedia.org/wiki/Additive_smoothing" rel="nofollow noreferrer">additive smoothing</a> when estimating probabilities from count data.</p> <p>With a dictionary of 80,000 words, most of those words will be very rare: many of them might never appear anywhere in your training data, or will never appear associated with a particular part of speech. Thus, your counts for those words will be zero. That causes you to estimate an emission probability of zero, if you naively estimate the probability as a ratio of counts. However, a zero probability is unlikely to be physically meaningful (I doubt that there is truly <em>zero</em> probability of outputting that word; instead, it's probably some small probability that's close to zero but not exactly zero). Additive smoothing is one way to address this.</p> <p>Lack of additive smoothing probably explains the <code>-inf</code>. You are computing with <a href="https://en.wikipedia.org/wiki/Log_probability" rel="nofollow noreferrer">log-probabilities</a>. A probability of zero corresponds to a log-probability of $-\infty$ (i.e., <code>-inf</code>). Implement additive smoothing, and your <code>-inf</code>'s might largely go away.</p> <h2>Numerical stability of log probabilities</h2> <p>While computing with <a href="https://en.wikipedia.org/wiki/Log_probability" rel="nofollow noreferrer">log probabilities</a> does generally improve stability compared to computing with probabilities directly, there are some tricky aspects you must be careful of.</p> <p>For instance, instead of adding probabilities by computing $\log(\exp(x)+\exp(y))$, it's usually better to compute $x + \log(1 + \exp(y-x))$ (assuming $y \ge x$; if $y &lt;x$, swap $x,y$ before doing this computation). This avoids loss of accuracy when $x,y$ are both small and are close to each other. You might want to use a built-in library for computing the function $u \mapsto \log(1+u)$. 
Some languages have built-in library functions for computing $\log(\exp(x)+\exp(y))$ in a numerically-stable way: it might be called something like <code>logsumexp</code>.</p> <p>Other relevant references: <a href="https://stackoverflow.com/q/7480996/781723">https://stackoverflow.com/q/7480996/781723</a>, <a href="https://math.stackexchange.com/q/2189716/14578">https://math.stackexchange.com/q/2189716/14578</a>, <a href="https://stackoverflow.com/q/42355196/781723">https://stackoverflow.com/q/42355196/781723</a>, <a href="https://stackoverflow.com/q/23630277/781723">https://stackoverflow.com/q/23630277/781723</a>.</p>
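To make the shifted formula concrete, here is a small comparison in Python (a sketch rather than the asker's C++; function names are mine). The naive form dies once `exp` underflows, while the shifted form only ever exponentiates a non-positive difference:

```python
import math

def logadd_naive(x, y):
    # Direct log(e^x + e^y): exp() underflows to 0.0 for
    # log-probabilities far below ~-745, and log(0) is undefined.
    return math.log(math.exp(x) + math.exp(y))

def logadd_stable(x, y):
    # log(e^x + e^y) = x + log1p(e^(y - x)) once x >= y; we only
    # exponentiate a non-positive number, and log1p stays accurate
    # when its argument is close to zero.
    if y > x:
        x, y = y, x
    return x + math.log1p(math.exp(y - x))

# Magnitudes like those in the trellis dump above:
print(logadd_stable(-1000.0, -1000.5))   # finite, roughly -999.526
try:
    print(logadd_naive(-1000.0, -1000.5))
except ValueError:
    print("naive form failed: exp underflowed to 0, log(0) undefined")
```

The same shift works for the backward pass and for normalizing the trellis columns.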
570
sequence-to-sequence model
Path that stays within a convex polyhedron
https://cs.stackexchange.com/questions/142152/path-that-stays-within-a-convex-polyhedron
<p>Let <span class="math-container">$\mathcal{P},\mathcal{Q}$</span> denote two convex polyhedra in <span class="math-container">$\mathbb{R}^d$</span>, which can be represented by a set of linear inequalities. Let <span class="math-container">$A \subset \mathbb{R}^d$</span> be a finite set of vectors.</p> <p>The problem is to determine whether there is a sum of vectors, where each term in the sum is an element of <span class="math-container">$A$</span>, so that the sum is in <span class="math-container">$\mathcal{Q}$</span>, and each prefix sum is in <span class="math-container">$\mathcal{P}$</span>.</p> <p>In other words, given <span class="math-container">$\mathcal{P},\mathcal{Q},A$</span>, the goal is to determine whether there exists a sequence <span class="math-container">$a_1,\dots,a_n$</span> such that <span class="math-container">$a_i \in A$</span> for each <span class="math-container">$i$</span>, and <span class="math-container">$a_1+\dots+a_j \in \mathcal{P}$</span> for each <span class="math-container">$j$</span>, and <span class="math-container">$a_1+\dots + a_n \in \mathcal{Q}$</span>. The sequence is allowed to repeat elements of <span class="math-container">$A$</span> multiple times.</p> <p>Is there a way to solve this, perhaps with the use of an ILP solver? I can see how to represent this as an instance of ILP using <span class="math-container">$O(n\cdot |A|)$</span> variables; is there a way to represent it as an instance using, say, <span class="math-container">$O(n+|A|)$</span> variables?</p> <p>This amounts to testing whether there is a path that stays within <span class="math-container">$\mathcal{P}$</span>, and eventually reaches a point in <span class="math-container">$\mathcal{Q}$</span>, where at each step you can take a step in one of multiple directions (indicated by <span class="math-container">$A$</span>). 
Is there a nice solution when the number of steps needed is large but the number of possible directions is small?</p> <p>This can model recipes that involve crafting items in a game [1], [2].</p> <p>Related: [1] <a href="https://cs.stackexchange.com/q/125011/755">Detecting conservation, loss, or gain in a crafting game with items and recipes</a>, [2] <a href="https://cs.stackexchange.com/q/142148/755">Calculating path for most efficient use of consumable items?</a></p>
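No answer was posted for this question. As a sketch entirely of my own (with made-up example polytopes, and only workable when the set of reachable integer points inside $\mathcal{P}$ is finite), a plain BFS over prefix sums makes the "path that stays within $\mathcal{P}$" view concrete — it does not address the compact ILP encoding asked for:

```python
from collections import deque

# Hypothetical small instance: P is the box 0 <= x, y <= 5 (a conjunction
# of linear inequalities), Q is the half-space x + y >= 9, and A is a
# small set of integer step vectors.
def in_P(p):
    x, y = p
    return 0 <= x <= 5 and 0 <= y <= 5

def in_Q(p):
    x, y = p
    return x + y >= 9

A = [(1, 0), (0, 1), (1, -1)]

def find_path(start=(0, 0)):
    """BFS over prefix sums that never leave P; returns the list of
    visited points of a witness path ending in Q, or None."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if in_Q(p):
            path = []
            while p is not None:        # walk parents back to the start
                path.append(p)
                p = parent[p]
            return path[::-1]
        for a in A:
            q = (p[0] + a[0], p[1] + a[1])
            if q not in parent and in_P(q):   # prefix sums must stay in P
                parent[q] = p
                queue.append(q)
    return None

path = find_path()
```

Because each lattice point is enqueued at most once, this terminates whenever $\mathcal{P}$ is bounded; the witness sequence of steps is recovered from consecutive differences along `path`.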
571
sequence-to-sequence model
Can we enumerate finite sequences which have no halting continuation?
https://cs.stackexchange.com/questions/103981/can-we-enumerate-finite-sequences-which-have-no-halting-continuation
<p>Note: this question has been cross-posted to <a href="https://math.stackexchange.com/questions/3110862/can-we-enumerate-finite-sequences-which-have-no-halting-continuation">Math.SE</a>, after about a week here.</p> <p>I am trying to deepen my understanding of the relationship between the Halting Problem and Godel's <em>Completeness</em> Theorem (not Incompleteness). </p> <p>Specifically, as I understand it the Completeness Theorem guarantees a finite proof for any first-order logical statement which holds in all countable models of a first-order theory. (This is my restatement of <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Model_existence_theorem" rel="nofollow noreferrer">Wikipedia</a>'s "Every syntactically consistent, countable first-order theory has a finite or countable model.")</p> <p>Since the statement "Program <span class="math-container">$P_n$</span> (encoded by integer <span class="math-container">$n$</span>) does not halt" can presumably be stated in first-order logic and cannot in general be proven, we need to understand why (for given <span class="math-container">$n$</span>) it does not hold in all countable models. </p> <p>Intuitively, I expect that any countable model can be encoded as an infinite program for a Turing machine, eg by listing the countable set of first-order propositions. Likewise, I expect that any such "infinite Turing machine" can be identified with a countable first-order theory, by the Church-Turing thesis plus induction.</p> <p>So, just as the Completeness Theorem fails to "solve" arithmetic because of non-standard models with infinite integers (which eg satisfy otherwise unsatisfiable Diophantine equations), I'm speculating that it fails for Turing machines because of non-standard models with "infinite programs".</p> <p>But by my understanding statements which are true in all models (including non-standard / infinite ones) should still be provable. 
So I expect that if some finite set of axioms, which "pins down" some finite set of digits of a potentially infinite program, is enough to prevent the possibility of halting, we should be able to prove it. </p> <p>Or in other words, if a finite sequence does not have any continuation which encodes a halting program, that should be provable.</p> <p>Does my logic hold? Or what am I misunderstanding?</p> <p>The reason this is not trivially wrong by Rice's Theorem is that it's a property of the program itself, rather than the language recognized by that program, which is <span class="math-container">$\emptyset$</span> for the programs I'm talking about.</p>
<p>The language of your question confuses me a bit (&quot;if a finite sequence does not have any continuation which encodes a halting program&quot; - what exactly does that mean?), but I think the following is likely to clarify the situation:</p> <p>Let's take as our &quot;base theory&quot; first-order Peano arithmetic, <span class="math-container">$\mathsf{PA}$</span>. We could use pretty much any reasonable theory here, but <span class="math-container">$\mathsf{PA}$</span> has the advantage of being broadly known, so I'll use it. Let <span class="math-container">$(M_e)_{e\in\mathbb{N}}$</span> be some fixed usual enumeration of Turing machines. The following is indeed true:</p> <ul> <li><p>The set <span class="math-container">$$\mathsf{MustHalt}=\{n\in\mathbb{N}: \mbox{Every model of $\mathsf{PA}$ thinks $M_n$ halts on input $n$}\}$$</span> is c.e., by the completeness theorem.</p> </li> <li><p>The set <span class="math-container">$$\mathsf{Can\mbox{'}tHalt}=\{n\in\mathbb{N}: \mbox{No model of $\mathsf{PA}$ thinks $M_n$ halts on input $n$}\}$$</span> is <em>also</em> c.e. by the completeness theorem, and disjoint from <span class="math-container">$\mathsf{MustHalt}$</span> (I'm assuming <span class="math-container">$\mathsf{PA}$</span> is consistent here obviously).</p> </li> <li><p>However, the set <span class="math-container">$$\mathsf{MightHalt}=\mathbb{N}\setminus(\mathsf{MustHalt}\cup\mathsf{Can\mbox{'}tHalt})$$</span> is not c.e.; indeed, <span class="math-container">$\mathsf{MustHalt}$</span> and <span class="math-container">$\mathsf{Can\mbox{'}tHalt}$</span> are <em>computably inseparable</em>, and <span class="math-container">$\mathsf{MightHalt}$</span> is co-c.e.-complete exactly as each of the former is c.e.-complete.</p> </li> </ul> <p>The second bulletpoint above is, perhaps, an affirmative answer to your question. 
But the third bulletpoint should stress the difficulty of drawing strong conclusions from that: the c.e.-ness of halting prevention is not, actually, that sweeping a phenomenon (indeed by Godel's <em>in</em>completeness theorem it can't possibly be).</p>
572
sequence-to-sequence model
Discounted Optimal Stopping
https://cs.stackexchange.com/questions/145039/discounted-optimal-stopping
<p>The model is as follows:</p> <p>Consider an infinite horizon discounted problem <span class="math-container">$(0 &lt; γ &lt; 1)$</span> in which the state space is finite, with <span class="math-container">$n$</span> states, and there are only two possible decisions: stop or continue.</p> <p>If at time <span class="math-container">$t$</span> you are at state <span class="math-container">$s$</span> and you decide to stop, you incur a stopping cost <span class="math-container">$γ^t g(s)$</span> and move to a cost-free state, at which you stay forever.</p> <p>If at time <span class="math-container">$t$</span> you are at state <span class="math-container">$s$</span> and you decide to continue, you incur a continuation cost <span class="math-container">$γ^t c(s)$</span> and move to a next state <span class="math-container">$s'$</span>, selected at random according to transition probabilities <span class="math-container">$p_{s,s'}$</span>.</p> <p>Assume that all <span class="math-container">$g(s)$</span> and <span class="math-container">$c(s)$</span> are nonnegative. A stationary policy <span class="math-container">$π$</span> is completely specified by the set <span class="math-container">$S_π$</span> of states at which the policy decides to stop. Let <span class="math-container">$π_0, π_1, \ldots$</span> be the sequence of policies generated by the policy iteration algorithm, starting with a given arbitrary policy <span class="math-container">$π_0$</span>.</p> <p>My question is: how can one determine whether <span class="math-container">$S_{\pi_{2}}\subset S_{\pi_{1}}\subset S_{\pi_{0}}$</span>?</p>
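For experimenting with this numerically, here is a small policy-iteration loop for the stop/continue model described above (a sketch of my own, with randomly generated costs and transitions; it proves nothing about nesting of the $S_{\pi_k}$, but lets one inspect the sequence of stop sets a given $\pi_0$ generates):

```python
import random

random.seed(0)
n, gamma = 5, 0.9
g = [random.random() for _ in range(n)]        # stopping costs g(s) >= 0
c = [random.random() for _ in range(n)]        # continuation costs c(s) >= 0
P = []
for _ in range(n):                             # row-stochastic transitions
    row = [random.random() for _ in range(n)]
    total = sum(row)
    P.append([x / total for x in row])

def evaluate(stop):
    """Cost-to-go of the stationary policy whose stop set is `stop`
    (a list of booleans), computed by iterating its Bellman equation."""
    V = [0.0] * n
    for _ in range(10000):
        V_new = [g[s] if stop[s]
                 else c[s] + gamma * sum(P[s][t] * V[t] for t in range(n))
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < 1e-13:
            return V_new
        V = V_new
    return V

def improve(V):
    """Greedy step: stop wherever stopping is no worse than continuing."""
    return [g[s] <= c[s] + gamma * sum(P[s][t] * V[t] for t in range(n))
            for s in range(n)]

stop = [True] * n                              # pi_0: stop in every state
stop_sets = [stop]
for _ in range(100):                           # policy iteration
    new_stop = improve(evaluate(stop))
    if new_stop == stop:
        break
    stop = new_stop
    stop_sets.append(stop)
# stop_sets now holds S_{pi_0}, S_{pi_1}, ... for inspection
```

Checking whether each entry of `stop_sets` is contained in its predecessor is then a one-line set comparison, which makes it easy to probe which choices of $\pi_0$ empirically yield the nested sequence asked about.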
573
sequence-to-sequence model
Why doesn&#39;t infinite run time violate Turing completeness? Shouldn&#39;t &quot;completeness&quot; include halting?
https://cs.stackexchange.com/questions/103001/why-doesnt-infinite-run-time-violate-turing-completeness-shouldnt-completene
<p>Why doesn't infinite run time violate Turing completeness? Shouldn't &quot;completeness&quot; include halting?</p> <hr /> <p>The halting problem:</p> <blockquote> <p>The halting problem is a decision problem about properties of computer programs on a fixed <strong>Turing-complete</strong> model of computation</p> <p>...</p> <p>The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input.</p> </blockquote> <hr /> <p>Now, if one has a language that's Turing-complete, but expresses a program of infinite computation, e.g.</p> <pre><code>while(True): 1; </code></pre> <p>then why is it meaningful to still treat it as being a &quot;Turing-complete&quot; program?</p> <hr /> <p>Contrast this to the notion of completeness in Banach spaces:</p> <blockquote> <p>a real normed vector space <span class="math-container">$V$</span> is called <strong>complete</strong> if every Cauchy sequence in <span class="math-container">$V$</span> <strong>converges</strong> in <span class="math-container">$V$</span>.</p> </blockquote> <hr /> <p>However, Wikipedia certainly says that:</p> <blockquote> <p>No physical system can have infinite memory, but <strong>if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.</strong></p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Turing_completeness" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Turing_completeness</a></p> <p><strong>However, why is it reasonable to ignore this?</strong> Because even if the idealization fits Turing's thesis, it's not practical, since infinite programs are useless.</p>
<p>You do not yet understand what Turing completeness means.</p> <p>Turing completeness is the ability to perform arbitrary <em>finite</em> computations.</p> <p>To simplify matters, we say an arbitrary finite computation is an effective procedure that, given some (finite) input, produces some (finite) output, after some (finite number of) steps. To simplify matters further, we assume the output for a given input is the same every time.</p> <p>That is to say: the effect of a computation is a function from strings (finite sequences of symbols) to strings.</p> <p>(Not all computations in practice are of this type, but as far as computability is concerned, they can be shown to correspond to a computation of this type.)</p> <p>So the inputs, the outputs, and the numbers of steps are all finite. However, the set of possible inputs and the set of possible outputs can be infinite.</p> <p>Such a function is called <em>computable</em> if and only if some Turing machine computes it. That is to say, the Turing machine, when started on a tape containing an input to the function, will always terminate with a tape containing the output defined by the function for that input.</p> <p>It turns out that not all functions from strings to strings are computable.</p> <p>In particular, not all functions from strings to {"Y", "N"} are computable. These functions effectively say whether a string is a member of a certain set (the function says "Y" if it is, and "N" if it is not).</p> <p>A set is decidable if some Turing machine computes its "Y"/"N" function. Not all sets are decidable.</p> <p>A programming language is Turing complete if it can compute all functions that a Turing machine can compute. Amongst other things, this means it can decide any decidable set. Undecidable sets are the ones that no Turing machine can decide. 
No programming language can decide those sets (says the Church-Turing thesis; I could substantiate this, but this answer is too long already), not even when it is Turing complete.</p> <p>In short: Turing completeness means a programming language can do everything a Turing machine can do. It doesn't mean it can do things a Turing machine cannot do, such as deciding undecidable sets. No violation is going on here.</p>
574
sequence-to-sequence model
If a TM accepts a non-regular language, its space complexity is $\Omega(\log \log n)$
https://cs.stackexchange.com/questions/145466/if-a-tm-accepts-a-non-regular-language-its-space-complexity-is-omega-log-lo
<p>I have been given an assignment that I'm having a very hard time understanding.</p> <p>The assignment is to prove that if an algorithm accepts a non-regular language, its space complexity is <span class="math-container">$\Omega(\log \log n)$</span> (equivalently, if the space complexity is <span class="math-container">$o(\log \log n)$</span>, the language is regular). The computational model to be used is a Turing machine with one input and one work tape.</p> <p>Here's an excerpt from a book called <em>Theory of computation</em> by Dexter C. Kozen that I will be using to prove the claim (if it's not allowed to post such an excerpt here please let me know; I'll remove it and share just some parts instead. I posted it whole as I consider all information there to be important).</p> <p><a href="https://i.sstatic.net/3KfXG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KfXG.png" alt="Complexity theorem" /></a></p> <p>I have read this excerpt several times and there are several parts that I wasn't able to grasp. If I understand this correctly, <span class="math-container">$N$</span> is equal to all possible configurations (a crossing sequence) for a <em>single</em> cell. On the other hand, <span class="math-container">$\sum^{m}_{i=0} N^{i}$</span> is equal to crossing sequences on <span class="math-container">$m$</span> cells. So the thing I don't understand about this is why the <span class="math-container">$n/2$</span> first (or last) crossing sequences have to be distinct. I suppose I understand that you'd be able to cut a part of the input string if there were two identical crossing sequences on two positions, but I don't see why the number of required distinct crossing sequences equals <span class="math-container">$n/2$</span> and not some other number. 
For example, why do we partition <span class="math-container">$x$</span> into 2 parts and not 4 or some other number?</p> <p>One other thing that I really don't understand is the very last equation, which says &quot;Combining (1.3), (1.4) and (1.5) and taking logs, we get <span class="math-container">$S(n) \geq \Omega (\log \log n)$</span>.&quot; I just don't understand how it's possible to come up with this equation by taking those three mentioned equations. The first step should be replacing <span class="math-container">$N$</span> in those equations, which gives me these two equations:</p> <p><span class="math-container">$\frac{n}{2} \leq \sum^{m}_{i=0} (q \cdot S(n)\cdot d^{S(n)})^{i} = \frac{(q \cdot S(n)\cdot d^{S(n)})^{m+1}-1}{(q \cdot S(n)\cdot d^{S(n)})-1}$</span></p> <p><span class="math-container">$m \leq 2 \cdot (q \cdot S(n)\cdot d^{S(n)})$</span></p> <p>I don't know how to combine these two equations and reach the desired equation that is <span class="math-container">$S(n) \geq \Omega (\log \log n)$</span>. Thank you for any help in advance.</p>
<blockquote> <p>If I understand this correctly, <span class="math-container">$N$</span> is equal to all possible configurations (a crossing sequence) for a single cell.</p> </blockquote> <p>That's not accurate: a crossing sequence is a <em>sequence</em> of configurations. We form the crossing sequence at position <span class="math-container">$i$</span> as follows: we trace the execution of the Turing machine, and whenever the head on the input tape crosses the <span class="math-container">$i$</span>'th position (either toward position <span class="math-container">$i+1$</span> or toward position <span class="math-container">$i-1$</span>), we concatenate the current configuration (state, contents of work tape, and head location on work tape) to the crossing sequence.</p> <blockquote> <p>On the other hand, <span class="math-container">$\sum_{i=0}^m N^i$</span> is equal to crossing sequences on <span class="math-container">$m$</span> cells.</p> </blockquote> <p>No. That's the number of crossing sequences of length at most <span class="math-container">$m$</span> at a <em>single</em> cell.</p> <blockquote> <p>I don't understand about this is why the <span class="math-container">$n/2$</span> first (or last) crossing sequences have to be distinct. I suppose I understand that you'd be able to cut a part of the input string if there were two identical crossing sequences on two positions, but I don't see why the number of required distinct crossing sequences equals <span class="math-container">$n/2$</span> and not some other number. For example, why do we partition <span class="math-container">$x$</span> into 2 parts and not 4 or some other number?</p> </blockquote> <p>Suppose that the crossing sequence <span class="math-container">$c$</span> occurs at position <span class="math-container">$i$</span>. 
Then the crossing sequences at positions <span class="math-container">$1,\ldots,i$</span> are all distinct: if the crossing sequences at positions <span class="math-container">$j&lt;k\leq i$</span> were identical, deleting the part of the input corresponding to positions <span class="math-container">$j+1,\ldots,k$</span> will result in a shorter input <span class="math-container">$x$</span> with the same crossing sequence <span class="math-container">$c$</span> (at position <span class="math-container">$i-(k-j)$</span>. Similarly, the crossing sequences at positions <span class="math-container">$i,\ldots,n$</span> are all distinct. What we <em>cannot</em> say is that the crossing sequences at positions <span class="math-container">$1,\ldots,n$</span> are all distinct — the same argument doesn't work.</p> <p>For the rest of the argument, we want to find as many distinct crossing sequences as possible. Therefore we take the maximum between <span class="math-container">$i$</span> (the number of crossing sequences corresponding to positions <span class="math-container">$1,ldots,i$</span>) and <span class="math-container">$n-i+1$</span> (the number of crossing sequences corresponding to positions <span class="math-container">$i,\ldots,n$</span>), which is always at least <span class="math-container">$n/2$</span>.</p> <blockquote> <p>One other thing that I really don't understand is the very last equation, which says &quot;Combining (1.3), (1.4) and (1.5) and taking logs, we get <span class="math-container">$S(n) \ge \Omega(\log\log n)$</span>.&quot;</p> </blockquote> <p>The equations in question state <span class="math-container">\begin{align} N &amp;= qS(n)d^{S(n)} \\ \frac{n}{2} &amp;\le \frac{N^{m+1}-1}{N-1} \\ m &amp;\le 2N \end{align}</span> The second equation simplifies to <span class="math-container">$$ n \leq 2N^{2m}. $$</span> Applying the third equation, <span class="math-container">$$ n \leq 2N^{4N}. 
$$</span> Taking logarithm, <span class="math-container">$$ \log n \leq 4N\log N + \log 2. $$</span> If <span class="math-container">$x \leq y\log y$</span> then <span class="math-container">$y = \Omega(x/\log x)$</span>, and so <span class="math-container">$$ N = \Omega\left(\frac{\log n}{\log\log n}\right). $$</span> Substituting the value of <span class="math-container">$N$</span>, <span class="math-container">$$ qS(n)d^{S(n)} = \Omega\left(\frac{\log n}{\log\log n}\right). $$</span> Taking another logarithm, <span class="math-container">$$ \log q + \log S(n) + S(n) \log d \geq \log \log n - O(1). $$</span> Since <span class="math-container">$\log q + \log S(n) = o(S(n))$</span>, the left-hand side is <span class="math-container">$O(S(n))$</span>, and so <span class="math-container">$$ S(n) = \Omega(\log\log n). $$</span></p>
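As a quick numerical sanity check of the counting step behind (1.4): the number of crossing sequences of length at most $m$ over $N$ configurations is $\sum_{i=0}^m N^i = \frac{N^{m+1}-1}{N-1}$, and for $N \ge 2$ this is at most $2N^m$, which is what makes the $n \leq 2N^{2m}$-style simplification go through. A short sketch (the helper names are mine; this is arithmetic, not part of the proof):

```python
def geometric_sum(N, m):
    # Number of crossing sequences of length at most m over N configurations:
    # sum of N^i for i = 0..m.
    return sum(N**i for i in range(m + 1))

def closed_form(N, m):
    # Closed form (N^{m+1} - 1) / (N - 1); exact integer division for N >= 2.
    return (N**(m + 1) - 1) // (N - 1)
```

For example, `geometric_sum(3, 4)` and `closed_form(3, 4)` both evaluate to 121, and for every small `N >= 2` the sum stays below `2 * N**m`.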
575
sequence-to-sequence model
Achieving Randomness
https://cs.stackexchange.com/questions/67766/achieving-randomness
<p>Can <strong>True</strong> <em>Randomness</em> be achieved by composing prngs in different states and with different algorithms (e.g. have $n$ different composition algorithms, use a prng to select any permutation of them. A lot $(\sum_{i=1}^{n}{^nP_i})$ of permutations exist. Compose the algorithms selected using the permutation. Whenever a new number is generated in a sequence of random numbers, the entire algorithm selection process is repeated, and new seeds assigned to all algorithms).</p> <p>The final result with/from the different prngs will be supplied as output.</p> <p>For every new number, a fresh seed is taken to the random selection algorithm (to choose the composition of the PRNGs); this should be the most complex PRNG and have as much entropy as possible.</p> <p>The sequence works something like this. Each PRNG takes two parameters: a seed, and some previous value.</p> <p>For the first PRNG selected for composition, a seed $s$ is provided, and another randomly generated seed is used to generate another number $c$ (this will be done by a randomly selected PRNG not in the sequence, so in actuality we'll have $n+2$ PRNG algorithms). Each PRNG generates its own random number $k$. By choosing a permutation of some predefined composition algorithms (whether the random number is to be floating or not will define the available composition algorithms, e.g. the final value $v \mod max$ will be used to arrive at the random integer), $c$ is used to modify $k$, to generate some $c_i$ to pass to the next algorithm in the sequence. The final $c_n$ will be adjusted to be within range as the random number.</p> <p>I cannot conceive how such a system may be feasibly 'deterministic'.</p> <p>Because of this, the algorithm shouldn't be philosophically deterministic. 
The next number is <strong><em>NOT</em></strong> based on previous numbers.</p> <p>Arbitrarily designing a PRNG 'algorithm' isn't difficult (slightly modifying an existing algorithm, using different seeds to choose from, generating more than one result and choosing from it, etc.). I read a post online about the security flaws of using prngs, and how they are ultimately deterministic.</p> <p>Will such a prng be deterministic?<br> Will it be able to achieve 'true' randomness?<br> Assuming all the prng algorithms have the same 'degree' of randomness, what is the minimum $n$ I should choose to make it non-feasibly deterministic? $\tag{*}$</p> <p>My only worry now is the computational expensiveness of such a model. It <em>should</em> have a worst-case running time of $nf(n)$ where $f(n)$ is the worst-case running time of the slowest random algorithm. (Assuming PRNGs don't have their running time vary with their outputs.) {Though the higher security might be worth it.}</p> <p>(*) Feasibly deterministic means that if all computers in the world were connected to make a super computer (assume this is possible and performance scales accordingly), this super computer would be able to 'crack' (be able to determine the next $k$ number(s) in line given an arbitrary list of generated numbers from the same <em>instance</em> of this algorithm) in $n$ and arbitrary time $m$ s.<br> $n,m\colon n \ge m \gt 10^{10^{100}}$<br> $k$'s value is irrelevant. If it can crack the next 10, then given enough time, it'll crack the next 100.</p> <p>True randomness is defined as being non-feasibly deterministic.</p>
<p>There are two types of answers:</p> <ul> <li><p>Practical: You are looking for good <a href="https://en.wikipedia.org/wiki/Stream_cipher" rel="nofollow noreferrer">stream ciphers</a>. Several exist, though we don't know for sure that they are secure.</p></li> <li><p>Theoretical: You are looking for <a href="https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator" rel="nofollow noreferrer">cryptographically secure PRNGs</a>. There are several suspected constructions, but such objects are not known to exist; if P=NP, then they don't exist.</p></li> </ul> <p>In both cases, there is quite a wide literature on the subject.</p>
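On the practical side, rather than composing ad-hoc PRNGs by hand, the usual route is an OS-backed CSPRNG interface that draws from the kernel's entropy pool. A minimal sketch using Python's standard `secrets` module:

```python
import secrets

# OS-backed CSPRNG: reads from the kernel's entropy pool (e.g. /dev/urandom),
# which is seeded from hardware event timing rather than a deterministic state.
key = secrets.token_bytes(32)      # 256 bits of key material
roll = secrets.randbelow(6) + 1    # uniform integer in 1..6
token = secrets.token_hex(16)      # 32-character hex string, e.g. for a session token
```

Note this still doesn't give information-theoretic "true" randomness; it gives unpredictability under the standard cryptographic assumptions the answer mentions.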
576
sequence-to-sequence model
Commonly-used formal definition of graphs with &#39;connections&#39;?
https://cs.stackexchange.com/questions/76624/commonly-used-formal-definition-of-graphs-with-connections
<p>Sometimes you want to model some data from the real world using a graph, but such that edges don't just connect to a vertex; rather, they connect to some aspect of that vertex - some connection if you will.</p> <p>For example, a node in a family tree would be a person, and they have a mother and father (never mind about adoptions etc.). Now, when you connect one of the parents to the child you want to connect it as "the father" or "the mother"; so a "connections graph" model for this tree would see each vertex have a set of 3 possible connections: "father, mother, child" (with the child being used to connect it to its biological children's nodes).</p> <p>It's true that you can always get around really <em>needing</em> connections, e.g. with gadgets in their stead (say, each connection has a gadget vertex, so an original vertex is now surrounded by its connections as satellites and edges only exist between these satellites). But I'm interested in dealing with such connections explicitly.</p> <p>So, connections can be represented by a set; or as an ordered sequence (perhaps even always $0...\Delta$ with $\Delta$ being the maximum degree of the graph); or the edge set could incorporate the connections a priori, i.e. an endpoint could always be a tuple of a vertex and something else. 
And the connection set might be shared, or per-vertex; and so on.</p> <p>My question is: What are some specific formalizations of this concept which are used often (if there are such at all)?</p> <p><strong>Notes:</strong></p> <ul> <li>I don't care whether it's a directed graph or not, you can always switch from directed to undirected and back in a relatively straightforward way.</li> <li><p>Here's a diagram of one of these:</p> <pre><code>digraph G { rankdir="LR"; a-&gt;b [taillabel="x"; headlabel="y"] b-&gt;c [taillabel="z"; headlabel="w"] } </code></pre> <p><a href="https://i.sstatic.net/0fyrK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0fyrK.png" alt="enter image description here"></a></p></li> </ul>
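One formalization along the lines of the last option (an endpoint is a tuple of a vertex and something else) is often called a port graph: each edge joins two (vertex, port) pairs rather than two bare vertices. A minimal sketch of the family-tree example; the class and function names are mine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    vertex: str
    name: str  # the "connection", e.g. "father", "mother", "child"

def incident_edges(vertex, edges):
    """All edges touching any port of `vertex`; each edge is a (Port, Port) pair."""
    return [e for e in edges if vertex in (e[0].vertex, e[1].vertex)]

# Family tree: bob and carol connect to alice *as* her father and mother.
edges = [(Port("alice", "father"), Port("bob", "child")),
         (Port("alice", "mother"), Port("carol", "child"))]
```

Here `incident_edges("alice", edges)` returns both edges, and the port names record in what role each neighbor is attached, which is exactly the information the gadget-vertex workaround encodes indirectly.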
577
sequence-to-sequence model
Definitions of and difference between adaptive online adversary and adaptive offline adversary?
https://cs.stackexchange.com/questions/121390/definitions-of-and-difference-between-adaptive-online-adversary-and-adaptive-off
<p>I recently started learning about randomized online algorithms, and the <a href="https://en.wikipedia.org/wiki/Adversary_model" rel="nofollow noreferrer">Wikipedia</a> definitions for the three adversary models are very unhelpful to put it mildly. From poking around I think I have a good understanding of what an oblivious adversary is. From my understanding, the oblivious adversary must determine the "worst possible input sequence" before we even start running our algorithm. Let <span class="math-container">$I_w$</span> denote the worst possible input sequence this adversary comes up with. (I.e., the input sequence that produces the greatest gap between the best that can be done and what we expect our algorithm to do.)</p> <p>We then say that our algorithm is <span class="math-container">$c$</span>-competitive (for a minimization problem) under this adversary if <span class="math-container">$$E[Alg(I_w)] \le c \cdot Opt(I_w) + b$$</span> where <span class="math-container">$c,b$</span> are some constants, <span class="math-container">$E[Alg(I_w)]$</span> is the expected value of our algorithm on the input, and <span class="math-container">$Opt(I_w)$</span> is the cost if we had made perfect decisions. (I.e., if the problem went offline.)</p> <p><strong><em>My confusion concerns the adaptive online and adaptive offline adversaries.</em></strong> I neither fully understand their definitions nor the difference between them. I will list my confusions directly below.</p> <ul> <li>As I understand it, both of these adversaries somehow build the input sequences as your online algorithm runs. 
<a href="https://www.win.tue.nl/~nikhil/AU16/scribe-notes/lec2/lecture2.pdf" rel="nofollow noreferrer">This</a> says before you create the input at time <span class="math-container">$t$</span>, unlike in the case of the oblivious adversary, both the adaptive online and adaptive offline adversaries have access to the outcomes of your algorithm at time steps <span class="math-container">$1, \ldots , t-1$</span>. Then it says that in both cases the adversary "incurs the costs of serving the requests online." The difference being that for the online adaptive adversary, it "will only receive the decision of the online algorithm after it decided its own response to the request." Does this mean that the difference is that the offline adaptive adversary can see how your algorithm performs during future steps? Or just the present step? But then why is it still incurring the cost of serving requests <em>online</em>?</li> <li><a href="http://www14.in.tum.de/personen/albers/papers/brics.pdf" rel="nofollow noreferrer">This</a> source <em>contradicts</em> the source above. It says that the adaptive offline adversary "is charged the optimum <strong><em>offline</em></strong> cost for that sequence." Like I said previously, the previously source says both incur "the cost of serving the requests <strong><em>online</em></strong>." What does it even mean to incur the cost of serving requests online vs. offline? Which is correct?</li> <li><a href="http://ac.informatik.uni-freiburg.de/teaching/ws12_13/algotheo/Slides/ann/08_Online_020513_ann.pdf" rel="nofollow noreferrer">This</a> takes a completely different tack and talks about knowing randomness (online adaptive) vs. knowing "random bits" (offline adaptive). Is this equivalent somehow? How so?</li> <li>How does the definition of the competitive ratio change for these two adversaries? 
Most sources I looked at just defined the competitive ratio for the oblivious adversary.</li> </ul> <p>A simple example of each to illustrate the difference would be much appreciated. Thanks for the help!</p>
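Whichever adversary model is used, the competitiveness inequality from the question keeps the same shape. A trivial helper that checks $Alg(I) \le c \cdot Opt(I) + b$ over paired per-instance costs (purely illustrative; for a randomized algorithm the first list would hold estimated expected costs $E[Alg(I)]$):

```python
def is_c_competitive(alg_costs, opt_costs, c, b=0.0):
    """Check Alg(I) <= c * Opt(I) + b for every paired instance (minimization)."""
    return all(a <= c * o + b for a, o in zip(alg_costs, opt_costs))
```

For instance, an algorithm with costs `[4, 6]` against optimal costs `[2, 3]` is 2-competitive, while `[7, 6]` against the same optima is not.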
578
sequence-to-sequence model
Viterbi algorithm for object tracking
https://cs.stackexchange.com/questions/153527/viterbi-algorithm-for-object-tracking
<p>I want to solve a problem of object tracking over time. The problem is: I have a sequence of images, and I need to find and track the creation of the objects, then their movement, and then their disappearance. There can be up to 3 objects overall, and sometimes there are fewer, or none. Another limitation is that between consecutive images there is a maximum distance that an object can move.</p> <p>The practice I use is estimating the locations of the objects in each image separately (using a neural network), up to 3 locations per image, and then filtering out clear mistakes (random locations with no continuation over time).</p> <p>After a little research, I found that with some effort I can translate this problem into a hidden Markov model, which can be solved with the Viterbi algorithm. The problem is that for each image there are more than 100 possible object locations, and with 3 objects we get &gt;= 100000 different states.</p> <p>My question is whether there exists a designated algorithm for this case of object tracking over time? Or otherwise, is there a good and efficient way I can fit the Viterbi algorithm to this problem?</p> <p>Thanks for any help!</p>
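For the single-object case, the Viterbi recurrence over per-frame candidate locations is small: the motion constraint is encoded by forbidding transitions farther than `max_dist`. A sketch under the assumption that every frame has at least one candidate reachable from the previous frame (all names are illustrative):

```python
import math

def viterbi_track(frames, max_dist):
    """frames: per-image lists of (x, y, score) candidates.
    Returns the index of the chosen candidate in each frame."""
    dp = [[s for _, _, s in frames[0]]]  # best cumulative score per candidate
    back = []                            # backpointers per frame
    for t in range(1, len(frames)):
        row, brow = [], []
        for x, y, s in frames[t]:
            # best predecessor within the allowed motion radius
            best, arg = -math.inf, -1
            for i, (px, py, _) in enumerate(frames[t - 1]):
                if math.hypot(x - px, y - py) <= max_dist and dp[-1][i] > best:
                    best, arg = dp[-1][i], i
            row.append(best + s)
            brow.append(arg)
        dp.append(row)
        back.append(brow)
    # backtrack from the best final candidate
    j = max(range(len(dp[-1])), key=dp[-1].__getitem__)
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return path[::-1]
```

For 3 objects the states become triples of candidates, which is where the >= 100000-state blow-up comes from; pruning each frame to its top-k detections before running this keeps the product state space tractable.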
579
sequence-to-sequence model
Design an algorithm to predict words based on a skeleton from a given dictionary
https://cs.stackexchange.com/questions/161242/design-an-algorithm-to-predict-words-based-on-a-skeleton-from-a-given-dictionary
<p>I'm working on an algorithm which is permitted to use a training set of approximately 250,000 dictionary words.</p> <p>I have built and providing here with a basic, working algorithm. This algorithm will match the provided masked string (e.g. a _ _ l e) to all possible words in the dictionary, tabulate the frequency of letters appearing in these possible words, and then guess the letter with the highest frequency of appearence that has not already been guessed. If there are no remaining words that match then it will default back to the character frequency distribution of the entire dictionary.</p> <p>This benchmark strategy is successful approximately 10% of the time. I aim to design an algorithm that significantly outperforms this benchmark.</p> <pre><code>class HangmanAPI(object): def __init__(self, access_token=None, session=None, timeout=None): self.hangman_url = self.determine_hangman_url() self.access_token = access_token self.session = session or requests.Session() self.timeout = timeout self.guessed_letters = [] full_dictionary_location = &quot;words_250000_train.txt&quot; self.full_dictionary = self.build_dictionary(full_dictionary_location) self.full_dictionary_common_letter_sorted = collections.Counter(&quot;&quot;.join(self.full_dictionary)).most_common() self.current_dictionary = [] # Initialize the decision tree, random forest, and SVM models along with the vectorizer self.decision_tree_model = DecisionTreeClassifier() self.random_forest_model = RandomForestClassifier(n_estimators=100, random_state=42) self.svm_model = SVC(kernel='linear', probability=True, random_state=42) self.vectorizer = CountVectorizer(analyzer='char', lowercase=False, binary=True) self.target_labels = [chr(ord('a') + i) for i in range(26)] # Fit the decision tree model with the full dictionary once during initialization X = self.vectorizer.fit_transform(self.full_dictionary) y = np.array([word[-1] for word in self.full_dictionary]) self.decision_tree_model.fit(X, y) # Add 
Q-table to store Q-values for state-action pairs self.q_table = {} # Hyperparameters for Q-learning self.learning_rate = 0.1 self.discount_factor = 0.9 self.epsilon = 0.1 # Probability of exploration during action selection def update_q_table(self, state, action, reward, next_state): # Q-learning update rule current_q_value = self.q_table.get((state, action), 0.0) next_q_values = [self.q_table.get((next_state, next_action), 0.0) for next_action in self.target_labels] max_next_q_value = max(next_q_values) new_q_value = current_q_value + self.learning_rate * (reward + self.discount_factor * max_next_q_value - current_q_value) self.q_table[(state, action)] = new_q_value def extract_features(self, word_pattern): # Extract features from the word pattern features = [] # Word Length features.append(len(word_pattern)) # Vowel and Consonant Counts vowel_count = sum(1 for letter in word_pattern if letter in 'aeiou') consonant_count = sum(1 for letter in word_pattern if letter in 'bcdfghjklmnpqrstvwxyz') features.append(vowel_count) features.append(consonant_count) # Common Letter Count common_letters = set(&quot;etaoinsrhldcumfpgwybvkxjqz&quot;) common_letter_count = sum(1 for letter in word_pattern if letter in common_letters) features.append(common_letter_count) # Letter Position Features features.append(1 if word_pattern.startswith('a') else 0) # Check if starts with 'a' features.append(1 if word_pattern.endswith('e') else 0) # Check if ends with 'e' features.append(1 if 'qu' in word_pattern else 0) # Check if contains 'qu' # Character N-grams n_grams = [word_pattern[i:i + 2] for i in range(len(word_pattern) - 1)] for n_gram in ['th', 'er', 'in', 'ou', 'an']: # Example: Consider the presence of common letter pairs features.append(1 if n_gram in n_grams else 0) # Part-of-Speech (POS) Features - Not implemented here, requires external NLP tools # Syllable Count - Not implemented here # Letter Frequency Distribution - Not implemented here return features def 
hyperparameter_tuning(self): # Define the hyperparameter search spaces for each model using hyperopt dt_space = { 'criterion': hp.choice('criterion', ['gini', 'entropy']), 'splitter': hp.choice('splitter', ['best', 'random']), 'max_depth': hp.choice('max_depth', [None, 10, 20, 30]), 'min_samples_split': hp.choice('min_samples_split', [2, 5, 10]), 'min_samples_leaf': hp.choice('min_samples_leaf', [1, 2, 4]) } rf_space = { 'n_estimators': hp.choice('n_estimators', [100, 200, 300]), 'criterion': hp.choice('criterion', ['gini', 'entropy']), 'max_depth': hp.choice('max_depth', [None, 10, 20, 30]), 'min_samples_split': hp.choice('min_samples_split', [2, 5, 10]), 'min_samples_leaf': hp.choice('min_samples_leaf', [1, 2, 4]), 'bootstrap': hp.choice('bootstrap', [True, False]) } svm_space = { 'C': hp.loguniform('C', -3, 1), # Search space for C in log scale 'kernel': hp.choice('kernel', ['linear', 'poly', 'rbf', 'sigmoid']), 'gamma': hp.choice('gamma', ['scale', 'auto']) } # Perform Bayesian optimization for Decision Tree dt_best = fmin(fn=self.hyperopt_objective, space=dt_space, algo=tpe.suggest, max_evals=50, verbose=0) self.decision_tree_model = DecisionTreeClassifier( criterion=dt_best['criterion'], splitter=dt_best['splitter'], max_depth=dt_best['max_depth'], min_samples_split=dt_best['min_samples_split'], min_samples_leaf=dt_best['min_samples_leaf'] ) # Perform Bayesian optimization for Random Forest rf_best = fmin(fn=self.hyperopt_objective, space=rf_space, algo=tpe.suggest, max_evals=50, verbose=0) self.random_forest_model = RandomForestClassifier( n_estimators=rf_best['n_estimators'], criterion=rf_best['criterion'], max_depth=rf_best['max_depth'], min_samples_split=rf_best['min_samples_split'], min_samples_leaf=rf_best['min_samples_leaf'], bootstrap=rf_best['bootstrap'] ) # Perform Bayesian optimization for SVM svm_best = fmin(fn=self.hyperopt_objective, space=svm_space, algo=tpe.suggest, max_evals=50, verbose=0) self.svm_model = SVC( C=svm_best['C'], 
kernel=svm_best['kernel'], gamma=svm_best['gamma'] ) def hyperopt_objective(self, params): X = self.vectorizer.transform(self.current_dictionary) y = np.array([word[-1] for word in self.current_dictionary]) model = DecisionTreeClassifier(**params) cv_score = cross_val_score(model, X, y, cv=5).mean() return -cv_score def genetic_algorithm_tuning(self): # Define the hyperparameter search spaces for each model dt_space = { 'criterion': ['gini', 'entropy'], 'splitter': ['best', 'random'], 'max_depth': [None, 10, 20, 30], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4] } rf_space = { 'n_estimators': [100, 200, 300], 'criterion': ['gini', 'entropy'], 'max_depth': [None, 10, 20, 30], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'bootstrap': [True, False] } svm_space = { 'C': [0.1, 1, 10], 'kernel': ['linear', 'poly', 'rbf', 'sigmoid'], 'gamma': ['scale', 'auto'] } # Perform Genetic Algorithm optimization for Decision Tree dt_genetic_algorithm = ga(function=self.genetic_algorithm_objective, dimension=len(dt_space), variable_type='int', variable_boundaries=[(0, len(dt_space[key]) - 1) for key in dt_space]) dt_best_idx = dt_genetic_algorithm.run() dt_best = {list(dt_space.keys())[i]: dt_space[list(dt_space.keys())[i]][idx] for i, idx in enumerate(dt_best_idx)} self.decision_tree_model = DecisionTreeClassifier( criterion=dt_best['criterion'], splitter=dt_best['splitter'], max_depth=dt_best['max_depth'], min_samples_split=dt_best['min_samples_split'], min_samples_leaf=dt_best['min_samples_leaf'] ) # Perform Genetic Algorithm optimization for Random Forest rf_genetic_algorithm = ga(function=self.genetic_algorithm_objective, dimension=len(rf_space), variable_type='int', variable_boundaries=[(0, len(rf_space[key]) - 1) for key in rf_space]) rf_best_idx = rf_genetic_algorithm.run() rf_best = {list(rf_space.keys())[i]: rf_space[list(rf_space.keys())[i]][idx] for i, idx in enumerate(rf_best_idx)} self.random_forest_model = RandomForestClassifier( 
n_estimators=rf_best['n_estimators'], criterion=rf_best['criterion'], max_depth=rf_best['max_depth'], min_samples_split=rf_best['min_samples_split'], min_samples_leaf=rf_best['min_samples_leaf'], bootstrap=rf_best['bootstrap'] ) # Perform Genetic Algorithm optimization for SVM svm_genetic_algorithm = ga(function=self.genetic_algorithm_objective, dimension=len(svm_space), variable_type='int', variable_boundaries=[(0, len(svm_space[key]) - 1) for key in svm_space]) svm_best_idx = svm_genetic_algorithm.run() svm_best = {list(svm_space.keys())[i]: svm_space[list(svm_space.keys())[i]][idx] for i, idx in enumerate(svm_best_idx)} self.svm_model = SVC( C=svm_best['C'], kernel=svm_best['kernel'], gamma=svm_best['gamma'] ) def genetic_algorithm_objective(self, idxs): X = self.vectorizer.transform(self.current_dictionary) y = np.array([word[-1] for word in self.current_dictionary]) dt_space = { 'criterion': ['gini', 'entropy'], 'splitter': ['best', 'random'], 'max_depth': [None, 10, 20, 30], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4] } rf_space = { 'n_estimators': [100, 200, 300], 'criterion': ['gini', 'entropy'], 'max_depth': [None, 10, 20, 30], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'bootstrap': [True, False] } svm_space = { 'C': [0.1, 1, 10], 'kernel': ['linear', 'poly', 'rbf', 'sigmoid'], 'gamma': ['scale', 'auto'] } dt_best = {list(dt_space.keys())[i]: dt_space[list(dt_space.keys())[i]][idx] for i, idx in enumerate(idxs)} rf_best = {list(rf_space.keys())[i]: rf_space[list(rf_space.keys())[i]][idx] for i, idx in enumerate(idxs)} svm_best = {list(svm_space.keys())[i]: svm_space[list(svm_space.keys())[i]][idx] for i, idx in enumerate(idxs)} dt_model = DecisionTreeClassifier( criterion=dt_best['criterion'], splitter=dt_best['splitter'], max_depth=dt_best['max_depth'], min_samples_split=dt_best['min_samples_split'], min_samples_leaf=dt_best['min_samples_leaf'] ) dt_cv_score = cross_val_score(dt_model, X, y, cv=5).mean() rf_model = 
RandomForestClassifier( n_estimators=rf_best['n_estimators'], criterion=rf_best['criterion'], max_depth=rf_best['max_depth'], min_samples_split=rf_best['min_samples_split'], min_samples_leaf=rf_best['min_samples_leaf'], bootstrap=rf_best['bootstrap'] ) rf_cv_score = cross_val_score(rf_model, X, y, cv=5).mean() svm_model = SVC( C=svm_best['C'], kernel=svm_best['kernel'], gamma=svm_best['gamma'] ) svm_cv_score = cross_val_score(svm_model, X, y, cv=5).mean() return -(dt_cv_score + rf_cv_score + svm_cv_score) / 3 def train_all_models(self): X = self.vectorizer.transform(self.full_dictionary) y = np.array([word[-1] for word in self.full_dictionary]) # Fit all models with the full dictionary self.decision_tree_model.fit(X, y) self.random_forest_model.fit(X, y) self.svm_model.fit(X, y) # Perform hyperparameter tuning for Decision Tree, Random Forest, and SVM models self.hyperparameter_tuning() # Train the neural network model and perform fine-tuning self.train_neural_network() self.fine_tune_neural_network() def word_to_numeric(self, word): # Convert word pattern to a binary sequence of guessed (1) and not guessed (0) letters return [1 if letter in self.guessed_letters else 0 for letter in word] def ensemble_guess(self, word_pattern): numeric_word_pattern = self.word_to_numeric(word_pattern) # Get predictions from all three models dt_guess = self.decision_tree_model.predict([numeric_word_pattern])[0] rf_guess = self.random_forest_model.predict([numeric_word_pattern])[0] svm_guess = self.svm_model.predict([numeric_word_pattern])[0] # Create a list of all model predictions ensemble_guesses = [dt_guess, rf_guess, svm_guess] # Use voting to determine the final prediction guessed_letter = max(set(ensemble_guesses), key=ensemble_guesses.count) return guessed_letter @staticmethod def determine_hangman_url(): links = ['https://trexsim.com', 'https://sg.trexsim.com'] data = {link: 0 for link in links} for link in links: requests.get(link) for i in range(10): s = time.time() 
requests.get(link) data[link] = time.time() - s link = sorted(data.items(), key=lambda x: x[1])[0][0] link += '/trexsim/hangman' return link def train_neural_network(self): X = self.vectorizer.transform(self.current_dictionary) y = np.array([word[-1] for word in self.current_dictionary]) # Convert the word patterns to images (2D arrays) X_images = self.patterns_to_images(X.toarray(), self.vectorizer.vocabulary_) # Initialize and configure the convolutional neural network model neural_net_model = Sequential() neural_net_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(X_images.shape[1], X_images.shape[2], 1))) neural_net_model.add(MaxPooling2D(pool_size=(2, 2))) neural_net_model.add(Flatten()) neural_net_model.add(Dense(128, activation='relu')) neural_net_model.add(Dense(64, activation='relu')) neural_net_model.add(Dense(26, activation='softmax')) neural_net_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Convert labels to one-hot encoding y_onehot = np.zeros((y.shape[0], 26)) for i, letter in enumerate(y): y_onehot[i, ord(letter) - ord('a')] = 1 # Train the neural network model neural_net_model.fit(X_images, y_onehot, epochs=50, batch_size=32, verbose=0) # Store the trained model in the HangmanAPI object self.neural_net_model = neural_net_model def patterns_to_images(self, patterns, vocabulary): # Convert the patterns to images (2D arrays) with 0s and 1s max_pattern_length = max(len(pattern) for pattern in patterns) images = [] for pattern in patterns: image = [0] * max_pattern_length for i, letter in enumerate(pattern): if letter in vocabulary: image[i] = 1 images.append(image) # Reshape the images to (num_samples, pattern_length, 1) images = np.array(images) return images.reshape(images.shape[0], images.shape[1], 1) def fine_tune_neural_network(self): X = self.vectorizer.transform(self.current_dictionary) y = np.array([word[-1] for word in self.current_dictionary]) # Initialize and configure the 
neural network model neural_net_model = Sequential() neural_net_model.add(Dense(128, input_dim=X.shape[1], activation='relu')) neural_net_model.add(Dense(64, activation='relu')) neural_net_model.add(Dense(26, activation='softmax')) # Compile the model with the 'adam' optimizer neural_net_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Convert labels to one-hot encoding y_onehot = np.zeros((y.shape[0], 26)) for i, letter in enumerate(y): y_onehot[i, ord(letter) - ord('a')] = 1 # Fine-tune the neural network model neural_net_model.fit(X.toarray(), y_onehot, epochs=100, batch_size=32, verbose=0) # Store the fine-tuned model in the HangmanAPI object self.neural_net_model = neural_net_model def guess(self, word): # Clean the word so that we strip away the space characters # Replace &quot;_&quot; with &quot;.&quot; as &quot;.&quot; indicates any character in regular expressions clean_word = word[::2].replace(&quot;_&quot;, &quot;.&quot;) # Find length of the passed word len_word = len(clean_word) # Grab current dictionary of possible words from self object, initialize a new possible words dictionary to empty current_dictionary = self.current_dictionary new_dictionary = [] # Iterate through all of the words in the old plausible dictionary for dict_word in current_dictionary: # Continue if the word is not of the appropriate length if len(dict_word) != len_word: continue # If dictionary word is a possible match, then add it to the current dictionary if re.match(clean_word, dict_word): new_dictionary.append(dict_word) # Update Q-table state for the current word pattern current_state = clean_word # With probability epsilon, explore by choosing a random letter if random.random() &lt; self.epsilon: # Randomly select a letter from the target labels guessed_letter = random.choice(self.target_labels) else: # With probability (1 - epsilon), exploit by choosing the letter with the highest Q-value # Choose the letter with the highest Q-value 
for the current state q_values_for_state = {action: self.q_table.get((current_state, action), 0.0) for action in self.target_labels} guessed_letter = max(q_values_for_state, key=q_values_for_state.get) # Update Q-table state for the next word pattern after making the guess next_word_pattern = word.replace(&quot;_&quot;, guessed_letter) next_state = next_word_pattern[::2].replace(&quot;_&quot;, &quot;.&quot;) # Update the Q-table based on the observed reward and the next state # We don't have access to the actual reward in this implementation, so set it to 0 for now reward = 0 self.update_q_table(current_state, guessed_letter, reward, next_state) # Overwrite old possible words dictionary with the updated version self.current_dictionary = new_dictionary # If there are no remaining words that match, default back to the ordering of the full dictionary if not new_dictionary: sorted_letter_count = self.full_dictionary_common_letter_sorted else: # Update the current dictionary with the new_dictionary self.current_dictionary = new_dictionary # Get the count of each letter at each position in the current dictionary letter_counts = [{letter: sum(1 for word in self.current_dictionary if word[i] == letter) for letter in self.target_labels} for i in range(len(clean_word))] # Choose the character with the highest count at the next position next_position = len(self.guessed_letters) # If all letters have been guessed, use fallback guess from full dictionary ordering if next_position &gt;= len(letter_counts): return self.ensemble_guess(clean_word) guessed_letter = max(letter_counts[next_position], key=letter_counts[next_position].get) # Remove the guessed letter from the possible letters in current_dictionary self.current_dictionary = [word for word in self.current_dictionary if guessed_letter not in word] return guessed_letter # Return the letter with the highest information gain that hasn't been guessed yet for letter, info_gain in sorted_letter_count: if letter not in 
self.guessed_letters: return letter # If all letters have been guessed, revert to ordering of full dictionary (fallback) return self.ensemble_guess(clean_word) def make_decision(self, word_pattern, models=['dt', 'rf', 'svm'], use_neural_net=True): # Clean the word pattern so that we strip away the space characters # Replace &quot;_&quot; with &quot;.&quot; as &quot;.&quot; indicates any character in regular expressions clean_word = word_pattern[::2].replace(&quot;_&quot;, &quot;.&quot;) # Filter the full dictionary to get the current dictionary of possible words self.current_dictionary = [word for word in self.full_dictionary if re.match(clean_word, word)] # If there are no remaining words that match the pattern, return the fallback guess if not self.current_dictionary: return self.ensemble_guess(clean_word) # Extract features from the clean word pattern features = self.extract_features(clean_word) # Convert the features to a 2D array (samples x features) to use with the neural network pattern_features = np.array(features).reshape(1, -1) # Initialize a list to store the predictions from different models predictions = [] # Initialize a list to store the classifiers for the ensemble classifiers = [] # Add the desired models to the ensemble classifiers list if 'dt' in models: classifiers.append(('DecisionTree', self.decision_tree_model)) if 'rf' in models: classifiers.append(('RandomForest', self.random_forest_model)) if 'svm' in models: classifiers.append(('SVM', self.svm_model)) # Create a VotingClassifier with the selected models voting_classifier = VotingClassifier(estimators=classifiers, voting='hard') # Train the ensemble classifier with the current dictionary X = self.vectorizer.transform(self.current_dictionary) y = np.array([word[-1] for word in self.current_dictionary]) voting_classifier.fit(X, y) # Use the trained ensemble model to make a prediction ensemble_prediction = voting_classifier.predict(pattern_features) # Get the count of each letter at each 
position in the current dictionary letter_counts = [{letter: sum(1 for word in self.current_dictionary if word[i] == letter) for letter in self.target_labels} for i in range(len(clean_word))] # Choose the character with the highest count at the next position next_position = len(self.guessed_letters) # If all letters have been guessed, use fallback guess from full dictionary ordering if next_position &gt;= len(letter_counts): return self.ensemble_guess(clean_word) # Calculate the conditional probabilities of each letter given the word pattern letter_probabilities = {} total_letter_count = sum(letter_counts[next_position].values()) for letter in string.ascii_lowercase: if letter not in self.guessed_letters: matching_words_count = letter_counts[next_position].get(letter, 0) conditional_probability = matching_words_count / total_letter_count # Calculate the information gain using entropy (log2) information_gain = -conditional_probability * math.log2(conditional_probability) if conditional_probability &gt; 0 else 0 letter_probabilities[letter] = information_gain # Choose the letter with the highest information gain as the next guess guessed_letter = max(letter_probabilities, key=letter_probabilities.get) return guessed_letter def compute_conditional_probabilities(self, word_pattern): # Count the occurrence of each letter in the possible words full_dict_string = &quot;&quot;.join(self.current_dictionary) c = collections.Counter(full_dict_string) # Calculate the total count of letters in the possible words total_letter_count = sum(c.values()) # Calculate the conditional probabilities of each letter given the word pattern letter_probabilities = {} for letter in string.ascii_lowercase: if letter not in self.guessed_letters: pattern_with_letter = word_pattern.replace(&quot;.&quot;, letter) matching_words_count = sum(1 for word in self.current_dictionary if re.match(pattern_with_letter, word)) conditional_probability = matching_words_count / total_letter_count # Calculate the 
information gain using entropy (log2) information_gain = -conditional_probability * math.log2(conditional_probability) if conditional_probability &gt; 0 else 0 letter_probabilities[letter] = information_gain return letter_probabilities def build_dictionary(self, dictionary_file_location): text_file = open(dictionary_file_location, &quot;r&quot;) full_dictionary = text_file.read().splitlines() text_file.close() return full_dictionary </code></pre> <ol> <li><strong>init</strong>(self, access_token=None, session=None, timeout=None): The constructor initializes the HangmanAPI object. It sets up the API URL, access token, and session for making HTTP requests to the Hangman game server. It also loads a full dictionary of words and initializes machine learning models (Decision Tree, Random Forest, SVM) and a Q-table for reinforcement learning.</li> <li>update_q_table(self, state, action, reward, next_state): This function updates the Q-values in the Q-table using the Q-learning update rule.</li> <li>extract_features(self, word_pattern): Extracts features from the given word pattern to be used by machine learning models for making guesses.</li> <li>hyperparameter_tuning(self): Performs hyperparameter tuning for the Decision Tree, Random Forest, and SVM models using Bayesian optimization.</li> <li>genetic_algorithm_tuning(self): Performs hyperparameter tuning for the Decision Tree, Random Forest, and SVM models using a genetic algorithm.</li> <li>train_all_models(self): Trains all the machine learning models, performs hyperparameter tuning, and trains a convolutional neural network (CNN) model.</li> <li>word_to_numeric(self, word): Converts a word pattern to a binary sequence of guessed (1) and not guessed (0) letters.</li> <li>ensemble_guess(self, word_pattern): Makes a guess for the given word pattern using an ensemble of Decision Tree, Random Forest, and SVM models.</li> <li>determine_hangman_url(self): Determines the Hangman game server URL to be used based on response 
times to different server URLs.</li> <li>train_neural_network(self): Trains a convolutional neural network (CNN) model using features extracted from the current dictionary of words.</li> <li>fine_tune_neural_network(self): Fine-tunes the neural network model using features extracted from the current dictionary of words.</li> <li>guess(self, word): Makes a guess for the given word pattern using the Q-learning algorithm and the current state of the game.</li> <li>make_decision(self, word_pattern, models=['dt', 'rf', 'svm'], use_neural_net=True): Makes a guess for the given word pattern using an ensemble of machine learning models and the Q-learning algorithm.</li> <li>compute_conditional_probabilities(self, word_pattern): Computes the conditional probabilities of each letter given the word pattern.</li> <li>build_dictionary(self, dictionary_file_location): Loads a full dictionary of words from a text file.</li> </ol> <p>I am trying to improve the accuracy from above 10% to at least 50%. I tried implementing multiple techniques together, choosing the one that works best based on a scoring mechanism, and then hyper-tuning the parameters. I also tried reinforcement learning, yet the result is abysmally low.</p> <p>All suggestions will be helpful.</p>
<p>I suggest you use a brute-force greedy algorithm, where you brute-force explore all possible guesses and all possible candidates for the secret word and see which guess leads to the most ambiguity about the secret word. See <a href="https://en.wikipedia.org/wiki/Mastermind_(board_game)#Algorithms_and_strategies" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mastermind_(board_game)#Algorithms_and_strategies</a>, <a href="https://cs.stackexchange.com/q/18749/755">Mastermind (board game) - Five-guess algorithm</a>, <a href="https://stackoverflow.com/q/1185634/781723">https://stackoverflow.com/q/1185634/781723</a>, <a href="https://arxiv.org/abs/2202.00557" rel="nofollow noreferrer">https://arxiv.org/abs/2202.00557</a>.</p>
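<p>A minimal sketch of the Knuth-style minimax idea the answer points to, adapted from Mastermind to Hangman (the function names and the toy dictionary are mine, not part of any library): for each unguessed letter, partition the surviving candidate words by the pattern that guess would reveal, and pick the letter whose worst-case partition is smallest — the guess that leaves the least ambiguity in the worst case.</p>

```python
import re
from collections import Counter

def reveal(word, pattern, letter):
    # Pattern produced by guessing `letter` when the secret is `word`
    # and `pattern` is what is currently revealed ('.' = unknown).
    return "".join(c if c == letter or p != "." else "."
                   for c, p in zip(word, pattern))

def minimax_guess(pattern, guessed, dictionary):
    # Candidates: right length, consistent with revealed letters,
    # and containing no letter already guessed wrong.
    wrong = {g for g in guessed if g not in pattern}
    candidates = [w for w in dictionary
                  if len(w) == len(pattern)
                  and re.fullmatch(pattern, w)
                  and not wrong & set(w)]
    best, best_score = None, None
    for letter in "abcdefghijklmnopqrstuvwxyz":
        if letter in guessed:
            continue
        # Partition candidates by the pattern this guess would reveal;
        # the worst case is the largest partition (most ambiguity left).
        worst = max(Counter(reveal(w, pattern, letter) for w in candidates)
                    .values(), default=0)
        if best_score is None or worst < best_score:
            best, best_score = letter, worst
    return best
```

<p>On the toy dictionary {apple, ample, angle, amble} with pattern "a...e", the minimax choice is "m", because it splits the candidates 2/2 while every other letter leaves a worst case of 3 or 4.</p>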
580
sequence-to-sequence model
Is there difference between workflow-aware and non-workflow-aware optimal selection of web-service composition?
https://cs.stackexchange.com/questions/11039/is-there-difference-between-workflow-aware-and-non-workflow-aware-optimal-select
<p>Small introduction. We have a task that consists of sub-tasks. Each sub-task can be implemented by some set of web services. We want to find the best implementation of this task. "Best" means it has the best QoS values (availability, latency, cost, etc.).</p> <p>So there are plenty of works:</p> <ol> <li>Canfora G. et al. An approach for QoS-aware service composition based on genetic algorithms // Proceedings of the 2005 conference on Genetic and evolutionary computation. New York, NY, USA: ACM, 2005. pp. 1069–1075.</li> <li>Hwang S.-Y. et al. A probabilistic approach to modeling and estimating the {QoS} of web-services-based workflows // Information Sciences. 2007. Vol. 177. No. 23. pp. 5484–5503.</li> <li>Klein A., Fuyuki I., Honiden S. SanGA: A Self-Adaptive Network-Aware Approach to Service Composition // Services Computing, IEEE Transactions on. 2013. Vol. 1. No. 99.</li> <li>Zhao X. et al. A hybrid clonal selection algorithm for the quality of service-aware web-service selection problem // International Journal of Innovative Computing, Information and Control (IJICIC). 2012. Vol. 8. No. 12. pp. 8527–8544.</li> <li>Lots of other works. I have at least 10 such papers in my Mendeley catalog.</li> </ol> <p>These works solve the task of QoS-aware service composition using information about the workflow structure. A workflow consists of a composition of sequence, loop, parallel and exclusive-choice patterns. They compute an integral QoS value for each possible composition and then choose among the compositions with the best integral QoS values.</p> <p>By "integral" I mean, for example: integral latency for a sequence = mean latency of the 1st WS + ... + mean latency of the last WS in the sequence.</p> <p><strong>So the question is: is this approach better than the approach of finding the best choice for each sub-task separately</strong>?
That is, the best implementation of the task would equal the combination of the best implementations of its sub-tasks.</p> <p>It seems to me the latter approach is much simpler and clearer in terms of choosing not only the best options, but the best options that satisfy the preferences of the engineer who makes the decision on the final web-service composition.</p>
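<p>To make the contrast concrete, here is a toy sketch (all services and numbers are hypothetical): with purely additive QoS and no constraints the two approaches coincide, but once a global constraint such as an end-to-end latency budget is added, per-sub-task selection can yield an infeasible composition while a workflow-aware search finds the cheapest feasible one.</p>

```python
from itertools import product

# Toy setting: two sub-tasks, each implementable by one of two services,
# a service being a (latency, cost) pair.  All numbers are hypothetical.
candidates = [
    [(2, 10), (5, 1)],   # options for sub-task 1
    [(2, 10), (5, 1)],   # options for sub-task 2
]
LATENCY_BUDGET = 7       # global end-to-end constraint

def aggregate(plan):
    # Sequence pattern: integral latency (and cost) = sum over the chosen services.
    return sum(l for l, _ in plan), sum(c for _, c in plan)

# Per-sub-task selection: cheapest service for each sub-task in isolation.
greedy = [min(opts, key=lambda s: s[1]) for opts in candidates]

# Workflow-aware selection: cheapest composition that fits the latency budget.
feasible = [p for p in product(*candidates) if aggregate(p)[0] <= LATENCY_BUDGET]
best = min(feasible, key=lambda p: aggregate(p)[1])
```

<p>Here the greedy plan picks the two cheap-but-slow services (total latency 10, over budget), while the workflow-aware search mixes a fast and a cheap service to stay within the budget.</p>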
581
sequence-to-sequence model
Is this knapsack variant named / studied? &quot;Online algorithm for farthest-from-previous index&quot;
https://cs.stackexchange.com/questions/167943/is-this-knapsack-variant-named-studied-online-algorithm-for-farthest-from-pr
<h3>Problem Statement:</h3> <p>Given: an ordered list of <code>N</code> items, which we can refer to by index: <code>[0, N)</code>.</p> <p>Goal: Write an algorithm to incrementally generate indexes that are as far away from all previously returned indexes as possible. In the base case where no indexes have been returned so far, we can start with either the first or the last index. Choosing the first or last index is symmetric, so by convention we will always start with the last index: <code>N - 1</code>.</p> <h3>Examples:</h3> <p>For example, if we had 4 items, in the first round by convention we yield 3. Then we have to return the farthest index from 3, so we yield 0. Then in the next round both 1 or 2 have the same minimum distance to a previously yielded index, so we can chose either. Let's choose 2. Then in the final round there is only one index left, so we have to choose 1. So an optimal sequence for 4 items is: <code>[3, 0, 2, 1]</code>.</p> <p>For another example, if there are 10 items, an optimal sequence is: <code>[9, 0, 5, 2, 7, 1, 6, 3, 8, 4]</code></p> <h3>Implementation:</h3> <p>Working with @LeeSE we've written a Python implementation and claim it solves the problem:</p> <pre><code>def farthest_from_previous(start: int, stop: int): &quot;&quot;&quot; Given a ordered list of items, incrementally yield indexes such that each new index maximizes the distance to all other previously chosen indexes. 
Args: start (int): The inclusive starting index (typically 0) stop (int): The exclusive maximum index (typically ``len(items)``) Yields: int: the next chosen index in the series Example: &gt;&gt;&gt; total = 10 &gt;&gt;&gt; start, stop = 0, 10 &gt;&gt;&gt; gen = farthest_from_previous(start, stop) &gt;&gt;&gt; result = list(gen) &gt;&gt;&gt; assert set(result) == set(range(start, stop)) &gt;&gt;&gt; print(result) [9, 0, 5, 2, 7, 1, 6, 3, 8, 4] &quot;&quot;&quot; import itertools as it def from_starts(start: int, stop: int): if start &lt; stop: low_mid: int = (start + stop) // 2 high_mid: int = (start + stop + 1) // 2 left_gen = from_starts(start, low_mid) right_gen = from_starts(high_mid, stop) pairgen = it.zip_longest(left_gen, right_gen) flatgen = it.chain.from_iterable(pairgen) filtgen = filter(lambda x: x is not None, flatgen) yield from filtgen if low_mid &lt; high_mid: yield low_mid if start &lt; stop: yield stop - 1 yield from from_starts(start, stop - 1) </code></pre> <h3>Motivation:</h3> <p>I have a directory of ordered images images that were generated to visualize neural network training iterations. I create one of these directories every time I train a network.</p> <p>These visualizations can start to take up too much disk space, and removing some percent of them would free up a lot of space, but still leave some of the visualizations in case I wanted to go back and inspect an old run. So the question is: which of these images do I keep? By incrementally generating &quot;furthest from previous&quot; indexes and checking if the total size exceeds some threshold, I can stop, keep all files corresponding to generated indexes, and remove the rest.</p> <h3>Question for CS Stack Exchange:</h3> <p>My attempts to determine if this problem or variants of it have been formally studied have turned up empty so far. 
My question is: does this problem have a name, or is there an instance of it or something similar that exists in the literature?</p> <p>It's clearly some greedy knapsack variant. But the value of each item depends on the other items that are selected.</p> <p>It is similar to the set union knapsack problem (SUKP) because the optimality of a decision depends on previous decisions, but in SKUP that is codified by the dependent weights, whereas in this problem the weights are constant, but the value of the next item changes based on the previous item.</p> <p>It looks like in 2023 there was a paper <a href="https://link.springer.com/article/10.1007/s10479-023-05265-x" rel="nofollow noreferrer">https://link.springer.com/article/10.1007/s10479-023-05265-x</a> describing &quot;Position-Dependent Knapsack&quot; where the &quot;profit of an item depends on the position of the item in the sequence of items packed in the knapsack&quot;, that looks promising as a framework for studying this greedy variant. It also looks like there is another similar 2023 paper from a different team: <a href="https://www.sciencedirect.com/science/article/pii/S1877050923010335" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/S1877050923010335</a></p> <p>But thinking about it, perhaps position isn't enough to codify the concept of certain selections of items being worth more based on higher order relationships between items in the selection. It looks like this 2017 paper: &quot;An Integer Linear Programming Model for Binary Knapsack Problem with Dependent Item Values&quot; <a href="https://link.springer.com/chapter/10.1007/978-3-319-63004-5_12" rel="nofollow noreferrer">https://link.springer.com/chapter/10.1007/978-3-319-63004-5_12</a> may be an exact fit to the non-greedy generalization of this problem.</p> <p>Still, I'm curious if others can point me at references to help me better understand the scope of existing work around this problem.</p>
<p>This is an instance of <a href="https://en.wikipedia.org/wiki/Farthest-first_traversal" rel="nofollow noreferrer">farthest-point traversal</a> in 1D.</p>
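<p>A direct greedy sketch of farthest-first traversal specialized to 1D indexes (my own O(n²) illustration, not an optimized implementation; ties are broken arbitrarily, so the exact sequence may differ from the one in the question while still maximizing the minimum distance at each step):</p>

```python
def farthest_first_1d(n):
    """Greedy farthest-first traversal of indexes 0..n-1, starting from
    the last index by convention. O(n^2) for clarity, not speed."""
    if n == 0:
        return []
    chosen = [n - 1]
    remaining = set(range(n - 1))
    while remaining:
        # Next index: the one whose nearest already-chosen index is farthest away.
        nxt = max(remaining, key=lambda i: min(abs(i - c) for c in chosen))
        chosen.append(nxt)
        remaining.remove(nxt)
    return chosen
```

<p>The recursive generator in the question achieves the same ordering property in O(n log n) overall; this quadratic version just makes the "maximize distance to the nearest chosen point" invariant explicit.</p>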
582
sequence-to-sequence model
Given a vertex, find a path between it and each other vertex that minimizes the maximum weight in the path
https://cs.stackexchange.com/questions/68022/given-a-vertex-find-a-path-between-it-and-each-other-vertex-that-minimizes-the
<p>I'm revising for an upcoming exam and was wondering if someone could help me with a practice problem:</p> <blockquote> <p>We model a set of cities and highways as an undirected weighted graph <span class="math-container">$G = (V,E,l)$</span>, where the vertices <span class="math-container">$V$</span> represent cities, edges <span class="math-container">$E$</span> represent highways connecting cities, and for every undirected edge <span class="math-container">$e = \{v, w\}\in E$</span> the number <span class="math-container">$\ell[e]$</span> denotes the number of litres of fuel that your motorcycle needs in order to ride the distance between cities <span class="math-container">$v$</span> and <span class="math-container">$w$</span>. There are fuel stations in every city but none on the highways between the cities. Therefore, if the capacity of your motorcycle's fuel tank is <span class="math-container">$L$</span> litres, then you can only follow a path (a sequence of adjacent highways), if for every highway <span class="math-container">$e$</span> on this path, we have <span class="math-container">$L\geq \ell[e]$</span>, because you can only refuel in the cities.</p> <p>Design an algorithm with a running of time of <span class="math-container">$O(m\log n)$</span> that, given an undirected weighted graph <span class="math-container">$G=(V,E,\ell)$</span> modelling a setting of cities and highways, and a city <span class="math-container">$s \in V$</span> as inputs, computes for every other city <span class="math-container">$v \in V$</span>, the smallest fuel tank capacity <span class="math-container">$M[v]$</span> that your motorcycle needs in order to be able to reach city <span class="math-container">$v$</span> from city <span class="math-container">$s$</span>.</p> </blockquote> <p>I'm not sure how to solve this problem, but I do have a few ideas. I thought of possibly running Kruskal's algorithm since that would give me the MST of the graph. 
I'm just not sure how I would then efficiently extract the answer I need from the MST.</p> <p>Alternatively, I was thinking of using dynamic programming in some way, since that's one of the things we use a lot in the course, but I'm not really sure how to go about it.</p>
<p>Let $T = \{s\}$, and let $Q$ be a new empty priority queue. Insert each edge incident on $s$ into $Q$. While there are still nodes whose $M$ value is yet to be computed, extract the minimum-weight edge from $Q$. </p> <p>If such an edge connects a node $u$ in $T$ with a node $v$ not in $T$, add $v$ to $T$, add each edge incident on $v$ that still isn't in $Q$ to $Q$, and set $M[v]$ to $\max\{M[u], w(u, v)\}$ (with $M[s] = 0$). </p> <p>Otherwise, disregard the extracted edge and keep extracting until you find a suitable one.</p> <p>The overall complexity is $O(|E| \log |E|) = O(|E| \log |V|^2) = O(|E| \log |V|)$.</p> <p>To informally prove correctness, we can observe that whenever we set $M[v]$ to a certain value $x$, we can certainly reach $v$ from $s$ with a path in which each edge weighs at most $x$. On the other hand, no better path exists, because we already gave every edge lighter than $x$ a chance to connect $v$ to $s$.</p>
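<p>The procedure above is essentially Prim's/Dijkstra's loop with max in place of +. A compact Python sketch (the edge-list graph encoding and the function name are my own choices): the heap stores the bottleneck value of the best-known path to each frontier vertex, so popping a vertex for the first time fixes its $M$ value.</p>

```python
import heapq

def min_tank_capacities(n, edges, s):
    """For every vertex v, compute M[v]: the smallest fuel-tank capacity
    needed to reach v from s, i.e. the minimized maximum edge weight
    over all s-v paths (a 'bottleneck shortest path')."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    M = [None] * n
    M[s] = 0
    heap = list(adj[s])            # (weight, neighbour) pairs
    heapq.heapify(heap)
    while heap:
        w, v = heapq.heappop(heap)
        if M[v] is not None:       # edge leads back into the tree: discard
            continue
        M[v] = w                   # w already carries max(M[u], w(u, v))
        for w2, x in adj[v]:
            if M[x] is None:
                heapq.heappush(heap, (max(w, w2), x))
    return M
```

<p>Each edge is pushed and popped at most once, giving the required $O(m \log n)$ bound.</p>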
583
sequence-to-sequence model
Theoretical origin of acquire semantics only for reads?
https://cs.stackexchange.com/questions/165463/theoretical-origin-of-acquire-semantics-only-for-reads
<p>What theoretical concept lies behind the strong restriction of acquire semantics - to reads, and release semantics - to writes? (With papers titles and authors, if possible.)</p> <p>Is it related to provision of correctness of more complex approaches than lock-based synchronization (as lock-free, wait-free algorithms)?</p> <p>How the correctness of protected data access is guaranteed by this concept with a lock-based approach?</p> <p><strong>Rationale</strong> for the astonishment (it was preface but I've later moved it to bottom):</p> <p>Letʼs consider an inter-processor synchronization using a pure spinlock (avoid queues and task switching). While a spinlock is held, the data protected by it can be read or written. All these operations shall be finished (reads - values moved to the processor, writes - values exposed to other processors visibility) before releasing the spinlock. Release could be as simple as a single write to spinlockʼs location. So, the ordering is to, in a layman words, to commit all reads and writes in program order before the write that releases the spinlock - this is called &quot;release semantics&quot; tied to the spinlock release.</p> <pre><code>... read protected data... ... write protected data... &lt;-- The barrier between previous reads/writes and spinlock store do_unlock: write-release (spinlock address) -- release for the barrier ... unrelated activity... </code></pre> <p>This is OK, no problems here.</p> <p>(I use &quot;processor&quot; here universally for &quot;core&quot;, &quot;hart&quot;, &quot;PE&quot; (ARM term) as well - as a single entity with independent instruction stream.)</p> <p>Then, letʼs move to spinlock acquiring. The change of a shared state which prevents other processors to use the data protected by a spinlock is writing a value to the spinlock location which shows the spinlock is held (it could be simply 1, or processorʼs id... any except &quot;free&quot; denotation). 
The important fact this is <em>write</em>, not just <em>read</em>, despite read inevitably participates here to detect when an acquire attempt can be issued. Only after this write is exposed to other processors the acquired one can start with read and write of protected data.</p> <p>This definitely suggests a barrier (StoreLoad+StoreStore) shall be applied between the succeeded write to spinlock and the protected data use:</p> <pre><code>... unrelated activity... do_lock: for(;;) { if (locked) { continue; } if (compare-and-store succeeds) { break; } } // acquire! &lt;-- Here the barrier between spinlock store and following reads/writes ... read protected data... ... write protected data... </code></pre> <p>A barrier of this style - between operation and <em>following</em> instructions in the program order - is called &quot;acquire semantics&quot;. But in all sources I see it is tyable only to reads, not writes. If additional measures are not applied to install a barrier between the spinlock store and the protected data access, processor reordering may violate the safety.</p> <p>Different processor ISAs currently address this issue in a different manner:</p> <p>For x86 family, &quot;Reads or writes cannot be reordered with I/O instructions, locked instructions, or serializing instructions&quot; - so a locked instruction like CMPXCHG provides even stronger guarantees.</p> <p>For RISC-V, the following is declared in the base specification:</p> <blockquote> <p>The LR/SC sequence can be given acquire semantics by setting the <code>aq</code> bit on the <code>LR</code> instruction. The LR/SC sequence can be given release semantics by setting the <code>rl</code> bit on the <code>SC</code> instruction. 
Setting the <code>aq</code> bit on the <code>LR</code> instruction, and setting both the <code>aq</code> and the <code>rl</code> bit on the <code>SC</code> instruction makes the LR/SC sequence sequentially consistent, meaning that it cannot be reordered with earlier or later memory operations from the same hart.</p> </blockquote> <p>so setting <code>aq</code> on LR is enough to make respective SC having acquire semantics as well, so, all instructions following in program order shall be committed after committing of SC (store gets visible to others).</p> <p>ARMv8.1 suggests using instructions like <code>CASA</code> where acquire semantics is spreaded to both CAS sub-actions - read and write. (ARM DDI 0487F.c says &quot;CASA and CASAL load from memory with acquire semantics.&quot; without a word of propagation of acquire semantics to store counterpart, but, in practice, I couldnʼt disprove this propagation.)</p> <p>But ARMv8.0 doesnʼt have <code>CASA</code> and entails use of <a href="https://en.wikipedia.org/wiki/Load-link/store-conditional" rel="nofollow noreferrer">LR/SC</a> sequence (called &quot;load exclusive&quot; / &quot;store exclusive&quot; in ARM). If to lock using cycle of <code>LDAXR</code>+<code>STLXR</code>, ordering is insufficient. The answer <a href="https://stackoverflow.com/a/66265727">here</a> shows an example of real code which fails on AArch64 using LR/SC. In a sequence like:</p> <pre><code>1: ldaxrb w2, [x0] // LL+acquire // stlxrb can be replaced with stxrb // (no SC, plain store) // with the same outcome stlxrb w3, w1, [x0] // SC+release cbnz w3, 1b //- ldarb w3, [x0] // load+acquire </code></pre> <p>Without the (commented here) <code>ldarb</code>, reordering happens and the test program crashes. This <code>ldarb</code> seems a true minimum of ordering; extended versions with <code>dmb</code>, <code>dsb</code> work as well (<code>dmb ld</code> does but <code>dmb st</code> doesnʼt; <code>dsb ld</code> already does). 
Iʼve confirmed the authorʼs conclusion using another AArch64 processor model.</p> <p>In ARMv8.0, this looks like a design gap, fixed later on. But my question now is where the concept &quot;acquire - for reads&quot; originates.</p> <p><strong>P.S.</strong>: I have asked nearly the same question, but from a purely practical aspect, <a href="https://stackoverflow.com/questions/58361491/">here</a>. The ensuing discussion exposed that the issue is real, and provided more evidence, but with no theoretical reference.</p> <p>UPDATE(2024-03-19): The earliest mention Iʼve found is in the <a href="https://dl.acm.org/doi/10.1145/325096.325102" rel="nofollow noreferrer">article</a> &quot;Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors&quot; by Kourosh Gharachorloo, Daniel Lenoski, James Laudon, Phillip Gibbons, Anoop Gupta, and John Hennessy, 1990. It is apparently the first to define &quot;release consistency&quot; and the respective terms for acquire and release semantics. This article also declares that &quot;Although the store access is necessary to ensure mutual exclusion, it does not function as either an acquire or a release.&quot; This looks too hasty from the current POV. To be continued.</p>
584
sequence-to-sequence model
Is $a^n b^n$ an artificial language or does it occur in the real world?
https://cs.stackexchange.com/questions/19485/is-an-bn-an-artificial-language-or-does-it-occur-in-the-real-world
<p>The classic example of a context-free grammar is $a^nb^n$. That is, $n$ occurrences of $a$ followed by an equal number of occurrences of $b$.</p> <p>Do such forms occur in the real world? Can you provide an example of a real-world case where there must be $n$ occurrences of something followed by an equal number of occurrences of something else?</p> <p>Let me give an example: if I run an on-line store, then for each purchase made at my store, there must be a corresponding delivery of the purchased item. That might be modeled as $n$ purchases followed by $n$ deliveries:</p> <blockquote> <p>purchase purchase purchase delivery delivery delivery</p> </blockquote> <p>However, that is not a good data model since each delivery should legitimately be paired with a purchase:</p> <blockquote> <p>purchase delivery purchase delivery purchase delivery</p> </blockquote> <p>So I am left wondering if there are <em>any</em> real-world examples where data would be (legitimately) modeled as a sequence of $n$ items of one type followed by $n$ items of another type. Can you provide a real-world example please?</p> <p>Hendrik Jan provided this good example (see it in the comments below): <em>This weekend I visited my mother. Three flights up, and three flights down when I left.</em></p> <p>Neat example! Can you think of others?</p> <p>A colleague just informed me of another example. In the KML specification it says that a &lt;Track> element must contain N &lt;when> elements followed by N &lt;gx:Coord> elements:</p> <p><a href="https://developers.google.com/kml/documentation/kmlreference#gxtrack" rel="nofollow">https://developers.google.com/kml/documentation/kmlreference#gxtrack</a></p> <p>Another excellent example. What are other examples?</p> <p>Another colleague sent me an article about columnar databases. It is often more efficient to store data in columns rather than rows. For example, we may have a column of person's ages followed by a column of person's heights.
Or, a list of N integers (ages) followed by a list of N decimals (heights). This enables efficient calculation of sums or averages. Here's the article:</p> <p><a href="http://www.postgresql.org/message-id/52C59858.9090500@garret.ru" rel="nofollow">http://www.postgresql.org/message-id/52C59858.9090500@garret.ru</a></p> <p><strong>More examples please! I would like for us to create a nice collection of compelling examples.</strong></p>
<p>The classic consequence of $a^nb^n$ being context-free rather than regular is on opening and closing brackets. $a^nb^n$ represents the simplest possible case of this: no interleaving of opens and closes and no intervening characters. Regular expressions can't even deal with this most basic case.</p>
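<p>A sketch of how little machinery the non-regular part needs (toy recognizers, my own code): a single counter decides $a^nb^n$, and the nested version of the same counter decides balanced brackets — exactly the unbounded bookkeeping a finite automaton, and hence a regular expression, cannot do.</p>

```python
def is_anbn(s):
    # One counter decides a^n b^n: count the leading a's, then demand
    # that the remainder is exactly that many b's.
    i = 0
    while i < len(s) and s[i] == "a":
        i += 1
    return s[i:] == "b" * i

def balanced(s):
    # The bracket-matching generalization: the same counter, nested.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # a close with no matching open
                return False
    return depth == 0
```

<p>$a^nb^n$ is the degenerate case of bracket matching with no interleaving: every "open" ($a$) precedes every "close" ($b$).</p>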
585
sequence-to-sequence model
Algebraic definition of &quot;normalization&quot;
https://cs.stackexchange.com/questions/170581/algebraic-definition-of-normalization
<p>I believe the notion of a normal form has a proper definition in model theory or algebra.</p> <ol> <li><p>If you have a set of elements <span class="math-container">$E$</span>, a premise of there being a normal form is that there are multiple ways to write an element <span class="math-container">$e \in E$</span>. This suggests we need to distinguish between expressions and their reference. To say that <span class="math-container">$e_1, e_2$</span> are equal is to say that although they are not the same expression, they have the same reference. This could suggest one should associate each expression to the element it refers to. For example, the expression <span class="math-container">$2 + 0$</span> refers to the element <span class="math-container">$2$</span>. This would require us to distinguish between the symbol <span class="math-container">$[2]$</span> and the concept <span class="math-container">$2$</span>, and would suggest this theory has two types, <span class="math-container">$\text{Symbol}$</span> and <span class="math-container">$\text{Number}$</span>.</p> </li> <li><p>A second approach would be to assume that two expressions <span class="math-container">$e_1, e_2$</span> being equal means that according to rewrite rules, they can be unified to the same expression. One says that for a collection of rewrite rules, there exists a sequence <span class="math-container">$R_i,\ldots,R_j$</span> which rewrites <span class="math-container">$e_i$</span> to <span class="math-container">$e$</span>, and a sequence <span class="math-container">$R_k,\ldots,R_l$</span> which rewrites <span class="math-container">$e_j$</span> to <span class="math-container">$e$</span>. In this case, it is not necessary to assert that <span class="math-container">$e$</span> is the same symbol as <span class="math-container">$e$</span>. 
This approach does not need a second type, <span class="math-container">$\text{Number}$</span>, in the theory, and expressions do not need a reference.</p> </li> </ol> <p>I conjecture that there is a way to show the equivalence of the above two theories. In both cases, it is necessary to define the structure of expressions. Terms are inductively defined as elements in the closure of a set of variable symbols <span class="math-container">$\{v_i, v_j, \ldots \}$</span> under an application function which, for a function symbol <span class="math-container">$f_i$</span> of arity <span class="math-container">$n$</span> and selection of <span class="math-container">$n$</span> terms, returns the term <span class="math-container">$f_i(t_0, \ldots, t_n)$</span>. I think a reason to prefer formulation (2) to (1) is that in either case, one must define rules which evaluate an expression <span class="math-container">$e$</span>, so that (1) needs the rewrite rules of (2) anyway. I wonder if this undermines the idea of an expression's reference, which can only be decided by computation rules. I wonder if therefore, for a collection of expressions, it cannot be said that every expression has a unique reference unless the collection has a confluent and terminating set of rewrite rules. I think there is a connection to be drawn between these ideas and the notion of a representative element of an equivalence class.</p>
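<p>The second formulation can be made concrete with a toy rewrite system in the spirit of the $2 + 0$ example (the term representation and rule set are mine): terms are rewritten until no rule applies, and because this particular rule set is terminating and confluent, every term reaches a unique normal form — which can then play the role of the expression's "reference" without introducing a second type.</p>

```python
def rewrite_step(t):
    """Apply one rule (x + 0 -> x, 0 + x -> x), innermost-first.
    Terms: ints are constants, ("+", a, b) is an application of +.
    Returns (term, changed?)."""
    if not isinstance(t, tuple):
        return t, False
    op, a, b = t
    a2, changed = rewrite_step(a)
    if changed:
        return (op, a2, b), True
    b2, changed = rewrite_step(b)
    if changed:
        return (op, a, b2), True
    if op == "+" and b == 0:
        return a, True
    if op == "+" and a == 0:
        return b, True
    return t, False

def normalize(t):
    # Iterate rewriting to a fixpoint; each step strictly shrinks the
    # term, so this terminates, and the rules are confluent, so the
    # normal form is unique.
    changed = True
    while changed:
        t, changed = rewrite_step(t)
    return t
```

<p>Two syntactically different expressions for the same value, such as $2 + 0$ and $0 + (2 + 0)$, both normalize to the term $2$, illustrating how "equal reference" can be recovered as "same normal form".</p>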
586
sequence-to-sequence model
Scheduling N variable-time interdependent tasks across M workers
https://cs.stackexchange.com/questions/62962/scheduling-n-variable-time-interdependent-tasks-across-m-workers
<p>I have N tasks, each of which requires some time to complete. The time to complete is not the same for all tasks. Each task may depend on a number of other tasks (assume that no dependency cycles are present). I have M (M is fixed, small and &lt;&lt; N) workers that may be used to complete the tasks. I need to find a sequence of tasks that each worker must complete in order to minimize the total processing time.</p> <p>How is this problem formalized / modelled? I am not sure which textbook or paper I should read in order to understand how one might approach this problem (looking for keywords here).</p> <p>If there is a need to "peg" some tasks (not all) to certain workers, how is the problem "affected"? That is, does it become significantly harder to solve or reason about?</p>
<p>This can be seen as a variation of the job shop problem where you want to find the policy that yields the minimum makespan (the time taken for all machines to process all jobs), as well as a variation of the assignment problem (find the optimal pairing of workers to jobs that minimizes cost). The variation in both cases is an added dependency between jobs (Job A must complete before Job C, etc.). This collection of combinatorial optimization problems is NP-hard. Most of what you'd find in the literature is approximation heuristics to find a near-optimal policy (sequence of tasks per worker).</p> <p>This paper is one such approach: "<a href="http://www.ccs.neu.edu/home/rraj/Pubs/uncertainty.pdf" rel="nofollow">Approximation Algorithms for Multiprocessor Scheduling under Uncertainty</a>"</p> <p>A quick way to solve your problem (quick as in prototyping a solution, not necessarily in runtime, or necessarily near-optimal) is to construct a topological sort of your job dependency graph, use breadth-first search to batch all jobs of a given height, and put each batch on a stack (this assumes a directed edge represents a depends-on relationship, with zero out-degree nodes having no dependencies). Afterwards, until the stack is empty, pop a batch, treat it as a traditional assignment problem, solve it using the Hungarian algorithm, and enqueue the tasks to each assigned worker's task queue. Afterwards you'd have a policy. (When you go to execute the policy, you'd have to have some signalling mechanism that a task completed, so that workers don't start on a task whose dependencies haven't finished yet.)</p>
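<p>A runnable sketch of that prototype (my simplification of the recipe: a longest-processing-time greedy stands in for the Hungarian assignment, and each level waits for the previous one to finish, so the computed makespan is only a heuristic upper bound):</p>

```python
from collections import deque

def schedule(durations, deps, m):
    """Group tasks into topological levels (Kahn-style BFS), then assign
    each level's tasks to workers with a longest-processing-time greedy.
    durations: {task: time}; deps: {task: set of prerequisite tasks};
    m: number of workers. Returns (per-worker task lists, makespan bound)."""
    indeg = {t: len(deps.get(t, ())) for t in durations}
    children = {t: [] for t in durations}
    for t, ps in deps.items():
        for p in ps:
            children[p].append(t)
    level = deque(t for t, d in indeg.items() if d == 0)
    batches = []
    while level:
        batches.append(list(level))
        nxt = deque()
        for t in level:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = nxt
    plan = [[] for _ in range(m)]       # task sequence per worker
    makespan = 0
    for batch in batches:
        loads = [0] * m
        for t in sorted(batch, key=durations.get, reverse=True):  # LPT greedy
            w = loads.index(min(loads))
            plan[w].append(t)
            loads[w] += durations[t]
        makespan += max(loads)          # each level waits for the previous one
    return plan, makespan
```

<p>Swapping the greedy inner loop for an optimal assignment (e.g. the Hungarian algorithm over a cost matrix of worker/task completion times) recovers the scheme described above.</p>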
587
sequence-to-sequence model
Can a probabilistic Turing Machine compute an uncomputable number?
https://cs.stackexchange.com/questions/41154/can-a-probabilistic-turing-machine-compute-an-uncomputable-number
<p>Can a probabilistic Turing Machine compute an uncomputable number?</p> <p>My question probably does not make sense, but, that being the case, is there a reasonably simple formal explanation for it? I should add that I am pretty much ignorant of probabilistic TMs and randomized algorithms. I looked at wikipedia, but may even have misunderstood what I read.</p> <p>The reason I am asking is that only the computable numbers can have their digits enumerated by a Turing Machine.</p> <p>But with a probabilistic Turing Machine, I can enumerate any infinite sequence of digits, hence also sequences corresponding to non-computable numbers.</p> <p>Actually, since there are only countably many computable numbers, while there are uncountably many reals that can have their digits enumerated, I could say that my probabilistic Turing Machine can be made to enumerate the digits of a non-computable number with probability 1.</p> <p>I believe this can only be fallacious, but why? Is there a specific provision in the definition of probabilistic TM that prevents that?</p> <p>Actually, I ran into this by thinking about whether various computation models can be simulated by a deterministic TM, in question "<a href="https://cs.stackexchange.com/questions/32536/are-nondeterministic-algorithm-and-randomized-algorithms-algorithms-on-a-determi">Are nondeterministic algorithm and randomized algorithms algorithms on a deterministic Turing machine?</a>". Another possibly related question is "<a href="https://cs.stackexchange.com/questions/22720/are-there-any-practical-differences-between-a-turing-machine-with-a-prng-and-a-p">Are there any practical differences between a Turing machine with a PRNG and a probabilistic Turing machine?</a>".</p>
<p>Consider the following reasonable definition for a Turing machine computing an irrational number in $[0,1]$.</p> <blockquote> <p>A Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, it outputs the first $n$ digits (after the decimal) of the binary representation of $r$.</p> </blockquote> <p>One can think of many extensions of this definition for probabilistic Turing machines. Here is a very permissive one.</p> <blockquote> <p>A probabilistic Turing machine computes an irrational $r \in [0,1]$ if, on input $n$, (1) it outputs the first $n$ digits of $r$ with probability $p$; (2) it outputs any other string with probability less than $p$; (3) it never halts with probability less than $p$.</p> </blockquote> <p>Under this definition, it is not immediately clear whether everything that you can compute is indeed computable (in the sense of the first definition).</p> <p>However, there are some modifications that do allow us to conclude that the resulting number is computable, for example:</p> <ol> <li>We can insist that the machine always halt.</li> <li>We can insist that $p &gt; 1/2$.</li> </ol> <p>Other modifications are not necessarily enough. For example, does it help if we assume that the non-halting probability tends to $0$ with $n$?</p> <p>Summarizing, it might depend on the model.</p>
588
sequence-to-sequence model
How to Simulate Nested Parallelism on a Sequential Machine
https://cs.stackexchange.com/questions/93891/how-to-simulate-nested-parallelism-on-a-sequential-machine
<p>So I have sets of functions $A$, $B$, $C$, $D$, $E$, and $F$. I want to run them in a <em>nested</em> way. I also want to run some of them in parallel, and some of them in sequence. Here is how that might look:</p> <pre><code>In Parallel A { In Sequence B { In Parallel C { In Sequence D { In Parallel E { In Sequence F } } } } } </code></pre> <p>Now I start getting lost on how this would look. Here is my attempt at explaining it...</p> <h3>Part I</h3> <p><em>This part I can sort of understand.</em></p> <p>Say we have $n$ $A$ processes running in parallel with $\land$.</p> <p>$$A = a_1 \land a_2 \land \dots \land a_n$$</p> <p>Within $a_1$, say we have 5 steps running in sequence with $\to$, and $a_2$ is 10 steps, etc. We have:</p> <p>\begin{align} \land\ a_1 &amp;= b_1 \to b_2 \to \dots \to b_5\\ \land\ a_2 &amp;= b_1 \to b_2 \to \dots \to b_{10}\\ \land\ a_3 &amp;= b_1 \to b_2\\ \dots\\ \land\ a_n &amp;= b_1 \to b_2 \to \dots \to b_n \end{align}</p> <p>Now, I would like for "all of $A$ to run at the same time". This means that each $a \in A$ is a whole process. That is, if $a_1$ takes 5 steps and $a_2$ takes 10 steps, the whole process won't start over until 10 steps later.</p> <p>$$A_{1,s=10} \to A_{2,s=10} \to \dotsc \to A_{n,s=10}$$</p> <p>That means it will sort of look like this:</p> <p>\begin{align} A_1 &amp;: s\ s\ s\ s\ s\ s\ s\ s\ s\ s\\ A_2 &amp;: s\ s\ s\ s\ s\ w\ w\ w\ w\ w\\ A_3 &amp;: s\ s\ w\ w\ w\ w\ w\ w\ w\ w\\ \dots\\ A_n &amp;: s_n \dots w_n \end{align}</p> <p>Where $s$ is a step and $w$ is waiting.</p> <h3>Part II</h3> <p><em>This part is where I get lost.</em></p> <p>Now I want to be able to just apply this reasoning to the nested parallel and sequential processes. Informally, so $A$ has started, this means $a_1$ has started, which means $b_1$ has started. 
Now $b_1$ won't complete until all of the $C$ nested within it have completed, but they are running in parallel.</p> <p>$$b_1 = c_1 \land c_2 \land \dots \land c_n$$</p> <p>So it's like:</p> <pre><code>| a1. . . . . | a1. . . . .
| b1. . b2. . | b1. . b2. .
| c . c . . | c . c . .
| c . . c . . | c . . c . .
| c . c . . | c . c . .
| c . c . . | c . c . .
| c . c . | c . c .
| a2. . . . . . . . . . | a2. . . . . . . . . .
| b1. . . . b2. . . . . | b1. . . . b2. . . . .
| c . . . c . . . . . | c . . . c . . . . .
| c . . c . | c . . c .
| c . . . . c . . . | c . . . . c . . .
| c . . c . . . | c . . c . . .
| c . c . . . . | c . c . . . .
</code></pre> <p>Then it goes into further nesting all the way to $F$, which would be hard to draw and is really hard to think about.</p> <h3>Part III</h3> <p><em>This is where I get quite lost.</em></p> <p>Now to model them sequentially.</p> <p>I have just been wanting to do this:</p> <pre><code>for a in A:
    for b in B:
        for c in C:
            run c...
</code></pre> <p>But that isn't right. So then I've tried:</p> <pre><code>for a in A:
    start a
    for b in B:
        start b
        for c in C:
            start c

for aa in start a
... hmm
</code></pre> <p>Now that's not going to work.</p> <p>So then it's like, if the functions were all flattened somehow, then we could just iterate through them. But I am already really confused by this point.</p> <h3>Question</h3> <p>My question is, how do I simulate this on a sequential machine?</p> <p>Any help would be greatly appreciated.</p>
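One classic way to run such nested parallel/sequential blocks on a sequential machine is cooperative scheduling. A minimal generator-based sketch (names illustrative, not from the original post): `seq` chains sub-processes, while `par` advances every live sub-process by one step per "tick", so an enclosing block only moves on once the whole parallel batch has finished — exactly the wait-for-the-longest behaviour drawn above.

```python
def leaf(name, log):
    # one atomic step: record it, then yield control once
    log.append(name)
    yield

def seq(*procs):
    # run sub-processes one after another
    for p in procs:
        yield from p

def par(*procs):
    # one tick: advance every live sub-process by one step
    procs = list(procs)
    while procs:
        alive = []
        for p in procs:
            try:
                next(p)
                alive.append(p)
            except StopIteration:
                pass  # this sub-process has finished
        procs = alive
        yield

log = []
program = par(seq(leaf('b1', log), leaf('b2', log)),
              seq(leaf('c1', log), leaf('c2', log), leaf('c3', log)))
for _ in program:  # drive the whole nested structure to completion
    pass
# steps interleave tick by tick: ['b1', 'c1', 'b2', 'c2', 'c3']
```

Because `seq` and `par` are themselves generators, they nest to any depth ($A$ through $F$) without flattening anything by hand.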
589
sequence-to-sequence model
Totally unimodular &lt;=&gt; polynomial time?
https://cs.stackexchange.com/questions/40334/totally-unimodular-polynomial-time
<p>Crossposting due to recommendation.</p> <p>I formulated a MIP problem which I didn't expect to be unimodular. The problem is to find a minimum complete sequence in a strongly connected digraph. That is, minimize the number of edges such that connectivity is preserved, using only previously existing edges. This will be a Hamiltonian cycle if such a cycle exists.</p> <p>I've verified it with AMPL/CPLEX and it at least seems to be a correct formulation. Solving the LP-relaxation, I found that it consistently provides IP-feasible solutions.</p> <p>But the issue is that the minimum complete sequence problem is NP-complete. Is it even possible that I have a correct formulation given that I get a TU constraint matrix? Any ideas on how to prove that it's TU (I'm far from an expert on this thing)?</p> <p>Here's the model (<code>option relax_integrality 1;</code> for the LP-relaxation):</p> <p>$\begin{align} \min \sum_{i,j} x_{ij} &amp;\\ \sum_j x_{ij} &amp;\geq 1 &amp; \forall i\\ \sum_i x_{ji} &amp;= \sum_i x_{ij} &amp; \forall j \\ \sum_n y_{ij}^n &amp;\geq 1 &amp; \forall i,j \\ y_{ij}^1 &amp;= x_{ij} &amp; \forall i,j\\ x_{ij} &amp;\leq z_{ij} &amp; \forall i,j\\ y_{ij}^n &amp;\leq (y_{ik}^{n-1}+x_{kj} - 1) + M(1-b_{ij}^{nk}) &amp; \forall n \neq 1, i, j, k\\ \sum_k b_{ij}^{nk} &amp;\geq 1 &amp; \forall i,j,n \\ x,y,b &amp;\in \{0,1\} \end{align} $</p> <p>$y^n_{ij} = 1$ iff there exists a path of length $n$ from $i$ to $j$. $z_{ij} = 1$ implies that the edge $i,j$ exists in the original graph. $b$ is a binary hack to formulate the connectivity constraint.</p> <p>edit: $i,j,k,n$ all range from $1$ to $N = $ number of nodes. I realize that the constraint matrix actually can't be unimodular due to the presence of $M$ which is some arbitrarily large number. 
But the LP-relaxation constantly spits out integer-feasible solutions, which for smaller graphs actually correspond to known optima and for larger graphs seem to check out as well.</p> <p>edit: After examining the problem closer, I found that</p> <p>a) the formulation is indeed correct</p> <p>b) the linear relaxation consistently provides integer solutions with respect to x and y, but NOT b. It seems as though the $b$ variables are basically conspiring to turn the problem totally unimodular. Setting $M = 10000$, b would constantly set itself to 0.99999 to minimize slack. This makes the x and y variables ONLY take on binary values. This is maybe obvious to experts but I found it extremely surprising. I have yet to see a case where it doesn't find the actual optimum, but this is probably due to an insufficient set of example graphs. </p>
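Not part of the original post, but for small constraint matrices one can sanity-check total unimodularity directly from the definition — every square submatrix must have determinant in $\{-1, 0, 1\}$. A brute-force sketch (exponential, illustrative only):

```python
from itertools import combinations

def det(M):
    # cofactor expansion along the first row (fine for tiny submatrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    # check every k-by-k submatrix, for every k
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[r][c] for c in cols] for r in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True
```

An interval matrix like `[[1, 1, 0], [0, 1, 1]]` passes, while `[[1, 1], [-1, 1]]` fails (its determinant is 2) — and, as noted in the edit, a matrix containing an arbitrarily large big-M coefficient can never pass.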
590
sequence-to-sequence model
Assuming constant operation cost, are we guaranteed that computational complexity calculated from high level code is &quot;correct&quot;?
https://cs.stackexchange.com/questions/160969/assuming-constant-operation-cost-are-we-guaranteed-that-computational-complexit
<p>Edit: Since this post is gaining traction, I feel the need to clarify that the purpose of this is to see if asymptotic and constant factor estimations calculated from high level code implementations of algorithms are reasonable approximations of the true version. <strong>I am not trying to predict the speed of code when running</strong>.</p> <hr /> <p>Suppose I have some sequence of code <span class="math-container">$C$</span> in a high level language (C++, Python, Java, etc.) which needs to be converted into machine code which we will call <span class="math-container">$M$</span>. Obviously machine code varies from system to system, so assume we are on a fixed machine.</p> <p>Under a uniform cost model, we can say that every instruction of <span class="math-container">$M$</span> costs <span class="math-container">$1$</span> operation, so we can calculate the complexity of our code as some function of the input size of our problem <span class="math-container">$n$</span>. Let an upper bound of this function be <span class="math-container">$f_M(n)$</span>.</p> <p>Clearly, nobody writes in machine code, so we can <em>approximate</em> <span class="math-container">$f_M(n)$</span> by counting operations of our high level language code <span class="math-container">$C$</span>, which we will call <span class="math-container">$f_C(n)$</span>.</p> <p>Define <span class="math-container">$1$</span> operation of <span class="math-container">$C$</span> to be any of the following:</p> <pre><code>1. Function Calls
2. Returning
3. Arithmetic Operators
4. Logical Operators
5. Comparisons
6. Pointer/Object dereferencing/Array indexing
7. Variable Assignment
</code></pre> <p>This list is heuristic, and was given to me as a means of approximating by an engineer mentor, so if a better heuristic exists please let me know.</p> <hr /> <p>Questions: Given <span class="math-container">$f_C(n)$</span>, are we guaranteed any of the following, assuming the code is compiled in &quot;good faith&quot; (no additional logic added to insert unnecessary operations)?</p> <ol> <li>Does <span class="math-container">$f_C(n)\in \mathcal{O}(g(n))$</span> imply <span class="math-container">$f_M(n)\in \mathcal{O}(g(n))$</span>?</li> <li>Does there exist a constant factor <span class="math-container">$c$</span> such that <span class="math-container">$f_M(n)\leq c\cdot f_C(n)$</span> for sufficiently large <span class="math-container">$n$</span>?</li> </ol>
<p>Yes, this is reasonable as a first cut approximation. As always, there are caveats.</p> <p>A theoretical model is a model. It is used for making predictions, but models typically are not perfect, and there are some factors that they don't take into account.</p> <p>I'm not sure what you mean by function calls, but some functions might take a very long time to execute, so a single function call might take a very long time, unless you are also careful to count all of the operations that are done by the function you are calling. So I will assume that you are also counting the time to execute the body of every function that is called, otherwise there is a major flaw.</p> <p>This does not take into account the effect of memory hierarchy. For instance, random access memory lookups might be noticeably slower than sequential memory access, because of cache effects. This is not taken into account by the model you articulate -- partly because it is very challenging to model.</p> <p>If you are concerned with practical running times, there are no guarantees. A single memory access could cause a <a href="https://en.wikipedia.org/wiki/Page_fault" rel="noreferrer">page fault</a> (e.g., if there is <a href="https://en.wikipedia.org/wiki/Thrashing_(computer_science)" rel="noreferrer">thrashing</a> due to memory pressure) and load data from disk, which takes a long time compared to executing a single instruction.</p>
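As an illustration of the counting heuristic (the numbers below are a hand count for a hypothetical worst-case linear search, not taken from the question): the resulting $f_C(n)$ is affine in $n$, hence bounded by $c \cdot n$ for a suitable constant — exactly the kind of statement questions 1 and 2 are about.

```python
def linear_search_ops(n):
    """Hand-counted heuristic cost of a worst-case linear search over
    n items: one loop comparison, one array index, one equality test
    and one increment per iteration, plus constant setup/teardown."""
    setup, per_iteration, teardown = 1, 4, 2
    return setup + per_iteration * n + teardown

# f_C(n) = 4n + 3 is O(n): bounded by 7n for all n >= 1
assert all(linear_search_ops(n) <= 7 * n for n in range(1, 1000))
```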
591
sequence-to-sequence model
Is “x&#39; = f(x)” a programming paradigm?
https://cs.stackexchange.com/questions/155306/is-x-fx-a-programming-paradigm
<p>I'm the author of GateBoy (a gate-level simulation of the original Game Boy hardware) and Metron (a C++ to Verilog translation tool). One big issue I had to work around for both projects is the inability of C++ (or really, any current procedural programming language) to express atomic state change in a way that is both performant and unambiguous. For example, consider the following trivial function:</p> <pre><code>void swap_and_increment(int&amp; a, int&amp; b) {
  int old_a = a;
  a = b + 1;
  b = old_a + 1;
}
</code></pre> <p>Because the first assignment to 'a' destroys the old value of 'a', we have to store it in 'old_a' in order for the swap to work. In contrast, we could write it like this:</p> <pre><code>void swap_and_increment(int&amp; a, int&amp; b) {
  a' = b + 1;
  b' = a + 1;
}
</code></pre> <p>where a' means &quot;the new value of a&quot;, but this syntax has no equivalent in real C++. This may seem like an insignificant problem at first, but when you scale up to something the size of a Game Boy simulation which has thousands of variables that need to change simultaneously it becomes a serious design problem.</p> <p>GateBoy solves this problem by instrumenting every variable in debug builds to catch &quot;old/new&quot; bugs - variables get marked as &quot;old&quot; or &quot;new&quot; during execution and reading a &quot;new&quot; value when you expect an &quot;old&quot; value is a runtime error. 
Metron takes a different tactic and does some symbolic code analysis at translation time to do essentially the same thing - it can ensure that for every possible path through the code, all reads of &quot;old&quot; values are actually reading old values and vice versa for &quot;new&quot; values.</p> <p>If we generalize the problem a bit, we can say that the difficulty comes in trying to model a system where the entire state of the system represented as X needs to be transformed in to a new state X' via a pure function F without constantly making copies of old state (kills performance), requiring the author to keep track of which parts of the state are old or new at any given point (causes bugs), or relying on hardware support like transactional memory (not widely available). To put it more concisely, a program in the form &quot;x' = f(x)&quot; has no good representation in the software programming languages we use today.</p> <p>I recently had the opportunity to discuss this issue with a bunch of grizzled old software and hardware veterans, and the approximate consensus seems to be:</p> <ul> <li><p>&quot;x' = f(x)&quot; as a model for global atomic state change makes sense to both software and hardware developers, with some viewing it as &quot;just another name for a state machine&quot; (mostly software devs) and some as &quot;so obvious that it doesn't need to be stated&quot; (mostly hardware devs).</p> </li> <li><p>There really isn't any software-oriented language out there that allows for global atomic state change to be both performant (the compiler understands the distinction between &quot;old&quot; and &quot;new&quot; and can reorder code to avoid excessive copies or temporaries) and unambiguous (the distinction between an &quot;old&quot; value and a &quot;new&quot; value has some explicit representation in the language syntax).</p> </li> </ul> <p>So, what do we call this model? 
Allowing the compiler to reorder statements to preserve &quot;oldness&quot; and &quot;newness&quot; during execution seems to diverge from the &quot;a program is a sequence of operations&quot; model of procedural programming, and the fact that we <em>do</em> want to modify X in place instead of constantly creating new (potentially very large) state objects makes it a poor fit for functional programming.</p> <p>So, my question to the audience - Does it make sense to call &quot;x' = f(x)&quot; a programming paradigm? It's certainly not a new one, but it also doesn't fit well with the paradigms we've given names to. What should we do with it?</p>
<p>I am no good at giving names to paradigms. Someone else will be. But the answer to what you actually want to do is quite straightforward, in C++ at least.</p> <p>Basically you accumulate <strong>actions</strong>, and having accumulated them you execute them all at the end. In the example you give, each action is of the form &quot;set <code>a</code> to some value&quot;. Your problem is that you want all the &quot;some values&quot; to be evaluated before any action is performed - and to make this happen automatically so it can't be broken by a dozy programmer. <em>This happens naturally</em> with this model, because all the evaluations (of <code>a+1</code> etc) happen when the action objects are constructed, and the execution happens much later.</p> <p>How exactly you do this is a matter of taste and of practicality. But given an <code>Action</code> class that looks like this:</p> <pre><code>class Action {
    int &amp;destination;
    int source;
public:
    Action(int &amp;d, int s) : destination(d), source(s) {}
    void apply() { destination = source; }
    ~Action() { apply(); }  // This is one way of doing it (see below).
};
</code></pre> <p>you can create each action as an object and calculate what new value you are going to have at the time each object is constructed.</p> <p>In your particular example, with a fixed number of actions, I would be inclined to make the destructor of the object call <code>apply()</code>. 
Thus your</p> <pre><code>void swap_and_increment(int&amp; a, int&amp; b) {
    Action first {a, b + 1};
    Action second {b, a + 1};
}
</code></pre> <p>will:</p> <ul> <li>Calculate <code>b+1</code> and <code>a+1</code> based on the old values before calling the constructors.</li> <li>At the end of the function, when the destructors are called and (as I have suggested) automatically call <code>apply()</code>, this will set the new values of <code>a</code> and <code>b</code>, as you asked.</li> </ul> <p>If you don't like giving names to things then you can just create an array of <code>Action</code>s:</p> <pre><code>Action actions[] = {{a, b + 1}, {b, a + 1}};
</code></pre> <p>and this will create the objects (doing the calculation from the old values at this time), and then the destructor of the array will call the destructors of the objects and will thus perform the assignments all in one go.</p> <p>The reason for using arrays is that this makes the compiler do all the work and at runtime the overhead should be tiny (no memory allocation or deallocation, no function calls).</p> <p>If your real structure is more sophisticated than the example you gave, then you could use a vector instead of an array. You would then make sure that the <code>Action</code> cannot be copied or assigned, only moved, and have an extra flag so that a moved-from <code>Action</code> will do nothing at all when its destructor is called. As before, destruction of the vector happens automatically at the end of the function; it calls the destructors of the objects and thus <code>apply()</code> for each of them.</p> <hr /> <p>As to what <strong>name</strong> you give to the pattern of &quot;accumulate actions and then execute them all at the end&quot;, I leave that to the linguists to decide.</p>
592
sequence-to-sequence model
Is it viable to use an HMM to evaluate how well a catalogue is used?
https://cs.stackexchange.com/questions/1122/is-it-viable-to-use-an-hmm-to-evaluate-how-well-a-catalogue-is-used
<p>I am interested in evaluating a catalogue that students would be using, in order to observe probabilistically how it is being used.</p> <p>The catalogue works by choosing cells in a temporal sequence, so for example:</p> <ul> <li>Student A has: $(t_1,Cell_3),(t_2,Cell_4)$</li> <li>Student B has: $(t_1,Cell_5),(t_2,Cell_3),(t_3,Cell_7)$. </li> </ul> <p>Assume that the cells of the table are states of a <a href="https://en.wikipedia.org/wiki/Hidden_Markov_model" rel="nofollow">Hidden Markov Model</a>, so a transition between states would map in the real world to a student going from a given cell to another.</p> <p>Assuming that the catalogue is nothing more than guidance, it is expected that a certain kind of phenomenon occurs on a given artifact. Consider this artifact to be unique, say, for example a program. </p> <p>What happens to this program is a finite list of observations; thus, for a given cell we have a finite list of observations for following the suggestion mentioned on that cell. In an HMM this would then be the probability associated with a state generating a given observation in this artifact. </p> <p>Finally, consider the catalogue to be structured in a way that the probability of starting in any given cell is initially equal. The catalogue does not suggest any starting point. </p> <ul> <li><p><strong>Question 1</strong>: Is the mapping between the catalogue and the HMM appropriate?</p></li> <li><p><strong>Question 2</strong>: Assuming question 1 holds true. Consider now that we train the HMM using as entries $(t_1,Cell_1), (t_2,Cell_3) , ... (t_n,Cell_n)$ for the students. Would the trained HMM, once asked to generate the most likely sequence of state transitions, yield as a result the way the catalogue was most commonly used for a given experiment $\epsilon$? </p></li> </ul>
<p><strong>Ad Question 1:</strong> Assuming that your assumptions on how the catalogue is used hold -- that is, the choice of the next cell only depends on the current (or a constant number of preceding) cell(s), <em>not</em> the (full) history -- then yes, you can use a Markov chain to model it.</p> <p>However, you do not seem to need the "Hidden" part; this is only useful if you have (probabilistic) output in the states which you observe. In contrast, you want to observe the students' cell states directly.</p> <p>Imagine you can not observe which cells students are in but only how much they like the current cell; this could be implemented by notorious "Was this useful to you?" buttons; in general, assume every student gives feedback from $\{1, \dots, k\}$ in every cell but you do not know which cell they are in. Then you can use an HMM to estimate their cell sequence given their feedback sequence.</p> <p><strong>Ad Question 2:</strong> Let me illustrate the use by continuing the above train of thought. For $n=k=3$, we have the following Markov chain:</p> <p><img src="https://i.sstatic.net/VkQlh.png" alt="abstract HMM"><br> <sup>[<a href="http://akerbos.github.io/sesketches/src/cs_1122_1.tikz" rel="nofollow noreferrer">source</a>]</sup></p> <p>$p_{i,j}$ is the probability to transition from state $i$ to state $j$; note that $p_{i,0}=0$ for all $i$ and those edges have been left out¹. Let $p_{i,\bot}$ be the probability of terminating in state $i$ (we leave out another dummy state for clarity). $q_{i,l}$ is the probability to emit $l$ when entering (or, equivalently, leaving) state $i$; this models our feedback. Of course, we require $\sum_j p_{i,j} + p_{i,\bot} = \sum_l q_{i,l} = 1$ for all $i$.</p> <p>Note that the state sequence $0,1,2$ has probability $p_{0,1} \cdot p_{1,2}$ -- that is a central property of Markov chains. 
The output sequence $1,2$, on the other hand, has probability $q_{0,1}\cdot p_{0,1}\cdot q_{1,2} + q_{0,1}\cdot p_{0,2}\cdot q_{2,2} + q_{0,1}\cdot p_{0,3}\cdot q_{3,2}$.</p> <p>In a real world scenario, we would have to <em>train</em> our Markov chain. Let us assume we have some sequences of both state and output²:</p> <p>$\qquad \begin{align} &amp;(0,1), (1,1), (2,1), (3,1) \\ &amp;(0,2), (1,3), (3,1) \\ &amp;(0,1), (3,3), (1,2) \end{align}$</p> <p>Now we just count how often each transition and output happened and set our probabilities to the relative frequencies³:</p> <p><img src="https://i.sstatic.net/qEPTH.png" alt="concrete HMM"><br> <sup>[<a href="http://akerbos.github.io/sesketches/src/cs_1122_2.tikz" rel="nofollow noreferrer">source</a>]</sup></p> <p>Now you can do the interesting stuff. Ignoring the output, you can determine the path with highest probability, and you can determine the output sequence with highest probability; but most importantly -- and here the <em>hidden</em> part comes into play -- you can find the most likely state sequence given an output sequence with the <a href="https://en.wikipedia.org/wiki/Viterbi_algorithm" rel="nofollow noreferrer">Viterbi algorithm</a>.</p> <p><strong>Bottom line:</strong> yes, you can use Markov chains or hidden Markov models (depending on your needs) to model your scenario. Whether the fundamental assumption -- the probability for the next cell only depends on the current cell -- makes sense is another question entirely.</p>
Usually you assume that everything is possible and has just not been observed due to low probability. We can compensate for this by adding $1$ to all counts; this way, all sequences have probability $\gt 0$.</li> </ol>
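The counting step above ("set probabilities to the relative frequencies") is easy to prototype. A minimal sketch using the state parts of the three training sequences from the example (the add-one smoothing of footnote 3 is omitted):

```python
from collections import Counter

def train_transitions(sequences):
    # Estimate p[i, j] as the relative frequency of transition i -> j
    # among all transitions leaving state i.
    trans, outgoing = Counter(), Counter()
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            trans[cur, nxt] += 1
            outgoing[cur] += 1
    return {(i, j): c / outgoing[i] for (i, j), c in trans.items()}

# State parts of the three training sequences above
p = train_transitions([[0, 1, 2, 3], [0, 1, 3], [0, 3, 1]])
```

For example, state $0$ is left three times, twice towards state $1$, so the estimate for $p_{0,1}$ comes out as $2/3$.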
593
sequence-to-sequence model
Choosing a shortest representative number from interval in arithmetic coding
https://cs.stackexchange.com/questions/76233/choosing-a-shortest-representative-number-from-interval-in-arithmetic-coding
<p>In <a href="https://en.wikipedia.org/wiki/Arithmetic_coding" rel="nofollow noreferrer">arithmetic coding</a> a word is coded as the binary encoding of a number in a certain interval. The interval is determined from a sequence of nested intervals according to the probability distribution on the letters of the word. </p> <p>This encoding and decoding process is totally clear to me, but what puzzles me is how an encoder chooses the optimal number (i.e. the one with shortest length if written in binary)?</p> <p>For example, in the above linked wikipedia article in the paragraph <a href="https://en.wikipedia.org/wiki/Arithmetic_coding#Sources_of_inefficiency" rel="nofollow noreferrer">Sources of Inefficiency</a> in the example they choose 0.538 as the message, which is not optimal as it has quite a long (way longer than 8 bits) expansion when written in binary; as is also noted, choosing 0.5390625 would be much better.</p> <p>Also, if we replace the probability model with a uniform distribution we get $(5/27, 6/27)$ as the interval; as shown in the paragraph, the binary codings of the boundaries are quite long, much longer than the ${\sim}5$ bits, but if we choose $0.1875$ then this could be coded as $0.0011$, which requires five bits if we submit $00110$ (the final $0$ seems to be necessary for the decoder, as discussed in the paragraph).</p> <p>So choosing the right number in the interval is, in my understanding, the key to the compression algorithm (backed up by the intuition that a larger interval -- given by higher probabilities of the individual tokens -- offers more numbers to choose from, making it easier to find one with a short binary expansion), but every textbook or set of notes I find concentrates on describing the interval nesting procedure, not on how to choose a shortest representative.</p> <p>So, if my understanding is correct, this seems to be an essential part. How is it achieved?</p>
<p>While you are encoding symbols between the ranges, you are not choosing which number to use for a specific symbol; you are narrowing down the possible choices there are. You could choose arbitrarily per symbol, but how would you adjust the ranges to make that the actual choice? It is true that for the LAST symbol you can make a choice that fits within the rounding error, so long as it is within the range for that symbol.</p> <p>The point of the example was that an incorrect distribution gives poor or worse compression; choosing better statistics gives better results. Huffman coding is at most within 1 bit per symbol of the entropy, whereas arithmetic coding is within a fraction of a bit per symbol, if the correct statistics are used. You can choose how accurately you do the encoding: increasing the precision of the RANGE allows a more accurate representation of the given statistics, thus less error. Choosing the wrong distribution for the specific message, regardless of how accurate the encoding process is, will give very poor compression or make the output larger than the original message. So think of it as an error rate: the smaller you can make the error rate, the closer to the real limit you can get. Choosing a "static" distribution for all text files will never give you the smallest compressed file this specific method can achieve.</p> <p>JUST A NOTE: "optimum" is often misunderstood. It simply means the best a specific method can do, not that it is the highest compression available.</p>
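On the "shortest representative" question itself: the usual trick is to take the dyadic rational with the fewest bits inside the final interval -- keep doubling the precision until some multiple of $2^{-k}$ falls strictly between the bounds. A sketch (illustrative; plain floats, so only suitable for short messages):

```python
def shortest_binary_in(low, high):
    # Find the smallest k such that some m / 2**k lies strictly inside
    # (low, high), and return its binary expansion "0.b1b2...bk".
    k = 0
    while True:
        k += 1
        scale = 1 << k
        m = int(low * scale) + 1  # first multiple of 2**-k above low
        if m / scale < high:
            return '0.' + format(m, '0{}b'.format(k))

# The uniform-model interval (5/27, 6/27) discussed above:
# yields 3/16 = 0.1875, i.e. the four-bit string 0011
assert shortest_binary_in(5 / 27, 6 / 27) == '0.0011'
```

A production coder would do this incrementally in fixed-precision integer arithmetic, emitting bits as soon as the leading bits of the bounds agree, rather than materialising the whole interval at the end.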
594
sequence-to-sequence model
Is there a sub-NP algorithm to satisfy or prove unsatisfiable a set of a&lt;b&lt;c OR c&lt;b&lt;a constraints
https://cs.stackexchange.com/questions/155958/is-there-a-sub-np-algorithm-to-satisfy-or-prove-unsatisfiable-a-set-of-abc-or
<p>This problem's been stumping me for the better part of a week:</p> <p>You're given a set of triplets of variables. The variables are all distinct and ordered. Each triplet <span class="math-container">$a,b,c$</span> means that either <span class="math-container">$a&lt;b&lt;c$</span> or <span class="math-container">$c&lt;b&lt;a$</span>. The problem is to find an ordering of the variables satisfying all constraints.</p> <p>For example, the set <span class="math-container">$\{(a,b,c),(d,c,b),(b,d,e)\}$</span> is solvable by assigning <span class="math-container">$a&lt;b&lt;c&lt;d&lt;e$</span>. By symmetry you can assume <span class="math-container">$a&lt;b&lt;c$</span> to start, and the rest of the sequence is derivable by finding triplets where a pair has been seen already. In contrast, a set like <span class="math-container">$\{(a,b,c),(a,c,b)\}$</span> is unsatisfiable, as if <span class="math-container">$a&lt;b&lt;c$</span> then the second triplet implies either <span class="math-container">$b&gt;c$</span> or <span class="math-container">$a&gt;b$</span>.</p> <hr /> <p>The natural solution to me is to directly model this as integer linear programming, where all values are distinct integers in <span class="math-container">$[0, |\text{Variables}|)$</span> and the triplets are directly encoded as an equation and solved. This works (and is performant), but it may be suboptimal for the special case.</p> <p>Most of my attention has been looking at whether you can build a directed (acyclic) graph where the variables can be topologically sorted: <span class="math-container">$a &gt; b$</span> if the DAG has a path from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, and if at any point an unavoidable cycle occurs then it's UNSAT. 
This also works, but I can't find a clever way to avoid backtracking at some point, as there are many problem instances where guessing orderings becomes necessary.</p> <p>I suspect the problem is NP-hard, but I can't prove it. This also seems like the type of problem that would have been heavily researched due to its simplicity, but I can't find any references.</p>
<p>This is known as the <a href="https://en.wikipedia.org/wiki/Betweenness" rel="nofollow noreferrer">betweenness problem</a>. It is in NP (you can easily check the correctness of any proposed solution in polynomial time), and is NP-hard.</p>
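To make the "easily check in polynomial time" point concrete, here is a minimal sketch of a verifier for a proposed ordering (the function name and representation are illustrative, not from a library):

```python
def satisfies_betweenness(order, triples):
    """Check a proposed total order against all betweenness constraints.

    order:   the variables listed from smallest to largest.
    triples: each (a, b, c) means a < b < c or c < b < a.
    Runs in O(|order| + |triples|) time.
    """
    pos = {v: i for i, v in enumerate(order)}  # variable -> rank
    return all(
        pos[a] < pos[b] < pos[c] or pos[c] < pos[b] < pos[a]
        for a, b, c in triples
    )

# The satisfiable instance from the question, with the stated solution:
sat = [("a", "b", "c"), ("d", "c", "b"), ("b", "d", "e")]
print(satisfies_betweenness(list("abcde"), sat))  # True
```

Such a checker is exactly the polynomial-time verifier that places the problem in NP; the hard part is finding the ordering, not checking it.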
595
sequence-to-sequence model
What is the purpose of interpreting elements in the proof of reduction of PCP to validity decidability problem of predicate logic?
https://cs.stackexchange.com/questions/66501/what-is-the-purpose-of-interpreting-elements-in-the-proof-of-reduction-of-pcp-to
<p>Since my question relates directly to a part of the text from a 2004 book, <em>Logic in Computer Science: Modelling and Reasoning about Systems (2nd Edition) by Michael Huth and Mark Ryan</em>, in order to provide context for the following discussion, I'm partially quoting the book verbatim:</p> <blockquote> <p>The decision problem of validity in predicate logic is undecidable: no program exists which, given any $\varphi$, decides whether $\varphi$.</p> <p>PROOF: As said before, we pretend that validity is decidable for predicate logic and thereby solve the (insoluble) Post correspondence problem. Given a correspondence problem instance $C$: $$s_1 s_2 ... s_k$$ $$t_1 t_2 ... t_k$$ we need to be able to construct, within finite space and time and uniformly so for all instances, some formula $\varphi$ of predicate logic such that $\varphi$ holds iff the correspondence problem instance $C$ above has a solution.</p> <p>As function symbols, we choose a constant $e$ and two function symbols $f_0$ and $f_1$ each of which requires one argument. We think of $e$ as the empty string, or word, and $f_0$ and $f_1$ symbolically stand for concatenation with 0, respectively 1. So if $b_1 b_2 ... b_l$ is a binary string of bits, we can code that up as the term $f_{b_l}(f_{b_{l−1}}...(f_{b_2}(f_{b_1}(e)))...)$. Note that this coding spells that word backwards. To facilitate reading those formulas, we abbreviate terms like $f_{b_l}(f_{b_{l−1}}...(f_{b_2}(f_{b_1}(t)))...)$ by $f_{{b_1}{b_2}...{b_l}}(t)$.</p> <p>We also require a predicate symbol $P$ which expects two arguments. The intended meaning of $P(s,t)$ is that there is some sequence of indices $(i_1,i_2,...,i_m)$ such that $s$ is the term representing $s_{i_1} s_{i_2}...s_{i_m}$ and $t$ represents $t_{i_1} t_{i_2}...t_{i_m}$. 
Thus, $s$ constructs a string using the same sequence of indices as does $t$; only $s$ uses the $s_i$ whereas $t$ uses the $t_i$.</p> <p>Our sentence $\varphi$ has the coarse structure $\varphi_1 \wedge \varphi_2 \implies \varphi_3$ where we set</p> <p>$$\varphi_1 \stackrel{def}{=} \bigwedge\limits_{i=1}^k P\left(f_{s_i}(e),f_{t_i}(e)\right)$$</p> <p>$$\varphi_2 \stackrel{def}{=} \forall v,w \hspace{1mm} P(v,w)\rightarrow\bigwedge\limits_{i=1}^kP(f_{s_i}(v),f_{t_i}(w))$$</p> <p>$$\varphi_3 \stackrel{def}{=} \exists z\hspace{1mm} P(z,z)$$.</p> <p>Our claim is $\varphi$ holds iff the Post correspondence problem $C$ has a solution.</p> </blockquote> <p>In proving PCP ⟹ Validity:</p> <blockquote> <p>Conversely, let us assume that the Post correspondence problem C has some solution, [...] The way we proceed here is by <em>interpreting</em> finite, binary strings in the domain of values $A′$ of the model $M′$. This is not unlike the coding of an interpreter for one programming language in another. The interpretation is done by a function <strong>interpret</strong> which is defined inductively on the data structure of finite, binary strings:</p> <p>$$\text{interpret}(\epsilon) \stackrel{def}{=} e^{M′}$$</p> <p>$$\text{interpret}(s0) \stackrel{def}{=} {f_0}^{M′}(\text{interpret}(s))$$</p> <p>$$\text{interpret}(s1) \stackrel{def}{=} {f_1}^{M′}(\text{interpret}(s))$$.</p> <p>[...] Using [$\text{interpret}(b_1 b_2...b_l) = f_{b_l}^{M′}(f_{b_{l-1}}^{M′}(...(f_{b_1}^{M′}(e^{M′})...)))$] and the fact that $M′\models\varphi_1$, we conclude that $(\text{interpret}(s_i), \text{interpret}(t_i)) \in P^{M′}$ for $i = 1,2,...,k$. [...] since $M′ \models \varphi_2$, we know that for all $(s,t) \in P^{M′}$ we have that $(\text{interpret}(ss_i),\text{interpret}(tt_i)) \in P^{M′}$ for $i=1,2,...,k$.
Using these two facts, starting with $(s, t) = (s_{i_1}, t_{i_1})$, we repeatedly use the latter observation to obtain</p> <p>(2.9) $(\text{interpret}(s_{i_1}s_{i_2}...s_{i_n}),\text{interpret}(t_{i_1}t_{i_2}...t_{i_n})) \in P^{M′}$.</p> <p>[...] Hence (2.9) verifies $\exists{z} P(z,z)$ in $M′$ and thus $M′ \models \varphi_3$.</p> </blockquote> <p>In proving that the validity of predicate logic is undecidable, according to the approach I learned from school, which is based on that of the <em>Huth &amp; Ryan book (2nd edition, page 135)</em>, when constructing the reduction of PCP to Validity problem, the "finite binary strings" of the universe are interpreted with a "<strong>interpret</strong> function", which encodes binary strings into composites of functions of the model.</p> <p>Then it goes on to show that, using the fact that the antecedent of $\varphi$ must hold for it to be non-trivial, both sub-formulae of the antecedent can be expressed with the said "<strong>interpret</strong> function". From there, it follows that the consequence holds, too, since it can also be expressed in a way with the <strong>interpret</strong> function that follows from the previous expressions with <strong>interpret</strong>.</p> <p>My question is: what is the purpose of this "<strong>interpret</strong> function"? Why can't we just use the previously devised φ and get the same result? What do we get out of using <strong>interpret</strong> to express our elements?</p> <p>And also, what if our universe contains some arbitrary elements; that is, what if they are not binary strings? Do we just construct some mapping of the two?</p>
<p>Let's start with what exactly you are trying to prove.</p> <p>You're dealing with a signature $\sigma$ which consists of one constant $e$, two function symbols $f_0,f_1$, and one binary predicate $P(s,t)$. We denote by $\mathcal{C}$ the set of all "yes" instances to the post correspondence problem, i.e. all sequences of ordered pairs of binary strings $(s_1,t_1),...,(s_k,t_k)$ such that there exist indices $i_1,...,i_n$ for some $n\in\mathbb{N}$ which satisfy $s_{i_1}\cdot...\cdot s_{i_n}=t_{i_1}\cdot...\cdot t_{i_n}$ ($\cdot$ stands for concatenation). </p> <p>You want to show that given an instance $c=(s_1,t_1),...,(s_k,t_k)$ to the post correspondence problem, then </p> <p>$c\in\mathcal{C} \iff$ If $\mathcal{M}$ is any model interpreting $\sigma$, then $\mathcal{M\models\ \varphi(c)}$</p> <p>Where $\varphi(c)=\varphi_1(c)\land\varphi_2(c)\rightarrow \varphi_3(c)$, and</p> <p>$\varphi_1(c)=\bigwedge\limits_{i=1}^k P\left(f_{s_i}(e),f_{t_i}(e)\right)$,</p> <p>$\varphi_2(c)=\forall v,w \hspace{1mm} P(v,w)\rightarrow\bigwedge\limits_{i=1}^kP(f_{s_i}(v),f_{t_i}(w))$,</p> <p>$\varphi_3(c)=\exists z\hspace{1mm} P(z,z)$.</p> <p>In the above, given a binary string $s=s_1,...,s_l$, $f_s$ denotes the composition $f_{s_l}\circ f_{s_{l-1}}\circ ...\circ f_{s_1}$. This is the reduction from PCP to validity in predicate logic described in "logic in computer science" by Huth &amp; Ryan.</p> <p>We think of $f_0,f_1$ as concatenation with $0,1$ correspondingly, and of $e$ as the empty string. In that case, we can think of $f_s(e)$ as a representation of the string $s$ in the world of $\mathcal{M}$.
Intuitively, $\varphi_1,\varphi_2$ force the predicate $P(v,w)$ to hold when (perhaps in some other cases as well, but we won't care) $v=f_s(e), w=f_t(e)$ (meaning that $v,w$ are the interpretations of some finite strings $s,t$ in the world of $\mathcal{M}$) and there exists a sequence of indices $i_1...i_n$ such that $s=s_{i_1}\cdot...\cdot s_{i_n}$ and $t=t_{i_1}\cdot...\cdot t_{i_n}$. If $P(v,w)$ indeed has that meaning (which is what happens if $\mathcal{M}$ satisfies $\varphi_1\land\varphi_2$), then $c\in\mathcal{C}\iff \exists z P(z,z)$.</p> <p>You ask about the $\Rightarrow$ direction of the proof, so you must handle arbitrary models which interpret $\sigma$, where the world can have elements which have nothing to do with strings (this relates to your second question). This is where the interpretation function comes in. We give a correspondence between all finite strings and a subset of the world of $\mathcal{M}$, which is rather natural given the nature of our signature. A string $s$ is mapped to the element $f_s(e)$, which can be a string/number/table or anything you like.</p> <p>Now, when we have the ability to think of elements of the form $f_s(e)$ in $\mathcal{A}_{\mathcal{M}}$ (the world of $\mathcal{M}$) as finite strings, we can go on and prove the $\Rightarrow$ implication. If $\mathcal{M}$ satisfies $\varphi_1,\varphi_2$, then as we mentioned, $P(v,w)$ holds when $v=f_s(e), w=f_t(e)$ (now we can think of $v,w$ as strings), and there exists a sequence of indices $i_1...i_n$ such that $s=s_{i_1}\cdot...\cdot s_{i_n}$ and $t=t_{i_1}\cdot...\cdot t_{i_n}$. Thus, if $c\in \mathcal{C}$, and $i_1...i_n$ is a sequence of indices with $s=s_{i_1}...s_{i_n}=t_{i_1}...t_{i_n}=t$, then $P(f_s(e),f_t(e))$ holds, and we have $\mathcal{M}\models \varphi_3$, since $s=t$ implies $f_s(e)=f_t(e)$.</p>
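The inductive definition of interpret is easy to make concrete. In the "term model" where $e$ is the empty string and $f_b$ appends the bit $b$, interpret is just the identity on strings; in any other model it evaluates the same term against that model's constant and functions. A small illustrative sketch (the signature and models are toy choices, not from the book):

```python
def interpret(s, e, f0, f1):
    """Inductive interpretation of a finite binary string, evaluated
    iteratively: interpret('') = e, interpret(s + '0') = f0(interpret(s)),
    interpret(s + '1') = f1(interpret(s))."""
    val = e
    for bit in s:
        val = f0(val) if bit == "0" else f1(val)
    return val

# Term model: e is the empty string and f_b appends the bit b,
# so interpret maps every string to itself:
print(interpret("0110", "", lambda t: t + "0", lambda t: t + "1"))  # 0110

# A different model over the integers: e = 0 and f_b(x) = 2x + b,
# so interpret reads the string as a binary numeral (MSB first):
print(interpret("0110", 0, lambda x: 2 * x, lambda x: 2 * x + 1))  # 6
```

The second model shows why the function is needed: the domain elements (here, integers) need not be strings at all, yet interpret still singles out the subset of the domain that plays the role of the finite strings.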
596
sequence-to-sequence model
Terminology for multiply visiting walks of directed graphs
https://cs.stackexchange.com/questions/144008/terminology-for-multiply-visiting-walks-of-directed-graphs
<p>In phrasing an information model for consumption-optimized RDF-like data (full context at <a href="https://github.com/core-wg/coral/pull/1#issuecomment-921861748" rel="nofollow noreferrer">1</a> for the curious), I'm looking for any established term for <em>X</em> as used here:</p> <blockquote> <p>Given is a directed rooted graph. An <em>X</em> is a (finite) sequence of edges, which go down the tree and then jump up again to a point where they've been just before.</p> </blockquote> <p>In less precise and more illustrative words, an <em>X</em> is a line you can draw by going along edges; you may backtrack whenever you want, but usually can't go down the same path again. When you enter a node a second time (necessarily through a different path leading in), you switch over to a transparent overlay sheet as to keep things apart.</p> <p>Note that this is looking for a descriptive term for that sequence (like &quot;directed graph&quot; is a term for an unordered set of nodes and edges), not for an algorithm producing any of these. (There will be programs producing such <em>X</em>, but what I'm looking for is to properly describe these programs' outputs, not the programs themselves).</p> <p>It's tempting to say that <em>X</em> is a (somewhat ordered) tree, but for each time a node is entered, any or all of downstream &quot;tree&quot; can be traversed, possibly even in a different order. 
It's also not a walk (because we're jumping back up) or a traversal (because we access multiple times).</p> <p>For example, for this digraph (which happens to be acyclic, but that's not a requirement):</p> <p><a href="https://i.sstatic.net/4t7h9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4t7h9.png" alt="digraph { A -&gt; B -&gt; C -&gt; E2 -&gt; F; C -&gt; E1; A -&gt; Z -&gt; C; }" /></a></p> <p>legal <em>X</em> would be:</p> <ul> <li><code>A -&gt; B -&gt; C -&gt; E2 -&gt; F; C -&gt; E1; A -&gt; Z -&gt; C</code> (which is taking all only once from left to right)</li> <li><code>A -&gt; B -&gt; C -&gt; E2 -&gt; F; C -&gt; E1; A -&gt; Z -&gt; C -&gt; E2 -&gt; F; C -&gt; E1</code> (which is duplicating the whole walk down of C when C is reached again), or even</li> <li><code>A -&gt; B -&gt; C -&gt; E2 -&gt; F; A -&gt; Z -&gt; C -&gt; E1</code> (which expresses part of what's under C when coming from B, and another part when coming from Z)</li> </ul> <p>Examples of non-<em>X</em>s are (which would also happen to be incomplete in covering the graph, but non-X examples that do cover the graph would be very unwieldly):</p> <ul> <li><code>A -&gt; B -&gt; C -&gt; C2; Z -&gt; C</code> (because it jumps across without having been to Z before)</li> <li><code>A -&gt; B -&gt; C -&gt; E2; B -&gt; C</code> (because it does <code>B -&gt; C</code> again without having come into B for a second time)</li> <li><code>A -&gt; Z -&gt; C -&gt; E2; A -&gt; B -&gt; C -&gt; E1; E2 -&gt; F</code> (because while it has been at E2, it (so far, legally) backtracked to the branch through B (legal so far) but then jumped to E2 without having been there on the current leg)</li> </ul> <p>(These could all be visualized nicely if I was any better in graphviz...)</p> <p>Things <em>X</em> is not:</p> <ul> <li>A tree spanning the graph: Because it can come back on itself.</li> <li>An ordered list of edges (where only orderings are valid): Because if one has visited some node multiple 
times and not yet left it, an edge leaving it in the list does not indicate which visitation is being left.</li> <li>A traversal of the graph: Just provides a list (i.e. see above).</li> </ul> <p>Is there an established name for this structure <em>X</em>?</p> <hr /> <p>The actual graph we're considering is a multigraph with non-unique labels for edges, but that's likely immaterial to the question.</p>
597
sequence-to-sequence model
NP with a parallelism model?
https://cs.stackexchange.com/questions/64679/np-with-a-parallelism-model
<p>Can we think of NP using a parallelism model instead of using a "checking relation" without loss of generality?</p> <p>From what I understand from <a href="http://www.claymath.org/sites/default/files/pvsnp.pdf" rel="nofollow noreferrer">the problem statement given by Stephen Cook</a>, </p> <blockquote> <p>The notation NP stands for “nondeterministic polynomial time”, since originally NP was defined in terms of nondeterministic machines (that is, machines that have more than one possible move from a given configuration). However, now it is customary to give an equivalent definition using the notion of a checking relation, which is simply a binary relation R ⊆ Σ* × Σ₁* for some finite alphabets Σ and Σ₁. We associate with each such relation R a language L_R over Σ ∪ Σ₁ ∪ {#} defined by L_R = {w#y | R(w, y)} where the symbol # is not in Σ. We say that R is polynomial-time iff L_R ∈ P. Now we define the class NP of languages by the condition that a language L over Σ is in NP iff there is k ∈ N and a polynomial-time checking relation R such that for all w ∈ Σ*, w ∈ L ⇐⇒ ∃y(|y| ≤ |w|^k and R(w, y)), where |w| and |y| denote the lengths of w and y, respectively.</p> </blockquote> <p>it appears that the definition of NP given here can be derived from the definition of non-deterministic Turing machines. From <a href="https://courses.engr.illinois.edu/cs498374/fa2014/notes/38-nondet-tms.pdf" rel="nofollow noreferrer">this lecture document from the University of Illinois</a>:</p> <blockquote> <p>Formally, a nondeterministic Turing machine has all the components of a standard deterministic Turing machine—a finite tape alphabet Γ that contains the input alphabet Σ and a blank symbol; a finite set Q of internal states with special start, accept, and reject states; and a transition function δ. However, the transition function now has the signature δ: Q × Γ → 2^(Q×Γ×{−1,+1}).
That is, for each state p and tape symbol a, the output δ(p, a) of the transition function is a set of triples of the form (q, b,∆) ∈ Q × Γ × {−1,+1}. Whenever the machine finds itself in state p reading symbol a, the machine chooses an arbitrary triple (q, b,∆) ∈ δ(p, a), and then changes its state to q, writes b to the tape, and moves the head by ∆. If the set δ(p, a) is empty, the machine moves to the reject state and halts. The set of all possible transition sequences of a nondeterministic Turing machine N on a given input string w define a rooted tree, called a computation tree. The initial configuration (start, w, 0) is the root of the computation tree, and the children of any configuration (q, x, i) are the configurations that can be reached from (q, x, i) in one transition. In particular, any configuration whose state is accept or reject is a leaf. For deterministic Turing machines, this computation tree is just a single path, since there is at most one valid transition from every configuration.</p> </blockquote> <p>This mentions the notion of a computation tree:</p> <p><a href="https://i.sstatic.net/7rrRR.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7rrRR.jpg" alt="Computation tree example slide of P vs. NP"></a></p> <p>Thus, suppose I were to have a machine that was able to compute all of the non-deterministic branches in parallel such that the bounded running time is <span class="math-container">$O(poly(n))$</span>. Is this an equivalent of a Turing machine such that a language that is accepted by this machine is within <span class="math-container">$NP$</span>?</p> <p>In a sense, we can also see that if a deterministic machine were to choose the "right" branch by chance that it's somewhat equivalent to the notion of an oracle. This seems like it's similar to the "checking relation" in the sense that we need only compute one branch's result in order to determine acceptance of the answer or rejection. 
Is this intuition right?</p> <p>And, in addition, can the deterministic machine simulate the computation of the non-deterministic machine in <span class="math-container">$O(2^n)$</span> time?</p> <p><strong>Edit:</strong> If we make the addition that one of the paths is guaranteed to accept, then is the machine now equivalent to NP?</p>
<p>Your description of NP is still missing the accept criterion. For example you might decide to accept the input exactly in case all computation paths accept. This would give you the class coNP instead of NP. Or you might decide to accept the input exactly in case half of the computation paths accept. But for NP, the accept criterion is that there exists at least one computation path which accepts. Other than that, your parallelism model of NP is perfectly fine. And yes, the deterministic machine can simulate the computation of the non-deterministic machine in $O(2^n)$ time!</p>
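Both points—the "at least one accepting path" criterion and the $O(2^n)$ deterministic simulation—can be illustrated with a toy sketch that deterministically enumerates every computation path of a "guess one bit per item" machine for SUBSET-SUM (a standard NP problem; the function name and encoding are illustrative):

```python
from itertools import product

def exists_accepting_path(items, target):
    """Deterministic simulation of a nondeterministic machine that guesses
    one bit per item. Accept iff at least one of the 2^n computation paths
    accepts -- the NP accept criterion."""
    for choices in product((0, 1), repeat=len(items)):  # every computation path
        if sum(x for x, c in zip(items, choices) if c) == target:
            return True  # one accepting path suffices
    return False

print(exists_accepting_path([3, 5, 8, 13], 16))  # True: the path picking 3 and 13
print(exists_accepting_path([3, 5, 8, 13], 4))   # False: every path rejects
```

Changing the accept criterion to "all paths accept" in this loop would give the coNP-style acceptance mentioned above; the enumeration itself is the $O(2^n)$ deterministic simulation.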
598
sequence-to-sequence model
Is the runtime complexity of sorting $O(n\log n)$ or $O(n\log^2 n)$?
https://cs.stackexchange.com/questions/71291/is-the-runtime-complexity-of-sorting-on-log-n-or-on-log2-n
<p>Suppose we want to sort an array that contains $n$ different integers in the range $[1,2n]$. It is known that this requires $\Theta(n\log n)$ comparisons. But comparing integers which might be as large as $2n$ might require time $\Theta(\log n)$ since we have to compare them bitwise. So, apparently the runtime complexity of sorting is $\Theta(n\log^2 n)$. Is this correct? If so, why is it taught that the complexity of sorting is $\Theta(n\log n)$? Is there a better algorithm for sorting which runs in time $\Theta(n\log n)$?</p> <p>EDIT: I understand that the question depends on the computational model. In fact, I just found an interesting <a href="https://cs.stackexchange.com/q/28570/1342">unanswered question</a> that specifically asks about the runtime complexity of sorting on a Turing machine. Originally, I had in mind a realistic computer, only with unlimited memory. In such a computer, we can represent arbitrarily large integers (as sequences of bytes). We are given an array that contains $n$ different integers, each of which is between $1$ and $2n$. We measure the clock-time it takes to sort this array, as a function of $n$. What function will we see?</p>
<p>Algorithms are usually analyzed using the <a href="https://en.wikipedia.org/wiki/Random-access_machine" rel="noreferrer">random access machine</a> model. In this model, arithmetic operations on machine words take time $O(1)$. A machine word contains $O(\log n)$ bits, where $n$ is some natural parameter, say the length of the input (in machine words!).</p> <p>Given this definition, comparison-based sorting algorithms run in time $O(n\log n)$ as long as the arguments fit in a constant number of machine words.</p> <p>Sorting algorithms probably take more than $n\log^2 n$ time on a Turing machine, since Turing machines have no random access. Turing machines are not so interesting, however, for analyzing the running time of algorithms, except for special circumstances such as proving results on the Turing machine model itself.</p> <p>In your particular case, you can actually sort the array in $O(n)$ using <a href="https://en.wikipedia.org/wiki/Counting_sort" rel="noreferrer">counting sort</a>. The $\Omega(n\log n)$ lower bound only holds for some models such as the comparison model.</p>
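For the specific range $[1, 2n]$ from the question, counting sort achieves the claimed $O(n)$; a minimal sketch:

```python
def counting_sort(arr, max_value):
    """Sort integers drawn from [1, max_value] in O(n + max_value) time.
    With max_value = 2n, as in the question, this is O(n) overall --
    no comparisons between elements are performed at all."""
    count = [0] * (max_value + 1)
    for x in arr:          # tally each value
        count[x] += 1
    result = []
    for value, c in enumerate(count):  # emit values in increasing order
        result.extend([value] * c)
    return result

a = [7, 2, 9, 1, 4]
print(counting_sort(a, 2 * len(a)))  # [1, 2, 4, 7, 9]
```

Because counting sort never compares elements, the $\Omega(n\log n)$ lower bound for comparison-based sorting simply does not apply to it.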
599