Dataset schema:
- category: string (107 distinct values)
- title: string (length 15–179)
- question_link: string (length 59–147)
- question_body: string (length 53–33.8k)
- answer_html: string (length 0–28.8k)
- __index_level_0__: int64 (0–1.58k)
sequence-to-sequence model
Applying Machine learning in biological data
https://cs.stackexchange.com/questions/63118/applying-machine-learning-in-biological-data
<p>I am trying to solve the following question: Given a text file containing a bunch of biological information, find out the one gene which is {up/down}regulated. Now, for this I have many such (60K) files and have annotated some (1000) of them as to which gene is {up/down}regulated.</p> <h2>Conditions</h2> <ul> <li>Many sentences in the file have some gene name mention and some of them also have neighboring text that can help one decide if this is indeed the gene being modulated.</li> <li>Some files also have NO gene modulated. But these still have gene mentions.</li> </ul> <p>Given this, I wanted to ask (having absolutely no background in ML), what sequence learning algorithm/tool do I use that can take in my annotated (training) data (after probably converting the text to vectors somehow!) and can build a good model on which I can then test more files?</p> <h2>Example data</h2> <blockquote> <p>Title: Assessment of Thermotolerance in preshocked hsp70(-/-) and (+/+) cells</p> <p>Organism: Mus musculus</p> <p>Experiment type: Expression profiling by array</p> <p>Summary: From preliminary experiments, HSP70 deficient MEF cells display moderate thermotolerance to a severe heatshock of 45.5 degrees after a mild preshock at 43 degrees, even in the absence of hsp70 protein. We would like to determine which genes in these cells are being activated to account for this thermotolerance. AQP has also been reported to be important.</p> <p>Keywords: thermal stress, heat shock response, knockout, cell culture, hsp70</p> <p>Overall design: Two cell lines are analyzed - hsp70 knockout and hsp70 rescue cells. 6 microarrays from the (-/-)knockout cells are analyzed (3 Pretreated vs 3 unheated controls). For the (+/+) rescue cells, 4 microarrays are used (2 pretreated and 2 unheated controls). Cells were plated at 3k/well in a 96 well plate, covered with a gas permeable sealer and heat shocked at 43degrees for 30 minutes at the 20 hr time point. 
The RNA was harvested at 3hrs after heat treatment</p> </blockquote> <p>Here my <em>main</em> gene is <code>hsp70</code> and it is <code>down-regulated</code> (deducible from <code>hsp(-/-)</code> or <code>HSP70 deficient</code>). Many other gene names also appear, like <code>AQP</code>. There could be another file with no gene modified at all. In fact, more files have no actual gene modulation than those that do, and all contain gene name mentions.</p> <p>Any idea would be great!!</p>
<p>A simple starting point would be to build a bag-of-words feature vector and try a naive Bayes or logistic regression classifier. You'll probably want to apply stemming or lemmatization and remove stop words.</p> <p>Going by this one example, determining which gene is regulated looks plausibly feasible, but determining whether it is up- or down-regulated might be pretty tough, especially given the limited training set you have. Of course, it's hard to know without seeing more examples and without some domain knowledge in biology.</p> <p>There are lots of techniques in the NLP literature. Probably your best bet is to read up on standard NLP methods, try some of them, and see if that raises a more narrowly targeted question. Machine learning often requires significant experimentation (try a bunch of things). It's also helpful to know standard techniques so you can use your domain knowledge of these documents to help you brainstorm possible features.</p>
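As a concrete, deliberately minimal sketch of that starting point, assuming scikit-learn is available. The toy documents and labels below are invented for illustration, not taken from the actual annotated files:

```python
# Bag-of-words + logistic regression baseline, as suggested above.
# The documents and per-file labels are hypothetical stand-ins;
# CountVectorizer lowercases text and drops English stop words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "hsp70 deficient knockout cells display thermotolerance",
    "hsp70 rescue cells overexpress hsp70 protein",
    "no gene modulation observed in wild type cells",
]
labels = ["down", "up", "none"]  # per-file annotation of the main gene

clf = make_pipeline(
    CountVectorizer(stop_words="english"),
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, labels)
pred = clf.predict(["HSP70 deficient MEF cells"])
```

With the 1000 annotated files the same pipeline applies unchanged; cross-validate before trusting the accuracy numbers.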
600
sequence-to-sequence model
Is there a dual concept to &quot;Turing Complete&quot; in logic?
https://cs.stackexchange.com/questions/78117/is-there-a-dual-concept-to-turing-complete-in-logic
<p>Two computing models can be shown to be co-complete if each can encode a universal simulator for the other. Two logics can be shown to be co-complete if an encoding of the rules of inference (and axioms, if present) of each can be shown to be theorems of the other. In computability this has led to a natural idea of Turing completeness and the Church–Turing thesis. However, I have not seen where logical co-completeness has led to any naturally induced idea of total completeness of similar quality. </p> <p>Since provability and computability are so closely related, I think it isn't too much to consider that there could be a concept in logic that is a natural dual to Turing completeness. Speculatively, something like: there is a "true" theorem that isn't provable in a logic if and only if there is a computable function that isn't describable by a computing model. My question is, has anyone studied this? A reference or some keywords would be helpful.</p> <p>By "true" and "computable" in the previous paragraph I'm referring to the intuitive but ultimately undefinable ideas. For example, someone could show that the finiteness of Goodstein sequences is "true" but not provable in Peano arithmetic without fully defining the concept of "true". Similarly, by diagonalization it can be shown that there are computable functions that are not primitive recursive, without actually fully defining the concept of computable. I was wondering whether, even though they tend to ultimately be empirical concepts, they could be related to each other well enough to relate the two notions of completeness.</p>
<p>I'm not sure why you say "true" is ultimately undefinable, as there is a precise definition for what it means for a first order formula to be <a href="https://en.wikipedia.org/wiki/First-order_logic#Evaluation_of_truth_values" rel="nofollow noreferrer">true</a>.</p> <p>What's unique in the case of computability is that for any definition (as wild as your dreams) of a "computational model", you can ultimately associate it with a set of functions (the functions it can compute). Thus, you can naturally compare different models, and upon fixing one (based on some empirical justification such as "it is a good representation of computation in the real world") you can call any other model complete if it computes exactly the same set of functions.</p> <p>However, how do you compare different logics? It seems there is no natural property you can attach to an arbitrary logic and use to compare it to other systems. You can, perhaps, fix the logic, e.g. first-order predicate logic, and ask about completeness of an axiomatic system. Suppose you work in ZFC, and believe it consists of the natural axioms that represent the world. Now, when given a different axiomatic system, you can ask whether they have the same theory, and call this system complete if the answer is yes. I think the difference from the computability case is that for computability, there is a stronger consensus on what the "base model" should be. The reason for this consensus is that many independent models of computation were later shown to be equivalent, so this seems like very strong empirical evidence for what our "base model" should be.</p>
601
sequence-to-sequence model
How to compare the efficiency of two encoding schemes or hypothesis languages?
https://cs.stackexchange.com/questions/82080/how-to-compare-the-efficiency-of-two-encoding-schemes-or-hypothesis-languages
<p>My question is pretty basic, I'm looking for a named method if you know one, but also proper terminology, further reading, and anything this reminds you of if you don't. (I'm new to this, don't have the right terminology and just need a starting point so I can help myself.)</p> <p>I'm trying to interpret the vector inputs to a black-box controller (which I can model as finite-state machine). I can see them and they look like a series of symbols but it is too variable (stochastic) to easily define an "alphabet" based on repetition and it isn't clear how the symbols are grouped. In other words it isn't clear whether they use something like block coding where each symbol is the same number of vectors in a sequence, or convolutional coding where symbols can have different numbers of vectors. </p> <p>The controller operates a linear actuator (it just goes up and down) and the inputs are large vectors from a CNN. It's essentially a pong playing robot. I make predictions by modeling the controller as a binary decision tree that maps each putative symbol to an exact position of the actuator. This is very similar to a language induction problem for a finite-state machine. Recall that a finite automaton can be a representation of a regular language. Also recall that a finite automaton can be characterized in terms of its memory requirements and computational complexity, hence a regular language can be too.</p> <ol> <li><p>I know that the controller is optimal. There is no controller which has both less memory and less computational complexity. If I have two guesses at coding schemes and each are able to predict actuator position equally well then I want to pick the coding scheme which implies the least complexity and memory. 
So how does one go about evaluating the resource requirements of a coding scheme (raw inputs-> symbols) and grammar (rules about symbols) in combination?</p></li> <li><p>I need to keep in mind that I might be wrong that it digitizes its inputs into symbols at all. So what are the signs that it is not digital? (such as, if I divide the symbols into smaller symbols and it still works just as well, ad infinitum, that probably means the symbols are meaningless).</p></li> </ol> <p><a href="https://i.sstatic.net/UfSD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UfSD4.png" alt="Question explained pictorially"></a></p> <hr> <p>If you don't think I've given enough information keep in mind that I don't expect a detailed answer (but I'm happy to offer more). An acceptable answer is something of the form "I think you need to look into ____.", "This sounds like ____ in which case we often use ____.", "This sounds like a paper I read, see ____.", or "The ____ metric compares the level of the difficulty of a task to the number of symbols in the language required to perform that task and tells you how efficient a language is at that task."</p>
<p>The answer was just tree depth. What I needed to do was learn about evaluating the complexity of finite automata and how we can use a decision-tree model (<a href="https://en.wikipedia.org/wiki/Decision_tree_model" rel="nofollow noreferrer">decision-tree complexity</a>) to put bounds on the complexity of an automaton (<a href="https://en.wikipedia.org/wiki/Analysis_of_algorithms" rel="nofollow noreferrer">analysis of algorithms</a>). So I take my two coding schemes (call them A and B) and I make a binary decision tree to predict the output of my system based on the inputs provided by each coding scheme. If both coding schemes A and B are equally useful for predicting the outputs, but A requires a tree depth of 15 while B requires a tree depth of 54, then A is "better". This is because our decision-tree model of the system is simpler when using A than when using B, and, as stated before, we strongly believe that our system is as simple as possible. </p> <p>Of course I also need to include the complexity of the coding scheme itself, but that just goes with alphabet size since it is a nearest-neighbor search. </p> <p>So it was under my nose, I just needed some context.</p>
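The depth comparison can be illustrated with a toy sketch, assuming scikit-learn; the "parity controller" and both coding schemes are invented for the example. Scheme B presents the raw bits, scheme A presents a symbol that already isolates the signal; both predict perfectly, but A needs a far shallower tree:

```python
# Toy controller whose output is the parity of 4 input bits.
# Two hypothetical "coding schemes" feed the same decision-tree
# learner: equal predictive power, very different tree depth.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier

bits = np.array(list(itertools.product([0, 1], repeat=4)))
y = bits.sum(axis=1) % 2                 # actuator position (toy)

X_a = y.reshape(-1, 1)                   # scheme A: pre-digested symbol
X_b = bits                               # scheme B: raw bit vectors

tree_a = DecisionTreeClassifier().fit(X_a, y)
tree_b = DecisionTreeClassifier().fit(X_b, y)

assert tree_a.score(X_a, y) == tree_b.score(X_b, y) == 1.0
print(tree_a.get_depth(), tree_b.get_depth())   # A is much shallower
```

By the reasoning above, scheme A would be preferred: the simpler decision-tree model is more consistent with the assumption that the controller is optimal.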
602
sequence-to-sequence model
Alternatives to Sequential Computation
https://cs.stackexchange.com/questions/93714/alternatives-to-sequential-computation
<p>When software boils down to assembly, it is just a sequence of instructions like <a href="http://cs.lmu.edu/~ray/notes/x86assembly/" rel="nofollow noreferrer">this</a>:</p> <pre><code>mov rax, 1
mov rdi, 1
mov rsi, message
mov rdx, 13
syscall
mov rax, 60
xor rdi, rdi
syscall
</code></pre> <p>I am not sure exactly how the program <em>evaluator</em> works (the thing that navigates the assembly / machine code), but I think it just goes <em>sequentially</em> through the assembly code, jumping to different locations in the assembly / machine code when it encounters a jump/branch instruction.</p> <p>Essentially, it is <em>sequential</em> computation. The information on the ordering of instructions is inherent in the fact that they are written next to each other in the code (they have an adjacent location).</p> <p>I understand there could be parallel computation, but I am not much interested in that for this question.</p> <p>In cognitive architectures, instead of sequential computation, it is as if they make a bunch of decisions, and then perform the action. So instead of the next "instruction" being adjacent to the current instruction, the next instruction is <em>computed</em> dynamically. Not sure how to explain this any deeper.</p> <p>Roughly, instead of the next instruction being found by looking ahead in some location space, the instruction is found by analyzing some information and selecting it based on some decision. Rule-based systems seem to be somewhat similar, but I haven't seen them described as models of computation.</p> <p>Wondering if there is any research on this topic. Alternatives to the sequential computation method, such as this dynamically-computed-instruction method (like rule-based systems).</p>
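The contrast can be made concrete with two toy interpreters (entirely hypothetical, just to exhibit the two control disciplines): the first finds its next instruction at the adjacent location via a program counter; the second computes its next action by matching rules against the current state.

```python
# Sequential model: the next instruction is whatever sits at pc + 1.
program = [("add", 2), ("add", 3), ("halt", None)]
acc, pc = 0, 0
while program[pc][0] != "halt":
    _op, arg = program[pc]
    acc += arg
    pc += 1                  # adjacency in memory decides what runs next

# Rule-based model: the next action is *selected* from the state.
rules = [
    (lambda s: s < 5, lambda s: s + 2),  # fire while the state is small
    (lambda s: s >= 5, None),            # terminal rule: nothing to do
]
state = 0
while True:
    action = next(act for cond, act in rules if cond(state))
    if action is None:
        break
    state = action(state)    # no program counter anywhere

print(acc, state)
```

Both loops terminate, but only the first one has a notion of "the instruction after this one"; in the second, control flow is recomputed from the state at every step, which is roughly how rule-based (production) systems dispatch.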
603
recurrent neural networks
Difference between Elman, Hopfield &amp; Hemming Recurrent Neural Networks
https://cs.stackexchange.com/questions/53703/difference-between-elman-hopfield-hemming-recurrent-neural-networks
<p>What is the main difference between Elman, Hopfield &amp; Hemming Recurrent Neural Networks?</p> <p>Python Neurolab Library examples:</p> <ul> <li><a href="https://pythonhosted.org/neurolab/ex_newelm.html" rel="nofollow">Elman Recurrent Neural Network</a></li> <li><a href="https://pythonhosted.org/neurolab/ex_newhop.html" rel="nofollow">Hopfield Recurrent Neural Network</a></li> <li><a href="https://pythonhosted.org/neurolab/ex_newhem.html" rel="nofollow">Hemming Recurrent Neural Network</a></li> </ul> <p>I think I understand what sets the Hopfield network apart:</p> <blockquote> <p>The Hopfield network is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns. Instead it requires stationary inputs. It is a RNN in which all connections are symmetric. <em>Wikipedia</em></p> </blockquote> <p>But the other two are still unclear to me.</p>
604
recurrent neural networks
Matrix multiplication in recurrent neural networks
https://cs.stackexchange.com/questions/92944/matrix-multiplication-in-recurrent-neural-networks
<p>I was looking at a <a href="http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/" rel="nofollow noreferrer">tutorial</a> for recurrent neural networks in Python, and I have a question in regards to multiplying matrices of different sizes. Specifically, why does S[t] have 100 elements in it?</p> <blockquote> <p><code>s[t] = np.tanh(self.U[:,x[t]] + self.W.dot(s[t-1]))</code></p> </blockquote> <p>Earlier in the tutorial, the author lists the dimensions for each variable:</p> <blockquote> <p>\begin{aligned} x_t &amp; \in \mathbb{R}^{8000} \\ o_t &amp; \in \mathbb{R}^{8000} \\ s_t &amp; \in \mathbb{R}^{100} \\ U &amp; \in \mathbb{R}^{100 \times 8000} \\ V &amp; \in \mathbb{R}^{8000 \times 100} \\ W &amp; \in \mathbb{R}^{100 \times 100} \\ \end{aligned} </p> </blockquote> <p>From how I understand the above line of code, it multiplies U by x[t] and adds it to the product of W and s[t-1], then computes tanh for each element.</p> <p>Sources like <a href="http://stattrek.com/matrix-algebra/matrix-addition.aspx" rel="nofollow noreferrer">this</a> say that you cannot add matrices of different dimensions, however that seems to be what is happening here (because multiplication is just repeated addition). In fact, it seems that U is 2D and x[t] is 1D. How are these added? Also, how is the sum of the two products then 100 elements?</p>
<p>Matrix-by-matrix multiplication is very different from scalar-by-scalar. It has no connection to repeated addition, and in fact isn't even commutative: it's entirely possible that $AB \neq BA$. It's only called multiplication because it has some similar properties to repeated addition of scalars.</p> <p>Matrix multiplication requires that the <em>inner dimensions match</em>. Nothing more, nothing less. You can multiply an (a×b) matrix by a (b×c) matrix, for instance, and the result is an (a×c) matrix. For the full details, <a href="https://en.wikipedia.org/wiki/Matrix_multiplication" rel="nofollow noreferrer">Wikipedia has a good summary</a>.</p> <p>In addition, that code isn't adding or multiplying $U$ and $x[t]$. Rather, it's removing one dimension from $U$, taking only the columns (or rows depending on your definitions) which correspond to 1s in $x[t]$.</p>
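That column-selection reading can be checked numerically with the tutorial's dimensions (random placeholder values, since the real weights don't matter for the shapes). The key point is that `x[t]` stores the word's vocabulary index, not the full one-hot vector:

```python
# Shapes from the tutorial: U is (100, 8000), W is (100, 100),
# s is (100,). U[:, x_t] picks one length-100 column of U, so the
# addition inside tanh is between two ordinary length-100 vectors.
import numpy as np

U = np.random.rand(100, 8000)
W = np.random.rand(100, 100) * 0.01
s_prev = np.zeros(100)
x_t = 42                          # index of the current word

s_t = np.tanh(U[:, x_t] + W.dot(s_prev))
print(s_t.shape)                  # (100,)

# Column selection is exactly multiplication by the one-hot vector:
one_hot = np.zeros(8000)
one_hot[x_t] = 1.0
assert np.allclose(U[:, x_t], U @ one_hot)
```

So `s[t]` has 100 elements because both summands are length-100 vectors: one column of $U$ and the product $W s_{t-1}$.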
605
recurrent neural networks
Recurrent neural networks (Hopfield-like) with short limit cycles
https://cs.stackexchange.com/questions/87472/recurrent-neural-networks-hopfield-like-with-short-limit-cycles
<p>Standard <a href="http://www.scholarpedia.org/article/Hopfield_network" rel="nofollow noreferrer">Hopfield networks</a> exhibit stable patterns (states) which are attractors of a dynamic system. I wonder how to modify standard Hopfield networks such that they exhibit stable limit <em>cycles</em> as attractors. Since for <a href="http://www.scholarpedia.org/article/Hopfield_network#Binary_neurons" rel="nofollow noreferrer">binary neurons</a> (to which I'd like to restrict my question) each attractor is a limit cycle of some length (because state space is finite), the question asks for &quot;short&quot; limit cycles (significantly shorter than the size of the state space, but longer than <span class="math-container">$1$</span>).</p> <p>Is there a simple standard <a href="http://www.scholarpedia.org/article/Recurrent_neural_network" rel="nofollow noreferrer">recurrent neural network</a> (of which - optimally - Hopfield networks would be a special case, e.g. for some parameter <span class="math-container">$\lambda \rightarrow 0$</span>) that typically gives rise to short limit cycles?</p> <p>How many of these can there be (compared to the number of neurons), and how big can their added up <a href="http://www.scholarpedia.org/article/Basin_of_attraction" rel="nofollow noreferrer">basins of attraction</a> be (compared to the size of state space)?</p> <p><strong>Toy example</strong>:</p> <p><a href="https://i.sstatic.net/hzO1U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hzO1U.png" alt="enter image description here" /></a></p> <p>This mini network has one stable state <span class="math-container">$(00)$</span> with basin of attraction <span class="math-container">$\{00,11\}$</span> and one limit cycle <span class="math-container">$(10,01)$</span> with basin of attraction <span class="math-container">$\{10,01\}$</span></p>
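The toy example can be reproduced directly with synchronous binary updates. The weight matrix and bias below are one hypothetical choice (not taken from the figure) that yields exactly the stated fixed point and 2-cycle. Note the weights are symmetric: with *asynchronous* updates a symmetric binary network converges to fixed points, but under synchronous updates symmetric networks can also fall into cycles of length 2, which is what happens here:

```python
# Two binary neurons, synchronous update s' = step(W s + b).
# W and b are chosen (hypothetically) so that 00 is a fixed point,
# 11 falls into its basin, and 10 <-> 01 is a length-2 limit cycle.
import numpy as np

W = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
b = -0.5

def step(s):
    return (W @ s + b > 0).astype(int)

assert tuple(step(np.array([0, 0]))) == (0, 0)   # fixed point
assert tuple(step(np.array([1, 1]))) == (0, 0)   # basin of 00
assert tuple(step(np.array([1, 0]))) == (0, 1)   # limit cycle ...
assert tuple(step(np.array([0, 1]))) == (1, 0)   # ... of length 2
```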
606
recurrent neural networks
Can simple recurrent neural networks be remedied with higher precision floating point numbers?
https://cs.stackexchange.com/questions/83916/can-simple-recurrent-neural-networks-be-remedied-with-higher-precision-floating
<p>I am of the understanding that traditional "simple" recurrent neural networks have the problem of "losing error" after a lot of time steps. But is this some kind of deficiency of the math, or of the machines the RNNs are running on? If it's just the machines, couldn't you use something like an arbitrary-precision float and store all the information "exactly"? (Then, even if you go out some ludicrous number of time steps, you would presumably be able to calculate the gradient.)</p>
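It is worth noting that the decay is a property of the math, not of floating point: backpropagation through time multiplies one Jacobian factor per step, and if those factors have magnitude below 1 the product shrinks exponentially no matter how precisely it is stored. A sketch using exact rational arithmetic (the factor 9/10 is an arbitrary stand-in for |w · tanh′(a)| < 1):

```python
# Product of per-step gradient factors for a toy 1-unit RNN, computed
# with exact rationals: no rounding ever occurs, yet the gradient
# contribution from 200 steps back is still astronomically small.
from fractions import Fraction

factor = Fraction(9, 10)     # stand-in for w * tanh'(a), |.| < 1
grad = Fraction(1)
for _ in range(200):
    grad *= factor

print(float(grad))           # roughly 7e-10, with zero rounding error
```

Higher precision would only represent this vanishing number more exactly; the signal from distant time steps is still drowned out by recent ones, which is why architectures like LSTMs change the recurrence itself rather than the arithmetic.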
607
recurrent neural networks
Difference Between Residual Neural Net and Recurrent Neural Net?
https://cs.stackexchange.com/questions/63541/difference-between-residual-neural-net-and-recurrent-neural-net
<p>What is the difference between a <strong>Residual</strong> Neural Net and a <strong>Recurrent</strong> Neural Net?</p> <p>As I understand,</p> <p><a href="https://arxiv.org/pdf/1512.03385v1.pdf" rel="noreferrer">Residual Neural Networks</a> are very deep networks that implement 'shortcut' connections across multiple layers in order to preserve context as depth increases. Layers in a residual neural net have input from the layer before it and the optional, less processed data, from <em>X</em> layers higher. This prevents early or deep layers from 'dying' due to converging loss.</p> <p><a href="https://arxiv.org/pdf/1604.03640.pdf" rel="noreferrer">Recurrent Neural Networks</a> are networks that contain a directed cycle which when computed is 'unrolled' through time. Layers in a recurrent neural network have input from the layer before it and the optional, <em>time dependent</em> extra input. This provides situational context for things like natural language processing.</p> <p>Therefore, a recurrent neural network can be used to generate a basic residual network if the input remains the same with respect to time.</p> <p>Is this correct?</p>
<p>The answer is YES, they basically are the same according to this <a href="https://arxiv.org/abs/1604.03640v1" rel="nofollow noreferrer">paper</a>.</p> <p><img src="https://i.sstatic.net/q7JdV.png" alt="enter image description here"></p> <p>The figure above shows how the authors compared the two, and how a ResNet can be reformulated into a recurrent form that is almost identical to an RNN.<br> For more detail, read the paper.</p>
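The reformulation can be sketched numerically (random placeholder weights, not the paper's experiment): a residual network whose layers all share one weight matrix computes exactly the unrolled recurrence h ← h + tanh(Wh).

```python
# A weight-tied ResNet and an RNN with an identity recurrent skip
# (input injected only at t = 0) execute the same update rule:
# once as depth, once as time.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1    # single shared (tied) weight matrix
x = rng.standard_normal(4)

# 5-layer ResNet with weight sharing across layers:
h = x.copy()
for _layer in range(5):
    h = h + np.tanh(W @ h)               # identity shortcut + residual branch

# The same computation read as an RNN unrolled for 5 time steps:
s = x.copy()
for _t in range(5):
    s = s + np.tanh(W @ s)

assert np.allclose(h, s)                 # tied-weight ResNet == unrolled RNN
```

The two loops are deliberately identical, which is the point: the equivalence holds only under weight tying and constant (or absent) per-step input, matching the caveat in the question.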
608
recurrent neural networks
How often should I read out information from an echo state recurrent neural network?
https://cs.stackexchange.com/questions/85645/how-often-should-i-read-out-information-from-an-echo-state-recurrent-neural-netw
<p>Recurrent neural networks make it possible to implement some kind of memory, which can be very useful for a lot of tasks, incl. (but not limited to) robot control, which I am interested in. For example, echo state networks are known to display some kind of dynamical short-term memory, and display a very small search space wrt alternatives.</p> <p>Of course, recurrent neural networks are not so simple to use in practice: the search space can grow really fast (e.g. with fully connected networks), forgetting can be catastrophic, etc. </p> <p>One particular question is: how often should one "read out" the information from a recurrent neural network? </p> <p>On the one hand, it is very likely that any new input values' influence will be limited when reading out (ie. using) output values at every time step (ie. no time for operations requiring several iterations to deal with the new inputs).</p> <p>On the other hand, one can choose to read out output values once every N iterations of the neural network, but there is a risk of forgetting the relevant information if N is too big. Of course, there may exist better guesses for N. For example, setting N to be sure that any new input values can travel along the shortest path from input to output neurons, but then you may partly lose the benefit of recurrence for these particular input values (e.g. integration).</p> <p>In practice, and to the extent of my knowledge, setting N is mostly done by lucky guessing or empirically, through multiple trials. </p> <p><strong>So my question is</strong>: is there an automated way to choose N more wisely, whether a magical formula (I'm skeptical) or a methodology to estimate the value of N from data or from observing the reservoir? (e.g. computing some kind of derivative from the reservoir to guess if N is too large or too small).</p>
609
recurrent neural networks
Machine learning for recommendation systems (feed forward and recurrent neural networks)
https://cs.stackexchange.com/questions/88401/machine-learning-for-recommendation-systems-feed-forward-and-recurrent-neural-n
<p>I recently started to learn about machine learning. I have created a feed forward neural network (ffnn) and a recurrent neural network (rnn) to predict user ratings of movies. I am using a subset of 2000 users and their ratings of the "Netflix Prize" dataset.</p> <p>The ffnn as well as the rnn have an accuracy of ~40% - 45% on the test set evaluation. This seems to be very low and I expected to get at least somewhere near 60% - 70% accuracy. I tried different network configurations (dimensions, layers, optimizers, etc.) but nothing changed the accuracy significantly (only 1% - 3% max).</p> <p>Both models are constructed in a sense of supervised learning. The ffnn uses embeddings of users+movies for the input and ratings for the output. For the rnn I am using one hot encoded movie vectors as input and one hot encoded ratings vectors as the output.</p> <p>For the implementation I am using Keras in Python.</p> <p>The ffnn is constructed like this:</p> <pre><code>dimension = 120
model_users = Sequential()
model_users.add(Embedding(len(np.unique(users)), dimension))
model_users.add(Reshape((dimension,)))
model_movies = Sequential()
model_movies.add(Embedding(len(np.unique(movies)), dimension, input_length=1))
model_movies.add(Reshape((dimension,)))
model = Sequential()
model.add(Merge([model_users, model_movies], mode = 'concat'))
model.add(Dropout(0.1))
model.add(Dense(100, activation = 'relu'))
model.add(Dropout(0.1))
model.add(Dense(500, activation = 'sigmoid'))
model.add(Dropout(0.1))
model.add(Dense(dimension, activation = 'linear'))
model.add(Dropout(0.1))
model.add(Dense(5, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics=['accuracy'])
print(model.summary())
</code></pre> <p>The rnn model is constructed like this:</p> <pre><code>dimensions = len(movies_unique)
model = Sequential()
model.add(Masking(mask_value = 0, input_shape = (dimensions, dimensions)))
model.add(LSTM(32, return_sequences = True))
model.add(TimeDistributed(Dense(len(ratings_unique), activation = 'relu')))
model.add(Activation('softmax'))
model.compile(loss = 'mse', optimizer = 'adam', metrics = ['accuracy'])
print(model.summary())
</code></pre> <p>How can I further improve the accuracy to get beyond ~45%? I might be missing something fundamental here, so any help is appreciated! :)</p> <p>Best, Nico</p>
610
recurrent neural networks
How to calculate activation of hidden nodes in a recurrent neural network?
https://cs.stackexchange.com/questions/45574/how-to-calculate-activation-of-hidden-nodes-in-a-recurrent-neural-network
<p>Usually, when I program recurrent neural networks, I use a loop over the neurons to figure out each one's state. What I realized is that in this case, no neuron gets any feedback. They just pump their outputs to the next neuron in the cycle. My thought on how to counter this is to keep neurons in two different time states: the "second" before and the "second" now. Each hidden neuron takes as input the other neurons' activations from the "second" before and uses them to calculate its activation for the current "second"; once all neurons have been updated, those activations are stored as the "second" before for the next update cycle. Is this how other people go about it?</p>
611
recurrent neural networks
Reducibility and Artificial Neural Networks
https://cs.stackexchange.com/questions/83047/reducibility-and-artificial-neural-networks
<p>I have read (<a href="https://arxiv.org/pdf/1410.5401.pdf%20(http://Neural%20Turning%20Machines)" rel="nofollow noreferrer">here</a> and <a href="http://research.cs.queensu.ca/~akl/cisc879/papers/SELECTED_PAPERS_FROM_VARIOUS_SOURCES/05070215382317071.pdf" rel="nofollow noreferrer">here</a>) about the computational power of neural networks and a question came up.</p> <p>Is there a way to reduce an ANN to another ANN (not taking into account the training algorithm)? E.g. reduce a Recurrent Neural Network to a Multilayer Perceptron, meaning that if I have a trained RNN, I can get an MP that maps the same inputs given to the RNN to the same outputs produced by the RNN.</p> <p>And if the answer to the above question is yes, we can show the equivalence between neural networks, e.g., all problems solved by a Multilayer Perceptron can be solved by a Recurrent Neural Network but the opposite is not true, i.e., $MP \subset RNN$ (I do not know if this is true, it is just an example). So, if we obtain this relationship between all neural networks, we can get a neural network $X$ that is more powerful than the others, so we can throw away all other neural networks because $X$ can solve any problem that the other NNs can. Is this reasoning correct?</p> <p>Thanks.</p>
<p>Not really. I respect what you're trying to achieve, but I don't think it's possible to achieve what you want, given our current level of knowledge of neural networks.</p> <p>We already know that convolutional neural networks perform better for some problems, and fully-connected neural networks (what you call MP) work better for other problems. So you can't expect to find that one is always better than the other.</p> <p>It has been proven that any computable function can be approximated arbitrarily well by some fully-connected neural network. However, the catch in this theorem is that the theorem doesn't tell us how large the neural network needs to be. We already know that sometimes a larger network is more accurate, but is also slower to train, so that's a pretty big caveat in the theoretical result.</p> <p>And if you want to compare two different network architectures -- say, convolutional vs fully-connected -- then such a comparison won't be useful if it doesn't take into account the size of the network. If you need 1 billion parameters to make a fully-connected network work well, or 1 million parameters to make a convolutional network work well, are they equally good? No, you'll probably prefer the convolutional network. It's not at all clear how to get useful reductions between the two architectures that tells us anything about the <em>size</em> of the neural network.</p> <p>So, no, you're probably not going to get some useful theory this way that says "you can throw away all other architectures other than type X".</p>
612
recurrent neural networks
How does a recurrent connection in a neural network work?
https://cs.stackexchange.com/questions/56805/how-does-a-recurrent-connection-in-a-neural-network-work
<p>I am reading a very <a href="http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf" rel="nofollow noreferrer">interesting paper on genetic algorithms</a> which define neural networks. I am familiar with how a feedforward neural network operates, but then I came across this:</p> <p><a href="https://i.sstatic.net/85zQO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/85zQO.png" alt="Recurrent connection."></a></p> <p>Where node #4 goes back to connect to #5. I was wondering how this is handled? Does the state of node 4 get kept from the last timestep and applied to node 5 when it is time to calculate its activation?</p>
<p>When a recurrent network is calculated you can imagine the network is 'unrolled' out through time.</p> <p><a href="https://i.sstatic.net/8J9rZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8J9rZ.png" alt="Unrolling an RNN"></a></p> <p>When visualized this way, you can see that activation and loss can be calculated using the same method as in a typical network. </p> <p>So to clarify using the example you provided: yes, the input to node #5 is the output from node #1, node #2, and the previous time step's node #4.</p> <p>It is important to note: typically the input and output of RNNs are expected to be time dependent. That is, for each time step <em>t</em> there is input <em>t</em> and output <em>t</em>. This makes RNNs especially good at things like <a href="https://www.tensorflow.org/versions/r0.10/tutorials/recurrent/index.html" rel="nofollow noreferrer">evaluating sequences</a>. Although it is possible to keep the input and output equal throughout time.</p>
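A minimal sketch of that unrolling for the pictured nodes (the weights are invented placeholders, not from the paper): node #5 reads the external inputs plus node #4's activation from the *previous* time step, then node #4 is updated and stored for the next step.

```python
# Node 5's activation at time t uses node 4's activation from t-1.
# Weights are hypothetical stand-ins for the figure's connections.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w15, w25, w45 = 0.4, 0.3, 0.8   # inputs 1, 2 and recurrent link 4 -> 5
w54 = 1.2                        # feed-forward link 5 -> 4

a4_prev = 0.0                    # recurrent state, zero at t = 0
for x1, x2 in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    a5 = sigmoid(w15 * x1 + w25 * x2 + w45 * a4_prev)
    a4 = sigmoid(w54 * a5)
    a4_prev = a4                 # stored for the next time step
    print(round(a5, 3), round(a4, 3))
```

This is exactly the "two time states" idea from the neighbouring question: compute every activation from the previous step's stored values, then swap buffers.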
613
recurrent neural networks
DFAs can be encoded as input/output for a neural network?
https://cs.stackexchange.com/questions/60788/dfas-can-be-encoded-as-input-output-for-a-neural-network
<p>I would encode DFAs (Deterministic Finite State automata) as output (or input) of a neural network for a supervised learning; it is well-known [1] that efficacy of neural network training strongly depends on adopted encoding. </p> <p><em>How can I encode DFAs for a neural network? Are there any literature works?</em></p> <p>I've already found some algorithms being able to extract a DFA from a recurrent neural network, but nothing about DFAs either as input or output of ANN.</p> <p>[1]: <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=323087" rel="nofollow">Neural network encoding approach comparison: an empirical study</a>; <a href="http://www.icoci.cms.net.my/proceedings/2009/papers/PID257.pdf" rel="nofollow">Investigating the Effect of Data Representation On Neural Network and Regression</a>; <a href="http://s3.amazonaws.com/academia.edu.documents/30087318/intech-data_mining_and_neural_networks_the_impact_of_data_representation.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&amp;Expires=1469097204&amp;Signature=dlP1p1vVeKTkxavANr6kB%2B6xhLU%3D&amp;response-content-disposition=inline%3B%20filename%3DData_Mining_and_Neural_Networks_The_Impa.pdf" rel="nofollow">Data Mining and Neural Networks: The Impact of Data Representation</a>.</p>
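The cited papers aside, one natural encoding (a hypothetical sketch, not taken from the literature referenced above) represents DFA states as one-hot vectors and each input symbol as a 0/1 transition matrix. Running the DFA then becomes repeated matrix-vector multiplication, which is the kind of linear update a recurrent layer can express:

```python
import numpy as np

# Hypothetical sketch: encoding a DFA as linear algebra. States are
# one-hot vectors; each input symbol gets a 0/1 transition matrix
# T[sym] with T[sym][next, cur] = 1. Running the DFA is then repeated
# matrix-vector multiplication.

# Example DFA over {0,1} accepting strings with an even number of 1s.
# State 0 = "even so far" (accepting), state 1 = "odd so far".
T = {
    "0": np.array([[1, 0], [0, 1]]),   # '0' leaves the state unchanged
    "1": np.array([[0, 1], [1, 0]]),   # '1' swaps even <-> odd
}
accepting = np.array([1, 0])           # indicator vector of accepting states

def dfa_accepts(string):
    state = np.array([1, 0])           # one-hot start state
    for sym in string:
        state = T[sym] @ state
    return bool(accepting @ state)

print(dfa_accepts("1011"))  # three 1s -> odd  -> False
print(dfa_accepts("1001"))  # two 1s   -> even -> True
```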
614
recurrent neural networks
What is the difference of temporal dynamics in RNNs and the NEF
https://cs.stackexchange.com/questions/45700/what-is-the-difference-of-temporal-dynamics-in-rnns-and-the-nef
<p>Both spiking neural networks created with the Neural Engineering Framework (NEF) and Recurrent Neural Networks (RNNs) can be connected recurrently to exhibit neural dynamics. What is the difference between the set of dynamics that they can approximate and/or exhibit?</p>
<p>There can be a lot of different ways in which recurrent neural networks can be used. RNNs can have any activation function (the logistic sigmoid is most commonly used), and they can be multilayer. There are different algorithms which can be used to train them, e.g., backpropagation, and different optimization techniques, e.g., Hessian-free optimization, which can be used to optimize the networks for a particular problem.</p> <p>NEF has been used for simple recurrent connections (for implementing a memory circuit); however, we need to do some work and try out multi-layer networks in order to truly figure out what the differences in dynamics would be. </p> <p>We have also tried Hopfield networks with pre-calculated weight matrices, using sigmoid neurons, and were able to achieve similar dynamics forming attractors. However, implementing Hopfield networks with dynamic learning of weight matrices is something that still needs to be tried out.</p> <p>I suspect that using sigmoid activation functions in NEF and applying the same learning algorithms should give results very similar to RNNs (there might be some differences caused by synaptic filtering). However, if one uses an LIF curve as an activation function, then there is a possibility of greater differences.</p>
615
recurrent neural networks
How would a neural network deal with an arbitrary length output?
https://cs.stackexchange.com/questions/2722/how-would-a-neural-network-deal-with-an-arbitrary-length-output
<p>I've been looking into Recurrent Neural Networks, but I don't understand what the architecture of a neural network would look like when the output length is not necessarily fixed. </p> <p>It seems like most networks I've read descriptions of require the output length to be equal to the input length or at least a fixed size. But how would you do something like convert a word to the string of corresponding phonemes? </p> <p>The string of phonemes might be longer or shorter than the original word. I know you could sequence in the input characters using 8 input nodes (bitcode of the character) in a recurrent network, provided there's a loop in the network, but is this a common pattern for the output stream as well? Can you let the network provide something like a 'stop codon'?</p> <p>I suppose a lot of practical networks, like those that do speech synthesis, should have an output that is not fixed in length. How do people deal with that?</p>
<p>You just add more neurons. You're right that it's often not useful (if you only get 5 bits of information in, it's hard to put 10 bits of information out), but if you want to (e.g. because your output format is less dense), go ahead.</p> <p>As a trivial example, if you wanted to create an ANN to convert characters to graphemes (as represented on an 8x8 grid of on/off pixels) you could have 26 input neurons and 64 output neurons. Connect each input neuron to the output neurons which would be turned on if that character is being displayed; then the output neurons' function is just logical OR.</p> <p>The standard learning algorithms like gradient descent should work fine no matter the size of the output layer.</p> <hr> <p>EDIT: I guess your question is: "how do you handle <em>unknown</em> input and output lengths?" Any Turing machine can be simulated by a recurrent neural network, so you will never run into a "halting problem" (if I understand what you mean by that phrase). </p> <p>It's very rare that you can't bound the size of the output as a function of the input size. So you just have some metalearning procedure which generates the network for you on the fly.</p> <p>One common idea is that of "template" models. I'm not familiar with text-to-speech, but I guess you would make an assumption like "no phoneme spans more than 6 characters". So you build your network with 6 inputs, and just repeat it for as long as the word is. You need some rule like "each network gives the pronunciation of the middle two characters" to handle overlaps, as well as some special handling for the beginning and end of words.</p>
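The trivial character-to-grapheme example can be sketched as follows (a toy illustration: the glyph bitmap is made up, and only the letter 'I' is defined, since the point is just that 26 one-hot inputs can drive 64 binary outputs through a fixed weight matrix acting as a logical OR):

```python
import numpy as np

# Sketch of the character -> 8x8 glyph example. 26 one-hot input
# neurons, 64 output neurons; an output neuron fires iff any connected
# input neuron fires (logical OR).

n_chars, n_pixels = 26, 64
W = np.zeros((n_pixels, n_chars), dtype=int)

# A made-up 8x8 bitmap for the letter 'I' (a vertical bar in column 3).
glyph_I = np.zeros((8, 8), dtype=int)
glyph_I[:, 3] = 1
W[:, ord("I") - ord("A")] = glyph_I.ravel()

def render(char):
    x = np.zeros(n_chars, dtype=int)
    x[ord(char) - ord("A")] = 1        # one-hot input neuron
    return (W @ x > 0).astype(int)     # output fires iff any input does

out = render("I").reshape(8, 8)
print(out.sum())  # 8 pixels on, one per row
```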
616
recurrent neural networks
How can I compare two different neural networks, from a theorical point of view?
https://cs.stackexchange.com/questions/49815/how-can-i-compare-two-different-neural-networks-from-a-theorical-point-of-view
<p>Let's say I have a problem (i.e. given f(x), find x) and two neural networks (i.e. feedforward and recurrent). I would like to know if one works better than the other one. I could run the two on a computer, but other programs might interfere and I wouldn't know if the implementations I'm running are really the best ones humankind could create. Moreover, how could I be sure that the feedforward network worked better than the recurrent one, when it might have just been "lucky"?</p> <p>So, here is the question: can I compare the efficiency of two neural networks (with known sizes, structures and functions) from a theoretical point of view? And if the answer is yes, how?</p> <p>Thank you in advance.</p>
<p>Essentially, no. The only way to know which neural network is going to give you better accuracy is to try them on a realistic data set. The theory we have is not well-enough developed to allow us to reliably predict which will do better on a particular data set.</p> <hr> <p>A secondary remark. When you remark "other programs might interfere", that's not correct. Even if other programs are running (on a multi-tasking machine), they won't affect the accuracy. They might make running or training the neural network on your data set take longer, but they won't affect the results.</p>
617
recurrent neural networks
How do RNN&#39;s map variable-length sequences to variable-length sequences?
https://cs.stackexchange.com/questions/86510/how-do-rnns-map-variable-length-sequences-to-variable-length-sequences
<p>According to Karpathy's blog "The Unreasonable Effectiveness of Recurrent Neural Networks", recurrent neural networks can map variable-length sequences to variable-length sequences, as shown by the one-to-many, many-to-one, and many-to-many diagrams.</p> <p>How do RNNs do this mapping of possibly different-length sequences? It seems to me that there would need to be mechanisms that decide whether to accept input or to yield output on any given iteration. I don't see how RNNs have any such mechanisms.</p>
618
recurrent neural networks
What are the most desirable properties of a neural network?
https://cs.stackexchange.com/questions/51415/what-are-the-most-desirable-properties-of-a-neural-network
<p>I'm trying to compare a custom neural network architecture with other existing ones. I'm quite new to the CS field and I'm looking for desirable properties and/or applications of neural networks (especially recurrent architectures). Are there any?</p>
619
recurrent neural networks
What are the limitations of RNNs?
https://cs.stackexchange.com/questions/53552/what-are-the-limitations-of-rnns
<p>For a school project, I'm planning to compare Spiking Neural Networks (SNNs) and Deep Learning recurrent neural networks, such as Long Short-Term Memory (LSTM) networks, in learning a time series. I would like to show some case where SNNs surpass LSTMs. Consequently, what are the limitations of LSTMs? Are they robust to noise? Do they require a lot of training data?</p>
<p>I finally finished the project. Given really short signals and a really small training set, SNNs (I used <a href="http://minds.jacobs-university.de/sites/default/files/uploads/papers/EchoStatesTechRep.pdf" rel="nofollow noreferrer">Echo State Machines</a> and a <a href="http://ieeexplore.ieee.org/document/7378880/" rel="nofollow noreferrer">neural form of SVM</a>) vastly out-performed Deep Learning recurrent neural networks. However, this may be mostly because I'm really bad at training Deep Learning networks.</p> <p>Specifically, SNNs performed better at classification of various signals I created. Given the following signals:</p> <p><a href="https://i.sstatic.net/wpZWx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpZWx.png" alt="enter image description here"></a></p> <p>The various approaches had the following accuracy, where RC = Echo State Machine, FC-SVM = Frequency Component SVM and vRNN = Vanilla Deep Learning Recurrent Neural Network:</p> <p><a href="https://i.sstatic.net/2ZIPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZIPP.png" alt="enter image description here"></a></p> <p>SNNs were also more robust to noise:</p> <p><a href="https://i.sstatic.net/lvcmh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lvcmh.png" alt="enter image description here"></a></p> <p>For more information, including how I desperately tried to improve the Deep Learning classification approach performance, check out my <a href="https://github.com/Seanny123/rnn-comparison" rel="nofollow noreferrer">repository</a> and <a href="https://github.com/Seanny123/rnn-comparison/blob/master/comparison-recurrent-neural.pdf" rel="nofollow noreferrer">the report I wrote</a> which is where all the figures came from.</p> <p><strong>Update:</strong> After spending some time away from this project, I think one of the reasons that RNNs do horribly at this project is that they're bad at dealing with really long signals. 
Had I chunked the signals together with some sort of smoothing as preprocessing, they probably would have performed better.</p>
620
recurrent neural networks
Traveling Salesman Problem with Neural Network
https://cs.stackexchange.com/questions/54200/traveling-salesman-problem-with-neural-network
<p>I was curious if there were any new developments in solving the traveling salesman problem using something like a Hopfield recurrent neural network. I feel like I saw something about recent research getting a breakthrough in this, but I can't find the academic papers anywhere. Is anyone aware of any new, novel developments in this area?</p>
<p><a href="https://towardsdatascience.com/reinforcement-learning-for-combinatorial-optimization-d1402e396e91" rel="noreferrer">This Medium post</a> lists some of the latest (not a full list, of course) studies in the combinatorial optimization domain. All three papers use Deep Reinforcement Learning, which does not need any training set but learns completely from its own experience.</p> <p>I have been working with the <a href="https://arxiv.org/abs/1704.01665" rel="noreferrer">first paper</a> for some time, and inference time is at the millisecond level. According to their experiments, the approximation ratio (a metric they use to benchmark their own method) on 1000-1200 test cases reaches 1.11.</p>
621
recurrent neural networks
Do all the cells in a recurrent neural network share learned parameters?
https://cs.stackexchange.com/questions/88891/do-all-the-cells-in-a-recurrent-neural-network-share-learned-parameters
<p>Most descriptions of modern RNNs present a "folded" characterisation, that is to say, a single cell with a loop back to itself transmitting the hidden state from one step to the next. However, in implementations the RNN is computed "unfolded", so a new cell is created for every step of the sequence up to some maximum sequence length, and the state is passed from one cell to the next.</p> <p>My question is: are the learned parameters shared between all the cells in the unfolded sequence? E.g. in the case of a stack of LSTMs, does each LSTM have its own set of forget, input-gate, candidate and output parameters, or does the whole stack share and update a common set?</p>
<p>Indeed, the copies of a cell in an unfolded version share their learned parameters.</p> <p>Why is it done this way? If the sequence processed by the LSTM were always the same length, we could conceivably get a better result with different parameters per step, but there are two key caveats:</p> <ol> <li>Shared parameters are faster to learn</li> <li>We want to be able to process cases where the length of the sequence is not fixed!</li> </ol>
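The sharing can be illustrated with a short sketch (illustrative names and sizes, not from the answer above): the loop below creates a new cell *application* per step, yet every application reads the same two weight matrices, which is also what lets one parameter set handle sequences of different lengths:

```python
import numpy as np

# Parameter sharing across the unrolled cells: W_xh and W_hh are
# created once and reused at every time step, so the parameter count
# is independent of the sequence length.

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(3, 2)) * 0.5   # input -> hidden, shared
W_hh = rng.normal(size=(3, 3)) * 0.5   # hidden -> hidden, shared

def unrolled_forward(xs):
    h = np.zeros(3)
    for x in xs:                       # every "copy" of the cell uses
        h = np.tanh(W_xh @ x + W_hh @ h)  # the SAME W_xh and W_hh
    return h

# The same parameter set handles sequences of different lengths:
short_seq = [rng.normal(size=2) for _ in range(3)]
long_seq = [rng.normal(size=2) for _ in range(10)]
print(unrolled_forward(short_seq).shape, unrolled_forward(long_seq).shape)
```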
622
recurrent neural networks
Backpropagation Through Time Recursive Algorithm
https://cs.stackexchange.com/questions/24642/backpropagation-through-time-recursive-algorithm
<p>Would it be plausible to write a recursive version of backpropagation through time for recurrent neural network training? I've only found the iterative version:</p> <p><a href="http://en.wikipedia.org/wiki/Backpropagation_through_time" rel="nofollow">http://en.wikipedia.org/wiki/Backpropagation_through_time</a></p>
623
recurrent neural networks
How can neural networks learn to create new things (sentences for example)?
https://cs.stackexchange.com/questions/44393/how-can-neural-networks-learn-to-create-new-things-sentences-for-example
<p>I have already taken a college course at my uni on machine learning where we implemented all the basic ML programs: linear regression, logistic regression, a basic neural network with logistic regression (not the perceptron, but we learned the theory of the perceptron as a history lesson), k-means, and the naive Bayes classifier. The class also had a high focus on the theory behind these algorithms, so I know a lot of the related maths.</p> <p>But all of our projects were based on simple numbers. What I mean by that is all of the projects had features which were simple numbers such as miles per gallon, year, horsepower, weight, frequency, etc. We never made anything that could understand more abstract things like text, or color, etc. </p> <p>I recently stumbled upon <a href="http://www.escapistmagazine.com/articles/view/scienceandtech/14276-Magic-The-Gathering-Cards-Made-by-Artificial-Intelligence" rel="nofollow">this article</a> about a recurrent neural network that makes up its own Magic: The Gathering cards and my interest in ML was piqued again. I want to learn to implement something which can learn about things besides basic numbers; I want to make something that can learn to put sentences together like the one in this article. Hell, it even makes up its own words (fuseback) that don't exist in Magic and adds rules text to them (like for Tromple).</p> <p>What resources are there to learn how to make a system which can learn these more abstract ideas like words and colors? I don't understand how the neural network can come up with its own words. All the machine learning stuff I did only classified test data into existing sets (or predicted a number from features), but it never created a new feature.</p>
624
recurrent neural networks
Name of Generating One Value at a Time in Sequence Generation vs Encoder Decoder
https://cs.stackexchange.com/questions/88620/name-of-generating-one-value-at-a-time-in-sequence-generation-vs-encoder-decoder
<p>A question about machine learning, specifically recurrent models: for machine translation, recurrent neural networks show great promise; common here is an encoder-decoder architecture which takes a source sentence, reads it, and then, based on a compressed representation, outputs a target sentence. In contrast to that, for sequence generation you can also output one symbol at a time (let's stick with characters), e.g. like the char-rnn. You condition your model to learn the next character based on the ones it has read, so the model can e.g. create h-e-l-l-o, one character at a time. What would you call this second approach; does it have a name? Thanks</p>
625
recurrent neural networks
Choose the best classifier to predict the label of strings of a regular language
https://cs.stackexchange.com/questions/97145/choose-the-best-classifier-to-predict-the-label-of-strings-of-a-regular-language
<p>I have to tackle this problem: I have some strings that form my training set. These strings belong to a regular language corresponding to a deterministic finite automaton (hidden, namely I don't know it: neither the language nor the automaton). A string is labeled as positive if it belongs to the hidden language and negative otherwise. The strings of the training set are correctly labeled. I have to build a statistical classifier from the training set that predicts the label of unseen strings (generalization) in the best way (best accuracy with respect to the actual labeling by the hidden language/automaton). I have to choose between a Support Vector Machine (SVM), a Recurrent Neural Network and a Convolutional Neural Network. </p> <p>What could be the best choice and why?</p>
<p>My bet would go to a Recurrent Neural Network, as it closely models some (fuzzy, non-discrete) state machine, updating its state as each character is read. A decent starting point on RNNs for this purpose is the article <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">The Unreasonable Effectiveness of Recurrent Neural Networks</a>, which describes character-level RNNs used to predict text.</p>
626
recurrent neural networks
Confused between turing-completeness and universal approximation - are they related?
https://cs.stackexchange.com/questions/68820/confused-between-turing-completeness-and-universal-approximation-are-they-rela
<p>I am trying to de-knot a point of confusion in my mind regarding "turing-completeness" and the "universal approximation theorem". </p> <p>The context here is deep neural nets: So, consider two types of networks: a recurrent neural net, (RNN), and a feedforward net, say a pure convolutional neural net (CNN). </p> <p>I understand that the RNN is turing complete. I also know that a CNN is a universal approximator, but is NOT turing complete. </p> <p>What I am trying to understand is the link between universal approximation, and turing-completeness: Is there a link? Why is a CNN a universal approximator but NOT turing complete? Does a machine being turing-complete automatically make it a universal approximator? Trying to uncouple the two concepts. </p> <p>Thanks! </p>
<p>A CNN can approximate a function on a <em>fixed</em> number of input variables, say $n$ of them. The set of functions on $n$ input variables isn't "Turing-complete". For instance, a boolean function $f:\{0,1\}^n \to \{0,1\}$ is always computable, as it can be computed by a program that just hardcodes the truth-table of $f$; and the set of such functions is not "Turing-complete".</p> <p>Complication: CNNs <em>actually</em> approximate continuous functions $f:\mathbb{R}^n \to \mathbb{R}$... but a similar point remains (at least if the input is bounded).</p> <p>Turing-completeness isn't really connected to universal approximation. For one thing: Turing-completeness talks about languages, which are subsets of $\{0,1\}^*$ and thus refers to discrete entities. Universal approximation talks about functions $f:\mathbb{R}^n \to \mathbb{R}$, and thus refers to continuous entities.</p> <p>To qualify for "universal approximation", it's enough to be able to approximate all functions of $n$ variables (for each function, there exists a neural network that approximates it), so it talks about functions on inputs of <em>bounded</em> length. Turing-completeness requires the ability to compute all computable functions, which is a set of functions that has no fixed upper limit on the number of variables, i.e., it is a set of functions on inputs of <em>unbounded</em> length. Universal approximation could thus, in some sense, be considered "weaker" than Turing-completeness (though strictly speaking they are incomparable; neither implies the other).</p>
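The hardcoded-truth-table point can be made concrete with a toy sketch (parity stands in here for an arbitrary boolean function; nothing about this choice is from the answer above):

```python
from itertools import product

# Any f:{0,1}^n -> {0,1} is computable by a program that just stores
# its truth table -- a finite object with no computation left to do.
# So the class of fixed-arity boolean functions says nothing about
# Turing-completeness.

n = 3
def f(*bits):                       # some arbitrary target function;
    return int(sum(bits) % 2 == 1)  # here: parity of 3 bits

# "Compile" f into a lookup table.
table = {bits: f(*bits) for bits in product((0, 1), repeat=n)}

def f_hardcoded(*bits):
    return table[bits]

# The hardcoded version agrees with f on every input.
assert all(f(*b) == f_hardcoded(*b) for b in product((0, 1), repeat=n))
print(f_hardcoded(1, 0, 1))  # 0 (two ones -> even parity)
```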
627
recurrent neural networks
What factors must one consider choosing an NN structure?
https://cs.stackexchange.com/questions/7714/what-factors-must-one-consider-choosing-an-nn-structure
<p>Suppose we have a classification problem and we wish to solve it with a neural network. What factors must one consider when choosing an NN structure, e.g. feedforward, recurrent, or other available structures? </p>
<p>Here is a list of parameters you should take into consideration (to name some): </p> <ol> <li>The learning algorithm (gradient descent is the most explained of them)</li> <li><p>The number of layers (3 or 4 layers is usually enough - input, 1 or 2 hidden, output). The output layer depends on your output (e.g., if you want to classify yes/no then your output layer consists of two nodes). The same applies to the input layer. However, you may consider using only a subset of the input for the learning. For instance, you may think that your problem is affected by $k$ variables; however, if you use only $k' < k$ of them, you may get better results. </p></li> <li><p>Number of nodes in the hidden layers (you select that by trial and error)</p></li> <li>Number of training iterations (not too many, to avoid over-fitting)</li> <li>Size of training/testing data (there are some known rules of thumb, like the 80:20 split)</li> <li>The type of function used at the nodes (neurons) (e.g. $f(x) = 1/(1+e^{-x})$ or $f(x) = \tanh(x)$); usually the first is sufficient. </li> <li>An important issue is the pre-processing and post-processing of data (this is common to all pattern recognition techniques). You may, for instance, transform your data by applying a certain function $f$ and then run your experiments. </li> </ol> <p><strong>Note</strong>: given the many parameters you need to deal with, it is a good approach to use a search algorithm to select the best parameters for you. A heuristic search algorithm (e.g. a genetic algorithm) is preferable if you have a very large parameter set to deal with (which is usually the case). </p> <p><strong>Note</strong>: use the Matlab NN library or Weka (open source). They expose all these parameters for you. In fact, Weka has many other learning algorithms. </p> <p><strong>Note</strong>: perhaps you may want to try other algorithms too. If that is the case, try support vector machines. There was a historical battle between these two approaches (in the 1990s). SVMs won it! (I am not being very scientific here.) </p>
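For reference, the two node functions from point 6 can be written out as follows (note the logistic sigmoid is $1/(1+e^{-x})$, squashing to $(0,1)$, while tanh squashes to $(-1,1)$):

```python
import math

# The two activation functions from point 6 of the answer above.

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x)), range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent, range (-1, 1)."""
    return math.tanh(x)

print(round(sigmoid(0.0), 3))  # 0.5
print(round(tanh(0.0), 3))     # 0.0
print(round(sigmoid(2.0), 3))  # 0.881
```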
628
recurrent neural networks
Language grammar correction with supervised learning
https://cs.stackexchange.com/questions/48710/language-grammar-correction-with-supervised-learning
<p>I want to work on automatic grammar correction using machine learning (possibly using recurrent or deep neural networks). The algorithm will be supplied with both corrected and initial documents for supervised learning.</p> <p>I am now looking for some survey or research papers to start with. I have searched and downloaded tens of articles but none of them seems related.</p> <ul> <li><p>I would appreciate if someone could provide a few good starting points (papers, books).</p></li> <li><p>also I suspect that I am not using the correct keywords for my search. I'll appreciate if you could also suggest suitable search keywords.</p></li> </ul>
<p>The following paper from 2011 proposes a nice approach for using language modeling to do grammar correction, as well as an evaluation framework. </p> <ul> <li>Park and Levy, <em>Automated whole sentence grammar correction using a noisy channel model</em>. In <em>Proc. 49th Human Language Technologies</em>, volume 1, pp.&nbsp;934&ndash;944. Association for Computational Linguistics, 2011. (Available from the <a href="http://dl.acm.org/citation.cfm?id=2002590" rel="nofollow">ACM Digital Library</a>.)</li> </ul>
629
recurrent neural networks
How long can the short memory last in the RNN?
https://cs.stackexchange.com/questions/142325/how-long-can-the-short-memory-last-in-the-rnn
<p>For a recurrent neural network, the LSTM was a model of how the network worked. However, consider the case where the input is a long paragraph or even an article, <span class="math-container">$$c_1c_2...c_n$$</span> where the <span class="math-container">$c_i$</span> are characters. The LSTM works as expected given <span class="math-container">$n$</span> not too large. But what if <span class="math-container">$n$</span> is large, say <span class="math-container">$10^5$</span>? Clearly, the short-term memory would not work as expected in the LSTM model.</p> <p>Logically, with each input <span class="math-container">$c_{a+i}$</span>, where <span class="math-container">$a$</span> is some fixed integer and <span class="math-container">$i\geq 1$</span>, the &quot;information&quot; or &quot;probability&quot; of the outcome contributed at <span class="math-container">$c_a$</span> gets &quot;modified&quot; or even &quot;suppressed&quot;, which is the reason why the LSTM works. However, for sufficiently large <span class="math-container">$i$</span>, the information at <span class="math-container">$c_a$</span> might be completely suppressed.</p> <p>How long can the short-term memory last in the RNN? And how does this affect training?</p>
630
recurrent neural networks
Is there something as good as a GRU or LSTM but simpler?
https://cs.stackexchange.com/questions/83939/is-there-something-as-good-as-a-gru-or-lstm-but-simpler
<p>I was just reading this paper: <a href="https://arxiv.org/pdf/1701.05923.pdf" rel="nofollow noreferrer">Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks Rahul Dey and Fathi M. Salem</a></p> <p>It seems to me that perhaps the architectures of LSTMs and GRUs are overly complicated, and that the same problems could probably be solved with a simpler architecture. </p> <p>I get the theory behind LSTMs and GRUs, as they are in a sense trying to model short-term memory. But really, all they need to do is get rid of the exploding gradient problem of RNNs. </p> <p>What is the latest research? Is there something simpler than a GRU?</p> <p>Edit: Actually I found something called an MGU (minimal gated unit) which claims to be simpler. What's the latest?</p>
631
recurrent neural networks
What is the State of The Art of Writer AIs (Deep Learning)?
https://cs.stackexchange.com/questions/95679/what-is-the-state-of-the-art-of-writer-ais-deep-learning
<blockquote> <p>Does anyone know if Deep Learning Bots can already, for example, train on many books of an author and output a similar but new book?</p> </blockquote> <p>I've been wanting to get into ML for quite a while but was lacking a project to use as an incentive to keep learning, and this specific AI has caught my interest. It seems feasible from the little I know of ML so far (I'm a beginner who has taken the initial module of the <code>Deeplearning.ai</code> course from Coursera), but so far from what I can scoop out of the internet it doesn't seem like AIs are quite there yet. They seem to be somewhat convincing, but sometimes a weird outlier appears in the text; strangely, music composition seems more convincing. Does anyone disagree?</p> <p>Another relevant question: is this too difficult for a beginner? I graduated from engineering 1 year ago, so I have some ease with programming, but I don't know how difficult Recurrent Neural Networks (RNN) and Natural Language Processing (NLP) can be.</p> <blockquote> <p>Also, on a sidenote, does a more experienced programmer have a suggestion of a path I should take to learn the necessary skills to program such a bot, i.e., online courses and books?</p> </blockquote>
<p>Researchers are trying it out, but RNNs learn character-by-character (sequences of characters), so it is difficult to get something that resembles a story plot, as a whole. <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">This link</a> by a Stanford researcher explains the current (2015) state of the art. Here is <a href="https://github.com/karpathy/char-rnn" rel="nofollow noreferrer">the code</a> and here are <a href="https://cs.stanford.edu/people/karpathy/char-rnn/" rel="nofollow noreferrer">some data sets</a> for you to start experimenting with.</p> <p><a href="https://medium.com/intuitionmachine/writing-travel-blogs-with-deep-learning-9b4a6fbcc87" rel="nofollow noreferrer">Another experiment</a> involving the writing of a travel blog also concluded that you cannot really create a long passage that makes sense at the moment. The recommendation was to look at the word level, rather than the character level, and focus on something more manageable, such as sentence autocompletion.</p> <p>When it comes to words, the more unique ones there are in the source data set (for example using <a href="https://github.com/zackthoutt/got-book-6" rel="nofollow noreferrer">the Game of Thrones books</a>), the tougher it becomes to train a good model. <a href="https://motherboard.vice.com/en_us/article/evvq3n/game-of-thrones-winds-of-winter-neural-network" rel="nofollow noreferrer">Suggestions</a> are to limit input to more basic words (think children's vocabulary) and to have a total training sample at least 100 times larger than the desired output.</p> <p>A trained neural network could perhaps output smaller texts (&lt;10000 words) that make some sense, if they are of a rather structured nature. A whole book needs both a coherent plot from start to finish, as well as twists in-between. 
As a result, it is still way too difficult.</p> <p><a href="http://www.mastodonc.com/text%20mining/artificial%20intelligence/machine%20learning/2017/04/13/Can-Artificial-Intelligence-write-a-better-book-than-50-Shades-of-Grey.html" rel="nofollow noreferrer">This guy</a> tried it out and discovered that the AI would get sometimes stuck in loops. Furthermore, the draft produced did not always make sense in terms of the storyline and required heavy human editing.</p> <p>Here is <a href="https://medium.com/deep-writing/harry-potter-written-by-artificial-intelligence-8a9431803da6" rel="nofollow noreferrer">another example</a> of an LSTM RNN outputting a Harry Potter chapter, where the sentences are grammatically correct, but sometimes make no sense.</p> <p>If you want to learn more about RNNs, <a href="http://www.deeplearningbook.org/contents/rnn.html" rel="nofollow noreferrer">the Deep Learning book</a> by Goodfellow, Bengio, and Courville comes highly recommended and has a relevant chapter.</p> <p>For something more specific to writing, <a href="https://www.amazon.co.uk/Bestseller-Code-Anatomy-Blockbuster-Novel/dp/1250088275" rel="nofollow noreferrer">The Bestseller Code</a> book uses text mining techniques and should be an interesting read.</p>
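As a toy stand-in for the character-level generation loop discussed above (this is a bigram frequency table, far simpler than a char-rnn, and the training string is made up), the feed-back-the-sampled-character loop looks like this:

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for char-rnn generation: learn a bigram table from a
# training string and sample the next character from it. A real
# char-rnn learns a much richer conditional distribution, but the
# generation loop -- feed each sampled character back as the next
# input -- is the same.

def train_bigrams(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:                # dead end: no observed successor
            break
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

counts = train_bigrams("the theory of the thing that thrives ")
print(generate(counts, "t", 20))
```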
632
recurrent neural networks
What are the inputs to an LSTM for Slot Filling Task
https://cs.stackexchange.com/questions/71032/what-are-the-inputs-to-an-lstm-for-slot-filling-task
<p>I am confused about the inputs of a Long Short-Term Memory (LSTM) network for the slot filling task in Spoken Language Understanding. </p> <p>Before I worked on this, I implemented a language model with a Recurrent Neural Network (RNN) and then with an LSTM. The input to the RNN and LSTM language models was a one-hot vector representing each word. </p> <p>Now, moving on to the slot filling task for an LSTM, I am having trouble figuring out what the input would be. I know that a one-hot vector representation is not enough for this task because the outputs along each time step are slot labels. I have a dictionary (in Python) that maps words to indices (which I can turn into a one-hot vector), and I also have a dictionary with labels (used for slot filling), which I got from the ATIS data. Here is an example:</p> <p><a href="https://i.sstatic.net/Fo5hD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fo5hD.png" alt="enter image description here"></a></p> <p>I know I need the above two dictionaries to accomplish the slot filling task, but I cannot figure out how to use them as inputs for the LSTM. Furthermore, I have been using the basic LSTM structure, and for the language model LSTM I built, the output at each time step went through a Softmax function. Is this what will be required for slot filling too?</p> <p>I am in high school and do not have anyone to contact, so any help is really appreciated. Thank you so much.</p>
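For what it's worth, a common setup (sketched below with tiny made-up vocabularies, not the actual ATIS dictionaries) keeps the one-hot word vectors as the inputs, exactly as in a language model, and uses the label dictionary only for the per-timestep targets:

```python
import numpy as np

# Hypothetical tiny vocabularies in the spirit of the question's two dictionaries.
word_to_ix = {"flights": 0, "from": 1, "boston": 2}
label_to_ix = {"O": 0, "B-fromloc.city_name": 1}

sentence = ["flights", "from", "boston"]
labels = ["O", "O", "B-fromloc.city_name"]

V, L = len(word_to_ix), len(label_to_ix)

def one_hot(ix, size):
    v = np.zeros(size)
    v[ix] = 1.0
    return v

# Inputs: one one-hot word vector per timestep (shape T x V).
X = np.stack([one_hot(word_to_ix[w], V) for w in sentence])
# Targets: one label index per timestep -- same length as the input sequence.
y = np.array([label_to_ix[l] for l in labels])

print(X.shape, y.shape)  # (3, 3) (3,)
```

The LSTM itself is unchanged; the difference from the language model is only that the softmax at each time step ranges over the (much smaller) label vocabulary and the loss compares it against `y[t]`, so yes, a per-timestep softmax is the usual choice here too.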
633
recurrent neural networks
RNN learning an iterative algorithm
https://cs.stackexchange.com/questions/83875/rnn-learning-an-iterative-algorithm
<p>How do I solve the following problem with a recurrent neural network (RNN)? What architecture should I use for the (conv)-RNN? </p> <blockquote> <p>Let $s \in \mathbb{R}^N$ be a musical signal. We corrupt it with some white/pink noise $\omega$ to obtain $x= s+\omega$. We then create a conv-RNN with $N$ input and $N$ output neurons, we feed it with input $x^{(t)}=x$, and we train it to output the sequence $y^{(t)} = x -\frac{t^2}{100^2} \omega$ for $t = 0\ldots 100$, i.e. better and better approximations to $s$.</p> <p>We repeat the same process with many different $s,\omega$ and we hope the resulting RNN will serve as a musical de-noiser.</p> <p>De-noising is supposed to have a chance to work because it is quite suitable for convolution-RNNs, so even if we need a lot of neurons, the number of weights to learn shouldn't be too high. </p> </blockquote> <p>Note this isn't <strong>only</strong> about this particular problem. I chose it because it is easy to generate training data the way I said; I will be interested in anything different but related (in particular anything about a conv-RNN trained to output better and better approximations to the solution). For example we could replace noisy musical signals by low-resolution pictures, and train the RNN to output higher and higher resolution versions of the picture.</p> <hr> <p>Edit - For now, I think the architecture of the RNN should be the following:</p> <p><a href="https://i.sstatic.net/vMTXo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vMTXo.png" alt="enter image description here"></a></p> <p>Let $X = \text{spectrogram}(x)$ be the input, where each $X_{i,j}$ is a $16\times 16$ piece of $X$ and neighboring pieces overlap. Then the output of the $i,j$th piece of neurons at time $t+1$ is $$z_{i,j}^{(t+1)} = F(W,X_{i,j}, z_{i,j}^{(t)},z_{i\pm 1,j\pm 1}^{(t)})$$</p> <p>where $W$ are the parameters of $F$ to be optimized. 
The output of the RNN at time $t$ is $Y^{(t)} = X-\sum_{i,j} z_{i,j}^{(t)} \delta_{i,j}$ (assembling all the pieces of spectrogram together; $z_{i,j}$ is supposed to contain some local estimation of the noise) and the error is $$E^{(t)} =\| Y^{(t)} -(X-\frac{t^2}{100^2} \Omega)\|^2, \qquad E = \sum_{t=1}^{100} E^{(t)}$$ $F$ is itself a 3 layer perceptron, so the parameters $W$ are the weights of all those $3$ layers, and the input weights tell how the neighbor pieces $z_{i\pm 1,j\pm 1}^{(t)}$ affect $z_{i,j}^{(t+1)}$, which we hope will let information propagate from one piece of neurons to its neighbors.</p> <p>We update the parameters with something like $$W \leftarrow W - \eta \sum_{t=1}^{100} \frac{\partial E^{(t)}}{\partial W}$$</p>
634
long short-term memory
Why we still need Short Term Memory if Long Term Memory can save temporary data?
https://cs.stackexchange.com/questions/135237/why-we-still-need-short-term-memory-if-long-term-memory-can-save-temporary-data
<p>If RAM is short-term memory and an SSD is long-term memory, why doesn't the microarchitecture of today's computers use an SSD (or another long-term memory) to save temporary data, such as the hidden variables of a program?</p> <p>If it's about speed, SSDs can improve their speed; is it possible that an SSD will become faster than RAM at some point?</p> <p>If an SSD has an <em>address</em> for a <em>memory location</em> and <em>data</em> for an <em>opcode/instruction/operand</em> like RAM, then could it possibly act like RAM?</p>
<p>There are two simple reasons, one fundamental and one related to our current technology. First the technological one: volatile storage is (generally) faster than non-volatile storage. It has fewer requirements - it only needs to hold the data for a short while until it gets refreshed - so it's not a surprise that it is often faster.</p> <p>But the fundamental reason is that memory gets slower to access the bigger it is. This is why modern architectures don't just have 'RAM' and 'disk'; there are layers upon layers of memory of increasing size, with only the last layer being non-volatile:</p> <ol> <li>CPU registers</li> <li>L1 cache</li> <li>L2 cache</li> <li>L3 cache</li> <li>RAM itself</li> <li>Cache on the disk micro-controller</li> <li>The disk itself</li> </ol>
635
long short-term memory
How long can the short memory last in the RNN?
https://cs.stackexchange.com/questions/142325/how-long-can-the-short-memory-last-in-the-rnn
<p>For a recurrent neural network, the LSTM is a model of how the network works. However, consider the case where the input is a long paragraph or even an article, <span class="math-container">$$c_1c_2...c_n$$</span> where the <span class="math-container">$c_i$</span> are characters. The LSTM works as expected when <span class="math-container">$n$</span> is not a large number. But what if <span class="math-container">$n$</span> is large, say <span class="math-container">$10^5$</span>? Clearly, the short-term memory would not work as expected in the LSTM model.</p> <p>Logically, with each input <span class="math-container">$c_{a+i}$</span>, where <span class="math-container">$a$</span> is some fixed integer and <span class="math-container">$i\geq 1$</span>, the &quot;information&quot; or &quot;probability&quot; of the outcome contributed at <span class="math-container">$c_a$</span> gets &quot;modified&quot; or even &quot;suppressed&quot;, which is the reason the LSTM works. However, for a sufficiently large <span class="math-container">$i$</span>, the information at <span class="math-container">$c_a$</span> might be completely suppressed.</p> <p>How long can the short memory last in the RNN? And how would this affect the training?</p>
636
long short-term memory
What are the limitations of RNNs?
https://cs.stackexchange.com/questions/53552/what-are-the-limitations-of-rnns
<p>For a school project, I'm planning to compare Spiking Neural Networks (SNNs) and Deep Learning recurrent neural networks, such as Long Short-Term Memory (LSTM) networks, in learning a time series. I would like to show some case where SNNs surpass LSTMs. Consequently, what are the limitations of LSTMs? Are they robust to noise? Do they require a lot of training data?</p>
<p>I finally finished the project. Given really short signals and a really small training set, SNNs (I used <a href="http://minds.jacobs-university.de/sites/default/files/uploads/papers/EchoStatesTechRep.pdf" rel="nofollow noreferrer">Echo State Machines</a> and a <a href="http://ieeexplore.ieee.org/document/7378880/" rel="nofollow noreferrer">neural form of SVM</a>) vastly out-performed Deep Learning recurrent neural networks. However, this may be mostly because I'm really bad at training Deep Learning networks.</p> <p>Specifically, SNNs performed better at classification of various signals I created. Given the following signals:</p> <p><a href="https://i.sstatic.net/wpZWx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpZWx.png" alt="enter image description here"></a></p> <p>The various approaches had the following accuracy, where RC = Echo State Machine, FC-SVM = Frequency Component SVM and vRNN = Vanilla Deep Learning Recurrent Neural Network:</p> <p><a href="https://i.sstatic.net/2ZIPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZIPP.png" alt="enter image description here"></a></p> <p>SNNs were also more robust to noise:</p> <p><a href="https://i.sstatic.net/lvcmh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lvcmh.png" alt="enter image description here"></a></p> <p>For more information, including how I desperately tried to improve the Deep Learning classification approach performance, check out my <a href="https://github.com/Seanny123/rnn-comparison" rel="nofollow noreferrer">repository</a> and <a href="https://github.com/Seanny123/rnn-comparison/blob/master/comparison-recurrent-neural.pdf" rel="nofollow noreferrer">the report I wrote</a> which is where all the figures came from.</p> <p><strong>Update:</strong> After spending some time away from this project, I think one of the reasons that RNNs do horribly at this project is that they're bad at dealing with really long signals. 
Had I chunked the signals together with some sort of smoothing as preprocessing, they probably would have performed better.</p>
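The chunk-and-smooth preprocessing suggested above might look like this (the chunk size and plain averaging are my own guesses, not what the report used):

```python
import numpy as np

def chunk_signal(signal, chunk_size):
    """Downsample a long 1-D signal by averaging non-overlapping chunks,
    so the RNN sees a much shorter, smoothed sequence."""
    n = len(signal) // chunk_size * chunk_size  # drop any ragged tail
    return signal[:n].reshape(-1, chunk_size).mean(axis=1)

long_signal = np.sin(np.linspace(0, 20 * np.pi, 10_000))
short_signal = chunk_signal(long_signal, 100)
print(short_signal.shape)  # (100,)
```

A 100x shorter sequence means 100x fewer recurrent steps for gradients to survive, which is exactly the difficulty with long signals mentioned above.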
637
long short-term memory
Which queue does the long-term scheduler maintain?
https://cs.stackexchange.com/questions/1106/which-queue-does-the-long-term-scheduler-maintain
<p>There are different queues of processes (in an operating system):</p> <p><em>Job Queue:</em> Each new process goes into the job queue. Processes in the job queue reside on mass storage and await the allocation of main memory.</p> <p><em>Ready Queue:</em> The set of all processes that are in main memory and are waiting for CPU time is kept in the ready queue.</p> <p><em>Waiting (Device) Queues:</em> The set of processes waiting for allocation of certain I/O devices is kept in the waiting (device) queue.</p> <p>The short-term scheduler (also known as CPU scheduling) selects a process from the ready queue and yields control of the CPU to the process.</p> <p>In my lecture notes the long-term scheduler is partly described as maintaining a queue of new processes waiting to be admitted into the system. </p> <p>What is the name of the queue the long-term scheduler maintains? When it admits a process to the system is the process placed in the ready queue? </p>
<p>I found an appropriate answer. I was asking about the job queue (which I already described). The diagram included in this answer comes from a <a href="https://docs.google.com/viewer?a=v&amp;q=cache:7Rv2SA2-LfgJ:www.cs.gsu.edu/~cscbecx/csc4320%2520Chapter%25203.ppt%20&amp;hl=en&amp;gl=uk&amp;pid=bl&amp;srcid=ADGEESiZJTggY1K8nmxpr-S0w-qqVWQptkIXuW5_USNQOE_CPzP2VbFz8Wv45FkxA-wpwR0gV4njLlRoyqyKkzdWHHdtGoIHAg4YcQ88MwioIZmr2lhIqn_rP7X1-ehN4eJkz9xk4FGZ&amp;sig=AHIEtbRMVTeMcEEUD_BV4TcvkOb5SWiFyg" rel="nofollow noreferrer">PowerPoint</a> that uses concise language to explain processes and schedulers and relates the topic to the diagram. </p> <p>It may be of interest to other users also learning this topic that time-sharing systems (such as UNIX) sometimes have no long-term scheduler, or only a minimal implementation of one. </p> <p>Check these sources for more information: </p> <p>1. <a href="http://en.wikipedia.org/wiki/Scheduling_%28computing%29#Long-term_scheduling" rel="nofollow noreferrer">Wikipedia article</a> </p> <p>2. <a href="http://www.amazon.co.uk/Operating-System-Concepts-Abraham-Silberschatz/dp/0471694665/ref=sr_1_2?s=books&amp;ie=UTF8&amp;qid=1333818413&amp;sr=1-2" rel="nofollow noreferrer">Operating System Concepts</a> (pages 88-89)</p> <p><img src="https://i.sstatic.net/JwMK9.png" alt="State Diagram"><a href="https://docs.google.com/viewer?a=v&amp;q=cache:7Rv2SA2-LfgJ:www.cs.gsu.edu/~cscbecx/csc4320%2520Chapter%25203.ppt%20&amp;hl=en&amp;gl=uk&amp;pid=bl&amp;srcid=ADGEESiZJTggY1K8nmxpr-S0w-qqVWQptkIXuW5_USNQOE_CPzP2VbFz8Wv45FkxA-wpwR0gV4njLlRoyqyKkzdWHHdtGoIHAg4YcQ88MwioIZmr2lhIqn_rP7X1-ehN4eJkz9xk4FGZ&amp;sig=AHIEtbRMVTeMcEEUD_BV4TcvkOb5SWiFyg" rel="nofollow noreferrer"> &copy;Bernard Chen 2007</a></p>
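To make the relationship between the two queues concrete, here is a toy sketch of a long-term scheduler admitting processes from the job queue into the ready queue (the names and the admission policy are illustrative only):

```python
from collections import deque

# Toy model of the queues in the question: the long-term scheduler admits
# processes from the job queue (mass storage) into the ready queue (main memory).
job_queue = deque(["P1", "P2", "P3"])   # new processes awaiting admission
ready_queue = deque()                    # processes waiting for CPU time

MAX_DEGREE_OF_MULTIPROGRAMMING = 2       # illustrative admission limit

def long_term_schedule():
    # Admit jobs until the degree of multiprogramming is reached.
    while job_queue and len(ready_queue) < MAX_DEGREE_OF_MULTIPROGRAMMING:
        ready_queue.append(job_queue.popleft())

long_term_schedule()
print(list(ready_queue))  # ['P1', 'P2']
```

The short-term scheduler would then repeatedly pop from `ready_queue` to dispatch a process to the CPU, which is what makes the two schedulers operate on different queues at very different frequencies.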
638
long short-term memory
What are the inputs to an LSTM for Slot Filling Task
https://cs.stackexchange.com/questions/71032/what-are-the-inputs-to-an-lstm-for-slot-filling-task
<p>I am confused about the inputs of a Long Short-Term Memory (LSTM) network for the slot filling task in Spoken Language Understanding. </p> <p>Before I worked on this, I implemented a language model with a Recurrent Neural Network (RNN) and then with an LSTM. The input to the RNN and LSTM language models was a one-hot vector representing each word. </p> <p>Now, moving on to the slot filling task for an LSTM, I am having trouble figuring out what the input would be. I know that a one-hot vector representation is not enough for this task because the outputs along each time step are slot labels. I have a dictionary (in Python) that maps words to indices (which I can turn into a one-hot vector), and I also have a dictionary with labels (used for slot filling), which I got from the ATIS data. Here is an example:</p> <p><a href="https://i.sstatic.net/Fo5hD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fo5hD.png" alt="enter image description here"></a></p> <p>I know I need the above two dictionaries to accomplish the slot filling task, but I cannot figure out how to use them as inputs for the LSTM. Furthermore, I have been using the basic LSTM structure, and for the language model LSTM I built, the output at each time step went through a Softmax function. Is this what will be required for slot filling too?</p> <p>I am in high school and do not have anyone to contact, so any help is really appreciated. Thank you so much.</p>
639
long short-term memory
How does the forget layer of an LSTM work?
https://cs.stackexchange.com/questions/118865/how-does-the-forget-layer-of-an-lstm-work
<p>Can someone explain the mathematical intuition behind the forget layer of an LSTM?</p> <p>So as far as I understand it, the cell state is essentially long term memory embedding (correct me if I'm wrong), but I'm also assuming it's a matrix. Then the forget vector is calculated by concatenating the previous hidden state and the current input and adding the bias to it, then putting that through a sigmoid function that outputs a vector then that gets multiplied by the cell state matrix.</p> <p>How does a concatenation of the hidden state of the previous input and the current input with the bias help with what to forget?</p> <p>Why is the previous hidden state, current input and the bias put into a sigmoid function? Is there some special characteristic of a sigmoid that creates a vector of important embeddings?</p> <p>I'd really like to understand the theory behind calculating the cell states and hidden states. Most people just tell me to treat it like a black box, but I think that, in order to have a successful application of LSTMs to a problem, I need to know what's going on under the hood. If anyone has any resources that are good for learning the theory behind why cell state and hidden state calculation extract key features in short and long term memory I'd love to read it.</p>
<p>Think of it like this: The cell state <span class="math-container">$c_t$</span> is a vector. The forget vector <span class="math-container">$f_t$</span> is used to choose which parts of the cell state to "forget". We update the cell state with something like <span class="math-container">$c_t = f_t \circ c_{t-1}$</span> (it's actually more complicated, but let's start with that, to gain intuition). Suppose <span class="math-container">$f_t$</span> were a vector of 0's and 1's. In the coordinates where <span class="math-container">$f_t$</span> is 1, the value of <span class="math-container">$c_{t-1}$</span> would be copied over to <span class="math-container">$c_t$</span> (it's not forgotten). In the coordinates where <span class="math-container">$f_t$</span> is 0, <span class="math-container">$c_t$</span> is reset to zero and the value of <span class="math-container">$c_{t-1}$</span> is ignored (it's forgotten). So, the forget vector can be used to control in which positions we forget values from the previous cell state vector.</p> <p>Now what remains is to figure out a way to choose a forget vector <span class="math-container">$f_t$</span>. In general we might want to choose which positions to forget based on both the current input <span class="math-container">$x_t$</span> and the previous hidden state <span class="math-container">$h_{t-1}$</span>. So, we should compute <span class="math-container">$f_t$</span> as some function of <span class="math-container">$x_t$</span> and <span class="math-container">$h_{t-1}$</span>. Many choices of how to represent that function might be possible, but an LSTM chooses a specific function for this. In an LSTM, this is done by a single-layer fully-connected neural network. A single-layer fully-connected neural network concatenates all of the inputs, then multiplies them by a matrix, adds a bias, and feeds the result to an activation layer (in this case, a sigmoid activation, so every coordinate of <span class="math-container">$f_t$</span> lies between 0 and 1). 
So that's why the formula for <span class="math-container">$f_t$</span> looks the way it does: that formula is capturing what a single-layer fully-connected neural network does.</p>
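A few lines of numpy make the description above concrete (the sizes and random weights are illustrative; the input and output gates of a full LSTM are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_h, d_x = 4, 3                     # cell/hidden size and input size

W_f = rng.standard_normal((d_h, d_h + d_x))  # single fully-connected layer
b_f = np.zeros(d_h)

h_prev = rng.standard_normal(d_h)   # previous hidden state h_{t-1}
x_t = rng.standard_normal(d_x)      # current input x_t
c_prev = rng.standard_normal(d_h)   # previous cell state c_{t-1}

# f_t = sigmoid(W_f . [h_{t-1}; x_t] + b_f): every entry lies in (0, 1).
f_t = sigmoid(W_f @ np.concatenate([h_prev, x_t]) + b_f)

# Elementwise forgetting: coordinates with f_t near 0 are erased, those with
# f_t near 1 are kept (the full update would also add newly written content).
c_t = f_t * c_prev
print(f_t, c_t)
```

Because the sigmoid outputs soft values rather than hard 0/1 decisions, the gate can partially attenuate each coordinate, and the whole operation stays differentiable for training.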
640
long short-term memory
Intuitive description for training of LSTM (with forget gate/peephole)?
https://cs.stackexchange.com/questions/12871/intuitive-description-for-training-of-lstm-with-forget-gate-peephole
<p>I am a CS undergraduate (but I don't know much about AI; I did not take any courses on it, and knew nothing about NNs until recently) who is about to do a school project in AI, so I picked a topic regarding grammar induction (of context-free languages and perhaps some subset of context-sensitive languages) using reinforcement learning on a neural network. I started by studying previous successful approaches to see if they can be tweaked, and now I am trying to understand the approach using supervised learning with Long Short-Term Memory. I am reading <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.4170">"Learning to Forget: Continual Prediction with LSTM"</a>. I am also reading the paper on peepholes, but it seems even more complicated and I'm just trying something simpler first. I think I correctly understand how the memory cell and the network topology work. What I do not get right now is the training algorithm. So I have some questions to ask:</p> <ul> <li><p>How exactly do different inputs get distinguished? Apparently the network is not reset after each input, and there is no special symbol to delimit different inputs. Does the network just receive a continuous stream of strings without any clue about where one input ends and the next one begins?</p></li> <li><p>What is the time lag between the input and the corresponding target output? Certainly some amount of time lag is required, and thus the network can never be trained to produce a target output from an input that it has not had enough time to process. If it were not the Reber grammar that was used, but something more complicated that could potentially require a lot more information to be stored and retrieved, the amount of time needed to access the information might vary depending on the input - something that probably cannot be predicted when we decide on the time lag for training.</p></li> <li><p>Is there a more intuitive explanation of the training algorithm? 
I find it difficult to figure out what is going on behind all the complicated formulas, and I need to understand it because I will need to tweak it into a reinforcement learning algorithm later.</p></li> <li><p>Also, the paper did not mention anything regarding noisy <strong>training</strong> data. I have read somewhere else that the network can handle noisy testing data very well. Do you know if LSTM can handle situations where the training data have some chance of being corrupted or ridden with superfluous information?</p></li> </ul>
<p>LSTM is designed to process a stream of data chunks (each chunk being the set of inputs for the network at that point in time) that arrive over time, observe features occurring in the data, and yield output accordingly. The time lag (delay) between the occurrences of features to recognize may vary and may be prolonged.</p> <p>One would then train the network by streaming training examples in randomized order, which should also have some timeshift noise added in the form of idle passes (have the network activate when inputs are at default idle values, e.g. when there is no audio, in the case of a speech processor). [Exception: if any training data obeys periodic timeshift patterns, such as music, then the timeshift noise should keep the timeshifting synchronized, e.g. in music making sure a start-of-measure training example isn't shifted to mid-measure and so forth.]</p> <p>It is also possible to have a semi-supervised setup where the network is always in a training configuration and is trained with examples that expect an output of an idle value when no feature is present, or the appropriate expected value when a feature is presented.</p> <p>If feedback-format training is desired it can be emulated by:</p> <ol> <li>saving the internal state (time t)</li> <li>activating the network on current inputs (now at t+1)</li> <li>supervisory process evaluates the output obtained at t <ul> <li>3a if correction is needed first rewind to the saved state (rewinds network back to t)</li> <li>3b generate a training example with the correction</li> <li>3c run a train (backprop) pass for this slice rather than an activation</li> </ul></li> </ol> <p>Thus one implements a feedback-style system since training examples are basically only created while the network is "getting it wrong." 
The feedback format is useful if one wants the network to attempt improvisation (like Schmidhuber's music example).</p> <ul> <li>It should be pointed out that part of the correction feedback (and thus the training examples) necessarily includes those that enforce idle-valued output when features are not present at the current time.</li> </ul> <p>It was mentioned by the OP that [there is no separation of inputs], except that actually there is. If one thinks of a voice-recognition scenario, one has periods of utterances (features the LSTM should detect) interspersed with long periods of silence. So to address the concern it would be fair to say those periods of silence are in fact separating the sequenced groups of inputs (those silences too are actually a feature the network needs to detect and learn to respond to with idle-valued outputs, i.e. learn to do nothing during silence).</p> <h2>A note about resetting of the network</h2> <p>Any reset or recalling of a saved network state in the LSTM sense has a meaning of "go back in time", thus undoing any learning the LSTM performed after the saved point.</p> <p>Thus you were correct in stating that LSTMs are not reset prior to each training sample or training epoch. LSTMs want their data streamed, or provided in an 'online' manner, so to speak.</p>
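The save/rewind/train loop in steps 1-3 can be sketched as follows; the `TinyNet` class is a deliberately trivial stand-in (not a real LSTM) used only to show the control flow:

```python
import copy

class TinyNet:
    """Stand-in for a real recurrent network: just enough state to show the loop."""
    def __init__(self):
        self.state = 0.0
    def activate(self, x):
        self.state += x            # pretend forward pass
        return self.state
    def train_step(self, x, target):
        self.state = target        # pretend backprop correction

def feedback_step(net, x, supervisor):
    saved = copy.deepcopy(net)     # 1. save the internal state at time t
    out = net.activate(x)          # 2. activate on current inputs (now t+1)
    ok, corrected = supervisor(out)
    if not ok:                     # 3a. rewind the network back to t
        net.__dict__ = saved.__dict__
        net.train_step(x, corrected)  # 3b/3c. train on the corrected example
    return net.state

net = TinyNet()
# A supervisor that wants outputs capped at 1.0 (arbitrary toy rule).
sup = lambda out: (out <= 1.0, 1.0)
print(feedback_step(net, 0.5, sup))  # 0.5  (output accepted, no training)
print(feedback_step(net, 2.0, sup))  # 1.0  (rewound and corrected)
```

As the answer notes, training examples are only generated on the "getting it wrong" branch, which is what makes this a feedback-style scheme rather than plain supervised streaming.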
641
long short-term memory
Neural network: noisy temporal sequence converter (transducer?producer?) on demand?
https://cs.stackexchange.com/questions/22666/neural-network-noisy-temporal-sequence-converter-transducerproducer-on-dema
<p>I am starting to suspect this problem is very hard, now that I cannot find a single piece of relevant literature on the subject, but it's too late to change the class project topic now, so I hope for any pointers to a solution. Please pardon the somewhat artificial scenario of this question, but here goes:</p> <p>Technical version: </p> <p>Let $\Sigma_{c}$, $\Sigma_{q}$ and $\Sigma_{a}$ be 3 disjoint finite alphabets (c, q, a stand for content, query and answer respectively). Let $L_{c}\subseteq\Sigma_{c}^{*}$ and $L_{q}\subseteq\Sigma_{q}^{*}$ be FINITE languages, wherein $L_{q}$ has the property that for every string in the language, all of its prefixes are in the language too. There is an unknown function $f:L_{c}\times L_{q}\rightarrow\Sigma_{a}^{*}$. Consider a mysterious machine that receives a continuous stream of symbols through a channel, one per time step (we assume that the symbols are clearly distinguishable). This machine, whenever fed a string $c\in L_{c}$ (with the symbols in correct temporal order) followed by a string $q\in L_{q}$, will output (through a different output channel) the value of $f(c,q)$ as a temporal sequence, one symbol at a time. Note that the machine always outputs after every new symbol from $\Sigma_{q}$. Note that the empty string is in $L_{q}$, which means the machine also outputs something before any symbol from $\Sigma_{q}$ has arrived, but only if it is certain with high probability that the full string in $L_{c}$ has been received.</p> <p>The objective is to construct a neural network that emulates that mysterious machine, if we have access only to its input and output channels to use as training data, and we do not know $f$. We also have to assume that the input channel is noisy in the following sense: random noise is inserted into the input channel with high probability, delaying input symbols, and we initially do not know which symbols are noise and which are authentic; also, symbols in the input channel are sometimes lost, with low probability. 
EDIT: Note: we do not know $L_{c}$ nor $L_{q}$; only the mysterious machine knows. In fact we do not even know the alphabets $\Sigma_{c}$ and $\Sigma_{q}$, other than the fact that they are disjoint and are subsets of the set of all possible input symbols (input symbols not in either set are certainly noise, but we can't tell which set a symbol belongs to initially; note that it is still possible for symbols from the alphabets to be noise).</p> <p>(Why a neural network: besides the noise problem, also because that's what I wrote in my class project proposal.)</p> <p>(Layman version: consider Sherlock Holmes sitting in his chair, bored. Dr. Watson gives a short description of the client. Once he's done, Sherlock Holmes gives a conclusion about the client. Dr. Watson is astonished, asks more questions, and Sherlock Holmes replies. The conclusion must obviously be based on the description alone; and subsequent answers have to address the question being asked, taking into account the context, which consists of the questions already asked (for example, the same "How did you know?" following "Age?" demands a different answer than when following "Height?"). Now you want to make a neural network that simulates Sherlock Holmes, having all the recordings of those sessions. Dr. Watson, however, tends to insert long descriptions that are rather irrelevant, making long statements before finally getting around to asking a question, and sometimes accidentally omits crucial information, but otherwise describes people in a rather fixed order of details. The neural network must be able to deal with that. Of course, this is just a layman's description; the situation is much less complex.)</p> <p>I have looked through various relevant literature, and I cannot find anything relevant. Conversion to the spatial domain is useless due to the high amount of noise causing very long input sequences. 
I have looked into LSTM to deal with the memory problem over arbitrarily long time lags, but I cannot for the life of me figure out how the network is supposed to be trained when there are arbitrarily long noise insertions everywhere, or the possibility of missing symbols (every method I found seems to force a fixed time lag between input and output, and missing symbols immediately wreck any method based on predicting the next item in the sequence). Also, is it too much to ask for a network that isn't too hard to code? Integrate-and-fire neurons are even worse than LSTM in terms of coding difficulty.</p> <p>Thanks for your help. It's due in 2 days, so please be fast.</p>
<p>I unfortunately know very little about neural networks. The closest thing your project reminds me of is speech recognition, and I would look at that literature. I am thinking of the first stage of speech recognition, when the sound stream is transformed into a word lattice (or a word stream, if you keep only the most likely path in the lattice). But all I know on this is based on Hidden Markov Models and the Viterbi algorithm [1]. I have not looked at the field for a long time, and I have no idea how it would translate to neural networks, but I would suggest you look at that literature, for example by searching the web for <em>neural networks</em> and <em>speech recognition</em>.</p> <p>I doubt you can code anything serious in 2 days. I would not even try, but I do not know what kind of programming is expected. Maybe a good description of what should be done, with appropriate references, would be enough.</p> <p>You should simplify your question if you find out that your requirements are too strong, particularly on noise. At first, you should limit yourself to very simple kinds of noise. Problems are seldom solved the first time in full complexity. You first solve simple cases, then try to see where you could do more. For one thing, do you know how to do it without any noise? What are the limitations? Then you can start adding simple noise and see what changes.</p> <p>Your inputs, content and query, do not seem to have much reason to be distinguished - or do you have a strong reason to distinguish them? I would think that at some point your system must enter a state where it starts answering on the output tape.</p> <p>[1] Bahl, L. R., Jelinek, F., &amp; Mercer, R. L. (1983). A maximum likelihood approach to continuous speech recognition. IEEE Trans. Pattern Anal. Machine Intell., PAMI-5, 179-190.</p> <p>These authors actually published several papers on the subject for noisy input, including insertion, deletion and substitution of symbols. 
There are surely others, and this work is quite old. I am not sure the paper referenced is actually about learning. But the same people worked on learning too, such as parameter identification for Hidden Markov Models.</p>
642
semantic similarity
Semantically Compare Programs by Similarity?
https://cs.stackexchange.com/questions/162563/semantically-compare-programs-by-similarity
<p>In NLP, it's common to study the semantic similarity between pieces of text, which can be calculated in a number of ways. Are there any tools, methods, algorithms or processes that can be used to compare the semantic similarity of programs? That is, compare if two programs produce a similar output given similar inputs. There are a few obvious methods that I can think of, like comparing two programs' outputs over a given set of inputs, but if there are an infinite number of possible inputs to the program, then that method would never be complete.</p> <p>Speaking abstractly, I don't think there's a way to semantically compare two arbitrary Turing machines in any meaningful way. After all, the equivalence problem for Turing machines is undecidable. But for simpler classes of programs describable as, say, a FSM, is there any useful similarity metric?</p>
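For the FSM case the question ends on, exact equivalence (the strongest form of similarity) is decidable by searching the product automaton for a state pair on which the two machines disagree; a toy sketch (the DFA encoding here is ad hoc, purely for illustration):

```python
from collections import deque

# A DFA is encoded as {"delta": transitions, "accepting": set of states},
# where transitions[state][symbol] -> next state (total over the alphabet).

def equivalent(dfa_a, dfa_b, start_a, start_b, alphabet):
    """Decide language equivalence of two DFAs via BFS over the product."""
    seen = set()
    frontier = deque([(start_a, start_b)])
    while frontier:
        a, b = frontier.popleft()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        # A distinguishing pair: one machine accepts here, the other doesn't.
        if (a in dfa_a["accepting"]) != (b in dfa_b["accepting"]):
            return False
        for s in alphabet:
            frontier.append((dfa_a["delta"][a][s], dfa_b["delta"][b][s]))
    return True

# Two DFAs over {0,1} that both accept strings with an even number of 1s.
even_ones = {"delta": {0: {"0": 0, "1": 1}, 1: {"0": 1, "1": 0}},
             "accepting": {0}}
same_lang = {"delta": {"e": {"0": "e", "1": "o"}, "o": {"0": "o", "1": "e"}},
             "accepting": {"e"}}
print(equivalent(even_ones, same_lang, 0, "e", "01"))  # True
```

A graded similarity metric could build on the same search, e.g. by counting or weighting the distinguishing state pairs found, rather than returning a plain yes/no.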
643
semantic similarity
Semantic similarity in text
https://cs.stackexchange.com/questions/2955/semantic-similarity-in-text
<p>Is there a relatively simple way of telling if two pieces of text are semantically similar?</p> <p>Some assumptions that are valid:</p> <ul> <li>It is all English</li> <li>I have a list of all the <em>important</em> nouns</li> </ul> <p>Are there any strategies that I should pursue? I'm looking for something that is relatively computationally cheap, though something that could be scaled to improve accuracy at the expense of computational power would be a bonus.</p> <p><strong>Note:</strong></p> <p>Assume that there are not enough posts for some type of probabilistic analysis, but some type of NN might be feasible (I think; I just don't know enough about it).</p>
<p>Here's a simple technique.</p> <p>Train an LDA using something like <a href="http://mallet.cs.umass.edu" rel="nofollow">MALLET</a> over your collection of texts. For each pair of documents you want to compare, obtain the topic distributions and compute the <a href="http://en.wikipedia.org/wiki/Hellinger_distance" rel="nofollow">Hellinger distance</a> between them.</p> <p>Things you can tweak include term weighting, the LDA hyperparameters, and the metric for comparing distributions. <a href="http://en.wikipedia.org/wiki/Tf-idf" rel="nofollow">Term weighting</a> would obviate both the need for a list of important words, and the restriction to only English.</p>
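Once the two topic distributions are in hand, the final comparison step is small; a minimal sketch of the Hellinger distance (assumes equal-length probability vectors, e.g. from the same trained LDA model):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as
    equal-length probability vectors; 0 for identical distributions,
    1 for distributions with disjoint support."""
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)

# Topic distributions of two hypothetical documents:
d1 = [0.7, 0.2, 0.1]
d2 = [0.6, 0.3, 0.1]
```

Being bounded in [0, 1], it is easy to threshold or rank when deciding which document pairs count as similar.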
644
semantic similarity
How to determine agreement between two sentences?
https://cs.stackexchange.com/questions/56828/how-to-determine-agreement-between-two-sentences
<p>A common Natural Language Processing (NLP) task is to determine semantic similarity between two sentences. Has the question of agreement/disagreement between two sentences been covered in NLP or other literature? I tried searching on Google Scholar but didn't get any relevant results.</p>
<p>I would propose doing some research in the field of <strong>Stance Classification</strong>. Given a target claim or argument, we can classify whether a number of sentences are in favor of, against, or neither with respect to that claim. So one idea is to extract the topic of the two sentences, classify whether each agrees or disagrees with it, and from those classifications determine whether the sentences agree or disagree with each other.</p> <p>Here are some papers you can look at:</p> <p><a href="https://paperswithcode.com/task/stance-classification" rel="nofollow noreferrer">https://paperswithcode.com/task/stance-classification</a></p> <p><a href="https://arxiv.org/abs/1907.00181" rel="nofollow noreferrer">https://arxiv.org/abs/1907.00181</a></p> <p><a href="https://www.mdpi.com/2078-2489/13/3/137" rel="nofollow noreferrer">https://www.mdpi.com/2078-2489/13/3/137</a></p> <p><a href="https://dl.acm.org/doi/abs/10.1145/3488560.3501391" rel="nofollow noreferrer">https://dl.acm.org/doi/abs/10.1145/3488560.3501391</a></p>
645
semantic similarity
Representing abstract syntax tree as a graph
https://cs.stackexchange.com/questions/140149/representing-abstract-syntax-tree-as-a-graph
<p>Does it make sense to represent an AST as a graph? How can one achieve a mapping between ASTs and graphs that preserves both semantic and syntactic properties of source code?</p> <p>The goal and application of such a transformation would be to use graph neural networks and other deep &quot;graph&quot; learning techniques to extract features, cluster source code, find code similarities and suggest code completion tasks.</p> <p>Any suggestions of current algorithms and research in this area?</p>
<p>Every tree is itself a graph, so of course an AST is a graph -- no mapping is needed, it is already a graph. I don't see any way that this is useful in practice, though.</p> <p>Rather than trying to convert to a graph and then use graph neural networks, I suggest using a more direct approach: e.g., using a neural network on the AST or on the code itself. Since you haven't told us what you want to do with the neural network, it's impossible to suggest something more concrete, but I encourage you to do a literature search; there are many recent papers on using neural networks on code.</p> <hr /> <p>Alternatively, if you insist on having a graph somewhere, perhaps you will be interested in learning about control-flow graphs, which are one use of graphs in program analysis where the graph structure is useful.</p>
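To make the first point concrete (the AST already is a graph, with nodes as vertices and parent-child links as edges), here is a small sketch using Python's `ast` module; the helper name is made up:

```python
import ast

def ast_edges(source):
    """Return the AST of `source` as a list of (parent, child) edges,
    labelled by node type: the tree viewed directly as a graph."""
    tree = ast.parse(source)
    return [
        (type(parent).__name__, type(child).__name__)
        for parent in ast.walk(tree)
        for child in ast.iter_child_nodes(parent)
    ]

edges = ast_edges("x = f(1) + y")
```

An edge list like this is exactly the input format most graph libraries and graph neural network toolkits accept, so no separate "mapping" step is needed.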
646
semantic similarity
Two classes of documents. Find weighted relations between them
https://cs.stackexchange.com/questions/28133/two-classes-of-documents-find-weighted-relations-between-them
<p>I have an NLP problem and a potential solution, but I’m a bit green here, so I’m looking for some validation or alternative suggestions.</p> <h1>Background</h1> <p>I have two types of documents: one is a set of short statements of an organization's goals and objectives (“Goals”, from here on. ~500 docs, ~100 words/doc. New documents monthly-annually). The other is a larger set of things they've actually done (statements of work, contracts, etc. “Work”, from here on. ~10M docs, ~300 words/doc. New documents daily-weekly.).</p> <h1>Problem</h1> <p>My objective is to assign each of these Work statements to one or more Goals, if possible with a weight indicating of how closely they fit.</p> <p>The hypothesis I'm working from is that each Work document is created by someone with knowledge of the Goals, and therefore there's a hidden relationship between the two that should show up in a probabilistic approach.</p> <p>There's no existing data of this sort, so that seems to eliminate most supervised approaches.</p> <h1>Solution?</h1> <p>I've done a lot of reading about ML/NLP, but this is the first time I've tried to do something beyond basic examples and use of pre-existing libraries. Most of my knowledge comes from <a href="http://nlp.stanford.edu/IR-book/" rel="nofollow">Introduction to Information Retrieval</a>, so I've been going through that trying to find something that fits.</p> <p>Here's my current idea:</p> <ol> <li>Build a term-document matrix of Goals.</li> <li>Use singular-value decomposition to construct a low-rank approximation. The intention here is to narrow out both stopwords and terms common across all Goals. This part seems to be the most “magical” to me right now, so I might be misunderstanding its capabilities here.</li> <li>Construct a term-document matrix of Work.</li> <li>Compare the Goal matrix to the Work matrix (or the selection of the relevant terms from the Work matrix). Compute the distance from each Work document to each Goal document. 
The final result will be a matrix of Work x Goals with some kind of weight between each. My linear algebra is a bit shaky, so I can’t remember if there’s an obviously good distance function that I’m missing the intuition on. Or should it be something more like cosine similarity?</li> </ol> <p>I came up with this after reading the <a href="http://nlp.stanford.edu/IR-book/html/htmledition/language-models-for-information-retrieval-1.html" rel="nofollow">Language models for information retrieval</a> and <a href="http://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html" rel="nofollow">Matrix decompositions and latent semantic indexing</a> chapters in IIR. I don’t think what I’m describing falls exactly into any of those techniques like LSI, but I may be confused because they’re mostly talking about matching <em>queries</em> to documents, rather than other documents. Maybe my restricted-term Goal matrix is actually a set of query vectors then? Or maybe this is a different technique that I picked up elsewhere?</p> <p>Or maybe I’m way off and this won’t work at all? :-) I’d love some feedback. Thanks.</p>
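Regarding the distance function in step 4: cosine similarity is the standard choice for term vectors because it normalizes away document length, which matters here since Work documents (~300 words) are longer than Goals (~100 words). A minimal bag-of-words sketch with no weighting or dimensionality reduction (hypothetical helper name):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between the bag-of-words term vectors of two texts."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

In the pipeline described above, the same cosine computation would be applied to the reduced-rank vectors produced by the SVD step rather than to raw counts.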
647
semantic similarity
Constraint based analysis: understanding the program $[[ \text{fn} \ x =&gt; [x]^1]^2 [ \text{fn} \ y =&gt; [y]^3]^4]^5$
https://cs.stackexchange.com/questions/132947/constraint-based-analysis-understanding-the-program-textfn-x-x1
<p>I am currently studying the textbook <a href="http://faculty.sist.shanghaitech.edu.cn/faculty/songfu/cav/PPA.pdf" rel="nofollow noreferrer"><em>Principles of Program Analysis</em> by Flemming Nielson, Hanne R. Nielson, and Chris Hankin</a>. Chapter <strong>1.4 Constraint Based Analysis</strong> says the following:</p> <blockquote> <p><strong>1.4 Constraint Based Analysis</strong> The purpose of <em>Control Flow Analysis</em> is to determine information about what &quot;elementary blocks&quot; may lead to what other &quot;elementary blocks&quot;. This information is immediately available for the <span class="math-container">$\mathrm{While}$</span> language unlike what is the case for more advanced imperative, functional and object-oriented languages. Often Control Flow Analysis is expressed as a Constraint Based Analysis as will be illustrated in this section.<br /> Consider the following functional program:</p> <pre><code>let f = fn x =&gt; x 1; g = fn y =&gt; y + 2; h = fn z =&gt; z + 3 in (f g) + (f h) </code></pre> <p>It defines a higher-order function <code>f</code> with formal parameter <code>x</code> and body <code>x 1</code>; then it defines two functions <code>g</code> and <code>h</code> that are given as actual parameters to <code>f</code> in the body of the <code>let</code>-construct. Semantically, <code>x</code> will be bound to each of these two functions in turn so both <code>g</code> and <code>h</code> will be applied to <code>1</code> and the result of the computation will be the value <span class="math-container">$7$</span>.<br /> An application of <code>f</code> will transfer control to the body of <code>f</code>, i.e. to <code>x 1</code>, and this application of <code>x</code> will transfer control to the body of <code>x</code>. The problem is that we cannot immediately point to the body of <code>x</code>: we need to know what parameters <code>f</code> will be called with. 
This is exactly the information that the Control Flow Analysis gives us: <span class="math-container">$$\text{For each function application, which functions may be applied.}$$</span> As is typical of functional languages, the labelling scheme used would seem to have a very different character than the one employed for imperative languages because the &quot;elementary blocks&quot; may be nested. We shall therefore label all subexpressions as in the following simple program that will be used to illustrate the analysis. <strong>Example 1.2</strong> Consider the program: <span class="math-container">$$[[ \text{fn} \ x =&gt; [x]^1]^2 [ \text{fn} \ y =&gt; [y]^3]^4]^5$$</span> It calls the identity function <span class="math-container">$\text{fn} \ x =&gt; x$</span> on the argument <span class="math-container">$\text{fn} \ y =&gt; y$</span> and clearly evaluates to <span class="math-container">$\text{fn} \ y =&gt; y$</span> itself (omitting all <span class="math-container">$[ \dots ]^\mathscr{l}$</span>).</p> <p>We shall now be interested in associating information with the labels themselves, rather than with the entries and exits of the labels - thereby we exploit the fact that there are no side-effects in our simple functional language. The Control Flow Analysis will be specified by a pair <span class="math-container">$(\hat{C}, \hat{\rho})$</span> of functions where <span class="math-container">$\hat{C}(\mathscr{l})$</span> is supposed to contain the values that the subexpression (or &quot;elementary block&quot;) labelled <span class="math-container">$\mathscr{l}$</span> may evaluate to and <span class="math-container">$\hat{\rho}(x)$</span> contain the values that the variable <span class="math-container">$x$</span> can be bound to.</p> <p><strong>The constraint system.</strong> One way to specify the Control Flow Analysis then is by means of a collection of constraints and we shall illustrate this for the program of Example 1.2. 
There are three classes of constraints. One class of constraints relate the values of function abstractions to their labels: <span class="math-container">$$\{ \text{fn} \ x =&gt; [x]^1 \} \subseteq \hat{C}(2) \\ \{ \text{fn} \ y =&gt; [y]^3 \} \subseteq \hat{C}(4)$$</span> These constraints state that a function abstraction evaluates to a closure containing the abstraction itself. So the general pattern is that an occurrence of <span class="math-container">$[\text{fn} \ x =&gt; e]^\mathscr{l}$</span> in the program gives rise to a constraint <span class="math-container">$\{ \text{fn} \ x =&gt; e \} \subseteq \hat{C}(\mathscr{l})$</span>.<br /> The second class of constraints relate the values of variables to their labels: <span class="math-container">$$\hat{\rho}(x) \subseteq \hat{C}(1) \\ \hat{\rho}(y) \subseteq \hat{C}(3)$$</span> The constraints state that a variable always evaluates to its value. So for each occurrence of <span class="math-container">$[x]^\mathscr{l}$</span> in the program we will have a constraint <span class="math-container">$\hat{\rho}(x) \subseteq \hat{C}(\mathscr{l})$</span>. The third class of constraints concerns function application: for each application point <span class="math-container">$[e_1 \ e_2]^\mathscr{l}$</span>, and for each possible function <span class="math-container">$[\text{fn} \ x =&gt; e]^{\mathscr{l}^\prime}$</span> that could be called at this point, we will have: (i) a constraint expressing that the formal parameter of the function is bound to the actual parameter at the application point, and (ii) a constraint expressing that the result obtained by evaluating the body of the function is a possible result of the application.<br /> Our example program has just one application <span class="math-container">$[[\dots]^2[\dots]^4]^5$</span>, but there are two candidates for the function, i.e. 
<span class="math-container">$\hat{C}(2)$</span> is a subset of the set <span class="math-container">$\{ \text{fn} \ x =&gt; [x]^1, \text{fn} \ y =&gt; [y]^3 \}$</span>. If the function <span class="math-container">$\text{fn} \ x =&gt; [x]^1$</span> is applied then the two constraints are <span class="math-container">$\hat{C}(4) \subseteq \hat{\rho}(x)$</span> and <span class="math-container">$\hat{C}(1) \subseteq \hat{C}(5)$</span>. We express this as <em>conditional constraints</em>: <span class="math-container">$$\{ \text{fn} \ x =&gt; [x]^1 \} \subseteq \hat{C}(2) \Rightarrow \hat{C}(4) \subseteq \hat{\rho}(x) \\ \{ \text{fn} \ x =&gt; [x]^1 \} \subseteq \hat{C}(2) \Rightarrow \hat{C}(1) \subseteq \hat{C}(5)$$</span> Alternatively, the function being applied could be <span class="math-container">$\text{fn} \ y =&gt; [y]^3$</span> and the corresponding conditional constraints are: <span class="math-container">$$\{ \text{fn} \ y =&gt; [y]^3 \} \subseteq \hat{C}(2) \Rightarrow \hat{C}(4) \subseteq \hat{\rho}(y) \\ \{ \text{fn} \ y =&gt; [y]^3 \} \subseteq \hat{C}(2) \Rightarrow \hat{C}(3) \subseteq \hat{C}(5)$$</span> <strong>The least solution.</strong> As in Section 1.3 we shall be interested in the least solution to this set of constraints: the smaller the sets of values given by <span class="math-container">$\hat{C}$</span> and <span class="math-container">$\hat{\rho}$</span>, the more precise the analysis is in predicting which functions are applied. 
In Exercise 1.2 we show that the following choice of <span class="math-container">$\hat{C}$</span> and <span class="math-container">$\hat{\rho}$</span> gives a solution to the above constraints: <span class="math-container">$$\hat{C}(1) = \{ \text{fn} \ y =&gt; [y]^3 \} \\ \hat{C}(2) = \{ \text{fn} \ x =&gt; [x]^1 \} \\ \hat{C}(3) = \emptyset \\ \hat{C}(4) = \{ \text{fn} \ y =&gt; [y]^3 \} \\ \hat{C}(5) = \{ \text{fn} \ y =&gt; [y]^3 \} \\ \hat{\rho}(x) = \{ \text{fn} \ y =&gt; [y]^3 \} \\ \hat{\rho}(y) = \emptyset$$</span> Among other things this tells us that the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span> is never applied (since <span class="math-container">$\hat{\rho}(y) = \emptyset$</span>) and that the program may only evaluate to the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span> (since <span class="math-container">$\hat{C}(5) = \{ \text{fn} \ y =&gt; [y]^3 \}$</span>).<br /> Note the similarities between the constraint based approaches to Data Flow Analysis and Constraint Based Analysis: in both cases the syntactic structure of the program gives rise to a set of constraints whose least solution is desired. 
The main difference is that the constraints for the Constraint Based Analysis have a more complex structure than those for the Data Flow Analysis.</p> </blockquote> <p>I am confused by this part:</p> <blockquote> <p>Among other things this tells us that the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span> is never applied (since <span class="math-container">$\hat{\rho}(y) = \emptyset$</span>) and that the program may only evaluate to the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span> (since <span class="math-container">$\hat{C}(5) = \{ \text{fn} \ y =&gt; [y]^3 \}$</span>).</p> </blockquote> <p>I thought that I understood the program <span class="math-container">$[[ \text{fn} \ x =&gt; [x]^1]^2 [ \text{fn} \ y =&gt; [y]^3]^4]^5$</span>. However, this part seems contradictory: it first says that <span class="math-container">$\text{fn} \ y =&gt; y$</span> is never applied, and it then immediately says that the program may only evaluate to the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span>. Furthermore, I don't understand why, as stated above, &quot;<span class="math-container">$\text{fn} \ y =&gt; y$</span> is never applied (since <span class="math-container">$\hat{\rho}(y) = \emptyset$</span>)&quot;, and nor do I understand why &quot;the program may only evaluate to the function abstraction <span class="math-container">$\text{fn} \ y =&gt; y$</span> (since <span class="math-container">$\hat{C}(5) = \{ \text{fn} \ y =&gt; [y]^3 \}$</span>)&quot;. 
For instance, if, as stated above, <span class="math-container">$\hat{\rho}(y)$</span> contains the values that the variable <span class="math-container">$y$</span> can be bound to, then why is this the empty set (in other words, why can <span class="math-container">$y$</span> in the program not be bound to any values)?</p> <p>I skipped to exercise 1.2 to see if I could gather any additional information:</p> <blockquote> <p><strong>Exercise 1.2</strong> Show that the solution displayed for the Control Flow Analysis in Section 1.4 is a solution. Also show that it is in fact the least solution. (Hint: Consider the demands on <span class="math-container">$\hat{C}(2)$</span>, <span class="math-container">$\hat{C}(4)$</span>, <span class="math-container">$\hat{\rho}(x)$</span>, <span class="math-container">$\hat{C}(1)$</span> and <span class="math-container">$\hat{C}(5)$</span>.)</p> </blockquote> <p>However, this doesn't really help me make sense of this.</p>
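One way to resolve the apparent contradiction is to compute the least solution mechanically: start every set empty and keep adding elements until all constraints hold. The conditional constraints for fn y =&gt; [y]^3 are guarded on it being in C-hat(2), i.e. in function position; since that never happens, nothing ever flows into rho-hat(y), even though the abstraction itself flows (as a value) through C-hat(4), rho-hat(x), C-hat(1) and C-hat(5). A small fixed-point iteration over the book's constraints (hypothetical Python encoding; `C1`..`C5`, `rx`, `ry` stand for the analysis variables) reproduces the displayed solution:

```python
# Fixed-point solver for the CFA constraints of Example 1.2 (hypothetical encoding).
F1 = "fn x => [x]^1"
F2 = "fn y => [y]^3"

# All analysis variables start empty.
sol = {v: set() for v in ("C1", "C2", "C3", "C4", "C5", "rx", "ry")}

base = [(F1, "C2"), (F2, "C4")]        # each abstraction is in its own label's set
subset = [("rx", "C1"), ("ry", "C3")]  # each variable occurrence gets the variable's values
conditional = [                        # if fn is in sol[var], require sol[src] <= sol[dst]
    ((F1, "C2"), "C4", "rx"), ((F1, "C2"), "C1", "C5"),
    ((F2, "C2"), "C4", "ry"), ((F2, "C2"), "C3", "C5"),
]

changed = True
while changed:  # iterate until no constraint adds anything: the least solution
    changed = False
    for fn, var in base:
        if fn not in sol[var]:
            sol[var].add(fn)
            changed = True
    for src, dst in subset:
        if not sol[src] <= sol[dst]:
            sol[dst] |= sol[src]
            changed = True
    for (fn, var), src, dst in conditional:
        if fn in sol[var] and not sol[src] <= sol[dst]:
            sol[dst] |= sol[src]
            changed = True

# Matches the book: C5 = {F2} (the program evaluates to fn y => [y]^3)
# while ry stays empty (fn y => [y]^3 is never applied).
```

"Never applied" and "is the final value" are thus compatible: the identity function is the only thing ever called, and what it returns is the other abstraction, unapplied.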
648
question answering
More Information about the Question Answering System called LUKE
https://cs.stackexchange.com/questions/125466/more-information-about-the-question-answering-system-called-luke
<p>LUKE is a new state-of-the-art question answering system, and after googling the keywords LUKE, Studio Ousia, NAIST and RIKEN AIP (I suppose LUKE is a collaboration between several research centers) I couldn't find any information.</p> <p>LUKE is mentioned in the following pages:</p> <p><a href="https://paperswithcode.com/sota/question-answering-on-squad11" rel="nofollow noreferrer">https://paperswithcode.com/sota/question-answering-on-squad11</a>.</p> <p><a href="https://sheng-z.github.io/ReCoRD-explorer/" rel="nofollow noreferrer">https://sheng-z.github.io/ReCoRD-explorer/</a>.</p> <p>Is there anyone here with insider information able to explain LUKE?</p>
<p>Check this out. This might be the paper</p> <p><a href="https://www.researchgate.net/project/LUKE-Project" rel="nofollow noreferrer">https://www.researchgate.net/project/LUKE-Project</a></p> <p><a href="https://www.researchgate.net/publication/340461536_Global_Entity_Disambiguation_with_Pretrained_Contextualized_Embeddings_of_Words_and_Entities" rel="nofollow noreferrer">https://www.researchgate.net/publication/340461536_Global_Entity_Disambiguation_with_Pretrained_Contextualized_Embeddings_of_Words_and_Entities</a></p>
649
question answering
Combining Ontology and Relational Databases in Question Answering system
https://cs.stackexchange.com/questions/74514/combining-ontology-and-relational-databases-in-question-answering-system
<p>I'm new to the Natural Language Processing field and its applications. I'm planning to build a question answering system for a project, but some approaches are making me a bit confused about the use of ontologies and their place in the architecture of the system. I understand that an ontology is, by definition, a way to represent the concepts and relations of a certain domain, and that it also allows semantic annotations.</p> <p>Some approaches use an ontology like a database: the user's input (in natural language) is transformed into a SPARQL query by a semantic parser, and the knowledge is then retrieved from the ontology. But an ontology is usually static knowledge that rarely changes, and I want my system to be able to grow its knowledge of the domain with new instances or concepts, because other systems (this is only a module of a bigger system) will probably need data about instances present in the ontology and will retrieve specific attributes that could change over time. That brings relational databases to mind. Instead of using an ontology as one big database for the whole project, why not build a relational database with the instances of the domain, where attributes can change dynamically and new ones can be added without constantly modifying the ontology? I could then develop an ontology that represents the schema of the database, map the terms of the user's natural language query onto the terms present in the ontology, and transform it into a SQL query that retrieves the answer from the relational database.</p> <p>Could this approach be correct? I mean, using the ontology as an intermediary between the user query and the relational database? The only problem I see with this approach is that I need to figure out how to link the instances of the database to the ontology.</p> <p>Thanks for your help. Greetings.</p>
650
question answering
What program will derive the underlying algorithm in these question-answer pairs (updated)?
https://cs.stackexchange.com/questions/19663/what-program-will-derive-the-underlying-algorithm-in-these-question-answer-pairs
<p>Given this set of question-answer pairs, what program will derive the underlying algorithm and provide the correct answer for any question of the same format.</p> <p><strong>Question-Answer Pairs (training set):</strong></p> <pre><code>B:BA
BA:BB
BB:BAA
BAA:BAB
BAB:BBA
BBA:BBB
BBB:BAAA
BAAA:BAAB
BAAB:BABA
</code></pre> <p>Those familiar with binary may notice that the training set is binary numbers with A and B substituted for 0 and 1. The answer to each question is the next binary number (using A and B). </p> <p>After processing the training set, the program should be able to answer questions such as the following using the algorithm it derived:</p> <pre><code>BABA:?
BABB:?
BBBBAAA:?
BAABBAAABBABA:?
</code></pre> <p><strong>Constraints:</strong></p> <ul> <li>The program must derive the counting algorithm <strong>only by manipulating the data given</strong> in the training set. It must not use hard coded knowledge of binary counting.</li> <li>Many algorithms may produce the correct answers. Therefore, <strong>the simplest algorithm is preferred</strong>. </li> <li>The program should assume that <strong>each answer is a transformation of the question</strong>.</li> <li>All questions will be in the binary format seen above, but they may be of arbitrary size.</li> </ul> <p>Can any existing machine learning programs/algorithms solve this? If so, how? If you believe this is unsolvable, please explain why.</p> <hr> <p><em>This update contains background to the question, explanation of the problem space, a new constraint, a proposed solution, and further questions.</em></p> <p>This problem is relevant to general machine learning where the machine must learn by observation and feedback the algorithms that govern the world around it.</p> <p>Based on <a href="https://en.wikipedia.org/wiki/Kolmogorov_complexity" rel="nofollow">Kolmogorov complexity</a>, there is at least one program that can produce the correct mappings:</p> <pre><code>if B, then BA
if BA, then BB
(etc.
for all pairs in training set) </code></pre> <p>This will work for every pair in the training set and nothing else. This will be the shortest program if the training set is completely random. The good news is that this is an upper bound. For any training set that is not random, there will be a smaller program that will work. Also, the number of programs smaller than the upper bound is finite. </p> <p>An unfortunate result of Kolmogorov complexity is that it cannot be calculated. This is due to the <a href="http://en.wikipedia.org/wiki/Halting_problem" rel="nofollow">halting problem</a>. We can't know if any program will stop until it does.</p> <p>If the question is "Write pi," then the program that produces the correct answer would never halt because pi has infinite digits. An answer infinitely long is never desirable, so I think the best way to deal with this is to put an <strong>arbitrary limit on the length of the answer</strong>.</p> <p>With that additional constraint, here is an (inefficient) program that will generate a correct solution program shorter than the upper bound if one exists. (Sorry if this is confusing, but these steps outline a program that generates programs that implement an algorithm to map questions to answers in a training set):</p> <ol> <li>Create the "upper bound" program. One that maps each input in the training set directly to its output.</li> <li>Generate every possible program that is shorter than the upper bound and list them from shortest to longest.</li> <li>Starting with the shortest program, run the first step of every program. </li> <li>Stop when a correct program is found.</li> <li>Eliminate programs that stop without a correct answer or produce an answer longer than the arbitrary limit.</li> <li>Repeat steps 3-5 running the second step, third step, etc. 
of the programs.</li> </ol> <p>This program has the following benefits:</p> <ul> <li>It limits the number of programs to those smaller than the "upper bound" program.</li> <li>It avoids the halting problem by <ul> <li>incorporating an arbitrary limit to the length of the answer and </li> <li>executing step x in all programs before moving on to step x+1 so that the solution will be found before looping infinity.</li> </ul></li> </ul> <p>The main disadvantage is that it is terribly slow. It takes millions of years to crack a 128 bit password and I think this program could have comparable performance.</p> <p><strong>Now, the questions:</strong></p> <ul> <li><p>Do you see any significant flaws in this solution?</p></li> <li><p>Can this program's performance be improved in any significant way without introducing onerous constraints?</p></li> </ul>
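For concreteness, the target transformation itself is trivial to hard-code, which is exactly what the constraints forbid the learner from doing; the difficulty the question asks about is deriving it from examples alone. A sketch of the hidden rule, with a hypothetical helper name:

```python
def next_ab(s):
    """Binary increment over the alphabet {A, B} (A = 0, B = 1) -- the
    transformation that generates every pair in the training set."""
    bits = s.replace("A", "0").replace("B", "1")
    succ = bin(int(bits, 2) + 1)[2:]
    return succ.replace("0", "A").replace("1", "B")

# Reproduces the training pairs and answers the test questions:
# next_ab("B") == "BA", next_ab("BAB") == "BBA", next_ab("BABA") == "BABB"
```

Any candidate program emitted by the search procedure sketched above could be checked against this ground truth on held-out questions.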
<p>The same answers you got <a href="https://cs.stackexchange.com/a/19640/755">the last time you asked this question</a> apply. There are infinitely many possible mappings $\{A,B\}^* \to \{A,B\}^*$ and none is preferable to any other.</p> <p>Let's try to formalize your problem more clearly. I suspect you want an algorithm to solve the following problem:</p> <blockquote> <p>Given a training set as input (i.e., a set of mappings $x \mapsto y$, where $x,y$ are strings), output the shortest algorithm $A$ with the property that $A(x)=y$ for every $x,y$ in the training set.</p> </blockquote> <p>The bad news is that this problem is not solvable. In particular, the problem is undecidable, so you should not expect any general algorithm to solve this problem. To do better, you will need some structure on the set of hypotheses (e.g., a distribution on possible mappings or something like that).</p> <p>Why is this undecidable? Because it is basically the problem of computing the <a href="https://en.wikipedia.org/wiki/Kolmogorov_complexity" rel="nofollow noreferrer">Kolmogorov complexity</a> of the training set, and computing the Kolmogorov complexity is known to be undecidable.</p> <p>You might be wondering how standard methods for machine learning get around this barrier. The answer is that they avoid this barrier by changing the problem statement. Machine learning methods generally involve a more restricted space of hypotheses (instead of allowing all possible algorithms, we only consider a restricted subset, such as those that are linear or that have some other nice properties) or else involve specifying a probability distribution on the set of possible mappings. I recommend you spend some time studying machine learning.</p> <hr> <p>What is the context and motivation for your question? What's the <em>specific</em> real-world situation where you encountered this? 
To make progress I suspect you'll need to step back and look at the real requirements from your application, and be open to other ways to meet your needs.</p>
651
question answering
Answering questions about the recurrence of certain aspects of an algorithm
https://cs.stackexchange.com/questions/97901/answering-questions-about-the-recurrence-of-certain-aspects-of-an-algorithm
<p>I am thoroughly confused by a problem that was brought up in class:</p> <p>Given the following pseudocode for a function RANDOM which generates a random number based off of recursion: </p> <pre><code>function RANDOM(n)
1.    if n = 1 then
1.1       return 1
1.2   else
2.1       assign x = 0 with probability 1/2, or
2.2       assign x = 1 with probability 1/3, or
2.3       assign x = 2 with probability 1/6
3.    if x = 0 then
3.1       return (RANDOM(n-1) + RANDOM(n-2))
3.2   end-if
4.    if x = 1 then
4.1       return (RANDOM(n) + 2*RANDOM(n-1))
4.2   end-if
5.    if x = 2 then
5.1       return (3*RANDOM(n) + RANDOM(n) + 3)
5.2   end-if
6.    end-if
end-RANDOM
</code></pre> <p>Answer the following three questions: </p> <ol> <li><p>Give the recurrence equation for the expected running time of RANDOM. </p></li> <li><p>Give the exact recurrence equation for the expected number of recursive calls expected by a call to RANDOM(n). </p></li> <li><p>Give the exact recurrence equation for the expected number of times the return statement at line 5.1 is executed, in all calls to RANDOM(n), recursive or not. </p></li> </ol> <p>Our professor gave us a version of this pseudocode that goes like this: </p> <pre><code>function RANDOM(n)
1.    if n = 1 then
1.1       return 1
1.2   else
2.1       assign x = 0 with probability 1/3, or
2.2       assign x = 1 with probability 1/3, or
2.3       assign x = 2 with probability 1/3
3.    if x = 0 then
3.1       return (RANDOM(n))
3.2   end-if
4.    if x = 1 then
4.1       return (RANDOM(n-1) + 1)
4.2   end-if
5.    if x = 2 then
5.1       return (3*RANDOM(n-1) + RANDOM(n-1) + 1)
5.2   end-if
6.
end-if end-RANDOM </code></pre> <p>And answered the three questions as follows:</p> <p>1)</p> <p>T(n) = expected running time of RANDOM</p> <p><span class="math-container">$$ T(1) = 1 $$</span></p> <p><span class="math-container">$$ T(n) = 1 + \frac{T(n)}{3} + \frac{T(n-1)}{3} + \frac{T(n-1) + T(n-1)}{3} $$</span></p> <p>which, after some algebra, comes out to equal:</p> <p><span class="math-container">$$ T(1) = 1 $$</span></p> <p><span class="math-container">$$ T(n) =\frac 32 * T(n-1) + 1, $$</span> , where <span class="math-container">$n&gt;=1$</span>. </p> <p>I have one question about the answer to this problem. Why does the constant <span class="math-container">$1$</span> in the original equation not matter to the equation? Is it because it is being lumped in with constant time? </p> <p>2)</p> <p>R(n) = the expected number of recursive calls executed by a call to RANDOM(n). <span class="math-container">$$ R(1) = 0 $$</span></p> <p><span class="math-container">$$ R(n) = \frac{1+R(n)}{3} + \frac{1+R(n-1)}{3} + \frac{2+2*R(n-1)}{3} $$</span></p> <p>, which, after some algebra, comes out to:</p> <p><span class="math-container">$$ R(1) = 0 $$</span></p> <p><span class="math-container">$$ R(n) = \frac 32 * R(n-1) + 2 $$</span></p> <p>, where <span class="math-container">$ n&gt;0 $</span>. </p> <p>I have a couple of questions about the set-up of this problem. </p> <ol> <li>Why is 1 added to the numerators of the first two rational numbers? </li> <li>Similarly, why is 2 added to the numerator of the last rational number? </li> </ol> <p>Finally, </p> <p>3)</p> <p>C(n) = the exact number of returns from line 5.1 of RANDOM(n), recursive or not. 
<span class="math-container">$$ C(1) = 0 $$</span></p> <p><span class="math-container">$$ C(n) = \frac {C(n)}{3} + \frac {C(n-1)}{3} + \frac {1+2*C(n-1)}{3} $$</span></p> <p>, which, after some algebra, comes out to be:</p> <p><span class="math-container">$$ C(1) = 0 $$</span></p> <p><span class="math-container">$$ C(n) = \frac 32 * C(n-1) + \frac 12 $$</span></p> <p>, where <span class="math-container">$n&gt;0$</span>. </p> <p>My main question for this answer is: why is 1 added to the last rational number in the expression? What does that 1 represent? </p> <p>I am so lost, and desperate. Any attempt to shine light on these answers would be such a great help to me. Thank you. </p>
<p>Firstly, there is one minor typo in the question. The "<span class="math-container">$T(n) =\frac 32 * T(n-1) + 1$</span>" should be "<span class="math-container">$T(n) =\frac 32 * (T(n-1) + 1)$</span>" since this equality is said to be obtained by simple algebra from the equality above it. However, this typo does not affect either the question or this answer.</p> <hr> <p>The answer to all of your questions is so simple that I can hardly imagine that you missed it. Well, on the other hand, it is indeed very easy to miss. </p> <p>Let me give you a simple example to illustrate the answer. Suppose we have the following function, which computes the factorial.</p> <pre><code>function FACTORIAL(n)
  if n = 1 then
    return 1
  else
    return n * FACTORIAL(n-1)
  end-if
</code></pre> <p>Suppose we have made a call, <code>FACTORIAL(4)</code>. That means we must have called <code>FACTORIAL(3)</code>, <code>FACTORIAL(2)</code> and finally <code>FACTORIAL(1)</code>. So we have made four calls to this function in total, the last three of which are called recursively. In fact, we have the following recurrence relations for <span class="math-container">$U(n)$</span>, the number of recursive calls made in <code>FACTORIAL(n)</code>. <span class="math-container">$$U(1) = 0$$</span> <span class="math-container">$$U(n) = 1 + U(n-1), \text { for } n\gt 1$$</span> Notice <strong>the first "1" in the last equality</strong>, which stands for the call to FACTORIAL with parameter <span class="math-container">$n-1$</span> in the code, <code>n * FACTORIAL(n-1)</code>, while <span class="math-container">$U(n-1)$</span> comes from the number of recursive calls made by <code>FACTORIAL(n-1)</code>. In other words, the recursive calls made by <code>FACTORIAL(n-1)</code> do not include that call itself, which is, of course, counted towards the recursive calls made by <code>FACTORIAL(n)</code>. We can also check by contradiction. Let us suppose we did not have that "1".
Then we would get <span class="math-container">$U(4)=U(3)=U(2)=U(1)=0$</span>, which would say that we made no recursive calls to <code>FACTORIAL</code> while computing <code>FACTORIAL(4)</code>, which is not true at all.</p> <p>In the same way as in this simple example, all those constant 1's and the one 2 in the question come from right where the calls are made. For example, when <code>x = 2</code>, the function will return <code>(3*RANDOM(n-1) + RANDOM(n-1) + 1)</code>, which incurs <span class="math-container">$(1+R(n-1)) + (1 + R(n-1)) = 2 + 2*R(n-1)$</span> recursive calls.</p>
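To make the origin of those constants concrete, here is a small Python sketch (my own illustration, not part of the original answer) that counts the calls made by the FACTORIAL example and confirms the recurrence U(1) = 0, U(n) = 1 + U(n-1), i.e. U(n) = n - 1:

```python
def factorial(n, counter):
    # counter[0] counts every call to FACTORIAL, including the initial one.
    counter[0] += 1
    if n == 1:
        return 1
    # This call site is the "1" in U(n) = 1 + U(n-1); the calls it makes
    # internally contribute the U(n-1) part.
    return n * factorial(n - 1, counter)

def recursive_calls(n):
    counter = [0]
    factorial(n, counter)
    return counter[0] - 1  # exclude the initial, non-recursive call

# U(n) = n - 1, so FACTORIAL(4) makes 3 recursive calls (4 calls in total).
for n in range(1, 8):
    assert recursive_calls(n) == n - 1
```

The same bookkeeping idea explains the professor's version of RANDOM: each call site contributes a deterministic "+1" at the place where the call is made, on top of the expected calls made inside it.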
652
question answering
Questions about an answer to a pumping lemma question for CFLs
https://cs.stackexchange.com/questions/11358/questions-about-an-answer-to-a-pumping-lemma-question-for-cfls
<p>In <a href="https://cs.stackexchange.com/a/7741/4689">the answer to this question</a>, I'm not understanding how the string is derived for a given $l$.</p> <p>For example,</p> <blockquote> <p>Case 1: $vx = a^i$ where $i &gt; 0$. Choose $l = 2$ to get $a^{n+i} b^{n+1} c^{n+1} d^n \notin L$.</p> </blockquote> <p>Why is $l = 2$ chosen and how is $a^{n+i}b^{n+1}c^{n+1}d^n$ derived from $l = 2$?</p> <p>Also, how can $vx$ be chosen instead of $vwx$ as the OP chose? What do we do about $w$? Is it the empty string?</p>
<p>The essential idea is that the pumping lemma tells you about the strings $uv^lwx^ly$ for every $l \geq 0$. In particular, with $l = 0$ you can "pump down" to $uwy$, thereby shortening the initial string.</p> <p>The answer considers the string $a^n b^{n+1} c^{n+1} d^n$. Removing a single $b$ or inserting a single $a$ would move the string out of the language. Removal and insertion correspond to $l = 0$ and $l = 2$.</p> <p>The answer only considers $vx$, because $w$ does not matter: it will not be pumped.</p> <p>If $v$ or $x$ contains any $a$s, double them to get more $a$s than $c$s. If $v$ or $x$ contains any $b$s, remove them to get fewer $b$s than $d$s. $v$ and $x$ will never contain both $a$s and $c$s, because $|vwx| \leq n$. The same is true about the other pair of symbols.</p>
653
question answering
Please check my answer to a pseudocode CASE statement question
https://cs.stackexchange.com/questions/119541/please-check-my-answer-to-a-pseudocode-case-statement-question
<p>This is a pseudocode question in my IGCSE CompSci textbook:</p> <blockquote> <p>Use a <code>CASE</code> statement to display the day of the week if the variable <code>DAY</code> has the value 1 to 7 and an error otherwise.</p> </blockquote> <p>This is my answer to it:</p> <pre><code>CASE Day OF
  1 : OUTPUT "Monday"
  2 : OUTPUT "Tuesday"
  3 : OUTPUT "Wednesday"
  4 : OUTPUT "Thursday"
  5 : OUTPUT "Friday"
  6 : OUTPUT "Saturday"
  7 : OUTPUT "Sunday"
  OTHERWISE OUTPUT "Day invalid"
ENDCASE
</code></pre> <p>Is this answer correct?</p> <p>(I realise this is a very rudimentary question for a Year 10/11 CompSci class but I’m homeschooled with no teaching guidance whatsoever. So a big cheers to anyone who would take the time to check my answer for me.)</p>
<p>This problem is about switch-case statements, so any pseudocode that switches on the <code>DAY</code> variable with clear steps would be acceptable. Your answer is correct and the steps are clear. There is no single right way of writing pseudocode.</p> <pre><code>switch(DAY)
  case 1: print(&quot;Monday&quot;)
  case 2: print(&quot;Tuesday&quot;)
  case 3: print(&quot;Wednesday&quot;)
  case 4: print(&quot;Thursday&quot;)
  case 5: print(&quot;Friday&quot;)
  case 6: print(&quot;Saturday&quot;)
  case 7: print(&quot;Sunday&quot;)
  default: print(&quot;Invalid Day!&quot;)
</code></pre> <p>Above is another possible way to write a case statement for the <strong>DAY</strong> variable, as your question expects.</p>
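For comparison, here is a short Python sketch (my own illustration, not from the textbook) of the same logic; a dictionary lookup with a default plays the role of OTHERWISE/default:

```python
DAY_NAMES = {1: "Monday", 2: "Tuesday", 3: "Wednesday", 4: "Thursday",
             5: "Friday", 6: "Saturday", 7: "Sunday"}

def day_name(day):
    # dict.get supplies the fallback value, like OTHERWISE in the
    # CASE statement or default in the switch.
    return DAY_NAMES.get(day, "Day invalid")

assert day_name(1) == "Monday"
assert day_name(8) == "Day invalid"
```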
654
question answering
API to retrieve answers for general questions
https://cs.stackexchange.com/questions/65785/api-to-retrieve-answers-for-general-questions
<p>I am looking for a service (with an API) where I can ask a general question (e.g., when was Einstein born?) and retrieve an answer from the Web.</p> <p>Is there any available service to do that? I have tried Watson services, but they didn't work as expected.</p> <p>Thanks,</p>
655
question answering
Question about an answer related to designing an ASM for a sequence detector
https://cs.stackexchange.com/questions/140996/question-about-an-answer-related-to-designing-an-asm-for-a-sequence-detector
<p>The question says:</p> <blockquote> <p>Design a sequence detector that searches for a series of binary inputs to satisfy the pattern 01[0*]1, where [0*] is any number of consecutive zeroes. The output (Z) should become true every time the sequence is found.</p> </blockquote> <p>The answer to this example in the document I am reading is this:</p> <p><a href="https://i.sstatic.net/YbuJa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YbuJa.png" alt="enter image description here" /></a></p> <p>My question is: After going from state 'first', the decision box checks X. If it is 0, then it does not fit the pattern 01[0*]1. So, it should go back to state 'start'. In this answer, it goes back to state 'first' instead, and so a sequence that violates the pattern could eventually get accepted. For example, the sequence 0011 does not match the pattern given, yet it will be accepted by the given ASM. The first 0 will land us in state 'first', and the 2nd 0 will go back to 'first' and then the 1 will lead to state 'second' and the final one will go to state 'success', outputting Z.</p> <p>Am I correct to think so? If not, why?</p> <p>The document can be found here: <a href="https://www.mil.ufl.edu/3701/classes/joel/17%20Lecture.pdf" rel="nofollow noreferrer">https://www.mil.ufl.edu/3701/classes/joel/17%20Lecture.pdf</a></p> <p>The question is tagged with finite automata because there is no ASM tag. The two are similar enough.</p>
<p>Yup. It looks like this accepts 0011, and it shouldn't, so it looks to me like the automaton is buggy.</p>
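To double-check, here is a small Python simulation of the ASM (the transition table is my reconstruction from the question's walkthrough, and restarting after 'success' is an assumption); it confirms that 0011 asserts Z even though it does not match 01[0*]1:

```python
def asm_outputs_z(bits):
    """Simulate the ASM from the lecture notes; report whether Z ever fires."""
    state = "start"
    z = False
    for b in bits:
        if state == "start":
            state = "first" if b == "0" else "start"
        elif state == "first":
            # The buggy edge: on 0 the ASM stays in 'first' instead of
            # falling back to 'start'.
            state = "second" if b == "1" else "first"
        elif state == "second":
            if b == "1":
                z = True          # 'success' state: output Z
                state = "start"   # assumption: restart after success
            else:
                state = "second"  # absorb the [0*] part of the pattern
    return z

assert asm_outputs_z("011")    # matches 01[0*]1, so Z should fire: fine
assert asm_outputs_z("0101")   # matches 01[0*]1: fine
assert asm_outputs_z("0011")   # does NOT match 01[0*]1, yet Z still fires
```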
656
question answering
How can I optimize the systems in the paper &quot;Ensembling Ten Math Information Retrieval Systems&quot;
https://cs.stackexchange.com/questions/144264/how-can-i-optimize-the-systems-in-the-paper-ensembling-ten-math-information-ret
<p>My question is about the paper <a href="http://ceur-ws.org/Vol-2936/paper-06.pdf" rel="nofollow noreferrer">Ensembling Ten Math Information Retrieval Systems</a>.</p> <p>I already know the algorithms in the paper can answer questions using only dot products (see <a href="https://cs.stackexchange.com/questions/144236/a-question-about-the-paper-ensembling-ten-math-information-retrieval-systems">this post</a>).</p> <p>How can I optimize them?</p> <p>Could you add here (and in the next paper you produce) the preprocessing time, the memory usage, and the query-time cost of question answering for all systems? (this is a solicitation for @Witiko)</p>
<p>Here are two major ways in which you can make the ten math information retrieval systems in <a href="http://ceur-ws.org/Vol-2936/paper-06.pdf" rel="nofollow noreferrer">the MIRMU and MSM at ARQMath 2021 paper</a> (and many other experimental retrieval systems) faster:</p> <ol> <li>using appropriate data structures for <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Exact_methods" rel="nofollow noreferrer">exact nearest-neighbor search</a>,</li> <li>using <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximation_methods" rel="nofollow noreferrer">approximate nearest-neighbor search</a>.</li> </ol> <h4>Data structures</h4> <p>All systems except MSM – PZ use either sparse or dense matrices to represent the document corpus and to perform exact nearest-neighbor search (NNS). For a corpus of size <span class="math-container">$n$</span>, this representation is easy to construct (<span class="math-container">$\mathcal{O}(n)$</span>), but quite costly for NNS even if we bound the query length by a constant, because we need to check every document in the collection (<span class="math-container">$\mathcal{O}(n)$</span>) even when we only need the 1,000 nearest neighbors or fewer.</p> <p>MSM – PZ uses <a href="https://github.com/castorini/pyserini" rel="nofollow noreferrer">pyserini</a>, which is a Python library on top of the industry-strength Apache Lucene. Lucene uses <a href="https://en.wikipedia.org/wiki/Inverted_index" rel="nofollow noreferrer">the inverted index</a> to represent a document corpus. For a corpus of size <span class="math-container">$n$</span>, the worst-case time complexity of NNS is still <span class="math-container">$\mathcal{O}(n)$</span>, but since we will only check documents that contain at least one term from the query, the actual speed is much higher.
You can see this in the speed results from the paper, where MSM – PZ is much faster at searching (1.1 seconds per query on average) than the other systems:</p> <p><a href="https://i.sstatic.net/6PgKZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PgKZ.png" alt="Average query time and the training and preprocessing of the individual systems (log scale)." /></a></p> <p>All systems that use NNS with sparse high-dimensional vectors (i.e. all except MSM – PZ and MIRMU – CompuBERT) can be optimized by using an inverted index instead of matrices as their data structure. For MIRMU – CompuBERT, we could use any data structure for NNS with dense low-dimensional vectors, of which <a href="https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbor-algorithms" rel="nofollow noreferrer">there are several</a>.</p> <h4>Approximate nearest-neighbor search</h4> <p>Moving away from exact search allows you to achieve <span class="math-container">$\mathcal{O}(1)$</span> time complexity of NNS with dense low-dimensional vectors at the cost of accuracy: some nearest neighbors may be missed and some distant neighbors may show up at the family dinner. Embarrassing! Popular algorithms implemented in Python include <a href="https://github.com/spotify/annoy" rel="nofollow noreferrer">annoy</a> and <a href="https://github.com/facebookresearch/faiss" rel="nofollow noreferrer">faiss</a>. The MIRMU – CompuBERT system could be optimized this way.</p> <p>Since the constant-time complexity depends on a constant number of dimensions, the above does not apply to sparse high-dimensional vectors, where the dimensionality is not constant but equals the vocabulary size (<span class="math-container">$\sqrt{n}$</span> according to <a href="https://en.wikipedia.org/wiki/Heaps%27_law" rel="nofollow noreferrer">Heaps' law</a>).
<a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">The curse of dimensionality</a> rears its ugly head once again! Here are some tentative ideas for a solution:</p> <ol> <li>You could use <a href="https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition" rel="nofollow noreferrer">dimensionality reduction techniques</a> to reduce high-dimensional sparse vectors to low-dimensional dense vectors. However, there are reasons to believe that <a href="https://dl.acm.org/doi/10.1145/1964897.1964900" rel="nofollow noreferrer">this will impact your retrieval performance</a>.</li> <li>Alternatively, you can achieve <span class="math-container">$\mathcal{O}(1)$</span> time complexity in an inverted index by pre-sorting documents by a common criterion that reflects general usefulness (e.g. <a href="https://en.wikipedia.org/wiki/PageRank" rel="nofollow noreferrer">PageRank</a>) and then stopping the retrieval after a certain number of documents has been collected. See <a href="https://nlp.stanford.edu/IR-book/html/htmledition/computing-scores-in-a-complete-search-system-1.html" rel="nofollow noreferrer">the textbook of Manning</a> for a more complete discussion and implementation details. In theory, <a href="https://github.com/castorini/pyserini" rel="nofollow noreferrer">pyserini</a> should be capable of this, but I am not sufficiently familiar with it.</li> </ol> <h4>Why is MIRMU – SCM so slow?</h4> <p>It is, isn't it? 3.72 minutes per query on average.
The SCM uses the formula <span class="math-container">$\vec{x}^T\cdot S\cdot\vec{y}$</span> to compute the similarity of two documents <span class="math-container">$\vec{x}, \vec{y}$</span> according to a term similarity matrix <span class="math-container">$S$</span>:</p> <p><a href="https://i.sstatic.net/GdS2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdS2n.png" alt="The representation of two documents, “Hi, world” and “Hello, world” in the TF-IDF vector space model (VSM, left) and in the TF-IDF soft vector space model (soft VSM, right)." /></a></p> <p><em>The representation of two documents, “Hi, world” <span class="math-container">$(\vec{x})$</span> and “Hello, world” <span class="math-container">$(\vec{y})$</span> in the TF-IDF vector space model (VSM, left) and in the TF-IDF soft vector space model (soft VSM, right). In the VSM, different terms correspond to orthogonal axes, making the document representations distant despite their semantic equivalence. In the soft VSM, different terms correspond to non-orthogonal axes, where the angle between the axes is proportional to the similarity <span class="math-container">$(S)$</span> of terms in a word embedding space (middle).</em></p> <p>This formula <a href="https://en.wikipedia.org/wiki/Cosine_similarity#Soft_cosine_measure" rel="nofollow noreferrer">is quadratic in the vocabulary size</a> (<span class="math-container">$\mathcal{O}(\sqrt{n})^2 = \mathcal{O}(n)$</span> according to <a href="https://en.wikipedia.org/wiki/Heaps%27_law" rel="nofollow noreferrer">Heaps' law</a>), which makes NNS extremely costly (<span class="math-container">$n\cdot \mathcal{O}(n) = \mathcal{O}(n^2)$</span>). 
However, as I show in the proof of Theorem 3.4 in <a href="https://arxiv.org/abs/1808.09407" rel="nofollow noreferrer">my paper</a>, we can reduce the time complexity from <span class="math-container">$\mathcal{O}(n)$</span> to <span class="math-container">$\mathcal{O}(1)$</span> by making sure that matrix <span class="math-container">$S$</span> contains no more than <span class="math-container">$C$</span> non-zero elements in any column for a constant <span class="math-container">$C$</span> (we used <span class="math-container">$C = 100$</span> in MIRMU – SCM). <a href="https://github.com/RaRe-Technologies/gensim/blob/fe8e2042f0c8c16abc502220f5a4f88c72d2b31d/gensim/similarities/termsim.py#L580" rel="nofollow noreferrer">Here is our implementation</a> in the <a href="https://radimrehurek.com/gensim/" rel="nofollow noreferrer">gensim</a> Python library. It is painfully slow. Why?</p> <p>The paper assumes that we will compute <span class="math-container">$\vec{x}^T\cdot S\cdot\vec{y}$</span> as a single operation, which will allow us to eliminate any <span class="math-container">$i$</span> and <span class="math-container">$j$</span> for which either <span class="math-container">$x_i = 0, y_j = 0,$</span> or <span class="math-container">$s_{ij} = 0$</span>: <span class="math-container">$$\vec{x}^T\cdot S\cdot\vec{y} = \sum_i\sum_j x_i\cdot s_{ij}\cdot y_j$$</span> <a href="https://github.com/RaRe-Technologies/gensim/blob/fe8e2042f0c8c16abc502220f5a4f88c72d2b31d/gensim/similarities/termsim.py#L580" rel="nofollow noreferrer">Our implementation</a> uses the SciPy library for convenience. SciPy will separate <span class="math-container">$\vec{x}^T\cdot S\cdot\vec{y}$</span> into two operations: <span class="math-container">$\vec{x}^T\cdot S$</span> and <span class="math-container">$\_\cdot\vec{y}$</span>.
This makes our implementation <span class="math-container">$O(\sqrt{n})$</span> instead of <span class="math-container">$O(1)$</span>.</p> <p>Here are some ideas for an improvement:</p> <ol> <li>If you can figure out how to compute <span class="math-container">$\vec{x}^T\cdot S\cdot\vec{y}$</span> over sparse high-dimensional <span class="math-container">$\vec{x}, \vec{y},$</span> and <span class="math-container">$S$</span> efficiently in Python, you could significantly speed up MIRMU – SCM. This may involve low-level Cython or C programming.</li> <li>Alternatively, you could use the transformations discussed in Theorem 4.2 of my paper to first transform the document vectors to a form <span class="math-container">$\vec{x}', \vec{y}',$</span> where a simple dot product <span class="math-container">$\vec{x}'^T\cdot\vec{y}'$</span> is equivalent to <span class="math-container">$\vec{x}^T\cdot S\cdot\vec{y}$</span>, and then use some general solution for exact or approximate NNS over sparse high-dimensional vectors as discussed in the first part of my answer.</li> </ol> <p>Neither is a simple undertaking and will require significant programming expertise and effort. Perhaps you can contribute?</p>
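As a concrete sketch of the first idea (a toy illustration of mine, not the gensim code), the product x^T · S · y can be computed in a single pass over the nonzeros when the rows of S are stored as short adjacency lists:

```python
def soft_dot(x, y, s_rows):
    """Compute x^T . S . y touching only nonzero entries.

    x, y   -- sparse vectors as dicts {index: value}
    s_rows -- sparse matrix as {i: [(j, s_ij), ...]}; keeping at most C
              entries per row/column gives the constant-factor bound
              mentioned in the answer above
    """
    total = 0.0
    for i, xi in x.items():
        for j, sij in s_rows.get(i, ()):
            yj = y.get(j)
            if yj is not None:
                total += xi * sij * yj
    return total

# Toy check: x = [1, 0, 2], y = [0, 3, 1], S = I plus s_01 = 0.5,
# so x^T.S.y = 1*0.5*3 + 2*1*1 = 3.5.
x = {0: 1.0, 2: 2.0}
y = {1: 3.0, 2: 1.0}
s_rows = {0: [(0, 1.0), (1, 0.5)], 1: [(1, 1.0)], 2: [(2, 1.0)]}
assert abs(soft_dot(x, y, s_rows) - 3.5) < 1e-12
```

A production version would push this loop down to Cython or C, as the answer suggests, but the access pattern (never materializing the intermediate x^T · S) is the point.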
657
question answering
Does CompuBERT suffer from the curse of dimensionality?
https://cs.stackexchange.com/questions/144246/does-compubert-suffer-from-the-curse-of-dimensionality
<p>This question is about <a href="http://ceur-ws.org/Vol-2936/paper-06.pdf" rel="nofollow noreferrer">CompuBERT</a> (<a href="https://drive.google.com/drive/folders/1bxYwWzDX3z81S4TwUaTvqZBHtiMOngez" rel="nofollow noreferrer">new implementation</a>).</p> <p>I have read that textual data has high dimensionality, so I would like to know the behaviour of CompuBERT, which uses a dot product for question answering.</p> <p>If I increase the embedding dimension, will CompuBERT suffer from the curse of dimensionality?</p> <p>(I googled for the same question for sentence-BERT/siamese-BERT but there is no study yet about that...)</p>
<p><a href="https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra" rel="nofollow noreferrer">The time complexity of matrix multiplication</a> depends on both dimensions <span class="math-container">$m$</span> (the number of documents) and <span class="math-container">$n$</span> (the number of features). That is to say: Yes, increasing the number of features will adversely affect the speed.</p> <blockquote> <p>I googled for the same question for sentence-BERT/siamese-BERT but there is no study yet about that...</p> </blockquote> <p>I don't expect there ever will be. The above observation is quite pedestrian.</p>
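To make the m-by-n dependence concrete, here is a toy pure-Python sketch (my own illustration): scoring m documents against one query is m*n multiply-adds, so the cost grows linearly in the embedding dimension n as well as in the corpus size m.

```python
def score_all(doc_embs, query):
    """Dot-product scores of every document embedding against the query.

    For m documents of dimension n this performs m*n multiply-adds,
    i.e. the cost is linear in both the corpus size and the embedding
    dimension.
    """
    return [sum(d_i * q_i for d_i, q_i in zip(doc, query))
            for doc in doc_embs]

docs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
query = [1.0, 0.0]
scores = score_all(docs, query)
assert scores == [1.0, 0.0, 0.5]
assert max(range(len(scores)), key=scores.__getitem__) == 0
```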
658
question answering
Is there a publicly available informer tagger or dataset?
https://cs.stackexchange.com/questions/67164/is-there-a-publicly-available-informer-tagger-or-dataset
<p>I am working on a question answering system. I've learned that informer spans are valuable features for question classification. However from what I've read I wasn't able to find any publicly available dataset for this task, or a trained tagger. Is there any? People seem to just hand-label their dataset.</p> <p>If there aren't any available, then how does one label the dataset and build the tagger?</p>
659
question answering
How can I make inference in CompuBERT?
https://cs.stackexchange.com/questions/139361/how-can-i-make-inference-in-compubert
<p>My question is about the paper <a href="http://ceur-ws.org/Vol-2696/paper_235.pdf" rel="nofollow noreferrer">Three is Better than One Ensembling Math Information Retrieval Systems</a> (a system used for math information retrieval, both for finding answers and for formula search; code on <a href="https://github.com/MIR-MU/CompuBERT" rel="nofollow noreferrer">github</a>).</p> <p>Before the questions, the following figures illustrate the two systems:</p> <p>CompuBERT (taken from <a href="http://ceur-ws.org/Vol-2696/paper_235.pdf" rel="nofollow noreferrer">Three is Better than One Ensembling Math Information Retrieval Systems</a>).</p> <p>sentence-BERT (taken from <a href="https://arxiv.org/pdf/1908.10084.pdf" rel="nofollow noreferrer">Sentence-BERT Sentence Embeddings using Siamese BERT-Networks</a>).</p> <p><a href="https://i.sstatic.net/CIVko.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CIVko.png" alt="enter image description here" /></a></p> <p>CompuBERT uses sentence-BERT, but the paper <a href="http://ceur-ws.org/Vol-2696/paper_235.pdf" rel="nofollow noreferrer">Three is Better than One Ensembling Math Information Retrieval Systems</a> does not explain the inference (question answering) step. (The inference step in sentence-BERT is illustrated at the bottom left of the picture.)</p> <p>How can I make inference with CompuBERT?</p> <p>(Please explain using the implementation on <a href="https://github.com/MIR-MU/CompuBERT" rel="nofollow noreferrer">github</a>.)</p>
<p>CompuBERT's objective is to create representations, i.e. embeddings <span class="math-container">$E$</span>, such that the distance of a question to a relevant answer is minimal, while its distance to an irrelevant answer is maximal.</p> <p>On indexing (see in <a href="https://github.com/MIR-MU/CompuBERT/blob/c535ff13d954ce718b20a95e4499d68d1cd32d61/question_answer/sbert_ir_system.py#L59" rel="nofollow noreferrer">code</a>), you infer these embeddings <span class="math-container">$Ea_{1,..,n}$</span> for every potential answer in the collection.</p> <p>On inference (see in <a href="https://github.com/MIR-MU/CompuBERT/blob/c535ff13d954ce718b20a95e4499d68d1cd32d61/question_answer/sbert_ir_system.py#L101" rel="nofollow noreferrer">code</a>), you first infer an embedding <span class="math-container">$Eq$</span> of the asked question. The relevant answers to this question will then be the ones with the closest embeddings in your indexed collection. Hence, you search for the answers with minimal distance <span class="math-container">$\text{dist}(Eq, Ea_i)$</span>.</p> <p>As shown in Fig. 7, CompuBERT is fine-tuned to optimize <span class="math-container">$\text{dist}(Eq, Ea_i)=\cos(Eq, Ea_i)$</span>, so using a cosine distance might be a good choice of distance function. Authors of <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence-BERT</a> report good performance for other distance metrics as well, depending on the use-case.</p> <p>Note that since then, we have extended our research further and found that for this specific task, better objectives than <em>Cosine Similarity Loss</em> can be found, such as <em><a href="https://github.com/UKPLab/sentence-transformers/tree/c3ffe418a4f434d7eadc18f30408cf1970bba642/examples/training/quora_duplicate_questions#multiplenegativesrankingloss" rel="nofollow noreferrer">Multiple Negatives Ranking Loss</a></em>.
A paper with the results, together with a simplified implementation, will be published in the following weeks.</p>
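Putting the indexing and inference steps together, a minimal pure-Python sketch of the retrieval loop could look as follows (toy hypothetical embeddings of mine; the real system obtains Eq and Ea_i from the Sentence-BERT encoder in the repository):

```python
import math

def cosine_sim(a, b):
    # Maximal cosine similarity corresponds to minimal cosine distance.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(question_emb, answer_embs, k=3):
    """Rank the indexed answer embeddings by similarity to the question."""
    ranked = sorted(range(len(answer_embs)),
                    key=lambda i: cosine_sim(question_emb, answer_embs[i]),
                    reverse=True)
    return ranked[:k]

# Toy index of three "answer" embeddings; the first points the same way
# as the question, so it should rank first.
answers = [[1.0, 0.1], [0.0, 1.0], [-1.0, 0.2]]
question = [1.0, 0.0]
assert retrieve(question, answers, k=3)[0] == 0
```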
660
question answering
Dynamic programming: optimal order to answer questions to score the maximum expected marks
https://cs.stackexchange.com/questions/157814/dynamic-programming-optimal-order-to-answer-questions-to-score-the-maximum-expe
<p>You have <span class="math-container">$n$</span> questions in an exam. Question <span class="math-container">$i$</span> is answered correctly with probability <span class="math-container">$p_i &gt; 0$</span>. If question <span class="math-container">$i$</span> is answered correctly, you get <span class="math-container">$R_i$</span> marks. You can choose to answer the questions in any order. As long as you are giving correct answers, you can accumulate all the marks. However, whenever your answer to a question is incorrect, the examination is over and you are not allowed to answer any other question. Given <span class="math-container">$R_i$</span> and <span class="math-container">$p_i$</span>, what is the optimal order in which you should answer the questions to maximize the total expected marks you score?</p> <p>Here is my approach - Let <span class="math-container">$F_t(S_t)$</span> be the expected max score you can have when <span class="math-container">$t$</span> questions are remaining, and the set of questions remaining is <span class="math-container">$S_t$</span>. We get the recurrence as <span class="math-container">$$F_t(S_t)=\max_{i \in S_t} [p_i(R_i+F_{t-1}(S_t-\{i\}))]$$</span> with <span class="math-container">$F_0(\phi)=0$</span>.<br /> Now, <span class="math-container">$F_1(\{i\})=p_iR_i$</span>. <span class="math-container">$F_2(\{i,j\})=\max(p_i(R_i+p_jR_j),p_j(R_j+p_iR_i))$</span>. We see that the first term is larger when <span class="math-container">$$\frac{p_iR_i}{1-p_i}&gt;\frac{p_jR_j}{1-p_j}.$$</span> This hints that the ordering may be based on the value of <span class="math-container">$\frac{p_iR_i}{1-p_i}$</span>, but I am not sure how to give a formal proof, or how to solve the recurrence efficiently?</p>
<p>First, if any <span class="math-container">$p_i=0$</span>, then immediately throw that question away, since you're guaranteed to lose. If any probability is <span class="math-container">$1$</span>, then immediately answer it! (after all, why <em>risk</em> not getting the reward when you're guaranteed to get it!). So I'll assume we're now considering probabilities in the range <span class="math-container">$(0,1)$</span>.</p> <p>Suppose you <strong>know</strong> that the <em><strong>optimal order</strong></em> is <span class="math-container">$1,...,n$</span> because a magical <em>fairy</em> told you so. What is your expected score? Well, it is</p> <p><span class="math-container">$$E_1 = p_{1}R_{1} + p_{1}p_{2}R_{2} + p_{1}p_{2}p_{3}R_{3} + .... + \prod_{j\leq n}p_{j} R_{n}$$</span></p> <p>Now suppose you decided to switch indices <span class="math-container">$i$</span> and <span class="math-container">$i+1$</span> in the order that the fairy told you, so now your order is <span class="math-container">$1, ..., i-1,{\bf i+1,i}, i+2, ...,n$</span>. What is your expectation now? Well, it is</p> <p><span class="math-container">$$ E_2 = p_{1}R_{1} + p_{1}p_{2}R_{2} + .... + \left(\prod_{j&lt;i}p_{j}\right)p_{i+1}R_{i+1}+\left(\prod_{j&lt;i}p_{j}\right)p_{i+1} p_{i}R_{i} + ....+\left( \prod_{j\leq n}p_{j} \right)R_{n} $$</span></p> <p>Because <span class="math-container">$E_1$</span> is optimal, it must be that <span class="math-container">$E_1 \geq E_2 \implies E_1 - E_2 \geq 0$</span> (otherwise the cheeky oracle lied to us!). But notice that <span class="math-container">$E_1, E_2$</span> are only different in their <span class="math-container">$i$</span> and <span class="math-container">$i+1$</span> terms; all other terms are <strong>exactly</strong> the same (verify this!).
So:</p> <p><span class="math-container">$$E_1-E_2 = \left(\prod_{j&lt;i}p_{j}\right)p_{i}R_{i}+\left(\prod_{j&lt;i}p_{j}\right)p_ip_{i+1}R_{i+1} -\left(\prod_{j&lt;i}p_{j}\right)p_{i+1}R_{i+1}-\left(\prod_{j&lt;i}p_{j}\right)p_{i+1}p_{i}R_{i} \geq 0$$</span></p> <p>Since all probabilities are positive, we divide by <span class="math-container">$\prod_{j&lt;i} p_j$</span> to get:</p> <p><span class="math-container">$$ p_{i}R_{i} + p_i p_{i+1}R_{i+1} - p_{i+1}R_{i+1} - p_i p_{i+1}R_{i} \geq 0 $$</span> Or rearranging</p> <p><span class="math-container">$$ p_i R_i (1-p_{i+1}) \geq p_{i+1}R_{i+1} (1-p_i)$$</span></p> <p>Or:</p> <p><span class="math-container">$$ \frac{p_i R_i }{1-p_i} \geq \frac{p_{i+1} R_{i+1} }{1-p_{i+1}}$$</span> Wait a minute! This is a condition on the optimal ordering the fairy gave us! So we actually don't need the fairy! Just sort by this order, and that's your optimal ordering!</p>
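The exchange argument can also be checked numerically. Here is a small sketch of mine (toy numbers), comparing the order sorted by the key p_i R_i / (1 - p_i) against brute force over all permutations:

```python
from itertools import permutations

def expected_score(order, p, R):
    # E = p_1 R_1 + p_1 p_2 R_2 + ... : each reward counts only if every
    # question up to and including it was answered correctly.
    total, alive = 0.0, 1.0
    for i in order:
        alive *= p[i]          # probability we survive through question i
        total += alive * R[i]
    return total

def greedy_order(p, R):
    # Sort by the exchange-argument key p_i R_i / (1 - p_i), descending.
    return sorted(range(len(p)),
                  key=lambda i: p[i] * R[i] / (1 - p[i]), reverse=True)

p = [0.9, 0.5, 0.7, 0.3]
R = [1.0, 10.0, 4.0, 20.0]
best = max(expected_score(o, p, R) for o in permutations(range(len(p))))
assert abs(expected_score(greedy_order(p, R), p, R) - best) < 1e-9
```

The sort takes O(n log n), so no dynamic programming over subsets is needed once the exchange argument is in hand.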
661
question answering
Does anyone know the answer the following questions on converting logical - physical addresses
https://cs.stackexchange.com/questions/123872/does-anyone-know-the-answer-the-following-questions-on-converting-logical-phys
<p>Due to the unforeseen pandemic, I am unable to speak to my tutor about the following question. I have emailed him, but I have not had an answer for weeks. Can someone please enlighten me. </p> <p>Image and question to be answered below. Please provide an explanation, as I am struggling to find an answer:</p> <p><a href="https://i.sstatic.net/bSMH9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bSMH9.png" alt="enter image description here"></a></p>
662
question answering
Is it possible to query a Knowledge Graph using only Matrix operations?
https://cs.stackexchange.com/questions/116627/is-it-possible-to-query-a-knowledge-graph-using-only-matrix-operations
<p>I am interested in formulating a knowledge graph query as a matrix multiplication/dot product/inner product.</p> <p>I have found by chance the paper <a href="http://ws.nju.edu.cn/courses/ke/reading/1_variational.pdf" rel="nofollow noreferrer">Variational Reasoning for Question Answering with Knowledge Graph</a>, which uses an inner product in its last operation (see the picture).</p> <p><a href="https://i.sstatic.net/sh1zG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sh1zG.png" alt="enter image description here"></a></p> <p>The problem is that the paper does not thoroughly explain the process of inference. It says "we use beam search", but the search itself is not explained. Read: </p> <blockquote> <p>During inference, we are only given the question q, and ideally we want to find the answer by computing <span class="math-container">$\arg \max \log (P_{θ_1}(y|q)P_{θ_2} (a|y, q))$</span>. However, this computation is quadratic in the number of entities and thus too expensive. Alternatively, we can approximate it via <strong>beam search</strong>.</p> </blockquote> <p>I think that there is an implicit inner product structure inside the beam search algorithm, but I haven't found any explanation through Google.</p> <p>Can someone explain the above-mentioned inner product better?</p>
663
question answering
Creating a database of math questions and of answers and a front end for them
https://cs.stackexchange.com/questions/142931/creating-a-database-of-math-questions-and-of-answers-and-a-front-end-for-them
<p>Need some guidance on where to begin. I am just learning Python, if that has any bearing.</p> <p>I want to create a page on my site where users log in, choose multiple topics from a list, choose between levels of difficulty, choose whether or not to be timed, and then be presented with random multiple-choice questions on the topics at the chosen level that I have created in advance. They should be able to answer the questions and then be scored at the end if they chose to be timed (like an exam setting), or else be given the option to see the written solutions as they go, which I have also created. So they would see one question at a time. If not timed, they could choose to see the solution at any time; if timed, they could skip the question or choose an answer, and then, when all of the questions have been skipped or answered, would be scored.</p> <p>I imagine that I would put the questions and solutions in some kind of database, and then create some kind of interface with dropdown lists. The idea is to create a practice space for students preparing for certain kinds of exams.</p> <p>How do I create such a database of questions and of solutions? Using SQL or MySQL (I have never used either)?</p> <p>And for the front end, could that be done with Python or simply HTML?</p>
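The question-bank part can be sketched with Python's built-in sqlite3 module; a minimal illustrative sketch (all table and column names are assumptions for illustration, not a fixed design):

```python
import sqlite3

# Minimal illustrative schema: one table of questions, one of answer choices.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE question (
    id         INTEGER PRIMARY KEY,
    topic      TEXT NOT NULL,
    difficulty INTEGER NOT NULL,       -- e.g. 1 = easy, 3 = hard
    body       TEXT NOT NULL,
    solution   TEXT NOT NULL           -- written solution, shown on request
);
CREATE TABLE choice (
    question_id INTEGER REFERENCES question(id),
    label       TEXT NOT NULL,         -- 'A', 'B', ...
    body        TEXT NOT NULL,
    is_correct  INTEGER NOT NULL DEFAULT 0
);
""")

conn.execute("INSERT INTO question VALUES (1, 'algebra', 1, '2+2=?', 'Add the numbers.')")
conn.executemany("INSERT INTO choice VALUES (1, ?, ?, ?)",
                 [("A", "3", 0), ("B", "4", 1), ("C", "5", 0)])

# Pick a random question matching the user's chosen topics and difficulty.
row = conn.execute(
    "SELECT id, body FROM question "
    "WHERE topic IN ('algebra', 'geometry') AND difficulty <= 2 "
    "ORDER BY RANDOM() LIMIT 1").fetchone()
```

The front end could then be plain HTML forms served by any Python web framework; the timed/untimed logic would live in application code, not in the database.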
664
question answering
What questions can denotational semantics answer that operational semantics can&#39;t?
https://cs.stackexchange.com/questions/63874/what-questions-can-denotational-semantics-answer-that-operational-semantics-can
<p>I am familiar with operational semantics (both small-step and big-step) for defining programming languages. I'm interested in learning denotational semantics as well, but I'm not sure if it will be worth the effort. Will I just be learning the same material from a different point of view, or are there insights I can only gain from understanding denotational semantics?</p>
<p>There is no real agreement on what characterises denotational semantics (see also <a href="https://cstheory.stackexchange.com/questions/3577/what-constitutes-denotational-semantics">this</a> article), except that it must be <strong>compositional</strong>. That means that if $\newcommand{\SEMB}[1]{\lbrack\!\lbrack #1 \rbrack\!\rbrack} \SEMB{\cdot}$ is the semantic function, mapping programs to their meaning, something like the following must be the case for all $n$-ary program constructors $f$ and all programs $M_1$, ..., $M_n$ (implicitly assuming well-typedness):</p> <p>$$ \SEMB{f(M_1, ..., M_n)} = trans(f) (\SEMB{M_1}, ..., \SEMB{M_n}) $$</p> <p>Here $trans(f)$ is the constructor corresponding to $f$ in the semantic domain. Compositionality is similar to the concept of homomorphism in algebra. </p> <p>Operational semantics is not compositional in this sense. Historically, denotational semantics was developed partly because operational semantics wasn't compositional. Following D. Scott's breakthrough order-theoretic denotational semantics of $\lambda$-calculus, most denotational semantics used to be order-theoretic. I imagine that -- apart from pure intellectual interest -- denotational semantics was mostly invented because at the time (1960s):</p> <ol> <li>It used to be difficult to reason about operational semantics.</li> <li>It used to be difficult to give axiomatic semantics to non-trivial languages.</li> </ol> <p>Part of the problem was that the notion of equality of programs was not as well-understood as it is now. I'd argue that both problems have been ameliorated to a substantial degree, (1) for example by bisimulation-based techniques coming from process theory (which can be seen as a specific form of operational semantics) or e.g. Pitts's work on operational semantics and program equivalence, and (2) by the developments of e.g.
separation logic or Hoare logics derived as typed versions of Hennessy-Milner logics via programming language embeddings in typed π-calculi. Note that program logics (= axiomatic semantics) are compositional, too.</p> <p>Another way of looking at denotational semantics is that there are many programming languages and they all look kind-of similar, so maybe we can find a simple, yet universal meta-language, and map all programming languages in a compositional manner to that meta-language. In the 1960s, it was thought that some typed $\lambda$-calculus was that meta-language. A picture might say more than 1000 words: </p> <p><a href="https://i.sstatic.net/y79Lp.png" rel="noreferrer"><img src="https://i.sstatic.net/y79Lp.png" alt="enter image description here"></a></p> <p>What is the advantage of this approach? Maybe it makes sense to look at it from an economic POV. If we want to prove something interesting about a class of object programs, we have two options.</p> <ul> <li><p>Prove it directly on the object level.</p></li> <li><p>Prove that the translation to the meta-level (and back) 'preserves' the property, then prove it for the meta-level, and then push the result back to the object level.</p></li> </ul> <p>The combined cost of the latter is probably higher than the cost of the former, <i>but</i> the cost of proving the translation can be amortised over all future uses, while the cost of proving the property for the meta-level is much smaller than that of the proof on the object level.</p> <p>The original order-theoretic approach to denotational semantics has so far not lived up to this promise, because complicated language features such as object orientation, concurrency and distributed computation have not yet been given precise order-theoretic semantics. By "precise" I mean semantics that matches the natural operational semantics of such languages.</p> <hr> <p>Is it worth learning denotational semantics?
If you mean order-theoretic approaches to denotational semantics, then probably not, unless you want to work in the theory of programming languages and need to understand older papers. Another reason for learning order-theoretic approaches to denotational semantics is the beauty of this approach.</p>
665
question answering
Is it possible to run the transformer (performer) SLiM without using GPUs?
https://cs.stackexchange.com/questions/144309/is-it-possible-to-run-the-transformer-performer-slim-without-using-gpus
<p>This question is about the article <a href="https://arxiv.org/pdf/2012.11346.pdf" rel="nofollow noreferrer">Sub-Linear Memory: How to Make Performers SLiM</a>.</p> <p>I googled for the fastest transformer, and I think I have found it. It is called SLiM.</p> <p>The problem is that the authors use only GPUs, and I don't understand much about GPUs. I don't even know whether there are algorithms that run only on GPUs and are impossible to run on usual hardware.</p> <p>SLiM is implemented with TensorFlow and uses a prefix-sum algorithm.</p> <p>Does SLiM have something I am missing, or something I am not aware of, that makes its implementation disadvantageous (compared to Performers) on CPUs?</p> <p>Will SLiM run very slowly on common hardware? How many times slower is SLiM when it runs on common hardware?</p> <p>Note: I am interested in using it for tasks other than those described in the above-cited paper, like question answering over a collection on the order of millions of documents, so I don't have the memory to test the big <span class="math-container">$O$</span> constants in the algorithms... (and I can't afford to buy GPUs...)</p>
666
question answering
Help interpreting this deadlock question
https://cs.stackexchange.com/questions/45764/help-interpreting-this-deadlock-question
<p>I have this assignment question but I am a bit unsure how to go about answering it. The question is as follows and accompanied by the image below: </p> <p>Three processes are competing for six resources labelled A to F as shown below.</p> <p><a href="https://i.sstatic.net/8ilRU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ilRU.jpg" alt="enter image description here"></a></p> <p>Using a resource allocation graph, show the possibility of a deadlock in the implementation above.</p> <p>I know how to do the graph but what I am struggling to understand is, do I take the <code>Release();</code> methods into consideration or only the <code>Get();</code> methods. And also, would <code>P0()</code> access resources A, B and C first or will each process run simultaneously meaning <code>P0()</code> access resource A, <code>P1()</code> access resource D and <code>P2()</code> access resource C, and then the second set of <code>Get()</code> methods are requested simultaneously? Lastly it does not specify how many instances (dots) are in each resource, is there any indication as to how to determine/go about working with this? As soon as I can clear up these misunderstandings I can draw the diagram</p>
<p>Here is one bad scenario:</p> <ol> <li>P0 obtains A and B.</li> <li>P1 obtains D and E.</li> <li>P2 obtains C and F.</li> </ol> <p>At this point, we have reached deadlock, since P0 is waiting for P2 to release C, P1 is waiting for P0 to release B, and P2 is waiting for P1 to release D.</p>
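The circular wait in this scenario can also be checked mechanically with a wait-for graph; a small sketch (the edges are taken from the scenario above; cycle detection is a standard DFS):

```python
# Wait-for graph: an edge X -> Y means "X is waiting for a resource held by Y".
waits_for = {
    "P0": ["P2"],   # P0 waits for C, held by P2
    "P1": ["P0"],   # P1 waits for B, held by P0
    "P2": ["P1"],   # P2 waits for D, held by P1
}

def has_cycle(graph):
    """DFS cycle detection with three-colour marking; a grey->grey edge is a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}

    def visit(v):
        colour[v] = GREY
        for w in graph[v]:
            if colour[w] == GREY:
                return True
            if colour[w] == WHITE and visit(w):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and visit(v) for v in graph)
```

A cycle in the wait-for graph is exactly the circular-wait condition for deadlock.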
667
question answering
How to get the quick answer to the &quot;Is there AT LEAST ONE subset that contains all given elements?&quot; question if the set of subsets is very large?
https://cs.stackexchange.com/questions/70405/how-to-get-the-quick-answer-to-the-is-there-at-least-one-subset-that-contains-a
<p>I have a very large amount of objects that look like this: </p> <pre><code>DataDict = { id1: {"a": true, "b": true, "bc": true, "hgf": true}, id2: {"bcwe": true, "nKNNn": true, "mjj": true, "AAt": true}, id3: {"h": true, "a": true, "mjj": true, "ABwAU": true}, id4: {"wvzy": true, "zzba": true, "abc": true, "a": true}, ... } </code></pre> <p>or just (as a set of sets, note that <code>id</code>s may be excluded) </p> <pre><code>DataSet = { {"a", "b", "bc", "hgf"}, {"bcwe", "nKNNn", "mjj", "AAt"}, {"h", "a", "mjj", "ABwAU"}, {"wvzy", "zzba", "abc", "a"}, ... } </code></pre> <p>Let <code>M</code> denote the total number of objects in <code>DataDict</code> (or subsets in <code>DataSet</code>), and let <code>N</code> denote the total number of unique names of properties found in <code>DataDict</code> (or the total number of unique strings found in <code>DataSet</code>). </p> <p>The question is: given the set of strings <code>{string1, string2, string3, ...}</code>, how to get the <strong>Yes/No</strong> answer (assuming that there is a way to prepare the “index” for <code>DataSet</code>) to the “Does <code>DataSet</code> contain <strong>at least one</strong> subset that contains <code>string1</code> AND <code>string2</code> AND <code>string3</code> AND ...?” question as fast as possible?</p> <p>In another form, the question is: given the array <code>A = [string1, string2, string3, ...]</code>, how to build the index (data structure) for <code>DataDict</code> that allows to quickly determine if it contains <strong>at least one</strong> object <code>obj</code> (I don’t care which object to choose, moreover, I don't want to return this object, all I want is that the function should return <code>true</code> if such an object exists, and <code>false</code> if not) such that <code>DataDict.obj.string1 is true AND DataDict.obj.string2 is true AND DataDict.obj.string3 is true AND ...</code>? 
</p> <p>The only way that I see is to build the index like this: </p> <pre><code>{ "a": [1, 3, 4], "abc": [4], ... } </code></pre> <p>and, for example, if <code>A = ["a", "abc"]</code>, then I need to find the intersection of two arrays (<code>[1, 3, 4]</code> and <code>[4]</code>), but <strong>stop as soon as one common element is found</strong> and return <code>true</code> (if there are no common elements, return <code>false</code>). But these operations are very costly and time-consuming, which often leads to unacceptable waiting times. Is there a way that guarantees significantly better performance in all cases? </p> <p>It would be extraordinarily nice if there were a solution that (assuming the index already exists) answers the query in time depending only on <code>A.length</code>. Good solutions may depend on <code>log M</code> or <code>log N</code>, or even something close to <code>log M</code> multiplied by <code>log N</code>, but I cannot imagine how to find such a solution...</p>
<p>It's possible to answer such queries in time $O(M^{1 - S/N})$, where $S$ is the size of the subset in the query, using a data structure from Ron Rivest:</p> <p><a href="https://people.csail.mit.edu/rivest/pubs/Riv76b.pdf" rel="nofollow noreferrer">Partial-Match Retrieval Algorithms</a>. Ronald L. Rivest. SIAM Journal Computing, vol 5 no 1, March 1976.</p> <p>Basically, you convert each set to a bitvector of length $N$, then use Rivest's partial-match queries. This is ever so slightly faster than the naive algorithm, whose running time will be about $O(M)$ (or a bit more).</p> <hr> <p>Your idea using indices is another approach that might be faster in practice, especially if most sets are much smaller than $N$. Note that it's possible to optimize this a bit. Your index will have, for each string $x$, a list of indices of sets that contain $x$ <em>and</em> the length of that list. Now, given a set $A$, you can check for each $x \in A$ the length of the corresponding list in the index, and find the shortest such list, and enumerate all entries in that list to see if any of them are a superset of $A$. See also <a href="https://cstheory.stackexchange.com/q/19526/5038">https://cstheory.stackexchange.com/q/19526/5038</a>. I think the worst-case running time is no better than the naive algorithm, but in practice it might perform significantly better.</p>
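The shortest-posting-list idea from the last paragraph can be sketched in a few lines (a plain-Python sketch of the inverted index, not Rivest's structure):

```python
from collections import defaultdict

def build_index(sets):
    """Map each string to the list of indices of sets containing it."""
    index = defaultdict(list)
    for i, s in enumerate(sets):
        for x in s:
            index[x].append(i)
    return index

def contains_superset(sets, index, query):
    """True iff some set in `sets` contains every string in `query`."""
    query = set(query)
    if not query:
        return bool(sets)
    if any(x not in index for x in query):
        return False
    # Scan only the shortest posting list; test each candidate set directly.
    shortest = min((index[x] for x in query), key=len)
    return any(query <= sets[i] for i in shortest)

data = [{"a", "b", "bc", "hgf"},
        {"bcwe", "nKNNn", "mjj", "AAt"},
        {"h", "a", "mjj", "ABwAU"},
        {"wvzy", "zzba", "abc", "a"}]
idx = build_index(data)
```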
668
question answering
Relationship of algorithm complexity and automata class
https://cs.stackexchange.com/questions/52748/relationship-of-algorithm-complexity-and-automata-class
<p>I have been unable to find a graph depicting, or text answering, the following question: Is there a direct relationship between the complexity of an algorithm (such as the best/worst case of quicksort) and the class of automata that can implement the algorithm? For example, is there a range of complexities that pushdown automata can express? If the answer is yes, is there a resource depicting the relationship? Thanks!</p>
<p>Yes, there are relationships in many cases!</p> <p>For example, it is known that any language which is accepted by a reversal-bounded counter machine is in $P$ (see <a href="http://www.sciencedirect.com/science/article/pii/0022000081900283">here</a>).</p> <p>Similarly, we know that all regular languages are in $P$, since there's a polynomial-time algorithm for determining if an NFA accepts a given word.</p> <p>There are too many to enumerate here, but in general, more limited computation models are in easier complexity classes.</p>
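For instance, the polynomial-time NFA acceptance check mentioned above is an on-the-fly subset simulation; a sketch (the example NFA, accepting strings over {a, b} that end in "ab", is made up for illustration; no epsilon-moves):

```python
def nfa_accepts(delta, start, accepting, word):
    """Track the set of reachable states: O(|word| * |Q|^2) overall."""
    current = {start}
    for symbol in word:
        current = {q2 for q in current for q2 in delta.get((q, symbol), ())}
    return bool(current & accepting)

# Example NFA accepting strings over {a, b} that end in "ab".
delta = {
    (0, "a"): {0, 1},
    (0, "b"): {0},
    (1, "b"): {2},
}
```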
669
question answering
Which of the following languages is accepted by this Pushdown Automaton?
https://cs.stackexchange.com/questions/74140/which-of-the-following-languages-is-accepted-by-this-pushdown-automaton
<p>This is a question from a past paper. I am struggling to get my head around the concept of a PDA. I understand that it is a Finite Automaton with a stack but am stuck as far as answering questions like this one. Thanks. </p> <p><a href="https://i.sstatic.net/ONLzK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ONLzK.jpg" alt="PDA Diagram"></a></p>
<p>I think the answer is (a), $\{a^n b^{2n} \mid n &gt; 0\}$.</p> <p>First, the machine pushes $a$'s onto the stack until the first $b$. Then, for every two $b$'s in the string, it pops off one $a$. Finally, if the machine is at the end of the string and the stack is empty, it accepts.</p> <p>If the stack of $a$'s runs out while the machine is still reading pairs of $b$'s, or there are $a$'s left on the stack when the input ends, then the machine rejects.</p> <blockquote> <p>I am struggling to get my head around the concept of a PDA.</p> </blockquote> <p>Notation varies, but in the diagram you provided, I think <code>nop</code> means "read the thing on top of the stack and leave it there" and <code>pop</code> means "read and remove the thing on top of the stack".</p> <p>For more info, you could read the chapter on PDAs in Michael Sipser's book <em>Introduction to the Theory of Computation</em>.</p>
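One way to convince yourself is to replay the stack discipline directly; a plain-Python sketch of the counting argument (not the exact transition diagram from the exam paper):

```python
def accepts(word):
    """Membership test for {a^n b^(2n) | n > 0} using an explicit stack."""
    stack = []
    i = 0
    # Phase 1: push one symbol per leading 'a'.
    while i < len(word) and word[i] == "a":
        stack.append("a")
        i += 1
    if not stack:                 # n > 0 is required
        return False
    # Phase 2: pop one 'a' for every two 'b's.
    while i + 1 < len(word) and word[i] == word[i + 1] == "b":
        if not stack:
            return False          # more b-pairs than a's
        stack.pop()
        i += 2
    # Accept iff the whole input is consumed and the stack is empty.
    return i == len(word) and not stack
```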
670
question answering
Show that the collection of Turing-recognizable languages is closed under homomorphism
https://cs.stackexchange.com/questions/89151/show-that-the-collection-of-turing-recognizable-languages-is-closed-under-homomo
<p>I have seen this question here: <a href="https://cs.stackexchange.com/questions/87475/closure-of-turing-recognizable-languages-under-homomorphism">Closure of Turing-recognizable languages under homomorphism</a>. But that question actually answers "What is the relation between homomorphism and concatenation?", so I still don't know how to show that the collection of Turing-recognizable languages is closed under homomorphism. Could anyone help me with this, please?</p> <p>In my opinion, closure under homomorphism is very similar to closure under Kleene star, but I am convinced that I have to put marks on any number of tape cells because my language $f(L)$ may contain many strings. Am I right? </p> <p>Many thanks! </p>
<p>The proof is posted by Anwar Saiah on <a href="https://cs.stackexchange.com/q/87709/83244">this post</a>. I quote it here (with its formatting improved).</p> <blockquote> <p>Given a language <span class="math-container">$L$</span> that is Turing recognizable, a TM <span class="math-container">$M$</span> that recognizes it, and a homomorphism <span class="math-container">$f$</span>, we build an NTM <span class="math-container">$M'$</span> that recognizes <span class="math-container">$f(L)$</span>.</p> <p><span class="math-container">$M'$</span> looks like this:</p> <p>On input <span class="math-container">$w$</span>:</p> <ul> <li><p>Non-deterministically feed words from <span class="math-container">$\Sigma^∗$</span> to <span class="math-container">$f$</span> until you obtain <span class="math-container">$w$</span>.</p> </li> <li><p>Run <span class="math-container">$M$</span> on <span class="math-container">$f^{−1}(w)$</span>; if <span class="math-container">$M$</span> accepts, accept; otherwise reject.</p> </li> </ul> </blockquote> <p>This works because <span class="math-container">$M'$</span> is a non-deterministic Turing machine. An NTM accepts <span class="math-container">$w$</span> if there is at least one branch accepting <span class="math-container">$w$</span>, so if there is any word <span class="math-container">$x$</span> such that <span class="math-container">$f(x)=w$</span> and <span class="math-container">$M(x)$</span> accepts, this will find it and accept.</p>
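The nondeterministic guess can be made concrete by searching over candidate preimages. A toy deterministic sketch (M and f here are illustrative stand-ins; this L is decidable, so no step-bounding is needed; for a mere recognizer one would dovetail over (word, step-bound) pairs):

```python
from itertools import product

def M(x):
    """Decider for the example language L = { a^n b^n : n >= 0 }."""
    n = len(x) // 2
    return x == "a" * n + "b" * n

def f(x):
    """Example homomorphism: a -> 01, b -> 1."""
    return x.replace("a", "01").replace("b", "1")

def in_f_of_L(w, alphabet="ab"):
    """Test membership of w in f(L) by trying every x with |x| <= |w|.
    The length bound is valid because f maps every letter to a nonempty string."""
    for length in range(len(w) + 1):
        for letters in product(alphabet, repeat=length):
            x = "".join(letters)
            if f(x) == w and M(x):
                return True
    return False
```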
671
question answering
Recursive language with non-recursive subsets
https://cs.stackexchange.com/questions/17860/recursive-language-with-non-recursive-subsets
<p>I have a professor who is really poor at explaining the material, which is what makes answering his questions very hard. Here is the question:</p> <blockquote> <p>Recursive language with non-recursive subsets. Does one exist?</p> </blockquote> <p>I'm sure it is a very simple and easy answer but I can't figure it out. Don't give me the answer just point me in the right direction and I'm sure I'll figure it out.</p>
<p>Hint. Take a very big recursive language over any alphabet you like. Verrrry big. Something so big it has all kinds of subsets.</p>
672
question answering
tcp congestion avoidance
https://cs.stackexchange.com/questions/124460/tcp-congestion-avoidance
<p>I came across this question:</p> <pre><code>At the beginning of transmission t, a TCP connection in congestion avoidance mode has a congestion window w = 60 segments.Packet loss is observed during transmission rounds t, t+10, and t+20 by getting multiple ACKs. What is the congestion window at the end of round t, t + 10, and t + 20? If there's no further packet loss, when will the window of w = 60 segments be reached again? </code></pre> <p>Answer:</p> <pre><code>Congestion window is halved during transmission round t, leading to w = 30. At the beginning of transmission round t + 10, the window has increased to w = 40 but it will be halved again during transmission round t + 10 to w = 20. Similarly, after transmission round t + 20, the window will be w = 15. With no further packet loss, 45 transmission rounds later at t + 65, the window will reach again the original size of w = 60. </code></pre> <p>I know that due to multiple ACKs, packet loss reduces the window size to half. So at t+10, w = 30 makes sense. However, I don't really get the rest. Why does the window size increase by 10? And at t+20, why doesn't it reduce to 10 instead of 15? If someone could explain the steps, that would be great. </p>
<p>In congestion avoidance mode, the congestion window increases by 1 MSS every RTT (round-trip time, i.e. one transmission round). Here the current congestion window (cwnd) is 60, and during transmission round t we get 3 duplicate ACKs, so the window is halved to 30 MSS. From t to t+1 the window increases by 1, so from t to t+10 it increases by 10, reaching 40. During round t+10 we again get 3 duplicate ACKs, so the window becomes 40/2 = 20. From round t+10 to t+20 the cwnd grows by 10 more, to 30, and the 3 duplicate ACKs during round t+20 halve it to 30/2 = 15 MSS. From there, growing by 1 MSS per round, it takes 60 - 15 = 45 more rounds to return to 60, i.e. at round t+65.</p>
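The arithmetic can be replayed in a few lines (AIMD in congestion-avoidance mode: +1 MSS per round, halve on triple duplicate ACKs):

```python
def simulate(w0, loss_rounds, total_rounds):
    """Return {round: window at the END of that round}, with round t taken as 0."""
    w = w0
    history = {}
    for t in range(total_rounds + 1):
        if t in loss_rounds:
            w //= 2        # multiplicative decrease on 3 duplicate ACKs
        history[t] = w
        w += 1             # additive increase: +1 MSS per RTT
    return history

h = simulate(60, {0, 10, 20}, 65)   # losses at t, t+10, t+20
```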
673
question answering
Third Normal Form and Boyce Code normal form
https://cs.stackexchange.com/questions/119207/third-normal-form-and-boyce-code-normal-form
<p>I know that this is not a question-answering site, but for the sake of explaining my doubt I have to post the entire question.</p> <blockquote> <p>Consider the following statements.</p> <pre><code>1. If relation R is in 3NF and every key is simple, then R is in BCNF
2. If relation R is in 3NF and R has only one key, then R is in BCNF

(a) Both 1 and 2 are true
(b) 1 is true but 2 is false
(c) 1 is false and 2 is true
(d) Both 1 and 2 are false
</code></pre> </blockquote> <hr> <p>The answer given is <span class="math-container">$a$</span>, but how can that be? </p> <p>I agree with the first statement, but for the second one, consider a relation <span class="math-container">$R={\{A,B,C,D\}}$</span> where <span class="math-container">$AB$</span> is the key, and say <span class="math-container">$C-&gt;B$</span>. Then it satisfies 3NF, right? But it is NOT in BCNF, right? Because here we have a non-prime attribute deriving a prime attribute. </p>
<blockquote> <p><span class="math-container">$S_1:$</span> If relation R is in 3NF and every key is simple, then R is in BCNF.</p> <p><span class="math-container">$S_2:$</span> If relation R is in 3NF and R has only one key, then R is in BCNF.</p> </blockquote> <p>Both statements are correct.</p> <p><strong>Your counterexample is as follows:</strong></p> <p><span class="math-container">$R(A,B,C,D)$</span> where the FDs that hold are <span class="math-container">$FD = \{AB \rightarrow CD, C\rightarrow B\}$</span>.</p> <p>Now, this example does not satisfy the premise of either statement, because there are two candidate keys, i.e. <span class="math-container">$\{AB\}, \{AC\}$</span>: the keys are not simple (violating the premise of <span class="math-container">$S_1$</span>), and R has more than one key (violating the premise of <span class="math-container">$S_2$</span>).</p>
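The two candidate keys can be verified mechanically with the standard attribute-closure algorithm:

```python
def closure(attrs, fds):
    """Closure of an attribute set under a list of (lhs, rhs) functional dependencies."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

R = set("ABCD")
fds = [(set("AB"), set("CD")), (set("C"), set("B"))]

def is_superkey(attrs):
    return closure(attrs, fds) == R
```

Both AB and AC turn out to be candidate keys, so neither statement's premise holds: the keys are not simple, and there is more than one of them.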
674
question answering
CPU and GPU differences
https://cs.stackexchange.com/questions/56082/cpu-and-gpu-differences
<p>What is the difference between a single processing unit of a CPU and a single processing unit of a GPU? <br> Most places I've come across on the internet cover the high-level differences between the two. I want to know what instructions each can perform, how fast they are, and how these processing units are integrated into the complete architecture.  <br> It seems like a question with a long answer, so lots of links are fine.   <br> In the CPU, the FPU runs real-number operations. How fast are the same operations in each GPU core? If fast, why is it fast?  <br> I know my question is very generic, but my goal is to have such questions answered.</p>
<p>These are not real numbers in the sense of $\mathbb{R}$: the CPU has double-precision floating-point units, while the GPU has very few units processing them; floats on the GPU were traditionally <em>halfs</em>.<br> This is due to graphics (the main goal before general-purpose parallel processing), where results are rounded for display, so the speed-vs-accuracy tradeoff went that way.<br> GPU core frequencies are lower than CPUs', the set of operations is very limited on a GPU (augmented by the video decoder), and there is a huge difference in branch prediction: the CPU has a very long and complex predictor, while the GPU only recently got one added. </p> <p>A single "core" on a GPU is a Streaming Multiprocessor (there are about 4 - 16 per card); it includes CUDA cores (about 32-64), which work in lock-step, so it differs from CPU threads (which are not locked). </p> <p>It is hard to compare like this, but in short: a single core on a GPU is a parallel unit working more slowly than a CPU core, with less memory, fewer registers and instructions than a CPU, very short branch prediction, and a preference for <em>half</em> floats - nowadays normal floats, with only about one or two processing units for double precision. Some time ago integer operations were also slower on the GPU (not only because of frequency), but this changed recently. </p> <p>The same operations on floats are slower on the GPU than on the CPU due to frequency.</p> <p>You might be interested in the <a href="http://developer.amd.com/resources/documentation-articles/developer-guides-manuals/" rel="nofollow">AMD architecture</a>, <a href="https://developer.nvidia.com/key-technologies" rel="nofollow">Nvidia architecture</a> and <a href="http://www.intel.eu/content/www/eu/en/processors/architectures-software-developer-manuals.html" rel="nofollow">Intel architecture</a> documentation to compare instruction sets and hardware differences further.</p>
675
question answering
How to find the intersection of two FAs and then check if two FAs are equal?
https://cs.stackexchange.com/questions/154969/how-to-find-the-intersection-of-two-fas-and-then-check-if-two-fas-are-equal
<p><a href="https://i.sstatic.net/tAaWw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tAaWw.png" alt="Two FAs" /></a></p> <p>I am still quite confused about how to properly answer questions on the <code>intersection and equality of two FAs</code> in table form, and how to manipulate the transformations. It is challenging at times when I encounter new FAs to compare. It would help a lot if there were a proper way, and some tips, to understand the question well and an approach to answering it properly.</p> <p>My attempt last time on another question for checking two FAs (I added variables on the nodes when I complemented them): <a href="https://i.sstatic.net/LMSXk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LMSXk.png" alt="FA1" /></a> <a href="https://i.sstatic.net/g8XFF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g8XFF.png" alt="FA2" /></a> <a href="https://i.sstatic.net/2aoJH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2aoJH.png" alt="ATTEMPT" /></a> Google Sheets attempt, editable link (includes the complement attempt and the transition table and graph):</p> <p><a href="https://docs.google.com/spreadsheets/d/177-OzwGzMrdK9aBB6-PkWa6Odp-2GXaM/edit?usp=sharing&amp;ouid=103963659104705170979&amp;rtpof=true&amp;sd=true" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/177-OzwGzMrdK9aBB6-PkWa6Odp-2GXaM/edit?usp=sharing&amp;ouid=103963659104705170979&amp;rtpof=true&amp;sd=true</a></p> <p>Your responses would help me a lot on this one, since I am still getting a grasp of finite automata intersection and equality. Thank you very much!</p>
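A standard approach here is the product construction: run both DFAs in lock-step; the intersection accepts where both accept, and two DFAs are equivalent iff their symmetric-difference product accepts nothing. A generic sketch (the example machines below are made up for illustration, not the ones in the exercise images):

```python
from itertools import product as pairs

def product_dfa(d1, d2, mode):
    """Product of two complete DFAs d = (states, alphabet, delta, start, final).
    mode 'and' gives intersection; 'xor' gives symmetric difference."""
    (Q1, S, t1, s1, F1), (Q2, _, t2, s2, F2) = d1, d2
    states = set(pairs(Q1, Q2))
    delta = {((p, q), a): (t1[p, a], t2[q, a]) for (p, q) in states for a in S}
    if mode == "and":
        F = {(p, q) for (p, q) in states if p in F1 and q in F2}
    else:
        F = {(p, q) for (p, q) in states if (p in F1) != (q in F2)}
    return states, S, delta, (s1, s2), F

def is_empty(dfa):
    """The DFA accepts nothing iff no final state is reachable from the start."""
    _, S, delta, start, F = dfa
    seen, stack = {start}, [start]
    while stack:
        q = stack.pop()
        if q in F:
            return False
        for a in S:
            r = delta[q, a]
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return True

# Two DFAs for "even number of a's" (same language, different state names),
# and one for "odd number of a's".
even1 = ({0, 1}, {"a"}, {(0, "a"): 1, (1, "a"): 0}, 0, {0})
even2 = ({"x", "y"}, {"a"}, {("x", "a"): "y", ("y", "a"): "x"}, "x", {"x"})
odd   = ({"x", "y"}, {"a"}, {("x", "a"): "y", ("y", "a"): "x"}, "x", {"y"})
```

Equivalence checking is then just `is_empty(product_dfa(A, B, "xor"))`.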
676
question answering
Big-O notation based on runtime
https://cs.stackexchange.com/questions/76283/big-o-notation-based-on-runtime
<p>I am having a hard time determining the Big-O notation based on the runtime of the algorithm. I would really appreciate it if any one of you could give me some hints or tips for answering the question. </p> <p><a href="https://i.sstatic.net/AaXOM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AaXOM.png" alt="enter image description here"></a></p>
<p>Taking into account both of the comments above, you can arrive at a pretty good answer for your problem by measuring runtimes at different orders of magnitude and then using a log-log plot to find a polynomial to which your data converges (for instance, via Lagrange interpolation), so you can determine a good approximation of the function you're looking for. You can check <a href="http://introcs.cs.princeton.edu/java/41analysis/" rel="nofollow noreferrer">this site</a> for more info about not only the process I've just described, but also big-O notation and tilde notation.</p>
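Concretely, a runtime T(n) close to c*n^k appears on a log-log plot as a line of slope k, so the exponent can be estimated from two measurements (a pure-Python sketch with synthetic timings):

```python
import math

def estimated_exponent(n1, t1, n2, t2):
    """Slope of the line through two points on a log-log plot."""
    return (math.log(t2) - math.log(t1)) / (math.log(n2) - math.log(n1))

# Synthetic "measurements" of a quadratic-time routine: T(n) = 3e-8 * n^2.
T = lambda n: 3e-8 * n * n
k = estimated_exponent(1000, T(1000), 8000, T(8000))
```

In practice one measures several sizes and checks that the slope stabilises; an n log n algorithm, for example, shows a slope drifting slightly above 1.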
677
question answering
Given a complete, weighted and undirected graph $G$, complexity of finding a path with a specific cost
https://cs.stackexchange.com/questions/24231/given-a-complete-weighted-and-undirected-graph-g-complexity-of-finding-a-pat
<p>Given a fully connected graph $G$, suppose that we are searching for a simple path $P$ with a specific cost $c$. </p> <p>Is answering <em>yes</em> or <em>no</em> to that problem equivalent to the subset-sum problem? What would be the complexity of finding such a path?</p> <p>I have made a reduction from the subset-sum problem:</p> <p>If each number in a set $S$ is a vertex of $G$ and the weight of $&lt;i,j&gt;$ is $|i-j|$, then answering the question above <em>yes</em> or <em>no</em> is the same as solving the subset-sum problem.</p> <p>P.S. The initial vertex I have visited is added to the cost.</p> <p><strong>Edit: Edge weights</strong></p>
<p>Given an instance $\langle \{s_1,\ldots,s_n\}, T \rangle$ of subset sum, construct the following weighted graph. The vertices are $$ v_0,v_1,\ldots,v_n,u_1,\ldots,u_n. $$ Connect $v_{i-1}$ and $v_i$ with an edge of weight $s_i$. Connect $v_{i-1}$ to $v_i$ via $u_i$ (i.e., add the edges $\{v_{i-1},u_i\},\{u_i,v_i\}$) with zero weight edges. There is a simple path of total cost $T$ iff there is a subset of $\{s_1,\ldots,s_n\}$ summing to $T$. This shows that your problem is NP-hard, and in fact NP-complete, since it's clearly in NP.</p>
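The correspondence can be sanity-checked on a tiny instance: every simple path from v0 to vn chooses, at each step i, either the weighted edge (contributing s_i) or the zero-cost detour through u_i, so the achievable path costs are exactly the subset sums (a brute-force sketch, only for tiny inputs):

```python
from itertools import combinations, product

def gadget_path_costs(s):
    """Costs of all v0 -> vn paths in the gadget: one take/skip choice per s_i."""
    costs = set()
    for choice in product([True, False], repeat=len(s)):  # True = weighted edge
        costs.add(sum(w for w, take in zip(s, choice) if take))
    return costs

def subset_sums(s):
    return {sum(c) for r in range(len(s) + 1) for c in combinations(s, r)}
```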
678
question answering
If we have f(n) ∈ O(h(n)) and g(n) ∈ Ω(h(n)), does that mean that f(n) + g(n) ∈ Θ(h(n))?
https://cs.stackexchange.com/questions/159225/if-we-have-fn-%e2%88%88-ohn-and-gn-%e2%88%88-%e2%84%a6hn-does-that-mean-that-fn-gn-%e2%88%88
<p>It is quite easy to prove that f(n) + g(n) ∈ Ω(h(n)), but I am having trouble with proving/disproving that f(n) + g(n) ∈ O(h(n)).</p> <p>Someone suggested that <a href="https://cs.stackexchange.com/questions/158890/if-f-oh-and-g-%CE%A9h-then-fg-is">this question</a> answers mine, which it doesn't. As I've written above, proving that f(n) + g(n) ∈ Ω(h(n)) is easy. I am having trouble disproving that f(n) + g(n) ∈ O(h(n)).</p> <p>Thanks for any help.</p>
<p>&quot;We know nothing about the upper bound&quot; is a good intuition but it is not a formal proof that you can't hope to show <span class="math-container">$f(n) + g(n) \in O(h(n))$</span> if your only assumptions are <span class="math-container">$f(n) \in O(h(n))$</span> and <span class="math-container">$g(n) \in \Omega(h(n))$</span>.</p> <p>Fortunately, a counterexample is easily obtained by considering, e.g., <span class="math-container">$f(n)=1$</span>, <span class="math-container">$g(n)=n^2$</span>, and <span class="math-container">$h(n)=n$</span>.</p>
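A quick numeric illustration of why the counterexample works: with f(n) = 1, g(n) = n^2, h(n) = n, the ratio (f(n) + g(n)) / h(n) grows without bound, so no constant c can witness f + g in O(h):

```python
f = lambda n: 1
g = lambda n: n * n
h = lambda n: n

# (1 + n^2) / n = n + 1/n, which is unbounded in n.
ratios = [(f(n) + g(n)) / h(n) for n in (10, 100, 1000, 10_000)]
```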
679
question answering
How do I show that an equivalence class of a language containing an empty string is infinite
https://cs.stackexchange.com/questions/55203/how-do-i-show-that-an-equivalence-class-of-a-language-containing-an-empty-string
<p>The question is as follows: </p> <blockquote> <p>Let $L$ be a language (not necessarily regular) over an alphabet. Show that if the equivalence class containing the empty string $[ \epsilon ]$ is not $\{ \epsilon \}$, then it is infinite.</p> </blockquote> <p>How do I go about answering this? Would I need to use Myhill-Nerode theorem? From what I've read there's a corollary from the theorem that if a language defines an infinite set of equivalence classes, it is not regular. I'm not sure if that helps answer my question though.</p>
<p>Let $A$ be the alphabet. I suppose that the equivalence you are referring to is the equivalence $\sim$ defined on $A^*$ by $u \sim v$ if and only if, for all $x \in A^*$, $$ ux \in L \Leftrightarrow vx \in L $$ Now suppose there is a word $u \in A^+$ such that $u \sim \varepsilon$. Then by definition, $x \in L$ if and only if $ux \in L$. It follows by induction on $n$, that for all $n &gt; 0$, $x \in L$ if and only if $u^nx \in L$ and thus $[\varepsilon]$ contains $u^*$. If the alphabet is nonempty, it follows that $[\varepsilon]$ is infinite.</p>
680
question answering
Determining Big O
https://cs.stackexchange.com/questions/28461/determining-big-o
<pre><code>i &lt;-- 2
while (i &lt; n)
    someWork(...)
    i &lt;-- power(i, 2)
done
</code></pre> <blockquote> <p>Given that someWork(...) is an O(n) algorithm, what is the worst case time complexity?</p> </blockquote> <p>I've found this question answered on this site with the solution of O(n log n), however I don't quite understand why. I know that the power function has O(log n), but I don't understand why the overall Big O of the loop becomes O(n log n) instead of just O(n). Can someone please explain this to me?</p>
<p>You're missing a crucial component of Big-Oh analysis. The question is: given some number <code>n</code> and some code <code>p</code>, how does the <strong>maximum</strong> runtime of <code>p(n)</code> grow with respect to <code>n</code> -- that is, what are the bounds of <code>p(n)</code> as a function of <code>n</code>? When tracing the code, keep a running tally.</p> <p>Here, we see that <code>i</code> is initialized to 2 and the code enters a while loop. On each iteration, while <code>i&lt;n</code>, the loop performs <code>someWork(...)</code>, which has a maximum time complexity of <code>n</code> -- that is, linear time. We add <code>n</code> to our tally for each iteration.</p> <p>The next line assigns <code>i</code> the value of <code>i</code> raised to the power of <code>2</code>. The variable <code>i</code> directly governs the runtime of the loop, since the condition <code>i&lt;n</code> controls when the loop is entered and exited. We now know that the loop body executes <code>log(n)</code> times.</p> <p>Putting it all together: since <code>someWork(...)</code> takes linear time (<code>n</code>) and is performed <code>log(n)</code> times, the tally for the big-oh analysis of the function is <code>n*log(n)</code>.</p> <p>Does this make more sense now?</p>
681
question answering
Turing decidable languages
https://cs.stackexchange.com/questions/139532/turing-decidable-languages
<p>On an old worksheet I came across the question</p> <blockquote> <p>If L<sub>1</sub> and L<sub>2</sub> are two Turing decidable languages, then show that 𝐿<sub>1</sub>∪𝐿<sub>2</sub> and 𝐿<sub>1</sub>𝑜𝐿<sub>2</sub> are Turing decidable languages (high-level description with stages is enough).</p> </blockquote> <p>How do I go about answering this without being given a language to work from?</p>
<p>Both <span class="math-container">$L_1$</span> and <span class="math-container">$L_2$</span> are decidable. Hence, they have algorithms <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> (respectively) that decide them.</p> <p>Try to create a new turing machine (algorithm) using the two algorithms <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span>.</p> <p>For example, for the union <span class="math-container">$L_1\cup L_2$</span>, you can create the following algorithm:</p> <ul> <li>run <span class="math-container">$A_1$</span> on the input. If it accepted, then also accept.</li> <li>else, run <span class="math-container">$A_2$</span> on the input, and return what <span class="math-container">$A_2$</span> returned.</li> </ul>
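The same idea can be sketched in code, modeling the deciders $A_1$ and $A_2$ as total Python predicates on strings (an assumption for illustration; real deciders are Turing machines). Following the hint, a decider for the concatenation $L_1 \circ L_2$ tries every split point of the input:

```python
def union_decider(A1, A2):
    # Accept w iff A1 accepts it or A2 accepts it.
    return lambda w: A1(w) or A2(w)

def concat_decider(A1, A2):
    # Accept w iff some split w = xy has A1 accepting x and A2 accepting y.
    return lambda w: any(A1(w[:i]) and A2(w[i:]) for i in range(len(w) + 1))

A1 = lambda w: w.count("a") % 2 == 0   # example decidable language
A2 = lambda w: w.startswith("b")       # another example decidable language

U = union_decider(A1, A2)
C = concat_decider(A1, A2)
print(U("ba"))   # True: A2 accepts "ba"
print(C("aab"))  # True: split as "aa" (A1 accepts) + "b" (A2 accepts)
```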
682
question answering
How is momentum an approximation of Hessian based optimization?
https://cs.stackexchange.com/questions/106053/how-is-momentum-an-approximation-of-hessian-based-optimization
<p>In the answer to "what is the Hessian" at this site:</p> <p><a href="https://stackoverflow.com/questions/23297090/how-calculating-hessian-works-for-neural-network-learning">https://stackoverflow.com/questions/23297090/how-calculating-hessian-works-for-neural-network-learning</a></p> <p>the person answering the question concludes by saying</p> <p>"Fun fact, adding the momentum term to your gradient based optimization is (under sufficient conditions) approximating the hessian based optimization (and is far less computationally expensive)."</p> <p>I have tried to understand this point and look online but am not sure what is meant by the momentum approximating hessian based optimization. How is momentum like hessian based optimization?</p>
683
question answering
Local and Global storage with multithreading pools + locking threads
https://cs.stackexchange.com/questions/24750/local-and-global-storage-with-multithreading-pools-locking-threads
<p>I am having difficulty answering the following questions relating to the use of threading.</p> <p>Question 1 relates to the possibility of a local storage per thread and a global storage accessible to all threads. Consider the following scenario: A program creates a series of threads, each with its own local storage, kind of like a mutex system except that no thread can access another's memory, and a square matrix stored in global storage which is accessible to all threads. When the space complexity of the program is calculated, is the space complexity (# of elements in the square matrix)<sup>2</sup> + (local storage of finishing thread) or (# of elements in the square matrix)<sup>2</sup> + (local storage of all threads at the time that the finishing thread completed)?</p> <p>Question 2 relates to timing threads to go at a very precise rate. Consider the following scenario: A program creates a series of threads that continually add a random set of elements to their local storage. The program finishes and returns the thread that received the smallest number of elements in its random set. If one thread were to go faster than another, the program would be incorrect. Is there a way to "lock" the threads to go at the same speed?</p> <p>Any help answering these questions would be appreciated. Please comment if additional info is required.</p>
<p>I will answer Question 1. As @DavidRicherby said, you should post the other question separately.</p> <p>The <em>space complexity</em> is the <em>maximum</em> amount of space required while the algorithm is running. So it certainly would be more than the (number of elements in matrix) + (local storage of just finishing thread). It <em>might</em> be the total storage of all the threads at the time the program finished but only if (a) you haven't already exited any of the threads, and (b) none of the threads freed up a bunch of memory just before the program finished.</p> <p>You also need to be careful to not overcount. As described <a href="http://www.cs.northwestern.edu/academics/courses/311/html/space-complexity.html" rel="nofollow">here</a> there are a lot of optimizations that allow you to reuse space, so you can't just sum up all the memory allocated by every thread that ever existed. You really need to figure out the point in time during your program execution when the most memory is in use.</p>
684
question answering
Can machines of finite size ever solve their own halting problems?
https://cs.stackexchange.com/questions/86607/can-machines-of-finite-size-ever-solve-their-own-halting-problems
<p>A real-life computer can only store programs and inputs up to a certain length, which means that its halting problem can be solved with a lookup table. The most obvious way to represent this table grows exponentially with the number of bits in the computer's state description, and so wouldn't fit inside the memory of the computer that it was answering the question about. Aside from trivial cases, are there finite computer-like machines that can answer their own halting problems?</p>
685
question answering
Time complexity and content free evaluation
https://cs.stackexchange.com/questions/162592/time-complexity-and-content-free-evaluation
<p>I am having trouble answering the question below:</p> <p>&quot;Explain why the statement, “The running time of algorithm A is at least O(n^2)”, is content-free.&quot;</p> <p>The statement apparently does not give any information on the running time of A, but if the running time of A is T(n), then T(n) &gt;= O(n^2). Doesn't this at the very least tell us that A's running time grows at least as fast as n^2?</p>
<p>As suggested by <a href="https://cs.stackexchange.com/users/6759/steven">Steven</a>, I turned my comment into a full answer.</p> <p>I think the question aims to point out that for a constant running time <span class="math-container">$c$</span> you have <span class="math-container">$c \in \mathcal{O}(1) \subseteq \mathcal{O}(n^2)$</span>.</p> <p>So you might interpret the statement in your exercise as <em>&quot;There is a running time in <span class="math-container">$\mathcal{O}(n^2)$</span> that is a lower bound for <span class="math-container">$\mathcal{A}$</span>'s running time.&quot;</em></p> <p>However, as a constant running time is suitable to select from <span class="math-container">$\mathcal{O}(n^2)$</span> this is a completely trivial statement that does not provide any insights about <span class="math-container">$\mathcal{A}$</span>'s actual running time.</p>
686
question answering
Time complexity of languages recognized by linear bounded automata with restricted number of writes
https://cs.stackexchange.com/questions/72343/time-complexity-of-languages-recognized-by-linear-bounded-automata-with-restrict
<p>Suppose that $L$ is a language recognized by a linear-bounded automaton with the constraint that it can only change each of its input cells at most $t$ times each, where $t$ is some constant integer. Must $L$ belong to $P$, the class of languages decidable in polynomial time? Even more stringently, does there exist a deterministic decider for $L$ that runs in $O(n)$ time, where $n$ is the size of the input? Of course, if we can answer the second question in the affirmative, then we can answer the first question as well.</p> <p>One approach I'm considering to answering the first question is looking at the length-increasing grammar associated with $L$, and devising an algorithm that checks if this grammar is capable of generating its input in polynomial time.</p> <p>I'm not sure how to approach the second question. Of course, the tape head for the constrained LBA is writing at worst $O(n)$ times for any input, but the computation on the input still may involve transitions that don't write to the tape.</p>
687
question answering
Can partially observable MDPs be fully observable nonetheless?
https://cs.stackexchange.com/questions/143030/can-partially-observable-mdps-be-fully-observable-nonetheless
<p>I've read through a few definitions of a partially observable environment/MDP, and I need confirmation whether the <em>partial observability</em> is really a <em>generalization</em> of a MDP (misnomer) and not a <em>required</em> feature, just like when we call nondeterministic automata the union of a) deterministic automata and b) (what I like to call) the actually nondeterministic automata.</p> <p>The question can be satisfied simply by answering: Is tic-tac-toe partially observable?</p>
<p>Every MDP can be transformed into a POMDP (partially observable MDP), such that the signal (observation) is the state itself. There is no benefit in doing so, but it is still a valid transformation.</p> <p>In this sense, Tic-Tac-Toe is fully observable (you see the entire state you are in, and not a partial signal from it), and hence can be transformed into a partially observable MDP.</p>
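A minimal sketch of that transformation (all names assumed for illustration): wrap any MDP as a POMDP whose observation function is the identity, so the "partial" signal is in fact the whole state.

```python
def make_pomdp(transition, reward):
    # Observation function is the identity: the agent sees the full state,
    # so nothing is hidden — a valid but degenerate POMDP.
    def observe(state):
        return state
    return transition, reward, observe

# A trivial two-state MDP for illustration.
trans = {("s0", "a"): "s1", ("s1", "a"): "s0"}
rew = {("s0", "a"): 0.0, ("s1", "a"): 1.0}
T, R, O = make_pomdp(trans, rew)
print(O("s1"))  # s1 — the agent observes the exact state it is in
```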
688
question answering
How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?
https://cs.stackexchange.com/questions/71814/how-to-determine-places-transitions-and-tokens-in-a-scenario-when-modeling-with
<p>When modeling a scenario with Petri nets, how should I determine the places, transitions and tokens?</p> <p><strong>Example:</strong> </p> <p>There are two exam assistants in an exam hall observing the exam. They stand in front of the exam hall. When a student has a question, one of the assistants goes to him and answers his question while the other stays in front of the hall. When the question is answered, the assistant goes back to the front of the hall. </p> <p>The model must distinguish which assistant stays in front of the hall. Then the Petri net must be expanded so that the assistants take turns answering the questions.</p> <p>Here is my solution:</p> <p><a href="https://i.sstatic.net/lTORr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lTORr.png" alt="enter image description here"></a></p> <p>p0 represents the front of the exam hall. The two tokens represent the assistants and p1 is where the student sits. I also limited the capacity of p1 to one. </p> <p>The given solution is however totally different: </p> <p><a href="https://i.sstatic.net/cklRB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cklRB.png" alt="enter image description here"></a></p> <p>How should I generally think and how can I determine which part of a given scenario is represented by which part of the Petri net (places, transitions and tokens)?</p>
<p>If you find it challenging to apply Petri Nets in modeling an application then it may help to consider the following mapping between the types of words found in a text description of an application and the types of Petri Net elements found in a Petri Net diagram of the application:</p> <ol> <li>Nouns are candidates for places.</li> <li>Verbs are nominees for transitions (and/or inputs and outputs).</li> <li>Values, amounts or counts are contenders for tokens in places.</li> </ol> <h2>Example Application</h2> <p>[Consider Figure 1 for the following example]. For the “Exam Hall Problem”, think of (Infinity, 2017):</p> <ol> <li>A place as a holder or container for “things”.<ol type="a"><li>There are two exam assistants in front of the hall and each assistant must be distinguished from the other. Thus there are two places: one place for each assistant in front of the hall (P4, P5).</li><li>The exam assistants can answer questions at the same time. Thus there are two additional places: one place for each exam assistant answering a question (P2, P3).</li></ol></li> <li>A token in a place as the counter for the place, the number of “things”.<ol type="a"><li>If an exam assistant is not in front of the hall then the place is empty. If an exam assistant is in front of the hall then the place is not empty: the place has a token.</li><li>If an exam assistant is answering a question then the place is not empty: the place has a token. If an exam assistant is not answering a question then the place is empty.</li></ol></li> <li>A transition as a start or end of “activities”.<ol type="a"><li>An exam assistant going to a student to answer a question is an activity (T1, T3).</li><li>An exam assistant who answered a question is another activity (T2, T4).</li></ol></li> </ol> <p>The given solution for the “Exam Hall Problem” appears to be a solution for the second scenario: the exam assistants take turns answering questions (Infinity, 2017). Figure 1 is a modified version of the given solution, modified to satisfy the requirements of the first scenario. 
It includes text labels, chosen from or derived from the words in the example description.</p> <p><a href="https://i.sstatic.net/TTDCK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TTDCK.jpg" alt="Petri Net Model for Exam Assistants Answering Questions in an Exam Hall"></a> Figure 1 Petri Net Model for Exam Assistants Answering Questions in an Exam Hall</p> <p>For the “dynamic and interactive version” of this document, the visibility of labels in Figure 1 can be toggled by clicking on the diagram (Chionglo, 2017).</p> <h2>Reference</h2> <p>Chionglo, J. F. (2017). A Reply to "How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?" at Computer Science Stack Exchange. Available at <a href="https://www.academia.edu/31997446/A_Reply_to_How_to_with_Petri_Nets_At_Computer_Science_Stack_Exchange" rel="nofollow noreferrer">https://www.academia.edu/31997446/A_Reply_to_How_to_with_Petri_Nets_At_Computer_Science_Stack_Exchange</a>.</p> <p>Infinity. (2017). "How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?" at Computer Science Stack Exchange. Retrieved on Mar. 21, 2017 at <a href="https://cs.stackexchange.com/questions/71814/how-to-determine-places-transitions-and-tokens-in-a-scenario-when-modeling-with">How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?</a>.</p>
689
question answering
The origin and the meaning of &quot;Abstract Syntax Tree&quot;
https://cs.stackexchange.com/questions/81993/the-origin-and-the-meaning-of-abstract-syntax-tree
<p>This question is closely related to <a href="https://cs.stackexchange.com/q/13126"><em>Does an abstract syntax tree have to be a tree?</em></a> and is partially answered there, but I would like to be more precise and to have more concise answers.</p> <ol> <li><p>What is the origin of the term?</p></li> <li><p>Are there any formal definitions of AST?</p></li> <li><p>If I understand correctly the spirit of the use of "AST" term, it is a misnomer: an AST is any combinatorial object or datum that one tries to represent by strings of characters using more or less simple syntax that should ideally reflect the internal structure of the object/datum. For example, in first-order logic and in lambda calculus, expressions like <span class="math-container">$(\forall x)(x = x)$</span> or <span class="math-container">$(\lambda x.x)$</span> do not really represent trees because bound variables introduce loops. Do I understand correctly?</p></li> </ol> <p>Basically, I broke up here in simpler pieces a single question: <em>What is AST?</em> Without answering parts (1) and (2) the answer will probably be incomplete or hard to verify, and if (1) and (2) are answered, an answer to (3) probably should follow.</p> <hr> <p><em>Update</em></p> <p>I have realized that it is not the "syntax tree" that is abstract, but the syntax. This clarifies the terminology to me and makes this question less relevant. Basically, I parsed "abstract syntax tree" incorrectly.</p> <p>I also provided an <a href="https://cs.stackexchange.com/a/99212">answer</a> to the related question.</p>
<p>You are asking three questions. I will answer one.</p> <p>I don't think "abstract syntax tree" is a misnomer. It aims to represent the <em>syntax</em> of the expression. It is not claimed to represent all of the meaning (semantics) of the expression. The issue with bound variables that you mention relates to semantics rather than syntax.</p> <p>Also, remember that names are merely that: references to some concept, to serve as an aid to communication. Sometimes the concept is subtle or nuanced enough that no name is going to fully represent all the complexities of the concept, so we choose the best name we can (while keeping it fairly short). And a name is just an aid to communication, so not all concepts have a fully precise, rigorous definition -- if it is used in a way that aids understanding and facilitates communication, that is often OK.</p>
690
question answering
&quot;OOD allows ADTs to be created and used.&quot;
https://cs.stackexchange.com/questions/13503/ood-allows-adts-to-be-created-and-used
<p>I just had a CS mid-term and one of the questions was:</p> <blockquote> <p>OOD allows ADTs to be created and used.</p> <ul> <li>True</li> <li>False</li> </ul> </blockquote> <p>I answered false, but my answer was marked as incorrect. I suspect what the question means is "object-oriented design can be used to implement abstract data types", but if that's what it means it seems very clumsily worded to me. My rationale for answering false was that ADTs are conceptual and exist outside of any particular programming paradigm, so "creation" of an ADT is purely a theoretical exercise.</p> <p>To me it seems like the question is analogous to saying "OOD allows algorithms to be created". You might use OOD in the implementation of an algorithm, but it has nothing to do with its creation.</p> <p>Would I be right in appealing my mark? My professor is kind of an idiot and I've already corrected him on several other points throughout the term, so I'm trying to avoid antagonizing him if I'm wrong.</p>
<p>First, if this is exactly the sentence on the exam and not your translation, it's ambiguous. It could mean that OOD is one possible way to create and use ADT, or that creating and using ADTs requires OOD.</p> <p>Furthermore, ADT can mean two things: <a href="http://en.wikipedia.org/wiki/Abstract_data_type">abstract data type</a> or <a href="http://en.wikipedia.org/wiki/Algebraic_data_type">algebraic data type</a>. The two concepts are completely different but are often confused. An algebraic data type is a type that is defined by its recursive structure, or equivalently by the ways to build an object of that type. An abstract data type is a type that is defined by its properties, with the way to build objects remaining hidden.</p> <p>The second interpretation — that you need OOD for ADTs — is definitely false. There are programming languages which have no object orientation whatsoever but have ADTs in one sense or the other or both. Standard ML is a prime example: record and sum type definitions provide algebraic data types, while the module system provides abstract data types.</p> <p>The first interpretation — that ADTs can be implemented with OOD — is contentious, because it depends on terminology that isn't standard. In typical languages that provide objects, you can build algebraic data types: define several implementations of a class to make a sum type, and put multiple fields in a class to make a product type. However this is not intrinsic to object-oriented programming. Regarding abstract data types, most object-oriented languages provide some kind of abstraction facility by hiding the implementation of a class under some interface. 
However, this isn't intrinsic to OOP: the key feature of objects is inheritance, and you can have inheritance without any abstraction whatsoever.</p> <p>The question may be drawing a distinction between object-oriented design and object-oriented programming constructs, but OO<em>D</em> isn't really on the same plane as ADTs.</p> <p>All in all this is a poorly-worded exam question. The connection between OOD and ADTs is an interesting subject, but the question is not phrased in a meaningful way.</p>
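The two senses of "ADT" distinguished above can be illustrated in one sketch (Python here rather than the Standard ML the answer cites; all names are illustrative): an algebraic data type is defined by how its values are built, an abstract data type by its operations with the representation hidden.

```python
from dataclasses import dataclass
from typing import Union

# Algebraic data type (a sum of products): defined by its constructors,
# roughly like an SML datatype declaration.
@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

# Abstract data type: defined only by its operations; the list used as the
# representation is a hidden implementation detail.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

tree = Node(Leaf(1), Leaf(2))   # built structurally
s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```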
691
question answering
Radial Basis kernel producing the same decision boundary as a Linear kernel
https://cs.stackexchange.com/questions/148180/radial-basis-kernel-producing-the-same-decision-boundary-as-a-linear-kernel
<p>The following question is from the MIT 6.034 2006 Final Exam paper.</p> <p><a href="https://i.sstatic.net/KRH2j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KRH2j.png" alt="Image from the MIT 6.034 2006 Final Exam paper" /></a>In answering part 6.5, I wasn't certain why the radial basis kernel would produce the same decision boundary as a linear kernel (when considering the data set shown in part 6.4). I've searched the internet for some pointers, but no dice. I was wondering if anyone familiar with SVMs would be able to shine some light on this?</p> <p>Thank you in advance.</p>
692
question answering
Bayesian Network - Inference
https://cs.stackexchange.com/questions/13803/bayesian-network-inference
<p>I have the following Bayesian Network and need help with answering the following query.</p> <p><img src="https://i.sstatic.net/30XAh.jpg" alt="enter image description here"></p> <p><strong>EDITED:</strong></p> <p>Here are my solutions to questions a and b:</p> <p><strong>a)</strong></p> <pre><code>P(A,B,C,D,E) = P(A) * P(B) * P(C | A, B) * P(D | E) * P(E | C)
</code></pre> <p><strong>b)</strong></p> <pre><code>P(a, ¬b, c, ¬d, e) = P(a) * P(¬b) * P(c | a, b) * P(¬d | ¬b) * P(e | c)
                   = 0.02 * 0.99 * 0.5 * 0.99 * 0.88
                   = 0.0086
</code></pre> <p><strong>c)</strong></p> <p>P(e | a, c, ¬b)</p> <p>This is my attempt:</p> <pre><code>α × ∑_d P(a, ¬b, c, D = d, e)
  = α × { P(a) * P(¬b) * P(c | a, b) * P(d) * P(e | c)
        + P(a) * P(¬b) * P(c | a, b) * P(¬d) * P(e | c) }
</code></pre> <p>Note that α is the alpha (normalization) constant and that α = 1/P(a, ¬b, c).</p> <p>The problem I have is that I don't know how to compute the constant α that the sum is multiplied by. I would appreciate help because I'm preparing for an exam and have no solutions available to this old exam question.</p>
<p>You're on the right path. Here's my suggestion. First, apply the definition of conditional probability:</p> <p>$$ \Pr[e|a,c,\neg b] = {\Pr[e,a,c,\neg b] \over \Pr[a,c,\neg b]}. $$</p> <p>So, your job is to compute both $\Pr[e,a,c,\neg b]$ and $\Pr[a,c,\neg b]$. I suggest that you do each of them separately.</p> <p>To compute $\Pr[a,\neg b,c,e]$, it is helpful to notice that</p> <p>$$ \Pr[a,\neg b,c,e] = \Pr[a,\neg b,c,d,e] + \Pr[a,\neg b,c,\neg d,e]. $$</p> <p>So, if you can compute terms on the right-hand side, then just add them up and you've got $\Pr[a,\neg b,c,e]$. You've already computed $\Pr[a,\neg b,c,\neg d,e]$ in part (b). So, just use the same method to compute $\Pr[a,\neg b,c,d,e]$, and you're golden.</p> <p>Another way to express the last relation above is to write</p> <p>$$ \Pr[a,\neg b,c,e] = \sum_d \Pr[a,\neg b,c,D=d,e]. $$</p> <p>If you think about it, that's exactly the same equation as what I wrote, just using $\sum$ instead of $+$. You can think about whichever one is easier for you to think about.</p> <p>Anyway, now you've got $\Pr[e,a,c,\neg b]$. All that remains is to compute $\Pr[a,c,\neg b]$. You can do that using exactly the same methods. I'll let you fill in the details: it is a good exercise. Finally, plug into the first equation at the top of my answer, and you're done.</p>
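The arithmetic above can be sketched numerically. The CPT values are the ones quoted in the question's part (b), and the factorization (with $D$ depending on $B$, as the question's part (b) assumes) is an assumption carried over from there:

```python
# Hypothetical CPT values from the question's part (b).
P_a, P_not_b = 0.02, 0.99
P_c_given_a_not_b = 0.5
P_not_d_given_not_b = 0.99
P_d_given_not_b = 1 - P_not_d_given_not_b
P_e_given_c = 0.88

def joint(p_d, p_e):
    # One term of the factorized joint P(a, ¬b, c, D, E).
    return P_a * P_not_b * P_c_given_a_not_b * p_d * p_e

# Pr[e, a, ¬b, c]: sum over D.
numerator = sum(joint(p_d, P_e_given_c)
                for p_d in (P_d_given_not_b, P_not_d_given_not_b))
# Pr[a, ¬b, c]: sum over both D and E.
denominator = sum(joint(p_d, p_e)
                  for p_d in (P_d_given_not_b, P_not_d_given_not_b)
                  for p_e in (P_e_given_c, 1 - P_e_given_c))

posterior = numerator / denominator
print(posterior)  # 0.88 — here it collapses to P(e | c), as D and the rest sum out
```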
693
question answering
Confusion converting BNF to regular expression
https://cs.stackexchange.com/questions/76839/confusion-converting-bnf-to-regular-expression
<p>I have a Computer Science A-Level exam tomorrow and I've been trying to get this question answered by my teacher but she's not been too helpful, so I'm asking here instead.</p> <p>In an exam question, I have the following BNF grammar, where <code>_</code> denotes a space.</p> <pre><code>&lt;fullname&gt; ::= &lt;title&gt;_&lt;name&gt;_&lt;endtitle&gt; | &lt;name&gt; | &lt;title&gt;_&lt;name&gt; | &lt;name&gt;_&lt;endtitle&gt;
&lt;title&gt;    ::= MRS | MS | ... | SIR
&lt;endtitle&gt; ::= ESQUIRE | OBE | CBE
&lt;name&gt;     ::= &lt;word&gt; | &lt;name&gt;_&lt;word&gt;
&lt;word&gt;     ::= &lt;char&gt;&lt;word&gt;
&lt;char&gt;     ::= A | B | ... | Z
</code></pre> <p>The mark scheme says that the first rule, fullname, can be represented with a regular expression. But I'm not really sure how you could represent it with a regular expression when it's made up of other rules that can't be represented by regular expressions themselves, e.g. name, which is recursive. Also, I thought regular expressions were made up of just letters and symbols, e.g. a*b? . Forgive me if I don't seem too knowledgeable on this because the resources we have for the A-Level are pretty awful.</p>
<p>There are two ways of interpreting 'the first rule can be represented with a regular expression'; you should review this 'mark scheme' to determine which applies. One possibility is that the first rule is being treated as defining a language over its own terminals and non-terminals, without ever expanding the non-terminals. I.e, think of treating the tokens 'title', 'name', and 'endtitle' <em>as single characters</em> of the 'language' of 'fullname' (along with the space-char). Then it is easy to see that <em>that</em> is a regular language - there are only four strings in it!</p> <p>The other possibility is what you assumed, i.e that non-terminals are to be expanded. In this case your observation that 'name' is recursive is misplaced. The issue is not whether <em>the set of productions for that non-terminal</em> is regular, the issue is whether <em>the set of strings generated from that non-terminal</em> is regular. For 'name' the answer is yes: The set of strings generated from 'name' is seen, with a little thought, to be simply a sequence of one or more 'word'-s separated by spaces, for which a regular expression is easy to construct [given a regular expression for 'word', of course]. So yes, in this case 'fullname' generates a regular language.</p> <p><em>[P.S. Note: Your 'word' is missing a production for a single 'char']</em></p>
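Under the second interpretation, the regular expression described above can be sketched concretely. This sketch treats <code>_</code> as a literal space, uses a placeholder subset of titles (the grammar elides most with "..."), and gives <code>word</code> the single-<code>char</code> base case noted in the P.S.:

```python
import re

word = r"[A-Z]+"                 # one or more chars (includes the base case)
name = rf"{word}(?: {word})*"    # one or more words separated by spaces
title = r"(?:MRS|MS|SIR)"        # placeholder subset of <title>
endtitle = r"(?:ESQUIRE|OBE|CBE)"
fullname = rf"(?:{title} )?{name}(?: {endtitle})?"

print(bool(re.fullmatch(fullname, "SIR JOHN SMITH OBE")))  # True
print(bool(re.fullmatch(fullname, "JOHN SMITH")))          # True
print(bool(re.fullmatch(fullname, "JOHN smith")))          # False: lowercase
```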
694
question answering
How good are current AI researchers at simulating complex, first-person emotional states?
https://cs.stackexchange.com/questions/13815/how-good-are-current-ai-researchers-at-simulating-complex-first-person-emotiona
<p>I just read that IBM's Watson would have a hard time answering questions like "tell me about your first <a href="http://blogs.discovermagazine.com/sciencenotfiction/2010/06/28/watson-fails-the-turing-test-but-just-might-pass-the-jeopardy-test/#.UhH7QGTF1vZ" rel="nofollow">kiss</a>." If you asked a modern, state-of-the-art chatbot questions like "tell me about a song that means a lot to you and why" or "tell me about a time when you felt vulnerable" would the chatbot be able to answer in ways that would fool non-experts into thinking that it was human? Are questions like this good candidates for the Turing test?</p> <p>I am not asking a philosophical question about if it is POSSIBLE to generate an AI that could represent a complex internal emotional state. I am asking: given the state-of-the-art research in 2013: how close are current researchers to generating AI that could pass the Turing test?</p>
<p>Great Question! One area to look into is Cognitive Architectures. These are computer programs that solve AI problems by modeling how humans think and feel (emotion). Here is a brief overview.</p> <p>Prior to the mid-1980s the study of AI was concerned with creating general intelligent systems modeled after flexible human thinking (Fahlman, 2012). Researchers in AI used findings in cognitive psychology to guide their work. </p> <p>Currently, there has been a shift away from this approach. The current trend in AI research is towards solving problems with “knowledge-lean” statistical methods which are data intensive and learn less efficiently than humans (Langley, 2012). Thus, the AI field has shifted away from human-like models and toward "ideal" models. </p> <p>However, a few AI researchers still focus on creating intelligent systems modeled after the way humans think. These researchers work with Cognitive Architectures--computer programs modeled after humans. The Office of Naval Research's Human Computer Interaction lab is actively working to create intelligent agents that can interact with humans. </p> <p>I believe that the study of Cognitive Architectures is an interesting area of research for investigating how to create human-like systems. </p> <p>Fahlman, S. E. (2012). Beyond Idiot-Savant AI. Advances in Cognitive Systems 1, 15-21.</p> <p>Langley, P. (2012). Intelligent behavior in humans and machines. Advances in Cognitive Systems 2, 3-12.</p> <p>Taatgen, N. A. (1999). Learning without limits. From Problem Solving towards a Unified Theory of Learning (dissertation). Groningen: Rijksuniversiteit Groningen.</p> <p><a href="http://www.onr.navy.mil/en/Science-Technology/Departments/Code-34/All-Programs/human-bioengineered-systems-341/Human-Robot-Interaction.aspx" rel="nofollow">http://www.onr.navy.mil/en/Science-Technology/Departments/Code-34/All-Programs/human-bioengineered-systems-341/Human-Robot-Interaction.aspx</a> </p>
695
question answering
Language to Generate Powers of 2 Using a Language Containing Decimal Numbers
https://cs.stackexchange.com/questions/108662/language-to-generate-powers-of-2-using-a-language-containing-decimal-numbers
<p>For this question, I have the alphabet <span class="math-container">$\Sigma=\{0,1,2,3,4,5,6,7,8,9\}$</span>. I also have the language <span class="math-container">$L$</span> over <span class="math-container">$\Sigma$</span> described as the language such that the strings <span class="math-container">$w$</span> contained in <span class="math-container">$L$</span> are powers of 2 and <span class="math-container">$w$</span> is treated as a decimal number. I then need to:</p> <ol> <li>Design a grammar to generate <span class="math-container">$L$</span></li> <li>Classify the grammar (Regular, Context-free, etc.)</li> <li>Show that <span class="math-container">$L$</span> is not regular</li> <li>Determine if <span class="math-container">$L$</span> is a context-free language (answering only yes or no)</li> </ol> <p>The original question is in the image below, for reference. <a href="https://i.sstatic.net/fmWNJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fmWNJ.png" alt="original problem"></a> I'm not entirely sure how to start on this question in particular. Most of the examples I had in my automata class consisted of much smaller alphabets and more precise languages. Any help with an explanation would be much appreciated. </p> <p>Thank you in advance!</p>
<h3>L is not context-free</h3> <p><strong>Periodicity for context-free languages of bounded growth</strong>: Let <span class="math-container">$w_n$</span> be the number of words of length <span class="math-container">$n$</span> in a given context-free language. If the sequence <span class="math-container">$w_1, w_2, \cdots$</span> is bounded (bounded growth), then it is eventually periodic.</p> <p>Proof: This follows from the fact that every context-free language of bounded growth is a union of paired loops, which is Theorem 2.1 in the paper <a href="https://core.ac.uk/download/pdf/82387063.pdf" rel="nofollow noreferrer">On a conjecture about slender context-free languages</a> by Lucian Ilie, 1994.</p> <hr /> <p>Let <span class="math-container">$w_n$</span> be the number of words of length <span class="math-container">$n$</span> in <span class="math-container">$L$</span>. Then <span class="math-container">$w_n=\#\{k\in\Bbb N\mid n-1\le k\log_{10}2 \lt n\}$</span>. Since <span class="math-container">$\log_{10}2$</span> is irrational, the sequence <span class="math-container">$w_1, w_2, \cdots$</span> is bounded but cannot be eventually periodic. So <span class="math-container">$L$</span> is not context-free. Hence, it is not regular, either.</p> <p>As rici pointed out, it is not difficult to prove that <span class="math-container">$L$</span> is not context-free by applying the pumping lemma to any word in <span class="math-container">$L$</span> that is long enough. The above more conceptual proof is presented in the hope that it might raise more interest and more questions.</p> <h3><span class="math-container">$L$</span> is context-sensitive</h3> <p>According to <a href="https://cs.stackexchange.com/questions/108662/language-to-generate-powers-of-2-using-a-language-containing-decimal-numbers#comment232126_108662">rici's comment</a>, we can write a context-sensitive grammar which multiplies a decimal number by two. 
Use marker tokens which move right to left over the number; you will need two of them because of the possibility of a carry. As with many other context-sensitive grammar constructions, the details tend to get lengthy and tricky, sometimes even debatable.</p> <p>Another conceptual way to see that <span class="math-container">$L$</span> is context-sensitive is via a <a href="https://en.wikipedia.org/wiki/Linear_bounded_automaton#LBA_and_context-sensitive_languages" rel="nofollow noreferrer">linear bounded automaton (LBA)</a>, which accepts exactly a context-sensitive language. It is easy to see that we can divide by 2 repeatedly using a linear bounded automaton. More specifically, we can design an LBA such that, given an input <span class="math-container">$S m E$</span>, where <span class="math-container">$m$</span> is a sequence of decimal digits and <span class="math-container">$S$</span> and <span class="math-container">$E$</span> are the start and end markers respectively, it outputs <span class="math-container">$S1\square\square\cdots\square E$</span>, where <span class="math-container">$\square$</span> stands for the special blank symbol, if and only if <span class="math-container">$m$</span> is the decimal representation of some power of 2.</p> <h3>An exercise</h3> <p>Let language <span class="math-container">$L_U$</span> over <span class="math-container">$\Sigma=\{0,1,2,3,4,5,6,7,8,9\}$</span> be the language of words which, treated as decimal numbers, are positive integers whose prime factors all belong to a fixed finite set <span class="math-container">$U$</span> of prime numbers. Show that <span class="math-container">$L_U$</span> is context-sensitive but not context-free.</p>
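The repeated-division idea behind the LBA is easy to simulate in ordinary code. Below is a minimal Python sketch (a plain simulation of the digit-by-digit procedure, not an actual LBA; it assumes a nonzero digit string without leading zeros) that halves a decimal string by grade-school long division and accepts exactly the powers of 2.

```python
def halve_decimal(s):
    """Divide the decimal digit string s by 2, scanning left to right.

    Returns (quotient string, remainder 0 or 1), mimicking the
    carry propagation an LBA would perform on its tape.
    """
    digits, carry = [], 0
    for ch in s:
        cur = carry * 10 + int(ch)
        digits.append(str(cur // 2))
        carry = cur % 2
    quotient = "".join(digits).lstrip("0") or "0"
    return quotient, carry

def is_power_of_two(s):
    # A power of 2 halves all the way down to "1" without
    # ever leaving a remainder along the way.
    while s != "1":
        s, remainder = halve_decimal(s)
        if remainder:
            return False
    return True
```

For example, `is_power_of_two("1024")` holds while `is_power_of_two("1000")` does not. Note that the whole computation works in space linear in the input length, which is exactly the LBA resource bound.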
696
question answering
Machine Learning: What program will derive the underlying algorithm in this series?
https://cs.stackexchange.com/questions/19638/machine-learning-what-program-will-derive-the-underlying-algorithm-in-this-seri
<p>This is a machine learning question. Given this series of categorical data, what program will derive the underlying algorithm and predict what comes next in the series?</p> <p>Here is the series:</p> <p>B, BA, BB, BAA, BAB, BBA, BBB, BAAA, BAAB, BABA, ...</p> <p>Anyone with knowledge of computer science may quickly realize that this series is simply counting in binary with "A" substituted for "0" and "B" substituted for "1". This is true, but...</p> <p>the program that predicts what comes next must do so <strong>only by manipulating the symbols given</strong> in the series. It must not use hard-coded knowledge of binary counting.</p> <p>I realize many pattern recognition algorithms are available, but I haven't seen how any of them can solve this deceptively hard problem.</p> <p>-Edit-</p> <p>Based on the votes suggesting that this problem is unsolvable, I've done some refactoring and posed the question another way, with more constraints, here: <a href="https://cs.stackexchange.com/questions/19663/what-program-will-derive-the-underlying-algorithm-in-these-question-answer-pairs">What program will derive the underlying algorithm in these question-answer pairs (updated)?</a></p>
<p>Without any a priori knowledge, the problem is insoluble. There are infinitely many possible answers and you have no way at all to say that one of them is preferable to any of the others. How can you possibly tell, just by manipulating symbols, that the sequence is "counting in binary using A and B for 0 and 1" rather than "B, BA, BB, BAA, BAB, BBA, BBB, BAAA, BAAB, BABA, BABB, BBAA, widgeon, 27, expunge" followed by a list of the squares of every third prime, written in base 14 using Hebrew letters instead of the even digits?</p> <p>Wait &ndash; how would you even know that there is a next element?</p>
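The point about infinitely many consistent continuations can be made concrete. The Python sketch below (purely illustrative; both rule names are made up) defines two different rules that agree on every term shown in the question yet diverge later, so no amount of symbol manipulation on the prefix alone can decide between them.

```python
def binary_counting(n):
    # n-th term: the binary representation of n with 0 -> A and 1 -> B
    return bin(n)[2:].replace("0", "A").replace("1", "B")

def rival_rule(n):
    # Agrees with binary_counting on the first 12 terms, then goes its own way.
    return binary_counting(n) if n <= 12 else "widgeon"

# Both rules reproduce the given series: B, BA, BB, BAA, BAB, BBA, BBB, ...
prefix = [binary_counting(n) for n in range(1, 11)]
assert prefix == [rival_rule(n) for n in range(1, 11)]
# ...yet they disagree about what comes at position 13.
assert binary_counting(13) != rival_rule(13)
```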
697
question answering
OCR Computing Question
https://cs.stackexchange.com/questions/54350/ocr-computing-question
<p>I was asked the below question in a test at school last week and I thought the answer given was incorrect.</p> <p>You had to give a file format from the options for the below statement:</p> <p>A file created in software that most users will not have available</p> <p>You had two options, PDF or DOCX. I answered DOCX but apparently the answer was PDF, what do you think?</p>
<p>This looks like an advertisement for Microsoft...</p> <p>The intention was that while DOCX can be created using the rather expensive but commonly available Microsoft Word, PDF is created by Adobe Acrobat (not the free Reader!) which is not commonly available. However, there are many other ways to create PDFs, and nowadays you can print to PDF even on Windows, which is roughly as commonly available as Microsoft Word.</p> <p>In conclusion, it's an oversight by the question setters, who appear to be parroting material which has been influenced by Microsoft for obvious commercial purposes.</p>
698
question answering
Another vertex cover question?
https://cs.stackexchange.com/questions/115093/another-vertex-cover-question
<p>I'm not sure whether this is equivalent to the bipartite vertex cover question. The question is:</p> <p>Given a BIPARTITE graph, what is the minimum number of vertices from the right side whose edges cover all vertices on the left side?</p> <p>e.g. In the following graph, the answer is 1, because vertex g is connected to every vertex on the left. <a href="https://i.sstatic.net/MjKuf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MjKuf.png" alt="enter image description here"></a></p>
<p>It is equivalent to the <a href="https://en.wikipedia.org/wiki/Set_cover_problem" rel="nofollow noreferrer">set cover problem</a>. You can regard each vertex in the right side as a set of its neighbors in the left side. In your example, <span class="math-container">$e,f,g$</span> correspond respectively to the sets <span class="math-container">$\{d\},\{d\},\{a,b,c,d\}$</span>. Now a minimum vertex cover in your problem is equivalent to a minimum set cover.</p>
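To make the reduction concrete, here is a small Python sketch (a brute-force solver, exponential in the number of right-side vertices, so only suitable for tiny instances; the function name is made up) that treats each right vertex as the set of its left neighbors and searches for a smallest cover.

```python
from itertools import combinations

def min_right_cover(left, neighbors):
    """Smallest subset of right vertices whose neighbor sets cover `left`.

    `neighbors` maps each right vertex to the set of left vertices
    adjacent to it. Returns None if no cover exists.
    """
    left = set(left)
    rights = list(neighbors)
    # Try subsets in order of increasing size, so the first hit is minimum.
    for size in range(len(rights) + 1):
        for combo in combinations(rights, size):
            covered = set().union(*(neighbors[v] for v in combo))
            if covered >= left:
                return list(combo)
    return None

# The example from the question: e, f, g correspond to {d}, {d}, {a,b,c,d}.
cover = min_right_cover({"a", "b", "c", "d"},
                        {"e": {"d"}, "f": {"d"}, "g": {"a", "b", "c", "d"}})
```

Here `cover` comes out as `["g"]`, matching the answer of 1 in the question; since set cover is NP-hard, anything substantially better than such exhaustive search (beyond greedy approximation) should not be expected in general.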
699