Columns: category (string, 107 classes) · title (string, 15–179 chars) · question_link (string, 59–147 chars) · question_body (string, 53–33.8k chars) · answer_html (string, 0–28.8k chars) · __index_level_0__ (int64, 0–1.58k)
LSTM
Good AI model for learning to write code, specifically generate css from any given html?
https://cs.stackexchange.com/questions/120408/good-ai-model-for-learning-to-write-code-specifically-generate-css-from-any-giv
<p>Currently I am working on gathering lots of HTML code and its related CSS, via id or class name. Once I have enough data to work with, I am unsure how it would be easiest for any model to learn what CSS should be generated for each element.</p> <p>My idea is to make a script to extract all HTML elements from a given file and then map each onto its CSS code as a possible good output for the model to learn.</p> <p>For example: the script extracts an element h1 with class name "title" and then maps it onto its CSS code, something simple such as </p> <pre class="lang-css prettyprint-override"><code>.title { color: blue; text-align: center; font-size: 40px; } </code></pre> <p>The catch is that the model should also consider the rest of the elements found on that page, likely given as an array of detected elements, and then output an array of CSS as above for each element. My hope here is that it would learn to style entire pages rather than single elements on their own. </p> <p>Another issue is what to do with the CSS code itself: whether to try to make the model choose between pre-set options for each element or let it learn the language and its format from scratch.</p> <p>I am not sure what model would be best to deal with this kind of input and output. Currently my strongest consideration is LSTM.</p>
300
LSTM
What are the limitations of RNNs?
https://cs.stackexchange.com/questions/53552/what-are-the-limitations-of-rnns
<p>For a school project, I'm planning to compare Spiking Neural Networks (SNNs) and Deep Learning recurrent neural networks, such as Long Short Term Memory (LSTMs) networks in learning a time-series. I would like to show some case where SNNs surpass LSTMs. Consequently, what are the limitations of LSTMs? Are they robust to noise? Do they require a lot of training data?</p>
<p>I finally finished the project. Given really short signals and a really small training set, SNNs (I used <a href="http://minds.jacobs-university.de/sites/default/files/uploads/papers/EchoStatesTechRep.pdf" rel="nofollow noreferrer">Echo State Machines</a> and a <a href="http://ieeexplore.ieee.org/document/7378880/" rel="nofollow noreferrer">neural form of SVM</a>) vastly out-performed Deep Learning recurrent neural networks. However, this may be mostly because I'm really bad at training Deep Learning networks.</p> <p>Specifically, SNNs performed better at classification of various signals I created. Given the following signals:</p> <p><a href="https://i.sstatic.net/wpZWx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpZWx.png" alt="enter image description here"></a></p> <p>The various approaches had the following accuracy, where RC = Echo State Machine, FC-SVM = Frequency Component SVM and vRNN = Vanilla Deep Learning Recurrent Neural Network:</p> <p><a href="https://i.sstatic.net/2ZIPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZIPP.png" alt="enter image description here"></a></p> <p>SNNs were also more robust to noise:</p> <p><a href="https://i.sstatic.net/lvcmh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lvcmh.png" alt="enter image description here"></a></p> <p>For more information, including how I desperately tried to improve the Deep Learning classification approach performance, check out my <a href="https://github.com/Seanny123/rnn-comparison" rel="nofollow noreferrer">repository</a> and <a href="https://github.com/Seanny123/rnn-comparison/blob/master/comparison-recurrent-neural.pdf" rel="nofollow noreferrer">the report I wrote</a> which is where all the figures came from.</p> <p><strong>Update:</strong> After spending some time away from this project, I think one of the reasons that RNNs do horribly at this project is that they're bad at dealing with really long signals. 
Had I chunked the signals together with some sort of smoothing as preprocessing, they probably would have performed better.</p>
301
LSTM
Neural network: noisy temporal sequence converter (transducer?producer?) on demand?
https://cs.stackexchange.com/questions/22666/neural-network-noisy-temporal-sequence-converter-transducerproducer-on-dema
<p>I start to suspect this problem is very hard now that I cannot find a single piece of relevant literature on the subject, but it's too late to change the class project topic now, so I hope for any pointers to a solution. Please pardon the somewhat artificial scenario of this question, but here goes:</p> <p>Technical version: </p> <p>Let $\Sigma_{c}$, $\Sigma_{q}$ and $\Sigma_{a}$ be three disjoint finite alphabets (c, q, a stand for content, query and answer respectively). Let $L_{c}\subseteq\Sigma_{c}^{*}$ and $L_{q}\subseteq\Sigma_{q}^{*}$ be FINITE languages, wherein $L_{q}$ has the property that for every string in the language, all of its prefixes are in the language too. There is an unknown function $f:L_{c}\times L_{q}\rightarrow\Sigma_{a}^{*}$. Consider a mysterious machine that receives a continuous stream of symbols through a channel, one per time step (we assume that the symbols are clearly distinguishable). This machine, whenever fed a string $c\in L_{c}$ (with the symbols in correct temporal order) followed by a string $q\in L_{q}$, will output (through a different output channel) the value of $f(c,q)$ as a temporal sequence, one symbol at a time. Note that the machine always outputs after every new symbol from $\Sigma_{q}$. Note that the empty string is in $L_{q}$, which means the machine also outputs something before any symbol from $\Sigma_{q}$ has arrived, but only if it is certain with high probability that the full string in $L_{c}$ has been received.</p> <p>The objective is to construct a neural network that emulates that mysterious machine, given that we only have access to its input and output channels to use as training data, and we do not know $f$. We also have to assume that the input channel is noisy in the following sense: random noise is inserted into the input channel with high probability, delaying input symbols, and we initially do not know which symbols are noise and which are authentic; also, symbols in the input channel are sometimes lost with low probability. 
EDIT: Note: we do not know $L_{c}$ nor $L_{q}$; only the mysterious machine knows them. In fact we do not even know the alphabets $\Sigma_{c}$ and $\Sigma_{q}$, other than the fact that they are disjoint and are subsets of the set of all possible input symbols (input symbols not in either set are certainly noise, but we can't tell which set a symbol belongs to initially; note that it is still possible for symbols from the alphabets to be noise).</p> <p>(why a neural network: besides the noise problem, also because that's what I wrote in my class project proposal)</p> <p>(layman version: consider Sherlock Holmes sitting in his chair, bored. Dr. Watson gives a short description of the client. Once he's done, Sherlock Holmes gives a conclusion about the client. Dr. Watson is astonished, and asks more questions, and Sherlock Holmes replies. The conclusion must obviously be based on the description alone; and each subsequent answer has to answer the question being asked, taking into account the context, which consists of the questions already asked (for example, the same "How did you know?" following "Age?" demands a different answer than when following "Height?"). Now you want to make a neural network that simulates Sherlock Holmes, having all the recordings of those sessions. Dr. Watson however tends to insert long descriptions that are rather irrelevant, making long statements before finally getting around to asking a question, and sometimes accidentally omits crucial information, but otherwise describes people in a rather fixed order of details. The neural network must be able to deal with that. Of course, this is just a layman's description; the situation is much less complex.)</p> <p>I have looked through various relevant literature, and I cannot find anything relevant. Conversion to the spatial domain is useless due to the high amount of noise causing very long input sequences. 
I have looked into LSTMs to deal with the memory problem over arbitrarily long time lags, but for the life of me I cannot figure out how the network is supposed to be trained when there are arbitrarily long noise insertions everywhere, or the possibility of missing symbols (every method I found seems to force a fixed time lag between input and output, and missing symbols immediately wreck any method based on predicting the next item in the sequence). Also, is it too much to ask for a network that isn't too hard to code? Integrate-and-fire neurons are even worse than LSTM in terms of coding difficulty.</p> <p>Thanks for your help. It's due in 2 days, so please be fast.</p>
<p>I unfortunately know very little of neural networks. The closest thing your project reminds me of is speech recognition, and I would look at that literature. I am thinking of the first stage of speech recognition, when the sound stream is transformed into a word lattice (or a word stream, if you keep only the most likely path in the lattice). But all I know on this is based on Hidden Markov Models and the Viterbi algorithm [1]. I have not looked at the field for a long time, and I have no idea how it would translate to neural networks, but I would suggest you look at that literature, for example by searching the web for <em>neural networks</em> and <em>speech recognition</em>.</p> <p>I doubt you can code anything serious in 2 days. I would not even try, but I do not know what kind of programming is expected. But maybe a good description of what should be done, with appropriate references, would be enough.</p> <p>You should simplify your question if you find out that your requirements are too strong, particularly on noise. At first, you should limit yourself to very simple kinds of noise. Problems are seldom solved the first time in full complexity. You first solve simple cases, then try to see where you could do more. For one thing, do you know how to do it without any noise? What are the limitations? Then you can start adding simple noise, and see what changes.</p> <p>Your inputs, content and query, do not seem to have much reason to be distinguished, or do you have a strong reason to distinguish them? I would think that at some point your system must enter a state where it starts answering on the output tape.</p> <p>[1] Bahl, L. R., Jelinek, F., &amp; Mercer, R. L. (1983). A maximum likelihood approach to continuous speech recognition. IEEE Trans. Pattern Anal. Machine Intell., PAMI-5, 179–190.</p> <p>These authors actually published several papers on the subject for noisy input, including insertion, deletion and substitution of symbols. There are surely others, and this work is quite old. I am not sure the paper referenced is actually about learning. But the same people worked on learning too, such as parameter identification for Hidden Markov Models.</p>
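The HMM-plus-Viterbi machinery referenced in [1] can be sketched in a few lines of Python. The two-state model below ("signal" vs "noise") and all of its probabilities are invented for illustration; they are not from the paper:

```python
# Toy Viterbi decoder: recover the most likely hidden state sequence of an
# HMM from a noisy observation stream. Model and probabilities are made up.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best probability of any path ending in state s at time t,
    #            predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

states = ("signal", "noise")
start_p = {"signal": 0.6, "noise": 0.4}
trans_p = {"signal": {"signal": 0.7, "noise": 0.3},
           "noise":  {"signal": 0.4, "noise": 0.6}}
emit_p = {"signal": {"a": 0.8, "x": 0.2},
          "noise":  {"a": 0.1, "x": 0.9}}
print(viterbi(["a", "x", "a"], states, start_p, trans_p, emit_p))
```

Insertion and deletion noise, as in the Bahl–Jelinek–Mercer line of work, is handled by adding explicit insert/delete states to the model; the decoder itself stays the same.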
302
LSTM
Machine learning for recommendation systems (feed forward and recurrent neural networks)
https://cs.stackexchange.com/questions/88401/machine-learning-for-recommendation-systems-feed-forward-and-recurrent-neural-n
<p>I recently started to learn about machine learning. I have created a feed forward neural network (ffnn) and a recurrent neural network (rnn) to predict user ratings of movies. I am using a subset of 2000 users and their ratings of the "Netflix Prize" dataset.</p> <p>The ffnn as well as the rnn have an accuracy of ~40% - 45% on the test set evaluation. This seems to be very low and I expected to get at least somewhere near 60% - 70% accuracy. I tried different network configurations (dimensions, layers, optimizers, etc.) but nothing changed the accuracy significantly (only 1% - 3% max).</p> <p>Both models are set up as supervised learning. The ffnn uses embeddings of users+movies for the input and ratings for the output. For the rnn I am using one hot encoded movie vectors as input and one hot encoded ratings vectors as the output.</p> <p>For the implementation I am using Keras in Python.</p> <p>The ffnn is constructed like this:</p> <pre><code>dimension = 120 model_users = Sequential() model_users.add(Embedding(len(np.unique(users)), dimension)) model_users.add(Reshape((dimension,))) model_movies = Sequential() model_movies.add(Embedding(len(np.unique(movies)), dimension, input_length=1)) model_movies.add(Reshape((dimension,))) model = Sequential() model.add(Merge([model_users, model_movies], mode = 'concat')) model.add(Dropout(0.1)) model.add(Dense(100, activation = 'relu')) model.add(Dropout(0.1)) model.add(Dense(500, activation = 'sigmoid')) model.add(Dropout(0.1)) model.add(Dense(dimension, activation = 'linear')) model.add(Dropout(0.1)) model.add(Dense(5, activation = 'softmax')) model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics=['accuracy']) print(model.summary()) </code></pre> <p>The rnn model is constructed like this:</p> <pre><code>dimensions = len(movies_unique) model = Sequential() model.add(Masking(mask_value = 0, input_shape = (dimensions, dimensions))) model.add(LSTM(32, return_sequences = True)) 
model.add(TimeDistributed(Dense(len(ratings_unique), activation = 'relu'))) model.add(Activation('softmax')) model.compile(loss = 'mse', optimizer = 'adam', metrics = ['accuracy']) print(model.summary()) </code></pre> <p>How can I further improve the accuracy to get beyond ~45%? I might be missing something fundamental here, so any help is appreciated! :)</p> <p>Best, Nico</p>
303
tokenization
Tokenization Problem
https://cs.stackexchange.com/questions/84928/tokenization-problem
<p>Yes, this is a quiz question. It's from a self-paced course, but the answer just isn't correct to me no matter how I look at it. There isn't really an active community to consult. </p> <p>My Regular Expression experience is profoundly in JavaScript and I'm concerned that it poisoned my thinking of Regular Expressions. For the life of me I can't figure this out. <strong>Answer A</strong> and <strong>Answer B</strong> I can rationalize, though my logic might be inherently flawed somehow. Being unable to understand how <strong>Answer C</strong> works brings me to the conclusion that I am profoundly missing something.</p> <p><strong>The Problem</strong>:</p> <p>Consider the string <code>abbbaacc</code> Which of the following lexical specifications produces the tokenization ab/bb/a/acc ?</p> <pre><code>Answer A a(b + c*) b+ Answer B ab b+ ac* Answer C c* b+ ab ac* </code></pre> <p><strong>The two ways that I have tried to resolve Answer C</strong></p> <p><strong>removing tokens</strong>(<em>which I assumed is the correct way.</em>)</p> <pre><code>abbbaacc c* passes and returns token **cc** //abbbaa b+ passes and returns token **bbb** //aaa ab passes and returns no token //aaa ac* passes and returns no token //aaa ending tokens: [cc, bbb] </code></pre> <p><strong>Keeping String Whole</strong>:</p> <pre><code>abbbaacc c* passes and returns **cc** b+ passes and returns **bbb** ab passes and returns **ab** ac* passes and returns **acc** ending tokens: [cc, bbb, ab, acc] </code></pre> <p>I've tried other random ways as well but nothing turns out. I really am frustrated with this problem and I've been working on it for quite some time. I've reviewed the coursework that I have and I'm just at a loss. I am convinced that I must be conflating the more common Regular Expressions with this but I don't see where I'm doing so in any way that would make this answer correct. If anyone could provide any assistance I would greatly appreciate it!</p>
<p>Tokenisation does not search [Note 1]. At each step, it finds (and consumes) a match of one of the patterns starting precisely at the current input point.</p> <p>Although there are other tokenisation algorithms, the most common one -- usually called <a href="https://en.wikipedia.org/wiki/Maximal_munch" rel="nofollow noreferrer">maximal munch</a> -- works as follows.</p> <p>At each step:</p> <ul> <li><p>Find the longest match of each pattern starting at the current input point. (This can be done in a single linear-time left-to-right scan of the input.)</p> </li> <li><p>Select the longest of all these matches. If more than one pattern has the same longest match, select the pattern which comes first in the list of patterns. Report the selected pattern and the corresponding matched prefix as the next token.</p> </li> <li><p>Remove the matched prefix from the input, and continue from the first step.</p> </li> </ul> <p>In effect, the match in the first step is a match of a regular expression created from the disjunction of all the patterns. The regular expression matching algorithm is modified so that:</p> <ul> <li><p>It also reports which alternative matched, and</p> </li> <li><p>It leaves unmatched input in the input stream, rather than insisting that the regular expression match the entire input.</p> </li> </ul> <p>For your answer C, the regular expression created is <span class="math-container">$c^*+b^++ab+ac^*$</span>. (Some tokenisers would correct the first pattern to <span class="math-container">$c^+$</span> because selecting an empty match would cause the above algorithm to fall into an endless loop.)</p> <p>The longest prefixes successively matched by this disjunction are:</p> <ul> <li><p><span class="math-container">$ab$</span> (3rd alternative). 
The 4th alternative matches <span class="math-container">$a$</span>, but that is shorter.</p> </li> <li><p><span class="math-container">$bb$</span> (2nd alternative). The second alternative also matches <span class="math-container">$b$</span>, but that's not the longest prefix matched by this pattern.</p> </li> <li><p><span class="math-container">$a$</span> (4th alternative)</p> </li> <li><p><span class="math-container">$acc$</span> (4th alternative)</p> </li> </ul> <hr /> <h3>Notes</h3> <ol> <li>This should be intuitive. After all, most programs are read left to right and token order matters: <span class="math-container">$a = b + c;$</span> cannot be written as <span class="math-container">$a b c =+ ;$</span>, and the parser will expect to see the tokens in left-to-right order, so if the lexer produced the stream <span class="math-container">$a b c =+ ;$</span>, the parser would get confused.</li> </ol>
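The algorithm above is short enough to run directly. A minimal sketch (the helper is my own, not any particular lexer generator; for these simple patterns, greedy `re.match` does return the longest match of each pattern):

```python
import re

def tokenize(patterns, text):
    """Maximal munch: at each position take the longest match among all
    patterns; ties go to the pattern listed first."""
    tokens, pos = [], 0
    while pos < len(text):
        best = None  # (length, -pattern_index, lexeme)
        for i, pat in enumerate(patterns):
            m = re.match(pat, text[pos:])
            if m and m.group(0):  # skip empty matches, e.g. c* matching ""
                cand = (len(m.group(0)), -i, m.group(0))
                if best is None or cand[:2] > best[:2]:
                    best = cand
        if best is None:
            raise ValueError(f"no pattern matches at position {pos}")
        tokens.append(best[2])
        pos += best[0]
    return tokens

# Answer C: c*, b+, ab, ac*
print(tokenize([r"c*", r"b+", r"ab", r"ac*"], "abbbaacc"))  # ['ab', 'bb', 'a', 'acc']
```

Running Answer B's patterns (`ab`, `b+`, `ac*`) through the same loop also yields ab/bb/a/acc, matching the intuition in the question.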
304
tokenization
Tokenizer and complex operators
https://cs.stackexchange.com/questions/13418/tokenizer-and-complex-operators
<p>I'm trying to create simple tokenizer to transform following (only part shown) search expression to tokens</p> <pre><code>word1 near(1) word2 </code></pre> <p>where word1, word2 are some words and near(1) is distance operator. The question is how this expression should be tokenized. I see two ways</p> <pre><code>1. &lt;WORD, word1&gt; &lt;WORD, near&gt; &lt;LPAREN&gt; &lt;NUMBER,1&gt; &lt;RPAREN&gt; &lt;WORD, word2&gt;. 2. &lt;WORD, word1&gt; &lt;NEAROP, 1&gt; &lt;WORD, word2&gt; </code></pre> <p>But should I really try to tokenize NEAR(\d+) during tokenization, or should I go first way and handle NEAR operator at parser level, during building parse tree?</p>
<p>Since you indicate that the parameter of the near operator can be an arbitrary expression, it should be handled at the parser rather than at the lexer. Otherwise, how would you handle things like <code>near(x+y)</code>?</p>
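For illustration, here is a sketch of the first scheme, where `near` stays an ordinary WORD token and the parser decides what WORD LPAREN ... RPAREN means (token names and patterns are my own):

```python
import re

# Tokenization option 1 from the question: `near` is just a WORD, and the
# parser decides whether WORD LPAREN ... RPAREN is a distance operator.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("PLUS",   r"\+"),
    ("WORD",   r"\w+"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

print(tokenize("word1 near(1) word2"))
# an arbitrary expression inside near(...) tokenizes just as easily,
# leaving its meaning to the parser:
print(tokenize("word1 near(x+y) word2"))
```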
305
tokenization
Amortised cost - transferring tokens
https://cs.stackexchange.com/questions/164880/amortised-cost-transferring-tokens
<p>I'm trying to solve a problem from one of the older exams.</p> <p>Question:</p> <blockquote> <p>There's an infinite, one-dimensional board, with fields numbered consecutively <span class="math-container">$\ldots, -2, -1, 0, 1, 2, \ldots$</span> A move in the game consists of selecting a field and placing a token on it. If after placing the token it turns out that on two adjacent fields there are an equal number of tokens (at least one each), then we move all tokens from one of these fields to the other, clearing the first field and doubling the number of tokens on the second field. If there's a choice between two adjacent fields, we make that choice arbitrarily. Then we continue the described process of clearing the field and doubling the number of tokens on the adjacent field until there's not an equal number of tokens on two adjacent fields.</p> </blockquote> <p>Example:</p> <blockquote> <p>Suppose that on fields numbered <span class="math-container">$1, 2, 3, 4, 5$</span> there are <span class="math-container">$0, 1, 2, 4, 6$</span> tokens respectively. After adding one token to the field numbered <span class="math-container">$1$</span>, we get the following arrangement of tokens on the board: <span class="math-container">$0, 0, 0, 8, 6$</span>. A basic move in the game involves placing or removing a token. Therefore, moving <span class="math-container">$k$</span> tokens from one field to another requires performing <span class="math-container">$k$</span> removal operations and <span class="math-container">$k$</span> placement operations on the board. Analyze the amortized cost of a single move in the game measured by the number of basic moves.</p> </blockquote> <p>I think first part of the solution goes like this: Suppose there are <span class="math-container">$n$</span> tokens on the board. 
No token can be moved more than <span class="math-container">$\log n$</span> times, because after each move it sits on a stack at least twice as large as before, so after more than <span class="math-container">$\log n$</span> moves the stack holding it would contain more than the total number of tokens. For example, if there are <span class="math-container">$8$</span> tokens on the board, then no token could be moved 4 times, because it would end up on a stack of height <span class="math-container">$16$</span>. Hence, each token can be moved at most <span class="math-container">$\log n = \log 8 = 3$</span> times, and the amortized cost of a move is at most <span class="math-container">$O(\log n)$</span>.</p> <p>What is the method to establish a lower bound?</p>
<p>To prove a lower bound, simply exhibit a worst-case example. We will prove the following:</p> <p><strong>Induction Hypothesis:</strong> There exists a sequence of moves that results in <span class="math-container">$n$</span> tokens at position <span class="math-container">$\log n$</span> that requires at least <span class="math-container">$n \log n$</span> <em>place</em> and <em>remove</em> operations. For simplicity, assume that <span class="math-container">$n$</span> is some power of <span class="math-container">$2$</span>, and <span class="math-container">$\log$</span> with base <span class="math-container">$2$</span>.</p> <p><strong>Base Case</strong>: Let <span class="math-container">$n = 2$</span>. Then, place a token at position <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, each. Total place and remove operations are <span class="math-container">$4 &gt; n \log n = 2$</span>. Thus, the base case holds correctly.</p> <p><strong>Induction Case</strong>: Suppose there exists a sequence of moves that results in <span class="math-container">$n/2$</span> tokens at position <span class="math-container">$\log(n/2)$</span>, which requires at least <span class="math-container">$(n/2) \log (n/2)$</span> operations.</p> <p>Similarly, <span class="math-container">$n/2$</span> tokens can be obtained at position <span class="math-container">$\log(n/2)-1$</span> if the placement of tokens starts from index <span class="math-container">$0$</span>. It also requires at least <span class="math-container">$(n/2) \log (n/2)$</span> operations.</p> <p>Since the two sets are adjacent to each other, we move the tokens from position <span class="math-container">$\log n -1$</span> to position <span class="math-container">$\log n$</span>. It requires <span class="math-container">$n$</span> remove and place operations.</p> <p>Total operations are thus <span class="math-container">$\geq (n/2) \log (n/2) + (n/2) \log (n/2) + n = n \log n$</span>. 
Hence, the induction hypothesis holds.</p>
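The cascade described in the problem statement is easy to simulate, which makes a good sanity check on both bounds. The sketch below (helper names are mine) counts basic moves and reproduces the example from the question; when the rules leave the choice of direction arbitrary, it always moves tokens to the higher-numbered field:

```python
from collections import defaultdict

def place(board, pos):
    """Place one token at `pos`, resolve the cascade, and return the number
    of basic moves (each single-token place or remove costs 1)."""
    board[pos] += 1
    ops = 1  # the initial placement
    while True:
        # find two adjacent fields holding an equal, nonzero number of tokens
        pair = next(((p, p + 1) for p in sorted(board)
                     if board[p] > 0 and board[p] == board[p + 1]), None)
        if pair is None:
            return ops
        src, dst = pair
        k = board[src]
        board[src] = 0       # k removals ...
        board[dst] = 2 * k   # ... and k placements
        ops += 2 * k

# the example from the question: fields 1..5 hold 0, 1, 2, 4, 6 tokens
board = defaultdict(int, {1: 0, 2: 1, 3: 2, 4: 4, 5: 6})
print(place(board, 1))                        # 15 basic moves: 1 + 2 + 4 + 8
print({p: c for p, c in board.items() if c})  # {4: 8, 5: 6}
```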
306
tokenization
Can parser split tokens?
https://cs.stackexchange.com/questions/161493/can-parser-split-tokens
<p>Is it valid to have a token split up in a parser, as shown in the grammar below:</p> <pre><code>expr -&gt; expr addop term term -&gt; term mulop factor factor -&gt; factor digit | digit addop -&gt; + | - mulop -&gt; * | / digit -&gt; 0 | 1 | ... | 9 </code></pre> <p>Say, for the arithmetic expression:</p> <pre><code>12 + 7 * 45 / 9 - 6 + 5 </code></pre> <p>For the input 12, the token 12 is split into two parts: first the digit 1 is read, then the digit 2.</p>
<p><s>The arithmetic expression you provided <code>12 + 7 * 45 / 9 - 6 + 5</code> will cause the parser to return a syntax error because <code>12</code> is not part of the language (same problem for <code>45</code>...). Specifically, it is not a token and will be recognized as the symbol <code>1</code> followed by <code>2</code>, which is obviously illegal. To manage numbers in the intuitive way you should define a lexer accepting numbers and refer to its language, that is <code>[0-9]+</code>.</s></p> <p>It is possible to define the language this way; however, it is not best practice and has two implications:</p> <ol> <li>in case blanks are ignored it is perfectly possible to write <span class="math-container">$1\underline{ }2 * 3\underline{ }4$</span>, which is equivalent to <span class="math-container">$12 * 34$</span>. This is generally not what the programmer would expect.</li> <li>you need to carry out integer computation in semantic actions attached to your productions rather than in the lexer; this is possibly a maintenance problem because you are mixing two components that should be separated.</li> </ol>
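Implication 2 can be made concrete with a tiny recursive-descent fragment (names are mine): when numbers are built from single digit tokens, the integer value has to be assembled in the parser's semantic actions rather than in the lexer.

```python
def parse_factor(tokens, i):
    """factor -> factor digit | digit, with the semantic action
    value = value * 10 + digit performed in the parser."""
    value = int(tokens[i])                        # factor -> digit
    i += 1
    while i < len(tokens) and tokens[i].isdigit():
        value = value * 10 + int(tokens[i])       # factor -> factor digit
        i += 1
    return value, i

# '1' and '2' arrive as two separate digit tokens, yet parse as one number:
tokens = ["1", "2", "+", "7"]
print(parse_factor(tokens, 0))  # (12, 2)
```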
307
tokenization
How are lexical tokens produced
https://cs.stackexchange.com/questions/142035/how-are-lexical-tokens-produced
<p>I am studying Compiler Design. The instructor told us that when a program is given to the lexical analyzer it finds all tokens, then a symbol table is created and updated at every phase accordingly, but I read these online <a href="https://www.radford.edu/%7Enokie/classes/380/phases.html" rel="nofollow noreferrer">notes</a> and here is the statement</p> <blockquote> <p><em>The lexical analyzer produces a single token each time it is called by the parser.</em></p> </blockquote> <p>I can't understand this statement. How does all this happen? For a program with thousands of lines of code there may be thousands of tokens, and if the parser calls the lexical analyzer for every token, isn't that very time-consuming? How does the parser decide that all tokens have been produced and that it no longer needs to call the lexical analyzer?</p> <p>I am asking about compilers in general, not a specific language.</p>
<p>Yes, normally a parser calls the lexical analyser every time it needs a token, and this results in many, many, many calls to the lexical analyser. It is well known by compiler writers that the lexical analysis can consume a large proportion of the compiler's execution time.</p> <p>However, the lexical analysis process would normally use a Chomsky type 3 grammar, or a regular language, and thus can be implemented by a finite state automaton, which can be coded quite efficiently. The parser, by contrast, will normally be based on some form of Chomsky type 2 (context free) grammar and the algorithm would be less efficient as it may involve back-tracking or rule matching. Thus devolving some work from the less efficient parser to the more efficient lexical analyser makes the whole compiler more efficient.</p> <p>It is also possible to implement the relationship between the lexical analyser and the parser in a different way. The lexical analyser could process the whole input source program from a file (of text) into a complete set of tokens, which could themselves be stored in a file. Then the parser could input that file of tokens. This would be slower because it involves the writing and reading of a file. The list of tokens could alternatively be stored in memory, but now the compiler has a larger memory requirement. Historically, in early computers, with smaller memories and slower processors it was done in a similar way and perhaps the input (tape) of the source program resulted in an output (tape) of tokens which became the input (tape) of the parser program!</p> <p>On a modern system this could be implemented in a pipe, for example:</p> <pre><code>lexer sourcefile.lng | parser | optimiser | codegen &gt; program.exe </code></pre> <p>Internally, some compilers could implement it this way, but normally a parser (function) within the compiler calls a lexer (function) as described.</p>
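The usual pull model is easy to see in code. A sketch (names are mine) where the lexer is a generator and the parser asks it for one token at a time; an EOF sentinel tells the parser when to stop asking, which answers the "how does the parser know there are no more tokens" part of the question:

```python
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\w+)|(\S))")

def lexer(source):
    """Yield one token per request; finish with an EOF sentinel."""
    for number, word, other in TOKEN_RE.findall(source):
        if number:
            yield ("NUMBER", number)
        elif word:
            yield ("WORD", word)
        else:
            yield ("OP", other)
    yield ("EOF", "")  # sentinel: the parser stops calling the lexer here

tokens = lexer("LET answer = 42")
parsed = []
while True:
    tok = next(tokens)  # "the parser calls the lexical analyser"
    if tok[0] == "EOF":
        break
    parsed.append(tok)
print(parsed)  # [('WORD', 'LET'), ('WORD', 'answer'), ('OP', '='), ('NUMBER', '42')]
```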
308
tokenization
What is a malformed token?
https://cs.stackexchange.com/questions/3278/what-is-a-malformed-token
<p>I am reading Programming Language Pragmatics by Michael Scott. He says that on a first pass, a compiler will break a program into a series of tokens. He says that it will check for malformed tokens, like 123abc or $@foo (in C). </p> <p>What is a malformed token? A variable that does not meet the rules of variable-naming? An operator that does not exist (ex. "&lt;-")?</p> <p>Is this analogous to a misspelled word?</p>
<p>The basic syntactic units in any <em>textual</em> programming language are tokens, which are, if you wish, the words in the programming language. The compiler parses these tokens to build sentences and so forth.</p> <p>A malformed token is a string of characters that is not a valid word for the programming language. </p> <p>It is not a variable that doesn't match the naming rules – not all tokens are variables. It is not an operator that is unknown – sometimes operators can be defined by the programmer. It is one of the smallest pieces of syntax that will never correspond to something sensible.</p>
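For illustration, a sketch (patterns and names are mine) of how a lexer can flag `123abc` as malformed: the number pattern requires that a numeric literal not run directly into identifier characters, and anything no pattern accepts becomes a lexical error.

```python
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+(?!\w)"),                # 123 is fine; 123abc is not
    ("IDENT",  r"[A-Za-z_][A-Za-z0-9_]*"),
    ("OP",     r"[=+\-*/;]"),
    ("SKIP",   r"\s+"),
    ("ERROR",  r"\w+|\S"),                   # anything else: malformed token
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    out = []
    for m in MASTER.finditer(src):
        if m.lastgroup == "SKIP":
            continue
        if m.lastgroup == "ERROR":
            raise SyntaxError(f"malformed token: {m.group()!r}")
        out.append((m.lastgroup, m.group()))
    return out

print(tokenize("x = 123;"))  # [('IDENT', 'x'), ('OP', '='), ('NUMBER', '123'), ('OP', ';')]
try:
    tokenize("x = 123abc;")
except SyntaxError as e:
    print(e)                 # malformed token: '123abc'
```

This mirrors the usual treatment in C, where `123abc` is consumed as one preprocessing number and then rejected, rather than being split into `123` and `abc`.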
309
tokenization
Algorithm for token replacement game
https://cs.stackexchange.com/questions/120051/algorithm-for-token-replacement-game
<p>I'm having problems finding an algorithm for the following problem:</p> <p>A and B take turns replacing a number <span class="math-container">$n$</span> of tokens with either <span class="math-container">$\lfloor (n+1)/2 \rfloor$</span> or <span class="math-container">$n-1$</span>. The player who makes <strong>one</strong> token remain wins. We want to know if there is a way for B to win the game no matter the moves of A. A begins the game.</p> <p>My idea is the following:</p> <p>If n = 1 -&gt; no way for B to win the game.</p> <p>Otherwise, we try all moves, first of A and then of B, and check whether 1 is reached -&gt; there is a way for B to win.</p> <p>But this does not incorporate the "no matter the moves of A" criterion.</p>
<p>I'd check what happens for some small(ish) values of initial <span class="math-container">$n$</span>, working up. If you know who wins if there are at most <span class="math-container">$n$</span> on the table, working out what happens with <span class="math-container">$n + 1$</span> is easy.</p>
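That bottom-up idea fits in a few lines. A sketch (the function name is mine): n is winning for the player to move if some move reaches 1 directly, or reaches a position that is losing for the opponent; B has a guaranteed win exactly when the starting n is losing for A.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(n):
    """True if the player to move from n tokens can force a win."""
    assert n >= 2
    for m in {(n + 1) // 2, n - 1}:
        if m == 1 or not first_player_wins(m):
            return True
    return False

# starting sizes where B (the second player) wins no matter what A does:
print([n for n in range(2, 30) if not first_player_wins(n)])
```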
310
tokenization
Are there any languages without tokens?
https://cs.stackexchange.com/questions/81412/are-there-any-languages-without-tokens
<p>I know a lot of computer languages and they all use tokens. E.g. in very early BASIC you could say <code>LET answer = 42</code>, which is composed of seven tokens, <code>LET</code>, <code>answer</code>, <code>=</code>, <code>42</code>, and three space tokens.</p> <p>It seems that every character must be part of some token, including comments, which can be considered free-form tokens after the initial identifier.</p> <p>Are there any languages completely or partially without tokens? Is it even possible for such a language to exist? What would it look like?</p>
<p>I agree with the commenters that your question is either ill-defined or nonsensical. It all hinges on what you mean by "tokens", and what it would mean to "not have tokens."</p> <p>If you mean multi-character sequences that are "chunked" together into larger words, consider <a href="https://en.wikipedia.org/wiki/Brainfuck" rel="nofollow noreferrer">Brainfuck</a> — or actually most esolangs, including my favorite, <a href="https://esolangs.org/wiki/Befunge" rel="nofollow noreferrer">Befunge</a> — where every "command" is a single character. However, on the other hand, many Brainfuck interpreters actually treat <code>[-]</code> as a single "chunk" meaning "set the current tape cell to zero", so in that sense maybe you'd call <code>[-]</code> a "token".</p> <p>On the other hand, if you consider single characters to be "tokens", then maybe we need to go the other direction, and find programming languages in which everything kind of flows together without divisions. The first thing that pops to mind is <a href="http://inform7.com/learn/eg/bronze/source_32.html" rel="nofollow noreferrer">Inform 7</a> (yes, that's Inform 7 source code!), where meaning is conveyed through English prose. Obviously Inform 7 has a parser that deals in <em>words</em>, but those words don't directly correspond to anything like <code>LET</code> in Basic; it's more complicated than that.</p> <p>Or, consider <a href="http://www.dangermouse.net/esoteric/piet.html" rel="nofollow noreferrer">Piet</a>, an esolang in which programs are expressed as two-dimensional pixel paintings. Does a pixel count as a "token"? Does a rectangle of pixels count as a "token"?</p> <p>In the same vein — of trying to break away from the strictures of linear <em>words</em> and into more free-flowing worlds — consider <a href="http://catseye.tc/node/noit_o&#39;_mnain_worb" rel="nofollow noreferrer">noit o' mnain worb</a>, a nondeterministic cellular automaton. 
It uses ASCII characters to represent cells in the automaton; but do those characters each count as a "token", or are the relevant building blocks of worb programs more like "corridors" and "gates"?</p> <p>For that matter, consider <a href="https://en.wikipedia.org/wiki/Wireworld" rel="nofollow noreferrer">Wireworld</a>, a better-known cellular automaton. Does Wireworld count as a "language"? and if so, does "it" "have" "tokens"?</p> <p>For that matter, consider the "language" in which <a href="http://www.columbia.edu/cu/computinghistory/eniac.html" rel="nofollow noreferrer">ENIAC</a> was programmed. It certainly didn't have "tokens"! But if you were to write down the plugboard configuration for a given "program", I suppose that you'd pick some way of writing it down that involved ASCII and a grammar of tokens. I don't guess you'd communicate the plugboard configuration with a pen-and-ink drawing. (But maybe you would! Who knows!)</p> <p>For <em>that</em> matter, what would you say about plain old x86 machine code? That's a "computer language" if ever there was one... but it doesn't have "tokens", does it? All it has is a stream of bytes, one after the other, which are chunked together to form <em>instructions.</em> Or does an instruction (such as <code>B82A0000</code> or <code>C3</code>) count as a "token"? Does it matter if the same set of bytes (or some of them, at least) could be reinterpreted by another part of the program as a piece of data, a jump address, et cetera?</p> <p>Basically, you need to define what you mean by "language" and "token"; and once you do that, I think you'll find that your question answers itself.</p>
311
tokenization
Doubt on Token in Compiler Design
https://cs.stackexchange.com/questions/111752/doubt-on-token-in-compiler-design
<p>Say I have a code snippet like this:</p> <p>m -= n;</p> <p>Are <strong>minus</strong> and <strong>assignment</strong> considered a single token, or will they be considered different tokens?</p> <p>So will the total token count be 4 or 5?</p>
<p>In the C language you can't insert a space between '-' and '=', so you are better off implementing '-=' as a single lexeme.</p>
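To make the behaviour concrete, here is a small illustrative lexer sketch (my own, not from the answer): within the operator alternation, "-=" is listed before its one-character prefixes, so the scanner emits it as one compound-assignment token, giving four tokens for `m -= n;`.

```python
import re

# Toy lexer sketch (hypothetical, not from the answer). Within the OP
# alternation, "-=" is tried before "-" and "=", so the two characters
# are emitted as one compound-assignment token.
TOKEN_SPEC = [
    ("OP", r"-=|\+=|[-+=;]"),
    ("ID", r"[A-Za-z_]\w*"),
    ("SKIP", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(src) if m.lastgroup != "SKIP"]

print(tokenize("m -= n;"))
# -> [('ID', 'm'), ('OP', '-='), ('ID', 'n'), ('OP', ';')], i.e. 4 tokens
```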
312
tokenization
Do we need regular expression first or finite state automata in lexical analysing?
https://cs.stackexchange.com/questions/84560/do-we-need-regular-expression-first-or-finite-state-automata-in-lexical-anlysing
<p>I'm a bit confused about the concepts of finite state automata (FSA) and regular expressions (RE) in lexical analysis. I have been reading some books about compiler construction. In the part about tokenization, all the books I read introduce regular expressions first to recognize the tokens. For example, the regex below recognizes an <code>identifier</code>:</p> <pre><code>([a-zA-Z] | _ | $)([a-zA-Z0-9] | _ | $)* </code></pre> <p>Then, they jump to explaining another technique, which is finite state automata (FSA). As a result, some questions have come to my mind:</p> <p>1- Which part should I learn first, RE or FSA?</p> <p>2- Programmatically, which one should be converted to the other to build the lexer: RE ==> FSA or FSA ==> RE?<br> 3- Since all tokens can be recognized by regular expressions, why do we need finite state automata?</p> <p>Sorry for the too many questions, but I really can't figure out how to start. Many thanks in advance. </p>
<p>A regular expression is a notation used to describe a finite state automaton. It allows you to define the FSA without drawing nodes and edges all over the place. The two go hand in hand in that regard.</p>
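As a concrete illustration of that correspondence (my own sketch, not part of the answer), the identifier regex from the question can be implemented directly as a two-state automaton:

```python
# Hand-rolled two-state automaton for the identifier regex
# ([a-zA-Z] | _ | $)([a-zA-Z0-9] | _ | $)* -- an illustrative sketch.
START_CHARS = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_$")
CONT_CHARS = START_CHARS | set("0123456789")

def is_identifier(s):
    if not s or s[0] not in START_CHARS:
        return False          # state 0: reject unless a valid start char
    # state 1: loop as long as every remaining char is a continuation char
    return all(ch in CONT_CHARS for ch in s[1:])

print(is_identifier("_count9"), is_identifier("9lives"))
# -> True False
```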
313
tokenization
how to create tokens from CFG
https://cs.stackexchange.com/questions/67961/how-to-create-tokens-from-cfg
<p>I have a context-free grammar and I want to create tokens from the language. Are there any techniques to do that?</p> <p>For example, this CFG is from Prof. Alex Aiken's notes:</p> <pre><code> D -&gt; D ; P | D D -&gt; def id(ARGS) = E ; ARGS -&gt; id, ARGS | id E -&gt; int | id | if E1 = E2 then E3 else E4 | E1 + E2 | E1 - E2 | id(E1,...,En) </code></pre> <p>How can I create the tokens?</p> <p>The professor explains how to do that for some inputs. For example, if I have the input:</p> <pre><code>x = 10 </code></pre> <p>then I have</p> <pre><code>&lt;id.,x&gt; &lt; opp., = &gt; ...... </code></pre> <p>but how can I do that from the CFG?</p> <p>Thank you all. </p>
<p>The problem of creating a parse tree given a grammar and a word generated by it is known as <em>parsing</em>. The <a href="https://en.wikipedia.org/wiki/CYK_algorithm" rel="nofollow noreferrer">CYK algorithm</a> parses all context-free grammars, but is slow. In practice, only restricted types of context-free grammars are used, and these can be parsed in linear time. The common example is <a href="https://en.wikipedia.org/wiki/LALR_parser" rel="nofollow noreferrer">LALR parsing</a>. Perhaps you will learn more about this topic in due time.</p>
314
tokenization
Transition diagram for following token number
https://cs.stackexchange.com/questions/100705/transition-diagram-for-following-token-number
<p><strong>number -> digits(.digits)?(E[+ -]?digits)?</strong> The transition diagram below recognizes the above token, but I cannot understand how it works. What is the meaning of the "?" symbol and of "digits"? Can anyone explain this to me?</p> <p><a href="https://i.sstatic.net/Rxc5U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rxc5U.png" alt="enter image description here"></a></p>
<p><strong>digits(.digits)?(E[+ -]?digits)?</strong> is a regular expression for the set of strings that the pictured automaton accepts. That is, all of the strings accepted by the automaton will match this pattern. If you're unfamiliar with regular expression syntax, you can check out <a href="https://en.wikipedia.org/wiki/Regular_expression#Formal_language_theory" rel="nofollow noreferrer">this explanation from Wikipedia</a>. </p> <p>In particular, if you see some pattern followed by the <strong>?</strong> operator, it means that pattern is optional. So something like <strong>digits(.digits)?</strong> will match strings of the form <strong>digits.digits</strong> or just <strong>digits</strong> because the last <strong>(.digits)</strong> part is optional. </p> <p>I would guess that <strong>digits</strong> is a placeholder for a series of numeric digits (0-9). So something like <strong>digits(.digits)?</strong> would match strings like "33.5", "0.8", "1.892", "7", etc. </p>
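Translated into a Python regex (my own illustrative transcription of the pattern above), the optional groups behave like this:

```python
import re

# digits(.digits)?(E[+-]?digits)? as a Python regex -- both the fraction
# part and the exponent part are optional, which is what "?" expresses.
number = re.compile(r"\d+(\.\d+)?(E[+-]?\d+)?$")

for s in ["7", "33.5", "6.02E+23", "3E8", ".5", "1."]:
    print(s, "matches" if number.match(s) else "does not match")
# "7", "33.5", "6.02E+23" and "3E8" match; ".5" and "1." do not,
# because "digits" is required both before and after the dot.
```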
315
tokenization
Grammar where tokens can be transmuted
https://cs.stackexchange.com/questions/154435/grammar-where-tokens-can-be-transmuted
<p>I have a grammar which is mostly <code>LL(1)</code>, save for the fact that some tokens may be promoted to larger integer types.</p> <p>For example, let's take the following grammar</p> <pre><code>S ::= terminal1 S1 S1 ::= integer_16 S2 | integer_32 S3 </code></pre> <p>where <code>S2</code> and <code>S3</code> can share a common prefix, but we assume the above grammar can be rewritten as <code>LL(1)</code>, save for the integer promotion, and that the narrowest numbers have precedence over the wider numbers.</p> <p>Let's assume that I read an <code>integer_16</code> but fail to match <code>S2</code>; I'll then backtrack and promote <code>integer_16</code> to <code>integer_32</code>. This can be easily done when doing error-recovery by adding a case for a non-fatal error if the token can be promoted.</p> <p>This is a special case of <code>LL(k)</code> grammar, but my question is: do such grammars (<code>LL(1)</code> + some dirty error-recovery tricks) have a standard name?</p>
<p>The type of a literal integer is semantic, not syntactic, and should not be part of syntactic analysis. In other words, the program text can be <em>parsed</em> -- broken into syntactic parts with known relationships -- without knowing the magnitude of each integer literal. The magnitude of the literal does not affect the literal's syntactic role; the operand of a multiplication does not transform itself into an operand of an adjacent addition if it becomes greater. In other words, <code>x + y * z</code> always means the sum of <code>x</code> and the product of <code>y</code> and <code>z</code>, no matter what the magnitudes of the three values might be. (At least, I hope that's true of your language. If it isn't, composing correct code would be a difficult undertaking.)</p> <p>You might argue that some choices of value lead to invalid code, for example in a language without automatic widening conversions. Certainly, some semantic errors can be statically detected; others cannot. But that's not the goal of the parser. Type analysis and validation (or rejection) of type conversions are semantic operations, whether done in a semantic action or in a post-parse walk over the parse tree.</p> <p>The key to writing a maintainable parser is to respect the principle of separation of concerns. A lexical analyser splits the input into tokens; nothing else. A parser creates the hierarchical structure of the program as a graph (usually a tree, at least in the first instance), and nothing else. A type-checker discovers and validates the semantic type of each component of an expression. And so on. Separation of concerns makes your programs easier to read, easier to debug, and easier to maintain.</p> <p>Of course, nothing is totally pure. Your lexical analyser might convert each numeric literal into some internal representation of a number, and there might be different possible representations: different widths, different precisions, and so on.
That might turn out to be a useful optimisation, instead of keeping them as character strings, for any of a number of reasons. But it's a trade-off: you will also have to deal with literals not using the representation which program semantics require, which I think is the basis of your question. Coping with that might complicate your code (or not, depending on the language your compiler is written in), in which case you might reasonably consider whether the optimisation justifies the complication. Sometimes, it turns out that it doesn't. Particularly at the beginning, it's often good to always choose the simplest implementation over the most efficient.</p> <p>Regardless, it has nothing to do with parsing theory; LL(1) is LL(1), no matter what the semantics might be.</p>
316
tokenization
Product of Lexical Specification
https://cs.stackexchange.com/questions/103658/product-of-lexical-specification
<p>I have a problem that asks me to consider the string abbbaacc. I'm supposed to figure out which of the following lexical specifications produces the tokenization ab/bb/a/acc.</p> <p>The options are:</p> <pre><code>A. a(b+c)* b+ B. ab b+ ac* C. c* b+ ab ac* D. b+ ab* ac* </code></pre> <p>I just learned about REGEX, and I'm not sure about this, but to solve this problem would I just be trying to see which options can make ab/bb/a/acc?</p> <p>If that's correct, then would the answer be all four of them?</p> <p>Since all four of them can match ab/bb/a/acc:</p> <pre><code>A. a(b+c)* -&gt; a, ab, acc b+ -&gt; bb B. ab -&gt; ab b+ -&gt; bb ac* -&gt; a, acc C. c* -&gt; b+ -&gt; bb ab -&gt; ab ac* -&gt; a, acc D. b+ -&gt; bb ab* -&gt; a, ab ac* -&gt; a, acc </code></pre> <p>I'm not sure if I'm doing this correctly.</p>
<p>There's not enough information here to answer the question, but I can offer a pretty good guess based on how tokenizers are typically implemented. Generally the tokenizer will run through the list of patterns, trying each one in sequence. If one of them matches, it takes the longest possible match as a token, then starts over at the top of the list. I surmise this is the implicit background for the question.</p> <p>So, for instance, if you have <code>abbbaacc</code> and the rules <code>a(b|c)*</code> and <code>b+</code> in that order, then you'd tokenize that as <code>abbb/a/acc</code>.</p>
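The behaviour described above can be sketched in a few lines (a hypothetical implementation, assuming the rules are tried strictly in list order, with the longest match of the first rule that matches):

```python
import re

# Sketch of the assumed tokenizer: try each rule in order, take the
# longest match of the first rule that matches, then restart at the top.
def tokenize(rules, s):
    out, i = [], 0
    while i < len(s):
        for pat in rules:
            m = re.compile(pat).match(s, i)
            if m and m.end() > i:
                out.append(m.group())
                i = m.end()
                break
        else:
            raise ValueError(f"no rule matches at position {i}")
    return out

print(tokenize([r"a(b|c)*", r"b+"], "abbbaacc"))
# -> ['abbb', 'a', 'acc'], i.e. the tokenization abbb/a/acc
```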
317
tokenization
What is token-type in Lexical analysis?
https://cs.stackexchange.com/questions/47708/what-is-token-type-in-lexical-analysis
<p>I'm currently studying the compiler construction book "Compilers: Principles, Techniques, and Tools (2nd Edition)", <a href="https://books.google.com.pk/books?id=kLVv4MSa7EUC&amp;pg=PA113&amp;lpg=PA113&amp;dq=%3Cid,%20pointer%20to%20symbol-table%20entry%20for%20M%3E%20%3Cmult%20op%3E%20%3Cid,%20pointer%20to%20symbol-table%20entry%20for%20C%3E%20%3Cexp%20op%3E%20%3Cnumber,%20integer%20value%202%3E&amp;source=bl&amp;ots=9LNuSV5pwN&amp;sig=-qzRzQk3fcx5_OumyVL5mn2M_FA&amp;hl=en&amp;sa=X&amp;ved=0CBsQ6AEwAGoVChMI3d2I24-dyAIVYf9yCh2JZwPw#v=onepage&amp;q&amp;f=false" rel="nofollow">unit 3.1, page 113, example 3.2</a>.</p> <p>I cannot understand what kind of method this is:</p> <blockquote> <p>E = m * 2;</p> <p>conversion to <strong>token types (this method)</strong> below</p> </blockquote> <pre><code>&lt;id , E&gt; &lt;op=&gt; &lt;id , m&gt; &lt;op*&gt; &lt;number , 2&gt; &lt;op;&gt; </code></pre> <p>Are these token-type pairs? Or is there a technical term associated with them?</p>
<p>I don't know what the proper name is for what you're seeing, but it is the standard way to lexically analyze text. You divide it into tokens of specific types. For the sake of context-free parsing (the next step in the parsing chain), you only need the <em>type</em> of each lexeme; but further steps down the road will need to know the semantic content (sometimes called the <em>annotation</em>) of each lexeme. For example, the context-free parser doesn't care which number $2$ is; it only needs to know that it's a number. Further down the road, it will suffice to know that this number is an <em>integer</em>. Even further, you would need to know that this integer is specifically $2$.</p> <p>Some lexemes are <em>atomic</em>, for example the various operators (though in principle we could group, say, + and - together). We need to separate them because syntactically = is different from +, and + is different from * (due to operator precedence). These don't need any annotation.</p>
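A sketch of what such a lexer might emit for the example (the names and output format below are my own, for illustration): annotated <type, value> pairs for identifiers and numbers, and bare tokens for the atomic operators, which need no annotation.

```python
import re

# Illustrative lexer sketch (names are hypothetical, not from the text):
# identifiers and numbers carry an annotation; operators are atomic.
SPEC = [("id", r"[A-Za-z_]\w*"), ("number", r"\d+"),
        ("op", r"[=*;+-]"), ("ws", r"\s+")]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in SPEC))

def lex(src):
    toks = []
    for m in MASTER.finditer(src):
        if m.lastgroup == "ws":
            continue
        # atomic lexemes (operators) need no annotation; others carry one
        toks.append(m.group() if m.lastgroup == "op" else (m.lastgroup, m.group()))
    return toks

print(lex("E = m * 2;"))
# -> [('id', 'E'), '=', ('id', 'm'), '*', ('number', '2'), ';']
```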
318
tokenization
On-the-fly decompression of a flat-file database
https://cs.stackexchange.com/questions/123604/on-the-fly-decompress-a-flat-file-database
<p>I'm facing the following problem. I have a flat-file database (e.g. CSV). Since it's relatively large to store in memory, I'd like to compress it.</p> <p>Given a key, I need to return the uncompressed text (a record of values).</p> <p>So one naive idea is to tokenize the text into <strong>words</strong> and to have the mapping <span class="math-container">$\text{word} \mapsto \text{codeword}$</span>.</p> <p>Of course, this naive idea lacks an understanding of the statistical properties of the data that other compression algorithms exploit.</p> <p>So the next thing I thought about is Huffman coding. The problem I'm facing is that I'd like to find the optimal tokenization for the text. Let's say that one column in the CSV file contains only the text "the fox jumped over the lazy dog"; it is reasonable to want the algorithm to tokenize this string as one token.</p> <p>But then again, going over all possibilities isn't a feasible task. Are there any algorithms that deal with this problem?</p> <p>So to summarize, I need to: </p> <ol> <li>Compress my data once and decompress on demand</li> <li>Return the requested value (a record) for a given key</li> <li>Decompression should be "fast enough" </li> </ol> <p>Which algorithms fit my problem? </p> <p>In particular, I'd like to know if Huffman coding is a good option, and if so, how to tokenize the text.</p> <p>Thanks! </p>
<p>What is "too large to fit in memory"? Current operating systems can handle processes with GiB of memory; if you use e.g. mmap(3) on Unix/Linux, you can work as if you had the whole file in memory and access it at random. That might be much faster than compressing/uncompressing on the fly. And (if I understand your question correctly) you want to access individual records (i.e., rows) by key, so you will have to compress only the rest of each row and keep the key, working one row at a time (you don't want to have to uncompress from the start to access row 102354 each time, do you?). That will severely limit the compression gain.</p> <p>If you can preprocess the data (a given, since you talk about compressing it), perhaps an even better bet is to use a simple(ish) database to store it, like <a href="https://en.wikipedia.org/wiki/Berkeley_DB" rel="nofollow noreferrer">Berkeley DB</a>, the Unix standard <a href="https://en.wikipedia.org/wiki/DBM_(computing)" rel="nofollow noreferrer">DBM</a>, its GNU take <a href="https://www.gnu.org.ua/software/gdbm" rel="nofollow noreferrer">GDBM</a>, or other, more modern, performance-tuned ones like the Lightning Memory-Mapped Database <a href="https://en.wikipedia.org/wiki/Lightning_Memory-Mapped_Database" rel="nofollow noreferrer">LMDB</a>.</p>
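The per-row key/value idea can be sketched with Python's standard dbm module (an illustrative stand-in for Berkeley DB/GDBM/LMDB; the key and row content below are made up), compressing each row's value independently so a single record can be fetched and decompressed on demand:

```python
import dbm
import os
import tempfile
import zlib

# Store each row compressed under its key; decompress only on lookup.
path = os.path.join(tempfile.mkdtemp(), "rows")
with dbm.open(path, "c") as db:
    db["row42"] = zlib.compress(b"the fox jumped over the lazy dog")

with dbm.open(path, "r") as db:
    print(zlib.decompress(db["row42"]).decode())
# -> the fox jumped over the lazy dog
```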
319
tokenization
Should a lexer (tokenizer) handle unknown operators?
https://cs.stackexchange.com/questions/145273/should-a-lexer-tokenizer-handle-unknown-operators
<p>I have a list of supported operators. My question is whether the lexer should just yield a token for the operator, or raise a syntax error when that particular operator (let's say &quot;?&quot;) doesn't exist in the operators list.</p> <p>For example, take the operators list [+, -]. For the expression &quot;1 ? 2&quot;, should the output be [number:'1', operator:'?', number:'2'], or should it raise a syntax error?</p> <p>Should the parser handle it instead of the tokenizer?</p>
320
tokenization
Counting tokens in compilers, lexical analyser
https://cs.stackexchange.com/questions/97180/counting-tokens-in-compilers-lexical-analyser
<p>Let's start with the question. Say I have a C language statement as follows:</p> <ol> <li><p>it 458cat 2.01 = 96.87abc a.2 ;</p> <p>My question is: how many tokens are there in the above statement? Secondly, does white space like tabs and newlines make a token or not?</p></li> <li>If you are interested in my solution, then proceed at your own risk. Firstly, I think in a lexical analyser there is no such thing as white space, so after removal of it the above statement would look catastrophic, so I have not taken that approach.</li> <li>Now we see 'it' as an identifier; similarly, '458' as an integer and 'cat' as an identifier (please note that I may be mistaken here: '458cat' may be an invalid identifier, so the lexical analyser may report an error, but I don't know whether that will happen or not. You may argue that what I have written is wrong, because if it were true then an identifier like '678hello' would be valid, and since it is not, I am wrong. But take a different approach: if all the white space in '468 abc' vanished (this is what the lexical analyser does first), then it would look like '468abc'. Initially, in the case of '468 abc', we were taking 468 as an integer and abc as a valid identifier, but after white space removal things change, so what I think is that it will be reported in the syntax analyser, but I don't know whether I am right). Similarly, '2.01' as a real number, '=' as an operator, '96.87' as a real number, 'abc' as an identifier, 'a' as an identifier, '.2' as a real number and ';' as a special symbol.</li> <li>I am almost certain I am not right, but if you could point out where I was wrong, it would be helpful.</li> </ol>
<p>In C, <code>458cat</code> is a single <em>ppnumber</em> token. It's not a valid number, so it will eventually produce an error message, but it is tokenised as a single token.</p> <p>There's a longer explanation of this behaviour <a href="https://stackoverflow.com/a/49365061/1566221">in this StackOverflow answer</a>.</p>
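The pp-number rule can be approximated by a regex (my own transcription of the C grammar sketch, ignoring universal character names): an optional dot, a digit, then any run of digits, identifier characters, dots, or an e/E/p/P followed by a sign. It swallows `458cat` as one token:

```python
import re

# Approximate C preprocessing-number:
#   \.? digit ( e/E/p/P sign | identifier-char | . )*
ppnum = re.compile(r"\.?\d(?:[eEpP][+-]|[0-9A-Za-z_.])*")

for s in ["458cat", "96.87abc", "1e+5x"]:
    print(s, "->", ppnum.match(s).group())
# each input is consumed as a single pp-number token
```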
321
tokenization
Morphing Hypercubes, Token Sliding and Odd Permutations
https://cs.stackexchange.com/questions/107018/morphing-hypercubes-token-sliding-and-odd-permutations
<p>A month ago, I asked the following question math.exchange (<a href="https://math.stackexchange.com/questions/3127874/morphing-hypercubes-and-odd-permutations">https://math.stackexchange.com/questions/3127874/morphing-hypercubes-and-odd-permutations</a>), but for completeness, I will include the details here.</p> <hr> <p>Let <span class="math-container">$Q_n$</span> denote the <span class="math-container">$n$</span>-dimensional hypercube graph -- the graph with vertex set <span class="math-container">$V_n = \big\{v : v \in \{0, 1\}^n \big\}$</span> and edges <span class="math-container">$$E_n = \big\{(u,v) : u, v \in V_n \text{ with } u \text{ and } v \text{ having Hamming distance one}\big\}$$</span></p> <p>Let <span class="math-container">$H$</span> denote a subgraph of <span class="math-container">$Q_n$</span> that is isomorphic to <span class="math-container">$Q_{n'}$</span>, for some input parameter <span class="math-container">$n' \leq n$</span> (i.e. <span class="math-container">$H$</span> is an <span class="math-container">$n'$</span>-dimensional subcube of <span class="math-container">$Q_n$</span>). Every vertex <span class="math-container">$v$</span> in <span class="math-container">$H$</span> has a token (or a pebble) with a label <span class="math-container">$\ell(v)$</span>. Next, we partition <span class="math-container">$H$</span> into <span class="math-container">$2^{n' - d}$</span> vertex disjoint subgraphs <span class="math-container">$H_1, \ldots, H_{2^{n'-d}}$</span> each isomorphic to <span class="math-container">$Q_d$</span> where <span class="math-container">$d \leq n'$</span> is a second parameter.</p> <p>We can think of each <span class="math-container">$H_i$</span> as a ternary string <span class="math-container">$s_i \in \{0, 1, *\}^n$</span> such that <span class="math-container">$s_i$</span> has exactly <span class="math-container">$d$</span> <span class="math-container">$*$</span>'s. These represent free coordinates. 
For each <span class="math-container">$s_i$</span>, we define a mapping <span class="math-container">$f_i : \{0, 1, *\}^n \to \{0, 1, *\}^n$</span> such that the <span class="math-container">$j$</span>-th coordinate of <span class="math-container">$f_i(s_i)$</span> is a <span class="math-container">$*$</span> if and only if the <span class="math-container">$j$</span>-th coordinate of <span class="math-container">$s_i$</span> is a <span class="math-container">$*$</span>. So intuitively, each <span class="math-container">$f_i$</span> maps a <span class="math-container">$d$</span>-dimensional subcube to another <span class="math-container">$d$</span>-dimensional subcube on the same axes. Let <span class="math-container">$H'$</span> denote the subgraph obtained by decomposing <span class="math-container">$H$</span> as described above and applying the <span class="math-container">$f_i$</span>'s on its <span class="math-container">$2^{n'-d}$</span> pieces -- in other words, <span class="math-container">$H'$</span> is the subgraph induced by the vertices from each <span class="math-container">$f_i(s_i)$</span>. If <span class="math-container">$H'$</span> is also isomorphic to <span class="math-container">$Q_{n'}$</span>, then I call <span class="math-container">$H'$</span> a <span class="math-container">$\texttt{morph}$</span> of <span class="math-container">$H$</span>. When a morph operation is applied on <span class="math-container">$H$</span>, the tokens are also moved appropriately.</p> <p>So my problem is the following. Given <span class="math-container">$H$</span>, I would like to apply/find a sequence of morph operations to obtain a graph <span class="math-container">$H''$</span> that "finishes where <span class="math-container">$H$</span> started" -- By this, I mean that the ternary string that represents <span class="math-container">$H$</span> must be the same as <span class="math-container">$H''$</span>. 
The caveat is the following: if we consider the permutation induced by the tokens (since the tokens finish on the same subset of vertices that they started on), I want them to induce an odd permutation.</p> <p>To help clarify, consider the following example with <span class="math-container">$n=3$</span>, <span class="math-container">$n'=2$</span> and <span class="math-container">$d=1$</span>. Let <span class="math-container">$H$</span> denote the 2D face of <span class="math-container">$Q_3$</span> induced by <span class="math-container">$0**$</span>. We place four tokens on those vertices with labels <span class="math-container">$A,B,C,D$</span> -- <span class="math-container">$A$</span> is placed on <span class="math-container">$000$</span>, <span class="math-container">$B$</span> on <span class="math-container">$001$</span>, <span class="math-container">$C$</span> on <span class="math-container">$010$</span> and <span class="math-container">$D$</span> on <span class="math-container">$011$</span>. Now, consider the following three morph operations:</p> <p>1) Partition <span class="math-container">$\{A,B,C,D\}$</span> into pairs <span class="math-container">$\{A,B\}$</span> and <span class="math-container">$\{C, D\}$</span>. These can be represented by ternary strings <span class="math-container">$00*$</span> and <span class="math-container">$01*$</span> respectively. We map <span class="math-container">$00* \to 11*$</span> and leave <span class="math-container">$01*$</span> unchanged (i.e. just apply the identity). This gives us a new graph isomorphic to <span class="math-container">$Q_2$</span> with token placement <span class="math-container">$A$</span> on <span class="math-container">$110$</span>, <span class="math-container">$B$</span> on <span class="math-container">$111$</span>, <span class="math-container">$C$</span> on <span class="math-container">$010$</span> and <span class="math-container">$D$</span> on <span class="math-container">$011$</span>. 
Note that this new square doesn't have the same "orientation" as the first, since it has a ternary string representation of <span class="math-container">$*1*$</span>.</p> <p>2) Next, partition the newly obtained <span class="math-container">$*1*$</span> into <span class="math-container">$*10$</span> and <span class="math-container">$*11$</span> -- pairing the tokens <span class="math-container">$\{A, C\}$</span> and <span class="math-container">$\{B, D\}$</span>. We map <span class="math-container">$*10 \to *01$</span> to obtain the square <span class="math-container">$**1$</span> (<span class="math-container">$*11$</span> is left unchanged). The tokens are located as follows: <span class="math-container">$A$</span> on <span class="math-container">$101$</span>, <span class="math-container">$B$</span> on <span class="math-container">$111$</span>, <span class="math-container">$C$</span> on <span class="math-container">$001$</span>, and <span class="math-container">$D$</span> on <span class="math-container">$011$</span>.</p> <p>3) Finally, we partition the obtained <span class="math-container">$**1$</span> into <span class="math-container">$1*1$</span> and <span class="math-container">$0*1$</span> -- pairing the tokens <span class="math-container">$\{A,B\}$</span> and <span class="math-container">$\{C,D\}$</span>. We map <span class="math-container">$1*1 \to 0*0$</span>, which gives us our graph <span class="math-container">$H''$</span> induced by the square <span class="math-container">$0**$</span> (just as it was with <span class="math-container">$H$</span>). 
If we look at the placement of the tokens, we see that <span class="math-container">$A$</span> is still on <span class="math-container">$000$</span>, <span class="math-container">$B$</span> is now on <span class="math-container">$010$</span>, <span class="math-container">$C$</span> is now on <span class="math-container">$001$</span> and <span class="math-container">$D$</span> is still on <span class="math-container">$011$</span>. The permutation induced by the new positioning of the tokens is an odd permutation as required.</p> <p>So now I am interested in the case when <span class="math-container">$d=2$</span>. I would like to find a pair of values for <span class="math-container">$n$</span> and <span class="math-container">$n'$</span> where such a sequence of morph operations exists. I don't necessarily want the tightest values of <span class="math-container">$n$</span> and <span class="math-container">$n'$</span>, nor am I picky about the number of morph operations.</p> <hr> <p>I haven't been able to prove that this is possible, so I have been writing code to perform an "exhaustive search". I can show that this is not possible for values of <span class="math-container">$n$</span> less than or equal to <span class="math-container">$4$</span>, but the search space grows much too quickly.</p> <p>So my question is two-fold: 1) What kinds of optimizations should I consider? I am interested in practical heuristics that might help, not necessarily theoretical guarantees, and 2) is there a cleaner way to frame this problem? Just defining what a morph operation is takes a lot of work, let alone the rest.</p> <p>I apologize for the wall of text, and can try to add missing details or clarifications if necessary.</p>
322
tokenization
How to find optimal token set for compression?
https://cs.stackexchange.com/questions/109160/how-to-find-optimal-token-set-for-compression
<p>By token I mean the smallest element of source data that the compression algorithm works on. It may be a bit (like in DMC), a letter (like in Huffman or PPM), a word, or a variable-length string (like in LZ). </p> <p>(Please feel free to correct me on this term if I use it incorrectly.)</p> <p>I'm thinking: what if we first find an optimal set of tokens, and only then compress our data as a stream of these tokens? Will it improve compression of text, or of XML, or of some other kind of data? LZ sort of does this, but it is limited to letters. I'm asking about a truly general approach where a token can have any bit-length. For example, it could be interesting to do this for the compression of x86 executable files, or in other cases where sub-byte (and supra-byte) bit-strings behave as individual tokens. </p> <p>How can this set of tokens be found optimally, in a way that maximizes the average entropy of a stream of such tokens under a 0-order Markov model?</p> <p>Are there existing algorithms for it? Are there any algorithms related to the problem?</p> <p>How can an approximation of this set be found? What is the complexity of the optimal and approximate solutions?</p> <p>And what if we want to maximize entropy under a higher-order Markov model? How much harder will the problem be in that case?</p>
<blockquote> <p>Are there existing algorithms for it? Are there any algorithms related to the problem?</p> </blockquote> <ul> <li>Theoretical one: <a href="https://en.wikipedia.org/wiki/Sequitur_algorithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sequitur_algorithm</a></li> <li>Practical one: <a href="https://encode.ru/threads/1909-Tree-alpha-v0-1-download" rel="nofollow noreferrer">https://encode.ru/threads/1909-Tree-alpha-v0-1-download</a></li> </ul>
323
tokenization
Find first occurrence of multiple tokens using C++
https://cs.stackexchange.com/questions/113376/find-first-occurence-of-multiple-tokens-using-c
<p>I am using C++03. I have a stream of chars and I need to find the first token from a group of tokens. That is, I need to find the lowest-indexed match. Specifically, the stream is just a char array, and the tokens of interest are listed below. The start of a token can occur anywhere in the stream (and not necessarily at position 0).</p> <ul> <li>ssh-{rsa|dsa|ed25519}</li> <li>ecdsa-sha2-{nistp256|nistp521}</li> <li>---- BEGIN SSH2 PUBLIC KEY ----</li> <li>-----BEGIN RSA PRIVATE KEY-----</li> <li>-----BEGIN DSA PRIVATE KEY-----</li> <li>-----BEGIN EC PRIVATE KEY----- (ecdsa)</li> <li>-----BEGIN OPENSSH PRIVATE KEY----- (ed25519)</li> </ul> <p>The OpenSSH folks were not very forward-thinking, and it is making the lexer/parser more complicated than it should be.</p> <p>There are some suggestions for the problem, like at <a href="https://stackoverflow.com/q/19952155">Pass multiple strings to the string::find function</a> on Stack Overflow. It provides a match for a string, but it may not be the lowest index for a match.</p> <p>I think my non-STL alternative is to examine each char byte-by-byte. That takes O(n). If the char is one of <code>s</code>, <code>e</code>, or <code>-</code>, then look for the longer token, like <code>ssh-rsa</code>. The full token compare takes O(m) and it may happen s times, so the operation is O(s*m).</p> <p>I kind of feel like O(n+s*m) can be improved upon, similar to the way Boyer-Moore improves a single string search. My question is, is there an algorithm that runs in better time than O(n+s*m)?</p>
<p>You can turn that search into a DFA, and you can then pass the input through the DFA until an accepting state is reached (or the end of the string is encountered).</p> <p>Each pattern is turned into an NFA by adding a start state which self-loops on any input and transitions to the next state on a match of the first character. Each subsequent state transitions only on the corresponding match. The last state is accepting.</p> <p>These NFAs are then combined into a single NFA by merging the start states, after which the consolidated NFA is turned into a DFA using <a href="https://en.wikipedia.org/wiki/Powerset_construction" rel="nofollow noreferrer">subset construction</a>.</p> <p>Assuming the DFA is stored in a data structure which allows O(1) transitions, the scan can be performed in <span class="math-container">$O(k)$</span> where <span class="math-container">$k$</span> is the index of the last character in the earliest match.</p> <p>Note that this is not a general algorithm. It actually produces the index of the match which ends first, not the one which starts first. In the case where no pattern matches a substring of any other pattern, however, these are the same match so there is no problem.</p> <p>Also, in the worst case the subset construction takes exponential time, although this is ameliorated by the fact that no pattern uses repetition operators. In your particular problem, though, the DFA only needs to be constructed once when the program is compiled and the runtime is going to be <span class="math-container">$O(n)$</span>.</p>
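The set-of-states idea behind that construction fits in a few lines. This Python sketch (pattern list shortened for illustration) simulates the merged NFA directly instead of tabulating the DFA, and reports the earliest-ending match; the DFA version is the same scan with the transitions precomputed:

```python
def earliest_match_end(text, patterns):
    """Single pass over text, tracking every partial match of every
    pattern (set-of-states NFA simulation). Returns (end_index,
    pattern_index) of the earliest-ending match, or None."""
    active = []  # (pattern_index, next_position_to_match)
    for i, ch in enumerate(text):
        nxt = []
        # advance existing partial matches
        for p, pos in active:
            if patterns[p][pos] == ch:
                if pos + 1 == len(patterns[p]):
                    return i, p            # earliest-ending match
                nxt.append((p, pos + 1))
        # the merged start state self-loops: a match may begin anywhere
        for p, pat in enumerate(patterns):
            if pat[0] == ch:
                if len(pat) == 1:
                    return i, p
                nxt.append((p, 1))
        active = nxt
    return None
```

For example, scanning `"xx ssh-dsa yy"` against `["ssh-rsa", "ssh-dsa"]` reports the second pattern ending at index 9.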
324
tokenization
C language tokenizer output for static integer array
https://cs.stackexchange.com/questions/161252/c-language-tokenizer-output-for-static-integer-array
<p>I am unable to find out how the C-language lexical analyzer would tokenize the declaration of a static array, say <code>int i[3] = {1,2,3};</code></p> <p>The lexical analyzer would need to differentiate between just an integer identifier i, and a static array of size 3.</p> <p>I am confused about how the lexical analyzer would tokenize this.</p> <p>I tried Holub's book, among others, but couldn't find anything.</p>
<p>In my answer I assume you are acquainted with formal grammars, especially the Backus-Naur form.</p> <p>The lexical analyzer, alone, could never state that the sentence you provided actually belongs to C. Telling whether it belongs to the language or not is a joint duty of the lexical and syntactical analyzers.</p> <p>On one hand, the purpose of the lexer is telling whether a given sequence of characters corresponds to some keyword or pattern. On the other hand, the purpose of the syntactical analyzer is telling whether a sequence of symbols and characters is admitted by the grammar.</p> <p>A simple grammar <span class="math-container">$\mathcal{G}$</span> (unrelated to the C language) generating this kind of construct could be:</p> <ul> <li><span class="math-container">$S \rightarrow T \text{ } id \text{ } A = \{ L \}$</span></li> <li><span class="math-container">$A \rightarrow [ L ] $</span></li> <li><span class="math-container">$L \rightarrow num \text{ } |\text{ } LL $</span></li> <li><span class="math-container">$LL \rightarrow LL\text{ },\text{ } num\text{ }|\text{ } num$</span></li> <li><span class="math-container">$T \rightarrow int\text{ } |\text{ } float\text{ } | \text{ }double$</span></li> </ul> <p>(Assuming <code>id</code> corresponds to the lexical language of ids, etc.)</p> <p>The lexer would tokenize the string generating 14 tokens, that is:</p> <ul> <li>(<code>int</code>, INT)</li> <li>(<code>i</code>, ID)</li> <li>(<code>[</code>, LBRACKET)</li> <li>(<code>3</code>, INTEGER)</li> <li>(<code>]</code>, RBRACKET)</li> <li>(<code>=</code>, ASSIGN)</li> <li>(<code>{</code>, LCURLY)</li> <li>(<code>1</code>, INTEGER)</li> <li>(<code>,</code>, COMMA)</li> <li>(<code>2</code>, INTEGER)</li> <li>(<code>,</code>, COMMA)</li> <li>(<code>3</code>, INTEGER)</li> <li>(<code>}</code>, RCURLY)</li> <li>(<code>;</code>, SEMICOLON)</li> </ul> <p>(For suitable constants INT, INTEGER, ID, COMMA, etc.)</p> <p>At parse time the parser would tell that the sentence belongs to the language, and could distinguish the meaning of each lexeme by its left and right context (what precedes and follows it).</p> <p>I hope this explanation will help.</p>
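The tokenization step itself is mechanical. A toy Python lexer for that token list (names taken from the answer, with SEMI standing in for the final <code>;</code>; a real C lexer needs the word boundary shown so that, e.g., <code>intx</code> lexes as one identifier):

```python
import re

# Longest-match rules; the (?P<name>...) groups carry the token names.
TOKEN_SPEC = [
    ("INT",      r"(?:int|float|double)\b"),   # type keywords, with word boundary
    ("INTEGER",  r"[0-9]+"),
    ("ID",       r"[a-zA-Z_][a-zA-Z0-9_]*"),
    ("LBRACKET", r"\["), ("RBRACKET", r"\]"),
    ("LCURLY",   r"\{"), ("RCURLY",   r"\}"),
    ("ASSIGN",   r"="),  ("COMMA",    r","),
    ("SEMI",     r";"),  ("WS",       r"[ \t\n]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Return (lexeme, token-name) pairs, skipping whitespace."""
    return [(m.group(), m.lastgroup)
            for m in MASTER.finditer(src) if m.lastgroup != "WS"]
```

`tokenize("int i[3] = {1,2,3};")` yields exactly the 14 pairs listed above; deciding that this sequence declares an array is then the parser's job.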
325
tokenization
Token Scanner for programming Language(Lexical Analysis)
https://cs.stackexchange.com/questions/101648/token-scanner-for-programming-languagelexical-analysis
<p><a href="https://i.sstatic.net/vKd0k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vKd0k.png" alt="enter image description here"></a> This DFA is a token scanner for a programming language. I would like to add keywords of the programming language (if, else, end, etc.) to the DFA so the lexical analyzer can recognize them.</p> <p>The question is: do I have to convert the entire given DFA to an ε-NFA (which will result in more states), add the keywords (with ε-transitions too) and then convert back to a DFA, or do I just have to add the keywords so that the DFA becomes an NFA and then convert back to a DFA?</p>
<p>There's a common misunderstanding about why many construction algorithms (and most notably Thompson's) use ε-NFAs. It's not because it's <em>necessary</em> to do so, and it's not because it's <em>efficient</em> to do so.</p> <p>Thompson's construction has the following advantages:</p> <ul> <li>It's recursive on the structure of the regular expression.</li> <li>It's easy to convince yourself that the construction is correct.</li> <li>It's straightforward to incorporate certain extensions (e.g. the LEX lookahead operator "/").</li> </ul> <p>Real-world implementations of regular expressions, even if they are based on Thompson's construction, typically optimise a dozen or so common base cases (e.g. Kleene closure of a character set; <code>[a-zA-Z0-9_]*</code> expands to something much bigger than it needs to be), saving Thompson's rules for higher levels.</p>
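For the keyword question specifically, there is also a standard shortcut that avoids touching the automaton at all: let the DFA's ordinary identifier rule match keywords too, then reclassify each recognized lexeme with a table lookup. A minimal sketch (keyword set invented for illustration):

```python
KEYWORDS = {"if", "else", "end", "while", "return"}

def classify(lexeme):
    # The DFA recognizes this lexeme with its identifier rule; promoting
    # it to a keyword token is a dictionary lookup afterwards, so no new
    # states and no NFA/DFA round-trips are needed.
    return ("KEYWORD", lexeme) if lexeme in KEYWORDS else ("ID", lexeme)
```

This is how many production lexers handle keywords, since baking every keyword into the DFA multiplies states for no practical benefit.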
326
tokenization
A 9 token game of Nim tree construction
https://cs.stackexchange.com/questions/53745/a-9-token-game-of-nim-tree-construction
<p>Trying to construct the full tree for a 9-token game of Nim and am slightly confused. I don't understand how two players, min and max, will make their pick. For example, max picks first and can only pick [9]. Min then picks from [8-1], [7-2], [6-3], and [5-4]. How does min calculate its utility value here?</p> <pre><code>max [9] min [8-1] [7-2] [6-3] [5-4] max [7-1-1] [6-2-1] [5-3-1] [5-2-2] [4-3-2] [3-3-2] min [6-1-1-1] [5-2-1-1] [4-3-1-1] [5-1-2-1] [4-2-2-1] [3-3-2-1] [4-1-2-2] [3-2-2-2] [3-1-3-2][2-2-3-2] [2-1-3-2] </code></pre>
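One way to see where min's values come from: the tree above is the splitting game where a move divides one heap into two unequal heaps, and the player with no legal move loses. Min never "calculates" a utility at interior nodes; utilities are defined only at the leaves and bubble up via min/max. A hedged minimax sketch (the +1/-1 utilities are a choice made here for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(heaps, maximizing):
    """Minimax value of a multiset of heaps: +1 if max wins under
    optimal play, -1 if min wins. A move splits one heap h into two
    unequal parts a and h-a."""
    moves = []
    for i, h in enumerate(heaps):
        for a in range(1, (h - 1) // 2 + 1):        # a != h - a
            child = heaps[:i] + heaps[i + 1:] + (a, h - a)
            moves.append(tuple(sorted(child)))
    if not moves:                                   # player to move loses
        return -1 if maximizing else 1
    children = [value(m, not maximizing) for m in moves]
    return max(children) if maximizing else min(children)
```

For instance, `value((3,), True)` is +1: max splits 3 into (1,2), min has no move and loses; that leaf value is what propagates upward at each min node in the 9-token tree as well.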
327
tokenization
Lexical analysis on a series of tokens given regexes
https://cs.stackexchange.com/questions/148929/lexical-analysis-on-a-series-of-tokens-given-regexes
<p>I am to parse through a series of strings with a given token list. I was wondering if my lexical analysis is correct.</p> <pre><code>T1 = { abc, abc1 } T2 = { abd, abd1 } ID = [a-z]+[a-z0-9] NUM = 0 | [1-9][0-9] </code></pre> <p><code>lexer.getToken()</code> will return the current token and advance the input buffer by one. <code>lexer.peek(num)</code> will return the token at the index <code>num</code> and NOT advance the input buffer. <code>num</code> starts at 1 and 1 indicates the next possible token.</p> <p>Here are the strings</p> <p><code>abc 202 02202 abcabd1 abd0abc1 a123 abd1 abd2 abd3</code></p> <p>and here are the function calls</p> <pre><code>t1 = lexer.getToken(); # will return t1 = {T1, &quot;abc&quot;} t2 = lexer.getToken(); # will return t2 = {NUM, &quot;202&quot;} t3 = lexer.peek(1); # will return t3 = {NUM, &quot;2202&quot;} t4 = lexer.peek(2); # will return t4 = {ID, &quot;abcabd1&quot;} t5 = lexer.getToken(); # will return t5 = {NUM, &quot;0&quot;} t6 = lexer.peek(2); # will return t6 = {ID, &quot;abd0abc1&quot;} t7 = lexer.peek(3); # will return t7 = {ID, &quot;a123&quot;} t8 = lexer.peek(4); # will return t8 = {T2, &quot;abd1&quot;} t9 = lexer.getToken(); # will return t9 = {NUM, &quot;2202&quot;} t10 = lexer.peek(5); # will return t10 = {ID, &quot;abd3&quot;} </code></pre> <p>I got this because</p> <p><code>t1</code> will consume and return <code>&quot;abc&quot;</code> as conforms to <code>T1</code> and advance the input buffer</p> <p><code>t2</code> will consume and return <code>&quot;202&quot;</code> as conforms to <code>NUM</code> and advance the input buffer</p> <p><code>t3</code> will return <code>&quot;2202&quot;</code> because we are currently at <code>&quot;0&quot;</code> since <code>&quot;0&quot;</code> conforms to regex <code>NUM</code> and we peek past it.</p> <p><code>t4</code> will find the next valid token which is <code>&quot;abcabd1&quot;</code></p> <p><code>t5</code> will consume the <code>&quot;0&quot;</code> as conforms to <code>NUM</code> and advance the input buffer</p> <p><code>t6</code> will return <code>&quot;abd0abc1&quot;</code> as conforms to <code>ID</code> since the input buffer is on <code>&quot;2202&quot;</code></p> <p><code>t7</code> will return <code>&quot;a123&quot;</code> as conforms to <code>ID</code> since we are still on <code>&quot;2202&quot;</code></p> <p><code>t8</code> will return <code>&quot;abd1&quot;</code> as conforms to <code>T2</code></p> <p><code>t9</code> will return <code>&quot;2202&quot;</code> as conforms to <code>NUM</code> and consume <code>&quot;2202&quot;</code>, moving to <code>&quot;abcabd1&quot;</code></p> <p><code>t10</code> will return <code>&quot;abd3&quot;</code> as conforms to <code>ID</code> since we are currently on <code>&quot;abcabd1&quot;</code>.</p> <p>Does this logic look correct? I apologize if this is somewhat messy, just wanted to see if my logic is correct and I am following all regex/lexical rules.</p>
<p>I think it's odd to say that &quot;1 indicates the next possible token&quot; when you intend to return the token <em>after</em> the first unconsumed token. Or do you allow <span class="math-container">$peek(0)$</span> to return the current token without consuming it? (Far and away the most common use for a peek function.) But that's just documentation, I guess.</p> <p><code>abd0abc1</code> is not a match for <code>[a-z]+[a-z0-9]</code>. <code>abd0</code> would match that regular expression; once a digit is matched, nothing more can be added.</p>
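To make the indexing convention concrete, here is a minimal sketch of the interface as the question defines it (tokens precomputed for simplicity; the point is only that peek never advances):

```python
class Lexer:
    """Token stream with the getToken/peek interface discussed above:
    peek(1) is the next unconsumed token, getToken() returns that token
    and advances past it."""

    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0            # index of the next unconsumed token

    def getToken(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def peek(self, num):
        # num = 1 means the next unconsumed token, per the question's
        # convention (the more common convention starts peek at 0).
        return self.tokens[self.pos + num - 1]
```

With `Lexer(["a", "b", "c"])`, `getToken()` yields `"a"`, after which `peek(1)` is `"b"` and `peek(2)` is `"c"`, and neither peek moves the position.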
328
tokenization
Algorithm to generate a token as survey summary
https://cs.stackexchange.com/questions/128215/algorithm-to-generate-a-token-as-survey-summary
<p>I'm searching for an algorithm but struggle to find anything, as I'm not sure how to formulate it correctly. I created a simple survey app in Angular with 24 questions and each has 2-5 answers. When the user answered all the questions I'd like to give him a token (as short as possible) on the result page that he can note down, so the next time he can enter the token instead of having to answer all the questions again.</p> <p>My thought process so far:</p> <ul> <li>I can store the information in an array with 24 entries <code>[1, 4, 2, ..., 2, 3]</code></li> <li>As all values have 1 digit I can remove everything but the numbers <code>142...23</code></li> </ul> <p>This still leaves me with a 24-digit number. Nothing one would like to note down and type out by hand. So next I tried to convert this number (with ().toString(2)) to a binary string, with the goal of turning that binary into a human-readable string with String.fromCharCode. But all I got was the rather not so human-readable <code>æ\u000f\u0000³KH\u0000\u0000\u0000</code> for my test input.</p> <p>I could break down the number into digit pairs (e.g. 120422... -&gt; [12, 04, 22, ...]) and map each of these to a character (00 -&gt; A, 01 -&gt; B, ...), but I think there must be something more elegant and efficient, as it would only halve the original size. Although using 3 digits for mapping would result in 125 combinations, which is rather hard to map to the alphabet, even with upper/lowercases and special characters. But I'd be very happy to hear your ideas. Many thanks in advance!</p>
<p>I think you should try converting your 24-digit (base-10) number into another base, which would greatly reduce the length of your token while keeping the same information.</p> <p>You could experiment with this website, for example, which converts to base 36 (10 digits + 26 letters):</p> <p><a href="http://www.unitconversion.org/numbers/base-10-to-base-36-conversion.html" rel="nofollow noreferrer">http://www.unitconversion.org/numbers/base-10-to-base-36-conversion.html</a></p> <p>Here, the number 986541236547896541258745 becomes 4GNE5T9XUQO08CKK</p> <p>I'm sure you can devise an algorithm in base 62 (10 digits, 26 lowercase letters, 26 uppercase letters), or even more if you use other characters, such as +-*&amp;&quot;#, etc.</p> <p>EDIT:</p> <p>I found this website that allows you to convert from and to any base between 2 and 62: <a href="https://www.dcode.fr/base-n-convert" rel="nofollow noreferrer">https://www.dcode.fr/base-n-convert</a></p> <p>Here, the number 986541236547896541258745 becomes 4VMCgFPOG10DLH</p>
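Concretely (a sketch of the base-conversion idea, not the answerer's code): since each question has at most 5 answers, the 24 answers, numbered 0-4 here, can be packed into one integer in radix 5 and then printed in base 36. A mixed radix using each question's actual answer count would be slightly tighter, but fixed radix keeps the sketch short:

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # base 36

def encode(answers, radix=5):
    # Pack answers (each in 0..radix-1) into one integer, render in base 36.
    n = 0
    for a in answers:
        n = n * radix + a
    digits = []
    while True:
        n, r = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[r])
        if n == 0:
            break
    return "".join(reversed(digits))

def decode(token, length, radix=5):
    n = 0
    for ch in token:
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    answers = []
    for _ in range(length):
        n, a = divmod(n, radix)
        answers.append(a)
    return list(reversed(answers))
```

24 answers in radix 5 span 5^24 ≈ 6·10^16 values, which fits in at most 11 base-36 characters — short enough to write down by hand.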
329
tokenization
TOPS trillion operations per second to Tokens per second
https://cs.stackexchange.com/questions/167711/tops-trillion-operations-per-second-to-tokens-per-second
<p>A lot of AI hardware coming out lately has its performance mentioned in TOPS, i.e. trillion operations per second.</p> <p>Does anyone have an idea how to estimate the LLM performance on such hardware in tokens per second?</p> <p>For example, I have hardware with 45 TOPS of performance. If I perform inference with a 7-billion-parameter model, what performance would I get in tokens per second?</p> <p>Just to clarify even further, there's another term going around called TFLOPS, i.e. trillion floating-point operations per second (used for quite a lot of Nvidia hardware). I'm not asking about that. I'm asking specifically about TOPS, i.e. trillion operations per second.</p>
<p>You can find a <em>lot</em> written on the Internet about the number of tokens/second you can expect from different GPUs. I suggest you pick a specific LLM (e.g., Llama 2) and a specific GPU or hardware, and do some searching. Many people report their experience.</p> <p>There is no simple formula. So you'll need to rely on the experience/measurements of others.</p> <p>It is not just a function of operations per second. Rather, in many cases, memory bandwidth is one of the limiting factors. You should not expect any simple &quot;conversion factor&quot; to convert TOPS to tokens per second.</p>
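That said, for the memory-bound decode phase there is a widely used back-of-envelope ceiling: each generated token must stream the whole model through memory once, so tokens/second cannot exceed bandwidth divided by model size. A sketch (the numbers are illustrative, not a prediction for any specific chip, and real throughput will be lower):

```python
def decode_tokens_per_sec_ceiling(params_billion, bytes_per_param, bandwidth_gb_s):
    """Upper bound only: ignores KV-cache traffic, activations, batch
    size, and kernel efficiency, all of which push real numbers lower.
    Note that TOPS barely enters this estimate, which is the point:
    single-stream decode is usually limited by memory bandwidth."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A 7B model quantized to ~0.5 bytes/param on 100 GB/s memory gives a
# ceiling of roughly 28-29 tokens/s; measured throughput will be lower.
```

This is consistent with the answer above: the conversion from TOPS alone is not meaningful, because bandwidth, not compute, usually caps tokens per second.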
330
tokenization
Move tokens from s to t as fast as possible
https://cs.stackexchange.com/questions/45396/move-tokens-from-s-to-t-as-fast-as-possible
<p>Let $G=(V, E)$ be an unweighted and undirected graph, and $s, t \in V$. </p> <p>The problem starts with $n$ tokens on $s$. </p> <p>The goal is to move these tokens to $t$ in a minimum number of rounds with these rules:</p> <ul> <li>Each token can be moved up to once per round (a movement being when you transfer the token from the vertex $v$ that holds it to some $w \in N_G(v)$). </li> <li>Each vertex $v \in V\backslash \left\{s, t\right\}$ can hold at most one token ($s$ and $t$ are unconstrained).</li> </ul> <p>So far, what I've thought of is:</p> <ul> <li>If $N_G(s)=\emptyset$ or $N_G(t)=\emptyset$, it is not possible.</li> <li>If $|N_G(s)|=1$ or $|N_G(t)|=1$, you can just apply a BFS from $s$ to find the shortest path, then transfer every token one by one through this path. </li> <li>If $|N_G(s)| \geq 2$ and $|N_G(t)| \geq 2$ you can apply a maximum flow algorithm on $G$ with $1$ as capacity for every edge. If the maximum flow is $1$ then you can apply a BFS and transfer every token through this path again. However if the maximum flow is $\geq 2$, then I don't really see what to do (having a maximum flow $\geq 2$ doesn't even guarantee there are multiple usable paths, as nodes can hold at most $1$ token while max flow only constrains edges; and even when there are multiple paths, in some cases it can be better to use only the shortest path, e.g. for $2$ tokens if you have two paths but one is at least two edges longer). </li> </ul> <p>Does this problem have a name? What topic of graph theory does it belong to? Are there efficient algorithms to solve it? </p>
<p>Menger's theorem states that the maximum number of $s$-$t$ (internally) vertex-disjoint paths is equal to the minimum size of an $s$-$t$ vertex cut. Let the common value be $m$. Then it can be shown that the asymptotic number of rounds required is $n/m + O(1)$.</p> <p>Using the vertex disjoint paths, we can route $n$ tokens in $n/m + O(1)$ rounds, where the constant depends on the graph $G$ (but not on $n$). This gives an upper bound.</p> <p>On the other hand, every token must pass through the vertex cut, and at most $m$ can do so at any given round, so routing $n$ tokens takes at least $n/m$ rounds. (For each token, there must be a first time at which it reaches a vertex in the cut. Each such time can repeat at most $m$ times, since at most $m$ vertices of the cut can be occupied at any given time. So the number of rounds is at least $n/m$.) This gives a matching lower bound.</p> <p>From these two bounds, we deduce that the asymptotic number of rounds required is $n/m + O(1)$.</p>
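The value $m$ from Menger's theorem is computable with the standard vertex-splitting reduction to max flow, which also addresses the questioner's worry that edge-capacity flow ignores the one-token-per-vertex constraint. A small Python sketch (BFS augmenting paths on unit capacities):

```python
from collections import defaultdict, deque

def menger_m(edges, s, t):
    """Max number of internally vertex-disjoint s-t paths in an
    undirected graph: split each vertex v into (v,'in') -> (v,'out')
    with capacity 1 (unbounded for s and t), give each edge capacity 1
    in each direction, and run BFS-augmenting max flow."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def link(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)          # residual direction

    verts = {s, t} | {v for e in edges for v in e}
    for v in verts:
        link((v, "in"), (v, "out"), len(edges) if v in (s, t) else 1)
    for u, v in edges:
        link((u, "out"), (v, "in"), 1)
        link((v, "out"), (u, "in"), 1)

    src, snk = (s, "out"), (t, "in")
    flow = 0
    while True:
        parent = {src: None}
        queue = deque([src])
        while queue and snk not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if snk not in parent:
            return flow
        v = snk                                 # push one unit back along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

With $m$ in hand, the bound above says routing $n$ tokens takes $n/m + O(1)$ rounds.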
331
tokenization
Trying to understand a token file for lexical analysis
https://cs.stackexchange.com/questions/13909/trying-to-understand-a-token-file-for-lexical-analysis
<p>I am reading <a href="http://gnuu.org/2009/09/18/writing-your-own-toy-compiler/3/" rel="nofollow">this</a> article about compilers. I am facing some problems in understanding the content of the token file. Specifically, what is the meaning of the following lines:</p> <pre><code> [ \t\n] ; [a-zA-Z_][a-zA-Z0-9_]* SAVE_TOKEN; return TIDENTIFIER; [0-9]+\.[0-9]* SAVE_TOKEN; return TDOUBLE; [0-9]+ SAVE_TOKEN; return TINTEGER; </code></pre> <p>What are * and + at the end of the expressions?</p> <p>Thanks in advance for explaining this to me. </p>
<p><code>*</code> means that any number (including zero) of the preceding expression can occur</p> <p><code>+</code> means that at least one instance of the preceding expression must occur</p>
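A quick check of both quantifiers with Python's `re`, which uses the same syntax for these operators as the lex rules above:

```python
import re

# '+' : one or more of the preceding expression
# '*' : zero or more of the preceding expression
assert re.fullmatch(r"[0-9]+\.[0-9]*", "3.")            # '*' matches zero digits
assert re.fullmatch(r"[0-9]+\.[0-9]*", "3.14")
assert re.fullmatch(r"[0-9]+\.[0-9]*", ".5") is None    # '+' demands at least one digit
assert re.fullmatch(r"[a-zA-Z_][a-zA-Z0-9_]*", "_tmp1") # identifier rule from the file
```

So `TDOUBLE` accepts `3.` but not `.5`, and `TIDENTIFIER` accepts a lone underscore followed by any run of word characters.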
332
tokenization
Explain better how CompuBERT handles math tokens on input
https://cs.stackexchange.com/questions/144403/explain-better-how-compubert-handles-math-tokens-on-input
<p>My question is about <a href="https://github.com/MIR-MU/CompuBERT" rel="nofollow noreferrer">CompuBERT</a> (<a href="http://ceur-ws.org/Vol-2696/paper_235.pdf" rel="nofollow noreferrer">Three is Better than One Ensembling Math Information Retrieval Systems</a>)</p> <p>It is written on page 21 (table 2):</p> <blockquote> <p>&quot;we assigned a distinct mask for math tokens on input, often used to distinguish between different input languages&quot;</p> </blockquote> <p>I haven't found the implementation of the assignment of a distinct mask for math tokens on input, but I will ask two closely related questions anyway:</p> <ol> <li><p>How does CompuBERT recognize that expressions like c_{0} and c_0 are the same as <span class="math-container">$c_0$</span>?</p> </li> <li><p>Different calligraphic fonts should sometimes be assigned the same mask, for example the real numbers <span class="math-container">$\mathbb{R}$</span> or simply <span class="math-container">$R$</span> if someone lazily types R as input instead of \mathbb{R}... If this problem is not solved, I think CompuBERT will not understand the intention of the input question... How is this problem solved?</p> </li> </ol> <p>(please tell me where the implementation of the mask is in the code)</p>
333
tokenization
How to model an inverse &quot;token relationship&quot; in Petri nets?
https://cs.stackexchange.com/questions/144380/how-to-model-an-inverse-token-relationship-in-petri-nets
<p>Suppose I have the following Petri net:</p> <p><a href="https://i.sstatic.net/4zyXD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4zyXD.png" alt="Petri net" /></a></p> <p>I wonder whether it is possible to model an inverse relationship between <span class="math-container">$p2$</span> and <span class="math-container">$p3$</span>. Basically what I want to achieve is to make either <span class="math-container">$t1$</span> or <span class="math-container">$t2$</span> fireable but never both. Currently, I just set a token in <span class="math-container">$p2$</span> and leave <span class="math-container">$p3$</span> empty if I want to make <span class="math-container">$t1$</span> fireable (or vice versa for <span class="math-container">$t2$</span>). I believe there must be a way to omit one place and achieve the same result, i.e., putting one token in <span class="math-container">$p2$</span> activates <span class="math-container">$t1$</span> but not <span class="math-container">$t2$</span> and zero tokens in <span class="math-container">$p2$</span> achieves the reverse.</p>
<p>Without extra transitions, the best you can do is make them alternate:</p> <p><a href="https://i.sstatic.net/BqQ1u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BqQ1u.png" alt="The original net plus two additional arcs" /></a></p> <p>Extra transitions can be used to choose between filling <em>p2</em> or <em>p3</em>:</p> <p><a href="https://i.sstatic.net/gLcZU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gLcZU.png" alt="The original net plus two additional transitions and a place" /></a></p> <p>You can do the same without the additional place:</p> <p><a href="https://i.sstatic.net/61wuX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/61wuX.png" alt="The original net plus two additional transitions" /></a></p> <p>You might even consider dropping one of the added transitions, to model the situation in which they are enabled in order.</p>
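The alternation in the first construction is easy to check mechanically with a minimal marking/firing sketch (only the control arcs between p2, p3 and t1, t2 are modelled here; the other places of the net in the question are omitted):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume the pre-set tokens, produce the post-set."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# t1 consumes the control token from p2 and deposits it in p3; t2 does
# the reverse. With a single control token, exactly one of the two
# transitions is enabled in every reachable marking.
t1_pre, t1_post = {"p2": 1}, {"p3": 1}
t2_pre, t2_post = {"p3": 1}, {"p2": 1}

m = {"p2": 1, "p3": 0}
assert enabled(m, t1_pre) and not enabled(m, t2_pre)
m = fire(m, t1_pre, t1_post)
assert enabled(m, t2_pre) and not enabled(m, t1_pre)
```

This confirms the first figure's claim: mutual exclusion holds, at the price of forcing t1 and t2 to alternate.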
334
tokenization
does a non-terminated string count as a token in c?
https://cs.stackexchange.com/questions/119130/does-a-non-terminated-string-count-as-a-token-in-c
<p>I am preparing for an exam which includes lexical analysis from compiler design. I was wondering what the number of tokens is in the following code:</p> <pre><code>int main() { /* comment printf("Hello */ There ");*/ return 0; } </code></pre> <p>I am thinking that up to the first "*/" it will be a multiline comment, so after "There" a string will start without terminating. Will the last string be counted as a token?</p>
<p>No, in C unterminated strings are not tokens. The C language definition precisely describes what a token is; among other things, they include (complete) string literals and a fallback category of "single non-whitespace characters" which are not otherwise matched by the lexical grammar. This latter category does not, however, include <code>"</code> and <code>'</code> characters; if these characters are not matched as part of a longer token, the result is specifically flagged as undefined behaviour. </p> <p>So your text is invalid precisely because it cannot be divided into tokens. A conforming implementation must respond by providing at least one diagnostic message.</p> <p>All of the above is specific to the C language standard. It is not a "principle of computation" and should not be applied to any other programming language.</p> <hr> <p>For additional precision, the C standard defines two categories of tokens. Initial program text analysis (in phases 1 and 2) splits the text into <em>preprocessing tokens</em> and whitespace (whitespace includes comments, which are not tokens). The translation process then passes the stream of preprocessing tokens through phases 3 to 6, commonly known as "the preprocessor" although it is an integral part of program translation. </p> <p>Preprocessing leaves most tokens intact but there are features which allow adjacent tokens to be fused or converted into a string literal token. Also, macros may ignore their arguments causing those tokens to vanish. It is not possible to split a token into multiple tokens.</p> <p>In phase 7, the preprocessing tokens which survive preprocessing must be converted to <em>tokens</em>. Although this is described as a conversion, no textual modification is made; what is converted is the category of the token. Not every character sequence which qualified as a preprocessing token can be treated as a token; phase 7 conversion of such a token will fail and a diagnostic message will be produced. 
</p> <p>So it can make sense to talk about an "illegal token" (<code>@</code>, for example) which is still a token. But unterminated string and character literals do not fall into that category. They really are not tokens at all.</p> <p>See &sect;5.1.1.2 of the C standard for a precise description of the translation phases. Tokens are defined in &sect;6.4, which includes the prohibition on unmatched <code>"</code> and <code>'</code> in paragraph 3:</p> <blockquote> <p>A <em>token</em> is the minimal lexical element of the language in translation phases 7 and 8. The categories of tokens are: keywords, identifiers, constants, string literals, and punctuators. A preprocessing token is the minimal lexical element of the language in translation phases 3 through 6. The categories of preprocessing tokens are: header names, identifiers, preprocessing numbers, character constants, string literals, punctuators, and single non-white-space characters that do not lexically match the other preprocessing token categories. If a <code>'</code> or a <code>"</code> character matches the last category, the behavior is undefined.</p> </blockquote>
335
tokenization
Token-sliding as a kind of Petri net: well-studied subclass?
https://cs.stackexchange.com/questions/71854/token-sliding-as-a-kind-of-petri-net-well-studied-subclass
<p>Let a directed graph $G = (V, E)$ be given, plus a constraint map $c: E \rightarrow V$ and a set $T \subseteq V$ of initial token locations. A valid move consists of sliding a token from $v$ to $w$ if:</p> <ul> <li>$v \in T$ — to slide a token, it must be there.</li> <li>$w \not \in T$ — to move a token somewhere, the new place must be empty.</li> <li>$c(v, w) \in T$ — (the unusual bit) to use an edge, some token must occupy the "activator" vertex. Note that the trivial constraint $c(v, w) = v$ is allowed.</li> </ul> <p>(Then $T \Rightarrow^1 (T \setminus \{v\}) \cup \{w\}$. An interesting decision problem might be this: given $v$, is there a $T'$ s.t. $T \Rightarrow^* T'$ and $v \in T'$; there are plenty of others.)</p> <p>I think this can be reformulated as a Petri net, with transitions $\{v, c(v, w)\} \rightarrow \{w, c(v, w)\}$ when $c(v, w) \neq v$ and $\{v\} \rightarrow \{w\}$ when $c(v, w) = v$. However, I am very unfamiliar with Petri nets. Does this have some correspondence to Petri nets at all? Is the subtype of Petri nets where all transitions are of the above form well-studied? What are the major results, especially concerning the computational complexity of the most interesting decision and function problems?</p>
336
tokenization
What token does a &quot;peek&quot; operation refer to in Lexical analysis?
https://cs.stackexchange.com/questions/90908/what-token-does-a-peek-operation-refer-to-in-lexical-analysis
<p>Given a grammar for a space delimited list of words:</p> <pre><code>S -&gt; word { space word } word -&gt; [a-zA-Z]+ space -&gt; [ \t]+ </code></pre> <p>And given the input "Hello World", what token would a <code>peek()</code> operation return?</p>
337
tokenization
What data is stored in the symbol table for a number token?
https://cs.stackexchange.com/questions/21560/what-data-is-stored-in-the-symbol-table-for-a-number-token
<p>I'm reading the Dragon Book. The following is from the start of Section 3.1.3.</p> <blockquote> <p>When more than one lexeme can match a pattern, the lexical analyzer must provide the subsequent compiler phases additional information about the particular lexeme that matched. For example, the pattern for token <strong>number</strong> matches both 0 and&nbsp;1, but it is extremely important for the code generator to know which lexeme was found in the source program. Thus, in many cases, the lexical analyzer returns to the parser not only a token name, but an attribute value that describes the lexeme represented by the token; the token name influences parsing decisions, while the attribute value influences translation of tokens after the parse.</p> </blockquote> <p>From what I understand, the symbol table stores the variable name and some details like its type, scope, etc. So if a character <code>0</code> is found by the lexical analyzer, it matches the pattern for a number, so it uses the token name <code>number</code> and the token becomes <code>&lt;number, attrb&gt;</code>.</p> <p>As per the snippet I have cited above, I don't understand what data is stored in the symbol table for numbers. Is the value of the number stored in the symbol table?</p>
<p>Typical lexers will return a sequence of pairs, where the pair consists of the token type and an optional value. For a token such as <code>12345</code>, the token type will be something like "number" and the value will be 12345. If the lexer only emitted the information that there was a numeric constant in the input, then the following phases of the parser would have no way to know which number it was, and that is obviously important.</p> <p>I don't understand why you started talking about the symbol table at the end of your question, and I think you may be confused. The quotation you gave says nothing at all about the symbol table, which usually belongs to a later phase of compilation. A symbol table maps symbols (that is, names) to values. The quotation in your question is about tokens, not symbols. Tokens are not usually stored in a symbol table, and there is no reason to store numbers in a symbol table. Typically a parser will have a stack, and will push whole tokens onto the stack, and pop them off again as needed.</p>
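A sketch of those pairs (token names are hypothetical): only identifiers touch the symbol table; a number's value simply travels with the token as its attribute.

```python
import re

symbol_table = {}   # names -> info; numbers never go here

def lex(src):
    """Return (token-name, attribute) pairs. The token name drives
    parsing; the attribute carries the lexeme's value."""
    out = []
    for m in re.finditer(r"(?P<NUM>\d+)|(?P<ID>[A-Za-z_]\w*)|\S", src):
        if m.lastgroup == "NUM":
            out.append(("number", int(m.group())))        # value as attribute
        elif m.lastgroup == "ID":
            symbol_table.setdefault(m.group(), {"type": None})
            out.append(("id", m.group()))
        else:
            out.append((m.group(), None))                 # punctuation etc.
    return out
```

`lex("x = 12345")` yields `[("id", "x"), ("=", None), ("number", 12345)]`, and only `"x"` appears in the symbol table.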
338
tokenization
How does the token method of amortized analysis work in this example?
https://cs.stackexchange.com/questions/62484/how-does-the-token-method-of-amortized-analysis-work-in-this-example
<p>Below is the description of the answer to a question which says the following:</p> <p><strong>Design a data structure to support two operations for a dynamic multiset S of integers which allows duplicate values.</strong></p> <ol> <li>Insert operation for one element </li> <li>Delete-larger-half(S) deletes the largest ceil(|S|/2) elements from S. </li> <li>m insert and delete-larger-half operations run in O(m) time. </li> <li>Also output the elements in O(|S|) time.</li> </ol> <p>In the answer an unsorted array has been taken. I think delete-larger-half corresponds to deleting the |S|/2 elements with highest magnitude. Then how does deleting via the median work unless there are n comparisons (i.e., comparing each element with the median)? I suppose the amortized analysis is what is bringing out the complexity. I'm looking for an answer that can explain the token-based amortized analysis by using the example in this question.</p> <p><strong>What I know already:</strong> Amortized analysis means doing some expensive work in previous steps which leads to the worst case of a following step not happening as often. Every time the expensive step occurs, the probability of it happening again reduces more and more. On average it evens out, giving a better amortized complexity. Example: implementing a dynamic array, which is probably what has been done in the solution linked below. </p> <p>Link to solution: <a href="https://courses.csail.mit.edu/6.046/fall01/handouts/ps7sol.pdf" rel="nofollow">https://courses.csail.mit.edu/6.046/fall01/handouts/ps7sol.pdf</a></p> <blockquote> <p>You use an unsorted array, so insert takes O(1) worst-case time. For DELETE-LARGER-HALF, you use the linear-time median algorithm to find the median, then you use PARTITION to partition the array around the median, then you delete the larger side of the partition in O(1) time. For the amortized analysis, insert each item with 2 tokens on it. 
When you perform a DELETE-LARGER-HALF operation, each item in the list pays 1 token for the operation. When you delete the larger half, the tokens on these items are redistributed on the remaining items. If each item on the list starts with 2 tokens, they each have one after the median finding, and then each item in the deleted half gives its token to one of the remaining items. Thus, there are always two tokens per item and we get constant amortized time.</p> </blockquote> <p>I've not been able to understand the logic behind assigning tokens and taking them off.</p> <p>In the answer, the <code>insert</code> has been assigned 2 tokens, the find-median and partition together take away 1 token, and then deletion gives the 1 token. I understand that each time delete-larger-half is called, the next delete-larger-half's cost reduces drastically, but why exactly have the values 2, 1, 1 been chosen?</p>
<blockquote> <p>Then how does deleting with median work unless there is n comparisons (i.e compare each element with the median).</p> </blockquote> <p>After partitioning (they assume Quicksort-style partitioning), you can just delete the last $\lceil |S| / 2 \rceil$ elements in the array.</p> <blockquote> <p>Amortized analysis means doing some expensive work in previous steps which leads to worst case of a following step not happening as often. Every time the expensive step occurs, the probability of it happening again reduces more and more.</p> </blockquote> <p>That is a useful intuition, but I feel it is a little bit beside the point.</p> <p>Mainly, amortized analysis is about investigating the <em>compound cost</em> of <em>sequences of operations</em>. By dividing the sum by the number of operations we get what we call <em>amortized cost</em> for each operation.</p> <p>I don't see a specification of such sequences in your problem statement so it may be ill-posed, unless the analysis goes through for <em>all</em> sequences.</p> <p>So let us check. What is the <em>worst</em> that could happen? Well, probably a sequence of <em>only</em> DELETE-LARGER-HALF operations; consider $n$ elements and a sequence of $m = \lfloor \log_2 n \rfloor$ DELETE-LARGER-HALF operations (after this many, the set is almost empty). Since each operation has linear cost in the number of elements, the total cost then is $\Theta\bigl( \sum_{i=1}^{\log_2 m} 2^i \bigr) = \Theta(m)$ -- fits!</p> <p>If we wanted to make this sequence more expensive by making individual summands larger, we would need to add INSERT operations. Say we wanted to increase one summand $2^i$ to $2^{i+1}$. We would have to add $2^i$ elements at cost $O(1)$ each, so the compound cost would increase by $2^{i+1}$ -- but $m$ would increase by $2^{i}$!
Therefore, the <em>amortized</em> cost would only increase by a constant.</p> <p>The core observation is this: if an element is to partake in making a DELETE-LARGER-HALF operation expensive (by being there), it has to be INSERTed first. Since DELETE-LARGER-HALF has linear cost, we can not add more than constant cost per element. In combination, this means we will get constant amortized time.</p> <p>The tokens are a way to formalize this into a proof (which the part you quote does not quite manage to do). The way the tokens are set up and passed around, we see that the cost of any sequence of operations is in $\Theta$ of the total number of tokens we see. We can not possibly see more than $2m$ different tokens so we get an $O(m)$ bound on the compound cost or an $O(1)$ bound on the amortized cost.</p>
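To make the token argument concrete, here is a small sketch (mine, not part of the original answer) of the data structure itself: an unsorted Python list with O(1) INSERT and a DELETE-LARGER-HALF that keeps only the smaller half. For simplicity it uses random-pivot quickselect, which runs in expected linear time, whereas the answer assumes the deterministic linear-time selection algorithm.

```python
import random

class Multiset:
    """Unsorted-array multiset supporting INSERT and DELETE-LARGER-HALF."""

    def __init__(self):
        self.items = []

    def insert(self, x):
        # O(1): just append to the unsorted array
        self.items.append(x)

    def delete_larger_half(self):
        # delete the ceil(|S|/2) largest elements, i.e. keep the
        # floor(|S|/2) smallest ones
        n = len(self.items)
        k = n // 2
        self.items = self._smallest(self.items, k) if k else []

    def _smallest(self, a, k):
        # quickselect-style partition: return a list holding the k
        # smallest elements of a (expected linear time, random pivot)
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        hi = [x for x in a if x > pivot]
        if k <= len(lo):
            return self._smallest(lo, k)
        if k <= len(lo) + len(eq):
            return lo + eq[:k - len(lo)]
        return lo + eq + self._smallest(hi, k - len(lo) - len(eq))
```

Listing all elements in O(|S|) time is then just iterating over `items`.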
339
tokenization
How to do high performance string matching when comparing unordered sets of tokens
https://cs.stackexchange.com/questions/37174/how-to-do-high-performance-string-matching-when-comparing-unordered-sets-of-toke
<p>This is the problem:</p> <p>I have some strings stored in the database. Each of the strings can be seen as a set of tokens separated by comma with no repetition (I mean a token cannot appear more than one time in a string).</p> <p>I want to know if a new string matches any of them without taking token order into account. </p> <p>The metric I think about is something like this (comparing two strings at the time, this is the first thing that came to my mind when trying to solve the problem and don't know if it is already used), a matching percentage calculated like this: </p> <pre><code>Match_Metric(A, B) = number_of_matched_tokens(A, B) / max_number_of_tokens_in_any_of_two_strings(A,B) * 100. </code></pre> <p>Example:</p> <pre><code>String 1: "abc, cde, ghi, adc, dca, aab" String 2: "cd, r, a, x" String 3: "aab, cde, ghi, abc, adc, dca" String 4: "aab, cde, ghi, abc, adc, dca, rrrm, a" 1 vs 2 = 0% 1 vs 3 = 100% 1 vs 4 = 75% </code></pre> <p>What I am trying to avoid is to perform a one to one comparison between tokens, but I am finding that other techniques like edit distance won't give me an exact match in the case of 1 vs 3 unless I first order the tokens.</p> <p>The problem can be extended to do a string search within the tokens, for example:</p> <pre><code>String 1: "abc, cde, ghi, adc, dca, aab" String 2: "cd, r, a, x" </code></pre> <p>As "d" appears in one token in 2 and three tokens in 1, that can affect the metric. In this case an approximate string matching technique such as "edit distance" would be useful but in a token versus token approach. 
The formula would be more complicated in this case, instead of having an integer representing the number of matched tokens it would be a fraction number and could be calculated like this:</p> <p>Comparing two tokens at the time one from the string A and one from the string B:</p> <pre><code>token_match(a,b) = 1 - edit_distance(a,b) / length_of_largest_token(a,b) </code></pre> <p>So the general metric would be:</p> <pre><code>String A = {a0, ..., an} String B = {b0, ..., bm} i = {0, ..., n} j = {0, ..., m} Match_Metric(A, B) = sum(token_match(ai, bj)) / max_number_of_tokens_in_any_of_two_strings(A,B) * 100 </code></pre> <p>Any ideas on what technique/algorithm is more broadly adopted/used for this problem?</p>
<p>So you want to use the <a href="https://en.wikipedia.org/wiki/Jaccard_index" rel="nofollow">Jaccard index</a> as your metric of similarity. Well, the Wikipedia page for the Jaccard index (and which I linked to in the comments above) already has some hints on methods for finding close matches, more efficiently than comparing all pairs. For instance, you can use <a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow">locality-sensitive hashing</a>. Hint for the future: you might want to do more research in the future based on feedback from folks, as locality-sensitive hashing was already suggested in the comments and a relevant Wikipedia page was also mentioned in the comments.</p> <p>If tokens are not too common, there's another alternative method you could also try. You could build an inverted index: an index that, for each token, lists all of the strings that contain it. Now given a new string S, it's easy to enumerate over all of the tokens in it, look each one up in the inverted index, find all other strings that share at least one token in common with S, compute the similarity between that string and S, and keep the best one. As long as no string contains too many tokens and no token is present in too large a fraction of strings, this will perform better than all-pairs comparison.</p> <hr> <p>Your "extended" problem is different in nature and would probably be better asked as a separate question. It might be a much harder problem.</p>
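As a concrete sketch (the names and structure are mine, not from the answer), here is the questioner's Match_Metric together with a toy inverted index that only scores strings sharing at least one token with the query, instead of comparing all pairs:

```python
from collections import defaultdict

def tokens(s):
    # split a comma-separated string into a set of tokens
    return {t.strip() for t in s.split(",")}

def match_metric(a, b):
    # the questioner's metric: shared tokens over the larger token count
    return 100.0 * len(a & b) / max(len(a), len(b))

class TokenIndex:
    """Inverted index: token -> ids of the stored strings containing it."""

    def __init__(self):
        self.index = defaultdict(set)
        self.strings = {}

    def add(self, sid, s):
        toks = tokens(s)
        self.strings[sid] = toks
        for t in toks:
            self.index[t].add(sid)

    def best_match(self, s):
        toks = tokens(s)
        # only strings sharing at least one token can score above 0
        candidates = set().union(*(self.index.get(t, set()) for t in toks))
        return max(candidates,
                   key=lambda sid: match_metric(toks, self.strings[sid]),
                   default=None)
```

On the question's examples this reproduces 0%, 100% and 75% for 1 vs 2, 1 vs 3 and 1 vs 4 respectively.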
340
tokenization
Problem identification: splitting string into tokens taken from a given, possible overlapping set
https://cs.stackexchange.com/questions/162382/problem-identification-splitting-string-into-tokens-taken-from-a-given-possibl
<p>I am facing the following problem in a script I am trying to develop:</p> <p>Given a string and a set of tokens, where the tokens are known and are overlapping (the set can contain the tokens 'a', 'b' and 'ab'), I need to split the string into a list of tokens from the set. I know there can be multiple ones, I don't need all of them, just a random one (obviously it is trivial to get an arbitrary one if I have all possible token lists).</p> <p>The token set is known and guaranteed to cover all possible characters of the input string: there is at least one token equal to a single character for each possible input character.</p> <p>Is it a known problem? If so, does it have known solutions? I would be happy with pointers to relevant literature that I can use to jump-start my research.</p>
<p>This problem is often called the Word Break problem and is a classic exercise for dynamic programming.</p> <p>Base case: if you don't have a string, it's a yes instance.</p> <p>For every token, if the string ends with that token, remove that suffix from the string and recurse.</p> <p>If no token is a suffix of your (non-empty) string, return false.</p> <p>The running time is roughly cubic (the length of the string times the number of tokens times the length of the longest token).</p> <hr /> <p>A very simple recursive function with memoization in Python is given:</p> <pre class="lang-python prettyprint-override"><code>from functools import cache

@cache  # memoization; `tokens` must be hashable, e.g. a tuple of strings
def wordbreak(string, tokens):
    if not string:
        return True
    for token in tokens:
        if string.endswith(token):
            if wordbreak(string.removesuffix(token), tokens):
                return True
    return False
</code></pre> <p>Note that <code>functools.cache</code> and <code>str.removesuffix</code> require Python 3.9 or later. Beware that it's slower than necessary.</p>
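The question actually asks for one concrete split, not just a yes/no answer. A small extension of the same recursion (my sketch, not part of the original answer) returns one tokenization, or None when there is none:

```python
from functools import cache

def split_into_tokens(string, tokens):
    """Return one way of writing `string` as a concatenation of `tokens`,
    or None if no such split exists. Requires Python 3.9+ for
    str.removesuffix and functools.cache."""
    tokens = tuple(tokens)  # must be hashable for the cache

    @cache
    def go(s):
        if not s:
            return []            # empty string: the empty split works
        for t in tokens:
            if s.endswith(t):
                rest = go(s.removesuffix(t))
                if rest is not None:
                    return rest + [t]
        return None              # no token is a suffix of s

    return go(string)
```

Which of the possible splits you get depends on the order in which the tokens are tried, so to get a "random" one you could shuffle `tokens` first.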
341
tokenization
In NLP, does the lexer have to tag the tokens before the parser?
https://cs.stackexchange.com/questions/51146/in-nlp-does-the-lexer-have-to-tag-the-tokens-before-the-parser
<p>In NLP, does the lexer have to tag the tokens before the parser?</p> <p>I.e. does the lexer have to classify the tokens to morphological categories before the parser?</p> <p>I'm thinking yes, but is this also the only way to do the parsing?</p>
<p>Morphological tags can help the parser. On the other hand, the complete sentence structure, maybe even the paragraph context may help to finally disambiguate possible tags for a token. So there is no yes/no answer. Except, maybe, that tagging is, afaik, usually not attributed as work of the lexer, but rather a module of its own.</p>
342
tokenization
What determines the number of arcs and tokens in a Petri Nets Model?
https://cs.stackexchange.com/questions/54444/what-determines-the-number-of-arcs-and-tokens-in-a-petri-nets-model
<p>I study Petri Nets to model some cases related to my job. Currently, I study the basics of Petri Nets and am confused. (For the time being I couldn't get a textbook yet, I will.)</p> <p><strong>my questions</strong></p> <p><strong>q1)</strong> What determines the number of tokens required for the model to be constructed properly? What is a bulletproof way of thinking to determine the number of tokens required? Simple cases may not need a rigorous procedure, but I think complex cases will. (<em>Is it enough to know the conditions, so the number of conditions will equal the number of tokens, <strong>OR</strong> does the number of conditions actually give me the number of arcs between places and transitions, and then I should assign the number of tokens to places depending on the number of arcs?)</em></p> <p><strong>q2)</strong> Am I thinking in the correct way below?</p> <p><strong>q3)</strong> Can you please provide the model required for my icing case below?</p> <hr> <p>case: icing occurs or no icing. (2 states)</p> <p><em>Place 1</em> : Icing occurs</p> <p><em>Place 2</em> : No Icing</p> <p>2 conditions (2 arcs so 2 tokens required for enabling &amp; firing) required for icing in air: </p> <ol> <li><p>water molecules <strong>AND</strong></p></li> <li><p>negative Celsius temperature</p></li> </ol> <p>If these conditions exist simultaneously, then icing will occur.</p> <p>So the enabler from Place 2 to Place 1 needs 2 conditions. From Place 1 to Place 2, the enabler needs "absence of at least 1 condition". After this point I am stuck and cannot draw the model.</p> <p>My real world cases are too complicated.</p>
<p>One of the resources I used to teach myself about Petri Nets was the chapters on Petri Nets in the textbook “Petri Nets and Grafcet: Tools for Modeling Discrete-Event Systems” (David and Alla, 1992).</p> <p>An example process and a Petri Net model of the process may help you answer your first two questions (Chenier, 2016). Thus I am including the following, a Petri Net model to illustrate the icing process at a logical level. Furthermore control is included for recognizing the presence or absence of water and for identifying the freezing point (below 0 Celsius) and melting point of water (above 0 Celsius). The change of phase from liquid to solid and the change of phase from solid to liquid are reactions to the recognition of water and identification of temperature. Assume when water changes phase, every single molecule undergoes the phase change at the same instant.</p> <p>Given these assumptions the icing process behaves like an AND-Gate plus controls for the inputs of the AND-Gate. The controls toggle the input values.</p> <p>Table 1 Input-Output Relations of Water-Temperature versus Ice <a href="https://i.sstatic.net/6WGwo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6WGwo.jpg" alt="Table 1 Input-Output Relations of Water-Temperature versus Ice"></a></p> <p>Figure 1 is a Petri Net model of the icing process and Table 2 is a summary of places and transitions in the model. [For the <a href="http://www.aespen.ca/AEnswers/1458112273.pdf" rel="nofollow noreferrer">PDF version</a> Figure 1 is a dynamic, interactive diagram.] Every (enabled) transition can be triggered by clicking on a green square. To click on an enabled transition is equivalent to an occurrence of the corresponding event. The demonstration mode triggers a transition (T_0,T_1,T_2 or T_3) automatically when any of the other transitions are triggered. 
Please see Notes for more information about Figure 1.</p> <p>Figure 1 A Dynamic, Interactive Petri Net Model of an Icing Process with Input Controls <a href="https://i.sstatic.net/N6jeD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N6jeD.jpg" alt="Figure 1 A Dynamic, Interactive Petri Net Model of an Icing Process with Input Controls"></a></p> <p>Table 2 A Summary of Places and Transitions <a href="https://i.sstatic.net/Ue1s8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ue1s8.jpg" alt="Table 2 A Summary of Places and Transitions"></a></p> <h2>Notes</h2> <ol> <li>The Petri Net model in Figure 1 is a Place/Transition Net.</li> <li>The graphics used to represent an inhibitor arc in Figure 1 is an arrow with a small circle near the corresponding place. The small circle stands for the weight (which is 0). As you probably know, an inhibitor arc is an input with an annotation for testing if the input can fire but does not have an annotation for firing the input.</li> <li>Every edge with two arrowheads has one and only one (logic) annotation for testing the condition of the input.</li> <li>A subtle assumption made for the model in Figure 1 is the possibility of transitions (T4,T5,T6,T7) to occur several times while phase changes (T0,T1,T2,T3) may not be fast enough to occur.</li> </ol> <h2>Scenarios</h2> <p>To assist in the interpretation of the model, the following six scenarios are included.</p> <p>Scenario 1 No water found, normal temperature detected, and no ice. <a href="https://i.sstatic.net/8I9BF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8I9BF.jpg" alt="Scenario 1 No water found, normal temperature detected, and no ice."></a></p> <p>Scenario 2 From Scenario 1, water was detected (T5 fires). In this scenario: water found, normal temperature detected, no ice. 
<a href="https://i.sstatic.net/D3kaw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D3kaw.jpg" alt="Scenario 2 From Scenario 1, water was detected (T5 fires). In this scenario: water found, normal temperature detected, no ice."></a></p> <p>Scenario 3 From Scenario 2, temperature drops below freezing point (T7 fires) but the ice has not formed yet. In this scenario: water found, freezing temperature detected, no ice. <a href="https://i.sstatic.net/R1cqA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R1cqA.jpg" alt="Scenario 3 From Scenario 2, temperature drops below freezing point (T7 fires) but the ice has not formed yet. In this scenario: water found, freezing temperature detected, no ice."></a></p> <p>Scenario 4 From Scenario 3, the system has reacted to the freezing temperature (T3 fires). In this scenario, water found, freezing temperature detected, and ice. <a href="https://i.sstatic.net/JAOhf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JAOhf.jpg" alt="Scenario 4 From Scenario 3, the system has reacted to the freezing temperature (T3 fires). In this scenario, water found, freezing temperature detected, and ice."></a></p> <p>Scenario 5 From Scenario 4, the temperature goes above freezing (T6 fires) but the system has not reacted yet. In this scenario: water found, normal temperature detected, and ice. <a href="https://i.sstatic.net/pknDB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pknDB.jpg" alt="Scenario 5 From Scenario 4, the temperature goes above freezing (T6 fires) but the system has not reacted yet. In this scenario: water found, normal temperature detected, and ice."></a></p> <p>Scenario 6 From Scenario 5, the system has reacted to freezing temperature (T2 fires). In this scenario: water found, normal temperature detected, and no ice. 
<a href="https://i.sstatic.net/4zoHz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4zoHz.jpg" alt="Scenario 6 From Scenario 5, the system has reacted to freezing temperature (T2 fires). In this scenario: water found, normal temperature detected, and no ice."></a></p> <h2>References</h2> <p>Chionglo, J. F. (2016). A Reply to "How to determine / what determines the number of arcs and tokens in a Petri Net model" at Computer Science Stack Exchange. Available at <a href="http://www.aespen.ca/AEnswers/1458112273.pdf" rel="nofollow noreferrer">http://www.aespen.ca/AEnswers/1458112273.pdf</a>.</p> <p>Chenier, A. (2016). How to Determine / What Determines the Number of Arc and Tokens in a Petri Nets Model. Computer Science Stack Exchange. Retrieved on Mar. 14, 2016 at <a href="https://cs.stackexchange.com/questions/54444/how-to-determine-what-determines-the-number-of-arcs-and-tokens-in-a-petri-nets">What determines the number of arcs and tokens in a Petri Nets Model?</a>. </p> <p>David, R. and H. Alla. (1992). Petri Nets and Grafcet: Tools for Modeling Discrete-Event Systems. Upper-Saddle, NJ: Prentice Hall.</p>
343
tokenization
What does it mean to have tokens at a state in a balancing network?
https://cs.stackexchange.com/questions/49827/what-does-it-mean-to-have-tokens-at-a-state-in-a-balancing-network
<p>I was assigned as homework:</p> <blockquote> <p>Suppose we have a width-w balancing network of depth $d$ in a quiescent state $s$ called $B$. Let $n = 2^d$. Prove that if n tokens enter the network on the same wire, pass through the network, and exit, then $B$ will have the same state after the tokens exit as it did before they entered.</p> </blockquote> <p>However, I do not understand what the question is asking (please don't actually do the problem).</p> <p>I have followed chapter 12 of The Art of Multiprocessor Programming, and it's still unclear what the question is asking. Let me give you my thoughts (and confusions):</p> <p>What does it mean by $s$ and $B$? According to the textbook, a balancing network is quiescent if every token that arrived on an input wire has emerged on an output wire (which makes sense because we only care when the tokens pass the network, not their order). Does a quiescent state refer to which wires the tokens have entered? Or, since the order doesn't matter, does it just mean that the same number of tokens that entered have left the network?</p> <p>What do they refer to as "a state"? Does it mean where the tokens are located and which wires they left, or, if we only care about quiescent states, that the total number of tokens at the beginning and the end is the same?</p>
<p>It seems that there is a typo – $s$ and $B$ represent the same thing. It looks like the original $s$ was changed to $B$, but the author of the question forgot to delete $s$.</p> <p>The state of the network is the state of all balancers – which way they point.</p>
344
tokenization
Why do we not use CFGs to describe the structure of lexical tokens?
https://cs.stackexchange.com/questions/55567/why-do-we-not-use-cfgs-to-describe-the-structure-of-lexical-tokens
<p>This was an exam question for my course and I am struggling to actually answer it in a way that is not fluff.</p> <p>Here is my current answer:</p> <p><em>CFGs describe how non-terminal symbols are converted into terminal symbols via a parser. However, a scanner defines what those terminal symbols convert to in terms of lexical tokens. CFGs are grammatical descriptions of a language instead of simply defining what tokens should be scanned from an input string.</em></p> <p><strong>What is the correct way to answer this?</strong></p>
<p>You don't use CFGs because typically lexical analysis can be performed using regular automata, and these are faster than context-free parsers. It's a question of efficiency.</p>
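To illustrate the point, a token scanner can be written entirely with regular expressions, with no grammar rules involved. The token names and patterns below are illustrative, not from the answer:

```python
import re

# a toy lexer built purely from regular expressions; longest-listed-first
# alternation resolves overlaps (NUMBER is tried before IDENT, etc.)
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(text):
    # note: characters matching no pattern are silently skipped in this sketch
    out = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            out.append((m.lastgroup, m.group()))
    return out
```

Each pattern corresponds to a regular language, so the whole scanner is equivalent to one finite automaton, which is exactly why this stage is cheaper than context-free parsing.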
345
tokenization
Why doesn&#39;t the C/C++ compilers create different tokens for different types of numbers?
https://cs.stackexchange.com/questions/89856/why-doesnt-the-c-c-compilers-create-different-tokens-for-different-types-of-n
<p>Based on <a href="https://www.geeksforgeeks.org/cc-tokens/" rel="nofollow noreferrer">GeeksforGeeks</a> and many other sites, the C/C++ compilers will create the same token for <code>float</code>/<code>int</code> etc.</p> <p>However if we have something like this:</p> <pre><code>int A[10.5]; </code></pre> <p>then will there be a parser error or semantic error? </p> <p>The book that I'm reading says the parser rather than the semantic routines will detect it, but after the lexical analysis wouldn't that be converted to this: </p> <pre><code>A[Constants]; </code></pre> <p>meaning the <code>10.5</code> will be converted to a <code>Constants</code> token based on that website? Therefore the parser will not notice this is an error because both of <code>10</code> and <code>10.5</code> will be <code>Constants</code>!</p> <p>So which one is right? Will the parser or one of the semantic routines detects this error? Do the C/C++ compilers create the same tokens for int numbers and floats?</p>
<p>This is in fact an implementation detail of the compiler. The page you referenced only shows one way of assigning types to tokens, while there are also others. The compiler <strong>could</strong> have a lexer that distinguishes between integral and non-integral constants and the parser then cannot match the declaration</p> <pre><code>int A[10.5]; </code></pre> <p>to any rule. But giving an error message stating exactly this to the programmer is not very helpful. Modern C++ compilers have quite complex parsing and compiling routines that give more information in case of errors. As an example, GCC will yield:</p> <pre><code>me@my-computer:/tmp$ g++ test.cpp test.cpp: In function ‘int main()’: test.cpp:9:15: error: size of array ‘A’ has non-integral type ‘double’ int A[10.5]; ^ </code></pre> <p>So you will get quite a good error message, and for obtaining it, some semantic interpretation was actually performed.</p>
346
tokenization
Why is particular token missing in LALR lookahead set?
https://cs.stackexchange.com/questions/59548/why-is-particular-token-missing-in-lalr-lookahead-set
<p>I ran the following grammar (pulled from the dragon book) in the Java Cup Eclipse plugin:</p> <pre><code>S' ::= S S ::= L = R | R L ::= * R | id R ::= L </code></pre> <p>The items associated with state 0 given in the Automaton View are as follows:</p> <pre><code>S ::= ⋅S, EOF S ::= ⋅L = R, EOF S ::= ⋅R, EOF L ::= ⋅* R, {=, EOF} L ::= ⋅id, {=, EOF} R ::= ⋅L, EOF </code></pre> <p>Shouldn't the last item's lookahead set be {=, EOF}? This item could be derived from <code>S ::= ⋅R</code> (in which case the lookahead set is {EOF}) or from <code>L ::= ⋅* R</code> (in which case the lookahead set is {=, EOF}).</p>
<p>In state 0, <code>R ::= ⋅L</code> can only be generated by <code>S ::= ⋅R</code>. In <code>L ::= ⋅* R</code>, the dot precedes <code>*</code>, not <code>R</code>, so no further items are generated by it.</p> <p>The dragon book uses this grammar as an example of the inadequacy of SLR, and the correct computation of the lookahead in this case is an instance; the SLR algorithm bases lookahead decisions on the FOLLOW set rather than actual lookahead possibilities in the state, which will eventually lead to a shift/reduce conflict on lookahead symbol <code>=</code>.</p>
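The lookahead computation can be checked mechanically. Below is a small LR(1) item-closure sketch for this grammar (my code, not from the answer); since no nonterminal here is nullable, the FIRST computation stays simple. It reproduces state 0, with {=, EOF} on the L-items but only EOF on R ::= ⋅L:

```python
GRAMMAR = {
    "S'": [("S",)],
    "S":  [("L", "=", "R"), ("R",)],
    "L":  [("*", "R"), ("id",)],
    "R":  [("L",)],
}
NONTERMS = set(GRAMMAR)

def first(seq, la):
    # FIRST of `seq` followed by lookahead `la`; no symbol in this grammar
    # derives the empty string, so the first symbol decides
    if not seq:
        return {la}
    sym = seq[0]
    if sym not in NONTERMS:
        return {sym}
    out = set()
    for prod in GRAMMAR[sym]:
        out |= first(prod, la)
    return out

def closure(items):
    # items are LR(1) items (lhs, rhs, dot_position, lookahead)
    items = set(items)
    while True:
        new = set()
        for lhs, rhs, dot, la in items:
            if dot < len(rhs) and rhs[dot] in NONTERMS:
                for b in first(rhs[dot + 1:], la):
                    for prod in GRAMMAR[rhs[dot]]:
                        new.add((rhs[dot], prod, 0, b))
        if new <= items:
            return items
        items |= new

state0 = closure({("S'", ("S",), 0, "EOF")})
lookaheads = {}
for lhs, rhs, dot, la in state0:
    lookaheads.setdefault((lhs, rhs), set()).add(la)
```

`R ::= ⋅L` only gets lookahead EOF because its sole source item is `S ::= ⋅R, EOF`, exactly as the answer explains.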
347
tokenization
How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?
https://cs.stackexchange.com/questions/71814/how-to-determine-places-transitions-and-tokens-in-a-scenario-when-modeling-with
<p>When modeling a scenario with Petri nets how should I determine the places, transitions and tokens?</p> <p><strong>Example:</strong> </p> <p>There are two exam assistants in an exam hall observing the exam. They stand in front of the exam hall. When a student has a question one of the assistants goes to him and answers his question while the other stays in front of hall. When the question is answered, the assistant goes back to the front of the hall. </p> <p>By modeling this scenario it must be distinguished which assistant stays in front of hall. Then the Petri net must be expanded so that the assistants take turns answering the questions.</p> <p>Here is my solution:</p> <p><a href="https://i.sstatic.net/lTORr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lTORr.png" alt="enter image description here"></a></p> <p>p0 represents the front of the exam hall. The two tokens represent the assistants and p1 is where the student seats. I also limited the capacity of p1 to one. </p> <p>The given solution is however totaly different: </p> <p><a href="https://i.sstatic.net/cklRB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cklRB.png" alt="enter image description here"></a></p> <p>How should I generally think and how can I determine which part of a given scenario is represented by which part of the Petri net (places, transitions and tokens)?</p>
<p>If you find it challenging to apply Petri Nets in modeling an application then it may help to consider the following mapping between the types of words found in a text description of an application and the types of Petri Net elements found in a Petri Net diagram of the application:</p> <ol> <li>Nouns are candidates for places.</li> <li>Verbs are nominees for transitions (and/or inputs and outputs).</li> <li>Values, amounts or counts are contenders for tokens in places.</li> </ol> <h2>Example Application</h2> <p>[Consider Figure 1 for the following example]. For the “Exam Hall Problem”, think of (Infinity, 2017):</p> <ol> <li>A place as a holder or container for “things”. <ol type="a"> <li>There are two exam assistants in front of the hall and each assistant must be distinguished from the other. Thus there are two places: one place for each assistant in front of the hall (P4, P5).</li> <li>The exam assistants can answer questions at the same time. Thus there are two additional places: one place for each exam assistant answering a question (P2, P3).</li> </ol> </li> <li>A token in a place as the counter for the place, the number of “things”. <ol type="a"> <li>If an exam assistant is not in front of the hall then the place is empty. If an exam assistant is in front of the hall then the place is not empty, the place has a token.</li> <li>If an exam assistant is answering a question then the place is not empty, the place has a token. If an exam assistant is not answering a question then the place is empty.</li> </ol> </li> <li>A transition as a start or end of “activities”. <ol type="a"> <li>An exam assistant going to a student to answer a question is an activity (T1, T3).</li> <li>An exam assistant who answered a question is another activity (T2, T4).</li> </ol> </li> </ol> <p>The given solution for the “Exam Hall Problem” appears to be a solution for the second scenario: the exam assistants take turns answering questions (Infinity, 2017). Figure 1 is a modified version of the given solution. It was modified to satisfy the requirements of the first scenario.
It includes text labels, chosen from or derived from the words in the example description.</p> <p><a href="https://i.sstatic.net/TTDCK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TTDCK.jpg" alt="Petri Net Model for Exam Assistants Answering Questions in an Exam Hall"></a> Figure 1 Petri Net Model for Exam Assistants Answering Questions in an Exam Hall</p> <p>For the “dynamic and interactive version” of this document, the visibility of labels in Figure 1 can be toggled by clicking on the diagram (Chionglo, 2017).</p> <h2>Reference</h2> <p>Chionglo, J. F. (2017). A Reply to "How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?" at Computer Science Stack Exchange. Available at <a href="https://www.academia.edu/31997446/A_Reply_to_How_to_with_Petri_Nets_At_Computer_Science_Stack_Exchange" rel="nofollow noreferrer">https://www.academia.edu/31997446/A_Reply_to_How_to_with_Petri_Nets_At_Computer_Science_Stack_Exchange</a>.</p> <p>Infinity. (2017). "How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?" at Computer Science Stack Exchange. Retrieved on Mar. 21, 2017 at <a href="https://cs.stackexchange.com/questions/71814/how-to-determine-places-transitions-and-tokens-in-a-scenario-when-modeling-with">How to Determine Places, Transitions and Tokens in a Scenario when Modeling with Petri Nets?</a>.</p>
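The token game itself is mechanical, which helps when checking a model by hand. Here is a minimal Place/Transition-net firing rule in code (a sketch of mine; the marking follows the questioner's first model, with P0 the front of the hall and P1 an assistant answering a question; note that an ordinary P/T net does not enforce the capacity limit on P1):

```python
def enabled(marking, pre):
    # a transition is enabled when every input place holds enough tokens
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    # consume tokens from input places, produce tokens on output places
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# questioner's first model: two assistant tokens in front of the hall
go_answer = ({"P0": 1}, {"P1": 1})   # an assistant walks over to a student
go_back   = ({"P1": 1}, {"P0": 1})   # the assistant returns to the front
```

Playing the token game from the marking {P0: 2, P1: 0} shows immediately that this model cannot distinguish the two assistants, which is why the given solution uses a separate place per assistant.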
348
tokenization
How to Tokenize a String and Save Each Substring Separately (Independent Names for Each Substring)
https://cs.stackexchange.com/questions/119887/how-to-tokenize-a-string-and-save-each-substring-separately-independent-names-f
<p>I am given two files one with the name of person and the location that they are from (Evan Lloyd|Brownsville) and one with the name and salary (Evan Lloyd|58697) (the line number that you find the employee on in the first file is not necessarily the line number that find the employee on in the second). The user inputs a location (whole or part). For example if they input "ville" or "Ville" it should include all of the employees in Brownsville, Clarksville, Greenville, etc. I am supposed to join the the name and salary and return them if they are in the city searched for i.e. "ville" or "Ville." I am attempting to use a vector for both of the files vector(name, address) and vector(name, salary) and later return a vector of a string and tuple (address, (name, salary)) as my output. I don't know how to tokenize the string for example "Evan Lloyd|Brownsville" into the substrings "Evan Lloyd" and "Brownsville" separately (without the |) and save them separately so that I can push both strings into my vector(name, address). How would I tokenize the strings that way or should I try something else entirely?</p>
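One simple approach (a sketch in Python for brevity, although the question is about C++ vectors; the function names are mine) is to split each line at the first '|' with a partition-style call, join the two files through a dictionary keyed on the name, and filter cities case-insensitively. The same split-on-first-separator idea carries over to C++ with `std::string::find` and `substr`:

```python
def parse_lines(lines):
    # each line looks like "Evan Lloyd|Brownsville": split on the first '|'
    out = {}
    for line in lines:
        name, _, value = line.strip().partition("|")
        out[name] = value
    return out

def employees_in(location_query, city_lines, salary_lines):
    # join name->city with name->salary and keep cities containing the
    # query, ignoring case (so "ville" and "Ville" behave the same)
    cities = parse_lines(city_lines)
    salaries = parse_lines(salary_lines)
    q = location_query.lower()
    return [(name, city, salaries[name])
            for name, city in cities.items()
            if q in city.lower() and name in salaries]
```

Because the join goes through a dictionary, the two files do not need to list employees in the same order, which matches the caveat in the question.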
349
tokenization
Expected gain of a game of chance with differently-priced tokens
https://cs.stackexchange.com/questions/4899/expected-gain-of-a-game-of-chance-with-differently-priced-tokens
<p>Foo and Bar are playing a game of strategy. At the start of the game, there are $N$ apples, placed in a row (in straight line). The apples are numbered from $1$ to $N$. Each apple has a particular price value.</p> <p>The price of $i$th apple is $p_i$.</p> <p>In this game, the players Foo and bar make an alternative move.</p> <p>In each move, the player does the following:</p> <ul> <li>If there is more than one apple left, the player tosses an unbiased coin. If the outcome is head, the player takes the first apple among the apples that are currently present in a row in a straight line.</li> <li>If there is a single apple left, te player takes it.</li> </ul> <p>The goal here is to calculate the expected total price value that Foo will get if Foo plays first.</p> <pre><code>Example 1: N=5 Apple price val: 5 2 3 1 5 Answer is : 11.00 Example 2: N=6 7 8 2 3 7 8 Answer : 21.250 Example 3: N=3 1 4 9 First Second Third Foo Total Val Foo gets 1 Bar gets 4 Foo gets 9 10 Foo gets 1 Bar gets 9 Foo gets 4 5 Foo gets 9 Bar gets 1 Foo gets 4 13 Foo gets 9 Bar gets 4 Foo gets 1 10 probability 0.5 • 0.5 = 0.25. Expected value (Foo)= (0.25 *10 )+ (0.25 *5) + (0.25*13)+ (0.25*10) = 9.500 </code></pre> <p>I wrote the following code:</p> <pre><code>#include&lt;iostream&gt; using namespace std; double calculate(int start,int end,int num,int current); int arr[2010]; int main() { int T; scanf("%d",&amp;T); for(int t=0;t&lt;T;t++) { int N; scanf("%d",&amp;N); for(int i=0;i&lt;N;i++) { scanf("%d",&amp;arr[i]); } printf("%.3lf\n",calculate(0,N-1,N/2+N%2,0)); } return 0; } double calculate(int start,int end,int num,int current) { if(num==current) return 0; double value=0; value=.5*arr[start]+.5*arr[end]+.5*calculate(start+1,end,num,current+1)+.5*calculate(start,end-1,num,current+1); return value; } </code></pre> <p>But the above code is quite slow. The constraints are: price of apples $p_i \le 1000$; $1 \le N \le 2000$; there are 500 test cases. How can I solve this more efficiently?</p>
<p>They probably meant you to solve it using dynamic programming. Since I'm guessing this is an exercise, I won't say anything more on this front. Also, your program is incorrect: it doesn't consider the actions of Bar.</p> <p>The game is not really a <em>game of strategy</em>, since there are no choices involved, only chance. In order to compute the expected value that Foo gets, it is enough to compute the probability that Foo takes each particular item. Let $p(a,b)$ be the probability that Foo takes an item which has $a$ items to its left and $b$ items to its right ($a,b \geq 0$, total number of elements is $a+b+1$). Then $$ \begin{align*} p(0,0) &amp;= 1, \\ p(0,1) &amp;= 1/2, \\ p(1,0) &amp;= 1/2, \\ p(1,1) &amp;= 1/2, \\ p(0,b+2) &amp;= 1/2 + p(0,b)/4, \\ p(a+2,0) &amp;= 1/2 + p(a,0)/4, \\ p(1,b+2) &amp;= p(0,b+1)/2 + p(1,b)/4, \\ p(a+2,1) &amp;= p(a+1,0)/2 + p(a,1)/4, \\ p(a+2,b+2) &amp;= (p(a+2,b)+2p(a+1,b+1)+p(a,b+2))/4. \end{align*} $$ You could precompute the requisite values for $a+b+1 \leq 2000$ and then your program will be blazingly fast.</p> <p>It turns out that for fixed $c$, $p(c,n-c)$ tends to a limit $p_c = 1/2(1 + (-1)^c/3^{c+1})$ as $n \to \infty$. To see why, note first that $p_0$ is the probability that, if you and a friend alternately toss fair coins, yours will come up heads first. There are many ways to see that $p_0 = 2/3$. Next, we have the formula $p_{c+1} = p_0 (1-p_c) + (1-p_0) p_c$ (exercise), from which the formula for $p_c$ follows by induction.</p>
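As a sanity check (outside the exercise itself), the recurrences above can be transcribed directly. This minimal Python sketch memoises $p(a,b)$ exactly as written; for the asker's $N=3$ example it reproduces the 9.500 from the worked enumeration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(a, b):
    """Probability that Foo takes the item with a items to its left
    and b items to its right, with Foo moving first."""
    if (a, b) == (0, 0):
        return 1.0
    if (a, b) in ((0, 1), (1, 0), (1, 1)):
        return 0.5
    if a == 0:                      # b >= 2
        return 0.5 + p(0, b - 2) / 4
    if b == 0:                      # a >= 2
        return 0.5 + p(a - 2, 0) / 4
    if a == 1:                      # b >= 2
        return p(0, b - 1) / 2 + p(1, b - 2) / 4
    if b == 1:                      # a >= 2
        return p(a - 1, 0) / 2 + p(a - 2, 1) / 4
    return (p(a, b - 2) + 2 * p(a - 1, b - 1) + p(a - 2, b - 2)) / 4

def expected_foo(prices):
    n = len(prices)
    return sum(v * p(i, n - 1 - i) for i, v in enumerate(prices))

print(expected_foo([1, 4, 9]))  # matches the worked enumeration: 9.5
```

For $N$ up to 2000 you would fill the table bottom-up instead of recursing (the naive recursion exceeds Python's default recursion depth), but the arithmetic is the same.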
350
tokenization
In NLP, tokens not seen in training sample, but you know or don&#39;t know what they are
https://cs.stackexchange.com/questions/22092/in-nlp-tokens-not-seen-in-training-sample-but-you-know-or-dont-know-what-they
<p>In NLP, among tokens that you don't observe in a training sample but still expect may occur in a test sample, do you distinguish between</p> <ul> <li>those you know what they are, and</li> <li>those you don't know what they are (nor how many of them there are)?</li> </ul> <p>If yes, how do you treat them differently when estimating the probabilities in N-gram models?</p>
351
tokenization
Relation between programming languages requiring declaration of variables before use and using the token class $\text{id}$ while parsing
https://cs.stackexchange.com/questions/136591/relation-between-programming-languages-requiring-declaration-of-variables-before
<p>I was going through the text <em>Compilers: Principles, Techniques and Tools</em> by <em>Ullman et. al.</em> where I came across the following excerpt.</p> <blockquote> <p>Example 4.11. Consider the abstract language <span class="math-container">$L_1 = \text{ { $wcw$ | $w$ is in $(a|b)^*$}}$</span>. <span class="math-container">$L_1$</span> consists of all words composed of a repeated string of <span class="math-container">$a$</span>'s and <span class="math-container">$b$</span>'s separated by a <span class="math-container">$c$</span>, such as <span class="math-container">$aabcaab$</span>. It can be proven this language is not context free. This language abstracts the problem of checking that identifiers are declared before their use in a program. That is, the first <span class="math-container">$w$</span> in <span class="math-container">$wcw$</span> represents the declaration of an identifier <span class="math-container">$w$</span>. The second <span class="math-container">$w$</span> represents its use. While it is beyond the scope of this book to prove it, the non-context-freedom of <span class="math-container">$L_1$</span>, directly implies the non-context-freedom of programming languages like <span class="math-container">$\text{Algol}$</span> and <span class="math-container">$\text{Pascal}$</span>, which require declaration of identifiers before their use, and <em><strong>which allow identifiers of arbitrary length.</strong></em></p> <p><em><strong>For this reason, a grammar for the syntax of <span class="math-container">$\text{Algol}$</span> or <span class="math-container">$\text{Pascal}$</span> does not specify the characters in an identifier.</strong></em> Instead, all identifiers are represented by a token such as <span class="math-container">$\text{id}$</span> in the grammar. In a compiler for such a language, the semantic analysis phase checks that identifiers have been declared before their use. 
□</p> </blockquote> <p>I can understand that, since <span class="math-container">$\text{Algol}$</span> and <span class="math-container">$\text{Pascal}$</span> require declaration of identifiers before their use, we cannot check this property using a context-free grammar. But what is the connection of this with the point about &quot;allowing identifiers of arbitrary length&quot;?</p> <p>Moreover, the authors add that instead of using the characters in an identifier, identifiers are represented by the token class <span class="math-container">$\text{id}$</span>. That identifiers are represented by the token class <span class="math-container">$\text{id}$</span> was known to me as a fact, but I did not quite know its significance as far as the explanation in the example of the text is concerned.</p> <p>Please explain.</p>
<p>In order for their example to work, the authors need identifiers to be of unlimited length. This is because the language <span class="math-container">$$ \{ wcw : w \in \{a,b\}^*, |w| \leq n \} $$</span> is context-free (indeed, regular).</p> <p>The syntax of a language like Pascal or Algol is context-free. This accomplished by waiving the requirement that an identifier be declared before its usage; this will be checked on-the-fly by the parser. This idea is implemented by representing identifiers as a single token in the grammar of Pascal or Algol.</p>
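The bounded case can be made concrete: for fixed $n$, $\{wcw : |w| \le n\}$ is a finite union of fixed strings, hence denotable by a regular expression with one alternative per $w$. A small Python illustration (the construction is just for this sketch, and clearly blows up exponentially in $n$):

```python
import re
from itertools import product

def bounded_wcw_regex(n):
    # One alternative per w in {a,b}^* with |w| <= n: a finite union
    # of fixed strings, hence a regular language.
    ws = [''.join(t) for k in range(n + 1) for t in product('ab', repeat=k)]
    return re.compile('^(' + '|'.join(w + 'c' + w for w in ws) + ')$')

r = bounded_wcw_regex(2)
print(bool(r.match('abcab')))   # True: w = ab
print(bool(r.match('abcba')))   # False: the two sides differ
```

The exponential size of this regex is one intuition for why no single finite automaton can handle unbounded $|w|$.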
352
tokenization
Why do we need $k \geq n$ in Dijkstra&#39;s token ring self-stabilizing system?
https://cs.stackexchange.com/questions/49738/why-do-we-need-k-geq-n-in-dijkstras-token-ring-self-stabilizing-system
<p>Let's say I have a ring with four nodes $n=\{0;1;2;3\}$ and three possible states $k=\{0;1;2\}$. A transient failure happens and the system ends up in an illegal state.</p> <p>Since the restriction $k \geq n$ is violated here, I expect there to be an execution that keeps the system in illegal states indefinitely, but in all the executions I try in my head and on paper, the system still stabilises at some point. Could you give me an execution that loops back into illegal states forever?</p>
<p>Here is how to solve this for your particular $n$ and $k$. Your system has $k^n = 81$ possible states. You can describe the evolution of the system as a directed graph: there is an edge $s_1 \to s_2$ if state $s_1$ evolves in one step to state $s_2$. Some states are legal, some are illegal. You want to find an illegal state $s$ from which no legal state is reachable. This is a question that can be solved using a graph traversal algorithm such as DFS.</p> <p>Once you find such a state, you can try to generalize the construction to other values of $n$ and $k$. Other simulations (with other values of $n$ and $k$) could be helpful here.</p>
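The suggested search can be automated. The sketch below assumes the standard formulation of Dijkstra's K-state token ring under a central daemon: machine 0 is privileged when S[0] == S[n-1] and fires by setting S[0] = (S[0]+1) mod k; machine i > 0 is privileged when S[i] != S[i-1] and fires by setting S[i] = S[i-1]; a state is legal when exactly one machine is privileged. It builds the full 81-state graph for n = 4, k = 3 and looks for a cycle that stays entirely inside illegal states, i.e., an execution that never stabilises; whether such a cycle exists for these parameters is exactly what the questioner wants to check, so the script reports the result rather than assuming it:

```python
from itertools import product

def analyse(n, k):
    states = list(product(range(k), repeat=n))

    def moves(s):
        """All states reachable in one step (one privileged machine fires)."""
        out = []
        if s[0] == s[-1]:                    # machine 0 privileged
            out.append(((s[0] + 1) % k,) + s[1:])
        for i in range(1, n):
            if s[i] != s[i - 1]:             # machine i privileged
                out.append(s[:i] + (s[i - 1],) + s[i + 1:])
        return out

    def privileged(s):
        return (s[0] == s[-1]) + sum(s[i] != s[i - 1] for i in range(1, n))

    illegal = {s for s in states if privileged(s) != 1}

    # Repeatedly discard illegal states whose every successor leaves the set;
    # whatever survives lies on a cycle made only of illegal states.
    live = set(illegal)
    changed = True
    while changed:
        changed = False
        for s in list(live):
            if not any(t in live for t in moves(s)):
                live.discard(s)
                changed = True
    return len(states), live

total, live = analyse(4, 3)
print(total, "states; non-stabilising execution exists:", bool(live))
```

If `live` comes back empty, no scheduling can avoid the legal states forever for this (n, k), which would explain why every execution tried by hand stabilises.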
353
tokenization
On a table there are $N$ stacks. Stack $i$ contains $i$ tokens. Minimum number of moves to make all stacks empty
https://cs.stackexchange.com/questions/162707/on-a-table-there-are-n-stacks-stack-i-contains-i-tokens-minimum-number-o
<p>On a table there are <span class="math-container">$n$</span> stacks (numbered <span class="math-container">$1$</span> to <span class="math-container">$n$</span>). Stack <span class="math-container">$i$</span> contains <span class="math-container">$i$</span> tokens (<span class="math-container">$1 \leq i \leq n$</span>). During a move, a set of stacks can be chosen and the same number of tokens drawn from each stack in the chosen set. What is the minimum number of moves to make all stacks empty? For example, if <span class="math-container">$n=3$</span>, the answer is <span class="math-container">$2$</span>: on the first move you can choose stacks <span class="math-container">$2$</span> and <span class="math-container">$3$</span> and remove <span class="math-container">$2$</span> tokens from each, and on the second move you can choose stacks <span class="math-container">$1$</span> and <span class="math-container">$3$</span> and remove one token from each.</p> <p>My approach would be to split the stacks in half and remove <span class="math-container">$n/2$</span> tokens from each stack in the second half. Now the problem size is reduced by half and we can solve it recursively.</p> <p>I don't know whether this approach results in the minimum number of moves or, if it does, how to prove it.</p>
<p>Let <span class="math-container">$k$</span> be the smallest integer such that <span class="math-container">$2^k &gt; n$</span>.</p> <p><strong>Any solution must use at least <span class="math-container">$k$</span> moves</strong></p> <p>Suppose towards a contradiction that there exists a winning strategy using at most <span class="math-container">$k-1$</span> moves <span class="math-container">$m_1, m_2, \dots$</span>. Label each stack <span class="math-container">$i$</span> with the set <span class="math-container">$M(i)$</span> of moves <span class="math-container">$m_j$</span> that involve <span class="math-container">$i$</span> and notice that no label can be empty. Since the number of possible labels (i.e., the number of possible non-empty sets of moves) is at most <span class="math-container">$2^{k-1} - 1 \le n - 1$</span>, there must be at least two distinct stacks <span class="math-container">$i,j$</span> such that <span class="math-container">$M(i) = M(j)$</span>. This means that the winning strategy removes the same number of tokens from <span class="math-container">$i$</span> and <span class="math-container">$j$</span>, which is a contradiction.</p> <p><strong>The divide and-conquer algorithm uses exactly <span class="math-container">$k$</span> moves</strong></p> <p>Consider now the following recursive algorithm (which is probably the one you are describing in your question):</p> <ul> <li><p>If <span class="math-container">$k=1$</span> (i.e., <span class="math-container">$n=1$</span>), let <span class="math-container">$m$</span> be the move that removes the only remaining token from the only existing stack and return <span class="math-container">$\{m\}$</span>.</p> </li> <li><p>Split the stacks into two sets <span class="math-container">$S_1 = \{1, \dots, 2^{k-1} - 1\}$</span>, <span class="math-container">$S_2 = \{ 2^{k-1}, \dots, n\}$</span>.</p> </li> <li><p>Consider the move <span class="math-container">$m$</span> that removes <span 
class="math-container">$2^{k-1}$</span> tokens from each stack in <span class="math-container">$S_2$</span>. After <span class="math-container">$m$</span>, one stack in <span class="math-container">$S_2$</span> is empty and the other <span class="math-container">$r = n - 2^{k-1} &lt; 2^k - 2^{k-1} = 2^{k-1}$</span> stacks contain <span class="math-container">$1, 2, \dots, r$</span> tokens.</p> </li> <li><p>Call the algorithm recursively on <span class="math-container">$S_1$</span>, to obtain a set <span class="math-container">$M = \{m_1, m_2, \dots\}$</span> of moves that makes <span class="math-container">$S_1$</span> empty.</p> </li> <li><p>Construct a modified set of moves <span class="math-container">$M'$</span> that contains all moves <span class="math-container">$m_i$</span> from <span class="math-container">$M$</span>, with the exception that whenever a move <span class="math-container">$m_i$</span> removes some number <span class="math-container">$x$</span> of tokens from the <span class="math-container">$i$</span>-th stack (which is in <span class="math-container">$S_1$</span>), we also remove <span class="math-container">$x$</span> tokens from the <span class="math-container">$(2^{k-1}+i)$</span>-th stack (i.e., from the <span class="math-container">$(i+1)$</span>-th stack in <span class="math-container">$S_2$</span>), if it exists. Notice that this does not increase the number of moves.</p> </li> <li><p>Return <span class="math-container">$\{m\} \cup M'$</span>.</p> </li> </ul> <p>Each invocation of the above algorithm increases the number of moves by <span class="math-container">$1$</span>, therefore the overall number of moves is equal to the maximum number of recursive calls.
Since each invocation decreases the value of <span class="math-container">$k$</span> by <span class="math-container">$1$</span> and the base case corresponds to <span class="math-container">$k=1$</span>, we can conclude that the algorithm returns a sequence with exactly <span class="math-container">$k$</span> moves.</p>
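The recursion unrolls into a simple binary-digit view: in move $j$ (0-indexed), remove $2^j$ tokens from every stack whose current count has bit $j$ set. Since the initial counts are $1, \dots, n$, this makes exactly $k$ moves, the number of bits of $n$. A short sketch of that equivalent view (not the literal recursion above):

```python
def clear_stacks(n):
    """Empty stacks of sizes 1..n; move j removes 2**j tokens from every
    stack whose current size has bit j set. Returns the moves made."""
    stacks = {i: i for i in range(1, n + 1)}
    moves = []
    j = 0
    while (1 << j) <= n:
        chosen = [i for i, size in stacks.items() if size & (1 << j)]
        for i in chosen:
            stacks[i] -= 1 << j
        moves.append((1 << j, chosen))
        j += 1
    assert all(size == 0 for size in stacks.values())
    return moves

print(len(clear_stacks(3)))    # 2 moves, matching the n = 3 example
print(len(clear_stacks(100)))  # 7 moves, since 2**7 = 128 > 100
```

The move count equals the bit length of $n$, i.e., the smallest $k$ with $2^k > n$, matching the lower bound.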
354
tokenization
How to define at least one occurrence of a string between two tokens in bottom up LALR(1) parser grammar
https://cs.stackexchange.com/questions/9961/how-to-define-at-least-one-occurrence-of-a-string-between-two-tokens-in-bottom-u
<p>I am trying to define a non terminal symbol in a LALR(1) grammar (with CUP parser). It is requested that </p> <pre><code>the &lt;code&gt; token must appear exactly twice, while the &lt;hour&gt; token must appear at least once. </code></pre> <p>In the end I came up with this definition:</p> <pre><code>section ::= hour_l CODE SC hour_l CODE SC hour_l ; hour_l ::= /* epsilon */ | hour_l HOUR SC ; </code></pre> <p>where <code>SC</code> is a separator (semicolon) between tokens and <code>hour_l</code> is the non terminal symbol for hour's list. This solution has a problem: <code>HOUR</code> can be absent, because <code>epsilon</code> can be reduced from <code>hour_l</code>. Is there a clever solution other than specifying all possibilities or using the semantic capabilities of CUP (ie. putting a counter of how many times <code>HOUR</code> is present in <code>section</code>)? I'd prefer not to use semantics in order to achieve this; in fact, it seems to me this is syntax related.</p> <p>Thanks</p>
<p>My solution, suggested by a friend, is to use a Finite State Machine. I drew a Deterministic Finite Automata, and $C$ is the final state accepted by this machine:</p> <p><img src="https://i.sstatic.net/iTtsg.png" alt="DFA"></p> <p>I then transformed it into a right regular grammar:</p> <pre><code>section ::= c ; a ::= CODE SC ; b ::= a CODE SC ; c ::= c HOUR SC | b HOUR SC | e CODE SC ; d ::= HOUR SC | d HOUR SC ; e ::= e HOUR SC | a HOUR SC | d CODE SC ; </code></pre> <p>Hope it helps.</p>
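Before committing to a grammar, it can help to test the intended token language directly. The sketch below encodes `HOUR SC` as `H` and `CODE SC` as `C`, and accepts exactly the strings with two `C`s and at least one `H` (the reading of the requirement that the DFA above recognises, assuming that interpretation is the intended one):

```python
import re

# H = "HOUR SC", C = "CODE SC".
# Valid iff exactly two C's (in any of the three hour slots) and >= 1 H.
VALID = re.compile(r'^(?=.*H)H*CH*CH*$')

for s in ['HCC', 'CHC', 'CCH', 'HCHCH', 'CC', 'HC']:
    print(s, bool(VALID.match(s)))
```

Checking candidate grammars against such an oracle on short strings is a quick way to catch the "epsilon lets HOUR be absent" bug before wiring the grammar into CUP.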
355
tokenization
In describing the tokens of a programming language using RE, it is not necessary to have the $\emptyset$ or $\epsilon$. Why is this?
https://cs.stackexchange.com/questions/85615/in-describing-the-tokens-of-a-programming-language-using-re-it-is-not-necessary
<p>In describing the tokens of a programming language using regular expressions, it is not necessary to have $\emptyset$ (for the empty set) or $\epsilon$ (for the empty string). Why is this?</p> <p>Please tell me why it is not necessary. Thanks.</p>
<p>The empty set is only needed in order to describe the empty regular language. The set of tokens of a certain type is never empty, so the empty set isn't needed.</p> <p>Similarly, the empty string is only needed to describe regular languages which include the empty string. The empty string isn't a token of any type, so the empty string isn't needed.</p> <hr> <p>Here are some proofs. I will use the convention in which $\emptyset$ is the emptyset and $\epsilon$ is the empty string. I will use the following inductive definition of regular expressions over an alphabet $\Sigma$:</p> <ul> <li>$\emptyset$ is a regular expression.</li> <li>$\epsilon$ is a regular expression.</li> <li>For any $\sigma \in \Sigma$, $\sigma$ is a regular expression.</li> <li>If $r$ is a regular expression then $r^*$ is a regular expression.</li> <li>If $r_1,r_2$ are regular expressions then $r_1+r_2$ is a regular expression.</li> <li>If $r_1,r_2$ are regular expressions then $r_1r_2$ is a regular expression.</li> </ul> <p>Two regular expressions are <em>equivalent</em> if they generate the same language. We denote this by $\approx$. The language generated by $r$ is denoted $L[r]$.</p> <p><strong>Lemma 1.</strong> Every regular expression is either equivalent to $\emptyset$ or to a regular expression without $\emptyset$.</p> <p><strong>Corollary 1.</strong> If a regular expression doesn't generate the empty language, then it is equivalent to a regular expression without $\emptyset$.</p> <p>Note that every regular expression not involving $\emptyset$ doesn't generate the empty language.</p> <p><strong>Proof.</strong> The proof is by structural induction. The base cases $\emptyset,\epsilon,\sigma$ are obvious. Consider now a regular expression $r$ constructed inductively from one or two regular expressions $r_1,r_2$, which we can assume satisfy the dichotomy in the lemma. 
If $r_1,r_2 \neq \emptyset$ then there is nothing to prove, so assume that at least one of them is $\emptyset$.</p> <ul> <li>$r = r_1^*$: in that case $r = \emptyset^* \approx \epsilon$.</li> <li>$r = r_1 + r_2$: if $r_1=r_2=\emptyset$ then $r \approx \emptyset$. If $r_1 \neq \emptyset$ and $r_2 = \emptyset$ then $r \approx r_1$.</li> <li>$r = r_1r_2$: in that case $r \approx \emptyset$. $\qquad\square$</li> </ul> <p><strong>Lemma 2.</strong> Every $\emptyset$-free regular expression can be written in one of the forms $\epsilon,s,s+\epsilon$, where $s$ is $\emptyset,\epsilon$-free and $\epsilon \notin L[s]$.</p> <p><strong>Corollary 2.</strong> If $\epsilon \notin L[r]$ and $L[r] \neq \emptyset$ then $r$ is equivalent to a regular expression not involving $\emptyset,\epsilon$.</p> <p><strong>Proof.</strong> Again the proof is by structural induction. The base cases $\epsilon,\sigma$ are obvious. Let us now consider a regular expression $r$ constructed inductively from $r_1,r_2$ which are of one of the forms $s_i,s_i+\epsilon,\epsilon$, where the $s_i$ satisfy the conditions stated in the lemma.</p> <ul> <li>$r = r_1^*$: If $r_1 = \epsilon$ then $r \approx \epsilon$. If $r_1 = s_1 + \epsilon$ or $r_1 = s_1$ then $r \approx s_1s_1^* + \epsilon$.</li> <li>$r = r_1 + r_2$: If $r_1=r_2=\epsilon$ then $r \approx \epsilon$. If $r_1=s_1$ and $r_2 = s_2$ then $r$ is already of the required form. Otherwise $r \approx (s_1+s_2) + \epsilon$.</li> <li>$r = r_1r_2$: If $r_1=r_2=\epsilon$ then $r \approx \epsilon$. If $r_1=s_1$, $r_2=s_2$ then $r$ is already of the required form. If $r_1=s_1$ and $r_2=s_2+\epsilon$ then $r \approx s_1s_2 + s_1$. Similarly, if $r_1=s_1+\epsilon$ and $r_2=s_2$ then $r \approx s_1s_2 + s_2$. Finally, if $r_1=s_1+\epsilon$ and $r_2=s_2+\epsilon$ then $r \approx (s_1s_2 + s_1 + s_2) + \epsilon$. $\qquad\square$</li> </ul>
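The proof of Lemma 1 is constructive, so it can be run. Here is a sketch of the corresponding simplifier on a small tuple-based regex AST (the tuple encoding is invented for this illustration): it returns either the literal empty-set expression or an equivalent expression containing no $\emptyset$ at all.

```python
EMPTY, EPS = ('empty',), ('eps',)

def drop_empty(r):
    """Return an equivalent regex that is either EMPTY itself
    or contains no occurrence of the empty-set symbol (Lemma 1)."""
    op = r[0]
    if op in ('empty', 'eps', 'sym'):
        return r
    if op == 'star':
        s = drop_empty(r[1])
        return EPS if s == EMPTY else ('star', s)   # empty* = eps
    a, b = drop_empty(r[1]), drop_empty(r[2])
    if op == 'union':
        if a == EMPTY:
            return b                                # empty + r = r
        if b == EMPTY:
            return a
        return ('union', a, b)
    # op == 'concat'
    if EMPTY in (a, b):
        return EMPTY                                # empty . r = empty
    return ('concat', a, b)

r = ('concat', ('sym', 'a'), ('union', ('empty',), ('star', ('empty',))))
print(drop_empty(r))  # ('concat', ('sym', 'a'), ('eps',))
```

The three rewrite rules in the recursion are exactly the three cases of the inductive step in the proof.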
356
tokenization
Searching through all program of a stack-based language with little memory
https://cs.stackexchange.com/questions/53506/searching-through-all-program-of-a-stack-based-language-with-little-memory
<p>I wrote a simple stack based language, and am looking to exhaustively generate all programs for it, to find the shortest program that generates a particular output.</p> <p>Given a program fragment, I can determine if it is terminated, and, if not, how many operands it requires. Currently, my search strategy is:</p> <pre><code> If program is terminated --&gt; test it else --&gt; add all programs with one additional operand to the queue </code></pre> <p>However, the queue ends up using too much space. Instead, I'd rather keep the queue small, and find a way to produce all valid programs efficiently (more efficiently than just trying all program text). What is a good strategy to generate all valid programs for a simple stack language?</p> <p><strong>UPDATE</strong> The language is guaranteed to halt (and is of course not Turing complete).</p> <p><strong>UPDATE:</strong> The approach I am working on is as follows:</p> <ol> <li>Observe that, for certain program fragments, nothing that can be appended to them will make them valid</li> <li>Define an ordering over each token that may appear in a program.</li> <li>Define a "mayBeAppendedToToMakeAValidProgram" over a incomplete program.</li> <li>For n = 0 to infinity:</li> <li>Start with a program of n tokens; set each token to the first</li> <li>Increment token-0 until mayBeAppendedToToMakeAValidProgram(token-0)</li> <li>Increment token-1 until mayBeAppendedToToMakeAValidProgram(token-0 + token-1)</li> <li>Etc. until the last token</li> <li>Increment the last token until the program is valid; test that program</li> <li>On to the next one</li> </ol>
<p>I can give you an answer about the principles.</p> <p>Find a computable bijection between your programs and the natural numbers. In essence, you would construct an <a href="https://en.wikipedia.org/wiki/Admissible_numbering" rel="nofollow">admissible numbering</a>. Then, you can just iterate over the naturals, compute the program corresponding to each number and perform your checks. You do not need to store any programs but the one under consideration.</p> <p>The challenge is, of course, to find this bijection; it will depend on your language. If it helps, you can relax injectivity; it is not necessary, but duplicates will obviously have performance impacts.</p> <p>One concrete idea is: represent your language by a context-free grammar. Enumerate left-derivations along rule precedences, going from one program to the next using backtracking. It may even be possible to encode the derivation process in numbers, yielding a mapping as mentioned above.</p>
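At the token level (before any grammar filtering), one concrete such bijection is bijective base-$k$ numbering, which enumerates token strings in length-lexicographic order; only the current index needs to be stored. A sketch with made-up token names:

```python
def nth_program(n, tokens):
    """Bijective base-k decoding: 0 -> empty string, 1..k -> the single
    tokens, then all pairs, and so on, with no gaps and no duplicates."""
    k = len(tokens)
    out = []
    while n > 0:
        n -= 1
        out.append(tokens[n % k])
        n //= k
    return list(reversed(out))

toks = ['PUSH1', 'ADD', 'DUP']   # hypothetical token alphabet
for i in range(6):
    print(i, nth_program(i, toks))
```

In the enumeration loop you would decode index i, run the stack-validity check (or the "may be extended to a valid program" prefix pruning), and test only the programs that pass.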
357
tokenization
Why does lexer has O(n) time complexity?
https://cs.stackexchange.com/questions/162979/why-does-lexer-has-on-time-complexity
<p>According to my CS knowledge so far, a lexer uses a DFA (which takes linear time) for 'each' token type to find the next token, so in the worst case it should try 'all possible' token types of the language. (There are also other reasons, e.g. finding the 'longest matching pattern' to distinguish between <code>if a</code> (keyword) and <code>ifa</code> (identifier).)</p> <p>Then the lexer repeats this process until the input string ends, producing N tokens as a result. So I ended up with the conclusion that a lexer has <span class="math-container">$\mathcal O(n^2)$</span> time complexity. But every resource and book says a lexer takes linear time because the DFA has linear time complexity. What am I missing?</p> <p>*Edit: What was wrong is that I assumed the lexer uses multiple DFAs, one per token type. In reality, the lexer forms a single DFA representing all token types of the given regular language, and it can identify token types with its final states. Thus it takes <span class="math-container">$\mathcal O(n)$</span>.</p>
<p>Your assumed execution is based on a suboptimal backtracking implementation: an NFA that has to explore all paths through it one by one.</p> <p>A more optimal implementation keeps a bitset of all states the NFA is in simultaneously. This lets you represent any of the <span class="math-container">$2^n$</span> state subsets without explicitly listing them.</p> <p>For a lexer doing tokenization you might not be doing keyword recognition yet and instead leave that to a next parser step that checks whether a potential keyword is an actual keyword. This makes the lexer DFA much smaller.</p> <p>If however you need to differentiate between keywords and identifiers, then you would put the final state after seeing the first character that <em>cannot</em> be part of an identifier. In most programming languages, once you know you are in an identifier it lasts until the first non-identifier character, so there is no point in backtracking.</p> <p>The lexer DFA doesn't have a final state per se (other than actual EOF) but instead an output transition that emits a token. When, after the <code>f</code> of an <code>if</code>, the next character cannot be part of an identifier, it emits an IF token; if instead that character can be part of an identifier, it transitions into a general IDENTIFIER state looping back into itself until it sees a non-identifier character.</p>
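The identifier-vs-keyword point can be seen in a toy single-pass lexer sketch (hand-rolled rather than generated from a DFA table): it munches an identifier up to the first non-identifier character and only then decides keyword vs identifier, so there is no backtracking and each input character is examined a constant number of times:

```python
KEYWORDS = {'if', 'else', 'while'}   # hypothetical keyword set

def lex(src):
    tokens, i, n = [], 0, len(src)
    while i < n:
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isalpha():
            j = i
            while j < n and src[j].isalnum():  # stay in the identifier state
                j += 1
            word = src[i:j]                    # maximal munch, then classify
            tokens.append(('KW' if word in KEYWORDS else 'ID', word))
            i = j                              # no character is re-scanned
        else:
            tokens.append(('PUNCT', c))
            i += 1
    return tokens

print(lex('if a ifa'))  # [('KW', 'if'), ('ID', 'a'), ('ID', 'ifa')]
```

The keyword lookup after the munch plays the role of the "output transition": the DFA itself never needs a separate path per keyword.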
358
tokenization
What time complexity (big o) is this specific web crawler implementation?
https://cs.stackexchange.com/questions/104708/what-time-complexity-big-o-is-this-specific-web-crawler-implementation
<blockquote> <p>Note: this question was marked as a duplicate in favor of <a href="https://cs.stackexchange.com/questions/23593/is-there-a-system-behind-the-magic-of-algorithm-analysis">this question/answer</a> which attempts to provide a generic formula for translating code to mathematics. </p> <p>Unfortunately I didn't find that response useful as I don't have a good understanding of maths and so all I see is complex symbols that don't mean anything to me. I require a more lay person explanation of how to break down an algorithm into time complexity.</p> </blockquote> <p>I've built a simple 'web crawler' and was interested to know what the time complexity of the core 'processing' logic was.</p> <p>Here is a diagram of the architecture:</p> <p><img src="https://raw.githubusercontent.com/Integralist/go-web-crawler/master/architecture.png" alt=""></p> <p><a href="https://github.com/integralist/go-web-crawler" rel="nofollow noreferrer">https://github.com/integralist/go-web-crawler</a></p> <p>Specifically the algorithm portion I'm interested in is the <code>Crawler</code> which:</p> <ol> <li>defines a worker pool size</li> <li>pushes tasks into a channel</li> <li>processes tasks concurrently within the boundary of the pool</li> </ol> <p>In the crawler code we:</p> <ul> <li>accept a list of n items</li> <li>each item in the list has a nested list of x items</li> <li>we look at each item and decide whether to process the item or not</li> </ul> <blockquote> <p>Note the <code>Parser</code> and <code>Mapper</code> portions of the code are all the same underlying design, but <em>how</em> the 'task' is processed is slightly different and so although I could imagine the time complexity for those possibly being different depending on what those processing steps are, the principle is still the same: we're still looping over all items and deciding on something to do.</p> </blockquote> <p>What is this BigO time complexity?</p> <p>Initially it might seem that this is just 
<code>O(n)</code> as we're visiting each item in the list as well as each item in the nested list.</p> <p>Is that it? or am I missing something else entirely obvious.</p> <p>I don't think it's <code>O(n Log n)</code> as it's not reducing the number of looping operations in the nested lists. Similarly for <code>O(n*n)</code> as the nested loop isn't necessarily the same length as the parent list. I also don't think it's <code>O(2^n)</code> as the nested lists aren't growing exponentially (they're just an unknown number of items).</p> <hr> <h2>Update</h2> <p>I was asked to provide a precise definition of the algorithm, and so I'll attempt to do that below by way of a bullet list along with some pseudo code...</p> <ul> <li>loop over collection (collection: array of structs) <ul> <li>pass each struct within the collection to <code>crawl</code> function</li> </ul></li> </ul> <p>breakdown of <code>crawl</code> function...</p> <ul> <li>get length of collectionItem Urls (collectionItem: struct with field containing urls)</li> <li>create worker pool of set size, or size of collectionItem.Urls (if smaller)</li> <li>each worker stays open (blocked) waiting for a task to process</li> <li>when a task is received: <ul> <li>make a http network request (task is a url)</li> <li>track the task (a url) in a hash table</li> <li>append network response in an array</li> </ul></li> <li>loop over collectionItem.Urls <ul> <li>if url already tracked in hash table: <ul> <li>do nothing</li> </ul></li> <li>else: <ul> <li>push url into task queue</li> </ul></li> </ul></li> </ul> <pre><code>for item in collection { crawl(item) } func crawl(collectionItem) { collectionLength = length(collectionItem.Urls) if (collectionLength &lt; 1) { return } poolSize = 20 if (collectionLength &lt; poolSize) { poolSize = collectionLength } for i=0; i&lt;poolSize; i++ { // we spin up multiple threads... 
waitForATask( // this function is executed concurrently on individual thread/process // it contains the following logic... for t in tasks { // tasks is a blocking channel // so as tasks are pushed in the channel // it means the tasks are distributed across the pool page = netRequestFor(t) trackInHashTable(t) appendToArray(page) // this is ultimately what process returns } ) } for url in collectionItem.Urls { if not trackedAlready(url) { pushTaskIntoQueue(url) // queue is the 'tasks' variable we loop over within our threads } } } </code></pre> <p><em>Additionally!</em> the steps described above (i.e. looping a collection, and then passing each item to a <code>crawl</code> function) is something that will be recursively executed. The actual implementation is...</p> <pre><code>func process(mappedPages []mapper.Page) { for _, page := range mappedPages { crawledPages := crawler.Crawl(page) // this is what I described above tokenizedNestedPages := parser.ParseCollection(crawledPages) mappedNestedPages := mapper.MapCollection(tokenizedNestedPages) for _, mnp := range mappedNestedPages { results = append(results, mnp) } process(mappedNestedPages) } } </code></pre> <p>...The idea being that the top level loop will not only pass each item to the <code>crawl</code> function, but that the results (a list of requested pages) will itself then be passed to a <code>parser</code> function (tokenizing the page, and which is designed with an algorithm exactly the same as the <code>crawl</code> function), then that tokenization result is passed to a <code>mapper</code> (again, the mapper is designed the same as the <code>crawl</code>).</p> <p>So I guess when considering the time complexity, we would need to take into account the <em>whole</em> algorithm (not just the <code>crawl</code> function segment).</p>
<p>I didn't read your code, but based on the overview, the running time to handle a list of size <span class="math-container">$n$</span> containing nested sublists of size <span class="math-container">$x$</span> appears to be <span class="math-container">$O(nx)$</span>. Here's my reasoning:</p> <ul> <li><p>You iterate over each list (there are <span class="math-container">$n$</span> of them), and each contains <span class="math-container">$x$</span> items, so you iterate over <span class="math-container">$nx$</span> items in all.</p></li> <li><p>The amount of work per item appears to be constant, i.e., <span class="math-container">$O(1)$</span> (it doesn't appear to depend on <span class="math-container">$n$</span> or <span class="math-container">$x$</span>; here I'm using that the running time for insert or lookup operations in a hashtable is <span class="math-container">$O(1)$</span>, in practice).</p></li> <li><p>Multiplying those together, we see that the total time to handle that list is <span class="math-container">$O(nx)$</span>.</p></li> </ul> <p>That said, beware that big-O analysis is of dubious utility for systems code like yours. I'm pretty skeptical whether this is useful at all. Big-O analysis ignores the constant factors, but for systems code, we often care about the constant factors a lot.</p> <p>And these constant factors can be enormous. Big-O analysis treats executing a single ADD instruction the same as fetching a web page at some URL; they both take <span class="math-container">$O(1)$</span> time. However, in practice, the latter takes far longer: a single instruction might take 1ns, and fetching a URL might take 100ms. That's a difference of <span class="math-container">$100,000,000\times$</span>, i.e., 8 orders of magnitude. 
Counting both operations equally, as big-O analysis does, is simply not helpful for understanding the performance of the system in practice.</p> <p>So, I would exercise caution in your use of big-O analysis (i.e., asymptotic running time) in systems like these.</p>
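The counting argument in the first two bullets can be made concrete: if the per-item work is constant, the iteration count is exactly $n \cdot x$ no matter what that constant cost is. A toy sketch (the `ops += 1` stands in for whatever constant-cost work, a hash insert or a network fetch, is done per item):

```python
def handle(n_lists, sublist_size):
    """Count per-item operations for n lists of x nested items each."""
    ops = 0
    for _ in range(n_lists):              # n outer items
        for _ in range(sublist_size):     # x nested items each
            ops += 1                      # the O(1) per-item work
    return ops

print(handle(10, 7))  # 70, i.e. n * x
```

Big-O keeps only this count; the huge constant that multiplies it (1ns vs 100ms per op) is exactly what the analysis throws away.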
359
tokenization
a problem with Suzuki-Kasami mutex algorithm
https://cs.stackexchange.com/questions/89680/a-problem-with-suzuki-kasami-mutex-algorithm
<p>I have some sort of a misunderstanding regarding the Suzuki-Kasami distributed mutex algorithm. I am writing my question here because I failed to gain access to the original paper.</p> <p>I will call processes sites for convenience.</p> <p><strong>My question</strong></p> <p>Why does the token holder $j$ release its token only when $RN_j[i]==LN[i]+1$ and not when $RN_j[i]&gt;LN[i]$?</p> <p>I mean, since we cannot assume anything about the quality of communication, maybe site $i$ had the time to increment its sequence number <strong>twice</strong>, so that the token holder got a message with difference $2$ from $LN[i]$. In this case the token holder will not release its token and starvation will occur.</p>
360
tokenization
Prove maximum score is achieved by being greedy
https://cs.stackexchange.com/questions/167438/prove-maximum-score-is-achieved-by-being-greedy
<p>I have a list of tokens <code>T</code>, of length <code>n</code>. Initially I have power <code>p</code> and a <code>score</code> of zero. In one move, I can play <em>any</em> token <code>t</code> either face up or face down.</p> <p>(a) I can play <code>t</code> face up, provided that I have at least as much power as <code>t</code> (i.e., <code>t</code> &lt;= <code>p</code>). Playing this move scores <code>1</code> point (<code>score+=1</code>), but decreases power <code>p</code> by <code>t</code> (i.e., <code>p-=t</code>).</p> <p>or</p> <p>(b) I can play <code>t</code> face down, provided that I have a positive score (i.e., <code>score</code> &gt; <code>0</code>). Playing this decreases score by <code>1</code> (i.e., <code>score-=1</code>), but increases power <code>p</code> by <code>t</code> (i.e., <code>p+=t</code>).</p> <p>A token cannot be played more than once. Not all tokens need to be played.</p> <p>I want to maximize the score after playing <em>any</em> number of tokens.</p> <p>The solution is to sort the tokens and use the smallest possible token as long as there's enough power. If we don't have enough power to play the smallest possible token <code>t</code> face up at any point, then we play the largest possible token, face down.</p> <p>This approach conserves as much power as possible while playing face up, and gains the maximum possible power while playing face down. It sounds reasonable when I write this down in English.</p> <p>But why does this work? This appears to be a greedy method. And I think I have to use some form of contradiction to prove correctness.</p> <p>Here is what I have tried so far:</p> <p>An optimal solution exists. It achieves maximum score <code>M</code>.</p> <p><code>T</code> was <code>{t_0, t_1, ... t_n-1}</code>, but after sorting <code>T</code> becomes: <code>{t_0', t_1', ... 
t_n-1'}</code>.</p> <p>Let's assume the optimal solution plays tokens in the following sequence <code>S_opt</code>: <code>{face_up(t_0'), face_up(t_1'), face_down(t_n-1'), ...}</code>.</p> <p>The first move has to be <code>face_up(t_0')</code> in the optimal solution (provided <code>t_0'</code> was less than or equal to <code>p</code>) as we start with zero <code>score</code>. Length of <code>S_opt</code> may not be equal to <code>n</code>.</p> <p>If I have to contradict this, I have to assume a better permutation of <code>{t_0', t_1', ... t_n-1'}</code> exists, and that it will achieve a score that is more than <code>M</code>. Let me call this sequence of moves, <code>S_contradiction</code></p> <p>I'm going to consider the first token in <code>S_opt</code> that is not the same as <code>S_contradiction</code>. I'm going to call this token in <code>S_opt</code> as <code>S_opt_t_f</code> and in <code>S_contradiction</code> as <code>S_contradiction_t_f</code>.</p> <p>Now, there are two cases:</p> <p>case 1: <code>S_opt_t_f</code> is greater than <code>S_contradiction_t_f</code></p> <p>case 2: <code>S_contradiction_t_f</code> is greater than <code>S_opt_t_f</code></p> <p>I'm not sure how to proceed further with this approach.</p> <p>I also thought about 'trading' power for score. That is, buying the maximum possible power for the least expense of score, or by spending the least possible power for gaining the maximum possible score. I do not know how to construct a proper proof with this approach.</p> <p>Can you please help me with the proof ? Thanks!</p> <p>This is a question from LeetCode: <a href="https://leetcode.com/problems/bag-of-tokens/description/" rel="nofollow noreferrer">948. Bag of Tokens</a>.</p>
<p>If <span class="math-container">$(t_0, t_1, …, t_{n-1})$</span> is the sequence of tokens played by your greedy solution, you can show by induction that for any <span class="math-container">$k\in \{0, …, n\}$</span>, the sequence <span class="math-container">$(t_0, …, t_{k-1})$</span> is the sequence of <span class="math-container">$k$</span> tokens that grants the greatest score AND the greatest power.</p> <p>There is nothing to prove for the initialization (because there is only one <span class="math-container">$0$</span>-length sequence). For the induction:</p> <ul> <li>either <span class="math-container">$t_k$</span> is played face up, and it is the lowest value among all remaining tokens (otherwise you could switch it with a token previously played face up and increase the power);</li> <li>or <span class="math-container">$t_k$</span> is played face down, and it is the greatest value among all remaining tokens (otherwise you could switch it with a token previously played face down and increase the power).</li> </ul>
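For reference, the greedy strategy whose optimality this induction establishes can be sketched as a two-pointer pass; the implementation details below (names, structure) are my own, not part of the original answer:

```python
def max_score(tokens, power):
    """Greedy: play the cheapest token face up while power allows;
    otherwise trade one point for the most expensive remaining token."""
    tokens = sorted(tokens)
    lo, hi = 0, len(tokens) - 1
    score = best = 0
    while lo <= hi:
        if power >= tokens[lo]:      # play smallest face up
            power -= tokens[lo]
            lo += 1
            score += 1
            best = max(best, score)
        elif score > 0:              # play largest face down
            power += tokens[hi]
            hi -= 1
            score -= 1
        else:
            break
    return best

print(max_score([100, 200, 300, 400], 200))  # 2
```

Tracking `best` separately matters: the final trade of a point for power may leave the running `score` below the maximum ever reached.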
361
tokenization
Is there a way to connect a deep language model output to input?
https://cs.stackexchange.com/questions/115948/is-there-a-way-to-connect-a-deep-language-model-output-to-input
<p>In models like GPT-2, TXL and Grover, is there a good way to know which input weights (tokens) resulted in each token of the output? </p>
362
tokenization
An Arithmetic Encoding&#39;s length being ambiguous?
https://cs.stackexchange.com/questions/157561/an-arithmetic-encodings-length-being-ambiguous
<p>Say there are two tokens, A and B. A has probability weight 0.99 (and B has 0.01). If I want to encode the sequence &quot;AAA&quot;, wouldn't the binary encoding just be &quot;0&quot;? And wouldn't that be the same for encoding &quot;AA&quot;, or &quot;AAAA&quot;, or any number of A's? How is the decoder supposed to know how many A's were sent when all it receives is the message &quot;0&quot;? Or is it absolutely necessary to have/add an EOF token? But in that case, the EOF token will have to be given some weight, which will detract from the weight of the other more meaningful tokens, right?</p>
<p>Yes - you need to be able to indicate EOF (and EOF is obviously meaningful). Note that you may have a natural message boundary in terms of decoding; for example, if you are writing to a file then the file has a size and thus the information can be encoded in a trailer (e.g., how many of the final decoded characters should be discarded). If there's no natural boundary (e.g., this is one message of many in a stream), then you either need to prefix the message with the encoded length, or yes, reserve probabilities for the EOF token (note that you can adjust the probabilities to be dynamic in the number of characters previously decoded, if this makes sense).</p>
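To see the ambiguity concretely, here is a small sketch (my own illustration, not a full arithmetic coder) that computes the coding intervals: every short run of A's has an interval containing the value named by the single bit "0", so the decoder cannot recover the length without a length prefix or an EOF symbol.

```python
def interval(msg, p_a=0.99):
    """Arithmetic-coding interval for a message over {A, B} with P(A) = p_a."""
    lo, hi = 0.0, 1.0
    for ch in msg:
        width = hi - lo
        if ch == 'A':
            hi = lo + width * p_a   # A takes the low p_a fraction of the interval
        else:
            lo = lo + width * p_a   # B takes the remaining high fraction
    return lo, hi

# The one-bit codeword "0" names some value in [0, 0.5). The interval for a
# run of k A's is [0, 0.99**k), which still covers all of [0, 0.5) for every
# k up to 68, so "0" decodes ambiguously without framing.
for k in (1, 2, 3, 4):
    lo, hi = interval('A' * k)
    print(k, lo <= 0.25 < hi)   # all True
```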
363
tokenization
How to prove LastToken problem is NP-complete
https://cs.stackexchange.com/questions/118196/how-to-prove-lasttoken-problem-is-np-complete
<p>Consider the following game played on a graph <span class="math-container">$G$</span> where each node can hold an arbitrary number of tokens. A move consists of removing two tokens from one node (that has at least two tokens) and adding one token to some neighboring node. The LastToken problem asks whether, given a graph <span class="math-container">$G$</span> and an initial number of tokens <span class="math-container">$t(v) \ge 0$</span> for each vertex <span class="math-container">$v$</span>, there is a sequence of moves that results in only one token being left in <span class="math-container">$G$</span>. Prove that LastToken is NP-complete.</p> <p>I've recently been learning how to prove NP-completeness, but I'm having trouble understanding the concept of NP. As far as I know, to prove a problem is NP-complete, we first need to prove it's in NP and then choose an NP-complete problem that can be reduced to it. I'm stuck on which NP-complete problem to reduce to my problem. As I interpret it, this is a sequencing problem, and I'm guessing I can reduce either Hamiltonian Cycle or Travelling Salesman to my problem, but I don't see any connection between them so far. How should I get started on a good approach?</p>
<p>Here is a reduction from the Hamiltonian path. Given a graph <span class="math-container">$G=(V,E)$</span>, add a vertex <span class="math-container">$v_0$</span> to the graph and connect it to all vertices in the graph. Set <span class="math-container">$t(v_0)=2$</span>, and set <span class="math-container">$t(u) = 1$</span> for all <span class="math-container">$u \neq v_0$</span>.</p> <p>Claim. The previous reduction is correct. Try to prove it formally as an exercise.</p> <p><strong>Edit.</strong> Here is a brief proof of correctness. We have to prove that the created instance is a yes-instance of your problem if and only if the given instance is a yes-instance of the Hamiltonian path problem. First, let <span class="math-container">$G$</span> be a yes-instance of the Hamiltonian path problem, and let <span class="math-container">$v_1, \dots v_n$</span> be a Hamiltonian path in the graph. We describe a winning strategy for the last-token game, where in step <span class="math-container">$i$</span> we take two tokens from the vertex <span class="math-container">$v_{i-1}$</span> and add one to the vertex <span class="math-container">$v_{i}$</span>. It is easy to prove by induction that at the start of step <span class="math-container">$i+1$</span> all vertices <span class="math-container">$v_0, \dots v_{i-1}$</span> have no tokens in them, the vertex <span class="math-container">$v_i$</span> has two, and all vertices <span class="math-container">$v_{i+1}, \dots v_n$</span> have exactly one token. In the final step we take the two tokens left in <span class="math-container">$v_n$</span>, put one token in <span class="math-container">$v_0$</span>, and we are done. 
Note that <span class="math-container">$v_i$</span> always shares an edge with <span class="math-container">$v_{i+1}$</span> since they are consecutive in a Hamiltonian path.</p> <p>On the other hand, assuming the reduced instance is a yes-instance, we prove that the given graph has a Hamiltonian path. Since all but one vertex have exactly one token, we have only one choice for the first vertex. In each step we take two tokens from a vertex and add a token to another one. It is easy to prove inductively that in each step all but at most one vertex have at most one token in them; the one vertex with two tokens is the only possible choice for the next step. Using this claim, it is also easy to prove that no vertex can be chosen twice. Hence, if we were able to choose <span class="math-container">$n+1$</span> vertices, we must have a permutation of the <span class="math-container">$n+1$</span> vertices (the <span class="math-container">$n$</span> original vertices together with <span class="math-container">$v_0$</span>) starting at <span class="math-container">$v_0$</span>, and this permutation must correspond to a Hamiltonian path, since we always have an edge between two consecutive choices in the game.</p>
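To make the reduction concrete, here is a small brute-force sketch (the construction helper and all names are my own) that builds the reduced instance and exhaustively searches the move tree; it agrees with the claim on two small graphs:

```python
def last_token_solvable(adj, tokens):
    """Exhaustive search: can some move sequence leave exactly one token?
    A move removes two tokens from a vertex and adds one to a neighbor."""
    def dfs(state, seen):
        if sum(state) == 1:
            return True
        for v, t in enumerate(state):
            if t >= 2:
                for u in adj[v]:
                    nxt = list(state)
                    nxt[v] -= 2
                    nxt[u] += 1
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt)
                        if dfs(nxt, seen):
                            return True
        return False
    return dfs(tuple(tokens), set())

def reduce_ham_path(n, edges):
    """The reduction above: add v0 (index n) adjacent to every vertex,
    give v0 two tokens and every original vertex one."""
    adj = [set() for _ in range(n + 1)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    for u in range(n):
        adj[n].add(u)
        adj[u].add(n)
    return [sorted(s) for s in adj], [1] * n + [2]

# The path 0-1-2-3 has a Hamiltonian path, so its reduced instance is solvable;
# the star K_{1,3} (center 0) has none, and its reduced instance is not.
adj, tok = reduce_ham_path(4, [(0, 1), (1, 2), (2, 3)])
print(last_token_solvable(adj, tok))   # True
adj, tok = reduce_ham_path(4, [(0, 1), (0, 2), (0, 3)])
print(last_token_solvable(adj, tok))   # False
```

The search terminates because every move decreases the total token count by one.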
364
tokenization
The maximum &amp; minimum number of sources that can be multiplexed so that no data loss occurs on TOKEN BUCKET?
https://cs.stackexchange.com/questions/35841/the-maximum-minimum-number-of-sources-that-can-be-multiplexed-so-that-no-data
<p>A link of capacity 100 Mbps is carrying traffic from a number of sources. Each source generates an on-off traffic stream; when the source is on, the rate of traffic is 10 Mbps, and when the source is off, the rate of traffic is zero. The duty cycle, which is the ratio of on-time to off-time, is 1 : 2. When there is no buffer at the link, the minimum number of sources that can be multiplexed on the link so that link capacity is not wasted and no data loss occurs is S1. Assuming that all sources are synchronized and that the link is provided with a large buffer, the maximum number of sources that can be multiplexed so that no data loss occurs is S2. The values of S1 and S2 are, respectively,</p> <p>A) 10 and 30</p> <p>B) 12 and 25</p> <p>C) 5 and 33</p> <p>D) 15 and 22</p> <p>I have solved part (i) of the problem to find the minimum number of stations and got 10: as there is no buffer, for no data loss I equated the incoming traffic rate from S1 stations to the maximum capacity of the channel. In the 2nd part I'm having trouble seeing what happens when a buffer is added. Please help, and correct me if the 1st part is not right.</p>
<p>A is correct. Since the first part is already described, I will explain the derivation of S2 only. Let there be N stations. When a buffer is added, data can persist even while a source is off, so considering the duty cycle, the actual data transmitted per unit time over the whole cycle is (1/3)*10 Mbps [during 1 unit of on-time data is transmitted, and during the other 2 units of off-time it waits in the buffer, so 10 Mbps worth of traffic is spread over 3 units of time]. The same holds for the other N-1 stations. Adding them all up and equating to the maximum capacity of the channel: N*(1/3)*10 = 100, so N = 30.</p>
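This arithmetic can be sanity-checked with exact fractions (a quick check of my own, not part of the original solution); `Fraction` avoids the floating-point rounding that would make `100 / (10 * (1/3))` come out just under 30:

```python
from fractions import Fraction

link_capacity = 100              # Mbps
peak_rate = 10                   # Mbps while a source is on
on_fraction = Fraction(1, 3)     # on:off duty cycle of 1:2 -> on 1/3 of the time

s1 = link_capacity // peak_rate                   # no buffer: peak rates must fit
s2 = link_capacity // (peak_rate * on_fraction)   # large buffer: average rates must fit
print(s1, s2)   # 10 30
```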
365
tokenization
Closure of regular languages under &quot;inverse second half&quot;
https://cs.stackexchange.com/questions/116780/closure-of-regular-languages-under-inverse-second-half
<blockquote> <p><strong>Theorem.</strong> Show that if <span class="math-container">$L$</span> is regular, then so is <span class="math-container">$$ \varphi(L)=\left\{w \in \Sigma^{*} \mid \text {there exists an } \alpha \in \Sigma^{*} \text { with }|\alpha|=|w| \text { and } \alpha w \in L\right\} $$</span> <strong>Proof.</strong> Let <span class="math-container">$L$</span> be a regular language. Because <span class="math-container">$L$</span> is regular, there exists a DFA <span class="math-container">$M$</span> that accepts it. We construct out of <span class="math-container">$M$</span> a <span class="math-container">$\lambda$</span>NFA <span class="math-container">$M'$</span> whose language is <span class="math-container">$\varphi(L)$</span>.</p> <p>We can think of a computation of <span class="math-container">$M$</span> as moving a token across the states of <span class="math-container">$M$</span>. The machine <span class="math-container">$M'$</span> will reuse the states of <span class="math-container">$M$</span>, but will use three tokens: white, red and blue. Initially, the blue token is put on <span class="math-container">$q_0$</span> (the initial state of <span class="math-container">$M$</span>), and the white and red tokens are put on a nondeterministically guessed state (both tokens on the same state).</p> <ol> <li>Describe the transitions of <span class="math-container">$M'$</span>.</li> <li>Describe the acceptance condition of <span class="math-container">$M'$</span>.</li> </ol> </blockquote> <p>Could someone please help me with that? I have researched regular expressions, DFAs, NFAs, and conversions between all of these, but I still don't know how to solve this question.</p>
<p>The white token is a placeholder, which just remembers the original state of the red token (of course, the roles of these two tokens can be switched); this is the state at which <span class="math-container">$M$</span> would be after reading <span class="math-container">$\alpha$</span>. The red token corresponds to the <span class="math-container">$w$</span> part of the word, and the blue token to its <span class="math-container">$\alpha$</span> part.</p> <p>When reading a symbol, the red token advances according to the rules of <span class="math-container">$M$</span>, the white token stays put, and the blue token guesses a symbol and advances according to the rules of <span class="math-container">$M$</span> (in effect, it guesses one symbol of <span class="math-container">$\alpha$</span>).</p> <p>The machine accepts if the blue and white tokens are at the same state (so we guessed correctly the state at which <span class="math-container">$M$</span> is after reading <span class="math-container">$\alpha$</span>), and the red token is at an accepting state of <span class="math-container">$M$</span> (so <span class="math-container">$\alpha w \in L$</span>).</p>
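If it helps, the construction can be prototyped directly without building the full product NFA (the helper function and the example DFA below are my own): the "blue" set tracks every state reachable from the initial state by |w| guessed symbols, and each candidate midpoint plays the role of the white/red starting state.

```python
def phi_membership(w, dfa):
    """Decide w ∈ φ(L): is there α with |α| = |w| and αw ∈ L?
    dfa = (states, alphabet, delta, q0, accepting), delta[(q, a)] -> q'."""
    states, alphabet, delta, q0, accepting = dfa
    # Blue token: states reachable from q0 by exactly |w| guessed symbols.
    blue = {q0}
    for _ in w:
        blue = {delta[(q, a)] for q in blue for a in alphabet}
    # White/red tokens: guess the midpoint, then run the red token on w.
    for mid in blue:               # acceptance requires white == blue
        q = mid
        for a in w:
            q = delta[(q, a)]
        if q in accepting:
            return True
    return False

# Example: L = strings over {a, b} ending in "ab"
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
dfa = ({0, 1, 2}, {'a', 'b'}, delta, 0, {2})
print(phi_membership('ab', dfa))   # True: e.g. α = "aa" gives "aaab" ∈ L
print(phi_membership('aa', dfa))   # False: no αaa can end in "ab"
```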
366
tokenization
How to generate, validate, and invalidate a set/list of numbers in O(1) time and space?
https://cs.stackexchange.com/questions/124714/how-to-generate-validate-and-invalidate-a-set-list-of-numbers-in-o1-time-and
<p>Imagine my server is generating &quot;tokens&quot; of some sort for a client on a regular basis. When a client asks for a token, the server responds with a new value (and any other supplemental information it wants to, like a &quot;witness&quot;). Later, the client will submit the token (and optionally the witness) and the server needs to be able to quickly determine if it is a token value that it (the server) has previously issued. The token does not need to be &quot;secure&quot;, so a trivially forgeable token is okay. Ideally, the server can:</p> <ul> <li>Generate the tokens in O(1) time and space</li> <li>Validate that it issued the tokens in O(1) time and space</li> </ul> <p>The most obvious solution to this problem as stated would be to simply use an incrementing counter as the token. The server stores a single integer and increments it when asked for a new value. Determining if the token has been issued by this server means simply ensuring that the token submitted by the client is &lt;= than the server's currently stored value.</p> <p>The twist: The server might crash and get restored from a backup, meaning that its counter value might regress to a previous value. (We'll call this the regression value). If this happens, any tokens previously issued that are less than or equal to its regression value should be considered valid, but any tokens <em>previously issued</em> with a value higher than its regression value should be considered invalid since they were not generated by &quot;this&quot; instance of the server. In short, we need to detect a &quot;fork&quot; in our token issuing.</p> <p>For example: Imagine I issued tokens T0 to T20. Then, I restore the server to a backup right after it had issued T10. I want the server to continue to validate/recognize T0-T10, but no longer recognize T10-T20. 
Furthermore, I want to make sure that it never re-issues tokens that are identical to the previously issued T11-T20.</p> <p>The restored server doesn't know what the previously highest (MAX) issued token was. (If it did, it could &quot;mark&quot; any token values as invalid if they had a value between its current value and its max value.)</p> <p>We also need to avoid the restored server issuing the same token its previous incarnation did. The integer counter scheme would not be capable of such a guarantee since it is not sure what the previous maximum token issued was. Thus, it wouldn't know what value to safely increment its counter to. One way to solve this would be to have a separate &quot;restoration counter&quot; to note the number of times the server has been restored. But presumably, if we were capable of saving this separate state somewhere, we could just store the previously maximum issued counter as well.</p> <p>Another way to solve this problem would be to use timestamps as the token. Assuming an accurate clock, the restored server would be guaranteed to never issue the same value that its predecessor did. However, if we use a timestamp, the server no longer knows which <em>specific</em> timestamp tokens it ever issued in the past, unless it keeps track of a full list of them, bringing the time and space complexity well above O(1). (It could keep a list of the times it was restored from backup and reference this list when validating tokens, but this adds complexity to time and space for validation).</p> <p>However, if there were a way to compactly store a list of N timestamps and an easy way to test a given timestamp for set membership, the problem would be easy to solve.</p> <p><strong>Options I have considered:</strong></p> <p>Option 1: Bloom filter: The server could use a bloom filter and add each number issued to its filter. 
Unfortunately, the set membership test would be probabilistic, but I could make the filter big enough to reduce my probability of a false positive quite low. However, it seems like the addition of information <em>other</em> than just the token value in the form of witness changes allows us to offload additional information to the clients that later ask for verification, and would allow us to do better somehow.</p> <p>Option 2: A cryptographic accumulator, like an RSA accumulator. If each of the numbers I was generating were prime, I believe I could use an RSA accumulator to store a single accumulator value of constant size S, where S is significantly fewer bytes than storing a list of N. Each time I add a new prime to my set, I add it to the accumulator, then I generate its witness as well and ship both the number and the witness to my client. Later, the client would submit the integer it is testing, and the witness and I would be able to quickly determine if the number being submitted is in fact a member of my set or not. Possible problems: I need to be able to hash to a prime number deterministically. (Not the end of the world, but adds complexity.) I think I have to update my witness values as I add new values to the accumulator which adds time to the verification step. Lastly: My understanding of accumulators is rudimentary, and I'm not sure how large the accumulator needs to be in relation to the set being accumulated.</p> <p>Option 3: I'm overthinking this tremendously.</p> <p>** Related problems: **</p> <ul> <li><p>This seems a lot like having a lot of hash values in a Merkle tree or hash chain (blockchain), and wanting to be able to determine if a particular hash value were ever seen in the chain, <em>without</em> having to store every value that had been seen in the chain. 
I'm hopeful that with the additional concept of generating a &quot;witness&quot; value of some sort to be stored along with the number, the server can make a membership determination with much less overhead than having to store all of the numbers. (Coda uses techniques to keep its last chain in its blockchain deterministic in size and offloads the full corresponding Merkle tree to clients. <a href="https://eprint.iacr.org/2020/352.pdf" rel="nofollow noreferrer">https://eprint.iacr.org/2020/352.pdf</a>)</p> </li> <li><p>This feels similar to a vector commitment accumulator, where the numbers being committed to are in a given order, but I think committing to a set is simpler than committing to a vector.</p> </li> <li><p>I <em>think</em> that this is the same problem as labeling a tree that looks like the following:</p> <pre><code> A -&gt; B -&gt; C -&gt; D -&gt; E \ -&gt; F -&gt; G \ -&gt; H </code></pre> </li> </ul> <p>Given any two labels (L1, L2), and <em>only</em> two labels, determine if L1 is an ancestor of L2. There are labeling schemes that allow for this determination in constant time in significantly less than N bits per label, but I am not sure that it perfectly maps to this.</p>
<p>The easiest secure method of doing this is making a token an (id, signature) pair, where you randomly generate a fixed-size id (e.g. 128 bits) using a method that avoids collisions (a hash of a counter plus timestamp plus system RNG works in almost every scenario). The signature could then be something as simple as hashing the concatenation of the id with some secret stored on the server.</p> <p>To verify a token, simply concatenate the id with your secret, hash it, and check whether it equals the signature. Unless the hash function is insecure or someone knows the secret on your server, they cannot fake a token.</p>
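A minimal sketch of this scheme; note that I use HMAC-SHA256 rather than a bare hash of the concatenation (HMAC avoids length-extension issues), and all names here are illustrative:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # server-side secret; must survive restores

def issue_token():
    token_id = secrets.token_hex(16)   # 128-bit random id
    sig = hmac.new(SECRET, token_id.encode(), hashlib.sha256).hexdigest()
    return f"{token_id}.{sig}"

def verify_token(token):
    try:
        token_id, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, token_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)   # constant-time comparison

t = issue_token()
print(verify_token(t))                                  # True
print(verify_token(t.split(".")[0] + "." + "0" * 64))   # False: forged signature
```

Note this addresses the restore problem only if the secret itself is stored outside the state that gets rolled back (or rotated on every restore, which invalidates the fork's tokens).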
367
tokenization
Optimal common substring elimination
https://cs.stackexchange.com/questions/170609/optimal-common-substring-elimination
<p>The problem:</p> <p>You are given a list of strings as an input. You may perform any number of &quot;token substitution&quot; operations. A token substitution is performed by: removing any substring, replacing instances with a new token, and adding that substring to the list. The output of the algorithm is the modified list.</p> <p>For example, for the input [&quot;abc&quot;, &quot;abc&quot;, &quot;abcdebcde&quot;], you could replace all instances of &quot;abc&quot; with &quot;A&quot;. This leads to the final output of [&quot;A&quot;, &quot;A&quot;, &quot;Adebcde&quot;, &quot;abc&quot;].</p> <p>The goal is to find the optimal output that:</p> <ol> <li>Minimizes the sum of the lengths of the output strings</li> <li>As a tiebreaker, minimizes the number of new tokens.</li> </ol> <p>The output above has a length sum of 12 and 1 new token.</p> <p>The main challenge is to decide which substrings to tokenize. Tokenizing substrings of size 1 or that only occur once obviously does not help. Ignoring those, the above example has the following repeated substrings of length &gt;= 2:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>String</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>ab</td> <td>3</td> </tr> <tr> <td>abc</td> <td>3</td> </tr> <tr> <td>bc</td> <td>4</td> </tr> <tr> <td>bcd</td> <td>2</td> </tr> <tr> <td>bcde</td> <td>2</td> </tr> <tr> <td>cd</td> <td>2</td> </tr> <tr> <td>cde</td> <td>2</td> </tr> <tr> <td>de</td> <td>2</td> </tr> </tbody> </table></div> <p>Some obvious algorithms are &quot;Pick the most common substring&quot; and &quot;Pick the largest substring&quot;. Those both perform worse than choosing &quot;abc&quot; -&gt; &quot;A&quot; for the example input.</p> <p>Another algorithm is to greedily choose the substring that reduces the length sum the most, and repeat until the string cannot be shortened any more. Let <code>l</code> be the length of the substring and let <code>c</code> be the count. 
Tokenizing a substring removes <code>l*c</code> characters, adds <code>c</code> replacement characters and adds another string of length <code>l</code>, leading to a total &quot;score&quot; of <code>l*c - (l+c)</code>. This correctly chooses &quot;abc&quot;, the optimal choice for the example.</p> <p>However, I'm having trouble convincing myself this is always optimal. My concern is that when a substring is removed, any overlapping substrings can no longer be removed. That is, if I remove &quot;ab&quot; from &quot;abc&quot;, then I can no longer remove &quot;bc&quot;. So if the 2nd and 3rd highest-scoring substrings both overlap with the 1st but not with each other, perhaps it would be optimal to choose the 2nd and 3rd rather than the 1st. Hopefully that makes sense.</p> <p>This seems related to dictionary compression techniques, but I'm not sure it's quite the same.</p> <p>So, what is the optimal algorithm? And is there a proof for it?</p> <p>A slight variation on the problem that would be interesting to see as well: Instead of minimizing the number of new tokens as a tiebreaker, each new token adds a fixed cost C to the length sum. That or any similar problem would be interesting to see a solution for.</p>
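For what it's worth, one step of the greedy scoring described in the question can be sketched as follows (this is my own illustration; it counts non-overlapping occurrences, which is what a simultaneous replacement achieves):

```python
def best_substitution(strings):
    """One greedy step: find the substring (length >= 2, appearing >= 2 times,
    non-overlapping count) that maximizes the score l*c - (l + c)."""
    best, best_score = None, 0
    candidates = {s[i:j] for s in strings
                  for i in range(len(s)) for j in range(i + 2, len(s) + 1)}
    for sub in candidates:
        c = sum(s.count(sub) for s in strings)   # non-overlapping occurrences
        score = len(sub) * c - (len(sub) + c)
        if c >= 2 and score > best_score:
            best, best_score = sub, score
    return best, best_score

print(best_substitution(["abc", "abc", "abcdebcde"]))   # ('abc', 3)
```

A result of `(None, 0)` means no single substitution can shorten the list, which is the greedy loop's stopping condition; this sketches one step only and says nothing about global optimality.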
368
tokenization
Is ambiguity in Programming Language always bad?
https://cs.stackexchange.com/questions/105388/is-ambiguity-in-programming-language-always-bad
<p>Suppose we have a token sequence and our language allows the compiler to build two different derivation trees for it. However, it can happen that there exist two semantic ways to interpret our tokens. So ambiguity is not a problem in this case. Am I correct?</p>
<p>There are computer languages with ambiguous grammars. They decide how to compile ambiguous code by applying rules outside the grammar (there’s no law saying that a language has to be defined by a grammar only). The problem is if your <em>language</em> is ambiguous - if you have code and cannot tell how it should be compiled, that’s quite useless. </p>
369
tokenization
Can Earley parser work in parallel?
https://cs.stackexchange.com/questions/110403/can-earley-parser-work-in-parallel
<p>Since the Earley parser finds all possible application variants for a token, can it parse text in parallel, unlike the usual parsers (stack-based, etc.)? You would just need to modify the start of each parallel chunk of tokens; then, while going backwards constructing the table, you combine and validate the found rules as in the standard Earley approach. Since vector operations are done there, it should be possible to parallelize this too, and the splitting into tokens as well. So this would (theoretically; I haven't seen a project like that) give GPGPU support for Earley parsing.</p>
<p>Yes, Earley's algorithm can be parallelized, but not in the way that you are thinking.</p> <h1>Earley's Algorithm In Particular</h1> <p>You ask about Earley's algorithm <em>specifically</em>. Parallelizing the algorithm in the way you suggest is unlikely to be faster. Researchers Peter Ahrens, John Feser, and Robin Hui report:</p> <blockquote> <p>We first tried to naively parallelize the Earley algorithm by processing the Earley items in each Earley set in parallel. We found that this approach does not produce any speedup, because the dependencies between Earley items force much of the work to be performed sequentially.</p> </blockquote> <p>The reason Earley's algorithm in particular is difficult to parallelize is because of interdependencies within the computation. To answer how successfully Earley's algorithm parallelizes, we need to solve this demarcation problem: What should we call Earley's algorithm? Seth Fowler and Joshua Paul explain:</p> <blockquote> <p>There are several published algorithms (see, for example, Hill and Wayne[6]) for parallelizing a variant of the Earley algorithm, the CYK algorithm. Unfortunately, in practice CYK has a much longer running time than Earley (even though it has the same worst-case complexity of <span class="math-container">$O(n^3)$</span>), and so it is not typically used.</p> </blockquote> <p>Most researchers agree that there are few successful parallelizations of the algorithm, however. Fowler and Paul continue:</p> <blockquote> <p>For the Earley algorithm itself, there are very few parallelization methods published (though there are many optimizations—see, for example, Aycock and Horspool [1]). One such method by Chiang and Fu [3] uses a decomposition similar to the one we develop, but goes on to develop the algorithm for use on a specialized VLSI. Similarly, Sandstrom[7] develops an algorithm based on a similar decomposition....</p> </blockquote> <p>Nonetheless, there are parallel versions. 
Peter Ahrens, John Feser, and Robin Hui who we cited earlier present &quot;the LATE algorithm, which uses additional data structures to maintain information about the state of the parse so that work items may be processed in any order. This property allows the LATE algorithm to be sped up using task parallelism.&quot; The researchers claim a &quot;120x speedup over the Earley algorithm on a natural language task.&quot;</p> <h1>Parallel parsing in general - usually impractical</h1> <p>Parallel parsing techniques have been studied for decades. Other parsing algorithms are more amenable to parallelization, such as the CYK algorithm mentioned above. But parallelizing parsing is only rarely a practical strategy. Most parsing tasks you are likely to encounter are much more efficiently performed serially, that is, in the old fashioned single-threaded way.</p> <p>The reason parallel parsing algorithms are generally impractical is because parallelism has a lot of overhead that needs to be recouped by the gain in speed, while most parsing tasks can be performed incredibly quickly. (See <a href="https://en.wikipedia.org/wiki/Parallel_slowdown" rel="nofollow noreferrer">parallel slowdown</a>.) Quoting from <em>Parsing Techniques: A Practical Guide</em>, by Dick Grune and Ceriel J. H. Jacobs:</p> <blockquote> <p>From a practical point of view, parallel parsing is interesting only for problems big enough to require considerably more time than a fraction of a second on a single processor. There are three ways in which a parsing problem can be this big: the input is very long (millions of tokens); the grammar is very large (millions of rules); or there are millions of inputs to be parsed. The last problem can be solved trivially by distributing the inputs over multiple processors, where each processor processes a different input and runs an ordinary, sequential, parser. Examples of very long inputs requiring parsing are hard to find. 
All very long parsable sequences occurring in practice are likely to be regular: generating very long CF sequences would require a place to store the nesting information during sentence generation. ...</p> <p>The situation is different for parsing with very large grammars. These are found most often in linguistics. They are especially bothersome there since most linguistic applications require general CF parsing techniques, the speed of which depends on the grammar size.</p> </blockquote> <h3>References</h3> <p>Seth Fowler and Joshua Paul. &quot;<a href="https://people.eecs.berkeley.edu/%7Ekubitron/courses/cs252-S09/projects/reports/project5_report.pdf" rel="nofollow noreferrer">Parallel Parsing: The Earley and Packrat Algorithms</a>.&quot; (2009). Note that this is a student project report for an undergraduate course.</p> <p>Peter Ahrens and John Feser and Robin Hui. &quot;<a href="https://arxiv.org/abs/1807.05642" rel="nofollow noreferrer">LATE Ain’T Earley: A Faster Parallel Earley Parser</a>.&quot; (2018) The arXiv, 1807.05642.</p> <p>Grune D., Jacobs C.J.H. (2008) Parallel Parsing. <em>Parsing Techniques.</em> Monographs in Computer Science. Springer, New York, NY.</p>
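To make the intra-set dependencies concrete, here is a bare-bones sequential Earley recognizer in Python (my own illustrative sketch of the classic algorithm, not the LATE algorithm from the paper). Prediction and completion both append new items to the Earley set currently being scanned, which is exactly the dependency that defeats naive item-level parallelism:

```python
# Minimal Earley recognizer. Items are tuples (head, body, dot, origin).
# Note how predict and complete add items to sets[k] while sets[k] is
# still being processed: items in a set depend on earlier items in the
# same set, so they cannot simply be handled in parallel.

def earley_recognize(grammar, start, tokens):
    n = len(tokens)
    sets = [[] for _ in range(n + 1)]

    def add(k, item):
        if item not in sets[k]:
            sets[k].append(item)

    for body in grammar[start]:
        add(0, (start, body, 0, 0))

    for k in range(n + 1):
        i = 0
        while i < len(sets[k]):               # sets[k] may grow as we scan it
            head, body, dot, origin = sets[k][i]
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:            # predict
                    for rhs in grammar[sym]:
                        add(k, (sym, rhs, 0, k))
                elif k < n and tokens[k] == sym:  # scan
                    add(k + 1, (head, body, dot + 1, origin))
            else:                             # complete
                for h2, b2, d2, o2 in sets[origin]:
                    if d2 < len(b2) and b2[d2] == head:
                        add(k, (h2, b2, d2 + 1, o2))
            i += 1

    return any(item == (start, body, len(body), 0)
               for body in grammar[start] for item in sets[n])
```

For example, with the grammar `E -> E + n | n` (encoded as `{"E": [("E", "+", "n"), ("n",)]}`), the input `n + n` is accepted and `n +` is rejected.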
370
tokenization
How do balancers work in the context of counting and balancing networks?
https://cs.stackexchange.com/questions/49829/how-do-balancers-work-in-the-context-of-counting-and-balancing-networks
<p>I was learning what balancing networks are and at some point the art of multicore programming talks about balancers. The textbook says:</p> <blockquote> <p>A balancer is a simple switch with two input wires and two output wires, called the top and bottom wires (or sometimes the north and south wires). Tokens arrive on the balancer’s input wires at arbitrary times, and emerge on their output wires, at some later time. <em>A balancer can be viewed as a toggle: given a stream of input tokens, it sends one token to the top output wire, and the next to the bottom, and so on, effectively balancing the number of tokens between the two wires</em></p> </blockquote> <p>From the discussion above, it would seem that a balancer simply has a state, up or down, and it alternates between them, sending tokens up and down depending on what state it is in. Consider how the figure in the textbook defines a balancer:</p> <p><a href="https://i.sstatic.net/uaINO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uaINO.png" alt="enter image description here"></a></p> <p>It says that no matter the order and time, the tokens are split (balanced) as equally as possible between the top wire and the bottom wire. Therefore, no matter how the tokens arrive, the balancer <strong>should</strong> maintain the invariant that the difference between top and bottom is at most 1. </p> <p>The issue that I have is that it seems to describe the balancer as a state machine that alternates between up and down, but such a machine wouldn't actually be able to balance properly. It would have to have some sort of state counting how many tokens it has sent up and how many it has sent down (so that it can balance). 
The code presented by the book seems to do the alternating up and down (unless I misunderstood it):</p> <p><a href="https://i.sstatic.net/Dx4ce.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dx4ce.png" alt="enter image description here"></a></p> <p>It doesn't have a counter that makes sure it maintains the balancing invariant. If not, then how does it actually do it?</p>
<p>The book is correct, and the algorithm works as advertised. See if you can prove that it does. No explicit counting is necessary.</p> <p>A very similar situation occurs for languages: $(ab)^*(a + \epsilon)$ is regular even though $\{ w : 0 \leq \#_a(w) - \#_b(w) \leq 1 \}$ is not.</p>
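To convince yourself that the toggle alone suffices, you can simulate it. The following is an illustrative single-threaded sketch (the book's concurrent Java version only adds an atomic get-and-flip on the toggle; the invariant argument is the same):

```python
# A balancer needs no counters: alternating outputs *is* balancing.
# After t tokens have passed through, the top wire has seen ceil(t/2)
# tokens and the bottom wire floor(t/2), so their difference is never
# more than 1, for every prefix of every possible arrival order.

class Balancer:
    def __init__(self):
        self.toggle = True              # True: next token exits on top

    def traverse(self):
        wire = 0 if self.toggle else 1  # 0 = top wire, 1 = bottom wire
        self.toggle = not self.toggle
        return wire

def simulate(num_tokens):
    balancer = Balancer()
    counts = [0, 0]                     # tokens seen on [top, bottom]
    for _ in range(num_tokens):
        counts[balancer.traverse()] += 1
    return counts

# The invariant holds after every prefix of any token stream:
for t in range(1, 100):
    top, bottom = simulate(t)
    assert 0 <= top - bottom <= 1
```

The point of the exercise: the toggle bit already encodes the only fact a counter could tell you, namely whether the counts are currently equal or differ by one.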
371
tokenization
Is there an alternative for the formal language theory that could be used for flowchart diagrams?
https://cs.stackexchange.com/questions/151514/is-there-an-alternative-for-the-formal-language-theory-that-could-be-used-for-fl
<p>I am creating a tool for validating, parsing, and interpreting flowchart diagrams on <a href="https://diagrams.net" rel="nofollow noreferrer">diagrams.net</a>, and it is necessary to give users an opportunity to define a set of rules for the diagram. So, in the end, I want to achieve something like ANTLR for diagrams with the following features:</p> <ul> <li>Some kind of DSL for defining parser and lexer rules</li> <li>Building a parse tree for a given diagram with a given set of rules</li> <li>Traversing the parse tree and generating code from it</li> </ul> <p>I failed to find any existing tools or theoretic models for such a task, so currently I am trying to apply context-free grammars, but the problem is that CFGs and all the theory and tooling utilizing them interpret the input as a sequence of tokens, and flowchart diagrams have two major differences from that:</p> <ol> <li>In texts, each token, except EOF, is followed by exactly one token. In flowcharts, one token may be followed by multiple tokens, for example after the Decision Element.</li> <li>There can be loops in flowcharts.</li> </ol> <h4>Branches support</h4> <p>I managed to resolve the first problem – I created a top-down parser that can validate and parse a diagram with branching using an extended form of CFG, where production rules can be defined as <span class="math-container">$\alpha \to \beta$</span>, where <span class="math-container">$\alpha \in V$</span>, <span class="math-container">$\beta \in (V\cup V'\cup \Sigma)^*$</span>.</p> <p><span class="math-container">$^*$</span> - Kleene star operation, <span class="math-container">$\Sigma$</span> - set of terminals, <span class="math-container">$V$</span> - set of non-terminals, <span class="math-container">$V' = \{f(v) : v \in V\}$</span>, where <span class="math-container">$f(v)$</span> is a special non-terminal that represents one or more branches of non-terminal <span class="math-container">$v$</span></p> <p>For example: (capitals 
are non-terminal characters, lowercase are terminals)</p> <pre><code>CONDITION -&gt; decision ACTION' ACTION -&gt; process ACTION -&gt; end </code></pre> <p>There, the first rule says that the <code>decision</code> token (rhombus) must be followed by 1 or more branches of <code>ACTION</code>. This grammar could process diagrams like this one:</p> <p><a href="https://i.sstatic.net/Tc1gw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tc1gw.png" alt="Diagram example" /></a></p> <h4>Specific questions</h4> <ol> <li>Maybe I am reinventing the wheel and there are already parsers or a theory that could be applied to this task.</li> <li>If there are no current solutions, how could I implement loop support? I am currently looking into the idea of keeping a map <code>token -&gt; AST node candidates</code> and reevaluating the candidates each time the lexer visits the same token, but the concept of tying node candidates to tokens seems very unnatural for CFG parsers.</li> </ol>
<p>Not sure why you would go straight to CFGs to resolve this when most flowcharts can be represented as rational functions, AKA finite-state transducers. These can be expressed as regular patterns: considering your diagram, it would be expressed as</p> <pre><code>((1, print['1']) | (2, print['2']) | (3, end)) </code></pre> <p>Looping constructs can obviously be modelled as well</p> <pre><code>((1, print['1']) | (2, print['2']))* (3, end) </code></pre> <p>CFG constructs would come into the picture if one or more nodes on the flowchart represented an embedded flowchart. This is exactly analogous to a method in an algorithmic programming language like C or Java, where the flow of control (instruction pointer trace) runs a regular pattern as long as the stack pointer is fixed, and embedded flows (called methods) are represented similarly.</p> <p>If you are interested, see <a href="https://github.com/ntozubod/ginr" rel="nofollow noreferrer">ginr</a> for a robust regular pattern compiler with support for multidimensional patterns (rational functions and relations).</p>
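As a sketch of the idea (the state names and output labels below are mine, not ginr syntax), the looping pattern above runs as a plain transducer with a fixed, finite set of states and no stack:

```python
# A toy finite-state transducer for the flowchart pattern
#     ((1, print['1']) | (2, print['2']))* (3, end)
# It maps a stream of flowchart inputs to a stream of output actions.
# No stack (and hence no CFG machinery) is needed for a flat flowchart,
# even one containing loops.

TRANSITIONS = {
    # (state, input symbol) -> (next state, output action)
    ("start", "1"): ("start", "print['1']"),
    ("start", "2"): ("start", "print['2']"),
    ("start", "3"): ("accept", "end"),
}

def run(inputs):
    state, outputs = "start", []
    for symbol in inputs:
        key = (state, symbol)
        if key not in TRANSITIONS:
            raise ValueError(f"no transition from {state} on {symbol!r}")
        state, out = TRANSITIONS[key]
        outputs.append(out)
    return state == "accept", outputs
```

Here `run(["1", "2", "1", "3"])` accepts and emits the corresponding action stream, while an input that never reaches the `end` node is rejected. A nested (called) flowchart is exactly where you would graduate from this table to a pushdown machine.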
372
tokenization
Why separate lexing and parsing?
https://cs.stackexchange.com/questions/39898/why-separate-lexing-and-parsing
<p>It's possible to parse a document using a single pass from a state machine. What is the benefit of having two passes, i.e. having a lexer to convert text to tokens, and having a parser to test production rules on those tokens? Why not have a single pass that applies production rules directly to the text?</p>
<p>You don't have to separate them. People combine them into <a href="https://en.wikipedia.org/wiki/Scannerless_parsing">scannerless parsers</a>. </p> <p>The key disadvantage of scannerless parsers appears to be that the resulting grammars are rather complicated -- more complicated than the corresponding combination of a regular expression doing lexing and a context-free grammar doing parsing on the token-stream. In particular, grammars for scannerless parsing tend towards ambiguity. It's easier to remove ambiguity for grammars working on a token-stream.</p> <p>A pragmatic benefit of using a dedicated upfront lexing phase is that you don't couple the subsequent parser with lexical detail. This is useful during early programming language development, when the lexical and syntactic details are still changing frequently.</p>
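For a concrete sense of that decoupling, here is a minimal regex-based lexer sketch (illustrative, not taken from any particular tool). The parser downstream consumes `(kind, text)` pairs and never sees spelling details such as whitespace or digit runs:

```python
import re

# A tiny standalone lexer. The parser only ever sees (kind, text) pairs,
# so lexical details -- whitespace, number formats, identifier spelling --
# can change without touching the grammar at all.

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),                 # discarded, never reaches the parser
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = MASTER.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character {text[pos]!r}")
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens
```

For instance, `tokenize("x = 40 + 2")` yields five tokens; a scannerless grammar would instead have to thread the whitespace and character-level rules through every production.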
373
tokenization
Parsing algorithms
https://cs.stackexchange.com/questions/159342/parsing-algorithms
<p>I have experimented with a grammar that I could turn into a strict left-to-right finite state automaton driven algorithm (bottom up, table driven). The FSA could be complex, that's not a problem. It doesn't need to deal with infinite recursive structures.</p> <p>I then moved the grammar into BNF, and built a standard SLR parse table. And I found that the SLR parser relies a lot on building non-terminal symbols on the right before it finishes the symbol that it has started on the left. This causes -- in my implementation at least -- a major disadvantage, because if I could just read token for token from left to right, everything would be much, much faster.</p> <p>I'd like to know if this is something that has been discussed in the literature and where I would find that discussion. I.e., is there a certain restricted subset of grammars which can be parsed with a simple FSA token by token from left to right?</p> <p>Trying to wrap my head around it, let's take the simple expression grammar example:</p> <pre><code>E : E + T E : T T : T * F T : F F : n F : ( E ) </code></pre> <p>I think I can turn this into a simple FSA if I limited the depth of the recursion on <code>F : ( E )</code>.</p> <pre><code>S -[n]-&gt; Sn : push(token) Sn -[+]-&gt; Snp Sn -[*]-&gt; Snt Snt -[n]-&gt; ST : push(token * pop()) Snp -[n]-&gt; Snpn : push(token) Snpn-[*]-&gt; Snt : ST -[*]-&gt; Snt ... 
</code></pre> <p>I can't finish this idea right now; I would have to think it all the way through. But my intuition (and experience with having crafted a grammar before by creating an FSA table directly) is that it causes a lot of states to be created, that each state carries a memory of much of what came before, that there appears to be quite a bit of redundancy in those many states, and that the S-attributes that compute the value of the expression (or parse tree) will be a lot more work to deal with in those several cases.</p> <p>But this redundancy doesn't matter, as it is just a quick table lookup at every state for every new token, and it goes strictly left to right, terminal token by terminal token.</p> <p>You might say that perhaps I can't deal with shift-reduce, shift-shift and reduce-reduce conflicts, and here I am telling you that I don't care about those, because in my application I want to just follow every possible path, creating multiple parse trees if necessary. I.e., there isn't just one stack: each instance of a state in the state machine carries its own value stack, so that, when conflicts arise, two or more states are derived, and the FSA continues on both of them with the next token. If there is no transition given the next token for some state, that state simply gets abandoned. I guess this is a GLR parser in a way.</p> <p>But the point is that I want to run it strictly as a finite state automaton.</p> <p>Has anyone ever done that or heard of such a thing?</p>
<p>The comment basically has the right idea: What you want is some kind of LR parsing; GLR is fine.</p> <p>You see, if a grammar is not <span class="math-container">$LR(k)$</span> but you try to build a <span class="math-container">$LR(k)$</span> automaton for it anyway, then the automaton is still correct and will still parse it just fine. The catch is that it is nondeterministic. Shift-reduce and reduce-reduce conflicts (there is no such thing as a shift-shift conflict in <span class="math-container">$LR$</span> parsing) are choice points that you need to handle somehow, such as by backtracking.</p> <p><span class="math-container">$GLR$</span> parsing embraces the nondeterminism. The automaton, and I can't stress this enough, is the same as the <span class="math-container">$LR$</span> automaton. The only difference is in how it's executed.</p>
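Here's a toy illustration of executing the same machine nondeterministically (my own sketch, using a set of forked whole-stack copies rather than the graph-structured stack a real GLR parser would use), for the grammar S → a S a | a, which is not LR(k) for any k:

```python
# Nondeterministic execution of a shift-reduce automaton for
#     S -> a S a | a
# Instead of one parse stack, keep a set of (stack, input-position)
# configurations and fork at every shift/reduce choice point. Real GLR
# parsers do the same, but share forks in a graph-structured stack.

def recognize(tokens):
    start = ((), 0)                     # (tuple-of-symbols stack, next index)
    frontier, seen = {start}, {start}
    while frontier:
        new = set()
        for stack, i in frontier:
            if stack == ("S",) and i == len(tokens):
                return True             # accepted: whole input reduced to S
            if i < len(tokens):                 # shift the next token
                new.add((stack + (tokens[i],), i + 1))
            if stack[-1:] == ("a",):            # reduce by S -> a
                new.add((stack[:-1] + ("S",), i))
            if stack[-3:] == ("a", "S", "a"):   # reduce by S -> a S a
                new.add((stack[:-3] + ("S",), i))
        frontier = new - seen           # abandon configurations seen before
        seen |= new
    return False                        # every fork died: reject
```

The machine accepts exactly the odd-length strings of `a`s the grammar generates; configurations with no applicable transition simply disappear from the frontier, which matches the "abandon the state" behaviour described in the question.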
374
tokenization
Finding a Simple Distribution In a Binary String
https://cs.stackexchange.com/questions/68693/finding-a-simple-distribution-in-a-binary-string
<p>Unsupervised feature discovery of text that started with its bit string representation would need to discover that octets are the first-order parse of such a bit string. This raises a question:</p> <p>What is the technique called that can discover that a binary string, for example:</p> <p><code>0100100111110010101010101011111011001000111000100101110001111110111010111110010111001010100011111110001101100101001010101111000111101011010011111001111101001111101011111011110011011001111000010100110001</code></p> <p>has the simple model (with <code>A x B</code> meaning A occurs B times in the bag):</p> <p><code>{00 x 1, 01 x 2, 10 x 3, 11 x 4}</code></p> <p>even though it knows only that it should group bits in substrings (tokens) of the same bit length (i.e., it doesn't know it should group bits in pairs)?</p> <p>That is to say, if the binary string input was generated by a <code>perl</code> program:</p> <p><code>for(0..100){print ( (('00') x 1, ('01') x 2, ('10') x 3, ('11') x 4)[rand(10)])}</code></p> <p>the technique would reject, as less predictive, the distribution (model):</p> <p><code>{0 x 7, 1 x 13}</code></p> <p>and it would also reject a model that used 2-bit tokens on odd-numbered bit boundaries, as well as models that used 3-bit, or longer, tokens.</p> <p>A related, more difficult technique would find the model for a string generated by sampling the bag:</p> <p><code>{0 x 1, 1 x 1, 00 x 1, 01 x 2, 10 x 3, 11 x 4}</code></p> <p>That is to say, the bit string is a mix of token sizes.</p>
<p>TL;DR: use maximum likelihood and discrete optimization.</p> <h2>Evaluating candidate models: the maximum likelihood principle</h2> <p>If you have a candidate model, you can evaluate how well it fits the data using the maximum likelihood principle.</p> <p>If $M$ is a model and $x$ is a string, let $P(x|M)$ denote the probability of outputting string $x$ when $M$ is the true model. Here I assume a generative model that produces $x$ as follows: at each step, it randomly picks one term $g_i \times n_i$ from $M$, appends $n_i$ copies of the string $g_i$ to the output, and repeats until some stopping point (say, stops once we've generated a string of fixed length).</p> <p>Of course in practice we have the reverse problem: we have observed a fixed string $x$, and want to infer $M$. Now we'll treat the observation $x$ as fixed. We define the likelihood of $M$ to be $L(M) = P(x|M)$. If we have observed multiple strings $x_1,\dots,x_m$, then we define the likelihood of $M$ to be $L(M) = P(x_1|M) \times \cdots \times P(x_m|M)$.</p> <p>The intuition is: models with larger likelihood fit the data better. So, if you have a choice of multiple models, choose the one with the largest likelihood -- that's the one that seems most consistent with the data.</p> <p>In practice, for computational reasons, we often deal with the log-likelihood, $\log L(M)$. We choose the model whose log-likelihood is largest. Since the log is monotone, this doesn't change anything fundamental.</p> <p>If you're comparing a simple model to a complex model, this introduces the risk of overfitting. The likelihood alone doesn't account for Occam's razor: the principle that, all else being equal, simpler models are more likely to represent the truth. 
This can be fixed by introducing some kind of <a href="https://en.wikipedia.org/wiki/Regularization_%28mathematics%29" rel="nofollow noreferrer">regularization</a>.</p> <p>Finally, note that the likelihood of a model can be computed efficiently using dynamic programming. For each prefix $w$ of $x$, we compute $L(w)$ in terms of shorter prefixes, working from shorter prefixes to longer ones, until we have computed $L(x)$. If you don't immediately see how to do this computation, ask a separate question; it's a standard dynamic programming exercise. If you're dealing with long strings, you might want to compute using log-likelihoods rather than likelihoods, to avoid underflow.</p> <h2>Fixed-length tokens</h2> <p>If all tokens have the same length, it's probably fairly easy to find a good model. Assume we know the length $\ell$ of all tokens in the model; if we don't, we can try each possibility for $\ell$, one at a time, and take the one that yields the best model.</p> <p>Since we know the length $\ell$, we can divide the string $x$ up into tokens of length $\ell$. In this way we can see the set of all tokens that appear in $x$, say $t_1,\dots,t_k$. Now we know that the model must be of the form</p> <p>$$M = \{t_1 \times n_1, \dots, t_k \times n_k\}$$</p> <p>and we merely need to infer the numbers $n_1,\dots,n_k$.</p> <p>Let's focus on the token $t_1$ and see how to infer $n_1$. We can find all occurrences of $t_1$ in $x$, and combine them into sequences of contiguous repeats, and let $S_1$ denote the set of repeat lengths. For instance, if at one place we see $t_1$ repeated 3 times consecutively, and at another place we see $t_1$ repeated 9 times consecutively, then we have $S_1 = \{3,9\}$. At this point we simply take $n_1 = \gcd S_1$, i.e., $n_1$ is the largest number that divides every element of $S_1$.</p> <p>We'll of course repeat this for each token $t_i$. 
We end up with a complete model, as desired.</p> <p>A technical detail: This assumes that each token $t_i$ is listed only once in $M$, with a single repeat-factor $n_i$. In other words, it assumes the model is allowed to look like $\{00 \times 4\}$ but not $\{00 \times 2, 00 \times 3\}$ (the latter has the token $00$ with two different repeat-factors). If you want to consider the latter kind of model, the problem reduces to finding a set of repeat-factors $R_1$ such that every element of $S_1$ can be expressed as a linear combination of $R_1$. The optimal solution will depend on the form of regularization you use; without regularization, the optimal solution will always be to simply take $R_1$ to have a single element, $R_1 = \{\gcd S_1\}$. So if you want to consider models where the same token appears twice, you'll need to specify a particular form of regularization (ask a new question). For now, I'll assume such models aren't of interest.</p> <p>So this shows how to solve the problem, in the easy case where all tokens have the same length.</p> <h2>Variable-length tokens</h2> <p>Handling models where the lengths of the tokens are not all the same looks much more challenging. I can suggest one possible approach, but the best approach will probably depend on the parameter settings you're encountering in practice.</p> <p>I suggest reducing this to a discrete optimization problem. In particular, I suggest you identify a set of tokens $t_1,\dots,t_k$ that you're confident will be a superset of the ones in the real model, and then use optimization methods to solve for the repeat-factors $n_1,\dots,n_k$ that maximize the likelihood of the model.</p> <p>In more detail: Fix the set of $t_1,\dots,t_k$. Now the model looks like</p> <p>$$M = \{t_1 \times n_1, \dots, t_k \times n_k\}$$</p> <p>where the $t_i$'s are known and the $n_i$'s are unknown (variables). 
Consequently we can think of the likelihood $L(M)$ as a function of the $n_i$'s: given any candidate values for $n_1,\dots,n_k$, we can compute $L(M)$ using dynamic programming.</p> <p>So, I'd suggest you use some existing optimization strategy to find $n_1,\dots,n_k \in \mathbb{N}$ that maximize $L(M)$. A natural approach is probably some form of local search, e.g., hillclimbing, hillclimbing with random restarts, or simulated annealing. A suggestion for a set of "local moves" would be to pick a single $n_i$ and change it via one of the following operations: multiply $n_i$ by a small prime number; divide $n_i$ by a small prime divisor of it; set $n_i$ to zero; change $n_i$ from zero to a small number; increment $n_i$; decrement $n_i$.</p> <p>How do we find the set $t_1,\dots,t_k$ of tokens? Here a convenient fact is that we don't have to get this set exactly right; it suffices for it to be a <em>superset</em> of the true set of tokens in the actual model. In particular, setting $n_i=0$ is equivalent to removing the token $t_i$ from the model entirely. So, we can choose a larger-than-necessary set of tokens $t_1,\dots,t_k$ and let the optimization routine effectively solve for which tokens should be retained and which should be eliminated. One heuristic would be to choose $t_1,\dots,t_k$ to be the set of all bit-strings of a certain range of lengths (e.g., all bit-strings of length 2 or 3). Another heuristic would be to use some kind of filtering condition: use the set of all bit-strings $t$ that appear at least some minimum number of times in $x$. The nice thing is that we can try each of these choices in turn, apply the optimizer to each, get a list of candidate models, and choose the best one (using the maximum-likelihood principle). 
For instance, it might not be clear how to choose a threshold for the filtering, but we can try multiple values in an exponentially decreasing sequence and keep the best model obtained.</p> <p>Similarly, it's also possible to come up with heuristics for the initial values of $n_1,\dots,n_k$ to feed to the optimizer (this will help some optimizers converge to a better solution). For instance, for each token $t_i$ and each candidate repeat-factor $r$, you could count the number of times that $t_i$ appears repeated $r$ times in a row, then choose the value of $r$ that has the highest count as the initial guess for $n_i$.</p> <p>How well will this work? I don't know. It will probably depend a lot on the parameters of the problem instances you run into in practice. I would suggest you try it on your data sets, with several different optimization methods and fiddling with the parameters a bit. If it doesn't work, ask another question where you show us what you've tried, and also show us the typical range of values for the most important parameters: the number of tokens in the model ($k$), the range of lengths of the tokens themselves, the range of values of the repeat-factors $n_i$, the length of the string $x$.</p>
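The fixed-length-token recipe above is short enough to sketch directly (illustrative Python; `infer_model` and its interface are my own names):

```python
from functools import reduce
from itertools import groupby
from math import gcd

# Infer a fixed-token-length model {token x repeat} from a string x,
# following the recipe above: split x into length-l tokens, collect the
# lengths of consecutive runs of each token, and take the gcd of those
# run lengths as the token's repeat-factor.

def infer_model(x, l):
    if l <= 0 or len(x) % l != 0:
        return None                     # x cannot be cut into length-l tokens
    tokens = [x[i:i + l] for i in range(0, len(x), l)]
    run_lengths = {}
    for tok, run in groupby(tokens):
        run_lengths.setdefault(tok, []).append(sum(1 for _ in run))
    return {tok: reduce(gcd, runs) for tok, runs in run_lengths.items()}
```

For example, `infer_model("00" + "01" * 2 + "10" * 3 + "01" * 2, 2)` recovers the model `{"00": 1, "01": 2, "10": 3}`. Running this for each plausible `l` and keeping the model with the highest likelihood completes the fixed-length procedure.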
375
tokenization
What defines how many lookahead a lexer has?
https://cs.stackexchange.com/questions/141300/what-defines-how-many-lookahead-a-lexer-has
<p>If a lexical grammar has multiple tokens which start with the same character, like <code>&gt;</code> <code>&gt;&gt;</code> <code>&gt;&gt;=</code>, and their longest length is 3, does it have a 2-character lookahead?</p> <p>Or is it implementation-defined? Does the number of characters required to produce a fixed-size token (the length of a keyword minus 1, say) also count?</p> <p>What is the formal definition of the amount of lookahead a lexer has?</p>
<p>&quot;Lookahead&quot; is an aspect of a particular parsing algorithm, and might be different for different parsing algorithms using the same grammar. You can't talk about lookahead without specifying which parsing algorithm is in use.</p> <p>If you are using a top-down LL parsing algorithm, parsing decisions need to be made very early, as soon as the parser reaches the start of the non-terminal to be applied. I think that corresponds with your intuition that distinguishing <code>12345</code> from <code>12345h</code> requires arbitrarily much lookahead. But that is not the case for all parsing algorithms. A bottom-up LR parser doesn't need to decide which non-terminal to apply until much later, often not until the end of the non-terminal. So the following grammar will recognise both numeric forms with lookahead of 1:</p> <pre class="lang-none prettyprint-override"><code>dnum ::= [0-9] | dnum [0-9] hpfx ::= [a-f] | dnum [a-f] | hpfx [0-9a-f] hnum ::= hpfx 'h' </code></pre> <p>In practice, lexical analysis is not easy if lookahead is bounded. All of the scannerless parsing frameworks I know of use parsing algorithms which do not limit lookahead, such as GLR or PEG. Most lexical scanner frameworks use regular expressions. Although a regular expression can be converted to a context-free grammar, that grammar is often ambiguous. This doesn't matter for lexical analysis because a token is presumed to have no internal structure; consequently it's irrelevant how many possible different parses could be produced.</p> <p>Even so, it is possible to talk about lookahead for a lexical scanner, because the scanner generally returns the longest possible token. Thus, the scanner almost always has to look at the next character in order to be sure that the token cannot be extended, and in some cases it needs to look at more characters. This is usually phrased as &quot;backtracking&quot; or &quot;fallback&quot;, but it would be equivalent to view it as lookahead. 
For most real languages, the required lookahead or maximum fallback is a small number like 1 or 2, but there are exceptions. A typical case is the <code>...</code> token in C. Since <code>..</code> is not a C token, if the scanner sees <code>.</code>, it may need to look at the next two characters before returning a <code>.</code> token.</p>
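Here is a sketch of that maximal-munch behaviour for the `>` family from the question (illustrative code, not taken from any real scanner generator): the scanner remembers the longest token matched so far and keeps reading while a longer one is still possible, falling back when it isn't.

```python
# Maximal munch over the token set {'>', '>>', '>>='}: at each position
# the scanner considers up to the next MAX_LEN characters, remembers the
# longest string that is actually a token, and falls back to it. Put
# differently, deciding that '>' is the whole token can require looking
# 2 characters past it.

TOKENS = {">", ">>", ">>="}
MAX_LEN = max(len(t) for t in TOKENS)

def scan(text):
    out, i = [], 0
    while i < len(text):
        best = None
        for j in range(i + 1, min(i + MAX_LEN, len(text)) + 1):
            if text[i:j] in TOKENS:
                best = text[i:j]        # longest match found so far
        if best is None:
            raise SyntaxError(f"no token at position {i}")
        out.append(best)
        i += len(best)
    return out
```

So `scan(">>>")` yields `>>` then `>`: after reading the third `>`, the scanner discovers `>>>` is not a token and falls back to the 2-character match, which is exactly the fallback-as-lookahead view described above.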
376
tokenization
Strategy to designing grammar for a LR(1) parser
https://cs.stackexchange.com/questions/152048/strategy-to-designing-grammar-for-a-lr1-parser
<p>Is it better to think about tokens from right to left and perform right factoring on a grammar for an LR(1) parser? As opposed to thinking about tokens left to right and doing left factoring on a grammar for an LL(1) parser.</p> <p>Example: a Java import statement.</p> <pre><code>S1 -&gt; S2 ; S2 -&gt; S5 S2 -&gt; S3 id S2 -&gt; S4 * S3 -&gt; S5 as S4 -&gt; S5 . S5 -&gt; S4 id S5 -&gt; import id </code></pre> <p>Where <code>S1</code> is the starting rule.</p> <p>Am I correct in thinking an LR(1) grammar is no more difficult than an LL(1) grammar, but you just need to think about tokens in reverse order? Kinda like LR(1) is the dual of LL(1)?</p> <p>It also seems like you can simply reverse the order of the rules and construct a sort of continuation-passing style in the grammar in order to convert an LR(1) grammar into an LL(1) grammar like so:</p> <pre><code>S5 -&gt; import id C1 C1 -&gt; S2 C1 -&gt; as S3 C1 -&gt; . S4 S2 -&gt; ; S1 S1 -&gt; S3 -&gt; id S2 S4 -&gt; * S2 S4 -&gt; id C1 </code></pre> <p>Furthermore, if you imagine Java is backwards token-wise and an import statement is written like <code>;s as swing.javax import</code>, for example, then you can create an LL(1) grammar for that. With that LL(1) grammar, you reverse the parts of each rule, and you end up with the original LR(1) grammar at the top of this post. This leads me to believe LR(1) is the dual of LL(1).</p> <p>A language running backwards should be just as expressive as a language running forwards token-wise, as all the information of the sentence is still there. Maybe LL(1) and LR(1) are in fact equal to each other in expressive power.</p>
<p>It's not difficult to write grammars, and particularly not LR(k) grammars (despite all the claims to the contrary you'll find floating around). You should start by trying to write down a simple description of the language which reflects the actual structure of the language. In many cases, that's easier to do if you allow yourself to use repetition (&quot;list of&quot;), alternation, and optionality, as you might do in English.</p> <p>(I've tried to address some of your specific questions after the rather long example.)</p> <p>For example, some hypothetical language might be a <em>program</em>, consisting of a <em>list of declarations</em>, with a <em>declaration</em> being a <em>variable declaration</em> or a <em>function declaration</em>.</p> <p>A <em>variable declaration</em> is a <em>type</em> followed by a <em>comma-separated list of variables with optional initializer</em> followed by a <code>;</code>. A <em>variable with optional initializer</em> is a <em>name</em>, optionally followed by an <code>=</code> and an <code>expression</code>.</p> <p>Since this is just an example, I'll stop there and write it out, using a formalism which includes the above-mentioned operators; this is usually called an &quot;extended&quot; BNF (EBNF), which is more of a concept than a standard. (That is, there are a lot of different extensions, all more or less functionally equivalent, but with different textual manifestations). So here's my personal non-standard EBNF:</p> <h3>Fonts:</h3> <ul> <li><span class="math-container">$\it{&lt;italics&gt;}$</span> are non-terminals (surrounded by angle brackets because I can't figure out how to make a good-looking hyphen in MathJax).</li> <li><span class="math-container">$\it{CAPS}$</span> are terminals defined by a lexical grammar. (eg. <span class="math-container">$\it{IDENTIFIER})$</span></li> <li><span class="math-container">$\tt{typewriter}$</span> are literal tokens. (eg. 
<span class="math-container">$\tt{begin}$</span> or <span class="math-container">$\tt{\{}$</span>)</li> </ul> <h3>Operators (coloured red to distinguish them from literals):</h3> <ul> <li><span class="math-container">$\;\color{red}{\large{\tt{\mid}}}$</span>: separates alternatives.</li> <li><span class="math-container">$\;\color{red}{\large{\tt{[}}}...\;\color{red}{\large{\tt{]}}}$</span>: indicates that <span class="math-container">$...$</span> is optional.</li> <li><span class="math-container">$\;\color{red}{\large{\tt{\{}}}...\;\color{red}{\large{\tt{\}}}}$</span>: indicates that <span class="math-container">$...$</span> can be repeated (at least once).</li> <li><span class="math-container">$\;\color{red}{\large{\tt{\{}}}...\;\color{red}{\large{\tt{/}}}\;\tt{,}\;\color{red}{\large{\tt{\}}}}$</span>: indicates that <span class="math-container">$...$</span> can be repeated, separated with <span class="math-container">$\tt{,}$</span> (or some other punctuation token)</li> </ul> <p>(That last one -- the interleave operator -- is particularly divergent; it is not even present in many EBNF variants, although it's incredibly useful. 
This particular version is mine and mine alone, so don't expect it to work in an EBNF tool.)</p> <p>With that, the start of the grammar: <span class="math-container">$$\begin{align}\it{&lt;program&gt;}&amp;\to\it\;\color{red}{\large{\tt{\{}}}\it{&lt;declaration&gt;}\;\color{red}{\large{\tt{\}}}}\\ \it{&lt;declaration&gt;}&amp;\to\it{&lt;function\;declaration&gt;}\;\color{red}{\large{\tt{\mid}}}\;\it{&lt;variable\;declaration&gt;}\\ \it{&lt;variable\;declaration&gt;}&amp;\to\it{TYPE}\enspace\;\color{red}{\large{\tt{\{}}}\it{&lt;initialiser&gt;}\;\color{red}{\large{\tt{/}}}\;\tt{,}\;\color{red}{\large{\tt{\}}}}\enspace\tt{;}\\ \it{&lt;initialiser&gt;}&amp;\to\it{IDENT}\enspace\color{red}{\large{\tt{[}}}\tt{=}\enspace\it{&lt;expression&gt;}\color{red}{\large{\tt{]}}}\\ \end{align}$$</span> Now, we can mechanically turn that into an LR(1) grammar, say for the popular Bison parser generator, using macro transformations for the EBNF operators:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>EBNF</th> <th>Bison</th> </tr> </thead> <tbody> <tr> <td><span class="math-container">$A\to\alpha\;\color{red}{\large{\tt{|}}}\enspace\beta$</span></td> <td><code>A: α | β</code></td> </tr> <tr> <td><span class="math-container">$A\to\;\alpha\;\color{red}{\large{\tt{[}}}\;\beta\;\color{red}{\large{\tt{]}}}$</span></td> <td><code>A: α | α β</code></td> </tr> <tr> <td><span class="math-container">$A\to\;\color{red}{\large{\tt{\{}}}\;\alpha\;\color{red}{\large{\tt{\}}}}$</span></td> <td><code>A: α | A α </code></td> </tr> <tr> <td><span class="math-container">$A\to\;\color{red}{\large{\tt{\{}}}\;\alpha\;\color{red}{\large{\tt{/}}}\;\tt{,}\;\color{red}{\large{\tt{\}}}}$</span></td> <td><code>A: α | A ',' α </code></td> </tr> </tbody> </table> </div> <p>(Some of the above require the introduction of intermediate non-terminals, if they are used other than at the top-level of a production.)</p> <p>That produces the following Bison grammar snippet:</p> 
<pre><code>program : declaration | program declaration declaration : variable-declaration | function-declaration variable-declaration : TYPE initialiser-list ';' initialiser-list : initialiser | initialiser-list ',' initialiser initialiser : IDENT | IDENT '=' expression </code></pre> <p>OK, let's do your <code>import</code> statement, which I think is in some ways as fictional as my language above. Just a couple of notes first:</p> <ul> <li><p>I'm not really a Java programmer, but I don't see any reference to the <code>import type as alias</code> syntax in the JLS (version 18). Still, it's reasonable to build a grammar which includes it, so we'll do that.</p> </li> <li><p>Your grammar seems to be for a sequence of <code>import</code> statements, rather than for a single <code>import</code> statement. That seems unrealistic; it's just one of many possible declarations, and it should fit into a syntax like the one above, for example by modifying <em>declaration</em> to be <span class="math-container">$$\it{&lt;declaration&gt;}\to\;\it{&lt;variable\;declaration&gt;}\;\mid\;\it{&lt;function\;declaration&gt;}\;\mid\;\it{&lt;import\;declaration&gt;}$$</span> and then making <span class="math-container">$\it{&lt;import\;declaration&gt;}$</span> produce a single declaration. That's also a better match for the semantics (in which there's no category for a sequence of declarations of the same kind). So that's what we'll do.</p> </li> </ul> <p>That said, the JLS defines four types of <code>import</code> declarations, which is basically a product of two binary choices: <em>type</em> or <em>static</em>, and <em>single</em> or <em>on-demand</em>. 
Ignoring the minor detail about which of the identifiers is allowed to be one of the semi-reserved words (<code>permits</code>, <code>record</code>, <code>sealed</code>, <code>var</code>, and <code>yield</code>), these can all be summarised by one simple EBNF syntax:</p> <p><span class="math-container">$$\begin{align}\it{import\;declaration}&amp;\to\tt{import}\enspace\color{red}{\large{\tt{[}}}\;\tt{static}\;\color{red}{\large{\tt{]}}}\enspace\color{red}{\large{\tt{\{}}}\it{IDENT}\;\color{red}{\large{\tt{/}}}\tt{.}\color{red}{\large{\tt{\}}}}\enspace\color{red}{\large{\tt{[}}}\;\tt{.}\enspace\tt{*}\enspace\color{red}{\large{\tt{|}}}\enspace\tt{as}\enspace\it{IDENT}\;\color{red}{\large{\tt{]}}}\enspace\tt{;}\\ \end{align}$$</span> corresponding to the Bison grammar, where list needs its own non-terminal and the alternatives and optionalities translate into six productions:</p> <pre><code>import : &quot;import&quot; dotted-name ';' | &quot;import&quot; dotted-name '.' '*' ';' | &quot;import&quot; dotted-name '.' &quot;as&quot; IDENT ';' | &quot;import&quot; &quot;static&quot; dotted-name ';' | &quot;import&quot; &quot;static&quot; dotted-name '.' '*' ';' | &quot;import&quot; &quot;static&quot; dotted-name '.' &quot;as&quot; IDENT ';' dotted-name : IDENT | dotted-name '.' IDENT </code></pre> <p>Even though that's a lot longer than the EBNF, I believe it's not that hard to follow (nor to write), unlike the LL version in your question.</p> <p>For the cherry, I'll add (a very cut-down) <em>function declaration</em>, and change <code>TYPE</code> to a dot-separated list of identifiers, closer to the Java grammar. Then we can verify that it's an LALR(1) grammar by passing it through bison. 
Here's the full grammar (although for testing purposes, I'm treating <code>expression</code> and <code>statement</code> as terminals):</p> <pre><code>%token IDENT statement expression %% program : declaration | program declaration declaration : variable-declaration | function-declaration | import-declaration variable-declaration : type initialiser-list ';' initialiser-list : initialiser | initialiser-list ',' initialiser initialiser : IDENT | IDENT '=' expression function-declaration : type IDENT parameter-list block parameter-list : '(' ')' | typed-name-list typed-name-list : type IDENT | typed-name-list ',' type IDENT block : '{' statement-list '}' statement-list : %empty | statement-list statement | statement-list declaration import-declaration : &quot;import&quot; dotted-name ';' | &quot;import&quot; dotted-name '.' '*' ';' | &quot;import&quot; dotted-name '.' &quot;as&quot; IDENT ';' | &quot;import&quot; &quot;static&quot; dotted-name ';' | &quot;import&quot; &quot;static&quot; dotted-name '.' '*' ';' | &quot;import&quot; &quot;static&quot; dotted-name '.' &quot;as&quot; IDENT ';' dotted-name : IDENT | dotted-name '.' IDENT type: dotted-name </code></pre> <p>And that works, first time. No errors, no warnings, and no conflicts:</p> <pre class="lang-none prettyprint-override"><code><span class="math-container">$ bison -Wall -v -o minijava.c minijava.y $</span> </code></pre> <p>(See note 1).</p> <p>Now, that grammar is definitely not LL. There are various non-terminals which start with the same terminal, and there's lots of left recursion. LR doesn't care about any of that. (And it doesn't care about right recursion, either; I just avoid it because it uses more parser stack.) 
In this case, I could make it LL by left-factoring and eliminating left recursion, but:</p> <ul> <li>that's a lot of work;</li> <li>the result, like the example you provide, is very hard to read and therefore does not serve to document the syntax of the grammar;</li> <li>and it might not have worked, because LR is simply a more powerful parsing algorithm.</li> </ul> <p>In short, it's not necessary to try to force a grammar to follow a particular model of how a parser might function. You can --and, in my opinion, should-- aim for grammars which capture the essence of the syntax in a natural and easily understood way (easily understood by programmers not well-versed in parsing theory, that is), and that's often easier to do if you don't have to apply the grammatical transformations necessary to make linear-time top-down parsing possible.</p> <p>It's true that LR parsing can be considered a dual of LL, in a certain sense. But it's not that LR is LL applied to the reverse of the input, nor is it the case that some simple grammar transformation (like reversing the grammar) will turn a language into an LL grammar. LR parsing is strictly more powerful than LL parsing; proofs and examples can be found in any textbook on formal language theory. The duality shows up in a comparison of certain algorithms, in particular the algorithms for determining whether a given CFG is LL(k)/LR(k). These algorithms are not the same --they don't have the same asymptotic complexity, for example-- but there are aspects which show the parallels. These are explored in <a href="https://doi.org/10.1016/S0019-9958(82)91016-6" rel="nofollow noreferrer"><em>Sippu &amp; Soisalon-Soininen, On LL(k) Parsing</em> (1988)</a>, which is definitely worth reading if you're interested in the theory. 
(It won't help you write better grammars, though :-) ).</p> <p>The dualism is also noted in the comparison between GLL and GLR parsing in various papers by <a href="https://www.cs.rhul.ac.uk/research/languages/csle/GLLparsers.html" rel="nofollow noreferrer">Elizabeth Scott and Adrian Johnstone</a>. Again, these are different algorithms; both can parse any context-free language in polynomial time (but not always linear time), but particular languages can have different asymptotic performance with the two frameworks. Scott &amp; Johnstone argue that the GLL framework produces a simpler algorithm. The value of both GLR and GLL parsing is that it does not require any grammar transformation to make a grammar parsable. It might still require some work to make the language unambiguous, which is useful for practical formal processing. But even that is not a requirement. For example, C++ has a number of ambiguities which can only be resolved by implementing criteria described in the text of the C++ standard, some of which have no context-free description. Even so, it's demonstrably practical, although not necessarily parsable in linear time.</p> <p>As a final note, I recognise that it is not always as easy to write grammars as my above example seems to indicate. Many languages occasionally require more than one token of lookahead; while it is true that you can always transform an LR(k) grammar into an LR(1) grammar from which the same parse tree can be extracted, that transformation is not of much practical assistance, since the resulting grammars are enormous. If you have a language whose &quot;natural&quot; grammar is LR(2) (or worse), you're generally better off using some lexical or semantic hack or switching to GLL/GLR parsing, rather than transforming the grammar into something parsable with an LR(1) parser. 
A similar comment applies to certain parsing ambiguities easily resolved with operator-precedence techniques but tedious to resolve using a context-free grammar, which is why most parser generators provide precedence relationships as a disambiguation technique. (One example of such an ambiguity is the infamous &quot;dangling-else&quot;; an LALR(1) grammar exists, but it's easy to get wrong and it is certainly not self-documenting, while the precedence comparison is almost trivial.)</p> <h3>Notes</h3> <ol> <li><p>Because that grammar does not include statements or expressions, it's actually hiding one pain point which may show up in a grammar for the full language, related to the difficulty in deciding whether a given <code>IDENT</code> will turn out to be the name of a type or the name of a variable or function. This is particular painful in C, where it cannot be solved without context-sensitivity (that is, letting the lexer look the name up in a symbol table to decide whether it has previously been declared as a type). Extra-grammatical semantic checks like that can be easier in an LL grammar because there's no need to worry about the semantic action being executed speculatively for a production which later turns out to be inapplicable. Most LR parser generators don't allow semantic checks; ANTLR, on the other hand, uses them as a standard practice.</p> </li> <li><p>The two-volume textbook by the same authors on Parsing Theory (<a href="https://www.springer.com/book/9783540137207" rel="nofollow noreferrer">Volume 1</a> and <a href="https://www.springer.com/book/9783540517320" rel="nofollow noreferrer">Volume 2</a>) is also a treasure. Sadly, unlike the above-mentioned article, you probably will only find a freely readable version in an academic library, and the outrageous costs of academic texts makes purchase only practical for the highly-motivated or independently-wealthy.</p> </li> </ol>
377
tokenization
Find all substrings that fit the mask with asterisks
https://cs.stackexchange.com/questions/91603/find-all-substrings-that-fit-the-mask-with-asterisks
<p>Consider the following problem.</p> <p>Given a string $text$ containing only letters and a string $mask$ containing letters and asterisks (*), where an asterisk stands for zero or more letters, find all substrings of $text$ that fit $mask$.</p> <p>Here is an example: let $text=cabccbacbacab$ and $mask=\textbf{ab}*\textbf{ba}*\textbf{c}$. The algorithm should give the substrings $abccbac$ and $abccbacbac$, because $abccbac = \textbf{ab} + cc + \textbf{ba} + \textbf{c}$ and $abccbacbac = \textbf{ab} + ccbac + \textbf{ba} + \textbf{c}$.</p> <p>I have tried splitting the mask into tokens (for the above example, $\textbf{ab},\textbf{ba},\textbf{c}$), finding all occurrences of every token in $text$, taking the Cartesian product of all relevant occurrences, and checking every combination against the condition.</p> <p>Is there a more efficient solution to this problem?</p>
<p>Yes, there is a more efficient algorithm. Your algorithm can take exponential time.</p> <p>You can check whether there exists any match in $O(nm)$ time, where $n$ is the length of text and $m$ is the length of mask, and find all matches in $O(n^2m)$ time. I'll show two solutions, one using dynamic programming and one using graph search. You can pick whichever you find easier to understand.</p> <p>I don't know whether you can do even better.</p> <h1>Dynamic programming</h1> <p>Build an array $A[i,j]$, where $A[i,j]$ is true if some prefix of $\text{text}[i..n]$ matches $\text{mask}[j..m]$. There's a recursive formula for $A[i,j]$:</p> <p>$$\begin{align*} A[i,j] &amp;= \text{True} \qquad &amp;&amp;\text{if } j=m+1\\ A[i,j] &amp;= A[i+1,j+1] \qquad &amp;&amp;\text{if } \text{mask}[j]=\text{text}[i]\\ A[i,j] &amp;= A[i,j+1] \lor \cdots \lor A[n,j+1] \qquad &amp;&amp;\text{if } \text{mask}[j]=*\\ A[i,j] &amp;= \text{False} \qquad &amp;&amp;\text{otherwise} \end{align*}$$</p> <p>If you fill this in, in the usual way, you get a $O(nm^2)$ time algorithm. If you additionally keep track of $B[i,j] = A[i,j+1] \lor \cdots \lor A[n,j+1]$ and fill in entries in the right order, you get a $O(nm)$ time algorithm.</p> <p>Once you have filled in the matrix, you can find all substrings of text that match: each entry $A[i,1]$ that is true corresponds to one or more substrings of text that match (the substring starts at index $i$ of text). You can adapt the above algorithm to enumerate all those matching substrings by repeating the above computation once per possible ending place of the match.</p> <p>There may be even faster methods, using ideas from <a href="https://en.wikipedia.org/wiki/String_searching_algorithm" rel="nofollow noreferrer">string matching</a> and/or <a href="https://en.wikipedia.org/wiki/Regular_expression" rel="nofollow noreferrer">regular expression matching</a>.</p> <h1>Graph search</h1> <p>Build a directed graph on $nm$ vertices. 
Each vertex is of the form $\langle i,j \rangle$, which we think of as corresponding to the problem of checking whether some prefix of $\text{text}[i..n]$ matches $\text{mask}[j..m]$. Now add the following edges:</p> <ul> <li>Add the edge $\langle i,j \rangle \to \langle i+1,j+1 \rangle$ if mask$[j]$ = text$[i]$.</li> <li>Add the edge $\langle i,j \rangle \to \langle i,j' \rangle$ if mask$[j] = *$ and $j' &gt; j$.</li> </ul> <p>Finally, mark each vertex $\langle i,m+1 \rangle$ as "accepting".</p> <p>This is a directed acyclic graph; it has no cycles. Now, for each $i$, find all accepting vertices that are reachable by some path starting at the vertex $\langle i,1 \rangle$. If $\langle i',m+1 \rangle$ is reachable from $\langle i,1 \rangle$, that means that the substring text$[i..i'-1]$ matches mask, so you can output this substring. This computation can be done in $O(nm)$ time per starting point using breadth-first search, for a total of $O(n^2m)$ time.</p>
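<p>For concreteness, here is a short Python sketch of the dynamic-programming idea. This is the straightforward variant that memoises the recurrence once per candidate substring (roughly $O(n^2)$ candidate checks, each in $O(nm)$), not the optimised $O(nm)$ table fill; the function names are mine.</p>

```python
from functools import lru_cache

def mask_matches(s, mask):
    # True iff the whole of s matches mask, where '*' matches zero or
    # more letters. Memoised version of the recurrence A[i,j] above,
    # specialised to an exact (rather than prefix) match.
    n, m = len(s), len(mask)

    @lru_cache(maxsize=None)
    def A(i, j):
        if j == m:
            return i == n
        if mask[j] == '*':
            return any(A(k, j + 1) for k in range(i, n + 1))
        return i < n and s[i] == mask[j] and A(i + 1, j + 1)

    return A(0, 0)

def all_matching_substrings(text, mask):
    # Try every substring: O(n^2) candidates, each checked by the DP.
    n = len(text)
    return sorted({text[i:j]
                   for i in range(n)
                   for j in range(i + 1, n + 1)
                   if mask_matches(text[i:j], mask)})

print(all_matching_substrings("cabccbacbacab", "ab*ba*c"))
# ['abccbac', 'abccbacbac'] -- the two substrings from the question
```

<p>On the question's example this reproduces exactly the two matches $abccbac$ and $abccbacbac$.</p>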
378
tokenization
How to find the size of gaps between entries in an array so that the first and last element of the array is filled and the gaps are all of equal size?
https://cs.stackexchange.com/questions/127994/how-to-find-the-size-of-gaps-between-entries-in-an-array-so-that-the-first-and-l
<p>I have an array a of n entries. I need to place a token on the first and last position of that array, so <code>a[0] = 1</code> and <code>a[n-1] = 1</code>.</p> <p>I now want to place additional tokens into that array such that the distance between consecutive indices i where <code>a[i] = 1</code> is greater than 2 (so placing a token on every index is invalid, and so is alternating between used and unused entries). Phrased differently: I want <code>sum(a) &lt; n/2</code>. The gap between consecutive tokens should always be the same, so, say, with an array of size 16,</p> <p><code>a[0] = 1, a[3] = 1, a[6] = 1, a[9] = 1, a[12] = 1, a[15] = 1</code></p> <p>would be a solution with a gap size of 2 (distance of 3).</p> <p>How do I find all gap sizes with which the array can be filled under the given constraints?</p> <p>Imagine a street between two crossroads where a lamppost should be placed on each crossroad, additional lampposts should be placed equidistant from each other, and for some reason only natural-number distances are allowed.</p> <p>(The actual problem I want to solve is where to place Sea Lanterns in my Minecraft project, so do not disregard this as an assignment question I just want a solution for.)</p>
<p>If I understand your problem correctly, the tokens (lanterns) can be placed every <span class="math-container">$x$</span> blocks (starting from <span class="math-container">$0$</span>) if and only if <span class="math-container">$x&gt;2$</span> is a divisor of <span class="math-container">$n-1$</span>.</p> <p>For example, if the array has <span class="math-container">$n=31$</span> elements the valid values of <span class="math-container">$x$</span> are <span class="math-container">$3,5,6,10,15,$</span> and <span class="math-container">$30$</span>.</p>
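<p>In code, finding all valid spacings is just enumerating the divisors of <span class="math-container">$n-1$</span> that are greater than <span class="math-container">$2$</span> (a small illustrative sketch, not part of the answer above):</p>

```python
def valid_gaps(n):
    # Spacings x between consecutive lanterns that put a lantern on
    # index 0 and index n-1 with all gaps equal: x must divide n - 1,
    # and the question additionally requires x > 2.
    return [x for x in range(3, n) if (n - 1) % x == 0]

print(valid_gaps(31))  # [3, 5, 6, 10, 15, 30], as in the example
print(valid_gaps(16))  # [3, 5, 15]; distance 3 gives 0, 3, 6, 9, 12, 15
```
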
379
tokenization
Has there been a lexer that takes in much more than a regular language?
https://cs.stackexchange.com/questions/19837/has-there-been-a-lexer-that-takes-in-much-more-than-a-regular-language
<p>I understand the restrictions, because a regular language is expressive enough to allow all types of tokens. And even if some context is needed in many languages to tokenize properly, they all seem to be "approximately" regular languages.</p> <p>Still, I would be interested to know whether any attempt has been made, in any programming language (possibly an esoteric one), to completely eschew the conventional division between a type-3 lexer and a type-2 parser.</p>
<ul> <li><p>FORTRAN is famous for having some constructs that are difficult to lex, but those difficulties probably result more from the language having been designed before the classification was established (or at least before it was known in programming circles).</p></li> <li><p>Several languages are described with a type-3 lexer, but some of their characteristics are easier to handle by feeding information from the parser back to the lexer (C/C++ typedefs, C++ templates, Ada/VHDL attributes).</p></li> <li><p>Several languages have layout rules (Python, Haskell) which are usually not described as type-3 lexers (I don't know whether it is possible to do so or not).</p></li> </ul>
380
tokenization
Lexical analysis
https://cs.stackexchange.com/questions/82874/lexical-analysis
<p>Will the statement below cause any lexical error?</p> <pre><code>int a123c ; </code></pre> <p>As I understand it, <code>int</code> would be tokenized as a keyword, and a lexical error would occur when "a123c" is encountered, since it doesn't fall into any token category.</p> <p>I read this question: <a href="https://stackoverflow.com/questions/31369524/clarification-regarding-lexical-errors-in-c">https://stackoverflow.com/questions/31369524/clarification-regarding-lexical-errors-in-c</a></p> <p>There the accepted answer says that there is no lexical error.</p> <p>Also, will a statement like</p> <pre><code>int x = 192.24.43.13 ; </code></pre> <p>cause any lexical error?</p> <p>I am confused.</p>
<p>Which error message is generated by which phase of a compiler depends on the underlying language structure and the implementation of the compiler.</p> <p>Nevertheless, the lexical analyzer is responsible for generating tokens, so at this phase you can check whether some lexeme/token is valid or not. For example, you could check whether the symbol <code>$</code> belongs to the source language. It is similar to reading an English text and checking whether the words belong to the English dictionary.</p> <p>The syntax analyzer, on the other hand, checks whether the sequence of tokens constitutes a legal program structure by analyzing the sequence of tokens (not characters). For example, it may check whether a left curly bracket may follow the <code>if</code> keyword, or what token may follow what token in general. It is similar to reading an English text and checking whether "<em>He/She</em>" may be followed by a verb.</p> <p>Assuming you are using a C/C++ compiler, the first statement will not produce an error. You can read <a href="https://cs.stackexchange.com/questions/82264/how-does-lexical-analyzer-remove-white-spaces-from-source-file/82269#82269">this related</a> post.</p> <p>As for the second line, this is what I got when I tried to compile <code>int x = 192.24.43.13 ;</code></p> <p><code>error: invalid suffix '.43.13' on floating constant</code></p> <p>This seems to be produced by the lexical analyzer. It complains about the structure of the floating-point number.</p> <p>I also tried to compile the line <code>int x = 192.24 .43.13 ;</code> which produced</p> <p><code>error: expected ';' at end of declaration</code></p> <p>which seems to be produced by the syntax analyzer.</p> <p>Other possible lexical errors or warnings may be produced by the following lines:</p> <pre><code>char ch = '12'; /* multi-character character constant */ char *a = "sdsd /* missing terminating '"' character */ /* this is unterminated comment </code></pre>
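<p>To make the division of labour concrete, here is a toy lexer sketch. The token classes are invented for illustration and are far simpler than a real C lexer; the point is only that the lexer checks characters against token shapes, while anything about the <em>order</em> of tokens is left to the parser.</p>

```python
import re

# One alternative per illustrative token class: floating constant,
# integer constant, identifier/keyword, punctuation.
TOKEN = re.compile(r"\d+\.\d+|\d+|[A-Za-z_]\w*|[;=]")

def tokenize(src):
    tokens = []
    pos = 0
    while pos < len(src):
        if src[pos].isspace():
            pos += 1
            continue
        m = TOKEN.match(src, pos)
        if not m:
            # No token class matches here: a lexical error.
            raise ValueError(f"lexical error at {src[pos:]!r}")
        tokens.append(m.group())
        pos = m.end()
    return tokens

print(tokenize("int a123c ;"))   # ['int', 'a123c', ';'] -- no lexical error
# tokenize("int x = 192.24.43.13 ;") lexes '192.24' and then fails on '.43',
# much like the compiler's "invalid suffix" complaint above.
```
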
381
tokenization
What information do we get from a compiler&#39;s parse tree?
https://cs.stackexchange.com/questions/16647/what-information-do-we-get-from-a-compilers-parse-tree
<p>In the <a href="https://class.coursera.org/compilers/lecture/index" rel="nofollow">compiler course by Alex Aiken on Coursera</a>, more specifically lecture <a href="https://class.coursera.org/compilers/lecture/20" rel="nofollow">05-02 Context Free Grammars</a>, the professor says that CFGs give answers of the type yes/no, i.e. whether the given string of tokens is valid or not. He adds that it is also desirable to know <em>how</em> a particular string of tokens is in the language; for this purpose he introduces parse trees.</p> <p>Why is the "how" part important? </p>
<p><strong>Prelude</strong> It might be useful to be pedantic and start with a surprising fact: compilers do <strong>not</strong> use context free grammars, contrary to what you've been told. Instead they use something closely related but subtly different, which might be termed <strong>context-free transducers</strong> (please let me know if there's an official name for it). They relate to context free grammars as <a href="https://en.wikipedia.org/wiki/Mealy_machine" rel="nofollow noreferrer">Mealy-machines</a> do to finite state machines. </p> <p>The reason why CFGs are not used is that CFGs only give a yes/no answer. That's not enough for the subsequent stages of a compiler. Instead, subsequent stages need a good representation of the input program to work with. This representation is the abstract syntax tree (AST). Something similar happens in lexical analysis: contrary to what most compiler courses tell you, the lexer does <strong>not</strong> use a finite state machine or regular expression to nail down the lexical structure of a language, because regular expressions and finite state machines also only give a yes/no answer. We want the output of the lexical stage to be a token-list for consumption by the parser. Mealy machines deliver token lists.</p> <p>If this is the case, why do compiler courses usually use regular expressions and finite state machines for the lexical analysis and CFGs for parsing? Do compiler teachers lie to us? No, they want to help us understand:</p> <ol> <li><p>The concepts are really similar, e.g. a Mealy machine is just a finite state machine where every transition carries not just an input action (as they do with finite state machines) but also a corresponding output action. 
Likewise, context-free transducers are just CFGs where every production is also associated with an output.</p></li> <li><p>Teachers don't want to be overly formal, and expect the students to bridge the gap between finite state machines and lexer / CFG and parser themselves. In practice, students almost always do.</p></li> </ol> <p><strong>Main point.</strong> With this pedantic caveat we can come to the original question: what is the parse tree used for?</p> <p>The parse tree captures "how a string is in the CFG". The parse tree is used to construct the <strong><a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="nofollow noreferrer">abstract syntax tree</a></strong> (AST), which is a <strong>concise</strong> representation of the program that is used by later phases of the compiler, in particular the type checker (for statically typed languages) and the code generator. The advantage of ASTs over other program representations such as strings is that ASTs make access to the immediate sub-programs of a program easy, e.g. the program</p> <pre><code>if C then P else Q </code></pre> <p>has three immediate sub-programs, namely C, P and Q. The AST for the program has three pointers to the sub-programs:</p> <p><img src="https://i.sstatic.net/VvKZI.jpg" alt="enter image description here"></p> <p>Both code generation and type checking involve the recursive invocation of the code generator/type checker on the immediate sub-programs (plus some glue code). Hence ASTs provide exactly the information needed for efficient type checking and code generation.</p> <p>The relationship between parse trees and ASTs is simple. The parse tree contains the exact information about how a string 'fits' into the CFG. Every production applied when consuming the input string is noted down in the order used. This information has a natural tree shape, whence the name "parse tree". An AST is a parse tree with redundant information removed.
An example of redundant information is properly balanced brackets. Another example is the node labelling that records the aforementioned information about which productions were used to derive the string. Such information is important for checking whether the input string (or, more precisely, token list) is syntactically valid, but once that has been ascertained, this information is no longer needed for compilation and hence discarded.</p> <p>Let us look at an example. Consider the arithmetic expression $4 *(3+17)$ in the obvious grammar of arithmetic expressions: $$ E \ \ \rightarrow\ \ E + E \ |\ E * E \ |\ ( E ) \ |\ 0 \ |\ 1 \ |\ 2 \ |\ ... $$ Let's ignore the ambiguity and left-recursion in that grammar. Here is a plausible parse tree for $4 *(3+17)$</p> <p><img src="https://i.sstatic.net/IKw2I.jpg" alt="enter image description here"></p> <p>(Note that the parse tree is usually not constructed explicitly, but is implicit in the structure of the recursion throughout the parsing process.) The corresponding AST is simpler, but still contains all relevant information, including the precedence of the addition over the multiplication, in its pointer structure:</p> <p><img src="https://i.sstatic.net/hfkUk.jpg" alt="enter image description here"></p> <p><strong>Summary:</strong> compilers construct ASTs (a good representation of the program for future compiler stages) from parse trees (which tell "how" the program is in the CFG).</p>
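<p>As a small illustration (my own sketch, not from the course): an AST can be a handful of node classes whose pointer structure alone encodes the precedence, and a later phase is just a recursion over the immediate sub-programs.</p>

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class BinOp:          # pointers to the immediate sub-programs
    op: str           # '+' or '*'
    left: object
    right: object

def evaluate(e):
    # A later compiler phase -- here an evaluator, but a type checker or
    # code generator traverses the AST the same way -- recurses on the
    # immediate sub-programs plus a little glue code per node kind.
    if isinstance(e, Num):
        return e.value
    l, r = evaluate(e.left), evaluate(e.right)
    return l + r if e.op == '+' else l * r

# AST for 4 * (3 + 17): the brackets are gone; the precedence of the
# addition over the multiplication lives in the pointer structure.
ast = BinOp('*', Num(4), BinOp('+', Num(3), Num(17)))
print(evaluate(ast))  # 80
```
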
382
tokenization
Efficient algorithm for iterated find/replace
https://cs.stackexchange.com/questions/28307/efficient-algorithm-for-iterated-find-replace
<p>I'm looking for an algorithm for doing iterated find/replace, where the act of finding the replacement list of tokens for a given find is slow.</p> <p>Specifically: I have a function, <code>f</code>, that maps a sequence of tokens to either a shorter list of tokens or None. However, it is slow.</p> <p>I want to repeatedly try to replace the (first, in the case of ties) shortest unexplored subsequence in the input with the result of applying the function to it, assuming that there is a valid replacement for it. However, the naive way to do this ends up with exponential runtime. Is there a better algorithm for doing this?</p> <p>(So, for example, if <code>f</code> maps <code>ba</code> -> <code>b</code>, <code>aaa</code> -> <code>ab</code>, and <code>aba</code> -> <code>bb</code>, and I apply the algorithm to <code>aaaaa</code>, I want to get (<code>aaaaa</code>-><code>abaa</code>-><code>aba</code>->)<code>ab</code>, not (<code>aaaaa</code>-><code>abaa</code>-><code>bba</code>->)<code>bb</code> or any of the other sequences of replacements possible, if any.)</p> <p>(Note: this example uses characters for simplicity)</p>
<p>This isn't a complete answer, but it provides some context that's far too long to put in a comment. What you've described is an instance of a <a href="http://en.wikipedia.org/wiki/Semi-Thue_system" rel="nofollow noreferrer">string rewriting system</a>, also known as a semi-Thue system. Start with a finite alphabet, $\Sigma$, and a <a href="http://en.wikipedia.org/wiki/Binary_relation" rel="nofollow noreferrer">binary relation</a>, $R$, on strings over $\Sigma$. For your example, we'll have $R$ defined by</p> <p>$$ ba\, R\, b,\quad aaa\, R\, ab,\quad aba\, R\, bb $$ We can then define another relation, $\Rightarrow$, on strings by, basically, applying $R$ to substrings. For two strings $x, y$, define $x\Rightarrow y$ if and only if $$ \text{there exist strings }p, q, r, s\text{ such that } x=prq, y=psq,\text{ and }r\, R\ s $$ In your example, we have, for instance, the chain of rewrites $aaaaa\Rightarrow abaa\Rightarrow aba\Rightarrow bb$, and there we stop, since we cannot reduce $bb$ any further. In a rewriting system, a string like $bb$ which cannot be rewritten is called <em>irreducible</em> or sometimes a <em>normal form</em>. If there is a chain of rewrites starting from a seed string $x$ that leads to a normal form we denote the result by $x\downarrow$. Again, in your example, the normal forms derivable from $aaaaa$ are $ab, bb, aab, abb$. </p> <p>Your question in its most general form is</p> <blockquote> <p><strong>Question 1</strong>. In a string rewriting system, is there a computationally efficient way to find a normal form for a given string $x$?</p> </blockquote> <p>In general, the answer is no, there isn't. The problem comes in part from the fact that some (or all) seed strings might never lead to a normal form. 
This could arise in a rewriting system that permits chains like $$ x_1\Rightarrow x_2\Rightarrow \cdots\Rightarrow x_i\Rightarrow x_{i+1}\Rightarrow\cdots\Rightarrow x_i $$ Fortunately, you've imposed a helpful constraint, namely that whenever $x\,R\,y$ then the lengths satisfy $|x|&gt;|y|$. We'll call such a system <em>monotone</em> or <em>length-reducing</em>. In a monotone rewriting system we're always going to have a normal form for any seed string, since every rewrite operation strictly decreases the length of the string. A rewriting system for which every string has at least one normal form is called <em>noetherian</em>, by the way. Refining the question gives us</p> <blockquote> <p><strong>Question 2</strong>. In a monotone string rewriting system, is there a computationally efficient way to find a normal form for a given string $x$?</p> </blockquote> <p>The answer to this question is yes, of course: given a seed string $x$, simply apply rewrite rules until you reach a normal form. More systematically, if $|x| = n$ and you have $m$ base rewrite rules $\{w_1\,R\,z_1,w_2\,R\,z_2,\dots w_m\,R\,z_m\}$ with $M=\max\{|w_i|\}$, then we'll require $O(nmM)$ character comparisons, even if we don't use efficient <a href="http://en.wikipedia.org/wiki/String_searching_algorithm" rel="nofollow noreferrer">string search algorithms</a>.</p> <p>We haven't quite gotten to your real question, though, since you also stipulate that at each step you will choose to apply your base rewrite rules in order of the lengths of their left-hand sides, $|w_i|$, from shortest to longest. It's not clear that this is a useful constraint; in fact, some string search algorithms (Boyer-Moore, in particular) are more efficient when searching for longer substrings than they are for shorter ones. It might be helpful to impose another constraint, though: when searching for a substring to replace, always choose the leftmost substring, regardless of length.
This would put you in a situation involving prefixes and there's a fair amount known about this. Choosing rightmost substrings as replacement candidates might also be a useful avenue.</p> <p>Finally, we come to the most interesting question:</p> <blockquote> <p><strong>Question 3</strong>. In a monotone string rewriting system, is there a computationally efficient way to find a normal form of minimal length?</p> </blockquote> <hr> <p>This should help you to find answers. There's good news for your search, namely that there's a <em>lot</em> of material out there, both in books (yes, whole books have been written on string rewriting systems) and online. I did a search for "string rewriting systems" and got a million and a half hits. The bad news is that there's a <em>lot</em> of material out there, so finding an answer is likely going to take some serious drilling. Frankly, I don't know the answer to Question 3, but my uninformed guess is that the answer is affirmative. If you haven't gotten a definitive answer in a few days, you might want to ask your question, suitably reworded, over at <a href="https://cstheory.stackexchange.com/">theoretical computer science</a>.</p>
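<p>For what it's worth, the naive procedure from Question 2 can be sketched in a few lines, using the asker's shortest-first, leftmost-on-ties strategy (characters stand in for tokens, as in the question's example):</p>

```python
def normal_form(rules, s):
    # rules: list of (lhs, rhs) pairs with len(rhs) < len(lhs), so the
    # system is monotone and every rewrite chain reaches a normal form.
    while True:
        # Rules whose left-hand side occurs in s, keyed so the shortest
        # lhs wins and the leftmost occurrence breaks ties.
        candidates = [(len(lhs), s.find(lhs), lhs, rhs)
                      for lhs, rhs in rules if lhs in s]
        if not candidates:
            return s  # irreducible: s is a normal form
        _, i, lhs, rhs = min(candidates)
        s = s[:i] + rhs + s[i + len(lhs):]

rules = [("ba", "b"), ("aaa", "ab"), ("aba", "bb")]
print(normal_form(rules, "aaaaa"))  # aaaaa -> abaa -> aba -> ab, prints 'ab'
```

<p>Note that this finds <em>a</em> normal form under one particular strategy; it says nothing about Question 3's minimal-length normal form.</p>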
383
tokenization
Modelling a dependency of multiple transitions on data in one place
https://cs.stackexchange.com/questions/57900/modelling-a-dependency-of-multiple-transitions-on-data-in-one-place
<p>We are modeling our process using a colored Petri net. One of the limitations we have is that when multiple transitions depend on one place, only one of those transitions will fire because then the token and data is consumed.</p> <p>How can we model our process, or what type of Petri net property can we use, to facilitate the fact that multiple transitions can depend on one place and <em>all</em> have to be able to consume the same token?</p>
<p>I do not know if there is a variant of Petri nets that captures your intent exactly -- there <em>probably</em> is, there are so many -- but the feature can be expressed with regular Petri nets.</p> <p>Just add a transition that creates tokens in multiple places, one per original transition. Then, all three follow-up transitions can fire after the preceding one is done.</p> <p><a href="https://i.sstatic.net/mOmC7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mOmC7.png" alt="enter image description here"></a></p> <p>Introduce whatever syntax you want to express this neatly, and implement it using multiple places in the backend.</p>
384
tokenization
How to implement a maximal munch lexical analyzer by simulating NFA or running DFA?
https://cs.stackexchange.com/questions/97374/how-to-implement-a-maximal-munch-lexical-analyzer-by-simulating-nfa-or-running-d
<p>I'm planning to implement a lexical analyzer by either simulating NFA or running DFA using the input text. The trouble is, the input may arrive in small chunks and the memory may not be enough to hold one very long token in the memory.</p> <p>Let's assume I have three tokens, "ab", "abcd" and "abce". The NFA I obtained is this: <a href="https://i.sstatic.net/8dDT8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8dDT8.png" alt="enter image description here"></a></p> <p>And the DFA I obtained is this: <a href="https://i.sstatic.net/bZ40r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZ40r.png" alt="enter image description here"></a></p> <p>Now if the input is "abcf", the correct action would be to read the token "ab" according to the maximal munch rule and then produce a lexer error token. However, both the DFA and the NFA have state transitions even after "ab" has been read. Thus, the maximal munch rule encourages to keep on reading after "ab" and read the "c" as well.</p> <p>How do maximal munch lexers solve this issue? Do they store the entire token in memory and do backtracking from "abc" to "ab"?</p> <p>One possibility would be to run the DFA with a "generation index", potentially multiple generations and multiple branches within generation at a time. So, the DFA would go from:</p> <pre><code>{0(gen=0,read=0..0)}, </code></pre> <p>read "a",</p> <pre><code>{1(gen=0,read=0..1)}, </code></pre> <p>read "b",</p> <pre><code>{2+(gen=0,read=0..2,frozen), 2+(gen=0,read=0..2), 0(gen=1,read=2..2)}, </code></pre> <p>read "c",</p> <pre><code>{2+(gen=0,read=0..2,frozen), 3(gen=0,read=0..3)}, </code></pre> <p>read "f",</p> <pre><code>{2+(gen=0,read=0..2,frozen)}. </code></pre> <p>Then the lexer would report state 2+, and since there is no option to continue, would report an error state. 
Not sure how well this idea would work...</p> <p>For "abcd", it would work like this:</p> <pre><code>{0(gen=0,read=0..0)}, </code></pre> <p>read "a",</p> <pre><code>{1(gen=0,read=0..1)}, </code></pre> <p>read "b",</p> <pre><code>{2+(gen=0,read=0..2,frozen), 2+(gen=0,read=0..2), 0(gen=1,read=2..2)}, </code></pre> <p>read "c",</p> <pre><code>{2+(gen=0,read=0..2,frozen), 3(gen=0,read=0..3)}, </code></pre> <p>read "d",</p> <pre><code>{2+(gen=0,read=0..2,frozen), 4+(gen=0,read=0..4,frozen), 4+(gen=0,read=0..4), 0(gen=1,read=4..4)}. </code></pre> <p>Now of these, it's possible to drop the first (there is a longer match) and the third (there are no state transitions out), leaving:</p> <pre><code>{4+(gen=0,read=0..4,frozen), 0(gen=1,read=4..4)}. </code></pre> <p>Then the lexer would indicate "match: 4+" and continue reading input from state 0 using generation index 1.</p> <p>Is this idea of mine, running DFAs nondeterministically, how maximal munch lexical analyzers work?</p>
<p>There are two ways to handle this issue:</p> <ol> <li><p>The most common implementation (the one used in lex, flex and other similar scanner generators) is to always recall the last accept position and state (or accept code). When no more transitions are possible, the input is backed up to the last accept position and the last accept state is reported as the accepted token.</p> <p>If you're trying to do streaming input, you will need a fallback buffer to handle this case.</p></li> <li><p>Alternatively, if the scan reaches an accepting state but another transition is available, we can start performing two scans in parallel: one on the assumption that the transition will be taken, and the other on the assumption that it will not. The second thread may need to fork again, although there is a maximum number of forks, as with generalised LR parsing. In this model, we need to keep a buffer of possible "future" tokens which will be processed if the optimistic thread fails.</p></li> </ol> <p>I don't know of a practical implementation of the second strategy in a general purpose scanner generator, although there are some papers about how you might do it. Apparently it can be done in time and space linear to the size of the input, which is (in theory) better than the quadratic time consumption of backtracking.</p> <p>However, it is pretty rare that you find a token grammar which needs to allow unrestricted backtracking. The most common cause of unrestricted backtracking is failing to take into account the fact that things like quoted strings might not be correctly terminated in an incorrect program, so you end up with just the rule:</p> <pre><code>["]([^"]|\\.)*["] { Accept a string } </code></pre> <p>instead of the pair of rules</p> <pre><code>["]([^"]|\\.)*["] { Accept a string. } ["]([^"]|\\.)* { Reject an unterminated string. 
} </code></pre> <p>(Maximal munch will guarantee that the second rule will only be used if the first rule cannot match.)</p> <p>So while the second strategy may have some theoretical appeal, it seems to me that it's of little practical use. Flex even has some options which will help you to identify rules which could backup on failure, and this can help you craft your lexical grammar to avoid the problem. It's not always easy to eliminate 100% of backing up (although it often is, and if you manage to do so, flex will reward you by generating a faster lexer), but it's pretty rare to find a lexical grammar which requires more than a few characters of back-up, and the cost of a small fallback buffer is really not worth worrying about, in comparison with the complexity of the alternative (which, of course, also needs extra memory.)</p> <p>I have seen intermediate strategies for particular grammars. If you know your grammar well enough, you could hand-build the speculative tokenisation in order to avoid backing up. I've seen that, years ago, in SGML lexers which eliminate the rescan of <code>&gt;</code> following a tagname by including a redundant rule which recognised a tag immediately followed by a <code>&gt;</code> and handled both tokens at once. That must have saved a few cycles, but it's hard to believe that it really made a huge difference, and the difference would likely be even less significant today. Still, if you are the type who obsesses about saving every possible cycle, you could do it. </p>
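The first strategy above (remember the last accepting position, back up on failure) can be sketched in a few lines. This is a toy illustration, not flex's actual table-driven implementation: the token set {"ab", "abcd", "abce"} from the question is hard-coded, and the DFA is simulated with prefix tests instead of a transition table.

```python
# Maximal munch with backtracking to the last accepting position.
# Token set from the question; assumed for illustration only.
TOKENS = ["ab", "abcd", "abce"]

def longest_match(text, pos):
    """Return (token, next_pos) for the longest token starting at pos,
    or (None, pos) if nothing matches."""
    last_accept = None                       # most recent (token, end) accept
    for end in range(pos + 1, len(text) + 1):
        prefix = text[pos:end]
        if prefix in TOKENS:                 # remember the last accept
            last_accept = (prefix, end)
        # Stop as soon as no token can extend this prefix (DFA is stuck).
        if not any(t.startswith(prefix) for t in TOKENS):
            break
    return last_accept if last_accept else (None, pos)

def tokenize(text):
    pos, out = 0, []
    while pos < len(text):
        tok, nxt = longest_match(text, pos)
        if tok is None:                      # no accept seen: error token
            out.append(("ERROR", text[pos]))
            pos += 1
        else:                                # back up to the last accept
            out.append(("TOKEN", tok))
            pos = nxt
    return out
```

On the input "abcf" from the question, this reads ahead to "abc", fails on "f", and backs up to report the token "ab" followed by error tokens, exactly as the maximal munch rule requires.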
385
tokenization
Why is $O(nk)$ an upper bound for the $k$-gossip problem?
https://cs.stackexchange.com/questions/146390/why-is-onk-an-upper-bound-for-the-k-gossip-problem
<p>I am studying the <span class="math-container">$k$</span>-gossip problem on dynamic graphs against an adaptive adversary. Essentially, we are given a set of tokens <span class="math-container">$\mathcal{T}$</span> which are distributed amongst the nodes such that each token is distributed to at least one node. Importantly, the nodes do not know the value of <span class="math-container">$k$</span>. The problem is solved when all nodes know all <span class="math-container">$k$</span> tokens.</p> <p>The paper <a href="https://www.ccs.neu.edu/home/rraj/Pubs/TokenForwarding.pdf" rel="nofollow noreferrer">Information Spreading in Dynamic Networks</a> write in their abstract:</p> <blockquote> <p>Our main result is an <span class="math-container">$\Omega(nk/\log{n})$</span> lower bound on the number of rounds needed for any deterministic token-forwarding algorithm to solve <span class="math-container">$k$</span>-gossip. This resolves an open problem raised in [33], improving their lower bound of <span class="math-container">$\Omega(n\log{k})$</span>, and matching their upper bound of <span class="math-container">$O(nk)$</span> to within a logarithmic factor.</p> </blockquote> <p><a href="https://people.csail.mit.edu/rotem/dynamicgraphs.pdf" rel="nofollow noreferrer">The paper they mention</a> for the <span class="math-container">$O(nk)$</span> upper bound gives a protocol for <span class="math-container">$n$</span>-gossip but not for <span class="math-container">$(k&lt;n)$</span>-gossip. Is there an easy way to see the <span class="math-container">$O(nk)$</span> upper bound for <span class="math-container">$k$</span>-gossip for dynamic graphs?</p> <p>We know that <span class="math-container">$1$</span>-gossip can be solved in <span class="math-container">$n-1$</span> rounds, where <span class="math-container">$n$</span> is the size of the network if the nodes know <span class="math-container">$n$</span>. 
So could we maybe do <span class="math-container">$1$</span>-gossip <span class="math-container">$k$</span> times? The only issue is that the nodes do not know <span class="math-container">$n$</span> or <span class="math-container">$k$</span>, so we can't use this approach.</p>
386
tokenization
Why is using a lexer/parser on binary data so wrong?
https://cs.stackexchange.com/questions/899/why-is-using-a-lexer-parser-on-binary-data-so-wrong
<p>I often work with <a href="http://en.wikipedia.org/wiki/Lexical_analysis">lexers</a>/<a href="http://en.wikipedia.org/wiki/Parsing">parsers</a>, as opposed to parser combinators, and see people who never took a class in parsing ask about parsing binary data. Typically the data is not only binary but also context-sensitive. This basically leads to having only one type of token: a token for byte. </p> <p>Can someone explain why parsing binary data with a lexer/parser is so wrong with enough clarity for a CS student who hasn't taken a parsing class, but with a footing on theory?</p>
<p>In principle, there is nothing wrong.</p> <p>In practice,</p> <ul> <li><p>most non-textual data formats I know are not context-free and are therefore not suitable for common parser generators. The most common reason is that they have length fields giving the number of times a production has to be present.</p> <p>Obviously, having a non-context-free language has never prevented the use of parser generators: we parse a superset of the language and then use <em>semantic rules</em> to reduce it to what we want. That approach could be used for non-textual formats if the result would be deterministic. The problem is to find something other than counts to synchronize on, as most binary formats allow arbitrary data to be embedded; length fields tell you how much it is.</p> <p>You can then start playing tricks like having a manually written lexer able to handle that with feedback from the parser (lex/yacc handling of C uses that kind of trick to handle typedefs, for instance). But then we come to the second point.</p></li> <li><p>most non-textual data formats are quite simple (even if they are not context-free). When the counts mentioned above are ignored, the languages are regular, LL(1) at worst, and are thus well suited for manual parsing techniques. And handling counts is easy for manual parsing techniques like recursive descent.</p></li> </ul>
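The answer's last point — that length fields are trivial to handle in a hand-written recursive-descent reader — can be illustrated with a sketch. The format here is a made-up example, not any real specification: one byte giving a count N, followed by N big-endian 16-bit values.

```python
import struct

# Hand-written recursive-descent reader for a hypothetical length-prefixed
# record: 1 count byte N, then N big-endian unsigned 16-bit integers.
# The count makes the format non-context-free, yet reading it is a loop.
def parse_record(data, pos=0):
    n = data[pos]
    pos += 1
    values = []
    for _ in range(n):
        (v,) = struct.unpack_from(">H", data, pos)  # one 16-bit value
        values.append(v)
        pos += 2
    return values, pos
```

A grammar-based parser would have to express "exactly N repetitions" with semantic checks; the hand-written reader simply consumes the count and loops.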
387
tokenization
ir - Using document-term [Boolean] incidence matrix for answering a query
https://cs.stackexchange.com/questions/161459/ir-using-document-term-boolean-incidence-matrix-for-answering-a-query
<p>The book &quot;Introduction to Information Retrieval&quot; <a href="https://nlp.stanford.edu/IR-book/html/htmledition/an-example-information-retrieval-problem-1.html" rel="nofollow noreferrer">talks about the term-document incidence matrix</a> for retrieving documents that contain/not-contain certain tokens drawn from a query. This representation naturally lends itself to retrieving documents in response to a query.</p> <p>However, considering the document-term [boolean] incidence matrix, wherein each document is a boolean-vector marking the tokens that do/don't occur in the document;</p> <p>Q: <strong>Is there a canonical way for retrieving documents using the document-term [Boolean] incidence matrix?</strong></p> <p><em><strong>My thoughts</strong></em>: consider the same sample query as used in the text, <em>Brutus <code>&amp;</code> Caesar <code>&amp;</code> ~Calpurnia</em>; the only possible solution I can see is as follows</p> <ul> <li>iterate over all the documents, and for each document: <ul> <li>only consider the bits corresponding to the tokens found in the query expression, i.e., for the above sample query these would be <em>Brutus</em>, <em>Caesar</em>, and <em>Calpurnia</em>.</li> <li>evaluate the query expression</li> <li>consider the document only if the above-step evaluates to <code>1</code></li> </ul> </li> </ul>
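The document-at-a-time evaluation proposed above can be sketched directly. The vocabulary order and the toy matrix are assumptions made up for illustration; rows are documents and columns are terms.

```python
# Document-term Boolean matrix: rows = documents, columns = terms.
# Evaluate "Brutus AND Caesar AND NOT Calpurnia" one document at a time.
vocab = {"Brutus": 0, "Caesar": 1, "Calpurnia": 2}
doc_term = [
    [1, 1, 0],   # doc 0: Brutus and Caesar, no Calpurnia -> match
    [1, 1, 1],   # doc 1: also mentions Calpurnia         -> no match
    [0, 1, 0],   # doc 2: Caesar only                     -> no match
]

def matches(row):
    # Only look at the bits for terms that occur in the query.
    b, c, cal = (row[vocab[t]] for t in ("Brutus", "Caesar", "Calpurnia"))
    return bool(b and c and not cal)

hits = [i for i, row in enumerate(doc_term) if matches(row)]
```

This is exactly the scan described in the bullets: one pass over the documents, evaluating the Boolean expression per row. (The term-document layout in the book lets you instead intersect whole term rows at once, which is why it is the canonical orientation.)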
388
tokenization
Question about word embeddings in a specific language model - GPT-2
https://cs.stackexchange.com/questions/116184/question-about-word-embeddings-in-a-specific-language-model-gpt-2
<p>How were the <a href="https://openai.com/blog/better-language-models/" rel="nofollow noreferrer">GPT-2</a> token embeddings constructed? </p> <p>The authors mention that they used Byte Pair Encoding to construct their vocabulary. But BPE is a compression algorithm that returns a list of subword tokens that would best compress the total vocabulary (and allow rare words to be encoded efficiently).</p> <p>My question is: how was that list of strings turned into the vectors that they actually used for training the model? The papers they published on the <a href="https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow noreferrer">original GPT</a> and its <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow noreferrer">follow-up GPT-2</a> don't seem to specify those details.</p>
389
tokenization
Context free grammar for nested arrays separated by commas
https://cs.stackexchange.com/questions/55563/context-free-grammar-for-nested-arrays-separated-by-commas
<p>I have to define a context free grammar for the following rules:</p> <p>(i) A pair of square bracket tokens [] surrounding zero or more values separated by commas. (ii) A value can be another array or a number.</p> <p>A number is represented by the token <code>NUMBER</code>. So for example, <code>[NUMBER, [NUMBER, NUMBER], NUMBER]</code>. is valid.</p> <p>I am stuck as how to approach this.</p> <p>My intuition is always to look at the question and see that <code>S-&gt;LSQ VALUE RSQ, VALUE-&gt;VALUE COMMA VALUE | VALUE | ARRAY | e | NUMBER, ARRAY -&gt; LSQ NUMBER RSQ, NUMBER -&gt;NUMBER</code>. But I know this slips up.</p> <p>What steps can I take to ensure I am always thinking in the right way?</p>
<p>Your current implementation doesn't enforce the first condition, "A pair of square bracket tokens [] surrounding zero or more values separated by commas": an empty string or a NUMBER on its own would be accepted by your grammar.</p> <p>You could use the following CFG to maintain the integrity of the constraints:</p> <p>array ::= [ ] | [ element ]</p> <p>element ::= value | value , element</p> <p>value ::= array | NUMBER</p> <p>To derive [NUMBER, [NUMBER, NUMBER], NUMBER], start with array ::= [ element ] and continue with a leftmost derivation:</p> <ol> <li>[ element ]</li> <li>[ value , element ]</li> <li>[ NUMBER , element ]</li> <li>[ NUMBER , value , element ]</li> <li>[ NUMBER , array , element ]</li> <li>[ NUMBER , [ element ] , element ]</li> <li>[ NUMBER , [ value , element ] , element ]</li> <li>[ NUMBER , [ NUMBER , element ] , element ]</li> <li>[ NUMBER , [ NUMBER , value ] , element ]</li> <li>[ NUMBER , [ NUMBER , NUMBER ] , element ]</li> <li>[ NUMBER , [ NUMBER , NUMBER ] , value ]</li> <li>[ NUMBER , [ NUMBER , NUMBER ] , NUMBER ]</li> </ol> <p>The grammar rules provided for JSON here might also be a useful reference: <a href="http://json.org/" rel="nofollow">http://json.org/</a></p>
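One way to convince yourself the grammar is right is to turn it into a recursive-descent recogniser, one function per nonterminal. This is just a sketch over a pre-tokenised input where the literal string "NUMBER" stands for the NUMBER token; the right recursion in element is written as a loop.

```python
# Recursive-descent recogniser for:
#   array   ::= [ ] | [ element ]
#   element ::= value | value , element
#   value   ::= array | NUMBER
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(t):
        nonlocal pos
        if peek() != t:
            raise SyntaxError(f"expected {t!r}, got {peek()!r}")
        pos += 1

    def array():
        eat("[")
        if peek() != "]":        # zero or more values
            element()
        eat("]")

    def element():               # value (, value)* -- loop = right recursion
        value()
        while peek() == ",":
            eat(",")
            value()

    def value():
        if peek() == "[":
            array()
        else:
            eat("NUMBER")

    array()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
```

For example, `parse("[ NUMBER , [ NUMBER , NUMBER ] , NUMBER ]".split())` is accepted, while a bare `NUMBER` or a missing comma raises a SyntaxError, matching the constraints in the question.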
390
tokenization
To remove all comments in a JavaScript file, do we need just a scanner or also a parser?
https://cs.stackexchange.com/questions/120126/to-remove-all-comments-in-a-javascript-file-do-we-need-just-a-scanner-or-also-a
<p>I was asked how to remove comments in a JavaScript program, but once I gave the regular expression solution, I was asked what if there are comments like text inside of a string:</p> <pre><code>let hi = " // here "; let foo = " use this: /* "; let foo2 = " \" and that */ "; </code></pre> <p>it also can get complicated with the cases with string being able to be quoted by <code>'</code> and the backtick <code>`</code>:</p> <pre><code>let hi2 = 'here //'; let i = 123; let hi3 = `here // ${i /* don't use j */}`; let hi4 = `here // ${i // don't use j // because j is not good }`; let hi5 = ` and use there ' " \' \" \` /* */`; let hi6 = ` this is really closing, not quoted backtick \\`; let hi7 = `here // ${i /* don't use j in `` */}`; </code></pre> <p>So I mentioned a scanner (lexical analyzer) and a parser (like yacc or bison) can be used. So we can build a tree of node representing the program, and then wherever it is a comment type of node, we can remove it. (comment nodes have no children, I think? If they do, then can just change the comment to nothing, the empty string).</p> <p>But is it true that all we need is a scanner? We can tokenize all the text, and then we get each element which is a string, an operator, the left and right operands, and also the comment elements.</p> <p>And then we can just remove all the comment elements and then reconstruct the program using all the tokens.</p> <p>So probably we don't even need the program code to be represented as a tree, but just as a series of tokens. So if we already have a scanner and parser, all we need is a scanner, and when we supply it with the proper grammar rules, we can remove all the comments?</p> <p>(is the following considered grammar or just finite state automata and do we just use it to form the tokens?)</p> <p><a href="https://i.sstatic.net/sbTFX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sbTFX.png" alt="enter image description here"></a></p>
<p>A scanner (tokenizer) will be enough. Most programming languages do not allow comment nesting, so there is no need to use a recursive parser to strip the comments.</p>
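A minimal sketch of such a scanner is below. It handles `//` and `/* */` comments while copying `'...'` and `"..."` strings (with backslash escapes) verbatim. Note the deliberate simplification: template literals with nested `${...}` substitutions, as in the question's trickier examples, are NOT handled — those need a scanner with a recursion stack, since `${}` can contain arbitrary code including further backticks.

```python
# One-pass scanner that strips // and /* */ comments, preserving
# single- and double-quoted strings. Template literals are out of scope.
def strip_comments(src):
    out, i, n = [], 0, len(src)
    while i < n:
        c = src[i]
        if c in "'\"":                        # quoted string: copy verbatim
            quote = c
            out.append(c)
            i += 1
            while i < n and src[i] != quote:
                if src[i] == "\\" and i + 1 < n:
                    out.append(src[i:i + 2])  # keep escape pair intact
                    i += 2
                else:
                    out.append(src[i])
                    i += 1
            if i < n:                         # closing quote
                out.append(quote)
                i += 1
        elif src.startswith("//", i):         # line comment: drop to newline
            while i < n and src[i] != "\n":
                i += 1
        elif src.startswith("/*", i):         # block comment: drop to */
            i += 2
            while i < n and not src.startswith("*/", i):
                i += 1
            i += 2
        else:
            out.append(c)
            i += 1
    return "".join(out)
```

Because the string rule runs before the comment rules, `" // here "` survives untouched while a real `//` comment after it is removed — the ordering does the work a parser would otherwise do.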
391
tokenization
Why is the step property in a balancing network defined as it is?
https://cs.stackexchange.com/questions/49715/why-is-the-step-property-in-a-balancing-network-defined-as-it-is
<p>I was trying to understand why the equation $y_i = \left( \frac{n}{w} \right) + (i \pmod w) $ describes the step property in a balancing network?</p> <p>First, recall $x_i$ to be the number of tokens a network gets as input and similarly $y_i$ to be the number of output tokens. Recall that a balancing network is just a network that distributes tokens to its output.</p> <p>I was reading the art of multicore programming and in page 272 it says:</p> <blockquote> <p>If the number of tokens n is a multiple of four (the network width), then the same number of tokens emerges from each wire. If there is one excess token, it emerges on output wire 0, if there are two, they emerge on output wires 0 and 1, and so on. In general,</p> <p>$$ n = \sum x_i $$</p> <p>then</p> <p>$$y_i = \left( \frac{n}{w} \right) + (i \pmod w) $$</p> <p>we call this property the <strong>step property</strong>.</p> </blockquote> <p>It also defines equivalent ways to see the step property as:</p> <ol> <li>For any $i&lt;j$, $0 \leq y_i - y_j \leq 1$</li> </ol> <p>i.e. as we go up the output wires, the wire can only increase one step at a time or not increase (so top values are always larger or equal). 
An example:</p> <p><a href="https://i.sstatic.net/ehOaz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ehOaz.png" alt="enter image description here"></a></p> <p>However, the formula $y_i = \left( \frac{n}{w} \right) + (i \pmod w) $ doesn't make sense to me either; specifically, the following doesn't make sense:</p> <blockquote> <p>if there are two, they emerge on output wires 0 and 1, and so on...</p> </blockquote> <p>I tried plugging in numbers too, say $ n = 6 $, but the results don't quite make sense.</p> <p>For example, according to the formula above we get:</p> <p>$$ y_0 = \left( \frac{6}{4} \right) + (0 \pmod 4) = 1 + 0 = 1 $$ $$ y_1 = \left( \frac{6}{4} \right) + (1 \pmod 4) = 1 + 1 = 2 $$ $$ y_2 = \left( \frac{6}{4} \right) + (2 \pmod 4) = 1 + 2 = 3 $$ $$ y_3 = \left( \frac{6}{4} \right) + (3 \pmod 4) = 1 + 3 = 4 $$</p> <p>which doesn't agree with what the picture of the diagram would be because it seems to be backwards. Not sure if I made a mistake or misunderstood the formula, but it should be giving something as in the figure/diagram/picture from the book. Also, according to the formula, is it ever possible for it be equal? It seems to always increase in the wrong direction.</p>
<p>The formula is wrong. The correct formula is $$ y_i = \lfloor \frac{n}{w} \rfloor + [i &lt; (n \mod{w})], $$ where $[C]$ equals 1 if $C$ holds, and 0 otherwise.</p> <p>Sometimes books contain mistakes. When you see a mistake, correct it and proceed. No need to ask us for permission.</p>
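The corrected formula is easy to sanity-check numerically. This sketch evaluates $y_i = \lfloor n/w \rfloor + [i < (n \bmod w)]$ and confirms both the token count and the step property $0 \leq y_i - y_j \leq 1$ for $i < j$.

```python
# Corrected step-property formula: y_i = floor(n/w) + [i < n mod w].
def step_outputs(n, w):
    return [n // w + (1 if i < n % w else 0) for i in range(w)]
```

For $n = 6$ tokens on $w = 4$ wires this gives $[2, 2, 1, 1]$: the two excess tokens emerge on wires 0 and 1, exactly as the book's prose describes.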
392
tokenization
Stuck with shift-reduce conflicts on yacc on grammar to generate palindromic strings on {0,1}
https://cs.stackexchange.com/questions/144332/stuck-with-shift-reduce-conflicts-on-yacc-on-grammar-to-generate-palindromic-str
<p>I have written a yacc program for generating palindromic strings consisting of 0s and 1s. Here is the rules section of the yacc program below:</p> <pre><code>%% program: expr NL { printf(&quot;Valid string.\n&quot;); exit(0); } ; expr: ZERO expr ZERO | ONE expr ONE | ZERO | ONE | ; %% </code></pre> <p>Here <code>ZERO</code> is the token representing 0, <code>ONE</code> is the token representing 1, and <code>NL</code> represents <code>\n</code>. Using yacc on the above grammar, I'm given the following warnings.</p> <pre><code>yacc -dt --verbose 7b.y 7b.y: warning: 4 shift/reduce conflicts [-Wconflicts-sr] 7b.y: warning: 2 reduce/reduce conflicts [-Wconflicts-rr] </code></pre> <p>To the best of my knowledge, the grammar above seems to be unambiguous. My questions are:</p> <ol> <li>Why is yacc giving me this error?</li> <li>What should I change to resolve this?</li> </ol> <p>Here are the complete lex and yacc programs if they should help:</p> <p>Lex:</p> <pre><code>%{ #include &lt;stdlib.h&gt; void yyerror(char *); #include &quot;y.tab.h&quot; %} %% [0] { yylval = 0; return ZERO; } [1] { yylval = 1; return ONE; } \n { return NL; } . yyerror(&quot;invalid character&quot;); %% int yywrap(void) { return 1; } </code></pre> <p>Yacc:</p> <pre><code>%{ #include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; int yylex(void); void yyerror(char *); %} %token ZERO ONE NL %% program: expr NL { printf(&quot;Valid string.\n&quot;); exit(0); } ; expr: ZERO expr ZERO | ONE expr ONE | ZERO | ONE | ; %% void yyerror(char *s) { fprintf(stderr, &quot;Invalid string.\n&quot;); } int main(void) { yyparse(); return 0; } </code></pre> <p>Thanks for your help in advance.</p>
<p>The grammar is, as you say, unambiguous. But it is not <em>deterministic</em>. LR parsers with bounded lookahead can only recognise deterministic languages; since not all unambiguous context-free languages are deterministic, LR parsers cannot recognise all unambiguous context-free languages.</p> <p>Intuitively, palindromes are non-deterministic because the parser must switch states precisely at the middle of the input, but there is no way to tell where the middle of a sentence is until you know where the end is, which is obviously not possible with bounded lookahead.</p> <p>Proving that the language of palindromes is not deterministic is a bit of work. You probably won't find the proof in a textbook oriented towards writing compilers, such as the Dragon Book, but if you are studying formal language theory you might be working with a more theory-oriented textbook (such as Hopcroft and Ullman.) There is a proof that the language of even-length palindromes is non-deterministic <a href="https://cs.stackexchange.com/questions/11598/prove-no-dpda-accepts-language-of-even-lengthed-palindromes">here</a>, and extending the proof to the language of all palindromes is not that difficult.</p>
393
tokenization
Item lookaheads versus dot lookaheads for $LR(k)$ with $k \gt 1$?
https://cs.stackexchange.com/questions/48866/item-lookaheads-versus-dot-lookaheads-for-lrk-with-k-gt-1
<p>I was reading "Parsing Techniques: A Practical Guide, Second Edition" by Grune and Jacobs, which details a bunch of different parsing algorithms. In the section on $LR(2)$ parsing, they mention that unlike $LR(1)$ items, which just have an item lookahead (the token that should appear after the completed $LR(1)$ item), the items in an $LR(2)$ parser also need a dot lookahead (the tokens that should appear after the dot in the $LR(2)$ item).</p> <p>I'm having trouble understanding what this means, how this lookahead is computed, and why this isn't needed in $LR(0)$ or $LR(1)$ items. Can anyone explain what this means?</p>
<p>I think you are mistaken: they are needed, but the dot look-ahead there is so obvious that you have not noticed it is used.</p> <p>First, let's remark that there are three kinds of items:</p> <ol> <li><p>those in which the dot is just before a non-terminal. They never participate in an ambiguous situation: when a non-terminal has been produced, it is shifted.</p></li> <li><p>those in which the dot is at the end. They have the item look-ahead and the dot look-ahead which are equal (what may follow the dot is what may follow the produced non-terminal when the production is reduced, as the dot is at the end of the item).</p></li> <li><p>those in which the dot is just before a terminal. They have the item look-ahead and the dot look-ahead which are different. The item look-ahead is what may follow the non-terminal when the production is reduced; the dot look-ahead starts with the terminal which follows the dot and continues with what can be generated after that terminal.</p></li> </ol> <p>Now, with a look-ahead of 1 or less, the dot look-ahead is trivial: either it is the item look-ahead or it is the terminal which is just after the dot, and that's what you use to solve a conflict (or to decide that there is no way to do so with the limited look-ahead you have). 
With a look-ahead of 2 or more, you have to compute the dot look-ahead or you may not know if you have to shift or to reduce, as in the example provided by Grune and Jacobs:</p> <p><span class="math-container">$$\begin{array}{l} S \rightarrow Aa \; | \; Bb \;| \;Cec \;|\; Ded \\ A \rightarrow qE \\ B \rightarrow qE \\ C \rightarrow q \\ D \rightarrow q \\ E \rightarrow e \\ \end{array}$$</span></p> <p>which has the state: <span class="math-container">$$\begin{array}{lcc} &amp;\textrm{item look-ahead}&amp;\textrm{dot look-ahead}\\ A \rightarrow q \cdot E &amp; a\# &amp; ea\\ B \rightarrow q \cdot E &amp; b \# &amp; eb\\ C \rightarrow q \cdot &amp; ec &amp; ec\\ D \rightarrow q \cdot &amp; ed &amp; ed\\ E \rightarrow \cdot e &amp; a \# &amp; ea\\ E \rightarrow \cdot e &amp; b \# &amp; eb\\ \end{array}$$</span></p>
394
tokenization
Variable name starting with integers
https://cs.stackexchange.com/questions/142101/variable-name-starting-with-integers
<p>When I started programming I wondered why variable names can't start with a digit. Back then I accepted that maybe this is just how the compiler designers decided to go. But now I am studying compiler design, and it says that the lexical analyzer produces tokens and that it is easy/fast if we describe identifiers as regular languages. So why not a regular expression like this: <span class="math-container">$(number)^*(underscore + alphabet)^+(number)^*$</span></p> <p>Why don't they use this? From the compiler's perspective I don't see any ambiguity or problem with this, as we have symbol table entries for each token. I know that many similar questions have been asked, but I want to know the answer from the lexical analysis and compiler design perspective.</p>
<p>In many languages, <code>1e3</code> is a literal that represents 1000, <code>0x10</code> is a literal that represents 16. If we used your proposed regexp for variable names, it would be ambiguous whether those expressions should be represented as a literal or as a variable name.</p>
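The ambiguity is easy to demonstrate concretely. The sketch below uses simplified, made-up token patterns (a toy number rule covering `1e3`-style and `0x10`-style literals, and the question's proposed identifier rule) and checks that the same lexeme matches both — so a longest-match lexer could no longer classify it lexically.

```python
import re

# Toy patterns, assumed for illustration:
NUMBER = re.compile(r"\d+(e\d+)?|0x[0-9a-fA-F]+")   # 1000, 1e3, 0x10, ...
LOOSE_IDENT = re.compile(r"\d*[A-Za-z_]+\d*")        # the proposed rule

def both_match(tok):
    """True if the lexeme is claimed by both token classes."""
    return (NUMBER.fullmatch(tok) is not None
            and LOOSE_IDENT.fullmatch(tok) is not None)
```

Both `1e3` and `0x10` satisfy both patterns, while `foo` and `123` are each claimed by exactly one class — which is precisely why real languages forbid identifiers from starting with a digit.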
395
tokenization
what are all the ways of delimiting blocks
https://cs.stackexchange.com/questions/156268/what-are-all-the-ways-of-delimiting-blocks
<p>To my knowledge, in block-structured programming languages, there are 2, maybe 3, main ways of delimiting a block.</p> <ol> <li>Using start and end tokens; these can be brackets, reserved words, etc.</li> <li>Using indentation, like Python, which uses the offside rule to delimit blocks</li> <li>Using prefix notation for control structures, like Lisp s-expressions (maybe this is equivalent to 1).</li> </ol> <p>If we consider programs as strings of symbols, then a block-delimiting scheme is the information in a program for uniquely specifying subprograms.</p> <p>So for example, we could delimit blocks by using symbol/token/line-number indices, e.g. <code>if condition 4,7 14,32</code> (this might relate to labels in assembly or BASIC).</p> <p>My question is, what are <strong>all</strong> the ways of delimiting blocks?</p>
396
tokenization
Negotiating a connection between two devices that can&#39;t transmit and receive simultaneously
https://cs.stackexchange.com/questions/153170/negotiating-a-connection-between-two-devices-that-cant-transmit-and-receive-sim
<p>I've got a bit of a puzzle here that sits at the intersection of mathematics and technology. Hopefully this doesn't fall into brainteaser territory - I'm not sure a neat solution is possible!</p> <p>I have two devices. Each one has a short token they would like to share. I am happy for either device to get the other's token; they don't both need both tokens.</p> <p>Each device can transmit or receive locally, but it can't do both at the same time. A transmission takes 1-2 seconds. Both devices know the time (very accurately) but have no other knowledge of each other.</p> <p>I can see a flakey solution here - start in receive mode for a random number of seconds, then enter transmit mode for one transmission, then repeat with a new random number. Sooner or later (probably sooner) a full transmit window will be captured in the other device's receive window; there is some risk of transmit collisions and partial receives, but the bigger the range of that wait window the less likely this becomes (and the token is a known length + has a check digit; partial receives are easily discarded).</p> <p>My solution works, but feels very inelegant. It occurs to me this problem probably exists in many domains and may even have a name / common approach. Could anybody steer me in the right direction? Obviously it would be better if the devices could have a known master / slave relationship, or if one could transmit on odd seconds and one on evens; but there are dozens of these devices and no way of knowing which two will end up next to each other trying to pair.</p> <p>I've tried to abstract away a lot of the details but happy to provide more where it would be helpful.</p>
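The "listen for a random window, then transmit once" scheme described in the question (essentially ALOHA-style random backoff) can be sanity-checked with a small simulation. The timing model here is a crude assumption made up for illustration: integer-second ticks, every transmission occupying exactly 2 ticks, and a transmission being received only if the peer is listening for both of its ticks.

```python
import random

# Monte-Carlo sketch of the random listen/transmit scheme from the question.
def trial(max_wait=10, horizon=600, rng=random):
    def schedule():
        t, txs = 0, []
        while t < horizon:
            t += rng.randint(1, max_wait)   # listen for a random window
            txs.append(t)                   # then transmit once (2 ticks)
            t += 2
        return txs

    a, b = schedule(), schedule()
    busy_a = {t for s in a for t in (s, s + 1)}
    busy_b = {t for s in b for t in (s, s + 1)}
    # A transmission is heard iff the peer is idle for both of its ticks.
    heard = (any({s, s + 1}.isdisjoint(busy_b) for s in a)
             or any({s, s + 1}.isdisjoint(busy_a) for s in b))
    return heard

success = sum(trial() for _ in range(200)) / 200
```

With these parameters each device transmits roughly 80 times over the horizon, so at least one collision-free transmission is all but certain — which matches the intuition in the question that the flaky scheme converges quickly even without a master/slave relationship.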
397
tokenization
How do stable functions 1 =&gt; 1 relate to Bool?
https://cs.stackexchange.com/questions/29655/how-do-stable-functions-1-1-relate-to-bool
<p>One way to interpret the (simply typed) lambda calculus is via coherence spaces (<a href="http://www.paultaylor.eu/stable/Proofs+Types.html" rel="nofollow">Proofs and Types, chapter 8</a>). For example, we can consider the space containing a single token ($\mathbf{1}$) and the space containing two incoherent tokens ($\mathbf{Bool}$). These are given as follows:</p> <p>$$ \mathbf{1} = \{\emptyset, \{0\}\}\\ \mathbf{Bool} = \{\emptyset, \{t\}, \{f\}\} $$</p> <p>We can then consider the stable functions from $\mathbf{1}$ to itself:</p> <p>$$ F_1(\emptyset) = \emptyset, F_1(\{0\}) = \emptyset\\ F_2(\emptyset) = \emptyset, F_2(\{0\}) = \{0\}\\ F_3(\emptyset) = \{0\}, F_3(\{0\}) = \{0\} $$</p> <p>The traces of these functions are as follows:</p> <p>$$ \mathcal{Tr}(F_1) = \emptyset\\ \mathcal{Tr}(F_2) = \{(\{0\}, 0)\}\\ \mathcal{Tr}(F_3) = \{(\emptyset, 0)\} $$</p> <p>This again forms a coherence space</p> <p>$$ \mathbf{1 \Rightarrow 1} = \{\emptyset, \{(\{0\}, 0)\}, \{(\emptyset, 0)\}\} $$</p> <p>which, just like $\mathbf{Bool}$, has two incoherent tokens. As far as I can see, the two should be equivalent, and there exists a stable function of type $\mathbf{(1 \Rightarrow 1) \Rightarrow Bool}$ which maps $F_2$ to $f$ and $F_3$ to $t$.</p> <p>Can this function be expressed in the lambda calculus? If not, why is it part of the interpretation? Can we see $\mathbf{Bool}$ and $\mathbf{1 \Rightarrow 1}$ as "the same" due to the similarity in coherence space?</p>
398
tokenization
Understanding The Mapping Of Edges to Nodes In A Graph Theory Problem
https://cs.stackexchange.com/questions/59995/understanding-the-mapping-of-edges-to-nodes-in-a-graph-theory-problem
<p>I am really confused with this <a href="https://community.topcoder.com/stat?c=problem_statement&amp;pm=13707" rel="nofollow">problem</a>.</p> <p><strong>Here's the problem:</strong> <br></p> <p>You have $N$ points numbered $1$ through $N$, inclusive, and $N$ arrows again numbered $1$ through $N$, inclusive. No two arrows start at the same place, but multiple arrows can point to the same place and arrows can start and end in the same place. The arrow from place $i$ points to place $a[i-1]$, ($a$ being an array representing the game board with $N$ elements and $i$ is between $1$ and $N$, inclusive). There are $0$ to $N$ tokens, inclusive, placed in those places and that, in each round, move along the arrows from their current place. If two or more tokens are in the same place, then you lose that game. But if that doesn't happen for the $K$ rounds specified, then you win the game. There may be multiple ways to solve the problem, but two ways are different if there is some $i$ such that at the beginning of the game place $i$ did contain a token in one case but not in the other. Count those ways and return their count modulo $1,000,000,007$.</p> <p>The whole problem is confusing to me, but what really confuses me is that it states that the arrow that starts from $i$ goes to $a[i-1]$. How I understand it, for the first example ($a = \{1,2,3\}$, $K = 5$, returns $8$), if $a[1]=1$, $a[2]=2$, and $a[3]=3$, then $3$ maps to $2$ and $2$ maps to $1$, but then $1$ maps to $0$, (but point $0$ doesn't exist). </p> <p>What would be more correct would be if $a[0]=1$, $a[1]=2$, and $a[2]=3$, but then all the points would map to themselves, (though it says in the example that the tokens don't move during the rounds).</p> <p>I am probably way off, but I couldn't find many explanations, and the ones I found didn't make any sense to me, and I couldn't find many visual depictions either. </p>
<p>$a[0] = 1, a[1] = 2, a[2] = 3$. </p> <p>"The arrow from place $i$ points to place $a[i−1]$"</p> <p>So yes, all $3$ places in this sample point to themselves. It actually says in the explanation that "in each round each token will stay in the same place". </p> <p>In the second sample, all three places point towards the first one.</p> <p>In the third sample, you have a $2$-cycle: $1$ points to $2$ and vice-versa. </p> <p>Is everything clear now?</p>
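The indexing described in this answer can be sanity-checked with a brute-force simulation (Python; fine for the tiny samples, far too slow for the real constraints):

```python
from itertools import combinations

def count_safe_placements(a, k, mod=1_000_000_007):
    """Brute force: count starting token subsets that survive k rounds.

    a is indexed as in the problem: the arrow from place i (1-based)
    points to place a[i-1] (so a[0] is the target of place 1's arrow).
    """
    n = len(a)
    total = 0
    for size in range(n + 1):
        for start in combinations(range(1, n + 1), size):
            tokens, ok = list(start), True
            for _ in range(k):
                tokens = [a[p - 1] for p in tokens]   # each token follows its arrow
                if len(set(tokens)) != len(tokens):   # two tokens met: lose
                    ok = False
                    break
            total += ok
    return total % mod

# First sample: a = {1, 2, 3}, K = 5. Every place points to itself, so
# tokens never move and any of the 2^3 placements wins.
print(count_safe_placements([1, 2, 3], 5))  # -> 8
```

The same function confirms the other cases mentioned above: with a $2$-cycle (`a = [2, 1]`) the two tokens just swap every round and never collide, while with everything pointing at place $1$ any two tokens collide after one round.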
399