| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
tokenization
|
LL grammars and left-recursivity
|
https://cs.stackexchange.com/questions/6809/ll-grammars-and-left-recursiviity
|
<p>Why are LL(k) and LL(∞) incompatible with left recursion? I understand that an LL(k) language can support left recursion provided that any ambiguity can be resolved with k lookahead tokens. But with an LL(∞) grammar, which kinds of ambiguity can't be resolved?</p>
|
<p>The problem that $LL$ variants have with left recursion is inherent to the way $LL$ works: it is a top-down type parser, which means it replaces nonterminals by their productions.</p>
<p>An $LL$-style parser works as follows. It traverses the input from left to right in one go. If we are at some point in the input, then we know that everything to the left of this point is OK. For everything to the right of this point, the parser has constructed an 'approximation' of what it expects to see next. Consider for example this grammar:</p>
<blockquote>
<p>1: $E \to E + E$<br>
2: $E \to x$ </p>
</blockquote>
<p>Note that the grammar is not $LL$, but we can still parse inputs in $LL$-style. On input $x+x+x$, an $LL$-style parser may end up at position $x+\bullet x+x$. Let's assume it has decided that the left part, $x+$, is fine, and for the rest of the input it expects to see $x+E$. It will then find out that $x+x+$ is fine, with $E$ remaining. It may then replace this $E$ by a production, in particular production 2 above. With $x$ remaining, the parser will accept the input.</p>
<p>The trick is then to correctly decide the replacing production for a given nonterminal. A grammar is $LL(k)$ if we can do this by just looking at the next $k$ input symbols, and other techniques are known that are more powerful.</p>
<p>Now consider the following grammar:</p>
<blockquote>
<p>1: $A \to A a$<br>
2: $A \to \varepsilon$ </p>
</blockquote>
<p>If an $LL$ parser tries to replace $A$ by a production, it has to decide between production 1 and 2.</p>
<p>Let's consider what the proper course of action would be if our parser was omniscient. Every time it replaces the $A$ by production 1, it 'adds' an $a$ to what it expects for the remaining input (the expected remainder goes from $A$ to $Aa$ to $Aaa$...), but the $A$ at the start does not go away. Eventually, it must pick production 2, after which the $A$ disappears and it can never again add $a$s to the expectation.</p>
<p>As there is no chance to match a few more input symbols first, the parser must decide at exactly that input position how many times production 1 must be used. In our case, this means it must know, at that moment, exactly how many $a$s will appear in the remainder of the input.</p>
<p>However, $LL(k)$ can see only $k$ symbols ahead. This means that if production 1 must be chosen more than $k$ times, the parser cannot 'see' this and so is doomed to fail. $LL(*)$ is better at parsing than $LL(k)$, because it can see arbitrarily far ahead in the input, but the crucial detail (which is not always mentioned) is that this lookahead is <em>regular</em>.</p>
<p>To imagine what happens, you can view the algorithm as follows: when it has to decide which production to take, it starts up a finite state machine (a DFA, which is equivalent in power to regular expressions) and lets this machine look at the rest of the input. This machine can then report 'use this production'. However, this machine is severely limited in what it can do. Although it is strictly better than looking at only the next $k$ symbols, it cannot for instance 'count', which means that it cannot help in the above situation.</p>
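<p>To make this failure concrete, here is a small recursive-descent sketch (my own illustrative Python, not from the original answer): transcribing the left-recursive rule $A \to A a \mid \varepsilon$ literally recurses before consuming any input, while the standard left-recursion-eliminated form $A \to a A \mid \varepsilon$ makes progress on every call.</p>

```python
def parse_A_left(tokens, i=0):
    """Literal transcription of the left-recursive rule A -> A a | eps.
    It calls itself before consuming a token, so it never terminates."""
    i = parse_A_left(tokens, i)                # parse the leading A ...
    if i < len(tokens) and tokens[i] == 'a':   # ... then try to match 'a'
        return i + 1
    return i

def parse_A_right(tokens, i=0):
    """Equivalent right-recursive rule A -> a A | eps: consume first,
    then recurse, so every call makes progress."""
    if i < len(tokens) and tokens[i] == 'a':
        return parse_A_right(tokens, i + 1)
    return i

assert parse_A_right(list('aaa')) == 3   # parses a run of three 'a's

try:
    parse_A_left(list('aaa'))            # blows the stack instead
except RecursionError:
    pass
```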
<p>Even if you were to 'hack' in some counting function in this finite automaton, then there are still left-recursive grammars for which you really need more power. For instance, for this grammar:</p>
<blockquote>
<p>$A \to A B$<br>
$A \to \varepsilon$<br>
$B \to ( B )$<br>
$B \to \varepsilon$ </p>
</blockquote>
<p>you would have to match 'towers' of matching braces, which is something a finite automaton cannot do. Worse still:</p>
<blockquote>
<p>$A \to B C A D E$<br>
$A \to A'$<br>
$A' \to A' D E$<br>
$A' \to \varepsilon$<br>
$B \to a B a \mid b B b \mid a a \mid b b$<br>
$C \to c C c \mid d C d \mid c c \mid d d$<br>
$D \to e D e \mid f D f \mid e e \mid f f$<br>
$E \to g E g \mid h E h \mid g g \mid h h$ </p>
</blockquote>
<p>is a totally awful grammar, for which I'm pretty sure no known linear-time parsing algorithm works and all known general parsing algorithms take quadratic time. Worse still, any grammar describing this language is necessarily left-recursive. The grammar is still unambiguous, however. You need a hand-crafted parser to parse these monsters in linear time.</p>
| 400
|
tokenization
|
Two Step Verification. 4 digits vs 6 digits
|
https://cs.stackexchange.com/questions/30300/two-step-verification-4-digits-vs-6-digits
|
<p>From a security standpoint (server, database, token code, authorization, authentication, etc.) regarding two-step verification: Apple usually sends a <strong>4</strong>-digit security code whereas Google sends a <strong>6</strong>-digit security code. What are the main differences?</p>
|
<p>The number of digits in such a <a href="http://en.wikipedia.org/wiki/One-time_password" rel="nofollow">one-time password</a> is determined by the acceptable risk that an attacker who doesn't receive the verification code will be able to guess it (lucky guess). This risk takes into account several factors:</p>
<ul>
<li>the probability that an attacker will be able to guess the code;</li>
<li>the negative consequences of a correct guess by an attacker;</li>
<li>the probability and the negative consequences of the code being so long that it discourages the user from using the service.</li>
</ul>
<p>For an $n$-digit code, if there is a single valid code and the code is generated at random, then the probability of a lucky guess is $10^{-n}$. That's 1/10,000 for a 4-digit code, 1/1,000,000 for a 6-digit code. These are usually acceptable figures when the one-time code is a <a href="http://en.wikipedia.org/wiki/Multi-factor_authentication" rel="nofollow">second factor</a> (usually complemented by knowing a static password or possessing a physical device); they would be grossly insufficient if the code alone was enough to access a service.</p>
<p>When the code is sent from a remote server to the user's device, there is generally a single valid code which is randomly generated. Some one-time password schemes don't require communication between the server and the user's device, but instead have both sides generate a <a href="http://en.wikipedia.org/wiki/Pseudorandomness" rel="nofollow">pseudo-random</a> code in the same way using a shared secret key and a counter. In this case, there are usually several valid codes, to allow for desynchronization between the client and the server; this requires a slightly longer code to compensate for the increased number of possible lucky guesses.</p>
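<p>As a quick numeric check (a sketch of my own, not part of the answer), the lucky-guess probability for an $n$-digit code, including the counter-based case where several codes are simultaneously valid:</p>

```python
import secrets

def generate_code(n_digits):
    """Uniform random n-digit code, zero-padded (e.g. '004213')."""
    return f"{secrets.randbelow(10 ** n_digits):0{n_digits}d}"

def guess_probability(n_digits, valid_codes=1):
    """Chance that one blind guess hits a currently valid code,
    assuming codes are generated uniformly at random."""
    return valid_codes / 10 ** n_digits

assert guess_probability(4) == 1 / 10_000      # 4-digit code
assert guess_probability(6) == 1 / 1_000_000   # 6-digit code
# a scheme that accepts, say, 3 codes at once triples the risk:
assert guess_probability(6, valid_codes=3) == 3 / 1_000_000
```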
| 401
|
tokenization
|
How the LZ77 compression algorithm handles the case when the entire look-ahead buffer is matched in the search buffer
|
https://cs.stackexchange.com/questions/75925/how-the-lz77-compression-algorithm-handles-the-case-when-the-entire-look-ahead-b
|
<p>The LZ77 compression algorithm uses a sliding-window technique, where the window consists of a look-ahead buffer and a search buffer. What I am wondering is how the algorithm handles the case where the match found in the search buffer covers the entire contents of the look-ahead buffer. According to the descriptions I find, the algorithm matches as long as it can, and then outputs the offset, the length of the match, and the next token after the matched portion in the look-ahead buffer; but if the entire look-ahead buffer is matched, we do not have a next token to output.</p>
<p>I nowhere find this case described; for example <a href="https://en.wikipedia.org/wiki/LZ77_and_LZ78" rel="nofollow noreferrer">the pseudocode</a> just states "X first char after p in view", but I am asking about the case where there is no char after p in the view, because p is the entire view.</p>
<p>For example, consider a search buffer of size 5 and a look-ahead buffer of size 4 and we read in</p>
<p>|abrar|rarr|ad</p>
<p>then we find a match at offset 3, and the match (which extends beyond the boundary between the two buffers, but this is no problem) covers all of rarr; even the next a could be matched. What should we do now: output (3, 4, C(a)), where C(a) denotes the code of the a that is not in the look-ahead buffer, or just match the first 3 tokens?</p>
|
<p>A simple solution: bound the longest permitted match length so that it is smaller than the look-ahead buffer.
As long as the match starts in the search buffer (a minimum of 1 byte back), the look-ahead buffer will then always have one extra byte available to use as the follow byte.</p>
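<p>That rule can be sketched in a toy Python LZ77 encoder (my own illustration, using the question's buffer sizes as defaults): the match length is capped at one less than the look-ahead size, so the follow byte always exists.</p>

```python
def lz77_encode(data, search_size=5, lookahead_size=4):
    """Toy LZ77 emitting (offset, length, follow_char) triples.
    Match length is capped at lookahead_size - 1 so that the follow
    character after the match always exists."""
    out, i = [], 0
    while i < len(data):
        best_off = best_len = 0
        max_len = min(lookahead_size - 1, len(data) - i - 1)
        for j in range(max(0, i - search_size), i):
            length = 0
            # the match may run past the search/look-ahead boundary
            while length < max_len and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    out = []
    for off, length, ch in triples:
        for _ in range(length):
            out.append(out[-off])   # copies may overlap themselves
        out.append(ch)
    return "".join(out)
```

<p>On the question's example "abrarrarrad" the cap stops the long match at 3 symbols and emits the fourth as the follow byte, and the encoding round-trips.</p>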
| 402
|
tokenization
|
How to represent sentences with their dependency parses as input to an RNN?
|
https://cs.stackexchange.com/questions/96814/how-to-represent-sentences-with-their-dependency-parses-as-input-to-an-rnn
|
<p>I am working on a task embedding sentences into a lower-dimensional space according to style, both grammatical and lexical. As such, I want to have as input the linear ordering of tokens in each sentence, together with its dependency parse as provided by spacy. </p>
<p>In particular, I'd like to find a way to tie together the representation of the linear order of tokens and the representation of the dependency parse, so the network could learn features like "this sentence used a word with an embedding close to Y as an nmod which came before and modified a word with an embedding close to Z". How could I design such a network? </p>
<p>Edit: The desired input to the network is a parsed sentence; the desired output is a vector which allows that sentence to be compared with others in terms of both lexical and syntactic features. I know how to use an RNN with a sequence of word-vector embeddings as input. I also know how to encode a tree of grammatical functions as a sequence of tokens starting from the root. I'm not sure how to create a unified representation of the sentence where I can determine, for instance, both the embedding and the grammatical function of the fourth word in the sentence and the embedding of the word it modifies (requiring knowledge of the edges between words as well as their linear ordering).</p>
| 403
|
|
language modeling
|
Next-Word Prediction, Language Models, N-grams
|
https://cs.stackexchange.com/questions/18354/next-word-prediction-language-models-n-grams
|
<p>I was looking into how a <strong>next-word prediction engine</strong> like SwiftKey or XT9 can be implemented.</p>
<p>Here's what I did.</p>
<ul>
<li>I read about <strong>n-grams</strong> here - en.wikipedia.org/wiki/N-gram and aicat.inf.ed.ac.uk/entry.php?id=663</li>
<li>I read about <strong>Language Models/Markov Model/n-grams/training/Smoothing/Back-Offs</strong> - en.wikipedia.org/wiki/Language_model & www.stanford.edu/class/cs124/lec/languagemodeling.pptx & www.statmt.org/book/slides/07-language-models.pdf.</li>
<li>I read about the T9 engine design for next-word prediction based on Tries - courses.cs.washington.edu/courses/cse303/09wi/homework/T9files/T9_Tries.pdf</li>
<li>I came across <strong>SRILM</strong>, a popular toolkit for building & applying Language Models here - www.speech.sri.com/projects/srilm/ (the toolkit) & www.speech.sri.com/cgi-bin/run-distill?papers/icslp2002-srilm.ps.gz (the documentation)</li>
<li>I came across the blog where <strong>Google</strong>'s Peter Norvig made an announcement to <strong>share its huge training corpus of one trillion words</strong> with the entire world - googleresearch.blogspot.in/2006/08/all-our-n-gram-are-belong-to-you</li>
<li>I came across an <strong>n-gram viewer</strong> based on google books' corpus - books.google.com/ngrams/</li>
<li>I came across Microsoft's N-gram services - web-ngram.research.microsoft.com/</li>
<li>I came across <strong>an algorithm for N-Gram Language Models which is as fast as but smaller (in memory footprint) than SRILM's model</strong> (not based on tries, uses encoding) - nlp.cs.berkeley.edu/pubs/Pauls-Klein_2011_LM_paper.pdf (I need to do more work here.)</li>
<li>I had a look at some <strong>open-source engines</strong> available, like <strong>AnySoftKeyboard</strong> - github.com/AnySoftKeyboard. That is a huge amount of code with no documentation!</li>
</ul>
<p>Some discussions on <strong>stackoverflow</strong>:</p>
<ul>
<li>Implementing T9 prediction engine - Implementing T9 text prediction</li>
<li>A discussion on implementation of autocomplete using tries vs. ternary search trees vs. succint trees - stackoverflow.com/questions/10970416/tries-versus-ternary-search-trees-for-autocomplete</li>
</ul>
<p>The <strong>major players</strong> in this area:</p>
<ul>
<li><strong>Swift Key</strong> - en.wikipedia.org/wiki/SwiftKey & www.swiftkey.net/en/</li>
<li><strong>XT9 by Nuance</strong> - en.wikipedia.org/wiki/XT9 & www.nuance.com/for-business/by-product/xt9/index.htm</li>
</ul>
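<p>To ground the n-gram idea from the resources above, here is a minimal bigram next-word predictor (a toy sketch of my own; production engines like the ones listed add smoothing, back-off, personalization, and much more):</p>

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count (word, next-word) pairs over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Most frequent follower of `word`, or None if unseen
    (a real engine would back off to unigrams here)."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram([
    "i like green tea",
    "i like black tea",
    "i drink tea daily",
])
```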
<p>Can anybody guide me on how to proceed further?</p>
<p>I am relatively new to this site. So please guide me if my question is inappropriate for this site.</p>
| 404
|
|
language modeling
|
Language constructs vs. forms in semantic modeling
|
https://cs.stackexchange.com/questions/171438/language-constructs-vs-forms-in-semantic-modeling
|
<p>I saw the following paragraph in a data modeling <a href="https://www.gooddata.com/blog/what-a-semantic-data-model/" rel="nofollow noreferrer">article</a>:</p>
<blockquote>
<p>Semantics relates to the study of references, specifically describing the real meaning between symbols or words. In computer science, semantics relates to the meaning of language constructs rather than their form.</p>
</blockquote>
<p>There's one term that I'm not familiar with: "language form". It looks like it comes from linguistics, but I'm not sure.</p>
<p>Is there a standard definition on this one in computer science?</p>
|
<p><em>Form</em> is an alternative (and uncommon) word for <em><a href="https://en.wikipedia.org/wiki/Syntax" rel="nofollow noreferrer">syntax</a></em>, used in particular when contrasting form with <em>meaning</em> (semantics).</p>
<p>An example:</p>
<blockquote>
<p>Irons (1961) discussed “A Syntax Directed Compiler for ALGOL 60” that was to be “a compiling system which essentially separates the functions of defining the language and translating it into another.”
His paper used the syntax of ALGOL 60 and extended it to allow specification of meaning (in terms of the target language) as well as
of form.</p>
<p>Thus, remarkably, the same important ideas emerged
independently for the automatic translation of both natural and
artificial languages:</p>
<ul>
<li>Separating syntax and semantics.</li>
<li>Using a generative grammar to specify the set of all and only legal sentences (programs).</li>
<li>Analyzing the syntax of the sentence (program) and then using the analysis to drive the translation (compilation).</li>
</ul>
</blockquote>
<p>(from: <a href="https://ieeexplore.ieee.org/document/4392908" rel="nofollow noreferrer"><em>Formal languages: Origins and Directions</em>, by S.A. Greibach</a>, p.21)</p>
| 405
|
language modeling
|
How can Kneser-Ney Smoothing be integrated into a neural language model?
|
https://cs.stackexchange.com/questions/127864/how-can-kneser-ney-smoothing-be-integrated-into-a-neural-language-model
|
<p>I found a paper titled <a href="https://doi.org/10.1109/ICIP.2016.7532765" rel="nofollow noreferrer">Multimodal representation: Kneser-Ney Smoothing/Skip-Gram based neural language model</a>. I am curious about how the Kneser-Ney Smoothing technique can be integrated into a feed-forward neural language model with one linear hidden layer and a softmax activation. What is the purpose of the Kneser-Ney in such a neural network, and how can it be used for learning the conditional probability for the next word?</p>
<p><strong>EDIT:</strong></p>
<p>The authors mention that we may obtain multiple different labels from the n-1 preceding words by integrating Kneser-Ney smoothing. It is not clear to me what these labels are. As far as I know, the output of a LM is the next word. How is it possible to get more labels with Kneser-Ney smoothing?</p>
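<p>For reference, this is what classical count-based interpolated Kneser-Ney computes for bigrams (a standard sketch of my own; it is separate from the paper's neural integration, which I cannot reproduce here):</p>

```python
from collections import Counter

def kneser_ney_bigram(tokens, d=0.75):
    """Interpolated Kneser-Ney for bigrams with absolute discount d.
    Returns a function prob(w, v) = P(w | v); assumes v was seen as
    a context in the training tokens."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context = Counter(tokens[:-1])                 # c(v), counted as a context
    followers = Counter(v for (v, w) in bigrams)   # |{w : c(vw) > 0}|
    preceders = Counter(w for (v, w) in bigrams)   # |{v : c(vw) > 0}|
    n_bigram_types = len(bigrams)

    def prob(w, v):
        discounted = max(bigrams[(v, w)] - d, 0) / context[v]
        lam = d * followers[v] / context[v]        # leftover probability mass
        p_cont = preceders[w] / n_bigram_types     # continuation probability
        return discounted + lam * p_cont
    return prob
```

<p>A quick sanity check is that the discounted mass plus the redistributed continuation mass sums to 1 over the vocabulary for any seen context.</p>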
| 406
|
|
language modeling
|
N-gram language model question
|
https://cs.stackexchange.com/questions/105909/n-gram-language-model-question
|
<p>I have this question I found regarding n-gram modelling in the textbook <em>Speech and Language Processing</em>:</p>
<blockquote>
<p>Suppose we didn't use the end symbol </s>. Train an unsmoothed bigram grammar on the following corpus without using the end-symbol </s>.</p>
<blockquote>
<p><s> a b<br />
<s> b b<br />
<s> b a<br />
<s> a a</p>
</blockquote>
<p>Demonstrate that your bigram model does not assign a single probability distribution across all sentence lengths by showing that the sum of the probability of the four possible 2 word sentences over the alphabet {a, b} is 1.0 and the sum of the probability of all possible 3 word sentences over the alphabet {a, b} is also 1.0.<br />
<em>Note:</em><br />
<s> means beginning of sentence.<br />
</s> means end of sentence.</p>
<p><em>Speech and Language Processing</em>, Daniel Jarafsky and James H. Martin, <a href="https://web.stanford.edu/%7Ejurafsky/slp3/" rel="nofollow noreferrer">3rd ed.</a>, Exercise 3.5, p.55</p>
</blockquote>
<p>My attempt was:</p>
<pre><code>Two word sentences:
P(a | b) = count(b a) / count(b) = 1 / 4
P(b | a) = count(a b) / count(a) = 1 / 4
P(a | a) = count(a a) / count(a) = 1 / 4
P(b | b) = count(b b) / count(b) = 1 / 4
</code></pre>
<p>sum = 1</p>
<p>but when I get to 3 word sentences:</p>
<p>P(a | a, a) = count(a a a) / count(a) = 1 / 64</p>
<p>Now, you get 8 three word sentences which sums to 8 / 64</p>
<p>This is where I am getting lost. I need some pointing in the right direction.</p>
|
<p>For these 2-word sentences, notice that the original equation for the MLE approximation of the probabilities of the bigram is:</p>
<p><span class="math-container">$$P(w_i|w_{i-1}) = \frac{C(w_{i-1}w_i)}{\sum_w{C(w_{i-1}w)}} = \frac{C(w_{i-1}w_i)}{C(w_{i-1})}, $$</span></p>
<p>where the denominator <span class="math-container">$C(w_{i-1})$</span> equals the count of <span class="math-container">$w_{i-1}$</span> only if there is an end-symbol <span class="math-container">$\text{</s>}$</span>; otherwise the number of times <span class="math-container">$w_{i-1}$</span> appears at the end of a sentence has to be subtracted from its count. Therefore, the "count" function you are using in the denominator is wrong, and the true probabilities in your particular corpus are:</p>
<p><span class="math-container">$$\begin{equation}
\begin{split}
P(a|a) &= \frac{C(aa)}{C(a)} &= \frac{1}{2} \\
P(b|a) &= \frac{C(ab)}{C(a)} &= \frac{1}{2} \\
P(a|b) &= \frac{C(ba)}{C(b)} &= \frac{1}{2} \\
P(b|b) &= \frac{C(bb)}{C(b)} &= \frac{1}{2} \\
P(a|\text{<s>}) &= \frac{C(\text{<s>}a)}{C(\text{<s>})} = \frac{2}{4} &= \frac{1}{2} \\
P(b|\text{<s>}) &= \frac{C(\text{<s>}b)}{C(\text{<s>})} = \frac{2}{4} &= \frac{1}{2} \\
\end{split}
\end{equation}$$</span></p>
<p>and now the sum of the probabilities of all possible 2-word sentences with this vocabulary becomes equal to 1:</p>
<p><span class="math-container">$$\sum_{(w, v)\in\{a, b\}^2}P(\text{<s>}wv) = \sum_{(w, v)\in\{a, b\}^2} P(w|\text{<s>})\cdot P(v|w) = \sum_{(w, v)\in\{a, b\}^2} \frac{1}{4} = 1 .$$</span></p>
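<p>The corrected counts can be verified mechanically (a quick Python sketch of my own):</p>

```python
from itertools import product

corpus = [["<s>", "a", "b"], ["<s>", "b", "b"],
          ["<s>", "b", "a"], ["<s>", "a", "a"]]
bigrams, context = {}, {}
for sent in corpus:
    for v, w in zip(sent, sent[1:]):
        bigrams[(v, w)] = bigrams.get((v, w), 0) + 1
        context[v] = context.get(v, 0) + 1   # c(v) counted only as a context

def p(w, v):
    return bigrams.get((v, w), 0) / context[v]

# With no end-symbol, sentence probabilities of each fixed length sum to 1,
# so there is no single distribution across all sentence lengths.
two = sum(p(w, "<s>") * p(v, w) for w, v in product("ab", repeat=2))
three = sum(p(w, "<s>") * p(v, w) * p(u, v)
            for w, v, u in product("ab", repeat=3))
```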
| 407
|
language modeling
|
What are the 175 billion parameters used in the GPT-3 language model?
|
https://cs.stackexchange.com/questions/156130/what-are-the-175-billion-parameters-used-in-the-gpt-3-language-model
|
<p>I am currently working my way through <em><a href="https://arxiv.org/abs/2005.14165" rel="nofollow noreferrer">Language Models are Few-Shot Learners</a></em>, the initial 75-page paper about <a href="https://en.wikipedia.org/wiki/GPT-3" rel="nofollow noreferrer">GPT-3</a>, the language model that ChatGPT spun off from.</p>
<p>In it, they mention several times that they are using <strong>175 billion parameters</strong>, orders of magnitudes more than previous experiments by others. They show this table, for 8 models ranging from 125 million params to 175 billion params.</p>
<p><a href="https://i.sstatic.net/rsKhP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rsKhP.png" alt="enter image description here" /></a></p>
<p>Then they say:</p>
<blockquote>
<p>Table 2.1 shows the sizes and architectures of our 8 models. Here n<sub>params</sub> is the total number of trainable parameters,
n<sub>layers</sub> is the total number of layers, d<sub>model</sub> is the number of units in each bottleneck layer (we always have the
feedforward layer four times the size of the bottleneck layer, d<sub>ff</sub> = 4 ∗ d<sub>model</sub>), and d<sub>head</sub> is the dimension of each
attention head. All models use a context window of n<sub>ctx</sub> = 2048 tokens. We partition the model across GPUs along
both the depth and width dimension in order to minimize data-transfer between nodes. The precise architectural
parameters for each model are chosen based on computational efficiency and load-balancing in the layout of models
across GPU’s. Previous work [KMH+20] suggests that validation loss is not strongly sensitive to these parameters
within a reasonably broad range.</p>
</blockquote>
<p>I am not an expert in machine learning; I just know basic RNNs and how they work with just a few parameters and a few layers (something like 5 parameters and 5 layers max; it's been a while). What are the things counted as <strong>parameters</strong> in this 175 billion parameter network? How does the network look with its 96 layers? How many nodes are there per layer?</p>
<p>I am trying to understand this paper and eventually how ChatGPT works, and getting to section 2 so far, I haven't seen what you would use as inputs/parameters to such a large model. The ones you learn in school are tiny compared to this. Hoping for a little illumination on what could be going on.</p>
|
<p>The 175 billion parameters in the GPT-3 language model are values that are used by the model to make predictions about the next word or words in a sentence or piece of text. These parameters are essentially the weights that are applied to the input data in order to make the model's predictions. In a neural network, the parameters are the values that are learned and adjusted during the training process in order to minimize the difference between the predicted output and the desired output.</p>
<p>The GPT-3 model has 96 layers, which means that it is composed of a stack of 96 transformer blocks. Each layer is made up of a number of nodes (units), which are the individual processing elements of the network. Per Table 2.1 of the paper, the largest model has d<sub>model</sub> = 12288 units per bottleneck layer, feed-forward layers of 4 × 12288 = 49152 units, and 96 attention heads of dimension 128 in each layer; the smaller models use narrower layers.</p>
<p>To use the GPT-3 model, you would need to provide it with some input data, such as a sentence or a paragraph of text. The model would then process this input using its 175 billion parameters and its 96 layers, in order to make a prediction about the next word or words that should come next in the text. The model's predictions would be based on the input data and its learned parameters, and it would be able to generate human-like text as a result.</p>
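<p>As a back-of-the-envelope check (my own sketch, using the standard transformer estimate of roughly 12 · n<sub>layers</sub> · d<sub>model</sub>² weights in the attention and feed-forward blocks, plus the embedding matrices):</p>

```python
n_layers, d_model = 96, 12288            # from Table 2.1 of the GPT-3 paper
vocab_size, n_ctx = 50257, 2048          # BPE vocabulary and context window

# Per transformer layer: ~4*d^2 weights for attention (Q, K, V, output
# projections) + ~8*d^2 for the feed-forward block (d -> 4d -> d),
# ignoring biases and layer norms.
block_params = 12 * n_layers * d_model ** 2
embed_params = (vocab_size + n_ctx) * d_model   # token + position embeddings
total = block_params + embed_params

print(f"{total / 1e9:.1f}B parameters")  # about 174.6B -- close to the quoted 175B
```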
| 408
|
language modeling
|
Regular languages - models of computation
|
https://cs.stackexchange.com/questions/9645/regular-languages-models-of-computation
|
<p>As far as I understand, a regular language is a set of words that can be accepted by a DFA.</p>
<p>$L_1 = \{ x\#y \mid x,y \in \{0,1\}^* \ \text{and} \ |x| = |y| \}$</p>
<p>$L_2 = \{ xy \mid x,y \in \{0,1\}^* \ \text{and} \ |x| = |y| \}$</p>
<p>$L_1$ is not regular but $L_2$ is, why is that?
Is it possible to create a DFA that accepts $L_2$ (it needs to remember the length of $x$) if so, why is it impossible to do it with $L_1$?</p>
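<p>For intuition (my own sketch, not part of the question): $L_2$ is exactly the set of even-length strings over $\{0,1\}$, because any even-length word splits in the middle into $x$ and $y$ with $|x| = |y|$. A two-state DFA tracking length parity accepts it:</p>

```python
def accepts_L2(word):
    """Two-state DFA over {0,1}: track only the parity of the length.
    Accept iff it is even, i.e. the word splits as xy with |x| = |y|."""
    state = "even"
    for ch in word:
        assert ch in "01"
        state = "odd" if state == "even" else "even"
    return state == "even"

# L1 is different: a DFA would have to compare |x| and |y| across '#',
# which needs unbounded counting and hence infinitely many states.
```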
| 409
|
|
language modeling
|
What real-world computer languages cannot be described by deterministic grammars?
|
https://cs.stackexchange.com/questions/49037/what-real-world-computer-languages-cannot-be-described-by-deterministic-grammars
|
<p>Are there any examples of real-world computer languages that are non-deterministic?</p>
<p>By computer languages I include programming languages, markup languages, query languages, modeling languages, transformation languages, etc.</p>
<p>By non-deterministic I mean they cannot be parsed with deterministic grammars.</p>
|
<p>Yes. Many well-known programming languages have this property, including Algol 60, C, and C++; see below for details.</p>
<hr>
<p>Algol 60 famously had <a href="https://en.wikipedia.org/wiki/Dangling_else" rel="nofollow noreferrer">the dangling else problem</a>: for some programs, it was ambiguous how they should be parsed (which parse tree should result).</p>
<p>Many modern languages resolve this by picking a grammar that resolves the ambiguity in a particular way. However, this requires carefully constructing the grammar to eliminate ambiguity. Often the most natural way to write the grammar leads to a grammar that is ambiguous, and you have to transform the grammar to avoid the ambiguity.</p>
<p>Another approach is to use a GLR parser, which can handle these grammars. Then you can check at runtime whether there are multiple possible parses, and resolve the ambiguity.</p>
<hr>
<p>Another classic example is the C and C++ programming languages. The reference grammar for C is not context-free, because when you see a name, you need to know whether it is the name of a type or of a variable to know how to parse the statement. For instance, consider the expression</p>
<pre><code>(A)*B
</code></pre>
<p>If <code>A</code> is a variable, this is the multiplication of two variables and is equivalent to <code>A*B</code>. However, if <code>A</code> is the name of a type, then this is a pointer dereference and type-cast (so <code>B</code> is a variable name that holds a pointer type), equivalent to <code>(A)(*B)</code>: a cast applied to the dereference of <code>B</code>.</p>
<p>Similarly,</p>
<pre><code>A*B;
</code></pre>
<p>could be either a variable declaration (if <code>A</code> is the name of a type, this is declaring a variable named <code>B</code> whose type is pointer-to-<code>A</code>) or a multiplication (if <code>A</code> and <code>B</code> are the names of variables).</p>
<p>Compilers typically fix this by including extra code that "goes outside" the language of pure context-free grammars. They keep track of all variable names and type names in scope, and use this to guide parsing in case of this ambiguity. See <a href="https://en.wikipedia.org/wiki/The_lexer_hack" rel="nofollow noreferrer">the lexer hack</a>.</p>
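<p>A toy illustration of that feedback loop (hypothetical Python, far simpler than a real C front end): the parser consults a symbol table of typedef names to decide how to read <code>A*B;</code>.</p>

```python
def parse_stmt(tokens, typedefs):
    """Disambiguate 'A * B ;' the way the lexer hack does: consult a
    symbol table of typedef names accumulated during parsing."""
    a, star, b, semi = tokens
    assert star == "*" and semi == ";"
    if a in typedefs:
        # 'A' names a type: this is a declaration of B as pointer-to-A
        return ("declare", b, "pointer-to-" + a)
    # otherwise both names are variables: it is a multiplication
    return ("multiply", a, b)

assert parse_stmt(["A", "*", "B", ";"], typedefs={"A"}) == ("declare", "B", "pointer-to-A")
assert parse_stmt(["A", "*", "B", ";"], typedefs=set()) == ("multiply", "A", "B")
```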
<hr>
<p>See also <a href="http://blog.reverberate.org/2013/09/ll-and-lr-in-context-why-parsing-tools.html" rel="nofollow noreferrer">LL and LR in Context: Why Parsing Tools Are Hard</a> and <a href="https://www.gnu.org/software/bison/manual/html_node/GLR-Parsers.html" rel="nofollow noreferrer">Bison's manual pages on its GLR parser</a> (which includes a discussion of another ambiguous case in the C++ grammar).</p>
<hr>
<p>Lastly: I think the question has some confusion about the difference between ambiguity vs non-determinism. Automata and languages can be deterministic/non-deterministic, but not grammars. Grammars and languages can be ambiguous/unambiguous. See <a href="https://cs.stackexchange.com/q/109/755">Are there inherently ambiguous and deterministic context-free languages?</a> and <a href="https://cs.stackexchange.com/q/7031/755">Grammatical characterization of deterministic context-free languages</a> and <a href="https://cs.stackexchange.com/q/13393/755">If a parser can parse a non-deterministic grammar, is the parser non-deterministic?</a>. The above examples actually qualify both as ambiguous grammars and non-deterministic languages.</p>
| 410
|
language modeling
|
Are objects appropriate for modeling the real world?
|
https://cs.stackexchange.com/questions/129926/are-objects-appropriate-for-modeling-the-real-world
|
<p>First of all, I know objects are not meant to model the real world, although they have been marketed as such and perhaps that was an intention at some point.</p>
<p>Here I say 'modeling the real world' in a general sense. That includes simulations, modeling of abstract (non-real) concepts and modeling of business support applications, although I'm not sure it is appropriate to develop all of them in a single general-purpose OO language.</p>
<p>Under the assumption that modeling the real world in software development is a desirable and advantageous trait (not considering inappropriate models), I'm inspecting the foundations of object-oriented programming and the Simula languages.</p>
<p>However I'm asking this question in the hope that someone can provide a quick spoiler.</p>
<p>I am under the impression that objects (i.e. endurants) may not be enough to model the real world since (a) their classes are static across time, e.g. a Person is always a Person, not a Child who becomes an Adult whose responsibilities and actions change; (b) processes (i.e. perdurants) are not first-class citizens as objects are; and (c) time is also not a first-class citizen.</p>
<p>Aren't requirements such as these necessary for a language to properly model the real world? Why haven't they been included in the concept?</p>
|
<p>Object-oriented programming languages are designed to support <em>programming</em>. Whether they "properly" model the real world is beside the point and not the primary goal. So, when you ask "why haven't [these requirements] been included?", it's likely because those weren't considered relevant or necessary to the goal of supporting programming.</p>
<p>It's like saying "A hammer is no good for driving in a screw; why haven't the requirements of driving screws been included in the design of a hammer?" Well, that's not what hammers are designed for.</p>
<p>Of course you could invent your own language that follows different principles. You'd presumably end up with something that looks different. I'm not sure you'd ever finish such a project; I suspect there would always be some aspect of the real world that isn't incorporated in the language, and at some point you'd have to accept imperfect fidelity to the real world. In any case, that would be a different project, with different design goals, so it's no surprise that it might lead to a different result.</p>
| 411
|
language modeling
|
Model paths by regular languages
|
https://cs.stackexchange.com/questions/44733/model-paths-by-regular-languages
|
<p>I want to use a DFA to describe a sequence of movements in 2D space (in this setting, the language is the set of paths accepted by the automaton).</p>
<p>That is a typical modeling problem: how can I encode a sequence of 2D movements in a DFA?</p>
<p>In fact, walking through a DFA or NFA seems analogous to walking through the points of a map.</p>
<p>A naive example: states as points with coordinates (x, y), and transitions over an alphabet of "up", "down", etc.
That direct approach is impracticable because "the number of locations is infinite or simply too many". I'm looking for a better and more efficient encoding.</p>
<p>Are there any studies about using regular languages to encode paths or movements? </p>
|
<p>If the number of locations (e.g. points or regions) is finite, then naively you can take these locations as your states and directly use a DFA with an alphabet containing UP, DOWN, etc. But you already said that's impracticable for your case.</p>
<p>So let's look at the case where the number of locations is infinite or simply too many. Basically, in this case, you need to recognize that the sequences UP-DOWN, UP-UP-DOWN-DOWN, UP-UP-UP-DOWN-DOWN-DOWN, etc. are equivalent, because an equal number of UPs and DOWNs gets you back to the starting point. This is the classical example of a non-regular language. I therefore suggest looking into other automata, such as counter machines, if they are sufficient to capture your intentions.</p>
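<p>To make the UP/DOWN example concrete (my own sketch, not from the answer): recognizing paths that return to the start requires unbounded counting, while a DFA suffices exactly when the grid is bounded, with one state per cell.</p>

```python
MOVES = {"UP": (0, 1), "DOWN": (0, -1), "LEFT": (-1, 0), "RIGHT": (1, 0)}

def returns_to_start(moves):
    """Two counters (x, y): a counter machine can check this, but a DFA
    cannot on an unbounded grid, since the counts are unbounded."""
    x = y = 0
    for m in moves:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
    return (x, y) == (0, 0)

def bounded_grid_dfa(moves, size=3):
    """A DFA does suffice on a bounded grid: its states are the
    size x size cells, plus a dead state (None) for leaving the grid."""
    state = (0, 0)
    for m in moves:
        if state is None:
            return None
        dx, dy = MOVES[m]
        nx, ny = state[0] + dx, state[1] + dy
        state = (nx, ny) if 0 <= nx < size and 0 <= ny < size else None
    return state
```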
| 412
|
language modeling
|
How is algorithm complexity modeled for functional languages?
|
https://cs.stackexchange.com/questions/74494/how-is-algorithm-complexity-modeled-for-functional-languages
|
<p>Algorithm complexity is designed to be independent of lower level details but it is based on an imperative model, e.g. array access and modifying a node in a tree take O(1) time. This is not the case in pure functional languages. The Haskell list takes linear time for access. Modifying a node in a tree involves making a new copy of the tree.</p>
<p>Should then there be an alternate modeling of algorithm complexity for functional languages?</p>
|
<p>If you assume that the $\lambda$-calculus is a good model of functional programming languages, then one may think: the $\lambda$-calculus has a
seemingly simple notion of
time-complexity: just count
the number of $\beta$-reduction
steps $(\lambda x.M)N \rightarrow M[N/x]$. </p>
<p>But is this a good complexity measure? </p>
<p>To answer this
question, we should clarify what we mean by complexity measure in the
first place. One good answer is given by the <i>Slot and van Emde
Boas thesis</i>: any good complexity measure should have
a polynomial
relationship to the canonical notion of time-complexity defined using
Turing machines. In other words, there should be a 'reasonable'
encoding $tr(.)$ from $\lambda$-calculus terms to Turing machines, such that for some polynomial $p$, it is the case that for
each term $M$ of size $|M|$: $M$ reduces to a value in $p(|M|)$ $\beta$-reduction steps exactly
when $tr(M)$ reduces to a value in $p(|tr(M)|)$ steps of a Turing machine.</p>
<p>For a long time, it was unclear if this can be achieved in the λ-calculus. The main problems are the following.</p>
<ul>
<li>There are terms that produce normal forms (in a polynomial number of steps) that are of exponential size. Even writing down the normal forms takes exponential time.</li>
<li>The chosen reduction strategy plays an important role. For example there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of <a href="http://www.cs.unibo.it/~asperti/SLIDES/optimal.pdf" rel="noreferrer">optimal λ-reduction</a>), but whose complexity is non-elementary (meaning worse than exponential).</li>
</ul>
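The size blow-up in the first bullet can be made concrete with Church numerals. Below is a small, purely illustrative normal-order β-reducer over de Bruijn terms (all names are invented for this sketch, and this is not the paper's machinery): applying numeral to numeral computes exponentiation, so a chain of `k` twos, a term of size linear in `k`, normalizes to a tower of exponentials.

```python
# Illustrative normal-order beta-reducer on de Bruijn terms.
# Terms: int = variable index, ('lam', body), ('app', f, a).

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if isinstance(t, int):
        return t + d if t >= cutoff else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j by term s inside t."""
    if isinstance(t, int):
        return s if t == j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if isinstance(t, int):
        return None
    if t[0] == 'lam':
        b = step(t[1])
        return None if b is None else ('lam', b)
    f, a = t[1], t[2]
    if not isinstance(f, int) and f[0] == 'lam':
        return shift(subst(f[1], 0, shift(a, 1)), -1)   # the beta rule
    f2 = step(f)
    if f2 is not None:
        return ('app', f2, a)
    a2 = step(a)
    return None if a2 is None else ('app', f, a2)

def normalize(t):
    """Normalize t, counting beta steps."""
    steps = 0
    while (t2 := step(t)) is not None:
        t, steps = t2, steps + 1
    return t, steps

def church(n):
    """The Church numeral n: \\f.\\x. f^n x, in de Bruijn notation."""
    body = 0
    for _ in range(n):
        body = ('app', 1, body)
    return ('lam', ('lam', body))
```

Since Church application computes exponentiation (`m n` normalizes to the numeral `n^m`), `2 2` yields Church 4, `(2 2) 2` yields Church 16, and each extra `2` exponentiates again while the input term only grows by a constant.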
<p>The paper "<a href="https://arxiv.org/abs/1601.01233" rel="noreferrer">Beta Reduction is Invariant, Indeed</a>" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class <strong><a href="https://en.wikipedia.org/wiki/Time_complexity#Polynomial_time" rel="noreferrer">P</a></strong> of polynomial time functions, assuming <a href="https://en.wikipedia.org/wiki/Evaluation_strategy" rel="noreferrer">leftmost-outermost call-by-name</a> reductions. The key insight is the exponential blow-up can only happen for 'uninteresting' reasons which can be defeated by proper sharing. In other words, the class <strong>P</strong> is the same whether you define it counting Turing machine steps or (leftmost-outermost) $\beta$-reductions.</p>
<p>I'm not sure what the situation is for other evaluation strategies.
I'm not aware that a similar programme has been carried out for space complexity.</p>
| 413
|
language modeling
|
Question about word embeddings in a specific language model - GPT-2
|
https://cs.stackexchange.com/questions/116184/question-about-word-embeddings-in-a-specific-language-model-gpt-2
|
<p>How were the <a href="https://openai.com/blog/better-language-models/" rel="nofollow noreferrer">GPT-2</a> token embeddings constructed? </p>
<p>The authors mention that they used Byte Pair Encoding to construct their vocabulary. But BPE is a compression algorithm that returns a list of subword tokens that would best compress the total vocabulary (and allow rare words to be encoded efficiently).</p>
<p>My question is: how was that list of strings turned into the vectors that they actually used for training the model? The papers they published on the <a href="https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow noreferrer">original GPT</a> and its <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow noreferrer">follow-up GPT-2</a> don't seem to specify those details.</p>
| 414
|
|
language modeling
|
Data type implementation of 1.58 bits
|
https://cs.stackexchange.com/questions/169098/data-type-implementation-of-1-58-bits
|
<p>In Large Language Models, using 1-bit binary weights (<a href="https://arxiv.org/pdf/2310.11453" rel="nofollow noreferrer">BitNet: Scaling 1-bit Transformers for Large Language Models - Wang et al, 2023</a>) instead of 32-bit floating point weights has numerous advantages. Some of the recent research (<a href="https://arxiv.org/pdf/2402.17764" rel="nofollow noreferrer">The Era of 1-bit LLMs:
All Large Language Models are in 1.58 Bits - Ma et al, 2024</a>) on LLMs talks about using 1.58 bits weights. By this they mean using {-1, 0, +1} instead of {-1, +1} as the quantized value of the weights, as was the convention in previous work on binarization and 1-bit weights.</p>
<p>I fail to understand the datatype of 1.58-bit weights. Are they defining a new type of binary/boolean value? Are they using 2-bit integers? How exactly is it implemented?</p>
|
<p>They are using trinary values which have an information density of <span class="math-container">$\log_2 3 = 1.584963...$</span> bits per trinary digit.</p>
<p>The effective implementation for computation in traditional compute devices is going to be 2 bits in a signed magnitude style. They can be stored using a variety of tricks, one of which would be using 1 byte to store 5 trinary digits.</p>
<p>In more specialized ASICs, this can be implemented using different voltage levels.</p>
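The "5 trits per byte" trick can be sketched as follows (function names are invented for illustration): since 3^5 = 243 ≤ 256, each group of five ternary weights {-1, 0, +1} maps to a single base-3 value that fits in one byte.

```python
# Sketch of packing five ternary weights {-1, 0, +1} into one byte:
# 3**5 = 243 distinct values fit within the byte's 256, so each group
# of five trits is stored as a base-3 number.
def pack5(trits):
    assert len(trits) == 5 and all(t in (-1, 0, 1) for t in trits)
    value = 0
    for t in trits:                  # big-endian base-3
        value = value * 3 + (t + 1)  # shift {-1,0,1} to digits {0,1,2}
    return value                     # 0..242, fits in one byte

def unpack5(byte):
    trits = []
    for _ in range(5):
        trits.append(byte % 3 - 1)   # recover digits least-significant first
        byte //= 3
    return trits[::-1]               # restore big-endian order
```

This gives 5/1 = 5 trits per byte, i.e. an effective 1.6 bits per trit, close to the information-theoretic 1.585 bits.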
| 415
|
language modeling
|
What are the other language models of computation similar to lambda calculus?
|
https://cs.stackexchange.com/questions/106004/what-are-the-other-language-models-of-computation-similar-to-lambda-calculus
|
<p>I hope this question makes sense, but I was wondering if there are other models of computation similar to lambda calculus that you can use to build up axiomatic mathematical and logical fundamentals like numbers, operators, arithmetic functions and such?</p>
|
<p>Yes, there are many models of computation, and of those many are extensions or modifications of the <span class="math-container">$\lambda$</span>-calculus. You may wish to learn about <a href="https://ncatlab.org/nlab/show/partial+combinatory+algebra" rel="noreferrer">partial combinatory algebras</a>, which are very general models of computation, of which the <span class="math-container">$\lambda$</span>-calculus is an example. Every partial combinatory algebra has a <span class="math-container">$\lambda$</span>-like notation for defining functions. They encompass examples such as: <span class="math-container">$\lambda$</span>-calculus, Turing machines, Turing machines with oracles, topological models of computation, <a href="https://en.wikipedia.org/wiki/Programming_Computable_Functions" rel="noreferrer">PCF</a>, etc.</p>
<p>Many models of computation can be seen as extensions or modifications of the <span class="math-container">$\lambda$</span>-calculus. Let us consider the <a href="https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus" rel="noreferrer">simply typed <span class="math-container">$\lambda$</span>-calculus</a> augmented with various features:</p>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Dialectica_interpretation" rel="noreferrer">System T</a> is <span class="math-container">$\lambda$</span>-calculus with natural numbers and primitive recursion. It is <em>not</em> Turing complete.</li>
<li>The aforementioned <a href="https://en.wikipedia.org/wiki/Programming_Computable_Functions" rel="noreferrer">PCF</a> is <span class="math-container">$\lambda$</span>-calculus extended with natural numbers and general recursion. It is Turing complete.</li>
<li>There are many extensions of PCF, for instance PCF++ can perform parallel computations, PCF+<code>catch</code> can throw and catch exceptions, PCF+<code>quote</code> can disassemble code into source code, etc. These are all <em>different</em> computation models based on the <span class="math-container">$\lambda$</span>-calculus.</li>
<li>In another direction, we can extend the <span class="math-container">$\lambda$</span>-calculus with more powerful types, for instance <a href="https://en.wikipedia.org/wiki/System_F" rel="noreferrer">System F</a> is a <span class="math-container">$\lambda$</span>-calculus with polymorphic types. It is quite powerful but not Turing complete.</li>
</ol>
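System T's distinctive feature, the primitive recursor from item 1, can be sketched in untyped Python (an illustration only, not an encoding of the typed calculus; the names `rec`, `add`, `mult` are invented here): every function built solely from the recursor terminates, which is why System T is total yet not Turing complete.

```python
# Illustrative sketch of System T's primitive recursor on naturals:
# rec(z, s, n) folds s over 0..n-1 starting from z. Anything defined
# only via rec terminates, since the loop bound is fixed in advance.
def rec(z, s, n):
    acc = z
    for k in range(n):
        acc = s(k, acc)
    return acc

def add(m, n):
    # m successors applied to n
    return rec(n, lambda _k, acc: acc + 1, m)

def mult(m, n):
    # m additions of n, starting from 0
    return rec(0, lambda _k, acc: add(acc, n), m)
```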
<p>All of this just scratches the surface of the rich theory of models of computation and functional programming languages (which is what extensions of the <span class="math-container">$\lambda$</span>-calculus are, more or less). If you are interested in the topic, you can start by reading some <a href="https://www.cis.upenn.edu/~bcpierce/tapl/" rel="noreferrer">books on the principles of programming languages</a> (for practical aspects), or some books on <a href="https://www.elsevier.com/books/realizability/van-oosten/978-0-444-51584-1" rel="noreferrer">realizability theory</a> (for theoretical aspects).</p>
| 416
|
language modeling
|
If Large Language Models like ChatGPT grow, do also their problems grow?
|
https://cs.stackexchange.com/questions/159907/if-large-language-models-like-chatgpt-grow-do-also-their-problems-grow
|
<p>In a video, Noam Chomsky said that if these LLMs get bigger, then the things they are not good at also get bigger. He doesn't explain this in more detail. So is this true? In what way do their problems get bigger? Sorry for providing so little context about what they are also getting worse at.</p>
<p>here is the video:
(it's short)</p>
<p><a href="https://m.youtube.com/watch?v=K7S0zHIDMaI&pp=ygUPY2hvbXNreSBjaGF0Z3B0" rel="nofollow noreferrer">https://m.youtube.com/watch?v=K7S0zHIDMaI&pp=ygUPY2hvbXNreSBjaGF0Z3B0</a></p>
|
<p>First, it's important to understand that Chomsky is a linguist (a syntactician) and that the goals of linguistics are very different from artificial intelligence. In particular, the two domains differ greatly as to what constitutes a satisfactory theory of human language. For the modern study of syntax, it is to have a rigorously correct formalization of all of the grammatical sentences in a natural language; it should also make correct linguistic predictions. If there is an exception to this, the theory is considered wrong. This is very different from artificial intelligence, which has much less rigorous goals as to what constitutes success.</p>
<p>Chomsky is pointing out in the video that it’s also important in linguistics that the theory make actual claims about human language. So if the theory is just as good at non-human languages (like <a href="https://github.com/features/copilot" rel="nofollow noreferrer">programming languages</a> or even
<a href="https://www.nature.com/articles/d41586-023-01516-w" rel="nofollow noreferrer">antibody therapies</a>), it can’t make any interesting claims or predictions at all about human languages.</p>
<p>This of course ignores the fact that, notwithstanding the semantic <a href="https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#:%7E:text=AI%20hallucination%20gained%20prominence%20around,falsehoods%20within%20their%20generated%20content." rel="nofollow noreferrer">hallucinations</a>, LLMs have achieved strikingly accurate syntactic competence. So, even though the computer program to run the LLM makes no linguistic claims, the learning implicit in the neural network still has an implicit linguistic theory to manifest that competence.</p>
<p>But this in turn fails, because LLMs are just black boxes and do not in any way constitute satisfactory <a href="https://plato.stanford.edu/entries/scientific-explanation/" rel="nofollow noreferrer">scientific explanation</a>. No one has any idea how to translate billions of weights from an artificial neural network into a linguistic theory.</p>
| 417
|
language modeling
|
Computational models - proving language is decidable
|
https://cs.stackexchange.com/questions/44474/computational-models-proving-language-is-decidable
|
<p>I tried to prove that the following language is recursive/decidable/in R: for $\Sigma=\{0,1\}$, $k$ a positive integer:
$$
L_k= H_\text{TM,epsilon}\cap \Sigma^k
$$
where $H_\text{TM,epsilon}=\{\langle M\rangle\mid M \text{ is a TM that halts on epsilon input}\}$</p>
<p>It is easy to prove because $L_k$ is finite, but I didn't notice this and tried to prove it by finding a decider TM for it.
I thought that since the encoding of the TM has length $k$, it can't have more than $2^k$ states, so by running it on the empty input for $2^k$ steps we can accept if it halts by then and reject otherwise.
I was told that this is incorrect - why is it a wrong solution? How can I prove this using this method (and not via $L_k$ being finite)?</p>
| 418
|
|
language modeling
|
Is there a way to connect a deep language model output to input?
|
https://cs.stackexchange.com/questions/115948/is-there-a-way-to-connect-a-deep-language-model-output-to-input
|
<p>In models like GPT-2, TXL and Grover, is there a good way to know which input weights (tokens) resulted in each token of the output? </p>
| 419
|
|
language modeling
|
Programming languages and Model of computations
|
https://cs.stackexchange.com/questions/133116/programming-languages-and-model-of-computations
|
<p>I am learning about models of computation and I found this <a href="https://en.wikipedia.org/wiki/Model_of_computation" rel="nofollow noreferrer">wikipedia</a> entry that categorises and outlines various models of computation.</p>
<p>Now I want to know which programming languages build on these models.</p>
<p>I am aware that functional programming languages are built upon the lambda calculus, and that imperative programming is more or less a result of the Turing machine model. But I do not know about the other models, or whether any programming language has been designed based on them.</p>
<p>Does anyone know of languages that build on the models of computation listed in the wikipedia entry?</p>
|
<p>Functional programming languages can often be thought of as built on the lambda calculus.</p>
<p>Imperative programming languages can often be thought of as built on the RAM (Random Access Machine) model of computation.</p>
<p>Some hardware description languages can be thought of as being built on the finite-state machine model, or on digital circuits.</p>
<p>Some declarative languages can be thought of as built on Datalog.</p>
<p>Some concurrent languages can be thought of as built on the actor model or on process calculi (e.g., CSP, the <span class="math-container">$\pi$</span>-calculus, etc.).</p>
| 420
|
language modeling
|
What are the inputs to an LSTM for Slot Filling Task
|
https://cs.stackexchange.com/questions/71032/what-are-the-inputs-to-an-lstm-for-slot-filling-task
|
<p>I am confused about the inputs of a Long Short-Term Memory (LSTM) network for the slot filling task in Spoken Language Understanding.</p>
<p>Before I worked on this, I implemented a language model with a Recurrent Neural Network (RNN) and then with a LSTM. The input to the RNN and LSTM language models was a one hot vector, which represented each word. </p>
<p>Now, moving on to the slot filling task for an LSTM, I am having trouble understanding what the input would be. I know that a one-hot vector representation is not enough for this task because the outputs at each time step are slot labels. I have a dictionary (in Python) that maps words to indices (which I can turn into one-hot vectors), and I also have a dictionary with labels (used for slot filling), which I got from the ATIS data. Here is an example:</p>
<p><a href="https://i.sstatic.net/Fo5hD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fo5hD.png" alt="enter image description here"></a></p>
<p>I know I need the above two dictionaries to accomplish the slot filling task, but I cannot figure out how to use them as inputs to the LSTM. Furthermore, I have been using the basic LSTM structure, and for the language model LSTM I built, the output at each time step went through a Softmax function. Is this what will be required for slot filling too?</p>
<p>I am in high school and do not have anyone to contact, so any help is really appreciated. Thank you so much.</p>
| 421
|
|
language modeling
|
How can I find the perplexity of a text by the perplexity of its sentences?
|
https://cs.stackexchange.com/questions/105220/how-can-i-find-the-perplexity-of-a-text-by-the-perplexity-of-its-sentences
|
<p>For a bigram language model, I can calculate the perplexity of sentences of a test document. However, I'm not sure what would be the perplexity of the whole document. Should I get the average of the perplexities of sentences?</p>
| 422
|
|
language modeling
|
Uses of unary or sparse languages in other models
|
https://cs.stackexchange.com/questions/60461/uses-of-unary-or-sparse-languages-in-other-models
|
<p>In the Turing model we have the statements that if there is a unary or sparse language that is NP-complete then P=NP, and that if there is a Turing reduction from an NP-complete problem to a unary or sparse language then NP is in P/poly.</p>
<p>Is there an analog of these statements in the Valiant model and in the BSS model?</p>
| 423
|
|
language modeling
|
What is meant by a full abstract model of a lambda-calculus like language?
|
https://cs.stackexchange.com/questions/116807/what-is-meant-by-a-full-abstract-model-of-a-lambda-calculus-like-language
|
<blockquote>
<p>The simply typed lambda-calculus with numbers and fix has long been a favorite experimental subject for programming language researchers, since it is the simplest language in which a range of subtle semantic phenomena such as full abstraction arise. </p>
</blockquote>
<p>I tried to find a definition of "fully abstract model" but haven't found one. This quote is from Pierce's TAPL book. Note that there is also a related question: <a href="https://cs.stackexchange.com/questions/104192/what-is-a-model-of-lambda-calculus">What is a "model" of lambda calculus?</a> on the site that has not been answered. </p>
|
<p>In denotational semantics, you want to be able to map each of your language terms to some object in your semantic domain or model. Now, it cannot be any arbitrary domain/model as you like, but, informally speaking, something that gives a good intuition about how the language works (its computational behavior).</p>
<p><a href="http://sciencedirect.com/science/article/pii/0304397577900536" rel="noreferrer">Milner</a> tried to formalize what this "intuition" should be and called it full abstraction. Formally, a model is fully abstract if two terms represent the same object in the model exactly when they are observationally equivalent in the object language. Equationally:
<span class="math-container">$$⟦t_1⟧ = ⟦t_2⟧ \iff t_1 \rightsquigarrow t_2 $$</span>
where <span class="math-container">$\rightsquigarrow$</span> represents observational equivalence. In the case of the lambda-calculus, observational equivalence would be <span class="math-container">$\beta\eta\alpha$</span> conversion and <span class="math-container">$⟦\_⟧$</span> is the denotation function.</p>
<p>There are a few papers that you might want to take a look at if you are interested in seeing some fully abstract models of lambda-like languages:</p>
<ol>
<li><a href="https://www.sciencedirect.com/science/article/pii/0304397577900445" rel="noreferrer">Plotkin's</a> paper, which gives a fully abstract model of the lambda-like language called LCF </li>
<li><a href="https://ieeexplore.ieee.org/document/715926" rel="noreferrer">Mulmuley's</a> paper, which gives a fully abstract model of typed lambda calculus.</li>
<li><a href="https://www.sciencedirect.com/science/article/pii/S0890540100929171" rel="noreferrer">Hyland and Ong's</a> papers, which give a fully abstract model of PCF using game semantics</li>
</ol>
| 424
|
language modeling
|
should you take question mark as a separate word in bigram language modelling when finding probability?
|
https://cs.stackexchange.com/questions/132713/should-you-take-question-mark-as-a-separate-word-in-bigram-language-modelling-wh
|
<p>the corpus</p>
<p><s> what is the shape ? </s>
<s> what is the colour ? </s>
<s> what colour is it ? </s>
<s> it is what ? </s>
<s> is it red ? </s>
<s> what is it ? </s>
<s> what shape is it ? </s>
<s> it is red </s>
<s> it is green </s>
<s> the colour is red </s>
<s> the shape is square </s>
<s> red is the colour </s>
<s> square is the shape </s></p>
<p>(1) What is the probability of the following sentences: (4 points)</p>
<p>a. what colour is red ?</p>
| 425
|
|
language modeling
|
Model checking and dependently typed languages for formal verification
|
https://cs.stackexchange.com/questions/170371/model-checking-and-dependently-typed-languages-for-formal-verification
|
<p>What are the differences and limitations between model checking and type-checking dependent types for verifying correctness?</p>
<p>If I were to model a state machine in a language like Idris, what can't I verify that a model checker can and vice-versa? I can enforce valid transitions in Idris, but can I prove reachability or that after any sequence of events the system never reaches some invalid state?</p>
|
<p>Model checkers tend to be better for checking control-flow properties, i.e., about the sequence of events, particularly in the presence of concurrency, threads, etc.</p>
<p>Dependent types tend to be better for checking data-flow properties, i.e., about the values and types that variables can hold, and typically are not used for sophisticated reasoning about which interleaving of control-flow paths are feasible.</p>
<p>None of these are hard-and-fast limitations or boundaries. They just reflect what the tools are most commonly used for or most naturally suited to.</p>
<p>Model checkers are often applied to a model of the system, rather than the code itself -- but not always. Dependent types are often applied to the source code itself, rather than a separate model -- but not always.</p>
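What a model checker gives you for the reachability question in the post can be sketched as an explicit-state search (a toy illustration with a made-up state machine, not a real checker): enumerate every state reachable from the initial one and test whether any invalid state appears; if not, the safety property holds for every possible sequence of events.

```python
# Toy explicit-state "model checker": breadth-first search over a finite
# transition relation. Exhausting the reachable states without hitting a
# bad one proves the invalid states unreachable under any event sequence.
from collections import deque

def bad_state_reachable(init, transitions, bad):
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if state in bad:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical door controller: 'open_locked' should never be reachable.
trans = {
    "closed_unlocked": ["open", "closed_locked"],
    "open": ["closed_unlocked"],
    "closed_locked": ["closed_unlocked"],
}
```

A dependent type system, by contrast, would typically rule out the bad transitions by construction rather than by searching the state space after the fact.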
| 426
|
language modeling
|
Formal model of execution for Java (or general imperative language)
|
https://cs.stackexchange.com/questions/82424/formal-model-of-execution-for-java-or-general-imperative-language
|
<p>I'm trying to prove some statements about execution in Java programs under some heavy restrictions (basically I have a conjecture that if two methods satisfy a set of constraints for a given input then they are equivalent - i.e., the return value and state after execution are identical). To prove this I'm looking for some sort of formalism that will let me talk about this.</p>
<p>I'm familiar with the operational semantics of functional languages and I could possibly translate for loops/while loops to recursive functions... I'd rather not do this and it would be nice to have some machinery so I could stay in imperative land.</p>
<p>More specifically, I want to reason about the <em>state</em> of a method at the <em>k</em>th step of execution. This includes global state:</p>
<ul>
<li>Calls like <code>this.field = 2</code> update our class state</li>
<li>Calls modifying parameters update state outside of our method:
<ul>
<li><code>myParam.setFoo(...)</code></li>
<li><code>myParam.x = y</code></li>
</ul></li>
<li>Calls to static methods
<ul>
<li><code>Blah.static_side_effects()</code></li>
</ul></li>
</ul>
<p>I am assuming that all of this is <em>deterministic</em>. That is, I want to formalize the assumption that if any of these global updates to state occur in two methods, both of whose global and local execution states are identical, then the new state will also be identical - that each step of computation is determined precisely by global state and local state. This obviously precludes RNGs and parallelism (but I may deal with this later...).</p>
<p>Any ideas or sources on how I could approach this? My only thought is to treat methods as a list of statements and try to describe a statements semantics formally.</p>
<p>If possible I'd love to do this at the Java language level rather than the JVM level. This may not be feasible but my goal for now is to make some reasonable assumptions about my operational semantics and then take it from there.</p>
<p>Oh, one final note - any suggestions on how I can improve this question would be greatly appreciated. I'm kind of flailing around trying to find the right language to ask the question and if I'm abusing terminology (like local/global execution state...) I'd love to correct this.</p>
|
<p><a href="https://www.cis.upenn.edu/~bcpierce/papers/fj-toplas.pdf" rel="noreferrer">Featherweight Java</a> is quite highly regarded in the PL community. But if that doesn't suit your needs, here's a general approach to modelling:</p>
<ul>
<li>Formalize your language's AST into expressions and statements</li>
<li>Write a semantics for expressions and statements. Your semantics will need:
<ul>
<li>an evaluation relation, relating expression-state pairs $(e,\sigma)$ to an evaluated expression with updated state $(e', \sigma')$, </li>
<li>an execution relation, relating statement-state pairs $(s, \sigma)$ to output states $\sigma'$. </li>
</ul></li>
<li>These will be mutually recursive on each other, and can be a big or small-step semantics, depending on your particular need. Big step is simpler, but worse at modeling non-terminating executions.</li>
</ul>
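The two mutually recursive relations above can be sketched directly in Python as a big-step toy (the constructor names are invented for this sketch, and expressions are simplified to thread state through explicitly): `eval_expr` maps an (expression, state) pair to a (value, state) pair, and `exec_stmt` maps a (statement, state) pair to an output state.

```python
# Toy big-step semantics for a tiny imperative language. States are
# dicts from variable names to integers.

def eval_expr(e, st):
    """Evaluation relation: (e, st) -> (value, st')."""
    tag = e[0]
    if tag == "num":
        return e[1], st
    if tag == "var":
        return st[e[1]], st
    if tag == "add":
        v1, st = eval_expr(e[1], st)
        v2, st = eval_expr(e[2], st)
        return v1 + v2, st
    raise ValueError(f"unknown expression {tag}")

def exec_stmt(s, st):
    """Execution relation: (s, st) -> st'."""
    tag = s[0]
    if tag == "assign":
        v, st = eval_expr(s[2], st)
        return {**st, s[1]: v}
    if tag == "seq":
        return exec_stmt(s[2], exec_stmt(s[1], st))
    if tag == "while":
        v, st = eval_expr(s[1], st)
        # big-step while rule: run the body, then the whole loop again
        return exec_stmt(s, exec_stmt(s[2], st)) if v != 0 else st
    raise ValueError(f"unknown statement {tag}")
```

Extending this toward Java would mean adding object-structured states, field update, and dynamic dispatch (e.g. turning methods into functions with an explicit "this"), as described above.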
<p>This basic structure will get you pretty far. To model Java, you probably want to structure your state set hierarchically, around specific objects, but the basic principles are the same. You'll also want to model dynamic dispatch, so it probably makes sense to transform class methods into functions taking an explicit "this" argument.</p>
<p>Another option is an axiomatic semantics, i.e. Hoare triples, which define a logic of pre- and post-conditions for statements to model imperative programs. I don't know how these relate to OOP, but I'm sure someone has tried in the 50 years they've been around.</p>
<p>You might also be interested in <a href="https://softwarefoundations.cis.upenn.edu/" rel="noreferrer">Software Foundations</a>. It's oriented at reasoning about imperative languages in the Coq theorem prover, but it gives an excellent overview of formal semantics.</p>
| 427
|
language modeling
|
Compression of domain names
|
https://cs.stackexchange.com/questions/3056/compression-of-domain-names
|
<p>I am curious as to how one might <em>very compactly</em> compress the domain of an arbitrary <a href="http://en.wikipedia.org/wiki/Internationalized_domain_name" rel="nofollow noreferrer">IDN</a> hostname (as defined by <a href="https://www.rfc-editor.org/rfc/rfc5890" rel="nofollow noreferrer">RFC5890</a>) and suspect this could become an interesting challenge. A Unicode host or domain name (U-label) consists of a string of Unicode characters, typically constrained to one language depending on the top-level domain (e.g. Greek letters under <code>.gr</code>), which is encoded into an ASCII string beginning with <code>xn--</code> (the corresponding A-label).</p>
<p>One can build data models not only from the formal requirements that</p>
<ul>
<li><p>each non-Unicode label be a string matching <code>^[a-z\d]([a-z\d\-]{0,61}[a-z\d])?$</code>;</p>
</li>
<li><p>each A-label be a string matching <code>^xn--[a-z\d]([a-z\d\-]{0,57}[a-z\d])?$</code>; and</p>
</li>
<li><p>the total length of the entire domain (A-labels and non-IDN labels concatenated with '.' delimiters) not exceed 255 characters</p>
</li>
</ul>
<p>but also from various heuristics, including:</p>
<ul>
<li><p>lower-order U-labels are often lexically, syntactically and semantically valid phrases in some natural language including proper nouns and numerals (unpunctuated except hyphen, stripped of whitespace and folded per <a href="https://www.rfc-editor.org/rfc/rfc3491" rel="nofollow noreferrer">Nameprep</a>), with a preference for shorter phrases; and</p>
</li>
<li><p>higher-order labels are drawn from a dictionary of SLDs and TLDs and provide context for predicting which natural language is used in the lower-order labels.</p>
</li>
</ul>
<p>I fear that achieving good compression of such short strings will be difficult without considering these specific features of the data and, furthermore, that existing libraries will produce unnecessary overhead in order to accommodate their more general use cases.</p>
<p>Reading Matt Mahoney's online book <a href="http://mattmahoney.net/dc/dce.html" rel="nofollow noreferrer">Data Compression Explained</a>, it is clear that a number of existing techniques could be employed to take advantage of the above (and/or other) modelling assumptions which ought to result in far superior compression versus less specific tools.</p>
<p>By way of context, this question is an offshoot from a <a href="https://stackoverflow.com/questions/7792624/producing-compact-ciphertext-of-short-strings">previous one on SO</a>.</p>
<hr />
<p><strong>Initial thoughts</strong></p>
<p>It strikes me that this problem is an excellent candidate for offline training and I envisage a compressed data format along the following lines:</p>
<ul>
<li><p>A Huffman coding of the "<a href="http://publicsuffix.org/" rel="nofollow noreferrer">public suffix</a>", with probabilities drawn from some published source of domain registration or traffic volumes;</p>
</li>
<li><p>A Huffman coding of which (natural language) model is used for the remaining U-labels, with probabilities drawn from some published source of domain registration or traffic volumes given context of the domain suffix;</p>
</li>
<li><p>Apply some dictionary-based transforms from the specified natural language model; and</p>
</li>
<li><p>An arithmetic coding of each character in the U-labels, with probabilities drawn from contextually adaptive natural language models derived from offline training (and perhaps online too, although I suspect the data may well be too short to provide any meaningful insight?).</p>
</li>
</ul>
|
<p>Huffman coding is optimal for letters and can certainly be adapted to sequences. For instance, if the sequence "ab" results in fewer bits than the bits for "a" and "b", then simply add it to the tree ...and so on.</p>
<p>...you can also probably use some simple library that does all of that for you with near-optimal performance, so you won't gain much from a custom-made, super-fancy compression algorithm.</p>
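The construction the answer refers to can be sketched as plain Huffman coding over a symbol-frequency table (a standard heap-based build; all names are invented for this sketch). Extending the alphabet with frequent multi-character sequences, as suggested, just means adding them as entries in the frequency table.

```python
# Minimal Huffman coder: build a prefix-free code from symbol frequencies.
import heapq
import itertools

def huffman_code(freqs):
    if len(freqs) == 1:
        return {sym: "0" for sym in freqs}
    tick = itertools.count()  # tie-breaker so heap tuples always compare
    heap = [(f, next(tick), ("leaf", s)) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:      # repeatedly merge the two rarest subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), ("node", left, right)))
    code = {}
    def walk(tree, prefix):
        if tree[0] == "leaf":
            code[tree[1]] = prefix
        else:
            walk(tree[1], prefix + "0")
            walk(tree[2], prefix + "1")
    walk(heap[0][2], "")
    return code

def encode(text, code):
    return "".join(code[c] for c in text)

def decode(bits, code):
    inverse, out, cur = {v: k for k, v in code.items()}, [], ""
    for b in bits:
        cur += b
        if cur in inverse:    # greedy decoding works because the code is prefix-free
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```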
| 428
|
language modeling
|
n-gram model: why conditioning on the start symbol <s>?
|
https://cs.stackexchange.com/questions/161693/n-gram-model-why-conditioning-on-the-start-symbol-s
|
<p><a href="https://i.sstatic.net/ngTqN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ngTqN.png" alt="enter image description here" /></a></p>
<p>from the <a href="https://web.stanford.edu/%7Ejurafsky/slp3/3.pdf" rel="nofollow noreferrer">book</a>, Speech and Language Processing (3rd ed. draft)</p>
<p>I understand conditioning on <code><s></code> will give context about the <strong>first</strong> word.</p>
<p>From the formula above, aren't we computing
<code>P(i want english food </s> | <s>)</code>, instead of <code>P(<s> i want english food </s>)</code>? Do we imply <code>p(<s>) = 1</code>?</p>
<p>The <a href="https://en.wikipedia.org/wiki/Word_n-gram_language_model" rel="nofollow noreferrer">n-gram wiki</a> implies the same thing too:
<a href="https://i.sstatic.net/VTMDx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VTMDx.png" alt="enter image description here" /></a></p>
<hr />
<p>edited on: 08/21/2023</p>
<p>I found a different definition for sentence probability from <a href="https://dash.harvard.edu/bitstream/handle/1/25104739/tr-10-98.pdf" rel="nofollow noreferrer">this paper</a>, which is referenced by the above textbook.</p>
<p><a href="https://i.sstatic.net/6hi9V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6hi9V.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/XyPf5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XyPf5.png" alt="enter image description here" /></a></p>
<p>I think the probability of a <strong>sentence</strong> in an n-gram language model is conditioned on <code><s></code>. The conditional probability forms a valid probability distribution.</p>
<p>This paper discards <code><s></code> and <code></s></code> in its notation:</p>
<p><code>P(I want food)</code>, instead of <code>P(<s> I want food </s>)</code></p>
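<p>A small sketch (with made-up probabilities, just to show the bookkeeping) may help: since every factor in the product is a conditional probability, <code><s></code> only ever appears to the right of the conditioning bar, which is exactly the sense in which <code>P(<s>) = 1</code> is implied:</p>

```python
# Toy bigram probabilities -- hypothetical numbers, not from any corpus.
bigram = {
    ("<s>", "i"): 0.25, ("i", "want"): 0.33,
    ("want", "food"): 0.0011, ("food", "</s>"): 0.40,
}

def sentence_prob(words):
    """P(w1..wn </s> | <s>) under a bigram model.  <s> never contributes
    a factor of its own -- it only conditions the first word -- so the
    model behaves as if P(<s>) = 1."""
    tokens = ["<s>"] + words + ["</s>"]
    p = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        p *= bigram[(prev, cur)]
    return p
```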
| 429
|
|
language modeling
|
How to create model for a powerful language whose programs are guaranteed to terminate?
|
https://cs.stackexchange.com/questions/103129/how-to-create-model-for-a-powerful-language-whose-programs-are-guaranteed-to-ter
|
<p>I'm creating a powerful regular expression matching system that can be augmented by adding small microprograms to deterministic finite automaton (DFA) states. The microprogram solves the <a href="http://pages.cs.wisc.edu/~estan/publications/bigbang.pdf" rel="nofollow noreferrer">big bang</a> issue where a rule of the form <code>.*ab.*cd</code> doubles the state count of the DFA.</p>
<p>The solution for the "big bang" problem is to create a program-augmented-DFA like this:</p>
<pre><code>.*ab
{
var=1;
}
.*cd
{
if (var)
{
raise();
}
}
</code></pre>
<p>Now, a problem with programming languages in general is that programs written in nontrivial languages may end up in an infinite loop or infinite recursion. Thus, I'm planning to implement a bytecode where backward jumps are strictly forbidden but forward jumps are allowed. The bytecode would contain the following opcodes:</p>
<ul>
<li>EXIT, RAISE</li>
<li>PUSH_BYTE, PUSH_WORD, PUSH_DWORD, PUSH_QWORD</li>
<li>EQ, NE, LT, GT, LE, GE for equality and inequality</li>
<li>LOGICAL_AND, LOGICAL_NOT, LOGICAL_OR, BITWISE_AND, BITWISE_OR, BITWISE_XOR, BITWISE_NOT</li>
<li>SHL, SHR</li>
<li>ADD, SUB, MUL, DIV, MOD, UNARY_MINUS</li>
<li>JMP_FWD, IF_FALSE_JUMP_FWD for unconditional and conditional jumps (<b>only forwards</b>)</li>
<li>NOP for padding</li>
<li>SET_VAR for setting a variable in the variable structure</li>
<li>PUSH_VAR for pushing a variable in the variable structure into stack</li>
<li>POP and POP_MANY for popping variables from the stack</li>
<li>perhaps some other opcodes like for <code>++</code>, <code>--</code>, <code>+=</code> operators etc.</li>
</ul>
<p>The additional data structures accessed by programs are the variable structure (permanent) and stack (nonpermanent). Nonnegative variables (0, 1, 2, 3, ...) are in the variable structure and negative variables (-1, -2, -3, -4, ...) refer to the stack. So, in the example <code>.*ab / .*cd</code> program, <code>var</code> would be in the variable structure as it's permanent.</p>
<p>I have already verified that arbitrary logical expressions and complex if-else structures can be implemented in this bytecode.</p>
<p>Is there some kind of formal model for bytecodes whose microprograms always are guaranteed to terminate?</p>
<p>Note that function calls are missing. Would the language become more powerful if I allowed function calls forwards, but never backwards (thus making recursion impossible)? If the program can manipulate the stack, I assume a secondary stack just for instruction-pointer return locations would be needed. Would such a secondary stack, which cannot be manipulated, allow supporting function calls in a language whose programs are guaranteed to terminate?</p>
<p>At least without function calls I know that there is an upper bound on the stack size: since one instruction can push at most one item onto the stack, if there are N instructions, an upper bound on the stack size is N. Would the upper bound change if function calls were supported?</p>
<p>Also, without function calls, a program of N instructions takes at most N cycles. Would this change if function calls were supported? For example, would it be possible to have a program with N instructions that takes <span class="math-container">$2^N$</span> cycles to execute?</p>
<p>As a practical example of a similar language, my online search found this, which may be applicable: <a href="https://en.wikipedia.org/wiki/Sieve_(mail_filtering_language)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sieve_(mail_filtering_language)</a></p>
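<p>As a sanity check of the termination claim, here is a tiny interpreter sketch in Python (a hypothetical mini opcode set, not the exact design above): because the program counter never decreases, a program of N instructions executes at most N steps, so termination is guaranteed by construction:</p>

```python
def run(program, env):
    """Interpret a forward-jump-only bytecode.  The program counter never
    decreases, so a program of N instructions executes at most N steps."""
    pc, stack, steps = 0, [], 0
    while pc < len(program):
        steps += 1
        op, *args = program[pc]
        pc += 1
        if op == "PUSH":
            stack.append(args[0])
        elif op == "PUSH_VAR":
            stack.append(env[args[0]])
        elif op == "EQ":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
        elif op == "IF_FALSE_JMP_FWD":
            if not stack.pop():
                assert args[0] > 0      # backward jumps are forbidden
                pc += args[0]
        elif op == "SET_VAR":
            env[args[0]] = stack.pop()
        elif op == "EXIT":
            break
    assert steps <= len(program)        # the termination bound
    return env

# "set var 0 if var 1 equals 2", in the spirit of the .*ab / .*cd example
prog = [("PUSH_VAR", 1), ("PUSH", 2), ("EQ",), ("IF_FALSE_JMP_FWD", 2),
        ("PUSH", 1), ("SET_VAR", 0), ("EXIT",)]
```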
| 430
|
|
language modeling
|
LTL properties in bounded model checking via assertions
|
https://cs.stackexchange.com/questions/49169/ltl-properties-in-bounded-model-checking-via-assertions
|
<p>Is there a way to check LTL properties in a bounded model checker?</p>
<p>As an example, consider a liveness property ($G F p$ - always eventually $p$)? Suppose we have the following trivial program</p>
<pre><code>#include <pthread.h>
int a = 0;
void * f(void * x)
{
a = 1;
return x;
}
int main()
{
pthread_t t;
pthread_create(&t, 0, f, 0);
while (a == 0);
return 1;
}
</code></pre>
<p>Is "always eventually main terminates" expressible in a bounded model checker using only assertions?</p>
<p>In principle, you can construct a Büchi automaton from an LTL formula, express it in the modeling language (e.g. as C code) and run it in parallel to the model/program. However, unbounded loops pose a problem to the bounded model checker. Hence, I wonder how such properties can be expressed using assertions, e.g. in CBMC.</p>
|
<p>The property "always eventually main terminates" should be expressible and verifiable in a bounded model checker. For such properties, the model checker would either verify the property or find a "acceptance cycle" that shows that there exists a loop in an execution trace that does not satisfy the property (by considering execution traces of bounded lengths).</p>
<p>For example, the given property can be expressed in the SPIN model checker with:</p>
<pre><code>int a;
bool end;
proctype thread1() {
a = 1;
}
init {
a = 0;
end = false;
run thread1();
do
:: (a == 0) -> skip;
od;
end = true;
}
ltl prop { always (eventually end) }
</code></pre>
<p>The code follows the given C code, where <code>do</code> simulates checking the condition with <code>while</code> loop. When checking if <code>prop</code> holds, SPIN will find an "acceptance cycle" that shows that main might get constantly scheduled for execution and starve thread1, thus refuting the formula. If a "fair" scheduler is assumed (e.g. with enforcing "weak fairness" <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0ahUKEwj_ytCh5uTJAhUF0h4KHfMQD4cQFgguMAI&url=http%3A%2F%2Fspinroot.com%2Fspin%2FDoc%2Fspin-quick-reference.pdf&usg=AFQjCNH7V4Df1IAJvE36RQgGL6nDeOQh2g&sig2=J5uIUF1ZqFfN3_jhfFBLZg&cad=rja" rel="nofollow" title="SPIN reference">SPIN reference</a>), the property will be proven correct.</p>
<p>Note that although such (liveness) properties should be supported by expressing them in LTL (or other logics like CTL), it is not clear how they could be expressed using only assertions (since an assertion states something about the state at a particular program point).</p>
| 431
|
language modeling
|
What are the conditions necessary for a programming language to have no undefined behavior?
|
https://cs.stackexchange.com/questions/161663/what-are-the-conditions-necessary-for-a-programming-language-to-have-no-undefine
|
<p>For context, yesterday I posted <a href="https://cs.stackexchange.com/q/161643/131435">Does the first incompleteness theorem imply that any Turing complete programming language must have undefined behavior?</a>. Part of what prompted me to ask that question in the first place is that, a while ago, someone on the learnprogramming subreddit told me that the reason C++ in particular has so much undefined behavior is that, for it <em>not</em> to have undefined behavior, it would have to use a much more restrictive language model, but they didn't explain what that means exactly. I had also asked on Quora a while ago why C++ compilers don't always throw errors when a program contains undefined behavior, and at least one answer mentioned that it is fundamentally <em>impossible</em> to always detect undefined behavior at compile time and that this was related to the halting problem being undecidable.</p>
<p>Those two things combined have me wondering about models of computing more generally -- my understanding is that all/most popular programming languages, including C++, are Turing complete, and since I was told the problem of detecting UB in C++ is <em>fundamental</em> and related to the halting problem, I thought that perhaps <em>all</em> Turing complete programming languages must have undefined behavior and C++ is just worse at hiding it than others. But judging from the answers to my above-linked question, I was mistaken about that.</p>
<p>So my question now is, what conditions need to be imposed on a Turing complete language in order to guarantee that all possible programs written in the language will have fully defined behavior determined by the language specification? And, on a side note, does the answer have anything to do with the incompleteness theorems? I ask the latter question because the idea of defining a language for which all possible programs have fully defined behavior seems quite similar to the idea of defining an axiom system for which all possible theorems are provable/disprovable.</p>
|
<p>The problem of statically detecting undefined behavior has nothing to do with undefinedness as such. It's just impossible to prove in general that programs in a Turing-complete language will do anything (<a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="nofollow noreferrer">Rice's theorem</a>). For example, if your <code>main</code> function looks like</p>
<pre><code>int main() {
do_something();
cout << "Done" << endl;
}
</code></pre>
<p>then for any algorithm attempting to determine whether the program halts, there is some definition of <code>do_something</code> that will fool it. For the same reason, for any algorithm attempting to determine whether the program displays <code>Done</code>, there is a definition of <code>do_something</code> that will fool it. For the same reason, if you add <code>"42"[42];</code> at the end, then for any compiler that tries to warn you about undefined behavior, there is a definition of <code>do_something</code> that will fool it.</p>
<p>Whether a program displays <code>Done</code> is decidable for a Turing-complete language that has no way to display text. Likewise, whether a program has undefined behavior is decidable for a Turing-complete language that completely defines the behavior of every program (as Turing's original computing model did, for instance).</p>
<p>It is possible to detect and warn about undefined behavior in C++ in <em>many</em> cases, and popular compilers could do a better job of it than they do.</p>
<blockquote>
<p>someone [...] told me [...] for [C++] to not have undefined behavior it would have to use a much more restrictive language model, but they didn't explain what that meant exactly.</p>
</blockquote>
<p>They probably meant that defining the behavior of every program makes optimization more difficult. For example, if the effect of an out-of-bounds array access is defined, then every array access has to be compiled into code that checks whether the index is out of bounds so it can do the mandated thing (unless the compiler can prove that that code is dead, which in many cases is not possible). If the effect is not defined then the compiler can just generate a single memory-access instruction. It may crash the program, or overwrite some other variable causing weird, hard-to-debug behavior down the line, but that's okay, because that can only happen when the index was out of bounds, and the spec says anything goes then.</p>
| 432
|
language modeling
|
Can other models of computation equivalent to Turing machines also recognize the same languages?
|
https://cs.stackexchange.com/questions/64207/can-other-models-of-computation-equivalent-to-turing-machines-also-recognize-the
|
<p>There are other models of computation equivalent to Turing machines in terms of computability. </p>
<p>Turing machines also recognize recursively enumerable languages.</p>
<p>My questions are</p>
<ul>
<li>Do other models of computation equivalent to Turing machines also recognize the same languages?</li>
<li>Are computability and language recognization unrelated to each other, so that other models of computation equivalent to Turing machines don't recognize or accept any language?</li>
</ul>
<p>Thanks.</p>
|
<p>Two models of computation are equivalent iff they can compute exactly the same set of functions. So the answer to your question is yes, they can recognize the same languages. The definition of "accept" has to be altered a bit to fit the new model, of course, since for example in the lambda calculus you have no states that you could mark as accepting.</p>
| 433
|
language modeling
|
The importance of the language semantics for code generation and frameworks for code generation in model-driven development
|
https://cs.stackexchange.com/questions/55781/the-importance-of-the-language-semantics-for-code-generation-and-frameworks-for
|
<p>I am implementing a workflow where code in industrial programming languages (JavaScript and Java) should be generated from formal (formally verified) expressions (from ontologies as objects and rule formulas as behaviors). What is the best practice for such code generation? Are any frameworks available for this?</p>
<p>In my opinion, the semantics of the programming language is required, and I should be able to do the code generation in two steps: 1) translate my formal expressions into semantic expressions of the target language; 2) translate the semantic expressions into executable code. In reality, I cannot find any work that connects the semantics of a programming language with code generation.</p>
<p>Is there a special kind of semantics of programming languages that is usable not only for the analysis of programs but also for the generation of programs?</p>
<p>My guess is that this should be a really useful approach for generating formally verified code, but I cannot find research work about this. Are there better keywords available for this?</p>
<p><em>Maybe - a more relevant question is - what kind of compilers/translators do Model-Driven Development tools use for the generation of source code (platform-dependent code), and how can the semantics of a programming language be used for the construction of such compilers?</em></p>
<p><em>Note added. There already is a complete unifying (denotational and operational) semantics of Java, JavaScript and other industrial programming languages in the K framework. So this is more a question about the application of the K framework for code generation, if that is possible at all.</em></p>
|
<p>In general when we talk about code generation (or model-to-model transformation in general), clearly defined semantics is quite important, since such transformations usually make sense when both the source and the target model <em>semantically match</em> according to some criteria. For example, programmers might describe the behaviour of a program with a formal specification, which, according to the semantics, defines possible implementations in the target language (i.e. outputs of code generation). Failing to precisely define the semantics might lead to <em>ambiguous or, in general invalid, transformations</em>.</p>
<blockquote>
<p>Is there a special kind of semantics of programming languages that is
usable not only for the analysis of programs but also for the
generation of programs?</p>
</blockquote>
<p>In general, there is no need for a special kind of semantics for targeting generation of programs. However, different code generation systems target different domains of code generation and thus leverage different kinds of programming languages and their semantics. (Some overview of code generation in the context of program synthesis can be found in [1]).</p>
<p>There exists a body of work for which a programming language is coupled with language verification and program synthesis (described below). A seemingly well-fitting example of this is the <a href="http://lara.epfl.ch/w/leon" rel="nofollow">Leon framework</a>, which uses a subset of Scala as the main programming language that expresses both programs and formal specifications. The framework allows not only verifying that a program matches the given specification (again, strictly defined by the semantics of the language and the interpretation of the logic used in the specification), but also generating code. In the latter case, programmers give only a formal specification of the program, in terms of its precondition and postcondition, and the framework's code generation part (i.e. its synthesizer) generates a program that matches the given specification [2].</p>
<blockquote>
<p>My guess is that this should be a really useful approach for generating
formally verified code, but I cannot find research work about this.
Are there better keywords available for this?</p>
</blockquote>
<p>One of the fields that seems to closely match what you described is program synthesis. Program synthesis is the task of automatically discovering an executable piece of code given user intent expressed using various forms of constraints such as input-output examples, demonstrations, natural language, etc [1].</p>
<blockquote>
<p>In reality I cannot find any work that connects the semantics of
a programming language with code generation.</p>
</blockquote>
<p>In general, the compilers for many modern general-purpose (high-level) languages perform some form of code generation that has to satisfy the rules of the language semantics (i.e. the language specification). E.g. the compiler for the Scala programming language emits Java bytecode---which can be thought of as a language whose elements semantically more closely match the instructions of the actual hardware---according to the Scala language specification.</p>
<blockquote>
<p>[About compilers and other tools for] Model-Driven Development useful for
code generation. What is the best practice or framework for code
generation?</p>
</blockquote>
<p>Although, in my answer, I was focusing on the connection of the semantics of the programming language and code generation in the context of (general-purpose) program synthesis, these concepts are by all means applicable to other domains or practices such as Model-Driven Development. For general pointers on this direction, it might be useful to check out books that try to summarize them [4], [5].</p>
<blockquote>
<p>So - this is more a question about the application of the K framework for code
generation, if that is possible at all?</p>
</blockquote>
<p>Although it might be the most suitable for your problem and intended use, there are other alternatives that might work for you as well, such as some of the <a href="https://en.wikipedia.org/wiki/Transformation_language" rel="nofollow">transformation languages like Stratego</a> or rewriting systems like <a href="https://en.wikipedia.org/wiki/Maude_system" rel="nofollow">Maude</a>.</p>
<p>[1] <a href="http://dl.acm.org/citation.cfm?id=1836091" rel="nofollow">Gulwani, Sumit. "Dimensions in program synthesis." In Proceedings of the 12th international ACM SIGPLAN symposium on Principles and practice of declarative programming, pp. 13-24. ACM, 2010.</a></p>
<p>[2] <a href="http://dl.acm.org/citation.cfm?id=2509555" rel="nofollow">Kneuss, Etienne, Ivan Kuraj, Viktor Kuncak, and Philippe Suter. "Synthesis modulo recursive functions." Acm Sigplan Notices 48, no. 10 (2013): 407-426.</a></p>
<p>[3] <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwibm4fPlNjOAhXC1R4KHbZUCrQQFggoMAA&url=http%3A%2F%2Fwww.scala-lang.org%2Fdocu%2Ffiles%2FScalaReference.pdf&usg=AFQjCNHwxfm9PxRh99_Ld9rj6oJnBboKQg&sig2=u_C2hDR0M84flQvodYG50g" rel="nofollow">Scala Language Specification</a></p>
<p>[4] <a href="https://www.sites.google.com/site/mdsebook/home" rel="nofollow">Brambilla, Marco, Jordi Cabot, and Manuel Wimmer. "Model-driven software engineering in practice." Synthesis Lectures on Software Engineering 1, no. 1 (2012): 1-182.</a></p>
<p>[5] <a href="http://dl.acm.org/citation.cfm?id=SERIES11879.1696490" rel="nofollow">Milicev, Dragan. Model-driven development with executable UML. John Wiley & Sons, 2009.</a></p>
| 434
|
language modeling
|
Smallest class of automata model whose corresponding language class contains CFL and is closed against (dis)allowing nondeterminism in the model
|
https://cs.stackexchange.com/questions/43152/smallest-class-of-automata-model-whose-corresponding-language-class-contains-cfl
|
<p>From <a href="https://cs.stackexchange.com/questions/43137/machines-for-context-free-languages-which-gain-no-extra-power-from-nondeterminis?noredirect=1#comment86645_43137">a comment</a>, an interesting question popped up. The class of CFLs (the languages recognized by PDAs) are obviously not closed under nondeterminism - what I mean by this is that deterministic PDAs are not equivalent in power to nondeterministic PDAs.</p>
<p>However, all CFLs are decidable, and in this case, any deterministic TM is equivalent in power to a nondeterministic TM.</p>
<p>Now, this is a large gap - what is the smallest class of languages "above" CFL that is closed under nondeterminism?</p>
|
<p>The notion of a <em>PDA</em> can be generalized to an <em>$S(n)$ auxiliary pushdown automaton ($S(n)$-AuxPDA)</em>. It consists of</p>
<ol>
<li>a read-only input tape, surrounded by endmarkers,</li>
<li>a finite state control,</li>
<li>a read-write storage tape of length $S(n)$, where $n$ is the length of the input string, and</li>
<li>a stack</li>
</ol>
<p>In "Hopcroft/Ullman (1979) <em>Introduction to Automata Theory, Languages, and Computation (1st ed.)</em> we find:</p>
<p><strong>Theorem 14.1</strong> The following are equivalent for $S(n)\geq\log n$.</p>
<ol>
<li>$L$ is accepted by a deterministic $S(n)$-AuxPDA</li>
<li>$L$ is accepted by a nondeterministic $S(n)$-AuxPDA</li>
<li>$L$ is in $\operatorname{DTIME}(c^{S(n)})$ for some constant $c$.</li>
</ol>
<p>with the surprising:</p>
<p><strong>Corollary</strong> $L$ is in $\mathsf P$ if and only if $L$ is accepted by a $\log n$-AuxPDA.</p>
<p>The proof consists of three parts: (1) If L is accepted by a nondeterministic $S(n)$-AuxPDA with $S(n)\geq \log n$, then $L$ is in $\operatorname{DTIME}(c^{S(n)})$ for some constant $c$. (2) If $L$ is in $\operatorname{DTIME}(T(n))$, then $L$ is accepted in time $T^4(n)$ by a deterministic one-tape TM with a very simple forward-backward head scan pattern (independent of the input). (3) If $L$ is accepted in time $T(n)$ by a deterministic one-tape TM with a very simple forward-backward head scan pattern (independent of the input), then $L$ is accepted by a deterministic $\log T(n)$-AuxPDA.</p>
<p>Part (1) is basically a rigorous proof that the "halting problem is decidable", where the number of operations was counted thoroughly. Part (2) is the creative idea that prepares the stage for part (3). Part (3) uses the auxiliary storage for tracking the time step, which allows to reconstruct the head position due to the very simple forward-backward head scan pattern, and the stack for recursive backtracking.</p>
<hr>
<p>The above is a copy of large parts of an <a href="https://cs.stackexchange.com/a/42306">answer to another question</a>. So in which sense does it answers the current question? It is not the smallest imaginable class that contains $\mathsf{CFL}$ and is closed under nondeterminism. But it is a very well known class (i.e. $\mathsf P$) and a natural machine model, which has been studied thoroughly in the past, and is still studied today (with an additional runtime restriction) in the context of <a href="https://complexityzoo.uwaterloo.ca/Complexity_Zoo:N#nauxpdap" rel="nofollow noreferrer">LogCFL</a>. Indeed, <a href="http://en.wikipedia.org/wiki/LOGCFL" rel="nofollow noreferrer">LogCFL</a> is also closed under nondeterminism and is closer than $\mathsf P$ to $\mathsf{CFL}$, proving my point that the above (i.e. $\mathsf P$ = $\log n$-AuxPDA) is not the smallest imaginable class of this kind.</p>
| 435
|
language modeling
|
Next Word Prediction using n-gram & Tries
|
https://cs.stackexchange.com/questions/18351/next-word-prediction-using-n-gram-tries
|
<p>I am studying the following paper for understanding next-word prediction using n-gram & trie: - <a href="http://nlp.cs.berkeley.edu/pubs/Pauls-Klein_2011_LM_paper.pdf" rel="nofollow">http://nlp.cs.berkeley.edu/pubs/Pauls-Klein_2011_LM_paper.pdf</a> Before this, I did some brief study of what n-grams are, and I know the trie data structure. The issue is that the algorithm in the paper is not very intuitive. Can anyone provide a better/more intuitive explanation of this algorithm, or a similar language model implementation? I am new to this site, so if this question's structure is inappropriate, please guide me. Thanks in advance.</p>
| 436
|
|
language modeling
|
Does ChatGPT use specific sub-programs
|
https://cs.stackexchange.com/questions/156287/does-chatgpt-use-specific-sub-programs
|
<p>I have had a somewhat hard time trying to understand how ChatGPT can "solve" some tasks that cannot be entirely cast as language-model-based rephrasing of textual subsets of the internet directed by a textual query.</p>
<p>For example, ChatGPT seems to be able to do calculations (even if not always correctly), execute code, extract literature references from input text, etc.</p>
<p>It seems to me that it is not simply one huge neural network, but has access to some additional programs like a calculator or some interpreters for programming languages, etc. So it would be more like a human-machine interface.</p>
<p>To what extent is my interpretation correct?</p>
|
<blockquote>
<p>It seems to me that it is not simply one huge neural network, but has access to some additional programs like a calculator</p>
</blockquote>
<p>I'd add to <a href="https://cs.stackexchange.com/users/755/d-w">D.W.</a>'s answer that OpenAI's CEO <a href="https://www.youtube.com/live/outcGtbnMuQ?feature=share&t=1317" rel="nofollow noreferrer">explicitly mentioned</a> today in the GPT-4 announcement video that GPT-4 isn't hooked up to a calculator.</p>
| 437
|
language modeling
|
Question about bigram model
|
https://cs.stackexchange.com/questions/52526/question-about-bigram-model
|
<p>I am trying to build a bigram letter model.</p>
<p>I obtain a sequence of words in a form of ['hello','I','am','Johnny'].</p>
<p>Firstly, I lower all the words to obtain : ['hello','i','am','johnny'].</p>
<p>I am capable of building a bigram letter model, but I have read somewhere that you should provide some kind of empty strings / padding to the model.</p>
<p>Does anybody know why you have to provide padding to the input data to build a proper language model? And how to use padding on this sample input to build a letter model?</p>
<p>I was thinking about making a space in front of every word, but I am not convinced that this is the right solution - other options that I am considering are adding padding at the end of every sentence or after each sequence of 2 characters as this is a bigram letter model.</p>
|
<p>The padding is there since the distribution of letters at the beginning and end of words is very different from their distribution inside words. To capture that, separate words with spaces and add trailing spaces on both sides: <code>' hello i am johnny '</code>.</p>
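<p>A minimal sketch of this in Python (hypothetical helper name, just to show the padding): join the words with single spaces, pad both ends with a space, and count adjacent character pairs:</p>

```python
from collections import Counter

def letter_bigrams(words):
    """Count letter bigrams with space padding, as suggested above:
    joining the words with spaces and padding both ends gives word-initial
    letters a (space, letter) bigram and word-final letters a
    (letter, space) bigram."""
    text = " " + " ".join(w.lower() for w in words) + " "
    return Counter(zip(text, text[1:]))

counts = letter_bigrams(["hello", "I", "am", "Johnny"])
```

Without the padding, the model would have no way to learn that, say, 'h' is a common word-initial letter.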
| 438
|
language modeling
|
How do you have a type typed "Type" when implementing a programming language?
|
https://cs.stackexchange.com/questions/131117/how-do-you-have-a-type-typed-type-when-implementing-a-programming-language
|
<p>I am working on the <em>base</em> of a language model, and am wondering how to represent the base type, which is a type <code>Type</code>. I have heard of an "infinite chain of types", but (a) I can't seem to find it on the internet anymore, and (b) I am not sure if that's what I need or what it really means in practice.</p>
<p>Basically, I have a system in the language like this:</p>
<pre><code>type User
type String
type X
...
</code></pre>
<p>Internally these get compiled to something like this:</p>
<pre><code>[
{
type: 'Type',
name: 'User'
},
{
type: 'Type',
name: 'String'
},
...
]
</code></pre>
<p>But actually, the <code>type: 'Type'</code> gets further compiled to point not to the string <code>'Type'</code>, but to the actual <code>Type</code> object:</p>
<pre><code>[
{
type: theTypeObject,
name: 'User'
},
{
type: theTypeObject,
name: 'String'
},
...
]
</code></pre>
<p>So then the problem is, I need to now define or specify the "type type" itself:</p>
<pre><code>type Type
</code></pre>
<p>which I try represent in a similar way, so now we have:</p>
<pre><code>[
{
type: 'Type',
name: 'Type'
},
...
]
</code></pre>
<p>which is:</p>
<pre><code>let theTypeObject = { name: 'Type' }
theTypeObject.type = theTypeObject
</code></pre>
<p>Is that correct? What is this really saying? It is a circular structure, does this even make sense conceptually?</p>
<p>What would be better to do in this situation? Or is this perfectly acceptable? Basically I would like to understand how to explain what this circular structure even means, because it just makes me confused.</p>
<blockquote>
<p>The type "Type" is typed "Type". It is an element of itself...</p>
</blockquote>
<p>That doesn't seem logically possible. So what should I do?</p>
|
<p>I think what you might have found in your previous research is a <a href="https://en.wikipedia.org/wiki/Pure_type_system" rel="nofollow noreferrer">pure type system</a>.</p>
<p>In a pure type system, to avoid having "Type" be a "Type" at the same level that "Int" is a "Type", you would define another layer of "Types", where you would define "Type" as the entire first layer (and you would define this layer in the next one, and so on).</p>
<p>In practice, you could define your base types as "types", and the next layer would be kinds. The type "type" would be a kind. You might also want to consider functions of types to types as another kind.</p>
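<p>A rough sketch of the stratification idea in Python (the layer numbers are hypothetical, just to illustrate): every classifier lives one layer up, so nothing classifies itself and the circularity disappears:</p>

```python
# Each "thing" records the layer it lives on and what classifies it.
kind = {"name": "Kind", "layer": 2}                      # the layer of kinds
type_type = {"name": "Type", "layer": 1, "type": kind}   # "Type" is a kind
user = {"name": "User", "layer": 0, "type": type_type}   # "User" is a type

assert user["type"] is type_type
assert type_type["type"] is kind
assert user["layer"] < user["type"]["layer"]             # layers strictly increase
```

In a full pure type system the chain continues upward (kinds are classified by sorts, and so on), but an implementation usually only materializes the first few layers.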
| 439
|
language modeling
|
How do RNN's handle providing output with different dimension than input
|
https://cs.stackexchange.com/questions/54607/how-do-rnns-handle-providing-output-with-different-dimension-than-input
|
<p>It seems like an RNN has to have h<sub>t-1</sub> be the same size as the input vector, since they're being added to one another; but if you're doing something like translating into another language or classification, how would you handle this? I'd imagine you'd just have a fully connected layer that would predict the output, but I was wondering if there is a standard convention?</p>
|
<p>So basically what happens is that the input x is multiplied by an input matrix U to yield a state vector s, which transforms it to a new dimensionality. The recurrent matrix W is always square, so it maintains the dimensions of s. Lastly, the output matrix V maps s to the output dimensionality.</p>
<p>So if you had an input vector of 5 dimensions, and you wanted a state vector of 20 dimensions and an output vector of 2 dimensions, your dimensions would be:</p>
<pre><code>x is 5x1
s is 20x1
y is 2x1
U is 20x5
W is 20x20
V is 2x20
</code></pre>
<p>and your formula is:</p>
<pre><code>s_t = U*x_t + W*s_(t-1)
y_t = V*s_t
</code></pre>
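<p>A quick way to sanity-check those shapes in plain Python (toy zero weights and no nonlinearity or bias; the dimensions are the point):</p>

```python
def matvec(M, v):
    """Multiply an (m x n) matrix, given as a list of rows, by an n-vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def rnn_step(U, W, V, x, s_prev):
    """One recurrent step, shapes only: s = U x + W s_prev, y = V s."""
    s = [u + w for u, w in zip(matvec(U, x), matvec(W, s_prev))]
    return s, matvec(V, s)

U = [[0.0] * 5 for _ in range(20)]   # 20x5: input -> state
W = [[0.0] * 20 for _ in range(20)]  # 20x20: state -> state (square)
V = [[0.0] * 20 for _ in range(2)]   # 2x20: state -> output

s, y = rnn_step(U, W, V, [1.0] * 5, [0.0] * 20)
assert len(s) == 20 and len(y) == 2
```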
| 440
|
language modeling
|
What are the modern alternatives to Backus–Naur form and what are their advantages?
|
https://cs.stackexchange.com/questions/127499/what-are-the-modern-alternatives-to-backus-naur-form-and-what-are-their-advantag
|
<p>I am very new to the whole concept of context-free grammars to represent the syntax tree of formal languages (i.e., programming languages). It seems that the Backus–Naur form (BNF) is the oldest of all possible notations and the most prevalent one, though it looks like an ancient piece of art. Now I'm wondering: if you want to invent a new programming language, what modern alternative should you use, and why?</p>
<p>These are what I have found so far:</p>
<ul>
<li>ISO extended Backus–Naur form (EBNF)</li>
<li>W3C-BNF</li>
<li>augmented Backus–Naur form (ABNF)</li>
<li>Extreme BNF (XBNF)</li>
<li>Translational Backus–Naur form</li>
<li>ANother Tool for Language Recognition (ANTLR)</li>
<li>Wirth syntax notation (WSN)</li>
<li>Van Wijngaarden grammar</li>
<li><a href="https://docs.microsoft.com/en-us/previous-versions//dd129523(v=vs.85)?redirectedfrom=MSDN" rel="nofollow noreferrer">Microsoft “M” modeling language</a></li>
<li>Compiler Description Language (CDL)</li>
<li>Xtext grammar language</li>
<li>definite clause grammar (DCG)</li>
<li><a href="https://en.wikipedia.org/wiki/META_II" rel="nofollow noreferrer">META II</a></li>
</ul>
<p>I would like to know what are the advantages and disadvantages of these options?</p>
|
<h1>Answer 1: The question is meaningless as written.</h1>
<p>You are mixing different <em>kinds</em> of notations here that are intended for different purposes.</p>
<ul>
<li>BNF and ABNF are concrete notations for writing the abstract concept of a <a href="https://en.wikipedia.org/wiki/Context-free_grammar" rel="nofollow noreferrer">context-free grammar</a>.</li>
<li>"Van Wijngaarden grammar" refers either to an abstract type of grammar a la "context-free grammar", or to a concrete notation for writing this type of grammar. Van Wijngaarden grammars are strictly more expressive than context-free grammars. In fact, parsing them is undecidable in general, making them more like an <a href="https://en.wikipedia.org/wiki/Esoteric_programming_language" rel="nofollow noreferrer">esoteric programming language</a> in some respects.</li>
<li>ANTLR is a particular software tool for generating parsers. It has a language for writing grammars that it accepts as input. Like most other parser generators, ANTLR includes specialized features in its grammar language for manipulating the generated parser code. For example, it is possible in an ANTLR input file to directly insert Java code that is to be executed by the generated parser when a certain token is encountered. This sort of feature only makes sense in the context of a parser generator.</li>
</ul>
<h1>Answer 2: It mostly doesn't matter.</h1>
<p>If, as you say, "you want to invent a new programming language," there are two situations where writing down a grammar is relevant:</p>
<ol>
<li><p>Writing a parser as part of the compiler implementation. Your choice of how to express the grammar here is dictated by how your parser is implemented. Whether you use an ANTLR/Yacc/Bison-style <a href="https://en.wikipedia.org/wiki/Compiler-compiler" rel="nofollow noreferrer">parser generator</a>, a <a href="https://en.wikipedia.org/wiki/Parser_combinator" rel="nofollow noreferrer">parser combinator</a> library, etc. will determine what your grammar looks like in the source code.</p>
</li>
<li><p>Writing a language specification. Most programming language specifications are written to be read by humans, not computers. Therefore, anything that is sufficiently clear to a human reader is an acceptable choice of notation. It is common for a language specification to define its own grammar notation, rendering the particular choice of notation largely irrelevant. For example, <a href="https://www.haskell.org/onlinereport/haskell2010/haskellch2.html" rel="nofollow noreferrer">the "Lexical Structure" chapter</a> of the Haskell 2010 Language Report starts with a "Notational Conventions" section that defines how the grammar will be written.</p>
</li>
</ol>
| 441
|
language modeling
|
Theoretical CSPs where (in)equality constraints can be expressed as a single constraint?
|
https://cs.stackexchange.com/questions/65097/theoretical-csps-where-inequality-constraints-can-be-expressed-as-a-single-con
|
<p>I'm designing puzzles by running a <a href="https://en.wikipedia.org/wiki/Constraint_satisfaction_problem#Flexible_CSPs" rel="nofollow">MAX-CSP</a> solver, and it works nicely in practice. For concreteness, my problems have the following form (in a pseudo-modeling language):</p>
<pre><code># set up vars & their domains
x_1 {1}
x_2 {0,1}
x_3 {0}
x_4 {0,1}
# constraints
(x_1 != x_2)
(x_2 = x_3)
(x_3 = x_4)
(x_1 != x_4)
(x_2 = x_4)
</code></pre>
<p>The objective is then to choose values from the domains of each variable so as to maximize the number of constraints satisfied.</p>
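<p>For an instance this small the objective can be checked exhaustively. The following is only a brute-force sketch (the dictionary/constraint encoding is my own, not a real MAX-CSP solver):</p>
<pre><code>from itertools import product

# The toy instance above, brute-forced.
domains = {"x1": [1], "x2": [0, 1], "x3": [0], "x4": [0, 1]}
constraints = [  # (var_a, var_b, relation)
    ("x1", "x2", "ne"), ("x2", "x3", "eq"), ("x3", "x4", "eq"),
    ("x1", "x4", "ne"), ("x2", "x4", "eq"),
]

def satisfied(assignment):
    # Count how many constraints the assignment satisfies.
    return sum(
        (assignment[a] != assignment[b]) if rel == "ne"
        else (assignment[a] == assignment[b])
        for a, b, rel in constraints
    )

names = list(domains)
best = max(
    (dict(zip(names, values)) for values in product(*domains.values())),
    key=satisfied,
)
print(best, satisfied(best))  # this instance happens to be fully satisfiable
</code></pre>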
<p>More generally, in all of my instances, we have a fixed value of $k$. The domain of each variable is of size at most $k$. Every constraint involves two variables, and is an inequality or an equality constraint.</p>
<blockquote>
<p>When CSPs are studied from a theoretical viewpoint, is there a model that captures the above in a straightforward way?</p>
</blockquote>
|
<p>In complexity theory, CSPs are usually specified as a set of allowed <em>predicates</em>. If the (finite) domain is $D$, a predicate of arity $d$ is an arbitrary subset of $D^d$ of allowed values. In particular, equality (or inequality) is a predicate of arity 2.</p>
<p>This is the point of view taken in <a href="https://en.wikipedia.org/wiki/Schaefer%27s_dichotomy_theorem" rel="nofollow">Schaefer's dichotomy theorem</a> as well as the more recent universal algebra approach (see for example a <a href="http://www.karlin.mff.cuni.cz/~barto/Articles/SurveyBSLverSub.pdf" rel="nofollow">short survey</a> by Libor Barto). Another example is the celebrated <a href="https://people.eecs.berkeley.edu/~prasad/Files/extabstract.pdf" rel="nofollow">Raghavendra's theorem</a>, which states that assuming the unique games conjecture, every CSP can be approximated optimally using semidefinite programming.</p>
| 442
|
language modeling
|
As a Teacher: Choosing a suitable programming language
|
https://cs.stackexchange.com/questions/66983/as-a-teacher-choosing-a-suitable-programming-language
|
<p><em>I'm not sure if it's the right place for this question. Sorry if going a bit off-topic.</em></p>
<p>Choosing a suitable language for the first programming course is one of the most important things that every teacher/lecturer should bear in mind, especially if the students are young or have limited math knowledge.</p>
<p>I'm currently teaching a group of highly enthusiastic young people (about 16 to 17 years old) with a <strong>medium knowledge of math</strong>. They're attending <strong>High School at 10th grade</strong> class right now. I'm going to start teaching a programming language for the upcoming semester.</p>
<p>They're a group of handpicked students from throughout the city with an extraordinary level of creativity and diligence, so I see that <strong>working with a real programming language would not be a hard task for them.</strong> So, simple graphical drag-and-drop solutions like Turtle Art, Scratch, and Tynker are not considered options.</p>
<p>There are a few factors that should be checked before making a choice:</p>
<ul>
<li><strong>Simplicity</strong>: Most of them probably have not experienced any kind of real coding action before. </li>
<li><strong>Simplicity, Again</strong>: One of the main ideas is teaching <em>how to think algorithmically</em>. A sophisticated language, or one with difficult syntax, will divert them from the path. </li>
<li><strong>Generality</strong>: It's better that the language not be designed for special development cases. Take PHP and MATLAB as examples which are respectively designed for Web Development and Calculation/Modeling.</li>
<li><strong>Minimum Objective stuff</strong>: No forced OO programming (like Java). Or at least with the minimum dependency to OO concepts.</li>
<li><strong>Platform</strong>: It's important to have Windows as a supported dev environment, as nearly all of them are on Windows.</li>
<li><strong>Easy to Set-up</strong>: It's better to have a straightforward way of setting up the dev environment.</li>
<li><strong>Industry preference</strong>: Not a serious problem. But it should be at least a currently-active language allowing students to reach nearly-real dev experiences. </li>
<li><strong>Hardware Portability</strong>: It's important (but not required) that the language be flexible enough to be used on Hardware programming. (I'm not speaking of Hardware Description languages like Verilog and VHDL.) The aim is programming for more simple processor-based hardware like <strong>AVR Microprocessors</strong> or <strong>Raspberry Pi GPIO interface</strong>.</li>
</ul>
<p>I want to know that </p>
<ol>
<li>Are there any other factors that I'm missing?</li>
<li>And, what languages do you suggest as choices?</li>
</ol>
|
<p>My answer? Python.</p>
<p>Let me explain by tackling all your points.</p>
<ol>
<li><strong>Simplicity</strong>. Python code reads like English. Seriously, how simple is
<code>print("Hello World!")</code></li>
<li><strong>Generality</strong>. Python can be used for web development (via Flask/Django), data analysis (via NumPy/Pandas/SciPy), games (via PyGame), as well as a multitude of other tasks because of the sheer number of libraries there are.</li>
<li><strong>Minimum Objective stuff</strong>. You can do some OOP in Python but it isn't required.</li>
<li><strong>Platform</strong>. Python 2.7 ships on pretty much every Linux distro, and there are plenty of YouTube videos on setting it up on Windows/Mac. If anything, you can use the online interpreter that Repl.It offers.</li>
<li><strong>Industry Preference</strong>. Correct me if I'm wrong, but Python has consistently been ranked as one of the most popular languages.</li>
</ol>
<p>In my experience teaching, it is extremely important to make sure that the syntax is as easy as possible to write and understand. For a new programmer, it can be pretty discouraging when he/she writes code only to see an error message (especially if he/she does not have the skill to read an error message and debug).</p>
<p>Side note, PythonTutor will be really helpful in explaining some major computer science/programming concepts.</p>
| 443
|
language modeling
|
Canonical reference on agent-based computing
|
https://cs.stackexchange.com/questions/2764/canonical-reference-on-agent-based-computing
|
<p>I am interested in exploring the world of <a href="http://en.wikipedia.org/wiki/BDI_software_agent" rel="nofollow">BDI agents</a> (software agents that possess "beliefs, desires, intentions", essentially the agent has knowledge of the world, a set of motivations, and carries out certain plans).</p>
<p>I recently read A Canonical Agent Model for Healthcare Applications [1], which left me with a lot of questions, particularly about the specialization of different agent models for particular applications. </p>
<p>The particular modeling language used in their examples was ProForma, and I understand that this is more for the abstract specification of an agent, and that something like <a href="http://en.wikipedia.org/wiki/3APL" rel="nofollow">3APL</a> can be used as an actual programming language in this regard, with syntax like:</p>
<pre><code>BELIEFBASE {
status(standby).
at(0,0).
location(r1,2,4).
location(r5,6,1).
dirty(r1).
dirty(r5).
}
</code></pre>
<p>My question is, all of these systems clearly reflect years of cumulative efforts, and rather than jumping in to the deep end, I'd like to ease into this world of research a bit more slowly. Is there a canonical reference in this area that might be able to provide a more general overview of all of these levels of organization, and where the abstractions stop and the implementations begin?</p>
<hr>
<ol>
<li>Fox J., Glasspool, D., Modgil, S. <a href="http://www.sdela.dds.nl/entityresearch/fox_glasspool_modgil.pdf" rel="nofollow">A Canonical Agent Model for Healthcare Applications</a>. <em>IEEE Intelligent Systems, 21</em>(6), 21-28, 2006.</li>
</ol>
|
<p>If you want to approach this field from a computer science perspective then the standard reference I would recommend is:</p>
<blockquote>
<p>Yoav Shoham and Kevin Leyton-Brown [2009], "Multiagent systems: algorithmic, game-theoretic, and logical foundations", Cambridge University press.</p>
</blockquote>
<p>Currently, it seems like the focus of theoretical work in this field (and this is obviously biased by my own interests) is to consider environments where agents are limited in their interactions. If you just have agents meet randomly, then ABMs are usually overkill and things can be done analytically. However, if there is some interesting network structure (fixed or dynamic) to the interactions (as there often is in real life) then ABMs become an essential tool. A good book discussing some of these ideas from a CS perspective is:</p>
<blockquote>
<p>David Easley and Jon Kleinberg [2010], "Networks, crowds, and markets: Reasoning about a highly connected world," Cambridge University press. (<a href="http://www.cs.cornell.edu/home/kleinber/networks-book/networks-book.pdf" rel="nofollow noreferrer">draft available online</a>)</p>
</blockquote>
<h3>Questions of interest:</h3>
<ul>
<li><p><a href="https://cstheory.stackexchange.com/q/7240/1037">Simulation modeling of diseases</a></p>
</li>
<li><p><a href="https://cstheory.stackexchange.com/q/7286/1037">Sources for Algorithmic Evolutionary Game Theory</a></p>
</li>
</ul>
| 444
|
language modeling
|
n-grams textbook question
|
https://cs.stackexchange.com/questions/125653/n-grams-textbook-question
|
<p>I have this question I found regarding n-gram modeling in the <em>Speech and Language Processing</em> textbook by Daniel Jurafsky:</p>
<blockquote>
<p>Suppose we didn’t use the end-symbol <code></s></code>. Train an unsmoothed bigram grammar on the following training corpus without using the end-symbol <code></s></code>:<br>
<code><s></code> a b<br>
<code><s></code> b b<br>
<code><s></code> b a<br>
<code><s></code> a a<br>
Demonstrate that your bigram model does not assign a single probability distribution across all sentence lengths by showing that the sum of the probability of the four possible 2-word sentences over the alphabet
{a,b} is 1.0, and the sum of the probability of all possible 3 word sentences over the alphabet {a,b} is also 1.0.</p>
</blockquote>
<p>Can you please verify that my approach is correct?</p>
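<p>The sums the exercise asks for can be checked mechanically. A small sketch, assuming the standard unsmoothed bigram MLE with histories counted at every non-final position:</p>
<pre><code>from collections import Counter
from itertools import product

corpus = [["<s>", "a", "b"], ["<s>", "b", "b"],
          ["<s>", "b", "a"], ["<s>", "a", "a"]]

bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
unigrams = Counter(w for s in corpus for w in s[:-1])  # history counts

def p(sentence):
    # Unsmoothed bigram MLE: product of count(prev, cur) / count(prev).
    prob = 1.0
    for prev, cur in zip(sentence, sentence[1:]):
        prob *= bigrams[(prev, cur)] / unigrams[prev]
    return prob

two_word = sum(p(("<s>",) + s) for s in product("ab", repeat=2))
three_word = sum(p(("<s>",) + s) for s in product("ab", repeat=3))
print(two_word, three_word)  # 1.0 1.0
</code></pre>
<p>Every conditional probability in this corpus is 1/2, so the 4 two-word sentences each get 1/4 and the 8 three-word sentences each get 1/8, and both families sum to 1.0, which is what the exercise wants demonstrated.</p>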
| 445
|
|
language modeling
|
Proving the probability of zero occurrences in training using Good-Turing maximum likelihood estimate
|
https://cs.stackexchange.com/questions/90000/proving-the-probability-of-zero-occurrences-in-training-using-good-turing-maximu
|
<p><strong>Background</strong></p>
<p>Good-Turing (GT) smoothing is used in language models to estimate the counts of words in the test set that have not been seen in the training set. </p>
<p>In GT smoothing, $N_c$ is the count of things observed $c$ times (so a count of a count). As an example, the sentence "Sam I am I am Sam I do not eat" has unigram $N_1=3$ (do, not, eat), unigram $N_2=2$, ...</p>
<p>GT smoothing uses the count of words we've seen once in the training set to estimate the count of words in the test set that we've never seen before. The estimate of the count of these words in the test set that we've never seen before is given by: $$c^*\leftarrow (c+1)\frac{N_{c+1}}{N_c}.$$ This is known as the Good-Turing estimate of Maximum Likelihood Estimate (MLE) for language models. This redistributes probability masses of word occurrences.</p>
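<p>The counts-of-counts $N_c$ for the example sentence can be tabulated directly; a quick sketch:</p>
<pre><code>from collections import Counter

words = "Sam I am I am Sam I do not eat".split()
counts = Counter(words)                      # word -> frequency c
count_of_counts = Counter(counts.values())   # N_c: how many words occur c times

print(count_of_counts)  # N_1 = 3 (do, not, eat), N_2 = 2 (Sam, am), N_3 = 1 (I)
</code></pre>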
<p><strong>The problem</strong></p>
<p>The Good-Turing probability of a word with zero frequency is $$P_{GT}(c=0)=\frac{N_1}{N}.$$</p>
<p>I can't completely see where this comes from. Zero frequency in the training set implies that $c=0\implies c+1=1$, so that coefficient disappearing makes sense. Also, $N_{c+1}=N_{0+1}=N_1$ makes sense in getting the numerator.</p>
<p>But what confuses me is how this implies that $N_0=1$. </p>
<p>The reason $N_0 = 1$ is because: $$P_{GT}(c>0)=\frac{c^*}{N}=\frac{c+1}{N}\frac{N_{c+1}}{N_c},$$ so $$P_{GT}(c=0)=\frac{N_1}{NN_0}=\frac{N_1}{N}$$</p>
<p>How does $N_0=1$? </p>
<p>$N_0$ is the count of the words observed $0$ times, so I don't see how this is $1$.</p>
<p>In other words, how is $P_{GT}(c=0)$ fully derived?</p>
| 446
|
|
language modeling
|
Data Flow Analysis with exceptions
|
https://cs.stackexchange.com/questions/59832/data-flow-analysis-with-exceptions
|
<p>Data flow analysis work over a control flow graph. When a language under consideration supports exceptions, control flow graph can explode. </p>
<p>What are the standard techniques for dealing with this blow-up?
Can we soundly disregard edges induced by exception? Data flow analyses anyhow compute over-approximations, so we would end up with a less precise but sound solution. Is this true? </p>
<p><strong>Update</strong>: Here are few useful links that I was able to dig out at the end:</p>
<ul>
<li><a href="https://smartech.gatech.edu/bitstream/handle/1853/6581/GIT-CC-00-04.pdf?sequence=1" rel="nofollow">Analysis and Testing of Programs with Exception-Handling Constructs</a></li>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.8105&rep=rep1&type=pdf" rel="nofollow">Efficient And Precise Modeling of Exceptions for Analysis of Java Programs</a></li>
</ul>
|
<p>Ignoring exceptions is unsound. Example:</p>
<pre><code>let g = {
raise E;
}
let f = {
x := interesting_stuff();
g();
x := 0;
}
</code></pre>
<p>When analyzing <code>f</code>, you need to take into account the fact that <code>g</code> raises an exception, otherwise you would incorrectly conclude that <code>x</code> is always 0 on return from <code>f</code>.</p>
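<p>The same point in runnable Python, with <code>RuntimeError</code> standing in for the exception <code>E</code>:</p>
<pre><code>def g():
    raise RuntimeError("E")   # stands in for `raise E`

def f():
    x = "interesting"         # stands in for interesting_stuff()
    g()                       # the exception propagates past here...
    x = 0                     # ...so this assignment is dead on that path
    return x

try:
    f()
except RuntimeError:
    print("f raised; x was never reset to 0")
</code></pre>
<p>An analysis that drops the exception edge would wrongly conclude that <code>x == 0</code> on every exit from <code>f</code>.</p>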
<p>I don't know that there is a “standard” technique for dealing with exceptions. There's some literature on the topic, I don't have any more idea of what papers are relevant than I can find by a Google search.</p>
<p>Formally, exceptions can be turned into conditional statements that propagate up the call chain, which of course blows up the control flow graph. In many concrete cases, the exception case is the less interesting case, where a lot of data gets killed, so it should be handled lazily in a forward approach (no need to analyze the liveness <em>on</em> the exception path if the handler kills the data).</p>
| 447
|
language modeling
|
Is there a standard or model or taxonomy of programming languages different than machine-threshold-highlevel?
|
https://cs.stackexchange.com/questions/149309/is-there-a-standard-or-model-or-taxonomy-of-programming-languages-different-than
|
<p>I understand that there are three types of programming languages:</p>
<ul>
<li>Machine languages</li>
<li>Assembly languages</li>
<li>high-level languages</li>
</ul>
<p>And that:</p>
<ul>
<li>Machine languages have no abstraction</li>
<li>Assembly language have little abstraction</li>
<li>High-level languages have much abstraction (data types being the difference maker?)</li>
</ul>
<hr />
<p>Is there a standard or model or taxonomy of programming languages different than machine-assembly-highlevel?</p>
<p>In other words, is this taxonomy necessary or are there more "sophisticated" or "complex" categorizations?</p>
|
<p>I've never heard of "threshold languages". I don't think that is a "thing", or at least not a terribly important thing.</p>
<p>Don't worry too much about taxonomies. Taxonomies are mostly not very important. They have value only insomuch as they help you understand more deeply, but real life is more complicated than any taxonomy. It is a waste of your time to try to find the ultimate taxonomy of programming languages or try to find all possible taxonomies. Taxonomies can often be very shallow, and it might be more productive to go deeper and study a few program languages concretely.</p>
<p>Machine languages (by which I assume you mean assembly language?) have nothing to do with machine learning.</p>
| 448
|
language modeling
|
What is the activation function, label and loss function for Hierarchical Softmax
|
https://cs.stackexchange.com/questions/43912/what-is-the-activation-function-label-and-loss-function-for-hierachical-softmax
|
<p>Several papers (<a href="http://www.iro.umontreal.ca/%7Elisa/pointeurs/hierarchical-nnlm-aistats05.pdf" rel="nofollow noreferrer">1</a> (originator), <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">2</a>, <a href="http://dx.doi.org/10.1007/978-3-662-45924-9_16" rel="nofollow noreferrer">3</a>) suggest the use of Hierarchical Softmax instead of softmax for classification where the number of classes is large (e.g. many thousands).</p>
<p>I haven't been able to get clear in my head what this means for the actual final layer and output/labels of the neural network.</p>
<p>For (plain) softmax the activation function is the softmax function: <span class="math-container">$$\mathbf{\hat{y}}=\sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$</span></p>
<p>and the loss (error) function is cross entropy <span class="math-container">$$C(\mathbf{\hat{y}},\mathbf{y})=\sum_{k=1}^K-\mathbf{y_k}\times \log{\mathbf{\hat{y}_k}}$$</span> where <em>y</em> is "one-hot" -- all zeros except a 1 for the index matching the class (this leads to an efficient implementation, if you know the class index).</p>
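<p>A quick numerical sketch of this plain-softmax case (the <code>softmax</code> helper and the example logits are my own, just to fix notation):</p>
<pre><code>import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # shift for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])    # logits for K = 3 classes
y = np.zeros(3)
y[0] = 1.0                       # one-hot label: true class is 0

y_hat = softmax(z)
loss = -np.sum(y * np.log(y_hat))  # cross entropy; reduces to -log(y_hat[0])

print(y_hat, loss)
</code></pre>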
<p>For <strong>Hierarchical Softmax</strong>:
What is the form of the label <strong>y</strong>, the activation function <span class="math-container">$\sigma(\mathbf{z})$</span> and the loss (error) function <span class="math-container">$C(\mathbf{\hat{y}},\mathbf{y})$</span>?</p>
<p>I am starting to suspect that the label is a binary code for the class, e.g. a Huffman code, that the activation function is simply sigmoid (or tanh), and that the loss is just squared error.</p>
<p>Is that all there is to it?</p>
<p>Or is it in fact done with a multilayer network, in some way? (Obviously you can't stack softmax layers as inputs to softmax layers.)</p>
<h3>Implementations</h3>
<p>There are quite a few implementations around, but I find all of them hard to follow.</p>
<ul>
<li><p>Word2Vec in <a href="https://code.google.com/p/word2vec/source/browse/trunk/word2vec.c" rel="nofollow noreferrer">C</a>, and Gensim in <a href="https://github.com/piskvorky/gensim/blob/develop/gensim/models/word2vec.py" rel="nofollow noreferrer">Python</a>.</p>
<ul>
<li>I'm not great at understanding C -- too many clever tricks (like using 1D indexing + offsets on 2D arrays) -- and the Python stays close to the C (it is an enhanced translation).</li>
<li>There are two linked articles <a href="https://yinwenpeng.wordpress.com/2013/09/26/hierarchical-softmax-in-neural-network-language-model/" rel="nofollow noreferrer">A</a>, <a href="https://yinwenpeng.wordpress.com/2013/12/18/word2vec-gradient-calculation/" rel="nofollow noreferrer">B</a> which go someway towards explaining the C code.</li>
</ul>
</li>
<li><p>A very different <a href="https://github.com/Philip-Bachman/NN-Python/blob/master/nlp/NLMLayers.py" rel="nofollow noreferrer">python</a> (<a href="https://github.com/Philip-Bachman/NN-Python/blob/e9a7619806c5ccbe2bd648b2a2e0af7967dc6996/nlp/CythonFuncsPyx.pyx#L174" rel="nofollow noreferrer">Cython</a> actually)</p>
</li>
<li><p>A even more different <a href="https://github.com/lisa-groundhog/GroundHog/blob/66472ba649aa6a4c6b710a0de3d0344be2f7b5c9/groundhog/layers/cost_layers.py#L1163" rel="nofollow noreferrer">Python (Theano)</a> implementation. This one is not truly Hierarchical soft-max as it only has two layers.</p>
</li>
</ul>
<hr />
<h2>Papers</h2>
<ol>
<li>Morin, F., & Bengio, Y. (2005, January). <a href="http://www.iro.umontreal.ca/%7Elisa/pointeurs/hierarchical-nnlm-aistats05.pdf" rel="nofollow noreferrer">Hierarchical probabilistic neural network language model</a>. In Proceedings of the international workshop on artificial intelligence and statistics (pp. 246-252).</li>
<li>Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">Distributed representations of words and phrases and their compositionality</a>. In Advances in neural information processing systems (pp. 3111-3119).</li>
<li>Wang, Y., Li, Z., Liu, J., He, Z., Huang, Y., & Li, D. (2014). <a href="http://dx.doi.org/10.1007/978-3-662-45924-9_16" rel="nofollow noreferrer">Word Vector Modeling for Sentiment Analysis of Product Reviews</a>. In Natural Language Processing and Chinese Computing (pp. 168-180). Springer Berlin Heidelberg.</li>
</ol>
| 449
|
|
language modeling
|
RAM BSS model based (or its variant) computer recognizing Boolean languages
|
https://cs.stackexchange.com/questions/102832/ram-bss-model-based-or-its-variant-computer-recognizing-boolean-languages
|
<p>Can any RAM/BSS-model-based machine, or machines which are variants, recognize boolean languages (languages such as P, NP, or the like)? If so, which languages are recognizable by RAM/BSS machines or their variants? (A variant could be one that allows comparison, or a RAM/BSS with weaker assumptions.) </p>
|
<p><a href="https://math-inf.uni-greifswald.de/en/department/about-us/employees/pd-dr-rer-nat-habil-christine-gassner/" rel="nofollow noreferrer">Christine Gaßsner</a> studied the question you are asking. These papers seem relevant, but you should have a look at her list of publications (<a href="https://dblp.uni-trier.de/pers/hd/g/Ga=szlig=ner:Christine" rel="nofollow noreferrer">DBLP</a>):</p>
<ol>
<li><a href="https://doi.org/10.3217/jucs-016-18-2563" rel="nofollow noreferrer">The Separation of Relativized Versions of P and DNP for the Ring of the Reals.</a> J. UCS 16(18): 2563-2568 (2010)</li>
<li><a href="http://dx.doi.org/10.3217/jucs-015-06-1186" rel="nofollow noreferrer">Oracles and Relativizations of the P =? NP Question for Several Structures</a>. J. UCS 15(6): 1186-1205 (2009)</li>
<li><a href="http://drops.dagstuhl.de/opus/volltexte/2009/2266/" rel="nofollow noreferrer">Relativizations of the P =? DNP Question for the BSS Model</a></li>
</ol>
<p>You seem to be asking a basic question, namely: can BSS RAM machines recognize <em>any</em> boolean languages? Well, a BSS machine always has some kind of comparison operator, at the very least <span class="math-container">$<$</span> on real numbers. We can use it to decide equality of <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, from which it follows that a BSS machine can recognize <em>at least</em> all the languages that an ordinary RAM machine can. The more interesting question is whether it can do <em>more</em> than that, or perhaps more efficiently.</p>
| 450
|
language modeling
|
Why the most dominant programming languages didn't follow CSP thread model?
|
https://cs.stackexchange.com/questions/119711/why-the-most-dominant-programming-languages-didnt-follow-csp-thread-model
|
<blockquote>
<p>I was trying to ask this question in StackOverflow, but later realized that this question is more relevant to general computer science, not specific engineering problems. If you think it's not, please let me know.</p>
</blockquote>
<p>Recently I've found out what CSP(Communicating Sequential Processes) is.</p>
<p>According to the article <a href="https://swtch.com/%7Ersc/thread/" rel="nofollow noreferrer">Bell Labs and CSP Threads</a>:</p>
<blockquote>
<p>Most computer science undergraduates are forced to read Andrew Birrell's “An Introduction to Programming with Threads.” The SRC threads model is the one used by most thread packages currently available. The problem with all of these is that they are too low-level. Unlike the communication primitive provided by Hoare, the primitives in the SRC-style threading module must be combined with other techniques, usually shared memory, in order to be used effectively...</p>
</blockquote>
<p>Another article <a href="https://blog.golang.org/share-memory-by-communicating" rel="nofollow noreferrer">Share memory by communicating</a> from Golang blog says:</p>
<blockquote>
<p>Traditional threading models (commonly used when writing Java, C++, and Python programs, for example) require the programmer to communicate between threads using shared memory (...)</p>
<p>Go's concurrency primitives - goroutines and channels - provide an elegant and distinct means of structuring concurrent software. (These concepts have an interesting history that begins with C. A. R. Hoare's Communicating Sequential Processes.)</p>
</blockquote>
<p>Based on what I've seen so far, because Hoare proposed CSP in 1978, it seems that there was no reason to use the SRC thread model in programming languages like C++ (1985), Java (1995) or Python (1990).</p>
<p>So my question is, <strong>why the most dominant programming languages didn't follow Hoare's thread model?</strong></p>
<p>Here's my guesses:</p>
<ol>
<li>Most programmers back then didn't know about Hoare's thread model.</li>
<li>Most programmers are used to traditional thread model.</li>
</ol>
<p>What do you think?</p>
|
<p>Practically speaking, the C++ threading (memory) model is directly inspired by the Java Memory Model, and C followed. <a href="https://hboehm.info/c++mm/" rel="nofollow noreferrer">Hans Böhm</a> was closely involved with the process and has a great resource list. (*)</p>
<p>You'll quickly note that your dates are pretty optimistic - this memory model did not exist in 1985 when the first C++ implementations were created. There's a practical reason for this. Even in 2000, it wasn't clear which model of parallel computing was going to win. Parallel extensions to mainstream languages were first defined as ad-hoc extensions by the hardware vendors, and later partially standardized by such things as POSIX threads and MPI.</p>
<p>Because of the challenges involved, you see that parallelism support is added first to languages used in high-performance computing. Even C++ was a bit late to the party; C and FORTRAN were the main languages. Java, by virtue of being late, had a chance to learn lessons from those. And since it was designed as a portable language, Java needed a clean memory model not tied to a particular hardware vendor's implementation.</p>
<p>So the common memory model can be traced to various actual hardware implementations that could be unified by a single definition. And this led to a secondary effect: because this was now a standard, software started to use it, and hardware vendors generally followed suit.</p>
<p>There's another winner in the parallelism space: GPGPUs. They entered the competition via a different route. NVidia's CUDA is a similar vendor extension to C, C++ and FORTRAN. This was possible because the GPU market justified independent development, and the general-purpose use of GPUs was a lucky coincidence.</p>
<p><em>* This development can be dated pretty accurately, the use of the Java memory model for C++ was presented at the 2001-10 ISO C++ meeting in Redmond, WA.</em></p>
| 451
|
language modeling
|
How to prove NP-hardness from scratch?
|
https://cs.stackexchange.com/questions/114952/how-to-prove-np-hardness-from-scratch
|
<p>I am working on a problem whose complexity is unknown.
By the nature of the problem, I cannot use long edges as I please, so 3SAT and variants are almost impossible to use.</p>
<p>Finally, I have decided to go for the most primitive method -- Turing Machines.</p>
<p>Oddly enough, I could not find any example of NP-hardness reduction done directly by modeling the problem as a language, and showing that a deterministic Turing Machine cannot decide whether a given instance belongs to that language (I might've messed up with the terminology here).</p>
<p>So, assuming that there are no problems to perform an NP-hardness reduction, how does one prove that a problem is NP-hard? Are there any publications that does this?</p>
<p>I also want to add that I know how to perform an NP-hardness reduction. However, the problem that I am tackling is a "localized" geometric problem, that it does not allow me to model any given instance of 3SAT, 3-coloring, vertex cover, etc. </p>
<p>The immediate question comes to mind: "what if the problem is polynomial time solvable?"<br>
Well, that is also a possibility, but I want to exhaust everything before I move on to designing an algorithm.</p>
|
<p>The only two methods I've seen are (a) a reduction or (b) direct proof (as in the proof of the <a href="https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem" rel="nofollow noreferrer">Cook-Levin theorem</a>). It is almost universally the case that a reduction is easier than a direct proof. Therefore, I suggest you keep trying to find a reduction, and consider other reduction partners. There are lots and lots of problems known to be NP-complete; perhaps you can find one that is a more suitable partner than 3SAT.</p>
<p>Possibly useful: <a href="https://cs.stackexchange.com/q/1240/755">How do I construct reductions between problems to prove a problem is NP-complete?</a>.</p>
| 452
|
language modeling
|
Is concurrent language CCS or CSP turing-equivalent in language power?
|
https://cs.stackexchange.com/questions/32743/is-concurrent-language-ccs-or-csp-turing-equivalent-in-language-power
|
<ol>
<li><p>Does the concurrent language CSP (or CCS, $\pi$-calculus) model interacting machines?</p></li>
<li><p>Is CSP (or CCS, $\pi$-calculus) Turing-equivalent to other programming languages like C? </p></li>
</ol>
|
<p>The answer to your (1) depends on what exactly you mean by "model" and by "interacting machines". The $\pi$-calculus in particular is usually deemed to be a good simplification of the core aspects of message passing concurrency.</p>
<p>Regarding (2), you can simulate Turing machines in CSP, CCS, the $\pi$-calculus as well as in C. Hence they are all equivalent in terms of the functions they can compute on the natural numbers.</p>
| 453
|
language modeling
|
Defining an HTML Template as an Algebraic Type
|
https://cs.stackexchange.com/questions/94026/defining-an-html-template-as-an-algebraic-type
|
<p>Wondering if/how you could define a highly nested structure as a <a href="https://en.wikipedia.org/wiki/Dependent_type" rel="nofollow noreferrer">Dependent Type</a> (or an Algebraic or Parameterized type). Specifically, an HTML template. Not that they work like this (HTML <a href="https://www.html5rocks.com/en/tutorials/webcomponents/template/" rel="nofollow noreferrer">templates</a> don't have variables to plug in), but imagine a template like this:</p>
<pre><code><template id="MyTemplate">
<section>
<header>
<h1>{title}</h1>
<h2>{subtitle}</h2>
</header>
<div>{content}</div>
<footer>
<cite>{author}</cite>
<time>{year}</time>
</footer>
</section>
</template>
</code></pre>
<p>This is a template, so it basically acts like a class (or a <em>type</em>). So you would instantiate the type like this:</p>
<pre><code>var node = new MyTemplate({
title: 'A title',
subtitle: 'A subtitle',
content: 'Foo bar ...',
author: 'foo@example',
year: 2018
})
</code></pre>
<p>And you would get:</p>
<pre><code><section>
<header>
<h1>A title</h1>
<h2>A subtitle</h2>
</header>
<div>Foo bar ...</div>
<footer>
<cite>foo@example</cite>
<time>2018</time>
</footer>
</section>
</code></pre>
<p>The HTML <em>node</em> that is returned is like the type instance. (I'm assuming these are HTMLEntity objects and related DOM objects, not strings). The way the DOM node <em>instances</em> are generically represented is:</p>
<pre><code>{
tag: 'section',
children: [
{
tag: 'header',
children: [ ... ]
},
...
]
}
</code></pre>
<p>But the template, being a type, is like it is defining multiple nested types at once. That is, this is a type:</p>
<pre><code><h1>{title}</h1>
</code></pre>
<p>And that is wrapped in this type:</p>
<pre><code><header>
<h1>{title}</h1>
<h2>{subtitle}</h2>
</header>
</code></pre>
<p>And that is wrapped in the <code><section>...</section></code> type. It's like a type like this:</p>
<pre><code>type Section {
type Header {
type H1 {
title: String
}
type H2 {
subtitle: String
}
},
type Div {
content: String
},
type Footer {
type H1 {
cite: String
}
type H2 {
time: Integer
}
}
}
</code></pre>
<p>Or perhaps, since we are actually plugging this into the HTMLEntity's <code>textContent</code> property, it would be more like this:</p>
<pre><code>type Section {
type Header {
type H1 {
title: String
where textContent = title
}
type H2 {
subtitle: String
where textContent = title
}
},
...
}
</code></pre>
<p>Either way, wondering if you can do anything like that in Haskell, or another type-theory-oriented language like Coq.</p>
<p>In Haskell, a (binary) tree is represented as a recursive structure:</p>
<pre><code>data Tree a = Nil | Node a (Tree a) (Tree a)
</code></pre>
<p>I don't know much Haskell, so I'm not sure how to represent the above HTML template "type" as a Haskell algebraic type (or if it is possible). But it seems like it could be defined as some form of an algebraic or a parameterized type.</p>
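<p>For concreteness, here is a sketch (in Python rather than Haskell, with all names invented for illustration) of how the nested template might be written as plain nested record types, where each nested element becomes its own type and the template is the outermost one:</p>

```python
from dataclasses import dataclass

# Each nested element of the template becomes its own record type.
@dataclass
class Header:
    title: str      # fills <h1>
    subtitle: str   # fills <h2>

@dataclass
class Footer:
    author: str     # fills <cite>
    year: int       # fills <time>

@dataclass
class Section:      # the template as a whole
    header: Header
    content: str    # fills <div>
    footer: Footer

node = Section(Header("A title", "A subtitle"), "Foo bar ...",
               Footer("foo@example", 2018))
```

<p>Instantiating <code>Section</code> then plays the role of <code>new MyTemplate({...})</code> above.</p>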
<p>My <strong>first question</strong> is what kind of type the template would be called, and how to model it as a type (in Haskell, Coq, or some other language that uses a lot of type theory).</p>
<p>The <strong>second question</strong> is if an extended version of this template, which has looping, would be considered a dependent type (and then how to model it as a dependent type). That might look like this.</p>
<pre><code><template id="MyTemplateWithIteration">
<section>
<header>
<h1>{title}</h1>
<h2>{subtitle}</h2>
</header>
<ul>
{each label in labels}
<li>{label}</li>
{/each}
</ul>
<footer>
<cite>{author}</cite>
<time>{year}</time>
</footer>
</section>
</template>
</code></pre>
<p>The reason I am thinking this could potentially be a <em>dependent</em> type is because dependent types deal with "forall", which seems like what the iteration is doing. Might be misunderstanding that part.</p>
<p>To finish up, what I normally just do is create a template <em>object</em>, instead of a type, and then from the template object you create an instance of some other type (the HTMLEntity in this case). But it seems like this could be formalized some more, and instead of a template object we could upgrade it to a template type, and then we would just be creating a template instance when instantiating it. Hoping to see how that definition would look for this highly nested structure.</p>
<p>Related note, wondering if <a href="https://www.stwing.upenn.edu/~wlovas/hudak/aug.pdf" rel="nofollow noreferrer">this</a> (modeling natural language trees using types) is similar. I'm not sure if they are modeling <em>tree</em> as a type, or just the nodes as types. Or perhaps grammars are nested types of some sort.</p>
|
<p>You might be looking for <a href="http://www.cduce.org" rel="nofollow noreferrer">CDuce</a> programming language.</p>
| 454
|
language modeling
|
Term for language that abstracts program location?
|
https://cs.stackexchange.com/questions/125606/term-for-language-that-abstracts-program-location
|
<p>What is the technical term that describes a programming language that abstracts (or at least largely abstracts) the machine location of programs? I’m thinking here specifically of the evolution of handheld calculator programming languages, from early (and some late) models where each instruction exists in a single linear space (e.g., HP RPN calculators before the 41 series), to (some) later models such as the 41 series and the 42S, where each program exists in its own space.</p>
<p>Is there a formal term for this difference? </p>
<p>(Note that I’m thinking here exclusively of cases like the examples given, where the languages used are otherwise the same — here RPN keystep programming — and not of more radical changes in language and system architecture — e.g., RPL.)</p>
| 455
|
|
language modeling
|
Language Classification + AWS ML: what am I doing wrong?
|
https://cs.stackexchange.com/questions/66475/language-classification-aws-ml-what-am-i-doing-wrong
|
<p>I'm evaluating Amazon's machine learning platform, and thought that I would give it a "simple" classification problem first. As a disclaimer, I am quite new to machine learning (hence my interest in an ML platform).</p>
<p>The classification problem is language detection. Given a list of 20k words, and their language (<code>English, French, or Random</code>), train a model to classify new words.</p>
<p>My data is structured in CSV format, with 2 columns:</p>
<pre><code>dàagzj, random
tunisia, english
craindre, french
voters, english
religions, english
condition, french
...
</code></pre>
<p>I imported the data successfully into the platform, and all seems fine.
<a href="https://i.sstatic.net/uAPHY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uAPHY.png" alt="enter image description here"></a></p>
<p>When I attempt to train a model (using both the default settings and tweaked ones) I get the same result: English is selected as the language nearly 100% of the time.
<a href="https://i.sstatic.net/Ch45W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ch45W.png" alt="enter image description here"></a></p>
<p>I know it is possible to get reasonably accurate results on this problem with simple neural networks, but I'm not sure what is going wrong.</p>
<p>Do I need to perform any preprocessing operations on the text input, or is the plain string sufficient? What data can be collected about a single word that may be a more effective input to a machine learning model?</p>
|
<p>There are probably multiple things wrong here.</p>
<h2>Features</h2>
<p>First, you don't tell us what features you have provided. If the only input you have provided is the word itself (e.g., the string <code>craindre</code>), then most likely the machine learning algorithm has no idea how to use that information: all it knows is that this string is different from all the others you've provided, so it has no ability to generalize.</p>
<p>So, you need to derive some suitable features, to enable this to generalize. For instance, maybe you'll have a feature vector of length $26^2=676$, one for each possible pair of letters, counting the number of times that pair of letters appears consecutively in the word. Then, instead of asking the machine learning algorithm to predict the language from the word itself, ask it to predict the language from the feature vector.</p>
<p>You can certainly come up with much more sophisticated and effective features; this is just an example. For instance, maybe you might have a feature that counts the fraction of letters that were vowels, or the fraction of letters that were consonants, or a feature that indicates what the last letter was, or a feature that indicates how many consecutive vowel-pairs the word had. Basically, you want to pick feature values that you think might be helpful at predicting the language or might tend to take different values for different languages. If you want to build something that will be very accurate at predicting language, you'll probably need to get fairly sophisticated in designing features. There's lots of research literature on language detection; you could try reading it to get some ideas for better features. But if you just want to play around with this, my advice is to start with a small number of simple features.</p>
<p>Typically, machine learning algorithms are very smart about finding patterns in the feature vectors you provide -- but they don't do anything to automatically select features. So, the division of labor is: you come up with the features; the ML algorithm looks for patterns that enable it to use those features to predict the answer.</p>
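<p>The bigram feature described above can be sketched in a few lines of Python (a minimal illustration, not what the AWS platform does internally):</p>

```python
from string import ascii_lowercase

# Index each ordered letter pair 0..675 and count its consecutive
# occurrences in the word, giving a feature vector of length 26^2 = 676.
PAIRS = {a + b: i for i, (a, b) in enumerate(
    (a, b) for a in ascii_lowercase for b in ascii_lowercase)}

def bigram_features(word):
    vec = [0] * len(PAIRS)
    for i in range(len(word) - 1):
        pair = word[i:i + 2]
        if pair in PAIRS:          # skip accented letters, hyphens, etc.
            vec[PAIRS[pair]] += 1
    return vec
```

<p>Feeding vectors like this to the learner, instead of the raw string, gives it something it can actually generalize from.</p>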
<h2>Class imbalance</h2>
<p>I suggest you read about "the class imbalance problem". You have 3 classes (English, French, and random). Your training set is imbalanced: you have twice as many examples with English as French/random. In other words, you don't have an equal number of examples for each class.</p>
<p>This is not <em>necessarily</em> a problem, but in some cases it can be. For instance, some machine learning algorithms behave poorly when the training set is imbalanced. And every machine learning algorithm will work sub-optimally when the frequency of the classes in the training set doesn't match what you expect to see in practice.</p>
<p>In your case, it's probably not any of those complicated explanations; it's probably a much simpler reason. Because you haven't given the ML algorithm any useful features, it basically has no information whatsoever that it can use to generalize or spot patterns. In other words, it's trying to guess the language of a given word <em>given no information about that word</em>. So what is the poor ML algorithm to do? All it can do is guess based on the relative frequency of different languages. In your case, your training set has told it that any given word is more likely to be English than to be French or random (about 2x as likely), so if you're forced to make a guess and pick one language, the smartest one to pick is 'English'.</p>
<p>In conclusion: it is no surprise that the ML algorithm is always outputting 'English', in your situation.</p>
<p>Once you add a suitable feature vector, if it still works poorly, you can read about the class imbalance problem and see whether it applies to you.</p>
| 456
|
language modeling
|
Does there exist standardized language-agnostic data structure notation?
|
https://cs.stackexchange.com/questions/129904/does-there-exist-standardized-language-agnostic-data-structure-notation
|
<p>I wonder if there exists language-agnostic data structure notation. Preferably a standard, allowing to describe data structures and basic data types. e.g. something like a subset of <code>WebIDL</code>.</p>
<p>A motivation is to describe data structures (a model) in a way that is standardized and language-agnostic. Then have a set of mappings of those data structures into various representations (<code>XML</code>, <code>JSON</code>, <code>YAML</code>)</p>
|
<p>In terms of notation:</p>
<ul>
<li>Isn't <a href="https://en.wikipedia.org/wiki/XML_Schema_(W3C)" rel="nofollow noreferrer">XSD</a> exactly what you are looking for? (barring all the "<a href="https://en.wikipedia.org/wiki/XML_Schema_(W3C)#Criticism" rel="nofollow noreferrer">severe criticism</a>" ...). Also not sure if you are looking for a notation convenient to hand-write, that's certainly NOT XML :-).</li>
<li><a href="https://typedefs.com/" rel="nofollow noreferrer">Typedefs</a> and <a href="https://preserves.gitlab.io/preserves" rel="nofollow noreferrer">preserves</a> seem to have more of a mathematical basis, perhaps closer to what you are looking for. A related <a href="https://news.ycombinator.com/item?id=24972271" rel="nofollow noreferrer">HN thread</a>.</li>
</ul>
<p>I suspect you already know a lot of these projects, usually the focus is in serialization:</p>
<ul>
<li>The <a href="https://en.wikipedia.org/wiki/Interface_description_language" rel="nofollow noreferrer">IDL list at wikipedia</a> ...</li>
<li>Amazon's <a href="https://awslabs.github.io/smithy/" rel="nofollow noreferrer">Smithy</a> and <a href="http://amzn.github.io/ion-docs/" rel="nofollow noreferrer">Ion</a>.</li>
<li>Microsoft's <a href="https://github.com/microsoft/bond/" rel="nofollow noreferrer">Bond</a>.</li>
<li>And of course Google's <a href="https://developers.google.com/protocol-buffers" rel="nofollow noreferrer">protobuf</a> and <a href="https://google.github.io/flatbuffers/" rel="nofollow noreferrer">flatbuffers</a>, but also <a href="https://chromium.googlesource.com/chromium/src/+/master/mojo/" rel="nofollow noreferrer">Mojo</a>, oriented to IPC, and <a href="https://fuchsia.dev/fuchsia-src/development/languages/fidl" rel="nofollow noreferrer">Fuchsia FIDL</a> which seems similar.</li>
</ul>
<p>Investigating this space I found <a href="https://arrow.apache.org/" rel="nofollow noreferrer">Arrow</a>, "a language-independent columnar memory format" for sharing memory between programs, with libraries for a dozen of languages. Turns out it uses <a href="https://arrow.apache.org/faq/#how-does-arrow-relate-to-flatbuffers" rel="nofollow noreferrer">flatbuffers under the hood</a>.</p>
<p>Now that I've come up with this list, and since I suspect you may be familiar with a lot of these, perhaps it would be useful to know in which way the thing you are looking for is different. I think the keyword "<strong>notation</strong>" in your question is the key.</p>
<p>Cheers!</p>
| 457
|
language modeling
|
Programming language semantics prototyping tool
|
https://cs.stackexchange.com/questions/65383/programming-language-semantics-prototyping-tool
|
<p>Is there any tool for prototyping a programming language semantics and type system and that also allows for some sort of <em>model checking</em> of standard properties, like type soundness? </p>
<p>I'm asking this, because I'm reading a book on <a href="http://alloy.mit.edu/alloy/">Alloy</a> and it provides the exact functionality that I want, but for models expressed using relational logic.</p>
<p>I'm aware of <a href="https://www.cl.cam.ac.uk/~pes20/ott/">Ott</a>, but it does not have this sort of "model checking" capability, since it is focused on generating code for proof assistant systems.</p>
<p>Any reference on such tool existence would be nice.</p>
|
<p>Although there are frameworks created specifically for the purpose of prototyping programming languages (including their semantics, type systems, evaluation, as well as checking properties about them), the best choice depends on your particular case and specific needs.</p>
<p>Having said that, there are multiple (perhaps not so distinct) alternatives you might take (which include the ones you've already mentioned):</p>
<ul>
<li>using a specific language/framework designed for creating and prototyping new languages: as an example, Redex [1], a domain-specific language embedded in Racket for specifying and checking (operational) semantics of programming languages, which, given a definition of a language, provides easy handling of tasks such as typesetting (in Latex), inspecting traces of reduction, unit tests and random testing (e.g. for checking typing)</li>
<li>using general modelling languages that offer defining and performing certain analyses easily, as long as they can capture the specific language at hand to the needed extent; Alloy [2] is an example of such an approach: albeit pretty general and flexible, languages can be modelled as relations between states, while the support for model checking (e.g. evaluation within such language) comes for free after the semantics is expressed with a relation model (e.g. some ideas for modelling semantics of a language can be found in [3])</li>
<li>embedding the language to check its properties using a theorem prover; an example would be defining the language as well as its semantics by embedding it in a proof system like Coq [4] (more details about this approach, as well as discussion and demonstration of the difference between deep and shallow embedding in Coq, are given in [5])</li>
<li>using Ott (as already mentioned; similar in spirit to Redex, but providing a new definition language rather than being embedded); Ott allows you to define the programming language in a convenient notation, and produce typesetting and definitions in a proof system (usually with deep embedding), where most of the checking (i.e. proof) needs to be performed manually</li>
<li>developing the language and its semantics, as well as appropriate checks (e.g. as tests) "from scratch" in a general-purpose programming language and translation into other systems if need be, for checking purposes (some languages, like Leon [6], include built-in verifiers, which allow automatically proving certain properties and make this approach similar to embedding in a proof system)</li>
</ul>
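<p>To give a flavour of the first (Redex-style) option, here is a minimal sketch in plain Python (all names invented): a toy expression language, its type checker, a small-step evaluator, and random testing of the progress and preservation properties:</p>

```python
import random

# Terms: ("num", n) | ("bool", b) | ("add", t1, t2) | ("if", c, t, e)
def typecheck(t):
    if t[0] == "num":  return "Int"
    if t[0] == "bool": return "Bool"
    if t[0] == "add":
        ok = typecheck(t[1]) == "Int" and typecheck(t[2]) == "Int"
        return "Int" if ok else None
    if t[0] == "if":
        a, b = typecheck(t[2]), typecheck(t[3])
        if typecheck(t[1]) == "Bool" and a is not None and a == b:
            return a
        return None

def is_value(t):
    return t[0] in ("num", "bool")

def step(t):
    """One small-step reduction, or None if t is a value or stuck."""
    if t[0] == "add":
        if not is_value(t[1]):
            s = step(t[1]); return ("add", s, t[2]) if s else None
        if not is_value(t[2]):
            s = step(t[2]); return ("add", t[1], s) if s else None
        if t[1][0] == t[2][0] == "num":
            return ("num", t[1][1] + t[2][1])
    if t[0] == "if" and t[1][0] == "bool":
        return t[2] if t[1][1] else t[3]
    if t[0] == "if" and not is_value(t[1]):
        s = step(t[1]); return ("if", s, t[2], t[3]) if s else None
    return None

def random_term(depth):
    if depth == 0:
        return random.choice([("num", random.randint(0, 9)),
                              ("bool", random.choice([True, False]))])
    d = depth - 1
    return random.choice([
        ("add", random_term(d), random_term(d)),
        ("if", random_term(d), random_term(d), random_term(d))])

# Lightweight checking: generate random terms, then test progress
# (well-typed non-values can step) and preservation (reduction
# preserves the type) along every reduction sequence.
for _ in range(1000):
    t = random_term(3)
    ty = typecheck(t)
    if ty is None:
        continue
    while not is_value(t):
        t2 = step(t)
        assert t2 is not None, "progress failed"
        assert typecheck(t2) == ty, "preservation failed"
        t = t2
```

<p>Frameworks like Redex automate exactly this kind of definition and random testing, plus typesetting and trace inspection.</p>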
<p>Note that there is a trade-off between how easy it is to use the framework/tool (e.g. as easy as laying out the definition on paper or in Latex) and how powerful the mechanisms for checking properties of the language are (e.g. embedding the language in a theorem prover can allow checking very elaborate properties). </p>
<p>[1] <a href="http://redex.racket-lang.org/">Casey Klein, John Clements, Christos Dimoulas, Carl Eastlund, Matthias Felleisen, Matthew Flatt, Jay A. McCarthy, Jon Rafkind, Sam Tobin-Hochstadt, and Robert Bruce Findler. Run Your Research: On the Effectiveness of Lightweight Mechanization. POPL, 2012.</a></p>
<p>[2] <a href="http://dl.acm.org/citation.cfm?id=505149">Daniel Jackson. Alloy: a lightweight object modelling notation. TOSEM, 2002.</a></p>
<p>[3] <a href="http://sdg.csail.mit.edu/forge/">Greg Dennis, Felix Chang, Daniel Jackson. Modular Verification of Code with SAT. ISSTA, 2006</a></p>
<p>[4] <a href="https://coq.inria.fr/">Coq formal proof management system</a></p>
<p>[5] <a href="http://adam.chlipala.net/frap/">Formal Reasoning About Programs. Adam Chlipala, 2016</a></p>
<p>[6] <a href="http://lara.epfl.ch/w/leon">Leon automated system for verifying, repairing, and synthesizing functional Scala programs</a></p>
| 458
|
language modeling
|
Is my formal definition of programming language correct?
|
https://cs.stackexchange.com/questions/116507/is-my-formal-definition-of-programming-language-correct
|
<p>I found this formal definition of a programming language in the 1973 paper <a href="https://dl.acm.org/citation.cfm?doid=986953.986988" rel="nofollow noreferrer">Formal definition of programming languages</a> by Terrence Pratt.</p>
<blockquote>
<p>PL is a formal language endowed with two structures: a translator and
an abstract machine.</p>
<p>Translator defines a mapping from program strings, as defined by BNF
grammar, into a representation of programs as hierarchies of directed
graphs.</p>
<p>Abstract machine is a mathematical model of computation.</p>
</blockquote>
<p>Is it an appropriate and full definition? Do you have another?</p>
|
<p>It is certainly not a full definition; one might reasonably expect a "real" programming language to have not merely an abstract mathematical model defining the language's semantics, but also a concrete implementation of a compiler or an interpreter which runs on an actual computer. Then again, one might reasonably not expect this. Is lambda calculus, for example, a programming language? <a href="https://en.wikipedia.org/wiki/Lambda_calculus" rel="nofollow noreferrer">Wikipedia</a> doesn't define it as one, but if somebody else does then I can't say they are wrong.</p>
<p>There are many aspects of what makes a language a "programming language"; reasonable people might disagree on whether a certain aspect (say, Turing completeness) is strictly necessary for something to count as a true programming language. It is unlikely that any list of such aspects would ever be agreed by all computer scientists, or even just the academic computer science community, to be both correct and complete.</p>
<p>The concept of a "programming language" is not in the same category as mathematical concepts such as "group". It is possible for a formal definition of a group to be correct or incorrect, but a formal definition of a programming language can merely be useful or not useful (or, more useful or less useful) relative to the use at hand. Presumably, the definition you quoted was useful for the author of the paper you found it in.</p>
| 459
|
language modeling
|
Does a regular expression model the empty language if it contains symbols not in the alphabet?
|
https://cs.stackexchange.com/questions/64990/does-a-regular-expression-model-the-empty-language-if-it-contains-symbols-not-in
|
<p>Suppose $\Sigma = \{ a,b \}$ and the regular expression $(a^*b+dc)^*(b^*d + ad)^*$. Is it equal to $\varnothing$?</p>
<p>So I have a regular expression like this: $(a^*b+dc)^*$. As only $(a,b) \in \Sigma$, I see that:</p>
<ul>
<li>$dc=\varnothing$</li>
</ul>
<p>So $(a^*b+dc)^*=(a^*b)^*$.</p>
<p>Then:</p>
<ul>
<li>$(b^*d + ad)^* = (\varnothing + \varnothing)^*=(\varnothing)^*$ and as $(\varnothing)^*=\epsilon$, $(b^*d + ad)^*$ becomes $\epsilon$.</li>
</ul>
<p>So my regular expression is simply $ (a^*b)^*$? Am I correct, or does the fact that the regular expression contains at least one symbol not in the alphabet make it invalid immediately?</p>
|
<p>Regular expressions only use characters from the alphabet so, if you've fixed your alphabet to be $\{a,b\}$, then $(a^∗b+dc)^∗(b^∗d+ad)^∗$ isn't a regular expression. It doesn't describe any language, in just the same way that "seventy red" doesn't describe any number. In particular, it doesn't describe the empty language, in the same way that "seventy red" isn't equal to zero.</p>
<p>Now, if your alphabet includes all the symbols $a$, $b$, $c$ and $d$, you might want to ask what are the strings in $L((a^∗b+dc)^∗(b^∗d+ad)^∗)$ that contain only $a$'s and $b$'s. That is, what is $L((a^∗b+dc)^∗(b^∗d+ad)^∗)\cap \{a,b\}^*$? And the answer to that is that it's all strings matching $(a^*b)^*$, as you derive in the question.</p>
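<p>This claim is easy to sanity-check mechanically; writing the formal union $+$ as the <code>|</code> of a practical regex engine, the two expressions agree on every string over $\{a,b\}$ (a quick check on short strings, not a proof):</p>

```python
import re
from itertools import product

big   = re.compile(r"(a*b|dc)*(b*d|ad)*")  # formal + written as |
small = re.compile(r"(a*b)*")

# Restricted to the alphabet {a,b}, both match exactly the same strings.
for n in range(7):
    for w in map("".join, product("ab", repeat=n)):
        assert bool(big.fullmatch(w)) == bool(small.fullmatch(w))
```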
<p><strong>Appendix.</strong> One could attempt to sidestep these issues by redefining regular expressions in a way that gives them well-defined semantics if they include symbols not in the alphabet. However, the standard definition does not do this.</p>
<p>Indeed, trying to do so would seem to open up a huge can of worms. For example, suppose your alphabet is $\{a,b\}$. We could agree that, since $c$ is not a symbol in the alphabet, the regular-expression-like-object $abc$ matches nothing. OK, but $+$ is also a symbol that's not in the alphabet. Maybe you're happy with $ab+$ matching nothing because it's syntactically invalid – note, $ab+$, not $ab^+$! But, now, what does $a+b$ match? Does it mean "$a$ or $b$" or "$a$ followed by some symbol that's not in the alphabet followed by $b$, which is impossible, so it matches nothing"?</p>
| 460
|
language modeling
|
Does the first incompleteness theorem imply that any Turing complete programming language must have undefined behavior?
|
https://cs.stackexchange.com/questions/161643/does-the-first-incompleteness-theorem-imply-that-any-turing-complete-programming
|
<p>If I understand correctly, the first incompleteness theorem says that any "effectively axiomatized" formal system which is consistent must contain theorems which are <em>independent</em> of the axioms. In other words, there are models of the system where the theorem is provably true and others where it's provably false.</p>
<p>This seems rather similar to how the results of code that include undefined behavior cannot be determined by the language specifications alone -- in order to say what will happen, we need information about which particular compiler is being used and possibly also what hardware system it's being run on (which of course is why it's best to avoid undefined behavior when possible).</p>
<p>It seems like the language specifications of a given programming language, such as C++, might be an example of an "effectively axiomatized" mathematical system, while the various implementations would represent various models of the system. And so programs containing undefined behavior would correspond to theorems independent of the axioms. Is that correct?</p>
<p>Or, perhaps another way to do put it is, are all Turing complete models of computation examples of formal systems to which the incompleteness theorems apply, and, if so, does that imply that programming languages that implement such models must have undefined behavior? That is, is undefined behavior a necessary result of the incompleteness theorems applying to all Turing complete models of computation?</p>
|
<p>No, it doesn't require that. These are two orthogonal issues. You can easily define a new programming language where you provide fully defined semantics for all operations; yet it can be Turing complete. For a concrete example, consider <a href="https://esolangs.org/wiki/Bitwise_Cyclic_Tag" rel="noreferrer">Bitwise Cyclic Tag</a>; it is Turing complete, and yet it has no undefined behavior, because the behavior is always fully defined in all circumstances.</p>
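<p>Indeed, a complete interpreter fits in a few lines (a sketch following the description on the linked page): every configuration has exactly one successor, so nothing is left undefined, yet the language is Turing complete.</p>

```python
def run_bct(program, data, max_steps=10_000):
    """Bitwise Cyclic Tag: commands are read cyclically from `program`.
    `0` deletes the leftmost data bit; `1x` appends x to the data
    if the leftmost data bit is 1. Halts when the data is empty."""
    data, pc = list(data), 0
    for n in range(max_steps):
        if not data:
            return "", n                  # halted: data string empty
        if program[pc % len(program)] == "0":
            data.pop(0)
            pc += 1
        else:
            if data[0] == "1":
                data.append(program[(pc + 1) % len(program)])
            pc += 2
    return "".join(data), max_steps       # gave up: may run forever
```

<p>The step budget is needed only because, as for any Turing-complete language, halting is undecidable — that is still different from any single step being undefined.</p>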
| 461
|
language modeling
|
Showing that a language is NP Complete (advice)
|
https://cs.stackexchange.com/questions/110715/showing-that-a-language-is-np-complete-advice
|
<p>I am currently getting ready for my final exam in computational models. I know that there aren't any fixed rules or rules of thumb for showing that a language is NP-complete and each problem has its own tricks, but I am really struggling with questions where they give me a language and ask me to show that it is NP-complete by showing that an NPC problem is polynomial-time reducible to the given language.</p>
<p>So I wanted to ask for advice. How can I approach such problems? Are there any steps that I can take beforehand to help me somehow? Or is it just literally figuring out which NPC problem is "closest" to the given language and trying to construct a polynomial reduction?</p>
<p>I'd appreciate any advice. Thank you.</p>
|
<p>Typically, yes, it's a matter of finding a known <strong>NP</strong>-complete problem that's somehow similar to the one you're trying to work with. So if you're dealing with a problem about formulas, you probably want to reduce some version of SAT or 3SAT to it.</p>
<p>For graph problems, you probably want to reduce some other graph problem. Problems about long paths and cycles probably come from Hamiltonian path/cycle. Problems about classifying vertices into types sound like colouring problems. Problems about graphs containing or not containing some structure might come from clique or independent set. Problems about dividing graphs in two might be Max Cut, or Subset Sum.</p>
<p>Reductions from one type of problem to another are typically more difficult. If you have to do that, think about how you can use your target problem to encode things that are needed in the known <strong>NP</strong>-complete problem. For example, when you reduce 3SAT to independent set, being in or out of the independent set corresponds to being true or false; when you reduce 3SAT to 3-colourability, the three colours you use are "true", "false" and "er, the other colour". But, in these cases, the reduction gadgets tend to be quite fiddly.</p>
<p>Another thing to bear in mind is that, if problem <span class="math-container">$A$</span> looks like problem <span class="math-container">$B$</span>, which you already know to be <strong>NP</strong>-complete, it might be possible to modify that proof to make it work for <span class="math-container">$A$</span>. For example, consider 4-colourability. The easy reduction is from 3-colourability: given a graph <span class="math-container">$G$</span>, add a new vertex, connect that to everything and the new graph is 4-colourable if, and only if, the original graph was 3-colourable. But, if you didn't see that and you knew the reduction from 3SAT to 3-colourability well, it probably wouldn't be very hard to modify that reduction so the four colours were "true", "false", "er, the other colour" and "gee, there are a lot of colours today".</p>
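<p>That 3-colourability to 4-colourability reduction is small enough to write down and check on toy instances (a brute-force sketch, purely for illustration):</p>

```python
from itertools import product

def colourable(n, edges, k):
    """Brute force: can vertices 0..n-1 be properly k-coloured?"""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

def add_apex(n, edges):
    """The reduction: one new vertex adjacent to every original vertex."""
    return n + 1, edges + [(u, n) for u in range(n)]

triangle = (3, [(0, 1), (1, 2), (0, 2)])                       # 3-colourable
k4 = (4, [(u, v) for u in range(4) for v in range(u + 1, 4)])  # not

for g in (triangle, k4):
    n2, e2 = add_apex(*g)
    assert colourable(*g, 3) == colourable(n2, e2, 4)
```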
| 462
|
language modeling
|
Clustering changes in a directed acyclic graph
|
https://cs.stackexchange.com/questions/170968/clustering-changes-in-a-directed-acyclic-graph
|
<p>I have the following toy data modeling a Java-like language:</p>
<ul>
<li>all class fields are <code>protected</code></li>
<li>multiple inheritance is allowed</li>
<li>if a class sees several definitions of a field in its superclasses and all these definitions are the same (verbatim), then we do not reject such a hierarchy.</li>
</ul>
<p>For instance,</p>
<pre><code>class A { field : int}
class B {field : int}
class C extends A, B {}
</code></pre>
<p>is a valid model.</p>
<p>Now this model changes over time; we can:</p>
<ul>
<li>add a new class</li>
<li>delete an existing class</li>
<li>delete a field in a class</li>
<li>add a field to a class</li>
<li>modify a field in a class (change its type)</li>
<li>change parents of a class</li>
</ul>
<p>I wanted to have a partial order on these changes, in the sense that <span class="math-container">$c_1 < c_2 \iff$</span> "I need to apply the change <span class="math-container">$c_1$</span> before <span class="math-container">$c_2$</span> and the model would remain valid".</p>
<p>Unfortunately, I have a simple case where such an order does not exist.
Initial model:</p>
<pre><code>class WithInt { field : int}
class WithString {field : string}
class Undecided {field : string}
class Cap extends Undecided, WithString {} // sees two definitions of "field: string"
</code></pre>
<p>becoming</p>
<pre><code>class WithInt { field : int}
class WithString {field : string}
class Undecided {field : int} // was string
class Cap extends Undecided, WithInt {} // sees two definitions of "field: int"
</code></pre>
<p>The changes are</p>
<ul>
<li>class <code>Cap</code> changed the set of its superclasses</li>
<li>the <code>field</code> in the class <code>Undecided</code> changed type</li>
</ul>
<p>It is rather easy to see that these changes cannot be applied separately at all, and hence they do not have an order in the sense described above.</p>
<p>Thus, I need to be able to detect these "clusters of mutually dependent changes" and introduce an order on these clusters; right now I see no approach other than exploring the full combinatorics.</p>
<p>I'd be glad for all suggestions on how to tackle this problem efficiently (or to change the approach entirely), literature links, references, or even an estimation of complexity on the scale "one day's worth of work" -> "published article" -> "PhD thesis" -> "Turing prize"=)</p>
<p>Please feel free to change tags of this question and to ask for clarifications.</p>
|
<p>These changes can be expressed as</p>
<ul>
<li>delete <code>Cap extends Undecided, WithString</code></li>
<li>the <code>field</code> in the class <code>Undecided</code> changed type</li>
<li>add <code>Cap extends Undecided, WithInt</code></li>
</ul>
<p>How do you define changes?</p>
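<p>One way to detect the "clusters of mutually dependent changes" the question asks about is to build a directed dependency graph over the atomic changes (an edge u → v reading "u must be applied before v") and take its strongly connected components: each component is a cluster that must be applied atomically, and the condensation is a DAG that can be topologically ordered. A sketch, assuming the pairwise dependencies have already been computed somehow:</p>

```python
def sccs(nodes, edges):
    # Tarjan's strongly connected components.
    # edges: dict mapping a node to the set of nodes it must precede.
    index, low, onstack, stack, out = {}, {}, set(), [], []

    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v); onstack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:        # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); onstack.discard(w); comp.add(w)
                if w == v:
                    break
            out.append(comp)

    for v in nodes:
        if v not in index:
            visit(v)
    return out

# The example from the question: the parent change and the retype change
# each require the other, so they form a single cluster.
deps = {"retype field": {"change parents"}, "change parents": {"retype field"}}
assert sccs(["retype field", "change parents"], deps) == \
    [{"retype field", "change parents"}]
```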
| 463
|
language modeling
|
Examples for CFG that cannot be expressed by regular language
|
https://cs.stackexchange.com/questions/75916/examples-for-cfg-that-cannot-be-expressed-by-regular-language
|
<p>There are nice examples for context free grammars which cannot be expressed with <a href="https://www.wikiwand.com/en/Regular_language" rel="nofollow noreferrer">regular language</a>, for example the palindrome and a similar contrived example <a href="https://cs.stackexchange.com/questions/57117/how-can-i-show-context-free-grammars-are-strictly-more-expressive-than-regular-e">here</a>, but they are very intuitively applicable for formal languages, e.g. checking for balanced nested parentheses and such. </p>
<p>Can you conceive an example that is pertinent to natural languages?</p>
<p>Part of the motivation to this question is that in one view, regular expressions are practically intractable when it comes to combining and scaling them to describe natural language phenomena; they don't compose as nicely as grammars. But at the same time I can't easily think of an <em>actual example</em> where regular expressions should not be enough for describing natural languages.</p>
<p>An example, therefore, of some natural language phenomenon that cannot be modelled with a regular language would be very nice and interesting!</p>
|
<p>Theoretically, you can nest sentences to arbitrary depth by using subclauses, and this excludes any finite-state mechanism. The inventor of phrase structure grammars himself looked at finite-state languages and compared them with the model of phrase structure grammars. The abstract of his paper <em>Finite State Languages</em> reads:</p>
<blockquote>
<p>We find that no finite-state Markov-process [...] can serve as an English grammar [...]</p>
</blockquote>
<p>The idea of using finite Markov processes, which come down to finite automata in Chomsky's work, goes back to ideas of Shannon (see his seminal work <em>A mathematical theory of communication</em>). If this caught your interest, you might also take a look at the two works <em>Three models for the description of language</em> and <em>Syntactic structures</em> by N. Chomsky, where he further discusses the adequacy of grammars and finite-state processes for language description (or production thereof).</p>
<p>These ideas, which originally came from linguistics and were therefore closely related to research on natural languages, were later adapted by computer scientists to describe programming languages. I refer to the seminal papers <em>Two families of languages related to ALGOL</em> by Ginsburg and Rice and the <em>Revised Report on the Algorithmic Language Algol 60</em>, where the BNF was introduced; these were the first papers to use phrase structure (nowadays context-free) grammars to describe programming languages.</p>
<p>But on the other side, you seldom find arbitrarily complex nested sentences, as, in general, the "cognitive" system cannot handle them; said differently, it could be conceived of as a finite-state system. I refer, for example, to the famous paper <a href="https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two" rel="nofollow noreferrer">The magical number seven, plus or minus two</a> by G. Miller, which states that the human brain can only store a finite number, <span class="math-container">$7 \pm 2$</span>, of digits. Similar restrictions might hold for language processing; see the textbook <em>The psychology of language</em> by T. Harley.</p>
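<p>The center-embedding argument can be made concrete: nested subject relative clauses produce the pattern <span class="math-container">$N^{n+1}V^{n+1}$</span>, essentially <span class="math-container">$a^nb^n$</span>, which no finite automaton accepts. A toy generator (the word lists are an arbitrary illustrative fragment) makes the nesting explicit:</p>

```python
nouns = ["the cheese", "the rat", "the cat", "the dog"]
verbs = ["stank", "ate", "chased", "bit"]

def embedded(n):
    # n levels of center embedding: N_0 N_1 ... N_n  V_n ... V_1 V_0.
    # Each inner clause's verb must pair with the matching noun, giving
    # the a^n b^n dependency that defeats finite-state models.
    return " ".join(nouns[: n + 1] + verbs[n::-1])

assert embedded(0) == "the cheese stank"
assert embedded(2) == "the cheese the rat the cat chased ate stank"
```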
| 464
|
language modeling
|
Good language for introduction to self-modifying algorithms?
|
https://cs.stackexchange.com/questions/55948/good-language-for-introduction-to-self-modifying-algorithms
|
<p>So I am trying to find a language with which I can write code to build/search through deductive reasoning 'nets', as well as self-modify its search algorithms based on information learned from these nets.
I also want a language that I can use to write scripts for a 2D game engine, as I would like to build visual models of my projects for a web page (to help my job/school prospects).
So far I am only really familiar with MySQL (I have been working full time as a backend developer for about 7 months), but I have spent quite a bit of time developing relatively formal models for problem solving that I would like to attempt to put into code.
Any advice/suggestions would be greatly appreciated, thank you!</p>
| 465
|
|
language modeling
|
What are different ways to provide a semantics to a language?
|
https://cs.stackexchange.com/questions/103333/what-are-different-ways-to-provide-a-semantics-to-a-language
|
<p>Suppose you have 1. a grammar for terms of a language; 2. type-assignment rules, 3. a set of reduction rules. You want to prove that your language is adequate for mathematical reasoning. If I understand correctly, the right way to do it is to develop a semantics for it, and then prove certain desirable properties such as soundness and consistency.</p>
<p>I've seen different approaches to this. Usually, a model in set theory is involved. But I believe that is not the only way to do it. Wouldn't, for example, an interpreter for that language on the untyped λ-calculus count as a semantics? So, my question is: what are the different ways to provide a semantics to language?</p>
|
<p>There are many possible approaches. Here's a few "classic" styles.</p>
<ul>
<li>Operational semantics (e.g. small step / reduction, or big step)</li>
<li>Denotational semantics (e.g. domain-theoretic, or category-theoretic)</li>
<li>Axiomatic semantics (e.g. hoare logic)</li>
</ul>
<p>You can also define the semantics of a language through a translation to another language (already having its semantics). CPS transforms could also be mentioned here.</p>
<p>Also note that many languages admit several distinct semantics. Lazy & eager semantics of functional programs are possible, for instance. Prolog also has many different semantics (I recall someone stating "there's no such thing as THE semantics of Prolog").</p>
<p>Further, concurrent languages like CCS or <span class="math-container">$\pi$</span>-calculus have a LTS semantics. Game semantics is also used sometimes (but I don't know much about it).</p>
<p>I'm pretty sure there are many other kinds of semantics. I'd be surprised if in the future someone does not invent a new kind of semantics.</p>
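<p>For a flavour of the first style, a small-step operational semantics for a tiny expression language can be sketched in a few lines (the encoding of terms as ints and tuples is an arbitrary choice for illustration):</p>

```python
def step(e):
    # One reduction step; terms are ints (values) or ('+', e1, e2).
    op, l, r = e
    if isinstance(l, int) and isinstance(r, int):
        return l + r                 # the actual reduction rule
    if isinstance(l, int):
        return (op, l, step(r))      # congruence: step inside the right operand
    return (op, step(l), r)          # congruence: step inside the left operand

def evaluate(e):
    # The evaluator is the reflexive-transitive closure of `step`.
    while not isinstance(e, int):
        e = step(e)
    return e

assert evaluate(('+', ('+', 1, 2), ('+', 3, 4))) == 10
```

A denotational semantics for the same language would instead map each term directly to a number; proving that the two agree is a classic adequacy exercise.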
| 466
|
language modeling
|
Rules language / DSL expressivity measure
|
https://cs.stackexchange.com/questions/44395/rules-language-dsl-expressivity-measure
|
<p>Languages to express domain rules are quite diverse, ranging from very simple and inexpressive ones to Turing-complete programming languages. If we consider developing some DSL (domain-specific language), is there any generally useful scale (the Chomsky hierarchy comes to mind as an analogy, as well as Description Logic "letters", like SROIQ) to classify domain DSLs?</p>
<p>Of course, it is naive to expect a full order, but at least which are some commonly used partial-order classes of expressivity? And it would be nice to know the correct terminology for such an expressivity scale.</p>
<p>Of course, lower bound of algorithmic (time, space) complexity can be applied the rules/inference engine itself (eg, satisfiability checking), but this belongs more to implementation details and engineering trade-off than to the model of the domain itself.</p>
<p>Maybe, I am wrong that such specific domain classification does exists at all, and practice as well as theory lives with measures, which I mentioned above. However, even for boolean formulas there are studied measures of complexity, so why not other theories?</p>
<p>Some discussion, which however does not help me, found here: <a href="https://www.linkedin.com/grp/post/81971-75677017" rel="nofollow noreferrer">https://www.linkedin.com/grp/post/81971-75677017</a> Also, this <a href="https://stackoverflow.com/questions/2427496/what-do-you-mean-by-the-expressiveness-of-a-programming-language">https://stackoverflow.com/questions/2427496/what-do-you-mean-by-the-expressiveness-of-a-programming-language</a> is relevant. To be clear, I stick with <a href="https://en.wikipedia.org/wiki/Expressive_power_%28computer_science%29" rel="nofollow noreferrer">Wikipedia definition</a> of expressivity here, and practical side of that more than theoretical one. </p>
<p>Expressivity in this question is an ability to model some domain with enough precision for some given purpose. That is, it can be said, "we need at least DSL expressivity D for the model of this domain for our application, the level C is not enough."</p>
| 467
|
|
language modeling
|
What are constraints on some new logic programming language and system?
|
https://cs.stackexchange.com/questions/96058/what-are-constraints-on-some-new-logic-programming-language-and-system
|
<p>As I understand it, anyone can create a logic programming language and system by declaring that a valid program of the language is a set of statements of the form <code>body->head</code>, where <code>body</code> is an arbitrary expression of boolean type and <code>head</code> is a set of expressions that, in some cases, can change the current valuation function (the function, for some logic, that assigns values to the variables of this logic), e.g. by assignment operations. There is no need to prove any properties of such a programming system, because one can expect that such a system is Turing complete and hence there are no important properties (e.g. termination) to prove. Practical termination can be achieved even by very crude methods (e.g. as in Drools), e.g. by allowing one to declare that no more than, say, 10 rules can be fired in one execution step. Am I right? <strong>Does the definition of a new logic programming system/language for scientific and practical purposes indeed allow such great freedom, without any constraints and duties to prove some properties of the system?</strong></p>
<p>Of course, some works on logic programming try to prove stable model properties, but I am interested in (and practice usually requires) non-monotonic logic programs (as almost any program used for business purposes changes the state and hence the valuation function of variables), and they cannot have such stable models.</p>
<p>There is some background to my question: I am aware of the logic programming system for agent modelling <a href="http://jason.sourceforge.net/wp/" rel="nofollow noreferrer">http://jason.sourceforge.net/wp/</a> Jason AgentSpeak. I am not satisfied with the expressibility of the base logic used by AgentSpeak. I have reason to state that a special kind of linear logic can be a more appropriate base logic for agent modelling. That is why I am trying to create my own logic programming system based on a special kind of modal linear logic with actions. So - <strong>can I simply form the set of expressions of the type <code>body->head</code> from the language of modal linear logic and announce them as logic programs in modal linear logic? Am I required to prove anything?</strong></p>
|
<p>It sounds like you have the impression that there are some rules on what languages you are allowed to define. There are no such rules. You can define whatever language you want. You can do whatever what you want -- you're not required to do anything (there are no language police who will come arrest you for failing to prove some theorem about it). Whether it will be useful, or used by anyone else, is a different matter.</p>
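<p>The <code>body->head</code> rule scheme from the question, with a crude Drools-style fire limit, can indeed be sketched in a few lines with no theory at all (all names here are hypothetical; rules are just predicate/action pairs over a mutable valuation):</p>

```python
def run(facts, rules, max_fires=10):
    # facts: mutable dict (the valuation function).
    # rules: list of (body, head) pairs, where body tests the facts
    # and head mutates them. The max_fires guard gives the "rude"
    # practical-termination behaviour the question mentions.
    fires = 0
    changed = True
    while changed and fires < max_fires:
        changed = False
        for body, head in rules:
            if fires >= max_fires:
                break
            if body(facts):
                head(facts)
                fires += 1
                changed = True
    return facts

# Example: a single counting rule; the engine stops once the body fails.
rules = [(lambda f: f["x"] < 3, lambda f: f.update(x=f["x"] + 1))]
assert run({"x": 0}, rules)["x"] == 3
```

Nothing about this engine needs a stable-model theory to be executable; whether it has any pleasant semantic properties is exactly the separate question the answer addresses.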
| 468
|
language modeling
|
What is the single type in a dynamic typing language?
|
https://cs.stackexchange.com/questions/125245/what-is-the-single-type-in-a-dynamic-typing-language
|
<p>Regarding static typing and dynamic typing, <a href="https://www.cs.cmu.edu/~rwh/pfpl/2nded.pdf" rel="nofollow noreferrer">Practical Foundation of Programming Languages by Harper</a> says:</p>
<blockquote>
<p>There have been many attempts by advocates of dynamic typing to
distinguish dynamic from static languages. It is useful to consider
the supposed distinctions from the present viewpoint.</p>
<ol>
<li><p>Dynamic languages associate types with values, whereas static languages associate
types to variables. Dynamic languages associate classes, not types, to values by tagging them with identifiers such as num and
fun. This form of classification amounts to a use of recursive sum
types within a statically typed language, and hence cannot be seen as
a distinguishing feature of dynamic languages. Moreover, static
languages assign types to expressions, not just variables. Because
<strong>dynamic languages</strong> are just <strong>particular static languages (with a single
type)</strong>, the same can be said of dynamic languages.</p></li>
<li><p>Dynamic languages check types at run-time, whereas static language check types at compile time. Dynamic languages are just as surely
statically typed as static languages, albeit for a degenerate type
system with only one type. As we have seen, dynamic languages do
perform class checks at run-time, but so too do static languages that
admit sum types. The difference is only the extent to which we must
use classification: always in a dynamic language, only as necessary in
a static language.</p></li>
<li><p>Dynamic languages support heterogeneous collections, whereas static languages sup-
port homogeneous collections. The purpose of sum types is to support heterogeneity,
so that any static language with sums admits heterogeneous data structures. A typical
example is a list such as</p>
<pre><code> cons(num[1]; cons(fun(x.x); nil))
</code></pre>
<p>(written in abstract syntax for emphasis). It is sometimes said that such a list is not
representable in a static language, because of the disparate nature of its components.
Whether in a static or a dynamic language, lists are type homogeneous, but can be class
heterogeneous. All elements of the above list are of type dyn; the first is of class num,
and the second is of class fun.</p></li>
</ol>
<p>Thus, the seeming opposition between static and dynamic typing is an illusion. The
question is not whether to have static typing, but rather how best to embrace it. Confining
one’s attention to <strong>a single recursive type</strong> seems pointlessly restrictive. Indeed, many so-
called untyped languages have implicit concessions to there being more than one type.
The classic example is the ubiquitous concept of “multi-argument functions,” which are
a concession to the existence of products of the type of values (with pattern matching).
It is then a short path to considering “multi-result functions,” and other ad hoc language
features that amount to admitting a richer and richer static type discipline.</p>
</blockquote>
<p>So "dynamic languages are just particular static languages (with a single type)". What is the single type in a dynamic language?</p>
<p>For example, Python is said to use the reference model of variables, as opposed to the value model of variables. References are not explicit. </p>
<ul>
<li><p>Is it because of references that a list in Python can have elements of different "types", i.e. achieving heterogeneous collections in the quote?</p></li>
<li><p>Is the "single type" in Python the type for all the references? Is it the same for other dynamic typing languages, i.e. is the "single type" in a dynamic typing language the type for all the references? Or is reference just an implementation option of the single type in a dynamic language, and there might be other implementation options?</p></li>
<li><p>Do references which refer to values of different "types" have the same type?</p></li>
</ul>
<p>Thanks.</p>
|
<p>The "single type" for Python is called "object" and described in <a href="https://docs.python.org/3/reference/datamodel.html" rel="nofollow noreferrer">https://docs.python.org/3/reference/datamodel.html</a>:</p>
<blockquote>
<p>Objects are Python’s abstraction for data. All data in a Python program is represented by objects or by relations between objects. (In a sense, and in conformance to Von Neumann’s model of a “stored program computer”, code is also represented by objects.)</p>
<p>Every object has an identity, a type and a value. An object’s identity never changes once it has been created; you may think of it as the object’s address in memory... An object’s type determines the operations that the object supports (e.g., “does it have a length?”) and also defines the possible values for objects of that type... The value of some objects can change. Objects whose value can change are said to be mutable; objects whose value is unchangeable once they are created are called immutable.</p>
</blockquote>
<p>Of course, note that "type" and "value" are not used here in the same way as in Harper's book: Python's "type" is what Harper calls "class".</p>
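<p>The "single type" reading is easy to observe directly: every Python value, whatever its class, is an instance of <code>object</code>:</p>

```python
# Heterogeneous values, one type: all are objects; what Python's type()
# reports is the class tag (Harper's "class"), not a static type.
values = [1, "abc", [1, 2], {"k": 0}, (lambda x: x), int, None]
assert all(isinstance(v, object) for v in values)
assert type(1) is int and isinstance(int, object)  # even classes are objects
```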
| 469
|
language modeling
|
What kind of language uses an infinite alphabet?
|
https://cs.stackexchange.com/questions/67223/what-kind-of-language-uses-an-infinite-alphabet
|
<p>Kind of what it says on the tin. Let's say I have a countably infinite alphabet $A$ and a "language" $L = \{s_1s_2 | s_1,s_2 \in A\}$ (i.e. all possible strings of length 2). Now, my question is this: does it even make sense to think of this in terms of formal languages? Is there some construct akin to a state machine that I can use to model such a thing? And do any of the above answers change if $A$ is uncountable?</p>
|
<p>The definition of formal language doesn't depend on the alphabet - it remains exactly the same. Things start getting a bit more complicated when you are interested in certain <em>classes</em> of languages. For example, let us consider regular languages. If your alphabet is infinite, the restriction of having only finitely many states seems a bit restrictive. Instead, it makes sense to allow any number of states whose cardinality is strictly less than that of the alphabet (if you allow a cardinality at least as large, every language would be regular). The corresponding notion of regular languages isn't too exciting perhaps, but at least it is a genuine subclass of all languages, which shares some properties with regular languages over finite alphabets.</p>
| 470
|
language modeling
|
How to compare the efficiency of two encoding schemes or hypothesis languages?
|
https://cs.stackexchange.com/questions/82080/how-to-compare-the-efficiency-of-two-encoding-schemes-or-hypothesis-languages
|
<p>My question is pretty basic, I'm looking for a named method if you know one, but also proper terminology, further reading, and anything this reminds you of if you don't. (I'm new to this, don't have the right terminology and just need a starting point so I can help myself.)</p>
<p>I'm trying to interpret the vector inputs to a black-box controller (which I can model as finite-state machine). I can see them and they look like a series of symbols but it is too variable (stochastic) to easily define an "alphabet" based on repetition and it isn't clear how the symbols are grouped. In other words it isn't clear whether they use something like block coding where each symbol is the same number of vectors in a sequence, or convolutional coding where symbols can have different numbers of vectors. </p>
<p>The controller operates a linear actuator (it just goes up and down) and the inputs are large vectors from a CNN. It's essentially a pong-playing robot. I make predictions by modeling the controller as a binary decision tree that maps each putative symbol to an exact position of the actuator. This is very similar to a language-induction problem for a finite-state machine. Recall that a finite automaton can be a representation of a regular language. Also recall that a finite automaton can be characterized in terms of its memory requirements and computational complexity, hence a regular language can be too.</p>
<ol>
<li><p>I know that the controller is optimal. There is no controller which has both less memory and less computational complexity. If I have two guesses at coding schemes and each are able to predict actuator position equally well then I want to pick the coding scheme which implies the least complexity and memory. So how does one go about evaluating the resource requirements of a coding scheme (raw inputs-> symbols) and grammar (rules about symbols) in combination?</p></li>
<li><p>I need to keep in mind that I might be wrong that it digitizes its inputs into symbols at all. So what are the signs that it is not digital? (such as, if I divide the symbols into smaller symbols and it still works just as well, ad infinitum, that probably means the symbols are meaningless).</p></li>
</ol>
<p><a href="https://i.sstatic.net/UfSD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UfSD4.png" alt="Question explained pictorially"></a></p>
<hr>
<p>If you don't think I've given enough information keep in mind that I don't expect a detailed answer (but I'm happy to offer more). An acceptable answer is something of the form "I think you need to look into ____.", "This sounds like ____ in which case we often use ____.", "This sounds like a paper I read, see ____.", or "The ____ metric compares the level of the difficulty of a task to the number of symbols in the language required to perform that task and tells you how efficient a language is at that task."</p>
|
<p>The answer was just tree depth. What I needed to do was learn about evaluating the complexity of finite automata and how we can use a decision-tree model (<a href="https://en.wikipedia.org/wiki/Decision_tree_model" rel="nofollow noreferrer">decision-tree complexity</a>) to put bounds on the complexity of an automata (<a href="https://en.wikipedia.org/wiki/Analysis_of_algorithms" rel="nofollow noreferrer">analysis of algorithms</a>). So I take my two coding schemes (call them A and B) and I make binary decision tree to predict the output of my system based on the inputs provided by the coding schemes. If both coding schemes A and B are equally useful for predicting the outputs but A requires a tree depth of 15 while B requires a tree depth of 54 then A is "better". This is because our decision-tree model of the system is simpler when using A than when using B and as stated before we strongly believe that our system is as simple as possible. </p>
<p>Of course I also need to include the complexity of the coding scheme itself but that just goes with alphabet size since it is a Nearest Neighbor search. </p>
<p>So it was under my nose, I just needed some context.</p>
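<p>The depth comparison described in this answer can be made mechanical for small cases: brute-force the minimal depth of a decision tree that perfectly classifies the examples under each encoding, and prefer the encoding with the smaller depth. A sketch (binary features only, and the search is exponential, so tiny inputs only):</p>

```python
def min_depth(examples, labels, features):
    # Minimal depth of a decision tree that perfectly classifies
    # `examples` (tuples of 0/1 features) into `labels`.
    if len(set(labels)) <= 1:
        return 0  # already pure: no test needed
    best = None
    for f in features:
        split = {0: [], 1: []}
        for e, l in zip(examples, labels):
            split[e[f]].append((e, l))
        if not split[0] or not split[1]:
            continue  # this feature doesn't separate anything here
        rest = [g for g in features if g != f]
        d = 1 + max(
            min_depth([e for e, _ in split[b]], [l for _, l in split[b]], rest)
            for b in (0, 1)
        )
        if best is None or d < best:
            best = d
    return best

# Encoding B (XOR of both bits) needs depth 2; encoding A, where one
# feature alone determines the label, needs depth 1 - so A is "better".
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert min_depth(inputs, [a ^ b for a, b in inputs], [0, 1]) == 2
assert min_depth(inputs, [a for a, _ in inputs], [0, 1]) == 1
```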
| 471
|
language modeling
|
Classes of language with at least $n$ pump-able substrings
|
https://cs.stackexchange.com/questions/170176/classes-of-language-with-at-least-n-pump-able-substrings
|
<p>Regular languages (RLs) have a necessary condition (from the pumping lemma) that, for some <span class="math-container">$k$</span>, if a string <span class="math-container">$s$</span> with <span class="math-container">$|s| \geq k$</span> is in the language, there is some splitting of <span class="math-container">$s$</span> into <span class="math-container">$xyz$</span> where <span class="math-container">$y\neq \epsilon$</span> and <span class="math-container">$|xy| \leq k$</span> such that <span class="math-container">$xy^iz$</span> is also in the language for any <span class="math-container">$i$</span>.</p>
<p>Context-free languages (CFLs) have a similar necessary condition: for some <span class="math-container">$k$</span>, if a string <span class="math-container">$s$</span> with <span class="math-container">$|s| \geq k$</span> is in the language, there is some splitting of <span class="math-container">$s$</span> into <span class="math-container">$xuywz$</span> where <span class="math-container">$uw\neq \epsilon$</span> and <span class="math-container">$|uyw| \leq k$</span> such that <span class="math-container">$xu^iyw^iz$</span> is also in the language for any <span class="math-container">$i$</span>.</p>
<p>RLs are context-free, so they also satisfy the CFL condition, but the reverse does not hold.</p>
<p>Say we generalise this condition to have a certain number <span class="math-container">$n$</span> of pump-able substrings, e.g.:</p>
<p>For some <span class="math-container">$k$</span>, if a string <span class="math-container">$s$</span> with <span class="math-container">$|s|\geq k$</span> is in the language, there is some splitting of <span class="math-container">$s$</span> into <span class="math-container">$s_1x_1s_2x_2\ldots s_{n-1}x_{n-1}s_{n}x_{n}s_{n+1}$</span> where <span class="math-container">$x_1x_2\ldots x_n \neq \epsilon$</span> and <span class="math-container">$|x_1s_2\ldots s_nx_n| \leq k$</span> such that <span class="math-container">$s_1x_1^is_2x_2^i\ldots s_nx_n^is_{n+1}$</span> is also in the language for all <span class="math-container">$i$</span>.</p>
<p>What do classes of language look like where we have this as a necessary condition for some <span class="math-container">$n$</span> but NOT for any number <span class="math-container">$<n$</span>? Is there, for example, a simple way of generalising a model that generates CFLs (such as Context-Free grammars or pushdown automata) so that we get the <span class="math-container">$n=3$</span> version rather than <span class="math-container">$n=2$</span>?</p>
| 472
|
|
language modeling
|
What is the language feature which allows a variable to be associated with values of different types?
|
https://cs.stackexchange.com/questions/116504/what-is-the-language-feature-which-allows-a-variable-to-be-associated-with-value
|
<p>In Python, I can change the types of values associated with a variable:</p>
<pre><code>>>> x=1
>>> x="abc"
</code></pre>
<p>In C, I can't do the same.</p>
<p>What is the name of the feature that allows Python to behave so, while not C?</p>
<p>I was wondering if the following language features have to do with the observation in Python:</p>
<ul>
<li>no explicit type annotations, instead of explicit type annotations</li>
<li>dynamic typing, instead of static typing</li>
<li>or reference model of variable, instead of value model of variables?</li>
</ul>
<p>Thanks.</p>
|
<p>Allow me to address the misconceptions in your question one by one.</p>
<blockquote>
<p>In Python, I can change the types of values associated with a variable:</p>
<pre><code>>>> x=1
>>> x="abc"
</code></pre>
</blockquote>
<p>In type theory, types classify <em>expressions</em>, i.e., <em>syntactic</em> objects, i.e., <em>program</em> fragments. From this point of view, Python has exactly one type. Confusingly, what Python calls “types” are what type theory calls <em>classes</em> of a single type.</p>
<blockquote>
<p>I was wondering if the following language features have to do with the observation in Python:</p>
<ul>
<li>no explicit type annotations, instead of explicit type annotations</li>
</ul>
</blockquote>
<p>Annotations have nothing to do with this. There exist languages, such as ML and Haskell, that allow the programmer to omit most type annotations, yet the type of a variable binding is never allowed to change throughout its scope.</p>
<p>This is not to say that it is impossible to design a programming language in which the type of a variable could change within its scope, but Python is by no means such a language.</p>
<blockquote>
<ul>
<li>dynamic typing, instead of static typing</li>
</ul>
</blockquote>
<p>What I said above about the meaning of “type” in type theory.</p>
<blockquote>
<ul>
<li>or reference model of variable, instead of value model of variables?</li>
</ul>
</blockquote>
<p>There is no such thing as “reference model of variable”. A variable always stands for a value. It just so happens that Python has no values other than object references. For example, the Python expression <code>2 << 1000</code> does not evaluate to the number <span class="math-container">$2^{1000}$</span>. It evaluates to an <em>object</em> that represents the number <span class="math-container">$2^{1000}$</span>. There can be two different objects that represent the same number, as this snippet illustrates:</p>
<pre><code>>>> x = 2 << 1000
>>> y = 2 << 1000
>>> x is y
False
</code></pre>
<p>Of course, in mathematics, it makes no sense whatsoever to distinguish between “this <span class="math-container">$2^{1000}$</span>” and “that <span class="math-container">$2^{1000}$</span>”. Since Python's <code>int</code>s do not behave like mathematical integers, Python simply does not have integer values.</p>
<p>Confusingly enough, against established tradition, Python's equality testing operator is called <code>is</code> rather than <code>==</code>.</p>
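<p>The contrast between value equality and object identity in the snippet above can be made explicit (using a runtime-computed shift so that the interpreter cannot fold and share a single constant; behaviour shown is CPython's):</p>

```python
n = 1000
x = 2 << n
y = 2 << n
assert x == y      # equal as numbers: value equality
assert x is not y  # yet two distinct objects: identity differs
```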
| 473
|
language modeling
|
Why is the Turing Machine a popular model of computation?
|
https://cs.stackexchange.com/questions/91773/why-is-the-turing-machine-a-popular-model-of-computation
|
<p>I am a CS undergraduate. I understand how Turing came up with his abstract machine (modeling a person doing a computation), but it seems to me to be an awkward, inelegant abstraction. Why do we consider a "tape", and a machine head writing symbols, changing state, shifting the tape back and forth? </p>
<p>What is the underlying significance? A DFA is elegant - it seems to capture precisely what is necessary to recognize the regular languages. But the Turing machine, to my novice judgement, is just a clunky abstract contraption.</p>
<p>After thinking about it, I think the most idealized model of computation would be to say that some physical system corresponding to the input string, after being set into motion, would reach a static equilibrium which, upon interpretation equivalent to the one used to form the system from the original string, would correspond to the correct output string. This captures the notion of "automation", since the system would change deterministically based solely on the original state.</p>
<p><strong>Edit</strong>: </p>
<p>After reading a few responses, I've realized that what confuses me about the Turing machine is that it does not seem minimal. Shouldn't the canonical model of computation obviously convey the essence of computability?</p>
<p>Also, in case it wasn't clear I know that DFAs are not complete models of computation.</p>
<p>Thank you for the replies.</p>
|
<p>Well, a DFA is just a Turing machine that's only allowed to move to the right and that must accept or reject as soon as it runs out of input characters. So I'm not sure one can really say that a DFA is natural but a Turing machine isn't.</p>
<p>Critique of the question aside, remember that Turing was working <em>before</em> computers existed. As such, he wasn't trying to codify what electronic computers do but, rather, computation in general. My parents have a dictionary from the 1930s that defines computer as "someone who computes" and this is basically where Turing was coming from: for him, at that time, computation was about slide rules, log tables, pencils and pieces of paper. In that mind-set, rewriting symbols on a paper tape doesn't seem like a bad abstraction.</p>
<p>OK, fine, you're saying (I hope!) but we're not in the 1930s any more so why do we still use this? Here, I don't think there's any one specific reason. The advantage of Turing machines is that they're reasonably simple and we're decently good at proving things about them. Although formally specifying a Turing machine program to do some particular task is very tedious, once you've done it a few times, you have a reasonable intuition about what they can do and you don't need to write the formal specifications any more. The model is also easily extended to include other natural features, such as random access to the tape. So they're a pretty useful model that we understand well and we also have a pretty good understanding of how they relate to actual computers. </p>
<p>One could use other models but one would then have to do a huge amount of translation between results for the new model and the vast body of existing work on what Turing machines can do. Nobody has come up with a replacement for Turing machines that has had big enough advantages to make that look like a good idea.</p>
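<p>To make the "rewriting symbols on a paper tape" picture concrete, here is a hedged toy simulator (the name <code>run_tm</code> and the transition encoding are invented for illustration; this minimal version only supports a right-unbounded tape):</p>

```python
# A tiny Turing-machine simulator. delta maps (state, symbol) to
# (new state, symbol to write, head move "R" or "L").
def run_tm(delta, tape, state="q0", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        sym = tape[pos] if pos < len(tape) else blank
        state, write, move = delta[(state, sym)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example machine: flip every bit, halt at the first blank.
flip = {("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("halt", "_", "R")}
print(run_tm(flip, "0110"))   # prints "1001"
```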
| 474
|
language modeling
|
Are syntax and semantic just 2 structures such that one is a model of the other?
|
https://cs.stackexchange.com/questions/45291/are-syntax-and-semantic-just-2-structures-such-that-one-is-a-model-of-the-other
|
<ul>
<li>The syntax of a language is a structure. </li>
<li>The semantic of a language is a structure.</li>
<li>The semantic of a language is a model of its syntax.</li>
</ul>
<p>And that's all ? The duality syntax/semantic is just model theory applied to languages ?</p>
<p>(A short answer could be ok; I have already read the wikipedia pages !)</p>
|
<p>I am afraid the phrasing of the question misled me (though I did know
better) in first seeing model theory as applying to any two arbitrary
mathematical structures, and being the study of homomorphisms of
mathematical structures.</p>
<p>Actually this is wrong. <a href="https://en.wikipedia.org/wiki/Model_theory" rel="nofollow">Model Theory</a> already contains the idea of
syntax and semantics, more or less as I define it below. It studies
<em>theories</em>, which are sets of sentences in a formal language, and
<em>models</em> which are interpretation of these sentences in abstract
mathematical structures. This is close to what I call <em>language</em>
below, and I guess this is also close to what the author of the question calls
<em>language</em>.</p>
<p>So the proper answer to the question is that even model theory has a
concept of syntax and semantics, where syntax is a <em>theory</em>, i.e. a
formal representation structure such as a <a href="https://en.wikipedia.org/wiki/Universal_algebra" rel="nofollow">universal algebra</a>, and
semantics is an interpretation (i.e. a morphism) in some abstract
mathematical domain.</p>
<p>Hence <strong>the "duality syntax/semantic" is already present in model
theory</strong>. There is nothing new on that side.</p>
<p>However, <strong>what may make a difference is the understanding of the word
language</strong>, as it could be interpreted as going beyond what is usually
concerned by the syntactic algebras usually addressed by model theory.</p>
<p>Actually, the question does not define the word language, and it could
be formal languages, programming languages, natural languages, with
some unsaid understanding of what is intended.</p>
<p>So two possible answers could be:</p>
<ul>
<li><p>model theory is the study of languages with their syntax and
semantics, and there is nothing to add, or</p></li>
<li><p>a language can be defined by an arbitrary syntactic definition
(meaning a computable structure) and a homomorphic(?) mapping in
some abstract mathematical structure. The question mark is because
I fear that the definition of the mapping may raise some issues.</p></li>
</ul>
<p>The second view is possibly less formal, or less constrained. Whatever
the case, I am leaving below my initial answer (which corresponds to
the second view), since several of the issues it is addressing may
still be relevant and of interest to computer scientist, even though
the presentation may be debatable from the point of view of model
theory.</p>
<p>It does not bring anything new from a formal point of view, since the study of the formal systems considered in syntax already had to go all the way to Turing computability, in order to address the <a href="https://en.wikipedia.org/wiki/Entscheidungsproblem" rel="nofollow">Entscheidungsproblem</a>, and that is as far as we may hope to go in terms of syntax.
It may, however, help in understanding some more practical issues.</p>
<p><em>Your comments are welcome, including suggestions to erase my former, more naive, though possibly more intuitive, answer.</em></p>
<hr>
<hr>
<p><em>The first section is the direct answer to the question. The other two
sections may be skipped. They are attempts to explore some examples or consequences, or some practical
aspects encountered by computer scientists, in particular in the
design of programming languages.</em></p>
<h2>Syntax, semantics and languages</h2>
<p>The view of the issue as expressed in the question is mainly "OK". My
main criticism though is that <strong>the reference to the concept of
language is critical</strong>, because directly related to the concept of
syntax, not separable from the concept of syntax.</p>
<p>As a consequence, there is something circular, kind of tautological,
about your statement. The question is why do we use this specific
terminology of "<em>syntax and semantics</em>" for some models, for some
homomorphically related pairs of structures, and not for others. Where
is the difference? And a correlated question is: <strong>what is a language?</strong></p>
<p>Clearly there is no restriction on semantics: any topic/structure can
be the object/content of a discourse. Hence, the fundamental
distinction is to be looked for on the syntax side. My own
understanding is that what distinguishes such a homomorphic pair is
the <a href="https://en.wikipedia.org/wiki/Domain_of_a_function" rel="nofollow">domain</a> of the homomorphism: <strong>the syntax structure has to be concrete and
concretely manipulatable</strong>. Basically a syntactic structure is a
finite collection of concrete objects, usually idealized as symbols,
and a finitely defined collection of rules to compose in diverse ways any unboundedly
finite number of copies of these symbols, and to transform such
compositions into other compositions. The word <em>syntax</em> comes from
'<a href="https://en.wikipedia.org/wiki/Syntax#Etymology" rel="nofollow">Ancient Greek: σύνταξις "coordination" from σύν <em>syn</em>, "together,"
and τάξις <em>táxis</em>, "an ordering"</a>'.</p>
<p>Fundamentally, syntax is physical. The concept of symbol is only a
means to abstract away the choice of physical instantiation of the
symbols as objects such as sounds, gestures, <a href="https://en.wikipedia.org/wiki/Glyph" rel="nofollow">glyphs</a>, objects, stones
(which gave us the word <em>calculus</em>), electric or magnetic state,
presence and absence of a signal, etc., or composition of other
symbols.</p>
<p>Hence, the theory of syntax is essentially what is better known as the
theory of computation, which is the study of rule based manipulation
of symbols. Whatever we express, transmit, receive, compute, memorize,
analyze, is processed through some syntactic symbolic representation.</p>
<p>But what we are interested in is not the symbolic representations, but
some semantic concepts or objects (possibly themselves physical) that are
intended to be represented by the symbolic compositions at hand. And
the correspondence is defined by the semantic homomorphism into its
semantic <a href="https://en.wikipedia.org/wiki/Codomain" rel="nofollow">codomain</a>.</p>
<p>Then, in the most general acceptation of the word, <strong>a language is the
association of a syntax structure and a semantic structure through a
homomorphic mapping</strong>. Or to state it in the terms of the question, it
is a pair of structures such that one is a model of the other, with
the additional requirement that the latter be a syntactic structure.
<strong>But, in order to give such a definition, you must first define what
is a syntactic structure.</strong> And that definition stands on its own,
independently of the concepts of <em>language</em> or <em>semantics</em>: <strong>Syntax
is the study of finitely defined manipulations (including transmission
and memorization) of unboundedly finite compositions of copies of a
finite set of symbols</strong>. Or in simpler terms, <strong>syntax concerns the
definition, analysis and manipulation of representations</strong>. You may
read <em>computation theory</em> and <em>information theory</em>.</p>
<p>This is in agreement with the concept of homomorphism: the meaning of
the whole is derived from the meaning of the parts. But this is a
meta-remark. I insist less on defining semantics since it seems that
any structure (that makes sense?) can play that role.</p>
<h2>Further remarks</h2>
<p>I do not know if the view of linguists on natural language is exactly
the same, or as precise regarding the homomorphic relation, but it
should not be too far from it. The concepts of syntax and semantics
arose from language analysis and logic, which I assume were the same
topic at some point in history (actually diverging pretty late, in the
19th century according to <a href="https://en.wikipedia.org/wiki/Syntax#Early_history" rel="nofollow">Wikipedia page on syntax</a>). Still, it seems
that some modern linguists follow somewhat similar views "<a href="https://en.wikipedia.org/wiki/Syntax#Modern_theories" rel="nofollow"><em>since they regard syntax to be the study of an abstract formal system</em></a>".</p>
<p>The point is that most mathematical domains, abstract concepts, and
other objects of discourse generally elude us, and can only be
contemplated and dealt with through mechanisms to represent them and
(some of) their values, i.e. through syntactic representation. In its
simplest form, it can be a pointing finger to represent a direction.</p>
<p>The main characteristic of a syntactic domain is that it is a concrete
representation, finitely defined through some formal system such as
the formal languages dear to computer scientists. But these syntactic
domains, which are very precisely the domain of computation, i.e. of
syntactic transformations, are very limited, as we learn from the
study of computability. For example, they are denumerable, while many
abstract domains we are interested in are not denumerable.</p>
<p>As a consequence, it is often the case that syntax can only represent
a small part of the mathematical domains we study. This is typically
the case for real numbers, almost all of which do not have a (finite)
syntactic representation, the others being the computable reals.</p>
<p>Choosing a data structure to represent the values of some abstract
domain we wish to compute with is just finding an appropriate syntax
for your semantic problem, whether to express some conceptual semantic
derivation (computation), to have a simpler relation with semantics (the semantic morphism),
or to optimize some aspect of communication and memorization
(space, speed, reliability).</p>
<p>For users who want an example, computing the sum of XXVII and XCII, I
mean 11011 and 1011100, is just a manipulation of the digits to get the
representation of a number (119) that is the sum of the numbers
represented by the initial two representations. The positional
notation happens to be easier to manipulate, to be more economical in
space (number and variety of symbols), and to have a simpler way to
express the homomorphism. The syntax changes, the semantics is the
same, and both correspond to languages for expressing natural numbers
(well, not quite for Roman numerals).</p>
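<p>The arithmetic above can be checked mechanically; the point of positional notation is exactly that such digit manipulation is easy:</p>

```python
# The same two numbers, read from their binary positional representations.
x = int("11011", 2)    # XXVII, i.e. 27
y = int("1011100", 2)  # XCII, i.e. 92
print(x, y, x + y)     # 27 92 119
```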
<p>IMHO (but some of this may be a matter of individual perspective), if
syntax should be only what can be represented and manipulated,
semantics can be any mathematical domain, or more generally any
abstract domain of discourse, including the domains used for
syntax. Hence you can have a theory of syntax where you study
syntactic representation systems. Formal language theory is nothing
else.</p>
<p>Note that the idea of a syntax-semantics <em>duality</em> should not
induce the idea that there is some sort of symmetry between syntax and
semantics.</p>
<h2>Syntax and semantics in computer science practice</h2>
<p>It is true that we often pretend to address semantic issues, for
example when defining formally the semantics of a programming language
$L$. But that may be seen as an abuse. What we really do is use a standard
language $S$ that is supposed to have an accepted semantics and define
the semantics of $L$ in terms of the semantics of $S$. The difficulty
is that we can never access semantics directly, but only through some
syntactic proxy, even when the semantics is itself syntactic in
nature. This is why we use metalanguages such as <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form" rel="nofollow">Backus-Naur Form (BNF)</a>.</p>
<p>There are however various mathematical techniques to define semantic
domains. Among these techniques, you have the definition of syntactic
models where the syntax (possibly up to quotient by some equivalence
relation) is its own model. Actually, one of the major contribution of
Dana Scott to computer science was to conceive non-syntactic models of
lambda-calculus which is (initially at least) a purely syntactic theory (see for example
<a href="http://www.sciencedirect.com/science/article/pii/S157106611200062X" rel="nofollow">Continuation Models for the Lambda Calculus With Constructors</a>,
B. Petit (2012)).</p>
<p>A characteristic of most mathematical theories is that they may have
many models, including non-standard models. "<a href="https://en.wikipedia.org/wiki/Non-standard_model" rel="nofollow"><em>A non-standard model is a
model of a theory that is not isomorphic to the intended model (or
standard model)</em></a>". That also applies to syntactic theories. Actually,
non-standard interpretation of languages is a very fruitful area of
research in computer science, related to <a href="https://en.wikipedia.org/wiki/Abstract_interpretation" rel="nofollow">abstract interpretation</a>, type
analysis, flow analysis, <a href="https://en.wikipedia.org/wiki/Partial_evaluation" rel="nofollow">partial evaluation</a> and many other approaches
to program analysis. To take a very trivial, purely syntactic
example, a non-standard interpretation of a program can be a
pretty-printed version of that program. A type checker is usually an
abstract interpretation of the program.</p>
<p>As a consequence, it is clear that the chosen syntax may not capture,
express, all the properties of the intended semantic domain. However,
anything that is considered as syntactically correct must have some
associated semantics through the semantic morphism, even if it is some
special error value. But it is improper, though often done, to state
that the syntax is defined by a context-free grammar, and then
consider programs with undeclared variables as being incorrect. It
cannot be syntactically incorrect for such a reason, because CF
grammars cannot check variable declarations. And I would not know what
it means to be semantically incorrect, other than being possibly a
special error value of the semantic structure.</p>
<p>The dream of programming language designers or implementors is that any
program that can only fail, or that can fail in some specific way,
should be syntactically incorrect. Hence, they have been increasing
the descriptive power of syntax to include as much checking as
possible, limited only by computability constraints. This did lead to
the use of Turing complete formalisms, of which <a href="https://en.wikipedia.org/wiki/Van_Wijngaarden_grammar" rel="nofollow">van Wijngaarden's
W-grammars</a> are an example. But this is a <a href="https://en.wiktionary.org/wiki/pipe_dream" rel="nofollow">pipe dream</a>. Computability
theory tells us that semantic properties of syntactically expressed
functions cannot be checked syntactically, as soon as the language has
Turing power, which we usually expect in programming languages.</p>
<p>In practice, this checking for intrinsic correctness (i.e. independent
of problem specification), or for properties to be used, is often
performed in compilers (or in formal semantic definitions) by what is
called <a href="https://en.wikipedia.org/wiki/Programming_language#Static_semantics" rel="nofollow">static semantics</a>. Being purely computational, static semantics can just as well be considered part of the syntax.</p>
<p>In the case of programming languages, the semantic domain is a
computation domain. Given that we allow Turing complete formalisms to
express the syntax, it becomes possible for the syntax to express
completely the semantics. But developing this angle is getting far away
from the original question. Still, this is the main reason why the distinction between syntax and semantics is so difficult to make precise in programming languages. This may also be the reason why universal Turing machines are possible.</p>
<p>A last point is that defining the semantics as a structure that is a
model of the syntax may not be enough. It could be a model in several
different ways. So it is essential to include the morphism in the
description of the semantics. That is typically what is done with
denotational semantics of programming languages, and a lot of other
formal systems.</p>
<p><sub>
<em>Note: This was rather complex and hard to write. If you are to
criticize it, I would appreciate explicit and precise comments. It
would also be useful to other users. Thank you.</em>
</sub></p>
| 475
|
language modeling
|
Design a grammar for this context-free language
|
https://cs.stackexchange.com/questions/54759/design-a-grammar-for-this-context-free-language
|
<p>I am doing an exercise from <a href="http://web.engr.illinois.edu/~jeffe/teaching/algorithms/all-models.pdf" rel="nofollow">Models Of Computation - Ch - 5, Q-1(r)</a>.</p>
<p>Design a grammar that generates this context-free language</p>
<p>$\{ x\space\$\space y^R \,|\, x, y \in\{0, 1\}^* \text{ and } x \ne y\}$</p>
<p>Any hint will be nice.</p>
<p>(<strong>My try</strong>: I can't seem to come up with a grammar. The pushdown automaton that can accept this language seems easy though. First push all of x into the stack, which can be identified on seeing the $. Keep matching y's characters one by one with top of stack character and pop until the first mismatch. If no mismatch is encountered and the stack is empty as well as y is exhausted, then reject. If a mismatch is encountered, <em>or</em> if the stack becomes empty before y is exhausted, <em>or</em> y is exhausted and stack is still not empty, then accept.)</p>
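<p>(The PDA idea sketched above can be mirrored directly by a membership test: everything after the <code>$</code> is <code>y</code> reversed, so the word is in the language iff that part, re-reversed, differs from <code>x</code>. The function name is invented for illustration.)</p>

```python
def in_language(w):
    """Membership test for { x $ y^R : x, y in {0,1}*, x != y }."""
    if w.count("$") != 1:
        return False
    x, z = w.split("$")                    # z is y reversed
    if any(c not in "01" for c in x + z):
        return False
    return x != z[::-1]                    # accept iff x != y

print(in_language("01$0"))    # True: x="01", y="0"
print(in_language("01$10"))   # False: x="01", y="01"
```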
|
<p>Hint: Break this into four different cases:</p>
<ol>
<li>Words of the form $x\$y$ where $|x|>|y|$.</li>
<li>Words of the form $x\$y$ where $|x|<|y|$.</li>
<li>Words of the form $\Sigma^* 0 \Sigma^n \$ \Sigma^n 1 \Sigma^*$.</li>
<li>Words of the form $\Sigma^* 1 \Sigma^n \$ \Sigma^n 0 \Sigma^*$.</li>
</ol>
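<p>As a hedged sanity check of this decomposition, a brute-force comparison over all short words confirms that the four cases together cover exactly the pairs with $x \ne y$:</p>

```python
from itertools import product

def words(max_len):
    for n in range(max_len + 1):
        for t in product("01", repeat=n):
            yield "".join(t)

def in_lang(x, y):            # membership, directly from the definition
    return x != y

def some_case(x, y):
    if len(x) != len(y):      # cases 1 and 2: unequal lengths
        return True
    yr, m = y[::-1], len(x)
    # cases 3 and 4: n symbols before the '$' there is a 0 (resp. 1),
    # and n symbols after the '$' there is a 1 (resp. 0)
    return any((x[m - 1 - n], yr[n]) in {("0", "1"), ("1", "0")}
               for n in range(m))

ok = all(in_lang(x, y) == some_case(x, y)
         for x in words(4) for y in words(4))
print(ok)   # True
```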
| 476
|
language modeling
|
What programming language should I use to run physics simulations of CO2 being absorbed by different materials?
|
https://cs.stackexchange.com/questions/128448/what-programming-language-should-i-use-to-run-physics-simulations-of-co2-being-a
|
<p>I want to model the absorption of CO2 with multiple different materials such as cement, limestone, wood, etc? I know Matlab is capable of these types of physics simulation according to this research paper: <a href="https://www.sciencedirect.com/science/article/abs/pii/S025527011630006X" rel="nofollow noreferrer">Research Paper</a>, however, I am not sure if Python or another programming language could complete this task easier.</p>
|
<p>That depends on a lot of factors, mostly on the software available for the task (you <em>don't</em> want to do the whole job yourself; if it is already done, or there are pieces you can combine with little effort, use that), and secondarily on how detailed your model is: its computing requirements might force the use of inconvenient but efficient languages/packages.</p>
<p>This is probably a question better asked elsewhere, perhaps on a physics or chemistry forum (for the modelling aspects, and possibly for software alternatives in common use), or (given much more detailed requirements and specifications) on a forum on programming.</p>
| 477
|
language modeling
|
What is the computation model of Prolog?
|
https://cs.stackexchange.com/questions/90400/what-is-the-computation-model-of-prolog
|
<p>Several computation models have representative programming language counterparts, as, according to <a href="https://cs.stackexchange.com/a/44310/86914">this answer</a>, Snobol for rewriting systems, APL for combinators, Lisp/Scheme for lambda calculus, and off course the family of imperative languages for TMs (or more precisely RAMs). It seems to me that Prolog should also be a paradigmatic language for some model. Is this assumption true? If so, what is the name of that model?</p>
|
<p>I think the computation model of Prolog is the SLDNF resolution of Horn clauses.</p>
<p>Prolog is actually very procedural. Kowalski 1974: "The interpretation of predicate logic as a programming language is based upon the interpretation of <em>implications</em> [...] as <em>procedure declarations</em> [...]" (emphasis mine)</p>
<p><a href="https://www.doc.ic.ac.uk/~rak/papers/IFIP%2074.pdf" rel="nofollow noreferrer">https://www.doc.ic.ac.uk/~rak/papers/IFIP%2074.pdf</a></p>
<p>(However, lambda calculus, theorem provers, and Turing machines are term rewriting systems indeed. What is a computational model then, if everything is a term rewriting system?)</p>
| 478
|
language modeling
|
Term for the use of the same programming language in every tier?
|
https://cs.stackexchange.com/questions/106093/term-for-the-use-of-the-same-programming-language-in-every-tier
|
<p>What is the technical term that describes the use of the same programming language in every tier in the architecture of a system? For example, having JavaScript in model, view and controller.</p>
<p>edit #1: MVC was mentioned just to give an example. I read the concept in a book once when I was at the university but can't remember. I've been googling the concept but can't reach it. Thanks again</p>
|
<p>It is not widely used terminology. If you use MVC, it makes little sense to deploy different languages, because some glue is needed to maintain them (say, data sharing between the different languages).</p>
<p>The term is monolanguage (monolingual programming) or homogeneous-language programming. Similar terms like homogeneous programming are about the same hardware being used among nodes.<br>
The term monolanguage, or homogeneous programming environment, is more commonly used for creating a full software stack based on the same language, of which there is currently only one.</p>
| 479
|
language modeling
|
Differences between programming model and programming paradigm?
|
https://cs.stackexchange.com/questions/49421/differences-between-programming-model-and-programming-paradigm
|
<ol>
<li><p>What is the relation and difference between a programming model and
a programming paradigm? (especially when talking about the
programming model and the programming paradigm for a programming
language.)</p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Programming_paradigm" rel="noreferrer">Wikipedia</a>
tries to answer my question in 1:</p>
<blockquote>
<p><strong>Programming paradigms</strong> can also be compared with <strong>programming models</strong> that are abstractions of computer systems. For example, the
"von Neumann model" is a programming model used in traditional
sequential computers. For parallel computing, there are many
possible models typically reflecting different ways processors can
be interconnected. The most common are based on shared memory,
distributed memory with message passing, or a hybrid of the two.</p>
</blockquote>
<p>But I don't understand it:</p>
<ul>
<li><p>Is it incorrect that the quote in Wikipedia says "the 'von Neumann model' is a programming model", because I understand
that the Von Neumann model is an architectural model from
<a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture" rel="noreferrer">https://en.wikipedia.org/wiki/Von_Neumann_architecture</a>?</p></li>
<li><p>Are the parallel programming models "typically reflecting different ways processors can be interconnected"? Or are parallel
architectural models "reflecting different ways processors can be
interconnected" instead?</p></li>
</ul></li>
<li><p>In order to answer the question in 1, could you clarify what a programming model is? </p>
<p>Is it correct that a programming model provided/implemented by a
programming language or API library, and such implementation isn't
unique?</p>
<p>From <a href="https://books.google.com/books?id=UbpAAAAAQBAJ&lpg=PA106&ots=9YHGfFpNEA&dq=The%20programming%20model%20is%20at%20the%20next%20higher%20level%20of%20abstraction%20and%20describes%20a%20parallel%20computing%20system%20in%20terms%20of%20the%20semantics%20of%20the%20programming%20language%20or%20programming%20environment.&pg=PA106#v=onepage&q&f=false" rel="noreferrer">Rauber's Parallel Programming book</a>, "programming model" is
an abstraction above "model of computation (i.e. computational
model)" which is in turn above "architectural model". I guess that a
programming model isn't just used in parallel computing, but for a
programming language, or API library.</p></li>
</ol>
|
<p>A programming model is implied by the system architecture. If your system architecture is a register machine, your programming model will consist of machine code operations on registers. If your architecture is a stack machine, your programming model will consist of stack operations. A Von Neumann architecture and a Harvard architecture will have other programming models. Self-modifying code, for example, will be possible in a Von Neumann architecture but not in a Harvard architecture.</p>
<p>A programming paradigm is more high-level: it is the way a problem is modelled (imperative or declarative, object-oriented, functional, logic, ...). A single-paradigm language supports one of these. Multiparadigm languages are more a sort of Swiss army knife, taking elements from several paradigms.</p>
<p>Every architecture (and corresponding model) will have its own set of machine code instructions. This machine code language itself will follow the imperative paradigm (do this, do that, read register A, add the value to register B, ... or put a value on top of the stack, put another value on top of the stack, add the two values on top, etc.)</p>
<p>(At least I never saw a non-imperative hardware processor)</p>
<p>A high level language (of whatever paradigm) will be compiled or interpreted to this machine code.</p>
<p>About parallelism: if we consider interconnected processors, it will be clear that the way they interconnect will be part of the programming model. An old INMOS transputer, for example, connects with four other transputers. The machine code will have instructions to communicate with the neighbouring transputers.</p>
<p>But also on recent systems, the way to provide mutual exclusion will have to be resolved at a low level. On a one-processor system we will have to turn interrupts off and on when entering or leaving a critical section. On a multiprocessor system we will need an atomic 'test and set' instruction. This is part of the programming model.</p>
<p>Parallel computing paradigms are high-level models for using parallelism. Think of languages that have threaded objects, or that use semaphores and monitors as language elements.</p>
<p>When we program on different operating systems, different APIs will be used (or even if we program on the same system but use another library, e.g. a graphics library). This will change our programming model. The low-level code will be different, but if there is a good abstraction (a sort of "code once, compile anywhere") this will be invisible in the high-level language. If not, you will have to make small changes in your code. But since you will use the same high-level language, there will be no change of paradigm.</p>
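<p>The paradigm distinction is easy to see in a multiparadigm language; the following small sketch solves the same task imperatively (explicit steps mutating state) and declaratively (describing how values combine):</p>

```python
from functools import reduce

data = [3, 1, 4, 1, 5]

# imperative: a sequence of state-changing steps
total = 0
for v in data:
    total += v

# functional/declarative: describe the combination, no mutation
total_f = reduce(lambda a, b: a + b, data, 0)

print(total, total_f)   # 14 14
```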
| 480
|
language modeling
|
What does it mean to say that a language is "effectively closed" under an operation?
|
https://cs.stackexchange.com/questions/11920/what-does-it-mean-to-say-that-a-language-is-effectively-closed-under-an-operat
|
<p>I've been reading some formal language theory papers, and I've come across a term that I don't understand.</p>
<p>The paper will often refer to a set being "effectively closed under intersection" or other operations. What does "effectively" mean here? How does this differ from normal closure?</p>
<p>For reference, the paper I'm seeing these in is:</p>
<p>M. Daley and I. McQuillan. Formal modelling of viral gene compression. International Journal of Foundations of Computer Science, 16(3):453–469, 2005.</p>
|
<p>"Effectively closed" means that the family is closed under the operation, and that the closure can be computed by giving an automaton/grammar for it (if the original languages are also given in such an effective representation). E.g., given a finite state automaton, we can actually find an automaton for the complement.</p>
<p>Then it is a natural question whether there are examples of closure properties that are <em>not</em> effective. I know one right now. For a regular language $R$ and <em>any</em> language $L$, the quotient $R/L$ is again regular. There is no effective way to construct a FSA for that quotient if $L$ is, e.g., recursively enumerable.</p>
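<p>For the effective direction, a minimal sketch: given a deterministic finite automaton, an automaton for the complement is obtained by flipping the accepting states (the tuple representation and names here are illustrative, not a standard API):</p>

```python
def complement(dfa):
    # A DFA is (states, alphabet, transition dict, start, accepting set);
    # the complement just swaps accepting and non-accepting states.
    states, alphabet, delta, start, accept = dfa
    return (states, alphabet, delta, start, states - accept)

def accepts(dfa, w):
    states, alphabet, delta, start, accept = dfa
    q = start
    for c in w:
        q = delta[(q, c)]
    return q in accept

# DFA over {a, b} accepting words with an even number of a's
even_a = ({0, 1}, {"a", "b"},
          {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1},
          0, {0})
odd_a = complement(even_a)
print(accepts(even_a, "aba"), accepts(odd_a, "aba"))   # True False
```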
| 481
|
language modeling
|
Theoretical justification of "halting problem avoidance"
|
https://cs.stackexchange.com/questions/57739/theoretical-justification-of-halting-problem-avoidance
|
<p>The wikipedia page for the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">Halting problem</a> mentioned <em>practical solutions</em> to avoiding the halting problem such as avoiding infinite loops. And there is a mention that "by restricting the capabilities of general-purpose (Turing-complete) programming language, it is possible to guarantee the completion of all sub-routines (written under the restriction)".</p>
<p>What seems unclear to me is the underlying computation model of such restricted programming languages.</p>
<p>Say, if we remove -- from a general-purpose (Turing-complete) programming language -- the capability to conduct infinite loops (i.e. making loop variables always enumerate a finite list of elements, avoiding circular function recursion, etc.), what would be the <em>expressiveness</em> of the resulting programming language, or the capabilities of the corresponding computation model?</p>
<p>Possibly related questions </p>
<p><a href="https://cs.stackexchange.com/questions/11936/is-there-an-always-halting-limited-model-of-computation-accepting-r-but-not?rq=1">Is there an always-halting, limited model of computation accepting $R$ but not $RE$?</a></p>
|
<p>It is absolutely theoretically justified. </p>
<p>First realize that a loop is just a form of recursion: do the loop body, then either stop or do the loop body again with different variable values. </p>
<p><a href="https://en.m.wikipedia.org/wiki/System_F" rel="nofollow">System F</a> is a lambda calculus (programming language) with no recursion built in, and it is known to be strongly normalization. That is well typed every computation in this system halts. It's also powerful enough to compute basically every function you can think of, including the infamous Ackerman function. </p>
<p>In this system, you can use <a href="https://en.m.wikipedia.org/wiki/Church_encoding" rel="nofollow">Church numerals</a> to simulate the finite looping you mention in your question. </p>
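<p>As an illustration (a hypothetical sketch in Python rather than System F syntax; the names are mine), a Church numeral is a function that applies its argument a fixed number of times, so it can only express bounded iteration:</p>

```python
def church(n):
    """Church numeral n: a function applying f exactly n times -- a bounded loop."""
    return lambda f: lambda x: x if n == 0 else church(n - 1)(f)(f(x))

three = church(3)

# "Looping" three times over the successor function: the iteration count
# is fixed by the numeral itself, so the computation always halts.
assert three(lambda k: k + 1)(0) == 3
```

<p>The point is that the number of iterations is baked into the numeral; there is no way to express "loop until some condition holds", which is exactly what makes unbounded loops (and non-termination) possible.</p>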
<p>If you take a programming language without loops, you can model it in System F, which will give you a guarantee that all programs in this system halt. </p>
<p>It's implicit that if you remove loops, you also remove GOTO, which can be used to build loops. </p>
| 482
|
language modeling
|
What algorithm can be used to implement code folding for the C programming language?
|
https://cs.stackexchange.com/questions/171868/what-algorithm-can-be-used-to-implement-code-folding-for-the-c-programming-langu
|
<p>I'm working on a simple editor for the C programming language (without using an AST), and I need to implement a code folding feature.</p>
<p>When the user opens a file, an initial parse is performed, and all {} code blocks are easily detected. I store these blocks in a tree structure, where nested blocks are children of their enclosing block.</p>
<p>The challenge arises when the user makes edits. For each edit, I have the following information:</p>
<ul>
<li>startLineIndex (the index of the first affected line),</li>
<li>endLineIndex (the index of the last affected line),</li>
<li>addedLineCount,</li>
<li>removedLineCount.</li>
</ul>
<p>At this point, I need to update the folding model in the most efficient way possible — ideally by re-parsing only a minimal portion of the code.</p>
<p>Can someone suggest which algorithms or techniques could be used to efficiently update the folding block model after user edits?</p>
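<p>A common baseline (a sketch under my own naming; a real C scanner must additionally skip braces inside strings, comments, and character literals) is a brace-depth scan that rebuilds the fold ranges. After an edit, the scan can be restricted to the line span of the smallest enclosing block from the stored tree; if the edit changes the net brace count, the scan is widened to the parent block:</p>

```python
def fold_ranges(lines):
    """Pair '{' and '}' across lines into (open_line, close_line) fold ranges."""
    stack, ranges = [], []
    for i, line in enumerate(lines):
        for ch in line:
            if ch == '{':
                stack.append(i)          # remember where this block opened
            elif ch == '}' and stack:
                ranges.append((stack.pop(), i))
    return ranges

src = ["int f() {",
       "  if (x) {",
       "  }",
       "}"]
assert fold_ranges(src) == [(1, 2), (0, 3)]
```

<p>Running this only over the enclosing block keeps the re-parse proportional to the edited region rather than the whole file, which matches the "minimal re-parse" requirement.</p>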
| 483
|
|
language modeling
|
An alternative to the object paradigm
|
https://cs.stackexchange.com/questions/63332/an-alternative-to-the-object-paradigm
|
<p>I've been playing around with an alternative to the standard object-oriented paradigm for modeling data, and I would like to know if there is any research or already existing systems along the same lines. Let me briefly explain my ideas.</p>
<p>What I call the object-oriented paradigm goes something like this. Objects are representations of physical entities or concepts. An object can consist of components (other objects), and have various attributes and methods (member functions). For example, a car can be represented by an object having four wheels, a gas pedal, a steering wheel, and various methods for driving around. Moreover, each object is part of a type (or class), defined as a set of objects having the same kinds of attributes and methods (like the class of all cars). Importantly, these attributes and methods are in some sense an integral part of the object; the object is composed of its parts. So the car objects represents the whole physical thing, including all its parts and its behavior. (I'm sure this stuff is very familiar to you.)</p>
<p>Now, for various reasons I'm considering a fundamentally different view, which we might call the <em>network-oriented</em> paradigm. In this view, objects do not exist. Instead, each physical entity or concept is represented by one indivisible atom, which by itself carries no information. Instead, all the information about the entity --- its attributes and methods --- are encoded as links to other atoms. For example, a car atom would have links to four wheel-atoms, one gas pedal atom, one steering wheel atom, and to various methods for driving cars around. The key difference is that, in this network view, these atoms that are connected to the car atom are not <em>part of</em> the car. There is no sharp "boundary" defining what is "part of" or "inside" an entity, only links connecting atoms to provide them with attributes and behavior. Moreover, atoms are not typed, at least not in the usual object-as-instance-of-class sense. Any atom can connect to many other atoms, resulting in a kind of network (hence the name). I think this view is more flexible in various ways; for example, one can easily create a version of a car with three wheels by simply removing one of the links, without having to redesign an entire class hierarchy. And I also like it better for philosophical reasons :)</p>
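<p>For what it's worth, a minimal sketch of the idea in Python (all names are mine): atoms carry no data of their own, and every attribute or part is a labeled link to another atom:</p>

```python
class Atom:
    """An untyped, indivisible node; all information lives in labeled links."""
    def __init__(self):
        self.links = {}                      # label -> list of linked atoms

    def link(self, label, other):
        self.links.setdefault(label, []).append(other)

car = Atom()
for _ in range(4):
    car.link("wheel", Atom())
car.link("gas_pedal", Atom())

# A three-wheeled variant needs no class redesign: just drop one link.
car.links["wheel"].pop()
assert len(car.links["wheel"]) == 3
```

<p>Note the wheels are not "inside" the car object; the car atom merely points at them, so reshaping an entity is a local graph edit rather than a change to a type hierarchy.</p>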
<p>Question: does anyone know of an existing system or language, or theoretical papers where something like this has been explored? </p>
<p>I haven't found anything similar in the literature when googling, but I'm not a theoretical computer scientist (I work in computational biology) so I just might not know where to look. I'm experimenting with a network-oriented system like this on my spare time, mostly for fun, but I also think data modeling along these lines would be more useful in my field than the object-oriented tools we have now.</p>
| 484
|
|
language modeling
|
Computational complexity vs. Chomsky hierarchy
|
https://cs.stackexchange.com/questions/25940/computational-complexity-vs-chomsky-hierarchy
|
<p>I'm wondering about the relationship between computational complexity and the Chomsky hierarchy, in general.</p>
<p>In particular, if I know that some problem is NP-complete, does it follow that the <em>language</em> of that problem is not context-free?</p>
<p>For example, the clique problem is NP-complete. Does it follow that the language corresponding to models with cliques is of some minimal complexity in the Chomsky hierarchy (for all/some ways of encoding models as strings?)</p>
|
<p>There are four classes of language in the Chomsky hierarchy:</p>
<ol>
<li><p>Regular languages — this class is the same as $\mathrm{TIME}(n)$ or $\mathrm{TIME}(o(n\log n))$ (defined using single-tape machines, see Emil's comment), or $\mathrm{SPACE}(0)$ or $\mathrm{SPACE}(o(\log\log n))$ (per Emil's comment).</p></li>
<li><p>Context-free languages — this class doesn't have nice closure properties, so instead one usually considers <a href="http://en.wikipedia.org/wiki/LOGCFL" rel="noreferrer">$\mathrm{LOGCFL}$</a>, the class of languages logspace-reducible to context-free languages. It is known that $\mathrm{LOGCFL}$ lies in $\mathrm{AC}^1$ (and so, in particular, in $\mathrm{P}$), and it has nice complete problems detailed in the linked article.</p></li>
<li><p>Context-sensitive languages — this class corresponds to $\mathrm{NSPACE}(n)$.</p></li>
<li><p>Unrestricted grammars — this class consists of all recursively enumerable languages.</p></li>
</ol>
<p>If a language is NP-complete then, assuming P$\neq$NP, it is not context-free. However, it could be context-sensitive (clique and SAT both are). Any language in NP is described by some unrestricted grammar.</p>
| 485
|
language modeling
|
Proving a certain language is regular by constructing a DFA
|
https://cs.stackexchange.com/questions/159660/proving-a-certain-language-is-regular-by-constructing-a-dfa
|
<p>Let <span class="math-container">$L$</span> be a regular language over the alphabet <span class="math-container">$\sum$</span>, prove that the language defined by <span class="math-container">$\hat{L} = \{uv \in \sum^* | u^Rv \in L \}$</span> is regular.</p>
<p>There is guidance in the exercise that instructs us to define for every <span class="math-container">$p \in Q$</span> the language:<br />
<span class="math-container">$L_p = \{uv\in\sum^* | \delta(q_0,u^R) = p \space\space and \space\space \delta(p,v) \in F \}$</span><br />
Then prove <span class="math-container">$L_p$</span> is regular and then deduce that <span class="math-container">$\hat{L}$</span> is also regular.<br />
The last step is clear to me, that is <span class="math-container">$L = \bigcup_{p\in Q}L_p$</span> is a finite union of regular languages and therefore <span class="math-container">$L$</span> is regular.<br />
Now, in order to prove <span class="math-container">$L_p$</span> is regular I tried to construct 2 DFAs:<br />
Let <span class="math-container">$p\in Q$</span> and let us define 2 DFAs <span class="math-container">$B_q = (\sum,Q,p,q_0,\delta')$</span> and <span class="math-container">$C_q=(\sum,Q,p,F,\delta)$</span> and <span class="math-container">$\delta'(q',\sigma) = q$</span> if <span class="math-container">$\delta(q,\sigma) = q'$</span> (i.e. changing the directions of the arrows), from here it follows that <span class="math-container">$B_p$</span> recognizes all the words starting in state <span class="math-container">$p$</span> and ending in <span class="math-container">$q_0$</span>, that is the reversion of the words that would be accepted if they were starting at <span class="math-container">$q_0$</span> and ending in <span class="math-container">$p$</span>.<br />
Similarly, <span class="math-container">$C_p$</span> recognizes all the words that start in state <span class="math-container">$p$</span> and end in <span class="math-container">$F$</span>, therefore I can say that <span class="math-container">$L_p = L(B_p)*L(C_p)$</span> and therefore <span class="math-container">$L_p$</span> is regular.<br />
I don't feel 100% confident about my proposal because I am unsure if I built the DFAs as they should be, more specifically I am not sure how to show that <span class="math-container">$B_q$</span> indeed recognizes the reversed strings ?<br />
I will also be grateful if there's anyone who's able to guide me towards a different solution if such one exists<br />
Source : The Open University of Israel, Computational Models course, <a href="https://mega.nz/folder/0Sg0iD4B#0OPF1JJgFjtYoJuStlsCtA/file/JbYXTYYS" rel="nofollow noreferrer">https://mega.nz/folder/0Sg0iD4B#0OPF1JJgFjtYoJuStlsCtA/file/JbYXTYYS</a></p>
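<p>As a sanity check on the reversed-arrow idea (a toy experiment of my own, not part of the exercise), one can verify on a small DFA that the reversed automaton reaches $q_0$ from $p$ on $w$ exactly when $\delta(q_0, w^R) = p$; note that reversing the arrows of a DFA in general yields an NFA, since several states may share a successor:</p>

```python
from itertools import product

# Toy DFA over {a, b} (states 0, 1; start state 0); delta is total.
states = {0, 1}
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
q0 = 0

def dfa_run(q, w):
    """Run the original DFA from state q on word w."""
    for a in w:
        q = delta[(q, a)]
    return q

def rev_reach(p, w):
    """States reachable from p on w when every arrow of delta is reversed."""
    cur = {p}
    for a in w:
        cur = {q for q in states if delta[(q, a)] in cur}
    return cur

# delta(q0, w^R) = p  iff  the reversed automaton goes p -> q0 on w.
for n in range(5):
    for w in map(''.join, product('ab', repeat=n)):
        for p in states:
            assert (q0 in rev_reach(p, w)) == (dfa_run(q0, w[::-1]) == p)
```

<p>The exhaustive check over all short words is exactly the statement that the reversed automaton, started at $p$ with accepting state $q_0$, recognizes the reversals of the words leading $q_0$ to $p$.</p>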
| 486
|
|
language modeling
|
Why is Dyck-2 so important for the Chomsky-Schützenberger theorem?
|
https://cs.stackexchange.com/questions/162654/why-is-dyck-2-so-important-for-the-chomsky-sch%c3%bctzenberger-theorem
|
<p>I have read many times that models that can parse Dyck-2 are of great importance. It appears that Dyck-2 is used interchangeably with Dyck-N.</p>
<p>Afaik the Chomsky–Schützenberger representation theorem states that you can obtain any context-free language from a Dyck-N language using a homomorphism and an intersection with a regular language.</p>
<p>It appears that Dyck-2 is already enough for that? Or is Dyck-2 simply necessary, but not sufficient, so it is just used to disprove a model's ability to learn Dyck-N?</p>
<p>I thought maybe you can represent any Dyck-N language by a Dyck-2 language that uses "binary" representations of brackets. This would theoretically also be possible for Dyck-1 by unary encoding, but decoding there would not be suffix-free, which breaks the homomorphism, I guess? Maybe someone could validate this approach, because I have never seen it written out.</p>
<p>Thank you very much.</p>
|
<p>Is 2-bracket Dyck equivalent to <span class="math-container">$n$</span>-bracket Dyck (<span class="math-container">$n\ge2$</span>)? Short answer: that depends on which operations one allows.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Chomsky%E2%80%93Sch%C3%BCtzenberger_representation_theorem" rel="nofollow noreferrer">Chomsky–Schützenberger Theorem</a> states that every context-free language <span class="math-container">$L\subseteq \Sigma^*$</span> can be written as
<span class="math-container">$L=h(D_{T}\cap R)$</span>, where <span class="math-container">$D_T$</span> is the Dyck language over the bracket pairs on <span class="math-container">$T\cup \overline T$</span>, <span class="math-container">$R\subseteq (T\cup \overline T)^*$</span> a regular language, and <span class="math-container">$h: (T\cup \overline T) \to \Sigma^*$</span> a homomorphism.</p>
<p>Some intuition here. We can prove the CST using a representation of <span class="math-container">$L$</span> as a pushdown automaton. Each transition is represented as a sequence of brackets. These brackets represent both the input letter that is read and the symbols that are popped and pushed during that instruction. It is possible to show that two different pushdown symbols suffice for the PDA. On the other hand <span class="math-container">$\Sigma$</span> is not bounded. As we use a homomorphism to decode <span class="math-container">$\Sigma$</span> from <span class="math-container">$T$</span>, in general <span class="math-container">$T$</span> must be at least as large as <span class="math-container">$\Sigma$</span>. Hence we cannot bound the number of bracket pairs.</p>
<p>In the theory of <a href="https://en.wikipedia.org/wiki/Abstract_family_of_languages" rel="nofollow noreferrer">Abstract Families of Languages</a> one studies language families and their closure properties. We have the result that the context-free languages form a <a href="https://en.wikipedia.org/wiki/Cone_(formal_languages)" rel="nofollow noreferrer">cone</a>: the smallest family of languages that includes <span class="math-container">$D_2$</span> (the Dyck language over two bracket pairs) and is closed under homomorphisms, inverse homomorphisms, and intersection with regular languages.</p>
<p>Basically the CST construction works to obtain such a result. The inverse homomorphism however can be used to code both input letters and pushdown symbols into two pairs of brackets. The context-free language is of the form <span class="math-container">$L = h( g^{-1} (D_2) \cap R)$</span>.</p>
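<p>For concreteness, membership in a Dyck language over any set of bracket pairs is just a stack check (a generic sketch of my own; the pair set is a parameter, so the same code handles <span class="math-container">$D_2$</span> and <span class="math-container">$D_n$</span>):</p>

```python
def is_dyck(w, pairs):
    """Stack check: does w belong to the Dyck language over the given bracket pairs?"""
    close = dict(pairs)                     # opening -> matching closing bracket
    closers = set(close.values())
    stack = []
    for c in w:
        if c in close:
            stack.append(close[c])          # expect this closer later
        elif c in closers:
            if not stack or stack.pop() != c:
                return False                # mismatched or unopened bracket
        else:
            return False                    # letter outside the bracket alphabet
    return not stack                        # everything opened must be closed

D2 = [("(", ")"), ("[", "]")]
assert is_dyck("([()])", D2)
assert not is_dyck("([)]", D2)
```

<p>The stack here plays the role of the PDA pushdown in the construction above: the open brackets are pushes, the close brackets are pops.</p>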
<p><strong>PS</strong>. Below is an old slide I have. (<em>Note</em>: the roles of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are reversed.)
It illustrates that we can see a PDA diagram as a finite state automaton, which defines regular sequences of instructions: the set <span class="math-container">$R$</span>. Whether such a sequence is allowed as a computation has to be verified by checking that the pop and push instructions are legal. Legal sequences are Dyck sequences (with matching brackets). Thus we map the instructions to <span class="math-container">$D_2$</span> here using <span class="math-container">$h$</span>.
Finally the language of the PDA is the sequence of input letters, which is obtained from the instructions using the second morphism <span class="math-container">$g$</span>.</p>
<p><a href="https://i.sstatic.net/FBX73.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBX73.png" alt="enter image description here" /></a></p>
| 487
|
language modeling
|
Measuring logicality of programming languages?
|
https://cs.stackexchange.com/questions/170042/measuring-logicality-of-programming-languages
|
<p>I have a simple question of how would you measure the logicality of a programming language?</p>
<p>EDIT: I was asked to specify the term "logicality", so I will try to provide a stipulation. By "logicality" I mean that the programming language corresponds to the least number of logical steps needed to complete any given process on the processor or 'CPU'.</p>
<p>Now, here is my hypothesized solution: Namely, to read the data from the processor and analyze it in an inference model from the programming language being processed on the CPU, for a task or program in the respective programming language.</p>
<p>EDIT #2: I only stated this because of the recent interest in LLMs. I thought that with the sheer size of data from various programming languages, one could discover the logicality of various programming languages. I believe that OpenAI would be able to do this one day with their inference models.</p>
| 488
|
|
language modeling
|
Difference between a model of computation and semantics
|
https://cs.stackexchange.com/questions/170997/difference-between-a-model-of-computation-and-semantics
|
<p>I understand if this question sounds like nonsense, but in my understanding the concept of semantics applies to programming languages. And a model of computation is (generally speaking) a formal system, a mathematical object which describes exactly how to use it and what to expect while using it, thus having inherent <em>semantics</em> (if we insist on using that word).</p>
<p>Even the <a href="https://en.wikipedia.org/wiki/Semantics_(computer_science)#:%7E:text=Semantics%20describes%20the,of%20computation." rel="nofollow noreferrer">wikipedia article on 'semantics (computer science)'</a> defines that semantics is akin to providing a model of computation as an interpretation to the what a valid construct in the language is supposed to do when compared to it.</p>
<hr />
<p>(<em>edit</em>)</p>
<blockquote>
<p>Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or giving an explanation of how the program will be executed on a certain platform, <strong>thereby creating a model of computation</strong>.</p>
</blockquote>
<p>(<em>the 2nd para of the wikipedia article</em>)</p>
<hr />
<p>To me it suggests that both are actually the same thing, as in when we are describing the semantics of a given language we are just mapping the behaviour of the valid constructs of the language to their analogous part in a particular model of computation, and hence any kind of semantics themselves are just some kind of models of computation.</p>
<p>Is this thinking correct, or perhaps am i misinterpreting, and that this analogy fails at certain point, in sense that there is a difference between the two concepts ? Regardless of the answer, i couldn't find any material that discusses the question explicitly, whether or not they are just the same thing, in context of programming languages.</p>
<p>One motivation behind this view of mine is that - i have never seen any explicit semantical analysis of turing machines, they are self-sufficient in their definitions, no doubt we can translate them to any other equivalent models of computation but there certainly doesn't seems to be notion of having to assign explicit meaning to turing machines....</p>
|
<p>I am not sure I understand your question, yet I will give an answer to how the term “semantics” is used in the context of computational models.
Let’s look at one of the fundamental computational models in computer science, which is a non-deterministic finite automaton (NFA, for short):</p>
<p>An NFA <span class="math-container">$A$</span> is defined as a 5-tuple <span class="math-container">$A = (\Sigma, Q, Q_0, \delta, F)$</span>, where <span class="math-container">$\Sigma$</span> is a finite nonempty alphabet, <span class="math-container">$Q$</span> is a finite set of states, <span class="math-container">$Q_0$</span> is a set of initial states, <span class="math-container">$\delta: Q\times \Sigma \to 2^Q $</span> is a transition function, and finally <span class="math-container">$F$</span> is a subset of accepting (or final) states.</p>
<p>If you look at the above paragraph, then you can see that I essentially described “what is” an NFA or what an NFA “looks like”. In other words, I defined the <em>syntax</em> of a nondeterministic automaton. So if I give you a description of a 5-tuple, you can parse it, and tell me whether it describes an NFA or not, similarly to 3-SAT boolean formulas: if you see one you can look at it and tell me “yeah, this is a 3-SAT formula”. Now the NFA model that we’ve defined is just a tuple, a syntax with no meaning attached to it. The next step towards completing the definition of the NFA model is to define its semantics, or more formally, define its language. The semantics of an NFA <span class="math-container">$A$</span> are defined as its language <span class="math-container">$L(A)$</span>: the language of finite words <span class="math-container">$w$</span> such that there exists a run of <span class="math-container">$A$</span> on <span class="math-container">$w$</span> that ends in a state in <span class="math-container">$F$</span>. Now we can take the languages recognized by NFAs, a.k.a. regular languages, and study them or study properties of the model itself.</p>
<p>So the way we basically think about a computational model is as a pair, the model (its syntax), and the language it recognizes (its semantics). Note that one can define the semantics of a nondeterministic automaton differently, here are some notable examples:</p>
<p>1- Universal automata: we can define <span class="math-container">$L(A)$</span> as the language of finite words such that all the runs of <span class="math-container">$A$</span> on them end in a state in <span class="math-container">$F$</span>. Universal automata are “good” at modeling universal properties, while NFAs are good at modeling existential properties.</p>
<p>2- Automata on infinite words: we can define the semantics w.r.t infinite words, and say that the automaton accepts an infinite word iff it has an infinite run that passes through a state in <span class="math-container">$F$</span> infinitely often. The latter are known as Büchi automata.</p>
<p>And the list goes on…
So all these models have the same syntax, yet have different semantics or meaning.
The same applies to Turing machines. You define its syntax, and then you give it a meaning by defining when it recognizes or decides a language.</p>
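<p>The syntax/semantics split can be made concrete in code (a sketch of my own, assuming a total transition function so every run is complete): the very same 5-tuple, read with existential versus universal acceptance, yields two different languages:</p>

```python
def step(delta, cur, a):
    """One synchronous step: union of delta over all current states."""
    return set().union(*(delta.get((q, a), set()) for q in cur))

def reach(nfa, w):
    """States at the ends of all runs of the automaton on w."""
    Sigma, Q, Q0, delta, F = nfa
    cur = set(Q0)
    for a in w:
        cur = step(delta, cur, a)
    return cur, F

def accepts_nfa(nfa, w):          # existential semantics: some run ends in F
    cur, F = reach(nfa, w)
    return bool(cur & F)

def accepts_universal(nfa, w):    # universal semantics: every run ends in F
    cur, F = reach(nfa, w)
    return cur <= F

# One syntax (the 5-tuple), two semantics.
A = ({'a', 'b'}, {0, 1}, {0},
     {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): {1}, (1, 'b'): {1}},
     {1})
assert accepts_nfa(A, "ba")            # some run reaches state 1
assert not accepts_universal(A, "ba")  # but a run staying in state 0 exists
```

<p>Nothing about the tuple <code>A</code> changes between the two calls; only the acceptance condition, i.e., the semantics, differs.</p>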
<p>Hope that answered your question (!)</p>
| 489
|
language modeling
|
Contracts for Java Bytecode
|
https://cs.stackexchange.com/questions/13121/contracts-for-java-bytecode
|
<p><strong>Introduction</strong><br>
For a paper I need contracts, which are also referred to as Design by Contract (DbC)<a href="http://www.eclipse.org/forums/index.php?t=thread&frm_id=157" rel="nofollow">1</a>, and conceptually go back to Hoare[2]. For my work I need to apply contracts to Java bytecode. The question is only about using contracts on Java bytecode, nothing more.</p>
<p>If you ask, why do you want to use contracts on Java bytecode, there is a reason. The reason is that I use a Java extension. This Java extension takes source code consisting of extended Java code, from which it generates bytecode. I want to use this Java extension and additionally I want to use contracts. The extension doesn't support contracts. So I have my extended Java code, I apply to it the compiler of the Java extension and I get Java bytecode. On this resulting bytecode I want to apply contracts. </p>
<p>The Java extension is called Object Teams[4], also called eclipse object teams. The Java extension allows to use dynamic collaborations and aspects, which are not part of regular Java code. </p>
<p>Right now I only need a hack or workaround to get one or two small examples working.</p>
<p><strong>My idea so far:</strong><br>
This is irrelevant to the question. But it shows that my current plan is cumbersome and I need a better solution (thus the question). Also, perhaps someone has worked with JML/OpenJML and might know how to do this plan but better. </p>
<p>There is a tool for contracts for the Java language: the Java Modeling Language (JML) [3][5]. The tool does not support contracts for Java bytecode; it only supports contracts on Java source code. I want to rewrite part of the tool to support contracts on Java bytecode (a solution to my question, but a cumbersome one).</p>
<p>Currently there is only one working implementation of the Java Modeling Language (JML), which is called OpenJML [6]. I plan to use OpenJML on Java bytecode (remember, OpenJML doesn't support contracts for bytecode, only contracts for Java source code). I don't want to use contracts on arbitrary bytecode; I want to use contracts on bytecode that is generated by a Java extension. (The Java extension is called Object Teams, but this is irrelevant for the question.)</p>
<p>(Annotations mean annotations that are part of the Java language, e.g., "@Overrides".)</p>
<p>The approach would be to search for specific annotations in the bytecode that is generated by a Java extension. Those specific annotations specify contracts. I have to look for the specific annotations in the bytecode (bytecode that is generated by a Java extension) and compile them to Java guard expressions, i.e., code that throws an exception if the contracts are not honored. I can get the compiled Java guard expression by using the library behind OpenJML. Then I need to add the bytecode of the Java guard expression at the place where I found the specific annotations that represent contracts.</p>
<p><strong>My question:</strong><br>
Using OpenJML seems possible, but difficult to achieve. Right now I just need a solution that works for one or two small examples, so the solution can be a hack or just a temporary solution. As my approach via OpenJML is probably difficult to achieve and further will take quite some time, I look for different solutions.</p>
<p><strong>Question:</strong> <em>What options would you recommend to apply contracts (design by contract) for java bytecode?</em></p>
<p>Requirements: </p>
<ol>
<li>an up-to-date tool or library (i.e., not a project whose development
ceased three years ago)</li>
<li>ideally, a tool or library that is straightforward to use
(i.e., not a project with zero or almost no documentation)</li>
<hr>
<p><strong><em>Update / Clarification:</em></strong> </p>
<ul>
<li>I forgot to mention that I don't need any form of static checking. I
only need to check that the contracts hold during run time. Further,
I only need to use very basic contracts (pre- and postconditions,
invariants) applied to only a few methods of my program.</li>
<li>As Dave has pointed out in the comments, there may be another option
to apply contracts together with Object Teams. This would mean
modifying or extending Object Teams and the CoreJDT module of Object
Teams (it's in the comments).</li>
<li>I also got <a href="http://www.eclipse.org/forums/index.php?t=thread&frm_id=157" rel="nofollow">a reply from the creator of Object Teams</a> that using JML and Object Teams together via JDT isn't an easy fix.</li>
</ul>
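<p>Since only runtime checking is needed, the essence of any of these approaches is weaving guard code around method entry and exit. The guard logic itself is tiny; here is a hypothetical sketch in Python purely for illustration (the real version would be injected as JVM bytecode by whichever tool is chosen):</p>

```python
def contract(pre=None, post=None):
    """Runtime DbC guard: raise if a pre- or postcondition is violated."""
    def wrap(f):
        def guarded(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise AssertionError("precondition violated in %s" % f.__name__)
            result = f(*args, **kwargs)
            if post is not None and not post(result, *args, **kwargs):
                raise AssertionError("postcondition violated in %s" % f.__name__)
            return result
        return guarded
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) ** 2)
def isqrt(x):
    return int(x ** 0.5)

assert isqrt(9) == 3
```

<p>Translated to the bytecode setting, the decorator corresponds to inserting the guard instructions at the annotated method's entry and before its return instructions.</p>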
<p>References:</p>
<p><a href="http://www.eclipse.org/forums/index.php?t=thread&frm_id=157" rel="nofollow">1</a> B. Meyer. Applying Design by Contract. Computer, 25(10):40{51, 1992.</p>
<p>[2] C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1(4):271–81, 1972.</p>
<p>[3] Lilian Burdy, Yoonsik Cheon, David R. Cok, Michael Ernst, Joseph R. Kiniry, Gary T. Leavens, K. Rustan M. Leino, and Erik Poll. An overview of JML tools and applications. International Journal on Software Tools for Technology Transfer (STTT), 7(3):212–232, 2005.</p>
<p>[4] <a href="http://www.eclipse.org/objectteams/" rel="nofollow">http://www.eclipse.org/objectteams/</a> </p>
<p>[5] <a href="http://www.eecs.ucf.edu/~leavens/JML//index.shtml" rel="nofollow">http://www.eecs.ucf.edu/~leavens/JML//index.shtml</a> </p>
<p>[6] <a href="http://jmlspecs.sourceforge.net/" rel="nofollow">http://jmlspecs.sourceforge.net/</a></p>
|
<p>I recommend you look at <a href="http://bml.mimuw.edu.pl/" rel="nofollow">BML</a>. It is like JML, but for Java bytecode. It allows you to specify contracts (preconditions, postconditions, data structure invariants) at the bytecode level. I think the tools <a href="http://zls.mimuw.edu.pl/~alx/umbra/" rel="nofollow">Umbra</a>, <a href="http://www-sop.inria.fr/everest/soft/Jack/jack.html" rel="nofollow">JACK</a>, and <a href="http://www.kindsoftware.com/products/opensource/Mobius/" rel="nofollow">the Mobius program verification environment</a> support BML, and the Mobius project is building tools that work with BML. See, e.g., the following papers:</p>
<ul>
<li><p><a href="http://wwwhome.ewi.utwente.nl/~marieke/bml_tools.pdf" rel="nofollow">BML and related tools</a>. Jacek Chrzaszcz, Maricke Huisman, ALeksy Schubert. FMCO 2008.</p></li>
<li><p><a href="http://www.mimuw.edu.pl/~alx/umbra/casestudy/bmlcasestudy.pdf" rel="nofollow">Verification and certification of Java classes using BML tools</a>. Jacek Chrzaszcz, Aleksy Schubert, and Tadeusz Sznuk.</p></li>
<li><p><a href="http://www-sop.inria.fr/everest/personnel/Mariela.Pavlova/bmlFASE07.pdf" rel="nofollow">Preliminary Design of BML: A Behavioral Interface Specification Language for Java bytecode</a>. Lilian Burdy, Marieke Huisman, and Mariela Pavlova. FASE 2007.</p></li>
<li><p><a href="http://zls.mimuw.edu.pl/%7Ealx/umbra/bml-bytecode.pdf" rel="nofollow">Technical Aspects of Class Specification in the Byte Code of Java Language</a>. Aleksy Schubert, Jacek Chrzaszcz, Tomasz Batkiewicz, Jaroslaw Paszek, Wojciech Was. Elsevier Science.</p></li>
<li><p><a href="ftp://ftp-sop.inria.fr/everest/Marieke.Huisman/fmco06.pdf" rel="nofollow">JACK: a tool for validation of security and behaviour of Java applications</a>. Gilles Barthe, Lilian Burdy, Julien Charles, Benjamin Gregoire, Marieke Huisman, Jean-Louis Lanet, Mariela Pavlova, and Antoine Requet. FMCO 2007.</p></li>
</ul>
<p>You might also look at BCSL and JVer, which are intended for verifying Java bytecode, and <a href="http://zls.mimuw.edu.pl/~alx/jml2bml/" rel="nofollow">JML2BML</a>, which translates JML to BML:</p>
<ul>
<li><p><a href="http://www-sop.inria.fr/everest/soft/Jack/doc/papers/lm05.pdf" rel="nofollow">Java Bytecode Specification and Verification</a>. Lilian Burdy, Mariela Pavlova.</p></li>
<li><p>JVer: A Java Verifier. A Chander, D. Espinosa, N. Islam, P. Lee, G. Necula. CAV 2005.</p></li>
<li><p><a href="http://zls.mimuw.edu.pl/%7Ealx/jml2bml/jml2bml.pdf" rel="nofollow">Supplementing Java Bytecode with Specifications</a>. Jedrzej Fulara, Krzysztof Jakubczyk, and ALeksy Schubert.</p></li>
</ul>
<p>You might also want to ask on the JML mailing list, as they might have other recommendations/suggestions.</p>
<p>Other possibly related work: JINJA (Tobias Nipkow), Claire Quigley's work. And, you might look at the proceedings of the <a href="http://costa.ls.fi.upm.es/bytecode13/" rel="nofollow">BYTECODE</a> workshop over the past decade, as it presents research on verification and analysis of bytecode (including Java bytecode).</p>
<p>Caveat/disclaimer: Be warned that you might not find an equivalent of JML for Java bytecode that's as convenient and easy-to-use and well-supported as JML. The reason is that contracts are designed to be used and understood by programmers. The natural place to put contracts is on the source code, because that's a lot easier for programmers to use; putting contracts on bytecode is a more niche requirement.</p>
| 490
|
language modeling
|
Mathematical model for a webpage layout?
|
https://cs.stackexchange.com/questions/35490/mathematical-model-for-a-webpage-layout
|
<p>Getting layout right (even if only a structure is considered) with HTML5/CSS3 is still more like an art or black magic.</p>
<p>On the other hand, there are other GUI systems (like wxWindows and Tcl/Tk) and some GUI research (like The Auckland Layout Model, ALM, and <a href="https://hal.inria.fr/hal-00953333/PDF/intuilayout.pdf" rel="nofollow">other methods</a>), which hint at the possibility of formalization for the layout managers (geometry managers).</p>
<p>Are there any comprehensible formal models for HTML5/CSS which provide an ultracompact (abstract) way to describe the structure, "physics" and "geometry" of resizeable webpages, using a language of blocks? HTML/CSS could then be generated from such a model, behaving more or less as described in standard browsers. Conversely, a model could be derived from given HTML/CSS (browsers do this with their algorithms, so this seems theoretically possible).</p>
<p>By "ultracompact" and abstract it is understood: much more compact than HTML/CSS and also more domain-oriented, "speaking" the language of webpage's dynamics in response to resizing or changed content, that is, higher level than HTML/CSS constructs.</p>
<p>For an analogy, it is possible to write a program to make a textual search, based on some complex rules, but the same task can be performed by a much more compact regular expression. So, is there similar compact language for HTML/CSS layout?</p>
<p>The goals of such a model could be:</p>
<ul>
<li>to verify existing design (model checking)</li>
<li>to build robust design given higher level specifications</li>
<li>to check whether a set of requirements is consistent with HTML5/CSS3 engine (e.g., does not require writing javascript to make an adjustment too complex for the declarative languages)</li>
<li>to be a solid platform for even higher level research on qualities ("to check the harmony with algebra.")</li>
</ul>
<p>Also it could be a language to use for certain GUI-related abstractions, like is usual in programming language domain, where we do not need to use concrete syntax to express an idea of for-loop and we do have all kinds of nice, proven results about main concepts of algorithmic constructions.</p>
<p>Of course, web-browsers possess algorithmic model for rendering, e.g. popular and simplified description can be found <a href="http://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Render_tree_construction" rel="nofollow">here</a>, but as pointed above it does not have the properties listed above.</p>
|
<p>At the time of writing, I am not aware of any formal (or at least formalized enough) models for layouts in HTML. All the examples (ALM, etc.) indicate that usually the model is created to guide a layout manager's creation. In the case of HTML and web browsers the development is evolutionary, and while each web engine contains some sort of algorithmic model to render HTML, nobody (that I know of) has published the underlying math, and nobody has "reverse engineered" the program code to extract the model.</p>
<p>It should also be noted, that the model could have grown very complex, so any effort to build one retrospectively may be quite costly.</p>
<p>Even though it is not done yet, it does not mean it is impossible. As more rigorous approaches are being applied to web programming (e.g., the recent release of Ur/Web - a typesafe approach spanning from database to client-side interactions), modelling HTML5 layout management may be on the horizon, especially as we witness a more formal approach in the recommendations for new HTML5/CSS features (for example, <a href="http://www.w3.org/TR/css3-flexbox/" rel="nofollow">http://www.w3.org/TR/css3-flexbox/</a>), because having a formal model may help prove the consistency of the additions (analogously to a formal grammar, which greatly facilitates making additions to the programming language syntax).</p>
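<p>For a taste of what a formal core might look like: ALM-style approaches reduce layout to systems of linear constraints over tab-stop positions. The sketch below is purely illustrative (the pane names, widths and gap are invented for the example), showing how a window resize becomes a re-solve of trivial linear constraints:</p>

```python
# ALM-style layout as linear constraints over tab-stop positions.
# Illustrative sketch only: "sidebar", "content", the 200px width and
# the 10px gap are made-up parameters, not part of any real system.
def layout(window_width, sidebar_width=200, gap=10):
    # constraints: sidebar.left  = 0
    #              sidebar.right = sidebar_width
    #              content.left  = sidebar.right + gap
    #              content.right = window_width
    sidebar = (0, sidebar_width)
    content = (sidebar_width + gap, window_width)
    return {"sidebar": sidebar, "content": content}

print(layout(1024))  # {'sidebar': (0, 200), 'content': (210, 1024)}
```

<p>Resizing the window means re-evaluating the same constraints with a new <code>window_width</code>; only the content pane moves, which is exactly the kind of "physics under resize" the question asks to express compactly.</p>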
| 491
|
language modeling
|
Detecting palindromes in binary numbers using a finite state machine
|
https://cs.stackexchange.com/questions/32081/detecting-palindromes-in-binary-numbers-using-a-finite-state-machine
|
<p>In my first algorithms class we're creating these patterns that are supposed to model a finite state machine. We were given a task to think about whether we can figure out a way to detect palindromes in binary sequences (no points if we do, it's just food for thought).</p>
<p>I specifically asked the professor, knowing a little about CS and that palindromes aren't regular, and that a finite state machine can only detect a regular language. But his answer surprised me, since he said that it is indeed possible and that he thinks we should be able to come up with a solution.</p>
<p>This brings two questions:</p>
<ol>
<li>Maybe the binary sequence is a special type of palindrome that is regular? (I'm a little fuzzy on this)</li>
<li>Or the technique we're using to represent the state machine is more powerful than I think.</li>
</ol>
<p>In case it's 2), I'll try to explain how we're representing the problem.</p>
<p>Imagine a finite wall, which is supposed to be filled with predefined types of tiles. Each tile has four colors</p>
<p><img src="https://i.sstatic.net/7qBa6.png" alt="tile"></p>
<p>You can design any tile you want, and as many as you want, but there has to be a finite number of tiles. They can be arranged in a single row, or into multiple rows, but the topmost row has to always match against the <code>0</code> and <code>1</code> colors. The end colors of the wall also have to be defined ahead of time, and the tiles have to match those, and they also have to match the adjacent tiles.</p>
<p>Here's an example of a pattern that detects a sequence of <code>01010101...01</code></p>
<p><img src="https://i.sstatic.net/E8Uif.png" alt="wall"></p>
<p>The question is, is this pattern more than just modeling a finite state machine? If not, how can I use this to detect palindromes?</p>
<p><strong>Update: There has to be a finite number of tile designs, and the number of tiles has to be finite as well (the input will always be finite as well). As for the rows, there can be an arbitrary number of rows, the only condition is that the tiles on a second row must match in the top color with the tiles on the row above them. The number of colors isn't limited as well, though it can't be infinite.</strong></p>
|
<p><strong>In a nutshell</strong>: <em>As presented, with a single row of tiles, the tiling system is equivalent to a finite state automaton. It cannot recognize the set of palindromes which is context-free, but not regular.
However, if the tiling system is extended, allowing as many rows as needed (possibly with the addition of a column on each side), then it becomes as powerful as a linear bounded automaton, recognizing context-sensitive languages, and thus also palindromes. The last section is a simple set of tiles to recognize palindromes.</em></p>
<h2>Recognizing palindromes with a single row of tiles</h2>
<p>Regarding your tiling system, I am missing some details. Is the number of tiles finite, or just the number of different tile designs? More precisely, while the number of designs can be finite, i.e. the same for all sequences to be recognized, the number of tiles of each design should be sufficient, which may depend on the sequence to be recognized, though each recognition would use only a finite number of tiles.</p>
<p>If the number of tiles is finite, less than some fixed number $n$ that is independent of the sequence to be recognized, you can at best recognize finite sets of sequences, which is much less than all regular languages.</p>
<p>Second point: can the number of different colors be set at any value, the same for all sequences of the language to be recognized? If not, you cannot recognize all regular languages.</p>
<p>If you have any number of tiles, finite number of designs, any number
of colors, that is indeed equivalent to finite state automata, where
the colors stand for the states, and the tiles stand for the
transitions.</p>
<p>I am assuming you have only a single row of tiles, with the blue at bottom, as seems to be implied by your drawing.</p>
<p>I do wonder whether it helps understanding. Maybe so?</p>
<p>As you said, palindromes do not form a regular set. One disputable
intuitive explanation is that palindrome recognition implies counting,
and finite state machines cannot count. But there are formal ways of
proving that.</p>
<p>The language of palindromes is actually a <a href="http://en.wikipedia.org/wiki/Context-free_language" rel="nofollow noreferrer">Context-Free (CF)
language</a>. Context-free languages form a strict superclass of the regular
languages recognized by finite state automata. So any regular
language is context-free, but the converse is false. For example, the
language of palindromes is CF but not regular.</p>
<p>Thus, <strong>palindromes cannot be recognized with a single row of tiles.</strong></p>
<h2>What more could be said.</h2>
<p>Finite state automaton (FSA) is implicitly the name of a device that
has only a finite number of states, used to control a reading head
that reads input from left to right on a tape, without ever leaving the
input string area, and never writes.</p>
<p>Finite state machines are usually considered as doing the same, except
that some can also write on an output tape.</p>
<p>If we are not too attached to established terminology, we could try to
relax some of these constraints, while keeping the finite number of
states.</p>
<p>A first attempt could be to allow the head to move in any direction.</p>
<p>That gives you what is called a <a href="http://en.wikipedia.org/wiki/Two-way_deterministic_finite_automaton" rel="nofollow noreferrer">two-way finite state automaton</a>. These
seem more powerful, but it can be proved that they do no more than
FSAs (whether deterministic or not).</p>
<p>Another possibility is to allow the automaton to overwrite the tape it
is reading, but still without ever leaving the area that was occupied
by the input string. This is called a <a href="http://en.wikipedia.org/wiki/Linear_bounded_automaton" rel="nofollow noreferrer">linear bounded automaton
(LBA)</a>. The LBA is actually one of the most powerful automata there
is. They recognize all the <a href="http://en.wikipedia.org/wiki/Context-sensitive_language" rel="nofollow noreferrer">context-sensitive (CS) languages</a>, which
include the CF languages.</p>
<p>The problem is that they are so powerful that they are difficult to
control, analyze or use. But they will recognize palindromes with a
finite number of states.</p>
<p>I have been excluding other types of automata which do have a finite number of states for control, but use unlimited memory, which one could perceive as an unbounded number of states.</p>
<h2>Extending the tiling system</h2>
<p>In <a href="https://cs.stackexchange.com/questions/32081/detecting-palindromes-in-binary-numbers-using-a-finite-state-machine/32083#32083">FrankW's answer</a>, it is shown that, by extending the tiling system
with several rows, one can recognize palindromes. He has there a very
interesting idea, which I am trying to push here.</p>
<p>It can be pushed further. If you allow an arbitrary number of rows,
and add a column on each side, it seems that the tiles can actually
mimic a linear bounded automaton. Hence, it becomes a very powerful
computational system.</p>
<p>I am saying "it seems" because I did not go through all the tedious
details of the construction, but only tried to convince myself.</p>
<p>However, rather than go through even the basic aspects of the construction, which are already complex, I
will rely on existing results in automata theory.</p>
<p>A row of tiles may be seen as the configuration of a one-dimensional
bounded cellular automaton (BCA). Columns represent the evolution of individual cells.</p>
<p>The colors of left and right sides of tiles represent the information exchanged
between adjacent cells, while the top and bottom colors represent the state
of the cells before and after transitions.</p>
<p>So it seems that a BCA can be simulated by the extended tiling system.</p>
<p><a href="http://www.sciencedirect.com/science/article/pii/0020025576900220" rel="nofollow noreferrer">David Milgram showed in 1976</a> that BCA can simulate a LBA.</p>
<p>Hence the extended tiling system can simulate a LBA.</p>
<p>The extended tiling system is therefore a very powerful computational
system, that can recognize context sensitive languages.</p>
<p>Hence <strong>the extended tiling system can recognize palindromes</strong>, among many other things.</p>
<p>Now, I am not giving you the details of the recipe to recognize palindromes and other things in this way, simply because it is very complicated, and no one would read it anyway, if I were able to write it without bugs.</p>
<h2>A set of tiles to recognize palindromes</h2>
<p>The general construction is far too complex to be used. However, here
is a simple set of tiles to recognize palindromes on the alphabet
$\{0,1\}$ as requested.</p>
<p>As one might expect it is symmetrical.</p>
<p>R and B stand for red and blue</p>
<p>Matching leftmost and rightmost symbol 1, and removing them</p>
<pre><code> 1 1 0 1
R 1 1 1 1 1 1 R
R 1 0 R
</code></pre>
<p>Matching leftmost and rightmost symbol 0, and removing them</p>
<pre><code> 0 1 0 0
R 0 0 0 0 0 0 R
R 1 0 R
</code></pre>
<p>Filling in the sides with fully red tiles</p>
<pre><code> R
R R
R
</code></pre>
<p>Creating the bottom line in blue</p>
<pre><code> R
R R
B
</code></pre>
<p>But this has nothing to do with finite state machines, as far as I can tell.</p>
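<p>For readers who prefer code to tiles: the reduction strategy the rows above encode - match the outermost symbols and strip them - can be written as a short loop (a plain program, of course, not a finite state machine):</p>

```python
# The tile rows each match the leftmost and rightmost symbols and
# remove them; the same reduction as an ordinary loop over a binary string.
def reduces_to_empty_or_single(s):
    while len(s) > 1:
        if s[0] != s[-1]:   # no tile matches: the wall cannot be completed
            return False
        s = s[1:-1]         # one "row" of tiles strips both ends
    return True             # empty or single symbol left: palindrome

print(reduces_to_empty_or_single("0110"))  # True
print(reduces_to_empty_or_single("011"))   # False
```

<p>Each iteration of the loop corresponds to one row of the wall, which is why the number of rows needed grows with the input - the resource that a single-row (finite-state) system lacks.</p>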
| 492
|
language modeling
|
How do you represent LISP as mathematical / logical model?
|
https://cs.stackexchange.com/questions/79642/how-do-you-represent-lisp-as-mathematical-logical-model
|
<p>I asked this in stackoverflow, but the question probably fits here better.</p>
<p>This question arose from the objection that LISP is regarded as a functional language with some simple principles, namely functions, variables, and operators that roughly correspond to predicates, propositional variables, and connectives in logic respectively.</p>
<p>How would you define recursive structure and "data is a function is a data" paradigm central in LISP as a mathematical model? How about list processing feature? Can we derive LISP from simple mathematical foundations starting from propositional logic up to some specific factored logic?</p>
<p>Surely there must be a model, not only for LISP, but many similar "old" languages. So, I'd like to know, what are the necessary concepts of mathematics to define LISP-like language.</p>
<p>Point is not on what paradigm LISP fits the best, but to find out the simplest mathematical model for the language in question.</p>
<hr>
<p>Lately, I found this paper of McCarthy (1960) which may answer the question: <a href="http://www-formal.stanford.edu/jmc/recursive.pdf" rel="nofollow noreferrer">http://www-formal.stanford.edu/jmc/recursive.pdf</a></p>
<p>Logic, Arithmetic and Automata by Church (1962) also seems a useful source: <a href="http://www.mathunion.org/ICM/ICM1962.1/Main/icm1962.1.0023.0058.ocr.pdf" rel="nofollow noreferrer">http://www.mathunion.org/ICM/ICM1962.1/Main/icm1962.1.0023.0058.ocr.pdf</a></p>
|
<p>"Old" programming languages like Fortran, Cobol and LISP arose <em>before</em> serious mathematical theory of programming languages was developed. They <em>inspired</em> the development of such theory, but are full of idiosyncrasies and features which from a mathematical point of view can best be described as "warts". However, each of the old languages <em>in essence</em> has a mathematical core. We can extract their cores and see what mathematical models those have.</p>
<p>For LISP the core would be a small subset of Scheme. Let us focus just on the list processing part, and forget about the fact that in LISP and Scheme one can change state with <code>setq</code> and the like.</p>
<p>A small core for LISP could contain the atoms, S-expressions and functions, by which we mean some basic functions, $\lambda$-abstractions, and recursive functions. There are several ways to model this much of LISP, but perhaps the neatest is by <a href="https://en.wikipedia.org/wiki/Domain_theory" rel="noreferrer">domain theory</a>.</p>
<p>Consider the following simplified statement about (a core of) LISP. Every value is</p>
<ol>
<li><em>either</em> an atom (a literal),</li>
<li><em>or</em> a cons,</li>
<li><em>or</em> a function.</li>
</ol>
<p>Let us try to express this idea mathematically. We should find a set $D$ of <em>LISP values</em> which</p>
<ol>
<li>contains the set $A$ of all atoms (whatever they are)</li>
<li>contains $D \times D$ because every cons is a pair of values (car and cdr)</li>
<li>contains $D \to D$ because "functions are data"</li>
</ol>
<p>We can express this idea as a requirement that
$$A + D \times D + (D \to D) \subseteq D.$$
Unfortunately, this is not possible because $D \to D$ is larger than $D$, unless $D$ contains just one element (but there are many atoms).</p>
<p>However, there is a way out: the set $D \to D$ of functions is larger than $D$ <em>only</em> if we take <em>all functions</em> -- but we only need to consider the <em>computable</em> functions, or some slightly larger set of them. Indeed, as was shown by Dana Scott a long time ago, we should look for a <em>space</em> $D$ (rather than a <em>set</em>) and then take $D \to D$ to be the space of <em>continuous functions</em> (because every computable map is continuous). As it turns out, there are several kinds of "spaces" that we could use, all of which have in common that they capture the idea of <em>information content</em> and <em>information processing</em>. I cannot go into technical details on how this is done, but you can read about <a href="https://en.wikipedia.org/wiki/Domain_theory" rel="noreferrer">domain theory</a> to find out more.</p>
<p>Let us just think about what $D$ must contain. Of course, it contains various atoms, including an element corresponding to <code>nil</code>. Then, it contains all finite lists of elements, since a list <code>(x1 ... xn)</code> is just a lot of nested pairs <code>(x1, (x2, ..., (xn, nil))</code>. Next, it contains lots and lots of functions: anything that can be defined by a <code>lambda</code>, and with a bit of care also anything that can be defined by recursion.</p>
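<p>A rough programming rendering of this three-way classification (in Python rather than LISP, with names of my own choosing): a value is an atom, a cons of two values, or a function on values, and lists are nested pairs ending in <code>nil</code>:</p>

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Atom:
    name: str

@dataclass
class Cons:
    car: "Value"
    cdr: "Value"

# Every value is an atom, a cons, or a function:
# the informal equation  A + D×D + (D -> D) ⊆ D.
Value = Union[Atom, Cons, Callable]

NIL = Atom("nil")

def lst(*xs):
    """Build (x1 ... xn) as nested pairs (x1 . (x2 . ... (xn . nil)))."""
    v = NIL
    for x in reversed(xs):
        v = Cons(x, v)
    return v

# "functions are data": a lambda sits in a list next to an atom
mixed = lst(Atom("a"), lambda v: Cons(v, NIL))
```

<p>This is only the set-theoretic surface, of course - the domain-theoretic content is in restricting the function component to continuous maps, which plain Python types cannot express.</p>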
<p>In summary, the answer to your question is that <em>the mathematical essence of LISP is a space $D$ which contains the atoms, its own product $D \times D$ and its own function space $D \to D$</em>.</p>
<p>The stuff about logic and propositional connectives, you should forget that on the first round of studying mathematical models of LISP. Maybe come back to it later, and look up something like <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">Curry-Howard correspondence</a> and <a href="https://en.wikipedia.org/wiki/Realizability" rel="noreferrer">realizability</a>.</p>
| 493
|
language modeling
|
procedures and immutable data to simulate return values
|
https://cs.stackexchange.com/questions/44412/procedures-and-immutable-data-to-simulate-return-values
|
<p>Let's say I have a programming language that allows procedures, i.e., methods without return values, and immutable data-structures, so no sideeffecting inside a procedure. Is it possible to simulate a program written in a language with return values in our language?</p>
<p>In other words, do return values allow for a more advanced model of computation? I guess it may be possible with Continuation-passing style, but i would appreciate other thoughts on this.</p>
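<p>For concreteness, the continuation-passing style mentioned above can be sketched as follows (function names invented for the example): every procedure takes an extra "continuation" argument and "returns" by invoking it, so no procedure ever produces a value itself:</p>

```python
# Simulating return values with procedures in continuation-passing style:
# instead of returning, a procedure hands its "result" to a continuation.
def add(a, b, k):
    k(a + b)          # "return a + b" becomes "pass a + b to k"

def square(x, k):
    k(x * x)

def sum_of_squares(a, b, k):
    # computes a*a + b*b without any procedure ever returning a value
    square(a, lambda a2:
        square(b, lambda b2:
            add(a2, b2, k)))

results = []
sum_of_squares(3, 4, results.append)
print(results)  # [25]
```

<p>The mutation of <code>results</code> here is only for displaying the answer; the continuation itself could equally well print, write to a channel, or chain into further procedures.</p>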
| 494
|
|
language modeling
|
Characterising $(aa)^*$ in first order logic
|
https://cs.stackexchange.com/questions/14545/characterising-aa-in-first-order-logic
|
<p>In my descriptive complexity class, we've been asked to find a formula that characterises the language $(aa)^*$ (over the alphabet $\{a\}$) with a first order formula over the language $\{<, P_a\}$.</p>
<p>This was the first class, so I will recall what we've learned to be sure that I understood. To a $L$-formula $\phi$ we associate a language $\mathcal L(\phi)$ which is the class of all $L$-structures in which $\phi$ is valid.</p>
<p>In my case, we then are looking for a $\{<, P_a\}$-formula for which words of even length are models. I guess I have to say in $\phi$ that $<$ is a total order, so that I can interpret the models as words, and that $\forall x, P_a(x)$ to say that all points are labelled as 'a'. But how to say that there has to be an even number of points in the model? The definition of having an even number of points seems recursive, so I get the impression that a formula for $(aa)^*$ should be of infinite length in first-order logic..</p>
|
<p><strong>Short answer</strong>. There is no such first-order formula, you need a monadic second order formula.</p>
<p><strong>Details</strong>. This can be proved directly using an Ehrenfeucht-Fraïssé game argument if you want to stay inside logic, but the real answer to your question is the conjunction of three results.</p>
<p>[1] Büchi (1960): A language is monadic second order expressible iff it is regular.<br>
[2] McNaughton-Papert (1971): A language is first-order expressible iff it is star-free. <em>Counter-free Automata</em>. Research Monograph 65. With an appendix by William Henneman. MIT Press. p. 48. ISBN 0-262-13076-9.<br>
[3] Schützenberger (1965): A regular language is star-free if and only if its <a href="http://www.google.fr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDYQFjAA&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FSyntactic_monoid&ei=vus_UvPvI6O90QXi7ICYDQ&usg=AFQjCNGryneZWcePiSelwMvu6lxhoo0nSw&sig2=AvdYaFr7DceHNGfba7JqIg&bvm=bv.52434380,d.d2k" rel="nofollow">syntactic monoid</a> is aperiodic. <a href="http://igm.univ-mlv.fr/~berstel/Mps/Travaux/A/1965-4TrivialSubgroupsIC.pdf" rel="nofollow">On finite monoids having only trivial subgroups</a>.</p>
<p>[3] gives an algorithm to decide whether a given regular language is star-free (and hence, first-order expressible, by [2]). Now, the syntactic monoid of $(aa)^*$ is the cyclic group of order $2$, which is not aperiodic. To get a monadic second order formula, you need first to have first order "macros" to express $\min$, $\max$ and $+1$ and then the following second order formula will do the job
$$
\exists X\ \forall x\ (\min \in X \wedge \max \notin X) \wedge (x \in X \leftrightarrow x+1 \notin X)
$$</p>
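<p>Schützenberger's criterion from [3] can also be checked mechanically. The sketch below (an illustration, not library code) builds the transition monoid of the minimal DFA for $(aa)^*$ and tests aperiodicity, i.e. whether every element $m$ satisfies $m^n = m^{n+1}$ for some $n$:</p>

```python
def compose(f, g):
    # (f ∘ g)(q) = f[g[q]]; transformations are tuples indexed by state
    return tuple(f[q] for q in g)

def transition_monoid(letters, n_states):
    """Generate the transformation monoid of a DFA from its letter actions."""
    ident = tuple(range(n_states))
    elems = {ident}
    frontier = [ident]
    while frontier:
        m = frontier.pop()
        for a in letters:
            nm = compose(a, m)
            if nm not in elems:
                elems.add(nm)
                frontier.append(nm)
    return elems

def is_aperiodic(monoid):
    # aperiodic iff the powers m, m^2, m^3, ... of every element stabilize
    for m in monoid:
        seen, p = [], m
        while p not in seen:
            seen.append(p)
            p = compose(p, m)
        if seen.index(p) != len(seen) - 1:  # powers cycle instead of stabilizing
            return False
    return True

# minimal DFA for (aa)*: two states, and the letter a swaps them
a_action = (1, 0)
M = transition_monoid([a_action], 2)
print(is_aperiodic(M))  # False: the monoid is the cyclic group of order 2
```

<p>The powers of the swap never stabilize ($a, a^2 = 1, a^3 = a, \dots$), so the monoid contains a non-trivial group and $(aa)^*$ is not star-free, hence not first-order expressible.</p>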
| 495
|
language modeling
|
Is there an always-halting, limited model of computation accepting $R$ but not $RE$?
|
https://cs.stackexchange.com/questions/11936/is-there-an-always-halting-limited-model-of-computation-accepting-r-but-not
|
<p>So, I know that the halting problem is undecidable for Turing machines. The trick is that TMs can decide recursive languages, and can accept Recursively Enumerable (RE) languages.</p>
<p>I'm wondering, is there a more limited model of computation which accepts only recursive languages, and not RE? And if so, is there such a model which is always guaranteed to halt?</p>
<p>Obviously this model would be strictly less powerful than TMs and strictly more powerful than PDAs.</p>
<p>I'm open to a machine-style model, or a lambda-calculus style model.</p>
<p>As an example of what I'm thinking: the Coq language has a restriction that for any self-recursive calls, the first argument must be strictly decreasing in "size" i.e. if it is a natural number, it must be smaller, if it is a list, it must be shorter, etc. This guarantees that it always halts, but I have no idea if you can compute all of R this way.</p>
|
<p>Yes, there are as many models of R as there are of RE! Take a model of RE, and restrict it to the total elements of the model. For example, take Turing machines that halt. Or take total recursive functions. Or take your favorite programming language (idealized to remove memory limitations) but in addition to requiring that the source code be syntactically valid, also require that the program halt on every input.</p>
<p>The catch is that, since the halting problem is undecidable, for any model of R, given a recursive syntax for the elements, there cannot be any algorithm to decide whether a candidate element is valid. For example, a normal programming language has syntactic rules, and sometimes a type system, to decide whether a program is well-formed; the parser or type checker implements a decision procedure to verify that the source code is an element of the language. If you want a programming language that is a model of R rather than RE, there is no way to decide whether some source code is a valid program.</p>
<p>Coq only allows a subset of all recursive functions:
$\mathsf{Coq} \subsetneq \mathsf{R} \subsetneq \mathsf{RE}$. Both bounds of this inequality chain have decidable models but the middle item doesn't. Intuitively speaking, Coq only contains recursive functions whose termination can be proved by sufficiently simple arguments. While “sufficiently simple” covers just about anything mathematicians do, it is still very limited in a theoretical sense. (More precisely, Coq's theory is equivalent to I think the Peano axioms with a schema for recursion that goes up to a certain ordinal, but at that point it gets beyond my comprehension.)</p>
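<p>To illustrate the kind of recursion Coq accepts: each recursive call must be on a structurally smaller argument, so termination is evident without running the program. A Python rendering of the idea (Python itself enforces no such check, of course):</p>

```python
# Structural recursion in the style Coq's termination checker accepts:
# the recursive call is on a strictly "smaller" argument (a shorter list),
# so the function is total by construction.
def total_length(xs):
    if not xs:
        return 0
    return 1 + total_length(xs[1:])  # xs[1:] is structurally smaller

print(total_length([3, 1, 4]))  # 3
```

<p>A function like the Collatz iteration, whose argument neither shrinks structurally nor along any obvious measure, would be rejected by such a checker even though it may well terminate - which is exactly how Coq trades away part of R for guaranteed totality.</p>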
| 496
|
language modeling
|
Prove that a model of infinite tapes is stronger then turing machine model
|
https://cs.stackexchange.com/questions/75014/prove-that-a-model-of-infinite-tapes-is-stronger-then-turing-machine-model
|
<p>I want to prove that if we have a model, $A$, with an unbounded number of tapes, then
$A$ is stronger than the Turing machine model. Can you help me with an example of a language on which a Turing machine $M$ will fail but $A$ will give an output? Thanks.</p>
|
<p>It is a standard proof that Turing machines with any finite number of tapes are equivalent to single-tape Turing machines, since you can code any fixed number of tapes on a single tape.</p>
<p>If you allow a Turing machine to have an infinite number of tapes, then you can decide any language $L$. To do this, first copy the input $x_1\dots x_\ell$ so that the $i$th tape contains the symbol $x_i$ in its first cell. Now move all the tape heads back to position $1$ and enter some special state $q_\mathrm{decide}$. Recall that the transition function decides what to do based on the state the machine is in at the moment, and what is under each of the infinitely many heads. So just define it to accept from state $q_{\mathrm{decide}}$ if there is some $k$ such that it sees $w_1, \dots, w_k$ under the first $k$ heads and $w_1\dots w_k\in L$, and every other head sees the blank character; otherwise, it rejects.</p>
<p>Note that there is a philosophical problem with writing down the transition function if $L$ is undecidable, but the function certainly exists.</p>
| 497
|
language modeling
|
Why full Chomsky hierarchy is so detailed, if there are decidable recursive languages?
|
https://cs.stackexchange.com/questions/102305/why-full-chomsky-hierarchy-is-so-detailed-if-there-are-decidable-recursive-lang
|
<p>One can have a look at the Chomsky hierarchy <a href="https://en.wikipedia.org/wiki/Chomsky_hierarchy" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Chomsky_hierarchy</a>, especially the inset named "Automata theory: formal languages and formal grammars" at the bottom of the page. When one tries to model natural language (e.g. as <a href="http://www.grammaticalframework.org/" rel="nofollow noreferrer">http://www.grammaticalframework.org/</a> tries to do), one usually aims for the most expressive formal language that is still decidable, and recursive languages are exactly such languages - they are the most expressive decidable languages. But GrammaticalFramework instead narrows the scope and uses mildly context-sensitive languages for its modelling. Why is that so? When one elaborates the hierarchy of languages, what compromises is one trying to make?</p>
<p>As far as I understand from my studies, the complexity issue is the single most important one. One can try to use a recursive language, but recognition and parsing of such languages are very slow. Or maybe there are different reasons? Maybe there are not even algorithms to convert a recursive language into a total Turing machine (and vice versa), or maybe there are not even algorithms for parsing a recursive language. So, why not use recursive languages? Due to a lack of algorithms? Or due to the nonpolynomial/exponential complexity of those algorithms? And that is why the hierarchy is so elaborate - one can try to find the most expressive language with the best complexity properties.</p>
<p>It is somewhat different with logics. In logics one can tolerate the complexity issue, but one is required to make compromises between expressibility and decidability. For recursive languages recognition is decidable, and so one should seek compromises between expressibility and complexity. Am I right?</p>
<p>The key point of my question is this: <strong>what obstacles prevent the practical use of recursive languages and why one is required to elaborate hierarchy of languages and to use less expressive languages?</strong> Complexity? Lack of algorithms? Something different? </p>
| 498
|
|
language modeling
|
Why "Choice Points" introduce non-determinism in a program?
|
https://cs.stackexchange.com/questions/108817/why-choice-points-introduce-non-determinism-in-a-program
|
<p>I'm studying the didactic programming language <strong>Oz</strong>, following the book "Concepts, Techniques, and Models of Computer Programming".</p>
<p>In the book, the nondeterminism is introduced through the concept of <strong>choice</strong>, as it's explained <a href="https://en.wikipedia.org/wiki/Nondeterministic_programming" rel="nofollow noreferrer">here</a>.
However, in the Oz language, if the programmer calls a choice between alternatives A, B, C, alternative A is always tried first, and consequently (in case of backtracking) then B and then C. So the choice happens deterministically, since I know a priori how the choice tree will be constructed.
So what's the link between choice and nondeterminism?</p>
<p>I also have another question. In the book, the authors state that if I want to introduce the choice concept in my computation model I am obliged to use a stateful (non-declarative) computation model. Why?</p>
|
<p>Nondeterminism is an inherently unphysical concept. If you think about the various definitions of nondeterministic computation, they always say something like one of the following:</p>
<ul>
<li><p>the computation is structured as a tree and, if any of the paths through the tree succeeds, the computation succeeds, regardless of how many failures there are on other paths;</p></li>
<li><p>the computer "magically" considers all the options in parallel, even though there might be exponentially many of them;</p></li>
<li><p>the computer only considers one option but it "magically" knows which one will lead to success, if any of them will, and chooses that one.</p></li>
</ul>
<p>In principle, the "choice" operator achieves nondeterminism – at least, it would, if we could implement it using one of the above schemes. Unfortunately, in the real world, we don't know how to do that, so we have to pick an option, see if it works out, and backtrack if it doesn't. Maybe we could do something a bit smarter than just trying the options in order but, at the end of the day, we don't know any efficient way of implementing nondeterminism on a real, deterministic, computer (which is why we don't know whether <span class="math-container">$\mathrm{P}=\mathrm{NP}$</span>).</p>
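<p>The deterministic backtracking simulation is easy to phrase in ordinary code. In the sketch below (names invented for the example), the nested loops are the "choice points": options are explored in a fixed left-to-right order, and a failed branch simply falls through to the next option - which is exactly why the programmer can predict the search order a priori:</p>

```python
# Deterministic simulation of "choice": each loop is a choice point,
# options are tried in a fixed order, and a failing branch backtracks
# by falling through to the next option. Real nondeterminism would
# "magically" jump to a successful branch; here we just enumerate.
def pairs_summing_to(target, options=(1, 2, 3, 4)):
    for a in options:          # first choice point
        for b in options:      # second choice point (backtracked over first)
            if a + b == target:
                yield (a, b)   # a success leaf of the search tree

print(next(pairs_summing_to(5)))  # (1, 4): the leftmost success in the tree
```

<p>Asking for further elements of the generator resumes the search exactly where it left off, mirroring how Oz re-enters a choice point on backtracking.</p>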
| 499
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.