Dataset schema:
category: string (107 classes)
title: string (length 15–179)
question_link: string (length 59–147)
question_body: string (length 53–33.8k)
answer_html: string (length 0–28.8k)
__index_level_0__: int64 (0–1.58k)
natural language processing
Complexity of natural language processing problems
https://cs.stackexchange.com/questions/32276/complexity-of-natural-language-processing-problems
<p>Which natural language processing problems are NP-Complete or NP-Hard?</p> <p>I've searched the <a href="/questions/tagged/natural-lang-processing" class="post-tag" title="show questions tagged &#39;natural-lang-processing&#39;" rel="tag">natural-lang-processing</a> and <a href="/questions/tagged/complexity-theory" class="post-tag" title="show questions tagged &#39;complexity-theory&#39;" rel="tag">complexity-theory</a> tags (and related complexity tags), but have not turned up any results.</p> <p>None of the NLP questions that are recommended are helpful; the closest are the following:</p> <ul> <li><p><a href="https://cs.stackexchange.com/questions/25925/why-is-natural-language-processing-such-a-difficult-problem">https://cs.stackexchange.com/questions/25925/why-is-natural-language-processing-such-a-difficult-problem</a></p></li> <li><p><a href="https://cs.stackexchange.com/questions/9920/how-is-natural-language-processing-related-to-artificial-intelligence">https://cs.stackexchange.com/questions/9920/how-is-natural-language-processing-related-to-artificial-intelligence</a></p></li> <li><p><a href="https://cs.stackexchange.com/questions/21334/what-aspects-of-linguistics-are-necessary-or-good-for-natural-language-processin">What aspects of linguistics are necessary or good for natural language processing?</a></p></li> </ul> <p>The <a href="http://en.wikipedia.org/wiki/List_of_NP-complete_problems#Formal_languages_and_string_processing" rel="nofollow noreferrer">Wikipedia list of NP-complete problems</a> does not list any complexity results for NLP.</p> <p>The only lead I've found is the paper <a href="http://www.aclweb.org/anthology/O95-1007" rel="nofollow noreferrer"><em>Theoretical and Effective Complexity in Natural Language Processing</em></a> by J. Morin (1995).</p> <p>Any help or pointers are appreciated!</p>
<p>LFG (Lexical-Functional Grammar) <a href="http://ato.ms/MITECS/Entry/dalrymple" rel="nofollow noreferrer">recognition is NP-Complete</a>.</p> <p>Edit per request: Lexical-Functional Grammar (LFG) [1] is a theory of natural language syntax, developed as an alternative to Chomsky's theories of transformational syntax. Some versions of Chomsky's theories are computationally equivalent to Unrestricted Grammars. LFG by contrast provides a grammar formalism which consists of a context-free grammar augmented by a feature system. </p> <p>It's the feature system that's NP-complete. The proof works basically by noticing first that the feature system is at least as powerful as propositional logic, and second that grammaticality rests on satisfying all the propositional constraints governing the sentence. So it's the Satisfiability Problem hiding under another guise.</p> <p>[1] "Lexical-Functional Grammar: A Formal System for Grammatical Representation" by Ronald M Kaplan and Joan Bresnan. The paper originally appeared in <em>The Mental Representation of Grammatical Relations</em>, ed. Joan Bresnan (Cambridge, MA: The MIT Press, 1982).</p>
0
natural language processing
Advantages of knowing foreign languages for natural language processing
https://cs.stackexchange.com/questions/13833/advantages-of-knowing-foreign-languages-for-natural-language-processing
<p>I wonder about cases in which knowing several languages can lead a researcher to interesting results in natural language processing.</p> <p>Knowledge of foreign languages can without doubt contribute to better machine translation; it's the most obvious example.</p> <p>In what other fields of NLP can a researcher benefit from knowing several languages?</p>
1
natural language processing
What aspects of linguistics are necessary or good for natural language processing?
https://cs.stackexchange.com/questions/21334/what-aspects-of-linguistics-are-necessary-or-good-for-natural-language-processin
<p>What aspects of linguistics are necessary or good to know for natural language processing? What references do you recommend for studying those aspects? Thanks!</p>
<p>NLP is a big field; you might want to be more specific.</p> <p>Within information retrieval, <a href="https://en.wikipedia.org/wiki/Stemming" rel="nofollow">stemming</a> is a linguistic idea that has become useful as a heuristic means of reducing vocabulary size. As a practitioner I learned about it from <a href="http://nlp.stanford.edu/IR-book/" rel="nofollow">An Introduction to Information Retrieval</a>.</p>
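To make the stemming idea concrete, here is a toy suffix-stripping stemmer. This is a drastically simplified, hypothetical rule set, not the actual Porter algorithm; real stemmers use ordered rule phases and measure conditions.

```python
# Toy suffix-stripping stemmer: a hypothetical, heavily simplified
# illustration of the idea behind Porter-style stemming.
SUFFIXES = ["ational", "ization", "ness", "ing", "es", "s"]

def toy_stem(word: str) -> str:
    # Try longer suffixes first; keep at least a 3-letter stem.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print(toy_stem("connections"))  # -> connection
print(toy_stem("running"))     # -> runn
```

Note that `toy_stem("running")` yields `runn`, not `run`: stems need not be real words, which is exactly why stemming is useful only as a heuristic for collapsing vocabulary, not as a linguistic analysis.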
2
natural language processing
Natural language query processing
https://cs.stackexchange.com/questions/33430/natural-language-query-processing
<p>I am trying to implement a natural language query preprocessing module which would, given a query formulated in natural language, extract the keywords from that query and submit it to an IR (information retrieval) system.</p> <p>At first, I thought about using some training set to compute <a href="http://en.wikipedia.org/wiki/Tf%E2%80%93idf" rel="nofollow">TF-IDF</a> values of terms and use these values for estimating the importance of single words. But on second thought, this does not make any sense in this scenario - I only have a training collection but I don't have access to the indexed IR data. Would it be reasonable to only use the IDF value for such estimation (is IDF enough to establish the weight of a term in general)? Or maybe another weighting approach?</p> <p>Could you suggest how to tackle this problem? Usually, the articles about NLP that I read talk about training and test data sets. But what if I only have the query and training data?</p>
<p>I don't know the relative sizes or the nature of your problem; these details could completely change the picture. But generally, if the only thing you have at your disposal is a small set of test documents, I would not recommend using that for term weighting at all. A small set of documents would give you an illusion of coverage: a skewed set of weights that would not accurately cover the domain. If you do have general access to the IR system, though, you could try using the system itself for obtaining statistics by hitting it with random words and recording the number of search results, for example.</p>
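The probing idea at the end can be sketched as follows. The per-term result counts and the collection size here are made up for illustration, as if they had been collected by querying the IR system one term at a time and recording the number of search results:

```python
import math

def idf_from_hit_counts(hit_counts, total_docs):
    """Turn per-term result counts from an IR system into IDF weights.

    hit_counts maps term -> number of matching documents reported by
    the system; total_docs is the (estimated) collection size. Terms
    the system has never seen get the maximum weight.
    """
    return {
        term: math.log(total_docs / df) if df > 0 else math.log(total_docs)
        for term, df in hit_counts.items()
    }

# Hypothetical counts obtained by probing the system term by term.
counts = {"the": 9500, "retrieval": 120, "transducer": 4}
weights = idf_from_hit_counts(counts, total_docs=10000)
```

Rare terms end up with the largest weights, which is the behavior one wants for keyword extraction even without access to the index itself.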
3
natural language processing
Semantic natural language processing - from texts to logical expressions? Universal knowledge base?
https://cs.stackexchange.com/questions/68398/semantic-natural-language-processing-from-texts-to-logical-expressions-univer
<p>My question is - <strong>is there a semantic natural language processing that tries to understand the meaning of the texts and that tries to derive the consequences of the understood meaning? Is there a universal knowledge base that can be used for the "grounding" of the texts?</strong></p> <p>I have heard a lot about statistical NLP and NLP with neural networks but those approaches are not scalable, are not exact and are not satisfying. Is there (and if not - then why) a semantic NLP and semantic natural language understanding that tries to translate the texts into logical formulas? Today we have a vast array of logical formalisms - rigorous as well as nonmonotonic, adaptable, fuzzy, probabilistic and so on. So - if we have those logics why don't we translate texts into them?</p> <p>And if texts are translated into logical formulas then the universal knowledge base (KB) can be built. This KB should be used as reference KB. I.e. if some text contains the phrase "logical continuum" then this KB should contain all the possible definitions (expressed as logical formulas) of this phrase (according to the different texts, authors) and one should be able to reason over those definitions, to use definitions for applying this phrase into the new text (possibly - computer generated texts), to use definitions for deriving more general or more special notions with relevant definitions, to form completely new concepts. There can be two types of terms in this KB: abstract ideas and concrete real world facts, like the country Belgium and so on.</p> <p>I am aware of the existence of some KBs like ConceptNet and WordNet (not really useful) and Cyc/OpenCyc (formal KB with reasoning capabilities) and there is also the very promising framework <strong>OpenCog</strong> which (fortunately) has an interesting reasoning engine, but (unfortunately) lacks a public knowledge base for experiments. 
OpenCog is really interesting because it unifies probabilistic and rigorous reasoning: each concept (Atom in their terms) has a strength/probability value, and if those values converge to 1, then probabilistic reasoning tends to rigorous classical-logic-style reasoning. But that's all.</p> <p>So - <strong>is there such a notion as semantic NLP and are there endeavours to create a universal knowledge base for the semantic interpretation of any text</strong>? Are there ongoing projects in this field?</p>
<p>There are several implementations of <a href="http://www.cs.utexas.edu/~ml/publications/area/77/learning_for_semantic_parsing" rel="nofollow noreferrer">semantic parsers</a> that convert natural-language texts into formal logical representations of their meanings. <a href="https://en.wikipedia.org/wiki/Natural_language_understanding" rel="nofollow noreferrer">Natural language understanding</a> systems can also be based on <a href="https://en.wikipedia.org/wiki/Discourse_representation_theory" rel="nofollow noreferrer">discourse representation theories</a> that represent the meanings of English texts using first-order logical predicates.</p> <p>I have found at least one system that is able to generate a knowledge base from statements that are given in a natural language. The <a href="http://attempto.ifi.uzh.ch/acewiki/" rel="nofollow noreferrer">ACE Wiki</a> is based on <a href="https://en.wikipedia.org/wiki/Attempto_Controlled_English" rel="nofollow noreferrer">Attempto Controlled English</a>, which is a semantically unambiguous subset of the English language.</p>
4
natural language processing
Constraint satisfaction problems in Natural language processing
https://cs.stackexchange.com/questions/157042/constraint-satisfaction-problems-in-natural-language-processing
<p>I have just started learning about CSP and NLP, for which I have to write a review paper of some research articles.</p> <p>The problem is that when I searched for research articles on some trusted digital libraries with the keywords CSP and NLP, there weren't many results that were related to both.</p> <p>Do you have any idea where I should start? Or could you tell me a specific use case of CSP in NLP on which I can focus?</p> <p>Thank you in advance for your responses.</p>
5
natural language processing
Does programming language detection need more input than natural language detection?
https://cs.stackexchange.com/questions/28668/does-programming-language-detection-need-more-input-than-natural-language-detect
<p>I wonder which one of the two needs a larger input to achieve a decent accuracy: <br> programming language detection or natural language detection?</p> <hr> <p>More details:</p> <p>Definition of <a href="http://en.wikipedia.org/wiki/Language_identification" rel="nofollow">Language detection</a>: </p> <blockquote> <p>In natural language processing, language identification or language guessing is the problem of determining which natural language given content is in. Computational approaches to this problem view it as a special case of text categorization, solved with various statistical methods.</p> </blockquote> <p>The question I was asking can be written a bit more formally as: let $x$ be a substring from some text $X$ written in natural language, and $y$ a substring from some source code $Y$ written in a programming language. Assume $X$ and $Y$ are each written in one language only (natural language or programming language).</p> <p>Let $f(X)$ be the size of $x$ so that on average (i.e. trying on a bunch of different $X$) I correctly predict the language with accuracy $p$. Does $f(X) &lt; f(Y)$ or $f(X) &gt; f(Y)$ ?</p>
<p>Answering this question with confidence would require experiments. I am sure there is some data for natural languages, where it is a common problem. I recall from memory that one study gave ridiculously small figures for natural language, which is not too surprising. If you take 5 consecutive words in a sentence (the figure I recall, without being sure), there is a good chance that one of them belongs to a single language, and even more that the fragment can syntactically belong to only one language (parsing fragments without context is possible with existing technology). To me the problem is not so much the size of the input as the size of the recognition program and its data. There is a compromise there. Actually my guess is that keeping all the relevant data is far too costly, and that the actual techniques are statistical ones, such as checking n-grams of letters (see <a href="http://en.wikipedia.org/wiki/Language_identification" rel="nofollow">Wikipedia</a>), which are extremely effective.</p> <p>Regarding programming languages, the problem is a bit different. The size of the vocabulary is ridiculously small, and identifiers do not give any indication (or very little from the allowed morphology: a language could forbid the use of dashes inside identifiers, for example). Furthermore, what fixed vocabulary there is (keywords) is often the same in many programming languages. However, programming languages have a very strict syntax, which will certainly distinguish them rather quickly. It is not so much the length of the fragment as the kind of fragment. A long succession of assignments might look the same in many programming languages. But I would not venture any figure, and I am not even sure statistics would make sense.</p> <p>Then there is the issue of ancestry. A fragment of Pascal may look very much like a fragment of Algol 60 or Simula 67. 
Is American English to be distinguished from British or Australian English?</p> <p>To conclude, without any hard factual knowledge:</p> <p>The problem should be stated with a word regarding the space cost (and possibly time cost) of the identification program.</p> <p>Identification for natural language is essentially morphologically or lexically based, and will use statistical techniques if space costs are to be acceptable. They can recognize fairly short sequences (a few words, as I recall) with good accuracy.</p> <p>Identification for programming languages is essentially syntax based, and probably needs larger fragments in number of tokens, in order to have enough syntactic substance, despite the intentional similarities between programming languages. But it can probably be 100% accurate, without excessive size of the identification program. I would however be more confident if I had actual data to back my guesswork. I do not know of any work on this topic.</p> <p>Considering only fragments is not an issue. It is obviously not an issue when only lexical information is used. It is not an issue either when syntactic information dominates, as the technology to parse fragments is working well.</p> <hr> <p><strong>Afterthoughts, after the question was completed.</strong></p> <p>One minor remark concerns the concept of substring size: is it measured in characters, in bytes, in lexical elements? Size of character encoding is variable. Characters may carry diacritical marks. Lexical elements have different average size in natural and programming languages.</p> <p>A more important remark concerns the mode of measurement. Natural language identification will use statistical methods to avoid the problem of natural language's huge specification. 
Hence the answer is accordingly accurate with some probability that may depend on substring length (different techniques produce different types of mis-detection).</p> <p>In the case of programming languages, the specifications are small enough that they can probably be used exactly. Hence the answer could possibly be always 100% exact at acceptable cost. The problem would not be with the detection procedure inaccuracies as in natural language, but in the question itself. If a string does not discriminate two languages because it can belong to both, no amount of technology will help you solve the problem. In such a case, the detection software should not guess, which would be meaningless, but it should just list, with a 100% accuracy, all the programming languages the substring can belong to.</p> <p>In other words, the case of natural language and the case of programming languages are very different technically. I am not sure it makes sense to compare them.</p>
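The letter n-gram approach mentioned above for natural language identification can be sketched in a few lines. This is a toy illustration with two tiny, made-up training texts; real systems train on large corpora and use smoothed, normalized statistics:

```python
from collections import Counter

def ngram_profile(text, n=3):
    # Character n-gram counts of a (lowercased) text.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(profile_a, profile_b):
    # Unnormalized dot product over shared n-grams; crude but adequate
    # for this toy example.
    shared = set(profile_a) & set(profile_b)
    return sum(profile_a[g] * profile_b[g] for g in shared)

# Tiny hypothetical "training corpora", one per language.
training = {
    "english": ngram_profile("the quick brown fox jumps over the lazy dog "
                             "this is a sentence in the english language"),
    "german":  ngram_profile("der schnelle braune fuchs springt ueber den "
                             "faulen hund dies ist ein satz in deutscher sprache"),
}

def guess_language(fragment):
    profile = ngram_profile(fragment)
    return max(training, key=lambda lang: similarity(profile, training[lang]))
```

Even with such minuscule training data, short fragments are classified correctly, which matches the answer's point that statistical n-gram methods identify natural languages from very little input.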
6
natural language processing
Question on word probability for hierarchical softmax used in natural language processing
https://cs.stackexchange.com/questions/94228/question-on-word-probability-for-hierarchical-softmax-used-in-natural-language-p
<p>I am reading the following paper: <a href="https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf" rel="nofollow noreferrer">https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf</a></p> <p>On page 4 of the paper they describe the hierarchical softmax which is intended to reduce the computational complexity (I believe only during training time) of training a neural network to learn word vectors. The hierarchical softmax output layer is a balanced binary tree.</p> <p>Here is the description given in the paper for computing $p(w\vert w_I)$ (where $w_I$ is the input word or set of words, and $w$ is the missing word we are trying to predict):</p> <p>More precisely, each word $w$ can be reached by an appropriate path from the root of the tree. Let $n(w,j)$ be the $j$-th node on the path from the root to $w$, and let $L(w)$ be the length of this path, so $n(w,1)=root$ and $n(w,L(w))=w$. In addition, for any inner node $n$, let $ch(n)$ be an arbitrary fixed child of $n$ and let $\lbrack x \rbrack$ (can't figure out how to generate brackets used in paper) be $1$ if $x$ is true and $-1$ otherwise. Then the hierarchical softmax defines $p(w_O \vert w_I)$ as follows:</p> <p>$$p(w\vert w_I) =\prod_{j=1}^{L(w)-1}\sigma(\lbrack n(w,j+1)=ch(n(w,j))\rbrack \cdot v_{n(w,j)}'^T v_{w_I})$$</p> <p>where $\sigma(x) = 1/(1+\exp(-x))$.</p> <p>Now, I understand that we are using the sigmoid function to essentially "squish" the dot products of our two vector arguments into values between $0$ and $1$ (i.e. probabilities). But I don't understand the use of the indicator function in this equation. Going down the tree I feel like we should be multiplying probabilities at each branch, and somehow I know the indicator function must be "steering" us down the tree, but I cannot tell how. 
Intuitively, I feel like it would be more appropriate to have an indicator function that outputted $\sigma(x)$ or $(1-\sigma(x))$ based on left or right turns.</p>
<p>I have figured out my confusion.</p> <p>The indicator function $\lbrack x \rbrack$ outputs a $-1$ or a $1$ depending on the argument being false or true, respectively. </p> <p>When the authors say "let $ch(n)$ be an arbitrary fixed child of node $n$", they mean arbitrary in the sense of left child or right child. But in order for our function to work, we must fix this arbitrary choice. So, let us assume $ch(n)$ outputs the left child of node $n$.</p> <p>Then as we go down our tree, if $\lbrack n(w,j+1) = ch(n(w,j))\rbrack = 1 $ it means the next node on our path is to the left of the parent node $n(w,j)$ and if $\lbrack n(w,j+1) = ch(n(w,j))\rbrack = -1$ it means that the next node is to the right of $n(w,j)$.</p> <p>As a result, since we defined $ch(n)$ to indicate the left child, all left probabilities are calculated as $\sigma(v_{n(w,j)}'^T\cdot v_{w_I})$ since $\lbrack n(w,j+1) = ch(n(w,j))\rbrack = 1 $. Now we know that the probability on the right must then be $1 -\sigma(v_{n(w,j)}'^T\cdot v_{w_I})$. This is where I was confused since the function outputs $\sigma(-v_{n(w,j)}'^T\cdot v_{w_I})$. But let $a=v_{n(w,j)}'^T\cdot v_{w_I}$ and we can verify that $\sigma(-a)=1-\sigma(a)$.</p> <p>We have $\sigma(a) = \frac{1}{1+e^{-a}}$, now observe that</p> <p>$\begin{align*} 1 - \sigma(a) &amp;= \frac{1+e^{-a}}{1+e^{-a}}-\frac{1}{1+e^{-a}}\\ &amp;=\frac{e^{-a}}{1+e^{-a}}\\ &amp;=\frac{1}{e^a}\times \frac{1}{1+e^{-a}}\\ &amp;=\frac{1}{1+e^a}\\ &amp;=\frac{1}{1+e^{-(-a)}}\\ &amp;=\sigma(-a). \end{align*}$</p> <p>Hence the function $$p(w\vert w_I) =\prod_{j=1}^{L(w)-1}\sigma(\lbrack n(w,j+1)=ch(n(w,j))\rbrack \cdot v_{n(w,j)}'^T v_{w_I})$$</p> <p>simply multiplies probabilities along the branches corresponding to the path from the root node to the proper leaf node $w$.</p>
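The identity $\sigma(-a) = 1 - \sigma(a)$, and the consequence that the path products form a proper probability distribution over the leaves, are easy to check numerically. This is a sketch: in a real tree each inner node has its own parameter vector, but one arbitrary score per depth is enough to illustrate the factorization:

```python
import itertools
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Identity used in the derivation: sigma(-a) == 1 - sigma(a).
for a in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert abs(sigmoid(-a) - (1.0 - sigmoid(a))) < 1e-12

# Toy depth-3 tree with one arbitrary score per depth (a hypothetical
# simplification of the per-node dot products v'_{n(w,j)}^T v_{w_I}).
scores = [0.7, -1.3, 2.1]

# Sum the path products over all 2^3 left/right turn sequences
# (turn = +1 for the ch(n) child, -1 for the other child).
total = sum(
    math.prod(sigmoid(turn * a) for turn, a in zip(turns, scores))
    for turns in itertools.product([1, -1], repeat=3)
)
# total is 1.0 up to floating-point error, because at each inner node
# sigmoid(a) + sigmoid(-a) = 1: hierarchical softmax really defines a
# probability distribution over the leaves.
```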
7
natural language processing
Where can I find a study on the amount of different word frequencies in a corpus?
https://cs.stackexchange.com/questions/33631/where-can-i-find-a-study-on-the-amount-of-different-word-frequencies-in-a-corpus
<p>For a natural language processing problem, I need to know how many different values the frequency of a given word in a corpus can take. Is there any study or site that has such an estimate?</p>
<p>What you're asking directly refers to <a href="http://en.wikipedia.org/wiki/Information_retrieval" rel="nofollow">Information Retrieval</a>. Information retrieval builds models for exactly such things as the frequency of words in a corpus.</p> <p>In a corpus, different indicators are estimated in order to create a score. Two indicators are predominant in those models:</p> <ul> <li><strong>Inverse document frequency (idf)</strong></li> </ul> <p>This model aims at attenuating words that appear too often and are considered irrelevant to judging the score of a word.</p> <p>$idf_{t} = \log\frac{N}{df_{t}}$, where $N$ stands for the total number of documents in the collection and $df_{t}$ for the number of documents in the collection that contain the term $t$.</p> <ul> <li><strong>Term frequency (tf)</strong></li> </ul> <p>This is the most basic way to model word frequency in a corpus:</p> <p>$tf_{t,d} = \frac{t_{occ}}{N_{d}}$, where $t_{occ}$ is the number of occurrences of the term $t$ in document $d$ and $N_{d}$ is the number of words in the document.</p> <p>More information on this subject can be found here:</p> <p><a href="http://nlp.stanford.edu/IR-book/pdf/06vect.pdf" rel="nofollow">Scoring, term weighting and the vector space model</a></p>
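A minimal sketch of the two quantities over a toy three-document corpus (the sentences are made up purely for illustration, and the raw tf/idf variants are used; many real systems use smoothed or sublinear variants):

```python
import math

# Toy corpus of three made-up documents.
docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats are pets",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)  # number of documents in the collection

def tf(term, doc):
    # Occurrences of `term` in `doc`, over the document length.
    return doc.count(term) / len(doc)

def idf(term):
    # log of N over the document frequency (assumes the term occurs
    # in at least one document).
    df = sum(1 for doc in tokenized if term in doc)
    return math.log(N / df)

def tf_idf(term, doc):
    return tf(term, doc) * idf(term)
```

On this corpus, `tf_idf("mat", tokenized[0])` exceeds `tf_idf("the", tokenized[0])`: the rare word outweighs the frequent one, which is the attenuating effect the answer describes.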
8
natural language processing
How are PCFGs used in programming language design?
https://cs.stackexchange.com/questions/167192/how-are-pcfgs-used-in-programming-language-design
<p>I've been reading the <a href="https://en.wikipedia.org/wiki/Probabilistic_context-free_grammar" rel="nofollow noreferrer">wikipedia article</a> about probabilistic context-free grammars (PCFGs), and they state that</p> <blockquote> <p>PCFGs have application in areas as diverse as natural language processing to the study the structure of RNA molecules and <strong>design of programming languages</strong>.</p> </blockquote> <p>but I'm having a hard time finding examples of their usage in programming language design. So my question is this, what are the uses of PCFGs in programming language design?</p> <p>Any examples or references would be greatly appreciated, thanks in advance :]</p>
<p>I am not aware of applications in programming language design, and I am a bit skeptical. However, PCFGs <em>are</em> useful for generating test cases, for program testing.</p>
9
natural language processing
What is the natural language of computers mathematics or logic?
https://cs.stackexchange.com/questions/48157/what-is-the-natural-language-of-computers-mathematics-or-logic
<p>I was reading about the history of computers when I came across <a href="https://en.wikipedia.org/wiki/Machine_code" rel="nofollow">machine code</a>:</p> <blockquote> <p>A Machine code or machine language is a set of instructions executed directly by a computer's central processing unit (CPU). Each instruction performs a very specific task, such as a load, a jump, or an ALU operation on a unit of data in a CPU register or memory.</p> </blockquote> <p>The computer here does logical operations like load or jump steps using the machine code.</p> <p>From the Wikipedia page a <a href="https://en.wikipedia.org/wiki/Computer" rel="nofollow">computer</a> is </p> <blockquote> <p>A general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically.</p> </blockquote> <p>This means that through machine code the computer can perform logical or arithmetic operations. But which principle are computers based on: the logical principle or the mathematical one?</p> <p>Also, what can be said to be the natural language of a computer?</p> <p>====</p> <p>[Edit] Both answers solved my problem.</p> <p>====</p>
<p>There is really no such thing as a <em>natural language</em> for computers. Natural language is a concept from linguistics, and pertains mostly to humans (and perhaps also to some animals).</p> <p>The corresponding concept for computers is <em>machine language</em> or <em>native code</em>, which is what the computing core of the computer (the CPU) runs. Machine code consists of various instructions, some of which perform arithmetic and logical operations. Others are in charge of control flow, of memory access, and so on.</p> <p>Related to machine language is <em>assembly code</em>, which is a readable representation of machine code. While machine code consists of bytes, assembly code consists of textual symbols. (Assembly code also usually contains some features which make it a barebones programming language.)</p>
10
natural language processing
How to determine agreement between two sentences?
https://cs.stackexchange.com/questions/56828/how-to-determine-agreement-between-two-sentences
<p>A common Natural Language Processing (NLP) task is to determine semantic similarity between two sentences. Has the question of agreement/disagreement between two sentences been covered in NLP or other literature? I tried searching on Google Scholar but didn't get any relevant results.</p>
<p>I would propose doing some research in the field of <strong>Stance Classification</strong>. Given a target claim or argument, we can classify whether a number of sentences are in favor of, against, or neutral toward that claim. So one idea is to extract the topic of the sentences, classify each sentence's stance toward it, and, based on those classifications, determine whether the sentences agree or disagree with each other.</p> <p>Here are some papers you can look at:</p> <p><a href="https://paperswithcode.com/task/stance-classification" rel="nofollow noreferrer">https://paperswithcode.com/task/stance-classification</a></p> <p><a href="https://arxiv.org/abs/1907.00181" rel="nofollow noreferrer">https://arxiv.org/abs/1907.00181</a></p> <p><a href="https://www.mdpi.com/2078-2489/13/3/137" rel="nofollow noreferrer">https://www.mdpi.com/2078-2489/13/3/137</a></p> <p><a href="https://dl.acm.org/doi/abs/10.1145/3488560.3501391" rel="nofollow noreferrer">https://dl.acm.org/doi/abs/10.1145/3488560.3501391</a></p>
11
natural language processing
How to translate lambda calculus into (first-order, modal) logic, is it possible at all?
https://cs.stackexchange.com/questions/82878/how-to-translate-lambda-calculus-into-first-order-modal-logic-is-it-possible
<p>It is possible (using formal semantics) to translate natural language sentences into lambda expressions. So, is it possible to translate those lambda expressions into some logic, e.g. into first-order logic or into modal logic?</p> <p>I am aware of the Curry-Howard correspondence, but I have not found an actual translation; the correspondence is a more conceptual one. I am aware of a translation of lambda expressions into Answer Set Programming <a href="http://costantini.di.univaq.it/pubbls/aspopc10-Costantini.pdf" rel="nofollow noreferrer">http://costantini.di.univaq.it/pubbls/aspopc10-Costantini.pdf</a> and I have heard about translations into combinatory logic.</p> <p>But are there translations into first-order or modal logic?</p> <p>Such a translation would be a big step for natural language processing.</p>
12
natural language processing
Are there any neural NLG systems which don&#39;t generate in left-to-right order?
https://cs.stackexchange.com/questions/99987/are-there-any-neural-nlg-systems-which-dont-generate-in-left-to-right-order
<p>For a while, all classification tasks in natural language processing were based on simple RNNs, which operate in a strictly word-by-word order. Adding gating mechanisms increased the ability to "look back", and the newer addition of context vectors, which can train attention on different words during the task, has made classification of text less about "left-to-right" reading and more about selective focusing.</p> <p>However, I have never seen a seq2seq or any other <strong>natural language generation</strong> system (machine translation, image2seq, etc.) that generates the desired sequential output in non-sequential order. It seems this would be very powerful. Are there any examples of using attention not only in encoders, but also in decoders?</p>
13
natural language processing
Relation and difference between information retrieval and information extraction?
https://cs.stackexchange.com/questions/7181/relation-and-difference-between-information-retrieval-and-information-extraction
<p>From <a href="http://en.wikipedia.org/wiki/Information_retrieval" rel="noreferrer">Wikipedia</a> </p> <blockquote> <p><strong>Information retrieval</strong> is the activity of obtaining information resources relevant to an information need from a collection of information resources. Searches can be based on metadata or on full-text indexing.</p> </blockquote> <p>From <a href="http://en.wikipedia.org/wiki/Information_extraction" rel="noreferrer">Wikipedia</a></p> <blockquote> <p><strong>Information extraction (IE)</strong> is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents. In most of the cases this activity concerns processing human language texts by means of natural language processing (NLP). Recent activities in multimedia document processing like automatic annotation and content extraction out of images/audio/video could be seen as information extraction.</p> </blockquote> <p>What are the relations and differences between information retrieval and information extraction? </p> <p>Thanks!</p>
<p><strong>Information retrieval</strong> is <strong><em>based on a query</em></strong> - you specify what information you need and it is returned in human-understandable form.</p> <p><strong>Information extraction</strong> is about structuring unstructured information - given some sources, <strong><em>all of the (relevant) information</em></strong> is structured in a form that will be easy to process. This will not necessarily be in human-understandable form; it can be solely for the use of computer programs.</p> <p>Some sources:</p> <ul> <li><a href="http://gate.ac.uk/ie/">Information Extraction vs Information Retrieval</a></li> <li><a href="http://acl.ldc.upenn.edu/W/W00/W00-1109.pdf">From Information Retrieval to Information Extraction</a></li> <li><a href="http://www.google.bg/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=2&amp;ved=0CDgQFjAB&amp;url=http://www.stanford.edu/class/cs124/lec/rel.pptx&amp;ei=-vnFUMDmA4eHswaT-4DQCA&amp;usg=AFQjCNHurMBm1jwbFYoPA8XOwIAYVZ5D5g&amp;sig2=7jMJf6n9sVaYFoLgPVxY2A&amp;cad=rja">Information Extraction Stanford PPT</a></li> <li><a href="http://nlp.stanford.edu/IR-book/html/htmledition/boolean-retrieval-1.html">Introduction to Information Retrieval(link to place in the book with definition)</a></li> </ul>
14
natural language processing
Language to Construct Finite State Transducer
https://cs.stackexchange.com/questions/28175/language-to-construct-finite-state-transducer
<p>I am attempting to write a Finite State Transducer module in OCaml, because I think it's a good exercise, as I have been teaching myself Natural Language Processing.</p> <p>You typically construct finite automata using regular expressions, for example (a | b).</p> <p>What language does one typically use to construct Finite State Transducers?</p> <p>You can't use regular expressions alone, because they define only the input strings; I also need some way to map each input to its corresponding output symbol. I have thought about having something like ((a, x) | (b, y)), where the tuple (a, x) pairs the input value a with its corresponding output value x. Would that work in general for constructing FSTs?</p>
<p>Yes, such a specification would work. Basically an FST is a finite state automaton with different labels on the edges. We can go from an FSA to a regex by a standard algorithm. The same method works with transducers; only the labels are now pairs of strings. The concatenation of labels is done component-wise: $(a,x)(b,y) = (ab,xy)$.</p> <p>There are several types of FST. To obtain a general one, both the input and output components of a label are strings, not just letters. It is not difficult to see that, alternatively, one might require that both input and output are either a letter or the empty string.</p> <p>All said, as a "language" I personally prefer the finite state graphs themselves: they are easy to "program".</p>
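As a concrete illustration of the pair-label idea (sketched here in Python rather than OCaml, purely to keep it short), an FST can be stored as a transition map from (state, input symbol) to (next state, output symbol), realizing the spec ((a, x) | (b, y)) from the question:

```python
# A finite state transducer as a dict:
# (state, input_symbol) -> (next_state, output_symbol).
# This realizes ((a, x) | (b, y)): reading 'a' emits 'x', reading 'b'
# emits 'y', looping on a single state.
transitions = {
    ("q0", "a"): ("q0", "x"),
    ("q0", "b"): ("q0", "y"),
}

def transduce(word, start="q0", accepting={"q0"}):
    state, output = start, []
    for symbol in word:
        if (state, symbol) not in transitions:
            return None  # input rejected: no transition for this symbol
        state, out = transitions[(state, symbol)]
        output.append(out)
    return "".join(output) if state in accepting else None

print(transduce("abba"))  # 'xyyx'
print(transduce("abc"))   # None: 'c' has no transition
```

An OCaml version would replace the dict with a `(state * char, state * char) Hashtbl.t` or a pattern-matching transition function; the structure is the same.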
15
natural language processing
Difference between LR parsing and Shift-Reduce parsing?
https://cs.stackexchange.com/questions/68278/difference-between-lr-parsing-and-shift-reduce-parsing
<p>I'm learning natural language processing and I can't understand the difference between a <a href="https://en.wikipedia.org/wiki/Shift-reduce_parser" rel="nofollow noreferrer">Shift-Reduce parser</a> and an <a href="https://en.wikipedia.org/wiki/LR_parser" rel="nofollow noreferrer">LR parser</a>.</p> <p>As I've understood from Wikipedia, shift-reduce is just the name of a class of parsing algorithms which includes LR, LALR, SLR and others. But some <a href="http://www3.cs.stonybrook.edu/~cse304/Fall08/Lectures/lrparser-handout.pdf" rel="nofollow noreferrer">articles</a> describe shift-reduce parsing as if it were a separate algorithm. So what is the difference between the shift-reduce parsing algorithm and the LR parsing algorithm?</p>
16
natural language processing
Natural Language Parser that can handle syntactic and lexical errors
https://cs.stackexchange.com/questions/33514/natural-language-parser-that-can-handle-syntactic-and-lexical-errors
<p>I have some background in natural language processing and I know that all parsers (top down, bottom up, or mixed), at least when I studied just a few years ago, cannot handle errors. A small error, like a grammatical or spelling one, will result in an unexpected parse tree.</p> <p>This is unacceptable in natural language in most cases. Thus I have been trying to find a way to build a new one with a different approach.</p> <p>The basic general abstract idea is that I will use a top-down dynamic programming approach. Given a string of text with $n$ tokens, several top-down fillers will be generated. These fillers will look at the tokens to see if they can find and fill the constituents that they are missing. Because of this, these fillers might leave gaps after they have found everything they need. This is supposed to make the parser more robust.</p> <p>An example will best illustrate this idea:</p> <p>Given the sentence: <code>I saw the ordinary thing</code>.</p> <p>One top-down filler can be $S \rightarrow Subj - Verb - Object$. This filler will try to look for spans that it can use to fill its expectation of seeing a $Subject$ followed by a $Verb$ and an $Object$. This means it will deploy three other fillers in sequence. The first one is $Subj$. This filler will scan and add to the cache three possible subjects, which are $I$, $saw$, $thing$. $I$ is put in span $[1,1]$, $saw$ in span $[2,2]$, $thing$ in span $[5,5]$. This will result in a total of three potential pending parse trees. Then, for each of these pending parse trees, a $Verb$ filler is deployed to scan the span after each possible subject. An $Object$ filler is deployed to scan the rest.</p> <p>With the above approach, sentences such as <code>I .. eh... saw the big thing</code> or similar constructs do not cause problems, because fillers look for what they need and fill it into the tree. Ranking among the candidates is dealt with after all fillers have completed. 
Fillers that leave a lot of gaps (unused tokens) will not generate parses with scores as high as those generated by fillers that use up all tokens.</p> <p>This is also my approach to dealing with subject-verb agreement, as well as male-female and singular-plural agreement. You deal with them at the ranking stage, so that you can give your parser much better error tolerance. Sentences such as <code>Maybee they ehh can get something</code> can still be parsed. One resulting parse will just not use <code>Maybee</code>. The top parses will then be used again, this time to look for unused tokens. Unused tokens will be processed with spelling correction, did-you-mean style. One can see how it works with incorrect sentences like <code>This is a valide argument</code>. Even incorrect sentences like <code>They did got it</code> are still parsed OK.</p> <p>There will be other fillers which cannot find all they need, such as the conditional sentence filler $CondS \rightarrow "If" - S - ["then"] - S$. Some fillers, such as the imperative $ImpS \rightarrow ["Please"] - Verb - Object$, will complete most of the time because they can find all they need, albeit leaving gaps; but then it is a ranking problem to make sure that the correct one is returned.</p> <p><strong>So my question</strong> is:</p> <ul> <li><p>Has anyone ever thought of this approach? Any reference papers?</p></li> <li><p>If nobody has used it before, what may be the potential problems?</p></li> </ul>
<p>I assume you already know about dynamic programming parsing, aka chart parsing. This is usually defined for Context-Free grammars (CFG), but can be extended to other grammatical formalisms, where it can make more or less sense, depending on the structural complexity of these algorithms. There are various papers describing chart parsing for specific formalisms, particularly in the computational linguistics literature. One general view of the underlying structure common to all these algorithms, described in a <a href="https://www.academia.edu/798690/Recognition_can_be_harder_than_parsing" rel="nofollow noreferrer">1995 paper by Lang</a> and to be found in the <a href="http://dickgrune.com/Books/PTAPG_2nd_Edition" rel="nofollow noreferrer">Grune-Jacobs book</a>, relies on a very simple view of parsing as the intersection of two languages: the first is the singleton regular language containing the sentence $w$ to be parsed, and the second is the language $L$ for which we are given a grammar $G$. The idea is that a single sentence $w$ can always be read as an FSA (or a regular grammar), as in the following example for the sentence $abac$: $$q_0 \stackrel{a}\longrightarrow q_1 \stackrel{b}\longrightarrow q_2 \stackrel{a}\longrightarrow q_3 \stackrel{c}\longrightarrow q_f$$</p> <p>Using this FSA $A_w$ and the grammar $G$ of the language $L$ (assume first $L$ to be a CF language, and $G$ a CFG), the old cross-product construction due to <a href="http://www.jstor.org/discover/10.2307/25000219?uid=3738016&amp;uid=2129&amp;uid=2&amp;uid=70&amp;uid=4&amp;sid=21104717280497" rel="nofollow noreferrer">Bar-Hillel, Perles and Shamir (1961)</a> to prove closure of CFLs under intersection with regular sets can be used with $A_w$ and $G$, and yields a new CF grammar $G_w$, which naturally generates only $w$ when $w\in L$, or the empty language $\emptyset$ when $w\notin L$. 
The important point is that, when $w\in L$, the grammar $G_w$ generates $w$ with exactly the same parse trees as the original grammar $G$, up to a renaming of non-terminals, though the correspondence between non-terminals is kept (let us ignore details). In other words, we just described a parser that yields a parse forest $G_w$ from which all the parse trees for $w$ can be extracted, simply by using the grammar $G_w$ as a generator.</p> <p>As it turns out, the dynamic programming chart parsers are just optimized variants of this very basic construction called <em>parsing as intersection</em>.</p> <p>The nice point about it is that it lends itself to many variations. Closure under intersection with regular languages is a very common property, so that this is a guide for producing parsers for a great variety of formalisms, though it makes effective sense only for those that have a simple generating structure (loosely speaking). Typically it works very well for tree adjoining grammars (TAG) and other mildly context-sensitive languages.</p> <p>Another point is that, rather than parsing only strings, one can parse complete regular sets, keeping only the sentences that are also in the context-free language. 
And it is largely compatible with the many "optimization" techniques commonly found in chart parsing.</p> <p>This is very important in natural language processing (NLP), and particularly in speech processing, since the result of the first pass of a speech processor to identify the spoken words is usually not a single string of words, but what is usually called a word lattice (see <a href="https://stackoverflow.com/questions/25903007/algorithms-or-data-structures-for-dealing-with-ambiguity/26132215#26132215">Ambiguity and sharing in Natural Language Parsing</a>, which is actually the title of an answer to a question).</p> <p>The interesting point is that a word lattice (see the diagram in the previous reference) may be seen as an FSA that recognizes all the candidate sentences (after noise processing) that could be the sentence to be parsed. But chart parsing can be applied as well, as can the intersection construction.</p> <p>Now, it may well be that the word lattice contains spurious words to be eliminated, or completely garbled sequences corresponding to an arbitrary number of missing words, or misunderstood words. That can be modeled as a General Sequential Mapping (GSM) that does some editing on the input sentence, adding, removing or substituting words, possibly in a (finite) contextual way. Both regular and context-free languages are preserved by GSM mappings. Typically, the editing GSM can be applied to the word lattice, yielding a new regular language and its FSA (possibly even with cycles). Then the parsing process is applied to that new FSA. This part is, I think, what you mainly wanted to describe in the question.</p> <p>The next point is that not all proposed sentences, and not all corrections, have the same likelihood. 
Actually, word lattices may be weighted structures that give weights to the corresponding sentences in proportion to some likelihood that they are correct.</p> <p>Then the editing GSM can also have weighted transitions corresponding to some standard likelihood that such an error may have occurred.</p> <p>Finally, the (CF) grammar of the language itself may be weighted.</p> <p>The dynamic programming construction can use these weights (possibly probabilities) to determine what the most likely parses of the given input are, according to the grammar used.</p> <p>Note that I have been skipping the morphological analysis that recognizes words, and uses similar techniques to end up with the word lattice.</p> <p>I am also skipping the use of attributes or feature structures, which can combine with the process, provided they meet some algebraic constraints.</p> <p>The algebraic constraints (which also concern the numerical weights) are related to the algebraic structures of the grammars themselves. Typically, a CF derivation backbone (that is more than CF languages) relies on <a href="https://en.wikipedia.org/wiki/Semiring" rel="nofollow noreferrer">semiring structures</a>.</p> <p>A CF rule such as $X\to VXW | aV$ may be read as an equation of the form $X = V\cdot X\cdot W \cup \{a\}\cdot V$ where the variables take their values in sets of sentences (i.e. in languages). The domain of languages is a semiring under the two operations: union "$\cup$" and concatenation "$\cdot$". 
A context-free grammar is then a specific kind of equation system over that domain.</p> <p>You should find more on this use of semirings in Goodman's <a href="http://arxiv.org/abs/cmp-lg/9805007" rel="nofollow noreferrer">"Parsing Inside-Out" (1998)</a>.</p> <p>There are some other references in <a href="https://linguistics.stackexchange.com/questions/4619/is-there-a-favoured-data-structure-for-storing-ambiguous-parse-trees-in-natural/6120#6120">my answer</a> to the question "<a href="https://linguistics.stackexchange.com/questions/4619/is-there-a-favoured-data-structure-for-storing-ambiguous-parse-trees-in-natural">Is there a favoured data structure for storing ambiguous parse trees in Natural Language Processing?</a>"</p>
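As a minimal concrete instance of the dynamic-programming chart parsers discussed above, here is a CYK recognizer for a toy grammar in Chomsky normal form (the grammar and test words are invented purely for illustration):

```python
from itertools import product

# Toy CNF grammar for { a^n b^n : n >= 1 }:
#   S -> A T | A B,  T -> S B,  A -> 'a',  B -> 'b'
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}

def cyk(word):
    n = len(word)
    # chart[i][j] holds the nonterminals deriving word[i : i + j + 1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, c in enumerate(word):
        chart[i][0] = set(unary.get(c, set()))
    for span in range(2, n + 1):          # span length
        for i in range(n - span + 1):     # start position
            for k in range(1, span):      # split point
                for left, right in product(chart[i][k - 1],
                                           chart[i + k][span - k - 1]):
                    chart[i][span - 1] |= binary.get((left, right), set())
    return "S" in chart[0][n - 1]

print(cyk("aabb"))   # True
print(cyk("aab"))    # False
```

The intersection construction described in the answer generalizes this: instead of one input string, the chart is built over the states of an input FSA (e.g. a word lattice).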
17
natural language processing
What are open problems in computer science?
https://cs.stackexchange.com/questions/112399/what-are-open-problems-in-computer-science
<p>I should prepare some paper for a colloquium (kinda student-task) and it should cover the following points:</p> <p>(1) at least one notable discovery in theoretical informatics (or computer science)</p> <p>(2) at least one open problem in theoretical informatics</p> <p>(3) an example of a short notable proof of some result in theoretical informatics</p> <p>Can you give some examples, or point to some sources?</p> <p>I would like it to be related to computational linguistics or natural language processing.</p> <p>Thank you</p>
18
natural language processing
Resources to learn NLP
https://cs.stackexchange.com/questions/157606/resources-to-learn-nlp
<p>I am an undergraduate student in mathematics. I have a fair bit of experience with deep learning in computer vision research and am willing to dabble into Natural Language Processing (NLP). I hope that things won't be very disjointed and some of the knowledge can be transferred.</p> <p>I wanted to know if y'all can recommend some YouTube playlists that start from scratch as far as NLP is concerned, and then gets pretty deep into the subject. I would also like it to have a research-oriented flavor. Thanks in advance.</p>
<p>I like <a href="https://www.youtube.com/playlist?list=PLoROMvodv4rOSH4v6133s9LFPRHjEmbmJ" rel="nofollow noreferrer">The Stanford CS224N 2021 YouTube channel</a>, which focuses on deep learning models for NLP (but many things have changed since 2021, esp. LLMs). For pre-deep-learning NLP courses, I like:</p> <ul> <li><a href="https://www.bilibili.com/video/av29608234/" rel="nofollow noreferrer">Natural Language Processing by Michael Collins, Columbia University (Coursera, 2012 or 2013)</a></li> <li><a href="https://www.youtube.com/playlist?list=PLoROMvodv4rOFZnDyrlW3-nI7tMLtmiJZ" rel="nofollow noreferrer">Dan Jurafsky &amp; Chris Manning: Natural Language Processing (Coursera, 2012)</a></li> </ul>
19
natural language processing
Operations on regular languages
https://cs.stackexchange.com/questions/89191/operations-on-regular-languages
<p>I am taking a course on natural language processing that assumes the students have some background in the theory of computation. I don't, but I have read up to chapter 3 of the book "Speech and Language Processing" by Jurafsky.</p> <p>I therefore understand the following:</p> <ul> <li>regular expressions and regular languages</li> <li>operations on regular languages (union, cross product)</li> <li>finite state automata and the construction of FSAs given a regular expression</li> </ul> <p>Unfortunately, I couldn't solve a single question below and am not even sure if my understanding of the questions is correct. The red ticks are the solutions. I hope the people here can help out an NLP newbie.</p> <p>Definitions for the notations are as follows:</p> <p>$+$ which means ‘one or more of the previous character’. (book definition)</p> <p>$\bigotimes$ cross product</p> <p>So this is my most likely wrong understanding of the first statement: $L_1+ \bigotimes L_2 + $</p> <p>If I have </p> <p>$L_1 = \{aa,ab,aab\}$ </p> <p>$L_2 = \{b,ab,abb\}$</p> <p>then $L_1 + $ requires me to have at least one string from $L_1$. And so $L_1+ \bigotimes L_2 + $ would result in a set whose elements consist of at least one string from $L_1$ concatenated with at least one from $L_2$? So I could get something like $aa \ aa \ b$, where the two $aa$'s come from $L_1$?</p> <p>Would this be correct?</p> <p><a href="https://i.sstatic.net/Ampet.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ampet.jpg" alt="enter image description here"></a></p>
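One way to sanity-check this reading of $L_1+ \bigotimes L_2 +$ is brute-force enumeration (restricting the $+$ closure to a few repetitions, which is enough for short strings; this assumes $\bigotimes$ is read as concatenation, as in the question):

```python
from itertools import product

L1 = {"aa", "ab", "aab"}
L2 = {"b", "ab", "abb"}

def plus(lang, max_repeats):
    """L+ restricted to at most max_repeats concatenated pieces
    (a finite approximation, enough for short test strings)."""
    out = set(lang)
    for r in range(2, max_repeats + 1):
        out |= {"".join(parts) for parts in product(lang, repeat=r)}
    return out

def concat(a, b):
    """Concatenation of two languages: every x in a followed by every y in b."""
    return {x + y for x, y in product(a, b)}

result = concat(plus(L1, 3), plus(L2, 3))
# 'aa' from L1, 'aa' from L1 again, then 'b' from L2 - the example
# string from the question:
print("aaaab" in result)  # True
```

Any string in the set must start with at least one full $L_1$ string and end with at least one full $L_2$ string, which the enumeration confirms.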
20
natural language processing
Quantum algorithms for logical inference - reference request?
https://cs.stackexchange.com/questions/81752/quantum-algorithms-for-logical-inference-reference-request
<p>Microsoft is committed to building a scalable, industrial-size topological quantum computer; a Visual Studio-integrated programming language and SDK will be released by the end of this year (2017). On the other hand, logical/symbolic methods in computer science (natural language processing, inference, knowledge management etc.) are considered impractical due to high complexity/computational time, which is why subsymbolic methods (neural networks, statistical methods) are widely used today.</p> <p>The question is - are there quantum algorithms for speeding up classical logical problems (i.e. all the non-quantum logics - classical, non-classical, modal, defeasible, linear, categorical, adaptable, etc.), like automated theorem proving, backward and forward reasoning and so on?</p> <p>I have heard that Grover's search algorithm can be used for solving the Boolean satisfiability problem, but are there any other quantum algorithms for logics?</p>
21
natural language processing
Difference Between Residual Neural Net and Recurrent Neural Net?
https://cs.stackexchange.com/questions/63541/difference-between-residual-neural-net-and-recurrent-neural-net
<p>What is the difference between a <strong>Residual</strong> Neural Net and a <strong>Recurrent</strong> Neural Net?</p> <p>As I understand it,</p> <p><a href="https://arxiv.org/pdf/1512.03385v1.pdf" rel="noreferrer">Residual Neural Networks</a> are very deep networks that implement 'shortcut' connections across multiple layers in order to preserve context as depth increases. Layers in a residual neural net have input from the layer before it and, optionally, less processed data from <em>X</em> layers higher. This prevents early or deep layers from 'dying' due to converging loss.</p> <p><a href="https://arxiv.org/pdf/1604.03640.pdf" rel="noreferrer">Recurrent Neural Networks</a> are networks that contain a directed cycle which, when computed, is 'unrolled' through time. Layers in a recurrent neural network have input from the layer before it and, optionally, <em>time dependent</em> extra input. This provides situational context for things like natural language processing.</p> <p>Therefore, a recurrent neural network can be used to generate a basic residual network if the input remains the same with respect to time.</p> <p>Is this correct?</p>
<p>The answer is YES, they basically are the same, according to this <a href="https://arxiv.org/abs/1604.03640v1" rel="nofollow noreferrer">paper</a>.</p> <p><img src="https://i.sstatic.net/q7JdV.png" alt="enter image description here"></p> <p>The figure above shows how they compared both and how a ResNet can be reformulated into a recurrent form that is almost identical to an RNN.<br> For more info you can read the paper and dig deeper.</p>
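The contrast can be sketched in a few lines of plain Python (toy sizes and random weights, purely illustrative): a residual block applies a shortcut across depth, output = x + F(x), while an RNN cell reuses the same weights at every time step.

```python
import math
import random

random.seed(0)
n = 3
W = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
U = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def residual_block(x):
    # depth-wise shortcut: output = x + F(x)
    return [xi + math.tanh(fi) for xi, fi in zip(x, matvec(W, x))]

def rnn_cell(h, x_t):
    # time-wise recurrence: the SAME weights are reused at every step
    return [math.tanh(a + b) for a, b in zip(matvec(W, h), matvec(U, x_t))]

x = [0.5, -0.2, 0.1]
deep = x
for _ in range(3):            # three stacked residual blocks (depth)
    deep = residual_block(deep)

h = [0.0] * n
for _ in range(3):            # an unrolled RNN fed the same x at each step,
    h = rnn_cell(h, x)        # mirroring the condition in the question

print(len(deep), len(h))      # both are length-3 state vectors
```

The correspondence the paper draws is between stacking residual blocks with shared weights (depth) and unrolling a recurrent cell (time).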
22
natural language processing
Combining Ontology and Relational Databases in Question Answering system
https://cs.stackexchange.com/questions/74514/combining-ontology-and-relational-databases-in-question-answering-system
<p>I'm getting introduced to the Natural Language Processing field and its applications. I'm planning to build a question answering system for a project, but some approaches are making me a bit confused about the use of ontologies and their application in the architecture of the system. I understand that an ontology, by definition, is a way to represent concepts and relations in a certain domain, also allowing semantic annotations.</p> <p>Some approaches use an ontology like a database, in which the user's input (in natural language) is transformed into a SPARQL query by a semantic parser and the knowledge is then retrieved from the ontology. But then I asked myself: an ontology is usually static knowledge that rarely changes, and I want my system to be able to increase its knowledge of the domain with new instances or concepts, because there are other systems (this is only a module of a big system) that will probably need data about instances present in the ontology to retrieve specific attributes that could change over time. Then a relational database comes to mind. Instead of using an ontology as a big database for the whole project, why not build a relational database with the instances of the domain, where attributes can change dynamically and new ones can be added without constantly modifying the ontology? I could then develop an ontology that represents the schema of the database, so that I can map the natural language query of the user to the terms present in the ontology and transform it into a SQL query that retrieves the answer from the relational database. With this approach, I need to figure out how to link instances of the ontology to rows in the database.</p> <p>Could this approach be correct? I mean, using the ontology as an intermediary between the user query and the relational database? The only problem I see with this approach is that I need to figure out how to link instances of the database to the ontology.</p> <p>Thanks for your help, Greetings.</p>
23
natural language processing
Estimate entropy, based upon observed frequency counts
https://cs.stackexchange.com/questions/15010/estimate-entropy-based-upon-observed-frequency-counts
<p>Suppose I have $n$ independent observations $x_1,\dots,x_n$ from some unknown distribution over a known alphabet $\Sigma$, and I want to estimate the entropy of the distribution. I can count the frequency $f_s$ of each symbol $s \in \Sigma$ among the observations; how should I use these counts to estimate the Shannon entropy of the source?</p> <hr> <p>The obvious approach is to estimate the probability of each symbol $s$ as $\Pr[X=s]=f_s/n$, and then calculate the entropy using the standard formula for Shannon entropy. This leads to the following estimate of the entropy $H(X)$:</p> <p>$$\text{estimate}(H(X)) = - \sum_{s \in \Sigma} {f_s \over n} \lg (f_s/n).$$</p> <p>However, this feels like it might not produce the best estimate. Consider, by analogy, the problem of estimating the probability of symbol $s$ based upon its frequency $f_s$. The naive estimate $f_s/n$ is likely an underestimate of its probability. For instance, if I make 100 observations of birds in my back yard and none of them were a hummingbird, should my best estimate of the probability of seeing a hummingbird on my next observation be exactly 0? No; instead, it's probably more realistic to estimate that the probability is something small but not zero. (A zero estimate means that a hummingbird is absolutely impossible, which seems unlikely.)</p> <p>For the problem of estimating the probability of symbol $s$, there are a number of standard techniques for addressing this problem. <a href="https://en.wikipedia.org/wiki/Laplace_smoothing" rel="noreferrer">Additive smoothing</a> (aka Laplace smoothing) is one standard technique, where we estimate the probability of symbol $s$ as $\Pr[X=s] = (f_s + 1)/(n+|\Sigma|)$. Others have proposed Bayesian smoothing or other methods. These methods are widely used in natural language processing and document analysis, where just because a word never appears in your document set doesn't mean that the word has probability zero. 
In natural language processing, this also goes by the name <a href="https://en.wikipedia.org/wiki/N-gram#Smoothing_techniques" rel="noreferrer">smoothing</a>.</p> <p>So, taking these considerations into account, how should I estimate the entropy, based upon observed frequency counts? Should I apply additive smoothing to get an estimate of each of the probabilities $\Pr[X=s]$, then use the standard formula for Shannon entropy with those probabilities? Or is there a better method that should be used for this specific problem?</p>
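For concreteness, the two estimators contrasted in the question - the naive plug-in estimate and the plug-in estimate after additive (Laplace) smoothing over a known alphabet - can be sketched as follows (toy data; no claim is made that either is optimal):

```python
from collections import Counter
from math import log2

def plugin_entropy(samples):
    """Naive plug-in estimate: H = -sum over seen s of (f_s/n) lg(f_s/n)."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((f / n) * log2(f / n) for f in counts.values())

def smoothed_entropy(samples, alphabet):
    """Plug-in estimate with additive smoothing: p(s) = (f_s + 1)/(n + |Sigma|),
    so unseen symbols get a small nonzero mass."""
    n = len(samples)
    counts = Counter(samples)
    k = len(alphabet)
    probs = [(counts[s] + 1) / (n + k) for s in alphabet]
    return -sum(p * log2(p) for p in probs)

obs = list("aaabab")                            # 4 a's, 2 b's; alphabet {a, b, c}
print(round(plugin_entropy(obs), 4))            # unseen 'c' contributes nothing
print(round(smoothed_entropy(obs, "abc"), 4))   # 'c' gets a small nonzero mass
```

Note that smoothing raises the estimate here because it spreads mass onto the unseen symbol; the plug-in estimator itself is known to be biased downward, which is part of the motivation for the question.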
<p>Like most things of this nature, the best method is found by empirical evaluation. One thing worth noting is that most smoothing schemes can be thought of as the incorporation of a prior into your likelihood estimate. For example, if you are trying to estimate the parameter $\theta$ of a binary random variable $X$ and you have data $\mathcal{D} = \{x_1, \ldots, x_n\}$ consisting of i.i.d. realizations of $X$, then your posterior takes the form</p> <p>$$ \begin{align*} P(\theta|\mathcal{D}) &amp;\varpropto P(\theta)P(\mathcal{D}|\theta)\\ \end{align*} $$ Assuming $P(\theta) = B(\alpha,\beta)^{-1}\theta^{\alpha-1}(1-\theta)^{\beta-1}$, i.e., a <a href="http://en.wikipedia.org/wiki/Beta_distribution" rel="nofollow">Beta distribution</a>, and letting $n_1 = |\{x_i \in \mathcal{D} \colon x_i = 1\}|$, then we have</p> <p>$$ \begin{align*} P(\theta|\mathcal{D}) &amp;\varpropto \theta^{\alpha-1}(1-\theta)^{\beta-1} \theta^{n_1}(1-\theta)^{n - n_1}\\ &amp;= \theta^{n_1 + \alpha - 1}(1- \theta)^{n - n_1 + \beta - 1}. \end{align*} $$ Notice that the effect of the hyperparameters ($\alpha$ and $\beta$) is essentially just to add some constant to your counts of each possible outcome. So additive smoothing (at least in all applications I've encountered) is really just a special case of Bayesian estimation.</p>
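A quick numerical check of this point: the posterior above is $\mathrm{Beta}(n_1 + \alpha,\, n - n_1 + \beta)$, whose mean is $(n_1 + \alpha)/(n + \alpha + \beta)$; with a uniform $\mathrm{Beta}(1,1)$ prior this coincides exactly with Laplace's add-one estimate (the function names below are mine, for illustration):

```python
def posterior_mean(n1, n, alpha, beta):
    """Mean of the Beta(n1 + alpha, n - n1 + beta) posterior for theta."""
    return (n1 + alpha) / (n + alpha + beta)

def laplace_estimate(n1, n, num_outcomes=2):
    """Additive smoothing (f + 1) / (n + |Sigma|) for a binary variable."""
    return (n1 + 1) / (n + num_outcomes)

# With a uniform Beta(1, 1) prior, the two coincide exactly:
n1, n = 7, 10
print(posterior_mean(n1, n, 1, 1))   # 0.666...
print(laplace_estimate(n1, n))       # 0.666...
```

Other choices of $\alpha$ and $\beta$ correspond to adding pseudo-counts other than 1, which is exactly the "add some constant to your counts" reading in the answer.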
24
natural language processing
How do IR researchers evaluate the ranks of documents?
https://cs.stackexchange.com/questions/41578/how-do-ir-researchers-evaluate-the-ranks-of-documents
<p>I am developing a new IR system in a specialized context. I understand that a traditional IR system (like a search engine) should rank documents in terms of their relevance for a query. The most relevant documents should come first and the least relevant (perhaps: least relevant above some threshold) should come last.</p> <p>I want to evaluate my new IR system. How do researchers evaluate document rank? How do they say that this document is more relevant than that document for some query? The most obvious thing to do would be to manually assign such labels and then check the machine against the hand labels. This seems highly subjective. Maybe there is a better way?</p> <p>I've read section 15.1 of Manning and Schütze's <em>Foundations of Statistical Natural Language Processing</em>, but it only talks about evaluating precision and recall -- not evaluating rank. Any suggestions on where to look for evaluating rank?</p>
<p>If I understand what you're trying to achieve correctly, you can use a technique called <a href="https://en.wikipedia.org/wiki/Discounted_cumulative_gain" rel="nofollow"><strong>discounted cumulative gain</strong></a> (DCG).</p> <p>$$DCG_p=\sum_{i=1}^{p}{\frac{2^{rel_i}-1}{\log_2{(i+1)}}}$$</p> <p>$i$ is the rank, and $p$ is the number of results that you want to evaluate. For example, if you evaluate DCG for the first 10 results, then $p=10$.</p> <p>NDCG is a form of DCG that involves normalizing the result. This is useful if $p$ differs when you compare DCG scores. $$NDCG=\frac{DCG}{IDCG}$$ where IDCG is the ideal DCG: the DCG obtained by re-sorting the same results in decreasing order of relevance and computing $DCG_p$ on that ideal ordering.</p> <p>Note that neither recall nor precision are used here. Relevance and rank are the factors to compare. You still need to give the documents a relevance factor, that's unavoidable, but you don't need to establish a threshold as you would with other evaluation metrics.</p>
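A direct transcription of these formulas (the relevance grades below are invented for illustration; IDCG is computed from the descending-relevance ordering):

```python
from math import log2

def dcg(relevances):
    """DCG_p = sum over ranks i of (2^rel_i - 1) / log2(i + 1)."""
    return sum((2 ** rel - 1) / log2(i + 1)
               for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    # IDCG: the DCG of the same grades re-sorted in decreasing order.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

system_ranking = [3, 2, 3, 0, 1, 2]   # graded relevance of results, top first
print(round(ndcg(system_ranking), 4))
print(ndcg(sorted(system_ranking, reverse=True)))  # 1.0 for a perfect ranking
```

A perfect ranking scores NDCG = 1.0 by construction, and any misordering scores strictly less, which is what makes it usable as a rank-evaluation metric.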
25
natural language processing
How to represent symbolic knowledge using real numbers - theory about neural networks and natural/analog computing?
https://cs.stackexchange.com/questions/98067/how-to-represent-symbolic-knowledge-using-real-numbers-theory-about-neural-net
<p>One can define the semantics of a given word using references to real-world entities, relationships with other words and other concepts, and represent all this knowledge about that one word using logical symbolic expressions. One can then encode this whole set of symbolic expressions into a vector of real numbers. This is the word embedding used in natural language processing - the distributional semantics of the word, as opposed to its formal semantics.</p> <p>One can consider a function of a software program (e.g. a functional program or any other program). One can encode this program in the multiple vectors and matrices of real numbers that define a neural network.</p> <p>One can consider symbolic meta-knowledge and encode it into vectors or neural networks as well.</p> <p>The decoding process can be trickier. There is more or less elaborate work on decoding neural networks - e.g. see Google queries "logical program extraction from neural networks" or "symbolic rule extraction from neural networks". But I have not seen work on extracting a more or less static knowledge base from a word-embedding vector.</p> <p>So - I have two questions regarding this matter:</p> <ul> <li>Is there symbolic knowledge extraction from word-embedding vectors - some kind of decoding algorithm from a vector of real numbers to a set of logical formulas?</li> <li>Is there a general theory of such encoding algorithms? The usual approach is to train neural networks and arrive at the encoded form using non-symbolic, non-algorithmic methods, in an implicit way. I have heard about embedding symbolic knowledge in neural networks to speed up training, but such work is scarce. But what about general encoding algorithms?</li> </ul> <p>There are discrete, natural Goedel numbers (encoding algorithms) that can be assigned to any theorem of first-order logic. 
But what about such Goedel numbers for sets of formulas, or for a computational program (as a set of commands)? Can we enumerate all such sets using natural numbers only, or are real numbers naturally needed instead? Or maybe even sets of real numbers are required for encoding a set of symbolic formulas or program statements? <strong>Is there such research work which I can develop further? If not, then what ideas can be mentioned for such encoding/decoding schemes?</strong></p> <p>Such encoding-decoding algorithms can be related to biological computing, and ultimately they may lead to explanations of brain activity.</p>
<p>No, probably not. I think you're expecting too much from the current state of the art in word embeddings. Word embeddings don't magically capture all semantic knowledge. They don't reflect perfect understanding of the language. Instead, they're just useful mappings where similar words often have similar embeddings. Moreover, the way word embeddings are constructed has nothing to do with logical formulas.</p> <p>I don't think you're going to find that word embeddings solve the problem of converting from natural language to formulas.</p> <p>I don't know what would count as a general theory of encoding algorithms for you. There are certainly multiple papers that propose different methods of constructing different word embeddings; you could read those to understand the state of the art.</p>
26
natural language processing
Semantic/DRT methods for conversational agents / chatbots / dialogue systems - reference request?
https://cs.stackexchange.com/questions/82575/semantic-drt-methods-for-conversational-agents-chatbots-dialogue-systems-r
<p>The wiki pages about chatbots mention that statistical methods, keyword search and precompiled answers are used for chatbots. But I feel that a different, semantic approach to constructing chatbots should exist. There are formal semantics of natural language (recent results in <a href="http://www.springer.com/gp/book/9783319504209" rel="nofollow noreferrer">http://www.springer.com/gp/book/9783319504209</a>) that express natural language in lambda calculus and translate those expressions into further logical expressions, such as deontic logic expressions. And then there is discourse representation theory. So, in essence, one can imagine the conversational process as an interactive logical inference process in which the conversational agent, as a Belief-Desire-Intention agent, strives to achieve some goals by producing texts using the formal semantics of natural language and logical inference (inference control).</p> <p>So - are there research efforts or research trends along the described lines? Google gives nothing useful for joined phrases like 'discourse representation theory and chatbot', etc. But Google is unhelpful in the many cases where scientific endeavours use completely different keywords. So I am asking for keywords and perhaps some reference works.</p>
27
natural language processing
Solving the part-of-speech tagging problem with HMM
https://cs.stackexchange.com/questions/20185/solving-the-part-of-speech-tagging-problem-with-hmm
<p>There is a famous <a href="http://en.wikipedia.org/wiki/Part-of-speech_tagging" rel="nofollow">part-of-speech tagging problem</a> in Natural Language Processing. The popular solution is to use <a href="http://en.wikipedia.org/wiki/Hidden_Markov_model" rel="nofollow">Hidden Markov Models</a>.</p> <p>That is, given the sentence $x_1 \dots x_n$ we want to find the sequence of POS tags $y_1 \dots y_n$ such that $y_1 \dots y_n = \arg\max_{y_1 \dots y_n}p(Y,X)$.</p> <p>By the chain rule, $P(X,Y)=P(Y)P(X \mid Y)$.</p> <p>Solving POS tagging with an HMM entails the independence assumptions behind the factors $p(y_i \mid y_{i-1})$ and $p(x_i \mid y_i)$.</p> <p>The first question: is there any particular reason why we prefer to solve this with a generative model and its many assumptions, rather than by directly estimating $P(Y \mid X)$? Given the training corpus it is still possible to estimate $p(y_i \mid x_i)$.</p> <p>The second question: even once we are convinced that the generative model is preferred, why compute it as $P(X,Y)=P(Y)P(X \mid Y)$ and not as $P(X,Y)=P(X)P(Y \mid X)$? Given an appropriate generative story I could use $P(X,Y)=P(X)P(Y \mid X)$ as well; is it stated somewhere that the assumed generative story is preferred?</p>
<p>Isn't this exactly the same question you asked <a href="https://cs.stackexchange.com/questions/16777/hidden-markov-model-in-tagging-problem">previously</a>? I'll make some additional comments and add some links here. Hopefully that will help.</p> <blockquote> <p>is there any particular reason why we prefer to solve it with a generative model and its many assumptions, rather than directly estimating $P(Y∣X)$? Given the training corpus it's still possible to estimate $p(y_i∣x_i)$.</p> </blockquote> <p>It just depends. Choosing whether to model $P(X,Y)$ or $P(Y|X)$ is simply the choice of generative versus discriminative. Both have advantages. See the paper <a href="http://www.cs.cmu.edu/~aarti/Class/10701/readings/NgJordanNIPS2001.pdf" rel="nofollow noreferrer">On Discriminative vs. Generative classifiers</a> by Ng and Jordan. One thing worth mentioning, which I didn't say last time, is that unsupervised learning in a generative framework is normally straightforward. This means it is also fairly obvious how to do semi-supervised learning. Semi-supervised learning can be very helpful for NLP tasks, where the amount of unlabeled data is essentially infinite and labeled data is hard to obtain. Semi-supervised learning is typically not as easy in a discriminative framework. See <a href="http://en.wikipedia.org/wiki/Co-training" rel="nofollow noreferrer">Co-training</a> as an example of the latter.</p> <p>As for how one decomposes the joint, well, that's up to you. There's no rule saying you can't decompose it as $P(X,Y) = P(X)P(Y|X)$. Doing so would be perfectly valid, just not sensible. Notice that decomposing the joint this way includes the factor $P(Y|X)$ already. If you're ultimately interested in predicting $Y$ given $X$, then you should predict $$ \begin{align*} \arg\max_y P(Y=y,X=x) &amp;= \arg\max_y P(X=x)P(Y=y|X=x) \\ &amp;= \arg\max_y P(Y=y|X=x). \end{align*} $$ So you just use $P(Y|X)$ and ignore $P(X)$, and we're back at a discriminative classifier.</p>
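To see concretely why the $P(X=x)$ factor can be ignored in the argmax, here is a toy numeric check; the tag set and probabilities are made up for illustration:

```python
# Toy check that argmax_y P(X=x) * P(Y=y | X=x) == argmax_y P(Y=y | X=x):
# multiplying every candidate's score by the same constant P(X=x) > 0
# cannot change which y attains the maximum.
p_x = 0.3                                        # P(X = x), arbitrary positive constant
p_y_given_x = {"N": 0.5, "V": 0.3, "D": 0.2}     # made-up conditional P(Y | X = x)

# the corresponding joint scores P(X=x) * P(Y=y | X=x)
joint = {y: p_x * p for y, p in p_y_given_x.items()}

best_joint = max(joint, key=joint.get)
best_cond = max(p_y_given_x, key=p_y_given_x.get)
assert best_joint == best_cond == "N"
```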
28
natural language processing
Which fields of Computer Science are involved in knowledge-based and text-based dialog systems?
https://cs.stackexchange.com/questions/63423/which-fields-of-computer-science-are-involved-in-knowledge-based-and-text-based
<p>One of my future goals is to learn in depth about building a text-based dialog system to answer questions about a specific topic (say, Tolkien's legendarium), assuming that I have a large body of article-formed facts about that topic (say, Wikipedia). Excluding the "Computer Engineering" part of the problem, I have a question:</p> <p>Which (narrow enough) fields of Computer Science are involved in the problem, and how deep is the involvement of each of them? With my knowledge right now, I can only pinpoint the field of Natural Language Processing (NLP), but I don't know how deeply it's involved in the problem or whether we can narrow it down further. Also, I have a feeling that Information Retrieval (IR) and Knowledge-Based Systems (KBS) are involved, but I actually don't know them too well. That's all I have for now.</p> <p><strong>Addition:</strong> I'll use the example of Tolkien's legendarium to give sample <strong>facts</strong> and a sample <strong>question</strong>. Sample <em>facts</em>: "Melkor (known in Sindarin as Morgoth), the evil Vala, corrupted many Maiar into his service. These included Sauron, the main antagonist of The Lord of the Rings, and the Balrogs, his demons of flame and shadow" (from the Wikipedia page on Maia). The sample <em>question</em>: "List all named Maiar who were tempted and served under Morgoth."</p>
29
natural language processing
Can you suggest a topic for a Bachelor Thesis in Mathematics that is related to Machine Learning?
https://cs.stackexchange.com/questions/133808/can-you-suggest-a-topic-for-a-bachelor-thesis-in-mathematics-that-is-related-to
<h2>Context</h2> <p>I am a final-year Bachelor of Mathematics student and next semester I will write my Bachelor thesis.<br /> My interests are in Machine Learning (ML) and I will do a master in ML next year. More specific sub-fields I like are</p> <ul> <li>Deep Learning</li> <li>Computer Vision</li> <li>Natural Language Processing</li> <li>Reinforcement Learning</li> </ul> <p>And my interests outside of ML and mathematics include</p> <ul> <li>Self-driving cars (<em>e.g.</em> Tesla)</li> <li>Rockets and space exploration</li> </ul> <p>More vaguely, I find tech interesting as a whole.</p> <h2>Question</h2> <p>I am looking for a thesis topic which would bring me as close as possible to the field of ML. <strong>Do you have topic recommendations?</strong></p> <p><strong>BUT</strong> my Bachelor is in Mathematics, therefore I cannot write a thesis in Computer Science, as it would not be accepted by my study director.</p> <h2>Some thoughts</h2> <p>I know some people who were in my situation. One of them, for instance, discovered and proved some convergence results in the context of Gradient Descent. Maybe this will inspire your answers.</p> <p>Thanks in advance!</p> <hr /> <p>PS: This is a duplicate of <a href="https://math.stackexchange.com/questions/3966709/thesis-in-machine-learning-but">my original question</a> on the Math Stack Exchange, but I thought that people on the Computer Science Stack Exchange could bring a different point of view.</p>
<p>I would like to make two points clear as a researcher:</p> <ol> <li><p>Mathematics is a very broad discipline. At the bachelor level it is still appropriate to call it &quot;Mathematics&quot;, but already in a Masters you'll need to specialize! You'll have to choose a branch, and then you won't be &quot;a mathematician&quot;: you'll be a statistician, a topologist, a graph theorist, a category theorist, etc. That being said, you need to ask yourself which branch you're most interested in. This leads to the next point:</p> </li> <li><p>The fact that you're interested in Machine Learning already narrows it down to Probability &amp; Statistics, Linear Algebra, and Multivariate Calculus. And here I would say that any topic that falls within these categories will help you later along the road.</p> </li> </ol> <p>A Bachelor thesis is a piece of scientific work - that's why it is called a &quot;Bachelor thesis&quot;. You are supposed to do research and produce new knowledge, regardless of how small or significant it will be. This forum can't do it for you.</p> <p>Pick something YOU are interested in from Probability &amp; Statistics, Linear Algebra or Multivariate Calculus and explore the topic. Once you start, you'll inevitably run into questions that need addressing. If you get stuck, pick a classic problem (for example the <a href="https://en.wikipedia.org/wiki/Knight%27s_tour" rel="nofollow noreferrer">Knight's tour</a>) and systematically break it down.</p>
30
natural language processing
Context-free grammar for DAGs?
https://cs.stackexchange.com/questions/55109/context-free-grammar-for-dags
<p>I'm looking for a "safe" representation of DAGs. By a "safe" representation I mean one that can be described by a context-free grammar. Ideally, this grammar would be suitable for a simple LR parser.</p> <p>The same problem for trees instead of DAGs is already solved: just use one of the many well-known tree representations such as s-expressions or JSON, which are all context-free and have nice, efficient parsers.</p> <p>But how to approach this for DAGs instead of trees?</p> <ul> <li><p>The naive approach is a list of nodes, where each node contains references to its parent (head) nodes. Ignoring referential integrity, this would be a regular language. But then I need to check that all references point back to an already-parsed node. How do I ensure this? Is there a clever PDA construction for that?</p></li> <li><p>Or do I need a totally different representation of DAGs?</p></li> <li><p>Or is any generic representation of DAGs doomed to be context-sensitive? If so, is there a proof of that?</p></li> </ul> <p>I'm aware that in natural language processing there exist parsers which output a "parse DAG" instead of a parse tree. But I do not see how these may help with this particular problem. Maybe they are of no help here, but their existence gives me a feeling that this problem might have a solution.</p> <p>(I was inspired to ask this question by the <a href="http://langsec.org/" rel="nofollow">LANGSEC</a> movement; here are some <a href="http://langsec.org/insecurity-theory-28c3.pdf" rel="nofollow">slides</a> of theirs.)</p>
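To make the naive node-list encoding concrete, here is a sketch of the referential-integrity check I mean; the `(name, parent_indices)` encoding is just one arbitrary choice, and the point is that the check is trivial procedurally even if it is not context-free:

```python
def valid_dag_listing(nodes):
    # nodes: list of (name, parent_indices); every parent must reference
    # a strictly earlier position in the list, which forces acyclicity.
    for i, (_, parents) in enumerate(nodes):
        if any(not (0 <= p < i) for p in parents):
            return False
    return True

assert valid_dag_listing([("a", []), ("b", [0]), ("c", [0, 1])])
assert not valid_dag_listing([("a", [1]), ("b", [0])])   # forward reference
```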
31
natural language processing
How to define a formal language for describing procedural activities
https://cs.stackexchange.com/questions/142868/how-to-define-a-formal-language-for-describing-procedural-activities
<p>I do not have a formal computer science background, so I am looking for pointers.</p> <p>How would you advise I go about defining a formal way to describe procedures like cooking recipes, manufacturing processes, driving to a location, etc.?</p> <p>These kinds of processes do feel like algorithms, but they are more open-ended than ordinary algorithms expressed in programming languages. For example, two executions of a cooking recipe do not have to be 100% identical to result in the same dish. Also, a step in a cooking recipe can be expressed in various ways, since natural language is being used.</p> <p>The same observation applies to manufacturing processes, driving to a location, etc.</p> <p>What concepts or tools should I be looking at if I want to achieve this kind of thing?</p> <p>Would a DSL do the job? Or would a DSL be too restrictive? I wonder how one could encode the nearly infinite variety of steps/procedures involved in an activity like cooking or manufacturing.</p> <p>Pointers would be appreciated.</p>
<p>Your question is very broad and has possibly hundreds of answers depending on the interpretation. The fact you tagged it with &quot;formal-languages&quot; and &quot;formal-grammars&quot; suggests you are actually asking &quot;what should the syntax of a language describing this kind of stuff look like&quot;. At other points, reading your question, I feel you are actually asking &quot;what kind of computational power should a language employ for encoding these processes&quot;.</p> <p>Let's consider several aspects and draw a conclusion. If you want to describe some process (cooking/manufacturing/etc.) you need to figure out the most important elements to formalize and how they interact with each other; the language will then have to be designed around the purpose of the formalization.</p> <p>If you want to formalize for the sake of explaining the context to somebody else (human or machine), the formalism will be descriptive. For a human-readable (but still formal) description, even XML could be a suitable candidate (depending on your requirements). If a machine-readable formal description is of interest for making inferences, an ontology (encoded in a Description Logic with some set of axioms, <a href="https://en.wikipedia.org/wiki/Description_logic" rel="nofollow noreferrer">DL</a>) could do.</p> <p>If you want to verify that the description satisfies certain properties, the formalism will require a logic and an inference engine for doing so (<a href="https://en.wikipedia.org/wiki/Model_checking" rel="nofollow noreferrer">MC</a>).</p> <p>If you want to encode and run the description, the language will require several sub-languages that not only let you declare the elements of the domain but also describe how they interact. I'm thinking of the C language, where nothing is predefined and you have to describe the world through structures and functions. Or Java, where the same is accomplished through classes.</p> <p>I think enough elements have been exposed; the short answer is: it depends on <em>what</em> you are interested in formalizing and <em>what</em> you want to achieve once the formalization is complete.</p> <p>For example, suppose you want to grow virtual artificial plants; the most common way to go is employing L-systems (<a href="https://en.wikipedia.org/wiki/L-system" rel="nofollow noreferrer">L-systems</a>). As you can see, such a formalism specifies what can be described, how it can be done, the syntax for doing so, and eventually provides a computational procedure for doing it.</p> <p>Hence, you should first decide precisely what you want to formalize, decide the restrictions of the formalization, and only then start pondering the actual grammar.</p> <p>With respect to your question about DSLs, consider this assertion: &quot;Domain specific languages (DSLs) are languages whose syntax and notation are customized for a specific problem domain&quot;, taken from &quot;A survey of grammatical inference in software engineering&quot; by Andrew Stevenson and James R. Cordy. The L-system grammar is a DSL, but its most unrestricted version can compute anything. So there are DSLs that can achieve any kind of computation and are therefore never &quot;not enough&quot;. The same is true for LaTeX: <a href="https://stackoverflow.com/questions/2968411/ive-heard-that-latex-is-turing-complete-are-there-any-programs-written-in-late">LaTeX is Turing complete</a>.</p> <p>I hope the example sheds some light on your doubts.</p>
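To make the L-systems pointer concrete, here is a minimal sketch of Lindenmayer's classic algae system (axiom `A`, rules `A → AB`, `B → A`); each iteration rewrites every symbol in parallel:

```python
def lsystem_step(s, rules):
    # rewrite every symbol in parallel; symbols with no rule are kept as-is
    return "".join(rules.get(c, c) for c in s)

rules = {"A": "AB", "B": "A"}
s = "A"
for _ in range(5):
    s = lsystem_step(s, rules)

# successive string lengths follow the Fibonacci sequence: 1, 2, 3, 5, 8, 13
assert s == "ABAABABAABAAB"
```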
32
natural language processing
What is the activation function, label and loss function for Hierachical Softmax
https://cs.stackexchange.com/questions/43912/what-is-the-activation-function-label-and-loss-function-for-hierachical-softmax
<p>Several papers (<a href="http://www.iro.umontreal.ca/%7Elisa/pointeurs/hierarchical-nnlm-aistats05.pdf" rel="nofollow noreferrer">1</a> (the originator), <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">2</a>, <a href="http://dx.doi.org/10.1007/978-3-662-45924-9_16" rel="nofollow noreferrer">3</a>) suggest the use of hierarchical softmax instead of softmax for classification where the number of classes is large (e.g. many thousands).</p> <p>I haven't been able to get clear in my head what the actual final layer and output/labels of the neural network are in this case.</p> <p>For (plain) softmax the activation function is the softmax function: <span class="math-container">$$\mathbf{\hat{y}}=\sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$</span></p> <p>and the loss (error) function is cross entropy <span class="math-container">$$C(\mathbf{\hat{y}},\mathbf{y})=\sum_{k=1}^K-\mathbf{y_k}\times \log{\mathbf{\hat{y}_k}}$$</span> where <em>y</em> is &quot;one-hot&quot; -- all zeros except a 1 at the index matching the class (this leads to an efficient implementation if you know the class indices).</p> <p>For <strong>hierarchical softmax</strong>: what is the form of the label <strong>y</strong>, the activation function <span class="math-container">$\sigma(\mathbf{z})$</span> and the loss (error) function <span class="math-container">$C(\mathbf{\hat{y}},\mathbf{y})$</span>?</p> <p>I am starting to suspect that the label is a binary code for the class, e.g. a Huffman code, that the activation function is simply the sigmoid (or tanh), and that the loss is just squared error.</p> <p>Is that all there is to it?</p> <p>Or is it in fact done with a multilayer network, in some way?
(Obviously you can't stack softmax layers as inputs to softmax layers.)</p> <h3>Implementations</h3> <p>There are quite a few implementations around, but I find all of them hard to follow.</p> <ul> <li><p>Word2Vec in <a href="https://code.google.com/p/word2vec/source/browse/trunk/word2vec.c" rel="nofollow noreferrer">C</a>, and Gensim in <a href="https://github.com/piskvorky/gensim/blob/develop/gensim/models/word2vec.py" rel="nofollow noreferrer">Python</a>.</p> <ul> <li>I'm not great at understanding C -- too many clever tricks (like using 1D indexing + offsets on 2D arrays), and the Python hews close to the C (it is an enhanced translation).</li> <li>There are two linked articles, <a href="https://yinwenpeng.wordpress.com/2013/09/26/hierarchical-softmax-in-neural-network-language-model/" rel="nofollow noreferrer">A</a> and <a href="https://yinwenpeng.wordpress.com/2013/12/18/word2vec-gradient-calculation/" rel="nofollow noreferrer">B</a>, which go some way towards explaining the C code.</li> </ul> </li> <li><p>A very different <a href="https://github.com/Philip-Bachman/NN-Python/blob/master/nlp/NLMLayers.py" rel="nofollow noreferrer">Python</a> (<a href="https://github.com/Philip-Bachman/NN-Python/blob/e9a7619806c5ccbe2bd648b2a2e0af7967dc6996/nlp/CythonFuncsPyx.pyx#L174" rel="nofollow noreferrer">Cython</a>, actually) implementation.</p> </li> <li><p>An even more different <a href="https://github.com/lisa-groundhog/GroundHog/blob/66472ba649aa6a4c6b710a0de3d0344be2f7b5c9/groundhog/layers/cost_layers.py#L1163" rel="nofollow noreferrer">Python (Theano)</a> implementation. This one is not truly hierarchical softmax, as it only has two layers.</p> </li> </ul> <hr /> <h2>Papers</h2> <ol> <li>Morin, F., &amp; Bengio, Y. (2005, January). <a href="http://www.iro.umontreal.ca/%7Elisa/pointeurs/hierarchical-nnlm-aistats05.pdf" rel="nofollow noreferrer">Hierarchical probabilistic neural network language model</a>.
In Proceedings of the international workshop on artificial intelligence and statistics (pp. 246-252).</li> <li>Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., &amp; Dean, J. (2013). <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">Distributed representations of words and phrases and their compositionality</a>. In Advances in neural information processing systems (pp. 3111-3119).</li> <li>Wang, Y., Li, Z., Liu, J., He, Z., Huang, Y., &amp; Li, D. (2014). <a href="http://dx.doi.org/10.1007/978-3-662-45924-9_16" rel="nofollow noreferrer">Word Vector Modeling for Sentiment Analysis of Product Reviews</a>. In Natural Language Processing and Chinese Computing (pp. 168-180). Springer Berlin Heidelberg.</li> </ol>
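To make my suspicion concrete, here is a sketch of what I think the computation would be: one sigmoid per internal tree node, with the class label encoded as the sign pattern of its root-to-leaf path. The tree shape, weight vectors and hidden vector below are entirely made up, but the key property (leaf probabilities summing to 1, since sigmoid(-z) = 1 - sigmoid(z)) holds by construction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# hypothetical hidden-layer output and per-internal-node weight vectors
h = [0.3, -0.7, 1.1]
W = {"root": [0.5, 0.2, -0.4], "L": [-0.3, 0.8, 0.1], "R": [0.9, -0.1, 0.6]}

# each class = a root-to-leaf path; sign +1 means "go left" at that node
paths = {
    "c0": [("root", +1), ("L", +1)],
    "c1": [("root", +1), ("L", -1)],
    "c2": [("root", -1), ("R", +1)],
    "c3": [("root", -1), ("R", -1)],
}

def leaf_prob(label):
    # P(class) = product over the path of sigmoid(+/- w_node . h);
    # at every node the two children split the probability mass exactly
    p = 1.0
    for node, sign in paths[label]:
        p *= sigmoid(sign * dot(W[node], h))
    return p

total = sum(leaf_prob(c) for c in paths)
assert abs(total - 1.0) < 1e-9
```

Under this formulation the loss would be the negative log of `leaf_prob(label)`, i.e. a sum of per-node logistic losses along the path, rather than squared error.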
33
natural language processing
Automatic learning/discovery of logics
https://cs.stackexchange.com/questions/85749/automatic-learning-discovery-of-logics
<p>Are there efforts to automatically discover new logics? Logics are simple structures - they have a formal language, deduction rules, semantics, and certain properties that are proved or refuted for every new logic. In fact, each logic can be put into the framework of institutions (by Diaconescu et al.).</p> <p>So - is it possible to automatically learn/discover new logics?</p> <p>E.g. type-logical grammars (a development of (abstract) categorial grammars) map natural language sentences into lambda expressions (some logic expressed in the language of lambda expressions). It can happen that the resulting expressions cannot be built due to the low expressiveness of the logic. So new, more expressive, more adaptable logics should be discovered. Can the process be automated by using a criterion of "optimally highest level of understanding" (i.e. whether the machine can understand and operate with the formalized text or not)? E.g. if the machine cannot understand a valid natural language text, then the machine is obliged to discover a new logic into which the text can be translated and understood.</p> <p>There is a formalization of the notion of "understanding" (<a href="https://link.springer.com/chapter/10.1007/978-3-319-41649-6_11" rel="nofollow noreferrer">https://link.springer.com/chapter/10.1007/978-3-319-41649-6_11</a>) and this understanding can be optimized for the discovery of new logics.</p> <p>So - are there trends to do this?</p> <p>I am aware of inductive logic programming, which discovers rules in some fixed logic, but I am aiming for the discovery of the logic itself. There is also inductive metalogic programming, but I have managed to find only two articles about it (one in Japanese) and they seem not to be about new logics.</p> <p>I have also heard about Girard's framework of ludics, but it is a very narrowly bounded work on linear logic; a more general setting is required and ludics does not seem generalizable enough.</p>
34
natural language processing
If we spoke in TM-computable English, what would it look like?
https://cs.stackexchange.com/questions/62607/if-we-spoke-in-tm-computable-english-what-would-it-look-like
<p>We know that any "effectively computable" process is computable by a Turing machine (the Church-Turing thesis).</p> <p>Although it seems that "effectively computable" is still open to discussion, the intuitive interpretation is that any process that is "mechanical enough" can be computed by a Turing machine.</p> <p>Turing's initial objective was to axiomatise how humans do reasoning. Now what do you need to reason? Unambiguous definitions (axioms) and unambiguous rules. Then you are good to go. So in effect, if TMs successfully model how humans think, that's all they should need, and hence natural languages should be close candidates for becoming formal, TM-computable languages, as long as we impose that words have just one meaning.</p> <p><strong>My question is then, intuitively: is it enough for a language to be "unambiguous" to be computed by a Turing machine? Or are there more intuitive properties that the language needs to respect?</strong></p> <p>(I am currently trying to figure out whether the laws voted on in a parliament, although written in mundane English, have enough of these characteristics to be computed by an automaton.)</p>
<p>Under reasonable assumptions, there is a TM which can decide whether something is a valid piece of English legalese. We can safely assume that the length of a law is bounded by some finite number $k$, say the number of characters the fastest human reader can read in less than two hundred years. There is then only a finite number of possible strings that might or might not be valid laws. Hence there is a lookup table of finite size that contains, for every string of length at most $k$, the correct answer.</p> <p>It is not settled whether humans can in fact recognize languages that TMs can't recognize, so it's unclear whether the length restriction is truly necessary. See for example <a href="https://cs.stackexchange.com/questions/42311/how-is-the-computational-power-of-a-human-brain-comparing-to-a-turing-machine">this question</a>, or <a href="https://cs.stackexchange.com/questions/3271/human-computing-power-can-humans-decide-the-halting-problem-on-turing-machines">this question</a>.</p> <p>This is of course a boring answer, because this TM is completely impractical to construct. In actual practice you probably want to restrict yourself to a subset of English that can be parsed by an LL(k) parser (real English is <a href="https://english.stackexchange.com/questions/32447/is-there-an-ebnf-that-covers-all-of-english">not context-free</a>), or go for a language specifically designed to be easy for both humans and computers to understand. @LukasBarth mentioned <a href="https://mw.lojban.org/papri/Lojban" rel="nofollow noreferrer">Lojban</a> in the comments, but there are a number of such languages (programming languages are another example).</p>
35
natural language processing
Are the definitions of constructs in terms of lambda terms issues in implementation/design or uses of functional languages?
https://cs.stackexchange.com/questions/112384/are-the-definitions-of-constructs-in-terms-of-lambda-terms-issues-in-implementat
<p>In the Lambda Calculus, natural numbers, boolean values, list-processing functions, recursion, and the if function are defined in terms of lambda terms. For example, natural numbers are defined as Church numerals, and recursion is defined in terms of a fixed point of a function.</p> <p>Functional languages are said to be based on the Lambda Calculus.</p> <p>Who needs to be concerned with the above concepts in terms of lambda terms: the implementers/designers of the languages, and/or the programmers in the languages?</p> <ul> <li><p>Do functional programming languages define/implement the above concepts in terms of lambda terms?</p></li> <li><p>As programmers in ordinary functional programming languages (such as Haskell, Lisp, ML), is it correct that the above concepts are always given in the same way as in imperative languages, and that we never have to understand or deal with their definitions in terms of lambda terms?</p></li> </ul> <p>Thanks.</p>
<ol> <li>Those features are almost never implemented the way the lambda calculus defines them in modern programming language implementations. In some cases, using the lambda-calculus representation for datatypes has performance benefits (this is associated with so-called tagless representations). Historically, the Haskell compiler did use this representation early on, but has since dropped it. In either case, it never affected user code and was an implementation detail.</li> <li>Unless there's a compelling reason to, most code does not use said representations, so the programmer will never interface with them. So those concepts are generally the same as in imperative languages, although keep in mind that there are differences, since most functional languages are expression-oriented; e.g. 'if ... then ... else ...' is an expression which returns a value, rather than a statement.</li> </ol>
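For reference, the Church-numeral representation the question mentions can be written down directly in any language with first-class functions; a Python sketch:

```python
# Church numerals: the numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # decode a Church numeral by applying "increment" n times to 0
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
```

As the answer says, real implementations represent numbers as machine integers instead; this encoding is of theoretical rather than practical interest.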
36
natural language processing
What are the theoretical and practical contributions of Multiagent Systems to science?
https://cs.stackexchange.com/questions/49957/what-are-the-theoretical-and-practical-contributions-of-multiagent-systems-to-sc
<p>Speaking about multiagent systems (MAS) is about as fuzzy as speaking about artificial intelligence (AI) systems. They are in essence the distributed counterpart of AI.</p> <p>While there is no so-called "AI theorem", AI research has given rise to many subfields, algorithms and scores of theorems (e.g. game solving, fuzzy logic, expert systems, A*, logic programming... as well as Bayesian networks and constraint satisfaction). But I fail to see a similar impact from MAS.</p> <p>As far as I know, all the subfields related to MAS predate them. For instance:</p> <ul> <li><p>results about distributed computing (e.g. the Fischer-Lynch-Paterson theorem, replication and load balancing strategies, decentralization, resilience, distributed algorithms...)</p></li> <li><p>results from operations research (e.g. the makespan measure in scheduling)</p></li> <li><p>results about voting (e.g. Arrow's theorem in social choice theory)</p></li> <li><p>results about competitive systems (e.g. Nash equilibrium in game theory)</p></li> <li><p>results about interoperability (e.g. ontologies in natural language processing)</p></li> </ul> <p>As far as I have seen, "original" MAS contributions consist in the straightforward distribution of well-known problem-solving algorithms into distributed ones, whose most notable changes seem to be at the epistemological level.</p> <p>When the problem is decomposable, distribution consists in allocating subproblems to different agents:</p> <ul> <li>e.g. constraint satisfaction -> <em>distributed</em> constraint satisfaction: most notable change: variables now belong to agents; the algorithms are unchanged.</li> </ul> <p>When the problem is not decomposable, distribution consists in replicating the problem at the level of each agent, or in having a central agent solve it:</p> <ul> <li><p>e.g. reinforcement learning -> <em>distributed</em> reinforcement learning: agents apply the standard RL algorithm independently of each other.</p></li> <li><p>e.g. transport problem -> standard transport problem in operations research (no distribution)</p></li> </ul> <p>The only really original MAS algorithm I can think of is the Contract Net Protocol, which is in essence just a broadcasting algorithm. The only design constraint introduced by MAS that I can think of is privacy. Multirobot systems, often given as an example of MAS, have developed from standard robotics while largely ignoring the MAS literature.</p> <p><strong>Therefore, what are the original contributions of MAS?</strong></p> <p>Corollary question: why are they relevant as a standalone research field rather than being a common placeholder name for the different fields that predate them?</p>
37
natural language processing
string matching algorithm question for matching approximately similar names between two lists
https://cs.stackexchange.com/questions/171466/string-matching-algorithm-question-for-matching-approximately-similar-names-betw
<p>The focus of this question is on natural language processing, specifically matching names between 2 lists. I am looking at employees that work in the same organization, however I obtained data from two different databases. Unfortunately, there is no unique key or ID that matches the users between lists, so I have to rely upon a string matching between names.</p> <p>The challenge is that the names on the two lists are approximately similar and there are different levels of noise depending on the name. So I know that the bias should be for matches between names, however it is hard to impose a threshold on similarity because the names different in length and the amount of noise in each name.</p> <p>For example, here is a little sample of some simulated data.</p> <p>List 1:</p> <ul> <li>Jim Smith</li> <li>J. Smith</li> <li>Carol Sanchez</li> <li>Shantanu Vishwanathan</li> </ul> <p>List 2:</p> <ul> <li>J. Smith.</li> <li>C. Sanchez</li> <li>Shantannu Vishvanathan</li> <li>S. Vishvanathan.</li> </ul> <p>I use the Julia language, but the language itself is not important. I really just want to figure out a good algorithm to do this type of matching. There are a lot of names to go through, so I am trying to limit the amount of manual checking and such that I need to do.</p> <p>When I reference a string distance, I mean something like the Levenshtein distance or Jaccard distance between two strings. There are various metrics that might work.</p> <p>So the naive approach is something like:</p> <ol> <li>remove punctuation and spaces from first and last names in both list 1 and list 2.</li> <li>compute string distance between the last name of the first entry (source) in list 1 and all of the last names of the entries on list 2 (called targets). a. collect the 3 target names with the lowest distance scores into a new vector called the subsample vector. b. 
compute whether the first letters of the first names match between the source name and target first names in the subsample vector. Remove any names from the subsample vector that start with a different letter than the source first name. c. assign a match if the string distance between the first name of the target and the first name of the source is less than some threshold.</li> </ol> <p>So this is a very naive algorithm. I can add some more nuance by using a sum of different string distances instead of only a single string distance, etc. So I have some options there.</p> <p>But I was not sure if there was a better way or algorithm to do something like this. For example, would it make sense to treat this problem as a constraint programming problem? That was just one idea that I had. Also, would it make sense to find the longest common substring between last names? I can try a few different approaches and benchmark the accuracy, that goes without saying. But I just wanted to see if there was some obvious algorithm that I was not aware of.</p> <p>If anyone has a suggestion, please let me know. Hopefully this question conforms to the guidelines of the CS Stack Exchange. I tried to focus on the specific algorithm and also referenced the NLP nature of the problem.</p>
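The naive algorithm described in the question can be written down concretely. Below is a rough sketch in Python (standing in for Julia); the helper names, the subsample size of 3, and the edit-distance threshold of 2 are illustrative assumptions, not recommendations:

```python
import re

def normalize(name):
    """Strip punctuation and whitespace, lowercase."""
    return re.sub(r"[^a-z]", "", name.lower())

def levenshtein(a, b):
    """Classic two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def match(source, targets, k=3, threshold=2):
    """Naive matcher (assumes 'First Last' shaped names): rank targets by
    last-name distance, keep the k closest, filter on the first letter of
    the first name, then accept the best remaining candidate whose
    first-name distance is within the threshold."""
    s_first, s_last = (normalize(p) for p in source.split(None, 1))
    scored = sorted(targets,
                    key=lambda t: levenshtein(s_last, normalize(t.split(None, 1)[1])))
    subsample = [t for t in scored[:k]
                 if normalize(t.split(None, 1)[0])[:1] == s_first[:1]]
    for t in subsample:
        if levenshtein(s_first, normalize(t.split(None, 1)[0])) <= threshold:
            return t
    return None

list2 = ["J. Smith.", "C. Sanchez", "Shantannu Vishvanathan", "S. Vishvanathan."]
print(match("Shantanu Vishwanathan", list2))  # -> Shantannu Vishvanathan
```

Note that this sketch also exhibits the threshold problem the question raises: initials such as "C." are far from "Carol" under raw edit distance, so a fixed threshold misses them.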
38
natural language processing
Are there any examples of programming synthesis in vulnerability research?
https://cs.stackexchange.com/questions/63242/are-there-any-examples-of-programming-synthesis-in-vulnerability-research
<h2>Program Synthesis</h2> <p>To borrow from Microsoft: </p> <blockquote> <p><a href="https://homes.cs.washington.edu/~bornholt/post/synthesis-for-architects.html" rel="nofollow">Program synthesis</a> is the task of automatically discovering an executable piece of code given user intent expressed using various forms of constraints such as input-output examples, demonstrations, natural language, etc.</p> </blockquote> <p>One of the best program synthesis examples I've come across created new authenticated encryption (AE) schemes; the <a href="https://eprint.iacr.org/2015/624.pdf" rel="nofollow">paper</a> also automated the analysis, including privacy and authenticity security proofs, for the synthesized AE schemes.</p> <p>The take-away from program synthesis is: (1) it <em>can</em> provide novel solutions to a problem through an automated process and (2) it is possible to verify the solution's correctness.</p> <h2>Vulnerability Research &amp; Program Synthesis</h2> <p>There has been a great deal of effort in recent years toward automating vulnerability analysis. A culmination of this effort was demonstrated this summer at the <a href="https://www.cybergrandchallenge.com/" rel="nofollow">Cyber Grand Challenge</a>.</p> <p>It would seem that program synthesis has great potential in automated vulnerability research. </p> <ul> <li>Perhaps it could derive new exploits</li> <li>Perhaps it could derive new patches or tests</li> </ul> <p>My question is this: Are there any examples of program synthesis in vulnerability research? </p> <p>I am having a hard time finding any examples, and I would like to know if anyone has come across this.</p>
<p>Yes, there's plenty of work on synthesizing exploits using program synthesis. One of the seminal papers was:</p> <p>Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. <a href="https://users.ece.cmu.edu/~dbrumley/pdf/Avgerinos%20et%20al._2014_Automatic%20Exploit%20Generation.pdf" rel="nofollow">Automatic Exploit Generation</a>. Communications of the ACM, 57(2):74–84, 2014.</p> <p>This is a cleaned-up and general-audience version of a paper published at NDSS in 2011.</p> <p>You should be able to use Google Scholar to find lots of other related work and follow-on work, by finding papers that cite it. Also, many of the research teams participating in the Cyber Grand Challenge have been publishing on this subject for the past several years; look up each of their web pages and go check out their papers, and you should be able to find more.</p> <p>Generally speaking, the challenge typically lies more in finding <em>inputs</em> to a vulnerable program that will cause it to do something bad, rather than in finding <em>programs</em> that directly do something bad themselves. Thus, the techniques tend to be slightly different from standard work on program synthesis, because the challenges are slightly different in this space: here it's often more about synthesizing data rather than code.</p> <p>There is also plenty of work on automated patch creation. There, the big challenge is usually identifying which part of the code is vulnerable and what the vulnerability is; once you know that, the patch synthesis is usually rather straightforward. Consequently, the challenge typically seems to be more about program <em>analysis</em> than <em>synthesis</em>, though of course there are certainly aspects of both that do need to be solved, and different papers have a different focus.</p>
39
natural language processing
Numeral systems other than unary used in nature or in animal and human behaviours
https://cs.stackexchange.com/questions/4906/numeral-systems-other-than-unary-used-in-nature-or-in-animal-and-human-behaviour
<p>Representing numeric values using <a href="http://en.wikipedia.org/wiki/Positional_notation">positional notation</a> is one of the milestones in the history of arithmetic. The Babylonians used a base 60 system, the Maya a base 20 system; the base 10 system became "the standard" used by modern civilizations; digital computers use the <a href="http://en.wikipedia.org/wiki/Binary_numeral_system">binary numeral system</a>, ....</p> <p>But if we look at nature, we find that life itself "heavily relies" on an alphabet of 4 symbols: the <a href="http://en.wikipedia.org/wiki/DNA">DNA</a> has four <a href="http://en.wikipedia.org/wiki/Base_%28chemistry%29">(chemical) bases</a>: adenine, cytosine, guanine and thymine (A, C, G, T) that are used to store the "instructions and information" to generate and drive the parts of a living organism.</p> <blockquote> But on a higher level, are there <em>"natural algorithms"</em> (algorithms found in natural processes, in animal behaviours or in everyday human behaviours) that take advantage of "numeral systems" other than the <a href="http://en.wikipedia.org/wiki/Unary_numeral_system">unary representation</a>? </blockquote> <p>To be more precise, I would like to know whether or not natural processes or living creatures make use of a finite, discrete alphabet of "symbols" and use them in a manner similar to a positional notation: the symbols are placed together and used as a whole to represent "something" (an action, an information, an object, ...) among many other possibilities ... a sort of "index" into an exponential number of possibilities.</p> <p>Another (obvious) non-numeric example is human language, where (in general) a finite number of sounds (an "alphabet") are combined to form the words.</p>
<p>There is at least one example where a string of symbols from an alphabet is also used: proteins.</p> <p>Proteins consist of chains of 20 different amino acids (usually; in some cases it's 21 or 22), and the sequence of amino acids determines what a given protein does.</p> <p>This example is closely related to the DNA example you gave, because each amino acid in a protein is encoded by a triplet of bases in DNA.</p> <p>For more information, see <a href="http://en.wikipedia.org/wiki/Genetic_code" rel="nofollow">the article <em>Genetic code</em> on Wikipedia</a>.</p>
40
natural language processing
Language of words that begin and end with same symbol and have equal numbers of a&#39;s and b&#39;s
https://cs.stackexchange.com/questions/52324/language-of-words-that-begin-and-end-with-same-symbol-and-have-equal-numbers-of
<p>I wish to find the CFG for a language on two symbols (say <em>a</em> and <em>b</em>) whose words begin and terminate with the same symbol, and have equal quantities of <em>a</em>'s and <em>b</em>'s. What is the thought process I should use for finding such a grammar? What is the most natural or simplest grammar for this language? I hope you'll explain your answer. Hopefully this will suggest some patterns I should look for when trying to synthesise a grammar for a specified language. </p> <hr> <p>Here's the best solution I could come up with on my own:</p> <p>$ S \to aTbbTa \mid bTaaTb$</p> <p>$ T \to abT \mid baT \mid aTb \mid bTa \mid \epsilon$</p> <p>I <em>think</em> this grammar is correct. Informal argument: I can see that if the word is $awa$ ($w$ being a substring), then there are at least two $b$'s in $w$ that are adjacent. This suggests the form $aTbbTa$ in the first production rule. (The argument holds if the roles of the two symbols are reversed). The second set of production rules is meant to generate every possible word of the language while keeping the number of $a$'s and $b$'s equal. Symmetry suggests that the rule $abT$ should be accompanied by the rule $baT$, and $aTb$ by $bTa$. Initially I was wondering if any of the rules in the second set was redundant, but I don't think so - I can think of words that couldn't be formed if any of the second set of production rules were missing. Rather I need to be sure there aren't any words from the language that my grammar can't generate.</p> <p>[I guess I would need induction to prove my grammar generates every possible word in the language. But right now I'm more interested in the thought process behind coming up with a grammar, and as far as I know, induction (in general) doesn't help much in synthesising a solution/rule/formula/etc.; it principally serves to verify a purported solution.]</p>
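Short of an inductive proof, one way to gain confidence in a candidate grammar like the one above is to enumerate everything it generates up to a length bound and compare against a direct membership test. A sketch in Python (the bounded breadth-first expansion of sentential forms is my own choice of method, not part of the question); it checks soundness — every generated word really is in the language — which follows from the fact that every $T$-rule adds one $a$ and one $b$, and the $S$-rules add two of each with matching end symbols:

```python
# The grammar proposed in the question (epsilon written as "").
GRAMMAR = {
    "S": ["aTbbTa", "bTaaTb"],
    "T": ["abT", "baT", "aTb", "bTa", ""],
}

def in_language(w):
    """Direct test: nonempty, same first and last symbol, equal counts."""
    return bool(w) and w[0] == w[-1] and w.count("a") == w.count("b")

def generated(grammar, max_len):
    """All terminal strings of length <= max_len derivable from S, by
    breadth-first expansion of sentential forms.  A form is pruned once
    its terminal symbols alone exceed max_len (terminals are never
    erased, so such a form cannot shrink back under the limit)."""
    words, seen, frontier = set(), set(), {"S"}
    while frontier:
        form = frontier.pop()
        nt = next((c for c in form if c in grammar), None)
        if nt is None:                      # fully terminal sentential form
            if len(form) <= max_len:
                words.add(form)
            continue
        i = form.index(nt)
        for rhs in grammar[nt]:
            new = form[:i] + rhs + form[i + 1:]
            if sum(c not in grammar for c in new) <= max_len and new not in seen:
                seen.add(new)
                frontier.add(new)
    return words

gen = generated(GRAMMAR, 8)
print(all(in_language(w) for w in gen))  # soundness check
```

Completeness (no word of the language missing from the generated set) could be checked the same way by comparing against a brute-force enumeration of all strings up to the bound.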
<p>Here is the thought process I would use. I would notice that your language $L$ can be written as $L= L_1 \cap L_2$, where $L_1$ is the set of words that begin and end with the same symbol, and $L_2$ is the set of words that have equal quantities of <em>a</em>'s and <em>b</em>'s.</p> <p>Then, I would note that $L_1$ is regular and so can be expressed by a simple DFA (with 5 states). Also, I would note that $L_2$ is context-free and can be expressed by a simple CFG: e.g., $S \to \varepsilon \mid aSb \mid bSa \mid SS$.</p> <p>Finally, I would recall the standard closure property: the intersection of a regular language and a context-free language is context-free. It follows that $L$ is context-free. The proof of this standard closure property includes a construction that shows how to build a CFG for $L_1 \cap L_2$, given a DFA for $L_1$ and a CFG for $L_2$. This gives a CFG for your language. The resulting CFG will have $5^2=25$ non-terminals; some of them are unreachable and can be pruned, but the result is still quite messy.</p> <p>The resulting CFG isn't the simplest or smallest CFG for your language. Whether it is the most natural is open for debate. But since you asked for the thought process behind coming up with a grammar, this illustrates one general technique for constructing CFG's: separate out the part that requires something more than finite state (equal quantities of <em>a</em>'s and <em>b</em>'s) from the part that can be handled with a finite-state automaton.</p>
41
natural language processing
How to transform lambda function to multi-argument lambda function and how to rewrite or approximate terms?
https://cs.stackexchange.com/questions/96533/how-to-transform-lambda-function-to-multi-argument-lambda-function-and-how-to-re
<p>I am trying to do the formal semantics (Montague grammar, abstract categorial grammar) of natural language and encode the sentence <code>John is boss</code>. The type system has two primitive types - <code>e</code> for entities and <code>t</code> for the Boolean type. <code>John</code> has type <code>e</code>, <code>is</code> has type <code>(e-&gt;t)-&gt;(e-&gt;t)</code> and <code>boss</code> has type <code>(e-&gt;t)</code>. The full sentence is translated as the lambda expression:</p> <pre><code>(is(boss))(John): t </code></pre> <p>I don't like the chaining of functional application, and a more intuitive presentation could be obtained by introducing a new function <code>IS</code> of type <code>(e, (e-&gt;t)) -&gt; t</code>, and hence the expression could become:</p> <pre><code>IS(John, boss): t </code></pre> <p>And by moving <code>boss</code> from the arguments into the function index, we can arrive at the standard predicate expression with a new predicate-function:</p> <pre><code>IS_BOSS(John): t </code></pre> <p>I have two questions regarding this example:</p> <ul> <li>Does standard lambda calculus have multi-argument functions? I guess that such calculi have multi-argument functions via currying, and hence the real type of <code>IS</code> is <code>e-&gt;(e-&gt;t)-&gt;t</code>. So, this part seems to be already answered.</li> <li>And now the main question - <strong>does lambda calculus allow function transformation</strong> - e.g. is there some operation which transforms <code>is:(e-&gt;t)-&gt;(e-&gt;t)</code> into <code>IS:e-&gt;(e-&gt;t)-&gt;t</code>? 
<strong>Are there some apparatus/algorithms/methods for how I can express the function</strong> <code>is</code> <strong>via the function</strong> <code>IS</code><strong>, and how can I transform terms involving</strong> <code>is</code> <strong>into terms that do not contain</strong> <code>is</code> <strong>and that contain</strong> <code>IS</code><strong>?</strong> Are there some rewriting apparatus available for this in lambda calculus?</li> </ul> <p>This question is inspired by the book <a href="https://www.amazon.co.uk/Elements-Formal-Semantics-Introduction-Mathematical/dp/0748640436" rel="nofollow noreferrer">https://www.amazon.co.uk/Elements-Formal-Semantics-Introduction-Mathematical/dp/0748640436</a> and uses notions and notation from this book.</p> <p>Important note. I do not expect that <code>is</code> and <code>IS</code> are identical functions (but I would be glad to consider the case when some kind of identity is assumed as well). I expect that <code>is</code> is a more general function (that is used for the processing of the raw text) and that I try to express it using a more specialised function <code>IS</code> (which can convey more specific meaning, e.g. borrowing from pragmatics (a branch of linguistics): I can analyse the context and determine that in this context such detail is desirable). Maybe a rewrite operation should be used, but maybe some other tools should be used - simply, how can I transform terms using these functions? <strong>Maybe I am just trying to approximate the function</strong> <code>is</code> <strong>with the function</strong> <code>IS</code> <strong>(of a different type!) and</strong> <code>is</code><strong>-terms with</strong> <code>IS</code><strong>-terms? Are there such notions or theories?</strong></p> <p><em>Maybe I should look at higher-order rewriting of lambda terms? Is there an example available of how my terms could be rewritten inside such a framework?</em></p>
<p>You are looking for <a href="https://en.wikipedia.org/wiki/Currying#Lambda_calculi" rel="nofollow noreferrer">currying and uncurrying</a> which transform functions of type $A \times B \to C$ to functions of type $A \to (B \to C)$, and vice versa. Currying takes <code>IS</code> to <code>is</code>, while uncurrying takes <code>is</code> to <code>IS</code>. This is a standard and very basic technique in $\lambda$-calculus. There are many manifestations of currying and uncurrying, for instance in arithmetic it is the identity $c^{a \cdot b} = (c ^ b) ^ a$.</p> <p>It is largely a matter of taste how you write multi-argument functions. The curried form $A \to (B \to C)$ has the advantage that you can apply the function only to the first argument. In your case <code>is boss</code> is a useful concept which is expressible very neatly. With the other function you have to write <code>λ x . IS (x, boss)</code>.</p> <p>Note however that sometimes people do <em>not</em> include product types $A \times B$ in their versions of $\lambda$-calculus, in which case of course currying is unavailable (but nothing much is lost).</p>
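The currying/uncurrying transformation can be sketched concretely; here Python stands in for the $\lambda$-calculus, and the toy model of entities as strings and predicates as sets is an invented illustration, not part of the question:

```python
def curry(f):
    """(A x B -> C)  ~>  A -> (B -> C)"""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """A -> (B -> C)  ~>  (A x B -> C)"""
    return lambda a, b: g(a)(b)

# Toy model: entities are strings, predicates (type e -> t) are membership tests.
bosses = {"John"}
boss = lambda x: x in bosses            # boss : e -> t

# IS in uncurried, two-argument form: (e x (e -> t)) -> t
IS = lambda x, p: p(x)

print(IS("John", boss))          # True
print(curry(IS)("John")(boss))   # True: curried type e -> (e -> t) -> t
print(uncurry(curry(IS))("John", boss))  # True: round trip
```

The round trip at the end illustrates that the two forms carry the same information, which is why the choice between them is, as the answer says, largely a matter of taste.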
42
natural language processing
Is a Turing Machine that only takes strings of the form $0^*$ Turing Complete?
https://cs.stackexchange.com/questions/24125/is-a-turing-machine-that-only-takes-strings-of-the-form-0-turing-complete
<p>You have a Turing machine that only processes input of the form $0^*$. If it is given an input without 0's, it will simply halt without accepting or doing anything else. Is it Turing Complete?</p> <p>The set $0^*$ is countably infinite, since you can make the bijective function $f : 0^* \to \mathbb{N}$:</p> <p>$f(x) = length(x)$</p> <p>Where $length(x)$ is the length of the string (so you treat the strings as Peano numbers). </p> <p>I understand that the set of all programs (the programs that a Turing machine can run) is countable, and that the set of all Turing machines is also countable. But can the set of strings that the Turing machine can process (with no guarantees of halting) be only countably infinite (as in this case), or does it have to be uncountable? </p> <p>My understanding of undecidable problems with regards to Turing machines is that they arise because there are languages that have a cardinality strictly greater than the natural numbers, e.g. $B^*$, where $B = \{0,1\}$, which has a cardinality equal to the real numbers. It seems to me that, although you can encode any integer with the language $0^*$, you can't encode an arbitrary language. The problem is: how can you encode recursively enumerable languages when all you have is unary notation? If this is indeed impossible (though I have a feeling it is possible; I can't see how the representation of numbers should be a fundamental hindrance), then it turns out that this particular Turing machine is <em>not</em> Turing Complete (or maybe you would say that it is not really a Turing machine). </p>
<p>It looks like Turing machines remain Turing-complete when the alphabet is restricted to have one symbol,&nbsp;$0$. First, preprocess your input by replacing every $0$ with $0\square$, where $\square$ is the blank symbol. Now, you can simulate a 2-symbol alphabet by using $0\square$ to represent zero and $\square0$ to represent one.</p> <blockquote> <p>My understanding of undecidable problems with regards to Turing machines is that they arise because there are languages that have a cardinality strictly greater than the natural numbers</p> </blockquote> <p>This is incorrect. A language is, by definition, a set of finite strings and there are only countably many finite strings over any finite alphabet (proof: you get a bijection with the natural numbers by treating each string as a number written in base-$k$, where $k$ is the size of the alphabet). However, there are uncountably many different languages and that observation along with the countability of the set of Turing machines lets you deduce that there must be some undecidable languages.</p> <blockquote> <p>It seems to me that, although you can encode any integer with the language 0∗, you can't encode an arbitrary language.</p> </blockquote> <p>This is also incorrect. You can encode any natural number $n$ with the single string $0^n$ and, therefore, you can code any set of natural numbers with a unary language, i.e., a subset of $0^*$. And you can code any set of natural numbers as a language. You can also code any language over an alphabet of size $k&gt;1$ by using the base-$k$ trick I described above.</p>
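The base-$k$ bijection mentioned in the answer can be made concrete. One refinement worth noting: plain positional base-$k$ notation collides on leading zeros, so the usual way to get a true bijection is <em>bijective</em> base-$k$, where the empty string maps to 0 and each symbol is a digit with value $1$ to $k$. A sketch in Python:

```python
def string_to_nat(s, alphabet):
    """Bijective base-k: '' -> 0, each symbol is a digit with value 1..k."""
    k = len(alphabet)
    n = 0
    for ch in s:
        n = n * k + alphabet.index(ch) + 1
    return n

def nat_to_string(n, alphabet):
    """Inverse of string_to_nat."""
    k = len(alphabet)
    out = []
    while n > 0:
        n, r = divmod(n - 1, k)
        out.append(alphabet[r])
    return "".join(reversed(out))

# Every natural number names exactly one string over {0, 1}:
# 0 -> '', 1 -> '0', 2 -> '1', 3 -> '00', 4 -> '01', ...
for n in range(8):
    print(n, repr(nat_to_string(n, "01")))
```

With an alphabet of size 1 this degenerates to exactly the unary coding $n \mapsto 0^n$ used in the question.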
43
natural language processing
Specific Examples with Explanation of Similarities and Differences of how Distance Functions are used Across Different Fields
https://cs.stackexchange.com/questions/80327/specific-examples-with-explanation-of-similarities-and-differences-of-how-distan
<p>I took a tangent from a <a href="https://github.com/davidkitfriedman/segment_fusion/blob/master/cs_stackexchange_question.md" rel="nofollow noreferrer">student project</a> I had done a number of years ago and spent some time studying distance functions.</p> <p>(please note that the above link contains the full question with links as I don't have sufficient reputation to post more than two links)</p> <p>I found this <a href="https://catalog.lib.ncsu.edu/record/NCSU3496291" rel="nofollow noreferrer">textbook on data mining</a> which includes a chapter on Similarity and Distances (chapter 3 of <em>Data Mining The Textbook</em> by Charu C. Aggarwal)</p> <p>And so the text says:</p> <blockquote> <p>Sometimes, data analysts use the Euclidean function as a “black box” without much thought about the overall impact of such a choice. It is not uncommon for an inexperienced analyst to invest significant effort in the algorithmic design of a data mining problem, while treating the distance function subroutine as an afterthought. This is a mistake. </p> </blockquote> <p>And so there are a number of sections addressing quantitative data, categorical, text, etc. </p> <p>I'd be curious about how distance functions are used in similar and different ways across varied fields such as data mining, machine learning, computer vision, natural language processing, error-correcting codes, or for example in specific applications like spell checkers. </p> <p>On the one hand, in my independent study, I saw various passages speaking about the necessity of applying domain knowledge. 
</p> <p>From 3.6 Supervised Similarity Functions of the above mentioned book:</p> <blockquote> <p>In practice, the relevance of a feature or the choice of distance function heavily depends on the domain at hand.</p> </blockquote> <p>Also in An Introduction to Statistical Learning with Applications in R they mention:</p> <blockquote> <p>The choice of dissimilarity measure is very important, as it has a strong effect on the resulting dendrogram. In general, careful attention should be paid to the type of data being clustered and the scientific question at hand. These considerations should determine what type of dissimilarity measure is used for hierarchical clustering</p> </blockquote> <p>Section 10.3.3 Practical Issues in Clustering also talks about some related issues. </p> <p>On the other hand I took a brief look at this Ph.D. thesis by Ofir Pele on distance functions, and so at the beginning it says:</p> <blockquote> <p>Our proposed methods have been successfully used both by computer vision researchers and by researchers in other fields. The success of the methods in other fields is probably because the noise characteristics in those fields are similar to image noise characteristics.</p> </blockquote> <p>which suggested to me that there could be some overlap in different cases; however, I haven't read those papers in detail. </p> <p>Anyway, the question is: what are similarities and differences in how distance functions are used among disparate fields, and in what ways do those similarities and differences arise. </p> <h2>Few Thoughts on this Question</h2> <p>I thought to myself this may be a bit of an unusual question. After all people don't generally study looping in Java, Python, Lisp and Prolog and then go on to study variable declaration in five other programming languages. People more often study one particular programming language as a whole not a certain construct in multiple ones. 
But on the other hand within programming languages as an academic discipline at universities people might certainly talk about similarities and dissimilarities. Those very similarities and differences could be seen in light of differences as to whether the code is (usually) interpreted or compiled, or whether it tends to favor an imperative style or a recursive one. One might also seek to see how various characteristics developed within the evolution of programming languages throughout history and how different programming languages built on earlier ones. Certain design decisions could very well derive from certain goals: ease of learning, readability, speed, verifiability. </p> <p>Similarly in human languages people don't usually study a set of words or phrases in multiple human languages but usually study one particular human language as a whole. Yet linguists might certainly study similarities and differences in particular words or phrases across languages as that can be indicative of how various human languages evolved and could also be indicative of various aspects of culture. I could also mention that browsing through Encyclopedia of Distances I read about how linguists have sought to create ways to measure distances between languages. </p> <p>Also, as a practical application of such a cross-sectional approach within human languages, one might think of the airline and travel industry where employees learn how to say particular words and phrases in multiple languages so that it is possible to communicate with the passengers. </p> <p>So anyway I think such comparisons could be interesting. </p> <p>I could mention that I don't think it's necessary to have a huge number of examples. I think a few would be fine with some brief explanation. I think perhaps more interesting than a lot of breadth would be how constraints, goals, or the problem domain lead to different approaches. 
I'm not an expert at all in any of those fields, so I may not be familiar with some of the terms. For people who find this page later on and are trying to find more details there can be a few resources listed such as books, websites, or search strategies. </p> <p>I've thought about how I might answer this question based on my independent study and what I looked into, so I could seek to work on my response tonight and probably tomorrow. As long as the question meets the site guidelines perhaps there could be then at least one response. Other responses though might be more comprehensive, more incisive and trenchant, or better educationally. So a better response would be more deserving of the credit from the asker and also reputation points. </p> <p>I was thinking of waiting a week or two before awarding an answer; however, if a response is made beforehand which I think is good I could just decide to award it to that one. </p> <p>There also might be a response before I post mine, and so if that one is good and I haven't finished mine I might not post mine at all. </p> <p>Hope this meets the site guidelines, but if not let me know. </p>
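As a concrete point of comparison for the measures the question mentions (Hamming distance, Levenshtein/edit distance, and the cosine measure on bag-of-words counts), here is a small sketch in Python; the example strings are invented:

```python
from collections import Counter
from math import sqrt

def hamming(a, b):
    """Positionwise mismatches; only defined for equal-length strings."""
    if len(a) != len(b):
        raise ValueError("Hamming distance needs equal lengths")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Edit distance via the standard two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cosine(doc_a, doc_b):
    """Cosine similarity of bag-of-words count vectors; note it is
    insensitive to document length, unlike Euclidean distance."""
    va, vb = Counter(doc_a.split()), Counter(doc_b.split())
    dot = sum(va[w] * vb[w] for w in va)
    return dot / (sqrt(sum(c * c for c in va.values())) *
                  sqrt(sum(c * c for c in vb.values())))

print(hamming("karolin", "kathrin"))     # 3
print(levenshtein("kitten", "sitting"))  # 3
print(cosine("data mining text", "data mining text data mining text"))  # 1.0
```

The last line illustrates the length-insensitivity point: a document and its triple repetition are identical under the cosine measure, which is exactly the property one may or may not want depending on the data.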
<p>Okay, so here are some of my thoughts along with some links to some of the things that I found interesting in my independent study. </p> <p>(please note that a <a href="https://github.com/davidkitfriedman/segment_fusion/blob/master/cs_stackexchange_response.md" rel="nofollow noreferrer">version with additional links is available on GitHub</a>)</p> <p>In multiple books I found sections speaking about clustering. In particular within these three books:</p> <ul> <li><em>Introduction to Artificial Intelligence</em> by Mariusz Flasiński</li> <li><em>An Introduction to Statistical Learning with Applications in R</em> by Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani</li> <li><em>Data Mining: The Textbook</em> by Charu C. Aggarwal</li> </ul> <p>there are sections speaking about hierarchical clustering.</p> <p>For hierarchical clustering one can use an arbitrary dissimilarity measure and so they talk about that choice. </p> <p>It's not as though one dissimilarity measure predominates in artificial intelligence, another in statistical or machine learning, and then a third in data mining, but the dissimilarity measure is determined by the data, and what the data means in the context of the situation.</p> <p>In the statistical learning book an example is given in the section on hierarchical clustering of how an online retailer might prefer a correlation based dissimilarity measure to the Euclidean distance. Clustering together shoppers who purchase less from shoppers who purchase more may not be as desirable as clustering together shoppers who purchase similar items.</p> <p>In the data mining textbook an example is given of how the cosine measure can be used as a comparison between texts. That section speaks about how with a bag-of-words representation document length would be prominent. That wouldn't necessarily be desirable as similar documents on the same or similar subjects but with different lengths wouldn't get grouped together. 
</p> <p>In any event, as stated above, the dissimilarity measure is not determined by the field, nor is it determined by the domain (say medical, legal, business, etc.), but is driven by the nature of the data and what that data means. </p> <p>In the case of hierarchical clustering then, if the algorithm is the same and the dissimilarity measures are being driven by the data (so they could certainly be the same across the three fields), then for that case these three fields are using distance functions in similar ways. </p> <p>To go out on a tangent, this raises a question in my mind concerning what the distinctions are between these three fields (and perhaps other fields). </p> <p>I thought it was interesting to read in the lead of the Wikipedia article on data mining that the title of a book was changed from <em>Practical machine learning</em> to <em>Data mining: Practical machine learning tools and techniques with Java</em> and the reason given was that it was largely for marketing purposes. </p> <p>And so the Wikipedia articles on machine learning and data mining both say that those terms are buzzwords. The end of the lead for the article on machine learning speaks about how projects may fail to work because the problems can be difficult. </p> <p>On that note I found it interesting in my independent study to read in the Wikipedia article on the <a href="https://en.wikipedia.org/wiki/History_of_artificial_intelligence" rel="nofollow noreferrer">history of artificial intelligence</a> how the initial optimism and few-strings-attached funding were then met with disappointment in the mid to late 1970's. 
So the article speaks about various failures (machine translation) and also successes (the DART battle management system).</p> <p>In terms of distinctions between the three fields the Wikipedia article on machine learning states:</p> <blockquote> <p>Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning</p> </blockquote> <p>Yet on the other hand in <em>Data Mining: The Textbook</em> the preface speaks of classification as being one of the four main super problems of data mining where it seems to me that classification as defined in chapter 10 of the book is basically supervised learning. </p> <p>In any event it suggests in my mind that the distinctions between these three fields might be a bit fuzzy, and the example above indicates that naming decisions could be driven by a variety of factors including marketing. </p> <p>Based on my independent study, to my ear, artificial intelligence being the first term coined, carries with it a connotation of a more theoretical field while machine learning and data mining are more focused on developing practical solutions. </p> <p>In the text on artificial intelligence chapter 17 includes a section called Determinants of AI Development where artificial intelligence is seen as drawing from such fields as: computer science, biology, neuroscience, physics, mathematics, logics, philosophy, linguistics, and psychology. </p> <p>There's a discussion of Searle's Chinese room, strong AI and weak AI, and a chapter on theories of intelligence in philosophy and psychology. There's a brief section on artificial intelligence as it pertains to social intelligence, emotional intelligence, and creativity. 
</p> <p>In Russell and Norvig's book <em>Artificial Intelligence: A Modern Approach</em> there is a chapter on philosophical foundations.</p> <p>In comparison within <em>Data Mining: The Textbook</em> privacy is addressed but essentially as a technical problem. </p> <p>In <em>Elements of Statistical Learning</em> (from which the above mentioned statistical learning book is based) the chapters look to be all of a technical nature. </p> <p>So I think this sentence from History and relationships to other fields section of the machine learning article is consistent with that assessment:</p> <blockquote> <p>The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature.</p> </blockquote> <p>Thinking back to the article on the history of artificial intelligence one might wonder about how various funders of different kinds (government, corporate, philanthropic, etc.) began to see AI as a longer term prospect while machine learning and data mining were more likely to produce working results more quickly. </p> <p>I was curious about books in these three fields that were written from the perspective of a particular domain (medical, business, legal, etc.). So I did Google searches of the form: <code>textbook &lt;field&gt; &lt;domain&gt;</code> for field being either artificial intelligence, machine learning, or data mining, and domain being either medical, business, or legal. So a variety of different books have been written from the standpoint of a particular domain.</p> <p>Data miners might be able to apply domain knowledge in the course of exploratory analysis, or in other kinds of analysis. 
</p> <hr> <p>So back to distance functions.</p> <p>I see a difference in the way distance functions are used in hierarchical clustering within the fields artificial intelligence, machine learning, data mining and applications discussed in Ofir Pele's thesis within computer vision (which could be seen as a subfield of machine learning). Simply put various applications of the distance functions developed do not involve clustering. This is true for the multiple view geometry application mentioned at the beginning of the abstract for the thesis. </p> <p>In error-correcting codes the simple concept of Hamming distance is used; however in this case it seems to me that unlike the hierarchical clustering case no distance function choice is going on. Hamming distance arises from the mathematical model which in turn is from the physics of information transmission over noisy channels. </p> <p>In the case of a spell checker a simple implementation could just list words that are say 1 or 2 hops away under the edit distance; however, a more sophisticated approach can work better.</p> <p>Spell checkers can take into account properties of language, keyboards, common errors, etc.</p> <p>At another level of sophistication one could have spell checkers that seek to automatically learn characteristics of users so as to improve suggestions and other aspects of performance. </p> <p>In the case of a spell checker it doesn't seem to me that a distance function is necessarily warranted or helpful. </p> <p>If a mathematical concept doesn't seem useful for an application then one doesn't have to use it. 
</p> <hr> <p>I also wrote a summary of my independent study including listings of some of the various sources I went through:</p> <p>Summary of Independent Study (link removed, see top of response)</p> <p>There is also an extended response which is available here:</p> <p>Extended response to distance function question on CS StackExchange (link removed, see top of response)</p> <p>The length of the extended response ended up growing to a size of about 14 regular pages, so I can summarize what it is about:</p> <p>In the extended response I continue the tangent in the middle which starts by comparing the fields of artificial intelligence, machine learning, and data mining. I speak about the question of whether research has shifted from universities to companies, the factor of motivation in education, economic systems and other systems for the organization of labor, social hierarchy, economic inequality, and climate change. </p> <p>To a certain extent I wonder if I did not entirely follow my own advice in terms of more breadth rather than depth; however, perhaps that's partly just the nature of the Internet.</p> <p>I didn't do any portions of this for a class, but worked on it over the past few days. I brought in things that I had seen or read over the years.</p> <p>I think that a lot of various other people have probably had similar thoughts and questions.</p>
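The edit-distance idea from the spell-checker discussion above can be made concrete. A minimal sketch (standard dynamic-programming Levenshtein distance; the function names are my own, and a real spell checker would layer language, keyboard, and user knowledge on top of this):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn a into b (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def suggest(word, vocabulary, max_edits=2):
    """Naive suggestion list: vocabulary words within max_edits hops."""
    return [w for w in vocabulary if levenshtein(word, w) <= max_edits]
```

For example, `suggest("helo", ["hello", "world", "help"])` returns both `"hello"` and `"help"`, which is exactly where the more sophisticated ranking signals mentioned above come in.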
44
natural language processing
What is the minimum type of logical system that recognizes if a formalized sentence is a well-formed formula thus reducible to the boolean value?
https://cs.stackexchange.com/questions/107244/what-is-the-minimum-type-of-logical-system-that-recognizes-if-a-formalized-sente
<p>The formula, in the old way of using it, can contain symbols in an order and a mixture that does not meet the criteria of correctness (i.e. arbitrary symbols do not form a well-formed formula (WFF) and do not conform to the grammar).</p> <p>Also, a sentence written in a natural language could be in such a form that it cannot be transformed into a WFF. A WFF, on the other hand, always reduces to the boolean true/false values and two-valued truth tables in predicate logic.</p> <p>It is possible that we can never make a parser good enough to transform an English sentence into a formal one perfectly. Peter Norvig has the following parser made in Python to give an idea of the process: <a href="https://github.com/norvig/pytudes/blob/master/ipynb/PropositionalLogic.ipynb" rel="nofollow noreferrer">https://github.com/norvig/pytudes/blob/master/ipynb/PropositionalLogic.ipynb</a></p> <p>So, in the end, when a sentence cannot be transformed, by automation or by human manual work, into a well-formed formula, we need to categorize it in some way as not fitting the classical propositional paradigm.</p> <p>I want to examine this interface where we decide if a logical formula is correctly formed so that we can call it a WFF, or if a naturally written sentence is transformable into a correctly formed propositional and predicate logic formula. I assume that this logical system, or maybe the larger theoretical framework it anticipates, would at least entail the use of one additional type? That is: to decide if the sentence is transformable to the boolean-reducible format, thus whether the sentence reduces to the boolean type or not.</p> <p>So, my question has a few parts. First, I'd like to get clarification on what kind of theoretical framework is required to axiomatize a system that has propositional and predicate logic functionality, but can also determine whether the formula or the sentence is a boolean type at all? 
Related to that, what is needed, in theoretical terms, to determine whether a given sentence is well-formed at all?</p> <p>Does this have something to do with language parsers and syntax checking? From that perspective, how does this affect the minimum logical paradigm required?</p> <p>I hope these questions intertwine in such a way that they can be handled in one answer.</p> <p><strong>Additional info</strong></p> <p>For example, the following paper discusses a similar topic and introduces the logic of presuppositions and the truth-relevance concepts: <a href="https://www.researchgate.net/deref/https%3A%2F%2Fphilpapers.org%2Farchive%2FNEWMPT.pdf" rel="nofollow noreferrer">https://www.researchgate.net/deref/https%3A%2F%2Fphilpapers.org%2Farchive%2FNEWMPT.pdf</a></p> <p>The logic systems that I'm talking about are introduced in: <a href="https://www.britannica.com/topic/logic#ref1049434" rel="nofollow noreferrer">https://www.britannica.com/topic/logic#ref1049434</a></p> <p>A possible candidate for such a logical framework might be found in this document: <a href="http://homepages.inf.ed.ac.uk/gdp/publications/Framework_Def_Log.pdf" rel="nofollow noreferrer">http://homepages.inf.ed.ac.uk/gdp/publications/Framework_Def_Log.pdf</a>, where the S4 modal logic is mentioned as having both truth and validity relations.</p> <p>Also, this cstheory topic may relate to my question: <a href="https://cstheory.stackexchange.com/questions/30541/logical-framework-vs-type-theory">https://cstheory.stackexchange.com/questions/30541/logical-framework-vs-type-theory</a></p> <p>I have added these clarifications at the request of D.W. If the former forum is better for handling my question, I hope it can be forwarded there.</p>
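On the syntax-checking side, deciding whether a string of symbols is a WFF is an ordinary (and decidable) parsing problem. A toy sketch for a propositional fragment (the grammar and names here are my own, not taken from any of the linked sources):

```python
def is_wff(s: str) -> bool:
    """True iff s is a WFF of the toy grammar:
    expr := atom | '~' expr | '(' expr op expr ')'
    with atoms a-z and binary connectives & | >."""
    s = s.replace(" ", "")
    pos = 0

    def expr() -> bool:
        nonlocal pos
        if pos >= len(s):
            return False
        c = s[pos]
        if c.isalpha():           # atomic proposition
            pos += 1
            return True
        if c == "~":              # negation
            pos += 1
            return expr()
        if c == "(":              # parenthesized binary connective
            pos += 1
            if not expr() or pos >= len(s) or s[pos] not in "&|>":
                return False
            pos += 1
            if not expr() or pos >= len(s) or s[pos] != ")":
                return False
            pos += 1
            return True
        return False

    return expr() and pos == len(s)
```

Strings that pass can then be handed to truth-table evaluation; strings that fail are exactly the ones the question asks to categorize separately.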
<p>There is no systematic algorithm that, given any English sentence, can always determine whether it can be translated to a well-formed formula in first-order logic. The English language allows expression of statements that are inherently subjective and imprecise; the meaning of some English sentences is unavoidably a matter of opinion.</p>
45
natural language processing
How does lack of deadlock relate to computability in process calculi?
https://cs.stackexchange.com/questions/64153/how-does-lack-of-deadlock-relate-to-computability-in-process-calculi
<p>I'm interested in knowing things about the computability of concurrent programs. If you had a Turing-complete language that also let you branch off new programs but had no means of communication between them, there would be programs that you couldn't write, namely those that require communication between concurrently running programs. At the other extreme it seems like the $\pi$-calculus can pretty well compute any program and has the power to implement almost any synchronization primitive I've ever heard of. But it also has deadlock, much like Turing-complete programs have infinite loops. So there seem to be degrees of power in synchronization/communication primitives, although all I have figured out are the extremes. This seems analogous to the gap between programs without any kind of iteration or recursion and fully Turing-complete programs. One in-between would be mutexes that have levels at the type level: a mutex could only be acquired if a mutex of a higher level had already been acquired by that thread. This would prevent deadlock, but it would probably restrict some kinds of communication from occurring because the type system couldn't encode all well-orderings on the mutexes (this is a bet, I have no proof). This feels analogous to something like primitive recursion or some other limited form of repeating computation (in fact, it's kind of the same problem of finding a well-ordering on objects in a program).</p> <p>It's also easy to prove that a sufficiently strong language with a way to cause deadlock would not have decidable deadlock: just substitute deadlock for infinite loops in the standard proof of the halting problem. So yet again deadlock seems to have the kinds of properties that infinite looping has. On the other hand there is no way (that I know of, please tell me if this is possible) to detect infinite loops at run time, <em>but</em> you can detect deadlock at run time. 
So deadlock also seems like an easier problem in a way.</p> <p>Say I define a notion of total-program equality that goes something like "two languages are equal IFF every total program that can be written in one can be written in the other", where "total program" here is a function from naturals to naturals. More clearly stated, if the sets of total computable functions two languages can compute are the same then I'll call them "total-program equal". Every Turing-complete language is total-program equal, and another neat fact is that any language that is total-program equal to a Turing-complete language is not a total language! Assume such a language existed; then an interpreter for it would be a total program that could be interpreted by a Turing-complete language. Thus the supposed language could interpret itself. But such a total language (namely one which definitely has composition and pairing and such) can't interpret itself, so we have a contradiction. This is to say, there is no way to define such a language that isn't going to contain non-total members.</p> <p>I'd like to define something similar but for deadlock and the pi-calculus. Deadlock is tricky, however. Separating deadlock from livelock is tricky to work with. Moreover I don't have a notion of the semantics of elements of a concurrent language. Certainly $\mathbb{N} \to \mathbb{N}$ doesn't really capture this, because the programs we're interested in keep outputting data on various channels as new data comes in. So there are some things that need to be formalized, but basically I want a notion of "deadlock-free equivalent" languages. That is, two languages are deadlock-free equivalent if the sets of programs that they can define that never deadlock are the same. 
Then I'd like to ask "is there a deadlock-free language that is deadlock-free equivalent to the pi-calculus?".</p> <p>I don't really expect this exact question to have been answered, but I was wondering if anyone has done work in this vein that I could look at. What are good models of process calculi, in the way that $\mathbb{N} \to \mathbb{N}$ models computation? How do you formalize the difference between deadlock and livelock? Does being deadlock-free cause such a language to not be able to interpret itself? Is there some other reason a deadlock-free language as good as the pi-calculus can't exist? Answers to these questions here would be fantastic, but I mainly just want to be pointed in the right direction to read further.</p> <p>In a nutshell, lack of infinite loops means that you can't compute some programs even though they don't have infinite loops in them. Is the same true for deadlock? That is, will any language that never deadlocks have some programs that it can't compute?</p>
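The "mutex levels" idea in the question can be illustrated concretely: if every thread acquires locks in one global order, the circular wait that deadlock requires cannot form. A hypothetical Python sketch (an illustration of the discipline, not a claim about the π-calculus):

```python
import threading

# One lock per level; the discipline: acquire in increasing level order only.
locks = {level: threading.Lock() for level in range(3)}
results = []

def acquire_in_order(levels):
    ordered = sorted(levels)          # the global order rules out circular wait
    for level in ordered:
        locks[level].acquire()
    return ordered

def release(held):
    for level in reversed(held):
        locks[level].release()

def worker(levels):
    held = acquire_in_order(levels)
    results.append(held)
    release(held)

# Both threads request locks 0 and 2 in opposite orders; naive acquisition
# in request order could deadlock, but the level discipline cannot.
t1 = threading.Thread(target=worker, args=([2, 0],))
t2 = threading.Thread(target=worker, args=([0, 2],))
t1.start(); t2.start()
t1.join(); t2.join()
```

The cost, as the question suspects, is expressiveness: any protocol whose communication pattern cannot be reconciled with a single global ordering is ruled out along with the deadlocks.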
<p>I think you are asking about the expressivity of concurrent programming languages. This is a deep and not well-understood field. For example you say that "the $\pi$-calculus [...] has the power to implement almost any synchronization primitive I've ever heard of". It is well known that the $\pi$-calculus cannot implement broadcasting (see e.g. <a href="https://cstheory.stackexchange.com/questions/10763/how-can-you-model-broadcasts-in-the-pi-calculus">here</a>). The $\pi$-calculus can also not implement $n$-ary synchronisation (think Petri nets) for all $n &gt; 2$, and $\pi$ can also not implement the Ambient calculus. Moreover, full mixed choice cannot be implemented by the asynchronous $\pi$-calculus; this is an old result by Palamidessi. There are more such results. </p> <p>What I have not said here is what it means precisely for one calculus to implement another (or to be unable to do so). There is no general agreement. For more on this, I suggest consulting the following.</p> <ul> <li>D. Gorla, Towards a Unified Approach to Encodability and Separation Results for Process Calculi.</li> <li>D. Gorla, Comparing Communication Primitives via their Relative Expressive Power.</li> </ul>
46
natural language processing
What roadblocks are there to HSA becoming standard, similar to floating point units becoming standard?
https://cs.stackexchange.com/questions/130289/what-roadblocks-are-there-to-hsa-becoming-standard-similar-to-floating-point-un
<p>I remember when my dad explained to me for the first time how a certain model of computer he had came with a &quot;math coprocessor&quot; which made certain math operations much faster than if they were done on the main CPU without it. That feels a lot like the situation we are in with GPUs today.</p> <p>If I understand correctly, when Intel introduced the x87 architecture they added instructions to x86 that would shunt the floating point operation to the x87 coprocessor if present, or run some software version of the floating point operation if it wasn't. Why isn't GPU compute programming like that? As I understand it, GPU compute is explicit: you have to program for it <strong>or</strong> for the CPU. You decide as a programmer; it isn't up to the compiler and runtime like Float used to be.</p> <p>Now that most consumer processors (Ryzen aside) across the board (including smartphone Arm chips and even consoles) are SoCs that include CPUs and GPUs on the same die with shared main memory, what is holding back the industry from adopting some standard form of addressing the GPU compute units built into their SoCs, much like floating point operation support is now standard in every modern language/compiler?</p> <p><strong>In short, why can't I write something like the code below and expect a <em>standard</em> compiler to decide if it should compile it linearly for a CPU, with SIMD operations like AVX or NEON, or on the GPU if it is available?</strong> (Please forgive the terrible example, I'm no expert on what sort of code would normally run on a GPU, hence the question. Feel free to edit the example to be more obvious if you have an idea for better syntax.)</p> <pre><code>for (int i = 0; i &lt; size; i += PLATFORM_WIDTH) { // + and = are aware of PLATFORM_WIDTH and add operand2 to PLATFORM_WIDTH // number of elements of operand_arr starting at index i. 
// PLATFORM_WIDTH is a number determined by the compiler or maybe // at runtime after determining where the code will run. result_arr[i] = operand_arr[i] + operand2; } </code></pre> <p>I am aware of several ways to program for a GPU, including CUDA and OpenCL, that are aimed at working with dedicated GPUs that use memory separate from the CPU's memory. I'm not talking about that. I can imagine a few challenges with doing what I'm describing there due to the disconnected nature of that sort of GPU that require explicit programming. I'm referring solely to the SoCs with an integrated GPU like I described above.</p> <p>I also understand that GPU compute is very different from your standard CPU compute (being massively parallel), but floating point calculations are also very different from integer calculations and they were integrated into the CPU (and GPU...). It just feels natural for certain operations to be pushed to the GPU where possible, like Floats were pushed to the 'Math coprocessor' of yore.</p> <p>So why hasn't it happened? Lack of standardization? Lack of wide industry interest? Or are SoCs with both CPUs and GPUs still too new and is it just a matter of time? (I am aware of the HSA foundation and their efforts. Are they just too new and haven't caught on yet?)</p> <p>(To be fair, even SIMD doesn't seem to have reached the level of standard support in languages that Float has, so maybe a better question may be why SIMD in general hasn't reached that level of support yet, GPUs included.)</p>
<p>A couple of issues come to mind:</p> <h3>Synchronization/Communication overhead</h3> <p>In order to seamlessly transition from CPU to GPU code you need to communicate with the GPU. The GPU additionally has to be available (aka not rendering the screen), and all instructions on the CPU side of things need to retire/finish executing. Additionally you need to make sure that any pending writes have reached L3 cache/main memory, so that the GPU sees the writes. As a result a transition to GPU code is quite expensive, especially if the GPU is doing something latency-sensitive (like rendering the next frame of something), and you need to wait for that process/task/thread/whatever to finish. Similarly, returning to the CPU is also expensive.</p> <p>In addition you have to handle what happens if multiple CPU cores start fighting over the GPU.</p> <h3>Differing Memory Performance Needs</h3> <p>GPUs typically require high-bandwidth memory, but low latency is not as important, while CPUs are typically more sensitive to low latency. Low-performance GPUs can and do use main memory, but if you wanted a high-performance GPU built into the CPU you would potentially need two different types of memory. At which point there isn't much advantage to having everything on one chip, since all that does is make cooling harder.</p> <h3>Inertia/Dev Infrastructure</h3> <p>SIMD has compiler support right now and lots of work put into it. Simple GPU-style workloads like dot products are already memory-bound on a CPU anyway, so existing CPU+GPU combos would not benefit.</p> <h3>Could just have lots of SIMD</h3> <p>Not much more to say beyond the heading. SIMD + many cores + lots of execution units would give you a more GPU-like CPU. Add better SMT for a bonus. See Xeon Phi for a real-world implementation of this concept. 
Though one thing worth mentioning is silicon spent on more GPU style features is silicon not spent on branch prediction etc.</p> <p>Edit:</p> <p>Another thing that comes to mind is there are broadly speaking three reasons to have a GPU.</p> <ol> <li>Just want to browse the web, display Netflix etc. For this use case existing CPU and GPU performance/architecture is more than sufficient.</li> <li>Want to play high end videogames etc. Existing architecture has a lot of momentum behind it, and I'm not convinced gaming CPU workloads really need better SIMD performance, and instead need better cache/branch etc, though I don't really know. However the GPU is likely already busy so it might not be the best idea to shift even more work to the CPU</li> <li>HPC applications. Custom hardware like Xeon Phi is available for people who need a more GPU like CPU.</li> </ol>
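As a partial aside, array-level APIs already get close to the "let the toolchain decide" model from the question: the loop can be written once against an array interface, and the library dispatches it to vectorized CPU code (NumPy) or to GPU kernels (e.g. CuPy, which mirrors NumPy's API) depending on where the array lives. A sketch, with the GPU path commented out since it assumes CUDA hardware:

```python
import numpy as np

def add_scalar(operand_arr, operand2):
    """The question's loop, written once: the array type decides whether
    this runs as vectorized CPU code or as a GPU kernel."""
    return operand_arr + operand2

operand_arr = np.arange(8, dtype=np.float32)
result_arr = add_scalar(operand_arr, 2.0)      # SIMD-friendly CPU path

# import cupy as cp                             # GPU path: same code, GPU array
# result_gpu = add_scalar(cp.asarray(operand_arr), 2.0)
```

This sidesteps rather than solves the synchronization costs described above; the programmer still chooses where the array lives, which is exactly the explicit decision the question wants the compiler to make.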
47
word embeddings
Word embeddings with documents and users
https://cs.stackexchange.com/questions/70136/word-embeddings-with-documents-and-users
<p>I understand how one can generate a latent vector space from a collection of words or documents, using something like GloVe, word2vec, doc2vec, etc. I don't understand how this representation can be turned into features for something generating these documents at a "higher level", like a user or a product.</p> <p>Here's a concrete example: I have users on a website who are generating reviews, which are written in English. Suppose I want to make a binary classification of users on whether or not they will stop using the site in the next month. How do I go from a word embedding, or a list of document embeddings for each user, to features for the user?</p> <p>I believe this third level comes up a lot in practice. For example, </p> <ul> <li>users -> multiple reviews -> words. Task: predict if a user will want to buy product X.</li> <li>items -> multiple reviews -> words. Task: group items into sales categories.</li> <li>artists -> multiple songs -> audio (audio embedded into a space). Task: group artists into genres.</li> </ul> <p>I feel as though this is a common use case but I lack the vocabulary to search for it effectively. Apologies if this is a repeat question.</p> <p>EDITED: I edited the question for clarity of the problem formulation.</p>
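A common baseline for this third level (a sketch with made-up dimensions, not the only option): pool each user's review embeddings into one fixed-length vector, then feed that to the churn classifier:

```python
import numpy as np

def user_features(review_embeddings):
    """Collapse a user's variable number of review embeddings (n x d)
    into one fixed-length feature vector via pooling."""
    reviews = np.asarray(review_embeddings, dtype=float)
    return np.concatenate([
        reviews.mean(axis=0),      # average content of the user's reviews
        reviews.max(axis=0),       # strongest per-dimension signal
        [float(len(reviews))],     # activity level is often predictive too
    ])

# A user with 3 reviews embedded in a (toy) 4-dimensional doc2vec space:
feats = user_features([[0.0, 1.0, 0.0, 2.0],
                       [1.0, 1.0, 0.0, 0.0],
                       [2.0, 1.0, 3.0, 1.0]])
```

More elaborate alternatives treat the reviews as a sequence or weight them by recency, but mean/max pooling plus simple counts is a strong first baseline.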
48
word embeddings
Why word embeddings are compared with cosine distance and not euclidean?
https://cs.stackexchange.com/questions/147713/why-word-embeddings-are-compared-with-cosine-distance-and-not-euclidean
<p>In most articles that compare word embeddings they use cosine distance to determine if words are similar. Why?</p> <p>I guess that Euclidean distance should work too. So, my question is: doesn't it? And why doesn't cosine distance fail?</p>
<p>Those are two different measures of similarity. You could in principle use either one.</p> <p>Both metrics are very closely connected. Let <span class="math-container">$v,w$</span> be two unit-length vectors (i.e., <span class="math-container">$\|v\|=\|w\|=1$</span>), and let <span class="math-container">$d(v,w)=1-v \cdot w$</span> be the cosine distance between them. Then the Euclidean distance <span class="math-container">$\|v-w\|$</span> satisfies <span class="math-container">$$\|v-w\|^2 = 2 d(v,w).$$</span> In other words, if your embedding vectors are unit-length, there's effectively no meaningful difference between the two distance measures.</p>
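A quick numerical check of this identity with random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.normal(size=(2, 300))
v /= np.linalg.norm(v)
w /= np.linalg.norm(w)                 # unit length, as for normalized embeddings

cosine_distance = 1.0 - v @ w
euclidean_sq = np.linalg.norm(v - w) ** 2

# ||v - w||^2 = ||v||^2 + ||w||^2 - 2 v.w = 2 (1 - v.w) when ||v|| = ||w|| = 1
assert np.isclose(euclidean_sq, 2.0 * cosine_distance)
```

For embeddings that are not normalized, the two measures genuinely differ: cosine ignores vector magnitude while Euclidean distance does not.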
49
word embeddings
Question about word embeddings in a specific language model - GPT-2
https://cs.stackexchange.com/questions/116184/question-about-word-embeddings-in-a-specific-language-model-gpt-2
<p>How were the <a href="https://openai.com/blog/better-language-models/" rel="nofollow noreferrer">GPT-2</a> token embeddings constructed? </p> <p>The authors mention that they used Byte Pair Encoding to construct their vocabulary. But BPE is a compression algorithm that returns a list of subword tokens that would best compress the total vocabulary (and allow rare words to be encoded efficiently).</p> <p>My question is: how was that list of strings turned into the vectors that they actually used for training the model? The papers they published on the <a href="https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow noreferrer">original GPT</a> and its <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow noreferrer">follow-up GPT-2</a> don't seem to specify those details.</p>
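For what it's worth, BPE only fixes the vocabulary: each token string is mapped to an integer id, and the vectors are rows of an embedding matrix that is initialized randomly and learned jointly with the rest of the network, like any other weight. A minimal sketch with toy sizes (not the actual GPT-2 vocabulary or dimensions):

```python
import numpy as np

vocab = {"hello": 0, "wor": 1, "ld": 2}   # toy BPE vocabulary: token -> id
d_model = 8
rng = np.random.default_rng(0)
# Trainable token-embedding matrix, one row per BPE token, random init:
wte = rng.normal(scale=0.02, size=(len(vocab), d_model))

def embed(tokens):
    """Look up one row per token id; during training, gradients flow
    back into these rows just like any other parameter."""
    ids = [vocab[t] for t in tokens]
    return wte[ids]

x = embed(["hello", "wor", "ld"])          # shape (3, d_model)
```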
50
word embeddings
How should I create a word-embedding an NLP model recognizing HTML elements?
https://cs.stackexchange.com/questions/143225/how-should-i-create-a-word-embedding-an-nlp-model-recognizing-html-elements
<p>I'm currently doing a small-time project where I have to deliver a model which can classify specific elements on a web page using the HTML code.</p> <p>For this, I have considered using the HTML tags for each specific element (which I can extract using some other code) and then transforming them into a vector to feed a neural network made to classify whether or not it is a certain element (text box, discount code field, question sheet, etc.).</p> <p><strong>My problem</strong> is this: I cannot find any pre-made library of word embeddings for HTML elements, and I know that training one using methods such as GloVe or RNNs can be rather cumbersome and require much data and processing, kind of outside the scope of the project itself.</p> <p><strong>My question</strong> is whether a word2vec-style model for vectorizing HTML elements already exists, or if I need to make it from scratch. Or, whether there is an entirely different ML approach to classifying a wide range of different, possibly changing HTML elements based on their tags.</p> <p>Thanks a ton in advance!</p>
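One thing worth noting before training anything: the HTML tag vocabulary is tiny (on the order of a hundred element names), so simple indicator features over the tag, its attributes, and its ancestors may do the job without any learned embedding. A hypothetical sketch (the tag list and attribute checks are invented for illustration):

```python
# Hypothetical feature extractor: no learned embeddings, just indicators.
TAGS = ["input", "textarea", "select", "button", "form", "div", "label"]

def element_features(tag, attrs, ancestor_tags):
    """One-hot the element's tag, count ancestor tags, and flag a couple
    of attribute patterns; the result can feed any off-the-shelf classifier."""
    feats = [1.0 if tag == t else 0.0 for t in TAGS]
    feats += [float(ancestor_tags.count(t)) for t in TAGS]
    feats.append(1.0 if attrs.get("type") == "text" else 0.0)
    feats.append(1.0 if "discount" in attrs.get("name", "").lower() else 0.0)
    return feats

f = element_features("input", {"type": "text", "name": "DiscountCode"},
                     ["form", "div", "div"])
```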
51
word embeddings
How do I find the most similar phrase in &quot;Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications&quot;?
https://cs.stackexchange.com/questions/144405/how-do-i-find-the-most-similar-phrase-in-extending-multi-sense-word-embedding-t
<p>This question is about the paper <a href="https://arxiv.org/pdf/2103.15330.pdf" rel="nofollow noreferrer">Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications</a>, depicted in the following picture:</p> <p><a href="https://i.sstatic.net/1Taan.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Taan.jpg" alt="enter image description here" /></a></p> <p>I am interested in more information about the problem of, given a new phrase, finding the most similar phrase among <strong>billions</strong> of phrases.</p> <p>Is it possible to formulate the above problem as a nearest-neighbor search in Euclidean space or as a dot product? What is the complexity (time and space) of the algorithm for embedding a new phrase and for &quot;querying&quot; a similar phrase? (Please also give the time here in seconds, because I am interested in the behaviour of the algorithm for very large applications...)</p>
<p>If your goal is simply to find the most similar phrases, you do not need to use the methods in this paper. As the unsupervised phrase experiments show in this paper, multiple embeddings do not improve the similarity measurement.</p> <p>There are many techniques you can use to find the nearest neighbors efficiently if you represent each phrase as a single embedding. One of the popular libraries is faiss (<a href="https://github.com/facebookresearch/faiss" rel="nofollow noreferrer">https://github.com/facebookresearch/faiss</a>). faiss can handle both l2 distance or dot product. You can check their documentation or paper to know how efficient those methods could be.</p>
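For modest collections, the brute-force search that faiss accelerates can be sketched directly in NumPy; faiss's IndexFlatL2 and IndexFlatIP compute the same answers, just far faster, with approximate variants for billion-scale data (the function below is my own stand-in, not faiss's API):

```python
import numpy as np

def nearest_neighbor(query, phrases, metric="l2"):
    """Index of the most similar phrase embedding under L2 distance
    or dot product ('ip'), one embedding per phrase."""
    if metric == "l2":
        return int(np.argmin(((phrases - query) ** 2).sum(axis=1)))
    return int(np.argmax(phrases @ query))

rng = np.random.default_rng(1)
phrases = rng.normal(size=(1000, 64))               # toy phrase embeddings
query = phrases[42] + 0.01 * rng.normal(size=64)    # slightly perturbed phrase
```

At billions of phrases this linear scan is exactly the part you would replace with a faiss index.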
52
word embeddings
How can I modify this detail in the article &quot;Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications&quot;?
https://cs.stackexchange.com/questions/144325/how-can-i-modify-this-detail-in-the-article-extending-multi-sense-word-embeddi
<p>This question is about the paper <a href="https://arxiv.org/pdf/2103.15330.pdf" rel="nofollow noreferrer">Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications</a>.</p> <p>I am interested in the transformer part of the paper, and the main structure of the algorithm is represented in the following image:</p> <p><a href="https://i.sstatic.net/okHk2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/okHk2.jpg" alt="enter image description here" /></a></p> <p>Before the main question I have other questions which perhaps will help with the main question:</p> <p>What is the role of the DECODER? Why do I need an encoder/decoder?</p> <p>Main question:</p> <p>In the paper the authors replace the transformer encoder with a bi-LSTM and the transformer decoder with an LSTM.</p> <p>What are the other options for replacing the encoder/decoder part of the algorithm? Is it possible to replace the encoder/decoder at once with a single structure?</p>
<p>Thanks for being interested in our work.</p> <p>The role of the decoder is to model the dependency between the codebook embeddings. For example, in this case, outputting an embedding close to &quot;sings&quot; might be correlated with outputting an embedding close to &quot;microphone&quot;.</p> <p>There are several reasons we chose to use a seq2seq (encoder/decoder) architecture. For example, we want to compare with related work such as skip-thought. In addition, the sentence length varies, but we want to output a fixed number of embeddings.</p> <p>If you want, you can input a fixed number of special tokens into a transformer encoder and use the corresponding hidden states as the codebook embeddings. We find that this encoder-only architecture is more likely to output almost identical embeddings (i.e., multiple embeddings collapse into a single embedding), especially when using a transformer with many layers such as BERT. We are investigating some solutions to this problem now.</p>
53
word embeddings
self embedded nonterminal in derivation of a Word
https://cs.stackexchange.com/questions/97252/self-embedded-nonterminal-in-derivation-of-a-word
<p><a href="https://i.sstatic.net/plIou.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/plIou.png" alt="questions from cohen"></a></p> <p>Consider this question of finding words with no self-embedded nonterminal in their derivation. The author (Cohen) says that if a CFG is in CNF then all words of length greater than 2^p are guaranteed to have a self-embedded nonterminal. So, all I need to do to solve these questions is to consider words of length less than 2^p (where p is the number of "live" productions, i.e. productions containing at least one nonterminal, in the grammar). But consider question 5 in the image: I can easily tell that any word of length >= 4 will have a self-embedded nonterminal in its derivation, without converting the grammar into CNF, and this pattern repeats in many questions. Do I really need to convert the CFG to CNF? What could go wrong if I don't? Does this restriction on length still hold if the CFG is not in CNF?</p>
54
word embeddings
How can node2vec help find similar &quot;roles&quot; within a graph (nodes whose connections have similar structure within the graph)?
https://cs.stackexchange.com/questions/95458/how-can-node2vec-help-find-similar-roles-within-a-graph-nodes-whose-connectio
<p>I have a question on the node2vec algorithm described in <a href="https://arxiv.org/pdf/1607.00653.pdf" rel="nofollow noreferrer">this paper</a>.</p> <p>Node2vec is a deep learning algorithm that applies word2vec to graphs to learn embeddings. The authors claim that it can help find nodes with similar "roles", or nodes whose connections have similar structure within the graph (structural equivalence), such as two nodes that are both hubs.</p> <p>However, it uses word2vec, specifically the <a href="http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model" rel="nofollow noreferrer">Skipgram architecture</a>. The Skipgram algorithm takes text as input and for each word, it looks within a window and finds specific nearby words within the vocabulary. For a word i, it then aims to learn the probability that word j appears near it, for j = 1,...,N (N being the vocabulary size). In the end, words with similar contexts will have similar embeddings. </p> <p>Applied to a graph, each word is a node. The authors determine the window of nearby nodes by doing repeated random walks starting from that node. So from my understanding, node2vec should only be able to learn similar embeddings for nodes that are in the same community (i.e. appear in the same "context"). How can it produce similar embeddings for nodes that are in distinct communities, but which share the same structural role?</p> <p>To make it clearer with an example: Say that node A and node B have the same structure but are in different communities. Thus, they should have similar embeddings. However, the nodes that are near node A are extremely different from the nodes near node B. Thus the windows produced by random walks would be extremely different. For node A, the Skipgram architecture learns to find nodes within its window (nodes near node A), so nodes A and B should have different embeddings. So how would node2vec find similar embeddings for A and B?</p> <p>Thanks so much for your help!</p>
55
word embeddings
Space efficient data structure to store precomputed All Nearest Neighbors in high dimensions
https://cs.stackexchange.com/questions/160119/space-efficient-data-structure-to-store-precomputed-all-nearest-neighbors-in-hig
<p><strong>Seeking an indexing data structure that is smaller than quadratic in space.</strong></p> <p>As part of an NLP algorithm using 300-dimensional word embeddings, I am trying to improve the speed of <a href="http://proceedings.mlr.press/v37/kusnerb15.pdf" rel="nofollow noreferrer">Word Mover's Distance</a> (WMD). (Isn't everybody?) Computing the cost matrix is expensive, as it is size MxN (where M and N are the number of distinct non-stop words in documents one and two, respectively). I figured that if I only compute the distances to the k-nearest neighbors of each word and set all others to the mean distance between word pairs, I can compute a high-quality approximation of WMD. (I am not the first to consider this flavor of speedup. See &quot;Speeding up Word Mover’s Distance and its variants via properties of distances between embeddings&quot; by Matheus Werner and Eduardo Laber, 2020 on arXiv.) The trick is knowing what those nearest neighbors are <em>a priori</em>.</p> <p><em>The corpus size is S = 250,000 words.</em></p> <p><em>The dimensionality is 300 features per word embedding.</em></p> <p>What if you preprocess the corpus of words, performing an all-nearest-neighbors search for every word and storing the result in a file? That would allow you to filter the k-nearest words for each word present in a given document comparison from the index, saving the computation of the distance function for most word pairs and the sorting time of (M + N) * k log k.</p> <p>It is trivial to store a matrix of size S * S with the neighbors of each word sorted from nearest to farthest.</p> <p><strong>Is there a more space efficient structure than quadratic storage</strong>, like a variant on a Trie? I do not need it to store the actual distances, just the ordering. I can recalculate the distances for the pairs I care about. 
The time complexity of the preprocessing is immaterial.</p> <p><strong>Clarification of K:</strong></p> <p>Say that you want the k-nearest neighbors of the word &quot;happiness&quot;. You might think the index storage requirement is k*S. However, what I want is NOT the K-nearest neighbors of <em>happiness</em> in the whole corpus of S words. I want the k-nearest neighbors drawn from the set of words in the opposite document. The 10th nearest word to <em>happiness</em> in the document could be the 10,000th nearest word in the whole corpus! That is the problem. To guarantee that I have all k-nearest words for ANY document that is produced for comparison, I need to store ALL neighbors for <em>happiness</em> from nearest to most distant.</p> <p>As an exception, I will allow that once the distance to a word reaches the average distance between word pairs in S, it can be dropped from the index and the distance assumed to equal that mean distance. This automatically reduces the index size by half.</p> <p><strong>Research:</strong></p> <p>I have just learned about LSH, HNSW and IVF indices. You have control over accuracy versus speed, but the memory usage is substantial.</p> <p>I have toyed with using Hilbert indices with lists of line segments sorted to give a loose ordering of neighbors that will need to be sorted later. That reduces number of computations of distances but does not do away with the K log K sort at the end.</p>
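To pin down the approximation being proposed, here is a small sketch (names mine; it builds the full M x N distance matrix first purely for illustration, and avoiding exactly that computation is the point of the precomputed index):

```python
import numpy as np

def approx_cost_matrix(emb1, emb2, k, mean_dist):
    """Keep exact distances only to each row-word's k nearest neighbors
    among the column-words; clamp everything else to the corpus-wide
    mean distance (assumed precomputed)."""
    d = np.linalg.norm(emb1[:, None, :] - emb2[None, :, :], axis=2)
    out = np.full_like(d, mean_dist)
    rows = np.arange(d.shape[0])[:, None]
    nearest = np.argsort(d, axis=1)[:, :k]   # k nearest columns per row
    out[rows, nearest] = d[rows, nearest]
    return out
```

With a working index, the `argsort` over all columns would be replaced by a lookup of each word's k nearest neighbors restricted to the other document's words.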
<p>After pondering this for days, I have arrived at a possible data structure. It may be some time before I get to testing whether it performs as well as hoped.</p> <p>The index will be derived from the Hilbert Curve ordering of the points. The index construction time will be O(N-cubed).</p> <ol> <li>Normalize and quantize coordinates of the word embedding vectors to form integer-valued points. 16 bits per coordinate should suffice, and likely fewer bits.</li> <li>Obtain Hilbert index of each point.</li> <li>Sort points by Hilbert index.</li> <li>Once sorted, each point will be associated with its position in Hilbert order. Instead of being a huge Hilbert index of 300x16 bits, it will be an integer in [0,S) where S is the number of words in the corpus.</li> <li>For each point, find all nearest neighbors exhaustively, comparing distance from every point to every other point and sorting.</li> <li>For each point, create an index of neighbors that is a list of Hilbert Curve line segments.</li> <li>Store the index to a file.</li> </ol> <p>The sixth step is the tricky part. A line segment will specify:</p> <ul> <li>start Hilbert position</li> <li>length of segment in word vectors</li> <li>Highest position P such that there are no gaps of missing near neighbors with a Hilbert position less than or equal to P in the set of all points included in the union of this segment with all prior segments.</li> </ul> <p>Each constructed line segment will be longer than the previous line segment. The exponential growth factor will be the square root of two. (I will experiment with different growth factors.) By using exponentially growing line segments, we can guarantee that the storage requirement for the index is S log S. The first segment will be length ten. 
Thus the segment lengths will be 10, 14, 20, 28, 40, 56, 80, etc.</p> <p>Segments for each point will be defined as follows:</p> <ol> <li>The first segment for a point will begin at the position of that point’s nearest neighbor.</li> <li>The segment will extend for ten positions in increasing Hilbert position.</li> <li>A Boolean array called FOUND of length S will record whether the point at that neighbor index has been included in the current or any prior segment. FOUND will refer to points in nearest neighbor order for that point, not Hilbert order.</li> <li>Iterate through all points in the segment and mark off in FOUND that they have been captured by a segment.</li> <li>Iterate over FOUND to find the position M of the lowest point in neighbor order that is missing. Record in the segment record the value M-1 as the highest neighbor index without gaps in the segment and all prior segments.</li> <li>Since the segment lengths will be growing and ordered haphazardly, new segments may overlap prior segments. Truncate as much overlap from the end of the segment as possible if it overlaps one or more other segments. Take care not to truncate gaps between the overlapping segments. For example, if a new segment C ranges from 1000-2000 and overlaps segments A from 500-600 and B from 1700-2200, only truncate the new segment C to 1000-1699, otherwise the gap of 601-1699 between A and B will be lost.</li> <li>Each subsequent segment will begin with the nearest neighbor of the target point that has not yet been captured by the index.</li> </ol> <p><strong>Quality of Approximation</strong>. Because of the locality-preserving nature of the Hilbert curve, each of the early segments will contain mostly near neighbors of the target point, but will miss some and include more distant neighbors, thus the approximate nature of the index. 
Research using the Hilbert curve for clustering shows that the Hilbert curve tends to divide data into 1.5 to 3 times as many clusters as are really present in the data. A Python experiment to explore the viability of this exponential segmentation idea shows that when the cumulative group of segments reaches a place where it guarantees that it has captured the K nearest neighbors, it will have done so by capturing a set of 1.5K to 2K neighbors, and in rare instances up to 2.5K. This is more than acceptable.</p> <p>Thus my expectation (that must be proven with more experiments upon a fully constructed index) is that when you ask for the K-nearest neighbors of a point, you will typically be given 2K neighbors, but will not miss any of the desired near neighbors.</p> <p>In the WMD algorithm, we need K neighbors and optimally desire to compute ½ K-squared distances for our cost matrix. Since we get 2K neighbors instead, we need to compute 2K-squared distances. This is vastly superior to S-squared!</p> <p><strong>Cost Matrix.</strong> In the end, to compose the WMD cost matrix:</p> <ol> <li>Load from the index only the lists of segments for the M+N words in your documents.</li> <li>Walk the segments for each word.</li> <li>Toss all words not in the documents.</li> <li>Continue until the set has gathered at least K words from your documents.</li> <li>Perform distance calculations between all retained words.</li> <li>Sort by distance.</li> <li>Keep the K-nearest.</li> </ol> <p>Note:</p> <p>The index written to the file contains:</p> <ul> <li>List of Word Vector ids sorted in Hilbert index order</li> <li>A list for each Word Vector of Hilbert segments as defined above</li> </ul>
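A toy sketch of steps 1-4 above (my own code, untested at corpus scale): it substitutes a Morton/Z-order key for the Hilbert index, since both are bit-interleaving space-filling curves; a real build would swap in a Hilbert-curve library for its better locality.

```python
def morton_key(coords, bits):
    """Interleave coordinate bits into one integer key (Z-order curve)."""
    key = 0
    for b in range(bits - 1, -1, -1):
        for c in coords:
            key = (key << 1) | ((c >> b) & 1)
    return key

def curve_order(vectors, bits=8):
    """Steps 1-4: normalize, quantize each coordinate to `bits` bits,
    key every point on a space-filling curve, and return the sorted
    permutation, i.e. each point's position in curve order."""
    dims = len(vectors[0])
    lo = [min(v[d] for v in vectors) for d in range(dims)]
    hi = [max(v[d] for v in vectors) for d in range(dims)]
    scale = [(2 ** bits - 1) / (h - l) if h > l else 0.0
             for l, h in zip(lo, hi)]
    quantized = [[round((v[d] - lo[d]) * scale[d]) for d in range(dims)]
                 for v in vectors]
    keys = [morton_key(q, bits) for q in quantized]
    return sorted(range(len(vectors)), key=keys.__getitem__)
```

Sorting the quantized vectors this way yields the compact [0, S) curve positions that the segment construction then indexes into.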
56
word embeddings
In what sense do we mean &#39;distributed&#39; when talking about a distributed representation of words?
https://cs.stackexchange.com/questions/68714/in-what-sense-do-we-mean-distributed-when-talking-about-a-distributed-represen
<p>In machine learning literature, when handling words or text inputs, the strings are often mapped onto a vector space using techniques such as word2vec.</p> <p>The terminology in this is that the individual word vectors are 'embeddings' and the embeddings together are a distributed representation of the input words (see reference to this use in the <a href="https://arxiv.org/abs/1301.3781" rel="nofollow noreferrer" title="word2vec paper">original word2vec paper</a>).</p> <p>What does 'distributed' mean in this case? I have been assuming that the transformed words are distributed in the vector space of the codomain, is this correct?</p>
57
word embeddings
How to represent symbolic knowledge using real numbers - theory about neural networks and natural/analog computing?
https://cs.stackexchange.com/questions/98067/how-to-represent-symbolic-knowledge-using-real-numbers-theory-about-neural-net
<p>One can define the semantics of a given word using references to real-world entities and relationships with other words and concepts, and represent all this knowledge about the word using logical symbolic expressions. One can then encode this set of symbolic expressions into a vector of real numbers. This is the word embedding used in natural language processing: the distributional semantics of the word, as opposed to its formal semantics.</p> <p>One can also consider a function of a software program (e.g. a functional program or any other program). One can encode this program in the multiple vectors and matrices of real numbers that define a neural network.</p> <p>One can consider symbolic meta-knowledge and encode it into vectors or neural networks as well.</p> <p>The decoding process can be trickier. There is more or less elaborate work on decoding neural networks - e.g. see Google queries "logical program extraction from neural networks" or "symbolic rule extraction from neural networks". But I have not seen work on extracting a more or less static knowledge base from a word-embedding vector.</p> <p>So I have two questions regarding this matter:</p> <ul> <li>Is there symbolic knowledge extraction from word-embedding vectors - some kind of decoding algorithm from a vector of real numbers to a set of logical formulas?</li> <li>Is there a general theory of such encoding algorithms? The usual approach is to train neural networks and arrive at the encoded form using non-symbolic, non-algorithmic, implicit methods. I have heard about embedding symbolic knowledge in neural networks to speed up training, but such work is scarce. But what about general encoding algorithms?</li> </ul> <p>There are discrete, natural Goedel numbers (encoding algorithms) that can be assigned to any theorem of first-order logic. 
But what about such Goedel numbers for sets of formulas, or for a computational program (as a set of commands)? Can we enumerate all such sets using natural numbers only, or are real numbers naturally needed instead? Or is perhaps even a set of real numbers required to encode a set of symbolic formulas or program statements? <strong>Is there such research work which I can develop further? If not, what ideas can be suggested for such encoding/decoding schemes?</strong></p> <p>Such encoding-decoding algorithms may be related to biological computing, and ultimately they could help explain brain activity.</p>
<p>No, probably not. I think you're expecting too much from the current state of the art in word embeddings. Word embeddings don't magically capture all semantic knowledge. They don't reflect perfect understanding of the language. Instead, they're just useful mappings where similar words often have similar embeddings. Moreover, the way word embeddings are constructed has nothing to do with logical formulas.</p> <p>I don't think you're going to find that word embeddings solve the problem of converting from natural language to formulas.</p> <p>I don't know what would count as a general theory of encoding algorithms for you. There are certainly multiple papers that propose different methods of constructing different word embeddings; you could read those to understand the state of the art.</p>
58
word embeddings
Confusion on the use of the chain-rule for the total derivative of the NLL Loss function
https://cs.stackexchange.com/questions/163325/confusion-on-the-use-of-the-chain-rule-for-the-total-derivative-of-the-nll-loss
<p>So my question is about when we want to find the total derivative of the NLL Loss function <span class="math-container">$L$</span> w.r.t. <span class="math-container">$w_i$</span>.</p> <p>So the &quot;pipeline&quot; is often expressed as: <span class="math-container">$$\frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial z} \cdot \frac{\partial z}{\partial w_i}$$</span></p> <p>Where <span class="math-container">$L$</span> is the NLL Loss function, <span class="math-container">$w_i$</span> is the context word-embedding vector, and <span class="math-container">$z$</span> is <strong>NOT</strong> the softmax function, but instead the input of the softmax function, which is the &quot;function&quot; that is the dot-product between the context word-embedding <span class="math-container">$w_i$</span> and the target word-embedding <span class="math-container">$t$</span>.</p> <p>Now this is where my confusion comes in: in a more classical view, I see the NLL Loss function, <span class="math-container">$L$</span>, as nested composite functions:</p> <p><span class="math-container">$$L(s(z(w_i,t))) = -\log(s(z(w_i,t)))$$</span></p> <p>So if I were asked to take the total derivative of <span class="math-container">$L$</span> w.r.t. <span class="math-container">$w_i$</span>, I would instinctively go through all the composite functions applying the chain-rule: <span class="math-container">$$\frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial s(z)} \cdot \frac{\partial s(z)}{\partial z(w_i,t)} \cdot \frac{\partial z(w_i,t)}{\partial w_i}$$</span></p> <p>But no one seems to do it this way. They always seem to &quot;skip&quot; the derivative of the softmax function, <span class="math-container">$s(z)$</span>, and go straight to taking the derivative of <span class="math-container">$L$</span> with respect to <span class="math-container">$z$</span>. My only thought on why this could be is that they are not treating the softmax function as a function. 
In the sense that its &quot;innards&quot; are put into the NLL Loss function, not as a variable function <span class="math-container">$s(z)$</span>. Like so: <span class="math-container">$$L(z(w_i,t)) = -\log(\frac{e^{z_i}}{\sum_{j=1}^{N} e^{z_j}})$$</span></p>
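For what it's worth, both routes give the same result; writing $s_k = e^{z_k}/\sum_j e^{z_j}$ with target index $i$, carrying the softmax factor explicitly through the chain rule gives

```latex
\frac{\partial L}{\partial z_k}
  = \frac{\partial L}{\partial s_i}\cdot\frac{\partial s_i}{\partial z_k}
  = -\frac{1}{s_i}\cdot s_i\,(\delta_{ik} - s_k)
  = s_k - \delta_{ik},
```

where $\delta_{ik}$ is the Kronecker delta. So sources that differentiate $L$ "directly" with respect to $z$ are not skipping the softmax derivative; they quote this already-collapsed product, which is why the intermediate factor never appears.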
59
word embeddings
How to represent sentences with their dependency parses as input to an RNN?
https://cs.stackexchange.com/questions/96814/how-to-represent-sentences-with-their-dependency-parses-as-input-to-an-rnn
<p>I am working on a task embedding sentences into a lower-dimensional space according to style, both grammatical and lexical. As such, I want to have as input the linear ordering of tokens in each sentence, together with its dependency parse as provided by spacy. </p> <p>In particular, I'd like to find a way to tie together the representation of the linear order of tokens and the representation of the dependency parse, so the network could learn features like "this sentence used a word with an embedding close to Y as an nmod which came before and modified a word with an embedding close to Z". How could I design such a network? </p> <p>Edit: The desired input to the network is a parsed sentence; the desired output is a vector which allows that sentence to be compared with others in terms of both lexical and syntactic features. I know how to use an RNN with a sequence of word-vector embeddings as input. I also know how to encode a tree of grammatical functions as a sequence of tokens starting from the root. I'm not sure how to create a unified representation of the sentence where I can determine, for instance, both the embedding and the grammatical function of the fourth word in the sentence and the embedding of the word it modifies (requiring knowledge of the edges between words as well as their linear ordering).</p>
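One simple way to tie the two representations together (a sketch under assumed names: `word_vecs` and `dep_vecs` are embedding lookup tables, and `heads[i]` is the index of token i's head, with the root pointing at itself) is to feed the RNN, at each position in linear order, the concatenation of the token's embedding, a learned embedding of its dependency label, and its head word's embedding:

```python
import numpy as np

def token_features(tokens, word_vecs, dep_labels, dep_vecs, heads):
    """Per-token input for the RNN: [own embedding | dependency-label
    embedding | head word's embedding], in the sentence's linear order."""
    feats = []
    for i, tok in enumerate(tokens):
        feats.append(np.concatenate([word_vecs[tok],
                                     dep_vecs[dep_labels[i]],
                                     word_vecs[tokens[heads[i]]]]))
    return np.stack(feats)
```

With this input, the recurrent state can relate a token's embedding and grammatical function to the word it modifies while the linear order is preserved; a stronger variant is to combine a sequential RNN with a tree-structured encoder (e.g. a Tree-LSTM) run over the parse.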
60
word embeddings
Why does parallelising slow down this simple problem against looping through all the data?
https://cs.stackexchange.com/questions/95896/why-does-parallelising-slow-down-this-simple-problem-against-looping-through-all
<p>I've been using multiprocessing and parallelisation for the first time this week on a very large data set using 32 CPUs. I decided to explore it for a smaller task just to see if I could learn anything, just on the 4 CPUs of my Mac.</p> <p>I created a task to add 100 to every element in a 500,000 element list. To my surprise, I noticed that batching this data and using Python's parallelising tools to implement this actually slowed it down hugely, compared to just looping through the 500,000 elements and adding 100. </p> <p>I'd like to understand why.</p> <p>Consider the two methods for doing this task below:</p> <pre><code>import numpy as np
from multiprocessing import Pool, cpu_count
from gensim.corpora.wikicorpus import init_to_ignore_interrupt
from itertools import zip_longest
import timeit as t

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

class Add100ToData():
    def __init__(self):
        self.data = [np.random.randint(0, 100) for _ in range(500000)]

    def add100(self):
        for i in range(len(self.data)):
            self.data[i] = self.data[i] + 100
        return self.data

class Add100ToDataMultiprocess():
    def __init__(self):
        self.data = [np.random.randint(0, 100) for _ in range(500000)]

    def process_batch(self, batch):
        new_data = []
        for i in batch:
            new_data.append(i + 100)
        return new_data

    def add100(self, batch_size):
        processes = cpu_count()
        pool = Pool(processes, init_to_ignore_interrupt)
        gr = grouper(self.data, batch_size)
        index = 0
        for batch_result in pool.imap(self.process_batch, gr):
            for i in batch_result:
                self.data[index] = i
                index += 1
        return self.data

if __name__ == "__main__":
    add1 = Add100ToData()
    start = t.default_timer()
    final1 = add1.add100()
    end = t.default_timer()
    print("Looping run-time: {:.2f} seconds".format(end - start))

    add2 = Add100ToDataMultiprocess()
    start = t.default_timer()
    final2 = add2.add100(batch_size=10000)
    end = t.default_timer()
    print("Multiprocessing run-time: {:.2f} seconds".format(end - start))
</code></pre> <p>This gives me:</p> <pre><code>Looping run-time: 0.13 seconds
Multiprocessing run-time: 1.23 seconds
</code></pre> <p>Why is simply looping through the data faster than parallelising and batching for simple tasks? </p> <p>When I was doing this for a far more labour-intensive task (one task was transforming 800,000 sentences into their 300-dimensional word embeddings, and another was applying a classifier to these), I gained huge speed improvements using 32 CPUs on the Google cloud, with a very similar code structure to this.</p> <p>Can someone help me to understand why I'm not getting speed improvements here?</p>
<p>Parallelism has costs. The processes have to be scheduled, communicate with each other, manage resources, etc. In return you can do multiple things at the same time.</p> <p>When you have a lot of slow tasks that can be done independently, parallel processing will speed things up a lot.</p> <p>But when you try to parallelize an easy task it might take longer to handle the overhead than to actually do the work. That seems to be the case here.</p>
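To make the trade-off concrete, here is a minimal chunked version of the same task (my sketch, not the question's code): it is correct and preserves ordering, yet for work this cheap the per-chunk pickling and process dispatch typically cost more than the additions they save.

```python
from multiprocessing import Pool

def add100(chunk):
    # The per-chunk work is trivial, so dispatch overhead dominates.
    return [x + 100 for x in chunk]

def parallel_add100(data, batch_size, workers=4):
    """Split data into batches, map them over a worker pool, and
    reassemble in order (Pool.map preserves input order)."""
    chunks = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    with Pool(workers) as pool:
        results = pool.map(add100, chunks)
    return [x for chunk in results for x in chunk]
```

Timing this against a plain list comprehension on the same data reproduces the question's observation on small, cheap workloads; only when `add100` does substantial work per element does the pool win.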
61
word embeddings
Is copying and pasting a sort of object embedding?
https://cs.stackexchange.com/questions/151679/is-copying-and-pasting-a-sort-of-object-embedding
<p>If I copy a picture and paste it into Microsoft Word, would there be a difference if I embedded the same picture into another Word file? Are both cases object embedding (since the object is saved in the document)?</p>
62
word embeddings
What is the name of the word problem for free groups under straight line program encoding?
https://cs.stackexchange.com/questions/55413/what-is-the-name-of-the-word-problem-for-free-groups-under-straight-line-program
<p>I <a href="http://www.math.vanderbilt.edu/~msapir/ftp/pub/survey/survey.pdf" rel="nofollow">believe</a> that the <a href="https://en.wikipedia.org/wiki/Word_problem_for_groups" rel="nofollow">word problem</a> is the problem to decide whether two different expressions denote the same element of a suitably defined algebraic structure. For simplicity, let us focus on free groups here. (Because I'm only interested in free algebras, and for groups one might indeed call this a word problem.) The expressions $(b^{-1}c)^{-1}b^{-1}(ab^{-1})^{-1}$, $(ab^{-1}c)^{-1}$, and $a^{-1}bc^{-1}$ are examples of such expressions. The first and second expression denote the same element of the free group, while the third expression denotes a different element.</p> <p>The straight line program encoding is basically the same concept as arithmetic circuits, without implicit commutativity. It is one of the natural encodings of elements for a free algebra. One way to define the straight line program encoding is like in definition 1.1 from one of the <a href="http://mate.dm.uba.ar/~krick/Kr03.pdf" rel="nofollow">google results for straight line program</a>: The straight line program encoding of $f$ is an evaluation circuit $\gamma$ for $f$, where the only operations allowed belong to $\{()^{-1},\cdot\}$. More precisely: $\gamma=(\gamma_{1-n},\dots,\gamma_0,\gamma_1,\dots,\gamma_L)$ where $f=\gamma_L$, $\gamma_{1-n}:=x_1,\dots,\gamma_0:=x_n$ and for $k&gt;0$, $\gamma_k$ is one of the following forms: $\gamma_k=(\gamma_j)^{-1}$ or $\gamma_k=\gamma_i\cdot\gamma_j$ where $i,j&lt;k$.</p> <p>The application of the operation $()^{-1}$ can easily be restricted to $\gamma_k=(\gamma_j)^{-1}$ for $j\leq 0$ without increasing $L$ to more than $2L+n$. This means that we are basically talking about words over the alphabet $\{x_1,\dots,x_n,(x_1)^{-1},\dots,(x_n)^{-1}\}$, hence the name "word problem" makes sense. 
But it seems a bad name for the general problem to decide whether two elements of a free algebra given by straight line programs are identical. It might be called <em>identity testing</em>.</p> <blockquote> <p>Does the problem (to decide whether two elements of a free algebra given by straight line programs are identical) already have an established name, or is there a good name for this problem?</p> </blockquote> <p>Maybe a better idea would be to give a name to the complementary problem, i.e. the problem to distinguish two different elements of a free algebra. So calling it <em>slp distinction problem</em> for free groups (commutative rings, commutative inverse rings, Boolean rings, ...) could work, because straight line program (slp) is a long name (but good and descriptive nevertheless). The advantage of naming the complementary problem is that we get problems in RP and NP, instead of problems in co-RP and co-NP.</p> <hr> <p>The computational complexity of this problem is not worse than that of identity testing of constant polynomials over $\mathbb Z$ in straight line program encoding (no variables, i.e. $n=0$, but the straight line programs allow to compactly encode huge numbers): Using the same approach as in the <a href="https://rjlipton.wordpress.com/2009/04/16/the-word-problem-for-free-groups/" rel="nofollow">dlog-space algorithm for the normal word problem</a>, the problem can be reduced to deciding whether the product of integer 2x2 matrices equals the identity matrix. (The word problem over $n$ letters easily embeds into the word problem over $2$ letters, for example you can replace $a$, $b$, $c$, $d$ by $aa$, $ab$, $ba$, and $bb$.) So the problem is in <a href="https://en.wikipedia.org/wiki/RP_(complexity)" rel="nofollow">randomized polynomial time (RP)</a> (or rather co-RP). However, I didn't manage to show that it is actually equivalent (in complexity) to identity testing of (constant) polynomials over $\mathbb Z$, as I initially hoped. 
(This is unrelated to the answer by D.W., which rather shows that the significance of straight line encoding is currently not widely appreciated.)</p>
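As a small illustration of the matrix reduction mentioned above (my sketch): by Sanov's theorem, mapping a to [[1,2],[0,1]] and b to [[1,0],[2,1]] embeds the free group on two generators into $SL(2,\mathbb Z)$, so a word equals the identity iff its matrix product is the identity. Doing the arithmetic modulo a large prime keeps the entries small at the price of a tiny one-sided error, matching the co-RP discussion. For brevity this evaluates a plain word; a straight-line program would be evaluated gate by gate with the same matrix arithmetic.

```python
# Randomized word-problem check for the free group on {a, b}, via Sanov's
# theorem.  Upper-case letters denote inverses.

P = (1 << 61) - 1  # a large (Mersenne) prime modulus

def mat_mul(m1, m2):
    """2x2 matrix product modulo P; matrices are pairs of row tuples."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g) % P, (a * f + b * h) % P), \
           ((c * e + d * g) % P, (c * f + d * h) % P)

GEN = {
    'a': ((1, 2), (0, 1)),
    'A': ((1, P - 2), (0, 1)),   # a^{-1} = [[1,-2],[0,1]] mod P
    'b': ((1, 0), (2, 1)),
    'B': ((1, 0), (P - 2, 1)),   # b^{-1} = [[1,0],[-2,1]] mod P
}

def is_identity_word(word):
    """True iff the word (string over a, A, b, B) is trivial in the free group
    (up to the negligible chance of a collision modulo P)."""
    m = ((1, 0), (0, 1))
    for ch in word:
        m = mat_mul(m, GEN[ch])
    return m == ((1, 0), (0, 1))
```

For example, `is_identity_word("abBA")` is true (the word freely reduces to the empty word), while the commutator `"abAB"` is nontrivial.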
<p>The blog post you link to already gives a <em>deterministic</em> polynomial-time (in fact, linear-time) algorithm for the word problem over a free group with 2 letters.</p> <p>In contrast, no deterministic polynomial-time algorithm for identity testing for polynomials is currently known (this is a famous open problem).</p> <p>Therefore, it's not likely to be easy to prove that the word problem over the free group is equivalent in complexity to identity testing of polynomials.</p> <hr> <p>The "straight-line encoding" doesn't seem to change the complexity of the word problem in any interesting way. It seems equivalent to simply specifying the input as an expression (i.e., a sequence of symbols, where each symbol is either $x_i$ or $x_i^{-1}$).</p>
63
word embeddings
For regular languages A and B, determine whether B might match early in (A B)
https://cs.stackexchange.com/questions/10852/for-regular-languages-a-and-b-determine-whether-b-might-match-early-in-a-b
<p>I have two regular languages <em>A</em> and <em>B</em>, and I want to determine whether there is any pair of strings, <em>a</em> in <em>A</em> and <em>b</em> in <em>B</em>, such that (<em>a</em>&nbsp;<em>b</em>) is a prefix of a string in (<em>A</em>&nbsp;<em>B</em>) and the left-most match of <em>B</em> in (<em>a</em>&nbsp;<em>b</em>) includes one or more characters from <em>a</em>.</p> <p>Raphael's formulation is good:</p> <blockquote> <p>Given two regular language A, B, is there a (non-empty) prefix of a word b in B that is a suffix of a word in A so that the rest of b is a prefix of another word in B?</p> </blockquote> <h1>Example</h1> <p>For example, let's say I have two regular languages, one which describes some properly escaped HTML text, and one which describes an end tag:</p> <pre><code>A := ([^&amp;&lt;&gt;] | [&amp;] [a-z] [a-z0-9]+ [;])*;
B := "&lt;/title";
</code></pre> <p>By inspection, I can tell that there is no string (<em>a</em>&nbsp;<em>b</em>) in (<em>A</em>&nbsp;<em>B</em>) such that the first match of <em>B</em> includes characters from <em>a</em> because <code>"&lt;"</code> is a prefix of <em>B</em> which cannot occur as a suffix of <em>A</em>.</p> <p>But given a different grammar:</p> <pre><code>A' := (A | "&lt;![CDATA[" ("]"? "&gt;" | "]"* [^\]&gt;])* "]]&gt;")*;
B' := "&lt;/title" ([^&gt;\"] | [\"] [^\"]* [\"])* "&gt;";
</code></pre> <p>then there are strings</p> <pre><code>a = '&lt;![CDATA[&lt;/title "]]&gt;"';
b = '&lt;/title&gt;';
</code></pre> <p>where (<em>A</em>&nbsp;<em>B</em>) matches <code>'&lt;![CDATA[&lt;/title "]]&gt;"&lt;/title&gt;'</code> and the left-most match of <em>B</em> is <code>'&lt;/title "]]&gt;"&lt;/title&gt;'</code> which includes a non-empty suffix of <em>a</em> : <code>'&lt;/title "]]&gt;"'</code>.</p> <h1>Motivation</h1> <p><em>A</em> in my situation describes the output of an encoder/sanitizer that is derived from a grammar, so an untrusted input is fed to the encoder/sanitizer and I know the output matches <em>A</em> by construction.</p> <p><em>B</em> is a limit condition in a larger grammar that describes how parsers determine where a chunk of an embedded language ends so they can hand it off to a parser for the embedded language.</p> <p>My end goal is to be able to determine when I can optimize away runtime checks that ensure that it is safe to embed a particular encoded string. For these examples, it would be safe to optimize out the first check, but not the second.</p> <hr> <p>Is this a solved problem? Does it have a name? Any pointers appreciated.</p>
<p>If I translate your description properly, here is what you want to ask:</p> <blockquote> <p>Given two regular language $A$, $B$, is there a prefix of a word $b$ in $B$ that is a suffix of a word in $A$ so that the rest of $b$ is a prefix of another word in $B$?</p> </blockquote> <p>Formally:</p> <blockquote> <p>Given regular $A$ and $B$, are there $a = wx \in A$ and $b = yz \in B$ so that $xy \in B$?</p> </blockquote> <p>Deciding this boils down to deciding whether</p> <p>$\qquad B \cap (\operatorname{suff}(A) \cdot \operatorname{pref}(B) ) = \emptyset$.</p> <p>Now we employ some closure properties: $\mathrm{REG}$ is closed against <a href="https://en.wikipedia.org/wiki/Right_quotient" rel="nofollow">right and left quotient</a>, concatenation and intersection, so with</p> <p>$\qquad \operatorname{suff}(A) = A \backslash \Sigma^*$ and $\operatorname{pref}(B) = B / \Sigma^*$</p> <p>the set we want to check for emptiness is regular, and therefore the check is indeed decidable.</p> <p>In order to actually <em>do</em> this, I suggest you build automata for $\operatorname{suff}(A)$ and $\operatorname{pref}(B)$, concatenate them, intersect the result with the automaton for $B$ and use the standard check for emptiness (is a final state reachable from the initial state?).</p> <p>Note that you never need DFA, all of this works nicely with NFA. Therefore, no exponential state explosion occurs; intersection multiplies the state numbers, though.</p>
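The criterion is easy to sanity-check on finite samples of $A$ and $B$ with brute force (my sketch; not a replacement for the automata construction, and with the suffix $x$ required non-empty, per the question's "one or more characters from $a$"):

```python
def overlap_hazard(A, B):
    """Is there a word a in A with a non-empty suffix x, and a prefix y of
    some word in B, such that x + y lies in B?  (Finite-sample brute force.)"""
    pref_B = {w[:i] for w in B for i in range(len(w) + 1)}
    return any(a[i:] + y in B
               for a in A
               for i in range(len(a))   # a[i:] is a non-empty suffix of a
               for y in pref_B)
```

For instance, `overlap_hazard({"ab"}, {"bc", "c"})` is true via x = "b", y = "c", xy = "bc", while `overlap_hazard({"ab"}, {"ba"})` is false; for full regular languages the quotient-based automaton check above replaces the enumeration.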
64
word embeddings
Numerically validate CRC performance
https://cs.stackexchange.com/questions/69262/numerically-validate-crc-performance
<p><a href="https://users.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf" rel="nofollow noreferrer">Koopman, P., &amp; Chakravarty, T. (2004, June). Cyclic redundancy code (CRC) polynomial selection for embedded networks. In Dependable Systems and Networks, 2004 International Conference on (pp. 145-154). IEEE.</a> </p> <p>In this paper the authors compare the performance of different CRC polynomials in terms of the minimum Hamming distance between two codewords with the same checksum. </p> <p>The naive method of doing this would require enumerating all codes that have the same checksum. This is computationally intractable for the 2048-bit data words that the authors consider - so they must have used some more efficient technique.</p> <p>I did not find a mention in the paper of how the authors computed it. But it would be interesting to know the method, as it would allow one to numerically verify the optimality of the chosen CRC polynomials for a given data word size.</p> <p>What algorithm did the authors use to compute the minimum HD for a given CRC polynomial on long data words?</p>
<p>There are multiple techniques to compute the minimum Hamming distance for a given CRC polynomial. I don't know what technique they used, but here are three techniques that seem suitable.</p> <p>I will assume the problem is as follows:</p> <p>Given a CRC polynomial $p(x)$, determine whether there exists a $d$-bit error pattern of length $n$ that isn't detected.</p> <p>This is equivalent to asking whether there exists a word of length $n$ with Hamming weight $d$ whose CRC is zero; or whether there exists a polynomial $q(x)$ of degree $n$ and weight $d$ such that $p(x)$ divides $q(x)$.</p> <h2>Method #1: Exhaustive enumeration</h2> <p>One can enumerate all words of length $n$ and Hamming weight $d$, and check each one to see if that error pattern would be detected. This involves checking ${n \choose d}$ possibilities, which is feasible if $d$ is small and $n$ is not too large, but rapidly becomes infeasible for larger parameters.</p> <p>For instance, their Table 1 lists results for $n=48$ and $d=1,2,3,4,5,6$. These are feasible to compute by exhaustive enumeration. For instance, for $n=48$ and $d=6$, we have ${48 \choose 6} \approx 2^{37.3}$, and it's feasible to do $2^{37.3}$ computations on a computer in a reasonable amount of time.</p> <p>As another example, for $n=2048$, it's feasible to compute results for $d=1,2,3$. We have ${2048 \choose 3} \approx 2^{32.4}$, which isn't too large. However, ${2048 \choose 4} \approx 2^{44.6}$ which is feasible but uncomfortably large.</p> <h2>Method #2: Meet-in-the-middle</h2> <p>We can speed up method #1 using a trick. The trick reduces the amount of computation time to something like $\sqrt{{n \choose d}}$, at the cost of requiring much more space (memory).</p> <p>Note that the CRC is linear. Suppose there exists a data word $x$ of weight $d$ whose CRC is zero. We can express $x$ as $x=y \oplus z$ (where $\oplus$ denotes the xor), where $y,z$ each have weight $d/2$. 
Then by the linearity of the CRC, the CRC of $x$ is the xor of the CRC of $y$ and the CRC of $z$. So, if we want to search for an $x$ whose CRC is zero, it suffices to search for $y,z$ whose CRCs xor to zero -- i.e., to search for $y,z$ that have the same CRC.</p> <p>Now the method becomes this. We enumerate all data words $y$ of length $n$ and weight $d/2$, compute their CRC, and store them in a hashtable (or sorted list) keyed on the CRC. Next, we enumerate all data words $z$ of length $n$ and weight $d/2$, compute each one's CRC, and look up that CRC in the hashtable to look for matches. This will require $2 {n \choose d/2}$ CRC computations and ${n \choose d/2}$ space. Note that ${n \choose d/2}$ is roughly $\sqrt{{n \choose d}}$, so this is a big reduction in the running time.</p> <p>If $d$ is not even, then we split into $y$ of weight $(d-1)/2$ and $z$ of weight $(d+1)/2$, so the running time is ${n \choose (d+1)/2}$ and the space is ${n \choose (d-1)/2}$.</p> <p>Of course, we can trade off time for space, so we can find an algorithm with running time ${n \choose d_1}$ and space ${n \choose d_2}$ for any $d_1,d_2$ such that $d_1+d_2=d$.</p> <p>As an example, for $n=48$ and $d=6$, there is an algorithm with ${48 \choose 3} \approx 2^{16.1}$ running time and ${48 \choose 3} \approx 2^{16.1}$ space -- easily computable on a computer. As another example, for $n=2048$ and $d=4$, the running time is $2^{21}$ and the space is $2^{21}$ -- again, easily feasible. For $n=2048$ and $d=5$, the running time is $2^{32.4}$ and the space is $2^{21}$, which is again easily feasible. So this readily explains how one could compute the results in Figures 1-2.</p> <p>(In fact, it turns out you can speed up this method by a small additional factor, at the price of an exponentially small probability of error. There are many ways to split $x=y \oplus z$, so in some sense the above method is doing more work than necessary.
An optimization is to require that $y$ be zero in the last $n/2$ bits and $z$ be zero in the first $n/2$ bits, and search for such a split. This reduces the running time and space to ${n/2 \choose d/2}$ time and space. However, there is no guarantee that $x$ can be split in this way. So, instead, we pick a random subset $S \subseteq \{1,2,\dots,n\}$ of $n/2$ positions, and we require $y$ to be zero in all bit positions of $S$ and $z$ to be zero in all bit positions of $\overline{S}$, and we search for any solution. Then, we repeat this again multiple times with multiple different random choices of $S$. If there is a solution, then we have about a $1/\sqrt{\pi d/2}$ chance of finding it, so after $O(\sqrt{d})$ repetitions, there is an overwhelming chance that we find it. This provides a speedup by about a factor of $\Theta(2^{d/2}/\sqrt{d})$, at the cost of some implementation complexity.)</p> <h2>Method #3: Discrete logs in finite fields</h2> <p>Finally, there is one more method, which involves a lot more implementation complexity and requires knowledge of finite fields. Given $p(x)$, we'll try to determine whether there exists a polynomial $q(x)$ of degree $n$ and weight $d$ such that</p> <p>$$q(x) \equiv 0 \pmod{p(x)}.$$</p> <p>We can think of this as working in the finite field $\mathbb{F}_{2^m}$ where $m$ is the degree of $p(x)$. Note that there are fairly efficient algorithms for computing the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm" rel="nofollow noreferrer">discrete log</a> in $\mathbb{F}_{2^m}$ when $m$ is not too large (as is the case for CRC polynomials, where typically $m \le 32$). We can think of any polynomial as an element in this finite field. For example, we can think of the degree-1 polynomial $x$ as an element in the finite field. </p> <p>We'll use a subroutine for computing the discrete log of an arbitrary polynomial $s(x)$, to the base $x$. 
In other words, given $s(x),p(x)$, this subroutine returns us a number $k$ such that</p> <p>$$s(x) \equiv x^k \pmod{p(x)}.$$</p> <p>Now we'll split $q(x)=r(x)+s(x)$ where $r(x)$ has weight $d-1$ and $s(x)$ has weight $1$. If $q(x) \equiv 0 \pmod{p(x)}$, it follows that</p> <p>$$r(x) \equiv s(x) \pmod{p(x)}.$$</p> <p>Thus we'll enumerate all possible polynomials $r(x)$ of degree $&lt; n$ and weight $d-1$, and for each, we will compute the discrete logarithm of $r(x)$ to the base $x$. If the resulting discrete log, call it $k$, is less than $n$, we have found a valid $s(x)$ of weight $1$ and thus $q(x) = r(x) + x^k$ is a multiple of $p(x)$ with degree $&lt;n$ and weight $d$, so we have found a valid solution. If the resulting discrete log is $\ge n$, we discard this possibility $r(x)$ and continue enumerating other values of $r(x)$.</p> <p>The running time is ${n \choose d-1}$ computations of a discrete log and $O(1)$ space. This is unlikely to be better than Method #2 except for very large values of $n$ and small values of $d$, but there may be some parameter settings where it is faster than Method #2.</p>
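The paper itself does not say which method was used, but Methods #1 and #2 above are straightforward to sketch. The following Python code is my own illustration (not code from the paper): it searches for an undetected $d$-bit error pattern, with the generator given by its low-order coefficient bits and the top term implicit (so `width=3, poly=0b011` encodes the toy generator $x^3+x+1$).

```python
# Sketch (illustration only, not from the paper): search for an undetected
# d-bit error pattern of length n under a CRC, i.e. a weight-d polynomial
# divisible by the generator.
from itertools import combinations

def crc_of_monomial(k, poly, width):
    """Remainder of x^k modulo the generator polynomial, as a bit mask."""
    r = 1  # represents x^0
    for _ in range(k):
        top = r >> (width - 1)
        r = (r << 1) & ((1 << width) - 1)
        if top:
            r ^= poly
    return r

def exists_undetected(n, d, poly, width):
    """Method #1: exhaustively try all weight-d patterns of length n."""
    tbl = [crc_of_monomial(k, poly, width) for k in range(n)]
    for pos in combinations(range(n), d):
        acc = 0
        for k in pos:
            acc ^= tbl[k]
        if acc == 0:  # the pattern is a multiple of the generator
            return True
    return False

def exists_undetected_mitm(n, d, poly, width):
    """Method #2: meet-in-the-middle over two halves of the error weight."""
    tbl = [crc_of_monomial(k, poly, width) for k in range(n)]
    d1, d2 = d // 2, d - d // 2
    seen = {}  # CRC value -> list of weight-d1 position sets
    for ys in combinations(range(n), d1):
        c = 0
        for k in ys:
            c ^= tbl[k]
        seen.setdefault(c, []).append(set(ys))
    for zs in combinations(range(n), d2):
        c = 0
        for k in zs:
            c ^= tbl[k]
        for ys in seen.get(c, []):
            if ys.isdisjoint(zs):  # halves must use distinct bit positions
                return True
    return False
```

For $p(x)=x^3+x+1$ both searches agree: the generator itself is an undetected weight-3 error already at length 4, while no weight-2 error of length 7 goes undetected (the polynomial is primitive, so $x^a+x^b$ is divisible only when $a \equiv b \bmod 7$).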
65
word embeddings
Why is Hamming Weight (in the CRC context) independent from the data?
https://cs.stackexchange.com/questions/51758/why-is-hamming-weight-in-the-crc-context-independent-from-the-data
<p>I'm designing a communication protocol for 24- to 52-bit (typically 32-bit) data including a CRC-8 for error detection. I'm trying to select the best polynomial for this kind of application.</p> <p>In the paper <a href="http://users.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf" rel="nofollow">Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks</a>, Koopman et al. give a very nice method to select a CRC polynomial depending on the needs of the application. The paper proposes that if the Hamming Distances of two or more polynomials are equal, one should select the one with the minimum Hamming Weight for a given bit length and number of error bits. Koopman also kindly makes all these calculations and information publicly available on his site: <a href="http://users.ece.cmu.edu/~koopman/crc/index.html" rel="nofollow">http://users.ece.cmu.edu/~koopman/crc/index.html</a></p> <p>As he also suggests "Don’t blindly trust what you hear on this topic", I've verified his results for some of the 8-bit CRC polynomials using my own software. But here is what I failed to understand:</p> <p>In the paper <a href="http://users.ece.cmu.edu/~koopman/networks/dsn02/dsn02_koopman.pdf" rel="nofollow">32-Bit Cyclic Redundancy Codes for Internet Applications</a> he defines the Hamming Weight as</p> <blockquote> <p>A weight $W_i$ is the number of occurrences of a combination of $i$ error bits, including bit errors perturbing the CRC value, that would be undetected by a given polynomial for a given data word length.</p> </blockquote> <p>Also, he explains why the Hamming Weight is independent from the data as follows:</p> <blockquote> <p>Consider the fact that a data corruption is undetectable if and only if it transforms one codeword (some payload with its valid FCS value) into a different valid codeword. But because CRCs are linear, this means that the faulty bits that have been flipped from the original codeword have to themselves form a valid codeword. 
(In other words, the bits flipped in the message payload have to be compensated for by bits flipped in the FCS field, and the only way this can happen is if the entire set of bits flipped is itself a valid codeword.) This means that the actual data in a message payload is irrelevant in computing error detection abilities, which simplifies things greatly.</p> </blockquote> <p>I also tried to calculate the Hamming Weight for different data and indeed obtained the same results. But I don't understand why this is the case. A rigorous proof or any different insights are greatly appreciated.</p> <p><strong>Edit:</strong> <em>The example for Hamming weight given in the paper:</em></p> <p>Suppose we have a codeword of length 12144 bits. So we have</p> <p>$$ \pmatrix{12144 \\ 4} = \frac{12144!}{12140! ~ 4!} \approx 906 \times 10^{12} $$</p> <p>possible 4-bit errors. The Hamming Weight is the number of undetected 4-bit errors out of all possible 4-bit errors. For the 802.3 CRC polynomial it is calculated to be $W_4 = 223059$. This means $223059$ of the $906 \times 10^{12}$ errors go undetected, and this number does not change with the codeword used.</p>
<p>Define the sets</p> <p>$$\begin{align*} P(d) &amp;:= \{ p ~|~ \deg(p) \leq d \}\\ A_i(d) &amp;:= \{ p \in P(d) ~|~ s(p) = i \} \end{align*}$$</p> <p>where $s(p)$ is the number of $1$ coefficients of the polynomial $p$. For a given CRC generator polynomial $g$, define</p> <p>$$ U_i(d) := \{ p \in A_i(d) ~|~ \exists q, p = q g \} $$</p> <p>So $U_i(d)$ is the set of all undetected errors of weight $i$ for data length $d$ using $g$. Now select an $x \in P(d)$ such that $x = q g$ for some $q$ and define the set</p> <p>$$ U(x) := x \oplus U_i(d) := \{x \oplus p ~|~ p \in U_i(d) \} $$</p> <p>Because of linearity, all elements of $U(x)$ are divisible by $g$. We need to show that $U_i(d)$ and $U(x)$ have the same number of elements. This is obvious, since $x \oplus p_1 = x \oplus p_2 \iff p_1 = p_2$.</p>
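The linearity argument can also be checked numerically. The Python sketch below is my own illustration (using a toy generator $x^3+x+1$, not one of the polynomials from the papers): it counts the undetected $d$-bit errors for several different payloads and finds the same count each time.

```python
# Sketch (illustration only): verify that the count of undetected d-bit
# errors is independent of the transmitted data, for a toy CRC generator
# x^3 + x + 1 (width=3, low-order bits poly=0b011).
from itertools import combinations

def crc_remainder(word, n, poly, width):
    """Remainder of the n-bit polynomial `word` modulo the generator."""
    r = 0
    for i in range(n - 1, -1, -1):        # feed bits MSB first
        top = (r >> (width - 1)) & 1
        r = ((r << 1) | ((word >> i) & 1)) & ((1 << width) - 1)
        if top:
            r ^= poly
    return r

def undetected_count(data, m, d, poly, width):
    """Number of weight-d bit flips of the codeword that go undetected."""
    n = m + width
    fcs = crc_remainder(data << width, n, poly, width)
    cw = (data << width) | fcs            # payload followed by its FCS
    count = 0
    for pos in combinations(range(n), d):
        e = 0
        for k in pos:
            e |= 1 << k
        if crc_remainder(cw ^ e, n, poly, width) == 0:
            count += 1                    # flipped word is again a codeword
    return count
```

With $m=5$ payload bits (so $n=8$ codeword bits) and $d=3$, every payload yields the same weight $W_3$: the undetected patterns are exactly the weight-3 multiples of the generator, which do not depend on the data.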
66
word embeddings
join is the heart of the monad because it encompasses everything a monad can do that a functor cannot. Is this true?
https://cs.stackexchange.com/questions/99516/join-is-the-heart-of-the-monad-because-it-encompasses-everything-a-monad-can-do
<p>There is a controversy about monad implementation on Stack Overflow.</p> <p>The original question is:</p> <p><a href="https://stackoverflow.com/questions/53109889/whats-so-special-about-monads-in-kleisli-category#">What's so special about Monads in Kleisli category?</a></p> <blockquote> <p>Is there any counterexample showing that Functors cannot do what Monads can do, <em>except the robustness of functional composition by flattening the nested structure</em>?</p> <p>What's so special about Monads in the Kleisli category? It seems fairly possible to implement Monads with a little expansion to avoid the nested structure of a Functor, and without the monadic functions <code>a -&gt; m b</code> that are the entity in the Kleisli category.</p> </blockquote> <p>An answer is:</p> <blockquote> <p>&quot;Avoiding the nested type&quot; is not, in fact, the purpose of join, it's just a neat side-effect. The way you put it makes it sound like join just strips the outer type, but the monad's value is unchanged.</p> <p>You can think of a functor as basically a container. There's an arbitrary inner type, and around it an outer structure that allows some variance, some extra values to &quot;decorate&quot; your inner value. fmap allows you to work on the things inside the container, the way you would work on them normally. This is basically the limit of what you can do with a functor.</p> <p>A monad is a functor with a special power: where fmap allows you to work on an inner value, bind allows you to combine outer values in a consistent way. This is much more powerful than a simple functor.</p> <p>join is the heart of the monad because it encompasses everything a monad can do that a functor cannot. Since join and &gt;&gt;= are isomorphic and you can define each in terms of the other, defining &gt;&gt;= without join still provides the same degree of freedom (you could say that defining &gt;&gt;= indirectly defines join, and vice versa). 
You get the same thing either way, except that &gt;&gt;= is more convenient and join is more &quot;pure&quot;. So it is really a boring question to compare the two. – DarthFennec</p> </blockquote> <p>The objection by the questioner is:</p> <blockquote> <p>I believe &quot;Avoiding the nested type&quot; is not just a neat side-effect, but a definition of the &quot;join&quot; of a Monad in category theory,</p> </blockquote> <p><a href="https://ncatlab.org/nlab/show/monad+%28in+computer+science%29" rel="nofollow noreferrer">the multiplication natural transformation μ:T∘T⇒T of the monad provides for each object X a morphism μX:T(T(X))→T(X)</a></p> <blockquote> <p>and that's exactly what my code does.</p> <p>I know many people implement monads in Haskell in this manner, but the fact is, there is the Maybe functor in Haskell, which does not have join, and there is the Free monad, where join is embedded into the defined structure from the start. They are objects for which users define Functors to do things.</p> <p>Therefore, ... [your] observation does not fit the fact of the existence of the Maybe functor and the Free monad.</p> </blockquote> <p>Which opinion is correct?</p> <p>P.S. An additional comment from DarthFennec (the answerer) is below:</p> <hr /> <p><code>Free</code> is weird, in that it's one of the few monads that <em>doesn't actually do anything</em>.</p> <p><code>Free</code> can be used to turn any functor into a monad, which allows you to use <code>do</code> notation and other conveniences. However, the conceit of <code>Free</code> is that <code>join</code> does not combine your actions the way other monads do, instead it keeps them separate, inserting them into a list-like structure; the idea being that this structure is later processed and the actions are combined by <em>separate code</em>. An equivalent approach would be to move that processing code into <code>join</code> itself, but that would turn the functor into a monad and there would be no point in using <code>Free</code>. 
So the only reason <code>Free</code> works is because it delegates the actual &quot;doing things&quot; part of the monad elsewhere; its <code>join</code> opts to defer action to code running outside the monad. This is like a <code>+</code> operator that, instead of adding the numbers, returns an abstract syntax tree; one could then process that tree later in whatever way is needed.</p> <blockquote> <p>These observations do not fit the fact of the existence of the Maybe functor and the Free monad.</p> </blockquote> <p>You are incorrect. As explained, <code>Maybe</code> and <code>Free</code> fit perfectly into my previous observations:</p> <ul> <li>The <code>Maybe</code> functor simply does not have the same expressiveness as the <code>Maybe</code> monad.</li> <li>The <code>Free</code> monad transforms functors into monads in the only way it possibly can: by not implementing a monadic behavior, and instead simply deferring it to some assumed processing code.</li> </ul> <hr /> <p>First of all, the phrases he uses, &quot;<code>Free</code> is weird&quot; and &quot;the conceit of <code>Free</code>&quot;, unduly disparage the Free monad, and such wording obviously does not justify his claims.</p> <blockquote> <p>An equivalent approach would be to move that processing code into <code>join</code> itself, but that would turn the functor into a monad and there would be no point in using <code>Free</code>.</p> </blockquote> <p>My question would be: if &quot;join is the heart of the monad&quot;, how come the &quot;processing code&quot; has been moved away from the heart (=<code>join</code>) and could be moved back into <code>join</code> again? That is what is weird.</p> <p>Free is one of the generalizations that abstract the &quot;processing code&quot; out of a Monad. 
The Monad structure, including a <code>join</code> that satisfies the monad laws, is pre-defined without the &quot;processing code&quot;, which would be a functor (or a function, in the case of an Operational monad).</p> <blockquote> <p>So the only reason <code>Free</code> works is because it delegates the actual &quot;doing things&quot; part of the monad elsewhere; its <code>join</code> opts to defer action to code running outside the monad.</p> </blockquote> <p>Either way, again, &quot;doing things&quot; does not have to live in <code>join</code>. If someone strongly insists that &quot;join is the heart of the monad&quot;, maybe it's ok to do so. However, as with the Free monad, it's totally reasonable for a user to let <code>join</code> be pre-defined in a generalized Monad structure as the functionality of flattening the structure, and other monads likewise do not have to move the &quot;doing things&quot; into <code>join</code>, because it's not the heart of the monad anyway.</p>
<p>I'm not sure I fully understand the question, but since it's my answer you're wondering about, I'll do my best to clarify my meaning.<sup>1</sup></p> <blockquote> <p>join is the heart of the monad because it encompasses everything a monad can do that a functor cannot. Is this true?</p> </blockquote> <p>Let's find out!</p> <p>We can define a functor<sup>2</sup> like so:</p> <pre><code>class Functor' f where
  pure' :: a -&gt; f a
  fmap' :: (a -&gt; b) -&gt; f a -&gt; f b
  ap'   :: f (a -&gt; b) -&gt; f a -&gt; f b
</code></pre> <p>We can define a monad in the following way:</p> <pre><code>class Monad' m where
  pure' :: a -&gt; m a
  fmap' :: (a -&gt; b) -&gt; m a -&gt; m b
  ap'   :: m (a -&gt; b) -&gt; m a -&gt; m b
  join' :: m (m a) -&gt; m a
</code></pre> <p>As you can see, these are the same except for the added <code>join'</code> on the <code>Monad'</code> class. Since without this <code>join'</code> the <code>Monad'</code> is indistinguishable from the <code>Functor'</code>, I think it's safe to say that <code>join'</code> embodies everything in <code>Monad'</code> that is not in <code>Functor'</code>.</p> <p>This can also be seen in the way the Haskell Prelude defines <code>Monad</code>:</p> <pre><code>class Applicative m =&gt; Monad m where
  (&gt;&gt;=)  :: m a -&gt; (a -&gt; m b) -&gt; m b
  (&gt;&gt;)   :: m a -&gt; m b -&gt; m b
  m &gt;&gt; k = m &gt;&gt;= (\_ -&gt; k)
  return :: a -&gt; m a
  return = pure
</code></pre> <p>From this we can see the following:</p> <ul> <li>Any <code>Monad m</code> must also be an <code>Applicative m</code>, meaning that all <code>Monad</code>s are also <code>Functor</code>s.</li> <li><code>&gt;&gt;</code> is simply a convenience method, and is defined in terms of <code>&gt;&gt;=</code>.</li> <li><code>return</code> is just another name for the applicative <code>pure</code>.</li> <li>The only remaining method defined by <code>Monad</code> is <code>&gt;&gt;=</code>. 
Therefore, <code>&gt;&gt;=</code> is the one thing <code>Monad</code> has that <code>Applicative</code> doesn't have.</li> </ul> <p>However, we can define <code>&gt;&gt;=</code> in the following way:</p> <pre><code>(&gt;&gt;=) :: m a -&gt; (a -&gt; m b) -&gt; m b
m &gt;&gt;= f = join (fmap f m)
</code></pre> <p><code>fmap</code> is part of <code>Functor</code>, so <code>join</code> is the only remaining thing that's unique to <code>Monad</code>. This is why I described <code>join</code> as "the heart of the monad".</p> <blockquote> <p>Either way, again, "doing things" does not have to live in <code>join</code>. If someone strongly insists that "join is the heart of the monad", maybe it's ok to do so. However, as with the Free monad, it's totally reasonable for a user to let <code>join</code> be pre-defined in a generalized Monad structure as the functionality of flattening the structure, and other monads likewise do not have to move the "doing things" into <code>join</code>, because it's not the heart of the monad anyway.</p> </blockquote> <p>I'm not sure I understand what you mean. <code>join</code> is the place in a <code>Monad</code> where the actual "doing" is defined; this is where the <em>purpose</em> of the <code>Monad</code> is encoded. The <em>purpose</em> of <code>Free</code> is to displace the "actual" processing logic for the user to deal with somewhere else. I suppose you could say that <code>Free</code> is a way to transform a <code>Monad</code> problem into a data processing problem, in that <code>join</code> creates a data structure that can later be processed. This is unusual because most <code>Monad</code>s encode destructive processing into <code>join</code>, but this doesn't have to be the case. 
But the fact that the "heart" of <code>Free</code> is the displacement of the business logic does not mean <code>join</code> doesn't represent the "heart" of a <code>Monad</code>.</p> <hr> <p><sub><sup>1</sup>Note that I don't have a category theory background, so I'm going to be working in the context of Haskell. I assume that's okay, since this question is tagged as <code>[functional-programming]</code>.</sub></p> <p><sub><sup>2</sup>This is actually an "applicative functor", which defines a couple of extra methods. I believe this is appropriate, as the questioner's original question defined a functor with <code>fmap</code> and <code>unit</code>, the latter being another name for the applicative <code>pure</code>.</sub></p>
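The identity `m >>= f = join (fmap f m)` from the answer can be made concrete outside Haskell as well. Here is a small Python sketch (my own illustration, not from the thread) of a Maybe-like type where bind is literally defined as join after fmap:

```python
# Sketch (illustration only): a Maybe-like type in Python, showing that
# bind can be derived as join . fmap, mirroring  m >>= f = join (fmap f m).
NOTHING = object()          # stands for Haskell's Nothing

def just(x):
    return ("Just", x)      # stands for Just x

def fmap(f, m):             # Functor: apply f inside the container
    return NOTHING if m is NOTHING else just(f(m[1]))

def join(mm):               # Monad: flatten Maybe (Maybe a) -> Maybe a
    return NOTHING if mm is NOTHING else mm[1]

def bind(m, f):             # m >>= f, defined purely via fmap and join
    return join(fmap(f, m))

# A partial function a -> Maybe b: halve only even numbers.
half = lambda x: just(x // 2) if x % 2 == 0 else NOTHING
```

`bind(just(10), half)` gives `just(5)`, while `bind(just(3), half)` and `bind(NOTHING, half)` both give `NOTHING` -- exactly the short-circuiting that the Maybe monad adds on top of what its functor alone can express.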
67
graph theory
Research in Graph Theory versus Graph Algorithms
https://cs.stackexchange.com/questions/25847/research-in-graph-theory-versus-graph-algorithms
<p>I have a very generic question related to research. I'm interested in graph theory and have taken a course in it. I have studied some topics from the perspective of graph theory as a branch of mathematics, and I have also studied some graph algorithms. I'm going for a research internship in graph theory, but I'm unsure about my real interest in graphs because I lack a clear idea of <em>the real difference between doing research in graph algorithms and doing graph theory as a mathematics student</em>. I would like to know the following:</p> <ol> <li>What is the real difference between doing graph theory as a mathematics student and doing graph algorithms? Is there a real difference at all?</li> <li>Can someone point me to good sources for research papers on graph theory and graph algorithms?</li> <li>Is it good to start working on graphs as a mathematics student?</li> </ol> <p>I don't know if this is the right place for such a question. Please let me know if it doesn't fit here.</p>
<p><strong>Question 1</strong></p> <p>I would say that the two areas are definitely not identical, however there is a huge overlap. Partly it depends on where you draw some very fuzzy lines. Let's start with:</p> <ul> <li><em>Graph Theory</em> is about the properties of graphs as mathematical objects</li> <li><em>Graph Algorithms</em> as an area of research is about solving <strong><em>computational</em></strong> problems that are represented using graphs.</li> </ul> <p>Of course graph theory is unsurprisingly very useful in developing graph algorithms, and graph algorithms can answer questions in graph theory. Indeed, as you have obviously noticed, many problems in Graph Theory can be cast as computational problems, and answered by giving an algorithm (in a sense this is an aspect of the <a href="http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence">Curry-Howard Correspondence</a>), so especially at the introductory level, there is little more than the style of presentation that separates them. </p> <p>Just to make things even more confusing, most researchers in one field have at least some interest and experience in the other, but there are a couple of points where we can draw certain lines of distinction:</p> <ul> <li>Graph theory (as a field) will happily deal with infinite graphs, which are not so interesting from an algorithmic perspective.</li> <li>Graph theorists will tend to be more interested in existential statements ("the chromatic number of a class of graphs is at most blah"), whereas graph algorithms people will be looking for the best algorithm to solve a problem ("how do we compute the actual value of the chromatic number as quickly as possible?").</li> <li>Graph algorithms includes/overlaps with the application and tailoring of graph algorithms to solve problems that aren't really about graphs (e.g. 
developing a good algorithm to cluster protein interaction networks), which a graph theorist would be uninterested in (at least <em>as</em> a graph theorist).</li> </ul> <p><strong>Question 2</strong></p> <p>If you have access to university subscriptions or similar (this is no way exhaustive):</p> <ul> <li><a href="http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%291097-0118">Journal of Graph Theory</a></li> <li><a href="http://www.siam.org/journals/sidma.php">SIAM Journal on Discrete Mathematics</a></li> <li><a href="http://www.journals.elsevier.com/journal-of-combinatorial-theory-series-b/">Journal of Combinatorial Theory: Series B</a></li> <li><a href="http://www.journals.elsevier.com/discrete-applied-mathematics/">Discrete Applied Mathematics</a></li> <li><a href="http://jgaa.info/">Journal of Graph Algorithms and Applications</a></li> </ul> <p>To muddy things further, many of these include examples of both pure graph theory and graph algorithms.</p> <p>A couple of lists for further exploration:</p> <ul> <li><a href="http://www1.cs.columbia.edu/~sanders/graphtheory/writings/journals.html">A list of graph theory journals</a></li> <li>For unrelated reasons, John Lamp has several lists that have interesting journals in there, but you'll have to do some keyword searching: <a href="http://lamp.infosys.deakin.edu.au/era/?page=jfordet12f&amp;selfor=0101">Pure Mathematics</a>, <a href="http://lamp.infosys.deakin.edu.au/era/?page=jfordet12f&amp;selfor=0102">Applied Mathematics</a>, <a href="http://lamp.infosys.deakin.edu.au/era/?page=jfordet12f&amp;selfor=0802">Theoretical Computer Science</a></li> </ul> <p>There is the <a href="http://arxiv.org/">arXiv preprint server</a>, which has preprint versions of research papers, but again, you'll have to spend a small amount of time to explore and find something you want (it's more set up for finding a paper you already know is there).</p> <p><strong>Question 3</strong></p> <p>This question cannot really be answered 
objectively. It depends entirely on things that you have no way of knowing (i.e. the future), and I have no way of knowing (how good the people are at your university, what opportunities you will gain or lose by taking that internship). </p> <p>If you want my subjective <em>general</em> opinion, I would say yes. Graph theory is an important part of mathematics and computer science (I personally contend they're not different things anyway), and versatility and breadth of knowledge are important characteristics of a good researcher, even if you later decide you have no intention of being a graph theorist - it's not going to stop you from being able to do complex analysis or topology.</p> <p>Again, this is about whether an arbitrary student would benefit from doing work in graphs (algorithms or theory) - you personally may be in a particular situation where it would not be beneficial, and we can't answer that here. For example, if taking the internship means that you don't get to do the internship in Category Theory that is actually the thing you want to do, then this could set you back. Early in a research career it is difficult to escape a particular path without going back to step one. Later on, it's easier to transition, but for better or worse there's effectively a period like an apprenticeship where you can't easily jump to any job you're interested in - but that's a question for Academia.SE.</p>
68
graph theory
Graph theory book for beginners
https://cs.stackexchange.com/questions/148733/graph-theory-book-for-beginners
<p>I need a book recommendation for graph theory that assumes a background in set theory. I want to cover those topics first, as graph theory is part of combinatorics. Can you recommend a book that is beginner-friendly without going much into combinatorics?</p> <p>I have heard about Harris and West, but these books use too much combinatorics; I want to do them later. I am self-studying, hence this question.</p>
69
graph theory
Does spectral graph theory say anything about graph isomorphism
https://cs.stackexchange.com/questions/25988/does-spectral-graph-theory-say-anything-about-graph-isomorphism
<p>Is there research or are there results that discuss graph isomorphism in the context of spectral graph theory?</p> <p>Some known facts from spectral graph theory are:</p> <ol> <li><p>Two graphs are called isospectral or cospectral if their adjacency matrices have equal multisets of eigenvalues.</p></li> <li><p>Almost all trees are cospectral (i.e., share their spectrum with another tree).</p></li> <li><p>The eigenvalues of a graph's adjacency matrix are invariant under relabeling (so cospectrality is a necessary, but not a sufficient, condition for isomorphism).</p></li> </ol> <p>Furthermore, is graph isomorphism "easy" to solve?</p>
<p>Graph isomorphism has been mentioned along with primality testing as early as 1971 in Cook's famous paper on NP-completeness. Cook mentions that he was unable to prove the NP-completeness of both problems. Nowadays we know that primality testing is in P, but the status of graph isomorphism is still unknown. Most experts conjecture that it is "NP-intermediate", that is, not in P but not NP-complete. Some conjecture that it should be solvable in quasipolynomial time (algorithms running in time $2^{\log^{O(1)} n}$). The best currently known algorithm, due to Luks, has running time $2^{O(\sqrt{n\log n})}$. It uses the so-called group theory method.</p> <p>The two most common approaches are individualization/refinement and the group theory method. The former approach tries to match vertices of one graph to vertices of the other. Given a vertex of degree $d$ belonging to the first graph, it can only be matched to a vertex of degree $d$ in the other graph, but this offers no savings if both graphs are $d$-regular. Individualization/refinement is a framework for generating more detailed "types" of vertices.</p> <p>It is possible that a similar approach can enhance the spectral method (which as stated fails for cospectral graphs), but I am not aware of any work along these lines (though it might exist; I'm not an expert in this area).</p> <p>The group theory method reduces graph isomorphism to the problem of finding generators for the automorphism groups of graphs. Given two graphs $G_1,G_2$, the idea is to compute generators for $\operatorname{Aut}(G_1 \cup G_2)$, and check whether any of them switches a vertex of $G_1$ with a vertex of $G_2$. 
For more details, see for example <a href="http://www.imsc.res.in/~arvind/notes.pdf">lecture notes</a> of Arvind.</p> <p>For a recent overview of the state of the art, consult a <a href="http://people.cs.uchicago.edu/~laci/papers/14itcs.pdf">paper</a> by Babai; Babai is one of the principal researchers in the area.</p> <p><em>Practical</em> graph isomorphism is a completely different issue. A recent overview can be found in a <a href="http://arxiv.org/pdf/1301.1493v1.pdf">paper</a> of McKay, author of the popular package <code>nauty</code>.</p>
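The failure of the spectral method for cospectral graphs is easy to exhibit on a small example: the star $K_{1,4}$ and the disjoint union $C_4 \cup K_1$ both have spectrum $\{2, 0, 0, 0, -2\}$ yet are clearly non-isomorphic. The Python sketch below (my own illustration, standard library only) compares the trace sequence $\operatorname{tr}(A^k)$ for $k = 1,\dots,n$, which determines the characteristic polynomial (by Newton's identities) and hence the spectrum:

```python
# Sketch (illustration only): cospectrality is necessary but not sufficient
# for isomorphism. Two graphs on n vertices are cospectral iff the traces
# tr(A^k), k = 1..n, of their adjacency matrices agree.

def adjacency(n, edges):
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = a[v][u] = 1
    return a

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_powers(a):
    """Return [tr(A), tr(A^2), ..., tr(A^n)] using exact integer arithmetic."""
    n = len(a)
    traces, p = [], a
    for _ in range(n):
        traces.append(sum(p[i][i] for i in range(n)))
        p = matmul(p, a)
    return traces

# Star K_{1,4}: center 0 joined to vertices 1..4.
star = adjacency(5, [(0, i) for i in range(1, 5)])
# Cycle C_4 on vertices 0..3, plus the isolated vertex 4.
c4_plus_k1 = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Here `trace_powers(star) == trace_powers(c4_plus_k1)`, so the graphs are cospectral, even though their degree sequences ([4,1,1,1,1] versus [2,2,2,2,0]) already certify that they are not isomorphic.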
70
graph theory
Repeated vertices in cycles (graph theory)
https://cs.stackexchange.com/questions/150806/repeated-vertices-in-cycles-graph-theory
<p>In graph theory, can a cycle contain repeated nodes/vertices not including the first and last ones? If so, can you please give an example?</p>
<p>Usually cycles are assumed not to have any repeating vertices (other than the first and last vertices being identical). If repeating vertices are allowed, then one talks about <em>closed walks</em>. In order to stress that cycles have no repeating vertices, we call them <em>simple cycles</em>.</p> <p>That said, terminology isn't always fixed. If in doubt, define the terms you use.</p>
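The distinction can be phrased as a tiny check: a vertex sequence is a simple cycle if it starts and ends at the same vertex and no other vertex repeats. A quick Python sketch (my own illustration; it checks only the repetition condition and assumes consecutive vertices are adjacent):

```python
# Sketch (illustration only): distinguish a simple cycle from a mere closed
# walk by looking at vertex repetitions; adjacency of consecutive vertices
# is assumed and not checked here.
def is_closed_walk(seq):
    return len(seq) >= 2 and seq[0] == seq[-1]

def is_simple_cycle(seq):
    interior = seq[:-1]  # drop the repeated final vertex
    return is_closed_walk(seq) and len(set(interior)) == len(interior)
```

For example, [1, 2, 3, 1] is a simple cycle, while [1, 2, 3, 2, 4, 1] is a closed walk that is not simple, since vertex 2 repeats.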
71
graph theory
Is this a known problem in graph theory?
https://cs.stackexchange.com/questions/129198/is-this-a-known-problem-in-graph-theory
<p>My basic problem includes a graph where each node <span class="math-container">$i$</span> is associated with a weight <span class="math-container">$c_i$</span>, and the problem is to find a minimum (or maximum) weighted independent set with a fixed cardinality <span class="math-container">$p$</span>. This is, I believe, a well-known problem in graph theory that is well studied for different types of graphs.</p> <p>Now, suppose I am dealing with a generalized form of the problem, as follows. The weight of each node can take <span class="math-container">$p$</span> different values; that is, each node is associated with <span class="math-container">$p$</span> different weights. The aim is again to find a minimum (or maximum) weighted independent set with a fixed cardinality <span class="math-container">$p$</span>; however, each type of weight can be selected only once. Precisely, if the weight type <span class="math-container">$j$</span> is selected for the node <span class="math-container">$i$</span>, i.e., we select the weight <span class="math-container">$c_{ij}$</span>, then the other selected nodes cannot take a weight of type <span class="math-container">$j$</span>.</p> <p>My question is: is this still a graph theory problem? Is it a known generalization among graph theory problems?</p> <p>Any help and/or reference is appreciated.</p>
<p>If <span class="math-container">$G=(V,E)$</span>, with <span class="math-container">$V=\{v_1,v_2,...,v_n\}$</span> and weights <span class="math-container">$\{c_{i,j}, i=1,2,...,n, j=1,2,...,p\}$</span> is the given graph, then we can construct the <a href="https://en.wikipedia.org/wiki/Strong_product_of_graphs" rel="nofollow noreferrer"><strong>strong product</strong></a> (I finally found the name of the operation) <span class="math-container">$G\boxtimes K_p$</span> of <span class="math-container">$G$</span> and <span class="math-container">$K_p$</span>, where <span class="math-container">$K_p$</span> is the <a href="https://en.wikipedia.org/wiki/Complete_graph" rel="nofollow noreferrer">complete graph</a> with <span class="math-container">$p$</span> vertices. This is the graph with vertices <span class="math-container">$\{v_{i,j},i=1,2,...,n, j=1,2,...,p\}$</span> and edges <span class="math-container">$\{v_{a,b},v_{c,d}\}$</span> where either:</p> <ol> <li><span class="math-container">$a=c$</span>,</li> <li><span class="math-container">$b=d$</span> or</li> <li><span class="math-container">$\{v_a,v_c\}\in E$</span>. (The actual condition of the strong product reduces to this since in <span class="math-container">$K_p$</span> all vertices are adjacent).</li> </ol> <p>We give the vertex <span class="math-container">$v_{i,j}$</span> the weight <span class="math-container">$c_{i,j}$</span>, for <span class="math-container">$i=1,2,...,n$</span> and <span class="math-container">$j=1,2,...,p$</span>.</p> <p>The problem on <span class="math-container">$G$</span> is equivalent to the problem <strong>minimum (maximum) weighted independent set</strong> in the weighted <span class="math-container">$G\boxtimes K_p$</span>. 
If a vertex <span class="math-container">$v_{i,j}$</span> of the new graph is chosen, this corresponds to choosing vertex <span class="math-container">$v_i$</span> of the original graph and using the <span class="math-container">$j$</span>-th weight <span class="math-container">$c_{i,j}$</span> corresponding to it.</p> <p>The edges of <span class="math-container">$G\boxtimes K_p$</span> are exactly those that prevent the corresponding choices in <span class="math-container">$G$</span> from using adjacent vertices or reusing weights with the same index:</p> <ul> <li>Condition <span class="math-container">$1$</span> defines edges in the strong product that prevent the equivalent of using two weights from the same original vertex.</li> <li>Condition <span class="math-container">$2$</span> prevents using the weights with the same index from different vertices of the original graph.</li> <li>Condition <span class="math-container">$3$</span> prevents selecting two vertices that were neighbors in the original graph.</li> </ul> <p><strong>Example:</strong></p> <p>If <span class="math-container">$G$</span> is the graph</p> <p><a href="https://i.sstatic.net/NvGNP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NvGNP.png" alt="enter image description here" /></a></p> <p>and <span class="math-container">$p=2$</span>, then <span class="math-container">$G\boxtimes K_2$</span> would be the graph</p> <p><a href="https://i.sstatic.net/gyacr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gyacr.png" alt="enter image description here" /></a></p> <p>Images created with <a href="https://csacademy.com/app/graph_editor" rel="nofollow noreferrer">this tool</a>.</p>
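As a rough illustration of the construction (my own sketch, not part of the answer), the product graph can be generated mechanically: make one copy <code>(i, j)</code> of each vertex per weight index, then add an edge between two copies whenever one of the three listed conditions holds. The toy graph below (a single edge 1–2 plus an isolated vertex 3) is a made-up example:

```python
from itertools import combinations

def strong_product_with_Kp(vertices, edges, p):
    """Build the product graph described in the answer: one copy (i, j) of each
    vertex i per weight index j in 0..p-1, with (a,b)-(c,d) an edge iff
    a == c, or b == d, or {a, c} is an edge of G."""
    E = {frozenset(e) for e in edges}
    nodes = [(i, j) for i in vertices for j in range(p)]
    prod_edges = [(x, y) for x, y in combinations(nodes, 2)
                  if x[0] == y[0] or x[1] == y[1] or frozenset((x[0], y[0])) in E]
    return nodes, prod_edges

# toy input: edge 1-2, isolated vertex 3, and p = 2 weight types
nodes, prod_edges = strong_product_with_Kp([1, 2, 3], [(1, 2)], 2)
print(len(nodes), len(prod_edges))  # 6 nodes, 11 edges
```

An independent set in the product then picks at most one copy per original vertex and at most one copy per weight index, as the answer argues.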
72
graph theory
Standard or Top Text on Applied Graph Theory
https://cs.stackexchange.com/questions/2845/standard-or-top-text-on-applied-graph-theory
<p>I am looking for a reference text on applied graph theory and graph algorithms. Is there a standard text used in most computer science programs? If not, what are the most respected texts in the field? I have Cormen et al.</p>
<p>For digraphs in particular, there's Bang-Jensen &amp; Gutin's <a href="http://www.cs.rhul.ac.uk/books/dbook/" rel="nofollow">"Digraphs: Theory, Algorithms and Applications"</a>. It covers quite a bit of material.</p> <p>The first edition is free to download now that the second edition is out (there's a link to the PDF on their page). Of course, if you've got access to a Springerlink account, you can get the second edition instead!</p> <p>Apart from being free, while I'm not sure of its popularity (especially considering it's "relatively" young), it's a weighty tome, with extensive coverage from basics to quite advanced topics and from both practical and theoretical perspectives.</p> <p>The other advantage is that it's one of the few (perhaps only?) full-coverage texts specifically on digraphs, rather than being a general graph theory book with material on digraphs.</p>
73
graph theory
Looking for an esoteric graph theory algorithm
https://cs.stackexchange.com/questions/106743/looking-for-an-esoteric-graph-theory-algorithm
<p>So I'm sure this is an unusual question for this type of site, but I am to give a talk on algorithms for discrete graphs in a few weeks. It has been specified that the talk is to be on some algorithm which will be unfamiliar to those attending. It is to be some graph traversal algorithm at about the junior level. Does anyone know of any such algorithm which is a bit removed from mainstream graph theory? </p>
74
graph theory
Relative Importance in Graph Theory
https://cs.stackexchange.com/questions/32203/relative-importance-in-graph-theory
<p>I am working on an algorithm that ranks a set of nodes in a graph with respect to how related each node is to other predefined nodes (I call them query nodes). The way the algorithm works is similar to recommendation algorithms. For instance, if I want to buy an item from an online store, the algorithm will look at my preferences (and/or history of purchased items) and recommend new items for me. Applying this to graph theory, the set of nodes are items and my preferred items are the query nodes. The problem I am facing right now is how to benchmark my results (i.e. I want to run recall and precision on my results), but I don't have ground-truth data. My question is: does anyone know a benchmark for this problem? If not, how do you think I can evaluate my results?</p> <p><strong>Note:</strong> My algorithm has nothing to do with recommendation algorithms (i.e. the application is different), but I gave this example to convey the general idea of RELATIVE IMPORTANCE algorithms. I am looking for any dataset with a benchmark that may help me in this context.</p> <p><strong>Edit:</strong> Based on some requests, I will explain my algorithm in more detail. The algorithm takes as input: a graph (can be directed or undirected, weighted or unweighted) and a set of query nodes (included in the graph). <strong>The algorithm will try to rank the nodes in the graph according to their importance with respect to the query nodes</strong>. The importance of a node increases as the relationship between it and the query nodes increases. Depending on the application, this relationship is quantified by a value (the weight of an edge) that reflects the level of association between two nodes. For instance, in the DBLP co-authorship dataset, the relation between two nodes is the number of common papers between the two nodes (authors). 
Therefore, in this case, the algorithm will rank the authors in the DBLP graph according to how close they are to all query nodes (the predefined authors). I hope that this is clear.</p> <p>Thank you</p>
<p>You can for instance take as <em>ground truth data</em> the movielens dataset, remove some rating links between users and movies. You can rank your algorithm by counting the number of link that you can guess right. Usually machine learning algorithm also guess the rating score.</p>
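One way to act on this suggestion is a held-out-links evaluation. The sketch below is my own illustration: the `rank_fn(train_edges, query)` interface and the toy popularity baseline are assumptions, not part of the answer. It hides a fraction of the edges, then measures how often the ranker places a hidden neighbour of a query node in its top-k list:

```python
import random
from collections import Counter

def holdout_recall(edges, rank_fn, frac=0.2, k=10, seed=0):
    """Hide a fraction of the edges, then check how often the ranker places
    the hidden neighbour of a query node within its top-k results."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    cut = max(1, int(len(edges) * frac))
    hidden, train = edges[:cut], edges[cut:]
    hits = sum(1 for u, v in hidden if v in rank_fn(train, u)[:k])
    return hits / len(hidden)

# toy ranker (a popularity baseline), just to make the sketch runnable
def popularity(train, query):
    counts = Counter(x for e in train for x in e)
    return [n for n, _ in counts.most_common()]

toy_edges = [(i, (i * 3 + 1) % 10) for i in range(10)]
score = holdout_recall(toy_edges, popularity)
print(score)  # a recall value between 0 and 1
```

A real evaluation would replace `popularity` with the relative-importance algorithm and use a dataset such as MovieLens, as the answer suggests.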
75
graph theory
graph theory single bar vs double bar size notation
https://cs.stackexchange.com/questions/167412/graph-theory-single-bar-vs-double-bar-size-notation
<p>A follow-up on <a href="https://stackoverflow.com/questions/15037679/what-do-the-absolute-value-bars-mean-in-graph-theory">https://stackoverflow.com/questions/15037679/what-do-the-absolute-value-bars-mean-in-graph-theory</a></p> <p>What is the difference between single bar <span class="math-container">$|G|$</span> and double bar <span class="math-container">$||G||$</span> notation (both seen used in <a href="https://scalelite-info.univ-lyon1.fr/playback/presentation/2.3/deb1788b716ce04af06dc1761a32ab0e724bb232-1711099267083" rel="nofollow noreferrer">this talk</a>)? Is the former the number of vertices and the latter the number of edges perhaps?</p>
<p>You are correct. <span class="math-container">$|G|$</span> denotes the number of vertices in <span class="math-container">$G$</span>, and <span class="math-container">$||G||$</span> denotes the number of edges in <span class="math-container">$G$</span>. Formally, for a graph <span class="math-container">$G = (V, E)$</span>, <span class="math-container">$|G| = |V|$</span> and <span class="math-container">$||G|| = |E|$</span>.</p> <p>For instance, consider the complete graph <span class="math-container">$K_4$</span>:</p> <ul> <li><span class="math-container">$|K_4| = 4$</span> and,</li> <li><span class="math-container">$||K_4|| = 6$</span>.</li> </ul>
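The $K_4$ example in the answer is easy to verify directly; a minimal Python check (my own, just counting vertices and all pairs):

```python
from itertools import combinations

V = [0, 1, 2, 3]               # K_4: four vertices
E = list(combinations(V, 2))   # complete graph: every pair of distinct vertices

order = len(V)   # |G|, the number of vertices
size = len(E)    # ||G||, the number of edges
print(order, size)  # 4 6
```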
76
graph theory
Is there a graph theory textbook that covers treewidth thoroughly?
https://cs.stackexchange.com/questions/132289/is-there-a-graph-theory-textbook-that-covers-treewidth-thoroughly
<p>Can someone recommend a graph theory textbook that covers treewidth thoroughly?</p> <p>Something that focuses on the graph-theoretic structure of bounded-treewidth graphs rather than solving problems on them. I don't need the strongest/newest results.</p> <p>Preferably something that covers sublinear treewidth of planar and minor-free graphs.</p>
<p>I don't know if the thing you are looking for exists, but here are at least some pointers:</p> <p><a href="http://parameterized-algorithms.mimuw.edu.pl/parameterized-algorithms.pdf" rel="nofollow noreferrer"><em>Parameterized Algorithms</em> by Cygan <em>et al</em> (free PDF version)</a> has a chapter dedicated on the topic. This is slightly algorithms oriented, but contains structural stuff too.</p> <p><em>Graph Theory</em> by Diestel has one chapter, (12. Minors, Trees and WQO) on the subject, about 40 pages. <a href="http://diestel-graph-theory.com/" rel="nofollow noreferrer">homepage</a></p> <p>Bodlaender has a chapter <em>Treewidth of Graphs</em> in the Encyclopedia of Algorithms. <a href="https://link.springer.com/referenceworkentry/10.1007%2F978-1-4939-2864-4_431" rel="nofollow noreferrer">homepage</a></p> <p>Heggernes has a short compendium used for her advanced algorithms class <a href="https://www.ii.uib.no/%7Epinar/chordal.pdf" rel="nofollow noreferrer">Treewidth, partial <span class="math-container">$k$</span>-trees, and chordal graphs</a>.</p> <p>You also have Ton Kloks' book, Treewidth <a href="https://www.springer.com/gp/book/9783540583561" rel="nofollow noreferrer">springer</a>, but it's getting rather old. I have not read this one, so I don't know much about it.</p>
77
graph theory
Approporiate algorithm for a graph theory problem
https://cs.stackexchange.com/questions/109576/approporiate-algorithm-for-a-graph-theory-problem
<p>So I have recently ran into a graph theory problem and was unable to find a matching algorithm for the problem or reword the problem to match some existing algorithm.</p> <p>The problem is pretty straightforward - given a weighted directed graph, pick edges to maximize the sum of all weights of the chosen edges. Max one edge can point to another vertex and no vertex can be the head for more than one edge.</p> <p>So far this would seem like a problem solvable with a matching algorithm, but there's an extra catch - a vertex can be the head for an edge only if the vertex is not a tail of any edges in the initially given graph, or if it's a tail for one of the chosen edges. On top of that, the graph from the chosen edges has to be acyclic.</p> <p>A good analogy would be to imagine each vertex as a cell. I can mark all vertices that are initially tails of some edges as cells with some object in them. Picking an edge would mean moving the object from one cell to another. This analogy seems perfect, because:</p> <ul> <li><code>A vertex can be the tail of max one chosen edge</code> (aka the object can be moved to only one other cell)</li> <li><code>A vertex can be the head of max one chosen edge</code> (aka only one object can be moved into a cell)</li> <li><code>A vertex can be the head for a selected edge only if the vertex was not a tail of any edges in the initially provided graph, or if it's a tail for one of the chosen edges</code> (aka the cell was either initially empty or the object can be carried into another cell thus emptying the cell)</li> </ul> <p>As good as it is, I was unable to find any algorithms that would be of any help. Is pure bruteforcing for edge combinations as good as it gets? Or can I get the edges in a more optimized way?</p>
78
graph theory
What is a min-max theorem in graph theory?
https://cs.stackexchange.com/questions/99039/what-is-a-min-max-theorem-in-graph-theory
<p>I'm currently studying a paper which uses extensively the term '<code>min-max theorems</code>' in graph theory, and claims to present a tool allowing to generalize these theorems. (here is the <a href="http://acta.bibl.u-szeged.hu/38618/1/math_041_fasc_001_002.pdf#page=65" rel="nofollow noreferrer">link</a> to the paper if needed)</p> <p>Among those, we can find for example :</p> <ul> <li>The max-flow-min cut theorem.</li> <li>Edmond's disjoint arborescences theorem (<a href="http://lemon.cs.elte.hu/egres/open/Edmonds%27_disjoint_arborescences_theorem" rel="nofollow noreferrer">link</a>).</li> </ul> <p>I have some intuition about what a min-max theorem would be, but I can't come with a concise and precise definition.</p> <p><strong>My question is</strong> : what would be a definition of such a family of theorems ?</p> <p>And a second question along : is this min-max theorem concept always linked to the strong duality theorem, meaning that they mainly state that one problem is actually the dual of the other, like the max-flow-min-cut is ?</p>
<p>A min-max theorem is simply a theorem that says that the minimum value possible for one quantity is the maximum value possible for some other. For example,</p> <ul> <li><p>Max-flow min-cut says that the value of the biggest flow between two vertices in a weighted graph is equal to the value of the minimum cut that separates them. Closely related, Menger's Theorem says that the maximum number of edge-disjoint paths between two vertex sets is equal to the size of the minimum cut that separates them.</p></li> <li><p>Edmonds' Theorem says that the maximum number of edge-disjoint spanning arborescences is equal to the minimum value of something called <span class="math-container">$\varrho(X)$</span> over certain non-empty vertex sets.</p></li> <li><p>The cops and robbers characterization of treewidth says that the minimum width over all tree decompositions is equal to the maximum number of cops that a robber can escape (er, plus or minus one, probably).</p></li> </ul>
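To make the first bullet concrete, here is a small self-contained sketch (my own toy illustration, not from the answer): it computes a maximum flow with a basic Edmonds–Karp loop and brute-forces the minimum s-t cut over all vertex bipartitions, confirming that the two values coincide on a small network:

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict {(u, v): c}: repeatedly find a shortest
    augmenting path by BFS in the residual graph and saturate it."""
    vertices = {u for e in cap for u in e}
    residual = dict(cap)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in vertices:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual[e] for e in path)  # bottleneck capacity on the path
        for u, v in path:
            residual[(u, v)] -= b
            residual[(v, u)] = residual.get((v, u), 0) + b
        flow += b

def min_cut(cap, s, t):
    """Brute-force the minimum s-t cut value (exponential, fine for toy graphs)."""
    others = [v for v in {u for e in cap for u in e} if v not in (s, t)]
    best = float('inf')
    for k in range(len(others) + 1):
        for side in combinations(others, k):
            S = set(side) | {s}
            best = min(best, sum(c for (u, v), c in cap.items()
                                 if u in S and v not in S))
    return best

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(cap, 's', 't'), min_cut(cap, 's', 't'))  # 5 5
```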
79
graph theory
Is there a good analogy between spectral representation of a signal and graph theory?
https://cs.stackexchange.com/questions/140083/is-there-a-good-analogy-between-spectral-representation-of-a-signal-and-graph-th
<p>I am working on some time series problems where the Fourier representation of the signal in the frequency domain is also important. I am wondering if there is any connection between time series signals consisting of sinusoidal/inherently periodic functions (functions that can be cleanly represented by a sum of sinusoids) and graph theory.</p> <p>Thanks, Josiah</p>
80
graph theory
Graph Theory - Airline Schedule
https://cs.stackexchange.com/questions/77119/graph-theory-airline-schedule
<p>Suppose I have a number of airports which have a set number of routes. Large airports have 7, medium airports have 4, and small airports have 1.</p> <p>First of all, what would I call a graph of this type where it is undirected, has no parallel edges, no loops, and every edge is connected to another (that is, no route goes unused)?</p> <p>Second of all, given some number of airports of each type, how can I determine whether it'll work / be connected? Example: 1 large, 3 medium, and 7 small will produce a connected graph but that same combination with 8 small will not.</p> <p>I'm writing a computer program and looking for a simple way to determine (using some theorem or whatnot) if it works or not. I've attached an image below of what I'm searching for.</p> <p><a href="https://i.sstatic.net/FFQlS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FFQlS.jpg" alt="Example"></a></p>
81
graph theory
What graph theory algorithm(s) would help solve this problem?
https://cs.stackexchange.com/questions/88597/what-graph-theory-algorithms-would-help-solve-this-problem
<p>I have a directed graph with the following properties:</p> <ol> <li>Except for a few special progenitor nodes, every node has two parent nodes</li> <li>Any node can have any whole number of child nodes</li> <li>The graph is generated such that no node is a descendant of itself in the following way: I start with a few progenitor nodes (which have no parents), then randomly select two nodes and create a "child" node with directed edges going from each parent to the child. I repeat that step an arbitrary number of times.</li> </ol> <p>I'd like to find an algorithm to efficiently identify subgraphs with the following properties:</p> <ol> <li>The subgraph consists of two "parent" nodes and some or all of their descendants</li> <li>Except for the two "parent" nodes in the subgraph, each node has both parents also contained in the subgraph (that is, if the subgraph were separated from the rest of the graph, no node would be separated from its parents, except for the two "parent" nodes).</li> </ol> <p>It's easy to find a 3-node example of this type of subgraph - just take any two nodes and one of their mutual children. However, I'd like to be able to efficiently identify larger examples of these subgraphs.</p> <p>I'd really appreciate some help pointing me in the right direction. I thought that what I need might be related to finding "weakly connected components", but I haven't made much progress, and my inexperience with graph theory makes it hard to figure out what to search for. Thanks!</p> <p>PS: I've provided an example depiction of one of these graphs</p> <p><a href="https://i.sstatic.net/QCm5T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCm5T.png" alt="Example graph"></a></p>
<p>Given two progenitor vertices $u$ and $v$, the following "modified flood fill" builds a set $P$ of additional vertices to include:</p> <ul> <li>While a vertex $x$ exists whose parents are both in the set $\{u, v\} \cup P$: <ul> <li>Add $x$ to $P$.</li> </ul></li> </ul> <p>This will halt with $\{u, v\} \cup P$ being the largest possible set of vertices for the given pair of starting vertices. You want the subgraph this vertex set induces: To get that, just include every original edge that links two vertices in the set.</p> <p>To do this reasonably efficiently, you could maintain 3 vertex sets $V_0, V_1, V_2$, where $V_i$ contains every vertex with exactly $i$ parents in $\{u, v\} \cup P$. Whenever some vertex $x$ is added to $P$, all of its neighbours in $V_0$ or $V_1$ are promoted to the next higher set (e.g., if some neighbour $y$ was in $V_1$, it is moved to $V_2$). These updates can be done in constant time per neighbour by using three doubly linked lists for $V_0, V_1, V_2$, as well as a length-$|V|$ array $A$ of pairs $(i, p)$: $A[u].i$ is the current set number (0, 1 or 2) for vertex $u$, and $A[u].p$ is a pointer to its linked list node. The main loop just reads from $V_2$ until no unread vertices remain. Each vertex and edge in the graph is processed only once, so the overall time for a single starting vertex pair is $O(|V|+|E|)$.</p> <p>Thanks to D.W. for catching a problem in my first algorithm.</p>
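A simplified sketch of this procedure (my own, using plain dicts and a worklist instead of the three linked lists of the answer, and assuming each node's two recorded parents are distinct) could look like:

```python
from collections import defaultdict, deque

def grow_subgraph(parents, u, v):
    """parents[x] = (p1, p2), the two (assumed distinct) parents of node x;
    progenitor nodes are absent from the dict. Returns the largest set
    {u, v} union P in which every node of P has both parents in the set."""
    children = defaultdict(list)          # parent -> list of its children
    for child, (p1, p2) in parents.items():
        children[p1].append(child)
        children[p2].append(child)
    count = defaultdict(int)              # how many of a node's parents are chosen
    chosen, q = {u, v}, deque([u, v])
    while q:
        for c in children[q.popleft()]:
            count[c] += 1
            if count[c] == 2 and c not in chosen:  # both parents now in the set
                chosen.add(c)
                q.append(c)
    return chosen

# made-up example: 'f' is excluded because its parent 'x' is never chosen
parents = {'c': ('a', 'b'), 'd': ('a', 'c'), 'e': ('c', 'b'), 'f': ('d', 'x')}
print(sorted(grow_subgraph(parents, 'a', 'b')))  # ['a', 'b', 'c', 'd', 'e']
```

Each node and edge is touched a constant number of times, matching the $O(|V| + |E|)$ bound stated above.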
82
graph theory
Graph theory : Trees
https://cs.stackexchange.com/questions/126596/graph-theory-trees
<p>I need to determine all the trees on 25 vertices for which there exists an integer m ≥ 2 such that the degree of each vertex gives the same remainder when divided by m.</p> <p>Can somebody help?</p>
<p>Suppose <span class="math-container">$G = (V, E)$</span> is a tree with 25 vertices such that there exist some constants <span class="math-container">$c \geq 0$</span> and <span class="math-container">$m \geq 2$</span> such that <span class="math-container">$\operatorname{deg} v \equiv_m c$</span> for all <span class="math-container">$v$</span>. Note that we can equivalently write <span class="math-container">$\operatorname{deg} v = a_v m + c$</span> for some <span class="math-container">$a_v \geq 0$</span>.</p> <p>Since <span class="math-container">$G$</span> is connected by definition, we get that it must have 24 edges in total, giving us <span class="math-container">$$ \sum_{v \in V} \operatorname{deg} v = 2 \cdot |E| = 48.$$</span> We can express this sum using the aforementioned decomposition of the degrees in <span class="math-container">$G$</span> to get <span class="math-container">$$ \begin{align*} \sum_{v \in V} \operatorname{deg} v &amp;= \sum_{v \in V} a_v m + c \\ &amp;= c |V| + m \sum_{v \in V} a_v \\ &amp;= 48. \end{align*} $$</span> Since both terms in the rewritten sum are non-negative, <span class="math-container">$c|V| = 25c \leq 48$</span>, which forces <span class="math-container">$c \leq 1$</span>. Moreover, <span class="math-container">$c = 0$</span> is impossible: every tree on at least two vertices has a leaf of degree 1, and 1 is not divisible by any <span class="math-container">$m \geq 2$</span>. Hence <span class="math-container">$c = 1$</span>, so <span class="math-container">$m$</span> must divide <span class="math-container">$48 - c|V| = 23$</span>, which implies <span class="math-container">$m = 23$</span> as we required <span class="math-container">$m \geq 2$</span>. </p> <p>At this point, we know that any such <span class="math-container">$G$</span> satisfying your conditions does so with <span class="math-container">$c = 1$</span> and <span class="math-container">$m = 23$</span>.
By using our new formula again, we find that the sum of our <span class="math-container">$a_v$</span> must also equal 1 and therefore, <span class="math-container">$G$</span> must be isomorphic to a tree with 1 internal node and 24 leaves, which is also known as the <a href="https://en.wikipedia.org/wiki/Star_(graph_theory)" rel="nofollow noreferrer">star graph</a> <span class="math-container">$S_{24}$</span>.</p>
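The conclusion is easy to sanity-check in code. The following sketch (my own) builds the star $S_{24}$ with center 0 and leaves 1..24 and confirms the degree sum is 48 and every degree is congruent to 1 modulo 23:

```python
# Star S_24: center 0 joined to leaves 1..24 (25 vertices, 24 edges)
edges = [(0, i) for i in range(1, 25)]
deg = [0] * 25
for a, b in edges:
    deg[a] += 1
    deg[b] += 1

m = 23
# center has degree 24 = 1*23 + 1; every leaf has degree 1 = 0*23 + 1
print(sum(deg), {d % m for d in deg})  # 48 {1}
```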
83
graph theory
Markov Inequality in graph theory
https://cs.stackexchange.com/questions/96407/markov-inequality-in-graph-theory
<p>Fix an optimal solution G∗ to k-Cycle-Free Subgraph. Partition the vertex set V of G randomly into two subsets, A and B, each of size n/2, and remove edges internal to A or B. In expectation, the fraction of edges in G∗ that remain after this process is 1/2. <strong><em>With probability at least 2/3 the fraction of edges in G∗ that remain is at least 1/4; here we apply the Markov inequality on the fraction of edges inside A and B</em></strong>.</p> <p>I know Markov's inequality in the context of probability theory, but I don't know what it means here.</p> <p><a href="https://www.openu.ac.il/home/mikel/papers/cycles-APPROX08.pdf" rel="nofollow noreferrer">APPROXIMATING SUBGRAPHS WITHOUT SHORT CYCLES</a></p>
<p>Let $X$ denote the fraction of remaining edges. We know that $0 \leq X \leq 1$ and $\mathbb{E}[X] = 1/2$. Let us define $Y = 1 - X$, which is the fraction of edges inside $A$ and $B$. This is a non-negative random variable whose expectation is $\mathbb{E}[Y] = 1/2$. According to Markov's inequality, $$ \Pr[X &lt; 1/4] = \Pr[Y &gt; 3/4] = \Pr[Y &gt; \tfrac{3}{2} \mathbb{E}[Y]] &lt; \frac{2}{3}. $$ Therefore $\Pr[X \geq 1/4] &gt; 1/3$.</p> <p>Let us show that the similar inequality $\Pr[X &gt; 1/4] \geq 1/3$ is tight, that is, there exists a random variable satisfying $0 \leq X \leq 1$, $\mathbb{E}[X] = 1/2$, and $\Pr[X &gt; 1/4] = 1/3$. Indeed, take $\Pr[X = 1/4] = 2/3$ and $\Pr[X = 1] = 1/3$, and notice that $$ \mathbb{E}[X] = \frac{2}{3} \cdot \frac{1}{4} + \frac{1}{3} \cdot 1 = \frac{1}{6} + \frac{1}{3} = \frac{1}{2}. $$ In a similar way, one can show that $\Pr[X \geq 1/4]$ can be arbitrarily close to $1/3$.</p>
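The tightness example at the end can be verified with exact arithmetic (a quick check I added, using Python's `fractions`):

```python
from fractions import Fraction as F

# the answer's two-point distribution: X = 1/4 w.p. 2/3 and X = 1 w.p. 1/3
dist = [(F(1, 4), F(2, 3)), (F(1), F(1, 3))]
mean = sum(x * p for x, p in dist)                    # expectation of X
p_strict = sum(p for x, p in dist if x > F(1, 4))     # Pr[X > 1/4]
print(mean, p_strict)  # 1/2 1/3
```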
84
graph theory
graph theory proof or disproving
https://cs.stackexchange.com/questions/118141/graph-theory-proof-or-disproving
<p>I was asked to prove or disprove the following statement: if a graph is connected, does that necessarily mean that E>(V-1)(V-2)/2? Here V represents the number of vertices and E the number of edges. I think I need to disprove it, but how?</p>
<p>If you think it's false, the easiest way to disprove it would be to find a counterexample. That is, find a graph where <span class="math-container">$E \leq \frac{1}{2}(V-1)(V-2)$</span>. Try some easy connected graphs (e.g. a straight-line path graph) and see if you can find a suitable graph.</p>
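For example (one possible counterexample, following the path-graph hint above), a path on 5 vertices is connected but already violates the inequality; a minimal check:

```python
# Path graph P_5: 0-1-2-3-4 is connected, with V = 5 vertices and E = V - 1 = 4 edges
V, E = 5, 4
bound = (V - 1) * (V - 2) // 2   # (V-1)(V-2)/2 = 6
print(E > bound)  # False: E = 4 <= 6, so the claimed inequality fails
```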
85
graph theory
Good resource for graph theory
https://cs.stackexchange.com/questions/116444/good-resource-for-graph-theory
<p>Frankly speaking, I have many problems with symbols in graph theory, such as (s:s') or v\S, or other symbols like those I mentioned. Can anyone suggest a good resource or a good tutorial on the net that discusses and clarifies these symbols?</p>
86
graph theory
How do programs like Apache Airflow/ Luigi determine shortest path and how does that relate to graph theory?
https://cs.stackexchange.com/questions/119602/how-do-programs-like-apache-airflow-luigi-determine-shortest-path-and-how-does
<p>I am looking for a simple layman's-terms explanation of how programs like Apache Airflow or Luigi (or any task/ETL schedulers) determine the shortest path to complete a certain task and make it possible to parallelize it. And how does that, if at all, relate to graph theory?</p>
87
graph theory
graph theory analogue of rectangular matrix
https://cs.stackexchange.com/questions/47092/graph-theory-analogue-of-rectangular-matrix
<p>Graphs are usually defined as a set of vertices $V$ together with a set of edges $E$ consisting of elements $V \times V$. I'm interested in a slight generalization of this, where instead one has two sets $V$, $W$ and the edges are taken from $V \times W$. The adjacency matrix of such an object would be rectangular, as opposed to the adjacency matrix of a regular old graph, which is square. Is there a name for this?</p> <p>I'm interested in this because I'm writing a data structure to represent a graph in terms of its partitioning into sub-graphs for the purposes of parallel computation. For example, given a graph $G$ on a vertex set $V$, we can partition $V$ into disjoint subsets $V_1$ and $V_2$; $G$ is then naturally divided into two subgraphs $G_1$ and $G_2$ describing connections among $V_1$ and $V_2$ respectively, and a "rectangular graph" $H$ describing connections between $V_1$ and $V_2$. I'd like to give the class a name that makes sense.</p>
<p>Such a graph is known as a <a href="https://en.wikipedia.org/wiki/Bipartite_graph">bipartite graph</a>. There is a whole theory about bipartite graphs, including a number of algorithms that are specialized for bipartite graphs.</p>
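For illustration, the "rectangular adjacency matrix" the question describes is exactly the biadjacency matrix of a bipartite graph. A minimal sketch (vertex names are made up for the example):

```python
# Hypothetical parts V1, V2 and the cross edges between them
V1, V2 = ['a', 'b', 'c'], ['x', 'y']
cross_edges = {('a', 'x'), ('b', 'y'), ('c', 'x')}

# biadjacency matrix: rows indexed by V1, columns by V2
B = [[1 if (u, w) in cross_edges else 0 for w in V2] for u in V1]
print(B)  # [[1, 0], [0, 1], [1, 0]] -- a 3x2 matrix, rectangular rather than square
```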
88
graph theory
Graph Theory Handshaking problem
https://cs.stackexchange.com/questions/27507/graph-theory-handshaking-problem
<p>Mr. and Mrs. Smith, a married couple, invited 9 other married couples to a party. (So the party consisted of 10 couples.) There was a round of handshaking, but no one shook hand with his or her spouse. Afterwards, Mrs. Smith asked everyone except herself, “how many persons have you shaken hands with?” All 19 answers were different.</p> <p>Unfortunately, when I try to simulate a smaller problem with 3 couples, I am getting that each couple is shaking 4 hands. Here is the diagram:</p> <p><img src="https://i.sstatic.net/vC7kRm.jpg" alt=""></p> <p>As you can see both a1 and a2 have 4 blue lines each that are "attached" (ie they shake hands) to the other couples. Both b1 and b2 have 2 red lines and 2 blue lines. Do we count the blue lines again adding it to the 2 red lines giving us that b1 and b2 have 4 lines attached or do we just ignore the blue lines connecting to b1 and b2 and say that both b1 and b2 shake 2 hands each?</p> <p>Both c1 and c2 also have 2 blue and 2 red lines. I meant to have blue lines designated for a1 and a2, and the red lines for b1 and b2. Does that mean that c1 and c2 have shaken 0 hands each? Or do I try another few lines attaching the c's to the a's and b's?</p> <p>Another problem with the approach above is that each couple has the same number of handshakes even though that is not possible according to the question. I would appreciate any clarification on the question and what exactly am I doing wrong.</p> <p>Thanks.</p>
<p>Here is a solution for two couples, $m_1,w_1$ and $m_2,w_2$. The edges are $(m_1,w_2),(w_1,w_2)$. The degrees are $1,1,0,2$ (in the order $m_1,w_1,m_2,w_2$).</p> <p>Here is a solution for three couples, $m_1,w_1,m_2,w_2,m_3,w_3$. The edges are $(m_1,w_2),(w_1,w_2),(m_1,w_3),(w_1,w_3),(m_2,w_3),(w_2,w_3)$. The degrees are $2,2,1,3,0,4$.</p> <p>Hopefully you can generalize this construction to an arbitrary number of couples. If not, you can always try running the <a href="http://en.wikipedia.org/wiki/Degree_%28graph_theory%29#Degree_sequence" rel="nofollow">Havel–Hakimi algorithm</a>, trying to avoid connecting a husband to his wife.</p>
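The degree sequences claimed for these two constructions can be checked mechanically; the indices below are an assumed encoding of the people in the order $m_1, w_1, m_2, w_2, m_3, w_3$:

```python
def degrees(n, edges):
    """Degree of each of n vertices under the given undirected edge list."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

two = [(0, 3), (1, 3)]                                    # (m1,w2), (w1,w2)
three = [(0, 3), (1, 3), (0, 5), (1, 5), (2, 5), (3, 5)]  # plus the w3 edges
print(degrees(4, two))     # [1, 1, 0, 2]
print(degrees(6, three))   # [2, 2, 1, 3, 0, 4]
```

Note that no edge joins a spouse pair (indices $2k$ and $2k+1$), matching the constraint of the puzzle.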
89
graph theory
Dependent Type Theory Implementation of a Graph
https://cs.stackexchange.com/questions/94003/dependent-type-theory-implementation-of-a-graph
<p>In Haskell you find <a href="http://www.cs.ox.ac.uk/duncan.coutts/papers/recursive_data_structures_in_haskell.pdf" rel="nofollow noreferrer">graphs defined like this</a>:</p> <pre><code>data Graph a = GNode a (Graph a) </code></pre> <p><a href="https://hackage.haskell.org/package/algebraic-graphs" rel="nofollow noreferrer">Or this</a>:</p> <pre><code>data Graph a = Empty | Vertex a | Overlay (Graph a) (Graph a) | Connect (Graph a) (Graph a) class Graph g where type Vertex g empty :: g vertex :: Vertex g -&gt; g overlay :: g -&gt; g -&gt; g connect :: g -&gt; g -&gt; g </code></pre> <p><a href="https://hackage.haskell.org/package/containers-0.5.11.0/docs/Data-Graph.html" rel="nofollow noreferrer">Or this</a>:</p> <pre><code>type Graph = Table [Vertex] type Table a = Array Vertex a type Bounds = (Vertex, Vertex) type Edge = (Vertex, Vertex) type Vertex = Int </code></pre> <p>I am wondering why it is not implemented more like this (this is JavaScript):</p> <pre><code>class Graph { constructor() { this.nodes = [] this.edges = [] } } </code></pre> <p>Or perhaps in Haskell-ish:</p> <pre><code>data Graph = Graph { nodes :: [Node] , edges :: [Edge] } </code></pre> <p>For example, in this <a href="https://gist.github.com/andrejbauer/8dade8489dff8819c352e88f446154a1" rel="nofollow noreferrer">Graph in Coq</a>, it looks more like the JavaScript version (though I don't understand the <code>Structure</code> in Coq):</p> <pre><code>Structure Graph := { V :&gt; nat ; (* The number of vertices. So the vertices are numbers 0, 1, ..., V-1. *) E :&gt; nat -&gt; nat -&gt; Prop ; (* The edge relation *) E_decidable : forall x y : nat, ({E x y} + {~ E x y}) ; E_irreflexive : all x : V, ~ E x x ; E_symmetric : all x : V, all y : V, (E x y -&gt; E y x) }. 
</code></pre> <p>Finally, here seems to be a <a href="https://github.com/coq-contribs/graph-basics/blob/1b86d778016d88084df8a38b2a08e42a778fdf64/Graphs.v" rel="nofollow noreferrer">complete Coq graph definition</a>:</p> <pre><code>Inductive Graph : V_set -&gt; A_set -&gt; Set := | G_empty : Graph V_empty A_empty | G_vertex : forall (v : V_set) (a : A_set) (d : Graph v a) (x : Vertex), ~ v x -&gt; Graph (V_union (V_single x) v) a | G_edge : forall (v : V_set) (a : A_set) (d : Graph v a) (x y : Vertex), v x -&gt; v y -&gt; x &lt;&gt; y -&gt; ~ a (A_ends x y) -&gt; ~ a (A_ends y x) -&gt; Graph v (A_union (E_set x y) a) | G_eq : forall (v v' : V_set) (a a' : A_set), v = v' -&gt; a = a' -&gt; Graph v a -&gt; Graph v' a'. </code></pre> <p>My question is, what a proper definition of a graph is in <a href="https://en.wikipedia.org/wiki/Dependent_type" rel="nofollow noreferrer">Dependent Type Theory</a> in formal type theory notation. Mathematically $G = (V, E)$, but in Coq and Haskell, things are a lot more functional/recursive. It seems weird that Haskell is defining it sometimes as a recursive structure. Wondering how to model this using the abstract type theory notation (using dependent types) and recursion if necessary, to get a better sense of realistic data structures in dependent type theory, and how to parse the formal notation. The types of graphs I am thinking about are directed, <em>cyclic</em>, and finite, non-hypergraphs.</p>
<p>So, there are two different things you are talking about.</p> <ol> <li>The <strong>definition</strong> of a graph</li> <li>The <strong>encoding</strong> of a graph</li> </ol> <p>A Graph is always <em>defined</em> as a set of vertices and edges. This tells us what vertices are in the graph, and for any pair of vertices, whether there is an edge between them.</p> <p>However, this definition is purely mathematical. A set isn't a "thing" that you can store in memory. So we need an <em>encoding</em> of a graph, so that the abstract mathematical set operations can be mapped onto concrete algorithmic operations.</p> <p>Once you're talking about encoding, there are many different ways to implement a graph, all of which will give you something that is (in a sense) equivalent to the mathematical definition. But there are practical trade-offs with different encodings:</p> <ul> <li>What operations are fast?</li> <li>What operations are easy to implement?</li> <li>How much memory does the graph take?</li> <li>How easy is it to use the graph in different settings?</li> </ul> <p>In JavaScript, all types are dynamic, so it's easy to just say "A graph is a list of nodes and a list of edges", because the language doesn't ever require that you formally specify what a node or an edge looks like. It doesn't care, and if you make a mistake, it will just crash at runtime.</p> <p>Notice that your JavaScript version of a graph doesn't actually specify how nodes and edges are represented. What does an edge look like? If you insert an edge as a 2-element array, but try to access it as an object, you will have a runtime error.</p> <p>Haskell and Coq's definitions try to solve this issue with <em>type safety</em>.
They have a formal definition of the different forms a Graph can take, so that you always know at compile-time which operations on it are valid or not.</p> <p>Let's look at the versions and their tradeoffs:</p> <pre><code>data Graph a = Empty | Vertex a | Overlay (Graph a) (Graph a) | Connect (Graph a) (Graph a) </code></pre> <p>This version is interesting, because it is an <em>inductive</em> definition of a graph that allows you to implement graph operations easily as recursive functions. So you know that a graph is always one of: empty, a single vertex, the overlay of two graphs, or the connection of two graphs, and you can safely pattern-match on these 4 cases.</p> <pre><code>class Graph g where type Vertex g empty :: g vertex :: Vertex g -&gt; g overlay :: g -&gt; g -&gt; g connect :: g -&gt; g -&gt; g </code></pre> <p>This isn't actually an implementation of a graph, but instead is an <em>interface</em> for a graph. It says what operations are available on a graph, but not how they're implemented. Users could provide their own <code>Graph</code> instances, and can write code that is generic over which <code>Graph</code> implementation is used. This helps bring back some of the flexibility that we had in JavaScript but lost when adding types.</p> <pre><code>type Graph = Table [Vertex] type Table a = Array Vertex a </code></pre> <p>This is a classic "Adjacency List" implementation of a graph: we store the outgoing edges for each vertex. It is space-efficient and allows for quick BFS and DFS, but is slower at checking the existence of an edge or deleting one.</p> <pre><code>Structure Graph := { V :&gt; nat ; (* The number of vertices. So the vertices are numbers 0, 1, ..., V-1. *) E :&gt; nat -&gt; nat -&gt; Prop ; (* The edge relation *) E_decidable : forall x y : nat, ({E x y} + {~ E x y}) ; E_irreflexive : all x : V, ~ E x x ; E_symmetric : all x : V, all y : V, (E x y -&gt; E y x) }. </code></pre> <p>This is an interesting representation.
First, it's similar to a typeclass, in that it's abstract: it works for any types $V$ and $E$ so long as they can be coerced into numbers and edge-lookup functions. This isn't saying "this is how you can represent a graph", it's saying "here's an interface for graph representations".</p> <p>Second, because we are in a dependently typed setting, we want to encode some properties of the graph, so that we can't create ill-formed graphs. We can then use these properties when proving things about graphs later on. Here, there are three properties that we require all graphs to have: </p> <ol> <li>The edge relation must be irreflexive, i.e., there are no self-edges</li> <li>The edge relation must be symmetric, i.e., whenever there's an edge u->v, there's also an edge v->u</li> <li>The edge relation must be decidable, i.e., for any two vertices we can actually compute whether or not there is an edge between them</li> </ol> <p>The third condition is important because Coq is a total language. It doesn't allow for any non-terminating code, so we need to prove that our edge lookup actually halts.</p> <p>Finally, there's the Coq graph definition:</p> <pre><code>Inductive Graph : V_set -&gt; A_set -&gt; Set := | G_empty : Graph V_empty A_empty | G_vertex : forall (v : V_set) (a : A_set) (d : Graph v a) (x : Vertex), ~ v x -&gt; Graph (V_union (V_single x) v) a | G_edge : forall (v : V_set) (a : A_set) (d : Graph v a) (x y : Vertex), v x -&gt; v y -&gt; x &lt;&gt; y -&gt; ~ a (A_ends x y) -&gt; ~ a (A_ends y x) -&gt; Graph v (A_union (E_set x y) a) | G_eq : forall (v v' : V_set) (a a' : A_set), v = v' -&gt; a = a' -&gt; Graph v a -&gt; Graph v' a'. </code></pre> <p>This version provides a concrete implementation that (I think) fulfills the interface you gave.
In particular, it says we can build a graph in 4 ways:</p> <ol> <li>A graph can be empty</li> <li>We can add a vertex to a graph</li> <li>We can add an edge to a graph</li> <li>We can transport a graph along equalities of its vertex and edge sets (the <code>G_eq</code> constructor)</li> </ol> <p>The beauty of this implementation is that the vertex and edge sets are stored <em>at the type level</em>. This is the magic of dependent types: the type of the graph depends on the values it contains. So, when adding an edge, the type system ensures that we can't possibly add an edge between two vertices that aren't in our vertex set.</p> <p>Secondly, this implementation is inductive, similar to the Haskell one, so writing proofs about this graph by induction will be easy.</p>
90
graph theory
graph theory conventions, difference between a PATH and a GRAPH?
https://cs.stackexchange.com/questions/112054/graph-theory-conventions-difference-between-a-path-and-a-graph
<p>Consider this example; I wrote my pseudocode in Python:</p> <pre><code>findCycle(G):
    for each edge e in E(G):
        if isThereCycle(G - e):
            G = G - e
    return G
</code></pre> <p>Assume <code>isThereCycle</code> returns whether the graph <span class="math-container">$G$</span> has a cycle.</p> <p>Input is a graph <span class="math-container">$G$</span> that contains a cycle, and the function should return a path of that cycle.</p> <p>We go through each edge, remove it, and see if a cycle still exists without that edge, until we are left with a graph with a single path that is a cycle. (Assume removing the edge (G - e) doesn't mutate the graph unless we do G = G - e.)</p> <p>I want to return a path. Is what I'm returning a path? </p>
<p>Your algorithm returns a cycle rather than a path.</p> <p>Here is a <a href="https://en.wikipedia.org/wiki/Cycle_graph" rel="nofollow noreferrer">cycle</a>:</p> <p><a href="https://i.sstatic.net/Xt9sm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xt9sm.png" alt="cycle"></a></p> <p>Here is a <a href="https://en.wikipedia.org/wiki/Path_graph" rel="nofollow noreferrer">path</a>:</p> <p><a href="https://i.sstatic.net/UbxoU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UbxoU.png" alt="path"></a></p> <p>(Both images taken from Wikipedia.)</p> <p>To get a path from the cycle, simply remove one edge (this is my best guess for what <em>return a path of that cycle</em> means).</p> <p>Let me also mention that your algorithm is very inefficient. A much better choice is to use BFS/DFS.</p>
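<p>For what it's worth, here is a Python sketch (mine, not part of the original answer) of the linear-time DFS alternative: it returns the cycle directly, and removing the edge between the first and last vertices of the result yields a path:</p>

```python
def find_cycle(adj):
    """Return the vertices of some cycle in an undirected graph given as
    {vertex: set_of_neighbors}, or None if the graph is acyclic.
    Runs in O(|V| + |E|), unlike the edge-deletion loop in the question."""
    parent = {}

    def dfs(v):
        for w in adj[v]:
            if w == parent[v]:          # don't walk straight back up the tree edge
                continue
            if w in parent:             # back edge: w is an ancestor of v
                cycle, u = [v], v
                while u != w:
                    u = parent[u]
                    cycle.append(u)
                return cycle
            parent[w] = v
            found = dfs(w)
            if found:
                return found
        return None

    for root in adj:                    # handle disconnected graphs
        if root not in parent:
            parent[root] = None
            found = dfs(root)
            if found:
                return found
    return None
```

Dropping the edge between `cycle[0]` and `cycle[-1]` of the result turns the cycle into a path, as in the answer.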
91
graph theory
Playing with boxes: NP-hard? [Graph Theory]
https://cs.stackexchange.com/questions/153291/playing-with-boxes-np-hard-graph-theory
<p>You are playing with boxes on a <span class="math-container">$K_{1, n}$</span>-<span class="math-container">$\textbf{subdivision}$</span> graph <span class="math-container">$G:=(V, E)$</span> whose number of vertices is odd, i.e., <span class="math-container">$|V| \equiv 1$</span> (mod <span class="math-container">$2$</span>) with a given central point <span class="math-container">$C$</span> such that <span class="math-container">$\forall v \in V - \{C\}, deg(v) \leq 2$</span>. Actually, <span class="math-container">$C$</span> is the center of the star graph <span class="math-container">$G$</span>. Your goal is to move <span class="math-container">$n:= (|V| - 1)/2$</span> boxes (<span class="math-container">$b_1-b_n$</span>) from their starting points to their terminal points. For each vertex <span class="math-container">$v \in V-\{C\}$</span>, it is either the starting point <span class="math-container">$s_i$</span> of a box <span class="math-container">$b_i$</span> or the terminal point <span class="math-container">$t_j$</span> of a box <span class="math-container">$b_j$</span>. The central point <span class="math-container">$C$</span> has <span class="math-container">$k$</span> vacant places <span class="math-container">$S$</span> (<span class="math-container">$S \cap V = \emptyset$</span>) that do not collide with any path. Points in <span class="math-container">$\{C\} \cup S$</span> are neither starting points nor terminal points.</p> <p>For each step, you choose a box <span class="math-container">$i$</span> and move it to a place <span class="math-container">$p \in S \cup V$</span>. 
You will immediately fail this game if there is another box <span class="math-container">$b_j \neq b_i$</span> standing on your path, e.g., in the initial state of Case 1, if you try to move <span class="math-container">$b_1$</span> from <span class="math-container">$s_1$</span> to <span class="math-container">$t_1$</span> you will fail because <span class="math-container">$b_2$</span> blocks <span class="math-container">$b_1$</span> on <span class="math-container">$s_2$</span>. If you fail to move some box, you also fail this game. You win this game iff you successfully move <span class="math-container">$\textbf{all}$</span> boxes from their starting points to their terminal points.</p> <p><a href="https://i.sstatic.net/wWGDT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wWGDT.png" alt="enter image description here" /></a></p> <p>In Case 1, if <span class="math-container">$k=0$</span>, you can move <span class="math-container">$b_3$</span> from <span class="math-container">$s_3$</span> to <span class="math-container">$t_3$</span>, <span class="math-container">$b_2$</span> from <span class="math-container">$s_2$</span> to <span class="math-container">$s_3$</span>, <span class="math-container">$b_1$</span> from <span class="math-container">$s_1$</span> to <span class="math-container">$t_1$</span>, <span class="math-container">$b_2$</span> from the <span class="math-container">$s_3$</span> to <span class="math-container">$t_2$</span>.</p> <p><a href="https://i.sstatic.net/fzrDx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzrDx.png" alt="enter image description here" /></a></p> <p>In Case 2, you will also need one vacant point <span class="math-container">$vp$</span>. 
You can move <span class="math-container">$b_1$</span> from <span class="math-container">$s_1$</span> to <span class="math-container">$vp$</span>, <span class="math-container">$b_3$</span> from <span class="math-container">$s_3$</span> to <span class="math-container">$t_3$</span>, and <span class="math-container">$b_2$</span> from <span class="math-container">$s_2$</span> to <span class="math-container">$t_2$</span>. Then, move <span class="math-container">$b_1$</span> from <span class="math-container">$vp$</span> to <span class="math-container">$t_1$</span>. If <span class="math-container">$k=0$</span> you will fail this game.</p> <p><a href="https://i.sstatic.net/9B95X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9B95X.png" alt="enter image description here" /></a></p> <p>In Case 3 you need <span class="math-container">$3$</span> vacant places <span class="math-container">$vp_1$</span>, <span class="math-container">$vp_2$</span>, <span class="math-container">$vp_3$</span>. Your move sequence could be:</p> <p>(1) <span class="math-container">$b_3$</span>: <span class="math-container">$s_3 \rightarrow vp_1$</span>; (2) <span class="math-container">$b_1$</span>: <span class="math-container">$s_1 \rightarrow vp_2$</span>; (3) <span class="math-container">$b_2$</span>: <span class="math-container">$s_2 \rightarrow vp_3$</span>; (4) <span class="math-container">$b_3$</span>: <span class="math-container">$vp_1 \rightarrow t_3$</span>; (5) <span class="math-container">$b_1$</span>: <span class="math-container">$vp_2 \rightarrow t_1$</span>; (6) <span class="math-container">$b_2$</span>: <span class="math-container">$vp_3 \rightarrow t_2$</span>.</p> <p><a href="https://i.sstatic.net/CPfy5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CPfy5.png" alt="enter image description here" /></a></p> <p>Please note that in Case 4 you only need <span class="math-container">$1$</span> vacant place <span class="math-container">$vp$</span>.
You can use this <span class="math-container">$vp$</span> to move <span class="math-container">$\{b_4, b_5\}$</span> to <span class="math-container">$\{t_4, t_5\}$</span>, respectively. Then you have <span class="math-container">$4$</span> empty places, which are <span class="math-container">$C$</span>, <span class="math-container">$vp$</span>, <span class="math-container">$s_4$</span> (because <span class="math-container">$b_4$</span> has been moved to <span class="math-container">$t_4$</span>), <span class="math-container">$s_5$</span>. These <span class="math-container">$4$</span> places are enough for you to win this game.</p> <p><span class="math-container">$\textbf{TL; DR}$</span>: Given the graph <span class="math-container">$G$</span> and <span class="math-container">$k$</span> vacant places, if there is a strategy that you can win, print one of your winning strategies, otherwise print <span class="math-container">$-1$</span>.</p> <p>Below is my idea: I think this problem is NP-hard. You might build a digraph that describes the dependency of boxes. For <span class="math-container">$k=0$</span> we can use topological sorting and for <span class="math-container">$k&gt;0$</span>, I think it is a feedback vertex set problem (<a href="https://en.wikipedia.org/wiki/Feedback_vertex_set" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Feedback_vertex_set</a>), since moving one vertex to the vacant place is equivalent to removing a vertex from the digraph and winning the game seems like breaking all the cycles in the digraph. But I cannot do the reduction. Please help me win this game, or prove this problem is NP-hard (then I will try a greedy strategy).</p>
92
graph theory
The equivalence relations cover problem (in graph theory)
https://cs.stackexchange.com/questions/27828/the-equivalence-relations-cover-problem-in-graph-theory
<p>An equivalence relation on a finite vertex set can be represented by an undirected graph that is a disjoint union of cliques. The vertex set represents the elements and an edge represents that two elements are equivalent.</p> <p>If I have a graph $G$ and graphs $G_1,\dots,G_k$, we say that $G$ is covered by $G_1,\dots,G_k$ if the set of edges of $G$ is equal to the union of the sets of edges of $G_1,\dots,G_k$. The edge sets of $G_1,\dots,G_k$ do not need to be disjoint. Note that any undirected graph $G$ can be covered by a finite number of equivalence relations (i.e., graphs that are disjoint unions of cliques).</p> <p>I have several questions:</p> <ul> <li>What can be said about the minimal number of equivalence relations required to cover a graph $G$?</li> <li>How can we compute this minimal number?</li> <li>How can we compute an explicit minimum cover of $G$, i.e., a set of equivalence relations whose size is minimal and which cover $G$?</li> <li>Does this problem have any applications apart from <a href="http://www.ellerman.org/introduction-to-partition-logic/" rel="noreferrer">partition logic</a> (the <a href="http://www.ellerman.org/the-logic-of-partitions/" rel="noreferrer">dual of the logic of subsets</a>)?</li> <li>Does this problem have a well-established name?</li> </ul> <hr> <p>Given the various misunderstandings indicated by the comments, here are some pictures to illustrate these concepts.
If you have an idea for an easier to understand terminology (instead of "cover", "equivalence relation", "disjoint union of cliques" and "not necessarily disjoint" union of edge sets), feel free to let me know.</p> <p>Here is a picture of a graph and one equivalence relation covering it: <img src="https://i.sstatic.net/9UyXr.png" alt="graph and one equivalence relation covering it"></p> <p>Here is a picture of a graph and two equivalence relations covering it: <img src="https://i.sstatic.net/7XUJj.png" alt="graph and two equivalence relations covering it"><br> It should be pretty obvious that at least two equivalence relations are required.</p> <p>Here is a picture of a graph and three equivalence relations covering it: <img src="https://i.sstatic.net/QfRPk.png" alt="graph and three equivalence relations covering it"><br> It's less obvious that at least three equivalence relations are required. Lemma 1.9 from <a href="http://www.ellerman.org/the-logic-of-partitions/" rel="noreferrer">Dual of the Logic of Subsets</a> can be used to show that this is true. The generalization of this lemma to nand operations with more than two inputs was the motivation for this question.</p>
<p>The problem is known as the <em>equivalence covering problem</em> in graph theory. It is upper bounded by the <em>clique covering number</em> (the minimum collection of cliques such that each edge of the graph is in <em>at least</em> one clique). There are many similar problems and definitions; one has to be very careful here. These two numbers are denoted by $\text{eq}(G)$ and $\text{cc}(G)$, respectively.</p> <p>There are special graph classes where the exact value or a good upper bound for either number is known. In general, to the best of my knowledge, the best bounds are given by Alon [1]:</p> <p>$$\log_2 n - \log_2 d \leq \text{eq}(G) \leq \text{cc}(G) \leq 2e^2 (\Delta+1)^2 \ln n,$$</p> <p>where $\Delta$ is the maximum degree of $G$. By the way, a covering with $\lceil n^2/4 \rceil$ triangles and edges is always possible (cf. Mantel's theorem), and this is easy to find algorithmically as well.</p> <p>Not surprisingly, computing either number is $\sf NP$-complete. Even for split graphs, computing $\text{eq}(G)$ is $\sf NP$-hard (but can be approximated within an additive constant 1) as shown in [2]. It is also hard to compute for graphs in which no two triangles have a vertex in common [3].</p> <hr> <p>[1] <a href="http://www.math.tau.ac.il/~nogaa/PDFS/Publications/Covering%20graphs%20by%20the%20minimum%20number%20of%20equivalence%20relations.pdf" rel="nofollow">Alon, Noga. "Covering graphs by the minimum number of equivalence relations." Combinatorica 6.3 (1986): 201-206.</a></p> <p>[2] <a href="http://web.thu.edu.tw/wang/www/emcc_Helly/1995_clique_partition_spilt.pdf" rel="nofollow">Blokhuis, Aart, and Ton Kloks. "On the equivalence covering number of splitgraphs." Information processing letters 54.5 (1995): 301-304.</a></p> <p>[3] <a href="http://www.sciencedirect.com/science/article/pii/0304397580900390" rel="nofollow">Kučera, Luděk, Jaroslav Nešetřil, and Aleš Pultr. 
"Complexity of dimension three and some related edge-covering characteristics of graphs." Theoretical Computer Science 11.1 (1980): 93-106.</a></p>
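<p>For very small graphs, $\text{eq}(G)$ can be computed directly by brute force, which is a handy sanity check on the definitions. Here is a Python sketch (my own; the function names are invented, and the exponential enumeration is of course no contradiction to the hardness results above):</p>

```python
from itertools import combinations

def partitions(elems):
    """Yield every set partition of a list, as a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i, block in enumerate(part):          # put `first` into an existing block
            yield part[:i] + [block + [first]] + part[i + 1:]
        yield part + [[first]]                    # or into a new singleton block

def eq_cover_number(vertices, edges, max_k=4):
    """Smallest number of equivalence relations (disjoint unions of cliques)
    whose edge sets cover `edges` exactly.  Only feasible for tiny graphs."""
    edges = {frozenset(e) for e in edges}
    # A partition is usable iff every within-block pair is an edge of G,
    # since the union of the covers must equal (hence be contained in) E(G).
    usable = {frozenset(frozenset(p) for block in part
                        for p in combinations(block, 2))
              for part in partitions(sorted(vertices))}
    usable = [set(u) for u in usable if u <= edges]
    for k in range(1, max_k + 1):
        for combo in combinations(usable, k):
            if set().union(*combo) == edges:
                return k
    return None
```

On the path $a$-$b$-$c$ this returns 2, matching the second picture in the question.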
93
graph theory
Category theory and graphs
https://cs.stackexchange.com/questions/23875/category-theory-and-graphs
<p>Could most categories, or a finite part of them, be represented on a connected and partly directed subgraph of a complete graph of N vertices (Kn)? Could all the axioms of category theory be written for such graphs?</p>
<p>A category consists of:</p> <ul> <li><p>Objects.</p></li> <li><p>Directed arrows between objects. There can be multiple arrows between any two given objects, or a unique arrow, or none.</p></li> <li><p>A composition map for arrows that takes an arrow $f$ from $x$ to $y$ and another arrow $g$ from $y$ to $z$ and outputs an arrow $gf$ from $x$ to $z$.</p></li> <li><p>Depending on the formulation, there might also be a distinguished arrow between every object and itself (the identity arrow).</p></li> </ul> <p>The composition map has to satisfy the following axioms:</p> <ul> <li><p>Associativity: if $f\colon x \to y$, $g\colon y \to z$ and $h\colon z \to w$ then $h(gf) = (hg)f$.</p></li> <li><p>Identity: if $f\colon x \to y$ and $1_x\colon x \to x$ and $1_y\colon y \to y$ are the distinguished self loops then $f1_x = 1_yf = f$. (If the formulation does not include the distinguished self-loops: there exist arrows $1_x\colon x\to x$ and $1_y\colon y\to y$ such that $f1_x = 1_yf = f$.)</p></li> </ul> <p>You can represent this data in many ways. A graph with multiple edges is, however, not enough, since you also need to specify the composition map.</p>
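<p>To make this concrete, here is a small Python sketch (my own toy example, not from any library) of a finite category with objects $x, y$, their identities, and a single arrow $f\colon x \to y$. The data beyond a bare graph is the explicit composition table, and for a finite category the axioms can be checked exhaustively:</p>

```python
from itertools import product

# Objects, arrows, identities, and a composition table: the extra data
# that a plain (multi)graph does not carry.
objects = {"x", "y"}
arrows = {          # name -> (source, target)
    "1x": ("x", "x"),
    "1y": ("y", "y"),
    "f":  ("x", "y"),
}
identity = {"x": "1x", "y": "1y"}
compose = {         # (g, f) -> g∘f, defined exactly when tgt(f) == src(g)
    ("1x", "1x"): "1x", ("1y", "1y"): "1y",
    ("f", "1x"): "f",   ("1y", "f"): "f",
}

def check_category():
    # Identity laws: a∘1_src(a) = a and 1_tgt(a)∘a = a.
    for a, (s, t) in arrows.items():
        assert compose[(a, identity[s])] == a
        assert compose[(identity[t], a)] == a
    # Associativity: h∘(g∘f) == (h∘g)∘f whenever both sides are defined.
    for f_, g_, h_ in product(arrows, repeat=3):
        if arrows[f_][1] == arrows[g_][0] and arrows[g_][1] == arrows[h_][0]:
            assert compose[(h_, compose[(g_, f_)])] == compose[(compose[(h_, g_)], f_)]
    return True
```

With multiple arrows between the same pair of objects nothing changes structurally; only the table grows, which is exactly the data a plain graph cannot express.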
94
graph theory
Graph Theory applied to a physical logic puzzle
https://cs.stackexchange.com/questions/85965/graph-theory-applied-to-a-physical-logic-puzzle
<p>For Christmas my brother got me this puzzle: <img src="https://i.sstatic.net/OqSnc.jpg" alt="puzzle"></p> <p>The premise is that you have a metal medallion with a series of holes in it and a metal ring with a slit just large enough to slide it from hole to hole, barring obstacles, like the edge of the map and an obstruction near the top. The goal is to be able to remove the ring from the medallion when starting at the center hole. It was simple enough to figure out after mapping out taken paths, but after solving it, I thought it might be fun to make a solver: <img src="https://i.sstatic.net/VvBSr.png" alt="solver"></p> <p>I'm sure something like this has been done and is part of some Legend of Zelda-like dungeon, but I wanted a simple example to demonstrate some concepts in Angular. The Angular part took about 15-30 minutes to throw together after a longer time of working out the nodes and edges.</p> <h2>The Problem</h2> <p>My first thought was that it represented an undirected graph, so I labeled each of the 24 holes with letters A-X and recorded all of the edges as holes that the ring can travel directly between.
That works, except for the obstacle (an intentionally raised point) that exists between A, B, C, and D.</p> <p><img src="https://i.sstatic.net/oA0vl.jpg" alt="Start"></p> <p><img src="https://i.sstatic.net/8D9Ys.jpg" alt="End"></p> <p>In this exact puzzle, the solution is: Start,M,A,I,F,U,H,K,R,D,<strong>Q</strong>,V,J,X,P,N,B,<strong>Q</strong>,End</p> <p>Applying a simple shortest path algorithm renders: Start,M,A,I,F,U,H,K,R,D,<strong>Q</strong>,End</p> <p>The problem with that solution is that the exit is not reachable from Q until you have gotten on the other side of the raised point at the top, so you have to run through the little cycle of <strong>Q</strong>,V,J,X,P,N,B,<strong>Q</strong> to get around it.</p> <p>I'm getting the vibe that the correct structure may not be an undirected graph or that I need to use something other than the shortest path, but I'm not sure what the terms I need to look for are going to be.</p> <h3>Extra Info</h3> <p>All of the existing edges I've found are: [["A","M"],["A","I"],["B","N"],["B","Q"],["B","P"],["C","K"],["D","R"],["D","S"],["D","Q"],["E","R"],["E","K"],["F","I"],["F","U"],["G","W"],["H","U"],["H","K"],["H","O"],["H","W"],["I","L"],["J","X"],["J","V"],["K","R"],["M","Start"],["N","P"],["P","X"],["Q","End"],["Q","V"],["T","V"]]</p> <p>I was going to look into some way to plot the nodes in Angular in a way that correlated to their relative positions on the medallion, but the main problem seemed more important.</p> <p><a href="https://i.sstatic.net/qVsPz.jpg" rel="nofollow noreferrer">Imgur Album</a></p>
95
graph theory
Graph theory, $n$ people sitting around table
https://cs.stackexchange.com/questions/44212/graph-theory-n-people-sitting-around-table
<p>$n$ people want to have dinner together around a table for $k$ nights so that no person has the same neighbor twice.</p> <ol> <li>How big can $k$ be in terms of $n$?</li> <li>Does everybody get to sit next to everybody else?</li> <li>How many seating arrangements are there?</li> </ol>
<p><em>Point 3.</em> If the seats are unlabelled, the number of arrangements is $\frac{(n-1)!}{2}$.</p> <p><em>Explanation</em>: put a hat on one of the people and take them as the leader. Then you have $n-1$ choices for the person to their right and so on. But each arrangement is counted twice if you consider that circular arrangement $(a,b,c,d)$ is the same as the circular arrangement $(a,d,c,b)$.</p> <p><em>Point 2.</em> Answer is no. Take four people. They can only share one meal under your conditions. The people facing each other will never eat side by side :-(</p> <p><em>Point 1.</em> You basically want to partition the edges of the complete graph on $n$ vertices into $k$ Hamiltonian cycles. An obvious upper bound is then that $nk \leq {n \choose 2}$ so that $k \leq \left\lfloor \frac{n-1}{2} \right\rfloor$. This upper bound is met when $n$ is a prime number (take a first cycle, then people at distance 2, then people at distance 3, ...)</p>
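<p>For prime $n$, the construction in point 1 can be written out and checked mechanically. A Python sketch (mine; it relies on the stated primality assumption, since each step $d$ must be invertible modulo $n$ for the seating order to be a permutation):</p>

```python
def seatings(n):
    """For prime n, night d (d = 1 .. (n-1)/2) seats person (i*d) % n in
    chair i around the table.  Night d's neighbouring pairs are exactly the
    pairs at circular distance d, so no pair of neighbours ever repeats."""
    nights = []
    for d in range(1, (n - 1) // 2 + 1):
        order = [(i * d) % n for i in range(n)]
        pairs = {frozenset((order[i], order[(i + 1) % n])) for i in range(n)}
        nights.append(pairs)
    return nights
```

Since the nights are pairwise disjoint and together use all ${n \choose 2}$ pairs, everybody sits next to everybody exactly once, attaining the $\frac{n-1}{2}$ upper bound.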
96
graph theory
Graph theory: determining maximum number of edges
https://cs.stackexchange.com/questions/101199/graph-theory-determining-maximum-number-of-edges
<p>Based on the question below, can someone please explain the reasoning behind why the maximum number of edges is 5/2|V|? I don't find the reasoning given in the solution very helpful in explaining why 5/2|V| is the maximum. </p> <p><a href="https://i.sstatic.net/OUIeL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OUIeL.png" alt="enter image description here"></a></p>
<p>The following identity is well-known: <span class="math-container">$$ 2|E| = \sum_{v \in V} d(v). $$</span> In words, if we sum all degrees in a graph, we get twice the number of edges. To see this, divide each edge <span class="math-container">$(x,y)$</span> into two halves: the <span class="math-container">$x$</span>-half and the <span class="math-container">$y$</span>-half. We think of the <span class="math-container">$x$</span>-half as a half-edge labeled <span class="math-container">$x$</span>. The left-hand side counts the number of half-edges. On the right-hand side, <span class="math-container">$d(v)$</span> counts the number of half-edges labeled <span class="math-container">$v$</span>, and so in total, the right-hand side also counts the number of half-edges.</p> <p>If the graph has maximum degree <span class="math-container">$\Delta$</span> (in your case, <span class="math-container">$\Delta = 5$</span>) then <span class="math-container">$$ 2|E| = \sum_{v \in V} d(v) \leq \sum_{v \in V} \Delta = \Delta |V|, $$</span> and so <span class="math-container">$|E| \leq (\Delta/2) |V|$</span>; there is equality if the graph is <span class="math-container">$\Delta$</span>-regular (all degrees are exactly <span class="math-container">$\Delta$</span>).</p>
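<p>The identity and the bound are easy to machine-check on small graphs. A Python sketch (my own; note that the question's <span class="math-container">$\Delta = 5$</span> case is attained with equality by the 5-regular complete graph <span class="math-container">$K_6$</span>):</p>

```python
def edge_and_degree_stats(adj):
    """For an undirected graph {vertex: set_of_neighbours}, check that
    2|E| equals the degree sum and that |E| <= (max_degree / 2) * |V|."""
    edges = {frozenset((v, w)) for v in adj for w in adj[v]}
    degree_sum = sum(len(ns) for ns in adj.values())
    max_deg = max(len(ns) for ns in adj.values())
    assert 2 * len(edges) == degree_sum          # handshake identity
    assert len(edges) <= max_deg * len(adj) / 2  # |E| <= (Delta/2)|V|
    return len(edges), max_deg
```

For a <span class="math-container">$\Delta$</span>-regular graph the second inequality is tight, which is exactly the <span class="math-container">$\frac{5}{2}|V|$</span> claim when <span class="math-container">$\Delta = 5$</span>.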
97
graph theory
Is Homomorphic Computation Possible Using Graph Theory
https://cs.stackexchange.com/questions/170640/is-homomorphic-computation-possible-using-graph-theory
<p><a href="https://www.zama.ai/" rel="nofollow noreferrer">Homomorphic encryption</a> allows data to be stored on a blockchain in encrypted form. A smart contract, however, is a program executed on a node of the blockchain, so the &quot;source code&quot; of the contract is not private. It is therefore desirable to build a monotone compiler that converts an ordinary function into a set of computations that, when executed on the node machine, do not reveal any meaningful information, yet whose result, when read back by the transaction owner, is meaningful to her (as she has the private key). Can this be done with a state machine rather than formal logic, as in <a href="https://webperso.info.ucl.ac.be/%7Epvr/18091-ChristopherMeiklejohn-Slides.pdf" rel="nofollow noreferrer">“TOWARDS” HOMOMORPHIC COMPUTATION FOR DISTRIBUTED COMPUTING</a>?</p> <p><a href="https://i.sstatic.net/JqOdKD2C.png" rel="nofollow noreferrer">OPERATIONAL SEMANTICS: MLINKS</a></p> <p>Are there existing algorithms to do that precisely?</p>
98
graph theory
if (dis)proving a conjecture on graph theory can be done just by a counter example then can every (dis)proof be mapped actually to a counter-example?
https://cs.stackexchange.com/questions/27529/if-disproving-a-conjecture-on-graph-theory-can-be-done-just-by-a-counter-examp
<p>Suppose we have a conjecture in graph theory that can be (dis)proved by means of a counterexample. Is it then true that every alternative (dis)proof of the conjecture can be mapped to a counterexample? </p> <p>This is the general case, but for instance: can any proof that Hadwiger's conjecture is false be mapped to a counterexample, i.e., a particular graph? </p> <p>Or, </p> <p>Can any proof of a purported property $P(L)$ of a language $L$, susceptible of being (dis)proved by a counterexample, be mapped to a particular word $w$ serving as a counterexample, i.e., such that $P(L)$ is false because $w$ exists? </p>
<p>Your question isn't entirely clear, but it seems you're asking if every false graph property has a counterexample. Obviously this is true: by definition there must be some graph for which the property doesn't hold. On the other hand, a proof that the property doesn't hold for all graphs may not necessarily give an example of such a graph; this is called a <em>non-constructive</em> proof.</p>
99