| { |
| "paper_id": "J95-1005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:46:24.412529Z" |
| }, |
| "title": "Squibs and Discussions Dependency Unification Grammar for PROLOG", |
| "authors": [ |
| { |
| "first": "Friedrich", |
| "middle": [], |
| "last": "Steimann", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "steimann@ira.uka.de" |
| }, |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Brzoska", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "brzoska@ira.uka.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "J95-1005", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "The programming language PROLOG has proved to be an excellent tool for implementing natural language processing systems. Its built-in resolution and unification mechanisms are well suited to both accept and generate sentences of artificial and natural languages. Although supporting many different linguistic formalisms, its straightforwardness and elegance have perhaps best been demonstrated with definite clause grammars (DCGs) (Pereira and Warren 1980) , an extension to PROLOG's syntax allowing direct implementation of rules of context-free grammars as Horn clauses.", |
| "cite_spans": [ |
| { |
| "start": 431, |
| "end": 456, |
| "text": "(Pereira and Warren 1980)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "While context-free grammars and DCGs--strongly related to the huge linguistic field of constituency or phrase structure grammars and their descendants--have become very popular among logic programmers, dependency grammars (DGs) have long remained a widely unnoticed linguistic alternative. DG is based on the observation that each word of a sentence has individual slots to be filled by others, its so-called dependents. Which dependents a particular word takes depends not only on its function within the sentence, but also on its meaning--like other contemporary linguistic frameworks, DG integrates both syntactic and semantic aspects of natural language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "DG was first formalized by Tesni6re (1959) and later, among others, by Gaifman (1965) and Hays (1964) . The formalization presented in this paper is based on Hellwig's Dependency Unification Grammar (DUG) (Hellwig 1986) . We merely add a framework for automatic translation of DUG rules to Horn clauses that makes DUGs as easy to implement as classic DCGs.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 42, |
| "text": "Tesni6re (1959)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 71, |
| "end": 85, |
| "text": "Gaifman (1965)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 90, |
| "end": 101, |
| "text": "Hays (1964)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 205, |
| "end": 219, |
| "text": "(Hellwig 1986)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Whereas context-free grammars differentiate between terminals (coding the words of a language) and non-terminals (representing the constituents that are to be expanded), the symbols of a DG uniformly serve both purposes: like terminals they must be part of the sentence to be accepted (or generated), and like non-terminals, they call for additional constituents of the sentence. Despite this significant difference, DG can be defined in terms of context-free grammar, making the twofold role of its symbols explicit: Accordingly, if atomic symbols are replaced by first-order terms, the following toy DG can be implemented in PROLOG using the DCG rule format:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Grammar as Context-Free Grammar", |
| "sec_num": "2." |
| }, |
| { |
| "text": "s --> n(_, verb(_)). n(give, verb(N)) --> n(_, noun(N)), [n(give, verb(N))], n(_, noun(_)), n(_, noun(_)). n(sleep, verb(N)) --> n(_, noun(N)), [n(sleep, verb(N))]. n('Peter', noun(N)) --> [n('Peter', noun(N))]. n(CMark ', noun(N)) --> [n(:Mark', noun(N))]. n(book, noun(N)) --> n(_, det), [n(book, noun(N))]. n(a, det) --> [n(a, det)] .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Grammar as Context-Free Grammar", |
| "sec_num": "2." |
| }, |
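| { |
| "text": "As an illustrative sketch (the sg number marking is an assumption for the example, not taken from the paper): assuming each word has been converted to an n/2 term by lexical preprocessing, parsing with the above rules is an ordinary DCG query. The query ?- phrase(s, [n('Mark', noun(sg)), n(sleep, verb(sg))]). succeeds, while ?- phrase(s, [n(sleep, verb(sg))]). fails, since the rule for sleep finds no noun dependent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Grammar as Context-Free Grammar", |
| "sec_num": "2." |
| }, |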
| { |
| "text": "The terms n(., .) provide space for feature structures commonly employed to capture syntactic and semantic properties of words (Shieber 1986; Knight 1989) . They serve only as an example here; other structures, including that used by Hellwig (1986) , can also be employed. Prior to parsing, each sentence must be converted to a string of terms holding the features derived through lexical analysis. This preprocessing step also resolves lexical ambiguities by representing words with alternative meanings through different symbols. Parsing the sentences \"Peter gives Mark a book\" and \"Mark sleeps\" with the ", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 141, |
| "text": "(Shieber 1986;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 142, |
| "end": 154, |
| "text": "Knight 1989)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 234, |
| "end": 248, |
| "text": "Hellwig (1986)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Grammar as Context-Free Grammar", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Although implementing DUG as DCG works acceptably, it makes no use of the rules' regular form: note how, when parsing the sentence \"Mark sleeps,\" the parser calls several rules before it realizes that the rule for give must fail (because the sentence does not contain give), even though the head already indicates that give is required for the rule to succeed. If, however, the word partially specified as n(_, verb(_)) in the body of the start rule is accepted before the next rule is selected, an intelligent parser can exploit the fact that the sentence's verb is sleep and immediately call the appropriate rule. We therefore suggest an alternative syntax and translation scheme that produces a more efficient DUG parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In our DUG syntax, the head of a rule is separated from its body (holding the dependents of the word in the head) by the binary infix operator :>. The start rule s :> n(_, verb(_)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "is translated to the Horn clause s(_G1, _G2) \"accept(n(_G3, verb(_G4)), _G1, _G5), n(_G3, verb(_G4), _G5, _G2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
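| { |
| "text": "The accept predicate itself is given in the appendix; as a sketch (assuming free word order), it can be read as nondeterministic removal of one matching word from the input sentence: accept(Word, [Word|Rest], Rest). accept(Word, [Other|Rest0], [Other|Rest]) :- accept(Word, Rest0, Rest). A call accept(n(_G3, verb(_G4)), _G1, _G5) thus unifies n(_G3, verb(_G4)) with any element of _G1 and binds _G5 to the remaining words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |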
| { |
| "text": "where the arguments appended to each predicate hold input and output sentence, respectively, and where an accept predicate is inserted before each literal of the rule body. 1 Accordingly, n(sleep, verb(N)) :> n(_, noun(N)). becomes n(sleep, verb(N), _GI, _G2) :accept(n(_G3, noun(N)), _GI, _G4), n(_G3, noun(N), _G4, _G2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Note that the head literal of the sleep rule need not be repeated in the body because the respective word is removed from the input sentence before the rule is called (in this case in the start rule). The fact that a word has no dependent is coded by n(well, adverb) :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "> [].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "1 The implementation of accept(...) can be found in the appendix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Direct Transformation of DUG Rules to Horn Clauses", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Volume 21, Number 1 and translated to n(well, adverb, _G1, _GI).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Linguistics", |
| "sec_num": null |
| }, |
| { |
| "text": "Like other contemporary grammar formalisms, DUG comes with syntactic extensions that code optionality and references.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Linguistics", |
| "sec_num": null |
| }, |
| { |
| "text": "Many dependents are optional. Rather than providing an alternative rule for every possible combination of dependents, it is more convenient to declare a dependent optional, meaning that a sentence is correct independent of its presence. For example, n(sleep, verb(N)) :> n(_, noun(N)), ? n(_, adverb).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.10ptionality", |
| "sec_num": null |
| }, |
| { |
| "text": "where ? precedes the optional dependent, is implemented as n(sleep, verb(N), _GI, _G2) \"accept(n(_G3, noun(N)), _GI, _G4), n(_G3, noun(N), _G4, _G5), ((accept(n(_G6, adverb)), _G5, _GT), n(_G6, adverb, _GT, _G2)) _GS=_a2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.10ptionality", |
| "sec_num": null |
| }, |
| { |
| "text": "accepting \"Mark sleeps\" as well as \"Mark sleeps well.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.10ptionality", |
| "sec_num": null |
| }, |
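| { |
| "text": "As a sketch, assuming lexicon rules n('Mark', noun(N)) :> []. and n(well, adverb) :> []. are present (translated to unit clauses as shown above), both queries ?- n(sleep, verb(sg), [n('Mark', noun(sg))], []). and ?- n(sleep, verb(sg), [n('Mark', noun(sg)), n(well, adverb)], []). succeed: in the first, no adverb can be accepted and the second branch of the disjunction unifies the remaining sentence with the output; in the second, the adverb is accepted and consumed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optionality", |
| "sec_num": "3.1" |
| }, |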
| { |
| "text": "References account for the fact that many words are similar in terms of the dependents they take. In order not to repeat the same set of rules over and over again, a reference operator ~ (read 'goes like') is introduced that causes branching to the rule of an analogous word, as in n(yawn, verb(N)) :> ==> n (sleep, verb(N) ).", |
| "cite_spans": [ |
| { |
| "start": 308, |
| "end": 323, |
| "text": "(sleep, verb(N)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this case, the word sleep being referred to is not a dependent of yawn, the PROLOG translation n(yawn, verb(N), _GI, _G2) :-n(sleep, verb(N), _GI, _G2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "therefore branches to the rule for sleep without accepting the word sleep.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |
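| { |
| "text": "As a sketch, assuming the lexicon rule n('Mark', noun(N)) :> []. is present, the query ?- n(yawn, verb(sg), [n('Mark', noun(sg))], Rest). succeeds with Rest = [] by branching to the sleep rule, which accepts the noun dependent on yawn's behalf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |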
| { |
| "text": "As a side effect, references introduce quasi non-terminals to DUG. For example, by factoring out common dependency patterns, it is possible to generalize the rules for transitive verbs and allow for exceptions to the rule at the same time: Gaifman (1965) and Hays (1964) , mark the position of the head among its dependents by a special symbol in the body. The DUG parser can be adapted to follow this convention by accepting the symbol self in the rule body as in n(sleep, noun(N)) :> n(_, noun(N)), self.", |
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 254, |
| "text": "Gaifman (1965)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 259, |
| "end": 270, |
| "text": "Hays (1964)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "and by modifying both the preprocessor and the accept predicate so that the input sentence is split at the position of the dependent accepted and left and right remainders are passed to the next rules separately. However, many natural languages leave word order rather unconstrained, and its adequate handling is not a problem specific to DGs (see, for example, Pereira 1981, and Covington 1990) .", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 379, |
| "text": "Pereira 1981, and", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 380, |
| "end": 395, |
| "text": "Covington 1990)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Referencing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The presented DUG formalism with free word order has successfully been employed to parse Latin sentences. Tracing showed that backtracking was considerably reduced as compared with an equivalent phrase structure grammar, although no good upper bound for complexity could be found (Steimann 1991) . Although the pure DG formalism proved to be particularly practical for integration of idioms and exceptions, its lack of constituent symbols, i.e., non-terminals, would have lead to a grammar of enormous size and made it difficult to integrate special Latin constructs such as accusative cum infinitive or ablative absolute. However, as shown above, DUG is a hybrid grammar: although dependency rules are the backbone of the formalism, it allows the introduction of quasi non-terminals that are integrated into the grammar via references. If desired, phrase structure rules can thus easily be combined with ordinary dependency rules.", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 295, |
| "text": "(Steimann 1991)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notes on Performance", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The size of a grammar can be further reduced by introduction of order-sorted feature types (Ait-Kaci and Nasr 1986) supporting variable numbers of labeled arguments and subtyping. Using feature types instead of constructor terms for representing the words of a language increases readability and enables abstraction of rules as well as implementation of semantic type hierarchies supporting selectional restrictions (Steimann 1991).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notes on Performance", |
| "sec_num": "5." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The following PROLOG source code implements a simple preprocessor that converts source files containing DUG rules into target files consisting of Horn clauses only. Automatic creation of the parse tree has also been implemented. However, it is omitted here for clarity.Note that every call to a start rule must be extended by two list arguments: the input and the output sentence (the latter usually being the empty list []). ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix A The DUG Preprocessor", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "LOGIN: A logic programming language with built-in inheritance", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ait-Kaci", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "The Journal of Logic Programming", |
| "volume": "3", |
| "issue": "", |
| "pages": "185--215", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ait-Kaci, H., and Nasr, R. (1986). \"LOGIN: A logic programming language with built-in inheritance.\" The Journal of Logic Programming 3:185-215.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Parsing discontinuous constituents in dependency grammar", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Covington", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "4", |
| "pages": "234--236", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Covington, M. A. (1990). \"Parsing discontinuous constituents in dependency grammar.\" Computational Linguistics 16(4):234-236.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Dependency systems and phrase-structure systems", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Gaifman", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Information and Control", |
| "volume": "8", |
| "issue": "", |
| "pages": "304--337", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaifman, H. (1965). \"Dependency systems and phrase-structure systems.\" Information and Control 8:304-337.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Dependency theory: A formalism and some observations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "G" |
| ], |
| "last": "Hays", |
| "suffix": "" |
| } |
| ], |
| "year": 1964, |
| "venue": "Language", |
| "volume": "40", |
| "issue": "4", |
| "pages": "511--525", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hays, D. G. (1964). \"Dependency theory: A formalism and some observations.\" Language 40(4):511-525.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Dependency unification grammar", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hellwig", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Proceedings, llth International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "195--198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hellwig, P. (1986). \"Dependency unification grammar.\" In Proceedings, llth International Conference on Computational Linguistics (COLING 1986). University of Bonn, Bonn. 195-198.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unification: A multidisciplinary survey", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "ACM Computing Surveys", |
| "volume": "21", |
| "issue": "1", |
| "pages": "105--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Knight, K. (1989). \"Unification: A multidisciplinary survey.\" ACM Computing Surveys 21(1):105-113.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Extraposition grammars", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "American Journal of Computational Linguistics", |
| "volume": "7", |
| "issue": "4", |
| "pages": "243--255", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pereira, E (1981). \"Extraposition grammars.\" American Journal of Computational Linguistics 7(4):243-255.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Definite clause grammars for language analysis--a survey of the formalism and a comparison with augmented transition networks", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "H D" |
| ], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Artificial Intelligence", |
| "volume": "13", |
| "issue": "", |
| "pages": "231--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pereira, E, and Warren, D. H. D. (1980). \"Definite clause grammars for language analysis--a survey of the formalism and a comparison with augmented transition networks.\" Artificial Intelligence 13:231-278.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Ordnungssortierte feature-Logik und Dependenzgrammatiken in der Computerlinguistik", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "CLSI Lecture Notes", |
| "volume": "", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shieber, S. M. (1986). \"An introduction to unification-based approaches to grammar.\" CLSI Lecture Notes, No. 4, Stanford University, Stanford, California. Steimann, E (1991). \"Ordnungssortierte feature-Logik und Dependenzgrammatiken in der Computerlinguistik.\" Diplomarbeit Universit~it Karlsruhe, Fakult~it fiir Informatik, Karlsruhe, Germany.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Elements de syntaxe structurale", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Tesni6re", |
| "suffix": "" |
| } |
| ], |
| "year": 1959, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tesni6re, L. (1959). Elements de syntaxe structurale. Paris: Librairie Klincksiek.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "context-free grammar G = (T, N, P, S) where --terminals and non-terminals are related by a one-to-one mapping f: T --, N\\{S} and each production in P is either of the form s ~//1 .-. Y/rn or of the form n --* nl...f-l(n)...nm, where n, nl,..., nm are elements of N\\{S} and s = S is a dependency grammar." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Steimann and Christoph Brzoska Dependency Unification Grammar for PROLOG above DCG produces the following dependency trees:" |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "word(by, preposition). ~ optional agent standard transitive verb word(like, verb(N, Voice)) :> ==> transverb(N, Voice)." |
| } |
| } |
| } |
| } |