{
"paper_id": "J88-2007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:56:28.708108Z"
},
"title": "FOUNDATIONS OF ILLOCUTIONARY LOGIC",
"authors": [
{
"first": "John",
"middle": [
"R"
],
"last": "Searle",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley",
"region": "CA"
}
},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Vanderveken",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley",
"region": "CA"
}
},
"email": ""
},
{
"first": "Leonard",
"middle": [],
"last": "Bolc",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit6",
"location": {
"addrLine": "de Qu6bec, Trois Rivi6res) Cambridge, England: Cambridge University Press, 1985, xi + 227 pp"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "J88-2007",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "chapters present this topic, discussing in detail the basic notions and introducing notation, which is highly developed. The first chapter deals with the notion of illocutionary force and its seven elements: \u2022 Illocutionary point: type of utterance. All utterances have been divided into five types: assertives, directives, commissives, declarations, and expressives. Illocutionary point constitutes the most important and basic element of illocutionary force, for it determines the objective, the intention of the utterance. Other elements only change its shade. \u2022 Degree of strength of the illocutionary point: the strength of the utterance, which determines the degree of involvement of the interlocutor. The degree of strength of the illocutionary point enables to differentiate between two utterances of the same type, e.g., I request, I implore. \u2022 Mode of achievement: mode of utterance. This element determines the position of the interlocutor. For example, directives require expression by a person respected by the person to whom the command is addressed. \u2022 Propositional content conditions: conditions pertaining to the content of an utterance. Certain content conditions are imposed on utterances. For example, it is impossible to apologize for an event that is going to take place or to propose a change of events that took place in the past. \u2022 Preparatory conditions: the set of conditions that must occur for the utterance to be fulfilled. If for example we request someone Please, stop shouting, the preparatory condition of this utterance is that someone is shouting. Sincerity conditions: the set of conditions pertaining to the state of mental disposition, which must be fulfilled if the utterance is to be sincere. For example, it is impossible to say I am Very sorry, but I am not sorry at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u2022 Degree of strength of the sincerity conditions. Because states of mental disposition can be expressed with different strengths, this element serves to indicate these differences. The second chapter introduces formal notation and defines the above notions in terms of the theory of sets. It also includes a formal presentation of some features of illocutionary force and of its elements. The next chapter discusses logical structure of the set of illocutionary forces. Five elementary illocutionary forces are introduced, which correspond to the five types of utterances. The authors also deal with possible operations on illocutionary force. A set of all illocutionary forces can be constructed in a recurrent way, using these operations, starting from a set of elementary illocutionary forces. The fourth chapter defines new terms and relations connected with utterances. These are terms pertaining to fulfillment of utterances (e.g., successful performance, non-defective performance, failure), features of illocutionary act (e.g., relation of strong commitment), and features relating to sets of illocutionary acts (e.g., illocutionary consistency, congruence). The fifth chapter includes a broader discussion on some elements of illocutionary force.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The next three chapters constitute the actual content of the book. They are an independent part, including a formal and complex presentation of illocutionary logic. The sixth chapter is a collection of all definitions, axioms, and postulates of the above logic. The seventh and eighth chapters include theorems concerning all the notions discussed earlier and relations, together with comments and outlines of argumentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The last chapter constitutes a sort of appendix to the book, for it includes a set of English verbs, followed by their semantic descriptions, annotated with their relationship to the logic discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The book includes many examples illustrating the notions :introduced, making it easier for the readers to acquire them. The authors present not only theorems that \"introduce something new\" to the theory, but also some that state that some rules of classical logic cannot be transferred to illocutionary logic. Many theorems are supported by interesting examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "One of the disadvantages of the book is that the authors have limited their discussion to the English language. The book does not include any evaluation of the application of the presented theory in the description of other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "After' reading this book, the reader has an impression that the only objective of the book is description of linguistic phenomena. It is difficult to imagine practical application of the theory in natural-language processing systems. Although the book was not aimed at presenting practical applications, a short chapter on this topic could dispel the doubts of a reader studying this type of problem for the first time. Furthermore, the addition of an index would have facilitated the reader in returning to certain issues or unmemorized definitions. Still, this book is one of the more interesting recent publications on the application of logic in naturallanguage description. Reading it will inspire further research on the logical structure of natural language, and is highly recommended. This book is a reorganization of a 1983 doctoral thesis. It seems little effort has been spent to take into account the impressive amount of research in text comprehension since 1983. Indeed, the reference section lists only five post-1983 entries. Nevertheless the book is very relevant to the field of computational linguistics: it constitutes an archetype of the assumptions, strategies, and limitations faced by anyone attempting to implement a text-processing tool. This relatively short book (188 pages, double-spaced) is divided into two parts. The first, comprising four chapters, overviews the model, which is then detailed in the second part. The first chapter introduces the basic assumptions and goals of the thesis; the research focuses on memory mechanisms, not inferencing or reasoning. The author states that the work is carried out in terms of automatic natural-language processing (NLP) and thus that he will avoid claims and suggestions about human language processing. Indeed, the title is somewhat misleading: the use of the word \"memory\" in the dissertation has little to do with human memory. 
In essence, the thesis describes marker-passing algorithms used to select between possible candidates for disambiguation. The algorithms search for candidates in a database and choose between them according to the current context, which simply constrains memory retrieval. The system, called Capture, was designed not only for text processing but also to process collections of English paragraphs and produce an output incorporable into a conventional database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "The second chapter overviews the representational scheme and algorithms developed by the author. The knowledge base is constructed out of two types of assertions: specializations (IS-A declarations) and correspondences, which take the form role C1 of owner D1 is a role-specialization of role C2 whose owner is D2. These types of assertions can carry further information about the relationships between their arguments. This information is encoded as a list of flags given as an additional argument to the assertion. The author remarks that \"the motivation for the choice of flags that were defined for the memory formalism is simply that these seem to be useful, in practice, for stating information at the level of this kind of formalism\". Context is represented by a collection of context factors, each of which contributes activation to a particular set of memory entities (i.e., to the \"objects\" referred to in the assertions). There are seven major types of context factor, including recency, syntactic emphasis, deixis, and a priori subject area. These are essentially static rules that define how activation is managed for each memory entity involved during comprehension. The rules are applied for disambiguation and for defining the focus space; that is, the set of most \"activated\" memory entities. In the remainder of the chapter, Alshawi discusses his standard marker-passing model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "The third chapter addresses the problem of interpretation. The author tackles noun phrase (NP) reference interpretation, compound NPs, possessive NPs, with-PPs, and word-sense disambiguation. Conditionals, negation, and \"phenomena going beyond memory mechanisms\" (e.g., modality and metaphor) are not handled. The algorithms are simple to understand but often lack proper motivation. In the fourth chapter, Alshawi discusses related research as of 1983. In particular, the author acknowledges the strong influence of Fahlman's work on marker-passing, and of Grosz's notion of global focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "The second part starts on page 76. In the rest of the book, Alshawi details the ideas of the first part (see summary table, p. 94) and elaborates on the Capture feature that creates a relational database as the result of text processing. The author concludes with a chapter on the complexity of techniques for efficient retrieval from a database, a topic too often ignored in NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "I said above that this book is, in my opinion, an archetype of the NLP thesis in computational linguistics. The first appendix, which lists some of the 30 short texts processed by Capture, confirms this: the examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMMENTS",
"sec_num": null
},
{
"text": "Computational Linguistics, Volume 14, Number 2, June 1988 Book Reviews Memory and Context for Language Interpretation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {}
}
}