{
"paper_id": "J82-2012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:07:20.571631Z"
},
"title": "Logic Programming Bibliography",
"authors": [
{
"first": "Gordon",
"middle": [
"S"
],
"last": "Novak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305",
"region": "California"
}
},
"email": ""
},
{
"first": "Helder",
"middle": [],
"last": "Coelho",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jose",
"middle": [
"C"
],
"last": "Cotta",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Luis",
"middle": [
"M"
],
"last": "Pereira",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Philip",
"middle": [
"R"
],
"last": "Cohen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oregon State University Corvallis",
"location": {
"addrLine": "Bolt Beranek and Newman, Inc. 10 Moulton Street Cambridge, Kathy Starr Bolt Beranek and Newman, Inc. 10 Moulton Street Cambridge",
"postCode": "97331, 02239, 02239",
"settlement": "Oregon, Massachusetts, Massachusetts"
}
},
"email": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Delaware Newark",
"location": {
"postCode": "19711",
"settlement": "Delaware"
}
},
"email": ""
},
{
"first": "Norman",
"middle": [
"K"
],
"last": "Sondheimer",
"suffix": "",
"affiliation": {
"laboratory": "Software Research Sperry Univac MS",
"institution": "",
"location": {
"addrLine": "2G3 Blue Bell",
"postCode": "19424",
"region": "Pennsylvania"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "GLISP is a high-level, LISP-based language which is compiled into LISP. GLISP provides a powerful abstract datatype facility, allowing description and use of both LISP objects and objects in AI representation languages. GLISP language features include PASCAL-like control structures, infix expressions with operators which facilitate list manipulation, and reference to objects in PASCAL-like or English-like syntax. English-like definite reference to features of objects which are in the current computational context is allowed; definite references are understood and compiled relative to a knowledge base of object descriptions. Object-centered programming is supported; GLISP can substantially improve runtime performance of object-centered programs by optimized compilation of references to objects. This manual describes the GLISP language and use of GLISP within INTER-LISP.",
"pdf_parse": {
"paper_id": "J82-2012",
"_pdf_hash": "",
"abstract": [
{
"text": "GLISP is a high-level, LISP-based language which is compiled into LISP. GLISP provides a powerful abstract datatype facility, allowing description and use of both LISP objects and objects in AI representation languages. GLISP language features include PASCAL-like control structures, infix expressions with operators which facilitate list manipulation, and reference to objects in PASCAL-like or English-like syntax. English-like definite reference to features of objects which are in the current computational context is allowed; definite references are understood and compiled relative to a knowledge base of object descriptions. Object-centered programming is supported; GLISP can substantially improve runtime performance of object-centered programs by optimized compilation of references to objects. This manual describes the GLISP language and use of GLISP within INTER-LISP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "IBertram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Point of View on Children's Stories",
"sec_num": null
},
{
"text": "Reading Education Report No. 25, July 1981, 45 pages. Recent work on text analysis at the Center for the Study of Reading and elsewhere has produced surprising results regarding the texts that children read in school. These results support the hypothesis that part of the difficulty children encounter in making the transition from beginning to skilled reading lies in an abrupt shift in text characteristics between lower and upper elementary school. Moreover, a comparison between school texts and popular trade books shows that the school texts may provide inadequate preparatio:n for the texts that skilled readers need to master. Thus, characteristics of the texts that children are expected to read may hinder rather than help in the attainment of educational goals. Reading Education Report No. 28, Aug. 1981, 13 pages. Being able to measure the readability of a text with a simple formula is an attractive prospect, and many groups have been using readability formulas in a variety of situations where estimates of text complexity are thought to be necessary. The most obvious and explicit use of readability formulas is by educational publishers designing basal and remedial reading texts; some states, in fact, will consider using a basal series only if it fits certain readability formula criteria. Increasingly, public documents such as insurance policies, tax forms, contracts, and jury instructions must meet criteria stated in terms of readability formulas. Unfortunately, readability formlas just don't fulfill their promise. We attempt here to categorize and summarize some of the problems with readability formulas and their use. No. 29, Aug. 1981, 15 pages.",
"cite_spans": [
{
"start": 25,
"end": 53,
"text": "No. 25, July 1981, 45 pages.",
"ref_id": null
},
{
"start": 798,
"end": 826,
"text": "No. 28, Aug. 1981, 13 pages.",
"ref_id": null
},
{
"start": 1648,
"end": 1669,
"text": "No. 29, Aug. 1981, 15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A New Point of View on Children's Stories",
"sec_num": null
},
{
"text": "What appears to be a single story is often a complex set of stories within stories, each with its distinct author and reader. Examples of such stories within stories are presented. Results of analyses of basal readers and trade books in terms of embedded stories are also discussed. These suggest that a greater variety of stories could and should be made available to children.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stories Within Stories",
"sec_num": null
},
{
"text": "\"correct\" and orthodox positions on any topic. We feel that the views expressed here, while diverse and in some cases programmatic, will be useful in provoking discussion and reexamining assumptions about readability formulas, perhaps in defining research which might lead to a better understanding of what makes things difficult to read. Reading Education Report No. 31, Sept. 1981, 61 pages. The papers in this collection describe a notion of \"conceptual readability\" which contrasts our approach to text with that assumed by standard readability formulas. Traditionally, the readability level of a text has been calculated by considering text characteristics such as the number of words per sentence and the degree of familiarity of individual words. Our approach focuses instead on the concepts communicated by the text: how arguments are presented, what place examples play in an exposition, how characters' interactions are developed and described.",
"cite_spans": [
{
"start": 347,
"end": 393,
"text": "Education Report No. 31, Sept. 1981, 61 pages.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stories Within Stories",
"sec_num": null
},
{
"text": "In this report, we first demonstrate how certain uses of traditional readability formulas may actually lead to more difficult texts. Next, we discuss two alternative text analysis methods which are sensitive to structural, semantic and discourse characteristics. Finally, we suggest an educational method which encourages children to focus on the conceptual level of text in their early reading and writing experiences. The papers which make up this technical report are summaries of oral presentations on readability and readability formulas delivered in March 1980 at the Center for the Study of Reading. The papers included here represent as closely as possible the content and organization of the oral presentations, in a more readable format than a verbatim transcript. The purpose of the conference was to raise a number of issues for discussion and to present a spectrum of ideas and viewpoints from which readability formulas could be judged or criticized. We have not tried to make the papers exhaustive summaries of all that has been done on a certain subject or to represent only the most Technical Report No. 214, September 1981, 38 pages. That language is a phenomenon belonging primarily to the domain of social activities is hardly an arguable point. While one can list nonsocial uses of language as well as types of social interaction that are not linguistic, the fact remains that the overlap between language use and social interaction, though imperfect, is still considerable. Moreover, some minimal amount of social interaction seems to be necessary for language acquisition to take place. The obvious kinship between language and social interaction suggests the possibility of a relationship between knowledge in the social sphere and the learning of linguistic forms. In this paper four different kinds of relationships between social interaction and syntax acquisition are outlined and evaluated. 
The positions range from a strong one, deriving syntax acquisition directly from interactionally provided social knowledge, to a weak one, where the relatively autonomous process of syntax acquisition can be facilitated by the efficient distribution of processing resources.",
"cite_spans": [
{
"start": 1117,
"end": 1151,
"text": "No. 214, September 1981, 38 pages.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stories Within Stories",
"sec_num": null
},
{
"text": "Technical Report No. 218, September 1981, 83 pages.",
"cite_spans": [
{
"start": 17,
"end": 44,
"text": "No. 218, September 1981, 83",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bertram Bruce Bolt Beranek and Newman Inc. 10 Moulton Street Cambridge, Massachusetts 02238",
"sec_num": null
},
{
"text": "An author and a reader are engaged in a social interaction which depends on their goals and their beliefs about the world and each other. One aspect of this interaction is the creation of another level of social interaction involving an \"implied author\" and an \"implied reader.\" The newly created characters may, in their turn, create another level of social interaction involving, for example, a \"narrator\" and a \"narratee.\" Each level so created permits the creation of an addi-tional level. A model for the levels of social interaction in reading is discussed in the paper. A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments. Annual ACL Meeting, June 1982, 9-15. We argue that because the very concept of computation rests on notions of interpretation, the semantics of natural languages and the semantics of computational formalisms are in the deepest sense the same subject. The attempt to use computational formalisms in aid of an explanation of natural language semantics, therefore, is an enterprise that must be undertaken with particular care. We describe a framework for semantical analysis that we have used in the computational realm, and suggest that it may serve to underwrite computationally-oriented linguistic semantics as well. 
The major feature of this framework is the explicit recognition of both the declarative and the procedural import of meaningful expressions; we argue that whereas these two viewpoints have traditionally been taken as alternative, any comprehensive semantical theory must account for how both aspects of an expression contribute to its overall significance. Model-theoretic semantics provides a computationally attractive means of)representing the semantics of natural language. However, the models used in this formalism are static and are usually infinite. Dynamic models are incomplete models that include only the information needed for an application and to which information can be added. Dynamic models are basically approximations of larger conventional models, but differ in several interesting ways.",
"cite_spans": [
{
"start": 1079,
"end": 1115,
"text": "Annual ACL Meeting, June 1982, 9-15.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bertram Bruce Bolt Beranek and Newman Inc. 10 Moulton Street Cambridge, Massachusetts 02238",
"sec_num": null
},
{
"text": "The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i.e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic and Computational Semantics",
"sec_num": null
},
{
"text": "Computer Science Department The University of Rochester Rochester, New York 14627",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What's in a Semantic Network? James F. Allen and Alan M. Frisch",
"sec_num": null
},
{
"text": "Proc. 20th Annual ACL Meeting, June 1982, 19-27. Ever since Woods's \"What's in a Link\" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation.",
"cite_spans": [
{
"start": 11,
"end": 48,
"text": "Annual ACL Meeting, June 1982, 19-27.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "What's in a Semantic Network? James F. Allen and Alan M. Frisch",
"sec_num": null
},
{
"text": "For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on. This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semanticnetwork-like retriever.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What's in a Semantic Network? James F. Allen and Alan M. Frisch",
"sec_num": null
},
{
"text": "A desirable long-range goal in building future speech understanding systems would be to accept the kind of language people spontaneously produce. We show that people do not speak to one another in the same way they converse in typewritten language. Spoken language is finer-grained and more indirect. The differences are striking and pervasive. Current techniques for engaging in typewritten dialogue will need to be extended to accommodate the structure of spoken language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What's in a Semantic Network? James F. Allen and Alan M. Frisch",
"sec_num": null
},
{
"text": "Proc. 20th Annual ACL Meeting, June 1982, 36-43. An outline of a theory of comprehension of declarative contexts is presented.",
"cite_spans": [
{
"start": 11,
"end": 48,
"text": "Annual ACL Meeting, June 1982, 36-43.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816",
"sec_num": null
},
{
"text": "The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816",
"sec_num": null
},
{
"text": "Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun groups, etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter, but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes. Proc. 20th Annual ACL Meeting, June 1982, 67-73. Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update. One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes, and domain values). In many cases this correspondence seems sufficient to interpret NL queries. NL update seems to require database counterparts for active verbs, such as \"hire,\" \"schedule,\" and \"enroll,\" rather than for stative entities. There seem to be no natural candidates to fill this role.",
"cite_spans": [
{
"start": 860,
"end": 897,
"text": "Annual ACL Meeting, June 1982, 67-73.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816",
"sec_num": null
},
{
"text": "We suggest a database counterpart for active verbs, which we call verbgraphs. The verbgraphs may be used to support NL update. A verbgraph is a structure for representing the various database changes that a given verb might describe. In addition to describing the variants of a verb, they may be used to disambiguate the update command. Other possible uses of verbgraphs include specification of defaults, prompting of the user to guide but not dictate user interaction and enforcing a variety of types of integrity constraints. Proc. 20th Annual ACL Meeting, June 1982, 74-81. This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a disambiguator that uses sortal information to convert normal-form first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations. Annual ACL Meeting, June 1982, 82-84. This brief paper, which is itself an extended abstract for a forthcoming paper, describes a metric that can be easily computed during either bottom-up or top-down construction of a parse tree for ranking the desirability of alternative parses. In its simplest form, the metric tends to prefer trees in which constituents are pushed as far down as possible, but by appropriate modification of a constant in the formula other behavior can be obtained also. 
This paper includes an introduction to the EPISTLE system being developed at IBM Research and a discussion of the results of using this metric with that system.",
"cite_spans": [
{
"start": 540,
"end": 577,
"text": "Annual ACL Meeting, June 1982, 74-81.",
"ref_id": null
},
{
"start": 1468,
"end": 1505,
"text": "Annual ACL Meeting, June 1982, 82-84.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816",
"sec_num": null
},
{
"text": "is essential to acceptable natural language interfaces. For instance, an experiment with the REL English query system showed 10% elliptical input. This paper presents a method of automatically interpreting ellipsis based on dialogue context. Our method expands on previous work by allowing for expansion ellipsis and by allowing for all combinations of statement following question, question following statement, question following question, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust response to ellipsis (fragmentary sentences)",
"sec_num": null
},
{
"text": "Proc. 20th Annual ACL Meeting, June 1982, 108-112. This paper describes how a language-planning system can produce natural-language referring expressions that satisfy multiple goals. It describes a formal representation for reasoning about several agents' mutual knowledge using possible-worlds semantics and the general organization of a system that uses the formalism to reason about plans combining physical and linguistic actions at different levels of abstraction. It discusses the planning of concept activation actions thai: are realized by definite referring expressions in the planned utterances, and shows how it is possible to integrate physical actions for communicating intentions with linguistic actions, resulting in plans that include pointing as one of the communicative actions available to the speaker.",
"cite_spans": [
{
"start": 11,
"end": 50,
"text": "Annual ACL Meeting, June 1982, 108-112.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SRI International 333 Ravenswood Avenue Menlo Park, California 94025",
"sec_num": null
},
{
"text": "The TEXT System for Natural Language Generation: An Overview Kathleen R. McKeown",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRI International 333 Ravenswood Avenue Menlo Park, California 94025",
"sec_num": null
},
{
"text": "The Moore School University of Pennsylvania Philadelphia, Pennsylvania 19104 Prec. 20th Annual ACL Meeting, June 1982, 113-120. Computer-based generation of natural language requires consideration of two different types of problems: 1) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language databa,;e system, addressing the specific problem of responding to questions about database structure.",
"cite_spans": [
{
"start": 58,
"end": 127,
"text": "Pennsylvania 19104 Prec. 20th Annual ACL Meeting, June 1982, 113-120.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Department of Computer and Information Science",
"sec_num": null
},
{
"text": "The Moore School University of Pennsylvannia Philadelphia, Pennsylvania 19104 Proc. 20th Annual ACL Meeting, June 1982, 121-128. The knowledge representation is an important factor' in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient information.",
"cite_spans": [
{
"start": 59,
"end": 128,
"text": "Pennsylvania 19104 Proc. 20th Annual ACL Meeting, June 1982, 121-128.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Department of Computer and Information Science",
"sec_num": null
},
{
"text": "PROLOG,\" and from the Hungarian list delivered during the Logic Programming Workshop at Debrecen, July 11980.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "American Journal of Computational Linguistics, Volume 8, Number 2, April-June 1982",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Proc. 20th Annual ACL Meeting, June 1982, 129-135. We argue that in domains where a strong notion of salience can be defined, it can be used to provide: 1) an elegant solution to the selection problem, i.e. the problem of how to decide whether a given fact should or should not be mentioned in the text; and 2) a simple and direct control framework for the entire deep generation process, coordinating proposing, planning, and realization. (Deep generation involves reasoning about conceptual and rhetorical facts, as opposed to the narrowly linguistic reasoning that takes place during realization). We report on an empirical study of salience in pictures of natural scenes, and its use in a computer program that generates descriptive paragraphs comparable to those produced by people. Proc. 20th Annual ACL Meeting, June 1982, 136-144. This paper describes the results of a preliminary study of a knowledge engineering approach to natural language understanding. A computer system is being developed to handle the acquisition, representation, and use of linguistic knowledge. The computer system is rule-based and utilizes a semantic network for knowledge storage and representation. In order to facilitate the interaction between user and system, input of linguistic knowledge and computer responses are in natural language. Knowledge of various types can be entered and utilized: syntactic and semantic; assertions and rules. The inference tracing facility is also being developed as a part of the rule-based system with output in natural language. A detailed example is presented to illustrate the current capabilities and features of the system. Proc. 20th Annum ACL Meeting, June 1982, 145-151. AMBER is a model of first language acquisition that improves its performance through a process of error recovery. The model is implemented as an adaptive production system that introduces new conditionaction rules on the basis of experience. 
AMBER starts with the ability to say only one word at a time, but adds rules for ordering goals and producing grammatical morphemes, based on comparisons between predicted and observed sentences.The morpheme rules may be overly general and lead to errors of commission; such errors evoke a discrimination process, producing more conservative rules with additional conditions. The system's performance improves gradually, since rules must be relearned many times before they are used. AMBER's learning mechanisms account for some of the major developments observed in children's early speech.",
"cite_spans": [
{
"start": 11,
"end": 50,
"text": "Annual ACL Meeting, June 1982, 129-135.",
"ref_id": null
},
{
"start": 799,
"end": 838,
"text": "Annual ACL Meeting, June 1982, 136-144.",
"ref_id": null
},
{
"start": 1664,
"end": 1702,
"text": "Annum ACL Meeting, June 1982, 145-151.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {},
"ref_entries": {
"TABREF2": {
"html": null,
"text": "The model provides a framework for examining devices such as author commentary, irony, stories within stories, first person narration, and point of view. Examples such as The Tale of Benjamin Bunny and The Turn of the Screw are discussed.",
"content": "<table><tr><td>Translating Artificial Intelligence Center</td></tr><tr><td>SRI International</td></tr><tr><td>333 Ravenswood Avenue</td></tr><tr><td>Menlo Park, California 94025</td></tr><tr><td>Prec. 20th Annual ACL Meeting, June 1982, I-8.</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "The Representation of Inconsistent InformationProc. 20thAnnual ACL Meeting, June 1982, 16-18.",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "",
"content": "<table><tr><td>Computer Sciences Department</td></tr><tr><td>IBM Thomas J. Watson Research Center</td></tr><tr><td>Yorktown Heights, New York 10598</td></tr><tr><td>Proc. 20th</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}