| { |
| "paper_id": "C02-1047", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:19:56.750770Z" |
| }, |
| "title": "Towards a Noise-Tolerant, Representation-Independent Mechanism for Argument Interpretation", |
| "authors": [ |
| { |
| "first": "Ingrid", |
| "middle": [], |
| "last": "Zukerman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Monash University Clayton", |
| "location": { |
| "postCode": "3800", |
| "region": "VICTORIA", |
| "country": "AUSTRALIA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "George", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Monash University Clayton", |
| "location": { |
| "postCode": "3800", |
| "region": "VICTORIA", |
| "country": "AUSTRALIA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe a mechanism for the interpretation of arguments, which can cope with noisy conditions in terms of wording, beliefs and argument structure. This is achieved through the application of the Minimum Message Length Principle to evaluate candidate interpretations. Our system receives as input a quasi-Natural Language argument, where propositions are presented in English, and generates an interpretation of the argument in the form of a Bayesian network (BN). Performance was evaluated by distorting the system's arguments (generated from a BN) and feeding them to the system for interpretation. In 75% of the cases, the interpretations produced by the system matched precisely or almost-precisely the representation of the original arguments.", |
| "pdf_parse": { |
| "paper_id": "C02-1047", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe a mechanism for the interpretation of arguments, which can cope with noisy conditions in terms of wording, beliefs and argument structure. This is achieved through the application of the Minimum Message Length Principle to evaluate candidate interpretations. Our system receives as input a quasi-Natural Language argument, where propositions are presented in English, and generates an interpretation of the argument in the form of a Bayesian network (BN). Performance was evaluated by distorting the system's arguments (generated from a BN) and feeding them to the system for interpretation. In 75% of the cases, the interpretations produced by the system matched precisely or almost-precisely the representation of the original arguments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In this paper, we focus on the interpretation of argumentative discourse, which is composed of implications. We present a mechanism for the interpretation of NL arguments which is based on the application of the Minimum Message Length (MML) Principle for the evaluation of candidate interpretations (Wallace and Boulton, 1968) . The MML principle provides a uniform and incremental framework for combining the uncertainty arising from different stages of the interpretation process. This enables our mechanism to cope with noisy input in terms of wording, beliefs and argument structure, and to factor out the elements of an interpretation which rely on a particular knowledge representation.", |
| "cite_spans": [ |
| { |
| "start": 299, |
| "end": 326, |
| "text": "(Wallace and Boulton, 1968)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "So far, our mechanism has been tested on one knowledge representation -Bayesian Networks (BNs) (Pearl, 1988) ; logic-based representations will be tested in the future. of a BN which contains the preferred interpretation of this argument (the nodes corresponding to the original argument are shaded). In this example, the argument is obtained through a web interface (the uncertainty value of the consequent is entered using a drop-down menu). As seen in this example, the input argument differs structurally from the system's interpretation. In addition, the belief value for the consequent differs from that in the domain BN, and the wording of the statements differs from the canonical wording of the BN nodes. Still, the system found a reasonable interpretation in the context of its domain model.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 108, |
| "text": "(Pearl, 1988)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The results obtained in this informal trial are validated by our automated evaluation. This evaluation, which assesses baseline performance, consists of passing distorted versions of the system's arguments back to the system for interpretation. In 75% of the cases, the interpretations produced by the system matched the original arguments (in BN form) precisely or almost-precisely.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the next section, we review related research. We then describe the application of the MML criterion to the evaluation of interpretations. In Section 4, we outline the argument interpretation process. The results of our evaluation are reported in Section 5, followed by concluding remarks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our research integrates reasoning under uncertainty for plan recognition in discourse understanding with the application of the MML principle (Wallace and Boulton, 1968) . BNs in particular have been used in several such plan recognition tasks, e.g., (Charniak and Goldman, 1993; Horvitz and Paek, 1999; Zukerman, 2001 ). Charniak and Goldman's system handled complex narratives, using a BN and marker passing for plan recognition. It automatically built and incrementally extended a BN from propositions read in a story, so that the BN represented hypotheses that became plausible as the story unfolded. In contrast, we use a BN to constrain our understanding of the propositions in an argument, and apply the MML principle to select a plausible interpretation. Both Horvitz and Paek's system and Zukerman's handled short dialogue contributions. Horvitz and Paek used BNs at different levels of an abstraction hierarchy to infer a user's goal in informationseeking interactions with a Bayesian Receptionist.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 169, |
| "text": "(Wallace and Boulton, 1968)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 251, |
| "end": 279, |
| "text": "(Charniak and Goldman, 1993;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 280, |
| "end": 303, |
| "text": "Horvitz and Paek, 1999;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 304, |
| "end": 318, |
| "text": "Zukerman, 2001", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Zukerman used a domain model and user model represented as a BN, together with linguistic and attentional information to infer a user's goal from a short-form rejoinder. However, the combination of these knowledge sources was based on heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The MML principle is a model selection technique which applies information-theoretic criteria to trade data fit against model complexity. Selected applications which use MML are listed in http://www.csse.monash.edu.au/ dld/ Snob.application.papers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "According to the MML criterion, we imagine sending to a receiver the shortest possible message that describes an NL argument. When a good interpretation is found, a message which encodes the NL argument in terms of this interpretation will be shorter than the message which transmits the words of the argument directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Interpretation Using MML", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A message that encodes an NL argument in terms of an interpretation is composed of two parts: (1) instructions for building the interpretation, and (2) instructions for rebuilding the original argument from this interpretation. These two parts balance the need for a concise interpretation (Part 1) with the need for an interpretation that matches closely the original argument (Part 2). For instance, the message for a concise interpretation that does not match well the original argument will have a short first part but a long second part. In contrast, a more complex interpretation which better matches the original argument may yield a shorter message overall. As a result, in finding the interpretation that yields the shortest message for an NL argument, we will have produced a plausible interpretation, which hopefully is the intended interpretation. To find this interpretation, we compare the message length of the candidate interpretations. These candidates are obtained as described in Section 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Interpretation Using MML", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The MML criterion is derived from Bayes Theorem: Pr", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 Pr \u00a1 \u00a3 \u00a6 \u00a9 Pr \u00a1 \u00a3 \u00a2 \u00a9 \u00a6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ", where \u00a2 is the data and \u00a6 is a hypothesis which explains the data. An optimal code for an event with probability Pr \u00a1 has message length ML", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u00a1 ! # \" % $ ' & Pr \u00a1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(measured in bits). Hence, the message length for the data and a hypothesis is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "ML \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 ( ML \u00a1 \u00a3 \u00a6 \u00a9 0 ) ML \u00a1 \u00a3 \u00a2 \u00a6 \u00a9 2 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The hypothesis for which ML", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 3 \u00a4 4 \u00a6 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "is minimal is considered the best hypothesis. Now, in our context, Arg contains the argument, and SysInt an interpretation generated by our system. Thus, we are looking for the SysInt which yields the shortest message length for ML", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u00a1 Arg \u00a4 SysInt\u00a8% ML \u00a1 SysInt\u00a80 ) ML \u00a1 Arg", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "SysIntT he first part of the message describes the interpretation, and the second part describes how to reconstruct the argument from the interpretation. To calculate the second part, we rely on an intermediate representation called Implication Graph (IG). An Implication Graph is a graphi-cal representation of an argument, which represents a basic \"understanding\" of the argument. It is composed of simple implications of the form Antecedent Antecedent", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "& 1 1 1 Antecedent\u00a1 \u00a3 \u00a2 Consequent (where \u00a2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "indicates that the antecedents imply the consequent, without distinguishing between causal and evidential implications).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MML Encoding", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Arg represents an understanding of the input argument. It contains propositions from the underlying representation, but retains the structure of the argument.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt represents an understanding of a candidate interpretation. It is directly obtained from SysInt. Hence, both its structure and its propositions correspond to the underlying representation. Since both", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt use domain propositions and have the same type of representation, they can be compared with relative ease. Figure 1 illustrates the interpretation of a small argument, and the calculation of the message length of the interpretation. The interpretation process obtains", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 114, |
| "end": 122, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg from the input, and SysInt from", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg (left-hand side of Figure 1 ). If a sentence in Arg matches more than one domain proposition, the system generates more than one", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 31, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg from Arg (Section 4.1). Each", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg may in turn yield more than one SysInt. This happens when the underlying representation has several ways of connecting between the nodes in Figure 1 ). This calculation takes advantage of the fact that there can be only one", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 152, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg for each Arg-SysInt combination. Hence, Pr", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a1 Arg \u00a4 SysInt\u00a8 Pr \u00a1 Arg\u00a9\u00a4 \u00a6 \u00a5 Arg\u00a8S ysInt Pr \u00a1 Arg \u00a4 \u00a7 \u00a5 Arg\u00a8S ysInt\u00a8Pr \u00a1 \u00a4 \u00a6 \u00a5 Arg SysInt\u00a8Pr \u00a1 SysIntc ond. ind. Pr \u00a1 Arg \u00a4 \u00a7 \u00a5 Arg\u00a8P r \u00a1 \u00a4 \u00a6 \u00a5 Arg SysInt\u00a8Pr \u00a1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysIntT hus, the length of the message required to transmit the original argument from an interpretation is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "ML \u00a1 Arg \u00a4 SysInt\u00a8 (1) ML \u00a1 Arg \u00a4 \u00a7 \u00a5 Arg\u00a8) ML \u00a1 \u00a4 \u00a7 \u00a5 Arg SysInt\u00a8) ML \u00a1 SysIntT", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "hat is, for each candidate interpretation, we calculate the length of the message which conveys:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt -the interpretation, \u00a4 \u00a6 \u00a5 Arg SysInt -how to obtain the belief and structure of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg from SysInt, 1 and 1 We use SysInt for this calculation, rather than SysInt. This does not affect the message length because the receiver can obtain SysInt directly from SysInt.", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 24, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg -how to obtain the sentences in Arg from the corresponding nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a4 \u00a7 \u00a5 Arg .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "The interpretation which yields the shortest message is selected (the message-length equations for each component are summarized in Table 1 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 139, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a1 SysInt\u00cf n order to transmit SysInt, we simply send its propositions and the relations between them. A standard MML assumption is that the sender and receiver share domain knowledge. Hence, one way to send SysInt consists of transmitting how SysInt is extracted from the domain representation. This involves selecting its propositions from those in the domain, and then choosing which of the possible relations between these propositions are included in the interpretation. In the case of a BN, the propositions are represented as nodes, and the relations between propositions as arcs. Thus the message length for SysInt in the context of a BN is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculating ML", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "# \" % $ & C # nodes(domainBN) # nodes(SysInt) ) # \" % $ & C # incident arcs(SysInt) # arcs(SysInt)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Calculating ML", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "3.3 Calculating ML", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculating ML", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u00a1 IG Arg SysIntT he message which describes \u00a4 \u00a7 \u00a5 Arg in terms of SysInt (or rather in terms of \u00a4 \u00a6 \u00a5 SysInt ) conveys how \u00a4 \u00a6 \u00a5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculating ML", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Arg differs from the system's interpretation in two respects: (1) belief, and (2) argument structure. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculating ML", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Arg , we transmit any discrepancy between the belief stated in the argument and the system's belief in this proposition (propositions that appear in only one IG are handled by the message component which describes structural differences). The length of the message required to convey this information is \" !", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg# $ ! SysInt ML \u00a1 & % ( ' )\u00a1 0 \u00a9 \u00a4 \u00a7 \u00a5 Arg\u00a8 % 1 ' )\u00a1 2 \u00a9 \u00a4 \u00a6 \u00a5 SysInt\u00a8\u1e85 here % 1 ' 3 )\u00a1 0 \u00a9 \u00a4 \u00a7 \u00a5 5 4\u00a8 i s the belief in proposition in \u00a4 \u00a6 \u00a54", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": ". Assuming an optimal message encoding, we obtain 6 7 \" !", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg# \" ! SysInt # \" % $ & Pr \u00a1 & % 1 ' 3 )\u00a1 2 \u00a9 \u00a4 \u00a6 \u00a5 Arg\u00a8 % 1 ' 3 )\u00a1 2 \u00a9 \u00a4 \u00a6 \u00a5 SysInt\u00a8(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "3) which expresses discrepancies in belief as a probability that the argument will posit a particular belief in a proposition, given the belief held by the system in this proposition. We have modeled this probability using a function which yields a maximum proba-bility mass when the belief in proposition according to the argument agrees with the system's belief. This probability gradually falls as the discrepancy between the belief stated in the argument and the system's belief increases, which in turn yields an increased message length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "The message which transmits the structural discrepancies between \u00a4 \u00a6 \u00a5 SysInt and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural differences", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "Arg describes the structural operations required to transform", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a4 \u00a6 \u00a5 SysInt into \u00a4 \u00a7 \u00a5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg . These operations are: node insertions and deletions, and arc insertions and deletions. A node is inserted in \u00a4 \u00a6 \u00a5 SysInt when the system cannot reconcile a proposition in the given argument with any proposition in its domain representation. In this case, the system proposes a special Escape (wild card) node. Note that the system does not presume to understand this proposition, but still hopes to achieve some understanding of the argument as a whole. Similarly, an arc is inserted when the argument mentions a relationship which does not appear in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt . An arc (node) is deleted when the corresponding relation (proposition) appears in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a4 \u00a7 \u00a5 SysInt , but is omitted from \u00a4 \u00a7 \u00a5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg . When a node is deleted, all the arcs incident upon it are rerouted to connect its antecedents directly to its consequent. This operation, which models a small inferential leap, preserves the structure of the implication around the deleted node. If the arcs so rerouted are inconsistent with", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arg they will be deleted separately. For each of these operations, the message announces how many times the operation was performed (e.g., how many nodes were deleted) and then provides sufficient information to enable the message receiver to identify the targets of the operation (e.g., which nodes were deleted). Thus, the length of the message which describes the structural operations required to transform Node insertions = number of inserted nodes plus the penalty for each insertion. Since a node is inserted when no proposition in the domain matches a statement in the argument, we use an insertion penalty equal to \u00a2 \u00a1 -the probabilitylike score of the worst acceptable word-match between a statement and a proposition (Section 4.1). Thus the message length for node in-sertions is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "# \" % $ & \u00a1 # nodes ins\u00a8) # nodes ins \u00a1 # \" % $ & \u00a3 \u00a1", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Node deletions = number of deleted nodes plus their designations. To designate the nodes to be deleted, we select them from the nodes in SysInt (or \u00a4 \u00a6 \u00a5 SysInt ):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "# \" % $ & \u00a1 # nodes del\u00a8) 4 # \" % $ & C # nodes( SysInt ) # nodes del (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arc insertions = number of inserted arcs plus their designations plus the direction of each arc. (This component also describes the arcs incident upon newly inserted nodes.) To designate an arc, we need to select a pair of nodes (head and tail) from the nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt and the newly inserted nodes. However, some nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "SysInt are already connected by arcs. These arcs must be subtracted from the total number of arcs that can be inserted, yielding # poss arc ins", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "C # nodes( SysInt )+# nodes ins & # arcs(\u00a4 \u00a6 \u00a5 SysInt )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "We also need to send 1 extra bit per inserted arc to convey its direction. Hence, the length of the message that conveys arc insertions is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "# \" % $ & % \u00a1 # arcs ins\u00a8) # \" % $ & C # poss arc ins # arcs ins ) # arcs ins", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Arc deletions = number of deleted arcs plus their designations. We approximate Pr ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "# \" % $ & \u00a1 # arcs del\u00a8) \" % $ & C # arcs( SysInt ) # arcs del", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "Our system generates candidate interpretations for an argument by first postulating propositions that match the sentences in the argument, and then finding different ways to connect these propositionseach variant is a candidate interpretation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposing Interpretations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We currently use a naive approach for postulating propositions. For each sentence Arg in the given argument we generate candidate propositions as follows. For each proposition in the domain, the system proposes a canonical sentence \u00a1 \u00a2 (produced by a simple English generator). This sentence is compared to Arg , yielding a match-score for the pair ( Arg , ). When a match-score is above a threshold \u00a1 , we have found a candidate interpretation for Arg . For example, the proposition [G was in garden at 11] in Figure 1(b) is a plausible interpretation of the input sentence \"Mr Green was seen in the garden at 11\" in Figure 1(a) . Some sentences may have no propositions with match-scores above", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 511, |
| "end": 522, |
| "text": "Figure 1(b)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 618, |
| "end": 629, |
| "text": "Figure 1(a)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Postulating propositions", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": ". This does not automatically invalidate the argument, as it may still be possible to interpret the argument as a whole, even if a few sentences are not understood (Section 3.3) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 177, |
| "text": "(Section 3.3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a1", |
| "sec_num": null |
| }, |
| { |
| "text": "The match-score for a sentence Arg and a proposition -a number in the [0,1] range -is calculated using a function which compares words in Arg with words in \u00a1 \u00a2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a1", |
| "sec_num": null |
| }, |
| { |
| "text": ". The goodness of a wordmatch depends on the following factors: (1) level of synonymy -the number of synonyms the words have in common (according to WordNet, Miller et al., 1990) ; (2) position in sentence; and (3) partof-speech (PoS) -obtained using MINIPAR (Lin, 1998) . That is, a word", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 178, |
| "text": "Miller et al., 1990)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 259, |
| "end": 270, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a1", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a3 \u00a5 \u00a4 \u00a7 \u00a6\u00a8 \u00a9 in position in \u00a1 \u00a2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a1", |
| "sec_num": null |
| }, |
| { |
| "text": "matches perfectly a word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a1", |
| "sec_num": null |
| }, |
| { |
| "text": "W_Arg in position j in sentence Arg, if both words are exactly the same, they are in the same sentence position, and they have the same PoS. The match-score between W_P and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a6\u00a9", |
| "sec_num": null |
| }, |
| { |
| "text": "W_Arg is reduced as their level of synonymy falls, and as the difference in their sentence position increases. The match-score of two words is further reduced if they have different PoS. In addition, the PoS affects the penalty for a mismatch, so that mismatched non-content words are penalized less than mismatched content words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a6\u00a9", |
| "sec_num": null |
| }, |
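A minimal sketch of a word-match score combining the three factors just listed (synonymy level, sentence position, PoS). The weights and the functional form below are assumptions for illustration only; the paper does not give the exact formula, and the real system obtains synonyms from WordNet and PoS tags from MINIPAR rather than receiving them as arguments.

```python
def word_match(w_p, pos_p, tag_p, w_arg, pos_arg, tag_arg,
               shared_synonyms=0, max_synonyms=10):
    """Score a word W_P (position pos_p, PoS tag_p) against a word W_Arg.
    Assumed weighting: synonymy level x position closeness x PoS penalty."""
    if w_p == w_arg:
        synonymy = 1.0                              # identical words match fully
    elif shared_synonyms > 0:
        synonymy = shared_synonyms / max_synonyms   # degrades as synonymy falls
    else:
        return 0.0                                  # no lexical relation at all
    # score shrinks as the difference in sentence position grows
    position = 1.0 / (1.0 + abs(pos_p - pos_arg))
    # mismatched part-of-speech further reduces the score
    pos_penalty = 1.0 if tag_p == tag_arg else 0.5
    return synonymy * position * pos_penalty

# A perfect match: same word, same sentence position, same PoS.
perfect = word_match("garden", 5, "N", "garden", 5, "N")
```

With this form, moving the word one position away halves the score, and a synonym sharing half its synonyms scores 0.5, matching the qualitative behaviour described in the text.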
| { |
| "text": "The match-scores between a sentence and its candidate propositions are normalized, and the result is used to approximate Pr(Arg ∣ IG_Arg), which is required for the MML evaluation (Section 3.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3 \u00a6\u00a9", |
| "sec_num": null |
| }, |
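The normalization step can be sketched as below. Treating the normalized match-scores as a probability distribution over the candidate propositions for one sentence is, as the text notes, only an approximation of the probability needed by the MML evaluation.

```python
def normalize_scores(scored_candidates):
    """scored_candidates: dict mapping proposition -> raw match-score in [0, 1].
    Returns proposition -> approximate probability, summing to 1."""
    total = sum(scored_candidates.values())
    if total == 0:
        return {p: 0.0 for p in scored_candidates}
    return {p: s / total for p, s in scored_candidates.items()}

# Example from the text: one sentence matching two propositions,
# with the first scoring higher than the second.
probs = normalize_scores({
    "[G was in garden at 11]": 0.8,
    "[N saw G in garden]": 0.4,
})
```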
| { |
| "text": "Since more than one node may match each of the sentences in an argument, there may be more than one", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Connecting the propositions", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "IG_Arg that is consistent with the argument. For instance, the sentence \"Mr Green was seen in the garden at 11\" in Figure 1(a) matches both [G was in garden at 11] and [N saw G in garden] (although the former has a higher probability). If the other sentences in Figure 1(a) match only one proposition, two IGs that match the argument will be generated - one for each of the above alternatives. Figure 2 illustrates the remainder of the interpretation-generation process with respect to one", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 161, |
| "text": "[G was in garden at 11]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 112, |
| "end": 120, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 260, |
| "end": 268, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 392, |
| "end": 400, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg. This process consists of finding connections between the nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg; eliminating superfluous nodes; and generating sub-graphs of the resulting graph, such that all the nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg are connected (Figures 2(b), 2(c) and 2(d), respectively). The connections between the nodes in", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 31, |
| "text": "(Figures 2(b)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg are found by applying two rounds of inferences from these nodes (spreading outward). These two rounds enable the system to \"make sense\" of an argument with small inferential leaps (Zukerman, 2001). If, upon completion of this process, some nodes in", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 199, |
| "text": "(Zukerman, 2001", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a7 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg are still unconnected, the system rejects", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
| { |
| "text": "IG_Arg. This process is currently implemented in the context of a BN. However, any representation that supports the generation of a connected argument involving a given set of propositions would be appropriate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4 \u00a6 \u00a5", |
| "sec_num": null |
| }, |
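The connect-then-reject procedure above can be sketched on a plain adjacency-list graph. This is an assumed stand-in for the domain BN (node names are hypothetical, and arcs are treated as traversable in both directions for inference); the actual system additionally prunes superfluous nodes and enumerates candidate sub-graphs.

```python
from collections import deque

def expand(graph, seeds, rounds=2):
    """Collect all nodes reachable from the seed nodes within `rounds` hops -
    the 'two rounds of inferences spreading outward' of the text."""
    frontier, reached = set(seeds), set(seeds)
    for _ in range(rounds):
        frontier = {nb for n in frontier for nb in graph.get(n, ())} - reached
        reached |= frontier
    return reached

def connects_all(graph, seeds, rounds=2):
    """True if all seed nodes lie in one connected component of the
    two-round expansion; otherwise the interpretation is rejected."""
    allowed = expand(graph, seeds, rounds)
    sub = {n: [nb for nb in graph.get(n, ()) if nb in allowed] for n in allowed}
    seen, queue = set(), deque([next(iter(seeds))])
    while queue:                     # BFS over the induced sub-graph
        n = queue.popleft()
        if n in seen:
            continue
        seen.add(n)
        queue.extend(sub.get(n, ()))
    return set(seeds) <= seen

# Toy domain graph: A and C connect through B; D is isolated.
domain = {"A": ["B"], "B": ["A", "C"], "C": ["B"], "D": []}
ok = connects_all(domain, ["A", "C"])
```

With seeds A and C the one-hop round already reaches B, so the matched nodes connect; an argument matching A and D would be rejected.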
| { |
| "text": "Our evaluation consisted of an automated experiment where the system interpreted noisy versions of its own arguments. These arguments were generated from different sub-nets of its domain BN, and they were distorted at the BN level and at the NL level. At the BN level, we changed the beliefs in the nodes, and we inserted and deleted nodes and arcs. At the NL level, we distorted the wording of the propositions in the resultant arguments. All these distortions were performed for BNs of different sizes (3, 5, 7 and 9 arcs). Our measure of performance is the edit-distance between the original BN used to generate an argument, and the BN produced as the interpretation of this argument. For instance, two BNs that differ by one arc have an edit-distance of 2 (one addition and one deletion), while a perfect match has an edit-distance of 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
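The edit-distance measure can be made concrete for BNs represented as sets of directed arcs, as sketched below. This is a simplification under assumed node names: only arc additions and deletions are counted, although the evaluation's distortions also insert and delete nodes.

```python
def bn_edit_distance(arcs_a, arcs_b):
    """Arc additions plus deletions needed to turn one arc set into the other."""
    a, b = set(arcs_a), set(arcs_b)
    return len(a - b) + len(b - a)

# Two BNs that differ by one arc: distance 2 (one deletion plus one addition).
original = {("N saw G", "G in garden"), ("G in garden", "G murdered B")}
interpretation = {("G had motive", "G murdered B"),
                  ("G in garden", "G murdered B")}
dist = bn_edit_distance(original, interpretation)
```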
| { |
| "text": "Overall, our results were as follows. Our system produced an interpretation in 86% of the 5400 trials. In 75% of the 5400 cases, the generated interpretations had an edit-distance of 3 or less from the original BN, and in 50% of the cases, the interpretations matched the original BN perfectly. Figure 3 depicts the frequency of edit-distances for the different BN sizes under all noise conditions. We plotted edit-distances of 0, 1, ..., 9 and above, plus the category NI, which stands for \"No Interpretation\". As shown in Figure 3, the 0 edit-distance has the highest frequency, and performance deteriorates as BN size increases. Still, for BNs of 7 arcs or less, the vast majority of the interpretations have an edit-distance of 3 or less. Only for BNs of 9 arcs does the number of NIs exceed the number of perfect matches.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 295, |
| "end": 303, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 523, |
| "end": 531, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We also tested each kind of noise separately, maintaining the other kinds of noise at 0%. All the distortions were between 0 and 40%. We performed 1560 trials each for word noise, arc noise and node insertions, and 2040 trials for belief noise, which warranted additional observations. Figures 4, 5 and 6 show the recognition accuracy of our system (in terms of average edit-distance) as a function of arc, belief and word noise percentages, respectively. The performance for the different BN sizes (in arcs) is also shown. Our system's performance for node insertions is similar to that obtained for belief noise (the graph was not included owing to space limitations). Our results show that the two main factors that affect recognition performance are BN size and word noise, while the average edit-distance remains stable for belief and arc noise, as well as for node insertions (the only exception occurs for 40% arc noise and size-9 BNs). Specifically, for arc noise, belief noise and node insertions, the average edit-distance was 3 or less for all noise percentages, while for word noise, the average edit-distance was higher for several word-noise and BN-size combinations. Further, performance deteriorated as the percentage of word noise increased.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The impact of word noise on performance reinforces our intention to implement a more principled sentence comparison procedure (Section 4.1), with the expectation that it will improve this aspect of our system's performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We have offered a mechanism which produces interpretations of segmented NL arguments. Our application of the MML principle enables our system to handle noisy conditions in terms of wording, beliefs and argument structure, and allows us to isolate the effect of the underlying knowledge representation on the interpretation process. The results of our automated evaluation were encouraging, with interpretations that match the source BN perfectly or almost-perfectly being generated in 75% of the cases under all noise conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We are implementing a more principled model for sentence comparison, which is expected to yield more accurate probabilities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Bayesian model of plan recognition", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "P" |
| ], |
| "last": "Goldman", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Artificial Intelligence", |
| "volume": "64", |
| "issue": "1", |
| "pages": "53--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak and Robert P. Goldman. 1993. A Bayesian model of plan recognition. Artificial In- telligence, 64(1):50-56.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A computational architecture for conversation", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Horvitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Paek", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "UM99 -Proceedings of the Seventh International Conference on User Modeling", |
| "volume": "", |
| "issue": "", |
| "pages": "201--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Horvitz and Tim Paek. 1999. A computa- tional architecture for conversation. In UM99 - Proceedings of the Seventh International Confer- ence on User Modeling, pages 201-210, Banff, Canada.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Dependency-based evaluation of MINIPAR", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Workshop on the Evaluation of Parsing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Workshop on the Evaluation of Parsing Systems, Granada, Spain.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Introduction to WordNet: An on-line lexical database", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Beckwith", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "International Journal of Lexicography", |
| "volume": "3", |
| "issue": "4", |
| "pages": "235--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine Miller. 1990. Introduction to WordNet: An on-line lexical database. Journal of Lexicography, 3(4):235- 244.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Probabilistic Reasoning in Intelligent Systems", |
| "authors": [ |
| { |
| "first": "Judea", |
| "middle": [], |
| "last": "Pearl", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in In- telligent Systems. Morgan Kaufmann Publishers, San Mateo, California.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "An information measure for classification", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "S" |
| ], |
| "last": "Wallace", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "M" |
| ], |
| "last": "Boulton", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "The Computer Journal", |
| "volume": "11", |
| "issue": "", |
| "pages": "185--194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.S. Wallace and D.M. Boulton. 1968. An infor- mation measure for classification. The Computer Journal, 11:185-194.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An integrated approach for generating arguments and rebuttals and understanding rejoinders", |
| "authors": [ |
| { |
| "first": "Ingrid", |
| "middle": [], |
| "last": "Zukerman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "UM01 -Proceedings of the Eighth International Conference on User Modeling", |
| "volume": "", |
| "issue": "", |
| "pages": "84--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ingrid Zukerman. 2001. An integrated approach for generating arguments and rebuttals and un- derstanding rejoinders. In UM01 -Proceedings of the Eighth International Conference on User Modeling, pages 84-94, Sonthofen, Germany.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Figure 1(a) shows a simple argument, and Figure 1(d) shows a subset. (This research was supported in part by Australian Research Council grant A49927212.)", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Interpretation and MML evaluation", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "IG_Arg (Section 4.2). The message length calculation goes from SysInt to Arg through the intermediate representations (right-hand side of", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": ", in order to transmit Arg in terms of IG_Arg, we only need to transmit how each statement in Arg differs from the canonical statement generated for the matching node in IG_Arg (Section 4.1). The length of the message which conveys this infor-", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF6": { |
| "text": "by the comparison function described in Section 4.1.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF7": { |
| "text": "Candidates are all the subgraphs of (c) that connect the nodes in IG (4 of the 9 candidates are shown)", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "text": "Summary of Message Length Calculation", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>ML(Arg &amp; SysInt)</td><td>Equation 1</td></tr><tr><td>ML(SysInt)</td><td>Equation 2</td></tr><tr><td>ML(IG_Arg ∣ SysInt)</td><td>structural operations: Equations 4, 5, 6, 7, 8; belief operations: Equation 3</td></tr><tr><td>ML(Arg ∣ IG_Arg)</td><td>Equation 9</td></tr></table>" |
| } |
| } |
| } |
| } |