{
"paper_id": "D13-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:42:32.416051Z"
},
"title": "Grounding Strategic Conversation: Using negotiation dialogues to predict trades in a win-lose game",
"authors": [
{
"first": "Ana\u00efs",
"middle": [],
"last": "Cadilhac",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IRIT Univ. Toulouse",
"location": {
"country": "France"
}
},
"email": "cadilhac@irit.fr"
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"settlement": "Toulouse",
"country": "France"
}
},
"email": "asher@irit.fr"
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IRIT Univ. Toulouse",
"location": {
"country": "France"
}
},
"email": "benamara@irit.fr"
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a method that predicts which trades players execute during a winlose game. Our method uses data collected from chat negotiations of the game The Settlers of Catan and exploits the conversation to construct dynamically a partial model of each player's preferences. This in turn yields equilibrium trading moves via principles from game theory. We compare our method against four baselines and show that tracking how preferences evolve through the dialogue and reasoning about equilibrium moves are both crucial to success.",
"pdf_parse": {
"paper_id": "D13-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a method that predicts which trades players execute during a winlose game. Our method uses data collected from chat negotiations of the game The Settlers of Catan and exploits the conversation to construct dynamically a partial model of each player's preferences. This in turn yields equilibrium trading moves via principles from game theory. We compare our method against four baselines and show that tracking how preferences evolve through the dialogue and reasoning about equilibrium moves are both crucial to success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rational agents act so as to maximise their expected utilities-an optimal trade off between what they prefer and what they believe they can achieve (Savage, 1954) . Solving a game problem involves finding equilibrium strategies: an optimal action for each player that maximises his expected utility, assuming that the other players perform their specified action (Shoham and Leyton-Brown, 2009) . Calculating equilibria thus requires knowledge of the other players' preferences but almost all bargaining games occur under the handicap of imperfect information about this (Osborne and Rubinstein, 1994) . Players therefore try to extract their opponents' preferences from what they say, likewise revealing their own preferences in their own utterances. These elicited preferences guide an agent's decisions, like choosing to make such and such a bargain with such and such a person. Tracking preferences through dialogue is thus crucial for analyzing the agents' strategic reasoning in real game scenarios.",
"cite_spans": [
{
"start": 148,
"end": 162,
"text": "(Savage, 1954)",
"ref_id": "BIBREF25"
},
{
"start": 363,
"end": 394,
"text": "(Shoham and Leyton-Brown, 2009)",
"ref_id": "BIBREF27"
},
{
"start": 571,
"end": 601,
"text": "(Osborne and Rubinstein, 1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we design a model that maps what people say in a win-lose game into a prediction of exactly which players, if any, trade with each other, and exactly what resources they exchange. We use both statistics and logic: we use a corpus of negotiation dialogues to learn classifiers that map each utterance to its speech act and to other acts pertinent to bargaining; and we develop a symbolic algorithm that, from the classifiers' output, dynamically constructs a model of each player's preferences as the conversation proceeds (for instance, the preference to receive a certain resource, or to accept a certain trade). This preference model uses CP-nets (Boutilier et al., 2004) , a representation of preferences for which algorithms for computing equilibrium strategies exist. We adapt those algorithms to predict the trades executed in the game.",
"cite_spans": [
{
"start": 664,
"end": 688,
"text": "(Boutilier et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The algorithm for construcing CP-nets uses only the output of our classifiers, which in turn rely entirely on shallow features in the raw text and robust parsers. Together they provide an end to end model, from raw text to a prediction of which trade, if any, occurred. We evaluate the various components of this (pipeline) algorithm separately, as well as the end to end model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our study exploits a corpus of negotiation dialogues from an online version of the win lose game The Settlers of Catan. Sections 2 and 3 describe the corpus and its annotation. Section 4 introduces our method for constructing the agents' preferences from the dialogues. We use this in Section 5 to predict whether a trade is executed as a result of the players' negotiations, and if so we predict who took part in the trade, and what they exchanged. Our method shows promising results, beating baselines that don't adequately track or reason about preferences. We compare our model to related work in Section 6 and point to future work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Settlers of Catan (www.catan.com) is a winlose game that involves negotiations over restricted resources. Each player (three or more) acquires resources (of 5 types: ore, wood, wheat, clay, sheep), which they use in different combinations to build roads, settlements and cities, which in turn earns them points towards winning. The first player to 10 points wins. Players acquire resources in several ways, in particular through agreed trades with other players. Some methods (e.g., robbing) are hidden from view, so players lack complete information about their opponents' resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The game",
"sec_num": "2"
},
{
"text": "Our corpus contains conversations of humans playing an online version of Settlers (Afantenos et al., 2012) . Players must converse in a chat interface to carry out trades. Each game contains several dozen self-contained bargaining dialogues. Our experiments use 10 Settlers games, consisting of more than 2000 individual dialogue turns (see Section 3). Table 1 is a sample dialogue from the corpus. The sentences in the corpus have a relatively simple syntax, though many also exhibit long distance dependencies. However, these conversations are pragmatically complex. They exhibit complex anaphoric dependences (e.g., utterance ID 4 in Table 1 ). Other pragmatic inferences, which are dependent on reasoning about intentions, speech acts and discourse structure, are also ubiquitous. For example, the question Have you got any ore? implies an offer for the speaker to receive ore in exchange for something from someone unspecified, and its response I've got wheat not only implies a willingness to exchange wheat for something, but as a response to the question it also implies a refusal to give any ore.",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Afantenos et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 637,
"end": 644,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The game",
"sec_num": "2"
},
{
"text": "More generally, a dialogue turn in our corpus can express an offer, a counteroffer, an acceptance or rejection of an offer, or a commentary on the above or on moves in the game. All except the last provide clues about preferences: e.g., which players a speaker wants to execute a trade with; or what resources to exchange. For instance, the utterance Anybody have any sheep for wheat? conveys several preferences. First, it conveys the speaker's preference to trade with someone unspecified. Other informative but underspecified preferences include: the speaker's preference to acquire some sheep over alternatives; and in a context where she receives sheep, a preference to give away some of her wheat over the alternatives. Crucially, it does not convey a preference to give away wheat in a context where she receives nothing or something other than sheep.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The game",
"sec_num": "2"
},
{
"text": "In line with a non-cooperative bargaining game, the preferences and offers that a speaker reveals are less specific than an executable trade requires, where the trading partners and the type of resources offered and received must all be defined. Such general dialogue moves are essentially information seekingevidence that humans playing Settlers have imperfect information about their opponents' preferences. In fact, many offers to trade result in no trade being agreed to and executed. While observed negotiation failure would be puzzling in a bargaining game with perfect information (Osborne and Rubinstein, 1994) , it occurs relatively frequently in Settlers.",
"cite_spans": [
{
"start": 588,
"end": 618,
"text": "(Osborne and Rubinstein, 1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The game",
"sec_num": "2"
},
{
"text": "We have a multi-layered dialogue annotation scheme that includes: (1) a pre-annotation that segments the dialogue into turns which are further segmented into Elementary Discourse Units (EDUs) with the author of each turn automatically given;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "(2) a characterization of each EDU in terms of basic speech acts (assertion, question, request) as well as dialogue acts that are specific to bargaining (offers, counteroffers, etc.); and (3) associated information about the givable and/or receivable resources that EDUs express. Two annotators received training on 77 dialogues, totaling 699 EDUs. They then both annotated the remaining dialogues independently (2741 EDUs and 511 dialogues in total). Kappas for inter-annotator agreement are given below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "Each turn logs what a player enters in the chat window and also aspects of the game state at the time: his resources, the state of the game board and a time stamp. The pre-annotation divides each turn into EDUs. The annotators then have to specify the dialogue act of each EDU: Offer, Counteroffer, Accept or Refusal (of an offer addressed to the emitter), and Other. Other labels units that either comment on strategic moves in the game or are not directly pertinent to bargaining. Annotators also specify the addressee of the EDU and its surface type: Question, Request or Assertion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue act annotation (Kappa=0.79)",
"sec_num": "3.1"
},
{
"text": "Annotators also specify for each EDU and its dialogue act an associated feature structure, which captures (partial) information that the EDU expresses about the type and quantity of resources that are of the following four attributes: Givable, Not Givable, Receivable or Not Receivable. These attributes can take Boolean combinations of resources as values via two operators AND and OR, that respectively stand for conjunction (the agent expresses two preferences and he prefers to achieve one of them if he cannot have both, such as I need clay and wood) and disjunction (free choice) of preferences (e.g., I can give you clay or wood). We allow attributes to have unknown values: the annotation tool inserts a ? in these cases. We also insist that the annotators resolve anaphoric dependencies when specifying values to attributes, as shown in EDU (4) in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 857,
"end": 864,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Resource type annotation (Kappa=0.80)",
"sec_num": "3.2"
},
{
"text": "Predicting the executed trades from the dialogues starts with three sub-tasks: automatically identifying each EDU's dialogue act; detecting the EDU's resources; and specifying the attributes of those resources (i.e., Givable, Receivable, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue act and resource prediction",
"sec_num": "4"
},
{
"text": "As is well established, one EDU's dialogue act depends on previous dialogue acts (Stolcke et al., 2000) . In our corpus, Accept or Reject frequently follow Offer and Counteroffer. Since labeling is sequential, we use Conditional Random Fields (CRFs) to learn dialogue acts. CRFs have been shown to yield better results in dialogue act classification on online chat than HMM-SVN and Naive Bayes (Kim et al., 2012) . We use three types of features: lexical, syntactic and semantic. And we exploit them as unigrams and bigrams: unigrams associate the value of the feature with the current output class (level 0); bigrams take account of the value of the feature associated with a combination of the current output class and previous output class (level -1). 6 features were used exclusively as unigrams: the EDU's position in the dialogue, its first and last words, its subject lemma, a boolean feature to indicate if the current speaker is the one that initiates the dialogue and the position of the speaker's first turn in the dialogue.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Stolcke et al., 2000)",
"ref_id": "BIBREF29"
},
{
"start": 394,
"end": 412,
"text": "(Kim et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying dialogue acts",
"sec_num": "4.1"
},
{
"text": "We have 15 unigram and bigram features (at levels 0 and -1), as well as templates that combine feature values for the two levels. These include 14 boolean features that indicate if the EDU contains: bargaining verbs (e.g. trade, offer), references to another player (e.g. you), resource tokens as encoded in a task dedicated lexicon (e.g. wheat, clay), quantifiers (e.g. one, none), anaphoric pronouns, occurrences of \"for\" prepositional phrases (e.g. wheat for clay), acceptance words (e.g. OK), negation words, emoticons, opinion words (from ), words of politeness, exclamation marks, questions, and finally whether the EDU's speaker has talked previously in the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying dialogue acts",
"sec_num": "4.1"
},
{
"text": "The last feature gives the EDU speaker lemma. In addition, 3 unigram and bigram booleans indicate whether the current EDU contains the most frequent tokens, couple of tokens and syntactic patterns in our corpus. Finally, we use 2 composed bigram features that encode whether the EDU contains an acceptance or refusal word, given that the previous EDU is a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying dialogue acts",
"sec_num": "4.1"
},
{
"text": "To assign sequential tags of dialogue acts within a negotiation dialogue, we use the CRF++ tool (crfpp.googlecode.com). Our data consists of 2741 EDUs in 511 dialogues. Each EDU is associated with a dialogue act resulting in 410 Offer, 197 Counteroffer, 179 Accept, 398 Refusal and 1557 Other. We use 10-fold cross-validation to evaluate our model, computing precision, recall and Fscore for each class and global accuracy from the total number of true positives, false positives, false negatives and true negatives obtained by summing over all fold decisions. The results (in percent) are given in Table 2 (MaF is the average of F-scores of all the classes). Our model significantly outperforms the frequency-based baseline (MaF=14.5; Accuracy=56.8), with the best F-score achieved for Other. The least good results are for the two least frequent classes in our data. In addition to the frequency problem, the lower score for Counteroffer is mainly due to the model confusing it with Offer. Errors in the Accept class were often due to misspelling or to chat style conversation; e.g., kk, yup. ",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 606,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Identifying dialogue acts",
"sec_num": "4.1"
},
{
"text": "Since the resource vocabulary in The Settlers of Catan is a closed set composed of words denoting specific resources (e.g., clay, wood) and their synonyms (brick), we use a simple rule to detect them: a noun phrase (NP) is a resource text span if and only if it contains a lemma from our resource lexicon. A closed set resource vocabulary is common to many different types of negotiation dialogues. We used the Stanford parser (Klein and Manning, 2003) to obtain the NPs: there are 4361 NPs, where (by the gold standard annotations) 21% are resources and 79% are not. We obtain an F-score of 96.9% and accuracy of 97.9%, clearly beating both the frequency and random baselines for this task.",
"cite_spans": [
{
"start": 427,
"end": 452,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding resource text spans",
"sec_num": "4.2"
},
{
"text": "Recall that each resource within an EDU can be the value of four types of attributes: Givable, Receivable, Not Givable or Not Receivable (cf. Section 3.2). We predict these attributes using CRFs with the following features. 8 features are used as unigram at the current and the previous EDU level: the speaker, the EDU's subject, the dialogue act, and (if present) the lemma of a bargaining verb, and 4 boolean features indicate if the EDU contains an opinion word, a reference to another speaker, if the resource comes after a \"for\" and if it contains a refusal word. These features also serve as bigrams at the current EDU level. Additionally, we have a set of unigram and bigram boolean features that indicate if the current EDU contains the most frequent verbs in the corpus. And finally, we use a feature that encodes the combination subject/bargaining verb in the current EDU. We used CRF++ to implement our classifier. Our corpus data consists of 1077 Resources, split into 510 Receivable, 432 Givable, 116 Not Givable and 19 Not Receivable. We use again 10-fold crossvalidation to evaluate our model and compute the results by summing over all fold decisions. We present them (in percent) in Table 3 . They beat the frequency-based baseline (MaF=16.1; Accu-racy=47.4), although performance on the Not Receivable class is poor probably due to its low frequency in the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 1200,
"end": 1207,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Recognizing the type of resources",
"sec_num": "4.3"
},
{
"text": "Ambiguities make this task challenging. For instance, anyone wheat for clay? can mean that the speaker wants to receive wheat and give clay or the opposite, and resolving which meaning is intended involves reasoning not only with the previous and/or the following EDU, but also sometimes EDUs with long distance attachments, which are not supported by our classifier and require a full discourse parser. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing the type of resources",
"sec_num": "4.3"
},
{
"text": "We aim to capture the evolution of commitments to certain preferences as the dialogue proceeds so as to predict the agents' bargaining behavior. In other words, we wish to predict which of the 61 possible trade actions is executed at the end of each dialogue. The possible trades vary over which partner the player whose turn it is trades with (3 options in a 4 player game), the resources exchanged (assuming each partner gives one type of resource and receives another type yields 5\u00d74 = 20 possibilities), or there is no trade; i.e., (3 \u00d7 20) + 1 = 61 possible actions in the hypothesis space (we predict the types of resources that are exchanged, but not their quantity). We predict the executed action by identifying the equilibrium trade entailed by the model of the players' preferences, which in turn we construct dynamically from the output of the classifiers in Section 4. We use the attributes of resources in the EDUs (Givable, etc.) to identify the preference that a speaker conveys in the EDU, and we use the dialogue acts (Offer, Accept, etc.) to update a model of the preferences expressed so far in the dialogue with this new preference (see Section 5.2). Our model of preferences consists of a set of partial CP-nets, one for each player (see Section 5.1 for details). The resulting CP-nets are then used to infer the executed trading action (if any) automatically, via wellunderstood principles from game theory for identifying rational behavior (Bonzon, 2007) .",
"cite_spans": [
{
"start": 1464,
"end": 1478,
"text": "(Bonzon, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Players' Strategic Actions",
"sec_num": "5"
},
{
"text": "Following Cadilhac et al. (2011), we use CP-nets (Boutilier et al., 2004) to model preferences and their dependencies. CP-nets are compatible with the kind of partial information about preferences that utterances reveal, and inference with CP-nets is com-putationally efficient.",
"cite_spans": [
{
"start": 49,
"end": 73,
"text": "(Boutilier et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "Just as Bayesian nets are a graphical model that exploits probabilistic conditional independence to provide a compact representation of a joint probability distribution (Pearl, 1988) , CP-nets are a graphical model that exploits conditional preferential independence to provide a compact representation of the preference order over all outcomes. The CPnet structures the decision maker's preferences under a ceteris paribus assumption: outcomes are compared, other things being equal.",
"cite_spans": [
{
"start": 169,
"end": 182,
"text": "(Pearl, 1988)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "More formally, let V be a finite set of variables whose combination of values determine all outcomes O. Then a preference relation over O is a reflexive and transitive binary relation with strict preference defined as: o o and o o. Indifference, written o \u223c o , means o o and o o. Definition 1 defines conditional preference independence and Definition 2 defines CP-nets: the graphical component G of a CP-net specifies for each variable X \u2208 V its parent variables P a(X) that affect the agent's preferences over the values of X, such that X is conditionally preferentially independent of V \\ ({X} \u222a P a(X)) given P a(X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "Definition 1 Let V be a set of variables, each vari- able X i with a domain D(X i ). Let {X, Y, Z} be a partition of V . X is conditionally preferentially independent of Y given Z if and only if \u2200z \u2208 D(Z), \u2200x 1 , x 2 \u2208 D(X) and \u2200y 1 , y 2 \u2208 D(Y ), x 1 y 1 z x 2 y 1 z iff x 1 y 2 z x 2 y 2 z. Definition 2 N V = G, T is a CP-net on variables V , where G is a directed graph over V , and T is a set of Conditional Preference Tables (CPTs). That is, T = {CPT(X j ): X j \u2208 V }, where CPT(X j ) specifies for each combination p of values of the par- ent variables P a(X j ) either p : x j x j , p : x j x j or p : x j \u223c x j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "where the\u00af\u00afsymbol sets the variable to false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "We discuss below how a CP-net predicts rational action, but first we describe how CP-nets are constructed from the dialogues. In the Settlers corpus, preferences involve a quadruplet (o, a, <r,q>) where: o is the preference owner, a is the addressee, r is the resource and q is its quantity. So each variable in the CP-nets we construct is such a quadruplet, and for each variable the possibles values are Givable (Giv), Not Givable (Giv), Receiv-able (Rcv) and Not Receivable (Rcv).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "For example, the utterance Anyone want to give me a wheat for a clay? expresses two preferences: one for receiving wheat, represented by the variable P w = (A,All,<wheat,1>); and given this preference, another for giving clay, represented by P c = (A,All,<clay,1>) (where A is the name of the speaker). The corresponding CP-Net is Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 339,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "P w P c CPT(P w ) = Rcv Rcv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "CPT(P c ) = Rcv P w : Giv Giv Figure 1 : An example CP-net",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CP-Nets",
"sec_num": "5.1"
},
{
"text": "As stated above, we first automatically acquire a CPnet from each EDU by using the EDU's dialogue act and the attributes (Givable, etc.) of its resources. We then apply the rules presented in (Cadilhac et al., 2011) to dynamically construct a preference model of the dialogue overall: this uses an equivalence between their coherence relations and our dialogue acts. Our CP-nets reasoning model handles uncertain information and noise because it use as input only the outputs of the statistical models described in Section 4, and these prior models handle uncertain information and noise. The symbolic rules for constructing CP-nets have complete coverage over any possible combination of classes that are output by the statistical models, and so they are robust. We give our rules below where \u03c0 i stands for EDU ID i.",
"cite_spans": [
{
"start": 192,
"end": 215,
"text": "(Cadilhac et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Offers. Because an Offer may specify or refine an existing preference or offer, we must model how the preferences expressed in an EDU that's an Offer updates the prior declared preferences. So, while our annotations treat Offer as a property of EDUs, we treat them here as binary relations: Offer(\u03c0 1 , \u03c0 2 ), where the second term, \u03c0 2 , is the actual EDU whose dialogue act is Offer and \u03c0 1 is the set of EDUs occurring between \u03c0 2 and the last EDU uttered by the same speaker. Offers then have a similar effect on the CP-net as the coherence relation Elaboration presented in (Cadilhac et al., 2011) . That is, to automatically update the CP-net constructed so far with a current EDU that's an Offer, the two step rule for Offer(\u03c0 1 , \u03c0 2 ) is:",
"cite_spans": [
{
"start": 579,
"end": 602,
"text": "(Cadilhac et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "1. to update the speaker's CP-net according to the preferences expressed in \u03c0 1 , and 2. if \u03c0 2 expresses preferences, to enrich the CPnet with these new preferences so that each variable in \u03c0 2 depends on each variable in \u03c0 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Counteroffers. They specify or modify the terms of a previous Offer or Counteroffer. Their purpose is to give new information to refine the negotiation. Like Offers they must also receive a contextually dependent interpretation. The rule is quite similar to that for Offer; however, Counteroffer can modify or correct elements in a previously introduced offer. So for Counteroffer(\u03c0 1 , \u03c0 2 ), the rule is :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "1. to partially update the speaker's CP-net according to the preferences expressed in \u03c0 1 which do not have the same resource type (Givable, Receivable) than the ones in \u03c0 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "2. same as step 2 Offer rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Accepts and Refusals. As they are answers to Offers and Counteroffers, they behave like question answer pairs (QAPs) presented in (Cadilhac et al., 2011 ). Because we are not doing full discourse parsing, we once again approximate its effects by making Accepts and Refusals respond to the set of EDUs between the current EDU and the speaker's last turn.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Cadilhac et al., 2011",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Accepts are positive responses to Offers or Counteroffers and are de facto similar to QAP(\u03c0 1 , \u03c0 2 ) where \u03c0 2 is Yes. Thus, the rule is, as for Offer, to update and enrich the CP-net.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Refusals are instead negative responses and behave like QAP(\u03c0 1 , \u03c0 2 ) where \u03c0 2 is No. For Refusal(\u03c0 1 , \u03c0 2 ), there is no update of the preferences expressed in \u03c0 1 . Instead, we enrich the CP-net with the Non Givable and Non Receivable information obtained from the negation of the preferences expressed in the previous Offer or Counteroffer. We then enrich the CP-net based on any new preferences expressed in \u03c0 2 . If there is a conflict between the value of a variable to be updated and the current value in the CP-net, we apply the Correction rule: all occurrences of the old value are replaced by the new value in \u03c0 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "Other. This category pertains to content that does not directly relate to trading in the game, and so we choose to ignore resources expressed in the EDUs with this dialogue act.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "At the end of the negotiation dialogue, to predict exactly what trade is executed (if any), the method checks if there are complete and reciprocal preferences expressed in the CP-nets that respectively represent the declared preferences of two agents A and B. This is done in two steps. First, we use the logic of CP-nets to determine each agent's best outcome bestO A and bestO B from their respective CP-nets (we'll discuss how shortly). Secondly, we compare these best outcomes: if they correspond to the same trade, we predict that this trade was executed; if not, we predict no trade is executed. Specifically, bestO A (resp. bestO B ) corresponds to a preference for receiving a resource r 1 from an agent B (or from all the agents indifferently) and for giving a resource r 2 to this (or these) agent(s). We predict that A gives B r 2 and B gives A r 1 if and only if: bestO A = Rcv(A, B, r 1 ) \u2227 Giv(A, B, r 2 ) and bestO B = Rcv(B, A, r 2 ) \u2227 Giv (B, A, r 1 ) .",
"cite_spans": [],
"ref_spans": [
{
"start": 956,
"end": 968,
"text": "(B, A, r 1 )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
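A rough sketch of this two-step comparison, assuming the best outcomes have already been computed from the two CP-nets. The tuple encoding and the function `predict_trade` are hypothetical simplifications: partner labels are fixed to "A" and "B", and the "all agents indifferently" case is omitted.

```python
# Illustrative sketch (not the paper's code): predict the executed trade
# by checking that the two agents' best outcomes are reciprocal.

def predict_trade(best_a, best_b):
    """best_a = ((partner, resource) to receive, (partner, resource) to give)
    for agent A; likewise best_b for agent B. Returns the trade if the best
    outcomes mirror each other, else None (no trade predicted)."""
    (a_rcv_from, a_rcv_res), (a_giv_to, a_giv_res) = best_a
    (b_rcv_from, b_rcv_res), (b_giv_to, b_giv_res) = best_b
    # Reciprocity: A receives from B exactly what B gives to A, and vice versa.
    if (a_rcv_from == "B" and a_giv_to == "B"
            and b_rcv_from == "A" and b_giv_to == "A"
            and a_rcv_res == b_giv_res and a_giv_res == b_rcv_res):
        return {"A_gives": a_giv_res, "B_gives": b_giv_res}
    return None

# bestO_A: receive ore from B and give wheat to B; bestO_B mirrors it.
print(predict_trade((("B", "ore"), ("B", "wheat")),
                    (("A", "wheat"), ("A", "ore"))))
# == {'A_gives': 'wheat', 'B_gives': 'ore'}
```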
{
"text": "The first step-computing each agent's best outcome from his CP-net-can be found in linear time using the forward sweep algorithm (Boutilier et al., 2004) : sweep through the CP-net's graph from top to bottom, instantiating each variable with its preferred value, given the values that are (already) assigned to its parents. This algorithm is sound with respect to the semantics of CP-nets.",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "(Boutilier et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
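The forward sweep can be sketched as follows; the data layout (`parents`, and `cpt` keyed by tuples of parent values) is an assumption, and the toy CP-net below is not taken from the paper.

```python
# Illustrative forward sweep over an acyclic CP-net (Boutilier et al., 2004):
# visit variables in topological order and instantiate each one with its
# preferred value given the values already chosen for its parents.

def forward_sweep(variables, parents, cpt):
    """variables: list in topological order (parents before children).
    parents[v]: list of v's parent variables.
    cpt[v]: maps a tuple of parent values to v's preferred value."""
    outcome = {}
    for v in variables:  # top-to-bottom sweep
        context = tuple(outcome[p] for p in parents[v])
        outcome[v] = cpt[v][context]
    return outcome

# Toy CP-net: prefer to give ore unconditionally; prefer to receive wheat
# only if ore is given.
variables = ["ore", "wheat"]
parents = {"ore": [], "wheat": ["ore"]}
cpt = {"ore": {(): "Giv"},
       "wheat": {("Giv",): "Rcv", ("NotGiv",): "NotRcv"}}
print(forward_sweep(variables, parents, cpt))  # {'ore': 'Giv', 'wheat': 'Rcv'}
```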
{
"text": "Example. We apply this method for constructing CP-nets and determining the executed trade to the negotiation dialogue presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "\u03c0 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "The EDU is an Offer, so Rainbow's CP-net is updated according to \u03c0 1 's content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(R,All,<clay,?>) = Rcv Rcv \u03c0 2 It's a Refusal, so we update inca's CP-net with the negation of the preferences expressed in Rainbow's offer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(I,R,<clay,?>) = Giv Giv \u03c0 3 Idem for ariachiba.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(A,R,<clay,?>) = Giv Giv \u03c0 4 Idem for Kittles where the preferences expressed in this EDU are redundant with the negation of the preferences in Rainbow's offer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(K,R,<clay,?>) = Giv Giv \u03c0 5 It's an Offer, so Rainbow's CP-net is first updated according to previous EDUs (\u03c0 2 to \u03c0 4 until his last speaking), then according to the content of \u03c0 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(R,All,<clay,?>) = Rcv Rcv The introduction of the preference to receive ore conflicts with the prior one for receiving clay. So the method adds to the associated CPT the label \"inactive\" to indicate that this is older and should be ignored if the preference about ore is satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "\u03c0 6 The EDU is an Accept, so Kittles's CP-net is updated according to previous EDUs (only \u03c0 5 ). 1 CPT(K,R,<ore,?>) = Giv(K,R,<clay,?>): Giv Giv \u03c0 7 The EDU is a Counteroffer. Since she is the last speaker, her CP-net gets updated only according to the content of the current EDU, to obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(K,R,<ore,?>) = Giv(K,R,<clay,?>): Giv Giv CPT(K,R,<wheat,?>) = Giv(K,R,<clay,?>) \u2227 Giv(K,R, <ore,?>) : Rcv Rcv \u03c0 8 The EDU is an Accept, so Rainbow's CP-net is updated according to previous EDUs (\u03c0 6 and \u03c0 7 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "CPT(R,K,<ore,?>) = Rcv(R,I,<clay,?>) \u2227 Rcv(R,A, <clay,?>) \u2227 Rcv(R,K,<clay,?>) : Rcv Rcv CPT(R,K,<wheat,?>) = Rcv(R,I,<clay,?>) \u2227 Rcv(R,A, <clay,?>) \u2227 Rcv(R,K,<clay,?>) \u2227 Rcv(R,K,<ore,?>) : Giv Giv \u03c0 9 It's an Accept with nothing new to update. At the end of the dialogue, these agents' CP-nets (correctly) predict that Kittles gave ore to Rainbow in exchange for wheat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling players' preferences",
"sec_num": "5.2"
},
{
"text": "We compare our model against four baselines. Since none of these baselines support reasoning about equilibrium moves, they all rely on the presence of an Accept act to predict there was a trade, and its absence to predict there wasn't. The baselines differ, however, in how they identify the trading partners and resources in an executed trade. The first baseline predicts a trade according to the first Offer and the last person to Accept, and if the Offer doesn't specify one of the resources then it is chosen randomly (similar random choices complete all partial predictions in all the models we consider here): e.g., for Table 1 this would predict that Kittles gave clay to Rainbow (which is incorrect) in exchange for something that's chosen randomly (which will probably be incorrect). The second baseline uses the last Offer and the last person to Accept: e.g., for Table 1 this predicts that Kittles gave ore to Rainbow (correct) for something random (probably incorrect). The third baseline uses the last Offer or Counteroffer, whichever is latest, and the last person to Accept: e.g., for Table 1 this correctly predicts that Kittles gave ore to Rainbow in exchange for wheat. And the fourth baseline, uses default unification between the prior Offers or Counteroffers and the current one to resolve any of the current offer's elided parts and to replace specific values in prior offers with conflicting specific values in the current offer (Ehlen and Johnston, 2013) . One then takes the executed trade to be the result of this unification process at the point where the last Accept occurs. This makes the same predictions as the third baseline for Table 1 , but outperforms it in the corpus example (1) by predicting the correct and complete trade (i.e., Rainbow gave Kittles sheep for wheat, rather than for something random):",
"cite_spans": [
{
"start": 1452,
"end": 1478,
"text": "(Ehlen and Johnston, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 874,
"end": 881,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1100,
"end": 1107,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1661,
"end": 1668,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
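A loose sketch of the fourth baseline's default unification, not Ehlen and Johnston's actual implementation: the `receive`/`give` slot names and the dictionary encoding of offers are assumptions.

```python
# Rough sketch of default unification over a sequence of Offers and
# Counteroffers: each new offer overrides conflicting slots of the running
# offer and fills elided ones; the running offer at the last Accept is
# taken as the executed trade.

def default_unify(running, new):
    """Overlay the specified (non-None) slots of the new offer on the
    running offer; unspecified slots keep their previous value."""
    return {slot: (new[slot] if new.get(slot) is not None
                   else running.get(slot))
            for slot in set(running) | set(new)}

# Example (1): Rainbow asks for wheat, later offers sheep; Kittles accepts.
running = {}
for offer in [{"receive": "wheat", "give": None},   # "i need ... wheat"
              {"receive": None, "give": "sheep"}]:  # "i cn giv sheep"
    running = default_unify(running, offer)
print(running)  # == {'receive': 'wheat', 'give': 'sheep'}
```

This is why the fourth baseline recovers the complete trade in example (1) where the simpler baselines must resort to a random resource.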
{
"text": "(1) Rainbow: i need clay ore or wheat Kittles: i got wheat Rainbow: i cn giv sheep Kittles: ok",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "We performed the evaluation on the data presented in Sections 3 and 4: 254 dialogues in total since we ignore dialogues that contain only Others. 90 of these dialogues end with a trade being executed and 2 of them end with 2 trades. A random baseline would give 1.6% accuracy (given the 61 possible trading actions) and a frequency baseline (always choose no trade) gives 64.1% accuracy. Table 4 presents the accuracy figures for all the models when calculated from the gold standard labels rather than the classifiers' predicted labels from Section 4, so that we can compare the models in isolation of the classifiers' errors. McNemar's test shows that our model significantly outperforms all the baselines (p < 0.05). A predicted trade counts as correct only if it specifies the right participants and the correct type of resources offered and received (we ignore their quantity). True Positives (TP) are thus examples where the model correctly predicts not only that a trade happened, but also the correct partners and resources; Wrong Positives (WP), on the other hand, constitute a correct prediction that there was a trade but errors on the partners and/or resources involved (so WPs undermine accuracy). True Negatives (TN) are examples where the model correctly predicts there was no trade (so TPs and TNs contribute to accuracy). False Positives (FP) and False Negatives (FN) are respectively incorrect predictions that there was a trade, or that there was no trade.",
"cite_spans": [
{
"start": 1226,
"end": 1230,
"text": "(TN)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
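The scoring scheme just described can be sketched as follows; the tuple encoding of trades and the `score` function are hypothetical, and the toy gold/predicted trades below are invented for illustration.

```python
# Sketch of the evaluation scheme: TPs and TNs count toward accuracy,
# while WPs (trade predicted, wrong partners/resources), FPs and FNs
# do not. A trade is a tuple of partners and resources; None = no trade.

def score(predictions, gold):
    counts = {"TP": 0, "WP": 0, "TN": 0, "FP": 0, "FN": 0}
    for pred, ref in zip(predictions, gold):
        if pred is None and ref is None:
            counts["TN"] += 1          # correctly predicted no trade
        elif pred is None:
            counts["FN"] += 1          # missed a trade
        elif ref is None:
            counts["FP"] += 1          # invented a trade
        elif pred == ref:
            counts["TP"] += 1          # trade, partners and resources right
        else:
            counts["WP"] += 1          # trade right, details wrong
    accuracy = (counts["TP"] + counts["TN"]) / len(gold)
    return counts, accuracy

preds = [("K", "ore", "R", "wheat"), None, ("K", "clay", "R", "wheat"), None]
refs  = [("K", "ore", "R", "wheat"), None, ("K", "ore", "R", "wheat"),
         ("J", "clay", "Jon", "wheat")]
counts, acc = score(preds, refs)
print(counts, acc)  # TP=1, TN=1, WP=1, FN=1 -> accuracy 0.5
```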
{
"text": "While Table 4 does not reflect this, the first three baselines tend to predict incomplete information about the trade even when what they do predict is correct: that is, they predict the correct addressee and the owner but resort to random choice for a resource that's missing from the Offer or Counteroffer that predicts which trade occurred. For the first baseline 34 examples are like this; for the second and third baselines it's 32. In contrast, this problem occurs only once with the fourth baseline, and all the trades predicted by our method are complete, making random choice unnecessary. Moreover, the first three baselines often make incorrect predictions about the addressee or resources exchanged because in contrast to our model and the fourth baseline, they don't track how potential trades evolve through a sequence of offers and counteroffers.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "Even though the fourth baseline, which uses default unification to track the content of the current offer, is smart and gives good results, it has statistically significant lower accuracy than our model. One major problem with the fourth baseline is that, in contrast to our model, it does not track each player's attitude towards the current offer. Instead, like all our baselines, it relies on the presence of an Accept act to predict that there's a trade. 2 But several corpus examples are like (2), in which a trade is executed but there's no Accept act, thus yielding a False Negative (FN) for all four baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "(2) Joel:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "anyone have sheep or wheat Cardlinger: neither :( Joel:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "will give clay or ore Euan:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "not just now Jon:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "got a wheat for a clay (Joel gives clay to Jon and receives wheat)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "So overall, our analysis shows that using CP-nets significantly outperforms all baselines that don't model how preferences evolve in the dialogue, and error analysis yields evidence that our model outperforms the fourth baseline because our model supports reasoning about player preferences, rational behavior and equilibrium strategies. Table 5 presents the results for the end to end evaluation, where trade predictions are made from the classifiers' output from Section 4 rather than the gold standard labels. As expected, performance decreases due to the classifiers' errors, mainly on the type of resources (Givable, etc.). But our method still significantly outperforms all the baselines with an accuracy of 73.4% when the baselines obtain values between 60.9% and 68.4%. (Stolcke et al., 2000; Fern\u00e1ndez et al., 2005; Keizer et al., 2002) . But live chats introduce specific complications (Kim et al., 2012) : ill-formed data, abbreviations and acronyms, emotional indicators and entanglement (especially for multi-party chat). Among related work in this emerging field, Joty et al. (2011) use unsupervised learning to model dialogue acts in Twitter, Ivanovic (2008) and Kim et al. (2010) analyze one-to-one online chat in a customer service domain, and Wu et al. (2002) and Kim et al. (2012) predict dialogue acts in a multi-party setting. We used a similar classifier to predict dialogue acts as the one reported in (Kim et al., 2012) and evaluation yields similar results. This paper proposes an approach to dialogue act identification in online chat that aims to predict strategic actions like bargaining. Compared to (Sidner, 1994) and DAMSL (Core and Allen, 1997), our domain level annotation is much more detailed: we not only predict moves like Accept but also features like the Givable and Receivable resources. Our general speech act typology of EDUs lacks intentional descriptions of speech acts, however. 
This reflects a conscious choice to specify the semantics of each act purely by the public commitments made to offer or to receive goods.",
"cite_spans": [
{
"start": 778,
"end": 800,
"text": "(Stolcke et al., 2000;",
"ref_id": "BIBREF29"
},
{
"start": 801,
"end": 824,
"text": "Fern\u00e1ndez et al., 2005;",
"ref_id": "BIBREF14"
},
{
"start": 825,
"end": 845,
"text": "Keizer et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 896,
"end": 914,
"text": "(Kim et al., 2012)",
"ref_id": "BIBREF21"
},
{
"start": 1158,
"end": 1173,
"text": "Ivanovic (2008)",
"ref_id": "BIBREF16"
},
{
"start": 1178,
"end": 1195,
"text": "Kim et al. (2010)",
"ref_id": "BIBREF20"
},
{
"start": 1261,
"end": 1277,
"text": "Wu et al. (2002)",
"ref_id": "BIBREF30"
},
{
"start": 1282,
"end": 1299,
"text": "Kim et al. (2012)",
"ref_id": "BIBREF21"
},
{
"start": 1425,
"end": 1443,
"text": "(Kim et al., 2012)",
"ref_id": "BIBREF21"
},
{
"start": 1629,
"end": 1643,
"text": "(Sidner, 1994)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "5.3"
},
{
"text": "While preference extraction from non-linguistic actions is well studied (Chen and Pu, 2004; F\u00fcrnkranz and H\u00fcllermeier, 2011) , their extraction from spontaneous conversation has received little attention. To our knowledge, the only existing work is (Asher et al., 2010; Cadilhac et al., 2011; which we build on. Cadilhac et al. (2011) compute CP-nets from coherence relations, found in the annotation of the Verbmobil corpus (Baldridge and Lascarides, 2005) . Here we adapt their algorithm from coherence relations to unary dialogue acts. Further, while they assume that preferences are given, here we apply versions of the NLP techniques from to estimate the preferences of EDUs automatically. And we go further than any of these works by using the elicited pref-erences to infer the domain-level actions that result from information exchanged in the conversation.",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "(Chen and Pu, 2004;",
"ref_id": "BIBREF10"
},
{
"start": 92,
"end": 124,
"text": "F\u00fcrnkranz and H\u00fcllermeier, 2011)",
"ref_id": null
},
{
"start": 249,
"end": 269,
"text": "(Asher et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 270,
"end": 292,
"text": "Cadilhac et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 312,
"end": 334,
"text": "Cadilhac et al. (2011)",
"ref_id": "BIBREF8"
},
{
"start": 425,
"end": 457,
"text": "(Baldridge and Lascarides, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preference extraction",
"sec_num": "6.2"
},
{
"text": "In this respect, our work relates to models for grounding language, where semantic parsing techniques are used to automatically map linguistic instructions to domain-level actions (Artzi and Zettlemoyer, 2013; Kim and Mooney, 2013) . Our domain of application is more challenging, however: to our knowledge, this is the first attempt to map non-cooperative dialogues into predictions about domain-level actions. We can tackle these strategic scenarios because we exploit a logic of preferences as part of our model, yielding inferences about rational action even when agents' preferences conflict.",
"cite_spans": [
{
"start": 180,
"end": 209,
"text": "(Artzi and Zettlemoyer, 2013;",
"ref_id": "BIBREF2"
},
{
"start": 210,
"end": 231,
"text": "Kim and Mooney, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preference extraction",
"sec_num": "6.2"
},
{
"text": "Compared to previous work, our task is new. Our aim is not to predict what dialogue act to perform next, but what non verbal action should be performed, mapping dialogue acts to non verbal actions. The difference between our work and other work on grounding is that we are grounding noncooperative dialogue rather than instructions in a cooperative setting. There is no prior work of which we're aware that maps a non-cooperative dialogue into a prediction about which joint non-verbal action the agents will do as a result of what they've learned about their opponent through conversation. Furthermore, both the CP-net and the fourth baseline, whose accuracy is quite high (making it a hard baseline to beat), use the dialogue history as they incrementally build up the preference model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference extraction",
"sec_num": "6.2"
},
{
"text": "Modeling player behavior in real-time strategy games is a growing research area in AI. These models can be used to identify common strategic states, discover new strategies as they emerge or predict an opponents future actions and so help players to optimize their choices. For example, Schadd et al. (2007) develop a hierarchical opponent model in the game Spring, Dereszynski et al. (2011) reason about strategic behavior in StarCraft using hidden Markov models and Amato and Shani (2010) use reinforcement learning to acquire a policy for switching among high-level strategies in Civilization IV.",
"cite_spans": [
{
"start": 287,
"end": 307,
"text": "Schadd et al. (2007)",
"ref_id": "BIBREF26"
},
{
"start": 366,
"end": 391,
"text": "Dereszynski et al. (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting strategic actions",
"sec_num": "6.3"
},
{
"text": "In comparison, we propose a novel approach for predicting strategic action based on the symbolically formalized preferences that each agent commits to in spontaneous conversation. Our approach thus deals with imperfect information by exploiting the agents' declared preferences. By predicting what bargain (if any) will take place, we are able to verify the correctness of our preference descriptions. Our task is a subtask of learning a strategy over an entire game space, but our approach yields good predictive results on relatively little data-an advantage of exploiting CP-nets and the symbolic rules that guide their evolution from observable evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting strategic actions",
"sec_num": "6.3"
},
{
"text": "We have proposed a linguistic approach to strategy prediction in spontaneous conversation, exploiting dialogue acts to build a partial model of the agents' declared preferences. Our method tracks how preferences evolve during the dialogue, which we use to infer their bargaining behavior, i.e. what resources, if any, are exchanged, and by whom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We based our study on a corpus collected using an online version of The Settlers of Catan. Negotiations in this game mirror complex real life negotiations and provide a fruitful arena to study strategic conversation. Evaluation shows that our approach provides more accurate and complete information about trades than baselines that don't track how an offer evolves through the dialogue, and we also argued that game-theoretic reasoning about rational behavior has advantages over relying on the presence or absence of an Accept act to make predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our approach, however, does not exploit discourse structure, which is needed to properly handle long distance dependencies of offers on prior material. We will exploit this in future work to improve our results. We also plan to investigate other aspects of strategic reasoning on a larger dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have proposed a method that relies on a typology of dialogue acts that is domain sensitive. However, in other work we have shown how to adapt our algorithms to several domains . In future work, we plan to link our preference extraction algorithms to an automatically acquired discourse structure for a given text. This will provide a domain independent means for extracting preferences from dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Due to lack of space, in the following CP-nets, we do not copy the inactive CPTs and CPTs about Not Givable or Not Receivable resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tried a baseline that doesn't rely on the presence of an Accept act, but rather predicts a trade whenever default unification yields a complete offer. It performed worse than the fourth baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by ERC grant 269427 STAC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Developing a corpus of strategic conversation in the settlers of catan",
"authors": [
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Ana\u00efs",
"middle": [],
"last": "Cadilhac",
"suffix": ""
},
{
"first": "C\u00e9dric",
"middle": [],
"last": "D\u00e9gremont",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Guhe",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "Laure",
"middle": [],
"last": "Vieu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 1st Workshop on Games and NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stergos Afantenos, Nicholas Asher, Farah Benamara, Ana\u00efs Cadilhac, C\u00e9dric D\u00e9gremont, Pascal Denis, Markus Guhe, Simon Keizer, Alex Lascarides, Oliver Lemon, Philippe Muller, Soumya Paul, Verena Rieser, and Laure Vieu. 2012. Developing a corpus of strate- gic conversation in the settlers of catan. In Pro- ceedings of the 1st Workshop on Games and NLP (GAMNLP-12).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Highlevel reinforcement learning in strategy games",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Amato",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Shani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AA-MAS'10)",
"volume": "",
"issue": "",
"pages": "75--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Amato and Guy Shani. 2010. High- level reinforcement learning in strategy games. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AA- MAS'10), pages 75-82.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Weakly supervised learning of semantic parsers for mapping instructions to actions",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "49--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping in- structions to actions. Transactions of the Association for Computational Linguistics, 1:49-62.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting and modelling preferences from dialogue",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Elise",
"middle": [],
"last": "Bonzon",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2010,
"venue": "IPMU",
"volume": "",
"issue": "",
"pages": "542--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher, Elise Bonzon, and Alex Lascarides. 2010. Extracting and modelling preferences from dia- logue. In IPMU, pages 542-553.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annotating discourse structures for robust semantic interpretation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 6th IWCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Alex Lascarides. 2005. Annotating discourse structures for robust semantic interpretation. In Proceedings of the 6th IWCS.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards context-based subjectivity analysis",
"authors": [
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Baptiste",
"middle": [],
"last": "Chardon",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Mathieu",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1180--1188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farah Benamara, Baptiste Chardon, Yannick Mathieu, and Vladimir Popescu. 2011. Towards context-based subjectivity analysis. In Proceedings of 5th Interna- tional Joint Conference on Natural Language Process- ing, pages 1180-1188, Chiang Mai, Thailand.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mod\u00e9lisation des interactions entre agents rationnels : les jeux bool\u00e9ens",
"authors": [
{
"first": "Elise",
"middle": [],
"last": "Bonzon",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elise Bonzon. 2007. Mod\u00e9lisation des interactions en- tre agents rationnels : les jeux bool\u00e9ens. PhD thesis, Universit\u00e9 Paul Sabatier, Toulouse.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cp-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements",
"authors": [
{
"first": "Craig",
"middle": [],
"last": "Boutilier",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Brafman",
"suffix": ""
},
{
"first": "Carmel",
"middle": [],
"last": "Domshlak",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Holger",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hoos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "21",
"issue": "",
"pages": "135--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Craig Boutilier, Craig Brafman, Carmel Domshlak, Hol- ger H. Hoos, and David Poole. 2004. Cp-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. Journal of Artificial In- telligence Research, 21:135-191.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Commitments to preferences in dialogue",
"authors": [
{
"first": "Ana\u00efs",
"middle": [],
"last": "Cadilhac",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of SIGDIAL",
"volume": "",
"issue": "",
"pages": "204--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana\u00efs Cadilhac, Nicholas Asher, Farah Benamara, and Alex Lascarides. 2011. Commitments to preferences in dialogue. In Proceedings of SIGDIAL, pages 204- 215. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Preference extraction from negotiation dialogues",
"authors": [
{
"first": "Ana\u00efs",
"middle": [],
"last": "Cadilhac",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Mohamadou",
"middle": [],
"last": "Seck",
"suffix": ""
}
],
"year": 2012,
"venue": "European Conference on Artificial Intelligence (ECAI)",
"volume": "",
"issue": "",
"pages": "211--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana\u00efs Cadilhac, Nicholas Asher, Farah Benamara, Vladimir Popescu, and Mohamadou Seck. 2012. Pref- erence extraction from negotiation dialogues. In Eu- ropean Conference on Artificial Intelligence (ECAI), pages 211-216. IOS Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Survey of preference elicitation methods",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pearl",
"middle": [],
"last": "Pu",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Chen and Pearl Pu. 2004. Survey of preference elici- tation methods. Technical report.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Coding dialogs with the DAMSL annotation scheme",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "James",
"middle": [
"F"
],
"last": "Core",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1997,
"venue": "Working Notes of the AAAI Fall Symposium on Communicative Action in Humans and Machines",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark G. Core and James F. Allen. 1997. Coding di- alogs with the DAMSL annotation scheme. In Work- ing Notes of the AAAI Fall Symposium on Communica- tive Action in Humans and Machines.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning probabilistic behavior models in real-time strategy games",
"authors": [
{
"first": "Ethan",
"middle": [
"W"
],
"last": "Dereszynski",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Hostetler",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
},
{
"first": "Thao-Trang",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Udarbe",
"suffix": ""
}
],
"year": 2011,
"venue": "AIIDE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan W. Dereszynski, Jesse Hostetler, Alan Fern, Thomas G. Dietterich, Thao-Trang Hoang, and Mark Udarbe. 2011. Learning probabilistic behavior mod- els in real-time strategy games. In AIIDE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A multimodal dialogue interface for mobile local search",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ehlen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Johnston",
"suffix": ""
}
],
"year": 2013,
"venue": "IUI Companion",
"volume": "",
"issue": "",
"pages": "63--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ehlen and Michael Johnston. 2013. A multi- modal dialogue interface for mobile local search. In IUI Companion, pages 63-64.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using machine learning for non-sentential utterance classification",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raquel Fern\u00e1ndez, Jonathan Ginzburg, and Shalom Lap- pin. 2005. Using machine learning for non-sentential utterance classification. In Proceedings of the 6th SIG- dial Workshop on Discourse and Dialogue, pages 77- 86.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "2011. Preference Learning",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes F\u00fcrnkranz and Eyke H\u00fcllermeier, editors. 2011. Preference Learning. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic instant messaging dialogue using statistical models and dialogue acts",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Ivanovic",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Ivanovic. 2008. Automatic instant messaging dialogue using statistical models and dialogue acts. In Masters thesis, The University of Melbourne.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised modeling of dialog acts in asynchronous conversations",
"authors": [
{
"first": "Shafiq",
"middle": [
"R"
],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1807--1813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq R. Joty, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised modeling of dialog acts in asyn- chronous conversations. In Proceedings of the 22nd International Joint Conference on Artificial Intelli- gence, pages 1807-1813.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dialogue act recognition with bayesian networks for dutch dialogues",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Rieks",
"middle": [],
"last": "op den Akker",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Nijholt",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 3rd SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Keizer, Rieks op den Akker, and Anton Nijholt. 2002. Dialogue act recognition with bayesian net- works for dutch dialogues. In Proceedings of the 3rd SIGdial Workshop on Discourse and Dialogue, pages 88-94. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adapting discriminative reranking to grounded language learning",
"authors": [
{
"first": "Joohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joohyun Kim and Raymond J. Mooney. 2013. Adapting discriminative reranking to grounded language learn- ing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL- 2013), Sofia, Bulgaria.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Classifying dialogue acts in 1-to-1 live chats",
"authors": [
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "862--871",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Nam Kim, Lawrence Cavedon, and Timothy Baldwin. 2010. Classifying dialogue acts in 1-to-1 live chats. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 862- 871.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Classifying dialogue acts in multi-party live chats",
"authors": [
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "26th Pacific Asia Conference on Language,Information and Computation",
"volume": "",
"issue": "",
"pages": "463--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Nam Kim, Lawrence Cavedon, and Timothy Bald- win. 2012. Classifying dialogue acts in multi-party live chats. In 26th Pacific Asia Conference on Lan- guage,Information and Computation, pages 463-472.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguis- tics, pages 423-430.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Course in Game Theory",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Rubinstein",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Osborne and Ariel Rubinstein. 1994. A Course in Game Theory. MIT Press.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference",
"authors": [
{
"first": "Judea",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in Intelli- gent Systems: Networks of Plausible Inference. Mor- gan Kauffmann.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Foundations of Statistics",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Savage",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard Savage. 1954. The Foundations of Statistics. John Wiley.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Opponent modeling in real-time strategy games",
"authors": [
{
"first": "Frederik",
"middle": [],
"last": "Schadd",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Bakkes",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Spronck",
"suffix": ""
}
],
"year": 2007,
"venue": "Games and Simulation GAMEON",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederik Schadd, Sander Bakkes, and Pieter Spronck. 2007. Opponent modeling in real-time strategy games. In Games and Simulation GAMEON, pages 61-68.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multiagent Systems: Algorithmic, Game-Theoretic and Logical Foundations",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leyton-Brown",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Shoham and Kevin Leyton-Brown. 2009. Multia- gent Systems: Algorithmic, Game-Theoretic and Logi- cal Foundations. Cambridge University Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An artificial discourse language for collaborative negotiation",
"authors": [
{
"first": "Candace",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1994,
"venue": "AAAI",
"volume": "1",
"issue": "",
"pages": "814--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Candace Sidner. 1994. An artificial discourse language for collaborative negotiation. In AAAI, volume 1, pages 814-819. MIT Press, Cambridge.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [
"V"
],
"last": "Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Tay- lor, Rachel Martin, Carol V. Ess-dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. In Computational Linguistics, pages 26:339-373.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Posting act tagging using transformation-based learning",
"authors": [
{
"first": "Tianhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "Todd",
"middle": [
"A"
],
"last": "Fisher",
"suffix": ""
},
{
"first": "Lori",
"middle": [
"A"
],
"last": "Shuler",
"suffix": ""
},
{
"first": "William",
"middle": [
"M"
],
"last": "Pottenger",
"suffix": ""
}
],
"year": 2002,
"venue": "Foundations of Data Mining and knowledge Discovery",
"volume": "",
"issue": "",
"pages": "319--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianhao Wu, Faisal M. Khan, Todd A. Fisher, Lori A. Shuler, and William M. Pottenger. 2002. Posting act tagging using transformation-based learning. In Foundations of Data Mining and knowledge Discov- ery, pages 319-331. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "(inactive) CPT(R,I,<clay,?>) = Rcv Rcv CPT(R,A,<clay,?>) = Rcv Rcv CPT(R,K,<clay,?>) = Rcv Rcv CPT(R,All,<ore,?>) = Rcv(R,I,<clay,?>) \u2227 Rcv(R,A, <clay,?>) \u2227 Rcv(R,K,<clay,?>): Rcv Rcv",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Example of an annotated negotiation dialogue.",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Results for dialogue act classification.",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Results for resource type classification.",
"type_str": "table"
},
"TABREF7": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Results for trade prediction. TP, FP, FN, TN and WP are the True and False Positives, False and True Negatives and Wrong Positives.",
"type_str": "table"
},
"TABREF9": {
"num": null,
"html": null,
"content": "<table><tr><td>6 Related Work</td></tr><tr><td>6.1 Dialogue act modeling</td></tr><tr><td>Most work on dialogue act modeling focuses on spo-</td></tr><tr><td>ken dialogue</td></tr></table>",
"text": "Results for the end to end trade prediction.",
"type_str": "table"
}
}
}
}